minio, velero and such

2025-08-27 01:46:41 +02:00
parent 4265121e6e
commit 7e429dd17a
26 changed files with 993 additions and 0 deletions

minio/README-UPDATE.md

@@ -0,0 +1,9 @@
Services updated (site-a and site-b):
- Selector reduced to `app: minio`
- Numeric targetPort 9000/9001
Apply with:
kubectl apply -f site-a/service.yaml
kubectl apply -f site-b/service.yaml
Then verify:
kubectl -n minio-site-a get endpoints minio -o wide
kubectl -n minio-site-b get endpoints minio -o wide
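Reducing the selector to a single `app: minio` label works because a Service matches any Pod whose labels are a superset of the selector. A minimal sketch of that matching rule (illustrative only; the extra Pod labels are made up):

```python
def selector_matches(selector, pod_labels):
    # Kubernetes equality-based selectors: every selector key/value must be
    # present in the Pod's labels; extra Pod labels do not matter.
    return all(pod_labels.get(k) == v for k, v in selector.items())

pod = {"app": "minio", "pod-template-hash": "abc123"}
print(selector_matches({"app": "minio"}, pod))               # → True
print(selector_matches({"app": "minio", "tier": "db"}, pod)) # → False
```

This is why dropping extra keys from the selector can only widen the set of matched Pods, never shrink it.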

minio/site-a/service.yaml

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: minio
namespace: minio-site-a
spec:
type: ClusterIP
selector:
app: minio
ports:
- name: api
port: 9000
targetPort: 9000
protocol: TCP
- name: console
port: 9001
targetPort: 9001
protocol: TCP

minio/site-b/service.yaml

@@ -0,0 +1,18 @@
apiVersion: v1
kind: Service
metadata:
name: minio
namespace: minio-site-b
spec:
type: ClusterIP
selector:
app: minio
ports:
- name: api
port: 9000
targetPort: 9000
protocol: TCP
- name: console
port: 9001
targetPort: 9001
protocol: TCP


@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/


@@ -0,0 +1,27 @@
annotations:
artifacthub.io/images: |
- name: csi-driver
image: {{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}
apiVersion: v2
appVersion: 1.10.0
description: A dynamic persistent volume (PV) provisioner for Seagate Exos X storage
systems.
home: https://github.com/Seagate/seagate-exos-x-csi
keywords:
- storage
- iscsi
- fc
- sas
- plugin
- csi
maintainers:
- email: css-host-software@seagate.com
name: Seagate
url: https://github.com/Seagate
- email: joseph.skazinski@seagate.com
name: Joe Skazinski
name: seagate-exos-x-csi
sources:
- https://github.com/Seagate/seagate-exos-x-csi/tree/main/helm/csi-charts
type: application
version: 1.10.0


@@ -0,0 +1,59 @@
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.description" . }}
{{ template "chart.badgesSection" . }}
[![Artifact HUB](https://img.shields.io/endpoint?url=https://artifacthub.io/badge/repository/Seagate)](https://artifacthub.io/packages/search?repo=Seagate)
# Introduction
As of version `1.0.0`, this `csi` driver and the associated helm charts are released as open-source projects under the Apache 2.0 license.
Your contribution is most welcome!
{{ template "chart.homepageLine" . }}
## This helm chart
This chart is part of the project and is published on [Seagate](https://seagate.io)'s charts repository.
{{ template "chart.sourcesSection" . }}
# Installing the Chart
Create a file named `{{ template "chart.name" . }}.values.yaml` with your values, with the help of [Chart Values](#values).
Add our Charts repository:
```
$ helm repo add seagate https://charts.seagate.io
```
Install the {{ template "chart.name" . }} with release name `{{ template "chart.name" . }}` in the `seagate-exos-x-csi-system` namespace:
```
$ helm install -n seagate-exos-x-csi-system {{ template "chart.name" . }} seagate/{{ template "chart.name" . }} --values {{ template "chart.name" . }}.values.yaml
```
The `upgrade` command is used to change configuration when values are modified:
```
$ helm upgrade -n seagate-exos-x-csi-system {{ template "chart.name" . }} seagate/{{ template "chart.name" . }} --values {{ template "chart.name" . }}.values.yaml
```
# Upgrading the Chart
Update Helm repositories:
```
$ helm repo update
```
Upgrade the release named `{{ template "chart.name" . }}` to the latest version:
```
$ helm upgrade {{ template "chart.name" . }} seagate/{{ template "chart.name" . }}
```
# Creating a storage class
In order to dynamically provision persistent volumes, you first need to create a storage class. To do so, please refer to the project [documentation](https://github.com/Seagate/seagate-exos-x-csi).
{{ template "chart.maintainersSection" . }}
{{ template "chart.requirementsSection" . }}
{{ template "chart.valuesSection" . }}


@@ -0,0 +1,5 @@
Thank you for using Seagate Exos X provisioner. It will be up and running shortly.
Run 'kubectl get pods' to verify that the new pods have a 'STATUS' of 'Running'.
In order to dynamically provision a persistent volume, create a storage class first.
Please refer to this example to do so: https://github.com/Seagate/seagate-exos-x-csi/blob/main/example/storage-class.yaml


@@ -0,0 +1,10 @@
{{- define "csidriver.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name | kebabcase }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}
{{- define "csidriver.extraArgs" -}}
{{- range .extraArgs }}
- {{ toYaml . }}
{{- end }}
{{- end -}}
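The `csidriver.extraArgs` helper turns each element of an `extraArgs` list into a YAML list item via `range` and `toYaml`. A rough Python sketch of the equivalent transformation (an approximation only; the real `toYaml` also quotes and serializes non-string values):

```python
def render_extra_args(extra_args):
    # Each element becomes one "- <value>" YAML list item,
    # mirroring the range/toYaml pair in the template.
    return ["- " + str(arg) for arg in extra_args]

print("\n".join(render_extra_args(["-v=0", "--timeout=60s"])))
```

The caller then pipes the result through `indent` to line it up under the container's `args:` key.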


@@ -0,0 +1,126 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: seagate-exos-x-csi-node-server
labels:
app.kubernetes.io/version: {{ .Chart.Version }}
app.kubernetes.io/component: dynamic-provisionning-node
{{ include "csidriver.labels" . | indent 4 }}
spec:
selector:
matchLabels:
name: seagate-exos-x-csi-node-server
{{ include "csidriver.labels" . | indent 6 }}
template:
metadata:
labels:
name: seagate-exos-x-csi-node-server
{{ include "csidriver.labels" . | indent 8 }}
spec:
hostNetwork: true
hostIPC: true
{{ if .Values.pspAdmissionControllerEnabled }}serviceAccount: csi-node-registrar{{ end }}
{{- if .Values.nodeServer.nodeAffinity }}
affinity:
nodeAffinity:
{{ toYaml .Values.nodeServer.nodeAffinity | indent 10 }}
{{- end }}
{{- if .Values.nodeServer.nodeSelector }}
nodeSelector:
{{ toYaml .Values.nodeServer.nodeSelector | indent 8 }}
{{- end }}
containers:
- name: seagate-exos-x-csi-node
image: {{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}
command:
- seagate-exos-x-csi-node
- -bind=unix://{{ .Values.kubeletPath }}/plugins/csi-exos-x.seagate.com/csi.sock
- -chroot=/host
{{- include "csidriver.extraArgs" .Values.node | indent 10 }}
env:
- name: CSI_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_NODE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: CSI_NODE_SERVICE_PORT
value: "978"
securityContext:
privileged: true
volumeMounts:
- name: plugin-dir
mountPath: {{ .Values.kubeletPath }}/plugins/csi-exos-x.seagate.com
- name: mountpoint-dir
mountPath: {{ .Values.kubeletPath }}/pods
mountPropagation: Bidirectional
- name: san-iscsi-csi-run-dir
mountPath: /var/run/csi-exos-x.seagate.com
- name: device-dir
mountPath: /dev
- name: iscsi-dir
mountPath: /etc/iscsi
- name: host
mountPath: /host
mountPropagation: Bidirectional
ports:
- containerPort: 9808
name: healthz
protocol: TCP
- containerPort: 9842
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: healthz
periodSeconds: 60
- name: liveness-probe
image: {{.Values.nodeLivenessProbe.image.repository }}:{{ .Values.nodeLivenessProbe.image.tag }}
args:
- --csi-address=/csi/csi.sock
{{- include "csidriver.extraArgs" .Values.nodeLivenessProbe | indent 10 }}
volumeMounts:
- name: plugin-dir
mountPath: /csi
- name: driver-registrar
image: {{ .Values.csiNodeRegistrar.image.repository }}:{{ .Values.csiNodeRegistrar.image.tag }}
args:
- --csi-address=/csi/csi.sock
- --kubelet-registration-path={{ .Values.kubeletPath }}/plugins/csi-exos-x.seagate.com/csi.sock
{{- include "csidriver.extraArgs" .Values.csiNodeRegistrar | indent 10 }}
volumeMounts:
- name: plugin-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
volumes:
- name: registration-dir
hostPath:
path: {{ .Values.kubeletPath }}/plugins_registry/
- name: mountpoint-dir
hostPath:
path: {{ .Values.kubeletPath }}/pods
- name: plugin-dir
hostPath:
path: {{ .Values.kubeletPath }}/plugins/csi-exos-x.seagate.com
type: DirectoryOrCreate
- name: iscsi-dir
hostPath:
path: /etc/iscsi
- name: device-dir
hostPath:
path: /dev
- name: san-iscsi-csi-run-dir
hostPath:
path: /var/run/csi-exos-x.seagate.com
- name: host
hostPath:
path: /


@@ -0,0 +1,94 @@
kind: Deployment
apiVersion: apps/v1
metadata:
name: seagate-exos-x-csi-controller-server
labels:
app.kubernetes.io/version: {{ .Chart.Version }}
app.kubernetes.io/component: dynamic-provisionning-controller
{{ include "csidriver.labels" . | indent 4 }}
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: seagate-exos-x-csi-controller-server
{{ include "csidriver.labels" . | indent 6 }}
template:
metadata:
labels:
app: seagate-exos-x-csi-controller-server
{{ include "csidriver.labels" . | indent 8 }}
spec:
serviceAccount: csi-provisioner
containers:
- name: seagate-exos-x-csi-controller
image: {{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}
command:
- seagate-exos-x-csi-controller
- -bind=unix:///csi/csi.sock
{{- include "csidriver.extraArgs" .Values.controller | indent 10 }}
env:
- name: CSI_NODE_SERVICE_PORT
value: "978"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-run-dir
mountPath: /var/run/csi-exos-x.seagate.com
ports:
- containerPort: 9842
name: metrics
protocol: TCP
- name: csi-provisioner
image: {{ .Values.csiProvisioner.image.repository }}:{{ .Values.csiProvisioner.image.tag }}
args:
- --csi-address=/csi/csi.sock
- --worker-threads=1
- --timeout={{ .Values.csiProvisioner.timeout }}
{{- include "csidriver.extraArgs" .Values.csiProvisioner | indent 10 }}
imagePullPolicy: IfNotPresent
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-attacher
image: {{ .Values.csiAttacher.image.repository }}:{{ .Values.csiAttacher.image.tag }}
args:
- --csi-address=/csi/csi.sock
- --worker-threads=1
- --timeout={{ .Values.csiAttacher.timeout }}
{{- include "csidriver.extraArgs" .Values.csiAttacher | indent 10 }}
imagePullPolicy: IfNotPresent
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-resizer
image: {{ .Values.csiResizer.image.repository }}:{{ .Values.csiResizer.image.tag }}
args:
- --csi-address=/csi/csi.sock
{{- include "csidriver.extraArgs" .Values.csiResizer | indent 10 }}
imagePullPolicy: IfNotPresent
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-snapshotter
image: {{ .Values.csiSnapshotter.image.repository }}:{{ .Values.csiSnapshotter.image.tag }}
args:
- --csi-address=/csi/csi.sock
{{- include "csidriver.extraArgs" .Values.csiSnapshotter | indent 10 }}
imagePullPolicy: IfNotPresent
volumeMounts:
- name: socket-dir
mountPath: /csi
{{- if .Values.imagePullSecrets }}
imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
{{- end }}
volumes:
- name: socket-dir
emptyDir:
medium: Memory
- name: csi-run-dir
hostPath:
path: /var/run/csi-exos-x.seagate.com


@@ -0,0 +1,14 @@
{{- if .Values.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: seagate-exos-x-csi-node-exporter
labels:
{{ include "csidriver.labels" . | indent 4 }}
spec:
selector:
matchLabels:
name: seagate-exos-x-csi-node-server
podMetricsEndpoints:
- port: metrics
{{- end }}


@@ -0,0 +1,26 @@
{{- if .Values.pspAdmissionControllerEnabled -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: seagate-exos-x-csi
spec:
privileged: true
hostNetwork: true
hostIPC: true
hostPID: true
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
runAsUser:
rule: RunAsAny
fsGroup:
rule: RunAsAny
hostPorts:
- min: 0
max: 65535
volumes:
- '*'
allowedCapabilities:
- '*'
{{ end }}


@@ -0,0 +1,166 @@
# This YAML file contains all RBAC objects that are necessary to run external
# CSI provisioner.
#
# In production, each CSI driver deployment has to be customized:
# - to avoid conflicts, use non-default namespace and different names
# for non-namespaced entities like the ClusterRole
# - decide whether the deployment replicates the external CSI
# provisioner, in which case leadership election must be enabled;
# this influences the RBAC setup, see below
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-provisioner
labels:
{{ include "csidriver.labels" . | indent 4 }}
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-provisioner-runner-systems
labels:
{{ include "csidriver.labels" . | indent 4 }}
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["persistentvolumeclaims/status"]
verbs: ["update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["list", "watch", "create", "update", "patch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshots"]
verbs: ["get", "list"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents"]
verbs: ["create", "get", "list", "watch", "update", "delete"]
- apiGroups: ["snapshot.storage.k8s.io"]
resources: ["volumesnapshotcontents/status"]
verbs: ["update"]
- apiGroups: ["storage.k8s.io"]
resources: ["csinodes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments/status"]
verbs: ["get", "list", "watch", "update", "patch"]
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-provisioner-role-systems
labels:
{{ include "csidriver.labels" . | indent 4 }}
subjects:
- kind: ServiceAccount
name: csi-provisioner
namespace: {{ .Release.Namespace }}
roleRef:
kind: ClusterRole
name: external-provisioner-runner-systems
apiGroup: rbac.authorization.k8s.io
---
# Provisioner must be able to work with endpoints in current namespace
# if (and only if) leadership election is enabled
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-provisioner-cfg-systems
labels:
{{ include "csidriver.labels" . | indent 4 }}
rules:
# Only one of the following rules for endpoints or leases is required based on
# what is set for `--leader-election-type`. Endpoints are deprecated in favor of Leases.
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
{{ if .Values.pspAdmissionControllerEnabled }}
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames:
- seagate-exos-x-csi
{{ end }}
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-provisioner-role-cfg-systems
labels:
{{ include "csidriver.labels" . | indent 4 }}
subjects:
- kind: ServiceAccount
name: csi-provisioner
roleRef:
kind: Role
name: external-provisioner-cfg-systems
apiGroup: rbac.authorization.k8s.io
{{ if .Values.pspAdmissionControllerEnabled }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-node-registrar
labels:
{{ include "csidriver.labels" . | indent 4 }}
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-node-registrar-cfg-systems
labels:
{{ include "csidriver.labels" . | indent 4 }}
rules:
- apiGroups: ["policy"]
resources: ["podsecuritypolicies"]
verbs: ["use"]
resourceNames:
- systems-role
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-node-registrar-role-cfg-systems
labels:
{{ include "csidriver.labels" . | indent 4 }}
subjects:
- kind: ServiceAccount
name: csi-node-registrar
roleRef:
kind: Role
name: csi-node-registrar-cfg-systems
apiGroup: rbac.authorization.k8s.io
{{ end }}


@@ -0,0 +1,31 @@
{{- if .Values.serviceMonitor.enabled }}
apiVersion: v1
kind: Service
metadata:
name: systems-controller-metrics
labels:
name: systems-controller-metrics
{{ include "csidriver.labels" . | indent 4 }}
spec:
ports:
- name: metrics
port: 9842
targetPort: metrics
protocol: TCP
selector:
app: seagate-exos-x-csi-controller-server
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: seagate-exos-x-csi-controller-exporter
labels:
{{ include "csidriver.labels" . | indent 4 }}
spec:
selector:
matchLabels:
name: systems-controller-metrics
endpoints:
- port: metrics
interval: 1s
{{- end }}


@@ -0,0 +1,83 @@
# Default values CSI Driver.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# -- Path to kubelet
kubeletPath: /var/lib/kubelet
# -- Whether the PSP admission controller is enabled in the cluster
pspAdmissionControllerEnabled: false
image:
# -- Docker repository to use for nodes and controller
repository: ghcr.io/seagate/seagate-exos-x-csi
# -- Tag to use for nodes and controller
# @default -- Uses Chart.appVersion value by default if tag does not specify a new version.
tag: "v1.10.0"
# -- Default is IfNotPresent; set to Always to always pull the specified version
pullPolicy: Always
# -- Controller sidecar for provisioning
# AKA external-provisioner
csiProvisioner:
image:
repository: registry.k8s.io/sig-storage/csi-provisioner
tag: v5.0.1
# -- Timeout for gRPC calls from the csi-provisioner to the controller
timeout: 60s
# -- Extra arguments for csi-provisioner controller sidecar
extraArgs: []
# -- Controller sidecar for attachment handling
csiAttacher:
image:
repository: registry.k8s.io/sig-storage/csi-attacher
tag: v4.6.1
# -- Timeout for gRPC calls from the csi-attacher to the controller
timeout: 60s
# -- Extra arguments for csi-attacher controller sidecar
extraArgs: []
# -- Controller sidecar for volume expansion
csiResizer:
image:
repository: registry.k8s.io/sig-storage/csi-resizer
tag: v1.11.1
# -- Extra arguments for csi-resizer controller sidecar
extraArgs: []
# -- Controller sidecar for snapshots handling
csiSnapshotter:
image:
repository: registry.k8s.io/sig-storage/csi-snapshotter
tag: v8.0.1
# -- Extra arguments for csi-snapshotter controller sidecar
extraArgs: []
# -- Node sidecar for plugin registration
csiNodeRegistrar:
image:
repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
tag: v2.9.0
# -- Extra arguments for csi-node-registrar node sidecar
extraArgs: []
controller:
# -- Extra arguments for seagate-exos-x-csi-controller container
extraArgs: [-v=0]
node:
# -- Extra arguments for seagate-exos-x-csi-node containers
extraArgs: [-v=0]
multipathd:
# -- Extra arguments for multipathd containers
extraArgs: []
# -- Container that converts the CSI liveness probe to a Kubernetes liveness/readiness probe
nodeLivenessProbe:
image:
repository: registry.k8s.io/sig-storage/livenessprobe
tag: v2.12.0
# -- Extra arguments for the node's liveness probe containers
extraArgs: []
nodeServer:
# -- Kubernetes nodeSelector field for seagate-exos-x-csi-node-server Pod
nodeSelector:
# -- Kubernetes nodeAffinity field for seagate-exos-x-csi-node-server Pod
nodeAffinity:
podMonitor:
# -- Set a Prometheus operator PodMonitor resource (true or false)
enabled: false
serviceMonitor:
# -- Set a Prometheus operator ServiceMonitor resource (true or false)
enabled: false

velero/README.md

@@ -0,0 +1,32 @@
# Velero + MinIO (c2et.net)
This package contains:
- `namespace.yaml`
- Credential Secrets (`cloud-credentials-site-a`, `cloud-credentials-site-b`)
- BackupStorageLocations (BSL) as YAML: `default` (site-a) and `site-b`
- Example `Schedule`s (nightly at 02:00 and 02:30)
- Two Helm `values.yaml` files:
  - `helm/values-approach-a.yaml`: creates the default BSL and Secret from Helm
  - `helm/values-approach-b.yaml`: no BSL/Secret; you apply them yourself as YAML (GitOps)
- `ServiceMonitor` (if you use Prometheus Operator)
- Grafana dashboard (JSON)
## Recommended flow (GitOps, Approach B)
```bash
# 1) Install Velero via Helm, without BSLs or secrets
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm upgrade --install velero vmware-tanzu/velero -n velero --create-namespace -f helm/values-approach-b.yaml
# 2) Apply Secrets, BSLs and Schedules
kubectl apply -f namespace.yaml
kubectl apply -f secrets/secret-site-a.yaml -f secrets/secret-site-b.yaml
kubectl apply -f bsl/bsl-default-site-a.yaml -f bsl/bsl-site-b.yaml
kubectl apply -f schedules/schedules.yaml
```
## Notes
- MinIO requires `s3ForcePathStyle=true`.
- If you use a private CA, add `spec.config.caCert` to the BSLs.
- `ServiceMonitor` requires Prometheus Operator; adjust `metadata.labels.release` to the value your Prometheus uses.
- Import the dashboard JSON into Grafana (datasource `prometheus`).
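On the `s3ForcePathStyle=true` note: MinIO expects path-style URLs (`https://host/bucket/key`) rather than AWS's default virtual-hosted style (`https://bucket.host/key`). A quick sketch of the difference, using the site-a endpoint from the BSLs (the object key is made up):

```python
def s3_object_url(endpoint, bucket, key, path_style=True):
    # Path-style keeps the bucket in the URL path (what MinIO serves);
    # virtual-hosted style prefixes the bucket to the hostname.
    scheme, host = endpoint.split("://", 1)
    if path_style:
        return f"{scheme}://{host}/{bucket}/{key}"
    return f"{scheme}://{bucket}.{host}/{key}"

print(s3_object_url("https://s3-a.c2et.net", "velero", "backups/nightly-a.tar.gz"))
# → https://s3-a.c2et.net/velero/backups/nightly-a.tar.gz
```

Without the flag, the AWS SDK would try to resolve `velero.s3-a.c2et.net`, which a single-host MinIO deployment typically does not serve.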


@@ -0,0 +1,16 @@
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
name: default
namespace: velero
spec:
provider: aws
objectStorage:
bucket: velero
config:
region: minio
s3Url: https://s3-a.c2et.net
s3ForcePathStyle: "true"
credential:
name: cloud-credentials-site-a
key: cloud


@@ -0,0 +1,16 @@
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
name: site-b
namespace: velero
spec:
provider: aws
objectStorage:
bucket: velero
config:
region: minio
s3Url: https://s3-b.c2et.net
s3ForcePathStyle: "true"
credential:
name: cloud-credentials-site-b
key: cloud


@@ -0,0 +1,36 @@
credentials:
useSecret: true
existingSecret: ""
secretContents:
cloud: |
[default]
aws_access_key_id=velero-a
aws_secret_access_key=Clave-Velero-A
configuration:
features: EnableCSI
backupStorageLocation:
- name: default
provider: aws
bucket: velero
config:
region: minio
s3Url: https://s3-a.c2et.net
s3ForcePathStyle: "true"
initContainers:
- name: velero-plugin-for-aws
image: velero/velero-plugin-for-aws:v1.9.0
imagePullPolicy: IfNotPresent
volumeMounts:
- name: plugins
mountPath: /target
- name: velero-plugin-for-csi
image: velero/velero-plugin-for-csi:v0.7.0
imagePullPolicy: IfNotPresent
volumeMounts:
- name: plugins
mountPath: /target
nodeAgent:
enabled: true


@@ -0,0 +1,23 @@
credentials:
  useSecret: false # you apply Secrets and BSLs yourself as YAML
configuration:
features: EnableCSI
  backupStorageLocation: [] # none managed by Helm
initContainers:
- name: velero-plugin-for-aws
image: velero/velero-plugin-for-aws:v1.9.0
imagePullPolicy: IfNotPresent
volumeMounts:
- name: plugins
mountPath: /target
- name: velero-plugin-for-csi
image: velero/velero-plugin-for-csi:v0.7.0
imagePullPolicy: IfNotPresent
volumeMounts:
- name: plugins
mountPath: /target
nodeAgent:
enabled: true


@@ -0,0 +1,92 @@
{
"annotations": {
"list": []
},
"editable": true,
"gnetId": null,
"graphTooltip": 0,
"panels": [
{
"type": "stat",
"title": "Backups - Total",
"targets": [
{
"expr": "sum(velero_backup_total)",
"legendFormat": "total"
}
],
"id": 1,
"datasource": {
"type": "prometheus",
"uid": "prometheus"
},
"options": {
"reduceOptions": {
"calcs": [
"lastNotNull"
]
}
}
},
{
"type": "timeSeries",
"title": "Backups by phase",
"targets": [
{
"expr": "sum by (phase) (increase(velero_backup_attempt_total[1h]))",
"legendFormat": "{{phase}}"
}
],
"id": 2,
"datasource": {
"type": "prometheus",
"uid": "prometheus"
}
},
{
"type": "timeSeries",
"title": "Backup duration (p95)",
"targets": [
{
"expr": "histogram_quantile(0.95, sum(rate(velero_backup_duration_seconds_bucket[5m])) by (le))",
"legendFormat": "p95"
}
],
"id": 3,
"datasource": {
"type": "prometheus",
"uid": "prometheus"
}
},
{
"type": "timeSeries",
"title": "node-agent errors",
"targets": [
{
"expr": "sum(rate(velero_node_agent_errors_total[5m]))",
"legendFormat": "errors"
}
],
"id": 4,
"datasource": {
"type": "prometheus",
"uid": "prometheus"
}
}
],
"schemaVersion": 37,
"style": "dark",
"tags": [
"velero",
"backup"
],
"templating": {
"list": []
},
"time": {
"from": "now-24h",
"to": "now"
},
"title": "Velero (MinIO S3)",
"version": 1
}


@@ -0,0 +1,16 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: velero
namespace: velero
labels:
release: prometheus # adjust to match your Prometheus selector
spec:
selector:
matchLabels:
app.kubernetes.io/name: velero
namespaceSelector:
matchNames: ["velero"]
endpoints:
- port: metrics
interval: 30s

velero/namespace.yaml

@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
name: velero


@@ -0,0 +1,27 @@
apiVersion: velero.io/v1
kind: Schedule
metadata:
name: nightly-a
namespace: velero
spec:
schedule: "0 2 * * *"
template:
ttl: 168h
includedNamespaces:
- gitea
- apolo
storageLocation: default
---
apiVersion: velero.io/v1
kind: Schedule
metadata:
name: nightly-b
namespace: velero
spec:
schedule: "30 2 * * *"
template:
ttl: 168h
includedNamespaces:
- gitea
- apolo
storageLocation: site-b
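The `schedule` fields above are standard five-field cron expressions (minute hour day-of-month month day-of-week), and `ttl: 168h` keeps each backup for 7 days. A small sketch that unpacks those values:

```python
from datetime import timedelta

def parse_cron(expr):
    # Split a five-field cron expression into named fields.
    minute, hour, dom, month, dow = expr.split()
    return {"minute": minute, "hour": hour, "dom": dom, "month": month, "dow": dow}

nightly_a = parse_cron("0 2 * * *")    # 02:00 every day
nightly_b = parse_cron("30 2 * * *")   # 02:30 every day
ttl = timedelta(hours=168)             # retention window

print(nightly_a["hour"], nightly_b["minute"], ttl.days)  # → 2 30 7
```

Staggering the two schedules by 30 minutes keeps the backups to the two sites from running concurrently.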


@@ -0,0 +1,11 @@
apiVersion: v1
kind: Secret
metadata:
name: cloud-credentials-site-a
namespace: velero
type: Opaque
stringData:
cloud: |
[default]
aws_access_key_id=velero-a
aws_secret_access_key=Pozuelo12345


@@ -0,0 +1,11 @@
apiVersion: v1
kind: Secret
metadata:
name: cloud-credentials-site-b
namespace: velero
type: Opaque
stringData:
cloud: |
[default]
aws_access_key_id=velero-b
aws_secret_access_key=Pozuelo12345