Compare commits
10 commits (3b939d4b7c ... bda9a3be17):

- bda9a3be17
- ac5f10b281
- 3d41b2cda4
- 6bdc7e0e30
- 7e429dd17a
- 4265121e6e
- 33aced03a0
- fc3640f5e6
- a321cc7928
- 4c39e29748
```diff
@@ -7,10 +7,11 @@ metadata:
     app.kubernetes.io/name: apolo-mediamtx
     app.kubernetes.io/part-of: apolo
     app.kubernetes.io/component: media
   annotations:
     metallb.universe.tf/allow-shared-ip: streaming
 spec:
   type: LoadBalancer
   loadBalancerIP: 192.168.200.12
   externalTrafficPolicy: Local
   selector:
     app.kubernetes.io/name: apolo-mediamtx
   ports:
```
```diff
@@ -7,6 +7,8 @@ metadata:
     app.kubernetes.io/name: apolo-streamer
     app.kubernetes.io/part-of: apolo
     app.kubernetes.io/component: streamer
   annotations:
     metallb.universe.tf/allow-shared-ip: streaming
 spec:
   type: LoadBalancer
   loadBalancerIP: 192.168.200.12
```
`minio/readme.md` (new file, 78 lines)
# MinIO on Kubernetes — c2et.net (Site A/B)

This package contains Helm-free manifests to deploy **two independent MinIO instances**,
one per site, using your StorageClasses `sc-me5-site-a` and `sc-me5-site-b` and forcing per-zone scheduling.

## Structure

```
minio-k8s-c2et-net/
  site-a/
    namespace.yaml
    secret-root.yaml
    pvc.yaml
    statefulset.yaml
    service.yaml
    ingress-api.yaml
    ingress-console.yaml
  site-b/
    (identical, with Site B values)
```

## Administration credentials
- User: **admin**
- Password: **Pozuelo12345**

> Change these credentials in `secret-root.yaml` before going to production.

## Domains
- Site A API: `s3-a.c2et.net`
- Site A Console: `console.s3-a.c2et.net`
- Site B API: `s3-b.c2et.net`
- Site B Console: `console.s3-b.c2et.net`

Prerequisites:
- A working `nginx` IngressClass.
- `cert-manager` with a `ClusterIssuer` named `letsencrypt-prod`.
- DNS pointing the hosts above at the Ingress Controller.

## Quick deployment

```bash
kubectl apply -f site-a/namespace.yaml
kubectl apply -f site-a/secret-root.yaml
kubectl apply -f site-a/pvc.yaml
kubectl apply -f site-a/service.yaml
kubectl apply -f site-a/statefulset.yaml
kubectl apply -f site-a/ingress-api.yaml
kubectl apply -f site-a/ingress-console.yaml

kubectl apply -f site-b/namespace.yaml
kubectl apply -f site-b/secret-root.yaml
kubectl apply -f site-b/pvc.yaml
kubectl apply -f site-b/service.yaml
kubectl apply -f site-b/statefulset.yaml
kubectl apply -f site-b/ingress-api.yaml
kubectl apply -f site-b/ingress-console.yaml
```
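The fourteen `kubectl apply` calls above can also be generated with a small loop. This sketch only prints the commands (pipe it to `sh` to actually run them) and assumes the exact file layout shown in the structure section:

```shell
# Print the apply commands for both sites, preserving the dependency order
# used above (namespace first, ingresses last).
for site in site-a site-b; do
  for f in namespace secret-root pvc service statefulset ingress-api ingress-console; do
    echo "kubectl apply -f $site/$f.yaml"
  done
done
```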
## Testing

```bash
export AWS_ACCESS_KEY_ID=admin
export AWS_SECRET_ACCESS_KEY='Pozuelo12345'
export AWS_S3_FORCE_PATH_STYLE=true

aws --endpoint-url https://s3-a.c2et.net s3 mb s3://mi-bucket-a
aws --endpoint-url https://s3-a.c2et.net s3 ls

aws --endpoint-url https://s3-b.c2et.net s3 mb s3://mi-bucket-b
aws --endpoint-url https://s3-b.c2et.net s3 ls
```
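As an alternative to passing `--endpoint-url` on every call, recent AWS CLI v2 releases let you pin the endpoint per profile. The profile names below are illustrative, and you should verify your CLI version supports `endpoint_url` in config files:

```ini
# ~/.aws/config (sketch — profile names are made up)
[profile minio-site-a]
endpoint_url = https://s3-a.c2et.net

[profile minio-site-b]
endpoint_url = https://s3-b.c2et.net
```

With this in place, `aws --profile minio-site-a s3 ls` targets the Site A instance directly.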
## Notes

- The PVCs use `WaitForFirstConsumer` through your StorageClasses; the StatefulSet's `nodeSelector` guarantees
  that each volume is created in the correct **site**.
- MinIO image: `quay.io/minio/minio:RELEASE.2025-02-20T00-00-00Z` (adjust it to the release you certify).
- Default PVC size: `2Ti` (change it to suit your needs).
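Since these MinIO instances back Velero (see the main readme), a `BackupStorageLocation` pointing at Site A could look like the following sketch. The bucket name, resource name and `velero` namespace are assumptions, not taken from this repo:

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: site-a            # hypothetical name
  namespace: velero       # assumes Velero's default namespace
spec:
  provider: aws           # MinIO is S3-compatible, served via the AWS object-store plugin
  objectStorage:
    bucket: velero        # hypothetical bucket
  config:
    region: minio
    s3ForcePathStyle: "true"
    s3Url: https://s3-a.c2et.net
```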
`minio/site-a/service.yaml` (new file, 18 lines)
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: minio-site-a
spec:
  type: ClusterIP
  selector:
    app: minio
  ports:
    - name: api
      port: 9000
      targetPort: 9000
      protocol: TCP
    - name: console
      port: 9001
      targetPort: 9001
      protocol: TCP
```
`minio/site-b/service.yaml` (new file, 18 lines)
```yaml
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: minio-site-b
spec:
  type: ClusterIP
  selector:
    app: minio
  ports:
    - name: api
      port: 9000
      targetPort: 9000
      protocol: TCP
    - name: console
      port: 9001
      targetPort: 9001
      protocol: TCP
```
`readme.md` (modified)

@@ -32,6 +32,9 @@ This repository contains the **manifests, scripts and documentation** to deploy…
* **Multus** ➝ allows several network interfaces per pod (**NAD**).

### 2.3. Storage
* **CSI against DELL PowerVault**:
  * One array per site.
  * Requires a per-zone nodeSelector (to do it properly).

* **Distributed Ceph**:
@@ -104,17 +107,25 @@
* **Dashboard** to manage them.
* **iso-server** ➝ serves ISOs over HTTPS (uploaded via Samba).

### 5.3. Backups

* **Velero**

  * Mounted on two **S3 stores** (MinIO), one per **SITE**
  * Each store on a storage array **(CSI driver)**

---

## 6. 📚 Document index and cross-references

| Document | Description | Reference |
| --------------------------- | --------------------------------------------- | ---------------------------------- |
| `estructura_manifiestos.md` | Explanation of the manifest structure | [See](estructura_manifiestos.md) |
| `cluster_init.md` | Cluster initialization process on SUSE | [See](cluster_init.md) |
| `redes_internet.md` | MetalLB, Multus and related topics | [See](redes_internet.md) |
| `ingress.md` | cert-manager and ingress chapter | [See](ingress.md) |
| `cephrook.md` | Ceph/Rook installation and integration | [See](./cephrook.md) |
| `rook/readme.md` | Ceph/Rook installation and integration | [See](./rook/readme.md) |
| `seagate/readme.md` | CSI driver installation for the DELL SAN | [See](./seagate/readme.md) |
| `kubevirt/readme.md` | KubeVirt deployment and VM management | [See](./kubevirt/readme.md) |
| `vm-windows-demo/readme.md` | Sample virtual machine | [See](./vm-windows-demo/readme.md) |
| `comprobaciones.md` | Checklist after each critical step | [See](./comprobaciones.md) |
@@ -132,28 +143,32 @@
| `mapas/readme.md` | Tileserver-GL installation manual | [See](./mapas/readme.md) |
| `argos/readme.md` | Argos Core installation manual | [See](./argos/readme.md) |
| `multusk3s.md` | Notes on Multus under K3s | [See](./multusk3s.md) |
| `minio/readme.md` | MinIO installation manual for Velero | [See](./minio/readme.md) |
| `velero/readme.md` | Velero installation manual | [See](./velero/readme.md) |

---

## 7. 📊 Current installation status

| Component | Status | Comment | Link | User/Pass |
| ------------------------ | ------------- | -------------------------------------------- | ------ | -------------------- |
| `Arranque Cluster` | ✅ Completed | Basic installation validated | [https://k8s.c2et.net](https://k8s.c2et.net) | kubeconfig |
| `Networking` | ✅ Completed | Multus, flannel and MetalLB tested and validated | - | - |
| `Ingress` | ✅ Completed | Nginx working | - | - |
| `Volumenes persistentes` | ✅ Completed | Rook Ceph on 4 nodes; still to be expanded to 5 nodes | [https://ceph.c2et.net](https://ceph.c2et.net/) | admin / Pozuelo12345 |
| `Volumenes persistentes` | ✅ Completed | Driver for the DELL PowerVault storage arrays | | |
| `Maquinas Virtuales` | ✅ Completed | KubeVirt, dashboard and iso-server deployed | [https://kubevirt.c2et.net](https://kubevirt.c2et.net/) <br>[https://isoserver.c2et.net](https://isoserver.c2et.net/) | - |
| `Wireguard` | ✅ Completed | Working | [https://wireguard.c2et.net](https://wireguard.c2et.net/) | Pozuelo12345 |
| `CoreDNS` | ✅ Completed | Working | | |
| `Apolo` | ✅ Completed | Working | [https://portal.apolo.c2et.net](https://portal.apolo.c2et.net/) | admin / 123456 |
| `Gitea` | ✅ Completed | Working | [https://git.c2et.net](https://git.c2et.net) | |
| `Harbor` | ✅ Completed | Working | [https://harbor.c2et.net](https://harbor.c2et.net) | |
| `Guacamole` | ✅ Completed | Working | [https://heimdall.c2et.net](https://heimdall.c2et.net) | |
| `VSCode` | ✅ Completed | Working | [https://vscode.c2et.net](https://vscode.c2et.net) | Pozuelo12345 |
| `Tileserver-GL` | ✅ Completed | Working | [https://mapas.c2et.net](https://mapas.c2et.net) | |
| `External` | ✅ Completed | Working | [https://admin.firewall.c2et.net](https://admin.firewall.c2et.net) <br>[https://admin.powervault1.c2et.net](https://admin.powervault1.c2et.net) <br>[https://admin.powervault2.c2et.net](https://admin.powervault2.c2et.net) | |
| `Argos Core` | ✅ Completed | Working | [https://argos.panel.c2et.net](https://argos.panel.c2et.net) | |
| `Minio` | ✅ Completed | Working | [https://console.s3-a.c2et.net](https://console.s3-a.c2et.net) <br>[https://console.s3-b.c2et.net](https://console.s3-b.c2et.net) | admin / Pozuelo12345 |
| `Velero` | ✅ Completed | Working | | |

---
@@ -164,7 +179,7 @@
* Two networks: administration and services.
* Security based on **VPN + DNS + ACLs**.
* Ingress with automatic SSL.
* Extra features: external proxy + VMs with KubeVirt + backup.

---
`seagate/csi-exos-x-csidriver.yaml` (new file, 13 lines)
```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi-exos-x.seagate.com
spec:
  attachRequired: true
  podInfoOnMount: false
  requiresRepublish: false
  fsGroupPolicy: File
  seLinuxMount: false
  storageCapacity: false
  volumeLifecycleModes:
    - Persistent
```
`seagate/namespace.yaml` (new file, 6 lines)
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: seagate
  labels:
    app.kubernetes.io/name: seagate
```
`seagate/pod-a.yaml` (new file, 18 lines)
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-a
spec:
  nodeSelector:
    topology.kubernetes.io/zone: site-a
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh","-c","sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-a
```
`seagate/pvc-pod-a.yaml` (new file, 8 lines)
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-a
spec:
  accessModes: ["ReadWriteOnce"]
  resources: { requests: { storage: 10Gi } }
  storageClassName: sc-me5-site-a
```
`seagate/readme.md` (new file, 316 lines)
# Seagate Exos X CSI (ME5 dual-site) — Installation and operations guide

This README documents how I made the installation of the *Seagate Exos X CSI Driver* (which supports ME5) **reproducible** on a Kubernetes cluster with **two arrays / two zones** (site-a and site-b), using iSCSI + multipath and *per-zone topology*.

> **Goal**
>
> * A single driver deployment (Helm).
> * **Two StorageClasses** (one per site) with `allowedTopologies` and separate credentials (Secrets).
> * *WaitForFirstConsumer* so the volume is created in the **same zone** as the pod.
> * Fast iSCSI mounts thanks to properly configured multipath (`greedy` mode).

---
## 1) Node prerequisites

1. **Multipath** and **iSCSI** installed and active.

2. **/etc/multipath.conf** — relevant options used:

```conf
defaults {
    user_friendly_names "no"
    find_multipaths "greedy"
    no_path_retry "queue"
}

devices {
    device {
        vendor "DellEMC"
        product "ME5"
        path_grouping_policy "multibus"
        path_checker "tur"
        prio "alua"
    }
}
```

> **Why `greedy`?**
>
> * `find_multipaths "greedy"` avoids creating maps until there is more than one path **or** the device is clearly multipath, reducing false positives and stabilizing the *udev settle*. It improves discovery times and avoids flapping.

Restart services and refresh paths after changing multipath:

```bash
sudo systemctl restart multipathd
sudo multipath -r
```

3. **Mount propagation (rshared)**

Make sure `/` and `/var/lib/kubelet` are **rshared**, so that mounts made by the plugin inside the *node-server* pod show up on the host:

```bash
sudo mount --make-rshared /

# systemd drop-in for kubelet
sudo install -d /etc/systemd/system/kubelet.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-mount-propagation.conf
[Service]
MountFlags=
ExecStartPre=/bin/mkdir -p /var/lib/kubelet
ExecStartPre=/bin/mount --bind /var/lib/kubelet /var/lib/kubelet
ExecStartPre=/bin/mount --make-rshared /var/lib/kubelet
EOF

sudo systemctl daemon-reload
sudo systemctl restart kubelet
```

Check:

```bash
sudo findmnt -o TARGET,PROPAGATION /
sudo findmnt -o TARGET,PROPAGATION /var/lib/kubelet
```

4. **Topology labels on nodes**

Label each node with its zone:

```bash
kubectl label nodes <nodo-del-site-a> topology.kubernetes.io/zone=site-a --overwrite
kubectl label nodes <nodo-del-site-b> topology.kubernetes.io/zone=site-b --overwrite
```

---
## 2) Deploying the driver with Helm

### 2.1. Namespace and values

```bash
kubectl apply -f namespace.yaml  # namespace: seagate
```

**values.yaml** (summary of what was used):

* Driver image: `ghcr.io/seagate/seagate-exos-x-csi:v1.10.0`
* Sidecars:

  * `csi-provisioner v5.0.1` (60s timeout)
  * `csi-attacher v4.6.1`
  * `csi-resizer v1.11.1`
  * `csi-snapshotter v8.0.1`
  * `csi-node-driver-registrar v2.9.0`
* `controller.extraArgs: ["-v=2"]`
* `node.extraArgs: ["-v=2"]`

> **Note:** there is no need to touch `CSIDriver` for topology; topology is handled through the `StorageClass` objects plus node labels.

### 2.2. Installation

```bash
helm upgrade --install exos-x-csi \
  -n seagate --create-namespace \
  ./seagate-exos-x-csi \
  -f ./values.yaml
```

#### If there are leftovers from a previous installation (RBAC)

If an *invalid ownership metadata* error shows up for `ClusterRole`/`ClusterRoleBinding` resources from a previous release (e.g. `exosx-csi`), delete them:

```bash
kubectl delete clusterrole external-provisioner-runner-systems
kubectl delete clusterrolebinding csi-provisioner-role-systems
# (if there are more, list them by label and delete them)
# kubectl get clusterrole,clusterrolebinding -A -l app.kubernetes.io/instance=<old-release>
```

Retry `helm upgrade --install`.

---
## 3) One Secret per array (A and B)

One `Secret` per site in the `seagate` namespace, with `apiAddress`, `username` and `password` in Base64.

```bash
kubectl apply -f secret-me5-site-a.yaml
kubectl apply -f secret-me5-site-b.yaml
```
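The Base64 values themselves can be produced with `base64`. The credentials below are placeholders, not the real array credentials; `printf '%s'` avoids encoding a trailing newline:

```shell
# Encode placeholder values for the Secret's apiAddress/username/password fields.
printf '%s' 'https://10.0.0.10' | base64   # apiAddress (placeholder)
printf '%s' 'manage' | base64              # username (placeholder)
printf '%s' 'S3cret!' | base64             # password (placeholder)
```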
> **Important:** the `StorageClass` objects must use the **standard CSI keys** so that the provisioner passes the Secret to the driver:
>
> * `csi.storage.k8s.io/provisioner-secret-name|namespace`
> * `csi.storage.k8s.io/controller-publish-secret-name|namespace`
> * `csi.storage.k8s.io/controller-expand-secret-name|namespace`
> * `csi.storage.k8s.io/node-stage-secret-name|namespace` *(if applicable)*
> * `csi.storage.k8s.io/node-publish-secret-name|namespace` *(if applicable)*

The symptom of not using these names is a `missing API credentials` error in the PVC events.

---
## 4) StorageClass per zone (topology)

**Two** `StorageClass` objects are defined, identical except for:

* The Secret (A or B)
* `pool` (e.g. `dg01` for site-a, `dg02` for site-b)
* `volPrefix` (e.g. `sza` / `szb` to identify the site in the LUN name)
* `allowedTopologies` with the corresponding zone
* `volumeBindingMode: WaitForFirstConsumer`

> With WFFC, the PVC is **not** bound until a consuming Pod exists; the scheduler picks a node, and the provisioner creates the volume in the **node's zone**.

Apply both `StorageClass` objects:

```bash
kubectl apply -f sc-me5-site-a.yaml
kubectl apply -f sc-me5-site-b.yaml
```

---
## 5) End-to-end test

### 5.1. PVC + Pod in site-a

* PVC: `pvc-a` with `storageClassName: sc-me5-site-a`
* Pod: `pod-a` with `nodeSelector: topology.kubernetes.io/zone=site-a`

```bash
kubectl apply -f pvc-pod-a.yaml
kubectl apply -f pod-a.yaml
kubectl get pvc,pod
```

You should see the Pod in *Running* and the volume created/mounted on the site-a ME5.

### 5.2. Useful checks

* **iSCSI nodes seen:**

```bash
sudo iscsiadm -m node | sort
```

* **Multipath:**

```bash
sudo multipath -ll
```

* **PVC events:**

```bash
kubectl describe pvc <name>
```

* **Controller logs** (looking for credential / provisioning errors):

```bash
kubectl -n seagate logs deploy/seagate-exos-x-csi-controller-server \
  -c seagate-exos-x-csi-controller | grep -i -E 'cred|secret|error'
```

---
## 6) Measuring *NodePublish* (mount) time

To measure how long the mount phase (*NodePublishVolume*) takes, from the *node-server*:

```bash
kubectl -n seagate logs -l name=seagate-exos-x-csi-node-server \
  -c seagate-exos-x-csi-node --tail=10000 \
  | grep "NodePublishVolume" \
  | grep "ROUTINE END" \
  | sed -E 's/.*NodePublishVolume.*<([^>]*)>.*/\1/'
```

* Values of roughly **< 2 min** mean the mount completes within kubelet's window, avoiding `DeadlineExceeded`.
* If you consistently see ~**4m34s**: the driver is waiting for *dm-name* entries from unreachable portals. Review the topologies, the connectivity, and that only the active zone's portals are tried.

> To validate zone B, launch an analogous Pod/PVC in `site-b` and repeat the grep above on the logs.
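The values extracted by the `sed` above are Go-style durations (`1m30s`, `45s`, …). A small helper — a sketch; adjust the regexes if your driver version logs a different format — converts them to seconds so they can be compared numerically against the ~2 min budget:

```shell
# Convert Go-style durations ("1m30s", "45s") to seconds for comparison.
to_seconds() {
  echo "$1" | awk '{
    m = 0; s = 0
    if (match($0, /[0-9]+m/))  m = substr($0, RSTART, RLENGTH - 1)
    if (match($0, /[0-9.]+s/)) s = substr($0, RSTART, RLENGTH - 1)
    printf "%.1f\n", m * 60 + s
  }'
}

to_seconds "1m30s"   # 90.0
to_seconds "4m34s"   # 274.0
```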
---
## 7) Troubleshooting

* **`missing API credentials` when provisioning**

  * Make sure you use the **CSI keys** in the `parameters:` of the `StorageClass` (see §3).

* **Helm *invalid ownership metadata* errors**

  * Delete the leftover `ClusterRole`/`ClusterRoleBinding` from the old release (see §2.2).

* **`DeadlineExceeded` during mount**

  * Check:

    * `find_multipaths "greedy"` and the rest of the multipath settings from §1.2.
    * The zone labels on the node where the Pod gets scheduled.
    * That the correct `StorageClass` has that zone's `allowedTopologies`.

* **Seeing the effective iSCSI ports/portals**

  * `sudo iscsiadm -m node | sort` shows which targets the node ended up configured with. With topology applied correctly, they should be the ones for the corresponding site.

---
## 8) Cleanup and retries

To repeat the test from scratch (keeping the driver):

```bash
kubectl delete -f pod-a.yaml
kubectl delete -f pvc-pod-a.yaml
```

If you want to remove the *whole driver deployment*:

```bash
helm uninstall exos-x-csi -n seagate
# If RBAC from previous releases remains:
kubectl delete clusterrole external-provisioner-runner-systems || true
kubectl delete clusterrolebinding csi-provisioner-role-systems || true
```

---
## 9) Summary of what stays in the repo (`seagate/` folder)

* `namespace.yaml` — `seagate` namespace.
* `secret-me5-site-a.yaml` / `secret-me5-site-b.yaml` — Per-site credentials.
* `values.yaml` — Helm values used for driver v1.10.0.
* `sc-me5-site-a.yaml` / `sc-me5-site-b.yaml` — StorageClasses with `allowedTopologies`, `pool`, `volPrefix`, CSI Secret keys and `WaitForFirstConsumer`.
* `pvc-pod-a.yaml` + `pod-a.yaml` — Test manifests for `site-a`.
* *(Optional)* `csi-exos-x-csidriver.yaml` — no topology changes needed in this version.

---
## 10) Annexes — Useful commands run

* Multipath/kubelet restarts and mount propagation.
* iSCSI/multipath cleanup (when the test was redone):

```bash
sudo iscsiadm -m node -u || true
sudo iscsiadm -m node -o delete || true
sudo multipath -F || true
sudo multipath -r
```

* Helm deployment + handling of leftover RBAC (see §2.2).
* Sequential application of `namespace`, `secrets`, `StorageClass`, `PVC` and `Pod`.

---

### Result

* **Reproducible**: with this recipe, the volume is created on the array in its zone and the Pod starts.
* **Mount times**: down from ~4m34s to **≈1m30s** (observed), within kubelet's budget.
* **Per-zone isolation**: each StorageClass limits iSCSI portals to its site thanks to `allowedTopologies` + node labels.
`seagate/sc-me5-site-a.yaml` (new file, 22 lines)
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-me5-site-a
provisioner: csi-exos-x.seagate.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/provisioner-secret-name: seagate-me5-site-a
  csi.storage.k8s.io/provisioner-secret-namespace: seagate
  csi.storage.k8s.io/controller-publish-secret-name: seagate-me5-site-a
  csi.storage.k8s.io/controller-publish-secret-namespace: seagate
  csi.storage.k8s.io/controller-expand-secret-name: seagate-me5-site-a
  csi.storage.k8s.io/controller-expand-secret-namespace: seagate
  csi.storage.k8s.io/fstype: ext4
  pool: dg01        # pool on the Site A ME5
  volPrefix: sza    # short prefix identifying Site A
  storageProtocol: iscsi
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values: ["site-a"]
```
`seagate/sc-me5-site-b.yaml` (new file, 22 lines)
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-me5-site-b
provisioner: csi-exos-x.seagate.com
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  csi.storage.k8s.io/provisioner-secret-name: seagate-me5-site-b
  csi.storage.k8s.io/provisioner-secret-namespace: seagate
  csi.storage.k8s.io/controller-publish-secret-name: seagate-me5-site-b
  csi.storage.k8s.io/controller-publish-secret-namespace: seagate
  csi.storage.k8s.io/controller-expand-secret-name: seagate-me5-site-b
  csi.storage.k8s.io/controller-expand-secret-namespace: seagate
  csi.storage.k8s.io/fstype: ext4
  pool: dg02
  volPrefix: szb
  storageProtocol: iscsi
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values: ["site-b"]
```
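For the zone-B validation mentioned in the Seagate readme (§6), a mirror of `pvc-a`/`pod-a` consuming this StorageClass could look like the following sketch. The names `pvc-b`/`pod-b` are hypothetical; no such manifests exist in the repo:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-b            # hypothetical, mirrors pvc-a
spec:
  accessModes: ["ReadWriteOnce"]
  resources: { requests: { storage: 10Gi } }
  storageClassName: sc-me5-site-b
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-b            # hypothetical, mirrors pod-a
spec:
  nodeSelector:
    topology.kubernetes.io/zone: site-b
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh","-c","sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-b
```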
`seagate/seagate-exos-x-csi/.helmignore` (new file, 23 lines)
```
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
```
`seagate/seagate-exos-x-csi/Chart.yaml` (new file, 27 lines)
```yaml
annotations:
  artifacthub.io/images: |
    - name: csi-driver
      image: {{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}
apiVersion: v2
appVersion: 1.10.0
description: A dynamic persistent volume (PV) provisioner for Seagate Exos X storage
  systems.
home: https://github.com/Seagate/seagate-exos-x-csi
keywords:
- storage
- iscsi
- fc
- sas
- plugin
- csi
maintainers:
- email: css-host-software@seagate.com
  name: Seagate
  url: https://github.com/Seagate
- email: joseph.skazinski@seagate.com
  name: Joe Skazinski
name: seagate-exos-x-csi
sources:
- https://github.com/Seagate/seagate-exos-x-csi/tree/main/helm/csi-charts
type: application
version: 1.10.0
```
`seagate/seagate-exos-x-csi/README.md.gotmpl` (new file, 59 lines)
{{ template "chart.header" . }}
{{ template "chart.deprecationWarning" . }}
{{ template "chart.description" . }}

{{ template "chart.badgesSection" . }}
[](https://artifacthub.io/packages/search?repo=Seagate)

# Introduction
As of version `1.0.0`, this `csi` driver and the associated helm charts are released as open-source projects under the Apache 2.0 license.

Your contribution is most welcome!

{{ template "chart.homepageLine" . }}

## This helm chart
Is part of the project and is published on [Seagate](https://seagate.io)'s charts repository.

{{ template "chart.sourcesSection" . }}

# Installing the Chart

Create a file named `{{ template "chart.name" . }}.values.yaml` with your values, with the help of [Chart Values](#values).

Add our Charts repository:
```
$ helm repo add seagate https://charts.seagate.io
```

Install the {{ template "chart.name" . }} with release name `{{ template "chart.name" . }}` in the `seagate-exos-x-csi-system` namespace:
```
$ helm install -n seagate-exos-x-csi-system {{ template "chart.name" . }} seagate/{{ template "chart.name" . }} --values {{ template "chart.name" . }}.values.yaml
```

The `upgrade` command is used to change configuration when values are modified:
```
$ helm upgrade -n seagate-exos-x-csi-system {{ template "chart.name" . }} seagate/{{ template "chart.name" . }} --values {{ template "chart.name" . }}.values.yaml
```

# Upgrading the Chart

Update Helm repositories:
```
$ helm repo update
```

Upgrade the release named `{{ template "chart.name" . }}` to the latest version:
```
$ helm upgrade {{ template "chart.name" . }} seagate/{{ template "chart.name" . }}
```

# Creating a storage class

In order to dynamically provision persistent volumes, you first need to create a storage class. To do so, please refer to the project [documentation](https://github.com/Seagate/seagate-exos-x-csi).

{{ template "chart.maintainersSection" . }}

{{ template "chart.requirementsSection" . }}

{{ template "chart.valuesSection" . }}
`seagate/seagate-exos-x-csi/templates/NOTES.txt` (new file, 5 lines)
Thank you for using the Seagate Exos X provisioner. It will be up and running shortly.
Run 'kubectl get pods' to verify that the new pods have a 'STATUS' of 'Running'.

In order to dynamically provision a persistent volume, create a storage class first.
Please refer to this example to do so: https://github.com/Seagate/seagate-exos-x-csi/blob/main/example/storage-class.yaml
10 seagate/seagate-exos-x-csi/templates/_helpers.tpl Normal file
@@ -0,0 +1,10 @@
{{- define "csidriver.labels" -}}
app.kubernetes.io/name: {{ .Chart.Name | kebabcase }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end -}}

{{- define "csidriver.extraArgs" -}}
{{- range .extraArgs }}
- {{ toYaml . }}
{{- end }}
{{- end -}}
126 seagate/seagate-exos-x-csi/templates/daemonset.yaml Normal file
@@ -0,0 +1,126 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: seagate-exos-x-csi-node-server
  labels:
    app.kubernetes.io/version: {{ .Chart.Version }}
    app.kubernetes.io/component: dynamic-provisionning-node
{{ include "csidriver.labels" . | indent 4 }}

spec:
  selector:
    matchLabels:
      name: seagate-exos-x-csi-node-server
{{ include "csidriver.labels" . | indent 6 }}
  template:
    metadata:
      labels:
        name: seagate-exos-x-csi-node-server
{{ include "csidriver.labels" . | indent 8 }}
    spec:
      hostNetwork: true
      hostIPC: true
      {{ if .Values.pspAdmissionControllerEnabled }}serviceAccount: csi-node-registrar{{ end }}
      {{- if .Values.nodeServer.nodeAffinity }}
      affinity:
        nodeAffinity:
{{ toYaml .Values.nodeServer.nodeAffinity | indent 10 }}
      {{- end }}
      {{- if .Values.nodeServer.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeServer.nodeSelector | indent 8 }}
      {{- end }}
      containers:
        - name: seagate-exos-x-csi-node
          image: {{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}
          command:
            - seagate-exos-x-csi-node
            - -bind=unix://{{ .Values.kubeletPath }}/plugins/csi-exos-x.seagate.com/csi.sock
            - -chroot=/host
          {{- include "csidriver.extraArgs" .Values.node | indent 10 }}
          env:
            - name: CSI_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: CSI_NODE_SERVICE_PORT
              value: "978"
          securityContext:
            privileged: true
          volumeMounts:
            - name: plugin-dir
              mountPath: {{ .Values.kubeletPath }}/plugins/csi-exos-x.seagate.com
            - name: mountpoint-dir
              mountPath: {{ .Values.kubeletPath }}/pods
              mountPropagation: Bidirectional
            - name: san-iscsi-csi-run-dir
              mountPath: /var/run/csi-exos-x.seagate.com
            - name: device-dir
              mountPath: /dev
            - name: iscsi-dir
              mountPath: /etc/iscsi
            - name: host
              mountPath: /host
              mountPropagation: Bidirectional
          ports:
            - containerPort: 9808
              name: healthz
              protocol: TCP
            - containerPort: 9842
              name: metrics
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: healthz
            periodSeconds: 60
        - name: liveness-probe
          image: {{ .Values.nodeLivenessProbe.image.repository }}:{{ .Values.nodeLivenessProbe.image.tag }}
          args:
            - --csi-address=/csi/csi.sock
          {{- include "csidriver.extraArgs" .Values.nodeLivenessProbe | indent 10 }}
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
        - name: driver-registrar
          image: {{ .Values.csiNodeRegistrar.image.repository }}:{{ .Values.csiNodeRegistrar.image.tag }}
          args:
            - --csi-address=/csi/csi.sock
            - --kubelet-registration-path={{ .Values.kubeletPath }}/plugins/csi-exos-x.seagate.com/csi.sock
          {{- include "csidriver.extraArgs" .Values.csiNodeRegistrar | indent 10 }}
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
      {{- end }}
      volumes:
        - name: registration-dir
          hostPath:
            path: {{ .Values.kubeletPath }}/plugins_registry/
        - name: mountpoint-dir
          hostPath:
            path: {{ .Values.kubeletPath }}/pods
        - name: plugin-dir
          hostPath:
            path: {{ .Values.kubeletPath }}/plugins/csi-exos-x.seagate.com
            type: DirectoryOrCreate
        - name: iscsi-dir
          hostPath:
            path: /etc/iscsi
        - name: device-dir
          hostPath:
            path: /dev
        - name: san-iscsi-csi-run-dir
          hostPath:
            path: /var/run/csi-exos-x.seagate.com
        - name: host
          hostPath:
            path: /
94 seagate/seagate-exos-x-csi/templates/deployment.yaml Normal file
@@ -0,0 +1,94 @@
kind: Deployment
apiVersion: apps/v1
metadata:
  name: seagate-exos-x-csi-controller-server
  labels:
    app.kubernetes.io/version: {{ .Chart.Version }}
    app.kubernetes.io/component: dynamic-provisionning-controller
{{ include "csidriver.labels" . | indent 4 }}

spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: seagate-exos-x-csi-controller-server
{{ include "csidriver.labels" . | indent 6 }}
  template:
    metadata:
      labels:
        app: seagate-exos-x-csi-controller-server
{{ include "csidriver.labels" . | indent 8 }}
    spec:
      serviceAccount: csi-provisioner
      containers:
        - name: seagate-exos-x-csi-controller
          image: {{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}
          command:
            - seagate-exos-x-csi-controller
            - -bind=unix:///csi/csi.sock
          {{- include "csidriver.extraArgs" .Values.controller | indent 10 }}
          env:
            - name: CSI_NODE_SERVICE_PORT
              value: "978"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: csi-run-dir
              mountPath: /var/run/csi-exos-x.seagate.com
          ports:
            - containerPort: 9842
              name: metrics
              protocol: TCP
        - name: csi-provisioner
          image: {{ .Values.csiProvisioner.image.repository }}:{{ .Values.csiProvisioner.image.tag }}
          args:
            - --csi-address=/csi/csi.sock
            - --worker-threads=1
            - --timeout={{ .Values.csiProvisioner.timeout }}
          {{- include "csidriver.extraArgs" .Values.csiProvisioner | indent 10 }}
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-attacher
          image: {{ .Values.csiAttacher.image.repository }}:{{ .Values.csiAttacher.image.tag }}
          args:
            - --csi-address=/csi/csi.sock
            - --worker-threads=1
            - --timeout={{ .Values.csiAttacher.timeout }}
          {{- include "csidriver.extraArgs" .Values.csiAttacher | indent 10 }}
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-resizer
          image: {{ .Values.csiResizer.image.repository }}:{{ .Values.csiResizer.image.tag }}
          args:
            - --csi-address=/csi/csi.sock
          {{- include "csidriver.extraArgs" .Values.csiResizer | indent 10 }}
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
        - name: csi-snapshotter
          image: {{ .Values.csiSnapshotter.image.repository }}:{{ .Values.csiSnapshotter.image.tag }}
          args:
            - --csi-address=/csi/csi.sock
          {{- include "csidriver.extraArgs" .Values.csiSnapshotter | indent 10 }}
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
      {{- if .Values.imagePullSecrets }}
      imagePullSecrets:
{{ toYaml .Values.imagePullSecrets | indent 8 }}
      {{- end }}
      volumes:
        - name: socket-dir
          emptyDir:
            medium: Memory
        - name: csi-run-dir
          hostPath:
            path: /var/run/csi-exos-x.seagate.com
14 seagate/seagate-exos-x-csi/templates/podmonitor.yaml Normal file
@@ -0,0 +1,14 @@
{{- if .Values.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: seagate-exos-x-csi-node-exporter
  labels:
{{ include "csidriver.labels" . | indent 4 }}
spec:
  selector:
    matchLabels:
      name: seagate-exos-x-csi-node-server
  podMetricsEndpoints:
    - port: metrics
{{- end }}
26 seagate/seagate-exos-x-csi/templates/psp.yaml Normal file
@@ -0,0 +1,26 @@
{{- if .Values.pspAdmissionControllerEnabled -}}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: seagate-exos-x-csi
spec:
  privileged: true
  hostNetwork: true
  hostIPC: true
  hostPID: true
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  hostPorts:
    - min: 0
      max: 65535
  volumes:
    - '*'
  allowedCapabilities:
    - '*'
{{ end }}
166 seagate/seagate-exos-x-csi/templates/rbac.yaml Normal file
@@ -0,0 +1,166 @@
# This YAML file contains all RBAC objects that are necessary to run the external
# CSI provisioner.
#
# In production, each CSI driver deployment has to be customized:
# - to avoid conflicts, use a non-default namespace and different names
#   for non-namespaced entities like the ClusterRole
# - decide whether the deployment replicates the external CSI
#   provisioner, in which case leader election must be enabled;
#   this influences the RBAC setup, see below

apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-provisioner
  labels:
{{ include "csidriver.labels" . | indent 4 }}

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-provisioner-runner-systems
  labels:
{{ include "csidriver.labels" . | indent 4 }}
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents/status"]
    verbs: ["update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-role-systems
  labels:
{{ include "csidriver.labels" . | indent 4 }}
subjects:
  - kind: ServiceAccount
    name: csi-provisioner
    namespace: {{ .Release.Namespace }}
roleRef:
  kind: ClusterRole
  name: external-provisioner-runner-systems
  apiGroup: rbac.authorization.k8s.io

---
# Provisioner must be able to work with endpoints in the current namespace
# if (and only if) leader election is enabled
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-provisioner-cfg-systems
  labels:
{{ include "csidriver.labels" . | indent 4 }}
rules:
  # Only one of the following rules for endpoints or leases is required based on
  # what is set for `--leader-election-type`. Endpoints are deprecated in favor of Leases.
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
{{ if .Values.pspAdmissionControllerEnabled }}
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
    resourceNames:
      - seagate-exos-x-csi
{{ end }}

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-provisioner-role-cfg-systems
  labels:
{{ include "csidriver.labels" . | indent 4 }}
subjects:
  - kind: ServiceAccount
    name: csi-provisioner
roleRef:
  kind: Role
  name: external-provisioner-cfg-systems
  apiGroup: rbac.authorization.k8s.io

{{ if .Values.pspAdmissionControllerEnabled }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-node-registrar
  labels:
{{ include "csidriver.labels" . | indent 4 }}

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-node-registrar-cfg-systems
  labels:
{{ include "csidriver.labels" . | indent 4 }}
rules:
  - apiGroups: ["policy"]
    resources: ["podsecuritypolicies"]
    verbs: ["use"]
    resourceNames:
      - systems-role

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: csi-node-registrar-role-cfg-systems
  labels:
{{ include "csidriver.labels" . | indent 4 }}
subjects:
  - kind: ServiceAccount
    name: csi-node-registrar
roleRef:
  kind: Role
  name: csi-node-registrar-cfg-systems
  apiGroup: rbac.authorization.k8s.io
{{ end }}
31 seagate/seagate-exos-x-csi/templates/servicemonitor.yaml Normal file
@@ -0,0 +1,31 @@
{{- if .Values.serviceMonitor.enabled }}
apiVersion: v1
kind: Service
metadata:
  name: systems-controller-metrics
  labels:
    name: systems-controller-metrics
{{ include "csidriver.labels" . | indent 4 }}
spec:
  ports:
    - name: metrics
      port: 9842
      targetPort: metrics
      protocol: TCP
  selector:
    app: seagate-exos-x-csi-controller-server
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: seagate-exos-x-csi-controller-exporter
  labels:
{{ include "csidriver.labels" . | indent 4 }}
spec:
  selector:
    matchLabels:
      name: systems-controller-metrics
  endpoints:
    - port: metrics
      interval: 1s
{{- end }}
83 seagate/seagate-exos-x-csi/values.yaml Normal file
@@ -0,0 +1,83 @@
# Default values for the CSI Driver.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

# -- Path to kubelet
kubeletPath: /var/lib/kubelet
# -- Whether the PSP admission controller has been enabled in the cluster or not
pspAdmissionControllerEnabled: false
image:
  # -- Docker repository to use for nodes and controller
  repository: ghcr.io/seagate/seagate-exos-x-csi
  # -- Tag to use for nodes and controller
  # @default -- Uses Chart.appVersion value by default if tag does not specify a new version.
  tag: "v1.10.0"
  # -- Default is set to IfNotPresent; override with Always to always pull the specified version
  pullPolicy: Always
# -- Controller sidecar for provisioning
# AKA external-provisioner
csiProvisioner:
  image:
    repository: registry.k8s.io/sig-storage/csi-provisioner
    tag: v5.0.1
  # -- Timeout for gRPC calls from the csi-provisioner to the controller
  timeout: 60s
  # -- Extra arguments for csi-provisioner controller sidecar
  extraArgs: []
# -- Controller sidecar for attachment handling
csiAttacher:
  image:
    repository: registry.k8s.io/sig-storage/csi-attacher
    tag: v4.6.1
  # -- Timeout for gRPC calls from the csi-attacher to the controller
  timeout: 60s
  # -- Extra arguments for csi-attacher controller sidecar
  extraArgs: []
# -- Controller sidecar for volume expansion
csiResizer:
  image:
    repository: registry.k8s.io/sig-storage/csi-resizer
    tag: v1.11.1
  # -- Extra arguments for csi-resizer controller sidecar
  extraArgs: []
# -- Controller sidecar for snapshots handling
csiSnapshotter:
  image:
    repository: registry.k8s.io/sig-storage/csi-snapshotter
    tag: v8.0.1
  # -- Extra arguments for csi-snapshotter controller sidecar
  extraArgs: []
# -- Node sidecar for plugin registration
csiNodeRegistrar:
  image:
    repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
    tag: v2.9.0
  # -- Extra arguments for csi-node-registrar node sidecar
  extraArgs: []
controller:
  # -- Extra arguments for seagate-exos-x-csi-controller container
  extraArgs: [-v=0]
node:
  # -- Extra arguments for seagate-exos-x-csi-node containers
  extraArgs: [-v=0]
multipathd:
  # -- Extra arguments for multipathd containers
  extraArgs: []
# -- Container that converts the CSI liveness probe into a Kubernetes liveness/readiness probe
nodeLivenessProbe:
  image:
    repository: registry.k8s.io/sig-storage/livenessprobe
    tag: v2.12.0
  # -- Extra arguments for the node's liveness probe containers
  extraArgs: []
nodeServer:
  # -- Kubernetes nodeSelector field for seagate-exos-x-csi-node-server Pod
  nodeSelector:
  # -- Kubernetes nodeAffinity field for seagate-exos-x-csi-node-server Pod
  nodeAffinity:
podMonitor:
  # -- Set a Prometheus operator PodMonitor resource (true or false)
  enabled: false
serviceMonitor:
  # -- Set a Prometheus operator ServiceMonitor resource (true or false)
  enabled: false
10 seagate/secret-me5-site-a.yaml Normal file
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
  name: seagate-me5-site-a
  namespace: seagate
type: Opaque
data:
  apiAddress: aHR0cHM6Ly9hZG1pbi5wb3dlcnZhdWx0MS5jMmV0Lm5ldA== # https://admin.powervault1.c2et.net
  username: YWRtaW4uYzNz # admin.c3s
  password: UG96dWVsby4xMjM0NQ== # Pozuelo.12345
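The `data` fields above are base64-encoded; the trailing comments show the decoded values. To generate (or verify) an entry, encode the plain value without a trailing newline:

```shell
# Encode a Secret value; printf avoids the trailing newline that echo would add
printf %s 'admin.c3s' | base64
# -> YWRtaW4uYzNz
```

Alternatively, Kubernetes accepts plain text under `stringData` and performs the encoding for you, which avoids this step entirely.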
10 seagate/secret-me5-site-b.yaml Normal file
@@ -0,0 +1,10 @@
apiVersion: v1
kind: Secret
metadata:
  name: seagate-me5-site-b
  namespace: seagate
type: Opaque
data:
  apiAddress: aHR0cHM6Ly9hZG1pbi5wb3dlcnZhdWx0Mi5jMmV0Lm5ldA== # https://admin.powervault2.c2et.net
  username: YWRtaW4uYzNz
  password: UG96dWVsby4xMjM0NQ==
64 seagate/values.yaml Normal file
@@ -0,0 +1,64 @@
kubeletPath: /var/lib/kubelet
pspAdmissionControllerEnabled: false

image:
  repository: ghcr.io/seagate/seagate-exos-x-csi
  tag: "v1.10.0"
  pullPolicy: IfNotPresent

csiProvisioner:
  image:
    repository: registry.k8s.io/sig-storage/csi-provisioner
    tag: v5.0.1
  timeout: 60s
  extraArgs: []

csiAttacher:
  image:
    repository: registry.k8s.io/sig-storage/csi-attacher
    tag: v4.6.1
  timeout: 60s
  extraArgs: []

csiResizer:
  image:
    repository: registry.k8s.io/sig-storage/csi-resizer
    tag: v1.11.1
  extraArgs: []

csiSnapshotter:
  image:
    repository: registry.k8s.io/sig-storage/csi-snapshotter
    tag: v8.0.1
  extraArgs: []

csiNodeRegistrar:
  image:
    repository: registry.k8s.io/sig-storage/csi-node-driver-registrar
    tag: v2.9.0
  extraArgs: []

controller:
  extraArgs: ["-v=2"]

node:
  extraArgs: ["-v=2"]

multipathd:
  extraArgs: []

nodeLivenessProbe:
  image:
    repository: registry.k8s.io/sig-storage/livenessprobe
    tag: v2.12.0
  extraArgs: []

nodeServer:
  nodeSelector: {}
  nodeAffinity: {}

podMonitor:
  enabled: false

serviceMonitor:
  enabled: false
16 velero/bsl/bsl-default-site-a.yaml Normal file
@@ -0,0 +1,16 @@
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero
  config:
    region: minio
    s3Url: https://s3-a.c2et.net
    s3ForcePathStyle: "true"
  credential:
    name: cloud-credentials-site-a
    key: cloud
16 velero/bsl/bsl-site-b.yaml Normal file
@@ -0,0 +1,16 @@
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: site-b
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero
  config:
    region: minio
    s3Url: https://s3-b.c2et.net
    s3ForcePathStyle: "true"
  credential:
    name: cloud-credentials-site-b
    key: cloud
36 velero/helm/values-approach-a.yaml Normal file
@@ -0,0 +1,36 @@
credentials:
  useSecret: true
  existingSecret: ""
  secretContents:
    cloud: |
      [default]
      aws_access_key_id=velero-a
      aws_secret_access_key=Clave-Velero-A

configuration:
  features: EnableCSI
  backupStorageLocation:
    - name: default
      provider: aws
      bucket: velero
      config:
        region: minio
        s3Url: https://s3-a.c2et.net
        s3ForcePathStyle: "true"

initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.9.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - name: plugins
        mountPath: /target
  - name: velero-plugin-for-csi
    image: velero/velero-plugin-for-csi:v0.7.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - name: plugins
        mountPath: /target

nodeAgent:
  enabled: true
|
||||
nodeAgent:
|
||||
enabled: true
|
||||
30
velero/helm/values-approach-b.yaml
Normal file
30
velero/helm/values-approach-b.yaml
Normal file
@@ -0,0 +1,30 @@
|
||||
# values-combined.yaml
|
||||
credentials:
|
||||
useSecret: false # Secrets y BSLs los aplicas tú por YAML (como ya hiciste)
|
||||
|
||||
configuration:
|
||||
features: ""
|
||||
backupStorageLocation: [] # ninguno desde Helm (los gestionas por YAML)
|
||||
defaultVolumesToFsBackup: true # copia datos de PV vía node-agent/Kopia al BSL
|
||||
|
||||
# Dejamos SOLO el plugin de AWS; el CSI externo se quita (viene integrado en Velero 1.16)
|
||||
initContainers:
|
||||
- name: velero-plugin-for-aws
|
||||
image: velero/velero-plugin-for-aws:v1.9.0
|
||||
imagePullPolicy: IfNotPresent
|
||||
volumeMounts:
|
||||
- name: plugins
|
||||
mountPath: /target
|
||||
|
||||
# **activar** el node-agent (DaemonSet) y darle tolerations "catch-all"
|
||||
deployNodeAgent: true
|
||||
nodeAgent:
|
||||
podConfig:
|
||||
tolerations:
|
||||
- key: "node-role.kubernetes.io/master"
|
||||
operator: "Exists"
|
||||
effect: "NoSchedule"
|
||||
- key: "node-role.kubernetes.io/control-plane"
|
||||
operator: "Exists"
|
||||
effect: "NoSchedule"
|
||||
- operator: "Exists" # tolera cualquier otro taint
|
||||
92 velero/monitoring/grafana-dashboard-velero.json Normal file
@@ -0,0 +1,92 @@
{
  "annotations": {
    "list": []
  },
  "editable": true,
  "gnetId": null,
  "graphTooltip": 0,
  "panels": [
    {
      "type": "stat",
      "title": "Backups - Total",
      "targets": [
        {
          "expr": "sum(velero_backup_total)",
          "legendFormat": "total"
        }
      ],
      "id": 1,
      "datasource": {
        "type": "prometheus",
        "uid": "prometheus"
      },
      "options": {
        "reduceOptions": {
          "calcs": [
            "lastNotNull"
          ]
        }
      }
    },
    {
      "type": "timeSeries",
      "title": "Backups by state",
      "targets": [
        {
          "expr": "sum by (phase) (increase(velero_backup_attempt_total[1h]))",
          "legendFormat": "{{phase}}"
        }
      ],
      "id": 2,
      "datasource": {
        "type": "prometheus",
        "uid": "prometheus"
      }
    },
    {
      "type": "timeSeries",
      "title": "Backup duration (p95)",
      "targets": [
        {
          "expr": "histogram_quantile(0.95, sum(rate(velero_backup_duration_seconds_bucket[5m])) by (le))",
          "legendFormat": "p95"
        }
      ],
      "id": 3,
      "datasource": {
        "type": "prometheus",
        "uid": "prometheus"
      }
    },
    {
      "type": "timeSeries",
      "title": "node-agent errors",
      "targets": [
        {
          "expr": "sum(rate(velero_node_agent_errors_total[5m]))",
          "legendFormat": "errors"
        }
      ],
      "id": 4,
      "datasource": {
        "type": "prometheus",
        "uid": "prometheus"
      }
    }
  ],
  "schemaVersion": 37,
  "style": "dark",
  "tags": [
    "velero",
    "backup"
  ],
  "templating": {
    "list": []
  },
  "time": {
    "from": "now-24h",
    "to": "now"
  },
  "title": "Velero (MinIO S3)",
  "version": 1
}
16 velero/monitoring/servicemonitor.yaml Normal file
@@ -0,0 +1,16 @@
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: velero
  namespace: velero
  labels:
    release: prometheus  # adjust to the selector your Prometheus uses
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: velero
  namespaceSelector:
    matchNames: ["velero"]
  endpoints:
    - port: metrics
      interval: 30s
4 velero/namespace.yaml Normal file
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: velero
86 velero/readme.md Normal file
@@ -0,0 +1,86 @@
# Velero + MinIO (c2et.net)

This package contains:
- `namespace.yaml`
- Credential Secrets (`cloud-credentials-site-a`, `cloud-credentials-site-b`)
- BackupStorageLocations (BSL) as YAML: `default` (site-a) and `site-b`
- Example `Schedule`s (nightly at 02:00 and 02:30)
- `helm/values-approach-b.yaml`: Velero deployment without BSL/Secret (GitOps)
- `ServiceMonitor` (if you use the Prometheus Operator)
- Grafana dashboard (JSON)

## Recommended flow (GitOps, Approach B)
```bash
# 1) Install Velero via Helm without BSLs or secrets
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts
helm upgrade --install velero vmware-tanzu/velero -n velero --create-namespace -f helm/values-approach-b.yaml

# 2) Apply Secrets, BSLs and Schedules
kubectl apply -f namespace.yaml
kubectl apply -f secrets/secret-site-a.yaml -f secrets/secret-site-b.yaml
kubectl apply -f bsl/bsl-default-site-a.yaml -f bsl/bsl-site-b.yaml
kubectl apply -f schedules/schedules.yaml
```

## Velero client
To interact with Velero you need the binary on your administration machine.

```bash
# Linux AMD64
wget https://github.com/vmware-tanzu/velero/releases/download/v1.16.2/velero-v1.16.2-linux-amd64.tar.gz
tar -xvf velero-v1.16.2-linux-amd64.tar.gz
sudo mv velero-v1.16.2-linux-amd64/velero /usr/local/bin/

# macOS Intel
wget https://github.com/vmware-tanzu/velero/releases/download/v1.16.2/velero-v1.16.2-darwin-amd64.tar.gz
tar -xvf velero-v1.16.2-darwin-amd64.tar.gz
sudo mv velero-v1.16.2-darwin-amd64/velero /usr/local/bin/
```

Verify the installation:
```bash
velero version
```

## Creating a manual backup
Example: back up the `wireguard` namespace.
```bash
velero backup create wireguard-backup --include-namespaces wireguard --wait
velero backup describe wireguard-backup --details
```

You can exclude unnecessary resources (e.g. KubeVirt CRDs):
```bash
velero backup create smoke --include-namespaces default --exclude-resources uploadtokenrequests.upload.cdi.kubevirt.io --wait
```

## Scheduling backups (Schedules)
Example of a daily schedule at 03:15 with a 30-day TTL:
```bash
velero schedule create daily-wireguard --schedule "15 3 * * *" --include-namespaces wireguard --ttl 720h --default-volumes-to-fs-backup
```
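Velero expresses retention as a plain duration with hours as the largest unit, so a 30-day TTL is written as `720h`. A quick sanity check of that arithmetic:

```shell
# 30 days expressed as the hour count Velero expects for --ttl
echo "$(( 30 * 24 ))h"
# -> 720h
```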
|
||||
Los schedules también se pueden definir por YAML en `schedules/schedules.yaml`.
|
||||
|
||||
## Restaurar un backup
|
||||
### Restaurar al mismo namespace (desastre real)
|
||||
```bash
|
||||
# 1) Borrar el namespace roto
|
||||
kubectl delete ns wireguard
|
||||
|
||||
# 2) Restaurar desde el backup
|
||||
velero restore create wireguard-restore --from-backup wireguard-backup --wait
|
||||
velero restore describe wireguard-restore --details
|
||||
```

### Restore to a different namespace (rehearsal)

```bash
kubectl create ns wireguard-restore
velero restore create wireguard-restore-test --from-backup wireguard-backup --namespace-mappings wireguard:wireguard-restore --wait
```

## Notes

- MinIO requires `s3ForcePathStyle=true`.
- If you use your own CA, add `spec.objectStorage.caCert` to the BSLs.
- `ServiceMonitor` requires the Prometheus Operator; adjust `metadata.labels.release` to the value your Prometheus uses.
- Import the dashboard JSON into Grafana (datasource `prometheus`).
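
As a reference for the first two notes, a minimal `BackupStorageLocation` sketch for MinIO might look like the following (the bucket name, region value, and S3 endpoint are illustrative assumptions, not values taken from this repo):

```yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero-site-a          # illustrative bucket name
  config:
    region: minio                  # placeholder region for MinIO
    s3ForcePathStyle: "true"       # required by MinIO (path-style addressing)
    s3Url: https://s3-a.c2et.net   # illustrative endpoint
```

After applying it, `velero backup-location get` should report the location as `Available`.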
|
||||
27
velero/schedules/schedules.yaml
Normal file
27
velero/schedules/schedules.yaml
Normal file
@@ -0,0 +1,27 @@
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-a
  namespace: velero
spec:
  schedule: "0 2 * * *"
  template:
    ttl: 168h
    includedNamespaces:
      - gitea
      - apolo
    storageLocation: default
---
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-b
  namespace: velero
spec:
  schedule: "30 2 * * *"
  template:
    ttl: 168h
    includedNamespaces:
      - gitea
      - apolo
    storageLocation: site-b
11
velero/secrets/secret-site-a.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials-site-a
  namespace: velero
type: Opaque
stringData:
  cloud: |
    [default]
    aws_access_key_id=velero-a
    aws_secret_access_key=Pozuelo12345
11
velero/secrets/secret-site-b.yaml
Normal file
@@ -0,0 +1,11 @@
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials-site-b
  namespace: velero
type: Opaque
stringData:
  cloud: |
    [default]
    aws_access_key_id=velero-b
    aws_secret_access_key=Pozuelo12345
7
velero/vsl-default.yaml
Normal file
@@ -0,0 +1,7 @@
apiVersion: velero.io/v1
kind: VolumeSnapshotLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: velero.io/csi