# Deploying Rook-Ceph on a K3s cluster (SUSE) with local disks (BlueStore)

This guide describes how to deploy a Rook-Ceph cluster on K3s running on SUSE servers with standard **local disks** (no iSCSI). It is tailored to a 4-server architecture: `srvfkvm1`, `srvfkvm2`, `srvfkvm3`, `srvfkvm4`, each with 6 disks of ~900 GB.

---

## 1. Prerequisites

* 4 K3s nodes up and running: `SRVFKVM01`, `SRVFKVM02`, `SRVFKVM03`, `SRVFKVM04`
* Each node with 6 local disks dedicated to Ceph (`/dev/sdj` through `/dev/sdo` on each server)
* K3s and `kubectl` installed and configured
* Full Internet access from all nodes

---

## 2. Prepare the SUSE nodes (local disks only)

No iSCSI or multipath configuration is needed. **Make sure the disks are empty and unpartitioned,** or remove any existing partitions (Ceph will overwrite them).

Verify the disks on each node:

```bash
lsblk | grep 'sd[j-o]'
```
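
If the disks were used before (old partitions, filesystems, or RAID signatures), Rook's OSD prepare job will skip them. A sketch of the usual cleanup, assuming the `/dev/sdj`–`/dev/sdo` device names from this setup — it is **destructive**, so double-check the device list before running:

```shell
# DESTRUCTIVE: clears partition tables and on-disk signatures
for disk in /dev/sd{j..o}; do
  sgdisk --zap-all "$disk"   # wipe GPT/MBR structures
  wipefs --all "$disk"       # wipe filesystem/RAID signatures
done
```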
---

## 3. Create the Rook-Ceph namespace and CRDs

```bash
kubectl create namespace rook-ceph

# Clone the official Rook repository
git clone https://github.com/rook/rook.git
cd rook/deploy/examples

# Apply the CRDs and common resources
kubectl apply -f crds.yaml
kubectl apply -f common.yaml
```

---

## 4. Deploy the Rook-Ceph operator

```bash
kubectl apply -f operator.yaml
```
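
Before creating the cluster, it is worth waiting until the operator pod is actually ready; a sketch using `kubectl wait` (the `app=rook-ceph-operator` label is the one used in the Rook examples):

```shell
# Block until the operator pod reports Ready (up to 5 minutes)
kubectl -n rook-ceph wait --for=condition=Ready pod \
  -l app=rook-ceph-operator --timeout=300s
```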
---

## 5. Create the Ceph cluster with local disks

Create a file `ceph-cluster.yaml` with the following content (adjust node names/disks as appropriate):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    allowMultiplePerNode: false
  dashboard:
    enabled: true
  storage:
    useAllNodes: false
    useAllDevices: false
    nodes:
      - name: SRVFKVM01
        devices:
          - name: /dev/sdj
          - name: /dev/sdk
          - name: /dev/sdl
          - name: /dev/sdm
          - name: /dev/sdn
          - name: /dev/sdo
      - name: SRVFKVM02
        devices:
          - name: /dev/sdj
          - name: /dev/sdk
          - name: /dev/sdl
          - name: /dev/sdm
          - name: /dev/sdn
          - name: /dev/sdo
      - name: SRVFKVM03
        devices:
          - name: /dev/sdj
          - name: /dev/sdk
          - name: /dev/sdl
          - name: /dev/sdm
          - name: /dev/sdn
          - name: /dev/sdo
      - name: SRVFKVM04
        devices:
          - name: /dev/sdj
          - name: /dev/sdk
          - name: /dev/sdl
          - name: /dev/sdm
          - name: /dev/sdn
          - name: /dev/sdo
```

> **Note:** make sure the node names (`name:`) match the values shown by `kubectl get nodes`.

Apply the manifest:

```bash
kubectl apply -f ceph-cluster.yaml
```

---

## 6. Verify the Ceph deployment

```bash
kubectl -n rook-ceph get pods
```

* Wait until the pods reach the `Running` state.

To check the Ceph status (the toolbox pod is not deployed by default, so apply it first):

```bash
# Deploy the toolbox from rook/deploy/examples, wait for it, then query Ceph
kubectl apply -f toolbox.yaml
kubectl -n rook-ceph wait --for=condition=Ready pod -l app=rook-ceph-tools --timeout=300s
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
```

---

## 7. Create a CephBlockPool and StorageClass (replica 2)

**ceph-blockpool.yaml:**

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicado-2x
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 2
```
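
With `size: 2`, every object is stored twice, so usable capacity is roughly half the raw total. A quick sanity check for this 4-node, 6×900 GB layout:

```shell
# Raw capacity: 4 nodes x 6 disks x 900 GB; replica 2 halves the usable space
raw_gb=$((4 * 6 * 900))
usable_gb=$((raw_gb / 2))
echo "raw=${raw_gb}GB usable~${usable_gb}GB"
# → raw=21600GB usable~10800GB
```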

**ceph-storageclass.yaml:**

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-replica2
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicado-2x
  imageFormat: "2"
  imageFeatures: layering
  # CSI secrets (standard names from the Rook examples; required for provisioning)
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
```

Apply both:

```bash
kubectl apply -f ceph-blockpool.yaml
kubectl apply -f ceph-storageclass.yaml
```
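
To verify provisioning end to end, a minimal PVC against the new StorageClass can be applied and checked with `kubectl get pvc` (the PVC name here is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd-replica2
```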
---

## 8. Access the Ceph Dashboard

Get the dashboard port:

```bash
kubectl -n rook-ceph get svc | grep dashboard
```

Get the password:

```bash
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{.data.password}" | base64 -d
```

Point your browser to:

```
https://<node_IP>:<NodePort>
```
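
If the `get svc` output shows the dashboard Service (`rook-ceph-mgr-dashboard`) as `ClusterIP` with no NodePort, a port-forward is a quick alternative; the 8443 port assumes SSL is enabled on the dashboard, which is the default:

```shell
# Forward the dashboard locally, then browse to https://localhost:8443
kubectl -n rook-ceph port-forward svc/rook-ceph-mgr-dashboard 8443:8443
```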

* User: `admin`
* Password: (the one retrieved above)

---

# Rook-Ceph cluster ready for production with local disks and cross-host replication across 4 nodes.