# 📘 K3s Cluster Installation Guide - Valhalla Cluster

## 1. Install the operating system

On all three servers:

- Configure **bond0** as the primary interface.
- Create a **`br0` bridge** on top of `bond0` for un-NATed VLAN 1 traffic (management and access).
- Bring up **VLAN 30 (storage)** and **VLAN 40 (internode)**.
- Configure **a fallback interface** (`enp88s0`, `eno1`, etc.) with a secondary IP on VLAN 1.

### Primary IP assignment

| Node     | VLAN 1 (Management) | VLAN 30 (Storage) | VLAN 40 (Internode) |
|:---------|:--------------------|:------------------|:--------------------|
| tartaro  | 192.168.1.11        | 192.168.3.1       | 192.168.4.1         |
| styx     | 192.168.1.12        | 192.168.3.2       | 192.168.4.2         |
| niflheim | 192.168.1.13        | 192.168.3.3       | 192.168.4.3         |

---

## 📄 Netplan for each server

### Tartaro (`/etc/netplan/00-installer-config.yaml`):

```yaml
network:
  version: 2
  ethernets:
    enp2s0f0np0: {}
    enp2s0f1np1: {}
    enp88s0:
      addresses:
        - "192.168.1.8/24"
  bonds:
    bond0:
      interfaces:
        - enp2s0f0np0
        - enp2s0f1np1
      parameters:
        mode: "802.3ad"
        lacp-rate: "fast"
        transmit-hash-policy: "layer3+4"
  vlans:
    bond0.30:
      id: 30
      link: bond0
      addresses:
        - "192.168.3.1/24"
    bond0.40:
      id: 40
      link: bond0
      addresses:
        - "192.168.4.1/24"
  bridges:
    br0:
      interfaces:
        - bond0
      addresses:
        - "192.168.1.11/24"
      nameservers:
        addresses:
          - 192.168.1.1
          - 1.1.1.1
          - 8.8.8.8
        search: []
      routes:
        - to: "default"
          via: "192.168.1.1"
```
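
Before rebooting it is worth validating the configuration; a quick check, assuming netplan renders through systemd-networkd as on a standard Ubuntu Server install:

```bash
sudo netplan try               # rolls back automatically unless confirmed

ip -d link show bond0          # should report mode 802.3ad with both members
ip -d link show type vlan      # bond0.30 and bond0.40 should be listed
ip addr show br0               # should hold the VLAN 1 management address
cat /proc/net/bonding/bond0    # LACP negotiation details
```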

---

### Styx (`/etc/netplan/00-installer-config.yaml`):

```yaml
network:
  version: 2
  ethernets:
    enp2s0f0np0: {}
    enp2s0f1np1: {}
    enp88s0:
      addresses:
        - "192.168.1.21/24"
  bonds:
    bond0:
      interfaces:
        - enp2s0f0np0
        - enp2s0f1np1
      parameters:
        mode: "802.3ad"
        lacp-rate: "fast"
        transmit-hash-policy: "layer3+4"
  vlans:
    bond0.30:
      id: 30
      link: bond0
      addresses:
        - "192.168.3.2/24"
    bond0.40:
      id: 40
      link: bond0
      addresses:
        - "192.168.4.2/24"
  bridges:
    br0:
      interfaces:
        - bond0
      addresses:
        - "192.168.1.12/24"
      nameservers:
        addresses:
          - 192.168.1.1
          - 1.1.1.1
          - 8.8.8.8
        search: []
      routes:
        - to: "default"
          via: "192.168.1.1"
```

---

### Niflheim (`/etc/netplan/00-installer-config.yaml`):

```yaml
network:
  version: 2
  ethernets:
    ens1f0: {}
    ens1f1: {}
    eno1:
      addresses:
        - "192.168.1.22/24"
  bonds:
    bond0:
      interfaces:
        - ens1f0
        - ens1f1
      parameters:
        mode: "802.3ad"
        lacp-rate: "fast"
        transmit-hash-policy: "layer3+4"
  vlans:
    bond0.30:
      id: 30
      link: bond0
      addresses:
        - "192.168.3.3/24"
    bond0.40:
      id: 40
      link: bond0
      addresses:
        - "192.168.4.3/24"
  bridges:
    br0:
      interfaces:
        - bond0
      addresses:
        - "192.168.1.13/24"
      nameservers:
        addresses:
          - 192.168.1.1
          - 1.1.1.1
          - 8.8.8.8
        search: []
      routes:
        - to: "default"
          via: "192.168.1.1"
```

---

## 2. Basic preparation

On **all servers**:

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y keepalived nfs-common
```

> Make sure the manifests are cloned from Gitea or staged locally before you start.

---

### ZFS configuration on `niflheim`

1. Install ZFS:

       sudo apt install -y zfsutils-linux

2. Create the ZFS pool with the 4 Toshiba disks (RAID10 with 2 mirrors):

       sudo zpool create -o ashift=12 k8spool \
         mirror /dev/sda /dev/sdb \
         mirror /dev/sdc /dev/sde

3. Create the dataset:

       sudo zfs create k8spool/k8s
       sudo zfs set mountpoint=/mnt/storage/k8s k8spool/k8s
       sudo zfs set compression=lz4 k8spool/k8s
       sudo chown nobody:nogroup /mnt/storage/k8s

4. Verify:

       sudo zpool status
       sudo zfs list
       sudo zfs get compression k8spool/k8s
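
The `k8s-storage` provisioner deployed in section 5 consumes this dataset over NFS, but the export itself is not shown in this guide. A minimal sketch, assuming `nfs-kernel-server` is used on niflheim and the export is restricted to the storage VLAN (192.168.3.0/24):

```bash
# Hypothetical NFS export of the ZFS dataset over the storage VLAN
sudo apt install -y nfs-kernel-server
echo '/mnt/storage/k8s 192.168.3.0/24(rw,sync,no_subtree_check,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra
sudo exportfs -v    # confirm the export is active
```

Adjust or skip this step if the repository's manifests handle the NFS server side differently.
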
---

## 3. Keepalived configuration

On tartaro (MASTER):

`sudo nano /etc/keepalived/keepalived.conf`

    vrrp_instance VI_1 {
        state MASTER
        interface br0
        virtual_router_id 51
        priority 150
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 42manabo42
        }
        virtual_ipaddress {
            192.168.1.10/24
        }
    }

On styx (BACKUP):

`sudo nano /etc/keepalived/keepalived.conf`

    vrrp_instance VI_1 {
        state BACKUP
        interface br0
        virtual_router_id 51
        priority 100
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 42manabo42
        }
        virtual_ipaddress {
            192.168.1.10/24
        }
    }

Then, on both nodes:

    sudo systemctl enable keepalived
    sudo systemctl start keepalived
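
A quick way to confirm the failover pair is working is to check which node currently holds the VIP; a short check, assuming the configuration above:

```bash
# On tartaro the VIP should be present; on styx it should not (until a failover)
ip addr show br0 | grep 192.168.1.10

# Watch keepalived state transitions in the journal
journalctl -u keepalived -n 20 --no-pager
```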

## 4. Install K3s

On `tartaro` (primary control plane):

    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--cluster-init --disable traefik \
      --node-name tartaro \
      --node-ip 192.168.4.1 \
      --advertise-address 192.168.4.1 \
      --tls-san 192.168.1.10 \
      --tls-san 192.168.1.11 \
      --write-kubeconfig-mode 644" sh -

Get the token:

    sudo cat /var/lib/rancher/k3s/server/node-token

On `styx` (second node):

    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik \
      --node-name styx \
      --node-ip 192.168.4.2 \
      --advertise-address 192.168.4.2 \
      --tls-san 192.168.1.10 \
      --tls-san 192.168.1.12" \
      K3S_URL=https://192.168.1.10:6443 \
      K3S_TOKEN="token" \
      sh -

On `niflheim` (additional control plane, dedicated exclusively to storage):

    curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik \
      --node-name niflheim \
      --node-ip 192.168.4.3 \
      --advertise-address 192.168.4.3 \
      --tls-san 192.168.1.10 \
      --tls-san 192.168.1.13 \
      --node-taint storage=only:NoSchedule" \
      K3S_URL=https://192.168.1.10:6443 \
      K3S_TOKEN="token" \
      sh -

### Verify the cluster state

From tartaro (or with the kubeconfig copied elsewhere):

    kubectl get nodes

On styx and niflheim, to allow access to the kubeconfig:

    sudo chmod 644 /etc/rancher/k3s/k3s.yaml
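
To manage the cluster from a workstation, the kubeconfig can be copied out and pointed at the Keepalived VIP (which is included in the TLS SANs above). A minimal sketch, assuming SSH access to tartaro and that `~/.kube/config` on the workstation is not yet in use:

```bash
# Copy the kubeconfig from tartaro and replace the loopback endpoint with the VIP
scp tartaro:/etc/rancher/k3s/k3s.yaml ~/.kube/config
sed -i 's/127.0.0.1/192.168.1.10/' ~/.kube/config
kubectl get nodes -o wide
```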

## 5. Install the storage driver

On **Tartaro** (or on whichever node the repositories were cloned to earlier):

    cd ~/k3s/k8s-storage/
    kubectl apply -k .

Check:

    kubectl get pods -n nfs-provisioner
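
To confirm that dynamic provisioning works end to end, a throwaway PVC can be created and deleted again. A minimal sketch, assuming the provisioner registers a StorageClass named `nfs-client` (adjust to whatever the `k8s-storage` manifests actually define):

```yaml
# test-pvc.yaml - hypothetical smoke test, not part of the repository
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client   # assumption: name defined in k8s-storage
  resources:
    requests:
      storage: 1Gi
```

Apply it with `kubectl apply -f test-pvc.yaml`, check that it reaches `Bound` with `kubectl get pvc`, and remove it afterwards.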

## 6. Deploy the automated Ingress system

### Port forwarding from the router

- Forward external ports 80 and 443 to the Keepalived virtual IP (192.168.1.9).
- The `NodePort` values are configured in the manifest as (a sketch of such a Service follows below):
  - 30080 → 80 (HTTP)
  - 30443 → 443 (HTTPS)
- The forwarding rules are therefore:
  - 80 → 192.168.1.9:30080
  - 443 → 192.168.1.9:30443

> If you need to see the ports in use, the quick way to list them is:

    kubectl get svc --all-namespaces -o jsonpath="{range .items[*]}{.metadata.namespace}:{.metadata.name} → {.spec.ports[*].nodePort}{'\n'}{end}" | grep -v "→ $"
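
For reference, the Service that pins those NodePorts looks roughly like this; a minimal sketch, assuming an ingress-nginx controller in an `ingress-nginx` namespace (names and labels are illustrative, the real definition lives in `k8s-ingress-controller`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller    # assumption: actual name comes from the repo
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
```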

### Deploy cert-manager

    cd ~/k3s/k8s-cert-manager/
    kubectl apply -f namespace.yaml
    kubectl apply -f https://github.com/cert-manager/cert-manager/releases/latest/download/cert-manager.yaml
    kubectl apply -f clusterissuer-staging.yaml
    kubectl apply -f clusterissuer-prod.yaml
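
The ClusterIssuer files are kept in the repository and not reproduced here; a minimal sketch of what `clusterissuer-prod.yaml` typically contains, assuming HTTP-01 challenges through the nginx ingress class and a placeholder contact address (the real file in `k8s-cert-manager` is authoritative):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod            # assumption: name used by the real manifest
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@manabo.org          # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx             # assumption: matches the ingress-controller
```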

### Deploy the ingress-controller

    cd ~/k3s/k8s-ingress-controller/
    kubectl apply -k .

## 7. Deploy Gitea manually

### On Tartaro (or on the node where you copied the repositories)

    cd ~/k3s/k8s-gitea/
    kubectl apply -k .

Check that the pods reach the `Running` state:

    kubectl get pods -n gitea -w

> With Gitea now reachable, this is the moment to create all the remote repositories. It is a good idea to lean on [git-publish](herramienta%20git-publish.md).
> If you are also tired of typing git-publish, there is a script for that too: [publicar-todos](herramienta%20publicar-todos.md).

## 8. Install Harbor

### 8.1 Install `helm`

```bash
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```

Verify that `helm` is installed:

```bash
helm version
```

---

### 8.2 Add the Harbor chart repository

```bash
helm repo add harbor https://helm.goharbor.io
helm repo update
```

---

### 8.3 Install Harbor

```bash
helm install harbor harbor/harbor --namespace harbor --create-namespace -f values.yaml
```
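
The `values.yaml` referenced above lives alongside the manifests and is not reproduced in this guide; a minimal sketch of the kind of overrides it would carry, assuming ingress exposure under harbor.manabo.org and the NFS storage class from section 5 (all values here are illustrative):

```yaml
# values.yaml (sketch) - the real settings live in the repository
expose:
  type: ingress
  ingress:
    hosts:
      core: harbor.manabo.org
  tls:
    certSource: secret
    secret:
      secretName: harbor-tls        # assumption: certificate issued by cert-manager
externalURL: https://harbor.manabo.org
persistence:
  persistentVolumeClaim:
    registry:
      storageClass: nfs-client      # assumption: class from section 5
```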

---

> If everything went well, you can access **https://harbor.manabo.org** with user `admin` and password `Harbor12345`.

---

> You can log in from your own machine with:

```bash
docker login harbor.manabo.org
```

And use Harbor just like Docker Hub.
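
For example, pushing a locally built image works the same way as with Docker Hub, only with the registry host and a project name in front (the project `library` here is an assumption; use whichever project you created in Harbor):

```bash
docker tag myapp:latest harbor.manabo.org/library/myapp:latest
docker push harbor.manabo.org/library/myapp:latest
```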

---

> **At this stage:** our private image registry (`harbor.manabo.org`) is deployed and ready to replace Docker Hub in our projects.

## 9. Install ArgoCD

### On Tartaro (or wherever the manifests cloned from Gitea live locally)

    cd ~/k3s/k8s-argocd/
    kubectl apply -f namespace.yaml
    # Install ArgoCD from the official manifest (roughly 26000 lines)
    kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
    kubectl apply -f services/argocd.yaml
    kubectl apply -f ingress/ingress.yaml

### Access

> Creating an access entry in NPM is the most convenient approach.

The admin password can be retrieved with:

    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo

### Create the App of Apps

In the web interface (an equivalent declarative manifest is sketched after this list):

1. Name: app-of-apps (all lowercase)
2. Project: default
3. Repository URL: the k8s-master repository in Gitea: https://git.manabo.org/xavor/k8s-master.git
4. Path: apps
5. Cluster URL: https://kubernetes.default.svc
6. Namespace: argocd
7. Sync policy: automatic
8. Tick the boxes: `AUTO-CREATE NAMESPACE` `PRUNE` `SELF HEAL` `DIRECTORY RECURSE`
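
The same Application can be declared in YAML and applied with kubectl instead of clicking through the UI; a sketch equivalent to the form above (field values taken from the list, the manifest itself is not part of the repository):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.manabo.org/xavor/k8s-master.git
    path: apps
    directory:
      recurse: true                 # DIRECTORY RECURSE
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true                   # PRUNE
      selfHeal: true                # SELF HEAL
    syncOptions:
      - CreateNamespace=true        # AUTO-CREATE NAMESPACE
```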

## 10. Install KubeVirt

    export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | grep tag_name | cut -d '"' -f 4)
    kubectl create namespace kubevirt
    kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
    kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml
    kubectl apply -f https://raw.githubusercontent.com/k8snetworkplumbingwg/multus-cni/master/deployments/multus-daemonset.yml

    export CDI_VERSION=$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases/latest | grep tag_name | cut -d '"' -f 4)

    kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-operator.yaml
    kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/${CDI_VERSION}/cdi-cr.yaml

### Check the deployment

    kubectl get pods -n kubevirt

### Install virtctl (client tool)

    export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases/latest | grep tag_name | cut -d '"' -f 4)
    curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/virtctl-${KUBEVIRT_VERSION}-linux-amd64
    chmod +x virtctl
    sudo mv virtctl /usr/local/bin/

### 10.1 Configure Multus and the virtual networks

On each node:

    sudo cp /var/lib/rancher/k3s/agent/etc/cni/net.d/10-flannel.conflist /etc/cni/net.d/
    echo 'KERNEL=="kvm", MODE="0666"' | sudo tee /etc/udev/rules.d/99-kvm.rules
    sudo udevadm control --reload-rules && sudo udevadm trigger

And on tartaro (or any node):

    kubectl -n kube-system delete pod -l app=multus
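
The section title mentions the virtual networks, but no attachment definition is shown here. A minimal sketch of a bridged Multus network over `br0`, assuming the standard `bridge` CNI plugin is available on the nodes (name and namespace are illustrative):

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: br0-lan                      # hypothetical name
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br0",
      "ipam": {}
    }
```

A VM attached to this network gets a tap interface on `br0` and therefore a presence on VLAN 1, which is the usual way of giving KubeVirt guests a LAN address (via DHCP or static configuration inside the guest).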

## 11. Deploy the HTTP server for ISOs (KubeVirt ISO Server)

    cd ~/k3s/k8s-kubevirt-isoserver/
    kubectl apply -k .
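
Once the server is up, CDI can import an ISO from it into a PVC that a VM can later mount. A minimal sketch, assuming a hypothetical in-cluster URL for the ISO server and the `nfs-client` StorageClass from section 5:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: debian-iso                   # hypothetical name
  namespace: default
spec:
  source:
    http:
      url: http://isoserver.isoserver.svc.cluster.local/debian-12.iso   # hypothetical URL
  pvc:
    accessModes:
      - ReadWriteOnce
    storageClassName: nfs-client     # assumption: class from section 5
    resources:
      requests:
        storage: 5Gi
```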

## 12. Deploy Apache Guacamole

    cd ~/k3s/k8s-guacamole/
    kubectl apply -k .

> ⚠️ The database SQL schema must be injected manually after the deployment.

### Inject full-schema.sql

    cd ~/k3s/k8s-guacamole/
    kubectl cp full-schema.sql -n guacamole \
      $(kubectl get pod -n guacamole -l app=mysql -o jsonpath="{.items[0].metadata.name}"):/full-schema.sql

    kubectl exec -n guacamole deploy/mysql -- \
      bash -c "mysql -u root -pguacroot guacamole_db < /full-schema.sql"

### Check

    kubectl exec -n guacamole deploy/mysql -it -- \
      mysql -uguacuser -pguacpass -D guacamole_db -e \
      "SELECT name FROM guacamole_entity WHERE type='USER';"

> Default username/password: `guacadmin/guacadmin`