Highly Available IPv4/IPv6 dual-stack Kubernetes cluster on Alpine Linux, with CRI-O, Calico network and Longhorn storage, in a fully automated way using Ansible
https://github.com/pvamos/alpine-k8s
A simple “run only one playbook” solution that deploys a Highly Available IPv4/IPv6 dual-stack Kubernetes cluster on Alpine Linux, with CRI-O as the container runtime, Calico networking and Longhorn storage, in a fully automated way using Ansible. The cluster has 3 controlplane nodes and any number of worker nodes.
Full output of a successful deployment run is available in lastrun.log at https://github.com/pvamos/alpine-k8s/blob/main/lastrun.log

Worker nodes have a /var/lib/longhorn directory to store Longhorn Persistent Volumes.

The global configuration parameters are set at group_vars/all:
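Once the cluster is up, workloads can claim Longhorn-backed storage (replicated across the workers' /var/lib/longhorn directories) through the longhorn StorageClass that Longhorn creates by default. A minimal PersistentVolumeClaim sketch; the claim name and size below are illustrative, not taken from the repository:

```yaml
# Example PVC bound to Longhorn-replicated storage;
# the name and requested size are arbitrary examples.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-longhorn-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # default StorageClass created by Longhorn
  resources:
    requests:
      storage: 2Gi
```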
ansible_port: 22
...
ansible_port: 31212
ansible_user: p
ansible_ssh_private_key_file: ~/.ssh/id_ed25519
ansible_user_publickey: ~/.ssh/id_ed25519.pub
...
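The playbook below targets the inventory groups controlplane, control23 and workers (plus the single host control1). As an illustration only, an Ansible hosts inventory matching those group names could look like the following; the hostnames come from the cluster shown later, but the exact grouping in the repository's hosts file is an assumption:

```ini
[controlplane]
control1.frankfurt.pvamos.net
control2.frankfurt.pvamos.net
control3.frankfurt.pvamos.net

; the two controlplane nodes that join after kubeadm init on control1
[control23]
control2.frankfurt.pvamos.net
control3.frankfurt.pvamos.net

[workers]
worker1.frankfurt.pvamos.net
worker2.frankfurt.pvamos.net
worker3.frankfurt.pvamos.net
```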
I use 2 small VPSes as an HA Active/Standby HAProxy setup in front of the 3 Kubernetes API servers running on the 3 controlplane nodes. The 2 HAProxy hosts share a floating IP with Keepalived.
...
loadbalancer_ipv4: 141.95.37.233
...
Both HAProxy hosts are configured similarly, forwarding requests to the API servers on the 3 controlplane nodes:
defaults
  maxconn 20000
  mode tcp
  option dontlognull
  timeout http-request 10s
  timeout queue 1m
  timeout connect 10s
  timeout client 86400s
  timeout server 86400s
  timeout tunnel 86400s

frontend k8s-api
  bind 141.95.37.233:6443
  mode tcp
  default_backend k8s-api

backend k8s-api
  balance roundrobin
  mode tcp
  option tcp-check
  default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
  server control1 51.195.116.80:6443 check
  server control2 51.195.116.87:6443 check
  server control3 51.195.117.170:6443 check
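The floating IP (loadbalancer_ipv4) moves between the two HAProxy hosts via Keepalived's VRRP. A minimal sketch of what such a keepalived.conf might contain; the interface name, router ID, priority and password are assumptions for illustration, not values from the repository:

```
vrrp_instance k8s_api {
  state MASTER            # BACKUP on the standby HAProxy host
  interface eth0          # assumed interface name
  virtual_router_id 51    # arbitrary example; must match on both hosts
  priority 100            # use a lower value (e.g. 90) on the standby
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass example     # placeholder secret
  }
  virtual_ipaddress {
    141.95.37.233         # the shared floating IP (loadbalancer_ipv4)
  }
}
```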
All nodes have a similar block in group_vars/all:
...
c1_fqdn: control1.frankfurt.pvamos.net
c1_hostname: control1
c1_domain: frankfurt.pvamos.net
c1_ipv4addr: 51.195.116.80
c1_ipv6addr: 2001:41d0:701:1100::5901
c1_ipv6mask: /128
c1_ipv6_defaultgw: 2001:41d0:701:1100::1
c1_city: Frankfurt
...
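These per-node IPv4/IPv6 variables feed into a dual-stack kubeadm setup. As a hedged illustration of how dual-stack is enabled in kubeadm, both address families are listed in the pod and service CIDRs of the ClusterConfiguration; the CIDR values below are examples only (the IPv4 pod range matches the 10.244.0.0/16 addresses visible in the pod listing later, the IPv6 ranges are placeholders), and the real values live in the repository's templates:

```yaml
# Illustrative dual-stack ClusterConfiguration fragment for kubeadm init.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16,fd00:10:244::/56      # IPv4 and IPv6 pod CIDRs
  serviceSubnet: 10.96.0.0/16,fd00:10:96::/112   # IPv4 and IPv6 service CIDRs
controlPlaneEndpoint: "141.95.37.233:6443"       # the HAProxy floating IP
```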
You can run the whole deployment with a single command:

time ansible-playbook -i hosts alpine-k8s.yaml

The alpine-k8s.yaml playbook applies the roles below to the cluster nodes:
---
- name: Alpine linux cri-o install
  hosts: workers:controlplane
  gather_facts: true
  roles:
    - role: alpine_apkrepo
    - role: alpine_kernelparams
    - role: alpine_crio

- name: Alpine linux install kubelet
  hosts: workers:controlplane
  gather_facts: true
  roles:
    - role: alpine_kubelet

- name: Alpine linux kubeadm init
  hosts: control1
  gather_facts: true
  roles:
    - role: alpine_kubeadm_init

- name: Alpine linux kubeadm join controlplane
  hosts: control23
  gather_facts: true
  roles:
    - role: alpine_kubeadm_join_control

- name: Alpine linux kubeadm join workers
  hosts: workers
  gather_facts: true
  roles:
    - role: alpine_kubeadm_join_workers

- name: Alpine linux after kubeadm join
  hosts: workers:controlplane
  gather_facts: true
  roles:
    - role: alpine_kubeadm_afterjoin
    - role: alpine_calico
    - role: alpine_longhorn
Full output of a successful deployment run is available in lastrun.log
at https://github.com/pvamos/alpine-k8s/blob/main/lastrun.log
Use the kubeconfig placed at ~/.kube/config on the Ansible node; it is also present at /etc/kubernetes/admin.conf on the controlplane nodes.
[p@ansible alpine-k8s]$ kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
control1.frankfurt.pvamos.net Ready control-plane 17m v1.25.0 51.195.116.80 <none> Alpine Linux v3.17 6.1.2-0-virt cri-o://1.24.1
control2.frankfurt.pvamos.net Ready control-plane 9m44s v1.25.0 51.195.116.87 <none> Alpine Linux v3.17 6.1.2-0-virt cri-o://1.24.1
control3.frankfurt.pvamos.net Ready control-plane 15m v1.25.0 51.195.117.170 <none> Alpine Linux v3.17 6.1.2-0-virt cri-o://1.24.1
worker1.frankfurt.pvamos.net Ready <none> 9m13s v1.25.0 51.195.116.254 <none> Alpine Linux v3.17 6.1.2-0-virt cri-o://1.24.1
worker2.frankfurt.pvamos.net Ready <none> 9m16s v1.25.0 51.195.118.45 <none> Alpine Linux v3.17 6.1.2-0-virt cri-o://1.24.1
worker3.frankfurt.pvamos.net Ready <none> 9m16s v1.25.0 51.195.119.241 <none> Alpine Linux v3.17 6.1.2-0-virt cri-o://1.24.1
[p@ansible alpine-k8s]$
[p@ansible alpine-k8s]$
[p@ansible alpine-k8s]$ kubectl get node control1.frankfurt.pvamos.net -o go-template --template='{{range .status.addresses}}{{.type}}: {{.address}}{{"\n"}}{{end}}'
InternalIP: 51.195.116.80
InternalIP: 2001:41d0:701:1100::5901
Hostname: control1.frankfurt.pvamos.net
[p@ansible alpine-k8s]$
[p@ansible alpine-k8s]$
[p@ansible alpine-k8s]$ kubectl get node worker1.frankfurt.pvamos.net -o go-template --template='{{range .status.addresses}}{{.type}}: {{.address}}{{"\n"}}{{end}}'
InternalIP: 51.195.116.254
InternalIP: 2001:41d0:701:1100::5fa8
Hostname: worker1.frankfurt.pvamos.net
[p@ansible alpine-k8s]$
[p@ansible alpine-k8s]$
[p@ansible alpine-k8s]$ kubectl get pods -A -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-system calico-node-7h7ck 1/1 Running 1 (6m30s ago) 8m13s 51.195.116.87 control2.frankfurt.pvamos.net <none> <none>
calico-system calico-node-bnw8q 1/1 Running 0 8m13s 51.195.116.254 worker1.frankfurt.pvamos.net <none> <none>
calico-system calico-node-ck4kl 1/1 Running 0 8m13s 51.195.117.170 control3.frankfurt.pvamos.net <none> <none>
calico-system calico-node-hhz54 1/1 Running 0 8m13s 51.195.119.241 worker3.frankfurt.pvamos.net <none> <none>
calico-system calico-node-lhwxx 1/1 Running 0 8m13s 51.195.116.80 control1.frankfurt.pvamos.net <none> <none>
calico-system calico-node-z9zlx 1/1 Running 0 8m13s 51.195.118.45 worker2.frankfurt.pvamos.net <none> <none>
calico-system calico-typha-57444d8bb7-b22c7 1/1 Running 0 8m7s 51.195.119.241 worker3.frankfurt.pvamos.net <none> <none>
calico-system calico-typha-57444d8bb7-bsqbs 1/1 Running 0 8m7s 51.195.118.45 worker2.frankfurt.pvamos.net <none> <none>
calico-system calico-typha-57444d8bb7-n6vzv 1/1 Running 0 8m13s 51.195.116.254 worker1.frankfurt.pvamos.net <none> <none>
kube-system coredns-565d847f94-q5pr6 1/1 Running 0 16m 10.244.23.65 control3.frankfurt.pvamos.net <none> <none>
kube-system coredns-565d847f94-rhmbv 1/1 Running 0 16m 10.244.23.66 control3.frankfurt.pvamos.net <none> <none>
kube-system etcd-control1.frankfurt.pvamos.net 1/1 Running 0 16m 51.195.116.80 control1.frankfurt.pvamos.net <none> <none>
kube-system etcd-control2.frankfurt.pvamos.net 1/1 Running 0 9m26s 51.195.116.87 control2.frankfurt.pvamos.net <none> <none>
kube-system etcd-control3.frankfurt.pvamos.net 1/1 Running 0 15m 51.195.117.170 control3.frankfurt.pvamos.net <none> <none>
kube-system kube-apiserver-control1.frankfurt.pvamos.net 1/1 Running 0 17m 51.195.116.80 control1.frankfurt.pvamos.net <none> <none>
kube-system kube-apiserver-control2.frankfurt.pvamos.net 1/1 Running 0 9m10s 51.195.116.87 control2.frankfurt.pvamos.net <none> <none>
kube-system kube-apiserver-control3.frankfurt.pvamos.net 1/1 Running 1 (14m ago) 14m 51.195.117.170 control3.frankfurt.pvamos.net <none> <none>
kube-system kube-controller-manager-control1.frankfurt.pvamos.net 1/1 Running 1 (15m ago) 17m 51.195.116.80 control1.frankfurt.pvamos.net <none> <none>
kube-system kube-controller-manager-control2.frankfurt.pvamos.net 1/1 Running 0 9m27s 51.195.116.87 control2.frankfurt.pvamos.net <none> <none>
kube-system kube-controller-manager-control3.frankfurt.pvamos.net 1/1 Running 0 13m 51.195.117.170 control3.frankfurt.pvamos.net <none> <none>
kube-system kube-proxy-crk48 1/1 Running 0 9m9s 51.195.119.241 worker3.frankfurt.pvamos.net <none> <none>
kube-system kube-proxy-gc4jk 1/1 Running 0 15m 51.195.117.170 control3.frankfurt.pvamos.net <none> <none>
kube-system kube-proxy-kqjj6 1/1 Running 0 16m 51.195.116.80 control1.frankfurt.pvamos.net <none> <none>
kube-system kube-proxy-qcdtj 1/1 Running 0 9m7s 51.195.116.254 worker1.frankfurt.pvamos.net <none> <none>
kube-system kube-proxy-x2vvr 1/1 Running 0 9m9s 51.195.118.45 worker2.frankfurt.pvamos.net <none> <none>
kube-system kube-proxy-zms2h 1/1 Running 0 9m37s 51.195.116.87 control2.frankfurt.pvamos.net <none> <none>
kube-system kube-scheduler-control1.frankfurt.pvamos.net 1/1 Running 1 (15m ago) 16m 51.195.116.80 control1.frankfurt.pvamos.net <none> <none>
kube-system kube-scheduler-control2.frankfurt.pvamos.net 1/1 Running 0 9m18s 51.195.116.87 control2.frankfurt.pvamos.net <none> <none>
kube-system kube-scheduler-control3.frankfurt.pvamos.net 1/1 Running 0 14m 51.195.117.170 control3.frankfurt.pvamos.net <none> <none>
longhorn-system csi-attacher-66f9979b99-7w7sm 1/1 Running 0 5m32s 10.244.87.137 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system csi-attacher-66f9979b99-gsxlw 1/1 Running 0 5m32s 10.244.240.136 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system csi-attacher-66f9979b99-wtjk4 1/1 Running 0 5m32s 10.244.168.74 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system csi-provisioner-64dcbd65b4-2vpv2 1/1 Running 0 5m32s 10.244.87.136 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system csi-provisioner-64dcbd65b4-45c45 1/1 Running 0 5m32s 10.244.168.75 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system csi-provisioner-64dcbd65b4-frfqm 1/1 Running 0 5m32s 10.244.240.138 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system csi-resizer-ccdb95b5c-f6slv 1/1 Running 0 5m31s 10.244.87.140 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system csi-resizer-ccdb95b5c-sw7rv 1/1 Running 0 5m31s 10.244.240.137 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system csi-resizer-ccdb95b5c-ttxnm 1/1 Running 0 5m31s 10.244.168.73 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system csi-snapshotter-546b79649b-7q679 1/1 Running 0 5m31s 10.244.87.138 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system csi-snapshotter-546b79649b-bjgzf 1/1 Running 0 5m31s 10.244.168.76 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system csi-snapshotter-546b79649b-dr7lg 1/1 Running 0 5m31s 10.244.240.139 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system engine-image-ei-fc06c6fb-7xxhk 1/1 Running 0 5m41s 10.244.240.132 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system engine-image-ei-fc06c6fb-9p2pm 1/1 Running 0 5m41s 10.244.87.135 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system engine-image-ei-fc06c6fb-rwpdb 1/1 Running 0 5m41s 10.244.168.72 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system instance-manager-e-4080ef99f3335d2815269fb81fcbf459 1/1 Running 0 5m41s 10.244.87.134 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system instance-manager-e-96ffff261e0b1235ad56bd5a5b6474d6 1/1 Running 0 5m41s 10.244.168.70 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system instance-manager-e-c2dac33e70c79d60f88e502d3d28a98e 1/1 Running 0 5m38s 10.244.240.134 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system instance-manager-r-4080ef99f3335d2815269fb81fcbf459 1/1 Running 0 5m41s 10.244.87.133 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system instance-manager-r-96ffff261e0b1235ad56bd5a5b6474d6 1/1 Running 0 5m41s 10.244.168.71 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system instance-manager-r-c2dac33e70c79d60f88e502d3d28a98e 1/1 Running 0 5m38s 10.244.240.135 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-admission-webhook-6489cc5747-x6pdh 1/1 Running 0 6m30s 10.244.168.69 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-admission-webhook-6489cc5747-z6wn6 1/1 Running 0 6m30s 10.244.87.132 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-conversion-webhook-58b5f48bbd-88tpc 1/1 Running 0 6m31s 10.244.240.131 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-conversion-webhook-58b5f48bbd-g7b96 1/1 Running 0 6m31s 10.244.168.68 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-csi-plugin-5x7f8 3/3 Running 0 5m31s 10.244.87.139 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-csi-plugin-xzdpj 3/3 Running 0 5m31s 10.244.168.77 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-csi-plugin-zxs4x 3/3 Running 0 5m31s 10.244.240.140 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-driver-deployer-7fdddb9f99-62vd9 1/1 Running 0 6m32s 10.244.240.129 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-manager-ftc9x 1/1 Running 0 6m33s 10.244.168.65 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-manager-k4jw4 1/1 Running 0 6m32s 10.244.240.130 worker2.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-manager-qts4n 1/1 Running 0 6m32s 10.244.87.131 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-recovery-backend-d67444cf5-gbcr7 1/1 Running 0 6m32s 10.244.87.129 worker3.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-recovery-backend-d67444cf5-jcslr 1/1 Running 0 6m32s 10.244.168.66 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-ui-6768fbbc6c-gnf2d 1/1 Running 0 6m31s 10.244.168.67 worker1.frankfurt.pvamos.net <none> <none>
longhorn-system longhorn-ui-6768fbbc6c-st9w6 1/1 Running 0 6m31s 10.244.87.130 worker3.frankfurt.pvamos.net <none> <none>
tigera-operator tigera-operator-55dd57cddb-h5lgk 1/1 Running 0 8m27s 51.195.118.45 worker2.frankfurt.pvamos.net <none> <none>
[p@ansible alpine-k8s]$
Created by Péter Vámos, pvamos@gmail.com, https://www.linkedin.com/in/pvamos
MIT License
Copyright (c) 2022 Péter Vámos
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.