A simple solution to install a Kubernetes test / development cluster in a fully automated way, in minutes.
https://github.com/pvamos/fcos-k8s
The cluster has one controlplane node and any number of worker nodes.
Name resolution for all nodes is provided via `/etc/hosts` (see the file in the example). The repository contains the `.fcc` ignition configs and the generated ignition files (in the `ignition` directory), the Ansible playbook and roles (in the `ansible` directory), and `lastrun.log`, the log of an example run.

`.fcc` files

Modify the example configuration in the `ignition` directory according to each available node.
Set the node name in `/etc/hostname` and set up name resolution via the `/etc/hosts` file on the nodes, like in the example:

```yaml
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      groups:
        - docker
        - wheel
        - sudo
      ssh_authorized_keys:
        - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINHKcyMU729C1TDw+o+Ubh2Fz9VO11Ir6cOnVd30mKlG p@fedora
storage:
  files:
    - path: /etc/hostname
      overwrite: true
      contents:
        inline: node1.k8s.local
    - path: /etc/hosts
      overwrite: true
      contents:
        inline: |
          127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
          ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
          192.168.1.201 node1.k8s.local node1
          192.168.1.202 node2.k8s.local node2
          192.168.1.203 node3.k8s.local node3
          192.168.1.204 node4.k8s.local node4
          192.168.1.210 ansible.peter.local ansible
          192.168.1.22 amd.peter.local amd
          192.168.1.222 truenas.peter.local truenas
    - path: /etc/sysctl.d/10-disable-ipv6.conf
      contents:
        inline: |
          # disable IPv6
          net.ipv6.conf.all.disable_ipv6 = 1
          net.ipv6.conf.default.disable_ipv6 = 1
          net.ipv6.conf.enp0s4.disable_ipv6 = 1
    - path: /etc/sysctl.d/99-kubernetes-cri.conf
      contents:
        inline: |
          # Kernel parameters required by the cri-o container engine.
          net.bridge.bridge-nf-call-iptables = 1
          net.ipv4.ip_forward = 1
          #net.bridge.bridge-nf-call-ip6tables = 1
    - path: /etc/modules-load.d/crio-net.conf
      contents:
        inline: |
          # Kernel modules required by the cri-o container engine.
          overlay
          br_netfilter
systemd:
  units:
    - name: python3-for-ansible.service
      enabled: true
      contents: |
        [Unit]
        Requires=network-online.target
        After=network-online.target
        Before=sshd.service
        [Service]
        Type=oneshot
        ExecCondition=/usr/bin/test ! -f /etc/python3-for-ansible.done
        ExecStart=/usr/bin/sed -i '/\\[updates\\]/,/^\\[/ s/^enabled=.*$/enabled=0/' /etc/yum.repos.d/fedora-updates.repo
        ExecStart=/usr/bin/rpm-ostree install python3 libselinux-python3
        ExecStart=/usr/bin/sed -i '/\\[updates\\]/,/^\\[/ s/^enabled=.*$/enabled=1/' /etc/yum.repos.d/fedora-updates.repo
        ExecStart=/usr/bin/sed -i '/^\\[updates\\]/a exclude=libxcrypt-compat* mpdecimal* python-pip-wheel* python-setuptools-wheel* python-unversioned-command* python3* python3-libs* python3-selinux*' /etc/yum.repos.d/fedora-updates.repo
        ExecStartPost=/usr/bin/touch /etc/python3-for-ansible.done
        ExecStartPost=/usr/sbin/shutdown -r now
        [Install]
        WantedBy=multi-user.target
```
https://github.com/pvamos/fcos-k8s/blob/main/ignition/node1.fcc
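In the example, the per-node `.fcc` configs appear to differ only in the `/etc/hostname` contents, so the other nodes' configs can be derived with a simple substitution; a hypothetical one-liner for node2:

```bash
# Hypothetical helper: derive node2's Butane config from node1's by swapping
# the /etc/hostname contents (assumes that line is the only per-node difference).
sed 's/inline: node1\.k8s\.local/inline: node2.k8s.local/' ignition/node1.fcc > ignition/node2.fcc
```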
`.ign` files from `.fcc`

After modifying what's necessary in the `*.fcc` configs, the `*.ign` files can be generated with `butane`:
butane --pretty --strict ignition/node1.fcc > node1.ign
https://docs.fedoraproject.org/en-US/fedora-coreos/producing-ign/
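If all node configs are kept in the `ignition` directory, they can be regenerated in one go; a minimal sketch, assuming the `.fcc`/`.ign` naming used here and writing each `.ign` file next to its source:

```bash
# Convert every Butane config in ignition/ into its Ignition counterpart.
for fcc in ignition/*.fcc; do
  butane --pretty --strict "$fcc" > "${fcc%.fcc}.ign"
done
```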
To install Fedora CoreOS on the nodes, boot the live installer ISO (`fedora-coreos-37.20221106.3.0-live.x86_64.iso` is used in the example) and manually apply the ignition config: either from a local `.ign` file with `coreos-installer install /dev/XXX --ignition-file=/path/file.ign`, or over the network with `coreos-installer install /dev/XXX --ignition-url=https://host.domain.tld/path/file.ign`.

https://docs.fedoraproject.org/en-US/fedora-coreos/bare-metal/
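For the `--ignition-url` variant, any static file server reachable from the nodes will do; a sketch using Python's built-in HTTP server on the ansible node, with a hypothetical target disk and URL:

```bash
# On the machine holding the generated .ign files (here: the ansible node),
# serve them over plain HTTP on port 8000.
cd ignition && python3 -m http.server 8000

# On a node booted from the live ISO (hypothetical /dev/sda target);
# --insecure-ignition permits fetching the config over plain HTTP.
sudo coreos-installer install /dev/sda \
  --ignition-url=http://192.168.1.210:8000/node1.ign --insecure-ignition
```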
After completion, (re)boot into the installed system.
Note that the nodes will reboot one additional time: a run-once systemd service, deployed via ignition, installs Python 3 on the nodes before Ansible runs. The service temporarily disables the Fedora updates repository while layering the `python3` packages, then excludes the Python packages from that repository and reboots:
```ini
[Unit]
Requires=network-online.target
After=network-online.target
Before=sshd.service

[Service]
Type=oneshot
ExecCondition=/usr/bin/test ! -f /etc/python3-for-ansible.done
ExecStart=/usr/bin/sed -i '/\\[updates\\]/,/^\\[/ s/^enabled=.*$/enabled=0/' /etc/yum.repos.d/fedora-updates.repo
ExecStart=/usr/bin/rpm-ostree install python3 libselinux-python3
ExecStart=/usr/bin/sed -i '/\\[updates\\]/,/^\\[/ s/^enabled=.*$/enabled=1/' /etc/yum.repos.d/fedora-updates.repo
ExecStart=/usr/bin/sed -i '/^\\[updates\\]/a exclude=libxcrypt-compat* mpdecimal* python-pip-wheel* python-setuptools-wheel* python-unversioned-command* python3* python3-libs* python3-selinux*' /etc/yum.repos.d/fedora-updates.repo
ExecStartPost=/usr/bin/touch /etc/python3-for-ansible.done
ExecStartPost=/usr/sbin/shutdown -r now

[Install]
WantedBy=multi-user.target
```
We remove this temporary service later with Ansible (the `remove-python3-for-ansible-service` role).
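Once the nodes come back up with Python 3 in place, a quick connectivity check before running the playbook can save time; this uses Ansible's standard `ping` module against the inventory shown below:

```bash
# From the ansible directory: verify SSH access and a usable Python
# interpreter on every node in the inventory.
ansible -i hosts all -m ping
```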
The `ansible/hosts` inventory file:

```ini
[controlplane]
node1.k8s.local

[workers]
node2.k8s.local
node3.k8s.local
node4.k8s.local
```
The `ansible/group_vars/all` file:

```yaml
---
# The variables file used by the playbooks on all hosts.
# These don't have to be explicitly imported by vars_files: they are autopopulated.
gather_facts: true
k8s_version: 1.25.4
crictl_version: v1.25.0
crio_version: 1.25.1
crio_repo_version: 1.25
```
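The version pins above can presumably also be overridden for a single run via Ansible's standard extra-vars mechanism instead of editing the file; whether arbitrary versions work depends on the roles and the available CRI-O repositories:

```bash
# Hypothetical override of the pinned Kubernetes version for one run.
ansible-playbook -i hosts fcos-k8s.yaml -e k8s_version=1.25.5
```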
The `fcos-k8s.yaml` playbook

This Ansible playbook deploys and configures the Kubernetes cluster on the Fedora CoreOS nodes. Run it from the `ansible` directory:

time ansible-playbook -i hosts fcos-k8s.yaml

The `fcos-k8s.yaml` playbook applies the roles below to the cluster nodes:
```yaml
---
- name: Fedora CoreOS Kubernetes setup with kubeadm
  hosts: controlplane:workers
  roles:
    - remove-python3-for-ansible-service
    - fcos-k8s-base
    # kubeadm init on controlplane
    - role: kubeadm-init
      when: inventory_hostname in groups.controlplane
    # flannel deploy on controlplane
    - role: flannel-deploy
      when: inventory_hostname in groups.controlplane
    # kubeadm join on workers (test on controlplane)
    - role: kubeadm-join
    - role: coredns-pods
      when: inventory_hostname in groups.controlplane
```
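Before a full run, the playbook can be checked without touching the nodes, using standard `ansible-playbook` options:

```bash
# Validate playbook and role syntax.
ansible-playbook -i hosts fcos-k8s.yaml --syntax-check

# Show which inventory hosts each play would target.
ansible-playbook -i hosts fcos-k8s.yaml --list-hosts
```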
Use the kube config placed at `~/.kube/config` on the ansible node; it is also present at `/etc/kubernetes/admin.conf` on the controlplane node.
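For example, from the ansible node (the `k` in the output below is assumed to be a shell alias for `kubectl`):

```bash
# Point kubectl at the cluster admin config and define the short alias.
export KUBECONFIG=~/.kube/config
alias k=kubectl

k get nodes -o wide
k get pods -A -o wide
```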
```
[p@ansible ansible]$ k get no -owide
NAME              STATUS   ROLES           AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                        KERNEL-VERSION          CONTAINER-RUNTIME
node1.k8s.local   Ready    control-plane   8m53s   v1.25.4   192.168.1.201   <none>        Fedora CoreOS 37.20221106.3.0   6.0.7-301.fc37.x86_64   cri-o://1.25.1
node2.k8s.local   Ready    <none>          7m5s    v1.25.4   192.168.1.202   <none>        Fedora CoreOS 37.20221106.3.0   6.0.7-301.fc37.x86_64   cri-o://1.25.1
node3.k8s.local   Ready    <none>          7m5s    v1.25.4   192.168.1.203   <none>        Fedora CoreOS 37.20221106.3.0   6.0.7-301.fc37.x86_64   cri-o://1.25.1
node4.k8s.local   Ready    <none>          7m5s    v1.25.4   192.168.1.204   <none>        Fedora CoreOS 37.20221106.3.0   6.0.7-301.fc37.x86_64   cri-o://1.25.1
```

```
[p@ansible ansible]$ k get po -A -owide
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE     IP              NODE              NOMINATED NODE   READINESS GATES
kube-flannel   kube-flannel-ds-jq27g                      1/1     Running   0          7m16s   192.168.1.202   node2.k8s.local   <none>           <none>
kube-flannel   kube-flannel-ds-krb2g                      1/1     Running   0          7m16s   192.168.1.204   node4.k8s.local   <none>           <none>
kube-flannel   kube-flannel-ds-msjrc                      1/1     Running   1          7m49s   192.168.1.201   node1.k8s.local   <none>           <none>
kube-flannel   kube-flannel-ds-xdgvq                      1/1     Running   0          7m16s   192.168.1.203   node3.k8s.local   <none>           <none>
kube-system    coredns-565d847f94-d7j89                   1/1     Running   0          6m14s   10.244.2.2      node4.k8s.local   <none>           <none>
kube-system    coredns-565d847f94-tcv92                   1/1     Running   0          6m14s   10.244.1.2      node3.k8s.local   <none>           <none>
kube-system    etcd-node1.k8s.local                       1/1     Running   2          9m1s    192.168.1.201   node1.k8s.local   <none>           <none>
kube-system    kube-apiserver-node1.k8s.local             1/1     Running   2          9m3s    192.168.1.201   node1.k8s.local   <none>           <none>
kube-system    kube-controller-manager-node1.k8s.local    1/1     Running   2          9m1s    192.168.1.201   node1.k8s.local   <none>           <none>
kube-system    kube-proxy-2r8g5                           1/1     Running   0          7m16s   192.168.1.202   node2.k8s.local   <none>           <none>
kube-system    kube-proxy-67mgb                           1/1     Running   1          7m49s   192.168.1.201   node1.k8s.local   <none>           <none>
kube-system    kube-proxy-7dqjz                           1/1     Running   0          7m16s   192.168.1.203   node3.k8s.local   <none>           <none>
kube-system    kube-proxy-tdx4x                           1/1     Running   0          7m16s   192.168.1.204   node4.k8s.local   <none>           <none>
kube-system    kube-scheduler-node1.k8s.local             1/1     Running   2          9m3s    192.168.1.201   node1.k8s.local   <none>           <none>
```
Created by Péter Vámos (pvamos@gmail.com), https://www.linkedin.com/in/pvamos/
MIT License
Copyright (c) 2022 Péter Vámos
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.