Installing Kubernetes is like nailing jelly to the ceiling. Here’s a script that actually does it, with comments so you know what’s going on.
Realistically, there’s no point in running Kubernetes on a single host – it’s for clustering. But just to prove it’s possible, this will do it. You can’t normally run a pod on a control node, but with only one node you can remove the taint and do it anyway.
I had so many goes at this that I wrote this script so I could automate it until I got it right. You can run the script, or do it a command at a time (probably better), as it’s only known to work on one particular configuration.
Because it’s very picky about which versions of various things you have, for the important stuff like containerd and the Kubernetes utilities themselves I’ve ended up downloading them from GitHub directly, and installing the various config files for systemd manually. At the start of the script I’m defining these version numbers so they are easy to tweak. One day I might rewrite it using dnf, based on information gleaned by using the direct approach.
To understand what’s going on, read the comments.
And as a bonus there’s a “hello” pod installed, running Nginx. You can pull its “welcome” page using curl.
#!/bin/bash
# This is a trick that will cause the script to exit immediately if
# any command returns a failure. Do not use this if you are running
# the commands by hand!
set -euo pipefail
# Set up the versions we're using. Trial and error has proved
# that these versions of various things play nice together
# on Oracle Linux or RHEL 9.7
K8S_VERSION="v1.29.15"
CONTAINERD_VERSION="1.7.5"
# Use the host's first IP as the API server address. If the box
# has several interfaces, check this picks the right one.
APISERVER_IP=$(hostname -I | awk '{print $1}')
# Choose a published crictl release (e.g. v1.32.0), as 1.29 isn't one.
CRICTL_VERSION="v1.32.0"
CNI_VER="v1.1.1" # stable version
# Prior to K8S 1.22 (when alpha swap support arrived), having swap
# enabled was a disaster. Now it's supposed to work, but having a swap
# device on a VM is a bit crazy, so we'll disable it anyway.
echo "=== Disable swap ==="
swapoff -a
sed -i '/swap/d' /etc/fstab
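# Optional sanity check: swapon should print nothing now swap is off.
swapon --show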
# These packages are going to be needed and may not be installed
# already.
echo "=== Install dependencies ==="
dnf install -y curl tar wget socat conntrack iptables iproute-tc git
# Assuming firewalld is running it's going to stop us communicating.
# It's probably best to disable it completely with
#
# systemctl disable --now firewalld
#
# Getting K8S running can be tricky enough without a firewall getting
# in the way. However, we can open the ports we know about and hope
# for the best.
echo "=== Open firewall ports ==="
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload
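# Optional: list what we just opened, to be sure the reload took.
# (If you later add worker nodes using Flannel's VXLAN backend you'll
# want 8472/udp too, but a single node doesn't need it.)
firewall-cmd --list-ports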
# So now we're set up for installing Kubernetes. We'll start with
# our container manager of choice, containerd. Note that it no longer
# requires Docker (or Podman). It handles the containers directly.
#
# To avoid problems with repos and "latest versions" I'm just
# downloading the versions I want from the github repos.
echo "=== Install containerd from GitHub ==="
curl -LO https://github.com/containerd/containerd/releases/download/v${CONTAINERD_VERSION}/\
containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
tar Cxzvf /usr/local containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
rm -f containerd-${CONTAINERD_VERSION}-linux-amd64.tar.gz
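# Quick check that the binary unpacked where we expect it.
/usr/local/bin/containerd --version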
# This is the systemd service file, which we need to install manually.
cat <<EOF > /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
After=network.target
[Service]
ExecStart=/usr/local/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
# We need to create a config file for containerd manually too.
# This is done by getting it to dump its default config.
mkdir -p /etc/containerd
/usr/local/bin/containerd config default > /etc/containerd/config.toml
# We need to enable the systemd cgroup driver in containerd
# as it's disabled by default. Edit /etc/containerd/config.toml
# and find the line SystemdCgroup = false, and change it to true.
# (Scripted here using sed)
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
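# Optional: prove the sed actually hit the line. With set -e the
# script stops here if it didn't.
grep 'SystemdCgroup = true' /etc/containerd/config.toml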
# The next three commands tell systemd something has changed
# so it will reload our new files, and kick containerd off now.
# The third systemctl is probably unnecessary, as the --now on
# the enable should be enough.
systemctl daemon-reload
systemctl enable --now containerd
systemctl restart containerd
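# Make sure containerd is actually up before we start leaning on it.
systemctl is-active containerd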
# So much for containerd; now we need to do the same for Kubernetes.
# Installing it manually is very easy - just download the files into
# /usr/local/bin/ and make them executable.
echo "=== Install kubeadm/kubelet/kubectl from dl.k8s.io ==="
mkdir -p /usr/local/bin
cd /usr/local/bin
# Download kubeadm kubelet and kubectl
curl -LO https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubeadm
curl -LO https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubelet
curl -LO https://dl.k8s.io/release/${K8S_VERSION}/bin/linux/amd64/kubectl
chmod +x kubeadm kubelet kubectl
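# Optional sanity check that we downloaded real binaries and not,
# say, an HTML error page.
./kubeadm version
./kubectl version --client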
# Now we have to set up the kubelet systemd service.
cat <<EOF > /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=network.target containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \
--kubeconfig=/etc/kubernetes/kubelet.conf \
--config=/var/lib/kubelet/config.yaml \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--cgroup-driver=systemd \
--v=2
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
# As before, we'll tell systemd something has changed and kick
# off the kubelet service.
systemctl daemon-reload
systemctl enable --now kubelet
systemctl restart kubelet
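# Don't panic if kubelet is restart-looping at this point: it will
# keep exiting until kubeadm init (below) writes the
# /etc/kubernetes/kubelet.conf and config.yaml it references.
# That's normal for the kubeadm flow.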
# crictl is a lightweight command line utility for managing containers
# and suchlike, used by Kubernetes in preference to Docker or Podman.
# Again, we're going to download a specific version direct from github.
# Download and extract crictl as a tarball, unpack it in to /usr/local/bin
# and clean up afterwards.
curl -LO https://github.com/kubernetes-sigs/cri-tools/releases/download/\
${CRICTL_VERSION}/crictl-${CRICTL_VERSION}-linux-amd64.tar.gz
tar zxvf crictl-${CRICTL_VERSION}-linux-amd64.tar.gz -C /usr/local/bin
chmod +x /usr/local/bin/crictl
rm -f crictl-${CRICTL_VERSION}-linux-amd64.tar.gz
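# crictl reads its endpoint from /etc/crictl.yaml; without it you get
# warnings (or the wrong default socket), so point it at containerd
# explicitly.
cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF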
# Next we need to attend to the networking.
# We need the br_netfilter kernel module so that traffic crossing
# Linux bridges is visible to iptables, and then we need to
# enable IP forwarding.
# This sets it up now, live.
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
# This creates a sysctl file so it will be set on boot.
# I've added some support for IPv6. We can make it live
# immediately by reloading with sysctl --system.
cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
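# Optional: read the values back to prove they stuck.
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward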
# Some installs of RHEL don't have /usr/local/bin in the path all
# of the time, especially if you're switching user. This bit of
# script checks and adds it if necessary.
echo $PATH | grep /usr/local/bin || PATH=$PATH:/usr/local/bin
# Static pods like kube-apiserver don't need CNI, but kubelet
# requires the pause image to start sandbox pods.
echo "=== Get the pause image ==="
ctr images pull k8s.gcr.io/pause:3.10
# Check it's worked using grep. Note this script stops it it doesn't find it.
ctr images ls | grep pause
# kubeadm will pull the required images during init
# but we're going to pull them ahead of time. They can take
# a while and we want to make sure they're available.
echo "=== Pull the k8s images ==="
kubeadm config images pull
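# If you're curious, this prints the image list kubeadm wants for
# this version.
kubeadm config images list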
# Finally, we initialise the Kubernetes cluster. Note we
# finessed our main IP address at the start. The pod network CIDR
# is your choice, but 10.244.0.0/16 is what the Flannel manifest
# expects by default. This step can take a while.
echo "=== Initialize Kubernetes cluster ==="
kubeadm init \
--pod-network-cidr=10.244.0.0/16 \
--cri-socket /run/containerd/containerd.sock \
--apiserver-advertise-address=${APISERVER_IP}
# Assuming it initialised, you're probably good!
#
# If we're running K8S as the root user, which isn't always
# a good idea but for testing it's fine, we need to create a
# .kube directory for root's configuration files.
#
# We could instead point KUBECONFIG at the one in /etc with:
#
# export KUBECONFIG=/etc/kubernetes/admin.conf
#
echo "=== Configure kubectl for root user ==="
mkdir -p /root/.kube
cp -f /etc/kubernetes/admin.conf /root/.kube/config
chmod 600 /root/.kube/config
export KUBECONFIG=/root/.kube/config
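# The API server should answer now, though the node will report
# NotReady until the Flannel CNI below is installed.
kubectl get nodes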
# Flannel is the last important thing we need configured. It's a CNI
# plugin for Kubernetes that provides the layer 3 (IP) networking.
# Here we're telling kubectl to download it direct from github as it's
# a bit long to embed in this script.
echo "=== Install Flannel CNI ==="
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
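# Have a look at the flannel pod coming up. (The kube-flannel
# namespace matches the current upstream manifest; if it's moved,
# kubectl get pods -A will find it.)
kubectl get pods -n kube-flannel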
# You may start off with the CNI plugins in /opt/cni/bin but they're
# probably gone by now. To be sure we'll download them from github.
# However, before you do this you can check /opt/cni/bin; it ought to
# contain things like loopback as well as flannel.
mkdir -p /opt/cni/bin
curl -L https://github.com/containernetworking/plugins/releases/\
download/$CNI_VER/cni-plugins-linux-amd64-$CNI_VER.tgz | tar -xz -C /opt/cni/bin
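# Have a look: you should now see bridge, loopback, host-local and
# friends in there, alongside the flannel binary its DaemonSet drops in.
ls /opt/cni/bin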
# Right now we're running Kubernetes on a single node, which defeats
# the whole point, but we're only testing at present. The snag is that
# Kubernetes expects a control node and worker nodes to run pods on, and
# you can't normally have a pod on the controller - it's a bad idea. But
# we can force it to allow us anyway by removing the taint from it. A
# taint is a node property that tells the Kubernetes scheduler to "keep
# away", and is normally used to reserve a node for specific workloads.
# This includes the control node, so if we remove the taint it will drop
# pods on itself anyway.
echo "=== Remove control-plane taint for single-node ==="
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
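# The Taints line should now read <none> (or at least no longer
# mention node-role.kubernetes.io/control-plane).
kubectl describe nodes | grep -i taint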
# And to prove it's all working, we'll drop the nginx web server
# on a hello pod and expose http port 80 so we can talk to it.
echo "=== Deploy Hello World NodePort ==="
kubectl create deployment hello --image=nginx --replicas=1
kubectl expose deployment hello --type=NodePort --port=80
# This will verify that everything is running, although it may take a moment or
# three to start.
kubectl get svc
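# Optionally, rather than polling by hand, block until the
# deployment is actually ready (give it up to two minutes).
kubectl rollout status deployment/hello --timeout=120s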
# This bit of shell scriptery extracts the node port visible
# when you run kubectl get svc and prints out the line necessary
# to allow you to access the http server using curl.
NODEPORT=$(kubectl get svc | grep hello | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1)
if test -z "$NODEPORT"
then
echo "It doesn't look like the hello pod is running."
else
echo "To get the nginx hello page use curl $APISERVER_IP:$NODEPORT"
echo "once it's had time to start"
fi
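Once the pod has had time to start, curl the address the script printed and you should get nginx’s stock welcome page back. Grepping for the title line is proof enough (your IP and port will differ from this example):

curl -s http://192.168.1.50:30080 | grep title
<title>Welcome to nginx!</title>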
As a bonus, here’s a script to “clean up” after a bad attempt at kubeadm init. kubeadm reset doesn’t do enough!
#!/bin/sh
# Nuclear reset
# This code cleans up after a bad attempt at configuration (kubeadm init)
systemctl stop kubelet
systemctl stop containerd
systemctl disable kubelet
systemctl disable containerd
kubeadm reset -f
rm -rf /etc/kubernetes
rm -rf /var/lib/kubelet/*
rm -rf /var/lib/etcd
systemctl stop containerd
rm -rf /var/lib/containerd/*
echo "Checking ports"
ss -lntp | grep -E "6443|10250|10251|10252|10257|10258|10259"
# Line to automate kill, but leave it manual "| cut -d = -f 2 | cut -d , -f 1"
echo "Anything come up? If so, please kill -9 the PID."
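# Flannel and the CNI plugins leave state behind too. These are
# safe to remove on a box you're about to re-init; the links may
# not exist, hence the error suppression.
rm -rf /etc/cni/net.d
ip link delete cni0 2>/dev/null
ip link delete flannel.1 2>/dev/null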