CKA Preparation: Setting Up a Kubernetes Cluster with 2 Worker Nodes

As I prepare for my CKA exam, I decided to create a series of articles (more like exercises) covering different Kubernetes topics. A good way to kick things off is by actually installing Kubernetes and setting up our environment using kubeadm.

Let’s start off with some of the requirements:

  • 3 CentOS 7 VMs with hostnames ‘control-node’, ‘node1’ and ‘node2’
  • use any IP addresses you like for these VMs; just make sure they can talk to each other
  • control-node should have 1GB of RAM and 2 CPU cores (2 cores is a hard requirement enforced by kubeadm’s preflight checks; see the quick check after this list)
  • node1 and node2 should have 1GB of RAM and 1 CPU core
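
If you want to sanity-check that a VM meets these specs before you start, the core count and total memory are quick to verify (run this on each node):

[root@control-node ~]# nproc      # number of CPU cores
[root@control-node ~]# free -h    # total memory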

My approach for installing Kubernetes clusters is to use Sander van Vugt’s installation scripts. These scripts take care of some basic configuration changes that are needed, as well as configuring the firewall, installing Docker and more. Feel free to have a look at them yourself before running them.

Start by updating your system, then install git (needed to clone the repo) and vim (always good to have) on all of your nodes:

Figure 1

[root@control-node ~]# yum update -y
[root@control-node ~]# yum install -y vim git

Now clone the repo:

Figure 2

[root@control-node ~]# git clone https://github.com/sandervanvugt/cka

Cloning into 'cka'...
remote: Enumerating objects: 171, done.
remote: Counting objects: 100% (171/171), done.
remote: Compressing objects: 100% (134/134), done.
remote: Total 171 (delta 65), reused 141 (delta 35), pack-reused 0
Receiving objects: 100% (171/171), 343.04 KiB | 0 bytes/s, done.
Resolving deltas: 100% (65/65), done.

If you have a look at the cloned directory, you’ll see more than a few scripts and files:

[root@control-node cka]# ls -ltr | wc -l
81

Thankfully, we’ll only need 2 of these scripts to set everything up: setup-container.sh takes care of installing Docker, as previously mentioned, whilst setup-kubetools.sh installs the Kubernetes packages and makes some OS changes.

Run both scripts:

Figure 3

[root@control-node cka]# ./setup-container.sh
...
...
[root@control-node cka]# ./setup-kubetools.sh

If everything worked as expected, you should see the Docker systemd service up and running.

Figure 4

[root@control-node cka]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-01-03 16:10:12 CET; 15s ago
     Docs: https://docs.docker.com
 Main PID: 24292 (dockerd)
   CGroup: /system.slice/docker.service
           └─24292 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
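
Before moving on, you can also confirm that the Kubernetes tooling installed by setup-kubetools.sh is in place:

[root@control-node ~]# kubeadm version
[root@control-node ~]# kubectl version --client
[root@control-node ~]# kubelet --version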

You’ll have to repeat the steps in Figures 1 to 4 on all 3 VMs.

After you’re done with that, we can proceed to configure local name resolution. To do so, open up your /etc/hosts file and add the IP addresses and hostnames of your nodes. In my case the hosts file looks as follows:

Figure 5

[root@control-node ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.101  control-node  control
192.168.0.102  node1
192.168.0.103  node2

If you don’t feel like editing the /etc/hosts file of node1 and node2 manually, you can use scp to copy it over from the control-node. Use whichever approach you prefer for this.
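
As a minimal sketch (assuming root SSH logins to the workers are allowed), copying the file and then checking name resolution could look like this:

[root@control-node ~]# scp /etc/hosts root@node1:/etc/hosts
[root@control-node ~]# scp /etc/hosts root@node2:/etc/hosts
[root@control-node ~]# ping -c 1 node1    # confirm the names resolve
[root@control-node ~]# ping -c 1 node2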

Now we will initialize the cluster’s control plane using the kubeadm init command. From the control-node, run:

Figure 6

[root@control-node ~]# kubeadm init
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
....
....
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.101:6443 --token dnddmc.xae0d345f4032py1 \
	--discovery-token-ca-cert-hash sha256:ab20dd063f7c123c1fd2afe1570ac9577bf12dab1503fedc939291ff1af49575

The last few lines of the output of our previous command are really important. You’ll need to create a regular user with sudo permissions and run the 3 commands that are mentioned as that user. You’ll also need the token that was generated in order to run kubeadm join from the other nodes.
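
As a side note, the token is only valid for 24 hours by default, and it’s easy to lose this output. If that happens you don’t need to re-run kubeadm init; you can print a fresh join command from the control-node at any time:

[root@control-node ~]# kubeadm token create --print-join-command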

This is how you can create the regular user, set a password for it, and add it to the wheel group:

Figure 7

[root@control-node ~]# useradd rdbreak

[root@control-node ~]# passwd rdbreak
Changing password for user rdbreak.
New password:
BAD PASSWORD: The password fails the dictionary check - it is based on a dictionary word
Retype new password:
passwd: all authentication tokens updated successfully.

[root@control-node ~]# usermod -aG wheel rdbreak

[root@control-node ~]# id rdbreak
uid=1000(rdbreak) gid=1000(rdbreak) groups=1000(rdbreak),10(wheel)
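
The wheel group does the sudo magic here because CentOS 7 ships with the corresponding rule enabled in /etc/sudoers; you can double-check that it is uncommented:

[root@control-node ~]# grep '^%wheel' /etc/sudoers
%wheel  ALL=(ALL)       ALL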

Now we can switch users and run the commands that were presented in the output of kubeadm init:

Figure 8

[root@control-node ~]# su - rdbreak
[rdbreak@control-node ~]$ mkdir -p $HOME/.kube
[rdbreak@control-node ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for rdbreak:
[rdbreak@control-node ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[rdbreak@control-node ~]$ ls -ltr ~/.kube/config
-rw-------. 1 rdbreak rdbreak 5641 Jan  3 17:05 /home/rdbreak/.kube/config

To check some of the details of our cluster you can use the kubectl cluster-info command. Also, if you want to check the status of your nodes, use kubectl get nodes. The output of these commands should look like this:

Figure 9

[rdbreak@control-node ~]$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.0.101:6443
CoreDNS is running at https://192.168.0.101:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[rdbreak@control-node ~]$ kubectl get nodes
NAME           STATUS     ROLES                  AGE     VERSION
control-node   NotReady   control-plane,master   8m39s   v1.23.1

We can see a status of NotReady and no other nodes attached to the cluster yet (don’t worry, I’ll come back to this NotReady status once the workers have joined). We’ll be working on joining the nodes in the next step. To do so, copy and paste the join command that you saved earlier and run it on node1 and node2 as root:

Figure 10

[root@node1 ~]# kubeadm join 192.168.0.101:6443 --token dnddmc.xae0d345f4032py1 \
> --discovery-token-ca-cert-hash sha256:ab20dd063f7c123c1fd2afe1570ac9577bf12dab1503fedc939291ff1af49575
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

And one more time on node2:

Figure 11

[root@node2 ~]# kubeadm join 192.168.0.101:6443 --token dnddmc.xae0d345f4032py1 \
> --discovery-token-ca-cert-hash sha256:ab20dd063f7c123c1fd2afe1570ac9577bf12dab1503fedc939291ff1af49575
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

After you’ve successfully joined both nodes to the cluster, go back to the control-node and check the output of kubectl get nodes again; you should see some changes. Sometimes you might need to wait a couple of minutes before seeing everything in a Ready state, so keep that in mind.
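
If you’d rather not keep re-running the command while you wait, you can watch the node list update live (press Ctrl+C to stop):

[rdbreak@control-node ~]$ kubectl get nodes -w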

Figure 12

[rdbreak@control-node ~]$ kubectl get nodes
NAME           STATUS     ROLES                  AGE    VERSION
control-node   Ready      control-plane,master   22m    v1.23.1
node1          Ready      <none>                 8m3s   v1.23.1
node2          Ready      <none>                 16s    v1.23.1
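
One caveat worth flagging: if your nodes are stuck in NotReady at this point, the usual cause is that no Pod network add-on has been deployed yet, which is exactly what the kubeadm init output was reminding us about. As an example, at the time of writing Calico could be deployed with a single manifest (check the Calico docs for the current URL):

[rdbreak@control-node ~]$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml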

And that leaves us with a fully working cluster!
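
If you want a quick smoke test, spinning up a throwaway pod (the name test-nginx is just an example) confirms the cluster can actually schedule work:

[rdbreak@control-node ~]$ kubectl run test-nginx --image=nginx
[rdbreak@control-node ~]$ kubectl get pods -o wide    # should land on node1 or node2
[rdbreak@control-node ~]$ kubectl delete pod test-nginx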

In the upcoming articles I’ll show you how to start playing with your new cluster: creating Deployments and ReplicaSets, scaling your applications, and more.

Cheers!
