Hello again! This article is a walkthrough of how to set up your own Kubernetes cluster on Ubuntu 20.04 LTS. The steps are straightforward, and you can follow along directly while setting it up yourself.
So before we get started : I tried this using 2 Ubuntu servers :
- k8s-master : 2 GB memory, 2 vCPUs
- k8s-node-0 : 1 GB memory, 1 vCPU
I believe these are about the cheapest specs you can get away with for a Kubernetes cluster. The purpose here is only to init the cluster from scratch and run a simple deployment. So here it goes :
Install Docker and disable swap on all k8s nodes :
$ sudo apt update
$ sudo apt install -y docker.io
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
$ sudo swapoff -a
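The sed invocation comments out any fstab line containing " swap " (the `#\1` back-reference keeps the original line contents after the `#`). A minimal sketch of what it does, dry-run against a scratch file rather than the real /etc/fstab (file path and contents here are illustrative):

```shell
# Build a scratch fstab with one swap entry (illustrative contents only).
cat > /tmp/fstab.demo <<'EOF'
UUID=1234-abcd /    ext4 defaults 0 1
/swapfile      none swap sw       0 0
EOF

# Comment out any line containing " swap ", preserving its contents after '#'.
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.demo
cat /tmp/fstab.demo
```

The root filesystem line is untouched; only the swap line gains a leading `#`.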
Enable IP forwarding on all k8s nodes :
To enable IP forwarding permanently, edit the file "/etc/sysctl.conf", look for the line "net.ipv4.ip_forward=1", and un-comment it. After making the change in the file, execute the following command :
$ sudo sysctl -p
net.ipv4.ip_forward = 1
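The manual edit can also be scripted with sed. A minimal sketch, run here against a scratch copy rather than the real /etc/sysctl.conf (the file contents are illustrative):

```shell
# Scratch copy of the relevant sysctl.conf lines (illustrative contents only).
cat > /tmp/sysctl.conf.demo <<'EOF'
# Uncomment the next line to enable packet forwarding for IPv4
#net.ipv4.ip_forward=1
EOF

# Strip the leading '#' from the ip_forward line to un-comment it.
sed -i 's/^#\(net\.ipv4\.ip_forward=1\)/\1/' /tmp/sysctl.conf.demo
cat /tmp/sysctl.conf.demo
```

On the real file you would run the same sed with sudo against /etc/sysctl.conf, then `sudo sysctl -p` as above.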
Install k8s packages on all k8s nodes :
Execute the following command on all nodes :
$ sudo apt install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
$ sudo apt update
$ sudo apt install -y kubelet kubeadm kubectl
Init the cluster on k8s master :
On the k8s master, let's now init the cluster :
$ sudo kubeadm init
This command will print cluster-initialization output, including the kubeadm join command that you will need later to attach the worker node.
Error when running kubectl
After initializing the cluster, I encountered an error that prevented me from running kubectl commands :
The connection to the server localhost:8080 was refused - did you specify the right host or port?
If you also face the same issue, the solution is simply to run this command :
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
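The error appears because kubectl cannot find a kubeconfig and falls back to localhost:8080. If you are running as root, kubeadm's own output suggests an alternative to copying the file: point kubectl at the admin kubeconfig via an environment variable.

```shell
# Alternative (as root): tell kubectl where the admin kubeconfig lives,
# instead of copying it into $HOME/.kube/config.
export KUBECONFIG=/etc/kubernetes/admin.conf
```

Note this only lasts for the current shell session, so the copy-and-chown approach above is the more durable fix for a regular user.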
Install network plugin on k8s master :
In this tutorial, I use Calico (https://www.projectcalico.org/)
$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Enable bash completion for kubectl commands
The following command is optional, but recommended. It enables bash completion for kubectl subcommands. Do this on the k8s master :
$ echo 'source <(kubectl completion bash)' >>~/.bashrc
$ source ~/.bashrc
Enable nginx ingress
Enable ingress with nginx on k8s master :
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml
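The manifest above only installs the ingress-nginx controller; traffic is actually routed by Ingress resources you define yourself. A minimal sketch of one (the host name is hypothetical; the backend references the nginx-service on port 80 that we create later in this article):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: nginx.example.com   # hypothetical host name, adjust to your own
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
```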
How to join the k8s node to k8s master :
Once the k8s master is ready, we need to connect the k8s node to it. We can do that by SSHing into the k8s node and executing the join command that we got after cluster creation completed.
$ sudo kubeadm join 188.8.131.52:6443 --token htsn3w.juidt9j3t4zbgu3t --discovery-token-ca-cert-hash sha256:ea2e5654fb6e8bc31be463f60177f3b5d31b1da5019a20fd7a2336435b970a77
Check on the k8s master whether the nodes are ready :
$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   24h   v1.20.1
k8s-node-0   Ready    <none>                 24h   v1.20.1
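This readiness check can also be scripted, for example when waiting for nodes in automation. A small sketch that counts non-Ready nodes in `kubectl get nodes` output; here the sample output above stands in for a live cluster:

```shell
# Sample `kubectl get nodes` output (stands in for a live cluster here).
kubectl_output='NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   24h   v1.20.1
k8s-node-0   Ready    <none>                 24h   v1.20.1'

# Count lines whose STATUS column ($2) is not "Ready"; NR>1 skips the header.
not_ready=$(printf '%s\n' "$kubectl_output" | awk 'NR>1 && $2 != "Ready"' | wc -l)
echo "nodes not ready: $not_ready"
```

Against a live cluster you would replace the variable with `kubectl get nodes` in the pipeline.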
If you see both nodes in Ready status, we're set. Now we can continue with the deployment.
Deploy nginx on k8s cluster
Now we come to the fun stuff. The cluster is ready, so let's deploy something on it. Let's create a deployment for nginx, the easy one.
From the k8s master, save the file below as nginx-deployment.yml (or whatever you want to call it).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-deployment
  template:
    metadata:
      labels:
        run: nginx-deployment
    spec:
      containers:
      - image: nginx
        name: nginx-webserver
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    run: nginx-deployment
  ports:
  - port: 80
Then create deployment from this file :
$ kubectl create -f nginx-deployment.yml
deployment.apps/nginx-deployment created
service/nginx-service created
Check whether the deployment has succeeded :
$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           110s
Now you can see the nginx deployment has started its replica, and it's running fine.
Next, you can check whether the service has been deployed :
$ kubectl get services
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        34h
nginx-service   NodePort    10.111.139.39   <none>        80:30992/TCP   5m27s
We can see the nginx service is already in place, and since the deployment has already succeeded, let's also check whether nginx is really running by hitting the cluster IP. So we can do something like :
$ curl 10.111.139.39
and the output is the familiar nginx welcome page (trimmed here) :
<title>Welcome to nginx!</title>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...
<p><em>Thank you for using nginx.</em></p>
Yes, nginx is now running successfully!
Next, let's try to increase the replica count of the existing deployment from 1 to 4. To do that, we just need to update the yml file we deployed.
$ vim nginx-deployment.yml
Change the replicas value to the desired number; that is the only line altered in the file below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      run: nginx-deployment
  template:
    metadata:
      labels:
        run: nginx-deployment
    spec:
      containers:
      - image: nginx
        name: nginx-webserver
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    run: nginx-deployment
  ports:
  - port: 80
Save the file again, and run the command to update the deployment :
$ kubectl apply -f nginx-deployment.yml
deployment.apps/nginx-deployment configured
service/nginx-service unchanged
Then check whether the number of replicas has increased :
$ kubectl get deployments nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   4/4     4            4           14h
So if the number of ready replicas equals the desired count, we have successfully scaled up the deployment.