Deploying Kubernetes Cluster With Ingress and Load Balancer on DigitalOcean

This article shows a simple way to deploy a Kubernetes cluster and its components on DigitalOcean Managed Kubernetes (DOKS).

Along my learning journey with Kubernetes, I started getting my hands on Kubernetes on DigitalOcean. It’s actually one of my favorite hosting platforms, and it also offers a managed Kubernetes service (DOKS). I have some of my small projects running on k8s on DO, since it’s very easy to deploy and I can run a completely managed k8s cluster for only 15 dollars per month!

Getting started

So now I’m going to deploy a new Kubernetes cluster and run a simple service on it. Not only that, I want my service to be internal, with an ingress controller and a Load Balancer in front of the cluster to serve the traffic.

Let’s take a look at the picture below. If you are familiar with k8s, this diagram should be quite straightforward. Traffic enters at the Load Balancer, passes through the ingress controller to the internal service, and eventually ends up at the pods.



There are a few things you want to prepare before deploying a Kubernetes cluster on DO:

    • a DigitalOcean account
    • the doctl command line tool, installed and authenticated
    • kubectl installed on your machine

If everything is set, let’s continue with the implementation. As I mentioned before, deploying a managed Kubernetes cluster on DigitalOcean is quite straightforward.

Deploy cluster

First step, let’s deploy a new cluster on DOKS. We can do that simply by using doctl command line.

$ doctl kubernetes cluster create my-cluster --node-pool "name=my-cluster-node;size=s-1vcpu-2gb;count=1" --region sgp1

This command creates a new Kubernetes cluster along with a node pool containing a single node with minimal specs. You might want to change the parameters depending on your preferences, including the region. In this test, I’m deploying to the Singapore region (sgp1).

Now we just need to wait until it finishes. The output will look like this:

Notice: Cluster is provisioning, waiting for cluster to be running
Notice: Cluster created, fetching credentials
Notice: Adding cluster credentials to kubeconfig file found in "/home/.kube/config"
Notice: Setting current-context to do-sgp1-my-cluster
ID Name Region Version Auto Upgrade Status Node Pools
d570cdaa-c985-495c-b6e7-d005aa1ef5dd my-cluster sgp1 1.20.2-do.0 false running my-cluster-node

Once the provisioning is finished, check the current-context to make sure we’re on the right cluster.

$ kubectl config current-context

If everything looks good, we have a new k8s cluster provisioned. Let’s continue setting up the other pieces.

Deploy the internal service

Once we have deployed the cluster, if we take a look at the console, it will show the new cluster:

Now let’s create a YAML file to define our internal service.

# service.yml
apiVersion: v1
kind: Service
metadata:
  name: test-backend
spec:
  type: ClusterIP
  selector:
    app: test-app
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - image: tutum/hello-world:latest
        name: test-app
        ports:
        - containerPort: 80
          protocol: TCP

Apply the YAML file :

$ kubectl apply -f service.yml
service/test-backend created
deployment.apps/test-app created

Check the service and whether the pods are already running:

$ kubectl get service
kubernetes ClusterIP <none> 443/TCP 129m
test-backend ClusterIP <none> 80/TCP 28m
$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
test-app-65f85568c4-4t685   1/1     Running   0          29m
test-app-65f85568c4-6q49w   1/1     Running   0          29m

It looks like everything is set and we have deployed our first internal service on Kubernetes on DigitalOcean. Let’s move on!

Deploy the ingress controller and Load Balancer

Next, apply the ingress-nginx controller manifest:

$ kubectl apply -f
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
serviceaccount/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

Note that after applying the ingress controller manifest, a new Load Balancer is also deployed automatically:

And let’s create a new YAML file for the ingress definition:

# ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: test-backend
          servicePort: 80

As usual, apply the YAML file :

$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/my-ingress created

View the ingress :

$ kubectl get ingress
my-ingress <none> 80 5m34s

As you can see, after deploying the ingress we get an external IP address, which comes from the DigitalOcean Load Balancer. As I mentioned before, I created a new record at my DNS provider and pointed it to the Load Balancer’s IP address, so I can access the app via that domain.
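For reference, the DNS record is just an A record pointing the hostname at the Load Balancer IP. A hypothetical zone-file entry (domain and IP below are placeholders, not the real values):

```
app.example.com.   300   IN   A   203.0.113.10
```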

Finally, let’s check our new app in the browser. If it’s working fine, we should see this web page:

If you have any questions, let me know in the comments!

Setup kubernetes cluster in Ubuntu 20.04 from scratch

Hello again! This article is a walkthrough of how to set up your own Kubernetes cluster on Ubuntu 20.04 LTS. The steps are very straightforward, and you can follow along while setting it up yourself.

Before getting started: I tried this using 2 Ubuntu servers:

    • k8s-master : 2GB memory, 2 vCPUs
    • k8s-node-0 : 1GB memory, 1 vCPU

I believe these are the cheapest Kubernetes cluster specs you can get. The purpose here is only to init the cluster from scratch and do a simple deployment. So here it goes:

Install docker and disable swap on all k8s nodes:

$ sudo apt update
$ sudo apt install -y
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
$ sudo swapoff -a
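The sed command comments out the swap entry in /etc/fstab so it survives a reboot. As a hedged sketch, you can dry-run the same substitution against a scratch copy first (the file contents below are illustrative):

```shell
# Work on a scratch copy of fstab so nothing is modified for real
fstab_copy=$(mktemp)
printf '%s\n' \
  'UUID=abcd-1234 / ext4 defaults 0 1' \
  '/swapfile none swap sw 0 0' > "$fstab_copy"

# Comment out any line mentioning swap, exactly as above
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$fstab_copy"

cat "$fstab_copy"
```

Only the swap line gets the leading `#`; the root filesystem entry is left untouched.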

Enable the port forwarding on all k8s nodes:

To enable IP forwarding permanently, edit the file “/etc/sysctl.conf”, look for the line “net.ipv4.ip_forward=1″, and un-comment it. After making the change, execute the following command:

$ sudo sysctl -p
net.ipv4.ip_forward = 1
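The edit itself can also be scripted. A minimal sketch of the un-commenting step, run against a scratch copy of sysctl.conf (contents illustrative):

```shell
# Scratch copy of sysctl.conf with the forwarding line still commented out
sysctl_copy=$(mktemp)
printf '%s\n' \
  '# Uncomment the next line to enable packet forwarding for IPv4' \
  '#net.ipv4.ip_forward=1' > "$sysctl_copy"

# Un-comment the setting, as described above
sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' "$sysctl_copy"

grep '^net' "$sysctl_copy"
```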

Install k8s packages on all k8s nodes :

Execute the following command on all nodes :

$ sudo apt install -y apt-transport-https curl
$ curl -s | sudo apt-key add -
$ sudo apt-add-repository "deb kubernetes-xenial main"
$ sudo apt update
$ sudo apt install -y kubelet kubeadm kubectl

Init the cluster on k8s master :

On k8s master, now let’s init the cluster :

$ kubeadm init

This command will give you output something like this:

Error when running kubectl

After initializing the cluster, I encountered an error that prevented me from running the kubectl command:

The connection to the server localhost:8080 was refused – did you specify the right host or port?

If you face the same issue, the solution is simply to run these commands:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install network plugin on k8s master :

In this tutorial, I use Calico as the network plugin:

$ kubectl apply -f

Enable bash completion for kubectl commands

The following command is optional, but recommended. It enables bash completion for kubectl subcommands. Do this on the k8s master:

$ echo 'source <(kubectl completion bash)' >>~/.bashrc
$ source ~/.bashrc

Enable nginx ingress

Enable ingress with nginx on k8s master :

$ kubectl apply -f

How to join the k8s node to k8s master :

Once the k8s master is ready, we need to connect the k8s node to it. We can simply SSH to the k8s node and execute the join command that we got after the cluster creation completed.

$ kubeadm join --token htsn3w.juidt9j3t4zbgu3t --discovery-token-ca-cert-hash sha256:ea2e5654fb6e8bc31be463f60177f3b5d31b1da5019a20fd7a2336435b970a77

Check on the k8s master whether the nodes are ready :

$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   24h   v1.20.1
k8s-node-0   Ready    <none>                 24h   v1.20.1

If the nodes show Ready, we’re set. Now we can continue with the deployment.

Deploy nginx on k8s cluster

Now we come to the fun stuff. The cluster is ready, so let’s deploy something on it. Let’s create a deployment for nginx, the easy one.

From the k8s master, save the file below as nginx-deployment.yml (or whatever you want to call it).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-deployment
  template:
    metadata:
      labels:
        run: nginx-deployment
    spec:
      containers:
      - image: nginx
        name: nginx-webserver
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    run: nginx-deployment
  ports:
    - port: 80

Then create deployment from this file :

$ kubectl create -f nginx-deployment.yml
deployment.apps/nginx-deployment created
service/nginx-service created

Check whether the deployment has succeeded:

$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   1/1     1            1           110s

Now you can see the nginx deployment has started its replica and is running fine.

Next, check whether the service has been deployed:

$ kubectl get services
kubernetes    ClusterIP     <none>      443/TCP      34h
nginx-service NodePort <none>      80:30992/TCP 5m27s

We can see the nginx service is already in place. Since the deployment has succeeded, let’s also check whether nginx is really running by hitting the cluster IP:

$ curl

And the output is:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
    width: 35em;
    margin: 0 auto;
    font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Yes, nginx is now running successfully!

Increase replicas

Next, let’s try to increase the replicas of the existing deployment, from 1 to 4. To do that, we just need to update the yml file we deployed with.

$ vim nginx-deployment.yml

The line I altered in the file is the replicas count; change the number to the desired value.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      run: nginx-deployment
  template:
    metadata:
      labels:
        run: nginx-deployment
    spec:
      containers:
      - image: nginx
        name: nginx-webserver
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    run: nginx-deployment
  ports:
    - port: 80

Save the file, and run the command to update the deployment:

$ kubectl apply -f nginx-deployment.yml
deployment.apps/nginx-deployment configured
service/nginx-service unchanged

Then check whether the number of replicas has increased:

$ kubectl get deployments nginx-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   4/4     4            4           14h

If the number of replicas equals the desired count, we have successfully scaled up the service.
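The edit can also be done without opening an editor. A minimal sketch of patching the replicas count with sed, run here against a scratch stand-in for the manifest:

```shell
# Scratch copy standing in for nginx-deployment.yml
yml_copy=$(mktemp)
printf '%s\n' \
  'kind: Deployment' \
  '  replicas: 1' > "$yml_copy"

# Bump the replica count in place
sed -i 's/^\(  replicas:\) 1$/\1 4/' "$yml_copy"

cat "$yml_copy"
```

On a live cluster, `kubectl scale deployment nginx-deployment --replicas=4` achieves the same result without touching the file at all.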

Other related k8s articles:

Nginx: set the server_name as wildcard without hostname

Simple trick to run the nginx with no server_name.

server {
  listen 80 default_server;
  server_name _;
  location / {
    root /path/to/app;
    index index.php;
    try_files $uri $uri/ /index.php?q=$uri&$args;
    location ~* \.php {
      try_files $uri =404;
      include fastcgi_params;
      fastcgi_index index.php;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_pass 127.0.0.1:9000; # adjust to your php-fpm address or socket
    }
  }
}

Generate CSR for Nginx server

This is how to generate the .csr file, a requirement for an SSL certificate.

$ openssl req -new -newkey rsa:2048 -nodes -keyout server.key -out server.csr
Country Name (2 letter code) [AU]:ID
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
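The prompts can also be answered up front with -subj, which is handy for scripting. A hedged sketch (the subject fields below are placeholders, not real values):

```shell
# Same request as above, but non-interactive via -subj
workdir=$(mktemp -d)
openssl req -new -newkey rsa:2048 -nodes \
  -keyout "$workdir/server.key" -out "$workdir/server.csr" \
  -subj "/C=ID/ST=Jakarta/O=Example Org/CN=www.example.com"

# Print the subject back out as a sanity check
openssl req -in "$workdir/server.csr" -noout -subject
```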

Install SSL certificate

cat domain_com.crt > domain_chain.crt ; echo "" >> domain_chain.crt ; cat DigiCertCA.crt TrustedRoot.crt >> domain_chain.crt

(Your Private Key: your_domain_name.key)
(Your Primary SSL certificate: your_domain_name.crt)
(Your Intermediate certificate: DigiCertCA.crt)
(Your Root certificate: TrustedRoot.crt)
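The order matters: the primary certificate comes first, then the intermediate, then the root. A minimal sketch of the concatenation with placeholder file contents standing in for the real PEM certificates:

```shell
# Placeholder "certificates" standing in for the real PEM files
chain_dir=$(mktemp -d); cd "$chain_dir"
echo 'PRIMARY'      > domain_com.crt
echo 'INTERMEDIATE' > DigiCertCA.crt
echo 'ROOT'         > TrustedRoot.crt

# Build the chain: primary first, then intermediate, then root
cat domain_com.crt > domain_chain.crt
echo "" >> domain_chain.crt
cat DigiCertCA.crt TrustedRoot.crt >> domain_chain.crt

cat domain_chain.crt
```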

Setup python app in centos from scratch (centos 6.9+uwsgi+nginx+flask+mysql)

Initial setup

$ sudo yum update
$ sudo yum install epel-release
$ sudo yum groupinstall "Development tools"
$ sudo yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel telnet htop
$ sudo yum install python-devel python-virtualenv
$ sudo yum install mysql-connector-python mysql-devel mysql-server

Install Python

Download and install Python :

$ ./configure && make && sudo make altinstall

Install uWSGI

$ wget
$ which python2.7
$ sudo /usr/local/bin/python2.7
$ which pip2.7
$ sudo /usr/local/bin/pip2.7 install uWSGI
$ which uwsgi
$ uwsgi --version

Setup vassels

$ sudo mkdir -p /etc/uwsgi/vassels

Setup Emperor service

$ sudo vim /etc/init.d/emperor
#!/bin/sh
# chkconfig: 2345 99 10
# Description: Starts and stops the emperor-uwsgi
# See how we were called.
PIDNAME="emperor-uwsgi.pid"   # adjust names/paths as needed
PIDFILE="/var/run/$PIDNAME"
LOGFILE="/var/log/emperor-uwsgi.log"
RUNEMPEROR="/usr/local/bin/uwsgi --emperor=/etc/uwsgi/vassels"

start() {
  if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE"); then
    echo 'Service emperor-uwsgi already running' >&2
    return 1
  fi
  echo 'Starting Emperor...' >&2
  local CMD="$RUNEMPEROR &> \"$LOGFILE\" & echo \$!"
  su -c "$CMD" > "$PIDFILE"
  echo 'Service started' >&2
}

stop() {
  if [ ! -f "$PIDFILE" ] || ! kill -0 $(cat "$PIDFILE"); then
    echo 'Service emperor-uwsgi not running' >&2
    return 1
  fi
  echo 'Stopping emperor-uwsgi' >&2
  kill $(cat "$PIDFILE") && rm -f "$PIDFILE"
  echo 'Service stopped' >&2
}

status() {
  if [ ! -f "$PIDFILE" ]; then
    echo "Emperor is not running." >&2
    return 1
  fi
  echo "Emperor (pid `cat ${PIDFILE}`) is running..."
  ps -ef | grep `cat $PIDFILE` | grep -v grep
}

case "$1" in
  start) start ;;
  stop) stop ;;
  restart) stop; start ;;
  status) status ;;
  *)
    echo "Usage: emperor {start|stop|restart|status}"
    exit 1
    ;;
esac

Setup app user & environment

$ sudo useradd foobar
$ sudo usermod -md /srv/foobar foobar
$ sudo chmod 755 /srv/foobar
$ sudo su - foobar
foobar@local~$ virtualenv --python=python2.7 ~/venv
foobar@local~$ mkdir www
foobar@local~$ mkdir logs
foobar@local~$ touch logs/uwsgi.log
foobar@local~$ touch uwsgi.ini
foobar@local~$ echo "source ~/venv/bin/activate" >> ~/.bashrc
foobar@local~$ source ~/venv/bin/activate
(venv)foobar@local~$ vim uwsgi.ini
master = true
processes = 2
socket = /tmp/foobar.sock
chdir = /srv/foobar/www
virtualenv = /srv/foobar/venv
module = app:app
uid = foobar
chown-socket = foobar:nginx
chmod-socket = 660
vacuum = true
die-on-term = true
py-autoreload = 1
logger = file:/srv/foobar/logs/uwsgi.log

Exit from foobar user & create uwsgi symlink

(venv)foobar@local~$ exit
$ sudo ln -s /srv/foobar/uwsgi.ini /etc/uwsgi/vassels/foobar.ini

Start emperor service & set it to start on boot

$ sudo service emperor start
$ sudo chkconfig emperor on

Nginx config for staging application

resolver_timeout 10s;
server {
  listen 80;
  charset utf-8;
  gzip_vary on;
  access_log /var/log/nginx/app.access.log;
  error_log /var/log/nginx/app.error.log;
  add_header 'Access-Control-Allow-Origin' '*' always;
  add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE' always;
  add_header X-Frame-Options "SAMEORIGIN";
  set $appweb http://app-web.service.consul;
  location / {
    proxy_pass $appweb:5002;
    proxy_redirect     off;
    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Host $server_name;
    auth_basic "Private Property";
    auth_basic_user_file /tmp/.htpasswd;
    #allow; # xxx
    #deny all;
  }
}

Renew Let’s Encrypt SSL certificate with nginx

You hate to see your website with an invalid SSL certificate, crossed out like this:

That means you need to renew the SSL certificate; in this case I use Let’s Encrypt. I just want to get rid of the invalid SSL certificate logo that makes your website look very unprofessional 🙂

Navigate to the path where you placed the Let’s Encrypt directory:

$ cd ~/letsencrypt
~/letsencrypt$ sudo ./certbot-auto renew

Finally, restart nginx. This server is running CentOS, so it goes like this:

$ sudo /etc/init.d/nginx restart
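To avoid doing this by hand every ~90 days, the renewal can go in cron. A hypothetical crontab entry, assuming the letsencrypt directory lives at /root/letsencrypt (adjust the path and schedule to taste):

```
# Renew every Monday at 03:00, then reload nginx to pick up new certificates
0 3 * * 1 /root/letsencrypt/certbot-auto renew --quiet && /etc/init.d/nginx reload
```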


WordPress still cannot establish a database connection – Error

Have you ever experienced a problem like this? After you install WordPress and set up everything (webserver, database, all of it), WordPress still cannot establish your database connection.

Even though you’ve already made sure that your database is up and running (I’m using MySQL) and your port is already open.

Trust me, I’ve done everything properly. I spent hours just to figure out what the root cause was, and after some googling and stackoverflowing, I found this problem was due to SELinux.

$ sudo setsebool -P httpd_can_network_connect_db=1

Thanks a lot, SELinux!

Configure php-fpm and nginx to run in low memory server

It was pain in the ass to have php-fpm and nginx together to serve php app, especially when you’re running on low-memory server, especially when you’re running cms like wordpress which basically heavy duty. I had a Centos server running with memory only 1024Mb (1Gb).

My web kept crashing every single time. And the problem still remains: a memory leak.

I don’t know what the root cause was. I still don’t know, but I think it has something to do with the php-fpm configuration, or even nginx.

Just several days ago I got my blog up and running with WordPress on the same type of server, same OS (CentOS), with the same memory (1GB), and since I didn’t want to use apache for some reason, I had to get nginx and php-fpm working together again.

I thought my web would run very smoothly, since the blog was still unlikely to receive much traffic. But I was wrong; somehow the memory leaked again. My server only has 1024MB, but I thought it didn’t matter. The only way to mitigate this problem is to tune both the php-fpm and nginx configurations, which I never liked. But I need this, so let’s get it done.
