Deploying Kubernetes Cluster With Ingress and Load Balancer on DigitalOcean

This article shows you a simple way to deploy a Kubernetes cluster and its components on DigitalOcean Managed Kubernetes (DOKS).

Along my learning journey with Kubernetes, I started getting my hands on Kubernetes on DigitalOcean. It's actually one of my favorite hosting platforms, and it also offers a managed Kubernetes service (DOKS). I have some of my small projects running on k8s on DO, since it's very easy to deploy and I can run a completely managed k8s cluster for only 15 dollars per month!

Getting started

Now I'm going to deploy a new Kubernetes cluster and run a simple service on it. I also want the service to be internal, with an ingress controller and a load balancer in front of the cluster to serve the traffic.

Let's take a look at the picture below. If you're familiar with k8s, this diagram should be quite straightforward: traffic enters through the load balancer and the ingress controller, goes through the internal service, and eventually ends up at the pods.

Image by Devopsid.com

Prerequisites

There are a few things you want to prepare before deploying a Kubernetes cluster on DO:

    • A DigitalOcean account
    • doctl installed and authenticated with your API token
    • kubectl installed on your machine
    • Optionally, a domain you control, so you can point a DNS record at the load balancer later

Implementation

If everything is set, let's continue with the implementation. As I mentioned before, deploying a managed Kubernetes cluster on DigitalOcean is quite straightforward.

Deploy cluster

As the first step, let's deploy a new cluster on DOKS. We can do that simply with the doctl command line:

$ doctl kubernetes cluster create my-cluster --node-pool "name=my-cluster-node;size=s-1vcpu-2gb;count=1" --region sgp1

This command creates a new Kubernetes cluster along with a node pool containing a single node with minimal specs. You might want to change the parameters depending on your preference, including the region. In this test, I'm deploying to the Singapore region (sgp1).
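
If you're not sure which size or region slugs to use, doctl can list the valid options (an extra tip, not part of the original steps):

$ doctl kubernetes options regions
$ doctl kubernetes options sizes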

We just need to wait until it finishes. The output will look like this:

Notice: Cluster is provisioning, waiting for cluster to be running
..................................................................
Notice: Cluster created, fetching credentials
Notice: Adding cluster credentials to kubeconfig file found in "/home/.kube/config"
Notice: Setting current-context to do-sgp1-my-cluster
ID                                     Name         Region   Version       Auto Upgrade   Status    Node Pools
d570cdaa-c985-495c-b6e7-d005aa1ef5dd   my-cluster   sgp1     1.20.2-do.0   false          running   my-cluster-node

Once provisioning has finished, check the current context to make sure we're pointed at the right cluster.

$ kubectl config current-context
do-sgp1-my-cluster
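
As an extra sanity check (not part of the original steps), you can also confirm the node pool is up and Ready:

$ kubectl get nodes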

If everything looks good, we have a new k8s cluster provisioned. Let's continue and set up the other pieces.

Deploy the internal service

Once the cluster is deployed, if we take a look at the console (https://cloud.digitalocean.com/kubernetes/clusters), it will show the new cluster.

Now let's create a YAML file to define our internal service.

# service.yml
apiVersion: v1
kind: Service
metadata:
  name: test-backend
spec:
  type: ClusterIP
  selector:
    app: test-app
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - image: tutum/hello-world:latest
        name: test-app
        ports:
        - containerPort: 80
          protocol: TCP

Apply the YAML file :

$ kubectl apply -f service.yml
service/test-backend created
deployment.apps/test-app created

Check the service and whether the pods are already running:

$ kubectl get service
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.245.0.1      <none>        443/TCP   129m
test-backend   ClusterIP   10.245.93.249   <none>        80/TCP    28m
$ kubectl get pods
NAME                        READY   STATUS    RESTARTS   AGE
test-app-65f85568c4-4t685   1/1     Running   0          29m
test-app-65f85568c4-6q49w   1/1     Running   0          29m

It looks like everything is set and we have deployed our first internal service on DigitalOcean Kubernetes. Let's move on!
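
Since test-backend is a ClusterIP service, it isn't reachable from outside the cluster yet. If you want to smoke-test it before the ingress exists (an optional step I'm adding here), you can port-forward to it and curl it from a second terminal:

$ kubectl port-forward service/test-backend 8080:80
$ curl http://localhost:8080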

Deploy the ingress controller and Load Balancer

Next, deploy the NGINX ingress controller using the manifest the project provides for DigitalOcean:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/do/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

Note that applying the ingress controller also automatically provisions a new DigitalOcean Load Balancer.
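
You can watch the load balancer come up and grab its external IP from the ingress-nginx controller service (an extra check on my part; the original relies on the DigitalOcean console for this):

$ kubectl get svc -n ingress-nginx ingress-nginx-controller

The EXTERNAL-IP column will show <pending> until DigitalOcean finishes provisioning the load balancer.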

And let’s create a new YAML file for the ingress definition:

# ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: app1.devopsid.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test-backend
          servicePort: 80

As usual, apply the YAML file :

$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/my-ingress created

View the ingress :

$ kubectl get ingress
NAME         CLASS    HOSTS               ADDRESS          PORTS   AGE
my-ingress   <none>   app1.devopsid.com   139.59.195.196   80      5m34s

As you can see, after deploying the ingress we get an external IP address, which comes from the DigitalOcean Load Balancer. As I mentioned before, I created a new record at my DNS provider pointing app1.devopsid.com to the IP address of the Load Balancer, so I can access the app via app1.devopsid.com.

Finally, let's check our new app in the browser. If it's working fine, we should see the hello-world page served by our test app.
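
If DNS hasn't propagated yet, you can also test through the load balancer IP directly by overriding the Host header (an optional check, not in the original):

$ curl -H "Host: app1.devopsid.com" http://139.59.195.196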

If you have any questions, let me know in the comments!

Setup kubernetes cluster in Ubuntu 20.04 from scratch

Hello again. This article is a walkthrough of how to set up your own Kubernetes cluster on Ubuntu 20.04 LTS. The steps are quite straightforward, and you can follow along directly while setting up your own.

Before getting started: I tried this using 2 Ubuntu servers:

    • k8s-master : 2gb memory, 2vCPUs
    • k8s-node-0 : 1gb memory, 1vCPU

I believe these are the cheapest Kubernetes cluster specs you can get. The purpose here is only to init the cluster from scratch and do a simple deployment. So here it goes:

Install Docker and disable swap on all k8s nodes:

$ sudo apt update
$ sudo apt install -y docker.io
$ sudo systemctl start docker
$ sudo systemctl enable docker
$ sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
$ sudo swapoff -a

Enable IP forwarding on all k8s nodes:

To enable IP forwarding permanently, edit the file /etc/sysctl.conf, look for the line net.ipv4.ip_forward=1, and un-comment it. After making the change in the file, execute the following command:

$ sudo sysctl -p
net.ipv4.ip_forward = 1
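
If you prefer not to edit the file by hand, a one-liner like this should also work, assuming the line is present but commented out as #net.ipv4.ip_forward=1 (as in the stock sysctl.conf):

$ sudo sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf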

Install k8s packages on all k8s nodes :

Execute the following command on all nodes :

$ sudo apt install -y apt-transport-https curl
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
$ sudo apt update
$ sudo apt install -y kubelet kubeadm kubectl
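
Optionally (not part of the original steps), hold the packages so an unattended upgrade doesn't bump them to a different version:

$ sudo apt-mark hold kubelet kubeadm kubectl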

Init the cluster on k8s master :

On k8s master, now let’s init the cluster :

$ sudo kubeadm init

This command prints a lot of output; near the end it includes a kubeadm join command with a token. Save it, as we'll need it later to join the worker node to the master.

Error when running kubectl

After initializing the cluster, I encountered an error that prevented me from running kubectl commands:

The connection to the server localhost:8080 was refused – did you specify the right host or port?

If you also face the same issue, the solution is simply to run these commands:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install network plugin on k8s master :

In this tutorial, I use Calico (https://www.projectcalico.org/)

$ kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Enable bash completion for kubectl commands

The following commands are optional but recommended. They enable bash completion when you execute kubectl subcommands. Do this on the k8s master:

$ echo 'source <(kubectl completion bash)' >>~/.bashrc
$ source ~/.bashrc

Enable nginx ingress

Enable ingress with nginx on k8s master :

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml

How to join the k8s node to the k8s master:

Once the k8s master is ready, we need to connect the k8s node to it. We can simply SSH to the k8s node and execute the join command that we got when the cluster init completed.

$ sudo kubeadm join 178.128.103.123:6443 --token htsn3w.juidt9j3t4zbgu3t --discovery-token-ca-cert-hash sha256:ea2e5654fb6e8bc31be463f60177f3b5d31b1da5019a20fd7a2336435b970a77
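
If you didn't save the join command from the kubeadm init output, you can regenerate it on the master at any time (a standard kubeadm command, not shown in the original):

$ sudo kubeadm token create --print-join-command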

Check on the k8s master whether the nodes are ready :

$ kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   24h   v1.20.1
k8s-node-0   Ready    <none>                 24h   v1.20.1

If you see both nodes in Ready status, we're set. Now we can continue with the deployment.

Deploy nginx on k8s cluster

Now we come to the fun stuff. The cluster is ready, so let's deploy something on it. Let's create a deployment for nginx, the easy one.

From the k8s master, save the file below as nginx-deployment.yml (or whatever you want to call it).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-deployment
  template:
    metadata:
      labels:
        run: nginx-deployment
    spec:
      containers:
      - image: nginx
        name: nginx-webserver
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    run: nginx-deployment
  ports:
    - port: 80

Then create the deployment from this file:

$ kubectl create -f nginx-deployment.yml
deployment.apps/nginx-deployment created
service/nginx-service created

Check whether the deployment has succeeded:

$ kubectl get deployments
NAME             READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/1   1          1         110s

Now you can see the nginx deployment has started its replica, and it's running fine.

Next, check whether the service has been deployed:

$ kubectl get services
NAME          TYPE      CLUSTER-IP    EXTERNAL-IP PORT(S)      AGE
kubernetes    ClusterIP 10.96.0.1     <none>      443/TCP      34h
nginx-service NodePort  10.111.139.39 <none>      80:30992/TCP 5m27s

We can see the nginx service is already in place, and since the deployment succeeded, let's also check whether nginx is really running by hitting the cluster IP:

$ curl 10.111.139.39

and the output is :

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Yes, nginx is now running successfully!
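
Since nginx-service is of type NodePort, you can also reach it from outside the cluster through any node's IP on the node port shown earlier (30992 in my output):

$ curl http://<node IP>:30992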

Increase replica

Next, let's try to increase the number of replicas of the existing deployment from 1 to 4. To do that, we just need to update the YAML file we deployed with.

$ vim nginx-deployment.yml

The only line I altered in the file is replicas: 4. Change the number to your desired count.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  selector:
    matchLabels:
      run: nginx-deployment
  template:
    metadata:
      labels:
        run: nginx-deployment
    spec:
      containers:
      - image: nginx
        name: nginx-webserver
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    run: nginx-deployment
  ports:
    - port: 80

Save the file again, and run the command to update the deployment :

$ kubectl apply -f nginx-deployment.yml
deployment.apps/nginx-deployment unchanged
service/nginx-service unchanged

Also check whether the number of replicas has increased:

$ kubectl get deployments nginx-deployment
NAME             READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 4/4   4          4         14h

If the number of ready replicas equals the desired count, we have successfully scaled up the deployment.
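
As an alternative to editing the YAML (not used in this article), kubectl can also scale the deployment directly:

$ kubectl scale deployment nginx-deployment --replicas=4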

How to Push Docker Image to Google Container Registry (GCR) (Easy steps!)

Here's how to push your local Docker image to GCR (Google Container Registry). You might want to use your own Docker image later for your containerized app or Kubernetes.

For this research, I used Ubuntu 18.04.3 LTS.

Install gcloud; see the full instructions here: https://cloud.google.com/sdk/docs/quickstarts

Once you've installed gcloud, let's continue to the next step:

$ gcloud init
Welcome! This command will take you through the configuration of gcloud.

Your current configuration has been set to: [default]

You can skip diagnostics next time by using the following flag:
  gcloud init --skip-diagnostics

Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.                                                                                                                                                                                               
Reachability Check passed.
Network diagnostic passed (1/1 checks passed).

You must log in to continue. Would you like to log in (Y/n)?  y

Go to the following link in your browser:

    https://accounts.google.com/o/oauth2/auth?code_challenge=00SNwoATS92Is6deNqmONmgLM_RPM8x7n_IT_9ZU55s&prompt=select_account&code_challenge_method=S256&access_type=offline&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&client_id=xxxxxxxxxxxxxxxxxxxxxxx


Enter verification code: 

For the first-time initialization, gcloud will ask you to log in, providing a link that you can open in your browser. Follow the instructions from Google, and you will get a verification code. You will also be asked to pick the cloud project that you want to use:

Enter verification code: 4/sgEzW-mHbbknYMPEIM7vPPjENJuIxRnT1ov9xxxxxxxxxx
You are logged in as: [xxxxxx@gmail.com].

Pick cloud project to use: 
 [1] my-project-id
 [2] Create a new project
Please enter numeric choice or text value (must exactly match list 
item):  1

Next, select the region and zone:

Your current project has been set to: [my-project-id].

Do you want to configure a default Compute Region and Zone? (Y/n)?  Y

Which Google Compute Engine zone would you like to use as project 
default?
If you do not specify a zone via a command line flag while working 
with Compute Engine resources, the default is assumed.
 [1] us-east1-b
 [2] us-east1-c
 [3] us-east1-d
 [4] us-east4-c
 [5] us-east4-b
 [6] us-east4-a
 [7] us-central1-c
 [8] us-central1-a
 [9] us-central1-f
 [10] us-central1-b
 [11] us-west1-b
 [12] us-west1-c
 [13] us-west1-a
 [14] europe-west4-a
 [15] europe-west4-b
 [16] europe-west4-c
 [17] europe-west1-b
 [18] europe-west1-d
 [19] europe-west1-c
 [20] europe-west3-c
 [21] europe-west3-a
 [22] europe-west3-b
 [23] europe-west2-c
 [24] europe-west2-b
 [25] europe-west2-a
 [26] asia-east1-b
 [27] asia-east1-a
 [28] asia-east1-c
 [29] asia-southeast1-b
 [30] asia-southeast1-a
 [31] asia-southeast1-c
 [32] asia-northeast1-b
 [33] asia-northeast1-c
 [34] asia-northeast1-a
 [35] asia-south1-c
 [36] asia-south1-b
 [37] asia-south1-a
 [38] australia-southeast1-b
 [39] australia-southeast1-c
 [40] australia-southeast1-a
 [41] southamerica-east1-b
 [42] southamerica-east1-c
 [43] southamerica-east1-a
 [44] asia-east2-a
 [45] asia-east2-b
 [46] asia-east2-c
 [47] asia-northeast2-a
 [48] asia-northeast2-b
 [49] asia-northeast2-c
 [50] europe-north1-a
Did not print [12] options.
Too many options [62]. Enter "list" at prompt to print choices fully.
Please enter numeric choice or text value (must exactly match list 
item):  30

Next, we need to configure Docker to authenticate against GCR. Run the command below to update the Docker config file; enter Y if prompted.

$ gcloud auth configure-docker
The following settings will be added to your Docker config file 
located at [/home/ubuntu/.docker/config.json]:
 {
  "credHelpers": {
    "gcr.io": "gcloud", 
    "us.gcr.io": "gcloud", 
    "eu.gcr.io": "gcloud", 
    "asia.gcr.io": "gcloud", 
    "staging-k8s.gcr.io": "gcloud", 
    "marketplace.gcr.io": "gcloud"
  }
}

Do you want to continue (Y/n)?  Y

Docker configuration file updated.

Now, let's build your Docker image, tag it, and push it to your GCR registry:

$ sudo docker build -t simple-image:v1 .
$ sudo docker tag simple-image:v1 asia.gcr.io/my-project-id/simple-image:v1
$ sudo docker push asia.gcr.io/my-project-id/simple-image:v1
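
To confirm the image landed in the registry (an optional check, not in the original), you can list the images in your repository:

$ gcloud container images list --repository=asia.gcr.io/my-project-id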

That’s it!

Let’s understand Git Submodule

What is git submodule?

A git submodule is another Git project inside your Git repository. If you work on multiple Git projects but need them together in one repository, you will be happy to set up git submodules.

So here I have two git repositories for example:

https://github.com/muffat/repo-a
https://github.com/muffat/repo-b

Since I update repo-b frequently, I want repo-b to live inside repo-a. That lets me work with multiple repos at once, makes it easy to maintain code across them, and helps me work faster and more efficiently.

Setup git submodule

Add git submodule in your current repo

$ git clone git@github.com:muffat/repo-a.git
$ cd repo-a/
/repo-a$
/repo-a$ git submodule add git@github.com:muffat/repo-b.git

Commit and push your submodule

/repo-a$ git add .
/repo-a$ git commit -m "Add submodule repo-b"
/repo-a$ git push origin master

Note that a git submodule doesn't automatically get pulled after you clone your repository (repo-a). You need to manually initialize and pull the submodule. So after you clone the repository, you will find that the submodule directory (repo-b) is empty:

$ git clone git@github.com:muffat/repo-a.git
$ cd repo-a/
/repo-a$ cd repo-b/
/repo-a/repo-b/$ ls
/repo-a/repo-b/$

Go back to the repository root (repo-a) and initialize and update the submodule:

/repo-a$ git submodule init
/repo-a$ git submodule update

How to git clone including submodules

$ git clone --recursive git@github.com:muffat/repo-a.git

A problem occurred whenever I tried to clone recursively: it failed to fetch the submodule repository because of a permission error, even though the submodule is a public repository.

Then I changed the submodule URL from SSH (git@) to HTTPS.

$ cat .gitmodules
[submodule "repo-b"]
	path = repo-b
	url = https://github.com/muffat/repo-b.git
	branch = master

After that, recursive cloning worked.
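
Note that if you change the URL in a clone where the submodule is already initialized (rather than before a fresh clone), you also need to sync the new URL into your local config:

/repo-a$ git submodule sync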

Setup plugin manager for vim on fedora 28

Vim is one of the must-have tools on any Unix-like system. One reason for using Vim is its customization and the plugins you can add depending on your needs. To manage plugins in Vim, I'm using vim-plug from junegunn/vim-plug.

Check vim version

On Fedora, Vim is already installed, but you might want to check the version just in case yours is behind the newest one.

$ vim --version

Update vim

Update Vim to the newest version, for a better experience:

$ sudo dnf update vim

Setup plugin manager for vim

Set up vim-plug. This downloads the plugin manager and puts it inside the ~/.vim/autoload directory:

$ curl -fLo ~/.vim/autoload/plug.vim --create-dirs \
    https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim

Open ~/.vimrc and put these lines at the bottom of the file. The entries between call plug#begin('~/.vim/plugged') and call plug#end() are the plugins that you want to install.

...
...
call plug#begin('~/.vim/plugged')
Plug 'junegunn/vim-easy-align'
Plug 'itchyny/lightline.vim'
Plug 'scrooloose/nerdtree'
Plug 'tpope/vim-fugitive'
Plug 'airblade/vim-gitgutter'
Plug 'Valloric/MatchTagAlways'
call plug#end()

Save, and reload the Vim config without restarting Vim:

:w
:so ~/.vimrc

Install plugins

After vim-plug has been set up, we can easily install the plugins listed inside ~/.vimrc with this command:

:PlugInstall

Finally, let vim-plug install the plugins.
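
Later on, you can update the installed plugins with vim-plug's update command:

:PlugUpdate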

All set!

Setup Systemd Service on Ubuntu 16.04

Create a systemd unit file for the service:

$ sudo vim /etc/systemd/system/myservice.service
[Unit]
Description=Run the service

[Service]
User=ubuntu
# change the workspace
WorkingDirectory=/usr/local/src

#path to executable. 
#executable is a bash script which calls jar file
ExecStart=/usr/local/src/somescript

SuccessExitStatus=143
TimeoutStopSec=10
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Create the script that the unit executes:

$ sudo vim /usr/local/src/somescript
#!/bin/sh

java -jar /some/file.jar

Make the script executable, then reload systemd and enable, start, and check the service:

$ sudo chmod +x /usr/local/src/somescript
$ sudo systemctl daemon-reload
$ sudo systemctl enable myservice.service
$ sudo systemctl start myservice
$ sudo systemctl status myservice
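
To follow the service logs (standard systemd tooling, not in the original notes):

$ sudo journalctl -u myservice -f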


Setup Simple Ruby on Rails App On Ubuntu 16.04 From Scratch

Rails is one of the most popular Ruby frameworks out there. Now I want to try running a simple app on an Ubuntu 16.04 machine, for testing purposes.

First, update the system and install essential dependencies:

$ sudo apt-get update
$ sudo apt-get install -y build-essential curl sudo vim

Install nodejs:

$ curl -sL https://deb.nodesource.com/setup_9.x | sudo -E bash -
$ sudo apt-get install -y nodejs

Create a dedicated user for the app, for example an ubuntu user. We also give the ubuntu user sudo privileges without a password prompt, which is useful for running commands that need sudo in the next steps.

$ sudo useradd -m ubuntu
$ echo 'ubuntu ALL=(root) NOPASSWD: ALL' | sudo tee -a /etc/sudoers

Switch to the ubuntu user and import the GPG keys needed to install RVM:

$ su - ubuntu
ubuntu~$ gpg --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB

Download and install rvm:

ubuntu~$ \curl -sSL https://get.rvm.io | bash -s stable

Install the Ruby interpreter, version 2.5.1; you might want to change this to your preferred version:

ubuntu~$ source ~/.rvm/scripts/rvm
ubuntu~$ rvm install 2.5.1
ubuntu~$ rvm use 2.5.1 --default

Install Rails with gem, and create a new app without generating the Gemfile. Why? Because every time I create a new app, I end up facing dependency errors in the Gemfile. So it's safer to set up the new app without the Gemfile; we'll create it manually later.

ubuntu~$ gem install rails
ubuntu~$ rails new app --skip-gemfile

Create the Gemfile:

ubuntu~$ touch ~/app/Gemfile
ubuntu~$ vim ~/app/Gemfile

Fill the dependencies below into the Gemfile, then save and exit:

source 'https://rubygems.org'
gem 'rails', '~> 5.2.1'
gem 'bootsnap', '~> 1.3.2'
gem 'tzinfo-data', '~> 1.2018.5'
gem 'listen', '~> 3.1.5'
gem 'sqlite3'

Now, install all the gems with bundle:

ubuntu~$ cd ~/app
ubuntu app~$ bundle install

Try running the Rails server:

ubuntu app~$ rails server -b 0.0.0.0
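
Rails listens on port 3000 by default, so a quick check from another machine (my addition, not in the original) would be:

$ curl http://<server IP>:3000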


Run containerized python app in kubernetes

First of all, we need a Docker image that will run inside the Kubernetes cluster, so I assume we already have a Kubernetes cluster. The next thing to do is build the Docker image, or you can use an existing image of your own.

In this tutorial, I'll show you how to run a containerized Python app with my image, from the start.

What we need

These applications should be installed on your machine before getting started. In my case, I use a remote server with Ubuntu 16.04 installed.

1. Docker
2. Kubernetes

Setup Kubernetes on Ubuntu 16.04

Build docker image

Let's begin by cloning one of my repos, which contains the Dockerfile to build the image:

$ git clone https://github.com/muffat/docker-images.git
$ cd docker-images/simple-python-app/
~/docker-images/simple-python-app$ sudo docker build -t simple .

Wait until the image is successfully built. Then you'll see the new Docker image when you run this command:

$ docker images

Push docker image to repository (docker hub)

Before pushing the image to Docker Hub, we need to tag the successfully built image:

$ docker tag fbd064597ae4 cerpin/simple:1.0

Push the image

$ docker push cerpin/simple
The push refers to a repository [docker.io/cerpin/simple]
bc69ee44ef1a: Pushed 
7957c9ab59bb: Pushed 
2366fc011ccb: Pushed 
b18f9eea2de6: Pushed 
6213b3fcd974: Pushed 
fa767832af66: Pushed 
bcff331e13e3: Mounted from cerpin/test 
2166dba7c95b: Mounted from cerpin/test 
5e95929b2798: Mounted from cerpin/test 
c2af38e6b250: Mounted from cerpin/test 
0a42ee6ceccb: Mounted from cerpin/test

After it's pushed, you will have the Docker image in the repository, ready to use:

cerpin/simple:1.0

Run the image in kubernetes

First of all, I'm not a big fan of typing the full kubectl command, so I usually create a symlink as a shorter alias for kubectl:

$ sudo ln -s /usr/bin/kubectl /usr/bin/cap

Run the docker image in kubernetes

$ cap run simple --image=cerpin/simple:1.0

Then the container will be created. Just wait a moment until the state becomes Running:

$ cap get pods
simple-79d85db8b9-466kd 1/1 Running 0 26m

After it's ready, expose the deployment on port 5002 as a LoadBalancer service, so it will be accessible from the outside world:

$ cap expose deployment simple --type=LoadBalancer --port=5002

Check the service that has been exposed:

$ cap get services
simple LoadBalancer 10.105.115.251 <pending> 5002:31969/TCP 21m

You will see that the service maps port 5002 to node port 31969.

If you open up a browser and navigate to http://<external IP>:31969, you'll see the app is running.

Or, just use a curl command instead:

$ curl http://167.xxx.xxx.xxx:31969
{
"message": "welcome", 
"status": "ok"
}

Setup Kubernetes on Ubuntu 16.04

Summary

This setup installs Kubernetes on an Ubuntu 16.04 (64-bit) machine. I did this in the cloud and it worked perfectly.

# whoami && pwd
root
/root
# apt-get update
# apt-get install -y apt-transport-https
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
# apt-get update -y
# apt install -y docker.io
# apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Check the swaps; if there are any, switch them off:

# cat /proc/swaps
# swapoff -a

Init kubernetes for the first time using kubeadm:

# kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<private IP>

*Note: Change <private IP> to your public IP if you run the Kubernetes master on a single node and want it to be publicly reachable.

# cp /etc/kubernetes/admin.conf $HOME/
# export KUBECONFIG=$HOME/admin.conf
# echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc

Check the pod status and wait until all of them are running:

# kubectl get pods --all-namespaces

When their status is Running, move on and install the pod network. Please choose between the two options below; I prefer the Calico one (the second).

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

# or

# kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

Continue by removing the master taint from the nodes, so pods can be scheduled on the master:

# kubectl taint nodes --all node-role.kubernetes.io/master-

Install kubernetes dashboard

# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Create a dashboard user

create-user.yml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

create-role.yml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Apply both files, then retrieve the admin-user token:

# kubectl create -f create-user.yml
# kubectl create -f create-role.yml
# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
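
If you just want to reach the dashboard locally without exposing it publicly, the usual route for this manifest is kubectl proxy, which serves the dashboard through the API server proxy (this step is my addition; the exact URL depends on the dashboard version, but for the kube-system deployment above it is typically http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/):

# kubectl proxy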

How to make the kubernetes dashboard publicly accessible with a public IP

Read this: https://github.com/kubernetes/dashboard/wiki/Accessing-Dashboard---1.7.X-and-above
