ta-ching chen



If your OS is Ubuntu 16.04+, please visit Kubernetes - Two Steps Installation instead.


Docker is getting popular among cloud providers and developers because it lets people package an application (or service) into a single Docker image and deliver it without worrying about external dependencies. Docker containers are also a kind of lightweight virtual machine (compared to full virtualization), so an operations engineer can launch hundreds of Docker instances on the same host.

However, maintaining, upgrading, and monitoring all of those containers in an elegant way becomes a great challenge for engineers. At this point we definitely need a tool to solve the problem, and fortunately there are several to choose from: Docker Swarm, Nomad, Mesos, DC/OS, and our topic today, Kubernetes!


Kubernetes is an open-source system developed by Google, drawing on its years of experience running production workloads at large scale. It's built for

  • Management of containerized applications
  • Automating deployment
  • Scaling

Kubernetes is a huge system with a design philosophy behind it; we will go through each part in later tutorials and articles. For now, let's start building our three-node Kubernetes cluster!


  • Ubuntu 14.04 LTS (Kernel 4.2.0-35)
  • Packages:
    • docker-engine
    • bridge-utils
  • SSH login with key auth
  • Sudo permission
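Before starting, it can save time to confirm the tools above are actually present on every node. A minimal pre-flight sketch (not part of the original guide; `brctl` is the command shipped by bridge-utils):

```shell
#!/bin/sh
# Pre-flight check: report which of the tools this guide relies on
# are installed on this node. Run it on each of the three machines.
missing=""
for cmd in docker brctl ssh sudo; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "ok:      $cmd"
  else
    echo "MISSING: $cmd"
    missing="$missing $cmd"
  fi
done
[ -z "$missing" ] || echo "install the missing tools before continuing:$missing"
```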


Clone repo

$ git clone https://github.com/kubernetes/kubernetes.git
$ cd kubernetes
$ git checkout 5c8dd576e28f605be270b3590092ea859d4f6a25
$ cd ..

Modify line 67 of kubernetes/cluster/ubuntu/download-release.sh so that we can specify the Kubernetes version through an environment variable.

#KUBE_VERSION=$(get_latest_version_number | sed 's/^v//')
if [ -z "$KUBE_VERSION" ]; then
  KUBE_VERSION=$(get_latest_version_number | sed 's/^v//')
fi
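The guard above means: respect KUBE_VERSION if the caller exported it, otherwise fall back to the latest release. A self-contained demo of that pattern (the lookup function is a stand-in, not the real download-release.sh code):

```shell
#!/bin/sh
# Demonstrates the version-override pattern used in download-release.sh.
get_latest_version_number() {
  echo "v1.2.4"   # stand-in for the real network lookup
}
resolve_version() {
  if [ -z "$KUBE_VERSION" ]; then
    KUBE_VERSION=$(get_latest_version_number | sed 's/^v//')
  fi
  echo "$KUBE_VERSION"
}
unset KUBE_VERSION
default=$(resolve_version)   # falls back to the "latest" version
KUBE_VERSION=1.1.8
pinned=$(resolve_version)    # the caller's pin wins
echo "default=$default pinned=$pinned"
```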

Pick the corresponding binary versions

$ export KUBE_VERSION=1.2.4
$ export FLANNEL_VERSION=0.5.4
$ export ETCD_VERSION=2.3.4

Modify kubernetes/cluster/ubuntu/config-default.sh

If you don't want to enter passwords during the installation:

  • use key authentication
  • grant the user sudo permission with NOPASSWD: user ALL=(ALL) NOPASSWD:ALL
  • remove NOPASSWD after the installation
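The NOPASSWD entry would look like this in the sudoers file (edit it with visudo, never directly; `k8s` is an example username, not from the original guide):

```shell
# /etc/sudoers -- open with: sudo visudo
# 'k8s' is an example username; remove this line after installation.
k8s ALL=(ALL) NOPASSWD:ALL
```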

export nodes="<user>@<ip1> <user>@<ip2> <user>@<ip3>"
# ai -> both master and minion, a -> master, i -> minion
export role="ai i i"
  • nodes: make sure the user exists on every node and has sudo permission
  • role: currently the official deployment script supports only one master in Kubernetes, so we choose one of the nodes to be the master.
export NUM_NODES=${NUM_NODES:-3}
  • SERVICE_CLUSTER_IP_RANGE: CIDR for Kubernetes Service objects; we leave it at the default for now
  • FLANNEL_NET: CIDR for Docker instances; we also leave it at the default
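To see how nodes and role pair up, here is a tiny sketch of the mapping (a simplified illustration, not the actual cluster/ubuntu deployment code; the user and IPs are examples):

```shell
#!/bin/sh
# Pair each entry of $nodes with the matching entry of $role.
# ai -> both master and minion, a -> master, i -> minion
nodes="k8s@10.0.0.1 k8s@10.0.0.2 k8s@10.0.0.3"
role="ai i i"
i=1
for r in $role; do
  node=$(echo "$nodes" | cut -d' ' -f"$i")
  case "$r" in
    ai) echo "$node -> master + minion" ;;
    a)  echo "$node -> master" ;;
    i)  echo "$node -> minion" ;;
  esac
  i=$((i + 1))
done
```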

Comment out the following lines in config-default.sh if you want to enable the UI and DNS add-ons.
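The add-on switches in the v1.2-era config-default.sh looked roughly like this (variable names and defaults recalled from that version of the scripts; check your own copy, as they may differ):

```shell
# Add-on settings in cluster/ubuntu/config-default.sh (illustrative values)
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="192.168.3.10"     # must sit inside SERVICE_CLUSTER_IP_RANGE
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
```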



Install Kubernetes

You may need to type your password for root permission during the installation.

$ cd kubernetes/cluster
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh

Some of you may encounter the following SSH-related issue:

$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
... Starting cluster using provider: ubuntu
... calling verify-prereqs
Could not find or add an SSH identity.
Please start ssh-agent, add your identity, and retry.

We can solve it by starting an agent, loading your identity, and retrying:

$ eval $(ssh-agent)
$ ssh-add
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh

Install add-ons

$ cd kubernetes/cluster/ubuntu
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh

Copy binaries

kubectl is the management tool for Kubernetes; we copy it to /usr/local/bin for later use.

$ sudo cp kubernetes/cluster/ubuntu/binaries/kubectl /usr/local/bin


Now type the following command and Kubernetes will list all nodes in the cluster:

$ kubectl get nodes
NAME      STATUS    AGE
<ip1>     Ready     3h
<ip2>     Ready     3h
<ip3>     Ready     3h

You can find the Kubernetes Dashboard URL through the following command:

$ kubectl cluster-info
Kubernetes master is running at
KubeDNS is running at
kubernetes-dashboard is running at


Dashboard keeps crashing with CrashLoopBackOff

There are several reasons why the DNS and dashboard pods keep hitting CrashLoopBackOff, like the following:

k8s@beta1:~/kubernetes/cluster/ubuntu$ kubectl --namespace kube-system get pod
NAME                                      READY     STATUS             RESTARTS   AGE
kube-dns-v14-zgb98                        2/3       Running            0          9s
kubernetes-dashboard-v1.1.0-beta1-n4fsa   0/1       CrashLoopBackOff   1          9s

Below are two common causes and their solutions:

  • bridges not on the same subnet
  • credential problems

Solution 1 - bridges not on the same subnet

docker0 and flannel.1 are not on the same subnet:

$ ip r
dev flannel.1  proto kernel  scope link  src
dev docker0  proto kernel  scope link  src

Modify bip in /etc/default/docker so that docker0 stays in the same subnet as flannel.1, then restart the Docker daemon.

DOCKER_OPTS=" -H tcp:// -H unix:///var/run/docker.sock --bip= --mtu=1450"

Solution 2 - credential problems

After creating the Kubernetes dashboard pod, the dashboard (http://<ip>:8080/ui) shows the following message; it looks like something is wrong with the DNS service.

{
    "kind": "Status",
    "apiVersion": "v1",
    "metadata": {},
    "status": "Failure",
    "message": "no endpoints available for service \"kubernetes-dashboard\"",
    "reason": "ServiceUnavailable",
    "code": 503
}
kubedns's logs show that the apiserver asked the client to provide correct credentials:

$ kubectl logs kube-dns-v19-xg5or -c kubedns --namespace=kube-system
I1013 06:24:08.506645       1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server has asked for the client to provide credentials (get services kubernetes). Sleeping 1s before retrying.
E1013 06:24:08.506744       1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints)
E1013 06:24:08.597927       1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server has asked for the client to provide credentials (get services)

To fix this problem, we need to remove the current token stored in Kubernetes, and the system will automatically recreate it.

$ kubectl get secrets --namespace=kube-system
NAME                  TYPE                                  DATA      AGE
default-token-5bhvr   kubernetes.io/service-account-token   3         33m

$ kubectl delete secrets/default-token-5bhvr --namespace=kube-system

After the new token is created, delete the DNS and dashboard pods (not the replication controllers) as well.

$ kubectl delete pod kube-dns-v19-xg5or --namespace=kube-system
$ kubectl delete pod kubernetes-dashboard-v1.1.1-g1we7 --namespace=kube-system

The replication controllers will recreate the pods, and the dashboard and DNS services should then work normally.

Docker-engine failed to start

Some nodes may stay in the NotReady state:

$ kubectl get nodes
NAME       STATUS     AGE
x.x.x.x    Ready      2h
y.y.y.y    NotReady   2h
z.z.z.z    NotReady   2h

I found that docker-engine had stopped, and that it crashed immediately when I tried to launch the Docker service manually with the options in /etc/default/docker:

$ sudo /usr/bin/dockerd  -H tcp:// -H unix:///var/run/docker.sock --bip= --mtu=1450 --raw-logs
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x745b88]

goroutine 1 [running]:
panic(0x1a619a0, 0xc82000e0f0)
    /usr/local/go/src/runtime/panic.go:481 +0x3e6

    /usr/local/go/src/math/unsafe.go:21 +0x58
$ docker -v
Docker version 1.12.1, build 23cf638

If you encounter the same problem, just purge docker-engine and install it again:


$ sudo apt-get purge docker-engine
$ sudo apt-get install docker-engine

Restart other Kubernetes services

$ sudo service kubelet start
$ sudo service kube-proxy start

Further Readings

