If your OS is Ubuntu 16.04+, please visit Kubernetes - Two Steps Installation instead.
Docker is getting popular among cloud providers and developers because it lets people package their application (service) into a single Docker image and deliver it without worrying about external dependencies. Docker is also a kind of lightweight virtual machine (compared to full virtualization), so operations engineers can launch hundreds of Docker instances on the same host.
However, it becomes a great challenge for engineers to maintain, upgrade, and monitor all those containers in an elegant way. At this point we definitely need a tool that solves this problem, and fortunately there are several to choose from: Docker Swarm, Nomad, Mesos, DC/OS, and our topic today - Kubernetes!
Kubernetes is an open-source system developed by Google, drawing on years of experience running production workloads at large scale. It's built for:
- Management of containerized applications
- Automating deployment
- Scaling
Kubernetes is a huge system with a design philosophy behind it; we will go through every part of it in later tutorials and articles. For now, let's start building our three-node Kubernetes cluster!
- Ubuntu 14.04 LTS (Kernel 4.2.0-35)
- Packages:
- docker-engine
- bridge-utils
- SSH login with key auth
- Sudo permission
$ git clone https://github.com/kubernetes/kubernetes.git
$ cd kubernetes
$ git checkout 5c8dd576e28f605be270b3590092ea859d4f6a25
Modify line 67 in kubernetes/cluster/ubuntu/download-release.sh so that we can specify the Kubernetes version through an environment variable.
#KUBE_VERSION=$(get_latest_version_number | sed 's/^v//')
if [ -z "$KUBE_VERSION" ]; then
  KUBE_VERSION=$(get_latest_version_number | sed 's/^v//')
fi
$ export KUBE_VERSION=1.2.4
$ export FLANNEL_VERSION=0.5.4
$ export ETCD_VERSION=2.3.4
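A quick way to sanity-check the patched guard: with KUBE_VERSION already exported, the `if` block should leave it untouched. In the sketch below, `get_latest_version_number` is a stub standing in for the real helper defined in download-release.sh.

```shell
# Sketch: verify the patched guard respects a pre-set KUBE_VERSION.
# get_latest_version_number is a stub here; the real one lives in download-release.sh.
KUBE_VERSION=1.2.4
get_latest_version_number() { echo v9.9.9; }
if [ -z "$KUBE_VERSION" ]; then
  KUBE_VERSION=$(get_latest_version_number | sed 's/^v//')
fi
echo "will download Kubernetes $KUBE_VERSION"
```

Since KUBE_VERSION is non-empty, the fallback never runs and the script downloads exactly the version we asked for.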
If you don't want to enter a password during the installation:
- use key authentication
- grant the user sudo permission with NOPASSWD
user ALL=(ALL) NOPASSWD:ALL
- remove NOPASSWD after the installation
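One way to add the NOPASSWD rule is a fragment under /etc/sudoers.d/. The sketch below only writes it to a temp file to show the exact line format; the user name k8s is an example, and on a real node you should validate the fragment with `visudo -c -f` before installing it.

```shell
# Sketch: build a NOPASSWD sudoers fragment (written to a temp file for illustration;
# on each node it would go into /etc/sudoers.d/ after visudo validation).
DEPLOY_USER=k8s                # example user name, adjust to yours
FRAGMENT=$(mktemp)
printf '%s ALL=(ALL) NOPASSWD:ALL\n' "$DEPLOY_USER" > "$FRAGMENT"
cat "$FRAGMENT"
```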
export nodes="<user>@<ip1> <user>@<ip2> <user>@<ip3>"
# ai -> both master and minion, a -> master, i -> minion
export role="ai i i"
nodes: make sure the user exists on every node and has sudo permission.
role: currently, the official deployment script only supports a single master in Kubernetes, so here we choose one of the nodes to be the master.
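A simple sanity check before running kube-up.sh: the number of entries in nodes must match the number of entries in role. A minimal sketch with example addresses:

```shell
# Sketch: nodes and role must have the same number of entries.
nodes="user@10.0.0.1 user@10.0.0.2 user@10.0.0.3"   # example addresses
role="ai i i"
n=$(echo "$nodes" | wc -w)
r=$(echo "$role" | wc -w)
if [ "$n" -eq "$r" ]; then
  echo "OK: $n nodes, roles match"
else
  echo "MISMATCH: $n nodes but $r roles" >&2
fi
```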
export NUM_NODES=${NUM_NODES:-3}
export SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
export FLANNEL_NET=172.16.0.0/16
SERVICE_CLUSTER_IP_RANGE: CIDR for Kubernetes service objects; we leave it as the default for now.
FLANNEL_NET: CIDR for Docker instances; we also leave it as the default.
Uncomment the following lines if you want to enable the UI and DNS add-ons:
ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="192.168.3.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
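Note that DNS_SERVER_IP must fall inside SERVICE_CLUSTER_IP_RANGE. For the /24 range used here, a rough check is to compare the first three octets (this shortcut only works for /24 masks):

```shell
# Rough sketch: for a /24 service range, DNS_SERVER_IP must share the first three octets.
SERVICE_CLUSTER_IP_RANGE=192.168.3.0/24
DNS_SERVER_IP=192.168.3.10
range_prefix=${SERVICE_CLUSTER_IP_RANGE%.*}   # 192.168.3
ip_prefix=${DNS_SERVER_IP%.*}                 # 192.168.3
if [ "$range_prefix" = "$ip_prefix" ]; then
  echo "DNS_SERVER_IP is inside the service range"
fi
```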
We may need to type a password for root permission during the installation.
$ cd kubernetes/cluster
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
Some of you may encounter the following SSH-related issue:
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
... Starting cluster using provider: ubuntu
... calling verify-prereqs
Could not find or add an SSH identity.
Please start ssh-agent, add your identity, and retry.
We can solve it by starting an agent and adding our identity:
$ eval $(ssh-agent)
$ ssh-add
$ KUBERNETES_PROVIDER=ubuntu ./kube-up.sh
$ cd kubernetes/cluster/ubuntu
$ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh
Kubectl is the management tool for Kubernetes; we copy it to /usr/local/bin for later use.
$ cp kubernetes/cluster/ubuntu/binaries/kubectl /usr/local/bin
Now type the following command and Kubernetes will list all nodes in the cluster:
$ kubectl get nodes
NAME STATUS AGE
10.211.55.10 Ready 3h
10.211.55.12 Ready 3h
10.211.55.13 Ready 3h
You can find the Kubernetes Dashboard URL with the following command:
$ kubectl cluster-info
Kubernetes master is running at http://10.211.55.10:8080
KubeDNS is running at http://10.211.55.10:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at http://10.211.55.10:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
There are several reasons that can cause the DNS and dashboard pods to keep crashing with CrashLoopBackOff, like the following:
k8s@beta1:~/kubernetes/cluster/ubuntu$ kubectl --namespace kube-system get pod
NAME READY STATUS RESTARTS AGE
kube-dns-v14-zgb98 2/3 Running 0 9s
kubernetes-dashboard-v1.1.0-beta1-n4fsa 0/1 CrashLoopBackOff 1 9s
The following are two common causes and their solutions:
- bridges not on the same subnet
- credential problems
docker0 and flannel.1 are not on the same subnet:
$ ip r
172.16.0.0/16 dev flannel.1 proto kernel scope link src 172.16.9.0
172.16.90.0/24 dev docker0 proto kernel scope link src 172.16.90.1
Please modify bip in /etc/default/docker so that docker0 stays in the same subnet as flannel.1, then restart the Docker daemon.
[X] WRONG
DOCKER_OPTS=" -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.90.1/24 --mtu=1450"
[O] RIGHT
DOCKER_OPTS=" -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.9.1/24 --mtu=1450"
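The correct --bip can be derived from the src address that `ip r` reports for flannel.1: reuse its first three octets and pick .1 as the bridge address. A sketch using the example route table above (on a real node you would extract flannel_src from `ip r` instead of hard-coding it):

```shell
# Sketch: derive docker0's --bip from flannel.1's subnet.
# flannel_src is hard-coded from the example `ip r` output above; on a node you might use:
#   flannel_src=$(ip r | awk '/flannel.1/ {for (i=1; i<NF; i++) if ($i == "src") print $(i+1)}')
flannel_src=172.16.9.0
bip="${flannel_src%.*}.1/24"    # -> 172.16.9.1/24
echo "use --bip=${bip} in DOCKER_OPTS, then restart the docker daemon"
```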
After creating the Kubernetes dashboard pod, the dashboard (http://<ip>:8080/ui) shows the following message. It looks like there might be something wrong with the DNS service.
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "no endpoints available for service \"kubernetes-dashboard\"",
"reason": "ServiceUnavailable",
"code": 503
}
kubedns's log shows that the apiserver asked the client to provide correct credentials.
$ kubectl logs kube-dns-v19-xg5or -c kubedns --namespace=kube-system
I1013 06:24:08.506645 1 dns.go:172] Ignoring error while waiting for service default/kubernetes: the server has asked for the client to provide credentials (get services kubernetes). Sleeping 1s before retrying.
E1013 06:24:08.506744 1 reflector.go:216] pkg/dns/dns.go:154: Failed to list *api.Endpoints: the server has asked for the client to provide credentials (get endpoints)
E1013 06:24:08.597927 1 reflector.go:216] pkg/dns/dns.go:155: Failed to list *api.Service: the server has asked for the client to provide credentials (get services)
In order to fix this problem, we need to remove the current token stored in Kubernetes; the system will automatically recreate it.
$ kubectl get secrets --namespace=kube-system
NAME TYPE DATA AGE
default-token-5bhvr kubernetes.io/service-account-token 3 33m
$ kubectl delete secrets/default-token-5bhvr --namespace=kube-system
After the new token is created, delete the DNS and dashboard pods (not the replication controllers) as well.
$ kubectl delete pod kube-dns-v19-xg5or --namespace=kube-system
$ kubectl delete pod kubernetes-dashboard-v1.1.1-g1we7 --namespace=kube-system
The replication controllers will recreate the pods, and the dashboard and DNS services should then work normally.
Some nodes may stay in the NotReady state.
$ kubectl get nodes
NAME STATUS AGE
x.x.x.x Ready 2h
y.y.y.y NotReady 2h
z.z.z.z NotReady 2h
I found that docker-engine had stopped, and that it crashed immediately when I tried to launch the Docker service manually with the options located in /etc/default/docker.
$ sudo /usr/bin/dockerd -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.84.1/24 --mtu=1450 --raw-logs
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x0 pc=0x745b88]
goroutine 1 [running]:
panic(0x1a619a0, 0xc82000e0f0)
/usr/local/go/src/runtime/panic.go:481 +0x3e6
math.init()
/usr/local/go/src/math/unsafe.go:21 +0x58
$ docker -v
Docker version 1.12.1, build 23cf638
If you encounter the same problem, just purge and reinstall docker-engine.
DON'T FORGET TO BACK UP AND RESTORE YOUR DOCKER OPTIONS!
$ sudo apt-get purge docker-engine
$ sudo apt-get install docker-engine
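The backup/restore dance looks like the sketch below, demonstrated on a temp copy so the steps are reproducible; on the node, the real file is /etc/default/docker and the copies need sudo.

```shell
# Sketch: back up DOCKER_OPTS before the purge and restore it after the reinstall.
# Demonstrated on a temp file; substitute /etc/default/docker on a real node.
cfg=$(mktemp)
echo 'DOCKER_OPTS=" -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock --bip=172.16.9.1/24 --mtu=1450"' > "$cfg"
bak="${cfg}.bak"
cp "$cfg" "$bak"        # 1) back up before `apt-get purge docker-engine`
: > "$cfg"              # 2) purge/reinstall resets the options file
cp "$bak" "$cfg"        # 3) restore, then `service docker restart`
grep -o -- '--bip=[0-9./]*' "$cfg"
```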
Restart the other Kubernetes services:
$ sudo service kubelet start
$ sudo service kube-proxy start
- http://kubernetes.io/docs/getting-started-guides/ubuntu/
- https://github.com/kubernetes/kubernetes/issues/19332
See Also
- Kubernetes - Two Steps Installation
- Rolling Updates with Kubernetes Deployments
- Kubernetes - Pod
- Kubernetes - High Availability
- Adopting Container and Kubernetes in Production
To reproduce, republish or re-use the content, please attach the link: https://tachingchen.com/