...
```
enp0s3    Link encap:Ethernet  HWaddr 08:00:27:56:82:00
          inet addr:192.168.56.3  Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe56:8200/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:170 errors:0 dropped:0 overruns:0 frame:0
          TX packets:112 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:18488 (18.4 KB)  TX bytes:19156 (19.1 KB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:f0:a2:f5
          inet addr:10.0.3.15  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fef0:a2f5/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:329 errors:0 dropped:0 overruns:0 frame:0
          TX packets:141 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:389613 (389.6 KB)  TX bytes:10690 (10.6 KB)
...
```
Update apt-get
> sudo su
> apt-get update
Install openssh (if not already installed)
> apt-get install openssh-server
...
> apt-get install -y apt-transport-https curl
Install Kubernetes
```
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl kubelet kubeadm
sudo apt-mark hold kubelet kubeadm kubectl
```
Pull images
> kubeadm config images pull
```
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.13.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.13.1
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.13.1
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.13.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.24
[config/images] Pulled k8s.gcr.io/coredns:1.2.6
```
Now clone (full clone) this VM with names:
- k8master
- k8worker1
- k8worker2
For the k8master, set the CPU cores to 2.
...
Setup Networking on VMs
Now let's configure networking on each of the VMs we defined.
VM | IP Address |
---|---|
k8master | 192.168.56.100 |
k8worker1 | 192.168.56.101 |
k8worker2 | 192.168.56.102 |
Set Hostname
> sudo vi /etc/hostname
```
k8master
```
> sudo vi /etc/hosts
```
127.0.0.1       localhost
127.0.1.1       k8master

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
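By default each node only knows its own name, so it helps to append the cluster's host-only addresses to /etc/hosts on every VM so the nodes can reach each other by name. A minimal sketch, written against a scratch file so it can be exercised safely; on a real node point HOSTS_FILE at /etc/hosts:

```shell
# Append name-to-IP entries for the three nodes (run on each VM).
# HOSTS_FILE is a scratch file here for illustration; use /etc/hosts for real.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.56.100 k8master
192.168.56.101 k8worker1
192.168.56.102 k8worker2
EOF
# Sanity check: the worker entry is resolvable from the file
grep k8worker1 "$HOSTS_FILE"
```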
...
Set a static IP address for our host-only interface (enp0s3)
> sudo vi /etc/network/interfaces
```
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The NAT interface
auto enp0s8
iface enp0s8 inet dhcp

# The host-only interface (static)
auto enp0s3
iface enp0s3 inet static
    address 192.168.56.100
    netmask 255.255.255.0
    network 192.168.56.0
    broadcast 192.168.56.255
```
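The same static stanza works on the workers with only the last octet changed (100, 101, 102 per the table above). A small sketch that generates the block for a given node; OCTET and the scratch output file are illustrative:

```shell
# Generate the enp0s3 static stanza for one node.
# OCTET is the per-VM suffix: 100 for k8master, 101/102 for the workers.
OCTET=101
OUT=$(mktemp)   # scratch file; on a real VM this goes in /etc/network/interfaces
cat > "$OUT" <<EOF
auto enp0s3
iface enp0s3 inet static
    address 192.168.56.${OCTET}
    netmask 255.255.255.0
    network 192.168.56.0
    broadcast 192.168.56.255
EOF
grep address "$OUT"
```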
Disable SWAP
> swapoff -va
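swapoff only lasts until the next reboot, and kubelet refuses to start while swap is enabled, so the swap entry in /etc/fstab should be commented out as well. A sketch against a scratch copy of fstab (the UUID and swapfile lines are illustrative):

```shell
# Comment out any swap entry so swap stays off after reboot.
# FSTAB is a scratch copy for illustration; on a real node use /etc/fstab.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
UUID=abcd-1234 /        ext4 errors=remount-ro 0 1
/swapfile      none     swap sw                0 0
EOF
sed -i '/\sswap\s/s/^/#/' "$FSTAB"
grep swap "$FSTAB"
```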
...
```
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.56.100:6443 --token gi6ugh69sqqp.jufhrmb9rrcxn95cyelc6ct7o3v3uoqp --discovery-token-ca-cert-hash sha256:6c9406ae054946f8f33122a8acf1afb9ae560d8aeffff3969c1f2218e4ddf9bb
```
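The --discovery-token-ca-cert-hash in the join command is a sha256 over the cluster CA's public key; if you lose it, it can be recomputed on the master from /etc/kubernetes/pki/ca.crt. The pipeline is shown below against a throwaway self-signed certificate so the commands are self-contained; on the master, substitute the real ca.crt path:

```shell
# Generate a throwaway self-signed cert standing in for the cluster CA
# (on the master you would use /etc/kubernetes/pki/ca.crt instead).
CRT=$(mktemp -d)/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
  -subj "/CN=demo-ca" -out "$CRT" 2>/dev/null

# Extract the public key, DER-encode it, and hash it with sha256.
HASH=$(openssl x509 -pubkey -in "$CRT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $2}')
echo "sha256:$HASH"
```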
Install Network Plugin
> sudo kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
Record the kubeadm join command!
As your non-root user:
```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Verify that your network is on the right network interface
```
kubectl get pods -o wide --all-namespaces
```
Join Worker Nodes
Use kubeadm join to join the cluster.
> kubeadm join 192.168.56.100:6443 --token gi6ugh.jufhrmb9rrcxn95c --discovery-token-ca-cert-hash sha256:6c9406ae054946f8f33122a8acf1afb9ae560d8aeffff3969c1f2218e4ddf9bb
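kubeadm tokens follow the pattern <6 chars>.<16 chars>, so a quick format check before pasting a join command can catch copy/paste damage. (If the token is lost, kubeadm token create --print-join-command on the master generates a fresh one.) A sketch, using the token from the command above:

```shell
# Validate a kubeadm bootstrap token: 6 lowercase alphanumerics, a dot,
# then 16 lowercase alphanumerics.
TOKEN="gi6ugh.jufhrmb9rrcxn95c"
echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "token format ok"
```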
Verify Everything is Working
> kubectl get pods -o wide --all-namespaces
```
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE    IP          NODE
kube-system   calico-kube-controllers-74bbfbfd85-bnpwp   0/1     Pending             0          2d3h   <none>      <none>
kube-system   coredns-86c58d9df4-2qhsk                   0/1     ContainerCreating   0          2d3h   <none>      k8master
kube-system   coredns-86c58d9df4-dff98                   0/1     ContainerCreating   0          2d3h   <none>      k8master
kube-system   etcd-k8master                              1/1     Running             1          2d3h   10.0.3.15   k8master
kube-system   kube-apiserver-k8master                    1/1     Running             1          2d3h   10.0.3.15   k8master
kube-system   kube-controller-manager-k8master           1/1     Running             1          2d3h   10.0.3.15   k8master
kube-system   kube-proxy-dgmfh                           1/1     Running             1          2d3h   10.0.3.15   k8master
kube-system   kube-proxy-t9qsg                           1/1     Running             0          2d3h   10.0.3.15   k8worker1
kube-system   kube-proxy-zhrc4                           1/1     Running             0          2d3h   10.0.3.15   k8worker2
kube-system   kube-scheduler-k8master                    1/1     Running             2          2d3h   10.0.3.15   k8master
```
The IP column should not show 10.0.3.xxx: that is the NAT interface (enp0s8). The control-plane pods should be advertising on the host-only network, 192.168.56.xxx.
Install Flannel Network Plugin
> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
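Note that flannel binds to the interface of the default route, which on these VMs is the NAT adapter (enp0s8), the same cause of the 10.0.3.x addresses above. One common fix is to add --iface=enp0s3 to the flanneld container args in kube-flannel.yml before applying it. A sketch against a trimmed excerpt of the manifest (the excerpt is illustrative, not the full file):

```shell
# Insert --iface=enp0s3 after the existing flanneld args so flannel uses
# the host-only interface. YML is a trimmed stand-in for kube-flannel.yml.
YML=$(mktemp)
cat > "$YML" <<'EOF'
      containers:
      - name: kube-flannel
        args:
        - --ip-masq
        - --kube-subnet-mgr
EOF
sed -i 's/- --kube-subnet-mgr/&\n        - --iface=enp0s3/' "$YML"
grep iface "$YML"
```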
Verify that all of your Kubernetes pods are running
> kubectl get pods --all-namespaces
```
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8zk5t           1/1     Running   0          47h
kube-system   coredns-86c58d9df4-tsftk           1/1     Running   0          47h
kube-system   etcd-k8master                      1/1     Running   1          47h
kube-system   kube-apiserver-k8master            1/1     Running   1          47h
kube-system   kube-controller-manager-k8master   1/1     Running   1          47h
kube-system   kube-flannel-ds-amd64-fl5wp        1/1     Running   0          12s
kube-system   kube-proxy-88gdq                   1/1     Running   1          47h
kube-system   kube-scheduler-k8master            1/1     Running   1          47h
```
...
Join Worker Nodes
Use kubeadm join to join the cluster.
> kubeadm join 192.168.56.100:6443 --token 69sqqp.yelc6ct7o3v3uoqp --discovery-token-ca-cert-hash sha256:03b55f52661338d761e8dd68203b738f3e126428cda239db81c2723a7bccba83
...
Verify it is all working
From the master node:
```
sudo kubectl get nodes

NAME        STATUS   ROLES    AGE     VERSION
k8master    Ready    master   47h     v1.13.1
k8worker1   Ready    <none>   12m     v1.13.1
k8worker2   Ready    <none>   6m12s   v1.13.1

kubectl get pods --all-namespaces

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8zk5t           1/1     Running   2          47h
kube-system   coredns-86c58d9df4-tsftk           1/1     Running   2          47h
kube-system   etcd-k8master                      1/1     Running   3          47h
kube-system   kube-apiserver-k8master            1/1     Running   3          47h
kube-system   kube-controller-manager-k8master   1/1     Running   3          47h
kube-system   kube-flannel-ds-amd64-fl5wp        1/1     Running   3          25m
kube-system   kube-flannel-ds-amd64-k26xv        1/1     Running   0          5m4s
kube-system   kube-flannel-ds-amd64-ncg64        1/1     Running   1          11m
kube-system   kube-proxy-88gdq                   1/1     Running   3          47h
kube-system   kube-proxy-b6m4d                   1/1     Running   0          5m4s
kube-system   kube-proxy-nxwmh                   1/1     Running   1          11m
kube-system   kube-scheduler-k8master            1/1     Running   3          47h
```
Now deploy something and verify it all works.
Install Some Example Pods
```
> kubectl create -f https://kubernetes.io/examples/application/deployment.yaml
> kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-76bf4969df-hkmjp   1/1     Running   0          2m18s
nginx-deployment-76bf4969df-x7f9h   1/1     Running   0          2m18s
```
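If you want to script this check, the STATUS column can be filtered with awk. Shown here against a captured copy of the output above; in practice you would pipe kubectl get pods straight into awk:

```shell
# Count pods in the Running state from `kubectl get pods` output.
# SAMPLE stands in for live kubectl output so the filter can be tested.
SAMPLE='NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-76bf4969df-hkmjp   1/1     Running   0          2m18s
nginx-deployment-76bf4969df-x7f9h   1/1     Running   0          2m18s'
# Skip the header row (NR>1), keep rows whose 3rd column is Running.
RUNNING=$(echo "$SAMPLE" | awk 'NR>1 && $3=="Running"' | wc -l)
echo "$RUNNING pods Running"
```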
Install Dashboard
From the master node:
```
> sudo su
> kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
> kubectl proxy
```
From your local machine:
> ssh -L 8001:127.0.0.1:8001 test@192.168.56.100
Browse to:
....
References
Reference | URL |
---|---|
Building a Kubernetes Cluster | https://medium.com/@KevinHoffman/building-a-kubernetes-cluster-in-virtualbox-with-ubuntu-22cd338846dd |
Cluster Networking | https://kubernetes.io/docs/concepts/cluster-administration/networking/ |
Flannel | https://github.com/coreos/flannel#flannel |
Dashboard | https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#using-dashboard |