...
> sudo vi /etc/hostname
Code Block |
---|
k8master |
> sudo vi /etc/hosts
Code Block |
---|
127.0.0.1       localhost
127.0.1.1       k8master

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters |
...
Set a static IP address for the host-only interface (enp0s3):
> sudo su
> vi /etc/network/interfaces
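A static configuration for the host-only adapter might look like the following. This is a sketch, assuming the interface is enp0s3 and the master's address is 192.168.56.100 (matching the `kubeadm join` command used later in this guide); adjust the address for each node.

```
# /etc/network/interfaces (excerpt) -- host-only adapter, assumed enp0s3
auto enp0s3
iface enp0s3 inet static
    address 192.168.56.100
    netmask 255.255.255.0
```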
...
Record the kubeadm join command!
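If you lose the join command, it can be regenerated on the master at any time; a sketch using kubeadm's built-in helper (by default, tokens expire after 24 hours, so an old one may no longer work):

```
# Run on the master node; prints a fresh "kubeadm join ..." command
# with a new token and the current CA cert hash.
kubeadm token create --print-join-command
```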
Install Network Plugin
> sudo kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
As your non-root user:
Code Block |
---|
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config |
Verify that your network is on the right network interface
Code Block |
---|
kubectl get pods -o wide --all-namespaces
|
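In the `-o wide` output, the IP column shows which network each pod (and node) was bound to. A small hypothetical filter to spot pods that landed on the VirtualBox NAT network (10.0.3.x) instead of the host-only one:

```shell
# Print any pod whose IP is on the 10.0.3.x NAT network.
# Column 7 of `kubectl get pods -o wide --all-namespaces --no-headers` is the pod IP.
kubectl get pods -o wide --all-namespaces --no-headers \
  | awk '$7 ~ /^10\.0\.3\./ {print $2 " is on the NAT interface: " $7}'
```

If this prints nothing, the pods are on the right interface.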
Join Worker Nodes
Use kubeadm join to join the cluster.
> kubeadm join 192.168.56.100:6443 --token gi6ugh.jufhrmb9rrcxn95c --discovery-token-ca-cert-hash sha256:6c9406ae054946f8f33122a8acf1afb9ae560d8aeffff3969c1f2218e4ddf9bb
Verify Everything is Working
> kubectl get pods --all-namespaces
Code Block |
---|
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE    IP          NODE       NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-74bbfbfd85-bnpwp   0/1     Pending             0          2d3h   <none>      <none>     <none>           <none>
kube-system   coredns-86c58d9df4-8zk5t                   0/1     Pending             0          2d3h   <none>      <none>     <none>           <none>
kube-system   coredns-86c58d9df4-dff98                   0/1     ContainerCreating   0          2d3h   <none>      <none>     <none>           <none>
kube-system   etcd-k8master                              1/1     Running             1          2d3h   10.0.3.15   k8master   <none>           <none>
kube-system   kube-apiserver-k8master                    1/1     Running             1          2d3h   10.0.3.15   k8master   <none>           <none>
kube-system   kube-controller-manager-k8master           1/1     Running             1          2d3h   10.0.3.15   k8master   <none>           <none>
kube-system   kube-proxy-dgmfh                           1/1     Running             1          2d3h   10.0.3.15   k8master   <none>           <none>
kube-system   kube-scheduler-k8master                    1/1     Running             1          2d3h   10.0.3.15   k8master   <none>           <none> |
The IP should not be 10.0.3.xxx (the VirtualBox NAT interface); the nodes should be using the host-only network instead.
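If the control-plane components did come up on 10.0.3.xxx, the usual fix is to tear the master down and re-initialize it with the API server pinned to the host-only address. A sketch, assuming the host-only IP 192.168.56.100 used in this guide (the `--pod-network-cidr` value shown is the one Flannel's default manifest expects):

```
kubeadm reset
kubeadm init --apiserver-advertise-address=192.168.56.100 \
             --pod-network-cidr=10.244.0.0/16
```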
Install Flannel Network Plugin
> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Verify that all of your kubernetes pods are running
> kubectl get pods --all-namespaces
Code Block |
---|
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8zk5t           1/1     Running   0          47h
kube-system   coredns-86c58d9df4-tsftk           1/1     Running   0          47h
kube-system   etcd-k8master                      1/1     Running   1          47h
kube-system   kube-apiserver-k8master            1/1     Running   1          47h
kube-system   kube-controller-manager-k8master   1/1     Running   1          47h
kube-system   kube-flannel-ds-amd64-fl5wp        1/1     Running   0          12s
kube-system   kube-proxy-88gdq                   1/1     Running   1          47h
kube-system   kube-scheduler-k8master            1/1     Running   1          47h |
...
Join Worker Nodes
Use kubeadm join to join the cluster.
> kubeadm join 192.168.56.100:6443 --token 69sqqp.yelc6ct7o3v3uoqp --discovery-token-ca-cert-hash sha256:03b55f52661338d761e8dd68203b738f3e126428cda239db81c2723a7bccba83
...
Verify it is all working
From the master node:
Code Block |
---|
sudo kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8master Ready master 47h v1.13.1
k8worker1 Ready <none> 12m v1.13.1
k8worker2 Ready <none> 6m12s v1.13.1
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-8zk5t 1/1 Running 2 47h
kube-system coredns-86c58d9df4-tsftk 1/1 Running 2 47h
kube-system etcd-k8master 1/1 Running 3 47h
kube-system kube-apiserver-k8master 1/1 Running 3 47h
kube-system kube-controller-manager-k8master 1/1 Running 3 47h
kube-system kube-flannel-ds-amd64-fl5wp 1/1 Running 3 25m
kube-system kube-flannel-ds-amd64-k26xv 1/1 Running 0 5m4s
kube-system kube-flannel-ds-amd64-ncg64 1/1 Running 1 11m
kube-system kube-proxy-88gdq 1/1 Running 3 47h
kube-system kube-proxy-b6m4d 1/1 Running 0 5m4s
kube-system kube-proxy-nxwmh 1/1 Running 1 11m
kube-system kube-scheduler-k8master 1/1 Running 3 47h |
Now deploy something and verify it all works.
Install Some Example Pods
Code Block |
---|
> kubectl create -f https://kubernetes.io/examples/application/deployment.yaml
> kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-76bf4969df-hkmjp 1/1 Running 0 2m18s
nginx-deployment-76bf4969df-x7f9h 1/1 Running 0 2m18s
|
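To reach the example deployment from outside the cluster, you can expose it as a NodePort service. A sketch (using the deployment name from the example manifest; the assigned port will differ on your cluster):

```
> kubectl expose deployment nginx-deployment --type=NodePort --port=80
> kubectl get svc nginx-deployment
```

Then browse to any node's host-only IP at the NodePort reported by `kubectl get svc`.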
Install Dashboard
From the master node:
Code Block |
---|
> sudo su
> kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
> kubectl proxy
|
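Logging in to the dashboard requires a bearer token. A sketch of creating an admin service account and reading its token (the name `dashboard-admin` is illustrative, and granting cluster-admin is acceptable for a lab cluster, not for production):

```
> kubectl create serviceaccount dashboard-admin -n kube-system
> kubectl create clusterrolebinding dashboard-admin \
    --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
> kubectl -n kube-system describe secret \
    $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
```

Paste the `token:` value from the output into the dashboard's login screen.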
From your local machine:
> ssh -L 8001:127.0.0.1:8001 test@192.168.56.100
Browse to:
....
References
Reference | URL |
---|---|
Building a Kubernetes Cluster | https://medium.com/@KevinHoffman/building-a-kubernetes-cluster-in-virtualbox-with-ubuntu-22cd338846dd |
Cluster Networking | https://kubernetes.io/docs/concepts/cluster-administration/networking/ |
Flannel | https://github.com/coreos/flannel#flannel |
Dashboard | https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#using-dashboard |