...

Code Block
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.13.1
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.13.1
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.13.1
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.13.1
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.2.24
[config/images] Pulled k8s.gcr.io/coredns:1.2.6

Configure OS 

Disable firewall

> systemctl disable firewalld
> systemctl stop firewalld
> systemctl status firewalld

...

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sda1 during installation
UUID=e7b204f7-9f41-42d4-b55f-292990f4137a /               ext4    errors=remount-ro 0       1
# swap was on /dev/sda5 during installation
#UUID=9ca9f4cb-876e-4e23-91a4-2f543b5537ac none            swap    sw              0       0
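
The fstab above has the swap entry commented out. If you prefer to script the change, a minimal equivalent (an assumption about your layout; adjust if your swap entry differs) is:

Code Block
# Turn swap off now and comment out swap entries so the change survives
# the reboot below (kubelet refuses to start while swap is enabled)
sudo swapoff -a
sudo sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab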


> reboot

...


Build the

...

Load Balancer

Set Hostname

> sudo hostnamectl set-hostname k8slb

> sudo hostnamectl


Update Yum

> yum update

Disable firewall

> systemctl disable firewalld
> systemctl stop firewalld
> systemctl status firewalld


Install haproxy

> yum install haproxy


> vi /etc/haproxy/haproxy.cfg

Code Block
languagebash
title/etc/haproxy/haproxy.cfg
global
...
defaults
...
#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes
  bind *:6443
  mode tcp
  default_backend kubernetes-master-nodes

#---------------------------------------------------------------------
# backend pool of Kubernetes API servers (master nodes)
#---------------------------------------------------------------------
backend kubernetes-master-nodes
  mode tcp
  balance roundrobin
  option tcp-check
  server k8smaster1 172.20.233.181:6443 check fall 3 rise 2
  server k8smaster2 172.20.233.182:6443 check fall 3 rise 2
  server k8smaster3 172.20.233.183:6443 check fall 3 rise 2


> sudo systemctl start haproxy

> sudo systemctl enable haproxy

> sudo systemctl status haproxy
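
Optionally, verify that the configuration file parses cleanly before (re)starting the service:

Code Block
# Check haproxy.cfg syntax without starting the daemon
haproxy -c -f /etc/haproxy/haproxy.cfg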


Verify that you can connect

nc -v LOAD_BALANCER_IP 6443

> nc -v 172.20.233.180 6443
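
If the connection fails, a quick check on the load balancer itself (not part of the original steps) is to confirm that haproxy is listening on 6443:

Code Block
# Show listening TCP sockets and the owning process
sudo ss -tlnp | grep 6443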


...

Build a K8sMaster1 Node

Login to your Master node

> ssh test@172.20.233.181

Set the hostname

> sudo hostnamectl set-hostname k8smaster1
> sudo hostnamectl

Generate SSH Key 

As test:

> ssh-keygen -t rsa -b 2048


Copy to other nodes

> ssh-copy-id test@172.20.233.182

...

> ssh-copy-id test@172.20.233.186
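
The elided commands above repeat ssh-copy-id for each remaining node. If your nodes occupy a contiguous range (an assumption; adjust the list to your actual addresses), a small loop saves some typing:

Code Block
# Hypothetical helper: edit the IP list to match your nodes
for ip in 172.20.233.182 172.20.233.183 172.20.233.184 172.20.233.185 172.20.233.186; do
    ssh-copy-id test@${ip}
done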


As root:

> sudo su

> ssh-keygen -t rsa -b 2048


Copy to other nodes

> ssh-copy-id test@172.20.233.182

...

> ssh-copy-id test@172.20.233.186

Create kubeadm-config file

> vi kubeadm-config.yaml

Code Block
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "k8slb.ott.dev.intra"
controlPlaneEndpoint: "k8slb.ott.dev.intra:6443"
networking:
  podSubnet: 10.244.0.0/16
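
Optionally, you can pre-pull the control-plane images against this config before running init (this is likely what produced the image list near the top of this page):

Code Block
# Pre-pull images using the same kubeadm config
sudo kubeadm config images pull --config=kubeadm-config.yaml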


Initialize Master (using Flannel)

> sudo kubeadm init --config=kubeadm-config.yaml 


Code Block
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join k8slb.ott.dev.intra:6443 --token ktemf3.pshisb9lspt1i40i --discovery-token-ca-cert-hash sha256:1e737466a59f00083a4ddf43c9fcf446a5b1cee8346afd1565d341fe5dee2c46


Record the kubeadm join command! 


As your non-root user:

Code Block
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


Verify that your network is on the right network interface

Code Block
kubectl get pods -o wide --all-namespaces

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE    IP          NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-8zk5t           0/1     Pending   0          2d3h   <none>      <none>     <none>           <none>
kube-system   coredns-86c58d9df4-tsftk           0/1     Pending   0          2d3h   <none>      <none>     <none>           <none>
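
To confirm which address the node registered with, one quick check (not part of the original steps) is:

Code Block
# The INTERNAL-IP column should show the interface you intend the cluster
# to use (e.g. 172.20.233.181), not a NAT or host-only adapter
kubectl get nodes -o wide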

Customize the Node

Set the hostname

> sudo hostnamectl set-hostname k8smaster1
> sudo hostnamectl

Initialize Master (using Flannel)

> sudo kubeadm init --apiserver-advertise-address <IP ADDRESS> --pod-network-cidr=10.244.0.0/16

Code Block
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [deepthought kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.50]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [deepthought localhost] and IPs [192.168.1.50 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [deepthought localhost] and IPs [192.168.1.50 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 26.002483 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "deepthought" as an annotation
[mark-control-plane] Marking the node deepthought as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node deepthought as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 0s0oa4.2i5lo5vyuyvbnze6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.1.50:6443 --token 0s0oa4.2i5lo5vyuyvbnze6 --discovery-token-ca-cert-hash sha256:20b8104c05927611df68ebb0eb9fbf8f65d3b85d2e57de9ecc5468e5369b9c22

Record the kubeadm join command! 

As your non-root user:

Code Block
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify that your network is on the right network interface

Code Block
kubectl get pods -o wide --all-namespaces

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE    IP          NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-8zk5t           0/1     Pending   0          2d3h   <none>      <none>     <none>           <none>
kube-system   coredns-86c58d9df4-tsftk           0/1     Pending   0          2d3h   <none>      <none>     <none>           <none>
kube-system   etcd-k8master                      1/1     Running   1          2d3h   10.0.3.15   k8master   <none>           <none>
kube-system   kube-apiserver-k8master            1/1     Running   1          2d3h   10.0.3.15   k8master   <none>           <none>
kube-system   kube-controller-manager-k8master   1/1     Running   1          2d3h   10.0.3.15   k8master   <none>           <none>
kube-system   kube-proxy-88gdq                   1/1     Running   1          2d3h   10.0.3.15   k8master   <none>           <none>
kube-system   kube-scheduler-k8master            1/1     Running   1          2d3h   10.0.3.15   k8master   <none>           <none>


Install Flannel Network Plugin

> sudo sysctl net.bridge.bridge-nf-call-iptables=1

> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml


NOTE: See https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ for details on the various plugins and their setup.
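
The sysctl setting above does not persist across reboots; to keep it (an addition, not in the original steps):

Code Block
# Persist the bridge netfilter setting
echo 'net.bridge.bridge-nf-call-iptables = 1' | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system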


Verify that all of your kubernetes pods are running

> kubectl get pods --all-namespaces

Code Block
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-86c58d9df4-8zk5t           1/1     Running   0          47h
kube-system   coredns-86c58d9df4-tsftk           1/1     Running   0          47h
kube-system   etcd-k8master                      1/1     Running   1          47h
kube-system   kube-apiserver-k8master            1/1     Running   1          47h
kube-system   kube-controller-manager-k8master   1/1     Running   1          47h
kube-system   kube-flannel-ds-amd64-fl5wp        1/1     Running   0          12s
kube-system   kube-proxy-88gdq                   1/1     Running   1          47h
kube-system   kube-scheduler-k8master            1/1     Running   1          47h

Copy Certificates to Other Master Nodes

> sudo su

> vi copyCertsToMasters.sh

Code Block
languagebash
titlecopyCertsToMasters.sh
USER=test # customizable
CONTROL_PLANE_IPS="172.20.233.182 172.20.233.183"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
    scp /etc/kubernetes/admin.conf "${USER}"@$host:
done

> chmod +x copyCertsToMasters.sh

> ./copyCertsToMasters.sh


Setup Other Master Nodes

Perform the following steps on the other master nodes (k8smaster2 and k8smaster3).

Set the hostname

> sudo su
> hostnamectl set-hostname k8smaster2   # use k8smaster3 on the third master
> hostnamectl

> reboot

Move Certificates

> ssh test@<ip of master node>

> sudo su

> vi moveFilesFromMaster.sh

Code Block
languagebash
titlemoveFilesFromMaster.sh
USER=test # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /home/${USER}/ca.crt /etc/kubernetes/pki/
mv /home/${USER}/ca.key /etc/kubernetes/pki/
mv /home/${USER}/sa.pub /etc/kubernetes/pki/
mv /home/${USER}/sa.key /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /home/${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /home/${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /home/${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
mv /home/${USER}/admin.conf /etc/kubernetes/admin.conf

> chmod +x moveFilesFromMaster.sh

> ./moveFilesFromMaster.sh

Join the Cluster

> sudo su

> kubeadm join k8slb.ott.dev.intra:6443 --token ktemf3.pshisb9lspt1i40i --discovery-token-ca-cert-hash sha256:1e737466a59f00083a4ddf43c9fcf446a5b1cee8346afd1565d341fe5dee2c46 --experimental-control-plane


Notice the addition of the --experimental-control-plane flag. This flag automates joining this control plane node to the cluster.


As your non-root user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config



Verify that the node has joined the cluster:

From k8smaster1, under the test account, issue the following command:

> kubectl get nodes


Setup Worker Nodes

Perform the following steps on all of the worker nodes (k8sworker1, k8sworker2, k8sworker3).

Set the hostname

> sudo su
> hostnamectl set-hostname k8sworker1
> hostnamectl

> reboot


Join the Cluster


From the worker nodes, issue the following command. If your token has expired, you may need to create a new one.
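
If you do need a new token, one way to generate it (run on k8smaster1; shown as a reference, not taken from the original page) is:

Code Block
# Prints a complete kubeadm join command with a fresh token
sudo kubeadm token create --print-join-command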

> sudo su

> kubeadm join k8slb.ott.dev.intra:6443 --token ktemf3.pshisb9lspt1i40i --discovery-token-ca-cert-hash sha256:1e737466a59f00083a4ddf43c9fcf446a5b1cee8346afd1565d341fe5dee2c46 


Verify that the nodes have been added by issuing the following command at the master:

> kubectl get nodes 

Code Block
NAME         STATUS   ROLES    AGE     VERSION
k8smaster1   Ready    master   27m     v1.13.4
k8smaster2   Ready    master   8m43s   v1.13.4
k8smaster3   Ready    master   7m56s   v1.13.4
k8sworker1   Ready    <none>   71s     v1.13.4
k8sworker2   Ready    <none>   163s    v1.13.4
k8sworker3   Ready    <none>   25s     v1.13.4

By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster for development, run:

...
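
The command elided above is typically the master taint removal from the upstream kubeadm documentation, shown here for reference only:

Code Block
# Allow pods to schedule on control-plane nodes (development clusters only)
kubectl taint nodes --all node-role.kubernetes.io/master-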


Install Dashboard

From the master node:

Code Block
> kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml

secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
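
One common way to reach the dashboard from a workstation (a generic example; the page's own access and token steps are elided below) is kubectl proxy:

Code Block
# Proxy the API server to localhost, then browse to:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
kubectl proxy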

...

Sign in using the token previously retrieved.

Install Sample Pod

> vi nginx-example.yaml

Code Block
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      nodePort: 31080
      name: nginx


> kubectl apply -f nginx-example.yaml

Alternatively (if you did not include the Service definition above), expose your nginx pods via a NodePort with kubectl expose:

> kubectl expose deployment nginx-deployment --type=NodePort --name=nginx


> kubectl get services 

Code Block
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        5h32m
nginx        NodePort    10.99.190.114   <none>        80:31080/TCP   2m46s


From the above we can see that the nginx service is exposed on port 31080.

Verify by issuing the following command:

> curl http://<NODE_IP>:31080


http://172.20.233.184:31080/

Troubleshooting

Reset and start all over

> sudo kubeadm reset
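
kubeadm reset does not flush iptables rules or remove your kubeconfig; a typical manual cleanup afterwards (an assumption, echoing what kubeadm reset itself prints) is:

Code Block
# Flush rules left behind by kube-proxy/flannel and drop the stale kubeconfig
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X
rm -rf $HOME/.kube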

...

Cmd           Description
hostname -I   Get the IP address
ip addr       Show IP addresses (ifconfig equivalent)



References

...