Description:

This guide walks through a basic installation of Kubernetes on a 3-node Monsoon cluster. It is based on Hasan's Kubernetes Installation on RHEL 7 on Monsoon, updated for Kubernetes 1.7 and with more detail on the individual steps. It is intended for development/testing or proof-of-concept builds, and security configuration is largely ignored; as such, this guide is NOT recommended for production environments.

This guide uses kubeadm to initialize and start the cluster.

Environment:

3 node cluster:

  • Master(1): 4 CPU / 32 GB
  • Workers(2): 4 CPU / 16 GB

OS: RedHat Enterprise Linux 7.4 (Maipo)

Kubernetes 1.7.1

Docker 1.12.6

 

[All Nodes] Pre-Installation:

In this example I am using a cluster built using SAP Converged Cloud (Monsoon) infrastructure. Many of the details can be applied regardless of the infrastructure used, but keep in mind some steps may not be needed if your cluster is not behind a proxy.

Once the three nodes are running, we’ll first update all packages from the default image.

sudo yum -y update

Once all nodes are updated reboot the machines if necessary.
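
If you're not sure whether a reboot is required (for example, after a kernel update), the needs-restarting tool from yum-utils can check for you:

sudo yum -y install yum-utils   # provides the needs-restarting tool
sudo needs-restarting -r        # exit status 1 means a reboot is required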

 

Next, we need to add all node IPs to the no_proxy environment variable. On Monsoon, all domains are added by default, but not the actual IPs; without adding them before installation, there will be issues getting containers to communicate between nodes.

Each node's own IP is already in its no_proxy list by default, so on each machine we only need to add the IPs of the other nodes in the cluster.

You can add this under /etc/environment or /etc/bashrc (if using bash shell):

export no_proxy=$no_proxy,<node_ip1>,<node_ip2>

Exit your shell and log back in, then run the command below to confirm the IPs were added correctly:

echo $no_proxy
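
For example, with placeholder worker IPs 10.0.0.12 and 10.0.0.13 added, the output should end with those entries (the default entries before them will vary by landscape):

localhost,127.0.0.1,<default_entries>,10.0.0.12,10.0.0.13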

Optionally: on Monsoon, only the lowercase proxy environment variables are set by default. I went ahead and also added uppercase variable names for applications that only check one or the other. This might not be necessary, but better safe than sorry!

All of the environment variable additions below go in /etc/bashrc on all nodes (again, make sure the IPs listed cover all of the other nodes in the cluster):

export HTTP_PROXY=$http_proxy
export HTTPS_PROXY=$https_proxy
export no_proxy=$no_proxy,10.x.x.x,10.x.x.x
export NO_PROXY=$no_proxy
Next, we'll add repositories for both Kubernetes and Docker.

Create the files under /etc/yum.repos.d/ on each RHEL instance:

vim /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

And for docker:

vim /etc/yum.repos.d/docker.repo

[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

Now import the Google package key:

rpm --import https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Refresh the repository list:

yum repolist
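
Both new repositories should appear in the listing, similar to the excerpt below (package counts will vary):

repo id          repo name             status
dockerrepo       Docker Repository     ...
kubernetes       Kubernetes            ...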

 

Now we’ll set SELinux to permissive mode and stop all firewalls on the hosts to ensure there are no communication issues between nodes:

setenforce 0
systemctl disable firewalld
systemctl stop firewalld
systemctl disable iptables
systemctl stop iptables
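
To confirm everything is actually off, the checks below should report Permissive and inactive:

getenforce                      # should print Permissive
systemctl is-active firewalld   # should print inactive (or unknown if never installed)
systemctl is-active iptables    # should print inactive (or unknown if never installed)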

 

Finally, we need to add additional disk space to the /var directory. Kubernetes uses /var for mount points, device mapping, logging, etc., so it is recommended to have at least 65 GB of free space under /var on the master node and at least 30 GB on each worker.

In some environments, just having adequate disk space will suffice; on Monsoon, however, the root directories are segregated into individual LVM partitions, so we need to attach additional drives and extend the vg0 volume group to add the space.

If you need help in extending your LVM you can see my guide here.
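
As a rough sketch, the steps look like the following (the device name /dev/vdb and the logical volume name are assumptions; check yours with lsblk and lvs before running anything):

pvcreate /dev/vdb               # initialize the newly attached disk for LVM
vgextend vg0 /dev/vdb           # add it to the existing vg0 volume group
lvextend -L +65G /dev/vg0/var   # grow the logical volume backing /var
xfs_growfs /var                 # RHEL 7 defaults to XFS; use resize2fs for ext4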

 

[All Nodes] Installation:

Now we’ll install docker and kubernetes on all nodes. The commands below pin the specific versions used in this guide; adjust them if you want a different release of either:

yum -y install docker-engine-1.12.6-1.el7.centos docker-engine-selinux-1.12.6-1.el7.centos
yum -y install kubelet-1.7.1-0 kubeadm-1.7.1-0 kubectl-1.7.1-0 kubernetes-cni-1.7.1-0
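
If you want to see which versions are available in the repositories before pinning, you can list them first:

yum --showduplicates list kubelet kubeadm kubectl docker-engine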

 

Now we’ll enable and start both docker and kubelet on all nodes:

systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
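
You can verify both services are up with the check below. Note that kubelet may restart in a loop until kubeadm init (or kubeadm join) has been run; that is expected at this stage.

systemctl is-active docker kubelet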

 

Once docker is running on all nodes, we need to add the system proxy settings to docker manually. Create a systemd drop-in directory to hold the configuration file:

mkdir -p /etc/systemd/system/docker.service.d

Copy the values of your HTTP_PROXY and NO_PROXY environment variables (if you are behind an HTTPS proxy, add HTTPS_PROXY as well) into a file named http-proxy.conf in the newly created directory:

vim /etc/systemd/system/docker.service.d/http-proxy.conf

[Service]
Environment="HTTP_PROXY=http://your_proxy:port” "NO_PROXY=your_no_proxy_list”

Once added, we need to reload daemons and restart the docker service on all nodes:

systemctl daemon-reload
systemctl restart docker

To confirm the environment variables were added, run the below command and make sure your proxy values are present:

systemctl show --property=Environment docker
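
The output should echo back the values from http-proxy.conf, for example:

Environment=HTTP_PROXY=http://your_proxy:port NO_PROXY=your_no_proxy_list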

 

[Master Only] Initialize the cluster:

Next we’ll initialize the cluster using kubeadm. Here you can specify what CIDR (Classless Inter-Domain Routing) address range you want to use for pods in the cluster:

kubeadm init --pod-network-cidr 172.16.0.0/12

The kubeadm utility should execute the necessary steps to start up your kubernetes cluster on master. If there are any errors or if the service times out, you can check journalctl or the status of kubelet/docker to get more information.
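
For example, these are the usual places to look if the init stalls:

journalctl -xeu kubelet            # jump to the end of the kubelet log
systemctl status kubelet docker    # confirm both services are still running
docker ps -a                       # see whether the control-plane containers started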

Once the cluster starts up successfully you should see a message similar to below:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token c0054c.03ab5cd2a700e674 10.x.x.x:6443

Save this output. We will need the join command and token to join other nodes to our cluster.

Follow the instructions and copy the admin.conf file to the home directory of the user who will administer the cluster (or to root’s home to avoid permissions issues if this is purely a proof-of-concept).

 

Once the file is copied to $HOME/.kube/config, we need to expose that path through an environment variable named KUBECONFIG. Edit your user’s ~/.bashrc (you could add it globally in /etc/bashrc instead, but other users may not have read access to the config file) and add:

export KUBECONFIG=$HOME/.kube/config
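
After re-entering your shell, a quick sanity check confirms kubectl can reach the API server (the master will show NotReady until a pod network is deployed in the next step):

kubectl cluster-info
kubectl get nodes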

 

Next we need to deploy an overlay network to enable pod-to-pod communication across all nodes. In this instance I’ll be using flannel, but there are many options; you can find alternatives in the official Kubernetes documentation here.

 

First, we’ll grab the YAML file used to deploy flannel:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Next we need to update the YAML file with our CIDR network (the one passed to kubeadm init). Edit the file, find the Network setting, and change it to your CIDR range:

vi kube-flannel.yml

…
"Network": "172.16.0.0/12",
…
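
For context, this setting sits in the net-conf.json block of the kube-flannel-cfg ConfigMap; in the copy of the manifest I used it looks roughly like this (the backend type in your copy may differ):

net-conf.json: |
  {
    "Network": "172.16.0.0/12",
    "Backend": {
      "Type": "vxlan"
    }
  }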

Now we’ll use kubectl to add the service/pods/roles etc to our cluster:

kubectl create -f kube-flannel.yml
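
You can watch the flannel pods (one per node) come up; kube-dns will stay Pending until the network is ready:

kubectl get pods --all-namespaces   # look for kube-flannel-ds-* pods in the Running state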

 

[Workers Only] Join the cluster:

Now, we can join the nodes to our cluster. For each node, run the join command provided by kubeadm output above (your token and master IP will be unique).

kubeadm join --token c0054c.03ab5cd2a700e674 10.x.x.x:6443

If you get a networking error, you can try manually specifying no_proxy as well:

no_proxy=10.x.x.x kubeadm join --token c0054c.03ab5cd2a700e674 10.x.x.x:6443

You should get output similar to below:

…
[discovery] Successfully established connection with API Server "10.x.x.x:6443"
…
Node join complete

Verify the nodes have joined and are in the Ready state by running the following from the master:

kubectl get nodes

NAME           STATUS    AGE       VERSION
mo-xxxxxxxx1   Ready     2d        v1.7.1
mo-xxxxxxxx2   Ready     2d        v1.7.1
mo-xxxxxxxx3   Ready     2d        v1.7.1

Administering the cluster is usually done with the powerful kubectl command. If you are unfamiliar with kubectl, you can see the official documentation here.

In part 2 of this guide we deploy the Kubernetes dashboard and helm, and set up a docker registry.
