Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
mike_dehart
This is a continuation of Kubernetes 1.7 Installation part 1.

At this point we should have a working Kubernetes cluster with all worker nodes joined and in the Ready state.

None of the steps below is strictly required, but each adds functionality and operability to the cluster.

 

[Master Only] Deploying the Dashboard:

Next, we'll deploy the Kubernetes dashboard which provides a graphical way of administering the cluster.

We'll need to make sure we use a version of the dashboard compatible with our version of Kubernetes. Grabbing the 'latest' dashboard YAML will most likely lead to compatibility issues unless you happen to be running the latest Kubernetes version as well.

Since I am using Kubernetes 1.7.1, I will apply the YAML file below:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.7.1/src/deploy/recommended/kubernetes-dash...

For a full list of dashboard versions, see the Kubernetes dashboard releases page.

 

Versions of Kubernetes dashboard 1.7.x and above include additional security features that can make it difficult to access outside of the cluster. You can read more about proper dashboard accessibility here.

Since this is for development/testing, we can grant full admin privileges to the dashboard service account and avoid setting up tokens; just be warned that this can pose a security risk.

Create a yaml file called dashboard-admin.yaml and paste in the below role binding authorization:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Save the file and run the below command to deploy it:
kubectl create -f dashboard-admin.yaml

This will provide a "skip" option on the login page when accessing the dashboard.

 

There are a number of ways to access the dashboard. I find the NodePort option to be the most hassle-free, but both are described below:

 

Using kubectl proxy:

Install kubectl on your local machine and copy over the cluster's kubeconfig, then run kubectl proxy. The dashboard is then reachable through the proxy's local port (8001 by default).

Configure and use NodePort:

  • Edit the kubernetes-dashboard service to use NodePort rather than ClusterIP:
    kubectl edit svc kubernetes-dashboard -n kube-system​


  • Change type as follows:
    ...
    sessionAffinity: None
    -- type: ClusterIP
    ++ type: NodePort​


  • Issue a describe command to see the value of the port:
    kubectl describe service kubernetes-dashboard -n kube-system​


  • Look for the value of NodePort:
    NodePort:               <unset> 30006/TCP


  • Determine the master IP used by the cluster:
    kubectl cluster-info


  • Access the dashboard at:
    https://<master_IP>:<NodePort_port>
    *NOTE: NodePort is exposed via HTTPS. Make sure you access the dashboard using https protocol.
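The NodePort lookup above can also be scripted. A minimal sketch that parses the port out of the describe output (shown here against a sample line copied from above, so it runs without a live cluster):

```shell
# Parse the node port out of `kubectl describe service` output.
# With a live cluster you would pipe in the output of
#   kubectl describe service kubernetes-dashboard -n kube-system
# instead of the sample line below.
sample='NodePort:               <unset> 30006/TCP'
port=$(echo "$sample" | awk '/NodePort:/ {print $3}' | cut -d/ -f1)
echo "Dashboard URL: https://<master_IP>:${port}"
# -> Dashboard URL: https://<master_IP>:30006
```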


 

Note: the shortcut http://<node_fqdn>:<port>/ui has been deprecated. It is recommended to use the full URL above when using kubectl proxy.

If dashboard-admin.yaml was correctly applied, the login page should have a "Skip" button that allows quick access to the dashboard.

 

[Master Only] Install Helm:

Helm is a tool that streamlines installing and managing Kubernetes applications. The official docs liken it to apt/yum/homebrew for Kubernetes.

You can find more information as well as which versions are tested with specific Kubernetes releases on their official Github page.

I'll be using Helm 2.6.2 in this installation as 2.6 has been tested against Kubernetes 1.7.

 

First, download the Helm package:
wget https://kubernetes-helm.storage.googleapis.com/helm-v2.6.2-linux-amd64.tar.gz

Make a directory for helm and unpack the download media:
mkdir /opt/helm
tar -xzvf helm-v2.6.2-linux-amd64.tar.gz -C /opt/helm

Add helm to your PATH environment variable to access the executable:
vi /etc/bashrc     #or wherever you want to add the environment variable

export PATH=$PATH:/opt/helm/linux-amd64/

Exit and re-enter your shell to confirm the path has been added.
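The PATH change can be verified without touching the real install. A minimal sketch using a throwaway stub binary under /tmp (an assumption for the demo; the real install uses /opt/helm/linux-amd64), so it is safe to run anywhere:

```shell
# Create a stub 'helm' executable to simulate the PATH setup.
mkdir -p /tmp/helm-demo/linux-amd64
printf '#!/bin/sh\necho "helm stub v2.6.2"\n' > /tmp/helm-demo/linux-amd64/helm
chmod +x /tmp/helm-demo/linux-amd64/helm

# Prepend for the demo so the stub wins even if a real helm is installed;
# appending (as in the article) works the same when no other helm exists.
export PATH=/tmp/helm-demo/linux-amd64:$PATH
helm   # -> helm stub v2.6.2
```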

Finally initialize helm:
helm init

[Master Only - Optional] Add helm cluster roles:

If you intend to install Vora or Data Hub on the Kubernetes cluster, this step is needed; otherwise the Vora installation will fail during 'get configmaps'.

Run the below commands to create a service account and cluster role binding for tiller (helm's server-side agent) and apply them to helm/tiller:
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
helm init --service-account tiller --upgrade

 

[All Nodes] Set Up the Docker Registry:

The Docker registry is a stateless server-side application that allows us to store and distribute Docker images within our cluster.

[Single selected node] Run the below command to start the docker registry on port 5000:
docker run -d -p 5000:5000 --restart=always --name registry -v /regdata:/var/lib/registry registry:latest
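To push an image into this registry, it must be tagged with the registry's host and port as a prefix. A sketch of the naming convention (the hostname is a placeholder; the docker commands themselves are left as comments since they require the registry container above to be running):

```shell
# Assumption: master-node.example.com is the node running the registry container.
REGISTRY=master-node.example.com:5000
IMAGE=busybox

# Images destined for the private registry are named <host>:<port>/<image>:
echo "${REGISTRY}/${IMAGE}"   # -> master-node.example.com:5000/busybox

# Typical workflow against the running registry (not executed here):
#   docker tag ${IMAGE} ${REGISTRY}/${IMAGE}
#   docker push ${REGISTRY}/${IMAGE}
#   docker pull ${REGISTRY}/${IMAGE}    # from any node configured below
```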

 

[All nodes] Configure nodes to use the registry. On each node we need to add an INSECURE_REGISTRY option to docker and restart the service:

Note: the node and port below should reflect the node/port where the docker registry was created above.
vi /etc/sysconfig/docker

INSECURE_REGISTRY="--insecure-registry=<registry_node>:5000"

Next, we have to add this as an environment file to the docker service itself and use our INSECURE_REGISTRY environment variable at startup.
vi /usr/lib/systemd/system/docker.service

EnvironmentFile=/etc/sysconfig/docker
ExecStart=/usr/bin/dockerd $INSECURE_REGISTRY

Finally, reload the systemd daemon and restart the docker service:
systemctl daemon-reload
systemctl restart docker

 

Now you should have a sufficient Kubernetes cluster for testing / development. Happy Kubeing!