Personal Insights
Chank Chen

Certified Kubernetes Administrator – CKA Best Practice

Background:

    As our company strategy mentions, the cloud is playing an increasingly important role in SAP's products, so it is necessary for everyone who contributes to these products to become familiar with cloud technologies.

    Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. It has become the de facto standard for container orchestration, and our cloud products run on it.

    I have not been working with Kubernetes for long; before that, the main IaaS solutions I handled were OpenStack and VMware. After preparing for almost a month, I recently managed to pass the CKA (Certified Kubernetes Administrator) examination and got certified by the CNCF.

    During my learning path, I set up some scenarios to get familiar with Kubernetes. These mock questions cover most of the knowledge points that you may encounter in the real certification, so I am noting the questions and my own solution to each subtask here, for your reference.

Introduction of CKA:

  • Who Is It For

          This certification is for Kubernetes administrators, cloud administrators and other IT professionals who manage Kubernetes instances.
  • About This Certification

          CKA was created by The Linux Foundation and the Cloud Native Computing Foundation (CNCF) as a part of their ongoing effort to help develop the Kubernetes ecosystem. The exam is an online, proctored, performance-based test that requires solving multiple tasks from a command line running Kubernetes.
  • What It Demonstrates

           A certified K8s administrator has demonstrated the ability to do basic installation as well as configuring and managing production-grade Kubernetes clusters. They will have an understanding of key concepts such as Kubernetes networking, storage, security, maintenance, logging and monitoring, application lifecycle, troubleshooting, API object primitives and the ability to establish basic use-cases for end users.

Main topics and hands-on samples:

First question: RBAC reference document

Context

You have been asked to create a new ClusterRole for a deployment pipeline and bind it to a specific ServiceAccount scoped to a specific namespace.

Task

Create a new ClusterRole named deployment-clusterrole, which only allows the creation of the following resource types:

  • Deployment
  • StatefulSet
  • DaemonSet

Create a new ServiceAccount named cicd-token in the existing namespace app-team1.

Bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token, limited to the namespace app-team1.

$ kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
$ kubectl create namespace app-team1   # only needed in a practice cluster; the exam namespace already exists
$ kubectl -n app-team1 create serviceaccount cicd-token
$ kubectl -n app-team1 create rolebinding cicd-token-binding --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
$ kubectl -n app-team1 describe rolebindings.rbac.authorization.k8s.io cicd-token-binding   # verify the binding
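To double-check that the binding grants exactly what the task asks for, kubectl auth can-i can impersonate the ServiceAccount:

$ kubectl auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # expect "yes"
$ kubectl auth can-i delete deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # expect "no"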

 

Second question: node maintenance

Task

Set the node named ek8s-node-1 as unavailable and reschedule all the pods running on it. (If you need to evict the pods from the node, refer to the detailed documentation of kubectl drain.)

$ kubectl cordon ek8s-node-1   # mark the node as unschedulable with the cordon subcommand
$ kubectl drain ek8s-node-1 --delete-local-data --ignore-daemonsets --force   # safely evict all pods from the node (newer releases use --delete-emptydir-data instead)
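To double-check the result before moving on, something like the following can be used:

$ kubectl get nodes                                  # ek8s-node-1 should now show SchedulingDisabled
$ kubectl get pods -A -o wide | grep ek8s-node-1     # only DaemonSet-managed pods should remain on the node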

 

Third question: upgrade Kubernetes node

Task

Given an existing Kubernetes cluster running version 1.21.1, upgrade all of the Kubernetes control plane and node components on the master node only to version 1.22.1.

You are also expected to upgrade kubelet and kubectl on the master node.


$ kubectl config use-context mk8s
$ kubectl get node
$ kubectl cordon mk8s-master-1
$ kubectl drain mk8s-master-1 --delete-local-data --ignore-daemonsets --force
$ ssh mk8s-master-1
$ sudo -i
# apt-get update && apt-get install -y kubeadm=1.22.1-00
# kubeadm version
# kubeadm upgrade plan
# kubeadm upgrade apply v1.22.1 --etcd-upgrade=false
# apt-get install -y kubelet=1.22.1-00 kubectl=1.22.1-00
# systemctl daemon-reload
# systemctl restart kubelet
# kubelet --version
# kubectl version --client
# systemctl status kubelet
# exit
$ exit
$ kubectl uncordon mk8s-master-1
$ kubectl get node   # ensure only the master node has been upgraded to v1.22.1

 

Fourth Question: etcd backup and restore reference document

Task

First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to /srv/data/etcd-snapshot.db.
Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db.

The question will give you the certificate paths of etcd.

Backup to the specified path, save according to the specified file name:

$ export ETCDCTL_API=3  # set api version to 3
$ etcdctl --endpoints 127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db
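The snapshot can be checked right away with etcdctl snapshot status (with ETCDCTL_API=3 still exported):

$ etcdctl --write-out=table snapshot status /srv/data/etcd-snapshot.db   # shows hash, revision, total keys and size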

 

Use the specified file to restore

$ export ETCDCTL_API=3 # set api version to 3
$ etcdctl --endpoints 127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db
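Depending on how etcd is run in the exam environment, the restore may need to be written to a fresh data directory and the etcd static pod pointed at it afterwards. A sketch, where /var/lib/etcd-restore is my own example path:

$ etcdctl --endpoints 127.0.0.1:2379 --cacert=/opt/KUIN00601/ca.crt --cert=/opt/KUIN00601/etcd-client.crt --key=/opt/KUIN00601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd-restore
# then update the hostPath of the etcd data volume in /etc/kubernetes/manifests/etcd.yaml to /var/lib/etcd-restore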

 

Fifth question: Create Network Policy reference document

Task

Create a new NetworkPolicy named allow-port-from-namespace that allows Pods in the existing namespace internal to connect to port 9000 of other Pods in the same namespace.

Ensure that the new NetworkPolicy:

  • does not allow access to Pods not listening on port 9000
  • does not allow access from Pods not in namespace internal

Sample YAML file:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}
    ports:
    - protocol: TCP
      port: 9000

A common variant of this question only allows traffic from Pods in a namespace labelled project=corp-bar. First check (and, if needed, add) that label on the namespace, then select it with a namespaceSelector:

kubectl describe ns corp-bar                  # check the labels on namespace corp-bar
kubectl label ns corp-bar project=corp-bar    # add the label if it is missing

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: internal
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: corp-bar
    ports:
    - protocol: TCP
      port: 9000
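Whichever variant applies, the policy can be applied and inspected as follows (the file name networkpolicy.yaml is my own choice):

$ kubectl apply -f networkpolicy.yaml
$ kubectl -n internal describe networkpolicy allow-port-from-namespace   # confirm the ingress rule and port 9000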

 

Sixth question: Create Service Reference Document

Task

Reconfigure the existing deployment front-end and add a port specification named http exposing port 80/tcp of the existing container nginx.

Create a new service named front-end-svc exposing the container port http.

Configure the new service to also expose the individual Pods via a NodePort on the nodes on which they are scheduled.

$ kubectl config use-context k8s 
$ kubectl expose deployment front-end --port=80 --target-port=80 --protocol=TCP --type=NodePort --name=front-end-svc
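kubectl expose creates the Service, but the task also asks for a port specification named http on the existing nginx container. A minimal sketch of the section to add via kubectl edit deployment front-end (only the ports block is new; the rest of the spec stays as it is):

spec:
  template:
    spec:
      containers:
      - name: nginx
        ports:
        - name: http
          containerPort: 80
          protocol: TCP

With the named port in place, --target-port=http can also be used instead of the numeric port when exposing the service.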

 

Seventh question: Create Ingress

Task

Create a new nginx ingress resource as follows:

  • Name: pong
  • Namespace: ing-internal
  • Exposing service hi on path /hi using service port 5678
Sample YAML file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pong
  namespace: ing-internal
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /hi
        pathType: Prefix
        backend:
          service:
            name: hi
            port:
              number: 5678
Then we can simply deploy and verify with the steps below:
$ kubectl apply -f pong-ingress.yaml
$ kubectl get ingress pong -n ing-internal      # confirm the ingress exists
$ kubectl get pod -n ing-internal -o wide       # fetch the IP address of the ingress
$ curl -kL <internal-ip>/hi                     # a response of "hi" means OK

 

Eighth question: Expand deployment

Task

Scale the deployment webserver to 6 pods.

$ kubectl config use-context k8s
$ kubectl scale deployment webserver --replicas=6
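A quick check afterwards (the deployment should report 6/6 ready replicas):

$ kubectl get deployment webserver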

 

Ninth question: Deploy the pod to the specified node

Task

Schedule a pod as follows:

  • Name: nginx-kusc00401
  • Image: nginx
  • Node selector: disk=spinning
Sample YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disk: spinning

Then we can generate the skeleton, add the nodeSelector, and deploy and verify with the steps below:

$ kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > pod-nginx.yaml   # add the nodeSelector section shown above
$ kubectl apply -f pod-nginx.yaml
$ kubectl get po nginx-kusc00401 -o wide  # verify the pod is running on a node labelled disk=spinning

 

 

Tenth question: Inspect the number of the healthy nodes

Task

Check to see how many nodes are ready (not including nodes tainted NoSchedule) and write the number to /opt/CKA0402/kucc1.txt

$ kubectl config use-context k8s
$ kubectl get nodes | grep -w Ready | wc -l                          # number of Ready nodes
$ kubectl describe nodes | grep -i taints | grep -c -i noschedule    # number of nodes tainted NoSchedule
$ echo $num > /opt/CKA0402/kucc1.txt    # $num = Ready nodes minus NoSchedule-tainted nodes
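If you prefer to inspect the taints of all nodes in one shot, a jsonpath query such as the following can help (a sketch; each node's Ready condition still needs to be checked, e.g. with kubectl get nodes):

$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].effect}{"\n"}{end}'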

 

Eleventh question: Create a pod with multiple containers

Task

Create a pod named kucc1 with a single app container for each of the following images running inside (there may be between 1 and 4 images specified): nginx+redis+memcached+consul.

Sample YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: kucc1
spec:
  containers:
  - name: nginx
    image: nginx
  - name: redis
    image: redis
  - name: memcached
    image: memcached
  - name: consul
    image: consul

Then generate a pod skeleton with kubectl run, add the remaining containers as in the YAML above, and apply it:

$ kubectl run kucc1 --image=nginx --dry-run=client -o yaml > pod-kucc1.yaml   # then add the redis, memcached and consul containers
$ kubectl apply -f pod-kucc1.yaml

 

Twelfth question: Create Persistent Volume

Task

Create a persistent volume with name app-config, of capacity 2Gi and access mode ReadWriteMany. The type of volume is hostPath and its location is /srv/app-config.

Sample YAML file:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-config
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/srv/app-config"
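Assuming the manifest is saved as pv.yaml (the file name is my own choice), it can be applied and verified with:

$ kubectl apply -f pv.yaml
$ kubectl get pv app-config   # STATUS should be Available, CAPACITY 2Gi, ACCESS MODES RWX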

 

Thirteenth question: Create PersistentVolumeClaim

Task

Create a new PersistentVolumeClaim:

  • Name: pv-volume
  • Class: csi-hostpath-sc
  • Capacity: 10Mi

Create a new Pod which mounts the persistentVolumeClaim as a volume:

  • Name: web-server
  • Image: nginx
  • Mount path: /usr/share/nginx/html

Configure the new Pod to have ReadWriteOnce access on the volume.

Finally, using kubectl edit or kubectl patch expand the PersistentVolumeClaim to a capacity of 70Mi and record that change.

Sample YAML for PVC and Pod deployment:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Mi
 
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: pv-volume
  containers:
    - name: web-server
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

 

Run the commands below to apply and verify:

$ kubectl apply -f pv-volume-pvc.yaml
$ kubectl get pvc pv-volume             # verify the PVC is Bound
$ kubectl edit pvc pv-volume --record   # modify the requested storage from 10Mi to 70Mi
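Note that expanding a PVC only works when its StorageClass allows it; the exam cluster is expected to have csi-hostpath-sc configured for this already. For reference, the relevant field looks like the sketch below (the provisioner shown is the upstream CSI hostpath driver and may differ in the exam cluster):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
provisioner: hostpath.csi.k8s.io
allowVolumeExpansion: true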

 

Fourteenth question: Monitor pod logs

Task

Monitor the logs of pod foobar and:

  • Extract log lines corresponding to error unable-to-access-website
  • Write them to /opt/KUCH901/foobar
$ kubectl config use-context k8s
$ kubectl logs foobar | grep unable-to-access-website > /opt/KUCH901/foobar

 

Fifteenth question: Add a sidecar container

Context

Without changing its existing containers, an existing Pod needs to be integrated into Kubernetes's built-in logging architecture (e.g. kubectl logs). Adding a streaming sidecar container is a good and common way to accomplish this requirement.

Task

Add a busybox sidecar container to the existing Pod legacy-app. The new sidecar container has to run the following command:

/bin/sh -c tail -n+1 -f /var/log/legacy-app.log

Use a volume mount named logs to make the file /var/log/legacy-app.log available to the sidecar container.

TIPS
Don’t modify the existing container.
Don’t modify the path of the log file, both containers
must access it at /var/log/legacy-app.log

Sample YAML file:

apiVersion: v1
kind: Pod
metadata:
  name: legacy-app
spec:
  containers:
  - name: count
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$i: $(date)" >> /var/log/legacy-app.log;
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: busybox
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/legacy-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}

Then run the commands below:

$ kubectl config use-context k8s
$ kubectl get po legacy-app -o yaml > 15.yaml   # export the existing pod, then add the sidecar container and the logs volume mount
$ kubectl delete -f 15.yaml                     # containers cannot be added to a running pod, so recreate it
$ kubectl apply -f 15.yaml
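Once the pod is recreated, the sidecar output can be checked with kubectl logs (the container name busybox matches the sidecar defined above):

$ kubectl logs legacy-app -c busybox   # should stream the contents of /var/log/legacy-app.log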

 

 

Sixteenth question: Inspect the pod that consumes the highest CPU workloads

Task

From the pod label name=cpu-user, find pods running high CPU workloads and write the name of the pod consuming the most CPU to the file /opt/CKA00321/CKA00123.txt (which already exists).


 

$ kubectl config use-context k8s
$ kubectl top pod -l name=cpu-user -A
    NAMESPACE   NAME         CPU(cores)   MEMORY(bytes)
    default     cpu-user-1   45m          6Mi
    default     cpu-user-2   38m          6Mi
    default     cpu-user-3   35m          7Mi
    default     cpu-user-4   32m          10Mi
$ echo 'cpu-user-1' >> /opt/CKA00321/CKA00123.txt
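A handy alternative is to let kubectl sort the output by CPU directly (supported by recent kubectl versions):

$ kubectl top pod -l name=cpu-user -A --sort-by=cpu   # the first row is the pod consuming the most CPU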

 

Remarks:

  • The real exam includes 17 hands-on tasks and you have to finish all of them in 2 hours. For an experienced k8s user, I think an hour and a half would be enough.
  • In my case, the only way to prepare for it was to read the official documents and practice again and again.
  • You have to inspect every specific word in the question, otherwise you may receive 0 points for that task (e.g. forgetting to add --record, or forgetting to switch the context).

 

Last but not least, good luck with your CKA exam!

You will get notified by the Linux Foundation and receive a PDF certificate like the one below about 24 hours after completion (a score of over 66 points out of 100 is required to pass).


my certificate
