#HelloWorld SAPUI5 meets Kubernetes – Persistent Storage
All Blogs in the series
Part | Description
1 | Docker Set up by Ronnie André Bjørvik Sletta
2 | Hello World – Containers
3 | Container meet Orchestrator
4 | Database integration
5 | Persistent Storage
6 | Secrets and Configurations
Background
In the previous blog we deployed a SAPUI5 application pod connected to a MySQL pod, wired together with services. The problem was that the database changes were not persistent: the moment you delete or kill the MySQL pod, the data vanishes. So in this blog we will explore one of the ways to persist our data, with a demo.
How to make our data persistent?
Kubernetes uses the concept of a volume to persist data on disk. It provides different options such as hostPath, emptyDir, NFS, persistent volume claims etc. For example, an emptyDir volume is created as soon as the pod is assigned to a node and vanishes once the pod is removed from that node for any reason. So we have different types of volumes available, each relevant for a different situation.
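Just as an illustration, a pod using an emptyDir volume could look like the minimal sketch below (the pod name, image and mount path are placeholders of my own, not part of our demo); anything written to /cache is lost once the pod is removed from its node.
apiVersion: v1
kind: Pod
metadata:
  name: cache-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: scratch
          mountPath: /cache   # scratch space, tied to the pod's lifetime on the node
  volumes:
    - name: scratch
      emptyDir: {}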
In this blog we will use one of the most common methods: a persistent volume claim, referred to as a PVC. This type of claim is created at the cluster level (in the minikube case; in the cloud it can refer to a remote disk depending on the persistent volume type), whereas an emptyDir volume lives at the node level. A persistent volume claim is internally bound to a persistent volume (PV), which specifies what kind of storage it is, whether NFS, hostPath etc. The PVC is essentially the binding between a pod and a persistent volume. As an analogy: say you have 10 GB of disk space in total, which is our PV; a request to use part of that space, for example 1 GB, is the PVC. The persistent volume itself can be of different types such as a Google Cloud persistent disk, a cluster disk etc.
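To make the PV/PVC split concrete, a manually defined persistent volume could look roughly like the sketch below (hostPath is only suitable for local experiments such as minikube; the name, path and size are illustrative and not part of our demo). A claim for, say, 1 Gi would then bind against a volume like this, in the same way the 5 Gi claim we create later binds against a volume provisioned by the cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 10Gi          # the total space on offer, i.e. the "10 GB" from the analogy
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/demo-pv    # backing directory on the node, for local clusters only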
Just to recap at a high level: a k8s cluster has nodes, and each node in turn has pods running inside it. The node has storage of its own, and so do the pod and the cluster; how long that storage stays available depends on the respective lifecycle.
Current situation
As of now we have not added any volume-related information to our deployment, so each time the pod is deleted the data vanishes, as can be seen in the demo below. There we first create the MySQL deployment without any claim -> MySQL service -> create some database objects -> delete the pod -> the data is no longer available to the new pod.
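In kubectl terms the data-loss scenario looks roughly like this (the manifest file names and the pod name placeholders are illustrative):
kubectl apply -f mysql-deployment.yaml               # deployment without any volume section
kubectl apply -f mysql-service.yaml
kubectl get pods -l app=mysql                        # note the pod name
kubectl exec -it <mysql-pod> -- mysql -uroot -p      # create a database and a table
kubectl delete pod <mysql-pod>                       # the deployment starts a replacement pod
kubectl exec -it <new-mysql-pod> -- mysql -uroot -p  # the database objects are gone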
So let's make our database persistent
Let's first create a persistent volume claim (PVC). In the claim we define how much space we need along with the access modes. An important point to note is that we are not defining any persistent volume here, as it will be provisioned automatically by the infrastructure. If we had special requirements for a particular type of storage, we would need to create the persistent volume (PV) first, followed by the claim (PVC).
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-volumeclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
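Assuming the claim above is saved as mysql-volumeclaim.yaml (the file name is my choice), it can be created and checked like this:
kubectl apply -f mysql-volumeclaim.yaml
kubectl get pvc mysql-volumeclaim    # STATUS changes to Bound once a volume is provisioned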
Now let's adapt our deployment to include the volume information. The volumeMounts section at the end specifies the mount path inside the container, and the associated volumes section declares a volume of type persistentVolumeClaim referencing the claim we created before.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - image: mysql:5.6
          name: mysql
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: sql
                  key: password
          ports:
            - containerPort: 3306
              name: mysql
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-volumeclaim
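Note that the manifest expects a secret named sql with a key password to already exist in the cluster (we used it for the MySQL root password earlier in the series). If it is missing in your cluster, one way to recreate it is along these lines, with a password value of your own:
kubectl create secret generic sql --from-literal=password=<your-password>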
Now, to access the pod, all we need is a service, just like last time.
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  type: ClusterIP
  ports:
    - port: 3306
  selector:
    app: mysql
Live demo showcasing creation of the PVC -> deployment of MySQL -> MySQL service -> creation of database artefacts -> deleting the pod -> the new pod still has access to the old data.
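The same flow on the command line could look roughly like this (the manifest file names and pod name placeholders are illustrative):
kubectl apply -f mysql-volumeclaim.yaml
kubectl apply -f mysql-deployment.yaml
kubectl apply -f mysql-service.yaml
kubectl get pods -l app=mysql                        # note the pod name
kubectl exec -it <mysql-pod> -- mysql -uroot -p      # create some database artefacts
kubectl delete pod <mysql-pod>                       # a replacement pod is scheduled
kubectl exec -it <new-mysql-pod> -- mysql -uroot -p  # the artefacts are still there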
What is next?
Now we have an idea of how to create a persistent deployment in a local minikube cluster. Next we will explore StatefulSets (a step beyond), GKE, Kyma, Gardener, upgrades, other types of volumes, RBAC etc. We will be sharing the experience as we move further on this journey. Feel free to provide your feedback.