Technology Blogs by Members
KishoreKumarV
Advisor
Unless you have been living under a rock for the last two years, you have certainly heard these buzzwords around you. I have been developing for a decade, and writing a simple Hello World or a database select was never a challenge before: all it took was installing some software and copying a few lines of code from a book or the internet. This time, however, it took me a while to understand each of these technology stacks and run some basic programs. Though a basic program is not a typical scenario for these technologies, it is a good start towards understanding each of them before building a complex scenario.

Disclaimer:

I am not a Linux guy; I develop applications on the Windows operating system, and it was initially tough for me to understand and remember even simple commands. For this blog, I have done all the hands-on work on my local Windows 10 machine running Hyper-V virtual machines.

If you are a beginner looking forward to learning these buzzword technologies, I can help you here. The intention of this blog is not to dive deep into each of them but to give you an end-to-end experience. However, I will try to give some references to learn more in detail.

 

Let me start with some terminology.

Microservices is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.



To read more in detail https://martinfowler.com/articles/microservices.html

Containers are a method of operating system virtualization that allow you to run an application and its dependencies in resource-isolated processes. Containers allow you to easily package an application's code, configurations, and dependencies into easy to use building blocks that deliver environmental consistency, operational efficiency, developer productivity, and version control. Containers can help ensure that applications deploy quickly, reliably, and consistently regardless of deployment environment. Containers also give you more granular control over resources giving your infrastructure improved efficiency.

Container Image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings. Available for both Linux and Windows based apps, containerized software will always run the same, regardless of the environment. Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.

Cluster is a group of servers and other resources that act like a single system and enable high availability and, in some cases, load balancing and parallel processing.

 

Now let’s look at the technologies.

Docker is the world’s leading software container platform. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers. Operators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Enterprises use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server apps.

To read more in detail https://www.docker.com/

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

To read more in detail https://kubernetes.io/

 

As a developer, when you want to develop day-to-day on your local machine, you need some software packages. Here I have listed them:

Docker for Windows

An integrated, easy-to-deploy development environment for building, debugging and testing Docker apps on a Windows PC. Docker for Windows is a native Windows app deeply integrated with Hyper-V virtualization, networking and file system, making it the fastest and most reliable Docker environment for Windows.

Get it from https://store.docker.com/editions/community/docker-ce-desktop-windows

Kubernetes on Windows

I am not going to use the Kubernetes tool ‘minikube’ in this blog, as I chose another platform. If you are looking forward to working with Google Cloud Platform, you can start with minikube; refer here https://kubernetes.io/docs/getting-started-guides/minikube/. However, irrespective of the local setup you use to develop, you can run the containerized production application anywhere, as the base technology is Docker.

OpenShift Origin

Origin is the upstream community project that powers OpenShift. Built around a core of Docker container packaging and Kubernetes container cluster management, Origin is also augmented by application lifecycle management functionality and DevOps tooling. Origin provides a complete open source container application platform.

Minishift is a tool that helps you run OpenShift locally by launching a single-node OpenShift cluster inside a virtual machine. With Minishift you can try out OpenShift or develop with it, day-to-day, on your local machine. You can run Minishift on Windows, Mac OS, and GNU/Linux operating systems. Minishift uses libmachine for provisioning virtual machines, and OpenShift Origin for running the cluster.  Refer here https://www.openshift.org/minishift/

If you have followed my older blogs, you know I am a fan of Red Hat’s OpenShift because of the community, documentation, and user experience. Here I chose Minishift over minikube because of some technical challenges I had running Minikube with Hyper-V, whereas Docker for Windows requires Hyper-V. Minishift and Docker for Windows can both run in parallel on Hyper-V with no issues.

Get it from https://github.com/minishift/minishift/releases

 

Enough of talking, let’s get started.

If Hyper-V is not enabled, you can enable it under Windows Features:

Search -> Windows Features -> Turn Windows Features on or off
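
If you prefer the command line, the same can be done from an elevated PowerShell prompt; a minimal sketch using the standard Windows feature cmdlet (a reboot is required afterwards):

```powershell
# Enable Hyper-V and its management tools (run PowerShell as Administrator)
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
```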



Install Docker for Windows and make sure it’s running



Look at the VM created at the Hyper-V Manager

Search -> Hyper-V Manager



Before you install MiniShift, you should add a Virtual Switch using the Hyper-V Manager. Make sure that you pair the virtual switch with a network card (wired or wireless) that is connected to the network.

In Hyper-V Manager, select Virtual Switch Manager... from the 'Actions' menu on the right.

Under the 'Virtual Switches' section, select New virtual network switch.

Under 'What type of virtual switch do you want to create?', select External.

Select the Create Virtual Switch button.

Under ‘Virtual Switch Properties’, give the new switch a name such as External VM Switch.

Under ‘Connection Type’, ensure that External Network has been selected.

Select the physical network card to be paired with the new virtual switch. This is the network card that is physically connected to the network.



Select Apply to create the virtual switch. At this point you will most likely see a warning that pending changes may disrupt your network connectivity.

Click Yes to continue.



Select OK to close the Virtual Switch Manager Window.
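
The same external switch can also be created in PowerShell; a sketch, assuming your physical adapter is named ‘Ethernet’ (check the actual name with Get-NetAdapter):

```powershell
# List physical adapters to find the one connected to your network
Get-NetAdapter
# Create an external virtual switch bound to that adapter
New-VMSwitch -Name "External VM Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true
```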

Place minishift.exe in the C: folder

Open Powershell as an Administrator and execute the following statement
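
The statement is minishift start with the Hyper-V driver; a sketch, assuming the virtual switch created earlier is named ‘External VM Switch’:

```powershell
# Start a single-node OpenShift cluster inside a Hyper-V VM
PS C:\> .\minishift.exe start --vm-driver hyperv --hyperv-virtual-switch "External VM Switch"
```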



On successful creation of the Minishift VM, you get the server URL and login details.



You can also view the minishift VM created under the Hyper-V Manager



You can access the OpenShift server using the web console via the HTTPS URL.



By default, a project named ‘My Project’ is created. You can choose to create a new project.



 

Let’s start the real development. I would like to build a simple Hello World node app and containerize the same and build the image and deploy it to OpenShift Origin.

Create a folder on your C: drive where you can place all your apps. In my case, I created a folder ‘home’ on the C: drive and, inside it, a folder named ‘nodejs-docker-webapp’ for this app.

I have followed the example from Node.js; I don’t want to get into the details of it here. You can refer here https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
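
For reference, the guide’s app boils down to a package.json, a small Express server listening on port 8080, and a Dockerfile roughly along these lines (a sketch of the guide’s example; adjust the Node version to what is current for you):

```dockerfile
# Dockerfile (abbreviated from the nodejs.org Docker guide)
FROM node:8
WORKDIR /usr/src/app
# Install dependencies first so they are cached across code changes
COPY package*.json ./
RUN npm install
# Copy the application source
COPY . .
EXPOSE 8080
CMD [ "npm", "start" ]
```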



For ease of development, I suggest you use Docker for Windows initially; later you can switch to the Docker daemon on the Minishift VM. Build your container image based on the Dockerfile in the current folder.
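
On my machine the build step looks like this (the image name ‘nodejs-docker-webapp’ is my choice; pick any name):

```powershell
# Build the image from the Dockerfile in the current folder
PS C:\home\nodejs-docker-webapp> docker build -t nodejs-docker-webapp .
```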



Run your container
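
As in the Node.js guide, I map the container’s port 8080 to a host port (49160 here is arbitrary):

```powershell
# Run detached, publishing container port 8080 on host port 49160
PS C:\home\nodejs-docker-webapp> docker run -p 49160:8080 -d nodejs-docker-webapp
```

The app is then reachable at http://localhost:49160 in the browser.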



and verify the output in your browser



Go back to your PowerShell session running as Administrator.

Execute .\minishift.exe docker-env to get the Docker environment of the Minishift VM, and pipe it to Invoke-Expression so the Docker client reuses the VM’s Docker daemon.
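
In PowerShell this looks like the following; the second command evaluates the emitted environment variables so that subsequent docker commands talk to the daemon inside the Minishift VM:

```powershell
# Print the environment variables pointing at the VM's Docker daemon
PS C:\> .\minishift.exe docker-env
# Evaluate them in the current session
PS C:\> .\minishift.exe docker-env | Invoke-Expression
```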



Now let’s quickly repeat the same build steps against the Minishift Docker daemon and check the image.
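
A sketch of the repeated steps, now executed against the Minishift daemon:

```powershell
# Rebuild the image; it now lands in the Minishift VM's Docker daemon
PS C:\home\nodejs-docker-webapp> docker build -t nodejs-docker-webapp .
# Verify the image is listed there
PS C:\> docker images
```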



OpenShift Origin provides an Integrated Container Registry that adds the ability to provision new image repositories on the fly. Whenever a new image is pushed to the integrated registry, the registry notifies OpenShift Origin about the new image, passing along all the information about it, such as the namespace, name, and image metadata.

All Docker images should be tagged and pushed to the Integrated Registry so that they are available for deployment as an Image Stream in the Web Console.

The syntax for the tag should be docker tag <imageid> <registryip:port>/<projectname>/<appname>

You could also combine this step with the docker build statement as docker build -t <registryip:port>/<projectname>/<appname> .
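
With concrete values (all illustrative: the Minishift integrated registry typically listens on 172.30.1.1:5000, and ‘myproject’ and ‘nodejs-docker-webapp’ are placeholders for your project and app names):

```powershell
# Tag an existing image for the integrated registry
PS C:\> docker tag nodejs-docker-webapp 172.30.1.1:5000/myproject/nodejs-docker-webapp
# Or bake the tag in at build time
PS C:\home\nodejs-docker-webapp> docker build -t 172.30.1.1:5000/myproject/nodejs-docker-webapp .
```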



To log in to the Integrated Registry for pushing the image, we need a token from OpenShift Origin. We can get it using the OpenShift client (oc) command line.



Use the command oc whoami -t to get the token, use it in the docker login command, and then push the image.
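
A sketch of the login-and-push sequence (the registry address and names are illustrative, matching the tag used earlier):

```powershell
# Capture the OpenShift token and use it as the registry password
PS C:\> $TOKEN = .\oc.exe whoami -t
PS C:\> docker login -u developer -p $TOKEN 172.30.1.1:5000
# Push the tagged image to the integrated registry
PS C:\> docker push 172.30.1.1:5000/myproject/nodejs-docker-webapp
```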



We are done, and now let’s deploy the application from the Web Console. You can also open the Web Console using the command

PS C:\> .\minishift.exe console

 

On the web console, navigate to your project and click on ‘Add to Project’ -> Deploy Image

Select the Project, app and version on the Image Stream Tag.



With no change to any of the properties, just click on Create.

Navigate to the overview page.



The next step is to create a default route to access the app; just click the Create Route link.



Click on Create. The route will be updated on the overview page. You can access this application using the route.



You can scale your application up and down by increasing or decreasing the number of pods on the overview page. A Pod is the basic building block of Kubernetes: the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents a running process on your cluster. When a connection is made to a service, OpenShift automatically routes it to one of the pods associated with that service.
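
Scaling can also be done from the oc command line; a sketch, assuming the deployment configuration is named after the app:

```powershell
# Scale the deployment to three pod replicas
PS C:\> .\oc.exe scale dc/nodejs-docker-webapp --replicas=3
```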

Let’s quickly recap what we did.

  • We developed a simple Hello World Node application, built and tested it locally, and containerized it using Docker for Windows.

  • We built a local OpenShift cluster using Minishift and rebuilt the app image using the Minishift Docker daemon.

  • We pushed the image to the Integrated Registry, deployed it using an Image Stream in the Web Console, and looked at the scale-up and scale-down options. You can also group services together depending on their dependencies.


You can also push your images to Docker Hub, or use the Google Cloud SDK to easily deploy them to Google Cloud, or to any other cloud platform like AWS or Azure.



 

Before, as a developer, I never bothered about how DevOps works, how scaling happens, or about efficiency in maintenance and security. With a microservices architecture, each microservice team is responsible for its entire business process. Technology is moving fast; especially with SAP moving towards being a cloud company, we no longer talk about quarterly or monthly releases: we deliver services overnight or even on an hourly basis.


 

Feel free to share your comments and suggestions. I am also in the process of learning these new concepts and technologies, and I am open to learning together.

 

 