
Run SAP Commerce (Hybris) on Google cloud using Kubernetes: Part 2 – Create Docker Image

Introduction

This is the second part of a three-part blog post series on the containerization and orchestration technologies Docker and Kubernetes with SAP Commerce (Hybris). The series shows how to leverage Docker and Kubernetes to deploy and run SAP Commerce (Hybris) and Solr in master-slave mode on the cloud.

If you are interested in the introduction to aspects and the details of the custom recipe that simplifies the creation of Docker images, you can visit the first part of this series.

If you are interested in deploying SAP Commerce, DB, and Solr to the cloud, you can proceed to the third part of this series.


This post assumes that you have knowledge of Docker and SAP Commerce (Hybris). The platform version used in this blog post is SAP Commerce 1905, and the Solr version is 7.7. SAP Commerce 1905 requires Java 11 or above.

In this post, the terms SAP Commerce and Hybris are used interchangeably; they mean the same thing.


Create Docker image

Now that you are familiar with the recipe, it is time to create images from it.

Before proceeding any further, make sure to install Docker.

Clone the recipe and copy it to the $HYBRIS_HOME/installer/recipes directory.
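The clone-and-copy step can be sketched as below. The recipe repository URL is linked in the original post and is omitted here; the local paths and fallback values are assumptions for illustration only:

```shell
# Sketch of copying the cloned recipe into the platform's recipes directory.
# $HYBRIS_HOME and the recipe folder name follow this post; the mkdir of the
# recipe folder stands in for the actual git clone.
HYBRIS_HOME="${HYBRIS_HOME:-$PWD/hybris}"        # fallback for illustration
RECIPE="valtech_b2c_acc_plus_spartacus_dockerized"

# git clone <recipe-repo-url>                    # URL is linked in the post
mkdir -p "$RECIPE"                               # stand-in for the clone here
mkdir -p "$HYBRIS_HOME/installer/recipes"
cp -r "$RECIPE" "$HYBRIS_HOME/installer/recipes/"
ls "$HYBRIS_HOME/installer/recipes"
```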


Note that the recipe contains configuration to create Docker images for Hybris and HSQLDB; it does not include configuration for Solr. The Solr image used in this article is a custom image that eases running Solr in master-slave mode with Docker. For more details about the Solr image, refer to this blog post.


Navigate to the $HYBRIS_HOME/installer directory and run the commands below:

$ cd $HYBRIS_HOME/installer
$ ./install.sh -r valtech_b2c_acc_plus_spartacus_dockerized createImagesStructure

It would take around 45 minutes to create the Docker files and the corresponding structure, which are generated in the directory $HYBRIS_HOME/installer/work/output_images.

The Gradle task above creates a convenient script build-images.sh to build all images in one go.

$ cd $HYBRIS_HOME/installer/work/output_images/valtech_b2c_spartacus_dockerized
$ sh build-images.sh

If you prefer to tag the images differently, you can update the script or build each image individually. Once you run the script, the Docker images are created on your machine.
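For example, a built image can be retagged for a registry before pushing; the registry path and version below are placeholders, not values from the recipe, and the docker commands are left commented since they require the image to exist locally:

```shell
# Retag one of the generated images for a registry (illustrative values).
SOURCE_IMAGE="valtech_b2c_spartacus_dockerized_platform"
REGISTRY="gcr.io/my-project"                 # placeholder registry path
TARGET_IMAGE="$REGISTRY/hybris-platform:1905"

# Uncomment to retag and push once the image exists locally:
# docker tag "$SOURCE_IMAGE" "$TARGET_IMAGE"
# docker push "$TARGET_IMAGE"
echo "$TARGET_IMAGE"
```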

Execute the below command, and you should see the created images.

$ docker image ls

Now that the Docker images are created successfully, let's run Hybris and its dependent containers from these images.

Run Docker image

Time to create and start containers from the images. Let’s run DB and Solr containers before Hybris.

We are running HSQLDB and Solr in detached mode (as background processes) by passing the '-d' flag to the run command. If you want the container's file system removed once the container stops, include the '--rm' option with the run command.

Run HSQLDB

$ docker run -itd -p 9090:9090 valtech_b2c_spartacus_dockerized_hsql

Run Solr

For further information on the Solr image, refer to this blog post.

$ docker run -itd -p 8983:8983 valtechus/solr:7.7.1

Run Hybris

$ docker run -it -v path-to-secrets-folder-on-your-local:/etc/ssl/certs/hybris -e "DB_URL=jdbc:hsqldb:hsql://host.docker.internal:9090/hybris;hsqldb.tx=MVCC" -e "SOLR_CONFIG_DEFAULT_URLS=http://host.docker.internal:8983/solr" -p 8088:8088 valtech_b2c_spartacus_dockerized_platform

There are a few things that need an explanation.

Mount of certs folder as volume

You may notice that we mount a certs folder onto the Hybris container; for your convenience, certs are provided here. You can drop the certs into a directory of your choice on your machine and mount that directory onto the Hybris container with the -v option. The certs folder is needed to run Hybris over HTTPS (note that HTTP is disabled by default in this image), which is why only the HTTPS port 8088 is exposed. Feel free to change that behavior.
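If you prefer to generate your own certificates instead of using the provided ones, a self-signed certificate can be created locally with openssl. The file names below are assumptions for illustration; check what the image actually expects:

```shell
# Generate a self-signed certificate to mount into the Hybris container.
# File names are illustrative; the image may expect specific names.
CERT_DIR="$PWD/hybris-certs"
mkdir -p "$CERT_DIR"
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout "$CERT_DIR/hybris.key" \
  -out "$CERT_DIR/hybris.crt"

# Then mount the directory when starting the container:
# docker run -v "$CERT_DIR":/etc/ssl/certs/hybris ...
```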

Set HSQLDB and Solr URLs

As mentioned before, one of the advantages of using the recipe is that configuration can be externalized and overridden for different environments. Docker provides a way to override environment variables at runtime by passing the '-e' argument to the run command (see the docker run command for Hybris above).


When running separate containers that need to connect to each other, the recommended approach is to use the 'host.docker.internal' hostname, because the Docker host's IP address can change. Note that this hostname is known to work only on Docker for Mac and Windows. For more details, please refer to this link.
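On Linux, where host.docker.internal is not available out of the box, one workaround (an assumption on my part, not part of the original setup) is a user-defined bridge network, on which containers can reach each other by container name; the docker commands are left commented since they require the images and daemon:

```shell
# Sketch: user-defined bridge network so containers resolve each other by name.
NETWORK="hybris-net"                             # illustrative network name
DB_HOST="hsqldb"                                 # container name = hostname
DB_URL="jdbc:hsqldb:hsql://$DB_HOST:9090/hybris;hsqldb.tx=MVCC"

# docker network create "$NETWORK"
# docker run -d --name "$DB_HOST" --network "$NETWORK" \
#   valtech_b2c_spartacus_dockerized_hsql
# docker run -it --network "$NETWORK" -e "DB_URL=$DB_URL" -p 8088:8088 \
#   valtech_b2c_spartacus_dockerized_platform
echo "$DB_URL"
```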


Because of the networking limitations mentioned above, the best approach when running containers separately is to use docker-compose, which avoids connectivity issues between containers. Please proceed to the next section to see how Hybris can be run with docker-compose.

Run Hybris, Solr and DB with docker-compose

'docker-compose', as the name suggests, allows us to compose the docker run configuration of different images in one place and establish dependencies between them.

For convenience, a sample docker-compose file is available here.

As you can see, the same configuration that we specified when running individual containers is now grouped in one place, and dependencies are established with 'depends_on'. This makes it easy to maintain the configuration in one place and store it in a repository to keep a version history.
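In case the linked sample is unavailable, a minimal compose file for this setup might look like the sketch below. Service names and environment keys follow the commands earlier in this post, but treat it as an illustration rather than the exact file from the repository:

```yaml
# Illustrative docker-compose.yml - verify names against your generated images.
version: "3"
services:
  hsqldb:
    image: valtech_b2c_spartacus_dockerized_hsql
    ports:
      - "9090:9090"
  solr:
    image: valtechus/solr:7.7.1
    ports:
      - "8983:8983"
  platform:
    image: valtech_b2c_spartacus_dockerized_platform
    depends_on:
      - hsqldb
      - solr
    environment:
      - DB_URL=jdbc:hsqldb:hsql://hsqldb:9090/hybris;hsqldb.tx=MVCC
      - SOLR_CONFIG_DEFAULT_URLS=http://solr:8983/solr
    volumes:
      - ./certs:/etc/ssl/certs/hybris    # path to the certs on your machine
    ports:
      - "8088:8088"
```

With compose, the containers share a network automatically, so the service names (hsqldb, solr) replace host.docker.internal in the URLs.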

Finally, all you have to do to run the Hybris, Solr, and DB applications is execute the command below:

$ docker-compose up

or, if you want to run just the platform service and run DB and Solr in the background:

$ docker-compose run platform

Oh, wait! If you are not caught too deep in the specifics of running a container, one question that should have crossed your mind is: how do I run different aspects of the Hybris image? It's straightforward! Just pass the aspect name as an argument to the run command, that's all!

Let's see that with an example. Say you want to run the backoffice; the aspect for that purpose is 'onlyBackoffice'. Remember, if you don't specify anything, the default aspect is invoked, and it includes all applications.

With docker run command:

$ docker run -it -v path-to-secrets-folder-on-your-local:/etc/ssl/certs/hybris -e "DB_URL=jdbc:hsqldb:hsql://host.docker.internal:9090/hybris;hsqldb.tx=MVCC" -e "SOLR_CONFIG_DEFAULT_URLS=http://host.docker.internal:8983/solr" -p 8088:8088 valtech_b2c_spartacus_dockerized_platform onlyBackoffice

With docker-compose:

$ docker-compose run platform onlyBackoffice

With the above commands, you have the Hybris container with just the backoffice running.

Once you have the Hybris container up and running, you can access Hybris on localhost port 8088 (because that is the exposed port) via the URLs https://localhost:8088/backoffice and https://localhost:8088/yacceleratorstorefront/?site=electronics. Once the application is running, it is business as usual (initialization, access to storefronts, backoffice, etc.).

If you want a different configuration for production, with different JVM settings, DB setup, Solr setup, etc., Docker supports this use case by allowing you to create another compose file that overrides the default one.
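An override file only needs to list the values that differ from the base file. The settings below are hypothetical examples of what a production override might contain, not values from the actual project:

```yaml
# docker-compose-production.yml (illustrative override; keys are assumptions)
version: "3"
services:
  platform:
    environment:
      - JAVA_OPTS=-Xmx8g                          # hypothetical JVM setting
      - DB_URL=jdbc:mysql://prod-db:3306/hybris   # hypothetical production DB
```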

$ docker-compose -f docker-compose.yml -f docker-compose-production.yml up

Conclusion

That's all there is to creating Docker images of Hybris and HSQLDB using a custom recipe. We also looked at creating Hybris, DB, and Solr containers from those images. The third and final part of this series will take you through deploying Hybris, HSQLDB, and Solr on Google Cloud Platform's (GCP) Google Kubernetes Engine (GKE) using the images that we created in this blog post.

About the Author

Ravi Avulapati – Specializes in Java, J2EE, and frameworks, SAP Commerce (Hybris), Search with Solr, Solution & Enterprise Architecture, Microservices, DevOps, Cloud solutions. Machine learning and deep learning enthusiast.

About Valtech

Valtech is a global full-service digital agency focussed on business transformation with offerings in strategy consulting, experience design & technology services. Valtech is an SAP partner and is an SAP recognized expert in SAP Commerce.

4 Comments
  • Hello Ravi,

     

    many thanks for your detailed series of articles and prepared code snippets.

    I’ve followed your setup and was able to create and launch images with docker-compose. However the URLs https://localhost:8088/, https://localhost:8088/backoffice, and http://localhost:8088/yaccleratorstorefront/?site=electronics always return 404 to me.

    The interesting detail is, that requests are reaching the Docker container, because on a first request I can see a warning in browser about self-signed certificates.

    I guess, I’m missing some tiny configuration property or something like this to make it work.

    Do you have an idea, what can it be?

      • Hi Ravi Avulapati

        thank you for your answer.

        Yes, the secrets are mounted, as in tutorial.

        All the 3 containers are up and running.

        Name                                         Command                  State  Ports
        -------------------------------------------------------------------------------------------------------------------------
        valtech_b2c_spartacus_dockerized_hsql_1      /opt/hsqldb/start.sh     Up     9090/tcp
        valtech_b2c_spartacus_dockerized_platform_1  /opt/startup/startup.sh  Up     0.0.0.0:8081->8081/tcp, 0.0.0.0:8088->8088/tcp
        valtech_b2c_spartacus_dockerized_solr_1      /opt/solr/start.sh       Up     0.0.0.0:8984->8983/tcp

        The docker logs don’t show anything suspicious, and no incoming requests either. The last entries I can see with docker-compose logs -f are:

        platform_1 | Feb 25, 2020 7:05:07 PM org.apache.catalina.startup.Catalina start
        platform_1 | INFO: Server startup in 30280 ms

        Any of my attempts to reach https://localhost:8088 are not shown in this log.

        • Hi Viktor Livakivskyi, if there are no issues in the logs, then it could be an issue with the network. Check the existing networks, see which network is used when you run docker-compose, and inspect that network. You can even create a dedicated network shared by the three containers and see if it works. Good luck!