Matthias Steiner

Microservices on HCP – Part II

Enterprise Granny 2.0 – An exemplary blueprint for a microservice architecture

As stated in my last blog post, I believe cloud(-native) applications and microservices share many characteristics, which is why I opted for using a refactored version of the Granny application as the baseline for an exemplary microservice architecture blueprint rather than starting from scratch.

NOTE: Please note that we called it an exemplary microservice architecture blueprint. By definition, microservices are independent of any specific technology or programming language, and hence there's no such thing as a one-size-fits-all template. The provided blueprint is just something that has proven to work for the author in numerous enterprise-scale projects, but please don't mistake it for the one and only architecture approach!

In the course of this post we'll elaborate on the basic components of this architecture blueprint and discuss the pros and cons of each design decision taken.

General structure

The refactored Granny application consists of three (Maven) sub-modules:

  • a shared enterprise-granny-core module, which contains the domain model and the API,
  • a (micro-) service (provider) called enterprise-granny-service and
  • an exemplary client application called enterprise-granny-client
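
For orientation, the three modules above are tied together by a plain Maven aggregator POM. Here's a minimal sketch of what such a parent POM could look like (the module names match the list above; the group ID and version are purely illustrative):

```xml
<!-- Hypothetical parent pom.xml tying the three sub-modules together -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.sap.hana.cloud.samples.granny</groupId> <!-- illustrative -->
    <artifactId>enterprise-granny-parent</artifactId>
    <version>2.0.0</version>
    <packaging>pom</packaging>

    <!-- the three sub-modules discussed above -->
    <modules>
        <module>enterprise-granny-core</module>
        <module>enterprise-granny-service</module>
        <module>enterprise-granny-client</module>
    </modules>
</project>
```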

Of course one could argue that the core and the service (provider) modules could be merged into one and that the client is an optional component – and rightly so! But then, given that we provided a user interface in the original version, it didn't feel right to drop it now… after all, a lot of the previous posts in this series covered UI-related topics. Furthermore, we consider it a common case that there's a central UI component (or application) that leverages the individual microservices, and as such it makes sense to provide an example of that as well.

A common question is also why we stick to the old-fashioned approach of using a classic (Java) web application instead of a JavaScript-based MVC framework, as is en vogue these days. Great question, thanks for asking!

Well, when reading about the fundamental characteristics of microservices, sooner or later the topics of resilience, fault tolerance and how to cope with latency issues are brought up. In this context, the Hystrix project from Netflix seems to be establishing itself as a standard – at least in the Java space. Now imagine you have a UI (or application) that interacts with a lot of (micro-)services. Of course you could do that directly via JavaScript (e.g. jQuery or other frameworks), but then you would also have to cope with the above-mentioned topics in JavaScript. And while there seem to be circuit-breaker implementations available for JavaScript, I'm just not convinced that's the right choice! So, if that kind of thinking makes me 'old-fashioned', then so be it… 10+ years of making a living as a software architect have taught me that it is always good to have a level of indirection – just in case! Consequently, we stick to a classic Java web application for the presentation layer/UI.
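
To make the circuit-breaker idea a bit more tangible: with Hystrix, the web application would wrap each remote call into a command that takes care of timeouts, thread isolation and fallbacks. Here's a minimal sketch (the class and client names are made up for illustration and not taken from the Granny code base):

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Wraps a remote call to a (micro-)service so that errors, timeouts and an
// open circuit all result in a graceful fallback instead of a blocked UI.
public class GetContactsCommand extends HystrixCommand<String> {

    private final AddressBookClient client; // hypothetical Java client from the core module

    public GetContactsCommand(AddressBookClient client) {
        super(HystrixCommandGroupKey.Factory.asKey("AddressBookService"));
        this.client = client;
    }

    @Override
    protected String run() throws Exception {
        // the actual remote call, executed on a Hystrix-managed thread pool
        return client.getAllContactsAsJson();
    }

    @Override
    protected String getFallback() {
        // served when the call fails, times out or the circuit is open
        return "[]";
    }
}
```

The UI would then call `new GetContactsCommand(client).execute()` and either get the service response or the fallback once the timeout hits or the circuit is open.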

Before we dig deeper, let's briefly discuss the purpose of the core module. True, the service exposes a RESTful API and this is how clients interact with the service. Now, imagine that one of the clients is a Java application as well; wouldn't it be so much more convenient to use a Java API than having to deal with low-level HTTP communication, including JSON (de-)serialization etc.? As such, we have 'out-sourced' the domain model and the interfaces of the microservice so that they can be reused by a client to interact 'natively' with the service via a Java-based API (instead of a REST-based one).
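
In practice, this means the core module contains little more than the domain objects and a service interface along the following lines; the service module implements it, and a Java client can simply program against it (the names below are simplified for illustration):

```java
import java.util.List;

// Shared contract living in enterprise-granny-core (simplified illustration).
// The service module provides the implementation behind the REST endpoint,
// while a Java client can program directly against this interface.
public interface AddressBookService {

    List<Contact> getContacts();

    Contact getContact(String id);

    Contact saveContact(Contact contact);

    void deleteContact(String id);
}
```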

Architecture blueprint

[Figure: HCP Microservices Architecture Blueprint]

The illustration above shows the general architecture blueprint we are promoting as one potential candidate for your own microservice implementations. True, it does indeed look like the typical n-tier architecture used for decades now! At this point, I’d like to refer you back to the Zef Hemel quote I’ve used in my Cloud Platform Play presentation:

“Build amazing apps with the most boring technology you can find. The stuff that has been in use for years and years.” – Zef Hemel in “Pick your battles”

Those familiar with the original Granny application will immediately see that there have been no changes to the libraries and frameworks used. We have discussed many of them in great detail in the course of the blog post series that accompanies the Granny application. Still, it seems worthwhile to quickly summarize them all and explicitly point out why we believe a particular library or framework is a good choice from a microservice architecture viewpoint.

The general building blocks are as follows:

  • Business logic – plain old Java objects (POJOs)
  • Connectivity – Apache HTTP Components
  • Persistence – JPA (EclipseLink)
  • API layer – JAX-RS (Apache CXF)
  • Programming model – Spring framework

So, let's go through them one-by-one…

Business logic

Personally, I strongly advocate using POJOs for the domain model and the business logic services, as it ensures that this layer remains very lightweight and easy to maintain. It also makes unit testing a breeze! Given that the whole point of using microservices is to gain/keep business agility, being able to quickly roll out new versions is fundamental. For that, your team needs to embrace continuous delivery, ergo – solid test coverage is a must to ensure the stability and quality of your microservices over their entire lifetime. Last, but not least, POJOs usually have few or no dependencies at all (well, besides maybe Apache Commons or the like).
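
To illustrate the testability argument: a plain business logic POJO can be unit-tested without any container, mocks or framework setup. A trivial sketch (illustrative code, not the actual Granny implementation):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// A plain POJO with zero framework dependencies...
class ContactFormatter {
    String displayName(String firstName, String lastName) {
        return lastName + ", " + firstName;
    }
}

// ...and a plain JUnit test for it - no container, no mocks, no setup.
public class ContactFormatterTest {

    @Test
    public void formatsLastNameFirst() {
        ContactFormatter formatter = new ContactFormatter();
        assertEquals("Granny, Enterprise", formatter.displayName("Enterprise", "Granny"));
    }
}
```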

Connectivity

Truth be told… the java.net library is a pain to use, and because of that Apache HTTP Components has sort of become the de-facto standard library for HTTP-based communication. Many other frameworks and libraries build on top of Apache HTTP Components. This makes it the obvious choice for our blueprint, but there's more. Apache HTTP Components is also a first-class citizen within the classic Java runtime of SAP HANA Cloud Platform (aka NEO): e.g. the Connectivity API and Destination API make use of this library (see the online documentation). This integration makes it possible to pass the currently logged-on user of a web/cloud application all the way through to the backend systems, which is a common requirement in enterprise software projects (e.g. for general auditing or SOX compliance).
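
For reference, a plain GET request with Apache HTTP Components looks roughly like this (the URL is just a placeholder; on HCP you would typically obtain a pre-configured client via the Destination API rather than hard-coding the endpoint):

```java
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class HttpClientSketch {

    public static void main(String[] args) throws Exception {
        // create a default client; on HCP a destination-backed client would be used instead
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpGet request = new HttpGet("https://example.com/api/v1/contacts"); // placeholder URL
            try (CloseableHttpResponse response = client.execute(request)) {
                String body = EntityUtils.toString(response.getEntity());
                System.out.println(response.getStatusLine().getStatusCode() + ": " + body);
            }
        }
    }
}
```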

Persistence

Next stop: persistence. Over the years the Java Persistence API (JPA) has matured, and EclipseLink is a powerful implementation of this standard. Since version 2.5 it ships with built-in support for SAP HANA, which comes in handy of course. 😉 Furthermore, EclipseLink nicely covers two aspects that matter here: optimistic locking (which is the only scalable approach to handling high concurrency) and multi-tenancy, the latter as an extension beyond the JPA standard. Last, but not least, JPA itself is quite unobtrusive, which frees us from having to introduce a dedicated persistence model next to the domain model.
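
As a quick illustration: optimistic locking only requires a version attribute on the entity, and EclipseLink's multi-tenancy kicks in via its @Multitenant annotation. The entity below is a trimmed-down sketch, not the actual Granny domain model:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

import org.eclipse.persistence.annotations.Multitenant;

// Trimmed-down entity illustrating optimistic locking (@Version) and
// EclipseLink's multi-tenancy extension (@Multitenant).
@Entity
@Multitenant // EclipseLink adds a tenant discriminator column behind the scenes
public class Contact {

    @Id
    private String id;

    private String firstName;
    private String lastName;

    @Version // incremented on every update; concurrent modifications are detected
    private long version;

    // getters/setters omitted for brevity
}
```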

API Layer

One of the main characteristics of a microservice architecture is the provisioning of a (RESTful) API. In the Java space, there's a standard for that called JAX-RS, and Apache CXF is an implementation of this standard. And while there are other popular alternatives (such as the JAX-RS reference implementation Jersey), we opted for CXF for a variety of reasons:

  • its modular design (reads: flexibility)
  • its support not only for JAX-RS, but also for Web Service standards, which is still a common requirement in enterprise software projects
  • a strong community backing it
NOTE: In case you intend to also provide an OData-based API you may want to look into Apache Olingo.
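
Coming back to JAX-RS: for those who haven't worked with it yet, a resource class is as simple as the following sketch (illustrative names only; CXF takes care of the HTTP plumbing and the JSON serialization):

```java
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Minimal JAX-RS resource sketch; AddressBookService and Contact refer to the
// (illustrative) core module sketch shown earlier in this post.
@Path("/contacts")
@Produces(MediaType.APPLICATION_JSON)
public class ContactResource {

    private final AddressBookService service; // the business logic behind the API

    public ContactResource(AddressBookService service) {
        this.service = service;
    }

    @GET
    public List<Contact> getContacts() {
        return service.getContacts();
    }

    @GET
    @Path("/{id}")
    public Contact getContact(@PathParam("id") String id) {
        return service.getContact(id);
    }
}
```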

Programming model

I've been a big fan of the Spring framework for a long time and it's still my weapon of choice. True, some concepts have evolved since the early days: the extensive XML configuration files have given way to the now-recommended annotation-based approach. But what is more astonishing is that Spring (and its many sub-projects) is still going strong!
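
Just to give an impression of that annotation-based style, a configuration class replacing the classic XML files could look like this (package, class and bean names are purely illustrative):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

// Annotation-based Spring configuration - the rough equivalent of the classic
// XML files. Package and class names are purely illustrative.
@Configuration
@ComponentScan("com.example.granny")
public class AppConfig {

    @Bean
    public AddressBookService addressBookService() {
        // expose the business logic POJO (see the core module sketch above) as a
        // Spring-managed bean; DefaultAddressBookService is a hypothetical implementation
        return new DefaultAddressBookService();
    }
}
```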

Side note: I’m sure that sooner or later we’ll touch upon Spring Boot in this series!

Besides the general strengths of using a DI container and the flexibility it brings, there are many reasons why I still consider Spring a great choice:

Aspect-oriented Programming

We already touched on the topic of AOP in episode 3 – Enterprise Granny Part 3: The Good, the Bad and the Ugly – and the arguments still apply! What I like in particular about the use of aspects is that it keeps the main business logic clean and uncluttered – all the cross-cutting concerns like logging/tracing, authentication and authorization checks, input validation etc. are handled centrally and outside of the main business logic methods. This way, the business logic code is as easy to read, understand and maintain as possible – ultimately catering to our number one goal: business agility!
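
To give you a flavor of what that looks like in code, here's a bare-bones tracing aspect in Spring AOP (illustrative only; the pointcut's package name is made up):

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

// Bare-bones tracing aspect: logs entry and exit of all service methods
// centrally, so the business logic itself stays free of logging clutter.
@Aspect
@Component
public class TraceAspect {

    private static final Logger LOG = LoggerFactory.getLogger(TraceAspect.class);

    @Around("execution(* com.example.granny.service..*(..))")
    public Object trace(ProceedingJoinPoint joinPoint) throws Throwable {
        LOG.debug("Entering {}", joinPoint.getSignature().toShortString());
        try {
            return joinPoint.proceed(); // invoke the actual business method
        } finally {
            LOG.debug("Leaving {}", joinPoint.getSignature().toShortString());
        }
    }
}
```

Enable aspect support via @EnableAspectJAutoProxy (or the XML equivalent) and every matching service call gets traced automatically.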

Wrap-up

So, with that we conclude our architecture review – I hope you found it worthwhile. Of course, we were only able to scratch the surface and reiterate some of the design considerations we discussed in earlier episodes. As such, those interested in getting to the essence of it may want to read through the series (again).

Going forward we'll continue to have plenty of fun with Granny; multi-tenancy and the introduction of a circuit breaker are only two examples of what's in our backlog. Furthermore, we'll also touch upon some of the topics related to operating microservices, such as rolling updates and continuous database refactoring using Liquibase.

Hope to see you around!

      4 Comments
      Vincenzo Turco

      hi Matthias, great blog as usual!

      what's your take on using RAML as an interface definition? Several nice code generators exist (YaaS, MuleSoft) that can work API-first to generate code based on JAX-RS (Jersey). Also, very nice raml2html generators exist for documenting your (micro)service.

      Any feedback would be very valuable

      thanks, regards

      Vincenzo

      Matthias Steiner (Blog Post Author)

      Hi Vincenzo,

      great questions. I believe RAML is interesting and, as you mentioned, it's used in the broader SAP context (e.g. YaaS). Personally I prefer to work bottom-up and craft the app together myself, but that's personal preference more than anything else. Yeap, the documentation aspect is very handy... I use Enunciate (see Enterprise Granny Part 10: Everybody's favorite - documentation) for that, but again - personal preference.

      So, I have little hands-on experience myself with a RAML API-first approach, but I do know it works for others, so... worth a try! (Always happy to read about such experiences - hint, hint!) 😉

      Cheers,

      Matthias

      Manish Kumar

      Hi Matthias,

      Really great blog!

      I deployed the application on SAP HANA Cloud Foundry with a Postgres database and the application runs well and fine, but when I change the database to something like hanatrail, hana shared DB or MongoDB it gives an error. Actually I want to run this application with multiple backend systems like HANA, Mongo etc.

      Are there any additional changes I need to configure in my Granny application? Please guide me on what I have to do to run this application with multiple databases on Cloud Foundry. Is this application only running/supported with a Postgres database? I have tried with hana shared but no luck.

      Please help me out with the different DBs.

      Thanks & Regards

      Manish Kumar

       

       

      Matthias Steiner (Blog Post Author)

      Hi Manish,

      WOW, that blog post is two years old - happy to know people still find it useful.

      A couple of observations: in general the example should work with HANA - I've tested that. Yet, depending on which buildpack you use, you may need to manually add the HANA JDBC driver. I mentioned this explicitly in the README.md:

      > **NOTE:** If you intend to deploy this application to a Cloud Foundry landscape provided by SAP or its partners in order to leverage the capabilities of the SAP HANA database platform you need to manually provide the HANA JDBC driver (`ngdbc.jar`) within the [`WEB-INF/lib`](enterprise-granny-service/src/main/webapp/WEB-INF/lib) folder.

      For Redis, Mongo etc. the same rationale applies. At least you'd need to include the respective libraries (JAR files). I've never tested those in conjunction with JPA and honestly I wonder how that should work as both Redis and Mongo have a completely different approach to storing data.

      Hope that helps!

      Cheers,

      matthias