Microservices on HCP – Part II
Enterprise Granny 2.0 – An exemplary blueprint for a microservice architecture
As stated in my last blog post I believe there are many shared characteristics of cloud (native) applications and microservices, which is why I opted for using a refactored version of the Granny application as the baseline for an exemplary microservice architecture blueprint rather than starting from scratch.
During the course of this post we’ll elaborate on the basic components of this architecture blueprint and discuss the pros and cons of each design consideration taken.
The refactored Granny application consists of three (Maven) sub-modules:
- a shared enterprise-granny-core module, which contains the domain model and the API,
- a (micro-)service (provider) module, and
- an exemplary client application.
Of course one could argue that the core and the service (provider) module could be merged into one and that the client is an optional component – and rightly so! But then, given we provided a user interface in the original version it didn’t feel right to deprecate it now… after all a lot of the previous posts in this series covered UI-related topics. Furthermore, we consider it a common case that there’s a central UI component (or application) that leverages the individual microservices and as such it makes sense to provide an example of that as well.
Before we dig deeper, let’s briefly discuss the purpose of the core module. True, the service exposes a RESTful API and this is how clients interact with the service. Now, imagine that one of the clients would be a Java application as well; wouldn’t it be so much more convenient to use a Java API than having to use the low-level HTTP communication including JSON (de-) serialization etc. As such, we have ‘out-sourced’ the domain model and the interfaces of the microservice so that it can be reused by the client to interact ‘natively’ with the service using a Java-based API (instead of a REST-based one).
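To make that idea concrete, here is a minimal sketch of what such a shared core module might expose; the class and method names are illustrative for this post, not the actual Granny code:

```java
import java.util.List;

public class ContactApiSketch {

    // Domain model POJO shared between client and service.
    public static class Contact {
        private String id;
        private String lastName;

        public String getId() { return id; }
        public void setId(String id) { this.id = id; }
        public String getLastName() { return lastName; }
        public void setLastName(String lastName) { this.lastName = lastName; }
    }

    // Java API of the microservice: the service provider implements it,
    // while a client-side implementation can delegate to the REST API,
    // hiding the HTTP plumbing and JSON (de-)serialization.
    public interface ContactService {
        List<Contact> getContacts();
        Contact saveContact(Contact contact);
    }
}
```

A Java client then programs against `ContactService` and never sees HTTP at all; only the client-side implementation of the interface knows about the wire format.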
The illustration above shows the general architecture blueprint we are promoting as one potential candidate for your own microservice implementations. True, it does indeed look like the typical n-tier architecture used for decades now! At this point, I’d like to refer you back to the Zef Hemel quote I’ve used in my Cloud Platform Play presentation:
“Build amazing apps with the most boring technology you can find. The stuff that has been in use for years and years.” – Zef Hemel in Pick your battles
Those familiar with the original Granny application will immediately see that there have been no changes to the libraries and frameworks used. We have discussed many of them in great detail during the course of the blog post series that accompanies the Granny application. Still, it seems worthwhile to quickly summarize them all and explicitly point out why we believe a particular library or framework is a good choice from a microservice architecture viewpoint.
The general building blocks are as follows:
- plain old Java objects (aka POJOs) for the domain model and business logic
- Apache HTTP Client for HTTP-based communication
- EclipseLink as the JPA implementation of choice
- Apache CXF for exposing the RESTful API
- (optional) Apache Olingo in case you want to expose your API using OData
- Spring (framework) as the over-arching programming model and DI container, plus some additional Spring projects (Spring Data JPA, Spring Cloud Connectors, …) to simplify various aspects of the application (more on that later on)
- AOP to ‘out-source’ cross-cutting concerns (XCCs) as Aspects, thereby separating them from the core business logic
So, let’s go through them one-by-one…
Personally, I strongly advocate using POJOs for the domain model and business logic services, as it ensures that this layer remains very lightweight and easy to maintain. It also makes unit testing a breeze! Given that the whole point of using microservices is to gain/keep business agility, being able to quickly roll out new versions is fundamental. For that your team needs to embrace continuous delivery; ergo, solid test coverage is a must to ensure the stability and quality of your microservices over their entire lifetimes. Last but not least, POJOs usually have few or no dependencies at all (well, besides maybe Apache Commons or the like).
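As a tiny, hedged illustration of the point (class and method names are invented for this sketch), a framework-free business-logic POJO like the following can be unit tested with nothing but plain assertions, no container, no mocks:

```java
public class ContactValidator {

    // Pure function: no framework, no state, trivially testable.
    public boolean isValidEmail(String email) {
        return email != null
                && email.contains("@")
                && email.indexOf('@') > 0;
    }
}
```

Because there is no container to bootstrap, such tests run in milliseconds, which is exactly what a continuous-delivery pipeline needs.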
Truth be told… the java.net library is a pain to use, and because of that the Apache HTTP Components have become the de-facto standard library for HTTP-based communication. Many other frameworks and libraries build on top of Apache HTTP Components, which makes it the obvious choice for our blueprint, but there’s more. Apache HTTP Components is also a first-class citizen within the classic Java runtime of SAP HANA Cloud Platform (aka NEO): e.g. the Connectivity API and Destination API make use of this library (see the online documentation). This integration makes it possible to pass the currently logged-on user of a web/cloud application all the way through to the backend systems, which is a common requirement in enterprise software projects (e.g. for general auditing or SOX compliance).
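For illustration, here is a hedged sketch of calling the service’s REST API with Apache HttpClient 4.x (assumed to be on the classpath); the base URL and the `/api/v1/contacts` path are placeholders, not the actual Granny endpoints:

```java
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class RestClientSketch {

    // Pure helper, kept separate so the URL handling is unit-testable.
    static String contactsEndpoint(String baseUrl) {
        return (baseUrl.endsWith("/")
                ? baseUrl.substring(0, baseUrl.length() - 1)
                : baseUrl) + "/api/v1/contacts";
    }

    // Performs the GET and returns the raw JSON payload as a string.
    public static String fetchContacts(String baseUrl) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpGet get = new HttpGet(contactsEndpoint(baseUrl));
            get.setHeader("Accept", "application/json");
            try (CloseableHttpResponse response = client.execute(get)) {
                return EntityUtils.toString(response.getEntity());
            }
        }
    }
}
```

On HCP/NEO you would typically not hard-code the base URL but obtain it from a destination, which is exactly where the platform’s integration with this library pays off.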
Next stop: persistence. Over the years the Java Persistence API has matured, and EclipseLink is a powerful implementation of this standard. Since version 2.5 it ships with built-in support for SAP HANA, which comes in handy of course. 😉 Furthermore, EclipseLink also supports two features worth highlighting, namely optimistic locking (which is the only scalable approach to handling high concurrency) and multi-tenancy, the latter being an EclipseLink extension to the JPA standard. Last but not least, JPA itself is quite unobtrusive, which frees us from having to introduce a dedicated persistence model next to the domain model.
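As a minimal, hedged sketch (entity and field names are invented here, not the actual Granny domain model), optimistic locking in JPA boils down to a single `@Version` attribute, which the provider checks and increments on every committed update; a lost-update conflict then surfaces as an `OptimisticLockException` instead of blocking writers with row locks:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class ContactEntity {

    @Id
    private String id;

    private String lastName;

    @Version            // incremented by the JPA provider on every UPDATE
    private long version;

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
    public long getVersion() { return version; }
}
```

Note how unobtrusive this is: apart from the annotations, the entity is still a plain POJO that doubles as the domain model.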
One of the main characteristics of a microservice architecture is the provisioning of a (RESTful) API. In the Java space, there’s a standard for that called JAX-RS. Apache CXF is an implementation of this standard. And while there are also other popular alternatives (such as the JAX-RS reference implementation Jersey) we opted for CXF for a variety of reasons:
- its modular design (reads: flexibility)
- its support not only for JAX-RS, but also for Web Service standards, which is still a common requirement in enterprise software projects
- a strong community backing it
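To illustrate (the resource path and payload are invented for this sketch, not the actual Granny API), a JAX-RS resource hosted by CXF is just an annotated POJO, which also means its methods can be unit tested by direct invocation, without spinning up an HTTP server:

```java
import java.util.ArrayList;
import java.util.List;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/contacts")
public class ContactResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public List<String> getContacts() {
        // A real implementation would delegate to the business-logic POJO;
        // CXF takes care of the JSON serialization of the return value.
        List<String> names = new ArrayList<>();
        names.add("Smith");
        return names;
    }
}
```

CXF maps the annotations to routes at deployment time, so the resource class itself stays free of any CXF-specific API.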
I’ve been a big fan of the Spring framework for a long time and it’s still my weapon of choice. True, some concepts such as the extensive XML configuration files have evolved since the early days and the annotation-based alternative is recommended these days, but what is more astonishing is that Spring (and its many sub projects) are still going strong!
Besides the general strengths of a DI container and the flexibility it brings, there are many reasons why I still consider Spring a great choice:
- Spring Data JPA: we touched upon that in episode 3 – Enterprise Granny Part 3: The Good, the Bad and the Ugly
- Spring Cloud Connectors: ultimately, this is what makes our app/service PaaS-agnostic as explained in episode 11 – Enterprise Granny Part 11: One for All
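As a quick, hedged illustration of the Spring Data JPA point (entity and method names are invented for this sketch), a repository is just an interface; Spring derives the query from the method name and generates the implementation at runtime, so there is no hand-written DAO code at all:

```java
import java.util.List;

import javax.persistence.Entity;
import javax.persistence.Id;

import org.springframework.data.jpa.repository.JpaRepository;

// Illustrative entity so the repository sketch is self-contained.
@Entity
class Contact {
    @Id
    String id;
    String lastName;
}

// The interface alone is the whole data-access layer; Spring Data
// derives "SELECT ... WHERE lastName = ?" from the method name.
interface ContactRepository extends JpaRepository<Contact, String> {
    List<Contact> findByLastName(String lastName);
}
```

Combined with Spring Cloud Connectors providing the `DataSource`, this keeps the persistence layer both tiny and PaaS-agnostic.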
We already touched upon the topic of AOP in episode 3 – Enterprise Granny Part 3: The Good, the Bad and the Ugly, and the arguments still apply! What I like in particular about the usage of aspects is that it keeps the main business logic clean and uncluttered: all the cross-cutting concerns like logging/tracing, authentication and authorization checks, input validation etc. are handled centrally and outside of the main business logic methods. This way, the business logic code is as easy to read, understand and maintain as possible, ultimately catering to our number one goal: business agility!
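Here is a minimal sketch of such an aspect with Spring AOP (the pointcut expression and package name are assumptions for illustration): timing/tracing is declared once here, so none of the business-logic methods need any logging code of their own:

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class TraceAspect {

    // Wraps every public method in the (hypothetical) service package.
    @Around("execution(public * com.example.granny.service..*(..))")
    public Object trace(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed(); // invoke the actual business method
        } finally {
            System.out.println(pjp.getSignature() + " took "
                    + (System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }
}
```

The business classes remain completely unaware of the aspect; enabling or disabling tracing is purely a matter of configuration.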
So, with that we conclude our architecture review – I hope you found it worthwhile. Of course, we were only able to scratch the surface and re-iterate on some of the design considerations we discussed in earlier episodes. As such, those interested to get to the essence of it may want to read through the series (again).
Going forward we’ll continue to have plenty of fun with Granny; multi-tenancy and the introduction of a circuit breaker are only two examples of what’s in our backlog. Furthermore, we’ll also touch upon some topics related to the operation of microservices, such as rolling updates and continuous database refactoring using Liquibase…
Hope to see you around!
hi Matthias, great blog as usual!
what's your take on using RAML as an interface definition? Several nice code generators exist (YaaS, MuleSoft) that can work API-first to generate code based on JAX-RS (Jersey). Also, very nice raml2html generators exist for documenting your (micro)service.
Any feedback would be very valuable
great questions. I believe RAML is interesting and, as you mentioned, it's used in the broader SAP context (e.g. YaaS). Personally I prefer to work bottom-up and craft the app together myself, but that's personal preference more than anything else. Yep, the documentation aspect is very handy... I use Enunciate (see Enterprise Granny Part 10: Everybody's favorite - documentation) for that, but again - personal preference.
So, I have little hands-on experience myself with a RAML API-first approach, but I do know it works for others, so... worth a try! (Always happy to read about such experiences - hint, hint!) 😉
Really great blog!
I deployed the application on SAP HANA Cloud Foundry with a Postgres database and the application runs well and fine, but when I change the database to, e.g., hanatrail, a shared HANA DB, or MongoDB, it gives an error. Actually I want to run this application with multiple backend systems like HANA, Mongo, etc.
Are there any additional changes I need to configure in my Granny application? Please guide me on what I have to do to run this application with multiple databases on Cloud Foundry; so far it only runs with a Postgres database. I have tried with shared HANA but no luck.
Please help me out with the different DBs.
Thanks & Regards
WOW, that blog post is two years old - happy to know people still find it useful.
A couple of observations, in general the example should work with HANA - I've tested that. Yet, depending on which build pack you use you may need to manually add the HANA JDBC driver. I mentioned this explicitly in the README.md:
For Redis, Mongo etc. the same rationale applies. At least you'd need to include the respective libraries (JAR files). I've never tested those in conjunction with JPA and honestly I wonder how that should work as both Redis and Mongo have a completely different approach to storing data.
Hope that helps!