
In the previous post, we saw that a Multi Target Application comprises a set of modules that, executed together, form a business application with its own lifecycle. We also found that those modules are executed as microservices.

What are Micro-services?

Microservices are independent modules in the sense that they can be executed and deployed separately. They are also isolated from each other and share no resources, even if that means packaging the same library twice into different modules, each with its own runtime environment. These modules also expose interfaces to communicate with each other (e.g., an OData service, JDBC, etc.).

I think one of the key behaviors in a microservice architecture, the one that best illustrates the isolation and independence, is that the services can be deployed separately. In other words, you can make some changes to, for example, the Node.js module, re-build and re-deploy it, and you would not need to restart or re-deploy the other two modules.

This also means that if a single service crashes for whatever reason (usually not our fault, we’re great developers…), the rest of the services are not necessarily disrupted, or at least have the chance to fail gracefully. A very tangible example of this is that you can change the service instance of the UI5 library, re-build only the web module and see the changes in execution immediately. The Node.js and database modules will not even notice (and if you think about it, there is generally no reason for them to).

When you “activate” your application (or push, in our new Cloud Foundry jargon), the micro-services for each module are created.

How is this glued together?

I don’t want to be repetitive, but these independent, isolated, etc. services do not make any sense unless they are all glued together and serve a business purpose. Explained to my ABAPer self: the Multi-Target Application is the equivalent of a FRICEW object, with its own gap or business requirement to fulfill.

Technically, there is a file that declares that this cocktail of modules is a single application; it also describes how these services are bound to each other and in which order they should be deployed.

This file, called the development descriptor, is mostly filled in automatically for you as you create modules in Web IDE. Of course, you will need to add the information needed to glue your modules together, as it cannot yet read your mind.

Here is a sneak peek of the file you will edit in Web IDE (called mta.yaml) for a basic (incomplete) app in which the web module depends on the Node.js module (presenting itself as “js-api”), with a little surprise at the end:
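A minimal sketch of such a descriptor (the project ID, module names and paths are hypothetical, and the exact properties depend on your project) could look like this:

```yaml
_schema-version: "2.0.0"
ID: my-mta-app          # hypothetical project name
version: 0.0.1

modules:
  - name: web           # the UI5/web module
    type: html5
    path: web
    requires:
      - name: js-api    # consumes the URL provided by the Node.js module
        group: destinations
        properties:
          name: js-api
          url: ~{url}

  - name: js            # the Node.js module
    type: nodejs
    path: js
    provides:
      - name: js-api
        properties:
          url: ${default-url}
    requires:
      - name: hdi-container

  - name: db            # the database module
    type: hdb
    path: db
    requires:
      - name: hdi-container

resources:              # the little surprise at the end
  - name: hdi-container
    type: com.sap.xs.hdi-container
```

The web module consumes the URL that the Node.js module provides under the name “js-api”, and both the js and db modules require the hdi-container resource declared at the end.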

 

This yaml does not take some details (e.g., authentication) into account. We will go deeper into it as we progress.

About that surprise at the end: when you first create the MTA app and add the database module, that “hdi-container” and its parameters are added to the mta.yaml file automatically under “resources” (go, take another look at it…).

The mta.yaml file

In the first piece of the development descriptor file, you (the Web IDE, actually) added the modules that should be bound to each other and treated as a business app.

One of those modules is the “database” module. In here, and in this particular case, you will create a design-time artifact called “entity”, which will then become a runtime artifact called “table” (yes, the good ol’ table) and you will then cast some other Core Data Services spells.
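As an illustrative sketch (the namespace, context and entity names here are hypothetical), such a design-time CDS entity could look like this:

```
// db/src/data.hdbcds — a design-time artifact (hypothetical path and names)
namespace my.app.db;

context data {
    entity Product {
        key id : Integer;
        name   : String(100);
        price  : Decimal(10, 2);
    };
};
```

On deployment, HDI turns this design-time definition into a runtime table (named along the lines of “my.app.db::data.Product”) inside your container’s schema.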

You will also add data to those entities, access groups of tables created by another developer or even the schema that has been replicated onto HANA from your ECC system (that pretty BKPF table you will use in a Calculation View… you naughty).

That hdi-container is your own piece of the database. It is what your database module will use to access its own piece of HANA. It’s your own piece of a backing service.

Backing, Application and Mashup services

Why is the hdi-container there? Because you need to access the database (simple, huh?).

So do you just create a schema and hardcode some credentials for the other modules to access? Do you really think it’s a good idea to give full access to the full database to a full developer? What would happen to all the isolation and independence and consequent robustness we’ve been fighting for?

No, my friends, we are not stopping here. You are getting your very own piece (instance) of the database (a service) in the shape of a managed service. The database, the User Account and Authentication service, the job scheduler: they are all backing services, the base layer that serves different applications across different spaces.

Let that sink in with an example. This is what some running services look like from the console (for example, the output of the xs services command):

What am I looking at?

That is the list of services running, for example, in the “development” space (an environment in which resources are shared and can access each other). 

You can see they have a name. Some of these are named as if somebody had appended a fit of rage on the keyboard between the user ID and the name of the application. It’s not a fit of rage; it is an auto-generated ID (and this is why computers do not perform creative tasks).

You can override this by adding your own predefined name as a parameter to the service, for example in the mta.yaml file. I keep going back to mta.yaml so you understand how it orchestrates most of this.

The plan indicates the scope or level of resources and features assigned to the service, and the available plans vary by service. For example, for the “hana” service you could use the “schema” plan for a plain schema or “hdi-shared” for an HDI container (a schema plus metadata). The plan is also associated with the type you declared in the descriptor.
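As a sketch of how the descriptor ties these together (the instance name “my-app-hdi” is a hypothetical choice), a resource can spell out the service, the plan and a predefined instance name:

```yaml
resources:
  - name: hdi-container
    type: org.cloudfoundry.managed-service
    parameters:
      service: hana              # the backing service
      service-plan: hdi-shared   # schema plus metadata, as described above
      service-name: my-app-hdi   # hypothetical predefined name instead of the auto-generated one
```

The shorthand type com.sap.xs.hdi-container typically resolves to the same service and plan for HDI containers; the explicit form just makes the service and plan visible.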

When you right-click on the MTA app and click on build, the files are uploaded to the platform and the buildpacks are called to produce the executables. What is uploaded is an archive called “<<projectname>>.mtar”.

What happens when we move the .mtar file to the QA environment for testing?

The backing services (again… database, UAA, etc.) are declared as resources for the application services, so the platform calls some elves, called service brokers, that create a service instance for your specific application. The platform knows which service broker to call thanks to the “type” (e.g., com.sap.xs.hdi-container), and the instances are made available. The application services are built on top of the backing services.

Now that the backing services are provided and bound to the application services, the application can run.

Then come the mashup services. They combine application services and expose a single point of entry, routing the requests they receive to the right service. An example of this is a Fiori interface, a service whose tiles call other services. The purest example of this is the application router, which is what gives your MTA application a single entry point. Routes are a key concept in themselves and are explored in the next blog (coming soon).
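In XS Advanced, the application router reads its routing table from a file called xs-app.json in the web module. A minimal sketch (the route patterns and the “js-api” destination are hypothetical examples matching the descriptor discussed earlier):

```json
{
    "welcomeFile": "index.html",
    "routes": [
        {
            "source": "^/api/(.*)$",
            "destination": "js-api",
            "authenticationType": "none"
        },
        {
            "source": "^/(.*)$",
            "localDir": "resources"
        }
    ]
}
```

Requests matching /api/ are forwarded to the Node.js service bound as the “js-api” destination, while everything else is served as static content from the web module itself.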

Wait! Before you leave… I would say it’s high time you experimented with all of this yourself, if you haven’t already. Now that XS Advanced is easily available with SAP HANA, express edition, on different cloud providers, you can follow the introductory step-by-step tutorials here: https://www.sap.com/developer/groups/hana-xsa-get-started.html

 

 

Stay tuned for the next releases on LinkedIn! And if you are lucky enough to be going to SAP TechEd, we will be building an XS Advanced app and other cool stuff at the App Space. Hope to see you there!
