Personal Insights
First Thoughts about Architecture of ABAP Cloud Applications
At SAP Inside Track Frankfurt, I gave a talk "Is your ABAP Code Ready for the Cloud?". In the talk, I discussed the experiences I gained during a development project on this platform. In this blog entry, I would like to discuss some of these aspects. You can find the slides here.
Why ABAP Cloud Environment Matters
You can run existing ABAP applications in the ABAP Cloud Environment if you manage to decouple them from the core systems. This helps you keep the core systems clean, since you reduce the amount of custom code in those systems. Moreover, you can use it to simplify your landscape and let SAP handle the provisioning of the system. Another obvious advantage is a more flexible release cycle for your application.
The ABAP Cloud Environment is also a development and runtime system for edge applications, since you have the latest ABAP features at your disposal. When you develop an edge application for this environment, or if you choose to redesign an existing one, the question arises what kind of architecture you should choose. Before I present my first ideas on this topic, I will touch on the question of when a redesign for the cloud could make sense.
Lifting a Monolith to the Cloud?
When you port your ABAP applications to the platform, you have to be aware of certain restrictions. The set of available APIs is restricted, and you can't use ABAP Dynpro since there is no SAP GUI. So very likely, you will have to adjust the code. On a case-by-case basis, you will find out how much effort this takes. But even when the effort is not that high, you should ask yourself when it makes sense to redesign at least parts of the application.
From my experience, most ABAP applications are quite monolithic. That doesn't necessarily mean that this is bad. When decoupling from the Digital Core is possible, it IMHO makes absolute sense to move an ABAP application to SCP.
In my case, I decided against it and favored a redesign. Why? In the end, the following questions led to the decision:
- Will I have to change the code quite often due to new requirements?
- Can I establish short release cycles?
- Are there enough automated tests?
- Does implementing a new release of the application take much effort?
In the end, I decided that it made more sense to redesign the application from scratch. I found many interesting design patterns in the legacy app that became part of the new solution. But I also decided to break new ground. This is the topic I would like to discuss here.
Quality Criteria for ABAP Cloud Applications
I started with the following assumptions about cloud readiness in the ABAP context:
- Simplification: How can you get rid of complexity?
- Cloud Qualities: How can you establish short release cycles?
- Modularization & Decoupling: Avoid side effects when changing the application
- Smooth Operations: Keep update cycles short! Make installation easy!
Then I made some assumptions about cloud readiness and verified them using prototypes. You can find them in the mind map below.
This leads to an iterative approach: based on my assumptions, I derive development guidelines. I develop certain modules following those guidelines. During this work, I build different vertical prototypes. Last but not least, I refactor my application and thereby also obtain a horizontal prototype. This is easily done using the refactoring capabilities of ABAP in Eclipse. Then I look at the overall architecture and see whether it fulfills my needs. So what do ABAP Cloud applications look like?
ABAP Cloud Applications as Set of HTTP Services
ABAP Cloud Environment applications consist of HTTP services. In the case of Fiori UIs, you will implement them as OData services. For A2X integration, you can also develop plain REST services with JSON interfaces.
The modularization of services follows from the domain model. When two services share a common persistence, I develop them in the same deployment unit in different ABAP packages. Here is the architecture of a typical microservice.
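As a minimal sketch of such an HTTP service, a class can implement `IF_HTTP_SERVICE_EXTENSION` and be bound to an HTTP service artifact in the ABAP Cloud Environment. The class name and the JSON payload are hypothetical; only the interface and the request/response handling follow the platform's pattern:

```abap
CLASS zcl_demo_http_service DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    " Entry point for plain HTTP services in the ABAP Cloud Environment
    INTERFACES if_http_service_extension.
ENDCLASS.

CLASS zcl_demo_http_service IMPLEMENTATION.
  METHOD if_http_service_extension~handle_request.
    " request / response are typed IF_WEB_HTTP_REQUEST / IF_WEB_HTTP_RESPONSE.
    " Return a small JSON payload, e.g. for an A2X or health check client.
    response->set_content_type( 'application/json' ).
    response->set_text( `{"status":"OK"}` ).
  ENDMETHOD.
ENDCLASS.
```

OData services for Fiori UIs are generated from CDS-based service definitions instead; the handler class above is the pattern for the plain REST case.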
Quality Criteria for ABAP Microservices
In cloud applications, it is necessary that new versions of a microservice can be deployed very easily. Complex parametrization (typical customizing) makes this complicated, so I decided to avoid it whenever possible. I used the following strategies:
- configuration by exception
- configuration as part of the API
A typical example is logging. If the caller of the microservice needs a more detailed log, you can define a trace level as part of the API and return the log together with the result. This is also a way to avoid unnecessary persistence of data.
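The "configuration as part of the API" idea can be sketched as an interface like the following. All names are illustrative; the point is that the trace level travels with the request and the log travels back with the result, so nothing needs to be persisted:

```abap
INTERFACE zif_pricing_service PUBLIC.
  " 0 = off, 1 = info, 2 = debug -- configuration passed by the caller
  TYPES trace_level TYPE i.
  TYPES amount      TYPE p LENGTH 15 DECIMALS 2.
  TYPES log_lines   TYPE STANDARD TABLE OF string WITH EMPTY KEY.

  METHODS calculate
    IMPORTING iv_trace_level  TYPE trace_level DEFAULT 0
    EXPORTING et_log          TYPE log_lines       " log returned to the caller
    RETURNING VALUE(rv_price) TYPE amount.
ENDINTERFACE.
```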
How to Avoid Customizing?
I try to avoid customizing whenever possible. Why?
- Customizing increases complexity
- It is difficult to test highly parameterized applications with unit tests
- It is difficult to separate customizing activities since changes are written on the same transport request
- You will have to create your own customizing maintenance UIs since there is no SM30 in ABAP Cloud
For me, these reasons are severe enough that I tried to avoid customizing wherever possible. Where this is not possible, I recommend using only "initial customizing" that is done once.
My Development Guidelines
I decided to keep the application architecture simple by introducing some principles that should ensure that ABAP microservices can be maintained and evolved easily. My first guideline was to avoid the creation of frameworks and to build libraries instead. So what is the difference? I understand a framework as a set of software modules that allow clients to plug in. From the view of control flow, the framework calls the client. So a framework enforces standardization of the application. Once the framework is understood, different developers can plug in their modules. But there is also the problem that the framework restricts and inhibits the evolution of the application. This can be prevented if you develop libraries instead. In contrast to frameworks, libraries are called by the client. If you use ABAP interfaces to access the library, you can easily replace a library with another implementation.
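The library-behind-an-interface idea can be sketched like this (all names are illustrative, including the tax rate):

```abap
INTERFACE zif_tax_library PUBLIC.
  TYPES amount TYPE p LENGTH 15 DECIMALS 2.
  METHODS calculate_tax
    IMPORTING iv_amount     TYPE amount
    RETURNING VALUE(rv_tax) TYPE amount.
ENDINTERFACE.

" One concrete implementation; others can be swapped in without
" touching client code, since clients only see the interface.
CLASS zcl_tax_library_de DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    INTERFACES zif_tax_library.
ENDCLASS.

CLASS zcl_tax_library_de IMPLEMENTATION.
  METHOD zif_tax_library~calculate_tax.
    rv_tax = iv_amount * '0.19'.
  ENDMETHOD.
ENDCLASS.
```

The client stays in control of the flow, e.g. `DATA(lo_tax) = CAST zif_tax_library( NEW zcl_tax_library_de( ) ).` This is exactly the inversion of the framework situation, where the framework would call the client.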
The second guideline was that I decided not to use customizing in system tables. I find this ABAP idiom in many applications. This is an efficient way to parameterize checks for example. Sometimes those system tables even contain names of function modules and classes that are called dynamically. This is also a pattern that is used often to implement code for validations, since aspects like time dependency can be implemented very easily and many developers can work in parallel.
I decided to use this pattern as well, with the exception that I avoided dynamic calls. Therefore I had to store the former customizing from system tables in attributes of those classes and fill them in the constructor. This worked well because the VALUE operator makes it very easy. On the other hand, some pitfalls can occur: the size of a single code line is limited (~150kb), so I had to split up the content of huge customizing tables. And when I made a typo, I sometimes got primary key violations in internal tables, which take some time to analyze.
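A minimal sketch of this pattern, with hypothetical names: the former customizing content lives in a read-only class attribute that the constructor fills with the VALUE operator.

```abap
CLASS zcl_country_settings DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    TYPES: BEGIN OF setting,
             country   TYPE string,
             threshold TYPE i,
           END OF setting,
           settings TYPE SORTED TABLE OF setting
                    WITH UNIQUE KEY country.
    METHODS constructor.
    DATA mt_settings TYPE settings READ-ONLY.
ENDCLASS.

CLASS zcl_country_settings IMPLEMENTATION.
  METHOD constructor.
    " Former customizing table content as a literal table.
    " A typo in a key column shows up at runtime as a duplicate-key
    " violation of the sorted table -- the pitfall mentioned above.
    mt_settings = VALUE #( ( country = `DE` threshold = 100 )
                           ( country = `FR` threshold = 200 ) ).
  ENDMETHOD.
ENDCLASS.
```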
In most cases, I used type definitions in interfaces or classes instead of creating DDIC domains and data elements. I consider this much faster and more flexible due to the refactoring capabilities of ABAP in Eclipse.
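Such type definitions can look like the following sketch (names are illustrative); consumers reference them as `zif_order_types=>order_id`, and renaming via ABAP in Eclipse updates all usages:

```abap
INTERFACE zif_order_types PUBLIC.
  " Replaces a DDIC data element + domain pair
  TYPES order_id TYPE c LENGTH 10.
  TYPES: BEGIN OF order_header,
           id         TYPE order_id,
           created_at TYPE utclong,
         END OF order_header.
ENDINTERFACE.
```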
Last but not least, I was aiming for very high coverage with ABAP Unit tests. I consider this necessary when you develop applications for the cloud.
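For completeness, a minimal ABAP Unit test using `CL_ABAP_UNIT_ASSERT` (the assertion class available in the ABAP Cloud Environment, as opposed to the classic `CL_AUNIT_ASSERT`); the tested expression is just a stand-in:

```abap
CLASS ltc_string_utils DEFINITION FINAL FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS upper_case FOR TESTING.
ENDCLASS.

CLASS ltc_string_utils IMPLEMENTATION.
  METHOD upper_case.
    " assert_equals compares actual vs. expected and fails the test on mismatch
    cl_abap_unit_assert=>assert_equals(
      act = to_upper( `abap` )
      exp = `ABAP` ).
  ENDMETHOD.
ENDCLASS.
```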
Summary
In the past, I designed ABAP enterprise applications inside-out: starting from a data model, I developed a virtual data model, the business objects on top of it, and OData services using Referenced Data Sources.
In the case of ABAP Cloud applications, I suggest altering this approach a little:
- Think of your application as a set of (micro-) services.
- Specify what kind of services the application offers for different clients.
- Those clients can be the UI for end users, a health check infrastructure for administrators, REST services for a Fiori UI or A2A services that are used by other applications in your enterprise IT…
- Then start to design APIs. Try to simplify them whenever you can.
- Then start with the domain model and design the business objects.
Working with the ABAP Cloud Environment was fun, although it took some time until I learned that I had to use different ABAP classes, e.g. CL_ABAP_UNIT_ASSERT instead of CL_AUNIT_ASSERT and CL_ABAP_CONV_CODEPAGE instead of CL_ABAP_CODEPAGE. I missed a lot of features, like transaction ST22 to access ABAP runtime errors. Some parts of the development were a little tedious since there was no general health check infrastructure, which is IMHO necessary to operate cloud services. So I had to implement a rudimentary one myself.
And now I am curious: Which kind of programming patterns should be used in cloud applications and which should be avoided? What kind of possibilities do you see for simplification of ABAP applications so that they become ready for the cloud?
When you mention avoiding customizing, do you mean technical customizing? Like an API running on changing data, or just simulating its change.
Most business applications run on a set of business-configured rules, which would mean the services have to look up those customized rules.
In my case I avoided any kind of technical customizing in the sense of system tables. I also managed to reduce customizing for the Business process to the minimum.
IMHO customizing doesn't fit into the cloud. IMHO we need a concept of configuration that supports the following:
But even with this kind of toolset, I would try to minimize it whenever possible. Just imagine the following: would you like to "customize" an application like Google Drive before using it? IMHO we should try everything to prevent it perhaps using a Design Thinking approach.
Best Regards,
Tobias
I thought customizing tables were kind of passe already and we're supposed to use BRF+ instead. Is BRF+ available in the Cloud?
If you find any good method for getting the business users off the "customize everything" train - let me know. There is one humongous table in our system that almost acts like BRF+ but worse and it is used to send programs in all kinds of different directions.
This is a terrible thing, I couldn't agree more. But our users think this is the best thing since sliced bread. So there is a bit of a problem...
Hi Jevelina,
this is an interesting point.
I didn’t look at the new steampunk release but I don’t expect BRFplus to be supported. For S/4HANA 1809 SAP announced SAP Business Rules Cloud as a way to go. This is a compatible subset of BRFplus (transactional and analytical flavor) which can be called like BRFplus functions. See here for example.
Should BRFplus replace all customizing? I'm not sure. The strength of BRFplus is if-then conditions. For most users, customizing tables are more convenient and much simpler to use. But there is also a technical reason: the content of BRFplus tables can't be accessed via CDS and so can't be part of a virtual data model. When you have a parameterizable data model (the simplest example is the internationalization of texts), the parameterization has to be part of the data model. So we need mechanisms for customizing.
Best Regards,
Tobias
Hi Tobias and Jevelina,
well, "SAP Business Rules Cloud" has two deployment options for optimal performance: deploy to BRF+ and deploy to HANA.
But the question remains: what is available for Steampunk/ABAP Cloud (besides calling the REST API for rule evaluation, which would be too slow IMHO)?
Regards,
Wolfgang
In my company they also wrote their own version of BRF+ back in 2010. In a lot of ways it is a work of genius, and it certainly has a much better UI than BRF+.
And yes, the end users also think this is a fantastic thing and the whole company runs off it. I have to say it is better than ten million different Z customising tables, and you get a change log, and are forced to put documentation against everything, even if people put a dot or an X.
That was in Germany. Back in Australia I once spent nine months poring through a load of Z programs and changing all the hard coding to customising. The end result is that programs are much easier to port from one place to another and there are (immutable) variables with meaningful names rather than 0093 or YXDF. This is the "once only" customising Tobias alluded to.
Going back to mental attitudes, if it is difficult to get people (programmers) to do OO, and ten times more difficult to get them to grasp what a CDS view is, and ten times more difficult again to grasp the RAP, I would say it is a hundred times more difficult again to tell them not to do Z customising because "we have always done it this way"
Indeed I imagine most people on their first SAP day are told what made SAP different/successful was that you could customise the one software base to run in any country and any industry. It is quite difficult to argue with that.
Jumping subject again, I would imagine the cloud version of the Business Rules will be the one in the ABAP cloud. It looks just like BRF+ to me, which is good, as Carsten (BRF+ inventor) told me his end goal was a unified front end for all the dozens of disparate business rule systems SAP has acquired over the years, where you deploy the rules to wherever and the target system implements them how it likes (e.g. generated code with hard coded values in ABAP that change when the rules change). He would say all customising should be done in BRF+, as might be imagined.
Was that chaotic and rambling enough for you?
Cheersy Cheers
Paul
One more rambling thing.
In the CAP(M) a CDS definition can also call a service as a data source (so they don't call them CDS Views). One day (perhaps) ABAP might go down the same path.
Then you could call a business rules service, which I presume you can do in the CAPM already. Christian Drumm's Zombie demonstration ran in the SAP Cloud Platform (not using ABAP) and used workflow and business rules services.
Disagree on this one. Somehow Thorsten Franz was able to explain it to me in like 10 min. conversation at TechEd years ago (when no one even heard about them). The way it came up in conversation: I was sharing a problem and Thorsten suggested CDS as a solution.
Many times the education process goes like this: oh, here is this thing you can use and it's awesome. But people don't learn like that in real life (at least not effectively). They learn based on the real problems, they look for "what can this thing do for me?". You search for a solution, try it and learn something. If you see that this new thing just made your life easier you will want to learn more.
This was part of my TechEd presentation last year (which, sadly, wasn't as well attended as I'd hoped): we are trying to educate adults as if they're children and it doesn't work like that.
P.S. This is not meant about you specifically, Paul. You're actually one of the few doing good job with it. But vast majority of content is just ineffective IMHO.
Hi Paul,
SAP defined the so-called Enterprise Rule Model standard, which consists of analytical and transactional rules. I like the UI of SAP Business Rules Cloud and I think the simplification is great. But I fear that many existing BRFplus rule systems are too complex to be modeled according to the standard. So let's wait and see how that standard evolves.
Best Regards,
Tobias
Hi Tobias, what about the UI part?