
Using the Cloud SDK in an existing Spring app

There seems to be a lot of hype these days around the S/4 Cloud SDK and the new Cloud Application Programming Model. I briefly used it in the past, but decided back then that it was not mature enough to invest time into.

As I am between projects right now, I decided to take another look at it. Most of the blogs here on SCN focus on how to build new applications (“side by side extensions”) using this SDK. Even though I have my disagreements with the approach proposed for creating new apps, I decided to look into another use case: adding functionality to existing applications.

This will be a terribly honest look at how my journey went and all the problems that I encountered.

A “school” app

GitHub: https://github.com/serban-petrescu/spring-training-workshop 

As it happens, I have a small Spring Boot application lying around. Initially, I created it during a small workshop that I held internally at our company, but I re-purposed it to also explore the SDK.

The business context of the app is very simple: we want to help a teacher manage grades for his students. Basically, we have three phases during a university year:

  • Start of the year: the teacher gets a list of students in each group from the university.
    • He should be able to import this data into our app.
  • During the course of the year: he will browse through his groups, randomly select some students for homework checks each week and grade his students.
    • He should be able to view all the groups / students / grades, add new grades into the system and also get a random subset of students for the homework check.
  • At the end of the year: he will compute the average grades for each student.
    • Our app should compute this for him.
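The per-phase logic above is simple enough to sketch in plain Java. This is purely illustrative; the class and method names are mine, not the app’s actual code:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Illustrative only: random homework-check selection and average computation.
// Names are hypothetical, not taken from the actual application.
class GradingLogic {
	static List<String> pickForHomeworkCheck(List<String> students, int count, Random random) {
		// Shuffle a copy and take the first `count` students.
		List<String> shuffled = new ArrayList<>(students);
		Collections.shuffle(shuffled, random);
		return shuffled.subList(0, Math.min(count, shuffled.size()));
	}

	static double averageGrade(List<Integer> grades) {
		// Average of all grades, 0.0 for an empty list.
		return grades.stream().mapToInt(Integer::intValue).average().orElse(0.0);
	}
}
```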

An existing code base

During the aforementioned workshop, I implemented everything using typical Java + Spring constructs:

  • Flyway for managing the database structure, organized into three migrations: initial data structure, some mock data and a view for computing the averages.
  • JPA entities for representing the database tables in our Java runtime + a set of Spring Data repositories for handling the communication with the DB.
  • Simple transactional services containing the very little business logic of the app. These services map the entities into DTOs using specialized classes.
  • Some REST controllers for exposing the services via a REST API.
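The entity-to-DTO mapping mentioned above can be sketched in plain Java. In the real app the entity is a JPA-annotated class; the names here are hypothetical:

```java
// Illustrative sketch of the entity-to-DTO mapping approach described above.
// Names are mine, not the app's actual classes.
class StudentEntity {
	long id;
	String name;

	StudentEntity(long id, String name) {
		this.id = id;
		this.name = name;
	}
}

class StudentDto {
	final long id;
	final String name;

	StudentDto(long id, String name) {
		this.id = id;
		this.name = name;
	}
}

// A specialized mapper class keeps the services free of conversion details.
class StudentMapper {
	StudentDto toDto(StudentEntity entity) {
		return new StudentDto(entity.id, entity.name);
	}
}
```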

Before starting with the SDK, I did a small cleanup, upgraded to the latest version of Spring Boot and wrote a simple Docker Compose file for setting up a local PostgreSQL database (instead of using one installed directly on my machine).

Our goals

Let’s imagine the following situation: we built our Spring app and delivered it to production. Some external applications are already using the REST API that we provided. Now our customer contacts us asking if he can integrate our app into his SAP ecosystem.

As we all know, SAP made a strategic decision a while ago to go for OData as its API protocol, so after some analysis we decide to also provide an OData API for our app. It goes without saying that the existing consumers should be impacted as little as possible, ideally not at all.

Because we are unsure exactly who will consume our APIs, we also think that providing both OData V2 and OData V4 would make sense (if possible).

Starting with the SDK

After reading around a little, I realized that I only needed a small portion of the SDK, namely the service development part. I had used Olingo a while back for creating V2 services and it was certainly not a pleasant development experience.

At a glance, it looked like I would just need to annotate some methods and that’s it. After thinking about it a little, I realized that there must at least be some hooks or listeners to trigger the annotation scanning.

First, I went through some blogs. Carlos Roggan created a nice blog series dedicated to OData service development, so I started from there.

Managing our dependencies

Initially, I dissected the V2 provider libraries, but I struggled a lot to get the correct dependencies into my POM. This is my first point of contention: there are a lot of dependencies and it is very difficult to navigate through them. The BOM(s) list many of them, some with exclusions, basically forcing you to import the BOM even if you actually need only a single dependency.
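For reference, importing a BOM in Maven looks like the sketch below; note that the artifactId here is a placeholder, not the SDK’s actual BOM coordinates:

```xml
<dependencyManagement>
	<dependencies>
		<dependency>
			<groupId>com.sap.cloud.servicesdk.prov</groupId>
			<!-- placeholder artifactId, not the real BOM name -->
			<artifactId>some-bom-artifact</artifactId>
			<version>${s4.sdk.version}</version>
			<type>pom</type>
			<scope>import</scope>
		</dependency>
	</dependencies>
</dependencyManagement>
```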

I ended up including the following dependencies:

<dependency>
	<groupId>com.sap.cloud.servicesdk.prov</groupId>
	<artifactId>odata-core</artifactId>
</dependency>
<dependency>
	<groupId>com.sap.cloud.servicesdk.prov</groupId>
	<artifactId>odatav2</artifactId>
	<type>pom</type>
</dependency>
<dependency>
	<groupId>com.sap.cloud.servicesdk.prov</groupId>
	<artifactId>odatav2-prov</artifactId>
</dependency>
<dependency>
	<groupId>com.sap.cloud.servicesdk.prov</groupId>
	<artifactId>odatav2-lib</artifactId>
</dependency>
  • odata-core seems to contain dependencies and code that are decoupled from the concrete OData version.
  • odatav2 seems to be a BOM-like artifact, pulling in various other modules.
  • odatav2-lib seems to be a shallow wrapper around Olingo.

This seems to pull in a lot of other stuff that I don’t want, but I don’t have a better alternative (short of manually excluding the transitive dependencies that I don’t care about).

Then I added the V4 dependencies.

<dependency>
	<groupId>com.sap.cloud.servicesdk.prov</groupId>
	<artifactId>odatav4</artifactId>
</dependency>
<dependency>
	<groupId>com.sap.cloud.servicesdk.prov</groupId>
	<artifactId>odatav4-prov</artifactId>
</dependency>
<dependency>
	<groupId>com.sap.cloud.servicesdk.prov</groupId>
	<artifactId>odatav4-lib</artifactId>
	<version>${s4.sdk.version}</version>
</dependency>

They are mostly symmetrical to the V2 dependencies, with two annoying differences:

  • The odatav4-lib module’s version is not managed via the BOM.
  • The odatav4 “BOM”, even though it contains absolutely no classes at all, is of type JAR.

Verdict: it is certainly a good thing that we can use Maven Central to pull the right dependencies. But the way the artifacts are organized is not clean in my view: they have a lot of dependencies on each other, forcing developers into trial-and-error rounds, including dependencies step by step until the ClassNotFoundExceptions stop.

The metadata

OK, so the first step was to manually create the metadata for all my operations. This was fairly easy: I basically just created three entities (Group, Student, Grade), a complex type (AverageGrade) and several function imports (GetAverageGrades, GetHomeWorkCheck) to match the REST operations.

I created two metadata files: one for V2 and one for V4. This was actually the first time I created a V4 metadata file by hand and I must say that it is a lot less verbose than its V2 counterpart.
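For illustration, a heavily trimmed V4 metadata sketch for a Student entity might look like the following (the namespaces are the standard OASIS CSDL ones; the schema namespace and property names are my own assumptions, not the project’s actual file):

```xml
<edmx:Edmx Version="4.0" xmlns:edmx="http://docs.oasis-open.org/odata/ns/edmx">
	<edmx:DataServices>
		<Schema xmlns="http://docs.oasis-open.org/odata/ns/edm" Namespace="school">
			<EntityType Name="Student">
				<Key>
					<PropertyRef Name="Id"/>
				</Key>
				<Property Name="Id" Type="Edm.Int64" Nullable="false"/>
				<Property Name="Name" Type="Edm.String"/>
			</EntityType>
			<EntityContainer Name="Container">
				<EntitySet Name="Students" EntityType="school.Student"/>
			</EntityContainer>
		</Schema>
	</edmx:DataServices>
</edmx:Edmx>
```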

Bootstrapping V2

The next step was to bootstrap the V2 API. I found that my dependencies contain:

  • One EndPointsList servlet which exposes the OData V2 root document for each service.
  • One ODataServlet which comes from Olingo and for which I have to pass a service factory (which is also from the dependencies).
  • One ServiceInitializer listener which does some initialization in the various singletons that the libraries are built upon.

I initialized everything through a configuration:

	@Bean
	public ServletContextListener serviceInitializerV2() {
		return new ServiceInitializer();
	}

	@Bean
	public ServletRegistrationBean endpointsServletV2() {
		ServletRegistrationBean<EndPointsList> bean = new ServletRegistrationBean<>(new EndPointsList());
		bean.addUrlMappings("/api/odata/v2/");
		return bean;
	}

	@Bean
	public ServletRegistrationBean odataServletV2(ApplicationContext context) {
		ODataInstanceProvider.setContext(context);
		ServletRegistrationBean<ODataServlet> bean = new ServletRegistrationBean<>(new ODataServlet());
		bean.setUrlMappings(Collections.singletonList("/api/odata/v2/*"));
		bean.addInitParameter("org.apache.olingo.odata2.service.factory", ServiceFactory.class.getName());
		bean.addInitParameter("org.apache.olingo.odata2.path.split", "1");
		bean.setLoadOnStartup(1);
		return bean;
	}

I wrote some simple controllers annotated with the SAP OData annotations (@Query, @Read, etc), whose only job is to extract the data from the incoming requests, call the service and then map the result back to the OData objects.

This actually went pretty well; the annotations are quite intuitive, although their naming is slightly inconsistent (the @Function annotation has a capitalized Name property).

When running the app, I hit a problem: these annotated classes were being instantiated via newInstance, but I did not have a no-arg constructor (because of constructor-based dependency injection).
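The failure mode can be reproduced in isolation without the SDK; here is a minimal reflection sketch (class names are mine) showing why newInstance-style instantiation breaks for classes that only have an injecting constructor:

```java
// Illustrative only: a class using constructor injection has no no-arg
// constructor, so reflective instantiation (as the library does) fails.
class InjectedController {
	private final String dependency;

	InjectedController(String dependency) {
		this.dependency = dependency;
	}
}

class ReflectionCheck {
	static boolean canInstantiateReflectively(Class<?> clazz) {
		try {
			// Equivalent to the (deprecated) clazz.newInstance() call.
			clazz.getDeclaredConstructor().newInstance();
			return true;
		} catch (ReflectiveOperationException e) {
			return false; // no accessible no-arg constructor
		}
	}
}
```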

Luckily, the library uses the native ServiceLoader mechanism for creating an InstanceProvider. With some small hacks, I linked my own provider which uses the Spring Application Context for retrieving instances:

public class ODataInstanceProvider implements InstanceProvider {
	private static ApplicationContext context;

	static void setContext(ApplicationContext context) {
		ODataInstanceProvider.context = context;
	}

	// Typed variant, used later by the V4 controllers.
	static <T> T getInstanceTyped(Class<T> clazz) {
		return context.getBean(clazz);
	}

	@Override
	public Object getInstance(Class clazz) {
		return context.getBean(clazz);
	}
}
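For completeness: the ServiceLoader “linking” happens through a standard provider-configuration file under src/main/resources/META-INF/services/, named after the fully qualified name of the SDK’s provider interface (which I am not reproducing here), containing a single line with the implementation class. The package below is a placeholder:

```
my.app.config.ODataInstanceProvider
```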

Due to the structure of the entities and the REST API, I wanted to be able to directly create a student inside a group by doing POST Groups(1)/Students. This does not seem to be possible, so I had to create some specialized POJOs for the OData side (which also carry the parent entity IDs inside the child entities).

Trivia: I found out that the SDK uses Jackson’s convertValue method to transform POJOs into data maps. This means that if you put any @JsonProperty annotations on your fields, they will be renamed in the output (as long as the metadata also contains this name). I am also not sure how I feel about the performance of this, considering that Jackson performs a serialization and then a deserialization to do the mapping.

Verdict: this part went pretty smoothly. SAP actually used a well-known mechanism to let developers hook into their framework, so I was pretty happy at this point.

Bootstrapping V4

The V4 configuration is very similar, with the difference that we don’t have an EndPointsList servlet:

	@Bean
	public ServletContextListener odataApplicationInitializerV4() {
		return new ODataApplicationInitializer();
	}

	@Bean
	public ServletRegistrationBean<ODataServlet> odataServletV4() {
		ServletRegistrationBean<ODataServlet> bean = new ServletRegistrationBean<>(new ODataServlet());
		bean.setUrlMappings(Collections.singletonList("/api/odata/v4/*"));
		bean.setName("ODataServletV4");
		bean.setLoadOnStartup(1);
		return bean;
	}

After doing this configuration, sh*t started hitting the fan.

It looks like there is a so-called AnnotationRepository which stores all the annotated methods. Unfortunately, this singleton class keeps everything inside a map of lists. Both the V2 and the V4 listeners perform the same annotation scanning, so these lists end up containing exact duplicates. Some well-intended sanity-check logic then throws an exception because of this situation.
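The failure can be reproduced in miniature without the SDK. Below is an illustrative sketch (not the actual AnnotationRepository code) of a map-of-lists registry with such a sanity check; running the same scan twice trips it:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of a registry that keeps annotated methods in a map of
// lists and rejects duplicates, mimicking the behavior described above.
class MethodRegistry {
	private final Map<String, List<String>> methodsByAnnotation = new HashMap<>();

	void register(String annotation, String method) {
		List<String> methods =
				methodsByAnnotation.computeIfAbsent(annotation, k -> new ArrayList<>());
		if (methods.contains(method)) {
			// Sanity check: a second identical scan ends up here.
			throw new IllegalStateException("Duplicate handler for " + annotation);
		}
		methods.add(method);
	}
}
```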

Trivia: while digging through this to find a solution, I stumbled upon a magnificent piece of coding.

I then thought that maybe at some point I would want to do something slightly different in V4 than in V2, so I extracted some base classes for the OData controllers and created dedicated controllers for V2 and for V4.

Now I had a slightly different problem: both V2 and V4 context listeners use the same package init parameter to determine where to perform the scanning. I needed to be able to specify a different package for each listener.

To achieve this, I created a simple decorator for the listeners which spoofs the package parameter. This hack looks like this:

@RequiredArgsConstructor
class SingleParameterSettingListener implements ServletContextListener {
	private static final String PARAMETER_NAME = "package";

	private final ServletContextListener delegate;
	private final String parameterValue;

	@Override
	public void contextInitialized(ServletContextEvent sce) {
		delegate.contextInitialized(new ServletContextEvent(new Context(sce.getServletContext())));
	}

	@Override
	public void contextDestroyed(ServletContextEvent sce) {
		this.delegate.contextDestroyed(sce);
	}

	private interface GetInitParameter {
		String getInitParameter(String name);
	}

	@RequiredArgsConstructor
	private class Context implements ServletContext {
		@Delegate(excludes = GetInitParameter.class)
		private final ServletContext delegate;

		@Override
		public String getInitParameter(String name) {
			if (PARAMETER_NAME.equals(name)) {
				return parameterValue;
			} else {
				return delegate.getInitParameter(name);
			}
		}
	}
}

The next thing I had to do was to give different names to the V2 and V4 services (by adjusting the metadata file names accordingly).

Lastly, the V4 library does not seem to use the ServiceLoader mechanism for creating controller instances like the V2 library does; instead, it directly calls newInstance on the class.

To circumvent this problem but still get the dependencies from Spring, I had to create a no-arg constructor in each V4 controller and fetch the dependencies through my ODataInstanceProvider from before:

@Component
@RequiredArgsConstructor
public class RandomODataV4Controller {
	private final RandomODataController base;

	public RandomODataV4Controller() {
		this.base = ODataInstanceProvider.getInstanceTyped(RandomODataController.class);
	}

	// annotated methods follow...
}

After doing this final change, my OData services were up and running.

Verdict: during this phase, I wanted to throw my laptop out of the window. The V4 libraries seem to be of noticeably worse quality than the V2 ones. There is also a clear lack of consistency between the two (the V2 library uses the ServiceLoader concept whilst the V4 one does not).

An extra listener

Initially, I added one more listener for the V2 service: the ServletListener. Depending on the listener initialization order, I encountered several problems (and decided to just drop it):

  • The CDSDataProvider would be used instead of the CXSDataProvider, so I would get errors regarding the HANA connection.
  • The AnnotationContainer seems to have a list of supported annotations. All the SAP annotations for service development (@Read, @Query, etc.) are added by all three listeners. The difference is that this listener does not check whether they were previously added by someone else. If it runs after another listener has added these annotations, each annotation ends up present twice in the container, which then causes exceptions down the road.

One positive side effect of this listener was that it adds support for XLSX. Initially, I manually added the configuration to get this working even without the listener:

	private void initializeXlsxSupport() {
		try {
			CustomFormatProvider.setCustomFormatRepository((ICustomFormatRepository) Class
					.forName("com.sap.cloud.sdk.service.prov.v2.rt.core.xls.CustomFormatRepository")
					.newInstance());
		} catch (Exception e) {
			logger.error("Error initializing XLSX processor", e);
		}
	}

But then I noticed that I kept seeing a random XLSX file in my project directory after making a call requesting $format=xlsx. After checking the source code, I found another magnificent implementation.

So I decided to scrap support for this format.

Overall verdict

I am sure that some stuff can be done better on my side, so I would appreciate some hints.

Nevertheless, I would suggest several actions for the product-owning team to improve the developer experience of using the SDK:

  • Improve the dependency structure or provide “starters” that allow developers to include a single dependency in the POM and get support for plain OData Vx (maybe one starter with CDS and one without).
  • Some more improvements to the code base. Sorry, but it is far from clean and decoupled. Looking at the sources, I get the impression that everything is tightly coupled, and some artifacts even make assumptions about the archetype being used.
  • A lot of stuff is done via static properties, singletons and newInstance. This is generally bad practice and makes it harder to integrate the SDK with other libraries.
  • Put the source code on GitHub. The source code is already readily available on Maven Central, so why not put it on GitHub? At least then you could get issues and pull requests to fix things like the XLSX issue above.
  • Some more documentation. IMO, the JavaDocs are sometimes useful, but at times they provide absolutely no insight.
  • A more library-centric mindset: right now it looks to me like the SDK is purely focused on supporting the creation of new “side by side extensions” in a way that completely “marries” the extension to the SDK. In my view, the other use cases should be considered as well (in which only a sub-section of the SDK is used, and only in smaller components of an app).

Frankly, as a software architect I reject the notion that a framework should push me to choose a particular structure, application server, lifecycle management, testing strategy, etc. for my solutions.

It is OK to provide recommendations, but when most of the information is about an end-to-end scenario in which all the technical decisions you could take are dictated by the SDK, then they are no longer just recommendations. I get a slight sense of déjà vu when thinking about the SDK and its on-premise ABAP cousin, which dictates pretty firmly how an application must be structured.

 

2 Comments
  • Hello Serban,

    thanks a lot for your honest feedback on the SDK formerly known as SAP Cloud Platform SDK for service development. To overcome some of the issues you faced, this SDK is already today embedded into a holistic approach with the SAP Cloud Application Programming Model.

    Your feedback will surely help the product teams improve the SAP Cloud Application Programming Model.

    Please note, however, that what you describe in your blog post is not using the SAP S/4HANA Cloud SDK, which you mention in your first paragraph. May I ask you to update the blog post and remove this reference to the SAP S/4HANA Cloud SDK and the corresponding tag?

    With the SAP S/4HANA Cloud SDK, we in fact strive to support what you mention: not dictating anything, but offering convenience where needed. Of course, we can also improve there and are always eager to get feedback.

    Best regards,

    Henning

    • Hi Henning,

      Long time no chat!

      For sure, I updated the tag. I only saw afterwards that this older SDK was merged into the new CAP – it did not seem to have its own tag, so I just threw your tag on it 😉

      I will still keep the initial paragraph mention though – this is how the flow worked for me – I knew from TechEd and SCN that there is a “(S/4) Cloud SDK” which can be used in Java; later in the blog I describe that I found out that I only need the service development “part” (or SDK, as you may want to call it).

      Regarding these multiple SDKs, which from the outside really look like a single big SDK, my feedback would be:

      • SAP has or had several SDKs in this area, which have similar naming. Maybe in the future, a better naming strategy would be a happier choice.
      • Both this “SCP SDK for service development” and the “S/4 Cloud SDK” are co-dependent from a POM point of view. One example would be that the first SDK has a “com.sap.cloud.servicesdk.prov:odata2.xsa” JAR which depends on “com.sap.cloud.s4hana.cloudplatform:tenant”.

      Now that I look at it, it seems that there is indeed a qualitative difference between the s4hana packages and the servicesdk packages (s4 is better), so the fact that different teams are responsible for them is somewhat transparent 🙂

      Your statement that you strive to support anything is certainly laudable, but to be honest it doesn’t look like that from the outside (yet).

      Right now all the content that I could find shows use cases starting from the archetype, doing CI, organizing the code into commands, doing security, caching, etc, all as described by the SDK.

      Actually, I would argue that if you wanted to migrate away from a project built as described in Ekaterina’s blog post series, you would find yourself completely coupled to the SDK, with no real chance of moving away without rewriting a significant portion of the code base.

      A framework or SDK that is agnostic usually has it the other way around: blogs showing how you can do something agnostic of the setup, and then maybe some soft recommendations.

      BR,
      Serban