

The principles of the reactive programming model, as opposed to the classic imperative one, were defined a while ago and written down in the Reactive Manifesto. The major paradigm shift is the implementation of non-blocking (asynchronous) applications whose components react to event streams. Reacting to an event stream, instead of calling blocking functions and waiting until they complete their work and return control to the caller, improves the performance and throughput of the entire application. It should be noted that using only a few reactive components in an application that relies on other major, blocking components is unlikely to bring significant improvement, as the reactive nature of upstream components will be neutralized by the blocking nature of downstream components. For example, implementing reactive non-blocking principles in the API or service layer will not have the desired effect for the entire application if the database layer queried by the corresponding requests doesn’t support the reactive model and processes database requests using classic blocking principles. It is also important to ensure that the runtime where the application is executed complies with the reactive model and supports it.

In the Java world, a significant milestone on the way towards reactive applications was the release of Java 8, which introduced the Streams API and became the foundation for several libraries enabling development of reactive applications – such as RxJava and Project Reactor. Going further, Java 9 introduced the Reactive Streams / Flow API, embedding this functionality into the standard JDK. Popular Java frameworks have also been extended with support for reactive application development – for example, Spring 5 (released to general availability in September 2017) and, correspondingly, Spring Boot 2.0 (released to general availability in March 2018). New runtime environments / servers, such as Netty and Undertow, have been provided as well to support fully reactive, non-blocking streams for deployed applications.

Reactive programming is commonly associated with the functional programming paradigm, which is based on declarative programming. In fact, this is not the only approach available for developing reactive applications – in this blog, I would like to draw attention to an alternative, in which the imperative programming paradigm is used to migrate a traditional thread-blocking application to a reactive, non-blocking one, thanks to the corresponding framework wrappers.

Why would we want to assess alternative options if functional reactive streams programming is already available and natively supported in recent Java releases? One of the major reasons relates to potential migration effort – in complex legacy applications, switching the programming paradigm, even if technically feasible, is resource-consuming and may not be worth the effort required to make the migration happen reliably and to ensure further support of the migrated application.


With these arguments and this motivation in mind, let’s get to the technical aspects. The application that I will use in the demo throughout this blog is a Java Spring Boot application deployed to the Cloud Foundry environment of SCP. The application exposes a REST API that can be used to query and retrieve data persisted in a MongoDB repository. Spring Boot 2.0 (which is based on the Spring 5 framework) includes modules that support the reactive stack, as well as the more traditional and commonly used modules available in earlier releases of the Spring framework. Postman will be used to consume the APIs.

The Spring Boot application has been developed in Eclipse IDE extended with Spring-related plugins; dependency management and the build have been handled with Gradle.

The application’s source code and Gradle build script can be found in the GitHub repository.


Baseline version of the application: development

To begin with, we develop a basic Spring Boot application using traditional concepts – this is going to be our baseline version of the application.

Dependencies that are required by the developed application:

  • Spring Web MVC module (artifact ID ‘spring-boot-starter-web’ of group ID ‘org.springframework.boot’) – to enable exposure of REST APIs,
  • Spring Cloud Connectors (artifact IDs ‘spring-cloud-spring-service-connector’ and ‘spring-cloud-cloudfoundry-connector’ of group ID ‘’) – to enable integration with cloud-provided services such as the MongoDB service,
  • Spring Data MongoDB module (artifact ID ‘spring-boot-starter-data-mongodb’ of group ID ‘org.springframework.boot’) – to enable integration with the MongoDB database.

I also make use of Spring Developer Tools (artifact ID ‘spring-boot-devtools’ of group ID ‘org.springframework.boot’) to facilitate development and local testing of the developed application.

Tomcat is used as a servlet container for the application.


Below is the application’s main class:
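A minimal sketch of such a main class – the class name DemoApplication is a placeholder, not the name used in the original project:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Standard Spring Boot entry point: component scanning and auto-configuration
// are enabled by the @SpringBootApplication meta-annotation.
@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```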

The controller implements the API layer – handlers for two API methods: a query for a specific document by its unique identifier (the query can return at most a single document) and a query for all documents by one of their non-unique attributes (the query can return a list of documents):
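A sketch of such an annotated controller. The entity name Product, its non-unique code attribute, and the URL paths are hypothetical placeholders; stub declarations are included only to keep the sketch self-contained:

```java
import java.util.List;
import java.util.Optional;
import org.springframework.data.mongodb.repository.MongoRepository;
import org.springframework.web.bind.annotation.*;

// Hypothetical entity and repository stubs, shown only for self-containment.
class Product {
    public String id, code, name;
}

interface ProductRepository extends MongoRepository<Product, String> {
    List<Product> findByCode(String code);
}

@RestController
@RequestMapping("/products")
public class ProductController {

    private final ProductRepository repository;

    public ProductController(ProductRepository repository) {
        this.repository = repository;
    }

    // Query by unique identifier: returns at most a single document.
    @GetMapping("/{id}")
    public Optional<Product> getById(@PathVariable String id) {
        return repository.findById(id);
    }

    // Query by a non-unique attribute: may return a list of documents.
    @GetMapping
    public List<Product> getByCode(@RequestParam String code) {
        return repository.findByCode(code);
    }
}
```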

The MongoDB repository definition interface:
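A sketch of the repository interface under the same assumptions (hypothetical Product entity with a non-unique code attribute):

```java
import java.util.List;
import org.springframework.data.mongodb.repository.MongoRepository;

// Hypothetical entity stub, shown only for self-containment.
class Product {
    public String id, code, name;
}

// Spring Data derives CRUD operations from MongoRepository and the
// query-by-attribute method from its name (findByCode).
public interface ProductRepository extends MongoRepository<Product, String> {

    // Derived query: all documents matching the non-unique 'code' attribute.
    List<Product> findByCode(String code);
}
```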

The entity type / MongoDB document type definition class:
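A sketch of the document class, with hypothetical attribute and collection names:

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

// Maps the class to a MongoDB collection; attribute names are placeholders.
@Document(collection = "products")
public class Product {

    @Id
    private String id;    // unique document identifier
    private String code;  // non-unique attribute used for queries
    private String name;

    public Product() {
    }

    public Product(String id, String code, String name) {
        this.id = id;
        this.code = code;
        this.name = name;
    }

    public String getId() { return id; }

    public String getCode() { return code; }

    public String getName() { return name; }
}
```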

The Cloud Connector configuration class that scans for all relevant cloud-provisioned services available to the application at runtime:
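With Spring Cloud Connectors, such a configuration class can be as small as an annotated empty class – @ServiceScan instructs the connectors to discover all bound service instances (such as the MongoDB instance) and expose them as beans. A sketch:

```java
import org.springframework.cloud.config.java.ServiceScan;
import org.springframework.context.annotation.Configuration;

// Scans all services bound to the application at runtime and creates
// corresponding beans via Spring Cloud Connectors.
@Configuration
@ServiceScan
public class CloudConfig {
}
```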


Baseline version of the application: deployment and test

After the development phase was complete, the application was assembled into a JAR file, and the generated JAR file was deployed to the Cloud Foundry environment. The previously created MongoDB service instance, into which some documents had been inserted, was bound to the deployed application.

For the sake of demonstration, I use Postman to consume the exposed API by sending an HTTP GET request to the application:

The application’s log retrieved from the Cloud Foundry space indicates handling of the issued request:


Reactive version of the application: development

We are now done with developing and checking the baseline version of the application – now it is time to develop another Spring Boot application that fulfils exactly the same functional requirements and implements the same logic, but this time with all major thread-blocking components replaced by their reactive analogues.

Dependencies that are required by the developed application:

  • Spring Web Reactive module (artifact ID ‘spring-boot-starter-webflux’ of group ID ‘org.springframework.boot’) – to enable exposure of REST APIs using the reactive model. This module replaces the previously used Spring Web MVC module,
  • Spring Cloud Connectors (artifact IDs ‘spring-cloud-spring-service-connector’ and ‘spring-cloud-cloudfoundry-connector’ of group ID ‘’) – to enable integration with cloud-provided services such as the MongoDB service. This is the same module as the one used earlier,
  • Spring Data MongoDB Reactive module (artifact ID ‘spring-boot-starter-data-mongodb-reactive’ of group ID ‘org.springframework.boot’) – to enable integration with the MongoDB database using the reactive model. This module replaces the previously used Spring Data MongoDB module.

Spring Developer Tools are used in this application, too.

Together with enabling reactive-ready modules for the application, the underlying runtime is replaced to meet reactive streaming requirements: instead of the previously used Tomcat, we go with Netty, which is one of the non-servlet runtimes for reactive applications.

Now let’s focus on the changes that had to be applied to the application’s implementation in order to migrate it to the reactive model.


First of all, it is necessary to add an additional annotation to the application’s main class – ‘@EnableReactiveMongoRepositories’ – to introduce the usage of reactive MongoDB repositories (which, in turn, implies usage of the reactive MongoDB driver when accessing the MongoDB repository):
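A sketch of the adjusted main class (the class name remains a placeholder):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.mongodb.repository.config.EnableReactiveMongoRepositories;

// The additional annotation enables reactive MongoDB repositories, which in
// turn causes the reactive MongoDB driver to be used for repository access.
@SpringBootApplication
@EnableReactiveMongoRepositories
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```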

Besides this annotation, it is also required to adjust the definition of the MongoDB repository: instead of the previously used MongoRepository interface, it now has to inherit from the corresponding reactive counterpart – ReactiveCrudRepository. Note that the definitions of the methods for querying documents by their attributes also change slightly in their return types in order to comply with the reactive types (Flux and Mono). In this particular example, I use the Flux return type, as this is the reactive counterpart of the previously used List:
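A sketch of the reactive repository, with the same hypothetical names as before:

```java
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Flux;

// Hypothetical entity stub, shown only for self-containment.
class Product {
    public String id, code, name;
}

// ReactiveCrudRepository replaces MongoRepository; derived query methods now
// return reactive types instead of materialized collections.
public interface ProductRepository extends ReactiveCrudRepository<Product, String> {

    // Flux replaces List: zero or more documents, emitted asynchronously.
    Flux<Product> findByCode(String code);
}
```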

Another adjustment that has to be made is in the controller. The change has the same background as the one mentioned for the repository definition above: the return types of the controller methods that handle the exposed API methods must comply with the reactive model – namely, Flux if the API method handler can return multiple entities / documents, and Mono if it can return at most one single entity / document. Hence, the handler for querying a specific unique document by its identifier is changed to return objects of type Mono, and the handler for querying all matching documents by a non-unique attribute is changed to return objects of type Flux:
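A sketch of the adjusted controller, again with hypothetical names and with stub declarations included only for self-containment – note that the handler bodies are unchanged apart from the return types:

```java
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// Hypothetical entity and repository stubs, shown only for self-containment.
class Product {
    public String id, code, name;
}

interface ProductRepository extends ReactiveCrudRepository<Product, String> {
    Flux<Product> findByCode(String code);
}

@RestController
@RequestMapping("/products")
public class ProductController {

    private final ProductRepository repository;

    public ProductController(ProductRepository repository) {
        this.repository = repository;
    }

    // Mono: at most one document for a unique identifier.
    @GetMapping("/{id}")
    public Mono<Product> getById(@PathVariable String id) {
        return repository.findById(id);
    }

    // Flux: zero or more documents for a non-unique attribute.
    @GetMapping
    public Flux<Product> getByCode(@RequestParam String code) {
        return repository.findByCode(code);
    }
}
```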


That’s it. As can be seen, migration of this basic application from the traditional model to the reactive model was smooth in this simplistic case and didn’t force the developer to step outside traditional Spring development or explicitly switch to another programming paradigm, such as functional programming. We used annotated controllers when developing the API layer, made minor amendments in the relevant components to enable reactive interaction with MongoDB, used reactive types to wrap entity types, and made a few adjustments in the dependencies to make use of the respective Spring components and the reactive-enabled MongoDB driver – and that was all we did so far.


Reactive version of the application: deployment and startup (failure)

We are now ready to assemble the application and try to deploy and run it in the Cloud Foundry environment, following steps similar to those used earlier for the baseline version. Assembly and deployment succeeded, but the startup process completed with errors due to an exception caused by the application’s failure to establish a connection to the MongoDB repository:

It should be noted that during deployment, the same instance of the MongoDB service was bound to the application; none of the application properties or the cloud connector configuration was changed in the reactive version of the application compared to its baseline version. Surprisingly, the application now attempts to establish a connection to a phantom locally hosted MongoDB repository (which certainly doesn’t exist in the cloud environment) instead of connecting to the bound MongoDB service instance. As can be seen from the provided startup log, the application’s MongoDB driver was initially able to recognize the location of the bound MongoDB service instance, but for some reason it didn’t use the correct connection parameters and replaced them with a connection string to localhost. Materials on this and similar topics suggest that there are still some issues that have to be addressed and fixed in the corresponding migrated framework modules – the Spring Data MongoDB Reactive module, when used in the Cloud Foundry environment, being one of them.


Reactive version of the application: deployment and startup (workaround), test

To overcome this issue in the current environment using the available modules, it is now time for workarounds. One of them is to explicitly define the connectivity parameters for the bound MongoDB service instance in the application properties.

Firstly, it is necessary to retrieve the MongoDB instance connection parameters – this can be done using the administration cockpit and navigating to the details of the MongoDB service instance. Note that the required information includes sensitive data – such as the user name and password used for authentication when accessing the respective MongoDB service instance – hence, appropriate security controls shall be applied. The example below is illustrated in plain text and is based on a temporarily created MongoDB service instance, for demo purposes only. The information we are after is the MongoDB database connection string, which has the format ‘mongodb://username:password@host:port/database’:

Next, the obtained MongoDB connection string shall be maintained in the application properties. In this demo, I use a single property ‘’ to specify the entire connection string – alternatively, a combination of the corresponding individual properties ‘*’ that specify the MongoDB host, port, database, user and password can be used. So as not to distract attention from the main subject of this blog, I maintain it directly in the properties file (‘’) of the reactive version of the application:
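In Spring Boot, the full connection string is typically supplied via the spring.data.mongodb.uri property; a sketch of such an entry, with placeholder credentials:

```properties
# Placeholder values for illustration only - real credentials must be
# externalized and protected, never committed to source control.
spring.data.mongodb.uri=mongodb://username:password@host:port/database
```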

In production-ready developments, this kind of configuration has to be externalized and made environment-specific to achieve better flexibility and maintainability of the application.


After this is done, the application is redeployed and restarted – and this time, the startup process succeeds and the connection to the bound MongoDB service instance is established with no errors:

We can now test this application by consuming the exposed API using Postman:

The corresponding entry in the application’s log indicates handling of the issued request:


Runtime analysis

By now, deployment to the Cloud Foundry environment and the end-to-end demonstration of the presented scenario have been completed – but we haven’t yet looked under the hood of the application at runtime, though this would illustrate the fundamental difference between traditional and reactive processing of sample requests. Although we haven’t performed sophisticated and complex migration activities from a development perspective, we have caused a significant change in the behaviour of the application at runtime – thanks to the built-in capabilities of Spring 5 / Spring Boot 2.0, which allow usage of imperative programming and annotation-driven configuration when developing reactive applications, and which transparently handle the reactive streams behind the scenes. Let’s gain insight into the application runtime in general, and the handling and processing of requests by such an application in particular. For this, I’m going to use Eclipse IDE with a Java profiler plugin to profile a locally running application – the same application as described above, but executed as a standalone Spring application that communicates with a locally running MongoDB instance. Performance hotspot profiling will allow us to look into the application threads and the method calls performed in the context of each of them.


First, let’s profile an API request processed by the baseline version of the application. From the list of runnable threads, it can be noted that there is one HTTP thread (‘http-nio-8080-exec-1’) that consumed most of the CPU time (343 ms), and this is the thread that handled the request and executed all the underlying logic:

The methods invoked within this thread include the execution of the controller logic, as well as the thread-blocking querying of the MongoDB repository:


Next, let’s profile the same API request processed by the reactive version of the application. From the list of runnable threads, we can immediately notice a difference compared to the threads used in the baseline version: instead of the single HTTP thread observed earlier, which processed the entire API request and occupied most of the CPU time, we now see two threads contributing to CPU time consumption. The HTTP thread (‘reactor-http-nio-3’) consumed much less CPU time (only 15,6 ms), and it is now accompanied by another thread (‘nioEventLoopGroup-2-2’) – part of the event-loop threading implemented in the Netty framework used by the Spring Web Reactive module – which consumed significantly more CPU time (171 ms):

The methods invoked within the HTTP thread include the execution of the controller logic, but this thread is not blocked by querying the MongoDB repository – the resource-consuming query is executed in a dedicated separate thread. Exploration of the invoked methods of both highlighted threads provides clear evidence of the usage of reactive principles – in particular, the usage of observable emitter (publisher) and observer (subscriber) entities:


This is an important observation: as described above, we didn’t explicitly use the Reactive Streams API when migrating the traditional application to the reactive one – the framework did this for us, which is useful functionality to be aware of when developing Spring Boot applications in general and reactive applications in particular.




  1. Ram Prasad

    Hi Vadim

    Awesome post ! Thanks.

    I am curious about SpringBoot application with HANA DB on Neo platform. Is there any good source of documentation or tutorial that you can point me to ?



    1. Vadim Klimov Post author

      Hi Ram,

      This blog isn’t really about integration with HANA DB in the cloud, so your question might be more relevant for Questions & Answers section of SAP Community. In general, you can use several approaches to access HANA DB in the cloud from Java Spring application deployed to a cloud tenant – Spring Data JPA as a classic and generic way of accessing data sources, and Spring Cloud Connectors for HCP as a more sophisticated and cloud friendly way.



  2. Ivan Chupris

    Hi Vadim

    enjoyed reading your article! I saw resource consumption decreased – what about the overall response time for both versions, did you have a chance to measure that on some sample volumes to see the difference?

    Best regards,



    1. Vadim Klimov Post author

      Hi Ivan,

      Glad to see you here!

      It is a valid question about performance and throughput… Given the asynchronous nature of communication between the components of a reactive application (assuming all or most major thread-blocking parts of the application are implemented using the reactive model), we should expect performance and throughput improvements for such an application – that would be one of the good reasons to consider migration to the reactive model. However, if the migration is not complete and the most “expensive” long-running calls remain thread-blocking, we will not really benefit from it. So, the idea is to at least identify the most time-consuming components of the application and see if they can be moved to the reactive model.

      Given that this sample application is relatively simple (no logic in the application layer, and only querying small documents from a single document collection in the persistence layer), load tests might not be that accurate and representative of real-life applications, but it is worth checking what we get from this example. As a preparation step, I uploaded 1 million documents with random values in the payload to the MongoDB collection. After that, I used Apache JMeter to produce requests (similar to those earlier generated using Postman) and generate load, with the following test script configuration:

      • 10 concurrent threads are used to simulate 10 concurrent consumers of the tested application,
      • each thread generates 1000 requests (resulting in 10.000 requests to the application in total for the whole test script execution),
      • there are no delays between requests – they are produced instantly,
      • to introduce a randomization factor, requests use a parameter to query documents with a given code (one of the queried documents’ attributes), where the code is randomly generated by the test script for each request – as a result, the size of the response message and the number of returned documents will vary.

      The test setup is synthetic, but the obtained results confirm the earlier statement that the reactive application is quicker at processing incoming requests than its baseline (classic) version.


      Summary of execution for baseline (classic) application:

      Average response time: 65 ms

      Minimum response time: 29 ms

      Maximum response time: 9245 ms

      Throughput: 141,8 requests/second


      Summary of execution for reactive application:

      Average response time: 43 ms

      Minimum response time: 28 ms

      Maximum response time: 1030 ms

      Throughput: 224,8 requests/second


      Network latency can be considered a fixed value, as both runs were executed on a machine hosted in the UK and the requests targeted applications deployed to the same SCP subaccount (a trial subaccount in the Cloud Foundry environment of SCP, provisioned in St. Leon-Rot); the JMeter setup and environment configuration on the client machine were the same in both cases. Hence, the difference in response time and corresponding throughput captured by JMeter primarily originates from the difference in performance of the tested deployed applications.




