
Intro

Runtimes provided by SAP Cloud Platform, especially those available in the Cloud Foundry environment via the concept of buildpacks, leave technical architects and application developers with a wide variety of options when it comes to the choice of programming language, server runtime and framework. For certain runtimes there are clear and well-recognized use cases, but for others the choice is not that straightforward. Probably one of the most intensive debates, ongoing for many years and very much relevant for SAP Cloud Platform as well, is the choice between server-side JavaScript and Java. To be more specific, discussions commonly focus not on the programming languages as such, but on applications developed using frameworks for the Node.js runtime versus applications developed using the Java Spring framework. In many materials that cover this topic, Java applications are considered better suited for complex operations potentially requiring a high degree of parallelization (thanks to the JVM’s multi-threading), at the cost of higher resource consumption (especially the memory consumption of the underlying JVM), whereas Node.js applications are considered more lightweight and suitable for CPU non-intensive, non-blocking operations (thanks to Node.js’s concurrency mechanisms and the non-blocking nature of JavaScript). Java was commonly associated with the classic imperative (blocking) programming model running on a multi-threaded runtime, whereas Node.js was associated with a non-blocking programming model running on a single-threaded runtime with support for concurrency. But both sides have evolved over time: reactive (non-blocking) programming principles were introduced to Java and are supported in later versions of Java and its frameworks, and Node.js gained modules that bring multi-threading support.

In this blog, I would like to reflect on this topic and make a brief comparison of two lightweight applications: one written in Java, based on Spring Boot and using Spring Reactive modules; the other written in JavaScript, based on Node.js and using the Express framework. Both applications use the same persistence layer (a NoSQL database, MongoDB), implement the same querying logic and have been deployed to the SAP Cloud Platform Cloud Foundry environment using similarly sized containers and the default buildpacks (java_buildpack and nodejs_buildpack, respectively). The applications expose a REST API that can be consumed to query corresponding documents stored in the MongoDB repository. For load testing, I’m going to use Apache JMeter, which will produce a large number of parallel requests and invoke the applications under test.

The exercise doesn’t aim to compare the applications’ resource footprints (as mentioned above, Java applications commonly have higher initial memory consumption than Node.js applications), nor will it compare the resource consumption patterns of these applications under load (that analysis alone deserves a separate, thorough blog). Instead, I’m going to focus on one single metric: throughput.

The blog was inspired by comparisons of Node.js and Java application performance for applications deployed to Cloud Foundry (for example, refer to the blog written by Marius Obert), as well as other comparisons on the subject published in the Java and Node.js communities.

 

Overview and notes

A high-level overview and a component diagram depicting the components employed in the demo are illustrated below:

Comparing a non-blocking Node.js application with a classic thread-blocking Java application doesn’t seem right to me when it comes to I/O operations, so let’s stick to parity here and compare the Node.js application with a Java application based on reactive principles.

Another important aspect is that both applications don’t implement complex application logic and remain very lightweight (which also means the absence of some abstraction patterns that would commonly be present in production-grade applications): ultimately, the applications only implement very basic router and controller logic for querying documents from the MongoDB repository.
In addition, some important modules and functionalities that should be present in production-grade applications – for example, authentication and authorization, logging, thorough exception handling, etc. – are intentionally omitted to keep the demo applications as simple as possible.

 

Applications under test

The Java Spring Boot Reactive application reuses a sample application developed earlier to demonstrate the migration of a Spring Boot application from the classic imperative model to the reactive model, so please refer to my earlier blog if you would like to get into the details of that application. The application is based on Spring Boot 2.0 and uses:

  • Spring Web Reactive – to expose REST APIs using reactive model,
  • Spring Data MongoDB Reactive – to interact with MongoDB database using reactive model,
  • Spring Cloud Connectors – to interact with cloud-provided services and, in particular, with services that are bound to the application in the Cloud Platform environment.

 

The Node.js application is based on Node.js version 10 and uses:

  • Express web framework module – to expose REST APIs,
  • Mongoose module – to interact with MongoDB database,
  • cfenv module – to interact with application environment provided by Cloud Foundry.

The application is implemented using the promise pattern (more precisely, the async/await pattern, which is built on the concept of promises) in order to avoid heavy use of callbacks.

The application’s source code and the manifest file used for deployment to Cloud Foundry can be found in the GitHub repository.

I encourage you to read the blog written by Florian Pfeffer – it contains very detailed, step-by-step instructions on how a Node.js application can be developed and deployed to the Cloud Foundry environment.

Both applications have been deployed to the Cloud Foundry environment of SAP Cloud Platform and bound to a service instance of MongoDB:

 

Test execution

A series of tests was conducted using the same test plan structure in JMeter:

The requests produced by the HTTP sampler and sent to both applications are similar to those used in the earlier referenced blog: HTTP GET requests that query documents matching the code provided as a query parameter. A randomizer function is used to generate a code within the allowed interval, so that the produced requests are less static.

For every run, the test script executed 1000 loops of HTTP requests against each of the Spring Boot Reactive and Node.js applications, with an increasing number of virtual users to emulate an increasing number of concurrent calls to the API. A summary of each test run was collected using JMeter’s standard listener, with special attention paid to the measured throughput.

  • 1 virtual user running 1000 loops of calls (1000 HTTP requests to each tested application):

  • 5 concurrent virtual users running 1000 loops of calls (5000 HTTP requests to each tested application in total):

  • 10 concurrent virtual users running 1000 loops of calls (10000 HTTP requests to each tested application in total):

  • 20 concurrent virtual users running 1000 loops of calls (20000 HTTP requests to each tested application in total):

  • 30 concurrent virtual users running 1000 loops of calls (30000 HTTP requests to each tested application in total):

  • 40 concurrent virtual users running 1000 loops of calls (40000 HTTP requests to each tested application in total):

  • 50 concurrent virtual users running 1000 loops of calls (50000 HTTP requests to each tested application in total):

 

Test results summary

As can be seen from the test execution summary, the Spring Boot Reactive and Node.js applications performed well, with almost identical throughput: both reached their optimal throughput at 20+ concurrent users, approaching approximately 240-250 requests per second:

There were some trials where the Spring Boot Reactive application tended to be slightly more robust than the Node.js application, but there were also trials where the Node.js application achieved slightly higher throughput. Given that the difference was negligibly small and cannot be seen as a repeatable trend, I consider it a statistically acceptable margin of error that can be ignored in the defined circumstances.

Given that the applications under test don’t implement any sophisticated logic, the test can be considered an examination of the robustness of the Spring Boot Reactive and Node.js technology stacks; applied to this particular setup, that means the robustness of the technologies that form the stack of a typical microservice application: the web framework, interaction with the persistence layer and, given the cloud focus of the application, interaction with the application context/environment when deployed to a cloud-provisioned container. Based on the obtained measurements, both the Spring Boot stack (when using reactive modules) and the Node.js stack utilized in the application development demonstrate relatively equal robustness.

 

Outro

It has been demonstrated that we can achieve almost equal robustness when developing Java / Spring Boot Reactive and JavaScript / Node.js applications. In this particular setup, the applications were deployed to Cloud Foundry, but a very similar outcome might apply to on-premise applications or applications deployed to other cloud platforms. Now let’s reflect on the observations and measurements. Does this signify that Spring Boot Reactive is as high-performing as Node.js?

Does this imply that application developers remain uncertain about what is the reasonably better choice – Java or server-side JavaScript – when the choice is justified by application robustness under high load? (If we consider, for a moment, the many other evaluation factors – such as application resource footprint and utilization, maintenance, learning curve, the existing skill set and qualifications of developers and support teams, and unification or diversity of programming languages applied to front end and back end – then the choice might become very different.)

One key takeaway from this blog (as well as from many other materials that compare various development technologies, runtimes and frameworks): never make definitive conclusions based on such tests and measurements taken alone. Every comparison summary similar to the one provided in this blog reflects a very specific use case in a certain environment and setup (of both the application under test and the test script used for load generation). Hence, be sure that the use case and environment used for the measurements match yours. Enterprise applications – including microservices – can vary significantly in their behavioral patterns and areas of application: they can be memory intensive (for example, processing large volumes of data), CPU intensive (for example, involving complex calculations or highly recursive functions), I/O intensive (for example, intensive interaction with other components such as the persistence layer or other services), combinations of the above, and so on. Unless you are very confident that the relevant conditions are the same, or that the differences between them are insignificant and will not invalidate the results for the assessed alternatives, any such comparison is good for educational purposes but should always be taken with a fair share of criticism.

This idea sounds very straightforward, but for some reason it is not uncommon to see it neglected in debates about which runtime is more robust, which framework is more suitable for applications with a high-load profile, and so on.


3 Comments


  1. Moya Watson

    This is a really nice article, Vadim. Thanks for researching and writing it up so clearly.

    PS: don’t know if you’re on Twitter but you might find some interesting conversations under @sapcp.  thanks again for contributing!

     

    1. Vadim Klimov (post author)

      Hello Moya,

      Thank you for your feedback. I haven’t yet created an account in Twitter, but looking into how many announcements and discussions take place there, it shall not be overlooked.

      Have a good weekend!

      Regards,

      Vadim

  2. Marius Obert

    Hi Vadim,

     

    thanks for this nice post! It is interesting to compare applications that mimic a flow comparable to real-life applications.

     

    Regards,

    Marius

