The exercise doesn’t aim to compare the applications’ resource footprints (as mentioned above, Java applications commonly have higher initial memory consumption than Node.js applications), nor will it compare the applications’ resource consumption patterns under load (that analysis alone deserves a separate, thorough blog). Instead, I’m going to focus on one single metric – throughput.
This blog was inspired by comparisons of Node.js and Java application performance for applications deployed to Cloud Foundry (for example, refer to the blog written by Marius Obert), as well as by other comparisons on the subject published in the Java and Node.js communities.
Overview and notes
A high-level overview and a component diagram depicting the components employed in the demo are illustrated below:
Comparing a non-blocking Node.js application with a classic thread-blocking Java application doesn’t seem right to me when it comes to I/O operations, so let’s stick to parity here and compare the Node.js application with a Java application based on reactive principles.
Another important aspect is that neither application implements complex application logic; both remain very lightweight (which also means they omit some abstraction patterns that would commonly be present in production-grade applications). Ultimately, the applications only implement very basic router and controller logic for querying documents from a MongoDB repository.
Along the same lines, some important modules and functionalities that would be present in production-grade applications – for example, authentication and authorization, logging, thorough exception handling, etc. – are intentionally left out to keep the demo applications as simple as possible.
Applications under test
The Java Spring Boot reactive application reuses a sample application developed earlier to demonstrate the migration of a Spring Boot application from the classic imperative model to the reactive model, so please refer to my earlier blog if you would like details of that application. The application is based on Spring Boot 2.0 and uses:
- Spring Web Reactive – to expose REST APIs using the reactive model,
- Spring Data MongoDB Reactive – to interact with the MongoDB database using the reactive model,
- Spring Cloud Connectors – to interact with cloud-provided services and, in particular, with services bound to the application in the Cloud Platform environment.
The Node.js application is based on Node.js version 10 and uses:
- Express web framework module – to expose REST APIs,
- Mongoose module – to interact with MongoDB database,
- cfenv module – to interact with application environment provided by Cloud Foundry.
The application is implemented using the promise pattern (more precisely, the async/await pattern, which is built on the concept of promises) in order to avoid heavy usage of callbacks.
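As a minimal sketch of that async/await style (not the actual application code – a plain in-memory map stands in for the MongoDB collection, and the hypothetical findByCode() returns a Promise, mirroring how a Mongoose query can be awaited):

```javascript
// Illustrative only: an in-memory map stands in for the MongoDB collection.
const documents = new Map([
  ['A01', { code: 'A01', name: 'First document' }],
  ['A02', { code: 'A02', name: 'Second document' }],
]);

// Promise-based lookup: the caller awaits the result instead of
// passing a callback into the function.
function findByCode(code) {
  return new Promise((resolve, reject) => {
    const doc = documents.get(code);
    doc ? resolve(doc) : reject(new Error(`No document with code ${code}`));
  });
}

// Controller-style handler using async/await - no nested callbacks,
// and errors are handled with an ordinary try/catch.
async function getDocument(code) {
  try {
    const doc = await findByCode(code);
    return { status: 200, body: doc };
  } catch (err) {
    return { status: 404, body: { error: err.message } };
  }
}
```

In the real application the same shape applies, except that findByCode() would be a Mongoose query and getDocument() an Express route handler.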
The application’s source code and the manifest file used for deployment to Cloud Foundry can be found in the GitHub repository.
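For orientation, a Cloud Foundry manifest for such an application typically looks like the sketch below. This is illustrative only – the application name, memory size, and service instance name are assumptions, not taken from the repository:

```yaml
# Illustrative manifest.yml - names and sizes are assumptions,
# not the values from the actual repository.
applications:
- name: nodejs-documents-api
  memory: 256M
  buildpacks:
    - nodejs_buildpack
  services:
    - mongodb-service-instance   # bound MongoDB service instance
```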
I encourage you to read the blog written by Florian Pfeffer – it contains very detailed, step-by-step instructions on how a Node.js application can be developed and deployed to a Cloud Foundry environment.
Both applications have been deployed to the Cloud Foundry environment of SAP Cloud Platform and bound to a MongoDB service instance:
A series of tests was conducted using the same test plan structure in JMeter:
The requests produced by the HTTP sampler and sent to both applications are similar to those used in the earlier referenced blog: HTTP GET requests that query documents matching the code provided in the request as a query parameter. A randomizer function generates the code within an allowed interval, so that the produced requests are less static.
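The randomization can be sketched as follows. The interval bounds here are illustrative, not taken from the actual test plan; in JMeter itself this role is played by a built-in randomizer function:

```javascript
// Draw a uniform random integer code in [min, max], as the JMeter
// randomizer does for each sampled request. Bounds are illustrative.
function randomCode(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

// Each generated request then carries a different code, e.g.:
//   GET /api/documents?code=<randomCode(...)>
const url = `/api/documents?code=${randomCode(1, 100)}`;
```

Because the code varies per request, caching effects are reduced and the load is spread across different documents in the collection.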
For every run, the test script executed 1000 loops of HTTP requests against each of the Spring Boot reactive and Node.js applications, with an increasing number of virtual users to emulate an increasing number of concurrent calls to the API. The summary of each test run was collected using the standard JMeter listener, with special attention paid to the measured throughput.
- 1 virtual user running 1000 loops of calls (1000 HTTP requests to each tested application):
- 5 concurrent virtual users running 1000 loops of calls (5000 HTTP requests to each tested application in total):
- 10 concurrent virtual users running 1000 loops of calls (10000 HTTP requests to each tested application in total):
- 20 concurrent virtual users running 1000 loops of calls (20000 HTTP requests to each tested application in total):
- 30 concurrent virtual users running 1000 loops of calls (30000 HTTP requests to each tested application in total):
- 40 concurrent virtual users running 1000 loops of calls (40000 HTTP requests to each tested application in total):
- 50 concurrent virtual users running 1000 loops of calls (50000 HTTP requests to each tested application in total):
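For clarity on the metric used in the next section: the throughput JMeter reports is simply the number of completed requests divided by the elapsed wall-clock time. A trivial sketch (the request count and duration below are illustrative, not actual measurements):

```javascript
// Throughput as reported in a JMeter summary:
// completed requests divided by elapsed wall-clock time in seconds.
function throughputPerSecond(completedRequests, elapsedMs) {
  return completedRequests / (elapsedMs / 1000);
}

// Illustrative: 20000 requests completed in 80 seconds -> 250 req/s.
const tp = throughputPerSecond(20000, 80000); // 250
```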
Test results summary
As can be seen from the test execution summary, the Spring Boot reactive and Node.js applications performed well with almost identical throughput, reaching their optimum at 20+ concurrent users and approaching a throughput of approximately 240-250 requests per second:
There were some trials where the Spring Boot reactive application tended to be slightly more robust than the Node.js application, but there were also trials where the Node.js application handled slightly higher throughput. The difference was negligibly small and cannot be seen as a repeatable trend, so I consider it a statistically acceptable margin of error that can be ignored in these circumstances.
Given that the applications under test don’t implement any sophisticated logic, the test can be considered an examination of the robustness of the Spring Boot reactive and Node.js technology stacks – applied to this particular setup, the robustness of the technologies that form the stack of a typical microservice application: the web framework, interaction with the persistence layer, and, given the cloud focus of the application, interaction with the application context / environment when deployed to a cloud-provisioned container. Based on the obtained measurements, both technology stacks used in the application development – Spring Boot with reactive modules, and Node.js – can demonstrate roughly equal robustness.
One key takeaway from reading this blog (as well as many other materials comparing various development technologies, runtimes, and frameworks): never, ever draw definitive conclusions from such tests and measurements taken alone. Every comparison summary similar to the one provided in this blog reflects a very specific use case placed in a certain environment and setup (of both the application under test and the test script used for load generation). Hence, be sure that the use case and environment used for the measurements match yours. Enterprise applications – including microservices – can vary significantly in their behavioral patterns and areas of application: they can be memory intensive (for example, processing large volumes of data), CPU intensive (for example, involving complex calculations or highly recursive functions), I/O intensive (for example, interacting heavily with other components such as a persistence layer or other services), or some combination of the above. Unless you are very confident that the notable conditions are the same, or that the differences between them are insignificant and will not invalidate the measurements for the alternatives being assessed, any such comparison is good for educational purposes but shall always be taken with a fair share of criticism.
This idea sounds very straightforward, but for some reason it is not uncommon to see it neglected in debates about which runtime is more robust, which framework is more suitable for applications with a high load profile, and so on.