Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
former_member606813
Participant
Configuring a connection pool is difficult because every database product is unique and behaves differently. For Java applications, various JDBC pool implementations are available.

In the Tomcat (version 8) and TomEE containers in the SAP Cloud Platform Cloud Foundry environment, we get the option to use a container-managed datasource.

These environments ship with two implementations of a database connection pool. The first (the default) is a copy of the Apache Commons DBCP 2.x project, renamed to a different package. The second is the Tomcat JDBC Connection Pool. Note that different implementations may expose different properties.

However, if you are using SAP HANA as a service, the SAP Java buildpack provides a different pool implementation to work easily with the HDI service in SAP Cloud Platform. The help documentation showing how to set up a datasource for HDI-managed HANA is here. Note that in this case we don't have to provide a username or password: the factory looks up the service in the environment and creates the datasource from those credentials at runtime (in line with the twelve-factor app principle), among other benefits.

Another common implementation, HikariCP, is generally preferred for its performance and reliability; it has established itself as the modern de facto standard in connection pooling and is the default in a Spring Boot application.

It's up to the use case which factory implementation to use. For this blog post I am going to use the SAP Cloud Foundry environment (canary); my test application uses SAP HANA as the database via an HDI service with the SAP-provided pooling factory, and runs in a Tomcat container with the SAP Java buildpack. The pool factory I used is
com.sap.xs.jdbc.datasource.tomcat.TomcatDataSourceFactory

 

Now, coming to pool sizing: I googled a bit about what the pool size should be, and the posts that came up were HikariCP/wiki/About-Pool-Sizing, Managing connections in Postgres, and Tuning postgres connection pools.

But none covered SAP HANA, so I decided to do a little experiment of my own.

It's highly recommended that you read the HikariCP wiki post, as it gives a very good view of the topic. The wiki starts with:

"Imagine that you have a website that while maybe not Facebook-scale still often has 10,000 users making database requests simultaneously -- accounting for some 20,000 transactions per second. How big should your connection pool be? You might be surprised that the question is not how big but rather how small!"

The wiki then explains why the pool size should be small (please read it and watch the video as well). A large number of connections does not necessarily increase performance; in fact, it can degrade it. In the case discussed in the wiki, reducing the pool size from over 1000 down to 96 improved performance by a factor of 50, and it was recommended to reduce it further.

So, the pool size should be small. Okay, noted.

I then started my experiment, i.e. to verify what is said in the above blog posts.

I wrote a Spring controller that, when we hit the endpoint, gets a connection from the container-provided datasource, inserts 10 random records, selects these records, updates them, and finally deletes them. To have some dummy data, I inserted 10,000 dummy values into the database. The code is here. It is a Multi-Target Application deployed in SAP Cloud Foundry (canary). After deployment, I used the loadtest package to check how many requests per second the server can handle. An advantage of loadtest is that the server must adjust to the request throughput we send with the tool.
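The database work of such a controller can be sketched as plain JDBC. This is a minimal sketch, not the actual application code: the table and column names (TEST_DATA, ID, VAL) and the batch shape are assumptions for illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative sketch of the test endpoint's database work.
// Table/column names are hypothetical, not from the original application.
public class CrudSketch {

    // Generate the 10 random values a single request works with.
    public static List<Integer> randomRecords(int count, long seed) {
        Random rnd = new Random(seed);
        List<Integer> values = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            values.add(rnd.nextInt(1_000_000));
        }
        return values;
    }

    // One request: insert, select, update, delete. Closing the connection
    // (try-with-resources) returns it to the container-managed pool.
    public static void handleRequest(javax.sql.DataSource ds) throws SQLException {
        List<Integer> values = randomRecords(10, System.nanoTime());
        try (Connection con = ds.getConnection()) {
            try (PreparedStatement ins = con.prepareStatement(
                    "INSERT INTO TEST_DATA (ID, VAL) VALUES (?, ?)")) {
                for (int i = 0; i < values.size(); i++) {
                    ins.setInt(1, i);
                    ins.setInt(2, values.get(i));
                    ins.addBatch();
                }
                ins.executeBatch();
            }
            try (PreparedStatement sel = con.prepareStatement(
                    "SELECT VAL FROM TEST_DATA WHERE ID < ?")) {
                sel.setInt(1, values.size());
                sel.executeQuery().close();
            }
            try (PreparedStatement upd = con.prepareStatement(
                    "UPDATE TEST_DATA SET VAL = VAL + 1 WHERE ID < ?")) {
                upd.setInt(1, values.size());
                upd.executeUpdate();
            }
            try (PreparedStatement del = con.prepareStatement(
                    "DELETE FROM TEST_DATA WHERE ID < ?")) {
                del.setInt(1, values.size());
                del.executeUpdate();
            }
        }
    }
}
```

In the deployed application the DataSource would come from JNDI (the jdbc/java-hdi-container resource configured below), so the pool settings in context.xml apply to every request.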

To configure the pool size, we can use the maxActive attribute in the resource configuration in the context.xml file. For details, please refer to the HDI documentation mentioned earlier.

maxActive is the maximum number of active connections that can be allocated from this pool at the same time.

PS: don't confuse maxActive with maxTotal; these are attributes of different pool implementations, and each implementation has its own attributes. Read the documentation carefully.
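Conceptually, maxActive behaves like a semaphore with that many permits: once maxActive connections are checked out, further borrowers must wait (or time out). A toy illustration in plain Java, not a real pool implementation:

```java
import java.util.concurrent.Semaphore;

// Toy model of maxActive semantics: at most maxActive "connections"
// can be borrowed at once; an extra borrower gets nothing until a release.
public class MaxActiveDemo {
    private final Semaphore permits;

    public MaxActiveDemo(int maxActive) {
        this.permits = new Semaphore(maxActive);
    }

    // Try to borrow a connection without waiting; true if one was available.
    public boolean tryBorrow() {
        return permits.tryAcquire();
    }

    // Returning a connection frees a slot for the next borrower.
    public void release() {
        permits.release();
    }

    public int available() {
        return permits.availablePermits();
    }
}
```

A real pool additionally validates, recycles, and times out connections, but the hard cap works exactly like these permits.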

 

For example,
<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <Resource name="jdbc/java-hdi-container"
              auth="Container"
              type="javax.sql.DataSource"
              factory="com.sap.xs.jdbc.datasource.tomcat.TomcatDataSourceFactory"
              service="${service_name_for_DefaultDB}"
              maxActive="20"
    />
</Context>

 

The basic idea of the experiment is to change the maxActive value and see how the throughput (in requests per second) of the application changes. Given that I am on the canary landscape, the actual numbers will differ if you replicate the experiment.

I used the below command for the tests
loadtest https://org-scp-poc-srv-pool.cfapps.sap.hana.ondemand.com/test -t 30 -c 10 --rps 400

This hits the URL endpoint continuously for 30 seconds (-t 30), simulating 10 concurrent clients (-c 10) at a total of 400 requests per second (--rps 400).

Here is the result:

[Chart: throughput in requests per second vs. maxActive pool size]
 

The results are the opposite of the advice from the start of this blog, the HikariCP default, and the other blogs, where the transaction rate decreases as the pool size increases: here, throughput keeps improving with a larger pool. (The default value suggested by HikariCP is 10.) Also note that rps decreases a bit at a pool size of 500. I think this is because of the application-layer code (the test code generates random numbers) and compute resource limitations.

Let's try to understand what is happening.

If you have gone through the HikariCP wiki and watched the video, you know the argument: when we have a large number of connections, the database server receives a large number of requests from the application server, and this leads to heavy I/O blocking.

If you haven't, I will summarize it here.

A database server has three main resources: CPU, disk, and network. Whenever a query request reaches the database (network I/O), depending on the situation (the data might not be cached), data has to be loaded from the hard drive (disk I/O) into main memory (RAM), processed (by the CPU), and the result sent back (network again).

Now, if disk and network delays are ignored, the pool size should equal the number of cores available on the database server to get the best out of the CPU. So for a 16-core database server a pool size of 16 would be optimal, and on a 32-core server, 32 would be a good pool size.

But we can't ignore network and disk delays: loading data from disk requires the disk spindle to seek to the right place, and while this is happening the CPU is free. We say the CPU is blocked, as it must wait until the disk read I/O operation completes. Here we apply the old mantra that if we have a blocking problem, we add more threads (meaning more connections from the application server). The idea is to let the currently blocked request wait while we switch to another thread and handle another request.
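The HikariCP wiki distills this trade-off between core count and blocking disk I/O into a rule-of-thumb starting point for disk-based databases: connections = (core_count * 2) + effective_spindle_count. A trivial sketch of the formula (the inputs are whatever your database host actually has; tune from this starting point, don't treat it as exact):

```java
// Rule-of-thumb starting point from the HikariCP About-Pool-Sizing wiki,
// derived for disk-based databases:
//   connections = (core_count * 2) + effective_spindle_count
public class PoolSizeFormula {
    public static int suggestedPoolSize(int coreCount, int effectiveSpindleCount) {
        return coreCount * 2 + effectiveSpindleCount;
    }
}
```

For a 16-core server with a single effective spindle this suggests a pool of 33 connections; note how small that is compared to the hundreds of connections many applications configure.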

As noted in the wiki, "More threads only perform better when blocking creates opportunities for executing".

"More threads" here refers to threads on the database server and the number of connections.

While this helps, we cannot increase the number of threads (connections) unreasonably: the more connections to the database, the more I/O requests will queue up.

"The disk can't respond to 10000 disk reads at the same time, there's going to be some sort of queueing in postgres, the OS, and maybe the hardware. In many cases, having very few connections might improve latency for incoming queries" (from one of the blogs above).

You might have guessed by now why our experiment's results are the opposite: SAP HANA is an in-memory database, while other vendors' databases are not. SAP HANA is not affected much by disk I/O blocking, and thus the pool size does not have to stay close to the core count of the database server.

While we are on this topic, also consider setting the maxIdle property. Let's say you have defined maxActive = 100, which essentially means the size of the pool is 100. So,

Size of pool under heavy load = maxActive i.e 100
Size of pool under low load = maxActive i.e 100

Even when there is no load (none of the connections is actually in use), all 100 connections are consuming resources. If we instead set maxActive = 100 and maxIdle = 60, this means

Size of pool under heavy load = maxActive i.e 100

Size of pool under low load = maxIdle i.e. 60

maxIdle gives the connection pool the flexibility to adapt to load. During high load, maxActive alone determines the size of the connection pool, but during low load, maxIdle determines it. Using maxIdle can relieve database resources.
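For example, extending the earlier resource definition (the values 100 and 60 are illustrative; check your pool implementation's documentation for the exact attribute names it supports):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Context>
    <Resource name="jdbc/java-hdi-container"
              auth="Container"
              type="javax.sql.DataSource"
              factory="com.sap.xs.jdbc.datasource.tomcat.TomcatDataSourceFactory"
              service="${service_name_for_DefaultDB}"
              maxActive="100"
              maxIdle="60"
    />
</Context>
```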

Also, it is generally said that the connection pool should be smaller than the number of worker threads of the web server (Tomcat's default is 200). This also depends on which protocol handler Tomcat is running. Older Tomcat versions used a protocol handler with a one-thread-per-request model, meaning it wouldn't matter if you had a pool of 500 connections when only 200 could ever be used (assuming a request requires one connection). While that makes sense, modern Tomcat has NIO connectors, where request handling and request processing are done by separate groups of threads, so the above generalization doesn't directly apply; more research is needed here.

The above experiment assumes threads that acquire a connection once and then close it. In other applications a thread might acquire more than one connection, in which case the pool size should be increased. Also, when scaling out or in a multi-tenant setup, we need to carefully plan the pool size of each instance/tenant so that the total doesn't exceed the optimal pool size.

In the end, I would say that finding a good pool size takes trial and error: find that sweet number keeping in mind your request patterns, and be careful when following recommendations from wikis, as they might relate to specific vendors' products or older technologies. My TODOs are, first, to observe the actual behaviour of the connection pool and how many connections are really in use during a load test (I am thinking of using JMX beans), and second, to optimize the test application code to get better rps and test the limits of the database.

I request the community to help me if I misunderstood something or left something out.