Product Information
SAP HANA Cloud – Unlocking Performance Classes
Introduction
In the realm of cloud computing, the seamless delivery of services and applications relies heavily on maintaining an optimal balance between processing power and memory resources. Striking the right balance between these two components is crucial for achieving peak performance and cost-effectiveness. In this blog, we will delve into the SAP HANA Cloud performance class offerings and their significance for SAP HANA Cloud.
Performance Classes and Core-to-Memory Ratios
SAP HANA Cloud offers four performance classes with different core-to-memory ratios, each tailored to a specific use case. They let you allocate either more memory or more processing power to an SAP HANA Cloud instance.
The following performance classes are available on SAP HANA Cloud:
- Memory (Default): Default configuration, which is suitable for most workloads.
- High Memory: Optimized to support the processing of large data sets that require a lot of memory.
- Compute: Optimized to support compute-intensive workloads.
- High Compute: Optimized to support compute-intensive workloads that require less memory.
The core-to-memory ratio represents the relationship between the number of vCPUs allocated to an SAP HANA Cloud instance and the size of the (compressed) in-memory data in your SAP HANA database. Essentially, it measures the ratio of processing power to available memory, and it serves as a fundamental determinant of a system's capacity to handle workloads efficiently.
The maximum amount of memory depends on the hyperscaler and the region in which the instance is created. For more information, see Memory and Storage Sizes Supported by SAP HANA Database.
The number of vCPUs cannot be set manually. It is allocated according to the size chosen when provisioning the SAP HANA database instance.
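As an illustration of how the core-to-memory ratio determines the implied vCPU count, here is a small Python sketch. It is illustrative only: the ratios mirror the tables below for the lower memory ranges, and the actual allocation is performed automatically by SAP HANA Cloud and may differ.

```python
# Illustrative only: SAP HANA Cloud allocates vCPUs automatically based on
# the provisioned memory size. These ratios reflect the lower ranges in the
# tables below (AWS uses 15 GB per vCPU in the Memory class, not 16 GB).
GB_PER_VCPU = {
    "memory": 16,   # Memory (Default): 1 vCPU per 16 GB
    "compute": 8,   # Compute: 1 vCPU per 8 GB
}

def implied_vcpus(memory_gb: int, performance_class: str) -> int:
    """Return the vCPU count implied by the core-to-memory ratio."""
    ratio = GB_PER_VCPU[performance_class]
    # Round up so every slice of memory is backed by a vCPU share.
    return -(-memory_gb // ratio)

# The same 256 GB instance gets twice the processing power in the
# Compute class as in the Memory (Default) class.
print(implied_vcpus(256, "memory"))   # 16
print(implied_vcpus(256, "compute"))  # 32
```

This is the practical meaning of the ratio: for a fixed memory size, choosing a more compute-oriented class increases the vCPUs that come with it.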
The following tables show, for each hyperscaler, the step sizes in which memory is increased depending on the selected performance class.
Memory (Default)
| Hyperscaler | Step size (lower range) | Step size (upper range) | Maximum |
| --- | --- | --- | --- |
| Microsoft Azure | 1 vCPU per 16 GB (up to 960 GB) | 4 vCPUs per 64 GB (960 GB – 1920 GB) | 412 vCPUs / 5600 GB |
| Amazon Web Services | 1 vCPU per 15 GB (up to 900 GB) | 4 vCPUs per 60 GB (960 GB – 1800 GB) | 440 vCPUs / 5970 GB |
| Google Cloud | 1 vCPU per 16 GB (up to 1024 GB) | 4 vCPUs per 64 GB (1024 GB – 1344 GB) | n/a |
High Memory
| Hyperscaler | Memory (120 vCPUs / 156 vCPUs) | Memory (204 vCPUs) |
| --- | --- | --- |
| Microsoft Azure | 3776 GB | 5955 GB |
| Amazon Web Services | 3600 GB | – |
| Google Cloud | 3700 GB | 5750 GB |
Compute
| Hyperscaler | 1 vCPU per 8 GB |
| --- | --- |
| Microsoft Azure | 32 GB – 480 GB |
| Amazon Web Services | 32 GB – 912 GB |
| Google Cloud | 32 GB – 608 GB |
High Compute
| Hyperscaler | Memory range |
| --- | --- |
| Microsoft Azure | 32 GB – 360 GB |
| Amazon Web Services | 32 GB – 360 GB |
| Google Cloud | 32 GB – 296 GB |
The performance class of a new or existing SAP HANA Cloud, SAP HANA database instance can be adjusted via self-service in SAP HANA Cloud Central. Use the Performance Class slider to choose one of the four configurations, which range from High Memory to High Compute.
Adjusting Performance Class in SAP HANA Cloud Central
Conclusion
The performance class is a critical factor in determining the performance and cost-efficiency of SAP HANA Cloud instances. Understanding the specific demands of your application workload and selecting the performance class with the right core-to-memory balance is essential for unlocking the full potential of SAP HANA Cloud.
As SAP HANA Cloud continues to evolve, new optimizations may be introduced, offering even more tailored solutions for different workloads. Stay up to date with the SAP HANA Cloud documentation and best practices to make informed decisions and ensure your SAP HANA Cloud-based applications thrive in the dynamic world of cloud computing.
Please note that specifying or changing the performance class of an SAP HANA database instance is only supported via the new Multi-Environment tooling of SAP HANA Cloud.
Do you have any decision tree for Performance Class? For example, in my case, my main driver is to minimize cost. I plan to put as much data as possible on NSE to minimize the memory requirement. The main activity will be large data extractions via SELECT statements that spend most of their time doing JOIN operations between large tables. Which Performance Class should I choose?
Hi Michael,
First of all, you should carefully revise your NSE strategy. NSE is intended for warm data only, meaning data that is less frequently accessed should be placed in NSE. Moving large tables into NSE is not an issue; however, you should size the buffer cache appropriately and allocate enough memory for it. Using NSE reduces your memory footprint, letting you focus more on the compute power needed for complex join queries. But again, you would need to evaluate, by running some tests, how much data your queries actually load into the buffer cache. So there is no simple decision tree, but you could start with the Compute performance class and a large set of warm-data tables managed by NSE. Please see also my blog on configuring NSE: Understanding the Configuration of SAP HANA NSE | SAP Blogs
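To make the trade-off described above concrete, here is a small Python sketch that estimates the memory freed by moving tables to NSE, given an assumed buffer-cache-to-warm-data ratio. The 1:8 starting ratio is a hypothetical working assumption for illustration, not an official sizing rule; as noted above, it must be validated by testing how much data your queries actually pull into the buffer cache.

```python
def nse_memory_estimate(hot_gb: float, warm_gb: float,
                        cache_ratio: float = 1 / 8) -> dict:
    """Rough NSE sizing sketch (illustrative, not an official SAP formula).

    hot_gb      -- column-store data kept fully in memory
    warm_gb     -- data moved to NSE (page loadable, resides on disk)
    cache_ratio -- assumed buffer cache size relative to warm data;
                   a placeholder to be validated against real workloads
    """
    buffer_cache_gb = warm_gb * cache_ratio
    in_memory_gb = hot_gb + buffer_cache_gb   # hot data + buffer cache
    saved_gb = warm_gb - buffer_cache_gb      # memory no longer needed
    return {"buffer_cache_gb": buffer_cache_gb,
            "in_memory_gb": in_memory_gb,
            "saved_gb": saved_gb}

# Example: 1 TB hot + 2 TB warm. Under the assumed 1:8 ratio, the warm
# data needs ~256 GB of buffer cache instead of 2 TB of memory.
estimate = nse_memory_estimate(hot_gb=1024, warm_gb=2048)
print(estimate)  # {'buffer_cache_gb': 256.0, 'in_memory_gb': 1280.0, 'saved_gb': 1792.0}
```

The reduced in-memory footprint is what makes a compute-oriented performance class attractive in this scenario: the memory saved on warm data can be traded for the vCPUs that the join-heavy queries need.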