SAP HANA Sizing Essentials – A quick guide to HANA Sizing for license calculation
1. Overview and considerations
A proper HANA sizing is important for any customer who wants to switch to SAP HANA as the platform for running the SAP Business Suite or SAP Business Warehouse. It is mandatory for calculating the SAP license for the Enterprise Edition (in-memory metric based) and it is also important for ensuring optimal performance of SAP HANA.
While the SAP HANA memory requirement is the most important figure for the license calculation, there is also a sizing for CPU cores, disk space and network load, which is important for the expected performance of the HANA system. In this article we concentrate on the HANA in-memory sizing.
SAP HANA memory size is determined by the data footprint of the system which consists of two main areas:
- Column store (contains the business data)
- Row store (contains mainly metadata such as the data dictionary and some system tables)
The HANA memory is also used by other consumers such as
- HANA caches
- Operating System
- Working Memory
A typical memory distribution of the HANA memory can look like this:
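Since the chart referenced above is not reproduced in this text, a rough illustrative breakdown can be sketched instead. All figures below are assumed example values, not SAP recommendations; the components mirror the lists above:

```python
# Illustrative HANA memory distribution (assumed example figures, not from SAP).
distribution_gb = {
    "column_store": 500,    # business data
    "row_store": 60,        # metadata, some system tables
    "caches": 40,           # HANA internal caches
    "operating_system": 50, # OS reservation
    "working_memory": 550,  # intermediate results, often about as large as the data
}

total = sum(distribution_gb.values())
for component, gb in distribution_gb.items():
    print(f"{component:>16}: {gb:5d} GB ({gb / total:5.1%})")
print(f"{'total':>16}: {total:5d} GB")
```

Note how the working memory is roughly as large as the data footprint itself; this is also why the sizing report doubles the net data size later on.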
The following questions are important before doing the HANA-Sizing:
- Does the customer already run SAP systems, or only non-SAP systems (greenfield sizing)?
- Does the customer want to convert ERP-Systems or BW-Systems?
- If ERP, does the customer want to run Suite on HANA or S/4HANA?
- If BW, does the customer want to run BW on HANA or BW/4HANA?
- Does the customer want to convert the whole IT landscape (multiple ERP and/or BW systems)? Is there potential for System Landscape Optimisation?
2. Methods of HANA sizing
2.1 Quick-Sizer (Greenfield)
If a customer runs non-SAP systems, the only way of sizing the required hardware for SAP HANA is the Quick Sizer tool (https://www.sap.com/about/benchmark/sizing.quick-sizer.html#quick-sizer).
This is essentially a questionnaire the customer fills in online to obtain a rough estimate of the hardware requirements. As SAP does not know the data and table figures of the non-SAP system, this is the only way to estimate the required memory.
You will find detailed information and hands-on material for the Quick Sizer tool on the landing page above. Be aware that there are different versions of the Quick Sizer available: QS classic, QS for HANA and QS for HANA Cloud; choose according to the customer's needs.
2.2 SAP HANA sizing for ERP systems
If the customer runs SAP ERP systems on any database and wants to change to SAP HANA, the HANA sizing report (also known as the ABAP sizing report) is the tool to use. Detailed information and prerequisites are described in SAP note 1872170.
The procedure described in SAP note 1872170 provides an SAP ABAP report to estimate the memory requirement for SAP S/4HANA and SAP Suite on HANA. You have to set the required target in the parameter set on the report's start screen.
The report ZNEWHDB_SIZE can be implemented in the customer's namespace. It requires at least SAP_BASIS 620. It is also available under the name /SDF/HDB_SIZING in ST-PI 2008_1 SP09 and in ST-PI 740 SP00 and above. Both reports have the same code and functionality; the only difference is the installation procedure. The reports are suitable for sizing all NetWeaver-based products with the exception of BW.
At the end of SAP note 1872170 you will find an attachment, FAQ_of_SAP_note_1872170, which you should read carefully before installing and using the sizing report.
There you will also find detailed information on the prerequisites, the installation of the report, and how to set it up and run it.
Here we will focus on some common mistakes and questions that arise often when interpreting the results of the report.
Please note that the selection screen of the sizing report varies slightly depending on whether the system already runs on SAP HANA or not. Set the
- choice of HANA version to HANA 2.0
and set the
- sizing scenario to Suite on HANA or S/4HANA
Please also consider the settings for data aging estimations on technical objects, where you enter the residence time in days (the default is 15 days, which might be too low for your needs). You can set it up to 365 for one year (the maximum is 999).
If a transition to S/4HANA is possible, you have additional choices for the sizing scenario:
After running the sizing report, don't forget to check the error list at the end of the output. If errors occur, the results might not be reliable and the problems should be solved before re-running the report. The above-mentioned FAQ document will help you resolve the main error sources.
How to read and interpret the report
In the top part of the report you will find a short summary with the essential information regarding the HANA memory requirement:
The report shows whether the calculated values refer to S/4HANA or Suite on HANA and reveals the following information:
For the initial installation you need a minimum of 2,420 GB of HANA memory. If the customer is willing to do some initial cleansing of the data, the estimate goes down to 1,340 GB (third row). This estimate is derived from data aging information (depending on the settings on the parameter screen) and the age of data in the tables, including system tables for e.g. logging data, workflows and IDocs. If the difference is more than 50%, this indicates a lot of old data in the system and probably lots of aggregated data in temporary system tables, logging etc.
In this case it is recommended to do some elementary data cleanup and re-run the report.
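The rule of thumb above can be sketched as a simple ratio check. The helper below is purely illustrative (it is not part of the SAP sizing report); the example figures are the ones discussed above:

```python
# Illustrative check of the ">50% difference" rule of thumb described above.
def cleanup_potential(initial_gb: float, after_cleanup_gb: float) -> float:
    """Fraction of the initial memory estimate that initial data cleansing could free."""
    return (initial_gb - after_cleanup_gb) / initial_gb

# Example figures from the report summary: 2,420 GB initial, 1,340 GB after cleansing.
potential = cleanup_potential(2420, 1340)
print(f"Cleanup potential: {potential:.1%}")
if potential > 0.5:
    print("Lots of old/temporary data: do an elementary data cleanup and re-run the report.")
```

In this example the potential is about 45%, just below the 50% threshold; a system with mostly old logging data would show a noticeably higher value.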
The second and fourth rows show the requirement for disk space (initial and after data cleansing).
This information does not count towards the license calculation and can be omitted for that purpose.
The estimated memory size of the business data is useful for an S/4HANA Cloud deployment, as it shows the pure volume of business data.
This is followed by some information on the report version, the system evaluated, the SAP HANA version and some other technical details.
Further down you will find detailed information about the sizing calculation, ending in an overview of the biggest tables in the system and some data aging information. This information is only needed for an expert sizing, to find errors or potential for optimization.
Of particular interest is the section with the detailed calculation of the data cleansing and the reduction of memory due to S/4HANA compression and data structure simplification:
You can see that the initial data size is reduced from 1,170 GB to 626 GB due to S/4HANA data structure changes and to data aging.
The result of 626 GB has to be doubled for the HANA workspace, plus some caches and the code stack. The result is the HANA memory requirement after data cleanup (from above).
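The doubling rule above can be written out as a small calculation. The fixed overhead for caches and code stack below is inferred from the example numbers in this article, not an official SAP value:

```python
# Rule of thumb from the report: net data size is doubled for the HANA workspace,
# plus a fixed overhead for caches and the code stack.
# The ~90 GB overhead is inferred from the example figures above, not an official value.
def hana_memory_requirement(net_data_gb: float, fixed_overhead_gb: float = 90) -> float:
    return 2 * net_data_gb + fixed_overhead_gb

# 626 GB net data after cleansing -> roughly the 1,340 GB from the report summary.
print(hana_memory_requirement(626))  # prints 1342
```

The result of about 1,342 GB matches the 1,340 GB "after cleanup" figure from the summary within rounding.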
2.3 SAP HANA sizing for BW systems
Please follow the instructions in the corresponding SAP note and pay attention to the prerequisites mentioned there: ST-PI 2008_1_700 SP 12, ST-PI 2008_1_710 SP 12, ST-PI 740 SP 2.
As an attachment to the SAP note you will find the most recent version of a PDF document that describes the process of installing, running and interpreting the report in more detail.
The new report HANA_BW_SIZING is applicable for "BW on HANA" as well as "BW/4HANA" scenarios.
As with the S/4HANA sizing report, there is also a selection screen for the BW sizing report:
If you want to do a sizing for BW/4HANA, don't forget to tick this checkbox in the first part of the selection screen; otherwise a BW on HANA sizing is performed.
In the document attached to the SAP note you will find detailed information on each parameter of the selection screen and how to use it.
Reading and interpreting the results of the BW sizing report
The interpretation of the BW sizing should be done by an expert, as the values can easily be misinterpreted without a technical background. There are several situations where the results can be misleading, especially when the amount of temporary data (e.g. in the PSA) is very high. In these cases an elementary data housekeeping has to be performed on the BW system and the report has to be re-run.
Here we will focus on the most important parts of the report results.
In the first part you will find some general information on the sizing result and its distribution across row store, column store and the typical BW data types:
If the size of change log, PSA and other temporary data is high compared to the overall sizing amount, this technical data has a lot of influence on the overall sizing, possibly leading to wrong numbers. In case these values exceed 60%, you will see the warning "Too much non-active data…" at the end of this section, and data housekeeping has to be performed for better results.
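The 60% threshold described here can be sketched as follows. The function and the example figures are illustrative assumptions; this is not the report's actual logic:

```python
# Illustrative sketch of the "Too much non-active data" warning (60% threshold
# is from the report description; the function itself is a hypothetical helper).
def non_active_share(change_log_gb: float, psa_gb: float,
                     other_temp_gb: float, total_gb: float) -> float:
    """Share of temporary/non-active data in the overall sizing result."""
    return (change_log_gb + psa_gb + other_temp_gb) / total_gb

# Assumed example: 800 GB of temporary data in a 1,200 GB sizing result.
share = non_active_share(change_log_gb=300, psa_gb=450, other_temp_gb=50, total_gb=1200)
if share > 0.6:
    print(f"Too much non-active data ({share:.0%}): perform data housekeeping "
          "and re-run the BW sizing report.")
```

In this example the share is about 67%, so the warning would fire and a housekeeping run is needed before the numbers can be trusted.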
The next part shows the main results of the sizing, "MEMORY SIZING RESULTS".
It is important to understand the meaning of the two columns on the right-hand side. The first is a so-called scale-out scenario, the second a scale-up scenario. In this example, the scale-out scenario needs at least 2 nodes for the BW system, leading to a slightly higher memory requirement, as there will be overlapping data on the 2 nodes for the system to run properly.
In the scale-up scenario on the right, with 1536 GB of physical memory, you only need one node to run the BW system and need slightly less overall HANA memory.
So it is important to understand that these values depend on the desired hardware architecture, and for the licensing this has to be taken into account. The best values are achieved with the scale-up scenario and as few nodes as possible, to reduce the amount of duplicate data (including system tables distributed across the nodes).
On the selection screen you can set the option Future Growth Simulation to calculate the memory requirements for the future. Consider the growth values carefully, as they compound with each year of the simulation:
Here you can see the results for 3 years assuming a growth rate of 10%. The results are shown for each hardware scenario discussed above.
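Because the yearly growth compounds, 10% over 3 years adds more than 30%. A minimal sketch of the projection (starting value and growth rate are assumed example figures):

```python
# Compound growth as used in the Future Growth Simulation: each year's growth
# applies to the previous year's size, not the starting size.
def project_memory(start_gb: float, yearly_growth: float, years: int) -> list[float]:
    sizes = [start_gb]
    for _ in range(years):
        sizes.append(sizes[-1] * (1 + yearly_growth))
    return sizes

# Assumed example: 1,340 GB starting requirement, 10% growth, 3 years.
for year, gb in enumerate(project_memory(1340, 0.10, 3)):
    print(f"year {year}: {gb:,.0f} GB")
```

After 3 years the requirement is about 33.1% higher than the starting value, not 30%, which is why the growth parameters should be chosen conservatively.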
3. Further improving the HANA sizing
If the HANA sizing results are too high for the customer's needs or exceed the budget, there are further steps which can be considered for reducing the minimum HANA memory requirement:
- If multiple ERP and BW systems are involved in the calculation you can consider doing a System Landscape Optimisation (SLO) to reduce the total data footprint by merging systems together.
- If the customer has a fair amount of "old data", you can consider running an archiving project ahead of the migration to reduce the HANA in-memory requirement.
- Do a general data cleansing and implement a corporate data quality project to get rid of duplicate or outdated data.
- The most elegant way (especially for BW systems) is to use multi-temperature data tiering to ensure only the most recent and important data is kept in memory, whereas warm and cold data is stored externally.
The most promising results will be achieved by combining all of these methods, but this also means the greatest effort and the most expensive pre-project. SAP offers many helpful tools for data quality and data archiving; you will find more information in this blog: https://blogs.sap.com/2019/11/19/sap-datenbereinigung-und-migration-nach-sap-s-4hana/
Whereas the last proposition seems to be the most elegant, the options on the ERP side are quite limited for now. For BW there are solutions for each temperature region (i.e. hot, warm and cold data), but for ERP this is limited to hot and cold data only, and cold data here means archiving. There are technologies available for warm data storage in ERP (SAP HANA Native Storage Extension, NSE), but they are not yet implemented.
You can find further information on multi-temperature data tiering here:
SAP Note: 2416490 – FAQ: SAP HANA Data Aging in SAP S/4HANA
SAP JAM: Dynamic Tiering for S/4HANA roadmap
Besides the SAP HANA sizing report, you can also use the SAP Readiness Check for S/4HANA and BW/4HANA systems to get a deeper look into the data aging structure of your database: see the SAP landing page for the S/4HANA Readiness Check.
Here you will find additional information on how to use the SAP S/4HANA Readiness Check: