
Interpreting Quick Sizer Results

In this blog, which succeeds Efficient SAP Hardware Sizing: Quick Sizer and Quick Sizer 2005 – Getting started, I’ll focus on interpreting the different sizing results. To support deeper analysis, the Quick Sizer lets you examine sizing results at very different levels of detail. Here are the most important ones.

Different Result Levels

There are seven different result levels, each showing a different perspective on the sizing results. The levels that provide the most important information for analysis purposes are described in detail; the remaining ones share one subchapter.

One important principle of the Quick Sizer is that it calculates the results for each hour of a 24-hour period and displays the highest hourly result. For example, at sizing project level, the displayed result is the highest across all solutions; at solution level, it is the highest result for each solution. Another principle is that the sizing results for user sizing and for throughput sizing are calculated and displayed separately.

Result by solution

By default, the Quick Sizer displays the sizing result by sizing method (user and throughput) and by SAP solution, for example SAP ERP or SAP NetWeaver. For each solution the Quick Sizer displays the required:


  • CPU power in SAPS (hardware-independent, see above) at a target utilization of 65%, rounded in units of 100

  • Disk space in MB, rounded in units of 1000

  • Disk I/O in operations per second, rounded in units of 100

For easier analysis, there are also result categories, ranging from small (S) to extra large (XXL). The categories give a rough indication of your project’s relative size. XXL does not necessarily indicate a very large sizing; it only means that for sizings beyond 30,000 SAPS and/or 1 TB you should not rely on a tool such as the Quick Sizer alone, but check with your hardware provider.

Table: Sizing categories

Category Up to … SAPS Up to … MB disk Up to … I/Os per second
XS 1,600 70,000 700
S 6,400 210,000 2,500
M 12,800 420,000 5,000
L 20,000 700,000 8,000
XL 30,000 1,000,000 12,000
XXL Contact hardware vendor or SAP for detailed sizing analysis

If you only have a user-based sizing and this result exceeds category S, you should either perform a throughput sizing or contact your hardware vendor or SAP. If you used both sizing methods and the category exceeds XL, you should also contact your hardware vendor or SAP for help.
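As an illustration, the category lookup can be sketched in a few lines of Python. The thresholds come from the table above; the rule that the overall category is driven by the largest dimension is an assumption made for this sketch, not a documented Quick Sizer rule.

```python
# Sizing category thresholds from the table above:
# (name, max SAPS, max disk MB, max I/Os per second).
# Assumption: the overall category is the smallest category whose
# limits cover all three dimensions.
CATEGORIES = [
    ("XS", 1_600, 70_000, 700),
    ("S", 6_400, 210_000, 2_500),
    ("M", 12_800, 420_000, 5_000),
    ("L", 20_000, 700_000, 8_000),
    ("XL", 30_000, 1_000_000, 12_000),
]

def sizing_category(saps, disk_mb, ios_per_sec):
    """Return the sizing category, or 'XXL' if any limit is exceeded."""
    for name, max_saps, max_disk, max_io in CATEGORIES:
        if saps <= max_saps and disk_mb <= max_disk and ios_per_sec <= max_io:
            return name
    return "XXL"  # beyond XL: contact your hardware vendor or SAP

print(sizing_category(1_500, 60_000, 500))      # XS
print(sizing_category(5_000, 500_000, 4_000))   # L (disk is the driver)
print(sizing_category(56_000, 300_000, 4_000))  # XXL
```

Note how a single large dimension (here the disk size in the second call) pulls the whole sizing into a higher category.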

The results at solution level include disk and memory offsets to reflect the resources you need for the standard software. These offsets may be different for each solution.

Result at software component level

To plan your system landscape it is helpful to know the hardware requirements for each software component, for example Enterprise Core Component (ECC) or Portal Server. We define a software component as any separately installable deployment unit.

At this level, the Quick Sizer provides more detailed information, for example the ratio between database server and application server, information that may become important if you are sizing a multi-tier environment and need to know how to split up DB and application resources. The results for CPU and memory consist of three columns: Total, DB, and App.

Column Meaning
Total Highest combined (DB plus App) requirements at any given point in time
DB Highest DB requirements at any given point in time (individual peak)
App Highest application requirements at any given point in time (individual peak)

The individual peaks for the DB and the application server may occur at different times. To let you configure CPU and memory properly, the values for DB and App therefore reflect these individual peaks. As a consequence of this independence, Total is not necessarily the sum of the results for DB and App; added together, DB and App may exceed the Total. See the example below, where the (rounded) individual results for DB and App add up to more than the Total result.
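The relationship between the three columns can be illustrated with made-up hourly figures (the numbers below are invented for this sketch, not Quick Sizer output):

```python
# Hypothetical per-hour SAPS requirements for the DB and application
# tiers (hour of day -> SAPS). The DB peaks at 12:00, the app at 09:00.
db_load  = {9: 400, 12: 900, 15: 600}
app_load = {9: 1_200, 12: 700, 15: 800}

db_peak  = max(db_load.values())    # 900  (individual DB peak)
app_peak = max(app_load.values())   # 1200 (individual app peak)

# "Total" is the peak of the combined hourly load, not DB + App:
total = max(db_load[h] + app_load[h] for h in db_load)  # 1600

print(db_peak, app_peak, total)    # 900 1200 1600
print(db_peak + app_peak > total)  # True: sum of peaks exceeds Total
```

Because the two tiers peak at different hours, configuring each tier for its own peak (900 and 1,200) costs more capacity than the combined Total (1,600) would suggest.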


If you size mySAP SCM, you’ll find additional columns for liveCache memory.


The results at software component level also include disk and memory offsets to reflect the resources you need for the standard software. These offsets are different for the different software components. Note that we assume one physical application server only. If you plan to install more application servers you particularly need to add memory.

Inputs, Statistics and Results overview

This function is predominantly useful for documenting the sizing project, as it contains all the data you entered and all the results of the Quick Sizer. With the print function you can download this information and save it on your PC. This function is also helpful if you want to analyze the sizing data in more detail. More often than you would think, you can detect erroneous entries simply by doing a plausibility check on the highest CPU or disk contributors. This result level also provides detailed information on the top disk contributors, the table names, the projected growth after one year and according to the specified retention period, available archiving objects, and so on.

Other Result levels

  • “Project” displays the result for all questionnaires added.

  • “Key capabilities” displays the result at questionnaire level. This level does not include offsets.

  • “Sizing element” displays the result at sizing element level. This level includes the archiving objects specified for the respective sizing element. It does not include offsets.

  • “Line results and inputs” is helpful for cause-effect analyses of your entries and the respective results.

Interpreting the results

Interpreting SAPS or disk results has a lot to do with applying some common sense to what you see. A very good method to spot discrepancies is to compare the results for users and throughput. Huge differences may indicate wrong assumptions. Usually, the user activity is rated too high. The screenshot below shows a typical example from the solution level of the result screen.


The CPU results differ by a factor of 16, the disk results by a factor of 6. That there is no memory result for the throughput sizing is fine, since memory consumption in CRM is driven by user contexts.

Let’s take a look at the inputs on the result level for inputs and results.


The Results table shows how the result splits up into different sizing elements. The highest contributor is SLS-USER, the users in sales transactions. When you compare the results between the users and the respective throughput, you can see that the factors are very high as well.

The input data show the data entered. For example, there are 130 active users in Activity Management (ACT-USER). When you compare this with the 50,000 activities in the throughput (CRM-ACT), you can understand the discrepancy. Assuming 220 workdays per year and 8 working hours per day, approximately 30 activities are created per hour (50,000 / 220 / 8). If 130 active users produce only 30 activities per hour, then there is something rotten. The other sizing elements follow the same argument. The question that now arises is: which data is incorrect, the volume numbers or the user figures? Experience shows that the user activity is often overrated. Often, customers enter named users instead of active users. Even if users fill in screens rapidly, there is think time between the saves or other actions that actually create load on the application server.
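The plausibility arithmetic above can be written out as a minimal sketch (220 workdays and 8 working hours per day are the assumptions stated in the text):

```python
# Plausibility check: annual throughput broken down to an hourly rate.
activities_per_year = 50_000   # throughput entered for CRM-ACT
workdays_per_year = 220
working_hours_per_day = 8

activities_per_hour = (
    activities_per_year / workdays_per_year / working_hours_per_day
)
print(round(activities_per_hour))  # ~28, i.e. roughly 30 per hour

# With 130 active users, that is well under one activity per user
# and hour -- a strong hint that the user count is overstated.
per_user_per_hour = activities_per_hour / 130
print(per_user_per_hour < 1)  # True
```

The same back-of-the-envelope check can be applied to any sizing element: break the annual volume down to an hourly rate and ask whether the stated number of active users could plausibly generate it.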

Frequently asked Questions on Quick Sizer results

Q: There are results for users and throughput. Do I have to add them?

A: No. They are obtained separately and should not be added. Ideally, they are similar. If there is a strong divergence between both results, there are probably entry errors in the questionnaires or wrong assumptions about the business case or user activity.

Q: Does the Quick Sizer consider Unicode?

A: Yes. Just make sure that the corresponding SAPS ratings of the hardware configuration include Unicode as well.

Q: I only have user information (3000 ERP users); can I do a user-only sizing?

A: We recommend not to. User-only sizing can be quite treacherous because it doesn’t consider background activity, system-to-system communication, and the like.

Additional functions and options on the result screen

A list with the relevant functions is shown below.

Display charts To get a graphic impression of the 24-hour profile of CPU requirements, you can use this function.
      • Hi Susanne,

        We have 4,500 users and we have used the Quick Sizer tool to calculate the CPUs. It is giving 56,000 SAPS and the CPU category is XXL. Can you please advise us?


        • Hi Vamshi,
          Please see below the answer of a colleague:
          Please also fill in throughput-based sizing data, as user-based sizing alone only makes sense up to several hundred users. When you have entered your throughput-based data, you get two independent results: one for user-based and one for throughput-based sizing, which should not be added. If the results differ, you may re-think your inputs, e.g.: What do the users do? Are the things the users do reflected in the throughput-based sizing?
  • Hi Susanne,

    Thanks for posting a very good article on sizing. I do have a question re disk requirements. Is the DB disk suggested by the Quick Sizer only good for go-live? How about DB growth?



    • Hi Ben,
      DB growth is included in the sense that in throughput sizing you can specify the residence time of the data on the database.
      • Hello
        I have a question: what about disk requirements when using user sizing? What are the assumptions? (I mean, how long does the application data remain in the database, etc.)
        • Hello,
          You’ll find this info in the document: Background: Sizing SAP Systems on the Service Marketplace (login required) at -> guidelines -> general procedures.
          Best regards,
  • Is there any method of determining the number of I/O operations per second? I see the Disk.Cat. mentioned, but what should the speed or response times of these disks be?
    • Hi Maurice,
      that depends very much on the provider. We can only state the requirements, but SLAs are delivered by the actual provider.
      Best regards,
  • What are the trade-off considerations between fewer but more powerful CPUs versus more but less powerful CPUs? This would be for a NW2004s BI usage type.


    • Hi Simon,
      The key driver for I/O performance is the actual number of I/Os per second, not the amount of data transferred. This KPI is mostly relevant for frontend network load.
      best regards, Susanne
  • Hi Susanne,

    Thanks for the blog it is very informative. 

    We are currently in the process of sizing our r/3 and bw systems and I have two questions. 

    The first question is that our team was able to pull together record counts in R/3 tables such as COEP, CATSB, COBK, etc. How can we map these tables back to the elements or buckets that the sizer is asking for? For example, should our table numbers go to CO-PA-FI or CO-PA-BIL?

    To ask in reverse, is there a set of tables that each throughput element is based off of?  If we know that CO-PA-BIL is based off of x,y,z tables we can then use an aggregate of these tables to create an object count.  

    The second question is can you tell me if our sizing should be considered as a resize or delta?  We are currently live and will be adding both incremental volume to the system as well as new volume i.e new cubes plus impact to old cubes/standard cubes.  Should we use the quicksizer for this?

    thank you for your help.

    Sashi Tanuku

    • Hi Sashi,
      Your sizing is a mixture of delta and re-sizing. The basis, however, is not the Quick Sizer; you need to determine the basic load by analyzing CPU requirements, memory consumption, and disk growth. You do not need to feed the tables back into the Quick Sizer. The Quick Sizer is for initial sizings only.
      best regards,

      • Hi Susanne,

        I’d appreciate it if you could tell me from which tables I can get the information required for the SAP sizing form. The information I need is this.
        My email is
        Many thanks,

        CO-PA-BIL-AVERAGE Billing docs posted to CO
        CO-PA-BIL-PEAK Billing docs posted to CO per PEAK TIME
        CO-PA-FI-AVERAGE Financial docs posted to CO
        CO-PA-FI-PEAK Financial docs posted to CO per PEAK TIME
        CO-PA-SLS-AVERAGE Sales orders posted to CO
        CO-PA-SLS-PEAK Sales orders posted to CO per PEAK TIME
        CO-AVERAGE Controlling docs or postings
        CO-PEAK Controlling docs or postings per PEAK TIME
        EC-PCA-AVERAGE Profit center charged off docs
        EC-PCA-PEAK Profit center charged off docs per PEAK TIME
        FIN-BAC-AVERAGE Business Accounting docs
        FIN-BAC-PEAK Business Accounting docs per PEAK TIME

        CO-OM-PEAK Sender-receiver relations for all cycles
        CO-OM-RAT-PEAK Orders/period with overhead rate
        CO-OM-SET-PEAK Orders allocated/period

        • Hello,
          There is no such retro-sizing-tool, because the Quick Sizer is not meant to be used for sizing production systems. Using the production system as a basis for re-sizing or delta sizing is much more efficient than the Quick Sizer.
          Best regards, Susanne
        • Hi Gonzalo,

          I want to refresh the old subject you posted. Did you find a table or another way for sizing the above objects? If you did, please send me information about the way you did it.


  • Hello Susanne,

    On the new Quicksizer, a few new fields have been added to the Table 4: Throughput – Definition of InfoCubes section. Namely: Distinct Values, % DVL and % DV1.

    In this regard I have a few questions.
    Are these primarily for BIA sizing? If so is it necessary to estimate these values if you have an existing BI system and run the BIA sizing program ZZ_BIAMEMCONSUMPTION_BW3X?

    If this is not the case could you kindly suggest the how one might estimate these values?

    Thanks and best regards,

  • Hi Susanne,

    Does SAP use a different sizing approach for SAP All-in-One solutions?

    Can the user-based approach for standard solutions be used for small and medium implementations in SAP All-in-One, which have up to 50-300 concurrent users?

    Best Regards,


    • Hello,
      For All-in-One you can still use the Quick Sizer. The memory result may be slightly higher than needed, and so may disk if you include many different solutions, but otherwise the Quick Sizer is fine.

      Best regards,