
An overview of Quick Sizer and Quick Sizer Design Guidelines

The goal of sizing is to plan the hardware expenditures required to run SAP software. Hardware requirements can be expressed in terms of CPU processing power, projected disk growth, memory, and network bandwidth for WAN connections. When analyzing a productive system, you quickly learn that roughly 20% of all transactions account for 80% of the capacity requirements. SAP’s sizing guidelines take this ratio into consideration: the Quick Sizer and its related guidelines help you transform information about your most important business processes into high-level requirements for CPU, memory, and disk.

The Quick Sizer is an online application on the SAP Service Marketplace, available to customers and partners, that consists of a questionnaire and provides two sizing methods: user sizing and throughput sizing. If information about your planned SAP implementation is limited and you do not expect more than 200 concurrent users, we suggest a user-based sizing. If you have more detailed knowledge of the mySAP Business Suite and how you plan to implement it, throughput sizing is the recommended option. Because throughput sizing requires more detailed information, it yields more accurate estimates of the hardware resource requirements.

SAP has derived the sizing guidelines implemented by the Quick Sizer by measuring the hardware resource consumption of realistic business processes. This is achieved with the help of SAP Standard Application Benchmarks, in-house measurements at SAP, and actual customer system experience.

Sizing is done on the basis of sizing elements, i.e. business objects or transactional documents that are the result of a business transaction in SAP software. For example, in the course of a standard Sales & Distribution (SD) process as defined in the SAP SD standard application benchmark, the following business objects are created and thus can be sized in the Quick Sizer: customer order, delivery note, goods issue, and billing document. Each sizing element is associated with particular hardware “cost factors” such as CPU, memory or disk. Let’s look into these cost factors in a bit more detail.

1.1 Sizing the CPU

Both user sizing and throughput sizing calculate CPU resource consumption.

In user sizing, the Quick Sizer assumes a certain CPU load depending on the business process and the user activity. This is a rather broad approximation, but it makes the sizing very easy.

The more detailed method is throughput sizing. To determine the CPU requirements, the Quick Sizer needs to know the number of sizing-relevant objects, their size, and the time frame in which they are processed. The number of data changes and displays also influences the required processing power.

To make the sizer’s life easier, the Quick Sizer makes a number of assumptions. For example, it does not distinguish between document processing in background and in dialog mode. It also calculates the same CPU consumption for “creating with reference” and “creating without reference”. Possible optimizations for mass processing, for example in invoicing or goods movements, are not taken into consideration. As a result, the assumptions are more conservative in some cases and less so in others.

The result of the CPU sizing is divided into requirements for the application and the database layer and is specified in SAPS (SAP Application Performance Standard) rather than GHz. SAPS is a hardware-independent CPU performance unit devised to describe the throughput power of a server, as raw CPU measurements are highly configuration dependent. It refers to the SD standard application benchmark (www.sap.com/benchmark): 2,000 fully business-processed order line items per hour equate to 100 SAPS.

This is important to know because the number of SAPS a specific configuration can achieve may change between releases. If, for example, the resource consumption of release B differs from that of release A, the number of line items processed by the configuration will change, and the number of SAPS will be different, too. A configuration that delivered 10,000 SAPS in release A will deliver roughly 9,500 SAPS in release B if release B has a 5% higher resource consumption.
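The SAPS arithmetic above can be sketched in a few lines of Python. This is only an illustration of the stated ratios, not a Quick Sizer internal:

```python
# SD benchmark definition: 2,000 fully business-processed
# order line items per hour equate to 100 SAPS.

def saps_for_throughput(line_items_per_hour: float) -> float:
    """Convert SD order line items per hour into SAPS."""
    return line_items_per_hour / 2000 * 100

def saps_after_upgrade(saps_old_release: float, extra_consumption: float) -> float:
    """Rescale a server's SAPS rating when the new release consumes
    extra_consumption (e.g. 0.05 = 5%) more resources per line item."""
    return saps_old_release / (1 + extra_consumption)

print(saps_for_throughput(2000))               # 100.0
print(round(saps_after_upgrade(10000, 0.05)))  # 9524, i.e. roughly 9,500
```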

At www.sap.com/benchmark you can view sample SD benchmark configurations and the number of SAPS they have achieved.

1.2 Sizing the Disk

The disk size calculation in user sizing is analogous to that of CPU consumption: the Quick Sizer assumes a specific disk value per user and workday. In throughput sizing, disk growth is determined by the number of objects per year, their size, and the length of time they remain in the system before they are archived.
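A minimal sketch of this throughput disk calculation, with hypothetical object counts, sizes, and retention period (the Quick Sizer's actual per-object cost factors are not published):

```python
def resident_disk_gb(objects_per_year: int,
                     kb_per_object: float,
                     retention_years: float) -> float:
    """Disk space occupied by one sizing element: yearly object volume
    times object size, accumulated until the objects are archived."""
    annual_growth_gb = objects_per_year * kb_per_object / (1024 * 1024)
    return annual_growth_gb * retention_years

# Hypothetical example: 1,000,000 sales documents per year,
# 10 KB each, kept in the system for 2 years before archiving.
print(round(resident_disk_gb(1_000_000, 10, 2), 1))  # 19.1 (GB)
```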

The following data, although it can have some influence on disk size, is treated as an exception and therefore disregarded:

  • System source data defined by the minimum system requirements
  • Objects that reside in the system only for a very short time. Typically this includes “intermediate” or temporary data such as IDocs, workflows, spool jobs, batch input jobs, job logs, data that is deleted automatically, purchase requisitions, planned orders created by the Material Requirements Planning run, incompletion protocols, and due lists created by an order.

  • Master data, which usually does not contribute greatly to the overall disk sizing. In general, preference should be given to document-type data because it is larger. If very large volumes of master data exist, we recommend an expert sizing in which the actual customer data is analyzed.

In its disk sizing method, the Quick Sizer ignores tables that are either small or rarely used, hardware-dependent table compression, and custom tables and indexes.

To support better disk growth analysis, the Quick Sizer provides information relevant to data archiving, such as the projected disk growth after one year and after the retention period, and whether archiving objects are available.

1.3 Sizing the Memory

In general, the largest contributor to memory consumption is the memory required by online users. System settings such as buffers and caches may also influence the memory requirements of the application server. To account for this, the Quick Sizer assumes one application server as a memory offset and adds the user-specific memory requirements according to your entries. In Java applications, the memory required by garbage collection also plays an important role and is therefore included in the Quick Sizer’s memory sizing. The liveCache, which is used for business planning purposes, basically consists only of memory.

In user-based sizing, memory requirements are determined by the net consumption of the online users. All other memory requirements, for example those of the operating system, need to be added when the final configuration or system landscape is planned.
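The memory logic described above, one application server as a fixed offset plus the net consumption of the online users, can be sketched as follows; the offset and per-user figures are illustrative assumptions, not the Quick Sizer's actual cost factors:

```python
def memory_requirement_mb(concurrent_users: int,
                          mb_per_user: float = 10.0,
                          app_server_offset_mb: float = 1024.0) -> float:
    """Memory estimate: a fixed application-server offset plus the
    net per-user consumption (both figures are assumed, not SAP's)."""
    return app_server_offset_mb + concurrent_users * mb_per_user

print(memory_requirement_mb(200))  # 3024.0
```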

In general, only user sizing provides a memory sizing, with the exception of the Java application Enterprise Portal and the Advanced Planner and Optimizer (APO), whose liveCache is driven by memory; these throughput sizings also yield memory requirements.

For more information on sizing see service.sap.com/sizing (Service Marketplace ID required).


38 Comments


  1. Ralf Haida
    Hi,
    we will set up an SAP system to check configurations, using characteristics, criteria values, their assignments, and attributes. Most of the SAP standard in this area will be used, but NO master or transaction data will be created.
    The system will work as a backend, and a significant amount of RFC calls will be performed.
    I checked the sizing tool and the corresponding documentation, but I did not find any hints regarding custom sizing of R/3 / ERP.
    Can you give any hints?
    1. Susanne Janssen Post author
      Hi,
      Do you already have this service up and running on a test system? The best approach is to run a fraction of the required traffic, measure the CPU time required for this RFC traffic, and scale out.
      Best regards,
      Susanne
      1. Ralf Haida
        Hi Susanne,
        thank you for your fast response.
        I was evaluating this approach too. Our future production system will contain hundreds of thousands of classifications… I’m not sure whether a test system with maybe 1,000 records allows me to extrapolate future mass data processing.
        Can you give any hints regarding the calculation?
        1. Susanne Janssen Post author
          Hi Ralf,
          oh, I thought you only wanted to size the RFCs, but now it seems that you want to size variant configuration and the classification system as well.
          The sizing of variant configuration and the classification system depends strongly on the model. Do you use the ERP classification system? What do you configure?
          Other questions would be how many RFCs there are and how many are open in parallel, which again depends on the application.
          Maybe you want to discuss this further by mail.
          Best regards,
          Susanne
  2. Fernando Mauric
    Hi Susanne,
    I have some problems with HR sizing. I understand HR-PT is now part of HCM-PY, but I don’t know how to account for it. My input fields are:
    PT time data: 600.000
    Peak load: 2.600

    Number of employees: 2.500
    Execution Time period: 09-10

    PT time evaluation: 6.500
    Execution Time period: 09-10

    How can I fill in the Quick Sizer with these inputs?
  3. Max Salvadori
    Hi,

    I am interested in evaluating the installed SAPS of our infrastructure. 

    Do you know how I might go about this or if it is an “invalid” idea ?

    We run Solaris.

    Thanks

    Max

    1. Susanne Janssen Post author
      Hi,
      the procedure is fairly simple.
      Take the utilization of your installation and match it against the allocated SAPS (Sun will know the rating); e.g., CPU used at 48% with 10,000 SAPS allocated means you are using roughly 4,800 SAPS.
      Best regards,
      Susanne
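The rule of thumb in this reply is a one-line calculation; sketched here with the figures from the example:

```python
def saps_in_use(cpu_utilization: float, allocated_saps: float) -> float:
    """Approximate SAPS currently consumed: CPU utilization
    times the installation's allocated SAPS rating."""
    return cpu_utilization * allocated_saps

print(saps_in_use(0.48, 10_000))  # 4800.0
```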
  4. Sam Venkat
    Is the Quick Sizer recommended only for initial sizing, or can it be used for subsequent sizing exercises? We would like to size our future XI implementations and are looking for a tool that would guide us in the right direction, i.e. to make hardware determinations. We are currently running XI 7.0. We are looking for some rough pointers to tell us whether we need to add additional hardware.
    1. Susanne Janssen Post author
      Hi,
      If XI is a new implementation, you can use it, sure. If you already have XI up and running, post-go-live sizings apply.

      Best regards, Susanne

  5. Anonymous
    I would like to know whether it is possible to size a BW 3.5 system with the new Quick Sizer, as it only offers NetWeaver 2004s instead of both 2004 and 2004s.
    How can I do this sizing? Can I use the 2004s entry to size 2004?
  6. Michael Fritchley
    Susanne,

    Would it be possible to set-up a Blog or Forum with examples of large implementations; e.g.:
    – Large DB examples
    – Large numbers of Messages (i.e. Idocs)
    – Numbers of spool per day
    – etc.

    We have an “as-is” projection of 115,000 spools per day, but nowhere in SAPnet can I find any information on large volumes and what the individual sub-components of SAP can actually handle.
    1. Susanne Janssen Post author
      Hi Michael,
      thanks for the feedback. I’ll see what I can do. The issue is that customers usually do not want to deliberately share sensitive information such as DB size, IDocs, and so on. However, maybe I can put something together with anonymized data.
      115,000 spools per day over an 8-hour window is about 4 per second, which is certainly not a little, but sounds manageable.
      Regards,
      Susanne
  7. Robin Krisher
    Susanne,
    I’ve read documentation which states that after a system is “live” Quicksizer should no longer be utilized. 

    In the past, SAP has directed us to update Quick Sizer with current APO information before meeting with them regarding sizing.  

    Currently, we are looking at an increase in volume of approximately 50% and want to ensure we are taking the right steps for estimating our system sizing. 

    What steps should we be taking to appropriately size our system? Is APO a different beast from other software, one for which Quick Sizer is used throughout the process?

    Thank you, Robin
    1. Susanne Janssen Post author
      Robin,

      Are you talking about an upgrade or will you stay on the same release and simply extend the volume?
      In the latter case, you’ll find more information on the service marketplace at service.sap.com/sizing -> guidelines -> processes -> “post golive sizings” (login required)
      The statement that the Quick Sizer is meant for new sizings is valid. My liveCache colleagues made an exception for upgrading liveCache, I believe, but that was only valid for approximating the memory changes caused by an upgrade.
      Best regards,
      Susanne

  8. bruno gouiffes
    Hi,
    I am trying to size a development ECC 6 system for a team of 50 active users (configuration + ABAP).
    I was planning to use a T-shirt approach for this.
    Is it included in the Quick Sizer? Is it a different tool? Where can I find it?

    Regards

    Bruno

      1. bruno gouiffes
        Thanks

        a Basis consultant told me two days ago that the sizing should be made on the assumption that a developer equals 5 “high activity users”.

        Do you agree with this assumption?

        Regards

        1. Susanne Janssen Post author
          Hello,
          This is certainly not a standard rule of thumb. Developers do need more resources than application users; how much more depends, however, on the application they are developing.

          Best regards,
          Susanne

  9. Dennis Jacoby
    Okay, I am missing something. If SAPS is supposed to help you with your CPU sizing, then how does it work? Let’s say I have a machine, and I want to know whether it will work for a project that requires x amount of SAPS. How do I know? How do I know how many SAPS are in a dual-CPU system running at 900 MHz? What am I missing?
    1. Susanne Janssen Post author
      Hello,
      I am not sure I understand your question. Maybe the following helps: in the SAP world, each server is rated with a specific SAPS value for each release. This is done individually by each hardware vendor. In addition, some servers are benchmarked by the vendors and certified by SAP on behalf of the Benchmark Council. For more information see sap.com/benchmark.
      best regards, Susanne
  10. Glen Canessa
    Hi Susanne,
    is there any significant impact on CPU or disk sizing from the implementation of IFRS or the use of the New General Ledger?

    Regards
    Glen

    1. Susanne Janssen Post author
      Hi Glen,
      a) IFRS impact on sizing: not that I know of. To my understanding, IFRS is more about how data is displayed, not a fundamentally new calculation algorithm.
      b) When you compare actual runtimes of the old and new GL, the performance impact is the same. However, New GL is highly flexible, which is hard to anticipate in a sizing; therefore we have no standard guideline for, e.g., parallel ledgers or complex splitting.

      Regards,
      Susanne

  11. SC SAP Proyecto
    Hi,

    We are starting a new SAP ERP 6.0 implementation.

    In the SAP Sizing Tool I don’t find any entry for FI/AM. In this project this sub-module will have a great impact.
    So, is there any way to include the data for this sub-module?
    Is there any other way to do this?

    Thanks in advance.
    Arnaldo Calçada.

    1. Susanne Janssen Post author
      Hello,
      Are you talking about Asset Management? If this is a particular project, then it makes sense to perform an expert sizing by analyzing the data in more detail.
      Best regards, Susanne
      1. SC SAP Proyecto
        Hi Susanne,

        Thank you for your reply.
        How can I do an expert sizing?
        Is there a specific tool?
        Where can I get it?

        Best regards.
        Arnaldo Calçada

    2. Susanne Janssen Post author
      Hello Arnaldo,
      This is more of a methodology. You’ll find more information at service.sap.com/sizing –> general sizing guidelines.

      Best regards,
      Susanne

  12. Glen Canessa
    Susanne,
    I need an initial sizing for a decentralized process management ERP system. This is a specialized ERP system to execute decoupled process management for the PP-PI functionality of ERP manufacturing execution (using the PI-PCS interface).
    This system interfaces to central ERP (PP-PI process orders, recipes) and to process control systems (SCADA or manual data entry).
    I haven’t found any information about this, and I also couldn’t map the functionality to any specific section in the Quick Sizer.
    Any hint will be appreciated.
    Regards,
    Glen
    1. Susanne Janssen Post author
      Hello Glen,
      PP-PI is not part of the Quick Sizer. Our experts advise you to start with a slim but expandable system for the decentralized process coordination (minimum requirements for an ERP system). Make sure you archive the PI sheets from the beginning, so that the database does not fill up too quickly.
      Best regards,
      Susanne
  13. Eitan - RealTech Bi
    Hello
    I ran the Quick Sizer on an almost unutilized ERP system and noticed that the result came back with a disk size of 150 GB. I know that a new ERP installation (after SGEN) will use about 70 GB.
    What estimates did SAP use to arrive at a disk size of 150 GB?

    Thank you
    Regards
    Eitan

    1. Susanne Janssen Post author
      Hello Eitan,
      I conferred with the Quick Sizer team. The 150 GB figure is correct and is based on feedback from customers and experience from our Business Warehouse experts.
      Best regards,
      Susanne
