We’re in the midst of a hardware upgrade, replacing the previous generation with the next one and going from partly virtualized to 100% virtualized for all of our SAP systems and many non-SAP systems.  I’m working on a blog about that project, currently waiting for a few trends to shake out so I can share what we’ve learned.  In addition to that project, we’re upgrading SCM from 4.1 to 7.0 (also possibly known as SCM 2007; I am not really sure, as the OSS notes mainly say 7.0).  Doing both simultaneously makes benchmarking more challenging than either change in isolation.

[See the prior blog on SCM performance monitoring: Measuring SCM ATP Workload Impact on R/3]

As part of our quality assurance testing, we’ve brought in a consultant (Suresh R.) who has been reviewing how our systems and applications look prior to the production upgrade to SCM 7.  Right now, QA is on new hardware (IBM POWER6) with SCM 7.0, and production is on new hardware with SCM 4.1.  Comparing the two environments is tricky, especially as we’ve generally found volume testing with APO/GATP to be unreliable as a predictor of actual workloads.

Suresh wrote a first draft report analyzing our memory allocations, focused more on the SCM side of the suite than on the liveCache side.  Application components such as real-time pricing/availability checks, and batch runs such as backorder processing, are mission critical, as we learned during earlier software stability battles.  I offered to review his recommendations, and set up an hour to chat with him and the Basis team upgrade lead.

Back In The Day

10 years ago or so, as SAP released a new version of their enterprise software, you got a nice (though general) note describing how much more hardware you were likely to need to throw in the data center to keep the same performance level as the old version.  If it was 10% more CPU, and you had 5 already running, you needed one more.  Or maybe two.  Better get two just in case.  Memory?  Same deal.  Have 2GB?  Better go to 3GB.  Or 4GB to be safe.  And the notes were generally helpful, though you must always (and I mean always) have your own measurements to satisfy both your users and your financial approval chain.  Just pointing to an SAP note doesn’t cut it.

  • 323263 – Resource Requirements for Release 4.6C SR1
  • 517085 – Resource Requirements for SAP R/3 Enterprise 47x110
  • 752532 – Resource Requirements for SAP R/3 Enterprise 47x200
  • 778774 – Resource requirements for SAP Enterprise Core Component 5.0
  • 901070 – Resource requirements for SAP Enterprise Core Component 6.0
  • 1311835 – Resource requirements for SAP ERP Central Component 6.0 EHP4
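
Just to make that old rule of thumb concrete, here is the sizing arithmetic as a throwaway Python sketch.  The 10%-more-CPU and five-server figures come from the example above; the safety factor is my own invention.

import math

def servers_needed(current: int, overhead_pct: float, safety: float = 1.0) -> int:
    """Servers required to keep the same headroom after an upgrade that
    adds overhead_pct more CPU demand, times an optional fudge factor."""
    return math.ceil(current * (1 + overhead_pct / 100) * safety)

print(servers_needed(5, 10))              # 6: the "you needed one more" case
print(servers_needed(5, 10, safety=1.1))  # 7: "better get two just in case"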

Today

Are there nice, neat recommendations for SCM?  Don’t bet on it.  Here’s what I have found (without reading the Upgrade Guide – I’ll leave that until I can’t sleep one night):

  • 869651 – Prerequisites for upgrading to SCM 5.0
  • 1021662 – Release Restrictions for SCM 2007 [replaced by 1413545]
  • 1413545 – “The requested SAP Note is either in reworking or is released internally only” – Oops.

Memory, virtually

We got into a discussion about how much memory is needed on the DB/CI node.  Let’s see, there are 2 Oracle databases (one for ABAP, one for Java stack); there’s the ABAP stack, the Java stack, the O/S, and oh yeah, the Oracle shadow processes.

Component                                      Memory (GB)
Oracle DBs – shared pool and block buffers          6
ABAP                                                4
Java                                                2
Shadow processes (100 @ 100 MB each)               10
Total                                              22
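
As a sanity check, the same arithmetic in a minimal Python sketch.  The names and sizes are copied from the table; nothing here queries a live system, and the rounding in the shadow-process line explains the 22 GB total.

# Proposed DB/CI memory budget, copied from the table above
budget_gb = {
    "Oracle DBs - shared pool and block buffers": 6,
    "ABAP": 4,
    "Java": 2,
    # 100 shadow processes at 100 MB each is ~9.8 GB; the table rounds it to 10
    "Shadow processes (100 @ 100 MB each)": 100 * 100 / 1024,
}

for component, gb in budget_gb.items():
    print(f"{component:45s} {gb:6.1f} GB")
print(f"{'Total':45s} {sum(budget_gb.values()):6.1f} GB")  # ~21.8, i.e. the table's 22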

Except I don’t believe these numbers for a second.  Suresh did a calculation, and came up with 15 GB, which I still think is too high.  He was looking at vmstat, which told him how much memory had been touched, not what we needed.  I’d look at this resource, for example, to see how to interpret vmstat:

Memory usage determination with the vmstat command

“When determining if a system might be short on memory or if some memory tuning needs to be done, run the vmstat command over a set interval and examine the pi and po columns on the resulting report.”

And if those columns are “constantly non-zero, there might be a memory bottleneck”.
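
If you would rather not stare at terminal output, here is a rough sketch of that advice in Python.  It assumes the AIX vmstat layout, where pi and po are the sixth and seventh fields on each data line; treat the column positions as an assumption to verify against your own header (Linux, for instance, reports si/so in different positions).

import subprocess

INTERVAL, COUNT = 5, 12   # one sample every 5 seconds, for a minute
PI_COL, PO_COL = 5, 6     # 0-based field positions on AIX; verify locally

out = subprocess.run(
    ["vmstat", str(INTERVAL), str(COUNT)],
    capture_output=True, text=True, check=True,
).stdout

samples, nonzero = 0, 0
for line in out.splitlines():
    fields = line.split()
    if len(fields) > PO_COL and fields[0].isdigit():  # skip headers and rules
        samples += 1   # note: vmstat's first data line is a since-boot average
        if int(fields[PI_COL]) or int(fields[PO_COL]):
            nonzero += 1

# "Constantly non-zero" is the warning sign, not the occasional blip.
print(f"{nonzero} of {samples} samples showed paging activity")
if samples and nonzero == samples:
    print("pi/po constantly non-zero: there *might* be a memory bottleneck")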

The key phrase here is “might be”.  While it’s great to keep your finger on the pulse of the system, you also have to ask the patient how they are doing.  In this case, compare batch runs before and after the software upgrade.  If they are about the same or better (adjusting for hardware differences, easier said than done), great.  If they are worse, find out whether or not the application is suffering from memory faults.  Ask the application team when they are running tests and monitor the system closely.
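
To make “ask the patient” concrete, here is a hypothetical before/after comparison.  The job names, runtimes, and hardware scaling factor are all invented for illustration; defending that scaling factor is the hard part in practice.

# Average runtimes in minutes from the application team's logs (invented numbers)
before = {"backorder_processing": 95.0, "atp_batch_check": 42.0}
after  = {"backorder_processing": 88.0, "atp_batch_check": 47.0}

HW_FACTOR = 1.2   # rough speedup expected from the new hardware (a guess)

for job, old in before.items():
    expected = old / HW_FACTOR   # what "about the same" means on faster iron
    new = after[job]
    verdict = "about the same or better" if new <= expected * 1.1 else "investigate"
    print(f"{job:22s} old={old:5.1f}m adjusted={expected:5.1f}m new={new:5.1f}m -> {verdict}")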

My sense is that we still need to work on configuring the parallelism of SCM, including RFC server groups and other workload balancing efforts.  If I see memory interfering with that, I can ask for more.  But in the meantime, I’m unconvinced the hardware is at fault.

AIX specific memory and tuning notes

  • 789477 – Large extended memory on AIX (64-bit) as of Kernel 6.20
  • 790099 – R/3 Parameter Settings for Unicode conversion
  • 856848 – AIX Extended Memory Disclaiming
  • 973227 – AIX Virtual Memory Management: Tuning Recommendations
  • 1048686 – Recommended AIX settings for SAP

8 Comments

  1. Shaun Wimpory
    I never thought I would see an admin these days say that he thinks he needs “less” memory.

    It is always best to plan for the exception, and thus having plenty of spare memory on hand is without a doubt the best way to go.

    We run IBM P570s onsite, and I know for a fact that, more than any other operating system, AIX just doesn’t like paging/swapping at all.  The slightest bit of paging translates to performance degradation.

    Given the cheap cost of memory (even IBM’s), it’s better to just procure “more” than what you actually need.

    Now I don’t know anything about your systems (e.g. load, number of users, size of systems), but our 2TB production 4.6c LPAR with 400 concurrent users uses approx 60GB of memory around month-end.  The LPAR has 70GB allocated, and we NEVER page/swap.  Our SAP buffers are configured nice and large, and Oracle SGA/PGA etc. are also configured large enough to handle whatever we throw at them (within reason).

    Cheers
    Shaun

    1. Bala Prabahar
      I wouldn’t necessarily buy more memory to address performance problems without knowing the root cause. More memory doesn’t automatically translate to better performance. 

      In my experience, the ratio between memory and disk has stayed almost the same over the last 15-20 years. In other words, 15 years ago, 2GB used to be the maximum a database could access. At that time, anything more than 100GB of database size was considered large. The ratio was 1:50 between memory (2GB) and DB size (100GB).

      Today I normally see Unix servers with 32-64GB of memory, and database sizes more commonly running from 1TB to a few terabytes. The ratio is either the same as before (assuming you allocate 20GB of memory for the DB and the DB size is 1TB) or worse.

      Additionally, the business expects a lot more today, because they know (thanks to the internet, email, chat, and collaboration tools) that someone somewhere is experiencing something more and better than they are.

      A few housekeeping tasks that used to be critical are still critical today: running application table statistics, system statistics, and data dictionary statistics, and staying current with DB/OS/SAP patches and configuration. They are as relevant as ever (if not more relevant, because of larger DB sizes). This means the database vendors still expect developers to use indexes (because I/O is not cheap), which in turn means they (the developers) should focus on writing efficient code.

      In my opinion, writing efficient code is as critical as ever, because databases are larger (and growing much faster than they used to) and users are more demanding. If the hw/sw/application environment and/or code is not efficient, I would focus on fixing it (code and/or DB/OS patching/configuration) first, rather than buying “extra” memory.
      (Jim, you may be surprised to know there are customers who haven’t applied even a single mandatory patch for Oracle 10.2.0.4 with a 2TB database! And they are moving from a server with 8 CPUs and 32GB of memory to one with 16 CPUs and 64GB to address their performance problems!!)

      Thanks,
      Bala Prabahar

  2. Vijay Vijayasankar
    Memory is not all that costly anymore, so I am surprised to see you doing some good analysis before buying more memory for your system. Most people these days would not have bothered.

    I have seen this affect coding too – new-generation programmers who are used to quad-core processors and tens of GBs of RAM sometimes don’t see the need to squeeze the most out of their systems.

    But all that being said, time spent by consultants/employees is, in many cases, probably more costly than buying additional memory.

    1. Shaun Wimpory
      I couldn’t agree more.

      Most developers these days pay no consideration to performance/resource optimisation of their ABAPs prior to introducing them to production.

      In their defence, the pressure on them to churn out new developments or bug fixes outweighs the time they have to analyse performance or resourcing issues in their code.

      They (and thus the business) just rely on faster/better infrastructure, and as stated earlier, given the inexpensive cost of memory, CPU, disk, and network resources in comparison to consulting hours, most sites would opt to just increase the size of their infrastructure.

      Shaun

      1. Jim Spath Post author
        Vijay said: “Memory is not all that costly”.  True, but disk is cheaper.  Operating systems are designed to put unused memory blocks on disk, which can mask really bad coding.  We’ve sized our systems using best practices.  If an application grows because business requirements grow, we run a sizing exercise, go through a capital appropriation, and that business unit funds the infrastructure it needs.  It’s not infinitely expandable; it is not free either.  And “time spent by consultants is … more costly”?  Our consultant is, or should be, looking at SCM configuration.  We probably spent an hour talking memory analysis and sizing ($200?).  You might be able to buy PC RAM for that little, but not much high-end system memory.

        Shaun said: “admin … thinks he needs ‘less’ memory.”  Great, but I am not an admin.  I’m a systems analyst charged with proving return on investment, writing capital request documents, and reviewing application performance to find tunable code.  We’re strong proponents of LEAN/6Sigma, using the SAP Memory Inspector, and I regularly challenge developers to fix their bloated code.  A colleague used the Memory Inspector to shave time off our billing cycles, after repeatedly hitting out-of-memory faults (the presentation is around the ASUG site somewhere, I think).  And Shaun said: “most sites would opt to just increase the size”.  We have grown over time, trust me.  I’ll also make the point that memory is cheaper tomorrow than today.  We always plan for system expansion to accommodate expected and unexpected application demands.

        Finally, I’ll quibble with the claim that “The slightest bit of paging translates to performance degradation.”  Per the AIX 6.1 online reference I quoted (and 20+ years of UNIX experience), a little paging has no effect on performance.  Knowing the difference between a little and too much is easy enough, if the application teams keep good run time metrics.  If they don’t, I don’t always jump when the cry is that “the system is slow.”

        Jim

        1. Shaun Wimpory
          Jim said: “True, but disk is cheaper. Operating systems are designed to put unused memory blocks on disk, which can mask really bad coding.”

          Fact: paging is slower than reading from or writing to physical memory.  Why page/swap at all when you can just keep everything resident in memory?  Especially in 64-bit environments.

          As for your SCM consultant, my comments were more targeted at an ABAPer spending days rewriting a poorly performing transaction or report.  I wouldn’t expect a functional consultant to be performing any type of detailed performance analysis, beyond informing your development team that there is a problem.

          The fact that part of your role is to identify poorly performing code is admirable.  I wish we had someone doing that here.  Do you have dedicated development resources that spend time fixing the issues you find?
          Our development team is too busy with new change requests and business developments to spend time optimising poorly performing ABAPs.

          We currently run on AIX 5.3, and you’re right in saying that you can page “a little” … but not much.  I’ve been an SAP Basis admin for 15 years now, and I’ve worked on all the major UNIX flavors.  AIX is by far the least tolerant of application paging.  I prefer to have AIX customers keep as much of their DB and application buffers in main memory as possible, to avoid paging.

          I look forward to upgrading to AIX 6.1 to see if your comments are applicable to our site.

          Shaun

  3. Bala Prabahar
    Shaun said: “They (and thus the business) just rely on faster/better infrastructure, and as stated earlier, given the inexpensive cost of memory, CPU, disk, and network resources in comparison to consulting hours, most sites would opt to just increase the size of their infrastructure.”

      “…most sites would opt to just increase the size of their infrastructure…”

    By increasing the size of the infrastructure, aren’t we generating more heat and more carbon, and consuming more electricity?

    As Jim pointed out in another blog, “Sustainability, the efficiency proxy”, the phrase “carbon is a proxy for efficiency” is a compelling argument not just for business units but also for the technical community.

    I would focus on total cost of ownership (TCO) while making infrastructure expansion decisions rather than just acquisition cost.

    Thanks,
    Bala
