John Appleby

SAP HANA – Scale-up or Scale-out Hardware?

I have a lot of customer conversations about SAP HANA hardware. It’s no wonder, given that there are nearly 500 certified appliances on the Certified SAP HANA Hardware Directory at the time of writing.

In addition, it’s possible to certify almost any sane configuration with Enterprise Storage like Violin or EMC VMAX, and it’s possible to use almost any Intel server for non-production use cases, with almost any configuration.

This provides fantastic flexibility: as a customer you can choose your preferred vendor, storage and networking, and for non-production scenarios it is possible to build systems which are much more cost-effective. The most important thing, though, is to get your production hardware correctly provisioned: getting it wrong can be an expensive mistake.

There are two ways to scale SAP HANA into large systems – up, or out.

Scale-up vs Scale-out

The first thing to remember is that HANA systems require a fixed CPU-to-RAM ratio for production systems: 256GB/socket for analytic use cases, and 768GB/socket for the SAP Business Suite. Mainstream Intel systems are available with 4-8 sockets, which means that with today’s hardware a single system maxes out at 2TB for analytics and 6TB for Business Suite customers.
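To make the ratio arithmetic concrete, here is a minimal sketch in Python (the 256GB and 768GB per-socket figures come from the certification rules above; the function and names are my own, purely illustrative):

```python
# Per-socket RAM limits for certified production systems, as described above.
GB_PER_SOCKET = {"analytics": 256, "business_suite": 768}

def max_ram_tb(sockets: int, use_case: str) -> float:
    """Maximum certified RAM (in TB) for a single node with this socket count."""
    return sockets * GB_PER_SOCKET[use_case] / 1024

print(max_ram_tb(8, "analytics"))       # 8 sockets -> 2.0 TB for analytics
print(max_ram_tb(8, "business_suite"))  # 8 sockets -> 6.0 TB for Business Suite
```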

How does Scale-Up work?

With scale-up, we look to build a single system with as many resources as possible. As mentioned above, the maximum for analytics use cases is 8 sockets and 2TB. These are available from Cisco, Hitachi, HP, IBM, Lenovo, Fujitsu, Huawei and SGI at the time of writing, but the link will update with the latest available systems. Those same vendors make 6TB systems for Business Suite.

There are two vendors who make systems larger than this, and certification is pending; my team has worked on pilot implementations.

HP have what they affectionately call the DragonHawk (what is it with IT vendors and their naming conventions?). The marketers call this the HP ConvergedSystem 900, and it is available with up to 16 sockets and 4TB for analytics, or 12TB for Business Suite. The HP CS900 uses their Superdome 2 architecture, which is 2-socket blades with a NUMA backplane, up to 8 blades.

SGI have their SGI UV300H appliance, available in building blocks of 4-sockets with up to 8 building blocks to 32 sockets and 8TB for analytics, or 24TB for Business Suite. They use a proprietary connector called NUMAlink, which allows all CPUs to be a single hop from each other.

Bear in mind that bigger scale-up systems will come, as newer generations of Intel CPUs come around. The refresh cycle is roughly every 3-4 years, with the last refresh happening in 2013.

How does Scale-Out work?

Scale-out systems connect a cluster of smaller SAP HANA systems together into one clustered database. HANA is a shared-nothing architecture, so there must be shared storage for data persistence. This is delivered either with a clustered filesystem (Lenovo with IBM GPFS) or a SAN (all the other vendors).

Interestingly, there is a lot of variance in HANA scale-out appliances. Cisco, Hitachi, HP, Huawei, Lenovo, Dell, Fujitsu and IBM have 1TB scale-out appliances. That list drops to Hitachi, Lenovo, Fujitsu and HP for 2TB appliances. IBM have an impressive 56-node cluster (up to 112TB, yes) certified, while all the others are limited to 16 nodes for certified appliances.

HP have a scale up-and-out solution with the ConvergedSystem 900, up to 16-CPU/4TB building blocks (not certified yet). However, the ConvergedSystem 900 has higher average latency than a 4- or 8-socket single-node system, so is best suited to Business Suite use cases.

Do note that this isn’t a major limitation: any of these vendors will certify an appliance as big as your pocketbook. In most cases, we find customers buy 5-10TB of HANA database in production.

Note that in a scale-out environment, data has to be distributed amongst the nodes. SAP BW does a great job of this: striping big fact tables across multiple nodes, and keeping dimension tables together on a single node. It uses one “master” node for configuration tables. All in all, this does an excellent job of dealing with the major disadvantage of scale-out: the cost of inter-node network traffic for temporary datasets.
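As a rough sketch of the idea (illustrative Python only, not SAP’s actual distribution algorithm; the node names and functions are hypothetical), the placement logic looks something like this:

```python
# Hypothetical sketch of BW-style table placement in a scale-out cluster:
# fact-table rows are striped across slave nodes by hashing the key,
# while each dimension table is kept whole on a single node.
SLAVE_NODES = ["node1", "node2", "node3"]  # the master node holds config tables

def fact_partition(row_key: int) -> str:
    """Stripe fact-table rows across all slave nodes."""
    return SLAVE_NODES[hash(row_key) % len(SLAVE_NODES)]

def dimension_node(table_name: str) -> str:
    """Pin an entire dimension table to one node, so joins against it stay local."""
    return SLAVE_NODES[hash(table_name) % len(SLAVE_NODES)]
```

Keeping each dimension table whole on one node is what limits the inter-node traffic for joins against it; only the striped fact tables pay that cost.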

For custom data-marts, you will have to partition your own data, which isn’t a big deal, but does require a HANA expert. A good HANA consultant can define a suitable partitioning strategy in a very short period of time.

Remember that for scale-out, you will need one “hot-spare” node, and for BW you also need a master node, which is used for configuration tables and calculations. In effect, if you buy five 1TB nodes (the minimum I recommend for scale-out), you only get roughly 3TB of usable database.
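The usable-capacity arithmetic can be sketched like this (a rule-of-thumb calculation only; a real sizing exercise should drive any purchase):

```python
def usable_tb(nodes: int, tb_per_node: float,
              hot_spares: int = 1, master_nodes: int = 1) -> float:
    """Rough usable BW capacity: total nodes minus the hot-spare and master."""
    return max(nodes - hot_spares - master_nodes, 0) * tb_per_node

print(usable_tb(5, 1.0))  # five 1TB nodes -> 3.0 TB usable
```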

The SAP Business Suite is more interesting, because data has to be grouped into sets of tables. This is discussed in SAP Note 1825774, but the short version is that it isn’t supported.

Should you scale-up, or out? Business Suite

The answer for the SAP Business Suite is simple right now: you have to scale-up. This advice might change in future, but even an 8-socket 6TB system will fit 95% of SAP customers, and the biggest Business Suite installations in the world can fit in an SGI 32-socket with 24TB – and that’s before considering Simple Finance or Data Aging, both of which decrease memory footprint dramatically.

My advice is to conduct a sizing exercise to decide what size you need today, and to buy this size (assuming you are a mature customer and not greenfield). It’s not necessary in most cases to worry about RAM to expand into, because you will naturally undertake optimization projects, which will reduce your memory footprint as you grow.

Should you scale-up, or out? BW and Analytics

This is a more subtle question. My advice is to scale-up first before considering scale-out. With scale-up, you don’t have any of the expense of GPFS or a SAN, and none of the complexity of managing a cluster.

With BW, you also have the option of the IQ NearLine store, where you can store cold data at very low cost. You should consider implementing BW NLS before considering scale-out: it is much more cost-effective and will increase HANA performance. With HANA SPS09 there is also Dynamic Tiering for BW, which allows PSA data to be persisted in a warm store, further reducing the HANA footprint.

In addition, there is a new feature called the Inverted Index in HANA SPS09, which shrinks tables by up to 40%. This isn’t supported for BW yet (no doubt it will come in a future patch), but it is for data-mart scenarios. In BW 7.4 SP08, there are further features to migrate row-oriented tables to the column store, further reducing footprint.

SAP are continuing to invest in ways to reduce the HANA memory footprint – better to keep on top of these than to scale-out.

If, given all of this, you need BW or Analytics greater than 2TB, then you should scale-out. BW scale-out works extremely well, and scales exceptionally well – better than 16-socket or 32-socket scale-up systems even. Just remember that a 2TB scale-up system can be bought for $100k, but a 4TB (2TB usable) BW system costs $500k, so your costs will increase.
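Using the article’s rough list prices above (illustrative figures, not current quotes), the cost-per-usable-TB gap is easy to see:

```python
# Back-of-envelope comparison using the rough figures quoted above.
scale_up = {"price_usd": 100_000, "usable_tb": 2.0}   # 2TB single-node system
scale_out = {"price_usd": 500_000, "usable_tb": 2.0}  # 4TB raw, ~2TB usable

for name, cfg in [("scale-up", scale_up), ("scale-out", scale_out)]:
    per_tb = cfg["price_usd"] / cfg["usable_tb"]
    print(f"{name}: ${per_tb:,.0f} per usable TB")
```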

Don’t consider any of the > 8-socket systems for BW or Analytics, because the NUMA overhead is already in effect at 8 sockets (you lose 10-12% of compute power, or thereabouts). With 16 and 32 sockets this is amplified slightly; whilst that is acceptable for Business Suite, it is unnecessary for BW.

Does the Hardware Vendor matter?

Various details of the different hardware vendor offerings should have come out in this blog – there are pros and cons to all the hardware vendors, especially for large Business Suite on HANA systems.

That said, especially for SAP BW, we have used all of the hardware vendors, and all of them work great when set up correctly. The biggest variance we have seen is in the quality of implementation by services professionals. A badly designed and maintained HANA system won’t work well! In days past, the hardware vendors might not install systems correctly, but that doesn’t happen often any more.

For ultra-high-end use cases, there are specific things that can be done (better networking, SSD storage) and HANA appliances that perform extraordinarily well can be built using Tailored Datacenter Integration, but for standard use cases (1-10 billion rows, 1-10TB), there is no need for this.

Final Words

If you don’t need more than 2TB in the short-mid term, then don’t buy scale-out. Instead, buy a scale-up system that will meet your requirements today, and replace with scale-out later on if you need it. The money you will save on your balance sheet for depreciation of the scale-out hardware will pay for the 2TB appliance!

But if you do need > 2TB of HANA, then scale-out is the way forward. It works exceptionally well and you will get near-linear scalability for complex queries.

HANA Product Management have issued a HANA SPS09 Scalability document, which is worth a read.

I hope this moves your thinking along, do feel free to ask any questions below!

Comments
      Akhilesh Kumar Maurya

      Thanks for the detailed write-up.

      I have a few follow-up questions:

      1. In the case of a scale-out setup for a custom database (open HANA), how does query execution work? Do we need to write different code / specify an index server name to run on scale-out boxes?

      2. How does HA work in scale-out? If one node fails, how is the passive node enabled?

      Jay Roble

      If you are doing a BW on HANA Scale out, are there Pros & Cons to 2TB vs. 1TB nodes?
      i.e. 6+1 1TB or 3+1 2TB nodes? More smaller nodes = more network traffic? Larger nodes = more expensive standby node? etc.

      Cezar Manechini

      Hi John, what about the possibility of running SoH in scale-out scenarios as described in SAP Note "1781986 - Business Suite on SAP HANA Scale Out"? There is limited availability based on a specific approval.

      Former Member

      Hey John,
      Nice blog.
      One quick query to confirm: is it possible to add incremental memory to our HANA DB master/slave?
      Also, is it possible to keep the master at 1 TB and the slave at around 2 TB?

      Thanks in advance.


      Former Member

      This is a very hot topic for a lot of customers, especially when migrating from AnyDB to SoH but not yet considering S/4HANA. The ability to scale out is key, rather than having to invest in new, bigger hardware once they reach a certain size. Getting budget approval to purchase an even bigger box is hard; getting the business to agree to archive data is even harder. However, buying another smaller box to scale out is relatively easy. One question remains, though: will the concept of warm data in S/4HANA filter down into SoH?

      Former Member

      Question: SAP sizing guidelines suggest 1200 IOPS per HANA node. If I have a single-node HANA DB of 1TB, that would suggest an IOPS requirement of 1200. If however I have a 4-node HANA scale-out scenario with each node being 256GB, would that suggest an IOPS requirement of 4800?

      Former Member

      "HANA is a shared-nothing architecture" – I guess that's not correct if the cluster needs a single management or distribution point ("you will need one hot-spare").
      A shared-nothing architecture would work like the Netflix or Amazon landscape, where any node can be destroyed at any time without impact on the whole system or any loss of data.
      HANA clusters are built with a technique like Oracle RAC, as HANA is an ACID-compliant DB. A true shared-nothing architecture wouldn't be fully ACID; it would instead follow eventual-consistency patterns.

      Srinivas Sunkara

      Hi, is there a step-by-step scale-out install procedure document available? Also, are there any hdbsql queries to identify whether a configuration is scale-out or scale-up? How do we know?

      Rajkumar Iyer

      Excellent write-up, John; I would love to see your updates to this article based on the latest updates/support for Dynamic Tiering & DTO.

      John Appleby
      Blog Post Author

      The hardware market has certainly changed since I wrote this. HP is now certified at 20TB, and IBM have a similar system based on Power8 and Power9 (in the works). SGI was acquired by HP. We don't see dramatically bigger systems in the works.

      One of the reasons why is because we now support scale-out for our largest S/4 customers. The largest is 2x24TB = 48TB, which is big enough to account for any of our largest ERP systems (taking into account good data retention and information lifecycle management strategies).

      My advice still remains the same: scale-up before you scale-out. Scale-out systems have additional complexity which is fine if you need it, but I do not like to introduce unnecessary complexity to the customer.

      As an analogy, why have an AWD car when you only drive on paved roads in the dry?