John Appleby

Unleashing Lightning: Building a quarter trillion rows in SAP HANA

For the Keynote at SAP TechEd/d-code 2014, we built out a quarter trillion row model in a single scale-up HANA system. You can read the high level Unleashing Lightning with SAP HANA overview and watch the video.

I thought that people might be interested in how the demo was built, and the logistical and technical challenges of loading such a large data model.

Building the SGI UV300H

The first challenge we had was finding a system big enough for 30TB of flat files in a short time. The SGI UV300H is a scale-up HANA appliance, made up of 4-socket building blocks. The SGI folks therefore had to string 8 of these together using their NUMAlink connectors and attach 6 NetApp direct-attached storage arrays for a total of 30TB of storage.

Today, only 4- and 8-socket SGI systems are certified and the 32-socket system is undergoing certification. The nature of the UV300H means that there is non-uniform data access speed. On a 4-socket Intel system you have either local (100ns) or remote (300ns) – you can read Memory Latencies on Intel® Xeon® Processor E5-4600 and E7-4800 product families for more details.

With the NUMAlink system there is also a hop via NUMAlink to the remote chassis, which increases the memory latency to 500ns. Whilst that is blindingly fast by any standard, it increases the non-uniformity of RAM access on HANA. For SAP HANA SPS09, SAP optimized HANA for the UV300H by improving average memory locality.

However HANA SPS09 wasn’t available, so we ran on stock SAP HANA SPS08 Revision 83. It’s tough to say how big a penalty this cost us, but on a theoretical 7.5bn aggregations/sec, we got closer to 5bn, so I’m guessing SPS09 would provide a 25-50% hike in performance under certain circumstances.

But to be clear, this is the same HANA software that you run on any HANA server, like AWS Developer Edition. There was no customization involved.

Downloading 30TB of flat files

Here was our next challenge. I did the math and realized this was going to take longer than the time available, so I put a call into Verizon FiOS to see if they could help. They came out the next day and installed a new fiber-optic endpoint which could service up to 300/300Mbit internet. With my laptop hard-wired into the router, we could get a constant 30MByte/sec download from the Your.Org Wikimedia Mirror. Thanks guys!
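As a back-of-envelope check (my arithmetic here, not from the original planning notes), 30TB at a sustained 30MByte/sec is on the order of 12 days of solid transfer time, which shows why the link speed mattered so much:

```shell
# Rough transfer-time estimate: 30 TB at a sustained 30 MB/s.
awk 'BEGIN { printf "%.1f days\n", (30 * 10^12) / (30 * 10^6) / 86400 }'
```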

Once these were on USB hard disks, we shipped them to SGI Labs, which cost another 4 days, due to the Columbus Day holiday.

From there, we found we could load into HANA faster than we could copy the files onto the server (USB 2.0).

Building the HANA Model

Thankfully, I have a few smaller HANA systems in my labs, so I tested the configuration on a 4S/512GB system with 22bn rows, and on a scale-out 4x4S/512GB system with 100bn rows. There were a few things that we found that would later be of value.

First, partitioning by time (month) is useful, because you can load a month at a time, and let HANA merge and compress the last month whilst you load the next month. This saves the constant re-merging that happens if you don’t partition by time. A secondary partition by title is useful, because it ensures partition pruning during data access, which means that much less RAM is scanned for a specific query. This led to a RANGE(MONTH), HASH(TITLE) two-level partition strategy, which is very typical of data of this type.
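As a sketch, the DDL for one of the yearly tables looked something like this. The column names, partition count, and the extra CALMONTH partition-key column are my illustration rather than the exact DDL, and the precise multi-level partition syntax varies by HANA revision:

```sql
-- Illustrative only: one column table per year, hash-partitioned on
-- TITLE for partition pruning, and range-partitioned by month so that
-- only the month currently being loaded is merge-active.
CREATE COLUMN TABLE PAGECOUNTS_2014 (
  "DATE"    TIMESTAMP,
  CALMONTH  INTEGER,          -- month number, assumed range partition key
  TITLE     NVARCHAR(1024),
  PAGEVIEWS BIGINT,
  "BYTES"   BIGINT
)
PARTITION BY HASH (TITLE) PARTITIONS 16,
             RANGE (CALMONTH)
             (PARTITION 1 <= VALUES < 2,
              PARTITION 2 <= VALUES < 3,
              -- ...one range partition per month...
              PARTITION 12 <= VALUES < 13);
```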

Second, the amount of data we had meant that it was going to be most practical to logically partition the data into tables by year. This wasn’t strictly necessary, but it meant that if something went wrong with one table, it wouldn’t require a full load. This decision was vindicated because user error meant I wiped out one table the night before the Keynote, and it was easily possible to reload that year.

Third, a secondary index on TITLE was used. This was based on research by Lars Breddemann and Werner Steyn Further Playing with SAP HANA which led us to understand that when a small amount of data is selected from a large table, an index on the filter predicate column is beneficial. Therefore if the SQL query is SELECT DATE, SUM(PAGEVIEWS) FROM PAGECOUNTS WHERE TITLE = ‘Ebola’ GROUP BY DATE, then a secondary index on TITLE will increase performance.
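In SQL terms (table and index names are illustrative), that is simply:

```sql
-- Secondary index on the filter predicate column.
CREATE INDEX IDX_PAGECOUNTS_TITLE ON PAGECOUNTS (TITLE);

-- A selective query like this then only touches the matching rows:
SELECT "DATE", SUM(PAGEVIEWS)
FROM PAGECOUNTS
WHERE TITLE = 'Ebola'
GROUP BY "DATE";
```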

Fourth, we built a simple HANA model to UNION back in all the tables in a Calculation View, and join it to the M_TIME_DIMENSION system table so we could get efficient time aggregation in OData and ensure query pruning.
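The graphical Calculation View can't be reproduced here, but a plain-SQL sketch of the same shape (with illustrative table and column names) would be:

```sql
-- Illustrative only: union the yearly tables back together and join to
-- the generated time dimension for efficient time-level aggregation.
CREATE VIEW PAGECOUNTS_ALL AS
SELECT t.CALMONTH, p.TITLE, p.PAGEVIEWS
FROM (
  SELECT "DATE", TITLE, PAGEVIEWS FROM PAGECOUNTS_2013
  UNION ALL
  SELECT "DATE", TITLE, PAGEVIEWS FROM PAGECOUNTS_2014
) p
JOIN "_SYS_BI"."M_TIME_DIMENSION" t
  ON t.DATE_SQL = TO_DATE(p."DATE");
```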

Optimizing SAP HANA for the UV300H

By this time, we had 30TB of flat files in the /hana/shared folder of the UV300H and had to get them loaded. We realized there was a challenge: the Wikipedia files come space-delimited, with no quotes around text, and the date is in the filename, not in a column. We didn’t have Data Services or another ETL product, and the fastest way to get data into HANA is the bulk loader.

So, I wrote a script which uncompressed each file into a memory pipe, used awk to add the timestamp and reformat it as quoted CSV, wrote the result out to a RAMdisk, ran it through the bulk loader, and deleted the RAMdisk file. Each hour of data took around 20 seconds to process, and I ran 12 of these threads in parallel, plus an additional 40 threads for the bulk loader process.
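A minimal, self-contained sketch of that per-file pipeline is below. The filenames, the awk field mapping, and the hdbsql import call are illustrative rather than the original script; the real version ran 12 of these in parallel and staged the CSV on a RAMdisk:

```shell
#!/bin/sh
# Build a one-line sample in the Wikipedia pagecounts format:
# "project title pageviews bytes", with the timestamp in the filename.
printf 'en Ebola 1423 27544122\n' | gzip > pagecounts-20141001-000000.gz

for f in pagecounts-*.gz; do
  # Recover the timestamp from the filename.
  ts=$(echo "$f" | sed 's/^pagecounts-\([0-9]*\)-\([0-9]*\)\.gz$/\1\2/')

  # Uncompress into a pipe; awk prepends the timestamp, quotes the text
  # columns, and emits CSV for the bulk-loader staging file.
  gzip -dc "$f" | awk -v ts="$ts" \
    '{ printf "\"%s\",\"%s\",\"%s\",%s,%s\n", ts, $1, $2, $3, $4 }' \
    > "$f.csv"

  # The real script then invoked the bulk loader and deleted the staged
  # file, along the lines of (not run here):
  # hdbsql -U LOADER "IMPORT FROM CSV FILE '/ramdisk/$f.csv' INTO PAGECOUNTS_2014 WITH THREADS 40"
  # rm "/ramdisk/$f.csv"
done

cat pagecounts-20141001-000000.gz.csv
```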

What we realized at this point was that SAP HANA SPS08 wasn’t optimized for the amount of power that the UV300H had, so I tweaked the settings to be more aggressive, particularly with mergedog, which only uses 2 CPU cores by default. We enabled the integrated statistics server, installed PAL and the script server.

In addition, I found it was necessary not to push too hard, because the log volume is only 500GB, and you can easily fill it up between 5-minute savepoints if the load rate is too high (remember you have to buffer enough logs until the savepoint is complete). I suspect the certified 32-socket system will have a 1 or 2TB log volume for this reason.
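To put a rough number on it (my arithmetic, assuming the worst case of buffering a full 5-minute interval between savepoints), a 500GB log volume caps sustained log writes at roughly 1.7GB/sec:

```shell
# Maximum sustained redo-log write rate before a 500 GB log volume
# fills up within a 5-minute (300 s) savepoint interval.
awk 'BEGIN { printf "%.2f GB/s\n", 500 / 300 }'
```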

Other than that, we pretty much found that it just worked. Here’s a screenshot of using all the 960 vCores in the UV300H during some early query testing. I’m sure glad I didn’t pay for the power bill!

[Screenshot: CPU utilization across all 960 vCores during early query testing]

Building the Web App in HANA XS

We put together the Web App in HANA XS using SAPUI5 controls and OData services to access the underlying data model. More on this in a later blog, where Brenton O’Callaghan will describe how it was built.

What’s critical about this is that the OData service accessed directly by the browser runs in-memory and has direct access to the Calculation Scenario generated by the SAP HANA Calculation View. This means the response time in the browser is very little more than that of a SQL query run in a console on the server itself.

There were really no special considerations required to use HANA XS with a model of this size – it worked exactly the same as for any other HANA model. One thing we did to avoid problems was to use Input Parameters to restrict the HANA models so they couldn’t return very large data volumes. This means you can’t return 250bn rows in a browser!

Final Words

I’ve said this in the other blog, but whilst there were huge logistical challenges in building a model like this in 10 days, HANA made it possible. The fact that HANA self-optimizes the whole model for compression and query performance and requires no tuning is a huge benefit. Once we had built a simple data model, we were able to directly load all the data overnight.

One thing that was worth noting is because of the direct attached storage model in the UV300H, we found we can load around 200GB/minute into RAM (once it has been loaded and merged once). That means we can load the entire 6TB model on this system in around 30 minutes, which is the fastest load speed I’ve ever seen on a HANA system.

Anyhow, the purpose of this blog was to open the kimono on specifically how we built this demo, and to show that there was no special optimization to do so. This, despite the fact that the UV300H 32-socket edition certification is still in progress and the HANA optimizations for it weren’t available to us. If you have any questions on it then please go ahead and ask them, I’d be very happy to answer.

And remember if you’d like to Build your own Wikipedia Keynote Part 1 – Build and load data then please follow that series – we’ll be building out the whole demo in 4 parts.

      Patrick Bachmann

      Nice. Incidentally I wonder which is more impressive to say, a quarter trillion or 250 billion?  I rather like 250 billion as 250 sounds bigger than a measly 1/4.  I wonder how the marketing gurus would spin it.  At any rate very cool indeed. 

      John Appleby
      Blog Post Author

      Steve preferred the quarter trillion in rehearsal so it stuck.


      Either way it's a lot of zeros!


      Kay Kanekowski

      It sounds more impressive in Germany or France. Here trillion is the name for 10^18. And a quarter of a trillion is

      Lars Breddemann

      Thanks John, appreciate the reference to my old stuff even in the light of this very impressive demo.


      John Appleby
      Blog Post Author

      Actually I think it was a different internal paper you wrote that helped me but I wanted to tip my hat regardless! 🙂

      The key is that when you are only selecting a small portion of rows out of a big table, a secondary index can dramatically reduce query response time.

      I chose a partial CBTREE index which, according to the papers I read, should be O(log N) rather than O(N) for scan speed. That corresponded to a 10-100x improvement in query response time.

      Lars Breddemann

      Thanks again 🙂

      About the CBTree... this kind of index is actually only available for row store tables. It's a special kind of key compressed b*tree that works great for character data types in the row store. There's not too much literature available though... at least I had to look for it quite a while...

      For the main store of column store table a BTree would not really be a good choice - it would waste a lot of memory and there is no need to support index updates since we rebuild the complete structure during delta merge.

      So, for column store tables, the index structure actually is always the inverted index. The SQL layer gracefully ignores whatever is put into the CREATE INDEX clause (even though the catalog may still keep this setting... just in case anyone notices it popping up in an export .SQL file... ).

      The inverted index is basically just a copy of the main vector 'pivoted' (ok, that's not a word... let's use transposed instead) so that we immediately get to the matching records for any key.

      This leads to an even better access speed, since the time to find a record by its key is constant.

      In turn the runtime is dominated by how many records are actually found to be matching (but that's also a big contributor in non-unique b*trees - so not a big difference here 😉 ).

      Unfortunately the available implementation documentation is pretty... well... extendable. So, the details of column store index implementation are a mystery to me as well.

      Taking all that into account, I think you boiled it down very well to "...when you are only selecting a small portion of rows out of a big table, a secondary index can dramatically reduce query response time."

      Cheers, Lars

      Former Member

      Hello John,

      That is some seriously impressive stuff. To scale out that much, it will be interesting to see if HANA has any limits or if it is limited only by our imagination. This should get those entities that require a huge amount of data crunching thinking of using HANA. Amazing that in the past you had to use something like a CRAY computer to do this.



      Akshay Gupta

      So Cool.

      I just got so impressed by the read of it.

      Then I had to bring myself back to reality that it wasn't sci-fi.


      Former Member

      Great results!

      Are you using selects with joins on other tables (star or snowflake DWH approach) or it is only one table with quarter trillion rows ?


      John Appleby
      Blog Post Author

      This use case is basically a DSO, so single table, but I've also joined the same data model into multiple tables and it works great still.

      We are in the process of updating this appliance to SPS09, so we'll see what kind of results that brings. It should be substantially faster.

      Former Member

      Very interesting to see results with joins on this huge amount of rows.