In the previous blog I covered a brief summary of the HANA In-Memory database and explained the power of combining software and hardware innovation. In this article I will cover the topic of 'Memory Hierarchy' innovation: how the HANA In-Memory database capitalizes on the memory hierarchy for greater performance, generating responses in milliseconds or seconds to queries over millions or billions of records.

From programming experience in various languages we tend to think of memory as a continuous sequence of bytes. At the CPU level, however, the cost of processing a piece of information depends on where that byte is located. Execution is fastest when the information is already in a CPU register and slowest when it has to come from disk, as shown in the 'Memory Hierarchy' diagram. In the current architecture of computers we have different kinds of storage devices for different purposes. The CPU executes instructions on data held in registers; if the data is not in a register, it fires a load instruction to look in the cache (L1 and L2, SRAM: Static Random Access Memory). If the data is not in the cache either, the request goes on to DRAM (Dynamic Random Access Memory), and finally to the disk drive. Each read from these storage devices has a cost, and the cost increases as you move further from the processor. The picture below shows the latency of each storage device.

The memory hierarchy starts with the disk at the bottom, offering a very large storage capacity, moves up to a few KBs or MBs of cache, and ends with a handful of registers. Storage devices at the top of the hierarchy are close to the CPU and thus avoid keeping the CPU waiting, giving high performance. Unfortunately, the cost per byte also increases as we move closer to the processor.
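To make the hierarchy concrete, the short sketch below tabulates commonly cited, order-of-magnitude access latencies for each level and how many times slower each is than a register access. The exact figures are illustrative assumptions; real values vary by hardware generation.

```python
# Approximate access latencies per storage level (illustrative,
# order-of-magnitude figures; actual values vary by hardware).
latencies_ns = {
    "CPU register":       0.5,
    "L1 cache (SRAM)":    1,
    "L2 cache (SRAM)":    5,
    "DRAM (main memory)": 100,
    "SSD":                100_000,
    "Spinning disk":      10_000_000,
}

register_ns = latencies_ns["CPU register"]
for level, ns in latencies_ns.items():
    # Show each level's latency and its slowdown relative to a register.
    print(f"{level:20s} {ns:>14,.1f} ns  ({ns / register_ns:>12,.0f}x a register)")
```

Even with these rough numbers, the jump from DRAM to disk is several orders of magnitude larger than any step within the CPU's own hierarchy, which is exactly the gap an in-memory database exploits.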


*Main Memory Advantage:*

The HANA In-Memory database, as the name suggests, runs all processing and stores all information at the DRAM (Dynamic Random Access Memory) level. This alone improves database performance by a huge factor: it removes the disk I/O bottleneck of traditional databases, which is on the order of 10,000x slower than a DRAM access. Please refer to the latency grid picture below for each storage device. In HANA, the worst case for an instruction is that it has to reach down to DRAM to find its data, whereas in a disk-based traditional database the request drops one level further, to the disk, where the main bottleneck exists.
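A rough way to feel this difference is to time the same record lookup served from an in-memory structure versus scanned from a file on disk. This is only a sketch: the record layout, file path, and record count are made up for illustration, and on a warm OS page cache the "disk" read will be far faster than a true cold read.

```python
import os
import tempfile
import time

# 100,000 toy records held in DRAM as a Python dict.
records = {i: f"row-{i}" for i in range(100_000)}

# Write the same records to a temp file, one per line, to read back from disk.
path = os.path.join(tempfile.gettempdir(), "rows.txt")
with open(path, "w") as f:
    for row in records.values():
        f.write(row + "\n")

t0 = time.perf_counter()
value = records[54_321]                 # DRAM: a single hash lookup
mem_us = (time.perf_counter() - t0) * 1e6

t0 = time.perf_counter()
with open(path) as f:                   # disk path: open the file and scan
    for n, line in enumerate(f):
        if n == 54_321:
            value_disk = line.strip()
            break
disk_us = (time.perf_counter() - t0) * 1e6

print(f"memory lookup: {mem_us:.1f} us, disk scan: {disk_us:.1f} us")
```

Even with the operating system's page cache flattering the disk side, the in-memory lookup wins by orders of magnitude; against a genuinely cold spinning disk the gap widens further.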


HANA completely changes the way databases tackle performance bottlenecks. In a traditional database implementation, the most recent and most frequently used data is cached in DRAM to improve performance, and a lot of engineering work is done around that caching layer. With HANA one need not worry about such concerns, as the whole database resides in DRAM.
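The caching layer a disk-based database maintains is typically an LRU buffer pool. The minimal sketch below (a generic illustration, not HANA's or any specific product's implementation) shows the bookkeeping such a layer involves, which an in-memory database avoids entirely.

```python
from collections import OrderedDict

class BufferPool:
    """Minimal LRU buffer pool of the kind a disk-based database uses
    to keep its hottest pages in DRAM. An in-memory database skips this
    layer because the full dataset is already resident in DRAM."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()      # page_id -> page, in LRU order

    def get(self, page_id, load_from_disk):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # hit: mark most recently used
            return self.pages[page_id]
        page = load_from_disk(page_id)        # miss: pay the slow disk I/O
        self.pages[page_id] = page
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)    # evict least recently used
        return page

# Usage: with capacity 2, fetching a third page evicts the oldest one.
pool = BufferPool(capacity=2)
fetch = lambda pid: f"page-{pid}"   # stand-in for a real disk read
pool.get(1, fetch)
pool.get(2, fetch)
pool.get(3, fetch)                  # page 1 is evicted here
```

Tuning pool sizes, eviction policies, and hit ratios is a large part of administering a disk-based database; with the whole dataset in DRAM that tuning problem disappears.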


Diagram of Memory Hierarchy:

Memory Hierarchy


1 Comment


  1. Lars Breddemann
    Sorry, but why?

    Why was it necessary to publish a second blog post that doesn’t bring any new insights, thoughts, experiences but just re-re-re-re-iterates basic first-second-thoughts on any generic in-memory database?

    I don’t get it. What do you want to tell us?

    regards,
    Lars

