Why does the DEFAULT Oracle “INMEMORY_SIZE” Parameter = 0?
According to the Wikipedia definition, a default, in computer science, is a setting or value automatically assigned to a software application, computer program, or device without user intervention. Such settings are also called presets. The Oxford English Dictionary dates this usage to the mid-1960s, as a variant of the older meaning of “failure in performance.”
Default values are generally intended to make a device (or control) usable “out of the box.” A common setting, or at least a usable setting, is typically assigned. In many contexts, such an assignment makes the choice of that setting or value more likely (the so-called default effect).
With that definition up front, I was intrigued to learn that the default for the Oracle In-Memory column store is set to zero, implying their eagerly anticipated in-memory offering is simply not used out of the box. Philosophically, this exposes arcane thinking and a misalignment with modern computing platforms. Cobbling together a 30-year-old database kernel with layers of new features and options doesn’t make sense for the industry at large today. A modern digital enterprise requires a different architecture: enter SAP HANA.
What happens when transformative technology like SAP HANA disrupts everything? It creates a new category. Here’s what happens next: competitors jostle to stake the claim of market leader. It becomes a race to the top where most play fair and some simply don’t. In the world of databases, for instance, when benchmarks are mutually agreed upon among competitors, it’s expected that measurements against those benchmarks are portrayed accurately. Again, not everyone plays by the rules.
When it comes to databases, SAP HANA is creating a new category. SAP HANA is a modern in-memory computing platform, with a default capability to optimize ALL workloads at in-memory speeds. Consequently, many in the industry are taking aim to catch up. While a disk-based database with an in-memory cache defaults to zero out of the box, SAP HANA, with its all-in-memory approach and ground-up architecture, continues to move the needle on performance and offers an all-inclusive solution that is truly in a class of its own.
Old world versus new digital age
Many important events happened 30 years ago. For example, the first mobile phone call was made in the U.K. on a handset that weighed about 11 lbs. We’ve made quite a bit of progress since then. Today, smartphones can weigh as little as 97 g. As processors became smaller and more powerful, mobile phones shrank in size but grew in functionality. Databases are making their mark as well. But over the years, instead of re-thinking the database, vendors simply built layers upon an infrastructure that has changed very little.
Without getting into the details of why this add-on approach is burdensome to businesses in the digital age (I will discuss this in a future blog post), let’s just say that if the phone industry had followed the path of some database companies, that is, incremental upgrades on an aging base design, mobile phones would be too heavy to carry today. Simply put, it doesn’t make sense to keep adding layers to a 30-year-old database infrastructure.
Here’s where the story of SAP HANA begins. SAP recognized the need for in-memory computing when it realized there wasn’t a solution on the market capable of SAP-grade performance for digital business transformation. So SAP took it upon itself to define a new category and built a new platform from the ground up, resulting in the in-memory computing paradigm that has become the foundation of a reimagined suite of applications, “S4”, for both on-premise and cloud digital transformations.
HANA is a result of a re-imagined vision for databases outlined on a whiteboard many years ago. HANA is the answer to the question, “What if we assume the database always has zero response time?”
HANA derives from a fundamental design principle: if the database provides near-zero response time, there is no need for pre-materialized aggregates. Assuming zero response time eliminates constructs we find in all databases today and gives users better response times. Manufacturing, sales, and financials are all examples of applications that deal with a lot of data. What if the redundancies typically found in all databases were eliminated? Then application processes speed up and business workflows improve.
The true key differentiator for HANA is its real-time nature. HANA eliminates disk latency and duplicate data copies, and provides a consistency of performance not possible with an in-memory cache. What’s more, HANA offers high availability and supports a broad range of recovery scenarios.
What customers want: the real benchmark of performance
If you want to understand the difference between a traditional database and the enhanced capabilities you can only find in SAP HANA, look at what customers demand. And if you want to know what customers want, you have to hear their concerns. Here is a sampling of the top questions customers ask when choosing an in-memory database platform:
- Can I simplify my IT landscape with an in-memory solution?
Yes. Because it is an all-inclusive platform, SAP HANA eliminates many specialized systems and the tools needed to move data among them. You need just one copy of the data for all requests. SAP HANA allows you to take advantage of in-memory computing to minimize IT complexity by delivering application, database, and integration services in a single platform, resulting in better performance, a simplified IT infrastructure, and lower administration costs.
- Are my applications accelerated without manual intervention and additional hardware?
Yes. With SAP HANA, data is in-memory by default without having to switch anything on, so no additional hardware is required. And because there is only a single copy of the data, no configuration is ever needed. With Oracle, on the other hand, manual intervention is required to select or duplicate tables. For DBAs, this means having to figure out which buttons to push for which table. What’s more, DBAs have to do it for every single table they want to copy into memory; consider the time such a task takes when the tables in a database system number in the thousands.
In a disk-based database with an in-memory cache, on the other hand, all data is on disk by default: you can expect multiple copies of data and lots of time spent configuring the cache to your needs. Such a database also requires additional hardware to maintain the duplicate copies and to constantly synchronize them.
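To make the per-table effort concrete, here is a minimal sketch of the DDL Oracle’s In-Memory option expects for each table; the table names and priorities are illustrative only:

```sql
-- Illustrative sketch of per-table Oracle Database In-Memory enablement.
-- Each table must be nominated individually by the DBA:
ALTER TABLE sales     INMEMORY PRIORITY HIGH;  -- populate eagerly at startup
ALTER TABLE orders    INMEMORY;                -- populate on first full scan
ALTER TABLE order_log NO INMEMORY;             -- explicitly kept out
-- ...and so on, for every table that should live in (or stay out of)
-- the column store.
```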
- Can my application provide analytics while updating the same copy of data in real time?
Yes, because the SAP HANA platform is an ACID-compliant, persistent, in-memory, columnar database. The architecture accelerates both queries and transactions using one copy of the data, in memory. Transactional and analytical workloads can run in parallel while preserving data integrity and system performance.
- Do my SAP Applications run better on an in-memory platform?
Running ahead without looking backward
Ever since its launch in 2010, HANA has been outrunning the competition: more than 5,800 customers and 2,000+ startups have already adopted the platform. That is because SAP HANA offers customers more choice in an all-inclusive in-memory solution. And by working closely with partners, customers can ensure that their SAP HANA solution is optimized for their hardware.
There’s no other benchmark more important to SAP than helping our customers transform the way they run their businesses. When I hear of customers surpassing their own business benchmarks thanks to SAP HANA next-generation database solutions, I know we’ve done our job well. I liken our customers’ success stories to a marathon. Today, the standard distance for a marathon is 26 miles and 385 yards; this is the distance against which all marathoners measure their performance. However, many athletes now exceed this benchmark by running and completing ultra-marathons of 62 miles or more. They are simply in a different league. And this is also the difference between HANA and other traditional databases: no matter how many new features they add, HANA is in a different league altogether.
In closing, I would like to revisit the semantics of vendors postulating that their in-memory offerings are “drop-in solutions” that require no application changes. The simple fact is: if the DEFAULT value of inmemory_size is zero, you are not using the new in-memory capabilities at all; it is the same as the old database. You MUST also make schema/DDL changes to use those capabilities. Hence the notion of “no app changes” is an oxymoron.
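For reference, here is what merely switching the feature on involves, before any per-table DDL; the size shown is illustrative:

```sql
-- INMEMORY_SIZE defaults to 0: the column store is off out of the box.
-- It is a static parameter, so enabling it means an instance restart:
ALTER SYSTEM SET INMEMORY_SIZE = 100G SCOPE=SPFILE;
-- ...restart the instance, then nominate tables with INMEMORY DDL
-- (as sketched earlier) before anything is actually populated in memory.
```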
Follow me on Twitter: @i_khana
Hi Irfan,
Sorry, but I am a little bit confused.
> The simple fact is: if the DEFAULT value of inmemory_size is zero, you are not using the new in-memory capabilities at all; it is the same as the old database. You MUST also make schema/DDL changes to use those capabilities.
The simple fact why it is set to zero is the following:
> When it comes to databases, SAP HANA is creating a new category. SAP HANA is a modern in-memory computing platform, with a default capability to optimize ALL workloads at in-memory speeds.
Really? ALL? If this is the case, why does SAP not provide SD benchmarks and so on? Even the SAP development teams have started to reveal (to clients) that classical OLTP load is not optimized or improved at all (e.g. SAP VC, etc.).
> With Oracle, on the other hand, manual intervention is required to select or duplicate tables. For DBAs, this means having to figure out which buttons to push for which table. What’s more, DBAs have to do it for every single table they want to copy into memory; consider the time such a task takes when the tables in a database system number in the thousands.
Yes, you are partially right. With Oracle you need to select the tables that should be stored in memory, but this does not have to be done manually. Run a script during a typical workload, and afterwards you get a script to run once; done. By the way, with SAP HANA you also need to choose whether a table should be placed in the column store or the row store.
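For example, something like this (purely illustrative; the schema name and size threshold are made up) already generates the one-time DDL:

```sql
-- Illustrative: generate one-off INMEMORY DDL for the large tables of
-- one schema, straight from the Oracle data dictionary.
SELECT 'ALTER TABLE "' || owner || '"."' || table_name || '" INMEMORY;'
FROM   dba_tables
WHERE  owner    = 'SAPSR3'    -- example schema name
AND    num_rows > 1000000     -- example threshold for "large" tables
ORDER  BY num_rows DESC;
```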
> HANA is a result of a re-imagined vision for databases outlined on a whiteboard many years ago. HANA is the answer to the question, "What if we assume the database always has zero response time?"
How do you explain the newest Oracle IM benchmark with SAP BW-EML then: http://www.oracle.com/technetwork/database/in-memory/overview/benefits-of-dbim-for-sap-apps-2672504.html
Thank you.
Regards
Stefan
Stefan,
I can see from your profile that you have many years of tuning expertise. In a past life I also spent a significant amount of time micro-managing complex systems and queries using tried-and-tested techniques: cost-based optimizer tips and tricks, ensuring index coverage, join selectivity, and so on. That said, you are correct: all systems need to be sized. However, there is a major difference between sizing a system once at setup (as in HANA), with very simple parameters (everything goes in memory), and having to regularly decide what portion of the data (and which specific data) you want to allow to flow into memory. That is also a key reason why choosing default values becomes very tricky here. In addition to being a complex task to start with, pre-configuring what goes in memory assumes that DBAs not only know exactly what their users need today but also what they will need tomorrow, or that they will have to reconfigure and tune again and again. In the end, memory is always finite; that is true of a HANA system as well, but with HANA things are far simpler (it’s all in memory) and predictable (it’s always in memory).
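As an illustration of what “sizing once, with very simple parameters” means in practice, this is roughly the extent of the memory configuration on a HANA system (the limit value is illustrative; global_allocation_limit is given in MB):

```sql
-- Illustrative: cap the instance's memory once, at setup. Beyond this,
-- there is no per-table decision to make: all data lives in memory.
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('memorymanager', 'global_allocation_limit') = '512000'  -- in MB
  WITH RECONFIGURE;
```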
Thanks for the comments, it’s good to hear from someone with knowledge of both sides of the story!
Hi Irfan,
> However, there is a major difference between sizing a system once at setup (as in HANA), with very simple parameters (everything goes in memory), and having to regularly decide what portion of the data (and which specific data) you want to allow to flow into memory
I may have misunderstood HANA, but you also need to make sure that enough memory is always available. So I really see no difference compared to Oracle IM. You just define the tables once (with Oracle IM) and that’s it. Both systems grow and may need to be resized as the data grows. In addition, I see a benefit for Oracle IM in the case of partitioned tables: you may not need the very old data in memory, so you can just exclude it (no idea if this is also possible with HANA).
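To illustrate the partitioning point (a syntax sketch; the table and partition names are made up):

```sql
-- Illustrative: with Oracle IM, cold partitions can simply stay on disk
-- while current partitions are populated into the column store.
ALTER TABLE sales MODIFY PARTITION sales_2005 NO INMEMORY;
ALTER TABLE sales MODIFY PARTITION sales_2015 INMEMORY;
```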
> In addition to being a complex task to start with, pre-configuring what goes in memory assumes that DBAs not only know exactly what their users need today
I guess running just one PL/SQL script (for a start) is not too complex, in contrast to re-learning a whole new database platform and re-writing a lot of code 🙂
> that is true of a HANA system as well, but with HANA things are far simpler (it’s all in memory) and predictable (it’s always in memory).
With HANA you also have to choose between the column and the row store, right? So maybe SAP provides some default settings for the SAP standard tables, but you still have to choose for all the others, afaik.
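For example (illustrative DDL; the table names are made up and defaults may differ by HANA version and configuration):

```sql
-- HANA: the store is chosen per table at creation time.
CREATE COLUMN TABLE sales_facts (id INTEGER PRIMARY KEY, amount DECIMAL(15,2));
CREATE ROW TABLE app_settings (k NVARCHAR(64) PRIMARY KEY, v NVARCHAR(256));
-- An existing row table can be converted to the column store later:
ALTER TABLE app_settings ALTER TYPE COLUMN;
```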
Thanks.
Regards
Stefan