This is the third part of a four-part blog series about the technical architecture of native cloud applications.

In our previous blog post, we explored how reaching 10,000 users is no longer a daunting engineering challenge, thanks to cloud reference architectures and scalable PaaS infrastructure running on hyperscalers: multi-zone and multi-region support for resiliency, Content Delivery Networks (CDNs), load balancers for workload distribution, caches to reduce the stress on the relational database, and more. These architectural patterns allow for scaling without substantial changes to your application.


Scalable apps via PaaS and IaaS architecture from Hyperscalers


However, as you aim to scale beyond 10,000 users, it becomes necessary to adapt your application code and shift from a monolithic architecture to a more modular one, based on more granular services or microservices. It's time to strangle the monolith!

Strangling the Monolith


As your application scales, a monolithic architecture can become a limitation. Gradually moving to a more modular architecture helps manage complexity, improves scalability, and allows independent deployment of smaller components.

APIs play a critical role in modularizing applications. By creating well-defined APIs, components can interact seamlessly, enabling separation of concerns and promoting maintainability. In fact, modern application development follows the "API first" principle rather than the more traditional "Code first" principle.
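As a minimal illustration of the contract-first mindset, the sketch below defines the request and response shapes of a small service before any business logic exists. It assumes Python with FastAPI and Pydantic; the resource and field names are purely illustrative.

```python
# Illustrative only: the contract (paths and payload shapes) is designed first,
# and the implementation is filled in afterwards.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Orders API", version="1.0.0")

class Order(BaseModel):
    order_id: str
    customer_id: str
    amount: float
    currency: str = "EUR"

@app.post("/orders", response_model=Order, status_code=201)
def create_order(order: Order) -> Order:
    # Business logic comes later; the contract above is what consumers code against.
    return order

@app.get("/orders/{order_id}", response_model=Order)
def read_order(order_id: str) -> Order:
    # Placeholder implementation; a real service would look the order up.
    return Order(order_id=order_id, customer_id="unknown", amount=0.0)
```

Because the framework derives an OpenAPI document from these definitions, consumers can review and build against the contract while the implementation is still a stub.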

Refactoring monolithic applications into modular architectures involves various approaches, such as Incremental Modularization, Domain-Driven Design, Event-Driven Architecture, Containerization, API Gateway Pattern, Anti-Corruption Layer, and many more that are too extensive to detail in a blog post.

One widely-used strategy is the Strangler Pattern, which progressively replaces sections of the monolith with smaller, modular components or services through the introduction of a proxy or facade layer. This enables a controlled transition towards a more modular architecture. The selection of an approach depends on the specific needs and limitations of the application, as well as factors like team size, skill set, and organizational culture.

Strangling the monolith is a gradual process. New functionality should be implemented outside the existing monolith to maintain a clean core. Each module can then scale up or scale out independently, growing and innovating at its own pace.
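As a rough sketch of that facade layer, the snippet below routes already-migrated paths to a new service and everything else to the legacy monolith. It assumes Flask and the requests library; the hostnames and the migrated prefix are placeholders.

```python
# A minimal sketch of the Strangler pattern as a routing facade, assuming a
# legacy monolith and a new "orders" service reachable at placeholder URLs.
import requests
from flask import Flask, Response, request

app = Flask(__name__)

LEGACY_MONOLITH = "http://legacy-erp.internal:8080"
NEW_ORDERS_SERVICE = "http://orders-service.internal:9090"

# Routes already carved out of the monolith; everything else still goes to it.
MIGRATED_PREFIXES = ("/api/orders",)

@app.route("/<path:path>", methods=["GET", "POST", "PUT", "DELETE"])
def facade(path: str) -> Response:
    target = LEGACY_MONOLITH
    if any(("/" + path).startswith(prefix) for prefix in MIGRATED_PREFIXES):
        target = NEW_ORDERS_SERVICE
    upstream = requests.request(
        method=request.method,
        url=f"{target}/{path}",
        params=request.args,
        data=request.get_data(),
        headers={k: v for k, v in request.headers if k.lower() != "host"},
        timeout=10,
    )
    return Response(upstream.content, status=upstream.status_code)

# As more functionality is extracted, prefixes move into MIGRATED_PREFIXES
# until the monolith behind the facade can finally be retired.
```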


Modularization via APIs


Eventually, there will be certain basic services such as authentication, SSO, monitoring or integration that you'll need to harmonize across different applications, modules, services, or microservices. The standardization of these basic services helps reduce the complexity of the technology stack and streamline the overall architecture. Additionally, it prevents the "reinventing the wheel" effect, where developers may end up implementing their own SSO or monitoring functionality when there are already standard services available for use as part of the technology platform.


The role of the Technology Platform


We've discussed general applications, but let's now examine how SAP has applied modernization and modularization techniques to SAP S/4HANA and the Intelligent Suite. The original SAP ERP 6.0 code, a massive modular monolith, is undergoing a gradual decomposition and modernization to meet current application architecture standards.

As highlighted in previous blog posts, SAP has defined six end-to-end processes (Lead to Cash, Source to Pay, etc.) and modernized the existing SAP ERP and SAP Business Suite processes. This includes updating the original ABAP code to new programming models based on SAP Fiori, CDS views, OData services, and native SAP HANA functionality. Additionally, some code has been rewritten as cloud-native services or microservices, while other portions have been replaced by best-in-class functionality from SAP's LoB native cloud applications like SAP Ariba and SAP SuccessFactors.

Many of the existing basis functions (such as user management, authentication and SSO, monitoring, and integration) have been migrated to the SAP Business Technology Platform (SAP BTP).


From ERP to S/4HANA



API Gateways


Introducing an API gateway as a single entry point for all external requests can help in gradually refactoring the monolith. This pattern enables the segregation of specific functionality into separate services while maintaining a consistent API for clients.


API Gateways


An additional benefit of API gateways is that they allow you to monetize your APIs. You can implement various strategies, such as usage-based pricing, tiered subscription plans, and freemium models. By tracking API usage and offering different levels of access or features, you can generate revenue while providing value to clients through a well-structured and managed API ecosystem.
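As a simplified sketch of what such monetization logic can look like at the gateway, the snippet below meters calls per API key and combines a freemium cut-off with usage-based overage pricing. Plan names, quotas, prices, and the in-memory counter are illustrative; a real gateway would persist usage in a shared store and bill asynchronously.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    monthly_quota: int            # included calls per month
    price_per_extra_call: float   # usage-based pricing beyond the quota

PLANS = {
    "free": Plan("free", monthly_quota=1_000, price_per_extra_call=0.0),
    "team": Plan("team", monthly_quota=100_000, price_per_extra_call=0.001),
    "enterprise": Plan("enterprise", monthly_quota=10_000_000, price_per_extra_call=0.0002),
}

usage = defaultdict(int)  # api_key -> calls this month (illustrative in-memory counter)

def authorize_call(api_key: str, plan_name: str) -> bool:
    """Meter the call and decide whether it is allowed under the plan."""
    plan = PLANS[plan_name]
    usage[api_key] += 1
    if plan.name == "free" and usage[api_key] > plan.monthly_quota:
        return False  # freemium: hard stop once the free quota is exhausted
    return True       # paid tiers: allowed, overage is billed per extra call

def monthly_invoice(api_key: str, plan_name: str) -> float:
    plan = PLANS[plan_name]
    extra = max(0, usage[api_key] - plan.monthly_quota)
    return extra * plan.price_per_extra_call
```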

Reaching the limits of relational databases


As applications scale, it's vital to recognize the inherent scalability limitations of databases, as they are often the primary bottleneck.

Native cloud applications developed with a start-up mindset typically embrace microservices, polyglot persistence, and schemaless or NoSQL databases. This approach allows each microservice to store and manage its own data, enabling each database to grow and scale independently.

When it comes to enterprise applications, such as ERPs, their complexity demands a different approach. Unlike simpler cloud applications that can leverage NoSQL schemaless databases and straightforward data models, traditional enterprise applications require meticulously planned central relational databases. These databases provide the necessary robustness and structured framework to handle intricate processes efficiently.

Let's take a moment to examine a typical S/4HANA query from a Fiori application. As you can see below, this rather ordinary query involves 367 tables and 381 views! This complexity showcases the indispensable role of relational databases in managing enterprise operations effectively.


S/4HANA SQL Query Explain Plan


Relational databases provide significant benefits for enterprise applications, such as transactional consistency and query optimization, and they offer various tools to assist database administrators (DBAs) in managing databases up to a certain size.

However, OLTP RDBMSs struggle to scale, and beyond certain limits the Total Cost of Ownership (TCO) becomes exorbitant, calling for more advanced techniques.

Regular maintenance tasks:

- Database performance tuning (parameters, index creation, statistics, reorganization, etc.)
- SQL optimization (hints, joins, and WHERE clauses; see the small example below)
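As a small, self-contained illustration of this kind of routine tuning, the snippet below shows how adding an index changes the plan for a selective WHERE clause. It uses SQLite purely because it ships with Python; the table and column names are made up.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales_orders (id INTEGER PRIMARY KEY, customer_id INTEGER, status TEXT)")
con.executemany(
    "INSERT INTO sales_orders (customer_id, status) VALUES (?, ?)",
    [(i % 1000, "OPEN" if i % 10 else "CLOSED") for i in range(50_000)],
)

query = "SELECT * FROM sales_orders WHERE customer_id = ? AND status = 'OPEN'"

# Before: the optimizer can only scan the whole table.
print(con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

# Routine maintenance: create a covering index and refresh optimizer statistics.
con.execute("CREATE INDEX idx_orders_customer_status ON sales_orders (customer_id, status)")
con.execute("ANALYZE")

# After: the same query is resolved via the index instead of a full scan.
print(con.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())
```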

Advanced techniques required for very large databases:

- In-memory databases
- Offloading binary files
- Standalone indexing engines
- Read-only replicas
- Partitioning, Sharding and scale-out
- Denormalization

Let's take a detailed look at each of these techniques and compare how they can be used in native cloud applications with how we use them at SAP.

 

In-Memory Databases

Although in-memory databases have been used extensively in web and cloud development for a long time (Memcached, 2003; Redis, 2009; VoltDB, 2009), the situation for enterprise applications was more conservative. Relational databases were primarily row-based and stored data on disk, using memory solely as a cache to minimize disk access.

With the introduction of SAP HANA in 2010 as a columnar in-memory database for SAP applications, a significant debate and marketing competition arose regarding its performance advantages for traditional relational workloads. However, today, most traditional RDBMS have embraced in-memory options, making these features more commonplace and widely accepted in the industry.
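On the web-and-cloud side mentioned above, in-memory stores such as Redis are most often placed in front of the relational database using the cache-aside pattern. Below is a minimal sketch, assuming a local Redis instance and the redis-py client; the key naming, TTL, and lookup helper are illustrative.

```python
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

def load_customer_from_db(customer_id: str) -> dict:
    # Placeholder for the expensive relational query we want to avoid repeating.
    return {"id": customer_id, "name": "ACME Corp", "country": "DE"}

def get_customer(customer_id: str) -> dict:
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                    # served from memory, no DB round trip
    customer = load_customer_from_db(customer_id)
    cache.setex(key, 300, json.dumps(customer))      # keep hot data in memory for 5 minutes
    return customer
```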

 

Offloading binary files to external storage

Another widely adopted technique employed by both Native Cloud Applications and traditional relational enterprise applications to enhance scalability is the offloading of binary files from the database.

Instead of storing these files directly within the database, they are transferred to external filesystems or BLOB storage. Alternatively, organizations can leverage specialized solutions such as OpenText or SAP Content Server, or even utilize Content Delivery Networks (CDNs) for enhanced file distribution and faster content delivery globally.
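A minimal sketch of this offloading, assuming an S3-compatible object store accessed via boto3; the bucket, table, and column names are placeholders. The database keeps only a reference to the object, and clients download the file through a short-lived URL rather than through the database tier.

```python
import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "erp-attachments"  # placeholder bucket name

def store_attachment(db_connection, order_id: str, filename: str, content: bytes) -> str:
    """Upload the binary to object storage and persist only its key in the database."""
    object_key = f"orders/{order_id}/{uuid.uuid4()}-{filename}"
    s3.put_object(Bucket=BUCKET, Key=object_key, Body=content)
    db_connection.execute(
        "INSERT INTO order_attachments (order_id, object_key) VALUES (?, ?)",
        (order_id, object_key),
    )
    return object_key

def attachment_download_url(object_key: str) -> str:
    """Hand the client a short-lived URL so the file never flows through the database."""
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": BUCKET, "Key": object_key}, ExpiresIn=3600
    )
```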


Offloading documents


 

Similarly, we can offload the indexing and searching of data to specialized services to minimize access to the main database.
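As a minimal sketch, assuming a local Elasticsearch instance and the official Python client (8.x), with illustrative index and field names: records are pushed to the search engine whenever they change, and full-text queries are served entirely from it.

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def index_document(doc_id: str, title: str, body: str) -> None:
    # Called whenever the source record changes in the primary database.
    es.index(index="business-documents", id=doc_id,
             document={"title": title, "body": body})

def search_documents(text: str) -> list[str]:
    # The search workload never touches the relational database.
    response = es.search(index="business-documents",
                         query={"match": {"body": text}})
    return [hit["_id"] for hit in response["hits"]["hits"]]
```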


Standalone Search engines


See below how these techniques can be combined using Microsoft Azure, SAP BTP, or a combination of both to build innovative services that facilitate importing, enriching, indexing, and searching unstructured data from documents and structured data from SAP systems:


Scaling applications with Microsoft Azure and/or SAP BTP services


 

Read-only replicas

The next logical step in our scalability journey is to use read replicas to separate read and write access to the database, although this approach requires careful re-engineering of the application.

Read replicas are essentially copies of the primary database, kept in sync with the original data. By directing read queries to these replicas, the load on the primary database is reduced, significantly improving performance and scalability. This approach works especially well for read-heavy applications, as it spreads the read workload across multiple instances, enabling the system to handle a higher volume of concurrent users and queries. Additionally, pairing read replicas with cache databases can further enhance read and write performance.
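A minimal sketch of such read/write splitting, assuming SQLAlchemy and placeholder connection URLs; a real implementation must also handle replication lag, for example by reading your own writes from the primary right after an update.

```python
import random
from sqlalchemy import create_engine, text

primary = create_engine("postgresql://app@db-primary/erp")
replicas = [
    create_engine("postgresql://app@db-replica-1/erp"),
    create_engine("postgresql://app@db-replica-2/erp"),
]

def run_write(statement: str, **params):
    # All writes go to the primary so there is a single source of truth.
    with primary.begin() as conn:
        conn.execute(text(statement), params)

def run_read(statement: str, **params):
    # Reads are spread across replicas, taking load off the primary.
    with random.choice(replicas).connect() as conn:
        return conn.execute(text(statement), params).fetchall()

run_write("UPDATE orders SET status = :s WHERE id = :id", s="SHIPPED", id=42)
rows = run_read("SELECT id, status FROM orders WHERE customer_id = :c", c=7)
```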

In the case of SAP S/4HANA, a comparable approach is employed, utilizing HANA System Replication to a secondary node (HA/DR node). Typically, this configuration is Active-Passive, but enabling the Active-Active read-enabled option allows you to redirect some of the read workload to the secondary node. While this is not a universal solution, it can prove beneficial for specific workloads under certain conditions, despite introducing added complexity to the architecture.


Read replicas


Partitioning, Sharding and Scale-out

Despite all these techniques, we will eventually reach the scalability limit of a single database server. Modern Intel servers currently offer around 16 sockets and 24 TB of memory. To process more workload, it's necessary to explore scale-out architectures.

Partitioning, sharding, and scale-out are closely related concepts. The idea is to distribute large amounts of data across multiple partitions that can run on the same node or on different nodes using a shared-nothing architecture, where each node operates independently without sharing memory or storage. This enhances parallel processing and data management efficiency.

However, implementing sharding / scale-out adds complexity to the architecture and management, requiring careful planning and potential application logic changes to ensure data consistency, integrity, and availability.
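A minimal sketch of hash-based sharding in such a shared-nothing layout: a deterministic function maps the sharding key (here a customer ID) to exactly one node. The connection URLs are placeholders, and a real system also needs a strategy for resharding and for queries that span shards.

```python
import hashlib

SHARDS = [
    "postgresql://app@shard-0/erp",
    "postgresql://app@shard-1/erp",
    "postgresql://app@shard-2/erp",
    "postgresql://app@shard-3/erp",
]

def shard_for(customer_id: str) -> str:
    """Route every record of a given customer to the same shard."""
    digest = hashlib.sha256(customer_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# All orders of customer "C-10023" live on (and are queried from) one node;
# queries that span customers must fan out across shards and merge the results.
print(shard_for("C-10023"))
```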


Partitioning vs Sharding vs Scale-out


Database denormalization

The final step in the search for the limits of relational database scalability is to sacrifice one of the core principles of the relational model: database normalization.

Database normalization ensures data efficiency by eliminating redundancy and ensuring consistency while preventing anomalies. This principle involves breaking down tables into smaller, related ones and defining their relationships.


Database Normalization - Avoiding anomalies


Compromising the normalization model is a controversial but necessary step when searching for scalability. By denormalizing the data model, we can reduce the number of joins needed for complex queries, but it comes at the cost of potentially inconsistent data. It's a delicate balance between maintaining data integrity and optimizing performance, and it requires careful consideration and planning.
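A small, self-contained sketch of that trade-off, using SQLite only because it ships with Python and with made-up table names: the denormalized reporting table repeats the customer name on every row, so the report needs no join, but any name change must now be propagated to every copy.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Normalized: customer data lives once, queries pay for the join.
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL);

    -- Denormalized: the customer name is copied into every order row.
    CREATE TABLE order_facts (id INTEGER PRIMARY KEY, customer_name TEXT, amount REAL);
""")

# Reporting on the normalized model requires a join ...
normalized_report = """
    SELECT c.name, SUM(o.amount) FROM orders o
    JOIN customers c ON c.id = o.customer_id GROUP BY c.name
"""

# ... while the denormalized model answers the same question from one table,
# at the price of redundant data and potential update anomalies.
denormalized_report = "SELECT customer_name, SUM(amount) FROM order_facts GROUP BY customer_name"

print(con.execute(normalized_report).fetchall())
print(con.execute(denormalized_report).fetchall())
```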

During the SAP ERP modernization process to SAP S/4HANA, key finance tables were denormalized. This involved introducing a massive new table, ACDOCA, also known as The Universal Journal, with nearly 500 fields; running on SAP HANA, it enabled real-time OLTP+OLAP reporting on transactional data with minimal latency.



Denormalization - ACDOCA Universal Journal on SAP S/4HANA


 

Final Thoughts

Scaling applications up to 100,000 users requires transitioning from monolithic architectures to more modular ones. APIs play a vital role in enabling seamless interaction between components and promoting maintainability. The Strangler Pattern facilitates a controlled transition, gradually replacing sections of the monolith with modular architecture.

Scaling enterprise applications like SAP S/4HANA presents even greater challenges, pushing the limits of relational databases. Techniques such as offloading binary files to external storage, specialized indexing services, partitioning, scale-out, and denormalization have been implemented by SAP to optimize database performance.

With some of the largest companies worldwide running massive ERP and S/4HANA systems, SAP has experience in handling large-scale databases with tens of terabytes of data and well over 300,000 concurrent users.

In the next blog post, we will look at how to scale beyond these limits and discuss the techniques necessary for building true SaaS public cloud applications with millions of users.

Stay tuned!

Brought to you by the SAP S/4HANA Customer Care and RIG