
Architecture and deep-dive into leveraging Predictive Intelligence for S/4HANA

Part 2 of the blog series:

A podcast on the architecture concepts is here.

Continuing from my previous blog, Leveraging Predictive Intelligence with S/4HANA, let us now do a deep dive into the architecture and the different approaches involved. Typically, in enterprise processes the right information must be provided to the right person at the right time and place, which means predictive algorithms must be integrated directly into these business processes. However, there is also a need for ad hoc, exploratory use of these predictive algorithms to discover hidden insights and relationships in the data.

Let us now focus on the different approaches and the architecture behind these approaches.

Architecture:

Use cases such as forecasting, key influencer identification, trending, relationship analysis, or anomaly detection can be addressed with algorithms like regression, clustering, classification, or time series analysis. These algorithms are usually not resource intensive in terms of memory usage and CPU time, so they can be implemented within the SAP S/4HANA stack itself. This is called embedded ML (building into the core), because both the application data for model training and the ML-consuming business processes are located there.

The embedded ML architecture is simple and powerful at the same time. As illustrated in the figure below, it is based on HANA ML and PAi (Predictive Analytics Integrator, since upgraded to ISLM; refer to the blog). HANA ML provides the required algorithms through PAL (Predictive Analysis Library) and APL (Automated Predictive Library), while PAi is in charge of the lifecycle management of ML models, their integration into ABAP, and the training of models on the customer side. Embedded ML therefore has very low TCO and TCD.
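To make the embedded approach more concrete, here is a minimal sketch using the hana_ml Python client to train a PAL regression model directly inside SAP HANA. The host, credentials, schema, table, and column names are placeholders, and in a real embedded scenario the equivalent logic would typically be driven from ABAP via PAi/ISLM rather than from an external Python session.

```python
# Minimal sketch: training a PAL regression model in-database via the hana_ml
# Python client. All connection details and table/column names are placeholders.
from hana_ml import dataframe
from hana_ml.algorithms.pal.linear_model import LinearRegression

conn = dataframe.ConnectionContext(address="s4hana-db.example.com",
                                   port=30015, user="ML_USER", password="***")

# Application data stays in HANA; only a reference to it is handled here.
sales = conn.table("SALES_HISTORY", schema="SAPABAP1")
train = sales.filter("FISCAL_YEAR < 2020")
score = sales.filter("FISCAL_YEAR >= 2020")

# Model training is pushed down to the PAL library inside SAP HANA.
lr = LinearRegression()
lr.fit(data=train, key="ID", features=["QUANTITY", "DISCOUNT"], label="REVENUE")

# Predictions are also computed in-database and only fetched on demand.
predictions = lr.predict(data=score, key="ID")
print(predictions.head(5).collect())
```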

Use cases like image recognition, sentiment analysis, or language recognition require deep learning algorithms based on neural networks. These algorithms need huge amounts of data and CPU time for model training. Therefore, model training for these kinds of scenarios is moved out of the SAP S/4HANA stack to the Leonardo Foundation (now upgraded to AI foundation) platform on SCP (SAP Cloud Platform); this is called side-by-side ML (expanding around the core). The data required for these scenarios, such as images, audio files, video files, text documents, and historical data, is not stored in SAP S/4HANA but in a big data solution on SCP. The Leonardo Foundation library complements the overall solution architecture where specific algorithms are not provided on the SAP S/4HANA stack, where the classic methods (e.g., regression, classification) would consume too many resources of the transactional system, or where huge volumes of external data (e.g., Twitter, Facebook) are required for model training. Hence the SAP S/4HANA extensions consume the Leonardo Foundation services (now called AI business services) and HANA ML capabilities, and because the application data and business processes are based on SCP, the golden rule of bringing the algorithms to the data applies here as well.
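For contrast, the sketch below illustrates the kind of resource-hungry deep-learning training that is kept outside the S/4HANA stack. TensorFlow/Keras and the MNIST sample data are used purely as an illustration; the actual services on the AI foundation encapsulate such training behind managed APIs.

```python
# Sketch only: a small image-classification training job of the type that is
# outsourced from the transactional system. Dataset and model layout are
# placeholders for a realistic deep-learning workload.
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., None] / 255.0          # add channel dim, normalize

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# The GPU/CPU-hungry training step is the reason this workload does not run on
# the S/4HANA stack; the trained model is afterwards served as a remote service.
model.fit(x_train, y_train, epochs=3, batch_size=128)
model.save("image_classifier.h5")             # exported for deployment/serving
```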

 

Building into the core (Embedded ML):

Now let us dive into the embedded ML approach in SAP S/4HANA. As shown in the figure below, the solution is based on two main architecture decisions: using CDS views and using the ML techniques provided by SAP HANA. The algorithms for embedded ML can be performance intensive, as high volumes of application data must be processed. For performance optimization, the algorithms should be processed close to the data: SAP HANA provides the Predictive Analysis Library (PAL) and Automated Predictive Library (APL) application function libraries, which offer statistical and data mining algorithms, and additional algorithms can be implemented as required. These algorithms take application data as input for model training. The trained models are exposed to business processes by wrapping them with CDS views. These CDS views can be combined with other VDM CDS views and then exposed to consumers. By consuming ML models through CDS views, existing content and concepts are re-used, which results in a simple and very powerful solution architecture.

The purpose of the Predictive Analytics Integrator (PAi, now upgraded to ISLM) is to provide a common interface for the consumption of ML models, independent of the underlying predictive engine. PAi contains information about the installed SAP HANA libraries. It provides a repository for ML models that includes, for example, information on model types (e.g., regression, classification, or time series), model data sources (e.g., tables or views), model training data, and model quality figures. PAi also provides a pluggable infrastructure and adapters for automated (e.g., APL) and expert (e.g., PAL, R, or EML) libraries. Models created with SAP Predictive Analytics (automated or expert mode) can be imported into the PAi repository and executed.
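As an illustration of the automated (APL) path and of exposing results for CDS consumption, here is a hedged hana_ml sketch: an APL classification model is trained in-database and its scoring output is persisted to a HANA table, which a CDS view could then wrap. Connection details, schema, table, and column names are placeholders, not part of any delivered content.

```python
# Hedged sketch: APL classification in-database, with the scoring output saved
# to a table that a CDS view could wrap for VDM consumption.
from hana_ml import dataframe
from hana_ml.algorithms.apl.gradient_boosting_classification import (
    GradientBoostingBinaryClassifier,
)

conn = dataframe.ConnectionContext(address="s4hana-db.example.com",
                                   port=30015, user="ML_USER", password="***")
orders = conn.table("DELIVERY_HISTORY", schema="SAPABAP1")

# APL selects and tunes the model automatically ("automated" library).
clf = GradientBoostingBinaryClassifier()
clf.fit(data=orders, key="ORDER_ID", label="DELAYED")

# Score the data set and persist the result; a CDS view on top of this table
# would expose the predictions to the business process.
scores = clf.predict(orders)
scores.save("ZPRED_DELIVERY_DELAY", force=True)
```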

Embedded ML should be applied when the following criteria are met:

  • Business and ML logic reside on the SAP S/4HANA platform
  • The use case is simple, e.g., forecasting or trending, where the algorithms have a low demand for data, RAM, and CPU time
  • The data located in SAP S/4HANA is sufficient for model training; no huge volumes of external training data are needed
  • The required algorithms are provided by HANA ML (e.g., PAL, APL, Text Analysis) and handled by PAi/HEMI in terms of ML model lifecycle management and ABAP integration

The following figure shows the architecture of embedded ML at a high level. In the SAP HANA layer, SQL views are built on top of the application tables, and the PAi repository accesses the ML models. The PAi framework has since been updated to the ISLM (Intelligent Scenario Lifecycle Management) framework, which we discuss in part 6 of the blog series, where embedded ML is explained in detail. In the SAP S/4HANA layer, CDS views are built on top of the SQL views from the SAP HANA layer, and the corresponding CDS views for the ML model are created as well. The analytical engine on top processes these results and delivers them to the built-in Fiori app for the consumer.
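Purely as an illustration of the consumption side, the snippet below reads prediction results through an OData service exposed on top of such an analytical CDS view, using a plain Python client in place of the built-in Fiori app. The service path, entity set, field names, and credentials are hypothetical placeholders.

```python
# Illustrative only: reading prediction results from a hypothetical OData
# service that exposes an analytical CDS view with the ML model output.
import requests

BASE = "https://s4hana.example.com/sap/opu/odata/sap/ZPRED_DELIVERY_SRV"
resp = requests.get(
    f"{BASE}/DeliveryPredictions",
    params={"$top": "10", "$format": "json"},
    auth=("ODATA_USER", "***"),
)
resp.raise_for_status()
for row in resp.json()["d"]["results"]:          # OData V2 payload layout
    print(row["OrderID"], row["PredictedDelay"])
```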

Expanding around the core (side-by-side ML):

While embedded ML targets scenarios where the business and ML logic reside in the SAP S/4HANA stack, the term side-by-side ML is used in the following use cases:

  • SAP S/4HANA ML app based on SCP: The SAP S/4HANA application and the corresponding business logic are based on SAP Cloud Platform. Such applications should consume the required ML services directly from Leonardo ML (now AI foundation) or HANA ML, following the rule of bringing the algorithms to the data.
  • SAP S/4HANA ML app based on ABAP AS: The SAP S/4HANA applications and the corresponding business logic are based on the SAP S/4HANA stack. However, the required ML capabilities, e.g., image and language recognition or sentiment analysis, are not available on the S/4HANA stack. These capabilities are consumed remotely from Leonardo ML (now AI foundation), based on the exchange of trained models.

Hence side-by-side ML scenarios are usually based on the AI foundation or on SAP Data Intelligence running on SAP Cloud Platform. The architecture of the AI foundation is illustrated in the figure below based on the older, legacy MLF. This legacy MLF is organized into three different development initiatives: Foundation, Functional services, and AI Business services. The newer version of the AI foundation will be available soon, and this blog will be updated accordingly.

  • Machine Learning Foundation (MLF) defines a framework for creating and executing “intelligent” algorithms in the context of the overall SAP ecosystem.
  • The ML functional services execute on top of the MLF and provide ready-to-use implementations of a selected set of standard use cases such as image classification and sentiment analysis (see the sketch after this list).
  • Finally, the AI business services implement complex use cases that are typically tightly integrated with SAP application business processes in S/4HANA.
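As a rough illustration of how such a ready-to-use service is consumed, the sketch below obtains an OAuth token and calls an image-classification inference endpoint over REST. It is shown in Python for brevity (an S/4HANA extension would typically use ABAP or CAP), and the token URL, service endpoint, and payload layout are hypothetical placeholders for whatever the provisioned service actually exposes.

```python
# Hedged sketch: consuming a remote ML functional/business service over REST.
# Token URL, service endpoint, and payload format are hypothetical placeholders.
import requests

# 1. OAuth2 client-credentials flow against the platform authorization server.
token = requests.post(
    "https://my-subaccount.authentication.eu10.hana.ondemand.com/oauth/token",
    data={"grant_type": "client_credentials"},
    auth=("CLIENT_ID", "CLIENT_SECRET"),
).json()["access_token"]

# 2. Call the inference endpoint of the deployed image-classification service.
with open("product_photo.jpg", "rb") as f:
    result = requests.post(
        "https://ml-service.example.com/v1/image-classification",
        headers={"Authorization": f"Bearer {token}"},
        files={"files": f},
    ).json()
print(result)
```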

In the figure below you will still notice the older “Leonardo” terminology and the architecture around it. This will be updated with the new AI business services architecture.

Side-by-side ML should be applied when the following criteria are met:

  • ML logic resides on SCP, while the business logic can be based on SAP S/4HANA or SCP
  • The use case is complex, e.g., image recognition or natural language processing, requiring, among others, neural networks with a high demand for data, RAM, and CPU/GPU time
  • Huge volumes of external data are required for model training; the main focus is on processing unstructured data
  • The required algorithms are not provided by HANA ML but by other libraries such as TensorFlow or scikit-learn (see the sketch after this list)
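The following scikit-learn sketch shows the side-by-side flavor in miniature: a sentiment model trained on external, unstructured text that never touches S/4HANA. The tiny inline data set stands in for the large external training data such scenarios normally require.

```python
# Minimal scikit-learn sketch: a sentiment classifier trained on external text.
# The inline data set is a placeholder for large external training data.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["great service, fast delivery", "late again, very disappointing",
         "excellent quality", "broken on arrival"]
labels = [1, 0, 1, 0]                      # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["delivery was very fast"]))   # expected: positive (1)
```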

Continuing a bit further, I would also like to provide a brief architectural representation of how side-by-side ML models can be built using SAP Data Intelligence (SAP DI). The architecture below shows how ML models can be built with SAP DI, which leverages the pipeline engine and other data science tools. The business logic stays in SAP S/4HANA, and the ML service is consumed by the SAP S/4HANA app as shown below. In part 7 of the blog series we discuss in detail how these ML services are consumed in the context of SAP S/4HANA business processes.
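To give a feel for the pipeline engine, here is a hedged sketch of a custom Python operator as it might appear inside an SAP Data Intelligence graph: it scores incoming records and passes the enriched result to the next operator, which could in turn expose it as a service to the S/4HANA app. The api object is normally injected by the DI Python operator runtime; the port names and the trivial scoring rule are placeholders, and a small stub is added so the snippet can also run standalone.

```python
# Hedged sketch of a custom Python operator in an SAP Data Intelligence pipeline.
# Port names and the scoring rule are placeholders; `api` is normally injected
# by the DI runtime, so a minimal stub is provided for standalone execution.
import json

def on_input(msg):
    record = json.loads(msg)                    # one business record from upstream
    # Placeholder scoring step; in practice a model managed by the DI data
    # science tools would be loaded and applied here.
    record["delay_risk"] = 0.8 if record.get("distance_km", 0) > 500 else 0.2
    api.send("output", json.dumps(record))      # pass the scored record downstream

try:
    api  # provided by the SAP DI Python operator runtime
except NameError:
    class _StubApi:
        def set_port_callback(self, port, callback):
            callback(json.dumps({"order_id": 42, "distance_km": 800}))
        def send(self, port, data):
            print(f"[{port}] {data}")
    api = _StubApi()

api.set_port_callback("input", on_input)
```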

Extending the core:

Furthermore, you can extend the predictive models already embedded in SAP S/4HANA processes by leveraging “SAP Analytics Cloud – Smart Predict”. You can already create new predictive models with SAC Smart Predict based on SAP S/4HANA business processes and visualize the predictions in SAP Analytics Cloud dashboards. In this approach, the business user has the added opportunity to try out or create templates and user stories based on the whitelisted CDS views from SAP S/4HANA. While some machine learning functionality is already embedded into the SAP S/4HANA business processes, in some instances the business user might simply want to create or showcase new BI dashboards based on the functionality available in SAP S/4HANA. A quick prediction or simulation using the Smart Insights and Smart Predict functionality in SAP Analytics Cloud helps the business user present this functionality to end users and executive management. If some of this functionality needs to be embedded into the business process, a request can be raised and it can be developed in SAP S/4HANA as an embedded ML model/process.

In the meantime, with PAi upgraded to ISLM (Intelligent Scenario Lifecycle Management), updating and modifying the models is already possible with the embedded ISLM in SAP S/4HANA. Please check the blog and the blog series in this regard.

The figure below shows the different ways of leveraging end-to-end predictive scenarios using SAP Analytics Cloud:

In the next blog, we discuss in more detail the different approaches and process flows involved in leveraging Predictive Intelligence.

Here are some quick links to the blogs in this series to give you a complete understanding of how Predictive Intelligence is infused into SAP S/4HANA.

Happy predicting the future!!

 
