So far, SAP's solutions, including SAP HANA, had been mostly process-based.
On the OS level:
On the ABAP level:
Even on the Java level:
As well as on the HANA level:
Also, data often had to be local to the applications, which led to redundant data and a multiplication of extract, transform, and load (ETL) solutions.
With tools like SAP Data Intelligence, this is no longer required: big data can be processed in situ, where its size or nature demands it, with only the results being propagated, leveraging technologies like MapReduce.
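The in-situ idea can be illustrated with a minimal, self-contained MapReduce-style sketch in plain Python (the chunk data and word counts are hypothetical, standing in for data that stays on separate nodes; this is not SAP Data Intelligence code):

```python
from collections import Counter
from functools import reduce

def map_phase(chunk: str) -> Counter:
    """Map step: count words locally, on the node that holds the chunk."""
    return Counter(chunk.split())

def reduce_phase(partials: list) -> Counter:
    """Reduce step: merge only the small intermediate results."""
    return reduce(lambda a, b: a + b, partials, Counter())

# Two chunks standing in for data residing on two different nodes.
chunks = ["error warn error", "warn info error"]

# Only the per-chunk counts travel to the reducer, never the raw data.
totals = reduce_phase([map_phase(c) for c in chunks])
print(totals["error"])  # 3
```

Only the aggregated counts cross the network; the raw chunks never leave their nodes, which is the property that makes processing huge data sets in place feasible.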
With SAP Data Intelligence, SAP embarked upon the world of containerization and data lakes, and I am sure there will be more solutions to follow:
This is what an SAP Data Intelligence pipeline looks like:
Running on Kubernetes:
Underpinned by a Ceph storage cluster:
Accessing an Ambari-based HDFS data lake:
Welcome to next generation SAP.