Details of the Planned Native Integration of Lumira into the BI Platform
Architectural & Deployment Differences between the Lumira Server integration into BI4 (aka LIMA) and the new planned native BI4 Lumira integration.
This article explains the planned changes between the BI add-on for SAP Lumira 1.x (LIMA) integration, which is in ramp-up today, and the new planned BI4 integration, SAP Lumira Server for BI platform, which fits natively into the BI4 platform.
You can review the “LIMA” details here if you’re not familiar with the current Lumira integration offering with the BI platform: http://scn.sap.com/community/bi-platform/blog/2014/06/27/lumira-integration-for-bi4
This is also a more detailed extension to the FAQ published by Adrian Westmoreland here:
“LIMA” is an integration of BI4 with Lumira Server, which runs natively on HANA.
With this integration, HANA is a hard requirement. Active web content actually runs in XSJS in HANA, as shown below.
In LIMA, HANA serves both as the repository for all data (including data fetched via a universe) and as the platform running the application layer.
How does this change with the new planned native BI4 integration that will replace the LIMA architecture?
Point #1 – Lumira Server is no longer required. Everything runs natively on the BI platform, meaning HANA is not necessary, but can be used optionally as a data source and calculation engine. Run Simple!
With the exception of HANA ‘on-line’ queries, all other data is pulled directly into the in-memory engine running on the BI platform. HANA remains available as an optional calculation engine if you have it in your environment.
For ‘on-line’ queries, data in HANA will remain in HANA, and the queries that drive the visualizations will run against HANA directly. Only the required calculated results come back to the BI layer to render the visualization. This means that if you have 10 million rows of sales data and you visualize the sum of your sales, only that final number is returned to the BI stack. The visualization can then take advantage of HANA’s ability to handle big data fast.
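As a rough illustration of the push-down idea (this is not SAP code; the table, data, and SQLite stand-in are entirely hypothetical), contrast pulling raw rows to the BI tier with letting the database compute the aggregate and return a single value:

```python
import sqlite3

# Hypothetical stand-in for a large HANA table. In 'on-line' mode only
# the calculated result crosses the wire to the BI tier.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("APJ", 80.0), ("AMER", 200.0)])

# Pull approach: every row is transferred before anything is computed.
all_rows = conn.execute("SELECT * FROM sales").fetchall()

# Push-down approach: the database computes the aggregate and returns
# one value, which is all the visualization needs in order to render.
(total,) = conn.execute("SELECT SUM(amount) FROM sales").fetchone()

print(len(all_rows))  # rows transferred when pulling raw data
print(total)          # single aggregated value returned by push-down
```

The same contrast scales up: with 10 million rows, the pull approach moves 10 million rows, while the push-down query still returns one number.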
Of course, you can still download data from HANA using the desktop tool and then store it in the BI 4.1 platform’s in-memory engine. This is an option if you have memory usage restrictions, or if you want to avoid having users run queries directly against a HANA system that may be running mission-critical applications.
Data will not be put INTO HANA when publishing new datasets, at least not initially; this may become an option eventually. Instead, a dataset based on a universe, freehand SQL, Excel, or data downloaded from HANA will be stored in the platform’s in-memory engine*.
*Note that, unlike with a regular database, the published artifact is first stored as a “.lums” file on the FRS and loaded into the in-memory engine only when the document is requested by a user.
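A minimal sketch of this load-on-demand behaviour (the class and method names are illustrative assumptions, not the actual BI platform API): the published artifact lives on disk and is only materialized in the in-memory engine the first time a user requests the document.

```python
import json
import pathlib
import tempfile

class InMemoryEngine:
    """Toy stand-in for the BI platform in-memory engine (illustrative only)."""
    def __init__(self):
        self._cache = {}  # document id -> loaded dataset

    def get_document(self, doc_id, frs_path):
        # Load the .lums artifact from the file repository on first access only;
        # subsequent requests are served from memory.
        if doc_id not in self._cache:
            self._cache[doc_id] = json.loads(pathlib.Path(frs_path).read_text())
        return self._cache[doc_id]

# Simulate a published artifact sitting on the FRS as a plain file.
frs_dir = pathlib.Path(tempfile.mkdtemp())
artifact = frs_dir / "sales.lums"
artifact.write_text(json.dumps({"rows": [1, 2, 3]}))

engine = InMemoryEngine()
print(len(engine._cache))                     # nothing resident yet
doc = engine.get_document("sales", artifact)  # first request triggers the load
print(len(engine._cache))                     # document is now in memory
```

The practical consequence for sizing is the one the note makes: memory is consumed per document actually requested, not per document published.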
Major changes to the BI Lumira document & security model.
With “LIMA”, there is a separation of Story & Dataset: they are stored and secured separately, as shown in the images below.
The new planned SAP Lumira, server for BI platform will manage a single document, a “.lums” file. These documents will be managed and stored just like all other BI content in the BI Folders.
The Datasets section will be removed from the CMC, and instead of the “Lumira Story” object available with “LIMA” today, there will simply be a single Lumira object.
Migrating Content from “LIMA” to new planned BI4 Integration:
The planned SAP Lumira, server for BI platform integration will deal only with .lums files. If you have created content in the current BI4 LIMA integration, you will need to save the .lums files to the BI platform again.
Note that with LIMA, you will already have .lums files present on the file repository server, even if they’re not present on the desktop client.
“LIMA” must be uninstalled for the planned new SAP Lumira, server for BI platform integration to work. You can optionally uninstall Lumira Server from HANA, as it will no longer be necessary for the planned new integration.
If you have a live system, make sure you delete the “LIMA” objects first before uninstalling.
Operations at the .lums file level, not the dataset level:
With the “LIMA” architecture, a dataset would be scheduled to refresh. In the planned new model, scheduling runs at the Lumira object level (the .lums file). This is the same behaviour as a Webi or Crystal Reports document.
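As a toy illustration of the difference (names and structure are hypothetical, not the actual scheduler API), document-level scheduling means one scheduled job refreshes every dataset bundled inside the .lums file, rather than each dataset being scheduled on its own:

```python
class LumiraDocument:
    """Toy model: a .lums document bundling its story and datasets (illustrative)."""
    def __init__(self, name, datasets):
        self.name = name
        self.datasets = datasets
        self.refreshed = []

    def refresh(self):
        # One scheduled run refreshes all embedded datasets together,
        # the way a scheduled Webi or Crystal Reports document behaves.
        for ds in self.datasets:
            self.refreshed.append(ds)
        return self.refreshed

doc = LumiraDocument("sales.lums", ["q1_sales", "q2_sales"])
print(doc.refresh())  # the whole document refreshes in one scheduled run
```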
How the planned Lumira BI4 backend works:
When the end user saves a Lumira document using the “Save As” workflow, the file will be physically stored on the file repository server, just like the other BI content artifacts.
When an end user views the published visualization associated with the file, the BI in-memory backend loads the file on demand, so you must have sufficient memory available. Because data is loaded on demand into an in-memory engine on the BI platform, you may want to add a separate box to the cluster to handle the added load, rather than contending for memory with the other BI services. This will depend on your current BI system architecture.
Sizing recommendations will follow when the planned integration is released.
The first iteration of this new architecture is planned for a Q1 release, although release timelines are always subject to change.