
Planned Native Integration of Lumira into BI Platform Details

Architectural and deployment differences between the Lumira Server integration with BI4 (aka "LIMA") and the new planned native BI4 Lumira integration.

This article explains the planned changes between the BI add-on for SAP Lumira 1.x ("LIMA") integration, which is in ramp-up today, and the new planned BI4 integration, SAP Lumira Server for BI platform, which fits natively into the BI4 platform.

You can review the “LIMA” details here if you’re not familiar with the current Lumira integration offering with the BI platform: 

This is also a more detailed extension to the FAQ published by Adrian Westmoreland here:

“LIMA” is an integration of BI4 with Lumira Server, which runs natively on HANA.

With this integration, HANA is a hard requirement: active web content actually runs in the HANA XS engine, as shown below.

In LIMA, HANA is used as a data repository for all data and a platform for running the application layer, including data fetched via a universe.


How this changes with the new planned native BI4 integration that will replace the LIMA architecture.

Point #1 – Lumira Server is no longer required.  Everything runs natively on the BI platform, meaning HANA is not necessary, but can be used optionally as a data source and calculation engine.  Run Simple!

With the exception of HANA 'on-line' queries, all other data is pulled directly into the in-memory engine running on the BI platform. HANA remains an optional calculation engine if you already have it in your environment.

For ‘on-line’ queries, data in HANA will remain in HANA, and the queries that drive the visualizations will run against HANA directly. Only the required calculated results come back to the BI layer to render the visualization. This means that if you have 10 million rows of sales data and you visualize the sum of your sales, only that final number is returned to the BI stack. The visualization can then take advantage of HANA’s ability to handle big data fast.
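Conceptually, an 'on-line' query pushes the calculation into the database engine and returns only the aggregate. A minimal sketch of that pattern, using Python's sqlite3 purely as a stand-in for HANA (the table, columns, and values are illustrative assumptions, not actual Lumira internals):

```python
# Pushdown-aggregation sketch: sqlite3 stands in for HANA here.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 100.0), ("APJ", 250.0), ("AMER", 175.0)],
)

# The SUM runs inside the database engine; only the single calculated
# result crosses back to the "BI layer" to render the visualization.
(total,) = conn.execute("SELECT SUM(amount) FROM sales").fetchone()
print(total)  # 525.0
```

However many rows the table holds, only one value travels back, which is why the approach scales with HANA's data volumes.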

Of course, you can still download data from HANA using the desktop tool and then store that data in the BI 4.1 platform's in-memory engine. This is an option if you want to avoid having users run queries directly against a HANA system that may be running mission-critical applications, or if you have memory usage restrictions.

Data will not be put INTO HANA when publishing new datasets, at least not initially; this may become an option eventually. Instead, a dataset based on a universe, freehand SQL, Excel, or data downloaded from HANA will be stored in the platform's in-memory engine*.

*Note that unlike a regular database, the published artifact will first be stored in a “.lums” file on the FRS, and loaded into the in memory engine only when the document is requested by a user.

Major Changes to BI Lumira document & security model.

With “LIMA”, there is a separation of story and dataset: they are stored and secured separately, as shown in the images below.

The new planned SAP Lumira, server for BI platform will manage a single document, a “.lums” file.  These documents will be managed and stored just like all other BI content in the BI Folders.

The Datasets section will be removed from the CMC, and there will simply be a single Lumira object, replacing the “Lumira Story” object that is available with “LIMA” today.

Migrating Content from “LIMA” to new planned BI4 Integration:

The planned SAP Lumira, server for BI platform integration will only deal with .lums files.  If you have created some content in the current BI4 LIMA integration, you will need to save the .lums files to the BI platform again.

Note that with LIMA, you will already have .lums files present on the file repository server, even if they’re not present on the desktop client.

No coexistence.

“LIMA” must be uninstalled for the planned new SAP Lumira, server for BI platform integration to work. You can optionally uninstall Lumira Server from HANA, as it will no longer be necessary for the planned new integration.

If you have a live system, make sure you delete the “LIMA” objects first before uninstalling.

Operations at .lums file level, not dataset:

With the “LIMA” architecture, a dataset would be scheduled to run. In the planned new model, scheduling will run at the Lumira object level (the .lums file), the same behaviour as a Webi or Crystal Reports document.

How the planned Lumira BI4 backend works:

When the end user saves a Lumira document using the “Save As” workflow, the file will be physically stored on the file repository server, just like the other BI content artifacts.

When an end user tries to view the published visualization associated with the file, the BI in-memory backend will load the file on demand, so you must have sufficient memory in place to load it. Because the data is loaded on demand into an in-memory engine on the BI platform, you may want to plan a separate box in the cluster to handle the added load, rather than competing for memory with the other BI services. This will depend on your current BI system architecture.

Sizing recommendations will follow when the planned integration is released.  
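Until official sizing guidance arrives, a back-of-the-envelope estimate can help when planning a dedicated node. The sketch below is purely an assumption-based illustration (the 8-bytes-per-cell figure is a placeholder, not an official SAP sizing number):

```python
# Rough capacity sketch: bytes-per-cell is an illustrative assumption only.
def estimate_footprint_mb(rows, cols, bytes_per_cell=8):
    """Rough in-memory footprint of a rows x cols dataset, in MiB."""
    return rows * cols * bytes_per_cell / (1024 * 1024)

# e.g. 10 million rows x 20 columns at ~8 bytes per cell:
print(round(estimate_footprint_mb(10_000_000, 20)))  # 1526
```

Real documents also carry story metadata and engine overhead, so treat any such figure as a lower bound and validate against the vendor's sizing guide once published.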


The first iteration of this new architecture is planned for a Q1 release, although release timelines are always subject to change.

  • This addition will be welcomed with open arms, as SMEs not expecting to buy HANA in the near future will be able to start using Lumira rather than moving away to other competing technologies.

    This would definitely help in establishing a feasible business case to buy SAP Lumira for existing agnostic SAP BO customers.

    SAP Listens. SAP Delivers. Simples.

  • Great great great news.

    We're on the starting block to promote this feature to our clients, who are waiting eagerly!

    Can't wait, it's VERY good news.

  • This is great, thanks. I have successfully published a Lumira 1.23 visualisation that is connected to a universe into the BI 4.1 SP5 platform and retrieved it again using Lumira desktop.

    All I need now is to be able to open it using OpenDocument so we can surface it through our software solutions as we currently do with Webi and Crystal. Hopefully this will be in BI 4.1 SP6?



    • Hi,

      I can confirm that the team who handle "Lumira Server for BIPlatform" do have OpenDocument (OpenDoc) support planned for H2 2015.  It is one of their requirement themes.

      However, as for SP6 timelines, I can't say for sure, but my opinion is that BI4.1 SP6 will come before this feature is implemented into Lumira SP2x.

      Perhaps it might be supported from a particular Patch6.x, else SP7, or BI 4.2 later in the year.



  • Christian,

    Thanks for validating the publish with 4.1 SP5 P3. We are in the same boat with enterprise BI at 4.1 SP5.3 and Lumira Desktop v 1.23, but running into issues while exporting the stories to SAP BI.

    Anything special that needs to be configured to achieve the same?

    • 4.1 will be the minimum requirement. Keep in mind that an update from 4.0 to 4.1 is not the same as XIR3 to 4.x. There is no migration required; it is more like a big SP install.

  • Will the HANA ‘on-line’ queries option be delivered when the product is GA? Rumor has it that this will be added in 2015 Q3 after the initial release. I really hope that is not true.

    • Hi,

      This kind of detail is typically within the realms of "Non-Disclosure Agreements" during the restricted customer validation phase (which is currently happening now).

      But yes, I can confirm that for this initial release, Lumira documents created using "HANA Online" connectivity won't be refreshable on Lumira Server for BIPlatform.

      The current implementation option (i.e. workaround) is to add a universe on top of your view to acquire the data (using the new Query Panel extension) and have it refresh on BIP that way.

      This is accounted for in the release restrictions documentation, and will be openly communicated nearer general-availability release date.

      Going back to the initial requirement, this is very much understood as being top of the scope list for subsequent releases, and yes something like early Q3 sounds quite likely.

      Kind regards,


      • Thanks for the information. This will help with the planning for several inflight projects.

        Concerning moving the data into the plugin's in-memory engine, have there been any benchmarks to determine the number of cells (rows x columns) that can be managed by the plugin? Using your workaround sounds like a great compromise unless there are scalability limitations. For example, Explorer had issues starting with about ~30,000,000 to ~50,000,000 cells.