
Using the Version 1 Data Access Extension SDK – SAP Lumira

Update: This blog post was written for the Version 1 Data Access Extension SDK. Along with SAP Lumira 1.25, we’ve released the Version 2 Data Access Extension SDK, which has many new features such as the Extension Manager, server support for extensions, and a better UI. The Version 1 SDK will continue to be supported but will lack these new features. Please take a look at this blog post for a detailed comparison to decide which SDK better fits your needs: SAP Lumira Data Access Extension SDKs – Version 1 vs Version 2


SAP Lumira ships with a data access extension SDK that allows users to connect to a custom data source and bring data into SAP Lumira documents. This blog post is about my experience creating a data access extension using the SDK to integrate a web service with Lumira.

Recap


Data imported into Lumira using any connector passes through three main workflows: Preview, Edit and Refresh. A user enters the connection information required to connect to a particular data source in the Preview mode. Lumira then saves the connection information and shows a preview of the dataset. This saved connection information is used to bring the complete dataset into a document for the first time in the Edit mode, and to refresh it with new data later in the Refresh mode. The Data Access Extension SDK enables a developer to create an extension that can connect to any data source and handle these three modes of operation, as long as the data can be formatted into a CSV table.

Diving in


There are a few things to check before writing code for the access extension. The first check is to ensure that data exported from the data source can be formatted into a CSV table. For example, if you are trying to create an extension to connect to a web service, ensure the JSON response from the web service consistently has an array of objects that can be re-formatted into a CSV table.

At the core of any connector is the ability to convert data from its format at the source into a table that can be used in Lumira. If this conversion can be done, the other Lumira-specific workflows can be added to create an extension. Hence, a good starting point is to create an executable that converts data from the source into a table.
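
To make this concrete, here is a minimal Go sketch of that conversion step, assuming the source returns a JSON array of flat objects; the field names and the hard-coded input are purely illustrative.

```go
package main

import (
	"encoding/csv"
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Hypothetical response body; a real extension would read this from its data source.
	raw := []byte(`[{"country":"DE","revenue":120.5},{"country":"US","revenue":98.2}]`)

	var rows []map[string]interface{}
	if err := json.Unmarshal(raw, &rows); err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}

	w := csv.NewWriter(os.Stdout)
	defer w.Flush()

	// A fixed column order keeps the CSV header stable across refreshes.
	columns := []string{"country", "revenue"}
	w.Write(columns)
	for _, row := range rows {
		record := make([]string, len(columns))
		for i, c := range columns {
			record[i] = fmt.Sprint(row[c])
		}
		w.Write(record)
	}
}
```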


The extension needs to handle three different modes when Lumira requests them. Connection parameters such as the data source, username and password, or queries are captured by the extension executable in the Preview mode. To do this, the executable needs to implement a native GUI to get the required inputs from the user. These parameters are then stored securely within Lumira. Edit mode is used to create a document with those parameters and to fetch and convert the dataset for the first time. Refresh mode can be used later to refresh the contents of the document using the parameters that were saved earlier.
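
As a rough skeleton, the dispatch on the three modes could look like the sketch below. The -mode flag is an assumption made for illustration; check the SDK documentation and samples for the exact arguments Lumira passes to the executable.

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	mode := flag.String("mode", "preview", "preview, edit or refresh")
	flag.Parse()

	switch *mode {
	case "preview":
		// Show a native GUI, collect connection parameters,
		// write them to the DSInfo block and emit a small sample of rows.
	case "edit":
		// Read the saved parameters, fetch the full dataset and emit it.
	case "refresh":
		// Re-use the saved parameters to fetch fresh data for the document.
	default:
		fmt.Fprintf(os.Stderr, "unknown mode %q\n", *mode)
		os.Exit(1)
	}
}
```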

Developer Tips


xyz is an invalid token?


Please ensure all the debug statements that print to the output stream are commented out before testing the extension. Since the extension works by writing to the output stream, debug statements in code that also write to the output stream can interfere and result in errors like xyz is an invalid token. Whenever you see this error message while working with an extension, check whether all the debug statements are commented out.
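
One way to avoid commenting debug statements in and out is to route them somewhere other than stdout, which is the stream Lumira parses. A small sketch in Go:

```go
package main

import (
	"log"
	"os"
)

func main() {
	// Send debug output to stderr so it never mixes with the CSV/DSInfo
	// output that Lumira parses on stdout.
	log.SetOutput(os.Stderr)
	// Alternatively, log to a file next to the executable:
	// f, _ := os.OpenFile("extension.log", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644)
	// log.SetOutput(f)

	log.Println("fetching data...") // safe to leave in

	// Only the output intended for Lumira goes to stdout.
	os.Stdout.WriteString("country,revenue\nDE,120.5\n")
}
```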

Not able to see what parameters are being saved in EDIT mode?


The SDK comes with a nifty utility, DAExtensionsTest.exe, which tests all the modes and generates a log file along with the data output of each mode as a CSV. This tool is really helpful once the extension is finished and we need to quickly check whether all the modes work. But while developing an extension, we need a better view of how the various parameters are being stored inside Lumira. We can use a simple hack in our code to do this: a debug flag that switches the code into a special debug mode. When the debug flag is true, instead of writing the actual data in the data block, we write a table containing all the parameters that Lumira stores. This lets us create a document in which we can look at the parameters that Lumira supplies to the extension in the Edit and Refresh modes, and make quick checks whenever we want to see the connection parameters stored inside Lumira.
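
Here is a minimal sketch of that debug-flag hack; the parameter map is illustrative and stands in for whatever your extension actually reads back from Lumira.

```go
package main

import (
	"encoding/csv"
	"os"
)

const debug = true // flip to false for normal operation

func main() {
	// Parameters as read back from Lumira (illustrative names and values).
	params := map[string]string{
		"serviceURL": "https://example.com/api/v1/sales",
		"username":   "demo",
	}

	w := csv.NewWriter(os.Stdout)
	defer w.Flush()

	if debug {
		// Emit a two-column table of parameter names and values instead of real data.
		w.Write([]string{"parameter", "value"})
		for name, value := range params {
			w.Write([]string{name, value})
		}
		return
	}

	// Normal path: fetch the real dataset and write it here instead.
}
```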

What’s up with my parameter values?

The SDK uses ; as a separator when storing parameter names and their values, in the form name;value;true/false. The boolean flag is usually set to true to write the parameter being supplied to the DSInfo store; a parameter is not stored or used by Lumira if the flag is false. (The value of a parameter could itself be a boolean, though.) The SDK parses each line between beginDSInfo and endDSInfo and stores the name/value pairs. Lumira automatically converts the first semicolon to = and reads the value until the next semicolon, then stores the parameter in the format name=value;. Hence, we write and parse parameters and their values in different formats.
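
To illustrate the two formats, the sketch below writes parameters in the name;value;flag form between the beginDSInfo and endDSInfo markers and parses the name=value; form back. The parameter names are illustrative; verify the exact block markers against the SDK samples for your Lumira version.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// writeDSInfo emits one parameter line; the flag asks Lumira to store the parameter.
func writeDSInfo(name, value string, store bool) {
	fmt.Fprintf(os.Stdout, "%s;%s;%t\n", name, value, store)
}

// parseDSInfo splits a "name=value;" line handed back by Lumira into its parts.
func parseDSInfo(line string) (name, value string) {
	line = strings.TrimSuffix(line, ";")
	parts := strings.SplitN(line, "=", 2)
	if len(parts) == 2 {
		return parts[0], parts[1]
	}
	return parts[0], ""
}

func main() {
	// Writing parameters during Preview mode.
	fmt.Fprintln(os.Stdout, "beginDSInfo")
	writeDSInfo("serviceURL", "https://example.com/api/v1/sales", true)
	writeDSInfo("username", "demo", true)
	fmt.Fprintln(os.Stdout, "endDSInfo")

	// Reading a parameter back in Edit/Refresh mode, where it arrives as name=value;
	name, value := parseDSInfo("username=demo;")
	fmt.Fprintln(os.Stderr, "parsed:", name, value)
}
```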

I have many lines to store in one parameter?

Another common issue you could encounter is storing multiple lines of information as part of a single parameter. Since there is no way to do that with the SDK, we can do a quick hack ourselves by encoding the newline characters while storing the parameters in Lumira and decoding them after reading them back from the store. A bullet-proof way is to implement a pair of encoding and decoding functions that handle LF (U+000A) and CR (U+000D) and encode them into something like %0A and %0D, or any pattern that has little chance of occurring in the value of the parameter.

I also did a small test on all the special characters to see which ones the SDK has conflicts with. Semicolons and double quotes in the value field were breaking things too, and the encode and decode functions are a good place to handle them as well.
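
Here is a sketch of such an encode/decode pair in Go, escaping CR, LF, semicolons and double quotes (plus % itself, so the mapping stays reversible). The escape sequences are just a pattern unlikely to occur in real values.

```go
package main

import (
	"fmt"
	"strings"
)

// The characters the SDK has trouble with, mapped to percent-escapes;
// % itself is also escaped so that decoding can unambiguously reverse the mapping.
var encoder = strings.NewReplacer(
	"%", "%25",
	"\r", "%0D",
	"\n", "%0A",
	";", "%3B",
	`"`, "%22",
)

var decoder = strings.NewReplacer(
	"%0D", "\r",
	"%0A", "\n",
	"%3B", ";",
	"%22", `"`,
	"%25", "%",
)

// encodeParam makes a value safe to store as a single DSInfo parameter.
func encodeParam(value string) string { return encoder.Replace(value) }

// decodeParam restores the original value after reading it back from Lumira.
func decodeParam(value string) string { return decoder.Replace(value) }

func main() {
	original := "line one\nline two; with \"quotes\""
	stored := encodeParam(original)
	fmt.Println(stored)                          // line one%0Aline two%3B with %22quotes%22
	fmt.Println(decodeParam(stored) == original) // true
}
```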

Also, ensure the same parameter name is not used multiple times, as Lumira will only store the last one sent and overwrite any previous ones with the same name.

A good tip to speed up development is to create a Windows batch file early on that copies the executable to the target directory after every build; it noticeably reduces the overall development time.

The extension I’ve created sends a set of HTTP requests to a URL endpoint to retrieve a JSON collection and sends it to Lumira. I’ve used Go, as it has a good set of libraries for handling HTTP requests and JSON, and it compiles to a single executable. It is food for thought if you have a similar use case in mind.
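
For reference, the fetch step can be as small as the sketch below, assuming the endpoint returns a JSON array of objects; the URL and the basic-auth credentials are placeholders.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// Placeholder endpoint and credentials; a real extension would read these
	// from the parameters saved during Preview mode.
	req, err := http.NewRequest("GET", "https://example.com/api/v1/sales", nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	req.SetBasicAuth("demo", "secret")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()

	var rows []map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&rows); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Fprintln(os.Stderr, "fetched", len(rows), "rows")
	// Hand rows to the JSON-to-CSV conversion shown earlier.
}
```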

I hope these tips will help you when you create a data access extension. I’ve uploaded this extension to a GitHub repo at https://github.com/SAP/httpaccess-dae-lumira

      3 Comments
      Former Member

      Hello Vamsi,

      Nice post, thanks for the video and the working extension code. I have a question about Java data extensions. You briefly mentioned Java in your video; it will be a little different because it does not produce an executable. Do you have any documentation on how to deploy and use Java-based extensions? Is it as simple as creating a batch file with the proper Java environment setup and calling an executable JAR? Please share any info you have on this.

      Thanks

      Ravi Ada

      Former Member

      Hi Ravi! I think I converted the JAR into an executable using Excelsior JET (a Java Virtual Machine and native code compiler).

      Martin Korn

      Hi Vamsi,

      Very nice extension. But unfortunately it works neither with https nor with SAML authentication (HCP Cloud applications). Are you still working on this extension (i.e., is there hope you might implement those capabilities)?
      Would be a charm... ^^

      Thanks & best regards,

      Martin