Cameron Swift

Ariba Analytics using SAP Analytics Cloud, Data Intelligence Cloud and HANA DocStore – Part 1

Introduction

SAP Analytics Cloud makes it easy for businesses to understand their data through its stories, dashboards and analytical applications. However, it's not always clear how we can leverage SAC to create these from data held in other applications.

For this worked example, we're going to make use of SAP Data Intelligence Cloud to retrieve data from SAP Ariba through its APIs, before storing it in SAP HANA Cloud's JSON Document Store.

In further blog posts, we will build a model on top of this stored data and show how this can be consumed in an SAP Analytics Cloud Story. The focus of this series of blog posts is to show a technical approach to this need, not to provide a turnkey, ready-to-run SAC story. After following this series, you should have an understanding of how you can prepare your own stories using this approach.

 

Ariba Analytics in SAP Analytics Cloud

For this example, we're going to create a simple story that lets you know how much spend has been approved within Ariba Requisitions created in the last thirty days.

A simple SAC Story tracking Approved Requisitions

A Requisition is the approvable document created when a request is made to purchase goods or services. Our approach will let us view only the Approved Requisitions, excluding those still awaiting approval.

For those feeling more adventurous, this setup can be repeated with different document types, and those combined to create more in-depth SAP Analytics Cloud Stories. This is outside the scope of our blog series.

 

Solution Overview

Our finished solution will need SAP HANA runtime artifacts such as Document Store Collections, SQL Views and Calculation Views. We will define these as design-time artifacts in Business Application Studio, then deploy them to an HDI Container within our SAP HANA Cloud instance.

Deploying our Design-time artifacts into SAP HANA Cloud

 

Using a scheduled SAP Data Intelligence Cloud Pipeline, we'll query SAP Ariba's APIs and place the data within our HANA Cloud Document Store Collection.

Scheduled replication of Ariba Data

Our SQL View lets us create a view on top of the data within our JSON Documents. Creating a Calculation View on top of one or many SQL Views will let us expose the data to SAP Analytics Cloud.

Viewing the data in SAP Analytics Cloud

SAP Analytics Cloud can use HANA Cloud Calculation Views as the source for Live Data Models. With Live Data Models, data is stored in HANA Cloud and isn't copied to SAP Analytics Cloud.

This gives us two main benefits: we avoid unnecessarily duplicating the data, and we ensure changes in the source data are available immediately (provided no structural changes are made).

Finally, we use the Live Data Model to create a Story within SAP Analytics Cloud. Once we've got everything set up, we can use this story to check our data at any time, with the Data Intelligence Pipeline refreshing the data in the background on a predefined schedule.

 

Creating an Ariba Application

In order to access the APIs provided by Ariba, we'll need what's known as an Ariba Application. We create this through the SAP Ariba Developer Portal.

For our use case, we will be requesting access to the Operational Reporting for Procurement API.

From the Ariba Developer Portal, click on Create Application

Click on the Plus Symbol

Enter an Application Name and Description then click on Submit

 

Once the Application has been created, we'll need to request API access for the Application.

Click on Actions, then Request API Access

Select the Operational Reporting for Procurement API, then select your Realm and click on Submit

Once the API Access Request has been approved by Ariba, your admin will be able to generate the OAuth Secret for our application.

Your Ariba admin can click on Actions, then Generate OAuth Secret

This will generate our OAuth Secret, which is required to use the API. The secret will only be displayed once, so the admin should (securely) store this and provide it to you for use in the application.

If the OAuth Secret is lost, the admin can regenerate it, at which point the old secret will stop working and you will have to use the newly generated secret.

For a more comprehensive look at Ariba Applications, we can refer to the Ariba APIs Datasheet and this blog post by Antonio Maradiaga.
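To see why the OAuth Secret matters, here is a rough JavaScript sketch of the token request the client ID and secret enable. The token endpoint URL and grant type below are my assumptions, not values from this post — verify them against the Ariba APIs Datasheet for your region:

```javascript
// Hypothetical sketch of the OAuth token exchange an Ariba Application enables.
// The endpoint URL and grant type are assumptions to verify against the
// Ariba APIs documentation.
function buildTokenRequest(oauthClientId, oauthSecret) {
  // The client ID and secret are combined and Base64-encoded for HTTP Basic auth
  const basic = Buffer.from(oauthClientId + ":" + oauthSecret).toString("base64");
  return {
    method: "POST",
    url: "https://api.ariba.com/v2/oauth/token", // assumed token endpoint
    headers: {
      "Authorization": "Basic " + basic,
      "Content-Type": "application/x-www-form-urlencoded"
    },
    body: "grant_type=openapi_2lo" // assumed two-legged OAuth grant
  };
}

// No network call is made here; we only inspect the request we would send
const req = buildTokenRequest("my-client-id", "my-oauth-secret");
console.log(req.headers["Authorization"]);
```

In practice, Data Intelligence's OpenAPI Connection handles this exchange for us once the credentials are configured, so this is purely for understanding what the secret is used for.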

 

Ariba API

When we call the Ariba API, we have a number of things to consider. For our example, we're using the Synchronous API to retrieve data, but there's also a set of Asynchronous APIs you should consider when retrieving bulk data.

Documentation is available online

In addition, when retrieving data sets, you have to specify an Ariba View that you wish to retrieve. These are similar to reporting facts in the Ariba solution, such as Requisition or Invoice. Views specify which fields are returned, and may also specify filters you should provide when calling them.

To simplify our example, we're going to use a System View, which is predefined in Ariba. You can also work with Custom Views using the View Management API to better match your requirements, but this falls outside the scope of this blog series.

To explore these at your own pace, you can visit developer.ariba.com.
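To make the shape of a View request concrete, here's a hedged sketch of how the realm and date filters might be assembled into a query for the Requisition System View. The path pattern and filter encoding are illustrative assumptions — confirm the exact shape against the Operational Reporting API reference on developer.ariba.com:

```javascript
// Illustrative only: the API path and filter parameter encoding are assumptions
// to verify against the Operational Reporting for Procurement API reference.
function buildViewQuery(realm, createdDateFrom, createdDateTo) {
  const params = new URLSearchParams({
    realm: realm, // your Ariba realm name
    filters: JSON.stringify({ createdDateFrom: createdDateFrom, createdDateTo: createdDateTo })
  });
  // Requisition_SAP_createdRange is a System View predefined in Ariba;
  // a Custom View would use its own view name here
  return "/api/analytics-reporting-view/v1/prod/views/Requisition_SAP_createdRange?" + params.toString();
}

console.log(buildViewQuery("MyRealm-T", "2023-01-01T00:00:00Z", "2023-01-31T00:00:00Z"));
```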

 

Enabling Document Store in HANA Cloud

The Document Store is SAP HANA Cloud's solution for storing JSON Documents. While the Column and Row Stores use Tables to store their data, the Document Store stores data inside Collections.

Before we activate the Document Store in HANA Cloud, a word about resources: like the Script Server, the Document Store is an additional feature that can be enabled, but we should review our HANA Cloud instance's current resourcing before enabling it. For more information, we can consult the help documentation.

When we’re ready to enable, we’ll need to navigate to SAP HANA Cloud Central.

 

From the BTP Control centre, we select our Global Account and click Open in Cockpit

From here we see our Subaccounts – we choose the Subaccount where our HANA instance resides

From our Subaccount, we click on Spaces

From the Spaces page, we select the Space that contains our HANA instance

Click on SAP HANA Cloud

Click on Actions, then Open In SAP HANA Cloud Central

 

From HANA Cloud Central, we can then activate the Document Store.

 

Click on the dots, then choose Manage Configurations

Click on Edit

Go to Advanced Settings, select Document Store then click on Save

 

Once our HANA Cloud instance has restarted, we'll be able to use the Document Store.

 

Creating a DocStore Collection in Business Application Studio

While we can create a Collection directly using SQL through Database Explorer, we want to make sure we also have a design-time artifact for our DocStore Collection.

To do this, we'll use the Business Application Studio. For those unfamiliar with Business Application Studio, you can follow this Learning Journey Lesson to set up a Workspace – we'll assume this is already in place.

It's time to set up our SAP HANA Database Project, and create the HDI Container where our runtime objects will reside.

Creating our Project

 

Select SAP HANA Database Project

 

Next, we'll need to provide some information for our project.

Give our Project a name and click Next

Leave the Module name as is and click Next

Double-check the Database Version and Binding settings, then click Next.

Setting our Database Information

Next, we have to bind the project to a HANA Cloud instance within Cloud Foundry. The Endpoint should be automatically filled, but we have to provide our Email and Password before we can perform the binding.

Binding our Cloud Foundry Account

For this example, we're going to create a new HDI Container.

If our Cloud Foundry space has more than one HANA Cloud instance, we may want to disable the default selection and manually choose the HANA Cloud instance where our container will reside.

 

Creating our HDI Container

 

Now that we have our HDI Container and SAP HANA Project set up, it's time to create our design-time objects. First, we log in to Cloud Foundry.

 

Click on View, then Find Command or press Ctrl+Shift+P

Search and select CF: Login to Cloud Foundry, then follow the instructions before selecting the Space with our HANA Cloud instance

 

Next, we'll create our DocStore Collection.

Use Find Command again to find Create SAP HANA Database Artifact, then click on it

Ensure that the artifact type is Document Store Collection, name is aribaRequisition and that the artifact will be created within the src folder of a HANA Project, then click on Create

Finally, we want to find our SAP HANA Project on the Explorer on the left, and click on the rocket icon to Deploy

After the deployment succeeds, we have both our design-time .hdbcollection artifact and the runtime DocStore Collection, which has been created in our HDI Container.
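For reference, the design-time artifact itself is minimal. Assuming the standard .hdbcollection syntax from the SAP HANA Document Store guide, aribaRequisition.hdbcollection should contain a single statement along these lines:

```
COLLECTION TABLE "aribaRequisition"
```

On deployment, HDI turns this into the runtime Collection of the same name inside the container's schema.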

 

Creating our Connections in Data Intelligence

So far we've gained access to the Ariba APIs and enabled the Document Store in our HANA Cloud instance. Next, we'll be setting up two Connections in Data Intelligence Cloud.

The first Connection will allow our Data Intelligence Pipeline to query the Ariba APIs to retrieve our data, and the second will allow us to store this data in the Document Store within our HDI Container.

First, we use the DI Connection Manager to create a new Connection, selecting OPENAPI as the Connection Type.

Create a new Connection in the Connection Manager

Our OpenAPI Connection will be used to send the request to Ariba. We're going to set the connection up as below, using the credentials we received when we created our Ariba Application.

Using our Ariba Application OAuth Credentials to create the OpenAPI Connection

 

Next, we're going to create a HANA Connection that will let us work with the HDI Container we created earlier. To get the credentials, we have to go to the BTP Cockpit.

 

Select our HDI Container from the SAP BTP Cockpit

Click on View Credentials

Click on Form View

We'll want to keep this window open as we create our HANA DB Connection, as it has the details we need. Within Data Intelligence Cloud, create a new connection of type HANA_DB and fill it out as below using the credentials.

Enter the credentials to create our HDI Connection

While we have the credentials open, take note of the Schema name. We'll need this to set up our pipeline.

 

Pipeline Overview

The source code for our pipeline can be found here. Copy the contents of this JSON to a new Graph within the Data Intelligence Modeler. If you're not familiar with how to do this, you can refer to the README.

Our extraction pipeline

When the pipeline starts, a GET request is made to the Ariba API. If there are more records to be fetched, the pipeline will make further requests until it has all available data. To avoid breaching Ariba's rate limits, there is a delay of 20 seconds between each call.

Fetching data from Ariba
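The paging decision can be sketched as a small pure function: keep fetching while the response carries a page token, and wait between calls. The PageToken field and query parameter names here are my assumptions about the Ariba response shape, not taken from the pipeline itself:

```javascript
// Sketch of the fetch loop's stopping condition. Field and parameter names
// (PageToken/pageToken) are assumptions to verify against the Ariba API docs.
function nextRequest(response, baseQuery) {
  if (!response.PageToken) {
    return null; // no token in the response - all records fetched, stop
  }
  return {
    query: baseQuery + "&pageToken=" + response.PageToken,
    delayMs: 20000 // pause between calls to avoid breaching rate limits
  };
}

// Mocked response objects stand in for real API calls:
const more = nextRequest({ Records: [], PageToken: "abc123" }, "/views/Req?realm=r");
const done = nextRequest({ Records: [] }, "/views/Req?realm=r");
```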

Once all of the records have been fetched, the Document Store Collection is truncated to remove outdated results, and the most up-to-date data is inserted into our collection.

Updating records

  1. A copy of the data is stored as a flat file in the DI Data Lake for reference
  2. The HANA Document Store Collection is truncated, and Documents are added to the Collection one at a time
  3. Once all records have been added to the Collection, the Graph will be terminated after a configurable buffer time (1 minute by default)
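The truncate-and-insert step (2) can be sketched as SQL statement composition. The DELETE and INSERT shapes follow the Document Store's SQL syntax as I understand it from the HANA documentation; the schema and collection names are placeholders:

```javascript
// Sketch of the statements a DocStoreComposer-style step would emit: one
// DELETE to clear outdated results, then one INSERT per JSON Document.
function composeStatements(schema, collection, records) {
  const target = '"' + schema + '"."' + collection + '"';
  const stmts = ["DELETE FROM " + target]; // clears the whole Collection
  for (const rec of records) {
    // double any single quotes so the JSON survives SQL string literal quoting
    const doc = JSON.stringify(rec).replace(/'/g, "''");
    stmts.push("INSERT INTO " + target + " VALUES('" + doc + "')");
  }
  return stmts;
}

const stmts = composeStatements("MY_SCHEMA", "aribaRequisition",
  [{ UniqueName: "PR123", TotalCost: { Amount: 42 } }]);
```

The actual pipeline inserts Documents one at a time through the SAP HANA Client operator rather than batching strings like this, but the SQL shapes are the same idea.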

 

Configuring our Pipeline

In order to run this pipeline, you will need to make some changes:

 

In the Format API Request Javascript Operator, you should set your own values for openapi.header_params.apiKey and openapi.query_params.realm.

You can edit this code from within the Script View of the Format API Request Operator
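As a hedged sketch of what that script does — the attribute names are the ones referenced above, while the msg.Attributes shape follows the general Data Intelligence JavaScript operator convention and may differ from the actual pipeline code:

```javascript
// Illustrative version of the Format API Request step: the apiKey travels as
// a header parameter and the realm as a query parameter on the OpenAPI Client.
function formatApiRequest(msg, apiKey, realm) {
  msg.Attributes = msg.Attributes || {};
  msg.Attributes["openapi.header_params.apiKey"] = apiKey; // your Application's key
  msg.Attributes["openapi.query_params.realm"] = realm;    // your Ariba realm name
  return msg;
}

const msg = formatApiRequest({}, "my-api-key", "MyRealm-T");
```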

 

If your Connection names are different to ARIBA_PROCUREMENT and ARIBA_HDI, then you will want to select those under Connection for the OpenAPI Client and SAP HANA Client respectively.

Changing the Connection for the OpenAPI Client

Changing the Connection for the HANA Client

 

Check the Path values for the operators "Write Payload Flat File" and "Write Error Log". These are where the pipeline will write the Flat File and API Error Logs respectively. If you'd like them saved elsewhere, edit the paths here.

Setting the log paths

Finally, we'll want to set the Document Collection Schema name in the DocStoreComposer Operator. This is the Schema we noted earlier while setting up the Connections.

View the Script for our DocStoreComposer Operator

Add the Schema to the DocStoreComposer Operator

 

Testing our Pipeline

Now we're ready to test our pipeline. Click on Save, then Run.

Testing our Pipeline

Once our pipeline has completed successfully, we'll be able to see that our JSON Documents are stored within our Collection by checking in the Database Explorer. We can access this easily through Business Application Studio by clicking the icon next to our SAP HANA Project.

Getting to the Database Explorer from Business Application Studio

We can see that our pipeline has been successful, and that 206 JSON Documents have been stored in our Collection.

Our Collection contains 206 Documents

 

Wrap-Up

In this blog post, we've walked through how we can use SAP Data Intelligence Cloud to extract data from SAP Ariba, before storing it in a collection in SAP HANA Cloud's Document Store.

In the next blog post in this series, we will discuss how we can create SQL and Calculation Views on top of our Document Store Collection.

In the third and final blog post in this series, we will use our Calculation View as a Live Data Model, which we then visualize in an SAP Analytics Cloud Story.

 

Other Resources

SAP Ariba | How to Create Applications and Consume the SAP Ariba APIs by Antonio Maradiaga (~30 minutes viewing time)

SAP Ariba | Developer Homepage

SAP HANA Document Store | SAP HANA Document Store Guide

SAP HANA Document Store | Spotlight: SAP HANA Cloud JSON Document Store by Laura Nevin (2 minute read)

SAP Data Intelligence Cloud | Introduction to the SAP Data Intelligence Cloud Modeler

SAP Data Intelligence Cloud | Modeling Guide for SAP Data Intelligence

JSON | Introducing JSON (short primer, technically-focused)

 

Special Thanks

This blog series has had a lot of input from my colleagues – any errors are mine, not theirs. In particular, thanks go to the Cross Product Management – SAP HANA Database & Analytics team, Antonio Maradiaga, Bengt Mertens, Andrei Tipoe, Melanie de Wit and Shabana Samsudheen.

Note: While I am an employee of SAP, any views/thoughts are my own, and do not necessarily reflect those of my employer.

      6 Comments
      Michał Majer

      Congrats Cameron, perfect description of each step.
      Why did you decide to store the data as JSON instead of simply in a table?

      Antonio Maradiaga

      Michał Majer, I cannot speak for Cameron, but in my case I prefer storing the data as raw as possible, for two reasons:

      1. When storing it as JSON, you are storing all the data that's returned in the Ariba API response. There is no need to transform it so that it can be stored as rows in a table. If storing it in a table, you will be constrained by the schema, and given the nature of a JSON response, some elements (documents) can contain different data structures within them. It is hard to translate from that JSON response to a table.
      2. Reporting: let's assume you are using the data for reporting. You know how it is: you start reporting on a few fields, and then you want more data in the report. If you only store a subset of the data, e.g. the fields you originally wanted to report on, then a full load of data will need to be run to get that additional data into a table. If it is in a JSON collection, that data will already be there, and all you will need to do is add it to the select statement.
      Michał Majer

      Thanks for the response. I agree with your point of view; it makes a lot of sense for this case.

      Cameron Swift
      Blog Post Author

      Hi Michał Majer, thanks for reading, and I appreciate your question.

      My considerations were much along the same lines as Antonio has already mentioned. In addition, using HANA we can actually join the schema-flexible data in the Document Store with strict-schema data like that stored in Column Store tables (with some restrictions and caveats).

      This means that we can make the most of the strengths of both documents and tables while storing the raw data where it's most natural.

      Peter Baumann

      Hi Cameron Swift!

      Very cool and detailed description!

      One question about how to control the data read from the Ariba API. You load 206 records now. How would you ensure you get just the delta next time, or how would you make selections?

      Cameron Swift
      Blog Post Author

      Thanks Peter Baumann, glad you found it helpful. I've designed this scenario around a rolling 30-day window to keep it simple enough to understand. Having said that, and with the caveat that I haven't implemented this myself, here are some thoughts:

      Within our Data Intelligence Pipeline

      We're currently calling the view Requisition_SAP_createdRange. We could instead call Requisition_SAP_updatedRange. Then, instead of getting the Requisitions created within a timespan, we'd be retrieving the Requisitions updated within that timespan. This is set within Path Pattern on the OpenAPI Client Operator.

      The filters we send for the created range are createdDateFrom and createdDateTo. We would instead need to send updatedDateFrom and updatedDateTo to get the updated data. This would happen within the Format API Request Javascript Operator.

      We need to implement some logic to keep track of our last run timestamp. The way I would do this is to write the createdDateTo value to a text file in DI_DATA_LAKE. On the next run, we would read the value from our text file and use it as the createdDateFrom value, ensuring we capture all updates since the last run. This should be achievable using the Write File and Read File Operators.

      Our pipeline currently truncates the DocStore Collection before writing to it. To stop this behaviour, we can delete Line 14 in the code of our DocStore Composer Javascript Operator.

      Here's where things get interesting. To the best of my knowledge, we don't have an Upsert in the Document Store at present. To make our own equivalent, consider using Select to retrieve all of the values for UniqueName within our DocStore Collection. Then, if the UniqueName for a Document we retrieved from Ariba matches one we've just pulled from our Collection, we'll want to use Update instead of the current Insert command.
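      That routing idea could be sketched roughly like this — with the same caveat that it's untested, the names are illustrative, and the exact Update statement shape would need checking against the Document Store SQL reference:

```javascript
// Sketch: route each fetched Document to Update or Insert by comparing its
// UniqueName against the names already in the Collection. The INSERT shape
// follows Document Store SQL; Updates are only routed here, not composed,
// since their exact shape depends on which fields you maintain.
function routeDocuments(existingNames, records, target) {
  const inserts = [];
  const updates = [];
  for (const rec of records) {
    if (existingNames.has(rec.UniqueName)) {
      updates.push(rec); // already present - would be written via Update
    } else {
      const doc = JSON.stringify(rec).replace(/'/g, "''");
      inserts.push("INSERT INTO " + target + " VALUES('" + doc + "')");
    }
  }
  return { inserts: inserts, updates: updates };
}

const existing = new Set(["PR100"]); // from a Select of UniqueName values
const routed = routeDocuments(existing,
  [{ UniqueName: "PR100" }, { UniqueName: "PR200" }],
  '"MY_SCHEMA"."aribaRequisition"');
```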

      I hope these thoughts are enough to guide you, and thanks again for reading.