
Category Archives: Uncategorized

Overview

DropDownListBox is a control that allows the user to select a single value from a list of values. This control can be created using the UI Designer or the Key User Tool. SAP application teams and partners use the UI Designer to create the control; key users and administrators use the Key User Tool to create a DropDownListBox.

Steps to configure DropDownListBox in UI Designer

Scope: Application Teams/Partners

  1. Drag and drop the DropDownListBox control from the ToolBox section onto the Designer tab of the selected UI component
  2. The Value property of the control is prepopulated with a DataField (/Root/DropDownListBox)
  3. This data field should be bound to a BO node element of type code
  4. The datafield can be bound to the BO element using the Bind button. Once bound, the datafield is automatically renamed. This datafield is called a codefield. A codefield holds the single value selected by the user from a list of values called a CodeList. A CodeList is a set of values, each with a code and a description.

Pictorial representation of DropDownListBox, CodeField and CodeList

DropDownListBox properties and values

Property Definition
CodePresentationMode This property determines the value to be displayed on the UI. By default the value is set to ValueOnly; other values are CodeOnly and CodeAndValue.
ValueHelpSortOrder This property sorts the values of the codelist to be displayed on the UI. By default the value is set to Standard; other values are Unsorted, AscendingCode and AscendingDescription.

Sorting behavior for each ValueHelpSortOrder value:
1. Standard – sorts the codelist based on the CodePresentationMode value:
CodeOnly – sort by code
ValueOnly – sort by description; if no description is maintained, sort by code
CodeAndValue – sort by code
2. AscendingCode – sorts the codelist by code in ascending order
3. AscendingDescription – sorts the codelist by description in ascending order
4. Unsorted – the order maintained in the codelist is retained

CodeField properties and values

Property Definition
CCTSType This property is set to code, indicating that the field is a CodeField.
CodeList This property indicates the list of values maintained for the CodeField. CodeList has the attributes type (<CodeListType>), typeName and esrNamespace.

CodeList types, properties and values

CodeListType Definition
Static The list of values is populated by the codelist provider (Application/Partner). These values are fetched during the initialization phase of the UI component.
Case 1 – If the list contains fewer than 50 values, clicking the dropdown displays the values on the UI without any additional request. This is called the Static CodeList – Complete scenario.
Case 2 – If the list contains 50 or more values, clicking the dropdown triggers a framework request to fetch the values. This can be observed by capturing the network traffic in Chrome Developer Tools. This is called the Static CodeList – Incomplete scenario.
Dynamic The list of values is populated by the codelist provider (Application/Partner). These values are fetched only when the user clicks the DropDownListBox, not during initialization of the UI component.
Case – Dynamic with context mapping: The list of values of a DropDownListBox depends on the value selected in another DropDownListBox. For instance, consider Country and State DropDownListBoxes; here, the list of values for State depends on the value selected as Country. For configuration details, see the section Steps to configure Dynamic with context mapping.
ListBound The list of values is populated from a datalist. The code and description values are mapped to elements of the datalist.

ListBound CodeList properties – the code and description values are mapped to elements of a datalist via the following properties:

Property Definition
ListBinding Maps a datalist present in the data model. This datalist can be a bound or an unbound list. In the case of a bound datalist, the datalist is bound to a BO node element and the values of the datalist are populated in the backend. In the case of an unbound datalist, the values are populated in the client (for example, via a script operation).
ListCodeField Maps the datalist element containing the code values (this is a relative path starting with ./)
ListValueField Maps the datalist element containing the description (this is a relative path starting with ./)
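
For illustration, a hypothetical configuration (the datalist and element names below are placeholders, not taken from a real component):

ListBinding /Root/ProductList
ListCodeField ./ProductCode
ListValueField ./ProductDescription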

 

Steps to configure Dynamic with context mapping

Dynamic with context mapping – The list of values of a DropDownListBox depends on the value selected in another DropDownListBox. For instance, consider Country and State DropDownListBoxes: the list of values for State depends on the value selected as Country.

  1. Right-click on the CodeField bound to State and select Codelist context mapping
  2. Select the Country field from the pop-up
Note: The Country field must be marked as context relevant in MRDS

Steps to configure DropDownListBox via KeyUserTool

  1. Edit the Master Layout using Adapt
  2. Hover over any existing field and select Add Fields
  3. In the Additional Fields pop-up, click on New Field, enter any name in the Label field and select List as the Type
  4. Maintain the codelist values (code and description)
  5. Save and apply the changes. This creates an extension field of type code with CodeListType static.

Known Issues and Limitations

1. CodeField is present under a Structure

If the control is bound to a datafield under a structure, the CctsType of the structure should be set to none, not code.

Ex: /Root/DataStructure/CurrencyCode

  • CctsType of CurrencyCode should be set to code
  • CctsType of the structure should be set to none unless it is of type amount, quantity, etc. There will be inconsistencies in the data shown on the UI if the CctsType of the structure is not set to "none".

2. DropDownListBox loads code values when CodePresentationMode is set to ValueOnly

If the CodePresentationMode of the DropDownListBox is set to ValueOnly and no description is maintained in the CodeList of the BO element, the corresponding code values are displayed in the dropdown.

How to identify if a description is maintained in the BO element?

  • Log in to the backend of the system
  • Launch the metadata repository using the transaction code MDRS
  • Navigate to the BO node element in the BO
  • Double-click on DataType and navigate to the RuntimeCodeList tab

3. CodeField in Advanced Find Form (ObjectWorkList – OWL) does not load any values

This issue occurs when the BOQueryParameter is missing in the component. Download the XML of the UI component from UI designer and check if BoQueryParameter exists for the CodeField.

For instance, consider a DropDownListBox named Role in the Advanced Find Form of the Accounts OWL. This control is bound to the CodeField /Root/SearchParameters/RoleCode.

<uxc:BoQueryParameter id="7yA4Y$33Ja6OnOsSjTz_im" name="RoleCode-~content1" bind="/Root/SearchParameters/RoleCode" joinPath="-.CustomerRole-~RoleCode-~content" joinOCMPath="-.Role-~RoleCode"/>

4. How to identify control-specific details on the UI without the help of UI Designer?

Append the URL parameter debugMode=true (for example, &debugMode=true if other parameters are already present) and reload the UI.

CTRL + click on the DropDownListBox control to launch the Client Inspector. You can check the properties of the control under the Control Tree tab.

 

 

Best Regards,

Malini Krishnamurthy

 

 

 

 

SAP Cloud Platform, API Management offers many out-of-the-box API security best practices which can be customized based on your enterprise requirements. These API security best practices include security policies for authentication and authorization, traffic management and many more.

Data masking is the process of hiding original data with random characters or data and is an essential component of a comprehensive data security plan. Data masking reduces the exposure of sensitive data within an organization. Gartner, in their paper, describes data masking concepts to prevent data loss. Data masking is also described in the GDPR (General Data Protection Regulation), which takes effect in May 2018, as a way to support pseudonymization.

In this blog we cover data masking for OData (an OASIS standard that defines a set of best practices for building and consuming RESTful APIs) APIs. Data masking can be easily achieved in SAP API Management using an XSL Transform policy for XML responses and a JavaScript policy for JSON responses. We chose OData for this blog because it supports both XML and JSON formats, and the concept can be applied to any REST API.

This blog is a continuation of the API Security best practices blog series and in the previous blog rate limiting for OData APIs was covered.

Prerequisites

 

Launch API Portal

 

  • Click on the link Access API Portal to open API Portal.

 

Data masking for OData calls

In this section we describe the usage of the XSL Transform policy to mask properties in the XML format and the JavaScript policy to mask properties in the JSON format of OData APIs. As an example, we use the Business Partner collection of an OData service from SAP Gateway and mask the following properties in the API response received from the SAP Gateway system:

  • EmailAddress
  • PhoneNumber
  • FaxNumber
Refer to the Rate limit API calls blog to create an API proxy to an OData API from SAP Gateway and apply an API rate limit using the Quota policy. In this blog we extend the same API proxy to add support for data masking for OData APIs.

  • Navigate to Define from the hamburger icon, then select the APIs tab. Select the API proxy to which API rate limiting was applied.

 

  • Click on the Policies button of  the selected API Proxy.

 

  • Click on the Edit button in the policy designer and then, from the Scripts tab, click on the + button to add the XSL file used for masking the properties in the XML response of the OData APIs.

 

 

  • In the Create Script dialog, provide the name of the XSL file, say maskResponseXSLT, select XSL as the Type and select Create from the Script drop-down. Finally, click on the Add button.

 

 

  • Select the newly added XSL file maskResponseXSLT and copy and paste the following code snippet into the Script Resource.

 

<?xml version="1.0" encoding="UTF-8"?><xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0" xmlns:ns3="http://www.hr-xml.org/3" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata" xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
<xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes"/>
<xsl:strip-space elements="*"/>
<xsl:param name="maskvalue">*******</xsl:param>

<xsl:template match="@*|node()">
    <xsl:copy>
        <xsl:choose>
            <xsl:when test="local-name(.)='EmailAddress' or local-name(.)='PhoneNumber' or local-name(.)='FaxNumber'">
				<xsl:value-of select="$maskvalue"/>
			</xsl:when>
            <xsl:otherwise>
                <xsl:apply-templates select="@*|node()"/>
            </xsl:otherwise>
        </xsl:choose>
    </xsl:copy>
</xsl:template>

</xsl:stylesheet>

 

The above XSL checks for the element names EmailAddress, PhoneNumber and FaxNumber; if a match is found, the value is replaced with the maskvalue parameter value, otherwise the original response value is copied.

 

Note that the XSL in this blog is just a sample snippet and would have to be adjusted to handle all the edge cases of OData calls.
  • Select PostFlow from the ProxyEndPoint section and then click on the + button next to the XSL policy available under the Mediation Policies segment.

 

 

  • In the Create policy screen specify the policy name say maskDataInXMLResponse, select Outgoing Response from Stream and  then click on the Add button.

 

  • Select the newly added maskDataInXMLResponse policy, then add the following policy snippet to invoke the maskResponseXSLT XSL file.
<XSL async="true" continueOnError="false" enabled="true" xmlns="http://www.sap.com/apimgmt">
                <!-- A variable that stores the output of the transformation -->
                <OutputVariable>response.content</OutputVariable>         
                <!-- The XSLT file to be used for transforming the message. -->
                <ResourceURL>xsl://maskResponseXSLT.xsl</ResourceURL>
                <!-- Contains the message from which information needs to be extracted -->
                <Source>response</Source>
</XSL>

 

 

  • In the Condition String text box, enter the following snippet so that the XSL policy is executed only for BusinessPartnerSet for XML format.
proxy.pathsuffix MatchesPath "/BusinessPartnerSet**" and ((request.header.Accept = null or request.header.Accept = "" or request.header.Accept = "application/xml") or request.queryparam.$format != "json")

 

 

 

With this we have successfully added an XSL transformation policy to mask the properties of the BusinessPartnerSet collection for OData responses in XML format. In the subsequent steps we add support for masking the response in JSON format.

  • From the Scripts tab, click on the + button to add the JavaScript file for masking the properties in the JSON format.

 

 

  • In the Create Script dialog provide the name of the JavaScript file say maskDataInJsonResponse and then select Create from the Script drop down. Finally click on the Add button.

 

  • Select the newly added JavaScript file maskDataInJsonResponse and copy and paste the following code snippet into the Script Resource.
var maskProperties = ["EmailAddress", "PhoneNumber","FaxNumber"];
var maskValue = "****"

function maskData(data){
    for(var propertyName in data){
        if(maskProperties.indexOf(propertyName) > -1){
            data[propertyName] = maskValue;
        }
    }
    return data;
}


function processResponse(response){
    if(response != null && response.d){
        if(response.d.results && response.d.results.length > 0){
            for(var i=0,length =response.d.results.length;i<length ; i++){
             response.d.results[i] = maskData(response.d.results[i]);
            }
        }else{
            response.d = maskData(response.d);
        }
    }
    return response;
}

var response = JSON.parse(context.proxyResponse.content);
var maskedResponse = processResponse(response);
context.proxyResponse.content = JSON.stringify(maskedResponse);

 

 

The above snippet checks whether the OData response returns an array of business partner entities or a single business partner entity, and accordingly masks the selected properties in the JSON-format OData response.

Note that the above JavaScript is just a sample snippet and this snippet would have to be adjusted to handle all the edge cases of OData calls.

 

  • Select PostFlow from the ProxyEndPoint  section and then click on the + button next to the JavaScript Policy available under the Extensions Policies segment.

  • In the Create policy screen specify the policy name say maskDataInJsonFormat, select Outgoing Response from Stream and   then click on the Add button.

 

 

  • Select the newly added maskDataInJsonFormat policy, then add the following policy snippet to invoke the maskDataInJsonResponse JavaScript file.
<!-- this policy allows us to execute java script code during execution of an API Proxy -->
<Javascript async="false" continueOnError="false" enabled="true" timeLimit="200" xmlns='http://www.sap.com/apimgmt'>
	<ResourceURL>jsc://maskDataInJsonResponse.js</ResourceURL>
</Javascript> 

 

  • In the Condition String text box, enter the following snippet so that the JavaScript policy is executed only for BusinessPartnerSet in JSON format.
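A condition of the following form, mirroring the XML condition above, can be used (an assumption; adjust it to your proxy as needed):

proxy.pathsuffix MatchesPath "/BusinessPartnerSet**" and (request.header.Accept = "application/json" or request.queryparam.$format = "json")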

 

 

With this we have successfully added a JavaScript policy to mask the properties of the BusinessPartnerSet collection for OData responses in JSON format. In the subsequent steps we test the flow.

 

Finally testing the flow

 

  • Navigate to the Test tab from the hamburger icon

 

 

  • From the APIs list search for the API Proxy that you would like to test say GatewayServiceRestrictedAccess and then click the API to test.

 

 

  • Click on the Authentication: None link and select Basic Authentication to set the user credential to connect to the SAP Gateway ES4 system

 

 

  • Enter your user credential to the SAP Gateway ES4 system and click on the OK button

 

  • Append /BusinessPartnerSet to the API proxy URL and click on the Send button

 

 

  • In the response body,  the EmailAddress, PhoneNumber and FaxNumber values would be masked with *****.

 

 

  • Click on the Headers button. Enter Accept as header name and application/json as the header value and then click on the Send button

 

 

  • In the response body,  the EmailAddress, PhoneNumber and FaxNumber values would be masked with *****.

 

 

Introduction

Welcome to the blog post of the Expert Services Marketing Practice.

We are happy to share our experience with you around Marketing Integration, Analytics, and Business Technology.

You want to see more? Click here

Background

When working with location-based data, you will sooner or later have to do some geocoding. Geocoding is basically the transformation of full addresses into geographical location data (latitude/longitude) and the other way around (reverse geocoding). Google Maps provides a service to easily transform addresses.

With the geographical data available, we can use the information in the segmentation to create location-based target groups.

 

In this Blog Post, we will build a simple scenario using Google Maps geocoding capabilities.

Note:
In this blog post, we only consider (reverse) geocoding for single addresses, not bulk or mass processing. With the tests conducted we have stayed within the limitations according to the Standard Usage Limits of the Google Maps Geocoding API.
Please review the Google Maps documentation before using the API.

Google Maps Geocoding API Usage Limits: https://developers.google.com/maps/documentation/geocoding/usage-limits

Configuration: Google Maps Geocoding API

When setting up the Google Application you have to create an API key to enable access to call the Google Maps Geocoding API.

Since the API key is provided with the request URL, it is recommended to restrict access to the API key usage.

Depending on where your SAP Cloud Platform Integration cluster is maintained, the IP range applicable to your cluster is different. You can retrieve that information from the SAP Help documentation for Cloud Platform Integration:

SAP Cloud Platform Help: https://help.sap.com/viewer/product/CP/Cloud/en-US
SAP Cloud Platform > Product Overview > Accounts > Regions and Hosts

Google API Manager: https://console.developers.google.com/apis/credentials

Note:
Please make yourself familiar with the usage of the Google Maps API.
The service used in this blog post is not designed to respond in real time to user input.
Documentation: https://developers.google.com/maps/documentation/geocoding/intro?hl=en

Configuration: SAP Hybris Marketing Cloud

For this scenario, only inbound communication is configured.

Configure SAP Hybris Marketing Cloud Inbound Communication
For importing data using the OData Service for Master Data integration, configure an Inbound Communication Channel.

  • Create a Communication System
    • Define a name for the Communication System.
    • Define the Authentication Method for Inbound Communication.
  • Create a Communication Arrangement Inbound Scenario
    • Select the Communication System created before
    • Select the Scenario SAP_COM_0003
    • Depending on the Authentication option you use, you need to create a Communication User and assign the user to the Communication Arrangement.

Inbound Communication Arrangement

Configuration: SAP Cloud Platform Integration IFlow

The IFlow depicted below shows the general approach to calling the Google Maps Geocoding API, mapping the target structure and creating an interaction contact in SAP Hybris Marketing Cloud.

The IFlow created here is just one of many possible approaches to processing geocoding with Google Maps.

Part 1: Geocoding

Now, let’s review some of the mandatory steps to translate an address to GPS coordinates.

  1. Receive Message

From the sending system, we receive an XML formatted payload containing some customer information.

The location data is provided as address data. To relate the information to a location that can, for example, be placed as a marker on a map, we need to get the geographical location data (longitude/latitude).

Inbound message Payload:

<root>
	<row>
		<Id>C-100</Id>
		<IdOrigin>SAP_FILE_IMPORT</IdOrigin>
		<Timestamp>20170729122215</Timestamp>
		<TitleDescription>Mr.</TitleDescription>
		<FirstName>Ron</FirstName>
		<LastName>Scuba</LastName>
		<Street>North Lower Wacker Drive</Street>
		<HouseNumber>20</HouseNumber>
		<City>Chicago</City>
		<Region>IL</Region>
		<Country>USA</Country>
		<EMailAddress>Ron.Scuba@abcd123.com</EMailAddress>
		<EMailOptIn>Y</EMailOptIn>
		<PhoneNumber>+1 555 123123123</PhoneNumber>
		<PhoneOptin>N</PhoneOptin>
		<DateOfBirth>19480902</DateOfBirth>
	</row>
</root>
  2. Enrich Message in a separate branch

Since we are missing the longitude and latitude, the message must be enriched.
To do so, we simply create a second branch in which the GPS location is persisted in properties while keeping the original message payload in the first branch.
The address is then submitted to the Google Maps Geocoding API with the Google Maps API Key.
Google Maps provides parameters to filter, sort, and preselect the results to be retrieved.

  3. Call Google Maps Geocoding API

With the address fields available from the inbound message, the query to be submitted to Google Maps is defined in the HTTP Communication Channel.

Query: address=<HouseNumber>+<Street>,+<City>,+<Region>,<Country>&key=<API key>

Google Maps provides you the option to retrieve the response as JSON or XML.
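
For reference, assuming the standard public Google endpoint, a complete geocoding request for the payload above might look like this (the API key is of course a placeholder); the output format is chosen via the xml or json path segment:

https://maps.googleapis.com/maps/api/geocode/xml?address=20+North+Lower+Wacker+Drive,+Chicago,+IL,USA&key=YOUR_API_KEY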

Google Maps Response (XML format):

<?xml version="1.0" encoding="UTF-8"?>
<GeocodeResponse>
    <status>OK</status>
    <result>
        <type>premise</type>
        <formatted_address>20 N Upper Wacker Dr, Chicago, IL 60606, USA</formatted_address>
        <address_component>
            ...
        </address_component>
        <geometry>
            <location>
                <lat>41.8825640</lat>
                <lng>-87.6374246</lng>
            </location>
            <location_type>ROOFTOP</location_type>
            <viewport>
                <southwest>
                    <lat>41.8812150</lat>
                    <lng>-87.6387735</lng>
                </southwest>
                <northeast>
                    <lat>41.8839130</lat>
                    <lng>-87.6360756</lng>
                </northeast>
            </viewport>
            <bounds>
                <southwest>
                    <lat>41.8820520</lat>
                    <lng>-87.6377680</lng>
                </southwest>
                <northeast>
                    <lat>41.8830760</lat>
                    <lng>-87.6370811</lng>
                </northeast>
            </bounds>
        </geometry>
        <place_id>ChIJEz3RjbgsDogR5z1PbIL8Ib0</place_id>
    </result>
</GeocodeResponse>
  4. Merge Messages

Now, we have two messages that need to be merged into one.

Message 1: Original message payload we have received from the sender system
Message 2: Google Maps Geocoding API response

In the previous step, we used a Parallel Multicast to create a second branch to get the address information from Google Maps. With all the information needed available, we combine both messages using a combination of the Join and Gather integration patterns.

This will create a new Message combining both messages in one.

<?xml version="1.0" encoding="UTF-8"?>
<multimap:Messages xmlns:multimap="http://sap.com/xi/XI/SplitAndMerge">
<multimap:Message1>
...Original Sender Message Payload...
</multimap:Message1>
<multimap:Message2>
...Google Maps Response...
</multimap:Message2>
</multimap:Messages>
  5. Message Mapping

The Message Mapping is an essential step to transform the multi-message we have created to the message that is expected by SAP Hybris Marketing Cloud.

Therefore we have to create a Multi-Message-Mapping where we have two source messages and one target message.

Still, after the mapping, the message structure is not correct. This is because we have a multi-message in the mapping. A simple approach, using standard integration patterns, is to define a Filter step that filters for the part of the message you need before sending the message to SAP Hybris Marketing Cloud.

Note:
When mapping the fields, make sure to use the correct context. You should make yourself familiar with how queues and contexts work in the graphical mapping (see the blog posts referenced below).

Yellow: Original Sender Message Payload (Source)
Green: Google Maps Geocoding API Response (Source)
Blue: SAP Hybris Marketing Interaction Contact (Target)

Two excellent posts describing queues and contexts with graphical mapping:

  6. Create Contact in SAP Hybris Marketing Cloud

To verify that the message complies with the message structure expected by Marketing Cloud, we have added an XML Validator, which validates the message structure against the XML schema added to the XML Validator integration pattern.

From here we simply need to configure the Receiver Communication Channel to submit the message to our receiving system.

You can review the contact created using the Inspect Contact app on Marketing Cloud.
On SAP Hybris Marketing Cloud we only used standard functionality and no customization is needed.

 

Part 2: Reverse Geocoding

The procedure for reverse geocoding is the same as for the geocoding example described above.

  1. Receive Message

From the sending system, we receive an XML formatted payload containing some customer information.

This time the location data is provided as a GPS location (latitude/longitude), but we would like to import a full address into Marketing Cloud.
In this example, we have used dummy data for explanation.

Inbound message Payload:

<root>
  <row>
    <Id>C-100</Id>
    <IdOrigin>SAP_FILE_IMPORT</IdOrigin>
    <Timestamp>20170729122215</Timestamp>
    <FirstName>Ron</FirstName>
    <LastName>Scuba</LastName>
    <TitleDescription>Mr.</TitleDescription>
    <EMailAddress>Ron.Scuba@abcd123.com</EMailAddress>
    <EMailOptIn>Y</EMailOptIn>
    <PhoneNumber>+1 619 62234267</PhoneNumber>
    <PhoneOptin>N</PhoneOptin>
    <DateOfBirth>19480902</DateOfBirth>
    <Latitude>41.88255</Latitude>
    <Longitude>-87.637167</Longitude>
  </row>
</root>
  2. Enrich Message in a separate branch

Since we are missing the full address data, the message must be enriched.
As before, we create a separate branch to enrich the data.
The GPS location data is then submitted to the Google Maps Geocoding API with the Google Maps API key.

  3. Call Google Maps Geocoding API

With the parameters available for retrieving the address data from Google Maps, the query to be submitted to Google Maps is defined in the HTTP Communication Channel.

Here we used:

  • "ROOFTOP" indicates that the returned result is a precise geocode for which we have location information accurate down to street-address precision.
  • street_address indicates a precise street address.

Query: latlng=<latitude>,<longitude>&location_type=ROOFTOP&result_type=street_address&key=<API key>
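
Assuming the same public endpoint as before, the reverse geocoding request for the coordinates in the payload above might look like this (API key again a placeholder):

https://maps.googleapis.com/maps/api/geocode/xml?latlng=41.88255,-87.637167&location_type=ROOFTOP&result_type=street_address&key=YOUR_API_KEY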

Google Maps Response (XML format):

<?xml version="1.0" encoding="UTF-8"?>
<GeocodeResponse>
    <status>OK</status>
    <result>
        <type>street_address</type>
        <formatted_address>20 N Lower Wacker Dr, Chicago, IL 60606, USA</formatted_address>
        <address_component>
            <long_name>20</long_name>
            <short_name>20</short_name>
            <type>street_number</type>
        </address_component>
        <address_component>
            <long_name>North Lower Wacker Drive</long_name>
            <short_name>N Lower Wacker Dr</short_name>
            <type>route</type>
        </address_component>
        <address_component>
            <long_name>Chicago Loop</long_name>
            <short_name>Chicago Loop</short_name>
            <type>neighborhood</type>
            <type>political</type>
        </address_component>
        <address_component>
            <long_name>Chicago</long_name>
            <short_name>Chicago</short_name>
            <type>locality</type>
            <type>political</type>
        </address_component>
        <address_component>
            <long_name>Cook County</long_name>
            <short_name>Cook County</short_name>
            <type>administrative_area_level_2</type>
            <type>political</type>
        </address_component>
        <address_component>
            <long_name>Illinois</long_name>
            <short_name>IL</short_name>
            <type>administrative_area_level_1</type>
            <type>political</type>
        </address_component>
        <address_component>
            <long_name>United States</long_name>
            <short_name>US</short_name>
            <type>country</type>
            <type>political</type>
        </address_component>
        <address_component>
            <long_name>60606</long_name>
            <short_name>60606</short_name>
            <type>postal_code</type>
        </address_component>
        <geometry>
            <location>
                <lat>41.8825731</lat>
                <lng>-87.6374299</lng>
            </location>
            <location_type>ROOFTOP</location_type>
            <viewport>
                <southwest>
                    <lat>41.8812241</lat>
                    <lng>-87.6387789</lng>
                </southwest>
                <northeast>
                    <lat>41.8839221</lat>
                    <lng>-87.6360809</lng>
                </northeast>
            </viewport>
        </geometry>
        <place_id>ChIJBRwtjLgsDogRPYuy9JcGjfw</place_id>
    </result>
</GeocodeResponse>
  4. Merge Messages

Again, we have two messages that need to be merged to one.

Message 1: Original message payload we have received from the sender system
Message 2: Google Maps Geocoding API response.

  5. Message Mapping

The Message Mapping is very similar to the one from the geocoding example. Make sure you have added the correct XML Schema files.

Yellow: Original Sender Message Payload (Source)
Green: Google Maps Response (Source)
Blue: SAP Hybris Marketing Interaction Contact (Target)

  6. Create Contact in SAP Hybris Marketing Cloud

Configure the Receiver Communication Channel and submit the message to our receiving system.

You can review the contact created using the Inspect Contact app on Marketing Cloud.
On SAP Hybris Marketing Cloud we only used standard functionality and no customization is needed.

In the segmentation, we can display contacts with location information.

Summary

Depending on your use case, you might need to provide address data either as a GPS location or as full address information. The Google Maps Geocoding API provides an easy-to-use interface to do both geocoding and reverse geocoding.

With this, you should be able to configure a simple scenario with the Google Maps Geocoding API to create an interaction contact in Marketing Cloud. The setup on SAP Hybris Marketing requires only a couple of steps and can be done within minutes. The easy-to-use user interface of SAP Cloud Platform Integration enables quick onboarding, where you can easily design and configure your IFlow to perform complex tasks.

You want to see more? Check out our blogs by searching for the tags assigned to this blog.
Your SAP Hybris Expert Services – Marketing Practice team.

 

… In continuation from part 1

Offline Interface

When building offline applications using Fiori Mobile, an interface file (offlineInterface.js) is added for ease of development. This file has wrapper methods for flush, refresh, sync (flush and refresh together), easy error handling and other utility functions.

Initialization

constructor
Makes sure only one instance is present (Singleton) and initializes Application Endpoint and Default Store
Input / Output Type Name Description
parameter object oChildObject existing object, if created before
parameter string sApplicationEndpoint service root
parameter string sDefaultStore default offline store
return object returns the offline interface object

 

Offline Interface can be instantiated and used as follows:

var applicationep = this.getMetadata().getManifestEntry("sap.mobile").stores[0].serviceRoot;
var defaultstore = this.getMetadata().getManifestEntry("sap.mobile").stores[0].name;
var oOfflineInterface = new sap.smp.OfflineInterface (null, applicationep, defaultstore);

You can get the service root and the default store name from the manifest file as shown above and pass them to the constructor. This initialization can be done in the Component.js init() function.
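
A minimal sketch of that wiring, assuming a standard UI5 Component (the namespace my.app is a placeholder):

sap.ui.define(["sap/ui/core/UIComponent"], function (UIComponent) {
    "use strict";
    return UIComponent.extend("my.app.Component", {
        init: function () {
            // call the base class init first
            UIComponent.prototype.init.apply(this, arguments);
            // read service root and default store from the manifest, as above
            var oStore = this.getMetadata().getManifestEntry("sap.mobile").stores[0];
            // singleton: later new sap.smp.OfflineInterface() calls reuse this instance
            this._oOfflineInterface = new sap.smp.OfflineInterface(null, oStore.serviceRoot, oStore.name);
        }
    });
});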

 

Sync, Flush & Refresh

sync()
synchronizes the data between the system and the offline database for a given store. The sync method combines the operations of the flush and refresh methods.
Input / Output Type Name Description
parameter function fnCallbackSyncSuccess function to call after success of flush
parameter function fnCallbackSyncError function to call after error in flush
parameter string sStoreID ID of the offline store. If it is not passed, default store id is used.

 

Example code:

//If the reference to OfflineInterface is there, then use it. If not, create new (OfflineInterface uses singleton pattern)

new sap.smp.OfflineInterface().sync(	
    function(error){ 
        //show the error (business error, if present) on sync call back success
    }, function(error) {
       //show the error (technical error, if present) on sync call back failure
    }, 
    null //no need to pass if default store
);

 

flush()
flushes the data from offline database to the system for a given store
Input / Output Type Name Description
parameter function fnCallbackSuccess callback function executed when the flush succeeds
parameter function fnCallbackError callback function executed when the flush fails
parameter string sStoreID ID of the offline store. If it is not passed, default store id is used.

 

Example code:

//If the reference to OfflineInterface is there, then use it. If not, create new (OfflineInterface uses singleton pattern)

new sap.smp.OfflineInterface().flush(	
    function(error){ 
         //show the error (business error, if present) on flush call back success
    }, function(error) {
        //show the error (technical error, if present) on flush call back failure
    }, 
    null //no need to pass if default store
);

 

refresh()
refreshes the data from system to offline database for a given store
Input / Output Type Name Description
parameter function fnCallbackSuccess callback function executed when the refresh succeeds
parameter function fnCallbackError callback function executed when the refresh fails
parameter string sStoreID ID of the offline store. If it is not passed, default store id is used.
parameter array aRefreshObjects [subset] – List of the names of the defining requests to refresh

 

Example code:

//If the reference to OfflineInterface is there, then use it. If not, create new (OfflineInterface uses singleton pattern)

new sap.smp.OfflineInterface().refresh(	
    function(error){ 
        //show the error (business error, if present) on refresh call back success
    }, function(error) {
        //show the error (technical error, if present) on refresh call back failure
    }, 
    null, //no need to pass if default store
    null //if all the defining requests of a store needs to be refreshed
);

 

getLastFlush()
returns the last flushed timestamp
Input / Output Type Name Description
return timestamp Date/Time of the last flush

 

getLastRefresh()
returns the last refreshed timestamp
Input / Output Type Name Description
return timestamp Date/Time of the last refresh

 

Utility Functions

isDeviceOnline()

Function to check if device is online

While you can assume that the browser is offline when it returns a false value, you cannot assume that a true value necessarily means that the browser can access the internet; you could be getting false positives.

Input / Output Type Name Description
return boolean true if device is online

 

isOfflineEnabledApp()
checks whether the app is offline enabled – if the smp, registration, stores objects are initialized
Input / Output Type Name Description
return boolean true, if the app is offline enabled

 

getServiceUrl()
returns the service url for a given store id (or default store)
Input / Output Type Name Description
parameter string sStoreID ID of the offline store for which the service URL is requested
return string return the service url
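
For example, the returned URL can be used to point a UI5 OData model at the offline store (a sketch; the model settings depend on your application):

// hypothetical usage inside a component or controller
var sServiceUrl = new sap.smp.OfflineInterface().getServiceUrl(); // default store
var oModel = new sap.ui.model.odata.v2.ODataModel(sServiceUrl);
this.setModel(oModel);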

Offline Error Handling

There are certain error situations related to offline use in Fiori. Handling of these errors shall be done in a consistent way by the application itself.

The method of error handling is different in online and in offline mode, therefore you need to be aware of the offline related scenarios.

In case of an online scenario you get an almost immediate response from the OData Producer for every request. The application provides various solutions to help you to handle situations, such as correcting a field or resending a request.

On the other hand, in an offline scenario the store doesn’t contain any kind of business logic and doesn’t do the same checks as the back end does. When the update of an entity happens locally but on the back end the same entity is already deleted, the offline queue is filled with requests that will all fail when the flush call is initiated. In some cases, you can check the preconditions of the entries on the client side, but apart from this the offline-enabled application must be prepared for handling error messages coming from the back end.

The failed requests are stored in the ErrorArchive entity set dedicated to each offline store initialized on the device.

You can query the ErrorArchive collection the same way as any other OData collection, for example by using the $filter and $top query options to modify the search. Entries of the archive have to be deleted by the application after the issues have been resolved.

You have to make sure that a flush rejected by the OData producer is followed by a refresh call before any correction is made, to clear the offline store of the wrong entities. If the Error Archive is up to date, then a refresh is not necessary.
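
A rough sketch of that sequence using the wrapper methods documented in this post (how the errors are presented is up to the application):

var oOffline = new sap.smp.OfflineInterface();
oOffline.flush(
    function () {
        var aErrors = oOffline.getErrorMessages();
        if (aErrors.length) {
            // show the business errors to the user, then refresh so the
            // offline store no longer contains the rejected entities
            oOffline.refresh(
                function () { /* ready for corrections */ },
                function (oError) { /* technical error during refresh */ }
            );
        }
    },
    function (oError) {
        // technical error during flush (e.g. no connectivity)
    },
    null // default store
);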

Technical Exception Handling

There are technical issues that are related to system errors or infrastructure failures. These can include server communication issues (for example, server is not reachable, there is a communication time out or broken connection) and runtime errors not related to the business data (for example, client side implementation errors, internal server errors, and runtime errors). These errors appear as an object in the error callback of the flush and refresh methods.

Business Logic Exception Handling

There can be issues that are detected by the business logic at the OData producer. These issues include invalid business object data (for example, forbidden property values and combinations, or missing mandatory properties) and forbidden or restricted operations on the business objects (for example, the object cannot be deleted or modified by the user, or there are authorization issues). These errors are stored in the Error Archive as mentioned above.

Conflict Handling

There might be conflicts between the back-end server and the local data set during the data synchronization, for example, the business object is already deleted or modified on the server by another user. It is necessary to define what options the Fiori apps shall provide to the end users to resolve these conflicts.

For those entities that can be modified by the application you need to implement OData ETag support.

 

Error handling using Offline Interface

When there are issues during the flush operation, the errors are retrieved from the error archive and stored in an array. You can get the error messages using the function below:

getErrorMessages()
returns the error messages array
Input / Output Type Name Description
return array array containing the error messages from the error archive.

 

The structure of the ErrorArchive entity is described in the below link:

http://help.sap.com/saphelp_smp308sdk/helpdata/en/ff/35db37335f4bb8a1e188b997a2b111/content.htm

Example Code:

if (new sap.smp.OfflineInterface().getErrorMessages().length) {
    new sap.smp.OfflineInterface().getErrorMessages().forEach(function(oErrorMessage) {
        //show the oErrorMessage.Message
    }, this);
}

 

You can delete the error archive using the following function:

deleteErrorArchive()
function that deletes messages in error archive
Input / Output Type Name Description
parameter array aErrorArchive Array containing the error messages from the error archive
parameter function fnCallbackSuccess success callback function
parameter function fnCallbackError error callback function

 

Example Code:

if (new sap.smp.OfflineInterface().getErrorMessages().length) {
    new sap.smp.OfflineInterface().deleteErrorArchive(
        new sap.smp.OfflineInterface().getErrorMessages(),
        function() { }, // success call back on delete error archive
        function(oError) { } // failure call back on delete error archive
    );
}

 

Offline Configuration

There are settings that can be changed for the offline application. For example, you can share data sets among the users of an offline-enabled application to improve performance. In this case, data from these collections can be cached on the server to decrease the time of data synchronization. By default, no data is cached on the server.

When an offline-enabled application is initialized, by default the server creates an offline database with initial content from the back end (using defining requests). Then, the database is pushed to the client application on the device.

Change the settings of the defining requests to share data among the users. To do so, define the settings in a configuration file and save it as a text file.

In the Fiori Mobile Admin Console, open the App Settings of your application, and on the Offline tab, select Import Settings and upload your configuration file.

 

Since all the necessary settings have default values, you only need this file if you want to override this behavior. Default values are not visible on the application configuration UI.

For more information about the application configuration file, see

http://help.sap.com/saphelp_smp3011sdk/helpdata/en/f5/a25877c16f4fc384c44fcf0b92dab8/content.htm?frameset=/en/7c/1c784070061014ae07e52333d5e566/frameset.htm&current_toc=/en/7c/01cda6700610149b10c2f2a86d335b/plain.htm&node_id=472

 

References:

https://blogs.sap.com/2015/07/19/getting-started-with-kapsel-part-10-offline-odata-sp09/

http://help.sap.com/saphelp_smp308sdk/helpdata/en/a7/d7b40c47024809aee453b3016650a3/content.htm?frameset=/en/6f/38926c58b34ec9bd931a7f9799de52/frameset.htm&current_toc=/en/7c/01cda6700610149b10c2f2a86d335b/plain.htm&node_id=418&show_children=false

 

How to create a search engine in WebI using an input control where we can type some tag words and see the number of hits and the detailed results of our lookup.

 

In this example we wanted to know all the prices of the black Bermudas.

In this example we wanted to know something about the states with "or" in their name and something about Trousers; but how do you write "Trousers" exactly? So we filled in something like "ous".

This resulted in a lot of lines, of course, because the combination of "ous" and "or" is not a very specific one!

Well, things to know about this search engine:

  • It is set up for 4 dimensions (State, Category, Color, Lines); we concatenated those dimensions into one variable: Lookup
  • The search input control / entry field can handle 1 or 2 separate strings separated by a space; if you want to use * or something else to separate the strings, you'll need to change the formulas in the variables
  • If you use Bermudas or bermudas you will get different results, because I didn't cleanse my data. Check out Mahboob Mohammed's comment for the solution
  • Whether you use "ous or" or "or ous" has no effect on the result, because each string is evaluated separately

Okay, steps to reproduce:

  1. Create variable "Search" as a dimension: ="…"
  2. Set up an input control entry field based upon "Search"
  3. Create variable "Lookup" as a dimension, a concatenation of the dimensions you want to consider in your search: =[Query 1].[State]+[Query 1].[Category]+[Query 1].[Color]+[Query 1].[Lines]
  4. Create variable "Match 1" as a dimension: =Substr([Search];0;(Pos([Search];" ")-1))
  5. Create variable "Match 2" as a dimension: =LeftTrim(RightTrim(Substr([Search];Pos([Search];" ");99999999)))
  6. Create variable "Like" as a dimension: =If Replace([Lookup];[Match 2];"XXX")=[Lookup] Then "no match" ElseIf IsNull([Match 1])=1 Then "match" ElseIf Replace([Lookup];[Match 1];"XXX")=[Lookup] Then "no match" Else "match"
  7. Add the Lookup and Like dimensions to your result table and hide both of them (for testing you can leave them shown)
  8. Filter the result table on the variable "Like" set equal to "match"

 

Now we are ready to play

I want a wallet in Ecru

 

A parent-child relationship can be used to model many types of hierarchies, including ragged hierarchies, balanced hierarchies, and unbalanced hierarchies.

SAP Analytics Cloud (SAC) originally required us to use parent child hierarchies. Often when connecting live to HANA, you could be modeling your hierarchies in this way.

Below, we can see an example organisational structure. This is an unbalanced hierarchy, as the depth of the hierarchy varies depending on which part of the organisation you look at.

 

For clarity, we have added the ID of each member.
This ID also becomes the child member within the hierarchy.

As we can see below, the parent child hierarchy only requires a simple structure of two columns, the child entity (Job Title), and the parent or level above that. It is also common to include the text related to that organisation level.

create column table ORG_STRUCTURE (ORG_ID INT, PARENT_ID INT, JOB_TITLE VARCHAR(50));
insert into ORG_STRUCTURE values (1, NULL, 'CEO');
insert into ORG_STRUCTURE values (2, 1, 'EA');
insert into ORG_STRUCTURE values (3, 1, 'COO');
insert into ORG_STRUCTURE values (4, 1, 'CHRO');
insert into ORG_STRUCTURE values (5, 1, 'CFO');
insert into ORG_STRUCTURE values (6, 1, 'CMO');
insert into ORG_STRUCTURE values (7, 3, 'SVP Sales');
insert into ORG_STRUCTURE values (8, 5, 'SVP Finance');
insert into ORG_STRUCTURE values (9, 6, 'SVP Marketing');
insert into ORG_STRUCTURE values (10, 7, 'US Sales');
insert into ORG_STRUCTURE values (11, 7, 'EMEA Sales');
insert into ORG_STRUCTURE values (12, 7, 'APJ Sales');
insert into ORG_STRUCTURE values (13, 9, 'Global Marketing');
insert into ORG_STRUCTURE values (14, 9, 'Regional Marketing');
insert into ORG_STRUCTURE values (15, 11, 'UK Sales');
insert into ORG_STRUCTURE values (16, 11, 'France Sales');
insert into ORG_STRUCTURE values (17, 11, 'Germany Sales');
insert into ORG_STRUCTURE values (18, 12, 'China Sales');
insert into ORG_STRUCTURE values (19, 12, 'Australia Sales');
select * from ORG_STRUCTURE;

With just this single table we can create a calculation view to model this structure.

 

Add a parent-child hierarchy; more details on this step can be found in the official documentation: SAP HANA Developer Guide – Parent Child Hierarchies.

 

To be able to report on this we need a measure.
The easiest and most sensible option here is to add a counter to count the ORG_IDs.

 

To test hierarchies, we should use a tool that properly understands the hierarchical structures.
Below we can see the hierarchy with SAP BusinessObjects Analysis for Microsoft Office

 

Alternatively, if Analysis for Office is not available, a workaround is to view the hierarchy within the Analytic Privileges.
To do this, we need to "Enable Hierarchies for SQL Access" in the Calc View properties. This property is exposed if we have a Star Join within the Calc View.

 

Within the Analytic Privileges dialogue, we can find our hierarchy after first selecting the child attribute, ORG_ID.

 

We can then test and browse our hierarchy, here it shows both the ID (Value) and the Label (Description)

 

So far so good. Now, joining this hierarchy dimension to a fact table should be straightforward, and it is, provided you use the correct join – an outer join.

create column table EXPENSES (ORG_ID int, EXPENSE_AMOUNT int);
insert into EXPENSES values (1,430);
insert into EXPENSES values (2,120);
insert into EXPENSES values (3,100);
insert into EXPENSES values (4,250);
insert into EXPENSES values (5,530);
insert into EXPENSES values (6,180);
insert into EXPENSES values (8,450);
insert into EXPENSES values (9,250);
insert into EXPENSES values (10,160);
insert into EXPENSES values (12,350);
insert into EXPENSES values (13,130);
insert into EXPENSES values (14,300);
insert into EXPENSES values (15,140);
insert into EXPENSES values (16,550);
insert into EXPENSES values (18,170);
insert into EXPENSES values (19,150);

 

A common scenario is that not all the organisation entities appear in the fact table, but they are still part of the hierarchy. We want to ensure that our reporting is accurate, and that we do not lose any information. To achieve this we should use a left outer join on the dimension table.

 

We now have a simple calculation view with a fact table, dimension and a parent child hierarchy.

 

Switching to Analysis Office, we can report against our parent-child hierarchy. Notice how all members are returned, including the parent and child members where there are no expense amounts.

As of SAP S/4HANA we have a new way of managing output type management (formerly the NACE transaction) – using BRFplus. Together with my two colleagues from Int4, Michal Michalski and Krzysztof Luka, we decided to describe how this new functionality works and to show the different options where it can be used (for example with output IDocs, Ariba integration, and SAP AIF integration).

SAP PRESS has published our book in its new format – E-Bite. E-Bites are smaller books available only in electronic format that concentrate on a single topic – in our case, BRFplus output type management. Our book contains step-by-step instructions and screenshots that will enable you to explore the new BRFplus output type management.

You can buy it here – SAP PRESS shop

Below you can find the table of contents:

1 Introduction

1.1 New Output Management for SAP S/4HANA

1.2 Prerequisites

1.3 SAP Ariba and IDoc Output Management Scenarios

1.4 Licensing for SAP Application Interface Framework

2 SAP Ariba Scenario

2.1 Ariba Network Technical Connectivity

2.2 Message Output Type Configuration

2.3 SAP Application Interface Framework Monitoring

3 IDoc Scenario

3.1 IDoc Customizing

3.2 Customizing Output Management

3.3 Testing the New Output Determination

3.4 IDoc SAP Application Interface Framework Enablement

4 Reactive Monitoring with SAP Application Interface Framework Alerts

4.1 Interface Setup for SAP Application Interface Framework Full Mode

4.2 Setting Up SAP Application Interface Framework Alert and Recipient Determination

 

 

References:

  1. My first book on AIF topic for business users – SOA Integration – Enterprise Service Monitoring (LIE, FEH/ECH, AIF)
  2. SAP Press e-bite on SAP AIF: New SAP Press book (E-bite series): Serializing Interfaces in SAP AIF

Suppose you have packaged a UI5 application into a mobile device via Cordova, and in your UI5 application you have consumed a Cordova plugin which provides a native API on the mobile platform, and you would like to debug your application. In this blog, I will show how to debug UI5 JavaScript code and Cordova plugin code on the Android platform.
I will continue to use the UI5 application described in my previous blog, Step by step to create a custom Cordova plugin for Android and consume it in your UI5 application, for the demo.

How to debug UI5 code running in Android device

Suppose you would like to debug whether your UI5 code runs correctly on a real Android device. The steps to debug in Chrome are almost the same as when you debug the UI5 application running on a PC; only a few additional steps are necessary.
1. Enable the USB debugging option for your Android device. Then connect your mobile device to your PC and open the Chrome developer tools:
Now you should see your Android device here:
2. Launch the UI5 application on your mobile device; a new entry then appears under your device name. Click the "Inspect" button:
3. Now switch to the Sources tab, and all loaded HTML and JavaScript resources are visible. There is nothing new starting from here: you can set breakpoints wherever needed. For example, in the screenshot below I set a breakpoint at line 38, where the plugin written in Java is to be called.
4. Re-launch the application, and now the breakpoint is triggered.
Press F11, and we can step into and check how the Cordova plugin written in Java is called:
The magic of the call from JavaScript to Java starts at line 967. For more detail see this blog: How is JavaScript code in OData offline plugin delegated to native Java code in Android.
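For orientation, a call from UI5 JavaScript into a custom Android plugin typically goes through cordova.exec; the service and action names in this sketch are hypothetical placeholders, not the ones used in the referenced blog:

function callNativePlugin(sText) {
    cordova.exec(
        function (result) { console.log("native call succeeded: " + result); }, // success callback
        function (error) { console.log("native call failed: " + error); },      // error callback
        "MyPluginService", // service name as registered in config.xml (hypothetical)
        "greet",           // action handled by the Java plugin's execute() method (hypothetical)
        [sText]            // arguments marshalled to the Java side
    );
}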
How to debug Cordova plugin developed in Java
I am using Android Studio to debug the Java code.
Suppose the root folder of my project is JerryUI5HelloWorld, just open the folder android under platforms folder, as highlighted below.
Once the project is opened in Android Studio, it looks as below:
Launch the application under debug mode:
Repeat the operation in UI, and the breakpoint set previously is triggered now.
You can still switch between different call stack frames to observe how the custom plugin is called by the Cordova framework on the Java side.

The Global Biobank Week conference in Stockholm (September 13-15) is rapidly approaching. I spoke with two colleagues working closely together in a cooperative project between SAP SE and CBmed GmbH (the Austrian K1 Competence Center for Biomarker Research in Medicine). Markus Kreuzthaler, PhD is a Research Associate at CBmed, specialized in clinical natural language processing (NLP) and Peter Kaiser, PhD is a Development Project Manager at SAP Health. Both will be onsite at the conference.

Markus, you are one of the speakers at the event in Stockholm. What will you present?

Markus: I will talk about the CBmed project Innovative Use of Information for Clinical Care and Biomarker Research (IICCAB), aimed at mining large-scale clinical data sets for primary and secondary use. Specifically, I will highlight challenges and solutions in cohort building, where we work together with SAP and the BioBank Graz. For this project, the retrieval of clinical information is fundamental for a precise selection of suitable biospecimens by querying clinical routine data linked to biobank sample data. The challenge is that electronic health records (EHRs) contain most of the relevant clinical information as free text only. This is sufficient for the communication and documentation needs of clinicians, but is challenging when it comes to machine-based information extraction for a defined task. A robust NLP engine, powered by dictionaries that reflect the local language, is indispensable for comprehensive data integration. In the end, semantically normalized patient profiles using international terminology standards can be stored and queried via the SAP Connected Health platform to support biomarker research.

Peter, you have been working closely with CBmed in this project. What have been the outcomes?

Peter: To bring all types of biomedical data together, to ensure appropriately standardized data for biomarker research, and ultimately to improve clinical decisions is both ambitious and exciting. We had to address the right processing and federation of the data, and we had to design the repositories. As all clinical documents are in German, a semantic layer for that language had to be put into place. Pseudonymisation1 and de-identification1 of patient data (which is a mandatory requirement to ensure data privacy) is also addressed in this project. Finally, the analytics must be in place to be able to mine the data. These are just some functionalities of this system, the basis of which is the SAP Connected Health platform, which uses the real-time analytics capabilities of SAP HANA.

Markus: Considering the amount of data which must be analyzed, this becomes a Big Data challenge. SAP HANA is well suited as the basis for a highly responsive system for managing these data loads. CBmed’s strategy is to drive innovative topics such as biomarker-based precision medicine and optimized clinical trial execution and recruitment. For this, real-time analytics is important, as strict response-time requirements on the system must be met; for instance, when a clinician collects and inspects patient data and wants to know whether certain patients are good candidates for a clinical trial. Therefore, fast access and response times must be guaranteed, and well-structured data (extracted from unstructured sources as previously mentioned) must be available and easily accessible at all times.

Why is the Global Biobank Week conference of interest to you?

Markus: First of all, CBmed and SAP will both be exhibiting at this event, in two adjacent booths (#34 and #35), so I am looking forward to conversations with the other delegates, and to hearing about their experiences, expectations and the challenges that they have today or expect for the future. Biobanking is a Big Data topic, and the biobank specimens hold a wealth of information about known, but also yet to be discovered, biomarkers. This treasure can only be unveiled with the right tools, applied in the right order. Only then can the data be made accessible, semantically interpretable and interoperable, ultimately leading to an ideal connection of the clinical and biospecimen information. The attendees at this event may be curious to hear how CBmed and SAP are solving this challenge, how we go about mining biomedical information, and how they themselves can benefit from our joint effort.

Peter: Just like Markus, I look forward to speaking to as many people as possible on site. I am curious to learn what the data needs and challenges are for “biobankers” and other researchers in this area. One additional aspect is cohort analysis, for which SAP has developed a dedicated application (SAP Medical Research Insights), which also uses the real-time analytics capabilities of SAP HANA. In research scenarios this has proven very useful, for instance for the analysis of melanoma patient cohorts across hundreds of parameters per patient. At last year’s event, several presentations addressed national and international cohorts, and I would like to explore with the attendees how SAP’s technology can support these activities.

What are the greatest challenges that hamper successful mining of Big Data?

Markus: Unleashing the data from the clinical information systems is a challenge for all of us. The transfer of structured data, such as lab results, poses few problems, but the analysis of text documents requires a robust interface. Clinical text is difficult to analyze due to its compactness and idiosyncratic terminology. Privacy is another issue for projects like these. Right from the start of the project we have addressed this by storing all data in the SAP Connected Health platform in a pseudonymized1 manner. Our activities are constantly monitored by a data protection expert and take into account national and international regulations like the EU GDPR or the U.S. HIPAA “safe harbor” criteria. We are also evaluating de-identification systems, which identify and eliminate sensitive passages like patient names in clinical texts, so that access to clinical documents can be granted to a broader group of researchers. This would mean that de-identification is performed on the fly with the help of a trained system. A lot of functionality has to be in place: Extract-Transform-Load (ETL) workflows (specifying which data items are embedded where, and where they have to be transferred to) and NLP as a service (extracting information through machine learning and rules, as well as ontological and terminology services adaptable to the language and clinical domain); these are just some of the aspects that have to be addressed. I will reveal results in my presentation “Secondary Use of Clinical Routine Data for Enhanced Phenotyping of Biobank Sample Data” – Conference Session 6B, “Biobanks and electronic health records”, on Thursday 14th September, 15h45.

Peter: To add to that, we soon realized that end-users of this system (researchers and physicians) have specific expectations about how the data is presented. A special interface, the “Patient Quick View,” is being developed with and for these users. Physicians simply do not have the time to browse through hundreds of pages to find the information necessary to treat patients with chronic illnesses, and therefore smarter solutions must be provided.

Markus: Within this “Patient Quick View,” the “Timeline View” visualizes the frequency and characteristics of a patient’s past encounters, or how a clinical biomarker (e.g. creatinine or HbA1c) used for monitoring chronic disorders evolved over time. Another feature we plan to implement is personalization; for a certain user profile, data is shown in a special, prioritized way. Consider a surgeon: this person is more interested in seeing past operations, whereas a cardiologist is more interested in lab values and past medications. A core asset of the “Patient Quick View” will thus be a focused display of the most relevant patient parameters for a specific user.

Peter: The language issues are challenging too. Whereas many systems focus exclusively on English, in Europe systems must be adapted to many more languages. SAP is very familiar with this issue and knows how to tackle it. NLP systems need to process documents in the local language and store the biomedical information in a format that enables understanding: in German in the case of this project.

Markus: Language-specific resources must be built and adapted, including vocabularies and text collections (so-called corpora) that represent the language of a specific type of document (like radiology reports or dermatology discharge summaries). Mapping of German-language clinical terms to international semantic standards like ICD, LOINC, or SNOMED CT must be in place. We are pioneering the automated mapping of German-language clinical terms – as used in clinical texts – to codes of the international terminology SNOMED CT. Finally, we need corpora with human mark-up, which are necessary to train certain NLP components and, most importantly, to assess the quality of components so that we can predict what is found and what is missed. For example, in the German-speaking community there are no de-identified clinical (gold-standard) corpora openly available that could foster NLP in the clinical domain. Recently, the advantages of data-driven approaches and deep learning have been demonstrated for NLP, but their use requires a certain amount of training data, ideally made available to the research community.

Peter: The advantage is that many of these CBmed-specific needs may well be included in future releases of SAP products, developed with input and feedback from CBmed.

Additional information

  • Markus, Peter and I will be on site in Stockholm. Visit CBmed and SAP in booths 34 and 35 at the Global Biobank Week. Pre-arrange a meeting by leaving a comment below this post, or contacting me through @clesucr.
  • Mark your calendar to attend Markus’ presentation: “Secondary Use of Clinical Routine Data for Enhanced Phenotyping of Biobank Sample Data” – Session: 6B  “Biobanks and electronic health records”, Thursday 14th September, 15h45.
  • Follow us on Twitter: @SAPHealth, @Clesucr and @CBmed_News. #GBWstockholm

1 De-identification is the process used to prevent a person’s identity from being connected with information. Pseudonymization is a procedure by which the most identifying fields within a data record are replaced by one or more artificial identifiers, or pseudonyms. Anonymization is the process of either encrypting or removing personally identifiable information from data sets, so that the individuals described by the data remain anonymous (source: wikipedia.org)
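To make the distinction in the footnote concrete, the following generic sketch shows one common way pseudonymization can be implemented: a keyed hash (HMAC) replaces a direct identifier with a stable artificial identifier, so records can still be linked without exposing the original value. This is purely illustrative and does not describe how the project discussed above implements pseudonymization.

```java
// Generic pseudonymization sketch: derive a stable pseudonym from a patient
// identifier using HMAC-SHA256. Illustrative only, not the project's implementation.
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Pseudonymizer {

    private final SecretKeySpec key;

    public Pseudonymizer(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Returns the same pseudonym for the same identifier; the identifier
    // cannot be recovered from the pseudonym without the secret key.
    public String pseudonymize(String patientId) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        byte[] digest = mac.doFinal(patientId.getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(digest);
    }
}
```

Unlike anonymization, pseudonymized data can in principle be re-linked to the original identifiers by whoever controls the key material, which is why key management and access control remain part of the data-protection concept.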

As your business progresses and grows, the amount of data kept in the database tables of your SAP system grows as well. Like any other database, Db2 for IBM i has several technical limits that cannot be exceeded, among them the maximum number of rows in a non-partitioned table or a table partition, the maximum size of an index, and the maximum amount of variable-length data. A complete list of database limits can be found in the section SQL Limits of the SQL Reference information in the IBM Knowledge Center for IBM i.

There are different ways to monitor database growth in your SAP system: you can use the SAP transactions DB02 or DBACockpit to display the largest tables or the tables with the most rows in your database, or you can look at the system catalog view QSYS2/SYSLIMITS to get information about objects that are approaching a system limit. In our blog entry from January 16th, 2016, we explained how to look at the system limits view through transaction DBACockpit.
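If you want to script such a check outside of DBACockpit, the system limits view can also be queried over JDBC. The sketch below uses the IBM Toolbox for Java (jt400) driver; the host name and credentials are placeholders, and the column names (SYSTEM_SCHEMA_NAME, SYSTEM_OBJECT_NAME, LIMIT_ID, CURRENT_VALUE) are assumptions based on the documented layout of QSYS2.SYSLIMITS, so verify them against the SQL Reference for your release.

```java
// Sketch: list the objects with the highest tracked values in QSYS2.SYSLIMITS
// over JDBC (IBM Toolbox for Java / jt400). Host, credentials, and column names
// are assumptions; check the SQL Reference for your IBM i release.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SysLimitsCheck {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:as400://myhost";  // placeholder host name
        try (Connection con = DriverManager.getConnection(url, "MYUSER", "MYPWD");
             PreparedStatement ps = con.prepareStatement(
                 "SELECT SYSTEM_SCHEMA_NAME, SYSTEM_OBJECT_NAME, LIMIT_ID, CURRENT_VALUE "
               + "FROM QSYS2.SYSLIMITS "
               + "ORDER BY CURRENT_VALUE DESC "
               + "FETCH FIRST 10 ROWS ONLY");
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // Compare these values manually against the documented database limits.
                System.out.printf("%s/%s (limit %d): current value %d%n",
                        rs.getString(1), rs.getString(2),
                        rs.getLong(3), rs.getLong(4));
            }
        }
    }
}
```

Filtering on a specific LIMIT_ID (the IDs are documented in the IBM Knowledge Center) would narrow the result to a single limit, for example the maximum number of rows per table partition.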

However, all these methods require a regular check of object sizes and limits. So far there has been no alerting when critical limits were approached. With the latest set of Db2 PTF groups for IBM i (SF99701 level 42 for IBM i 7.1, SF99702 level 17 for IBM i 7.2, or SF99703 level 5 for IBM i 7.3), alerting for some system limits was introduced. When these PTF groups are installed, a message is sent to the QSYSOPR message queue when the number of rows in a non-partitioned table or table partition exceeds 90% of the maximum number, when an index size exceeds 90% of the maximum index size, or when the number of storage segments for variable-length data (as used in VARGRAPHIC, BLOB, or DBCLOB columns) exceeds 90% of the maximum value. The message is re-sent once per day until the problem is resolved. A detailed description of the new alerting can be found at http://ibm.biz/DB2foriAlerts.

When a limit is approaching, you should look for a solution as soon as possible. Once the limit is reached, you can no longer insert data into the affected table. The solution depends on the application and table that is affected. The first approach should always be data reduction, for example by archiving and deleting data. If archiving is not an option, you can also consider table partitioning. For SAP Business Warehouse (BW) systems, table partitioning is described in SAP Note 815186. For non-BW systems, table partitioning is described in SAP Note 2187681.