In the first part of this documentation (pt. 1), I explained and showed in detail how to proceed with the mandatory landscape and component setup of the SAP Predictive Maintenance and Service (PdMS) solution.
In this second part (pt. 2), I will explain and show how to configure the applications regarding:

  • Thing Model Service
  • Data Science Service
  • Insight Provider (Map, Asset Explorer, Components, Key Figures)

 

What is the Thing Model Service?

Before you can start configuring the other software components of SAP PdMS, we need to configure the IoT application services first, which consist of:

Configure the Configuration Services

  • The configuration services are used to manage the configuration of the Thing model in the form of packages. A package is a logical group of meta-data objects such as thingTypes, propertySetTypes, properties, and so on.

Configure Thing Services

  • The Thing services allow you to create, update, and delete things that are instances of the thing types modeled using the configuration services.

What is the Data Science Service?

The Data Science Service (DSS) mainly contains three services, one per specific use case:

What are the Insight Providers?

Insight Providers are micro-services that provide analytical or predictive functionality.
Typically, an Insight Provider is a three-tier XSA application with a UI layer (UI5, JavaScript), a service layer (Node.js, Java), and a persistence layer (HANA using HDI).
Insight Providers consume data from the PdMS data model using HANA views.

 

Thing Model Service Configuration

You configure the thing model services using REST APIs. The SAP HANA REST API includes a file API that enables you to browse and manipulate files and directories via HTTP.

The File API included in the SAP HANA REST API uses the basic HTTP methods GET, PUT, and POST to send requests, and JSON is used as the default representation format.

For later use, I will show below the composition of the package that I will define and configure for the next parts of this documentation.

The package needs to be defined by creating JSON code; once the code is written, I recommend validating it with a JSON validator.
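As a minimal sketch, here is how such a package payload could be built and validated programmatically. The structure below is illustrative only: the package ID 'core.evaq' comes from my configuration, but the thing type, property set type, and property names are assumptions, and the exact schema depends on your PdMS / IoT Application Services version.

```python
import json

# Illustrative Thing Model package payload; the nested object names follow the
# meta-data objects mentioned above (thingTypes, propertySetTypes, properties),
# but the concrete field layout is an assumption -- adapt it to your release.
package = {
    "id": "core.evaq",                # package ID used later in the GET check
    "thingTypes": [
        {
            "name": "core.evaq:pump",               # hypothetical thing type
            "propertySetTypes": [{"name": "core.evaq:readings"}],
        }
    ],
    "propertySetTypes": [
        {
            "name": "core.evaq:readings",           # hypothetical property set type
            "properties": [
                {"name": "temperature", "type": "Numeric"},  # hypothetical properties
                {"name": "vibration", "type": "Numeric"},
            ],
        }
    ],
}

# Serialize and re-parse: this is the programmatic equivalent of running the
# code through a JSON validator before posting it.
payload = json.dumps(package, indent=2)
json.loads(payload)  # would raise ValueError if the JSON were malformed
```

A standalone validator or an IDE with JSON schema support does the same job; the point is simply to catch malformed JSON before the REST call instead of debugging an HTTP error response.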

Once the code is validated, we need to use the REST POST method to load it into HANA. Since I'm using Mozilla Firefox, I have downloaded the RESTClient add-on to perform the call.
Once RESTClient is open, I select the POST method, paste my HANA REST URL, and copy the code into the body; if everything is right, you should get return code 201.
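The same POST can be scripted instead of using a browser add-on. The sketch below uses only the Python standard library; the host, port, and service path are hypothetical placeholders that you must replace with your own HANA REST URL.

```python
import json
import urllib.request

# Hypothetical endpoint -- substitute your actual HANA REST URL here.
CONFIG_URL = "https://hana.example.com:4300/pdms/ConfigurationService/packages"

# The body is the validated package JSON (shortened here for illustration).
body = json.dumps({"id": "core.evaq"}).encode("utf-8")

req = urllib.request.Request(
    CONFIG_URL,
    data=body,
    method="POST",
    headers={"Content-Type": "application/json"},
)

# The actual send is commented out because it needs a reachable HANA instance;
# an HTTP 201 response means the package was created.
# with urllib.request.urlopen(req) as resp:
#     assert resp.status == 201
```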

To validate the creation of the package, I will use a GET method on the package ID ('core.evaq') and check the response body.
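Scripted, the GET check looks like this; the URL below is again a hypothetical placeholder, with only the package ID 'core.evaq' taken from my configuration.

```python
import urllib.request

# Hypothetical read URL addressing the package by its ID.
PACKAGE_URL = "https://hana.example.com:4300/pdms/ConfigurationService/packages('core.evaq')"

req = urllib.request.Request(PACKAGE_URL, method="GET")

# Commented out for the same reason as the POST: it needs a live instance.
# The response body should describe the package that was just created.
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode("utf-8"))
```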

With my configuration service completed, I will now create the thing services; I will proceed the same way I did for the configuration service.

Note that the thing service URL is not the same as the configuration service URL.

Once a thing is created, I can check its ID by using the GET method; this is useful when there are several things in the configuration.

Note: you can check the complete model configuration setup and definitions in the views "com.sap.pdms.smd:META.xxxx" in the SAP_PDMS_DATA schema.

The base configuration is now done; I can proceed further with the next step by configuring the Data Science Services.

 

Configuration of Data Science Service

As an alternative to the REST API configuration, we can configure the DSS by using the UIs; we need to run the configuration with the "datascience" user.
Select "Manage Data Science Services" and select the Model, Training and Scoring app.

Each of the fields needs to be filled in according to the package created earlier, so that the property and table fields match.
Note that I will use the PCA use-case algorithm.

Let’s have a look at the detail:

No rocket science here 😉

For the field below:
Table for training and scoring: this is the name of the data fusion view in SAP HANA used for training. This view is executed whenever a model is trained, and it is additionally filtered by the time frame defined in the model training call for the training view.

The view and table for the above picture should be created before the fields are filled in; here are my table and view creation statements below.

Property Set Type ID: this field is where you define the propertySetTypeID you want to configure the model for.

In my case, those IDs are:

Data Science Service: this field is pre-configured, for instance with the namespace "com.sap.pdms.datascience" when only standard algorithms are available, together with the available algorithms.

Model – Generic Data: these fields contain the names of the columns to be used as input to the model.

HyperParameters: these parameters are specific to the model you want to configure; since I'm running the Data Science Service PCA algorithm, the fields "group.by" and "sort.by" are mandatory.
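As a small sketch, the two mandatory hyperparameters could be expressed like this. The column names THING_ID and TIMESTAMP are assumptions standing in for the grouping and ordering columns of your own fusion view.

```python
# Hypothetical hyperparameter values for the PCA model; replace the column
# names with the ones from your training/scoring fusion view.
hyperparameters = {
    "group.by": "THING_ID",   # assumed grouping column
    "sort.by": "TIMESTAMP",   # assumed ordering column
}
```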

Here is my model once created

After my model is created, two new tiles appear, "Train" and "Score". From here you can define the time frame of the training and check the scoring status in the Scoring Data section of the scored model. Once scheduled, the training job can be checked from the Job Log URL.

 

Configuration of the Insight Providers

The configuration of the Insight Providers can be done through the web interface by using the PDMS_TECH_USER.

But before configuring any of the Insight Providers, I need to create two specific fusion views:

  • One fusion view to define the parent-child relationship between the assets and components used for key figures
  • Another fusion view to define the readings used for key figures

I will use the following script to create my views
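Since my scripts are shown as screenshots, here is a minimal sketch of what the two fusion views could look like. Everything in it is an assumption: the view names, the source tables, and the column names must all be adapted to your own model; only the SAP_PDMS_DATA schema comes from my setup.

```python
# Illustrative HANA SQL for the two fusion views, held as strings so they can
# be executed with the SQL client of your choice. All object names below the
# schema are hypothetical.

# Parent-child relationship between assets and components (for key figures).
HIERARCHY_VIEW_SQL = """
CREATE VIEW "SAP_PDMS_DATA"."V_ASSET_COMPONENTS" AS
SELECT "PARENT_THING_ID" AS "ASSET_ID",
       "THING_ID"        AS "COMPONENT_ID"
FROM "SAP_PDMS_DATA"."THING_RELATIONS"
"""

# Readings used for key figures.
READINGS_VIEW_SQL = """
CREATE VIEW "SAP_PDMS_DATA"."V_KF_READINGS" AS
SELECT "THING_ID", "TIMESTAMP", "TEMPERATURE", "VIBRATION"
FROM "SAP_PDMS_DATA"."SENSOR_READINGS"
"""
```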

Note: as you can see in my picture, I have highlighted the SAP_PDMS_DATA schema; several users created during the installation of PdMS are of type "Restricted".

In order to run the different scripts, which require some authorizations on the different schemas, you will need to grant the necessary roles to the user that creates the Key Figures.

Once the views are created, I will need to create a stored procedure.

Now that this is done, I can create my key figures from the UI.

Once created, I add it to the key figure set.

Now, from the AHCC, I can add the new KPI from the Insight Provider Catalog.

I can see my created KPI

Once added
Note: for this documentation, since I have not yet injected any data into my PdMS environment, I will use a custom procedure that calculates a random double value between 0 and 100.
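The placeholder logic of that procedure can be sketched in a few lines. In PdMS itself this lives in a HANA stored procedure; the Python below just illustrates the same idea, with the function name being my own.

```python
import random

def random_kpi_value() -> float:
    """Placeholder KPI value: a random double between 0 and 100, used
    until real sensor data is injected into the environment."""
    return random.uniform(0.0, 100.0)

value = random_kpi_value()
```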

With my KPI configuration done, I will now take care of the configuration of the different Insight Providers needed for my documentation.

Let’s start with the Asset Explorer

All the configuration points are based on the model created earlier; the first attribute to configure, in order to be exposed to the AHCC, is the filter.

The second point is the Asset List

And finally my Component List.

Now that this is completed, I will add the Asset Explorer from the AHCC Insight Provider Catalog.

The next one I will configure is the “Component”

And add it from the AHCC.

I continue with the Map

For the Map, only the three following providers are supported:

  • OpenStreetMap
  • Esri
  • Nokia HERE

For my documentation, I will use OpenStreetMap, so the highlighted parameters are mandatory.
The URL in "Layer Url" is http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png and the "Mandatory" check box must be checked.
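To see how that template resolves: {s} is the tile-server subdomain (a, b, or c for OpenStreetMap), {z} the zoom level, and {x}/{y} the tile coordinates. A tiny sketch, with arbitrary example values:

```python
# The Layer Url template from the configuration above.
TEMPLATE = "http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png"

# Example resolution with arbitrary subdomain/zoom/tile values.
url = TEMPLATE.format(s="a", z=12, x=2048, y=1362)
# -> http://a.tile.openstreetmap.org/12/2048/1362.png
```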

In "Layer Option", the attribution must contain the copyright of the map provider; in the case of OpenStreetMap I use "&copy; <a href=\"http://osm.org/copyright\">OpenStreetMap</a> contributors".

Once done with my configuration, I add the Map Insight Provider from the AHCC.

My base configuration of PdMS is now completed for my future model; in my next part (pt. 3), I will explain how to inject data into PdMS from my Raspberry Pi with GrovePi sensors.
