The SAP Sustainability Control Tower (SCT) is SAP’s solution for holistic environmental, social and governance (ESG) reporting. With the SAP SCT, recording, reporting and acting on your company’s ESG data becomes easy!

The SCT offers a range of pre-defined metrics aligned with regulatory frameworks such as the ESRS and the EU Taxonomy. To calculate these metrics within the SCT, you of course have to provide the data, which makes uploading data into the SCT one of the key steps. While this can be done via manual file upload, a better option is one of the automatic integrations offered, for example, for SAP Sustainability Footprint Management or SAP Datasphere. But there is also another way to upload data into the SCT: the Inbound API. Follow along for a comprehensive tutorial on how to use this API and learn how you can connect any system to the SCT yourself!

Using one example data record, we will walk you through the different API calls that you can and have to make in order to publish this record in the SCT. The code examples are presented in Python, but the information about the API requests can of course be applied in any language.

Requirements

Before you can start using the SCT API, you need to make sure that you are subscribed to the API service.

  1. In the BTP subaccount in which you have set up the subscription for the SCT service, check under “Instances and Subscriptions” whether you already have an instance of the SCT API with a service key. You need the Subaccount Administrator role for this.

  2. If no instance for your SCT has been created yet, you can create one by clicking “Create”. Choose Sustainability Control Tower (sct-service-api) as the service and select a standard (Instance) plan. Select a runtime of your choosing, e.g. Cloud Foundry, and a space, set an instance name and click “Create”. An instance subscribing you to the SCT API will be created.

  3. Lastly, create a service key for this instance. It contains the client credentials that you will need to authorize yourself via OAuth2.0, as well as the endpoints for the Inbound and Outbound API of the SCT.

Now you are all set up to connect to the API and push data to the SCT!

These steps are also explained in the SCT setup documentation, SAP Help Portal - SCT - Subscribing to the Application and Services, but this link is restricted to SCT system owners only.

The general API reference can be found in the SAP Business Accelerator Hub.


Screenshot 2024-03-01 at 10.14.42.png
Screenshot 2024-03-01 at 10.25.17.png
Screenshot 2024-03-01 at 10.27.19.png
service_key_3.png
Screenshot 2024-02-16 at 10.01.20.png
 

Before using the API

Before actually pushing data to the SCT via the API, let’s briefly look at the OAuth2.0 authorization and the expected data format for the SCT.

Preparing your data

The SCT requires a specific data model for uploading individual records for any of the measures that are provided. The exact schema for each measure can be found in the “Manage ESG Data” app under Export Template or in the SCT Help Portal.

So let’s take the following example, where we have one record stored in an Excel file, using the master data structure of the SCT demo data:

excel_example.png

We want to record that one employee was injured in January 1986 in our company with the company code MX001. The columns ID_BODY_SIDE, ID_INJURY_SEVERITY and ID_INJURY_TYPE refer to parameters of the injury (which part of the body was affected, whether it was fatal and how it occurred), and since we are dealing with an injury, we upload this data to the INJ_INJURED_PERSON measure.

For it to work with the API, however, your data must be in JSON format. This means that every record (a row in your Excel file) is an element in a list of objects, and every object contains key-value pairs with the variable names (the columns in your Excel file) and their respective values. See Push Data into Injuries DPI for a reference on the key names (all in camelCase), as they differ from the column names of the Excel file.

This turns our example record into this format:

 

 

{
  "runContext": {
    "measureId": "INJ_INJURED_PERSON",
    "isUpdateProcess": false
  },
  "injuries": [
    {
      "sourceId": "SCT_DEMO",
      "companyCodeId": "MX001",
      "isMainInjury": "0",
      "bodySideId": "3",
      "contractTypeId": "EMP",
      "orgUnitId": "",
      "businessLocationId": "",
      "periodType": "M",
      "periodYear": "1986",
      "periodMonth": "1",
      "periodQuarter": "1",
      "injurySeverityId": "1",
      "injuryTypeId": "6",
      "measureUnit": "",
      "measureValue": "1",
      "customDimensions": [
        {
          "dimensionId": "Z_CUSTOM_DIS",
          "value": "DEB"
        }
      ]
    }
  ]
}

 

 

In addition to our example record, which is listed under the “injuries” part, we also need to specify the “runContext”. The “runContext” specifies the measure you want to push data for and whether you are updating records or pushing new ones.

With “measureId”, the measure is specified. In our case, this is the aforementioned INJ_INJURED_PERSON.

With “isUpdateProcess”, you can specify whether you want to update existing data points (then “isUpdateProcess” would be true) or append new records (then “isUpdateProcess” would be false). As we want to publish new records, we set this to false.

The “customDimensions” block is optional and is included here only for demonstration purposes.
For certain DPIs, the SCT also supports pushing custom dimensions via the API. In the “Manage Custom Dimensions” app, you can set up a new custom dimension and link it to a DPI. After that, you need to upload master data for this dimension (containing the allowed values and their technical IDs). Once this is done, you can push records that contain data for this custom dimension following the structure given in the example: “dimensionId” refers to the Dimension ID, here Z_CUSTOM_DIS, and “value” refers to the technical ID of one of your accepted values, e.g. DEB in our example above. See the screenshot for clarification:

custom_dimensions.png
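
If your source data lives in an Excel file like the one above, the JSON payload can also be assembled programmatically. Below is a minimal sketch using pandas; the file name and all column names other than ID_BODY_SIDE, ID_INJURY_SEVERITY and ID_INJURY_TYPE are assumptions, so adapt them to your export template:

import json
import pandas as pd

# read the Excel file containing the injury records (hypothetical file name)
df = pd.read_excel("injury_records.xlsx")

injuries = []
for _, row in df.iterrows():
    injuries.append({
        "sourceId": str(row["ID_SOURCE"]),              # assumed column name
        "companyCodeId": str(row["ID_COMPANY_CODE"]),   # assumed column name
        "bodySideId": str(row["ID_BODY_SIDE"]),
        "injurySeverityId": str(row["ID_INJURY_SEVERITY"]),
        "injuryTypeId": str(row["ID_INJURY_TYPE"]),
        "contractTypeId": str(row["ID_CONTRACT_TYPE"]), # assumed column name
        "periodType": "M",
        "periodYear": str(row["PERIOD_YEAR"]),          # assumed column name
        "periodMonth": str(row["PERIOD_MONTH"]),        # assumed column name
        "periodQuarter": str(row["PERIOD_QUARTER"]),    # assumed column name
        "measureUnit": "",
        "measureValue": str(row["MEASURE_VALUE"]),      # assumed column name
    })

payload = {
    "runContext": {"measureId": "INJ_INJURED_PERSON", "isUpdateProcess": False},
    "injuries": injuries
}

# save the payload so it can be loaded again in the push step further below
with open("your-data-file.json", "w") as json_file:
    json.dump(payload, json_file, indent=2)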

Retrieving your access token

As a last preparation step before pushing records to the SCT, you have to retrieve an access token via OAuth2.0 using the service key credentials set up earlier. You will need your "clientid", your "clientsecret" and your token URL. The token URL is the URL in the "uaa" part of the service key plus “/oauth/token”. For example:

 

 

token_url = "https://<your_company_space>.sct.authentication.eu20.hana.ondemand.com/oauth/token"
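
If you have saved the service key as a JSON file, you can also derive these values programmatically. Here is a minimal sketch; the file name is hypothetical, and it assumes the common service key layout in which "clientid", "clientsecret" and the authentication URL sit inside the "uaa" section (check your own service key, the layout may differ):

import json

# load the service key downloaded from the BTP cockpit (hypothetical file name)
with open("sct_service_key.json") as key_file:
    service_key = json.load(key_file)

# assumption: credentials and authentication URL are located under "uaa"
client_id = service_key["uaa"]["clientid"]
client_secret = service_key["uaa"]["clientsecret"]
token_url = service_key["uaa"]["url"] + "/oauth/token"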

 

 

You can retrieve an access token by sending a POST request to the token URL. Here is an example of how to do it in Python:

 

 

import requests

#retrieve access token

client_id = "your_client_id" 
client_secret = "your_client_secret"
token_url = "your_token_url" 

token_data = {
    'grant_type': 'client_credentials',
    'client_id': client_id,
    'client_secret': client_secret,
}

# Make a POST request
token_response = requests.post(token_url, data=token_data)

if token_response.status_code == 200:
    print("Retrieval of access token successful")
else:
    print(f"Token request failed with status code: {token_response.status_code}")
    print(token_response.text)

#store access_token in a variable for later use
access_token = token_response.json().get('access_token')

 

 

 

Request Table

The following table gives a quick overview of all the API requests you need to make to successfully publish data to the SCT, including the necessary methods, URLs, headers and bodies.

Just follow along as we explain each step in more detail based on our example record.

The base URL for all Inbound API related requests can be taken from the service key under "DPIs". The full API reference can be found in the SAP Business Accelerator Hub.

Purpose: Retrieve access token
Method: POST
URL (Endpoint): <uaa-url>/oauth/token
Headers: –
Body:
{
  'grant_type': 'client_credentials',
  'client_id': client_id,
  'client_secret': client_secret
}

Purpose: Push data into DPI
Method: POST
URL (Endpoint): <DPIs-url>/<DPIYouWantToPushDataTo>
Headers:
{
  "Authorization": f"Bearer {access_token}",
  "DataServiceVersion": "2.0",
  "Accept": "application/json",
  "Content-Type": "application/json"
}
Body: your records in JSON format, as shown in the data preparation step

Purpose: Validate data
Method: POST
URL (Endpoint): <DPIs-url>/validate
Headers:
{
  "Authorization": f"Bearer {access_token}",
  "DataServiceVersion": "2.0",
  "Accept": "*/*",
  "Content-Type": "application/json"
}
Body:
{
  "runId": run_id
}

Purpose: Get validation results
Method: GET
URL (Endpoint): <DPIs-url>/validationResults(runId='{run_id}')
Headers:
{
  "Authorization": f"Bearer {access_token}",
  "DataServiceVersion": "2.0",
  "Accept": "application/json"
}
Body: –

Purpose: Publish data
Method: POST
URL (Endpoint): <DPIs-url>/publish
Headers:
{
  "Authorization": f"Bearer {access_token}",
  "DataServiceVersion": "2.0",
  "Accept": "*/*",
  "Content-Type": "application/json"
}
Body:
{
  "runId": run_id
}
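
Since the push, validate and publish calls share the same base URL and almost identical headers, it can be convenient to define them once. Here is a small sketch, using the base URL of our example system (take yours from the "DPIs" entry of the service key); the snippets below spell the headers out explicitly so that each step can be read on its own:

# base URL from the "DPIs" entry of the service key
base_url = "https://eu20.sct.sustainability.cloud.sap/api/sct-sourcing-service/v1/DPIs"

def sct_headers(access_token, accept="application/json"):
    # common headers for the SCT Inbound API requests shown in the table above
    return {
        "Authorization": f"Bearer {access_token}",
        "DataServiceVersion": "2.0",
        "Accept": accept,
        "Content-Type": "application/json",
    }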


Using the API

The process of uploading data to the SCT consists of several steps. It is not possible to push data directly into the SCT database tables themselves. Instead, the data is first brought into the Data Provider Interface (DPI) layer. There it has to be validated against the master data of the SCT. After a successful validation, the data can be published to the SCT tables, where it becomes visible as part of the metrics.
This process is also what happens behind the manual data upload or the import via Datasphere in the “Manage ESG Data” app, for example, and it is of course also mandatory when using the SCT Inbound API.

Pushing Data to the DPI

As a first step, we need to push the data to the respective DPI. For our example, the INJ_INJURED_PERSON measure is part of the Injury DPI, so we need to push the data to the “/Injuries” endpoint.

The full list of DPI endpoints is available in the API reference in the SAP Business Accelerator Hub.

When pushing to the DPI, you need to make sure the headers and URL you use are correct. You can retrieve the necessary header arguments and the URL from the example or the table.

For our injury example, it would look like this in Python (for reference, check out the Code Snippet Push API):

 

 

#posting data
import json  #needed to load the prepared payload; requests was imported earlier

with open("your-data-file.json", 'r') as json_file:
    data = json.load(json_file)

url = "https://eu20.sct.sustainability.cloud.sap/api/sct-sourcing-service/v1/DPIs/Injuries"
headers = {
    "Authorization": f"Bearer {access_token}", #your authorization key
    "DataServiceVersion": "2.0",
    "Accept": "application/json",
    "Content-Type": "application/json"
}

#Make a POST request
post_response = requests.post(url, headers=headers, json=data) 

#check whether the request was successful
if post_response.status_code == 200:
    print(post_response.text)
else:
    print(f"Request failed with status code {post_response.status_code}")
    print(post_response.text)

 

 

This gives us the following post_response in JSON format:

 

 

{'@context': '$metadata#DpiService.response', 
'@metadataEtag': 'W/"249ec913ea2cb6e21b17ee04c7a"', 
'runId': 'ef696387-5e61-49ad-b0e7', 
'message': '1 records posted successfully'
}

 

 

Whenever you successfully send a request to post data to the SCT, a "runId" is generated, which you will need to validate and publish the data later on. You can retrieve the "runId" from the post response to the DPI like this:

 

 

#get your runId
run_id = post_response.json().get("runId", None) #retrieve the runId from the post_response
run_data = {
    "runId": run_id
}

 

 

You can also see that the import process has been started in the SCT. In the "Manage ESG Data" app, there is an open import process for the Injury DPI telling us that a validation is pending:

import_process_start.png

Side note: Currently it is not possible to push audit metadata, such as who pushed which data and when, via the API. Therefore, the import process is shown as started by an anonymous user.

Validating the Data

The next step after the successful push to the DPI is to validate your data. This is done by posting the "runId" to the “/validate” endpoint.

For our example record, validating it could look something like this (for reference: Code Snippet Validate API):

 

 

#validating data
#headers
validate_headers = {
    "Authorization": f"Bearer {access_token}", #your authorization key
    "DataServiceVersion": "2.0",
    "Accept": "*/*",
    "Content-Type": "application/json"
}

#url
url = "https://eu20.sct.sustainability.cloud.sap/api/sct-sourcing-service/v1/DPIs/validate"

validation_response = requests.post(url, headers=validate_headers, json=run_data)

if validation_response.status_code == 200:
    print(validation_response.text)
elif validation_response.status_code == 204: #validation request yields no content in response
    print("Request was successful, but there is no content in the response.")
else:
    print(f"Request failed with status code {validation_response.status_code}")
    print(validation_response.text)

 

 

With this request, we trigger the validation in the CPE environment of the SCT. This activity is performed asynchronously, so we might have to wait a bit for the validation results to be ready.

Getting the validation results

If the validation request was successful, you can retrieve the validation results (Get validation results API).

As described in the overview table, a GET request against “/validationResults(runId='<runId>')” is needed to fetch the result of the validation run. Note that the "runId" parameter has to be enclosed in single quotation marks.

Since the validation runs asynchronously, you need to make sure it has completed before you retrieve the results. Therefore, it is sensible to poll the GET request for the validation results until the validation is done. Here is one approach:

 

 

##get validation results
import time  #needed for polling; requests was imported earlier

#headers
response_headers = {
    "Authorization": f"Bearer {access_token}",
    "DataServiceVersion": "2.0",
    "Accept": "application/json"
}
#url
url = f"https://eu20.sct.sustainability.cloud.sap/api/sct-sourcing-service/v1/DPIs/validationResults(runId='{run_id}')"

validation_results = {}

status = "IN_PROGRESS"
while status == "IN_PROGRESS":
    response = requests.get(url, headers=response_headers)
    validation_results = response.json()
    status = validation_results.get("status")
    if status == "IN_PROGRESS":
        print("Status is still IN_PROGRESS...")
        time.sleep(5)  #wait before polling again

print(f"status: {status}. Validation completed.")
print(validation_results)

 

 

The validation results will look like this:

 

 

{'@context': '$metadata#DpiService.validationResponse', 
'@metadataEtag': 'W/"249ec913ea2cb6e21b17ee04c7a"', 
'runId': 'ef696387-5e61-49ad-b0e7', 
'status': 'COMPLETED', 
'errorCount': 0,
'totalCount': 1}

 

 

The status indicates that the validation is complete. The "errorCount" indicates how many data points (rows in your original Excel file) are invalid and cannot be uploaded. The "totalCount" indicates how many data points you pushed and which were checked during validation.
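
Based on these two fields, you can decide programmatically whether it makes sense to continue. A small sketch, reusing the validation_results variable from the polling snippet above:

# proceed to publishing only if the run completed without invalid records
ready_to_publish = (
    validation_results.get("status") == "COMPLETED"
    and validation_results.get("errorCount", 0) == 0
)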

If you run into issues with the validation, there are two possible cases. If all your records are invalid, the "errorCount" will be equal to the "totalCount" and the status will be NO_VALID_RECORDS:

 

 

{'@context': '$metadata#DpiService.validationResponse', 
'@metadataEtag': 'W/"249ec913ea2cb6e21b17ee04c7a"', 
'runId': 'ef696387-5e61-49ad-b0e7', 
'status': 'NO_VALID_RECORDS', 
'errorCount': 2,
'totalCount': 2}

 

 

If only some of your records are invalid, the "errorCount" will indicate how many records are invalid, but the status will be COMPLETED, as you could still publish the valid records regardless:

 

 

{'@context': '$metadata#DpiService.validationResponse', 
'@metadataEtag': 'W/"249ec913ea2cb6e21b17ee04c7a"', 
'runId': 'ef696387-5e61-49ad-b0e7', 
'status': 'COMPLETED', 
'errorCount': 1,
'totalCount': 2}

 

 

Unfortunately, you will not get details on what exactly went wrong from the API response. If you want to view the validation results, you will have to go into the “Manage ESG Data” app of the SCT and select the current import process that you triggered:

import_process_failed.png

By clicking “Continue”, you will be directed to the error log. Now you can see what went wrong, resolve the issues in your data and restart the import process.

error_log.png

Thankfully, as our example data is all valid, we can publish it now. In the SCT, we can see that publishing is pending:

import_process_publish.png

Publishing the Data

If the validation results are all fine, you can finally publish your data to the SCT (Code Snippet Publish Data API):

 

 

##publishing data if validation results are clear

url = "https://eu20.sct.sustainability.cloud.sap/api/sct-sourcing-service/v1/DPIs/publish"
publish_headers = validate_headers #same headers as for the validation request

publish_response = requests.post(url, headers=publish_headers, json=run_data) #run_data and publish_headers have been defined above

if publish_response.status_code == 204: #publish request yields no content in response
    print("Publishing triggered successfully")
else:
    print(f"Request failed with status code {publish_response.status_code}")
    print(publish_response.text)

 

 

And you are done! 😁
Your data should be visible in the SCT and the import process in the "Manage ESG Data" app should be finished with the status set to “Published”:

import_process_finished.png

We can also have a look at the MTDAC table of the CPE environment, which contains all of the uploaded records. And indeed, our record for one injured person in January 1986 is visible:

overview_mtdac.png

Summary

In this blog post, we have shown how you can use the SCT Inbound API to push data to the SCT. This enables you to connect any source system you want to the SCT and automate the data upload process. One option, for example, could be a simple side-by-side extension on SAP BTP that preprocesses data before uploading it, or a simple workflow in SAP Build Process Automation.
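
As a starting point for such an automation, the individual calls from this post can be wrapped into a single function. The following is only a minimal sketch under the assumptions made above (base URL, payload structure and header values as in our example), condensing the snippets shown earlier rather than providing an official client:

import time
import requests

def upload_to_sct(base_url, access_token, payload, dpi="Injuries", poll_interval=5):
    """Push, validate and publish one payload via the SCT Inbound API (sketch)."""
    headers = {
        "Authorization": f"Bearer {access_token}",
        "DataServiceVersion": "2.0",
        "Accept": "application/json",
        "Content-Type": "application/json",
    }
    validate_headers = {**headers, "Accept": "*/*"}

    # 1. push the records to the DPI and remember the runId
    push_response = requests.post(f"{base_url}/{dpi}", headers=headers, json=payload)
    push_response.raise_for_status()
    run_id = push_response.json()["runId"]

    # 2. trigger the validation for this run
    requests.post(f"{base_url}/validate", headers=validate_headers, json={"runId": run_id}).raise_for_status()

    # 3. poll the validation results until the run is no longer IN_PROGRESS
    status = "IN_PROGRESS"
    results = {}
    while status == "IN_PROGRESS":
        results = requests.get(
            f"{base_url}/validationResults(runId='{run_id}')", headers=headers
        ).json()
        status = results.get("status")
        if status == "IN_PROGRESS":
            time.sleep(poll_interval)

    # 4. publish only if every record passed validation
    if status == "COMPLETED" and results.get("errorCount", 0) == 0:
        requests.post(f"{base_url}/publish", headers=validate_headers, json={"runId": run_id}).raise_for_status()

    return run_id, results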

Stay tuned for follow-up blog posts on these topics in the coming weeks!
We hope you enjoyed this comprehensive overview of the SCT Inbound API.

Best regards,
Eva and Jonathan
