Recap

In the first part, linked below, we discovered how value mappings work with SAP PI and how to access the web services that allow you to create change lists and value mappings.

Updating Value Mapping via SAP Data Services (Part 1)

Connecting to SAP PI

In order to connect to SAP PI and update value mappings, you need a user with the role SAP_XI_API_DEVELOP_J2EE. In this tutorial we will use the user ID inbound_bods. In our landscape, we often add a suffix corresponding to the specific PI environment. For example, in QA we have specific environments for projects, support, and performance. In Data Services, we will tie these to system configurations, which will be explained later in the tutorial.

Creating the Data Stores

Within the Data Services application, you can connect to other systems via Data Stores. A Data Store can represent a specific database, like Oracle or SQL Server; an application, like SuccessFactors; or a web service. For this tutorial we will create web service Data Stores to connect to SAP PI.

The screenshot below shows the two Data Stores used in this tutorial.

/wp-content/uploads/2015/07/ds_repository_view_614785.png

The Data Stores contain functions which correspond to the operations supported by the WSDL. You can create the Data Store by supplying the WSDL path from Part 1 of the tutorial. An example is below.

ds repository edit screen.png

For the connection, use the User name you defined earlier. Once the connections are established, you can import the functions as you would with any other web service within Data Services.

Reading Existing Value Mappings

At the core of this solution is the ability to track changes against the existing value mappings defined within SAP PI and to create, update, and delete value mappings. In order to do this comparison properly, it is necessary to read the existing value mappings from SAP PI.

The solution is broken into two separate steps. The first step is to query for value mappings, which returns a list of value mapping IDs. The second step is to use those value mapping IDs to read the details.
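The two-step flow can be sketched roughly as follows in Python. This is a minimal sketch under stated assumptions: the `query` and `read` operation names and the stub data are illustrative stand-ins for the actual PI web service operations, not the real WSDL names.

```python
# Sketch of the two-step read: first query for value mapping IDs,
# then read the details for each ID. The service object stands in
# for the SAP PI value mapping web service; operation names are
# illustrative assumptions, not the exact WSDL operations.

def read_all_value_mappings(service):
    """Step 1: query returns only IDs; step 2: read resolves details."""
    mapping_ids = service.query()                 # list of value mapping IDs
    return [service.read(vm_id) for vm_id in mapping_ids]

# Minimal stub standing in for the real web service client.
class StubService:
    def query(self):
        return ["guid-1", "guid-2"]

    def read(self, vm_id):
        return {"ValueMappingID": vm_id, "SchemeID": "CustomerID"}

details = read_all_value_mappings(StubService())
print(len(details))  # one detail record per queried ID
```

In Data Services the same pattern is realized as two data flows, each calling one imported web service function, as described below.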

Setting Up the Tables

Within Data Services, there are two main types of tables. Template tables can be created automatically by Data Services at run-time. They are write-once, read-many: they can be used as a source, but not for a look-up. Regular tables must already exist in the database. They are write-many, read-many and can be used for look-ups. In our environment, we try to get the best of both: we use Data Services to create the table, while still writing to it many times as well as using it for look-ups. I first saw this approach in SAP’s Best Practices for Data Migration and have adapted it for my organization. To achieve this, we have created two Data Stores that point to the same database server and database. One is used for creating tables as template tables, namely DS_INT_TABLES_INIT, and the other holds the actual tables, namely DS_INT_TABLES.

For this to work, we create a job that creates a template table in DS_INT_TABLES_INIT. Then we run the job, which creates the object in the database. Finally, we import the table into DS_INT_TABLES. Now if the structure changes, we change the initialization job and re-import the table.

To create these tables, I have created a job, JOB_CONVERSION_SAP_PI_CREATE_TABLES. I have included the XML of the job details. You can import the job into your environment and change the details of the Data Store to fit your environment.

After executing the job, you will have the following tables:

STG_SAP_PI_VALUE_MAPPING_RESULTS

/wp-content/uploads/2015/07/stg_sap_pi_value_mapping_results_615305.png

STG_PI_VALUE_MAPPING_UPDATE_PROPOSAL

/wp-content/uploads/2015/07/stg_pi_value_mapping_update_615306.png

STG_PI_CHANGE_LIST_ERRORS

/wp-content/uploads/2015/07/stg_pi_change_list_errors_615307.png

Description of the Tables

STG_SAP_PI_VALUE_MAPPING_RESULTS

This table holds the data read from SAP PI. It has the columns from the read operation plus two additional fields, GENERATED_KEY and SYSTEMCONFIGURATION. The generated key is just an internal key to ensure the records are unique. Within SAP PI, uniqueness is determined by the combination of Value Mapping Id, Scheme Id, and Scheme Agency Id. The other additional field is the system configuration. To understand its purpose, you need to understand what a system configuration means within Data Services. A single Data Services instance can connect to the same Data Store with different configurations. A configuration contains the specific connection details for an environment. At my company, we have many QA systems. Each system has one Data Store, but a separate configuration depending on the environment, e.g. QA1, QA2, QA3, and QA4. The individual Data Store configurations are linked together via a System Configuration. For example, we have a system configuration called QA1 that links our production support SAP system to our production support SAP BW system to our production support loyalty system. By storing the system configuration, we get further separation of data for each of our different SAP PI systems. In principle, it works like the client (Mandant) field within SAP.
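One possible way to derive the generated key from the fields that define uniqueness can be sketched as follows. This is an assumption for illustration only: hashing the triple is just one option, and the sample agency value is made up; Data Services could equally populate the column from a database sequence.

```python
import hashlib

def generated_key(value_mapping_id, scheme_id, scheme_agency_id):
    """Derive a stable internal key from the three fields that define
    uniqueness in SAP PI. Hashing is an illustrative choice, not how
    the tutorial's job necessarily implements GENERATED_KEY."""
    raw = "|".join([value_mapping_id, scheme_id, scheme_agency_id])
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()

k1 = generated_key("guid-1", "CustomerID", "LoyaltySystem")
k2 = generated_key("guid-1", "CustomerID", "LoyaltySystem")
assert k1 == k2  # same triple -> same key, so re-reads stay unique
```

The point is only that the key is a deterministic function of the uniqueness triple, so repeated reads of the same mapping land on the same row.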

STG_PI_VALUE_MAPPING_UPDATE_PROPOSAL

This table holds the proposed values to be imported into SAP PI. In the scenario for this tutorial, we will place the customer details from our loyalty system in this table. Its structure matches, almost exactly, the structure in the above table.

STG_PI_CHANGE_LIST_ERRORS

This table is used to track the results of creating or activating a change list.

The Read Job

For this exercise, we will create a job called JOB_SAP_PI_READ_VALUEMAPPING that will call the web services to retrieve and store the current value mappings.

Job Variables

The Job Variables are all optional. They represent the filter criteria for the web service. If no values are passed, then all value mappings are read from SAP PI.

$G_DESCRIPTION Type VarChar(1024)

$G_USER_RESPONSIBLE Type VarChar(1024)

$G_DATE_LAST_CHANGED Type DateTime

$G_USER_LAST_CHANGED Type VarChar(1024)

$G_GROUP_NAME Type VarChar(1024)

$G_SCHEMEID Type VarChar(1024)

$G_SCHEME_AGENCY_ID Type VarChar(1024)

$G_MAPPING_VALUE Type VarChar(1024)
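The filter semantics, where an unset variable simply drops out of the query so that everything is returned, can be sketched like this. The field names are illustrative mappings of the $G_* variables above; the actual WSDL element names may differ.

```python
def build_query_filter(**variables):
    """Build the query payload from the optional job variables,
    omitting any that were not supplied. With no variables set,
    the filter is empty and all value mappings are read."""
    return {name: value for name, value in variables.items() if value is not None}

# No variables set: empty filter, so SAP PI returns all value mappings.
print(build_query_filter(Description=None, GroupName=None))  # {}

# One variable set: only that criterion is sent.
print(build_query_filter(SchemeID="CustomerID"))
```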

Initialization Script

In the initialization script, we simply print the variables. When troubleshooting issues or monitoring jobs, it is helpful to see the variables used when calling the job. The included XML for the read job contains a function I developed to standardize printing variables. It also includes the calling context of the print statement: if you print from within a job, it prints the job name; if you print from within a workflow, it prints the workflow name; and if you print from within a data flow, it prints the owning workflow or job. The function also returns the value you pass in. This can be helpful when troubleshooting an issue, because you can use the function within a mapping and it will print the value while mapping it to the target.
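A rough Python equivalent of that print helper is shown below. The real function is written in the Data Services script language and resolves the calling context itself; here the context is passed in explicitly, which is a simplification.

```python
def print_variable(context, name, value):
    """Sketch of the standardized print helper described above: logs
    the calling context plus the variable, then returns the value so
    the function can be used inline in a mapping."""
    print(f"{context}: {name} = {value}")
    return value

# Used inline in a mapping: the value is logged and still flows
# through to the target column.
result = print_variable("JOB_SAP_PI_READ_VALUEMAPPING", "$G_SCHEMEID", "CustomerID")
assert result == "CustomerID"
```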

DF_SAP_PI_READ_VALUE_MAPPING

The purpose of this data flow is to call the query web service and read the system-generated value mapping IDs. These value mapping IDs are GUIDs generated automatically by SAP PI when a new value mapping is created. In Part 3 we will create and update value mappings. To update an existing value mapping, the mapping ID is required; when creating a value mapping, the mapping ID is left blank and is generated by SAP PI. Note: for language-specific entries like the description, I have hard-coded English. If you require a different language, change the data flow or use a variable.


The data flow stores the resulting mapping IDs in a template table called STG_SAP_PI_VALUE_MAPPING. The same result could have been achieved by feeding the output of this call directly into the read web service call.


DF_SAP_PI_READ_VALUE_MAPPING_RESULTS

This data flow uses the value mapping IDs and, via the read web service, reads the value mapping details. After a series of un-nesting steps, it writes to the STG_SAP_PI_VALUE_MAPPING_RESULTS table.
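The un-nesting amounts to flattening the nested read response into one flat row per mapping value. A minimal sketch, assuming an illustrative response shape and sample values rather than the exact WSDL schema:

```python
def unnest(response):
    """Flatten a nested value mapping response into flat rows,
    roughly what the chain of un-nest transforms in the data flow
    produces before the table load."""
    rows = []
    for vm in response["ValueMapping"]:
        for rep in vm["GroupRepresentation"]:
            rows.append({
                "VALUE_MAPPING_ID": vm["ValueMappingID"],
                "AGENCY": rep["Agency"],
                "SCHEME": rep["Scheme"],
                "VALUE": rep["Value"],
            })
    return rows

# Illustrative nested response: one mapping group with two
# representations (loyalty system ID and SAP customer number).
sample = {"ValueMapping": [{
    "ValueMappingID": "guid-1",
    "GroupRepresentation": [
        {"Agency": "LoyaltySystem", "Scheme": "CustomerID", "Value": "1000"},
        {"Agency": "SAP", "Scheme": "KUNNR", "Value": "0000001000"},
    ],
}]}
print(len(unnest(sample)))  # 2 flat rows from one nested mapping
```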


Next Time

In the next part we will update the value mappings from Data Services.


(The XML file that contains the read job is split into two parts. To use it, merge the two files first.)
