I'm taking the chance of a quiet year end to start another technical blog series, this time about best practices when using the Process Integration Test (PIT) tool of SAP Process Orchestration.

Intro


When running your integration scenario tests, sometimes you need to manipulate the message body or message header to achieve proper test results. Here, you essentially have two options: exemptions/replacement rules and/or message preprocessing. Whereas exemptions and replacement rules are applied after message processing, i.e., when running the verification, preprocessing rules change the data before message processing. For some use cases, both approaches may need to be applied together.

You use exemptions and replacement rules when you expect beforehand that the test messages will differ from the expected outcome. For instance, your message body contains an automatically generated unique identifier or a time stamp, or you may have deviations due to the different landscapes. Further verification issues and ways to mitigate them are described here: Understanding Verification Errors.

You use message preprocessing for cases where the payload or a dynamic header impacts the course of the message processing, e.g., for content-based routing, or when applying a value mapping. Another use case is to extend the test coverage, e.g., if you'd like to test all potential routing paths. See also Message Preprocessing.

In this blog series, I'd like to explain how to use replacement rules and preprocessing along three use cases. In the current blog, let's start with an integration scenario where a value mapping is carried out.

In the following, I focus on the settings in PIT related to the replacement and preprocessing rules. For a detailed description of how to set up a test case from end to end, check out this blog.

Value mapping use case


In the integration scenario, I exchange person data between two systems. Here's an example of such an input file:

<?xml version="1.0" encoding="UTF-8"?>
<ns0:PersonIn xmlns:ns0="http://demo.sap.com/pit">
  <Id>10054122</Id>
  <LastName>Zurowska</LastName>
  <FirstName>Katarzyna</FirstName>
  <Country>PL</Country>
</ns0:PersonIn>


The integration flow is a point-to-point connection between two systems with a mapping in between. In the test landscape, the sender system name is SND_A_TST and the receiver system name is REC_1_TST; in the productive landscape, they are SND_A_PRD and REC_1_PRD, respectively.


In a message mapping, the country that the person lives in is then mapped to a region, so it's actually an n:1 value mapping, e.g.:

  • DE, PL, DK are mapped to Europe

  • CA and US are mapped to North America

  • BR, MX, AR are mapped to Latin America, etc.
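
For illustration, such an n:1 value mapping could be maintained in the Integration Directory as a set of entries between two agency/scheme pairs; the agency and scheme names below are made up for this sketch, your value mapping group will use its own:

Agency: Demo | Scheme: CountryID      Agency: Demo | Scheme: Region
DE  →  Europe
PL  →  Europe
DK  →  Europe
CA  →  North America
US  →  North America
BR  →  Latin America
MX  →  Latin America
AR  →  Latin America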


So, this would be the message body after mapping:

<?xml version="1.0" encoding="UTF-8"?>
<ns0:PersonOut xmlns:ns0="http://demo.sap.com/pit">
  <Id>10054122</Id>
  <Name>Katarzyna Zurowska</Name>
  <Country>PL</Country>
  <Region>Europe</Region>
  <LastChanged>2020/12/22 09:39:06</LastChanged>
</ns0:PersonOut>


Basic test case


We'd like to test this scenario in PIT and hence create the following test case, with the production system A8Z as the source system from which the test data is pulled, and N75 as the target system on which the test runs are carried out.


Sample messages for different countries were read from the production system. We can copy the test data sets and either delete or merge test messages; this way, we created a test data set for each region so that the regions can be tested individually. In the following, however, I just use the first data set in the list, which contains all uploaded messages.


During the test runs, the headers are automatically set according to the integration flow configuration. As mentioned, the system names differ between the two landscapes; hence, the message headers in the source and the target can't be matched, and we would run into a structural error. See also Understanding Verification Errors. To overcome this, we define a replacement rule for each system pair. This is done on the Verification tab of the test case: we select the StructureComparator verification step and maintain two rules, one for the senders and one for the receivers. E.g., we define that SND_A_PRD on system A8Z corresponds to SND_A_TST on system N75. During the verification, the corresponding system names will then be treated as identical.
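
Based on the landscape described above, the two rules pair up the system names as follows:

Sender:   SND_A_PRD on A8Z corresponds to SND_A_TST on N75
Receiver: REC_1_PRD on A8Z corresponds to REC_1_TST on N75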


As you may have guessed from the sample message above, we map the current date and time to the LastChanged field. So, when we rerun the mapping during the test run, a different time stamp will be generated, and hence the verification will show a difference. Since time stamps usually differ anyway, we'd like to exclude them from the comparison. So, we select the PayloadComparator verification step and maintain a corresponding XPath expression.
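
Based on the sample output message above, the XPath expression for this exemption could look as follows (depending on how namespaces are handled in your setup, the prefix may need to be adapted):

/ns0:PersonOut/LastChanged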


To directly run or schedule the test run, we define a run configuration where we maintain the test case, the test data set, and the target system.


After having executed the test run and the verification job, let's open the test case verification result. For the outgoing message, we first select the Message Header node. As you can see, although the system names of the source and the target message exchanges differ, no verification error is raised.


If you select the Payload node of the outgoing message, the differing time stamps are highlighted; however, a note in the Test Verification Problems tab indicates that an exemption exists for this field. Hence, the payload comparison doesn't show any difference either, and overall we don't have any regressions.


Test case with Message Preprocessing


The data set contains only a subset of the possible countries, so the test coverage is quite low. To extend the scope of the test, we use message preprocessing.

In the test case, on the Message Preprocessing tab, we create a new ruleset with a rule for altering the payload. The rule is of type Constant Value Substitution. Here, we maintain the XPath expression pointing to the Country field, and replace an existing country ID with a country ID that we'd like to test in addition. Note that we need to ensure that the source and the target country IDs are within the same region.
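
With the sample input message from above, the rule could, for instance, be maintained with the following values (the exact field labels may differ in your PIT version; NL is the replacement value used in the remainder of this example):

XPath:            /ns0:PersonIn/Country
Value to replace: PL
New value:        NL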


Since we change the country ID of the incoming message, the country ID of the outgoing message will definitely differ from the expected value of the stored test message. So, we need to define an exemption for the Country field. On the Verification tab of the test case, we select the PayloadComparator verification step again, and add a new XPath expression to the existing one.
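
Analogous to the time stamp exemption, the additional XPath expression could look like this:

/ns0:PersonOut/Country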



We define a new run configuration where we maintain the test case, the test data set, and the target system. In addition to the previous run configuration, we here also maintain the previously created preprocessing ruleset.


Then we execute the test run and the verification job. In the test case verification result, we first select the Payload node of the incoming message. As you can see, the country ID in the target message exchange has been replaced with the new value, in this case from PL to NL. This information is also displayed in the Test Verification Problems tab.


If you select the Payload node of the outgoing message, you can see that the n:1 value mapping was successfully carried out, so NL was mapped to Europe. As expected, the country ID as such differs; however, since we have defined an exemption here, the overall test shows no difference.
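
For reference, the outgoing payload of such a test run would then look roughly as follows (the time stamp is illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<ns0:PersonOut xmlns:ns0="http://demo.sap.com/pit">
  <Id>10054122</Id>
  <Name>Katarzyna Zurowska</Name>
  <Country>NL</Country>
  <Region>Europe</Region>
  <LastChanged>2020/12/22 10:15:42</LastChanged>
</ns0:PersonOut>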


Note that this test setup works fine for an n:1 value mapping, including the possibility to schedule and hence automate the test runs and verification. For a 1:1 value mapping, e.g., if you map the country ID to the country name, you can still use preprocessing rules to extend the test scope. However, the comparison with the stored value will then always show a difference, and hence you can't really automate the test runs and verifications. In this case, you need to verify the outcome of your test manually.


With this, I have covered the first use case within the blog series. In the next blog, I will describe a use case based on dynamic headers.