Skip to Content

I was having a scenario where i need to handle more than 10000 values in Value Mapping which was very tedious task. Entering large number of values manually in ID was not possible otherwise it would end in months. Then i tried Value Mapping Replication (VMR) interface available in Basis component but it was also not that efficient. Then i also tried uploading with NWDS directly by creating CSV file for Value Mapping but it fails when we have any “,” (comma) in key or Value.

So this option was not helpful for me.

Instead of this we were knowing that if we are doing lookups in any database or file server it will hit our interface’s execution time if number of lookups are more.

Then i thought why we can’t handle these much values in ESR only in any file format and directly read values from those file which will be much quicker than any other option. So i was having only one option where i can upload any kind of file in SAP PI i.e. Imported Archive. Imported archive is normally used for JAVA or XSLT mappings. But it provides an option to upload files in .zip format that gave me a loophole from where we can upload any kind of file  in ESR after zipping the files together.

It was amazing when i got success to do lookup among >50K values in few milliseconds. So i thought to share this new concept with all of you because i searched whole SCN and SAP documents to handle this problem and i returned empty hand.

I will explain step by step procedure to handle any number of values in Key- Value pair format in ESR and can easily do a lookup through a small UDF.

Step 1: Create text files containing key – Value pair separated by space or “=” as shown in below screenshot


Step 2: Now create all the files that you want to create for lookup in text format and zip it together.

      ( I was having a requirement where i need to transform for 20 EDI segments incoming input values into its standard actual values as shown in above figure.          So i created different file for different segment. If you want you can merge all files together and upload single text file.)

          I created 21 files as per the requirement and zipped those together as below:

Now i have a zipped file that contains all key-value pair in it. Now we can upload this file in imported Archive without any issue 🙂 🙂

Step 3: Create an Imported Archive as ValueMapping  and import .zip file into it.


Now you can see all text files in imported Archive as shown in below screenshot.

Click on any file , you can see the content of the file as below:

Now Save and Activate your imported Archive.

Step 4: Now assign your imported archive in your message mapping in Function tab under Archive used as shown below:


Step 5: Now we will create a simple UDF which will take two input  values first value will be key against which i want description and second input of UDF will be file in which i want to lookup values.

  (If you creating one file then you can pass only one input to UDF and directly write file name in UDF.)

Step 6: Copy paste the below code in your UDF:


//public String FileLookup(String key, String filename, Container container) throws StreamTransformationException{


String returnString=””;

    try {

          InputStream lookupStream = this.getClass().getClassLoader().getResourceAsStream(filename);

        InputStreamReader reader = new InputStreamReader(lookupStream);

            BufferedReader buffer = new BufferedReader(reader);

String read;

while((read=buffer.readLine()) != null){

String temp = read.substring(0,key.length());

if(key.equals(temp)){

returnString = read.substring(key.length()+1,read.length());

if (  read.substring(key.length()+1,read.length()) != “00”){

int num = Integer.parseInt( read.substring(key.length()+1,read.length()));

num = num+2;

returnString = Integer.toString(num);

    }    }

}

        } catch (Exception e) {

          returnString = e.getMessage();

    }

    return returnString;

//}

Step 6 : Now we will create one more UDF for trimming fixed extra description that will always come when we are using lookup code.

              There will be only one input for this UDF. We will pass the output of the Lookup UDF into it and it will give actual output to us. If you are a bit confused                you will get clear picture once you do display queue on these udf.

Below is the code for trimValue UDF. ( Input Parameter of UDF : value )

if(value.length() >0){

String str =””;

str = value.substring(19,value.length()-1);

return str;

}else{

    return “”;

}

Now our UDFs are ready for testing 🙂 🙂

Step 7 : Now i will pass a key that is available in DE_365.txt file and our output will be actual value against this key.

            I have shown every input and output using display queue that will explain everything clearly and now you can understand why i wrote trimvalue function 🙂


Now we can compare the key value available in our text file:



This will never hit performance and execution time of message mapping as we are maintaining the lookup files in ESR as an ESR object.

Value Mapping has a constraint over the length of target field (i.e. it can’t be more than 300 ) but here you can pass more than that as we are maintaining the values in text file.


Hopefully this will solve most of the problems related to large Value Mapping data maintenance. You can upload millions of data without much effort in ZIP format. 



Note: I would suggest to go for VMR or NWDS upload through CSV file under value mapping if you have data which may change frequently and you don’t have length issues in target, otherwise you will have to reimport changes to production System every time you make changes even in a single field.



To report this post you need to login first.

4 Comments

You must be Logged on to comment or reply to a post.

  1. Bhavesh Kantilal

    Just one issue with this approach, every time you need to modify a value or add a value this would need a transport into production.This in landscapes that have monthly or weekly release cycles can have a huge impact.

    This is where Value mapping helps as it enables direct changes in production. Would you have a background on what was the issue with VMR? In this case i would have read this file from a NFS mount or FTP Server, converted into a XML using Content conversion, performed mapping to the VMR Interface and then used the XI Adapter to push into the ValueMapping table of PI. This would have been a one time build effort of 1 day.

    The intent of this comment is not to discourage this posting / approach but to also make readers aware that there could have been other means to approach this!..

    Regards

    Bhavesh

    (0) 
    1. Rahul Yadav Post author

      Thanks Bhavesh for your Valuable Comment. 🙂 🙂

      This can fulfill requirement where we have lots of vlaues those are fixed.

      We have used this approach in transforming EDI segment to its Actual Meaning and that will always be fixed as per EDI standards.

      We were having two problems:

      • Some fields length were having length greater than 300 hence values were truncating.
      • One more problem was that few files were having different description under different segment for same Key. 

      Hence it was a scenario where one key was pointing to multiple value hence Vaue Mapping was not allowing duplicate keys.

      Yes if values are dynamic then i would recommend to go through Value Mapping.

      (0) 
  2. Anupam Ghosh

    Hi Rahul,

                      This is a new information on how to read a file uploaded in PI server from UDF that too from IA.  There is another issue with this approach is the fact that accidental deletion of file may happen causing mapping to fail which is not the case with value mapping. The PI server is accessed by other teams as well thus depending on this solution always comes with a risk. This is just to point out my concern, please do not think I am criticising this blog.  Really appreciate your new way of thinking.

    Secondly I have a query “you are not mentioning absolute file name to the UDF reading the file. Then how is UDF managing to read the file.

    Regards

    Anupam

    (0) 
    1. Roy D

       

      Hi Anupam, it will work if you copy the code like Rahul mentions in the blog. You can also remove the second UDF, then it will still work if you not need any trim functionality in your mapping.

      Thanks Rahul Yadav, for my scenario, your blog is really helpfull.
      If you have fix values, who not often change, or not have to be different in a system, you can easy use above blog and create CSV files for the value mappings.

      I only want to inform other developers, that if you copy the code from above blog. Be aware that if you get some weird errors, that the UDF is not working that you have to check the symbols which are used in the blog : “”  because with copy pasting them, it can end up in another symbol in SAP PI / PO.

      (0) 

Leave a Reply