
Every SAP integration implementation faces the challenge of converting legacy values to the corresponding SAP values and vice versa. There are various approaches, and each has its pros and cons.

The ideal solution would be to maintain the value mapping table in the Integration Directory. But this only works well when the lookup entries form a small data set. As the data set grows, the cache size keeps increasing and consumes heap memory.

The next approach is to store the values in a custom ABAP table on ECC and look them up using an RFC Lookup, or in a database table and use a JDBC Lookup. Again, we know the overheads involved here.

Recently I came across an approach with minimal overhead compared to the above solutions: UKMS (Unified Key Mapping Service). It has been available for a long time, but I had not come across it in my short career in PI consulting (http://scn.sap.com/people/rudolf.yaskorski/blog/2009/11/04/accessing-ukms-from-message-mappings-single-value-implementation–part-1). The idea is to keep the data set close to PI: store it on the ABAP stack of PI and use the API SAP provides in the ESR, which internally does an RFC Lookup. It can be argued that the gain over an RFC Lookup against a table on the ECC system is not significant; nevertheless there is a gain, as all the data translation calls are executed inside PI.

With PI 7.3.1 single stack this can no longer be achieved, as there is no ABAP stack on which to keep the data set close to PI.

I considered the following points.

  1. Keep the data set close to PI. Why not maintain the table in the AS Java database, i.e. on the AS Java instance on which PI is installed? Use a Java Dictionary project to create the table and deploy it on the PI server. The table has six columns, just like the Value Mapping table: Source Agency (SA), Source Scheme (SS), Source Value (SV), Target Agency (TA), Target Scheme (TS) and Target Value (TV), with SA, SS, SV, TA and TS forming the primary key.
  2. Instead of going for a graphical lookup (a JDBC Lookup in this case), why not fetch directly from the DB table? This is usually not recommended, but all we are doing here is value retrieval: no transactions, no long-lived open connection, just a single hit per lookup and then close. This saves the overhead of converting to the JDBC structure and the whole round trip through the JDBC Adapter. In the example I have used the CPA cache's data source, which I expect to be the least loaded at runtime compared to the AF data source, since the CPA cache keeps its objects in memory.
  3. Use a simple cache implementation. In the example I have used a synchronized LinkedHashMap with maximum entries = 10000 and eldest-entry removal as the eviction policy. This can be replaced with any of the available cache implementations (I tried ehcache with some errors; Google Guava should be straightforward). I was facing a problem keeping the cache accessible across mapping invocations: a static variable does not work, perhaps because of the way the mapping classes are loaded. So I bound the cache object via JNDI and retrieve it with a JNDI lookup, which I found very useful. For large data sets the gain from this cache may be minimal, but locality of reference underlies all the computation we do. Following is the sample code.
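As a standalone illustration of the eviction policy, here is a minimal, self-contained sketch of the bounded synchronized LinkedHashMap. It uses accessOrder = true for true LRU behavior; the full sample code that follows uses insertion order (false). The keys are hypothetical agency:scheme:value composites.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCacheDemo {

    // Size-bounded map: once maxEntries is exceeded, the eldest entry is
    // evicted. accessOrder = true gives true LRU ordering (each get() moves
    // the entry to the back); with false, pure insertion order is used.
    static Map<String, String> boundedCache(final int maxEntries) {
        return Collections.synchronizedMap(new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxEntries;
            }
        });
    }

    public static void main(String[] args) {
        Map<String, String> cache = boundedCache(2);
        cache.put("LEGACY:UOM:PCE:SAP:MEINS", "ST");
        cache.put("LEGACY:UOM:KGM:SAP:MEINS", "KG");
        cache.get("LEGACY:UOM:PCE:SAP:MEINS");      // touch: PCE is now most recent
        cache.put("LEGACY:UOM:MTR:SAP:MEINS", "M"); // evicts KGM, the least recently used
        System.out.println(cache.keySet());
    }
}
```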

    

         

package com.ibm.cust.lookup;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Properties;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class CustLookupService {

    @SuppressWarnings("unchecked")
    public String getMapping(String sa, String ss, String sv, String ta, String ts) {

        final int maxEntries = 10000;
        final String key = sa + ":" + ss + ":" + sv + ":" + ta + ":" + ts;
        String targetValue = null;
        Map<String, String> cache = null;
        Properties properties = new Properties();
        InitialContext ctx = null;

        try {
            ctx = new InitialContext(properties);
            cache = (Map<String, String>) ctx.lookup("LRUCache");
        } catch (NamingException e) {
            // Cache not bound yet; a fresh one is created below and bound after the first fetch.
        }
        if (cache == null) {
            cache = Collections.synchronizedMap(new LinkedHashMap<String, String>(maxEntries, .90F, false) {
                private static final long serialVersionUID = 1L;
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                    return size() > maxEntries;
                }
            });
        }

        targetValue = cache.get(key);

        if (targetValue == null) {
            Connection con = null;
            PreparedStatement stmt = null;
            ResultSet rs = null;
            try {
                if (ctx == null)
                    ctx = new InitialContext(properties);
                DataSource ds = (DataSource) ctx.lookup("jdbc/notx/SAP/BC_XI_AF_CPA");
                con = ds.getConnection();
                stmt = con.prepareStatement("select TV from UKMS_ENTRIES where SA = ? and SS = ? and SV = ? and TA = ? and TS = ?");
                stmt.setString(1, sa);
                stmt.setString(2, ss);
                stmt.setString(3, sv);
                stmt.setString(4, ta);
                stmt.setString(5, ts);

                rs = stmt.executeQuery();
                if (rs.next())
                    targetValue = rs.getString(1);
                else
                    targetValue = "result set was empty";

                cache.put(key, targetValue);
                ctx.rebind("LRUCache", cache);

            } catch (NamingException e) {
                targetValue = "DS Naming Ex: " + e.getMessage();
            } catch (SQLException e) {
                targetValue = "SQL Ex: " + e.getMessage();
            } finally {
                // Close in reverse order so the connection always returns to the
                // pool, even when the query fails; otherwise the pool is
                // exhausted under load.
                try { if (rs != null) rs.close(); } catch (SQLException ignored) { }
                try { if (stmt != null) stmt.close(); } catch (SQLException ignored) { }
                try { if (con != null) con.close(); } catch (SQLException ignored) { }
            }
        }
        return targetValue;
    }
}

     

  4. Maintaining the table itself is a simple problem with a lot of options available. Choose according to your convenience.
    1. Use BPM: build a UI task and post to PI, then insert into the DB using the JDBC Adapter.
    2. File to JDBC: a simple CSV for mass loading.
    3. Proxy to JDBC, as in the case of Value Mapping Replication.
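For the File-to-JDBC option, each CSV row maps one-to-one onto the table columns. A minimal parsing sketch, assuming a hypothetical column order of SA,SS,SV,TA,TS,TV; adjust to whatever layout the actual file uses:

```java
import java.util.AbstractMap;
import java.util.Map;

public class CsvMappingLoader {

    // Parse one CSV row into the composite key and target value used by the
    // lookup cache. The column order SA,SS,SV,TA,TS,TV is an assumption.
    static Map.Entry<String, String> parseLine(String line) {
        String[] f = line.split(",", -1);
        if (f.length != 6)
            throw new IllegalArgumentException("expected 6 fields, got: " + line);
        String key = f[0] + ":" + f[1] + ":" + f[2] + ":" + f[3] + ":" + f[4];
        return new AbstractMap.SimpleEntry<>(key, f[5]);
    }

    public static void main(String[] args) {
        Map.Entry<String, String> e = parseLine("LEGACY,UOM,PCE,SAP,MEINS,ST");
        System.out.println(e.getKey() + " -> " + e.getValue()); // LEGACY:UOM:PCE:SAP:MEINS -> ST
    }
}
```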

     

Implementation Steps:

  1. Create a Java Dictionary project to store the mapping entries, build the SDA and deploy it on PI. [screenshot: test1.JPG]
  2. Use any of the options discussed in point 4 to insert the values into the table.
  3. Use the code provided to build a zip containing the class file, and import the archive into the ESR. [screenshot: test2.JPG]
  4. Use it in a UDF. [screenshot: test3.JPG]
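The UDF itself reduces to a one-line delegation with the agencies and schemes fixed per interface. A sketch with a pluggable lookup standing in for CustLookupService.getMapping (the real UDF additionally receives SAP's Container argument, omitted here); the agency/scheme literals are hypothetical:

```java
public class UdfSketch {

    // Stand-in for CustLookupService.getMapping so the delegation pattern
    // can be shown without the PI runtime.
    interface Lookup {
        String get(String sa, String ss, String sv, String ta, String ts);
    }

    // The UDF body: agencies and schemes are fixed per interface
    // (hypothetical literals here), only the value varies.
    static String mapUom(String sourceValue, Lookup lookup) {
        return lookup.get("LEGACY", "UOM", sourceValue, "SAP", "MEINS");
    }

    public static void main(String[] args) {
        // Stub lookup standing in for the JNDI/JDBC-backed service.
        Lookup stub = (sa, ss, sv, ta, ts) -> sv.equals("PCE") ? "ST" : sv;
        System.out.println(mapUom("PCE", stub)); // prints ST
    }
}
```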

Any thoughts on the feasibility of this idea in a productive implementation would be great. This is my first blog, so please let me know if I missed anything in putting my idea forward.


3 Comments


  1. Aditya Adepu Post author

    To maintain cache consistency in case of update/deletion of entries, use the following function in the same class and call it in the mapping of the interface used for the update/delete:

    public void updateCache(String update, String[] sa, String[] ss, String[] sv, String[] ta, String[] ts, String[] tv) {
      InitialContext ctx;
      Map<String, String> cache = null;
      try {
        ctx = new InitialContext();
        cache = (Map<String, String>) ctx.lookup("LRUCache");
        if (update.equals("update")) {
          for (int i = 0; i < sa.length; i++)
            cache.put(sa[i] + ":" + ss[i] + ":" + sv[i] + ":" + ta[i] + ":" + ts[i], tv[i]);
        } else if (update.equals("delete")) {
          for (int i = 0; i < sa.length; i++)
            cache.remove(sa[i] + ":" + ss[i] + ":" + sv[i] + ":" + ta[i] + ":" + ts[i]);
        }
      } catch (NamingException e) {
        // Cache not bound yet; nothing to update.
      }
    }

  2. Setharaman T

    Hi ,

      This looks very close to my requirement. We are currently on PI 7.4 PO single stack; can we follow the same approach, or did you find another solution for the same requirement?

    A very good document.

  3. Madan Mohan Agrawal

    Hi Aditya,

    Very nice blog. I implemented the same, but with an additional UPDATE step along with the SELECT. After deploying this, when many messages are processed, all connections to the database get used up and the PI server goes down with the following error trace.

    com.sap.sql.log.OpenSQLException: No connection to data source SAPDB available. All 200 pooled connections are in use and 31 connection requests are currently waiting.

    Please suggest how to get rid of this connection pool issue.

    Br,

    Madan

