
Overview:


The SAP adapter module IDOCFlatToXmlConvertor can be used to convert messages from IDoc flat file format to XML. Normally it is added in the sender adapter so that the message gets converted to XML format.


In this blog, I'll describe our journey of migrating from a custom solution to the standard bean IDOCFlatToXmlConvertor.


Existing Solution:


The existing solution to convert flat files to IDocs was created on an SAP PI 7.0 system. As there was no standard bean to convert flat files to IDocs, we built a custom bean for the flat file to IDoc conversion.



The solution relied on the SAP IDoc metadata being present on the ABAP stack of the SAP PI system.


Our files can have some extra fields in the first 10 characters.


Some files can contain segment types, while others contain segment definitions.


The file can have data for multiple IDocs.


Files can be in ALE format (i.e. the control and data records form a single continuous string).


e.g. ALE file format



Non-ALE format with segment type and leading DATA and CONTROL records



Non-ALE format with segment definition



Above are three example files, but the data could come in any combination of format and segment type/segment definition information.


Drawbacks of the solution:




  • Not future-ready with SAP's integration strategy of SAP PO (a Java-only system), since the metadata is read from the SAP PI ABAP stack.

  • The IDoc metadata needed to be manually refreshed whenever an IDoc changed.


However, the solution was pretty robust and was being used by thousands of adapters (2,000+).


Getting Ready for the Future


While migrating from the PI 7.0 system to a PI 7.4 system, we had to code the adapter module again, as the underlying Java APIs had changed. We decided we couldn't simply switch to the standard bean, as the standard bean doesn't do the following:




  • Our files can have some characters at the beginning which need to be removed.

  • The data can contain segment types or segment definitions. Originally the standard bean worked only with segment definitions; this was enhanced to allow segment type processing (see SAP Note 2267985). However, we can't identify in mass whether an adapter will process segment types or segment definitions.

  • Some files are in ALE format, so the control and data records appear as one continuous line instead of the control record being on its own line and each data record on a new line.


Hence, we decided to create a custom pre-processing bean which formats the data as required by the standard bean.


The plan is to replace the old bean with the combination of the standard SAP_XI_IDOC/IDOCFlatToXmlConvertor module and the new custom module.


So functionally we will have the following situation:


Old Bean = IdocPreProcessbean + SAP_XI_IDOC/IDOCFlatToXmlConvertor


Requirement for mass change of channels


As we have thousands of channels using this bean, the change will need to be done in mass.


We created a tool for making the change using the Directory APIs, so that we can make the change in mass and revert back to the old version quickly if required. The blog from William Li which served as the foundation for building the tool can be accessed here.


Initial Version of the bean:


The first version of the bean was pretty lightweight. The thinking was:




  • Removing the fields at the beginning of the line is a lightweight string operation

  • With note 2267985, we can process files with both segment types and segment definitions

  • If the file is in ALE format, it can be easily converted to multiline format.


Some hints for converting the single-line file to multi-line format:


The control record maps to ABAP structure EDI_DC40 and has a length of 524 characters.


Data records map to structure EDIDD and have a length of 1,065 characters.


Using this information, the single-row ALE file can be converted to the format expected by the standard bean.
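
As a rough illustration of this conversion, here is a simplified sketch (my own illustration, not the productive module code). It assumes the leading extra characters have already been stripped and that each control record starts with the literal EDI_DC40, which is how the sketch recognises where the next IDoc begins.

// Sketch: split a single-line ALE payload into one line per record,
// using the fixed lengths mentioned above (EDI_DC40 = 524, EDIDD = 1065).
static String toMultiLine(String alePayload) {
    final int CONTROL_LEN = 524;   // EDI_DC40 control record
    final int DATA_LEN = 1065;     // EDIDD data record
    StringBuffer out = new StringBuffer(alePayload.length() + 1024);
    int pos = 0;
    while (pos < alePayload.length()) {
        // One control record per IDoc ...
        int ctrlEnd = Math.min(pos + CONTROL_LEN, alePayload.length());
        out.append(alePayload, pos, ctrlEnd).append('\n');
        pos = ctrlEnd;
        // ... followed by its data records, until the next control record.
        while (pos < alePayload.length() && !alePayload.startsWith("EDI_DC40", pos)) {
            int end = Math.min(pos + DATA_LEN, alePayload.length());
            out.append(alePayload, pos, end).append('\n');
            pos = end;
        }
    }
    return out.toString();
}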


Roadblock:


The bean worked in single tests, but deploying the change in mass (we took 10 channels as a pilot) would require us to change thousands of communication channels.


There were two major issues:




  • We’re not able to identify which files use segment type and which ones use segment definition.

  • We're not even sure of the segment release level for the segment definitions in the file.



As an example of segment types and segment definitions: some files have E1EDKA1, whereas other files can have E2EDKA1003, E2EDKA1002, E2EDKA1001 or E2EDKA1.



This was the biggest roadblock, as we can't automatically determine these attributes of the interface. In the old bean, the custom function module had access to all the metadata and could easily work with either segment types or segment definitions of any release level; this was handled internally in the ABAP function module IDX_IDOC_TO_XML.


Modification to the solution:


In summary, we had a solution, but it couldn't be implemented because it wasn't possible for us to make the change in mass while we couldn't identify the file data attributes. So we decided to enhance the solution to deal with these roadblocks:




  • We will always convert the segment definition to segment type.

  • We’ll read the IDoc metadata for the current release level.


You can read the blog I wrote earlier about how to read IDoc metadata on the Java stack; the link is here.


The solution relied on the following approach:




  • Check if the file contains segment definitions or segment types. This can be verified by checking the first data line's segment information. Segment definitions have the following characteristics:


  • A standard SAP segment definition starts with E2. For custom segments (and all recent segments), the last three characters are numeric; this is ensured on the ABAP side, and whenever a new segment version gets generated, the sequence gets incremented. (A sketch of this check follows after this list.)


  • Once we have the segment type/segment definition mapping for the current release level, we introduce the information for previous segment definitions. This is required as the file data can contain segment definitions from any release level.


  • Looking at one of the segments with a lot of revisions, we see that there are 10 versions of the segment definition.
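
As an illustration of the detection step, a minimal sketch could look like the following (my own sketch based on the characteristics above, not the actual module code; treating Z2 as the prefix of custom segment definitions is an assumption):

// Sketch: decide whether a segment name from the file is a segment
// definition (e.g. E2EDKA1003 or E2EDKA1) or a segment type (e.g. E1EDKA1).
static boolean isSegmentDefinition(String segmentName) {
    String name = segmentName.trim();
    // Standard definitions start with E2 (Z2 assumed here for custom segments).
    if (name.startsWith("E2") || name.startsWith("Z2")) {
        return true;
    }
    // Recent/custom definitions end in a three-digit version counter.
    return name.length() > 3
            && name.substring(name.length() - 3).chars().allMatch(Character::isDigit);
}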



When we read the metadata with the current release level (740), we get the information below.



Other release levels are created by the adapter module by decrementing the counter of the segment definition (a sketch of this follows after the list of reasons below).



This is required for the below reasons:




  • We won't know which release levels we need to use. The alternative was to use an array of release levels, but in that case we may end up trying to extract metadata for a lot of unnecessary release levels.


  • The metadata read for some of the release levels may fail if there is an issue with a segment definition. For instance, the segment definition at release 620 is E2EDP01007. If the structure is missing or has some data elements missing, metadata retrieval will fail. This will mostly be an issue for custom segments. Further, it's not even possible to go back to that release level and fix the segment: if we're on 740 and release a segment after making changes, the release level will be 740.
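
For illustration, the decrementing described above could look roughly like this (a sketch of the idea only, not the actual module code):

import java.util.ArrayList;
import java.util.List;

// Sketch: from the definition of the current release level (e.g. E2EDKA1003),
// derive the older definitions down to the suffix-less one (E2EDKA1), so that
// file data from any release level can be mapped to the same segment type.
static List<String> previousDefinitions(String currentDefinition) {
    List<String> result = new ArrayList<>();
    String name = currentDefinition;
    result.add(name);
    while (name.length() > 3
            && name.substring(name.length() - 3).chars().allMatch(Character::isDigit)) {
        int counter = Integer.parseInt(name.substring(name.length() - 3));
        String base = name.substring(0, name.length() - 3);
        name = (counter > 1) ? base + String.format("%03d", counter - 1)  // e.g. 003 -> 002
                             : base;                                      // 001 -> no suffix
        result.add(name);
    }
    return result;
}

For the example above, previousDefinitions("E2EDKA1003") yields E2EDKA1003, E2EDKA1002, E2EDKA1001 and finally E2EDKA1.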


Module Configuration


Adapter configuration using the old adapter module.



Adapter configuration using the new bean along with the standard adapter module SAP_XI_IDOC/IDOCFlatToXmlConvertor.



As seen in the screenshot, the parameters are:


IdocPreProcessbean:


SourceJRA
TargetDestination


These are required to read the IDoc metadata from the target ERP system. As we're not specifying any release level, it will read the metadata for the system's release level.




This can be verified by looking at the segment definition of the segment we were looking at in WE31.



Parameters for the SAP_XI_IDOC/IDOCFlatToXmlConvertor bean:


SourceJRA


TargetDestination


SAPRelease


SegmentTypeProcessing


Now, these parameters can be supplied in mass.


For the connection, these are fixed:


SourceJRA
TargetDestination


SAPRelease: we use the current release level, 740


SegmentTypeProcessing: true, as we have already converted the segment definitions to segment types in the custom bean.
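
Putting it together, the module tab of a migrated sender channel looks roughly as follows (a sketch only: the module keys and the JNDI name of the custom bean are illustrative, while the standard module name and the parameters are the ones described above):

Processing sequence:
  1. localejbs/IdocPreProcessbean          (custom pre-processing bean, name illustrative)
  2. SAP_XI_IDOC/IDOCFlatToXmlConvertor    (standard flat-file-to-XML converter)
  3. CallSapAdapter                        (standard final module of a sender channel)

Module configuration:
  IdocPreProcessbean:      SourceJRA, TargetDestination
  IDOCFlatToXmlConvertor:  SourceJRA, TargetDestination, SAPRelease = 740, SegmentTypeProcessing = true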


Handling date and time:


As most of the interfaces are inbound to SAP, we converted them from classical scenarios to ICOs. This allowed us to use the IDoc_AAE adapter on the receiver side. As the IDoc_AAE adapter doesn't like the date and time being passed to it, they are blanked out.


In summary, after the changes the implementation looked as below:



Libraries required to build the adapter module


Apart from the normal libraries required for standard adapter module development, we'll need the IDoc library as well. The SAP Java IDoc class library and the SAP JCo libraries (typically sapidoc3.jar and sapjco3.jar) can be downloaded from service.sap.com/connectors.


Note about performance:




  1. We tested with huge files (50+ MB) being sent in parallel to the PI system and started seeing performance issues on the SAP PI Java AS. We profiled the bean using the guide here:


https://wiki.scn.sap.com/wiki/display/ASJAVA/Profile+an+application+on+the+SAP+NetWeaver+Composition...


We realised that appending Strings to existing String objects was consuming a lot of heap space: Strings are immutable in Java, so whenever an existing String is modified, a new object gets created.


We could have used StringBuilder but decided to use StringBuffer as it is thread-safe.
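
For illustration, the difference is roughly the following (a simplified sketch; records stands for whatever collection of record lines is being assembled):

// Inefficient: every += allocates a new String and copies everything so far,
// which is what was consuming the heap with 50+ MB payloads.
String slow = "";
for (String record : records) {
    slow += record + "\n";
}

// Better: one mutable buffer. StringBuffer was preferred over StringBuilder
// only because its methods are synchronized (thread-safe).
StringBuffer buffer = new StringBuffer();
for (String record : records) {
    buffer.append(record).append('\n');
}
String result = buffer.toString();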


  2. The main objective of the bean was to make us future-ready for the eventual PO migration, where we'll be forced to make the change. With the previous bean, we were making a call to the PI ABAP stack for every message. With the new bean, the call to the ERP ABAP system to read the IDoc metadata is made only for the first message. Once the metadata is cached, no calls to the ERP system are made and the metadata is read locally from the AS Java server cache. Of course, if the system restarts or someone deletes the metadata manually, it will again need to read the metadata from the ABAP stack of the ERP system.
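
A minimal sketch of this caching idea (my illustration rather than the actual bean code; readMappingFromErp is a placeholder for the JCo IDoc repository lookup, and the assumption is that the segment definition to segment type mapping is what gets cached per IDoc type):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Static cache: survives across messages on the same server node, but is
// lost on restart, in which case the metadata is read again from the ERP system.
private static final Map<String, Map<String, String>> SEGMENT_CACHE = new ConcurrentHashMap<>();

Map<String, String> getSegmentMapping(String idocType) {
    // The RFC call to the ERP system happens only for the first message per IDoc type.
    return SEGMENT_CACHE.computeIfAbsent(idocType, this::readMappingFromErp);
}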


Standalone testing:


To speed up development and testing of the module, I did the testing from NWDS itself. The link I shared for the IDoc library has sample code for testing in a Java SE environment.


In summary, the metadata can be read by creating a local JCo destination file and using it in the code.



import com.sap.conn.idoc.IDocRepository;
import com.sap.conn.idoc.IDocSegmentMetaData;
import com.sap.conn.idoc.jco.JCoIDoc;
import com.sap.conn.jco.JCoDestination;
import com.sap.conn.jco.JCoDestinationManager;

// "NSP" is the name of the local JCo destination (defined in an NSP.jcoDestination file)
JCoDestination destination = JCoDestinationManager.getDestination("NSP");
IDocRepository iDocRepository = JCoIDoc.getIDocRepository(destination);
IDocSegmentMetaData rootMetaData = iDocRepository.getRootSegmentMetaData("Z_TEST", "Z_TEST_EXT", "7.02", "7.02");


where Z_TEST is the IDoc type and Z_TEST_EXT is the extension type.

The GitHub link is here. The code to read metadata locally from the NWDS JCo destination has been removed, as I didn't want to deploy it in the productive version.
