
As per the business requirement, we have a file interface that has to process a huge file in XI containing half a million records. We decided to split the file at every 5000th record and process it in chunks.

I used the option “Recordsets per Message” for splitting the message and did the necessary configuration in the Integration Directory (ID). Once the configuration was complete and I tested the file, I observed that the splitting happens very fast at every 5000th record and IDocs are posted into SAP IS-U. But surprisingly, after 2.25 lakh (225,000) records the splitting stops and the rest of the entries in the file do not get processed. I don’t see any errors in SXMB_MONI, the RWB, or the Visual Administrator logs, but the file vanishes after posting 2.25 lakh records.
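
For context, the sender file adapter content conversion (CC) for the split looked roughly like this; the recordset name “Row” and the field layout are illustrative, not the actual interface:

    Document Name             MT_Records
    Recordset Structure       Row,*
    Recordsets per Message    5000
    Row.fieldSeparator        ,
    Row.endSeparator          'nl'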

We thought there were some explicit settings for reading huge files that had to be made on the J2EE stack. After wasting a lot of time exploring that, we decided to split the huge file manually at every 2.25 lakh records and process it. The source file was split into 3 files, and we then tried to upload the 3 files separately. I noticed a strange phenomenon: only 2 IDocs were posted from the second file, the file vanished again, and once more no error logs were to be seen anywhere. To understand this weird phenomenon, I removed the top 10,000 records and put the file in again. And my exercise worked!!! It gave a CC error in the RWB saying that it was not able to convert record 1500 due to an error in the data.

The outcome: record number 2.265 lakh (226,500) of the file is the error record; it contains a ‘,’ inside the content of a field, and that is what caused the problem.
"ROW",76479006,"U",20031125,"T","FIELD","","T",88000,"F88","Y","FIELD",1985,",5890061","Y","Y"
",5890061" is the error field, as the field separator ‘,’ appears as part of the content. I removed the ‘,’ manually, uploaded file 2 again, and this time it posted all the records successfully.

This is also the reason the adapter initially processed exactly 2.25 lakh records and gave no error logs. So we need to skip the ‘,’ when it appears inside field content, using the file CC parameters in the ID.

Have a look at the sample configuration below to skip the field separator when it appears inside the content of the file. I have edited the actual configuration for confidentiality reasons.
[Screenshot: sender file adapter content conversion parameters, “Skip Comma”]
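
In textual form, the screenshot boils down to the following additional CC parameters (the parameter names are the standard file adapter ones; “Row” is again an illustrative recordset name):

    Row.fieldSeparator        ,
    Row.enclosureSign         "
    Row.enclosureSignEscape   ""

With the enclosure sign set to the double quote, the adapter strips the enclosing quotes and treats a ‘,’ between them as field content rather than as a separator.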

The good news is that, after doing the file CC configuration as shown above, it took just 15 minutes to process the 65 MB file while splitting it at every 5000th record. But the simple problem of a ‘,’ in the content took a lot of troubleshooting time due to the lack of proper logs and integrated tools for debugging adapter-level failures. I assume these will at least be provided in the upcoming versions of SAP XI.


11 Comments


  1. Anonymous
    Hi,

    Did you take a look at the trace files? Usually such errors/warnings are logged in the default.trc file.

    Naveen

    1. Sravya Talanki Post author
      We have looked into the trace but could not find any problems.
      When we give some wrong parameters in the file CC and the channel is configured in delete mode, the file vanishes and there are no errors or traces anywhere.
      1. Anonymous
        Hi Sravya,

        Good to see your blog. When I was working with a banking client two years ago, I had nearly 2 million transaction records that had to be processed by a Java application. We tried all kinds of optimized programming and found no way of solving the same problem. The problem is so simple: the Java heap!
        It might be the same in the Java stack of the WebAS in your case.
        Finally we solved the issue by using ‘swaps’ wrapped over the Vector class. I personally feel that increasing the Java heap to something around 2 GB to 4 GB (by default it is 512 MB in WebAS) would solve the problem, because the rate at which garbage is collected in Java is much lower than the rate at which objects are created here.
        It’s really a good effort. Keep it up 😉

        Best regards,
        Felix Jeyareuben
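
        For illustration, a 2 GB heap would correspond to JVM flags along these lines; where exactly to set them (e.g. per server node in the J2EE Config Tool) depends on the WebAS release, so treat the details as assumptions:

          -Xms2048m -Xmx2048m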

        1. Sravya Talanki Post author
          Hi Felix,
          I am afraid the context is misunderstood. The CC did not fail; only 2.25 lakh records were processed due to the ‘,’ problem in the data records. Once I escaped the ‘,’ using the enclosure sign option of the file CC, we were able to process the 64 MB file in no more than 15 minutes. The problem is that the file adapter did not log any trace that there was a failure while converting the file using the file CC parameters.
      2. Dev S
        Hi Sravya

        I have a typical scenario where the requirement is to post exactly 999 invoices, since SAP FI has a limitation that it cannot post more than that number. So I have to do the following: in a file of, say, 2500 records, I first have to sort on the basis of a field in each line and then split the file and create an IDoc for every 999 records, so for 2500 records three IDocs will be created.

        Can this be achieved in PI graphical mapping, or do I have to resort to Java mapping or adapter module coding?

        Actually, there is an additional requirement of adding a dummy record to balance the amounts after every 998 records, but I wanted to know if there is a fix for splitting the file into 999-record chunks first.
        Thanks
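
        For reference, the sort-and-split logic itself is simple enough outside XI; here is a minimal standalone Java sketch (the field index, separator, and file names are assumptions, and wiring it into an XI Java mapping or adapter module is a separate exercise):

          import java.io.*;
          import java.util.*;

          // Sorts a comma-separated invoice file on one field, then writes
          // chunks of at most 999 records, one output file per future IDoc.
          public class InvoiceSplitter {
              private static final int CHUNK = 999;      // FI posting limit
              private static final int SORT_FIELD = 1;   // assumed sort-field index

              public static void main(String[] args) throws IOException {
                  List<String> lines = new ArrayList<String>();
                  BufferedReader in = new BufferedReader(new FileReader("invoices.csv"));
                  for (String line; (line = in.readLine()) != null; ) {
                      lines.add(line);
                  }
                  in.close();
                  // Sort on the chosen field of each record.
                  Collections.sort(lines, new Comparator<String>() {
                      public int compare(String a, String b) {
                          return a.split(",")[SORT_FIELD].compareTo(b.split(",")[SORT_FIELD]);
                      }
                  });
                  // Write one file per 999-record chunk.
                  for (int i = 0; i < lines.size(); i += CHUNK) {
                      PrintWriter out = new PrintWriter(new FileWriter("chunk_" + (i / CHUNK) + ".csv"));
                      for (String rec : lines.subList(i, Math.min(i + CHUNK, lines.size()))) {
                          out.println(rec);
                      }
                      out.close();
                  }
              }
          }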

  2. Joachim von Goetz
    Hi,

    You might find a solution to your problem if you look at BPEM.
    It can handle a huge number of files, and out of that you are able to trigger cases automatically or manually.

    I just found a weblog on it, which might help:
    The specified item was not found.

    Joachim

  3. S T
    Hi Shravya,
    Thanks for the blog. But this solution really doesn’t solve the problem if the file is, say, 200 MB. Basically you explained how to fix the CC rather than how to process a huge file, right? We have to process a 200 MB file. We can’t even FTP a file of this size. We are exploring the possibility of increasing the heap.
    1. Sravya Talanki Post author
      It is not a solution for the file size problem… It is just an experience from processing huge files and how we solved the data error, since the log doesn’t tell you anything!

      I guess it is better to split the 200 MB file.. I have always had bad experiences with huge files even after increasing the heap size.

  4. Sudheer J
    Hello,

    We have a plain file-to-IDoc SAP XI scenario. We have configured everything and it is working fine. However, we have a new requirement where we need to produce multiple IDocs from one file. I mean, the text file can contain something similar to this:

    HEADER;XYZ;123;123456
    DETAIL;XYZ;123;123456
    DETAIL;XYZ;123;123456
    DETAIL;XYZ;123;123456
    DETAIL;XYZ;123;123456
    ..
    ..
    ..
    ..
    TRAILER;XYZ;123;456

    Now, for every 1000 “DETAIL” records from the above structure it should create another IDoc.

    So can you tell us how to solve this problem? Is there anything we can do to configure SAP XI so we can send one file to multiple IDocs?

    Looking forward to your help on this..!!
    Thanks in advance.
    Sudheer
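
    As an aside, the usual XI answer to one file producing multiple IDocs is a 1:n multi-mapping: change the IDoc’s occurrence to 0..unbounded in the message mapping, and group the DETAIL records per 1000 in the mapping logic. The mapping output is then wrapped in the standard multi-mapping envelope, roughly like this (a sketch; release-specific restrictions on multi-mappings with the IDoc adapter may apply):

      <ns0:Messages xmlns:ns0="http://sap.com/xi/XI/SplitAndMerge">
        <ns0:Message1>
          <IDOC BEGIN="1">...</IDOC>  <!-- first 1000 DETAIL records -->
          <IDOC BEGIN="1">...</IDOC>  <!-- next 1000 DETAIL records -->
        </ns0:Message1>
      </ns0:Messages>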

