This is part 3 of the blog series "Implementing & Sustaining Interfaces in SAP ERP applications"; the complete list of blogs in this series can be accessed from salai.sivaprakash/blog. Implementing & Sustaining SAP ERP Interfaces - Part 2, Interface Strategy explained the major characteristics of interfaces that need to be addressed in an interface strategy. This part of the blog explains those characteristics using an illustration. We also look at the forms that interfaces can take and their ramifications on design and implementation.

Part 3 - Common Pitfalls

In the previous blog, I explained how an interface strategy should address five key characteristics: integrity, security & auditability, error handling, throughput and robustness. In this blog, I would like to walk through these characteristics further with an illustration.

Interfaces, as we know, come in all forms, sizes, shapes and colors. The interfaces that go in and out of an SAP system can vary greatly in their characteristics, and these characteristics largely shape the design and the implementation. For example:

  • Volume can vary from a few records to hundreds, thousands, or even hundreds of thousands of records
  • Frequency can be ad hoc, on demand, hourly, weekly, monthly, annually etc.
  • The interface may execute in real time or deferred
  • The records may be batched or sent individually
  • The calls can be synchronous or asynchronous
  • The trigger can happen online, with user interaction, or in the background
  • The medium and mechanism used can be file, EDI, RFC, web service, email, ABAP proxy etc.
  • The partner system can be internal (A2A) or external (B2B)
  • Some interfaces may be critical, while others have high or low impact
  • The data carried can be master data, transactional data or calculated reports
  • The data processing can vary between create, change, delete, status change, read-only (fetch) etc.

So, the design and the build are going to depend largely upon the context - the data, technology, partner systems, functionality etc. However, a key theme to consider in the design is that these parameters can often change, and sometimes change quickly. Changes in business, system design and other constraints can turn

  • a low volume interface into a high volume one
  • a high frequency interface into a low frequency one
  • an online interface into a background job
  • a real time interface into a batched or deferred interface etc.

A good interface should be built with sufficient flexibility in the design, so that there is minimal or no impact when such parameters change.

Also, irrespective of all these parameters, the interface design should address the five key characteristics. Critical interfaces that do not address these characteristics can cause implementation and sustainment failures.


Just for the sake of illustration, let's take the example of a file interface. File interfaces are old fashioned but are still popular for many reasons, especially limitations of partner systems. I am using this example because most people are familiar with it, and might come across it in the future as well. What follows is a very common construction of a file interface program, used to illustrate the interface characteristics that will be discussed in the future blogs of this series.

The illustration, as seen in the picture, is an interface program that loads a file into SAP. It reads an input file, then reads the first interface record, parses the record into fields, maps the fields to the application object, and posts the application document into SAP (let's say, an accounting document). If there is no error, it proceeds to the next record. If there was an error, it logs the error internally before proceeding to the next record. At the end of the file, the program downloads an error file with the list of error records and error messages. A user would then correct the error records and reprocess the error files.
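To make that flow concrete, here is a minimal sketch of the loop, written in Python for readability rather than ABAP. It is purely illustrative: parse_record, map_to_document and post_document are hypothetical stand-ins for the real parsing, mapping and posting logic (the posting would typically be a BAPI call).

```python
# Minimal sketch of the file-load loop described above (illustrative only).
# parse_record, map_to_document and post_document are hypothetical stand-ins.

def parse_record(raw: str) -> list[str]:
    # split the flat record into fields (a pipe-delimited layout is assumed)
    return raw.rstrip("\n").split("|")

def map_to_document(fields: list[str]) -> dict:
    # map file fields onto the application object (field positions are assumed)
    return {"company_code": fields[0], "amount": fields[1]}

def post_document(document: dict) -> None:
    # in the real program this would post the document into SAP,
    # e.g. via an accounting BAPI; here it is a no-op placeholder
    pass

def load_file(input_path: str, error_path: str) -> None:
    errors = []
    with open(input_path) as infile:
        for line_no, raw in enumerate(infile, start=1):
            try:
                post_document(map_to_document(parse_record(raw)))
            except Exception as exc:
                # log the error internally and continue with the next record
                errors.append(f"{line_no}|{raw.rstrip()}|{exc}")
    # at end of file, write the error file for manual correction and reprocessing
    with open(error_path, "w") as outfile:
        outfile.write("\n".join(errors) + "\n")
```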

Now, let's analyze this and take a closer look at how such a construct would fare with respect to the key characteristics that we discussed previously.

Throughput: This program processes the data sequentially, which means that the total processing time will be directly proportional to the number of records in the file. So, a large file would mean a longer processing time, and long run times mean a greater chance of failure (due to external factors). Moreover, while the program (or job) is running, it is very hard to measure how many records have been processed so far, or how many are in error, until the job fully completes. There is no predictability of the expected job completion time (required for critical month-end planning etc.) since the processing rate is not available. Plus, we have no way here to split the processing into multiple jobs or to increase performance in any other way.
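One way to relieve this, sketched below under the assumption that the records are independent of each other, is to split the file into chunks processed by parallel workers while keeping a running progress count. Chunk size, worker count and process_chunk are all hypothetical:

```python
# Illustrative sketch only: chunked, parallel processing with a progress
# counter. Chunk size, worker count and process_chunk are assumptions.

from concurrent.futures import ProcessPoolExecutor

def process_chunk(records: list[str]) -> int:
    # parse / map / post each record as in the earlier sketch
    return len(records)  # return the number of records handled

def load_file_parallel(input_path: str, chunk_size: int = 1000, workers: int = 4) -> None:
    with open(input_path) as f:
        records = f.readlines()
    chunks = [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]
    done = 0
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for handled in pool.map(process_chunk, chunks):
            done += handled
            # progress and processing rate are now measurable mid-run
            print(f"progress: {done}/{len(records)} records")
```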

Robustness: Even in the case of well written programs, abends are still possible due to external factors like memory dumps, runtime errors, system shutdowns etc. Should the job abend midway, the program should be able to handle the restart and reprocessing of the file. If not, it will be a tedious task to find out how many records were processed, which ones were not, and which ones erred. The program and the design of the interface should allow for an easy way to fix such failures and to regenerate or reprocess the file.
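A simple restart technique, sketched below, is to persist a checkpoint of the last successfully posted record so that a rerun after an abend resumes where the failed run stopped. The file-based checkpoint and post_record are assumptions for illustration:

```python
# Illustrative restart sketch: a checkpoint file remembers the last record
# that was committed, so a rerun resumes instead of reposting everything.

import os

def post_record(raw: str) -> None:
    pass  # hypothetical placeholder for the actual posting logic

def load_with_restart(input_path: str, checkpoint_path: str) -> None:
    last_done = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as cp:
            last_done = int(cp.read() or 0)  # resume point from the failed run
    with open(input_path) as f:
        for line_no, raw in enumerate(f, start=1):
            if line_no <= last_done:
                continue  # already posted in an earlier run
            post_record(raw)
            with open(checkpoint_path, "w") as cp:
                cp.write(str(line_no))  # commit progress after each record
    if os.path.exists(checkpoint_path):
        os.remove(checkpoint_path)  # clean completion: no restart needed
```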

Integrity: This program might go through multiple schedules or parallel processing. The file can get duplicated on the way, the source can resend the file, the same data can get extracted again (partially or completely), multiple jobs may process the same file, or the file may be reprocessed by mistake (or intentionally, to fix a failed job). In such cases, the interface should be able to identify records that are being processed twice and stop the duplicates from posting again. This kind of integrity should be built within the interface. Also, at times, the sequence in which the records are processed might also impact the integrity of the data.
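A common safeguard, sketched below, is to derive a stable key from each record and refuse to post a key that has already been seen. The hashing scheme is an assumption, and in a real interface the processed keys would be persisted in a database table rather than held in memory:

```python
# Illustrative duplicate check. In practice the processed keys would be
# persisted (e.g. in a database table), not held in a Python set.

import hashlib

processed_keys: set[str] = set()

def is_duplicate(raw_record: str) -> bool:
    key = hashlib.sha256(raw_record.encode()).hexdigest()
    if key in processed_keys:
        return True  # same record already posted in this or an earlier file
    processed_keys.add(key)
    return False
```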

Error processing: The error handling shown in this illustration is very tedious, since the user has to access the file, download it, make corrections manually and upload it again. He/she not only needs to understand file handling dynamics, but also the structure of the file, its syntax, semantics, data types, naming conventions etc. A lot could go wrong here (remember the frustration meter in Implementing & Sustaining SAP ERP Interfaces - Part 1, Essence of Interfaces?). Even if the user could do all that, there is a lot of scope for typos and unintentional errors. Plus, the user has to search through files and do this due diligence every day, week or month; vacations, absences, retirements... everything gets in the way here. To top it off, there is no easy way of reporting the number of errors and their distribution by category. When there is a large number of issues (as is often the case in the early stages of an implementation, or when large business changes have happened), it is hard to prioritize the errors, since statistics on error types by count or impact are not available.
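A hedged sketch of one improvement: log every error with a category code so that counts per error type are available for prioritization. The categories and the print-based reporting are assumptions; a real interface would write to a persistent, reportable application log:

```python
# Illustrative sketch: categorized error logging so that error statistics
# are available for prioritization. Categories here are assumptions.

from collections import Counter

error_stats: Counter = Counter()

def log_error(line_no: int, category: str, message: str) -> None:
    error_stats[category] += 1
    # a real interface would write this to a persistent application log
    print(f"record {line_no}: [{category}] {message}")

def report_error_distribution() -> None:
    # distribution of errors by category, most frequent first
    for category, count in error_stats.most_common():
        print(f"{category}: {count} error(s)")
```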

Security and auditability: The design of this interface means that the files go through human corrections during their voyage, which opens up a lot of security concerns. Some interfaces may carry sensitive, confidential or private data, and this design might expose both the business and the project team to unwarranted liabilities. Also, in this case, it is not possible to audit who made changes to the files and what those changes were. There is no traceability from a document (that was posted in SAP) to the record in the file, or vice versa. There is probably just as little traceability from a file record back to the document in the source system.
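A minimal traceability sketch, assuming an append-only audit log: for every posted document, record which file and record it came from and who triggered the posting, so the document can be traced back to its source record and vice versa. In practice this would be a protected database table, not a CSV file:

```python
# Illustrative audit-trail sketch. A real implementation would use a
# protected database table, not a plain CSV file.

import csv
from datetime import datetime, timezone

def record_audit_entry(audit_path: str, file_name: str, line_no: int,
                       document_no: str, user: str) -> None:
    with open(audit_path, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),  # when it was posted
            file_name, line_no,                       # source file record
            document_no,                              # document posted in SAP
            user,                                     # who triggered the posting
        ])
```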

The illustration used here is admittedly quite specific. But like I said before, interfaces come in different forms, and how we deal with them is very context dependent. In some contexts the challenges may be very specific to the functionality, technology or other factors, and it may not be possible, or even necessary, to build a strategy that deals with all such scenarios. However, many themes keep repeating across projects and implementations. The idea of this blog series is primarily to reinforce these common themes and to set a benchmark, so that as a community we don't get trapped in the common pitfalls.

In the coming parts of this blog series, I intend to write about techniques to deal with interface scenarios. Some of these techniques are common and obvious, while others are not. In the next part, I will write about serialization, an often overlooked aspect of interfaces that impacts the integrity of interface data.