One of the most famous passages in Homer's Iliad is the so-called Catalog of Ships that lists each of the contingents that set sail from the different cities and islands of Greece to join in the Trojan War.
The Catalog of Ships, which is greatly loved by Homeric scholars, is an inventory of political and military power in Greece long before the days of Socrates, Plato, and Aristotle. It names the king or hero who led each contingent, describes his home city, and provides the number of ships he contributed with the number of men in each ship.
This simple act of listing each leader and his contribution provides a fascinating glimpse into the coalition of forces that set out for Troy, their relative strengths and their relationships to the great powers of Late Bronze Age Greece.
No matter how powerful or weak each contingent was, together they formed a great coalition under a unified command to pursue a common cause: the siege and eventual conquest of Troy, a war that was to drag on for 10 years and inspire one of the foundational works of Western literature, rife with psychological drama and the tragic consequences of destructive emotions revealed on a human, rather than divine, scale.
Phew ... now that really got my attention. If you're interested in learning more about the Homeric Catalog of Ships, you can find it in Book 2, lines 494-759 of the Iliad. Go on ... it's good for you. And if you're really interested ... click the link to download a complete free copy of Thomas W. Allen's classic 1921 study of history's most literate inventory document, The Homeric Catalogue of Ships.
While I really enjoy reading and writing about Homer and his tragic Trojan heroes, the whole point of this literary interlude is that since earliest times, lists have been interesting and have helped move a story along. This particular list gives us a foundation on which to construct a picture of an entire expedition, setting the stage for the story that unfolds. We can't really understand the whole without learning something about each of its pieces.
In more prosaic terms, the list helps us integrate our vision.
It's the same thing with programming and developing interfaces. We have our vision of an integration architecture but we need to identify each of its pieces before we can begin building it. In other words, we need a list, our very own catalog of integration objects. It's all part of our integration thinking.
Each piece can be described in detail and has a separate existence, but none has any real functionality on its own. They can only fulfill our integration requirement together, like the different contingents of the Achaean army setting sail from Greece so many years ago.
We know what we want to do: extract payment reconciliation data from SAP and send it to our banking partners through our EDI system. We also know that each partner expects its remittance data in its own unique file structure. And we know that we can leverage objects from our existing EDI architecture to make this work, as outlined in my previous posting Bank Payment Interfaces: Considering What We Have Is How We Begin.
And so we begin, to paraphrase and with sincere apologies to Homer, to sing, oh goddess, of the catalog that lists each object destined to become part of a whole that will unify our architecture and deliver remittance data from SAP to our banks.
I'll just list the objects and requirements in this week's posting. We'll look at how they're actually used in our next posting.
We've mentioned that we've found ourselves in an environment with an existing custom extract program, based on standard SAP report RFCHKE00, that pulls payment data out of tables PAYR, REGUP, and REGUH for one bank. We'll keep this program since it can select for any bank or vendor set up in the system. Because it pulls its data from PAYR, we can use the existing selection logic for all our banks, but we'll need to make some adjustments to handle each bank's specific format.
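As a rough illustration of the selection idea, here it is sketched in Python rather than ABAP. The PAYR field names below (ZBUKR, HBKID, VBLNR, RWBTR) follow the real table's conventions, but the sample rows and values are invented for the example.

```python
# Illustrative sketch (not the actual ABAP of RFCHKE00): selecting payment
# register entries for one house bank. ZBUKR = paying company code,
# HBKID = house bank, VBLNR = payment document, RWBTR = amount (assumed here).
payr = [
    {"ZBUKR": "1000", "HBKID": "CITI1", "VBLNR": "2000000001", "RWBTR": 1250.00},
    {"ZBUKR": "1000", "HBKID": "BOFA1", "VBLNR": "2000000002", "RWBTR": 830.50},
    {"ZBUKR": "1000", "HBKID": "CITI1", "VBLNR": "2000000003", "RWBTR": 99.99},
]

def select_payments(register, house_bank):
    """Return payment register rows for one house bank -- the same
    selection logic works for any bank configured in the system."""
    return [row for row in register if row["HBKID"] == house_bank]

citi_payments = select_payments(payr, "CITI1")
```

The point is that the bank is just a selection parameter: nothing in the selection itself needs to change when a new bank is added.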
The existing remittance program outputs its data to a file, which is then sent to the bank by an SFTP batch file or script that may or may not keep working once the consultants leave. Our approach is to replace the file in SAP with an IDoc.
This gives us end-to-end visibility of the interface in both SAP and SI. Any problem in outbound processing, particularly in the SFTP transmission, is reported back to us immediately through the STATUS interface, which updates the control and status records of the outbound remittance IDoc.
So let's walk through our approach, object by object, starting with the IDoc configuration we'll need on the SAP side.
We'll need to build a function module that creates our custom IDoc for each bank from the remittance extract data. We'll pass unformatted 1,000-byte strings for the header and trailer records and an internal table with a single 1,000-byte field for the detail records. This will handle any bank's unique format.
The function will identify the bank and then build an IDoc that identifies the bank's particular file structure through its basic type.
The function will be built once but adjusted to recognize the bank's basic type each time a new bank interface is added. This should be a simple adjustment because the extract data arrives in unformatted single-field structures that match the SDATA field of the IDoc data record. The only unique value we really need to pass from bank to bank is the basic type name.
We can call the function Z_MASTER_IDOC_CREATE_ZAPREMIT.
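To make that concrete, here's a minimal sketch in Python (rather than ABAP) of the logic Z_MASTER_IDOC_CREATE_ZAPREMIT would implement. The bank codes, basic-type names, and the 1,000-byte record width are illustrative assumptions, not values from our configuration.

```python
# A minimal sketch of the IDoc-building logic: look up the bank's basic
# type, then wrap each unformatted remittance line into a data record
# whose SDATA field is padded to the fixed record width.
RECORD_WIDTH = 1000  # assumed width of the unformatted extract records

# Each bank's unique flat-file layout is represented by its own basic type
# (names below are hypothetical).
BASIC_TYPE_FOR_BANK = {
    "CITI": "ZREMIT_CITI01",
    "BOFA": "ZREMIT_BOFA01",
}

def build_idoc(bank, header, details, trailer):
    """Assemble an IDoc-like structure: a control record naming the bank's
    basic type, plus one data record per remittance line."""
    basic_type = BASIC_TYPE_FOR_BANK[bank]  # the only bank-specific value
    records = [header] + list(details) + [trailer]
    return {
        "control": {"IDOCTYP": basic_type, "RCVPRN": bank},
        "data": [{"SDATA": rec.ljust(RECORD_WIDTH)[:RECORD_WIDTH]}
                 for rec in records],
    }

idoc = build_idoc("CITI", "HDR...", ["DTL1...", "DTL2..."], "TRL...")
```

Adding a new bank means adding one entry to the bank-to-basic-type lookup; the assembly logic itself never changes.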
We'll need all the information required to complete an SFTP put, including hostname or IP address, port, user name, and public keys. This is a one-time task for each bank and is critical for successful SFTP communications.
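Here's a hedged sketch of what that per-bank connection profile might hold; the field set and sample values are assumptions for illustration, not SI's actual configuration format.

```python
from dataclasses import dataclass

# Hypothetical shape of a per-bank SFTP connection profile.
@dataclass
class SftpProfile:
    bank: str
    host: str             # hostname or IP address
    port: int             # usually 22 for SFTP
    user: str
    known_host_key: str   # bank's public host key, verified on connect
    client_key_name: str  # our key pair registered with the bank

profiles = {
    "CITI": SftpProfile("CITI", "sftp.citi.example.com", 22,
                        "acme_ap", "ssh-rsa AAAA...", "acme_citi_key"),
}
```

Once entered, the profile is looked up by bank at run time, which is what keeps the transmission step generic.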
We'll need two records for each bank interface in our custom table that manages EDI identification and routings in SI. These records will be entered each time a new bank is set up.
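The shape of those two records might look something like the sketch below; the column names, values, and lookup logic are all assumptions about the custom table, shown only to illustrate the idea of one record identifying the inbound IDoc and one routing the translated file.

```python
# Illustrative shape of the two routing-table records per bank interface
# (hypothetical columns and names).
routing = [
    # One record identifies the inbound IDoc by its control-record values
    # and names the map and translation BP...
    {"SNDPRN": "SAPPRD", "IDOCTYP": "ZREMIT_CITI01", "BANK": "CITI",
     "MAP": "ZREMIT_CITI01_TO_FLAT", "BP": "BP_REMIT_TRANSLATE"},
    # ...and one routes the translated file to the bank's SFTP profile.
    {"BANK": "CITI", "SFTP_PROFILE": "CITI_SFTP", "BP": "BP_REMIT_SEND"},
]

def lookup(records, **criteria):
    """Find the first record matching all given column values."""
    return next(r for r in records
                if all(r.get(k) == v for k, v in criteria.items()))
```

Setting up a new bank is then pure data entry: two rows, no code changes.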
This is a simple map to build since the IDoc basic type is already structured like the bank flat file. The map strips the control segment from the IDoc and ensures that each file record is the precise length required by the bank. One map is built for each bank.
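The map's core job can be sketched in a few lines of Python; the record length and sample data below are hypothetical, since each bank dictates its own.

```python
# Sketch of the map's job: drop the IDoc control segment, keep each data
# record's SDATA, and pad or trim every output line to the exact record
# length the bank requires.
def idoc_to_bank_file(idoc, record_length):
    lines = [rec["SDATA"] for rec in idoc["data"]]  # control segment stripped
    return "\n".join(line.ljust(record_length)[:record_length]
                     for line in lines)

sample = {"control": {"IDOCTYP": "ZREMIT_CITI01"},
          "data": [{"SDATA": "HDR..."}, {"SDATA": "DTL..."}]}
flat = idoc_to_bank_file(sample, 80)
```

Because the basic type already mirrors the bank's flat file, the map never needs to rearrange fields: it just enforces record length.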
We'll need two new BPs to process all bank interfaces:
The BPs are generic and use XPath statements to identify which bank interface is currently running and each of its specific parameters to run the map and send its file by SFTP. There's no hard-coding anywhere so any bank remittance interface can be dropped into the architecture.
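As a hedged illustration of the XPath idea (in Python with ElementTree, not actual BPML): the BP reads the bank identifier out of the IDoc's control data and uses it to select the run-time parameters, so no bank is ever hard-coded in the process itself. The XML shape and parameter names are assumptions.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML view of the inbound IDoc's control record.
idoc_xml = """
<IDoc>
  <Control><IDOCTYP>ZREMIT_CITI01</IDOCTYP><RCVPRN>CITI</RCVPRN></Control>
</IDoc>
"""

# Assumed per-bank parameters, keyed by the partner found via XPath.
PARAMS = {
    "CITI": {"map": "ZREMIT_CITI01_TO_FLAT", "sftp_profile": "CITI_SFTP"},
}

doc = ET.fromstring(idoc_xml)
bank = doc.findtext("Control/RCVPRN")  # ElementTree's XPath subset
run_params = PARAMS[bank]
```

Dropping a new bank into the architecture then means new data (basic type, map, table rows, SFTP profile), never new process logic.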
We'll look at how this all works together in our next posting. Until then, I have a hankering to brush up on my Homer. And a Happy Valentine's Day to everybody!