Vinita Kasliwal

Ariba master data extraction program


This blog explains how to use the master data extraction program to send master data from ECC to the Ariba system. I have also written a separate blog on how the integration works, which covers the end-to-end setup.


This blog covers the following topic areas:

  • Master Data extraction program
  • Sequence for Initial load consideration
  • Customization of the master program
  • Incremental load consideration
  • Lessons Learnt
  • Test cases for Master data integration

Master Data extraction program:

The standard program /ARBA/MASTER_DATA_EXPORT needs to be customized, or you will keep wasting time looking for implicit enhancement points, which are too few and not in the right places. So I did the easier thing: copied the entire program as a Z program and made the changes there. When you run the program you see radio buttons such as Procure to Order, Procure to Pay, and Sourcing and Contracts.

What needs to be run depends on which modules you have implemented in Ariba. A lot of the content available to download is common between the Sourcing, P2P and P2O radio buttons; however, the program still needs to be run once per selection, as each run updates a specific module in Ariba. Based on what you select, 'Choose Master data to export' offers different master data options.

To understand this better: you first extract the company code from P2O, and then extract the company code again from Sourcing. Even though it is the identical company code master data that gets exported, it needs to go to different modules in Ariba (such as Sourcing and Contract Management), so you need to run the extractors twice. Generally, the procurement scenarios in Ariba consist of:

P2P: Contracts, Orders and Invoicing

P2O: Contracts only

You can choose to send data either as a one-time initial load or as an incremental load. So you would need to run:

  1. Sourcing and contracts
  2. Procure to Pay / Procure to Order (one of these, depending on the modules implemented in Ariba)

There are also connectivity options:

  • Direct connectivity: sends the data to the Ariba system directly.
  • Integration Toolkit: writes the file to a shared drive, from where the integration layer (PI/PO) picks it up and sends it to Ariba.

Sequence for Initial load consideration:

It is advisable to follow a specific sequence to ensure the correct data gets uploaded to the Ariba system in the right order. Load Sourcing and Contracts first, since it is the upstream module in Ariba and the others are downstream.

Sourcing and Contracts:

  #   Data                           File Name                     Delta Load
  1   Company Code                   CompanyCode
  2   Plant                          Plant
  3   Purchase Org                   PurchaseOrg
  4   Purchasing Group               PurchaseGroup
  5   Payment Terms                  PaymentTerms
  6   Incoterms                      IncoTerms
  7   Item Category                  ItemCategory
  8   ERP Commodity Code             ERP Commodity Code
  9   Plant Purchase Org             PlantPurchaseOrgCombo
  10  Material Master                Material, Material Plant
                                     Combo, Material Lang          X

Procure to Order / Procure to Pay:

  #   Data                           File Name                     Delta Load
  1   Company Code                   CompanyCode
  2   Plant                          Plant
  3   Purchase Org                   PurchaseOrg
  4   Purchasing Group               PurchaseGroup
  5   Plant Purchase Org             PlantPurchaseOrgCombo
  6   User                           UserConsolidated              X
  7   Supplier                       PurchaseOrgSupplierCombo      X
  8   Cost Center                    CostCenter                    X
  9   Internal Order                 InternalOrder                 X
  10  General Ledger                 GeneralLedger                 X
  11  WBS Element                    WBSElement, Company Code
                                     WBS Combo                     X
  12  Asset                          Asset                         X
  13  Account Category               AccountCategory
  14  Account Category Field Status  AccCategoryFieldStatusCombo
  15  ERP Commodity Code             ERP Commodity Code
  16  Currency Conversion            CurrencyConversionRate
  17  Tax Code                       TaxCode
  18  Payment Terms                  PaymentTerms


Customization of the master program

Some customization may be needed to clean or correct the data being sent to Ariba. I describe some of the scenarios below.

To change a column name, or add an additional column and populate it with data:

You may need to change a column name to make it the primary key. For example, instead of the user ID, which is the default primary key for the user extraction, the client wanted to make the email ID the primary key. So I had to rename the email column to UNIQUENAME for the user extraction file (Ariba treats the UniqueName column as the primary key) and assign the email address as its value. In table /ARBA/FIELD_MAP, the field BNAME (User ID) is currently mapped to the Ariba field UNIQUENAME; I now want the email ID field to populate UniqueName instead.

Let's have a look at how this was done.

  1. In table /ARBA/FIELD_MAP (via SM30) I added an extra SAP column, UNIQUENAME1, whose column name on the CSV is UniqueName. This field was assigned the email ID.

I also changed the label for BNAME to UserId, since only one field can be labelled UniqueName, which becomes the key in Ariba.

  2. Add an append structure to the standard structure, as seen below:



3. To assign a value to this column, we need to change the code to read the user's email:

To assign a value to this field I used the BAdI /ARBA/MASTER_DATA. This BAdI has all the user extraction data methods, as seen below.



Method /ARBA/IF_EXP_MASTER_DATA~MODIFY_USER_DATA is called each time a user record is extracted, so there I fill my field UNIQUENAME1 with the email address. Remember the parameter is a structure, so the method is called for all entries one by one.
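As a sketch, the BAdI method implementation can look like the code below. Note this is illustrative only: the changing-parameter name cs_user_data and the use of BAPI_USER_GET_DETAIL to read the user's email are assumptions; in the actual implementation, use the parameter names from the method signature.

```abap
METHOD /arba/if_exp_master_data~modify_user_data.
* Illustrative sketch: cs_user_data stands for the changing structure
* holding the current user record.
  DATA: ls_address TYPE bapiaddr3,
        lt_return  TYPE STANDARD TABLE OF bapiret2.

* Read the user's e-mail address from the user master data
  CALL FUNCTION 'BAPI_USER_GET_DETAIL'
    EXPORTING
      username = cs_user_data-bname
    IMPORTING
      address  = ls_address
    TABLES
      return   = lt_return.

* Fill the appended field, which /ARBA/FIELD_MAP maps to the CSV
* column UniqueName
  IF ls_address-e_mail IS NOT INITIAL.
    cs_user_data-uniquename1 = ls_address-e_mail.
  ENDIF.
ENDMETHOD.
```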

Most of the methods, however, such as /ARBA/IF_EXP_MASTER_DATA~PUBLISH_USER_DATA, have an exporting/changing parameter that is a table. There you can loop over the records and, for example, delete entries whose email ID is blank: since email is our primary key, leaving it blank would create issues in Ariba.
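For these table-based methods, a minimal cleanup sketch (the table-parameter name ct_user_data is an assumption) could be:

```abap
METHOD /arba/if_exp_master_data~publish_user_data.
* Illustrative sketch: ct_user_data stands for the changing table
* parameter. Records without an e-mail are dropped, because UniqueName
* (the e-mail) is our primary key in Ariba and blank keys would cause
* load issues.
  DELETE ct_user_data WHERE uniquename1 IS INITIAL.
ENDMETHOD.
```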

Now, when the Ariba extraction program is run, the extracted CSV file will have an additional column UniqueName containing the email ID of the user.


Incremental load consideration:

Note that not all data can be set up as an incremental load. In the P2O/P2P scenario, the incremental load covers the master data marked with an X in the Delta Load column of the table above (user, supplier, cost center, internal order, general ledger, WBS element, asset).



In Sourcing and Contracts, only the material master data can be scheduled for incremental runs (see the Delta Load column in the table above).

If the data you wish to send to Ariba is not included in the incremental list above (for example product category, purchasing group, company code, purchasing org), then you need to either update it manually in Ariba or send an initial load each time, as needed.


How does the system know what has changed since the last run, and how does it pick up the delta load?

The program handles the incremental load using table /ARBA/INCR_DTTIM. Every time an incremental load is run, the table is updated with the date and time of that load. On the next run, the system automatically filters the data (for users, suppliers or any other master data you run incrementally) based on the last-run timestamp from this table. After the load, the system updates the date information again, so each incremental run extracts only the delta since the previous one.
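Conceptually, the delta selection works like the sketch below. The field names used for /ARBA/INCR_DTTIM are placeholders (take the actual ones from the table definition in SE11); USR02-MODDA holds a user's last change date.

```abap
* Illustrative sketch of the delta logic; the /ARBA/INCR_DTTIM field
* names below are placeholders.
DATA: lv_last_run TYPE sy-datum,
      lt_users    TYPE STANDARD TABLE OF usr02.

* 1. Read the date of the previous incremental run for this object
SELECT SINGLE last_run_date
  FROM /arba/incr_dttim
  INTO lv_last_run
  WHERE object_type = 'USER'.          "placeholder key field

* 2. Extract only the users changed on or after that date
SELECT * FROM usr02
  INTO TABLE lt_users
  WHERE modda GE lv_last_run.

* 3. After a successful run the program writes the current date/time
*    back to /ARBA/INCR_DTTIM for the next delta run.
```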


To ensure the data is sent to the right destination, the table /ARBA/AUTH_PARAM should be populated. Ensure you have maintained the necessary parameters in /ARBA/AUTH_PARAM so that the ECC system points to the correct Ariba realm.


Lessons Learnt:

  • Get confirmation of the fields that are mandatory in Ariba but may not be mandatory in ECC, and always check that the file extracted from ECC was successfully loaded in Ariba with the same number of records.
  • Data can fail in the integration, e.g. email IDs containing special characters; this needs to be handled with custom logic, such as creating email aliases.
  • If the number of error records exceeds a threshold (say around 100), the entire file fails to load in Ariba: once N failures are reached, it stops reading the rest of the file. Correct the data before sending the file again.
  • If the extracted data volume is huge it may freeze your SAP screen, so it is better to run the extraction as a batch job.
  • In the 'Connectivity' section, direct connectivity uploads the data to the connected Ariba system, whereas 'Integration Tool Kit' stores the file at a specific location you specify, from where it then needs to be manually uploaded to Ariba. For testing I used the Integration Toolkit.
  • You can also automate the Integration Toolkit route, with PI/PO picking the file up and sending it to Ariba. In that case you may want to write the file to a different location for testing.
  • Not all data is available for incremental load, so if you create new product categories you need to update them manually in Ariba. Check which delta loads are available in the last column of the table above.
  • When you run the batch job, check the user's authorizations: the batch user needs specific authorizations to access a local directory, create a folder and place the file in it.
  • If jobs are run for incremental load, ensure that the date/time stamp is correctly updated by the system in table /ARBA/INCR_DTTIM. This is how the system knows which records to pick up for delta extraction.
  • When you put filters on the load (e.g. for users), you may use the files shown below.
  • Supplier data extraction methods are separate for the initial and delta loads.

For the incremental load for users, SAP does not provide any BAdI method to modify the incremental deletion files, such as the UserDelete and UserGroupDelete files.

These files (the user deleted file and the user group deleted file) are sent to Ariba to deactivate users who have left the organisation. However, no BAdI method is triggered for formatting these files, so I came up with the workaround below: overwrite the extracted user deletion and group deletion files with the required data.

Note that in the deletion file only one column is populated: the primary key, which the Ariba system uses to know which user to deactivate. Since I had changed the primary key to email, this was not populated in the deletion files being sent to Ariba, and the system would not know which user to deactivate. So I did the following.

Rename the file using the code below, where gv_fname is the file which was generated without the filter. I overwrite the same extracted file with the additional information I need to pass to the Ariba system: in my master data program I modify the gt_user_del table to add the email, and rename the file to the same name as used by Ariba, so my file replaces the original file created by the program.

*** Modify the deletion file as needed
* (inside the loop over gt_user_del; sy-subrc comes from the
* preceding e-mail lookup)
IF sy-subrc EQ 0.                         "update entry with email
  ls_user_del-uniquename1 = ls_smtp-e_mail.
  MODIFY gt_user_del FROM ls_user_del INDEX lv_index.
ENDIF.

***** Overwrite the file
CONCATENATE gv_fname 'GroupConsolidated_Delete.csv' INTO lv_file_name.

* Call the same file-download function module the extraction program
* uses (its name was omitted in the original snippet):
CALL FUNCTION '...'
  EXPORTING
    i_filename        = lv_file_name
    i_fileformat      = 'CSV'
    i_field_seperator = ','
    i_tabname         = '/ARBA/USER'
    i_encoding        = 'UTF-8'
    i_solution        = 'AR'
    i_tab_sender      = gt_user_del
  EXCEPTIONS
    open_failed       = 1
    close_failed      = 2
    write_failed      = 4
    conversion_failed = 5
    OTHERS            = 6.
IF sy-subrc <> 0.                         "#EC NEEDED
* handle file write errors here
ENDIF.



Test cases for Master data integration should include:

  1. Maintain a checklist to ensure all master data is loaded in Ariba.
  2. In addition to checking each master data object, compare the record counts; for example, 200 users sent from SAP should all be loaded in Ariba.
  3. Create documents such as contracts and POs to ensure the user data is correctly updated and that you can create documents successfully with multiple users.
  4. If data fails to load into Ariba, verify that the file format being sent is read correctly.
  5. Check that the system reads the incremental load correctly, using a small set of data for suppliers and users.
  6. If there are issues, check that the incremental files have the right format, and whether the changes you applied to the initial load files are also needed for the incremental push.
  7. Ensure blocked suppliers are updated correctly in the Ariba system and no longer show up.
  8. Ensure users who have left the organisation are removed from the Ariba system.
  9. Check that the integration to and from ECC and SRM is correct, as applicable.
  10. For currency conversion, ensure the same rate applies as shown in the ECC system.
  11. For UoM, ensure the Ariba values translate to the ECC ANSI conversion units.
  12. Ensure there is a documented process for all manual activities, such as updating the delta load manually (e.g. for new product categories, cost centers and WBS) if it is not set up as a batch job.
  13. If you use SSO in your organisation, ensure it works for the users when they log in.
  14. Email notifications are sent from Ariba; ensure SSO is enabled for them and that they reach the right recipients.
  15. Ensure the approval rules work correctly for creation as well as changes.


Thanks for reading. I hope this document is helpful for those looking to find some help on using the master data extraction program.

Comments:
      Srinivas Rao

      Excellent blog!! The "lessons learnt" and "test cases" sections are the best... Thanks for sharing!

      Former Member

      Hi Vinitha, thank you for the blog. I'm doing it for the first time (Ariba master data extraction).

      Is SQL and ABAP knowledge compulsory to do the extraction?


      Ashish Gupta

      You just need to configure the ERP, and the standard program, which contains the ABAP code, will run and send the data.

      There is no requirement to have knowledge of ABAP or SQL.

      Former Member

      Thank you for sharing.


      Ashish Gupta

      Hi Vinitha, thanks for the information.

      I am also doing master data integration and ran the master data export program.

      I selected Sourcing, company code and full load, and executed the program.

      The company code records were extracted, but between the start and end time shown it reports the error "The specified path is invalid !!"


      Kristoffer Kronhamn

      Hi Ashish,

      You need to maintain a temp path in /ARBA/TVARV. Make sure that this path is valid.

      Best regards,

      Ashish Gupta

      Thanks Kristoffer for the suggestion, but I had given the correct path, .\Masterdata, as the selection value for TEMP_DIRECTORY in /ARBA/TVARV.

      Where can we find a valid path in SAP for the temp directory?


      Ashish Gupta

      Pramod Kumar

      Hi Ashish


      Can you please advise on this path issue?

      I am also facing a similar issue.




      Ashish Gupta

      Hi Pramod,

      Set Logical file path in SAP.


      Ashish Gupta


      shahbuddin ali

      Hi Ashish,

      I am facing the same issue, so I just wanted to know whether your issue is resolved.

      If it is, can you help me to complete this process?




      Devesh Agarwal

      Hello ,

      Greetings !
      Actually I read the integration blog by Prapti Vyas and one other blog, and I found completely opposite information in the two. According to Prapti Vyas's blog, the ITK method is used when we don't have PI as middleware, but according to the other blog, ITK is used when we do have PI in between. Can you please clarify?
      Ashish Gupta

      Hi Devesh,

      ITK can be used when we are using PI or any other middleware, but that is not mandatory.

      We can use ITK independently, whether or not we have PI/PO.


      Ashish Gupta

      Devesh Agarwal

      Hello Ashish ,


      Thanks for the Answer !


      As you said that "we can use ITK independently": can we use it for direct connectivity?

      Also, can you please explain briefly, or provide me some documentation on the integration or ITK?


      Thanks !

      Ashish Gupta

      Hi Devesh,


      Yes, definitely we can use it with direct connectivity.

      Provide me your scope of work. As a standard way to start with ITK, first download ITK and the SAP ITK guide.

      ITK is basically a file-transfer integration which extracts data from the source (ERP) and delivers it to a destination like Ariba Sourcing.


      Ashish Gupta


      Former Member

      Thank you for the clear test cases and documentation. I have the query below; could you please clarify?

      We have only Ariba Contract Management, SAP S/4HANA and PI/PO as the middleware system.
      There is no license for the SIPM, SLP or MDG modules, but is it possible to replicate the supplier master from S/4HANA to Ariba using the BAdI /ARBA/MASTER_DATA and report /ARBA/MASTER_DATA_EXPORT?
      I can see there is no supplier master data under the Sourcing and Contract Management selection in report /ARBA/MASTER_DATA_EXPORT.
      Can we customize this report and load supplier master data automatically?
      Ashish Gupta

      Hi All,

      I sent all master data from the Sourcing and Contracts part, but now I want to send supplier data to Ariba, and that comes under Procure to Pay.

      My Ariba is a sourcing portal, so I want to ask whether it is possible to send supplier data in the sourcing module. I tried to send the data from Procure to Pay and configured two tables in /ARBA/TVARV, but at the time of sending I get the error "No stack available".

      Kindly help me for the Same.

      Kaustubh P Deshpande


      Vinita Kasliwal - thanks for the blog, nice from an integration perspective. It would be good if you could also share some details on having a custom field added in Ariba P2P, for example if a customer wants the storage location from ECC in the PR: what considerations do we need to take into account, and how will the master data to transactional interface work?

      Also, how is the above master data extraction affected by the mediated, direct and CSV file methods, and what configuration is needed? Say, which SICF services in HANA or ECC need to be activated, and which web services do we need to activate in the direct case?

      • Kaustubh


      Andrew Hunt



      Thanks for the blog. I have successfully added to the SupplierLocation.csv and thought it would be a similar process for the UserConsolidated.csv. Part of my requirement is to overwrite the information in the columns DefaultCurrency, SAPPurchaseGroup and SAPPurchaseOrg, but these are not part of the structure within MODIFY/PUBLISH_USER_DATA. Would really appreciate a tip on how these columns can be populated with data.



      David Zhuwao


      Hi Vinita,


      I have a requirement to extract Ariba data into SAP BW for reporting purposes. Do you have any ideas on how this can be accomplished?

      I have not been able to find info on standard extractors.



      Devesh Agarwal

      Hello ,

      Greetings !
      I am Devesh Agarwal, working at Infosys, and I am learning Ariba, so I just need some knowledge.
      Actually I read the integration blog by Prapti Vyas and one other blog, and I found completely opposite information in the two. According to Prapti Vyas's blog, the ITK method is used when we don't have PI as middleware, but according to the other blog, ITK is used when we do have PI in between. Can you please clarify?
      Manoj K

      Devesh ,

      Below are the possible ways of integration:

      1. ECC/S4 -> Ariba (no PI and no ITK involved): the connection is set up in S4/ECC - SOLMAN to send directly to Ariba.

      2. ECC/S4 -> PI (via proxy to SOAP) -> Ariba: no ITK involved; standard content is used to set up the proxy-to-SOAP interface.

      3. ECC/S4 -> PI (ITK) -> Ariba: ITK is installed in PI to upload/download data.

      4. ECC/S4 -> any Windows or Unix server -> ITK: standalone ITK batch jobs are scheduled to transfer data.



      Pratik Shah

      Has anyone customized Master data program to update Cost Centers for Upstream ?


      Currently cost center and GL account updates are only available for P2P, not for Sourcing & Contracts.


      Please provide solution or approach  that was used in your case.

      Francois Bouchard

      Hi Vinita,

      any experience on how to load a vendor with multiple purchasing unit possibilities?

      We want to implement this in Ariba, but we have not had any success when trying to load a vendor that can be in different purchasing units.


      Rob Demchuk

      HI Francois,


      You can do this by updating the filters in SPRO… the CIG guide can help. If you send me your contact data I will try to help via email…  ** Not sure what you mean by "multiple purchasing unit possibilities"..



      Shruthi Nandihallyrangaiah

      Hi Vinita,

      I am facing an issue in finding the user delete records. We are on 1809 HANA and I see no records in table USH04, hence whenever a user is deleted the corresponding entry is not picked up. Did you face this issue?



      CARLOS LAURENCIO

      Is it possible to run the full load program monthly?

      Morris Allatiw

      Hi Vinitha, everyone,

      are you familiar with this issue? The extraction is failing after an SAP system refresh. We restored the missing RFCs in SM59 and corrected SLDAPICUST.



      Mala Kaushik

      Hi everyone,

      We need to send a custom helper CSV file present under the import task "Import CSV Lookup Tables Used by Approval Rules". We copied the Ariba-delivered master data program and modified it to send the helper file to Ariba. It reaches CIG, and in CIG we then get the following error. The regular master data files work properly; it is the custom helper CSV file that does not. If we import it manually in CIG, it works.


      ..with HTTP Response Code :400 and Response Message: FAILED. Suggested Action : HTTP Post to Target system failed and above response message is received . Common HTTP Error Response and their suggestive actions. 500 -> Internal Server Error -> Check in the target system for exact nature of request failure. 400-4XX -> Authentication Issue, Forbidden, Not Found -> check target system provided valid authentication details in CIG or the target service is available, Target System service has right ACL's declared. Error Code : CIG-PLT-04640. Please review the Support Note 189493 ( for solution.

      Any help to resolve this is much appreciated.




      Vikash Gupta

      Hello Mala,


      I have the same requirement. Can you give more details on how you achieved this?




      Christine Banks

      Dear Vinita,

      thank you for your great blog!

      Has anyone asked you a similar question about how to modify the user group? Since the User_Uniquename is changed to a mail address in the user CSV file, the User_Uniquename in the GroupConsolidated file must also be a mail address.

      But there is just one method MODIFY_USER_DATA and no method for the User Group.

      Could you help me to add the mail address as User_Uniquename in the GroupConsolidated file?

      Thank you and regards