
Data Archiving with SAP Cloud Integration

Customers often ask for ways to persist data in SAP Cloud Integration for longer than the default retention policies allow.  There are limits on how much data can be logged and for how long, for many reasons, including performance and operations.

Data Archiving is a new feature in SAP Cloud Integration that lets customers persist data from Cloud Integration to an external Content Management System (CMS).  For example, customers may have legal or reporting requirements, or may simply want data kept for a certain number of years for historical purposes.  This can now be accomplished using this feature.

This blog walks through the documentation available on help.sap.com and provides some screenshots and additional information.

Archiving Data

In order to archive data, a customer needs to integrate their own Content Management System, which is external to the Cloud Integration tenant.

For the purposes of this blog, I created a new repository using the Document Management Service on a Neo tenant on SAP Business Technology Platform (BTP) (link).  I then developed a proxy-bridge Java application and deployed it on the Neo BTP tenant in order to connect to my repository.  Here is a good blog describing the process of developing this application, which also contains links to SAP help documentation.

Then, using the proxy-bridge app, I can connect to the repository via the CMIS browser binding with tools such as Apache Chemistry OpenCMIS.  You'll need the repositoryId of the repository, as it is required in the destination configuration later.
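As a quick sanity check, you can also call the browser binding URL directly: a GET on the service URL returns a JSON document keyed by repositoryId. A minimal Groovy sketch, assuming basic authentication (the URL and credentials are placeholders for my proxy-bridge app):

    import groovy.json.JsonSlurper

    // Placeholders -- replace with your proxy-bridge URL and repository credentials
    def serviceUrl = 'https://<app name>/cmisproxy-application/cmis/json'
    def auth = 'user:password'.bytes.encodeBase64().toString()

    def conn = new URL(serviceUrl).openConnection()
    conn.setRequestProperty('Authorization', "Basic ${auth}")

    // The CMIS browser binding returns one entry per repository, keyed by repositoryId
    new JsonSlurper().parse(conn.inputStream).each { id, info ->
        println "repositoryId: ${id} (${info.repositoryName})"
    }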

Now we are ready to configure archiving in SAP Cloud Integration.

The first step is to configure the destination.

In the BTP cockpit, navigate to the subaccount for the Cloud Integration tenant.  Select Destinations under Connectivity and click New Destination.


Complete the destination configuration using the URL to your repository.  The name must be CloudIntegration_LogArchive.

The URL should be the browser binding URL (if you want to use AtomPub instead, see the documentation for the additional property to set).  For example, I used the URL https://<app name>/cmisproxy-application/cmis/json

Provide the details for authentication (basic authentication in my case) and then add an additional property RepositoryId with the ID of your repository as the value.
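Putting it together, my destination looked roughly like this (only the name is fixed; the URL and repository ID are placeholders for my setup, and I assume the usual HTTP destination defaults):

    Name:                 CloudIntegration_LogArchive
    Type:                 HTTP
    URL:                  https://<app name>/cmisproxy-application/cmis/json
    Proxy Type:           Internet
    Authentication:       BasicAuthentication
    User / Password:      <repository credentials>
    Additional Properties:
        RepositoryId = <your repositoryId>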


The next step is to activate archive logging on the Cloud Integration tenant using the OData APIs.  For these steps I’ll use Postman.

URL: https://<<CloudIntegrationHost>>/api/v1/activateArchivingConfiguration

Before we can POST, we need to fetch a CSRF token.  Set a header "x-csrf-token" with the value "Fetch" and issue a GET on the URL; the token is returned in the response header of the same name.

Paste the returned token value into the x-csrf-token header and issue an HTTP POST.  You should get back a 200 OK HTTP code indicating that archiving has been enabled.  (Note: the first time I tried this I received an error that the retrieved destination did not have the required data; it turned out I was missing the RepositoryId property.)
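If you'd rather script it than use Postman, the same fetch-then-post sequence looks roughly like this in Groovy (host and credentials are placeholders; note that the session cookies from the GET need to be sent with the POST so the token stays valid):

    def base = 'https://<<CloudIntegrationHost>>/api/v1/activateArchivingConfiguration'
    def auth = 'user:password'.bytes.encodeBase64().toString()

    // Step 1: GET with x-csrf-token: Fetch to obtain a token
    def get = new URL(base).openConnection()
    get.setRequestProperty('Authorization', "Basic ${auth}")
    get.setRequestProperty('x-csrf-token', 'Fetch')
    def token = get.getHeaderField('x-csrf-token')
    def cookies = get.headerFields['Set-Cookie']

    // Step 2: POST with the fetched token (and session cookies) to activate archiving
    def post = new URL(base).openConnection()
    post.requestMethod = 'POST'
    post.setRequestProperty('Authorization', "Basic ${auth}")
    post.setRequestProperty('x-csrf-token', token)
    cookies?.each { post.addRequestProperty('Cookie', it.split(';')[0]) }
    post.doOutput = true
    post.outputStream.close()
    println "HTTP ${post.responseCode}"   // expect 200 OK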

Next, you need to assign your user the required roles in order to configure archiving.  The documentation refers to the roles ConfigurationService.RuntimeBusinessParameterRead and ConfigurationService.RuntimeBusinessParameterEdit, but these roles are only available for Neo tenants.  If you are using Cloud Foundry for your Cloud Integration tenants, the roles to assign are TraceConfigurationEdit and TraceConfigurationRead.


To assign the roles in Cloud Foundry, go into your subaccount in the BTP cockpit, navigate to Security->Roles, and assign the role(s) to a Role Collection that is assigned to your user.  If you are on Neo, the roles are also assigned in the BTP cockpit, but under Security->Authorizations.

Now when you go into the Monitoring view in the Cloud Integration WebUI and open your iFlow via the Manage Integration Content tile, you will see the Archive Data link enabled.  By default, archiving is not activated.

Click on the Archive Data link and you are presented with the archiving options: you can archive all sender and receiver channel payloads, data persisted to a data store, or log attachments.  In my case, I developed a simple iFlow with a Content Modifier to set a body, followed by a Groovy script that logs this body as an attachment to the message processing log (MPL).  So in my case I selected only the "Log Attachments" option.
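For reference, the script was along these lines (a minimal sketch of a standard CPI script step; addAttachmentAsString attaches the body to the MPL under the given name):

    import com.sap.gateway.ip.core.customdev.util.Message

    def Message processData(Message message) {
        def body = message.getBody(String)
        // Attach the current body to the message processing log;
        // the first argument becomes the attachment name
        def messageLog = messageLogFactory.getMessageLog(message)
        messageLog?.addAttachmentAsString('SOAP payload sent_', body, 'text/plain')
        return message
    }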

After running the iFlow, the iFlow's log viewer will show that archiving is pending for the integration.  According to the documentation, logs are archived after 7 days by default.


You can then connect to your repository using your preferred tool to see the content.  I’ll come back and update the blog once my content is archived.

Two other helpful URLs:

You can check the tenant's archiving configuration using the URL /api/v1/ArchivingConfigurations('s4hccpis')

and you can check archiving key performance indicators using this link:

/api/v1/ArchivingKeyPerformanceIndicators?$filter=MplsToBeArchived eq 5000
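Both are plain GETs against the same OData API. For completeness, in Groovy (host, credentials, and the configuration key are placeholders from my tenant; spaces in the $filter are percent-encoded):

    def base = 'https://<<CloudIntegrationHost>>/api/v1'
    def auth = 'user:password'.bytes.encodeBase64().toString()

    ["${base}/ArchivingConfigurations('s4hccpis')",
     "${base}/ArchivingKeyPerformanceIndicators?\$filter=MplsToBeArchived%20eq%205000"].each { url ->
        def conn = new URL(url).openConnection()
        conn.setRequestProperty('Authorization', "Basic ${auth}")
        println conn.inputStream.text   // raw OData response
    }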

 

Update (10/10/2021)

After one week, I do see the archive files in the repository.  I need to experiment with the functionality a bit more, but it appears that a zip file is placed into the repository with the message ID as the name, i.e. <messageid>.zip.  Inside this zip file is some data about the integration and archiving configuration and, more importantly, the archived files themselves (my attachment was stored under its log description with a .bin extension, i.e. "SOAP payload sent_.bin").  I suspect all attachments stored during the iFlow run are placed into the same zip under different subfolders, but I will test this out as well.

Thanks,
Marty

 


      6 Comments
      Amith Nair

      Made it look so easy! Great Blog, Marty!

      Geoff Beglau

      Is it not possible to link Cloud Integration directly to Document Management Service without the proxy-bridge application?

      Can the same be executed using Document Management Service in Cloud Foundry? 

      Marty McCormick (Blog Post Author)

      Hi Geoff

      I'm not sure and will need to check.  I know there are APIs to interact with the repository on Neo but I couldn't figure out how to access the repository from an external source using Browser or ATOM access without the proxy bridge.  If others know a way please feel free to comment here and I'll see if I can find out any additional information.

      Thanks,
      Marty

      Marco Koch

      Hi Marty,

      great blog post!

      To configure archiving for an integration flow, your user must be assigned one of the personas Integration Developer, Business Expert, or Tenant Administrator by assigning one of the following role collections: PI_IntegrationDeveloper, PI_Business_Expert, or PI_Administrator. Alternatively, you can assign just the TraceConfigurationEdit role template.

      Mentioning the Neo roles is an error in the documentation and will be fixed in one of the next releases.

      Regards
      Marco

      JAIME ARTURO CASTILLO

      Hi Marty

      I would like to know if there is an API for SAP data archiving in Google Cloud Storage.

      Best Regards

      Sirish Kumar Reddy Gongal Reddy

      Hi Marty McCormick

      What if we want to archive large files, let's say in the GBs?

      As an alternative, can we use SFTP (on-premise) and deliver a copy of the payload for archiving purposes? We can always map local file storage/NFS etc. to the SFTP server so documents are stored locally. Do you see any issue with this approach? Please suggest.

       

      Thanks,

      Sirish