This blog explains how to delay message processing when an API call fails. It does not cover all the detailed steps for consuming the Celonis APIs; rather, it discusses one part of the scenario.

 

Requirement


This is an SFTP to REST (Celonis) scenario. Celonis has shared APIs for posting data from SAP PO. One restriction is that SAP PO should not call the Submit Job API in parallel, and each request should be sent with a delay of 30 seconds.

https://docs.celonis.com/en/data-push-api.html

 

Celonis APIs


The Celonis API documentation link is shared above in the Requirement section.

Celonis provides three endpoints to send data to the REST service; a minimal call sequence is sketched after the list below.

  1. Create Job ID: this API creates the job ID, and here we define the structure of the payload, i.e. the number of CSV columns, their properties, primary keys and so on.

  2. Push CSV: data is sent to this endpoint, and the CSV payload must match the structure created with the Create Job ID API.

  3. Submit Job: this loads the data into Celonis. You pass the job ID and the job gets executed.
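
To make the workflow concrete, here is a minimal sketch of the three calls from a standalone Java client. The base URL, pool ID, target name, payload fields and endpoint paths are illustrative placeholders only, not taken from this scenario; the exact contract is described in the Celonis documentation linked above.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CelonisPushSketch {

    // Placeholder team, pool ID and token; the real values come from your Celonis setup.
    static final String BASE  = "https://myteam.celonis.cloud/integration/api/v1/data-push/my-pool-id";
    static final String TOKEN = "my-api-token";

    static final HttpClient client = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // 1. Create Job ID: declare the target table and the structure of the CSV that will follow.
        String createResponse = post(BASE + "/jobs/", "application/json",
                "{ \"targetName\": \"SALES_ORDERS\", \"type\": \"DELTA\" }");
        String jobId = extractJobId(createResponse);

        // 2. Push CSV: the payload must match the structure declared when the job was created.
        post(BASE + "/jobs/" + jobId + "/chunks/", "text/csv",
                "ORDER_ID;AMOUNT\n4711;100.00\n4712;250.50\n");

        // 3. Submit Job: pass the job ID and the load into Celonis is executed.
        post(BASE + "/jobs/" + jobId, "application/json", "");
    }

    static String post(String url, String contentType, String body) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + TOKEN)
                .header("Content-Type", contentType)
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> res = client.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(url + " -> HTTP " + res.statusCode());
        return res.body();
    }

    // Naive extraction of the "id" field from the create-job response; use a JSON library in real code.
    static String extractJobId(String json) {
        Matcher m = Pattern.compile("\"id\"\\s*:\\s*\"([^\"]+)\"").matcher(json);
        return m.find() ? m.group(1) : "";
    }
}
```

In the PO scenario these calls are of course made by the REST receiver channels, not by custom code; the sketch only shows the order of the calls and how the last two depend on the job ID returned by the first.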


 

If you check the API workflow, we have to push all CSV files and then finally call the Submit Job API. But this is only possible with a collect pattern. Since I am not using a collect pattern in my scenario, I have to repeat the whole sequence for each message.



Problem


Let us assume we have three interfaces and each interface has a separate ICO. These three ICOs can run in parallel. Each ICO picks up a file, converts it into CSV, creates a job ID, pushes the data and finally submits the job ID.

 


 

Since the Create Job and Push CSV APIs can be called in parallel, we don't get any error when calling these two APIs. The issue happens when the three ICOs call the Submit Job API in parallel: we end up with HTTP errors. First we get a 409 Conflict error, and eventually a 429 results if too many requests are sent.


 

The next change was to set these ICOs to EOIO, so that the change would be minimal. But this does not solve the issue either. We would still end up sending requests in parallel, although the number of parallel calls is lower compared to the EO option. With the EOIO option, we would still get HTTP error code 409.


 

Solution


Here we combine the first two API calls into one ICO, and the Submit Job API call is moved to another ICO. The Submit Job ICO is common for all three interfaces.

 

  1. Keep the common ICO as EOIO, so that messages get processed sequentially.


 


 

2. We will still get an error. Let us say the first message is processed at 10:00:00 and the next one is sent at 10:00:05; we will get the same error. The API expects a delay of at least 30 seconds between calls, so we cannot send the next message immediately after the first one.


One common solution would be to use a wait step in BPM or a sleep thread so that the delay is introduced. Instead of these options, we can control this using the receiver channel settings.
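
For reference, the sleep-thread option would boil down to something like the sketch below; in PO it would sit inside a custom mapping or adapter module, not a standalone class. It blocks a worker thread for the full delay and offers no recovery if the call still fails, which is why the channel settings are the better option here.

```java
// Sketch of the "sleep thread" alternative: enforce the 30 second gap before the Submit Job call.
// Blocking like this ties up a server thread per message, so it is shown only for comparison.
public class DelayBeforeSubmit {
    public static void main(String[] args) throws InterruptedException {
        Thread.sleep(30_000);                          // wait the 30 seconds the API demands
        System.out.println("now call the Submit Job API");
    }
}
```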


Here we can set the number of retries to 20 and the interval to 30 seconds. The chance of all 20 retries failing is slim, and even if they do fail, the support team can restart the message. Most likely the channel will be able to send the data within a couple of retries. We get around 100 messages per day in production, and it has never failed so far in almost a year of running.
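
Conceptually, the channel then behaves like the retry loop below. This is only an illustration: the adapter does all of this itself based on the channel settings, and submitJob() here is a placeholder for the actual Submit Job call.

```java
// Conceptual equivalent of the receiver channel settings: 20 retries, 30 second interval.
public class ChannelRetrySketch {

    static final int MAX_RETRIES = 20;
    static final long RETRY_INTERVAL_MS = 30_000;

    public static void main(String[] args) throws InterruptedException {
        for (int attempt = 1; attempt <= MAX_RETRIES; attempt++) {
            int status = submitJob();
            if (status == 200) {
                System.out.println("Submit Job succeeded on attempt " + attempt);
                return;
            }
            // 409 (Conflict) or 429 (Too Many Requests): wait for the next retry schedule
            System.out.println("HTTP " + status + ", waiting 30 seconds before the next attempt");
            Thread.sleep(RETRY_INTERVAL_MS);
        }
        // After all retries the message stays in error and can be restarted manually by the support team.
        System.out.println("All retries exhausted; message left for manual restart");
    }

    // Placeholder: in the real scenario this is the REST receiver channel calling the Submit Job API.
    static int submitJob() {
        return 409;
    }
}
```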


 



 

The channel will wait if the API call fails and will try to resend as per the retry interval mentioned. From the logs we can see that the message fails and waits for the next retry schedule.


 


The message gets delivered in the next retry schedule.



 

If we use CPI, it would be much simpler: we could first collect all files, then push the data using a looping process, and finally submit the job. This way we avoid calling the Submit Job API for each message.
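
To show why that flow avoids the problem, here is a sketch of the batched sequence. In CPI it would be built with a looping process call rather than Java, and createJob / pushCsv / submitJob are placeholders for the same three Celonis calls, but the order is the point: one job, many pushes, a single submit.

```java
import java.util.List;

// Batched flow: create one job, push every collected file, submit once at the end,
// so the Submit Job API is never called in parallel.
public class BatchedPushSketch {

    public static void main(String[] args) {
        List<String> collectedFiles = List.of("orders_1.csv", "orders_2.csv", "orders_3.csv");

        String jobId = createJob();                   // 1. create the job once
        for (String file : collectedFiles) {
            pushCsv(jobId, file);                     // 2. push each collected file in a loop
        }
        submitJob(jobId);                             // 3. submit once for the whole batch
    }

    // Placeholders standing in for the Celonis Data Push API calls sketched earlier.
    static String createJob()                 { return "job-1"; }
    static void pushCsv(String id, String f)  { System.out.println("push " + f + " to job " + id); }
    static void submitJob(String id)          { System.out.println("submit job " + id); }
}
```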

 