By Subhojit Saha

Creating Dynamic Queries in the CPI OData Receiver Adapter

In this blog, I will explain how to solve a common technical problem: creating a dynamic query, which is required many times while fetching data using the OData or SuccessFactors receiver adapter.

A dynamic OData query looks like:

$select=userId,costCenter&$filter=userId eq '00000813' or userId eq '00001079'

Here the values of userId, i.e. '00000813' and '00001079', are to be added dynamically based on the input payload.

 

Here there are only two values of userId, but what if the incoming payload has thousands of userIds? Then we need to build and fire many such queries dynamically, one batch after another.

Also, we have to take into consideration that we cannot build one huge query (the permissible size of the query depends on the target system, whether it is Employee Central, S/4HANA or any third-party OData service), because:

  1. the target system might have limitations (e.g. on URL length)
  2. the amount of data in the response must be manageable within CPI memory

Scenario:

Taking all the above points into consideration, we build an Iflow in CPI which takes a file as input. The file is huge, around 25K lines. It is a '|'-delimited file (in other cases it could be ','-separated or even fixed-length). Each line of the file has multiple fields, with the 10th field being the Employee Id, for which we need to fetch the corresponding costCenter (in a real scenario it could be several other fields such as company, payScaleArea etc., but we take only costCenter here for simplicity) from the 'EmpJob' entity of the Employee Central module of SuccessFactors. Also, an Employee Id may appear multiple times in the file, so we have to fetch only the unique Ids and their corresponding costCenter values.

 

Input – File:

The Employee Id column:

 

The Iflow:

Explanation:

There is one main integration flow which calls a local sub-process (a local integration flow) using a 'Looping Process Call', i.e. the sub-flow is called multiple times based on a condition. In the image, the sub-flow is named 'CallEmpJobDynamically'.

In the main flow we start with a Content Modifier to add the payload (instead of reading the file from SFTP, for ease of demonstration). The 23K lines of data are copied into the message body as a constant.

Also, we have defined two properties used later:

Next, we have a Groovy script which creates an ArrayList of the unique Employee Ids from the payload:

Here we first split the payload on '\n' and then, for each line, take tempdata[9], i.e. the Employee Id (the 10th field in each row), and add it to an ArrayList if it is not empty and not already present:
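The core of that script can be sketched as below. Inside CPI the logic would live in a processData(Message) script step and store the result via message.setProperty('empidarray', …); the plain function here keeps the sketch self-contained, and the names are illustrative:

```groovy
// Given the file content as a String, return the list of unique, non-empty
// employee IDs taken from the 10th '|'-separated field of each line.
List<String> uniqueEmployeeIds(String body) {
    def ids = []
    body.split('\n').each { line ->
        def fields = line.split(/\|/, -1)   // -1 keeps trailing empty fields
        if (fields.size() > 9) {
            def id = fields[9].trim()
            if (id && !ids.contains(id)) ids << id
        }
    }
    return ids
}
```

Duplicate Ids are skipped by the contains() check, matching the requirement to fetch each Employee Id only once.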

After that, we have another script which takes the ArrayList 'empidarray', stored as a property in the message exchange, and creates dynamic queries out of it. Remember that each query should have a bounded size (for the reasons discussed above); we decided there should be no more than 100 userIds (userId is the same as the Employee Id in the file) per query (you can choose more than 100 if your target system allows it). The output ArrayList has one entry per query, each of the form userId eq '00093147' or userId eq '00000813' … up to 100 terms, with the number of entries depending on the number of unique Employee Ids in the input file:
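A minimal sketch of that chunking logic, again as a plain function (in CPI it would read the 'empidarray' property and write the result back as 'empidquery'; names follow the blog, the chunk size is configurable):

```groovy
// Build one OData filter string per chunk of up to 100 IDs:
// "userId eq '00093147' or userId eq '00000813' or ..."
List<String> buildFilterLines(List<String> empidarray, int chunkSize = 100) {
    empidarray.collate(chunkSize).collect { chunk ->
        chunk.collect { "userId eq '${it}'" }.join(' or ')
    }
}
```

Groovy's collate() does the splitting into fixed-size groups, so each resulting line is one ready-to-use $filter value.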

The output arraylist from the above groovy script:

 

Finally, we call the sub-process 'CallEmpJobDynamically' using a Looping Process Call.

The looping condition is that the ArrayList formed above (named 'empidquery') should still have at least one line in it for the loop to be called:
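One possible configuration of the Looping Process Call step looks like the fragment below; the exact expression syntax can vary by CPI version, so treat this as an assumption to verify in your tenant:

```
Local Integration Process : CallEmpJobDynamically
Expression Type           : Non-XML
Condition Expression      : ${property.empidquery.size} > 0
Max. Number of Iterations : 999
```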

 

In Subprocess:

We first read the first line of the ArrayList 'empidquery' into a header in a Content Modifier named 'OneLineOfArray'.

Then we use this header 'queryempnow' as the filter condition in the next Request-Reply step to fetch data from the EmpJob entity of Employee Central.
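In the receiver adapter of that Request-Reply step, the query can then reference the header dynamically; a possible configuration (assuming the header name set above) is:

```
Resource Path : EmpJob
Query Options : $select=userId,costCenter&$filter=${header.queryempnow}
```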

Then we remove the headers via a Content Modifier so that they are not carried into the next call:

Next is a script added just to print the response and also to highlight that the sub-flow is called multiple times. (In a real flow you would replace this Groovy script with one that collects each userId and its corresponding costCenter and stores them.)

Script:

Response of EmpJob:

Highlighting the sub process is called multiple times :

 

Finally, we call the last Groovy script to delete the top line from the ArrayList 'empidquery', so that the next line becomes the top of the list and the looping continues smoothly.
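That last step can be sketched as follows; in the CPI script the list would be read from and written back to the 'empidquery' exchange property, shown here as a plain function:

```groovy
// Drop the first (already processed) filter line so the next loop
// iteration works on the following chunk. Once the list is empty,
// the looping condition stops the loop.
List<String> dropTopLine(List<String> empidquery) {
    if (!empidquery.isEmpty()) empidquery.remove(0)
    return empidquery
}
```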

Conclusion:

This blog was written to ease the pain of developers: while building Iflows, we often come across cases where the OData receiver adapter needs to fetch data based on a dynamic query whose dynamic part comes from the source payload. In all those cases, this blog can help you build such Iflows quickly, using a looping call to fetch data based on a dynamic query created from the payload. It speeds up those developments by providing an architecture and reusable code for achieving the outcome.

I look forward to your inputs and suggestions.

      14 Comments
      Somen Mohanty

      FYI, OData adapter now supports "IN" which was missing earlier.

      Subhojit Saha
      Blog Post Author

Hi Somen, yes, "IN" is now supported, and you can modify the Groovy a bit to form the URL with "IN" instead of "EQ". Thanks for bringing this up.
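For instance, the chunked filter builder from the blog could be adapted like this (a sketch; verify the exact "in" operator syntax supported by your tenant's OData service):

```groovy
// Variant of the filter builder using the OData "in" operator,
// producing e.g. "userId in '00000813','00001079'" per chunk.
List<String> buildInFilterLines(List<String> ids, int chunkSize = 100) {
    ids.collate(chunkSize).collect { chunk ->
        "userId in " + chunk.collect { "'${it}'" }.join(',')
    }
}
```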

      Raghav Uppal

      Awesome blog..really helpful...hats off to the writer

      Subhojit Saha
      Blog Post Author

      Thanks Raghav.

      Yee Loon Khoo

Great post. I have done similar dynamic URL construction too.

Another consideration besides the fixed chunk size of 100 is the allowed URL length. Normally I use up to 1800 URL characters for GET calls. The idea is to get as much data as possible per call and minimize the number of calls, because each call takes time.

Another possible way is, after getting the list 'empidquery', to use the splitter-gather pattern: split 'empidquery' by line break, call the API to get each response, then combine the results back together using a Gather step, either by simple concatenation or an XML merge. One advantage of splitter-gather is that it allows parallel processing, which speeds things up compared to a sequential loop.

      Subhojit Saha
      Blog Post Author

      Hi Yee,

Thanks for your comments. I agree that using the splitter-gather pattern and parallel processing we can process it faster. Maybe that's a direction you could show by writing a blog 🙂 so that it can help others; then I can refer to that blog here as an alternate approach.

      Thanks again for your valuable insights.

      Arindam Mitra

      Great job Subhojit!! Very much helpful and keep writing..:-)

      Subhojit Saha
      Blog Post Author

      Thanks Arindam.

      Ashish Sharma

      Nice blog Subhojit !!! Great work !

      Subhojit Saha
      Blog Post Author

      Thanks Ashish.

      Sathish Venkataraman

Superb blog!!! Good to know that the SF adapter now supports IN for OData.

      Subhojit Saha
      Blog Post Author

      Thanks Sathish

      Punit Jhanwar

      Nice blog Subho...

      Subhojit Saha
      Blog Post Author

      Thanks Punit