
Make the most out of Query Execution – Part 1.

Make the most out of Query Execution – Part 2.

 

A case in point would be a global company that has multiple BW instances across the world and wants to consolidate its balance sheets across those instances. Here the balance sheet refers to a query that is executed in the individual systems and then consolidated into the target system.

The Open Hub service, currently the tool of choice, may be used for this. I am not proposing an alternative to the Open Hub service, but rather looking at things a little differently: query information can be pulled by the target system instead of being pushed by the Open Hub service.

All this again ties back to the QUERY_VIEW_DATA function module. What we can do is create a generic DataSource on a query and then call it from the target system. The query data is stored in E_CELL_DATA and then sent to the target system, very much like a data load. This makes it a pull type of data load, where the target system pulls in data when needed, as opposed to the source system pushing the data to the target system.
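As a minimal sketch, the call in the source system could look like the following. I am using the RFC-enabled wrapper RRW3_GET_QUERY_VIEW_DATA here; the exact function module and type names can differ by BW release, and ZBS_CUBE / ZBS_QUERY are placeholder names, not real objects.

```abap
* Sketch: read query output into E_CELL_DATA so it can be handed on.
* ZBS_CUBE / ZBS_QUERY are placeholders; type names may vary by release.
DATA: lt_axis_info TYPE rrws_thx_axis_info,  " axis metadata
      lt_cell_data TYPE rrws_t_cell,         " the result cells
      lt_axis_data TYPE rrws_thx_axis_data,  " characteristic values
      lt_parameter TYPE rrxw3tquery.         " name/value variable pairs

CALL FUNCTION 'RRW3_GET_QUERY_VIEW_DATA'
  EXPORTING
    i_infoprovider = 'ZBS_CUBE'              " placeholder InfoProvider
    i_query        = 'ZBS_QUERY'             " placeholder query name
    i_t_parameter  = lt_parameter            " optional variable values
  IMPORTING
    e_axis_info    = lt_axis_info
    e_cell_data    = lt_cell_data
    e_axis_data    = lt_axis_data.
```

E_CELL_DATA holds the flat list of result cells, while E_AXIS_DATA carries the characteristic values needed to key each row in the target system.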

The QUERY_VIEW_DATA function module can be encapsulated within a generic extraction function module, which can then be used to send the data to the target system. It is somewhat roundabout, but no data is persisted. Some of the perceived advantages are:

  • No data is persisted in the source system; the data is extracted and sent directly.
  • The data is always current. There are no issues with running the InfoSpoke multiple times to keep the data up to date; the InfoSpoke / Open Hub has to be run after every data load so that the data is extracted again into the target system, whereas here the data is never persisted in the first place.

Possible disadvantages:

  • Very large queries might dump.
  • The function module can be called directly from the target system, as opposed to going through a DataSource.
  • You will need a separate DataSource for each query.

Having a DataSource helps consolidate multiple systems, since the same DataSource may be used across systems. The query can also be built in such a way that selection conditions are passed on to the generic extractor and data is fetched accordingly; this makes the whole thing dynamic and can even lead to a 'Delta' kind of scenario for query-based data extraction.
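As a rough sketch of what such a generic extraction function module might look like, the interface can follow the standard template RSAX_BIW_GET_DATA_SIMPLE: on the first call the DataSource selections are mapped to query variables, the query is executed, and E_CELL_DATA is converted into the extract structure; subsequent calls raise NO_MORE_DATA. The names Z_BW_GET_QUERY_DATA, ZBS_QUERY_ROW, ZVAR_CCODE, and the selection-to-variable mapping are all assumptions for illustration.

```abap
FUNCTION z_bw_get_query_data.
*"  Interface as in the template RSAX_BIW_GET_DATA_SIMPLE:
*"  IMPORTING  I_DSOURCE, I_INITFLAG, I_MAXSIZE ...
*"  TABLES     I_T_SELECT, I_T_FIELDS, E_T_DATA STRUCTURE zbs_query_row
*"  EXCEPTIONS NO_MORE_DATA  ERROR_PASSED_TO_MESS_HANDLER

  STATICS: sv_done TYPE abap_bool.           " one-shot extraction flag

  DATA: lt_parameter TYPE rrxw3tquery,       " query variable values
        ls_parameter LIKE LINE OF lt_parameter,
        lt_cell_data TYPE rrws_t_cell,
        ls_cell      LIKE LINE OF lt_cell_data,
        ls_select    TYPE rsselect,
        ls_data      TYPE zbs_query_row.     " assumed extract structure

  IF sv_done = abap_true.
    RAISE no_more_data.                      " everything sent in one call
  ENDIF.

* Map a DataSource selection to a query variable (illustrative mapping).
  READ TABLE i_t_select INTO ls_select WITH KEY fieldnm = 'COMP_CODE'.
  IF sy-subrc = 0.
    ls_parameter-name  = 'VAR_NAME_1'.
    ls_parameter-value = 'ZVAR_CCODE'.       " assumed query variable
    APPEND ls_parameter TO lt_parameter.
    ls_parameter-name  = 'VAR_VALUE_EXT_1'.
    ls_parameter-value = ls_select-low.
    APPEND ls_parameter TO lt_parameter.
  ENDIF.

  CALL FUNCTION 'RRW3_GET_QUERY_VIEW_DATA'
    EXPORTING
      i_infoprovider = 'ZBS_CUBE'            " placeholder names
      i_query        = 'ZBS_QUERY'
      i_t_parameter  = lt_parameter
    IMPORTING
      e_cell_data    = lt_cell_data.

* Convert the flat cell list into the extract structure; a real
* implementation would also read E_AXIS_DATA to build the row keys.
  LOOP AT lt_cell_data INTO ls_cell.
    ls_data-cell_value = ls_cell-value.
    APPEND ls_data TO e_t_data.
  ENDLOOP.

  sv_done = abap_true.
ENDFUNCTION.
```

Passing the selections through I_T_SELECT like this is what makes the 'Delta'-style scenario conceivable: the target system can restrict each request to only the slice of query data it needs.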

 

I am just exploring possibilities of using a query beyond its stated usage in BEx, and this was one of the approaches that looked nice to have. I am not sure if someone has already tried this; I would love to hear your views on whether the approach of using a query as a DataSource makes sense.

 

