This is a comment that I’ve posted in the context of John Appleby’s blog on Bluefin’s website asking the question “Is SAP deliberately locking out competitors with BI4?”. Various people have approached me and asked me to publish this as a blog of its own on SDN for better reference. So here you go …
Hi John, all,
thanks for writing this blog and initiating such a lively discussion. I would like to shed some light on the thinking behind the integration of the Business Objects tools and clients with BW. My goal is to remove some of the speculation that has come up in the course of the discussion.
First of all, let me emphasise what has already been said above, namely that MDX is well suited for interactive analysis, i.e. the case where the result of an MDX query is displayed 1:1 in a suitable UI. The latter typically implies small result sets, as an end user can “digest” only manageable portions of data in order to analyse them reasonably. Tools that follow this paradigm normally see good performance, as the logic is calculated down in a server that is architected for those calculations. Microsoft’s Reporting Services or (native) Excel follow that rule and are equally suited to run on both SAP BW and Microsoft Analysis Services. On the other hand, tools suffer from poor performance if they use MDX to retrieve significantly large amounts of data only to then calculate on top of that data. Some of the 3rd parties that have been mentioned above and that allegedly suffer from a competitive disadvantage simply follow such an approach, and it is that approach, rather than anything else, that lets them fail.
Initially (i.e. around the time SAP acquired Business Objects), WebI followed that approach too. This was absolutely reasonable from the (previously independent) Business Objects perspective, as the semantic layer was intended to provide semantics on top of arbitrary data sources. Here, “arbitrary” implies “no assumption about the capabilities of the underlying data source”. This, in turn, implies that you can apply semantics and the underlying calculations only once the data is in your own realm of control within the call stack. As soon as the SAP–Business Objects acquisition happened, the entire call stack (for a WebI query on top of BW) came under the control of one single company. Therefore, it was natural to use this new opportunity to redistribute the tasks and the processing in order to do the best possible job for the end user.
There were two options to do so:
- (a) provide a rather dumb (semantically poor) but volume-enabled interface to BW that would allow WebI to keep its initial approach, and/or
- (b) adjust WebI (and the underlying semantic-layer processing) to the fact that BW already holds a fairly rich set of semantics within its repository (InfoProviders, BEx queries, hierarchies etc.).
In the end, both approaches were implemented, which led to the “SQL Access to BW via Data Federator” for (a) and the BICS universes (or direct BICS access) with BI 4.0 (Aurora) for (b). Why both? Well, there are advantages and preferences at both ends: some customers like the modelling approach behind the relational universes and prefer (a); others have heavily invested in BW modelling, require the SAP-specific semantics, want to continue to do so, and prefer (b).
Now, why not open up BW’s “DF Façade” to 3rd parties, it has been asked. Well, the reason is simple: it does not have public-interface quality. For example, it cannot handle standard SQL, albeit being close to SQL. Data Federator compensates for the missing SQL functionality (like HAVING-clause processing, or converting to a “select options” based WHERE clause). This is how it is currently architected. It means that the “DF Façade” is a technical layer that does not comply with any standard and might also be adjusted or changed in the future (which would not meet the expectations for an API). Besides that, it also means that Data Federator can expose BW InfoProviders via standard SQL, and that means that 3rd parties can in fact query BW InfoProviders via standard SQL. It requires Data Federator because Data Federator adds value (SQL compensation, SQL connectivity like JDBC, SQL parsing and translation to a proprietary API, optionally also user management and security). You can always challenge this setup, but it does the job and it does it well − see Comparing SAP BW and an Oracle DW for an example.
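To make the idea of “SQL compensation” concrete, here is a minimal Python sketch (not SAP code; all names are hypothetical). It illustrates the general technique a federation layer can use when the underlying source supports grouping but not a HAVING clause: push the aggregation down to the source and apply the HAVING predicate itself on the aggregated rows it gets back.

```python
# Hypothetical illustration of HAVING-clause compensation in a federation
# layer. The "source" can only group and sum; the federator filters the
# aggregated result afterwards, which is what HAVING would have done.
from collections import defaultdict

def aggregate_at_source(rows, group_key, measure):
    """Simulates what a HAVING-less source can return: grouped sums only."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[group_key]] += row[measure]
    return dict(totals)

def compensate_having(aggregated, predicate):
    """Applies the HAVING filter in the federator, on the aggregated rows."""
    return {key: total for key, total in aggregated.items() if predicate(total)}

rows = [
    {"region": "EMEA", "revenue": 120.0},
    {"region": "EMEA", "revenue": 80.0},
    {"region": "APJ", "revenue": 50.0},
]
# Equivalent of:
#   SELECT region, SUM(revenue) FROM rows
#   GROUP BY region HAVING SUM(revenue) > 100
totals = aggregate_at_source(rows, "region", "revenue")
result = compensate_having(totals, lambda total: total > 100)
print(result)  # {'EMEA': 200.0}
```

The trade-off is the one discussed throughout this blog: compensation works well as long as the intermediate (aggregated) result stays small.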
Furthermore, there have been questions on BICS. Firstly, it was not invented for integrating the Business Objects tools. Secondly, it is not based on the “DF Façade”. Thirdly, it emerged in the course of NW 7.0 and BEx Web. Nowadays, Analysis Office uses a .NET version. BICS is not necessarily superior to MDX but rather different in its focus. SAP-internally, we have analysed and discussed this a lot, and summarising it would fill many pages and require many authors. However, I would like to point out one single, fairly trivial but, in performance terms, unfortunate difference between the two: type casting! The MDX standard requires formatting even key figure (measure) values as strings − e.g. a cast float (ABAP) → string (ABAP) − which are then sent over the network to the Java client − string (ABAP) → string (Java) − and then reconverted into a Java type − e.g. string (Java) → float (Java). The negative performance impact of such conversions is negligible for small result sets (a few thousand cells/values) but grows with the amount of data transferred. See the discussion above. BICS, in turn, uses type casts that do not detour via an intermediate string representation. It sounds trivial, but the impact is not.
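The cost of that string detour is easy to demonstrate in any language; the following is a small Python sketch (purely illustrative, not the actual ABAP/Java call stack) that compares passing values through a float → string → float round trip with keeping them typed end to end:

```python
# Illustrative micro-benchmark: serialising every measure to a string and
# parsing it back (the MDX-style detour) versus a typed transfer
# (the BICS-style path, modelled here as a plain copy).
import timeit

values = [i * 0.1 for i in range(100_000)]

def string_detour(vals):
    # float -> string -> float for every single cell value
    return [float(str(v)) for v in vals]

def typed_transfer(vals):
    # values keep their numeric type end to end
    return list(vals)

t_detour = timeit.timeit(lambda: string_detour(values), number=5)
t_typed = timeit.timeit(lambda: typed_transfer(values), number=5)
print(f"string detour: {t_detour:.3f}s, typed transfer: {t_typed:.3f}s")
```

The round trip is lossless here, yet already clearly slower; with millions of cells, network serialisation, and two runtimes involved, the gap only widens.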
My comment has become quite long, so I need to stop here. I have tried to describe the sound technical motivation behind some of the decisions that have been discussed in this blog and that have left room for interpretation in one direction or the other.