
What is BI Accelerator?

BI Accelerator (BIA), also known as High-Performance Analytics (HPA), is a new functionality provided with NetWeaver 2004s. As the name suggests, it accelerates your queries. So if you are working on a separate performance project to get faster query response times, or spending weekends monitoring huge aggregate roll-ups and change runs, BI Accelerator could help you a lot. Prior to BI Accelerator, we had the following ways to improve performance:


  •   Build Aggregates 

  •   Use the OLAP cache to accelerate the response for similar queries by caching their results and reading from the cache instead of from the database.

  •   Use the Reporting Agent or Information Broadcasting to run popular queries in the background during off-hours and push summary views of updated data to users via e-mail.
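The OLAP cache idea in the second option above boils down to simple result memoization: serve an identical query from memory instead of hitting the database again. The following is an illustrative Python sketch of that idea only, not actual BW code; all names are invented:

```python
# Illustrative sketch of the OLAP-cache idea: identical queries are
# answered from an in-memory cache instead of re-reading the database.
# All class and function names are invented for illustration.

class OlapCacheSketch:
    def __init__(self, run_on_db):
        self._run_on_db = run_on_db   # the expensive database call
        self._cache = {}              # query key -> cached result
        self.db_reads = 0             # how often we actually hit the DB

    def execute(self, query_key):
        if query_key in self._cache:          # similar query seen before?
            return self._cache[query_key]     # serve it from the cache
        result = self._run_on_db(query_key)   # otherwise read from the DB
        self.db_reads += 1
        self._cache[query_key] = result
        return result

def slow_db(query_key):
    return f"result for {query_key}"

engine = OlapCacheSketch(slow_db)
engine.execute("sales by region 2005")
engine.execute("sales by region 2005")  # second call is served from cache
print(engine.db_reads)                  # -> 1
```

The real OLAP cache also handles invalidation after data loads, which this sketch deliberately omits.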

So does BI Accelerator replace them? Obviously the answer is no. One can decide between aggregates and BI Accelerator depending on the situation, but the OLAP cache and Information Broadcasting are available in any case. The figure shows the order in which BI uses the listed repositories during execution of a query.

With increasing data volumes and a growing number of users, there are practical constraints on the number of aggregates that can be created for all possible queries. The typical strategy is therefore to build somewhat general aggregates that more queries can utilize, but the performance benefit per query is then less significant, and users complain about unpredictable query execution times. Fundamentally, they need a stable, transparent process and predictable response times, as obtained from a search engine like Google, where they do not have to worry about whether aggregates are available or not.

Although SAP has a search engine named TREX, it was originally meant only for searching unstructured text data. So can an engine initially conceived for text search be used to implement fast search over structured data in tables? Yes! SAP adapted its TREX search engine to implement this capability for BI Accelerator.

Can we then start using a normal TREX installation as BI Accelerator? Currently, the answer is no. BI Accelerator and TREX are two different installations, so BI Accelerator cannot be used for standard KMC functionality and vice versa. Note, however, that technically there is a lot of commonality between the TREX and BI Accelerator roles.

Finally, to summarize: BI Accelerator is a box, either standalone or fitting into an existing customer rack, which once plugged in will result in much faster response times. Some more documented benefits of BI Accelerator are listed below.


  • Very fast query response times, with performance improvements by a factor of 10–100 in terms of DB time.

  • Stable query response times, due to freedom from the DB optimizer and from aggregates.

  • Low maintenance, again because there is no aggregate maintenance and roll-ups/change runs are minimized.

  • High scalability, with planned support for new hardware paradigms like grid/adaptive computing.

  • Increased end-user satisfaction / extended BI reach.

  • Significant TCO reduction.
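The lookup order described earlier (OLAP cache first, then the BI Accelerator index if one exists, then aggregates, then the base InfoCube on the database) can be sketched roughly as follows. This is an illustration of the priority order only; the data structures and names are invented, not BW internals:

```python
# Rough sketch of the order in which the OLAP processor picks a data
# source for a query. Purely illustrative; all names are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class Query:
    cube: str              # which InfoCube the query reads
    drilldown: frozenset   # characteristics the query drills down by

@dataclass
class Aggregate:
    cube: str
    chars: frozenset       # characteristics the aggregate is built on

    def covers(self, q: Query) -> bool:
        # an aggregate can answer the query if it is on the same cube
        # and contains every characteristic the query needs
        return q.cube == self.cube and q.drilldown <= self.chars

def pick_source(query, cache, bia_indexed_cubes, aggregates):
    """Return which repository would answer the query, in priority order."""
    if query in cache:
        return "OLAP cache"
    if query.cube in bia_indexed_cubes:
        return "BI Accelerator"
    if any(a.covers(query) for a in aggregates):
        return "aggregate"
    return "database"

q = Query("SALES_CUBE", frozenset({"region"}))
print(pick_source(q, cache=set(), bia_indexed_cubes={"SALES_CUBE"},
                  aggregates=[]))
# -> BI Accelerator
```

The key point the sketch makes is that once a cube has a BIA index, the aggregate check never runs for it, which is why aggregate maintenance can be retired for such cubes.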

  • The BI Accelerator is really a hardware solution. Can you give us any guidelines about how much hardware is required to run a cube of a specific size? Are there any sizing documents yet for this? Are there any restrictions on cube size?
    • Hi,

      I am not able to find very detailed sizing for cubes. Do let me know if you are able to find one.
      The hardware- and sizing-related information I came across is as follows.

      The BI Accelerator box will be based on Intel blade technology.
      The OS for the blades will be Linux SLES 9.
      The blade servers with 64-bit Intel processors will be available in the following sizes:

      Small T-shirt Size

      20 parallel user sessions
      250 million rows total
      500 bytes / row

      Medium T-shirt Size

      50 parallel user sessions
      500 million rows total
      500 bytes / row

      Large T-shirt Size

      100 parallel user sessions
      1,000 million rows total
      500 bytes / row
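Taking the row counts above at face value, the raw data volume behind each T-shirt size is a simple multiplication of rows by bytes per row. The sketch below does that arithmetic; it deliberately ignores BIA compression and index overhead, so treat the figures as back-of-envelope only:

```python
# Back-of-envelope raw data volume per T-shirt size: rows x bytes/row.
# Ignores BIA compression and index overhead.

sizes = {
    "Small":  250_000_000,    # rows
    "Medium": 500_000_000,
    "Large":  1_000_000_000,
}
BYTES_PER_ROW = 500

for name, rows in sizes.items():
    gib = rows * BYTES_PER_ROW / 1024**3
    print(f"{name}: {gib:.0f} GB raw")
# Small ~116 GB, Medium ~233 GB, Large ~466 GB of raw row data
```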

  • I was using the report ZZ_BIAMEMCONSUMPTION_BW3X to estimate how much memory we would need for our biggest cube in the BIA. When I compared the results from the report with the space consumption in the database, I got a compression rate of less than 1:4. In the FAQ for NW2004s they mention “We see compression of 1:20 and more”. We are using DB2 as the database. The question now is whether the report is wrong, because a compression rate of less than 1:4 is a little disappointing. Are there any real-life experiences?
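The compression rate being debated above is just the ratio of database footprint to estimated BIA memory. A minimal sketch of the arithmetic, with invented example numbers (substitute your own report output):

```python
# Compression ratio = DB space consumption / estimated BIA memory.
# The example figures below are invented for illustration only.

def compression_ratio(db_size, bia_size):
    """Both arguments in the same unit (e.g. GB)."""
    return db_size / bia_size

# e.g. a cube occupying 400 GB in the database vs. a 110 GB BIA estimate
ratio = compression_ratio(400, 110)
print(f"compression of 1:{ratio:.1f}")   # -> compression of 1:3.6
```

A ratio near 1:4 versus the quoted 1:20 could also reflect how compressed the source database tables already are, so the two figures are not directly comparable without knowing the DB-side storage settings.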

  • We are considering upgrading from BI 3.5 to BI 7.0 because of the great performance improvements that come with BIA. We would now like to know whether BIA is always used for data access when querying through third-party reporting tools via the MDX standard or other interfaces.
  • Can you please elaborate a bit more on this “planned support of new hardware paradigms like grid / adaptive computing”? What are the ideas behind that? Or, at least, can you point to a reference (link) where it is explained in more detail?

    Thank you!