
In my last post (Is it time for Business Intelligence to retire? – Part I) we discussed what Business Intelligence is, in order to set the stage, and traced a simplified path of how Business Intelligence became what we have today. In this post we will discuss future trends and how I imagine we can move on. I hope you enjoy it.


Where are we heading next?

With the blindingly fast changes we have been facing in the last few years, I believe it is time to ask ourselves: is it time for Business Intelligence to retire? Has the time come when end-users will be able to report and plan directly against their transactional systems (ERP – Enterprise Resource Planning, CRM – Customer Relationship Management, PoS – Point of Sale and others)? Can we finally deliver true self-service BI and, with that, drastically reduce the need for BI specialists and experts?

Some of my colleagues believe the answer is yes. When they say that BI has run its course and it is now time to either retire it or do something completely different, a few reasons are consistently presented. I have listed the top four reasons they offered me:


1.    From a Total Cost of Ownership (TCO) point of view, having a single solution that could deliver both transactional and analytical capabilities would be more efficient. Given, of course, that a few requirements are met:

    • Performance – sub-second response times for billions of result records, as big data is an unavoidable reality;
    • Usability – different presentation layers for different target audiences (dashboards for the company’s CEO, heavy analytical tools for controlling analysts, for example) and data manipulation for simulation, data discovery, data mining, etc.;
    • Content – since, in general, all the internal data required for most corporate reporting needs is already stored, in one way or another, in one of the company’s core systems, be it ERP or CRM, from vendor A or B;

2.    A simplified landscape would no doubt be a great advantage. Imagine the possibility of reducing the number of servers in your data center and the consequent benefits:

    • Less physical space usage;
    • Less energy consumption, a great plus towards green IT (another top-priority topic on the C-level agenda);
    • Simpler and fewer operational activities, such as backup, monitoring and so on.

3.    Real-time BI. In a world that is moving faster every second, the ability to report, monitor and adjust the course of planned action based on what is going on RIGHT NOW, instead of last night, is a great competitive differentiator. Imagine being able to perform credit and budget impact simulations, and risk assessments, in real time – within sub-second response times – in front of your customer. You would then be able to answer on the spot whether you approve your customer’s offer and their payment conditions! That would be possible by combining the transactional and analytical worlds in the very same solution.

4.    The current BI architecture and setup is too complex. Indeed, it is. Mapping source system data models, going through – sometimes very painful – ETL processes, cleansing and harmonizing master data, creating dimensional data models specially designed for performance, navigating the wide range of front-end tools to design specific layouts for each target audience. Just listing it all is exhausting.

All four reasons above are valid and true, so it is not possible to argue against them. However, it is possible to put them in perspective. It is also true that big software vendors are announcing truly astonishing products (such as the already mentioned SAP HANA and Oracle Exalytics, to name two examples) which promise – and, based on my experience with SAP HANA, actually deliver – outstanding results.

It is indisputable that we are living in a game-changing time, when it is indeed possible to perform clear Business Intelligence tasks in a transactional system powered by those amazing new solutions.

It is possible, for example, to perform an extensive client profile evaluation based on a number of different characteristics before releasing a credit extension. Those characteristics could include:

  • History of deviations from the client’s past payment due dates;
  • Percentage of clients with similar profiles that did not honor a credit extension – which can be accomplished by clustering clients with well-known algorithms (like k-means, for example);
  • Time since the client’s very first purchase;
  • And many others.
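The clustering idea mentioned above can be sketched in code. Below is a toy k-means implementation in Python; it is a minimal illustration under invented assumptions (the client features – average days a payment was late and years as a client – and all the numbers are made up), not a production credit-scoring model, where a tested library implementation would be the natural choice.

```python
import random

def kmeans(points, k, iters=20, seed=42):
    """Very small k-means over tuples of numbers; illustrative only."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster (keep it if the cluster is empty).
        centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
                     for i, cl in enumerate(clusters)]
    return centroids, clusters

# Hypothetical client features: (average days a payment was late, years as a client)
clients = [(1, 10), (2, 9), (1, 8),     # punctual, long-standing clients
           (50, 1), (45, 2), (60, 1)]   # frequently late, recent clients
centroids, clusters = kmeans(clients, k=2)
```

On this made-up sample, the algorithm separates the three punctual, long-standing clients from the three frequently late ones – exactly the kind of profile grouping that could feed a credit-extension decision.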

It would also be possible to connect to the transactional system’s data using one of the top vendors’ front-end tools, like MicroStrategy, IBM, SAP or Oracle, and by doing that deliver an amazing front-end experience with all the required drill-down, drill-through, charting, filtering, etc.

However, it is still not possible to perform all Business Intelligence activities and achieve all Business Intelligence objectives under this model, and maybe it will never be truly possible. And why do I say that?

The future of Business Intelligence

It is hard to project the long-term future of new technologies, and BI is no different. Consider, even briefly, how information was accessed fifteen years ago. If you had to write 500 words on the Greek philosopher Socrates, you would end up in a library or looking for information in an encyclopedia. Nowadays, if you just google “Greek philosopher Socrates” you will get over 650,000 results in the blink of an eye, which is not bad at all.

Having said that, my answer to the question “Is it time for Business Intelligence to retire?” is: not yet! I believe that BI retirement may come in the future, but for now I see that Business Intelligence, and BI professionals, have to adapt in order to bring more business value in the face of a new reality. Here are my top five constraints on why BI cannot retire for now:

Relevant data – in order to run a proper BI project with clear business results, it is necessary to have the proper data to start with. Although most relevant internal data is usually stored in robust transactional systems, such as ERPs, CRMs, PoS systems, etc., there are at least three other very important sources of data:

  1. Internal and non-formalized sources, which consist of those “magical” spreadsheets that can be found all over organizations across the globe. It is possible to argue that they should not exist according to this or that concept (either from an architecture or a security point of view), but reality imposes itself over concepts. Spreadsheets and so-called “local databases” are everywhere, in all organizations.
  2. External and formalized sources, which are those standardized and normalized sources usually provided by financial institutions (inflation rate forecasts for the following quarters, for example), official organizations (number of complaints per company in the Telecommunications – Telecom – sector, for example) or even research companies (e.g., beer brand preference on Brazilian beaches).
  3. External and non-formalized sources, which are formed by non-structured sources of data, for example those generated by social media (e.g., Facebook posts or Tweets), casual conversation in a pub and so on.

The point being: although organizations are becoming more professionalized, centralized, harmonized and structured, there are still many sources of very relevant information completely outside their transactional systems.

It is not only a matter of getting that information into transactional systems; there are other important matters, like:

  • Master data harmonization, with different sources using different codes, different attributes and different descriptions for the same entity; and
  • Data granularity, with different analyses requiring different levels of granularity: per product, per product family, per sales organization, per salesperson and so on. However, different data sources also have different purposes and, consequently, different granularity.
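As a small illustration of the master data harmonization point, the sketch below consolidates sales quantities from two hypothetical sources that use different codes for the same products. All codes and quantities are invented; the cross-reference table (`code_map`) stands in for the mapping work that, in real projects, is the genuinely hard and largely manual part.

```python
# Two hypothetical sources reporting sales of the same products under different codes.
erp_sales = {"MAT-1001": 250, "MAT-2002": 120}        # ERP material codes
partner_sales = {"WINE-RED-A": 40, "WINE-WHT-B": 15}  # partner flat-file codes

# Maintained cross-reference: source-specific code -> harmonized product code.
code_map = {
    "MAT-1001": "P001", "MAT-2002": "P002",
    "WINE-RED-A": "P001", "WINE-WHT-B": "P003",
}

# Consolidate both sources onto the harmonized codes.
harmonized = {}
for source in (erp_sales, partner_sales):
    for code, qty in source.items():
        key = code_map[code]
        harmonized[key] = harmonized.get(key, 0) + qty

print(harmonized)  # {'P001': 290, 'P002': 120, 'P003': 15}
```

Note that "MAT-1001" and "WINE-RED-A" resolve to the same harmonized product; without the mapping, the two sources would treat the same entity as two different products.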


Data volume – although it seems a minor problem at first sight, with all the Big Data-focused technology being released on the market, it could become a real pain if you decide to bring all possible data sources, as described in the point above, into your combined transactional/analytical system. In short, the exponential data growth most organizations already face would simply be blown up by this approach, leading to something I will call, in this article, “Huge Data” instead of Big Data.

With the “Huge Data” approach, organizations would face very complex hardware and operational environments, including shrinking maintenance and backup windows, very complex disaster recovery requirements and so on.

Flexibility – in order to adapt to constant market changes, be it new regulatory requirements, new customer needs or the pursuit of competitive advantage, there is an unstoppable wave of changes to IT systems. Those changes vary from small master data additions (e.g., creating a new product) to extensive customization exercises (e.g., creating a new company within your current system, with all its dependencies).

Let us imagine a simple scenario: your company is entering a new market and, in order to meet that new reality, a new division is being created. New organizational structures will be created, along with new products, new sales organizations, new organizational levels and groups, etc. And, needless to say, new Business Intelligence requirements are all over the place: from reporting adaptations to entirely new clustering, passing through consolidation requirements and trend analysis.

Now, at the very same time, the marketing department is launching a new campaign, which consists basically of offering, through e-mail marketing, a new wine vintage selection. Unfortunately, the sales process will go through a last-minute partnership with a wine importing company; that being the case, all sales will be reported to your company via a flat-file interface using their codes.

Finally, supported by your very efficient portfolio prioritization process, you get all the necessary resources to get both done on time. The problem is: it will all happen at the very same time, in the very same system, touching very similar objects and, in some cases, the very same object.

Although very simple and imperfect, the scenario above reflects an increasingly common real-life situation. IT departments are called on to answer many demands at once, and having it all in the very same “box” will only create more complex dependencies, higher risks and some very nasty test scenarios.

Besides that, what would be the impact of applying a correction or developing a new application on such a deeply integrated system? Would it be possible to determine the impacts and mitigate the consequent risks? I would risk saying that, at the moment, the answer is no.


Costs – yes, the costs of brand-new technologies are accessible to a greater number of organizations worldwide. However, when it comes to building a single transactional/analytical system, dealing with “Huge Data”, as said previously, will also lead to the utilization of a massive amount of hardware resources, mainly CPU, memory and storage.

If you doubt it, simply sum up the sizes of your company’s main transactional systems (ERP, CRM, PoS, Supply Chain Management – SCM, Procurement, etc.) plus half the size of your current Enterprise Data Warehouse (assuming the other half is already contained in the transactional systems). What kind of server would you need to support your entire transactional/analytical operation all together?
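As a back-of-envelope version of that exercise (all sizes below are invented, in terabytes, purely to make the arithmetic concrete):

```python
# Hypothetical sizes of the main transactional systems, in TB.
transactional_tb = {"ERP": 8, "CRM": 3, "PoS": 5, "SCM": 2, "Procurement": 1}
edw_tb = 20  # current Enterprise Data Warehouse, in TB

# Half the EDW is assumed to duplicate data already in the transactional systems.
combined_tb = sum(transactional_tb.values()) + edw_tb / 2
print(combined_tb)  # 29.0 TB in a single combined system, before any growth
```

Even with these modest made-up numbers, a single box would have to hold roughly the whole landscape at once – and real landscapes grow.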


Maturity level – in most of the examples explored in this article, there was an implicit assumption: that organizations are currently using a single Enterprise Data Warehouse (EDW) and, for that reason, would basically have to merge their transactional systems with that single EDW.

Unfortunately, several organizations – if not most – are far away from that perfect goal. In my experience as a consultant, what I see pretty much every day is that organizations (even the big and resourceful ones) are trying hard to get there, but have not achieved it so far.

In most cases, several DWs can be found across the organization and, even more frequently, several front-end tools and front-end vendors are also part of the picture, not to mention ETL tools and their vendors.

In a heterogeneous scenario like the one described above, it is even more complex to target the combined transactional/analytical scenario, since it would directly lead to a disruption in the organization’s maturity path, which, in the worst case, could pose elevated risks to business effectiveness and even continuity.

Now that we have discussed what Business Intelligence is, how it became what it is nowadays and how I believe it will develop, only a wrap-up is missing. Let us close this discussion in the next – and last – part.


Best regards for now.


Do you want to read the final part of this post? Check it out at: Is it time for Business Intelligence to retire? – Part III



6 Comments


  1. Esteban Burbano de Lara

    Eduardo, good post. I believe that some of the issues that have triggered the question of whether BI is dead stem from the fact that we continue to invest time and money in analytical projects that consolidate, query, and analyze the data itself. We work ourselves to exhaustion until the very last cent squares up between my OLTP, my EDW and my myriad of DMs.

    My belief is that the next big BI wave will be oriented to metadata. Use patterns, word and context recognition, time and location (as in what system, what user, when, to whom, from whom) all stored together from which key users can spot trends, validate activities, and further enrich their corporate intuition.

    C-level execs need to get closer to where the action is, but that doesn’t mean they need a report that is validated to the cent against a cash register at a POS. CEOs don’t make decisions based on nickels and dimes; they make decisions based on validated trends, intuition and, more importantly, the ability to deeply understand their company, the consumer/customer, the competition, their trends, and so on.

    In order to do this, you don’t need to extract, transform, and load data; you need to have instant access to metadata: all of it, yours, mine, all of it.

    Think of this as a “Corporate PRISM” (sans the privacy issues): you don’t need to know the details if you know enough; you can operate better via your extended staff, which would then have to know the details of whatever they do, but not the decision makers. Maybe it’s here, in the middle tier, where BI (or BI as we know it) should live, not above.

    I believe that the biggest flaw behind BI’s maturity process as we thought of it lies in the aggregative nature of BI. “In order to spot trends you need granular detail data, and then add it up, a little bit of math, and voila, you have yourself a trend” – that’s because (up until now) that’s the only thing we had: data. With metadata we can spot a trend even if we don’t add all the invoices to the cent; that’s how the corporate world was run before all of us: validated and reliable intuition. The future of BI is still made of the “B” and the “I”, but I call it “Business Intuition Systems”.

    Sorry for the long post. Cheers, Esteban

    1. Eduardo Rodrigues Post author

      Hi Esteban,

      Thank you very much for taking the time to provide your valuable feedback on this article. I hope you also enjoyed the conclusion at: Is it time for Business Intelligence to retire? – Part III

      I agree with you that we are living in a time of changes and that thinking “out of the box” could result in the next big paradigm rupture. From a purely technical point of view, I’d like to get my hands on the algorithms used by PRISM, by the way 🙂

      I’d like to learn more about your proposal of Business Intuition Systems, it sounds very interesting.

      Best regards

      Eduardo

      1. Henry Banks

        Hi,

        You guys might be interested in reading this “Network of Truth” blog over here: http://scn.sap.com/community/business-intelligence/blog/2013/04/08/the-network-of-truth . It hints at a future strategy for analytics.

        By the way, this “BI” forum you’re posting in is understood to be about BusinessObjects business intelligence (look at the subforums on the left). If it’s EDW you need, please work under the ‘SAP NetWeaver Business Warehouse’ forum.

        Regards,

        H

        1. Eduardo Rodrigues Post author

          Hi Henry

          Thanks for the hint! I’ll take a look at the other article. Actually, I would classify my post as conceptual BI, not necessarily linked to one tool or group of tools. But I’ll consider it next time!

          Thanks once more.

          Best regards

          Eduardo

        2. Esteban Burbano de Lara

          Henry,
          I read the blog post you kindly referenced. Even though it sounds novel, I believe that such a collaborative effort (such as the Wikipedia example mentioned) might have its difficulties when brought to the competitive environment that common workplaces breathe. Limited resources and the desire to shine under one’s own light (part of human nature, nothing to do with BI) are also why finger-pointing is such a bad, yet common, part of BI projects. “The Network of Truth” seems to me a bit risky given those human traits we’ve all seen.

          In a world of limited resources, such a collaborative effort in a Business & IT (just to mention two areas) tug-of-war could be a handful. What I do agree with is what Eduardo has said: BI’s evolution will have to be disruptive in order to prevail; up until now, only minor enhancements have been seen.

          (0) 
