Is it time for Business Intelligence to retire? – Part II
In my last post (Is it time for Business Intelligence to retire? – Part I) we discussed what Business Intelligence is, in order to set the stage, and took a simplified look at how Business Intelligence became what we have today. In this post we will discuss future trends and how I imagine we can move on. I hope you enjoy it.
Where are we heading next?
With the blindingly fast changes we have been facing in the last few years, I believe it is time to ask ourselves: is it time for Business Intelligence to retire? Has the time come when end-users will be able to report and plan directly against their transactional systems (ERP – Enterprise Resource Planning, CRM – Customer Relationship Management, PoS – Point of Sale, and others)? Is it today that we can deliver true self-service BI and, with that, drastically reduce the need for BI specialists and experts?
Some of my colleagues believe the answer is yes. When they say that BI has run its course and it is now time to either retire or do something completely different, a few reasons are consistently presented. I have listed the top four reasons they offered me:
1. From a Total Cost of Ownership (TCO) point of view, having a single solution that could deliver both transactional and analytical capabilities would be more efficient. Given, of course, that a few requirements are met:
- Performance – sub-second response times for billions of result records, as big data is an unavoidable reality;
- Usability – different presentation layers for different target audiences (dashboards for the company’s CEO, heavy analytical tools for controlling analysts, for example) and data manipulation for simulation, data discovery, data mining, etc.;
- Content – since, in general, all the internal data required for most corporate reporting needs is already stored, in one way or another, in one of the company’s core systems, be it ERP or CRM, vendor A or B;
2. A simplified landscape would, no doubt, be a great advantage. Imagine the possibility of reducing the number of servers in your data center and the consequent benefits:
- Less physical space usage;
- Less energy consumption, a great plus towards green IT (another top-priority topic on the C-level agenda);
- Simpler and fewer operation activities, such as backup, monitoring and so on.
3. Real-time BI. In a world that is moving faster every second, the ability to report, monitor and adjust the course of planned action based on what is going on RIGHT NOW, instead of last night, is a great market differentiator. Imagine being able to perform credit and budget impact simulations, and risk assessments, in real time – with sub-second response times – in front of your customer. You would then be able to answer on the spot whether or not you approve your customer’s offer and their payment conditions! That would be possible with the combination of the transactional and analytical worlds in the very same solution.
4. The current BI architecture and setup is too complex. Indeed, it is. Mapping source system data models, going through – sometimes very painful – ETL processes, cleansing and harmonizing master data, creating dimensional data models specially designed for performance, navigating through the wide range of front-end tools in order to design specific layouts for each target audience. Merely listing it all is exhausting.
All four reasons above are valid and true, so it is not possible to argue against them. However, it is possible to put them in perspective. It is also true that big software vendors are announcing truly astonishing products (such as the already mentioned SAP HANA and Oracle Exalytics, to name two examples) which promise, and actually deliver – based on the experience I had with SAP HANA, for instance – outstanding results.
It is indisputable that we are living in a game-changing time when it is indeed possible to perform clear Business Intelligence tasks in a transactional system powered by those amazing new solutions.
It is possible, for example, to perform an extensive client profile evaluation, based on a number of different characteristics, before releasing a credit extension. Those characteristics could include:
- History of deviations from the client’s past payment due dates;
- Percentage of clients with similar profiles that did not honor a credit extension – which can be accomplished by clustering the clients with well-known algorithms (k-means, for example);
- Time since the client’s very first purchase;
- And many others.
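As a sketch of how such a clustering could work, here is a minimal, self-contained k-means implementation over hypothetical client features – average payment delay in days and years since first purchase. All figures and feature choices are invented for illustration; a real scenario would derive them from the transactional system’s payment history.

```python
import random

# Hypothetical client features (all numbers invented for illustration):
# (average payment delay in days, years since first purchase)
clients = [
    (0.5, 8.0), (1.0, 6.5), (2.0, 7.0),     # long-standing, punctual payers
    (12.0, 1.0), (15.0, 0.5), (11.0, 2.0),  # recent clients, frequent delays
]

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise mean of a non-empty list of vectors."""
    return tuple(sum(p[i] for p in pts) / len(pts) for i in range(len(pts[0])))

def kmeans(points, k, iterations=10, seed=42):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    random.seed(seed)
    centroids = random.sample(points, k)
    clusters = []
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [mean(c) if c else centroids[i] for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans(clients, k=2)
```

On this toy data the two clusters separate cleanly into punctual long-standing clients and delay-prone newcomers – exactly the kind of grouping a credit extension decision could lean on.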
It would also be possible to connect to the transactional system data using one of the top vendors’ front-end tools, from the likes of MicroStrategy, IBM, SAP or Oracle. And by doing that, it would be possible to deliver an amazing front-end experience with all the required drill-down, drill-through, charting, filtering, etc.
However, it is still not possible to perform all Business Intelligence activities and achieve all Business Intelligence objectives under this model, and maybe it will never be truly possible. And why do I say that?
The future of Business Intelligence
It is hard to project a long-term future when it comes to new technologies, and BI is no different. Consider, even briefly, how information was accessed fifteen years ago. If you had to write 500 words on the Greek philosopher Socrates you would end up in a library or looking for information in an encyclopedia. Nowadays, if you just google “Greek philosopher Socrates” you will get over 650,000 results in the blink of an eye, which is not bad at all.
Having said that, my answer to the question “Is it time for Business Intelligence to retire?” is: not yet! I believe BI retirement may come in the future, but for now I only see that Business Intelligence, and BI professionals, have to adapt in order to bring more business value in the face of a new reality. Here are my top five constraints on why BI cannot retire for now:
Relevant data – in order to run a proper BI project with clear business results, it is necessary to have the proper data to start with. Although most relevant internal data is usually stored in robust transactional systems, such as ERPs, CRMs, PoSs, etc., there are at least three other very important sources of data:
- Internal and non-formalized sources, which consist of those “magical” spreadsheets that can be found all over organizations across the globe. It is possible to argue that these should not exist according to this or that concept (either from an architecture or a security point of view), but reality imposes itself over concepts. Spreadsheets and so-called “local databases” are everywhere in all organizations.
- External and formalized sources, which are those standardized and normalized sources usually provided by financial institutions (inflation rates for the following quarters, for example), official organizations (number of complaints by company in the telecommunications – telecom – sector, for example) or even research companies (e.g., beer brand preference on Brazilian beaches).
- External and non-formalized sources, which are formed by non-structured sources of data, for example those generated by social media (e.g., Facebook posts or tweets), casual conversation in a pub, and so on.
The point being: although organizations are becoming more professionalized, centralized, harmonized and structured, there are still many sources of very relevant information completely outside their transactional systems.
It is not only a matter of getting that information into transactional systems; there are other important matters, like:
- Master data harmonization, with different sources using different codes, different attributes and different descriptions for the same entity, and
- Data granularity, with different analyses requiring different levels of granularity: per product, per product family, per sales organization, per salesperson and so on. Different data sources, however, have different purposes and consequently different granularities.
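To make these two matters concrete, here is a minimal sketch – all system names, codes and figures are invented for illustration – of harmonizing product codes from two sources against a cross-reference table, and then rolling sales up at two different granularities:

```python
from collections import defaultdict

# Hypothetical cross-reference table: (source system, local code) -> master ID.
# In practice, building this mapping is the hard part of master data management.
xref = {
    ("ERP", "MAT-0042"): "PROD-42",
    ("CRM", "P42"): "PROD-42",   # same product, different code in the CRM
    ("ERP", "MAT-0099"): "PROD-99",
}

# Sales rows as (source system, local product code, region, amount).
sales = [
    ("ERP", "MAT-0042", "EMEA", 120.0),
    ("CRM", "P42", "EMEA", 80.0),
    ("ERP", "MAT-0099", "APAC", 50.0),
]

def harmonize(rows):
    """Replace each source-specific code with its harmonized master ID."""
    return [(xref[(src, code)], region, amount)
            for src, code, region, amount in rows]

def rollup(rows, level):
    """Aggregate amounts at the requested granularity ('product' or 'region')."""
    totals = defaultdict(float)
    for product, region, amount in rows:
        totals[product if level == "product" else region] += amount
    return dict(totals)

harmonized = harmonize(sales)
```

Without harmonization, “MAT-0042” and “P42” would be reported as two different products; after it, rolling up by product correctly credits all 200.0 to the single master entity, while the same harmonized rows can also be rolled up per region.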
Data volume – although it seems a minor problem at first sight, with all the Big Data focused technology being released on the market, it could become a real pain if you decide to bring all possible data sources, as described in the point above, into your combined transactional/analytical system. In short, the exponential data growth most organizations face would simply be blown up by this approach, leading to something I will call, in this article, “Huge Data” instead of Big Data.
With the “Huge Data” approach, organizations would face very complex hardware and operational environments, including shortage of maintenance and backup windows, very complex disaster recovery requirements and so on.
Flexibility – in order to adapt to constant market changes, be it new regulatory requirements, new customer needs or the pursuit of competitive advantage, there is an unstoppable wave of changes to IT systems. Those changes vary from small master data additions (e.g., creating a new product) to extensive customization exercises (e.g., creating a new company within your current system and all its dependencies).
Let us imagine a simple scenario: your company is entering a new market and, to meet that new reality, a new division is being created. New organizational structures will be created, new products, new sales organizations, new organizational levels and groups, etc. And, needless to say, new Business Intelligence requirements are all over the place: from reporting adaptations to entirely new clustering, passing through consolidation requirements and trend analysis.
Now, at the very same time, the marketing department is launching a new marketing campaign, which basically consists of offering a new wine vintage selection through e-mail marketing. Unfortunately, the sales process will go through a last-minute partnership with a wine import company; that being the case, all sales will be reported to your company via a flat-file interface using their codes.
Finally, supported by your very efficient portfolio prioritization process, you got all the necessary resources to get both done on time. The problem is: it will all happen at the very same time, in the very same system, touching very similar objects and, in some cases, the very same object.
Although very simple and imperfect, the scenario above reflects an all-too-common real-life situation. IT departments are called on to answer many demands at once, and having it all in the very same “box” will only create more complex dependencies, higher risks and some very nasty test scenarios.
Besides that, what would be the impact of applying a correction or developing a new application on such a deeply integrated system? Would it be possible to determine its impacts and mitigate the consequent risks? I would risk saying, at the moment, that the answer would be no.
Costs – yes, the costs of brand-new technologies are becoming accessible to a higher number of organizations worldwide. However, when it comes to building a single transactional/analytical system, dealing with “Huge Data”, as said previously, will also lead to the utilization of a massive amount of hardware resources, mainly CPU, memory and storage.
If you doubt it, simply sum up the sizes of your company’s main transactional systems (ERP, CRM, PoS, Supply Chain Management – SCM, Procurement, etc.) plus half the size of your current Enterprise Data Warehouse (assuming the other half is already contained in the transactional systems). What kind of server would you need to support your transactional and analytical operations all together?
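As a back-of-envelope illustration of that sum – every size below is an invented figure in terabytes, not a benchmark; plug in your own landscape’s numbers:

```python
# Invented system sizes in TB; replace with your own landscape's figures.
transactional_tb = {
    "ERP": 4.0,
    "CRM": 1.5,
    "PoS": 2.0,
    "SCM": 1.0,
    "Procurement": 0.5,
}
edw_tb = 10.0  # current Enterprise Data Warehouse size

# Half the EDW is assumed to duplicate data already held in the
# transactional systems, so only the other half is added on top.
combined_tb = sum(transactional_tb.values()) + edw_tb / 2
print(f"Combined transactional/analytical footprint: {combined_tb} TB")
# prints: Combined transactional/analytical footprint: 14.0 TB
```

And that is before indexes, replicas, backups and in-memory working copies – the point being that the single combined box quickly grows far beyond any of the individual systems it replaces.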
Maturity level – in most of the examples explored in this article, there was an implicit assumption: that organizations are currently using an Enterprise Data Warehouse (EDW) and, for that reason, would basically have to merge their transactional systems with that single EDW.
Unfortunately, several organizations – if not most – are far away from that perfect goal. In my experience as a consultant, what I see pretty much every day is that organizations (even the big and resourceful ones) are trying hard to get there, but have not achieved it so far.
In most cases, a few DWs can be found spread across the organization, and even more frequently several front-end tools and front-end vendors are also part of the picture, not to mention ETL tools and their vendors.
In a heterogeneous scenario as described above, it is even more complex to target the combined transactional/analytical scenario, since it would directly lead to a disruption in the organization’s maturity path, which, in the worst case, could pose elevated risks to business effectiveness and even business continuity.
Now that we have discussed what Business Intelligence is, how it became what it is nowadays and how I believe it will develop, there is only a wrap-up missing. Let us close this discussion in the next – and last – part of our series.
Best regards for now.
Do you want to read the next part of this post? Check it out at: Is it time for Business Intelligence to retire? – Part III