This is part two in a four part series. Other parts can be found here:

The Future Analytics: Part 1 – Overview

The Future Analytics: Part 3 – Apps and Visualization

The Future Analytics: Part 4 – Suite and Actions

Watch this video to learn more about the Future Analytics: The Future of Analytics & Big Data (sapserviceshub.com)

Do we want to deal with the world as it is, or as we want it to be? One aspect of Analytics that I have always found fascinating is how selecting metrics and then monitoring them during business operations creates its own reality. We choose metrics on the assumption that they reflect the performance we want to track, and associate targets and goals with them. However, it is not always easy or obvious which metrics accurately reflect our performance goals. Even if we choose our metrics reasonably well, it is essentially a reductive approach: we decide to follow these metrics and no others, and if we meet our goals, we declare success.

This can be enormously misleading. Suppose our goal is to raise revenue by a certain percentage in a year. We put an entire organization behind that goal and track our progress against our growth target like hawks. Throughout the year we see revenue rise, and once we pass our goal, we celebrate. However, the cost of raising revenue might be such that margins have suffered, and we may find that while we reached our original goal, profitability has gone down and may even have gone negative. What have we really achieved, then? By reducing our view of the world to a very narrow one, we created our own fantasy in which we were successful, when in actuality we have spent a lot of energy and effort to no real benefit.
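The revenue-versus-profitability trap above comes down to simple arithmetic. A minimal sketch, using hypothetical numbers, of how a revenue target can be met while profit actually shrinks:

```python
# Hypothetical numbers illustrating the revenue-vs-profitability trap.
def profit(revenue, cost):
    return revenue - cost

# Year 1: baseline.
rev_y1, cost_y1 = 100.0, 85.0           # profit: 15
# Year 2: revenue up 20% -- growth goal met -- but costs grew faster.
rev_y2, cost_y2 = 120.0, 112.0          # profit: 8

growth = (rev_y2 - rev_y1) / rev_y1     # 0.20 -> revenue target achieved
print(f"Revenue growth: {growth:.0%}")
print(f"Profit year 1: {profit(rev_y1, cost_y1)}")
print(f"Profit year 2: {profit(rev_y2, cost_y2)}")  # lower, despite "success"
```

The single revenue metric declares victory; only a second metric (margin or profit) exposes the loss.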

But this approach has a second fundamental limitation: we cannot track what we do not measure, so metric selection is often constrained by the data we have. If all we collect is data generated internally through transactional systems ("actuals"), our metrics are by necessity limited to that dataset. But is that the whole story? By no means is all relevant data coming in only through transactional systems. The real world is substantially more complex than that.

Big Data technologies allow us to take a different approach. The number of potential data sources we can use suddenly explodes, including data that previously was either impossible to add to our analysis, or possible only with great difficulty. We can now take a much wider look at reality that tells us things actuals cannot. Corporate email is one example of such a use case.

When we look at Customer Relationship Management, the "360 degree view of the customer" is a common phrase, yet many CRM solutions still don't leverage corporate email in analysis, whether direct contact with the customer or the internal communication of everyone who has somehow been involved with that customer. From sentiment analysis on the direct communication between the customer and the main sales contacts, as well as gathering the internal discussions, we already get a much clearer picture of who has been involved with a particular customer, what conversations were had, whether the customer is satisfied or not, and whether there are likely further upsell or cross-sell opportunities. We might even be able to rate customers to maximize the productive time of our sales organization, and discourage putting effort into a customer that's unlikely to buy again soon. Social media sentiment analysis is often the first thing that comes to mind in a Big Data context, but for organizations that sell to other businesses rather than to the general public, such corporate email analysis is likely far more effective and valuable.
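To make the idea concrete, here is a minimal lexicon-based sentiment sketch. The word lists and sample emails are hypothetical stand-ins; a production pipeline would use a trained model and a far richer lexicon:

```python
# Tiny illustrative sentiment lexicon (hypothetical, not exhaustive).
POSITIVE = {"great", "happy", "thanks", "excellent", "renew"}
NEGATIVE = {"delay", "problem", "unhappy", "cancel", "escalate"}

def sentiment_score(text):
    """Return (#positive - #negative) / #words for a piece of text."""
    words = [w.strip(".,!?;").lower() for w in text.split()]
    if not words:
        return 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / len(words)

# Hypothetical customer emails.
emails = [
    "Thanks for the excellent support, we plan to renew next quarter.",
    "Another delay on delivery; the customer is unhappy and may cancel.",
]
for mail in emails:
    print(f"{sentiment_score(mail):+.2f}  {mail[:45]}")
```

A positive score suggests a satisfied customer (and possibly an upsell opportunity), a negative one an account at risk; aggregating such scores per customer over time is what starts to resemble the "360 degree view".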

We can’t do any of this without the assistance of predictive analytics, though. The use of predictive analytics is by no means restricted to forecasting what comes next based on past actuals. It is absolutely essential for making sense of Big Data and for separating signal from noise. While we can slice and dice and sum actuals every which way, Big Data analysis in even its simplest forms involves counting elements in a frequentist analysis – for instance, the frequency of interaction between our sales person and the customer – which immediately gets us into basic statistical concepts like histograms, various distribution models, and outliers. This is even before we apply more elaborate techniques like regression, classification and clustering, text analysis and neural networks, or even Machine Learning. And while we will have to deal with levels of uncertainty that we don’t encounter in traditional analytics, I am constantly amazed how predictive analytics allows us to deal with real-world problems that traditionally would make us throw up our hands in abject defeat.
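Even the simplest frequentist step mentioned above – counting interactions and flagging outliers – is already statistics. A minimal sketch with hypothetical interaction counts per customer:

```python
import statistics

# Hypothetical emails exchanged per customer last quarter; a frequentist
# view flags accounts that are unusually quiet or unusually busy.
interactions = {
    "Acme": 42, "Globex": 38, "Initech": 45,
    "Umbrella": 40, "Hooli": 41, "Stark": 3,   # Stark has gone nearly silent
}
counts = list(interactions.values())
mean = statistics.mean(counts)
stdev = statistics.stdev(counts)

# Flag customers more than 1.5 standard deviations from the mean.
outliers = {name: n for name, n in interactions.items()
            if abs(n - mean) > 1.5 * stdev}
print(outliers)
```

An account that suddenly goes silent is exactly the kind of signal actuals alone would never surface; it shows up here as a simple statistical outlier.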

We have to realize, though, that analyzing vast quantities of data and running additional predictive models on top of that comes at a hardware and performance cost. An export of my own mail inbox for last year in text-only format comes to 85MB. Assuming that is an average number, a 50,000-employee corporate email archive for a year comes to about 4 terabytes, which is not enormous but is equivalent to a decent-sized data warehouse and exceeds the volume of most ERP installations. Data from thousands or millions of sensors taking a measurement every minute or second quickly adds up to massive volumes. We can parallelize processing in Hadoop, and do the same with complex predictive models that can take a long time to run. However, moving such analysis after initial processing to an in-memory database like SAP HANA dramatically improves performance, and even makes it possible to run predictive models on demand on data held inside SAP HANA. Such performance improvements become significantly more important when the analysis is used to make recommendations or offer specific promotions to a customer segment on your company website or in person, or to make recommendations to central purchasing or supply chain management, where a quick response is essential to be effective.
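The back-of-envelope volume estimate above is easy to reproduce from the article's own numbers:

```python
# Volume estimate: one year of text-only email across a large company.
mb_per_employee_year = 85          # one inbox, one year, text-only export
employees = 50_000

total_mb = mb_per_employee_year * employees
total_tb = total_mb / 1_000_000    # decimal (SI) units: 1 TB = 1,000,000 MB
print(f"{total_tb:.2f} TB")        # roughly 4 TB, as stated in the text
```

Sensor data scales the same way: multiply readings per device per day by device count, and terabytes arrive far faster than with email.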

Let’s be clear, though: Big Data (and predictive analytics) is not a substitute for tracking actuals. At the end of the day, in a business context, we need revenue and profitability for a company to survive, and knowing accurately what you sold and where is critical information. Too often Big Data is presented as the new thing that will replace traditional analytics, but that strikes me as hubris. The value of Big Data is to enrich the information we get out of actuals, and where possible provide additional insights and business opportunities. The value lies in the combination of the two. With a better understanding of who our customers are, how they use our products or services, why there are regional variations in sales, the impact of external factors like weather, distance or ease of access from population centers, the local economy, or any other elements relevant to the specific use case we’re dealing with, we can make better-informed and more efficient decisions. What we’re trying to do is to expand beyond our reductionist view of the world, and get closer to reality.

To learn more about how SAP HANA Services can help you throughout your Analytics journey, please visit us online.
