
In our last installment of NextGeneration EAM we spoke about the concepts of mobility - specifically how mobility is much more than porting the back-office transaction onto a connected laptop in the field.  Mobility is about transforming the way field workers do their jobs by providing the information they need to complete the process without the burden of the complete backend data load.

In this week's blog, I would like to explore the expectations and the solutions available to address yet another critical node of our NextGenEAM Mind Map.  For a quick refresher, let's revisit it before diving in.  The NextGenEAM Mind Map was an exercise in mapping the key ideas that support the concept of NextGenEAM: the Worker, the Visualization, the Data, and Mobility.  The Mind Map below shows the structure, with a couple of the minor nodes specifically called out.

BIG DATA - where else but in the world of EAM would you expect to run into extreme data sets?  With the sensor networks of today's machines, coupled with the numerous endpoints being added to the utility through projects such as Smart Grid - Advanced Metering Infrastructure (AMI), distributed control systems, and programmable logic controllers (DCS/PLC) - we have an environment ripe for data sets that reach petabyte scale.

So all of this great data is being sent and exchanged machine-to-machine (M2M) - but what to do with it, and how best to make sense of it all?

Before we descend into the discussion of in-memory vs. relational databases, let's start with the most fundamental aspect of the information we are dealing with.  It's often overlooked, and when later "discovered" as a concept it can be challenging to incorporate.  All data has an origin - sure, you say, it comes from a sensor, a meter, rack equipment - but I mean something more basic: a common data point that describes its origin.  Geo-coordinates are the most elementary and a very critical part of leveraging the data to its fullest.  Beyond knowing that a given data point came from a given sensor, we most likely want to know where it is located - perhaps to consider additional metadata about that area's environmental conditions, or to see what similar equipment is located within a given radius.
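As a rough illustration of geo-coordinates as the common denominator, here is a minimal Python sketch (the asset IDs and coordinates are hypothetical) that finds equipment within a given radius of a point of interest using the haversine distance:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two geo-coordinates."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical asset registry: (asset_id, lat, lon)
assets = [
    ("XFMR-001", 51.5074, -0.1278),
    ("XFMR-002", 51.5120, -0.1300),
    ("BRKR-101", 52.2053,  0.1218),
]

def assets_within(lat, lon, radius_km, registry):
    """Return assets whose geo-coordinates fall inside the radius."""
    return [a for a in registry if haversine_km(lat, lon, a[1], a[2]) <= radius_km]

print(assets_within(51.5074, -0.1278, 5.0, assets))  # nearby transformers only
```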

Now let's take it to another level: which geographically related data points are exhibiting a similar pattern of behavior?  We now have a multidimensional profile of a given data point - grounded on its geo-coordinate as a common denominator and ready for us to take advantage of.  What pattern is exhibited when the weather is in a certain configuration as compared to the network?  Are we seeing precursory data signatures in a given node or type of asset prior to a failure?  These are the indicators being processed in the best of the EAM engineering shops.  Signature analysis is not new - think back to the vibration snapshot devices and how they would capture multi-point XYZ data and create a signature the trained eye could see into.
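To make the signature idea concrete, here is a minimal sketch assuming evenly sampled vibration snapshots held as NumPy arrays; the drift threshold and the creeping 120 Hz component are purely illustrative:

```python
import numpy as np

def signature(samples):
    """Normalised magnitude spectrum - the 'signature' of a vibration snapshot."""
    spectrum = np.abs(np.fft.rfft(samples))
    return spectrum / np.linalg.norm(spectrum)

def signature_drift(baseline, current):
    """0.0 means identical signatures; larger values mean growing deviation."""
    return float(np.linalg.norm(signature(baseline) - signature(current)))

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 1024, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t) + 0.05 * rng.standard_normal(t.size)
# A hypothetical precursor: a new 120 Hz component creeping into the snapshot
suspect = healthy + 0.4 * np.sin(2 * np.pi * 120 * t)

if signature_drift(healthy, suspect) > 0.1:   # threshold is illustrative
    print("Precursory signature detected - schedule inspection")
```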

Early detection involves current and historical data - with machines as well as humans.  Why does your doctor want to know your family history, and why do they keep those excellent notes in their folders?  So they have a historical profile of your body's machinery to compare current data against.  With our big data sets we need the same ability.  The challenge lies in dealing with multiple historical data sets spread across different protocols and formats.  And that's not the end of the dilemma - the next quandary comes when we marry up time-series data sets of differing resolution with sequence / change-of-state data streams.  Not the easiest nut to crack - doable with modern tool sets, but still not to be underestimated when merging large data sets into one for fast handling and maximum results.
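As one small example of that merge, the pandas sketch below (the column names and values are hypothetical) downsamples a one-minute analogue series and aligns each point with the most recent change-of-state event:

```python
import pandas as pd

# 1-minute analogue readings (e.g., line loading)
readings = pd.DataFrame({
    "ts": pd.date_range("2024-01-01 00:00", periods=6, freq="1min"),
    "load_mw": [10.2, 10.4, 10.1, 15.8, 15.9, 16.1],
})

# Irregular change-of-state events from a breaker
events = pd.DataFrame({
    "ts": pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:03"]),
    "breaker_state": ["closed", "open"],
})

# Downsample the readings to 2-minute resolution, then attach
# the most recent known breaker state to each averaged reading
coarse = readings.set_index("ts").resample("2min").mean().reset_index()
merged = pd.merge_asof(coarse, events, on="ts", direction="backward")
print(merged)
```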

Finally, what is the purpose of mining information from big data sets?  To support an informed decision by the user.  Strike out with key measures in mind: focus on the critical assets first, and drive maximum benefit out of the data by providing a spatially focused view supplemented by key indices available at a glance.  Allow for drill-downs into the data sets, whether by zoom level or through an intuitive right-click or other modern UX controls.  Unlock the power of the data by letting users explore it - customers are always amazed by what the data shows once it is unified in an in-memory database.  Inconsistencies show their faces quickly and lead to discovering why failures go undetected until it's too late.
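One way to picture the zoom-level drill-down: the small sketch below (the readings are made up) buckets data points into grid cells by rounding their geo-coordinates, so coarser rounding gives the at-a-glance overview and finer rounding gives the drill-down:

```python
from collections import defaultdict

readings = [
    # (lat, lon, failure_flag)
    (51.5074, -0.1278, 0),
    (51.5080, -0.1290, 1),
    (52.2053,  0.1218, 0),
]

def failure_rate_by_cell(points, zoom):
    """Aggregate a failure-rate index per grid cell; zoom = decimal places kept."""
    cells = defaultdict(list)
    for lat, lon, failed in points:
        cells[(round(lat, zoom), round(lon, zoom))].append(failed)
    return {cell: sum(flags) / len(flags) for cell, flags in cells.items()}

print(failure_rate_by_cell(readings, zoom=1))  # coarse overview
print(failure_rate_by_cell(readings, zoom=3))  # drill-down
```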

The advent of big data sets and more open networks sharing data has allowed system architects to exploit the possibilities of a world where operational technology and information technology merge - or at least meet.  This is a matter of degree, and it depends on proper design to prevent security breaches or performance issues from impeding the real opportunity: extending business processes from the sensor to the mobile user and thereby driving the maximum out of NextGenEAM.
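As a closing sketch of what "sensor to mobile user" could look like in code - none of these names are an SAP API, just plain Python for illustration - a limit breach on a reading becomes a notification payload that a mobile work-management client could pick up:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class SensorReading:
    equipment_id: str
    measurement: str
    value: float
    limit: float

def make_notification(reading):
    """Turn a limit breach into a notification record for the field worker."""
    if reading.value <= reading.limit:
        return None
    return {
        "equipment": reading.equipment_id,
        "text": f"{reading.measurement} at {reading.value} exceeds limit {reading.limit}",
        "raised_at": datetime.now(timezone.utc).isoformat(),
        "priority": "high",
    }

breach = SensorReading("XFMR-001", "oil temperature (C)", 98.0, limit=85.0)
print(make_notification(breach))
```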

Look for my next installment, in which we introduce the next topic centered on Enterprise Asset Management.  I've alluded to it today, and I assure you it will be interesting...
