I am in my mid-forties and have been working in IT for some 20 years now. Even though I am generally open to change, I sometimes feel genuinely overwhelmed by the pace of change in IT (probably in other areas as well, just less noticed by me).


Big Data is one example.

When I decided to work more with computers, Windows was just about to set the standard, and my first PC had an amazing 120 MB hard disc (hey, that was 40 MB more than the standard sold in the computer stores at the time). MS Word was still delivered on a number of floppy discs. Do you remember those?


[Image: floppy disc (picture from Pixabay.com)]


Well, those days are certainly gone!


And now? There is so much data being captured (whether you like it or not) and processed further… I do not really know how many floppy discs it would take to store the data that a single mining company, for example, collects in one day. We would probably have to build new storage buildings for floppy discs on a constant basis.
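Just for fun, here is a back-of-the-envelope sketch of that thought. The daily data volume is a made-up figure (1 TB per day) chosen purely for illustration, not a real number from any mining company:

```python
# Back-of-the-envelope: how many 3.5" floppy discs (1.44 MB each) would it
# take to store one day's worth of data? The 1 TB/day figure is an
# assumption purely for illustration.
FLOPPY_CAPACITY_MB = 1.44        # high-density 3.5" floppy disc
DAILY_DATA_MB = 1_000_000        # assumed: ~1 TB of data per day

discs_per_day = DAILY_DATA_MB / FLOPPY_CAPACITY_MB
print(f"About {discs_per_day:,.0f} floppy discs per day")
# -> About 694,444 floppy discs per day
```

Roughly 700,000 floppy discs every single day, under that assumption. Stacked up, that really would fill new storage buildings in no time.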


But what is all that data good for? Is it helping to improve processes? I believe we have not yet really understood what this data could help us do. My imagination does not yet stretch far enough to see all the use cases and benefits we could derive from it.


One example of such use cases (for industries like mining, chemicals, and pharmaceuticals) was recently described in a McKinsey publication (http://www.mckinsey.com/insights/operations/how_big_data_can_improve_manufacturing). I will just quote the paragraphs about the mining company (as this is ‘my’ industry):

‘Meanwhile, a precious-metals mine was able to increase its yield and profitability by rigorously assessing production data that were less than complete. The mine was going through a period in which the grade of its ore was declining; one of the only ways it could maintain production levels was to try to speed up or otherwise optimize its extraction and refining processes. The recovery of precious metals from ore is incredibly complex, typically involving between 10 and 15 variables and more than 15 pieces of machinery; extraction treatments may include cyanidation, oxidation, grinding, and leaching.


The production and process data that the operations team at the mine were working with were extremely fragmented, so the first step for the analytics team was to clean it up, using mathematical approaches to reconcile inconsistencies and account for information gaps. The team then examined the data on a number of process parameters—reagents, flow rates, density, and so on—before recognizing that variability in levels of dissolved oxygen (a key parameter in the leaching process) seemed to have the biggest impact on yield. Specifically, the team spotted fluctuations in oxygen concentration, which indicated that there were challenges in process control. The analysis also showed that the best demonstrated performance at the mine occurred on days in which oxygen levels were highest.


As a result of these findings, the mine made minor changes to its leach-recovery processes and increased its average yield by 3.7 percent within three months—a significant gain in a period during which ore grade had declined by some 20 percent. The increase in yield translated into a sustainable $10 million to $20 million annual profit impact for the mine, without it having to make additional capital investments or implement major change initiatives.


Well, what can I say? I would never have had the idea to use data that way and to increase the yield simply by applying the information already at hand so cleverly!


How about you? Are you already feeling at home in the world of big data? Do you have other examples you can share? I am so curious about such examples – they always show me the truth of the saying: I don’t know what I don’t know!
