Data Deluge Due to ERP Systems – Is a Formula 1 Strategy the Only Way Forward?
If you look at any write-up on the future of ERP, handling the data deluge will be right at the top. The ERP industry has accepted that this is going to be a big money spinner and a source of future revenue. While this assertion looks logical, the IT industry's strategy to take advantage of it is singularly focused on more powerful hardware and more powerful software that can process millions of data records as quickly as possible. In my experience, the huge reports created by processing that many records actually add minuscule value to business users, who by and large view them with a great deal of skepticism. So the IT industry's focus on more power and more speed – a sort of Formula 1 strategy – looks perplexing to me.
There is a lot that can be done on the data generation side itself to ameliorate the data deluge problem. I think this deserves as much attention and weightage as, if not more than, raw high-speed data processing capacity in next-generation ERP systems.
Let me take a simple purchasing scenario to explain the potential to reduce data generation.
In a typical PTP (procure-to-pay) scenario, we start the process by creating a purchase requisition, which is then converted into a purchase order, followed by goods receipt, then invoicing, and finally the payment run. As expected, multiple standard tables are involved here, such as EBAN, EKKO, EKPO, MKPF, MSEG, RBKP, RSEG, etc. When I create a purchase requisition, as soon as I enter a material, a bunch of fields get updated from the material master; similarly for other data like vendor, plant, etc. As I move further into PO creation, GR posting and invoice posting, data that was already part of earlier documents gets carried over into subsequent documents, duplicating the same data across SAP tables. While validating each of these carried-over fields against the respective master data and org data tables is certainly required, my question is: do they also have to be copied into all the tables? Can the programs not be made smart enough to avoid duplicating the same set of data across multiple tables along the business process chain?
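The idea above – validate against master data but store only the key, instead of copying master-data fields into every document table – can be sketched in plain Python. This is an illustrative sketch, not actual SAP code: the table layout, field names and functions below are simplified stand-ins I made up for EBAN/material-master-style structures.

```python
# Stand-in for a material master record (MARA-like data); hypothetical fields.
material_master = {
    "M-100": {"description": "Steel bolt", "unit": "EA", "material_group": "FASTENERS"},
}

def create_requisition_denormalized(material_id, qty):
    """Today's pattern: master-data fields are copied into the document row."""
    master = material_master[material_id]          # validation lookup
    return {
        "material_id": material_id,
        "qty": qty,
        # duplicated fields, stored again in the requisition table:
        "description": master["description"],
        "unit": master["unit"],
        "material_group": master["material_group"],
    }

def create_requisition_normalized(material_id, qty):
    """Proposed pattern: validate against the master, but store only the key."""
    if material_id not in material_master:         # validation still happens
        raise ValueError(f"Unknown material {material_id}")
    return {"material_id": material_id, "qty": qty}

def read_requisition(req):
    """Master-data attributes are resolved at read time instead of stored."""
    return {**req, **material_master[req["material_id"]]}

req = create_requisition_normalized("M-100", 50)
print(len(create_requisition_denormalized("M-100", 50)))  # 5 stored fields
print(len(req))                                           # 2 stored fields
print(read_requisition(req)["description"])               # Steel bolt
```

The trade-off, of course, is an extra join at read time – which is exactly why document tables were denormalized in the first place – but the point is that the validation and the copying are separable design decisions.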
At the outset this may look like a small amount of duplicated data, but it is small droplets of water that, over a period, create a flood – and that is exactly what happens in ERP systems, building up a huge amount of data over a few years. This duplication becomes even more prolific if IDocs are involved in the process and the whole transactional flow is automated with multiple acknowledgements and so on.
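The droplets-to-flood point can be made with a back-of-envelope calculation. Every number below is a hypothetical assumption of mine, not a measured SAP figure – the sketch only shows how a few hundred duplicated bytes per document line compound across a four-document process chain over the years.

```python
# All figures are illustrative assumptions, not measured values.
duplicated_bytes_per_line = 200   # assumed master-data bytes copied per item line
documents_in_chain = 4            # requisition -> PO -> GR -> invoice
lines_per_day = 10_000            # assumed purchasing volume
years = 5

total_bytes = (duplicated_bytes_per_line * documents_in_chain
               * lines_per_day * 365 * years)
print(f"{total_bytes / 1e9:.1f} GB of duplicated data")  # 14.6 GB
```

Even with these modest assumptions the duplication alone reaches double-digit gigabytes, before counting IDoc payloads and acknowledgements, which repeat much of the same data again.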
To be fair, I sometimes feel this strategy looks very simplistic, but it could be a worthy one too. What do you guys think?