GRC Tuesdays: Risk due to RPA Gold Rush
First of all, let me say I am in favour of, and excited by, the use of Robotic Process Automation (RPA) and Bots in business, and by the very real benefits that Artificial Intelligence (AI) technologies can bring to complex, repetitive, high-impact tasks like analysing X-rays for signs of cancer and other diseases.
Keeping with contemporary beneficial use cases: at a recent SAP hackathon, Itelligence proposed combining RPA with internet-connected sensor tags that would continuously measure the temperature of vaccine batches and send the data via Bluetooth to a smartphone.
A sensor app would receive the data and forward it to a cloud platform, which would add batch and product information, as well as temperature thresholds and relevant delivery details.
Another app would then check the incoming sensor values against predefined thresholds, like the temperature limit, and kickstart one or more RPA bots, thus helping safe distribution of COVID-19 vaccines. This is a great use of RPA and Bots.
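The threshold-checking step described above can be sketched in a few lines. This is a minimal illustration, not Itelligence's actual implementation; the batch id, temperature limits, and action names are all assumptions:

```python
from dataclasses import dataclass

# Hypothetical cold-chain limits for one vaccine batch.
@dataclass
class BatchLimits:
    batch_id: str
    min_temp_c: float
    max_temp_c: float

def evaluate_reading(limits: BatchLimits, temp_c: float) -> list[str]:
    """Return the RPA bot actions to kick off for one sensor reading."""
    actions = []
    if temp_c < limits.min_temp_c or temp_c > limits.max_temp_c:
        # Out of range: the cloud platform would start one or more Bots,
        # e.g. to quarantine the batch and notify the logistics team.
        actions.append(f"quarantine-batch:{limits.batch_id}")
        actions.append(f"notify-logistics:{limits.batch_id}")
    return actions

limits = BatchLimits("VAX-0042", min_temp_c=2.0, max_temp_c=8.0)
print(evaluate_reading(limits, 9.5))  # out of range: two actions triggered
print(evaluate_reading(limits, 5.0))  # in range: nothing to do
```

In practice the thresholds and delivery details would come from the cloud platform rather than being hard-coded, but the shape of the decision is the same.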
But here is why I wanted to write this blog: just because a Bot (combining the two terms for convenience) does something, and is therefore not subject to human error, it doesn't mean the result is always correct.
- Because a Bot is doing the assigned task exactly as ‘instructed’, is it always true to say the outcome is correct for the business? Sometimes judgement is necessary.
- If a Bot was correct to start with, data and processes change (sometimes in hidden ways), so it could become incorrect.
- Bots are fundamentally software coded by human beings and trained by human beings: open to error and therefore potentially leading to errors in the outcomes.
- Are we comfortable with situations where Bots overlap?
- We are rushing to find and deploy Bots in so many areas of business, and the volume of data they touch is rising rapidly, stretching our ability to remain in control of the outcomes.
Leading to this: just because a Bot does something, it doesn’t mean it does not need to be monitored or audited.
While there are clear cost benefits in using Bots, I believe we have bypassed a healthy scepticism about their supposedly infinite reliability and accuracy. Of course we check human activity, because we expect errors. If we have any doubt about the ongoing accuracy of Bots, then we should check their output too, especially as we will see them used ever more widely and pervasively.
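One concrete way to keep that scepticism alive is to spot-check a sample of a Bot's output against the source system, just as we spot-check human work. A minimal sketch, assuming the Bot posts invoice amounts (the record ids and values are invented for illustration):

```python
import random

def audit_sample(bot_records, source_records, sample_size, seed=0):
    """Spot-check a random sample of Bot output against the source system.

    bot_records / source_records: dicts mapping record id -> posted amount.
    Returns the ids in the sample where the Bot disagrees with the source.
    """
    rng = random.Random(seed)  # fixed seed so the audit is reproducible
    ids = sorted(bot_records)
    sampled = rng.sample(ids, k=min(sample_size, len(ids)))
    return sorted(i for i in sampled if bot_records[i] != source_records.get(i))

# Hypothetical data: the Bot mis-posted one invoice amount.
bot = {"INV-1": 100.0, "INV-2": 250.0, "INV-3": 75.0}
src = {"INV-1": 100.0, "INV-2": 205.0, "INV-3": 75.0}
print(audit_sample(bot, src, sample_size=3))  # full sample catches INV-2
```

A real continuous-controls solution would of course do far more (tolerances, workflows, evidence trails), but even a check this simple is better than assuming the Bot is infallible.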
Guidance on how best to integrate Bots within human teams is to encourage the human team to think of them as part of the team. So while you shouldn't expect them to join your team outings, you should treat them as valuable contributing team members, not as external to the team. They are users, and they perform actions on business data. There are also manager Bots that 'control' lower-level Bots, so activities can become complex, yet still with the option of running unattended.
Then there is the scenario of multi-agent AI: AI tools overlap processes and touch some of the same data when analysing and manipulating business data. This leads to the concept of multi-agent negotiation: some sort of automated prioritisation and decision-making. Essentially, digital judgement.
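To make "digital judgement" a little more concrete, here is one naive form such negotiation could take: when several Bots want to update the same record, the request from the highest-priority Bot wins. The Bot names and priorities are purely illustrative assumptions:

```python
def resolve_conflicts(requests):
    """Naive multi-agent negotiation by static priority.

    requests: list of (record_id, bot_name, priority, new_value) tuples,
    where a lower priority number means a more authoritative Bot.
    Returns record_id -> (winning bot, value).
    """
    winners = {}
    for record_id, bot, priority, value in requests:
        current = winners.get(record_id)
        if current is None or priority < current[0]:
            winners[record_id] = (priority, bot, value)
    return {r: (bot, value) for r, (priority, bot, value) in winners.items()}

requests = [
    ("cust-17", "credit-check-bot", 2, "hold"),
    ("cust-17", "compliance-bot", 1, "block"),  # higher priority, wins
    ("cust-42", "billing-bot", 3, "invoice"),
]
print(resolve_conflicts(requests))
```

Real multi-agent systems negotiate with far richer strategies than a static priority list, which is exactly why the governance question matters: someone has to decide, and audit, whose judgement wins.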
Extending this concept somewhat fantastically into the realm of science fiction, maybe there needs to be an Institute of Bots, like those for engineers, economists, doctors, and finance and accounting specialists, with stated best practices and codes of conduct and ethics. Perhaps the equivalent of Asimov's Three Laws of Robotics, for Bots?
Coming back to Earth (so to speak), and looking at the notion of ethical behaviour of AI technologies, Deloitte asks whether AI can be ethical, identifying five ethical risks relating to AI and machine learning:
- Bias and discrimination
- Lack of transparency
- Erosion of privacy
- Poor accountability
- Workforce displacement and transitions
Automating data changes, and potentially decisions, affects outcomes at both a business and a personal level: business performance, data accuracy, credit records, and funding approvals, for example. Who checks the outcomes when the above risks become real-world events that impact our lives?
The European Commission has presented its Ethics Guidelines for Trustworthy AI, which suggest that trustworthy AI should be:
- lawful – respecting all applicable laws and regulations
- ethical – respecting ethical principles and values
- robust – both from a technical perspective while taking into account its social environment
They suggest that the less oversight a human can exercise over an AI system, the more extensive testing and stricter governance is required.
We know from the exponential growth of Machine-to-Machine (M2M) communication in the Internet of Things world that data volumes are exploding: data generated by humans runs to around 1.5 billion gigabytes every month, but combined with M2M traffic the total doubles every year, reached around 10 billion gigabytes in 2017, and will be much higher in 2021. Ericsson estimated there will be up to 50 billion devices online by 2025 (about 6 per person on Earth). I don't have 2021 updates for these figures, but the point is clear: the use of automated software will grow rapidly. If not the Bots themselves, then certainly the data they touch.
My understanding is that while auditors will evidently need to boost their understanding of these amazing technologies, they will also need to audit the use of robotic software. The big-data aspects of volume, velocity, and variety will apply to the Bot world too. The more we automate with Bots to improve business performance, the faster we will need to check the Bots' performance, to continue delivering equivalent levels of due diligence and governance. I'd say "watch this space"…
This, like my previous blog on the use of internal controls to drive resilience and trust, leads me to the point that, in parallel with the increased use of Bots, we also need to increase the use of smart automated internal controls solutions, and of embedded continuous and automated auditing across the enterprise business landscape. I don't think we can assume Bots are infallible. I don't think we can leave them unmonitored.
I'll end with an Eeyore quote from A. A. Milne's Winnie-the-Pooh books: "They're funny things, Accidents. You never have them till you're having them."