GRC Tuesdays: Artificial Intelligence and Machine Learning in a GRC World
Everyone’s Talking about AI and ML
GRC vendors are all talking about it—including SAP. But what does it mean and, more importantly, what might it mean for you in the future? First, let’s get on the same page, in case you’re one of the many people who have to look up acronyms these days because they appear and disappear so quickly.
AI = Artificial Intelligence (from Merriam-Webster):
“A branch of computer science dealing with the simulation of intelligent behavior in computers (…machine to imitate intelligent human behavior)”
ML = Machine learning (from Wikipedia):
“A field of computer science that uses statistical techniques to give computer systems the ability to ‘learn’ (e.g., progressively improve performance of a specific task) with data, without being explicitly programmed”
Expert System
An expert system, closely related to both AI and ML, uses a knowledge base of expert information plus an inference engine to make decisions and solve complex problems.
The Merriam-Webster definition of AI (specifically, “…machine to imitate intelligent human behavior”) did put a smile on my face as I contemplated whether a computer imitating stupid human behavior would qualify as artificial intelligence, or do we also need a definition for artificial stupidity? You may think I’m kidding, but it’s only a slight exaggeration if you realize that some of the current buzz around Google Duplex and Assistant emphasizes a computer agent that can imitate the voice, pauses, and false starts inherent in human communication.
To me, it sounds as if they are dumbing it down to sound more human. What’s next—bad grammar, swear words, and a dog barking in the background? I hope our future robocalls will understand when I say, “I’m on the Do Not Call List!”
But to get back on topic, where machine learning meets AI would involve an AI agent evaluating its own behavior and then adjusting as needed. To continue with the robocall analogy, if the robo-agent could learn that the canned sales pitch did not produce enough sales orders within a certain demographic and then adapt the pitch for future calls, now that would be something! (Notice that I didn’t say “something good.”)
Intelligent GRC
You may be wondering how this relates to GRC. Well, we already have a few examples in previous blogs:
- How Machine Learning Helps to Improve Security, Part 1 by Lane Leskela
- How Machine Learning Helps to Improve Security, Part 2 by Lane Leskela
The above blogs primarily discuss capabilities that already exist within the SAP stable of products. I’d like to explore a couple of potential new capabilities with you. Since I’m more focused on risk and compliance management, let’s confine the discussion to those areas.
First, one tentative problem statement—it’s cumbersome to determine which internal controls to test, how to test them, and how frequently to test them. Perhaps we could make this a bit easier with an expert system and machine learning. Without going all “SOX wonk” on you, let’s assume you have captured some information about each control, such as:
- Type of control
- Frequency of control performance
- Extent of automation of control performance
- Materiality of the risks a control was intended to mitigate
- Relative difficulty of performing the control correctly
- Extent to which the control is easily overridden by management
- History of control failure
- And so on….
With this information, the system could determine which controls to evaluate, how, when—and perhaps even by whom—based upon rules. From there, why not automatically schedule the evaluations, as well as route resulting issues or exceptions, if any? Make use of machine learning capabilities by adjusting the schedule automatically based upon changes to risk level, control failure, and such. And, of course, if you can fully automate many of the tests themselves, you’re home free! This could potentially take a time-consuming task and turn it into something easy, predictable, and auditable.
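To make that a little more concrete, here is a minimal sketch in Python of how rules might turn control attributes like those above into a testing cadence. The attribute names, weights, and thresholds are my own illustrative assumptions, not an SAP product API:

```python
# A hypothetical sketch of rule-based control test planning.
# All fields, weights, and thresholds below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Control:
    name: str
    materiality: int     # 1 (low) to 5 (high) impact of the risks it mitigates
    automated: bool      # fully automated controls are assumed more reliable
    difficulty: int      # 1 (easy) to 5 (hard) to perform correctly
    override_risk: int   # 1 (hard to override) to 5 (easily overridden)
    past_failures: int   # number of failures in prior test cycles

def risk_score(c: Control) -> int:
    """Combine control attributes into a single priority score (simple weighted sum)."""
    score = 2 * c.materiality + c.difficulty + c.override_risk + 3 * c.past_failures
    if c.automated:
        score -= 3
    return score

def test_frequency(score: int) -> str:
    """Map a priority score onto a testing cadence (thresholds are arbitrary)."""
    if score >= 15:
        return "monthly"
    if score >= 10:
        return "quarterly"
    return "annually"

controls = [
    Control("Three-way match", materiality=4, automated=True, difficulty=2,
            override_risk=2, past_failures=0),
    Control("Manual journal entry review", materiality=5, automated=False, difficulty=4,
            override_risk=4, past_failures=1),
]

for c in controls:
    s = risk_score(c)
    print(f"{c.name}: score={s}, test {test_frequency(s)}")
```

The machine learning piece would come in when the weights and thresholds themselves are tuned from historical test outcomes rather than hard-coded as they are here.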
The key seems to be gathering plenty of relevant data and understanding how the rules should be derived from that data. The internet is abuzz with talk of potential unintended consequences of machine learning algorithms. Consider the following example from an interview with Ryan Jenkins, a philosophy professor at Cal Poly.
My paraphrase: Imagine you were developing rules for how a self-driving vehicle would react when a crash with another vehicle was imminent. Logically, you might want to direct the car to steer away from the potential crash and towards an open area or, if no open area is available, towards the smallest object, to minimize damage to the vehicle. But what if the smallest object is a child? Obviously not a good rule, but a good example of unintended consequences….
Or let’s think about trying to figure out in advance which internal controls will fail testing. A friend and colleague suggested that we could look at testing history to identify controls that had failed before, and consider those to be most likely to fail again. Yet in practice, it may well be that controls that had previously failed would receive the most attention to ensure that they wouldn’t fail again. So actually they might be the least likely to fail. A more meaningful way of thinking about it might involve control complexity, who is performing the control, and so on.
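As an illustration of that idea only, here is a tiny hypothetical sketch: train a simple model on control attributes such as complexity and performer experience (the features, data, and labels below are synthetic) and then see which features actually carry predictive weight, rather than assuming that past failure is the best signal:

```python
# A hypothetical sketch of predicting control failures from attributes such as
# complexity and performer experience. The data and labels are synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [control_complexity (1-5), performer_experience_years, previously_failed (0/1)]
X = np.array([
    [1, 10, 0], [2, 8, 0], [5, 1, 0], [4, 2, 1],
    [3, 5, 0], [5, 2, 0], [2, 12, 1], [4, 1, 0],
])
# 1 = control failed its next test, 0 = passed (synthetic labels)
y = np.array([0, 0, 1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Inspect which features the model weights most heavily; with real data we might
# find that complexity and experience matter more than prior failure history.
for name, coef in zip(["complexity", "experience", "previously_failed"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```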
Opportunities Abound
I certainly don’t have the answers yet, but I can imagine a lot of interesting opportunities to leverage a combination of artificial intelligence, expert systems, and machine learning:
- Using predictive techniques to identify potential new patterns of waste, fraud, and abuse (which some of our solutions do today), and to construct and operate preventive and corrective controls to better guard against them
- Managing risks and opportunities based upon a variety of factors that go beyond simple risk appetite thresholds, risk estimation, and related response activities—this might involve using a variety of internal and external information to develop a structure that would support decision making focused on company objectives (for example, Should we expand our product line? Should we purchase a plant in China?)
- Using natural language processing to identify and review/audit suspect transactions based not just on changing patterns in the data, but also on free-form text (like comments) included in the transactions (see the sketch after this list)
- Understanding and preventing cyber threats by analyzing what has changed and which users are processing atypical transactions, and by learning from attacks that did succeed
- Learning from any or all of the above to continuously refine computer-based decisions and algorithms to make them smarter—over time, more can be handled without human intervention and with increasing accuracy
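To illustrate the natural language processing point from the list above, here is a hypothetical sketch of flagging transactions for review based on their free-form comment text. The comments, labels, and toy model are purely illustrative:

```python
# A hypothetical sketch of flagging transactions based on free-form comment text,
# using a simple bag-of-words classifier. All examples and labels are made up.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical transaction comments labeled by auditors (1 = flagged as suspect)
comments = [
    "Urgent payment, approve without PO per manager request",
    "Standard monthly utility invoice",
    "Split invoice to stay under approval limit",
    "Routine office supplies reorder",
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(comments, labels)

new_comment = ["Please process quickly, skip the usual approval"]
print("Suspect probability:", model.predict_proba(new_comment)[0][1])
```

In a real deployment, the interesting part is combining a text signal like this with the structured transaction data, so that neither an unusual amount nor an unusual comment has to tell the whole story on its own.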
For a related discussion, also take a look at a blog written by my SAP colleague and friend Thomas Frénéhard, What Will GRC Look Like in 2021? An Anticipation Scenario. While written in 2016, it’s still relevant today. (Remember when two years in the past did not seem like a lifetime ago?)
Clearly, understanding and using these techniques is a journey, not a destination. I’d welcome your feedback on what your companies are doing.
Learn More
Read the other blogs in our GRC series to learn more about topics like this one and our GRC solutions.