Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
anubhavpandey

The WHY-WHAT-HOW of Explainable AI

In the previous part, we covered the basics of Artificial Intelligence. In this part, we will focus on the concepts of XAI: its importance, its use cases, and the various types of models.

WHY?
As AI makes more and more decisions for us in finance, healthcare, law enforcement, the military, and government, it is increasingly important that these decisions are made correctly and for the right reasons. Explainable AI (XAI) is a set of tools and methods that help us understand the predictions and decisions that AI programs make, especially where the stakes are high.

Take the example of an artificial intelligence program that predicts whether a patient is at high risk after discharge: it is crucial for doctors to understand the reasoning behind the predictions. With explanations, doctors can better understand the basis of a prediction and make an informed decision. Without this transparency, it is difficult to trust an AI system’s predictions.

The General Data Protection Regulation (GDPR) requires that, for decision-making systems based solely on automated processing, people have the right to a meaningful explanation of the logic involved in the decisions made.

As Artificial Intelligence continues to advance into high-risk tasks and to augment humans, increasing importance is being placed on establishing trust and transparency within these systems.

WHAT?
In the simplest terms, Explainable AI is the set of methods and techniques that make an AI model’s decisions or predictions understandable to humans.

 


Figure 3 Scope of Explainable AI (XAI) in general

In the context of AI, Explainability and Interpretability both help build trust and improve transparency, and the two terms are often used interchangeably. In my understanding, however, there is a difference between them.
Explainability focuses on providing insight into the workings of the model, giving a meaningful explanation of why the model made a certain decision.
Interpretability, on the other hand, aims at understanding the behavior of the AI system by relating input features to output predictions.

Because these two terms are so commonly used interchangeably, it was necessary to make the distinction here.

HOW?
AI models can be broadly classified into two groups. Directly explainable models, such as linear regression, decision trees, and rule-based models, are simple to explain. The Generalized Linear Rule Model (GLRM) is one example of a directly explainable model: it generates simple rules that humans can easily understand. On the other hand, models such as deep neural networks require post-hoc explainability. These models are complex and act as a black box, so a second set of algorithms is needed to explain their decisions.
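As a minimal sketch of a directly explainable model (not taken from the article), a shallow decision tree can be trained on toy data and its learned rules printed as human-readable if/else statements. All feature names and values below are invented for illustration:

```python
# Sketch: a directly explainable model. A shallow decision tree learns
# rules from toy credit data that a human can read directly.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data: [age, debt_ratio] -> 1 = high risk, 0 = low risk (invented)
X = [[25, 0.9], [40, 0.2], [55, 0.8], [30, 0.1], [45, 0.7], [60, 0.3]]
y = [1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the learned tree as human-readable if/else rules
rules = export_text(tree, feature_names=["age", "debt_ratio"])
print(rules)
```

The printed rules show directly which feature thresholds drive the classification, which is exactly what makes such models self-explanatory.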

Three techniques are commonly applied in algorithms for post-hoc explainability.

Explaining the model globally – Generate an explanation that gives an overview of the model and how it works. These algorithms train a simpler model, such as a decision tree, to approximate the outcomes of the complex model while minimizing the loss of information.
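The global-surrogate idea can be sketched as follows, assuming scikit-learn and synthetic data (nothing here comes from the article). The "black box" is a random forest, and the surrogate is a shallow decision tree fitted to the black box's predictions rather than the true labels:

```python
# Sketch: a global surrogate. A shallow decision tree is trained to
# mimic the predictions of a complex "black box" model, giving an
# approximate global view of how it behaves.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hidden ground truth

black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Fit the surrogate on the black box's *predictions*, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simple surrogate agrees with the black box
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
```

The fidelity score quantifies the "loss of information" mentioned above: the closer it is to 1, the better the simple model stands in for the complex one.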

Explaining a decision locally – The algorithm tries to explain a particular decision that the model made. For example, if the AI model predicts an applicant's credit risk as low, the algorithm provides an explanation of why the model thinks the applicant is at low risk of defaulting on the loan. This can be achieved, for instance, with the local contribution of each feature or by presenting similar examples.
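One very simple way to approximate local feature contributions is by perturbation: replace one feature of the instance with its dataset mean and record how the prediction changes. The sketch below uses scikit-learn and invented feature names; it is a toy stand-in for production methods such as LIME or SHAP, not a real API:

```python
# Sketch: local explanation by perturbation. Each feature of one
# applicant is replaced by its dataset mean; the resulting change in
# predicted risk is taken as that feature's local contribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["income", "debt_ratio", "age"]       # invented names
X = rng.normal(size=(200, 3))
y = (X[:, 1] - X[:, 0] > 0).astype(int)          # risk driven by debt vs income

model = LogisticRegression().fit(X, y)

applicant = X[0]
base = model.predict_proba([applicant])[0, 1]    # predicted risk
contributions = {}
for i, name in enumerate(features):
    perturbed = applicant.copy()
    perturbed[i] = X[:, i].mean()                # neutralize one feature
    contributions[name] = base - model.predict_proba([perturbed])[0, 1]

# Features with the largest magnitude matter most for this one decision
print(contributions)
```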

Inspecting counterfactuals – The algorithm explains an outcome by generating a counterfactual instance: a data point similar to the original but differing in some features, which gives the user insight into the factors that influence the outcome. For example, an applicant's high-risk credit score can be explained with a hypothetical applicant who is scored low risk and whose features are mostly similar, except for a lower debt percentage.
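A counterfactual can be sketched with a naive search: starting from an applicant the model scores as high risk, one feature is nudged step by step until the prediction flips. Everything below (data, feature names, step size) is illustrative, not a real counterfactual library:

```python
# Sketch: a counterfactual explanation. Starting from an applicant the
# model scores as high risk, debt percentage is reduced step by step
# until the prediction flips to low risk, yielding a "what would need
# to change" example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))            # columns: [income, debt_pct]
y = (X[:, 1] > 0.2).astype(int)          # 1 = high risk (synthetic rule)

model = LogisticRegression().fit(X, y)

applicant = np.array([0.0, 1.5])         # scored high risk by the model
counterfactual = applicant.copy()
while model.predict([counterfactual])[0] == 1:
    counterfactual[1] -= 0.05            # lower debt_pct a little each step

print(f"debt_pct {applicant[1]:.2f} -> {counterfactual[1]:.2f} "
      "flips the prediction to low risk")
```

Real counterfactual methods additionally constrain the search so the generated instance stays plausible and changes as few features as possible.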

In this part we learned why XAI is important, what XAI is, and how XAI algorithms work. In the next and final part, we will cover how to design the user experience for XAI.

Resources and further reads:

Generative AI at SAP – an OpenSAP course
SAP Fiori Design Guidelines for Web
UXAI – A visual introduction to Explainable AI for Designers.
Introduction to Explainable AI: Techniques and Design (By Vera Liao)
Building XAI applications with question-driven user-centered design (Blog on Medium by Vera Liao)
Trustworthy AI: How to make artificial intelligence understandable
People+AI Research (PAIR) Guidebook – AI design guidelines from Google
AlphaGo documentary on YouTube