On Responsible AI: SHAP of you
Interesting fact: SHAP (more on this below) and Ed Sheeran’s “Shape of You” (needs no introduction) were both released in 2017.
An even more interesting fact: Christian Klein recently spoke about Responsible AI, emphasizing that while AI can address the greatest challenges of our time, such as minimizing carbon footprints and scaling aid, we need to ensure that it is used in a fair, transparent, and compliant way. You can find more details on SAP’s Global AI Ethics Policy here.
This blog might be interesting to you if:
- You want to read more about responsible AI, and how it can be delivered through explainable AI
- You would like to see some examples and watch-outs in using explainable AI, specifically SHAP (more on this below)
- Most importantly, you are curious how you can use and extend SHAP explanations in enterprise use cases
1. Explainability Explained
Explainable AI can be defined as being able to understand the predictions made by AI. A sample case is being able to see why a predictive model has assessed that a group of students will most likely fail the school year, based on assessments of school work, attendance, background, etc. Responsible AI, on the other hand, is about identifying what could go wrong early in the design phase, i.e. a premortem. Back to the student example: the initial use case could cause discrimination or even worsen the situation for the students, so it could be changed into the automated creation of study materials or activities that students may use to improve their grades. Explainable AI can also be responsible AI when values such as sustainability and human-centric design are considered early in the process.
There are tools that provide explainable AI, with SHAP (SHapley Additive exPlanations) being one approach to explaining the outcomes of predictions through correlations. With SHAP, think of each prediction as a game, with the variables (e.g. duration, quantity, frequency, etc.) as the players. The SHAP output for an observation shows which players had the most impact on the game result (e.g. a positive or negative outcome in a binary classification).
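The “game” analogy can be made concrete by computing exact Shapley values for a toy scoring function from scratch. This is a minimal illustrative sketch of the attribution idea that the shap library implements efficiently; the feature names, weights, and scoring function below are invented for illustration only:

```python
from itertools import combinations
from math import factorial

# Toy "game": a churn score depending on three features. When a feature is
# absent from a coalition, a baseline value of 0 is substituted.
# All names and coefficients here are hypothetical.
def churn_score(features):
    duration = features.get("duration", 0.0)
    quantity = features.get("quantity", 0.0)
    frequency = features.get("frequency", 0.0)
    return (0.5 * duration + 0.3 * quantity + 0.2 * frequency
            + 0.1 * duration * frequency)

observation = {"duration": 1.0, "quantity": 2.0, "frequency": 3.0}
players = list(observation)
n = len(players)

def payoff(coalition):
    # The "value" of a coalition: the score with only those features present.
    return churn_score({p: observation[p] for p in coalition})

# Exact Shapley value: each player's marginal contribution, averaged over
# all orderings in which the players can join the game.
shapley = {}
for player in players:
    others = [q for q in players if q != player]
    value = 0.0
    for size in range(n):
        for coalition in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            value += weight * (payoff(coalition + (player,)) - payoff(coalition))
    shapley[player] = value

# Efficiency property: attributions sum to f(all features) - f(baseline).
total = sum(shapley.values())
```

Note how the interaction term (duration × frequency) is split evenly between the two interacting players, on top of their individual linear contributions, which is exactly the kind of per-observation attribution a SHAP plot visualizes.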
An example of a SHAP plot on customer churn, i.e. whether a customer will discontinue their phone plan, can be seen below (image credit: Yifei Huang). You can click on the image to enlarge the text. In summary, the SHAP plot shows:
- Scaled values (0 as lowest, 1 as highest) of the variables in the dataset, i.e. pink for high and blue for low. For example, if the highest value of total_day_charge is 100, it will be plotted as a pink dot, and if the lowest value is 1, it will be plotted as a blue dot. Values in between follow the color gradient shown on the right
- Variable values that influence churn, by looking at the dots on the positive X-axis (on the right)
- Variable values that influence not churning, by looking at the dots on the negative X-axis (on the left)
- As an example, high values of total_day_charge (pink dots) are correlated to customer churn (plotted on the right, positive X-axis)
There are great SAP Blogs that talk about SHAP in technical detail (see section 3). The remainder of this blog talks about the implications of using SHAP and our responsibility to build on top of it to provide the best outcome to our customers and achieve responsible AI.
So how does SHAP help in achieving responsible AI?
2. Explainability Encountered
When I was working on a customer churn business problem for a global business, one of the initial models we created had country and region as variables. We anticipated that this would be an issue, as anecdotally we had observed that one region showed high churn rates. Our testing backed this suspicion: the model was heavily biased towards predicting that engagements associated with a certain country or region would churn. We removed variables such as country and region, as these are areas that we cannot take action on, and focused instead on areas such as service availability, ticket resolution times, utilization of products, etc.
SHAP helped us identify the variables that were introducing bias into the predictive model, i.e. areas that neither the customer nor the service provider can take action on.
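The remediation step itself is simple once the offending variables are known. A minimal sketch, with hypothetical engagement records and field names invented for illustration; the point is only that non-actionable variables are dropped before the model is retrained:

```python
# Hypothetical engagement records. Variables like country and region were
# flagged (via SHAP) as dominating predictions, but they are non-actionable,
# so we remove them from the feature set before retraining.
engagements = [
    {"country": "X", "region": "EMEA", "ticket_resolution_hours": 12.0,
     "product_utilization": 0.9, "churned": 0},
    {"country": "Y", "region": "APJ", "ticket_resolution_hours": 48.0,
     "product_utilization": 0.4, "churned": 1},
]

NON_ACTIONABLE = {"country", "region"}

def drop_non_actionable(record):
    # Keep only features that the customer or provider can act on.
    return {k: v for k, v in record.items() if k not in NON_ACTIONABLE}

features = [drop_non_actionable(r) for r in engagements]
```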
3. Explainability Enumerated
I have discussed understanding and evaluating SAP AI/ML tools in my previous blog posts. Let us now deep-dive into how each of the tools provides explainable AI, specifically via SHAP.
SHAP is accessible via libraries in SAP AI/ML tools. There are a few ways you can access this as described in the blog posts below:
|Tool / Solution|Blog Article|Author|
|---|---|---|
|HANA APL, also applicable in SAP Data Intelligence|SHAP-explained models with APL| |
| |Hands-On Tutorial: APL in SAP HANA Cloud| |
|SAP Analytics Cloud (SAC)|Explanations For Classification Models in SAC| |
| |Explanations for Regression Models in SAC| |
| |Prediction Explanations for Classification Predictive Models in SAP Analytics Cloud (in Chinese)| |
4. Explainability Extended
As seen in reports such as this one from Forbes, dependency on AI alone, without human judgment, can have negative impacts in areas such as Diversity, Equity, and Inclusion (DEI). We also need to be cautious because tools such as SHAP provide insight into correlations, not causation.
To better understand the causation behind the variables involved, consider performing appropriate causal experiments. These experiments take time, but it is risky to make important decisions using correlation results alone.
For example, suppose the SHAP output of a model shows that faster ticket resolution times are correlated with better customer retention, even though current resolution times meet Service Level Agreements (SLAs) >99% of the time. We want to be sure that in the real world, faster ticket resolution times can indeed help retain customers.
An A/B test on customers up for renewal in the next 2 quarters can be performed in this case. This type of test splits the identified customers into mutually exclusive groups to establish causality. In the example below, assume we improve resolution times to half the current average:
- Group A is predicted to churn, with intervention to improve ticket resolution times
- Group B is predicted to churn, with no intervention to improve ticket resolution times
- Group C is predicted to not churn, with intervention to improve ticket resolution times
- Group D is predicted to not churn, with no intervention to improve ticket resolution times
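The four-group split above amounts to a stratified random assignment: within each prediction stratum (predicted to churn or not), accounts are randomly assigned to intervention or control. A minimal sketch; the account records, the simulated predictions, and the 50/50 split are illustrative assumptions:

```python
import random

random.seed(0)

# Hypothetical accounts up for renewal, each with a churn prediction from
# the model (simulated at random here purely for illustration).
accounts = [{"id": i, "predicted_churn": random.random() < 0.5}
            for i in range(1000)]

# Within each prediction stratum, randomly assign intervention vs. control.
for account in accounts:
    intervention = random.random() < 0.5
    if account["predicted_churn"]:
        account["group"] = "A" if intervention else "B"
    else:
        account["group"] = "C" if intervention else "D"
```

Randomizing the intervention within each stratum is what lets us attribute any difference in churn between A and B (or C and D) to the intervention itself rather than to a confounding variable.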
If after applying the action to prevent churn, i.e. improving ticket resolution times, we get the following:
- Group A has a 40% churn after 2 quarters
- Group B has an 80% churn after 2 quarters
- Group C has a 5% churn after 2 quarters
- Group D has a 5% churn after 2 quarters
This shows that the action, i.e. improving ticket resolution times, cuts churn in half in the groups identified as likely to churn, i.e. Group A (40%) versus Group B (80%). Improving resolution times for customers predicted to not churn made no difference, i.e. Groups C and D. This establishes the causal effect of improved ticket resolution times on reducing customer churn, and it opens up further analyses of Groups C and D to identify other causes of retention.
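The lift arithmetic can be checked directly; the churn rates below are the hypothetical figures from the example:

```python
# Observed churn after 2 quarters, per group (hypothetical example figures).
churn_rate = {"A": 0.40, "B": 0.80, "C": 0.05, "D": 0.05}

# Relative churn reduction among accounts predicted to churn
# (intervention group A vs. control group B).
lift_predicted_churn = (churn_rate["B"] - churn_rate["A"]) / churn_rate["B"]

# Among accounts predicted not to churn, the intervention made no difference
# (intervention group C vs. control group D).
lift_predicted_retain = (churn_rate["D"] - churn_rate["C"]) / churn_rate["D"]

print(lift_predicted_churn)   # 0.5, i.e. churn halved by the intervention
print(lift_predicted_retain)  # 0.0, i.e. no effect in this stratum
```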
In the sample scenario, it is not sustainable to cut already >99% SLA-compliant ticket resolution times in half across the global business, yet that is what the SHAP output alone might suggest. Instead, adding capacity to improve ticket resolution times only for customers identified as likely to churn, 2 quarters before their renewal, focuses the effort and is a more responsible approach.
This is a simplified scenario to emphasize that explainability is not only about the SHAP output. The setup is also shown in the diagram below.
I hope this blog has given you an overview and concrete examples of responsible and explainable AI, specifically in the SAP context. Understanding correlations is just the start of the journey in providing explainable AI. Identifying causation is not only rewarding but can also drive business value through AI.
There are other explainable AI tools such as LIME. Do share your views and experiences in using explainable AI in the comments section. If you have questions or suggestions, I would appreciate it if you could post them as well.
Do follow my profile, Leo Jacinto Francia, for upcoming posts on Data Science, AI, and ML.
Invariably stochastically yours,
Hi Leo, thanks very much for your different blogs.
I am impressed by the content you created in the SAP community for the past few weeks, kudos!
As you mention, SAC Smart Predict does offer the feature around prediction explanations, which corresponds to our simple & straightforward way of offering SHAP values to business analysts to leverage prediction outcomes in SAC Dashboards.
In addition to the blogs you listed, just wanted to bring your attention to this one from David SERRE which describes how prediction explanations can be created & consumed when using SAC regression models - see https://blogs.sap.com/2021/07/22/prediction-explanations-for-regression-models-in-sap-analytics-cloud/.
Keep up delivering quality blogs!
Thanks Antoine CHABERT! I have added the Regression blog from David above; somehow it did not appear in my initial search (which I normally do to avoid creating duplicate content). Glad to contribute.
First off, I really like the title, very creative thinking there.
I think that explainable AI is something that not many of us here have really touched, but it's a crucial piece for AI systems where either the understanding of the results is critical (e.g. medical diagnosis), or the results concern sensitive topics (e.g. demographics). Therefore, I think it's really nice to see an article about explainable AI here.
Thanks, YX! It is good to know my style of writing about AI/ML is well-received, especially by AI/ML practitioners such as yourself. Having seen potential discrimination in a model first-hand has encouraged me to share about explainability more.