One of the guiding design principles for intelligent systems is to empower end users. If we want people to trust in machines, we need to share information about the underlying models and the reasoning behind the results of algorithms. This matters even more in business applications, where users are held accountable for every decision they make.

By now, it’s widely accepted that intelligent systems need to come with a certain level of transparency. There’s even a new term for it: explainable AI. But that’s just the beginning. As designers, we need to ask ourselves how explainable AI ties in with the user interaction. What do we need to think about whenever we explain the results and recommendations that come from built-in intelligence? And how can we make it a seamless experience that feels natural to users?

Does the user always need an explanation?


Before we go into detail on how explanations can be fashioned, let’s take a step back and ask ourselves if we really need to explain everything we show on the UI.

What we’ve been learning from recent user tests is that if the quality of a prediction is high and the stakes are low, users probably won’t expect comprehensive explanations.

Our test scenario: Paul works in a large corporation and has an issue with his emails. When he opens an IT support ticket, the system helps him to pick the right category based on his problem description.

Explaining input recommendations to the user

We did our best to make the system recommendation as transparent as possible. But in the end, none of our test participants were interested in the explanation. When we investigated this further, we came up with three factors that explained the user response:

  • Low level of risk. The consequences of selecting the wrong category were not that dramatic. A wrong choice can easily be changed, corrected at a later stage, or even ignored.

  • High prediction quality. The quality of the predictions offered by the system was good enough to ensure that all users found an appropriate category in the top 3 proposals.

  • Good system performance. It was quick and easy for users to correct the chosen category, or even to experiment with the input on the fly (“learning by doing”).


In short, if users can easily eliminate or circumvent the negative impact of inaccurate system recommendations, they might not be that interested in explanations. But what are we going to do in other situations?

What do I need to explain?


To start with, we need to break down our explanations into “global” and “local” components. AI experts call this the scope of interpretability for a model:

  • Global interpretability helps the user to understand the “big picture” – the entire logic of the model.

  • Local interpretability helps the user to understand small regions of the data (such as clusters of records, or single records).
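
To make this distinction more tangible, here is a minimal Python sketch (purely illustrative – the model, feature names, and weights are invented, not taken from any real product). For a simple weighted scoring model, the global explanation is the set of weights that defines the model’s overall logic, while the local explanation is the per-feature contribution to a single record’s score:

```python
# Minimal illustration of global vs. local interpretability for a simple
# weighted scoring model. Feature names and weights are hypothetical.

FEATURE_WEIGHTS = {            # global explanation: the model's overall logic
    "price_level": 0.40,
    "delivery_reliability": 0.35,
    "quality_rating": 0.25,
}

def score(record: dict) -> float:
    """Overall score for one record (feature values normalized to 0..1)."""
    return sum(w * record[f] for f, w in FEATURE_WEIGHTS.items())

def local_explanation(record: dict) -> dict:
    """Per-feature contribution to this one record's score."""
    return {f: w * record[f] for f, w in FEATURE_WEIGHTS.items()}

example_record = {"price_level": 0.9, "delivery_reliability": 0.6, "quality_rating": 0.8}

print("Global explanation (model weights):", FEATURE_WEIGHTS)
print("Local explanation (this record):", local_explanation(example_record))
print("Score:", round(score(example_record), 2))
```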


Example: Let’s get back to Paul, who in this scenario works as a purchaser in a large company. Paul needs to find a new supplier for a material. Our intelligent system can propose a ranked list of suppliers for this specific material.


Example of a ranked supplier list


Here are some basic questions Paul might ask himself when he looks at this list:

  1. Why do I see only these suppliers? What are the criteria for inclusion vs. exclusion in the ranking list?

  2. Why does supplier B have this score?

  3. Why is my favorite supplier, XYZ, not on the list?


Global Scope: Why do I see only these suppliers?  


This question is an example of global interpretability scope. Paul wants to understand the logic behind ranking on a general (global) level to gain an initial sense of trust, i.e. is the system competent enough to help me with my task?


Paul understands the basic components of supplier ranking and their relative importance. He may want to adjust them.
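
As a rough idea of what could sit behind such a screen, here is a hypothetical sketch of how the global ranking criteria and their relative importance might be represented, and how an adjustment by the user could work (the criteria names and weights are invented for illustration, not a real ranking model):

```python
# Hypothetical sketch: exposing the global ranking criteria to the user,
# including the option to adjust their relative importance.

ranking_criteria = {
    "price": 0.40,
    "delivery_time": 0.30,
    "quality_score": 0.20,
    "sustainability": 0.10,
}

def adjust_weight(criteria: dict, name: str, new_weight: float) -> dict:
    """Set one weight and re-normalize so that all weights still sum to 1."""
    updated = dict(criteria)
    updated[name] = new_weight
    total = sum(updated.values())
    return {k: round(v / total, 3) for k, v in updated.items()}

# Paul decides that delivery time matters more for this material:
print(adjust_weight(ranking_criteria, "delivery_time", 0.50))
```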




Local Scope: Why does supplier B have this score?


Paul wants to understand the details of the ranking system based on the ranking for a concrete supplier. This may be a supplier he already knows, giving him another chance to verify the competence of the system. Or it could be a supplier that Paul hasn’t dealt with before. In this case, he really wants to learn something new from the system.



Paul can see the breakdown of the rating for supplier B in comparison to competitors
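
Behind a detail view like this, a local explanation might look something like the following sketch: the per-criterion contributions for one supplier, compared with the average of the other suppliers on the list (all names, weights, and ratings are invented):

```python
# Hypothetical sketch: a local explanation for one supplier, showing how each
# criterion contributes to its score and how that compares to the competitors.

WEIGHTS = {"price": 0.40, "delivery_time": 0.30, "quality_score": 0.30}

suppliers = {
    "Supplier A": {"price": 0.7, "delivery_time": 0.9, "quality_score": 0.8},
    "Supplier B": {"price": 0.9, "delivery_time": 0.6, "quality_score": 0.7},
    "Supplier C": {"price": 0.5, "delivery_time": 0.8, "quality_score": 0.9},
}

def contributions(ratings: dict) -> dict:
    """Weighted contribution of each criterion to a supplier's score."""
    return {c: round(WEIGHTS[c] * ratings[c], 3) for c in WEIGHTS}

def explain(name: str) -> None:
    own = contributions(suppliers[name])
    others = [contributions(r) for s, r in suppliers.items() if s != name]
    for criterion, value in own.items():
        avg = sum(o[criterion] for o in others) / len(others)
        print(f"{criterion}: {value:.2f} (list average without {name}: {avg:.2f})")

explain("Supplier B")
```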

Mixed: Why is my favorite supplier, XYZ, not on the list?


This is a mixed case. At first glance it seems to be a local question. But Paul needs to understand both the global rules and the local effect to interpret the situation.

Paul can search for supplier XYZ and check the rating, as he did previously for supplier B. In the detail view, he sees that his favorite supplier does not have favorable conditions for the material he needs to purchase.
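
One way to think about this mixed case in code: the answer has to combine the global rules (which suppliers are included at all, and where the cut-off lies) with the local result for supplier XYZ. The rules and values in this sketch are purely illustrative:

```python
# Hypothetical sketch: answering the mixed question "why is supplier XYZ not on
# the list?" requires both the global rules and the supplier's local score.

SCORE_CUTOFF = 0.70          # global rule: only suppliers above this score are listed
REQUIRED_MATERIAL = "M-100"  # global rule: supplier must offer the requested material

def explain_exclusion(supplier: dict) -> str:
    if REQUIRED_MATERIAL not in supplier["materials"]:
        return "Excluded by a global rule: the supplier does not offer this material."
    if supplier["score"] < SCORE_CUTOFF:
        return (f"Included by the global rules, but its local score "
                f"({supplier['score']:.2f}) is below the cut-off of {SCORE_CUTOFF:.2f}.")
    return "The supplier should appear on the list."

supplier_xyz = {"name": "XYZ", "materials": ["M-100", "M-200"], "score": 0.55}
print(explain_exclusion(supplier_xyz))
```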

How much can I explain at once?



Paul is overwhelmed by the explanations provided


Providing explanations to end users is a perfect scenario for applying progressive disclosure – the design technique we use to avoid overwhelming the user with too much information at once.

Let’s explore how it could work in our IT ticket example (assuming an explanation is required):



Example of progressive disclosure

The main elements of the explanation are shown in a concise form on the main screen, with an option to drill down to more detailed information on secondary screens. The benefit of this approach is that users get a simpler, more compact UI, and only need to concern themselves with the details when they actually want them.
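
As a rough sketch of the idea, an explanation could be modeled as a set of disclosure levels, where only the first level is shown by default and deeper levels are revealed on request (the level texts below are invented for the IT ticket example and are not product texts):

```python
# Hypothetical sketch: structuring one explanation into progressive-disclosure
# levels. The UI shows only level 0 by default; deeper levels are revealed on
# request, e.g. via a "Why?" link or a details dialog.

explanation_levels = [
    # level 0: always visible on the main screen
    "Category 'E-Mail' was suggested based on your problem description.",
    # level 1: shown after the user asks "Why this category?"
    "Your text mentions 'cannot send mails', which closely matches past "
    "tickets resolved under 'E-Mail'.",
    # level 2: shown in a details view for users who want the full picture
    "The suggestion comes from a text-classification model trained on "
    "resolved tickets; the top 3 categories are shown with their confidence.",
]

def disclose(levels: list, depth: int) -> list:
    """Return all explanation texts the user has asked to see so far."""
    return levels[: depth + 1]

print(disclose(explanation_levels, 0))  # default: concise explanation only
print(disclose(explanation_levels, 2))  # user drilled all the way down
```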

How can I apply all this to my own application?


You might be asking yourself how many levels of progressive disclosure you need to design, and what kind of information you need to offer at each level. This will depend largely on your use case, persona, and chosen level of automation, so there’s no universal pattern. However, the questions below might help you to understand the scope of your own explainable AI design requirements, or even prompt you to explore completely new ideas.

  • Does the user expect an explanation?
    If the risks of an action are quite low and the results can easily be rolled back, users are not normally interested in seeing an explanation of the system proposal.

  • What type of explanation can you provide?
    If the system generates a list of items using a specific machine learning algorithm, we have at least two things to explain: the model in general (global explanation), and the application of the model to each line item (local explanation).

  • Which level of explanation is expected in which context by which user?
    Depending on the use case, users may require different information in different contexts. The user’s role is key: different roles may need different types of explanatory information. If you need more than one level of detail, consider using the concept of progressive disclosure for explanations.

  • Are there other interactions that might extend explanations?
    Some interactions are natural extensions of explanations. For example, users who invest time in understanding the logic of the system might be interested in providing feedback. Or users exploring a result at item level (local interpretability) might be interested in a comparison of the results on this level.

  • Is there a lifecycle for an explanation, and how might it look?
    If you are using progressive disclosure, ask yourself whether you need a time dimension. We assume that the need for repeated (static) explanations of the model can decrease or even disappear over time as the user gains more experience with the system. For example, explanations on the global interpretability level could disappear or be hidden over time, once the user understands the main principle of the underlying algorithm (see the sketch after this list).
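
As an illustration of such a lifecycle, here is a minimal sketch in which the global explanation is collapsed once the user has viewed it a few times or dismissed it explicitly (the threshold and the state handling are assumptions, not a prescribed pattern; in a real application this state would live in the user’s settings):

```python
# Hypothetical sketch: letting a global explanation fade out of the UI once the
# user has engaged with it often enough.

VIEWS_BEFORE_COLLAPSE = 3

def show_global_explanation(user_state: dict) -> bool:
    """Show the full global explanation until the user has seen it a few times
    or has dismissed it explicitly; afterwards only offer it on demand."""
    if user_state.get("dismissed_global_explanation"):
        return False
    return user_state.get("global_explanation_views", 0) < VIEWS_BEFORE_COLLAPSE

state = {"global_explanation_views": 1, "dismissed_global_explanation": False}
print(show_global_explanation(state))   # True: still shown by default
state["global_explanation_views"] = 5
print(show_global_explanation(state))   # False: now only available on demand
```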


In a nutshell


It goes without saying that you’ll need to explain your overall AI logic to your users. But this alone won’t be enough to make AI part of an engaging and empowering user experience that adds obvious value to your solution. If you want users to embrace the new intelligent capabilities, your explanations will need to be carefully designed as an integrated part of the UI. And this means telling users exactly what they need to know in their specific context – just enough information at just the right time.

Curious to learn more?


Of course, there’s much more to explainable AI than we’ve covered so far. What are the challenges for writing explanation texts? What is the role of explainable AI in building trust in intelligent systems? And how can explainable AI be integrated with user feedback functionality? We will be coming back to these topics in our upcoming posts.

In the meantime, we’d be happy to hear about your own experiences and the challenges you face when designing explainable AI systems.

So stay tuned and please feel free to add your thoughts in the comments.

Special thanks to Susanne Wilding for reviewing and editing this article.