Explaining System Intelligence
One of the guiding design principles for intelligent systems is to empower end users. If we want people to trust machines, we need to share information about the underlying models and the reasoning behind algorithmic results. This matters even more in business applications, where users are held accountable for every decision they make.
It’s now widely accepted that intelligent systems need to come with a certain level of transparency. There’s even a term for it: explainable AI. But that’s just the beginning. As designers, we need to ask ourselves how explainable AI ties in with user interaction. What do we need to think about whenever we explain the results and recommendations that come from built-in intelligence? And how can we make it a seamless experience that feels natural to users?
Does the user always need an explanation?
Before we go into detail on how explanations can be crafted, let’s take a step back and ask ourselves whether we really need to explain everything we show in the UI.
What we’ve been learning from recent user tests is that if the quality of a prediction is high and the stakes are low, users probably won’t expect comprehensive explanations.
Our test scenario: Paul works in a large corporation and has an issue with his emails. When he opens an IT support ticket, the system helps him to pick the right category based on his problem description.
Paul can see the breakdown of the rating for supplier B in comparison to competitors
Mixed: Why is my favorite supplier, XYZ, not on the list?
This is a mixed case. At first glance it seems to be a local question. But Paul needs to understand both the global rules and the local effect to interpret the situation.
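To make the distinction between global and local concrete, here is a minimal Python sketch. It assumes a hypothetical linear scoring model with made-up feature names and weights, not any specific product or algorithm: the global explanation is the model’s weights (they apply to every item alike), while the local explanation is each feature’s contribution to one particular item’s score.

```python
# Hypothetical linear scoring model (illustrative feature names and weights).
WEIGHTS = {"delivery_time": -0.5, "price": -0.3, "quality_rating": 0.8}

def score(item: dict) -> float:
    """Score an item with the linear model."""
    return sum(WEIGHTS[f] * item[f] for f in WEIGHTS)

def explain_global() -> dict:
    """Global explanation: which features matter overall, and how much."""
    return dict(sorted(WEIGHTS.items(), key=lambda kv: abs(kv[1]), reverse=True))

def explain_local(item: dict) -> dict:
    """Local explanation: each feature's contribution to THIS item's score."""
    return {f: WEIGHTS[f] * item[f] for f in WEIGHTS}

supplier = {"delivery_time": 2.0, "price": 3.0, "quality_rating": 4.5}
# score = -0.5*2.0 - 0.3*3.0 + 0.8*4.5 = 1.7 (up to float rounding)
```

A user asking “why is this supplier ranked here?” needs the local view, but can only interpret it correctly if they also grasp the global rules, which is exactly the mixed case described above.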
Example of progressive disclosure
The main elements of the explanation are shown in a concise form on the main screen, with an option to drill down to more detailed information on secondary screens. The benefit of this approach is that users get a simpler, more compact UI and only need to concern themselves with the details if they need them.
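As a rough illustration, progressive disclosure for explanation texts could be modeled as a list of increasingly detailed levels, from the concise summary on the main screen to the full model description on a drill-down screen. The texts and the number of levels below are purely illustrative assumptions:

```python
# Illustrative disclosure levels for one recommendation, from the concise
# main-screen summary (level 0) to full detail on drill-down screens.
EXPLANATION_LEVELS = [
    "Recommended because of a high quality rating.",             # main screen
    "Quality rating 4.5/5 outweighed price and delivery time.",  # first drill-down
    "Model: linear ranking over quality (0.8), price (-0.3), "
    "delivery time (-0.5), based on last year's orders.",        # full detail
]

def explain(level: int = 0) -> str:
    """Return the explanation for the requested disclosure level,
    clamped to the most detailed level available."""
    return EXPLANATION_LEVELS[min(level, len(EXPLANATION_LEVELS) - 1)]
```

How many levels you need, and what goes on each, is a design decision per use case rather than a property of the model.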
How can I apply all this to my own application?
You might be asking yourself how many levels of progressive disclosure you need to design, and what kind of information you need to offer at each level. This will depend largely on your use case, persona, and chosen level of automation, so there’s no universal pattern. However, the questions below might help you to understand the scope of your own explainable AI design requirements, or even prompt you to explore completely new ideas.
- Does the user expect an explanation?
If the risks of an action are low and the results can easily be rolled back, users are not normally interested in seeing an explanation of the system’s proposal.
- What type of explanation can you provide?
If the system generates a list of items using a specific machine learning algorithm, we have at least two things to explain: the model in general (global explanation), and the application of the model to each line item (local explanation).
- Which level of explanation is expected in which context by which user?
Depending on the use case, users can require different types of information in different contexts. The role of the user is key, and different user roles may require different types of explanatory information. If you need more than one level of detail, consider using the concept of progressive disclosure for explanations.
- Are there other interactions that might extend explanations?
Some interactions are natural extensions of explanations. For example, users who invest time in understanding the logic of the system might be interested in providing feedback. Or users exploring a result at item level (local interpretability) might be interested in a comparison of the results on this level.
- Is there a lifecycle for an explanation, and how might it look?
If you are using progressive disclosure, ask yourself whether you need a time dimension. We assume that the need for repeated (static) explanations of the model can decrease or even disappear over time as the user gains more experience with the system. For example, explanations at the global interpretability level could disappear or be hidden over time, once the user understands the main principle of the underlying algorithm.
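One simple way to sketch such a lifecycle is a per-user view counter: once a user has seen the global model explanation a few times, it is collapsed by default. The class name and the threshold value below are illustrative assumptions, not a prescribed implementation:

```python
from collections import defaultdict

class ExplanationLifecycle:
    """Collapse the global explanation after a user has seen it often enough."""

    def __init__(self, threshold: int = 3):  # assumed number of views
        self.threshold = threshold
        self.views = defaultdict(int)  # user id -> times the explanation was shown

    def show_global_explanation(self, user_id: str) -> bool:
        """Record a visit and report whether the global explanation
        should still be expanded for this user."""
        self.views[user_id] += 1
        return self.views[user_id] <= self.threshold
```

In a real product, the trigger might instead be an explicit “got it” dismissal or a measure of the user’s engagement with the explanation, but the principle is the same: static explanations can fade as familiarity grows.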
In a nutshell
It goes without saying that you’ll need to explain your overall AI logic to your users. But this alone won’t be enough to make AI part of an engaging and empowering user experience that adds obvious value to your solution. If you want users to embrace the new intelligent capabilities, your explanations will need to be carefully designed as an integrated part of the UI. And this means telling users exactly what they need to know in their specific context – just enough information at just the right time.
Curious to learn more?
Of course, there’s much more to explainable AI than we’ve covered so far. What are the challenges for writing explanation texts? What is the role of explainable AI in building trust in intelligent systems? And how can explainable AI be integrated with user feedback functionality? We will be coming back to these topics in our upcoming posts.
In the meantime, we’d be happy to hear about your own experiences and the challenges you face when designing explainable AI systems.
So stay tuned and please feel free to add your thoughts in the comments.
Special thanks to Susanne Wilding for reviewing and editing this article.