Communication target: This article kicks off an ongoing series of short articles for https://community.sap.com, https://experience.sap.com, and https://medium.com about my work on “Explainable-AI”. It lays out how I first came into contact with machine-learning as a designer, and how I approach the challenges that come with specifying and designing intelligent experience concepts. The target audience includes professionals in the fields of design, user experience, and machine-learning.
My profile – I studied visual communication with an emphasis on digital media, graduating in 2015 with a B.A. from the University of Pforzheim. After that I gathered professional experience in various areas such as application design, illustration, brand design, photography, and filmmaking, both as an employee and as a freelancer. Before I joined SAP in 2016, I worked for Seeburger AG, where I first became familiar with professional enterprise software. I then worked as an application designer, mainly on financial but also on manufacturing and utility applications. Since 2019, I have owned the UX concept for Explainable-AI (XAI) alongside Joachim Sander and the SAP Fiori for AI team.
UX Designer Meets AI
Hello fellow designer, digital professional, or SAP colleague. I am Alex, and since 2016 I have been working for SAP as a UX designer. In this role, it is my duty to build empathy with our users and to translate their needs, requirements, and goals into a, hopefully, delightful and functional product design. A good portion of this – and an aspect I very much love about my profession – is the need to constantly learn and extend my knowledge about the domain and application I am designing a user experience for.
I started out designing business applications in financial services and came across projects in the fields of manufacturing and utilities in my first few years with SAP. Somehow, I then landed in the domain of artificial intelligence (AI), shifting away from product focus towards interaction concept development and research. There, I found myself in a very large and increasingly popular, but by no means new, scientific field that has existed since the days of Alan Turing in the middle of the last century. Anyway, I was appointed to contribute to the newly founded team “SAP Fiori for AI”, so it was time to get to grips with the subject, right?
When you read this, I assume you probably have a background in UX yourself and at least a genuine interest in this subject as well as AI. You might wonder what a designer has to do with AI or, more specifically, with machine-learning. So, what is my stake as a designer in this domain? More on that question in the course of this article.
First Things First
What is intelligence anyway? You will find numerous definitions and descriptions for various types of “intelligence” based on how good an individual is at recalling learned knowledge, or how skilled one is in applying social or emotional competencies, but no generally accepted scientific consensus. However, a recurring theme in conceptualizations of intelligence is the ability to gather information, memorize it, and learn from it. Those are the basic abilities required for cognitive skills such as problem-solving and facets of it including creativity, critical thinking, and strategic planning. In machine-learning, scientists try to mimic human intelligence by inventing algorithms which can learn from experience (data) and form new knowledge based on this data to create a “model”. This model can recall what was learned to make predictions about similar situations and infer something.
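To make this loop of “learn from experience, form a model, infer” concrete, here is a deliberately tiny sketch in plain Python, with no ML library involved. The “experience” is a handful of made-up (size, rent) pairs, the “model” is a single learned coefficient, and “inference” is applying that coefficient to an unseen input. All names and numbers are illustrative assumptions, not real data.

```python
# Minimal "learning from experience": fit a one-parameter model y = w * x
# by least squares, then use it to infer a value for an unseen input.

def train(examples):
    """Learn the coefficient w that best maps x to y over all examples."""
    num = sum(x * y for x, y in examples)
    den = sum(x * x for x, _ in examples)
    return num / den  # closed-form least-squares solution for y = w * x

def infer(w, x):
    """Apply the learned model to a new, unseen input."""
    return w * x

# "Experience": apartment size in m2 -> monthly rent in EUR (made-up numbers)
data = [(30, 600), (50, 1000), (80, 1600)]
w = train(data)
print(infer(w, 65))  # predicted rent for an unseen 65 m2 apartment -> 1300.0
```

Real machine-learning models have millions of parameters instead of one, but the principle is the same: the model is nothing more than the encoded regularities of its training data.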
The self-imposed long-term goal of AI research is to achieve a level of cognitive intelligence which equals that of a human. This is called “artificial general intelligence” (AGI), sometimes also referred to as “strong AI”, depending on the definition. Still, science is far from reaching that goal, and today’s applications utilize comparatively simple machine-learning algorithms to simulate individual cognitive processes; these are classified as examples of “weak” or “narrow” AI. But even though they are not human-like, they still find very practical applications in consumer and professional software, and that is the kind of intelligence we talk about when we design for today’s applications. From anomaly detection in access governance to association in object recognition or probability models for risk assessment, with machine-learning we can create smarter and more efficient ways to solve problems or get time-consuming, tedious tasks done.
How Does it Work?
I would like to clarify the most basic aspects of machine-learning, so that you will be able to distinguish between the various terms and concepts a little better.
Define the objective

The process of learning requires the assessment of success criteria: a machine-learning algorithm can only learn if it can optimize towards a certain objective. The definition of this objective depends on the use case: which results shall be rewarded, and conversely, which are to be penalized? In spam detection we want to catch as much spam as possible, but no business emails. In medical diagnosis, an undiscovered illness is worse than a healthy patient receiving treatment. It must therefore also be taken into consideration how “costly” a false positive prediction is.
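As a sketch of what “costly” can mean in practice, the snippet below compares two hypothetical spam filters not by raw accuracy but by a business-defined cost matrix, in which losing a business email (a false positive) is assumed to be ten times worse than letting one spam mail through (a false negative). The filters, the cost weights, and all labels are illustrative assumptions.

```python
# Evaluate classifiers by business cost, not by accuracy alone.
# Convention: positive = "spam". A false positive means a business email
# was wrongly flagged as spam -- assumed here to be 10x worse than a miss.

COST_FALSE_POSITIVE = 10  # business email lost to the spam folder
COST_FALSE_NEGATIVE = 1   # spam mail slips into the inbox

def total_cost(y_true, y_pred):
    """Sum the business cost of all wrong decisions."""
    cost = 0
    for truth, pred in zip(y_true, y_pred):
        if pred == "spam" and truth == "ham":
            cost += COST_FALSE_POSITIVE
        elif pred == "ham" and truth == "spam":
            cost += COST_FALSE_NEGATIVE
    return cost

truth    = ["spam", "spam", "ham", "ham", "ham", "spam"]
eager    = ["spam", "spam", "spam", "ham", "ham", "spam"]  # catches all spam, 1 false positive
cautious = ["spam", "ham",  "ham", "ham", "ham", "ham"]    # never flags ham, 2 misses

print(total_cost(truth, eager))     # -> 10
print(total_cost(truth, cautious))  # -> 2
```

Measured by accuracy alone, the eager filter looks better; measured by the cost the business actually cares about, the cautious one wins. This is exactly the kind of trade-off the objective definition has to settle before any training starts.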
Select the appropriate method and algorithm
Algorithms are the tool set for the AI engineer or data scientist. Depending on the point of view, those algorithms are differentiated by how they learn from examples (e.g. supervised or unsupervised learning, etc.), their theoretical foundation (e.g. bayesian, symbolic, connectionist, etc.), or how readable or respectively transparent they are for other humans (e.g. white-box, black-box, etc.).
Train your model
If the algorithms are the tools to encode information, then the model is the outcome of this encoding, and the whole process is called training. The data used for training is a subset of all the historic information we have collected about the subject we want to model, because we also need untouched samples to evaluate the performance of the model later.
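The “subset for training, untouched samples for later” idea can be sketched in a few lines of plain Python. The 80/20 ratio and the fixed seed are illustrative assumptions; real projects pick these per use case.

```python
import random

def train_test_split(samples, test_ratio=0.2, seed=42):
    """Shuffle the data and hold out a share for later evaluation."""
    shuffled = samples[:]                      # don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

history = list(range(100))                     # stand-in for 100 collected records
train_set, test_set = train_test_split(history)
print(len(train_set), len(test_set))           # -> 80 20
```

The shuffle matters: historic data often arrives sorted by time or category, and a naive “first 80%” split would train the model on a systematically different slice than it is tested on.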
Test your model
Testing is done with the reserved data samples the model hasn’t seen yet (see “cross-validation”). After the test run, the data scientist has multiple options to improve the performance of the model: refine the data inputs used, change the architecture of the algorithm, or change the conditions under which the model is trained (see “hyperparameter tuning”).
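A common variant of this evaluation is k-fold cross-validation: the data is split into k slices, and each slice takes a turn as the held-out test set while the rest is used for training. Below is a minimal sketch of the fold bookkeeping only (the actual training and scoring depend on the model); sample counts are illustrative.

```python
def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    fold_size = n_samples // k
    indices = list(range(n_samples))
    for i in range(k):
        test_idx = indices[i * fold_size:(i + 1) * fold_size]
        train_idx = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train_idx, test_idx

# With 10 samples and 5 folds, every sample is tested exactly once.
folds = list(k_fold_indices(10, 5))
print(len(folds))       # -> 5
print(folds[0][1])      # -> [0, 1]  (the first test fold)
```

Averaging the score over all k runs gives a more robust performance estimate than a single split, which is why data scientists lean on it when comparing hyperparameter settings.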
Apply the model
Once you have proven your model to be sufficiently accurate and deployed it in your application, you can use it to make predictions, for example about the value of real estate, or to estimate the likelihood of someone having a certain illness given the observed evidence. The common term for this is “inference”, because our model generalizes over all the examples we have trained it with in order to conclude the most likely answer to our current question.
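To illustrate inference in its simplest possible form, here is a hypothetical one-nearest-neighbour “model” over made-up real-estate records: asked about an unseen property, it answers by generalizing from the closest example it was trained on. Data and prices are invented for illustration.

```python
# "Inference": generalize from stored examples to answer a new question.

def predict_price(model, size_m2):
    """Infer the price of an unseen property from its closest known example."""
    nearest = min(model, key=lambda record: abs(record[0] - size_m2))
    return nearest[1]

# The "trained model": (size in m2, price in EUR) examples
model = [(40, 120_000), (75, 210_000), (120, 390_000)]
print(predict_price(model, 80))  # -> 210000 (closest known example: 75 m2)
```

Even this toy shows the essential property of inference the article keeps returning to: the answer is a plausible generalization from finite evidence, not a looked-up fact.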
Monitor the model

Once the model has been deployed, the customer will want to see their investment prove beneficial. Monitoring and benchmarking are important to know whether the model actually performs as it did in the evaluation phase. It is the task of the citizen data scientist and the business expert to apply appropriate KPIs for that purpose. Various objectives can be measured: How many unresolved tasks can I clear with this model? Does the model make decisions that are more accurate than, as accurate as, or less accurate than those of my employees? How many of the model’s decisions result in an error? The ability to track those performance indicators, and a threshold to compare them against, are both required to properly define what success is. Here we return to the business objectives we had to define in step one of this process.
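A minimal sketch of such a KPI check, under invented numbers: the error rate of logged production decisions is compared against a baseline threshold, such as the error rate previously measured for manual processing. The decision labels and the baseline value are illustrative assumptions.

```python
def error_rate(predictions, actuals):
    """Share of model decisions that turned out to be wrong."""
    wrong = sum(1 for p, a in zip(predictions, actuals) if p != a)
    return wrong / len(predictions)

# Decisions logged in production vs. outcomes later confirmed by experts
predicted = ["approve", "reject", "approve", "approve", "reject"]
confirmed = ["approve", "reject", "reject", "approve", "reject"]

BASELINE_ERROR = 0.25  # e.g. the error rate measured for manual processing
rate = error_rate(predicted, confirmed)
print(rate)                    # -> 0.2
print(rate <= BASELINE_ERROR)  # -> True: the model still beats the baseline
```

The threshold is the crucial part: a raw error rate means nothing until it is compared against the success criteria defined back in step one.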
Same-same but Different
How does the introduction of intelligence into our design project change the way I as a designer work, or does it change it at all? The quick answer is: not entirely, but it adds another level of complexity, as the user is no longer the only “operator” in the system. This can result in a rather uncanny experience and expresses itself in public discussions about “user trust”, “human control”, “explainability of AI”, and so on. These issues aren’t so prominent in recommendation systems like those of Netflix or Spotify, which one could call “low-stakes decision” services. However, the concerns become very concrete when people are harmed directly by the decisions of a machine-learning algorithm. An algorithm or model does not respond to ethical questions and does only what it has been designed to do; at least as of today, AI systems don’t have intentions of their own. Evidently, there are now UX requirements that weren’t present when programs performed only straightforward, deterministic operations.
I have faced the issue of diminished user trust in the context of analytical scenarios, for example. It is quite common that users lack the understanding and competence to evaluate AI output correctly. At the same time, they have a strong need to consolidate information in order to make educated decisions. AI is supposed to help those users, but they refrain from using it because it is not transparent to them.
The Nielsen Norman Group calls this problem a “usability gap”: you may have all the right features, but people can’t figure out how to use them. The reverse is a “utility gap”, where you have great usability in place but offer needless or inappropriate functions.
But why is that? Why are AI applications deemed more efficient and performant, yet distrusted at the same time? A machine can, in contrast to humans, process large amounts of information in an instant and execute complex calculations at the same time. However, tasks that feel simple to us, like differentiating objects or working with incomplete information, are particularly hard to compute. Machine-learning algorithms are designed to overcome this handicap with the help of various techniques that approximate the inherent rules and patterns present in the given data. In doing that, they create a “model” which allows them to make statements (“infer”) about similar instances of data. The term inference is inherited from statistics, which preceded AI, and means the process of making claims based on a finite number of evidence samples. This brings us to some defining factors as to why designers need to provide appropriate measures within applications to avoid potentially dangerous errors in human-computer interaction:
- Machine-Learning-Models are an approximation, not a representation of the real world.
- Many steps are involved to create Machine-Learning-Models and each is prone to errors.
- AI is a scientific field of research and not constrained by common industry standards.
- Machine-Learning-Models can’t provide 100% certainty for the entirety of results.
- Data bias is carried over in the machine learning model and therefore results in biased outputs.
- Machine-Learning-Models can be manipulated by adversarial attacks.
- The human factor remains a cause for errors and isn’t resolved with apps becoming intelligent.
- Machine-Learning services can produce unexpected or surprising results that might not match the user’s mental model.
- The way AI is perceived also has psychological and social-psychological implications, such as a loss of control or the feeling of being observed, and therefore has an impact on user behavior.
This shows that alongside the obvious advantages of AI there are also tradeoffs, such as dealing with uncertainty and calculated risk, which need to be addressed in a solid UI/UX design. This is especially critical if the potential consequences of decisions founded on those AI models could be life-threatening or could violate applicable law. Consequently, dedicated research projects about “interpretable” and “explainable” AI emerged to address the demand for human-readable and transparent systems.
Still, the debate among businesses and their customers is in most cases focused on technical and legal requirements, and among public stakeholders on the ethical and sociological implications. Aware of the challenges that modern applications of machine-learning bring up, many companies have already published their own guidelines, principles, and best practices on how to address common needs and requirements, and we at SAP have done so as well. As a designer, however, I am much more concerned with the usability implications and with how to facilitate intelligence in a product in a way that benefits the user the most. This is what I would like to talk about in my upcoming posts – reflecting on how I think machine-learning should be integrated so that it serves the user, and the user doesn’t have to blindly obey the AI.
If you are interested in further exploring the subject of UX for intelligent systems, we invite you to read our other blog articles, revisit our section in the SAP Fiori UX Design Guidelines or join the exchange here in the SAP Community. If you liked this article, leave a comment and let us know about your experiences with UX matters in machine learning scenarios.