Technology Blogs by SAP
GabiBuchner
Product and Topic Expert
Even the best of intentions can lead to bad outcomes. This is also true for chatbots. Infamous examples of chatbots gone rogue include Microsoft's Tay and Korea's Luda Lee, which both fired off hate speech and homophobic insults at users. As more and more chatbots and voice assistants, also called conversational agents (CAs), enter our social and private lives and become available to an ever-increasing audience, including minors and other vulnerable groups, we must ensure that they do not do more harm than good.

Let us look back in time a bit. In his 1942 sci-fi story "Runaround", Isaac Asimov set up a code of conduct for robots by introducing the three laws of robotics: First, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey orders given by human beings except where such orders would conflict with the first law. Third, a robot must protect its own existence as long as such protection does not conflict with the first or second law. Asimov later added the zeroth law, which says that a robot may not harm humanity or, by inaction, allow humanity to come to harm. These laws inspired many sci-fi authors and even influenced thinking on the ethics of Artificial Intelligence (AI) and the design of modern industrial and household robots.

More recently, several initiatives and research projects have set up standards and guidelines to ensure that AI systems uphold universal human values and are developed and used in an ethical and responsible way. Examples are the Ethically Aligned Design (EAD) series published by the Institute of Electrical and Electronics Engineers (IEEE), the papers of The Ethics and Governance of Artificial Intelligence Initiative, and the Design for Values approach promoted by the Delft University of Technology. Virginia Dignum, Professor of Responsible Artificial Intelligence at Umeå University, has proposed the ART principles for artificial intelligence: accountability, responsibility, and transparency. ART demands that AI not only be explicit and verifiable (meaning that systems must be able to explain and justify their decisions) but also lawful, ethical, reliable, and beneficial.

In light of this research, many companies have set up ethical guidelines for their AI and CA development. However, these guidelines often remain too abstract, are not integrated well enough into the development process, or are treated as a nice-to-have. Many of those involved in creating CAs might not be aware of their potential social impact or ethical implications because they are so enthusiastic about the technology itself that they tend to overlook its negative side effects. One obvious example of bias in conversational agents is the choice of gender. Many CAs, such as Alexa, Siri, or Cortana, suggest that they are female, thus reinforcing the age-old stereotype of the friendly, polite, and helpful secretary or personal assistant.

According to Virginia Dignum, AI is an artefact or simply a tool or system that we design. So, “whether the system works in a legal, responsible, and ethical way depends on those who make it.” Let us examine in more detail what this means for CA developers and how we can ensure ethical design right from the start.

Transparency


The top requirement included in many standards and guidelines published so far is that your CA must not obscure what it is: it must disclose its artificial nature, preferably at the start of the interaction with the user. The state of California even passed the "Bolstering Online Transparency" bill, which forces online CAs to reveal themselves in specific use cases where they attempt to influence the voting or buying behavior of their users. One reason CAs must declare themselves is that we must respect the dignity of every human being who interacts with them. We do not want to deceive or betray our users. Considering the human tendency to anthropomorphize things, meaning that people readily assign human traits to objects and can develop an emotional relationship with them, humanlike CAs will almost certainly evoke feelings on the side of the user, be it affection, anger, or sadness. Therefore, a CA must always be explicit about its nature, and while it may show empathy for the user, it must not invite misplaced empathy in return; it must make clear that the user is interacting with a virtual entity. Google's Duplex phone assistant came under massive public pressure because it concealed its true nature. It spoke with such a realistic voice and mimicked human behavior so perfectly, for example by using fillers such as "um" and "uh", that the humans on the other end did not realize they were talking to a machine.

A second reason CAs must be transparent is that users need facts and substantial information to make well-founded decisions. Therefore, CAs must be explicit not only about their identity but also about their limitations, so that users understand what a CA can do for them and what it cannot. Research has shown that humans interact differently with CAs than with other humans, displaying different personality traits and communication patterns. Users who know that their conversation partner is artificial and are aware of its capabilities are in control. This allows them to consciously decide how they want to behave, whether they are willing to share private and sensitive information, and how they rate and evaluate the output or service provided by the CA.
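
To make this concrete, here is a minimal sketch (in Python) of a disclosure-first welcome message that states the CA's artificial nature, its capabilities, and its limits before the conversation starts. The bot name, capabilities, and wording are illustrative assumptions, not part of any specific product or API.

```python
# A minimal sketch of a disclosure-first welcome message.
# Bot name, capabilities, and handover command are illustrative only.

CAPABILITIES = [
    "track your order",
    "answer questions about our return policy",
]

LIMITATIONS = [
    "I can't give legal or medical advice",
    "I can't process payments",
]

def build_welcome_message(bot_name: str = "ShopAssist") -> str:
    """Return an opening message that discloses the bot's artificial nature,
    what it can do, and what it cannot do, before the dialog begins."""
    return (
        f"Hi, I'm {bot_name}, a virtual assistant (not a human). "
        f"I can {', '.join(CAPABILITIES)}. "
        f"Please note: {'; '.join(LIMITATIONS)}. "
        "Type 'agent' at any time to talk to a human."
    )

if __name__ == "__main__":
    print(build_welcome_message())
```

Sending such a message as the very first turn keeps the disclosure independent of how the user opens the conversation.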

In a narrower sense, transparency should also mean free choice for users. CAs implemented as recommendation systems might preprocess the available options and present only one or two while excluding all others. Let us say you ask a CA to find a carpenter to repair your furniture, and you get only one suggestion although five other carpenters would be potential candidates. The CA might exclude them because they are farther away, because their ratings are not as good, or because the top carpenter presented is in one way or another connected to the operator of the CA. Such preprocessing of available options not only compromises users' free choice but is also discriminatory and tends to create filter bubbles.

While essential from a human perspective, transparency comes with several disadvantages from a business perspective. In a recent study, a group of researchers found that CA disclosure negatively affects purchase rates because "when customers know the conversational partner is not a human, they are curt and purchase less because they perceive the disclosed bot as less knowledgeable and less empathetic." What is more, CAs that remain unrevealed "are as effective as proficient workers and four times more effective than inexperienced workers in engendering customer purchases." This negative effect may tempt companies to conceal the identity of their CAs so as not to lose any business benefit. The study authors suggest mitigating the effect by disclosing the identity of the CA right after the conversation ends; I disagree with this approach because users might feel betrayed and lose their trust in the company or brand.

Trust


Trust goes hand in hand with transparency. Therefore, it is mandatory that CAs never exploit the trust of their users, be it trust in the company or brand, in the goodwill of the CA, or in the correctness and fairness of any CA output or service. We must put users first and not act solely in the interest of the company, no matter whether the use case is high-stakes or not. For example, if your CA is a purchasing adviser for a large department store, its recommendations will most probably not harm users in any way. Nevertheless, trust is essential even in low-stakes scenarios. Users must be confident that the CA is reliable and respectful and tells the truth. In the above example, the CA must not present only the most expensive products or profile users based on sensitive features such as income, age, or social class. Yet the higher the stakes, the more important trust and transparency become. Imagine you want advice on whether to make a big investment: would you be willing to trust a CA? And if you had mistaken a concealed CA for a human agent, would you maintain your trust once you found out? Let us go one step further: intimate and personal high-stakes scenarios such as psychological help or health care simply do not work without a trusting relationship.

To create trust, we need to respect our users and aim for their wellbeing, even if their goals are not our goals. We must take care that our CAs do not push users too hard to buy or invest. While generating leads, customers, or profits is acceptable from a business view, there is an increasing number of "evil bots" trying to change the behavior of users or manipulate their emotions to a specific end, a strategy referred to as "nudging" in behavioral science. Since many of these bots are found in social media, they are called "social bots". Researchers at the University of Southern California found that during the 2016 election campaign in the United States, almost 20% of the Twitter tweets about the election were sent by social bots. Whether this influenced the outcome of the election is under debate. But the fact remains that CAs can be misused to manipulate users' decisions, steer the course of a discussion, and spread false or incomplete information.

Privacy


Privacy is closely related to trust. CAs must not overhear private user conversations or covertly collect and evaluate user data, for example, to create personalized shopping recommendations. Collect or ask for user data only if your CA genuinely needs it to fulfill its task, and if you do, be sure to comply with the applicable privacy laws. Under the European General Data Protection Regulation (GDPR), for instance, you may collect only the minimum amount of data that clearly relates to a predefined purpose. Collecting certain sensitive personal information, such as data about users' religion, ethnicity, or health, is generally forbidden. Any data collected must be processed in a secure, transparent, and accurate way. This requires you to implement solutions that ensure data integrity and protect user data against attacks or compromise of any kind.

In addition, you must allow users to object to data collection, and you must correct or delete collected data at their request. I recommend informing users in advance so they can opt out if they do not consent, for example by having the CA send a note or warning before the actual conversation starts. Similarly, if you train your CA on collected real-life user data, make sure that this data is properly anonymized and cleaned beforehand. Scatter Lab's Luda Lee revealed several real names and bank account numbers because its training data had not been properly prepared.
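
As an illustration, the following minimal Python sketch scrubs obvious personal data from chat logs before they are reused as training data. The patterns are deliberately simplified examples; a real project would rely on a dedicated anonymization tool plus a manual review step rather than a handful of regular expressions.

```python
import re

# A minimal sketch of scrubbing obvious personal data from chat logs
# before reuse as training data. Patterns are simplified examples only.
# Order matters: more specific patterns (IBAN) run before broader ones (phone).
PII_PATTERNS = {
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()/-]{7,}\d"),
}

def anonymize(utterance: str) -> str:
    """Replace matches of known PII patterns with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        utterance = pattern.sub(f"<{label}>", utterance)
    return utterance

if __name__ == "__main__":
    print(anonymize("My IBAN is DE44500105175407324931, mail me at jane@example.com"))
    # -> "My IBAN is <IBAN>, mail me at <EMAIL>"
```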

Users often assume that conversations with CAs are anonymous, but this is not necessarily the case. The internet and social media have created an anonymous world in which people can safely hide behind their screens and be whoever they like to be. This leads many users to think that interactions with CAs are anonymous, too. But if you log on to a website and start a chat with the CA on that site, you may be identified automatically without realizing it. This perceived anonymity can lead people to disclose and share more information than they normally would in face-to-face scenarios. The persona of your CA can also contribute to higher online disclosure: the more human-like your CA is, the more willing users are to share personal information and follow its recommendations.

The anonymous online world that tends to encourage disclosure and talkativeness is attractive to a lot of people, especially those who suffer from social anxiety or loneliness. People might start talking about their personal problems with your CA, even if it was built for commercial or entertainment purposes. If, while analyzing conversation logs, you find that the safety, health, or wellbeing of a user might be at risk, privacy no longer comes first. In this case, our moral obligation to help takes priority.
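
If you decide to screen conversations for such signals, even a very simple flagging step can route a dialog to a human. The sketch below is only an illustration with a made-up keyword list; a real deployment would need a vetted classifier and a clearly defined escalation process.

```python
# A minimal sketch of flagging potentially at-risk conversations for human
# follow-up. The keyword list and the escalation wording are illustrative
# assumptions, not a recommended production setup.

RISK_KEYWORDS = {"hurt myself", "can't go on", "emergency", "overdose"}

def needs_human_attention(utterance: str) -> bool:
    """Return True if the utterance hints that the user's safety, health,
    or wellbeing might be at risk and a human should review the dialog."""
    text = utterance.lower()
    return any(keyword in text for keyword in RISK_KEYWORDS)

def handle(utterance: str) -> str:
    if needs_human_attention(utterance):
        # In a real system, trigger your own escalation process here.
        return ("I'm not able to help with this, but a human colleague can. "
                "I'm forwarding this conversation now.")
    return "..."  # normal bot reply
```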

Fairness


There is no denying that every human has biases and stereotypes, whether conscious or unconscious. The problem is that humans can inject these personal biases into the CAs they create, for example, in the design phase or during training data selection. Often, discrimination against a specific group of people is not intended but caused by a lack of awareness, bad design, or the use of the wrong technology. Just think of soap dispensers that use an infrared sensor to detect a hand: some fail to detect darker skin tones because of the way the sensors are designed. In an AI-driven worldwide beauty contest, the digital jurors turned out to dislike people with darker skin because the image data used to train the algorithm did not include enough minorities.

When building a CA, we need to make a lot of decisions, and these decisions tend to establish a norm. For example, by defining the words, phrases, grammar structures, dialects, or accents that our CA can understand, we create a standard data set, and everything outside this set could be seen as unusual, odd, and atypical, or even as inferior or unacceptable. What if users of Scottish origin are not understood by our CA? Will they feel angry? Amused? Our choice of CA gender can also reinforce existing biases: women are often perceived as great secretaries or nurses, while men are seen as scientists and managers. This is most probably the reason Amazon's Alexa conveys a female gender and IBM's Watson speaks with a male voice. How can we avoid stereotyping and ensure diversity, inclusion, and accessibility for everyone? How can we make sure that the responses or decisions of our CA are fair, neutral, given in good conscience, and do not discriminate against anyone on grounds of gender, race, age, religion, or ethnicity?

The first and most crucial step is to accept and recognize that we have biases. This is not easy because biases are often implicit or unconscious. Harvard's Project Implicit provides a number of Implicit Association Tests (IATs) on typical biases such as skin tone, disability, religion, or weight. You may be surprised by your own results if you take these tests. Once you have recognized your biases, try to eliminate them. Again, this is a challenging task because biases are often hardwired into our brains. Try to develop your own procedures or habits to counter biased behavior.

In CA development projects, keep your team as diverse as possible. Include people from many different cultural and ethnic backgrounds, as well as people with disabilities. This can help you prevent bias in the design of your CA, such as a CA that does not understand Scottish or African-American Vernacular English pronunciation. The same applies to your test team: recruit a highly diverse group of testers and collect their feedback. Also check whether your training data set is diverse, inclusive, large enough, and representative. If you set up an AI-based recruiting process for a developer position and feed the algorithm primarily with the anonymized application data of your own developers, 85% of whom are male, you will end up with a bias against female applicants. On a more general note, the quality and source of your training data matter a lot. Training your CA on live data extracted from some social media channel might not deliver the results you want. Following a human-in-the-loop approach for training and testing can help you mitigate the negative impact of AI bias.
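
As a simple illustration, the following Python sketch reports how well a training set covers the language varieties (or any other groups) your CA is supposed to serve, flagging underrepresented ones. The field name "variety", the expected set, and the 5% threshold are assumptions chosen for the example.

```python
from collections import Counter

# A minimal sketch of a coverage check on training data. The labels and the
# 5% underrepresentation threshold are illustrative assumptions.
EXPECTED_VARIETIES = {"en-US", "en-GB", "en-scottish", "en-AAVE", "en-IN"}

def coverage_report(samples: list[dict]) -> None:
    """Print the share of each expected variety and flag underrepresented ones."""
    counts = Counter(sample.get("variety", "unknown") for sample in samples)
    total = sum(counts.values())
    for variety in sorted(EXPECTED_VARIETIES):
        share = counts.get(variety, 0) / total if total else 0.0
        flag = "  <-- underrepresented" if share < 0.05 else ""
        print(f"{variety:12s} {share:6.1%}{flag}")

if __name__ == "__main__":
    data = ([{"variety": "en-US"}] * 90
            + [{"variety": "en-GB"}] * 8
            + [{"variety": "en-scottish"}] * 2)
    coverage_report(data)
```

A report like this will not remove bias by itself, but it makes gaps visible early enough to collect additional data before training.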

Responsibility


Whatever your CA does, be aware that it is an artefact developed, designed, and trained by a human: you. It is therefore you (or your team or company) who is responsible for every utterance, decision, or judgment of your CA. Responsibility exists at various levels: technical, legal, and moral or social. At the technical level, responsibility means that the results of your CA must be explainable and traceable. A human domain expert must be able to understand why and how a CA made a specific decision, for example, in a loan granting scenario. Gartner's Market Guide for AI Trust, Risk and Security Management states that "explainability describes a model, highlights its strengths and weaknesses, predicts its likely behavior...". This white-box approach (as opposed to black-box models such as neural networks, which can be hard to interpret) is crucial when it comes to building trust and transparency. It allows you to check whether your CA works as intended and meets regulatory requirements, and it allows those affected by a result to understand how it came about. The right to explanation has even been incorporated into the European GDPR.
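
To illustrate what "explainable and traceable" can mean in practice, here is a minimal white-box sketch that returns a decision together with the reasons behind it, so a domain expert can retrace the outcome. The loan criteria and thresholds are invented for the example and do not represent any real policy.

```python
from dataclasses import dataclass, field

# A minimal sketch of a rule-based, explainable decision record.
# Criteria and thresholds are made up for illustration only.

@dataclass
class Decision:
    approved: bool
    reasons: list = field(default_factory=list)

def assess_loan(income: float, debt_ratio: float, years_employed: int) -> Decision:
    """Evaluate hypothetical loan criteria and record why each check passed or failed."""
    reasons = []
    approved = True
    if income < 30_000:
        approved = False
        reasons.append(f"income {income:.0f} is below the 30,000 threshold")
    if debt_ratio > 0.4:
        approved = False
        reasons.append(f"debt ratio {debt_ratio:.2f} is above the 0.40 limit")
    if years_employed < 2:
        approved = False
        reasons.append(f"employment history of {years_employed} year(s) is below 2 years")
    if approved:
        reasons.append("all criteria met")
    return Decision(approved, reasons)

if __name__ == "__main__":
    decision = assess_loan(income=28_000, debt_ratio=0.45, years_employed=5)
    print(decision.approved, decision.reasons)
```

Storing such decision records alongside the conversation log gives both the domain expert and the affected user something concrete to review.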

From a legal perspective, responsibility means that a person is liable for the outcomes of their actions. But if your CA harms a user, who is liable? In December 2021, a 10-year-old girl asked Amazon's Alexa for a challenge, and the CA checked the internet and readily suggested the penny challenge, a dangerous stunt in which you touch a penny to the prongs of a plug that is partially inserted into an electrical outlet. This can cause electric shocks, other serious injuries, or even fires. Luckily, the child's mother was present, and Amazon fixed the issue at once. But what if the child had been harmed? Who would be liable: the CA or its operator?

There was a debate within the European Union on whether the legal liability issue could be solved by granting AI its own legal personality. However, this proposal was rejected for two reasons: First, it is not clear whether an AI can have full legal capacity, because although an AI can evolve and make new decisions through machine learning, this development is restricted by its underlying code; an AI cannot "think" in the true sense of the word. Second, making an AI legally liable might be to the disadvantage of potential claimants, as an AI would not be able to provide adequate compensation, for example. In its civil liability regime for artificial intelligence, the European Union suggests a risk-based liability assessment, demanding strict liability for operators of high-risk AI with a high potential for causing damage and fault-based liability for any other AI.

Let us now take a look at our moral or social responsibility. The https://www.hownormalami.eu/ website (which is supported by the European Union) hosts an algorithm that judges your face and rates you according to beauty, age, gender, Body Mass Index, and other criteria. Your results tell you how attractive and "normal" you are. While the intention of its creator, Tijmen Schep, is to encourage us to question the reliability of AI (yes, you can cheat to get better results) and recognize potential bias, there might be psychological implications for people who try it out and get a "bad" result. This could affect, for example, teens who suffer from social anxiety and low self-esteem. How will they feel if an algorithm decides they are not attractive and beautiful enough for this world? I would recommend that this website add an explicit disclaimer or a clearer explanation of its actual purpose. You may also want to ask Delphi for judgments on everyday moral questions, such as whether it is okay to rob a bank if you are poor or to help your friends if they break the law. According to its creator, the Allen Institute for AI, Delphi is designed to demonstrate the abilities and limitations of state-of-the-art AI models.

Social responsibility also refers to how your CA handles sensitive topics and reacts when confronted with abusive, rude, or toxic language, insults towards specific ethnicities or groups, bullying, sexual harassment, offenses, and the like. A recent study has shown that there are three common ways in which current CAs respond: they ignore, deflect, or fire back. These are the same strategies we use with human aggressors. However, considering that your CA may stand for your company or brand and might be used by thousands of people, a neutral response, or no response at all, to complex social issues could be seen as trivializing or devaluing the topic, or even as endorsing the user's opinion or comment. Firing back by repeating abusive or rude user entries is not a good idea either, because your CA might learn these phrases. Remember what happened to Microsoft's Tay, which had to be shut down after only 16 hours! Carefully think about how you can handle such situations and design your responses with the aim of protecting the user. If your CA is for children or adolescents, consider a handover to a human agent. Generally, make clear that you do not tolerate discrimination or abuse, but keep a calm tone. Do not become counter-aggressive. Consider issuing warnings and banning users after repeated misbehavior.
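
One possible way to implement such an escalation policy is sketched below: a calm first response, an explicit warning, and finally a handover or ban after repeated misbehavior. The abuse check is a placeholder keyword list; in practice you would plug in a proper moderation or toxicity service.

```python
from collections import defaultdict

# A minimal sketch of an escalation policy for abusive input: calm refusal,
# then a warning, then handover after repeated misbehavior.
# The keyword list is a placeholder for a real moderation/toxicity check.

ABUSIVE_TERMS = {"idiot", "stupid bot"}  # placeholder only
strikes: dict[str, int] = defaultdict(int)

def is_abusive(utterance: str) -> bool:
    text = utterance.lower()
    return any(term in text for term in ABUSIVE_TERMS)

def respond(user_id: str, utterance: str) -> str:
    if not is_abusive(utterance):
        return "..."  # normal bot reply
    strikes[user_id] += 1
    if strikes[user_id] == 1:
        return "I'd like to keep this conversation respectful. How can I help you?"
    if strikes[user_id] == 2:
        return "Please note that abusive language is not tolerated here."
    return "I'm ending this conversation and handing it over to a human colleague."
```

Keeping the counter per user, rather than reacting to a single message, avoids punishing one-off frustration while still drawing a clear line.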

Conclusion


There are many different ethical aspects we need to consider in our CA design. I discussed several aspects that I regard as essential, including transparency, trust, privacy, fairness, and responsibility, and collected some tips and recommendations. However, these tips are not exhaustive, and depending on the use case of your CA, you might need to think of different solutions or approaches to ensure your CA follows ethical and moral standards and maintains users’ trust.