A Checklist of Ethical Design Challenges for Business AI
In the first part of this two-part blog, we saw that design principles and digital ethics build trust in artificial intelligence (AI). However, we didn’t address the areas where we are most likely to find those ethical challenges. A checklist can help identify the most relevant pitfalls, and teams should define such a list to ensure that they bring an AI system into the world properly and responsibly. Not only is it important to plan, implement, and operate an AI system carefully, but also to create a plan for terminating it if needed.
The following questions can help to formulate such a checklist. Most of the items apply to multiple phases and are hopefully useful for teams in creating individualized, AI-specific checklists.
Human involvement
Let’s start with some questions about human involvement in AI-driven processes.
- What user roles are involved and what are the individual ethical requirements of those users? (For example, a hotel reception clerk would have very different requirements from a doctor.)
- In which contexts will the users work? (Create parameters for these contexts to help the AI “understand” the user’s current situation and adjust to their ethical needs. For example, a purchasing process may have an “urgency” parameter so that, when a purchase is deemed urgent, it may be acceptable not to buy from a fair trade source; see the sketch after this list.)
- How is each user’s current state evaluated, and how is that information used? (For example, if eye tracking technology is used, what happens with the information gathered about the user’s reading behavior and emotional state?)
- What is the user’s cultural environment? Which ethical and social values do the users share?
- Will there be situations in which the user needs to or wants to turn off the AI’s functions, completely or in part? Should you develop an “escape door” for the AI functions? How can the user be supported if such a situation occurs?
- If explicit user feedback is requested, how will it be evaluated? (For example, if a system learns by observing users as they analyze why a month had low revenue, does the AI need to ask those users for permission before being trained on their behavior?)
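To make the context-parameter idea above more concrete, here is a minimal Python sketch. The names (PurchaseContext, sourcing_policy) and the urgency values are illustrative assumptions, not part of any product; a real system would define its own parameters and log every relaxation of an ethical constraint.

```python
from dataclasses import dataclass

@dataclass
class PurchaseContext:
    """Context parameters that let the AI adjust to the user's situation."""
    urgency: str            # e.g. "routine", "high", "critical" (illustrative values)
    order_value_eur: float

def sourcing_policy(ctx: PurchaseContext) -> str:
    """Decide which supplier pool is acceptable for this purchase.

    Relaxes the fair-trade constraint only for critical purchases,
    mirroring the "urgency" example in the list above.
    """
    if ctx.urgency == "critical":
        return "any_supplier"    # constraint relaxed; such exceptions should be logged
    return "fair_trade_only"     # default: ethically preferred sourcing

print(sourcing_policy(PurchaseContext(urgency="routine", order_value_eur=500.0)))
# -> fair_trade_only
```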
Algorithms and boundary conditions
One requirement for business AI is transparency, especially about why a particular result was achieved. This can be challenging (or even impossible) for some AI technologies, such as neural networks. Here are some checklist items concerning technological boundaries.
- How can “learn and forget” processes be monitored (e.g., in case of outdated data or legal deletion obligations)? How will humans be involved here (e.g., in the form of an “AI auditor”)?
- What are the rules for AI “learn and forget”? How can they be customized? By whom?
- How can the quality of information used for learning be monitored?
- How will machines evaluate when a human must be involved in a decision or when to seek advice from a human?
- What are the limits for decisions or proposals created by the system? (For example, human-AI interaction might depend on parameters such as “if the goods to be purchased are valued over 10,000 euros, then a human decision maker must be involved”; see the first sketch after this list.)
- In which cases does a process require a four-eyes principle (i.e., two people must review everything) or other trust-building measures?
- Can computational results, and how they were achieved, be understood by humans? Is it possible to understand them immediately or only after certain actions? (For example, the user may not normally need an explanation of why an AI came to a decision, but if they ask for one, the system must be able to provide it.)
- Can computational results be reproduced (both before and after additional learning steps)? If not, how will the users be informed about this?
- In which process steps are “escape doors” required or reasonable? To what extent should they stop AI functions? How can they be tested?
- Is the data used for learning free of bias? If not, what are the sources of bias? Can the level of bias be analyzed and quantified?
- To what extent does bias affect the system’s computational results?
- How can the level of bias be shown to the user or to other systems using the results? (See the second sketch after this list.)
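As an illustration of such decision limits, here is a minimal Python sketch of a human-in-the-loop threshold check. The 10,000-euro limit comes from the example above; the function name and the confidence floor are illustrative assumptions, not a prescribed implementation.

```python
HUMAN_APPROVAL_THRESHOLD_EUR = 10_000  # limit from the purchasing example above

def requires_human_decision(order_value_eur: float,
                            model_confidence: float,
                            confidence_floor: float = 0.9) -> bool:
    """Escalate to a human when the system's decision limits are exceeded."""
    if order_value_eur > HUMAN_APPROVAL_THRESHOLD_EUR:
        return True   # value above the defined limit: a human must decide
    if model_confidence < confidence_floor:
        return True   # system is unsure: seek advice from a human
    return False      # within limits: the system may decide on its own
```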
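And as one simple way to quantify bias, the following sketch computes a demographic parity gap: the largest difference in positive-decision rates between any two groups. This single metric is only a starting point, and the function is an illustrative assumption rather than an established API.

```python
def demographic_parity_gap(decisions, groups):
    """Largest difference in positive-decision rates between groups.

    decisions: iterable of 0/1 outcomes; groups: parallel group labels.
    """
    counts = {}
    for decision, group in zip(decisions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + decision)
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Equal approval rates across groups would yield a gap of 0.0.
print(demographic_parity_gap([1, 0, 1, 1], ["a", "a", "b", "b"]))  # -> 0.5
```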
Compliance and security of learning systems
Legal compliance and data security are strongly related to digital ethics. Let’s look at some fundamental questions to consider.
- Which legal compliance topics must be covered (including data privacy)? Which internal company policies are applicable? How can compliance be monitored during development and operations?
- In terms of self-regulation of AI providers, are the AI development plans compliant with the company’s own standards?
- Are there possible liability issues resulting from AI-made decisions? In which situations?
- Would it be possible to intentionally teach the system the wrong things? How can security breaches, including incorrect teaching, be uncovered? (See the sketch after this list.)
- How can the implementation of backdoors be avoided and detected?
- How can potential hidden override directives be unveiled?
- In which cases must the system’s life be ended? What happens to the learned information in such a case?
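One hedged sketch of how incorrect teaching might be uncovered: after every learning update, evaluate the system on a trusted, frozen holdout set and flag suspicious drops in quality. The model.predict interface and the thresholds below are assumptions for illustration only.

```python
def check_after_update(model, holdout_inputs, holdout_labels,
                       baseline_accuracy, max_drop=0.05):
    """Flag a learning update whose accuracy on a trusted, frozen holdout
    set drops suspiciously, which may indicate incorrect or malicious
    teaching (e.g., data poisoning). Assumes model exposes predict()."""
    correct = sum(model.predict(x) == y
                  for x, y in zip(holdout_inputs, holdout_labels))
    accuracy = correct / len(holdout_labels)
    if accuracy < baseline_accuracy - max_drop:
        raise RuntimeError(
            f"Suspicious accuracy drop after update: {accuracy:.2f} vs. "
            f"baseline {baseline_accuracy:.2f}; freeze the model and audit.")
    return accuracy
```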
Potential long-term impact
According to Amara’s law, “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” Therefore, it makes sense to look at potential long-term ethical questions.
- How can a long-term risk assessment be done? What will be the impact on society, if any, when tasks are automated on a large scale?
- If the system is widely used, would humanity lose knowledge or capabilities?
- How can behavioral changes of the AI system be detected? (See the sketch after this list.)
- How can unintended or irrational computational results be monitored, and how can they be avoided or filtered?
- How can we detect if the system starts working independently of any human-defined task? How will we react in such a case?
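To illustrate how behavioral changes might be detected, here is a naive drift monitor in Python. It compares the share of positive decisions in a recent window against a reference rate recorded at release; the class name, window size, and tolerance are illustrative assumptions, and a production system would use proper statistical drift tests.

```python
from collections import deque

class BehaviorDriftMonitor:
    """Naive drift check: compare the share of positive decisions in a
    recent window against a fixed reference rate recorded at release."""

    def __init__(self, reference_rate: float, window: int = 1000,
                 tolerance: float = 0.10):
        self.reference_rate = reference_rate
        self.recent = deque(maxlen=window)
        self.tolerance = tolerance

    def observe(self, decision: int) -> bool:
        """Record one 0/1 decision; return True once behavior has drifted."""
        self.recent.append(decision)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        current_rate = sum(self.recent) / len(self.recent)
        return abs(current_rate - self.reference_rate) > self.tolerance
```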
The most important checkpoint
Running through this checklist helps identify the requirements for a business system that acts in a sustainably ethical way. The most important checkpoint, however, has not been mentioned yet: the list must be reviewed and updated on a regular basis. And as the level of AI intelligence grows, the checklist items must become more granular.
Technological advances must be anticipated so that AI does not develop faster than people’s ability to define and implement an ethical foundation. We need measures like bias handling, escape doors, and AI auditors. And there are likely many more pitfalls and challenges out there, waiting to be seen before we can add a real ethical “conscience” to what is called artificial intelligence.