Guido Wagner

How Digital Ethics Enables Trust in Business AI

Technologies summarized under the label of “Artificial intelligence” (AI) open doors to new possibilities that can enrich the lives of many humans. This is the first part of a two-part blog showing how digital ethics can help manage the risks arising from the changes AI is expected to bring to the labor market and our society. The second part proposes a checklist that covers ethical design challenges for business AI.

Growing automation may eventually lead to a world in which all humans need to work just a few hours a week – even if getting there without social unrest will be a challenge for humanity. The trend is reinforced by the move toward intelligent enterprises that embed AI systems in production and business management. Personal assistants will be the spearhead of a new generation of user interfaces.

Intelligent systems will be able to communicate with users in the style of human communication. Where Mr. Spock in the Star Trek universe needed to press buttons to interact with machines, there will be voice and gesture recognition and perhaps thought-controlled interfaces. With augmented reality glasses, for instance, everything can become a device – because any wall or table can be augmented with controls for interacting with AI.

This sounds like a bright future for the coexistence of humans and machines. We just need to do it intelligently, making sure that IT primarily serves humans. As with any advanced technology, the impact of AI on individuals and human society is difficult to assess, for a variety of reasons: the technology is new; there is little positive experience with it yet; and people fear losing their jobs, whether to efficiency gains or to an AI takeover. The result is a lack of trust in AI. Nevertheless, users’ trust is crucial for the successful integration of AI technologies into our daily business and for their broad acceptance. The challenge is that trust is not built just on the exchange of computational results; it happens on a personal level. For example, if a personal assistant is quite helpful but reveals privacy-related data or has an unpleasant voice, humans will not place their trust in it.

We need to show that trust and automation efficiency can be partners, helping each other grow. A valid starting point is a set of design principles for the architecture of systems for the Intelligent Enterprise. Here is a proposal, developed and used by SAP Design:

  1. Intelligent automation: Drive the automation of a process with caution and foresight.
  2. Human augmentation: Technology should be used to augment and improve human capabilities rather than patronize people.
  3. Keep humans in the loop: Users rely on honesty, transparent processes, and comprehensible methods so they can come to their own conclusions – even without AI. (A minimal sketch of such a decision flow follows this list.)
  4. Digital ethics: We must seek out AI partners who subscribe to human ethical values and social behavior. Two important elements will be managing bias and potential misuse.
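To make the third principle more tangible, here is a minimal sketch in Python of a decision flow that automates only low-risk actions and defers everything else to a human reviewer. All names, the threshold value, and the risk scoring are hypothetical illustrations for this blog, not part of any SAP product or API.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the AI proposes to do
    risk_score: float  # estimated impact on humans: 0.0 (none) to 1.0 (severe)
    rationale: str     # transparent explanation shown to the user

RISK_THRESHOLD = 0.3   # hypothetical cutoff; above it, a human must decide

def request_human_review(rec: Recommendation) -> bool:
    """Stand-in for a UI step in which a person approves or rejects."""
    print(f"Proposed action: {rec.action}")
    print(f"Rationale:       {rec.rationale}")
    return input("Approve? [y/n] ").strip().lower() == "y"

def decide(rec: Recommendation) -> bool:
    """Automate only low-risk actions; keep humans in the loop otherwise."""
    if rec.risk_score <= RISK_THRESHOLD:
        return True                   # low impact: safe to automate
    return request_human_review(rec)  # high impact: human makes the final call

if __name__ == "__main__":
    rec = Recommendation("reorder stock", 0.1, "inventory below minimum level")
    print("executed" if decide(rec) else "rejected")
```

The design choice worth noting: the human is not a fallback for errors but the default decision maker, and automation is the exception that must be justified by a low estimated impact on humans.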

In fact, the last principle has the power to connect trust and efficiency. If an AI behaves along broadly accepted ethical lines, humans can be sure of being treated fairly and individually, even in times of automation.
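Managing bias, in particular, can be made measurable. As one illustrative technique – an assumption of this sketch, not something prescribed here – a system’s outcomes can be audited across user groups with the disparate impact ratio, where values below 0.8 (the “four-fifths” rule of thumb from US employment-selection guidelines) are commonly taken as a warning sign:

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    totals, positive = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        positive[group] += selected
    return {group: positive[group] / totals[group] for group in totals}

def disparate_impact(decisions):
    """Ratio of the lowest to the highest selection rate across groups."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Made-up decisions of a hypothetical screening system: (group, 1 = selected).
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(f"Disparate impact: {disparate_impact(sample):.2f}")  # 0.50 -> investigate
```

A low ratio does not prove unethical behavior, but it flags exactly the situations in which the system should hand the decision back to humans.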

Trust can also be built by using a communication style similar to that used by humans, finally fulfilling the requirements of the Turing test. To achieve this, AI would need to consider cultural and social conventions that go beyond ethical rules. Such conventions may depend on the age of the user and the user’s social environment. Of course, trust also requires that AI complies with legislation, especially data privacy regulations. These topics are important but will not be looked at in depth here.

Before we start to implement AI systems, we need to think about their potential impact. We need to define goals, requirements, and operational boundaries from a human perspective. Let’s start with an inventory of the system’s tasks and boundary conditions; a sketch of how such an inventory can be recorded follows the list below. Being aware of these questions helps to identify the areas where ethical challenges are waiting.

Inventory – what will the AI do, and what impact can be expected?

  • Which tasks of the AI system might affect human life?
  • Will machines be better than humans at this task? In which areas?
  • Will humans benefit from the handover to AI? Which user roles will benefit?
  • What advice will the system give, and which decisions will it make?
  • Which specific types of information will the system be allowed to handle (inbound, outbound)?
  • Which type of information will the system need to learn?
  • Could there be situations which require specific ethical behavior (e.g., emergencies)?
  • List the technologies that will be used (e.g., UX, machine learning, …).
  • List the physical entities that can be accessed or affected (e.g., production machines, robots, traffic lights).
  • Will environmental sustainability be improved by using AI?
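One way to make the answers actionable is to record them as structured data that can be reviewed and versioned alongside the system’s design documents. The following sketch is hypothetical; the field names simply mirror the questions above and are not part of any standard or product.

```python
from dataclasses import dataclass, field

@dataclass
class AIInventory:
    tasks_affecting_humans: list[str] = field(default_factory=list)
    areas_machine_is_better: list[str] = field(default_factory=list)
    benefiting_user_roles: list[str] = field(default_factory=list)
    advice_and_decisions: list[str] = field(default_factory=list)
    information_handled: dict[str, list[str]] = field(default_factory=dict)
    learning_data: list[str] = field(default_factory=list)
    special_ethical_situations: list[str] = field(default_factory=list)
    technologies: list[str] = field(default_factory=list)
    physical_entities: list[str] = field(default_factory=list)
    sustainability_impact: str = ""

# Example entry for a hypothetical shift-scheduling assistant.
inventory = AIInventory(
    tasks_affecting_humans=["assigning work shifts"],
    advice_and_decisions=["proposes a schedule; a human approves it"],
    information_handled={"inbound": ["staff availability"],
                         "outbound": ["published schedules"]},
    learning_data=["historical schedules"],
    technologies=["machine learning", "conversational UX"],
    sustainability_impact="fewer commutes through better planning",
)
print(inventory)
```

An empty field then becomes a visible gap in the ethical assessment rather than a question nobody remembered to ask.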

Knowing what the AI should do and how it is supposed to improve human life, we can start thinking about potential ethical challenges. A checklist of questions helps to consider aspects of ethical system behavior in specific situations. A proposal for such a set of questions will be covered in the second part of this two-part blog.
