Cosette Jarrett

Understanding the need for transparency with conversational AI

Conversational AI has taken center stage at some of this month's hottest tech conferences, including F8, Google I/O, and Microsoft Build. While the promise of advanced, AI-powered virtual assistants and chatbots has many industry professionals and consumers buzzing about the positive impact these technologies could have, critics continue to raise valid concerns about transparency. As companies deploy AI that mimics human behavior to perform consumer-facing tasks, where do we draw the line on how and when people are told they are interacting with a bot rather than a real human?

This is a point I often see experts discuss in the AI and transportation guest contributions I edit at VentureBeat. As AI gets better at mimicking real human behavior, what responsibility do brands have to notify users that they are speaking with an AI-powered bot rather than a human customer service agent or assistant? Most of today's chatbots are a bit too clunky to dupe savvy users, but what about the more advanced AI-enabled technologies now emerging? How will users interpret their interactions with bots that can speak with an uncanny resemblance to real humans? I'd like to think businesses will put increased focus on this question as more major tech companies roll out shockingly accurate AI with ever more advanced capabilities.

AI-powered service is simply answering human demand

One important point to consider, before we dive into companies' responsibility to notify users when they are dealing with an intelligent bot, is that customers themselves created the demand for automated processes in the first place. The slow death of the voicemail box is a good example.

It's reported that 80 percent of callers who reach a voicemail system hang up without leaving a message. This is a problem for companies that lack the bandwidth to handle every incoming call, which means automation, in the form of intelligent customer care bots and advanced automated voice systems, is necessary not only to avoid dropoff but also to give customers a more streamlined experience. So while we often hear customers complain about chatbots and assistants that take on too many traditionally human responsibilities, it's worth remembering that companies are simply building more advanced solutions in response to the needs of their clientele.

With all of that said, it seems only fair that consumers cut companies a little slack as they iron out the kinks in their conversational AI strategies. However, this doesn't mean companies should have free rein to deploy bots as they please without at least giving their customer bases fair warning.

That doesn’t mean transparency is out the window

According to a survey conducted by Mindshare, 75 percent of consumers want to know when they're talking to a bot, and 48 percent consider chatbots that pretend to be human "creepy." Consumer demand for more and smarter service bots helps explain why companies want, and often need, to automate more processes with intelligent solutions, but that doesn't mean consumers don't deserve to know who, or what, they're sharing their information with.

Much of the consumer concern around AI-powered chatbots and virtual assistants lies in their potential to deceive. Despite this, many traditional chatbots are given the names and photos of fake customer service agents to make users believe they are speaking with a real person. The technology is certainly helpful when a bot can answer customers' questions and spare them a call queue, but users often lose trust in companies that aren't upfront about their use of bots, especially when the bot can't resolve an issue the way a human might.

As I mentioned, savvy users can see right through most of today's bots by picking up on unnatural wording and other cues, but this won't always be the case. The issue will grow more complex as advanced, AI-powered technologies like the recently demonstrated Google Duplex, which can place calls to schedule appointments and gather business information using a human-sounding voice, become more prevalent. Software like this could dupe most users into believing they're speaking with a real human for the entire duration of a call, raising ethical concerns that go beyond simply losing the trust of the user on the other end.

Getting ahead of regulation to establish consumer trust

Gartner predicts that by 2020, 85 percent of interactions between consumers and enterprises will be handled by bots. Clearly, it's time to address the ethics of conversational technologies, yet federal regulation has lagged behind.

Although federal regulation hasn't caught up with AI-enabled chat and assistance solutions, companies should still develop and abide by an ethical code of transparency if they want to establish consumer trust. A simple notification at the start of a conversation, informing users that they are in fact talking to a bot, could help companies avoid losing consumer trust and might even encourage more patience from users as they work with the bot to resolve customer service issues.
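To make that concrete, here is a minimal sketch of what an up-front disclosure might look like in code. Everything here, the ChatSession class, its placeholder intent handling, and the disclosure text, is a hypothetical stand-in for whatever chat framework a company actually runs; the one design point is that the disclosure is sent the moment the session opens, before the bot says anything else.

    # A minimal sketch of a "bot disclosure" pattern. The ChatSession class
    # and its intent handling are hypothetical stand-ins for a real chat
    # framework; the design point is that the disclosure is sent when the
    # session opens, before any other bot message.

    BOT_DISCLOSURE = (
        "Hi! I'm an automated assistant. I can help with common questions, "
        "or connect you with a human agent at any time."
    )

    class ChatSession:
        def __init__(self, user_id: str) -> None:
            self.user_id = user_id
            self.transcript: list[str] = []
            # Disclose up front, before the user has said anything, so that
            # no conversation path can skip the notification.
            self.send(BOT_DISCLOSURE)

        def send(self, text: str) -> None:
            self.transcript.append(f"bot> {text}")
            print(f"bot> {text}")

        def receive(self, text: str) -> None:
            self.transcript.append(f"user> {text}")
            self.send(self.handle(text))

        def handle(self, text: str) -> str:
            # Placeholder for real intent routing; a production bot would
            # use an NLU model here.
            if "human" in text.lower():
                return "No problem. Connecting you with a human agent now."
            return "I can look into that. Could you share your order number?"

    if __name__ == "__main__":
        session = ChatSession(user_id="demo-user")
        session.receive("My package never arrived.")
        session.receive("Can I talk to a human?")

Wiring the disclosure into the session itself, rather than into individual intent handlers, means every conversation starts with it regardless of which path the user takes, and the bot still offers a human handoff at any point.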

At the end of the day, transparency in all business practices is an important part of earning and keeping consumer trust. As new technologies become available to serve customers without human agents, companies must remember to remain transparent if they want to stay in the good graces of their customer bases.
