AI offers up a mixed bag of risks and advantages. Here’s what your customers expect when you use it.
Artificial intelligence (AI) is increasingly prevalent in our lives. From digital assistants and chatbots to automobiles and recommendation engines, across industries such as medicine, finance, insurance, manufacturing, marketing and entertainment, AI is everywhere.
AI informs healthcare decisions, resolves customer service issues, chats with us through companion bots, guides financial decisions, drives autonomous cars and helps employees make faster, better-informed decisions. Many brands already use AI and will lean on it more as time goes on. But do their customers trust those brands' use of it?
With recent headlines about AI applications that generate images from text descriptions (such as Craiyon), many consumers are rightfully concerned that the technology could be used for nefarious purposes, such as creating deepfakes. Other concerning news came from a Google engineer who was fired after claiming that the company's AI chatbot was sentient.
Because artificial intelligence features prominently in so many science-fiction movies, usually with negative connotations, it has become embedded in the popular consciousness, and many consumers are hesitant to trust AI in their daily lives. In a 2021 YouGov report, 52% of respondents said they were worried about the implications of AI, and many consumers believe it may threaten their jobs.
A 2021 CMSWire article on unconscious biases reflected on Amazon's failed use of AI for vetting job applications. Although Amazon did not purposely use prejudiced algorithms, its model was trained on a decade of the company's hiring data and recommended applicants who resembled previous hires. Unfortunately, the majority of those previous hires were white males. Amazon eventually abandoned AI in its hiring process, relying instead on human decision-making.
Also concerning: according to Timnit Gebru, founder and executive director of the Distributed Artificial Intelligence Research Institute, computer vision researchers have shown a general disregard for the ethical considerations and potential human rights impacts of computer vision technologies used for border surveillance, autonomous drone warfare and law enforcement.
In 2018, Elon Musk, CEO of Tesla and SpaceX, told the SXSW conference, "Mark my words, AI is far more dangerous than nukes," arguing that a regulatory body should oversee the development of superintelligence. Musk added that he was "really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me." Many consumers would tend to agree.
That said, consumer trust in AI, at least in the form of chatbots, remains fairly high. A Capgemini report found that 54% of customers have daily AI-based interactions with brands, and 49% of those customers found the interactions trustworthy.
That trust isn't limited to customers; employees trust their interactions with AI, too. An Oracle report revealed that 64% of employees would trust an AI chatbot more than their manager, and 50% have turned to an AI chatbot instead of their manager for advice. Additionally, 65% of employees said they are optimistic about, excited by and grateful for their AI "co-workers," and almost 25% reported having a gratifying relationship with AI at their workplace.
Most AI and machine learning (ML) applications are complex enough that few people understand exactly what is going on inside them. Even those involved in development may understand only the part they work on; the AI seems to exist in a "black box" of mystery.