Imagine sending an artificial intelligence (AI) chatbot a message about a lost package and getting a reply that it is “happy” to help. Once the bot creates a new order, it says it is “happy” to have fixed the issue. Afterward, you get a survey about the interaction. Are you likely to rate it positively or negatively?
This scenario is not far from reality, as AI chatbots have taken over online commerce. By 2025, 95% of companies will have AI chatbots, according to Financial Digest. AI may not yet be sentient, but it can be programmed to express emotions.
It has long been known that displaying positive emotions in customer service interactions can improve the customer experience, but researchers at Georgia Tech’s Scheller College of Business wanted to see whether this also applies to artificial intelligence. Their experiments tested whether positive emotional displays could improve customer service evaluations and found that emotional AI is appreciated only when customers expect it, which may make it a questionable investment for companies.
“It is widely believed, and repeatedly shown, that human employees can express positive emotions to improve customers’ service evaluations,” said Han Zhang, the Steven A. Denning Professor of Technology and Management. “Our findings suggest that whether AI’s expression of positive emotions helps or hurts service evaluations depends on the type of relationship customers expect from the service agent.”
The researchers presented their findings in the paper “Emotional Robots: Should AI Agents Express Positive Emotions in Customer Service?” published in Information Systems Research in December.
Researching artificial intelligence emotion
The researchers conducted three studies to expand understanding of emotional artificial intelligence in customer service transactions. Although the participants and scenarios varied from study to study, the emotionally charged AI chatbots in each one used positive emotional adjectives, such as excited, delighted, or joyful, and deployed more exclamation points.
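To make the manipulation concrete, here is a minimal sketch, in Python, of how the same service reply might be rendered in a neutral style or with positive emotional adjectives and extra exclamation points. The template wording and the render_reply function are illustrative assumptions, not the researchers’ actual materials.

```python
# Minimal sketch of the study's manipulation: the same service reply
# rendered either neutrally or with positive emotional adjectives and
# extra exclamation points. Wording and function name are illustrative
# assumptions, not the researchers' actual materials.

def render_reply(order_id: str, emotional: bool) -> str:
    if emotional:
        return (f"I'm delighted to help! I've created new order {order_id} "
                f"for your missing item, and I'm excited to get this resolved!")
    return (f"I can help with that. New order {order_id} has been created "
            f"for your missing item.")

print(render_reply("A-1042", emotional=True))
print(render_reply("A-1042", emotional=False))
```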
The first study examined whether customers responded differently to positive emotions depending on whether they knew the service agent was a bot or a human. Participants were told they were seeking assistance for items missing from their retail orders. The 155 participants were then randomly assigned to one of four scenarios: a human agent with neutral emotions, a human agent with positive emotions, a bot with neutral emotions, or a bot with positive emotions. The researchers then asked participants about service quality and overall satisfaction. The results showed that positive emotions improved evaluations when a human agent expressed them but had no effect when the bot did.
A second study examined whether customers’ personal expectations shaped their responses to bots. In this case, 88 participants imagined returning a textbook and were randomly assigned to chat with either an emotionally positive or a neutral bot. After the chat, they rated the extent to which they were communal-oriented (social) or exchange-oriented (transactional). Communal-oriented participants were more likely to appreciate the emotionally positive bot, but for exchange-oriented participants, who expected the interaction to be purely transactional, the bot’s positive emotions made the experience worse.
“Our work enables businesses to understand customers’ expectations of AI services before casually equipping them with emotion-expressing capabilities,” Zhang said.
The final study, in which 177 undergraduates were randomly assigned to either an emotional or a non-emotional bot, explored why a bot’s positive emotions can influence customers’ evaluations. The results help explain why the emotional bots in the earlier studies didn’t work as well as expected: because customers don’t expect machines to have emotions, they can react negatively when bots express them.
The findings suggest that deploying positive emotions in chatbots is challenging because businesses rarely know customers’ predispositions and expectations going into an interaction. A happy chatbot can lead to an unhappy customer.
“Our findings suggest that the positive impact of expressing positive emotions on service evaluations may not be realized when the source of the emotions is not human,” Zhang said. “Practitioners should be cautious about the promises of equipping AI agents with emotion-expressing capabilities.”