People Prefer Virtual Assistants That Seem Happy: Study
A cheerful virtual assistant improves the experience of the people interacting with it, according to a new study in the Journal of Retailing and Consumer Services. When the assistant's text came off as happy, the humans conversing with the AI tended to mirror that mood and rated the experience higher than they did a virtual assistant speaking in a more neutral tone.
Virtually Happy
The idea of the study was to determine how people react to virtual assistants presented as happy or enthusiastic, rather than with the neutral emotional tone they typically use. In the study's two parts, participants interacted with a chatbot on a hypothetical online retailer's site or an online etiquette advice website. The chatbot either spoke in a standard neutral tone or had been designed to come off as happy through positive words, exclamation points, and other indicators. When asked about the experience afterward, participants almost universally gave the happier chatbot higher marks, calling it a better experience and one they'd be willing to repeat.
“The experiments showed that VA text manipulated to signal VA happiness boosts overall VA evaluations, and the field study showed that perceived VA happiness is positively associated with overall VA evaluations,” the researchers wrote. “Taken together, the findings indicate that we humans are so hardwired for interactions with other humans that we react to VA display of happiness in ways that resemble our reactions when we are exposed to happy humans. The findings also provide designers of VAs and service marketers with a set of easily implemented linguistic elements that can be employed to make VAs appear happy in service encounters.”
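The paper doesn't publish its exact wording rules, but as a rough illustration of how easily such cues can be layered onto a chatbot's output, consider the following minimal Python sketch. The opener list, function name, and punctuation rule here are hypothetical assumptions, not drawn from the study:

import random

# Illustrative "happiness" cues: upbeat openers plus exclamation points,
# in the spirit of the linguistic elements the researchers describe.
POSITIVE_OPENERS = ["Happy to help!", "Great news!", "Absolutely!"]

def happify(neutral_reply: str) -> str:
    """Prepend an upbeat opener and turn a flat closing period into an exclamation point."""
    opener = random.choice(POSITIVE_OPENERS)
    reply = neutral_reply.rstrip()
    if reply.endswith("."):
        reply = reply[:-1] + "!"
    return f"{opener} {reply}"

# A neutral answer becomes its "happy" variant:
print(happify("Your order will arrive in three days."))
# e.g. "Great news! Your order will arrive in three days!"

The point of the sketch is how small the intervention is: a designer could apply this kind of post-processing to any existing chatbot's replies without retraining anything.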
Setting AI Mood
There are many experiments in making chatbots and virtual assistants more pleasing and interesting to interact with, and modeling them around a specific mood fits that strategy. That experimentation has also led to efforts like the Selected Pairs Of Learnable ImprovisatioN (SPOLIN) project, which applies the principles of improvisational comedy to AI. Built on recordings of Paul F. Tompkins's podcast Spontaneanation, SPOLIN tries to get AI to build on and expand human input instead of ever denying it. There's also evidence that the way a chatbot is framed can affect how well people think of it. For instance, a Stanford University study found that people generally prefer talking to chatbots described to them as toddlers rather than as smart experts. It's only a metaphor, but one with a deep impact on how users talk to and report on a chatbot. Combining that framing with explicit mood indicators that a chatbot is happy or sad could make an enormous difference in determining what kinds of chatbots people prefer to talk to.