Emory University Wins Alexa Prize Socialbot Grand Challenge
Emory University has won this year’s Alexa Prize Socialbot Grand Challenge with its Emora socialbot, earning the $500,000 grand prize. Chosen over three rounds from an initial field of 15 competitors, Emora earned the highest ratings for its ability to converse and behave as much like a human as possible.
The Alexa Prize began in 2016, and this year marks the third time the competition has run. Emory and Emora join the University of Washington, whose Sounding Board socialbot won the first year, and the University of California, Davis, which won the most recent competition with its socialbot, Gunrock.
The finalists released their socialbots for testing with Alexa users as well as the judges throughout the semifinals. This year, the socialbots held more than 793,000 conversations over 19,000 hours with Alexa users. The judges rate each socialbot on several metrics, with a grand challenge goal of a 4.0 out of 5 final score. That would mean the socialbot can conduct a coherent, engaging conversation that lasts at least 20 minutes at least two-thirds of the time. No socialbot in the competition’s history has reached that goal, although Emory’s 3.81 rating came very close.

Emory wasn’t the only school to win money. Stanford University and its Chirpy Cardinal socialbot came in second with a 3.17 score, earning $100,000, while Czech Technical University came in third with a 3.14 score, winning $50,000 for its Alquist socialbot.
“The work people have been putting into incorporating common sense knowledge and common sense reasoning into dialogue systems is one of the most interesting directions of the current conversational AI field,” Emory team leader Sarah Fillwock said in a statement. “A lot of the common sense knowledge we use is not explicitly detailed in any type of data set as people have learned them through physical experience or inference over time, so there isn’t necessarily any convenient way to currently accomplish this goal. There have been a lot of attempts to see how far a language modeling approach to dialogue agents can go, but even using huge dialogue data sets and highly complex models still results in hit-and-miss success at common sense information. I am really looking forward to the dialogue approaches and dialogue resources that more explicitly try to model this type of common sense knowledge.”
The ongoing COVID-19 health crisis became a factor during the challenge this year. The competitors included new conversational topics and adapted their AI to account for this omnipresent part of people’s lives. Because people were at home more and often more alone, many talked to the socialbots more than they might have at another time, and the teams began receiving comments from users who were happy to have a socialbot to talk to that seemed like a real person.
“When COVID became a significant societal issue, we tried two things: we had an experience-oriented COVID topic where our bot discussed with people how they felt about COVID in a sympathetic and reassuring atmosphere, and we had a fact-oriented COVID topic that gave objective information,” Fillwock said. “This really gave us some empirical evidence that social agents have a strong potential to be helpful in times of turmoil by giving people a safe and caring space to talk about these major events in their life since people responded positively to our approach at doing this.”
The socialbot challenge is one of several contests Amazon runs to attract new ideas and talent for Alexa. Others include the Alexa Skills Challenge for Kids, the Alexa Life Hack Challenge, and the Lego Mindstorms Voice Challenge: Powered by Alexa. The skills built by those developers help convince people to use Alexa in their homes, cars, and mobile devices by making Alexa more personable and adding interesting voice apps to the skill store. Amazon hasn’t said when the next socialbot challenge will begin, but you can still try out Emora or any of the other competitors by saying, “Alexa, let’s chat.”