Alexa Prize Grand Challenge 4 Awarded to Team from Czech Technical University

Amazon announced today that the Alexa Prize Socialbot Grand Challenge 4 winner was team Alquist from Czech Technical University. The team won the “$500,000 first prize for earning the top score in the finals competition,” according to the announcement. Stanford University placed second and the University of Buffalo third, earning awards of $100,000 and $50,000, respectively. These awards are in addition to the $250,000 grant Amazon provided to support each of the nine teams that participated over the 2020 – 2021 academic year.

The 2021 competition marked the fourth appearance of the Czech Technical University team in the finals and its first win. Past winners include the University of Washington, the University of California Davis, and Emory University. Nine teams participated in Grand Challenge 4, and five advanced to the finals.

Alexa Prize Challenges

The Alexa Prize was created in 2016, with the first award presented in 2017. In 2019, participant selection was moved from the fall to the spring to better align with the academic year, so there are now winners for 2017, 2018, 2020, and 2021. This competition is known as the Grand Challenge. In 2021, Amazon introduced a second Alexa Prize competition, the Taskbot Challenge, in which teams focus on creating voice apps that can execute complex tasks and take instructions by voice.

For the Grand Challenge, the objective is for the voice app to hold a general conversation with a user. During the challenge period, when all socialbots compete, teams are ranked on two criteria:

  • The rating users give, on a 1-5 scale, of their willingness to speak with the bot again
  • How long the average conversation lasts

The finalists are then evaluated by a panel of judges. If the winning team achieves an average conversation time with the judges in excess of 20 minutes for two-thirds of the sessions and a rating of 4 or higher, it wins a $1 million grand prize. Czech Technical University received a 3.28 average judges’ rating and an average interaction time during the final judging of 14 minutes and 14 seconds. The University of Buffalo actually achieved a higher average conversation time than the eventual winner, at 14 minutes and 45 seconds, but had a lower average judges’ rating. Conversation times were up significantly from previous years.
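The grand prize criteria described above can be expressed as a simple check. The function name and session data below are illustrative assumptions; Amazon's exact evaluation procedure has not been published.

```python
# Hypothetical sketch of the Grand Challenge $1 million prize criteria:
# two-thirds of judged sessions must exceed 20 minutes, and the average
# judges' rating must be 4 or higher. Names and data are assumptions.

def qualifies_for_grand_prize(session_minutes, avg_rating):
    """Return True if the team meets both stated thresholds."""
    long_sessions = sum(1 for m in session_minutes if m > 20)
    two_thirds_long = long_sessions >= (2 / 3) * len(session_minutes)
    return two_thirds_long and avg_rating >= 4.0

# Illustrative numbers in the neighborhood of the reported averages:
# roughly 14-minute sessions and a 3.28 rating fall short of both bars.
print(qualifies_for_grand_prize([14.2, 15.0, 13.5], 3.28))  # False
```

By these thresholds, even the winning team's reported averages leave a sizable gap to the grand prize, which is consistent with the fact that the $1 million has never been claimed in the challenge's history.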

Each of the teams produced a research paper on its findings. The prize money is certainly a motivation for the teams. However, the more significant benefit may be the opportunity to have so many users interact with their socialbots. Amazon says that since 2017, the socialbots have engaged in more than 900,000 hours of conversations with Alexa users. That translates into over 21,000 hours per socialbot and well over 100,000 sessions. This scale of interaction and data collection would be difficult for a university research team to match under other circumstances.

Do the Conversations Meet the Objective?

It is Amazon’s competition, and the company can decide whether the teams are meeting the overall objectives. However, I am not sure we are making much progress in general conversation, even though there is clear progress in something that is important to Amazon.

I have interacted with many of the socialbots each year since the competition began. You can try this year’s socialbots by saying, “Alexa, let’s chat.” When you say that phrase, Alexa randomly selects one of the socialbots. That means you never know whether you have tried all of the contestants. I have certainly interacted with several again this year.

There is a general improvement in gathering information from the user. The socialbots seem to be better at asking questions, and some are very good at transitioning the conversation into sharing interesting information. There also seems to be a higher success rate in interpreting user responses. This is likely why the average conversation times were much higher in 2021 than in previous years.

Although the conversations mostly entail Alexa asking repeated questions of the user, finding interesting questions to ask followed by novel information is a good way to extend a conversation. Importantly, the improved ability to gather information could be very useful to Amazon as it seeks to create more personalization for Alexa users in the future. Keep in mind that Amazon gets to see all of the data generated, since it flows through Alexa, and can learn new techniques to improve the voice assistant.

Where the socialbots aren’t seeing as much improvement is in actual two-sided conversation. The interactions too often seem like an interrogation that is desperately trying to get the user to keep answering questions to increase the conversation duration. Very often the conversations fail when you try to ask a question of the socialbot. It then scrambles to ask you another question to keep you engaged. This might not be the ultimate goal or the best way to win the contest. However, creating more engaging two-way conversations with equal participation might just be what is required to get past the 20-minute barrier. It also would fundamentally change the nature of the interactions.

The socialbot challenge is the opposite of a typical Alexa interaction. Normally, the user makes requests or poses questions and Alexa responds. The socialbots turn this around: the bot asks the questions, and the user typically responds. These are multi-turn conversations in form but not in spirit. Maybe Grand Challenge 5 will bring some more two-sided conversational balance.
