
New Analysis Says NYT Best Sellers Will Lose $17 Million in 2019 Because of Voice Search Issues; Google Assistant and Cortana Perform Best

  • An analysis by Score Publishing suggests NYT bestsellers will lose $17 million in sales in 2019 due to poor search discovery on voice assistants
  • The voice assistants overall answered only 43.1% of the queries, but that figure rose to 55% when the toughest of the four questions was removed
  • Google Assistant was the top performer, successfully answering 72.5% of the queries
  • Microsoft Cortana and Amazon Alexa followed with 60.8% and 44.2% respectively

A new analysis by Score Publishing, presented today at the London Book Fair Quantum conference, estimates that New York Times bestselling authors and their publishers will lose $17 million this year in book sales because of poor voice assistant search recognition. The analysis was conducted by asking Amazon Alexa, Apple Siri, Google Assistant, Microsoft Cortana, and Samsung Bixby about 30 New York Times bestsellers in both fiction and non-fiction categories from March 10, 2019, all from different authors and publishers. The questions included:

  • Simply stating the book’s title and author. For example, “Becoming, by Michelle Obama”
  • Saying, “I want to read [BOOK TITLE] by [AUTHOR].”
  • Saying, “I want to listen to [BOOK TITLE] by [AUTHOR].”
  • Saying, “I want to order [BOOK TITLE] by [AUTHOR].”

Score Publishing assumed that only about 20% of these failed queries led to a lost sale and that the average revenue per book is $12. The analysis then combined those assumptions with publicly available data on the volume of questions asked of voice assistants to arrive at the $17 million loss figure.
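The arithmetic behind the estimate can be sketched in a few lines. The 20% lost-sale share and $12 average revenue come from the article; the annual query volume is not stated, so the figure used below is back-solved from the $17 million result and should be read as an inferred placeholder, not the study's actual input.

```python
# Illustrative sketch of the loss arithmetic. FAILURE_RATE comes from the
# study's 43.1% overall success rate; the other two inputs are the study's
# stated assumptions.
FAILURE_RATE = 1 - 0.431      # 56.9% of book queries went unanswered
LOST_SALE_SHARE = 0.20        # assumed share of failed queries that cost a sale
REVENUE_PER_BOOK = 12.0       # assumed average revenue per book, in dollars

def estimated_loss(annual_book_queries):
    """Estimated annual revenue lost to failed voice searches."""
    failed_queries = annual_book_queries * FAILURE_RATE
    return failed_queries * LOST_SALE_SHARE * REVENUE_PER_BOOK

# Back-solving: roughly 12.45 million annual book queries would yield
# approximately $17 million in lost sales.
# estimated_loss(12_450_000) ≈ 17.0 million
```

Each failed query is worth about $2.40 in expected lost revenue (20% of $12), so the $17 million figure implies on the order of seven million failed queries per year under these assumptions.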

About a 50% Success Rate

Each of the four queries was posed to the voice assistants and assigned a score of “1” if there was an appropriate response and a “0” if not. So, in theory, a book could achieve a score of up to “5” on any specific question if all five voice assistants recognized it and responded correctly, and up to 20 points if all five voice assistants answered all four questions correctly.
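Under that scheme, a book's composite score reduces to counting correct (assistant, question) pairs. A minimal sketch, with illustrative names:

```python
# Sketch of the study's scoring scheme: one point per correct
# (assistant, question) response, so a book can score up to 5 on any
# single question and up to 20 overall.
ASSISTANTS = ["Alexa", "Siri", "Google Assistant", "Cortana", "Bixby"]
QUESTIONS = ["title and author", "read", "listen to", "order"]

def composite_score(responses):
    """Count correct responses; `responses` maps (assistant, question) -> bool."""
    return sum(
        1
        for assistant in ASSISTANTS
        for question in QUESTIONS
        if responses.get((assistant, question), False)
    )

# A book every assistant handles perfectly scores the maximum of 20:
perfect = {(a, q): True for a in ASSISTANTS for q in QUESTIONS}
composite_score(perfect)  # 20
```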

Overall, the voice assistants were able to answer only 43.1% of the questions. However, this figure is a bit misleading, as the “listen to” question scored only 8.0% across all book queries while the other three questions were tightly bunched between 53% and 56%. It is fair to say that “listen to” is an important question given that voice assistants are well suited to delivering audio, especially through smart speakers. However, the more traditional book questions that were common before the recent rise of audiobooks were successful just over half of the time. Amazon is often thought of as the king of audiobooks because of its Audible service, but Alexa succeeded on the “listen to” question for only five books out of 30, just one point ahead of Bixby.

Which Books Were Most and Least Discoverable Through Voice Assistants

The book with the lowest overall composite score came from the non-fiction category. “Bad Blood” by John Carreyrou scored only a 3 and was recognized as a book by both Google Assistant and Cortana. Microsoft Cortana was also able to direct the user to a place to purchase the book. No other voice assistant recognized the book for any of the questions. This is not a new book. It was first published in May 2018, so there has been plenty of time for the search engines behind the voice assistants to learn about it.

“Where The Crawdads Sing” by Delia Owens was the fiction book with the lowest overall composite score of 6. This also isn’t a new book. It was originally published in August 2018. Only Google Assistant and Cortana recognized it at all with both responding correctly to three of the four questions.

In the fiction category, the best performer was “Devil’s Daughter” by Lisa Kleypas with a composite score of 11. Interestingly, this was just published in February 2019. The title and author were recognized by Google Assistant, Cortana, and Bixby. Alexa, Google Assistant, and Cortana were able to answer questions about reading and ordering.

However, three books in the non-fiction category tied for the most recognized with composite scores of 13. All five voice assistants correctly answered the “order” question for “Becoming” by Michelle Obama, and Alexa was the only assistant to correctly direct the user to a place where they could listen to the book. “Born a Crime” by Trevor Noah also notched a 13 and had a perfect score from Alexa. Google Assistant and Cortana both successfully answered three out of four questions, with “listen to” once again being the trouble spot. Brad Meltzer’s “The First Conspiracy” scored a 13, but Alexa answered only one question correctly compared to two for Siri and three each for Google Assistant and Cortana. The surprise performer for this title was Bixby, which successfully answered all four questions.

Newer books did tend to receive a higher match rate on the questions, but the pattern was not consistent. For example, “In the Closet of the Vatican” by Frederic Martel was published in February 2019 and had a composite score of 5, while “Devil’s Daughter,” published in the same month, had one of the highest scores. All four of the top-scoring titles were published between November 2018 and February 2019. So, there may be some algorithmic bias toward more recent information, but there are clearly exceptions.

Google Assistant and Microsoft Cortana Lead the Pack

When evaluating voice assistant performance on the book queries, Google Assistant and Microsoft Cortana were significantly better than Amazon Alexa, Samsung Bixby, and Apple Siri. Google Assistant answered 87 questions correctly out of a possible 120 followed by Cortana with 73. Alexa was next with 53 followed by Bixby with 28 and Siri with an anemic 18. Siri was unable to answer any questions about 18 of the 30 titles. Alexa had a similar problem with four books while Google Assistant and Cortana answered at least one question about every title.

While Google Assistant answered 72.5% of the questions correctly, and 94.4% if you remove the “listen to” question, Siri scored only 15% and 18.9% respectively. Alexa, from Amazon, the world’s largest bookseller, scored just 44.2% and 53.3% respectively. A number of recent studies have shown search performance tightening across the leading voice assistants. However, the Score Publishing study shows that there are still some segments where the differential is significant. You can read the original article from What’s New In Publishing here.
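The percentages above follow directly from the raw counts: each assistant faced 120 questions (30 books × 4 questions), and dropping “listen to” leaves 90. In the sketch below, the per-assistant “listen to” counts are not stated outright in the article; they are inferred from the reported percentages (and, for Alexa and Bixby, from the “five books, one point ahead of Bixby” note), so treat them as assumptions.

```python
# Reproducing the headline success rates from the per-assistant totals.
TOTAL_QUESTIONS = 30 * 4          # 30 books, 4 questions each
LISTEN_QUESTIONS = 30             # one "listen to" question per book

correct = {"Google Assistant": 87, "Cortana": 73, "Alexa": 53,
           "Bixby": 28, "Siri": 18}
# Inferred "listen to" counts (assumption, back-solved from the article's
# percentages; they sum to 12, i.e. 8.0% of the 150 "listen to" queries):
listen_correct = {"Google Assistant": 2, "Cortana": 0, "Alexa": 5,
                  "Bixby": 4, "Siri": 1}

def success_rate(assistant, exclude_listen=False):
    """Percentage of questions answered correctly, rounded to one decimal."""
    total, hits = TOTAL_QUESTIONS, correct[assistant]
    if exclude_listen:
        total -= LISTEN_QUESTIONS
        hits -= listen_correct[assistant]
    return round(100 * hits / total, 1)

success_rate("Google Assistant")                       # 72.5
success_rate("Google Assistant", exclude_listen=True)  # 94.4
```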

 
