2019 Predictions From 35 Voice Industry Leaders
Our list of 2019 predictions for the voice industry was contributed by 35 Voice Insider readers. One email was sent out, and the responses are below, most of them unedited (with the exception of a few of the more verbose responses). This group is among the most well-versed in the industry in terms of consumer adoption, user experience, technology advances, and the next big thing. They did not disappoint. In fact, about two-thirds of the predictions are unique. You will see some overlap, but surprisingly little. What does this dispersion tell us? Well, there is no conventional wisdom about what will be the next “big thing” for voice in 2019; instead, there are many ideas about important trends on the horizon.
The predictions are presented in the order they arrived, so no ranking is implied. The question was simple: “What is your prediction for Voice AI, voice assistants, smart speakers or related activities that will be an important development in 2019, and why?” Let me know what you think on Twitter. Here you go.
Stuart Crane, Voice Metrics (Founder & CEO)
The proliferation of multi-modal and voice displays, particularly the purchase/usage of devices such as Echo Show, Google Home Hub, and Facebook Portal, will be the most important development for Voice in 2019. The biggest use cases for Voice are those done in the main part of the home (kitchen, family room), such as playing music, asking questions, controlling smart home devices, etc. — and by having more consumers experience the multi-modal capabilities of Voice, and with more skills and capabilities taking advantage of multi-modal, Voice will be elevated to a “must-have” in every home, just as smartphones are a “must-have” in every pocket.
Pat Higbie, XAPPmedia (Co-Founder & CEO)
Voice search in the form of implicit invocation of Google Assistant actions and nameless invocation of Alexa skills will become essential to the discovery of third-party voice apps and a race will be on in every product category for position 1 in search across both platforms.
Jon Stine, J. Christopher CCV LLC (Principal)
First, we’ll go through a perceived dive on the hype curve, given that early, bloated and misunderstood predictions won’t come to pass; skeptical commentators and bloggers will step into the spotlight and enjoy their moment. Second, enterprise awareness will begin to seep into the C-suite and take first steps beyond a “we’ll do skills” response.
Jason Fields, Voicify (Chief Strategy Officer)
2019 is going to be the year voice and IVAs are integrated into brands’ overall CX strategies. I think a barrier for most brands has been being forced to show direct ROI, which is especially hard for companies that are laggards with technology. But wide adoption by consumers has a way of forcing the hands of executives, specifically when they can see that support and retention are not only viable metrics for new experience programs, but also give them competitive differentiation.
Dave Kemp, Oaktree Products / Future Ear (Business Development Manager)
I believe that due to Alexa Mobile Accessories Kit (AMAK) now being widely opened up to mobile OEMs, we’ll begin to see widespread Alexa usage on third-party mobile devices, including headphones, hearables and hearing aids. I believe this will spur Google to follow suit with a similar SDK to match Amazon’s Mobile Accessory Kit.
Bart van der Meer, Klik Proces (Owner)
At the start of 2019, people in pretty much all countries will have access to smart speakers and will start experimenting with them: asking silly questions, playing games, and doing the top activities like checking weather forecasts. So this will be the year that major traffic will be pouring into voice search AND the year that companies can be “first” in their market.
Rani Molla, Recode (Data Editor)
Voice assistants will get notably better at understanding natural language. If and when this happens, it will be a huge turning point for voice, bringing it from a novelty to a resource.
Todd Mozer, Sensory (CEO)
Voice Assistants are still not very smart and a lot of their functionality today is performing voice search or simple setting functions like alarms. Voice assistants will be getting smarter in 2019 and beyond by utilizing more sensor data to provide more relevant assistance, and more functionality will go embedded and will rely on user specific knowledge to better serve the specific needs of individuals rather than the general population. Along with this will be the need to better identify individual users to build data specific to those users. Identification can be done with voice or vision biometrics.
Niko Vuori, Drivetime.fm (Founder & CEO)
2019 will be the year that voice makes it into the car in a big way – similar to how 2017 was the breakout year for voice in the home on Smart Speakers. With both Amazon (Alexa/Echo Auto) and Google (Assistant/Android Auto) vying for supremacy in the car, and with the car the most natural environment in which voice just makes sense, it is inevitable that the automobile will be the next major battleground.
Owen Brown, Starbutter AI (CTO)
Sub-1-second latency on mobile assistants (Google Actions) over 5G will be a complete game changer. If you look at all the research that AT&T did when they migrated to sending data over IP, they found that people won’t tolerate delays in conversation greater than 1.5 seconds. People have been trained to expect 4-second delays in responses from Alexa and Google Assistant. The first app to break the 1-second barrier will have a remarkable impact.
Braden Ream, Voiceflow (CEO)
I think CanFulfillIntentRequest will begin to become the dominant method of Alexa skill discovery by late 2019. This will dramatically increase the number of skills as millions of third-party skills become connected to Alexa, which will act as a “search engine for services,” powered by CanFulfillIntentRequest.
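For context, CanFulfillIntentRequest is the mechanism by which Alexa asks a skill, without an explicit invocation name, whether it could handle a given request. A minimal sketch of the response payload a skill returns is below; the intent context and the "destination" slot name are hypothetical examples, while the envelope shape follows Alexa's documented format:

```python
# Sketch of a CanFulfillIntentRequest response body. Slot names used
# here (e.g. "destination") are hypothetical illustrations.

def build_can_fulfill_response(can_fulfill: str, slots: dict) -> dict:
    """Build the JSON body a skill returns when Alexa asks whether
    this skill can handle a name-free request."""
    return {
        "version": "1.0",
        "response": {
            "canFulfillIntent": {
                # "YES", "NO", or "MAYBE"
                "canFulfill": can_fulfill,
                "slots": {
                    name: {"canUnderstand": verdict, "canFulfill": verdict}
                    for name, verdict in slots.items()
                },
            }
        },
    }

response = build_can_fulfill_response("YES", {"destination": "YES"})
print(response["response"]["canFulfillIntent"]["canFulfill"])  # YES
```

The discovery idea in the prediction above rests on exactly this exchange: Alexa fans a user utterance out to candidate skills, and the ones answering "YES" become search results.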
Jan König, Jovo (Co-founder)
In 2019, voice will move beyond the smart speaker and the home. We will see more deep integrations into everyday tools, with smart earbuds and cars being just the beginning. It will be more and more natural for people to use their voice when they interact with technology, no matter where they are.
Dana Young, Virtual Concierge Service
The use case for voice in the hospitality industry is firmly established. To date, it has been a lot of talk with little actual adoption. In 2019 Amazon will make Alexa for Hospitality generally available, and there will be many stories about the adoption of voice tech in hotels. Google will need to respond and will make this a focus internally, but won’t be ready for an announcement in 2019, giving Amazon a substantial lead in this space.
Ahmed Bouzid, Witlingo (Founder & CEO)
2019 will be the year when the voice assistant begins to be viewed by enterprises as a business-impacting channel, taking its place alongside email, the website, social media, and plain old telephone services, including Interactive Voice Response self-service. The most immediate entry point will be customer care: enterprises providing their customers with the ability to get answers about products and services by just asking questions of their Alexa, Google Assistant, Microsoft Cortana, Samsung Bixby and other assistants that are emerging.
Lin Nie, AnswerLab (User Experience Researcher)
What does a context-aware AI assistant look like? Imagine browsing text or graphic-heavy stock market charts on a website; you see a peak in price, move the mouse over it, and want to say “Alexa, if the price goes below here (cursor location on the screen), sell.” For this to work, Alexa needs to see or minimally utilize the cursor location to know where “here” is, and give a fully context-aware response. This is the future users want. This year’s Echo Wall Clock is only a very small step in this direction.
Why better hand-off experiences? Voice devices need to meet users where they are and smooth the transitions from there. Handoffs between the digital assistant and other platforms are becoming more common, e.g., from a smart speaker without a screen to a screen-based device such as a phone, especially in retail voice use cases. Users are often disappointed by not knowing where to go next, or by simply being dropped when the AI assistant doesn’t seamlessly transition them to another platform. In AnswerLab’s in-home, longitudinal user studies conducted with older adults, we observed that whenever Alexa/Google Assistant told our participants to “Go to your Google/Alexa app,” the assistant lost the user. In some sense, the existing hand-off experience is not designed for the Baby Boomer generation. They’re not the app generation.
Why move away from the app-based ecosystem? Currently, third-party voice app adoption is low; the majority of users are not using their voice assistant for more than quick, in-the-moment information retrieval, turning on lights, and listening to music. A growing list of voice assistant skills are buried, and users are not discovering them over time.
Carl Robinson, Voice Tech Podcast (Host)
2019 will see greater demand for voice technologies that enhance privacy and security, driven by accelerating enterprise adoption of voice assistants and increased consumer awareness. These include open source libraries, edge computing, blockchain and even whisper technology. We’ll also see products that squeeze voice data for more and more information, including voice emotion analytics, audio event classification, speaker identification, and greater contextual awareness by correlating multi-modal inputs.
Roger Kibbe, Voice Craft (Principal)
2019 will be the peak year for smart speaker sales. As we move into 2020, assistants will be embedded in appliances, TVs, electronics and more. The ubiquitous availability of an assistant will mark the move well into the early-majority, if not late-majority, phase of the technology adoption lifecycle.
Tim Kahle, 169 Labs (Co-founder)
With the fast-growing integration of voice technology, we will see many more diversified and new product ideas on the market. The next disruption will take place in mobile, everyday, out-of-home situations. I’m convinced that Amazon will introduce a device with a mobile hotspot on board so that users can be always-on, everywhere.
Brian Roemmele, VoiceFirst.Expert (Founder & CEO)
There will be the start of what I call the “cold winter” in Voice First during the latter part of 2019. These emergent phenomena will promulgate because of the apparent declining novelty of the current popular Voice First platforms. The reasons are manifold and are manifested through fatigue created by real or perceived limitations of current systems’ presentiment and versioning. This is simple to correct but hard to backtrack from the premise all current platforms are using. Future paths will create candidate systems that expand new technologies that congruently address real or perceived limitations of Voice First systems and will have robust plans for presentiment and versioning. The “cold winter” will not significantly slow adoption, but it will slow uses and use cases, along with a decline in some developer interest, until the problem is first identified by the companies and then solved.
- Amazon will release the “Alexa Phone” with a surprising feature set including a new wireless ear and microphone technology.
- Amazon will acquire a well-known company for their AI technology and Voice First patents making it one of the largest patent holders of this tech in the world.
- Apple will begin to signal a SiriOS path. With new iOS features and new AirPod / Apple Watch designs.
- Apple will make a very large acquisition of a company related to AI and Voice First.
- Bixby/Viv will begin to dominate appliance and automobile Voice First installed base starting a rush to add Voice First to a majority of new appliances.
- A start-up unknown today will demonstrate a very advanced Voice First platform that is three generations ahead of other platforms in the market.
Charles Andrew Whatley, Instreamatic.ai (SVP Business Development)
I believe that sensing emotion and mood will have a huge impact on Voice AI.
Chris Geison, AnswerLab (Principal UX Researcher, Emerging Technologies)
My hope: Apple finally gets Siri-ous. My prediction: They won’t. Also, the center of innovation in voice-mediated AI will start to shift from the Western U.S. to China, led by Baidu, Alibaba, and to a lesser extent, Tencent and Xiaomi. Monetization (true monetization, finally) and voice authentication will unleash a wave of creative energy and development. The car as a locus for voice assistants will finally get its due (led by Hound and aftermarket products), but automakers won’t yet truly integrate voice into the car (e.g., rethinking the driving experience, building in access to non-audio controls). Maybe in 2019, maybe 2020, Tesla will step up and show them how it’s done.
“Hearables”—be they agent-enabled hearing aids or Apple AirPods—will see some encouraging developments but insufficient adoption beyond niche use cases to really take off, yet. Voice assistant “intelligence” will continue to improve. Compound requests, contextual understanding, and personalization will all contribute to richer experiences. Also, we will start seeing examples of “affective computing” via voice assistant—emotionally-personalized experiences based on biometric & behavioral data. Initial forays into this space will be clumsy, tech ethicists will sound the alarm about the risks of misuse and unintended consequences, but many users will find them engaging/endearing.
Voice-enabled hardware manufacturers will continue to release products that assume best-case scenarios until, later in the year, one experiences a major hack or data breach. Their tone-deaf response will add insult to injury. Congressional hearings will be held, and our elected officials will make obvious, once again, that they’re ill-equipped for the challenges of the 21st century. After the hearings, nothing will change.
Yann Lechelle, Snips (Chief Operations Officer)
Integration of voice will be one of the main topics on the agenda of OEMs. However, OEMs already know what they want and why they need voice, so they will be looking for customised solutions to match their customers’ needs.
OEMs will also be more and more cautious about their brand territory. They have to make a decision: either cooperate with Google/Amazon, losing their brand value and giving part of their business to the big tech giants, or look for alternative solutions. OEMs and end users will also be more and more cautious about data privacy. This year has shown that it is a very important value for end users, so OEMs will try to use privacy as additional customer value.
There will be less need and less interest from OEMs for generic voice assistants. Having an assistant that is use-case focused can actually help OEMs provide better-performing assistants to their customers: instead of comparing a request against all the English, French, or other language data existing in the cloud, alternative solutions (data generation) allow training a voice assistant with the equivalent of months of user data before it is even launched, and without access to the cloud. This also preserves user data privacy.
Stas Tushinskiy, Instreamatic.ai (CEO)
The important thing about AI, which the public doesn’t hear a lot about, is that AI doesn’t have a mind: it can’t set its own goals, it can’t feel emotions, it doesn’t have a sense of universal purpose, and so on. AI can learn how to do certain things at a fantastic pace and with great quality, but it doesn’t know ethics. It cannot make right or wrong decisions from an ethical point of view.
I expect next year we’ll start seeing more and more demand for philosophers in the AI space. Philosophers have been theorizing for centuries, and now AI enables the first real-world, practical implementation of those theories. Somebody has to answer questions like “who should be hit by an autonomous vehicle: a baby or a granny?” and “should Alexa constantly listen to and analyze what’s going on in the house so it can call emergency services on its own if something happens?”
Milkana Brace, Jargon (Founder & CEO)
John Gillilan, bondad.fm (Founder)
Tom Parish, ConverseNow.ai (VP Marketing)
Giulio Caperdoni, Vidiemme Consulting (Head of Innovation)
Greg Hedges, RAIN (VP of Emerging Experiences)
Peter Erickson, Modev (CEO/Founder)
Heidi Culbertson, Marvee LLC (CEO)
Omar Tawakol, Voicea (CEO)
In 2019, voice assistants will continue to be untethered from specific devices. The last few years have seen Amazon, Google and other companies create voice assistants that are device-dependent, but this year we predict more of these assistants will become device-agnostic and woven into the environment around you. The goal is to have most of your interaction with a machine be enabled by voice in whatever place is most convenient for you. That means assistants in conference calls and in conference rooms at work. More voice-activated engagement in your car and in your home, but directly with devices and appliances where an exchange of information creates value or saves time.
2019 will also see the introduction and adoption of a new type of enterprise voice assistant that balances people’s thirst for note taking with their desire to maintain privacy and avoid future discoverability. This new type of voice assistant will capture important actions and decisions and impact workflow without having a recording or transcript persist beyond the meeting (think of it as a Snapchat-style interaction). We have seen many users get excited about voice assistants but, at the same time, want to maintain the lower risk of conversations that are ephemeral. This new type of Snapchat-like assistant will strike the right balance.