Speechly Brings Voice AI Features to Unity Metaverse Platform
Speech recognition technology startup Speechly has released a tool for adding voice interfaces to apps built with Unity, the engine widely used for virtual reality and augmented reality development. The Speechly client library for Unity lets developers working on metaverse projects or other AR and VR programs build speech-to-text and natural language understanding into their interactions.
Speechly’s voice API is designed to bring voice interactions into games and other experiences, while Unity has rapidly grown as a platform for AR and VR developers building vivid digital interactions. The new client library links the two: a Unity app streams audio to the Speechly cloud through the voice API, and Speechly returns whatever speech-to-text and voice interaction services the developer wants to include. Speechly has published the client library on GitHub for developers to experiment with.
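The streaming pattern described above — send small audio chunks as they are captured, receive transcript events while audio is still flowing — can be sketched roughly as follows. This is a conceptual illustration only, not Speechly's actual API: `recognize_chunk` stands in for the cloud service, and every name here is hypothetical.

```python
from typing import Iterable, Iterator, Optional

def recognize_chunk(chunk: bytes) -> Optional[str]:
    # Stand-in for the cloud recognizer: in a real integration this would
    # be a network call; here a canned table returns transcript events for
    # some chunks and None for the rest.
    fake_events = {b"chunk-2": "turn left", b"chunk-4": "turn left and jump"}
    return fake_events.get(chunk)

def stream_audio(chunks: Iterable[bytes]) -> Iterator[str]:
    """Send audio chunks one at a time and yield transcript events as
    they arrive, rather than waiting for the whole recording."""
    for chunk in chunks:
        event = recognize_chunk(chunk)
        if event is not None:
            yield event

# Transcripts arrive mid-stream, before the last chunk is sent.
events = list(stream_audio([b"chunk-1", b"chunk-2", b"chunk-3", b"chunk-4"]))
print(events)  # ['turn left', 'turn left and jump']
```

In the real library, the chunks would come from the Unity microphone input and the events would arrive over the network, but the control flow is the same: the app reacts to each event as it lands instead of blocking on a final result.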
“The world of AR/VR, Gaming, and Metaverse experiences continues to grow, so we are excited to release our Speechly Client Library for Unity,” Speechly explained in announcing the new tools. “This release is the result of growing demand for being able to easily add a Real-Time Voice UI as a Feature to these experiences to make them more interactive and easier to navigate.”
Speechly’s low-latency speech recognition API serves as a real-time response service for voice commands. It can start carrying out a command before the user finishes speaking, adjusting as additional words clarify the request. That makes it faster than the traditional approach, in which voice assistants don’t begin processing a command until the user stops talking. Most recently, Speechly partnered with gaming e-commerce and advertising platform Sayollo to integrate a voice interface into its games, letting players request an item they spot in a game and confirm the order without pausing the game or app. Before that, Speechly simplified setting up voice user interfaces, the tools that interpret what a user says and respond appropriately, with a set of templates. The templates reassure users that the AI is listening by pairing the audio interaction with multimodal feedback.
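The "act before the user finishes speaking" behavior described above amounts to handling partial transcripts: the recognizer emits its best hypothesis on every revision, and the app updates its tentative response each time. The sketch below shows that pattern in generic form; `PartialResult` and `IncrementalHandler` are hypothetical names, not part of Speechly's library.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class PartialResult:
    transcript: str   # best hypothesis so far
    is_final: bool    # True once the recognizer commits the utterance

class IncrementalHandler:
    """Tracks the latest hypothesis and fires a callback on every revision,
    so the app can start reacting while the user is still talking."""

    def __init__(self, on_update: Callable[[str, bool], None]) -> None:
        self.on_update = on_update
        self.current = ""

    def feed(self, result: PartialResult) -> None:
        # Only notify the app when the hypothesis actually changes.
        if result.transcript != self.current:
            self.current = result.transcript
            self.on_update(self.current, result.is_final)

# Simulated stream of partial results, as a low-latency recognizer
# might emit them mid-utterance.
updates: List[Tuple[str, bool]] = []
handler = IncrementalHandler(lambda text, final: updates.append((text, final)))
for r in [PartialResult("open", False),
          PartialResult("open the", False),
          PartialResult("open the inventory", True)]:
    handler.feed(r)

print(updates[-1])  # ('open the inventory', True)
```

A game could, for example, start highlighting the inventory button on the first partial result and commit the action only when `is_final` arrives, which is what makes the interaction feel faster than wait-until-silence processing.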
Speechly will be a guest in an upcoming webinar on April 13 at 11:00 am EDT to discuss the use cases associated with this news and others. Register to attend or to receive the video afterward.