Voice AI Startup Tenyx Solves LLM ‘Catastrophic Forgetting’ During Fine-Tuning
Enterprise voice AI startup Tenyx has introduced a new technique to solve the issue of so-called catastrophic forgetting in large language models (LLMs) during fine-tuning. The company’s new methodology enables businesses to customize LLMs for their needs without compromising the models’ core capabilities.
Tenyx Talk
For LLMs, the common approach to fine-tuning exposes models to new data to improve performance on a target task, but it can also unintentionally degrade previously learned skills. Fixing these distortions in large, complex models is hugely difficult. Current solutions also cannot prevent the erosion of safety protections instilled through reinforcement learning from human feedback (RLHF), the mechanism that stops AI systems from producing harmful outputs. Tenyx’s new approach draws on mathematical interpretations of how knowledge is encoded in LLMs, preserving prior knowledge and reasoning while retaining RLHF safeguards after customization.
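Catastrophic forgetting is easy to reproduce at small scale. The toy PyTorch sketch below is not Tenyx’s method; the two-layer network, synthetic tasks, and training settings are illustrative assumptions. It fine-tunes a model on a new task with ordinary gradient descent and measures how accuracy on the original task degrades, which is the failure mode Tenyx’s methodology is designed to avoid.

```python
# Toy demonstration of catastrophic forgetting during naive fine-tuning.
# Not Tenyx's method: the model, synthetic tasks, and hyperparameters are
# illustrative assumptions only.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(offset):
    # Synthetic binary classification: the label depends on a shifted threshold,
    # so different offsets produce related but conflicting tasks.
    x = torch.randn(2000, 16)
    y = (x[:, 0] + offset > 0).long()
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, steps=200, lr=1e-2):
    # Standard full-batch fine-tuning with no forgetting mitigation.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

x_old, y_old = make_task(offset=0.0)   # "prior knowledge"
x_new, y_new = make_task(offset=2.0)   # customization data

train(model, x_old, y_old)
print("old-task accuracy before fine-tuning:", accuracy(model, x_old, y_old))

train(model, x_new, y_new)             # naive fine-tuning on new data only
print("old-task accuracy after fine-tuning: ", accuracy(model, x_old, y_old))
print("new-task accuracy after fine-tuning: ", accuracy(model, x_new, y_new))
```

Running the script typically shows old-task accuracy falling from well above 90 percent to near chance after the second training pass, even as new-task accuracy climbs; forgetting-aware fine-tuning methods aim to keep that first number high while still learning the new data.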
“In the rapidly evolving landscape of AI, our commitment has always been to address its inherent challenges head-on. With this novel methodology, we’re not just pioneering an advanced solution; we’re revolutionizing the way enterprises utilize LLMs,” Tenyx CEO Itamar Arel explained. “Our innovation ensures that businesses no longer have to choose between customization and core capabilities. They can confidently enjoy the best of both worlds.”
Tenyx began as an automated customer service voice AI provider, founded by the team behind drive-thru restaurant voice AI provider Apprente, which McDonald’s acquired in 2019 to form its McD Tech Labs before IBM purchased the division. Tenyx drew inspiration from neuroscience to build an AI that can understand intent and respond accordingly.
That fits with the startup’s new technique, which it piloted against popular LLM fine-tuning methods from OpenAI, Anthropic, and others. Results showed Tenyx’s approach kept models safer and more accurate in enterprise use cases. For example, Tenyx mitigated catastrophic forgetting roughly three times better than alternatives, losing just 3% of prior knowledge compared to losses of 10% to 40%. RLHF safety protections eroded by only 11%, versus 66% to 94% for the other methods. This safer fine-tuning approach is especially relevant in light of evolving regulatory standards, such as the White House’s executive order on Safe, Secure, and Trustworthy AI.