Intel and Dell Create Voice Cloning Book for Those With Motor Neuron Disease
Intel, Dell, and Rolls-Royce have created a digital tool to preserve and clone the voices of people with motor neuron disease (MND), also known as ALS, before they lose their ability to speak. The companies worked with the Motor Neurone Disease Association (MNDA) to produce the I Will Always Be Me storybook and a machine learning process that generates a voice model of the person with MND who reads it.
MND Voice Clone
The I Will Always Be Me story, written by Jill Twiss, incorporates crucial phrases into its thousand-word length. Once a user reads it aloud, the audio recording is processed and transformed into a voice model capable of synthesizing the reader’s voice to say whatever they wish through the kind of accessibility devices made famous by Dr. Stephen Hawking. The idea is to let the roughly 80% of people with MND who lose the ability to speak keep their own voice. The project began with Rolls-Royce IT innovation head Stuart Moss after his father died of MND. Intel and Dell worked with Rolls-Royce and the MNDA to set up the website and story, with Intel donating the computing power to process and digitize the voices online and Dell donating computers so people with MND can record their voices.
“Together with our partners, we are bringing voice banking and digitization technology from a niche use case to a mainstream audience while discovering innovation pathways for how technology can address more accessibility challenges,” Intel director of accessibility Darryl Adams said.
Synthetic voices are popping up in more places as the technology improves and people experiment with using them for entertainment and accessibility. Val Kilmer, whose illness has left him unable to speak, has begun experimenting with a synthetic version of his own voice, and startups like Lovo, Veritone, and Wellsaid are all finding plenty of interest in developing similar voice cloning tools. Tech giants are rolling out their own approaches to voice accessibility for people with limited or impaired speech, like Google’s recent Project Relate, an outgrowth of its Project Euphonia. Amazon’s Alexa offers its own tools for direct control by those with speech impairments thanks to Amazon’s partnership with speech recognition startup Voiceitt, which offers an iOS app to help people with atypical and impaired speech communicate. Apple has dipped its toe into the field, researching techniques to help Siri detect when someone is stuttering so it can compensate and avoid interrupting or misunderstanding the user. The broader usage of I Will Always Be Me advances the project’s goal of making voice clones an integral part of accessibility technology.
“What I think is brilliant about this project is that you’re asking people to do something that’s very natural to them,” said Lama Nachman, director of Intel’s Human & AI Systems Research Lab. “True accessibility is about recognizing the human experience and building around it.”