Let’s Talk Voice Tech, Data Privacy, and Kids

Editor’s Note: This is a guest post by Dr. Martyn Farrows, COO of child-specific voice recognition technology developer SoapBox Labs.

The data privacy debates continue to rage with views that range widely from “stop companies collecting data” and “let users own and profit from their data,” right through to “allow companies to harvest the data they need to compete in this emerging AI industry.”

We need to remember that voice data is different. People often think of voice, like breath, as ephemeral – that once spoken, words disappear forever. But in the modern age of voice technology, this is no longer the case. Today, voice data is processed and stored because of its intrinsic commercial value.

Voice data differs from other forms of data. It conveys more than just what was said. Voice data conveys our identity, our emotions, our intents, our environment, and even certain health conditions. Even our socio-economic and educational background can be inferred from our accents and dialects.

We need to be more thoughtful with our voice data, and more thoughtful still when it comes to the voice data of our children. Voice is an even more important tool for children than it is for adults. It allows them to interface with technology and control devices long before they’re old enough to read or write. Voice gives kids agency and allows them to learn and play in the most natural way possible – using their voices.

When SoapBox Labs entered the market in 2013, it still felt like the wild west in terms of data privacy compliance for kids. Even so, a privacy-by-design approach to data and technology was part of our DNA from the very beginning. It wasn’t faster or cheaper to design-in privacy. Still, as a deep tech, kid-focused company, we wanted to set ourselves up for long term success – and that meant respecting every kid’s fundamental right to privacy.

Children and Online Privacy: A Brief History

The Children’s Online Privacy Protection Act (COPPA), passed in 1998, was the first piece of legislation to protect the personal data of kids under 13. It prevented companies from harvesting kids’ data without the prior and explicit consent of a parent or guardian. By 2012, when COPPA was expanded to include voice and video recordings, the U.S. was leading internationally on data privacy laws that protect children. However, COPPA continued to have some troubling weaknesses. For instance, once adult consent was given, no differentiated treatment was required in the processing or storage of a kid’s voice data; it was treated just like the data of an adult.

In late 2014, the Amazon Alexa smart speaker was launched in the U.S. and Google Home followed in 2016. The adoption of these ‘plug and play’ smart speakers in homes was rapid. But, while they were easily and regularly accessed by kids, manufacturers maintained that they were not designed to be used by kids, and therefore did not need to be COPPA compliant. Initially, at least, there were very few arguments to the contrary.

Meanwhile, in the toy market, Mattel was being held to a much higher standard for kids’ data privacy. Mattel was one of the early innovators and adopters of voice technology, but the fanfare in 2015 around the launch of “Hello Barbie,” their voice-enabled doll, turned into a cautionary tale when it was pulled from the market due to privacy concerns. Headlines like this one in Quartz told a no-holds-barred story: “Mattel’s new ‘Hello Barbie’ records kids’ voices and sends the intel back to corporate.”

Class action lawsuits quickly followed in late 2015, but not because Mattel had violated COPPA rules in relation to gaining parents’ consent. The lawsuits focused on a separate issue – the absence of consent for visiting playmates. The speech recognition system could not differentiate between the voice of the child whose parents had given consent and that of a playmate, and therefore processed – and, more importantly, stored – both kids’ voice data, in violation of COPPA. In 2017, to address the widespread use of these voice assistants and smart speakers in homes and classrooms, the FTC relaxed the COPPA requirement around parental consent, explaining:

“The Commission recognizes the value of using voice as a replacement for written words in performing search and other functions on internet-connected devices. Verbal commands may be a necessity for certain consumers, including children who have not yet learned to write, or the disabled.”

So while the COPPA rule itself remained unchanged, companies could now process voice data, turn it into text, and then immediately delete it. Importantly, the FTC added that companies could not use this window of legality to do anything else with the data other than converting it to text.

The Next Wave of Privacy Protection for Kids

Consumer activists and concerned parents are now waking up to the data privacy concerns around ‘plug and play’ devices. But the ongoing issue with all smart speakers, voice UIs, and assistants is that even if a parent gives consent for their own kids to use them, companies are still not allowed to collect data from visiting playmates.

From a legal perspective, both COPPA and the equivalent EU legislation, the GDPR (General Data Protection Regulation), are aligned on this point – verified parental consent is required to process and store the personal data of a child under the age of 13. But what are the consequences of non-compliance? The FTC has generally been slow to impose fines in relation to COPPA, and any fines have tended to be modest relative to the commercial heft of the entity involved. The recent $170m YouTube fine was the FTC’s largest ever, but it represents roughly 0.1% of Google’s annual turnover.

Violations of the EU’s GDPR are subject to a fine of up to 4% of a company’s annual turnover, and each country has its own supervisory authority to add teeth to enforcement. The GDPR is still under two years old, however, and so far the data privacy rights of children have tended to take a back seat to higher-profile cases. In 2019, SoapBox Labs and child anthropologist Dr. Veronica Barassi responded to a request from the UN’s Office of the High Commissioner for Human Rights (OHCHR) for submissions on protecting kids in digital environments. Our paper urged the OHCHR to specifically address the lack of clarity and transparency around the processing and storage of kids’ data by voice operators. Despite the fines and class action suits that have been filed since 2017, it’s clear, even to the most casual observer, that the current regulations need fixing.

The Emergence of On-Device Solutions

The processing and storage of kids’ voice data will continue to be more sensitive, receive more scrutiny, and require stricter legislation than adult data. To ensure complete data privacy, the only solution is to transmit NO personal voice data to the cloud and perform all processing at the edge, i.e., in an embedded, on-device manner.

Offering data-center-level processing for voice technology on embedded chips, at low power consumption and low cost, is a game changer for the voice industry. It will also remove the privacy concerns of the education and toy industries, as all processing of kids’ voice data will happen on-device, with no data flowing to the cloud or being stored by companies.
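To make the on-device idea concrete, here is a minimal sketch in Python of the pattern described above. All names here are illustrative placeholders (this post describes no specific SDK): a local model stub stands in for an embedded speech engine, the raw recording is discarded immediately after inference, and nothing is ever transmitted off the device.

```python
# Sketch of edge-only voice processing. The key privacy property is that
# raw audio never leaves the device and is deleted after local inference;
# only a derived, non-audio result survives.

def run_local_model(audio_frames):
    """Placeholder for an embedded speech model running on-device.
    A real implementation would run a compact acoustic model here."""
    return {"transcript": "open the story app", "is_child_voice": True}

def process_utterance(audio_frames):
    result = run_local_model(audio_frames)  # inference happens locally
    audio_frames.clear()                    # raw voice data is discarded
    # No network call is made anywhere in this flow.
    return result

frames = [b"\x00\x01", b"\x02\x03"]         # stand-in for captured audio
outcome = process_utterance(frames)
assert frames == []                         # recording gone after processing
```

The design choice worth noting is that deletion of the raw audio is part of the processing function itself, not a separate cleanup step that could be skipped or deferred.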

These are incredibly exciting times for the voice tech industry. However, against the backdrop of consumer sentiment, strict regulation, and increased enforcement activities, important legal and technology-driven action is needed to make sure that voice technology does not threaten children’s rights. Actions such as:

  • Transparency on how kids’ voice data is used once it is stored.
  • A commitment to treat kids’ data differently to adults’ data, even with consent.
  • A commitment to identify voice data captured without consent, for example, from a playmate or visitor to the home whose parent has not given consent, and delete it.
  • The development of kid/adult voice classifiers to protect kids from adult-centered digital environments, and to recognize when parental consent is needed before storing voice data.
  • The development of solutions that process voice data entirely on-device, using embedded models.

As voice technology becomes a mainstream educational and entertainment opportunity, let’s work together on an approach that emphasizes transparency around kids’ voice data. Now is the time to invest deeply in protecting kids’ data privacy in order to earn the respect and trust of parents and guardians, and accelerate voice innovations across the global education and entertainment markets.
