IBM and Pfizer Apply Language AI to Predict Alzheimer’s Onset With 70% Accuracy
IBM and Pfizer have developed a way to use machine learning on clinical verbal tests to predict the onset of Alzheimer’s disease years before standard symptoms appear. The AI analysis of language data reached 70% accuracy when its predictions were compared against which participants actually went on to develop Alzheimer’s.
For the study, researchers analyzed 703 samples of language tests from 270 participants, half of whom developed Alzheimer’s symptoms before turning 85. The researchers broke the data down into 87 variables covering spelling, punctuation, grammar, and the style and vocabulary of each sample. Using natural language processing, the AI looked for subtle changes or aspects worth flagging. That data was then examined in the context of each participant, including their age, background, and results on the Montreal Cognitive Assessment. With the permission of the participants or their families, results from the Mini-Mental State Examination speech tests collected by the Framingham Heart Study were incorporated to expand the data further. The AI’s judgments were then matched against what really happened to those participants over the years. The resulting accuracy of about 70% is impressive, especially compared to the 59% accuracy of purely clinical prediction.
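The study’s 87 linguistic variables aren’t enumerated in the article, but a minimal sketch can illustrate the kind of feature extraction involved. Everything below is an illustrative assumption, not the study’s actual method: the feature names, the formulas, and the sample sentence are hypothetical stand-ins for the sort of vocabulary and repetition markers such an analysis might flag.

```python
import re

def linguistic_features(transcript: str) -> dict:
    """Extract a few simple linguistic markers from a speech transcript.

    Hypothetical illustration only; the IBM/Pfizer study used 87
    variables derived with far more sophisticated NLP.
    """
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    n = len(words)
    # Type-token ratio: a low value can suggest a shrinking active vocabulary
    ttr = len(set(words)) / n if n else 0.0
    # Immediate word repetitions ("is is") are one plausible marker
    repeats = sum(1 for a, b in zip(words, words[1:]) if a == b)
    # Average word length as a crude proxy for lexical simplicity
    avg_len = sum(map(len, words)) / n if n else 0.0
    return {
        "type_token_ratio": ttr,
        "immediate_repeats": repeats,
        "avg_word_length": avg_len,
    }

# Invented sample sentence, purely for demonstration
sample = "the boy is is taking the cookie the cookie jar is falling"
feats = linguistic_features(sample)
```

In a full pipeline, a feature vector like this would be combined with the participant’s age, background, and cognitive-test scores and fed to a classifier trained on the labeled outcomes.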
More than five million people in the U.S. have Alzheimer’s, and the prevalence of the disease is expected to rise over the next several years. Testing for Alzheimer’s in its early stages is often difficult because the loss of memory and cognitive ability is very subtle at first and can be ascribed to age and other factors. Catching the disease early can make a huge difference in treating it, or at least slowing its progression. This isn’t the only way AI is being applied to Alzheimer’s testing; other approaches analyze brain scans to track the telltale proteins in the brain that mark the beginning of the disease. The new study differs in that it doesn’t look at people who are already starting to show cognitive impairment; the samples all predate that point. At the same time, the analysis looked at the likelihood of Alzheimer’s across the general public, encompassing both people without a family history of the disease and those more likely to be diagnosed with it.
“Ultimately, we hope this research will take root and aid in the future development of a simple, straightforward and easily accessible tool to help clinicians assess a patient’s risk of Alzheimer’s disease through the analysis of speech and language, and in conjunction with a number of other facets of an individual’s health and biometrics,” IBM Principal Research Staff Member Guillermo Cecchi explained in a blog post. “Having such a tool at their disposal could help doctors determine the need for more complex and demanding psychiatric assessments, testing and monitoring. Typically only given once the development of Alzheimer’s disease is suspected, current tests may not always be in easy reach of a large population. Being able to identify higher risk patients could also open up the door to more successful clinical trials, as those deemed at a high likelihood of developing the disease could enter trials for preventative therapies.”
Applying voice data to diagnosing illness is becoming more popular as AI and audio technology improve. The trend accelerated this year due to the COVID-19 health crisis: the combination of reduced in-person clinical visits and the way the virus affects people’s lungs, and thus their voices, has brought speech tech into play. For instance, the COVID Voice Detector project is an international coalition of researchers and businesses working to create a vocal test for COVID-19 using artificial intelligence and machine learning; with data from government agencies and from volunteers recruited through Brazilian biometrics startup Unike Technologies, the test is evolving. Similarly, Vocalis Health has been collecting voice samples for its own vocal diagnostic test for COVID-19 infection. That data is also part of a government project to track people recovering from the virus at home, both to gather data and to spot when they may need medical intervention. Meanwhile, Indian startup Salcit Technologies came out with kAs, Sanskrit for cough, a mobile app that analyzes coughing sounds for potential COVID-19 infection.
If these projects achieve their goal of mapping vocal biomarkers that reveal whether someone is infected, it could be a huge boon during the health crisis. In the long term, it would also help prove the value of AI analysis of voices for medical diagnosis, even before more obvious symptoms arise. Whether for infectious diseases like COVID-19 or degenerative diseases like Alzheimer’s, anything that can provide clues and better care should be pursued.