Many people with ALS eventually lose the ability to speak clearly. A team at Google is working to improve the performance of automated speech recognition systems for those with impaired speech, with the goal of enabling more effective voice communication.
Google engineers are currently conducting outreach to find people interested in contributing voice recordings of representative phrases through a simple web tool. These recordings will enable the team to develop models that recognize a wider variety of impaired speech characteristics. If you'd like to participate in this research, find out more at the following link: bit.ly/AudioData.
One early advocate, Steve Saling, recently shared some personal points of interest in a Facebook post (see below). Being able to communicate verbally with family members is key to keeping those relationships strong. https://www.facebook.com/1158772498/posts/10215187716903782/
Recently, the Steve Saling ALS Residence and the Dapper McDonald ALS Residence at the Leonard Florence Center for Living hosted a group of Google researchers and engineers who have been collaborating with ALS TDI on its Precision Medicine Program (https://www.als.net/precision-medicine). In particular, they are working on improving communication and computer input options through voice recognition.
Google is at the forefront of computer technology, and I recently saw a demonstration of their language recognition and translation apps. Using just a cell phone, with two people speaking into it (one an English speaker, the other a Chinese speaker), the app easily recognized English sentences and converted them into Mandarin Chinese. When the Chinese speaker answered, the app responded in English, so there was a real back-and-forth conversation.
Here are some of the speech recognition/translation apps currently in the Google store. To make these breakthroughs in translation and speech recognition, Google engineers are using something called machine learning.
Before you think voice recognition would help very few pALS, consider this: if a computer can translate Mandarin Chinese, then machine learning can make it possible to understand an ALS accent, the same way a loved one learns to understand a pALS long after everyone else hears only gibberish. And because the machine keeps learning, the pALS' computer will maintain its accuracy even as speech gets harder and harder to understand.
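To make that idea concrete, here is a rough toy sketch in Python (assuming PyTorch is available); it is an illustration only, not Google's actual system. A tiny classifier on synthetic features stands in for a real speech model: its accuracy drops as the "voice" drifts, and brief fine-tuning on the speaker's own recent recordings restores it.

import torch
import torch.nn as nn

torch.manual_seed(0)

def make_batch(drift, n=256):
    # Synthetic "speech features": the class means shift as `drift` grows,
    # standing in for a voice that changes over time. (Toy data, not real audio.)
    y = torch.randint(0, 2, (n,))
    x = torch.randn(n, 8) + y.float().unsqueeze(1) * 2.0 + drift
    return x, y

# A tiny classifier stands in for a full speech recognition model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train(x, y, steps=200):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

# Train once on "typical" speech...
x0, y0 = make_batch(drift=0.0)
train(x0, y0)

# ...then let the voice drift, and adapt on the speaker's own recordings.
for drift in [0.0, 1.5, 3.0]:
    x_adapt, y_adapt = make_batch(drift)  # recent recordings to fine-tune on
    x_test, y_test = make_batch(drift)    # fresh samples to measure accuracy
    before = accuracy(x_test, y_test)
    train(x_adapt, y_adapt, steps=50)     # the "machine keeps learning" step
    after = accuracy(x_test, y_test)
    print(f"drift={drift}: accuracy {before:.2f} -> {after:.2f} after adapting")

The point is just the shape of the loop: periodic fine-tuning on a person's own recordings is what lets accuracy track a changing voice.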
This group of geniuses from Google hopes to improve communication and computer access for a great number of people. If you are interested in helping Google with this research, complete this short form expressing your interest: bit.ly/AudioData.
Alisa Brownlee, ATP, CAPS | Assistive Technology Specialist/Consultant
The ALS Association | 1275 K Street NW, Suite 250 | Washington, D.C. 20005 | alsa.org
office 215-631-1877 | cell 215-485-3441
email xxxxxx@alsa-national.org