A&S Professor Receives NIH Grant to Study Biofeedback Technologies for Speech Therapy
The grant will fund research on the effectiveness of technologies that use visual targets to help people adjust their speech.
One of the most common speech errors in English is making a “w” sound instead of the “r” sound. Although most children grow out of these and other errors, 2% to 5% exhibit residual speech sound disorder through adolescence.
Research has shown that biofeedback technologies can benefit children struggling with the “r” sound by making the sound visible.
Biofeedback speech therapies use electronics to display a real-time representation of speech that the child ordinarily can’t perceive on their own. In this instance, the technologies allow the child to see what an “r” sound looks like on a screen. The child hears their “r” sound and views a visual display of their speech on the screen, along with a model representing the correct pronunciation of the sound. The model provides a visual target, which the child can use to adjust their speech.
Now, a team of scientists at Syracuse University, New York University and Montclair State University has been awarded a grant from the National Institutes of Health for further study of biofeedback technologies. The team will compare the effectiveness of these technologies for speech therapy under different conditions. The researchers will also evaluate AI-based tools that could guide home-based practice in tandem with human oversight.
“If we want kids to improve quickly, we’d want them to practice at home,” says Jonathan Preston, a professor in the Department of Communication Sciences and Disorders at Syracuse University. “But they don’t have a skilled speech pathologist available at home to help them practice.”
Many children also lack access to clinicians who use biofeedback methods.
AI could help change that.
The research team has already trained an AI-powered speech therapy algorithm on the voices of more than 400 children.
Then comes individualized practice. “At home, kids will talk into a microphone, and based on the algorithm, the child will receive feedback about whether they spoke the word clearly or not,” says Preston.
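To make the idea concrete, the sketch below shows in Python how an automated clarity check of this general kind could be wired together. It is not the team’s algorithm: the real system was trained on recordings from hundreds of children, while this illustration uses generic MFCC features, an off-the-shelf scikit-learn classifier, and hypothetical file names and labels.

```python
# A minimal sketch, under assumed details, of automated feedback on whether a
# recorded word sounds "clear." Not the researchers' actual model; features,
# classifier, file names, and labels here are illustrative stand-ins.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def word_features(path):
    """Summarize a recording as its mean MFCC vector (a common baseline feature)."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical training data: recordings labeled 1 (clear "r") or 0 (distorted).
train_files = ["clear_01.wav", "clear_02.wav", "distorted_01.wav", "distorted_02.wav"]
labels = np.array([1, 1, 0, 0])
X = np.vstack([word_features(f) for f in train_files])

clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Home practice: score a new attempt and return simple feedback.
attempt = word_features("practice_attempt.wav")
p_clear = clf.predict_proba(attempt.reshape(1, -1))[0, 1]
print("Nice, clear 'r'!" if p_clear > 0.5 else "Close -- try again.")
```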
“We’re developing methods that can be implemented in the real world to advance and enhance clinical practice,” says Tara McAllister, the grant’s principal investigator and an associate professor at New York University. “We develop the technology, but you need to test it to ensure its effectiveness.”
There are two types of biofeedback speech therapy.
In visual-acoustic biofeedback therapy, the technology displays a wave-like image of speech on a screen. The peaks of the wave represent the resonant frequencies of the vocal tract. The learner attempts to match their voice to the model on the screen.
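As an illustration of what can drive such a display (not the researchers’ actual software), the short Python sketch below estimates the spectral envelope of a recorded sound with linear prediction, one common way to produce a curve whose peaks mark vocal-tract resonances. The file name and analysis settings are placeholders.

```python
# A minimal sketch of a visual-acoustic style display: estimate the spectral
# envelope of a short recording with linear prediction (LPC), so that peaks in
# the plotted curve correspond to vocal-tract resonances (formants).
# Assumes librosa, numpy, scipy, and matplotlib; "child_r.wav" is hypothetical.
import numpy as np
import librosa
import scipy.signal as sig
import matplotlib.pyplot as plt

y, sr = librosa.load("child_r.wav", sr=16000)   # load audio at 16 kHz
frame = y[:int(0.03 * sr)]                      # one 30 ms analysis frame
a = librosa.lpc(frame, order=12)                # all-pole (LPC) model of the frame
w, h = sig.freqz([1.0], a, worN=512, fs=sr)     # filter response = spectral envelope

plt.plot(w, 20 * np.log10(np.abs(h) + 1e-9))    # peaks of this curve are the formants
plt.xlabel("Frequency (Hz)")
plt.ylabel("Amplitude (dB)")
plt.title("Spectral envelope: peaks mark vocal-tract resonances")
plt.show()
```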
In ultrasound biofeedback therapy, a learner observes the shape and movement of their tongue in real time on a screen and attempts to match an ultrasound image of a correctly articulated target sound. With a small ultrasound wand placed under their chin, a child can see their tongue moving in real time. The child no longer depends solely on what they hear and how their tongue feels; now, they can see what their tongue looks like when making a correct sound.
In a randomized controlled trial, the study will compare ten weeks of practice with visual-acoustic biofeedback delivered via teletherapy versus face-to-face. In a later phase of the project, the researchers will test the effects of AI-assisted home practice with visual-acoustic biofeedback. An additional sample of children will receive both visual-acoustic biofeedback and ultrasound biofeedback.
The research team is also completing a randomized controlled trial comparing visual-acoustic biofeedback therapy, ultrasound biofeedback therapy, and therapy with no biofeedback. In addition, the team continues to study whether the choice of biofeedback technology can be matched to a learner’s sensory strengths and limitations.