
AI detects autism speech patterns in different languages

Summary: Machine learning algorithms help researchers identify speech patterns in children on the autism spectrum that are consistent across languages.

Source: Northwestern University

A new study led by researchers at Northwestern University used machine learning — a branch of artificial intelligence — to identify speech patterns in children with autism that were consistent between English and Cantonese, suggesting that speech features may be a useful tool for diagnosing the condition.

The study, conducted with collaborators in Hong Kong, provided insights that could help scientists distinguish between the genetic and environmental factors that shape the communication abilities of people with autism, potentially helping them learn more about the origin of the condition and develop new therapies.

Children with autism often speak more slowly than typically developing children, and exhibit other differences in pitch, intonation and rhythm. But those differences (called “prosodic differences” by researchers) have been surprisingly difficult to characterize in a consistent, objective manner, and their origins have remained unclear for decades.

However, a team of researchers led by Northwestern scientists Molly Losh and Joseph CY Lau, along with Hong Kong-based collaborator Patrick Wong and his team, has successfully used supervised machine learning to identify speech differences associated with autism.

The data used to train the algorithm were recordings of English- and Cantonese-speaking young people with and without autism telling their own version of the story depicted in a wordless children’s picture book, “Frog, Where Are You?”
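The article does not include the authors’ code, but the general shape of such a supervised pipeline can be sketched. The snippet below is a minimal illustration, assuming prosodic features (for example, rhythm and intonation measures) have already been extracted from each recording; the feature matrix, the labels, and the choice of a linear support-vector classifier are all placeholders for demonstration, not the study’s actual setup.

```python
# Minimal sketch of the supervised-classification idea; not the study's code.
# Assumes prosodic features (rhythm/intonation measures) were already
# extracted from each narrative recording; all values here are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical data: 60 speakers, 8 prosodic features per narrative sample.
X = rng.normal(size=(60, 8))        # placeholder feature matrix
y = rng.integers(0, 2, size=60)     # placeholder labels: 1 = autism, 0 = not

# Standardize features, then classify; cross-validation estimates how well
# the prosodic features predict diagnosis for held-out speakers.
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(model, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```

With real recordings, the placeholder matrix would be replaced by measured acoustic features, and classification accuracy on held-out speakers would indicate how predictive those features are of a diagnosis.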

The results were published in the journal PLOS ONE on June 8, 2022.

“When you have languages that are so structurally different, similarities in speech patterns seen in autism in both languages are likely traits strongly influenced by the genetic predisposition to autism,” said Losh, the Jo Ann G. and Peter F. Dolle Professor of Learning Disabilities at Northwestern.

“But just as interesting is the variability we observed, which may indicate features of speech that are more malleable and may be good targets for intervention.”

Lau added that using machine learning to identify the key elements of speech that were predictive of autism was an important step forward for researchers, who have been constrained by the English-language bias of autism research and by the subjectivity involved in classifying speech differences between people with autism and those without.

“Using this method, we were able to identify features of speech that could predict the diagnosis of autism,” said Lau, a postdoctoral researcher who works with Losh in the Roxelyn and Richard Pepper Department of Communication Sciences and Disorders at Northwestern.

“The most prominent of those characteristics is rhythm. We are hopeful that this study can provide the basis for future work on autism that uses machine learning.”

The researchers believe their work has the potential to contribute to a better understanding of autism. Artificial intelligence could make diagnosing autism easier by reducing the burden on healthcare professionals and making diagnosis accessible to more people, Lau said. It could also provide a tool that might one day transcend cultures, because of a computer’s ability to analyze words and sounds quantitatively, regardless of language.

Image: The researchers believe their work could be a tool that could one day transcend cultures, due to the computer’s ability to quantitatively analyze words and sounds, regardless of language. The image is in the public domain.

Because the features of speech identified by machine learning include both features common to English and Cantonese and features specific to a single language, Losh said, machine learning could be useful for developing tools that not only identify aspects of speech suitable for therapeutic intervention, but also measure the effect of those interventions by evaluating a speaker’s progress over time.

Finally, the results of the study may contribute to efforts to identify and understand the role of specific genes and brain processing mechanisms involved in genetic susceptibility to autism, the authors said. Ultimately, their goal is to gain a more comprehensive picture of the factors that shape the speech differences of people with autism.

“One brain network involved is the auditory pathway at the subcortical level, which is really strongly linked to differences in how speech sounds are processed in the brain by individuals with autism compared with typically developing individuals across cultures,” Lau said.

“The next step will be to identify whether those processing differences in the brain lead to the behavioral speech patterns we observe here, and their underlying neural genetics. We are excited about what lies ahead.”


About this AI and ASD research news

Author: Max Witynski
Source: Northwestern University
Contact: Max Witynski – Northwestern University
Image: The image is in the public domain

Original research: Open access.
“Cross-linguistic patterns of prosodic speech differences in autism: A machine learning study” by Joseph CY Lau et al. PLOS ONE


Abstract

Cross-linguistic patterns of prosodic speech differences in autism: a machine learning study

Differences in speech prosody are a widely observed feature of Autism Spectrum Disorder (ASD). However, it is unclear how prosodic differences in ASD manifest across different languages that demonstrate cross-linguistic variability in prosody.

Using a supervised machine-learning analytic approach, we examined acoustic features relevant to rhythmic and intonational aspects of prosody, derived from narrative samples elicited in English and Cantonese, two typologically and prosodically distinct languages.

Our models revealed successful classification of ASD diagnosis using rhythm-relevant features within and across both languages. Classification with intonation-relevant features was significant for English, but not for Cantonese.

Results highlight differences in rhythm as a major prosodic trait affected in ASD, and also demonstrate significant variability in other prosodic traits that appear to be modulated by language-specific differences, such as intonation.
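To make the within- versus cross-language comparison described in the abstract concrete, here is one hedged reading of that design in code: fit a classifier on one language’s rhythm-relevant features and score it on the other’s. Everything below — the synthetic data, the feature counts, and the logistic-regression model — is an illustrative assumption, not the study’s pipeline.

```python
# Illustrative sketch of within- vs. cross-language evaluation; the data are
# synthetic placeholders, not the study's measurements or model.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def fake_dataset(n_speakers=40, n_features=6):
    """Placeholder rhythm-relevant feature vectors with ASD/non-ASD labels."""
    X = rng.normal(size=(n_speakers, n_features))
    y = rng.integers(0, 2, size=n_speakers)
    return X, y

X_en, y_en = fake_dataset()  # stand-in for English-speaking participants
X_ct, y_ct = fake_dataset()  # stand-in for Cantonese-speaking participants

model = make_pipeline(StandardScaler(), LogisticRegression())

# Within-language: cross-validated accuracy inside the English sample.
within = cross_val_score(model, X_en, y_en, cv=5).mean()
print(f"Within English: {within:.2f}")

# Cross-language: train on English, test on Cantonese. Accuracy well above
# chance would suggest the features capture something language-general.
model.fit(X_en, y_en)
print(f"English -> Cantonese: {model.score(X_ct, y_ct):.2f}")
```

In this framing, features that classify well in both directions (as the abstract reports for rhythm) would point to language-general traits, while features that work in only one language (as reported for intonation) would point to language-specific modulation.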
