Siri May Soon Get Better at Handling Atypical Speech Patterns Like Stuttering

Siri Tutorial on iPhone Credit: Bloomicon / Shutterstock

While there’s room for reasonable debate as to how well Siri’s voice recognition works in general, there’s one area where it has inarguably fallen woefully short: adapting to users whose speech patterns aren’t typical. The good news is that Apple appears to be putting considerable effort into addressing this problem as part of its commitment to accessibility.

Normally, calling up Siri with a button press or “Hey Siri” command will have your iPhone, iPad, HomePod, or other device automatically begin listening, at which point you make your request and Siri goes off and does its thing. For users who suffer from conditions like dysarthria or stuttering, however, even making simple requests to Siri can pose huge challenges, since Siri will interpret the periods of silence that are common to these atypical speech patterns as a sign that the person has finished making their request.
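
To illustrate why those pauses trip Siri up, here is a minimal, purely illustrative sketch in Swift of the kind of silence-based “endpointing” heuristic the behavior described above implies: a run of quiet audio frames is treated as the end of the request. The struct, energy threshold, and silence window below are hypothetical values chosen for the example, not Apple’s actual Siri implementation; they simply show how a short silence window cuts off a speaker who pauses, and why a longer window would help.

struct EndpointDetector {
    // Frames with RMS energy below this count as silence (assumed value).
    let silenceEnergyThreshold: Float = 0.02
    // Consecutive silent frames needed to end the utterance (assumed value,
    // roughly 0.6 seconds at 20 ms per frame). A larger number would give
    // users who stutter or pause more time before the request is cut off.
    let framesToEndUtterance: Int = 30

    private var consecutiveSilentFrames = 0

    // Feed one frame of audio samples; returns true once the detector
    // decides the speaker has finished.
    mutating func process(frame samples: [Float]) -> Bool {
        let meanSquare = samples.map { $0 * $0 }.reduce(0, +) / Float(samples.count)
        let rms = meanSquare.squareRoot()
        if rms < silenceEnergyThreshold {
            consecutiveSilentFrames += 1
        } else {
            consecutiveSilentFrames = 0
        }
        return consecutiveSilentFrames >= framesToEndUtterance
    }
}

// Example: a burst of "speech" followed by sustained quiet frames.
var detector = EndpointDetector()
let speech = [Float](repeating: 0.2, count: 320)
let quiet = [Float](repeating: 0.001, count: 320)
var finished = false
for frame in [speech, speech] + Array(repeating: quiet, count: 35) {
    if detector.process(frame: frame) {
        finished = true
        break
    }
}
print(finished ? "Utterance ended" : "Still listening")

In a heuristic like this, a mid-sentence pause longer than the silence window looks exactly like the end of the request, which is why atypical speech patterns get cut off.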

Right now, the only effective way to address this is to hold down the button on your iPhone, iPad, or Apple Watch until you’re finished speaking, which lets Siri know when it’s okay to actually begin processing your request. However, this eliminates the huge convenience of simply saying “Hey Siri,” and it almost entirely rules out using devices like the HomePod for Siri requests. While “hold to talk” is also available on the smart speaker, having to walk up to it and hold down the top button rather defeats the point of having a smart speaker in the first place.

Fortunately, Apple is working to solve this exact problem, according to a new report from The Wall Street Journal that highlights the efforts being made by the companies behind all of the major voice assistants.

Apple’s Research

While Amazon and Google have gone in their own directions to solve this, with the former partnering with an Israeli startup and the latter building a prototype app to help users train Google Assistant directly, Apple is taking a unique approach by relying on a massive audio bank culled from its library of podcasts.

Specifically, the Journal notes that Apple has built up a sample group of 28,000 audio clips from various podcasts that feature stuttering to aid in its research, and it published a research paper this week that outlines the study.

An Apple spokesperson told the Journal that the goal is to improve Siri for people with atypical speech patterns. However, he didn’t offer any details on exactly how Apple plans to use that data in actual practice, and the research paper covers only the analysis of the raw data without getting into specific applications for it.

At this point, most of Apple’s research is focused specifically on users who stutter, although the company adds that it expects to expand to other atypical speech characteristics in future studies.

In a statement shared by 9to5Mac, Jane Fraser, president of the Stuttering Foundation, expressed her delight that technology is finally evolving to “account for what people say, rather than how they say it.”

We’re thrilled to learn of recent efforts by tech companies to be more inclusive of the stuttering community in their voice assistant technologies. For people who stutter, being heard and understood can be a lifelong struggle. The evolution of technology to account for what people say, rather than how they say it, opens the door for tens of millions of people who struggle with stuttering.

Jane Fraser, President, Stuttering Foundation