Voice recognition technology has changed the way people interact with their phones, speakers, and more. However, it’s not accessible for everybody.

On the first day of Google I/O, the company announced Project Euphonia, a research effort exploring how artificial intelligence can improve voice recognition for people with speech impairments and atypical speech patterns.

TechCrunch reported that, while speaking at I/O, Google CEO Sundar Pichai said voice recognition technology currently doesn’t work for people with speech impairments because there isn’t enough training data.

To gather its own set of data, Google partnered with the non-profit organizations ALS Therapy Development Institute (ALS TDI) and ALS Residence Initiative (ALSRI) to record the voices of people who have ALS, “a neuro-degenerative condition that can result in the inability to speak and move.”

“Our AI algorithms currently aim to accommodate individuals who speak English and have impairments typically associated with ALS, but we believe that our research can be applied to larger groups of people and to different speech impairments,” Julie Cattiau, Product Manager of Google AI, wrote.

The company is also looking into training personalized AI algorithms to detect sounds or gestures, which would be useful for people who are unable to speak at all.

It has long been noted that voice recognition technology leaves out many people, including those with disabilities and those who speak with certain accents.

“The more speech samples our system hears, the more potential we have to make progress and apply these tools to better support everyone, no matter how they communicate,” Cattiau wrote.

If you have a speech impairment, Project Euphonia is looking for more voices to analyze. Google is encouraging people to fill out this short form to volunteer and record a set of phrases.