Now, more than ever, we rely on technology to work, go to school, get health care and stay connected with people we can't visit in person. But technology is only useful if it's accessible. And right now, Microsoft's Mary Bellard says, we are in a data desert.

“What’s missing is representation of people with disabilities in the datasets and models that power the cutting-edge AI features in all kinds of technology and experiences,” said Bellard, who leads Microsoft’s accessibility innovation program. Microsoft recently partnered with several organizations to create more inclusive collections of data.

Take facial recognition. Some people with ALS find that eye tracking is the best way to interact with their computer. But before a computer's camera can track a user's eyes, it first needs to find their face.

“That facial-recognition piece, today, doesn’t do a great job for the realistic experience of people with significant mobility disabilities,” Bellard said. “Maybe there’s a breathing tube, maybe they have droopy eyelids [or] some sort of facial feature that doesn’t get recognized accurately with generic, mainstream facial recognition.”

In one of those partnerships, Microsoft is working with the nonprofit Team Gleason to get more pictures of people with ALS looking at their computers to better train artificial intelligence models.

Mary Bellard: The intention is to make technology, in general, more accessible and more inclusive to people with disabilities. And the only way that works in a holistic sense is if people with disabilities are included in the entire process. We have this phrase: we try to avoid doing things “for” people with disabilities, and we want to prioritize doing things “with” people with disabilities, or amplifying activities from or by people with disabilities. The more diverse the workforce is, the more inclusive the products created by that workforce will be.

Kimberly Adams: I’m glad you brought that up because this is an issue that’s come up quite a bit in the conversations that we’ve had on this show around tech and disability, where people tell us that accessibility needs to be part of a design from the beginning, rather than a fix later. How much of your work and Microsoft’s work more broadly is fixing problems where elements were designed without accessibility in mind, versus what you just mentioned — those sort of from the ground up being involved in the process?

Bellard: We have proven time and time again that when people with disabilities are part of the design and development of the features of our apps, those features are more meaningful to our customer base and to the employees who use these products and services. And doing that more frequently requires that we really invest in our inclusive hiring efforts for people with disabilities — how we recruit talent and also how we retain and guide employees throughout their career journey, whether it’s at Microsoft or other companies. Inclusive hiring is not just one company’s opportunity; it’s the entire workforce’s opportunity to bring the best talent into the areas of employment where we need people the most.

Related links: More insight from Kimberly Adams

Another one of Microsoft’s initiatives to address that data desert is its Seeing AI app, which “looks” at objects and images and creates descriptions for people who are blind or have low vision. That app is currently trained on generic data, and the company is working to retrain it with data collected from people who are blind or have low vision. The company hopes that will allow the app to tell users not just that there’s a set of keys on the table in front of them, but whether those keys are theirs or, say, their roommate’s.

There’s a video on Microsoft’s AI blog about how this app works and how it can create captions for pictures people take. The company says the AI can generate captions that are, in some cases, more accurate than the ones people write. The Verge adds a healthy dose of skepticism and warns that similar claims about algorithms’ relative skills have been, shall I say, overstated in the past.

And speaking of tech not capturing accurate information about certain groups, there was a story in The New York Times this week about how pulse oximeters, the devices that clip onto your finger to read your blood oxygen level, don’t work as well on Black patients. The story builds on an essay by Amy Moran-Thomas, an anthropology professor at MIT, published in the Boston Review back in August. This is particularly alarming because those devices are one of the tools doctors use to evaluate COVID-19 patients, helping determine how much danger they’re in and whether they need to be hospitalized.
