Microsoft eliminates facial analytics tools in ‘Responsible AI’ push

For years, activists and academics have expressed concern that facial analysis software that claims to identify a person’s age, gender and emotional state may be biased, unreliable or invasive — and should not be sold.

Microsoft acknowledged some of those criticisms, saying on Tuesday it planned to remove those features from its artificial intelligence service for detecting, analyzing and recognizing faces. The features will stop being available to new users this week and will be phased out for existing users within the year.

The changes are part of Microsoft’s move to tighten controls on its artificial intelligence products. After a two-year review, a team at Microsoft has developed a “Responsible AI Standard,” a 27-page document that outlines the requirements for AI systems to ensure they will not have a harmful impact on society.

The requirements include ensuring that systems provide “valid solutions to the problems they are designed to solve” and “comparable quality of service for identified demographics, including marginalized groups.”

Before they are released, technologies that would be used to make important decisions about a person’s access to employment, education, health care, financial services or other life opportunities are subject to a review by a team led by Natasha Crampton, Microsoft’s chief responsible AI officer.

There were heightened concerns at Microsoft about the emotion recognition tool, which labeled a person’s expression as anger, contempt, disgust, fear, happiness, neutral, sadness or surprise.

“There’s a tremendous amount of cultural, geographic and individual variation in the way we express ourselves,” Crampton said. That led to reliability concerns, along with the larger questions of whether “facial expression is a reliable indicator of your internal emotional state,” she said.

The age and gender analysis tools being eliminated, along with other tools that detect facial attributes such as hair and smiles, could be useful for interpreting visual images for the blind or partially sighted, for example, but the company decided it was problematic to make the profiling tools generally available to the public, Crampton said.

In particular, she added, the system’s so-called gender classification was binary, “and that’s not consistent with our values.”

Microsoft will also be putting new checks on its facial recognition feature, which can be used to perform identity checks or search for a specific person. For example, Uber uses the software in its app to verify that a driver’s face matches the ID registered for that driver’s account. Software developers who want to use Microsoft’s facial recognition tool must request access and explain how they want to use it.

Those who want to use other AI systems with the potential for misuse, such as Custom Neural Voice, must also apply for access and explain how they intend to use them. The service can generate a human voiceprint from a sample of someone’s speech, so that authors can, for example, create synthetic versions of their voice to read their audiobooks in languages they don’t speak.

Because the tool could be abused to give the impression that people have said things they haven’t, speakers must go through a series of steps to confirm that the use of their voice is authorized, and the recordings include watermarks that Microsoft can detect.

In 2020, researchers found that speech-to-text tools developed by Microsoft, Apple, Google, IBM and Amazon did not work as well for Black people. Microsoft’s system was the best of the bunch, but it still misidentified 15 percent of words for white people, compared with 27 percent for Black people.
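Error figures like these are typically reported as word error rate: the word-level edit distance between a system’s transcript and a human reference transcript, divided by the length of the reference. The sketch below is illustrative only, not code from the study or from Microsoft; the function name and example sentences are hypothetical.

```python
# Illustrative sketch of word error rate (WER), the metric behind figures
# like "misidentified 15 percent of words". Not affiliated with the study.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance (substitutions + insertions + deletions)
    divided by the number of words in the reference transcript."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of six gives a WER of roughly 0.17 (about 17 percent).
print(word_error_rate("the cat sat on the mat", "the cat sat on the hat"))
```

A transcription system with a 27 percent rate for one group of speakers is, by this measure, getting more than one word in four wrong for that group.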

The company had collected a variety of speech data to train its AI system but hadn’t understood just how diverse language could be. So it hired a sociolinguistics expert from the University of Washington to explain the language varieties Microsoft needed to know about. Those varieties went beyond demographics and regional variation to include how people speak in formal and informal settings.

“Thinking about race as a determinant of how one speaks is actually a bit misleading,” Crampton said. “What we learned in consultation with the expert is that, in fact, a huge range of factors influence language variation.”

Crampton said the effort to resolve that speech-to-text disparity had helped inform the guidelines set forth in the company’s new standard.

“This is a critical period for setting norms for AI,” she said, pointing to Europe’s proposed regulations that would set rules and limits on the use of artificial intelligence. “We hope our standard can contribute to the clear, necessary discussion that needs to be had about the standards technology companies should be held to.”
