Axon’s Taser-Drone Plans Lead to AI Ethics Board Resignation

The real disappointment, Calo says, isn’t that the company didn’t do exactly what the board advised. It’s that Axon announced its Taser drone plans before the board could fully articulate its opposition. “Suddenly, out of the blue, the company decided to just leave that process,” he says. “That’s why it’s so discouraging.”

He has a hard time imagining that police or trained personnel in a school would have the situational awareness to use a Taser drone judiciously. And even if drone operators did save the lives of suspects or of people in marginalized and vulnerable communities, he argues, the technology wouldn’t stay confined to those situations.

“I think there’s going to be mission creep and they’re going to start using it in more and more contexts, and I think Axon’s announcement to use it in a very different context is proof of that,” Calo says. “A situation where there are ubiquitous cameras and remotely deployed Tasers is not a world I want to live in. Period.”

Axon’s board is the latest outside AI ethics committee to come into conflict with the tech company it advises. Google famously convened and disbanded an AI ethics advisory group in about a week in 2019. These panels often operate with no clear structure beyond asking members to sign a nondisclosure agreement, and companies can use them for “virtue signaling” rather than substantive input, says Cortnie Abercrombie, founder of the nonprofit AI Truth. Her organization is currently researching best practices for AI ethics in businesses.

In Axon’s case, multiple members of the AI Ethics Board who spoke to WIRED said the company had been listening to their suggestions, including in a 2019 decision not to use facial recognition on body cameras. That made the sudden announcement of a Taser drone all the more shocking.

Conflicts often arise within companies between those who understand a technology’s risks and limitations and those who want to make products and profits, said Wael AbdAlmageed, a computer scientist at the University of Southern California who resigned from the Axon AI Ethics Board. If companies like Axon want to take AI ethics seriously, he says, the role of these boards can no longer be merely advisory.

“If the AI Ethics Board says this technology is problematic and the company shouldn’t develop products, then it shouldn’t. I know it’s a difficult proposition, but I really think that’s how it should be,” he says. “We have seen problems at Google and other companies for the people they have hired to talk about AI ethics.”

The AI Ethics Board tried to convince Axon that it should be responsive to the communities affected by its products, Friedman says, rather than to the police departments that buy them. The company did set up a community advisory committee, but Friedman says that until AI ethics boards figure out how to involve local communities in the procurement process, “police technology vendors will continue to play to the police.”

Four members of the AI Ethics Board did not sign the letter of resignation. They include former Seattle Police Chief Carmen Best, former Los Angeles Police Chief Charlie Beck, and former California Highway Patrol Commissioner Warren Stanley.
