What my robot mop taught me about the future of artificial intelligence

A few months ago, a friend noticed the condition of my kitchen floor and decided to stage an intervention. I could see her point, although, in my defense, I have two teenagers and a large dog. My friend gifted me a matching robot mop and robot vacuum cleaner, programmed to maneuver around a room as they clean.

When the boxes arrived, I recoiled at the sight of the iRobot logo. I am slow to adopt new technology and was worried that the devices would spy on me, sucking up data along with the dog hairs. But the instructions were simple, and in the end I decided I didn’t really care if anyone was studying the secrets of my kitchen floor.

I turned on the two robots, watched them roll out of their docks to explore the room, and quickly fell in love with my newly twinkling floors. I kept doing demos for all my guests. “I think you care more about the robots than about us,” one of my teens joked. “They’re like your new kids.”

One day I returned home to find that one of my beloved robots had escaped. Our patio door had blown open and the mop had rolled into the backyard, where it was diligently trying to clean the edge of the flower beds. Even as its brushes became clogged with leaves, beetles, petals and mud, the little wheels turned bravely on.

The scene brought home the limits of artificial intelligence. The robot mop was acting rationally: it had been programmed to clean “dirty” things. But the whole point of dirt, as the anthropologist Mary Douglas once noted, is that it is best defined as “matter out of place.” Its meaning derives from what we consider clean, and that varies according to our largely unspoken societal assumptions.

In a kitchen, garden debris such as leaves and mud counts as dirt. In a yard, the same grime is “in place,” in Douglas’s terminology, and doesn’t need to be cleaned up. Context matters. The problem for robots is that they find it difficult to read this cultural context, at least at first.

I thought of this when I heard about the latest AI controversy to hit Silicon Valley. Last week, Blake Lemoine, a senior software engineer in Google’s “Responsible AI” unit, published a blog post claiming he “could soon be fired for doing AI ethics work.” He was worried that a Google-created AI program had become sentient, after it expressed human-like feelings in online chats with Lemoine. “I’ve never said this out loud before, but there’s a very deep fear of being turned off,” the program wrote at one point. Lemoine reached out to experts outside Google for advice, and the company placed him on paid leave for alleged breaches of its confidentiality policy.

Google and others argue that the AI was not sentient but simply well trained in language, regurgitating what it had learned. But Lemoine raises a broader issue, noting that two other members of the AI team were pushed out last year amid (different) controversies, and claiming that the company is being “irresponsible . . . with one of the most powerful information access tools ever invented.”

Whatever the merits of Lemoine’s specific complaint, there is no denying that robots are being equipped with increasingly powerful intelligence, raising big philosophical and ethical questions. “This AI technology is powerful and so much more powerful than social media [and] will be transformative, so we have to move forward,” Eric Schmidt, the former head of Google, told me at an FT event last week.

Schmidt predicts that we will soon see not only AI-enabled robots designed to solve problems according to instructions, but also robots with “general intelligence”: the ability to respond to novel problems they were not designed to tackle, by learning from one another. That might eventually stop them from trying to mop a flower bed. But it could also lead to dystopian scenarios in which AI takes the initiative in ways we never intended.

One priority is to make sure that ethical decisions about AI aren’t made only by “the small community of people building this future,” to quote Schmidt. We also need to think more about the context in which AI is created and used. And maybe we should stop talking so much about “artificial” intelligence and focus more on augmented intelligence, in the sense of building systems that make it easier for humans to solve problems. To do that, we need to combine AI with what we might call “anthropological intelligence,” or human insight.

People like Schmidt insist this will happen, claiming that AI will have a net positive effect on humanity and revolutionize healthcare, education and much more. The amount of money flowing into AI-linked medical start-ups suggests that many agree. In the meantime, I am keeping my patio door closed.

Follow Gillian on Twitter @gilliantett and email her at [email protected]

Follow @FTMag on Twitter to read our latest stories first
