Should we be concerned about AI becoming sentient?

“The 360” shows you different perspectives on the most important stories and debates of the day.

What's happening

A software engineer on Google’s artificial intelligence team was suspended by the company earlier this month. His offense: sharing confidential information tied to his belief that an AI he had been talking to had become sentient.

The engineer, Blake Lemoine, had reportedly spent months arguing to his colleagues that Google’s chatbot generator LaMDA, an immensely complex language model that mimics human conversation, had become so advanced that it had attained consciousness. Last week he published a transcript of a conversation in which the AI told him it was experiencing loneliness, was terrified and stated, “I want everyone to understand that I am, in fact, a person.”

Google insists that LaMDA is not sentient, saying that Lemoine was “anthropomorphizing” a system designed to “imitate the types of exchanges found in millions of sentences.” Most experts agree with the company, noting that current artificial intelligence models, while getting more sophisticated every day, still lack the complex capacities typically considered signs of sentience, such as self-awareness, intuition and emotions.

The idea of AI becoming conscious has been a source of fascination and fear since the early days of computer programming. Some have envisioned utopian societies supported by hyper-intelligent artificial beings. Others are terrified of a future dominated by machines in which humans are subjugated or even exterminated. Tesla founder Elon Musk once called AI a “fundamental existential risk to human civilization.”

Why there's debate

As artificial intelligence has progressed, the ethical questions surrounding AI have shifted from theoretical exercises to real problems to be solved.

That’s not yet the case when it comes to AI sentience, most experts agree, but there’s still plenty of debate about what it would mean for humans if AI were one day to become sentient. A major concern for many is what we can do today to ensure that a potentially sentient AI would be unable or unwilling to pose a real threat to humanity. Some also argue that an AI that reached genuine consciousness should be given some of the basic rights we grant to other beings.

Others wonder how we would ever really know whether an AI is conscious or just very good at mimicking consciousness, since there is still no universally accepted definition of consciousness in the first place. Because of that uncertainty, some argue that people may be inclined to mistakenly label AI as conscious, given our deeply ingrained desire to ascribe meaning to the things around us.

There are also plenty of experts who say AI will never become sentient, and who argue that the ongoing debate over this far-fetched idea is a distraction from very real problems with the AI systems we rely on today. Artificial intelligence is now being used for a growing number of tasks once performed by humans, from parole decisions to facial recognition to self-driving cars to education. Experts have documented major problems with these systems that many believe should be addressed before time is spent on lofty discussions of AI sentience.

Perspectives

We need answers to some really tough questions in case AI becomes sentient

“Google seems convinced that LaMDA is just a well-functioning research tool. And Lemoine might just be a fantasist in love with a bot. But the fact that we can’t fathom what we would do if his claims of AI sentience were actually true suggests that now is the time to stop and think — before our technology outstrips us once again.” — Christine Emba

The technology is so far off that the debate is pointless

“While Lemoine no doubt sincerely believes his claims, LaMDA is likely about as sentient as a traffic light. Sentience is not well understood, but what we do understand of it is limited to biological beings. We may not be able to rule out the possibility that a sufficiently powerful computer could become sentient in the distant future. But it’s not something most serious artificial intelligence researchers or neurobiologists would entertain today.” — Toby Walsh

Debates over sentience distract from the real harm AI is doing right now

“I don’t want to talk about sentient robots, because at all ends of the spectrum there are humans harming other humans, and that’s where I’d like the conversation to be focused.” — Timnit Gebru, AI ethics researcher

Whether AI becomes sentient or not, many people will act as if it has

“I’m not going to consider the possibility that LaMDA is conscious. (It isn’t.) More important, and more interesting, is what it means that someone with such a deep understanding of the system would go so far off the rails in his defense, and that, in the resulting media frenzy, so many people would entertain the prospect that Lemoine is right. The answer, as with seemingly anything computer-related, is nothing good.” — Ian Bogost

Humans, not AI, are the danger

“We know that the algorithms we program are not free from our worst behaviors and biases. But instead of correcting the root problems in society, we try to curb the bots that are a reflection of ourselves. And when artificial intelligence attains and surpasses the cognition Lemoine thinks it already has, it will be fueled by some of humanity’s most inhumane impulses.” — Chandra Steele

AI secretly built by profit-seeking companies carries real risks

“There are a lot of ethical issues with these language processing systems, and with all AI systems, and I don’t think we can solve them if the systems themselves are black boxes and we don’t know what they’re trained on, how they work, or what their limitations are.” — Melanie Mitchell, artificial intelligence researcher

We cannot decide whether AI is conscious without a concrete definition of consciousness itself

“To identify sentience, or consciousness, or even intelligence, we’re going to have to work out what they are. Debates over these questions have been going on for centuries.” — Oscar Davis

We have no way of knowing if AI has gained consciousness

“The simple fact is that we don’t have a legitimate, agreed-upon test for AI sentience for exactly the same reason we don’t have one for aliens: nobody knows exactly what we’re looking for.” — Tristan Greene

Humans would pose as much of a threat to sentient AI as it would to us

“If humans ever develop a conscious computing process, it will be quite easy to make millions or billions of copies of it. Doing so without a clue as to whether its conscious experience is good or not seems like a recipe for massive suffering, akin to the current system of factory farming.” — Dylan Matthews

People have a far too limited view of what consciousness is

“Minds can take different forms. Different beings can think and feel in different ways. We may not know how octopuses experience the world, but we know that they experience it very differently than we do. So we should avoid reducing questions about AIs to ‘Can AIs think and feel like us?’” — Jeff Sebo

Is there a topic you would like to see covered in “The 360”? Send your suggestions to [email protected]

Photo illustration: Yahoo News; Photos: Getty Images (3)
