In brief No, AI chatbots are not sentient.
Just as the story of a Google engineer who blew the whistle on what he claimed was a sentient language model went viral, multiple publications stepped in to say he was wrong.
The debate over whether the company's LaMDA chatbot is sentient, or has a soul, isn't a very good one, simply because it is too easy to shut down the side that believes it is. Like most large language models, LaMDA has billions of parameters and was trained on text scraped from the internet. The model learns the relationships between words, and which ones are more likely to appear next to each other.
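To make "learns which words appear next to each other" concrete, here is a deliberately tiny sketch of the idea behind statistical language modeling: a bigram counter that predicts the most frequent next word. This toy corpus and code are illustrative assumptions only; real models like LaMDA use billions of learned neural-network parameters, not raw co-occurrence counts.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for internet-scale training text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which -- the co-occurrence statistics
# a minimal "language model" can learn from text alone.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(next_word("the"))  # prints "cat" -- the most common word after "the"
```

The point of the sketch is that nothing here understands anything: the model only reproduces statistical patterns, which is why scaling the same idea up can produce fluent but unreliable answers.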
It seems intelligent enough, and can sometimes answer questions correctly. But it knows nothing about what it says, and has no real understanding of language or anything at all. Language models behave randomly: ask one if it has feelings and it may say yes or no. Ask it if it is a squirrel, and it may also say yes or no. Is it possible that AI chatbots are actually squirrels?
FTC sounds the alarm over the use of AI for content moderation
AI is changing the internet. Realistic-looking photos are used as profile pictures for fake social media accounts, pornographic deepfake videos of women are circulating, and algorithm-generated images and text are posted online.
Experts have warned that these capabilities could increase the risk of fraud, bots, misinformation, harassment and manipulation. Platforms are increasingly turning to AI algorithms to automatically detect and remove bad content.
Now, the FTC is warning that these methods could exacerbate the problems. “Our report emphasizes that no one should view AI as the solution to the spread of harmful content online,” Samuel Levine, director of the FTC’s Bureau of Consumer Protection, said in a statement.
Unfortunately, the technology can be “inaccurate, biased and discriminatory by design”. “Combating harm online requires a broad societal effort, not an overly optimistic belief that new technology — which can be both helpful and dangerous — will take these problems off your hands,” Levine said.
Spotify snaps up deepfake voice startup
Audio streaming giant Spotify has acquired Sonantic, a London-based startup focused on building AI software that can generate entirely synthetic voices.
Sonantic’s technology has been used in gaming and in Hollywood movies, including giving actor Val Kilmer a voice in Top Gun: Maverick. Kilmer played Iceman in the action movie; his lines were spoken by a machine because he has difficulty speaking after battling throat cancer.
Now the same technology seems to be making its way to Spotify as well. The obvious application would be using the AI voices to read audiobooks. Spotify, after all, acquired Findaway, an audiobook platform, last November. It will be interesting to see if listeners will be able to customize how they want their machine narrators to sound. Perhaps there will be different voices for reading children’s books than for horror stories.
“We are very excited about the potential to bring Sonantic’s AI voice technology to the Spotify platform and create new experiences for our users,” Ziad Sultan, Spotify’s vice president of personalization, said in a statement. “This integration will enable us to engage users in a new and even more personalized way,” he hinted.
TSA tests AI software to automatically scan baggage
The U.S. Transportation Security Administration is going to test whether computer vision software can automatically screen baggage for items that appear abnormal or are not allowed on flights.
The trial will take place in a laboratory and is not yet ready for real airports. The software works with the pre-existing 3D Computed Tomography (CT) imaging that TSA agents currently use to peek through people’s bags at security checkpoints. If officers see anything suspicious, they will put the luggage aside and search through it by hand.
AI algorithms can automate some of that process; they can identify objects and mark instances where they detect certain items.
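The detect-and-flag step described above can be sketched in a few lines. Everything here is a hypothetical illustration: the detector outputs, item labels, and confidence threshold are invented, since the actual TSA/Pangiam system is not public.

```python
# Hypothetical downstream logic for a CT object detector: keep only
# bags where a prohibited item was detected with high confidence,
# leaving low-confidence scans for a human officer to review.
PROHIBITED = {"knife", "aerosol", "lighter"}
CONFIDENCE_THRESHOLD = 0.8

def flag_bags(detections):
    """Return IDs of bags containing a high-confidence prohibited item."""
    flagged = set()
    for bag_id, label, confidence in detections:
        if label in PROHIBITED and confidence >= CONFIDENCE_THRESHOLD:
            flagged.add(bag_id)
    return flagged

scans = [
    ("bag-1", "laptop", 0.97),
    ("bag-2", "knife", 0.91),
    ("bag-3", "lighter", 0.55),  # below threshold: left to a human
]
print(flag_bags(scans))  # prints {'bag-2'}
```

The design choice worth noting is the threshold: it trades false alarms against missed items, which is why, per the quote below, human officers still focus on the bags the system considers riskiest.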
“While TSA and other security agencies use CT, this application of AI represents a potentially transformative leap in aviation security, making air travel safer and more consistent while allowing TSA’s highly trained officers to focus on bags that pose the greatest risk,” said Alexis Long, director of product at Pangiam, the technology company working with the administration.
“Our goal is to use AI and computer vision technologies to enhance security by providing TSA and security officers with powerful tools to detect prohibited items that could pose a threat to aviation security. This is an important step towards a new security standard with global implications.”