
Can computers understand complex words and concepts?

Summary: Artificial intelligence systems can understand complex words and concepts by representing word meanings in a way that strongly correlates with human judgment.

Source: UCLA

In “Through the Looking Glass,” Humpty Dumpty says scornfully, “When I use a word, it means just what I choose it to mean, neither more nor less.” Alice replies, “The question is whether you can make words mean so many different things.”

The study of what words really mean is centuries old. To grasp a word’s meaning, the human mind must parse a web of detailed, flexible information and exercise sophisticated common sense.

Now a newer question about word meanings has arisen: Can artificial intelligence mimic the human mind to understand words the way people do? A new study by researchers from UCLA, MIT and the National Institutes of Health addresses that question.

The article, published in the journal Nature Human Behaviour, reports that artificial intelligence systems can indeed learn very complicated word meanings, and the scientists discovered a simple trick to extract that complex knowledge.

They found that the AI system they studied represents the meanings of words in a way that strongly correlates with human judgment.

The AI system the authors explored has been used extensively over the past decade to study the meaning of words. It learns to figure out word meanings by “reading” astronomical amounts of content on the internet, spanning tens of billions of words.

When words often appear together – for example, ‘table’ and ‘chair’ – the system learns that their meanings are related. And if pairs of words occur together only very rarely – like ‘table’ and ‘planet’ – it learns that their meanings are very different.
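The press release does not name the specific model, but the counting idea behind this family of systems can be sketched in a few lines of Python. This is a toy illustration over a four-sentence corpus, not the authors’ actual system:

```python
from collections import Counter
from itertools import combinations

# Toy corpus; the real system "reads" tens of billions of words.
corpus = [
    "the table and the chair are in the kitchen",
    "she pushed her chair back from the table",
    "astronomers found a planet orbiting a distant star",
    "the planet and its star drift through space",
]

# Count how often each pair of distinct words shares a sentence.
pair_counts = Counter()
for sentence in corpus:
    for a, b in combinations(sorted(set(sentence.split())), 2):
        pair_counts[(a, b)] += 1

def cooccurrence(w1, w2):
    """Raw co-occurrence count for a pair of words."""
    return pair_counts[tuple(sorted((w1, w2)))]

print(cooccurrence("table", "chair"))   # 2: related meanings
print(cooccurrence("table", "planet"))  # 0: very different meanings
```

Real embedding models such as word2vec or GloVe turn statistics like these into dense vectors rather than raw counts, but the underlying signal is the same co-occurrence pattern.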

That approach seems like a logical starting point, but consider how well people would understand the world if their only way of learning meaning were to count how often words occur together, with no ability to interact with other people or with their surroundings.

Idan Blank, a UCLA assistant professor of psychology and linguistics, and co-lead author of the study, said the researchers wanted to learn what the system knows about the words it learns, and what kind of “common sense” it has.

Before the study began, Blank said, the system seemed to have one major limitation: “As far as the system is concerned, any two words have only one numerical value that indicates how similar they are.”
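In embedding terms, that single value is usually the cosine similarity between the two word vectors. A minimal sketch follows; the vectors below are made up for illustration, and real embeddings have hundreds of dimensions:

```python
import numpy as np

def cosine_similarity(u, v):
    """Collapse the relationship between two word vectors into one number."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up vectors for illustration only.
dolphin = np.array([0.7, 0.9, 0.1])
alligator = np.array([0.6, 0.2, 0.8])

# One score, with no way to say in what respect the words are similar.
print(cosine_similarity(dolphin, alligator))
```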

In contrast, human knowledge is much more detailed and complex.

“Think of our knowledge of dolphins and alligators,” Blank said. “When we compare the two on a scale of magnitude, from ‘small’ to ‘large’, they are relatively similar. In terms of their intelligence, they are somewhat different. In terms of the danger they pose to us, on a scale from ‘safe’ to ‘dangerous’, they differ greatly. So the meaning of a word depends on the context.

“We wanted to ask whether this system really knows these subtle differences — whether its idea of similarity is flexible in the same way it is for humans.”

To find out, the authors developed a technique they call “semantic projection.” One can draw a line between the model’s representations of the words ‘large’ and ‘small’, for example, and see where the representations of different animals fall on that line.
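In code, this reduces to an orthogonal projection onto that line. Here is a minimal numpy sketch with toy two-dimensional vectors standing in for real embeddings (the vectors and function names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# Toy word vectors; in the study these come from trained word
# embeddings with hundreds of dimensions.
emb = {
    "small": np.array([0.1, 0.9]),
    "large": np.array([0.9, 0.1]),
    "mouse": np.array([0.15, 0.85]),
    "dolphin": np.array([0.55, 0.45]),
    "elephant": np.array([0.85, 0.15]),
}

def semantic_projection(word, low, high):
    """Project a word vector onto the line from `low` to `high`.

    Returns a scalar: values near 0 mean close to `low`,
    values near 1 mean close to `high`.
    """
    line = emb[high] - emb[low]
    offset = emb[word] - emb[low]
    return float(np.dot(offset, line) / np.dot(line, line))

for animal in ("mouse", "dolphin", "elephant"):
    print(animal, round(semantic_projection(animal, "small", "large"), 2))
# mouse 0.06, dolphin 0.56, elephant 0.94 (ordered by size)
```

Swapping in a line from ‘safe’ to ‘dangerous’ would rank the same animals by danger instead, which is how a single embedding can yield different, context-specific orderings.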

Using that method, the scientists studied 52 word groups to see whether the system could learn to sort meanings — such as judging animals by their size or how dangerous they are to humans, or classifying U.S. states by weather or by overall wealth.

The other word groups included terms related to clothing, occupations, sports, mythological creatures and given names. Each category was assigned multiple contexts or dimensions – for example, size, danger, intelligence, age and speed.

A depiction of semantic projection, which can determine the similarity between two words in a specific context. This grid shows how similar certain animals are based on their size. Credit: Idan Blank/UCLA

The researchers found that, across all of those object categories and contexts, their method’s ratings closely matched human intuition. (To make that comparison, the researchers also asked cohorts of 25 people each to make similar assessments for each of the 52 word groups.)

Remarkably, the system learned to perceive that the names “Betty” and “George” are similar in being relatively “old” but represent different genders, and that “weightlifting” and “fencing” are similar in that both typically take place indoors but differ in how much intelligence they require.

“It’s such a beautifully simple method and completely intuitive,” said Blank. “The line between ‘big’ and ‘small’ is like a mental scale, and we put animals on that scale.”

Blank said he didn’t actually expect the technique to work, but was delighted when it did.

“It turns out that this machine learning system is much smarter than we thought; it contains very complex forms of knowledge, and this knowledge is organized in a very intuitive structure,” he said. “Just by keeping track of which words occur next to each other in a language, you can learn a lot about the world.”


The study’s co-authors are MIT cognitive neuroscientist Evelina Fedorenko, MIT graduate student Gabriel Grand and Francisco Pereira, who leads the machine learning team at the National Institutes of Health’s National Institute of Mental Health.

Funding: The research was funded in part by the Office of the Director of National Intelligence, Intelligence Advanced Research Projects Activity, through the Air Force Research Laboratory.

About this AI and language research news

Author: Stuart Wolpert
Source: UCLA
Contact: Stuart Wolpert – UCLA
Image: Image is credited to Idan Blank/UCLA

Original research: Open access.
“Semantic projection recovers rich human knowledge of multiple object features from word embeddings” by Idan Blank et al. Nature Human Behaviour


Abstract

Semantic projection recovers rich human knowledge of multiple object features from word embeddings

How is knowledge of word meaning represented in the mental lexicon?

Current computational models infer word meanings from lexical co-occurrence patterns. They learn to represent words as vectors in a multidimensional space, in which words that are used in more similar linguistic contexts, i.e., that are more semantically related, are located closer together.

While proximity between word vectors establishes only general relatedness, human judgments are highly context dependent. For example, dolphins and alligators are similar in size but differ in danger.

Here we use a domain-general method to extract context-dependent relationships from word embeddings: ‘semantic projection’ of word vectors onto lines that represent features such as size (the line connecting the words ‘small’ and ‘large’) or danger (from ‘safe’ to ‘dangerous’), analogous to ‘mental scales’. This method recovers human judgments across various object categories and properties.

Thus, the geometry of word embeddings explicitly represents a wealth of context-dependent world knowledge.
