We are told that AI neural networks “learn” like humans do. A neuroscientist explains why that’s not the case

Recently developed artificial intelligence (AI) models are capable of many impressive feats, including image recognition and human-like language production. But just because AI can engage in human-like behaviors doesn’t mean it can think or understand as humans do.

As a researcher studying how humans understand and reason about the world, I think it’s important to point out that the way AI systems “think” and learn is fundamentally different from the way humans do – and we still have a long way to go before AI can truly think like us.

Read more: Robots create pictures and tell jokes. 5 things to know about foundation models and the next generation of AI

A common misconception

Developments in AI have produced systems capable of very human-like behaviors. The language model GPT-3 can produce text that is often indistinguishable from human writing. Another model, PaLM, can produce explanations for jokes it has never seen before.

More recently, a general-purpose AI known as Gato has been developed that can perform hundreds of tasks, including captioning images, answering questions, playing Atari video games, and even controlling a robot arm to stack blocks. And DALL-E is a system that was trained to produce modified images and illustrations from a textual description.

These breakthroughs have led to bold claims about the capability of such AI and what it can tell us about human intelligence.

For example, Nando de Freitas, a researcher at Google’s AI firm DeepMind, says scaling up existing models will be enough to produce human-level AI. Others have echoed this view.

In all this excitement, it’s easy to assume that human-like behavior implies human-like understanding. But there are several key differences between how AI and humans think and learn.

Neural networks vs the human brain

The most recent AI is built from artificial neural networks, or “neural nets” for short. The term “neural” is used because these networks are inspired by the human brain, in which billions of cells called neurons form complex webs of connections with one another, processing information as they fire signals back and forth.

Neural nets are a highly simplified version of the biology. A real neuron is replaced by a simple node, and the strength of the connection between nodes is represented by a single number called a “weight”.

With enough connected nodes stacked into enough layers, neural nets can be trained to recognize patterns and even to “generalize” to stimuli that are similar (but not identical) to those they have seen before. Put simply, generalization refers to an AI system’s ability to take what it has learned from one set of data and apply it to new data.
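
To make the “nodes and weights” idea concrete, here is a minimal sketch in Python. Every value, and the choice of a ReLU nonlinearity, is an arbitrary illustration rather than a model of any real system:

```python
# One artificial "neuron": a weighted sum of its inputs passed through
# a simple nonlinearity (here, ReLU). All values are made up for illustration.
import numpy as np

def node(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    activation = np.dot(inputs, weights) + bias
    return max(0.0, activation)  # ReLU: clip anything below zero

x = np.array([0.5, -1.2, 3.0])  # three incoming signals
w = np.array([0.8, 0.1, -0.4])  # one "weight" per connection
print(node(x, w, bias=0.1))     # the node's outgoing signal
```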

Being able to identify features, recognize patterns, and generalize from examples is central to the success of neural networks – and mimics techniques humans use for such tasks. Yet there are important differences.

Neural networks are typically trained by “supervised learning”. That is, they are presented with many examples of an input and the desired output, and the connection weights are gradually adjusted until the network “learns” to produce the desired output.

To learn a language task, a neural net may be presented with a sentence one word at a time, and will slowly learn to predict the next word in the sequence.
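
As an illustration of that input/output framing, here is a toy Python sketch of next-word prediction. It uses simple word-pair counts instead of the learned weights and gradient descent a real language model relies on, so it shows only how the “desired output” is defined, not the learning mechanism (the corpus is invented):

```python
# Toy next-word prediction: each word is the "input" and the word that
# follows it in the training text is the "desired output".
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Tally which words follow which (the supervised input/output pairs).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Predict the most frequently observed next word."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' (it followed 'the' most often)
```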

This is very different from how humans typically learn. Most human learning is “unsupervised”, which means we are not explicitly told what the “right” response is for a given stimulus. We have to work this out for ourselves.

For example, children are not given instructions on how to speak, but learn it through a complex process of exposure to adult speech, imitation and feedback.

Children’s learning is assisted by adults, but children are not powered by massive datasets the way AI systems are. (Image: Shutterstock)

Another difference is the sheer scale of data used to train AI. The GPT-3 model was trained on 400 billion words, mostly taken from the internet. At a reading rate of 150 words per minute, it would take a human more than 5,000 years of non-stop reading to get through that much text.
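
A quick sanity check of that arithmetic, using the figures quoted above:

```python
# Reading-time arithmetic for the figures quoted above.
words = 400_000_000_000           # size of GPT-3's training corpus, in words
words_per_minute = 150            # assumed human reading speed
minutes_per_year = 60 * 24 * 365  # reading around the clock, no breaks

years = words / (words_per_minute * minutes_per_year)
print(f"{years:,.0f} years")      # -> about 5,000 years of non-stop reading
```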

Such calculations show that humans cannot possibly learn the same way AI does: we have to make far more efficient use of much smaller amounts of data.

Neural networks can learn in ways we can’t

An even more fundamental difference is how neural networks learn. In order to match a stimulus with a desired response, neural networks use an algorithm called “backpropagation” to trace errors back through the network, allowing the weights to be adjusted in the right way.
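
For readers who want to see the mechanics, here is a bare-bones backpropagation sketch in Python with NumPy. It trains a toy two-layer network on random data by gradient descent; every shape and number is an arbitrary illustration, not a model of the brain or of any production system:

```python
# Toy backpropagation: a one-hidden-layer network fitted to random data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))        # 4 example inputs, 3 features each
y = rng.normal(size=(4, 1))        # desired outputs
W1 = rng.normal(size=(3, 5))       # input -> hidden weights
W2 = rng.normal(size=(5, 1))       # hidden -> output weights
lr = 0.1                           # learning rate

for step in range(100):
    # Forward pass: compute the network's current answer.
    h = np.tanh(x @ W1)            # hidden activations
    pred = h @ W2                  # outputs
    error = pred - y               # how wrong is each prediction?

    # Backward pass: trace the error back to get weight gradients.
    grad_W2 = h.T @ error
    grad_h = error @ W2.T
    grad_W1 = x.T @ (grad_h * (1 - h**2))  # tanh'(z) = 1 - tanh(z)^2

    # Adjust each weight a little in the direction that reduces the error.
    W1 -= lr * grad_W1 / len(x)
    W2 -= lr * grad_W2 / len(x)

print(float((error**2).mean()))    # mean squared error on the final pass
```

The crucial step is the backward pass: the output error flows back through W2 to reach W1 – precisely the kind of externally computed error signal the brain has no obvious way of carrying.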

However, it is widely accepted among neuroscientists that backpropagation cannot be implemented in the brain, as it would require external signals that simply do not exist there.

Some researchers have proposed that variants of backpropagation could be used by the brain, but so far there is no evidence that human brains can use such learning methods.

Instead, humans learn by forming structured mental concepts, in which many different properties and associations are linked together. For example, our concept of “banana” includes its shape, its yellow color, our knowledge that it is a fruit, how to hold it, and so on.

As far as we know, AI systems do not form such conceptual knowledge. They rely entirely on extracting complex statistical associations from their training data and then applying them to similar contexts.

Efforts are underway to build an AI that combines different input types (like images and text) – but whether that will be enough for these models to learn the same kinds of rich mental representations that humans use to make sense of the world remains to be seen.

There is still a lot we don’t know about how humans learn, understand and reason. However, what we do know indicates that humans perform these tasks very differently from AI systems.

As such, many researchers believe we will need new approaches and a more fundamental understanding of how the human brain works before we can build machines that truly think and learn like humans.
