
The 4 Fallacies of Artificial Intelligence

Artificial intelligence researchers are kidding themselves that human-level performance is within reach, argues one leading thinker. Here's why.

By The Physics arXiv Blog
Apr 30, 2021
(Credit: Phonlamai Photo/Shutterstock)


When will artificial intelligence exceed human performance? Back in 2015, a group from the University of Oxford asked the world’s leading researchers in AI when they thought machines would achieve superhuman performance in various tasks.

The results were eye-opening. Some tasks, the experts said, would fall to machines relatively quickly: language translation, driving and writing high school essays, for example. Others would take longer. But the experts put a 50 percent chance on machines being better than humans at more or less everything within 45 years.

Some people are even more optimistic. In 2008, Shane Legg, a cofounder of DeepMind Technologies, now owned by Google, predicted that we would have human-level AI by the mid-2020s. In 2015, Mark Zuckerberg, the founder of Facebook, said that within 10 years Facebook aimed to have better-than-human abilities in all the primary human senses: vision, hearing, language and general cognition.

This kind of hype raises an interesting question. Have researchers misjudged the potential of artificial intelligence and if so, in what way?

Now we get an answer of sorts thanks to the work of Melanie Mitchell, a computer scientist and author at the Santa Fe Institute in New Mexico. Mitchell believes that artificial intelligence is harder than we think because of our limited understanding of the complexity that underlies it. Indeed, she thinks the field is plagued by four fallacies that explain our inability to accurately predict AI’s trajectory.

Machine Victories

The first of these fallacies comes from the triumphalism associated with the victories that machines have had over humans in some areas of artificial intelligence — they are better than us at chess, Go, various computer games, some types of image recognition and so on.

But these are all quite narrow examples of intelligence. The problem arises from the way people extrapolate. "Advances on a specific AI task are often described as 'a first step' towards more general AI," says Mitchell. But this is a manifestation of the fallacy that narrow intelligence is part of a continuum that leads to general intelligence.

Mitchell quotes the philosopher Hubert Dreyfus on this topic: "It was like claiming that the first monkey that climbed a tree was making progress towards landing on the moon." But in reality, there are numerous unexpected obstacles along the way.

The second fallacy is based on a paradox popularized by the computer scientist Hans Moravec and others. He pointed out that difficult activities for humans — playing chess, translating languages and scoring highly on intelligence tests — are relatively easy for computers; but things we find easy — climbing stairs, chatting and avoiding simple obstacles — are hard for computers.

Nevertheless, computer scientists assume that machines will soon match human cognitive abilities, even though our thought processes hide a huge amount of complexity. Mitchell points to Moravec’s writing on this topic: "Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it." It is this accumulated, invisible expertise that makes genuinely difficult tasks seem easy to us.

Mitchell’s third fallacy centers on wishful mnemonics. Computer scientists, she says, tend to name programs, subroutines and benchmarks after the human capabilities they hope these will mimic. For example, one widely cited benchmark is the Stanford Question Answering Dataset, which researchers use to compare the ability of humans and machines to answer certain questions.

Mitchell points out that this and other similarly named benchmarks actually test a very narrow set of skills. Nevertheless, they lead to headlines suggesting that machines can outperform humans, which is true only in the narrow sense that the benchmark measures.

"While machines can outperform humans on these particular benchmarks, AI systems are still far from matching the more general human abilities we associate with the benchmarks’ names," she says.
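To see just how narrow such a test is, consider a minimal sketch of the kind of scoring a question-answering benchmark like this typically uses, judging a predicted answer purely by string match and token overlap against reference spans. This is my own illustration in Python, with invented example strings and simplified normalization, not the benchmark's official evaluation code.

# A toy sketch of SQuAD-style scoring: exact match and token-overlap F1.
# Details are illustrative assumptions, not the official evaluation script.
from collections import Counter
import re
import string


def normalize(text: str) -> str:
    # Lowercase, drop punctuation and articles, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, reference: str) -> bool:
    return normalize(prediction) == normalize(reference)


def token_f1(prediction: str, reference: str) -> float:
    pred_tokens = normalize(prediction).split()
    ref_tokens = normalize(reference).split()
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


print(exact_match("the Eiffel Tower", "Eiffel Tower"))         # True
print(round(token_f1("the Eiffel Tower", "Eiffel Tower"), 2))  # 1.0

A system can score superbly on this kind of overlap metric without anything resembling a general ability to understand questions, which is Mitchell’s point about the name promising more than the test measures.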

Mitchell’s final fallacy is the idea that intelligence resides entirely in the brain. "The assumption that intelligence can in principle be 'disembodied' is implicit in almost all work on AI throughout its history," she says.

But in recent years, the evidence has grown that much of our intelligence is outsourced to our human form. For example, if you jump off a wall, the non-linear properties of your muscles, tendons and ligaments absorb the impact without your brain being heavily involved in coordinating the movement. By contrast, a similar jump by a robot often requires limb positions and joint angles to be measured precisely while powerful processors work out how the actuators should behave to absorb the impact.

Morphological Computing

In a way, all that computation is carried out by the morphology of our bodies, which itself is the result of billions of years of evolution (another algorithmic process). None of this "morphological computation" is done in the brain. Cognitive psychologists (and to be fair, some computer scientists) have long studied this aspect of intelligence.
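To make the idea concrete, here is a toy sketch, in Python, of a point mass landing on a purely passive, nonlinear spring-damper "leg". The model and all the parameter values are my own illustrative assumptions, not taken from Mitchell's paper; the point is simply that the impact is absorbed with no controller at all, the way body mechanics can absorb a jump.

# Toy illustration of morphological computation: a passive spring-damper "leg"
# absorbs a landing impact with no controller. Parameters are invented.
m = 70.0      # body mass, kg
g = 9.81      # gravity, m/s^2
k = 20000.0   # leg stiffness, N/m (illustrative)
c = 1500.0    # leg damping, N*s/m (illustrative)
dt = 1e-4     # integration step, s

x, v = 0.0, 3.0   # leg compression (m) and downward speed (m/s) at touchdown
peak_force = 0.0

for _ in range(int(0.5 / dt)):           # simulate half a second of stance
    # Passive leg force: a spring that stiffens as it compresses, plus damping.
    leg_force = max(k * x * (1.0 + 5.0 * x) + c * v, 0.0)  # legs push, they don't pull
    a = g - leg_force / m                # net downward acceleration of the body
    v += a * dt
    x += v * dt
    if x < 0.0:                          # the leg has re-extended; the body rebounds
        break
    peak_force = max(peak_force, leg_force)

print(f"peak leg force: {peak_force:.0f} N (~{peak_force / (m * g):.1f} body weights)")

The "computation" that turns a hard landing into a soft one is done by the mechanics themselves; a conventional robot controller would instead have to sense joint angles and compute actuator commands fast enough to achieve the same effect.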

But many artificial intelligence researchers fail to take this into account when predicting the future. "The assumption that intelligence is all in the brain has led to speculation that, to achieve human-level AI, we simply need to scale up machines to match the brain’s 'computing capacity' and then develop the appropriate 'software' for this brain-matching hardware," says Mitchell.

Indeed, many futurists seem to assume that a superhuman intelligence could be entirely disembodied.

Mitchell profoundly disagrees. "What we’ve learned from research in embodied cognition is that human intelligence seems to be a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a common sense understanding of the world," she says. "It’s not at all clear that these attributes can be separated."

Together, these fallacies have given many AI researchers a false sense of the progress made in the past and of what is likely in the future. Indeed, an important open question is what it means to be intelligent at all. Without a clear understanding of the very thing researchers are hoping to emulate, the prospects for progress seem bleak.

Mitchell raises the idea that much of today’s artificial intelligence research bears the same relation to general intelligence as alchemy does to science. "To understand the nature of true progress in AI, and in particular, why it is harder than we think, we need to move from alchemy to developing a scientific understanding of intelligence," she concludes.

A fascinating read!


Reference: Melanie Mitchell, Why AI is Harder Than We Think. arxiv.org/abs/2104.12871
