
How Scientists Are Building a Better Brain-on-a-Chip

To create a more efficient AI, researchers are looking to the brain for answers once again.

By Claire Bugos
May 17, 2021 8:30 PM
(Credit: Natali _ Mis/Shutterstock)

For nearly a century, scientists have looked to the brain to create computing models. The basis of many of these systems, from the earliest artificial intelligence to today's deep learning models, is artificial neural networks. These networks of electric nodes are a rough approximation of the inner workings of our minds. Like the neurons that carry pulses throughout our nervous system, the signals sent through artificial neural networks, or ANNs, allow machines to solve complex problems and even learn over time.

This technology has spurred advances in AI in the past few decades. ANNs, long considered the gold standard for computing systems based on the brain, are found in nearly every setting imaginable, from finance to robotics to smartphones.

But computing at this level can take a toll on resources. In one 2019 study, researchers estimated that training a single deep-learning model can generate roughly as much CO2 as five cars emit over their entire lifetimes. That's about 17 times the amount the average American emits in a year.

As artificial intelligence systems become larger and more complex, researchers are working on ways to make these processes more energy efficient and sustainable. To achieve this, experts look (once again) toward the most efficient processing system we know of — the brain.

The Brain as a Muse

In the brain, neurons are connected in pathways. One neuron, if it receives enough input, will fire a signal to the next one down the line. As more signals pass between these neurons, that connection is strengthened. Neuroscientists describe this process with the mnemonic “fire together, wire together,” and it’s essentially how learning happens.

As early as the 1940s, key thinkers were developing computer models based on the biology of the human brain. To create neural networks in computers, scientists forge links between different processing elements in the system, modeled after the transfer of signals across synapses in the brain. Each of these connections has a so-called weight, which indicates how strong the connection between an input and an output is. Much like in the biological brain, these weights can be strengthened or weakened based on how the computer system is trained.
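
To make that idea concrete, here is a minimal sketch of a Hebbian-style weight update, in which a connection grows stronger when its input and output neurons are active at the same time. The array sizes, learning rate, and function name are illustrative assumptions, not details from the article.

```python
import numpy as np

# A minimal sketch of a "fire together, wire together" weight update.
# All sizes and constants below are illustrative assumptions.

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(3, 3))  # connection strengths

def hebbian_update(weights, pre, post, learning_rate=0.01):
    """Strengthen each connection in proportion to how active its
    input (pre) and output (post) neurons were at the same time."""
    return weights + learning_rate * np.outer(post, pre)

pre_activity = np.array([1.0, 0.0, 1.0])   # input neurons that fired
post_activity = np.array([0.0, 1.0, 1.0])  # output neurons that fired
weights = hebbian_update(weights, pre_activity, post_activity)
```

Modern deep-learning systems use more sophisticated update rules, but the core idea is the same: training adjusts the weights of the connections.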

Artificial neural networks are a clunky approximation of the biological brain’s true processing power, though. In many versions of ANNs, layers of neurons are stacked atop each other. In each layer, these neurons receive signals from the previous layer before setting off all the neurons in the next. Triggering each input and output in one direction like this can bog down the system’s processing power and require much more energy. In the era of deep learning, the resources needed for a best-in-class AI model have doubled every 3.4 months, on average. And as artificial intelligence systems become bigger and more complex, efficiency is becoming increasingly important.
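
As a rough illustration of why this is expensive, the sketch below pushes an input through stacked layers and computes every neuron in every layer on every pass, whether or not its output matters. The layer sizes and activation function are arbitrary choices for the example, not details from the article.

```python
import numpy as np

# A toy feed-forward pass through stacked layers. Every neuron in
# every layer is computed for every input, which is part of what
# makes conventional ANNs power-hungry at scale.

rng = np.random.default_rng(0)
layer_sizes = [784, 256, 64, 10]  # illustrative layer widths
weights = [rng.normal(scale=0.1, size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x, weights):
    for w in weights:
        x = np.maximum(0.0, w @ x)  # every unit computes, needed or not
    return x

output = forward(rng.random(784), weights)
```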

“As its design becomes more and more sophisticated, you require more and more computational resources — you require much more power,” says Wenzhe Guo, a student of electrical and computer engineering at King Abdullah University of Science and Technology.

To mitigate this problem, scientists look back to the brain for clues. In recent years, researchers have made great advancements in the development of spiking neural networks (SNNs), a class of ANN based more closely on biology. Under the SNN model, individual neurons trigger other neurons only when they are needed. This emulates the “spike” that triggers the passage of signals through biological neurons. This asynchronous approach ensures the system powers an interaction only when it’s needed for a certain action.
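
A common building block of SNNs is the leaky integrate-and-fire neuron, sketched below. The threshold, leak rate, and input values are illustrative assumptions; the point is that the neuron stays silent, and so triggers no downstream work, until its accumulated input crosses a threshold.

```python
# A minimal leaky integrate-and-fire neuron, a standard SNN building
# block. Constants here are illustrative assumptions, not values from
# any chip described in the article.

def lif_neuron(input_currents, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for current in input_currents:
        potential = leak * potential + current  # integrate input, with leak
        if potential >= threshold:
            spikes.append(1)   # fire only when the threshold is crossed...
            potential = 0.0    # ...then reset the membrane potential
        else:
            spikes.append(0)   # otherwise stay silent (no downstream work)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.9, 0.4]))  # -> [0, 0, 1, 0, 0, 1]
```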

Guo is the lead researcher on a team that programmed a low-cost microchip to use SNN technology. His team showed that their chip was 20 times faster and 200 times more energy-efficient than other neural network platforms. Moving away from ANNs, which are simplistic approximations of the brain, he says, opens new opportunities for speed and efficiency.

Major companies have begun to harness the power of the SNN model to create and train complex neuromorphic chips, processors designed to more closely mirror how the human brain interacts with the world. IBM’s TrueNorth, unveiled in 2014, contains one million neurons and 256 million synapses on a 28-nanometer chip. Intel’s Loihi chip contains 130,000 neurons in 14 nanometers and is capable of continuous and autonomous learning.

More Human Than Human?

Artificial intelligence, Guo says, “has been involved in every aspect of life.” Computing based on the nervous system is already widely used in image classification and audio recognition software, cognitive robotics, personal communication, the study of muscle activity, and much more.

As these computing systems continue to more closely resemble the brain, there is some effort to use AI chips to study the mysterious organ that inspired them. There’s no existing in vitro model system — experiments that take place in test tubes and petri dishes — for scientists hoping to study neurodegenerative diseases, like Alzheimer’s and Parkinson’s disease. Testing drugs in actual brain tissue can be challenging, too, because the organ's complexity can make it difficult to pinpoint the exact mechanisms driving certain research outcomes.

In a 2020 review published in Neural Networks, a team of researchers compared ANNs and SNNs. Although SNNs have yet to reach the computational level of ANNs, the authors say continual progress will drive them in the same direction. “The rapid progress in this domain continuously produces amazing results with ever-increasing network size,” the study authors write, adding that the technology's trajectory is akin to the initial development of deep learning.

Guo says that SNN chips, like the one his team is developing, are designed to serve a variety of purposes. As chips more closely resemble the neural pathways of the human brain, they may one day offer a useful model for neurologists studying different diseases. "As of now, this SNN is still not as good as ANN,” Guo notes. “But it has full potential in the future.”
