AI Designs Computer Chips for More Powerful AI

Google’s breakthrough could dramatically accelerate the design cycle for intelligent machines.

By The Physics arXiv Blog
Apr 30, 2020 8:00 PM
(Credit: Blue Andy/Shutterstock)


One emerging trend in chip design is a move away from bigger, grander designs that double the number of transistors every 18 months, as Moore’s Law stipulates. Instead, there is growing interest in specialized chips for specific tasks such as AI and machine learning, which are advancing rapidly on scales measured in weeks and months.

But chips take much longer than this to design, and that means new microprocessors cannot be designed quickly enough to reflect current thinking. “Today’s chips take years to design, leaving us with the speculative task of optimizing them for the machine learning models of two to five years from now,” lament Azalia Mirhoseini, Anna Goldie and colleagues at Google, who have come up with a novel way to speed up this process.

Their new approach is to use AI itself to speed up the process of chip design. And the results are impressive: their machine learning algorithm can do in six hours what would take a human chip designer weeks to achieve, even with modern chip-design software.

And the implications are significant. “We believe that it is AI itself that will provide the means to shorten the chip design cycle, creating a symbiotic relationship between hardware and AI with each fueling advances in the other,” say Mirhoseini, Goldie and colleagues.

Microchip design is a complex and lengthy process. It begins with human designers setting out the basic requirements for the chip: its size, its function, how it will be tested, and so on. After that, the team maps out an abstract design for the way data flows through the chip and the logic operations that must be performed on it.

Hugely Complex Networks

The result is an abstract, but hugely complex, network of logic gates and combinations of logic gates with specific known functions, called macros. This network, which may have billions of components, is known as a “netlist.”
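
To make the idea concrete, here is a minimal, hypothetical sketch of how a netlist might be represented in code. The class names and the toy components are purely illustrative and not the representation Google actually uses.

```python
# Hypothetical toy representation of a netlist: macros, standard gates,
# and nets (the wires that connect them). Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Macro:
    name: str
    width: float    # physical footprint of the block
    height: float

@dataclass
class Gate:
    name: str       # a standard cell such as a NAND gate or flip-flop

@dataclass
class Net:
    pins: list      # names of the macros and gates this wire connects

@dataclass
class Netlist:
    macros: list = field(default_factory=list)
    gates: list = field(default_factory=list)
    nets: list = field(default_factory=list)

# Toy example: two macros and one gate tied together by a single net.
netlist = Netlist(
    macros=[Macro("sram0", 40, 30), Macro("mult0", 25, 25)],
    gates=[Gate("nand17")],
    nets=[Net(pins=["sram0", "mult0", "nand17"])],
)
```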

The next stage is to turn the abstract netlist into a physical design by laying out the components on a two-dimensional surface — the chip. However, this process must be done in a way that minimizes the power the chip uses and ensures the design is manufacturable. 

That is no easy task. One way to reduce the power is to minimize the length of the wiring that connects all the components together. Indeed, designers use “wirelength” as a proxy for how power-hungry their designs will be. But even calculating the wirelength and other metrics of performance for a specific chip design is computationally demanding and costly.
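
One widely used wirelength proxy in placement tools is the half-perimeter wirelength: for each net, take the bounding box around the components it connects and sum the box's width plus height over all nets. A minimal sketch, with made-up coordinates:

```python
# Half-perimeter wirelength (HPWL), a standard proxy for total wiring.
# The placement and nets below are hypothetical examples.

def hpwl(placement, nets):
    """placement: {component_name: (x, y)}; nets: list of lists of component names."""
    total = 0.0
    for net in nets:
        xs = [placement[name][0] for name in net]
        ys = [placement[name][1] for name in net]
        # width + height of the bounding box enclosing this net's components
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

placement = {"sram0": (0, 0), "mult0": (10, 5), "nand17": (4, 12)}
nets = [["sram0", "mult0", "nand17"]]
print(hpwl(placement, nets))  # 10 + 12 = 22
```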

Once the wirelength is known, the question arises of whether it can be made shorter, and how. Finding the layout that minimizes it is an optimization problem comparable in difficulty to the traveling salesman problem, for which no efficient general solution is known. But there are some rules of thumb that chip designers have learned over the years.

So the question Google’s researchers ask is whether a machine can learn these rules of thumb, and then apply them to design well-optimized chips more quickly.

The first step is to create an algorithm that can place all the components into the available area. The Google team program their algorithm to do this in two steps.

In the first step, the algorithm places the macros on the chip. These are circuits with known functions that typically take up a rectangular space of a specific size. The program simply orders these by size and places them, largest first, onto the surface.
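
As a rough illustration of the "largest first" idea, the toy placer below sorts hypothetical macros by area and packs them into simple rows. It is a stand-in for explanation only, not Google's placement algorithm.

```python
# Toy "largest first" macro placement: sort by area, pack into rows.
# Macro sizes and the chip width are made-up illustrative values.

def place_macros(macros, chip_width):
    """macros: list of (name, width, height). Returns {name: (x, y)} lower-left corners."""
    placement = {}
    ordered = sorted(macros, key=lambda m: m[1] * m[2], reverse=True)  # biggest area first
    x, y, row_height = 0.0, 0.0, 0.0
    for name, w, h in ordered:
        if x + w > chip_width:           # start a new row when the current one is full
            x, y = 0.0, y + row_height
            row_height = 0.0
        placement[name] = (x, y)
        x += w
        row_height = max(row_height, h)
    return placement

print(place_macros([("sram0", 40, 30), ("mult0", 25, 25), ("io0", 10, 10)], chip_width=60))
```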

The next step is more difficult. The standard logic gates are smaller than the macros and together form a network that the team models as a set of nodes connected by springs, so that connected nodes attract one another. The algorithm places this messy network of gates onto the chip surface, in the space left between the macros, and then allows it to “relax,” letting the springs pull connected nodes together and shrink the wirelength.
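
A toy sketch of the spring idea, assuming a handful of made-up gates and connections: on each iteration, connected nodes move a small step toward each other, shrinking the wiring between them. A real force-directed placer also adds repulsive forces and keeps cells out of the macro regions, which this sketch omits.

```python
# Simplified "spring relaxation": connected gates drift toward each other.
# Step size, iteration count, and the example network are illustrative.

def relax(positions, edges, steps=100, pull=0.05):
    """positions: {gate: [x, y]}; edges: list of (gate_a, gate_b) connections."""
    for _ in range(steps):
        for a, b in edges:
            ax, ay = positions[a]
            bx, by = positions[b]
            dx, dy = bx - ax, by - ay
            # move each endpoint a small fraction of the way toward the other
            positions[a] = [ax + pull * dx, ay + pull * dy]
            positions[b] = [bx - pull * dx, by - pull * dy]
    return positions

pos = {"g1": [0.0, 0.0], "g2": [10.0, 0.0], "g3": [5.0, 8.0]}
edges = [("g1", "g2"), ("g2", "g3")]
print(relax(pos, edges))  # the three gates end up clustered together
```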

The result is a potential circuit layout. This must then be assessed according to its wirelength and other factors that good designs must keep to a minimum, such as congestion, a measure of how many wires are forced through the same narrow gaps. The system then starts over to create a new design, and so on.
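
Congestion can be estimated, for example, by dividing the chip into a grid and counting how many nets' bounding boxes cross each cell; cells crossed by many nets flag likely bottlenecks. The sketch below uses this simple bounding-box approximation with hypothetical values, not the estimator used in the actual tool.

```python
# Rough grid-based congestion estimate: count nets whose bounding boxes
# cross each grid cell. Grid size and chip dimensions are illustrative.

def congestion_map(placement, nets, grid=4, chip=16.0):
    cell = chip / grid
    counts = [[0] * grid for _ in range(grid)]
    for net in nets:
        xs = [placement[n][0] for n in net]
        ys = [placement[n][1] for n in net]
        for i in range(grid):
            for j in range(grid):
                cx, cy = (i + 0.5) * cell, (j + 0.5) * cell  # cell centre
                if min(xs) <= cx <= max(xs) and min(ys) <= cy <= max(ys):
                    counts[i][j] += 1
    return counts

placement = {"g1": (1.0, 1.0), "g2": (13.0, 2.0), "g3": (6.0, 14.0)}
nets = [["g1", "g2"], ["g2", "g3"]]
for row in congestion_map(placement, nets):
    print(row)
```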

Superhuman Performance

In this way, the team built a database of 10,000 chip designs, along with their wirelengths, congestion levels and so on. They then used this database to train a machine learning algorithm to predict these metrics for a given design, and from there to learn how to fine-tune a design to improve it.
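
As a heavily simplified stand-in for that learning step, the sketch below fits a plain linear model that predicts wirelength from a few made-up numeric features of a design. The actual system uses a neural network over the netlist graph trained with reinforcement learning; the features and data here are invented for illustration.

```python
# Illustrative only: predict wirelength from simple design features using
# least-squares regression on a synthetic "database" of designs.
import numpy as np

rng = np.random.default_rng(0)

# Fake database: 10,000 designs, each described by 3 features
# (say, number of macros, average net span, chip utilization).
X = rng.random((10_000, 3))
true_w = np.array([3.0, 5.0, 1.5])
y = X @ true_w + rng.normal(scale=0.1, size=10_000)   # noisy wirelength labels

# Fit the model, then predict the wirelength of an unseen design.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
new_design = np.array([0.2, 0.7, 0.5, 1.0])           # features plus bias term
print("predicted wirelength:", new_design @ w)
```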

The designs are as good as, or even better than, what humans can manage. The algorithm even learns the same rules of thumb that expert human designers have long applied by intuition. For example, the machine distributes the larger macros around the edges of the chip, leaving an empty central region for the messier network of standard logic gates. Human designers know from experience that this arrangement reduces wirelength.

The result is a machine learning algorithm that can turn a massive, complex netlist into an optimized physical chip design in about six hours. By comparison, conventional chip design, which is already highly automated but requires a human in the loop, takes several weeks.

That’s interesting work that could dramatically reduce the cycle time for producing specialized chip designs. As such, it could have significant consequences for the future of AI and other specialized computing tasks.


Ref: Chip Placement with Deep Reinforcement Learning arxiv.org/abs/2004.10746
