From 80 Man-Months to Just a Few Hours: AI is Transforming the Design of NVIDIA Chips
During a discussion at GTC 2026 between Bill Dally and Jeff Dean, the chief scientists of NVIDIA and Google respectively, it emerged that NVIDIA already relies extensively on artificial intelligence in its semiconductor design workflow. This is not exactly new (it has been discussed for years), but the topic is more relevant than ever, and the exchange between the two scientists shows the scale of the improvements, in speed and beyond, that AI is bringing to NVIDIA.
These AI applications span architecture verification, bug management, and the development of standard cell libraries. However, Dally was clear that a fully automated end-to-end design flow is still far off: today AI acts as an accelerator and an advanced assistant, not as a replacement for the engineering process.
One concrete example is NVCell, a reinforcement-learning tool developed internally. Its task is to port standard cell libraries to new process nodes. Traditionally, this operation required about 80 man-months: a team of eight engineers working for roughly ten months to adapt between 2,500 and 3,000 cells. With NVCell, the same work is completed overnight on a single GPU. Moreover, according to NVIDIA's claims, the results are comparable to or better than human designs in area, power consumption, and delay. This yields a double advantage: higher productivity and a lower barrier to transitioning to new lithographic processes.
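NVCell itself is proprietary, but the core idea, an agent that learns to place cells so that a layout cost is minimized, can be illustrated with a toy example. The sketch below uses Monte-Carlo reinforcement learning on an invented miniature problem (three connected cells, five one-dimensional slots, wire length as the cost); every name, net, and hyperparameter here is a hypothetical stand-in, not NVIDIA's formulation.

```python
import random
from itertools import permutations

random.seed(0)

# Toy stand-in for RL-based cell layout: place 3 connected cells
# into 5 one-dimensional slots so that total wire length is minimal.
SLOTS = list(range(5))        # slot coordinates 0..4 (hypothetical)
NETS = [(0, 1), (1, 2)]       # cell 0 wires to cell 1, cell 1 to cell 2
N_CELLS = 3

def wirelength(placement):
    """Total wire length of a complete placement (lower is better)."""
    return sum(abs(placement[a] - placement[b]) for a, b in NETS)

Q = {}  # Q[(cells_placed_so_far, slot)] -> learned value estimate

def legal_actions(state):
    return [s for s in SLOTS if s not in state]

def episode(epsilon=0.3, alpha=0.2):
    """Place cells one by one, then learn from the final layout cost."""
    state, trajectory = (), []
    while len(state) < N_CELLS:
        acts = legal_actions(state)
        if random.random() < epsilon:          # explore
            a = random.choice(acts)
        else:                                  # exploit current estimates
            a = max(acts, key=lambda s: Q.get((state, s), 0.0))
        trajectory.append((state, a))
        state += (a,)
    reward = -wirelength(state)                # reward only at the end
    for s, a in trajectory:                    # every-visit MC update
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + alpha * (reward - old)

def greedy_placement():
    """Roll out the learned policy without exploration."""
    state = ()
    while len(state) < N_CELLS:
        acts = legal_actions(state)
        state += (max(acts, key=lambda s: Q.get((state, s), 0.0)),)
    return state

for _ in range(5000):
    episode()

best = greedy_placement()
optimum = min(wirelength(p) for p in permutations(SLOTS, N_CELLS))
print(best, wirelength(best), optimum)
```

The real problem differs in scale and detail (2-D layout, design rules, power and delay in the objective), but the structure is the same: the agent proposes placements, a scoring function plays the role of the design-rule and cost checker, and the learned policy replaces months of manual iteration.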
A second tool mentioned is PrefixRL, an AI that experiments with thousands of ways to structure parallel prefix circuits, the tree-like structures (found in adders, for example) that combine partial results inside the chip, finding configurations more efficient than manually designed ones. Here AI does not merely accelerate human work; it also explores non-intuitive solutions. The circuit structures it generates would, according to Dally, be difficult for a human designer to conceive, with estimated improvements of between 20% and 30% in key metrics.
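To see the design space such a tool searches, compare two classic hand-designed prefix structures: a serial (ripple) chain, which uses few operators but has long logical depth, and a Kogge-Stone network, which is shallow but uses many more operators. This sketch computes the same prefix sums both ways and reports the depth/operator-count tradeoff; it is an illustration of the tradeoff space only, not NVIDIA's method.

```python
import operator

def serial_prefix(xs, op):
    """Ripple-style prefix: minimal operator count, depth n-1."""
    out = [xs[0]]
    for x in xs[1:]:
        out.append(op(out[-1], x))
    return out, len(xs) - 1, len(xs) - 1   # (results, depth, op_count)

def kogge_stone_prefix(xs, op):
    """Kogge-Stone prefix: depth ceil(log2 n), more operators."""
    ys = list(xs)
    depth = ops = 0
    shift = 1
    while shift < len(ys):
        ys = [ys[i] if i < shift else op(ys[i - shift], ys[i])
              for i in range(len(ys))]
        ops += len(ys) - shift
        depth += 1
        shift *= 2
    return ys, depth, ops

n = 16
xs = list(range(1, n + 1))
s_res, s_depth, s_ops = serial_prefix(xs, operator.add)
k_res, k_depth, k_ops = kogge_stone_prefix(xs, operator.add)
assert s_res == k_res  # same prefix values, different circuit shape
print(f"serial:      depth={s_depth}, ops={s_ops}")   # depth=15, ops=15
print(f"kogge-stone: depth={k_depth}, ops={k_ops}")   # depth=4,  ops=49
```

Between these two extremes lies a huge space of mixed structures trading delay against area and power; that combinatorial space, intractable to enumerate by hand, is what an RL agent like PrefixRL can explore.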
This points to a paradigm shift: artificial intelligence is not just an efficiency tool but also a means of expanding the design solution space. In parallel, NVIDIA has developed internal language models such as ChipNeMo and BugNeMo, trained on proprietary material, including RTL and documentation of GPU architectures accumulated over time. These systems answer internal technical questions, help engineers understand design blocks, summarize bug reports, and route issues to the correct modules or teams.
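The routing step, matching an incoming bug report to the team that owns the relevant module, can be sketched with a much simpler stand-in for the language model: bag-of-words cosine similarity against per-team vocabulary profiles. The team names, profiles, and report text below are entirely hypothetical, and a production system would use learned embeddings rather than raw word counts.

```python
import math
from collections import Counter

# Hypothetical team profiles: representative vocabulary per module owner.
TEAM_PROFILES = {
    "memory-subsystem": "cache dram latency bandwidth refresh ecc memory",
    "shader-core": "warp thread register alu occupancy shader kernel",
    "display-engine": "hdmi displayport scanout vsync resolution panel",
}

def bow(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route_bug(report):
    """Return the team whose profile best matches the bug report text."""
    vec = bow(report)
    return max(TEAM_PROFILES, key=lambda t: cosine(vec, bow(TEAM_PROFILES[t])))

report = "intermittent ecc errors and high dram refresh latency under load"
print(route_bug(report))  # -> memory-subsystem
```

The value of the LLM-based version is that it generalizes beyond exact vocabulary overlap, drawing on the architecture documentation and past bug history it was trained on, but the input/output contract is the same: report text in, owning team out.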
A practical application is support for junior engineers, who can query the model instead of constantly turning to more experienced colleagues. This improves operational efficiency and speeds up internal training. NVIDIA is thus progressively integrating AI into critical points of the design workflow, with tangible benefits in both time and quality. Full automation remains a long-term goal, but current results show that the technology is already substantially changing the way chips are designed.