Nvidia CEO says his AI chips are improving faster than Moore's Law
Nvidia CEO Jensen Huang announced that the company's AI chips are advancing faster than Moore's Law, with the latest superchip being over 30 times faster for AI inference than its predecessor.
Nvidia CEO Jensen Huang has claimed that the performance of the company's AI chips is advancing at a rate that exceeds Moore's Law, which traditionally predicted that the number of transistors on chips would double approximately every year. During a keynote at CES 2025, Huang stated that Nvidia's latest data center superchip is over 30 times faster for AI inference workloads than its predecessor. He emphasized that by innovating across the entire technology stack (architecture, chips, systems, libraries, and algorithms) Nvidia can sustain this accelerated pace. Huang also introduced the concept of "hyper Moore's Law," suggesting that AI development is not slowing down but evolving through three active scaling laws: pre-training, post-training, and test-time compute. He noted that advances in chip performance will reduce the cost of AI inference, making it more accessible, and claimed that Nvidia's chips are now 1,000 times better than those produced a decade ago. This progress matters because leading AI labs rely on Nvidia's hardware to train and run their models, so improvements in these chips are expected to further enhance AI model capabilities.
- Nvidia's AI chips are reportedly improving faster than Moore's Law.
- The latest superchip is over 30 times faster for AI inference than previous models.
- Huang introduced the concept of "hyper Moore's Law" to describe ongoing AI advancements.
- Innovations across the technology stack contribute to accelerated chip performance.
- Nvidia's chips are claimed to be 1,000 times better than a decade ago, indicating significant progress.
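As a rough sanity check on these headline numbers (a hedged sketch; the 1,000x figure is Huang's claim, not an independent benchmark), the implied growth rate can be compared with classic transistor-count scaling:

```python
import math

# Huang's claim: Nvidia's chips are ~1,000x "better" than a decade ago.
claimed_gain = 1000.0
years = 10.0

# Implied doubling time under exponential growth: gain = 2 ** (years / T)
implied_doubling_years = years / math.log2(claimed_gain)  # ~1.0 years

# Classic Moore's Law (transistor count doubles every ~2 years) over the
# same decade would give only:
moores_law_gain = 2 ** (years / 2)  # 32x

print(f"implied doubling time: {implied_doubling_years:.2f} years")
print(f"Moore's Law gain over {years:.0f} years: {moores_law_gain:.0f}x")
```

In other words, 1,000x over a decade implies roughly annual doubling, which lines up with the "every year" formulation of Moore's Law cited in the article rather than the two-year cadence of the 1975 revision.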
Related
The thrill of AI is fading – and Wall Street is getting clear-eyed about value
Nvidia reported a 122% sales increase and doubled profits, but its stock fell 7% as investors reassess AI's revenue potential amid growing competition and waning excitement around the technology.
Elon Musk set up 100k Nvidia H200 GPUs in 19 days; normally takes 4 years
Elon Musk's xAI team established a supercluster of 100,000 Nvidia H200 GPUs in 19 days, a process Nvidia CEO Jensen Huang noted typically requires four years, calling it unprecedented.
Microsoft acquires twice as many Nvidia AI chips as tech rivals
Microsoft has acquired 485,000 Nvidia AI chips, doubling its competitors' purchases. The demand for these chips has surged due to AI advancements, with tech companies projected to spend $229 billion on servers in 2024.
Nvidia bets on robotics to drive future growth
Nvidia is focusing on robotics for growth, launching Jetson Thor in 2025. The global robotics market is expected to grow from $78 billion to $165 billion by 2029, despite safety challenges.
Nvidia announces $3k personal AI supercomputer called Digits
Nvidia's Project Digits, a $3,000 personal AI supercomputer, launches in May 2025, featuring 200 billion parameter capacity, 1 petaflop performance, and support for popular AI frameworks, enhancing accessibility.
The economic version of Moore's Law (cost per transistor halves every 18-24 months) died at 28nm. Dennard scaling broke down even earlier.
Jensen here seems to be referring to "total compute available in a given system", which is a strange metric: it's not compute per dollar, or compute per unit of energy, but ... as far as I can tell, compute per unit of ... volume?
> “We can build the architecture, the chip, the system, the libraries, and the algorithms all at the same time,” said Huang.
This seems only partially related to Moore's law.
Similarly, on the GPU side, much of the improvement comes from more fake pixels and more fake frames; now we can have multiple fake frames per real frame.
> “We can build the architecture, the chip, the system, the libraries, and the algorithms all at the same time,” said Huang. “If you do that, then you can
> move faster than Moore’s Law, because you can innovate across the entire stack.”
So there is no connection at all to the actual Moore's Law, which states that the number of transistors doubles each year. If you optimize your libraries to be 4x faster, that doesn't mean your 'AI chips' are improving faster than Moore's Law.
Statements like these annoy me enough to actually comment. Are they gaslighting us, or do they actually believe this stuff? It really makes me wonder.
It wasn’t really a “law” in the first place, right?
There are basically three dials here:
1. Transistor density (this is the official Moore's Law): improving the number of transistors per unit area.
2. Clock speed (pretty sure this just scales with power, and I'm like 99.9% certain Nvidia is making gas guzzlers that show improvements in top speed because they use more power).
3. Instruction throughput: improving your ISA to be more efficient is something we've generally been getting better at. It's also why microarchitectures exist, as I understand it (though someone can correct me on this).
I feel like NVIDIA is just saying "we can ramp up #2 all day; no one has really explored how much power one of these things can take," when in fact power draw is potentially the worst way to improve this (or at least, they will likely hit a similar limit by dialing up this piece).

How far companies have turned up each of these dials is roughly how I would evaluate how much growth headroom they have before they really feel the limits of CPU/GPU performance. But I would also consider it concerning if companies weren't actively investing in all three, because otherwise it's much more likely they hit the limits of two of these curves and stall out while figuring out how to improve the third.
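On point #2, the clock-speed/power relationship the commenter gestures at is usually summarized by the dynamic-power relation P ≈ C·V²·f. A minimal illustrative sketch (the capacitance, voltage, and frequency values below are made up for illustration, not Nvidia data):

```python
# Dynamic power in CMOS logic: P ≈ C * V^2 * f
# (switched capacitance, supply voltage, clock frequency).
# Raising f usually requires raising V as well, so power grows
# superlinearly with clock speed.

def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    return c_farads * v_volts ** 2 * f_hz

base = dynamic_power(1e-9, 1.0, 2e9)   # baseline: 1.0 V, 2 GHz
boost = dynamic_power(1e-9, 1.2, 3e9)  # +50% clock, +20% voltage

print(f"power ratio: {boost / base:.2f}x")  # ~2.16x power for 1.5x clock
```

This superlinear cost is why "dialing up" clock speed alone runs into thermal and power-delivery limits quickly, supporting the comment's point that power draw is a poor substitute for density or throughput gains.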