The death of Moore’s Law is finally starting to stink

News Room

For more than two decades we’ve heard about the death of Moore’s Law. It was an observation by the late Intel co-founder Gordon Moore that the number of transistors in a chip would double about every two years. In 2006, Moore himself said it would end in the 2020s. MIT Professor Charles Leiserson said it was over in 2016. Nvidia’s CEO declared it dead in 2022. Intel’s CEO claimed the opposite a few days later.
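The doubling Moore described compounds quickly. A minimal sketch of the math, with a purely illustrative starting count (not real chip data):

```python
# Moore's Law as stated: transistor counts double roughly every
# two years. The starting count below is illustrative only.
def projected_transistors(start_count: int, years: float,
                          doubling_period: float = 2.0) -> float:
    """Project a transistor count forward under Moore's Law."""
    return start_count * 2 ** (years / doubling_period)

# A decade means five doublings, i.e. a 32x increase.
print(projected_transistors(1_000_000, 10))  # 32000000.0
```

That exponential pace is why even a modest slowdown in the doubling period compounds into a large gap within a few generations.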

There’s no doubt that the concept of Moore’s Law — or rather observation, lest we treat this like some law of physics — has led to incredible innovation among desktop processors. But the death of Moore’s Law isn’t a moment in time. It’s a slow, ugly process, and we’re finally seeing what that looks like in practice.

Creative solutions

We have two brand-new generations from AMD and Intel, neither of which really came out of the gate swinging. As you can read in my Core Ultra 9 285K review, Intel’s latest attempt pulls off a lot of impressive feats with its radically new design, but it still can’t hold up to the competition. And the Ryzen 9 9950X, although a clear upgrade over its Zen 4 counterparts, doesn’t deliver the generational improvements we’ve become accustomed to.

Consider this — looking at Cinebench R23, the multi-core jump from the Ryzen 9 5950X to the Ryzen 9 7950X was 36%. Between the Ryzen 9 7950X and Ryzen 9 9950X? 15%. That’s less than half the improvement within one generation. In Handbrake, the Ryzen 9 7950X sped up transcoding by 34% compared to the Ryzen 9 5950X. With the Ryzen 9 9950X, the improvement shrank to just 13%.
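The percentages above are simple gen-over-gen ratios. A quick sketch of the math — the scores below are illustrative placeholders chosen to reproduce the article’s 36% and 15% figures, not measured results:

```python
# Gen-over-gen comparison: how much faster is the new part
# relative to the old one, as a percentage?
def pct_improvement(old_score: float, new_score: float) -> float:
    """Percentage improvement of new_score over old_score."""
    return (new_score / old_score - 1) * 100

# Illustrative (not measured) multi-core scores: a 36% jump
# followed by a 15% jump, matching the article's figures.
gen1, gen2, gen3 = 25_000, 34_000, 39_100
print(round(pct_improvement(gen1, gen2)))  # 36
print(round(pct_improvement(gen2, gen3)))  # 15
```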

This isn’t just one odd generation, either. Looking at the single-core performance of the Core i9-10900K and Core i9-12900K, Intel delivered a 54% improvement. Even comparing the Core i9-12900K, which is three generations old at this point, to the latest Core Ultra 9 285K, we see just a 20% improvement. Worse, the new Core Ultra series from Intel shows unusually flattering results in Cinebench, and if you break out to other applications, you can actually see some regressions compared to a generation or two back.

AMD Ryzen 7 7800X3D sitting on a motherboard.

Even within just a few years, the rate of performance improvements has slowed considerably. Moore’s Law doesn’t directly talk about performance improvements — it’s simply concerned with the number of transistors on a chip. But that has clear performance implications. Throwing more transistors at the problem isn’t as practical as it once was — read up on the death of Dennard scaling if you want to learn more about why that’s the case.

AMD and Intel may not talk about it publicly, but both companies clearly see the writing on the wall. That’s likely why Intel pivoted to a hybrid architecture in the first place, and why it’s introduced a radical redesign with its Arrow Lake CPUs. And for AMD’s part, it’s no secret that 3D V-Cache has become a defining technology for the company’s CPUs, and it’s a clear way to skirt the bottleneck of Moore’s Law. A large chunk of the transistors on any CPU die are dedicated to cache — somewhere in the range of 40% to 70% — and AMD is literally stacking additional cache on top of the die because it can’t fit any more onto it.

A function of space

One important factor to keep in mind when looking at Moore’s Law and Dennard scaling is space. You can build a massive chip with a ton of transistors, sure, but how much power will it draw? Will it be able to stay under a reasonable temperature? Will it even be practical to put in a PC — or, in the enterprise, a server? You cannot separate the number of transistors from the size of the die.

I’m reminded of a conversation I had with AMD’s Chris Hall, where he told me: “We were all enjoying Moore’s Law for a long time, but that’s sort of tailed off. And now, every square millimeter of silicon is very expensive, and we can’t afford to keep doubling. We can, we can build those chips, we know how to build them, but they become more expensive.”

Nvidia GeForce RTX 4090 GPU.

I’m not here to defend Nvidia’s insane pricing strategy, but the company has reportedly seen higher pricing from TSMC for its RTX 40-series GPUs than it saw from Samsung for its RTX 30-series GPUs. And the RTX 4090 does deliver more than twice the transistor count of the RTX 3090 at a very similar die size. If there’s a commitment to Moore’s Law across chips, I’m not sure we as consumers will like the outcome when it comes time to upgrade a PC.

That’s not to mention the other problems a card like the RTX 4090 has faced — high power requirements, an insane cooler size, and a melting power connector. Not all of these problems are a function of doubling the number of transistors, not even close, but it plays a role. Bigger chips mean more transistors, more heat, and usually a higher cost, especially as the price of silicon continues to climb.

The shortcut

Moore’s Law is dead, PC hardware is getting more expensive, and everything sucks — that’s not how I want to leave this. There will be more ways to deliver performance improvements year over year that don’t rely solely on fitting more transistors on a chip of the same size. The way we’re getting there now is just different. I’m talking about AI.

Wait, don’t click off the article. Tech companies are excited about AI because it represents a lot of money — cynical as that perspective is, it’s just the way trillion-dollar corporations like Microsoft and Nvidia work. But AI also represents a way to bring about a new form of computing. I’m not talking about a slew of AI assistants and hallucinatory chatbots, but rather applying machine learning to a problem to approximate results that we would previously get with pure silicon innovation.

Ray Reconstruction in Star Wars Outlaws.

Look at DLSS. The idea of using upscaling to maintain a certain level of performance is controversial, and it’s a nuanced conversation when it comes to individual games. But DLSS is enabling better performance without a strict hardware improvement. Add on top of that frame generation, which we now see from DLSS, FSR, and third-party tools like Lossless Scaling, and you have a lot of pixels that are never rendered by your graphics card.

A less controversial angle is Nvidia’s Ray Reconstruction. It’s no secret that ray tracing is demanding, and part of getting around that hardware demand is denoising — limiting the number of rays cast, then cleaning up the resulting image. Ray Reconstruction delivers a result that would otherwise require far more rays and much more powerful hardware, and it does so without limiting performance at all — once again, through machine learning.

It really doesn’t matter if Moore’s Law is dead or alive and well — if companies like AMD, Intel, and Nvidia want to stay afloat, they’ll continually need to think of solutions to address rising performance demands. Innovation is far from dead in PC hardware, but it might start to look a little different.





