State of Moore's Law 2025

Moore’s Law continues to evolve and adapt
In December 2024, I wrote about how Moore’s Law was not only alive but thriving. AI had taken center stage, quantum computing was making strides, and the semiconductor industry was racing to push boundaries. Fast forward to 2025, and the landscape has shifted yet again. We’ve officially entered what Intel calls the “Angstrom Era,” chiplets have gone mainstream, and the industry has proven it can innovate its way around physical limits.
So, where does Moore’s Law stand today? Has it finally hit a wall, or has it simply found new roads? Let’s dive in.
The Angstrom Era Has Arrived
2025 has been a milestone year for semiconductor manufacturing. Leading manufacturers are now capable of packing an astonishing 50 billion transistors onto a chip the size of a fingernail. That’s mind-boggling when you think about it.
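As a quick sanity check on that figure, assume a fingernail-sized chip is on the order of 1 cm² (an assumption for illustration only):

```python
# Back-of-the-envelope density check for "50 billion transistors on a
# fingernail-sized chip". The 1 cm^2 chip area is an assumption.
transistors = 50e9
area_mm2 = 100  # 1 cm^2 = 100 mm^2
density = transistors / area_mm2
print(f"{density/1e6:.0f} million transistors per mm^2")  # 500
```

Half a billion transistors per square millimeter is in the same ballpark as published leading-edge logic density figures, so the claim holds up to a rough check.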
The key technological leap enabling this is the transition from FinFET to Gate-All-Around (GAA) transistor architecture. This new design, where the gate material wraps completely around the channel, provides superior electrostatic control—essentially giving engineers more precision at atomic scales.
The industry’s key players have all embraced this shift, and the results are impressive.
Progress of Key Companies
TSMC
TSMC began mass production of its 2nm (N2) process in late 2025, utilizing GAA to deliver a 10-15% performance boost or a 25-30% power reduction compared to its 3nm node. They remain the undisputed manufacturing leader, and their roadmap suggests even more aggressive scaling ahead.
Samsung
Samsung also started mass production of its 2nm process in 2025, featuring their third-generation GAA technology (known as MBCFET). They’ve been pushing hard to close the gap with TSMC, and this year showed they’re serious about it.
Intel
Intel is on track with its aggressive roadmap, with its 18A (1.8nm-class) process entering manufacturing. This node combines their version of GAA, called RibbonFET, with an industry-first backside power delivery network known as PowerVia. Former CEO Pat Gelsinger’s bold promise of a trillion transistors in a single device by the end of the decade? It’s looking more realistic than ever.
“Until the Periodic Table is exhausted, Moore’s Law is alive and well, and there will be a trillion transistors in a single device by the end of the decade.” (Intel CEO Pat Gelsinger, quoted by Tsarathustra, @tsarnick, January 18, 2024)
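A quick back-of-the-envelope check on that trillion-transistor target, treating the roughly 50 billion transistors per chip cited earlier and the 2030 goal as round numbers:

```python
import math

# How fast must transistor counts grow to go from ~50B per device
# today to 1T by 2030? Both figures are round numbers from the text.
transistors_now, target, years = 50e9, 1e12, 5
doublings = math.log2(target / transistors_now)
print(f"{doublings:.1f} doublings needed")              # 4.3
print(f"one doubling every {years/doublings:.1f} years")  # 1.2
```

A doubling roughly every 14 months is faster than the classic two-year cadence, which is exactly why the industry is counting on chiplets and 3D stacking to put more dies in one device rather than shrinking a single die that quickly.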
The Challenges: Let’s Be Honest
While these advancements are impressive, they don’t come easily. The classic interpretation of Moore’s Law is facing significant headwinds.
Physical Limits: Transistors are approaching atomic dimensions. Silicon atoms in the crystal lattice sit only about 0.5nm apart, so the smallest features on a leading-edge chip now span just a handful of atoms. The physical end of geometric scaling is no longer a distant theoretical concern; we’re literally counting atoms at this point.
Economic Costs: The pace of doubling has slowed from two years to three or even four. More importantly, the cost per transistor is no longer decreasing at historic rates past the 5nm node. Building a new leading-edge fabrication plant now costs upwards of $20 billion. Yes, billion with a B.
Power and Heat: Cramming more transistors into a small space generates immense heat. This has led to the problem of “dark silicon,” where portions of a chip must be powered down to manage thermals. As a result, the industry’s focus has shifted from raw transistor count to performance per watt.
The New Playbook: Chiplets and 3D Stacking
Faced with these challenges, the industry has pivoted from a singular focus on scaling to a multi-faceted strategy for performance gains. This is where the most exciting innovation is happening.
Instead of creating one large, monolithic chip, designers are now breaking systems down into smaller, specialized modules called chiplets. Think of them as Lego bricks—each optimized for a specific function (compute, I/O, memory) and manufactured on the most cost-effective process node.
These chiplets are then combined using advanced packaging techniques:
- 2.5D Integration places chiplets side-by-side on a silicon interposer.
- 3D Stacking vertically stacks dies on top of one another, connected by high-density Through-Silicon Vias (TSVs) and advanced hybrid bonding.
This approach provides immense flexibility, improves manufacturing yields, and enables heterogeneous integration—mixing and matching components from different vendors. The emergence of industry standards like the Universal Chiplet Interconnect Express (UCIe) is creating a robust ecosystem for this new design paradigm.
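The yield claim can be made concrete with the classic Poisson defect-yield model. The defect density below is an illustrative assumed value, not a real process figure:

```python
import math

# Classic Poisson yield model: Y = exp(-A * D), with die area A (cm^2)
# and defect density D (defects/cm^2). D = 0.1 is an assumption for
# illustration, not a published process number.
def die_yield(area_cm2: float, defect_density: float) -> float:
    return math.exp(-area_cm2 * defect_density)

D = 0.1
monolithic = die_yield(8.0, D)  # one large 8 cm^2 monolithic die
chiplet = die_yield(2.0, D)     # one of four 2 cm^2 chiplets
print(f"monolithic die yield: {monolithic:.1%}")  # 44.9%
print(f"per-chiplet yield:    {chiplet:.1%}")     # 81.9%
```

Because each chiplet is tested before assembly, a defect scraps a small 2 cm² die rather than the entire design, so far more of the wafer ends up in sellable product.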
AI Demand’s Impact: The New Engine of Innovation
If there’s one force reshaping the semiconductor landscape more than any other in 2025, it’s artificial intelligence. The insatiable hunger for AI compute has triggered what analysts are calling a “giga cycle”—a sustained demand surge that’s fundamentally altering everything from chip design to factory construction.
The Industry-Wide Effect
Let’s look at the numbers. The semiconductor industry hit $627 billion in sales in 2024 and is projected to reach $697 billion in 2025—a new all-time high. The industry is now on track to hit $1 trillion by 2030. A massive chunk of this growth? AI chips.
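Those projections imply a surprisingly modest compound growth rate; treating the $697 billion and $1 trillion figures as exact round numbers:

```python
# Implied compound annual growth rate (CAGR) if industry sales grow
# from the projected $697B in 2025 to $1T in 2030 (five years).
sales_2025, target_2030, years = 697e9, 1e12, 5
cagr = (target_2030 / sales_2025) ** (1 / years) - 1
print(f"implied CAGR: {cagr:.1%}")  # 7.5%
```

About 7.5% a year gets there, which is why analysts treat the trillion-dollar milestone as a question of when, not if.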
Generative AI chips alone—GPUs, specialized accelerators, and the memory to feed them—were worth over $125 billion in 2024, representing more than 20% of total chip sales. That figure is expected to exceed $150 billion in 2025. Here’s the kicker: these AI chips account for less than 0.2% of total wafer volume but generate roughly 20% of industry revenue. Talk about high-value silicon.
Memory has become the new bottleneck. High Bandwidth Memory (HBM) has emerged as a critical component, with revenue expected to grow from $16 billion in 2024 to over $100 billion by 2030. Companies like Micron report their HBM production is “sold out through 2026.” The so-called “memory wall”—the gap between how fast processors can compute and how fast they can access data—has made memory bandwidth as important as raw compute power.
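The memory wall can be sketched with a simple roofline-style calculation. The hardware numbers here are round illustrative assumptions, not any specific accelerator’s spec sheet:

```python
# Roofline-style sketch of the "memory wall": a workload is memory-bound
# when its arithmetic intensity (FLOPs per byte moved) falls below the
# machine balance (peak FLOPs / memory bandwidth). Both hardware numbers
# below are assumed round figures for illustration.
peak_flops = 1e15   # 1 PFLOP/s of compute (assumed)
bandwidth  = 4e12   # 4 TB/s of HBM bandwidth (assumed)
balance = peak_flops / bandwidth  # FLOPs/byte needed to stay compute-bound
print(f"machine balance: {balance:.0f} FLOPs/byte")  # 250

# LLM token generation streams every weight once per token: with 16-bit
# weights that is ~2 FLOPs per 2 bytes, i.e. ~1 FLOP/byte.
intensity = 1.0
print("memory-bound" if intensity < balance else "compute-bound")
```

At roughly 1 FLOP per byte, token generation sits far below the machine balance, which is why HBM bandwidth, not peak TOPS, so often sets real-world AI performance.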
Advanced packaging is another area seeing explosive growth. Technologies like TSMC’s CoWoS (chip-on-wafer-on-substrate) are essential for connecting AI chips to their HBM stacks. Production capacity is expected to reach 90,000 wafers per month by end of 2026, up from levels that simply couldn’t meet demand in 2024.
The AI PC Revolution
Perhaps the most visible impact for consumers is the emergence of the “AI PC.” What was a marketing buzzword a year ago has become an industry standard. Gartner projects AI PCs will account for 43% of all PC shipments in 2025—that’s 114 million units, a 165% increase from 2024.
The key technology enabling this shift is the Neural Processing Unit (NPU)—dedicated silicon for running AI workloads locally on your device. Why does this matter? Running AI on-device means faster response times, better privacy (your data doesn’t leave your laptop), and the ability to work offline.
The competition among chip vendors is fierce:
- Qualcomm’s Snapdragon X2 Elite hits 80 TOPS (trillions of operations per second) with exceptional battery life, capturing nearly 25% of the premium laptop segment and challenging x86’s long-standing dominance.
- AMD’s Ryzen AI Max 300 takes a different approach with “Platform TOPS,” combining CPU, NPU, and integrated GPU. With up to 96GB of allocatable VRAM, it can run a 70-billion-parameter LLM like Llama 3 70B entirely locally. That’s a research-grade AI model on a laptop.
- Intel’s Panther Lake features their NPU 5 with 50 TOPS of dedicated AI performance, reaching 180 “Total Platform TOPS” when combined with their new Xe3 graphics.
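To see why that 96GB figure matters, here is a rough weight-only memory estimate for a 70-billion-parameter model at common precisions (KV cache and activations add more on top):

```python
# Rough weight-memory footprint for a 70B-parameter model at common
# precisions. Weights only; KV cache and activations are extra.
params = 70e9
footprint = {}
for name, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
    footprint[name] = params * bytes_per_param / 2**30  # GiB
    print(f"{name}: {footprint[name]:.0f} GiB")
```

At FP16 the weights alone (~130 GiB) would overflow 96GB, so local 70B inference in practice depends on 8-bit or 4-bit quantization, both of which fit comfortably.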
The implications are significant. Local AI processing is starting to cannibalize the low-to-mid-range discrete GPU market. Features like real-time translation, AI-powered video editing, and smart assistants that actually respect your privacy are becoming standard. Microsoft’s “Windows AI Foundry” is standardizing NPU access for developers, and software like Adobe Creative Cloud is being optimized to offload tasks to these dedicated AI engines.
By 2028, nearly all PCs are expected to have onboard NPUs, and AI laptops will command a 10-15% price premium. The shift toward local, on-device AI—what some call “Sovereign AI”—represents a fundamental change in how we think about personal computing.
The Symbiosis
Here’s the beautiful irony: AI is not just driving chip demand—it’s helping design the chips themselves. AI tools are now being used to optimize chip layouts, predict defects, and accelerate the design cycle. Graph neural networks and reinforcement learning are helping engineers create more power-efficient designs faster than ever before.
It’s a virtuous cycle. Better AI demands better chips. Better chips enable better AI. And around we go.
Beyond Silicon: A Glimpse into the Future
On the materials front, there’s exciting research happening. Scientists are exploring materials beyond silicon, such as graphene and molybdenum disulfide, that promise better speed and power characteristics. Experimental gates have reached down to 0.34 nanometers using these exotic materials.
We’ve come full circle: the technology born from Moore’s Law is now helping extend it.
The Road Ahead
So, where does all this leave us? While the physical limits of transistor scaling remain a challenge, the combined forces of GAA transistors, chiplet architectures, 3D stacking, and new materials are ensuring that Moore’s Law continues to hold true—at least in spirit.
Moore’s Law in 2025 is not a single rule but a layered strategy. The focus has broadened from simply shrinking components to system-level optimization, delivering more performance per watt, per dollar. The old road map may be ending, but the industry has already drawn a new, more complex, and arguably more innovative one for the journey ahead.
I’m hopeful that we’ll see those trillion-transistor chips before 2030. The journey is filled with challenges, but if 2025 has shown us anything, it’s that this industry knows how to innovate its way forward.
A Final Thought
As we navigate this rapidly evolving landscape, it’s worth remembering that computing power has seen a 1,000,000,000,000,000,000,000x improvement over the years. Pause to let that sink in. We’re living through one of the most remarkable sustained periods of technological progress in human history.
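Taking that figure at face value, it’s fun to express it as successive doublings:

```python
import math

# How many doublings does a 10^21x improvement represent?
doublings = math.log2(1e21)
print(f"{doublings:.0f} doublings")  # 70
```

Roughly seventy consecutive doublings of capability, however you date the starting point.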
What do you think—will Moore’s Law survive another decade? Do comment with your thoughts.
References:
- https://www.investopedia.com/terms/m/mooreslaw.asp
- https://www.xda-developers.com/intel-roadmap-2025-explainer/
- https://www.tomshardware.com/tech-industry/semiconductors/tsmc-begins-quietly-volume-production-of-2nm-class-chips
- https://siliconangle.com/2025/12/19/samsung-debuts-worlds-first-two-nanometer-mobile-processor/
- https://www.deloitte.com/us/en/insights/industry/technology/technology-media-telecom-outlooks/semiconductor-industry-outlook.html
- https://www.gartner.com/en/newsroom/press-releases/2024-09-25-gartner-forecasts-worldwide-shipments-of-artificial-intelligence-pcs-to-account-for-43-percent-of-all-pcs-in-2025
- https://www.tomshardware.com/tech-industry/semiconductors/semiconductor-industry-enters-giga-cycle-as-ai-infrastructure-spending-reshapes-demand
- https://markets.financialcontent.com/wral/article/tokenring-2025-12-26-the-ai-pc-revolution-intel-amd-and-qualcomm-battle-for-npu-performance-leadership-in-2025