Futuretek: The Rise of the Machines (Sort Of)
Why Peter Zeihan Thinks AI Is All Gas, No Chips
Inside the real reasons we’re not getting GPT‑7 in your blender just yet
By Connor Searson, editor, BatShitCrazy.com
The current AI frenzy makes the dot-com bubble look like a school bake sale. Between 1995 and 2000, VCs poured $271 billion into dot-com startups — and we’re on track to exceed that in just two years for AI. OpenAI alone is valued at over $80 billion, more than Dropbox, Box, and Reddit combined. It’s raised over $13 billion, mostly from Microsoft. Other labs — Anthropic, Mistral, xAI, Cohere — are following suit, hauling in massive funding rounds with barely a revenue plan in sight.
And if the Sam Altmans of the world are right — if, as Altman says, “compute is going to be the currency of the future… We are going to want an amount of compute that’s just hard to reason about right now” — then maybe all this manic over-funding is rational.
But what if they’re wrong — or just early?
That’s where Peter Zeihan comes in. Not to kill the dream — just to remind us how physics, politics, and power grids work. And those realities have implications for valuations, resource allocations, and infrastructure projects. If Altman is the prophet of the future, Zeihan’s the guy waving a map from the off-ramp. And his map outlines five brutal constraints.
1. Hardware Ain’t Ready
AI is hungry. Not for data — it has that. Not for engineers — it’s drowning in those. It’s hungry for compute, and compute lives on chips. The problem is, we’re using the wrong kind.
Most of today’s large AI models run on GPUs — chips originally designed for rendering video game graphics. They excel at parallel math, but they weren’t built for the memory bandwidth, interconnect, and efficiency demands of billion-parameter training runs, and we’re already hitting their limits. Specialized chips like Google’s TPUs or custom-built NPUs are promising, but they’re still niche, expensive, and not widely available.
Peter Zeihan points to an even deeper constraint: we’re reaching the physical and geopolitical edge of chip making itself. The newest generation of semiconductors can only be etched with a rare, ultra-precise lithography process known as EUV (extreme ultraviolet). It’s the point where traditional DUV (deep ultraviolet) manufacturing tops out. And while Chinese firms have brute-forced DUV down to 7 nm, the results were expensive, inefficient, and not scalable.
“There’s a coalition of the Dutch, the Japanese, the Koreans, the Americans, and the Taiwanese,” Zeihan says, “trying to draw a line around EUV — because that’s the gatekeeper tech for the future.”
And standing right next to that line is TSMC, the Taiwanese chip giant and the only company on Earth that can reliably mass-produce 3 nm and 5 nm chips using EUV. For Xi Jinping, it’s like watching a pot of golden honey guarded by rivals. TSMC is the honey pot. EUV is the hive.
Until EUV can be scaled globally — or geopolitical tensions defused — the AI industry is building billion-dollar systems on a fragile foundation. For all the talk of exponential growth, we’re still feeding trillion-parameter models with chips built for Fortnite — and hoping no one sneezes in the Taiwan Strait.
2. The Geopolitical Chip Jenga
You can’t run AI on PowerPoints. You need chips. And you can’t get chips without navigating a supply chain that’s part miracle, part minefield.
Right now, the world’s most advanced semiconductors are made in a handful of places — mostly East Asia. Taiwan’s TSMC and South Korea’s Samsung dominate fabrication. Japan supplies ultrapure chemicals. The Netherlands’ ASML makes the EUV machines. The U.S. controls most chip IP — but outsourced manufacturing decades ago.
This supply chain is fragile by design. As Zeihan puts it: “Semiconductors are geopolitical Jenga.”
- Taiwan sits 100 miles from China, which wants reunification — peacefully or not.
- ASML is caught between U.S. bans and Chinese espionage attempts.
- Intel and TSMC fabs in Arizona and Texas are hitting water and power limits.
- Korea’s chip giants rely on American tech, Chinese customers, and Japanese parts — a single diplomatic spat can derail them.
- The South China Sea is a naval flashpoint, and a shipping chokepoint.
Even without a crisis, chip production is slow. It can take two years and four countries to build a single chip. One Taiwanese earthquake delayed global production for weeks.
“A blocked canal, a rare earth export ban, a cyber attack on fab firmware — these aren’t thought experiments,” Zeihan warns. “They’ve already happened.”
3. Electric Sheep Dreams, Brownout Realities
AI might remake everything — but right now, it’s blowing out transformers in Virginia. GPT-4 reportedly took tens of gigawatt-hours to train. A single ChatGPT query is estimated to draw roughly ten times the electricity of a Google search — and the trendline is steep.
Global data center power consumption is expected to roughly double by 2026, per the International Energy Agency. Each new model generation demands far more electricity than the last. But new substations take years. High-voltage lines? A decade.
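That doubling claim implies a steep compound growth rate. Here is a minimal Python sketch of the arithmetic, where the ~460 TWh 2022 baseline and the 2022–2026 window are illustrative assumptions, not figures from this article:

```python
# Back-of-envelope: what annual growth rate turns today's data center
# power demand into double by 2026? Baseline and window are assumptions.

def implied_annual_growth(multiple: float, years: int) -> float:
    """Annual growth rate that turns 1x into `multiple`x over `years` years."""
    return multiple ** (1 / years) - 1

# Doubling over a four-year window (2022 -> 2026).
rate = implied_annual_growth(2.0, 4)  # ~0.189, i.e. ~19% per year

# Project the assumed ~460 TWh global baseline (2022) forward.
baseline_twh = 460
projection = {
    year: baseline_twh * (1 + rate) ** (year - 2022)
    for year in range(2022, 2027)
}

for year, twh in projection.items():
    print(f"{year}: {twh:,.0f} TWh")
```

Roughly 19% compounding, every year, on infrastructure that takes a decade to permit — that is the mismatch.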
And greentech doesn’t make this easier. “Megawatt for megawatt,” Zeihan says, “renewables require 2–5x the copper and chromium of traditional sources.” That’s more strain on a grid already cracking.
In other words: Altman may be pitching AI for everyone. But without upgrades to the grid — and fast — the lights might go out before the demos are done.
4. Bottlenecked at the Starting Line
The chip crunch isn’t just about the top-shelf stuff. Even deliberately neutered silicon is scarce. China’s H3C warned in March that stocks of Nvidia’s downgraded, export-compliant H20 are “nearly depleted.” It’s not Nvidia’s best chip, but demand is voracious anyway.
Meta, Amazon, and OpenAI are all designing custom chips to avoid bottlenecks. Microsoft’s Maia is in testbeds. But new chips need new systems. Power, cooling, rack design — it’s all upstream.
“You don’t just drop a new chip into an existing system and call it a day,” Zeihan says. “It’s not Lego. It’s architecture, energy load, and physics.”
Legacy hardware can’t keep up. Hybrid builds choke on heat, memory, or throughput. And bolting on older chips only deepens the power problem — more racks, more heat, more strain.
5. The Water Beneath the Cloud
AI runs on silicon. It runs on power. And it runs on coolant — which is often potable water.
Training GPT-3 consumed an estimated 700,000 liters of fresh water. The largest modern data centers drink millions of gallons a day. Now multiply that by every cloud provider in the arms race.
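The water math is easy to check. A minimal Python sketch, where the 1-million-gallon-per-day site is an assumed round number for a large facility, not a measurement:

```python
# Back-of-envelope on the water figures. The 700,000-liter GPT-3 estimate
# comes from the article; the 1M gal/day site is an assumed round number.

LITERS_PER_GALLON = 3.785

gpt3_training_liters = 700_000
gpt3_training_gallons = gpt3_training_liters / LITERS_PER_GALLON  # ~185,000 gal

# Assume one large data center drawing a million gallons a day.
site_daily_gallons = 1_000_000

# One day of one site's cooling water covers several GPT-3-scale training runs.
runs_per_day = site_daily_gallons / gpt3_training_gallons

print(f"GPT-3 training: ~{gpt3_training_gallons:,.0f} gallons")
print(f"Assumed 1M gal/day site: ~{runs_per_day:.1f} training runs' worth per day")
```

Under these assumptions, a single large site swallows a GPT-3-scale training run’s worth of water every few hours, every day, forever.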
“The countries with the worst water stress are often the ones most desperate to digitize,” Zeihan warns. “That’s a nasty collision course.”
Case in point: the planned $100 billion Microsoft/OpenAI mega data center in the UAE. Who’s running the environmental impact report?
Yes, there’s desalination. Israel made it work. But desal is energy-intensive — which loops us right back to the grid. Water, like chips, has hard limits. And unlike silicon, it can’t be fabbed. It just runs out.
The Timeline Nobody Wants to Hear
Cooling constraints. Fragile grids. A global race for fabs, fluids, and functioning supply chains. None of it sounds like the launchpad for a revolution. And yet — the bets keep getting bigger.
Because if Sam Altman is right — if, as he recently claimed, “The cost of intelligence is going to fall by a factor of a million… and you’re going to see intelligence as a service, embedded in everything, everywhere” — then AI won’t just reshape industries. It will remake the substrate of modern life.
But first, reality has a few veto points:
- We’re building trillion-parameter models on GPU architecture held together by duct tape and faith.
- The semiconductor supply chain is a geopolitical Rube Goldberg machine.
- The energy demands of scaling AI are explosive, not linear.
- New chips don’t fit old infrastructure.
- Potable water is running out — and desal burns even more power.
The VC pitch deck version of this future is clean, fast, inevitable. But the physics — and the geopolitics — aren’t cooperating on that timeline.
The stakes have rarely felt higher — or more existential. And if you lived through the dot-com boom and bust, you may notice history rhyming again. If there’s a disruption in the supply chain — or even a single misstep — the dot-com bust may start to look like a sweet dream.
Sam Altman is probably right about the trajectory — AI is heading toward ubiquity.
Peter Zeihan is probably right about the timeline — the road is longer, bumpier, and more resource-constrained than investors want to believe.
In other words:
Altman’s pitching a 2030 world.
Zeihan’s explaining why it might not arrive until 2040.
The markets are pricing it in like it’s happening this July.
Filed from a 10-year waitlist for a 2 nm chip — on the Verge of immortality.

