Expert Insights on Costs, Jobs, and the Road Ahead
Navigating the AI Hype
I recently came across a fascinating Substack discussion moderated by Patrick McKenzie (of Bits About Money fame). It featured Michael Burry, the investor who called the 2008 housing crash and now pens Cassandra Unchained; Anthropic co-founder Jack Clark; and podcaster Dwarkesh Patel, who's grilled everyone from Mark Zuckerberg to Satya Nadella on his show. They jumped into a Google Doc to hash out whether AI is the next big thing or just another bubble waiting to pop. Spoiler: opinions vary, but the takeaways are gold for anyone trying to make sense of this "revolution."
The piece, titled "The AI Revolution Is Here. Will the Economy Survive the Transition?", covers everything from AI's origins post-"Attention Is All You Need" to its potential to upend jobs and economies. I'll weave in my thoughts while highlighting what struck me most: the economics, labor shifts, and practical uses that could actually stick.
The Cost Conundrum: Why AI Isn't Like Google Search
One of Burry's sharpest points cuts to the heart of AI's viability: Google's search engine thrived because it was dirt cheap to run, handling billions of queries without bleeding cash on non-monetizable stuff. Contrast that with today's large language models (LLMs) and generative AI, and you'll notice they're resource hogs. We're talking massive compute demands that jack up costs, making it tough to see a clear profit path. Burry nails it: "It is hard to understand what the profit model is, or what any one model’s competitive advantage will be. Will it be able to charge more, or run cheaper?"
This resonates with me because we've seen tech booms fizzle when the math doesn't add up. If AI stays expensive, it might limit adoption to big players who can subsidize it, leaving smaller innovators in the dust. Patel echoes this by noting that inference scaling (running models post-training) demands exponentially rising costs to keep improving. Without breakthroughs in efficiency, we're betting on a house of cards.
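To make the gap concrete, here's a back-of-envelope sketch of per-query economics. Every number in it is an assumption I picked for illustration, not a figure from the discussion or from any provider's pricing:

```python
# Back-of-envelope comparison of per-query serving costs.
# All figures below are illustrative assumptions, not reported numbers.

# Hypothetical classic web search: index lookup dominates,
# assume a small fraction of a cent per query.
search_cost_per_query = 0.0002  # USD, assumed

# Hypothetical LLM query: assume ~1,000 tokens generated at an assumed
# blended price of $10 per million tokens.
llm_tokens_per_query = 1_000
llm_cost_per_million_tokens = 10.0  # USD, assumed
llm_cost_per_query = llm_tokens_per_query / 1_000_000 * llm_cost_per_million_tokens

ratio = llm_cost_per_query / search_cost_per_query
print(f"Assumed search cost: ${search_cost_per_query:.4f} per query")
print(f"Assumed LLM cost:    ${llm_cost_per_query:.4f} per query")
print(f"LLM query is roughly {ratio:.0f}x more expensive under these assumptions")
```

Swap in your own numbers; the structural point Burry is making is that generating an answer costs far more than serving ten blue links, and that gap has to be paid for somewhere.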
Much of that cost comes down to power consumption. If you're an avid reader, you may have caught an article earlier this week, Deep Space Data Centers: AI's Ticket to Infinite Compute?, which explores using solar power in deep space to ease that energy burden.
Labor Impacts: Echoes of Past Revolutions
History buffs will appreciate Burry's nod to how tech shifts have reshaped work before. During the Industrial and Services Revolutions, automation displaced so many jobs that societies expanded mandatory schooling to delay young folks entering the workforce. It was a band-aid to manage unemployment spikes. Fast-forward to AI: despite models acing Turing tests and tackling complex coding, Patel points out the labor market blip is microscopic, if detectable at all.
Why? Automating jobs is messier than it seems. Clark adds that AI's "jagged" capabilities (superhuman in some areas, bizarrely flawed in others) mean it's not plug-and-play for most roles. Patel flips it: humans make errors LLMs would spot as odd, so maybe we're all spiky in our own ways. Still, if AI doesn't trigger mass schooling reforms or unemployment waves soon, it suggests the hype outpaces reality. Burry's take? We're headed for a long downturn anyway, with hyperscalers (like Microsoft and AWS) cycling through layoffs and rehires tied to stock swings, not true productivity.
Speaking of which, headcount isn't the metric to watch. Burry argues it's misleading for cash-rich giants in oligopolies. Value creation hinges on return on invested capital (ROIC), which is tanking as these software firms morph into capital-intensive hardware beasts. Patel counters: why obsess over ROIC when absolute growth (from ads to trillions in labor markets) could dominate? Fair point, but Burry's seen enough roll-ups flop to know size without returns spells trouble.
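For readers who don't live in financial statements, ROIC is simply operating profit after tax divided by the capital tied up in the business. A toy example (all numbers invented) shows why a data-center buildout drags it down even when profits hold steady:

```python
# Toy illustration of Burry's ROIC point: the same operating profit
# spread over a much larger capital base yields a much lower return.
# All numbers are hypothetical.

def roic(nopat: float, invested_capital: float) -> float:
    """Return on invested capital = net operating profit after tax / invested capital."""
    return nopat / invested_capital

nopat = 20.0  # $20B operating profit after tax, assumed constant

software_era_capital = 50.0   # $50B invested capital, assumed
hardware_era_capital = 200.0  # $200B after a heavy data-center buildout, assumed

print(f"Software-era ROIC:  {roic(nopat, software_era_capital):.0%}")   # 40%
print(f"Capital-heavy ROIC: {roic(nopat, hardware_era_capital):.0%}")   # 10%
```

Patel's counter is that the denominator doesn't matter if the absolute prize is large enough; Burry's is that investors eventually notice when returns on each new dollar keep shrinking.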
Competitive Edges and the Escalator Trap
Burry's escalator analogy is brutal: two competing department stores install escalators, but neither gains an edge. Costs rise, status quo remains. That's AI in a nutshell for many implementations: no durable margin or cost improvements, just mutual spending that benefits customers at best. In a commoditized AI world, competitors level up equally, eroding any moat.
This ties into skepticism around players like Nvidia and Palantir. Burry calls them "lucky" adapters, not innovators. Nvidia's CUDA dominance won't last against specialized ASICs and small language models (SLMs), which are cheaper and less power-hungry. Palantir? Burry skewers it: "There are virtually no earnings after stock-based compensation." Ouch. Patel wonders if "dogfooding" (labs using their own AI to boost internal productivity) can create lasting advantages, but with the podium rotating among OpenAI, Anthropic, and Google every few months, it seems gains are fleeting or smaller than claimed.
Clark pushes back, citing self-reported 50% productivity boosts at Anthropic, though studies like METR's suggest otherwise. The jury's out, but if recursive self-improvement (AI building AI) cracks open, Clark warns it could accelerate everything dramatically.
Practical Wins and the Human Factor
Amid the doom-scrolling, real uses shine through. Burry uses Claude for charts and tables, sourcing the data himself and offloading the design work. Patel treats LLMs as one-on-one tutors, preferring their low latency over humans for learning. I've dabbled similarly; it's like having an instant expert minus the scheduling hassle. Burry even suggested he can fix home electrical issues with Claude's help, questioning whether trade jobs are truly AI-proof.
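For what it's worth, the "bring your own data, offload the formatting" pattern Burry describes is easy to reproduce. Here's a minimal sketch using the Anthropic Python SDK; the model name and the CSV are placeholders of mine, not anything from the discussion:

```python
# Minimal sketch: supply your own data, let the model handle presentation.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

my_data = """quarter,revenue_usd_m,capex_usd_m
Q1,120,45
Q2,135,60
Q3,150,90"""

message = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name, swap in your own
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "Turn this CSV into a clean Markdown table and add a one-line "
            "summary of the trend. Do not invent any numbers:\n" + my_data
        ),
    }],
)
print(message.content[0].text)
```

The division of labor matters: the numbers come from you, so the model's job is layout and phrasing, which is exactly where hallucination risk is lowest.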
On risks, perspectives diverge. Clark frets over self-improving AI accelerating beyond control, urging transparency with policymakers. Burry? Less worried about an AGI doomsday, saying humans adapted to Cold War nukes and Terminator fears; we'll handle this too. His policy pitch: pump a trillion into small nuclear reactors and a fortified grid to fuel innovation without power bottlenecks. Clark agrees: AI's energy hunger could supercharge nuclear tech, bolstering economic security.
Patel highlights a core gap: true AGI needs human-like continual learning, not just in-context tricks. Timelines? He pegs it at 5-15 years, but a surprise breakthrough in continual learning could shorten that.
Adaptation Over Apocalypse
This debate underscores AI's dual nature: transformative potential meets economic hurdles. Costs must drop, or it'll stay niche. Jobs won't vanish overnight, but productivity tools are here now. And as Burry says, humans will adapt, provided we don't let power shortages clip our wings.
Is AI overhyping its revolution, or are we on the cusp?
Note: This post draws from the Substack discussion by Burry, Clark, Patel, and McKenzie. Check it out for the full, unfiltered exchange.

