
AI, GDP, and the Public Risk Few Are Talking About

By Mark Keenan

December 1, 2025

Artificial intelligence is being sold as the technology that will "change everything." Yet while a handful of firms are profiting enormously from the AI boom, the financial risk may already be shifting to the public.

The louder the promises become, the quieter another possibility seems to be:

What if AI is not accelerating the economy at all - but disguising the fact that it is slowing down?

For months, the headlines have declared that AI is transforming medicine, education, logistics, finance, and culture. Yet when I speak with people in ordinary jobs, a different reality emerges: wages feel sluggish, job openings are tightening, and the loudest optimism often seems to come from sectors most invested in the AI narrative.

This raises an uncomfortable question:
Has AI become a true growth engine - or a financial life-support system?

The Mirage of Growth

Recent economic data suggests that a significant portion of U.S. GDP growth is being driven not by broad productivity, but by AI-related infrastructure spending - especially data centers.

A study from S&P Global found that in Q2 2025, data center construction alone added roughly 0.5 percentage points to U.S. GDP growth. That is a historic figure. But what happens if this spending slows?

Are we witnessing genuine economic expansion - or merely a short-term stimulus disguised as innovation?
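
To make that arithmetic concrete, here is a minimal back-of-envelope sketch in Python. Only the 0.5 percentage-point data-center contribution comes from the S&P Global estimate above; the headline growth figure is a purely hypothetical placeholder.

```python
# Back-of-envelope sketch. Only the 0.5 percentage-point data-center
# contribution comes from the S&P Global estimate cited above; the headline
# growth figure is a hypothetical placeholder.

headline_growth_pp = 3.0        # assumed annualized GDP growth, percentage points (hypothetical)
data_center_contrib_pp = 0.5    # contribution from data center construction (figure cited above)

growth_ex_data_centers = headline_growth_pp - data_center_contrib_pp

print(f"Headline GDP growth:            {headline_growth_pp:.1f} pp")
print(f"From data center construction:  {data_center_contrib_pp:.1f} pp")
print(f"Growth excluding data centers:  {growth_ex_data_centers:.1f} pp")

# If data center construction merely stops growing next quarter, its
# contribution drops toward zero and the headline number falls by the full
# 0.5 pp - even if nothing else in the economy changes.
```

The specific numbers do not matter; what matters is how quickly a headline figure deflates once a single category of spending stops accelerating.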

This pattern is not new. In Ireland, in the years before the 2008 housing collapse, construction boomed, GDP rose, and optimism became mandatory. Skepticism was dismissed as pessimism. The United States experienced something similar in the same period: real estate appeared to be a pillar of prosperity - until it wasn't. On paper, both economies looked strong. In reality, fragility was already setting in.

Today, I see echoes of that optimism - except this time, the bubble is not bricks and concrete. It may be silicon, data, and expectation.

The Productivity Paradox

AI has been presented as a labor-saving miracle. But many businesses report a different experience: "work slop" - AI-generated content that looks polished but must be painstakingly corrected by humans. Time is not saved - it is quietly relocated.

Studies point to the same paradox:

  • According to media coverage, MIT found that 95% of corporate AI pilot programs show no measurable ROI.
  • MIT Sloan research indicates that AI adoption can lead to initial productivity losses - and that any potential gains depend on major organizational and human adaptation.
  • Even McKinsey - one of AI's greatest evangelists - warns that AI only produces value after major human and organizational change. "Piloting gen AI is easy, but creating value is hard."

This suggests that AI has not yet removed human labor.
It has hidden it - behind algorithms, interfaces, and automated output that still requires correction.

We are not replacing work. We may only be concealing it.

AI may appear efficient, but it operates strictly within the limits of its training data: it can replicate mistakes, miss what humans would notice, and reinforce boundaries that present an "approved" or "consensus" version of reality rather than reality itself.

Once AI becomes an administrative layer - managing speech, research, hiring, and access to capital - it can become financially embedded into institutions, whether or not it produces measurable productivity.

At that point, AI does not strengthen human judgment - it administers it. And then we should ask:
Is AI improving society - or merely managing and controlling it?

The Data Center Stampede - But Toward What?

McKinsey estimates that over $6.7 trillion may be spent on AI and computing infrastructure by 2030 - a level of capital allocation usually seen in wartime. But what exactly is being built - and will it ever return value to ordinary people?

Like other critics, analyst Jack Gamble warns of a troubling pattern: cloud and chip companies investing in AI startups that then buy computing and cloud services from their backers - potentially creating a circular economy of investment and demand. On this view, AI may be becoming a circular economy of expectations rather than a new engine of growth or real value.
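
A toy calculation may make that circularity easier to see. Everything below is hypothetical - "vendor" and "startup" are stand-ins, and the figures are invented purely for illustration.

```python
# Toy model of the circular pattern described above (all figures hypothetical).
# "vendor" stands in for a chip/cloud company; "startup" for an AI firm it backs.

vendor_investment_in_startup = 1_000_000_000  # hypothetical $1B equity stake
share_spent_back_on_vendor = 0.80             # hypothetical: 80% of the raise buys the vendor's compute

recycled_revenue = vendor_investment_in_startup * share_spent_back_on_vendor
external_revenue = 0                          # hypothetical: no outside customers yet in this loop
reported_ai_revenue = recycled_revenue + external_revenue

print(f"Vendor's reported new AI revenue:      ${reported_ai_revenue:,.0f}")
print(f"  ...recycled from its own investment: ${recycled_revenue:,.0f}")
print(f"  ...from customers outside the loop:  ${external_revenue:,.0f}")

# The vendor can book $800M of new "demand", yet most of that cash originated
# with the vendor itself - growth on paper, little new value from outside.
```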

Now the State Is All-In: The Genesis Mission

Even before the government intervened, parts of the AI economy already appeared self-reinforcing. Now that intervention has arrived.

As noted in a recent LewRockwell commentary, a 2025 U.S. executive order - dubbed the "Genesis Mission" - may institutionalize AI infrastructure, potentially transforming deeply indebted AI firms into de facto state-backed entities.

In November 2025, the U.S. president signed an executive order linking federal supercomputers, national-laboratory datasets, private-sector AI firms, and taxpayer funding into a unified national AI platform. This does not guarantee bailouts - but it creates the conditions under which major AI firms may become too strategically important to fail. Once AI is embedded into national strategy, failure becomes more than a financial problem - it becomes a political one.

This transforms AI from a speculative investment trend into a publicly underwritten enterprise, embedding AI infrastructure into national science and economic policy. Under these conditions, any failure of AI - technological, economic, or environmental - will not remain the problem of a few venture-backed firms. It will become a problem for the public and future generations.

Federal support may buffer data-center and computing firms from market corrections, but it also increases systemic risk if the promised productivity never materialises. When infrastructure becomes publicly supported before its value is proven, failure becomes harder to admit - and easier to subsidise.

There is a further concern: once certain firms become "state-protected," competition may shrink, accountability may weaken, and the largest AI companies could become entrenched as essential infrastructure. That would strengthen precisely the pattern this article questions - where labor is concealed, economic value is assumed, and risk is quietly socialised.

Who Carries the Risk?

The deeper concern is not AI itself - but where the financial risk of AI may already be hiding.

Large retirement funds and passive index portfolios are now concentrated in AI-dependent giants such as Nvidia, Amazon, Microsoft, Google, and Tesla. On the debt side, data-center financing and private credit tied to AI infrastructure are quietly entering bond portfolios.

This means the AI boom is not simply an investment trend.
It may already be embedded inside the retirement accounts of ordinary citizens - without their knowledge.
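
As a rough illustration of how that exposure arises, consider the sketch below. The index weights and account balance are entirely hypothetical; real fund weights vary by index and date.

```python
# Illustrative only: hypothetical weights for a few AI-linked mega-caps in a
# broad index fund, and a hypothetical retirement balance held in that fund.

hypothetical_weights = {
    "Nvidia": 0.07,
    "Microsoft": 0.06,
    "Amazon": 0.04,
    "Google": 0.04,
    "Tesla": 0.02,
}

retirement_balance = 300_000  # hypothetical account fully invested in the index fund

ai_linked_share = sum(hypothetical_weights.values())
ai_linked_dollars = retirement_balance * ai_linked_share

print(f"Share of the fund in these five names: {ai_linked_share:.0%}")
print(f"Dollar exposure in this one account:   ${ai_linked_dollars:,.0f}")

# The saver never chose to "invest in AI", yet a meaningful slice of the
# account rises and falls with the AI trade simply by holding the index.
```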

With the Genesis Mission, risk is no longer only held by markets - it is being woven into public institutions, budgets, national research plans, and long-term economic policy.

If AI becomes truly transformative, perhaps this risk will be justified.
But if AI merely relocates labor and props up GDP on paper - then the risk will not fall on venture capital.
It may fall on pensioners, savers, civil servants, and state retirement systems.

Questions the Public Deserves Answers To

  • Is AI transforming work - or creating new layers of hidden labor?
  • Are data centers driving prosperity - or only supporting GDP in the short term?
  • Are citizens knowingly investing in AI - or are they being invested through passive portfolios?
  • Is AI creating value - or merely absorbing capital and subsidies?

With so much capital and debt tied to AI, it is beginning to resemble something that could be labeled "too big to fail" - much like the private banks that were rescued after the 2008 financial crisis. When enough money, debt, and public risk are tied to a technology, questioning it becomes difficult - and supporting it becomes mandatory - whether it is delivering real value or not.

Conclusion

As I have written elsewhere, we should not let AI overshadow human thought. AI may still deliver genuine breakthroughs, but at this moment, belief seems to be moving faster than evidence. History reminds us that optimism is most dangerous when it becomes unquestioned.

Because if the promised future arrives late - or not at all - the cost will not fall on the visionaries or the corporations. It will fall on the public.

Now that AI is being treated as national infrastructure, its success or failure is no longer a private gamble - it has become a public risk. And public risks always come with a public bill.

Note to readers who purchased the book The War on Men: a small number of early print copies were released before editing took place. The current edition has been fully revised, professionally edited, and formatted. If you received an early draft, please message me via Substack or through my X account at @TheMarkGerard - I'll gladly send you the corrected edition at no cost. Messages are received directly and replied to personally.
