Thought experiments after reading the 2028 GLOBAL INTELLIGENCE CRISIS

You believe AI could eventually create a near-zero-cost world, but before that it may trigger a self-reinforcing collapse in middle-class jobs and consumption, so we must keep advancing AI while adapting fast to the transition.

The news that Block laid off nearly half its team broke on Friday, and it prompted me to read a recently popular article by Citrini Research called The 2028 GLOBAL INTELLIGENCE CRISIS. The article is a series of thought experiments from a Wall Street perspective, written as an investment memo set in the year 2028 (two years from now), in a post–AI-agent era.

The most important takeaway from the long article is a self-reinforcing feedback loop triggered by AI agents, with no natural brake:

AI capabilities improve → companies need fewer workers → white-collar layoffs increase → displaced workers spend less → margin pressure pushes firms to invest more in AI → AI capabilities improve…

Or, as they call it, the dead spiral of human intelligence displacement!
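The loop above can be sketched as a toy simulation. To be clear: every number and rate below is invented purely for illustration, not taken from the article, but it shows the structural point that nothing in the loop pushes back.

```python
# Toy model of the displacement loop described above.
# All parameters are made-up illustrations, not forecasts.

employment = 1.0   # fraction of white-collar workers still employed
ai_level = 1.0     # relative AI capability

for year in range(2026, 2031):
    layoffs = 0.05 * ai_level                # better AI -> more layoffs
    employment = max(0.0, employment - layoffs)
    consumption = employment                 # displaced workers spend less
    margin_pressure = 1.0 - consumption      # shrinking revenue squeezes margins
    ai_level *= 1.0 + 0.5 * margin_pressure  # pressure -> more AI investment
    print(year, round(employment, 3), round(ai_level, 3))
```

Employment falls every year and AI investment accelerates as it falls; there is no term in the loop that slows it down.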

Do I agree with this prediction from an AI researcher’s perspective? The answer is complicated. They definitely overestimate the current capabilities of AI agents, partly due to hype from hobbyists. (I’ve been using coding agents very intensively in the last two months for my research, and I’ve run into many struggles.) However, that doesn’t mean the future they predict won’t arrive, maybe just not that fast.

I will write a blog about my recent experience with Claude Code soon, but today I don’t want to talk about the limitations of AI agents. I only want to talk about their potential impacts.

My first reaction after reading the fiction? Finally, someone wants to talk about the problem on the consumption side! Lately, I’ve had this gut feeling that the whole AI dream doesn’t make sense because of the consumption problem. The key question is basically: will the capitalist economic system still hold if there is no consumption in the system?

Therefore, I decided to do some thought experiments while spending Saturday taking care of my 4-year-old son. I have to admit, my only economics background is the micro and macro courses from my college days. However, that doesn’t stop me from using common sense to run these thought experiments.

I will run thought experiments on the potential outcomes of the post-AI era and think about whether the assumptions behind those outcomes can ever be reached. A quick preview of my conclusions:

  1. AI must become extremely cheap to run at scale, meaning abundant energy plus long-lasting, efficient computing hardware.

  2. Our social and economic system must figure out a way to relax the hard constraints that stay scarce today: energy, land, water, materials, manufacturing, storage, and distribution.

  3. A new way to redistribute the benefits of AI is needed to address the consumption crisis.

Thought Experiments: What is the perfect post-AI outcome we assume, and is it really possible?

Perfect outcome: AI drives the cost of essential life requirements to zero, and everyone can just work on things they like without any pressure to make a living.

This is the utopian outcome of a post-AI world that seduced everyone into starting AI development. But what are the underlying assumptions that support this outcome? I break it down into 3 assumptions:

  1. AI will have minimal cost

  2. AI will make the cost of essential life requirements zero, so there is no scarcity

  3. Everyone doing the things they like will still meet all of society’s needs, with AI supporting it

Let’s see whether each assumption can be realized, and what’s missing.

AI will have minimal cost:

This is hard to reach with LLM-based AI agents. Current agent systems powered by huge LLMs do not have minimal cost. Every time we prompt and generate tokens, a significant amount of energy is used.

This goes against the usual digital-economy pattern, where the average cost per user keeps shrinking as the number of users grows. For AI agents, cost scales with usage: the energy and hardware cost of token generation grow roughly linearly as usage increases. That makes it really hard to make AI “minimal cost,” at least in the current state, where energy is limited and devices (like GPUs) need frequent upgrades.
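The contrast can be made concrete with back-of-the-envelope arithmetic. All the numbers here are invented for illustration (the real serving cost per million tokens varies widely), but the shape of the result does not depend on them:

```python
# Classic digital goods: one fixed cost amortized over users -> per-user cost shrinks.
# LLM agents: each user's tokens cost real energy/GPU time -> per-user cost stays flat.
# All numbers below are invented purely for illustration.

FIXED_DEV_COST = 1_000_000    # one-time cost to build classic software
COST_PER_M_TOKENS = 5.0       # serving cost per million tokens (illustrative)
TOKENS_PER_USER = 2_000_000   # tokens a typical user consumes (illustrative)

software_per_user = {}
agent_per_user = {}
for users in (1_000, 100_000, 10_000_000):
    software_per_user[users] = FIXED_DEV_COST / users
    agent_per_user[users] = TOKENS_PER_USER / 1_000_000 * COST_PER_M_TOKENS

print(software_per_user)  # shrinks toward zero as users grow
print(agent_per_user)     # flat per-user cost no matter the scale
```

More users make classic software cheaper per user; more agent users just burn proportionally more tokens.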

So how could this become true? First, we would need abundant (effectively unlimited) energy, such as controlled nuclear fusion. Second, we would need mature computing devices that can last a very long time to support AI, so the average cost per token drops as more tokens are generated.

AI will make the cost of essential life requirements zero, so there is no scarcity:

This depends on which costs AI can remove. One clear example today is the work of programmers and many office jobs that mainly use computers to process data, plan, and make decisions, work that can be automated by an LLM-based agent. In the future, this may expand to fully autonomous robotics, including drivers, farmers, logistics workers, and so on. This will definitely reduce costs dramatically.

If we remove these labor costs, maybe buying an apple from the supermarket will cost half as much, or even less. But it definitely won’t be zero. This is because some things cannot be replaced by AI: energy needed by cars and devices, water needed by plants, land needed for manufacturing, storage, and distribution. These things will still be scarce under today’s system. How to remove the cost of these inputs is the key question.

Will everyone doing the things they like fulfill all of society’s needs, with AI supporting it?

When AI replaces people, we often assume humans will finally be free to do whatever they want. In the long run, maybe this is possible, if the benefits of AI are evenly distributed back into the whole system. However, under the current capitalist economy, this is close to impossible.

The most common realistic proposal is UBI (Universal Basic Income), where the government taxes companies that use AI and redistributes the gains. However, if you follow the “dead spiral” described in the 2028 article, you can see why this could become a dead end: if consumption keeps shrinking, the income of companies adopting AI will also gradually shrink, which then reduces tax revenue and UBI.

During the transition, what’s needed is for the cost of living to fall faster than UBI does. How can we do that? We would need to develop stronger AI faster than the rate at which companies fire people, but without hype that encourages companies to fire people just to boost stock prices.
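That race can be stated as simple arithmetic. The decline rates below are made-up illustrations: if UBI shrinks with the tax base while the cost of living falls more slowly, purchasing power erodes every year; flip the rates and it grows.

```python
# Purchasing power of a UBI recipient = UBI / cost_of_living.
# All decline rates are made-up illustrations, not forecasts.

def purchasing_power(years, ubi_decline, cost_decline, ubi=1000.0, cost=1000.0):
    """Compound both declines for `years` and return UBI relative to living costs."""
    for _ in range(years):
        ubi *= 1 - ubi_decline
        cost *= 1 - cost_decline
    return ubi / cost

# Spiral wins: tax base (and UBI) shrinks 10%/yr, cost of living only falls 5%/yr.
bad = purchasing_power(5, ubi_decline=0.10, cost_decline=0.05)
# Cheaper living wins: cost of living falls 15%/yr, faster than UBI shrinks.
good = purchasing_power(5, ubi_decline=0.10, cost_decline=0.15)
print(round(bad, 3), round(good, 3))
```

The only thing that matters is the ratio of the two rates, which is why the transition is a race between AI-driven cost reduction and AI-driven job loss.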

The worst outcome from my pessimistic view

I’m normally a pessimistic person who always prepares for the worst. Therefore, I’ll conclude with my most pessimistic view of the post-AI world:

AI will still take a very long time to satisfy the assumptions behind the perfect outcome. Before that, AI will create massive, targeted job losses in white-collar middle-class jobs, creating a systemic crisis in the current capitalist economy. Yet we cannot stop advancing AI until we reach, or come near to, the perfect outcome.

This is stressful in some ways, and motivating in a grim way. We will be affected negatively. Even as university teachers, we may soon face a sharp decrease in students because of the consumption problem. Therefore, we need to adapt fast instead of hoping the worst won’t come.

I hope this whole thing doesn’t scare you. If it does, just treat it as fiction and do what you need to do anyway.

Prompt for polishing this blog using ChatGPT 5.2 Thinking

Review the following text I wrote for my blog. Fix grammar problems and make it easy to read for non-technical audiences. Do not change any view or format in the text. Keep and improve the markdown format. Explain your changes one by one.