Excellent articles can cause the market to confuse “scenario planning” with “prophecy.”
On February 22, 2026, a report titled “The 2028 Global Intelligence Crisis” ignited social media and financial markets, with over 27 million views. On the day of release, IBM plunged 13%, and stocks of DoorDash, American Express, KKR, and others fell more than 6%.
The report was authored by James van Geelen, founder of Citrini Research. The 33-year-old researcher has over 180,000 followers on X, and his Substack ranks first among finance writers; he focuses on equity investment and global macro research, is known for lateral, cross-asset associations, and runs a real-money portfolio that has returned over 200% since 2023. The report presents a scenario set in 2028: within two years, AI rapidly replaces white-collar workers, leading to consumption contraction, software-asset defaults, and credit tightening, ultimately pushing the economy into a distorted state where "technological prosperity" and "social decline" coexist. Van Geelen notes at the outset: "This article discusses a possible scenario, not a prophecy." But the market clearly lacked the patience to distinguish between the two.
More noteworthy than the brief market panic is the widespread discussion it has triggered over the past few days. From academia to investment circles, from Wall Street to the Chinese internet, responses from every perspective have emerged. Rather than blindly trusting one extreme conclusion, perhaps we can piece together a clearer picture of the future from the disagreements and overlaps among these viewpoints.
What Citrini Said
The logic in Citrini’s article is straightforward: rapid advances in AI capabilities lead to large-scale replacement of white-collar jobs → rising unemployment causes consumption to shrink → structured financial products based on SaaS assets face defaults → credit tightens across the broader financial system → the economy falls into a distorted state of “technological prosperity” and “social decline.”
No single link in this causal chain is unfounded. But welding them into a seamless crisis requires a series of quite radical assumptions.
There are many ways to dissect this chain. Let’s focus on three core points: the speed and scale of labor replacement, the transmission mechanism of demand collapse, and the possibility of a financial crisis. We’ll explore what different voices are debating around each link.
Breaking and Building
Citrini's scenario begins with AI replacing white-collar labor at scale. He envisions this accelerating between 2026 and 2028, hitting law, financial analysis, software development, customer service, and similar fields first.
(Figure: change in corporate spending on AI model providers and online labor platforms, grouped by industries' AI exposure)
There is evidence supporting Citrini's view. An empirical study by Bick, Blandin, and Deming, based on corporate expenditure data, shows that after ChatGPT's release, the firms with the highest AI exposure (those that had previously spent the most on online labor markets) significantly increased their spending on AI model providers while cutting their online labor-market spending by about 15%. Notably, the substitution is not one-to-one: for every dollar cut from labor-market spending, firms added only $0.03 to $0.30 of AI spending. In other words, AI is accomplishing the same work at a fraction of the cost of human labor.
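The arithmetic behind that substitution figure is worth making explicit. A minimal sketch, using only the numbers as quoted above (not data from the underlying paper):

```python
# Back-of-the-envelope arithmetic for the substitution figures quoted above.
# Inputs are the article's numbers ($0.03-$0.30 of AI spend per $1 of labor
# spend cut), not figures taken directly from the study itself.

def implied_ai_cost_share(labor_cut, ai_spend_low, ai_spend_high):
    """For each dollar of online-labor spending cut, firms added between
    ai_spend_low and ai_spend_high dollars of AI spending. If the same work
    still gets done, AI's implied cost per dollar of displaced labor is
    simply that ratio."""
    return ai_spend_low / labor_cut, ai_spend_high / labor_cut

low, high = implied_ai_cost_share(1.00, 0.03, 0.30)
print(f"Implied AI cost: {low:.0%} to {high:.0%} of the human labor it replaced")
# → Implied AI cost: 3% to 30% of the human labor it replaced
```

The "fraction of the cost" claim in the text is just this ratio read directly off the spending data, under the assumption that output held constant.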
However, Citrini may overestimate the speed of change. Critics cite the U.S. real estate broker industry as an example: despite the technology’s capacity to drastically reduce the number of agents, the industry still employs over 1.5 million people. Institutional inertia, regulatory barriers, and internal industry interests form a much stronger barrier than technology alone. They argue Citrini underestimates the resistance posed by “institutional momentum.”
Others cite research by Kimball, Basu, and Fernald (1998), which suggests that technological shocks historically tend to be positive supply-side stimuli—short-term employment adjustments occur, but long-term output gains far outweigh job destruction.
In fact, every general-purpose technology in history has spread from the lab to widespread adoption far more slowly than its technical maturity would suggest. Electricity took 30 years to go from 5% to 50% household penetration; telephones took 35 years; even smartphones, the fastest diffuser, took about 5 years. AI may already have the capacity to disrupt many industries, but the gap between technological capability and institutional absorption is never bridged by capability alone.
The second key link in Citrini's scenario is a demand-side spiral: unemployment → income reduction → consumption contraction → corporate profit decline → further layoffs.
Citrini confuses demand-side deflation with supply-side deflation here. The former means consumers' purchasing power shrinks; the latter means technological progress lowers production costs. AI-driven price declines are more akin to the latter, similar to the trajectory of electronics and communication services over past decades. Some analysts believe the Jevons paradox still applies: as AI drastically lowers the cost of legal advice, medical diagnostics, software development, and other services, demand previously priced out of the market could be unleashed, producing explosive growth rather than contraction. Meanwhile, the Moravec paradox also plays a role: for machines, the hardest tasks are often not high-level reasoning or data retrieval, but human-like physical movement, sensory perception, and emotional communication. This suggests that physical labor and service jobs requiring fine perception may be more resilient than we think.
But the Jevons paradox might also fail. Alex Imas, a professor of economics at the University of Chicago, asks: if AI automates most labor and labor's share of total income drops sharply, who will buy these highly efficient goods and services? This touches on distribution mechanisms. When productive capacity approaches infinity but effective demand becomes concentrated, we may face not a recession but an imbalance: material abundance that most people cannot access.
Glimpsing the Big Picture
The boldest part of Citrini's scenario is the transmission from employment shock to financial crisis. He envisions structured financial products based on SaaS revenues ("Software-Backed Securities") suffering widespread defaults and triggering a credit crunch similar to 2008.
Critics point out that, compared to 2008, the U.S. corporate sector’s leverage is much healthier now, and the banking system is far more resilient after Dodd-Frank reforms and stress tests.
Compared with the eve of the 2008 financial crisis, resilience indicators have improved significantly: Tier 1 capital ratios rose from 8.1% to 13.7%, the household debt-to-disposable-income ratio fell from 130% to 97%, and non-performing loan rates dropped from 1.4% to 0.7%.
Even if some SaaS companies see revenue declines, their scale is unlikely to trigger a systemic credit crisis. Noah Smith, a former Bloomberg finance columnist, argues that Citrini makes a common mistake: linearly extrapolating micro-level industry shocks into macro-level systemic risk. As for the demand collapse, Smith's answer is fiscal policy: if unemployment truly surges, the government has both the capacity and the willingness to deploy large-scale fiscal stimulus to support demand.
The system's capacity to respond is also underestimated. COVID-19 is a case in point: the WHO declared a pandemic on March 11, 2020, and just 16 days later the $2.2 trillion CARES Act was signed into law. Over the following year, the U.S. deployed a total of $5.68 trillion in fiscal stimulus, about 25% of 2020 GDP.
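Those stimulus figures can be sanity-checked directly. A quick calculation, where the GDP denominator of roughly $21 trillion is my assumption (the article itself states only the stimulus total and the ~25% ratio):

```python
# Sanity check on the COVID-19 fiscal response figures quoted above.
# STIMULUS_TOTAL_T comes from the article; GDP_2020_T is an outside
# assumption (~$21T nominal US GDP in 2020), not a figure from the text.

STIMULUS_TOTAL_T = 5.68   # trillions USD, from the article
GDP_2020_T = 21.0         # trillions USD, assumed

ratio = STIMULUS_TOTAL_T / GDP_2020_T
print(f"Stimulus ≈ {ratio:.0%} of 2020 GDP")  # lands near the article's ~25%
```

The exact percentage depends on which GDP vintage is used; the point is simply that the order of magnitude, a quarter of annual output mobilized within a year, checks out.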
If AI-driven unemployment occurs at the speed and scale Citrini describes, policy interventions are unlikely to be absent.
Some critics raise more fundamental questions. Doomsday scenarios in technology often stem from a lack of faith in human institutions. Citrini’s scenario treats the market as a self-operating machine, driven solely by causal chains until collapse. But in reality, the economy is not so mechanical. Laws, institutions, politics, culture, and ideology profoundly influence how society absorbs technological shocks.
Consensus and Disagreement
We might attempt to identify some points of consensus and divergence.
Almost no one denies that AI is changing, and will continue to change, the demand structure for white-collar labor; the debate is over the speed and scale of that change. The pain of transition is real and should not be papered over by long-term optimism. And the quality and speed of policy responses will largely determine the outcome.
Disagreements lie in deeper logic. Some believe this wave of technological impact may surpass historical precedents in speed and scope, limiting the applicability of historical analogy; others trust in the adaptability of institutions and the repeatability of history.
Looking Ahead
Citrini's article has several issues: causal links drawn too tightly, institutional responses underestimated, and a leap from micro-level industry impacts to macro-level systemic risk that lacks sufficient intermediate reasoning. But its fundamental flaw may be that it underestimates human society: it assumes a static institutional environment in which technology crushes everything at a nearly unstoppable speed. History is full of doomsday scenarios; they often seem logically impeccable, but they almost always ignore the variable of "people." The complexity, friction, redundancy, and seemingly inefficient institutions of human society together form a powerful, distributed resilience. We have ample time to avoid the apocalyptic outcome depicted, so long as we do not let the scenario itself scare us into inaction.
What about optimistic narratives? The Jevons paradox is an observation about long-term trends. The Moravec paradox tells us physical labor is temporarily safe but does not address what happens to displaced white-collar workers. Historical analogy is insightful but never exact; it merely provides a rhythm. Optimistic stories need time to prove themselves, and we are at the starting point of that test.
Doomsday narratives produce anxiety, and those who buy into them pay the price. Develop your own judgment, bear your own risks, and manage your own positions, rather than getting lost in that endless stream of articles.
We all worry about being replaced by AI, but what did Citrini's apocalyptic prediction overlook?