And even if incumbents are displaced, the new opportunities created by AI-induced cost reductions and productivity enhancements need not lead only to more AI. They may also require human labour, as with the internet and the rise of influencers.
Still, in some ways, the Citrini post is not pessimistic enough. Even setting aside the possibility that we might all become slaves to some AI overlord, the broader economic outcomes depend on how good AI becomes and how quickly, the pace of user adoption, who profits from it, and how society reacts. Given all these variables, some extreme scenarios are indeed conceivable.
Consider, for example, a future in which a few differentiated platforms (say, Anthropic or Meta) reach a level of generalised AI that allows them to outpace the competition and steadily charge high prices to user firms. These dominant platforms would generate enormous profits, augmenting the incomes of their employees (who will be few, because AI will cull their ranks) and their shareholders. At the same time, the many firms relying on their services would be willing to pay, because AI would raise their own productivity, allowing them to shed more white-collar workers.
These unemployed workers would then look for work in adjacent industries where AI has not yet rendered their skills useless. But if those jobs are few, they will join the ranks of gardeners, waiters and shop assistants, further depressing wages in those occupations. Assuming that AI displaces cognitive tasks before skilled physical ones, machinists, plumbers, and masons may still have work until robots become sophisticated enough to replace them. But over time, competition for those jobs will also increase as white-collar workers retrain. The pain will spread, and only the AI platforms and their investors will benefit. Or will they?
Before answering that, consider another “competitive” scenario in which no platform “wins” because there is little differentiation between ChatGPT 33.2, Gemini 25, and all the others. Although this scenario may still be devastating for white-collar jobs, AI prices will be low, and productivity benefits will flow through the economy, as will the resulting profits. Spared from spending enormous sums on AI, user firms could cut prices and expand production to meet the increased demand, thereby creating more jobs elsewhere. There would be far less pain than in the first scenario, because lower-priced goods and services would allow workers’ pre-existing savings to go further.
Not only do current trends suggest that this second scenario is more likely than an AI oligopoly, but the government could also take steps to ensure it materialises, for example, through AI price regulations or a refusal to protect AI model builders from those who copy their models. Would-be AI oligarchs should not assume that society will defend their enormous profits even as their products cause widespread job losses and hardship.
Of course, AI incumbents will lobby aggressively, corrupting some legislators to block regulation. They will mount public campaigns, using their many channels of influence to argue (not entirely incorrectly) that regulation will be ham-handed, harming efficiency and innovation while benefiting geopolitical rivals. But if the AI-induced pain is indeed widespread, the political impetus for intervention will remain strong.
Even if the state fails to ensure competitive AI prices, it can tax oligopolistic AI providers, their employees, and their shareholders to compensate the affected. The difficulty here lies in targeting. How do you identify those who are making supernormal profits from AI? How do you support those harmed, given how hard it has been to assist trade-affected workers in the past? And how do you distinguish between a technologically displaced worker and a worker laid off because of adverse business conditions or incompetence?
To avoid these questions, there will likely be a push for generous unemployment benefits regardless of the cause — a first step toward an eventual universal basic income. But this raises another problem: even if fiscally strapped governments can raise sufficient revenue, many jobs will still require human workers. Overly generous unemployment benefits, therefore, will push up the wages employers have to offer to coax workers out of unemployment, further reducing job creation.
Ultimately, there are no easy public responses to the problem of large-scale but not universal unemployment. Societies will have to experiment creatively, improving the safety net while encouraging businesses to create jobs and reskill workers where possible. At the same time, if any of the AI platforms racing to achieve a near-monopoly does reach its goal, government policy reactions will almost certainly impair its profits. How, then, will these companies’ massive and still-growing debts be serviced? Will a financial crisis follow?
The best we can hope for is a Goldilocks scenario where the AI rollout is not so fast that workers cannot learn how to augment their jobs with AI, rather than being displaced, and where the AI industry is not too oligopolistic, so that the benefits accrue to society more broadly. Imaginative commentaries like the Citrini post prompt us to consider what might happen if the AI story turns out differently. Now is the time to map out the possible scenarios and start preparing for them. — © Project Syndicate, 2026
Akhil Rajan also contributed to this commentary. Raghuram G Rajan is a former governor of the Reserve Bank of India and a former chief economist of the International Monetary Fund.
