
AI, Productivity, and the Future of Inequality

Introduction


Artificial intelligence (AI) is transforming economies and societies. Many believe that artificial general intelligence (AGI), AI systems that can perform most tasks as well as or better than humans, will cause economic growth to accelerate sharply. Some even think it could usher in a new economic era with little or no need for human labor. Yet this raises serious questions: Will productivity keep rising forever? What happens if machines replace workers but then stop improving? Could AGI make inequality worse? Most importantly, are governments ready for this?


Artificial general intelligence (AGI) refers to AI systems with the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to or beyond that of human beings. They are designed to be autonomous.


Unlike narrow AI, which is designed for specific tasks (like image recognition or translation), AGI possesses general cognitive abilities, enabling it to reason, adapt, and improve its own performance across different domains.


This article makes a modest attempt to explore the questions above by combining insights from economic theory, historical experience, and contemporary debates on AI and AGI.


Is There an End to Productivity Growth?


Between the years 1 and 1700, global GDP per capita barely grew, by just 0.1% a year. Then came the Industrial Revolution, and growth picked up to 0.5% per year between 1700 and 1820. By the 20th century, growth had reached 2.8% per year. Productivity gains, doing more with less, were the main driver.
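To see how sharp this acceleration is, the compounding arithmetic can be sketched in a few lines of Python. This is a rough back-of-the-envelope illustration using the rates above, not a precise historical calculation:

```python
# Back-of-the-envelope compounding of the growth rates cited above.
# (1 + g) ** years gives the cumulative multiple of GDP per capita.

def cumulative_multiple(annual_rate: float, years: int) -> float:
    """Return how many times larger output becomes after compounding."""
    return (1 + annual_rate) ** years

# ~0.1% a year in the pre-industrial era: output roughly doubles in 700 years
pre_industrial = cumulative_multiple(0.001, 700)
# ~2.8% a year in the 20th century: output multiplies ~16-fold in one century
modern = cumulative_multiple(0.028, 100)

print(f"0.1%/yr for 700 years -> {pre_industrial:.1f}x")
print(f"2.8%/yr for 100 years -> {modern:.1f}x")
```

A seemingly small difference in annual rates compounds into an enormous difference in living standards within a few generations.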


Productivity growth has long been treated as the engine of economic growth and, to a certain extent, of development.


While classical growth models assumed technology to be exogenous, more recent models introduced the idea that knowledge creation feeds on itself (endogenous), leading to cumulative and accelerating innovation under the right conditions.


From this perspective, if AGI systems can autonomously improve science, software, and themselves, the growth process could become self-reinforcing, producing what AI researchers call “recursive self-improvement.” However, this vision faces hard constraints.


Some note that even powerful AI systems are still constrained by complementary inputs, such as energy, data, and institutional frameworks. Others similarly caution that even superintelligent systems may exhaust ‘low-hanging fruit,’ echoing earlier skepticism about long-term technological momentum.


Moreover, productivity growth in recent decades has not matched the excitement around digital technologies, suggesting that innovation may require more than algorithms alone.


In short, the limits to productivity may not come from AI's capabilities, but from the physical, social, and institutional infrastructure that supports its deployment.


Can Machines Stop Being Productive?


Yes, they can. AGI may outperform humans in many domains, but it does not guarantee infinite productivity gains. If progress in hardware, materials, or supporting infrastructure slows, even advanced AI systems could stagnate. For example, if robotics lags behind software, many physical tasks, like plumbing or caregiving, will continue to require human labor.


This points to a bifurcated economy. Sectors that are easily automated, such as digital content, logistics, and mass manufacturing, may see falling costs. Labor-intensive services, such as education, healthcare, and personal care, may become more expensive. This is an illustration of Baumol’s cost disease: productivity in some sectors grows quickly while others stagnate, yet wages rise across the board because workers must be retained.
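The mechanism behind Baumol’s cost disease can be shown with a toy two-sector simulation. The growth rates below are assumptions chosen for illustration, not estimates:

```python
# Toy illustration of Baumol's cost disease (assumed parameters, not data).
# Sector A's productivity grows 3%/yr; sector B's is flat. Wages in both
# sectors must track economy-wide productivity to retain workers, so the
# stagnant sector's unit cost rises.

years = 30
prod_growth_a = 0.03   # automated sector (e.g., digital goods)
prod_growth_b = 0.00   # stagnant sector (e.g., personal care)

prod_a, prod_b, wage = 1.0, 1.0, 1.0
for _ in range(years):
    prod_a *= 1 + prod_growth_a
    prod_b *= 1 + prod_growth_b
    wage *= 1 + prod_growth_a   # wages keep pace with the dynamic sector

unit_cost_a = wage / prod_a     # stays at 1.0: wage gains offset by productivity
unit_cost_b = wage / prod_b     # rises with the wage: no offsetting productivity

print(f"Unit cost, automated sector: {unit_cost_a:.2f}")
print(f"Unit cost, stagnant sector:  {unit_cost_b:.2f}")
```

After thirty years, the stagnant sector’s unit cost has more than doubled even though nothing about the service itself changed; only the economy around it did.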


The outcome might be a dual economy: one of deflationary abundance for digital goods, and inflationary scarcity for human services. This mismatch could cause new forms of inequality, not in income alone, but in access to essential life services.


Is AGI a Threat to Humanity?


AGI may cause deep social harm if it reshapes the economy in ways that leave people behind.


If AGI systems become highly capable and cheap, human labor will lose value. Employers will prefer machines unless workers can offer something unique, and that seems unlikely to remain possible, especially for office jobs.


In such a world, capital accumulation becomes the sole source of income for many. What we define as “capital” expands to include not only physical assets and intellectual property but also training data, algorithmic architectures, and compute infrastructure.


When capital becomes a perfect substitute for labor and keeps accumulating, almost all income eventually goes to capital owners. Labor’s share of income declines, even if a few “superstar” workers earn very high wages. This creates a new type of inequality, not between working and non-working people, but between owners and non-owners.
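This dynamic follows from simple arithmetic. Below is a minimal sketch, assuming a production function where capital and labor are perfect substitutes (Y = A · (K + L)) and capital accumulates while labor stays fixed; the numbers are illustrative, not estimates:

```python
# Toy model of labor's income share when capital is a perfect substitute
# for labor, i.e. Y = A * (K + L). With perfect substitutes, each unit of
# K or L earns the same marginal product, so under competitive pricing
# labor's share of income is simply L / (K + L).

def labor_share(capital: float, labor: float) -> float:
    """Labor's share of income under Y = A * (K + L) with competitive pricing."""
    return labor / (capital + labor)

L_SUPPLY = 100.0   # fixed labor supply (assumed)
K0 = 100.0         # initial capital stock, growing 5% a year (assumed)

for year in (0, 10, 25, 50):
    share = labor_share(K0 * 1.05 ** year, L_SUPPLY)
    print(f"year {year:2d}: labor share = {share:.2f}")
```

Starting from an even split, labor’s share falls below a tenth of income within fifty years of steady capital accumulation, which is the squeeze described above.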


If AGI can do most things better than humans, and people cannot earn wages, only ownership matters. This is a striking point.


Is Inequality Going to Widen?


Unless society changes how capital is owned and how income is distributed, the answer to the question above is yes, and I do not think those structural changes will take place. Today's crisis-triggering problems are systemic and structural. Ownership of data, AI models, and computing infrastructure is concentrated in a few firms. If AGI systems become the main source of economic value and access to them is limited, inequality will rise sharply.


Some workers may move into human-centered services and be protected by Baumol’s cost disease. Yet others, especially those displaced from knowledge jobs, may struggle to adapt. They might have to accept lower incomes, even as AI-generated goods become cheaper. The open question is which jobs will remain human-centered.


As a result, the economy will exhibit what we might call "asymmetric affluence": abundance in consumer goods and scarcity in relational services.


In this world, wealth is not about earning, but owning. This means inequality could become not just economic, but structural and permanent.


Have Past Technological Revolutions Caused Inequality?


Historically, every major technological revolution has caused a rise in inequality, at least during its early phases.


The Industrial Revolution in Britain, for example, led to rapid growth in national income but also to deep social divisions. Between 1760 and 1840, while GDP per capita rose, real wages for the average worker stagnated, and child labor, poor housing, and long working hours became widespread.


Historical accounts argue that capital accumulation and mechanization initially benefited only a small segment of society, especially industrialists and landowners. The working class had to wait until the late 19th century to see substantial improvements in living conditions.


Other studies show that the Second Industrial Revolution (1870–1914), which introduced electricity, steel, and mass production, further widened inequality. In both Europe and the United States, income and wealth concentrated in the hands of a small industrial-financial elite.


Moreover, colonial extraction amplified these dynamics on a global scale, with labor, resources, and surplus value flowing from the periphery to imperial cores. The consequences of that asymmetry are still present in North-South inequality today.


The ICT (Information and Communication Technology) revolution that began in the 1970s has had similar effects. Several researchers have demonstrated that automation leads to labor displacement in many sectors and that technological change has been “biased” in favor of capital. Other scholars found that automation has led to “job polarization,” where middle-skill jobs disappear while low-skill service work and high-skill elite roles grow.


Some argue that when the rate of return on capital exceeds the rate of economic growth, as it often does in periods of technological upheaval, wealth inequality grows. Without progressive taxation or institutional counterweights, this inequality tends to persist and even deepen.
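The argument can be made concrete with a toy compounding exercise; the values of r and g below are assumptions for illustration, not historical estimates:

```python
# Toy sketch of the "r > g" dynamic (illustrative numbers, not estimates).
# If the return on capital (r) outpaces economic growth (g), wealth held
# by capital owners compounds faster than national income, so the
# wealth-to-income ratio, and with it inequality, drifts upward.

r = 0.05   # assumed annual return on capital
g = 0.02   # assumed annual growth of national income

wealth, income = 1.0, 1.0
for _ in range(50):
    wealth *= 1 + r
    income *= 1 + g

ratio = wealth / income
print(f"wealth-to-income ratio after 50 years: {ratio:.1f}x")
```

A three-point gap between r and g, left uncorrected, more than quadruples the wealth-to-income ratio in fifty years, which is why the presence or absence of institutional counterweights matters so much.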


In short, technological revolutions tend to cause inequality unless governments take deliberate steps to mitigate their effects. The arrival of AGI will likely follow the same trajectory, but, it seems, at a much greater scale.


Are Politicians Responding to These Threats?


Most governments are focused on short-term issues like inflation and unemployment. While there is growing interest in regulating AI, for safety, data privacy, or misinformation, there is little discussion about the long-term human-centered economic effects of AI or AGI.


Few political leaders have proposed serious plans to deal with mass labor displacement, capital concentration, or the need for new institutions like public AI ownership, universal basic income (UBI), or wealth redistribution mechanisms.


Public policy remains reactive rather than anticipatory. Most national AI strategies emphasize competitiveness, security, or industrial policy, not distributive justice or democratic governance.


The comparison with climate change is useful here. Despite decades of scientific warnings, meaningful action on climate has been slow and weak. It is fair to ask: if governments have not addressed the climate crisis seriously, can we expect them to handle AGI’s challenges better?


Policymakers tend to respond only when crises are visible. But AGI’s economic impacts may grow quietly and unevenly, making it harder to respond in time.


What Should We Do?


If AGI drives explosive growth (although I have doubts about it) and replaces most labor, owning capital will be the only reliable way to stay economically secure. That is why many wealthy people are saving aggressively or investing in AI-related firms. Yet this is not a solution for everyone.


The world is at risk of building an economy where machines work, owners profit, and most people are left behind.


AGI will not just automate work, it will automate power. To ensure a just transition, democratic societies must reclaim control over the architectures of intelligence.


AGI is coming. The future may not be fair without planning, and all signs suggest that it will not be.



© 2025 by Arda Tunca
