A Typology of AI and Firm-Level Adoption
- Arda Tunca
 - Jul 24
 
Updated: Jul 26
Abstract
Despite widespread enthusiasm about artificial intelligence (AI) and its potential to reshape productivity, adoption in the corporate sector remains slower than anticipated.
This article offers a typology of AI, ranging from narrow to agentic and frontier systems, and explores the organizational, political, and institutional frictions that obstruct its diffusion. Drawing on insights from public choice theory and the economics of innovation, I argue that the resistance to AI is not merely technical, but deeply embedded in intra-firm power dynamics. In particular, the role of middle management as a gatekeeper between executive intent and operational implementation constitutes a key structural barrier to automation.
Introduction
We have witnessed the rapid evolution of artificial intelligence in recent years, culminating in powerful general-purpose systems with unprecedented capabilities in language, vision, and reasoning. Yet the translation of these advances into firm-level productivity gains remains slow. While narrow forms of AI have been integrated into business processes, the adoption of more advanced generative and agentic AI systems has been slow or has stalled altogether.
This article investigates this gap between technological potential and organizational reality. It begins by clarifying the typology of AI systems and then turns to the institutional and political economy of their adoption.
Building on classical insights from public choice theory and the economics of innovation, I argue that the obstacles are not only technical frictions but also bureaucratic inertia, misaligned incentives, and internal power struggles.
A Typology of Contemporary AI
The term "artificial intelligence" encompasses a heterogeneous range of technologies. For analytical clarity, I distinguish four types relevant to current economic applications. These types are presented in order of increasing technical advancement and autonomy, progressing from narrowly specialized tools to general-purpose, autonomous systems.
Narrow AI (Weak AI)
Narrow AI refers to systems optimized for specific, well-bounded tasks. These include recommendation engines, fraud detection algorithms, and image classifiers. Such systems are ubiquitous and represent the majority of successful AI integrations to date. They exhibit no autonomy or general reasoning ability.
Generative AI
Generative AI leverages large datasets to create novel content, including text, images, and code. Powered by large language models (LLMs) such as GPT-4, Claude, and Gemini, these systems are being piloted in marketing, software development, and customer service. They offer productivity gains in ideation, summarization, translation, and content creation.
AI Agents vs. Agentic AI
It is important to distinguish between AI agents and agentic AI, which are often mistakenly conflated.
AI agents are systems that perceive their environment, process information, and take action to achieve specified goals. These agents can be simple or complex but are typically bound by task-specific programming and limited autonomy. A chatbot that responds to predefined customer queries or a thermostat that adjusts based on temperature input are classic examples of AI agents.
In contrast, agentic AI refers to a more advanced class of AI systems that exhibit autonomous, goal-directed behavior. These systems can independently decompose tasks, use external tools, adapt to changing contexts, and often demonstrate continuity over time with memory and learning mechanisms. Systems such as AutoGPT or Devin illustrate the agentic AI paradigm, as they can execute multi-step tasks with minimal human intervention.
Thus, while all agentic AIs are AI agents, not all AI agents are agentic. The distinction is crucial for understanding the strategic implications of AI deployment within organizations.
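To make the distinction concrete, here is a minimal Python sketch of my own (the class names and the hard-coded task decomposition are invented for illustration, not drawn from any real framework). It contrasts a rule-bound AI agent with a toy agentic loop that decomposes a goal, calls external tools, and keeps memory across steps:

```python
class ThermostatAgent:
    """A classic AI agent: perceives one input, applies a fixed rule."""
    def __init__(self, target_temp: float):
        self.target_temp = target_temp

    def act(self, current_temp: float) -> str:
        # Task-specific and non-autonomous: the rule is fixed in advance.
        return "heat_on" if current_temp < self.target_temp else "heat_off"


class MiniAgenticLoop:
    """A toy agentic pattern: decompose a goal into subtasks, execute
    them step by step, and retain a memory of what has been done."""
    def __init__(self, tools: dict):
        self.tools = tools           # external tools the agent may call
        self.memory: list[str] = []  # continuity across steps

    def decompose(self, goal: str) -> list[str]:
        # Real agentic systems delegate this to an LLM; hard-coded here.
        return ["research", "draft", "review"]

    def run(self, goal: str) -> list[str]:
        for subtask in self.decompose(goal):
            result = self.tools[subtask](goal)
            self.memory.append(result)
        return self.memory


thermostat = ThermostatAgent(target_temp=21.0)
print(thermostat.act(18.5))  # heat_on

tools = {name: (lambda g, n=name: f"{n} done for: {g}")
         for name in ["research", "draft", "review"]}
agent = MiniAgenticLoop(tools)
print(agent.run("write a market summary"))
```

The point of the sketch is structural: the thermostat's behavior is exhausted by one fixed rule, while the agentic loop chooses and sequences its own subtasks toward a goal.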
Frontier AI
Frontier AI encompasses the most capable systems currently under development or deployment. These are general-purpose models that exhibit emergent reasoning, analogical thinking, coding proficiency, and multi-modal understanding. While not yet Artificial General Intelligence (AGI), they challenge traditional boundaries between machine computation and human cognition.
While this article discusses advanced AI systems such as frontier and agentic AI, it is important to distinguish these from AGI. AGI refers to a hypothetical form of AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at or beyond human-level performance. Unlike narrow or task-specific AI, AGI would be capable of reasoning, abstraction, and transfer learning in unfamiliar domains without needing retraining. No such system currently exists, though some frontier AI models are viewed as early steps toward AGI.
Data-Driven Management Through AI
Data-driven management refers to the practice of making strategic, operational, and tactical decisions based on data analysis rather than intuition or tradition. It involves integrating data collection, processing, and interpretation into every layer of the organization, from logistics to customer service to executive decision-making.
The roots of data-driven governance trace back to landmark moments such as the OECD’s 2000 Ministerial Conference in Paris, which framed the digital economy as a transformative force for decision-making in both public and private sectors.
The implementation of data-driven management through AI involves several components:
Data Integration: AI systems can unify disparate data sources, including structured databases, unstructured text, and real-time sensor inputs.
Pattern Recognition: Machine learning algorithms identify trends, correlations, and anomalies within large datasets that human analysts might overlook.
Predictive Analytics: AI models forecast future outcomes based on historical data, enabling proactive decision-making.
Decision Automation: AI agents and agentic AI systems can automate routine decisions, such as inventory replenishment or marketing personalization, freeing human managers to focus on higher-order problems.
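The components above can be made concrete with a small example. The sketch below illustrates only the predictive-analytics step, using a deliberately simple moving-average demand forecast; the figures are invented, and production systems would use far richer models (ARIMA, gradient boosting, and so on):

```python
def moving_average_forecast(history: list[float], window: int = 3) -> float:
    """Forecast the next period as the mean of the last `window` observations."""
    if len(history) < window:
        raise ValueError("not enough history for the chosen window")
    return sum(history[-window:]) / window

# Invented monthly demand figures for illustration.
monthly_demand = [120.0, 130.0, 125.0, 140.0, 150.0, 145.0]
forecast = moving_average_forecast(monthly_demand, window=3)
print(round(forecast, 1))  # 145.0
```

Even this crude forecast shows the division of labor the section describes: the model produces a forward-looking number, and the decision-automation layer can act on it without a manager in the loop.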
One of the most impactful areas for data-driven management is supply chain management. AI can enhance supply chain operations by:
Predicting demand and optimizing inventory levels using real-time data streams
Identifying bottlenecks and proposing route or supplier adjustments based on environmental or geopolitical risks
Automating procurement decisions by evaluating vendor performance and price fluctuations
Coordinating logistics through dynamic scheduling and route optimization
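As a hedged illustration of the "automating procurement decisions" point, the following sketch applies two textbook inventory rules, the reorder point and the economic order quantity (EOQ). All demand and cost figures are invented; a real AI agent would estimate them from live data rather than take them as constants:

```python
import math

def reorder_point(daily_demand: float, lead_time_days: float,
                  safety_stock: float) -> float:
    """Stock level at which a replenishment order is triggered."""
    return daily_demand * lead_time_days + safety_stock

def economic_order_quantity(annual_demand: float, order_cost: float,
                            holding_cost: float) -> float:
    """Classic EOQ formula: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

rop = reorder_point(daily_demand=40, lead_time_days=5, safety_stock=60)
eoq = economic_order_quantity(annual_demand=14_600, order_cost=50,
                              holding_cost=2.0)

current_stock = 230
if current_stock <= rop:
    # An AI agent could place this order automatically.
    print(f"Order {round(eoq)} units (stock {current_stock} <= ROP {rop})")
```

The automation step is the final `if`: once the thresholds are computed, the ordering decision requires no human judgment, which is exactly the kind of routine decision the article argues agents absorb first.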
These applications not only improve efficiency and reduce costs but also increase resilience against disruptions, a quality increasingly valued in global markets.
Adopting data-driven management requires more than technological investment. It demands organizational alignment, training, and a culture that values evidence-based reasoning over hierarchical or tradition-based authority.
Why AI Diffusion Has Stalled
While technical hurdles remain, such as data integration and system interoperability, the major impediments to AI adoption are organizational and political.
Many firms lack the necessary cloud infrastructure to support AI at scale. Data remain siloed, fragmented, or subject to access restrictions. This impedes the real-time functionality that agentic systems require. However, infrastructure deficits were anticipated. What surprises observers is that AI adoption has underperformed even conservative projections.
A more profound explanation lies in the distribution of power within firms. While the CEO has formal authority to mandate technological change, operational control rests with middle management. These managers understand the implementation details and are responsible for execution. If their interests are threatened, they can delay, obstruct, or reshape AI initiatives.
Drawing from public choice theory, which argues that bureaucrats optimize personal rather than collective utility (Buchanan and Tullock, 1962), we can understand the corporation as a fragmented coalition of interests. Just as public servants may block reforms to protect patronage networks, managers may resist AI to preserve roles, status, and team coherence.
This resistance within firms can also be framed through the lens of institutional economics. As Douglass North (1990) emphasized, institutions evolve in response to constraints and incentives, but they are also path-dependent and prone to inertia. Elinor Ostrom’s work further suggests that governance mechanisms—whether formal hierarchies or informal norms—shape the adoption of collective innovations. In this context, firm-level resistance to AI reflects not just rational self-interest, as public choice theory posits, but also the embedded routines and governance structures that resist discontinuity.
This resistance is not irrational. Agentic AI, by design, reduces the need for oversight, coordination, and low-level decision-making, precisely the domains middle managers control.
This dynamic has historical precedent. Frederick Taylor, the founder of scientific management, lamented how internal power struggles hampered the adoption of efficiency-enhancing techniques in the late 19th century. Joel Mokyr emphasizes that technological progress has always encountered purposeful resistance from incumbent actors, those whose rents or routines are disrupted by innovation.
This "micro-politics of adoption" has received little attention in contemporary economics but may prove decisive in the trajectory of AI.
Firms also cite "compliance and regulatory concerns" as major barriers to AI integration. These concerns are valid: generative models can hallucinate, discriminate, or misuse data. But they are also useful alibis for managerial hesitation. In industries such as finance, law, and healthcare, where reputational and legal risks are high, invoking regulation allows internal opposition to appear prudent rather than obstructionist.
The central irony of AI-driven automation is that it is meant to reduce labor input, yet its adoption is obstructed by labor, especially white-collar labor. The bureaucratization of the modern firm, particularly in high-income economies, means that significant parts of the organizational apparatus may be structurally incentivized to resist automation.
This is not unique to AI. Resistance to labor-saving devices, from the Luddites to late-20th-century office computing, has always shaped the pace and pattern of technological diffusion. What distinguishes AI is that the threat now extends to “cognitive labor,” long thought immune to mechanization.
While the analysis here focuses primarily on advanced economies, AI adoption in emerging and developing markets reveals a different pattern. In many lower-income contexts, infrastructural deficits, such as unreliable cloud access, fragmented data ecosystems, or energy instability, constitute the primary constraints, rather than managerial resistance. However, the relative absence of entrenched middle management structures may create more agility in deploying narrow or generative AI for localized applications. These asymmetries suggest that the global diffusion of AI will be uneven, shaped by institutional capacities as much as by technical readiness.
Recent literature has begun to explore these questions with renewed urgency. Acemoglu and Johnson (2023) argue that automation increasingly undermines institutional stability by displacing decision-making authority from democratic institutions to private platforms.
Brynjolfsson et al. highlight the “Productivity J-Curve,” suggesting that visible returns from AI adoption lag behind investment due to the need for complementary intangible assets such as retraining, reorganization, and data infrastructure. These perspectives reinforce the view that AI’s organizational integration is not automatic but contingent on broader structural adaptation.
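The J-Curve logic can be shown with a toy calculation (all numbers are invented, and this is my simplification of the argument, not the authors' model). When intangible investment is expensed against output rather than capitalized, measured output first dips below its starting level and then overshoots it once the complementary assets pay off:

```python
# Yearly figures in arbitrary units: spending on retraining, reorganization,
# and data infrastructure is treated as a pure cost in the year incurred.
spending = [0, 8, 6, 2, 0]         # intangible investment, expensed
gross    = [100, 99, 101, 110, 125]  # output once intangibles bear fruit

measured = [g - s for g, s in zip(gross, spending)]
print(measured)  # [100, 91, 95, 108, 125]: dip, then overshoot (the "J")
```

The dip in the middle years is the period when observers conclude that AI "is not delivering," even though the investment is doing exactly what the J-Curve account predicts.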
AI and Labor: Threats vs. Productivity Gains
The labor market impact of AI depends heavily on the type of AI being deployed and the nature of the tasks it automates or augments.
Narrow AI (Task-Specific)
Productivity Effects: Enhances human labor by automating simple, repetitive tasks.
Job Risk: Low to moderate, limited to clerical or support roles.
Generative AI (e.g., ChatGPT, Claude)
Productivity Effects: Accelerates writing, coding, summarizing, designing.
Job Risk: Medium. Threatens content creators, translators, paralegals, and junior programmers.
Agentic AI (e.g., AutoGPT, Devin)
Productivity Effects: Performs multi-step tasks across tools and systems.
Job Risk: High. Poses a threat to mid-level managers, analysts, consultants, and coordinators.
Frontier AI (e.g., GPT-4, Gemini 1.5)
Productivity Effects: Broad, depending on whether the system is deployed as a co-pilot or as an autonomous system.
Job Risk: Varies. May boost productivity for specialists or replace entire job functions.
Ultimately, AI tools that complement human labor and amplify human expertise are more likely to create productivity gains, while autonomous systems with generalized capabilities carry greater substitution risks, particularly in white-collar domains.
Evolution of Economics Education in Light of Technological and Global Developments
The evolution of economics education worldwide is increasingly shaped by the profound disjunction between traditional curricula and real-world developments such as climate change, inequality, digitalization, financial crises, and geopolitical fragmentation. This tension has prompted a gradual but noticeable transformation in what is taught, how it is taught, and who shapes the discourse.
Traditional economics education, dominated by neoclassical models, rational choice theory, and a heavy emphasis on mathematical abstraction, has been criticized on several fronts: a lack of realism, insufficient pluralism, and neglect of real-world crises.
Forces driving change today are global crises (2008 crash, climate change, pandemic), student-led movements (e.g., Rethinking Economics, Post-Crash Economics), and policy failures that discredit orthodox prescriptions.
In June 2000, a group of economics students at the École Normale Supérieure in Paris launched a protest against the dominance of neoclassical theory in their university curriculum. Frustrated by what they called the "autistic" nature of economics education (overly abstract, mathematically rigid, and detached from real-world concerns), they issued a public statement calling for methodological pluralism and greater relevance to social and historical realities. This sparked the creation of the movement known as Post-Autistic Economics, which quickly gained traction both within and beyond France.
The uprising resonated with students in other countries who shared similar concerns about the narrowness of mainstream economics instruction. It would eventually help lay the groundwork for broader reform movements, including Rethinking Economics.
Emerging trends in economics education include interdisciplinarity (integrating history, sociology, ecology, etc.), a rise of theoretical pluralism, an emphasis on real-world relevance and applications, data literacy and the use of computational tools, and the decolonization and inclusion of non-Western perspectives.
Institutions such as UCL, SOAS, the University of Leeds, and Sciences Po, along with the CORE Project, are leading curriculum reform.
Institutional inertia and conservative incentives in top departments remain the ongoing challenges.
Economics education is gradually becoming more responsive to contemporary technological, social, and ecological realities, including the growing importance of AI and data-driven economic management.
Conclusion and Implications
AI adoption is not only a technical challenge but also a political-economy problem within the firm. Its resolution will require more than compute power or better models. It demands institutional reform, cultural change, and the realignment of incentives.
Successful AI adoption depends heavily on organizational culture, professional adaptation, and the presence of workforce training mechanisms. AI orientation is increasingly recognized as a critical enabler of successful integration, helping bridge the gap between technical capability and practical implementation.
As the complexity and autonomy of AI systems grow, there is a rising global demand for AI literacy across professions. This demand has led to the development of certification programs and upskilling platforms tailored for specific sectors, including law, healthcare, marketing, education, finance, and supply chain management.
AI certification programs typically cover:
Core AI concepts and ethical principles
Sector-specific AI applications (e.g., smart contracts in law, AI logistics planning)
Practical skills in prompt engineering, data interpretation, and model evaluation
Risk management, compliance, and AI governance
For example, in supply chain management, professionals are trained to use predictive analytics, optimize procurement using machine learning models, and integrate real-time logistics platforms powered by AI agents.
These programs foster a culture of experimentation and informed decision-making, equipping professionals to engage productively with AI rather than view it as a threat. Just as financial literacy became indispensable for modern business, AI literacy is now emerging as a basic competency for navigating 21st-century organizational environments.
Without such orientation and institutional support, even the most advanced AI tools will fail to translate into operational success.
Future research should explore:
Formal models of intra-firm resistance to automation
Comparative case studies of AI adoption across sectors and national contexts
The implications of agentic AI for organizational theory and labor economics
The role of data-driven management as a bridge between digital infrastructure and strategic transformation
The evolving impact of AI-powered supply chain optimization on global production networks
The need to align economics education with contemporary realities, including the digital economy, climate change, and geopolitical instability
The labor-market polarization risks posed by different types of AI and their governance implications
Artificial intelligence is not merely a technological frontier; it is a social and organizational one. As AI evolves from narrow tools to autonomous agents embedded in complex workflows, its adoption challenges established hierarchies, labor arrangements, and institutional routines.
While the hype surrounding AI often emphasizes imminent disruption, the organizational reality is more inertial. In the near term, adoption will be shaped less by what AI can do and more by what firms, bureaucracies, and professionals are willing to let it do.
Ultimately, no actor, whether individual, firm, or institution, can escape the transformative pressures of evolving market dynamics. In this context, the most viable strategy is not resistance, but adaptation. Enhancing adaptive capabilities through orientation, training, and organizational learning is not optional; it is imperative.
At the same time, it must be acknowledged that if “this time is truly different,” as mounting evidence increasingly suggests, the scale and speed of transformation may generate not just organizational resistance, but broader social and political upheaval.
As AI reshapes labor markets, power structures, and the knowledge economy, it is likely to provoke tumultuous societal reactions. The political economy of AI is thus inseparable from its technological trajectory. The challenge ahead is not only to manage innovation, but to govern the societal transition it brings.
Governance of AI’s societal impact remains an open question. Will it involve regulatory prohibitions, bans on certain capabilities, limits on autonomy, constraints on deployment contexts, or will it be steered through softer mechanisms such as incentives, ethical guidelines, or institutional redesign? As of now, we lack both the empirical evidence and the conceptual tools to offer a precise forecast. What is clear, however, is that the governance of AI cannot be an afterthought. It must evolve alongside the technology itself, informed by interdisciplinary debate and guided by democratic values.
Moreover, how aggregate demand will be managed in future economies remains deeply uncertain if large segments of the population become structurally unemployed through technological displacement. Historical voices such as Charles Booth and Major C.H. Douglas raised this issue as early as the late 19th and early 20th centuries. Once considered marginal and utopian, their concerns are resurfacing today in response to the structural transformations triggered by AI, as society grapples with the social contract implications of post-labor economic systems. A post-AI economy will require a radical rethinking not only of the means of production, but also of income distribution and consumption capacity.
In particular, agentic and frontier AI systems have the potential to displace large segments of the middle class by substituting not only physical but also cognitive labor. This signals a transformation too profound to be addressed merely by the creation of new job categories. As a result, issues such as universal basic income, publicly supported education programs, and new definitions of productivity beyond the labor market are likely to move to the center of the economic policy agenda in the coming years.
Just as important as how AI will transform society is the question of how this transformation will be governed, and in whose interest it will unfold. In this context, a broad societal negotiation is inevitable, involving not only technologists but also politicians, trade unions, professionals, academics, and citizens.