Ideas that Shaped AI’s Makers
- Arda Tunca
- Sep 10
- 6 min read
Most people might think of Artificial Intelligence (AI) as a recent phenomenon. The foundations of its scientific development, however, stretch back decades.
AI did not begin as an engineering problem. It began as a philosophical wager about mind, logic, and the possibility that thinking could be mechanized. Its trajectory, from early logic machines to today’s deep learning and foundation models, owes as much to the social imagination and political economy of its practitioners as to their algorithms.
This historical perspective complements my earlier analyses of AI’s contemporary dilemmas, from productivity paradoxes to inequality and environmental strain, by showing that today’s debates are continuations of long-standing tensions.
The story of AI runs from Turing’s test and Wiener’s cybernetics through McCarthy’s “AI,” Simon and Newell’s problem solvers, Minsky and Papert’s critique of early neural nets, expert systems, probabilistic reasoning, and statistical learning, to the deep learning resurgence culminating in transformers and large language models.
Beneath AI’s developmental steps lie distinctive worldviews: liberal humanism, anti-authoritarian progressivism, Cold-War technocracy, social-democratic rationalism, Silicon-Valley market optimism, and contemporary ethics-and-safety reformism. These currents quietly steered research priorities, funding coalitions, and the very notion of “intelligence.”
These early debates about mechanized thought framed the stage on which AI would formally be born in the mid-1950s.
Early Foundations (1940s–1950s)
The canonical point of origin remains Alan Turing’s 1950 essay “Computing Machinery and Intelligence,” which reframed the question “Can machines think?” as an operational test, the imitation game.
Turing’s universal machine and codebreaking work provided the conceptual and practical substrate for programmable computation. The paper treated thinking as symbol manipulation subject to engineering, collapsing metaphysical debates into empirical design problems. His life is chronicled in Andrew Hodges’s meticulous biography, Alan Turing: The Enigma, and his intellectual posture reads as broadly liberal and humanist.
A different, strongly socio-political starting point arrived with Norbert Wiener’s cybernetics. In Cybernetics (1948) and his widely cited Science essay “Some Moral and Technical Consequences of Automation” (1960), Wiener defined feedback, control, and communication as unifying principles across organisms and machines. Crucially, he warned that automation, if guided by militarism or narrow corporate interests, could produce unemployment, de-skilling, and dangerous concentrations of power.
Wiener’s stance is a recognizably progressive humanism. He is anti-militarist, wary of unaccountable corporate control, and insistent that technical systems must answer to human purposes.
These warnings connect directly with my writing on the hidden costs of digitalization and AI’s environmental footprint, where automation and data infrastructures also produce unseen social and ecological strains.
The Birth of AI (1956–1960s)
AI’s institutional birth took place at the 1956 Dartmouth workshop, organized by John McCarthy, who also named the field and soon created LISP (1958). McCarthy embodied an alternative ideological current within AI: a bracing techno-optimism, suspicion of moral panics, and a preference for individual freedom over heavy state constraint.
McCarthy’s public writings and interviews, along with the Computer History Museum oral history, show him as combative toward critiques he viewed as moralistic, notably in his caustic review of Weizenbaum’s Computer Power and Human Reason. That posture has often been read as libertarian in tone, aligned with confidence that technological progress and market mechanisms are net-liberating forces.
Herbert A. Simon and Allen Newell carried the early promise of symbolic AI into working programs: Logic Theorist and the General Problem Solver. Simon, also a Nobel laureate in economics, grounded his view of rationality in institutions and bounded cognition rather than in laissez-faire individualism.
Simon’s administrative theory and later reflections emphasize planning, organizational design, and policy, ideas congenial to social-democratic governance. In Models of My Life and earlier administrative writings, one sees a reformist trust in reasoned, public-minded institutions, with AI conceived as a tool to extend rational decision-making.
Simon’s insistence on organizational limits and planning resonates with my argument in the “AI Productivity Paradox” article, where I noted that technological breakthroughs do not automatically translate into economic gains without institutional adaptation.
The 1960s also delivered early neural learning with Frank Rosenblatt’s perceptron, and, in 1969, a sharp correction in Marvin Minsky and Seymour Papert’s Perceptrons, which demonstrated the mathematical limits of single-layer networks.
Minsky’s broader project, later expressed in The Society of Mind, was a technocratic, reductionist theory of cognition: intelligence as an emergent assembly of simple processes. While not a political program, it carried the distinct ethos of Cold-War computationalism: confidence that mind could be decomposed, engineered, and scaled in laboratory settings.
Critical Responses and New Foundations (1970s–1980s)
As AI met the recalcitrant realities of language, common sense, and perception, a countermovement formed. Joseph Weizenbaum’s Computer Power and Human Reason (1976) is the most famous moral critique from inside AI. Having built ELIZA and seen people over-ascribe understanding to it, Weizenbaum argued that delegating judgment to machines was a category mistake and, in many realms, a moral abdication.
Weizenbaum’s stance, shaped by exile from Nazi Germany and a deep suspicion of technocratic rationality untethered from ethics, belongs squarely in a left-liberal humanism that rejects both militarized computing and market logics that treat persons as mere information-processing nodes.
His concern that machines might reduce people to mere processing nodes anticipates themes I explored in “AI, Productivity, and the Future of Inequality,” where I argued that the benefits of AI risk being concentrated among a few at the expense of broader social well-being.
If symbolic approaches promised too much and proved too fragile, Judea Pearl’s probabilistic turn supplied a new foundation. Probabilistic Reasoning in Intelligent Systems (1988) made Bayesian networks the lingua franca of reasoning under uncertainty, replacing brittle rules with coherent graphical inference.
Pearl’s work is almost pointedly apolitical in its technical core: it is a grammar for uncertainty, not a social program. Yet its implicit political economy is notable: a commitment to transparent, inspectable causal reasoning, to models that can, at least in principle, be audited and contested.
Statistical Learning and Robotics (1990s)
Through the 1990s, AI leaned ever more heavily into statistics and data. Vladimir Vapnik’s statistical learning theory and support vector machines professionalized pattern recognition.
Rodney Brooks’s “Intelligence without representation” argued for embodied, bottom-up robotics that learns in the world rather than only in symbols. These were not ideologies so much as epistemologies with downstream political effects: they privileged empiricism, interaction, and iterative improvement over closed theory, stances comfortable within a marketized research economy where performance benchmarks and rapid prototyping rule.
The Deep Learning Renaissance (2010s)
The 2010s brought the deep learning renaissance. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio, awarded the 2018 A.M. Turing Award, reintroduced multilayer neural networks as scalable engines for perception, speech, and language. Their own public positions diverge subtly.
Hinton often voiced unease with heavy corporate capture of research. LeCun, long embedded in industry, champions open science, but within corporate labs. Bengio, based in Montréal’s academic ecosystem, increasingly foregrounds ethics and governance.
On the technical record, the ACM’s Turing citation and their Communications of the ACM overview trace the scientific arc. On the ideological record, Bengio’s leadership in drafting and promoting the Montréal Declaration for Responsible AI (2018) and the subsequent cross-disciplinary “Managing (Extreme) AI Risks” consensus papers situate him in a reformist, pro-regulation camp that sees democratic governance and precaution as complements to innovation.
AlphaGo’s defeat of Lee Sedol in 2016, documented in Nature, announced a new fusion of deep networks with tree search and reinforcement learning. Demis Hassabis’s DeepMind exemplifies a contemporary political economy of AI: scientific ambition nested within platform capitalism. It is not an electoral ideology but a structural one, an embrace of long-horizon scientific bets underwritten by a large firm, paired with assurances about safety and beneficial AGI.
Transformers and Ethical Governance (2017–Present)
Finally, the transformer architecture introduced by Vaswani et al. (2017) reshaped natural language processing, enabling large language models and multimodal systems that now dominate AI discourse. This has forced a convergence of technical and political questions: whose data, which labor, what externalities?
Stuart Russell’s Human Compatible articulates a program of value alignment and provably beneficial AI that is explicitly normative: an argument that advanced AI must be designed from the outset to pursue human preferences under uncertainty, subject to oversight and corrigibility. Russell’s stance, liberal, institutionalist, and safety-first, anchors a growing scholarly literature connecting AI design to democratic governance.
Conclusion
The history of AI is not just a sequence of technical methods, but also a reflection of the intellectual and social environments in which they emerged.
Turing’s work made the testing of machine thinking a practical possibility, grounded in a liberal and humanist vision of reason.
Wiener’s cybernetics brought attention to the societal consequences of automation, especially its potential misuse in militarized or corporate contexts.
McCarthy’s optimism represented a strand of confidence in technological progress and individual freedom, while Simon’s emphasis on bounded rationality connected AI to ideas of planning and governance within institutions.
Weizenbaum’s critique highlighted ethical limits and the risks of overestimating machines, whereas Pearl’s probabilistic reasoning reintroduced humility and rigor into how uncertainty is managed.
The deep learning revolution of the 2010s demonstrated the power of scale and data, while contemporary debates around ethics and safety underline the need for transparency, regulation, and alignment with human values. Taken together, the politics and philosophies of intelligence have always shaped, and continue to shape, the engineering of intelligence.
As I have argued in my previous articles on Demos, from productivity paradoxes to climate burdens, the story of AI today cannot be separated from the worldviews and structures that sustain it. The history traced here is not only about algorithms but about politics, economies, and ideas, forces that remain as decisive for AI’s future as they were for its past.
The politics of intelligence are inseparable from the engineering of intelligence.