The Reality Behind the Singularity

Every few weeks, a technology executive makes another announcement about the imminent arrival of artificial general intelligence, and the cycle of breathless commentary begins again. Sam Altman suggests we are approaching a moment of civilizational transformation. Elon Musk warns that someone else’s version of that transformation will be catastrophic, while his own proceeds apace. The disagreement between them is real, and loudly performed. What is less often remarked upon is how much their underlying assumptions have in common.

Both men operate from a set of convictions about intelligence, progress, and the future that are, on close inspection, more theological than scientific. And the competitive dynamic they have helped create bears a closer resemblance to a familiar historical pattern than most technology commentary is willing to acknowledge.

In Stanley Kubrick’s 1964 film Dr. Strangelove, a cast of generals, politicians, and strategists find themselves unable to stop a nuclear catastrophe that none of them actually wants. The mechanism of destruction is not malice but logic: the inexorable internal logic of an arms race, in which every actor, following rational self-interest, produces a collectively catastrophic result. No one intends the ending. Everyone contributes to it.

The parallel to the current state of AI development is not metaphorical. It is structural.

The Category Error at the Foundation

The concept of the technological singularity, as articulated by mathematician Vernor Vinge and later popularized by futurist Ray Kurzweil, rests on two specific and contestable assumptions: that intelligence is a property that can be meaningfully measured on a single scale, and that a system sufficiently advanced on that scale will be qualitatively different from anything that came before it. Kurzweil’s version of this argument, laid out in The Singularity Is Near (2005), predicts this threshold will be crossed around 2045.

What this framework inherits, without much examination, is a fundamentally Cartesian view of mind. Cognition, in this model, is computation. The brain is hardware. Intelligence is a function that runs on it, and can in principle run just as well, or better, on different hardware entirely. The substrate, in other words, does not matter.

This is a position that several decades of neuroscience research have given us good reasons to question.

Antonio Damasio’s somatic marker hypothesis, developed across a series of influential works including Descartes’ Error (1994), demonstrated through careful clinical study that emotional processing is not incidental to rational decision-making but constitutive of it. Patients with damage to the ventromedial prefrontal cortex, the region most associated with emotional integration, did not become more rational in the absence of emotional interference. They became incapable of making decisions at all. The implication is significant: what we commonly describe as reason is not separable from the affective systems that the singularity framework is inclined to treat as noise.

Lisa Feldman Barrett’s more recent work in How Emotions Are Made (2017) extends this argument further, presenting evidence that emotion and cognition are not merely intertwined but that the distinction itself may be a useful fiction. The brain, in Barrett’s account, is a predictive organ constantly modeling the body’s internal state and its relationship to the external environment. Feeling and thinking are different descriptions of the same underlying process.

The implications for the singularity argument are not trivial. If intelligence in any robust sense requires embodiment, a body whose states are continuously integrated into cognition, then the prospect of disembodied computational intelligence reaching or exceeding human cognitive capacity is not simply technically difficult. It may be the wrong description of what intelligence is.

The broader framework known as 4E cognition (embodied, embedded, enacted, and extended) further develops this position across a range of disciplines, arguing that cognition emerges from the dynamic interaction between brain, body, and environment rather than from computation occurring within a bounded system. On this account, the question of whether a machine could be more intelligent than a human is a bit like asking whether a map could be a better traveler than a person. The category does not transfer cleanly.

And then there is the social dimension, which the singularity framework tends to underweight to a degree that borders on negligence. Human cognitive capacity is not simply individual. It is distributed across relationships, institutions, cultural practices, and accumulated knowledge that has been refined across tens of thousands of years of collective life. The organizational intelligence that allows human societies to coordinate at scale, to build and maintain institutions, and to sustain trust across generations is not separable from the embodied, emotionally regulated, socially embedded creatures who produce it.

The Economic Logic Beneath the Theology

Understanding why the singularity narrative has proven so resilient in the face of these objections requires looking not only at its philosophical assumptions but at the economic interests it serves.

Matteo Pasquinelli’s The Eye of the Master: A Social History of Artificial Intelligence (Verso, 2023), winner of the Deutscher Memorial Prize, offers one of the most rigorous accounts available of where artificial intelligence actually comes from, and the answer is considerably less flattering to the industry’s self-conception than the standard origin story. Pasquinelli argues that AI’s inner structure has been shaped not by the imitation of biological intelligence but by the intelligence of labor and social relations, traceable in a direct line from Charles Babbage’s industrial calculating engines of the nineteenth century through to the deep learning systems of the present day. What AI encodes, in this account, is not mind but work, the accumulated collective knowledge of human labor, abstracted into algorithms and enclosed as private property.

This reframing has significant implications for how we understand the singularity narrative. What current large-scale AI systems represent, on Pasquinelli’s analysis, is the automation of what Marx called the general intellect, the collective knowledge that arises from social cooperation across generations, crystallized into a form of monopoly power over knowledge itself. The knowledge was always collective. The ownership is entirely private.

The singularity narrative performs a specific ideological function within this economic structure. By framing AI development as an inexorable march toward a transcendent future, it accomplishes two things simultaneously: it naturalizes the concentration of enormous power and wealth in the hands of a small number of private actors, and it preemptively disqualifies democratic oversight as futile interference with historical destiny. As Pasquinelli observes, the apocalyptic dimensions of singularity thinking (the warnings about existential risk, the scenarios of human obsolescence) are not in tension with the interests of the technology industry. They actively serve those interests, by making the technology appear as a force beyond human control and therefore beyond human accountability.

This is the reason for what might otherwise seem like a paradox: many of the most alarming predictions about artificial intelligence come from the same people who are most aggressively building it. The alarm and the acceleration are not contradictory. They are complementary. The alarm establishes that what is being built is of world-historical significance. The acceleration establishes who will control it when it arrives. The ordinary human beings whose collective knowledge trained these systems, whose labor produced the data, whose cultural output provided the corpus, they are present in the technology in every meaningful sense, and absent from its ownership entirely.

In this light, the singularity is less a scientific prediction than an eschatological story that capital tells about itself at the moment of its most extreme internal contradiction: a system that has extracted and enclosed collective human intelligence, now declaring that intelligence to be on the verge of rendering its human sources obsolete. It is not a vision of transcendence. It is the final chapter of a very long story about who owns what, told in the register of prophecy rather than political economy.

The Arms Race and Its Logic

Setting aside the philosophical objections for a moment, there is a more immediate concern that the singularity framing tends to obscure.

One can remain agnostic about whether true artificial general intelligence is achievable; the competitive dynamic currently driving AI development has consequences that do not depend on resolving that question. The relevant dynamic is not the eventual arrival of a superintelligent system. It is the arms race logic that the possibility of such a system has already set in motion.

Consider the structure of the current moment. OpenAI, Google DeepMind, Anthropic, xAI, and a constellation of well-funded competitors are each advancing AI capabilities as rapidly as their resources allow, with the explicit justification in several cases that it is safer to be at the frontier than to cede that position to others. The argument is self-sealing: the danger of the race is used to justify continued participation in it. Every actor has a compelling reason not to stop. No actor has the power to stop alone.
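The self-sealing character of that argument has a familiar formal shape: a prisoner’s dilemma, played among a handful of powerful actors. A minimal sketch in Python makes the structure concrete. The payoff numbers here are entirely hypothetical, chosen only to encode the incentives described above, and the two-player framing is a deliberate simplification of a many-player race:

    # Illustrative only: the race dynamic as a two-player prisoner's dilemma.
    # Payoff numbers are hypothetical, chosen to encode the incentives in the
    # text, not drawn from any real analysis of the AI industry.

    PAYOFFS = {
        # (my_move, rival_move): my_payoff -- higher is better
        ("restrain", "restrain"): 3,      # mutual restraint: shared benefit
        ("restrain", "accelerate"): 0,    # I cede the frontier entirely
        ("accelerate", "restrain"): 4,    # I capture the frontier alone
        ("accelerate", "accelerate"): 1,  # a costly, risky race for everyone
    }

    def best_response(rival_move):
        """Return the move that maximizes my payoff, given the rival's move."""
        return max(("restrain", "accelerate"),
                   key=lambda my_move: PAYOFFS[(my_move, rival_move)])

    for rival_move in ("restrain", "accelerate"):
        print(f"If the rival chooses to {rival_move}, my best response is to "
              f"{best_response(rival_move)}.")

Whatever the rival does, accelerating pays better, so both actors accelerate, and both land in the worst collective outcome (a payoff of 1 each) rather than the mutual restraint (3 each) that both would prefer. That is the arms race logic in miniature: no participant can improve its position by stopping alone.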

This is the logic that Kubrick’s film understood so well. The catastrophe in Dr. Strangelove is not produced by villains. It is produced by rational actors, each following the internal logic of their own position, in a system where the structure of incentives makes the collectively catastrophic outcome nearly inevitable. General Ripper is a true believer. President Muffley is a reasonable man. Ambassador de Sadesky is simply doing his job. None of them wants what happens. All of them contribute to it.

The AI acceleration dynamic has a similar structure, with the additional complication that several of its most prominent architects have persuaded themselves that they are not simply participants in an arms race but custodians of a transformation that will ultimately benefit humanity. This self-conception deserves scrutiny. The belief that a small group of technologists, operating largely outside democratic accountability, are building toward a future that will be good for everyone is a faith position in a fairly precise sense. It is held with conviction, it is resistant to falsification, it offers a narrative of redemption through disruption, and it licenses present harms (displacement of workers, concentration of power, erosion of privacy, and the redirection of enormous capital and human talent away from other pressing problems) as necessary transition costs on the way to a promised state that is always just beyond the horizon. The Cold War strategists who built the nuclear stockpiles were, in many cases, entirely sincere. They believed they were protecting civilization. The belief did not make the arms race less dangerous.

What the Analogy Recovers

There is, however, a more hopeful dimension to the nuclear parallel that is worth examining seriously, because the story did not end with Dr. Strangelove.

The nuclear arms race produced catastrophic risk, several near-misses that remain poorly understood by the general public, and a period of sustained terror that shaped the second half of the twentieth century. It also, eventually, produced arms control. The Partial Nuclear Test Ban Treaty of 1963, the Non-Proliferation Treaty of 1968, the SALT agreements, and eventually the START treaties represent an imperfect but real record of human institutions managing an existential technology through negotiation, multilateral agreement, and the sustained pressure of civil society.

What produced those outcomes was not superior computational intelligence. It was the full range of characteristically human capacities that the singularity framework tends to discount: diplomatic relationships built on trust and sustained over years, emotional intelligence applied to adversarial negotiations, the social and political pressure of citizens and scientists who had bodies and communities and a clear interest in survival, and institutional structures capable of making and keeping commitments across time.

The Cartesian model of intelligence has no good account of how any of that works. The embodied, socially embedded, emotionally integrated model of human cognition explains it rather well.

There is an additional dimension here that Pasquinelli’s analysis makes visible. If what is encoded in AI systems is collective human knowledge (the general intellect, in Marx’s formulation), then the question of who controls and directs these systems is not a technical question but a political one, and one to which collective human action is precisely the appropriate response. The arms control treaties of the Cold War did not emerge from the calculations of individual rational actors. They emerged from a sustained political struggle over who had the right to make decisions that affected everyone. The parallel struggle over AI governance is already underway, in legislative bodies, international forums, labor organizations, and civil society groups, and its outcome is genuinely uncertain.

The lesson of the nuclear age for the present moment is not simply that arms races are dangerous, though they are. It is that the same human capacities the technology optimists are most inclined to treat as limitations (our embeddedness in relationships and communities, our emotional responsiveness, our dependence on social trust and institutional continuity) are precisely the capacities that have historically enabled us to manage technologies that threatened to outrun our wisdom.

The question that the current moment puts to us is not whether artificial intelligence will eventually exceed human cognitive capacity on whatever scale one chooses to measure it. It is whether we will preserve enough of what is distinctively human (the embodied, the relational, the socially accountable) to do what we have managed to do before: look at a system we have built, recognize that its trajectory is dangerous, and find the collective will to change course.

That is not, in the end, a technological question. It never was.
