Extended abstract of invited contribution to "Singularity Hypotheses" (Springer, 2013)

Technological Singularities, Intelligence Explosions
& The Future of Biological Sentience

"THE BRAIN is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside"

(Emily Dickinson)

  • Does the future of life in the universe lie in the biological descendants of Homo sapiens? Or in post-biological superintelligence - a transition known to believers as The Singularity?

  • If the future of life lies in post-biological superintelligence, may we anticipate any place for "uploads" - functioning whole-brain emulations of organic humans? Or will humans be superseded by some kind of recursively self-improving super-AGI (artificial general intelligence)?

  • If the future lies in super-AGI, will these super-AGIs be sentient? Or is consciousness purely incidental to intelligence - a mere implementation detail of primordial organic minds?



The structure of this essay will be as follows. The history and multiple meanings of the terms "[Technological] Singularity" and "Intelligence Explosion" will be charted from their origins in the work of Hungarian polymath John von Neumann (1950), the mathematician I.J. Good (1965) and computer scientist Vernor Vinge (1983) to their recent eruption into popular culture: "2045: The Year Man Becomes Immortal" (Time magazine, Feb. 10, 2011). Moore's Law and Kurzweil's "Law of Accelerating Returns" are reviewed. What is the nature of recursive self-improvement in the absence of a self to be improved? Will the pleasure-pain axis of carbon-based life be superseded by the formal "utility functions" of non-biological machines? An ecologically naturalistic account of intelligence will be constructed. Is "g" real or a statistical artefact? A distinction will be drawn between "mind-blind", autistic intelligence, as measured by IQ tests, and the perspective-taking, mind-reading, "Machiavellian" intelligence that enabled one species of social primate to dominate life on Earth.

Next, the contrasting empirical claims and background assumptions of two pivotal figures in the modern Singularity movement, Ray Kurzweil ("The Singularity Is Near") and Eliezer Yudkowsky of the Singularity Institute for Artificial Intelligence (SIAI), will be critically examined. Kurzweil foresees an ecosystem of enhanced biological humans, human-machine hybrids, digital "uploads" and super-AGIs. By contrast, Yudkowsky draws on the analogy of a chain of nuclear fissions gone critical to prophesy an uncontrolled positive feedback cycle that goes "FOOM", leading to a singleton super-AGI. Most likely this singleton super-AGI will be non-friendly to humans. The Singularity Institute exists to avert this outcome and promote "[human-]friendly AI". After reviewing the chain of inferences supporting this scenario, it will be concluded that the risks of primitive human malevolence trump machine indifference. The greatest underlying source of global catastrophic and existential risk this century lies not in super-AGI, but rather in the competitive dominance behaviour of human male primates using "narrow AI" to pursue what evolution biologically "designed" male hunter-warriors to do, i.e. wage war. Fortunately, futurology based on extrapolation has pitfalls.


One metric of progress in AI remains stubbornly unchanged. Despite the exponential growth in the number of transistors on a microchip, the soaring clock speed of microprocessors, the growth in computing power measured in MIPS, the dramatically falling costs of manufacturing transistors and the plunging price of dynamic RAM (etc), any chart plotting the growth rate in digital sentience shows neither exponential growth nor linear growth, but no progress at all. Our machines are still zombies. On some fairly modest philosophical assumptions, digital computers were not subjects of experience in 1946 (cf. ENIAC); nor are they conscious subjects in 2011 (cf. "Watson"); nor do researchers know how any kind of sentience might be "programmed" in future. On theoretical grounds it will be argued that no robot or digital computer with a von Neumann architecture, or using massively parallel "neurally inspired" classical computation, will be non-trivially conscious, nor run software that is non-trivially conscious, nor emulate the computational power of the human mind/brain. Some AI researchers dismiss our teeming diversity of qualia - the philosopher's term of art for the myriad "raw feels" of consciousness - as epiphenomenal and hence functionally irrelevant to intelligence. Yet epiphenomena are by definition causally impotent; presumably, then, they lack the causal efficacy to inspire treatises on their existence. It will be argued that if phenomenology is the hallmark of the mental, then talk of "digital minds" is anthropomorphic. An explanation will be offered of Moravec's paradox, i.e. the unexpected discovery by robotics researchers that high-level reasoning requires comparatively little computation, whereas supposedly low-level sensorimotor skills demand immense computational resources. This section concludes with a discussion of the Church–Turing thesis and its implications for digital sentience.


    Evolutionary biologist Theodosius Dobzhansky famously observed that "Nothing in Biology Makes Sense Except in the Light of Evolution". Likewise, nothing in the future of intelligent life makes sense except in the light of a solution to the Hard Problem of Consciousness and the closure of Levine's Explanatory Gap. The evolutionary success of organic robots has been driven by our capacity to run egocentric, real-time macroscopic world-simulations - what the naive realist simply calls the physical world. Unlike classical digital computers, organic computers can "bind" multiple features (edges, colours, motion, etc) into unitary phenomenal objects embedded in unitary phenomenal world-simulations apprehended by a momentarily unitary self. These simulations run in (almost) real time. Occasionally, object binding and/or the unity of consciousness partially breaks down, with devastating results (cf. akinetopsia or "motion blindness"; simultanagnosia, an inability to apprehend more than a single object at a time, etc). Yet normally our simulations of fitness-relevant patterns in the mind-independent world are seamless. Neurons, (mis)construed as classical processors, are pitifully slow, with spiking frequencies of barely 200 per second. By contrast, silicon (etc) processors are ostensibly millions of times faster. Yet unlike the CPUs of classical robots, an organic mind/brain delivers dynamic unitary objects and unitary world-simulations with a "refresh rate" of many billions per second (cf. the persistence of vision as experienced watching a movie run at a mere 30 frames per second). These cross-modally matched simulations take the guise of what passes as the macroscopic world: a spectacular egocentric simulation run by the vertebrate CNS that taps into the world's fundamental quantum substrate.
Quantum mind theorists, most notably Roger Penrose and Stuart Hameroff, typically focus on the ability of mathematically inclined brains to perform non-computable functions in higher mathematics for which selection pressure has presumably been non-existent. More generally, the capacity for serial linguistic thought and formal logico-mathematical reasoning is a late evolutionary novelty executed by a slow, brittle virtual machine running on top of its quantum parent. By contrast, the immensely adaptive capacity to run unitary world-simulations, simultaneously populated by hundreds or more dynamic objects, enables organic robots to solve the computational challenges of navigating a hostile environment that would leave the fastest classical supercomputer grinding away until Doomsday. Physical theory (cf. the Bekenstein bound) shows that informational resources as classically conceived are not just physical but finite and scarce. Thus invoking computational equivalence and asking whether a classical Turing machine can run a human-equivalent macroscopic world-simulation is akin to asking whether a classical Turing machine can factor 1500-digit numbers in real time [i.e. no]. Organic minds are ultrafast; classical serial computers are slow. It will be argued that "substrate-independent" phenomenal world-simulations are impossible for the same reason that "substrate-independent" chemical valence structure is impossible. Reality has only one ontological level, even though it's amenable to description at different levels of computational abstraction. Contra Marvin Minsky ("The most difficult human skills to reverse engineer are those that are unconscious"), the most difficult skills for roboticists to engineer are intensely conscious: our colourful, noisy, tactile, sometimes hugely refractory virtual worlds.
A contrast will be drawn between the neurocomputational prowess of the mind/brain of a bumble bee - whose world-simulation runs on less than one million neurons - and a humble Cray supercomputer.

    A second obstacle to building human-level artificial general intelligence - let alone a recursively self-improving super-AGI culminating in a Technological Singularity - depends on finding a solution to the first challenge. The evolution of distinctively human intelligence, sitting on top of our world-simulating prowess, has been driven by the interplay between our rich generative syntax and superior mind-reading skills (cf. the Machiavellian Ape hypothesis). Critically for building AGI, this real-time mind-reading expertise is parasitic on the capacity of neural wetware to generate first-order world-simulations populated with intentional agents whose minds can be read. Not all humans possess the cognitive capacity to acquire a theory of mind. Thus people with autistic spectrum disorder don't just fail to understand other minds; autistic intelligence cannot begin to understand its own mind. Extreme autistic intelligence has no conception of a self that can be improved, recursively or otherwise. Autists can't "read" their own minds. The inability of the autistic mind to take what Daniel Dennett calls the intentional stance parallels the inability of classical computers to understand the minds of intentional agents or understand their own zombie status. Even with smart algorithms, the ability of ultra-intelligent autists to predict the behaviour of mindful organic robots by relying exclusively on the physical stance (e.g. solving the Schrödinger equation of the intentional agent in question) is extremely limited. Thus it will be argued that - in common with autistic "idiot savants" - classical AI gone rogue will be vulnerable to the low cunning of Machiavellian apes and the high cunning of our transhuman descendants.

Historian Lewis Namier once remarked how “One would expect people to remember the past and imagine the future, but in fact…they imagine the past and remember the future.” The "future" most of us remember is a blend of childhood science-fiction novels and Hollywood movies, Skynet and HAL. Proverbially, the dominant technology of an age supplies its root-metaphor of mind. Our dominant metaphor is the digital computer. ["What else could the mind be?"] The rise of artificial quantum computing later this century promises to shift our root-metaphor of mind, life and the multiverse. Yet there are extraordinarily tight physical constraints on building artificial quantum minds, let alone artificial quantum mind-supporting robots. These constraints make the prospect of a Singularity, or imminent world-domination by artificial machine intelligence, quite unlikely. Contra Max Tegmark, the robust capacity of organic neurons to resist thermally-induced decoherence long enough to do computationally useful work means that the near-future belongs to (post-)human biological minds aided by digital zombies, not "uploads" or non-biological AGIs. (Post-)humanity can look forward to intelligence-amplification via genetic rewrites and machine augmentation; effectively unlimited material abundance via molecular nanotechnology; designer paradises in immersive VR; the end of ageing and disease; the abolition of suffering in our forward light-cone; the option of life based entirely on gradients of intelligent bliss; and perhaps the adaptive radiation of sentience across the Galaxy. Yet without sentience, nothing matters at all.

* * *


Baron-Cohen, S. (1995). "Mindblindness: An Essay on Autism and Theory of Mind". (MIT Press/Bradford Books).

Bostrom, N. (2002). "Existential risks: analyzing human extinction scenarios and related hazards". Journal of Evolution and Technology, 9.

Brooks, R. (1991). "Intelligence without representation", Artificial Intelligence 47 (1-3): 139–159, doi:10.1016/0004-3702(91)90053-M.

Byrne, R. & Whiten, A. (1988). "Machiavellian Intelligence". (Oxford: Oxford University Press).

Chalmers, DJ. (2010). “The singularity: a philosophical analysis”. http://consc.net/papers/singularity.pdf

Chalmers, DJ. (1995). "Facing up to the problem of consciousness". Journal of Consciousness Studies 2(3), 200-219.

Churchland, P. (1989). "A Neurocomputational Perspective: The Nature of Mind and the Structure of Science". (MIT Press).

Clark, A. (2008). "Supersizing the Mind: Embodiment, Action, and Cognitive Extension". (Oxford University Press, USA).

Cochran, G. & Harpending, H. (2009). "The 10,000 Year Explosion: How Civilization Accelerated Human Evolution". (Basic Books).

Cohn, N. (1957). "The Pursuit of the Millennium: Revolutionary Millenarians and Mystical Anarchists of the Middle Ages". (Pimlico).

Dennett, D. (1987). "The Intentional Stance" (MIT Press).

Deutsch, D. (1997). "The Fabric of Reality". (Penguin).

Deutsch, D. (2011). "The Beginning of Infinity". (Penguin).

Good, IJ. (1965). "Speculations concerning the first ultraintelligent machine", in Alt, F.L. & Rubinoff, M. (eds.), Advances in Computers 6: 31–88 (Academic Press).

Hameroff, S. (2006). "Consciousness, neurobiology and quantum mechanics" in: The Emerging Physics of Consciousness, (Ed.) Tuszynski, J. (Springer).

Huxley, A. (1932). "Brave New World". (Chatto and Windus).

Kurzweil, R. (2005). "The Singularity Is Near". (Viking).

Kurzweil, R. (1999). "The Age of Spiritual Machines". (Viking).

Levine, J. (1983). "Materialism and qualia: The explanatory gap". Pacific Philosophical Quarterly 64 (October):354-61.

Lockwood, M. (1989). "Mind, Brain, and the Quantum". (Oxford University Press).

Metzinger, T.(2009). "The Ego Tunnel: The Science of the Mind and the Myth of the Self". (Basic Books).

Minsky, M. (1987). "The Society of Mind". (Simon and Schuster).

Moravec, H. (1988). "Mind Children: The Future of Robot and Human Intelligence". (Harvard University Press).

Penrose, R. (1994). "Shadows of the Mind: A Search for the Missing Science of Consciousness". (Oxford University Press).

Revonsuo, A. (2005) "Inner Presence: Consciousness as a Biological Phenomenon". (MIT Press).

Revonsuo, A. and Newman, J. (1999). Binding and Consciousness. Consciousness and Cognition 8, 123-127.

Rumelhart, D.E., J.L. McClelland and the PDP Research Group (1986). "Parallel Distributed Processing: Explorations in the Microstructure of Cognition". Volume 1: Foundations. (Cambridge, MA: MIT Press).

Seager, W. (1999). "Theories of Consciousness". (Routledge).

Shulgin, A. (1995). "PiHKAL: A Chemical Love Story". (Transform Press, U.S.).

Saunders, S., Barrett, J., Kent, A. & Wallace, D. (eds.) (2010). "Many Worlds?: Everett, Quantum Theory, and Reality". (Oxford University Press).

Shulman, C. & Sandberg, A. (2010). "Implications of a software-limited singularity". Proceedings of the European Conference on Computing and Philosophy.

Strawson, G., et al. (2006). "Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism?". (Imprint Academic).

Tegmark, M. (2000). "Importance of quantum decoherence in brain processes". Phys. Rev. E 61 (4): 4194–4206. doi:10.1103/PhysRevE.61.4194.

Vinge, V. (1993). "The coming technological singularity". Whole Earth Review (New Whole Earth LLC), March 1993.

Vitiello, G. (2001). "My Double Unveiled: The Dissipative Quantum Model of Brain". (John Benjamins).

de Waal, F. (2000). "Chimpanzee Politics: Power and Sex among Apes". (Johns Hopkins University Press).

Wohlsen, M. (2011). "Biopunk: DIY Scientists Hack the Software of Life". (Current).

Yudkowsky, E. (2007). "Three Major Singularity Schools". http://yudkowsky.net/singularity/schools.

Yudkowsky, E. (2008). “Artificial intelligence as a positive and negative factor in global risk” in Bostrom, Nick and Cirkovic, Milan M. (eds.), Global catastrophic risks, pp. 308–345 (Oxford: Oxford University Press).

David Pearce (March 2011)
