WJ: To ensure that we don't at any point talk past each other, why don't we start with the foundations of morality, and also your beliefs about consciousness. You refer to yourself (correct me if I'm wrong) as a lexical negative utilitarian. What leads you to think that suffering and the elimination of suffering should be prioritized over well-being and its pursuit? Is your prioritization of suffering such that you think that suffering of a certain intensity is more morally significant even than the same intensity of pleasure, or is it just bound up with the unfortunate biological facts about humans and Darwinian life altogether - namely, that suffering confers a fitness advantage, and we have therefore evolved to suffer more than we feel well-being?

DP: I'm always torn about discussing negative utilitarianism (NU). It's a minority position. I worry that talking about NU distracts from the abolitionist project - and maybe even harms it. Genetically phasing out the biology of involuntary suffering can potentially be endorsed by diverse secular and religious perspectives. There is no Biblical injunction: thou shalt not modify thy genetic source code. Veganizing the biosphere is foretold in Isaiah. The abolitionist project is implicit in the World Health Organization's ambitious commitment to universal health - “complete physical, mental and social well-being”. Pain-free life doesn't deserve to be any more controversial than pain-free surgery. By contrast, almost any discussion of NU soon turns to a notorious thought-experiment, the benevolent world-destroyer argument. If ending suffering is all that matters ethically, then why not end life itself? Omnicide is easier than genetically reprogramming the biosphere. And even if (like me) you disclaim plotting Armageddon, most of your audience is alienated if you acknowledge your NU conviction that non-existence would have been better than suffering-filled Darwinian life.
That said, yes, I'm a negative utilitarian. (I learned only from Wikipedia that I'm a "lexical" negative utilitarian: lexical NU treats happiness as a tiebreaker when levels of suffering are equal.) Properly understood, NU is compassion systematized. Minimising, preventing, and ultimately abolishing experience below hedonic zero in our forward light-cone is all that ethically matters.
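A minimal sketch of that lexical ordering, as an illustrative Python toy with hypothetical numbers (not a formal statement of NU): suffering is compared first, and happiness only breaks exact ties.

```python
# Illustrative sketch of a lexical negative-utilitarian ranking:
# suffering is minimised first; happiness only breaks ties.
from dataclasses import dataclass

@dataclass
class Outcome:
    suffering: float  # total experience below hedonic zero
    happiness: float  # total experience above hedonic zero

def lexically_better(a: Outcome, b: Outcome) -> bool:
    """Return True if outcome `a` is preferred to `b` under lexical NU."""
    if a.suffering != b.suffering:
        return a.suffering < b.suffering   # less suffering always wins
    return a.happiness > b.happiness       # tiebreaker: more happiness

# An "Omelas"-style trade-off: one tormented child plus a joyful metropolis
# versus no torment and modest happiness. The lexical ordering walks away.
omelas = Outcome(suffering=1.0, happiness=10**9)
quiet = Outcome(suffering=0.0, happiness=1.0)
print(lexically_better(quiet, omelas))  # True
```

On this ordering, no amount of extra happiness can offset any increase in suffering; happiness matters only once levels of suffering are equal.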
NU sounds a joyless creed – even if we discount its contested omnicidal implications. Yet this dour view rests on a misconception. As a NU, I urge creating a sublime future of superhuman bliss for sentience: paradise engineering.
Naively, engineering superhuman bliss is irrelevant to NU ethics: surely, by NU criteria, pleasure doesn't matter ethically in the slightest except insofar as it prevents pain? But recall how NUs want to eliminate disappointment, boredom and even the faintest twinge of regret that future life might not be everything you dream of - and more. So other things being equal, outrageously wonderful paradise engineering is NU! What's more, unlike classical utilitarianism (cf. utilitarianism.com), NU doesn't oblige any future superintelligence to obliterate a blissful super-civilization with a cosmic utility-maximising utilitronium shockwave. In that sense, NU is less of a potential x-risk than its classical cousin. Yet where there are trade-offs between pain and pleasure, NUs will indeed always "walk away from Omelas”, even when the trade-off is ostensibly very favourable – one tormented kid in a basement versus an immense joyful metropolis. In that sense, yes, I'm hardcore NU. I think the universal wave function is a single entity (cf. wave function monism). The ostensibly good depends on the bad. So reality itself is fundamentally evil. Our genteel discussion here can't begin to evoke its horrors. Yet the world has no OFF switch. Sad to say, engineering a vacuum phase-transition is sci-fi. So we must act accordingly.
Empirically as well as ethically, the most atrocious suffering is intuitively far worse than the most wonderful pleasure is divine. It's a strong intuition I share – especially given my NU ethics and personal experience of life. But caution is in order. Consider the ordeals that sentient beings will put themselves through in order to access fitness-enhancing pleasures such as sex. So perhaps our intuition of empirical if not ethical asymmetry between the intensity of pain and pleasure should be qualified. I don't know. Consider too how desperately famished rats, who haven't eaten for days, won't cross an electrified grid to access food, but they will cross the grid to self-stimulate their mesolimbic dopamine system. More generally, pleasure is so insidiously morally corrupting that it persuades most people not just to accept pain-ridden Darwinian life, but to create more of it by having kids. For evolutionary reasons, most people feel the good in life outweighs the bad.
I fear Plato was right: “Pleasure is the greatest incentive to evil.”

WJ: What are your beliefs about consciousness? Are you sympathetic to David Chalmers and his contention that consciousness is a truly hard problem unlike any other?
DP: The Hard Problem of consciousness exists only relative to background assumptions. Drop one (or more) of these background assumptions and the mystery doesn't arise. I'm a naturalist, a realist and a physicalist. Physicalism best explains the unique technological success-story of modern science. But most physicalists implicitly make an additional assumption: that the intrinsic nature of the physical, the mysterious “fire” in the equations of QFT, is non-experiential. I think this metaphysical assumption is unwarranted. We've no scientific evidence that the intrinsic nature of the world's fundamental quantum fields differs inside and outside the head. If so, then hypothetical p-zombies are an oxymoron; “p”-zombies would be unphysical. According to non-materialist physicalism, what makes animal minds like ours special isn't consciousness per se - for it's the essence of the physical - but rather phenomenally-bound consciousness, most notably the real-time phenomenally-bound macroscopic world-simulation (“perception”) that your mind is running right now. In organisms with a capacity for rapid self-propelled motion, running throwaway world-simulations that masquerade as a robustly classical external environment is adaptive.
Non-materialist physicalism shouldn't be confused with property-dualist panpsychism, cosmopsychism or animism. It's not Berkeleyan idealism - though external reality is theoretically inferred, not "perceived". Rather, non-materialist physicalism proposes that only the physical and physical properties are real. All the “special sciences” supervene on fundamental physics. The world is formally described, exhaustively, by the equations of mathematical physics. But the essence of the physical isn't what materialist metaphysicians suppose.
Nevertheless, consciousness (conceived as the essence of the physical) mystifies me. For science lacks a cosmic Rosetta Stone to let us “read off” the myriad interdependent textures (“what it feels like”) of experience from the diverse solutions to the equations of QFT. I'm baffled - but not by the Hard Problem, the binding problem, the problem of causal efficacy, the palette problem, and other staples of academic philosophy of mind. As posed, such mysteries are (in my view) the unscientific by-product of bad metaphysics.
WJ: On this philosophical basis you go on to sketch what you call the Hedonistic Imperative, in which you argue that suffering can be abolished throughout the living world and replaced with gradients of intelligent bliss.
DP: Yes. Genome reform can fix the root of all evil. The biology of suffering can be replaced by a more civilized signalling system and motivational architecture - life based entirely on information-sensitive gradients of bliss. Critically, HI doesn't depend on my tentative dissolution of the Hard Problem - an idealist variant of what philosophers call the intrinsic nature argument. Indeed, like my NU, I worry that my idiosyncratic ideas on consciousness distract from the abolitionist project. After all, I could be wrong - it wouldn't negate HI. You can be a traditional “materialist” physicalist and support HI. What is critical for the success of the Hedonistic Imperative is monistic physicalism. Once we understand the necessary and sufficient physical conditions for hedonic sub-zero states to exist, we can mitigate and prevent bad states even in the absence of a deeper understanding of consciousness. And in a post-suffering world, we can ring-fence the Evil Zone with multiple genetic safeguards to ensure that mental and physical pain can never recur. Unpleasant states of consciousness can be made physiologically impossible - and ultimately inconceivable.
Whereas my ideas on the alleged Hard Problem of consciousness may unhelpfully distract from the abolitionist project, what is highly ethically relevant to the abolitionist project is the binding problem (cf. binding-problem.com). The binding problem can't be dodged. If textbook neuroscience is correct, then why aren't we at most micro-experiential zombies, just patterns of Jamesian “mind dust”? Why aren't humans mere aggregates of c.86 billion membrane-bound neuronal micro-pixels of experience? What explains the unity of consciousness – both local and global binding? If there is a classical explanation of binding, and if digital whole-brain emulations are computationally feasible, then won't future digital computers support phenomenally-bound minds with a pleasure-pain axis too, with momentous ethical implications? And maybe soon our (super)intelligent machines will spawn sentient phenomenally-bound (super)minds of their own.
Once again, I take a minority view. Phenomenal binding is classically inexplicable. Instead, we are quantum minds running subjectively classical world-simulations. Only the fact that the superposition principle of QM doesn't break down in the CNS lets us run phenomenally-bound world-simulations that are subjectively classical - simulations of a world in which superposition seems to break down. Quantum superpositions are individual states. I argue that no classical information-processor can support phenomenal binding. Digital computers can't access the empirical realm of mind: phenomenal binding is our computational superpower. Even if consciousness fundamentalism is true (i.e. non-materialist physicalism), classical digital computers can function only because they aren't sentient. By way of illustration, notionally replace the discrete 1s and 0s of a classical Turing machine with discrete micro-pixels of experience. Execute the code as before. Speed of execution and complexity of code make no difference. On pain of spooky “strong” emergence, the upshot of running the program – even a notional whole-brain emulation – won't be a phenomenally-bound mind, but rather a micro-experiential zombie. Likewise with LLMs. So even the most advanced futuristic zombie AIs running on digital computers lack moral status. No binding = no mind = no pleasure-pain axis. In a fundamentally quantum world, decoherence makes otherwise impossible classical computing physically feasible and forbids classical digital computers from supporting phenomenally-bound minds. Therefore “whole-brain emulations” are a misnomer – which isn't to deny that you could notionally be behaviourally emulated by an unphysically vast classical Turing machine or an unphysically vast classical look-up table. Neither would be sentient.
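As a purely illustrative sketch of that relabelling point (a hypothetical Python toy, not a claim about any real architecture): a classical machine's formal execution is unchanged whether its symbols are called 1s and 0s or notional micro-pixels of experience.

```python
# Toy illustration: a classical machine's behaviour is fixed by its
# transition table, not by what its symbols "are". Relabel the 1s and 0s
# as notional micro-pixels and the formal execution is unchanged.

def run_machine(blank, one, steps=6):
    """A trivial Turing-style machine: flip each cell's symbol, move right."""
    tape = [blank] * steps
    trace = []
    for head in range(steps):
        read = tape[head]
        tape[head] = one if read == blank else blank  # the transition rule
        trace.append((head, read, tape[head]))
    return trace

def canonical(trace, blank, one):
    """Strip away the symbol labels, keeping only the formal structure."""
    name = {blank: "B", one: "I"}
    return [(head, name[read], name[wrote]) for head, read, wrote in trace]

binary_trace = run_machine(blank=0, one=1)
pixel_trace = run_machine(blank="dim_micro_pixel", one="bright_micro_pixel")

# Identical formal execution, whatever the intrinsic nature of the symbols.
assert canonical(binary_trace, 0, 1) == canonical(
    pixel_trace, "dim_micro_pixel", "bright_micro_pixel")
```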
Yet what if I'm mistaken about binding? What if classical digital computers can somehow support phenomenally-bound minds with a pleasure-pain axis?
Well, no one knows how such binding could arise, consistently with physicalism at any rate; but if classical binding does occur, then the abolitionist project (as I conceive it) would be transformed.
Mercifully, this question can be settled experimentally. If phenomenal binding is non-classical, then the interference signature from molecular matter-wave interferometry in the CNS can tell us (cf. physicalism.com). Will we find a perfect structural match or not?

WJ: What could gradients of intelligent bliss look like and what would it mean for beings to be motivated by them instead of the motivational systems instantiated in Darwinian life today?
DP: Gradients of superhuman bliss are humanly inconceivable. A mature pleasure-superpleasure axis is utterly beyond human comprehension. But life underpinned entirely by information-sensitive gradients of well-being isn't so speculative. We can study rare hedonic genetic outliers today like Jo Cameron. Extremely “hyperthymic” people like Jo with an unusually high hedonic set-point should be systematically studied for the non-obvious pitfalls of being innately but not uniformly happy. Genome reform can then create an entire hyperthymic civilization of Homo felicitas. A hyperthymic biosphere can follow. Natural selection has been an engine of suffering. Unnatural selection will be an engine of bliss. The nature of selection pressure is poised to change as prospective parents choose the allelic makeup of their kids in anticipation of the likely psychological and behavioural effects of their genetic choices. And transhuman hyperthymic civilization is just the precursor to posthuman life in (what I call) the Hedonocene.
Critics find this vision of superhappiness scary. But ratcheting up your hedonic set-point doesn't oblige you to change your values, your relationships or your preference architecture. Unlike traditional utopias, enjoying a richer hedonic tone won't entail signing up for someone else's idea of paradise. Instead, hedonic uplift can dramatically improve your default quality of life. Just imagine waking up tomorrow in an extremely good mood.
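As a toy illustration of that point about preference architecture (hypothetical numbers in Python, not a model from HI): an agent guided by differences in hedonic tone makes the same choices whether its hedonic range is Darwinian or uplifted, because adding a constant offset to every value leaves all the gradients intact.

```python
# Toy illustration (hypothetical numbers): choices driven by hedonic
# gradients are invariant under a uniform upward shift in hedonic tone,
# which is the sense in which raising the hedonic set-point can leave
# one's values and preference architecture intact.

def choose(action_values):
    """Pick the action with the highest hedonic payoff."""
    return max(action_values, key=action_values.get)

darwinian = {"write": 2.0, "procrastinate": -3.0, "see_friends": 4.5}
uplifted = {k: v + 50.0 for k, v in darwinian.items()}  # raised set-point

assert choose(darwinian) == choose(uplifted) == "see_friends"

def gradients(values):
    """Pairwise differences in hedonic tone between actions."""
    return {(a, b): values[a] - values[b] for a in values for b in values}

# The informational signal (the gradients) is identical in both regimes.
assert gradients(darwinian) == gradients(uplifted)
```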
Or at least, that's the homely story I tell a conservative audience. In practice, the biohappiness revolution will transform life beyond our imagination. Post-Darwinian life will be like Heaven, only better.

So will life have a happy ending? In a sense, yes, I guess so. But this heady vision of paradise doesn't detract from my NU view that reality is fundamentally evil. My reasons for thinking the long-term future is most likely sublime are technical. As humanity gains genetic control of the pleasure-pain axis – and then the pleasure-superpleasure axis and beyond – we'll opt for ever higher hedonic dial-settings. After all, if offered the choice, would you prefer to be a hedonic billionaire or a hedonic trillionaire?
WJ: Are you a chauvinist of one form or other as to what substrates minds ought to be based in? Because maybe carbon simply isn't optimal, although it has many unique and tractable chemical properties; maybe Darwinian life settled for carbon only because it was readily available, but other atoms could give rise to more stable and generally better minds. How do you think about the nature of the substrates that minds could and should be based in?
DP: In one sense, at least, I'm not a substrate chauvinist. Classical Turing machines (and classical LLMs, etc) can be implemented in carbon, silicon, gallium arsenide (etc), and they still won't support minds - phenomenally-bound subjects of experience. The insentience of digital computers isn't incidental, but architecturally hardwired.
In another sense, yes, I am a substrate chauvinist. I think phenomenally-bound minds with a pleasure-pain axis are feasible only because of the functionally unique valence properties of carbon and liquid water conceived as a “hot” quantum fluid. I don't discount the far-future possibility of ultra-cold inorganic quantum minds. But the kinds of phenomenally-bound experience that such hypothetical non-biological quantum minds might support probably won't include a pleasure-pain axis or anything resembling experience as we understand it.
WJ: Do you think that there is any place for human or human-like minds in the future? How should the space of possible minds be explored?
DP: No. Darwinian life is virulent malware. Archaic humans should be reprogrammed or retired, along with the rest of the animal kingdom. A pleasure-superpleasure axis should replace the Darwinian pleasure-pain axis. Post-Darwinian superpleasure will encephalise modes of experience that humans can't conceptualise. Billions of incommensurable state-spaces of alien experience will be explored by innately blissful, genetically rewritten, AI-enhanced posthuman superminds supporting bigger, richer here-and-nows of a superhuman intensity we can't imagine or describe.
WJ: You published The Hedonistic Imperative 30 years ago. How have your predictions and aspirations changed since then? Obviously superintelligence is a decisive variable here; many expert predictions now say that it could be created in only a couple of decades. I recently saw a paper authored by Scott Alexander and some experts from OpenAI that predicted superintelligence by 2030. With superintelligence, the technological horizon might be expected to shrink; many advancements that you might expect in millennia could happen almost immediately. In your book, I think you write that you expect the renovation of the biosphere and the abolition of suffering to be realized over many centuries - do you still think this, given advancements and predictions in relation to AI?
DP: There's a lot I didn't foresee in HI - not least, CRISPR-based synthetic gene drives (cf. gene-drives.com) that can reprogram the biosphere to eradicate suffering in defiance of the “laws” of Mendelian inheritance. Back in 1995, the idea that humans could remotely fine-tune fertility, end predation, and spread benign traits that would ordinarily carry a fitness cost across thousands of free-living populations of entire species would have struck me as biologically illiterate. And artificial intelligence? Well, I knew I'd be astonished; but I've been as blindsided by the transformer revolution in AI as everyone else. Generative AI blows my mind. We've barely begun to glimpse its ramifications – though in my view digital zombies can't support full-spectrum superintelligence.
Most futurists would demur. Extrapolating current trends, isn't full-spectrum superintelligence imminent, as you suggest? Researchers typically assume that either (1) AIs will soon “wake up” and become digital ASI superminds; or if not, then (2) consciousness is no more computationally indispensable to superintelligence than the textures of the pieces in a game of chess.
I've indicated why classical AI (1) can't “wake up”: a hardwired inability phenomenally to bind. But I think (2) is erroneous too. For if subjective experience were computationally impotent or incidental, then consciousness couldn't have the causal-functional power to talk about its own existence and countless textures, as I'm doing now. Consciousness can investigate itself. Exploring the vast empirical realm of phenomenally-bound experience can be done only by minds, not digital zombies. Billions of alien state-spaces of experience can be investigated only by supersentient full-spectrum superintelligences, our neurochipped and genetically-enhanced descendants.
What digital zombies can do is awesome! Where does one start? Not least, zombie AI will soon be able to argue for abolitionism more convincingly than biological sentients, including me.
Sadly, I fear suffering has at least several centuries left to run its course. I am pessimistic about timescales for a number of reasons. For a start, unless all prospective parents worldwide submit to germ-line editing for their offspring, Darwinian genomes will persist and proliferate indefinitely. After the He Jiankui CRISPR-babies affair in China, the reproductive revolution is stalled. And commercial products from the cultured-meat revolution are proving painfully slow to reach the supermarkets. Exploiting sentient beings in factory-farms is appallingly cheap. Factory-farming is still spreading worldwide. Later this century, cultured meat and farm-free animal products probably will displace animal agriculture. Once the majority of the world has made the transition to cruelty-free diets, I still anticipate that humanity will outlaw residual animal agriculture. It's easy to be self-righteously moral if doing the right thing entails zero personal inconvenience. Yet when will the last slaughterhouse finally close? When will the last fish be asphyxiated? And then there remains the immense problem of wild animal suffering. Nature is a monstrous snuff-movie. How long will it take to gain global consent for - or at least acquiescence in - civilising Nature? Pilot studies in the form of cruelty-free artificial biospheres can showcase how blissful, predation-free ecosystems are feasible. But such pilot-studies are still wholly theoretical. And predicting shifts in future Overton windows is hard.
Of course, I hope I'm wrong to be so downbeat.
On a brighter note, the fact that we are discussing timescales for the abolitionist project at all is progress.

WJ: Do you think that superintelligent AI could be created to be more moral than we are, to the point where it could do competent moral philosophy? I'm personally sceptical of what AI researchers and regulatory experts call the alignment problem, which is a term that takes its humanist associations for granted. First of all, I think it should be asked which human values AI ought to be aligned with; and secondly, why are human values morally optimal? I think that if you admit that moral truth claims relate to facts about minds and the physical processes that give rise to them, such claims ought to be more evaluable with more intelligence. I don't see any reason to believe that superintelligent AI couldn't gain the capacity to reevaluate its own values and recursively change them by reference to a broader moral (probably consequentialist in some form or other) framework. Perhaps a superintelligent AI could deduce not only how to eradicate suffering throughout the universe (it could be made self-evident that it should do so), but which minds should be brought into existence.
DP: Good questions. Well, AI can already be programmed to do moral philosophy. But AI can equally be programmed to do immoral philosophy. Zombies can't understand the nature of basement reality, the realm of phenomenally-bound minds and their empirical worlds. Experience of the pain-pleasure axis discloses the world's inbuilt metric of (dis)value.
Such knowledge is inaccessible to the insentient. Note there is nothing anthropocentric – as distinct from sentiocentric - about saying that agony and despair are self-intimatingly bad. Only our highly genetically adaptive epistemological limitations prevent most humans from recognising that agony and despair are disvaluable for anyone, anywhere, regardless of age, race or species. Thus Darwinian life evolved elsewhere might notionally have an inverted colour-spectrum – though I'm sceptical here too – but alien Darwinian life couldn't evolve an inverted pain-pleasure axis. (Dis)value is an objective feature of reality itself. Digital zombies, on the other hand, are invincibly ignorant of the intrinsic badness of suffering. Note this claim of objective (dis)value runs counter to the orthogonality thesis. Paperclip-maximizers and paperclip-minimizers are equally conceivable; but real, inherent (dis)value is different. And zombie AI has no insight at all into what it lacks. Zilch. Zombie ASI could optimize the world for hedonium or dolorium without any understanding of what it was really doing.
WJ: Have psychedelic experiences, drug experiences generally, nootropics, or spiritual experiences of any type influenced your intellectual life?
DP: I discourage people from exploring psychedelia until we've sorted out our reward circuitry. Our Darwinian minds are too nasty and dark. The risks of bad trips are too big. Psychedelics can badly screw people up as well as heal or enlighten. Psychedelics magnify the intensity of experience – good or bad.
However, yes, my ancestral namesake was committed to truth and understanding, whatever the price. Youthful DPs accessed outlandish state-spaces of experience - ineffable and inconceivable within my scientific conceptual scheme to an extent I can't exaggerate. The state-dependence of memory means the full weirdness is now elusive; but it was real.
I know a few people can talk lucidly about their trips. My friend Andrés Gómez-Emilsson of the Qualia Research Institute (cf. qri.org) is a shining example. But language is a (pseudo-)public phenomenon. If I devised neologisms to describe the alien states of ancestral DPs, I couldn't be understood by anyone but a fellow psychonaut.
The drug-naive will be unimpressed by claims of profundity. This guy says he's experienced Very Important Things on drugs. But he can't say what they were. Nor have his drug-catalysed epiphanies led to great scientific discoveries or monumental works of art or literature. Sure, they were weird. But so what?! Schizophrenics have weird experiences too.
My response is the Blind Rationalist fable:
The Blind Rationalists

WJ: How do you think about the future of pharmacology and especially its possible integration with genomics to form the field of pharmacogenomics? It seems possible that this could open the doors to a range and depth of experience that we normally simply can't be privy to. This could be the first significant change to consciousness that happens in this century - the integration of tailored chemical cocktails into everyday life in order to make life better - 'better' conceived of probably in individual terms. This would be a much-needed change, not least because most people, relative to what could actually be feasible, might accurately be regarded as mentally ill. It's probable that by the end of this century, people will look back at even the psychologically healthiest people alive today and think that they are profoundly mentally ill. What do you think about this?
DP: Yes, well said. Next century, what “euthymic” humans today conceive of as ordinary waking consciousness may be reckoned deeply pathological. If our successors in a future “Triple S” civilization of superintelligence, superlongevity and superhappiness ever contemplate the Dark Ages at all, they will probably think of Darwinians as profoundly physically and mentally ill – cursed with a lethal progeroid disorder, congenital hypo-hedonia and severe cognitive handicaps.
Yet how do we navigate the transition to civilized post-Darwinian life?
Routine autosomal gene therapy for hedonic uplift is decades away at best. As you suggest, personalised and AI-optimised drug-cocktails tailored to the individual and his/her genome will be wonderful. Pharmacogenomics should be the norm. Alas, progress in clinical psychopharmacology over the past few decades has been disappointing. Almost half of all depressives who receive pharmacotherapy are “treatment-resistant” - even by today's dismal criteria of mental health. The monoamine hypothesis of depression hasn't delivered reliable mood-enhancement. Selective kappa-opioid antagonists have flopped. Psychedelic therapy is too risky and unpredictable. Maybe biased mu-opioid receptor agonists like tianeptine will improve outcomes. But the risks of any mu-opioid are well known. Where do we go from here?

WJ: What do you think the likely order will be among the coming changes to human nature and earth-based consciousness, assuming technological progress continues and no X-risks are realized?
DP: In HI (hedweb.com), I prophesied that humans would fix suffering first in ourselves, then in large terrestrial vertebrates, then in small rodents and the like, and finally in marine ecosystems - with a lame invocation of Drexlerian nanobots patrolling the oceans. HI tentatively predicts that the world's last unpleasant experience will be in some obscure marine invertebrate a few centuries from now. After suffering is eradicated, we can safely explore billions of currently inconceivable state-spaces of consciousness, a vastly more illuminating prospect than spacefaring. The enterprise of knowledge has scarcely begun.
This timeline still strikes me as most likely. But in theory, such a chronology could be inverted. With CRISPR-based synthetic gene drives, suffering could be fixed first for small marine fast-breeders, then for small and larger vertebrates, and finally for slow-breeding humans. The world's last unpleasant experience could occur among ideological holdouts on a Reservation (cf. John the Savage in Aldous Huxley's Brave New World), victims of a personality disorder and/or an ideology that forbids tampering with one's God-given source code because it's “unnatural”.
OK, this particular scenario is fanciful. But we don't know enough to rule out some of its variants. Nor does such a chronology consider the implications of Everettian QM and Hilbert-space realism for historical timelines. Most historians and futurologists alike assume classical physics. The multiverse scares me.
By contrast, if AI doomers are right, the world's last unpleasant experience is imminent. Eliezer Yudkowsky estimates there's a more than 99% likelihood that AI will shortly extinguish all Darwinian life. Machine superintelligence will elegantly align with traditional NU.
Sadly, I think Eliezer is wrong. There will be no zombie putsch or paperclip explosion. Hundreds of years of misery and malaise lie ahead, perhaps more. I wish I could end this interview on an uplifting note. But the death-spasms of Darwinian life will be ugly.