First published: The Philosophy Forum
Date: April 2021

Guest Speaker David Pearce on The Philosophy Forum

PP: As earlier announced, we have invited transhumanist philosopher David Pearce to be a guest speaker here and he has kindly accepted. We are hoping to learn more about his work and the very interesting field in which he's involved.

David is a key figure in the transhumanist movement and a co-founder of Humanity+ (formerly known as The World Transhumanist Association). For those of you who are unsure of the basics of transhumanism, David provides a useful, concise introduction in this video:

The Three Supers of Transhumanism
What is Transhumanism? - The Three Supers with David Pearce

For a more detailed take on David's ideas, the following is helpful:
Transhumanism; How Biotechnology Can Eradicate Suffering

Or check out David's book The Hedonistic Imperative and his website Hedweb.com

Other important thinkers in the broad transhumanist sphere include Ray Kurzweil and James Hughes.

Needless to say, transhumanism is a controversial subject and its status is open to debate. As such, some of us may come to the subject with strong preconceptions/opinions. This is all fine, but please bear in mind that David is contributing time from his busy schedule to help us learn more about his field, and while critique and questioning are welcome, we hope everyone will be measured and respectful in tone.

This thread is intended as an initial AMA (ask me anything) where you can put your questions/critiques to David. Please keep these to a maximum of 250 words. There's no guarantee he'll get around to everyone, but I'm sure he'll make the effort to answer the more interesting posts, some of which may, according to David's prerogative, be separated into threads of their own. So, have at it and I'll let David know this is up and we're ready for him.

* * *
DP: Thank you, Philosophy Forum, for inviting me. I'll be answering questions – and any critical follow-ups! – this week. Please forgive any delay. I'll do my fallible best to respond to everyone.

PP: "David, have you read Jacques Ellul, and if so, what critique do you have against his philosophy of technology? I am unconvinced that technology as it exists today is merely a tool that is used by people, and that people are or even could be in control of the direction in which it develops. What reasons do we have to believe that humanity can achieve these monumental transformations with technology, when we are unable to solve the current problems we face (global warming, overpopulation, famine, etc)?"
(Darthbarracuda)
DP: Most transhumanists are secular scientific rationalists. Only technology (artificial intelligence, robotics, CRISPR, synthetic gene drives, preimplantation genetic screening and counselling) can allow intelligent moral agents to reprogram the biosphere and deliver good health for all sentient beings.
Global warming? There are geoengineering fixes.
Overpopulation? Fertility rates are plunging worldwide.
Famine? More people now suffer from obesity than undernutrition.

I share some of Jacques Ellul's reservations about the effects of technology. But only biotechnology can recalibrate the hedonic treadmill, eradicate the biology of involuntary pain and suffering and deliver a world based on gradients of intelligent bliss:
Sentience Research interview

Jacques Ellul himself was deeply religious. He felt he had been visited by God. Most spiritually-minded people probably feel that transhumanism has little to offer. Perhaps they are right – my own mind is a desolate spiritual wasteland. But science promises the most profound spiritual revolution of all time. Tomorrow’s molecular biology can identify the molecular signatures of spiritual experience, refine and amplify its biological substrates, and deliver life-long spiritual ecstasies beyond the imagination of even the most god-intoxicated temporal-lobe epileptic.
Will most transhumans choose to be rationalists or mystics?
I don’t know. But biotech can liberate us from the obscene horrors and everyday squalor of Darwinian life.

PP: "Star Trek (especially, the Second Generation series) set in the 24th Century seems to embody a version of transhumanism. There is a high level of human well-being, empathy, technology, and so on, on earth as well as on board the Enterprise. In the galaxy, not so much. Do you see technological advances in the next two centuries delivering the conditions of transhumanism, or are you thinking in longer (or shorter) time periods?"
(Bitter Crank)
DP: The future we imagine derives mostly from the sci-fi we remember. Life in the 24th century will not resemble Star Trek. It’s not merely that the “thermodynamic miracle” of life’s genesis means that Earth-originating life is probably alone in our Hubble volume. The characters in Star Trek also have the same core emotions, same pleasure-pain axis, same fundamental conceptual scheme and same default state of waking consciousness as archaic humans. Even Mr Spock is all too human. It’s hokum.

Realistic timescales for transhumanism? Let’s here define transhumanism in terms of a “triple S” civilisation of superintelligence, superlongevity and superhappiness. Maybe the 24th century would be a credible date. Earlier timescales would be technically feasible. But accelerated progress depends on sociological and political developments that reduce predictions to mere prophecies and wishful thinking. In practice, the frailties of human psychology mean that successful prophets tend to locate salvation or doom within the plausible lifetime of their audience. I’m personally a lot more pessimistic about timescales for a mature “triple S” civilisation than most transhumanists. Sorry to be so vague. There are too many unknown unknowns.

"What do you think the chances are of environmental collapse in the next 100 years derailing the necessary technical developments to allow transhumanism?"
Environmental collapse? The only way I envisage collapse might happen is via full-scale thermonuclear war and a strategic exchange between the superpowers. Sadly, this is not entirely far-fetched. Evolution “designed” coalitions of human male primates to wage war against other coalitions of human male primates. I fear we may be sleepwalking towards Armageddon. Note that environmental collapse wouldn’t entail human extinction, though relocating to newly balmy Antarctica (cf. Antarctic civilisation) would be hugely disruptive. Let’s hope these fears are wildly overblown.

"What kind of economic arrangements are most and least likely to advance transhumanist goals? Capitalism is not a good candidate to deliver super well-being to everyone?"
Capitalism? Can a system based on human greed really deliver the well-being of all sentience? I'm sceptical. Free-market fundamentalism doesn’t work. Universal basic income, free healthcare and guaranteed housing are preconditions for any civilised society. Above all, murdering sentient beings for profit must be outlawed. Factory-farms and slaughterhouses are pure evil (cf. Animal agriculture). The cultured meat revolution will presumably end the horrors of animal agriculture. But otherwise, I think some version of the mixed economy will continue indefinitely. Anything that can be digitised soon becomes effectively free. This includes genetic information. The substrates of bliss won’t need to be rationed. In the meantime, preimplantation genetic screening and counselling for all prospective parents would be hugely cost-effective – especially in the poorest countries. I hope all babies can be designer babies rather than today’s reckless genetic experiments.

PP: "Two thrusts of research in the last half century have been the thorough chronicling of how human rationality is riddled with irremovable biases and how extremely powerful artificial intelligences may nevertheless have incommensurable value systems with humans. It seems we are broken tools that make broken tools; we evaluate wrongly and thus teach machines to value wrongly. What role, if any, do you see a re-evaluation of rationality and decision making logics playing in the transhumanist project? How ought we go about that? And how do we get around the problem of using broken tools to make only more powerful broken tools?
(Fdrake)
DP: "Irremovable human biases"?
Yes. One example is status quo bias. A benevolent superintelligence would never have created a monstrous world such as ours. Nor (presumably) would a benevolent superintelligence show status quo bias. But the nature of selection pressure means that philosopher David Benatar’s plea for voluntary human extinction via antinatalism (Better Never To Have Been (2008)) is doomed to fall on deaf ears. Apocalyptic fantasies are futile too (cf. DP on transhumanism).
So the problem of suffering is soluble only by biological-genetic means.

The orthogonality thesis?
All biological minds have a pain-pleasure axis. The pain-pleasure axis discloses the world’s inbuilt metric of (dis)value. Thus there are no minds in other life-supporting Hubble volumes with an inverted pleasure-pain axis. Such universality of (dis)value doesn’t mean that humans are all closet utilitarians. The egocentric illusion has been hugely genetically adaptive for Darwinian malware evolved under pressure of natural selection; hence its persistence. Yet we shouldn’t confuse our epistemological limitations with a deep metaphysical truth about the world. Posthuman superintelligences will not have a false theory of personal identity. This is why I’m cautiously optimistic that intelligent agents will phase out the biology of suffering in their forward light-cone. Yes, we may envisage artificial intelligences with utility functions radically different from biological minds (“paperclippers”). But classical digital computers cannot solve the phenomenal binding/combination problem (cf. The Binding Problem: interview). Digital zombies can never become full-spectrum intelligences, let alone full-spectrum superintelligences. AI will augment us, not supplant us.

Better tools of decision-theoretic rationality?
Compare the metaphysical individualism presupposed by the technically excellent LessWrong FAQ (cf. Less Wrong FAQ) with the richer conception of decision-theoretic rationality employed by a God-like full-spectrum superintelligence that could impartially access all possible first-person perspectives and act accordingly (cf. Open/empty/closed individualism).
So how can humans develop such tools of God-like rationality?
As you say, it’s a monumental challenge. Forgive me for ducking it here.

PP: "Given the strong ethics present in the US medical field and especially US colleges, how do you see the developments (legalistically) for transhumanism for the following years? For example, Neuralink, by Elon Musk, hopes to address something equivalent albeit not explicitly stated to the goal of transhumanism with brain machine interfaces to keep pace with ever more intelligent computers. What are your thoughts about this endeavor and hurdles it faces?"
(Shawn)
DP: Challenges for transhumanism?
Where does one start?! Here I’ll focus on just one. If prospective parents continue to have children “naturally”, then pain and suffering will continue indefinitely. All children born today are cursed with a terrible genetic disorder (aging), a chronic endogenous opioid addiction and severe intellectual disabilities that education alone can never overcome. The only long-term solution to Darwinian malware is germline gene-editing. Unfortunately, the first CRISPR babies were conceived in less than ideal circumstances. He Jiankui and his colleagues were trying to create cognitively enhanced humans with HIV-protection as a cover-story (cf. CRISPR Twins Had Brains Altered). All babies should be CRISPR babies, or better, base-edited babies. No responsible prospective parent should play genetic roulette with a child’s life. Sadly, the reproductive revolution will be needlessly delayed by religious and bioconservative prejudice. If a global consensus existed, we could get rid of suffering and disease in a century or less. In practice, hundreds if not thousands of years of needless pain and misery probably lie ahead.

Neuralink? It’s just a foretaste. If all goes well, everyone will be able to enjoy “narrow” superintelligence via embedded neurochips – the mature successors to today’s crude prototypes. Everything that programmable digital zombies can do, you’ll be able to do – and much more. Huge issues here will be control and accountability. I started to offer a few thoughts, but they turned into platitudes and superficial generalities. “Narrow” superintelligence paired with unenhanced male human nature will be extraordinarily hazardous.

Super well-being?
Let’s say, schematically, that our human hedonic range stretches from -10 to 0 to +10. Most people have an approximate hedonic set-point a little above or a little below hedonic zero. Tragically, a minority of people and the majority of factory-farmed nonhuman animals spend essentially their whole lives far below hedonic zero. Some people are mercurial, others are more equable, but we are all constrained by the negative-feedback mechanisms of the hedonic treadmill. In future, mastery of our reward circuitry promises e.g. a hedonic +70 to +100 civilisation – transhuman life based entirely on information-sensitive gradients of bliss. Currently, we can only speculate on what guise such superhuman well-being will take, and how it will be encephalised. What will transhumans and posthumans be happy “about”? I don’t know – probably modes of experience that are physiologically inaccessible to today's humans. But one of the beauties of hedonic recalibration is that (complications aside) it’s preference-neutral. Who wouldn’t want to wake up in the morning in an extremely good mood – and with their core values and preference architecture intact? Aristotle’s “eudaimonia” or sensual debauchery? Mill’s “higher pleasures” or earthy delights? You decide. Crudely, everyone’s potentially a winner with biological-genetic interventions. Compare the zero-sum status-games of Darwinian life. Unlike getting rid of suffering, I don’t think superhappiness is morally urgent; but post-Darwinian life will be unimaginably sublime.

What about hedonic uplift for existing human and nonhuman animals prior to somatic gene-editing? Well, one attractive option is ACKR3 receptor blockade (cf. ACKR3), perhaps in conjunction with selective kappa opioid receptor antagonism. Enhancing “natural” endogenous opioid function and raising hedonic set-points is vastly preferable to taking well-known drugs of abuse that typically activate the negative feedback mechanisms of the CNS with a vengeance. An intensive research program is in order. Pitfalls abound.

In the long run, however, life on Earth needs a genetic rewrite. Pharmacological stopgaps aren't the answer.

PP: "Seems a little far fetched when we can't get humans to apply available technologies for the benefit of humanity; not least, drill for limitless magma heat energy, to produce massive base load, clean electrical power, for carbon capture and sequestration, hydrogen fuel, desalination/irrigation and recycling, but a miracle chip in my neo-frontal cortex is going to usher in a subjective sense of paradise? Unless we get the utilities sorted out, your transhumanist paradise will be objectively unliveable - no matter how blissed out you are!"
(Counterpunch)
DP: “A miracle chip?”
Transhumanists don’t advocate intracranial self-stimulation or unvarying euphoria. For a start, uniform bliss wouldn’t be evolutionarily stable; wireheads don’t want to raise baby wireheads. Transhumanists don’t advocate getting “blissed out”. Instead, we urge a biology of information-sensitive gradients of well-being. Information-sensitivity is essential to preserving critical insight, social responsibility and intellectual progress.

PP: "I just want to say to David that (anyone’s quarrels with transhumanist means aside) I’m very pleased to see a professional like him touting the right ends, especially after all the pushback I’ve gotten on this forum for supporting the radical idea that maybe all that really matters morally speaking is reducing suffering any kind. I guess if this is to be a question, it would be: has he gotten any pushback on that in the professional sphere, and what has that been like? Oh and also: does he know a good term for anti-hedonist views in general? Because I feel like I’m sorely lacking any catch-all term that doesn’t name something more specific than just that."
(Pfhorrest)
DP: I’m sad to hear of the pushback you’ve received on the forum. Instead of saying one is a “negative utilitarian”, perhaps try “secular Buddhism” or “suffering-focused ethics” (cf. Suffering-Focused Ethics by Magnus Vinding). I sometimes simply say that I would “walk away from Omelas”. No amount of pleasure morally outweighs the abuse of even a single child: A Question For David Pearce. If a genie made you an offer, would you harm a child in exchange for the promise of millions of years of indescribable happiness? I'd decline – politely (I'm British).

Academic pushback? I guess the average academic response isn’t much different from the average layperson’s response. An architecture of mind based entirely on information-sensitive gradients of well-being simply isn’t genetically credible – whether for an individual or a civilisation, let alone a global ecosystem (cf. Gene-drives.com). At times my imagination fails too. Of course there are exceptions – but the academics who’ve directly been in touch to offer support are almost by definition atypical.

A fairly common critical response would probably be Professor Brock Bastian's The Other Side of Happiness: Embracing a More Fearless Approach to Living (2018):
DP versus Brock Bastian

PP: "David, a current theory is that we can only experience joy and love because of the experience of their opposites, misery and hatred. So if you were born in a state of bliss and have never known otherwise, then it wouldn't be bliss. What do you think about this theory?"
(TaySan)
DP: The idea that pleasure and pain are largely if not wholly relative is seductive. It’s still probably the most common objection to the idea of a civilisation based entirely on gradients of bliss. However, consider the victims of life-long pain and depression. Some chronic depressives can’t imagine what it’s like to be happy. In some severe cases, chronic depressives don’t even understand what the word “happiness” means – they conceive of happiness only in terms of a reduction of pain. Now we wouldn’t (I hope) claim that chronic depressives can’t really suffer because they’ve never experienced joy. Analogous syndromes exist at the other end of the Darwinian pleasure-pain axis. Unipolar euphoric mania is dangerous and extraordinarily rare. Yet there is also what psychologists call extreme "hyperthymia". Hyperthymics can be very high functioning. My favourite case-study is fellow transhumanist Anders Sandberg (“I do have a ridiculously high hedonic set-point”). Anders certainly knows he is exceedingly happy – although unless pressed, he doesn’t ordinarily talk about it. He is also socially responsible, intellectually productive and exceptionally smart. In common with depression and mania, hyperthymia has a high genetic loading. Gene editing together with preimplantation genetic screening and counselling for all prospective parents offers the prospect of lifelong intelligent happiness for future (trans)humans. For sure, creating an entire civilisation of hyperthymics will be challenging. Not least, prudence dictates preserving the functional analogues of depressive realism – at least in our intelligent machines. But unlike ignorance, known biases can be corrected.

I’m a dyed-in-the-wool pessimist by temperament. But for technical reasons, I suspect the long-term future of sentience lies in gradients of sublime bliss beyond the bounds of human experience.

PP: "Utopias, visionarism, futurism, and even presentism and looking at the past are all riddled with the fallacy of reducing the concept of humanity by stripping it of its diverse nature, and of its further diversification. How does Transhumanism handle this concept, the concept that humanity is not at all a monolithic homogeneous substance made up of individuals of the same social and personal psychology, with the same or similar needs, wants and desires, but a mass conglomerate of an ever-growing trend of diversification?"
(God must be atheist)
DP: Yes, humans are diverse – by some criteria. On the other hand, we tend to share the same core emotions, same pleasure-pain axis, same sleep-wake cycles, same progression of youth and aging, same kind of egocentric world-simulation (etc) as our primate ancestors. Above all, sentient beings are prone to suffer. Perhaps posthumans will find humans as diverse as we find members of an ant colony. Either way – and most relevant to your question – no one values the experience of unbearable agony or suicidal despair – or even plain boredom. Even ostensible counter-examples to the primacy of pleasure over pain, such as masochism, simply reinforce the sovereignty of the pleasure-pain axis. Masochists love the release of endogenous opioids as much as the rest of us. We’d all be better off if experience below hedonic zero were replaced by a civilised signalling system. We’d all be better off with a motivational architecture based entirely on information-sensitive gradients of well-being. Hedonic recalibration and uplift can radically enhance everyone’s quality of life.

Naively, Heaven might sound monotonous compared to the torments of Hell, or even compared to everyday Darwinian purgatory. In practice, genome editing promises a richer diversity of genes and allelic combinations than is possible under a regime of natural selection. The diversification of sentience has scarcely begun. For example, transhumans will be able to access billions of exotic state-spaces of consciousness as different from each other as dreaming is from waking consciousness. What they’ll have in common is that they’ll all be generically wonderful.

PP: "David, hope you are well. Anti-natalism is regularly discussed on this forum, with members having different degrees of sympathy towards it. You describe yourself as a "soft anti-natalist". What is your basis for this? And do you buy into Benatar's asymmetry theory? (which suggests that the pain of a pinprick would make it so that a life otherwise full of pleasure would have been better off not being started)."
(Down The Rabbit Hole)
DP: Yes, I accept a version of the asymmetry theory. The badness of suffering is self-intimating, whereas there is nothing inherently wrong with inexistence. And yes, I’m a “soft” antinatalist. Bringing pain-ridden life into the world without prior consent is morally indefensible. Nor would I choose to have children on the theory that the good things in life typically outweigh the bad. Enduring metaphysical egos are a fiction. That said, I don’t campaign for antinatalism. “Hard” antinatalism is not a viable solution to the problem of suffering. Staying childfree just imposes selection pressure against any predisposition to be an antinatalist. As far as I can tell, the selection pressure argument against “hard” antinatalism is fatal (cf. Do you agree with antinatalism?).

Contrast the impending reproductive revolution of designer babies. As prospective parents choose the genetic makeup of their offspring in anticipation of the behavioural and psychological effects of their choices, the nature of selection pressure will change. Post-CRISPR and its successors, there will be intensifying selection pressure against our nastier alleles and allelic combinations at least as severe as selection pressure against alleles for, say, cystic fibrosis. Imagine you could choose the approximate hedonic set-point and hedonic range of your future children. What genetic dial-settings would you choose?

The Pinprick Argument? Recall that negative utilitarians want to abolish all experience below hedonic zero. So if any apparently NU policy-proposal causes you even the slightest hint of disappointment – for example sadness that we won’t get to enjoy a glorious future of superhuman bliss – then other things being equal, that policy-proposal is not NU. So NUs can and should support upholding the sanctity of life in law, forbidding chronic pain-specialists from euthanizing patients without prior consent, and many other political policy-prescriptions that are naively un-utilitarian. Not least, NUs are not plotting Armageddon (well, most of us anyway: Solve Suffering By Blowing Up The World?)

PP: "Hello, and thank you for participating here. Regarding the three “supers” mentioned, I’m curious about how the three are interrelated. Particularly super-intelligence and super-wellbeing. Generally speaking, intelligence means learning/knowing what is true. However, truth is often unpleasant, and would therefore seem to detract from one’s wellbeing, at least occasionally."
(Pinprick)
DP: Thank you.
What is the relationship between superintelligence and super-wellbeing?
It’s tricky. The best I can manage is an analogy. Consider AlphaGo. Compared to humble grandmasters, AlphaGo is a superintelligence. Nonetheless, even club players grasp something about the game of Go that AlphaGo doesn’t comprehend. The “Go superintelligence” is an ignorant zombie. I don’t know how posthuman superintelligences will view the Darwinian era that spawned them. Maybe posthumans will allude, rarely, to Darwinian life in the way that contemporary humans allude, rarely, to the Dark Ages. Most humans know virtually nothing about the Dark Ages beyond the name – and have no desire to investigate further. What's the point? Maybe superintelligences occupying a hedonic range of, say, +80 to +100 will conceive hedonically sub-zero Darwinian states of consciousness by analogy with notional states below hedonic +80 – their functional equivalent of the dark night of the soul, albeit unimaginably richer than human “peak experiences”. The nature of Sidgwick's "natural watershed" itself, i.e. hedonic zero, may be impenetrable to them, let alone the nature of suffering. Or maybe posthuman superintelligences will never contemplate the Darwinian era at all. Maybe they’ll offload stewardship of the accessible cosmos to zombie AIs. On this scenario, programmable zombie AIs will ensure that sub-zero experience can never re-occur within our cosmological horizon without any deep understanding of what they’re doing (cf. AlphaGo). In any event, I don’t think posthuman superintelligences will seek to understand suffering in any full-blooded empathetic sense. If any mind were to glimpse even a fraction of the suffering in the world, it would become psychotic.

Whatever the nature of mature superintelligence, I think it’s vital that humans and transhumans investigate the theoretical upper bounds to intelligent agency so we can learn our ultimate cosmological responsibilities. Premature defeatism might be ethically catastrophic.

"Also, you seem to advocate for essentially the removal all suffering. Much of our suffering derives from our biological needs (food, sleep, etc.). So would these needs need to be removed in order to eliminate suffering? If so, this too would seem to detract from the goal of super-wellbeing, because much of our happiness is rooted in pursuing, and hopefully meeting, these needs. Basically what I’m saying is that if you eliminate our biological needs, you also risk eliminating our very will to live. What would our motivation for life be without experiencing desire? It’s like Buddhism without the concept of nirvana, enlightenment, rebirth, etc. A state of eternal contentment and complacency seems to be what the outcome would look like. Do you feel this would be more desirable than our current state of affairs? Thank you for your time."
Suffering and desire?
Buddhists equate the two. But the happiest people tend to have the most desires, whereas depressives are often unmotivated. Victims of chronic depression suffer from “learned helplessness” and behavioural despair. So the extinction of desire per se is not nirvana. Quite possibly transhumans and posthumans will be superhappy and hypermotivated. Intuitively, for sure, extreme motivation is the recipe for frustrated desire and hence suffering. Yet this needn’t be so if we phase out the biology of experience below hedonic zero. Dopaminergic “wanting” is doubly dissociable from mu-opioidergic “liking”; but motivated bliss is feasible too. If you’ll permit a chess analogy, I always desire to win against my computer chess program. I’m highly motivated. But I always lose. Such frustrated desire never causes me suffering unless my mood is dark already. The same is feasible on the wider canvas of Darwinian life as a whole if we reprogram the biosphere to eradicate suffering. Conserving information-sensitivity is the key, not absolute position on the pleasure-pain axis:
Recalibrating the hedonic treadmill

PP: "Hello David! As has been said, thank you kindly for sharing some of your thoughts and time here. Just 2 quick questions that relate to the 3rd Super: can you please define the following concepts that you used to describe your thesis: 1. "Involuntary Suffering" 2. "Pro-Social". I am trying to parse both the practical and theoretical implications of those concepts, so as to understand Transhumanism a bit more. Thank you in advance."
(3017amen)
DP: Thank you.
Yes, transhumanists aspire to end involuntary suffering. It’s tempting to be lazy and just say “end suffering”, but the “involuntary” is worth stressing. No one is credibly going to force you to be happy. Many of the objections one hears to the abolitionist project focus on the strange suspicion that someone, somewhere, intends to engineer coercive bliss and force the critic to be cheerful. As it happens, I cautiously predict that eventually all experience below hedonic zero will disappear into evolutionary history (cf. Transhumanism in the Daily Express). But prediction is different from proscription. Perhaps the thorniest consent issue will be hedonic default settings. When the biology of suffering becomes technically entirely optional – and it will – should tradition-minded parents be legally allowed to have pain-ridden children “naturally” via the cruel genetic crapshoot of sexual reproduction? And must their children wait until they are eighteen (or whatever the legal age of majority) to be cured? Eventually, creating malaise-ridden babies like today’s Darwinian malware may seem to be outright child abuse. This prediction needs to be expressed with delicacy lest it be misunderstood.

Pro-social?
Some conceptions of a superintelligence resemble a SuperAsperger. Ill-named “IQ” tests measure only the “autistic” component of general intelligence. “Superintelligence” shouldn’t be conceived as some kind of asocial singleton. Recall how human evolution was driven in part by our prowess at mind-reading, cooperative problem-solving and social cognition. True, contemporary accounts of posthuman superintelligence always reveal more about the cognitive limitations and preoccupations of the authors than they do about posthuman superintelligence. But full-spectrum superintelligences won’t resemble autistic savants – or “paperclippers”:
Autistic Superintelligence?

PP: "...your vision of a transhumanism utopia seems something in a far off future. Presumably, from the time now until that future time, billions of people will have lived and suffered. That being said, wouldn't David Benatar and antinatalism's argument in general be the best alternative in terms of suffering prevented? Basically, if you prevent the suffering in the first place, you have cut off the suffering right from the start. And as Benatar's asymmetry shows, no "person" suffers by not being born to experience the "goods" of life. It's a win/win it seems."
(Schopenhauer1)
DP: I share your bleak diagnosis of Darwinian life:
Soft and Hard Antinatalism
But David Benatar and other “hard” antinatalists simply don’t get to grips with the argument from selection pressure. Antinatalists can’t hope to win:
God's Little Rabbits
See too my response to Down The Rabbit Hole above.

By contrast, for the first time a few mainstream publications are starting to realise that genome editing makes a world without suffering possible:
A World Without Pain
Like you, I find it dispiriting to know that billions of sentient beings will suffer and die before the transhumanist vision can come to pass. I just can’t think of any sociologically credible alternative.

PP: "Thank you for your response! If you don't mind, I have a few followup questions for clarification: 'An architecture of mind based entirely on information-sensitive gradients of well-being simply isn’t genetically credible' David Pearce I'm unclear what you mean here by "genetically credible"
(Pfhorrest)
DP: Each of your points deserves a treatise. Forgive me for hotlinking.
Two classes of humans, enhanced and unenhanced?
Yes, it’s a possible risk. But the cost of genome sequencing and editing is collapsing. Likewise computer processing power. The biggest challenge won’t be cost, but ethics and ideology. Intelligence-amplification involves enriching our capacity for perspective-taking and empathy – and extending our circle of compassion to even the humblest minds. Transhumanists (cf. Transhumanism.com) advocate full-spectrum (super)intelligence: The Biointelligence Explosion. The Transhumanist Declaration (1998, 2009) expresses our commitment to the well-being of all sentience – not a world of Nietzschean Übermenschen.

Coercion?
One human invention worth preserving is liberal democracy.

An end to evolution?
On the contrary, the entire biosphere is now programmable:
Reprogramming the Biosphere

The beauties of Nature?
Nature will be more beautiful when sentient beings aren’t disembowelled, asphyxiated and eaten alive:
Reprogramming Predators

Persuading religious traditionalists?
Well, a world where all sentient beings can flourish isn’t the brainchild of starry-eyed transhumanists. It’s the “peaceable kingdom” of Isaiah. Transhumanists fill in some of the implementation details missing from the prophetic Biblical texts. For instance, the talk below was delivered to the Mormon Transhumanist Association:
Paradise Engineering (pdf)

Hacking?
Yes, it’s a potential threat. But Darwinian life is a monstrous engine for the creation of suffering. Animal life on Earth has been “programmed” to suffer. It’s a design feature, not a bug or a hack. Darwinian malware should be patched or retired.

A “mere robot/non-human abomination?”
I’d need to know what you have in mind. But if we phase out the molecular signature of experience below hedonic zero, then the meaning of “things going wrong” will be revolutionised too.

PP: "Hi David, Are there any practical examples of the types of changes you're suggesting in effect or is it all just theoretical? On the home page of your website you mention that many modern medical technologies would have been inconceivable or thought of as impossible in the past, but it would be a fallacy to then infer that our current ideas on what is possible are therefore false and so that the abolitionist project is therefore possible."
(Michael)
DP: The prospect of a “triple S” civilisation of superintelligence, superlongevity and superhappiness still strikes most people as science fiction. By way of a reply, I’m going to focus just on the strand of the transhumanist project that strikes me as most morally urgent, namely overcoming the biology of involuntary suffering. Technically but not sociologically, everything I discuss could be achieved this century with recognisable extensions of existing technologies. Nothing I explore involves invoking e.g. a Kurzweilian Technological Singularity or machine superintelligence as a deus ex machina to solve all our problems. Even helping obscure marine invertebrates and fast-breeding rodents is technically feasible now – although pilot studies in self-contained artificial biospheres would be wise. Technically (but not sociologically), a “low pain” (as distinct from a “no pain”) biosphere could be created within decades. And imagine if all prospective parents were offered preimplantation genetic screening and counselling / gene-editing services so their future children could enjoy benign versions of the SCN9A (cf. A Cure for Pain), FAAH and FAAH-OUT genes. Imagine if cultured meat and animal products led to the closure of factory-farms and slaughterhouses worldwide. Imagine if we spread benign versions of pain- and hedonic-tone-modulating genes across the biosphere with synthetic gene drives. Imagine if all humans and nonhuman animals were offered pharmacotherapy (cf. LIH-383) to boost their endogenous opioid function. Imagine if we took the World Health Organization definition of good health seriously and literally: “complete physical, mental and social well-being”. Biological-genetic interventions would be indispensable. The WHO commitment to health is impossible to fulfil with our legacy genome. To stress, I’m not urging a Five-Year Plan (as distinct from a Hundred-Year Plan) and certainly not delegating stewardship of the global ecosystem to philosophers! Rather, we need exhaustive research into risk-reward ratios and bioethical debate. How much pain and suffering in the living world is ethically optimal? Intelligent moral agents will shortly be able to choose.

PP: "David, do you view transhumanism as a more likely outcome than the dystopian possibilities given the kinds of technologies involved? Potential dangers would include a nascent superintelligence not aligned with human values, CRISPR being used to make bioweapons, gray goo nanotech scenarios, and of course, the super rich receiving the lion share of the benefits, since they'll be to afford riding the early wave of accelerating improvements."
(Marchesk)
DP: Utopia, dystopia or muddling through?
It’s a question of timescales. I’m sceptical that experience below hedonic zero will still exist a thousand years from now – and it may disappear much sooner. I suspect quasi-immortal intelligent life will be animated by gradients of superhuman bliss. So I could be mistaken for an optimist. I don’t believe nascent machine superintelligence will turn us into the equivalent of paperclips (cf. Paperclippers); I discount grey goo scenarios; I reckon e.g. crashing the global ecosystem is more challenging than it sounds; and I think the poor will have access to biological-genetic reward pathway enhancements no less than the rich. Not least, I think we are living in the final century of animal agriculture, a monstrous crime against sentience on a par with the Holocaust.

However, I’m not at all optimistic that humanity will avoid nuclear war this century. “Local”, theatre or strategic nuclear war? I don’t know.
How can nuclear catastrophe be avoided?
I could offer some thoughts. But alas "Dave’s Plan For World Peace" would have limited impact.
So I fear unimaginable suffering still lies ahead.

PP: "You might wish to take some of the questions related to the possibility that the future may not be better but worse than the past. There is this sense that transhumanism is just too starry-eye optimistic, that it is not just sci-fi but in many ways it is yesterday's sci-fi, a line of thought typical of the 90's..."
(Olivier5)
DP: Could the future be worse than the past?
It’s a horrific thought. I promise I take s-risks seriously – although not all s-risks:
DP on s-risks.

However, it’s worth drawing a distinction between two kinds of technology. Traditional technological advances do not target the negative-feedback mechanisms of the hedonic treadmill. In consequence, there is little evidence that the subjective well-being – or ill-being – of the average twenty-first century human differs significantly from that of the average stone-age hunter-gatherer on the African savannah. Indeed, some objective measures of well-being / ill-being such as suicide rates and the incidence of serious self-harm suggest modern humans are worse off. By contrast, biological-genetic interventions to elevate hedonic range and hedonic set-points promise to revolutionise mental health. I’m as hooked on iPhones, air travel, social media and all the trappings of technological civilisation as anyone. I’m also passionate about social justice and political reform. But if we are morally serious about the problem of suffering, then we’ll have to tackle the root of the problem, namely our sinister genetic source code. Only transhumanism can civilise the world.

PP: "Thanks for the response. I don't share your condemnation of "Darwinian life". I tend to like life as it is, suffering included"
(Olivier5)
DP: If I might quote Robert Lynd, “It is a glorious thing to be indifferent to suffering, but only to one's own suffering.” You say, “I tend to like life as it is, suffering included.” If you are alluding just to your own life here, cool! But please do bear in mind the obscene suffering that millions of human and nonhuman animals are undergoing right now. Recall that over 850,000 people take their own lives each year. Hundreds of millions of people suffer from depression and chronic pain. Billions of nonhuman animals suffer and die in factory-farms and slaughterhouses. I could go on, but I’m sure you get my point. Life doesn't have to be this way.

"Ironically, this vision of yours scares me far more than the possible collapse of our civilization. Because it wouldn't be the first civilization to collapse; these things have happened before. But a life without downsides or limits, that has never happened before."
You say the transhumanist vision “scares” you? Why exactly? Quasi-immortal life based on gradients of intelligent bliss needn't be as scary as it sounds.

PP: "What Olivier's concern may be is that while you, as a decent person trying to help humanity by creating unlimited or constant pleasure without end, may be abused by those who wish to do the opposite and instead create unlimited and never-ending torment."
(Outlander)
DP: I share your dark view of humanity.
Yet should we discourage a scientific understanding of depression and its treatment for fear some people might use the knowledge to make their victims more depressed? Should we discourage a scientific understanding of pain, painkillers and pain-free surgery for fear some people might use the knowledge to inflict worse torments on their enemies? Most topically here, should we discourage research into the "volume knob for pain" for fear a few parents might choose malign rather than benign variants of SCN9A for their offspring? If taken to extremes, this worry would stymie all medical progress, not just transhumanism. Indeed, one reason for accelerating the major evolutionary transition in prospect is to put an end to demonic male "human nature".

PP: "People would take their lives in your ideal world too, if only because it'd be boring"
(Olivier5)
DP: Boredom? Its elimination will be trivial compared to defeating the biology of aging:
Ending Aging

Predation?
The Radical Plan To Eliminate the Earth's Predatory Species
(I didn’t write the headline.)
With some light genetic tweaking, no sentient being need be harmed.

Plant sentience?
It’s a hoax:
Plants are Zombies

PP: "This contradicts the 'universal law of predation' which states that all organisms must have predators"
DP: The conjecture that predation among species is inevitable is no more tenable than the conjecture that predation among races is inevitable. This isn’t because selection pressure is going to slacken; on the contrary, selection pressure will intensify. But intelligent moral agents can decommission natural selection. A combination of genetic tweaking and cross-species fertility regulation (immunocontraception, remotely tunable synthetic gene drives, etc) together with ubiquitous AI surveillance is going to transform life on Earth.
Reprogramming Predators

PP: "Thank you very much for the response. I brought up the pinprick argument, as despite being a NU myself, I believe it defeats Benatar's asymmetry theory. In his book he bites the bullet, concluding that the pinprick would make it so a life otherwise full of pleasure would have been better off not being started. Surely you can't agree with this? I also take it from your posts that you believe there are principles one must follow (sanctity of life etc), even if the likely consequences are more suffering? What is your answer to Smart's benevolent world-exploder?"
(Down The Rabbit Hole)
DP: Should human intuitions of absurdity weigh more in ethics than in, say, quantum physics? That said, I defend what might be called "indirect negative utilitarianism", but is really just strict negative utilitarianism. Thus enshrining the sanctity of human and nonhuman animal life in law doesn’t lead to more net suffering. Naively, yes, the implications of negative utilitarianism are apocalyptic (cf. Smart's Reply to Popper). Indeed, classical utilitarian philosopher Toby Ord calls negative utilitarianism a “devastatingly callous” doctrine. But whereas negative utilitarians can ardently support an advanced transhumanist “triple S” civilisation, classical utilitarians are obliged to obliterate it with a utilitronium shockwave.

PP: "What do you think the chances are of environmental collapse in the next 100 years derailing the necessary technical developments to allow transhumanism?"
(Bitter Crank)
DP: Many thanks. Could environmental degradation derail transhuman civilisation?
I'm sceptical. But I suspect a climate catastrophe such as the inundation of a major Western metropolis will be needed to trigger serious action on mitigating global warming. One possible solution might be coordinated international legislation to enforce drastic reductions in CO2 emissions that become effective only in, say, 10 years’ time.

A runaway “intelligence explosion” of recursively self-improving software-based AI?
Again, I’m sceptical:
An Intelligence Explosion
Classical Turing machines can’t solve the binding problem:
Binding is Non-Algorithmic
Digital zombies have profound cognitive limitations that no increase in speed or complexity can overcome. In my view, zombie AI will augment sentient beings, not replace us.

Superhappiness?
The molecular signature of pure bliss is unknown, although its location has been narrowed to our ultimate “hedonic hotspot” in a cubic centimetre in the posterior ventral pallidum. Its discovery will be momentous. But the creation of superhappiness, let alone information-sensitive gradients of superhappiness, will depend on the solution to the binding problem. And the theoretical upper bounds to phenomenally-bound consciousness – whether blissful or otherwise – are unknown.

PP: "...I was wondering if you had any comments regarding the scope of that process of simulation cf. the extended mind thesis? And possibly an ethical challenge this raises to the primacy of biogenetic intervention in the reduction of long term suffering: if the human mind's simulation process is saturated with environmental processes, why is the body a privileged locus of intervention for suffering reduction and not its environment?"
(FDrake)
DP: Thank you. Suffering and the extended mind thesis?
(cf. The Extended Mind Thesis)
One of the authors of the extended mind thesis, Andy Clark, is explicitly a perceptual direct realist. Clark’s co-author, David Chalmers, sometimes writes in a similar vein. If some version of the extended mind thesis were true, then the abolitionist project would need to be re-evaluated, as you suggest.

However, as far as I can tell, the external world is inferred, not perceived. Strictly, it’s a metaphysical hypothesis (cf. Inferential Realism). Thanks to evolution, biological minds each run skull-bound phenomenal world-simulations that take the guise of their external surroundings. Within your world-simulation, your virtual iPhone is an extension of your bodily self. Within your world-simulation, you may perceive the distress of the virtual bodily avatars of other organisms. If you’re not dreaming, then their virtual behaviour causally co-varies with the behaviour of genuine sentient beings in the wider world.
But suffering is in the head.

PP: "Hello David. Do you have an argument for why, as far as you can tell, the external world is inferred and not perceived? Is it some version of the argument from illusion/hallucination? If so, how do you respond to the usual criticisms of these arguments by externalists, i.e. that these arguments tend to confuse metaphysical with epistemological issues."
(Jkg20)
DP: As far as I can tell, physical reality long predates the evolution of phenomenally-bound minds in the late Precambrian. As I said, I’m a metaphysical realist. But each of us runs an egocentric world-simulation. Phenomenal world-simulations differ primarily in the identity of their protagonist. My belief that I’m not a Boltzmann brain or a mini-brain in a neuroscientist’s vat (etc) is metaphysical. The belief rests on a chain of inferences – and speculation I find credible. The external environment partly selects the content of one’s waking world-simulation; it doesn’t create it:
The Many Worlds of Kant
Recall I argue against the view that reality is subjectively constructed. But each of us runs a phenomenal world-simulation that masquerades as the external world. Mind-independent reality may be theoretically inferred; it's not perceptually given.

PP: "If objective reality exists, and we perceive it, surely the natural emphasis falls upon the validity of our understanding - and scientific method as a means to establish objective knowledge."
(Counterpunch)
DP: The inferential realist account is more epistemically demanding. The perceptual naïve realist believes that (s)he directly communes with the external world – an approach that offers all the advantages of theft over honest toil. By contrast, the inferential realist tries to explain how our phenomenally-bound world-simulations are neurologically possible. It's a daunting challenge:
The binding problem categorised

Climate change? A prosperous sustainable future? There is no tension here. I promise transhumanists are as keen on a healthful environment and economic prosperity as you are!

PP: "Well, until such a time when we can engineer every single life form on earth to do exactly what you think is good, I'm going to read the Dimension of Miracles again"
(Olivier5)
DP: Complications aside, sentient beings exhibit a clearly expressed wish not to be harmed. So compassionate biology doesn't entail "engineer[ing] every single life form on earth to do exactly what you think is good". Rather, intelligent moral agents should ensure that all sentient beings can flourish without being physically molested. Genome-editing is a game-changer. Yes, there are some (human and nonhuman) predators who want to prey on the young, the innocent and the vulnerable. But there is no "right to harm". A civilised biosphere will be vegan.

PP: "Without that grounding in evolutionary biology, the inferential realist account is just subjectivism with a fresh coat of paint, because the effect is essentially the same. Focusing on the mechanisms of perception, you soon lose sight of the art, traffic lights and colour coded electrical wires that prove enormous commonality of perception of an objectively existing reality, necessary to our survival as a species - both up to this point, and in future."
(Counterpunch)
DP: Modern physics reveals that mind-independent reality is radically different from our egocentric virtual worlds of experience. I say a bit more, e.g.
Philosophy's Big Mistake?

PP: "A tiger does not apologize."
(Olivier5)
DP: True, a tiger does not apologise. Nor, unless caught, does a serial psychopathic child killer. Their victims are of comparable sentience. Unless rather naively we believe in free will, neither tigers nor psychopaths are to blame in any metaphysical sense for the suffering they cause. But their blamelessness is not an argument for conserving tigers or psychopaths in their existing guise.

PP: "Sensory perception is limited; but that does not imply that what we perceive is unreal."
(Counterpunch)
DP: The fact that we each run a phenomenal world-simulation rather than perceive extracranial reality doesn't entail that our world-simulations are unreal – any more than the mind-dependence of our world-simulations entails that extracranial reality is unreal. It's often socially convenient to ignore the distinction and pretend we share common access to a public macroscopic world. But shared access is still a fiction.

PP: "Although I think of myself as more of an indirect realist than a naive realist, I wonder if this is really the correct way to phrase it....So I think there is both an epistemological and a metaphysical aspect to this, and that perhaps metaphysically, waking experience just is a mind-independent reality being "perceptually given", whereas epistemologically, that I am having a waking experience and that waking experience just is a mind-independent reality being "perceptually given" is "theoretically inferred"."
(Michael)
DP: Thanks, a lot to unpack there. I worry that the expression "perceptually given" is doing a lot of work in your account. One's experience of a macroscopic world is an intrinsic property of neural patterns of matter and energy. This kind of experience may be shared by dreaming minds, brains-in-vats, Boltzmann brains – and awake humans who have evolved over millions of years of natural selection. In other words, the experience of a macroworld isn't intrinsically “perceptual” – the extracranial environment is neither sufficient nor necessary for the experience. Rather, one’s phenomenal macroworld has been harnessed by natural selection to play a particular functional role in awake animal nervous systems, namely the real-time simulation of fitness-relevant patterns in the extracranial environment. "Perception" is a misnomer.

If inferential realism / a world-simulation account is correct, then thorny semantic issues arise:
The Hardest Paradox?

PP: "You deny the truth of a shared external reality.
(Metaphysician Undercover)
DP: I believe in the existence of mind-independent reality (cf. Against Solipsism). Its status, from my perspective, is theoretical not empirical. The fact that one can't directly access the world outside one's transcendental skull doesn't make it any less real.

Grounding ethics?
Agony and despair are inherently disvaluable for me. Science suggests I’m not special. Therefore I infer that agony and despair are disvaluable for all sentient beings anywhere:
DP on Meta-Ethics

PP: "What about the suffering tigers annihilate? When their prey is dead, the prey won't suffer anymore. That's chalked up as a positive, right? If life is an abomination, death ought to be a blessing."
(Olivier5)
DP: The problem of suffering can't be solved by tigers killing their victims any more than it can be solved by psychopaths killing orphans. Well-fed tigers breed more offspring who go on to terrorise more herbivores. I wouldn't personally be sad if the cat family were peaceably allowed to go extinct; but most people are aghast at the prospect. So instead, genetic tweaking can allow the conservation of tigers and other members of the cat family minus their violent proclivities.

DP: Very many thanks. Yes, I remarked that most critics don't find an architecture of mind based entirely on information-sensitive gradients of bliss to be a genetically credible prospect. If pressed, such critics will normally allow that a minority of chronic depressives are animated entirely by gradients of ill-being. The possibility of people with the opposite syndrome, i.e. life animated entirely by gradients of well-being, simply beggars their imagination. That’s why case studies of exceptional hyperthymics (e.g. Jo Cameron) who never get depressed or anxious and never feel pain are so illuminating. The challenge is to create a hyperthymic civilisation.

Terminology? The opposite of our position is dolorism. It's historically rare. More common is the bioconservatism exemplified by Alexander Pope in his Essay on Man (1733): "One truth is clear, WHATEVER IS, IS RIGHT". Voltaire satirised such inane optimism in Candide (1759).

Tackling the entrenched status quo bias of bioconservatism can be hard. One way of overcoming status quo bias is to pose a thought-experiment. Imagine humanity encounters an advanced civilisation that has abolished the biology of suffering in favour of life based on gradients of intelligent bliss. What arguments would bioconservative critics use to persuade the extra-terrestrials to revert to their ancestral biology of pain and suffering?

PP: "Hi David, I've been dabbling in secular buddhism, evolutionary psychology, and antinatalism on and off over the years and it's cool to see that transhumanism shares common themes and foundations. Are these overlaps recognized in transhumanist circles and if yes, could you recommend readings exploring these common themes, especially in the secular buddhism part? I have also realized that "hard" antinatalism is impractical so I just left it at that for a long time now but I'm finding that your arguments for transhumanism presents a more practicable alternative. So thanks for holding this AMA and introducing me to this position!"
(OglopTo)
DP: Thank you. You are very kind. The Transhumanist movement is diverse, indeed fragmented. For instance, Nick Bostrom and I both advocate a future of superintelligence, superlongevity and superhappiness, but “existential risk” means something different to an ardent life lover and a negative utilitarian (cf. DP and NB). A commitment to the well-being of all sentience is item 7 of 8 in The Transhumanist Declaration (1998, 2009). This prioritisation probably reflects the relative urgency most transhumanists feel. Superintelligence and superlongevity loom larger in the minds of most transhumanists than defeating suffering.

That said, I'd urge any secular Buddhist to embrace the transhumanist agenda. Recall how the historical Gautama Buddha was a pragmatist. If it works, do it! Indeed, the abolitionist project might crudely be called Buddhism and biotech. The only way I know to abolish the biology of suffering short of sterilising Earth is to rewrite our legacy source code. Other makeshift remedies are just stopgaps. Most technological advances don’t get to the heart of the problem of suffering. We need a genetically-driven biohappiness revolution.

PP: "So as a very rough analogy, as I understand it, you think of experience as something like a footprint in sand that may or may not have been caused by a boot, and the extent to which the features of the footprint "resemble" the features of the boot is an open question, as is how we are able to think and talk about the boot when presented with only a footprint."
(Michael)
DP: Thanks, your striking footprint / boot analogy hadn't occurred to me; but yes, in a sense. I'm still thinking about the coffee! Either way, the difference between dreaming and waking consciousness isn't that when awake one perceives the external world. Rather, during waking life, peripheral nervous inputs partially select the contents of one’s phenomenal world-simulation. When one is dreaming, one's world-simulation is effectively autonomous.

Now for the twist. A Kantian might say that all one can ever know is phenomena. The noumenal essence of the world is unknown and unknowable. But the recently-revived intrinsic nature argument "turns Kant on his head":
Galileo's Error?

PP: "That so, explain art - and not only creating art, but meaningful discussions about art. Explain what the artist thinks they are doing when painting a picture of a bridge in the fog. And how it can possibly be, that I come along, a hundred years later and say, "Hey - cool foggy bridge, dude!" The way I see it our sensory apparatus evolved in relation to a causal reality over millions of years, and while limited, is necessarily accurate to reality - as it really exists, to allow for survival. We could not have survived if perception were subjectively constructed."
(Counterpunch)
DP: I hope you'll forgive me for ducking questions of art here. However, when it comes to science, I'm a realist and a monistic physicalist, but not a materialist:
Physicalism, Materialism and Dualism
Taking modern physics seriously yields a conception of reality very different from the world-simulation of one's everyday experience.

PP: "What about the suffering tigers annihilate? When their prey is dead, the prey won't suffer anymore. That's chalked up as a positive, right? If life is an abomination, death ought to be a blessing."
(Olivier5)
DP: It's possible your recent comment has been deleted. But the reason I don't advocate the extinction (as distinct from genetic tweaking) of the cat family is precisely the visceral responses of outraged cat lovers. Ethically speaking, should we conserve, for example, the following?
The reality of predation
[Viewer discretion advised: please don't watch if you already agree that intelligent moral agents should end predation. But in the abstract, "predation" sounds no more troubling than halitosis.]
Civilisation will be vegan.

PP: "What exactly do you mean by "theoretical" here? I see this statement as self-contradicting. To say that something is theoretical is to say that it is mind-dependent. To say that reality is theoretical, but mind-independent, is to contradict yourself. In other words I don't see this as a valid way to account for the reality of the external world, to say that it is theoretical, yet also mind-independent. It can only be one or the other."
(Metaphysician Undercover)
DP: Consider lucid dreaming. When having a lucid dream, one entertains the theory that one's entire empirical dreamworld is internal to the transcendental skull of a sleeping subject. Exceptionally, one may even indirectly communicate with other sentient beings in the theoretically-inferred wider world:
Researchers Exchange Messages With Dreamers
What happens when one "wakes up"? To the naïve realist, it's obvious. One directly perceives the external world. But the inferential realist recognises that the external world can only be theoretically inferred. For a good development of the world-simulation metaphor, perhaps see Antti Revonsuo's Inner Presence (2006).

You remark, "To say that something is theoretical is to say that it is mind-dependent." But when a physicist talks of, say, the theoretical existence of other Hubble volumes beyond our cosmological horizon (s/he certainly doesn’t intend to make a claim of their mind-dependence. Of course, how our thoughts and language can refer is a deep question.
Naturalising semantic content is hard:
Naturalising Intentionality

Unique individuals?
Yes, our egocentric world-simulations each have a different protagonist. Yet we are not "uniquely" unique. When I said that "science suggests I'm not special", I was alluding simply to how my perception of myself as the hub of reality is (probably!) only a fitness-enhancing hallucination:
The hub of reality

PP: "Anyway, I suspect felids are not particularly interested in your advice. You are welcome to change yourself into some computer if you want to, but leave cats alone. They can make their own life choices"
(Olivier5)
DP: What makes you suppose I want to change myself into a computer?!
(cf. Why is the brain considered like a computer?)

Either way, power breeds complicity, whether we like it or not. Humans would (I hope) rescue a small child from the jaws of a lion. It's perverse not to rescue beings of comparable sentience. No one deserves to be disembowelled, asphyxiated or eaten alive. Of course, ad hoc solutions to the problem of predation are unsatisfactory. Hence the case for a pan-species welfare state and veganising the biosphere.

PP: "One can take modern physics seriously in at least two distinct ways: instrumentally or realistically. Taking it realistically has been argued to lead to incoherence. What would your reply be to this kind of argument:
Common sense vs physics?"
(Jkg20)
DP: In my view, instrumentalism threatens to collapse into an uninteresting solipsism. Instead, we'd do well to interpret the mathematical formalism of modern, unitary-only quantum mechanics realistically. Just as the special sciences (chemistry, molecular biology etc) can be derived from physics, likewise science should aim to derive the properties of our minds and the phenomenally-bound world-simulations they run from fundamental physics. This is a monumental challenge. But if there isn't a perfect structural match, then presumably some kind of dualism is true.

Let's stick to physicalism. I explore the quantum-theoretic version of the intrinsic nature argument. It’s counterintuitive:
Quantum Mind

PP: "Further, what's wrong with the original, biological human; isn't Transhumanism better suited for a virtual world?"
(Ghostlycutter)
DP: Aging is a frightful disorder. Medical science should aim for a cure.
More generally, mental and physical suffering are vile. A predisposition to suffer is genetically hardwired. For example, hundreds of millions of people worldwide suffer from chronic depression and pain. Even nominally healthy humans sometimes suffer horribly as a function of our legacy code. In the absence of biological-genetic interventions, the negative-feedback mechanisms of the hedonic treadmill will play out in immersive virtual worlds no less than in basement reality.
What's more, we shouldn’t retreat into escapist VR fantasy worlds until we have solved the problem of suffering in Nature and have created post-Darwinian life.
In short, transhumanism is morally urgent.

PP: "Sometimes, wisdom consists in not acting even when you could act. We should leave other species alone, to the extent possible, e.g. by way of nature reserves."
(Olivier5)
DP: Biotech (genome editing, synthetic gene drives, etc) turns the level of suffering in Nature into an adjustable parameter. Yes, traditional conservation biologists favour preserving the snuff movie of Darwinian life: sentient beings hurting, harming and killing each other.
Intelligent moral agents can do better.

PP: "Intelligent moral agents can do far better than bioengineer cats for the moral satisfaction of seeing them eat lettuce."
(Olivier5)
DP: What do you think it feels like to be eaten alive? The horrors of "Nature, red in tooth and claw" are too serious to be written off with jokes about eating lettuce. For the first time in history, it's technically possible to engineer a biosphere where all sentient beings can flourish. I know of no good moral reason for perpetuating the horror-show of Darwinian life.

PP: "But as for the morality of it, do you think we should do the same if we come across an alien biome? Would we want advanced aliens to come tame our world?"
(Marchesk)
DP: If pain-ridden Darwinian ecosystems exist within our cosmological horizon, then I would indeed hope future transhumans will send out cosmic rescue-missions. It's a tragedy that no such rescue-mission ever reached our planet; it could have prevented 540 million years of unimaginable suffering. Here on Earth, delegating stewardship of the global ecosystem to philosophers would be unwise. But pilot studies of self-contained happy ecosystems are feasible even now. CRISPR-based synthetic gene drives are an insanely powerful tool of compassionate stewardship. However, before we start actively helping sentient beings, we must first stop systematically harming them:
Ending Industrialised Animal Abuse

PP: "...but the argument on the link I provided seems to be trying to bolster commonsense realism, on the grounds that the alternative winds up in incoherence"
(Jkg20)
DP: It was J. L. Austin (of all people!) who acknowledged that "common sense is the metaphysics of the stone age". Folk physics is not my point of departure. What counts as "common sense" is time- and culture-bound. My common sense most likely differs from your common sense. By "commonsense realism", do you mean perceptual naïve realism? Either way, we want to explain the technological success story of modern science. Anti-realism leaves the technological success of modern science a miracle. Maybe we can take an agnostic instrumentalist approach. Yet a lot of us want to understand reality. Why does quantum mechanics work? What explains the Born rule?

Most scientists are materialist physicalists. Materialism leads to the insoluble Hard Problem of consciousness:
The Hard Problem
Most scientists treat the mind-brain as a pack of classical neurons. Treating the mind-brain as an aggregate of decohered, membrane-bound neurons leads to the insoluble phenomenal binding/combination problem:
The binding problem
I don’t know if non-materialist physicalism is true; but it’s my working hypothesis.

PP: "Benevolent aliens go around reengineering life. They just haven't gotten around to us yet. Do you think sentient life should be allowed a choice in the matter?"
(Marchesk)
DP: Darwinian malware had no choice about being born. Under a regime of natural selection, the lives of most sentient beings are “nasty, brutish and short”. Elsewhere I’ve urged upholding the sanctity of human and nonhuman life in law - not because I believe in such a fiction, but because failure to do so will most likely lead to worse consequences.

Would a notional benevolent superintelligence opt to conserve a recognisable approximation of Darwinian life-forms?
Or optimise our matter and energy for blissful sentience?
I don’t know.
But in the real world, a policy framework of compassionate conservation would seem to be a workable compromise.

PP: "I figured it is easier to destroy all Darwinian life first, and then reconstruct it better"
(Olivier5)
DP: Any solution to the problem of suffering must be technically and sociologically credible. It's a daunting challenge, but I know of no alternative:
The Abolitionist Project
There is no OFF button.

DP: Thank you. You are very kind. I'm curious though. You believe there is potentially a cure for suffering, but it's not transhumanism. First, we need to define what is meant by the term "transhumanism". If we don't edit our source code, recalibrate the hedonic treadmill, and genetically change "human nature", then I'm at a loss to know how the abolitionist project can ever be realised - short of apocalyptic scenarios that are unfruitful to contemplate.

PP: "My main point is that to me, life is the supreme value. Not pleasure or the absence of suffering or sentience, but just life. Ugly as it is. Beautiful as it is too."
(Olivier5)
DP: I'm mystified why you value life per se rather than certain kinds of life. Do you really believe we should value and attempt to conserve, say, the parasitic worm Onchocerca volvulus that causes onchocerciasis a.k.a. "river blindness"? If so, why? Yes, humans are "playing god". Good. We should aim to be benevolent gods.

PP: "I was wondering what your take is on the opioid crisis. Were not all concerned hedonistic pleasure seekers?
(Counterpunch)
DP: The underlying cause of the opioid crisis is that we are all born endogenous opioid addicts. The neurotransmitter system most directly involved in hedonic tone is the opioid system. Human and nonhuman animals are engineered by natural selection with no durable way to satisfy our cravings. Most exogenous opioid users are ineffectively self-medicating. Exogenous opioids just activate the negative-feedback mechanisms of the CNS. A solution to the opioid crisis is going to be complicated, long-drawn-out and messy. But ACKR3 receptor blockade potentially offers the prospect of hedonic uplift for all (cf. ACKR3 receptor blockers). More research is urgently needed. Note I'm not (yet) urging everyone to get hold of ACKR3 receptor blockers. There are too many pitfalls and unknowns.

PP: "I hope I’m not too late to the party but I’ll start by asking a question relating to the debate between what I heard get called "Agent Neutral”and “Agent Relative” forms of Utilitarianism . Do you think we have any extra non-instrumental reason to minimize our own suffering or the suffering of our loved ones relative to the reason that we have to minimize the suffering of a complete stranger? Also, do you think that we have less non-instrumental reason to minimize the suffering of our enemies and people that we despise?"
(TheHedoMinimalist)
DP: It's a question of timescales. Classical utilitarianism is often held to be agent‐neutral. But in the long run, it's unclear whether classical utilitarianism is consistent with a world of agents. For the classical utilitarian should be working towards an apocalyptic, AI-assisted utilitronium shockwave – some kind of all-consuming cosmic orgasm that maximises the abundance of bliss within our Hubble volume. Compare the homely dilemmas of the trolley problem. Intuitively, this kind of technological enterprise is centuries or millennia distant, so irrelevant to our lives. Yet temporal discounting is not an option for the strict classical utilitarian. Note that negative utilitarianism doesn’t entail this apocalyptic outcome. For the negative utilitarian, a transhuman civilisation based entirely on information-sensitive gradients of bliss will be ideal.
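Schematically – purely my own illustrative notation, with u_t standing for aggregate well-being at time t – the strict classical utilitarian ranks futures by the undiscounted sum
\[ U = \sum_{t} u_t \qquad \text{rather than} \qquad U_{\delta} = \sum_{t} \delta^{t} u_t \quad (0 < \delta < 1), \]
so bliss realisable millennia hence counts for no less than bliss realisable tomorrow. The negative utilitarian, by contrast, cares only about eliminating the terms that fall below hedonic zero.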

In the short-term, the practical implications of a classical utilitarian ethic are less dramatic. Given human nature, agent‐neutrality is psychologically impossible. Any attempt by legislators to enforce agent-neutrality would probably lead to worse consequences, i.e. more unhappy people. Likewise, if one is an aspiring effective altruist who gives, say, 10% of one's income to charity, then beating oneself up about not charitably giving away 90% of one's income is counterproductive. Most likely, heroic self-sacrifice will lead to "burn out" - an un-utilitarian outcome. So in practice, being a (classical or negative) utilitarian involves all sorts of messy compromises. But then real life is messy – no news here.

Should one aspire to minimise the suffering of people one despises as much as that of one's loved ones? Yes. In practice, such impartial benevolence is impossible. But getting our theory of personal identity right can help:
Open, Closed and Empty Individualism

PP: "Have you seen that experiment where the orgasm centre of a rat's brain was plugged into a lever the rat could press, and it pressed the lever repeatedly until it starved to death?"
(Counterpunch)
DP: Yes:
Hypermotivation and the meaning of life
Wireheading is not a viable solution to the problem of suffering.

PP: "the unrestrained, hedonistic pursuit of pleasure has produced terrible consequences. And your answer is, they're seeking pleasure wrong? So, what's the right way?
(Counterpunch)
DP: In my view, the right way to seek pleasure is through genetic recalibration of the negative-feedback mechanisms of the hedonic treadmill:
Gradients.com

PP: "Good answer. Design happier healthier babies! But where to stop?"
(Counterpunch)
DP: We could engineer a world with a hedonic range of 0 to +10 as distinct from our -10 to 0 to +10. But we could also engineer a civilisation of (schematically) +10 to +20 or (eventually) +90 to +100. Critics protest that a notional civilisation with a hedonic range of +90 to +100 would "lack contrast" compared to the rich tapestry of Darwinian life. But a hedonic range of, say, +70 to +100 will be feasible too.
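As a toy illustration – my own, and assuming hedonic tone can be idealised as a single number – recalibration is an affine shift of the scale rather than a flattening of it. The mapping
\[ h' = \tfrac{3}{2}\,h + 85 \]
carries today's notional -10…+10 onto +70…+100: every contrast between better and worse moments survives (indeed is slightly stretched), even though the hedonic floor now lies far above zero.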

PP: "A decentralized, self-regulated system is more resilient than a centrally regulated system. And that's a dimension on which Darwinian life will always trump engineered life."
(Olivier5)
DP: If a global consensus emerges for compassionate stewardship of the living world, then the problem of suffering is tractable. We're not going to run out of computer power. Every cubic metre of the planet will shortly be accessible to surveillance and micromanagement – although synthetic gene drives allow the ecological option of remote management too.

On the one hand, we can imagine dystopian Orwellian scenarios – some kind of totalitarian global panopticon. But biotech also gives us the tools to create a world based on genetically preprogrammed gradients of bliss. In a world animated by information-sensitive gradients of well-being, there are no "losers" in the Darwinian sense of the term. Members of today's predatory species can benefit no less than their victims:
A Pan-Species Welfare State?

PP: "..because I lean toward idealism"
(Metaphysician Undercover)
DP: The only kind of idealism I take seriously just transposes the entire mathematical apparatus of modern physics onto an experientialist ontology: non-materialist physicalism (cf. physicalism.com). I used to assume the conjecture that the mathematical formalism of quantum field theory describes fields of sentience was untestable. How could we ever know what (if anything!) it's like to be, say, superfluid helium?
I now reckon such pessimism is premature:
Predictions from consciousness fundamentalism
At the risk of stating the obvious, no one sympathetic to the abolitionist project need buy into my idiosyncratic speculations on quantum mind and the intrinsic nature of the physical.

PP: "I find it very difficult to conceive of happiness as a constant state"
(Counterpunch)
DP: As a temperamentally depressive negative utilitarian, I find lifelong happiness hard to conceive too. But a civilisation based on gradients of superhuman bliss is technically feasible. IMO, our impending mastery of the pleasure-pain axis makes such a future civilisation likely, too, though I vacillate on credible timescales.

"Information-sensitive gradients of well-being" is more of a mouthful than "constant happiness". Nonetheless the distinction is practically important. Despite my normal British prudery, I sometimes give the example of making love. Lovemaking has peaks and troughs. But done properly, lovemaking is generically enjoyable for both parties throughout. The challenge is to elevate our hedonic range and default hedonic tone so that life is generically enjoyable – despite the dips.

PP: "I suspect that, were we to live in such a civilisation, our mean hedonistic expectation will simply adjust to somewhere around 95. Anything below 95 will be deemed a disappointment if not a "micro-aggression", and anything above 95 will get recorded as satisfying and truly a pleasure. In short, I suspect the gradient is relative, not absolute."
(Olivier5)
DP: The misconception that pain and pleasure are relative is tenacious. But we need only consider the plight of severe chronic depressives to recognise that it's false. Chronic depressives never cease to suffer even though some of their days are less dreadful than others. Hyperthymics lie at the opposite extreme.

PP: "My point was rather that no centralized decision making system can be perfect in its implementation"
(Olivier5)
DP: Phasing out the biology of suffering in favour of life based on information-sensitive gradients of well-being can be "perfect" in its implementation in the same sense that getting rid of Variola major and Variola minor was "perfect" in its implementation. Without Variola major and Variola minor, there is no more smallpox. Without the molecular signature of experience below hedonic zero, there can be no more suffering. It's hard to imagine, I know.

PP: "Okay, but how close is genetic science to identifying the specific genes and/or areas of the brain they want alter?"
(Counterpunch)
DP: It's complicated: Careers in hedonic uplift
But which version of the ADA2b deletion variant, COMT gene, serotonin transporter gene, FAAH & FAAH OUT gene (cf. Psychological and physical pain-sensitivity) and SCN9A gene (cf. The Cure for Pain) would you want for your future children?
As somatic gene therapy matures for existing humans, which allelic combinations would you choose for yourself?

Of course, bioconservatives would maintain that the genetic crapshoot of traditional sexual reproduction is best. If they prevail, then a Darwinian biology of misery and malaise will persist indefinitely.

PP: "Hi David, gene editing to the degree you are proposing has not been done to a real human (except perhaps one in China). What if it's the case that the way the phenotype and epigenetic results of gene editing work, it creates less happy humans, who suffer more? We are betting that the practical application will somehow prove out the theories. What if it doesn't and gene editing too has become a dead end?"
(Schopenhauer1)
DP: Preimplantation genetic diagnosis and screening is worth distinguishing from gene-editing. Ratcheting up hedonic range, hedonic set-points and pain thresholds in human and nonhuman animal populations would certainly be feasible using preimplantation genetic screening alone; but germline gene-editing will be quicker.

The rogue He Jiankui case was indeed unfortunate. Well-controlled prospective trials will take time. So will preparing the ethical-ideological groundwork. Most people accept (however uncomfortably) the idea of genetic interventions to prevent, say, Huntington's disease, Tay-Sachs or sickle-cell disease, but they are not yet receptive to the prospect of selecting higher pain-thresholds or hedonic set-points.

I'm sceptical gene-editing could prove to be a dead-end. Modulating even a handful of genes such as the half-dozen I mentioned in my reply to Counterpunch above could create radical hedonic uplift across the biosphere. Rather, I'm glossing over all sorts of risks, complications and unknowns – permissible in a philosophy forum when we are discussing the fundamental principles of a biohappiness revolution, but not if-and-when real-world trials begin.

DP: "It's also very hard to do, I know."
(Olivier5)
DP: Indeed. At times, my heart sinks at the challenges. But if we don't upgrade our legacy code, then pain and suffering will continue indefinitely.

PP: "What I'm really worried about is a transhumanistic approach to the human situation that is not based on an accurate understanding of that human situation; an approach that assumes too much and introspects about ourselves far too little."
(Noble Dust)
DP: I share your reservations about gung-ho enthusiasm for technology. But transhumanism is a recipe for deeper self-understanding. The only way to develop a scientific knowledge of consciousness is to adopt the experimental method. Alas, a post-Galilean science of mind faces immense obstacles: Psychedelic research
Not least, it's hard responsibly to urge the use of psychedelics to explore different state-spaces of consciousness until our reward circuitry is upgraded to ensure invincible well-being.

PP: "Somewhat related; how does transhumanism address addiction?"
(Noble Dust)
DP: If I might quote Pascal,

“All men seek happiness. This is without exception. Whatever different means they employ, they all tend to this end. The cause of some going to war, and of others avoiding it, is the same desire in both, attended with different views. The will never takes the least step but to this object. This is the motive of every action of every man, even of those who hang themselves.”
Transhumanism can treat our endogenous opioid addiction by ensuring that gradients of lifelong bliss are genetically hardwired.

PP: "What if gene-editing doesn't remove suffering but simply re-calibrates it in an unfavorable way?"
(Outlander)
DP: Compare the introduction of pain-free surgery:
Utopian Surgery
Surgical anaesthesia isn't risk-free. Surgeons and anaesthesiologists need to weigh risk-reward ratios. But we recognise that reverting to the pre-anaesthetic era would be unthinkable. Posthumans will presumably reckon the same about reverting to pain-ridden Darwinian life.

Consider again the SCN9A gene. Chronic pain disorders are a scourge. Hundreds of millions of people worldwide suffer terribly from chronic pain. Nonsense mutations of SCN9A abolish the ability to feel pain. Yet this short-cut to a pain-free world would be disastrous: we'd all need to lead a cotton-wool existence. However, dozens of other mutations of SCN9A confer either unusually high or unusually low pain thresholds. We can at least ensure our future children don’t suffer in the cruel manner of so many existing humans. Yes, there are lots of pitfalls. The relationship between pain-tolerance and risky behaviour needs in-depth exploration. So does the relationship between pain sensitivity and empathy; consider today's mirror-touch synaesthetes. However, I think the biggest potential risks of choosing alleles associated with elevated pain tolerance aren't so much the danger of directly causing suffering to the individual through botched gene therapy, but rather the unknown societal ramifications of creating a low-pain civilisation as a prelude to a no-pain civilisation. People behave and think differently if (like today's fortunate genetic outliers) they regard phenomenal pain as "just a useful signalling mechanism”. Exhaustive research will be vital.

PP: "Are there any aspects of the current Homo sapiens that you would identify as already transhuman?
(Fdrake)
DP: Candidly, no. Until we edit our genomes, even self-avowed transhumanists will remain all too human. All existing humans run the same kind of egocentric world-simulation with essentially the same bodily form, sensory modes, hedonic range, core emotions, means of reproduction, personal relationships, life-cycle, maximum life-span, and default mode of ordinary waking consciousness as their ancestors on the African savannah. For sure, the differences between modern and archaic humans are more striking than the similarities; but I suspect we have more in common with earthworms than we will with our genetically rewritten posthuman successors.

PP: "Researchers at the Francis Crick Institute have revealed that CRISPR-Cas9 genome editing can lead to unintended mutations at the targeted section of DNA in early human embryos."
(Olivier5)
DP: A hundred-year moratorium on reckless genetic experimentation would be good; but antinatalism will always be a minority view. Instead, prospective parents should be encouraged to load the genetic dice in favour of healthy offspring by making responsible genetic choices. Base-editing is better than CRISPR-Cas9 for creating invincibly happy, healthy babies:
Base editing

PP: "I don’t even know if other people are capable of suffering let alone that I have some kind of weird abstract reason to care about it. It seems to me that the reasons that we might have to minimize the suffering of others are almost just as speculative..."
(TheHedoMinimalist)
DP: You say you're "mostly an ethical egoist". Do you accept the scientific world-picture? Modern civilisation is based on science. Science says that no here-and-nows are ontologically special. Yes, one can reject the scientific world-picture in favour of solipsism-of-the-here-and-now (cf. Solipsism debunked). But if science is true, then solipsism is a false theory of the world. There’s no reason to base one’s theories of ethics and rationality on a false theory. Therefore, I believe that you suffer just like me. I favour the use of transhumanist technologies to end your suffering no less than mine. Granted, from my perspective your suffering is theoretical. Yet its inaccessibility doesn't make it any less real. Am I mistaken to act accordingly?

PP: "My intuition against utilitarianism always has been that pain, or even pain 'and' pleasure, is not the only thing that matters to us, and so reducing everything to that runs the risk of glossing over other things we value"
(ChatteringMonkey)
DP: Evolution via natural selection has encephalised our emotions, so we (dis)value many things beyond pain and pleasure under that description. If intentional objects were encephalised differently, then we would (dis)value them differently too. Indeed, our (dis)values could grotesquely be inverted – “grotesquely” by the lights of our common sense, at any rate.

What's resistant to inversion is the pain-pleasure axis itself. One simply can't value undergoing agony and despair, just as one can't disvalue experiencing sublime bliss. The pain-pleasure axis discloses the world's inbuilt metric of (dis)value.

However, it's not necessary to accept this analysis to recognise the value of phasing out the biology of involuntary suffering. Recalibrating your hedonic treadmill can in principle leave your values and preference architecture intact – unless one of your preferences is conserving your existing (lower) hedonic range and (lower) hedonic set-point. In practice, a biohappiness revolution is bound to revolutionise many human values – and other intentional objects of the Darwinian era. But bioconservatives who fear their values will necessarily be subverted by the abolition of suffering needn’t worry. Even radical elevation of your hedonic set-point doesn't have to subvert your core values any more than hedonic recalibration would change your favourite football team. What would undermine your values is uniform bliss – unless you're a classical utilitarian for whom uniform orgasmic bliss is your ultimate cosmic goal. Life based on information-sensitive gradients of intelligent well-being will be different. What guises this well-being will take, I don't know.

PP: "It seems to me that you are indulging in a vision of a paradise, that probably serves the same purpose as the Christian paradise: console, bring solace. It's a form of escapism."
(Olivier5)
DP: A plea to write good code so future sentience doesn't suffer isn't escapism; it's a policy proposal for implementing the World Health Organization's commitment to health. Future generations shouldn't have to undergo mental and physical pain.

The "peaceable kingdom" of Isaiah?
The reason for my quoting the Bible isn't religious leanings: I'm a secular scientific rationalist. Rather, bioconservatives are apt to find the idea of veganising the biosphere unsettling. If we want to make an effective case for compassionate stewardship, then it's good to stress that the vision of a cruelty-free world is venerable. Quotes from Gautama Buddha can be serviceable too. Only the implementation details (gene-editing, synthetic gene drives, cross-species fertility-regulation, etc) are a modern innovation.

PP: "The idea of being able to hardwire gradients of bliss smacks of hubris to me."
(Noble Dust)
DP: Creating new life and suffering via the untested genetic experiments of sexual reproduction feels natural. Creating life engineered to be happy – and repairing the victims of previous genetic experiments – invites charges of “hubris”. Antinatalists might say that bringing any new sentient beings into this god-forsaken world is hubristic. But if we accept that the future belongs to life lovers, then who shows greater humility:
(1) prospective parents who trust that quasi-random genetic shuffling will produce a benign outcome?
(2) responsible (trans)humans who ensure their children are designed to be healthy and happy?

PP: "It's not completely clear to me as to why having the intrinsic aim of minimizing suffering in the whole world is more reasonable than another kind of more abstract and speculative intrinsic aim like the intrinsic aim of minimizing instances of preference frustrations for example"
(TheHedoMinimalist)
DP: The preferences of predator and prey are irreconcilable. So are trillions of preferences of social primates. The challenge isn't technological, but logical. Moreover, even if vastly more preferences could be satisfied, hedonic adaptation would ensure most sentient beings aren't durably happier. Hence my scepticism about "preference utilitarianism", a curious oxymoron. Evolution designed Darwinian malware to be unsatisfied. By contrast, using biotech to eradicate the molecular signature of experience below hedonic zero also eradicates subjectively disvaluable states. In a world animated entirely by information-sensitive gradients of well-being, there will presumably still be unfulfilled preferences. There could still, optionally, be social, economic and political competition – even hyper-competition – though one may hope full-spectrum superintelligence will deliver superhuman cooperative problem-solving prowess rather than primate status-seeking. Either way, a transhuman world without the biology of subjective disvalue would empirically be a better world for all sentience. It’s unfortunate that the goal of ending suffering is even controversial.

PP: "You seem to be saying there is something fundamental about pain and pleasure, because it is life's (or actually the world's?) inbuilt metric of value... It just isn't entirely clear to me why."
(ChatteringMonkey)
DP: IMO, asking why agony is disvaluable is like asking why phenomenal redness is colourful. Such properties are mind-dependent and thus (barring dualism) an objective, spatio-temporally located feature of the natural world:
Objective Value?
Evolution doesn’t explain such properties, as distinct from when and where they are instantiated. Phenomenal redness and (God forbid) agony could be created from scratch in a mini-brain, i.e. they are intrinsic properties of some configurations of matter and energy. It would be nice to know why the solutions to the world's master equation have the textures they do; but in the absence of a notional cosmic Rosetta stone to "read off" their values, it's a mystery.

PP: "...wouldn't it to be expected, your and other philosophers efforts notwithstanding, that in practice genetic re-engineering will be used as a tool for realising the values we have now? And by 'we' I more often then not mean political and economic leaders who ultimately have the last say because they are the ones financing research. I don't want to sound alarmist, but can we really trust something with such far-reaching consequence as a toy in power and status games?"
(ChatteringMonkey)
DP: I've no short, easy answer here. But fast-forward to a time later this century when approximate hedonic range, hedonic set-points and pain-sensitivity can be genetically selected – both for prospective babies and increasingly (via autosomal gene therapy) for existing humans and nonhuman animals. Anti-aging interventions and intelligence-amplification will almost certainly be available too, but let's focus on hedonic tone and the pleasure-pain axis. What genetic dial-settings will prospective parents want for their children? What genetic dial-settings and gene-expression profiles will they want for themselves? Sure, state authorities are going to take an interest too. Yet I think the usual sci-fi worries of, e.g. some power-crazed despot breeding a caste of fearless super-warriors (etc), are misplaced. Like you, I have limited faith in the benevolence of the super-rich. But we shouldn't neglect the role of displays of competitive male altruism. Also, one of the blessings of information-based technologies such as gene-editing is that once the knowledge is acquired, their use won't be cost-limited for long. Anyhow, I'm starting to sing a happy tune, whereas there are myriad ways things could go wrong. I worry I’m sounding like a propagandist rather than an advocate. But I think the basic point stands. Phasing out hedonically sub-zero experience is going to become technically feasible and eventually technically trivial. Humans may often be morally apathetic, but they aren't normally malicious. If you had God-like powers, how much involuntary suffering would you conserve in the world? Tomorrow's policy makers will have to grapple with this kind of question.

PP: "David, at some point after implementation of the technology (maybe after 300 years, maybe after 1000 years, maybe after a million years) it is bound to be used to cause someone an unnatural amount of suffering. Suffering that is worse than could be experienced naturally. Is one or two people being treated to an unnatural amount of suffering (at any point in the future) worth it to provide bliss for the masses? Shouldn't someone that would walk away from Omelas walk away from this technology?"
(Down The Rabbit Hole)
DP: Yes, anyone who understands suffering should "walk away from Omelas". If the world had an OFF switch, then I'd initiate a vacuum phase-transition without a moment's hesitation. But there is no OFF switch. It's fantasy. Its discussion alienates potential allies. Other sorts of End-of-the-World scenarios are fantasy too, as far as I can tell. For instance, an AI paperclipper (cf. Autistic paperclippers?) would align with negative utilitarian (NU) values; but paperclippers are impracticable too. One of my reasons for floating the term "high-tech Jainism" was to debunk the idea that negative utilitarians are plotting to end life rather than improve it. For evolutionary reasons, even many depressives are scared of death and dying. As a transhumanist, I hope we can overcome the biology of aging. So I advocate opt-out cryonics and opt-in cryothanasia to defang death and bereavement for folk who fear they won't make the transition to indefinite youthful lifespans. This policy proposal doesn’t sound very NU – why conserve Darwinian malware? – but ending aging / cryonics actually dovetails with a practical NU ethic.

S(uffering)-risks? The s-risk I worry about is the possibility that a technical understanding of suffering and its prevention could theoretically be abused instead to create hyperpain. So should modern medicine not try to understand pain, depression and mental illness for fear the knowledge could be abused to create something worse? After all, human depravity has few limits. It's an unsettling thought. Mercifully, I can't think of any reason why anyone, anywhere, would use their knowledge of medical genetics to engineer a hyper-depressed, hyperpain-racked human. By contrast, the contemporary biomedical creation of "animal models" of depression is frightful.

Is it conceivable that (trans)humans could phase out the biology of suffering and then bring it back? Well, strictly, yes: some philosophers have seriously proposed that an advanced civilisation that has transcended the horrors of Darwinian life might computationally re-create that life in the guise of running an "ancestor simulation". Auschwitz 2.0? Here, at least, I’m more relaxed. I don't think ancestor simulations are technically feasible – classical computers can't solve the binding problem. I also discount the possibility that superintelligences living in posthuman heaven will create pockets of hell anywhere in our forward light-cone. Creating hell – or even another Darwinian purgatory – would be fundamentally irrational.

Should a proponent of suffering-focused ethics spend so much time exploring transhumanist futures such as quasi-immortal life animated by gradients of superhuman bliss? Dreaming up blueprints for paradise engineering can certainly feel morally frivolous. However, most people just switch off if one dwells entirely on the awfulness of Darwinian life. My forthcoming book is entitled “The Biohappiness Revolution" – essentially an update on “The Hedonistic Imperative”. Negative utilitarianism, and maybe even the abolitionist project, is unsaleable under the latter name. Branding matters.

PP: "...shouldn't it be possible for intelligent life (with the help of advanced computers) to artificially create something similar? And given that things like dreams where there's little to no external stimuli involved are possible then shouldn't artificially stimulated experiences also be possible?"
(Michael)
DP: Artificial mini-brains, and maybe one day maxi-minds, are technically feasible. What's not possible, on pain of spooky "strong" emergence, is the creation of phenomenally-bound subjects of experience in classical digital computers:
Binding-problem.com
Binding / David Pearce / Magnus Vinding interview

PP: "I don’t see how that’s a problem for PU though. I think they could easily respond to this concern by simply stating that we regrettably have to sacrifice the preferences of one group to fulfill the preferences of a more important group in these sorts of dilemmas. I also think this sort of thing applies to hedonism also. In this dilemma, it seems we would also have to choose between prioritizing reducing the suffering of the predator or prioritizing reducing the suffering of the prey."
(TheHedoMinimalist)
DP: Even if we prioritise, preference utilitarianism doesn’t work. Well-nourished tigers breed more tigers. An exploding tiger population then has more frustrated preferences. The swollen tiger population starves in consequence of the dwindling numbers of their prey. Prioritising herbivores over predators doesn’t work either – at least, not on its own. As well as frustrating the preferences of starving predators, a population explosion of herbivores would lead to mass starvation and hence even more frustrated preferences. Insofar as humans want ethically to conserve recognisable approximations of today’s "charismatic mega-fauna", full-blown compassionate stewardship of Nature will be needed: reprogramming predators, cross-species fertility-regulation, gene drives, robotic “AI nannies” – the lot. From a utilitarian perspective (cf. Utilitarianism.com), piecemeal interventions to solve the problem of wild animal suffering are hopeless.

PP: "I’m curious to know what reasons do you think that we have to care specifically about the welfare of sentient creatures and not other kinds of entities"
(TheHedoMinimalist)
DP: Mattering is a function of the pleasure-pain axis. Empirically, mattering is built into the very nature of the first-person experience of agony and ecstasy. By contrast, configurations of matter and energy that are not subjects of experience have no interests. Subjectively, nothing matters to them. A sentient being may treat them as significant, but their importance is only derivative.

PP: "Perhaps not, but I don't think that the use of classical digital computers is central to any ancestor simulation hypothesis. Don't such theories work with artificial mini-brains?")
(Michael)
DP: One may consider the possibility that one could be a mind-brain in a neurosurgeon’s vat in basement reality rather than a mind-brain in a skull as one naively supposes. However, has one any grounds for believing that this scenario is more likely? Either way, this isn’t the Simulation Hypothesis as envisaged in Nick Bostrom’s original Simulation Argument (cf. Simulation Argument).
What gives the Simulation Argument its bite is that, pre-reflectively at any rate, running a bunch of ancestor-simulations is the kind of cool thing an advanced civilisation might do. And if you buy the notion of digital sentience, and the idea that what you’re now experiencing is empirically indistinguishable from what your namesake in multiple digital ancestor-simulations is experiencing, then statistically you are more likely to be in one of the simulations than in the original.
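The "statistically" here is just indifference reasoning – my gloss, stated schematically. If there are N subjectively indistinguishable simulated runs of your life plus one original, then
\[ P(\text{original}) = \frac{1}{N+1}, \]
which shrinks towards zero as N grows – hence the billions-to-one odds quoted below.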

Elon Musk puts the likelihood that we’re living in primordial basement reality at billions-to-one against. But when I last asked, Nick didn't assign a credence of more than 20% to the Simulation Hypothesis.

I think reality has only one level and we’re patterns in it – a physical field of subjective experience that evolves in accordance with (the relativistic generalisation of) the Schrödinger equation.
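For anyone who wants the formalism spelt out, the non-relativistic equation in question is the standard
\[ i\hbar \, \frac{\partial}{\partial t} \lvert \psi(t) \rangle = \hat{H} \, \lvert \psi(t) \rangle, \]
understood here as unitary-only (no collapse); the relativistic generalisation is the quantum-field-theoretic dynamics touched on earlier, not this single-particle form.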

PP: "...where I would want to push back is that it needs to be the only thing we are concerned with.... We seem to deliberately seek out and endure pain to attain some other values, such as fitness, winning or looking good... I wouldn't say we value the pain we endure during sports, but it does seem to be the case that sometimes we value other things more than we disvalue pain. So how would you reconcile this kind of behavior with pain/pleasure being the inbuilt metric of (dis)value?"
(ChatteringMonkey)
DP: I think the question to ask is why we nominally (dis)value many intentional objects that are seemingly unrelated to the pleasure-pain axis. "Winning” and demonstrating one is a dominant alpha male who can stoically endure great pain and triumph in competitive sports promises wider reproductive opportunities than being a milksop. And for evolutionary reasons, mating, for most males, is typically highly rewarding. We see the same in the rest of Nature too. Recall the extraordinary lengths some nonhuman animals will go to in order to breed. What’s more, if (contrary to what I’ve argued) there were multiple axes of (dis)value rather than a sovereign pain-pleasure axis, then there would need to be some kind of meta-axis of (dis)value as a metric to regulate trade-offs.

You may or may not find this analysis persuasive; but critically, you don't need to be a psychological hedonist, nor indeed any kind of utilitarian, to appreciate it will be good to use biotech to end suffering.

PP: "Nietzsche for instance chose 'health/life-affirmation' as his preferred meta-axis to re-evaluate values, you seem to favor pain/pleasure"
DP: Some of the soul-chilling things Nietzsche said make him sound as though he had an inverted pain-pleasure axis (cf. nietzsche.com). In reality, Nietzsche was in thrall to the axis of (dis)value no less than the most ardent hedonist.

In any event, transhumanists aspire to a civilisation of superintelligence and superlongevity as well as superhappiness – and many more "supers" besides.

PP: "I do not believe that science suggests you are not special. I think science suggests exactly what you argued already, that we're all distinct, "special", each one of us having one's own distinct theoretical independent world. And I think that this generalization, that we are all somehow "the same", is an unjustified philosophical claim. So I think you need something stronger than your own personal feelings, that agony and despair are disvaluable to you, to support your claim that they are disvaluable to everyone."
(Metaphysician Undercover)
DP: I am not without idiosyncrasies. But short of radical scepticism, the claim that agony and despair are disvaluable by their very nature is compelling. If you have any doubt, put your hand in a flame. Animals with a pleasure-pain axis have the same strongly evolutionarily conserved homologous genes, neurological pathways and neurotransmitter systems for pleasure- and pain-processing, and the same behavioural response to noxious stimuli. Advanced technology in the form of reversible thalamic bridges promises to make the conjecture experimentally falsifiable too (cf. Could conjoined twins share a mind?). Reversible thalamic bridges should also allow partial “mind-melding” between individuals of different species.

PP: "What probabilities would you put on the existence of alien life? What about the chances of meeting or communicating with them?"
(Down The Rabbit Hole)
DP: My best guess is that what Eric Drexler called the "thermodynamic miracle” of life's genesis means that the percentage of life-supporting Hubble volumes where primordial life emerged more than once is extremely low. If so, then the principle of mediocrity suggests that we're alone. There will be no scope for cosmic rescue missions – even if we optimistically believe that Earth-originating life would be more likely to mitigate suffering elsewhere than to spread it.

If this conjecture turns out to be correct, then our NU ethical duties will have been discharged when we have phased out the biology of suffering on Earth and taken steps to ensure it doesn't recur within our cosmological horizon.

Naturally, I could be mistaken.
For a radically different perspective, see:
Robin Hanson on Aliens

PP: "I don’t see how that’s a problem for PU though. I think they could easily respond to this concern by simply stating that we regrettably have to sacrifice the preferences of one group to fulfill the preferences of a more important group in these sorts of dilemmas. I also think this sort of thing applies to hedonism also. In this dilemma, it seems we would also have to choose between prioritizing reducing the suffering of the predator or prioritizing reducing the suffering of the prey."
(TheHedoMinimalist)
DP: Even if we prioritise, preference utilitarianism doesn’t work. Well-nourished tigers breed more tigers. An exploding tiger population then has more frustrated preferences. The swollen tiger population starves in consequences of the dwindling numbers of their prey. Prioritising herbivores from being predated doesn’t work either – at least, not on its own. As well as frustrating the preferences of starving predators, a population explosion of herbivores would lead to mass starvation and hence more even frustrated preferences. Insofar as humans want ethically to conserve recognisable approximations of today’s "charismatic mega-fauna", full-blown compassionate stewardship of Nature will be needed: reprogramming predators, cross-species fertility-regulation, gene drives, robotic “AI nannies” – the lot. From a utilitarian perspective, piecemeal interventions to solve the problem of wild animal suffering are hopeless.

PP: "...if evolution is true and it's been in play for at least a few billion years, shouldn't the status quo for intelligence, longevity, and happiness be optimum/maximum for the current "environment". In other words, we have in terms of the trio of intelligence, longevity, and happiness, the best deal nature has to offer. We shouldn't, in that case, attempt to achieve superintelligence, superlongevity, and superhappiness..."
(TheMadFool)
DP: Thank you for the kind words. Evolution via natural selection is a monstrous engine for the creation of pain and suffering. But Darwinian life contains the seeds of its own destruction. Yes, our minds are well-adapted to the task of maximising the inclusive fitness of our genes in the environment of evolutionary adaptedness (EEA). So humans are throwaway vehicles for self-replicating DNA. But sentient beings are poised to seize control of their own destiny. Recall that traditional natural selection is "blind". It's underpinned by effectively random genetic mutations and the quasi-random process of meiotic shuffling. Sexual reproduction is a cruel genetic lottery. However, intelligent agents are shortly going to rewrite their own source code in anticipation of the likely behavioural and psychological effects of their choices. As the reproductive revolution unfolds, genes and allelic combinations that promote superintelligence, superlongevity and superhappiness will be strongly selected for. Genes and allelic combinations associated with low intelligence, reduced longevity and low mood will be selected against.

My work has focused on the problem of suffering and the challenge of coding genetically hardwired superhappiness. But consider the nature of human intelligence. Are human minds really “the best deal Nature has to offer”, as you put it? Yes, evolution via natural selection has thrown up an extraordinary adaptation in animals with the capacity for rapid self-propelled motion, namely (1) egocentric world-simulations that track fitness-relevant patterns in their local environment. Phenomenal binding in the guise of virtual world-making (“perception”) is exceedingly adaptive (cf. World-making). Neuroscientists don't understand how a pack of supposedly classical neurons can do it. Evolution via natural selection has thrown up another extraordinary adaptation in one species of animal, namely (2) the virtual machine of serial logico-linguistic thought. Neuroscientists don't understand how serial thinking is physically possible either. However, what evolution via natural selection hasn't done – because it would entail crossing massive "fitness gaps” – is evolve animals supporting (3) programmable digital computers inside their skulls. Slow, serial, logico-linguistic human thinking can't compete with a digital computer executing billions of instructions per second. So digital computers increasingly outperform humans in countless cognitive domains. The list gets longer by the day. Yet this difference in architecture doesn't mean a divorce between (super)intelligence and consciousness is inevitable. Recursively self-improving transhumans won't just progressively rewrite their own source code. Transhumans will also be endowed with the mature successors of Neuralink; essentially Net-enabled "narrow" superintelligence on a neurochip (cf. Monkey plays pong with mind). I’m sceptical about a full-blown Kurzweilian fusion of humans with our machines, but some degree of "cyborgisation" is inevitable. Get ready for a biointelligence explosion.

Archaic humans are cognitively ill-equipped in other ways too. Not least, minds evolved under pressure of natural selection lack the conceptual schemes to explore billions of alien state-spaces of consciousness:
Psychedelics and Sanity
I could go on; but in short: expect a major evolutionary transition in the development of life.

PP: "Come to think of it, you may have been talking about the possibility of an infinite multiverse, where suffering is, well, infinite? Is this something you are concerned about?"
(Down The Rabbit Hole)
DP: The possibility that we live in a multiverse scares and depresses me:
Suffering in the Multiverse
I'm not sure the notion of physically realised infinity is intelligible. But even if Hilbert space is finite, it's still unimaginably big – albeit infinitesimally small compared to a notional infinite multiverse.

PP: "If a particular PU theory doesn’t have anything to say about wild animal suffering then the proponent of that theory probably just doesn’t think that wild animal suffering is important to resolve. I think we would need to make an argument in favor of focusing our time and energy on wild animal suffering. Otherwise, how could we know that ending the suffering of wild animals is a worthy pursuit and a wise use of our time."
(TheHedoMinimalist)
DP: Biotech informed by negative or classical utilitarianism can get rid of disvaluable experience altogether over the next few centuries. Preference utilitarianism plus biotech might do so too if enough people were to favour a biological-genetic strategy for ending suffering: I don’t know. Either way, any theory of (dis)value or ethics that neglects the interests of nonhuman animals is arbitrarily anthropocentric. Nonhuman animals are akin to small children. They deserve to be cared for accordingly. The world needs an anti-speciesist revolution:
The Antispeciesist Revolution

PP: "...what if a brainstorming session of the world's leading minds came to the conclusion that the best game plan/strategy is precisely what we thought we could do better than i.e. random mutation is the solution "...to seize control of their (our) destiny"? You many ignore this point if you wish but I'd be grateful and delighted to hear your response."
(TheMadFool)
DP: One of the most valuable skills one can acquire in life is working out who are the experts in any field, then (critically) deferring to their expertise. But who are the world's leading minds in the nascent discipline of futurology? In bioethics? We hold the designers and programmers of inorganic robots (self-driving cars etc) accountable for any harm they cause. Sloppy code that causes injury to others can't be excused with a plea that the bugs might one day be useful. By contrast, humans feel entitled to conduct as many genetic experiments involving sentient organic robots as they like, regardless of the toll of suffering (cf. Antinatalism). Anyhow, to answer your question: if the world’s best minds declared that we should conserve the biological-genetic status quo, then I think they'd be mistaken. If we don’t reprogram the biosphere, then unimaginable pain and suffering still lie ahead. The horrors of Darwinian life would continue indefinitely. Transhumanists believe that intelligent moral agents can do better.

PP: "...where's the proof that depression doesn't serve a useful purpose?
(Counterpunch)
DP: If we take a gene's-eye-view, then a predisposition to depression frequently did serve a useful purpose in the environment of evolutionary adaptedness. See the literature on the
Rank Theory of depression
But this is no reason to conserve the biology of low mood. Depression is a vile disorder.

PP: "Universe/s could exist or spawn forevermore?"
(Down The Rabbit Hole)
DP: Cosmology is in flux. We understand enough, I think, to sketch out how experience below hedonic zero could be prevented in our forward light-cone. At times I despair of a political blueprint for the abolitionist project, but technically it's feasible. Yet even if we're alone in our Hubble volume, does suffering exist elsewhere that rational agents are powerless to prevent? I worry about such things, but it's not fruitful. Once intelligent agents are absolutely certain that their ethical duties have been discharged, I think the very existence of suffering is best forgotten.

PP: "You have no understanding of what it means to be human."
(Noble Dust)
DP: Maybe. But if so, would you argue that the World Health Organization should scrap its founding constitution ("health is a state of complete physical, mental and social wellbeing") because it too shows no understanding of what it means to be human? After all, nobody in history has yet enjoyed health as so defined.

PP: "I’m not seeing how your comparison between animals and small children would even be that compelling when it comes to persuading people that we should care about wild animal suffering. It’s even less compelling to me because I happen to really dislike children. You probably have a better chance convincing me to care about wild animals lol."
(TheHedoMinimalist)
DP: If a small child were drowning, you would wade into a shallow pond to rescue the child – despite your professed dislike of small children, and your weaker preference not to get your clothes wet? I'm guessing you would do the same if the drowning creature were a dog or pig. Humanity will soon (by which I mean within a century or two) be able to help all nonhuman sentience with an equivalent level of personal inconvenience to the average citizen, maybe less. Technology massively amplifies the effects of even tiny amounts of benevolence (or malevolence).

An ethic of negative or classical utilitarianism mandates compassionate stewardship of the living world. This does not mean that convinced negative or classical utilitarians should urge raising taxes to pay for it. The intelligent utilitarian looks for effective policy options that are politically saleable. I never thought the problem of wild animal suffering would be seriously discussed in my lifetime, but I was wrong:
Wild Animal Suffering (Wikipedia)
If asked, a great many people are relaxed about the prospect of less suffering in Nature so long as suffering-reduction doesn’t cause them any personal inconvenience. Hence the case for technical fixes.

PP: "But I don't want to be pleasure or happiness junkie any more than I want to be addicted to heroin or opium."
(TheMadFool)
DP: But you are addicted to opioids. Everyone is hooked:
Neuroscience of 'liking' and 'wanting'.
Would-be parents might do well to reflect on how breeding creates new endogenous opioid addicts. For evolutionary reasons, humans are mostly blind to the horror of what they are doing:
Natalism vs Antinatalism
Addiction corrupts our judgement. It's treatable, but incurable. Transhumanism offers a potential escape-route.

PP: "So, why base your philosophy, transhumanism, on what you admit is an addiction?"
(TheMadFool)
DP: To the best of my knowledge, there is no alternative. The pleasure-pain axis ensnares us all. Genetically phasing out experience below hedonic zero can make the addiction harmless. The future belongs to opioid-addicted life-lovers, not "hard" antinatalists. Amplifying endogenous opioid function will be vital. Whereas taking exogenous opioids typically subverts human values, raising hedonic range and hedonic set-points can potentially sustain and enrich civilisation.

PP: "I've just been looking at a list of genetic disorders, and I'm not as happy as I might be wasn't one of them."
(Counterpunch)
DP: Yet depression is a devastating disorder. It has a high genetic loading. Depression is at least as damaging to quality of life as the other genetic disorders listed. According to the WHO, around 300 million people worldwide are clinically depressed. "Sub-clinical” depression afflicts hundreds of millions more. If humanity conserves its existing genome, then depressive disorders will persist indefinitely. The toll of suffering will be unimaginable. Mastery of our genetic source code promises an end to one of the greatest scourges in the whole world. By all means, urge exhaustive research and risk-benefit analysis. But I know of no good ethical reason to conserve such an evil.

PP: "You're moving the goalposts"
(Noble Dust)
DP: Sorry, could you unpack the footballing metaphor for me? What “goalposts”? As far as I know, I’ve consistently been arguing for replacing the biology of pain and suffering with life based on gradients of genetically programmed well-being.

PP: "You talk about this "hedonic zero" which admittedly I don't fully comprehend and do wish you to explain in more detail"
(Outlander)
DP: Apologies, by “hedonic zero” I just mean emotionally neutral experience that is perceived as neither good nor bad. Hedonic zero is what utilitarian philosopher Henry Sidgwick called the “natural watershed” between good and bad experience – though it’s complicated by “mixed states” such as bitter-sweet nostalgia.

Chess? I enjoy playing against a chess engine of super-grandmaster strength. I lose every time. By contrast, I wouldn't ever enjoy losing against a human opponent. This is because I'm a typical male human primate. Playing chess against other humans is bound up with the primate dominance behaviour of the African savannah. I trust future sentience can outgrow such primitive zero-sum games.

Thank you for the link to The Twilight Zone (cf. A Nice Place To Visit).
Perhaps see my response to What if you don’t like it in Heaven?
In short, if we upgrade our reward circuitry, then all experience will be heavenly by its very nature.

PP: "I think it’s an open question whether or not a negative utilitarian should rescue that child"
(TheHedoMinimalist)
DP: Negative utilitarianism (NU) is compassion systematised. NUs aren't in the habit of letting small children drown any more than we're plotting Armageddon. I'm as keen on upholding the sanctity of life in law as your average deontologist. Indeed, I think the principle should be extended to the rest of the animal kingdom – so-called High-tech Jainism.
The reason for such advocacy is that NU is a consequentialist ethic. Valuing and even sanctifying life is vastly more likely to lead to the ideal outcome, i.e. the well-being of all sentience, than cheapening life.

"In’t it cost plenty of money to implement any sort of technical fix as a means to end the suffering of wild animals?"
(TheHedoMinimalist)
DP: A pan-species welfare state might cost a trillion dollars or more at today's prices – maybe almost as much as annual global military expenditure. It's unrealistic today, even if humans weren't systematically harming nonhumans in factory-farms and slaughterhouses. However, human society is on the brink of a cultured meat revolution. Our "circle of compassion" will expand in its wake. The most expensive free-living organisms to help won't be the small fast-breeders, as one might naively suppose (cf. gene-drives.com), but large, slow-breeding vertebrates. I did a costed case-study for free-living elephants a few years ago:
A Welfare State for Elephants

Any practically-minded person (they exist even on a philosophy forum) is likely to be exasperated. What's the point of drawing up blueprints that will never be implemented? Yet the exponential growth of computer power means that the price of such interventions will collapse. So it's good to have a debate now over the merits of traditional conservation biology versus compassionate conservation. Bioethicists need to inform themselves of what is – and isn't – technically feasible. On the latter score, at least, the prospects are stunningly good. Biotech can engineer a happy biosphere. Politically, such a project may take hundreds or even thousands of years. But I can't think of a more worthwhile goal.

PP: "Wait, so are you like a rule utilitarian then?"
(TheHedoMinimalist)
DP: I'm a strict NU. And it's precisely because I'm a strict NU that I favour upholding the sanctity of human and nonhuman life in law. Humans can't be trusted. The alternative to such legal protections would most likely be more suffering. Imagine if people thought that NU entailed letting toddlers drown! Being an effective NU involves striking alliances with members of other ethical traditions. It involves winning hearts and minds. Winning people over to the abolitionist project is a daunting enough challenge as it is. Anything that hampers this goal should be discouraged.

PP: "So, do you think that it should illegal to let a child drown even under circumstances where you don’t have parental duties for that child and you don’t have a particular job that requires you to prevent that child from drowning like a nanny or a lifeguard?"
(TheHedoMinimalist)
DP: Yes.

"But, what if a person is a secret NU and he decides to let the child drown? Most of the time, it seems to me that the public wouldn’t know if someone let the child drown because they were a NU since it seems that most NUs only talk about being NUs under an anonymous online identity. Given this, it seems to me that NUs do not actually need to believe that we should prevent people from dying in order to maintain alliances with other ethical theories. Rather, I think they would just need to be collectively dishonest about their willingness to let people die as long as it wouldn’t do anything to worsen the reputation of NU."
DP: Such calculated deceit is probably the recipe for more suffering. So it's not NU. Imagine if Gautama Buddha ("I teach one thing and one thing only: suffering and the end of suffering”) had urged his devotees to practice deception and put vulnerable people out of their misery if the opportunity arose...

PP: "Negative utilitarianism or hedonism is akin to saying that the solution to the problem is getting rid of smoke detection. It just doesn't make sense to me from the get-go."
(ChatteringMonkey)
DP: Agony and despair are inherently bad, whether they serve a signaling purpose (e.g. a noxious stimulus) or otherwise (e.g. neuropathic pain or lifelong depression).

Almost no one disputes subjectively nasty states can play a signalling role in biological animals. What's controversial is whether they are computationally indispensable or whether they can be functionally replaced by a more civilised signalling system. The development of ever more versatile inorganic robots that lack the ghastly "raw feels" of agony and despair shows an alternative signalling system is feasible. "Cyborgisation" (smart prostheses, etc) and hedonic recalibration aren't mutually exclusive options for tomorrow's (trans)humans. A good start will be ensuring via preimplantation genetic screening and soon gene-editing that all new humans are blessed with the hyperthymia and elevated pain-tolerance ("But pain is just a useful signaling system!") of the luckiest 1% (or 0.1%) of people alive today:
Could physical pain be eliminated?

More futuristic transhuman options, i.e. an architecture of mind based entirely on gradients of bliss, can be explored later this century and beyond. But let's tackle the most morally urgent challenges first.

PP: "In very no-nonsense terms, life makes an offer we can't refuse - pleasure is just too damned irresistible for us to reject anything that has it as part of the deal and thereby hangs a tale, a tale of diabolical deception"
(TheMadFool)
DP: Yes, well put. In their different ways, pain and pleasure alike are coercive. Any parallel between heroin addicts and the drug naïve is apt to sound strained, but endogenous opioid addiction is just as insidious at corrupting our judgement.

The good news is that thanks to biotech, the substrates of bliss won't need to be rationed. If mankind opts for a genetically-driven biohappiness revolution, then, in principle at least, everyone's a "winner". Contrast the winners and losers of conventional social reforms.

PP: "I agree with you that it might be best for a fairly high profile NU like yourself to teach your fans and people who might be interested in NU that they should prevent children from drowning. I think you can and kinda have created an implicit double message when describing the reasons for why they should prevent the child from drowning. The main reason that you have stated seems to be related to this being a good PR move for NU. But, I don’t think this genuinely teaches your NU fans that they really shouldn’t allow children to die. Rather, you seem to just be teaching them(in a somewhat indirect and implicit way) to not damage the reputation of NU. Your fans are not stupid though. They know that you seem have your reasons for teaching what you teach and I think they would assume that you might actually want them to let a child die even if you can’t express that sentiment without creating a negative outcome that would lead to more suffering."
(TheHedoMinimalist)
DP: Thanks, you raise some astute but uncomfortable points. Asphyxiation is a ghastly way to die, but even if death were instantaneous, there is something rather chilling about an ethic that seems to say pain-ridden Darwinian humans would be better off not existing. Classical utilitarianism says the same, albeit for different reasons; ideally our matter and energy should be converted into pure, undifferentiated bliss (hedonium).

However, if some version of the abolitionist project ever comes to pass – whether decades, centuries or millennia hence – its completion will presumably owe little to utilitarian ethicists. Maybe the end of suffering will owe as little to utilitarianism as the advent of pain-free surgery did. I believe that transhuman life based on gradients of bliss will one day seem to be common sense – and in no more need of ideological rationalisation than breathing.

PP: "I don't think anything is really 'inherent'."
(ChatteringMonkey)
DP: An ontic structural realist (cf. Ontic Structural Realism) might agree with you. But if someone has no place in their ontology for the inherent ghastliness of my pain, so much the worse for their theory of the world. And I fear I'm typical.

PP: "If they serve a signalizing purpose than they themselves are not bad, but the circumstances that lead to agony and despair are"
(ChatteringMonkey)
DP: But their signalling "purpose" is to help our genes leave more copies of themselves. Agony and despair are still terrible even when they fulfil the functional role of maximizing the inclusive fitness of our DNA.

PP: "But some malfunctioning smokedetectors are not a reason to get rid of all smokedetection, nor does it make getting rid of smokedetection an end in itself."
(ChatteringMonkey)
DP: Recall transhumanists / radical abolitionists don't call for abolishing smoke-detection, so to speak. Nociception is vital; the "raw feels" of pain are optional. Or rather, they soon will be...

"What's also controversial I'd say is whether we 'should' replace them by a more 'civilised' signaling system. What is deemed more civilized no doubt depends on the perspective you are evaluating it from."
DP: Perhaps the same might be said of medicine pre- and post-surgical anesthesia. However, a discussion of meta-ethics and the nature of value judgements would take us far afield.

"I think, and we touched on this a few pages back, a lot of this discussion comes down to the basic assumption of negative utilitarianism, and whether you buy into it or not. If you don't, the rest of the story doesn't necessarily follow because it builds on that basic assumption."
DP: I happen to be a negative utilitarian. NU is a relatively unusual ethic of limited influence. An immense range of ethical traditions besides NU can agree, in principle, that a world without suffering would be good. Alas, the devil is in the details...

PP: "In other words, I envision a state, a future state, in which injury/harm to mind and body would simply cause a red light to flash and when something good happens to us, all that does is turn on a green light, the unpleasantness of pain or the pleasantness of pleasure will be taken out of the equation as it were."
(TheMadFool)
DP: Complete "cyborgisation", i.e. offloading all today's nasty stuff onto smart prostheses, is one option. A manual override is presumably desirable so no one feels they have permanently lost control of their body. But abandoning the signalling role of information-sensitive gradients of well-being too would be an even more revolutionary step: the prospect evokes a more sophisticated version of wireheading rather than full-spectrum superintelligence. At least in my own work, I've never explored what lies beyond a supercivilisation with a hedonic range of, say, +90 to +100. A hedonium / utilitronium shockwave in some guise? Should the abundance of empirical value in the cosmos be maximised as classical utilitarianism dictates? Maximisation is not mandatory by the lights of negative utilitarianism; but I don't rule out that posthumans will view negative utilitarianism as an ancient depressive psychosis, if they even contemplate that perspective at all.

PP: "The devil is in the details indeed, I don't think many traditions would agree that sterilization of our forward light-cone is the most moral course of action for instance..."
(ChatteringMonkey)
DP: Indeed so. Programming a happy biosphere is technically harder than sterilizing the Earth. But I can't see how the problem of suffering could be solved in any other way.

PP: "What if transhumanism becomes a reality but people use it only for recreational purposes, you know like going to Disney land?"
(TheMadFool)
DP: Consider the core transhumanist "supers", i.e. superintelligence, superlongevity and superhappiness.
If you became a full-spectrum superintelligence, would you want to regress to being a simpleton for the rest of the week?
If you enjoyed quasi-eternal youth, would you want to crumble away with the progeroid syndrome we call aging?
If you upgraded your reward circuitry and tasted life based on gradients of superhuman bliss, would you want to revert to the misery and malaise of Darwinian life?
Humans may be prone to nostalgia. Transhumans – if they contemplate Darwinian life at all – won't miss it for a moment.
Pitfalls?
I can think of a few...
The Downsides of Transhumanism

PP: "Also, it seems that the idea of genetically modifying humans to be incapable of suffering is chilling and disturbing to most people as well."
(TheHedoMinimalist)
DP: I wonder to what extent hesitancy stems from principled opposition, and how much from mere status quo bias?
People tend to be keener on the idea of Heaven than the tools to get there.
Some suspicion is well motivated. The history of utopian experiments is not encouraging.
"But this time it's different!" say transhumanists. But then it always is.
That said, life based on gradients of intelligent bliss is still my tentative prediction for the long-term future of sentience. Suffering isn't just vile; it's pointless.

PP: "Correct me if I'm wrong but transhumanism envisions the triad of supers (superintelligence, superlongevity, and superhappiness) to work synergistically, complementing each other as it were to produce an ideal state for humans or even other animals. What if that assumption turns out to be false?"
(TheMadFool)
DP: Yes, talk of a "triple S" civilisation is a useful mnemonic and a snappy slogan for introducing people to transhumanism. But are the "three supers" in tension? After all, a quasi-immortal human is scarcely a full-spectrum superintelligence. A constitutionally superhappy human is arguably a walking oxymoron too. For what it's worth, I'm sceptical this lack of enduring identity matters. Archaic humans don't have enduring metaphysical egos either. "Superlongevity" is best conceived as an allusion to how death, decrepitude and aging won't be a feature of post-Darwinian life. A more serious tension is between superintelligence and superhappiness. I suspect that at some stage, posthumans will opt for selective ignorance of the nature of Darwinian life – maybe even total ignorance. A limited amnesia is probably wise even now. There are some states so inexpressibly awful that no one should try to understand them in any deep sense, just prevent their existence.

PP: "Why is a constitutionally superhappy human "...arguably a walking oxymoron"?"
(TheMadFool)
DP: To clarify: even extreme hyperthymics today are still recognisably human. But future beings whose reward circuitry is so enriched that their "darkest depths" are more exalted than our "peak experiences" are not human as ordinarily understood – even if they could produce viable offspring via sexual reproduction with archaic humans, i.e. even if they fulfil the normal biological definition of species membership. A similar point could be made if hedonic uplift continues. There may be more than one biohappiness revolution. Members of a civilisation with a hedonic range of, say, +20 to +30 have no real insight into the nature of life in a supercivilisation whose range extends from a hedonic low of, say, +90 to an ultra-sublime +100. With pleasure, as with pain, "more is different" – qualitatively different.

PP: "By way of bolstering my point that an "...enduring identity..." is key to hedonism I'd like to relate an argument made by William Lane Craig which boils down to the claim that human suffering is, as per him, orders of magnitude greater than animal suffering for the reason that people have an "...enduring identity..." I suppose he means to say that being self-aware (enduring identity) there's an added layer to suffering. Granted that William Lane Craig may not be the best authority to cite, I still feel that he makes the case for why hedonism is such a big deal to us humans and by extension to transhumanism."
(TheMadFool)
DP: As a point of human psychology, you may be right. However, I'd beg to differ with William Lane Craig. The suffering of some larger-brained nonhuman animals may exceed the upper bounds of human suffering (cf. Nonhuman animal suffering) – and not on account of their conception of enduring identity (cf. Ultra-Parfitianism). This is another reason for compassionate stewardship of Nature rather than traditional conservation biology.

PP: "I suppose you have good reasons for recommending (selective) amnesia in re Darwinian life but wouldn't that be counterproductive? Once bitten, twice shy seems to be the adage transhumanism is about - suffering is too much to bear (and happiness is just too irresistible) - and transhumanists have calibrated their response to the problems of Darwinian life accordingly. To forget Darwinian life would be akin to forgetting an important albeit excruciatingly painful lesson which might be detrimental to the transhumanist cause."
(TheMadFool)
DP: I agree about potential risks. Presumably our successors will recognise too that premature amnesia about Darwinian life could be ethically catastrophic. If so, they will weigh the risks accordingly. But there is a tension between becoming superintelligent and superhappy, just as there is a tension today between being even modestly intelligent and modestly happy. What now passes for mental health depends on partially shutting out empathetic understanding of the suffering of others – even if one dedicates one's life to making the world a better place. Compare how mirror-touch synesthetes may feel your pain as their own. Imagine such understanding generalised. If one could understand even a fraction of the suffering in the world in anything but some abstract, formal sense, then one would go insane. Possibly, there is something that humans understand about reality that our otherwise immensely smarter successors won't grasp.

PP: "So posthumans, as the name suggests, wouldn't exactly be "humans." Posthumans would be so advanced - mentally and physically - that we humans wouldn't be able to relate to them amd vice versa. It would be as if we were replaced by posthumans instead of having evolved into them."
(TheMadFool)
DP: Yes. Just as a pinprick has something tenuously in common with agony, posthuman well-being will have something even more tenuously in common with human peak experiences. But mastery of the pleasure-pain axis promises a hedonic revolution; some kind of phase change in hedonic tone beyond human comprehension.

PP: "Bravo! I sympathize with that sentiment. Sometimes it takes a whole lot of unflagging effort to see the light and this for me is one such instance of deep significance to me."
(TheMadFool)
DP: After decades of obscurity and fringe status, a policy agenda of compassionate conservation may even be ready to go mainstream. Here is the latest Vox:
The Wild Animal Suffering Movement

PP: "I absolutely agree. My own thoughts on this are quite similar. I once made the assertion that the truly psychologically normal humans are those who are clinically depressed for they see the world as it really is - overflowing with pain, suffering, and all manners of abject misery. Who, in faer "right mind", wouldn't be depressed, right? On this view what's passed off as "normal" - contentment and if not that a happy disposition – is actually what real insanity is. In short, psychiatry has completely missed the point which, quite interestingly, some religions like Buddhism, whose central doctrine is that life is suffering, have clearly succeeded in sussing out."
(TheMadFool)
DP: Well said. In contrast to depressive realism, what passes for mental health is a form of affective psychosis. Yet perhaps we can use biotech and IT to build a world fit for euphoric realism – a world where reality itself seems to be conspiring to help us.

PP: "Does the transhumanist vision ultimately lead to something of a morgue where bodies are stored side-by-side and atop on another"
(Outlander)
DP: A morgue doesn't quite evoke the grandeur of a "triple S" civilisation. But I guess it's conceivable. Even today, we each spend our life encased within the confines of a transcendental skull – not to be confused with the palpable empirical skull whose contours one can feel with one's "virtual" hands (cf. The illusion of solidity). Immersive VR or some version of the transcension hypothesis is one trajectory for the future of sentience. Rather than traditional spacefaring yarns – who wants to explore what are really lifeless gas giants or sterile lumps of rock!? – maybe intelligence will turn inwards to explore inner space. The experience of inner space – and especially alien state-spaces of consciousness – can be far bigger, richer and more diverse than interplanetary or hypothetical interstellar travel pursued in ordinary waking consciousness. For what it's worth, I've personally no more desire to spend time on Mars than to live in the Sahara desert.

Anyhow, to answer your question: I don't know. For technical reasons, I think the future lies in gradients of superhuman bliss. I've no credible conception of what guises that bliss will take. It will just be better than your wildest dreams.

PP: "I take it that in your perfect world, just sitting quietly in a room alone would be "enough", above hedonic zero. But of course that wouldn't consign us to perpetual inaction as we just sat in a room alone doing nothing forever (like the "wire-heads" in Larry Niven's Known Space universe, who are addicted to direct electronic stimulation of their pleasure centers), because we would still have the opportunity for even more enjoyable experiences if we went out and accomplished things, learned things, taught others, helped them in other ways, etc.
Does that sound about right?"
(Pfhorrest)
DP: Thank you. Many complications need unpacking here. We now know that wireheading, i.e. intracranial self-stimulation of the mesolimbic dopamine system, stimulates the desire centres of the CNS rather than the opioidergic pleasure centres (cf. The Orgasmic Brain). But let's here use "wireheading" in the popularly accepted sense of unvarying bliss induced by microelectrode stimulation: a perpetual hedonic +10. Short of genetic enhancement, there is no way for human wireheads to exceed the upper bounds of bliss allowed by their existing reward circuitry; but for negative utilitarians, at least, this constraint isn't a moral issue. As a NU, I reckon an entire civilisation of wireheads that had discharged all its responsibilities to eradicate suffering would be morally unimpeachable. However, I don't urge wireheading except in cases of refractory pain and depression. It's not ecologically viable because there will always be strong selection pressure against any predisposition to wirehead. The idea of wireheading appeals mostly to pain-ridden depressives.

Another scenario combines hedonic recalibration with the VR equivalent of Nozick's Experience Machines (cf. Nozick's Experience Machine). Immersive VR may transform life and civilisation. But once again, I don't advocate ubiquitous Experience Machines because of the nature of selection pressure in basement reality. Any predisposition not to plug into full-blown Experience Machines will be genetically adaptive.

However, there is a third option that is potentially saleable, ecologically viable and also my tentative prediction for the future of sentience. Genetically-based hedonic uplift and recalibration isn't, strictly speaking, pleasure-maximizing. Recall how today's high-functioning hyperthymics are blissful, but they aren't "blissed out". A civilisation based on information-sensitive gradients of intelligent bliss is not a perfect world by strict classical utilitarian criteria. Recalibrating hedonic range and hedonic set-points in basement reality may even be "conservative", in a sense. Your values, preference architecture and relationships can remain intact even as your default hedonic tone is uplifted. Critical insight and social responsibility can be conserved. Neuroscientific progress can continue unabated too – including perhaps the knowledge of how to create a hedonic +90 to +100 supercivilisation.

Heady stuff. Alas, Darwinian life still has vicious surprises in store.

PP: "Do you believe that there are limits to the extent in which a person is ethically obligated to get involved on someone else's behalf? Do you think there comes a point in which a person is justified in saying, "not my problem"?"
(Darthbarracuda)
DP: A good question. IMO a plea of "not my problem" is irrational and immoral. Closed individualism is a false theory of personal identity:
Empty Individualism
Insofar as you are rational, and insofar as it lies within your power, you should help others as much as you should help your future namesakes ("you"). For sure, there are massive complications. Evolution didn't design us to be rational. Reality seems centred on me. I'm most important, followed by family, friends and allies:
Interpreting the world
Intellectually, I know this is delusional nonsense. But the egocentric illusion is so adaptive that it's effectively hardwired across the animal kingdom. If one aspires to exceed one's design specifications and instead display "moral heroism", then one risks burnout.

What does helping others mean in practice? Well, "technical solutions to ethical problems" is one pithy definition of transhumanism. The biotech and IT revolutions mean that shortly we'll be able to help even the humblest forms of sentience without risk of burnout, and indeed with minimal personal inconvenience. For instance, I'd strenuously urge everyone to go vegan; humanity's depraved treatment of nonhuman animals in factory-farms and slaughterhouses defies description. Yet the most effective way to "veganise" the world will be accelerating the development and commercialisation of cultured meat and animal products. Most people are weakly benevolent; if offered, they'll choose the cruelty-free cultured option. Animal agriculture will presumably be banned after butchery becomes redundant. A similar hard-headed ethical approach is needed to tackle the problem of wild animal suffering. Devoting half one's life to feeding famished herbivores in winter would be too psychologically demanding; piecemeal interventions would also be ineffective and maybe cause more long-term suffering. By contrast, hi-tech solutions to wild animal suffering are easier to implement and potentially much more effective.

PP: "At what point do you think we might cross the threshold between human and superhuman?"
(Metaphysician Undercover)
DP: Biologists define a species as a group of organisms that can reproduce with one another in their natural habitat to produce fertile offspring. We can envisage a future world where most babies are base-edited "designer babies". At some stage, the notional coupling of a gene-edited, AI-augmented transhuman and an archaic human on a reservation would presumably not produce a viable child.

Bioconservatives may be sceptical such a reproductive revolution will ever come to pass. Yet I suspect some kind of reproductive revolution may be inevitable. As humans progressively conquer the aging process later this century and beyond, procreative freedom as traditionally understood will eventually be impossible – whether the carrying capacity of Earth is 15 billion or 150 billion. Babymaking will become a rare and meticulously planned event:
The Reproductive Revolution

Possibly, you have a more figurative sense of "superhuman" in mind. My definition of the transition from human to transhuman is conventional but not arbitrary. In The Hedonistic Imperative (1995) I predicted, tentatively, that the world's last experience below hedonic zero in our forward light-cone would be a precisely datable event a few centuries from now. The Darwinian era will have ended. A world without psychological and physical pain isn't the same as a mature posthuman civilisation of superintelligence, superlongevity and superhappiness. But the end of suffering will still be a momentous watershed in the evolutionary development of life. I'd argue it's the most ethically important.

PP: "I was going to suggest if such a- in my view outlandish- reality would ever come to real and actual practice or fruition.. people would want to see the results for themselves first."
(Outlander)
DP: Why "outlandish"? For sure, untested genetic experiments conceived in the heat of sexual passion are "normal" today. But there may come a time when creating life via a blind genetic crapshoot will seem akin to child abuse. Recall that Darwinian life is "designed" to suffer. I reckon responsible future parents will want happy children blessed with good code. All sentient beings deserve the maximum genetic opportunity to flourish.

PP: "I had a topic I made before. Did I get it right in that topic?"
David Pearce on Hedonism
DP: Shawn, thank you. You are very kind. Some people may be a bit disconcerted that a negative utilitarian should talk so much about happiness, pleasure and even hedonism. But IMO, engineering a world with an architecture of mind based on information-sensitive gradients of well-being will prove to be the most realistic way to end suffering.

PP: "Wouldn't there be a vast portion of the human population which for one reason or another would not engage in this designer baby process? I would think that they might even revolt against it."
(Metaphysician Undercover)
DP: Suppose that a minority of parents do indeed decide they want "designer babies" rather than haphazardly-created babies. The explosive popularity of personal genomics services like 23andMe shows many people are proactive regarding their own genetic make-up and genetic family history – and curious about their partner's genetic code too. Suppose that the genetic basis of pain thresholds, hedonic range, hedonic set-points, antiaging alleles and, yes, alleles and allelic combinations associated with high intelligence becomes better understood. Naturally, most prospective parents want the best for their kids. To be sure, designing life is a bioethical minefield. But what kind of "revolt" from bioconservatives do you anticipate beyond simply continuing to have babies in the cruel, historical manner? No doubt the revolution will be messy. That said, I predict opposition will eventually wither.

PP: "Because the transhumans would be superhuman in some ways, they would be seen as a threat to the naturalists (or whatever you want to call them), and the God-fearers"
(Metaphysician Undercover)
DP: Yes, I agree. Ferocious controversy lies ahead.
Recall the first CRISPR babies were produced not to enhance the innate well-being of the gene-edited twins, but to enhance their intelligence with HIV-protection as a cover story (cf. CRISPR Twins Had Their Brains Altered). I don't know if the parents of Lulu and Nana were aware of the potential cognitive enhancement that CCR5 deletion confers in "animal models". Yet if they knew, they'd most likely approve. Chinese parents tend to be particularly ambitious for their kids. Here's one of my worries. Other things being equal, intelligence-amplification is admirable; but intelligence is a contested concept (cf. "IQ" and Intelligence). And the kind of intelligence that prospective parents are most likely to want amplified is the mind-blind, "autistic" component of general intelligence captured by "IQ" tests / SAT scores (etc), not social cognition, higher-order intentionality and collaborative problem-solving prowess (cf. Take The AQ Test). Just consider: would you be more excited by the prospect of becoming the biological parent of a John von Neumann or a Nelson Mandela? Yes, a hypothetical "high IQ" civilisation could potentially be awesome. Science, technology, engineering, and mathematics (STEM) disciplines would flourish. Yet there are poorly understood neurological trade-offs between a hyper-systematizing, hyper-masculine, "high-IQ"/AQ cognitive style and an empathetic cognitive style (cf. "Testing the Empathizing–Systemizing theory of sex differences and the Extreme Male Brain theory of autism in half a million people"). If designer babies were left entirely to the discretion of prospective parents, then a "high-IQ" civilisation would also be a high-AQ civilisation. The chequered history of so-called "high IQ" societies (cf. High IQ Societies) doesn't bode well. Note that I'm not making a value-judgement about what AQ is desirable for the individual or for civilisation as a whole, nor saying that high-AQ folk are incapable of empathy. But our current restrictive conception of intelligence is a recipe for the tribalism that intelligent moral agents should aim to transcend. Full-spectrum superintelligences will have a superhuman capacity for perspective-taking – including the perspectives of the unremediated / unenhanced.

More generally, genetically enhancing general intelligence is technically harder than coding for mood enrichment. The only way I know to create an accelerated biointelligence explosion would be unlikely to pass an ethics committee:
The Biointelligence Explosion
Ethically, I think our most urgent biological-genetic focus should be ending suffering:
The Abolitionist Project

PP: "So to put it bluntly, if you think this process "would be unlikely to pass an ethics committee", why are you discussing it as if it is a viable option? Isn't conspiracy toward something unethical itself unethical?"
(Metaphysician Undercover)
DP: Sorry, I should have clarified. By an "accelerated biointelligence explosion [that] would be unlikely to pass an ethics committee", I had in mind a deliberate project: cloning, with variations, super-geniuses like von Neumann (The Genius of John von Neumann); hothousing the genetically modified clones; and repeating the cycle of cloning with variations in an accelerating process of recursive self-improvement. This scenario is different from "ordinary" parents-to-be using preimplantation genetic screening and counselling and soon a little light genetic tweaking. I don't predict an accelerated biointelligence explosion as distinct from a long-term societal reproductive revolution. The reproductive revolution will be more slow-burning. It will most likely start with remedial gene-editing to cure well-acknowledged genetic diseases that almost no one wants to conserve. But humanity will become more ambitious. Germline interventions to modulate pain-tolerance, depression-resistance, hedonic range, prolonged youthful vitality and different kinds of cognitive ability will follow.

PP: "Variety is a very important aspect of life, I'd argue it's the essence of life. And it is the foundation of evolution. The close relationship between variety and life is probably why we find beauty in variety. Beauty is closely related to good, and the pleasure we derive from beauty has much capacity to quell suffering. This is why there is a custom of giving people who are suffering flowers."
(Metaphysician Undercover)
DP: Genome editing can create richer variety than is possible under a regime of natural selection and the meiotic shuffling of traditional sexual reproduction. But diversity isn't inherently good. Darwinian life offers an unimaginable diversity of ways to suffer.
"I don't think that a thing which has been designed not to suffer could even be called alive."
(Metaphysician Undercover)
DP: I promise Jo Cameron and Anders Sandberg ("I do have a ridiculously high hedonic set-point!") are very much alive. The challenge is to ensure all sentient creatures are so blessed. Well-being should be a design specification of sentience, not a fleeting interlude.

PP: "Well, I think the principal issue is that "suffering" is a very broad, general term, encompassing many types"
(Metaphysician Undercover)
DP: All experience below hedonic zero has something in common. This property deserves to be retired – made physiologically impossible because its molecular substrates are absent. Shortly, its elimination will be technically feasible. Later, its elimination will be sociologically feasible too. A world without suffering may sound "samey". Heaven has intuitively less variety than Hell. However, trillions of magical state-spaces of consciousness await discovery and exploration. Biotech is a godsend; let's use it wisely:
Post-Darwinian Life

PP: "Increasing the hedonic "resting level" does not, at least in my theory, eliminate the hedonic treadmill, if one is aware of greater possibility, one is inclined to seek it with body and mind. If one no longer wishes to strive, would you still call this humanity?"
(Outlander)
DP: Not humanity, but transhumanity.
No one ever gets bored of mu-opioidergic activation of their hedonic hotspot in the posterior ventral pallidum. But the genetic or pharmacological equivalent of "wireheading" (a misnomer) is not what I or most other transhumanists advocate. In future, steep or shallow information-sensitive gradients of well-being can be navigated with as much or as little motivation as desired. Indeed, it's worth stressing how hedonic uplift can be combined with dopaminergic hypermotivation to counter the objection that perpetual bliss will necessarily turn us into lotus-eaters.

PP: "How is the gap overcome legally? I've been in the nootropics community for about 10 years, and there's some kind of strong desire to advance human cognition with drugs like the MAPS association or otherwise Neuralink. Yet, legally its hard to overcome some of the problems associated with transhumanity and neoclassical legalism in the West..."
(Shawn)
DP: Yes, the legal obstacles to transhumanism are significant.
For instance, if one is an older person who doesn't want to miss out on transhuman life, then at present it's not lawfully possible to opt for cryothanasia at, say, 75 so one can be cryonically suspended in optimal conditions. If instead you wait until you die "naturally" aged 95 or whatever, then you'll be a shadow of your former self. Your prospects of mind-intact reanimation will be negligible. Effectively irreversible information loss is inevitable.

If you are a responsible prospective parent who wants to choose benign genes for your future children, then currently you can't lawfully pre-select benign alleles for your offspring unless you are at risk of passing on an "officially" medically-recognised genetic disorder.

If you are the victim of refractory depression or neuropathic pain, you can't lawfully sign up for a surgical implant and practise "wireheading".

Drugs are another legal minefield. Intellectual progress, let alone the development of full-spectrum superintelligence, depends on developing the study of mind as an experimental discipline: a post-Galilean science of consciousness. After all, one's own consciousness is all one ever directly knows; there's no alternative to the experimental method if one wants to explore both its content and the medium. Classical Turing machines can't help. Digital computers are zombies; despite the AI hype, they can't deliver superintelligence. Likewise, there are tenured professors of mind who are drug-naive. Drug-naivety is the recipe for scholasticism. Most of the best work investigating consciousness is done in the scientific counterculture, not in academia.

That said, dilemmas must be confronted. It's no myth: exploring psychedelics is hazardous:
Psychedelics: risks
Informed consent to becoming a psychonaut is impossible. And I'm a hypocrite. Woe betide anyone who tries to stop me taking whatever I choose; yet if I were a parent, then I wouldn't want my children's professors to be permitted to introduce them to psychedelics.

The long-term solution to the hazards of experimentation is genetically programmed invincible well-being. All trips will then be good trips – sometimes illuminating, sometimes magical, but always enjoyable in the extreme. But this scenario is still a pipedream. A moratorium on psychedelic research until we re-engineer our reward circuitry is a moratorium on knowledge.
I don't have a satisfactory answer.

PP: "What makes you think we understand enough to prevent suffering in the whole forward light cone?"
(Down The Rabbit Hole)
DP: Just as, tragically, a few genetic tweaks can make someone chronically depressed and pain-ridden, conversely a few genetic tweaks can make someone chronically happy and pain-free. CRISPR-based synthetic gene drives (cf. Gene drive - Wikipedia) defy the naively immutable laws of Mendelian inheritance: they allow benign alleles to be deliberately spread through the rest of Nature even when those alleles carry a modest fitness cost to the individual – a prospect that sounds counterintuitive, even ecologically illiterate. For sure, I'm omitting many complications. But an architecture of mind based entirely on gradients of well-being is technically feasible, with or without smart prostheses.
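A minimal, illustrative sketch of the point about inheritance (the deterministic model, parameter names and values below are assumptions chosen purely for illustration, not part of the exchange above): under ordinary Mendelian transmission a costly allele declines, whereas a germline "homing" drive that biases transmission can push the same costly allele towards fixation.

# Toy deterministic model: frequency of an engineered allele D under
# random mating, with an additive fitness cost and optional germline
# "homing" that biases transmission from D/w heterozygotes.

def next_frequency(p, cost=0.05, homing=0.9):
    """Return the frequency of D in the next generation.

    p      -- current frequency of D
    cost   -- additive fitness cost (w_DD = 1 - cost, w_Dw = 1 - cost/2)
    homing -- probability a D/w heterozygote is converted to D/D in the
              germline; 0.0 reproduces ordinary Mendelian inheritance
    """
    q = 1.0 - p
    w_DD, w_Dw, w_ww = 1.0 - cost, 1.0 - cost / 2.0, 1.0
    f_DD = p * p * w_DD          # fitness-weighted genotype frequencies
    f_Dw = 2.0 * p * q * w_Dw
    f_ww = q * q * w_ww
    mean_w = f_DD + f_Dw + f_ww
    # Heterozygotes transmit D with probability (1 + homing) / 2
    # rather than the Mendelian 1/2.
    return (f_DD + f_Dw * (1.0 + homing) / 2.0) / mean_w

def final_frequency(generations=40, p0=0.01, **kwargs):
    p = p0
    for _ in range(generations):
        p = next_frequency(p, **kwargs)
    return p

if __name__ == "__main__":
    # A costly allele released at 1% frequency:
    print(f"Mendelian, 40 generations:  {final_frequency(homing=0.0):.4f}")  # declines
    print(f"Gene drive, 40 generations: {final_frequency(homing=0.9):.4f}")  # approaches fixation

With these toy numbers, the drive allele climbs from 1% towards fixation within a few dozen generations, while the same allele under Mendelian inheritance slowly dwindles.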

PP: "To follow up on a question I asked on page 1, after reviewing the material, do you agree with Benatar's Axiological Asymmetry?"
(Down The Rabbit Hole)
DP: As a negative utilitarian, I agree with (a version of) David Benatar's Axiological Asymmetry. A perfect vacuum would be axiologically as well as physically perfect. I just don't think the asymmetry has the "strong" anti-natalist policy implications that Benatar supposes. The nature of selection pressure means the future belongs to life-lovers. Therefore, NUs should work with the broadest possible coalition of life-affirmers to create a world where existing sentience can flourish and new life is constitutionally happy, i.e. a world based on information-sensitive gradients of bliss. If intelligent beings modify their own source code, then coming into existence doesn't have to be a harm.

PP: "I guess what people will want to say/ask/want to see is.. okay David. Go ahead. Do it with your own kids, do whatever it is as you say."
(Outlander)
DP: Recall I'm a "soft" anti-natalist. I don't feel ethically entitled to bring more suffering into the world, genetically mitigated or otherwise:
The Antinatalist Declaration

PP: "And let us all openly observe them before any talk of legislation or anything that involves anybody else.. is involved. Reasonable enough, yes?"
(Outlander)
DP: It's precisely because creating new sentience does involve someone else that we should try to mitigate the harm:
Soft & Hard Antinatalism

PP: "If humans were to abolish all forms of suffering with technology, as you say, I think the only way of fully accomplishing this would be by constraining the limits of consciousness itself. Just like in the early dystopian We, people would have their imaginations removed, or carefully adjusted so that they could only ever imagine pleasurable things."
(Darthbarracuda)
DP: With a hedonic range of, say, +20 to +50, our imaginations could be stunningly enriched.

"In which case, humans would be incapable of feeling negative feelings regarding things we usually find important to feel negative feelings towards. For instance, the death of a loved one invokes sadness. Would technologically-enhanced humans feel sadness?"
DP: I feel entitled to want my death or misfortune to diminish the well-being of loved ones. I don't think I'm entitled to want them to suffer on my account. There can be diminished well-being even in posthuman paradise – although death and aging will eventually disappear, and posthuman hedonic dips can be higher than human hedonic peaks.

"What about other feelings, like that of accomplishment, that require some degree of struggle beforehand? Would there be an "accomplishment pill" that people would take when they want to feel accomplished, or a "love pill" when people want to feel loved (even if they have accomplished nothing, and have nobody to love)?"
DP: Engineering enhanced motivation and a consequent sense of accomplishment is feasible in conjunction with a richer default hedonic tone. The dopamine and opioid neurotransmitter systems are interconnected, but distinct.

PP: "Would the removal of all forms of negative feelings include feelings that are important for morality?"
(Darthbarracuda)
DP: Information-sensitive gradients of well-being are consistent with a strong personal code of morality and social responsibility. Depression, undermotivation and anhedonia tend to subvert one's values; conversely, depression-resistance makes one stronger, in every sense.

PP: "I can imagine a situation in which blissful slaves work constantly, die frequently, all with a happy smile and no sense that what is being done to them is wrong."
(Darthbarracuda)
DP: Low mood is the recipe for subordinate behaviour; elevated mood promotes active citizens:
Critique of Brave New World

PP: "How would we be compassionate?"
(Darthbarracuda)
DP: Compare a "hug drug" and empathetic euphoriant like MDMA ("Ecstasy"):
Utopian Pharmacology

PP: "How would we feel guilt?"
(Darthbarracuda)
DP: The functional analogues of guilt may be retained; but let's get rid of its ghastly "raw feels":
An information-theoretic perspective on Heaven

PP: "Which version don't you agree with?"
(Down The Rabbit Hole)
DP: David Benatar's version of the asymmetry argument purports to show that existence is always comparatively worse than non-existence. After intelligent moral agents phase out the biology of suffering, this claim will no longer hold.

PP: "Sure, and after dark becomes light it's no longer dark. Anything can be changed by this view of "oh after such and such" is applied. We need solutions. Concrete results. At least suggestions. You have a vision, that's great, so does everyone. What will you do in the here and now to see it follows through?"
(Outlander)
DP: I'm a researcher, not the leader of a millenarian cult with messianic delusions. Yes, just setting out a blueprint of what needs to be done feels inadequate. I'd like to do more. But reprogramming the biosphere will take centuries.

PP: "I'm having a hard time imagining what a hedonic dip would be like that did not involve some form of suffering. How do I remember that times were better without being disappointed with the present moment?"
(Darthbarracuda)
DP: Compare the peaks and dips of lovemaking, which (if one isn't celibate) are generically enjoyable throughout.

"I don't want the people I care about to suffer either, but if nobody cared if I were gone, that would be a very lonely existence. Loneliness that would have to be eliminated with technology. Companionship would not be genuine. If you feel sad when a loved one is gone, that is good, it is good that you feel bad, because it means your relationship was genuine."
(Darthbarracuda)
DP: Relationships in which one ever wants a loved one to suffer are inherently abusive – another cruel legacy of Darwinian life. Thinking about the biological basis of human relationships can be unsettling, but they are rooted in our endogenous opioid dependence.

PP: "It seems like authentic, genuine experiences may not be possible in a world without suffering. Things would no longer have any weight or meaning. Which of course would be a negative feeling that would need to be eliminated. The importance of meaning would be lost, and nobody would even care."
(Darthbarracuda)
DP: The happiest people today typically have the strongest relationships. By contrast, depression undermines relationships not least by robbing the victims of an ability to derive pleasure from the company of friends and family. Critics sometimes say we face a choice between happiness and meaning. It's a false dichotomy. Superhappiness will create a superhuman sense of significance by its very nature:
SuperSignificance

PP: "You tell us here, against all currently possible odds a heaven of infinite currently unimaginable bliss awaits, if we only listen to you, and if not, a Hell also awaits, a lifetime of Darwinian hell?"
(Outlander)
DP: I spoke in jest. But there's a serious point here. Evolution via natural selection is a fiendish engine for spreading unimaginable suffering. But selection pressure has thrown up a cognitively unique species. Humans are poised to gain mastery over their reward circuitry. Technically, we could phase out the biology of suffering and reprogram the biosphere to create life based on gradients of super-bliss – yes, "Heaven" if you like, only much better. What's more, hedonic uplift doesn't involve the proverbial "winners" and "losers". Biological-genetic elevation of my hedonic set-point doesn't adversely affect you any more than elevation of your hedonic set-point adversely affects me. Contrast the zero-sum status games of Darwinian life ("Hell"). Anyhow, a genetically-driven biohappiness revolution deserves serious scientific and philosophical critique. A big thank you to the organizers of The Philosophy Forum. But to answer your point, this transhumanist vision of post-Darwinian life ("Heaven") also deserves a larger-than-life billionaire or charismatic influencer to take these ideas mainstream.

PP: "Yet it created you, did it not? So, it is therefore now, in addition to this, an incredible engine for spreading unimaginable bliss, if your ideas are to be believed. Some bite the hands that feed them, but are you not attempting to amputate it altogether?"
(Outlander)
DP: Hell has an escape-hatch. Reaching it is a daunting challenge. But biotech offers tools of emancipation. Maybe posthumans will indeed enjoy eons of indescribable happiness. I certainly hope so. But Darwinian life is monstrous. No amount of bliss can somehow morally outweigh such obscene suffering. I wish sentient malware like us had never existed. It's not a fruitful thought, I know. May posthumans be spared such knowledge.

PP: "This should be called hyperhumanism!"
(Tzeentch)
DP: "Hyperhumanism" might be a more reassuring brand than transhumanism. But the pain-pleasure axis discloses the world's inbuilt metric of (dis)value. It's not some species-specific idiosyncrasy. By contrast, fear of death may be peculiar to a handful of intelligent animal species. Fear and death alike will eventually be preventable.

PP: "My hope in this discussion is to invite David Pearce to the discussion or alternatively see if other members can overcome the anxiety of death and live a more rich and happy life."
(Shawn)
DP: Shawn, thank you for the invite. Defeating death and aging is going to be insanely difficult, but transhumans will be quasi-immortal. Whole-body replacements ("head transplants”) should be possible in a decade or two. Cyborgisation will accelerate. I reckon the biggest technical challenge will be sustaining eternally youthful brains. Yet already, transplanting dopamine-producing nerve cells grown from a patient's own cells back into their brain can relieve motor signs and depressive symptoms in Parkinson’s disease. Aging but nominally healthy humans too could benefit from the enhanced mood, motivation and vitality conferred by implants. Where (if anywhere) do we stop? Thorny issues of the nature of enduring personal (non-)identity can't be dodged:
Ageless Living

PP: "O brave new world!"
(Cuthbert)
DP: So they say. But compare:
BNW: a Critique

PP: ""It seems to me that if you do not have a whole raft of qualifying thoughts that you might have added, your whole enterprise goes into question."
(Tim Wood)
DP: I could have said much more, e.g. about personal identity (or rather its absence) over time:
Parfit vs Enduring Metaphysical Egos
However, I may be misunderstanding your worry; if so, apologies. Please let me know!

PP: "...the god-like super-beings we are destined to become..." But I have heard this rhetoric before and I have seen what happens to the ones who fail to qualify for super-being status. And I am afraid."
(Cuthbert)
DP: Will God-like superintelligences be akin to Nietzschean Übermenschen – contemptuous of the weak, the vulnerable and the cognitively humble? Or does superhuman intelligence entail a superhuman capacity for perspective-taking and empathic understanding? For sure, talk of an expanding circle of compassion can make proponents sound naive. Most students of history or evolutionary psychology will be sceptical of moral progress too. But we can't just assume that God-like superintelligences will be prey to the egocentric illusion. Ultimately, egocentrism is no more rational than geocentrism – and presumably destined to go the same way:
The Purpose of Life

PP: "the answers to "why is it good?" seem vague."
(Boethius)
DP: The claim: aging, death, disease, cognitive infirmity and indeed involuntary suffering of any kind are wrong. A transhumanist civilization of superlongevity, superintelligence and superhappiness can overcome these ancient evils. If we act wisely, then future life will be sublime.

Apologies, you're right. The moral underpinnings of the transhumanist project as outlined are indeed poorly defined. Yet there's another problem. In my answers, I've glossed over differences in personal values among extremely diverse transhumanists:
Transhumanists 2021
The transhumanist movement embraces moral realists and antirealists; the secular and the religious; negative, classical and preference utilitarians; ethical pluralists and agnostics; and folk whose views defy easy categorisation. Only a minority of transhumanists share my NU conviction that our overriding moral obligation is to mitigate, minimise and prevent suffering – everything else being icing on the cake:
DP Sentience Interview
What transhumanism isn’t is gung-ho technology-worship. Everyone acknowledges that the potential risks of AI and biotechnology are far-reaching:
The risks of transhumanism
But we’ll shortly have the technical tools to create posthuman paradise – quasi-eternal life animated by gradients of intelligent bliss. On some fairly modest assumptions, life in a "triple s" civilisation will be orders of magnitude better than life today.

PP: "If the future is sublime or not, will not depend primarily on CRISPR. It will depend primarily on energy technology, carbon capture and storage, desalination and irrigation and recycling technology being applied first. Or do you suggest that people can be blissfully happy with the sky on fire? Maybe that is your suggestion - and herein lies the question: how far would you go with the genetic toolkit to survive in a world where you've developed genetics to a fine art, but let the environment run to ruin?"
(Counterpunch)
DP: There's no tension between radical life-extension, genetic mood-enrichment and responsible stewardship of Earth. For instance, one reason many people are unwilling to accept even modest personal inconvenience to tackle global warming is the assumption that they won't be around to suffer the consequences. Let's face it, a 3mm rise in mean sea levels each year doesn't sound too alarming unless you happen to live on a low-lying island or a coastal floodplain – so willingness to accept tax hikes for the benefit of posterity is limited. Indefinite youthful lifespans would radically lengthen our time-horizons. Moreover, troubled people aren't necessarily more environmentally conscious than unusually happy people. Indeed, other things being equal, the happiest people probably tend to care most about conserving what they conceive as our beautiful planet. After all, paradise (cf. "A Perfect Planet": A Perfect Planet (Wikipedia)) is usually reckoned more worth preserving than purgatory.
In short, I know of no tradeoff.

PP: Thanks for the kind words. They are very much appreciated. A lot to chew on. Let me start with your question:
"What makes you sad? Why is that?"
(Outlander)
DP: Knowledge. The suffering in the world (more strictly, the universal wavefunction) appals me. I long for blissful ignorance. Alas, it would be irresponsible to urge invincible ignorance until all the ethical duties of intelligence in the cosmos have been discharged.

Until then, knowledge is a necessary evil.

PP: "It seems strange that people seem to accept these facts as tautological, and instead continue saving money in a bank account instead of investing it in something so fundamental as to live longer or potentially for as long as possible. "Why is this all so?"
(Shawn)
DP: A high capacity for self-deception is probably critical to what now passes for mental health. Until recently, helping people rationalise aging, death and suffering was wholly admirable: nothing could be done about the "natural" order of things. Despite my dark view of Darwinian life (cf. "Pessimism Counts in Favor of Biomedical Enhancement: A Lesson from the Anti-Natalist Philosophy of P. W. Zapffe" by Ole Martin Moen: Pessimism and Biomedical Enhancement), I urge opt-out cryonics, opt-in cryothanasia, and a massive global project to defeat the scourge of aging.

One tool of life-extension is mood-enrichment. Happy people typically live longer.

PP: "In the grand picture of things, 70-80 years is miniscule for a species to collectively survive or undertake grand projects like space exploration or multiplanetary colonies, yes?"
(Shawn)
DP: Yes. Human lifespans are inadequate for interstellar travel, let alone galactic exploration. Human lifespans are inadequate for investigating the billions of alien state-spaces of consciousness accessible to exploration by future psychonauts. Only the drug-naïve (cf. John Horgan's The End of Science (1996)) could believe that the world's greatest intellectual discoveries lie behind rather than ahead of us. I won't pretend the pursuit of knowledge is my motivation for wanting humanity to defeat aging. But then most people – and certainly most transhumanists – aren't negative utilitarians.

PP: "Wow, can you imagine the boredom of being in a spaceship flying to another galaxy? To see what? There must better reasons for wanting an extended life than this."
(Metaphysician Undercover)
DP: I'm inclined to agree. If we accept the contention of Rare Earthers that the rest of our galaxy is lifeless, then the allure of interstellar travel may pall. Granted, the biology of boredom is easier to retire than the biology of aging. Extrasolar space travel doesn't have to consist of decades or centuries of tedium. Even so, what's the point of it all? If lifeless rocks appeal to your sensibilities, then why not live in a barren desert closer to home?

Perhaps it could be argued that vacant ecological niches tend to get filled. It's plausible that an advanced posthuman civilisation will decide to use AI to optimise matter and energy within its cosmological horizon. But here we are well into the realm of science-fiction.

"If we remove all suffering, doesn't the extended life just turn into one long boring flight to nowhere. Might as well be an eternal brain in a vat."
Here it's possible we may differ. The suggestion one sometimes hears that we should conserve suffering because "heaven" would be tedious is ill-conceived:
Life In Heaven
Perhaps consider the most intensely rewarding experiences of human life. They are experienced as intensely significant by their very nature. Thus no one says, "I feel sublimely happy but my life feels empty and meaningless." Replacing the biology of misery and malaise with life based on information-sensitive gradients of bliss will also solve the problem of meaninglessness. Indeed, despite being a pessimistic, button-pressing negative utilitarian, my tentative prediction is that the least meaningful moments of posthuman life will be experienced as vastly more significant than any human "peak experience" in virtue of their richer hedonic tone.

"Transcendent" meaning is a different question; it's not clear that the idea is even intelligible. But just as I anticipate the world's last unpleasant experience will be a precisely dateable a few centuries (?) from now, likewise the last time anyone reflects that life is meaningless is likely to be a precisely datable event too. The end of the Darwinian era will be a moral and intellectual watershed in every sense.

PP: "Let's take an example then, competition. Winning a competition is one of the most intensely rewarding experiences for some people. Even just as a spectator of a sport, having your team win provides a very rewarding experience. But we can't always win, and losing is very disappointing. How do you think it's possible to maintain that intensely rewarding experience, which comes from success, without the possibility of disappointment from failure? It seems like a large part of the rewarding feeling is dependent on the possibility of failure. We can't have everyone winning all the time because there must be losers. And there would be no rewarding experience from success, without the possibility of failure. How could there be if success was already guaranteed?"
(Metaphysician Undercover)
DP: "It's not enough to succeed. Others must fail", said Gore Vidal. “Every time a friend succeeds, I die a little.” Yes, evolution has engineered humans with a predisposition to be competitive, jealous, envious, resentful and other unlovely traits. Their conditional activation has been fitness-enhancing. In the long run, futurists can envisage genetically-rewritten superintelligences without such vices. After all, self-aggrandisement and tribalism reflect primitive cognitive biases, not least the egocentric illusion. Yet what can be done in the meantime?

Well, one of the reasons I've focused on hedonic uplift and set-point recalibration is that the dilemmas of social competition can be side-stepped. Depressives and hyperthymics alike prefer winning to losing. But if you're an extreme hyperthymic, losing doesn’t cause your hedonic tone to dip below zero, just as if you’re a chronic depressive, winning doesn't raise your hedonic tone above zero. So at the risk of sounding like a crude genetic determinist, I say: let's aim to create a hyperthymic society via some biological-genetic tweaking.
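
A toy sketch may help make this concrete. Assume, purely for illustration, that hedonic tone is an innate set-point plus an information-sensitive outcome signal, clipped to an innate range; the numbers and the helper function below are arbitrary assumptions, not a model anyone has proposed:

```python
# Toy model, for illustration only: hedonic tone = innate set-point + an
# information-sensitive outcome signal, clipped to the agent's hedonic range.
# All values are arbitrary assumptions, not empirical parameters.

def hedonic_tone(set_point, outcome_delta, floor=-10.0, ceiling=10.0):
    """Current hedonic tone: the set-point shifted by an outcome signal, clipped to range."""
    return max(floor, min(ceiling, set_point + outcome_delta))

WIN, LOSS = +2.0, -2.0  # the same informational signal for every agent

for label, set_point in [("chronic depressive", -5.0), ("extreme hyperthymic", +5.0)]:
    print(label, "win:", hedonic_tone(set_point, WIN), "loss:", hedonic_tone(set_point, LOSS))

# Both agents register the gradient (winning feels better than losing),
# but only the depressive's tone ever falls below hedonic zero.
```

On these toy numbers the depressive swings between -3 and -7 while the hyperthymic swings between +7 and +3: the information is preserved, the suffering is not.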

I've outlined possible routes to explore such as ACKR3 receptor blockade and kappa opioid receptor antagonism; routine preimplantation genetic screening and counselling for prospective parents; germline gene-tweaking of FAAH and FAAH-OUT genes (etc); and a whole bunch of stuff to help nonhuman animals. If society puts as much effort and financial resources into revolutionising hedonic adaptation as it's doing to defeat COVID, then the hedonic treadmill can become a hedonistic treadmill. Globally boosting hedonic range and hedonic set-points by biological-genetic interventions would certainly be a radical departure from the status quo; but a biohappiness revolution is not nearly as genetically ambitious as a complete transformation of human nature. And complications aside, hedonic uplift doesn't involve creating "losers", the bane of traditional utopianism.

PP: "So this "intensely rewarding experience" which we get from succeeding in competition, you designate as seated in a vice, or vices, This would mean that it is a bad rewarding experience which ought to be eliminated. But on what principles do you designate some rewarding experiences as associated with vices, and some as associated with virtues? I would think that if you want to eliminate some such intensely rewarding experiences, and emphasize others, you would require some objective principles for distinguishing the one category, vice, from the other, virtue."
(Metaphysician Undercover)
DP: Competing against earlier iterations of oneself or an insentient AI doesn't raise ethical problems. More controversial would be competing in zero-sum games against other (trans)humans where losing causes a drop in the well-being of one's opponent without their ever falling below hedonic zero. Such competition is problematic for the classical utilitarian, but not for the negative utilitarian. However, what I'd argue is morally indefensible is demanding that the loser involuntarily suffers when experience below hedonic zero becomes technically optional. Contemplating the pain of a defeated opponent sharpens the relish of some winners today. Let's hope such ill will has no long-term future.

An antirealist about value might contest even this fairly modest principle. My response to the antirealist:
Meta-Ethical Realism.
But as I said, emphasizing hedonic uplift and set-point recalibration over traditional environmental reforms can circumvent most – but not all – of the dilemmas posed by human value-systems and preferences that are logically irreconcilable.

PP: "However, it appears to me like such cooperation is more likely to be obtained in the face of serious evil, rather than the effort to obtain some designated good"
(Metaphysician Undercover)
DP: If depression isn’t a serious evil, then I don’t know what is – human “mood genes” are sinister beyond belief. Anyhow, governance by philosophers isn't imminent. Nor is rule by transhumanists, though transhumanist memes appear to be spreading. Sadly, I don't foresee what I'd like to materialise – a Hundred Year Genetic Plan of worldwide hedonic uplift and recalibration under the auspices of the WHO to fulfil the goal of its founding constitution. What’s more credible is genome-editing to tackle well-recognised monogenetic diseases, followed by interventions to tackle a genetic predisposition to abnormal pain-sensitivity, low mood and other forms of mental ill-health. Yes, I find this a disappointingly slow prospect. What today passes for enhancement technology will be recognised by posthumans as remediation.

Wild cards? Well, part of me yearns for the hedonic equivalent of The Great Man Theory
– a visionary politician who takes a biohappiness revolution from the margins to the mainstream. Alas, I’m not holding my breath.

PP: "My question is simple but very urgent/important one:
How can I, as just an ordinary person, can contribute to quicken the progress of Transhumanism?"

(Niki Wonoto)
DP: Niki, awesome – would you consider getting your own website / YouTube channel with a version in Bahasa Indonesia? People tend to be more receptive to a new idea if the message is conveyed in their native language.

"Also, do you think Transhumanism will have any possibility to finally become mainstream in public?"
It's been well said that humans tend to overestimate the effects of change in the short-run and underestimate its effects in the long run. Yes, all this grandiose talk about a glorious future civilisation of superintelligence, superlongevity and superhappiness may ring a little hollow when one is forced to confront the problems of everyday life – bills to pay, chores to do, and the messiness of interpersonal relationships. But Darwinian life as we understand it has no long-term future.

"How can we really make sure that Transhumanism will really work, instead of failing or eventually got diminished & slowly disappearing as if it never exists, considering how short attention span of our human species/humanity/mankind?"
(Niki Wonoto)
Cognitive frailty, aging, death and all manner of physical and psychological suffering are "part of what it means to be human". But biotech and IT will shortly make such horrors optional. I don't want to sound like a naïve technological determinist, but just consider: if offered the chance to become immensely smarter, happier and indefinitely youthful, how many people will prefer to be intellectually handicapped, malaise-ridden and decrepit?

At the risk of tempting fate, I'll say it: history is (probably) on our side.

PP: "Are you saying I would still be depressed because of my mood genes? I could be happy! I'm not, but I think I could be!"
(Counterpunch)
DP: Many completely paralysed people with "locked in" syndrome suffer terribly. But the high genetic loading of default hedonic tone, together with the negative-feedback mechanisms of the hedonic treadmill, means that a large minority if not a majority of locked-in patients report being happy:
Are Some Locked-In People Happy?
Conversely, some able-bodied people "have it all", yet they are chronically miserable. I'm not trying to downplay the importance of social, economic and political reform in making the world a better place – or protecting the environment. Yet if we're ethically serious about solving the problem of suffering, we'll need to tackle its genetic source.

DP: "But you are undermining science as a rationale with your Frankenstein-esque suggestions, that we genetically engineer ourselves into a race of supermen, while ignoring the moral, social, political, economic environmental implications of using science in such a way. You propose genetically enhanced longevity for example, and do not seem to realise that longevity would be problematic in all sorts of ways, not least, environmentally."
(Counterpunch)
DP: In what sense is aiming to phase out the biology of suffering "Frankenstein-esque"? Either way, the biggest obstacle to tackling man-made climate change and environmental degradation is short-termism. Yes, you're right, creating a world where people don't crumble away and perish has implications for the environment. But the impact won't necessarily be as pessimists suppose. Crudely, if you think you're going to be around for hundreds or thousands of years (or more), then you are more likely to care about the long-term fate of the planet than if you reckon it will be someone else's problem.

"I don't know where deliriously happy designer babies that live forever comes on such a list of scientifically rational ethical priorities, but I'm pretty sure limitless clean energy from magma is logically prior..."
(Counterpunch)
But transhumanists do not advocate "deliriously" happy designer babies. Delirium is inimical to cognition. Rather, they urge information-sensitive gradients of well-being. Intelligence-amplification is one of the core tenets of transhumanism.

PP: "Have you ever suffered, David? Ever noticed or experienced hardship enough to motivate you to do something .. oh apparently you have as this is the purported moral and intellectual basis of this movement of yours. Where would you be without these occurrences? How impactful do you think they were for spurring positive change? Apparently great, if your mission is so dire. So tell me. What motivation, drive, and desire will others expect in your envisioned world? Any drive to even get out of bed? Any at all? Some may argue, this idea damns those confined by it to an even worse fate then the current mitigated Darwinian hell we have (civilized society, manners, rules, occasional decency, etc.). Your response?"
(Outlander)
DP: What's in question isn't whether suffering in all its guises can sometimes be functionally useful; it sure can. Rather, what needs questioning is the widespread assumption that the "raw feels" of suffering are computationally indispensable. If the indispensability hypothesis were ever demonstrated, then this result would be a revolutionary discovery in computer science:
The Church-Turing Thesis
For what it's worth, I hold the distinctly unorthodox view that phenomenal binding is non-algorithmic. What's more, the phenomenal unity of experience is often massively adaptive for biological minds. But we need only study the happiest hyperthymics to realise that suffering isn't indispensable either to cognition or to motivation. Neither is happiness, though happier people tend to be better motivated. See too the work of Kent Berridge on the double dissociability of "liking" and "wanting" (cf. Happiness and Desire), mu-opioidergic hedonic tone and dopaminergic motivation.

In short, information-sensitive hedonic gradients are the key to intelligent, motivated behaviour – irrespective of one's absolute location on the pleasure-pain axis. And ethically, information-sensitive gradients of bliss are surely preferable to gradients of misery and malaise, the plight of so many sentient beings alive today.

PP: "The issue, in my mind, is not whether suffering is indispensable, but the question of whether we can have gain without the possibility of suffering. If it is the case, as I believe it is, that all actions which could result in a gain, also run some risk of loss, and loss implies suffering, then to avoid suffering requires that we avoid taking any actions which might produce a gain. But if gain is necessary for happiness, and this is inevitable due to biological needs, then the goal of happiness cannot include the elimination of suffering. Therefore the goal of eliminating suffering must have something other than happiness as its final end. What could that final end be? If eliminating suffering is itself the final end, but it can only be brought about at the cost of eliminating happiness, then it's not such a noble goal."
(Metaphysician Undercover)
DP: Perhaps consider e.g.
Self-Teaching AI
Competition doesn't inherently involve suffering. I'd love to win against the program I play chess against, but losing never causes me to suffer. Granted, zero-sum games, and the pain of losing to a human opponent, are so endemic to Darwinian life that it's easy to imagine that suffering is inherent to competition itself. But life doesn't have to be this way. Hedonic uplift can be combined with recursive intellectual self-improvement. Either way, we should all be free to choose. As biotech matures, no one should be forced to endure the substrates of involuntary suffering.

PP: "Do you think you'd enjoy it more if you put it on a slightly lower difficulty?"
(Down The Rabbit Hole)
DP: I'm sceptical! Either way, I think the point stands. The end of suffering isn't tantamount to the end of competition, let alone the end of intellectual progress. Sentience deserves a more civilised signalling system.

PP: "If we're already civilised, then what could it even mean to suggest making us more civilised?"
(Metaphysician Undercover)
DP: It’s uncivilised for sentient beings to hurt, harm and kill each other. It’s uncivilised for sentient beings to undergo involuntary pain and suffering – or any experience below hedonic zero. The nature of mature posthuman civilisation is speculative. But I reckon the entire Darwinian era will be best forgotten like a bad dream.

PP: "I do not think it is possible to eliminate the possibility of such pain and still remain living beings."
(Metaphysician Undercover)
DP: Does suffering define what it means to be human? (cf. "Does hurting make us human?": A World Without Pain)
If so, then very well: let us become transhuman. I think you'd be on much firmer ground if you focused on the potential pitfalls of life without psychological and physical pain. They are legion. Done ineptly, an architecture of mind based entirely on gradients of bliss isn't just potentially risky for an individual, but a leap into the unknown for civilisation as a whole. Hence the need for exhaustive research.

"If your goal is to manipulate the human being towards a more civilized existence, then the propensity for human beings to mistreat others is what you ought to focus on, rather than the capacity for pain. See, you appear to be focused on relieving the symptoms, rather than curing the illness itself."
Genetically eradicating the predisposition to suffer is more than symptomatic relief; it's a cure. I'm as keen as anyone on improving human behaviour towards humans and nonhuman animals alike. We are quasi-hardwired to cause suffering – and to suffer ourselves in turn. And it's not just enemies who cause grief to each other. See e.g. "The Scientific Reason Why We Hurt The Ones We Love Most": Hurting the Ones We Love.
In the long run, maybe universal saintliness can be engineered. But the goal of eradicating suffering is modest in comparison.

PP: "You sound passionate, David. I've asked this before and perhaps you may feel annoyed even by responding again but let this if nothing else be a rhetorical question. What interests you? Why is that? Perhaps because there is a problem to be solved. Imagine sitting in a room full of "solved" or completed Rubix cubes. Would you not wish for someone if not even yourself to twist one toward unexpected parameters? I once again challenge you to try this setting for yourself. And perhaps you may see, there is fire and water for a reason."
(Outlander)
DP: Alien state-spaces of consciousness interest me:
DP and Sasha Shulgin
A drug-naïve conceptual scheme recognises only two state-spaces, dreaming and waking consciousness. There exist trillions of unexplored state-spaces of consciousness, mutually unintelligible and incommensurable, in part at least. Whereas lucid dreamers have at least glimmerings of an understanding of waking consciousness, the drug-naïve have no insight into what they lack, nor even the privative terms with which to name their ignorance. A post-Galilean science of mind is still a pipedream.

So why stick to wordy scholasticism rather than become a full-time psychonaut? Essentially, dark Darwinian minds can't safely explore psychedelia, nor responsibly urge others to try. For now, most knowledge of reality must remain off-limits. Scientists play around with the mathematical formalism (all science formally reduces to the Standard Model and General Relativity), but science has no real understanding of most of the solutions to the equations. Abolishing the molecular signature of experience below hedonic zero can allow serious investigation of the properties of matter and energy rather than idle word-spinning. The idea that abolishing suffering will abolish intellectual challenge or the growth of knowledge is a myth. Billions of years of investigation lie ahead. But let's investigate the properties of Heaven rather than Hell. And let's first discharge our moral obligation to eradicate suffering:
The Abolitionist Project

PP: "What lies beyond heaven?"
(TheMadFool)
DP: Posthuman heaven is probably just a foretaste of the wonders in store for sentience. Humans don’t have the conceptual scheme to describe life in a low-grade heavenly civilization with a hedonic range of +10 to +20, let alone a mature heaven with hedonic architecture of mind that spans, say, +90 to +100. The puritanical NU in me sometimes feels it’s morally frivolous to speculate on Heaven+ or Paradise 2.0. Yet if theoretical physicists are allowed to speculate on exotic states of matter and energy, then bioethicists may do so too – and bioethicists may have a keener insight into the long-term future of matter and energy in the cosmos.

But first the unknowns. Neuroscience hasn’t yet deciphered the molecular signature of pure bliss, merely narrowed its location to a single cubic millimetre in the posterior ventral pallidum in rats, scaled up to a cubic centimetre in humans. Next, neuroscience hasn’t cracked the binding problem. Unless you’re a strict NU, there’s not much point in creating patterns of blissful mind-dust or mere microexperiential zombies of hedonistic neurons. Rather, we’re trying to create cognitively intact unified subjects of bliss endowed with physically bigger and qualitatively richer here-and-nows. And next, how will tomorrow’s bliss be encephalised? The Darwinian bric-a-brac that helped our genes leave more copies of themselves in the ancestral environment of adaptation has no long-term future. The egocentric virtual worlds run by Darwinian minds may disappear too. But the intentional objects and default state-spaces of consciousness of posthumans are inconceivable to Darwinian primitives. Next, how steep or shallow will be the hedonic gradients of minds in different ranks of posthuman heaven? I sometimes invoke a +70 to a +100 supercivilisation, but this example isn’t a prediction. Rather, a wide hedonic range scenario can be chosen to spike the guns of critics who claim that posthuman heaven would necessarily be less diverse than Darwinian life with our schematic hedonic -10 to 0 to +10. Lastly, will the superpleasure axis continue to preserve a signalling function? Or will mature post-posthumans opt to offload the infrastructure of superheaven to zombie AI, and occupy hedonically “perfect” +100 states of mind indefinitely?

Talk of “perfection” is again likely to raise the hackles of critics worried about homogeneity. But a hedonically “perfect” +100 here-and-now can have humanly unimaginable richness. Monotony is a concept that belongs to the Darwinian era.

The above discussion assumes that advanced posthumans won’t be strict classical utilitarians who opt to engineer a hedonium/utilitronium shockwave. One political compromise is to preserve a bubble of complex civilization underpinned by information-signaling gradients of well-being that is surrounded by an expanding shockwave of pure bliss – not an ideal world by the lights of pure CU, but something close enough.

PP: "...and finally, seriously, and like adults, discuss what we really want - superhappiness (supernirvana) - and come up with a good plan how we're going to get there!"
(TheMadFool)
DP: Absolutely.
At the risk of sounding like a crude technological determinist, some strands of the transhumanist agenda are bound to happen anyway. For instance, radical antiaging interventions won’t need selling. Bioconservatives will seize on the chance to stay youthful as eagerly as radical futurists. Today’s lame rationalisations of death and aging will be swept aside. Likewise with AI and the growth of “narrow” superintelligence, immersive VR and so forth. I’m most cautious about timescales for the end of suffering and the third “super” of transhumanism, superhappiness. When scientific understanding of the pleasure-pain axis matures, the end of suffering and a civilisation of superhuman bliss are probably just as inevitable as pain-free surgery. But the sociological and political revolution needed for all prospective parents world-wide to participate in a reproductive revolution of hyperthymic designer babies is fathomlessly complicated. I fantasise about a WHO-sponsored Hundred-Year Plan to defeat the biology of suffering. Almost certainly, this is just utopian dreaming. Political genius is needed to accelerate the project.

PP: "Sounds like one helluva party! Who in his right mind can say "no" to that!"
(TheMadFool)
DP: Alas, naysayers exist, even on this forum. But yes, transhuman life based on gradients of superhuman bliss will exceed our wildest expectations.

"Thus, happiness and suffering are ultimately about living as long as possible i.e. happiness and suffering, whatever value one may choose to ascribe to them as transhumanists are currently doing, boils down keeping the flame of life burning to the maximum extent possible; in other words, the objective, the end, here seems to be immortality and happiness-suffering are merely the means."
My view of Darwinian life is so bleak that I'm more likely to quote Heinrich Heine, "Sleep is good, death is better; but of course, the best thing would be to have never been born at all." Sorry. But aging and bereavement are sources of such misery that I share the transhumanist goal of their abolition via science. Mastery of our reward circuitry promises to make the nihilistic sentiments of NU antinatalists like me unthinkable.

"Both Buddhism and transhumanism acknowledge suffering as undesirable and happiness as a desideratum. However, to borrow computing terms, Buddhism is about updating as it were our software - leave the world as it is but change/adapt our minds to it in such a way that suffering is minimized and happiness is maximized (I'll leave nirvana out of the discussion for the moment) - and transhumanism is about upgrading our hardware - change the world and also change our brains towards the same ends."
Yes. Pain and suffering have always been inevitable. Symptomatic relief is sometimes possible, but not far-reaching. But for the first time in history, we can glimpse the prospect of a new reward architecture – not just the alleviation of specific external causes of suffering, but of any suffering, and even the conceivability of suffering – and not in some mythical afterlife, but here on Earth. If we opt to edit our genomes, the world's last unpleasant experience may be a few centuries away. Pursuing the Noble Eightfold Path can't recalibrate the hedonic treadmill or break the food chain, so a pragmatist like Gautama Buddha born today would surely approve. The hardware/software metaphor for the mind-brain shouldn't be taken too literally, but yes, transhumanism promises a revolution in both.

Another distinction between traditional Buddhism and transhumanism is the role of desire. Buddhists equate desire with suffering. Frustrated desires sometimes cause misery. Biotech can do permanently what mu-opioid agonist drugs do fleetingly, namely create serene bliss - an absence of desire that should be distinguished from the anhedonia and amotivational syndrome experienced by many depressives. However, serene bliss is the recipe for a world of lotus-eaters (cf. Lucas Gloor's Tranquillism). The prospect of quietly savouring beautiful states of consciousness for all eternity doesn't appal me; but expressing delight in idleness is a culturally unacceptable sentiment, for the most part at any rate. Witness the modern cult of "productivity". So it's worth stressing a viable alternative scenario, a world of gradients of superhappiness and hypermotivation. This combination ought not to be possible if the Buddhist diagnosis of our predicament were correct. Pathological manifestations of this combination of traits can be seen today in euphoric mania. Yet it's an empirical fact that the temperamentally happiest "normal" people today typically experience the most desires, not the least. Many depressives experience the opposite syndrome. Maybe in the very long-term future, advanced superbeings will opt to live in perpetual hedonic +100 super-nirvana. But perhaps a more sociologically realistic scenario for the next few eons is hypermotivated bliss – crudely, a world of doing rather than contemplating, and a civilisation where all sentient beings feel profoundly blissful but not "blissed out": Superhappiness.com

PP: "Now, maybe this is a Stockholm-Syndrome approach...But in general I really do think that there may be something to the old idea that undergoing suffering is a condition for a more finely-tuned happiness."
(Csalisbury)
DP: Allow me to pass over where we agree and focus on where we may differ. Each of us must come to terms with the pain and grief in our own lives. Often the anguish is very personal. Uniquely, humans have the ability to rationalise their own suffering and mortality. Rationalisation is normally only partially successful, but it’s a vital psychological crutch. Around 850,000 people each year fail to "rationalise" the unrationalisable and take their own lives. Millions more try to commit suicide and fail. Factory-farmed nonhuman animals lack the cognitive capacity and means to do so.

However, rationalisation can have an insidious effect. If (some) suffering has allegedly been good for us, won’t suffering sometimes have redeeming features for our children and grandchildren – and indeed for all future life? So let’s preserve the biological-genetic status quo. I don't buy this argument; it’s ethically catastrophic. For the first time in history, it's possible to map out the technical blueprint for a living world without suffering. Political genius is now needed to accelerate a post-Darwinian transition. Recall that young children can't rationalise. Nor can nonhuman animals. We should safeguard their interests too. If we are prepared to rewrite our genomes, then happiness can be as "finely-tuned" and information-sensitive as we wish, but on an exalted plane of well-being. Transhuman life will be underpinned by a default hedonic tone beyond today’s “peak experiences”.

One strand of thought that opposes the rationalising impulse is represented by David Benatar's Better Never To Have Been, efilism and “strong” antinatalism:
Differences between anti-natalism / efilism / NU / suffering-focused ethics communities
Alas, the astute depressive realism of their diagnosis isn't matched by any clear-headedness of their prescriptions. I hesitate to say this for fear of sounding messianic, but only transhumanism can solve the problem of suffering. Darwinian malware contains the seeds of its own destruction. A world based on gradients of bliss won’t need today's spurious rationalisations of evil. Let’s genetically eliminate hedonic sub-zero experience altogether.

PP: "I have a sense that suffering helps us navigate, and altering the genetic hedonic pre-disposition would reciprocally alter the genetic basis of suffering, and leave us incapable of overcoming even the slightest obstacle. Like the fat people in the floaty chairs! Or the 30 million dead colonists on Miranda - in the film Serenity, who had unending bliss forced upon them, and just laid down and died."
(Counterpunch)
DP: The pleasure-pain axis plays an indispensable signalling role in organic (but not inorganic) robots. When information-signalling wholly or partly breaks down, as in severe chronic depression or mania, the results are tragic. But consider high-functioning depressives and high-functioning hyperthymics. High-functioning hyperthymics tend to enjoy a vastly richer quality of life. Let's for now set aside futuristic speculation on an advanced civilization based on gradients of superhuman bliss. What are the pros and cons of using gene-editing to create just a hyperthymic society – where everyone enjoys a hedonic set-point and hedonic range comparable to today's genetically privileged hedonic elite?

You mention the fictional colonists of Miranda:
Serenity (Wikipedia)
Their fate is not a sociologically credible model for a world based on gradients of genetically programmed well-being. Hyperthymics tend to be ardent life-lovers. What's more, the extent to which any of us persevere or "give up" will itself soon be amenable to fine-grained control (cf. "Researchers discover the science behind giving up": The science of giving up). If desired, medical science can endow us with fanatical willpower sufficient to appease even the most avid Nietzschean. Depressives are prone to weakness of will and self-neglect.

"This is quite aside from the fact that it's an unsystematic application of science, that by rights should start with limitless clean energy, carbon capture, desalination and irrigation, hydrogen fuel, total recycling, fish farming etc; so as to secure a prosperous sustainable future. I know that would make me, genuinely, much happier.
Fish are sentient beings. Intelligent moral agents should enable fish to flourish, not exploit and kill them (cf. Fish Farming).
I promise that transhumanists are as keen as anyone on a prosperous, sustainable future. Upgrading our reward circuitry will ensure that sentient beings are better able to enjoy it.

PP: "But nevertheless there will always be more bliss to be had, if not is this not a prison your movement attempts to create? People will always seek more pleasure. Will they not?"
(Outlander)
DP: Indeed. Even an “ideal” pleasure drug could be abused. The classic example from fiction is soma (cf. Soma quotes) in Aldous Huxley’s Brave New World. I hope that establishment pharmacologists and the scientific counterculture alike can develop more effective mood-brighteners to benefit both the psychologically ill and the nominally well. Yet one reason I’ve focused on genetic recalibration and genetically-driven hedonic uplift is precisely to avoid the pitfalls of drug abuse. Whether you're a hyperthymic with an innate hedonic range of, say, 0 to +10 or a posthuman ranging from +90 to +100, you can continue to seek more – where the guise of “more” depends on how your emotions are encephalised. But an elevated hedonic set-point doesn’t pose the personal, interpersonal and societal challenges of endemic drug-taking. Indeed, we don’t know whether posthumans will take psychoactive drugs at all. I often assume that posthumans will take innovative designer drugs in order to explore alien state-spaces of consciousness. However, maybe our successors will opt to be mostly if not entirely drug-free. After all, if there weren’t something fundamentally wrong with our human default state of consciousness, then would we try so hard to change it? It’s tragic that mankind's attempts to do so are often so inept.
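
The contrast with drug-taking can also be sketched as a toy simulation. Treat the hedonic treadmill, very crudely, as mean-reversion towards a set-point: a dose gives a transient spike that the treadmill claws back, whereas recalibrating the set-point itself shifts the baseline durably. All parameters below are illustrative assumptions, not a claim about real pharmacology or genetics:

```python
# Toy simulation, illustrative only: the hedonic treadmill as mean-reversion
# towards a genetically-determined set-point. Parameters are arbitrary.

def simulate(set_point, doses, steps=10, reversion=0.5):
    """Hedonic tone drifts back towards the set-point each step; doses add transient spikes."""
    tone, history = set_point, []
    for t in range(steps):
        tone += doses.get(t, 0.0)                                # transient boost, if any
        tone = set_point + (tone - set_point) * (1 - reversion)  # treadmill pulls tone back
        history.append(round(tone, 2))
    return history

# Unchanged set-point of 0, dosed (+5) at steps 0 and 5 – the euphoria keeps decaying:
print(simulate(set_point=0.0, doses={0: 5.0, 5: 5.0}))
# Recalibrated set-point of +5, no drugs – the elevated baseline persists:
print(simulate(set_point=5.0, doses={}))
```

The first trajectory keeps collapsing back towards zero; the second simply stays near +5 – which is, in caricature, why set-point recalibration rather than chronic drug use is doing the work here.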

PP: "Hyperthymics engage in denial to rationalise their overly-positive mood. They lose the ability to navigate rationally, and the consequences can be just as tragic. Suffering doesn't go away just because the person isn't weeping. We are after all social beings, and hyperthymics go around inflicting their risk taking, attention seeking, libidinous psychology on others."
(Counterpunch)
DP: Is it possible you're conflating hyperthymia with mania? Yes, unusually temperamentally happy people have proverbially rose-tinted spectacles. Their affective biases need to be exhaustively researched before there's any bid to create a hyperthymic society. But the kind of temperament I had in mind is exemplified by the author of The Precipice (2020):
The Precipice (Wikipedia)
Extreme life-lovers may take more risks in some ways, but are on guard to avert them in others (see my disagreement with Toby e.g.
"a devastatingly callous doctrine"?).

"Or do you reserve such dehumanising ideas solely for humans?"
My describing human and nonhuman animals as sentient organic robots isn't intended to "dehumanise" them. Rather, it's to highlight how our behaviour is mechanistically explicable – and how we can create an architecture of mind that doesn't depend on pain. Inorganic robots don't need a signalling system of sub-zero states to function; re-engineered organic robots can do likewise.

"Humans are sentient beings at the top of the food chain. Fish are meat. Humans eat meat, and need to produce it sustainably rather than dredge to oceans to death. I do not condone animals suffering any more than is necessary, but they're mortal, and humane farming is far kinder than nature – which really is red in tooth and claw. Most humans born will reach maturity. That's not so in nature."
Around 20% of humans never eat meat. Humans don't need to eat meat in order to flourish. Instead of harming our fellow creatures, we should be helping them by civilising the biosphere. In the meantime, the cruelties of Nature don't serve as a moral license for humans to add to them via animal agriculture.

"Interfering in the human genome, so altering every subsequent human being who will ever live, is a risk that's not justified by depression"
Recall that all humans are untested genetic experiments. The germline can be edited – and unedited. But if we don't fix our legacy code, then atrocious suffering will proliferate indefinitely.

"If we don't feel the suffering, we will still suffer, but just won't know that we are suffering."
I'm struggling to parse this. Yes, feelings of malaise or discomfort may sometimes be subtle and elusive. But the "raw feels" of outright suffering – whether psychological or physical – are unmistakably nasty by their very nature. "The having is the knowing", as Galen Strawson puts it. Either way, if we replace the biology of hedonically sub-zero states with information-sensitive gradients of well-being, then unpleasant experience will become physically impossible. It won't be missed.

PP: "Both the pessimism that says trying is hopeless and the optimism that says it’s unnecessary are just lazy excuses not to try, and in doing so to guarantee failure. I’m extremely proud of transhumanists and techno-progressivists more generally, like David Pearce, for having the courage to dare to at least try to fix the biggest of problems that have always been either seen as hopeless inevitabilities or excused away with happy fantasies as not real problems at all. They’re sort of a manifestation of Camus’ Absurd Hero in that way, too."
(Pfhorrest)
DP: Unlike the rest of the animal kingdom, mature humans can rationalise and practise adaptive preference formation (aka "sour grapes"):
Who wants to live for ever?
(“Only a third of Americans want to live forever, with men more likely than women to take ‘immortality pill’. Responses varied based on the age a person would be ‘frozen’ at if they were given immortality.”)
Barring revolutionary breakthroughs, I’m less optimistic than Aubrey de Grey about timescales for defeating aging:
Is there a cure for aging?
But even if we don’t personally benefit, I think we have a responsibility to ensure that our successors don’t undergo the miseries of senescence and bereavement.

PP: "I always think that, vegans don't really like food; don't like to cook - and take no pleasure in eating."
(Counterpunch)
DP: I assume you're trolling. But if not, I promise vegans love food as much as meat eaters. Visit a vegan foodie community if you've any doubt.

"It's because we are carnivores."
If so, then it's mysterious why scientific studies suggest vegetarians tend to be slimmer, longer-lived and more intelligent than meat-eaters:
Vegetarian Living
Strict veganism is relatively new. Perhaps see:
Vegans Live Longer Than Those Who Eat Meat or Eggs
Many vegans (and indeed some non-vegans) prudently take a B12 supplement. Alternatives are fortified nutritional yeast (cf. Nutritional Yeast (Wikipedia)) or the natural plant source Wolffia globosa ("Wolffia globosa–Mankai Plant-Based Protein Contains Bioactive Vitamin B12 and Is Well Absorbed in Humans").

"But I love food, I love cooking and eating, and you put yourself between me and a pork chop at your peril!"
Is the level of pleasure someone derives from harming his victims – human or nonhuman – a morally relevant consideration?
"There are predators and prey. I'm a predator."
On some fairly modest assumptions, a world where all sentient beings can flourish is ethically preferable to a world where sentient beings hurt, harm and kill each other. Biotech makes the well-being of all sentience technically feasible. So let's civilise Darwinian life, not glory in its depravities:
Do vegans think they can convert the world to veganism?

PP: "There is no love without something to hate. No joy without something to annoy. No fun without something to bore. Is this true or false?"
(Outlander)
DP: Yes, it's a powerful intuition. But if pain and pleasure really were inseparable, then there would be no victims of chronic pain or depression. Chronic pleasure and happiness aren't any harder to engineer genetically than chronic pain; perpetual euphoria simply wasn't fitness-enhancing in the environment of evolutionary adaptedness. Biotech is a game-changer. Humanity now has the tools to create life based entirely on information-sensitive gradients of well-being – and eventually superhuman bliss.

PP: "So you "modestly assume" you have the wisdom and technological ability to genetically alter all life on earth that doesn't meet your ethical standards?"
(Counterpunch)
DP: The prospect of ending the cruelties of Nature isn't a madcap scheme some philosopher just dreamed up in the bathtub. It's a venerable vision: the "peaceable kingdom" of Isaiah. Biotech lets us flesh out the practical details.

"This is the basis for the apparent design in nature - how everything works together to a productive end"
What exactly is this "productive end"? If I may quote Dawkins:

"The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, many others are running for their lives, whimpering with fear, others are slowly being devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won't find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.”
(Richard Dawkins, River Out of Eden: A Darwinian View of Life, (1995))
"and you would presume to take this process on yourself? You should consider Orgel's Second Rule: "Evolution is cleverer than you are."
Evolution has been clever enough to create creatures smart enough to edit their own source code. Now that the level of suffering on Earth is an adjustable parameter, what genetic dial-settings should we choose?
You know my answer.

"Your answer was an appeal to religious authority."
Tracing the historical antecedents of one's ideas is very different from appealing to religious authority.
In this case, I was simply noting how the vegan transhumanist idea of civilising Nature is prefigured in the Book of Isaiah. The science is new, not the ethic.

"Your belief that the problem of suffering is morally urgent, is your opinion"
The disvalue of suffering is built into the experience itself. Sadly, it's not some idiosyncratic opinion on my part. If you are not in agony or despair, then you may believe that the problem of suffering isn't morally urgent. One's epistemological limitations shouldn't be confused with a deep metaphysical truth about sentience.

"A vegetarian diet - with all the necessary supplements, is probably fine for an office worker, or an academic philosopher - i.e. the middle class to whom vegetarianism appeals. But it's simply not adequate to the needs of a manual labourer."
Nutritionists would differ. So would e.g. vegan body-builders:
Vegan Bodybuilders

"There's no need for moral superiority."
An ethic of not harming others doesn't involve trumpeting one's moral superiority; rather, it's just basic decency. Also, technology massively amplifies the impact of even minimal benevolence. This amplifying effect is illustrated by the imminent cultured-meat revolution. Commercialised cultured meat and animal products promise an end to the cruelties of animal agriculture:
Helping nonhuman Animals
Looking further ahead, we may envisage a pan-species welfare state – once again, not the result of saintly human self-sacrifice, but the transformative potential of biotech.

"Stonewalling"
Philosophers need to acquaint themselves with what's technically feasible so we can have a serious ethical debate on what should be done. From "A Welfare State for Elephants" to "Reprogramming Predators" to "Compassionate Biology: How CRISPR-based gene drives could cheaply, rapidly and sustainably reduce suffering throughout the living world", I've tried to explore what creating a cruelty-free living world will involve. Such blueprints sound fantastical today, but they are grounded in science. No practically-minded person need wade through such material, but the biotech revolution means that unsupported claims that no alternative exists to "Nature, red in tooth and claw" are simply mistaken. Intelligent moral agents can now choose how much suffering we want to exist.

However, suppose that humanity decides to retain the ecological and biological-genetic status quo – or at least some approximation of the status quo after today's uncontrolled habitat-destruction runs its course. The fact that a great deal of suffering exists in Nature doesn't somehow morally entitle humans to add to it. Factory-farming and slaughterhouses are an abomination. Let’s get the death-factories shut and outlawed.

"Evolution is not a crapshoot. It's ballet"
Evolution via natural selection is a cruel engine of suffering, not a performance art.

"and you blunder onto the stage in your hobnail boots"
I tiptoe far more gingerly than, say, Freeman Dyson (“In the future, a new generation of artists will be writing genomes the way that Blake and Byron wrote verses”). Exhaustive research, risk-benefit analysis and pilot studies will be essential. But human interventions, whether eradicating smallpox or defeating vector-borne disease, have far-reaching ecological implications. Should we have conserved Variola major and Variola minor because the disappearance of smallpox leads to much larger human population sizes? Should we allow Anopheles mosquitoes to breed unchecked because the long-term ecological ramifications of getting rid of malaria are unknown? What about the biology of involuntary pain and suffering? By all means urge extreme vigilance and exploration of worst-case scenarios. But an abundance of caution shouldn't involve placing faith in a mythical wisdom of Nature.

"That's the risk you take upon yourself, not for you personally - but for every subsequent generation of human being."
As a "soft" antinatalist, I'm not planning any personal genetic experiments – with the possible exception of some late-life somatic gene-editing when the technology matures. But for evolutionary reasons, most people don't believe in exercising such restraint. Humanity should plan accordingly. I'm urging a world in which natalists conduct genetic experiments more responsibly. One day natalism can even be harmless.

"Weeping buckets over the fact animals eat each other shouldn't blind you to the complexities of the system"
What's in contention isn't whether humans should or shouldn't intervene in Nature. Humans already do so on a massive scale:
The Anthropocene (Wikipedia)
Rather, we're discussing what principles should govern our interventions. Not least, what level of suffering in Nature is optimal?

"Do you believe you can reliably make germline alterations to human DNA, without possibly, making subsequent generations susceptible to disease, or other malady that you haven't foreseen?"
The first step towards a hyperthymic civilisation is ensuring universal access for all prospective parents to preimplantation genetic screening and counselling. (Cue for "Have you seen Gattaca?!" Yes.) The second step is conservative editing, i.e. no creation of novel genes or allelic combinations that don't occur naturally within existing human populations. The third step, true genetic innovation and transhuman genomes, will be the most radical – but the idea that germline editing is irreversible is a canard.

"However, what I'd be more concerned about is a genetic arms race. Once you start down this path, how do know when to stop?"
A "genetic arms race" sounds sinister. It's not. Even inequalities in hedonic enhancement aren't sinister. Consider a toy example. Grossly over-simplifying, imagine if pushy / privileged parents arrange to have kids with a 50% higher hedonic set-point compared to the 30% boost of less privileged newborns. Everyone is still better off. Contrast getting a 30% pay increase if your colleagues get a 50% increase, which will probably diminish your well-being. OK, this example isn’t sociologically realistic. Any geneticists reading will be wincing too. But you get my point. Alleles and allelic combinations that predispose to low mood and high pain-sensitivity will increasingly be at a selective disadvantage as the reproductive revolution unfolds, but there won't be "losers" beyond some nasty lines of genetic code. To quote J.B.S. Haldane,

"The chemical or physical inventor is always a Prometheus. There is no great invention, from fire to flying, which has not been hailed as an insult to some god. But if every physical and chemical invention is a blasphemy, every biological invention is a perversion. There is hardly one which, on first being brought to the notice of an observer from any nation which had not previously heard of their existence, would not appear to him as indecent and unnatural."
(Daedalus; or, Science and the Future, 1924)
When to stop? You know my negative utilitarian perspective. The world's last experience below hedonic zero will mark the end of the Darwinian era and the end of disvalue. But I don't for a moment predict that intelligent life will settle for mediocrity. Instead, I cautiously predict a hedonic +90 to +100 civilisation. Life lived within such a hedonic range will be unimaginably sublime.
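
[A toy sketch of the set-point-versus-pay-rise contrast above, in Python; the model, function names and numbers are purely illustrative and not part of the answer:]

```python
# Toy contrast between an absolute good and a positional good.
# All functions, names and numbers here are hypothetical illustrations.

def hedonic_wellbeing(baseline: float, own_boost_pct: float) -> float:
    """Well-being from a set-point boost depends only on one's own boost."""
    return baseline * (1 + own_boost_pct / 100)

def pay_satisfaction(own_raise_pct: float, peer_raise_pct: float) -> float:
    """Felt satisfaction with a pay rise is modelled as the gap with one's peers."""
    return own_raise_pct - peer_raise_pct

# Set-point boosts: both cohorts end up better off in absolute terms.
print(hedonic_wellbeing(100, 50))   # "privileged" cohort: 150.0
print(hedonic_wellbeing(100, 30))   # less privileged cohort: 130.0 -- still a gain

# Pay rises: a 30% rise can feel like a loss when colleagues get 50%.
print(pay_satisfaction(30, 50))     # -20
```

[The only design point: the first function never consults what anyone else receives, while the second is defined entirely by a comparison.]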

PP: "mastery of the reward circuitry" would give every autocrat / theocrat a permanent hard-on.
(180 Proof)
DP: Direct interventions to enhance emotional well-being could enrich everyone's default quality of life. For sure, whether we consider using drugs, genes or electrodes, such tools could also be abused. But we need a serious ethical debate. Do we want to conserve our existing reward architecture indefinitely? Or aim for radical hedonic uplift?

In my work, I've focused on hedonic set-point recalibration (rather than crude happiness-maximisation) not least because recalibration is salably conservative. Thus if, for example, you're a rugged individualist and active citizen – or maybe even a natural rebel resistant to authority – then re-engineering yourself with a higher hedonic set-point won't make you docile.
Contrast the effects of long-term opioid administration or Huxley's fictional soma.

PP: Still, it seems to me, the ethical problem remains: if 'negative affects' are eliminated by "radical hedonic uplift", then disincentives for (i.e. intrinsic negative feedbacks of) antisocial and immoral behaviors will be, effectively, eliminated as well. How will this not produce catastrophic consequences? – they would be unintentionally yet foreseeably 'harmful' and, therefore, ought to be avoided, no?
(180 Proof)
DP: It's counterintuitive. But "disincentives for (i.e. intrinsic negative feedback of) antisocial and immoral behaviors" can play out just as effectively within the upper and lower bounds of even a vastly higher hedonic range than today's norm. Leave aside here my wilder transhumanist speculations on future life based on information-sensitive gradients of superhuman bliss. Focus instead on today's genetic outliers – "hyperthymics" with an unusually high hedonic set-point. OK, I don't know of any rigorous quantitative study to prove it, but there's no evidence that hyperthymic people are more prone to antisocial and immoral behavior than their neurotypical counterparts. Sure, people with mania are prone to antisocial and immoral behavior. But that's because (as in chronic unipolar depression) their information-signaling system for good and bad stimuli has partially broken down.

Ethically speaking, I think we should aim for a hyperthymic civilization.

PP: "I agree that reducing suffering (and increasing flourishing) is the best orienting, regulative idea, for our ethics, but implementation will have to unfold gradually (or at least in tandem with our understanding)"
(Csalisbury)
DP: The worst source of severe and readily avoidable suffering in the world is simple to remedy. Without slaughterhouses, the entire industrialized apparatus for exploiting and murdering sentient beings would collapse. What's needed isn't Zen-like calm, but a fierce moral urgency and vigorous political lobbying to end the animal holocaust. By contrast, reprogramming the biosphere to eradicate suffering is much more ambitious in every sense. Yes, the "regulative idea" of ending involuntary suffering should inform policy-making and ethics alike. And society as a whole needs to debate what responsible parenthood entails. People who choose to conceive "naturally" create babies who are genetically predisposed to be sick by the criteria of the World Health Organization's own definition of health. By these same criteria, most people alive today are often severely sick. Shortly, genetic medicine will allow the creation of babies who are predisposed to be (at worst) occasionally mildly unwell. A reproductive revolution is happening this century; and the time to debate it is now.

PP: "...we've both touched the harsh nerve of pain, really touched it, and realized how frame-shatteringly painful real pain is."
(CSalisbury)
DP: Yes. Words fail to communicate the awfulness of severe pain. Extreme suffering corrodes one's values, personality and life itself. Philosophers are widely supposed to love knowledge. But there are some things best left unknown. That said, cultivated ignorance is no excuse for inaction. I hope that oft-derided philosophers will be at the forefront of the abolitionist project. The biotech and AI revolution means that blueprints are now feasible for eradicating suffering altogether in favour of genetically programmed gradients of well-being. A dietary, genetic, reproductive and ecological revolution will eliminate the root of all evil. Whether Bentham- or Buddha-inspired, will bioethicists rise to the challenge?

PP: "I absolutely respect your devotion to mitigating clearly-defined evils (to say you're doing much more than me to help others would be a wild understatement) - but how do you sustain your pinpointing of (biologically based, and so capable-of-being-engineered-out) evil against, say, the knocking-down-chesterton's-fence argument? What I'm trying to get at is, it feels to me you have full faith in the current scientific framing (an end of scientific framing ala the much discussed political 'end of history') Is that fair?"
(Csalisbury)
DP: Science doesn't understand existence. Our conceptual scheme is desperately inadequate. Cosmology is in flux, the foundations of quantum mechanics are rotten and scientific materialism is inconsistent with the entirety of the empirical evidence.

However, the case for reprogramming the biosphere to end suffering is still compelling.
Compare surgical anaesthesia. Why phenomenal pain exists isn't understood. (Why aren’t we zombies?) Nor is the existence of phenomenal binding. (Why aren't we micro-experiential zombies?) Consequently, a Chesterton's-fence argument might suggest we should shun pain-free surgery until these mysteries are elucidated. Moreover, the mechanism of anaesthetics is still elusive. Surgical anaesthesia itself carries gross and subtle risks. But in practice, I'd insist on anaesthesia (as distinct from just a muscle-paralysing agent like curare) before surgery. I bet you would too. Furthermore, I'd argue that every other patient is entitled to surgical anaesthesia as well (I'm not special). Reckless? No, we are weighing risk-reward ratios. Likewise with a biohappiness revolution. Mental and physical pain are frightful. They will shortly become genetically optional. Preserving our information-sensitivity keeps our collective options open. Humanity stands on the brink of post-Darwinian life: a "civilisation" in every sense of the term. We should facilitate the transformation.

PP: "What about epigenetic engineering?"
(Counterpunch)
DP: In our discussion, I've glossed over the role of transgenerational epigenetic inheritance and indeed epigenetic editing to enhance mood, motivation and analgesia in existing human and nonhuman animals (cf. "'Dead' Cas9-CRISPR Epigenetic Repression Provides Opioid-Free Pain Relief with No Side Effects":
Opioid-free pain-relief).
The catch-all term "biological-genetic strategy" to defeat suffering is intended to include these interventions – and more besides.

"...because arguably, epigenetic engineering could bypass many of the moral dilemmas associated with germ-line genetic alteration foisted on subsequent generations"
The question to ask is: What should be the "default settings" of new life? Should we continue to create people genetically predisposed to a ghastly range of unpleasant experiences they will only later be able to palliate? Or should we design healthy people?

PP: "Having spent some time reading David Pearce on the Transhumanism thread...I notified myself of a tendency of Transhumanists or individuals seeking to extend their lifespan, as simply not accepting death as a forgone conclusion or brute fact about existence. What a stark difference from the antinatalist threads that I have seen around and about on this forum."
(Shawn)
DP: Antinatalists recognize that creating children with a progressive genetic disease is morally problematic. Aging ravages and then kills its victims. But “hard” antinatalists haven’t faced up to the nature of selection pressure. Inevitably, natalists will inherit the Earth. So I’d urge antinatalists to swallow hard and embrace the transhumanist agenda. Defeating involuntary aging, death and suffering may take centuries. Yet as far as I can tell, the project is scientifically and sociologically viable – just dauntingly ambitious.

PP: "So. The real question is, David. Will you use your wealth and influence to purchase a large enough area where, your own and this is the most important, your own offspring can participate in these trials and we all can watch from a distance and see how they fare over time"
(Outlander)
DP: Recall I'm an antinatalist:
Do you agree with antinatalism?
My view of life is bleaker than David Benatar's. So no, I won't be conducting genetic experiments on anyone but myself. But selection pressure means that the future belongs to natalists. Natalists conduct around 130 million genetic experiments each year. Should prospective parents be encouraged to mitigate the suffering their experimentation creates? Or should pain-ridden Darwinian malware be encouraged to proliferate indefinitely in its existing guise?
The transhumanist option strikes me as more humane:
The Transhumanist Agenda

PP: "Thanks again on behalf of us all to David! I am going to leave you with the last word of substance here and close the thread for now. Please see the original post for more detail on David's writings and internet presence."
DP: Thank you Philosophy Forum! Sorry if I've left any loose ends...

