First published on Sentient Developments
Date: May 2009
May 1, 2009
David Pearce is guest blogging this week

A World Without Suffering?

the lion lies down with the lamb

In November 2005, at the Society for Neuroscience Congress, the Dalai Lama observed: "If it was possible to become free of negative emotions by a riskless implementation of an electrode - without impairing intelligence and the critical mind - I would be the first patient."

Note that the Dalai Lama wasn't announcing his intention to queue-jump. Nor was he proposing that high-functioning bliss should be the privilege of one special group or species. Unlike the Abrahamic religions, but in common with classical utilitarianism, Buddhism is committed to the welfare of all sentient beings. Instead, the Dalai Lama was stressing that we should embrace the control of our reward circuitry that modern science is shortly going to deliver - and not disdain it as somehow un-spiritual.

Smart neurostimulation, long-acting mood-enhancers, genetically re-engineering our hedonic "set-point" (etc) aren't therapeutic strategies associated with Buddhist tradition. Yet if we are morally serious about securing the well-being of all sentient life, then we have to exploit advanced technology to the fullest possible extent. Nothing else will work (short of some exotic metaphysics that is hard to reconcile with the scientific world-picture). Non-biological strategies to enrich psychological well-being have been tried on a personal level over thousands of years - and proved inadequate at best.

This is because they don't subvert the brutally efficient negative feedback mechanisms of the hedonic treadmill - a legacy of millions of years of natural selection. Nor is the well-being of all sentient life feasible in a Darwinian ecosystem where the welfare of some creatures depends on eating or exploiting others. The lion can lie down with the lamb; but only after both have been genetically tweaked. Any solution to the problem of suffering ultimately has to be global.
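
For the concretely-minded, the treadmill can be caricatured as a simple negative-feedback loop. The toy simulation below is only a sketch - every parameter is invented for illustration, and nothing here models real neurobiology - but it shows why one-off boosts to well-being fade: hedonic tone decays back to its genetically constrained set-point.

```python
# Toy caricature of the hedonic treadmill as a negative-feedback loop.
# Every number is invented for illustration; real affective neuroscience
# is vastly more complicated.

SET_POINT = 0.0        # genetically constrained baseline hedonic tone
ADAPTATION_RATE = 0.2  # how fast the treadmill pulls mood back to baseline

def step(hedonic_tone, life_event=0.0):
    """One time-step: mood decays toward the set-point, plus any shock."""
    return hedonic_tone + ADAPTATION_RATE * (SET_POINT - hedonic_tone) + life_event

mood = 0.0
for t in range(20):
    windfall = 1.0 if t == 0 else 0.0  # a single stroke of good fortune at t = 0
    mood = step(mood, windfall)
    print(f"t={t:2d}  hedonic tone = {mood:+.3f}")

# The boost decays geometrically back toward baseline. Raising SET_POINT
# itself - rather than chasing shocks - is what the abolitionist project
# proposes.
```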

In the meantime, I think the greatest personal contribution to reducing suffering that an individual can make is both to:

  1. Abstain from eating meat
  2. Make it clear to his or her entire circle of acquaintance that meat-eating is abhorrent and morally unacceptable

Such plain speaking calls for moral courage that alas sometimes deserts me.

I know many readers of Sentient Developments are Buddhists. Not all of them will agree with the above analysis. Some readers may suspect that I'm just trying to cloak my techno-utopianism in the mantle of venerable Buddhist wisdom. (Heaven forbid!)

In fact the abolitionist project is just a blueprint for implementing the aspiration of Gautama Buddha two and a half millennia ago: "May all that have life be delivered from suffering". I hope other researchers will devise (much) better blueprints; and the project will one day be institutionalized, internationalized, properly funded, transformed into a field of rigorous academic scholarship, and eventually government-led.

I've glossed over a lot of potential pitfalls and technical challenges. Here I'll just say I think they are a price worth paying for a cruelty-free world.

Many thanks to George for inviting me to guest-blog this week. And many thanks to Sentient Developments readers for their critical feedback. It's much appreciated.

* * *

April 30, 2009
David Pearce is guest blogging this week.

Guest blogger David Pearce answers your questions (part 3)


Michael Kirkland alleges that the abolitionist project violates the Golden Rule.

Maybe. But recall J.S. Mill: “In the golden rule of Jesus of Nazareth, we read the complete spirit of the ethics of utility.” The abolitionist project, and indeed superhappiness on a cosmic scale, follows straightforwardly from application of the principle of utility in a postgenomic era. Yes, there are problems in interpreting an ethic of reciprocity in the case of non-human animals, small children, and the severely mentally handicapped. But the pleasure-pain axis seems to be common to the vertebrate line and beyond.

Michael also believes that my ideas about predators are "monstrous" and "genocidal".

OK, here is a thought-experiment. Imagine if there were a Predator species that treated humans in the way cats treat mice. Let's assume that the Predator is sleek, beautiful, elegant, and is endowed with all the virtues we ascribe to cats. In common with cats, the Predator simply doesn't understand the implications of what it is doing when tormenting and killing us in virtue of its defective theory of mind. What are our policy options? We might decide to wipe out the race of Predators altogether - "genocide" so to speak - from the conviction that a race of baby-killers was no more valuable than the smallpox virus. Or alternatively, we might be more forgiving and tweak its genome, reprogramming the Predator so it no longer preyed on us (or anyone else). However, perhaps one group of traditionally-minded humans decide to protest against adopting even the second, "humane" option. Tweaking its genome so the Predator no longer preys on humans will destroy a vital part of its species essence, the ecological naturalists claim. Curbing its killer instincts would be "unnatural" and have dangerously unpredictable effects on a finely-balanced global ecosystem (etc). Perhaps a few extremists even favour a "rewilding" option in which the Predator should be reintroduced to human habitats where it is now extinct.

Should we take such an ethic seriously? I hope not.

Granted, the above analogy sounds fantastical. But the parallel isn't wholly far-fetched. If your child in Africa has just been mauled to death by a lion, you'll be less enthusiastic about the noble King Of The Beasts than a Western armchair wildlife enthusiast. A living world based around creatures eating each other is barbaric; and later this century it's going to become optional. One reason we're not unduly troubled by the cruelties of Darwinian life is that our wildlife "documentaries" give a Disneyfied view of Nature, with their tasteful soundtracks and soothing commentary to match. Unlike on TV news, the commentator never says: "some of the pictures are too shocking to be shown here". But the reality is often grisly in ways that exceed our imagination.

In response to Leafy: Are cats that don't kill really "post-felines" - not really cats at all? Maybe not, but only in a benign sense. A similar relationship may hold between humans and the (post-)humans we are destined to become.

Carl worries that I might hold a "[David] Chalmers-style 'supernatural' account of consciousness".

We differ; but like Chalmers, I promise I am a scientific naturalist. The behaviour of the stuff of the world is exhaustively described by the universal Schrödinger equation (or its relativistic generalization). This rules out both dualism (causal closure) and epiphenomenalism (epiphenomenal qualia would lack the causal efficacy to inspire talk about their own existence). But theoretical physics is completely silent on the intrinsic nature of the stuff of the world; physics describes only its formal structure. There is still the question of what "breathes fire into the equations and makes a universe for them to describe", in Hawking's immortal phrase; or alternatively, in John Wheeler's metaphor, "What makes the Universe fly?"
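
For readers who want the formalism spelled out: in its non-relativistic form, applied to the universal state vector, the equation reads

$$ i\hbar\,\frac{\partial}{\partial t}\,|\Psi(t)\rangle \;=\; \hat{H}\,|\Psi(t)\rangle $$

where $\hat{H}$ is the Hamiltonian operator. The equation specifies only how the state evolves - its formal structure - and is silent on the intrinsic nature of whatever it is that evolves.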

Of the remaining options, monistic idealism is desperately implausible and monistic materialism is demonstrably false (i.e. one isn't a zombie). So IMO the naturalist must choose the desperately implausible option. See Galen Strawson [Consciousness and Its Place in Nature: Does Physicalism Entail Panpsychism? (2006)] for a defence of an ontology he'd hitherto dismissed as "crazy"; and Oxford philosopher Michael Lockwood [Mind, Brain and the Quantum (1991)] on how one's own mind/brain gives one privileged access to the intrinsic nature of the "fire in the equations".

I think two problems of consciousness need to be distinguished here. A solution to the second presupposes a solution to the first.

First, why does consciousness exist in the natural world at all? This question confronts the materialist with the insoluble "explanatory gap". But for the monistic idealist who thinks fields of microqualia are all that exist, this question is strictly equivalent to explaining why anything at all exists. Mysteries should not be multiplied beyond necessity.

Secondly, why is it that, say, an ant colony or the population of China or (I'd argue) a digital computer - with its classical serial architecture and "von Neumann bottleneck" - doesn't support a unitary consciousness beyond the aggregate consciousness of its individual constituents, whereas a hundred billion (apparently) discrete but functionally interconnected nerve cells of a waking/dreaming vertebrate CNS can generate a unitary experiential field? I'd argue that it's the functionally unique valence properties of the carbon atom that generate the macromolecular structures needed for unitary conscious mind from the primordial quantum mind-dust. No, I don't buy the Hameroff-Penrose "Orch-OR" model of consciousness. But I do think Hameroff is right insofar as the mechanism of general anaesthesia offers a clue to the generation of unitary conscious mind via quantum coherence. The challenge is to show (a) how ultra-rapid thermally-induced decoherence can be avoided in an environment as warm and noisy as the mind/brain; and (b) how to "read off" the values of our qualia from the values of the solutions of the quantum mechanical formalism.

Proverbially, the dominant technology of an age supplies its root metaphor of mind. Our dominant technology is the digital computer. IMO the next root metaphor - both of conscious mind and of the multiverse itself - is going to be the quantum computer. But if your heart sinks when you read the title of any book - or blog post - containing the words "quantum" and "consciousness", then I promise mine does too.

* * *

April 29, 2009
David Pearce is guest blogging this week

Guest blogger David Pearce answers your questions (part 2)


Here are two more replies in response to questions about my Abolitionist Project article from earlier this week.

Carl makes an important point: "Why think that affective gradients are necessary for motivation at all? Consider minds that operate with formal utility functions instead of reinforcement learning. Humans are often directly motivated to act independently of pleasure and pain."

Imagine if we could find a functionally adequate substitute for the signaling role of negative affect - a bland term that hides a multitude of horrors - and replace its nastiness with formal utility functions. Why must organic robots like us experience the awful textures of physical pain, depression and malaise, while our silicon robots function well without them? True, most people regard life's heartaches as a price worth paying for life's joys. We wouldn't want to become zombies.

But what if it were feasible to "zombify" the nasty side of life completely while amplifying all the good bits - perhaps so we become "cyborg buddhas"?

More radically, if the signaling role of affect proves dispensable altogether, it might be feasible computationally to offload everything mundane onto smart prostheses - and instead enjoy sublime states of bliss every moment of our lives, without any hedonic dips at all. I say more on this theme in my reply to "Wouldn't a permanent maximum of bliss be better?" I need scarcely add this is pure speculation.
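
To make Carl's contrast concrete, here is a minimal sketch - every name and number is invented for illustration - of an agent whose motivation is an explicit utility function rather than a hedonic signal, alongside the reinforcement-learning caricature of biological motivation:

```python
# Sketch of Carl's contrast; every name and number here is invented.
# A utility-maximizing agent needs no hedonic signal at all, while a
# reinforcement learner steers by felt reward and punishment.

ACTIONS = ["forage", "rest", "flee"]

def utility(action, state):
    """An explicit, formal utility function - no 'raw feels' required."""
    base = {"forage": 2.0, "rest": 0.5, "flee": -1.0}
    bonus = 4.0 if (state == "predator" and action == "flee") else 0.0
    return base[action] + bonus

def choose_by_utility(state):
    # Pure argmax over a formal objective: motivation without affect.
    return max(ACTIONS, key=lambda a: utility(a, state))

def choose_by_reinforcement(q_values, state):
    # The biological caricature: act on hedonic associations that were
    # themselves stamped in by past pleasure and pain.
    return max(ACTIONS, key=lambda a: q_values[(state, a)])

print(choose_by_utility("predator"))  # 'flee', reached without suffering anywhere

q = {("predator", "forage"): -2.0, ("predator", "rest"): -3.0, ("predator", "flee"): 1.0}
print(choose_by_reinforcement(q, "predator"))  # same behaviour, learned via pain
```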

Leafy asks me to comment on an "animal welfare state, and [...] how your views about the treatment of nonhuman animals (e.g., that animals need care and protection, not liberation, and when animal use or domination might be morally acceptable) differ from those of people such as Singer and Francione".

First, let's deal with an obvious question. Millions of human infants die needlessly and prematurely in the Third World each year. Shouldn't we devote all our energies to helping members of our own species first? To the extent humans suffer more than non-humans, I'd answer: yes - though rationalists should take extraordinary pains to guard against anthropocentric bias. Critically, there is no evidence that domestic, farm or wild mammals are any less sentient than human infants and toddlers. If so, we should treat their well-being impartially. A critic will respond here that human infants have moral priority because they have the potential to become full-grown adults - with the moral primacy that we claim. But we wouldn't judge that a toddler with a terminal disease who will never grow up deserves any less love and care than a healthy youngster. Likewise, the fact that a dog or a chimpanzee or a pig will never surpass the intellectual accomplishments of a three-year-old child is no reason to let them suffer more.

Thus I think it's admirable that we spend a hundred thousand dollars trying to save the life of a 23-week-old extremely premature baby; but it's incongruous that we butcher and eat billions of more sophisticated sentient beings each day. Actually, IMO words can't adequately convey the horror of what we're doing in factory farms and slaughterhouses. Self-protectively, I try and shut it out most of the time. After all, my intuitions reassure me, they're only animals; what's going on right now can't really be as bad as I believe it to be. Yet I'm also uncomfortably aware that this is moral and intellectual cowardice.

Is a comprehensive welfare system for non-human animals technically feasible? Yes. The implications of an exponential growth of computing power for the biosphere are exceedingly counterintuitive. See, for example, The Singularity Institute or Ray Kurzweil - though I'm slightly more cautious about timescales. In any event, by the end of the century we should have the computational resources to micromanage an entire planetary ecosystem. It is presumably unlikely that we will use those computational resources systematically to promote the well-being of all sentient life in that kind of timeframe. However, we already - and without the benefit of quantum supercomputers - humanely employ, for example, depot-contraception rather than culling to control the population numbers of elephants in some overcrowded African national parks. Admittedly, ecosystem redesign is only in its infancy; and we've barely begun to use genetic engineering, let alone genomic rewrites. But if our value system dictates, then we could use nanobots to go to the furthest ends of the Earth and the deep oceans and eradicate the molecular signature of unpleasant experience wherever it is found. Likewise, we could do the same to the sinister genetic code that spawns it. In any case, for better or worse, by mid-century large terrestrial mammals are unlikely to survive outside our "wildlife" reserves simply in virtue of habitat destruction. How much suffering we permit in these reserves is up to us.
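
To see why the arithmetic is so counterintuitive, a back-of-the-envelope sketch helps. Assuming, purely for illustration, a Moore's-law-style doubling of computing power every two years:

```python
# Back-of-the-envelope illustration only; the doubling period is an
# assumption for the sake of the arithmetic, not a prediction.
doubling_period_years = 2
years = 2100 - 2009
doublings = years / doubling_period_years     # ~45.5 doublings
growth_factor = 2 ** doublings
print(f"{doublings:.1f} doublings -> roughly a {growth_factor:.1e}-fold increase")
# ~45 doublings is a factor of ~10^13. Whether growth like this actually
# continues is, of course, exactly what is in dispute.
```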

Gary Francione and Peter Singer? Despite their different perspectives, I admire them both. As an ethical utilitarian rather than a rights theorist, I'm probably closer to Peter Singer. But IMO a utilitarian ethic dictates that factory-farmed animals don't just need "liberating"; they need to be cared for. Non-human animals in the wild simply aren't smart enough adequately to look after themselves in times of drought or famine or pestilence, for instance, any more than are human toddlers and infants, and any more than were adult members of Homo sapiens before the advent of modern scientific medicine, general anaesthesia, and painkilling drugs. [Actually, until humanity conquers ageing and masters the technologies needed reliably to modulate mood and emotion, this control will be woefully incomplete.]

At the risk of over-generalising, we have double standards: an implicit notion of "natural" versus "unnatural" suffering. One form of suffering is intuitively morally acceptable, albeit tragic; the other is intuitively morally wrong. Thus we reckon someone who lets their pet dog starve to death or die of thirst should be prosecuted for animal cruelty. But an equal intensity of suffering is re-enacted in Mother Nature every day on an epic scale. It's not (yet) anybody's "fault." But as our control over Nature increases, so does our complicity in the suffering of Darwinian life "red in tooth and claw". So IMO we will shortly be ethically obliged to "interfere" [intervene] and prevent that suffering, just as we now intervene to protect the weak, the sick and the vulnerable in human society.

But here comes the real psychological stumbling-block. One of the more counterintuitive implications of applying a compassionate utilitarian ethic in an era of biotechnology is our obligation to reprogram and/or phase out predators. In the future, I think a lot of thoughtful people will be relaxed about phasing out/reprogramming, say, snakes or sharks. But over the years, I've received a fair bit of hate mail from cat-lovers who think that I want to kill their adorable pets. Naturally, I don't: I'd just like to see members of the cat family reprogrammed [or perhaps "uplifted"] so they don't cause suffering to their prey. As it happens, I've only once witnessed a cat "playing" with a tormented mouse. It was quite horrific. Needless to say, the cat was no more morally culpable than a teenager playing violent videogames, despite the suffering it was inflicting. But I've not been able to enjoy watching a Tom and Jerry cartoon since. Of course the cat's victim was only a mouse. Its pain and terror were probably no worse than mine the last time I caught my fingers in the door. But IMO a sufficiently Godlike superintelligence won't tolerate even a pinprick's worth of pain in post-human paradise. And (demi)gods, at least, is what I predict we're going to become...

* * *

April 28, 2009
David Pearce is guest blogging this week

Guest blogger David Pearce answers your questions

In yesterday's post I promised I'd try to respond to comments and questions. Here is a start - but alas I'm only skimming the surface of some of the issues raised.

t.theodorus ibrahim asks: "what are the current prospects of a merger or synergy between transhumanist concerns of the rights of natural humans within a world dominated by greater intelligences and the existing unequal power relations between animals - especially between human and non-human animals? Will the human reich give way to a transhuman reich?!"

The worry that superintelligent posthumans might treat natural humans in ways analogous to how humans treat non-human animals is ultimately (I'd argue) misplaced. But reflection on such a disturbing possibility might (conceivably) encourage humans to behave less badly to our fellow creatures. Parallels with the Third Reich are best used sparingly - the term "Nazi" is too often flung around in overheated rhetoric. Yet when a Jewish Nobel laureate (Isaac Bashevis Singer, The Letter Writer) writes: "In relation to them, all people are Nazis; for the animals it is an eternal Treblinka", he is not using his words lightly. Whether we're talking about vivisection to satisfy scientific curiosity or medical research, or industrialized mass killing of "inferior" beings, the parallels are real - though like all analogies, when pressed too far they eventually break down.

One reason I'm optimistic some kind of "paradise engineering" is more likely than dystopian scenarios is that the god-like technical powers of our descendants will most likely be matched by a god-like capacity for empathetic understanding - a far richer and deeper understanding than our own faltering efforts to understand the minds of sentient beings different from "us". Posthuman ethics will presumably be more sophisticated than human ethics - and certainly less anthropocentric. Also, it's worth noting that a commitment to the well-being of all sentience is enshrined in the Transhumanist/H+ Declaration - a useful reminder of what, I hope, unites us.

Spaceweaver, if I understand correctly, argues that perpetual happiness would lead to stasis. Lifelong bliss would bring the evolutionary development of life to a premature end: discontent is the motor of progress.

I'd agree this sort of scenario can't be excluded. Perhaps we'll get stuck in a sub-optimal rut: some kind of fool's paradise or Brave New World. But IMO there are grounds for believing that our immediate descendants (and perhaps our elderly selves?) will be both happier and better motivated. Enhancing dopamine function, for instance, boosts exploratory behaviour, making personal stasis less likely. By contrast, it's depressives who get "stuck in a rut". Moreover if we recalibrate the hedonic treadmill rather than dismantle it altogether, then we can retain the functional analogues of discontent minus its nasty "raw feels". See "Happiness, Hypermotivation and the Meaning Of Life."
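
One way to see why recalibration preserves motivation: if choice behaviour tracks differences in expected hedonic tone, then adding a uniform offset to every option changes nothing functionally. A toy illustration (softmax action selection; all numbers invented):

```python
import math

def choice_probabilities(hedonic_payoffs):
    """Softmax: probability of picking each option when choice tracks
    differences in expected hedonic tone."""
    exps = [math.exp(h) for h in hedonic_payoffs]
    total = sum(exps)
    return [e / total for e in exps]

payoffs = [0.1, 0.5, -0.3]                  # today's mixed hedonic landscape
recalibrated = [h + 10.0 for h in payoffs]  # same landscape, shifted toward bliss

print(choice_probabilities(payoffs))
print(choice_probabilities(recalibrated))   # identical probabilities

# A uniform upward shift leaves the gradients - the functional analogues
# of discontent - completely intact.
```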

FrF asks: "Could it be that transhumanists who are, for whatever reason, dissatisfied with their current lives overcompensate for their suffering by proposing grand schemes that could potentially bulldoze over all other people?"

For all our good intentions, yes. The history of utopian thought is littered with good ideas that went horribly wrong. However, technologies to retard and eventually abolish ageing, or tools to amplify intelligence, are potentially empowering - liberating, not "bulldozing". Transhumanists/H+-ers aren't trying to force anyone to be eternally youthful, hyperintelligent or even - God forbid - perpetually happy.

Genuine dilemmas do lie ahead: for instance, is a commitment to abolishing aging really consistent with a libertarian commitment to unlimited procreative freedom? But I think one reason many people are suspicious of future radical mood-enhancement technologies, for instance, is the curious fear that someone, somewhere is going to force them to be happy against their will.

It's also worth stressing that radically enriching hedonic tone is consistent with leaving most of your existing preference architecture intact: indeed in one sense, raising our "hedonic set-point" is a radically conservative strategy to improve our quality of life. Uniform bliss would be the truly revolutionary option: and uniform lifelong bliss probably isn't sociologically viable for an entire civilisation in the foreseeable future. However, stressing a future of hedonic gradients and the possibility of conserving one's preference architecture arguably underplays the cognitive significance of the hedonic transition in prospect (IMO). Just as the world of the clinically depressed today is unimaginably different from the world of the happy, so I reckon the world of our superhappy successors will be unimaginably more wonderful than contemporary life at its best. In what ways? Well, even if I knew, I wouldn't have the conceptual equipment to say.

Visigoth rightly points out that any long-acting oxytocin-therapy [or something similar] would leave us vulnerable to being "suckered". Could genetically non-identical organisms ever wholly trust each other - or ever be wholly trustworthy - without compromising the inclusive fitness of their genes? Maybe not. I discuss the nature of selection pressure in a hypothetical post-Darwinian world here: "The Reproductive Revolution: Selection Pressure in a Post-Darwinian World". Needless to say, any conclusions drawn are only tentative.

Away from genetics, it's worth asking what it means to be "suckered" when life no longer resembles a zero-sum game and pleasure is no longer a finite resource - a world where everyone has maximal control over their own reward circuitry. Right now this scenario sounds fantastical; but IMO the prospect is neither as fantastical (nor as remote) as it sounds. See the Affective Neuroscience and Biopsychology Lab for how close we're coming to discovering the anatomical location and molecular signature of pure bliss.

More tomorrow....

* * *

April 27, 2009
David Pearce is guest blogging this week

The Abolitionist Project: Using biotechnology to abolish suffering in all sentient life

First, many thanks to George for inviting me to blog on Sentient Developments. I asked George what I should blog about. He suggested I might start with The Hedonistic Imperative. This topic might be more interesting to readers of Sentient Developments if I respond to critical questions or blog on themes readers feel I've unjustly neglected. If so, please let me know.

Briefly, some background. In 1995 I wrote an online manifesto which advocates the use of biotechnology to abolish suffering in all sentient life. The Hedonistic Imperative predicts that the world's last unpleasant experience will be a precisely dateable event in the next thousand years or so - probably a "minor" pain in some obscure marine invertebrate. More speculatively, HI predicts that our descendants will be animated by genetically preprogrammed gradients of intelligent bliss - modes of well-being orders of magnitude richer than today's peak experiences.

I write from the perspective of what is uninspiringly known as negative utilitarianism, i.e. I'd argue that we have an overriding moral responsibility to abolish suffering. If my background had been a bit different, I'd probably just call myself a scientifically-minded Buddhist. True, Gautama Buddha didn't speak about biotechnology; but to Buddhists (and Jains) talk of engineering the well-being of all sentient life is less likely to invite an incredulous stare than it does in the West.

I should also add that credit for the first published scientifically literate blueprint for a world without suffering belongs IMO to Lewis Mancini. See "Riley-Day Syndrome, Brain Stimulation and the Genetic Engineering of a World Without Pain", Medical Hypotheses (1990) 31, 201-207. As far as I can tell, Mancini's original paper sank with barely a trace. However, it is now online where it belongs: I've uploaded the text here. [I confess my jaw dropped a couple of years ago when I stumbled across it.]

HI was originally written for an audience of analytic philosophers. The Abolitionist Project (2007) and Superhappiness (2008) are (I hope) more readable and up-to-date. I won't now go into the technical reasons for believing we can use biotech, robotics and nanotechnology to eradicate the molecular substrates of suffering and malaise from the biosphere. Given the exponential growth of computing power and biotechnology, the abolitionist project could in theory be completed in two or three centuries or less. This timescale is unlikely for sociological reasons. So why should anyone think it's ever going to happen? All sorts of stuff is technically feasible in principle; but a lot of so-called futurology is just a mixture of disguised autobiography and wish-fulfillment fantasy. Is this any different?

Quite possibly not; but here are two reasons for guarded optimism.

Futurists spend a lot of time discussing the possibility of posthuman superintelligence. Whatever else superintelligence may be, we implicitly assume that it must be at least weakly related to what IQ tests measure - just completely off the scale. However, IQ tests ignore one important and extraordinarily cognitively demanding skill that non-autistic humans possess. At least part of what drove the evolution of our uniquely human intelligence was our superior "mind-reading" skill and enhanced capacity for empathetic understanding of other intentional systems. This capacity is biased, selective, and deeply flawed; but I'd argue its extension and enrichment are going to play a critical role in the development of intelligent life in the universe. By contrast, conventional IQ tests are "mind-blind"; they simply ignore social cognition. I'd argue that our posthuman descendants will have a vastly richer capacity to understand the perspective of "what it is like to be" other sentient beings; and this recursively self-improving empathetic capacity will be a vital ingredient of mature superintelligence and posthuman ethics. Of course "super-empathy" doesn't by itself guarantee a utopian outcome. And I'm personally sceptical that digital computers with a classical von Neumann architecture will ever be sentient, let alone superintelligent. But a future (hypothetical) superhuman capacity for empathetic understanding does, I think, make a universal compassion for all sentient beings more likely.

Viewing the way we currently treat other sentient beings as a cognitive and not just a moral limitation is of course controversial. So secondly, let's fall back on a more cynical and conservative assumption. Assume, pessimistically, that what Bentham says of humans will be true of posthumans too: "Dream not that men will move their little finger to serve you, unless their advantage in so doing be obvious to them. Men never did so, and never will, while human nature is made of its present materials." Does this bleak analysis of (post)human nature rule out a world that supports the well-being of all sentience?

No, I don't think so. If it's broadly correct, this limitation does mean that morally serious actors today should strive to develop advanced technology that makes the expression of (weak) benevolence towards other sentient beings trivially easy - so easy that its expression involves less effort on the part of the morally apathetic than raising one's little finger. For example, whereas one way to combat the cruelty of factory farming is to use moral arguments to promote its abolition - as in their very different ways do PETA and Peter Singer - the other, complementary strategy is to promote technologies that will allow "us" all to lead a cruelty-free lifestyle at no personal cost. Thus see, for example, the nonprofit research organization New Harvest, which is advancing meat substitutes.

Thirty years hence, if meat-eaters are presented with two equally tasty products, one "natural" from an intensively-reared factory-farmed animal that's been butchered for its flesh as now, the other labelled "cruelty-free" in the form of attractively branded vatfood, how many consumers are deliberately going to choose the cruel option if it doesn't taste better? I'm aware that this kind of optimism can sound naive. Yes, we can all be selfish; but I think relatively few people are malicious, and still fewer people are consistently malicious. So long as the slightest personal inconvenience to members of the master species can be avoided, I think we can extend the parallel of developing cruelty-free cultured meat to the eradication of suffering throughout the living world: ecosystem redesign, depot-contraception, rewriting the vertebrate genome, the lot. With sufficiently advanced technology, the creation of a living world without cruelty needn't be effortful or burdensome to the morally indifferent. Technology can make what is today impossibly difficult soon merely challenging, then relatively easy, and eventually trivial. And of course a lot of people do aspire to be more than merely weakly benevolent. Maybe we're "really" just signalling to potential mates our desirability as nurturing fathers [or whatever story evolutionary psychology tells us explains our altruistic desires]. But what matters is not our motivation or its ultimate cause, but the outcome.

A cruelty-free world is one thing; but many of us feel ambivalent about extreme happiness, let alone lifelong superhappiness of the kind promised by utopian neurobiology. One reason we may feel ambivalent is that we contemplate, for instance, the selfishness and drug-addled wits of the heroin addict; or the crazed lever-pressing of the rodent wirehead; or the impaired judgement of the euphorically manic. Intellectuals especially may be resistant to the prospect of superhappiness, fearing that their intellectual acuity may be compromised. Beyond a certain point, must there be some kind of tradeoff between hedonic tone and intellectual performance?

Not necessarily. Here is just one way in which reprogramming our reward circuitry could actually serve as a tool for intelligence-amplification and cognitive enhancement. Recall Edison's much-quoted dictum: “Genius is one percent inspiration and ninety-nine percent perspiration.” The relative percentages are disputable; but the contribution of sheer hard work and intellectual focus to productivity isn't in doubt. Now if you're a student, an academic or an intellectual, imagine if you could selectively amplify the subjective reward you derive from all and only the cerebral activities that you think you ought to enjoy doing most; and conversely, imagine if you could diminish or switch off altogether the reward from life's baser pleasures. What might you achieve intellectually if you could reprogram your reward circuitry so that you could work pursuing your highest aspirations for 14 hours a day? By way of contrast, using the Internet offers an uncomfortable insight into what one is really interested in. [Sadly, I lose track of the endless hours I've wasted online viewing complete fluff. I tell myself that I'm soon going to enjoy writing a 500 page scholarly tome, The Abolitionist Project. Alas in practice it's more fun surfing the Net for trivia.] In any event, IMO the enemy of intelligence isn't bliss but indiscriminate, uniform bliss; and in the future I think superhappiness and superintelligence can be fused - seamlessly or otherwise.
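
As a loose computational analogy - an analogy only, not a claim about real neurochemistry - "reprogramming one's reward circuitry" resembles reward shaping in machine learning: rescale the reward attached to each activity, and the same learning machinery chases a different profile of pursuits. A toy sketch, with every activity and multiplier invented:

```python
# Loose analogy only: 'reprogramming reward circuitry' treated as reward
# shaping. Every activity and multiplier below is invented.

raw_reward = {
    "writing the 500-page scholarly tome": 0.3,
    "surfing the net for trivia": 0.9,
    "reading journal papers": 0.4,
}

# The hypothetical intervention: amplify reward from the activities one
# thinks one ought to enjoy most; mute reward from life's baser pleasures.
shaping = {
    "writing the 500-page scholarly tome": 5.0,
    "surfing the net for trivia": 0.1,
    "reading journal papers": 3.0,
}

shaped = {activity: r * shaping[activity] for activity, r in raw_reward.items()}
print(max(shaped, key=shaped.get))  # the tome wins once the rewards are reshaped
```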

Are there pitfalls here? Yes, lots. But they are technical problems with a technical solution.

Here's another example. One reason we may be ambivalent about extreme happiness is that we see how it can make people antisocial. One thinks of the heroin addict who neglects his family for the sake of his opioid habit. But what if safe, sustainable designer drugs or gene therapies were available that conferred an unlimited capacity for altruistic pleasure? It's only recently been discovered that the empathogenic hugdrug MDMA (Ecstasy) triggers copious release of the "trust hormone" oxytocin: oxytocin seems to be the missing jigsaw piece in explaining MDMA's unique spectrum of action. So to take one scenario, what if mass oxytocin-therapy enabled us to be chronically kind, trusting and empathetic towards each other - the very opposite of the "selfish hedonism" of popular stereotype?

Moreover, this option isn't just a matter of personal lifestyle choice; I think the implications are more far-reaching. Thoughtful researchers are increasingly concerned about existential and global catastrophic risks in an era of biowarfare, nanotechnology and weapons of mass destruction. Britain's Astronomer Royal, Sir Martin Rees, puts the odds of human extinction this century at 50%. I suspect this figure is too high, but clearly the risk is not negligible. Anyhow, arguably the greatest underlying source of existential and global catastrophic risk lies in the Y chromosome: testosterone-driven males are responsible for the overwhelming bulk of the world's wars, aggression and reckless behaviour. Decommissioning the Y chromosome isn't currently an option; but the potential civilizing influence of pro-social drugs and gene therapies on dominant alpha males shouldn't be lightly dismissed as a risk-reduction strategy. In general, a world where intelligent agents are happier, more trusting and more trustworthy is potentially a much safer world - and much more civilised too.

Are there pitfalls to modifying human nature? Again yes, lots. But there are also profound risks in retaining the biological status quo.

David Pearce
dave@hedweb.com
https://www.hedweb.com/

