4. Objections.

No 34

"Why the headlong rush to paradise engineering? Why not wait until we have the wisdom to understand the implications of what we're doing? Let's get it right."

We are faced with a "bootstrap" problem. Human beings may only ever be wise enough to understand the ramifications of what we're doing after we have enhanced ourselves sufficiently to be able to do so. Perhaps La Rochefoucauld was wiser than he knew: "No man is clever enough to know all the evil he does." Our species may take pains to avoid building a fools' paradise or some sort of Brave New World. But when, and by what means, will we ever be intelligent enough to be sure of succeeding? When will we be wise enough to avoid making mistakes that we haven't even conceived? As the reproductive, infotech and nanotech revolutions unfold, we (post-)humans are bound to seek ways to make ourselves incrementally smarter. Does it really make sense to postpone a parallel emotional enrichment - assuming, naïvely, that emotional and cerebral intelligence could be so cleanly divorced? After all, narrowly-conceived intelligence-amplification carries risks of its own; greater wisdom may depend on emotional enrichment rather than being a prerequisite for it. For example, it transpires that genetically engineered "Doogie mice", endowed with an extra copy of the NR2B subtype of NMDA receptor, have not merely superior memories, but a chronically enhanced sensitivity to pain. Imagine if, prior to clinical trials, ambitious prospective human parents had rashly arranged to insert multiple copies of the gene into their designer babies to give them a future competitive advantage in education. The outcome might be pain-ridden child prodigies. Vastly more subtle and complex pitfalls doubtless lie ahead, making problematic any step towards a post-human civilisation, not just paradise-engineering. If the risk-reward ratio of a proposed intervention is unfavourable, then clearly a potentially life-enriching drug, gene therapy (etc) shouldn't be rushed. But sometimes the risk-reward ratio is unclear. A more intractable problem is that some risks may be unknown, or inadequately quantified, or both.

So is the Objection essentially correct? Should we opt to conserve the genetic status quo of Darwinian life? Or at best defer the prospect of distinctively emotional enrichment to the presumed wisdom of our distant descendants?

Delay would be morally reckless for the following reason: ethically, even a non-negative utilitarian can agree that it's critical to distinguish between the relief of present suffering and the refinement of future bliss - between the moral urgency of the abolitionist project and the moral luxury of a (hypothetical) full-blown paradise-engineering. The risk-reward ratio of proposed interventions will shift as life on Earth gets progressively better - both for an individual and for civilisation as a whole. We demand a far higher level of proven safety from an improved version of aspirin, for example, than from a potentially life-saving anti-AIDS drug. By parity of reasoning, the same yardstick should apply to their affective counterparts, interventions against the different forms of psychological distress. If, fancifully, we were already living in some kind of heaven-on-earth, or even just in a civilised, pain-free society, then it would indeed be foolish to put our well-being at risk by hazardous and premature enhancements designed to make life even better. Bioconservatism might be a wise policy. The Objection might then be tenable. Manifestly, we don't dwell anywhere of the sort.

Compare the introduction of pain-free surgery. In the pre-anaesthetic era, a surgical operation could be tantamount to torture. Patients frequently died. Survivors were often psychologically as well as physically scarred for life. Then a wholly unexpected breakthrough occurred. Within a year of William Morton's demonstration of general anaesthesia at Massachusetts General Hospital in 1846, ether and chloroform anaesthesia were being adopted in operating theatres across the world - in Europe, Asia and Australasia. Instead of embracing this utopian dream-come-true, would it have been wise to wait 30 years while conducting well-controlled trials to see if agents used as general anaesthetics caused delayed-onset brain damage, for instance? Ideally, yes. Should prospective studies have first been undertaken comparing the safety of ether versus chloroform? Again, yes - ideally. Rigorous longitudinal studies would have been more prudent. In the mid-19th century, there were no professional anaesthesiologists, no balanced anaesthesia, no patient-monitoring apparatus, no muscle relaxants and no endotracheal intubation. The mechanisms of anaesthesia in the central nervous system weren't understood at all. Nor, initially, were the principles of antiseptic surgery: only the combination of anaesthesia plus antisepsis could ever make surgery comparatively safe. If the use of anaesthetics had led to delayed-onset long-term brain damage (etc), then the medical doubters might now be hailed as uncommonly prescient - instead of enduring the "enormous condescension of posterity", relegated to a footnote in our incorrigibly Whiggish potted histories of medicine.

Despite these caveats, the world-wide introduction of general anaesthesia in surgery is, by common consent, one of the greatest triumphs of medical history. Why the precipitate haste of its adoption? In essence, anaesthetic use spread rapidly across the world because the horrors of extreme physical pain entailed by surgery without anaesthesia were judged by most (but not all) physicians and their patients to outweigh the potential risks - even though the risks weren't properly known or adequately quantified. Surgeons, too, were able thereafter to attempt ambitious life-saving interventions that were effectively impossible before. By our lights, early anaesthesia was appallingly crude, just as narcotic analgesia remains to this day. But the moral urgency of getting rid of suffering - whether its guise is "physical" or "mental" or both - is obscure only to those not caught in its grip. This is why almost everyone will "break" under torture; and why, globally, hundreds of thousands of depressed people take their own lives each year: in fact "mental" pain effectively kills more people than its nominally physical counterpart. If one is looking for historical role-models, then perhaps Dr John Snow - "the man who made anaesthesia a science" - may serve as an exemplar. As the use of surgical anaesthesia spread like wildfire in the late 1840s, Snow didn't advocate the "safe", bioconservative option of abstinence or delay. That would have been callous. But unlike some of his more gung-ho medical colleagues, Snow was mindful of the potential risks of the seemingly miraculous discovery. His introduction of standardised dosing through efficient inhalers and careful patient monitoring saved many lives. Moral urgency is not a licence for recklessness.

Like most analogies, this one is far from exact. Currently millions of sentient creatures, human and non-human, are indeed stricken by suffering no less grievous than that of patients in the pre-anaesthetic, pre-opioid analgesic era; and likewise, exciting but largely unproven technologies exist to remedy their plight. So to that extent, the historical parallel holds. But statistically, most people are not in the throes of extreme psychological distress. Thus if one is currently relatively satisfied with one's life, and if one's dependants are relatively satisfied too, then there are strong grounds for caution over experimenting with ill-tested interventions that promise to enhance one's existing well-being. The advent of a putative sustainable mood-enricher to reset one's emotional thermostat, a novel intellect-sparing serenic to banish unwanted anxiety, an illuminating new psychedelic, a super-empathogen, a genius-pill (or whatever) might represent a tantalising prospect. Yet such agents should presumably undergo rigorous prior testing before general public licensing - however dazzling the anticipated benefits. It might seem that delay is the only responsible option; there can be wisdom in inaction.

The pitfall of this "safety-first" approach lies in the extreme risk of moral complacency it breeds. Hundreds of millions of human beings, and billions of non-human animals, are not in such a fortunate position. On a universalist utilitarian ethic, or simply a Buddhist-style ethic of compassion, we should systematically apply the same level of urgency to relieving their suffering as we would be justified in exercising if we were ourselves tormented by intense pain or suicidal despair. Extreme suffering is the plight of billions of sentient beings alive today, whether in our factory-farms, in a Darwinian state of nature, or in the mind of a depressed neighbour. Desperate straits mandate taking risks one would otherwise shun.

On the face of it, if one aims to lead a cruelty-free lifestyle, one may disclaim personal complicity in such suffering. But this moral opt-out clause may be delusive. Simply by deciding to have genetically unenriched children, for instance, one perpetuates the biology of suffering by bringing more code for its substrates into the world. A healthy caution toward untested novelties should not collapse into status quo bias.

Any plea, then, for institutionalised risk-assessment, beefed-up bioethics panels, academic review bodies, worst-case scenario planning, more intensive computer simulations, systematic long-term planning and the institutionalised study of existential risks is admirable. But so is urgent action to combat the global pandemic of suffering. "The easiest pain to bear is someone else’s".
