by David Pearce
To You, What is the "Ultimate" Outcome of Transhumanism?
Wakata, is your argument in favour of retaining (some) suffering directed against a biology of lifelong well-being per se? Or only against addling our wits with soma or some other blissful intoxicant? And by "more alive", have you in mind the intensity of lived experience? Or something else? For sure, a couch-potato contentedly watching soap opera on TV will be "less alive" by this criterion than, say, a triumphant marathon runner with aching limbs crossing the finishing line. But biotechnology promises to deliver not just gradients of life-long bliss, but also a radical intensification of all our experience. Thus we know that enhanced mesolimbic dopamine function, for example, is not merely exhilarating - it strengthens drive, motivation, and a sense of things-to-be-done.
by ParagonRenegade in Transhuman
I don't pretend to know what kinds of well-being our distant descendants and/or successors will enjoy.
But why preserve experience below "hedonic zero" at all?
* * *
Yes, recalibrating the hedonic treadmill can vastly enrich your quality of life. But perhaps there's one other big advantage to hedonic recalibration. Recalibration doesn't entail any shared commitment to ultimate goals - or sacrificing your values and conception of the good life on the altar of someone else's vision of the ideal society. Few of us today are as genetically fortunate as transhumanist scholar Anders Sandberg ("I do have a ridiculously high hedonic set-point").
Yet the prospect of life animated by gradients of intelligent bliss needn't be the sort of utopian dream enjoyable only by our descendants and successors. We're on the brink of an era of rapid genome self-editing:
("Right on target: New era of fast genetic engineering")
Even some fairly modest genetic tweaking, e.g.
("Genes predispose some people to focus on the negative")
("The catechol-O-methyl transferase Val158Met polymorphism and experience of reward in the flow of daily life.")
could potentially enrich everyone's quality of life next decade and beyond as the CRISPR genome-editing revolution gathers pace.
Also, radical mood enrichment can - potentially - complement other core transhumanist goals such as radical life extension. How many times does one hear "I wouldn't want to live forever" or "But I'd get bored"? A predisposition to low mood tends to be life-shortening.
In addition, there is an (IMO) under-explored link between the persistence of involuntary suffering and existential / global-catastrophic risk. For example, how many of the million or so depressed people who kill themselves globally each year would take the rest of the world down with them if they could? Biotechnology (and later this century perhaps AI) opens up frightening prospects to nihilistic depressives. By contrast, the more one loves life, then - other things being equal - the more motivated one is to preserve it. It's probably no coincidence that Anders, for instance, focuses on the field of existential risk.
* * *
ObservationalHumor, one of the blessings of re-engineering the hedonic treadmill is that your existing values and preference architecture can be retained. Unless your values involve inflicting involuntary suffering on others, recalibrating yourself to enjoy life based on gradients of intelligent bliss doesn't involve buying into anyone else's conception of the good life or the ideal society - least of all mine. Now maybe you're happy with your hedonic range and hedonic set-point as they are. If so, then I would say: "Awesome" (seriously!). But the point of HI is not to urge coercive happiness. Rather, it's to promote a biology of invincible well-being as an option available to anyone who wants it.
The Hedonistic Imperative
by derivedabsurdity7 in Futurology
Mmafc, I too have immense doubts. The question turns on risk-reward ratios. At present, each child born of sexual reproduction is a unique genetic experiment. Our source code is the product of genetic roulette. If we are ever going to phase out involuntary suffering, then we will need to edit our DNA - and choose the genetic make-up of our future children. Preimplantation genetic screening is a start - but it isn't true genetic engineering because we are simply choosing from what having sex throws up "naturally". But the era of "designer genomes" is imminent. For what it's worth, I reckon our descendants will live lives animated by gradients of genetically preprogrammed well-being orders of magnitude richer than anything physiologically feasible today.
Could things go horribly wrong? Yes.
Mmafc, let's guard against a false dichotomy here. Today, the biology of suffering is involuntary. We shouldn't conceive the alternative as involuntary bliss. Rather, biotechnology promises us the opportunity to choose our own upper and lower bounds of well-being - and also our average hedonic set-point. As user "Derivedabsurdity7" points out, there is a huge difference between arguing for uniform bliss and urging hedonic recalibration, i.e. an enhanced biology of information-sensitive gradients of well-being. I'd agree with you: unvarying bliss is a recipe for intellectual stasis, loss of critical insight and risk to life and limb. Lives animated by gradients of intelligent well-being, on the other hand, are not just vastly more rewarding than today's norm. In addition, a high average hedonic set-point is consistent with strong motivation, the potential for intellectual development, and also a sense of social responsibility.
Of course, genetic and pharmacological interventions alike have numerous potential pitfalls. Practically speaking, user-friendly autosomal gene-editing tools are probably still decades away. But right now, if you were offered the opportunity to pick - via preimplantation genetic screening - a high or low hedonic set-point version of the COMT gene for your future child, which allele would you choose? (cf. "The catechol-O-methyl transferase Val158Met polymorphism and experience of reward in the flow of daily life.": http://www.ncbi.nlm.nih.gov/pubmed/17687265 )
Likewise, would you choose a high pain threshold or a low pain threshold for your prospective children?
(cf. "Pain perception is altered by a nucleotide polymorphism in SCN9A")
Naturally, many people may prefer to continue the traditional genetic crapshoot - trusting God [or the wisdom of Mother Nature, etc] to ensure a happy outcome. As we know, the results are often mediocre and sometimes catastrophic. In the long run, I'd argue that ethically responsible prospective parents should consider making intelligent genetic choices instead.
Should we eliminate all suffering in the future?
Simcurious, the enemy of motivation isn't bliss but uniform bliss. Hence the importance of hedonic recalibration. Tellingly perhaps, the happiest people alive today don't tend to be indolent lotus-eaters - like the Eloi in HG Wells' "The Time Machine". Rather, the happiest people are typically also the most motivated, most active life-lovers. They find the days too short to get everything done. Thus we know it's physiologically possible to preserve the functional analogues of discontent without the nasty "raw feels" as most of us experience them today. Functional analogues of discontent can be the engine of progress even in a world animated by gradients of intelligent bliss.
by Buck-Nasty in Futurology
Coldplazma, suffering, by its very nature, is experience below hedonic zero. So if we phase out the molecular signature of experience below hedonic zero in the CNS, suffering thereby becomes physically impossible. Your view that pleasure and pain are largely or wholly relative is intuitively powerful. But compare people today who are chronically depressed. Some chronic depressives can't imagine what it is like to be happy - or even what the word "happiness" means. For sure, some of their days may be "merely" bad rather than terrible. Yet it would be cruel for us to tell them that their merely miserable days were somehow happy. Conversely, there exists today a small percentage of temperamentally "hyperthymic" people who spend almost their whole lives animated by gradients of well-being. Hyperthymics don't suffer on days that are merely good rather than wonderful. Their hedonic floor is higher than some people's hedonic ceiling. Thus I think we need to ensure that everyone has the opportunity to enjoy such a rewarding life - and ultimately aim for superhappiness and beyond.
Wadcam, as you note, there are "cocoon-like" ways to phase out the biology of suffering. These ways range from wireheading to heroin-like drugs to some version of Nozick's "Experience Machine".
But the advantage of recalibrating the hedonic treadmill is precisely that such recalibration doesn't entail giving up your existing values. Indeed, you can retain much if not all of your existing preference architecture. More concretely, right now you could choose the benign version of e.g. the COMT gene for your future children via preimplantation genetic diagnosis (cf. http://www.ncbi.nlm.nih.gov/pubmed/17687265). But in the future you could modify your own genetic source code to ensure your own life is animated by gradients of intelligent well-being too - even gradients of bliss. Other things being equal, an enhanced capacity for reward makes one more robust and resilient. By way of contrast, compare depressives' propensity to "catastrophising", and their predisposition to learned helplessness and behavioural despair.
Interview with transhumanist abolitionist philosopher David Pearce
Paxfeline, sorry, but could you clarify why you believe that choosing not to bring more suffering into the world is "egotistical"? Parents who choose to create life for their own gratification at least have a responsibility not to create gratuitous suffering - and in the future all suffering will be gratuitous.
by beyonsense in europhilosophy
Transhumanity.net Condemns Factory Farming
ENDZYM3, the use of technology to secure the well-being of all sentient life is a core tenet of transhumanist ethics. It's not a new commitment. Compare the statement of principles laid out in the Transhumanist Declaration (1998, 2009) of the World Transhumanist Association / Humanity Plus:
by acunsion in Transhumanism
The evolution of a sophisticated capacity for perspective-taking seems to have driven the evolution of distinctively human intelligence. Unfortunately, selection pressure has ensured this "mind-reading" capacity is biased, partial and selective. By contrast, a God-like posthuman superintelligence could presumably access, and impartially weigh, all possible first-person perspectives. Is such a God-like capacity for empathetic understanding consistent with the industrialised abuse and killing of other sentient beings as now? I don't find this a credible claim. A pig, after all, is as sentient - and for what it's worth, as intelligent - as a prelinguistic human toddler. Yet humans routinely do things to pigs that would earn the perpetrator a lifetime prison sentence if our victims were human. Our typical excuse ("But I like the taste!") doesn't seem ethically compelling.
Factory-farms are a source of suffering on an unimaginable scale. So it's good to see that transhumanity.net advocates their closure.
Five Top Reasons Transhumanism Can Eliminate Suffering
http://io9.com/5946914/should-we-eliminate-the-human-ability-to-feel-pain
by Buck-Nasty in Transhuman
Reading through some of the more critical comments below, I'd be intrigued (and grateful) if critics were to write up and publish online even a short paper outlining why they believe either:
1) any commitment on the part of the transhumanist movement to the well-being of all sentience is a bad idea; or
2) it's a good idea, but best achieved in radically different ways from wholesale genetic engineering - and if so, (in outline) how?
Psygnisfive, it's sentient beings who don't like being asphyxiated, disembowelled or eaten alive. Just for 10 seconds, try and imagine what it feels like. Such horrors are not some abstract philosophical conjecture on my part. When humans do intervene in Nature e.g. to eradicate smallpox, we can't understand all of the ramifications of doing so - any more than we can understand all the ramifications of conserving the status quo. Instead we just have to try and weigh risk-reward ratios. At the moment, more money goes on e.g. captive breeding programs for big cats than on compassionate intervention. Should we be "tampering" with the wisdom of Nature? Well, should we "tamper" with the cystic fibrosis allele? Barring identical twins etc, every member of a sexually reproducing species is a unique genetic experiment. The question is whether genetic roulette is likelier to lead to a happier outcome than responsible planning.
There is nothing elegant about natural selection, any more than there is anything beautiful about an equation whose solutions include Auschwitz. I'd agree with everything Richard Dawkins says apart from his last sentence, "It must be so": "The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute that it takes me to compose this sentence, thousands of animals are being eaten alive, others are running for their lives, whimpering with fear, others are being slowly devoured from within by rasping parasites, thousands of all kinds are dying of starvation, thirst and disease. It must be so." Status quo bias may indeed stop us practising high-tech Jainism; but it's not forbidden by some immutable law of Nature.
Psygnisfive, to make a compelling case, I think you're going to need to do more than claim that "we do not have justification for claiming that any non-human animals are sentient". Instead, you're going to need to offer supportive evidence that no non-human animals are sentient. That was my point about the burden of proof. We'd give short shrift to someone who abused small (human) children on the grounds he believed toddlers were insentient automata. Perhaps it's worth recalling that not so long ago some folk were convinced that members of other ethnic groups were barely sentient either:
Doubtless Meiners and his colleagues could claim that black sentience couldn't be scientifically proven. One just wishes such "scientific racists" had erred on the side of caution.
My point was that when the consequences of being mistaken are ethically catastrophic, one tries (wherever possible) to play safe. Thus even if one entertains the philosophical conjecture that the extreme "distress vocalisations" emitted by a pig, a dog, or a human baby who is physically assaulted don't signal intense subjective distress - or indeed consciousness of any kind - it would be ethically reckless to condone such behaviour. There is a huge difference between the claim that we can't scientifically prove nonhuman animals / prelinguistic humans are sentient, and the claim that the chances of their being sentient are negligible. Perhaps look / listen to Richard Dawkins.
Dawkins' position on nonhuman animal sentience is actually more radical than mine.
Deterrence, genes and culture have co-evolved. The idea of overcoming suffering isn't new. Rather what has changed most recently is the technology to put such ethics into practice. I don't quite understand why you consider phasing out suffering in the rest of the living world to be "navel-gazing in our self-absorbed perspectives of what constitutes suffering and usefulness." Surely extending our circle of empathy is an attempt to transcend anthropocentric bias - not a case of it? "A denial of reality"? Well, I guess one might say the same to whoever invented the wheel or first conceived humans could fly. Ideas that seemed crazy at the time, e.g. pain-free surgery, are now simply taken for granted. In the mature postgenomic era, the absence of experience below Sidgwick's "hedonic zero" may seem equally natural too. Challenging each other's sanity gets us nowhere.
Psygnisfive, if you were persuaded of the consensus wisdom that members of species other than members of Homo sapiens are indeed sentient beings, would this change your position? If a dog, for example, needs an operation, should we use an anaesthetic - or just a muscle relaxant?
Kylesox, in future, we'll be able to choose our own hedonic range. Today, some people endure gradients of lifelong ill-being; most of us experience a mixture of well-being and ill-being; and a small percentage of "hyperthymic" people spend their lives animated by gradients of well-being. We know that hedonic set-points have a high degree of genetic loading - though environmental factors play a big role too. Critically, I think it would be unethical to compel people to endure miserable and mediocre experience when its biology has become purely optional.
In one sense, humanity is likely to become more homogenous. If we eliminate e.g. the cystic fibrosis allele, then a source of genetic variety is lost. However, in the era of "designer zygotes", genetic diversity can potentially be increased without obvious limit. Such human diversity can still be set against a constant backdrop of invincible psychological and physical health.
Indeed, here we agree. Right now, any such project would be hugely challenging and costly. But what is effortful for mere mortals will be much less so for our more God-like successors.
Mass extinctions? Well, right now humans are in the throes of causing a big one, albeit not in a planned fashion. Intuitively, I agree with you. Planned extinction is potentially an ethically momentous step; and not to be undertaken lightly. That is the point of the radical conservatism of the Reprogramming Predators essay. If we judge the conservation of snakes, crocodiles, lions and other iconic predators to be prudent and wise, they can nonetheless be genetically tweaked so they don't cause any harm. And what about the species essentialist argument that a lion that doesn't cause murder and mayhem wouldn't truly be a lion? Well, is a human who wears clothes truly human? I'll leave that question to the metaphysicians.
The switch from (human) slave-holding was undoubtedly disruptive. Ceasing to treat other sentient beings as though they are property will surely be disruptive too - though the advent of in vitro meat will most likely ensure consumers, at least, endure minimal inconvenience. Later this century, habitat destruction means that large terrestrial vertebrates will mostly be extinct outside our wildlife parks in any case. So our complicity in suffering will be obvious in a way it isn't now. What today would be hugely technically demanding and vastly expensive isn't likely to remain so - not with the computational resources we'll command later this century and beyond.
Clearly, what kinds of life-form and what states of consciousness should exist in future is a highly contentious issue. But anyone who shares a commitment to the well-being of all sentience, yet disagrees with the sorts of scenario I discuss, is cordially welcome to set out alternatives. I'll be delighted and intrigued.
* * *
You are standing next to a shallow pond. A prelinguistic toddler, a dog, and a pig are drowning. Have you really no ethical obligation to pull them out because we have no conclusive scientific evidence they are conscious?
The above example is fanciful. But it's realistic to the extent it hints at the Godlike powers over the rest of living world we are poised to acquire this century and beyond. My focus is on the ethical use of technology under conditions of uncertainty of the kind you describe.
And if you actually rescue a zombie?
Well, at worst you get your clothes wet.
* * *
There are certainly knotty theoretical issues involved in the notion of the neural correlates of consciousness. But it would be harsh to call researchers who investigate the topic, e.g. Christof Koch, somehow disingenuous in doing so.
Ethically, the point is surely that we have grounds - not conclusive grounds for sure - for believing that human infants, prelinguistic toddlers and nonhuman vertebrates are conscious - and in some cases highly sentient - even when they lack the meta-cognitive capacities of language-speaking adult humans. Treating a sentient being as though (s)he were a zombie when s/he is in fact experiencing profound distress is potentially ethically catastrophic.
Microelectrode studies using awake human subjects allow us to map out the neural correlates of consciousness. Stimulation of the phylogenetically ancient limbic structures that mediate our core emotions triggers intense, raw experience (pain, rage, fear, joy, etc). By contrast, the phenomenology mediated by some parts of the neocortex is subtle and elusive. Thus the trait that most distinguishes mature humans from nonhuman animals, i.e. a rich generative syntax, is almost impenetrable to introspection. Babies, prelinguistic toddlers, dogs and pigs cannot verbalise their experiences. But the same activated genes, neural pathways, hormonal responses and overt behaviour ("distress vocalizations") are involved when they are subjected to noxious stimuli. So a reasonable but unproven inference is that they suffer intensely too. Ethically speaking, we should act accordingly.
Onslaught147, I guess there are two separate issues here. First, is it technically feasible to phase out [involuntary] suffering throughout the living world? Second, is it ethically desirable to do so? In the long run, should we aim to phase out - or preserve - experience below "hedonic zero" in our forward light-cone?
What would technically be feasible for large, long-lived vertebrates even now (cf. https://www.abolitionist.com/reprogramming/elephantcare.html ) will also be feasible lower "down" the phylogenetic tree later this century and beyond. The worst forms of suffering appear to occur in higher vertebrates: a welfare state for marine molluscs is not high on the priority list of even devout negative utilitarians. Strictly speaking, I don't think the problem is predation per se. Rather it's the plight of creatures with central nervous systems - and brain-like cephalic ganglia. So by all means, let's continue to smite pathogenic bacteria wherever they are found! :-)
Hmm, Desullen, researching suffering and depression can be quite harrowing at times; and writing about misery isn't much fun either. Agony and despair have measurable neural correlates in human and nonhuman animals alike. They are as real as the rest mass of the electron - but fortunately more amenable to change and control.
Whether posthumans choose to conserve or create anything resembling lions, snakes and crocodiles (etc) in their wildlife parks is indeed an open question. But in the postgenomic era, "reprogramming" predators (cf. https://www.abolitionist.com/reprogramming/index.html ) or provision of in vitro meat for obligate carnivores is technically feasible. Which policy options we implement will depend on how ethically serious we are about phasing out involuntary suffering.
Sentience, "the ability to feel, perceive, or be conscious, or to have subjective experiences" (cf. https://en.wikipedia.org/wiki/Sentience), is not the same as sapience or a capacity for reflective self-awareness. Our most intense experiences, notably raw agony or blind panic, are evolutionarily ancient and strongly conserved over hundreds of millions of years. Their neural substrates extend throughout the vertebrate line and beyond.
In which case, rest assured that I have no wish to harm individual predators, human or nonhuman. But this doesn't entail we are ethically entitled to create more of them. The interests of their victims should be considered too.
Desullen, for what it's worth, my AQ score is a rather un-nerdish:
But one can subscribe to the well-being of all sentience whether one's AQ is 12 or 42. What counts is the calibre of the arguments, not the psychological foibles of the participants.
Desullen, I cannot empathise with a taxonomic abstraction. Taxonomic abstractions (species, genera, families etc) are not subjects of experience. Only subjects of experience are capable of well-being or ill-being. Thus mass use of sterilants to phase out Anopheles mosquitoes, for example, does not serve the metaphorical "interest" of the genus of mosquito in question. But such sterilisation doesn't in any way harm the well-being of any individual mosquito.
Verzingetorix, I regard the notion that any sentient being is merely "prey" as a misconception to be transcended.
An ecosystem is not a subject of experience. Individual sentient beings are subjects of experience. The Transhumanist Declaration calls for the well-being of all sentience, not the well-being of all ecosystems. So the goal of achieving sustainable ecosystems is subject to ethical constraints. Thus when famine strikes Ethiopia we don't any longer believe that it's ethically acceptable to "let Nature take its course" on the grounds that Africa is overpopulated. Instead we send famine relief and aid with family planning. This principle can be generalised with tomorrow's technologies.
Alternatively, we could aim to sustain the cruelties of Nature under a regime of natural selection indefinitely.
Psygnisfive, even if you are sceptical about the consciousness of human infants and prelinguistic toddlers, you would (please correct me if I'm wrong) judge it ethically unacceptable to castrate a human child without an anaesthetic. The ethical implications of entertaining a false theory of (non)consciousness in this case would be catastrophic. Yet likewise they are potentially ethically catastrophic in the case of castrating unanaesthetised pigs, as happens today. [Factory-farmed nonhuman animals are so distressed they have to be debeaked, tail-docked, castrated (etc) to prevent them physically mutilating themselves and each other.] Or are you claiming, not merely that you don't believe nonhuman animals feel pain, but also that there is no non-negligible possibility that you could be mistaken?
Verzingetorix, overcoming our biological limitations entails overcoming arbitrary anthropocentric and ethnocentric bias. So does enriching our perspective-taking capacity beyond members of our own race or species. The Transhumanist Declaration calls for the well-being of all sentience, not mitigating the suffering of one group of sentient beings exploited by another group. I promise I'm well acquainted with the thermodynamics of a food chain. But later this century, every cubic metre of the planet will be computationally accessible to micro-management, surveillance and control. Not least, we'll have the option of genomic rewrites and cross-species fertility control to manage populations in our wildlife parks. Indeed the technologies of immunocontraception to do so already exist.
By all means, feel free to urge that Article Seven of the Transhumanist Declaration be scrapped. But by calling the aim of phasing out suffering "childish", you're denigrating the work of a lot of transhumanists who hammered out the consensus document in 1998 and its slightly revised formulation in 2009.
The pre-Darwinian contrast between "humans" and "animals" is unscientific. A pig, a dog, and a human toddler, for example, have the same neurological circuitry that mediates pain and fear. To claim that the toddler possesses consciousness but the pig and the dog are insentient automata flies in the face of virtually every expert in the field.
Can I prove nonhumans feel pain? No. But then one can't defeat radical scepticism about other minds, human or nonhuman alike. It's only an inference. But the consequences of being wrong could be ethically catastrophic.
Becoming posthuman entails overcoming arbitrary anthropocentric and ethnocentric bias - a formidable challenge, for sure. Whether one is a human, a dog, a pig, or a zebra, all sentient beings are alike in having an interest in not being harmed. This shared interest extends to not starving to death, being asphyxiated, disembowelled or eaten alive. In the case of members of other ethnic groups, for example the genocide of the Tutsis by the Hutu in Burundi
should we practise non-intervention on the grounds that we'd be disrupting the natural ecosystem - or that the interests of the victims are an inscrutable mystery to us? Or does a commitment to the well-being of all sentience entail compassionate intervention irrespective of ethnic group or species identity?
Verzingetorix, how would you attempt to reconcile your perspective with the Transhumanist Declaration? (1998, 2009)
Or are you urging that the commitment to the well-being of all sentience be scrapped?
Thebardinggreen, if you think the Transhumanist Declaration's (cf. http://humanityplus.org/philosophy/transhumanist-declaration/ ) commitment to the well-being of all sentience should be scrapped, feel free to say so. But if not, to uphold the principle and balk at its implications would be weak-minded.
Recall that humans already massively intervene - and impose our values upon - the rest of the living world. We do so in ways ranging from uncontrolled habitat destruction to captive breeding programs for big cats, "rewilding" and so forth. So the question is what principles should govern our interventions - the traditional ideology of "conservation biology" or an ethic of compassionate stewardship?
Indeed so. I would only be moderately indignant if some public-spirited soul released a pirate copy for public sharing. In the meantime, here is a review: https://www.biopsychiatry.com/depression/index.html
Permachine, what on earth makes you think I want to phase out biology? That would be a very different proposal to phasing out the molecular signatures of suffering and malaise!
And why do you suppose someone who advocates high-tech Jainism wants to "exterminate" predators? Whether predators should have reproductive rights is a separate question. Use of mass sterilants to curb the reproduction of the Anopheles mosquito would seem both ethical and prudent - and needn't harm any sentient being.
Intelligence? Other things being equal, intelligence is genetically fitness-enhancing. Indeed, without the intellectual capacity to rewrite our own genetic source code, pain and suffering would presumably persist indefinitely. But like uniform misery, uniform bliss impairs critical insight. This is why I think we should aim for a biology of information-sensitive gradients of well-being.
Some folk associate transhumanism with Nietzsche ["To those human beings who are of any concern to me I wish suffering, desolation, sickness, ill-treatment, indignities - I wish that they should not remain unfamiliar with profound self-contempt, the torture of self-mistrust, the wretchedness of the vanquished: I have no pity for them, because I wish them the only thing that can prove today whether one is worth anything or not - that one endures." ("The Will to Power", p 481) "You want, if possible - and there is no more insane "if possible" - to abolish suffering. And we? It really seems that we would rather have it higher and worse than ever. Well-being as you understand it - that is no goal, that seems to us an end, a state that soon makes man ridiculous and contemptible - that makes his destruction desirable. The discipline of suffering, of great suffering - do you not know that only this discipline has created all enhancements of man so far?" (Beyond Good and Evil, p 225 ) "I do not point to the evil and pain of existence with the finger of reproach, but rather entertain the hope that life may one day become more evil and more full of suffering than it has ever been." [etc.]
I prefer the commitment to the well-being of all sentience set out in the Transhumanist Declaration (1998, 2009).
The second law of thermodynamics doesn't entail that inorganic robots must suffer. Why assume it entails that organic robots are forever fated to do so? In the postgenomic era, we can rewrite our own genetic source code. Whether we should choose to conserve the biology of misery and malaise is ultimately an ethical question.
Deterrence, can one really commit genocide against a taxonomic abstraction? Does the Anopheles mosquito have reproductive rights? The high-tech Jainism I argue for doesn't entail killing any sentient being. And though Jeff McMahan's NYT piece doesn't go into detail, I'm pretty confident he'd agree.
Mindbleach, does ethical status depend on the capacity to pass the mirror self-recognition test? If so, then human infants and prelinguistic toddlers are excluded up to the age of 1.5 to 2 years. IMO, becoming transhuman entails overcoming arbitrary anthropocentric bias. Other things being equal, beings of equivalent sentience deserve equivalent consideration regardless of ethnic group or species.
Thebardingreen, could you possibly be a little more specific? In what sense is phasing out the biology of suffering "philosophical gibberish"?
Psygnisfive, forgive me, but is it possible you may be confusing sentience with sapience?
["Sentience is the ability to feel, perceive, or be conscious, or to have subjective experiences."]
Pigs, elephants and human prelinguistic toddlers are not very cognitively sophisticated, in some ways at least. But they are sentient beings with a profound capacity to suffer distress.
Just in case anyone missed the episode in question:
More realistically, in an era of WMD, is the existence of happiness or suffering the greater existential risk?
Tobislu, I think it's worth studying the happiest, "hyperthymic" people alive today. They aren't manic; some of them are extremely productive. The evolutionary origins of low mood lie deep in our evolutionary past:
I don't think we need to re-enact the dominance rituals of tribal society indefinitely.
MemoryZeta, indeed so. But not everyone is happy in Brave New World. And happiness doesn't need to be shallow, hedonistic and one-dimensional. (cf. https://www.huxley.net )
Fiddichcrosbow, arbitrarily deep or shallow contrasts are still feasible if we phase out experience below "hedonic zero". But life would be much richer if we all enjoyed higher typical hedonic set-points.
Onslaught147, yes indeed, predators are a traditional form of population control. So was smallpox. The question is whether there are ethical alternatives. Later this century, every cubic metre of our planet is going to be computationally accessible to surveillance, micro-management and control. Should we merely watch the suffering of other sentient beings - or use high-tech Jainism to prevent it? Cross-species fertility control via immunocontraception is just a foretaste of what's feasible with tomorrow's technologies.
Mindrust, phasing out the cruelties of Nature may be a misguided idea. But to describe it simply as "brain dead" is too quick. Consider situations like:
Clearly, we'd intervene if a human toddler were suffering such torments. So why allow members of another species of equivalent sentience to undergo such a horrible fate?
Actually, if confronted with the realities of Nature in the raw, I suspect many (most?) people would argue that we should intervene. But does suffering matter only when we happen to stumble across it? I just think our compassion needs to be systematised rather than piecemeal. Hence the ethical case for compassionate stewardship of the living world.
Kylesox, sadly some people today endure chronic pain and/or depression. We wouldn't (I hope) claim that their suffering is unreal, shallow or inauthentic in virtue of its lack of contrast with the happier side of life. So why assume that life lived at the opposite extreme, i.e. a civilisation whose members are animated by gradients of intelligent bliss, is any less authentic? Let's use technology to enrich our capacity for empathy. I just hope in future we can empathise with each other's joys rather than (as is typically the case today) with each other's sorrows.
ClassicalFizz, technically speaking it's feasible to phase out boredom. For sure, to find everything indiscriminately interesting is a recipe for loss of critical discernment. But with a bit of genetic tweaking, life can range from the merely fascinating to the utterly enthralling.
Creativity? Once again, uniform bliss is a recipe for stagnation. Discontent is the motor of progress. But we can preserve the functional analogues of pain and depression without the nasty "raw feels". A civilisation based on life lived entirely above "hedonic zero" can be incomparably richer in every sense than life on Earth today.
Astrus, no one to my knowledge is arguing for coercive happiness, i.e. "making" people happy. Rather the question is whether we should be free to choose. Are we ethically entitled to force others to endure the biology of suffering after it becomes technically optional? Today, of course, hundreds of millions of depressive and malaise-ridden people have no choice at all.
Predators? Well, we'd give short shrift to someone who argued for preserving human predators who prey on the weak, the young, the innocent and the vulnerable. Should we confine our benevolence to a single race or species? Maybe so, but I think the case needs to be argued rather than merely assumed.
The greatest source of chronic, severe and readily avoidable suffering in the world today is factory farming. So ethically speaking, I think our priority should be ending the atrocious suffering for which humans are directly responsible. But power brings complicity. There is no technical reason why a moderately advanced civilisation can't later go on to use cross-species fertility control to manage populations in our wildlife parks. Technologies of immunocontraception are already well understood.
Maybe there is an ethical case for preserving starvation, disease, asphyxiation, disembowelling, being eaten alive and other cruelties of Nature. But if such horrors exist two centuries from now, this won't be because their eradication was technically too difficult, but because we chose - for whatever reason - to preserve them.
Ivorrytoweryscientist, forgive me, but you've misread the article: I was arguing against constant bliss. Rather, I was making the case for information-sensitive gradients of well-being. That's the purpose of recalibrating the hedonic treadmill; recalibration of hedonic set-points can (potentially) hugely enrich everyone's quality of life while preserving existing preference architectures - unless, that is, one's values directly involve causing suffering.
Erasing individuality? Other things being equal, it's depressives who live monotonous lives "stuck in a rut". They experience learned helplessness and behavioural despair. By contrast, enriching mood tends not merely to enhance motivation, but also to make one more sensitive to a broader range of rewarding stimuli - thereby making stagnation less likely, both for an individual and for civilisation as a whole.
Narwi, forgive me, but you're thinking of different alleles, not different genes. [The mistake is excusable: sometimes one reads statements such as "You share half your genes with your siblings and an eighth of your genes with your cousins." [etc] - which if true would make human sexual reproduction a remarkable achievement.] Of our 22,000-odd protein-coding genes, humans share around 99 percent with the chimpanzee, roughly 98 percent with the gorilla, and around 97 percent with the orang-utan. Of course, these percentages hide a multitude of allelic variations.
Mikespinn, for a good overview of the nascent science of pleasure as it now stands, perhaps see:
("Building a neuroscience of pleasure and well-being")
Wireheading conjures up images of an intracranially self-stimulating rat. Rats are not held in high esteem in most human societies. But the main problem, I think, is that most of us have never had our mesolimbic dopamine system - or our ultimate mu opioidergic "hedonic hotspots" - directly stimulated in this way. So we simply don't know what we're missing. By analogy, imagine how we might struggle to convey to a completely asexual person what they were missing in the way of lovemaking with only a video of what humans physically get up to in the bedroom to draw upon.
In future, gradients of intelligent bliss orders of magnitude richer than today's peak experiences may be a design feature of the post-human mind. However, I don't think intracranial self-stimulation is consistent with intelligence or critical insight. This is because such stimulation is uniformly rewarding. Intelligence depends on informational sensitivity to positive and negative stimuli - even if "negative" post-human hedonic dips are richer and higher than the human hedonic ceiling.
Thanks Andrew....Bentham was celibate. No orgy was ever graced by the body of John Stuart Mill. Ethical utilitarians do not on the whole lead rock-star lifestyles of drink, drugs and debauchery. I wonder why!?
The Science of pleasure will soon unlock limitless bliss.
by appliedphilosophy in transhumanism
Indeed. Even utopian designer drugs that lack the side-effects of today's recreational euphoriants will have multiple pitfalls. This is precisely the reason why we should consider other options. Imagine if instead we aim genetically to recalibrate our typical hedonic set-point - and ensure via preimplantation genetic diagnosis that our future children enjoy an elevated hedonic set-point too (cf. https://www.reproductive-revolution.com/comt.pdf). Our default state of consciousness can be profound well-being. Motivation, preference architecture, and full informational sensitivity to positive and negative stimuli can be preserved. In short, posthuman life will be wonderful - but only if we redesign our primitive reward circuitry.
An interesting question. And the short answer is no. Amphetamines were widely used in the treatment of depression before the rise of MAOIs and tricyclics in the late 1950s.
(cf. http://jhmas.oxfordjournals.org/content/61/3/288.full )
The problems of abuse are well known
(cf http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1557685/?tool=pubmed )
But some people can apparently use them indefinitely without either escalating the dosage or encountering serious health problems.
As an aside, I've used the selective monoamine oxidase type-B inhibitor selegiline (l-deprenyl) at 2 x 5mg daily since 1995. Its trace metabolites include amphetamine and methamphetamine. Amphetamines cause oxidative stress. But low-dosage selegiline actually increases longevity in multiple "animal models".
Will the newer MAOI rasagiline (Azilect), which lacks selegiline's amphetamine trace metabolites, have a similar effect on longevity?
If we found everything indiscriminately interesting, then the capacity for critical insight would be lost. However, if we re-engineer ourselves to find some aspects of life enthralling, and other aspects of life merely interesting, then the functional analogues of boredom can be retained - minus the unpleasant "raw feels". This scenario isn't science fiction. We merely need look at the most productive "hyperthymic" people alive today.
(e.g. transhumanist scholar https://en.wikipedia.org/wiki/Anders_Sandberg )
Hence the case for genetically recalibrating the hedonic treadmill. Uniform bliss, like uniform despair, would be completely demotivating - and bring the human experiment to an end. Raising our hedonic set-point is different. There is no technical reason why we can't enjoy gradients of intelligent bliss orders of magnitude richer than anything physiologically feasible today. Indeed one advantage of genetically recalibrating the hedonic treadmill is precisely that recalibration is ethically consistent with different kinds of utilitarianism, Kantianism, virtue ethics, Buddhism, and much else besides.
Reprogramming Predators for a cruelty-free world.
Article by Philosopher David Pearce. by Mike12349 in Philosophy
Just a quick technical note. Selection pressure will be no less intense under the kind of (hypothetical) ecosystem management described in "Reprogramming Predators". What's different is that a very different suite of traits will be adaptive. Compare the selective breeding of dogs over thousands of years of human domestication.
"May all that have life be delivered from suffering." (Gautama Buddha) OK, I'm intrigued Anzereke. Are you opposed to the use of biotech to phase out the biology of (involuntary) suffering? Maybe it's utopian dreaming. But "repugnant"? In what sense? Or have you something else in mind?
Devin asked me if I could do an AMA. So I'll be around after 3.00pm EDT (that is 7.00pm British time) on Wednesday 21st. Hopefully, consensus beckons... Dave
AMA with David Pearce, transhumanist philosopher; advocates the use of science to eradicate suffering in all life
OK, done! https://plus.google.com/u/0/?tab=wX#105903603302602842440/posts
by spaceman_grooves in PhilosophyofScience
David Pearce: AMA by davidcpearce in Transhuman
The next challenge: CAPTCHA. It's very effective at keeping some organic robots out. I must now wait 5 minutes (or so the AI tells me).
Hello, I am around for the rest of this evening...and delighted in "AMA" to have discovered a new three letter word!
An interesting hypothesis! The short answer: I don't know. Whatever the answer, I don't think the mind-independent world leaves its signature on the phenomenal contents of our world-simulations. Rather, when we're awake, impulses from the optic nerve (etc) select from a finite menu of mind/brain states. These mind/brain states track fitness-relevant patterns in the local environment in almost real time. When we're dreaming, this selection mechanism is absent. Either way, there's a sense in which all one can ever know is the intrinsic subjective properties of some configurations of matter and energy.
Favourite living philosophers? Well, I'm going to be diplomatic and just cite three philosophers I typically agree with...Antti Revonsuo, William Seager, Bryan Magee
Nearly all transhumanists share a desire for eternal youth. But what is this entity that will enjoy quasi-immortality? It's hard to make sense of the notion of an enduring metaphysical ego. Ultimately, there are only here-and-nows strung together in particular sequences. And just as post-Everett quantum mechanics (cf. https://en.wikipedia.org/wiki/Many-worlds_interpretation) teaches us there is no unique classical future, likewise there is no unique classical past either: even our "memories" of ancestral namesakes can't be trusted.
How about what philosophers call the "synchronic" entity of the self? Here I'm sympathetic to a Humean bundle theory (cf. https://en.wikipedia.org/wiki/Bundle_theory ). Ultimately, however, I don't think it works. Bundle theory can't explain the phenomenal unity of bound objects (cf. http://www.ncbi.nlm.nih.gov/pubmed/10448000 ) and the phenomenal unity of our world-simulations. By analogy, 1.3 billion skull-bound Chinese minds can never be a unitary subject of experience, irrespective of how they are interconnected or bundled together. Why are several billion neurons so different? Even if each neuron has a primitive micro-consciousness, billions of discrete pixels of classical "mind dust" can't generate unitary bound objects, or the unity of perception, or (fleetingly) unitary phenomenal selves that can reflect on their own existence. However, here we get into the swamp of quantum mind theories.
I'd highly commend Bruce Hood's new book "The Self Illusion: Why There is No 'You' Inside Your Head":
"THE BRAIN is wider than the sky,
For, put them side by side,
The one the other will include
With ease, and you beside."
One sometimes seems to transcend one's mind and commune with the world. On psychedelics, ego boundaries become fluid and sometimes disappear altogether. But I think what each of us apprehends as the mind-independent world is actually an egocentric simulation run by the mind/brain. So one never really transcends the boundaries of one's transcendental ego - as distinct from one's body-image.
Unfortunately, this analysis doesn't really convey the nature of the psychedelic experience.
[Just in case anyone misses the allusion
Yes, but only on extremely high doses of the "penicillin of the soul". Despite the sublime nature of such experience, I suspect ordinary posthuman well-being will be orders of magnitude richer. These days, sad to say, my consciousness is shallow and Darwinian.
An excellent resource. Many thanks! One thing I might worry about is the state-dependence of memory (cf. the heavy drinker who can only remember where he left his keys when drunk). Even if an agent promotes, say, better recall of learned material, is the effect sustained when one isn't on the drug?
In practice, normally no; and I wonder if many people who take "nootropics" are actually doing so for their subtle mood-brightening effects? Piracetam, for instance, boosts dopamine as well as cholinergic nerve transmission, which may explain its (weakly) mood-brightening action. I have a fondness for mild psychostimulants (selegiline, amineptine, occasionally modafinil; and 7-8 cups of strong black coffee daily). Although psychostimulants (very) modestly improve cognitive performance on a number of tests, IMO they aren't real nootropics. I am currently exploring the selective orally active kappa opioid antagonist JDTic. I find JDTic's subtle nootropic action most interesting, though I haven't seen it reported even in "animal models" - and my response may be idiosyncratic.
Excellent! Ignoring ethics, it's safer to be a lazy meat eater than a lazy vegan; and even some vegetarians, despite their statistically greater longevity, might benefit from a crash course on nutrition. IMO nutrition courses ought to be part of the educational core curriculum. Worldwide, good diet in conjunction with daily aerobic exercise is perhaps the single most effective way to improve emotional, physical and intellectual health. I know some people who feel healthier and calmer on a vegan diet. Perhaps this is because a high-carb diet allows more l-tryptophan, the rate-limiting step in the production of serotonin, into the brain (cf. http://wurtmanlab.mit.edu/publications/tag/11/ ) But other people seem to do better on a high protein diet. One reason for this may be the consequent increased passage into the brain of neutral amino acid precursors of the activating neurotransmitters dopamine and noradrenaline. The solution here is either to eat unusually protein-rich vegan foods or add protein isolate.
One of the few intelligent arguments in favour of prohibition is that the drug-naive can't make an informed choice prior to taking psychedelics. Psychedelia is unimaginable to the drug-naive mind. Even the most seasoned psychonauts can't know what inner demons - or angels - they are going to unleash with today's dysfunctional reward circuitry. On balance, however, I think the risks of totalitarian state control outweigh the risks of freedom of choice. There are few things more damaging to anyone's mind than prison.
Vegetarians tend to be slimmer, longer-lived and more intelligent than meat-eaters. There are many possible confounding variables on each count (e.g. intelligent people are more likely to give up eating meat in the first instance; cf. http://news.bbc.co.uk/1/hi/6180753.stm ). But clearly no great personal sacrifice is involved in living without cruelty, just (very) mild inconvenience. Yes, I agree with you. It is clearly ethically preferable that sentient beings from other species be kept and killed "humanely" rather than inhumanely. The same is true of sentient beings from other ethnic groups. But we need to abolish the property status of sentient beings regardless of race or species.
1) You are more qualified to speak than me here: I've never tried doing maths on psychedelics. I suspect I'd find the task impossible. LSD, for example, seems to interfere with the creation of serial, sequential thought-episodes (the "flooding" phenomenon). The capacity for serial thought is a capacity we normally take for granted; but it's an unexplained evolutionary innovation: the creation of a slow, serial virtual machine sitting on top of its massively parallel quantum-mechanical(?) parent.
2) "Just how it had/happens to be?". The appearance of arbitrary contingency to our diverse qualia - and undiscovered state-spaces of posthuman qualia and hypothetical micro-qualia - may be illusory. Perhaps they take the phenomenal values they do as a matter of logico-mathematical necessity. I'd make this conjecture against the backdrop of some kind of zero ontology. Intuitively, there seems no reason for anything at all to exist. The fact that the multiverse exists (apparently) confounds one's pre-reflective intuitions in the most dramatic possible way. However, this response is too quick. The cosmic cancellation of the conserved constants (mass-energy, charge, angular momentum) to zero, and the formal equivalence of zero information to all possible descriptions [the multiverse?], mean we have to take this kind of explanation-space seriously. The most recent contribution to the zero-ontology genre is physicist Lawrence Krauss's readable but frustrating "A Universe from Nothing: Why There Is Something Rather Than Nothing" (cf. http://www.scientificamerican.com/article.cfm?id=universe-from-nothing ). Anyhow, how does a zero ontology tie in with (micro-)qualia? Well, if the solutions to the master equation of physics do encode the field-theoretic values of micro-qualia, then perhaps their numerically encoded textures "cancel out" to zero too. To use a trippy, suspiciously New-Agey-sounding metaphor, imagine the colours of the rainbow displayed as a glorious spectrum - but on recombination cancelling out to no colour at all. Anyhow, I wouldn't take any of this too seriously: just speculation heaped on idle speculation. It's tempting simply to declare the issue of our myriad qualia to be an unfathomable mystery. And perhaps it is. But mysterianism (cf. https://en.wikipedia.org/wiki/New_mysterianism ) is sterile.
For most purposes, some form of platonism is vastly more fertile in mathematics. My own sympathies lie with the extreme form of constructivism known as finitism (technically, ultrafinitism), historically associated with Alexander Esenin-Volpin (a Russian mathematician once diagnosed with "pathological honesty"):
and expounded today by Ed Nelson:
Finitists aren't very popular with most working mathematicians. But Bertrand Russell once said that “Mathematics may be defined as the subject where we never know what we are talking about, nor whether what we are saying is true.” My own (very) tentative view is that mathematical physics is really about patterns of micro-qualia - and the Hard Problem of consciousness is an artefact of materialist metaphysics and a false direct theory of perception.
I probably couldn't have persuaded Cardinal Bellarmine to look through Galileo's telescope. Likewise, philosophers of mind convinced on a priori grounds that empirical research isn't relevant to the study of consciousness aren't going to be swayed by anything we write here. They will slumber on accordingly. "Exhaustive"? Formally, yes, I think so. In one sense, physics provides an extraordinarily rigorous mathematical straitjacket for our understanding of the world. If a theory is inconsistent with the Standard Model (at sub-Planckian energies at any rate), then it won't fly. "Hidden variables" theories in quantum mechanics seem hopeless: there is no "element of reality", as Einstein put it, that isn't captured in the quantum formalism. In another sense, however, physical science is hopeless. Physics offers no account of the nature of the (in)famous "fire" in the equations; no explanation of why we aren't zombies; nor of why our innumerable textures of consciousness ("qualia" in philosophy-speak) take the values they do; nor of how they combine into unitary phenomenal objects apprehended by a fleetingly unitary self (the binding problem). All one ever has access to, except by inference, are the experiences of one's own mind. Yet on a standard materialist ontology, conscious mind ought to be impossible. On a materialist ontology, there is no way to close Levine's notorious Explanatory Gap (cf. https://en.wikipedia.org/wiki/Explanatory_gap ). If, on the other hand, we assume a Strawsonian physicalism (cf. http://philpapers.org/rec/STRRM-2 ), then the solutions to the fundamental field-theoretic equations yield the values of micro-qualia. Ultimately - and this is my best stab at a solution - we should be able rigorously to derive the properties of our macro-qualia from the underlying physical fields of micro-qualia, experimentally mapping them - initially via the "neural correlates of consciousness" - onto the formalism of mathematical physics.
This enterprise can't be done without experimentation on a heroic scale - probably a job for posthuman superintelligence. But Shulgin has started the ball rolling.
Government attempts to control the private states of consciousness of individuals are totalitarian by their very nature. I think they should be resisted. Unless one is psychologically robust, however, taking psychedelics may not be wise until the underpinnings of invincible well-being are in place later this century. Some critics claim that (hypothetical) future life based on gradients of bliss will mean the end of intellectual progress. On the contrary, I'd argue that redesigning our reward circuitry means that psychedelia can safely be explored in depth.
Craig, the best scholarly treatment I know of the binding problem is still Antti Revonsuo's "Binding and the Phenomenal Unity of Consciousness": http://tracker.preterhuman.net/texts/body_and_health/Neurology/Binding.pdf Compare the 86 billion-odd neurons of your brain with, say, 1.3 billion discrete, skull-bound minds in China. Regardless of how intimately interconnected the population of China may be, it will never be a single subject of experience. However, when you are awake or dreaming - but not when you are in a dreamless sleep - what's generated inside your skull isn't 86 billion discrete speckles of classical "mind-dust" (phenomenal edges, colours, vertices, textures, motions etc) but "bound" unitary objects in a unitary field of experience apprehended by at least a fleetingly unitary self. How do the 86 billion-odd neurons of the brain carry this off? No one knows. Could a classical serial digital computer ever do so? How? Some idea of the computational power involved in phenomenal object binding and the unity of perception is illustrated by rare victims of neurological deficits who even partly lack it. Consider people with simultanagnosia, or the inability to see more than one object at once, which is a feature of Balint's syndrome: http://www.youtube.com/watch?v=4odhSq46vtU
or akinetopsia ("motion blindness")
Perhaps I should stress again that my ideas on where the solution may lie, i.e. a combination of Strawsonian physicalism and ultrarapid sequences of macroscopic quantum-coherent states in the mind/brain - are speculative in the extreme. (cf. http://www.utsc.utoronto.ca/~seager/strawson_on_panpsychism.doc )
The question of what moral obligations we would have in a post-suffering world isn't easy to answer. If one is a classical utilitarian for example, how does one avoid further commitment to designing a "utilitronium shockwave"? But a conservative answer to your question might be that global recalibration of our hedonic treadmill still allows the virtue ethicist, deontologist, preference utilitarian (etc) to maintain his or her values and preferences as before, for the most part at least. Considered in isolation, the abolition of experience below "hedonic zero" still allows the functional equivalent of Darwinian vices and virtues. Thus insulting you, or betraying your trust, could prevent you from feeling sublime - as distinct from merely feeling wonderful. From an ethical perspective, such conduct should be avoided in future, as now. In practice, I think the abolition of experience below hedonic zero will lead to a personal, ethical, intellectual and sociological revolution beyond our comprehension - and a biotech-driven "re-encephalisation of emotion". Further hedonic phase changes in the more distant future (cf. https://www.superhappiness.com) may make our existing value schemes largely irrelevant.
More motivated, all traces of my former melancholia banished, less sleep-prone. But also a degree of inner tension: this is not a regimen suitable for anyone with anxiety disorders.
Hmm, apologies, it simply hadn't occurred to me this point might be contentious. Some bodily functions and activities in the bedroom are typically regarded as undignified in human cultures. Implantation of microelectrodes in one's reward pathways doesn't involve these bodily functions and activities. Of course the whole notion of "dignity" is deeply problematic; but wars are fought over less. (cf. http://www.amazon.com/Dignity-Essential-Resolving-Conflict-Relationships/dp/0300163924/)
Just as one can only imperfectly understand the nature of dreaming "from the inside" - even in a lucid dream - likewise the nature of ordinary waking consciousness may yield state-specific knowledge that can only imperfectly be understood "from the inside" too. How much does the medium of expression of propositional thought infect the propositional content itself?
(cf. Nicholas Rescher's "Conceptual Idealism": http://www.jstor.org/discover/10.2307/20129056 )
I hope I've passed. :-) Personalised versions of the Turing test may be interesting a few decades hence. Just as a cartoon or caricature of a person can be more recognisable than the original, presumably small quirks of style and personality could be programmed/trained up to deceive investigators. Knowing this, one might decide subtly to accentuate the quirks for the benefit of investigators; but if so, more intelligent observers might start to suspect one was the artificial agent....(etc)
As you'll recall, buprenorphine is also a delta antagonist as well as a partial mu agonist. So it's "messier":
("Memory function in opioid-dependent patients treated with methadone or buprenorphine along with benzodiazepine: longitudinal change in comparison to healthy individuals")
Thanks for feedback, Stephthegeek. In the field of psychotropic medicine, we need to abolish the whole doctor-patient relationship as normally conceived today - with its asymmetric power relationships and the "power of the prescription pad". Cautious, well-informed self-experimentation in collaboration with an experienced clinician is precisely what an intelligent subject ought to be doing. Unfortunately many doctors still discourage it.
Not just ethical behaviour, but any kind of intelligent agency, depends on the cognitive capacity to represent the mind-independent world. Insofar as this representation or simulation is accurate, it will include not just physical objects behaving in ways described by the laws of physics, but also other egocentric virtual worlds analogous to one's own in which subjectively valuable and disvaluable experiences occur. Let's be more concrete. You go for a walk in the countryside. You come across a small child - or a dog - drowning in a shallow pond. Accurately representing what is going on entails not just understanding the laws of physics, the nature of aerobic respiration, the fact the child/dog is emitting distress vocalizations (etc.) Accurate representation also entails understanding the nature of the first-person experiences involved: to be panic-stricken, unable to breathe, feeling water entering one's lungs, etc. Insofar as one understands the laws of physics and the first-person perspective in question, then one acts to bring those ghastly experiences to an end as though they were one's own, i.e. one rescues the drowning dog or child. Not to be empathetic is to be ignorant; and not to be capable of empathy is to be invincibly ignorant.
So no, we can't hold the invincibly ignorant accountable for their actions. But as we come to understand more about the world - third-person facts and first-person perspectives alike - we will understand more about how to behave as intelligent agents.
Crudely speaking, Nature "designed" men to be hunters/warriors. So many - perhaps most - men find violence, competition and conflict exciting, whether within or between different races and species. But being a transhumanist entails overcoming our biological limitations. Any civilisation worthy of the name will be founded on the well-being of all sentience. [It's worth stressing that advocacy of a pan-species welfare state is utterly different from so-called welfarism, i.e. the belief that humans have the right to use nonhuman animals as we see fit as long as they are treated "humanely"]
If my supply of coffee were cut off, I might turn to crime or hustle on street corners to score. Fortunately, this is not a likely prospect, but yes, you've raised a concern. It's prudent to stockpile for a rainy day. JDTic? I'm not enrolled in http://clinicaltrials.gov/ct2/show/NCT01431586 and I wouldn't in any case meet its inclusion criteria. JDTic is not scheduled, so I just commissioned the synthesis of a batch from China. Likewise amineptine. The bureaucratic paperwork, purity testing etc is a hassle. Safety? Well, it's a matter of weighing risk-reward ratios. One predictor of a short life-expectancy is low mood / intolerance of stress: my default state needs improving on both counts. Interestingly, JDTic may have nootropic properties too, presumably a function of the interplay between the kappa receptors and the cholinergic system. The only other med I take is selegiline, which increases lifespan in multiple "animal models" (cf. http://www.ncbi.nlm.nih.gov/pubmed/9928438) Ethylphenidate?? No, I haven't tried it, unlike hard-drinking students who take methylphenidate (Ritalin) to study: ethylphenidate is commonly formed as a by-product of their joint consumption. What I'd be worried about, I guess, is its "abuse potential". How do you respond to modafinil?
Both amineptine and selegiline can modestly improve performance on a number of tests e.g. memory; but probably neither deserves the label of "nootropic". The limited cognitive benefits from even the weakest psychostimulants are often a function of increased arousal and motivation.
Safety? Selegiline (l-deprenyl) at MAO(B)-selective dosages increases longevity in a number of "animal models" (cf. http://www.ncbi.nlm.nih.gov/pubmed/9928438) I know of no such studies of amineptine; but one of the factors most strongly predictive of an early death is low mood - whether or not one satisfies the criteria for clinical depression.
Anyhow, I certainly don't advocate casual use of either agent; but nor is personal use necessarily irresponsible.
Insofar as negative utilitarians can be optimists, Khannea, maybe. History suggests one characteristic of utopian visions is they can all go horribly wrong. Utopias turn out to be dystopias. Is phasing out the biology of suffering different? Perhaps not; but if we do opt for a biology based on gradients of bliss, then at least we won't realise it had all been a ghastly mistake.
* * *
Phew! Khannea, I fear you may overestimate my competence in such fields. :-)
However, here goes....
1) Short term, I guess. In view of the massive deleveraging going on right now in debt-ridden Western economies, it's surprising how some of these entitlement programs have held up as well as they have; and now - with the imminence of "Obamacare" - others are about to be extended. Yet the biggest beneficiaries of Government largesse since 2008 have been Wall Street and corporate America. I think free-market fundamentalists are still in denial about what has happened: without unimaginable amounts of government cash all (sic) the banks would have gone bust and taken the rest of the Fortune 500 with them. But the good times will roll again.
2) Not to my knowledge, it's more to do with the astronomical levels of debt Western countries accumulated in the boom years. Whereas in most recessions, governments and central banks can apply the usual counter-cyclical fiscal and monetary tools, in Europe at least most governments are now being forced to make pro-cyclical cuts in public spending that may aggravate the recession. America is lucky: it can just monetize its debts away by printing dollars. This is not an option for Greece or Portugal.
3) My best guess - and it is just a guess - is that we are on the brink of an unprecedented long boom fuelled by the explosive growth of biotech and infotech coupled with the integration of China and India into the world market. I'm much more cautious about The Singularity (cf. https://www.biointelligence-explosion.com and https://www.biointelligence-explosion.com/parable.html ) than some of my transhumanist colleagues. But even if I'm wrong, I can't see the current downturn making a significant dent in Moore's Law, for instance.
4) The beauty of IT-based technologies is that over time the cost of information effectively trends to zero. This isn't to underplay the grotesque inequalities of wealth in the world today. Rather I'm just noting the long-term egalitarian effect of e.g. ubiquitous music, film and software "piracy". We can all be entertained like princes.
5) I broadly agree with the thrust of Steven Pinker's thesis in "The Better Angels of Our Nature". http://www.amazon.com/The-Better-Angels-Our-Nature/dp/0670022950 But in absolute terms, the amount of human and nonhuman suffering in the world increases each decade. I hope this trend can be reversed later this century: a transition to global veganism/ invitrotarianism together with a closing of factory farms and the death factories; and a reproductive revolution of "designer babies" that lets prospective parents avoid bringing more misery into the world. (cf. https://www.reproductive-revolution.com) Sadly one can paint a lot of darker scenarios too.
Perhaps compare the explosive growth of pain-free artificial intelligence with the slow cognitive progress made by organic life over hundreds of millions of years. Traditional reinforcement learning - a bland term to cover all manner of horrors - was adaptive in the ancestral environment. But I know of no evidence that experience below "hedonic zero" will play any functionally indispensable role in future: its ancient signalling function can be replaced by more civilised alternatives.
OK, I'm struggling. In what sense is the nature of agony and despair "emptiness"? Surely these ghastly states of mind have a molecular and neuroanatomical substrate - and potentially a cure?
Like sneezing, rumination is involuntary for many depressives. I know many forms of meditation involve emptying the mind; but this is easier for some people than others.
I share your dark views on what humanity has done to Nature. But the fact remains that only one species on Earth has the technical capacity or the ethical imagination to phase out suffering - and this can only happen via biotechnology.
An ethic of negative utilitarianism is often reckoned a greater threat to intelligent life (cf. the hypothetical "button-pressing" scenario) than classical utilitarianism. But whereas a negative utilitarian believes that once intelligent agents have permanently phased out the biology of suffering in our forward light-cone, all our ethical duties have been discharged, the classical utilitarian seems further ethically committed to converting all accessible matter and energy into relatively homogeneous matter optimised for maximum bliss: "utilitronium". Hence the most ethically valuable outcome entails the extinction of intelligent life. Further (and ignoring complications relating to uncertainty), to the classical utilitarian any rate of time-discounting distinguishable from zero is ethically unacceptable, so s/he should presumably be devoting most time and resources to that ultimate goal.
How might a classical utilitarian respond? Well, the nature of "utilitronium" is currently as obscure as its theoretical opposite, "hellium" or "dolorium". For just as the torture of one mega-sentient being may be accounted worse than a billion discrete pinpricks, conversely the sublime experiences of hypothetical Jupiter minds may be accounted preferable to tiling our Hubble volume with the maximum abundance of micro-bliss. What is the optimal trade-off between quantity and intensity of blissful experience?
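The quantity-versus-intensity trade-off above can be made concrete with a toy model (my own illustration, not anything from the utilitarian literature): hold the "resources" devoted to bliss fixed, and vary how per-mind utility scales with intensity. The function name and the power-law utility curve are assumptions for illustration only.

```python
# Toy model (illustrative only): total welfare of n_minds minds, each at
# bliss intensity b, under different hypothetical aggregation rules.
# Per-mind utility is assumed to be u(b) = b**exponent.

def total_welfare(n_minds, intensity, exponent):
    """Sum of per-mind utility b**exponent over n_minds identical minds.

    exponent < 1: diminishing returns to intensity -> many mild minds win.
    exponent > 1: increasing returns -> a few intense minds win.
    """
    return n_minds * intensity ** exponent

# Fixed "budget" R = n_minds * intensity, so quantity trades off
# directly against intensity.
R = 1000.0

# Many micro-blissful minds vs. one "Jupiter mind", same budget:
many = total_welfare(1000, R / 1000, exponent=0.5)  # 1000 minds at b = 1
one = total_welfare(1, R, exponent=0.5)             # 1 mind at b = 1000
print(many > one)  # diminishing returns favour tiling with micro-bliss
```

With `exponent` above 1 the verdict reverses and the single intense mind wins, which is the whole point of the question: the "optimal" utilitronium configuration depends entirely on an aggregation rule we currently have no way to determine.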
As a negative utilitarian, would I (purely hypothetically) initiate an all-consuming utilitronium shockwave? Well, I don't want to get put on a U.S. watch list. So I'm going to respond: of course not!
Is suffering purely or largely relative? If this were so, then presumably victims of chronic pain and/or depression couldn't really be suffering since they lack a contrast effect. Recall some depressives can't even imagine what it's like to be happy or even what the word "happiness" means. Alas it's physiologically possible to spend life entirely below "hedonic zero". Likewise life can be lived entirely above hedonic zero too - but that kind of life history will entail redesigning our genetic source code.
Evolutionary pressures? Well, consider low mood again. Depression seems to have arisen as adaptation to group living on the African Savannah
(cf. Rank Theory https://www.biopsychiatry.com/depression/index.html)
[Note I'm not endorsing group selectionism here.] A novel (and not mutually exclusive) evolutionary explanation of depression centres on its potential immunological advantages.
Either way, thankfully what was genetically adaptive on the African savannah needn't be genetically adaptive in future when prospective parents can choose the genetic make-up of our future children. And rather than dulling consciousness, genetic enhancement should allow us to intensify our well-being beyond anything neurochemically feasible today. (cf. https://www.reproductive-revolution.com/ )
Using Buddhist techniques of meditation to recalibrate the hedonic treadmill would be wonderful if such an outcome could be demonstrated in well-controlled clinical trials. Unfortunately, there is empirical evidence that the group least helped by meditation are melancholic and/or anhedonic depressives who may be over-inclined to rumination in the first instance.
The food chain? By all means replace "cruelties" by "immense suffering". I certainly didn't mean to impute any degree of malice to carnivorous predators - with the exception of some members of the genus Homo. But treating such immense suffering as simply Nature's way is no longer ethically tenable given our new-found power over the rest of the living world. Our level of complicity in the suffering of other sentient beings can only increase.
As you know, most "free-range" chickens suffer grim living conditions and agonising debeaking. They undergo rough handling and transport followed by grisly slaughter when they are "spent". Male chicks are generally crushed alive or electrocuted - though this fate is presumably preferable to being slowly suffocated. None of which applies in the case you cite. My worries here are more indirect utilitarian concerns, to do with loss of moral clarity - even if no sentient being is harmed. A powerful ethical case can be made for abolishing the property status of other sentient beings. I think you've admirably taken the right decision.
"Enlightenment"? Can enlightenment recalibrate the hedonic treadmill? i.e. the viciously effective suite of negative feedback mechanisms in the brain that stop most of us from being happy for very long?
Can enlightenment abolish the cruelties of the "food chain"? If we're serious about phasing out the biology of suffering, only high-tech solutions can work. Are there pitfalls? You bet. But there are immense risks too in trying to maintain the status quo.
I've met a number of folk over the years who tell me they've found enlightenment. Excellent news, if so! But I'm reminded of the words of one veteran psychonaut: "It's not hard to hear voices. It's knowing whether they tell you the truth."
Many thanks Xodarap. Well, if we take modern neuroscience seriously, the repugnant conclusion doesn't follow: https://www.repugnant-conclusion.com/ A classical utilitarian ethic entails some wildly counterintuitive conclusions - far more counterintuitive than the usual trolleyology served up in x-Phi.
(cf. https://en.wikipedia.org/wiki/Trolley_problem )
Psychedelics can be illuminating or bewildering. Either way, with a handful of exceptions I wouldn't describe them as tools for a richer empathetic understanding of other minds. One exception is MDMA ("Ecstasy") (cf. https://www.mdma.net). Alas it's short-acting and potentially neurotoxic in higher doses/regular use. With the exception of oxytocin, which has limited bioavailability, I don't know of any safe and sustainable empathogens - or how easy it will prove to amplify native oxytocin function. However, advanced neuroscanning technology should shortly allow us to practise, in effect, mutual telepathy http://www.guardian.co.uk/science/2012/jan/31/mind-reading-program-brain-words or even a full-spectrum generalisation of mirror-touch synaesthesia. Sadly, until we beautify our Darwinian minds, the contents of most of them are best kept well-hidden. On the other hand, perhaps Longfellow was right: "If we could read the secret history of our enemies, we should find in each life sorrow and suffering enough to disarm all hostility."
Is suffering bad by its very nature - or simply in virtue of the racial or species identity of the victim? I assume - please forgive me if I'm mistaken - that you care about the suffering of members of ethnic groups: skin pigmentation is not a morally relevant difference. But is there a morally relevant difference between the suffering of a black-skinned prelinguistic toddler and a non-human animal of equivalent sentience? [To avoid getting side-tracked by discussions of "potential", let's assume the toddler is an orphan with a life-shortening progressive disorder.]
“No man is a hero to his wife's psychiatrist.”
Thankfully I am unmarried.
Compare the attendance figures and level of enthusiasm at a football game with a transhumanist meet-up - and the comparative importance of the issues at stake. Yes, a case could be made for tapping in via music and ritual to our needs as social primates for a sense of belongingness. I sometimes wish I could experience a tingling of the spine and a sense of "us". On the other hand, most of us have acquired a healthy distrust of hidden persuaders and sources of bias - and anything with even a hint of a hint of cultishness. So what's the best way for transhumanists to promote group cohesion - and a message that resonates effectively with policy-makers and public alike - without sacrificing an uncompromising rationalism? And how can transhumanism offer as much to people with below-average as above-average AQ?
(cf. https://en.wikipedia.org/wiki/Autism_Spectrum_Quotient )
Jbschirtzinger, I share your dark views about "bad eggs". But isn't this why we need to rewrite our own genetic source code and change "human nature" - and ultimately bootstrap our way to full-spectrum superintelligence? One aspect of full-spectrum superintelligence will be an enriched capacity for empathy. At the risk of sounding naive, mind-reading prowess doesn't just make us smarter: other things being equal, it makes us kinder to each other too.
http://jdtic.is/ . Some of the drug companies are just giving up on depression:
But as long as the neurotransmitter system most directly implicated in hedonic tone is off-limits, perhaps we shouldn't be surprised.
1) JDTic is currently merely an "investigational drug". One of the most severe chronic depressives I know has experienced a complete remission of symptoms: better than one dared hope. Other nondepressive subjects have reported mild or negligible effects. JDTic is not scheduled. So one can commission a batch from a chemical supply company (in which China abounds). JDTic seems to be remarkably free of adverse side-effects - the only one consistently reported to date is a slight dry mouth. But I guess I'd better say [at the risk of sounding old and responsible] that prudence dictates waiting until more well-controlled studies have been done on human subjects. [Do I have double standards? Yes. But who will cast the first stone?] Recall too that the history of clinical psychopharmacology is littered with false dawns. These are early days.
2) Genetically preprogrammed mental superhealth - either via the coming reproductive revolution of "designer babies" or breakthroughs in autosomal gene therapy - is presumably still decades or more away at best. So I fear you can expect a long career ahead if you do decide to study clinical psychopharmacology. If you want to explore the interface between drugs and genetics, you might consider specializing in e.g.
http://www.ornl.gov/sci/techresources/Human_Genome/medicine/pharma.shtml Or what most intrigues me:
3) If the ethical consensus for a compassionate biology existed, then the kind of surveillance and tracking technology currently used for e.g. military purposes could be used to monitor large [and not-so-large] terrestrial vertebrates [and to a lesser degree in marine ecosystems]. Or you might want to study immunocontraception and other techniques of fertility control because they will be absolutely critical if and when any kind of consensus emerges to phase out involuntary suffering in Nature. Or more broadly, it might be worth getting a solid grounding in ecology and population dynamics.
I hope you're not prone to:
* * *
I admire Peter Singer. I'm now going to state some reservations about his work; but they don't detract from my admiration.
First, when advocating any seemingly utilitarian policy initiative, it's vital to consider the wider ramifications - including whether "non-utilitarian" policies may actually have a better outcome. Thus enshrining in law the right of other sentient beings not to be regarded as property will, I think, most probably lead to better consequences in strictly utilitarian terms. Championship of the right of nonhuman animals not to be regarded as property is most commonly associated with the professedly anti-utilitarian American legal scholar Gary Francione. Peter Singer, on the other hand, argues that humans are entitled to use and kill members of many (but not all) species of nonhuman animals so long as we do so "humanely" on the grounds that most nonhuman animals don't have an enduring sense of personal identity.
Secondly, Singer also argues against a long-term phasing out of carnivorous predation in the living world on the grounds that human ecological interventions have generally been disastrous to date. This may well be true; but we need to weigh risk-reward ratios. Free-living nonhuman animals often endure misery-ridden lives and grisly deaths. Thanks to the exponential growth of computer power, humans will shortly be in a position to micro-manage every cubic metre on the planet. Do we really want to conserve the cruelties of Nature indefinitely? Why? Cross-species fertility regulation and behavioural-genetic tweaking of obligate carnivores will be kinder.
(cf. https://www.abolitionist.com/reprogramming/ )
"Famine, Affluence, and Morality"? It's an admirable work IMO. However, I wonder if one's time and resources might more fruitfully be spent in campaigning for affluent First-World governments to donate a larger proportion of tax revenues for (intelligently directed) Third World aid - rather than by making personal donations to Oxfam etc. Better still, I reckon, would be to devote one's time and energy to tackling the biological root causes of suffering: everything else we do is just sticking-plaster stuff. Rising GNP levels are not likely significantly to improve average subjective quality of life in Third World countries. A recent international study of the percentage of the population describing themselves as "very happy" ranked Indonesia first, followed by India. Western nations tended to score worse on self-reported happiness; and our suicide rates, an "honest signal" of severe psychological distress, are typically much higher. I guess a critic might respond that "biological interventions" are futuristic fantasy. But not so. Here I'll just take one example. The COMT gene has two alleles. One of those alleles is implicated in a predisposition to both greater altruism and greater subjective well-being: (cf. http://www.sciencedaily.com/releases/2010/11/101108072309.htm ; http://www.ncbi.nlm.nih.gov/pubmed/17687265 ) At the moment, large numbers of prospective parents in e.g. India and China use preimplantation genetic diagnosis (cf. https://en.wikipedia.org/wiki/Preimplantation_genetic_diagnosis ) for the purposes of gender selection i.e. to avoid the supposedly dreadful risk of having a girl. Imagine if preimplantation genetic diagnosis were instead used by prospective parents across the world to avoid the risk of endowing their child with the "nasty" version of COMT?
This example could be multiplied. In essence, I'd argue for a twin-track global strategy of poverty reduction and biological intervention.
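The COMT example above rests on nothing more exotic than Mendelian arithmetic: each parent passes one of their two alleles with probability 1/2, so the genotype odds for any embryo follow mechanically. A minimal sketch (my illustration; the function name and allele labels are assumptions):

```python
# Simple Mendelian arithmetic (illustration only): the probability that a
# child inherits each COMT Val158Met genotype, given the parents' genotypes.
# Each parent transmits one of their two alleles with probability 1/2.

from fractions import Fraction
from itertools import product

def offspring_genotype_probs(parent1, parent2):
    """parent1/parent2 are 2-tuples of alleles, e.g. ('Val', 'Met').

    Returns a dict mapping sorted genotype tuples to exact probabilities.
    """
    probs = {}
    for a, b in product(parent1, parent2):
        genotype = tuple(sorted((a, b)))  # ('Met','Val') == ('Val','Met')
        probs[genotype] = probs.get(genotype, Fraction(0)) + Fraction(1, 4)
    return probs

# Two Val/Met heterozygote parents: a 1-in-4 chance of a Met/Met child.
probs = offspring_genotype_probs(('Val', 'Met'), ('Val', 'Met'))
print(probs[('Met', 'Met')])  # -> 1/4
```

Preimplantation genetic diagnosis doesn't change these odds; it simply lets prospective parents select among embryos whose genotypes are already determined, which is why even a 1-in-4 favourable genotype becomes reliably obtainable.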
Would the ethical case for phasing out the biology of suffering be more credible with greater stress on how it may very well never happen? Maybe: I'm certainly open to the possibility. But it's hard to get the balance right. Would Ray Kurzweil ("2045: The Year Man Becomes Immortal", TIME Magazine, etc) carry greater weight if he showed more diffidence about his powers of prediction? To the typical Darwinian mind, self-confidence inspires confidence in others that you are right - not infrequently with disastrous results.
Alas George doesn't share my views on veganism, gun control and a welfare state for bunny rabbits.
Negative utilitarians may be prey to many sources of systemic bias; but wishful thinking isn't normally accounted one of them. Shouldn't negative utilitarians - and depressives generally - be more on guard against systemic pessimistic bias? [None of which means I'm not hopelessly mistaken; I'm just sceptical the error will be due to habitual wishful thinking.]
Indeed. We are both descended from thieves, murderers and rapists i.e. we wouldn't be here today unless (some of) our ancestors behaved savagely. Thankfully, this is not a morally compelling argument for the virtues of theft, murder and rape!
Studies of e.g. yeast
can be hugely illuminating. And all sorts of experimental interventions in human and nonhuman animals alike needn't involve killing or harming them in any way. Thus life extension techniques can benefit a humble worm (the simplest creature I know that possesses a dopamine-opioid neurotransmitter system) and humans too. You are quite right to be wary of most contemporary courses on neuroscience; but the life sciences don't have to be unethical.
Actually, part of the problem with a lot of psychedelic drugs is that they don't have mood-brightening properties. Salvia, for example, often induces acute dysphoria - which doesn't make it any less illuminating, just disturbing. Most contemporary analytic philosophers of mind regard intentionality
(cf. http://plato.stanford.edu/entries/intentionality/ )
as the hallmark of the mental; but IMO phenomenology would be a better criterion. Formal models of mind are inadequate as they stand.
* * *
How critical to your argument is maturity - and the idea of potential for further intellectual or personal development? After all, today we (rightly) recognise that a prelinguistic toddler with a progressive disease who won't reach his or her third birthday should be cared for and protected. We value highly sentient but cognitively limited young humans not just for their potential (or not) to develop into adult humans, but for who they are now. We wouldn't consider it ethically acceptable humanely to raise - and slaughter for the dinner table - happy black babies with a progressive genetic disorder, nor humanely to raise and slaughter otherwise happy mentally handicapped people with the body of an adult but the mind of a toddler. So if we agree that the scenario you sketch is ethically unacceptable for cognitively limited humans who have reached their maximum "maturity", on what grounds is it ethically acceptable to kill nonhuman animals of equivalent sentience and cognitive capacity?
Such thought-experiments are intellectually important. The danger of evoking such pastoral idylls is that - at a psychological level - they can diminish the incentive to give up eating factory-farmed meat and animal products now - and allow us to forget the cruelties of factory-farming that we're paying for today.
* * *
Even now, we could program digital computers to emit distress vocalizations when their operating systems are under attack by malware. They could make heart-rending pleas for mercy protesting their sentience as we prepare to re-format their hard drives. But of course this won't create consciousness.
How about progressively replacing each organic neuron in a biological human with a functionally identical silicon copy, as David Chalmers proposes by way of a thought-experiment? Would total replacement of your organic neurons (and glial cells, etc) by silicon chips result in an identical subject of experience being (re)created - or indeed the subject just carrying on life oblivious as before?
Well, I think the key term here is "functionally identical". If we endorse a coarse-grained functionalism, perhaps yes - though even here we must be extremely cautious. For just as an "identical" game of chess can be implemented in pieces of countless different textures, and even no textures at all (i.e. in a computer program) likewise if coarse-grained functionalism is correct, then there is no guarantee that your silicon functional analogue will have type-identical qualia to you, or any qualia at all. This possibility isn't just idle scepticism, a rehash of The Problem of Other Minds, but (I'd argue) a very serious worry.
As it happens, however, I think coarse-grained functionalism is false, not least because I'm extremely sceptical that digital computers - regardless of their efficiency of algorithm or processing power - can ever solve the "binding problem" (in the phenomenological sense of "binding"; cf. http://tracker.preterhuman.net/texts/body_and_health/Neurology/Binding.pdf ). However, the conjecture that unitary conscious minds, capable as we are of both unitary object-binding and the phenomenal unity of perception, are dependent on the functionally unique valence properties of carbon and the irreducibly quantum-mechanical properties of liquid water would be discounted by most mainstream AI researchers. This is micro-functionalism with a vengeance.
Scientists can test how hard a human or nonhuman animal will work to avoid/receive a particular stimulus. In that sense, (un)happiness can be (crudely) operationalised even in sentient beings who can't verbally self-report. Presumably what we really need, however, is more sophisticated neuroscanning techniques to investigate (un)happiness at the molecular level.
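One standard way behavioural scientists crudely operationalise "how hard a subject will work" is the progressive-ratio schedule: the response requirement for each successive reward climbs until the subject gives up, and the last requirement completed (the "breakpoint") indexes how much the stimulus is valued. A minimal sketch of the bookkeeping, with a hypothetical session for illustration:

```python
# Toy illustration (not a real assay): scoring a progressive-ratio session.
# The "breakpoint" is the highest response requirement (e.g. lever-presses
# per reward) the subject completed before giving up.

def breakpoint(completed_ratios):
    """completed_ratios: list of (requirement, completed) pairs, in
    ascending order of requirement; completed is True/False.

    Returns the largest requirement the subject completed (0 if none).
    """
    bp = 0
    for requirement, completed in completed_ratios:
        if not completed:
            break  # subject gave up; later requirements are moot
        bp = requirement
    return bp

# Hypothetical session: the subject works through 5, 10 and 20 presses
# per reward, but quits at 40 - so the breakpoint is 20.
session = [(5, True), (10, True), (20, True), (40, False), (80, False)]
print(breakpoint(session))  # -> 20
```

Comparing breakpoints across stimuli (or across drug/lesion conditions) gives a non-verbal, if coarse, measure of preference strength, which is exactly why the method is usable with nonhuman animals.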
* * *
Just a little bit of background
One needn't be any kind of ethical utilitarian to agree we should phase out the biology of (involuntary) suffering. Recalibrating the hedonic treadmill is consistent with being a deontologist or a virtue ethicist, for instance.
Starvingfortruth, yes indeed. Alas hedometry is not yet one of the exact sciences. But we can measure (un)happiness by methods other than self-reports, including that of nonhuman animals, by investigating e.g. how hard they are prepared to work to receive/avoid certain stimuli.
Eventually, sophisticated neuroscanning techniques should allow neuroscientists rigorously to quantify well-being and ill-being in terms of receptor density; receptor occupancy by full, partial and inverse agonists (and antagonists); gene expression profiles, and so forth, at the molecular level. Or so I'd guess, at any rate.
You are surely right to stress the pitfalls of focusing on happiness alone. IMO we need to develop safe and sustainable empathogens e.g. long-acting oxytocin enhancers.
* * *
The persistence of exclusive homosexuality in human and nonhuman animals is an anomaly because being gay is so obviously [genetically] maladaptive for the individual organism. But if we adopt the ultra-Darwinist perspective of e.g. Robert Trivers and Austin Burt in "Genes in Conflict: The Biology of Selfish Genetic Elements"
then we can at least glimpse possible solutions. Thus see for example:
* * *
http://esciencenews.com/ , http://www.physorg.com/ , http://medicalxpress.com/ , http://ndpr.nd.edu/recent-reviews/ , http://arxiv.org/archive/gr-qc , http://www.last.fm/home , http://www.ncbi.nlm.nih.gov/pubmed/ , https://plus.google.com/ , now Reddit, and... (blush) Facebook, though I'm trying to quit, again.
Kenametz, yes, but I'm not sure about timescale. Perhaps part of the problem is that gains in cognitive performance by taking today's nootropics are quite modest, at best - and are frequently offset by subtle deficits elsewhere. Also, taking psychostimulants like methylphenidate (Ritalin), for example, and other noradrenergic/dopaminergic drugs that increase the brain's "signal-to-noise" ratio, can modestly enhance memory and improve performance on a number of tests. Yet I wouldn't call them smart drugs/nootropics. Methylphenidate boosts arousal and motivation (two confounding variables). But original ideas can benefit from less focused, free-floating "cholinergic" states - a reason one often has interesting ideas while drifting off to sleep.
Perhaps the best way to boost intelligence world-wide right now, I think, would be optimal nutrition.
Folk psychology, as J L Austin once remarked, embodies the metaphysics of the stone-age. Sometimes it's illuminating to probe ordinary people's intuitions; but at other times, this entails simply cataloging error. [Not that I'd claim to have penetrated the riddle of existence myself.]
Just consider analytic Philosophy of Mind. IMO it's still at the pre-Galilean stage - some might say Pre-Aristotelian. A true experimental philosophy would entail adopting something on the lines of the rigorous methodology pioneered by Sasha Shulgin (cf. http://www.erowid.org/library/books_online/pihkal/pihkal.shtml ) - rather than probing each other's drug-naive intuitions.
Now there may be prudential reasons for deferring such experimental philosophy of mind until we have gained mastery over human reward circuitry. I wouldn't for a moment discount the possibility of bad trips. But much "experimental philosophy" is nothing of the kind. For a more sympathetic overview of x-Phi, perhaps see e.g.
The scenario you describe is utterly unlike today's factory farming practices. It's also clearly less ethically unacceptable. Assuming classical utilitarianism rather than a negative utilitarian ethic, is it actually commendable? No, IMO, for lots of reasons. I'll start with just one. The most common inquiry received by New Harvest (cf. http://www.new-harvest.org/ ) is whether it is technically feasible to grow in vitro human flesh [in principle, yes.] Sensing a gap in the market, would it be ethically acceptable for us to raise for the pot and humanely kill (human) infants and toddlers from other ethnic groups - along the lines of your scenario? Unlike so many black African children on the subcontinent today, our humanely reared black toddlers would enjoy a guaranteed food supply, protection from predators, and regular affection, as you suggest. They would be killed swiftly in a familiar environment (e.g. a bullet to the brain in an open field). Such youngsters would never have existed without our direct intervention, nor could they conceivably have desired anything further during the span of their existence.
Well, on (indirect) ethical utilitarian grounds, I think treating members of other ethnic groups this way would be ethically unacceptable, i.e. it would lead to worse long-term hedonic consequences for society as a whole. By the same token, treating beings of equivalent sentience from different species as mere property to be used for human culinary pleasure would be ethically unacceptable too, i.e. it would lead to worse hedonic consequences than enshrining the sanctity of sentient life in law.
More specific claims about the kinds of well-being (e.g. social or solipsistic) our enhanced descendants / successors may enjoy are clearly less likely to be true than the simple generic prediction that they will enjoy lifelong subjective well-being. Thus if the conception of superintelligence championed by e.g. the Singularity Institute for Artificial Intelligence (SIAI) is correct (entailing the likelihood of a singleton AGI late this century: The Singularity), then predictions of hyper-social intelligent well-being will (presumably) be confounded. The abolition of factory farming? Well, I won't pretend to cool critical detachment on this issue; I think what we're doing to other sentient beings is an abomination. Moral indignation is not a fruitful frame of mind for making dispassionate forecasts. But breakthroughs in in vitro meat technology mean that a global transition to veganism / invitrotarianism later this century is no longer remotely as incredible as the prospect seemed even a few years ago.
Kenametz, yes indeed. Physical pain today typically plays a vital information-signalling role in protecting bodily integrity. People born with congenital analgesia are prone to all sorts of medical problems, some life-threatening.
A partial solution to the problem of pain is to use preimplantation genetic diagnosis to make sure our prospective children are born with one of the benign alleles of the SCN9A gene associated with low pain-sensitivity. (cf. http://clinicaltrials.gov/ct2/show/NCT01507493 ) Other alleles promote high pain-sensitivity. Nonsense mutations of the SCN9A gene induce complete insensitivity to physical pain.
But what about banishing physical pain altogether?
I know of two classes of solution, not mutually exclusive. One option relies entirely on re-engineering ourselves to enjoy lifelong information-sensitive gradients of physical well-being, with dips in well-being [never falling below hedonic zero] indicating noxious or threatening stimuli. Without wishing to be too indelicate, couples making love, for example, can rely on information-signalling peaks and dips in bodily well-being to heighten pleasure without even the least delightful part of the proceedings ceasing to be enjoyable.
The other option is to offload the information-signalling role of painful stimuli onto smart neuroprostheses [presumably carrying an optional manual override so one doesn't sacrifice bodily autonomy]. The function of nociception should be distinguished from the experience of phenomenal pain: they are doubly dissociable. Recall how our silicon [etc] robots can be programmed to avoid and respond to noxious stimuli without undergoing the nasty "raw feels" of physical distress currently experienced by organic robots like us.
Apologies Underscore. Perhaps in future I should stick to soundbites and one-liners. :-) On a more serious note, I worry about advocating psychedelic research until we know how to deliver invincible well-being that makes bad trips impossible. No drug-naive person can make an informed choice of what they are letting themselves in for...
* * *
Taking outcome-counting [rather than the Born rule] as fundamental is intellectually elegant and very disturbing. In fact taking Everett seriously (as Robin does) leads to horrendous scenarios of death and destruction all around. (I guess casual readers of this thread might want to work through https://en.wikipedia.org/wiki/Many-worlds_interpretation before tackling Robin's http://hanson.gmu.edu/mangledworlds.html ) It's also one possible defeater to my tentative prediction that "we" are going to phase out the biology of suffering in "our" forward light-cone.
I'm less convinced by a future of simulated minds. Can digital computers solve the "binding problem" (in the phenomenological sense of "binding")?
I'm sceptical unitary conscious minds are feasible in silico. IMO unitary conscious minds are dependent on the functionally unique valence properties of carbon and the irreducibly quantum-mechanical properties of liquid water - commonly dismissed in AI as mere "substrate". However, most researchers don't share my combination of Strawsonian physicalism (cf. http://www.utsc.utoronto.ca/~seager/strawson_on_panpsychism.doc ) plus macroscopic quantum coherence [perhaps 10^13 quantum-coherent neural "frames" per second; cf. the persistence of vision watching a movie running at 30 frames per second] as an explanation of the unity of perception; the (fleeting) existence of the phenomenal self; and phenomenal object-binding in our virtual worlds.
Income at a bare subsistence level? I'm not sure this scenario is sociologically realistic. In any case, immersive VR should allow us all to live like princes. Moreover unlike positional goods and services, the substrates of pleasure don't need to be rationed. Would you rather live in a pauper's mud-hut with enriched reward pathways or as a prince in a palace today with an unenhanced brain?
SSRIs can sometimes make melancholic depression worse; but they are worth exploring if one is an anxious depressive. Atomoxetine gave me a rash; bupropion can be useful but it can make one grouchy. For now I'm sticking to my normal regimen of amineptine and selegiline while exploring JDTic. JDTic is an unlicensed investigational drug. But it's not scheduled.
Yes. Notions of personal dignity are clearly subjective. But I suspect most people would rather have a recording of themselves wireheading than making love in the public domain.
Necessary? Not for silicon (etc) robots avoiding noxious stimuli. In future, we can phase it out for organic robots too. Perhaps its functional analogues may be retained for signalling purposes. Alternatively, its signalling function today can be offloaded onto smart prostheses.
Rockthepenguins, If you want a brief synopsis of where I'm coming from, perhaps see e.g. https://www.abolitionist.com
De-raptured, your ability to act as an ethical agent depends on your cognitive capacity to represent other first-person perspectives. You are treating empathy as though it were a mere personality variable; whereas IMO it's an epistemic capacity. If you were truly incapable of empathetic understanding - as "mind-blind" as a cat tormenting a mouse - then indeed it's hard to see how you could be morally accountable for your actions: "ought" implies "can". But insofar as you adequately represent the first-person experience of other subjects - human or non-human - then you will recognize that you should not act wantonly to hurt them. (cf. http://news.sciencemag.org/sciencenow/2007/06/18-01.html ) Thus if blessed with the representational capacities of a mirror-touch synaesthete, you could no more hurt another subject out of callousness than you could wantonly hurt yourself. It's time to repair humanity's neurological deficits; I hope you can be enraptured once more!
Distinctchaos, you've touched on a hugely important issue. The only reason I don't write more on the democratization of research (and society in general) is that my views here are mostly derivative.
Compare the Transhumanist Declaration (1998, 2009) "We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise." (cf. http://humanityplus.org/philosophy/transhumanist-declaration/ ) with Nietzsche:
"To those human beings who are of any concern to me I wish suffering, desolation, sickness, ill-treatment, indignities - I wish that they should not remain unfamiliar with profound self-contempt, the torture of self-mistrust, the wretchedness of the vanquished: I have no pity for them, because I wish them the only thing that can prove today whether one is worth anything or not - that one endures." (The Will to Power, p 481)
"You want, if possible - and there is no more insane "if possible" - to abolish suffering. And we? It really seems that we would rather have it higher and worse than ever. Well-being as you understand it - that is no goal, that seems to us an end, a state that soon makes man ridiculous and contemptible - that makes his destruction desirable. The discipline of suffering, of great suffering - do you not know that only this discipline has created all enhancements of man so far?" (Beyond Good and Evil, p 225)
"I do not point to the evil and pain of existence with the finger of reproach, but rather entertain the hope that life may one day become more evil and more full of suffering than it has ever been." (etc)
Imagine if we were to stumble onto an alien civilisation that had abolished experience below "hedonic zero". They enjoy lives animated by gradients of intelligent bliss. What arguments would you use to persuade them of the value of what they were missing? Can we see why they might regard us as the crazy ones?
So long as participation is voluntary, great. Such implants sound cool. But even with utopian counterparts of Anderson's dystopian "Feed" (immersive VR designer paradises, Nozick's Experience Machines, and so forth), I don't think our quality of life will be radically improved unless we also recalibrate the hedonic treadmill. Mastery of our reward circuitry is indispensable to any transition to posthuman paradise.
Jbschirtzinger, when confronted both with appalling suffering and novel technologies of psychological pain-relief, we need (very) carefully to weigh risk-reward ratios. You are quite right to emphasise the potential risks. But when, say, general anaesthetics were discovered, (cf. https://www.general-anaesthesia.com/ ) should surgeons have delayed their widespread introduction for decades until longitudinal studies had been conducted of their use?
Tackling "psychological" pain can be just as urgent as tackling "physical" pain: a false dichotomy in any case.
Excellent ideas! Fortunately most of the options above can be readily combined. The only item I'd be cautious about is volunteering for drugs trials. If you want to explore a novel pharmaceutical agent, do you want to risk receiving the placebo instead? Also, do consider getting a dedicated website (and please send me the URL.)
I have done a rough draft for "The Abolitionist Project". But life has many distractions...
Reading through DSM,
I could probably lay claim to dozens. But I tend to give doctors a wide berth.
OK, perhaps I should waive my right to patient confidentiality:
http://mindfull.spc.org/vaughan/talks/ns_assignment/Capgras_and_Lycanthropy.pdf ("Case report: Co-existence of lycanthropy and Cotard's syndrome in a single case")
There are some folk who just don't like transhumanists full stop. But the sharpest criticism I receive within some sections of the transhumanist community itself is for taking an "extreme" position on nonhuman animals. i.e. I think factory farms should be outlawed. By contrast, a modest plea to promote the development and commercialization of in vitro meat - rather than urging the adoption of ethical veganism now - would raise scarcely a murmur. [There are other transhumanists who think I just don't "get" the inevitability of a mid-century Technological Singularity that will sweep biological life into the dustbin of history. But I doubt they would hit the downvote button.]
Either way, the commitment of the Transhumanist Declaration (1998, 2009) to the well-being of all sentience has extraordinarily radical implications if taken literally; and IMO it's important that they are rigorously critiqued.
My default state of consciousness in the absence of chemical enhancement (better, remediation) is a dull, low-grade depression. I see no purpose in spending life in that state. So no.
“It is a glorious thing to be indifferent to suffering, but only to one's own suffering.” (Robert Lynd)
Yes, we could take that route. On the other hand, I just don't think we can trust ourselves to avoid self-serving bias. Thus if we were trying to persuade cannibals not to eat (human) babies, it's prudent to urge an absolute taboo on eating human flesh - and not tackle cases where babies have died of natural causes, etc.
Not unless I were to opt for cryonic suspension.
Jmfavuevomuu, you are right to take me to task here. Apologies. Our language (and consequently our thoughts) is shot through with heterosexist, sexist, racist and speciesist assumptions - and my lazy wording just contributes.
Perhaps turn the (tentative) prediction around. One thousand years hence, the biology of suffering in some guise or other will exist in our forward light-cone. How strong is the supportive evidence for this claim? Is it antecedently more probable than the claim suffering will have been phased out?
For sure, there is one big complication, namely that what futurists predict in human affairs can sometimes take on the role of a self-fulfilling prophecy - and knowledge of this fact can colour their public predictions (recall Alan Kay: "The best way to predict the future is to create it"). I probably wouldn't bother to write if I didn't fleetingly entertain the thought there was some small but non-negligible chance I could make some kind of positive difference. However, it's hard to disagree with Bertrand Russell: "One of the signs of impending mental breakdown is the belief that your work is terribly important."
Abolishing, say, torture chambers in all their fiendish variety reduces diversity in one sense. But their abolition enables their potential victims to flourish instead - and promotes diversity in a different sense. Among individuals, it's depressives who get "stuck in a rut". By contrast, enriching mood increases not just motivation but the range of stimuli an organism finds rewarding. This increased range makes getting "stuck in a rut", and thereby reducing diversity, less likely. Other things being equal, the lesson of this well-attested experimental finding can be transferred to society as a whole. Likewise, posthuman paradise and its lovingly designed ecosystems can be as arbitrarily diverse as we wish. All that will be missing is the molecular signature of experience below "hedonic zero".
Or would it be preferable for disembowelling, asphyxiation and being eaten alive to be preserved indefinitely?
Plonce, if you're interested in transhumanism, just conceivably. Otherwise not. :-) (cf. https://www.hedweb.com/transhumanism/index.html ) Back in 1998, I co-founded with Nick Bostrom the World Transhumanist Association, now Humanity Plus. But I'm new to Reddit and its customs. Until I was invited a few days ago, I didn't even know the meaning of an "AMA". So apologies if it's not your cup of tea!
Paparatto, first, yes, your core thesis is surely correct. But I think we need to weigh risk-reward ratios. For example, smallpox killed hundreds of millions of people in history, and blighted the lives (and bodies) of hundreds of millions more. Last century the decision was taken not just to reduce its incidence, but systematically to wipe out the smallpox virus altogether. Eventually, the eradication campaign succeeded. The last naturally occurring case of smallpox occurred in 1977.
Unforeseen ecological consequences? Well, they are still playing themselves out, not least the effects of increased human population densities.
But did we take the right decision? IMO yes. Likewise with the long-term goal of phasing out the molecular signature of unpleasant experience wherever it is found. Of course arguing this will take a lot of work. :-)
* * *
1) To suggest that only white people, say, should go vegan would clearly be arbitrary and absurd. But if it's arbitrary and absurd to restrict veganism to particular races, on what grounds should we confine a cruelty-free vegan lifestyle to members of particular species? The racial or species identity of the predator is irrelevant to the victim. Is adopting a cruelty-free lifestyle about us (i.e. advertising our personal purity) or about them? For sure, some nonhuman predators are currently obligate carnivores, whereas humans can choose. But once again, the physiology of their killers is irrelevant to the victims.
2) The overwhelming majority of nonhuman animals cannot verbalize their distress. Nor can they verbally articulate their interests. But (almost) no sentient being wants to be harmed: to be asphyxiated, disembowelled or eaten alive, or to starve to death, or to die slowly of thirst. If you saw a small (human) child starving in front of your eyes, then on some fairly modest assumptions you would be duty-bound to intervene. So why aren't we likewise duty-bound to rescue a starving gazelle or elephant? Recall that later this century we'll have the computational power to surveil and micromanage every cubic metre of our planet. With power comes complicity.
3) A reversion to primitivism would entail a massive shrinkage of the global human population. But when primitivists say the Earth is overpopulated, they don't generally have themselves in mind. Either way, there is only one species intellectually capable of phasing out the cruelties of "Nature, red in tooth and claw". How much suffering do we want to create and conserve in the living world? For better or worse, humans are acquiring God-like powers. Let's use those powers benevolently rather than callously. The development of compassionate ecosystems in our wildlife parks will depend on high technology: cross-species fertility control via immunocontraception, behavioural and genetic tweaking of (ex) predators and much else besides.
[apologies, I am now back on duty. All questions will receive responses (many thanks), but please forgive my one-fingered typing: roll on the digital Singularity.]
Somewhat against my better judgement, I did try comparing Buddhism and (negative) utilitarianism a few years ago. https://www.bltc.com/buddhism-suffering.html The reason I hesitated is that inevitably plenty of Buddhist scholars would say I haven't understood the "true" meaning of Buddhism. But I know of no technical reason why we can't abolish suffering. If we eliminate its molecular signature, experience below "hedonic zero" becomes biologically impossible.
Cybernetically-enhanced biological minds have a long future ahead IMO:
Unlike some futurists, I don't think there's going to be a "robot rebellion".
First, I think we need to decouple sentience from intelligence, and intelligence from moral status. Some cognitively humble creatures can be intensely sentient, whereas artificially designed nonbiological systems can behave in ways most naturally labelled as "intelligent" without being unitary subjects of experience. [Why classical digital computers were, are, and always will be zombies IMO is a deep question I won't explore here, not least because my views are quite unorthodox. For a start, I don't think digital computers can solve the binding problem - in the sense of consciousness-related binding (cf. http://tracker.preterhuman.net/texts/body_and_health/Neurology/Binding.pdf ).]
What about biological life? Well, unlike scarce goods and services, the substrates of pleasure don't need to be rationed. So if the political consensus existed, we could probably engineer the well-being of sentience in a century or less. In practice, I fear centuries of misery still lie ahead.
For now, I think we should focus on shutting down factory farms and extending the principles of the welfare state (but not "welfarism") to large-brained vertebrates. But in the longer run:
Artificial intelligence research attracts people with high AQs as well as high IQs (cf. http://www.wired.com/wired/archive/9.12/aqtest.html ). So existing conceptions of "posthuman superintelligence" tend to resemble autistic spectrum disorder more than full-spectrum superintelligence. I enjoyed "The Artilect War". But IMO the prospect of inevitable "gigadeath" conflict between Cosmists and Terrans later this century is science fiction.
Ah, I probably shouldn't have hotlinked that depressing paper (cf. https://www.abolitionist.com/multiverse.html ; "Quantum Ethics? Suffering In The Multiverse"). I just wanted to rebut Benthamite's charge of susceptibility to wishful thinking. The universal wave function encodes some truly ghastly stuff I'd rather not discuss here. Like the Holocaust, it's best not to spend time dwelling on unspeakable events one has no power to influence.
I don't think scientific curiosity ethically entitles us to harm or kill another sentient being. Fortunately, many tests on human and nonhuman animals don't involve harming or killing. And of those that do, many involve organisms that don't pass the threshold of sentience. Thus we share a large number of genes with yeast. The exponential growth of computer power should also allow us to simulate what could once only be discovered by human and nonhuman animal testing. But yes, there are real ethical dilemmas here. And what is the threshold of sentience?
Well, it's easier to do on Peruvian marching powder than on my own austere regimen. Daily aerobic exercise, good diet, sleep discipline...and drugs. (2 x 5mg selegiline, c. 250mg amineptine daily.)
The possibility of systematic bias is certainly relevant. As well as the well-known cognitive biases
there are biases reflective of temperament. Most obviously, happy people tend to make optimistic predictions, depressives pessimistic ones. Perhaps one should trust one's judgement more when one arrives at mood-incongruent conclusions. In that respect at least, I don't think I'm unusually at fault. Is the author of https://www.abolitionist.com/multiverse.html unusually prone to wishful thinking? Yes, for technical reasons I think we're likely to phase out the biology of suffering in "our" forward light-cone (very crudely, via a combination of the pleasure principle plus biological superintelligence). But post-Everett quantum mechanics can make a mockery of human pretensions. I fear our success can only be parochial at best.
A recent international survey of the percentage of people describing themselves as "very happy" put Indonesia at the top, followed by India, then Mexico. (http://www.economist.com/node/21548213 ) For the most part, "developed" Western nations scored poorly by comparison. So belief that advanced technology will shortly let us claw our way out of the Darwinian abyss requires something of an act of faith. But ultimately, only biotechnology can allow us to phase out the biology of suffering throughout the living world - and eventually abolish experience below "hedonic zero" altogether.
I think we need to accelerate progress in everything from in vitro meat to gene therapy. But unless we recalibrate the hedonic treadmill, I can't see the subjective quality of human life being significantly enhanced.
All I'd argue is that the nonhuman animals we currently factory farm and kill should be accorded the same degree of care and respect we give human youngsters of equivalent sentience. We currently spend e.g. £100,000 looking after 23-week-old micro-preemies in neonatal intensive care units - while far more sentient creatures end up on our dinner plates after being horribly abused. Such is anthropocentric bias.
The transition to a cruelty-free vegan lifestyle is only one strand of the abolitionist project. But insofar as we acknowledge that it's ethically unacceptable to kill or abuse human babies and prelinguistic toddlers, then I think rational self-consistency dictates treating nonhuman beings of equivalent sentience with the same care and respect - regardless of race or species. For sure, only human infants and toddlers have the "potential" to become mature adult human beings. But we don't regard human toddlers with a progressive disease who will never reach their third birthday as any less worthy of care and respect than their normally developing contemporaries. We value human infants and prelinguistic toddlers for who they are, not just for what they may - or may not - become. I'd argue that we should take exactly the same approach with nonhuman animals. Or to quote Jeremy Bentham: "The question is not, 'Can they reason?' nor, 'Can they talk?' but, 'Can they suffer?'"
IMO a change of the law is urgently needed to allow people to be cryonically suspended before their medically pronounced death. All too often today irretrievable (?) information-loss occurs before suspension. I'm not personally signed up because I think my matter and energy could more fruitfully be configured into a blissful posthuman smart angel instead...
I guess a depressive angst-ridden temperament first drew me to philosophy as a teenager. I know a lot of scientists as well as laymen are scornful of philosophy - perhaps understandably so. Reading academic philosophy journals often makes my heart sink too. But without exception, we all share philosophical background assumptions and presuppositions. The penalty of not doing philosophy isn't to transcend it, but simply to give bad philosophical arguments a free pass.
Yes. :-) I'm currently trying JDTic at 2mg daily. http://esciencenews.com/articles/2012/03/21/team.finds.atomic.structure.molecule.binds.opioids.brain The ongoing trial http://clinicaltrials.gov/ct2/show/NCT01431586 is testing subjects on 1mg, 3mg and 10mg JDTic daily. I shall report back.
https://www.abolitionist.com (1997) is an overview, together with https://www.abolitionist.com/reprogramming/index.html https://www.superhappiness.com Perhaps see too https://www.reproductive-revolution.com The most recent is https://www.biointelligence-explosion.com, https://www.biointelligence-explosion.com/parable.html
* * *
Cruelty to members of other species is ethically no different from cruelty to members of other races. If human babies and prelinguistic toddlers were being treated the way we treat pigs, you wouldn't consider the concern disproportionate. On the contrary, you'd judge the systematic killing and abuse to be the greatest crime of our age - and devote your energies to bringing the horror to an end. Of course pigs are different from human babies or toddlers. But the question to ask is whether any of the differences between them (e.g. the slightly different structure of the FOXP2 gene implicated in generative syntax) are morally relevant differences. Is there any evidence that an adult pig is less sentient than a two-year-old toddler? Sadly, none of which I'm aware.
I've never eaten meat: all four of my grandparents were vegetarian. This is an accident of birth, not a mark of virtue. But being raised on a meatless diet does remove one obvious source of self-serving bias, i.e. most humans like the taste of animal flesh and therefore seek to rationalise their eating habits. Heaven knows how we'll explain what we did to other sentient beings to our grandchildren. ( "But I liked the taste" [?] )
I anticipate superintelligence will be hyper-social - perhaps a cognitive extension of the hyper-empathising condition of mirror touch synaesthesia:
After all, it was our superior "mind-reading" skills that helped make humans the cognitively dominant species on the planet. We just need to enrich and de-bias our perspective-taking capacities. IMO aggression is likely to pass into history together with archaic primate minds. But heaven knows how much death and suffering will occur this century. I won't attempt an essay on futurology here, or even a comparative review of Stross vs Egan. But the most recent substantive piece I've written is https://www.biointelligence-explosion.com/ A tidied up version will be appearing in the forthcoming Springer volume later this year. http://singularityhypothesis.blogspot.com/
I wrote HI in 1995. Since then we've discovered how to abolish, reduce (and amplify) the capacity to experience pain (different alleles of the SCN9A gene). In vitro meat has passed from being mere sci-fi to a scientifically credible option that may be commercialized within a decade. And even the "wildest" aspect of the abolitionist project, namely the proposal to phase out carnivorous predation, was championed in print for the first time last year by Jeff McMahan in The New York Times.
But I need scarcely tell you the obstacles are still daunting. And most of the scientific community still regard phasing out the biology of suffering as far-fetched, to say the least.
* * *
Aubrey de Grey? Deep admiration. I started "Ending Aging" with a host of reservations.
Aubrey demolished them one after the other. I still fear he is optimistic on timescales. But then I'm temperamentally a pessimist about most things.
I am just as "aggressively" opposed to racism. In practice, I wouldn't say boo to a goose. But the worst source of severe and avoidable suffering in the world today is factory farming. Would it be preferable to express mild disapproval instead?
Anzereke, excellent, on one critical point at least we're agreed. When the technology to phase out the biology of suffering becomes available, we should be free to use it. Above I (mis)interpreted you as wishing to see it banned.
However, I gather we do still disagree on the wisdom of its use. Two points here.
First, you argue that no longer experiencing aversive states must lead to "functional loss". But take boredom, for example. Yes, finding everything indiscriminately interesting would lead to loss of critical insight. Yet finding instead some aspects of life utterly enthralling, while others merely fascinating, preserves the signalling role of gradients of interest while banishing the feeling of tedium that many people undergo today. In common with hedonic tone, one's default level of interest can be biologically recalibrated - indeed they are intimately connected. Which default setting leads to a higher quality of life?
Second, you suggest that something valuable will be lost if we no longer undergo the variety of aversive states commonly experienced at present. Here we differ. But let's suppose for the sake of argument you are right: the "nasty" side of life really is valuable in its own right: jealousy, resentment, depression, malaise and the more unpleasant Darwinian emotions. We also need to consider their opportunity cost. Is time spent being depressed, jealous, malaise-ridden (etc) not just valuable, but more valuable, than time spent exploring different varieties of life's sublime experiences? This would be a very radical claim. Do I interpret you correctly?
Comparisons of each other's relative arrogance or humility are rarely fruitful. (Recall Golda Meir's acid put-down to one of her ministers: "Don't be so humble. You're not that great.") But if we do want to focus on this issue, I think we need to ask which is more arrogant: 1) forcing others to undergo involuntary suffering indefinitely? or 2) ensuring the biology of suffering (or bliss) becomes entirely a personal choice?
For evolutionary reasons, there are far more depressives than hyperthymics in this world. But as it happens, one of the most extreme hyperthymics I know focuses especially on issues of existential risk, namely the transhumanist scholar Anders Sandberg. [I name Anders only by express permission!] Other things being equal, the more one loves life, the more one is motivated to conserve it.
Variety? Intuitively, yes, Heaven is less diverse than Hell. One recalls Tolstoy's opening line in Anna Karenina: "All happy families are alike; each unhappy family is unhappy in its own way." But in practice, the opposite is true. It's depressives who tend to get "stuck in a rut". By contrast, lifting mood tends not just to strengthen motivation, but also expand the range of stimuli one finds enjoyable - making monotony less likely. Control over our reward circuitry will allow us to switch on a degree of e.g. hypomanic exuberance at will. Such control will also enable us to explore the fabulous diversity of psychedelia without the risk of "bad trips" that discourages responsible advocacy of such exploration today.
I suspect posthuman paradise will be diverse beyond anything we can physiologically imagine.
Sober but cutting-edge analysis "The Biointelligence Explosion: How recursively self-improving organic robots will modify their own source code and bootstrap our way to full-spectrum superintelligence"
by appliedphilosophy in Transhumanism
"Faul_sname", it's possible we don't share some of the same philosophical background assumptions. So I fear you may find my response frustrating. But I'll do my best, so here goes...
1. Giving MDMA (cf. https://www.mdma.net) to a bug-eyed monster from Alpha Centauri - or to a Pentagon drone - is unlikely to elicit a friendly response. Nor, if the I.J. Good/SIAI/MIRI conception of singleton AGI-in-a-box is tenable, would use of MDMA guarantee that the AGI goes FOOM in a human-friendly way. But my focus in this section isn't on agents lacking a mirror-neuron architecture. Rather it's on the potential unfriendliness of male human primates - currently perhaps the world's single greatest underlying source of global catastrophic and existential risk. Psychiatrists disagree whether sociopathy should be regarded as dimensional or categorical. Either way, the great majority of humans don't meet the clinical criteria for sociopathy, despite our frequent unfriendliness to other sentient beings. So how do "hug drugs" elicit their short-lived but profound empathetic effect on most of the humans who take them? What can the mechanism of action of such drugs teach us? Until recently, the standard story of MDMA's mechanism of action (i.e. serotonin and dopamine release) didn't really make sense. The missing piece of the jigsaw was provided when M.R. Thompson and his colleagues discovered that taking MDMA also releases copious quantities of oxytocin. (cf. http://www.ncbi.nlm.nih.gov/pubmed/17383105 ) For sure, oxytocin itself is no panacea: it won't help induce friendliness in e.g. artificial nonbiological agents that lack mirror neurons. But one option for enhancing our limited human capacity for friendliness is long-term oxytocin-enrichment. Potential pitfalls? Yes, sadly there are many.
2. There are all sorts of circumstances in which empathetic understanding is not the most efficient way to predict and manipulate the actions of humans. For example, the handful of people who made money shorting the market in the global market meltdown of 2008 were mostly Aspergers who trusted what the numbers were telling them in defiance of consensus wisdom. (cf. Michael Lewis' illuminating "The Big Short": https://en.wikipedia.org/wiki/The_Big_Short ) So might we extend this example without limit? Can we imagine a utopian supercomputer that just solved the Schrödinger equation for the seven billion intentional agents it was monitoring without taking what Daniel Dennett calls the intentional stance at all? (cf. https://en.wikipedia.org/wiki/Intentional_stance ) In the abstract, yes. But in the real world, informational resources are physical, finite and scarce. By means of mechanisms that are very poorly understood, neurotypical humans use their "mind-reading" prowess to understand the behaviour of intentional agents in situations where taking the physical stance yields results indistinguishable from guesswork. Whether human behaviour will ever be fully emulable by a real-world classical digital computer is, I think, very much an open question.
3. A nonsentient system can indeed be programmed to develop models. The system can use those models to make predictions about everything from the weather to thermonuclear reactions to the early Big Bang. But we need to explain Moravec's paradox: https://en.wikipedia.org/wiki/Moravec's_paradox. Why is a bumblebee more successful at navigating its environment in open-field contexts than the most advanced artificial robot we can build today? The conjecture I offer is controversial. But IMO a solution to the phenomenal binding problem (cf. http://tracker.preterhuman.net/texts/body_and_health/Neurology/Binding.pdf ) is critical to understanding the evolutionary success of organic robots over the past 540 million years - and why digital computers are insentient zombies.
4. You describe consciousness as a "puzzle". By the same token, we might call e.g. the disappearance from view of the lower parts of a ship on the horizon a "puzzle" for the theory that the Earth is flat. Given a standard materialist ontology, phenomenal consciousness ought not to exist (cf. Levine's "Explanatory Gap": https://en.wikipedia.org/wiki/Explanatory_gap ). So how is it possible that humans can systematically explore, reason about, and manipulate the manifold varieties of a phenomenon that a materialist ontology seemingly prohibits? I think consciousness is better conceived as a deep problem for our entire conceptual framework.
5. Empathy? There is far more to empathy than a capacity to understand another sentient being's location on the pleasure-pain axis. If we - or a notional AGI - want to understand e.g. what it's like to be a bat, or master sixth-order intentionality like Shakespeare, or understand what it's like simultaneously to have lycanthropy combined with Cotard's syndrome (cf. http://www.ncbi.nlm.nih.gov/pubmed/15701110 ) or simply to be of a different gender or sexual orientation (etc), then we need to enrich our capacity for perspective-taking.
6. Convergence versus the orthogonality hypothesis? Which argument for convergence [on the well-being of all sentience in our forward light-cone] do you find unconvincing? This topic needs a separate post (indeed a separate paper!). But I'm slightly mystified why you assume I'm unfamiliar with Nick's work: I've followed his writing with respect ever since we set up the World Transhumanist Association back in 1998. Naturally, we don't agree on some topics. But then the history of philosophy suggests it's a minor miracle when philosophers agree on anything at all. :-)
* * *
Anzereke, could you possibly clarify what you have in mind when you say that you "get along without empathy"? Are you alluding to personal relationships? Or just while doing technical tasks? Talk of moral progress can sound naive. But the decline in violence - and in the mistreatment of blacks, gays, children, women (etc) - chronicled by e.g. Steven Pinker in "The Better Angels of Our Nature" (https://en.wikipedia.org/wiki/The_Better_Angels_of_Our_Nature) seems to be in part a function of our expanding "circle of compassion". Rules can certainly systematise our moral sentiments. But in the absence of a perspective-taking capacity, the rules governing our conduct towards other sentient beings would be arbitrary.
Perhaps it's worth stressing that genetic recalibration of the hedonic treadmill isn't an alternative to other transhumanist visions of the future. Rather it's complementary. Imagine for a moment your own personal conception of paradise. If your hedonic set-point were raised, then reality would consistently be more wonderful than anything you've ever imagined. By contrast, if your old genetically constrained hedonic set-point were retained, the usual vicious set of negative feedback mechanisms would kick in - and you wouldn't be much (un)happier than before. Conserving the settings of our existing reward pathways guarantees that misery and malaise will persist indefinitely - even in a high-tech version of the Garden of Eden. Thus compare the level of well-being / ill-being of, say, lottery winners and catastrophic accident victims six months after the win / accident.
("Lottery winners and accident victims: is happiness relative?")
Does recalibration have potential pitfalls? Yes. But there is a difference between being blissful and being "blissed out" - between life animated by gradients of intelligent well-being and a Brave New World getting high on soma.
* * *
All biological structures, including the brain, exist as a succession of macroscopic superpositions. To claim otherwise would be to claim that quantum mechanics is incomplete / false.
Thermally-induced decoherence ("splitting" in the rather misleading language of many worlds) in a warm, noisy system like the mind/brain is so rapid that the physical signature of macroscopic quantum coherence will be exceedingly hard to detect with our existing instrumentation. But I'm not assuming any kind of new physics: on the contrary. Likewise, I'm assuming (if only for the sake of argument) that quantum-mind sceptic Tegmark rather than his critics is roughly correct about decoherence time-scales too. Instead, I'm asking a question. What, if anything, does it feel like to instantiate 10^13 irreducible quantum-coherent CNS states per second? Whether true or false, my working assumption is that the phenomenon each of us apprehends as the classical, macroscopic world is what a naturally evolved quantum computer feels like from the inside - a data-driven simulation of fitness-relevant patterns in the mind-independent environment, optimised by hundreds of millions of years of evolution. No, this conjecture doesn't address Tegmark's related argument that 10^-13 seconds is too short an interval to do computationally useful work. The mind/brain certainly doesn't perceive the external world on sub-picosecond timescales. But then I think it's naive to suppose the mind/brain "perceives" the external world at all. Quasi-classical worlds are mind-dependent: this is the point of the world-simulation metaphor. Rather, the role of inputs from the optic nerve (etc) while one is awake is to select from a finite menu of possible mind-brain states. Functionally speaking, it's of no consequence if some frames of our quasi-classical world-simulations are dropped or "mangled".
Such frames aren't explicitly represented or remembered. So it's simply not the case that possible solutions to the binding problem turning on quantum coherence stem from any desire on my part at least to "have a separate stuff of consciousness". On the contrary, it's precisely because we need rigorously to derive the macro-phenomenology of our world-simulations from the underlying microphysics that I take the role of quantum coherence so seriously. I promise I'm an ontological monist loath to sacrifice the completeness and causal closure of physics. By contrast, quantum mind theorists who don't assume a Strawsonian physicalism are indeed guilty of the kind of dualism you criticise. Such theorists try to pull an ontological rabbit out of the proverbial hat by suggesting that a fundamentally non-sentient world can (somehow) generate sentience and solve the binding problem at one fell swoop by invoking a quantum mechanical effect. I think we can both agree such ideas don't work.
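The timescale arithmetic cited above can be made explicit. This is a back-of-envelope sketch only: the 10^-13 s figure is the Tegmark decoherence estimate discussed in the text, and the ~30 Hz perceptual "frame rate" is the classical-movie analogy used elsewhere in this piece, not an empirical measurement.

```python
# Rough arithmetic for the decoherence-timescale conjecture: thermal
# decoherence at ~10^-13 s implies ~10^13 successive quantum-coherent
# CNS states per second. The 30 Hz "frame rate" is only the
# "30-frames-per-second movie" analogy from the text.

decoherence_time_s = 1e-13                   # assumed decoherence timescale (Tegmark)
states_per_second = 1 / decoherence_time_s   # ~1e13 coherent states per second
perceptual_frame_rate_hz = 30                # classical-movie analogy

states_per_frame = states_per_second / perceptual_frame_rate_hz
print(f"{states_per_second:.0e} coherent states per second")          # 1e+13
print(f"{states_per_frame:.1e} coherent states per perceived frame")  # 3.3e+11
```

On these numbers, each perceived "frame" of a world-simulation would supervene on hundreds of billions of successive coherent states - which is why the conjecture locates any phenomenology/microphysics match at picosecond timescales rather than at the timescale of perception.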
You argue that because microelectrode stimulation of some surgically exposed regions of the visual cortex causes conscious experience whereas stimulation of other regions is silent, the lack of reported experience shows "we can say fairly conclusively that it's something above the level of the individual neuron that causes conscious experience." I have two main worries here. My first worry I discuss above. Causes temporally precede their effects. Causes don't act between levels of description. If consciousness is identical with a set of physical processes, then those selfsame processes do not cause the consciousness in question, any more than the state of consciousness causes those selfsame physical processes. But my second worry is very different. It's too quick to pass from the observation that microelectrode stimulation of the exposed neocortex doesn't elicit reports of experience to the conclusion that no such experience occurs. For how do we know that the speech-generating mechanism and/or the prefrontal cortex of the subject simply isn't accessing the fleeting qualia induced by electrode stimulation? Perhaps we are faced with the opposite of the binding problem / unity of consciousness. We can't just assume all parts of the mind/brain are fully transparent to each other. Compare the "tip-of-the-tongue" phenomenon. Is the inaccessible item present or absent to consciousness? Or compare how one may withdraw one's hand from a hot stove before one feels the pain. Does this delay prove that phenomenal pain wasn't responsible for one's hand withdrawing? No. For how do we know that the relevant nerve ganglia external to the CNS don't experience phenomenal pain - primitive "encapsulated" experiences to which the neocortex lacks direct access? The question is open.
* * *
Non-sentient zombie, could you possibly clarify? Before major surgery, would you request administration of a general anaesthetic as well as a muscle relaxant? And if so, why?
* * *
[Note the conjecture is that only a combination of Strawsonian physicalism and ultra-rapid sequences of quantum coherent states (not decoherence) can potentially dissolve the Binding Problem and generate unitary experiential objects and the unity of consciousness. By contrast, spatially discrete classical processes could at most yield patterns of "mind dust", leaving the Explanatory Gap unbridged. On this story, what we naively describe as the brain's spatially distributed processes of edge detection, motion detection, colour detection (etc) must be conceived as a sequence of single, fleetingly irreducible quantum mechanical states - subjectively experienced as the quantum analogue of a classical 30-frames-per-second movie. Max Tegmark, in his much cited critique of quantum mind theories (cf. http://arxiv.org/pdf/quant-ph/9907009v2.pdf) doesn't dispute the existence of macroscopic quantum coherence in biological systems per se. Such a denial would amount to the claim that quantum mechanics is incomplete. Rather Tegmark is arguing that thermally-induced decoherence occurs much too rapidly for quantum coherent superpositions to do computationally useful work. Hence the brain is not a quantum computer.]
Could the enigma of consciousness turn out to be analogous to the enigma of life before Watson and Crick? I don't think so. Before the Modern Synthesis, matter and energy in living and non-living systems seemed to obey different laws. The existence of living beings seemed inconsistent with the second law of thermodynamics. Natural science was radically incomplete. This is no longer the case. In fact I'd have to differ with you over the scope of physics. If physicalism is true, then all the facts supervene on the microphysical facts: "All science is either physics or stamp collecting", said Rutherford. Ontologically speaking, IMO Rutherford is right. The great triumph of the Standard Model - at least within sub-Planckian energy ranges not exceeded since a fraction of a second after the Big Bang - is that the Standard Model exhaustively describes the behaviour of all matter and energy: all of chemistry and all of molecular biology. All of the special sciences are subsumed within its formalism. Indeed, as far as our experiments can show, physics is causally closed and complete. In principle, we could rigorously derive the gross behaviour of complex biological and nonbiological systems from the underlying microphysical interactions. Within the mathematical straitjacket of physics, there is no room for an élan vital, entelechies, spirits, souls - or consciousness. Yet as you say, if one's theory entails that a phenomenon is impossible, and one discovers that the phenomenon exists in the real world - and the phenomenon causes one to talk and write about its existence, as now - then something is wrong with the theory: either its ontology or its formalism, or both.
So what gives? All one ever knows, except by inference, is the phenomenology of one's own mind. Yet according to the ontology of (what is commonly reckoned to be) our best account of the world, the phenomenology of mind is unexplained and apparently inexplicable. This isn't a puzzle; it's a crisis. Hence the desperate-sounding solutions bubbling away on the margins of the mainstream: Chalmers' dualism, Dennett's radical eliminativism, McGinn's mysterianism - and yes indeed, the formally physicalist version of monistic idealism that I explore as a working hypothesis. Positing microqualia as the "fire" in the field-theoretic equations is the only way I can think of to rescue physicalism, just as conjecturing quantum coherence is the only conceivable route I can think of to resolving the binding problem. As we know, Strawson goes so far as to claim that physicalism entails panpsychism. By contrast, if I understand you correctly, you'd argue that consciousness should be regarded merely as an anomaly in the way, say, high-temperature superconductivity is anomalous. The mechanism for its emergence isn't adequately understood. But high-temperature superconductivity is unequivocally physical - in the traditional sense of "physical" rather than the revisionary sense of the term canvassed by Strawsonian physicalists.
Anyhow, if I were a betting man, I'd stick my neck out. Explaining consciousness will entail a mega-paradigm shift (cf. https://en.wikipedia.org/wiki/Paradigm_shift ) more profound than anything in the history of science to date: an ontological revolution in our conception of what it means to be "physical". The mysteries of consciousness will never be investigated, let alone solved, by digital zombies - mere super-savants - but by sentient unitary organic minds.
Here, I suspect, our background assumptions - and hence our intuitions - radically differ...
* * *
Can a cognitive agent be intelligent, let alone superintelligent, and yet fail to understand, or lack any capacity to investigate, fundamental features of the natural world? If the agent in question were constitutionally ignorant of the properties of, say, matter, energy and the second law of thermodynamics, then I trust we would say no: such an agent is profoundly ignorant, or at best an idiot savant. Yet if the cognitive agent in question is constitutionally ignorant of the properties of, say, phenomenal objects, conscious minds, or the nature of pain and pleasure, then many AI researchers are nonetheless willing to ascribe intelligence - and potentially even superintelligence. IMO this is an anthropomorphic projection on our part.
How might the apologist for digital (super)intelligence respond? Several ways, I guess. Here are just two.
First, s/he might argue that the manifold varieties of consciousness are unimportant and/or causally impotent. Intelligence, let alone superintelligence, does not concern itself with trivia. But in what sense are, say, the experience of agony or despair trivial, whether subjectively to their victim, or conceived as disclosing a feature of the natural world? Compare how, in a notional zombie world otherwise physically type-identical to our world, nothing would inherently matter at all. Some of our supposed counterparts might undergo boiling in oil, but who cares: they aren't sentient. By contrast to such a fanciful zombie world, the nature of phenomenal agonies as we undergo such states isn't trivial: indeed the thought that (1) I'm in unbearable agony and (2) the agony doesn't matter, is devoid of cognitive sense. And in any case, we can be sure that phenomenal properties aren't causally impotent epiphenomena. Epiphenomena, by definition, lack causal efficacy - and hence lack the ability physically or functionally to stir us to write and talk about their existence.
Second, perhaps the believer in digital (super)intelligence might claim that (some of the programs executed by) digital computers are conscious, or at least potentially conscious, not least future software emulations of human brains. For reasons we admittedly don't yet understand, some physical states of matter and energy, perhaps the different algorithms executed in various information processors, are identical with different states of consciousness, i.e. a functionalist version of the mind-brain identity theory is correct. Granted, we don't yet understand the mechanisms by which information processing generates consciousness. But whatever these consciousness-generating processes may turn out to be, materialism is correct. Biological and nonbiological agents alike can be conscious minds.
Unfortunately, there is an insurmountable problem here. Identity is not a causal relationship. We can't simultaneously claim that a conscious state is identical with a brain state and maintain that this brain state causes (or "generates", or "gives rise to", etc) the conscious state in question. Nor - and this is where Searle stumbles - can causality operate between what are only levels of description. Hence the Hard Problem of Consciousness and the Explanatory Gap. What I meant by denying that consciousness is a mere "puzzle" is that solutions to puzzles don't challenge our conceptual scheme. Thus a difficult crossword clue may stump us; but we may be confident the answer will leave our world-picture intact. By contrast, if we discovered a fairy living at the bottom of the garden, even a little one, then materialism would be falsified. Materialism is the thesis that the physical facts exhaustively constitute all the facts. A suppressed premise is that the intrinsic nature of the physical is devoid of phenomenal properties. Granted this suppressed premise, the existence of consciousness - even a single instance of consciousness - falsifies materialism. Admittedly, not everyone would agree here. Radical eliminativism about consciousness has been described as the craziest theory in the history of philosophy; but eliminativists are right in one sense: given the ontology of physics as standardly understood, consciousness is impossible. Most of us find eliminativism literally incredible.
Anyhow, the conjecture I offer to resolve the mystery of conscious mind, involving a combination of Strawsonian physicalism plus macroscopic quantum coherence, may well be false. But it's empirically adequate; it eliminates the Hard Problem and the Explanatory Gap; it yields a falsifiable prediction of a perfect match between the phenomenology of our minds and the microstructural properties of the brain at picosecond timescales; and it predicts that digital computers will never be sentient. We shall see. :-)
David Pearce (2014)