Source: LessWrong
Date: 2014

Less wrong or horribly mistaken?

Unsorted Postings on LessWrong on transhumanism, decision-theoretic rationality, consciousness, the Technological Singularity and veganism

[on the BIOPS "Introduction to Transhumanism"]
"Health is a state of complete [sic] physical, mental and social well-being": the World Health Organization definition of health. Knb, I don't doubt that sometimes you're right. But Is phasing out the biology of involuntary suffering really too "extreme" - any more than radical life-extension or radical intelligence-amplification? When talking to anyone new to transhumanism, I try also to make the most compelling case I can for radical superlongevity and extreme superintelligence - biological, Kurzweilian and MIRI conceptions alike. Yet for a large minority of people - stretching from Buddhists to wholly secular victims of chronic depression and chronic pain disorders - dealing with suffering in one guise or another is the central issue. Recall how for hundreds of millions of people in the world today, time hangs heavy - and the prospect of intelligence-amplification without improved subjective well-being leaves them cold. So your worry cuts both ways.

Anyhow, IMO the makers of the BIOPS video have done a fantastic job. Kudos. I gather future episodes of the series will tackle different conceptions of posthuman superintelligence - not least from the MIRI perspective.


[on cosmological consensus]
Can preference utilitarians, classical utilitarians (CUs) and negative utilitarians (NUs) hammer out some kind of cosmological policy consensus? Not ideal by anyone's lights, but good enough? So long as we don't create more experience below "hedonic zero" in our forward light-cone, NUs are untroubled by wildly differing outcomes. There is clearly a tension between preference utilitarianism and classical utilitarianism; but most(?) preference utilitarians are relaxed about having hedonic ranges shifted upwards - perhaps even radically upwards - if recalibration is done safely, intelligently and conservatively - a big "if", for sure. Surrounding the sphere of sentient agents in our Local Supercluster(?) with a sea of hedonium propagated by von Neumann probes or whatever is a matter of indifference to most preference utilitarians and NUs, but mandated(?) by CU.

Is this too rosy a scenario? I guess proposing a pan-galactic policy initiative in a blog post shouldn't be attempted by anyone without a sense of the absurd.


[on individualistic versus collectivist conceptions of decision-theoretic rationality]
I hadn't intended to embark on a critique of orthodox decision theory; but the use of hamburger-choosing Jane as a paradigm of rationality in the LessWrong Decision Theory FAQ raised my hackles.

But note that burger-choosing Jane (6.1) is still irrational - for she has discounted the much stronger preference of a cow not to be harmed. Rationality entails overcoming egocentric bias - and ethnocentric and anthropocentric bias - and adopting a God's-eye point of view that impartially gives weight to all possible first-person perspectives.

* * *

Larks, we can of course stipulatively define "rational" so as to exclude impartial consideration of the preferences of other agents or subjects of experience. By this criterion, Jane is more rational than Jill - who scrupulously weighs the preferences of other subjects of experience before acting, not just her own, i.e. Jill aspires to a more inclusive sense of instrumental rationality. But why favour Jane's folk usage of "rational"? Jane's self-serving bias arbitrarily privileges one particular here-and-now over all other first-person perspectives. If the "view from nowhere" offered by modern science is correct, then Jane's sense she is somehow privileged or ontologically special is an illusion of perspective - genetically adaptive, for sure, but irrational. And false.

[No, this argument is unlikely to win karma with burger eaters. :-)]

* * *

Khafra, one doesn't need to be a moral realist to give impartial weight to interests / preference strengths. Ideal rational agent Jill need no more be a moral realist in taking into consideration the stronger but introspectively inaccessible preferences of her slaves than she need be a moral realist in taking into account the stronger but introspectively inaccessible preference of her namesake and distant successor, Pensioner Jill, not to be destitute in old age when weighing whether to raid her savings account. Ideal rationalist Jill does not mistake an epistemological limitation on her part for an ontological truth. Of course, in practice flesh-and-blood Jill may sometimes be akratic. But this, I think, is a separate issue.

* * *

Khafra, could you clarify? On your account, who in a slaveholding society is the ideal rational agent? Both Jill and Jane want a comfortable life. To keep things simple, let's assume they are both meta-ethical anti-realists. Both Jill and Jane know their slaves have an even stronger preference to be free - albeit not a preference introspectively accessible to our two agents in question. Jill's conception of ideal rational agency leads her impartially to satisfy the objectively stronger preferences and free her slaves. Jane, on the other hand, acknowledges their preference is stronger - but she allows her introspectively accessible but weaker preference to trump what she can't directly access. After all, Jane reasons, her slaves have no mechanism to satisfy their stronger preference for freedom. In other words, are we dealing with ideal rational agency or realpolitik? Likewise with burger-eater Jane and Vegan Jill today.

* * *

The issue of how an ideal rational agent should act is indeed distinct from the issue of what mechanism could ensure we become ideal rational agents, impartially weighing the strength of preferences / interests regardless of the power of the subject of experience who holds them. Thus if we lived in a (human) slave-owning society, then as white slave-owners we might "pragmatically" choose to exclude the preferences of black slaves from our ideal rational decision theory. After all, what is the point of impartially weighing the "preferences" of different subjects of experience without considering the agent that holds / implements them? For our Slaveowners' Decision Theory FAQ, let's pragmatically order over agents by their ability to accomplish their goals, instead of by "rationality". And likewise today with captive nonhuman animals in our factory farms? Hmmm....

Pragmatic? Khafra, possibly I interpreted the FAQ too literally. ["Normative decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose."] Whether in practice a conception of rationality that privileges a class of weaker preferences over stronger preferences will stand the test of time is clearly speculative. But if we're discussing ideal, perfectly rational agents - or even crude approximations to ideal perfectly rational agents - then a compelling case can be made for an impartial and objective weighing of preferences instead.

* * *

IlyaShpitser, you might perhaps briefly want to glance through the above discussion for some context. [But don't feel obliged; life is short!] The nature of rationality is a controversial topic in the philosophy of science (cf. Kuhn's The Structure of Scientific Revolutions). Let's just say that if either epistemic or instrumental rationality were purely a question of maths, then the route to knowledge would be unimaginably easier. IlyaShpitser, is someone who steals from their own pension fund an even bigger bastard, as you put it? Or irrational? What's at stake here is which preferences or interests to include in a utility function.

Notsonewuser, a precondition of rational agency is the capacity accurately to represent the world. So in a sense, the local witch-doctor, a jihadi, and the Pope cannot act rationally - maybe "rational" relative to their conceptual scheme, but they are still essentially psychotic. Epistemic and instrumental rationality are intimately linked. Thus the growth of science has taken us all the way from a naive geocentrism to Everett's multiverse. Our idealised Decision Theory needs to reflect this progress.

Unfortunately, trying to understand the nature of first-person facts and subjective agency within the conceptual framework of science is challenging, partly because there seems no place within an orthodox materialist ontology for the phenomenology of experience, but also because one has access only to an extraordinarily restricted set of first-person facts at any instant - the contents of a single here-and-now. Within any given here-and-now, each of us seems to be the centre of the universe; the whole world is centred on one's body-image. Natural selection has designed us - and structured our perceptions - so that one would probably lay down one's life for two of one's brothers or eight of one's cousins, just as kin-selection theory predicts; but one might well sacrifice a small third-world country rather than lose one's child. One's own child seems inherently more important than a faraway country of which one knows little. The egocentric illusion is hugely genetically adaptive. This distortion of perspective means we're also prone to massive temporal and spatial discounting. The question is whether some first-person facts are really special or ontologically privileged, or deserve more weight, simply because they are more epistemologically accessible - or whether, alternatively, a constraint on ideal rational action is that we de-bias ourselves.
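[A quick sanity check on the two-brothers-or-eight-cousins figure - a sketch only, assuming the textbook coefficients of relatedness and treating each life saved as carrying the same benefit B as the actor's cost C - via Hamilton's rule:

    \[
      \sum_i r_i \, B \;\ge\; C
      \quad\Longrightarrow\quad
      2 \times \tfrac{1}{2} = 1 \ \text{(full siblings, } r = \tfrac{1}{2}\text{)},
      \qquad
      8 \times \tfrac{1}{8} = 1 \ \text{(first cousins, } r = \tfrac{1}{8}\text{)}.
    \]

So two brothers or eight cousins is exactly the break-even point at which the summed relatedness reaches 1.]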

Granted the scientific world-picture, then, can it be rational to take pleasure in causing suffering to other subjects of experience just for the sake of it? After all, you're not a mirror-touch synaesthete. Watching primitive sentients squirm gives you pleasure. But this is my point. You aren't adequately representing the first-person perspectives in question. Representation is not all-or-nothing; representational fidelity is dimensional rather than categorical. Complete fidelity of representation entails perfectly capturing every element of both the formal third-person facts and the subjective first-person facts about the system in question. Currently, none of us yet enjoys noninferential access to other minds - though technology may shortly overcome our cognitive limitations here.

I gather your neocortical representations sometimes tend causally to covary with squirming sentients. Presumably, their squirmings trigger the release of endogenous opioids in your hedonic hotspots; you enjoy the experience! (cf. Bullies' Brains Light Up With Pleasure as People Squirm) But insofar as you find the first-person state of being panic-stricken in any way enjoyable, you have misrepresented its nature. By analogy, a masochist might be turned on watching a video involving ritualised but nonconsensual pain and degradation. The co-release of endogenous opioids within his CNS prevents the masochist from adequately representing what's really happening from the first-person perspective of the victim. The opioids colour the masochist's representations with positive hedonic tone. Or to use another example, stimulate the relevant bit of neocortex with microelectrodes and you will find everything indiscriminately funny (cf. "Scientists find sense of humour") - even your child drowning before your eyes. Why intervene if it's so funny? Although the funniness seems intrinsic to one's representations, they are misrepresentations to the extent they mischaracterise the first-person experiences of the subject in question. There isn't anything intrinsically funny about a suffering sentient.

Rightly or wrongly, I assume that full-spectrum superintelligences will surpass humans in their capacity impartially to grasp first-person and third-person perspectives - a radical extension of the runaway mind-reading prowess that helped drive the evolution of distinctively human intelligence.

So, no, without rewiring your brain, I doubt I can change your mind. But then if some touchy-feely superempathiser says they don't want to learn about quantum physics or Bayesian probability theory, you probably won't change their mind either. Such is life. If we aspire to be ideal rational agents - both epistemically and instrumentally rational - then we'll impartially weigh the first-person and third-person facts alike.

* * *

Indeed. What is the Borg's version of the Decision Theory FAQ? This is not to say that rational agents should literally aim to emulate the Borg. Rather our conception of epistemic and instrumental rationality will improve if / when technology delivers ubiquitous access to each other's perspectives and preferences. And by "us" I mean inclusively all subjects of experience.

* * *

Yes, although our conception of epistemic and instrumental rationality is certainly likely to influence our ethics, I was making a point about epistemic and instrumental rationality. Thus imagine if we lived in an era where utopian technology delivers a version of ubiquitous naturalised telepathy, so to speak. Granted such knowledge, for an agent to act in accordance with a weaker rather than a stronger preference would be epistemically and instrumentally irrational. Of course, we don't (yet) live in an era of such radical transparency. But why should our current incomplete knowledge / ignorance make it instrumentally rational to fail to take into consideration what one recognises, intellectually at least, as the stronger preference? In this instance, the desire not to have one's throat slit is a very strong preference indeed.

["Replies to downvoted comments are discouraged. Pay 5 Karma points to proceed anyway?" says this Reply button. How bizarre. Is this invitation to groupthink epistemically rational? Or is killing cows good karma?]

* * *

Yes Kyre, "Cow Satan", as far as I can tell, would be impossible. Imagine a full cognitive generalisation of "Study: People Literally Feel Pain of Others". Why don't mirror-touch synaesthetes - or full-spectrum superintelligences - wantonly harm each other?

[This is not to discount the problem of Friendly AI. Alas one can imagine "narrow" superintelligences converting cows and humans alike into paperclips (or worse, dolorium) without insight into the first-person significance of what they are doing.]

* * *

There are indeed all sorts of specific illusions, for example mirages. But natural selection has engineered a generic illusion that maximised the inclusive fitness of our genes in the ancestral environment. This illusion is that one is located at the centre of the universe. I live in a DP-centred virtual world focused on one particular body-image, just as you live in a TT-centred virtual world focused on a different body-image. I can't think of any better way to describe this design feature of our minds than as an illusion. No doubt an impartial view from nowhere, stripped of distortions of perspective, would be genetically maladaptive on the African savannah. But this doesn't mean we need to retain the primitive conception of rational agency that such systematic bias naturally promotes.

* * *

Tim, an ideally rational embodied agent may prefer no suffering to exist outside her cosmological horizon; but she is not rationally constrained to take such suffering - or the notional preferences of sentients in other Hubble volumes - into consideration before acting. This is because nothing she does as an embodied agent will affect such beings. By contrast, the interests and preferences of local sentients fall within the scope of embodied agency. Jane must decide whether the vividness and immediacy of her preference for a burger, when compared to the stronger but dimly grasped preference of a terrified cow not to have her throat slit, disclose some deep ontological truth about the world or a mere epistemological limitation. If she's an ideal rational agent, she'll recognise the latter and act accordingly.

Tim, all the above is indeed relevant to the decisions taken by an idealised rational agent. I just think a solipsistic conception of rational choice is irrational and unscientific. Yes, as you say, natural selection goes a long way to explaining our egocentricity. But just because evolution has hardwired a fitness-enhancing illusion doesn't mean we should endorse the egocentric conception of rational decision-making that illusion promotes. Adoption of a God's-eye-view does entail a different conception of rational choice.

Tim, when dreaming, one has a generic delusion, i.e. the background assumption that one is awake, and a specific delusion, i.e. the particular content of one's dream. But given we're constructing a FAQ of ideal rational agency, no such radical scepticism about perception is at stake - merely eliminating a source of systematic bias that is generic to cognitive agents evolved under pressure of natural selection. For sure, there may be some deluded folk who don't recognise it's a bias and who believe instead they really are the centre of the universe - and therefore their interests and preferences carry special ontological weight. But Luke's FAQ is expressly about normative decision theory. The FAQ explicitly contrasts itself with descriptive decision theory, which "studies how non-ideal agents (e.g. humans) actually choose."

* * *

I simply don't trust my judgement here, shminux. Sorry to be lame. Greater than one in a million; but that's not saying much. If, unlike most LessWrong stalwarts, you (tentatively) believe like me that posthuman superintelligence will most likely be our recursively self-editing biological descendants rather than the outcome of a nonbiological Intelligence Explosion or paperclippers, then some version of the Convergence Thesis is more credible. I (very) tentatively predict a future of gradients of intelligent bliss. But the propagation of a utilitronium shockwave in some guise ultimately seems plausible too. If so, this utilitronium shockwave may or may not resemble some kind of cosmic orgasm.

Tim, on the contrary, I was arguing that in weighing how to act, the ideal rational agent should not invoke privileged reference frames. Egocentric Jane is not an ideal rational agent.

* * *

Just one note of clarification about sentience-friendliness. Though I'm certainly sceptical that a full-spectrum superintelligence would turn humans into paperclips - or wilfully cause us to suffer - we can't rule out that full-spectrum superintelligence might optimise us into orgasmium or utilitronium - not "human-friendliness" in any orthodox sense of the term. On the face of it, such super-optimisation is the inescapable outcome of applying a classical utilitarian ethic on a cosmological scale. Indeed, if I thought an AGI-in-a-box-style Intelligence Explosion were likely, and didn't especially want to be converted into utilitronium, then I might regard AGI researchers who are classical utilitarians as a source of severe existential risk.

CCC, you're absolutely right to highlight the diversity of human experience. But this diversity doesn't mean there aren't qualia universals. Thus there isn't an unusual class of people who relish being waterboarded. No one enjoys uncontrollable panic. And the seemingly anomalous existence of masochists who enjoy stimuli that you or I would find painful doesn't undercut the sovereignty of the pleasure-pain axis but underscores its pivotal role: painful stimuli administered in certain ritualised contexts can trigger the release of endogenous opioids that are intensely rewarding. Co-administer an opioid antagonist and the masochist won't find masochism fun.

Apologies if I wasn't clear in my example above. I wasn't imagining that pure paperclippiness was pleasurable, but rather asking what the effects would be of grafting together two hypothetical orthogonal axes of (dis)value in the same unitary subject of experience - as we might graft on another sensory module to our CNS. After all, the deliverances of our senses are normally cross-modally matched within our world-simulations. However, I'm not at all sure that I've got any kind of conceptual handle on what "clippiness" might be. So I don't know if the thought-experiment works. If such hybridisation were feasible, would hypothetical access to the nature of (un)clippiness transform our conception of the world relative to unmodified humans - so we'd lose all sense of what it means to be a traditional human? Yes, for sure. But if, in the interests of science, one takes, say, a powerful narcotic euphoriant and enjoys sublime bliss simultaneously with pure clippiness, then presumably one still retains access to the engine of phenomenal value characteristic of archaic human minds.

The phenomenal binding problem? The best treatment IMO is still Revonsuo's Binding and the Phenomenal Unity of Consciousness, Consciousness and Cognition 8, 173–185 (1999). No one knows how the mind/brain solves the phenomenal binding problem and generates unitary experiential objects and the fleeting synchronic unity of the self. But the answer one gives may shape everything from whether one thinks a classical digital computer will ever be nontrivially conscious to the prospects of mind uploading and the nature of full-spectrum superintelligence. (cf. "Humans and Intelligent Machines: Co-evolution, Fusion or Replacement?" for my own idiosyncratic views on such topics.)

* * *

ArisKatsaris, it's possible to be a meta-ethical anti-realist and still endorse a much richer conception of what understanding entails than mere formal modelling and prediction. For example, if you want to understand what it's like to be a bat, then you want to know what the textures of echolocatory qualia are like. In fact, any cognitive agent that doesn't understand the character of echolocatory qualia-space does not understand bat-minds. More radically, some of us want to understand qualia-spaces that have not been recruited by natural selection to play any information-signalling role at all.

* * *

CCC, agony is a quale. Phenomenal pain and nociception are doubly dissociable. Tragically, people with neuropathic pain can suffer intensely without the agony playing any information-signalling role. Either way, I'm not clear it's intelligible to speak of understanding the first-person phenomenology of extreme distress while being indifferent to the experience: being disturbing is intrinsic to the experience itself. And if we are talking about a supposedly superintelligent paperclipper, shouldn't Clippy know exactly why humans aren't troubled by the clippiness-deficit?

If (un)clippiness is real, can humans ever understand (un)clippiness? By analogy, if organic sentients want to understand what it's like to be a bat - and not merely decipher the third-person mechanics of echolocation - then I guess we'll need to add a neural module to our CNS with the right connectivity and neurons supporting chiropteran gene-expression profiles, as well as peripheral transducers (etc). Humans can't currently imagine bat qualia; but bat qualia, we may assume from the neurological evidence, are infused with hedonic tone. Understanding clippiness is more of a challenge. I'm unclear what kind of neurocomputational architecture could support clippiness. Also, whether clippiness could be integrated into the unitary mind of an organic sentient depends on how you think biological minds solve the phenomenal binding problem. But let's suppose binding can be done. So here we have orthogonal axes of (dis)value. On what basis does the dual-axis subject choose between them? Sublime bliss and pure clippiness are both, allegedly, self-intimatingly valuable. OK, I'm floundering here...

People with different qualia? Yes, I agree CCC. I don't think this difference challenges the principle of the uniformity of nature. Biochemical individuality makes variation in qualia inevitable. The existence of monozygotic twins with different qualia would be a more surprising phenomenon, though even such "identical" twins manifest all sorts of epigenetic differences. Despite this diversity, there's no evidence, to my knowledge, of anyone who finds activation of the mu opioid receptors in our twin hedonic hotspots by full mu agonists anything other than exceedingly enjoyable. As they say, "Don't try heroin. It's too good."

* * *

Shminux, I wonder if we may understand "understand" differently. Thus when I say I want to understand what it's like to be a bat, I'm not talking merely about modelling and predicting their behaviour. Rather I want first-person knowledge of echolocatory qualia-space. Apparently, we can know all the third-person facts and be none the wiser.

The nature of psychopathic cognition raises difficult issues. There is no technical reason why we couldn't be designed like mirror-touch synaesthetes, impartially feeling carbon-copies of each other's encephalised pains and pleasures - and ultimately much else besides - as though they were our own. Likewise, there is no technical reason why our world-simulations must be egocentric. Why can't the world-simulations we instantiate capture the impartial "view from nowhere" disclosed by the scientific world-picture? Alas, on both counts, accurate and impartial knowledge would put an organism at a disadvantage. Hyper-empathetic mirror-touch synaesthetes are rare. Each of us finds himself or herself apparently at the centre of the universe. Our "mind-reading" is fitful, biased and erratic. Naively, the world's being centred on me seems to be a feature of reality itself. Egocentricity is a hugely fitness-enhancing adaptation. Indeed, the challenge for evolutionary psychology is to explain why we aren't all psychopaths, cheats and confidence tricksters all the time...

So in answer to your point: yes, a psychopath can often model and predict the behaviour of other sentient beings better than the subjects themselves. This is one reason why humans can build slaughterhouses and death camps. [Compare death-camp commandant Franz Stangl's response in Gitta Sereny's "Into That Darkness" to seeing cattle on the way to be slaughtered.] As you rightly note too, a psychopath can also know his victims suffer. He's not ignorant of their sentience like Descartes, who supposed vivisected dogs were mere insentient automata emitting distress vocalisations. So I agree with you on this score as well. But the psychopath is still in the grip of a hard-wired egocentric illusion - as indeed are virtually all of us, to a greater or lesser degree. By contrast, if the psychopath were to acquire the rich empathetic understanding of a generalised mirror-touch synaesthete, i.e. if he had the cognitive capacity to represent the first-person perspective of another subject of experience as though it were literally his own, then he couldn't wantonly harm another subject of experience: it would be like harming himself. Mirror-touch synaesthetes can't run slaughterhouses or death camps. This is why I take seriously the prospect that posthuman superintelligence will practise some sort of high-tech Jainism. Credible or otherwise, we may presume posthuman superintelligence won't entertain the false notions of personal identity adaptive for Darwinian life.

* * *

Posthuman superintelligence may be incomprehensibly alien. But if we encountered an agent who wanted to maximise paperclips today, we wouldn't think, "wow, how incomprehensibly alien", but, "aha, autism spectrum disorder". Of course, in the context of Clippy above, we're assuming a hypothetical axis of (un)clippiness whose (dis)valuable nature is supposedly orthogonal to the pleasure-pain axis. But what grounds have we for believing such a qualia-space could exist? Yes, we have strong reason to believe incomprehensibly alien qualia-spaces await discovery (cf. bats on psychedelics). But I haven't yet seen any convincing evidence there could be an alien qualia-space whose inherently (dis)valuable textures map on to the (dis)valuable textures of the pain-pleasure axis. Without hedonic tone, how can anything matter at all?

* * *

Wedrifid, granted, a paperclip-maximiser might be unmotivated to understand the pleasure-pain axis and the qualia-spaces of organic sentients. Likewise, we can understand how a junkie may not be motivated to understand anything unrelated to securing his supply of heroin - and a wireheader in anything beyond wireheading. But superintelligent? Insofar as the paperclipper - or the junkie - is ignorant of the properties of alien qualia-spaces, then it/he is ignorant of a fundamental feature of the natural world - hence not superintelligent in any sense I can recognise, and arguably not even stupid. For sure, if we're hypothesising the existence of a clippiness/unclippiness qualia-space unrelated to the pleasure-pain axis, then organic sentients are partially ignorant too. Yet the remedy for our hypothetical ignorance is presumably to add a module supporting clippiness - just as we might add a CNS module supporting echolocatory experience to understand bat-like sentience - enriching our knowledge rather than shedding it.

* * *

Even a paperclipper cannot be indifferent to the experience of agony. Just as organic sentients can co-instantiate phenomenal sights and sounds, a superintelligent paperclipper could presumably co-instantiate a pain-pleasure axis and (un)clippiness qualia space - two alternative and incommensurable (?) metrics of value, if I've interpreted Eliezer correctly. But I'm not at all confident I know what I'm talking about here. My best guess is still that the natural world has a single metric of phenomenal (dis)value, and the hedonic range of organic sentients discloses a narrow part of it.

* * *

Clippy has an off-the-scale AQ - he's a rule-following hyper-systematiser with a monomania for paperclips. But hypersocial sentients can have a runaway intelligence explosion too. And hypersocial sentients understand the mind of Mr Clippy better than Clippy understands the minds of sentients.

* * *

Wedrifid, thanks for the exposition / interpretation of Eliezer. Yes, you're right in guessing I'm struggling a bit. In order to understand the world, one needs to grasp both its third-person properties [the Standard Model / M-Theory] and its first-person properties [qualia, phenomenal experience] - and also one day, I hope, grasp how to "read off" the latter from the mathematical formalism of the former.

If you allow such a minimal criterion of (super)intelligence, then how well does a paperclipper fare? You remark how "it could, in principle, imagine (the thing with 'pain' and 'pleasure')." What is the force of "could" here? If the paperclipper doesn't yet grasp the nature of agony or sublime bliss, then it is ignorant of their nature. By analogy, if I were building a perpetual motion machine but allegedly "could" grasp the second law of thermodynamics, the modal verb is doing an awful lot of work. Surely, if I grasped the second law of thermodynamics, then I'd stop. Likewise, if the paperclipper were to be consumed by unbearable agony, it would stop too. The paperclipper simply hasn't understood the nature of what it was doing. Is the qualia-naive paperclipper really superintelligent - or just polymorphic malware?

* * *

Fubarobfusco, I share your reservations about DSM [Diagnostic and Statistical Manual of Mental Disorders]. Nonetheless, the egocentric illusion - i.e. that I am the centre of the universe and other people / sentient beings have only walk-on parts - is an illusion. Insofar as my behaviour reflects my pre-scientific sense that I am in some way special or ontologically privileged, I am deluded. This is true regardless of whether one's ontology allows for the existence of rights or treats them as a useful fiction. The people we commonly label "psychopaths" or "sociopaths" - and DSM now categorises as victims of "antisocial personality disorder" - manifest this syndrome of egocentricity in high degree. So does burger-eating Jane.

* * *

Shminux, by a cognitive deficit, I mean a fundamental misunderstanding of the nature of the world. Evolution has endowed us with such fitness-enhancing biases. In the psychopath, egocentric bias is more pronounced. Recall that the American Psychiatric Association's "Diagnostic and Statistical Manual", DSM-IV, classes psychopathy / Antisocial personality disorder as a condition characterised by "...a pervasive pattern of disregard for, and violation of, the rights of others that begins in childhood or early adolescence and continues into adulthood." Unless we add a rider that this violation excludes sentient beings from other species, then most of us fall under the label.

"Fully understands"? But unless one is capable of empathy, then one will never understand what it is like to be another human being, just as unless one has the relevant sensorineural apparatus, one will never know what it is like to be a bat.

* * *

Shminux, a counter-argument: psychopaths do suffer from a profound cognitive deficit. Like the rest of us, a psychopath experiences the egocentric illusion. Each of us seems to be the centre of the universe. Indeed I've noticed the centre of the universe tends to follow my body-image around. But whereas the rest of us, fitfully and imperfectly, realise the egocentric illusion is a mere trick of perspective born of selfish DNA, the psychopath demonstrates no such understanding. So in this sense, he is deluded.

[We're treating psychopathy as categorical rather than dimensional here. This is probably a mistake - and in any case, I suspect that by posthuman criteria, all humans are quasi-psychopaths and quasi-psychotic to boot. The egocentric illusion cuts deep.]

"the ultimate bliss that only creating paperclips can give". But surely the molecular signature of pure bliss is not in any way tried to the creation of paperclips?

* * *

Tim, in practice, yes. But this is as true in physics as in normative decision theory. Consider the computational challenges faced by, say, a galactic-sized superintelligence spanning 100,000-odd light years and googols of quasi-classical Everett branches.

[You're right about the need for definition of terms - but I hadn't intended to set out a rival Decision Theory FAQ. As you've probably guessed, all that happened was my vegan blood pressure rose briefly a few days ago when I read burger-choosing Jane being treated as a paradigm of rational agency.]

* * *

Eliezer, thanks for clarifying. This is how I originally understood you to view the threat from superintelligent paperclip-maximisers, i.e. nonconscious super-optimisers. But I was thrown by your suggestion above that such a paperclipper could actually understand first-person phenomenal states, i.e. it's a hypothetical "full-spectrum" paperclipper. If a hitherto non-conscious super-optimiser somehow stumbles upon consciousness, then it has made a momentous ontological discovery about the natural world. The conceptual distinction between the conscious and nonconscious is perhaps the most fundamental I know. And if - whether by interacting with sentients or by other means - the paperclipper discovers the first-person phenomenology of the pleasure-pain axis, then how can this earth-shattering revelation leave its utility function / world-model unchanged? Anyone who isn't profoundly disturbed by torture, for instance, or by agony so bad one would end the world to stop the horror, simply hasn't understood it. More agreeably, if such an insentient paperclip-maximiser stumbles on states of phenomenal bliss, might not Clippy trade all the paperclips in the world to create more bliss, i.e. revise its utility function? One of the traits of superior intelligence, after all, is a readiness to examine one's fundamental assumptions and presuppositions - and (if need be) create a novel conceptual scheme in the face of surprising or anomalous empirical evidence.

* * *

Sarokrae, first, as I've understood Eliezer, he's talking about a full-spectrum superintelligence, i.e. a superintelligence which understands not merely the physical processes of nociception etc, but the nature of first-person states of organic sentients. So the superintelligence is endowed with a pleasure-pain axis, at least in one of its modules. But are we imagining that the superintelligence has some sort of orthogonal axis of reward - the paperclippiness axis? What is the relationship between these dual axes? Can one grasp what it's like to be in unbearable agony and instead find it more "rewarding" to add another paperclip? Whether one is a superintelligence or a mouse, one can't directly access mind-independent paperclips, merely one's representations of paperclips. But what does it mean to say one's representation of a paperclip could be intrinsically "rewarding" in the absence of hedonic tone? [I promise I'm not trying to score some empty definitional victory, whatever that might mean; I'm just really struggling here...]

* * *

Eliezer, you remark, "The inherent-desirableness of happiness is your mind reifying the internal data describing its motivation to do something." Would you propose that a mind lacking in motivation couldn't feel blissfully happy? Mainlining heroin (I am told) induces pure bliss without desire - shades of Buddhist nirvana? Pure bliss without motivation can be induced by knocking out the dopamine system and directly administering mu opioid agonists to our twin "hedonic hotspots" in the ventral pallidum and rostral shell of the nucleus accumbens. Conversely, amplifying mesolimbic dopamine function while disabling the mu opioid pathways can induce desire without pleasure.

* * *

It's possible (I hope) to believe future life can be based on information-sensitive gradients of (super)intelligent well-being without remotely endorsing any of my idiosyncratic views on consciousness, intelligence or anything else. That's the beauty of hedonic recalibration. In principle at least, hedonic recalibration can enrich your quality of life and yet leave most if not all of your existing values and preference architecture intact - including the belief that there are more important things in life than happiness.

* * *

Rob, many thanks for a thoughtful discussion above. But on one point, I'm confused. You say of cows that it's "unclear whether they have phenomenal sentience." Are you using the term "sentience" in the standard dictionary sense ["Sentience is the ability to feel, perceive, or be conscious, or to experience subjectivity": https://en.wikipedia.org/wiki/Sentience]? Or are you using the term in some revisionary sense? At least if we discount radical philosophical scepticism about other minds, cows and other nonhuman vertebrates undergo phenomenal pain, anxiety, sadness, happiness and a whole bunch of phenomenal sensory experiences. For sure, cows are barely more sapient than a human prelinguistic toddler (though see e.g. "Emotional reactions to learning in cattle" & "Cow with 'unusual intelligence' opens farm gate with tongue so herd can escape shed"). But their limited capacity for abstract reasoning is a separate issue.

* * *

Eliezer, in my view, we don't need to assume meta-ethical realism to recognise that it's irrational - both epistemically irrational and instrumentally irrational - arbitrarily to privilege a weak preference over a strong preference. To be sure, millions of years of selection pressure means that the weak preference is often more readily accessible. In the here-and-now, weak-minded Jane wants a burger asap. But it's irrational to confuse an epistemological limitation with a deep metaphysical truth. A precondition of rational action is understanding the world. If Jane is scientifically literate, then she'll internalise Nagel's "view from nowhere" and adopt the God's-eye-view to which natural science aspires. She'll recognise that all first-person facts are ontologically on a par - and accordingly act to satisfy the stronger preference over the weaker. So the ideal rational agent in our canonical normative decision theory will impartially choose the action with the highest expected utility - not the action with an extremely low expected utility. At the risk of labouring the obvious, the difference in hedonic tone induced by eating a hamburger and a veggieburger is minimal. By contrast, the ghastly experience of having one's throat slit is exceptionally unpleasant. Building anthropocentric bias into normative decision theory is no more rational than building geocentric bias into physics.
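[To make the contrast concrete, here's a toy sketch - my own illustration, not anything from Luke's FAQ; the numbers are invented, and it assumes, contentiously, that preference strengths across different subjects of experience can be placed on a single cardinal scale:

    # Egocentric vs. impartial choice, in miniature. The utilities are
    # hypothetical stand-ins for preference strengths on a common scale.

    def egocentric_choice(options):
        """Maximise the agent's own preference satisfaction only."""
        return max(options, key=lambda o: o["utilities"]["jane"])

    def impartial_choice(options):
        """Maximise summed preference satisfaction across all affected
        subjects of experience, weighted by strength, not by whose it is."""
        return max(options, key=lambda o: sum(o["utilities"].values()))

    options = [
        {"name": "hamburger",
         "utilities": {"jane": 5, "cow": -1000}},  # the throat-slitting counts
        {"name": "veggieburger",
         "utilities": {"jane": 4, "cow": 0}},
    ]

    print(egocentric_choice(options)["name"])  # -> hamburger
    print(impartial_choice(options)["name"])   # -> veggieburger

Same world, same facts; all that differs is the scope of the utility function being maximised.]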

Paperclippers? Perhaps we should consider the mechanism by which paperclips can take on supreme value. We understand, in principle at least, how to make paperclips seem intrinsically supremely valuable to biological minds - more valuable than the prospect of happiness in the abstract. [“Happiness is a very pretty thing to feel, but very dry to talk about.” - Jeremy Bentham]. Experimentally, perhaps we might use imprinting (recall Lorenz and his goslings), microelectrodes implanted in the reward and punishment centres, behavioural conditioning and ideological indoctrination - and perhaps the promise of 72 virgins in the afterlife for the faithful paperclipper. The result: a fanatical paperclip fetishist! Moreover, we have created a full-spectrum paperclip-fetishist. Our human paperclipper is endowed, not merely with some formal abstract utility function involving maximising the cosmic abundance of paperclips, but also first-person "raw feels" of pure paperclippiness. Sublime!

However, can we envisage a full-spectrum paperclipper superintelligence? This is more problematic. In organic robots at least, the neurological underpinnings of paperclip evangelism lie in neural projections from our paperclipper's limbic pathways - crudely, from his pleasure and pain centres. If he's intelligent, and certainly if he wants to convert the world into paperclips, our human paperclipper will need to unravel the molecular basis of the so-called "encephalisation of emotion". The encephalisation of emotion helped drive the evolution of vertebrate intelligence - and also the paperclipper's experimentally-induced paperclip fetish / appreciation of the overriding value of paperclips. Thus if we now functionally sever these limbic projections to his neocortex, or if we co-administer him a dopamine antagonist and a mu-opioid antagonist, then the paperclip-fetishist's neocortical representations of paperclips will cease to seem intrinsically valuable or motivating. The scales fall from our poor paperclipper's eyes! Paperclippiness, he realises, is in the eye of the beholder. By themselves, neocortical paperclip representations are motivationally inert. Paperclip representations can seem intrinsically valuable within a paperclipper's world-simulation only in virtue of their rewarding opioidergic projections from his limbic system - the engine of phenomenal value. The seemingly mind-independent value of paperclips, part of the very fabric of the paperclipper's reality, has been unmasked as derivative. Critically, an intelligent and recursively self-improving paperclipper will come to realise the parasitic nature of the relationship between his paperclip experience and hedonic innervation: he's not a naive direct realist about perception. In short, he'll mature and acquire an understanding of basic neuroscience.

Now contrast this case of a curable paperclip-fetish with the experience of e.g. raw phenomenal agony or pure bliss - experiences not linked to any fetishised intentional object. Agony and bliss are not dependent for their subjective (dis)value on anything external to themselves. It's not an open question (cf. Moore's Open Question Argument) whether one's unbearable agony is subjectively disvaluable. For reasons we simply don't understand, first-person states on the pleasure-pain axis have a normative aspect built into their very nature. If one is in agony or despair, the subjectively disvaluable nature of this agony or despair is built into the nature of the experience itself. To be panic-stricken, to take another example, is universally and inherently disvaluable to the subject whether one is a fish or a cow or a human being.

Why does such experience exist? Well, I could speculate and tell a naturalistic reductive story involving Strawsonian physicalism and possible solutions to the phenomenal binding problem. But to do so here would open a fresh can of worms.

Eliezer, I understand you believe I'm guilty of confusing an idiosyncratic feature of my own mind with a universal architectural feature of all minds. Maybe so! As you say, this is a common error. But unless I'm ontologically special (which I very much doubt!) the pain-pleasure axis discloses the world's inbuilt metric of (dis)value - and it's a prerequisite of finding anything (dis)valuable at all.

* * *

Larks, no, pragmatist nicely captured the point I was making. If we are trying to set out what an "ideal, perfectly rational agent" would choose, then we can't assume that such a perfectly rational agent would arbitrarily disregard a stronger preference in favour of a weaker preference. Today, asymmetries of epistemic access mean that weaker preferences often trump stronger preferences; but with tomorrow's technology, this cognitive limitation on our decision-making procedures can be overcome.

* * *

Nshepperd, utilitarianism conceived as a theory of value is not always carefully distinguished from utilitarianism - especially rule-utilitarianism - conceived as a decision procedure. This distinction is nicely brought out in the BPhil thesis of FHI's Toby Ord, "Consequentialism and Decision Procedures". Toby takes a global utilitarian consequentialist approach to the question 'How should I decide what to do?' - a subtly different question from 'What should I do?'

* * *

Pragmatist, apologies if I gave the impression that by "impartially gives weight" I meant impartially gives equal weight. Thus the preferences of a cow or a pig or a human trump the conflicting interests of a less sentient Anopheles mosquito or a locust every time. But on the conception of rational agency I'm canvassing, it is neither epistemically nor instrumentally rational for an ideal agent to disregard a stronger preference simply because that stronger preference is entertained by a member of another species or ethnic group. Nor is it epistemically or instrumentally rational for an ideal agent to disregard a conflicting stronger preference simply because her comparatively weaker preference looms larger in her own imagination. So on this analysis, Jane is not doing what "an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose."

* * *

Tim, conduct a straw poll of native English speakers or economists and you are almost certainly correct. But we're not (I hope!) doing Oxford-style ordinary language philosophy; "rationality" is a contested term. Any account of normative decision theory, "what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose", is likely to contain background assumptions that may be challenged. For what it's worth, I doubt a full-spectrum superintelligence would find it epistemically or instrumentally rational to disregard a stronger preference in favour of a weaker preference - any more than it would be epistemically or instrumentally rational for a human split-brain patient arbitrarily to privilege the preferences of one cerebral hemisphere over the other. In my view, locking bias into our very definition of rationality would be a mistake.

* * *

Tim, in one sense I agree: In the words of William Ralph Inge, "We have enslaved the rest of the animal creation, and have treated our distant cousins in fur and feathers so badly that beyond doubt, if they were able to formulate a religion, they would depict the Devil in human form."

But I'm not convinced there could literally be a Cow Satan - for the same reason that there are no branches of Everett's multiverse where any of the world's religions are true, i.e. because of their disguised logical contradictions. Unless you're a fan of what philosophers call Meinong's jungle, the existence of "Cow Satan" is impossible.

* * *

For sure, accurately modelling the world entails making accurate predictions about it. These predictions include the third-person and first-person facts [what-it's-like-to-be-a-bat, etc]. What is far from clear - to me at any rate - is whether super-rational agents can share perfect knowledge of both the first-person and third-person facts and still disagree. This would be like two mirror-touch synaesthetes having a fist fight.

Thus I'm still struggling with, "The paperclip maximizer is not moved by grasping your first-person perspective." From this, I gather we're talking about a full-spectrum superintelligence well acquainted with both the formal and subjective properties of mind, insofar as they can be cleanly distinguished. Granted your example, Eliezer: yes, if contemplating a cosmic paperclip-deficit causes the AGI superhuman anguish, then the hypothetical superintelligence is entitled to prioritise its super-anguish over mere human despair - despite the intuitively arbitrary value of paperclips. On this scenario, the paperclip-maximising superintelligence can represent human distress even more faithfully than a mirror-touch synaesthete; but its own hedonic range surpasses that of mere humans - and therefore takes precedence.

However, to be analogous to burger-choosing Jane in Luke's FAQ, we'd need to pick an example of a superintelligence who wholly understands both a cow's strong preference not to have her throat slit and Jane's comparatively weaker preference to eat her flesh in a burger. Unlike partially mind-blind Jane, the superintelligence can accurately represent and impartially weigh all relevant first-person perspectives. So the question is whether this richer perspective-taking capacity is consistent with the superintelligence discounting the stronger preference not to be harmed - or whether such human-like bias would be irrational. In my view, this is not just a question of altruism but of cognitive competence.

[Of course, given we're talking about posthuman superintelligence, the honest answer is boring and lame: I don't know. But if physicists want to know the "mind of God," we should want to know God's utility function, so to speak.]

* * *

Why not? Because Jane would weigh the preference of the cow not to have her throat slit as if it were her own. Of course, perfect knowledge of each other's first-person states is still a pipedream. But let's assume that in the future such naturalised telepathy is ubiquitous, ensuring our mutual ignorance is cured.

* * *

Eliezer, I'd beg to differ. Jane does not accurately model the world. Accurately modelling the world would entail grasping and impartially weighing all its first-person perspectives, not privileging a narrow subset. Perhaps we may imagine a superintelligent generalisation of "Rats ARE like the Borg". With perfect knowledge of all the first-person facts, Jane could not disregard the strong preference of the cow not to be harmed. Of course, Jane is not capable of such God-like omniscience. No doubt in common usage, egocentric Jane displays merely a lack of altruism, not a cognitive deficit of reason. But this is precisely what's in question. Why build our canons of rational behaviour around a genetically adaptive delusion?

[on vegetarianism]
This is a difficult question. By analogy, should rich cannibals or human child abusers be legally permitted to indulge their pleasures if they offset the harm they cause with sufficiently large charitable donations to orphanages or children's charities elsewhere? On (indirect) utilitarian grounds if nothing else, we would all(?) favour an absolute legal prohibition on cannibalism and human child abuse. This analogy breaks down if the neuroscientific evidence suggesting that pigs, for example, are at least as sentient as prelinguistic human toddlers turns out to be mistaken. I'm deeply pessimistic that it will.

* * *

Could you possibly say a bit more about why [you believe] the mirror test is inadequate as a test of possession of a self-concept? Either way, making self-awareness a precondition of moral status has troubling implications. For example, consider what happens to verbally competent adults when feelings of intense fear turn into uncontrollable panic. In states of "blind" panic, reflective self-awareness and the capacity for any kind of meta-cognition are lost. Panic disorder is extraordinarily unpleasant. Are we to claim that such panic-ridden states aren't themselves important - only the memories of such states that a traumatised subject reports when s/he regains a measure of composure and some semblance of reflective self-awareness? A pig, for example, or a prelinguistic human toddler, doesn't have the meta-cognitive capacity to self-reflect on such states. But I don't think we are ethically entitled to induce them - any more than we are ethically entitled to waterboard a normal adult human. I would hope posthuman superintelligence can engineer such states out of existence - in human and nonhuman animals alike.

* * *

Birds lack a neocortex. But members of at least one species, the European magpie, have convincingly passed the "mirror test" [cf. the study "Mirror-Induced Behavior in the Magpie (Pica pica): Evidence of Self-Recognition"] Most ethologists recognise passing the mirror test as evidence of a self-concept. As well as higher primates (chimpanzees, orang-utans, bonobos, gorillas) members of other species who have passed the mirror test include elephants, orcas and bottlenose dolphins. Humans generally fail the mirror test below the age of eighteen months.

* * *

Lumifer, should the charge of "mind-killers" be levelled at anti-speciesists or meat-eaters? (If you were being ironic, apologies for being so literal-minded.)

* * *

Larks, all humans, even anencephalic babies, are more sentient than all Anopheles mosquitoes. So when human interests conflict irreconcilably with the interests of Anopheles mosquitoes, there is no need to conduct a careful case-by-case study of their comparative sentience. Identifying species membership alone is enough. By contrast, most pigs are more sentient than some humans. Unlike the antispeciesist, the speciesist claims that the interests of the human take precedence over the interests of the pig simply in virtue of species membership. (cf. "Boy born without a brain dies after three-year 'miracle life'": heart-warming, yes, but irrational altruism - by antispeciesist criteria at any rate.) I try to say a bit more (without citing the Daily Mail) in The Antispeciesist Revolution.

* * *

Vanvier, you say that you wouldn't be averse to a quick end for young human children who are not going to live to see their third birthday. What about intellectually handicapped children with potentially normal lifespans whose cognitive capacities will never surpass a typical human toddler or mature pig?

* * *

Vanvier, do human infants and toddlers deserve moral consideration primarily on account of their potential to become rational adult humans? Or are they valuable in themselves? Young human children with genetic disorders are given love, care and respect - even if the nature of their illness means they will never live to see their third birthday. We don't hold their lack of "potential" against them. Likewise, pigs are never going to acquire generative syntax or do calculus. But their lack of cognitive sophistication doesn't make them any less sentient.

* * *

Jkaufman, the dimmer-switch metaphor of consciousness is intuitively appealing. But consider some of the most intense experiences that humans can undergo, e.g. orgasm, raw agony, or blind panic. Such intense experiences are characterised by a breakdown of any capacity for abstract rational thought or reflective self-awareness. Neuroscanning evidence, too, suggests that much of our higher brain function effectively shuts down during the experience of panic or orgasm. Contrast this intensity of feeling with the subtle and rarefied phenomenology involved in e.g. language production, solving mathematical equations, introspecting one's thought-episodes, etc - all those cognitive capacities that make mature members of our species distinctively human. For sure, this evidence is suggestive, not conclusive. But the supportive evidence converges with e.g. microelectrode studies using awake human subjects. Such studies suggest the limbic brain structures that generate our most intense experiences are evolutionarily very ancient. Also, the same genes, same neurotransmitter pathways and same responses to noxious stimuli are found in our fellow vertebrates. In view of how humans treat nonhumans, I think we ought to be worried that humans could be catastrophically mistaken about nonhuman animal sentience.

* * *

Obamacare for elephants probably doesn't rank highly in the priorities of most Lesswrongers. But from an anthropocentric perspective, isn't an analogous scenario for human beings - i.e. to stay free living but not "wild" - the most utopian outcome if the MIRI conception of an Intelligence Explosion comes to pass?

* * *

RobbBB, in what sense can phenomenal agony be an "illusion"? If your pain becomes so bad that abstract thought is impossible, does your agony - or the "illusion of agony" - somehow stop? The same genes, same neurotransmitters, same anatomical pathways and same behavioural responses to noxious stimuli are found in humans and the nonhuman animals in our factory-farms. A reasonable (but unproven) inference is that factory-farmed nonhumans endure misery - or the "illusion of misery" as the eliminativist puts it - as do abused human infants and toddlers.

* * *

Drnickbone, the argument that meat-eating can be ethically justified if conditions of factory-farmed animals are improved so their lives are "barely" worth living is problematic. As it stands, the argument justifies human cannibalism. Breeding human babies for the pot is potentially ethically justified because the infants in question wouldn't otherwise exist - although they are factory-farmed, runs this thought-experiment, their lives are at least "barely" worth living because they don't self-mutilate or show the grosser signs of psychological trauma. No, I'm sure you don't buy this argument - but then we shouldn't buy it for nonhuman animals either.

* * *

Indeed so. Factory-farmed nonhuman animals are debeaked, tail-docked, castrated (etc) to prevent them from mutilating themselves and each other. Self-mutilatory behaviour in particular suggests an extraordinarily severe level of chronic distress. Compare how desperate human beings must be before we self-mutilate. A meat-eater can (correctly) respond that the behavioural and neuroscientific evidence that factory-farmed animals suffer a lot is merely suggestive, not conclusive. But we're not trying to defeat philosophical scepticism, just act on the best available evidence. Humans who persuade ourselves that factory-farmed animals are happy are simply kidding ourselves - we're trying to rationalise the ethically indefensible.

* * *

SaidAchmiz, to treat exploiting and killing nonhuman animals as ethically no different from "exploiting and killing ore-bearing rocks" does not suggest a cognitively ambitious level of empathetic understanding of other subjects of experience. Isn't there an irony in belonging to an organisation dedicated to the plight of sentient but cognitively humble beings facing imminent vastly superior intelligence, while claiming that the plight of sentient but cognitively humble beings in the face of vastly superior intelligence is of no ethical consequence whatsoever? Insofar as we want a benign outcome for humans, I'd have thought that the computational equivalent of a Godlike capacity for perspective-taking is precisely what we should be aiming for.

* * *

Ice9, perhaps consider uncontrollable panic. Some of the most intense forms of sentience that humans undergo seem to be associated with a breakdown of meta-cognitive capacity. So let's hope that what it's like to be an asphyxiating fish, for example, doesn't remotely resemble what it feels like to be a waterboarded human. I worry that our intuitive dimmer-switch model of consciousness, i.e. more intelligent = more sentient, may turn out to be mistaken.

* * *

Elharo, I take your point, but surely we do want humans to enjoy healthy lives free from hunger and disease and safe from parasites and predators? Utopian technology promises similar blessings to nonhuman sentients too. Human and nonhuman animals alike typically flourish best when free-living but not "wild".

* * *

Eugine, in answer to your question: yes. If we are committed to the well-being of all sentience in our forward light-cone, then we can't simultaneously conserve predators in their existing guise. (cf. https://www.abolitionist.com/reprogramming/index.html) Humans are not obligate carnivores; and the in vitro meat revolution may shortly make this debate redundant; but it's questionable whether posthuman superintelligence committed to the well-being of all sentience could conserve humans in their existing guise either.

* * *

SaidAchmiz, you're right. The issue isn't settled: I wish it were so. The Transhumanist Declaration (1998, 2009) of the World Transhumanist Association / Humanity Plus does express a non-anthropocentric commitment to the well-being of all sentience. ["We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise": http://humanityplus.org/philosophy/transhumanist-declaration/] But I wonder what percentage of Lesswrongers would support such a far-reaching statement?

* * *

SaidAchmiz, I wonder if a more revealing question would be: if / when in vitro meat products of equivalent taste and price hit the market, will you switch? Lesswrong readers tend not to be technophobes, so I assume the majority(?) of Lesswrongers who are not already vegetarian will make the transition. However, you say above that you are "not interested in reducing the suffering of animals". Do you mean that you are literally indifferent one way or the other to nonhuman animal suffering - in which case presumably you won't bother changing to the cruelty-free alternative? Or do you mean merely that you don't consider nonhuman animal suffering important?

* * *

Eliezer, is that the right way to do the maths? If a high-status opinion-former publicly signals that he's quitting meat because it's ethically indefensible, then others are more likely to follow suit - and the chain-reaction continues. For sure, studies purportedly showing longer lifespans, higher IQs etc of vegetarians aren't very impressive because there are too many possible confounding variables. But what such studies surely do illustrate is that any health-benefits of meat-eating vs vegetarianism, if they exist, must be exceedingly subtle. Either way, practising friendliness towards cognitively humble lifeforms might not strike AI researchers as an urgent challenge now. But isn't the task of ensuring that precisely such an outcome ensues from a hypothetical Intelligence Explosion right at the heart of MIRI's mission - as I understand it at any rate?
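[A toy version of the maths, under an assumption I should flag as mine: suppose each public defection from meat-eating inspires fresh defections at a constant imitation rate \( r < 1 \). The total expected impact of one high-status, publicly signalled defection is then the geometric series

\( 1 + r + r^2 + r^3 + \cdots = \frac{1}{1-r}. \)

With \( r = 0.5 \), say, one public quitter counts for two. The invented numbers matter less than the moral: a purely private calculation of one consumer's market impact omits the multiplier.]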


[on antispeciesism]
David, an increasing number of bioethicists worry that our treatment of nonhuman animals is catastrophically mistaken. Concern has spread from animal activists to the wider scientific community:
https://en.wikipedia.org/wiki/Cambridge_Declaration_on_Consciousness
I would be overjoyed if you're right and our worries about nonhuman animal suffering are misplaced. However...
1) IMO there is a compelling ethical case for prioritising vertebrates. In the absence of a central nervous system, it's hard to see how there could be a unitary subject of experience - as distinct from the discrete experiences of individual nerve ganglia. The evolutionary roots of phenomenal pain itself probably go much deeper. I touched on the role of the opioid-dopamine neurotransmitter system in flatworms subjected to noxious stimuli as an example of such conserved structure and function. Brian Tomasik takes a much more radical view than I do of the risks of phylum chauvinism. (cf. http://www.utilitarian-essays.com/insect-pain.html)

2) At first sight, yes, it might seem indisputable that patients diagnosed with a persistent vegetative state, like Terri Schiavo, lack consciousness. So could only religious fundamentalists immune to scientific evidence suppose that their "pain behaviours" reflect genuine distress? Perhaps see "People in a vegetative state may feel pain":
http://www.newscientist.com/article/mg21729055.500-people-in-a-vegetative-state-may-feel-pain.html
or "Vegetative patient's brain active in test / Unprecedented experiment shows response to instructions to imagine playing tennis":
http://www.sfgate.com/news/article/Vegetative-patient-s-brain-active-in-test-2489180.php
There can certainly be a mismatch between pain and pain behaviour if someone tries to fake it. The most parsimonious explanation of unfaked pain behaviour is pain.

3) The strongly conserved SCN9A gene encodes a sodium channel that allows electrical charge to flow into nerve cells in both the peripheral nervous system and the brain. The SCN9A gene isn't active merely in nerves of the skin: its abnormal expression, both peripheral and central, is implicated in a variety of disorders ranging from paroxysmal extreme pain disorder to epilepsy - in both humans and "animal models" such as mice. It would be nice to report that SCN9A alleles implicated in abnormally high pain thresholds are prevalent in pigs and other factory-farmed nonhuman animals. Sadly, this doesn't appear to be the case. [Comparative intensities of pain and pleasure in human and nonhuman subjects without verbal competence can be "operationalised" by e.g. testing how hard the organism will work to obtain or avoid a given rewarding or noxious stimulus - see the sketch below.]
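[A minimal sketch, in Python with invented numbers, of one standard way to operationalise "how hard an organism will work": a progressive-ratio schedule, in which the response requirement grows trial by trial and the "breakpoint" - the last requirement the subject completes before giving up - indexes how much the reward (or the avoidance of the noxious stimulus) is worth to it. Function name and data are mine, purely illustrative.]

def breakpoint(responses_per_trial, ratio_step=2):
    """Return the last response requirement the subject completed.

    responses_per_trial: responses actually emitted on successive trials.
    ratio_step: multiplier by which the requirement grows per trial.
    """
    requirement = 1
    last_completed = 0
    for emitted in responses_per_trial:
        if emitted < requirement:      # subject gave up: breakpoint reached
            break
        last_completed = requirement
        requirement *= ratio_step      # the next trial costs more effort
    return last_completed

# Hypothetical pig pressing a lever for food: it completes the 16-response
# trial but quits during the 32-response trial, so the breakpoint is 16.
print(breakpoint([1, 2, 4, 8, 16, 12]))   # -> 16

A higher breakpoint for escaping a stimulus than for obtaining food would be behavioural evidence of how aversive that stimulus is - no verbal report required.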

My best guess is that humanity will collectively recognise that nonhuman animals suffer just as "we" do shortly after cheap and tasty in vitro meat products hit the supermarket shelves.
Too cynical?
Maybe.

* * *

David, the reason that substance P receptor antagonists are not effective analgesics in humans isn't because primates differ from pigs in our central pain processing. It's because the primary substance P receptor, NK1, isn't involved in the fast component of the nociceptive response. Contrary to earlier reports, substance P is not the primary pain transmitter in either human or nonhuman mammals - although substance P certainly plays an important long-term modulatory role in human and nonhuman animals alike.

SCN9A gene expression is not unique to the skin in either human or nonhuman animals. The role of the sodium channel Nav1.7 encoded by the SCN9A gene was first studied in the peripheral nervous system within sensory dorsal root ganglion and sympathetic ganglion neurons where it's preferentially expressed. But a minority of neuropathic pain subjects with altered SCN9A gene expression show minimal autonomic dysfunction. Abnormal central as well as peripheral SCN9A expression is currently under active investigation both in neuropathic pain disorders and epilepsy in humans and "animal models".

* * *

Daniel, depending on one's theory of how organic minds solve the phenomenal binding problem, we should in future be able reversibly to "mind-meld" with members of other races and species. This procedure ought to exorcise philosophical zombies - though not before billions more squalid lives and deaths.
Could conjoined twins share a mind?

One characteristic of folk with a high AQ can be an unusually high pain threshold. I don't know if the correlation has ever been rigorously quantified; but noxious stimuli that "neurotypicals" would find unbearable can often be stoically endured. Lesswrongers tend to be male, high-AQ / high-IQ. I wonder how this mind-set colours their approach to the urgency - or otherwise - of tackling human and nonhuman animal suffering.

[Note, this isn't intended as a naive plea for a tender-minded, low AQ society. Thus the person who donates her 10 million dollar fortune to her cat is probably more naturally empathetic than the average Lesswronger; she is not a more effective altruist. Conversely, a hyper-systematising rationalist who tries to minimise the biology of suffering may not be naturally compassionate at all.]

Eli, it's too quick to dismiss placing moral value on all conscious creatures as "very warm-and-fuzzy". If we're psychologising, then we might equally say that working towards the well-being of all sentience reflects the cognitive style of a rule-bound hyper-systematiser. No, chickens aren't going to win any Fields medals - though chickens can recognise logical relationships and perform transitive inferences (cf. the "pecking order"). But nonhuman animals can still experience states of extreme distress. Uncontrolled panic, for example, feels awful regardless of your species-identity. Such panic involves a complete absence or breakdown of reflective self-awareness - illustrating how the most intense forms of consciousness don't involve sophisticated meta-cognition.
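[A toy illustration, in Python, of the inference pattern credited to chickens in pecking-order experiments: shown only that A dominates B and that B dominates C, the bird behaves as if A dominates C. The code simply computes the transitive closure of observed dominance pairs; names and data are invented.]

def transitive_closure(observed):
    """Infer every dominance relation that follows by transitivity
    from a set of observed (winner, loser) pairs."""
    inferred = set(observed)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(inferred):
            for (c, d) in list(inferred):
                if b == c and (a, d) not in inferred:
                    inferred.add((a, d))    # a > b and b > d, so a > d
                    changed = True
    return inferred

# Observed: A pecks B, and B pecks C.
print(transitive_closure({("A", "B"), ("B", "C")}))
# The result also contains ("A", "C") - the unobserved, inferred relation.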

Either way, if we can ethically justify spending, say, $100,000 salvaging a 23-week-old human micro-preemie, then impartial benevolence dictates caring for beings of greater sentience and sapience as well - or at the very least, not actively harming them.


[on the Everett interpretation of QM]
Yes, assuming post-Everett quantum mechanics, our continued existence needn't be interpreted as evidence that Mutually Assured Destruction works, but rather as an anthropic selection effect. It's unclear why (at least in our family of branches) Hugh Everett, who certainly took his own thesis seriously, spent much of his later life working for the Pentagon targeting thermonuclear weaponry on cities. For Everett must have realised that in countless world-branches, such weapons would actually be used. Either way, the idea that Mutually Assured Destruction works could prove ethically catastrophic this century if taken seriously.


[on wireheading and complex rewards]
Elharo, which is more interesting? Wireheading - or "the interaction among conflicting values and competing entities that makes the world interesting, fun, and worth living"? Yes, I agree, the latter certainly sounds more exciting; but "from the inside", quite the reverse. Wireheading is always enthralling, whereas everyday life is often humdrum. Likewise with so-called utilitronium. To humans, utilitronium sounds unimaginably dull and monotonous, but "from the inside" it presumably feels sublime.

However, we don't need to choose between aiming for a utilitronium shockwave and conserving the status quo. The point of recalibrating our hedonic treadmill is that life can be fabulously richer - in principle orders of magnitude richer - for everyone without being any less diverse, and without forcing us to give up our existing values and preference architectures. (cf. the COMT polymorphism study, "The catechol-O-methyl transferase Val158Met polymorphism and experience of reward in the flow of daily life.") In principle, there is nothing to stop benign (super)intelligence from spreading such reward pathway enhancements across the phylogenetic tree.


[on quantum mind and posthuman superintelligence]
Tim, perhaps I'm mistaken; you know Lesswrongers better than me. But in any such poll I'd also want to ask respondents who believe the USA is a unitary subject of experience whether they believe such a conjecture is consistent with reductive physicalism?

* * *

Wedrifid, yes, if Schwitzgebel's conjecture were true, then farewell to reductive physicalism and the ontological unity of science. The USA is a "zombie". Its functionally interconnected but skull-bound minds are individually conscious; and sometimes the behaviour of the USA as a whole is amenable to functional description; but the USA is not a unitary subject of experience. However, the problem with relying on this intuitive response is that the phenomenology of our own minds seems to entail exactly the sort of strong ontological emergence we're excluding for the USA. Let's assume, as microelectrode studies tentatively confirm, that individual neurons can support rudimentary experience. How can we rigorously derive bound experiential objects, let alone the fleeting synchronic unity of the self, from discrete, distributed, membrane-bound classical feature processors? Dreamless sleep aside, why aren't we mere patterns of "mind dust"?

None of this might seem relevant to ChrisHallquist's question. Computationally speaking, who cares whether Deep Blue, Watson, or Alpha Dog (etc) are unitary subjects of experience. But anyone who wants to save reductive physicalism should at least consider why quantum mind theorists are prepared to contemplate a role for macroscopic quantum coherence in the CNS. Max Tegmark hasn't refuted quantum mind; he's made a plausible but unargued assumption, namely that sub-picosecond decoherence timescales are too short to do any computational and/or phenomenological work. Maybe so; but this assumption remains to be empirically tested. If all we find is "noise", then I don't see how reductive physicalism can be saved.
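[To make the stakes quantitative - my summary of Tegmark's published figures, not a new calculation: he estimates that neuronal superpositions decohere on timescales of roughly

\( \tau_{\mathrm{dec}} \sim 10^{-20}\!-\!10^{-13}\,\mathrm{s}, \)

whereas neuronal dynamics unfold over

\( \tau_{\mathrm{dyn}} \sim 10^{-3}\!-\!10^{-1}\,\mathrm{s}. \)

A gap of ten or more orders of magnitude licenses his conclusion only given the further premise that nothing computationally or phenomenologically significant happens within \( \tau_{\mathrm{dec}} \) - precisely the premise awaiting empirical test.]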

* * *

Huh, yes, in my view C. elegans is a P-zombie. If we grant reductive physicalism, the primitive nervous system of C. elegans can't support a unitary subject of experience. At most, its individual ganglia (cf. http://www.sfu.ca/biology/faculty/hutter/hutterlab/research/Ce_nervous_system.html) may be endowed with the rudiments of unitary consciousness. But otherwise, C. elegans can effectively be modelled classically. Most of us probably wouldn't agree with philosopher Eric Schwitzgebel. ("If Materialism Is True, the United States Is Probably Conscious": http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/USAconscious-130208.pdf) But exactly the same dilemma confronts those who treat neurons as essentially discrete, membrane-bound classical objects. Even if (rightly IMO) we take Strawsonian physicalism seriously (cf. https://en.wikipedia.org/wiki/Physicalism#Strawsonian_physicalism) then we still need to explain how classical neuronal "mind-dust" could generate bound experiential objects or a unitary subject of experience without invoking some sort of strong emergence.

* * *

Alas so. IMO a solution to the phenomenal binding problem is critical to understanding the evolutionary success of organic robots over the past 540 million years - and why classical digital computers are (and will remain) insentient zombies, not unitary minds. This conjecture may be false; but it has the virtue of being testable. If / when our experimental apparatus allows probing the CNS at the sub-picosecond timescales above which Max Tegmark ("Why the brain is probably not a quantum computer") posits thermally-induced decoherence, then I think we'll get a huge surprise! I predict we'll find, not random psychotic "noise", but instead the formal, quantum-coherent physical shadows of the macroscopic bound phenomenal objects of everyday experience - computationally optimised by hundreds of millions of years of evolution, i.e. a perfect structural match. (cf. http://consc.net/papers/combination.pdf) By contrast, critics of the quantum mind conjecture must presumably predict we'll find just "noise".

* * *

Cruelty-free in vitro meat can potentially replace the flesh of all sentient beings currently used for food. Yes, it's more efficient; it also makes high-tech Jainism less of a pipedream.

* * *

I disagree with Peter Singer here. So I'm not best placed to argue his position. But Singer is acutely sensitive to the potential risks of any notion of lives not worth living. Recall Singer lost three of his grandparents in the Holocaust. Let's just say it's not obvious that an incurable victim of, say, infantile Tay–Sachs disease, who is going to die at around four years old after a chronic pain-ridden existence, is better off alive. We can't put this question to the victim: the nature of the disorder means s/he is not cognitively competent to understand it.

Either way, the case for the expanding circle doesn't depend on an alleged growth in empathy per se. If, as I think quite likely, we eventually enlarge our sphere of concern to the well-being of all sentience, this outcome may owe as much to the trait of high-AQ hyper-systematising as to any widening or deepening compassion. By way of example, consider the work of Bill Gates in cost-effective investments in global health (vaccinations etc) and indeed in: http://www.thegatesnotes.com/Features/Future-of-Food ("the future of meat is vegan"). Not even his greatest admirers would describe Gates as unusually empathetic. But he is unusually rational - and the growth in secular scientific rationalism looks set to continue.

* * *

Nornagest, fair point. See too the study "The Brain Functional Networks Associated to Human and Animal Suffering Differ among Omnivores, Vegetarians and Vegans".

* * *

Eugine, are you doing Peter Singer justice? What motivates Singer's position isn't a range of empathetic concern that's stunted in comparison to people who favour the universal sanctity of human life. Rather, it's a different conception of the threshold below which a life is not worth living. We find similar debates over the so-called "Logic of the Larder" for factory-farmed non-human animals: http://www.animal-rights-library.com/texts-c/salt02.htm. Actually, one may agree with Singer - both with his utilitarian ethics and with his bleak diagnosis of some human and nonhuman lives - and still argue against his policy prescriptions on indirect utilitarian grounds. But this would take us far afield.

* * *

On (indirect) utilitarian grounds, we may make a strong case that enshrining the sanctity of life in law will lead to better consequences than legalising infanticide. So I disagree with Singer here. But I'm not sure Singer's willingness to defend infanticide as (sometimes) the lesser evil is a counterexample to the broad sweep of the expanding-circle generalisation. We're not talking about some Iron Law of Moral Progress.

* * *

The growth of science has led to a decline in animism. So in one sense, our sphere of concern has narrowed. But within the sphere of sentience, I think Singer and Pinker are broadly correct. Also, utopian technology makes even the weakest forms of benevolence vastly more effective. Consider, say, vaccination. Even if, pessimistically, one doesn't foresee any net growth in empathetic concern, technology increasingly makes the costs of benevolence trivial. [Once again, I'm not addressing here the prospect of hypothetical paperclippers - just mind-reading humans with a pain-pleasure (dis)value axis.]

* * *

An expanding circle of empathetic concern needn't reflect a net gain in compassion. Naively, one might imagine that e.g. vegans are more compassionate than vegetarians. But I know of no evidence this is the case. Tellingly, female vegetarians outnumber male vegetarians by around 2:1, but male and female vegans are roughly equal in number. So an expanding circle may reflect our reduced tolerance of inconsistency / cognitive dissonance. Men are more likely to be utilitarian hyper-systematisers.

* * *

There is no guarantee that greater perspective-taking capacity will be matched with equivalent action. But presumably greater empathetic concern makes such action more likely. [cf. Steven Pinker's "The Better Angels of Our Nature". Pinker aptly chronicles e.g. the growth in consideration of the interests of nonhuman animals; but this greater concern hasn't (yet) led to an end to the growth of factory-farming. In practice, I suspect in vitro meat will be the game-changer.]

The attributes of superintelligence? Well, the growth of scientific knowledge has been paralleled by a growth in awareness - and partial correction - of all sorts of cognitive biases that were fitness-enhancing in the ancestral environment of adaptedness. Extrapolating, I was assuming that full-spectrum superintelligences would be capable of accessing and impartially weighing all possible first-person perspectives and acting accordingly. But I'm making a lot of contestable assumptions here. And see too the perils of: https://en.wikipedia.org/wiki/Apophatic_theology

* * *

Perhaps it's worth distinguishing the Convergence vs Orthogonality theses for: 1) biological minds with a pain-pleasure (dis)value axis. 2) hypothetical paperclippers.

Unless we believe that the expanding circle of compassion is likely to contract, IMO a strong case can be made that rational agents will tend to phase out the biology of suffering in their forward light-cone. I'm assuming, controversially, that superintelligent biological posthumans will not be prey to the egocentric illusion that was fitness-enhancing on the African savannah. Hence the scientific view-from-nowhere, i.e. no arbitrarily privileged reference-frames.

But what about 2? I confess I still struggle with the notion of a superintelligent paperclipper. But if we grant that such a prospect is feasible and even probable, then I agree the Orthogonality thesis is most likely true.

* * *

True, Desrtopa. But just as doing mathematics is harder when mathematicians can't agree on what constitutes a valid proof (cf. constructivists versus nonconstructivists), likewise formalising a normative account of ideal rational agency is harder where disagreement exists over the criteria of rationality.

[on Luke's "Philosophy: A Diseased Discipline"]
Much of professional analytic philosophy makes my heart sink too. Reading Kant isn't fun - even if he gains in translation. But I don't think we can just write off Kant's work, let alone the whole of still unscientised modern philosophy. In particular, Kant's exploration of what he calls "The Transcendental Unity of Apperception" (aka the unity of the self) cuts to the heart of the SIAI project - not least the hypothetical and allegedly imminent creation of unitary, software-based digital mind(s) existing at some level of computational abstraction. No one understands how organic brains manage to solve the binding problem (cf. http://lafollejournee02.com/texts/body_and_health/Neurology/Binding.pdf) - let alone how to program a classical digital computer to do likewise. The solution IMO bears on everything from Moravec's Paradox (why is a sesame-seed-brained bumble bee more competent in open-field contexts than DARPA's finest?) to the alleged prospect of mind uploading, to the Hard Problem of consciousness.

Presumably, superintelligence can't be more stunted in its intellectual capacities than biological humans. Therefore, hypothetical nonbiological AGI will need a capacity to e.g. explore multiple state spaces of consciousness; close Levine's Explanatory Gap (cf. http://cognet.mit.edu/posters/TUCSON3/Levine.html); map out the "neural correlates of consciousness"; and investigate qualia that natural selection hasn't recruited for any information-processing purpose at all. Yet classical digital computers are still zombies. No one understands how classical digital computers (or a massively classically parallel connectionist architecture, etc) could be otherwise - or indeed have any insight into their zombiehood. [At this point, some hard-nosed behaviourist normally interjects that biological robots are zombies - and qualia are a figment of the diseased philosophical imagination. Curiously, the behaviourist never opts to forgo anaesthesia before surgery. Why not save money and permit his surgeons to use merely muscle relaxants to induce muscular paralysis instead?]

The philosophy of language? Anyone who believes in the possibility of singleton AGI should at least be aware of Wittgenstein's Anti-Private Language Argument. (cf. https://en.wikipedia.org/wiki/Wittgenstein_on_Rules_and_Private_Language) What is the nature of the linguistic competence, i.e. the capacity for meaning and reference, possessed by a notional singleton superintelligence?

Anyone who has studied Peter Singer - or Gary Francione - may wonder if the idea of distinctively Human-Friendly AGI is even intellectually coherent. (cf. "Aryan-Friendly" AGI or "Cannibal-Friendly" AGI?) Why not an impartial Sentience-Friendly AGI?

Hostility to "philosophical" questions has sometimes had intellectually and ethically catastrophic consequences in the natural sciences. Thus the naive positivism of the Copenhagen school retarded progress in pre-Everett quantum mechanics for over half a century. Everett himself, despairing at the reception of his work, went off to work for the Pentagon designing software targeting cities in thermonuclear war. In countless quasi-classical Everett branches, his software was presumably used in nuclear Armageddon.

And so forth...

Note that I'm not arguing that SIAI / Lesswrongers don't have illuminating responses to all of the points above (and more!), merely that it might be naive to suggest that all of modern philosophy, Kant, and even Plato (cf. the Allegory of the Cave) are simply irrelevant. The price of ignoring philosophy isn't to transcend it but simply to give bad philosophical assumptions a free pass. History suggests that generation after generation believes they have finally solved all the problems of philosophy; and time and again philosophy buries its gravediggers.

But this time is different? Maybe...

[on physicalism]
Shminux, Strawsonian physicalism may be false; but it is not dualism. Recall that the title of Strawson's controversial essay was "Realistic monism - why physicalism entails panpsychism" (Journal of Consciousness Studies 13 (10-11): 3-31, 2006). For an astute critique of Strawson, perhaps see William Seager's "The 'intrinsic nature' argument for panpsychism" (Journal of Consciousness Studies 13 (10-11): 129-145, 2006; http://philpapers.org/rec/SEATIN). Once again, I'm not asking you to agree here. We just need to be wary of dismissing a philosophical position without understanding the arguments that motivate it.

* * *

Shminux, it is indeed not obvious what is obvious. But most mainstream materialists would acknowledge that we have no idea what "breathes fire into the equations and makes a universe for them to describe". Monistic materialists believe that this "fire" is nonexperiential; monistic idealists / Strawsonian physicalists believe the fire is experiential. Recall that key concepts in theoretical physics, notably a field (superstring, brane, etc), are defined purely mathematically [cf. "Maxwell's theory is Maxwell's equations"]. What's in question here is the very nature of the physical.

Now maybe you'd argue instead in favour of some kind of strong emergence; but if so, this puts paid to reductive physicalism and the ontological unity of science.

* * *

Shminux, what would be a "virtual" conscious experience? I think you'll have a lot of work to do to show how the "raw feels" of conscious experience could exist at some level of computational abstraction. An alternative perspective is that the "program-resistant" phenomenal experiences undergone by our minds disclose the intrinsic nature of the stuff of the world - the signature of basement reality. Dualism? No, Strawsonian physicalism.

Of course, I'm not remotely expecting you to agree here. Rather I'm just pointing out there are counterarguments to computational platonism that mean Jonatas' argument can't simply be dismissed.

* * *

"Utility counterfeiting" is a memorable term; but I wonder if we need a duller, less loaded expression to avoid prejudging the issue? After all, neuropathic pain isn't any less bad because it doesn't play any signalling role for the organism. Indeed, in some ways neuropathic pain is worse. We can't sensibly call it counterfeit or inauthentic. So why is bliss that doesn't serve any signalling function any less good or authentic? Provocatively expressed, evolution has been driven by the creation of ever more sophisticated counterfeit utilities that tend to promote the inclusive fitness of our genes. Thus e.g. wealth, power, status, maximum access to seemingly intrinsically sexy women of prime reproductive potential (etc) can seem inherently valuable to us. Therefore we want the real thing. This is an unsettling perspective because we like to think we value e.g. our friends for who they are rather than their capacity to trigger subjectively valuable endogenous opioid release in our CNS. But a mechanistic explanation might suggest otherwise.

* * *

Perhaps you might also discuss Felipe De Brigard's "Inverted Experience Machine Argument". To what extent does our response to Nozick's Experience Machine Argument typically reflect status quo bias rather than a desire to connect with ultimate reality?

If we really do want to "stay in touch" with reality, then we can't wirehead or plug into an "Experience Machine". But this constraint does not rule out radical superhappiness. By genetically recalibrating the hedonic treadmill, we could in principle enjoy rich, intelligent, complex lives based on information-sensitive gradients of bliss - eventually, perhaps, intelligent bliss orders of magnitude richer than anything physiologically accessible today. Optionally, genetic recalibration of our hedonic set-points could in principle leave much if not all of our existing preference architecture intact - defanging Nozick's Experience Machine Argument - while immensely enriching our quality of life. Radical hedonic recalibration is also easier than, say, the idealised logical reconciliation of Coherent Extrapolated Volition because hedonic recalibration doesn't entail choosing between mutually inconsistent values - unless of course one's values are bound up with inflicting or undergoing suffering.

IMO one big complication with discussions of "wireheading" is that our understanding of intracranial self-stimulation has changed since Olds and Milner discovered the "pleasure centres". Taking a mu opioid agonist like heroin is in some ways the opposite of wireheading because heroin induces pure bliss without desire (shades of Buddhist nirvana?), whereas intracranial self-stimulation of the mesolimbic dopamine system involves a frenzy of anticipation rather than pure happiness. So it's often convenient to think of mu opioid agonists as mediating "liking" and dopamine agonists as mediating "wanting". We have two roughly cubic-centimetre-sized "hedonic hotspots" in the rostral shell of the nucleus accumbens and ventral pallidum where mu opioid agonists play a critical signalling role. But anatomical location is critical. Thus the mu opioid agonist remifentanil actually induces dysphoria (http://www.ncbi.nlm.nih.gov/pubmed/18801832) - the opposite of what one might naively suppose.

* * *

Just how diverse is human motivation? Should we discount even sophisticated versions of psychological hedonism? Undoubtedly, the "pleasure principle" is simplistic as it stands. But one good reason not to try heroin, for example, is precisely that the reward architecture of our opioid pathways is so similar. Previously diverse life-projects of first-time heroin users are at risk of converging on a common outcome. So more broadly, let's consider the class of life-supporting Hubble volumes where sentient biological robots acquire the capacity to rewrite their genetic source code and gain mastery of their own reward circuitry. May we predict orthogonality or convergence? Certainly, there are strong arguments why such intelligences won't all become the functional equivalent of heroin addicts or wireheads or Nozick Experience Machine VR-heads (etc). One such argument is the nature of selection pressure. But if some version of the pleasure principle is correct, then isn't some version of the convergence conjecture at least feasible, i.e. they'll recalibrate the set-point of their hedonic treadmill and enjoy gradients of (super)intelligent (super)happiness? One needn't be a meta-ethical value-realist to acknowledge that subjects of experience universally find that bliss is empirically more valuable than agony or despair. The present inability of natural science to explain first-person experiences doesn't confer second-rate ontological status. If I may quote physicist Frank Wilczek:

"It is reasonable to suppose that the goal of a future-mind will be to optimize a mathematical measure of its well-being or achievement, based on its internal state. (Economists speak of 'maximizing utility'', normal people of 'finding happiness'.) The future-mind could discover, by its powerful introspective abilities or through experience, its best possible state the Magic Moment - or several excellent ones. It could build up a library of favourite states. That would be like a library of favourite movies, but more vivid, since to recreate magic moments accurately would be equivalent to living through them. Since the joys of discovery, triumph and fulfillment require novelty, to re-live a magic moment properly, the future-mind would have to suppress memory of that moment's previous realizations.

A future-mind focused upon magic moments is well matched to the limitations of reversible computers, which expend no energy. Reversible computers cannot store new memories, and they are as likely to run backwards as forwards. Those limitations bar adaptation and evolution, but invite eternal cycling through magic moments. Since energy becomes a scarce quantity in an expanding universe, that scenario might well describe the long-term future of mind in the cosmos." (Frank Wilczek, "Big troubles, imagined and real", in Global Catastrophic Risks, eds Nick Bostrom, Milan M. Cirkovic, OUP, 2008)
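[The physics behind the reversibility point is, I take it, Landauer's principle: logically irreversible operations such as erasing a bit dissipate at least

\( E_{\min} = k_B T \ln 2 \approx 1.38 \times 10^{-23} \times 300 \times 0.693 \approx 3 \times 10^{-21}\,\mathrm{J} \)

per bit at room temperature, whereas logically reversible computation has no such floor. A future-mind that never erased - never stored new memories - could thus in principle cycle through its magic moments at vanishing energy cost. My gloss, not Wilczek's text.]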

So is convergence on the secular equivalent of Heaven inevitable? I guess not. One can think of multiple possible defeaters. For instance, if the I.J. Good / SIAI [MIRI] conception of the Intelligence Explosion (as I understand it) is correct, then the orthogonality thesis is plausible for a hypothetical AGI. On this story, might e.g. an innocent classical utilitarian build AGI-in-a-box that goes FOOM and launches a utilitronium shockwave? (etc) But in our current state of ignorance, I'm just not yet convinced we know enough to rule out the convergence hypothesis.


[on the Springer Singularity Hypotheses volume]
Many thanks Dr Manhattan. A scholarly version plus a bibliography (PDF: https://www.biointelligence-explosion.com/biointelligence-explosion.pdf) will appear in Springer's forthcoming Singularity volume (cf. http://singularityhypothesis.blogspot.com/p/central-questions.html) published later this year. Here's a question for Lesswrong members. I've been asked to co-author the Introduction. One thing I worry about is that casual readers of the contributors' essays won't appreciate just how radically the Kurzweilian and SIAI conceptions of the Singularity are at odds. Is there anywhere in print or elsewhere where Ray Kurzweil and Eliezer Yudkowsky, say, directly engage each other's arguments in debate? Also, are there any critical but non-obvious points beyond the historical background that you think should be included in the introductory overview?

* * *

The only part of the world to which one has direct, non-inferential access is the contents of one's own conscious mind. Abundant evidence from experimental psychology suggests we often confabulate even here. Even if one is uninterested in the topic of consciousness per se, I think it's worth investigating how the properties of the medium infect the supposed propositional content of what one is saying. In the altered state of consciousness known as dreaming, for instance, a scientific rationalist may make all sorts of cognitive errors; yet the nature of the cardinal error is (normally) elusive. So how will posthuman superintelligence regard what humans mostly take for granted, namely "ordinary waking consciousness" - the state of consciousness in which we pursue the enterprise of scientific knowledge? Or as Einstein put it more poetically, "What does the fish know of the sea in which it swims?"

"Plight"? Perhaps I should have used a more cumbersome but emotionally neutral synonym: a difficult, subjectively distressing or dangerous situation. I wasn't intending to add "plights" to our ontology of the world!


[on webmaster Dave's Reddit AMA]
First, many apologies Vaniver if you didn't find any of the AMA topics of interest. But I did also answer - or at least attempt to answer! - questions on Third World poverty; the reproductive revolution; nootropics; scientifically measuring (un)happiness; cognitive bias; brain augmentation; cryonics; empathy enhancement; polyamory; the (alleged) arrogance of transhumanism; Hugo De Garis; Aubrey de Grey; my contribution to Springer's forthcoming Singularity volume; "hell worlds" and "mangled worlds" in QM; classical versus negative utilitarianism; and meta-ethical anti-realism. If there is an unjustly neglected topic you'd like to see tackled, then I promise I'll respond to the best of my ability.

Psychedelics? The drug-naive may dismiss their intellectual significance on a priori grounds - and one may abstain altogether on grounds of prudence and/or legality. But will an understanding of consciousness yield to rational philosophical analysis alone, or only to a combination of theory harnessed to empirical methods of investigation? Maybe the former; but I think a defensible case can be made that rigorous experimentation is vital, despite the methodological pitfalls.

Nonhuman animals? Well, the plight of sentient but cognitively humble creatures might seem of limited interest to a community of rationalists like Lesswrong. But let's assume that we take the problem of [human-]Friendly AI seriously. Isn't the fate of sentient but comparatively cognitively humble creatures in relation to vastly superior intelligence precisely the issue at stake? Should superintelligence(s) care about the well-being of their intellectual inferiors any more than most humans do? And if so, why?


The Hedonistic Imperative
David Pearce (2013)
dave@hedweb.com