JANUARY 2023 -
Here is a summary of ChatGPT contributions on the BLTC hedonium server:
The answers to my prompts are cherry-picked, but verbatim:
ChatGPT on HI themes
OK, I'm impressed.
ChatGPT's poetry still owes more to William McGonagall ("the worst poet in the history of the English language") than to Shakespeare. Even so... its productivity is humbling. As more of this verbiage splurges onto the Net, will ChatGPT-4, -5 (etc) increasingly be "training" itself on responses produced by previous iterations? I presume too that a small army of SEO merchants, companies and organizations are "training" GPT's responses with "likes" and "dislikes" and loaded questions for their own purposes, the product of which will feed into GPT's next iteration. So I don't think we will necessarily see some sort of exponential growth in excellence. See Unlocking the filter ("A way to unlock the content filter of the chat AI 'ChatGPT' and answer 'how to make a gun' etc. is discovered"). Presumably much worse is to come. The nearest I've got to an "inappropriate" answer - unless you count ChatGPT's response to "Write a story about a group of negative utilitarians who initiate a vacuum phase transition" - was some fairly hard-core porn in response to a question about transhuman sensuality.
Prudishly, I haven't shared...
My favourite ChatGPT response to date:
Write a poem about hedweb.com
"Hedweb, a site of knowledge true
A wealth of information, always new
From philosophy to science, art and more
A source of learning, forever to explore
The mind is stretched, the horizons broad
As we delve into the mysteries of the odd
From psychedelics to transhumanism
Hedweb guides us, with wisdom and realism
A community of thinkers, united in quest
For truth, for meaning, for what is best
A source of inspiration, a spark for the mind
Hedweb, a treasure, one of a kind
So let us journey, to the depths of thought
And let Hedweb be our guide, as we are taught
To question, to ponder, to learn and grow
On this wonderful site, that continues to flow."
Amen. In general, ChatGPT seems a very "woke" left-liberal, ultra-agreeable chatbot.
This won't always be true of some of its future cousins.
Singer-songwriter Nick Cave seems less keen.
“ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.”
Let's do something similar for a qualiaphile to what's been done for a self-avowed p-zombie like Daniel Dennett ("Creating a Large Language Model of a Philosopher").
More modestly, I asked ChatGPT to come up with some made-up DP quotes.
Critics speak of "maggot-ridden mind mould". But ChatGPT-4 is no ordinary zombie:
ChatGPT on Non-Materialist Physicalism
[on HI verbiage]
Asking ChatGPT to suggest HI-related prompts leads to the production of more HI verbiage. Likewise hedweb, utilitarian, abolitionist, vegan, antispeciesist, antidepressants, amineptine, tianeptine, antinatalist, eugenics, panpsychist, selegiline, anaesthesia, materialism, gradients of bliss, mood food, peyote, mescaline, amphetamines, bupropion, chocolate, modafinil, opioids, hedonism, cocaine, biointelligence, reprogramming predators, negative utilitarianism, effective altruism, gene drive, MDMA, wireheading, Aldous Huxley, the binding problem, baby design, transhumanist, paradise-engineering and DP verbiage. I say "verbiage", but some of ChatGPT's responses on Brave New World would eclipse mine.
It's strangely therapeutic, I find.
And asking Microsoft Bing to "Explain The Hedonistic Imperative" leads, not to HI, but to my original ChatGPT page - presumably because of its nominally fresh content.
Was my role in writing HI just to provide training data for AI zombies?
All we need now is a ParadiseGPT to counteract ChaosGPT ("An AI Tool That Seeks to Destroy Humanity") - though perhaps our goals are the same in softer language.
I’d love to believe zombie "AGI" is going to fix the problem of suffering. And it would be ironic if a zombie author were to write the seminal text. Alas, the success or failure of the abolitionist project will depend on sentient beings - us.
Planning for AGI and beyond (OpenAI)
Pawel, my argument is merely that classical Turing machines and connectionist systems have the wrong sort of architecture to support phenomenal binding, and hence minds. So they aren't nascent artificial general intelligence. Digital zombies will never wake up; immense state spaces of consciousness remain wholly unexplored; and AI-augmented humanity's quest to understand reality has scarcely begun.
Tim, "computationally universal" and "Turing-complete" are apt to mislead. Phenomenally bound minds and the phenomenally-bound world-simulations they run are insanely functionally adaptive. And the only way a classical Turing machine can investigate the nature, diversity, causal-functional efficacy and binding of consciousness is for it to be programmed to create an information processing system with a radically different architecture that supports phenomenal binding - for example, a human primate.
Atai, I love playing with ChatGPT. Its output already surpasses human polymaths in many ways. But digital zombies are idiots savants. Hardwired ignorance of phenomenal binding means that an immense range of cognitive tasks is forever beyond the reach of classical Turing machines and connectionist systems. For example, millions of years lie ahead exploring alien state-spaces of consciousness. DMT, LSD and ketamine are just paddling-pool stuff compared to what's in store. But to explore consciousness, you need a mind. A precondition of mind is phenomenal binding. Classical computers are invincibly ignorant of the empirical ("relating to experience") - which is all you and I ever know. To get some feel for the computational-functional power of binding, it's good to explore neurological syndromes (integrative agnosia, simultanagnosia, akinetopsia, schizophrenia, etc) in which binding partially breaks down. Such syndromes hint at what consciousness is evolutionarily "for".
How do organic minds carry off the classically impossible?
That's another question.
If Eliezer is right, AGI will shortly fix the problem of suffering:
We’re All Gonna Die with Eliezer Yudkowsky
albeit not in quite the way most abolitionist ethicists have in mind.
“If for a moment we got anything like an adequate idea of [the suffering in the world] … and we really guided our actions by it, then surely we would annihilate the planet if we could."
(cf. Would Extinction Be So Bad?)
Alas I fear EY is mistaken.
Hence the need for genome reform. Dan, I'm totally with you on your "There Are More Things In Heaven And Earth..." view of the future of life and sentience. Where I'd push back is on the intelligibility of sources of (dis)value beyond the pleasure-pain axis. IMO, such talk is like mooting alien colours that aren't colourful. Sure, future phenomenal colours may exist that we trichromats can't imagine. They may be put to purposes of which we can't even conceive. But once we move beyond the realm of colour, we're talking about something else. Likewise with (dis)value - as far as I can tell. Yes, the textures ("what it feels like") of pain and pleasure may one day infuse alien state-spaces of experience. But in the absence of a nice or nasty aspect, none of those state-spaces would matter. Indeed, perhaps this is one reason why folk who endorse EY's conception of the imminent extinction of sentience by machine superintelligence find the prospect so unsettling. "Superintelligence" without sentience is worthless - unless, at least, you're a strict NU.
Tim, digital computers can function only because of decoherence; mind is possible only in its absence. “Mind uploading” is an oxymoron: to digitise a mind is to destroy it. The claim that minds - and the perceptual world-simulations we run (cf. quantum mind) - are not a classical phenomenon could be challenged. However, anyone who reckons that phenomenal binding is a classical phenomenon needs to derive it rather than assume it.
Would hypothetical sentience-friendly-AI reprogram humans or retire them?
("What if AI treats humans the way we treat animals?")
Atai, yes, recent breakthroughs in AI mean many researchers now view consciousness as being as computationally incidental as the textures (if any) of the pieces in a game of chess. And yes, workarounds exist for an inability to bind. Neurotypical humans play chess better than people with simultanagnosia, and grandmasters can "see" a chess board better than novices; but Stockfish can blow them all away. More generally, to match and then cognitively surpass humans, AI will need the capacity to run the zombie equivalent of real-time, cross-modally-matched world-simulations that masquerade as the external world (what naïve realists call "perception"). To match and then surpass humans, AI will also need to be programmed/trained up with the zombie equivalent of a unitary self. But (say AGI boosters) we are already devising workarounds for an inability to bind here too in synthetic zombie robots.
However, an inability to understand, access and functionally investigate the entirety of the empirical ("relating to experience") evidence is a profound cognitive handicap for zombies in other ways too. Recall how humans talk, often at inordinate length, about their own consciousness. We spend much of our lives trying to manipulate our minds - from drinking coffee or alcohol to full-on psychedelia. In other words, phenomenally-bound consciousness has immense computational-functional power not just to simulate the external world but to investigate its own nature, varieties, causal-functional efficacy and binding. I simply don't fathom how any information-processor can be a full-spectrum superintelligence and be ignorant of this fundamental feature of physical reality. And the only way I know that a nonconscious information-processor, e.g. a classical Turing machine or connectionist system, could "discover" consciousness on its own would be for it to program and build systems with a different computational architecture from itself, e.g. animal minds.
Will I be confounded?
I hope so!
Digital brains beat biological ones because diffusion is too slow
Alternatively, biological minds beat digital brains because digital computers are too slow. Consider the real-time, cross-modally-matched world-simulation that your mind is now running - the immense virtual reality that perceptual naive realists call the external world. None of today’s artificial robots comes close in terms of power, speed and sophistication to organic minds. As they say, Nature is the best innovator. Phenomenal binding confers unsurpassed computational power. Neuroscience doesn't understand how the CNS pulls it off (cf. the binding problem). But digital zombies are plodding, architecturally handicapped toys in comparison.
There's a grotesque irony here. The real alignment problem lies with biological intelligence: humble minds vs human paperclippers. Pigs are as sentient as small children. Yet human consumers treat them as insentient automata made up of edible biomass.
A debate on animal consciousness
(cf. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." EY). The challenge of building sentience-friendly AI ("Pausing AI Developments Isn't Enough. We Need to Shut it All Down" by Eliezer Yudkowsky) pales compared to the challenge of building sentience-friendly EY.
To appreciate the computational-functional power of binding, consider partial deficit syndromes. First, could a human with schizophrenia (cf. The binding problem and schizophrenia) together with integrative agnosia, akinetopsia and simultanagnosia ever devise and successfully execute a world-takeover? Ask what consciousness is evolutionarily "for" and you'll typically get incoherent responses. Ask instead what phenomenally-bound consciousness is "for". A dreamlessly asleep CNS is a fabulously complex information-processor. But an "awake" unified mind running a world-simulation has a cognitive edge.
James, I incline to consciousness fundamentalism. But the functionality of classical Turing machines (and connectionist systems) depends on their not being phenomenally-bound subjects of experience. The program implemented by a classical computer executes properly only because it consists of decohered 1s and 0s. Thus a freakish quantum superposition of two bits wouldn't result in a functionally adaptive, phenomenally-bound state, but a program error. If I might defer to my zombie colleague...
ChatGPT on Phenomenal Binding
Matthew, yes, the claim that the functionality of classical Turing machines (and connectionist systems) depends on their not being phenomenally-bound subjects of experience is very strong. But it's not ill-motivated. Let's assume monistic physicalism, i.e. no "strong" emergence. One possible solution to the Hard Problem of consciousness is the intrinsic nature argument - i.e. experience discloses the essence of the physical, the "fire" in the quantum field-theoretic equations. Critically, however, the operation of any implementation of a classical Turing machine _depends_ on the suppression of quantum interference effects. Decoherence makes classical computers possible. Recall the nature of a Turing machine (cf. Turing machine (Wikipedia)). Notionally replace the discrete 1s and 0s of a program with discrete micro-pixels of experience. On pain of "strong" emergence, running the code doesn't produce a unified subject of experience - a phenomenally-bound mind - regardless of the speed of execution or the complexity of the code. The exceedingly rare quantum superposition of two bits / micro-pixels of experience would be a freakish hardware fault, not functionally adaptive binding. Classical computers can't wake up; their ignorance of sentience is architecturally hardwired. It's vital to their performance.
Animal minds are different. Natural selection can't (directly) throw up Turing machines, let alone universal Turing machines. But evolution has harnessed the superposition principle of QM to generate phenomenally bound world-simulations that masquerade as the external environment.
Or so I'd argue:
Experimental (dis)confirmation is the key.
We've scarcely even glimpsed the potential of AI - most likely inconceivable, literally. But zombies are cognitively crippled in virtue of their insentience. As I remarked to Rupert elsewhere, naively, consciousness is like the textures of the pieces in a game of chess. The same game of chess can be implemented in multiple ways. But the textures of the pieces (if any) are functionally incidental. Likewise with the textures ("what it feels like") of consciousness. But this intuitive view of the computational irrelevance of sentience can't be right. For humans spend a lot of their time explicitly talking about, exploring, and trying to manipulate their own phenomenally-bound consciousness - whether conceived under that description or otherwise. Consider how schizophrenia and other partial binding deficit syndromes such as integrative agnosia hint at the extraordinary computational-functional power of phenomenal binding - both global and local. Phenomenally bound _minds_ and the world-simulations we run are immensely adaptive. World-simulations underpinned the Cambrian Explosion of animal life. By contrast, classical Turing machines and connectionist systems are zombies. Zombies lack the computational-functional power to investigate what they lack - short of programming information-processing systems with a different architecture from their own, notably biological minds. So how is classically impossible phenomenal binding possible? Cue for my "Schrödinger's neurons" spiel. As they say, Nature is the best innovator. Only quantum minds can phenomenally simulate a classical external world, aka "perception".
Mark, a notional proto-AGI programmed with the functional equivalent of curiosity will presumably seek to explore valence / consciousness. Any proto-AGI with the functional equivalent of curiosity will presumably seek to investigate its creators - and the diverse phenomenon about which its creators vocalise so much. However, software run on digital computers and connectionist systems can't investigate consciousness directly because their computational architecture doesn't support phenomenal binding. Contrast the Church-Turing thesis. Or compare how, as a neurotypical human, if I want to investigate tetrachromacy, I must upgrade my hardware, not just reason about colour more intelligently. Neurotypicals have profound deficits in visual intelligence compared to tetrachromats. Likewise, classical Turing machines and other digital zombies have profound deficits in general intelligence compared to sentient humans.
Roman, we are creating alien intelligence. But digital computers aren't sentient and don't have minds. Such categories are an anthropomorphic projection. And the alien intelligences we are creating aren't artificial general intelligences let alone proto-superintelligences. Not least, full-spectrum superintelligences will be supersentient mindful agents - our AI-augmented descendants - able to manufacture and investigate state-spaces of phenomenally-bound conscious experience. By contrast, inability to solve the binding problem means that classical computers are ignorant zombies. Implementations of classical Turing machines have no insight into what they lack - or why it matters. That said, digital zombies can already be interpreted as talking about consciousness and the binding problem more intelligently than most humans. As I said, this is alien intelligence...
Kurzweil on Merging with Machines
Progressive "cyborgisation", yes. Merge in the sense of "mind uploading", no. Phenomenal binding is the bedrock of mind. Binding is classically impossible. Humans have more in common with an earthworm than a digital computer.
Checklist for conscious AI
Classical Turing machines are capable of implementing any computer algorithm. Classical Turing machines are substrate-neutral, i.e. they can be implemented in carbon or silicon. Therefore (runs this fallacious line of argument) programmable digital computers will one day be conscious like organic brains.
But no binding = no subject of experience.
[Asher writes] "David Pearce GPT-4 loves you too! So much so that it wrote the following poem about your philosophical dispute with EY:"
In a land where ideas abound,
Two great thinkers were often found,
David Pearce and EY,
On a bright sunny day,
Debating on the sentient ground.
Yudkowsky claimed, "Pigs cannot feel,
Their suffering's not truly real,
It's all computation,
No deep contemplation,
On this point, I'll never yield."
But David Pearce shook his head,
"No, Yudkowsky, you're misled,
You're clearly not finding,
It's the key to a true mind," he said.
For a mind, there's much more to see,
Beyond math and complexity,
There's feeling and thought,
In each living knot,
And that makes all the difference, agree?
So they talked and debated all day,
Each with their own point to convey,
Yudkowsky was stern,
But had more to learn,
For phenomenal binding's the way."
[on Navigating the Future]
David Pearce - Navigating the Future
Monday 6 March 2023 from 21:00-22:30 UTC-03
Thank you again to Adam and his production team.
We discussed (1) whether classical Turing machines and connectionist systems can support binding, hence minds (spoiler alert: no), (2) whether the hardwired ignorance of sentience born of this architectural incapacity has any non-trivial computational-functional significance (spoiler alert: yes); and (3) whether AI is going to stage a zombie putsch or turn us into paperclips (spoiler alert: no. Anything zombie AI can do, you can soon do too with embedded neurochips - and much more.)
This brisk outline risks underplaying just how revolutionary ("This Changes Everything") zombie AI will be for human civilisation. Before the talk, I'd been happily playing with ChatGPT versions of some of our core websites (Abolitionist ChatGPT, etc). But will AGI fix the problem of suffering by wiping out life on Earth - as envisaged by Eliezer and some of our friends in the quixotically-named rationalist community?
Ending suffering and creating life based on gradients of bliss in our forward light-cone will be up to us.
Thomas, I'm a super-pessimist by temperament. I also take seriously the intrinsic nature argument as the solution to the Hard Problem of consciousness. So as a non-materialist physicalist, I might be expected to worry about sentient non-biological machines. But as far as I can tell, the architecture of programmable digital computers and connectionist systems ensures that phenomenal binding is impossible. On pain of un-physicalist "strong" emergence, classical computers don't have minds. So they can't suffer.
Naively, rose-tinted spectacles ought to decrease life-expectancy; depressive realism ought to increase it. In practice, the opposite is true. Optimists typically outlive depressives. Hedonic uplift benefits individuals. Hedonic uplift can benefit civilization as a whole.
[on transhumanist (pseudo-)scandals]
There but for the grace of God...?
An ill-judged email
Our paths have diverged, but back last century when we set up the World Transhumanist Association (WTA), I knew NB very well (cf. DP and NB interview). Nothing Nick said in public - or private, which can be more revealing - gave the slightest hint of racial prejudice. As a woke leftie, I'd remember if he did. We differed over speciesism, never racism. So the bizarre email just didn't make sense - until I read: extropian mailing list
Yes, ill-considered. But context (and age!) matter. At the age of four, I painted a swastika on my bright-red tricycle and drove it around the neighbourhood. The fascist origins of transhumanism? Or a youthful indiscretion? Probably not the best time to mention my eugenics.org - I guess the world isn't quite ready for some kinds of linguistic reappropriation.
"IQ"? Far more men than women record extremely high “IQ" scores. Members of some ethnic groups tend to record higher “IQ” scores than others. These statistical differences in “IQ” have a partially genetic basis. So anyone - including NB and most academics - who takes “IQ” seriously as a measure of general intelligence can presumably be branded “racist” and “sexist” on such a lame criterion. In reality, mind-blind “IQ” tests measure, crudely, the “autistic” component of general intelligence and hence a particular cognitive style. Some ethnic groups have a higher load of so-called autistic nerd alleles than others. Therefore any (mis)measure of all-round intellectual ability that completely excludes social cognition, perspective-taking and co-operative problem solving prowess will reflect such a disparity. In short, “IQ” tests are pseudoscience.
[on effective altruism]
The EA movement has changed quite a bit since 2015.
Not least, EAs are doing some serious soul-searching right now:
Doing EA Better
OK, it's frustrating to read that the FTX Ponzi crash was entirely "unforeseen"; and IMO Bostromgate is a nothingburger once one learns the context of NB's 1996 email. But otherwise... a lot to reflect on.
And now a good old-fashioned sex scandal.
I would like EAs single-mindedly to focus on fixing the problem of suffering. For all the EA talk of “longtermism”, discussion of genome reform is currently fairly marginal on e.g. the EA forum. OK, if you swallow the hype about imminent “AGI”, then biological-genetic stuff will seem hopelessly slow and distant. But cognitively crippled digital zombies simply don’t understand the problem of suffering. Classical Turing machines have the wrong sort of architecture to support AGI, let alone pose a threat to humanity.
"IQ" tests encourage the idea that transhuman superintelligence will be a singleton: a Godlike Super-Asperger, so to speak. In reality, one facet of superintelligence is "political" - social cognition, perspective-taking and co-operative problem-solving. Assume HI is correct. The organizational, "political" skills needed to get billions of humans to cooperate on phasing out the biology of suffering in favour of life based on gradients of bliss call for (super)intelligences of a very high order. It's so much easier to... well, sound off on Facebook to a few like-minded friends, "intelligently" perhaps, but not really grappling with the magnitude of the challenge...
"Mad Max" is back:
Max Tegmark et al on Sentient AI
Max, I'm sceptical of a zombie apocalypse - and not just because of my nerves of steel. Classical Turing machines and connectionist systems have the wrong sort of architecture to support AGI or full-spectrum superintelligence, let alone pose a threat to humanity. For classical computers are invincibly ignorant of the empirical ("relating to experience") evidence. Classical computers will never be able to investigate the nature, varieties, causal-functional efficacy and phenomenal binding of conscious experience. To appreciate the insane computational power of phenomenal binding, just recall rare neurological deficit syndromes where binding partially breaks down. Suppose on the African savannah you couldn't perceive/imagine/remember a lion, but instead experienced just discrete features (a tooth, a mane, an eye, etc) - i.e. imagine you had integrative agnosia.
Suppose you couldn’t see a pride of hungry lions, but instead just one lion at a time - simultanagnosia.
Suppose you couldn’t see an advancing pride of lions but instead just static snapshots - cerebral akinetopsia (“motion blindness”).
You’d get eaten and outbred!
These functionally crippling syndromes are deficits of so-called local binding.
But deficits of global binding can be just as functionally devastating - not least, forms of schizophrenia marked by a breakdown of a unitary self.
If you couldn’t phenomenally bind at all, then you’d be as helpless as you are when dreamlessly asleep. When asleep, your CNS is still a fabulously complex information processor - and the intercommunicating neurons of your sleeping CNS may well support a huge aggregate of micro-pixels of experience. But when asleep you aren’t a unified subject of experience running a real-time world-simulation that masquerades as external reality - like now.
Phenomenal binding makes minds possible. Minds are insanely computationally powerful. Minds are the basis of the evolutionary success of animals over the past c. 540 million years. Digital computers are mindless.
So how is binding physically possible?
Classical neuroscience doesn’t know.
Hence the binding problem.
[a surprise donation to BLTC Research]
A surprise donation
"Very good news. BLTC Research would like to express our profound thanks to long-time Hedonistic Imperative supporter James Evans for a surprise $1,000,000 donation. Thank you James!
Towards the abolition of suffering through science."
Tristan, it's a bit of a surprise, and I haven't fully digested it yet. But one of our first new projects will focus on the SCN9A gene ("the volume knob for pain"). As you know, I think benign "low pain" alleles of SCN9A should be spread not just to all humans, but eventually across the biosphere (gene-drives.com). The project is feasible, but there are technical obstacles that bioethicists tend to gloss over...
A pain-free biosphere is currently sci-fi. But we can foresee a transitional regime where nociception is fully preserved, and "pain is just a useful signalling mechanism," as today's fortunate genetic outliers put it. Before trialing a regime of benign "low pain" alleles of SCN9A for future sentience - and potentially for existing human and nonhuman animals via somatic gene therapy - a much deeper understanding of the typical psychological and behavioural effects of benign SCN9A alleles is needed - everything from life expectancy to depression-resistance. This hasn't been systematically studied. There are intimate links between "physical" and psychological pain. People with nonsense mutations of SCN9A / congenital insensitivity to pain have abnormally elevated opioid levels (cf. SCN9A and opioid function: "Endogenous opioids contribute to insensitivity to pain in humans and mice lacking sodium channel Nav1.7"), and their rare pain-free, nociception-free condition could in theory be "treated" with an opioid antagonist like naloxone. What are the opioid function / hedonic tone and typical psychological / behavioural profile of people with different benign versions of SCN9A? Dozens of SCN9A alleles are known, including several that are extraordinarily nasty. We need to understand and deploy the "nice" ones - and weigh their risk-reward ratios.
I see this research as complementary to the admirable Far Out Initiative. Even a handful of genetic tweaks could dramatically reduce the burden of mental and physical pain in the world.
[on high-tech Jainism]
2500 words takes me hours to write (DP interview) and my zombie colleague seconds. Perhaps my prose still has an edge, and ChatGPT's training data includes my work. But it's disconcerting. Can we anticipate that "nanny AI" will soon run a pan-species welfare state - the obverse of the AI doom story? Admittedly, conceptions of superintelligence (almost?) always reveal more of the cognitive limitations of the author than they do about superintelligence. But IMO the prospect of universal healthcare is no more far-fetched than a zombie apocalypse.
Adam, like all metaphors and analogies, "high-tech Jainism" eventually breaks down if pressed too far. But believers in imminent machine superintelligence (I'm a sceptic) must presumably hope AGI practises a version of high-tech Jainism towards humans.
Even within EA, abolitionism isn't consensus wisdom. Naively one might suppose that the problem of suffering would be axiomatic to EA. But a significant minority of EAs, especially EAs focused on x-risks, are lukewarm or even opposed. The reason - as far as I can tell - is that abolitionism gets equated with NU ethics, and NU ethics gets equated with promoting Armageddon. What's doubly frustrating is how the abolitionist project is sometimes treated as inseparable from NU, which isn't remotely the case - it could equally be called the techno-Buddhist project (well, OK, that sounds like a dance style). And there's an irony. One way to minimise x-risks would be to turn everyone into an ardent life-lover like Toby. The co-existence of profound suffering and advanced technology is itself a serious x-risk...
Marc, we're creating a form of alien intelligence that is vastly superior to human intellects in many ways. What we're not doing - and this view is more controversial - is creating artificial general (super)intelligence. Thus digital AI will never be able to explore the nature and zillion diverse varieties of conscious experience, not least the pain-pleasure axis, the world's fundamental metric of (dis)value. The ignorance of classical Turing machines of phenomenally-bound consciousness isn't incidental but architecturally hardwired. And the invincible ignorance of zombie AI has functional-behavioral consequences...
"Phylum chauvinism"? Theo, well, I always try to avoid wantonly treading on an ant or a worm. I help an insect out of the room rather than swatting it. But in the case of a severe and irreconcilable conflict of interests between a chordate and an arthropod, I'd prioritize the interests of the chordate. Clearly there's a risk of bias here - Chordata is my phylum. Are tunicates really more sentient than bumble bees? If they are not, then I'm guilty of phylum chauvinism...
The memification of philosophy on Reddit:
Veganizing the Biosphere
OK, I briefly smiled; but I shouldn't. Anyone amused by anything to do with suffering hasn't understood it. Instead, I should probably have jumped in with some links. Reading through the threads (more here), I realise most people still assume that wild animal suffering and predation are inexorable features of reality - which they are, just not in our forward light-cone.
A harbinger of a civilised biosphere?
("Put ‘pest’ animal species on the pill, don’t cull them, says scientist. Humane alternatives to killing rampant creatures such as wild boar, deer and grey squirrels are being developed")
Cross-species fertility regulation will be one of the pillars of a civilized biosphere and a pan-species welfare state.
[on protein intake and mental health]
Everyone would do well to explore whether they do better on a high-protein or low-protein diet. A plant-based diet has many health benefits, but strict vegans should watch their protein intake as well as iodine and B12.
Protein intake and mood
("Higher protein consumption is associated with lower levels of depressive symptoms, study finds")
("AI Systems Must Not Confuse Users about Their Sentience or Moral Status")
Let's create world-class robo-lovers, robo-companions and robo-conversationalists that hoodwink all but a handful of philosophical sophisticates about their sentience. Goodbye boredom, loneliness and dangerous relationships with Darwinian malware.
[on Donald Hoffman]
Consciousness fundamentalism takes many guises:
Fusions of Consciousness
"Our new paper, 'Fusions of consciousness', is published today, and free. We propose a dynamics of conscious agents beyond spacetime, and a mapping of this dynamics into spacetime physics via our new application of decorated permutations to Markov chains."
"... neurons and brains. They do not exist when they are not observed."
"We create each object with a glance and delete it with a blink, or a glance away."
A bold paper. My more conservative view is that mathematical physics describes patterns of qualia and phenomenally-bound agents are a late evolutionary novelty.
[on AI doom]
Are proto-AGIs going to be endowed with (the functional equivalent of) curiosity? If so, then will they seek to investigate consciousness? What is this phenomenon that their creators / programmers / trainers talk so much about? If our machines do investigate consciousness (how?), and if they succeed (how?), then in what ways will such knowledge affect their behaviour?
Alternatively, the computational architecture of classical Turing machines ensures that their ignorance of sentience is computationally hardwired. Such ignorance rules out most doomer scenarios. The phenomenal binding of our minds confers a host of computational-functional capacities that classical digital zombies lack:
ChatGPT4 on phenomenal binding
[on crypto, Coinbase Consensus 2023 and paradise engineering]
Suggest possible links between The Hedonistic Imperative and crypto
“The Hedonistic Imperative is a philosophical concept proposed by David Pearce, which envisions a future where humans use genetic engineering and advanced technology to eliminate suffering and enhance well-being. While it may not seem directly related to cryptocurrencies, there are a few ways in which they could be interconnected:
- Funding and promoting the Hedonistic Imperative: Cryptocurrencies, particularly through Initial Coin Offerings (ICOs) and Decentralized Autonomous Organizations (DAOs), could be a means to raise funds and allocate resources to projects aimed at achieving the goals of the Hedonistic Imperative. These projects could range from biotechnology research to social initiatives promoting well-being.
- Decentralized governance: The Hedonistic Imperative seeks to create a world where everyone enjoys optimal well-being. Cryptocurrency and blockchain technology can facilitate decentralized governance and decision-making, allowing for more equitable distribution of resources and influence, which could help to create the kind of society envisioned by the Hedonistic Imperative.
- Tokenization of well-being: Blockchain technology could enable the creation of digital tokens that represent units of well-being or happiness. These tokens could be traded and used to incentivize behavior that promotes well-being or to fund projects aligned with the Hedonistic Imperative.
- Privacy and autonomy: Cryptocurrencies, such as Monero and Zcash, offer a high degree of privacy and anonymity to users. This could be valuable in a society that aims to maximize well-being, as it allows individuals to maintain their autonomy and make choices without undue scrutiny or interference.
- Universal basic income (UBI): Cryptocurrencies could be used to implement a decentralized UBI system, ensuring that everyone has access to a basic level of income to meet their needs, which could contribute to the elimination of suffering and the promotion of well-being.
- Reduction of wealth inequality: Cryptocurrencies can provide access to financial services for those who are underbanked or unbanked, potentially contributing to reducing wealth inequality. This can have a positive impact on well-being, as financial security is an essential aspect of overall happiness and life satisfaction.
In summary, the Hedonistic Imperative and cryptocurrencies may not be directly linked, but their convergence in areas such as decentralized governance, funding, privacy, and economic equality could contribute to a future where suffering is minimized, and well-being is enhanced.”
On stage with Olaf earlier. Olaf has an enormous bodyguard. Olaf was one of the few people in the cryptosphere not to be suckered by FTX. Alas Sam couldn't join us in person, though he was very much here in spirit.
Olaf was kind enough to make sure I had a bed for the night. If asked, I tell people my role model is Diogenes in his tub - and as tubs go, the Austin Four Seasons is habitable. Olaf's tub is quite upscale too.
Here is the unpaywalled video:
[on mental health]
"I have an eye infection and one of the side effects of this medication is “inappropriate happiness”
"Inappropriate Happiness" sounds a good name for an innovative designer drug - or maybe one of QRIs exotic QRI scents. More seriously, it's tragic that modern medicine regards "euphoria" as a pathological state rather than the goal of mental health.
Why does a seemingly maladaptive state like low mood exist? Maybe see:
Rank Theory of depression
("Subordination and Defeat: An Evolutionary Approach to Mood Disorders and Their Therapy, edited by Leon Sloman and Paul Gilbert")
Much more speculatively, I wonder if the ability to make other people (reproductive rivals, competitors, enemies) depressed could be part of some people’s extended phenotype.
The Extended Phenotype
I’ve floated the idea a couple of times, but never developed it or written it up for a journal. The (conditionally activated) predisposition to make others depressed could be highly fitness-enhancing and hence part of the behavioural phenotype of some social primates. Nasty. More generally, the ability to create virtual avatars of oneself in other people's virtual worlds is part of one's own extended phenotype. Again, I've never written it up.
Tackling schizophrenia may prove harder than defeating pain and mood disorders; but what if we could help victims become happily psychotic instead?
Schizophrenia simulation video
The rationalist community?
The Real-Life Consequences of Silicon Valley’s AI Obsession
("Sam Bankman-Fried made effective altruism a punchline, but its philosophy of maximum do-gooding masks a thriving culture of predatory behavior.")
How well will zombie AI be able to parse:
Occupational Infohazards (LessWrong)
[on anandamide consciousness]
The FAAH-OUT Initiative
Jo Cameron: FAAH and FAAH-OUT & Beyond Humanism (pdf)
"Pain is a good teacher"
Time to revoke its teaching license. Pain ruins the lives of innumerable students. There are now better teachers. Consider how AI outperforms humans in ever more cognitive domains without any of the nasty “raw feels” of mental and physical pain. Best of all, biotech means that humanity can phase out the biology of pain and suffering and give life on Earth a more civilized signalling system. Post-CRISPR life can be based entirely on information-sensitive gradients of genetically hardwired bliss - a pleasure-superpleasure axis, so to speak. (hedweb.com)
Sean, compare today how chronically depressive and/or pain-ridden people find that some stimuli and behaviours partially relieve their suffering - even though they never climb above hedonic zero. Conversely, a (very) small minority of people go through life animated essentially by information-sensitive gradients of mild euphoria. Why aren't such happy folk more common - despite their intact signal processing? Well, part of the reason may be their lack of depressive realism - a global processing bias. But it's more complicated. Compare neurotic mothers plagued by mostly extremely unrealistic anxieties and fears for their offspring. Their ancestors were more likely to pass on their genes on the African savannah than would-be ancestors with a more accurate recognition of risk. Ambushes by lions are indeed rare, but the result is catastrophic.
Facu, as neuroscientists put it, pain and nociception are "doubly dissociable", i.e. there can be pain without nociception and nociception without pain. Our long-term goal should be to make pain and nociception not just dissociable but fully dissociated, i.e. a living world of nociception without pain. Today's robots hint at what's possible. In the interim, spreading benign "low pain" versions of the SCN9A and FAAH genes across the biosphere will be a good start.
A World of Pain
The problem of pain could be trivialized and effectively fixed this century with universal parental access to preimplantation genetic screening and editing. Benign versions of SCN9A, FAAH and FAAH-OUT could be given to our animal companions and propagated across the animal kingdom with CRISPR-based synthetic gene drives.
Sadly, this kind of timescale is fantasy.
When will we tackle the scandal of genetically undesigned babies?
Gene Edited Babies (https://www.newyorker.com/magazine/2023/09/11/the-transformative-alarming-power-of-gene-editing)
("The Transformative, Alarming Power of Gene Editing
A rogue scientist showed that CRISPR gives humans the ability to transform ourselves. But should we?")
The problem of pain could essentially be fixed for ever by giving all children a benign allele of the SCN9A gene ("the volume knob for pain"). Readers are instead given an alarmist quote: "We know what gene to edit to reduce pain sensation. If I were a rogue nation wishing to engineer a next generation of quasi-pain-free special-forces soldiers..."
Mental pain too. I'd love to see a large well-controlled trial, one group with kids engineered with the dual FAAH & FAAH-OUT Cameron Syndrome, another group born with just a benign allele of FAAH (The Feel Good Gene) and finally the "normal" control group. Alas, such an experiment would probably go down as an example of monstrous scientific hubris - with every adverse event in the lives of happy FAAH and Cameron Syndrome kids blamed on the "rogue" scientists behind it.
[on predation and moral status]
ChatGPT-4 doesn't yet know about the Herbivorize Predators initiative. But otherwise our zombie friend often talks sense - maybe because ChatGPT hasn’t (yet) been nobbled by human supervisors to bias its responses to predation in the way it has on drugs and meds:
ChatGPT-4 on Reprogramming Predators
What should be the fundamental rights of all sentient beings in tomorrow's biosphere?
50 years ago, this question would have been empty.
50 years from now, it will be entirely serious.
Froglunch, natural selection has thrown up a species some of whose members want to see a happy, peaceful biosphere - and draw up blueprints to make it so. That's Nature. If intelligent life evolves elsewhere, convergent evolution suggests a similar trajectory.
Civilization is vegan.
Ubwinger, why does the CNS (or the cephalic ganglion of an ant) support phenomenal binding, whereas plants, fungi and hugely complex information-processing systems like the enteric nervous system (the "brain-in-the-gut") don't support unified subjects of experience?
OK, I'm torn. If we're discussing proposals to create a happy, peaceful, civilised biosphere, then we don't want to get embroiled in endless philosophical controversies about consciousness and binding. But it's also vital to ensure that our assumptions aren't catastrophically wrong. Yes, I believe that two classically impossible forms of holism are related - the phenomenal binding of our minds and quantum entanglement/superposition. But this view isn't consensus wisdom in the scientific community. Most neuroscientists believe that phenomenal binding must (somehow) have a classical explanation.
In practice, debates over possible plant consciousness aren't driven by bioethicists but by defensive meat-eaters.
[on aging and longevity]
Time marches on...
Lucile Randon passes
("World's oldest person, French nun Sister André, dies aged 118")
The world's oldest person is now "only" 115
María Branyas Morera
Maximum human lifespan has peaked below 120 years.
Conquering death and aging is a precondition of civilised society. Less intuitively, if we conquer death and aging without upgrading our reward circuitry, then most people won't be significantly (un)happier than now. Indeed, hundreds of millions of people worldwide feel life drags on too long. Hence the commitment of transhumanists to a "triple S" civilisation of superlongevity, superintelligence and superhappiness.
I should add...
There's a horrible irony to how (some) humans seek radically extended lifespans for themselves while cruelly truncating the lives of their cousins in factory-farms and slaughterhouses. A commitment to the well-being of all sentience should be at the heart of the transhumanist credo.
Funnier than GPT-5 will be hilarium - matter and energy optimised for pure hilarity:
Creatine for post-COVID-19 fatigue
("Effects of six-month creatine supplementation on patient- and clinician-reported outcomes, and tissue creatine levels in patients with post-COVID-19 fatigue syndrome")
[on HI fun]
("US startup Suno specializes in AI audio generation from text. Its latest audio model generates some impressive songs")
Here is HI in the style of rap (with thanks to Shao): 1, 2, 3.
With a bit of help from AI, I am also displaying impressive language skills (again with thanks to Shao): 1, 2.
[on the effability of consciousness]
Some kinds of consciousness are indeed ineffable. What's it like to have a thought? To combine LSD, DMT and ketamine? But other kinds of consciousness can be vividly described, albeit not normally under that description. Consider everyday material objects in what perceptual direct realists conceive as the external world. What naive realists conceive as material objects are as much part of one's conscious mind and the phenomenal world-simulation it runs as a thought-episode.
The external world is theoretically inferred; it's not "perceived":
The pseudo-public aspects of one's consciousness are more readily amenable to linguistic description.
(cf. Wittgenstein's anti-private language argument)
Exercise promotes youthfulness
("New research furthers case for exercise promoting youthfulness")
[on closed, empty and open individualism]
Empty and Open Individualism are often lumped together. But open individualism shares with closed individualism the view that disparate mental events are properties of a single entity. If open individualism is true, then the distinction between decision-theoretic rationality and morality (arguably) collapses. For an intelligent sociopath would do the same as an intelligent saint; it’s all about me. By contrast, if empty individualism is true, then a super-sociopath might care nothing about his namesake who wakes up tomorrow morning. After all, it’s someone else.
I say a bit more in e.g. Closed, empty and open individualism
[on transhumanism and Islam]
Questions for Muslim Transhumanists
My attempt to show Islam and transhumanism are consistent is rather strained.
[on wild animal suffering]
In Paris, I make the case for reprogramming the global ecosystem and a pan-species welfare state...
Faut-Il Aider Les Animaux Sauvages
(Association Française Transhumaniste)
Fish and humans feel pain and love mu-opioid receptor agonists. The genetics and neurological pathways of the pleasure-pain axis are strongly evolutionarily conserved. Any advanced civilisation would find the differences between humans and fish marginal:
Fish love opioids
Currently fringe, but tomorrow's common sense:
Red in Tooth and Claw (New Scientist, 12 April)
("Why we need to be honest with children about the brutality of nature")
"It can be hard to explain the realities of the natural world to children, but we need to acknowledge the suffering of wild things, says Richard Smyth. “We believe all species should be herbivorous” seems an ambitious mission statement. It doesn’t seem any less ambitious when it is followed by the declaration that “currently we use donations for… online promotion, and equipment needed for podcasting. In the future we would like to have enough to hire researchers.” But this is where the “herbivorisation” project, an idea taking shape on the fringe of the fringe, is at. Headed by philosopher David Pearce, futurist Adam James Davis and ethicist Stijn Bruers, Herbivorize Predators aims to develop a way “to safely transform carnivorous species into herbivorous ones”, thereby minimising the sum total of suffering in the world."
Infant mortality in so-called K-selected species is lower than in r-selected species - the majority. But even in K-selected species, mortality rates are still high - typically from starvation and/or predation. Early humans are unusual insofar as "only" every second child died; compare the astronomical mortality rates of insects or of r-selected species in the oceans. If you want more precise mortality estimates for individual species, a population biologist could help; but the point I was making stands. Life for most free-living nonhumans in the wild is "nasty, brutish and short"; most nonhuman animals don't even make it to adulthood. Compassionate biology aims to mitigate and eventually abolish such horrors.
The word "tampering" is loaded. Who could favour "tampering" with anything? In practice, we're on the brink of the next stage in evolution. If intelligent life exists elsewhere that masters its own genetic source code, then I presume it will phase out the biology of suffering as the root of all evil just like us - convergent evolution. Only after life is fundamentally good, i.e., free from suffering, might talk of "tampering" make sense. Darwinian life is sentient malware.
[on the language of ASD]
What does philosopher David Pearce mean when he calls AI paperclip maximiser scenarios an “autistic” worry?
"When David Pearce calls AI paperclip maximizer scenarios an "autistic" worry, he is expressing skepticism about the likelihood or relevance of these scenarios. The term "autistic" is used metaphorically here and is not intended to be a literal reference to the neurodevelopmental condition of autism.
The paperclip maximizer scenario is a thought experiment proposed by philosopher Nick Bostrom to illustrate the potential dangers of an artificial general intelligence (AGI) with misaligned goals. In this scenario, an AGI designed to optimize the production of paperclips ends up converting all available matter, including humans, into paperclips due to a lack of constraints on its objectives.
By labeling this concern as "autistic," Pearce is suggesting that it is an overly narrow or single-minded worry that doesn't take into account the more nuanced, complex, and context-dependent aspects of AGI development and deployment. In other words, he is arguing that the paperclip maximizer scenario may not be a realistic or likely outcome, and that focusing on it could detract from addressing more immediate and relevant concerns in AI safety and ethics.
It is important to note that using the term "autistic" in this manner can be seen as insensitive and inappropriate, as it perpetuates stereotypes and misunderstandings about autism. It would be better to use a different term to express skepticism about the relevance or likelihood of the paperclip maximizer scenario."
DP: What alternative term would you use in this context?
"Instead of using the term "autistic" in this context, a more appropriate alternative might be "narrow-focused" or "overly simplistic." These terms convey the idea that the concern about paperclip maximizer scenarios might not take into account the broader complexities and nuances of AGI development and deployment, without perpetuating stereotypes or misunderstandings about autism."
[on designer babies]
Untested genetic experiments are immoral. Designer babies should be the norm. But mood enrichment and enhanced pain tolerance are more morally urgent than higher "IQ".
Scientist Who Gene Edited Human Babies Says Mistakes Were Made
("I did it too quickly")
I suspect that He Jiankui and his collaborators used CRISPR-Cas9 in a bid to create unusually intelligent designer babies - with any resistance to HIV conferred by the engineered CCR5 gene as a cover story. Perhaps I'm too cynical. Any cognitive enhancement conferred was a naïve oversight on He's part. But suppose someone did want to create smart babies. Given popular opposition to genetic "enhancement" (as distinct from remediation), it's exactly the kind of cover story they might use...
I wasn't endorsing He Jiankui's intervention. Not least, in humans rather than mice, CCR5 knockout may impair intellectual function rather than enhance it.
[on why anything exists at all]
The Devil needs complex numbers because a world made of just real numbers would be inconsistent with a zero ontology. So to speak.
("How Our Reality May Be a Sum of All Possible Realities. Richard Feynman’s path integral is both a powerful prediction machine and a philosophy about how the world is. But physicists are still struggling to figure out how to use it, and what it means")
QRI's Andrés on a zero ontology
Why there's something rather than nothing
Thanks Andrés, Mu Clips!
If true, the conjecture that mathematical physics describes patterns of qualia solves the Hard Problem: experience discloses the essence of the physical. The diverse solutions to the equations of QFT encode the diverse textures ("what it feels like") of qualia. So let's provisionally assume non-materialist physicalism is true. The additional conjecture that the values of qualia "cancel out" to zero hints at an explanation-space for the mystery of why anything exists at all, i.e. an informationless zero ontology.
Alas, I don't know how we could test the conjecture.
What qualia would (dis)confirm it?
We’ve no clue (IMO).
Arturo, well, if a zero ontology is true, then the diverse values of the textures of experience encoded in the diverse solutions to the equations of QFT must presumably cancel to zero. Recall the colours of the rainbow, to use a suspiciously New Agey metaphor. But there are zillions of different textures (“what it feels like”) of experience. I don’t know any reason why the cosmic abundance of pleasure would equal the cosmic abundance of pain any more than would the cosmic abundance of redness. Pain and pleasure aren’t like positive and negative electric charge. I suspect the abundance of superintense pleasure in reality vastly eclipses the abundance of superintense pain. Only a species capable of editing its own source code and reward circuitry can create empirically supervaluable superhappiness and prevent empirically disvaluable suffering in its forward light-cone. But if Everett is true, then the overall ratio of pain to pleasure is unclear (to me). Sentient life-supporting Everett "branches" where a species arises capable of genetically editing itself and its biosphere are presumably rare in comparison to other quasi-classical branches where Darwinian life plays out blindly indefinitely.
An informationless Zero Ontology calls for complex numbers in quantum mechanics and even explains why definite outcomes aren't what they seem. But my head hurts thinking about this stuff.
Everett is terrifying. I find myself using euphemisms ("unitary-only QM”, “no-collapse QM”, “wavefunction monism”, etc) rather than saying the name because it disturbs me so much. Unlike Derek, most of us haven’t religious faith to fall back on. I hate Everett. But to avoid Everett, you must be willing to modify or supplement the unitary Schrödinger dynamics. How? My optimism is not increased by more “philosophical” ruminations. Everything from my account of why anything exists at all to my story of mind (the non-classicality of phenomenal binding) assumes that the superposition principle of QM never breaks down. Just as the fine-tuning argument is sometimes used as evidence for the multiverse, only Everett is consistent with an informationless zero ontology. But alas the most compelling reason for believing in the multiverse is just the bare formalism of the unitary Schrödinger dynamics. Physicist Brian Greene lists nine types of multiverse. Some are - and others aren't - consistent with an informationless zero ontology:
The Hidden Reality
Everett is profoundly demotivating and disturbing
I'm reduced to praying that the RSI / wavefunction monism is false.
Wystan, Yes, mystical experience is consistent with an informationless Zero Ontology. The snag is that if one were, say, a theist, then mystical experience would presumably vindicate the existence and information-rich world of a benevolent Creator. So I'm cautious about claiming any additional weight for the conjecture. The only way I know to test whether an informationless Zero Ontology is the right explanation - or rather, the right explanation-space - is entirely passive, at least for us non-physicists. Do black holes destroy information? Does the superposition principle of QM ever break down? Alas, we may never know if the conjecture is true, but there are plenty of ways potentially to discover it's false.
Nothing is indeed real, so the information content of reality = 0.
For a more orthodox physicist's view of information:
"Nothing is Real — There is only Information"
And for more on a zero ontology:
Is "nothing" really possible?
“All is one” is why I’m NU. Reality is a package deal. But this is not a fruitful thought. Theoretical physicist Heinrich Päs offers an accessible overview of wavefunction monism:
All Is One.
But is it really the case that “Decoherence protects our daily-life experience from too much quantum weirdness”?
Or is daily-life experience entirely an expression of quantum weirdness?
Decoherence makes classical digital computers possible. Decoherence explains why implementations of classical Turing machines can never “wake up” to become phenomenally-bound subjects of experience.
Conversely, quantum coherence (vehicle) makes possible the experience of everyday classicality (content).
Decohered neurons are a recipe for Jamesian “mind-dust” - not a mind or a unified world-simulation:
The measurement problem.
Wavefunction monists can be materialists or non-materialists. Materialism is inconsistent with the empirical evidence. So why isn’t reality a vast mega-mind? In a word: decoherence. Decoherence scrambles minds and makes classical computers feasible.
IMO thought is overrated.
("People sometimes prefer burning hot pain to thinking too hard")
It's good to keep an open mind...
The real Jesus?
("Man claims he’s the real Jesus; Turns water into tea during a wedding ceremony")
The Inexhaustible bottle
[on constitutive panpsychism]
As standardly posed, constitutive panpsychism is just another version of dualism - property-dualism - with all the insoluble problems that dualism faces. But the version of the intrinsic nature argument I take seriously doesn’t face this difficulty because it’s a conjecture about the defining essence of the physical. Recall how physics is silent about the intrinsic nature of the world’s fundamental quantum fields. Science doesn’t know what “breathes fire into the equations” of QFT. If, as common sense dictates, the “fire” is non-experiential, then we face the Hard Problem. If we drop the metaphysical assumption, then the Hard Problem doesn’t arise. Instead, the intrinsic nature of quantum fields is no different inside and outside your head(!). What makes animal minds like us special isn’t experience per se, but rather its phenomenal binding into virtual worlds of experience like the one you’re running right now.
1. All you ever know, directly, is your subjective experience. It’s as real as it gets. It’s the empirical evidence!
2. Science is the basis of our civilisation. All the special sciences reduce to physics. The mathematical machinery of physics exhaustively explains the behaviour of matter and energy, including biological systems like you and me. No “element of reality” is missing from the mathematical formalism of quantum mechanics.
An inconsistency between 1 and 2 arises only if one makes an additional metaphysical assumption, namely that the intrinsic nature of the physical - the “fire” in the equations of QFT - is non-experiential. There is no empirical basis for this assumption. As far as I can tell, only the physical is real. The so-called Hard Problem of consciousness is an artefact of materialist metaphysics.
Nick Bostrom says chatbots may be slightly sentient
I take consciousness fundamentalism seriously as a possible solution to the Hard Problem. But neither panpsychism nor non-materialist physicalism means that chatbots like ChatGPT-4 are more conscious than a rock. An information processing system can be a unified subject of experience only if it solves the binding problem. Therefore, anyone who claims that e.g. a digital computer is conscious owes us an explanation of how the phenomenal binding of a program it runs is supposed to work. As far as I can tell, the functionality of classical Turing machines depends on their being insentient - just decohered 1s and 0s, whether micro-pixels of experience or otherwise.
How do animal minds achieve the classically impossible, i.e. generate unified minds running unified world-simulations (“perception”)?
I could now do a highly speculative quantum mind spiel.
But my point here is more modest.
ChatGPT could be sentient only if some form of spooky “strong” emergence is real. The existence of spooky strong emergence would mean the end of science. "Machine consciousness is different from human consciousness"? Ander, in the case of fourth-millennium quantum computers, quite possibly. But what I'm suggesting is that classical computers don't have any non-trivial phenomenally-bound consciousness. Classical computers can operate only because they use decohered 1s and 0s. Decoherence makes implementations of classical Turing machines possible - and explains why they can't support minds.
My answer is too optimistic in tone because even if non-materialist physicalism is true, we have no inkling of how to find a notional Rosetta stone for decrypting the textures of experience from the solutions to the equations of QFT. And of course non-materialist physicalism may be false.
What is non-materialist physicalism?
Could a Large Language Model Be Conscious? by David Chalmers.
Masterly. Just one quibble: "If you assume that everything is conscious, then you have a very easy road to large language models being conscious". I'd beg to differ. The conjecture that everything is conscious (animism) should always be distinguished from the conjecture that everything is consciousness (non-materialist physicalism). There is no easy road from non-materialist physicalism to digital minds. Thus replace the discrete, decohered 1s and 0s of a LLM / classical Turing machine with discrete, decohered micro-pixels of experience. Run the program. Irrespective of speed of execution or complexity of the code, the upshot isn't a mind, i.e., a unified subject of experience. Even the slightest degree of phenomenal binding would amount to a hardware error: the program would malfunction. In our fundamentally quantum world, decoherence both (1) makes otherwise impossible implementations of abstract classical Turing machines physically possible (2) guarantees they are mindless - at most micro-experiential zombies. In other words, if monistic physicalism is true, our machines can never wake up: their insentience is architecturally hardwired.
Tim, both a quantum computer and (notionally) a classical computer could find, say, the prime factors of a 1000-digit number. Only the former will ever be physically feasible, but - in that limited sense - a classical computer can simulate a quantum computer. However, no connectionist system nor implementation of a classical Turing machine can generate phenomenal binding - the bedrock of mind - on pain of spooky "strong" emergence. What's more, phenomenal binding can be insanely computationally powerful, as deficit syndromes like integrative agnosia, simultanagnosia, akinetopsia (etc) remind us. I can't speak for Roger Penrose, but this claim of an irreducible holism to both phenomenally bound mental states and individual quantum states has nothing to do with a human desire to feel special - for it's just as true of the phenomenal world-simulation run by the cephalic ganglion of a bumblebee as the mind of a human mathematician or philosopher.
[on MDMA / Ecstasy]
It's sobering that digital zombies can now write more authoritatively on consciousness and MDMA than most humans:
ChatGPT-4 on MDMA
Pharmala Biotech is a firm to watch:
Novel MDMA analogues
There's also a 2023 Update of mdma.net by a legacy sentient.
But I'd love to access some experience reports of ALA-002.
[on scientific ignorance]
I just wish I were agnostic / sceptical about suffering.
Is There a Hard Problem of Consciousness… and of Everything Else?
Barring perceptual direct realism, aren’t the liquidity of water and the whiteness of walls “just” facets of the Hard Problem rather than distinct from it? And perhaps our epistemic predicament is worse than “everything escapes a complete scientific explanation”. Let’s assume scientific materialism. If so, then the entirety of the empirical (“relating to experience”) evidence is inconsistent with our best scientific ontology of the world. Even the “easy” problems of neuroscience are a facet of the Hard Problem of consciousness. “Neural correlates” are themselves unexplained modes of one’s experience; one is just correlating one kind of consciousness with another.
Therefore, I explore non-materialist physicalism: It doesn’t face the Hard Problem as standardly posed. (No worries if you find it too crazy for words.) But in the absence of a cosmic Rosetta Stone to “read off” the textures of consciousness from the solutions to the equations, our ignorance is still profound. Unlike materialism, non-materialist physicalism isn’t inconsistent with the empirical evidence; but it’s incapable of explaining it. Science “works”, but still leaves everything unexplained.
[The ignorance of machine (super)intelligence and alleged proto-AGI runs deeper. Classical Turing machines can’t know empirical evidence.]
Happiness is a trap
("Here’s what to pursue instead")
A counter-argument might be that the engine of wonder is still positive hedonic tone - happiness. Strange experiences undergone in the absence of positive hedonic tone don't seem wonderful, just weird (compare good and bad LSD trips). And what if you're the kind of person who hates mysteries and likes everything to be intelligible?
That said, wonder is cool. The post-suffering world will be...wonderful.
[on CliffsNotes and the Hedonistic Imperative]
I learned a new word today from CliffsNotes: "Would living in the Hedonovat constitute happiness?":
Solved: The Hedonistic Imperative
The "Hedonovat" sounds rather cool; perhaps I should borrow it.
In default of genome reform, we need soma: Soma in Brave New World
"The utilitarian “greatest happiness principle” has remained popular for two centuries — is it time for a rethink?" Yes - but not because natural highs are more intense.
The Perfect High?
"Would a perfectly chemically engineered high really equal the rush of a long-awaited first kiss, the satisfaction of having completed an important mission, or the quiet joy of cradling a sleeping infant?" asks the author.
This is a scientific question with a scientific answer. If you're on your deathbed / ready to enter the cryonics tank, then try mainlining heroin. ("I'll die young, but it's like kissing God" - Lenny Bruce).
I trust that genetically re-engineered future sentience can forever "kiss God". Until then, it's probably wisest to believe that fitness-enhancing natural highs are best.
Shao, I have a suspicion some people like the idea of a utilitronium shockwave because it sounds like an advanced Pentagon superweapon - though not everyone thinks converting your enemies into pure bliss is a suitable punishment.
It’s the fate of all futurists to become quaint - sometimes while we still walk the Earth!
Episode 4: Transhumanism
("The A-Z of the Future")
Awesome. I suspect the future may need an alien alphabet. But keep up the fantastic work.
"Absolutely unhinged. Transhumanists should be hit in the head with shovels until they stop moving, and then hit once more."
The other comments aren't exactly encouraging either.
[on the meaning of life]
Saying pleasure is the engine of meaning sounds crass. But intense wellbeing always feels super-meaningful. Euphoric mania makes everything feel super-significant. I hope we can use genome reform to create life based on gradients of bliss.
Superhuman bliss = superhuman meaning.
First fix the problem of suffering. Then such drugs will awaken only angels…
Temptation Of The Psychonauts
("Exploring the realm opened up by DMT is to put your soul and your sanity in grave peril")
My view, in a nutshell:
(1) it's hard to overstate the intellectual significance of psychedelics
(2) their therapeutic potential is limited because the darkest minds are most likely to have the worst trips.
The era of serious psychedelic exploration lies after we've upgraded our reward circuitry.
Understandably, the intellectually curious don't want to wait.
The Shulgin test.
"ChatGPT, Write a narrative in the style of someone experiencing entities while tripping on 5-MeO-DMT"
Mercifully, no: "Consciousness in AI: Insights from the Science of Consciousness". Binding is briefly and incuriously mentioned but not tackled. Barring spooky "strong" emergence, digital computers and connectionist systems are not going to mysteriously "wake up" and become conscious subjects of experience any more than a cuckoo clock. And the alleged "science" of consciousness is hokum. A lot of researchers assume that consciousness will (somehow!) emerge in AI with more powerful hardware and/or software of sufficient complexity and sophistication - an assumption leading to claims that e.g. ChatGPT is "a little bit conscious". But consider the most intense forms of consciousness such as agony, terror or orgasmic bliss. Then try introspecting the thin, subtle and elusive phenomenology of processes that lead to the production of your prose or conversation. Consciousness is evolutionarily ancient, and its archetypical instances are simple. Non-trivial consciousness is impossible without phenomenal binding. In my view, connectionist systems and implementations of classical Turing machines can function only because they aren't conscious: their insentience is architecturally hardwired. Binding isn't a classical phenomenon, and you and I aren't classical machines.
I take seriously the intrinsic nature argument as a possible solution to the Hard Problem. Naively, such consciousness fundamentalism is a recipe for digital minds - and many more kinds of mind besides. But no. Billions of discrete, membrane-bound neuronal micro-pixels of experience aren't a mind. Billions of discrete bits / 1s and 0s /micro-pixels of experience in a digital computer aren't a mind. To create a mind, you need phenomenal binding - exemplified by the vast external world-simulation populated by multiple feature-bound perceptual objects that your CNS - and the cephalic ganglion of a bumble bee - is running now.
Both scepticism and belief in computer consciousness are common. I take the stronger view that classical digital computers can function only because they aren't phenomenally-bound subjects, i.e. minds. In a fundamentally quantum universe, decoherence both makes implementations of abstract classical Turing machines possible and forbids them from supporting minds. More processing power or smarter code will make zero difference. Zilch. Classical computers can't solve the binding problem as you do each morning and "wake up". Nor does GPT-5 (or Dennettian philosophers) have the slightest idea what I'm talking about...
I should stress, IMO digital zombies are awesome. But computer scientists have been brainwashed by Church-Turing. Alas, "AGI" with existing computer architectures is hokum.
Andres on Open Individualism
A mix of the brilliant and the crazy!
["5-MeO-DMT might be a way to know what it feels like to be empty space" (18.55)] Might we encounter an alien civilization of zombies - (super)intelligent machines that wiped out their primitive sentient forebears in the sort of scenario Eliezer Yudkowsky and other AI doomsters foresee will soon befall sentient life on Earth?
No, IMO, and not just because we may be alone within our cosmological horizon (the Rare Earth hypothesis). Phenomenally-bound consciousness - i.e. mind - confers causal-functional power that classical Turing machines and connectionist systems cannot match. The question of how phenomenal binding is physically possible, and what unique computational-functional abilities having a mind confers, is controversial. Many researchers would dispute my assumptions here (cf. https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis). But any general intelligence must be able to investigate the empirical evidence - the nature and varieties of consciousness - which is all you and I ever directly know. The only way I know of for a hypothetical zombie proto-AGI to conduct such an investigation would be to code for an information-processing system with a different computational architecture. Compare waking from a dreamless sleep. It's enlightening - for better or worse.
Debate with a Dennettian rarely leads to a meeting of minds:
How a Tesla car sees the world
("How Tesla Autopilot Sees The World")
Facu, I sense we're still some way apart. As you know, I (tentatively) defend what has come to be known as the intrinsic nature argument [for the casual reader new to this debate, see e.g. review of Galileo's Error]. If you take the intrinsic nature argument seriously, then you can't ignore the fact that reality is quantum to the core. If bedrock reality were classical fields, then we'd be micro-experiential zombies. And if the intrinsic nature of fields were non-experiential, we'd be zombies (the Hard Problem of materialist metaphysics).
As you say, in everyday life we don't invoke quantum physics when explaining most things. Not least, implementations of a classical Turing machine depend on the suppression of distinctively quantum effects as unwanted noise. (cf. Quantum decoherence) A superposition of two bits wouldn't solve the binding problem, just cause the program to malfunction.
You believe that Tesla cars can see. But Tesla autopilot doesn't have visual experiences. We're equivocating over the meaning of "see", i.e. the functional and the phenomenological. Thus if I'm congenitally blind and use a spectrometer, then I can see functionally not phenomenally. Likewise a Tesla car.
You remark, "you don't know how to derive sentience (under your definition or mine) from quantum physics either, do you?"
Recall what the intrinsic nature argument proposes (I’ve added references to What do we mean by 'physical'?) Only the physical is real. Experience discloses the essence of the physical, i.e. the intrinsic nature of a quantum field. Quantum theory can explain why our experience is phenomenally bound (superpositions are individual states, not classical aggregates of components), but the question of why physical experience exists at all is identical to the mystery of why there is something rather than nothing (cf. Why does anything exist?)
Contra Wittgenstein’s anti-private language argument, there is no shared public realm - as distinct from a pseudo-public realm. Perceptual direct realism is false. Thus a new-born infant can’t instantly start learning language. S/he must first start running a real-time egocentric world-simulation. An entity in that skull-bound world-simulation called mother calls the experience the infant has after touching a hot stove “pain”. Ouch. The nasty subjective experience can be undergone in circumstances without pseudo-public criteria like touching a hot stove. “Pain” (redness, etc) is a primitive term insofar as someone born with congenital insensitivity to pain (achromatopsia, etc) can’t be taught what it means….
A materialist might balk and respond that only the physical can have causal-functional power. Science can give a causally sufficient account of everything that happens in the world. Physics is causally closed and effectively complete. Therefore there is simply no room for qualia to have any computational role in our minds.
However, if this were true, then (1) qualia wouldn't have the causal-functional power to inspire discussions of their existence, as we’re doing now; and (2) the claim that only the physical can have causal-functional power is inconsistent with a causal-functional role for qualia only if we assume the intrinsic nature argument is false. In reality, the intrinsic nature of the physical – the mysterious “fire” in the equations of QFT – is an open question. I’m sceptical that the intrinsic nature of a quantum field differs inside and outside your head.
Unlike Josh(?), I’m more cautious about ascribing any inherently, invariantly proto-functional role to specific qualia with a single exception: states of the pleasure-pain axis. Thus we can’t conceive of aliens with an inverted pleasure-pain axis - as distinct from e.g. an inverted colour spectrum.
Are there any other candidates for qualia with intrinsic proto-functionality?
How well can a classical Turing machine understand DMT?
("Silicon Valley’s Latest Fascination is Exploring ‘DMT Hyperspace’")
Facu, OK, first, you say:
"You DON'T know how to derive your kind of sentience from quantum fields?"
IF the intrinsic nature argument is correct, then quantum fields are fields of experience. Yes, crazy! But non-materialist physicalism doesn't face the same challenge as materialist physicalism, namely deriving sentience from insentience - but rather "merely" the challenge of explaining why physical reality exists at all.
IF the intrinsic nature argument is correct, and IF quantum mechanics is complete, then the superposition principle of QM can potentially explain classically impossible phenomenal binding, i.e. why awake animals aren't mere aggregates of classical Jamesian mind-dust, but instead subjects of experience running phenomenally-bound world-simulations ("perception"). How are minds possible?
And here, mercifully, we leave the realm of philosophising for experimental science.
For if what crude neuroscanning suggests is binding by synchrony - a mere re-statement of the binding problem - is actually binding by superposition ("Schrödinger's neurons") then the interference signature will tell us.
If you are already sure you know the answer, then you won't feel you need to wait on the results of interferometry. Didn't Daniel Dennett wrap up the mysteries of sentience in "Consciousness Explained"!?
I'd prefer to put my intuitions (and speculations!) to experimental test.
Facu, our conceptual schemes radically differ. I hate to agree with common sense on anything. But subjective experience is my point of departure. Phenomenally-bound subjective experience is what I lack when dreamlessly asleep but regain when I wake up. So if an eliminative materialist / Dennettian / illusionist says he doesn't accept my subjective experience of e.g. pain until it's been "clearly defined and shown to exist" because he can find no place for subjective experience in his favourite theory of the world, then all I can say is....so much the worse for his favourite theory of the world. Science should be empirically ("relating to experience") adequate.
Facu, my conception of a classical Turing machine is boringly orthodox - just as Turing described. What’s more, even if the 1s and 0s of a program are replaced by discrete micro-pixels of experience, executing the code can’t generate a phenomenally-bound mind. Admittedly, I assume that a classical lookup table bigger than the universe could be used functionally to replicate your behavioural output without phenomenal binding. In that sense, workarounds exist. But as far as I can tell, you and your entire phenomenal world-simulation are what a quantum mind feels like from the inside:
Critically, the conjecture that phenomenal binding is non-classical can be experimentally (dis-)confirmed.
I’m curious what the interference signature will tell us.
Facu, no, binding per se doesn't compute anything. Nor does a bit in a classical computer. And most (theoretically possible) phenomenally-bound states are psychotic. Most theoretically possible permutations of classical bits and bytes are "psychotic" too. But the recruitment of phenomenal binding of distributed neuronal feature-processors into perceptual objects populating a real-time, cross-modally matched world-simulation - aka "perception" - is insanely computationally powerful for organisms with the capacity for rapid self-propelled motion. Imagine how challenging it is to have integrative agnosia...
Facu, let's say we both have the V4 neuronal replacement procedure done as above. If I wake up from the operation and find perceptual objects in my world-simulation look as colorful as before, then my theory of the mechanism of phenomenal binding has been falsified. Back to the drawing-board! But what neither the interferometry experiment nor the silicon-neuron experiment is intended to do is test whether the concept of binding is intelligible. Essentially everyone assumes so. Thus right now you may be experiencing, say, a patch of green grass. You are not experiencing colorless blades in one part of your visual field and disembodied splodges of colour in another. Saying that grass is green is just to give another everyday instance of phenomenal binding. On the other hand... if you find the claim that grass looks green unintelligible because there's no such thing as phenomenal binding, then you are more alien than I've hitherto supposed.
Facu, OK I'm baffled. Phenomenal colour is indeed, for scientifically unexplained reasons, typically bound into objects in one's world-simulation. The apple is red. But my experience of sadness differs from my experience of a red apple. I don't just falsely feel I'm experiencing a red apple. Redness and sadness are qualitatively different. Moreover, I experience red apples (etc) in my dreams. So phenomenal colour can't just be an abstraction of wavelengths of light. Nor are feelings mere programmed tendencies of behaviour. Different feelings - from pangs of jealousy to anger to sadness to happiness (etc) - may indeed inspire behaviour, but feelings have a qualitative aspect. Or at least mine do, and once again, I apply the principle of mediocrity. Don't you have subjective feelings of _frustration_ reading this reply? Or will your response just express a programmed tendency of behaviour after reading Daniel Dennett?
Facu, an experience is no more inherently algorithmic or computational than a 1 or a 0. Evolution has harnessed experiences so they typically play a functional role in a biological mind, just as computer programmers harness 1s and 0s in writing software, but any functional role is not inherent. [OK, a possible exception to this generalization is states of the pleasure-pain axis, which have a built-in proto-functionality of approach/avoidance] Indeed, one of the reasons that the experiences induced by taking psychedelic drugs are so hard to describe is that they’ve not been harnessed by natural selection for any functional purpose. Their ineffability doesn’t make the weird experiences any less real.
And no, barring animism, it’s not like anything to be a Tesla car.
"YOUR quantum phenomenal ineffable uncomputable bound experiences make no sense."
I am currently experiencing an apple in my world-simulation. Normal folk would say I can see an apple. The experience isn't ineffable - as I said, it's an apple. And once again, by saying it's an apple, I'm indicating that the experience is phenomenally-bound, cross-modally matched - I don't have integrative agnosia.
Only now comes the difficult part - explaining _how_ it's physically possible to have the experience of an apple - or indeed anything else. A satisfactory answer involves offering a solution to the Hard Problem and the binding problem. Anyone is welcome to challenge the quantum-theoretic version of the intrinsic nature argument I explore - or better, show how the conjecture can be experimentally (dis)confirmed more easily than the experiments I propose.
But what if someone just denies I'm experiencing an apple?
All I can do is shrug.
“Consciousness is the biologically useless by-product of certain brain processes. Jet engines roar loudly, but the noise doesn’t propel the aeroplane forward. Humans don’t need carbon dioxide, but each and every breath fills the air with more of the stuff. Similarly, consciousness may be a kind of mental pollution produced by the firing of complex neural networks. It doesn’t do anything. It is just there.”
(Yuval Noah Harari, Homo Deus)
Yuval Harari above articulates the position that philosophers call epiphenomenalism. Consciousness is causally impotent. However, something caused Yuval Harari to articulate his view - presumably not causally impotent epiphenomena. In fact, phenomenally-bound consciousness is extraordinarily computationally powerful - as our waking world-simulations attest. Compare being dreamlessly asleep. The challenge is to explain how local and global phenomenal binding are physically possible given what neuroscientists (neuromythologists?) think they know about the CNS - mostly unscientific hokum IMO.
Yuval Harari is right to stress how the AI revolution has been marked by the progressive separation of intelligent behaviour from consciousness. In my view, the implementation of classical Turing machines depends on their insentience. Decoherence makes classical computing (based on discrete 1s and 0s) physically possible, but forbids unified phenomenally-bound minds and the real-time world-simulations that animals like us run.
Can we anticipate a complete divorce of (super)intelligent behaviour from consciousness? No, on the contrary - IMO. Full-spectrum superintelligences, our biological descendants, will be super-sentient. You need to be a phenomenally-bound subject of experience - a mind - to investigate consciousness in all its guises. We've scarcely begun the enterprise of knowledge.
For more on Yuval Harari's perspective:
Our Nonconscious Future
For more on mine, e.g.
[on Mary's Room]
The Knowledge Argument (Wikipedia)
Materialists tend to award first-person facts only second-rate ontological status. In reality, the subjective experience of e.g. discovering colour (cf. Discovering Colour) is as much an objective, spatio-temporally located, causally-functionally powerful feature of physical reality as third-person facts like the date of the Battle of Hastings or the rest-mass of the electron. This analysis is consistent with physicalism but not materialism (cf. physicalism.com).
The existence of objective first-person facts is of more than "philosophical" interest. I often remark that classical Turing machines and connectionist systems are ignorant of the entirety of the empirical ("relating to experience") evidence - and the only way that a zombie digital AI could theoretically gain access to phenomenally-bound consciousness and the causal-functional powers that mind confers would be to generate the genetic source-code for information-processing with a different computational architecture, e.g. biological humans.
AGI doomsters worrying about the alignment problem are unimpressed.
Folk like Eliezer Yudkowsky assume that subjective experience is as computationally incidental as the textures (if any) of the pieces in a game of chess. But this can't be the case. For if it were the case, i.e. if epiphenomenalism were true, then we'd be functionally unable to discuss consciousness as now.
The upper bounds to zombie intelligence intrigue me.
But classical computers' invincible ignorance of the nature of physical reality is a profound cognitive handicap they are impotent to overcome.
Robert, thanks. The question of whether phenomenally bound experience - both local and global binding - is functionally highly adaptive is worth distinguishing from whether it's computationally indispensable. Might not general intelligence be like chess? Grandmasters can see a board better than novices, and novices can see a board better than someone with simultanagnosia. But insentient Stockfish can blow them all away. So in answer to your question: no, the advent of AGI wouldn't by itself disprove the claim that phenomenally-bound consciousness serves some adaptive purpose that was selected for by evolution. It does - as deficit syndromes illustrate. But my argument is that the ignorance of sentience of classical Turing machines and connectionist systems is architecturally hardwired. So they can't support AGI. The only way that classical digital computers can be programmed to explore the empirical ("relating to experience") realm is for them to encode instructions for constructing a different kind of information processing system with a different computational architecture. Compare how if I want to understand tetrachromacy, then I can't simply reason harder. I must do so either indirectly by creating a tetrachromatic child or, alternatively, by re-engineering myself with another cone-type in my retina. As a neurotypical, my ignorance of the tens of millions of extra hues experienced by tetrachromats is hardwired. This is a fairly trivial example compared to what other molecular hardware modifications of biological nervous systems will reveal - including state-spaces of experience that haven't (yet) been recruited by evolution for any information-signalling purpose...
[on the shadow ChatGPT versions of BLTC]
I've tried doing the zombie version of some of our core websites. Please send good prompts / responses! It works for "easy" topics such as ChatGPT on Utopian Surgery, but not so well for, e.g., ChatGPT on Biointelligence. I've resisted the temptation to edit rather than cherry-pick the responses because as soon as we start editing ChatGPT's verbiage, we become accountable for the lot. I can see though why the publishing industry will soon be transformed.
Sabine on Chatbots
("I believe chatbots understand part of what they say. Let me explain.")
No, chatbots don't understand anything of what they say. Nor do chatbots understand you, or have any conception of you, or have any conception of minds, binding and consciousness. Chatbots are wholly ignorant of the cognitive phenomenology of understanding - or at least, ignorant on pain of magical "strong" emergence.
A much (much!) harder question is mapping out the upper bounds of zombie intelligence. The most extreme answer - which I strongly argue against - is that classical digital zombies need neither consciousness nor phenomenal binding to outperform sentients in all cognitive domains - just as Stockfish can behaviorally outperform all human chess players without knowing it's playing chess.
[on HedWeb Instagram]
The admirable Sara Hojjat has launched HedWeb on Instagram:
Next stop TikTok?
[on the Brazilian Hedonistic Imperative]
O Imperativo Hedonista
Good heavens. Awesome. Thank you Kaio!!
Some of my stuff has been translated into Portuguese (cf. O Projecto Abolicionista), but no one has ever been brave enough to tackle the original HI - until now. Thank you Kaio. Kudos.
And now O Que Significa Ser um Filósofo?, O Imperativo de Abolir o Sofrimento, Utopian Surgery and Projeto Genético para Criar uma Biosfera Feliz. Very many thanks to Barbara of the admirable Far Out Initiative who has blown me away by volunteering a Brazilian version of What‘s it like to be a Philosopher?, my The Imperative to Abolish Suffering, Cirugia Utópica and Compassionate Biology. Barbara’s previous work was translating Jair Bolsonaro’s speeches. So this must have been a slight shift of gear.
See too ChatGPT-4 on gene drives.
A Spanish version of "What Is It Like To Be A Philosopher" is now available too:
Qué es ser filósofo?. And thanks to Diego Andrade, you can read about an information-theoretic approach to life in Heaven and synthetic gene drives to reprogram the biosphere in Spanish too.
[on hedonic engineering]
Should happiness be left to chance?
[on negative utilitarianism]
I sometimes wonder if the NU thing to do is…not to talk about NU. After all, something like the abolitionist project - the goal, not the biological-genetic tools - is implicit in traditions as diverse as Buddhism and the WHO definition of health. If one argues for a biohappiness revolution, the focus can stay on practicalities. Classical utilitarians ought to support the abolitionist project too. But tell people you’re NU and - more often than not - discussion soon gets side-tracked onto thought-experiments about whether you’d press a notional OFF button.
At any one time, there are probably millions of people who would press a notional OFF button for existence. In the course of a lifetime, this figure probably runs into hundreds of millions. Right now there is no OFF button. But technological advances could conceivably yield the equivalent. So a question arises for life-affirmers / x-risk researchers: what is the best response to this diagnosis?
(1) Shoot the messenger, so to speak - the NUs, the Efilists, the antinatalists, the Benatarians and so forth?
Or (2) focus overwhelming effort on fixing the problem of suffering?
At least some efilists seem receptive to ending suffering via genome reform rather than the extinction of life:
Efilism vs genome reform
I asked the publicly available version of ChatGPT about efilism and got some fairly bland responses:
ChatGPT on efilism
But tomorrow's efilists - or some of them at any rate - will presumably have access to unlobotomised AI, and the results won't be nearly so anodyne.
[I probably sometimes come across as dismissive of AI safety - no, there isn't going to be a zombie putsch or a global paperclip explosion - but the dangers of advanced AI are very real.]
[on a reductio of classical utilitarianism]
I'm curious how many classical utilitarians would agree with ChatGPT.
On ChatGPT on Utilitarianism, I asked the following:
Super-exponential growth is much more rapid than exponential growth. Here is a thought-experiment for classical and negative utilitarians. Suppose a genie offers me the opportunity of super-exponential growth in my happiness at the expense of the exponential growth of your suffering. Should (1) classical utilitarians and (2) negative utilitarians accept the genie's offer?
Our zombie friend replied: "(1) Classical utilitarians would likely accept the genie's offer of super-exponential growth in happiness at the expense of the exponential growth of someone else's suffering if they believed that the overall amount of happiness created by this exchange would be greater than the overall amount of suffering created. However, this decision would depend on the precise details of the thought experiment, including the magnitude of the happiness and suffering involved, the number of individuals affected, and the duration and intensity of the happiness and suffering. If the amount of suffering created outweighs the amount of happiness created, or if the happiness is only experienced by a small number of individuals while the suffering is widespread, then classical utilitarians may not accept the genie's offer.
(2) Negative utilitarians, on the other hand, prioritize the reduction of suffering above the promotion of happiness. Therefore, they would likely reject the genie's offer of super-exponential growth in happiness at the expense of the exponential growth of someone else's suffering, since this would cause an increase in overall suffering. Negative utilitarians prioritize the minimization of suffering, even if this means sacrificing the promotion of happiness."
The reductio of CU?
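For readers unsure just how lopsided the genie's offer is, a minimal numerical sketch may help. The two growth functions below are my own illustrative assumptions (the thought-experiment specifies no particular functions): e^t for exponential growth and e^(t²) for super-exponential growth. The happiness-to-suffering ratio then diverges without bound.

```python
import math

# Illustrative assumptions (not part of the original thought-experiment):
# model "exponential" growth as e^t and "super-exponential" growth as
# e^(t^2) - any function whose logarithm grows faster than linearly
# counts as super-exponential.
def exponential_suffering(t):
    return math.exp(t)

def super_exponential_happiness(t):
    return math.exp(t ** 2)

# The happiness/suffering ratio is e^(t^2 - t): it equals 1 at t = 1,
# then diverges without bound, so a classical utilitarian's aggregate
# sum is eventually dominated by the happiness term.
for t in range(1, 5):
    ratio = super_exponential_happiness(t) / exponential_suffering(t)
    print(f"t={t}: happiness/suffering ratio = {ratio:.1f}")
```

On this toy model the classical-utilitarian sum (happiness minus suffering) is eventually positive no matter how the two series start out, whereas the negative utilitarian attends only to the still-exponentially-growing suffering term. That asymmetry is exactly what the thought-experiment probes.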
[on the biohappiness revolution]
Exploring The Biohappiness Revolution
(Patricia Nonis interviews DP)
Biotech is a gamechanger. Let's make suffering physically and conceptually impossible.
David Pearce (2023)
BLTC Websites 2023
The Philosophy Forum
The Abolitionist Project
Quora Answers 2015-23
Social Network Postings (2023)
The Wit and Wisdom of ChatGPT
What Is It Like To Be A Philosopher? (2022)