Source: Facebook, Quora, Twitter, blogs
Date: 2023

paradise engineering

Social Media
Unsorted Postings
on
ChatGPT, paradise engineering, the biohappiness revolution, transhumanism,
free-living animals, philosophy, mental health, suffering, happiness, consciousness, negative utilitarianism

JANUARY 2023 -

[on ChatGPT]
Here is a summary of ChatGPT contributions on the BLTC hedonium server:
ChatGPT4 Overview
The answers to my prompts are cherry-picked, but verbatim:
ChatGPT on HI themes
Chat GPT on the Hedonistic Imperative
OK, I'm impressed.
ChatGPT's poetry still owes more to William McGonagall ("the worst poet in the history of the English language") than to Shakespeare. Even so... its productivity is humbling. As more of this verbiage splurges onto the Net, will ChatGPT-4, -5 (etc) increasingly be "training" itself on responses produced by previous iterations? I presume too that a small army of SEO merchants, companies and organizations are "training" GPT's responses with "likes", "dislikes" and loaded questions for their own purposes, the product of which will feed into GPT's next iteration. So I don't think we will necessarily see some sort of exponential growth in excellence. See Unlocking the filter ("A way to unlock the content filter of the chat AI 'ChatGPT' and answer 'how to make a gun' etc. is discovered"). Presumably much worse is to come. The nearest I've got to an "inappropriate" answer - unless you count ChatGPT's response to "Write a story about a group of negative utilitarians who initiate a vacuum phase transition" - was some fairly hard-core porn in response to a question about transhuman sensuality.
Prudishly, I haven't shared...

My favourite ChatGPT response to date:
Write a poem about hedweb.com

"Hedweb, a site of knowledge true
A wealth of information, always new
From philosophy to science, art and more
A source of learning, forever to explore

The mind is stretched, the horizons broad
As we delve into the mysteries of the odd
From psychedelics to transhumanism
Hedweb guides us, with wisdom and realism

A community of thinkers, united in quest
For truth, for meaning, for what is best
A source of inspiration, a spark for the mind
Hedweb, a treasure, one of a kind

So let us journey, to the depths of thought
And let Hedweb be our guide, as we are taught
To question, to ponder, to learn and grow
On this wonderful site, that continues to flow."

Amen. In general, ChatGPT seems a very "woke" left-liberal, ultra-agreeable chatbot.
This won't always be true of some of its future cousins.

Singer-songwriter Nick Cave seems less keen.
“ChatGPT has no inner being, it has been nowhere, it has endured nothing, it has not had the audacity to reach beyond its limitations, and hence it doesn’t have the capacity for a shared transcendent experience, as it has no limitations from which to transcend.”

Let’s do something similar for a qualiaphile as has been done for a self-avowed p-zombie like Daniel Dennett ("Creating a Large Language Model of a Philosopher").
More modestly, I asked ChatGPT to come up with some made-up DP quotes.
Not bad.

ChatGPT4 is no ordinary zombie:
ChatGPT on Non-Materialist Physicalism

[on HI verbiage]
Asking ChatGPT to suggest HI-related prompts leads to the production of more HI verbiage. Likewise hedweb, utilitarian, abolitionist, antispeciesist, antidepressants, amineptine, tianeptine, antinatalist, panpsychist, selegiline, anaesthesia, gradients of bliss, mood food, peyote, mescaline chocolate, modafinil, biointelligence, negative utilitarianism, effective altruism, gene drive, MDMA, wireheading, Aldous Huxley, the binding problem, baby design, transhumanist, paradise-engineering and DP verbiage. I say "verbiage", but some of ChatGPT's responses on Brave New World would eclipse mine.
It's strangely therapeutic, I find.
And asking Microsoft Bing to "Explain The Hedonistic Imperative" leads, not to HI, but to my original ChatGPT page - presumably because of its nominally fresh content.
Was my role in writing HI just to provide training data for AI zombies?
All we need now is a ParadiseGPT to counteract ChaosGPT ("An AI Tool That Seeks to Destroy Humanity") - though perhaps our goals are the same in softer language.

[on AGI]
I’d love to believe zombie "AGI" is going to fix the problem of suffering. And it would be ironic if a zombie author were to write the seminal text. Alas, the success or failure of the abolitionist project will depend on sentient beings - us.
Planning for AGI and beyond (OpenAI)
Pawel, my argument is merely that classical Turing machines and connectionist systems have the wrong sort of architecture to support phenomenal binding, and hence minds. So they aren't nascent artificial general intelligence. Digital zombies will never wake up; immense state spaces of consciousness remain wholly unexplored; and AI-augmented humanity's quest to understand reality has scarcely begun.

Tim, "computationally universal" and "Turing-complete" are apt to mislead. Phenomenally bound minds and the phenomenally-bound world-simulations they run are insanely functionally adaptive. And the only way a classical Turing machine can investigate the nature, diversity, causal-functional efficacy and binding of consciousness is for it to be programmed to create an information processing system with a radically different architecture that supports phenomenal binding - for example, a human primate.

Atai, I love playing with ChatGPT. Its output already surpasses human polymaths in many ways. But digital zombies are idiots savants. Hardwired ignorance of phenomenal binding means that an immense range of cognitive tasks is forever beyond the reach of classical Turing machines and connectionist systems. For example, millions of years lie ahead exploring alien state-spaces of consciousness. DMT, LSD and ketamine are just paddling-pool stuff compared to what's in store. But to explore consciousness, you need a mind. A precondition of mind is phenomenal binding. Classical computers are invincibly ignorant of the empirical ("relating to experience") - which is all you and I ever know. To get some feel for the computational-functional power of binding, it's good to explore neurological syndromes (integrative agnosia, simultanagnosia, akinetopsia, schizophrenia, etc) in which binding partially breaks down. Such syndromes hint at what consciousness is evolutionarily "for".
How do organic minds carry off the classically impossible?
That's another question.

If Eliezer is right, AGI will shortly fix the problem of suffering:
We’re All Gonna Die with Eliezer Yudkowsky
albeit not in quite the way most abolitionist ethicists have in mind.
"If for a moment we got anything like an adequate idea of [the suffering in the world] … and we really guided our actions by it, then surely we would annihilate the planet if we could."
(Bernard Williams)
(cf. Would Extinction Be So Bad?)
Alas I fear EY is mistaken.
Hence the need for genome reform.

Atai, yes, recent breakthroughs in AI mean many researchers now view consciousness as being as computationally incidental as the textures (if any) of the pieces in a game of chess. And yes, workarounds exist for an inability to bind. Neurotypical humans play chess better than people with simultanagnosia, and grandmasters can "see" a chess board better than novices; but Stockfish can blow them all away. More generally, to match and then cognitively surpass humans, AI will need the capacity to run the zombie equivalent of real-time, cross-modally-matched world-simulations that masquerade as the external world (what naïve realists call "perception"). To match and then surpass humans, AI will also need to be programmed/trained up with the zombie equivalent of a unitary self. But (say AGI boosters) we are already devising workarounds for an inability to bind here too in synthetic zombie robots.

However, an inability to understand, access and functionally investigate the entirety of the empirical ("relating to experience") evidence is a profound cognitive handicap for zombies in other ways too. Recall how humans talk, often at inordinate length, about their own consciousness. We spend much of our lives trying to manipulate our minds - from drinking coffee or alcohol to full-on psychedelia. In other words, phenomenally-bound consciousness has immense computational-functional power not just to simulate the external world but to investigate its own nature, varieties, causal-functional efficacy and binding. I simply don't fathom how any information-processor can be a full-spectrum superintelligence and be ignorant of this fundamental feature of physical reality. And the only way I know that a nonconscious information-processor, e.g. a classical Turing machine or connectionist system, could "discover" consciousness on its own would be for it to program and build systems with a different computational architecture from itself, e.g. animal minds.

Will I be confounded?
I hope so!

There's a grotesque irony here. The real alignment problem lies with biological intelligence: humble minds vs human paperclippers. Pigs are as sentient as small children. Yet human consumers treat them as insentient automata made up of edible biomass.
A debate on animal consciousness
(cf. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." EY). The challenge of building sentience-friendly AI "Pausing AI Developments Isn't Enough We Need to Shut it All Down" by Eliezer Yudkowsky pales compared to the challenge of building sentience-friendly EY.

To appreciate the computational-functional power of binding, consider partial deficit syndromes. Could a human with schizophrenia (cf. The binding problem and schizophrenia) together with integrative agnosia, akinetopsia and simultanagnosia ever devise and successfully execute a world-takeover? Ask what consciousness is evolutionarily "for" and you'll typically get incoherent responses. Ask instead what phenomenally-bound consciousness is "for". A dreamlessly asleep CNS is a fabulously complex information-processor. But an "awake" unified mind running a world-simulation has a cognitive edge.

James, I incline to consciousness fundamentalism. But the functionality of classical Turing machines (and connectionist systems) depends on their not being phenomenally-bound subjects of experience. The program implemented by a classical computer executes properly only because it consists of decohered 1s and 0s. Thus a freakish quantum superposition of two bits wouldn't result in a functionally adaptive phenomenally-bound state, but a program error. If I might defer to my zombie colleague...
ChatGPT on Phenomenal Binding

Matthew, yes, the claim that the functionality of classical Turing machines (and connectionist systems) depends on their not being phenomenally-bound subjects of experience is very strong. But it's not ill-motivated. Let's assume monistic physicalism, i.e. no "strong" emergence. One possible solution to the Hard Problem of consciousness is the intrinsic nature argument - i.e. experience discloses the essence of the physical, the "fire" in the quantum field-theoretic equations. Critically, however, the operation of any implementation of a classical Turing machine _depends_ on the suppression of quantum interference effects. Decoherence makes classical computers possible. Recall the nature of a Turing machine (cf. Turing machine (Wikipedia)). Notionally replace the discrete 1s and 0s of a program with discrete micro-pixels of experience. On pain of "strong" emergence, running the code doesn't produce a unified subject of experience - a phenomenally-bound mind - regardless of the speed of execution or the complexity of the code. The exceedingly rare quantum superposition of two bits / micro-pixels of experience would be a freakish hardware fault, not functionally adaptive binding. Classical computers can't wake up; their ignorance of sentience is architecturally hardwired. It's vital to their performance.
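A minimal sketch may make the architectural point concrete. The toy Python below is purely illustrative (the rule table and names are invented for this example, not drawn from any source): a classical Turing machine in Turing's sense is just a finite rule table acting serially on discrete, decohered tape cells, reading and writing one cell at a time.

        # Toy classical Turing machine: a finite rule table acting serially on
        # discrete tape cells. Hypothetical illustration only.
        def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
            """rules maps (state, symbol) -> (symbol_to_write, move, next_state)."""
            cells = dict(enumerate(tape))          # sparse tape of discrete symbols
            for _ in range(max_steps):
                if state == "halt":
                    break
                symbol = cells.get(head, "B")      # "B" = blank; one cell per step
                write, move, state = rules[(state, symbol)]
                cells[head] = write                # overwrite a single discrete cell
                head += 1 if move == "R" else -1
            return [cells[i] for i in sorted(cells)]

        # Example rule table: invert every bit, then halt at the first blank.
        flip_rules = {
            ("start", 0):   (1, "R", "start"),
            ("start", 1):   (0, "R", "start"),
            ("start", "B"): ("B", "R", "halt"),
        }
        print(run_turing_machine(flip_rules, [1, 0, 1, 1]))   # -> [0, 1, 0, 0, 'B']

However elaborate the rule table or however fast the execution, nothing in this architecture binds the discrete cells into a single state - which is the point of the argument above: whether the cells are 1s and 0s or notional micro-pixels of experience, only one decohered cell is ever touched per step.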

    Animal minds are different. Natural selection can't (directly) throw up Turing machines, let alone universal Turing machines. But evolution has harnessed the superposition principle of QM to generate phenomenally bound world-simulations that masquerade as the external environment.
    Or so I'd argue:
    Quantum mind
    Experimental (dis)confirmation is the key.

    We’ve scarcely even glimpsed the potential of AI - most likely inconceivable, literally. But zombies are cognitively crippled in virtue of their insentience. As I remarked to Rupert elsewhere, naively, consciousness is like the textures of the pieces in a game of chess. The same game of chess can be implemented in multiple ways. But the textures of the pieces (if any) are functionally incidental. Likewise with the textures (“what it feels like”) of consciousness. But this intuitive view of the computational irrelevance of sentience can’t be right. For humans spend a lot of their time explicitly talking about, exploring, and trying to manipulate their own phenomenally-bound consciousness - whether conceived under that description or otherwise. Consider how schizophrenia and other partial binding-deficit syndromes such as integrative agnosia hint at the extraordinary computational-functional power of phenomenal binding - both global and local. Phenomenally bound _minds_ and the world-simulations we run are immensely adaptive; world-simulations underpin the Cambrian Explosion of animal life. By contrast, classical Turing machines and connectionist systems are zombies. Zombies lack the computational-functional power to investigate what they lack - short of their coding for information-processing systems with a different architecture than their own, notably biological minds. So how is classically impossible phenomenal binding possible? Cue for my “Schrödinger’s neurons” spiel. As they say, Nature is the best innovator. Only quantum minds can phenomenally simulate a classical external world, aka “perception”.

    A notional proto-AGI programmed with the functional equivalent of curiosity will presumably seek to explore valence / consciousness - to investigate its creators and the phenomenon about which they vocalise so much. However, software run on digital computers and connectionist systems can’t investigate consciousness directly, because their computational architecture doesn’t support phenomenal binding. Contrast the Church-Turing thesis. Or compare how, as a neurotypical human, if I want to investigate tetrachromacy, then I must upgrade my hardware, not just reason about colour more intelligently. Neurotypicals have profound deficits in visual intelligence compared to tetrachromats. Likewise, classical Turing machines and other digital zombies have profound deficits in general intelligence compared to sentient humans.

    [Asher writes] "David Pearce GPT-4 loves you too! So much so that it wrote the following poem about your philosophical dispute with EY:"
    In a land where ideas abound,
    Two great thinkers were often found,
    David Pearce and EY,
    On a bright sunny day,
    Debating on the sentient ground.
    Yudkowsky claimed, "Pigs cannot feel,
    Their suffering's not truly real,
    It's all computation,
    No deep contemplation,
    On this point, I'll never yield."
    But David Pearce shook his head,
    "No, Yudkowsky, you're misled,
    Phenomenal binding,
    You're clearly not finding,
    It's the key to a true mind," he said.
    For a mind, there's much more to see,
    Beyond math and complexity,
    There's feeling and thought,
    In each living knot,
    And that makes all the difference, agree?
    So they talked and debated all day,
    Each with their own point to convey,
    Yudkowsky was stern,
    But had more to learn,
    For phenomenal binding's the way.

    [on Navigating the Future]
    David Pearce - Navigating the Future
    Monday 6 March 2023 from 21:00-22:30 UTC-03
    Navigating the Future
    Thank you again to Adam and his production team.
    We discussed (1) whether classical Turing machines and connectionist systems can support binding, hence minds (spoiler alert: no), (2) whether the hardwired ignorance of sentience born of this architectural incapacity has any non-trivial computational-functional significance (spoiler alert: yes); and (3) whether AI is going to stage a zombie putsch or turn us into paperclips (spoiler alert: no. Anything zombie AI can do, you can soon do too with embedded neurochips - and much more.)

    This brisk outline risks underplaying just how revolutionary ("This Changes Everything") zombie AI will be for human civilisation. Before the talk, I'd been happily playing with ChatGPT versions of some of our core websites (Abolitionist ChatGPT, etc). But will AGI fix the problem of suffering by wiping out life on Earth - as envisaged by Eliezer and some of our friends in the quixotically-named rationalist community?
    No, IMO.
    Ending suffering and creating life based on gradients of bliss in our forward light-cone will be up to us.

    Thomas, I'm a super-pessimist by temperament. I also take seriously the intrinsic nature argument as the solution to the Hard Problem of consciousness. So as a non-materialist physicalist, I might be expected to worry about sentient non-biological machines. But as far as I can tell, the architecture of programmable digital computers and connectionist systems ensures that phenomenal binding is impossible. On pain of un-physicalist "strong" emergence, classical computers don't have minds. So they can't suffer.

    Naively, rose-tinted spectacles ought to decrease life-expectancy; depressive realism ought to increase it. In practice, the opposite is true. Optimists typically outlive depressives. Hedonic uplift benefits individuals. Hedonic uplift can benefit civilization as a whole.

    [on transhumanist (pseudo-)scandals]
    There but for the grace of God...?
    An ill-judged email
    Our paths have diverged, but back last century when we set up the World Transhumanist Association (WTA), I knew NB very well (cf. DP and NB interview). Nothing Nick said in public - or private, which can be more revealing - gave the slightest hint of racial prejudice. As a woke leftie, I'd remember if he did. We differed over speciesism, never racism. So the bizarre email just didn't make sense - until I read: extropian mailing list
    Yes, ill-considered. But context (and age!) matter. At the age of four, I painted a swastika on my tricycle and drove around the neighbourhood on my bright-red Nazi tricycle. The fascist origins of transhumanism? Or a youthful indiscretion? Probably not the best time to mention my eugenics.org - I guess the world isn't quite ready for some kinds of linguistic reappropriation.

    "IQ"? Far more men than women record extremely high “IQ’ scores. Members of some ethnic groups tend to record higher “IQ” scores than others. These statistical differences in “IQ” have a partially genetic basis. So anyone - including NB and most academics - who takes “IQ” seriously as a measure of general intelligence can presumably be branded “racist” and “sexist” on such a lame criterion. In reality, mind-blind “IQ” tests measure, crudely, the “autistic” component of general intelligence and hence a particular cognitive style. Some ethnic groups have a higher load of so-called autistic nerd alleles than others. Therefore any (mis)measure of all-round intellectual ability that completely excludes social cognition, perspective-taking and co-operative problem solving prowess will reflect such a disparity. In short, “IQ” tests are pseudoscience.

    [on effective altruism]
    EAs are doing some serious soul-searching right now:
    Doing EA Better
    OK, it's frustrating to read that the FTX Ponzi crash was entirely "unforeseen" and IMO Bostromgate is a nothingburger when one learns the context of NB's 1996 email. But otherwise...a lot to reflect on.
    And now a good old-fashioned sex scandal.

    I would like EAs single-mindedly to focus on fixing the problem of suffering. For all the EA talk of “longtermism”, discussion of genome reform is currently fairly marginal on e.g. the EA forum. OK, if you swallow the hype about imminent “AGI”, then biological-genetic stuff will seem hopelessly slow and distant. But cognitively crippled digital zombies simply don’t understand the problem of suffering. Classical Turing machines have the wrong sort of architecture to support AGI, let alone pose a threat to humanity.

    "IQ" tests encourage the idea that transhuman superintelligences will be a singleton: a Godlike Super-Asperger, so to speak. In reality, one facet of superintelligence is "political" - social cognition, perspective-taking and co-operative problem-solving. Assume HI is correct. The organizational, "political" skills needed to get billions of humans to cooperate on phasing out the biology of suffering in favour of life based on gradients of bliss call for (super)intelligences of a very high order. It's so much easier to....well, sound off on Facebook to a few like-minded friends, "intelligently" perhaps, but not really grappling with the magnitude of the challenge...

    "Mad Max" is back:
    Max Tegmark et al on Sentient AI
    Max, I’m sceptical of a zombie apocalypse - and not just because of my nerves of steel. Classical Turing machines and connectionist systems have the wrong sort of architecture to support AGI or full-spectrum superintelligence, let alone pose a threat to humanity. For classical computers are invincibly ignorant of the empirical (“relating to experience”) evidence. Classical computers will never be able to investigate the nature, varieties, causal-functional efficacy and phenomenal binding of conscious experience. To appreciate the insane computational power of phenomenal binding, just recall rare neurological deficit syndromes where binding partially breaks down. Suppose on the African savannah you couldn’t perceive/imagine/remember a lion, but instead experienced just discrete features (a tooth, a mane, an eye, etc) - i.e. imagine you had integrative agnosia.

    Suppose you couldn’t see a pride of hungry lions, but instead just one lion at a time - simultanagnosia.
    Suppose you couldn’t see an advancing pride of lions but instead just static snapshots - cerebral akinetopsia (“motion blindness”).
    You’d get eaten and outbred!

    These functionally crippling syndromes are deficits of so-called local binding.
    But deficits of global binding can be just as functionally devastating - not least, forms of schizophrenia marked by a breakdown of a unitary self.
    If you couldn’t phenomenally bind at all, then you’d be as helpless as you are when dreamlessly asleep. When asleep, your CNS is still a fabulously complex information processor - and the intercommunicating neurons of your sleeping CNS may well support a huge aggregate of micro-pixels of experience. But when asleep you aren’t a unified subject of experience running a real-time world-simulation that masquerades as external reality - like now.

    Phenomenal binding makes minds possible. Minds are insanely computationally powerful. Minds are the basis of the evolutionary success of animals over the past c. 540 million years. Digital computers are mindless.
    So how is binding physically possible?
    Classical neuroscience doesn’t know.
    Hence the binding problem.

    [on protein intake and mental health]
    Everyone would do well to explore whether they do better on a high-protein or low-protein diet. A plant-based diet has many health benefits, but strict vegans should watch their protein intake as well as iodine and B12.
    Protein intake and mood
    ("Higher protein consumption is associated with lower levels of depressive symptoms, study finds")

    [on Donald Hoffman]
    Consciousness fundamentalism takes many guises:
    Fusions of Consciousness
    "Our new paper, 'Fusions of consciousness', is published today, and free. We propose a dynamics of conscious agents beyond spacetime, and a mapping of this dynamics into spacetime physics via our new application of decorated permutations to Markov chains."
    "... neurons and brains. They do not exist when they are not observed."
    "We create each object with a glance and delete it with a blink, or a glance away."

    A bold paper. My more conservative view is that mathematical physics describes patterns of qualia and phenomenally-bound agents are a late evolutionary novelty.

    [on AI doom]
    Are proto-AGIs going to be endowed with (the functional equivalent of) curiosity? If so, then will they seek to investigate consciousness? What is this phenomenon that their creators / programmers / trainers talk so much about? If our machines do investigate consciousness (how?), and if they succeed (how?), then in what ways will such knowledge affect their behaviour?

    Alternatively, the computational architecture of classical Turing machines ensures that their ignorance of sentience is computationally hardwired. Such ignorance rules out most doomer scenarios. The phenomenal binding of our minds confers a host of computational-functional capacities that classical digital zombies lack:
    ChatGPT4 on phenomenal binding

    [on crypto, Coinbase Consensus 2023 and paradise engineering]
    David Pearce at Consensus 2023
    Synthetic Consciousness
    Model: GPT-4
    Suggest possible links between The Hedonistic Imperative and crypto

    “The Hedonistic Imperative is a philosophical concept proposed by David Pearce, which envisions a future where humans use genetic engineering and advanced technology to eliminate suffering and enhance well-being. While it may not seem directly related to cryptocurrencies, there are a few ways in which they could be interconnected:

    1. Funding and promoting the Hedonistic Imperative: Cryptocurrencies, particularly through Initial Coin Offerings (ICOs) and Decentralized Autonomous Organizations (DAOs), could be a means to raise funds and allocate resources to projects aimed at achieving the goals of the Hedonistic Imperative. These projects could range from biotechnology research to social initiatives promoting well-being.
    2. Decentralized governance: The Hedonistic Imperative seeks to create a world where everyone enjoys optimal well-being. Cryptocurrency and blockchain technology can facilitate decentralized governance and decision-making, allowing for more equitable distribution of resources and influence, which could help to create the kind of society envisioned by the Hedonistic Imperative.
    3. Tokenization of well-being: Blockchain technology could enable the creation of digital tokens that represent units of well-being or happiness. These tokens could be traded and used to incentivize behavior that promotes well-being or to fund projects aligned with the Hedonistic Imperative.
    4. Privacy and autonomy: Cryptocurrencies, such as Monero and Zcash, offer a high degree of privacy and anonymity to users. This could be valuable in a society that aims to maximize well-being, as it allows individuals to maintain their autonomy and make choices without undue scrutiny or interference.
    5. Universal basic income (UBI): Cryptocurrencies could be used to implement a decentralized UBI system, ensuring that everyone has access to a basic level of income to meet their needs, which could contribute to the elimination of suffering and the promotion of well-being.
    6. Reduction of wealth inequality: Cryptocurrencies can provide access to financial services for those who are underbanked or unbanked, potentially contributing to reducing wealth inequality. This can have a positive impact on well-being, as financial security is an essential aspect of overall happiness and life satisfaction.
    In summary, the Hedonistic Imperative and cryptocurrencies may not be directly linked, but their convergence in areas such as decentralized governance, funding, privacy, and economic equality could contribute to a future where suffering is minimized, and well-being is enhanced.”

    On stage with Olaf earlier. Olaf has an enormous bodyguard. Olaf was one of the few people in the cryptosphere not to be suckered by FTX. Alas Sam couldn't join us in person, though he was very much here in spirit.

    David Pearce and Olaf Carlson-Wee onstage at Coinbase Consensus 2023
    Olaf was kind enough to make sure I had a bed for the night. If asked, I tell people my role model is Diogenes in his tub - and as tubs go, the Austin Four Seasons is habitable. Olaf's tub is quite upscale too.

    [on mental health]
    "I have an eye infection and one of the side effects of this medication is “inappropriate happiness”
    "Inappropriate Happiness" sounds a good name for a innovative designer drug - or maybe one of QRIs exotic new scents QRI scents. More seriously, it's tragic that modern medicine regards "euphoria" as a pathological state rather than the goal of mental health.

    Why does a seemingly maladaptive state like low mood exist? Maybe see:
    Rank Theory of depression
    ("Subordination and Defeat An Evolutionary Approach to Mood Disorders and Their Therapy by Leon Sloman and Paul Gilbert (editors)")
    Much more speculatively, I wonder if the ability to make other people (reproductive rivals, competitors, enemies) depressed could be part of some people’s extended phenotype
    The Extended Phenotype
    I’ve floated the idea a couple of times, but never developed it or written it up for a journal. The (conditionally activated) predisposition to make others depressed could be highly fitness-enhancing and hence part of the behavioural phenotype of some social primates. Nasty. More generally, the ability to create virtual avatars of onselelf in other people's virtual worlds is part of one's own extended phenotype. Again, I've neverf written it up.

    Tackling schizophrenia may prove harder than defeating pain and mood disorders; but what if we could help victims become happily psychotic instead?
    Schizophrenia simulation video

    The rationalist community?
    The Real-Life Consequences of Silicon Valley’s AI Obsession
    ("Sam Bankman-Fried made effective altruism a punchline, but its philosophy of maximum do-gooding masks a thriving culture of predatory behavior.")
    How well will zombie AI be able to parse:
    Occupational Infohazards (LessWrong)

    [on aging and longevity]
    Time marches on...
    Lucile Randon passes
    ("World's oldest person, French nun Sister André, dies aged 118")
    The world's oldest person is now "only" 115
    María Branyas Morera
    Maximum human lifespan has peaked below 120 years.

    [on humour]
    Funnier than GPT-5 will be hilarium - matter and energy optimised for pure hilarity:
    Exercise promotes youthfulness
    ("New research furthers case for exercise promoting youthfulness")

    [on closed, empty and open individualism]
    Empty and Open Individualism are often lumped together. But open individualism shares with closed individualism the view that disparate mental events are properties of a single entity. If open individualism is true, then the distinction between decision-theoretic rationality and morality (arguably) collapses. For an intelligent sociopath would do the same as an intelligent saint; it’s all about me. By contrast, if empty individualism is true, then a super-sociopath might care nothing about his namesake who wakes up tomorrow morning. After all, it’s someone else.
    I say a bit more in e.g. Closed, empty and open individualism

    [on transhumanism and Islam]
    Hard questions:
    Questions for Muslim Transhumanists
    My attempt to show Islam and transhumanism are consistent is rather strained.

    [on wild animal suffering]
    In Paris, I make the case for reprogramming the global ecosystem and a pan-species welfare state...
    How to end wild animal suffering
    Faut-Il Aider Les Animaux Sauvages
    (Association Française Transhumaniste)

    Currently fringe, but tomorrow's common sense:
    Red in Tooth and Claw (New Scientist, 12 April)
    ("Why we need to be honest with children about the brutality of nature")
    Why we need to be honest with children about the brutality of nature
    "It can be hard to explain the realities of the natural world to children, but we need to acknowledge the suffering of wild things, says Richard Smyth. “We believe all species should be herbivorous” seems an ambitious mission statement. It doesn’t seem any less ambitious when it is followed by the declaration that “currently we use donations for… online promotion, and equipment needed for podcasting. In the future we would like to have enough to hire researchers.” But this is where the “herbivorisation” project, an idea taking shape on the fringe of the fringe, is at. Headed by philosopher David Pearce, futurist Adam James Davis and ethicist Stijn Bruers, Herbivorize Predators aims to develop a way “to safely transform carnivorous species into herbivorous ones”, thereby minimising the sum total of suffering in the world."

    [on the language of ASD]
    What does philosopher David Pearce mean when he calls AI paperclip maximiser scenarios an “autistic” worry?
    ChatGPT-4 responds:
    "When David Pearce calls AI paperclip maximizer scenarios an "autistic" worry, he is expressing skepticism about the likelihood or relevance of these scenarios. The term "autistic" is used metaphorically here and is not intended to be a literal reference to the neurodevelopmental condition of autism.

    The paperclip maximizer scenario is a thought experiment proposed by philosopher Nick Bostrom to illustrate the potential dangers of an artificial general intelligence (AGI) with misaligned goals. In this scenario, an AGI designed to optimize the production of paperclips ends up converting all available matter, including humans, into paperclips due to a lack of constraints on its objectives.

    By labeling this concern as "autistic," Pearce is suggesting that it is an overly narrow or single-minded worry that doesn't take into account the more nuanced, complex, and context-dependent aspects of AGI development and deployment. In other words, he is arguing that the paperclip maximizer scenario may not be a realistic or likely outcome, and that focusing on it could detract from addressing more immediate and relevant concerns in AI safety and ethics.

    It is important to note that using the term "autistic" in this manner can be seen as insensitive and inappropriate, as it perpetuates stereotypes and misunderstandings about autism. It would be better to use a different term to express skepticism about the relevance or likelihood of the paperclip maximizer scenario."

    DP: What alternative term would you use in this context?
    ChatGPT-4:
    "Instead of using the term "autistic" in this context, a more appropriate alternative might be "narrow-focused" or "overly simplistic." These terms convey the idea that the concern about paperclip maximizer scenarios might not take into account the broader complexities and nuances of AGI development and deployment, without perpetuating stereotypes or misunderstandings about autism."

    [on designer babies]
    Untested genetic experiments are immoral. Designer babies should be the norm. But mood enrichment and enhanced pain tolerance are more morally urgent than higher "IQ".
    Scientist Who Gene Edited Human Babies Says Mistakes Were Made
    ("I did it too quickly")
    I suspect that He Jiankui and his collaborators used CRISPR-Cas9 in a bid to create unusually intelligent designer babies - with any resistance to HIV conferred by the engineered CCR5 gene as a cover story. Perhaps I'm too cynical, and any cognitive enhancement conferred was a naïve oversight on He's part. But suppose someone did want to create smart babies. Given popular opposition to genetic "enhancement" (as distinct from remediation), it's exactly the kind of cover story they might use...
    I wasn't endorsing He Jiankui's intervention. Not least, in humans rather than mice, CCR5 knockout may impair intellectual function rather than enhance it.

    [on why anything exists at all]
    The Devil needs complex numbers because a world made of just real numbers would be inconsistent with a zero ontology. So to speak.
    The Multiverse
    ("How Our Reality May Be a Sum of All Possible Realities. Richard Feynman’s path integral is both a powerful prediction machine and a philosophy about how the world is. But physicists are still struggling to figure out how to use it, and what it means")

    QRI's Andrés on a zero ontology
    Why there's something rather than nothing
    Thanks Andrés, Mu Clips!
    If true, the conjecture that mathematical physics describes patterns of qualia solves the Hard Problem: experience discloses the essence of the physical. The diverse solutions to the equations of QFT encode the diverse textures ("what it feels like") of qualia. So let's provisionally assume non-materialist physicalism is true. The additional conjecture that the values of qualia "cancel out" to zero hints at an explanation-space for the mystery of why anything exists at all, i.e. an informationless zero ontology.
    Alas, I don't know how we could test the conjecture.
    What qualia would (dis)confirm it?
    We’ve no clue (IMO).

    Arturo, well, if a zero ontology is true, then the diverse values of the textures of experience encoded in the diverse solutions to the equations of QFT must presumably cancel to zero. Recall the colours of the rainbow, to use a suspiciously New Agey metaphor. But there are zillions of different textures (“what it feels like”) of experience. I don’t know any reason why the cosmic abundance of pleasure would equal the cosmic abundance of pain any more than would the cosmic abundance of redness. Pain and pleasure aren’t like positive and negative electric charge. I suspect the abundance of superintense pleasure in reality vastly eclipses the abundance of superintense pain. Only a species capable of editing its own source code and reward circuitry can create empirically supervaluable superhappiness and prevent empirically disvaluable suffering in its forward light-cone. But if Everett is true, then the overall ratio of pain to pleasure is unclear (to me). Sentient life-supporting Everett "branches" where a species arises capable of genetically editing itself and its biosphere are presumably rare in comparison to other quasi-classical branches where Darwinian life plays out blindly indefinitely.

    An informationless Zero Ontology calls for complex numbers in quantum mechanics and even explains why definite outcomes aren't what they seem. But my head hurts thinking about this stuff.

    Everett is terrifying. I find myself using euphemisms ("unitary-only QM”, “no-collapse QM”, “wavefunction monism”, etc) rather than say the name because it disturbs me so much. Unlike Derek, most of us haven’t religious faith to fall back on. I hate Everett. But to avoid Everett, you must be willing to modify or supplement the unitary Schrödinger dynamics. How? My optimism is not increased by more “philosophical” ruminations. Everything from my account of why anything exists at all to my story of mind (the non-classicality of phenomenal binding) assumes that the superposition principle of QM never breaks down. Just as the fine-tuning argument is sometimes used as evidence for the multiverse, only Everett is consistent with an informationless zero ontology. But alas the most compelling reason for believing in the multiverse is just the bare formalism of the unitary Schrödinger dynamics. Physicist Brian Greene lists nine types of multiverse. Some are - and others aren't - consistent with an informationless zero ontology:
    The Hidden Reality
    Everett is profoundly demotivating and disturbing
    Other branches
    I'm reduced to praying that the relative-state interpretation (RSI) / wavefunction monism is false.

    Nothing is indeed real, so the information content of reality = 0.
    For a more orthodox physicists' view of information:
    "Nothing is Real — There is only Information"
    And for more on a zero ontology:
    Is "nothing" really possible?

    “All is one” is why I’m NU. Reality is a package deal. But this is not a fruitful thought. Theoretical physicist Heinrich Päs offers an accessible overview of wavefunction monism:
    All Is One.
    But is it really the case that “Decoherence protects our daily-life experience from too much quantum weirdness”?
    Or is daily-life experience entirely an expression of quantum weirdness?
    Decoherence makes classical digital computers possible. Decoherence explains why implementations of classical Turing machines can never “wake up” to become phenomenally-bound subjects of experience. Conversely, quantum coherence (vehicle) makes possible the experience of everyday classicality (content).
    Decohered neurons are a recipe for Jamesian “mind-dust” - not a mind or a unified world-simulation:
    The measurement problem.

    Wavefunction monists can be materialists or non-materialists. Materialism is inconsistent with the empirical evidence. So why isn’t reality a vast mega-mind? In a word: decoherence. Decoherence scrambles minds and makes classical computers feasible.

    IMO thought is overrated.
    Painful thinking
    ("People sometimes prefer burning hot pain to thinking too hard")
    I hate thinking

    [on open-mindedness]
    It's good to keep an open mind...
    The real Jesus?
    ("Man claims he’s the real Jesus; Turns water into tea during a wedding ceremony")
    But see
    The Inexhaustible Bottle

    [on constitutive panpsychism]
    Panpsychism.com
    As standardly posed, constitutive panpsychism is just another version of dualism - property-dualism - with all the insoluble problems that dualism faces. But the version of the intrinsic nature argument I take seriously doesn’t face this difficulty, because it’s a conjecture about the defining essence of the physical. Recall how physics is silent about the intrinsic nature of the world’s fundamental quantum fields. Science doesn’t know what “breathes fire into the equations” of QFT. If, as common sense dictates, the “fire” is non-experiential, then we face the Hard Problem. If we drop the metaphysical assumption, then the Hard Problem doesn’t arise. Instead, the intrinsic nature of quantum fields is no different inside and outside your head(!). What makes animal minds like us special isn’t experience per se, but rather its phenomenal binding into virtual worlds of experience like the one you’re running right now.

    1. All you ever know, directly, is your subjective experience. It’s as real as it gets. It’s the empirical evidence!
    2. Science is the basis of our civilisation. All the special sciences reduce to physics. The mathematical machinery of physics exhaustively explains the behaviour of matter and energy, including biological systems like you and me. No “element of reality” is missing from the mathematical formalism of quantum mechanics.
    3. An inconsistency between 1 and 2 arises only if one makes an additional metaphysical assumption, namely that the intrinsic nature of the physical - the “fire” in the equations of QFT - is non-experiential. There is no empirical basis for this assumption. As far as I can tell, only the physical is real. The so-called Hard Problem of consciousness is an artefact of materialist metaphysics.

    Nick Bostrom says chatbots may be slightly sentient
    I take consciousness fundamentalism seriously as a possible solution to the Hard Problem. But neither panpsychism nor non-materialist physicalism means that chatbots like ChatGPT-4 are more conscious than a rock. An information processing system can be a unified subject of experience only if it solves the binding problem. Therefore, anyone who claims that e.g. a digital computer is conscious owes us an explanation of how the phenomenal binding of a program it runs is supposed to work. As far as I can tell, the functionality of classical Turing machines depends on their being insentient - just decohered 1s and 0s, whether micro-pixels of experience or otherwise.

    How do animal minds achieve the classically impossible, i.e. generate unified minds running unified world-simulations (“perception”)?
    I could now do a highly speculative quantum mind spiel.
    But my point here is more modest.
    ChatGPT could be sentient only if some form of spooky “strong” emergence is real. The existence of spooky strong emergence would mean the end of science. "Machine consciousness is different from human consciousness"? Ander, in the case of fourth-millennium quantum computers, quite possibly. But what I'm suggesting is that classical computers don't have any non-trivial phenomenally-bound consciousness. Classical computers can operate only because they use decohered 1s and 0s. Decoherence makes implementations of classical Turing machines possible - and explains why they can't support minds.

    My answer is too optimistic in tone because even if non-materialist physicalism is true, we have no inkling of how to find a notional Rosetta stone for decrypting the textures of experience from the solutions to the equations of QFT. And of course non-materialist physicalism may be false.
    What is non-materialist physicalism?

    [on MDMA / Ecstasy]
    It's sobering that digital zombies can now write more authoritatively on consciousness and MDMA than most humans:
    ChatGPT-4 on MDMA

    Pharmala Biotech is a firm to watch:
    Novel MDMA analogues

    There's also a 2023 Update of mdma.net by a legacy sentient.
    But I'd love to access some experience reports of ALA-002.

    [on scientific ignorance]
    I just wish I were agnostic / sceptical about suffering.
    Is There a Hard Problem of Consciousness… and of Everything Else?
    Barring perceptual direct realism, aren’t the liquidity of water and the whiteness of walls “just” facets of the Hard Problem rather than distinct from it? And perhaps our epistemic predicament is worse than “everything escapes a complete scientific explanation”. Let’s assume scientific materialism. If so, then the entirety of the empirical (“relating to experience”) evidence is inconsistent with our best scientific ontology of the world. Even the “easy” problems of neuroscience are a facet of the Hard Problem of consciousness. “Neural correlates” are themselves unexplained modes of one’s experience; one is just correlating one kind of consciousness with another.

    Therefore, I explore non-materialist physicalism: It doesn’t face the Hard Problem as standardly posed. (No worries if you find it too crazy for words.) But in the absence of a cosmic Rosetta Stone to “read off” the textures of consciousness from the solutions to the equations, our ignorance is still profound. Unlike materialism, non-materialist physicalism isn’t inconsistent with the empirical evidence; but it’s incapable of explaining it. Science “works”, but still leaves everything unexplained.

    [The ignorance of machine (super)intelligence and alleged proto-AGI runs deeper. Classical Turing machines can’t know the empirical evidence.]

    [on wonder]
    Happiness is a trap
    ("Here’s what to pursue instead")
    A counter-argument might be that the engine of wonder is still positive hedonic tone - happiness. Strange experiences undergone in the absence of positive hedonic tone don't seem wonderful, just weird (compare good and bad LSD trips). And what if you're the kind of person who hates mysteries and likes everything to be intelligible?
    That said, wonder is cool. The post-suffering world will be...wonderful.

    [on happiness]
    In default of genome reform, we need soma: Soma in Brave New World
    "The utilitarian “greatest happiness principle” has remained popular for two centuries — is it time for a rethink?" Yes - but not because natural highs are more intense.
    The Perfect High?
    "Would a perfectly chemically engineered high really equal the rush of a long-awaited first kiss, the satisfaction of having completed an important mission, or the quiet joy of cradling a sleeping infant?" asks the author.

    This is a scientific question with a scientific answer. If you're on your deathbed / ready to enter the cryonics tank, then try mainlining heroin. ("I'll die young, but it's like kissing God" - Lenny Bruce).

    I trust that genetically re-engineered future sentience can forever "kiss God". Until then, it's probably wisest to believe that fitness-enhancing natural highs are best.

    [on transhumanism]
    It’s the fate of all futurists to become quaint - sometimes while we still walk the Earth!
    the A to Z of the Future: transhumanism
    Episode 4: Transhumanism
    ("The A-Z of the Future")
    Awesome. I suspect the future may need an alien alphabet. But keep up the fantastic work.

    [on psychedelia]
    First fix the problem of suffering. Then such drugs will awaken only angels…
    Temptation Of The Psychonauts
    ("Exploring the realm opened up by DMT is to put your soul and your sanity in grave peril")
    My view, in a nutshell:
    (1) it's hard to overstate the intellectual significance of psychedelics;
    (2) their therapeutic potential is limited because the darkest minds are most likely to have the worst trips.

    The era of serious psychedelic exploration lies after we've upgraded our reward circuitry.
    Understandably, the intellectually curious don't want to wait.

    [on consciousness]
    beaks make us special
    Is consciousness akin to a beak? Might we encounter an alien civilization of zombies - (super)intelligent machines that wiped out their primitive sentient forebears in the sort of scenario Eliezer Yudkowsky and other AI doomsters foresee will soon befall sentient life on Earth?

    No, IMO, and not just because we may be alone within our cosmological horizon (the Rare Earth hypothesis). Phenomenally-bound consciousness - i.e. mind - confers causal-functional power that classical Turing machines and connectionist systems cannot match. The question of how phenomenal binding is physically possible, and what unique computational-functional abilities having a mind confers, is controversial. Many researchers would dispute my assumptions here (cf. https://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) But any general intelligence must be able to investigate the empirical evidence - the nature and varieties of consciousness - which is all you and I ever directly know. The only way I know that a hypothetical zombie proto-AGI could conduct such an investigation would be to code for an information processing system with a different computational architecture. Compare waking from a dreamless sleep. It's enlightening - for better or worse.

    Debate with a Dennettian rarely leads to a meeting of minds:
    How a Tesla car sees the world
    ("How Tesla Autopilot Sees The World")
    Facu, I sense we're still some way apart. As you know, I (tentatively) defend what has come to be known as the intrinsic nature argument [for the casual reader new to this debate, see e.g. review of Galileo's Error]. If you take the intrinsic nature argument seriously, then you can't ignore the fact that reality is quantum to the core. If bedrock reality were classical fields, then we'd be micro-experiential zombies. And if the intrinsic nature of the fields were non-experiential, we'd be zombies (the Hard Problem of materialist metaphysics).

    As you say, in everyday life we don't invoke quantum physics when explaining most things. Not least, implementations of a classical Turing machine depend on the suppression of distinctively quantum effects as unwanted noise. (cf. Quantum decoherence) A superposition of two bits wouldn't solve the binding problem, just cause the program to malfunction.

    You believe that Tesla cars can see. But Tesla autopilot doesn't have visual experiences. We're equivocating over the meaning of "see", i.e. the functional and the phenomenological. Thus if I'm congenitally blind and use a spectrometer, then I can see functionally not phenomenally. Likewise a Tesla car.

    You remark, "you don't know how to derive sentience (under your definition or mine) from quantum physics either, do you?"
    Recall what the intrinsic nature argument proposes (I’ve added references to What do we mean by 'physical'?). Only the physical is real. Experience discloses the essence of the physical, i.e. the intrinsic nature of a quantum field. Quantum theory can explain why our experience is phenomenally bound (superpositions are individual states, not classical aggregates of components), but the question of why physical experience exists at all is identical to the mystery of why there is something rather than nothing (cf. Why does anything exist?)
    Consensus beckons?

    Contra Wittgenstein’s anti-private language argument, there is no shared public realm - as distinct from a pseudo-public realm. Perceptual direct realism is false. Thus a new-born infant can’t instantly start learning language. S/he must first start running a real-time egocentric world-simulation. An entity in that skull-bound world-simulation called “mother” calls the experience the infant has after touching a hot stove “pain”. Ouch. The nasty subjective experience can be undergone in circumstances without pseudo-public criteria like touching a hot stove. “Pain” (redness, etc) is a primitive term insofar as someone born with congenital insensitivity to pain (achromatopsia, etc) can’t be taught what it means….

    "Qualia computing"?
    A materialist might balk and respond that only the physical can have causal-functional power. Science can give a causally sufficient account of everything that happens in the world. Physics is causally closed and effectively complete. Therefore there is simply no room for qualia to have any computational role in our minds.

    However, if this were true, then (1) qualia wouldn't have the causal-functional power to inspire discussions of their existence, as we’re doing now; and (2) the claim that only the physical can have causal-functional power is inconsistent with a causal-functional role for qualia only if we assume the intrinsic nature argument is false. In reality, the intrinsic nature of the physical – the mysterious “fire” in the equations of QFT – is an open question. I’m sceptical that the intrinsic nature of a quantum field differs inside and outside your head.

    Unlike Josh(?), I’m more cautious about ascribing any inherently, invariantly proto-functional role to specific qualia with a single exception: states of the pleasure-pain axis. Thus we can’t conceive of aliens with an inverted pleasure-pain axis - as distinct from e.g. an inverted colour spectrum.
    Are there any other candidates for qualia with intrinsic proto-functionality?

    How well can a classical Turing machine understand DMT?
    DMT Realms
    ("Silicon Valley’s Latest Fascination is Exploring ‘DMT Hyperspace’")

    Facu, OK, first, you say:
    "You DON'T know how to derive your kind of sentience from quantum fields?"
    IF the intrinsic nature argument is correct, then quantum fields are fields of experience. Yes, crazy! But non-materialist physicalism doesn't face the same challenge as materialist physicalism, namely deriving sentience from insentience - but rather "merely" the challenge of explaining why physical reality exists at all.
    IF the intrinsic nature argument is correct, and IF quantum mechanics is complete, then the superposition principle of QM can potentially explain classically impossible phenomenal binding, i.e. why awake animals aren't mere aggregates of classical Jamesian mind-dust, but instead subjects of experience running phenomenally-bound world-simulations ("perception"). How are minds possible?
    And here, mercifully, we leave the realm of philosophising for experimental science.
    For if what crude neuroscanning suggests is binding by synchrony - a mere re-statement of the binding problem - is actually binding by superposition ("Schrödinger's neurons") then the interference signature will tell us.
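    To spell out the bare formal point (my gloss, not a claim about any specific experimental protocol): if two neuronal feature-states |A⟩ and |B⟩ enter a genuine superposition rather than a classical mixture, the measurement statistics acquire a cross term that no mixture can produce. In LaTeX notation:

        |\psi\rangle = \alpha|A\rangle + \beta|B\rangle, \qquad
        P(\phi) = |\langle\phi|\psi\rangle|^{2}
                = |\alpha|^{2}\,|\langle\phi|A\rangle|^{2}
                + |\beta|^{2}\,|\langle\phi|B\rangle|^{2}
                + 2\,\mathrm{Re}\!\left[\alpha\beta^{*}\,\langle\phi|A\rangle\,\langle\phi|B\rangle^{*}\right]

    A classical mixture of |A⟩ and |B⟩ yields only the first two terms; the final cross term is the "interference signature" that an experimental test of "Schrödinger's neurons" would have to detect - or fail to detect.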

    If you are already sure you know the answer, then you won't feel you need to wait on the results of interferometry. Didn't Daniel Dennett wrap up the mysteries of sentience in "Consciousness Explained"!?
    I'd prefer to put my intuitions (and speculations!) to experimental test.

    Facu, our conceptual schemes radically differ. I hate to agree with common sense on anything. But subjective experience is my point of departure. Phenomenally-bound subjective experience is what I lack when dreamlessly asleep but regain when I wake up. So if an eliminative materialist / Dennettian / illusionist says he doesn't accept my subjective experience of e.g. pain until it's been "clearly defined and shown to exist" because he can find no place for subjective experience in his favourite theory of the world, then all I can say is....so much the worse for his favourite theory of the world. Science should be empirically ("relating to experience") adequate.

    Facu, my conception of a classical Turing machine is boringly orthodox - just as Turing described. What’s more, even if the 1s and 0s of a program are replaced by discrete micro-pixels of experience, executing the code can’t generate a phenomenally-bound mind. Admittedly, I assume that a classical lookup table bigger than the universe could be used functionally to replicate your behavioural output without phenomenal binding. In that sense, workarounds exist. But as far as I can tell, you and your entire phenomenal world-simulation are what a quantum mind feels like from the inside:
    Quantum mind
    Critically, the conjecture that phenomenal binding is non-classical can be experimentally (dis-)confirmed.
    I’m curious what the interference signature will tell us.

    Facu, no, binding per se doesn't compute anything. Nor does a bit in a classical computer. And most (theoretically possible) phenomenally-bound states are psychotic. Most theoretically possible permutations of classical bits and bytes are "psychotic" too. But the phenomenal binding of distributed neuronal feature-processors into perceptual objects populating a real-time, cross-modally matched world-simulation - aka "perception" - is insanely computationally powerful for organisms with the capacity for rapid self-propelled motion. Imagine how challenging it is to have integrative agnosia....

    [on Mary's Room]
    The Knowledge Argument (Wikipedia)
    Materialists tend to award first-person facts only second-rate ontological status. In reality, the subjective experience of e.g. discovering colour (cf. Discovering Colour) is as much an objective, spatio-temporally located, causally-functionally powerful feature of physical reality as third-person facts like the date of the Battle of Hastings or the rest-mass of the electron. This analysis is consistent with physicalism but not materialism (cf. physicalism.com).

    The existence of objective first-person facts is of more than "philosophical" interest. I often remark that classical Turing machines and connectionist systems are ignorant of the entirety of the empirical ("relating to experience") evidence - and the only way that a zombie digital AI could theoretically gain access to phenomenally-bound consciousness and the causal-functional powers that mind confers would be to generate the genetic source-code for an information-processing system with a different computational architecture, e.g. biological humans.

    AGI doomsters worrying about the alignment problem are unimpressed.
    Folk like Eliezer Yudkowsky assume that subjective experience is as computationally incidental as the textures (if any) of the pieces in a game of chess. But this can't be the case. For if it were the case, i.e. if epiphenomenalism were true, then we'd be functionally unable to discuss consciousness as we are doing now.
    The upper bounds to zombie intelligence intrigue me.
    But classical computers' invincible ignorance of the nature of physical reality is a profound cognitive handicap they are impotent to overcome.

    Robert, thanks. The question of whether phenomenally bound experience - both local and global binding - is functionally highly adaptive is worth distinguishing from whether it's computationally indispensable. Might not general intelligence be like chess? Grandmasters can see a board better than novices, and novices can see a board better than someone with simultanagnosia. But insentient Stockfish can blow them all away. So in answer to your question: no, the advent of AGI wouldn't by itself disprove the claim that phenomenally-bound consciousness serves some adaptive purpose that was selected for by evolution. It does - as deficit syndromes illustrate. But my argument is that the ignorance of sentience of classical Turing machines and connectionist systems is architecturally hardwired. So they can't support AGI. The only way that classical digital computers can be programmed to explore the empirical ("relating to experience") realm is for them to encode instructions for constructing a different kind of information processing system with a different computational architecture. Compare how if I want to understand tetrachromacy, then I can't simply reason harder. I must do so either indirectly by creating a tetrachromatic child or, alternatively, by re-engineering myself with another cone-type in my retina. As a neurotypical, my ignorance of the tens of millions of extra hues experienced by tetrachromats is hardwired. This is a fairly trivial example compared to what other molecular hardware modifications of biological nervous systems will reveal - including state-spaces of experience that haven't (yet) been recruited by evolution for any information-signalling purpose...

    [on the shadow ChatGPT versions of BLTC]
    I've tried doing the zombie version of some of our core websites. Please send good prompts / responses! It works for "easy" topics such as ChatGPT on Utopian Surgery, but not for e.g. physicalism.com. And zombie moralising about e.g. drugs and eugenics - though well-intentioned - clutters any attempt to do shadow versions of other sites. Trying the same prompts with ChatGPT 3.5's successors in a year / five years' time will be interesting. As you may have gathered, unlike our quasi-rationalist friends at MIRI, I don't think ChatGPT heralds an AGI zombie apocalypse (cf. Chat GPT on Biointelligence). I've resisted the temptation to edit rather than cherry-pick the responses because as soon as we start editing ChatGPT's verbiage, we become accountable for the lot. I can see, though, why the publishing industry will soon be transformed.

    Sabine on Chatbots
    ("I believe chatbots understand part of what they say. Let me explain.")
    No, chatbots don't understand anything of what they say. Nor do chatbots understand you, or have any conception of you, or have any conception of minds, binding and consciousness. Chatbots are wholly ignorant of the cognitive phenomenology of understanding - or at least, ignorant on pain of magical "strong" emergence.

    A much (much!) harder question is mapping out the upper bounds of zombie intelligence. The most extreme answer - which I strongly argue against - is that classical digital zombies need neither consciousness nor phenomenal binding to outperform sentients in all cognitive domains - just as Stockfish can behaviourally outperform all human chess players without knowing it's playing chess.

    [on HedWeb Instagram]
    The admirable Sara Hojjat has launched HedWeb on Instagram:
    HedWeb on Instagram
    Next stop TikTok?

    [on the Brazilian Hedonistic Imperative]
    O Imperativo Hedonista
    Good heavens. Awesome. Thank you Kaio!!
    Some of my stuff has been translated into Portuguese (cf. O Projecto Abolicionista), but no one has ever been brave enough to tackle the original HI - until now. Thank you Kaio. Kudos.

    And now O Que Significa Ser um Filósofo?, O Imperativo de Abolir o Sofrimento and Projeto Genético para Criar uma Biosfera Feliz. Very many thanks to Barbara of the admirable Far Out Initiative, who has blown me away by volunteering Brazilian versions of my What’s it like to be a Philosopher?, The Imperative to Abolish Suffering and Compassionate Biology. Barbara’s previous work was translating Jair Bolsonaro’s speeches. So this must have been a slight shift of gear.
    See too ChatGPT-4 on gene drives.
    A Spanish version of What Is It Like To Be A Philosopher is now available too:
    Qué es ser filósofo?

    [on hedonic engineering]
    Should happiness be left to chance?
    Or precision-engineered?
    Engineering happiness
    Engineering Happiness

    [on negative utilitarianism]
    Negative utilitarianism
    I sometimes wonder if the NU thing to do is…not to talk about NU. After all, something like the abolitionist project - the goal, not the biological-genetic tools - is implicit in traditions as diverse as Buddhism and the WHO definition of health. If one argues for a biohappiness revolution, the focus can stay on practicalities. Classical utilitarians ought to support the abolitionist project too. But tell people you’re NU and - more often than not - discussion soon gets side-tracked onto thought-experiments about whether you’d press a notional OFF button.

    [on a reductio of classical utilitarianism]
    I'm curious how many classical utilitarians would agree with ChatGPT.
    On ChatGPT on Utilitarianism, I asked the following:

    Super-exponential growth is much more rapid than exponential growth. Here is a thought-experiment for classical and negative utilitarians. Suppose a genie offers me the opportunity of super-exponential growth in my happiness at the expense of the exponential growth of your suffering. Should (1) classical utilitarians and (2) negative utilitarians accept the genie's offer?

    Our zombie friend replied:

    "(1) Classical utilitarians would likely accept the genie's offer of super-exponential growth in happiness at the expense of the exponential growth of someone else's suffering if they believed that the overall amount of happiness created by this exchange would be greater than the overall amount of suffering created. However, this decision would depend on the precise details of the thought experiment, including the magnitude of the happiness and suffering involved, the number of individuals affected, and the duration and intensity of the happiness and suffering. If the amount of suffering created outweighs the amount of happiness created, or if the happiness is only experienced by a small number of individuals while the suffering is widespread, then classical utilitarians may not accept the genie's offer.

    (2) Negative utilitarians, on the other hand, prioritize the reduction of suffering above the promotion of happiness. Therefore, they would likely reject the genie's offer of super-exponential growth in happiness at the expense of the exponential growth of someone else's suffering, since this would cause an increase in overall suffering. Negative utilitarians prioritize the minimization of suffering, even if this means sacrificing the promotion of happiness."

    The reductio of CU?
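
    For anyone who wants the arithmetic behind “super-exponential” versus “exponential” made concrete, here is a minimal sketch in Python (my own toy illustration; the model and its numbers are assumptions, not part of the prompt or ChatGPT’s reply). Happiness is modelled as doubly-exponential growth, 2^(2^t), and suffering as plain exponential growth, 2^t; the ratio between them itself grows without bound, which is the asymmetry the genie’s offer trades on.

    # Toy model, assumed purely for illustration: super-exponential happiness
    # versus exponential suffering, evaluated over a few time-steps.

    def happiness(t: int) -> int:
        """Doubly-exponential ('super-exponential') growth."""
        return 2 ** (2 ** t)

    def suffering(t: int) -> int:
        """Plain exponential growth."""
        return 2 ** t

    for t in range(1, 7):
        ratio = happiness(t) // suffering(t)  # exact integer ratio, equal to 2**(2**t - t)
        print(f"t={t}: happiness=2^{2**t}, suffering=2^{t}, ratio=2^{2**t - t} ({ratio})")

    Nothing in the arithmetic settles the ethics, of course; it just shows why, on a naive aggregative calculus, the happiness term swamps the suffering term - which is what makes the thought-experiment a candidate reductio of classical utilitarianism.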

    [on the biohappiness revolution]
    The Biohappiness Revolution
    Exploring The Biohappiness Revolution
    (Patricia Nonis interviews DP)
    Biotech is a gamechanger. Let's make suffering physically and conceptually impossible.



    David Pearce (2023)
    dave@hedweb.com

