London Futurists

Catastrophe and consent

June 21, 2023 Season 1 Episode 44

In this episode, co-hosts Calum and David continue their reflections on what they have both learned from their interactions with guests on this podcast over the last few months. Where have their ideas changed? And where are they still sticking to their guns?

The previous episode started to look at two of what Calum calls the 4 Cs of superintelligence: Cease and Control. In this episode, under the headings of Catastrophe and Consent, the discussion widens to look at both the very bad outcomes and the very good outcomes that might follow from the emergence of AI superintelligence.

Topics addressed in this episode include:
*) A 'zombie' argument that corporations are superintelligences - and what that suggests about the possibility of human control over a superintelligence
*) The existential threat of the entire human species being wiped out
*) The vulnerabilities of our shared infrastructure
*) An AGI may pursue goals even without being conscious or having agency
*) The risks of accidental and/or coincidental catastrophe
*) A single technical fault caused the failure of automated passport checking throughout the UK
*) The example of automated control of the Boeing 737 Max causing the deaths of everyone aboard two flights - in Indonesia and in Ethiopia
*) The example from 1983 of Stanislav Petrov using his human judgement regarding an automated alert of apparently incoming nuclear missiles
*) Reasons why an AGI might decide to eliminate humans
*) The serious risk of a growing public panic - and potential mishandling of it by self-interested partisan political leaders
*) Why "Consent" is a better name than "Celebration"
*) Reasons why an AGI might consent to help humanity flourish, solving all our existential problems
*) Two models for humans merging with an AI superintelligence - to seek "Control", and as a consequence of "Consent"
*) Enhanced human intelligence could play a role in avoiding a surge of panic
*) Reflections on "The Artilect War" by Hugo de Garis: Cosmists vs. Terrans
*) Reasons for supporting "team human" (or "team posthuman") as opposed to an AGI that might replace us
*) Reflections on "Diaspora" by Greg Egan: three overlapping branches of future humans
*) Is collaboration a self-evident virtue?
*) Will an AGI consider humans to be endlessly fascinating? Or regard our culture and history as shallow and uninspiring?
*) The inscrutability of AGI motivation
*) A reason to consider "Consent" as the most likely outcome
*) A fifth 'C' word, as discussed by Max Tegmark
*) A reason to keep working on a moonshot solution for "Control"
*) Practical steps to reduce the risk of public panic

Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Public Domain Dedication
