
London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace
Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.
His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.
He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.
In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, along with lots of other material, are available at https://calumchace.com/.
He is co-founder of the Economic Singularity Foundation, a think tank focused on the future of jobs. The Foundation has published Stories from 2045, a collection of short stories written by its members.
Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.
David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.
He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.
As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.
From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.
He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and he holds a DSc from the University of Westminster.
Catastrophe and consent
In this episode, co-hosts Calum and David continue their reflections on what they have both learned from their interactions with guests on this podcast over the last few months. Where have their ideas changed? And where are they still sticking to their guns?
The previous episode started to look at two of what Calum calls the 4 Cs of superintelligence: Cease and Control. In this episode, under the headings of Catastrophe and Consent, the discussion widens to consider both the very bad outcomes and the very good outcomes that could follow from the emergence of AI superintelligence.
Topics addressed in this episode include:
*) A 'zombie' argument that corporations are superintelligences - and what that suggests about the possibility of human control over a superintelligence
*) The existential threat of the entire human species being wiped out
*) The vulnerabilities of our shared infrastructure
*) An AGI may pursue goals even without being conscious or having agency
*) The risks of accidental and/or coincidental catastrophe
*) The example of a single technical fault causing the failure of automated passport checking throughout the UK
*) The example of automated control of the Boeing 737 Max causing the deaths of everyone aboard two flights - in Indonesia and in Ethiopia
*) The example from 1983 of Stanislav Petrov using his human judgement regarding an automated alert of apparently incoming nuclear missiles
*) Reasons why an AGI might decide to eliminate humans
*) The serious risk of a growing public panic - and potential mishandling of it by self-interested partisan political leaders
*) Why "Consent" is a better name than "Celebration"
*) Reasons why an AGI might consent to help humanity flourish, solving all our existential problems
*) Two models for humans merging with an AI superintelligence - to seek "Control", and as a consequence of "Consent"
*) Enhanced human intelligence could play a role in avoiding a surge of panic
*) Reflections on "The Artilect War" by Hugo de Garis: cosmists vs. terrans
*) Reasons for supporting "team human" (or "team posthuman") as opposed to an AGI that might replace us
*) Reflections on "Diaspora" by Greg Egan: three overlapping branches of future humans
*) Is collaboration a self-evident virtue?
*) Will an AGI consider humans to be endlessly fascinating? Or regard our culture and history as shallow and uninspiring?
*) The inscrutability of AGI motivation
*) A reason to consider "Consent" as the most likely outcome
*) A fifth 'C' word, as discussed by Max Tegmark
*) A reason to keep working on a moonshot solution for "Control"
*) Practical steps to reduce the risk of public panic
Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Universal Public Domain Dedication