
London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace
Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.
His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.
He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.
In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.
He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.
Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant, and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.
David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.
He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.
As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.
From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.
He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
The 4 Cs of Superintelligence
The 4 Cs of Superintelligence is a framework that casts fresh light on the vexing question of possible outcomes of humanity's interactions with an emerging superintelligent AI. The 4 Cs are Cease, Control, Catastrophe, and Consent. In this episode, the show's co-hosts, Calum Chace and David Wood, debate the pros and cons of the first two of these Cs, and lay the groundwork for a follow-up discussion of the pros and cons of the remaining two.
Topics addressed in this episode include:
*) Reasons why superintelligence might never be created
*) Timelines for the arrival of superintelligence have been compressed
*) Does the unpredictability of superintelligence mean we shouldn't try to consider its arrival in advance?
*) Two "big bangs" have caused dramatic progress in AI; what might the next such breakthrough bring?
*) The flaws in the "Level zero futurist" position
*) Two analogies contrasted: overcrowding on Mars, and travelling to Mars without knowing what we'll breathe when we get there
*) A startling illustration of the dramatic power of exponential growth
*) Concern about short-term risks is no reason to pay less attention to longer-term risks
*) Why the "Cease" option is looking more credible nowadays than it did a few years ago
*) Might "Cease" become a "Plan B" option?
*) Examples of political dictators who turned away from acquiring or using various highly risky weapons
*) Challenges facing a "Turing Police" who monitor for dangerous AI developments
*) If a superintelligence has agency (volition), it seems that "Control" is impossible
*) Ideas for designing superintelligence without agency or volition
*) Complications with emergent sub-goals (convergent instrumental goals)
*) A badly configured superintelligent coffee fetcher
*) Bad actors may add agency to a superintelligence, thinking it will boost its performance
*) The possibility of changing social incentives to reduce the dangers of people becoming bad actors
*) What's particularly hard about both "Cease" and "Control" is that they would need to remain in place forever
*) Human civilisations contain many diametrically opposed goals
*) Can we go beyond the statement "Life, liberty, and the pursuit of happiness" to a starting point for aligning AI with human values?
*) A cliff-hanger ending
The survey "Key open questions about the transition to AGI" can be found at https://transpolitica.org/projects/key-open-questions-about-the-transition-to-agi/
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication