London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace
Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.
His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.
He also wrote Pandora’s Brain and Pandora’s Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.
In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, along with lots of other material, are available at https://calumchace.com/.
He is co-founder of the Economic Singularity Foundation, a think tank focused on the future of jobs. The Foundation has published Stories from 2045, a collection of short stories written by its members.
Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.
David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.
He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.
As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.
From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.
He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
The AI suicide race, with Jaan Tallinn
The race to create advanced AI is becoming a suicide race.
That's part of the thinking behind the open letter from the Future of Life Institute which "calls on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4".
In this episode, our guest, Jaan Tallinn, explains why he sees this pause as a particularly important initiative.
In the 1990s and 2000s, Jaan led much of the software engineering for the file-sharing application Kazaa and the online communications tool Skype. He is also known as one of the earliest investors in DeepMind, before it was acquired by Google.
More recently, Jaan has been a prominent advocate for study of existential risks, including the risks from artificial superintelligence. He helped set up the Centre for the Study of Existential Risk (CSER) in 2012 and the Future of Life Institute (FLI) in 2014.
Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.cser.ac.uk/
https://en.wikipedia.org/wiki/Jaan_Tallinn
Topics addressed in this episode include:
*) The differences between CSER and FLI
*) Do the probabilities for the occurrence of different existential risks vary by orders of magnitude?
*) The principle that "arguments screen authority"
*) The possibility that GPT-6 will be built, not by humans, but by GPT-5
*) Growing public concern around the world that the fate of all humanity is, in effect, being decided by the actions of just a small number of people in AI labs
*) Two reasons why FLI recently changed its approach to AI risk
*) Why the 2015 AI safety conference in Puerto Rico, initially viewed as a massive success, has had little lasting impact
*) Uncertainty about a potential cataclysmic event doesn't entitle people to conclude it won't happen any time soon
*) The argument that LLMs (Large Language Models) are an "off ramp" rather than being on the road to AGI
*) Why the duration of 6 months was selected for the proposed pause
*) The "What about China?" objection to the pause
*) Potential concrete steps that could take place during the pause
*) The FLI document "Policymaking in the Pause"
*) The article by Luke Muehlhauser of Open Philanthropy, "12 tentative ideas for US AI policy"
*) The "summon and tame" way of thinking about the creation of LLMs - and the risk that minds summoned in this way won't be able to be tamed
*) Scenarios in which the pause might be ignored by various entities, such as authoritarian regimes, organised crime, rogue corporations, and extraordinary individuals such as Elon Musk and John Carmack
*) A meta-principle for deciding which types of AI research should be paused
*) Why 100-million-dollar projects become even harder when they are illegal
*) The case for requiring the pre-registration of large-scale mind-summoning experiments
*) A possible limit of 10^25 on the number of FLOPs (Floating Point Operations) an AI model can consume
*) The reactions of AI lab leaders to the widespread public response to GPT-4 and to the pause letter
*) Even Sundar Pichai, CEO of Google/Alphabet, has called for government intervention regarding AI
*) The hardware overhang complication with the pause
*) Not letting "the perfect" be "the enemy of the good"
*) Elon Musk's involvement with FLI and with the pause letter
*) "Humanity now has cancer"
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Dedication