London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace
Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.
His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.
He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.
In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.
He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.
Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.
David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.
He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.
As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.
From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.
He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
GPT: To ban or not to ban, that is the question
On March 14th, OpenAI launched GPT-4, which took the world by storm. Almost everybody, including people within the AI community, was stunned by its capabilities. A week later, the Future of Life Institute (FLI) published an open letter calling on the world's AI labs to pause the development of larger versions of GPT (generative pre-trained transformer) models until their safety can be ensured.
Recent episodes of this podcast have presented arguments for and against this call for a moratorium. Jaan Tallinn, one of the co-founders of FLI, made the case in favour. Pedro Domingos, an eminent AI researcher, and Kenn Cukier, a senior editor at The Economist, made variants of the case against. In this episode, co-hosts Calum Chace and David Wood highlight some key implications and give their own opinions. Expect some friendly disagreements along the way.
Follow-up reading:
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
Topics addressed in this episode include:
*) Definitions of Artificial General Intelligence (AGI)
*) Many analysts knowledgeable about AI have recently brought forward their estimates of when AGI will become a reality
*) The case that AGI poses an existential risk to humanity
*) The continued survival of the second smartest species on the planet depends entirely on the actions of the actual smartest species
*) One species can cause another to become extinct, without that outcome being intended or planned
*) Four different ways in which advanced AI could have terrible consequences for humanity: bugs in the implementation; the implementation being hacked (or jailbroken); bugs in the design; and the design being hijacked by emergent new motivations
*) Near-future AIs that still fall short of being AGI could have effects which, whilst not themselves existential, would plunge society into such a state of dysfunction and distraction that we are unable to prevent a subsequent AGI-induced disaster
*) Calum's "4 C's" categorisation of possible outcomes regarding AGI existential risks: Cease, Control, Catastrophe, and Consent
*) 'Consent' means a superintelligence decides that we humans are fun, enjoyable, interesting, worthwhile, or simply unobjectionable, and consents to let us carry on as we are, or to help us, or to allow us to merge with it
*) The 'Control' option arguably splits into "control while AI capabilities continue to proceed at full speed" and "control with the help of a temporary pause in the development of AI capabilities"
*) Growing public support for stopping AI development - driven by a sense of outrage that the future of humanity is seemingly being decided by a small number of AI lab executives
*) A comparison with how the 1983 film "The Day After" triggered a dramatic change in public opinion regarding the nuclear weapons arms race
*) How much practical value could there be in a six-month pause? Or will the six months be extended into an indefinite ban?
*) Areas where there could be at least some progress: methods to validate the output of giant AI models, and choices of initial configurations that would make the 'Consent' scenario more likely
*) Designs that might avoid the emergence of agency (convergent instrumental goals) within AI models as they acquire more intelligence
*) Why 'Consent' might be the most likely outcome
*) The longer a ban remains in place, the larger the risks of bad actors building AGIs
*) Contemplating how to secure the best upsides - an "AI summer" - from advanced AIs
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration