London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace
Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.
His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.
He also wrote Pandora's Brain and Pandora's Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.
In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other material, are available at https://calumchace.com/.
He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.
Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.
David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.
He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.
As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.
From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.
He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
Against pausing AI research, with Pedro Domingos
Should the pace of research into advanced artificial intelligence be slowed down, or perhaps even paused completely?
Your answer to that question probably depends on your answers to a number of other questions. Is advanced artificial intelligence reaching the point where it could result in catastrophic damage? Is a slowdown desirable, given that AI can also lead to many very positive outcomes, including tools to guard against the worst excesses of other applications of AI? And even if a slowdown is desirable, is it practical?
Our guest in this episode is Professor Pedro Domingos of the University of Washington. He is perhaps best known for his book "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World".
That book takes an approach to the future of AI that is significantly different from what you can read in many other books. It describes five different "tribes" of AI researchers, each with their own paradigms, and it suggests that true progress towards human-level general intelligence will depend on a unification of these different approaches. In other words, we won't reach AGI just by scaling up deep learning approaches, or even by adding in features from logical reasoning.
Follow-up reading:
https://homes.cs.washington.edu/~pedrod/
https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Topics addressed in this episode include:
*) The five tribes of AI research - why there's a lot more to AI than deep learning
*) Why unifying these five tribes may not be sufficient to reach human-level general intelligence
*) The task of understanding an entire concept (e.g. 'horse') from just seeing a single example
*) A wide spread of estimates of the timescale to reach AGI
*) Different views as to the true risks from advanced AI
*) The case that risks arise from AI incompetence rather than from increased AI competence
*) A different risk: that bad actors will gain dangerous amounts of power from access to increasingly competent AI
*) The case for using AI to prevent misuse of AI
*) Yet another risk: that an AI trained against one objective function will nevertheless adopt goals diverging from that objective
*) How AIs that operate beyond our understanding could still remain under human control
*) How fully can evolution be trusted to produce outputs in line with a specified objective function?
*) The example of humans domesticating wolves into dogs that pose no threat to us
*) The counterexample of humans pursuing goals contrary to our in-built genetic drives
*) Complications with multiple levels of selection pressure, e.g. genes and memes working at cross purposes
*) The “genie problem” (or “King Midas problem”) of choosing an objective function that is apparently attractive but actually dangerous
*) Assessing the motivations of people who have signed the FLI (Future of Life Institute) letter advocating a pause on the development of larger AI language models
*) Pros and cons of escalating a sense of urgency
*) The two key questions of existential risk from AI: how much risk is acceptable, and how large might the actual risk become in the near future?
*) The need for a more rational discussion of the issues raised by increasingly competent AIs
Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Public Domain Dedication