London Futurists
Anticipating and managing exponential impact - hosts David Wood and Calum Chace
Calum Chace is a sought-after keynote speaker and best-selling writer on artificial intelligence. He focuses on the medium- and long-term impact of AI on all of us, our societies and our economies. He advises companies and governments on AI policy.
His non-fiction books on AI are Surviving AI, about superintelligence, and The Economic Singularity, about the future of jobs. Both are now in their third editions.
He also wrote Pandora's Brain and Pandora’s Oracle, a pair of techno-thrillers about the first superintelligence. He is a regular contributor to magazines, newspapers, and radio.
In the last decade, Calum has given over 150 talks in 20 countries on six continents. Videos of his talks, and lots of other materials, are available at https://calumchace.com/.
He is co-founder of a think tank focused on the future of jobs, called the Economic Singularity Foundation. The Foundation has published Stories from 2045, a collection of short stories written by its members.
Before becoming a full-time writer and speaker, Calum had a 30-year career in journalism and in business, as a marketer, a strategy consultant and a CEO. He studied philosophy, politics, and economics at Oxford University, which confirmed his suspicion that science fiction is actually philosophy in fancy dress.
David Wood is Chair of London Futurists, and is the author or lead editor of twelve books about the future, including The Singularity Principles, Vital Foresight, The Abolition of Aging, Smartphones and Beyond, and Sustainable Superabundance.
He is also principal of the independent futurist consultancy and publisher Delta Wisdom, executive director of the Longevity Escape Velocity (LEV) Foundation, Foresight Advisor at SingularityNET, and a board director at the IEET (Institute for Ethics and Emerging Technologies). He regularly gives keynote talks around the world on how to prepare for radical disruption. See https://deltawisdom.com/.
As a pioneer of the mobile computing and smartphone industry, he co-founded Symbian in 1998. By 2012, software written by his teams had been included as the operating system on 500 million smartphones.
From 2010 to 2013, he was Technology Planning Lead (CTO) of Accenture Mobility, where he also co-led Accenture’s Mobility Health business initiative.
He has an MA in Mathematics from Cambridge, where he also undertook doctoral research in the Philosophy of Science, and a DSc from the University of Westminster.
Stability and combinations, with Aleksa Gordić
This episode continues our discussion with AI researcher Aleksa Gordić from DeepMind on understanding today’s most advanced AI systems.
00.07 This episode builds on Episode 5
01.05 We start with GANs – Generative Adversarial Networks
01.33 Solving the problem of stability, with higher resolution
03.24 GANs are notoriously hard to train. They suffer from mode collapse
03.45 Worse, the model might not learn anything, and the result is pure noise
03.55 DCGANs (Deep Convolutional GANs) introduced convolutional layers to stabilise training and enable higher resolution
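To make the adversarial setup discussed above (01.05-03.55) concrete, here is a minimal GAN training loop in PyTorch. It is a sketch only: the toy data, network sizes and hyperparameters are illustrative assumptions, not anything described in the episode.

```python
# Minimal GAN training loop (PyTorch). The generator and discriminator are
# trained against each other; the tug-of-war between the two losses is what
# makes GANs unstable and prone to mode collapse, as discussed above.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2   # toy 2-D data stands in for images

generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(128, data_dim) * 0.5 + 2.0       # stand-in "real" data
    fake = generator(torch.randn(128, latent_dim))

    # Discriminator update: score real samples towards 1, fakes towards 0
    d_loss = (bce(discriminator(real), torch.ones(128, 1))
              + bce(discriminator(fake.detach()), torch.zeros(128, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator score fakes as real
    g_loss = bce(discriminator(fake), torch.ones(128, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```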
04.37 The technique of outpainting
05.55 Generating text as well as images, and producing stories
06.14 AI Dungeon
06.28 From GANs to Diffusion models
06.48 DDPM (Denoising Diffusion Probabilistic Models) does for diffusion models what DCGANs did for GANs
07.20 They are more stable, and don’t suffer from mode collapse
07.30 They do have downsides. They are much more computation intensive
08.24 What does the word diffusion mean in this context?
08.40 It’s adopted from physics. It peels noise away from the image
09.17 Isn’t that rewinding entropy?
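As a concrete illustration of the "diffusion" terminology (08.40), here is a sketch of the standard DDPM forward (noising) process; the reverse model is trained to undo it one step at a time, which is the sense in which it "peels noise away" from the image. The schedule values and tensor sizes are illustrative assumptions.

```python
# Sketch of the DDPM forward (noising) process discussed around 06.48-09.17.
# The learned reverse process would be trained to undo one noising step at a
# time, gradually recovering a clean image. Schedule values are illustrative.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def noised_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) * x_0, (1 - a_bar_t) * I)."""
    a_bar = alphas_cumprod[t]
    eps = torch.randn_like(x0)
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps

x0 = torch.rand(3, 64, 64)                       # a stand-in "image"
x_noisy = noised_sample(x0, t=999)               # at t = T-1 this is close to pure noise
```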
09.45 One application is making a photo taken in 1830 look like one taken yesterday
09.58 Semantic Segmentation Masks convert bands of flat colour into realistic images of sky, earth, sea, etc
10.35 Bounding boxes generate objects of a specified class from tiny inputs
11.00 The images are not taken from previously seen images on the internet, but invented from scratch
11.40 The model saw a lot of images during training, but during the creation process it does not refer back to them
12.40 Failure modes are progressively eliminated by amendments, as is typical with models like this
12.55 Scott Alexander blogged about models producing images with wrong relationships, and how this was fixed within 3 months
13.30 The failure modes get harder to find as the obvious ones are eliminated
13.45 Even with 175 billion parameters, GPT-3 struggled with arithmetic on three-digit numbers
15.18 Are you often surprised by what the models do next?
15.50 The research community is like a hive mind, and you never know where the next idea will come from
16.40 Often the next thing comes from a couple of students at a university
16.58 How Ian Goodfellow created the first GAN
17.35 Are the older tribes described by Pedro Domingos (analogisers, evolutionaries, Bayesians…) now obsolete?
18.15 We should cultivate different approaches because you never know where they might lead
19.15 Symbolic AI (aka Good Old Fashioned AI, or GOFAI) is still alive and kicking
19.40 AlphaGo combined deep learning and GOFAI
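As an illustration of how deep learning and GOFAI-style search combine in AlphaGo-type systems (19.40), here is a sketch of a PUCT-style move-scoring rule, where a learned policy network's prior biases a classical tree search. The function name and exploration constant are illustrative assumptions, not AlphaGo's actual code.

```python
# Sketch of how AlphaGo-style systems mix a learned policy network with
# classic tree search: each candidate move is scored by combining its search
# statistics with the neural network's prior probability for that move.
import math

def puct_score(q_value: float, prior: float, parent_visits: int,
               child_visits: int, c_puct: float = 1.5) -> float:
    """Higher scores mean the search should explore this move next."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q_value + exploration

# Example: a move the policy network likes (high prior) but which the search
# has barely visited receives a large exploration bonus.
print(puct_score(q_value=0.1, prior=0.4, parent_visits=500, child_visits=2))
```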
21.00 Doug Lenat is still persevering with Cyc, a purely GOFAI approach
21.30 GOFAI models had no learning element. They can’t go beyond the humans whose expertise they encapsulate
22.25 The now-famous move 37 in AlphaGo’s game two against Lee Sedol in 2016
23.40 Moravec’s paradox. Easy things are hard, and hard things are easy
24.20 The combination of deep learning and symbolic AI has been long urged, and in fact is already happening
24.40 Will models always demand more and more compute?
25.10 The human brain has far more compute power than even our biggest systems today
25.45 Sparse, or MoE (Mixture of Experts) systems are quite efficient
26.00 We need more compute, better algorithms, and more efficiency
26.55 Dedicated AI chips will help a lot with efficiency
26.25 Cerebras claims that GPT-3 could be trained on a single chip
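For the sparse Mixture-of-Experts idea mentioned at 25.45, here is a minimal sketch of top-1 routing: each token is sent to a single expert, so compute per token stays roughly flat even as the total parameter count grows. All sizes and names are illustrative assumptions.

```python
# Minimal sketch of sparse Mixture-of-Experts (MoE) routing: a router picks
# the best expert for each token, and only that expert runs. This is one of
# the efficiency techniques mentioned in the episode. Sizes are illustrative.
import torch
import torch.nn as nn

d_model, n_experts = 64, 8
experts = nn.ModuleList(
    nn.Sequential(nn.Linear(d_model, 256), nn.ReLU(), nn.Linear(256, d_model))
    for _ in range(n_experts)
)
router = nn.Linear(d_model, n_experts)

def moe_layer(tokens: torch.Tensor) -> torch.Tensor:
    """tokens: (n_tokens, d_model) -> (n_tokens, d_model), top-1 routing."""
    gate_logits = router(tokens)                        # (n_tokens, n_experts)
    weights, chosen = gate_logits.softmax(-1).max(-1)   # best expert per token
    out = torch.zeros_like(tokens)
    for e in range(n_experts):
        mask = chosen == e
        if mask.any():
            out[mask] = weights[mask, None] * experts[e](tokens[mask])
    return out

y = moe_layer(torch.randn(10, d_model))
```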
27.50 Models can increasingly be trained for general purposes and then tweaked for particular tasks
28.30 Some of the big new models are open access
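As a sketch of the "train generally, then tweak for a task" pattern mentioned at 27.50, here is the common freeze-and-fine-tune recipe in PyTorch; the choice of backbone and the 5-class head are illustrative assumptions, not anything specified in the episode.

```python
# Sketch of fine-tuning a general-purpose pretrained model for a specific task:
# freeze the pretrained weights and train only a small task-specific head.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights="IMAGENET1K_V1")   # general-purpose pretrained model
for param in backbone.parameters():
    param.requires_grad = False                        # freeze the pretrained weights

backbone.fc = nn.Linear(backbone.fc.in_features, 5)   # new head for a 5-class task
# Only the new head's parameters would be passed to the optimiser for fine-tuning.
trainable = [p for p in backbone.parameters() if p.requires_grad]
```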
29.00 What else should people learn about with regard to advanced AI?
29.20 Neural Radiance Fields (NeRF) models
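For the Neural Radiance Fields idea at 29.20, here is a minimal sketch of the core mapping: a small MLP takes a 3-D point and a viewing direction and returns a colour and a density, which a renderer then integrates along camera rays to form an image. The layer sizes are illustrative assumptions, not the published architecture.

```python
# Sketch of the core NeRF idea: a scene is represented implicitly by an MLP
# that maps (position, viewing direction) -> (colour, density).
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 3, hidden), nn.ReLU(),    # (x, y, z) + view direction
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                   # RGB + density
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        rgb_sigma = self.net(torch.cat([xyz, view_dir], dim=-1))
        rgb = rgb_sigma[..., :3].sigmoid()          # colours in [0, 1]
        sigma = rgb_sigma[..., 3:].relu()           # non-negative density
        return rgb, sigma

model = TinyNeRF()
rgb, sigma = model(torch.rand(1024, 3), torch.rand(1024, 3))
```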
30.40 Flamingo and Gato
31.15 We have mostly discussed research in these episodes, rather than engineering