London Futurists

Stability and combinations, with Aleksa Gordić

September 28, 2022 London Futurists Season 1 Episode 6

This episode continues our discussion with AI researcher Aleksa Gordić from DeepMind on understanding today’s most advanced AI systems.

00.07 This episode builds on Episode 5
01.05 We start with GANs – Generative Adversarial Networks
01.33 Solving the problem of stability, with higher resolution
03.24 GANs are notoriously hard to train. They suffer from mode collapse (see the GAN sketch after these chapter markers)
03.45 Worse, the model might not learn anything, and the result is pure noise
03.55 DCGANs (Deep Convolutional GANs) introduced convolutional layers to stabilise training and enable higher resolution
04.37 The technique of outpainting
05.55 Generating text as well as images, and producing stories
06.14 AI Dungeon
06.28 From GANs to Diffusion models
06.48 DDPMs (Denoising Diffusion Probabilistic Models) do for diffusion models what DCGANs did for GANs (see the DDPM sketch after these chapter markers)
07.20 They are more stable, and don’t suffer from mode collapse
07.30 They do have downsides. They are much more computationally intensive
08.24 What does the word diffusion mean in this context?
08.40 The term is adopted from physics. The model gradually peels noise away from the image
09.17 Isn’t that rewinding entropy?
09.45 One application is making a photo taken in 1830 look like one taken yesterday
09.58 Semantic segmentation masks – bands of flat colour – are converted into realistic images of sky, earth, sea, etc
10.35 From bounding boxes as input, the model generates objects of a specified class
11.00 The images are not taken from previously seen images on the internet, but invented from scratch
11.40 The model saw a lot of images during training, but during the creation process it does not refer back to them
12.40 Failure modes are eliminated by incremental amendments, as is usual with models like this
12.55 Scott Alexander blogged about models producing images with wrong relationships, and how this was fixed within 3 months
13.30 The failure modes get harder to find as the obvious ones are eliminated
13.45 Even with 175 billion parameters, GPT-3 struggled with three-digit arithmetic
15.18 Are you often surprised by what the models do next?
15.50 The research community is like a hive mind, and you never know where the next idea will come from
16.40 Often the next thing comes from a couple of students at a university
16.58 How Ian Goodfellow created the first GAN
17.35 Are the older tribes described by Pedro Domingos (analogisers, evolutionists, Bayesians…) now obsolete?
18.15 We should cultivate different approaches because you never know where they might lead
19.15 Symbolic AI (aka Good Old Fashioned AI, or GOFAI) is still alive and kicking
19.40 AlphaGo combined deep learning and GOFAI
21.00 Doug Lenat is still persevering with Cyc, a purely GOFAI approach
21.30 GOFAI models have no learning element, so they can’t go beyond the humans whose expertise they encapsulate
22.25 The now-famous move 37 in AlphaGo’s game two against Lee Sedol in 2016
23.40 Moravec’s paradox: tasks that are easy for humans are hard for machines, and vice versa
24.20 The combination of deep learning and symbolic AI has long been urged, and in fact is already happening
24.40 Will models always demand more and more compute?
25.10 The human brain has far more compute power than even our biggest systems today
25.45 Sparse, or MoE (Mixture of Experts), systems are quite efficient (see the MoE sketch after these chapter markers)
26.00 We need more compute, better algorithms, and more efficiency
26.25 Dedicated AI chips will help a lot with efficiency
26.55 Cerebras claims that GPT-3 could be trained on a single one of its wafer-scale chips
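
For readers who want to see the adversarial set-up in code, here is a minimal GAN training loop on a toy 1-D distribution. It is an illustrative sketch, not the episode's or any paper's implementation: the tiny MLPs, toy data and hyperparameters are assumptions chosen for brevity, and real image GANs such as DCGAN replace the MLPs with convolutional networks.

    # Minimal GAN sketch: a generator G and a discriminator D trained adversarially
    # on a toy two-mode 1-D distribution. If training collapses, G covers only one mode.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def real_batch(n=128):
        # Two well-separated modes at -2 and +2
        modes = torch.randint(0, 2, (n, 1)).float()
        return torch.randn(n, 1) * 0.1 + (modes * 4.0 - 2.0)

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # noise -> sample
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # sample -> real/fake logit

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # Discriminator step: label real samples 1, generated samples 0
        x_real = real_batch()
        x_fake = G(torch.randn(128, 8)).detach()
        loss_d = bce(D(x_real), torch.ones(128, 1)) + bce(D(x_fake), torch.zeros(128, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: try to make D label generated samples as real
        x_fake = G(torch.randn(128, 8))
        loss_g = bce(D(x_fake), torch.ones(128, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # A collapsed generator produces samples clustered near a single mode (low std)
    samples = G(torch.randn(1000, 8)).detach()
    print(samples.mean().item(), samples.std().item())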
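
Similarly, here is a minimal sketch of the DDPM idea: noise is added to data according to a fixed schedule (the forward diffusion), and a network is trained to predict that noise so it can later be peeled away step by step. The toy 1-D data, the small MLP standing in for a U-Net, and all sizes are illustrative assumptions.

    # Minimal DDPM training sketch: learn to predict the noise added at a random step t
    import torch
    import torch.nn as nn

    T = 100                                      # number of diffusion steps
    betas = torch.linspace(1e-4, 0.02, T)        # noise schedule
    alphas_bar = torch.cumprod(1.0 - betas, 0)   # cumulative signal-retention factors

    # Predicts the noise from (noisy sample, normalised timestep)
    model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(2000):
        x0 = torch.randn(128, 1) * 0.5 + 2.0          # "clean" toy data
        t = torch.randint(0, T, (128, 1))
        eps = torch.randn_like(x0)
        ab = alphas_bar[t]
        x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps  # forward process: noise x0 in one shot
        eps_hat = model(torch.cat([x_t, t.float() / T], dim=1))
        loss = ((eps_hat - eps) ** 2).mean()          # simplified DDPM objective
        opt.zero_grad(); loss.backward(); opt.step()

    # Sampling would start from pure noise and repeatedly remove a little of the
    # predicted noise, one step at a time, which is where much of the extra
    # computational cost of diffusion models comes from.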
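
Finally, a minimal sketch of sparse Mixture-of-Experts routing: a small gating network sends each token to a single expert, so only a fraction of the model's parameters does work for any one token. The class name, top-1 routing and sizes are assumptions for illustration; production MoE systems typically use many experts, top-2 routing and load-balancing losses.

    # Minimal top-1 Mixture-of-Experts layer
    import torch
    import torch.nn as nn

    class TinyMoE(nn.Module):
        def __init__(self, d_model=16, n_experts=4):
            super().__init__()
            self.gate = nn.Linear(d_model, n_experts)
            self.experts = nn.ModuleList([
                nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                              nn.Linear(4 * d_model, d_model))
                for _ in range(n_experts)
            ])

        def forward(self, x):                    # x: (tokens, d_model)
            scores = self.gate(x).softmax(dim=-1)
            weight, idx = scores.max(dim=-1)     # top-1 expert per token
            out = torch.zeros_like(x)
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():                   # only the chosen expert runs for these tokens
                    out[mask] = weight[mask].unsqueeze(1) * expert(x[mask])
            return out

    moe = TinyMoE()
    print(moe(torch.randn(8, 16)).shape)         # torch.Size([8, 16])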
