London Futurists

AI overview: 3. Recent developments

September 19, 2022 · Season 1, Episode 4
Show Notes

In this episode, co-hosts Calum Chace and David Wood explore a number of recent developments that are rapidly changing what counts as "state of the art" in AI.

00.05: Short recap of previous episodes
00.20: A couple of Geoff Hinton stories
02.27: Today's subject: the current state of AI
02.53: Search
03.35: Games
03.58: Translation
04.33: Maps
05.33: Making the world increasingly understandable
07.00: Transformers. "Attention is all you need"
08.00: Masked language models
08.18: GPT-2 and GPT-3
08.54: Parameters and synapses
10.15: Foundation models produce much of the content on the internet
10.40: Data is even more important than size
11.45: Brittleness and transfer learning
13.15: Do machines understand?
14.05: Human understanding and stochastic parrots
15.27: Chatbots
16.22: Tay embarrasses Microsoft
16.53: BlenderBot
17.19: Far from AGI. LaMDA and Blake Lemoine
18.26: The value of anthropomorphising
19.53: Automation
20.25: Robotic Process Automation (RPA)
20.55: Drug discovery
21.45: New antibiotics. Discovering Halicin
23.50: AI drug discovery as practiced by Insilico Medicine, Exscientia, and others
25.33: Eroom's Law
26.34: AlphaFold. How 200 million proteins fold
28.30: Towards a complete model of the cell
29.19: Analysis
30.04: Air traffic controllers use only 10% of the data available to them
30.36: Transfer learning can mitigate the escalating demand for compute power
31.18: Next up: the short-term future of AI

Audio engineering by Alexander Chace.

Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Universal Public Domain Dedication

For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/