London Futurists

Collapsing AGI timelines, with Ross Nordby

October 26, 2022 | Season 1, Episode 10

How likely is it that, by 2030, someone will build artificial general intelligence (AGI)?

Ross Nordby is an AI researcher who has shortened his AGI timelines: he has changed his mind about when AGI might be expected to exist. He recently published an article on the LessWrong community discussion site, giving his argument in favour of shortening these timelines. He now identifies 2030 as the date by which it is 50% likely that AGI will exist. In this episode, we ask Ross questions about his argument, and consider some of the implications that arise.

Article by Ross: https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon

Effective Altruism Long-Term Future Fund: https://funds.effectivealtruism.org/funds/far-future

MIRI (Machine Intelligence Research Institute): https://intelligence.org/

00:57 Ross’ background: real-time graphics, mostly in video games
02:10 Increased familiarity with AI made him reconsider his AGI timeline
02:37 He submitted a grant request to the Effective Altruism Long-Term Future Fund to move into AI safety work
03:50 What Ross was researching: can we make an AI intrinsically interpretable?
04:25 The AGI Ross is interested in is defined by capability, regardless of consciousness or sentience
04:55 An AI that is itself "goalless" might be put to uses with destructive side-effects
06:10 The leading AI research groups are still DeepMind and OpenAI
06:43 Other groups, like Anthropic, are more focused on alignment
07:22 If you can align an AI to any goal at all, that is progress: it indicates you have some control
08:00 Is this not all abstract and theoretical, a distraction from more pressing problems?
08:30 There are other serious problems, like pandemics and global warming, but we have to solve them all
08:45 Globally, only around 300 people are focused on AI alignment: not enough
10:05 AGI might well be less than three decades away
10:50 AlphaGo surprised the community, which had expected Go to remain unbeaten by machines for another 10-15 years
11:10 AlphaGo was then surpassed by systems like AlphaZero and MuZero, which were actually simpler and more flexible
11:20 AlphaTensor frames matrix multiplication as a game, and becomes superhuman at it
11:40 The Transformer paper was published in 2017, but no-one forecast GPT-3’s capabilities
12:00 This year, Minerva (similar to GPT-3) scored 50% on the MATH dataset: high-school competition maths problems
13:16 Illustrators now feel threatened by systems like DALL-E, Stable Diffusion, etc.
13:30 The conclusion is that intelligence is easier to simulate than we thought
13:40 But these systems also do stupid things; they are brittle
18:00 But we could use transformers more intelligently
19:20 They turn out to be able to write code, explain jokes, and do mathematical reasoning
21:10 Google's Gopher AI
22:05 Machines don’t yet have the internal models of the world that we call common sense
24:00 But an early version of GPT-3 demonstrated the ability to model a human thought process alongside a machine’s
27:15 Ross’ current timeline: 50% probability of AGI by 2030, and 90+% by 2050
27:35 Counterarguments?
29:35 So what is to be done?
30:55 If convinced that AGI is coming soon, most lay people would probably demand that all AI research stops immediately, which isn’t possible
31:40 Maybe publicity would be good, in order to generate resources for AI alignment, and to avoid a backlash

