London Futurists

The Singularity Principles

November 02, 2022, Season 1 Episode 11
Show Notes

Co-hosts Calum and David dig deep into aspects of David's recent book "The Singularity Principles". Calum (CC) says he is, in part, unconvinced. David (DW) agrees that the projects he recommends are hard, but suggests some practical ways forward.

0.25 The technological singularity may be nearer than we think
1.10 Confusions about the singularity
1.35 “Taking back control of the singularity”
2.40 The “Singularity Shadow”: over-confident predictions which repel people
3.30 The over-confidence includes predictions of timescale…
4.00 … and outcomes
4.45 The Singularity as the Rapture of the Nerds?
5.20 The Singularity is not a religion …
5.40 … although if positive, it will confer almost godlike powers
6.35 Much discussion of the Singularity is dystopian, but there could be enormous benefits, including…
7.15 Digital twins for cells and whole bodies, and super longevity
7.30 A new enlightenment
7.50 Nuclear fusion
8.10 Humanity’s superpower is intelligence
8.30 Amplifying our intelligence should increase our power
9.50 DW’s timeline: 50% chance of AGI by 2050, 10% by 2030
10.10 The timeline is contingent on human actions
10.40 Even if AGI isn’t coming until 2070, we should be working on AI alignment today
11.10 AI Impacts’ survey of all contributors to NeurIPS
11.35 Median view: 50% chance of AGI in 2059, and many were pessimistic
12.15 This discussion can’t be left to AI researchers
12.40 A bad beta version might be our last invention
13.00 A few hundred people are now working on AI alignment, and tens of thousands on advancing AI
13.35 The AI research population is still growing faster than the alignment community
13.40 CC: Three routes to a positive outcome
13.55 1. Luck. The world turns out to be configured in our favour
14.30 2. Mathematical approaches to AI alignment succeed
14.45 We either align AIs forever, or manage to control them. This is very hard
14.55 3. We merge with the superintelligent machines
15.40 Uploading is a huge engineering challenge
15.55 Philosophical issues raised by uploading: is the self retained?
16.10 DW: routes 2 and 3 are too binary. A fourth route is solving morality
18.15 Individual humans will be augmented; indeed, we already are
18.55 But augmented humans won’t necessarily be benign
19.30 DW: We have to solve beneficence
20.00 CC: We can’t hope to solve our moral debates before AGI arrives
20.20 In which case we are relying on route 1 – luck
20.30 DW: Progress in philosophy *is* possible, and must be accelerated
21.15 The Universal Declaration of Human Rights shows that generalised moral principles can be agreed
22.25 CC: That sounds impossible. The UDHR is very broad and often ignored
23.05 Solving morality is even harder than the MIRI project, and reinforces the idea that route 3 is our best hope
23.50 It’s not unreasonable to hope that wisdom correlates with intelligence
24.00 DW: We can proceed step by step, starting with progress on facial recognition, autonomous weapons, and similar intermediate questions
25.10 CC: We are so far from solving moral questions. Americans can’t even agree if a coup against their democracy was a bad thing
25.40 DW: We have to make progress, and quickly. AI might help us
26.50 The essence of transhumanism is that we can use technology to improve ourselves
27.20 CC: If you had a magic wand, your first wish should probably be to make all humans see each other as members of the same tribe
27.50 Is AI ethics a helpful term?
28.05 AI ethics is a growing profession, but framing problems as ethical implies that people who disagree with you are bad, not just wrong
28.55 AI ethics makes debates about AI harder to resolve, and angrier
29.15 AI researchers are understandably offended by finger-wagging, self-proclaimed AI ethicists who may not understand what they are talking about