London Futurists

Governing the transition to AGI, with Jerome Glenn

December 21, 2022 · Season 1, Episode 18
Show Notes

Our guest on this episode is someone with excellent connections to the foresight departments of governments around the world. He is Jerome Glenn, Founder and Executive Director of the Millennium Project.

The Millennium Project is a global participatory think tank established in 1996, which now has over 70 nodes around the world. Its stated purpose is to "Improve humanity's prospects for building a better world". The organisation produces regular "State of the Future" reports as well as updates on what it describes as "the 15 Global Challenges". It recently released an acclaimed report on three scenarios for the future of work. One of its new projects is the main topic of this episode, namely scenarios for the global governance of the transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI).

Topics discussed in this episode include:
*) Why many futurists are jealous of Alvin Toffler
*) The benefits of a decentralised, incremental approach to foresight studies
*) Special features of the Millennium Project compared to other think tanks
*) How the Information Revolution differs from the Industrial Revolution
*) What is likely to happen if there is no governance of the transition to AGI
*) Comparisons with regulating the use of cars - and the use of nuclear materials
*) Options for licensing, auditing, and monitoring
*) How the development of a technology may be governed even if it has few visible signs
*) Three options: "Hope", "Control", and "Merge" - but all face problems; in all three cases, getting the initial conditions right could make a huge difference
*) Distinctions between AGI and ASI (Artificial Superintelligence), and whether an ASI could act in defiance of its initial conditions
*) Controlling AGI is likely to be impossible, but controlling the companies that are creating AGI is more credible
*) How actions taken by the EU might influence decisions elsewhere in the world
*) Options for "aligning" AGI as opposed to "controlling" it
*) Complications with the use of advanced AI by organised crime and by rogue states
*) The poor understanding of AGI among most political advisors, and their tendency to push discussions back to the issues of ANI
*) Risks of catastrophic social destabilisation if "the mother of all panics" about AGI occurs on top of existing culture wars and political tribalism
*) Past examples of progress with technologies that initially seemed impossible to govern
*) The importance of taking some initial steps forward, rather than being overwhelmed by the scale of the challenge.

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Selected follow-up reading:
https://en.wikipedia.org/wiki/Jerome_C._Glenn
https://www.millennium-project.org/
https://www.millennium-project.org/first-steps-for-artificial-general-intelligence-governance-study-have-begun/
The 2020 book "After Shock: The World's Foremost Futurists Reflect on 50 Years of Future Shock - and Look Ahead to the Next 50"