London Futurists

The case for a conditional AI safety treaty, with Otto Barten

London Futurists Season 1 Episode 113

How can a binding international treaty be agreed and put into practice when many parties are strongly tempted to break its rules for commercial or military advantage, and when cheating may be hard to detect? That's the dilemma we'll examine in this episode, concerning possible treaties to govern the development and deployment of advanced AI.

Our guest is Otto Barten, Director of the Existential Risk Observatory, which is based in the Netherlands but operates internationally. In November last year, Time magazine published an article by Otto advocating what his organisation calls a Conditional AI Safety Treaty. In March this year, these ideas were expanded into a 34-page preprint, which we'll be discussing today: "International Agreements on AI Safety: Review and Recommendations for a Conditional AI Safety Treaty".

Before co-founding the Existential Risk Observatory in 2021, Otto had roles as a sustainable energy engineer, data scientist, and entrepreneur. He has a BSc in Theoretical Physics from the University of Groningen and an MSc in Sustainable Energy Technology from Delft University of Technology.

Selected follow-ups:


Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration


