London Futurists

A defence of human uniqueness against AI encroachment, with Kenn Cukier

April 19, 2023 London Futurists Season 1 Episode 35

Despite the impressive recent progress in AI capabilities, there are reasons why AI may be incapable of possessing a full "general intelligence". And although AI will continue to transform the workplace, some important jobs will remain outside the reach of AI. In other words, the Economic Singularity may not happen, and AGI may be impossible.

These are views defended by our guest in this episode, Kenneth Cukier, the Deputy Executive Editor of The Economist newspaper.

For the past decade, Kenn hosted its weekly tech podcast, Babbage. He is co-author of the 2013 book "Big Data", a New York Times best-seller that has been translated into over 20 languages. He is a regular commentator in the media, and a popular keynote speaker, from TED to the World Economic Forum.

Kenn recently stepped down as a board director of Chatham House and a fellow at Oxford's Saïd Business School. He is a member of the Council on Foreign Relations. His latest book is "Framers", on the power of mental models and the limits of AI.

Follow-up reading:
http://www.cukier.com/
https://mediadirectory.economist.com/people/kenneth-cukier/
https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/
Kurzweil's version of the Turing Test: https://longbets.org/1/

Topics addressed in this episode include:

*) Changing attitudes at The Economist about how to report on the prospects for AI
*) The dual roles of scepticism regarding claims made for technology
*) 'Calum's rule' about technology forecasts that omit timing
*) Options for magazine coverage of possible developments more than 10 years into the future
*) Some leaders within AI research, including Sam Altman of OpenAI, think AGI could happen within a decade
*) Metaculus community aggregate forecasts for the arrival of different forms of AGI
*) A theme for 2023: the increased 'emergence' of unexpected new capabilities within AI large language models - especially when these models are combined with other AI functionality
*) Different views on the usefulness of the Turing Test - a test of human idiocy rather than machine intelligence?
*) The benchmark of "human-level general intelligence" may become as anachronistic as the benchmark of "horsepower" for rockets
*) The drawbacks of viewing the world through a left-brained hyper-rational "scientistic" perspective
*) Two ways the ancient Greeks said we could find truth: logos and mythos
*) People in 2023 finding "mythical, spiritual significance" in their ChatGPT conversations
*) Appropriate and inappropriate applause for what GPTs can do
*) Another horse analogy: could steam engines that lack horse-like legs really replace horses?
*) The Ship of Theseus argument that consciousness could be transferred from biology to silicon
*) The "life force" and its apparently magical, spiritual aspects
*) The human superpower to imaginatively reframe mental models
*) People previously thought humans had a unique superpower to create soul-moving music, but a musical version of the Turing Test changed minds

