In the last few weeks, the pace of change in AI has been faster than ever before. The changes aren't just announcements of future capabilities - announcements that could, perhaps, have been dismissed as hype. They are new versions of AI systems that users around the world can experiment with, directly, here and now. These systems are being released by multiple companies, and also by open-source collaborations. And users frequently express surprise: the systems are by no means perfect, but they regularly outperform previous expectations, sometimes in astonishing ways.
In this episode, Calum Chace and David Wood, the co-hosts of this podcast series, discuss the wider implications of these new AI systems. David asks Calum if he has changed any of his ideas about what he has called "the two singularities", namely the Economic Singularity and the Technological Singularity, as covered in a number of books he has written.
Calum has been a full-time writer and speaker on the subject of AI since 2012. Earlier in his life, he studied philosophy, politics, and economics at Oxford University, and trained as a journalist at the BBC. He wrote a column in the Financial Times and nowadays is a regular contributor to Forbes magazine. In between, he held a number of roles in business, including leading a media practice at KPMG. In the last few days, he has been taking a close look at GPT-4.
Selected follow-up reading:
Topics in this conversation include:
*) Is the media excitement about GPT-4 and its predecessor ChatGPT overblown, or are these systems signs of truly important disruptions?
*) How do these new AI systems compare with earlier AIs?
*) The two "big bangs" in AI history
*) How transformers work
*) The difference between self-supervised learning and supervised learning
*) The significance of OpenAI enabling general public access to ChatGPT
*) Market competition between Microsoft Bing and Google Search
*) Unwholesome replies by Microsoft Sydney and Google Bard - and the intended role of RLHF (Reinforcement Learning from Human Feedback)
*) How basic reasoning seems to emerge (unexpectedly) from pattern recognition at sufficient scale
*) Examples of how the jobs of knowledge workers are being changed by GPT-4
*) What will happen to departments where each human knowledge worker has a tenfold productivity boost?
*) From the job churns of the past to the Great Churn of the near future
*) The forthcoming wave of automation is not only more general than past waves, but will also proceed at a much faster pace
*) Improvements in the writing AI can produce, such as book chapters
*) Revisions of timelines for the Economic and Technological Singularity?
*) It now seems that human intelligence may be easier to replicate than was previously thought
*) The Technological Singularity might arrive before an Economic Singularity
*) The liberating vision of people no longer needing to be wage slaves, and the threat of almost everyone living in poverty
*) The insufficiency of UBI (Universal Basic Income) unless an economy of abundance is achieved (bringing the costs of goods and services down toward zero)
*) Is the creation of AI now out of control, with a rush to release new versions?
*) The infeasibility of the idea of AGI relinquishment
*) OpenAI's recent actions assessed
*) Expectations for new AI releases in the remainder of 2023: accelerating pace
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration