London Futurists

Transformational transformers, with Jeremy Kahn

London Futurists Season 1 Episode 49

Our guest in this episode is Jeremy Kahn, a senior writer at Fortune magazine, based in the UK. He writes about artificial intelligence and other disruptive technologies, from quantum computing to augmented reality. He previously spent eight years at Bloomberg, again writing mostly about technology; moving to Fortune was a return to his journalistic roots, as he started his career there in 1997, based in New York.

David and Calum invited Jeremy onto the show because they think his weekly newsletter “Eye on AI” is one of the very best non-technical sources of news and views about the technology.

Jeremy has some distinctive views on the significance of transformers and the LLMs (Large Language Models) they enable.

Selected follow-ups:
https://www.fortune.com/newsletters/eye-on-ai
https://fortune.com/author/jeremy-kahn/

Topics addressed in this episode include:
*) Jeremy's route into professional journalism, focussing on technology
*) Assessing the way technology changes: exponential, linear with a steep incline, linear with leaps, or something else?
*) Some characteristics of LLMs that appear to "emerge" out of nowhere at larger scales can actually be seen developing linearly, if attention is paid to the model's second or third most likely predictions
*) Some leaps in capability depend, not on underlying technological power, but on improvements in interfaces - as with ChatGPT
*) Some leaps in capability require, not just step-ups in technological power, but changes in how people organise their work around the new technology
*) The decades-long conversion of factories from steam-powered to electricity-powered
*) Reasons to anticipate significant boosts in productivity in many areas of the economy within just two years, with assistance from AI co-pilots and from "universal digital assistants"
*) Related forthcoming economic impacts: slow-downs in hiring, and depression of some wages (akin to how Uber drivers reduced how much yellow cab drivers could charge for fares)
*) The potential, not just for companies to learn to make good use of existing transformer technologies, but for forthcoming next generation transformers to cause larger disruptions
*) Models that predict, not "the next most likely word", but "the next most likely action to take to achieve a given goal"
*) Recent AI startups with a focus on using transformers for task automation include Adept and Inflection
*) Risks when LLMs lack sufficient common sense, and might take actions which a human assistant would know to check beforehand with their supervisor
*) Ways in which LLMs could acquire sufficient common sense
*) Ways in which observers can be misled about how much common sense is possessed by an LLM
*) Reasons why some companies have instructed their employees not to use consumer-facing versions of LLMs
*) The case, nevertheless, for companies to encourage massive bottom-up experimentation with LLMs by employees
*) The possibility for companies to have departments without any people in them
*) Implications of LLMs for geo-security and international relations
*) A possible agency, akin to the International Atomic Energy Agency, to monitor the training and use of next generation LLMs
*) Interest from the Pentagon (and also in China) in LLMs that can act as "battlefield advisors"
*) A call to action: people need to get their heads around transformers, and understand both the upsides and the risks

Audio engineering assisted by Alexander Chace.

Music: Spike Protein, by Koi Discovery, available under the CC0 1.0 Public Domain Dedication
