In this episode, Tim Clement-Jones brings us up to date on how members of the UK's House of Commons have reacted to recent advances in the capabilities of AI systems such as ChatGPT. He also looks ahead to larger changes, in the UK and elsewhere.
Lord Clement-Jones CBE, or Tim, as he prefers to be known, has had a highly successful legal career, holding senior positions at ITV and Kingfisher, among others, before becoming London Managing Partner of the law firm DLA Piper.
He is, however, better known as a politician. He became a life peer in 1998 and has served as the Liberal Democrats' spokesman on a wide range of issues. We are delighted to have him as a guest on the podcast because he chaired the AI Select Committee, co-chaired the All-Party Parliamentary Group on AI, and is now a member of a special inquiry into the use of AI in weapons systems.
Tim also has multiple connections with universities and charities in the UK.
Topics in this conversation include:
*) Does "the Westminster bubble" understand the importance of AI?
*) Evidence that "the tide is turning" - MPs are demonstrating a spirit of inquiry
*) The example of Sir Peter Bottomley, the Father of the House (who has been an MP continuously since 1975)
*) New AI systems are showing capabilities that had not been expected to arrive for another 5 or 10 years, taking even AI experts by surprise
*) The AI duopoly (the US and China) and the possible influence of the UK and the EU
*) The forthcoming EU AI Act and the risk-based approach it embodies
*) The importance of regulatory systems being innovation-friendly
*) How might the EU support the development of some European AI tech giants?
*) The inevitability(?) of the UK needing to become "a rule taker"
*) Cynical and uncynical explanations for why major tech companies support EU AI regulation
*) The example of AI-powered facial recognition: benefits and risks
*) Is Brexit helping or hindering the UK's AI activities?
*) Complications with the funding of AI research in the UK's universities
*) The risks of a slow-down in the UK's AI start-up ecosystem
*) Looking further afield: AI ambitions in the UAE and Saudi Arabia
*) The particular risks of lethal autonomous weapons systems
*) Future conflicts between AI-controlled tanks and human-controlled tanks
*) Forecasts for the arrival of artificial general intelligence: 10-15 years from now?
*) Superintelligence may emerge from a combination of separate AI systems
*) The case for "technology-neutral" regulation
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration