Our guest in this episode is Francesca Rossi. Francesca studied computer science at the University of Pisa in Italy, where she became a professor, before spending 20 years at the University of Padova. In 2015 she joined IBM's T.J. Watson Research Lab in New York, where she is now an IBM Fellow and also IBM's AI Ethics Global Leader.
Francesca is a member of numerous international bodies concerned with the beneficial use of AI: she is a board member of the Partnership on AI, a Steering Committee member and designated expert at the Global Partnership on AI, a member of the scientific advisory board of the Future of Life Institute, and Chair of the International Conference on Artificial Intelligence, Ethics, and Society, which is being held in Montreal in August this year.
From 2022 until 2024, Francesca is serving in the prestigious role of President of the AAAI, that is, the Association for the Advancement of Artificial Intelligence. The AAAI recently held its annual conference, and in this episode, Francesca shares some reflections on what happened there.
Topics in this conversation include:
*) How a one-year sabbatical at the Harvard Radcliffe Institute changed the trajectory of Francesca's life
*) New generative AI systems such as ChatGPT amplify previous issues involving bias, privacy, copyright, and content moderation - because they are trained on very large datasets that have not been curated
*) Large language models (LLMs) have been optimised not for "factuality" but for producing syntactically correct language
*) Compared to previous AIs, the new systems impact a wider range of occupations, and they also have major implications for education
*) Are the "AI ethics" and "responsible AI" approaches that address the issues of existing AI systems also the best approaches for the "AI alignment" and "AI safety" issues raised by artificial general intelligence?
*) Different ideas on how future LLMs could acquire mastery, not only over language, but also over logic, inference, and reasoning
*) Options for combining classical AI techniques focussing on knowledge and reasoning, with the data-intensive approaches of LLMs
*) How "foundation models" allow training to be split into two phases, with a shorter supervised phase customising the output from a prior longer unsupervised phase
*) Even experts face the temptation to anthropomorphise the behaviour of LLMs
*) On the other hand, unexpected capabilities have emerged within LLMs
*) The interplay of "thinking fast" and "thinking slow" - adapting, for the context of AI, insights from Daniel Kahneman about human intelligence
*) Cross-fertilisation of ideas from different communities at the recent AAAI conference
*) An extension of that "bridge" theme to involve ideas from outside of AI itself, including the use of methods of physics to observe and interpret LLMs from the outside
*) Prospects for interpretability, explainability, and transparency of AI - and implications for trust and cooperation between humans and AIs
*) The roles played by different international bodies, such as PAI and GPAI
*) Pros and cons of including China in the initial phase of GPAI
*) Designing regulations to be future-proof, with parts that can change quickly
*) An important new goal for AI experts
*) A vision for the next 3-5 years
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration