London Futurists

Against pausing AI research, with Pedro Domingos

April 12, 2023 · Season 1, Episode 34

Should the pace of research into advanced artificial intelligence be slowed down, or perhaps even paused completely?

Your answer to that question probably depends on your answers to a number of other questions. Is advanced artificial intelligence reaching the point where it could result in catastrophic damage? Is a slowdown desirable, given that AI can also lead to many very positive outcomes, including tools to guard against the worst excesses of other applications of AI? And even if a slowdown is desirable, is it practical?

Our guest in this episode is Professor Pedro Domingos of the University of Washington. He is perhaps best known for his book "The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World".

That book takes an approach to the future of AI that differs significantly from what you can read in many other books. It describes five different "tribes" of AI researchers, each with its own paradigm, and it suggests that true progress towards human-level general intelligence will depend on a unification of these different approaches. In other words, we won't reach AGI just by scaling up deep learning, or even by adding in features from logical reasoning.

Follow-up reading:
https://homes.cs.washington.edu/~pedrod/
https://www.amazon.co.uk/Master-Algorithm-Ultimate-Learning-Machine/dp/0241004543
https://futureoflife.org/open-letter/pause-giant-ai-experiments/

Topics addressed in this episode include:

*) The five tribes of AI research - why there's a lot more to AI than deep learning
*) Why unifying these five tribes may not be sufficient to reach human-level general intelligence
*) The task of understanding an entire concept (e.g. 'horse') from just seeing a single example
*) A wide spread of estimates of the timescale to reach AGI
*) Different views as to the true risks from advanced AI
*) The case that risks arise from AI incompetence rather than from increased AI competence
*) A different risk: that bad actors will gain dangerously more power from access to increasingly competent AI
*) The case for using AI to prevent misuse of AI
*) Yet another risk: that an AI trained against one objective function will nevertheless adopt goals diverging from that objective
*) How AIs that operate beyond our understanding could still remain under human control
*) How fully can evolution be trusted to produce outputs in line with a specified objective function?
*) The example of humans taming wolves into dogs that pose no threat to us
*) The counterexample of humans pursuing goals contrary to our in-built genetic drives
*) Complications with multiple levels of selection pressure, e.g. genes and memes working at cross purposes
*) The “genie problem” (or “King Midas problem”) of choosing an objective function that is apparently attractive but actually dangerous
*) Assessing the motivations of people who have signed the FLI (Future of Life Institute) letter advocating a pause on the development of larger AI language models
*) Pros and cons of escalating a sense of urgency
*) The two key questions of existential risk from AI: how much risk is acceptable, and what might that level of risk become in the near future?
*) The need for a more rational discussion of the issues raised by increasingly competent AIs

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
