Ben Goertzel is a cognitive scientist and artificial intelligence researcher. He is CEO and founder of SingularityNET, leader of the OpenCog Foundation, and chair of Humanity+.
Ben is perhaps best known for popularising the term 'artificial general intelligence' (AGI): a machine with all the cognitive abilities of an adult human. He thinks the way to create such a machine is to start with a baby-like AI and raise it as we raise children, either in VR or in robot form. Hence he works with the robot-builder David Hanson to create robots like Sophia and Grace.
Ben is a unique and engaging speaker who gives frequent keynotes all around the world. Both his appearance and his views have been described as counter-cultural. In this episode, we hear about Ben's vision for the creation of benevolent decentralized AGI.
Topics in this conversation include:
*) Occasional hazards of humans and robots working together
*) "The future is already here, it's just not wired together properly"
*) Ben's definition of AGI
*) Ways in which humans lack "general intelligence"
*) Changes in society expected when AI reaches "human level"
*) Is there "one key thing" which will enable the creation of AGI?
*) Ben's OpenCog Hyperon project combines three approaches: neural pattern recognition and synthesis, rigorous symbolic reasoning, and evolutionary creativity
*) Parallel combinations versus sequential combinations of AI capabilities: why the former is harder, but more likely to create AGI
*) Three methods to improve the scalability of AI algorithms: mathematical innovations, efficient concurrent processing, and an AGI hardware board
*) "We can reach the Singularity in ten years if we really, really try"
*) ... but humanity has, so far, not "really tried" to apply sufficient resources to creating AGI
*) Sam Altman: "If you talk about the upsides of what AGI could do for us, you sound like a crazy person"
*) "The benefits of AGI will challenge our concept of 'what is a benefit'"
*) Options for human life trajectories, if AGIs are well disposed towards humans
*) We will be faced with the questions of "what do we want" and "what are our values"
*) The burning issue is "what is the transition phase" to get to AGI
*) Ben's disagreements with Nick Bostrom and Eliezer Yudkowsky
*) Assessment of the approach taken by OpenAI to create AGI
*) Different degrees of faith in big tech companies as a venue for hosting the breakthroughs in creating AGI
*) Should OpenAI be renamed as "ClosedAI"?
*) The SingularityNET initiative to create a decentralized, democratically controlled infrastructure for AGI
*) The development of AGI should be "more like Linux or the Internet than Windows or the mobile phone ecosystem"
*) Limitations of neural net systems in self-understanding
*) Faith in big tech and capitalism vs. faith in humanity as a whole vs. faith in reward maximization as a paradigm for intelligence
*) Open-ended intelligence vs. intelligence created by reward maximization
*) A concern regarding Effective Altruism
*) There's more to intelligence than pursuit of an overarching goal
*) A broader view of evolution than drives to survive and to reproduce
*) "What the fate of humanity depends on" - selecting the right approach to the creation of AGI
Listen on: Apple Podcasts | Spotify