London Futurists

Developing responsible AI, with Ray Eitel-Porter

December 07, 2022 London Futurists Season 1 Episode 16
Show Notes

As AI automates larger portions of the activities of companies and organisations, there's a greater need to think carefully about questions of privacy, bias, transparency, and explainability. Due to scale effects, mistakes made by AI and the automated analysis of data can have wide impacts. On the other hand, evidence of effective governance of AI development can deepen trust and accelerate the adoption of significant innovations.

One person who has thought a great deal about these issues is Ray Eitel-Porter, Global Lead for Responsible AI at Accenture. In this episode of the London Futurist Podcast, he explains what conclusions he has reached.

Topics discussed include:
*) The meaning and importance of "Responsible AI"
*) Connections and contrasts with "AI ethics" and "AI safety"
*) The advantages of formal AI governance processes
*) Recommendations for the operation of an AI ethics board
*) Anticipating the operation of the EU's AI Act
*) How different intuitions of fairness can produce divergent results
*) Examples where transparency has been limited
*) The potential future evolution of the discipline of Responsible AI

Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration

Some follow-up reading: