How to Practice Responsible AI

By HBR Presents / Azeem Azhar · 49:16

From predictive policing to automated credit scoring, algorithms applied at massive scale, if left unchecked, pose a serious threat to society. Dr. Rumman Chowdhury, director of Machine Learning Ethics, Transparency and Accountability at Twitter, joins Azeem Azhar to explore how businesses can practice responsible AI to minimize unintended bias and the risk of harm.

They also discuss:

  • How to assess and diagnose bias in unexplainable “black box” algorithms.
  • Why responsible AI demands top-down organizational change, including new metrics and systems of redress.
  • How Twitter audited its own image-cropping algorithm, which was alleged to favor white faces over people of color.
  • The emerging field of “Responsible Machine Learning Operations” (MLOps).

@ruchowdh
@azeem
@exponentialview
