Content provided by Anthrocurious, LLC. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Anthrocurious, LLC or its podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://player.fm/legal.


Developing Responsible AI with David Gray Widder and Dawn Nafus

55:17
 
Contemporary AI systems are typically created by many different people, each working on separate parts or “modules.” This can make it difficult to determine who is responsible for considering the ethical implications of an AI system as a whole — a problem compounded by the fact that many AI engineers already don’t consider it their job to ensure the AI systems they work on are ethical.
In their latest paper, “Dislocated Accountabilities in the AI Supply Chain: Modularity and Developers’ Notions of Responsibility,” technology ethics researcher David Gray Widder and research scientist Dawn Nafus attempt to better understand the multifaceted challenges of responsible AI development and implementation, exploring how responsible AI labor is currently divided and how it could be improved.
In this episode, David and Dawn join This Anthro Life host Adam Gamwell to talk about the AI “supply chain,” modularity in software development as both ideology and technical practice, how we might reimagine responsible AI, and more.

Show Highlights:

  • [03:51] How David and Dawn found themselves in the responsible AI space
  • [09:04] Where and how responsible AI emerged
  • [16:25] What the typical AI development process looks like and how developers see that process
  • [18:28] The problem with “supply chain” thinking
  • [23:37] Why modularity is epistemological
  • [26:26] The significance of modularity in the typical AI development process
  • [31:26] How computer scientists’ reactions to David and Dawn’s paper underscore modularity as a dominant ideology
  • [37:57] What it is about AI that makes us rethink the typical development process
  • [45:32] Whether the job of asking ethical questions gets “outsourced” to or siloed in the research department
  • [49:12] Some of the problems with user research nowadays
  • [56:05] David and Dawn’s takeaways from writing the paper

Links and Resources:

This show is part of the Spreaker Prime Network. If you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/5168968/advertisement

178 episodes
