Algorithmic Fairness with Alice Xiang – Intel on AI – Season 2, Episode 12

December 16th, 2020

 
In this episode of Intel on AI, guest Alice Xiang, Head of Fairness, Transparency, and Accountability Research at the Partnership on AI, talks with host Abigail Hing Wen, Intel AI Tech Evangelist and New York Times best-selling author, about algorithmic fairness: the study of how algorithms might systematically perform better or worse for certain groups of people, and the ways in which historical biases or other systemic inequities can be perpetuated by algorithmic systems.

The two discuss the lofty goals of the Partnership on AI, why being able to explain how a model arrived at a specific decision is important for the future of AI adoption, and the proliferation of criminal justice risk assessment tools.
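For readers new to the topic, here is a minimal sketch (not from the episode; the data and function name are hypothetical) of one common fairness check discussed in the risk-assessment literature: comparing a model's false positive rate across demographic groups.

```python
# Illustrative only: measuring a group-wise false positive rate gap,
# the kind of systematic disparity algorithmic fairness research studies.
from collections import defaultdict

def false_positive_rates(groups, y_true, y_pred):
    """Return the false positive rate for each group.

    groups: a group label per example (e.g., a demographic attribute)
    y_true: 1 if the outcome actually occurred, 0 otherwise
    y_pred: 1 if the tool flagged the person as high risk, 0 otherwise
    """
    fp = defaultdict(int)   # flagged high risk, but outcome did not occur
    neg = defaultdict(int)  # all examples where the outcome did not occur
    for g, t, p in zip(groups, y_true, y_pred):
        if t == 0:
            neg[g] += 1
            fp[g] += p
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

# Toy, entirely made-up data: same true outcomes, different flag rates.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
y_true = [0, 0, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 1, 1]
print(false_positive_rates(groups, y_true, y_pred))
# {'A': 0.333..., 'B': 1.0} — group B is wrongly flagged far more often,
# the sort of gap a fairness audit is designed to surface.
```

This is one of several competing fairness criteria; the episode's discussion of risk assessment tools touches on why such metrics can conflict with one another.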

Follow Alice on Twitter: twitter.com/alicexiang
Follow Abigail on Twitter: twitter.com/abigailhingwen
Learn more about Intel’s work in AI: intel.com/ai

Posted in: Artificial Intelligence, Audio Podcast, Intel, Intel on AI