Increasing Trust in AI Systems through Explainability – Intel Chip Chat – Episode 573


In this Intel Chip Chat audio podcast with Allyson Klein: Dr. Casimir Wierzynski, Senior Director for the Office of the CTO in the AI Products Group (AIPG) at Intel, joins us to discuss explainable AI. A key topic at NIPS 2017, explainable AI systems are those whose inner workings are transparent and can be readily understood by humans. Dr. Wierzynski contrasts this with neural networks, whose component parts are far more challenging to analyze. In this interview, he discusses why explainability is of particular interest when developing artificial neural networks, how the new, Intel-supported Partnership on AI is driving cross-industry collaboration on explainable AI, and how explainable AI offers opportunities to increase trust in AI systems.
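As a rough illustration of the contrast Dr. Wierzynski draws (this sketch is not from the episode; its data and feature names are invented): a linear model's prediction decomposes into per-feature contributions a human can read directly, while a neural network's prediction emerges from composed nonlinear layers with no comparable readout.

# Hypothetical illustration: an interpretable linear model explains itself,
# a neural network does not. All names and data here are made up.
import numpy as np

rng = np.random.default_rng(0)
features = ["age", "income", "tenure"]

# Synthetic data: 200 samples, 3 features, a known linear ground truth plus noise.
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

# Interpretable model: ordinary least squares. Each learned weight is itself
# the explanation of how that feature moves the prediction.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

x_new = np.array([1.2, -0.4, 0.8])
contributions = w * x_new  # additive per-feature contributions to this prediction
print(f"prediction = {x_new @ w:.3f}")
for name, c in zip(features, contributions):
    print(f"  {name:>6}: {c:+.3f}")

# A neural network computing the same function offers no such decomposition:
# explaining its output requires post-hoc techniques (saliency maps, LIME,
# SHAP, and similar), which is the gap explainable AI research addresses.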

For more information, please read Dr. Wierzynski’s blog post, “The Challenges and Opportunities of Explainable AI”.

Read further about Dr. Wierzynski’s work at:
ai.intel.com

Posted in: Audio Podcast, Intel, Intel Chip Chat