Artificial Intelligence in Health Care: Its Perils (Bias) and Potential


Artificial intelligence (AI) has the potential to transform the future of health care, but according to Dr. Ziad Obermeyer, it can harm as much as it can help. That's because bias and errors are built into the very algorithms we use to predict health care needs, "reproduc[ing] all of those ugly things that we don't like about our health care system." In this episode, we speak with Dr. Obermeyer about the hidden signals in health care data, his research revealing racial bias in health care AI, and how we can fix algorithms so that machine learning can be a force for good.

Guest: Ziad Obermeyer, MD, Blue Cross of California Distinguished Associate Professor of Health Policy and Management, UC Berkeley School of Public Health

Posted in: Artificial Intelligence, Audio Podcast, Business Group on Health, Healthcare