Increasing AI Application Performance with Software Optimizations – Intel Chip Chat – Episode 604

September 7th, 2018

In this Intel Chip Chat audio podcast with Allyson Klein: Dr. Andres Rodriguez, a Senior Principal Engineer in the Data Center Group at Intel, stops by to talk about why it is so critical to optimize frameworks and software tools for artificial intelligence applications. Intel has worked hard over the last two years to optimize popular frameworks like Caffe, TensorFlow, MXNet, and PyTorch for Intel Xeon processors. We’ve also developed the Intel Math Kernel Library for Deep Neural Networks (Intel MKL-DNN) to accelerate deep learning workloads on Intel architecture. Customers are now seeing the benefits of running artificial intelligence workloads on their existing Intel Xeon processors with increasingly optimized performance.

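As a rough illustration of what these software optimizations look like in practice (a hedged sketch, not something covered in the episode): Intel's published guidance for its MKL-DNN-backed TensorFlow builds generally involves setting OpenMP environment variables and TensorFlow's threading options to match the Xeon core layout. The snippet below assumes a TensorFlow 1.x Intel-optimized build and uses placeholder thread counts that you would tune to your own system.

import os

# OpenMP / MKL-DNN threading hints commonly recommended for Xeon CPUs.
# Set these before TensorFlow is imported so its MKL threading picks them up.
os.environ["OMP_NUM_THREADS"] = "16"                         # illustrative: number of physical cores
os.environ["KMP_BLOCKTIME"] = "1"                            # how long threads spin after finishing work
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"  # pin threads to cores

import tensorflow as tf

# TensorFlow 1.x session-level parallelism settings.
config = tf.ConfigProto(
    intra_op_parallelism_threads=16,  # threads within a single op; match physical cores
    inter_op_parallelism_threads=2,   # independent ops that may run in parallel
)

with tf.Session(config=config) as sess:
    # Run your (MKL-DNN-accelerated) inference or training graph here.
    pass

With an Intel-optimized build, setting MKLDNN_VERBOSE=1 in the environment also logs which MKL-DNN primitives execute, which is a quick way to confirm the optimizations are actually being used.
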
For more on this topic, visit:
ai.intel.com

Posted in: Artificial Intelligence, Audio Podcast, Intel, Intel Chip Chat