Advancing Deep Learning with Custom-Built Accelerators – Intel Chip Chat – Episode 677

November 13th, 2019

 

In this Intel Chip Chat audio podcast with Allyson Klein: Deep learning workloads have evolved considerably over the last few years. Today’s models are larger, deeper, and more complex than neural networks from even a few years ago, with an explosion in the number of parameters per model. The Intel Nervana Neural Network Processor for Training (NNP-T) is a purpose-built deep learning accelerator designed to speed up the training and deployment of distributed learning algorithms.

Carey Kloss is the VP and General Manager of the AI Training Products Group at Intel. In this interview, Kloss outlines the architecture and potential of the Intel Nervana NNP-T. He gets into major issues like memory and how the architecture was designed to avoid becoming memory-bound, how the accelerator supports existing software frameworks like PaddlePaddle and TensorFlow, and what the NNP-T means for customers who want to keep an eye on power usage and lower their TCO.
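
The conversation stays at the architecture level, but as a rough sketch of the framework-side distributed training the NNP-T is built to accelerate, here is a minimal data-parallel example in TensorFlow using tf.distribute.MirroredStrategy. This is a generic illustration, not NNP-T-specific code; the model, the synthetic data, and the strategy choice are all placeholder assumptions.

import numpy as np
import tensorflow as tf

# Generic data-parallel strategy; an accelerator-specific strategy or plugin
# would be substituted here in a real deployment (illustrative assumption).
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Placeholder model: a small classifier standing in for a large network.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Synthetic data stands in for a real training set.
x = np.random.rand(1024, 784).astype("float32")
y = np.random.randint(0, 10, size=(1024,))

# The strategy splits each global batch across the available devices.
model.fit(x, y, batch_size=128, epochs=1)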

To learn more about the Intel Nervana Neural Network Processor for Training, go to:
intel.ai/nervana-nnp

Posted in: Audio Podcast, Intel, Intel Chip Chat