Accelerating AI Inference with Intel Deep Learning Boost – Intel Chip Chat – Episode 632
In this Intel Chip Chat audio podcast with Allyson Klein: When Intel previewed an array of data-centric innovations in August 2018, one that captured media attention was Intel Deep Learning Boost, an embedded AI accelerator in the CPU designed to speed up deep learning inference workloads.
Intel DL Boost will make its initial appearance in the upcoming generation of Intel Xeon Scalable processors code-named Cascade Lake. In this Chip Chat podcast, Intel Data-centric Platform Marketing Director Jason Kennedy shares details about the optimization behind some impressive test results.
The key to Intel DL Boost – and its performance kick – is augmentation of the existing Intel Advanced Vector Extensions 512 (Intel AVX-512) instruction set. This innovation significantly accelerates inference performance for deep learning workloads optimized to use vector neural network instructions (VNNI). Image classification, language translation, object detection, and speech recognition are just a few examples of workloads that can benefit.
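Conceptually, a VNNI instruction such as vpdpbusd fuses what previously took three AVX-512 instructions: multiplying unsigned 8-bit values by signed 8-bit values, summing adjacent products, and accumulating into a signed 32-bit lane. A minimal sketch of that per-lane behavior, emulated in plain Python (the helper name is illustrative, and wrap-around on overflow is assumed, matching the non-saturating vpdpbusd variant):

```python
# Emulate the per-lane semantics of the AVX-512 VNNI instruction vpdpbusd:
# multiply four unsigned 8-bit values by four signed 8-bit values, sum the
# products, and accumulate into a signed 32-bit lane.
def vnni_dot_lane(acc: int, a_u8: list, b_s8: list) -> int:
    assert len(a_u8) == len(b_s8) == 4
    total = acc + sum(a * b for a, b in zip(a_u8, b_s8))
    # Wrap to a signed 32-bit result, as the hardware accumulator would
    # (the saturating variant, vpdpbusds, would clamp instead).
    total &= 0xFFFFFFFF
    return total - 0x100000000 if total >= 0x80000000 else total

# One 32-bit lane of an INT8 dot product:
print(vnni_dot_lane(10, [1, 2, 3, 4], [5, -6, 7, -8]))  # 10 + 5 - 12 + 21 - 32 = -8
```

A 512-bit register holds sixteen such lanes, so one instruction performs 64 INT8 multiply-accumulates, which is where the inference throughput gain comes from.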
Early tests have shown image recognition running 11 times faster on a similarly configured system than on first-generation Intel Xeon Scalable processors at their July 2017 launch. Current projections estimate a 17 times inference throughput gain with Intel Optimized Caffe ResNet-50 and Intel Deep Learning Boost on a new class of advanced performance CPUs debuting in the upcoming generation.
For more information about AI activities across Intel, visit:
ai.intel.com
Posted in:
Artificial Intelligence, Audio Podcast, Intel, Intel Chip Chat