ONNX and Intel nGraph API Deliver AI Framework Flexibility – Intel Chip Chat – Episode 611

October 29th, 2018 |

Subscribe to Intel Chip Chat on iTunes.  

In this Intel Chip Chat audio podcast with Allyson Klein: Prasanth Pulavarthi, Principal Program Manager for AI Infrastructure at Microsoft, and Padma Apparao, Principal Engineer and Lead Technical Architect for AI at Intel, discuss a collaboration that lets developers move deep learning models from one operating environment to another, regardless of software stack or hardware configuration.

ONNX is an open format that unties developers from specific machine learning frameworks so they can easily move between software stacks. It also reduces ramp-up time by sparing them from learning new tools. Many hardware and software companies have joined the ONNX community over the last year and added ONNX support in their products. Microsoft has enabled ONNX in Windows and Azure and has released the ONNX Runtime which provides a full implementation of the ONNX-ML spec.

With the nGraph API, developed by Intel, developers can optimize their deep learning software without having to learn the specific intricacies of the underlying hardware. It enables portability across Intel Xeon Scalable processors, Intel FPGAs, and Intel Nervana Neural Network Processors (Intel Nervana NNPs). Intel is integrating the nGraph API into the ONNX Runtime to provide developers with accelerated performance on a variety of hardware.

For information about ONNX as well as tutorials and ways to get involved in the ONNX community, visit:

To learn more about ONNX Runtime visit:

To learn more about the Intel nGraph API, visit:


Posted in: Audio Podcast, Intel, Intel Chip Chat