ONNX and Intel nGraph API Deliver AI Framework Flexibility – Intel Chip Chat – Episode 611

October 29th, 2018
In this Intel Chip Chat audio podcast with Allyson Klein, Prasanth Pulavarthi, Principal Program Manager for AI Infrastructure at Microsoft, and Padma Apparao, Principal Engineer and Lead Technical Architect for AI at Intel, discuss a collaboration that enables developers to move from one deep learning operating environment to another regardless of software stack or hardware configuration.

ONNX is an open format that decouples developers from specific machine learning frameworks so they can move easily between software stacks. It also reduces ramp-up time by sparing them from learning new tools. Many hardware and software companies have joined the ONNX community over the last year and added ONNX support to their products. Microsoft has enabled ONNX in Windows and Azure and has released ONNX Runtime, which provides a full implementation of the ONNX-ML spec.

With the nGraph API, developed by Intel, developers can optimize their deep learning software without having to learn the specific intricacies of the underlying hardware. It enables portability across Intel Xeon Scalable processors, Intel FPGAs, and Intel Nervana Neural Network Processors (Intel Nervana NNPs). Intel is integrating the nGraph API into ONNX Runtime to provide developers with accelerated performance on a variety of hardware.

For information about ONNX as well as tutorials and ways to get involved in the ONNX community, visit:
onnx.ai

To learn more about ONNX Runtime visit:
azure.microsoft.com/en-us/blog/onnx-runtime-for-inferencing-machine-learning-models-now-in-preview

To learn more about the Intel nGraph API, visit:
ai.intel.com/ngraph-a-new-open-source-compiler-for-deep-learning-systems

Posted in: Audio Podcast, Intel, Intel Chip Chat