Accelerating AI Inference with Microsoft Azure Machine Learning – Intel Chip Chat – Episode 626

December 21st, 2018

In this Intel Chip Chat audio podcast with Allyson Klein: Dr. Henry Jerez, Principal Group Product and Program Manager for Azure Machine Learning Inferencing and Infrastructure at Microsoft, joins Chip Chat to discuss accelerating AI inference in Microsoft Azure. Dr. Jerez leads the team responsible for creating assets that help data scientists manage their AI models and deployments, both in the cloud and at the edge, and works closely with Intel to deliver the fastest possible inference performance for Microsoft’s customers. At Ignite 2018, Microsoft demoed an Azure Machine Learning model running atop the OpenVINO toolkit and Intel architecture for highly performant inference at the edge; this capability will soon be incorporated into Azure Machine Learning. Microsoft also announced at Ignite a refreshed public preview of Azure Machine Learning that provides a unified platform and SDK for data scientists, IT professionals, and developers.
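The episode itself does not walk through code, but the general shape of the OpenVINO-based edge inference discussed above looks roughly like the following Python sketch. It uses the current openvino Runtime API rather than the 2018-era Inference Engine API, and "model.xml" and the 224x224 input shape are placeholder assumptions:

```python
# A minimal sketch of running a trained model through the OpenVINO Runtime
# on an Intel CPU. "model.xml" is a placeholder IR file exported from a
# framework model; the input shape assumes a typical 224x224 image network.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")            # topology; weights load from model.bin
compiled = core.compile_model(model, "CPU")     # compile for an Intel CPU target

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy NCHW input
result = compiled([batch])[compiled.output(0)]  # run inference, fetch first output
print(result.shape)
```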

For more on Microsoft Azure Machine Learning, please visit:
aka.ms/azureml-docs
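As one concrete illustration of the unified SDK mentioned above, deploying a registered model as a web service with the Azure Machine Learning (v1) Python SDK looks roughly like this sketch; the workspace config, model name, environment file, and scoring script are all placeholder assumptions:

```python
# A minimal sketch of deploying a registered model with the Azure Machine
# Learning (v1) azureml-core SDK. Names such as "my-model", environment.yml,
# and score.py are placeholders, not from the episode.
from azureml.core import Environment, Model, Workspace
from azureml.core.model import InferenceConfig
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()                      # reads local config.json
model = Model(ws, name="my-model")                # a previously registered model

env = Environment.from_conda_specification("inference-env", "environment.yml")
inference_config = InferenceConfig(entry_script="score.py", environment=env)
aci_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "my-service", [model], inference_config, aci_config)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)                        # REST endpoint for inference
```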

Posted in: Audio Podcast, Intel, Intel Chip Chat, Microsoft