Accelerating AI Inference with Microsoft Azure Machine Learning – Intel Chip Chat – Episode 626

December 21st, 2018


Subscribe to Intel Chip Chat on iTunes.

In this Intel Chip Chat audio podcast with Allyson Klein: Dr. Henry Jerez, Principal Group Product and Program Manager for Azure Machine Learning Inferencing and Infrastructure at Microsoft, joins Chip Chat to discuss accelerating AI inference in Microsoft Azure. Dr. Jerez leads the team responsible for creating assets that help data scientists manage their AI models and deployments, both in the cloud and at the edge, and works closely with Intel to deliver the fastest possible inference performance for Microsoft’s customers. At Ignite 2018, Microsoft demonstrated an Azure Machine Learning model running atop the OpenVINO toolkit and Intel architecture for highly performant inference at the edge; this capability will soon be incorporated into Azure Machine Learning. Microsoft also announced at Ignite a refreshed public preview of Azure Machine Learning that now provides a unified platform and SDK for data scientists, IT professionals, and developers.

For more on Microsoft Azure Machine Learning, please visit:
aka.ms/azureml-docs


Posted in: Audio Podcast, Intel, Intel Chip Chat, Microsoft