How Open Source Transformers Are Accelerating AI – Conversations in the Cloud – Episode 261

November 4th, 2021 | 15:39

In this Intel Conversations in the Cloud audio podcast: Jeff Boudier from Hugging Face joins host Jake Smith to talk about the company’s open source machine learning library, transformers (formerly known as “pytorch-pretrained-bert”). Jeff talks about how transformers have accelerated the proliferation of natural language processing (NLP) models and their future use in object detection and other machine learning tasks. He goes into detail about Optimum, an open source library for training and running models on specific hardware, such as Intel Xeon CPUs, and the benefits of the Intel Neural Compressor, which is designed to help deploy low-precision inference solutions. Jeff also announces Hugging Face’s new Infinity solution, which integrates the inference pipeline to achieve millisecond-latency results wherever Docker containers can be deployed.
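
For a sense of how the transformers library works in practice, the sketch below (not code from the episode) loads a pretrained NLP model through the library’s pipeline API; the input sentence and the default downloaded model are illustrative choices, not anything Jeff mentions:

# A minimal sketch of the transformers pipeline API (illustrative,
# not code discussed in the episode).
from transformers import pipeline

# Downloads a default pretrained sentiment-analysis model on first use.
classifier = pipeline("sentiment-analysis")

print(classifier("Open source transformers are accelerating AI."))
# -> [{'label': 'POSITIVE', 'score': ...}]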

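Jeff’s description of Optimum and the Intel Neural Compressor corresponds to a post-training quantization workflow. The sketch below is one hedged way to express it, assuming the INCQuantizer API from later optimum-intel releases (names and signatures have changed across versions, and the model and output directory are illustrative choices):

# A hedged sketch of INT8 post-training dynamic quantization with
# Optimum and the Intel Neural Compressor. API names are assumptions
# based on later optimum-intel releases and may differ by version.
from transformers import AutoModelForSequenceClassification
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel import INCQuantizer

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative model
)
quantizer = INCQuantizer.from_pretrained(model)

# Dynamic quantization needs no calibration data; weights are converted
# to INT8 for faster low-precision inference on Intel Xeon CPUs.
quantizer.quantize(
    quantization_config=PostTrainingQuantConfig(approach="dynamic"),
    save_directory="distilbert-sst2-int8",  # hypothetical output path
)
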
For more information, visit:
hf.co

Follow Jake on Twitter at:
twitter.com/jakesmithintel

Posted in: Audio Podcast, Cloud Computing, Intel, Intel Conversations in the Cloud, Technology