How Open Source Transformers Are Accelerating AI – Conversations in the Cloud – Episode 261
In this Intel Conversations in the Cloud audio podcast: Jeff Boudier from Hugging Face joins host Jake Smith to talk about the company’s open source machine learning Transformers library (formerly known as “pytorch-pretrained-bert”). Jeff talks about how transformers have accelerated the proliferation of natural language processing (NLP) models and their future use in object detection and other machine learning tasks. He goes into detail about Optimum—an open source library to train and run models on specific hardware, like Intel Xeon CPUs—and the benefits of the Intel Neural Compressor, which is designed to help deploy low-precision inference solutions. Jeff also announces Hugging Face’s new Infinity solution, which integrates the inference pipeline to achieve results in milliseconds wherever Docker containers can be deployed.
For more information, visit:
hf.co
Follow Jake on Twitter at:
twitter.com/jakesmithintel