Hugging Face and Intel – Driving Towards Practical, Faster, Democratized and Ethical AI solutions

March 31st, 2023 | 40:35

Transformer models are the powerful neural networks that have become the standard for delivering state-of-the-art performance in modern AI applications. But there is a challenge: training these deep learning models at scale, and running inference on them, requires a large amount of computing power. This can make the process time-consuming, complex, and costly.
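To make the compute cost concrete, here is a rough back-of-the-envelope sketch of how transformer size translates into per-token inference cost. The configuration numbers are illustrative (roughly BERT-large scale) and are not taken from the episode; the formulas are the standard approximations, not exact counts for any specific model.

```python
# Back-of-the-envelope estimate of transformer size and inference cost.
# Layer/dimension values below are illustrative assumptions, not figures
# from the episode.

def transformer_params(layers: int, d_model: int, vocab: int) -> int:
    """Rough parameter count: ~4*d^2 for attention projections plus
    ~8*d^2 for the feed-forward block per layer, plus the token
    embedding matrix. Ignores biases and layer norms."""
    per_layer = 12 * d_model * d_model
    return layers * per_layer + vocab * d_model

params = transformer_params(layers=24, d_model=1024, vocab=30000)

# A common rule of thumb: generating one token costs about
# 2 FLOPs per parameter (one multiply and one add per weight).
flops_per_token = 2 * params

print(f"~{params / 1e6:.0f}M parameters, "
      f"~{flops_per_token / 1e9:.1f} GFLOPs per token")
```

Even this modest configuration lands in the hundreds of millions of parameters, which is why the accelerated inference work discussed in the links below (Habana Gaudi2, optimized Intel CPU paths) matters in practice.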

In this episode we discuss accessible, production-level AI solutions, the ethical questions around AI usage, and why open, democratized AI is important.

Learn more:
Hugging Face

Hugging Face Hub

Fast Inference on Large Language Models: BLOOMZ on Habana Gaudi2 Accelerator

Accelerating Stable Diffusion Inference on Intel CPUs

Transformer Performance with Intel & Hugging Face Webinar

Intel Explainable AI Tools

Intel Distribution of OpenVINO Toolkit

Intel AI Analytics Toolkit (AI Kit)

Julien Simon – Chief Evangelist @ Hugging Face
Ke Ding – Principal Engineer @ Intel

Transcript: Read/Download the transcript.

Posted in: Audio Podcast, Code Together, Intel