Hugging Face Helps Find and Use Open Source AI Models

January 27th, 2025 | 6:26

In this video interview, Hugging Face’s Taylor Linton explains how enterprises can use existing open source software instead of building AI applications from scratch.

Find more enterprise cloud news, feature stories and profiles at The Forecast.

Transcript:

Taylor Linton: This predates me joining Hugging Face, but the company started in 2016 as a chatbot company. That's where the fun Hugging Face name and emoji come from: it was supposed to be a chatbot that's your friend, so we wanted a friendly brand associated with it. When I joined Hugging Face in 2021, there were 15,000 pre-trained models on the Hub. It was so interesting talking to customers then, because the reaction was, "Hey, there are too many models out there. I don't even know which one to start with." Seeing that grow from 15,000 to 650,000 in a few years really shows how much AI, especially open source AI, has grown.

[Related: Enterprise IT Teams Jump Into AIOps]

It gets even more challenging. Since yesterday, when you and I both looked at the models, there are already 2,000 more. It's crazy. And the tough part is that you can't really automate these decisions about which model to use; it really is context dependent. You need to know what your cost requirements are and what type of input data you're going to send to the model, because models are trained on different types of data, and you need to try to back into the model that was exposed to your type of domain during training. It also comes down to latency and the particular task. These are all things our engineers can sit down with customers and look at: what are we trying to build here, and what type of constraints are we working with? Then we really walk through with them the different trade-offs you make when you pick a different model.
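The selection criteria Linton lists (task, input domain, latency, cost) amount to filtering candidate models against constraints and then preferring domain matches. The sketch below is purely illustrative: the model names, metadata fields, and budgets are hypothetical, not Hugging Face Hub data, and in practice this information comes from model cards and your own benchmarking.

```python
# Hypothetical candidate models with made-up metadata. In a real
# evaluation, these numbers come from model cards and benchmarks.
candidates = [
    {"name": "model-a", "task": "summarization", "domain": "news",
     "latency_ms": 120, "cost_per_1k_tokens": 0.002},
    {"name": "model-b", "task": "summarization", "domain": "legal",
     "latency_ms": 450, "cost_per_1k_tokens": 0.004},
    {"name": "model-c", "task": "translation", "domain": "legal",
     "latency_ms": 200, "cost_per_1k_tokens": 0.001},
]

def shortlist(models, task, domain, max_latency_ms, max_cost):
    """Keep models that match the task and fit the latency/cost
    budgets, preferring those trained on the target domain."""
    viable = [m for m in models
              if m["task"] == task
              and m["latency_ms"] <= max_latency_ms
              and m["cost_per_1k_tokens"] <= max_cost]
    # Sort: domain matches first (False sorts before True), then cheapest.
    return sorted(viable,
                  key=lambda m: (m["domain"] != domain,
                                 m["cost_per_1k_tokens"]))

picks = shortlist(candidates, task="summarization", domain="legal",
                  max_latency_ms=500, max_cost=0.005)
print([m["name"] for m in picks])  # → ['model-b', 'model-a']
```

The hard part in practice is the step this sketch skips: measuring latency and cost for your workload and reading model cards to learn what data a model was trained on.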

[Related: Role of CIO Expands with Enterprise AI]

A lot of CEOs weren’t even aware of what AI was a couple of years ago. Then ChatGPT came out, CEOs’ kids were doing their homework with it, and all of a sudden they started hearing about it. So there’s been a lot of interest among companies in trying to take advantage of this technology. When folks talk to Hugging Face, it’s because we’re the entire open source ecosystem around AI. When they want to explore and take advantage of these open source models, they might go to our Hub to grab one of the 650,000 open source models. But once they do grab that model and bring it into their environment, we also have maybe 20 different libraries and tools that they use to go all the way from building to deploying these models. That’s really when we get involved with folks: to help them take advantage of open source AI.

[Related: Building a GenAI App to Improve Customer Support]

The community around Hugging Face, and open source AI in general, is incredible. Someone might release a paper that introduces a new optimization technique, and the entire community can benefit from it and use it on their models. That’s really exciting, and it motivates folks to introduce a novel technique and open source it to benefit a larger group of people. I think open source AI is critical to keeping AI responsible across the world. The biggest reason is that the last thing people want is for only a few companies to have access to this technology. People can build in public, and you can know which data sets were used to train the models. That’s a great opportunity for companies to make sure they’re using models that were trained on permissive data. Bias is also a really big area folks are looking into, because of course human bias exists in these training sets, and that ends up transferring to the model. So the fact that people can work together in a community to really look into these data sets and do their best to identify and mitigate bias in these models is very important.

[Related: AI and Cloud Native Alchemize the Future of Enterprise IT]

We partner with quite a few companies, but Nutanix has really stood out to me. It seems really progressive to make sure the best tools and technologies are made available to their customers. What I’ve learned with GPT-in-a-Box is that by integrating it with all of Hugging Face’s open source libraries, they’re making it frictionless for customers to deploy open source models in their environment. You don’t see that often, and they’re really going a long way to make sure the best tools are available for their customers.

Posted in: Artificial Intelligence, HD Video, Tech Barometer - From The Forecast by Nutanix, Video Podcast