Optimizing Model Performance with Deci – Conversations in the Cloud – Episode 286

November 21st, 2022 | 13:14

Yonatan Geifman, Co-Founder & CEO at Deci, joins host Jake Smith to talk about how Deci’s AutoNAC (Automated Neural Architecture Construction) engine enables developers to redesign their deep learning models, significantly improving inference latency while preserving accuracy. Yonatan explains that better latency and accuracy let developers shorten the development cycle and deploy optimized models to production faster. He also describes the close partnership Deci has built with Intel, spanning engineering collaboration and ecosystem programs. Asked about his thoughts on the future of AI, Yonatan says that while it will take more time for AI to be everywhere, AI is being democratized by startups, large companies, and open-source communities, so more and more people will be able to build AI applications in the future.
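
As a rough illustration of the latency measurement discussed in the episode (this is not Deci’s AutoNAC API; the model, input shape, and run counts are assumptions chosen for the example), a minimal PyTorch-style timing loop might look like this:

```python
# Illustrative sketch only: measuring average inference latency of a
# generic vision model. This does NOT use Deci's AutoNAC engine; it simply
# shows the kind of latency metric the optimized models are judged on.
import time
import torch
import torchvision.models as models

model = models.resnet50(weights=None).eval()   # stand-in model (assumption)
dummy_input = torch.randn(1, 3, 224, 224)      # single 224x224 RGB image

with torch.no_grad():
    # Warm-up runs so one-time setup costs don't skew the measurement.
    for _ in range(10):
        model(dummy_input)

    # Time repeated inference calls and report the average per-image latency.
    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        model(dummy_input)
    elapsed = time.perf_counter() - start

print(f"Average latency: {elapsed / runs * 1000:.1f} ms per image")
```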

For more information, visit:
deci.ai

Follow Jake on Twitter at:
twitter.com/jakesmithintel

Posted in: Artificial Intelligence, Audio Podcast, Cloud Computing, Intel, Intel Conversations in the Cloud