What Neurobiology Can Teach Us About Hardware – Conversations in the Cloud – Episode 221

February 8th, 2021
 
 

In this Intel Conversations in the Cloud audio podcast: Nir Shavit, co-founder of Neural Magic, joins host Jake Smith to talk about enabling convolutional neural networks to run on commodity CPUs. Nir explains how his background as a professor researching connectivity maps of brain tissue and tracking neurons evolved into developing machine learning algorithms that run on multicore processors instead of GPUs. The two discuss the relationship between performance and CPU cache and memory, compression techniques such as model pruning and quantization, and the future of sparsification. Nir also expands on the similarities and differences between the human brain and modern processors, and why software optimization may matter more than specialized hardware.
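For listeners unfamiliar with the compression techniques mentioned above, here is a minimal, illustrative sketch (not Neural Magic's actual implementation) of the two core ideas: magnitude pruning, which zeroes out the smallest weights to create sparsity, and symmetric int8 quantization, which maps the remaining floating-point weights onto an 8-bit integer grid plus a scale factor. The function names and the toy weight list are assumptions made for illustration.

```python
import random

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # k-th smallest absolute value becomes the pruning threshold
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize_int8(weights):
    """Symmetric linear quantization: floats -> ints in [-127, 127] plus a scale."""
    max_abs = max((abs(w) for w in weights), default=0.0)
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

# Toy example: prune 90% of random "weights", then quantize the survivors.
random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(1000)]
sparse = magnitude_prune(weights, 0.9)
q, scale = quantize_int8(sparse)
print("zeros after pruning:", sum(1 for w in sparse if w == 0.0))
```

Sparsity is what lets a CPU skip work: zeroed weights need not be fetched or multiplied, which is central to the cache- and memory-aware execution discussed in the episode.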

Follow Neural Magic on Twitter at:
twitter.com/neuralmagic

Follow Jake on Twitter at:
twitter.com/jakesmithintel

Posted in: Audio Podcast, Cloud Computing, Intel, Intel Chip Chat, Intel Conversations in the Cloud