Reducing Compute Resources in Neural Networks – Conversations in the Cloud – Episode 266

December 9th, 2021 | 14:46

In this Intel Conversations in the Cloud audio podcast: Helen Kim from MaxLinear (previously NanoSemi, Inc.) joins host Jake Smith to talk about reducing compute resources to achieve target accuracies in deep neural networks. Helen goes into detail about MaxLinear’s Augmented Neuron technology, which mathematically augments neural networks to reduce memory usage and latency. Jake and Helen discuss how Intel’s oneDNN and other tools are making AI advancements easier for partners and how the future of 5G will impact the larger industry.
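MaxLinear's Augmented Neuron technology itself is proprietary, but the general trade-off Helen describes, shrinking a trained network's memory footprint while staying within a target accuracy, can be illustrated with a common technique. The sketch below shows post-training int8 weight quantization in NumPy; it is a generic example, not MaxLinear's method, and the function names are hypothetical:

```python
import numpy as np

# Generic illustration of reducing memory in a trained network via
# post-training quantization. This is NOT MaxLinear's Augmented Neuron
# technique; it only demonstrates the memory/accuracy trade-off discussed
# in the episode.

def quantize_int8(weights):
    """Map float32 weights to int8 plus a per-tensor scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32.
print(w.nbytes // q.nbytes)  # → 4

# Per-weight error is bounded by half the quantization step.
print(np.abs(w - w_hat).max() <= 0.5 * scale)  # → True
```

In practice, whether the accuracy hit from this kind of compression is acceptable depends on the model and the target accuracy, which is exactly the tension Helen and Jake discuss.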

For more information, visit:
nanosemitech.com/benchmarks-show-maxlinears-augmented-neuron-reduces-resnet50-cost-by-2x

Follow Jake on Twitter at:
twitter.com/jakesmithintel

Posted in: Intel, Intel Conversations in the Cloud