Accelerating FPGA Deep Learning for Intel OpenVINO – Intel Chip Chat – Episode 587




In this Intel Chip Chat audio podcast with Allyson Klein, Tony Kau, Software, IP, and Artificial Intelligence Marketing Director at Intel, discusses the new Intel FPGA Deep Learning Acceleration Suite for the Intel OpenVINO toolkit. The suite enables AI inferencing on Intel FPGAs, delivering lower latency along with improved performance, power efficiency, and cost efficiency for inference workloads. It also allows software developers to work with familiar frameworks and networks for machine vision and other AI-related workloads.

For more information, visit:

Posted in: Artificial Intelligence, Audio Podcast, Intel, Intel Chip Chat