Accelerating FPGA Adoption for AI Inference with the Inspur TF2 – Intel on AI – Episode 13


In this Intel on AI podcast episode: FPGA (field-programmable gate array) technology offers a high degree of flexibility and performance with low latency. Yet FPGA solutions can also be challenging to implement because of steep software development barriers, limited performance optimization, and difficult power management. Bob Anderson, General Manager of Sales for Strategic Accounts at Inspur, joins Intel on AI to talk about the Inspur TensorFlow-supported FPGA Compute Acceleration Engine (TF2). Bob explains how TF2 helps customers deploy FPGA solutions more easily and take advantage of the customization and performance of FPGAs for AI inference applications. He also describes why TF2 is especially well suited to image-based AI applications with strict real-time requirements.

Posted in: Artificial Intelligence, Audio Podcast, Intel, Intel on AI