Using AI to Build Explainable AI with Intel Optimizations and DarwinAI – Intel on AI – Episode 25


In this Intel on AI podcast episode: Deep neural networks (DNNs), arguably the most powerful form of AI today, are difficult to build, run, and explain. These challenges are significant roadblocks to their adoption in the enterprise. Ironically, AI itself can be used to help data scientists and developers build and evaluate DNNs. Sheldon Fernandez, CEO of DarwinAI, joins us to talk about how DarwinAI applies this ‘AI building AI’ method in its Generative Synthesis platform.

Sheldon explains how the technology reduces the complexity of designing high-performance deep learning solutions and also enables explainable deep learning, which lets a user understand why a network makes the decisions it does. Finally, he describes a recent analysis by the Intel AI Builders team in which Darwin-generated networks, coupled with Intel Optimizations for TensorFlow, delivered up to a 16.3X performance increase on ResNet50 workloads and up to 9.6X on NASNet workloads.
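For listeners who want to try the Intel Optimizations for TensorFlow mentioned above, they ship as the `intel-tensorflow` pip package. A minimal sketch of installing it and setting common CPU threading knobs follows; the package name and environment variables are real, but the specific values below are illustrative assumptions, not the settings used in the benchmark described in this episode.

```shell
# Install the Intel-optimized TensorFlow build.
pip install intel-tensorflow

# Illustrative OpenMP/threading knobs often tuned for CPU inference;
# the benchmark's actual configuration was not published in this post.
export OMP_NUM_THREADS=16        # assumes a 16-core socket
export KMP_BLOCKTIME=1           # ms a thread spins after finishing work
export KMP_AFFINITY=granularity=fine,compact,1,0
```

Tuning these values to the host's core count and workload is generally what produces the kind of speedups discussed in the episode.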

To learn more, visit:

Visit Intel AI Builders at:

Posted in: Artificial Intelligence, Audio Podcast, Intel, Intel on AI