Behind the Massive Scale Required for Content Recommendation – Conversations in the Cloud – Episode 252



In this Intel Conversations in the Cloud audio podcast, Taboola’s Ariel Pisetzky joins host Jake Smith to talk about using artificial intelligence (AI) for personalized content recommendations at massive scale, serving up to 4 billion web pages a day. Ariel explains why Taboola runs its services on its own on-premises infrastructure of more than 10,000 servers with Intel processors, many of which are dedicated to inference workloads. He also details some of the generation-over-generation performance gains the company has found with increased memory access. Jake and Ariel close the episode by discussing the open source community and how to respond to a crisis, such as a fire breaking out inside your data center.

Posted in: Artificial Intelligence, Audio Podcast, data centers, Intel, Intel Conversations in the Cloud