Intel IT Best Practices for Implementing Apache Hadoop Software

November 20th, 2013
Read/Download White Paper (PDF)

IT Best Practices: In an age when organizations such as Intel are rich in data, the true value of this data lies in the ability to collect, sort, and analyze it to derive actionable business intelligence (BI). Recognizing the need to add big data capabilities to our BI efforts, Intel IT formed a team to evaluate several Apache Hadoop distributions and consider implementation options. Our goal was to deliver a production platform in 10 weeks or less.

Intel IT’s “start small” strategy enabled us to take an iterative, agile-style approach. We worked with Intel IT BI teams and other groups to design and implement a 16-server, 192-core Hadoop platform, including all software and data integration solutions, in just five weeks.

Intel’s first internal compute-intensive big data production platform, built on the Intel Distribution of Hadoop, launched at the end of 2012. This platform is already delivering value in our first three use cases, helping us identify new opportunities, reduce IT costs, and enable new product offerings.

For more information on Intel IT Best Practices, please visit

Posted in: Corporate, Intel, Intel IT, IT White Papers, IT@Intel