Intel Lowers Energy Costs for High Performance Computing
The current uptick in high performance computing mostly means good things, but it also comes with a few built-in challenges. The paradox of this particular progress is this: when you scale hardware, you often scale power consumption right along with it. That’s where Intel’s Shesha Krishnapura has some good news to share, in this podcast speaking with The Register’s Tim Phillips. Says Krishnapura, “In the past, that power relationship has existed. But with Intel’s core microarchitecture platform, the power holds constant while performance climbs.”
Intel is working to improve the performance-per-watt characteristics of HPC systems. The effort is important, as Xeon-based servers dominate the Top500 list of supercomputers as well as the clusters businesses use for their most demanding jobs.
First of all, Intel’s throughput-per-rack measurement helps illustrate the point: Intel 45nm-based quad-core processors run at power levels similar to dual-core processors while offering twice the number of processing cores per server. Add Intel’s switch to higher-density memory (4GB memory modules instead of 2GB modules, which run at a similar power envelope) and it’s clear how Intel holds a fairly stable power envelope while still seeing what Krishnapura calls “a substantial performance increase, year after year.”
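The arithmetic behind that claim can be sketched in a few lines. This is only an illustrative model: the function name and every figure below (core counts, per-core throughput, server wattage) are hypothetical assumptions, not Intel-published measurements.

```python
# Illustrative performance-per-watt model; all numbers are hypothetical,
# chosen only to show the doubling effect described in the article.

def perf_per_watt(cores_per_server, throughput_per_core, watts_per_server):
    """Aggregate server throughput divided by server power draw."""
    return cores_per_server * throughput_per_core / watts_per_server

# Dual-core vs. quad-core servers at the same assumed power envelope (300 W)
# and the same assumed per-core throughput (100 jobs/hour).
dual = perf_per_watt(2, 100.0, 300.0)
quad = perf_per_watt(4, 100.0, 300.0)

print(quad / dual)  # doubling cores at constant power doubles perf-per-watt
```

Under these assumptions, the quad-core server delivers exactly twice the throughput per watt, which is the “performance climbs while power holds constant” relationship Krishnapura describes.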
Krishnapura is a principal engineer in the Intel Platform and Design Capability Engineering group, driving the internal engineering of High Performance Computing solutions optimized for tapeout and design computing. As architect of the Intel Architecture migration program for Electronic Design Automation, Shesha is responsible for enabling IA-based optimization and adoption in the EDA market by enabling application vendors and strategically influencing worldwide semiconductor customers toward best-in-class design compute solutions.