Six miles north of downtown Austin sits Stampede, a supercomputer built by the Texas Advanced Computing Center (TACC) at The University of Texas. One of the largest supercomputers in the world, and ranked as the sixth fastest, Stampede conducts trillions of calculations per second, helping researchers at hundreds of academic institutions to solve large-scale science and engineering problems ranging from aircraft design to weather forecasting to nanoelectronics.
But modern supercomputing is cost-effective only with efficient power and cooling. To solve the power and cooling challenges in its supercomputing environment, TACC turned to Schneider Electric™ and its InfraStruxure data center solution.
Building supercomputers that can run very large-scale simulations takes more than ingenuity; it takes a lot of power. Stampede, for instance, fills 180 cabinets and runs 100,000 conventional processors, along with 400,000 of a new, experimental type of processor.
“At this point, we’ve reached the level where we’re dissipating almost 40,000 watts per cabinet, per standard sort of two-foot wide server rack, which makes the density in our data center very nearly 1,000 watts per square foot across the whole data center,” says Dan Stanzione, deputy director of the Texas Advanced Computing Center at The University of Texas at Austin.
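The figures above hang together as simple arithmetic. A quick back-of-envelope check (a sketch; the per-cabinet wattage, cabinet count, and density come from the article, while the implied floor area is our own inference):

```python
# Back-of-envelope check on the density figures quoted above.
# Values from the article; the implied floor area is inferred, not stated.

WATTS_PER_CABINET = 40_000    # ~40 kW dissipated per rack, per the quote
NUM_CABINETS = 180            # cabinets in Stampede
DENSITY_W_PER_SQFT = 1_000    # ~1,000 W per square foot across the data center

total_it_watts = WATTS_PER_CABINET * NUM_CABINETS          # aggregate IT load
implied_floor_sqft = total_it_watts / DENSITY_W_PER_SQFT   # area consistent with the stated density

print(total_it_watts)       # 7200000 -> about 7.2 MW of IT load
print(implied_floor_sqft)   # 7200.0  -> roughly 7,200 sq ft, aisles included
```

At ~40 kW per two-foot-wide rack, the ~1,000 W/sq ft figure implies each cabinet is allocated roughly 40 square feet of gross floor space once aisles and support equipment are counted.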
TACC is working with Schneider Electric not only to optimize power distribution at the density needed, but also to remove heat at that density. Schneider Electric's InRow™ close-coupled cooling solutions, positioned between server cabinets with hot-aisle containment, are helping to lower TACC's total cost of ownership (TCO). And by moving cooling closer to the servers, TACC estimates additional savings of 15 to 20 percent.
“We’re getting a PUE of about 1.2, which means for every watt we use in computation, we use about 2/10th of a watt to cool it, which is very efficient,” says Stanzione.
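Stanzione's figure follows directly from the definition of PUE (Power Usage Effectiveness): total facility power divided by IT power. A minimal sketch of that arithmetic (the helper function and its name are illustrative, not from the article):

```python
# PUE = total facility power / IT power.
# A PUE of 1.2 therefore means 0.2 W of overhead (largely cooling)
# for every 1 W delivered to the computing equipment.

def overhead_watts(it_watts: float, pue: float) -> float:
    """Facility overhead implied by a given PUE (illustrative helper)."""
    return it_watts * (pue - 1.0)

# Per watt of computation at PUE 1.2: about two-tenths of a watt of overhead.
print(round(overhead_watts(1.0, 1.2), 2))  # 0.2
```

By comparison, data centers of that era commonly ran at a PUE of 1.8 or higher, which is why 1.2 counts as very efficient.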