SCD Supercomputer Gallery


IBM BlueGene/L (frost): 2005–present

NCAR's IBM BlueGene/L system was delivered on March 15, 2005. It went online on March 25 after passing five days of acceptance testing on the first try. Nicknamed "frost" because it runs cooler than most microprocessor-based high-end systems, the machine will be at NCAR for three years.

Comprising just one rack of a full, 64-rack BlueGene/L system, frost has 1,024 dual-processor compute nodes and 32 I/O nodes. Each processor runs at 0.7 GHz, which is relatively slow; current Pentium processors, for example, run at 3.2 GHz. However, because frost's processors are slow, they produce less heat and can be packed more tightly. Thus, the new system:

  • Takes up 3% of the floor space of NCAR's flagship computer, an IBM p690 named bluesky
  • Uses 6% of the electricity required by bluesky
  • Delivers 69% of bluesky's peak computational power

Frost, with one cabinet containing 2,048 processors, is far more compact than bluesky, which has 50 cabinets of 32 processors each. By one key measure, frost is also faster: although bluesky has a peak speed of 8.3 teraflops, it achieves just 4.2 teraflops on the Linpack benchmark, whereas frost achieves 4.6 teraflops.
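
The 69% figure above can be checked with a back-of-the-envelope peak calculation, assuming each of frost's PowerPC 440 processors completes four floating-point operations per cycle (its double floating-point unit can issue two fused multiply-adds per cycle):

\[
2{,}048\ \text{processors} \times 0.7\,\text{GHz} \times 4\ \tfrac{\text{flops}}{\text{cycle}} \approx 5.7\ \text{teraflops peak},
\qquad
\frac{5.7}{8.3} \approx 69\%.
\]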

Frost appears to be tilted because a triangular air duct, or plenum, is attached to it on either side. The plenum on the left directs cold air from underneath the floor through the machine; the plenum on the right directs hot exhaust air upward.

Because frost is an experimental system and lacks a complete computational environment for users, it cannot yet be used for production computing. SCD is using it for computer science research: running I/O performance tests, repartitioning the architecture into smaller blocks to accommodate a variety of users, setting up job queues, and writing tools to support the machine.

SCD is also collaborating with researchers at the University of Colorado, IBM, Argonne National Laboratory, Lawrence Livermore National Laboratory, and the San Diego Supercomputer Center to see how the Blue Gene/L architecture can best be used for atmospheric science. Joint projects include testing applications, debugging software, developing new system configurations, and evaluating job schedulers.

In addition, SCD has joined the Blue Gene/L Consortium, an association of laboratories, universities, and industrial partners working to develop scientific and technical applications for Blue Gene/L.

SCD made the machine available to select users in May 2005 for experimental purposes.