Wednesday, December 05, 2007

Washington Post on Petascale Computing

Sometime next year, developers will boot up the next generation of supercomputers, machines whose vast increases in processing power will accelerate the transformation of the scientific method, experts say.

The first "petascale" supercomputer will be capable of 1,000 trillion calculations per second. That's about twice as powerful as today's dominant model, a basketball-court-size beast known as BlueGene/L at the Energy Department's Lawrence Livermore National Laboratory in California that performs a peak of 596 trillion calculations per second.

The computing muscle of the new petascale machines will be akin to that of more than 100,000 desktop computers combined, experts say. A computation that would take a lifetime on a home PC, and about five hours on today's supercomputers, will be doable in as little as two hours.
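Quick sanity check on those numbers (my own scribble, not the Post's; it assumes "calculations per second" means flops and pegs a 2007 desktop at a ballpark 10 gigaflops):

    PETASCALE = 1.0e15        # 1,000 trillion calculations per second
    BLUEGENE_L = 596.0e12     # BlueGene/L peak, per the article
    DESKTOP = 10.0e9          # rough guess for one 2007 desktop (my assumption)

    print(PETASCALE / BLUEGENE_L)   # ~1.7 -- hence "about twice as powerful"
    print(PETASCALE / DESKTOP)      # ~100,000 -- "more than 100,000 desktops"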

"The difficulty in building the machines is tremendous, and the amount of power these machines require is pretty mind-boggling," said Mark Seager, assistant department head for advanced computing technology at Lawrence Livermore. "But the scientific results that we can get out of them are also mind-boggling and worth every penny and every megawatt it takes to build them."

A leading candidate to become the first petascale machine, the "Roadrunner" supercomputer being developed by IBM in partnership with the Energy Department's Los Alamos National Laboratory, will require about 4 megawatts of power -- enough to illuminate 10,000 light bulbs, said John Hopson, program director for advanced simulation and computing at Los Alamos in New Mexico.

But scientists say Roadrunner and its cousins will make possible dramatically improved computer simulations. That will help shed new light on subjects such as climate change, geology, new drug development, dark matter and other secrets of the universe, as well as other fields in which direct experimental observation is time-consuming, costly, dangerous or impossible.

In fact, supercomputers and their simulations are becoming so powerful that they essentially have introduced a new step in the time-honored scientific method that moves from theory to hypothesis to experimental confirmation, some experts contend.

HPC often fills the role of experiment for science that is simply too big to experiment on, such as climate, orbital mechanics, or stellar formation. One of my favorites in this realm has been the simulation of colliding black holes. We simply cannot do that experiment unless, at some point in the far future, we suddenly develop Deity-class powers/tech.
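To make the "simulation as experiment" idea concrete, here's a toy sketch in Python (mine, and nothing like the relativity codes the labs actually run): one small body orbiting a fixed mass, integrated with a leapfrog scheme. Same pattern in miniature: set up the physics, integrate, and see what the universe would have done.

    import math

    G_M = 1.0                 # units chosen so G * (central mass) = 1
    dt = 0.01
    x, y = 1.0, 0.0           # start at radius 1
    vx, vy = 0.0, 1.0         # circular-orbit speed for this radius

    def accel(px, py):
        r3 = (px * px + py * py) ** 1.5
        return -G_M * px / r3, -G_M * py / r3

    ax, ay = accel(x, y)
    for _ in range(int(2 * math.pi / dt)):   # roughly one orbital period
        vx += 0.5 * dt * ax                  # kick
        vy += 0.5 * dt * ay
        x += dt * vx                         # drift
        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax                  # kick
        vy += 0.5 * dt * ay

    print(x, y)   # should come back near (1, 0) after one full orbit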

Alternately, HPC, and simulation in general, also makes itself useful by verifying equations and theories: do we really understand what's happening when we observe it? Can we accurately describe it in equations? It's entirely possible to think we've come up with a good set of generalized equations for something, only to find there's a lot more going on than we thought when we try to apply them elsewhere. Building a simulation makes us confront the realities of all the possible actors in a given experiment.
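A toy example of that trap (again mine, with made-up numbers): the textbook no-drag range formula for a thrown ball versus a quick simulation of the same throw with air drag, an actor the tidy equation quietly ignores.

    import math

    g = 9.81
    v0, angle = 30.0, math.radians(45)    # made-up throw speed and angle

    # Textbook range formula, valid only with zero drag:
    range_ideal = v0 ** 2 * math.sin(2 * angle) / g

    # Crude Euler integration of the same throw with quadratic drag.
    # k is a guessed drag coefficient per unit mass, purely illustrative.
    k = 0.01
    dt = 0.001
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= k * speed * vx * dt
        vy -= (g + k * speed * vy) * dt
        x += vx * dt
        y += vy * dt

    print("no-drag formula: %.1f m   simulated with drag: %.1f m" % (range_ideal, x))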

Another candidate for petascale uber'puters is the Cray XT5, especially the XT5h. We're finally moving away from the general-purpose-processor machine and toward ones with processors matched to specific applications. Coding for the first generation or two is gonna be frakkin hard, but worthwhile once the tools get done right.
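As a loose analogy for that recoding pain (a Python/numpy sketch, emphatically not real XT5h or Cell code): the first loop is how you'd naturally write it for a plain scalar core; the one-liner after it is the kind of whole-array, data-parallel recasting that vector and accelerator hardware demands before it gives you anything.

    import numpy as np

    a = np.random.rand(100_000)
    b = np.random.rand(100_000)

    # Scalar-core habit: element-at-a-time loop.
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = a[i] * b[i] + 1.0

    # The same work recast as one whole-array (vector) operation:
    out_vec = a * b + 1.0

    assert np.allclose(out, out_vec)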

(:P to the WP cuz they didn't quote any of us tho)
