
10 Questions for a Scientist: Erich Strohmaier

September 25, 2013 - 3:18pm


Dr. Erich Strohmaier, of Lawrence Berkeley National Laboratory, presents data from the TOP500 list of supercomputers. | Photo courtesy of Berkeley Lab.

Berkeley Lab scientist Erich Strohmaier created the first TOP500 list of supercomputers in June 1993 with Professor Hans Meuer. Updated twice annually, the list serves as the industry inventory of the fastest supercomputers in the world. Currently, Dr. Strohmaier is the head of the Future Technologies Group at Berkeley Lab, where his team studies the design and development of hardware and software systems that allow application scientists to more effectively use high-end machines. In this “10 Questions,” Dr. Strohmaier discusses the evolution of the TOP500 list, his own career and where the field of high performance computing is going next.

Question: How did you get started in computer science?

Erich Strohmaier: I’ve always been interested in computers. When I was studying physics, I used computers all the time. While working on my Ph.D. on numerical methods in particle physics, I used the largest computer systems available at that time. So transitioning into high performance computing, or HPC, was a natural progression in my career.

Q: What inspired you to start the TOP500 list?

ES: In the 1980s, one could gather statistics on the HPC market by simply counting vector computers, as they were the only supercomputers available. This had worked well because vector machines were very different from other computers and had clearly superior performance. But by the early 1990s, there was no longer a performance gap between regular computers and vector computers. Also, massively parallel processor supercomputers started to appear in the marketplace, and the old method of counting vector computers didn’t produce good numbers anymore. So the community was looking for a new method.

Computing performance was growing very quickly, so we wanted to come up with a definition that adapted itself. That led us to the idea of listing the 500 fastest supercomputers using a single benchmark. It was not a totally new subject -- the problem was already there and just needed a solution.

Q: What is the importance of benchmarking supercomputing performance?

ES: You really can’t discuss or improve what you can’t measure. You need a definition for performance if you want to talk about supercomputing and how it’s improving. Benchmark results are essential here as they provide a practical way to define and measure performance. However, there is no single metric or benchmark that can truly represent the huge variety of programs that we use. For different purposes, different users and different situations, you need to define different benchmarks to represent progress.

In many situations you have to find a compromise between how representative benchmarks are and how widely used they are. For the TOP500 list, in the interest of comparability, we picked LINPACK, one of the most general and widely used benchmarks.

Editor's note: The LINPACK benchmark measures the performance of a dedicated system as it solves a dense system of linear equations.
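
For illustration, here is a minimal sketch of what a LINPACK-style measurement looks like in practice: time a dense solve and divide the benchmark's conventional operation count by the elapsed time. This is not the official benchmark code; the problem size and all names below are chosen purely for illustration.

```python
import time
import numpy as np

n = 4096                                  # illustrative problem size; real runs use far larger matrices
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = time.perf_counter()
x = np.linalg.solve(A, b)                 # LU factorization plus triangular solves, as in LINPACK
elapsed = time.perf_counter() - t0

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # operation count conventionally credited by the benchmark
print(f"n = {n}: {flops / elapsed / 1e9:.1f} GFLOP/s in {elapsed:.2f} s")

# Simple residual check, analogous in spirit to the benchmark's correctness test
residual = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x) + np.linalg.norm(b))
print("scaled residual:", residual)
```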

Q: Has creating the TOP500 list and monitoring supercomputers around the world led to any valuable insights about the development of these machines?

ES: Many architectural trends in HPC have shown up early and very clearly on the list, both in its first years and ever since. Many of these changes might have been debated and discussed much longer if we had not had the clear results from the TOP500 list to show those trends.

Two prominent examples come to mind. The first is the replacement of vector supercomputers by CMOS-based (semiconductor-based) massively parallel processor systems in the early 1990s. The second is the rise and importance of off-the-shelf cluster systems in the early 2000s.

The most consistent impression from the list is simply the incredible pace and continuity of change in this market.

Q: What are the current challenges or roadblocks that scientists and researchers are solving on the road to exascale computing?

Editor's Note: Exascale supercomputers can perform in excess of one quintillion (10^18) floating point operations -- essentially calculations -- per second. The National Labs are working collaboratively to solve the challenges necessary to build an exascale supercomputer.

ES: There are two dominant roadblocks. On the hardware side, we have to dramatically increase the power and energy efficiency of our current technology to be able to build affordable exascale systems at any point in the future. For our software, we need to find new ways to generate and manage the massive amounts of concurrency needed in our programs to use these future exascale systems.
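
To see why power efficiency dominates the hardware discussion, a back-of-the-envelope calculation helps. The efficiency and power-budget figures below are illustrative assumptions, not numbers quoted in the interview.

```python
# Rough arithmetic: what 10^18 operations per second implies for power draw.
exaflop = 1e18                 # exascale: 10^18 floating point operations per second

watts_per_gflop_system = 2e9   # assumed efficiency of an efficient early-2010s system (~2 GFLOP/s per watt)
power_budget_watts = 20e6      # assumed "affordable" facility power budget of roughly 20 MW

power_needed_mw = exaflop / watts_per_gflop_system / 1e6
required_efficiency = exaflop / power_budget_watts / 1e9

print(f"1 exaflop/s at ~2 GFLOP/s per watt would draw about {power_needed_mw:.0f} MW")
print(f"Fitting 1 exaflop/s into ~20 MW needs roughly {required_efficiency:.0f} GFLOP/s per watt")
```

At these assumed numbers the gap is roughly a factor of 25, which is why efficiency, rather than raw speed alone, is the limiting hardware problem.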

Q: What types of research or modeling will exascale computing enable that petascale computing does not?

ES: As with each successive leap in supercomputing capability, exascale will allow scientists to study the most challenging problems in much greater detail with much greater accuracy. DOE is currently supporting three projects -- called co-design because the science application researchers are working with computer scientists -- to create exascale systems that will help study combustion, to improve efficiency and reduce pollution; help study the behavior of materials in extreme environments, which has applications in energy production and national security; and help study the performance of advanced reactors for generating electricity. Climate research will also clearly benefit from exascale computing.

The general benefit of increased computational capability will be that we can do more explorative investigation in various scientific fields, and this will be no different with exascale. Considering the amount and complexity of scientific data we are collecting and generating, we will need to use more explorative techniques to make progress in these fields.

Q: What are you working on right now?

ES: One of the projects I’m working on now, which is focused on exascale, is DEGAS, or Dynamic Exascale Global Address Space. We are developing various elements of the software stack related to hierarchical global address space languages and how to extend them for use at exascale. The overall goal is to provide a unified programming model to make exascale systems successful, since most of the programming models we use today won’t currently work at exascale.
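
As a rough illustration of the global address space idea (this is not DEGAS code; the class and names below are hypothetical, and everything runs in a single process), a PGAS-style runtime partitions one global index space across ranks and maps each global index to an owning rank and a local offset:

```python
# Conceptual sketch of a partitioned global address space (PGAS) layout.
# Hypothetical, single-process illustration of the addressing scheme only.

class GlobalArray:
    def __init__(self, n, nranks):
        self.block = (n + nranks - 1) // nranks              # elements owned by each rank
        self.local = [[0.0] * self.block for _ in range(nranks)]

    def owner(self, i):
        """Map a global index to (owning rank, local offset)."""
        return i // self.block, i % self.block

    def put(self, i, value):                                 # would be a remote write in a real runtime
        r, off = self.owner(i)
        self.local[r][off] = value

    def get(self, i):                                        # would be a remote read in a real runtime
        r, off = self.owner(i)
        return self.local[r][off]

ga = GlobalArray(n=10, nranks=4)
ga.put(7, 3.14)                       # global index 7 is owned by rank 2 at local offset 1
print(ga.owner(7), ga.get(7))         # -> (2, 1) 3.14
```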

Q: What research are you interested in (other than your own)?

ES: I have a broad interest in the sciences, spanning from my old fields of study in particle physics and astrophysics to anything related to energy production, especially renewable and clean energy sources. I’m also interested in new methods for data exploration and analysis.

Q: Do you have any advice for students who are interested in pursuing a career working with supercomputers or using them for advanced modeling and simulation?

ES: If you want to enter this field, you need to build a strong foundation. You need to learn about the science discipline, but you also need to understand the computer science. And you will need to keep learning, changing and adapting to the rapidly changing hardware and software environments of HPC.

Q: What is your favorite computer or supercomputer from film and television?

ES: That’s hard to say. Three come to mind. The first is HAL from 2001: A Space Odyssey, even though he was a bit evil. Second is Data from Star Trek: The Next Generation. He’s a humanoid android who always seemed a little lost. My last choice is J.A.R.V.I.S. from the Iron Man movies -- I like his ironic sense of humor.
