High resolution aerospace applications using the NASA Columbia supercomputer

Dimitri J. Mavriplis, Michael J. Aftosmis, Marsha Berger

Research output: Contribution to journal › Article

Abstract

This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.

Original language: English (US)
Pages (from-to): 106-126
Number of pages: 21
Journal: International Journal of High Performance Computing Applications
Volume: 21
Issue number: 1
DOI: 10.1177/1094342006074872
State: Published - Mar 2007

Keywords

  • Computational fluid dynamics
  • Hybrid programming
  • NASA Columbia
  • OpenMP
  • Scalability
  • SGI Altix
  • Unstructured

ASJC Scopus subject areas

  • Hardware and Architecture
  • Computer Science Applications
  • Computational Theory and Mathematics
  • Theoretical Computer Science

Cite this

High resolution aerospace applications using the NASA Columbia supercomputer. / Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha.

In: International Journal of High Performance Computing Applications, Vol. 21, No. 1, 03.2007, p. 106-126.

Research output: Contribution to journal › Article

@article{9a3ba43bf6ef4b0282a2ffc68478454c,
title = "High resolution aerospace applications using the NASA Columbia supercomputer",
abstract = "This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.",
keywords = "Computational fluid dynamics, Hybrid programming, NASA Columbia, OpenMP, Scalability, SGI Altix, Unstructured",
author = "Mavriplis, {Dimitri J.} and Aftosmis, {Michael J.} and Marsha Berger",
year = "2007",
month = "3",
doi = "10.1177/1094342006074872",
language = "English (US)",
volume = "21",
pages = "106--126",
journal = "International Journal of High Performance Computing Applications",
issn = "1094-3420",
publisher = "SAGE Publications Inc.",
number = "1",
}

TY - JOUR

T1 - High resolution aerospace applications using the NASA Columbia supercomputer

AU - Mavriplis, Dimitri J.

AU - Aftosmis, Michael J.

AU - Berger, Marsha

PY - 2007/3

Y1 - 2007/3

N2 - This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.

AB - This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.

KW - Computational fluid dynamics

KW - Hybrid programming

KW - NASA Columbia

KW - OpenMP

KW - Scalability

KW - SGI Altix

KW - Unstructured

UR - http://www.scopus.com/inward/record.url?scp=33846660782&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=33846660782&partnerID=8YFLogxK

U2 - 10.1177/1094342006074872

DO - 10.1177/1094342006074872

M3 - Article

AN - SCOPUS:33846660782

VL - 21

SP - 106

EP - 126

JO - International Journal of High Performance Computing Applications

JF - International Journal of High Performance Computing Applications

SN - 1094-3420

IS - 1

ER -