High resolution aerospace applications using the NASA Columbia supercomputer

Dimitri J. Mavriplis, Michael J. Aftosmis, Marsha Berger

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.
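The abstract refers to combined MPI/OpenMP (hybrid) parallelism. As a point of reference only, the sketch below illustrates the general hybrid pattern: OpenMP threads share work within each MPI rank, and a single MPI collective combines per-rank results. It is a minimal, self-contained illustration, not the authors' solvers; the array size and per-cell computation are placeholders.

```c
/* Minimal sketch of the hybrid MPI/OpenMP model mentioned in the abstract.
 * NOT the authors' code; placeholder work stands in for a per-cell update.
 * Build (for example): mpicc -fopenmp hybrid_sketch.c -o hybrid_sketch
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* Funneled threading: only the master thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    const int n_local = 1000000;   /* cells owned by this rank (placeholder) */
    double local_sum = 0.0;

    /* OpenMP threads divide the local work on one shared-memory node. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < n_local; ++i) {
        double x = (double)rank * n_local + i;
        local_sum += x * 1.0e-9;   /* stand-in for a per-cell residual update */
    }

    /* One MPI collective combines the per-rank partial results. */
    double global_sum = 0.0;
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("ranks=%d threads/rank=%d global_sum=%.6e\n",
               nranks, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}
```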

Original language: English (US)
Title of host publication: Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, SC'05
Volume: 2005
DOIs: https://doi.org/10.1109/SC.2005.32
State: Published - 2005
Event: ACM/IEEE 2005 Supercomputing Conference, SC'05 - Seattle, WA, United States
Duration: Nov 12 2005 - Nov 18 2005

Other

Other: ACM/IEEE 2005 Supercomputing Conference, SC'05
Country: United States
City: Seattle, WA
Period: 11/12/05 - 11/18/05


Keywords

  • Computational fluid dynamics
  • Hybrid programming
  • NASA Columbia
  • OpenMP
  • Scalability
  • SGI Altix
  • Unstructured

ASJC Scopus subject areas

  • Engineering(all)

Cite this

Mavriplis, D. J., Aftosmis, M. J., & Berger, M. (2005). High resolution aerospace applications using the NASA Columbia supercomputer. In Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, SC'05 (Vol. 2005). [1560013] https://doi.org/10.1109/SC.2005.32

High resolution aerospace applications using the NASA Columbia supercomputer. / Mavriplis, Dimitri J.; Aftosmis, Michael J.; Berger, Marsha.

Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, SC'05. Vol. 2005. 2005. 1560013.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Mavriplis, DJ, Aftosmis, MJ & Berger, M 2005, High resolution aerospace applications using the NASA Columbia supercomputer. in Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, SC'05. vol. 2005, 1560013, ACM/IEEE 2005 Supercomputing Conference, SC'05, Seattle, WA, United States, 11/12/05. https://doi.org/10.1109/SC.2005.32
Mavriplis DJ, Aftosmis MJ, Berger M. High resolution aerospace applications using the NASA Columbia supercomputer. In Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, SC'05. Vol. 2005. 2005. 1560013. https://doi.org/10.1109/SC.2005.32
Mavriplis, Dimitri J. ; Aftosmis, Michael J. ; Berger, Marsha. / High resolution aerospace applications using the NASA Columbia supercomputer. Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, SC'05. Vol. 2005. 2005.
@inproceedings{192b878bdb274a67af1363bad3b9e591,
title = "High resolution aerospace applications using the NASA Columbia supercomputer",
abstract = "This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.",
keywords = "Computational fluid dynamics, Hybrid programming, NASA Columbia, OpenMP, Scalability, SGI Altix, Unstructured",
author = "Mavriplis, {Dimitri J.} and Aftosmis, {Michael J.} and Marsha Berger",
year = "2005",
doi = "10.1109/SC.2005.32",
language = "English (US)",
isbn = "1595930612",
volume = "2005",
booktitle = "Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, SC'05",

}

TY - GEN

T1 - High resolution aerospace applications using the NASA Columbia supercomputer

AU - Mavriplis, Dimitri J.

AU - Aftosmis, Michael J.

AU - Berger, Marsha

PY - 2005

Y1 - 2005

N2 - This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.

AB - This paper focuses on the parallel performance of two high-performance aerodynamic simulation packages on the newly installed NASA Columbia supercomputer. These packages include both a high-fidelity, unstructured, Reynolds-averaged Navier-Stokes solver, and a fully-automated inviscid flow package for cut-cell Cartesian grids. The complementary combination of these two simulation codes enables high-fidelity characterization of aerospace vehicle design performance over the entire flight envelope through extensive parametric analysis and detailed simulation of critical regions of the flight envelope. Both packages are industrial-level codes designed for complex geometry and incorporate customized multigrid solution algorithms. The performance of these codes on Columbia is examined using both MPI and OpenMP and using both the NUMAlink and InfiniBand interconnect fabrics. Numerical results demonstrate good scalability on up to 2016 CPUs using the NUMAlink4 interconnect, with measured computational rates in the vicinity of 3 TFLOP/s, while InfiniBand showed some performance degradation at high CPU counts, particularly with multigrid. Nonetheless, the results are encouraging enough to indicate that larger test cases using combined MPI/OpenMP communication should scale well on even more processors.

KW - Computational fluid dynamics

KW - Hybrid programming

KW - NASA Columbia

KW - OpenMP

KW - Scalability

KW - SGI Altix

KW - Unstructured

UR - http://www.scopus.com/inward/record.url?scp=33845385901&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=33845385901&partnerID=8YFLogxK

U2 - 10.1109/SC.2005.32

DO - 10.1109/SC.2005.32

M3 - Conference contribution

SN - 1595930612

SN - 9781595930613

VL - 2005

BT - Proceedings of the 2005 ACM/IEEE Conference on Supercomputing, SC'05

ER -