Grand Challenges are defined by the Federal High Performance Computing and
Communications (HPCC) program as fundamental problems in science and
engineering with broad economic and scientific impact, whose solutions
require the application of high-performance computing.
The following is a list of "official" Grand Challenge applications that
are sponsored by the various federal agencies
that are part of the Federal HPCC program. The applications are divided into
the following categories:
- Aerospace
  - Computational Aerosciences Project
    NASA - NASA Ames, NASA Langley, and NASA Lewis
    Accelerate the development and availability of high-performance computing
    technology that will be of use to the U.S. aerospace community, facilitate
    the adoption and use of this technology by the U.S. aerospace industry,
    and hasten the emergence of a viable commercial market for hardware and
    software vendors to exploit this lead.
  - High performance computational methods for coupled field problems and
    GAFD turbulence
    NSF - Colorado, Minnesota, and the National Center for Atmospheric
    Research
    Develop and implement algorithms and software on parallel computers for
    solving field problems in structural and fluid dynamics and for studying
    the highly turbulent flows that arise in geophysical and astrophysical
    fluid dynamics (GAFD).
- Computer Science
  - High performance computing for learning
    NSF - MIT, Brown, and Harvard
    Develop, implement, and test new mathematical techniques, software, and
    hardware for high-performance computers, with the ultimate goal of getting
    computers to "see, move, and speak."
  - Parallel I/O methodologies for I/O-intensive Grand Challenge applications
    NSF - Caltech and Illinois
    Investigate and develop strategies for the efficient implementation of
    I/O-intensive applications on a specially configured Intel Paragon
    computer. The team will characterize I/O behavior and performance, define
    I/O models and methodologies, and develop, implement, and test tools to
    support scientific applications with large I/O requirements. (A minimal
    collective-I/O sketch follows.)
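    The core pattern such projects study is collective I/O: every process
    writes its own disjoint slice of one shared file in a single coordinated
    operation, so the I/O layer can merge the requests into large,
    well-aligned disk accesses. Below is a minimal sketch using the mpi4py
    bindings to MPI-IO; the original work predates MPI-IO and targeted the
    Paragon's native I/O system, so this is an illustrative stand-in, not the
    project's actual interface.

    ```python
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank owns one contiguous slab of a global array.
    n_local = 1 << 20
    data = np.full(n_local, rank, dtype=np.float64)

    # Collective write: all ranks participate in one coordinated operation,
    # each writing its slab at its own offset in the shared file.
    fh = MPI.File.Open(comm, "snapshot.bin",
                       MPI.MODE_CREATE | MPI.MODE_WRONLY)
    fh.Write_at_all(rank * n_local * data.itemsize, data)
    fh.Close()
    ```

    Run with, for example, `mpiexec -n 4 python io_sketch.py`.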
- Energy
  - Mathematical combustion modeling
    DOE
    Develop adaptive parallel algorithms for computational fluid dynamics and
    apply them to combustion models. (A toy refinement-flagging sketch
    follows.)
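    The heart of an adaptive method is deciding where to refine. A toy sketch
    of gradient-based refinement flagging on a synthetic flame-front profile;
    the profile and threshold are illustrative assumptions, not the project's
    actual criteria.

    ```python
    import numpy as np

    # Synthetic 1D "flame front": temperature jumps steeply near x = 0.4.
    x = np.linspace(0.0, 1.0, 200)
    T = 0.5 * (1.0 + np.tanh((x - 0.4) / 0.02))

    # Flag cells whose local gradient is a sizable fraction of the maximum;
    # an adaptive code would subdivide these cells and leave the rest coarse.
    grad = np.abs(np.gradient(T, x))
    refine = grad > 0.25 * grad.max()
    print(f"refining {refine.sum()} of {x.size} cells near the front")
    ```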
  - Numerical Tokamak project
    DOE - Lawrence Livermore, Texas, UCLA, Oak Ridge, Princeton, NASA JPL,
    Cornell, Los Alamos, Caltech, and the National Energy Research
    Supercomputer Center
    Develop and integrate particle and fluid plasma models on massively
    parallel machines as part of the multidisciplinary study of Tokamak
    fusion reactors. (A one-dimensional particle-in-cell sketch follows.)
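    Particle plasma models of this kind are typically particle-in-cell (PIC)
    codes: deposit charge from particles onto a grid, solve a field equation
    on the grid, then interpolate the field back to push the particles. A
    minimal one-dimensional electrostatic sketch in normalized units; the
    grid size, particle count, and time step are illustrative choices, not
    the project's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    ng, n_p, box, dt = 64, 20000, 2 * np.pi, 0.1
    dx = box / ng
    x = rng.uniform(0.0, box, n_p)    # electron marker positions
    v = rng.normal(0.0, 0.1, n_p)     # thermal velocities
    w = box / n_p                     # weight so mean electron density is 1

    for step in range(200):
        # 1. Deposit charge (nearest-grid-point for brevity; production
        #    codes use higher-order shapes). Ions are a fixed background +1.
        cell = (x / dx).astype(int) % ng
        rho = 1.0 - np.bincount(cell, minlength=ng) * w / dx
        # 2. Solve the periodic Poisson equation d2(phi)/dx2 = -rho by FFT.
        k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
        k[0] = 1.0                    # dummy value; mode 0 is zeroed below
        phi_hat = np.fft.fft(rho) / k**2
        phi_hat[0] = 0.0
        E = np.fft.ifft(-1j * k * phi_hat).real   # E = -d(phi)/dx
        # 3. Gather the field to particles and push them (electron q/m = -1).
        v -= E[cell] * dt
        x = (x + v * dt) % box
    ```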
  - Oil reservoir modeling
    DOE - Texas A&M, Brookhaven, Oak Ridge, Rice, Stony Brook, South
    Carolina, and Princeton
    Develop software for massively parallel computers that calculates fluid
    flow through permeable media. The project has a dual application,
    focusing on methods that solve modeling problems for petroleum reservoirs
    and for groundwater contamination. (A small Darcy-flow sketch follows.)
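    Flow through permeable media obeys Darcy's law: flux is proportional to
    permeability times the pressure gradient, giving an elliptic equation
    div(k grad p) = 0 for the steady pressure field. A small sketch on a 2D
    grid; the permeability field, boundary pressures, and plain Jacobi
    iteration are illustrative simplifications (production codes use
    multigrid or Krylov solvers).

    ```python
    import numpy as np

    nx, ny = 64, 32
    k = np.ones((ny, nx))
    k[12:20, 16:48] = 0.01           # hypothetical low-permeability lens

    # Face transmissibilities use harmonic means, the standard choice when
    # permeability is discontinuous between neighboring cells.
    kx = 2 * k[:, :-1] * k[:, 1:] / (k[:, :-1] + k[:, 1:])
    ky = 2 * k[:-1, :] * k[1:, :] / (k[:-1, :] + k[1:, :])

    p = np.zeros((ny, nx))
    for _ in range(20000):           # Jacobi sweeps
        num = np.zeros_like(p)
        den = np.zeros_like(p)
        num[:, 1:] += kx * p[:, :-1]; den[:, 1:] += kx
        num[:, :-1] += kx * p[:, 1:]; den[:, :-1] += kx
        num[1:, :] += ky * p[:-1, :]; den[1:, :] += ky
        num[:-1, :] += ky * p[1:, :]; den[:-1, :] += ky
        p = num / np.maximum(den, 1e-30)
        p[:, 0], p[:, -1] = 1.0, 0.0   # injector / producer pressures
    flux_x = -kx * (p[:, 1:] - p[:, :-1])   # Darcy flux between cells
    ```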
  - Quantum chromodynamics calculations
    DOE - Los Alamos
    Develop lattice gauge theory algorithms on massively parallel machines
    for high-energy and particle physics applications. (A toy lattice-gauge
    sketch follows.)
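    Lattice gauge theory replaces continuous gauge fields with link variables
    on a space-time grid and samples field configurations by Monte Carlo. A
    toy two-dimensional U(1) model with a Metropolis update; real QCD uses
    SU(3) matrices in four dimensions and far more efficient local updates,
    so every detail here is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L, beta = 8, 2.0
    # U(1) link angles theta[mu, i, j] on a periodic lattice (mu = x or y).
    theta = rng.uniform(-np.pi, np.pi, (2, L, L))

    def plaquettes(th):
        # Plaquette angle: theta_x(n) + theta_y(n+x) - theta_x(n+y) - theta_y(n)
        tx, ty = th
        return tx + np.roll(ty, -1, axis=0) - np.roll(tx, -1, axis=1) - ty

    def action(th):
        return -beta * np.cos(plaquettes(th)).sum()   # Wilson action

    # Metropolis hits on single links. Recomputing the full action is
    # O(volume) per hit; real codes use local "staple" sums instead.
    for sweep in range(100):
        for _ in range(2 * L * L):
            mu, i, j = rng.integers(2), rng.integers(L), rng.integers(L)
            old, S_old = theta[mu, i, j], action(theta)
            theta[mu, i, j] = old + rng.uniform(-0.5, 0.5)
            if rng.random() >= np.exp(min(0.0, S_old - action(theta))):
                theta[mu, i, j] = old                 # reject the trial move

    print("average plaquette:", np.cos(plaquettes(theta)).mean())
    ```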
- Environmental Monitoring and Prediction
  - Adaptive coordination of predictive models with experimental observations
    NSF - Stanford and NASA Ames
    Using a predictive computer model that runs simulations in real time,
    together with a laboratory test bed, investigate how the interplay of
    simulation and experiment can determine what data need to be gathered,
    and at what locations and resolutions, so that accurate predictions can
    be made of the future behavior of a complex nonlinear fluid system such
    as the atmosphere or the ocean.
  - Computational chemistry
    DOE - Argonne, Pacific Northwest Laboratory, Allied Signal, du Pont,
    Exxon, and Phillips
    Develop new parallel algorithms, software, and portable tools for
    computational chemistry, and develop modeling systems for critical
    environmental problems and remediation methods.
  - Data analysis and knowledge discovery in geophysical databases
    NASA - UCLA and NASA JPL
    Demonstrate the applicability of information systems for geophysical
    databases to support cooperative research in earth-science projects.
  - Development of algorithms for climate models scalable to teraFLOP
    performance
    NASA - NASA Goddard
    Develop a high-resolution global climate model capable of centuries-long
    calculations on massively parallel machines at teraFLOP speeds.
  - Development of an Earth system model: atmosphere/ocean dynamics and
    tracer chemistry
    NASA - UCLA, Princeton, Berkeley, Santa Barbara, NASA JPL, and Lawrence
    Livermore
    Develop a model of the coupled global atmosphere-ocean system, including
    chemical tracers that are found in, and may be exchanged between, the
    atmosphere and the oceans. Use the model to study the general circulation
    of the coupled atmosphere-ocean system, the global geochemical carbon
    cycle, and the global chemistry of the troposphere and stratosphere.
  - A distributed computational system for large scale environmental modeling
    NSF - Carnegie Mellon and MIT
    Use high-performance heterogeneous computing systems, advanced software
    environments, parallel architectures, and networks to develop algorithms
    for multiphase chemistry and aerosol dynamics, and a distributed
    computing approach for the simultaneous solution and sensitivity analysis
    of environmental models.
  - Earthquake ground motion modeling in large basins
    NSF - Carnegie Mellon, USC, and the National University of Mexico
    Develop new mathematical models and software tools to demonstrate the
    capability for predicting, by simulation on parallel computers, the
    ground motion of large basins during strong earthquakes, and use this
    capability to study the seismic response of the Greater Los Angeles
    Basin. (A small wave-propagation sketch follows.)
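    The computational core is wave propagation through a basin whose soft,
    slow sediments trap and amplify ground motion. A minimal constant-density
    sketch with second-order finite differences; the grid, velocities, and
    source are illustrative, and np.roll makes the edges periodic where a
    real code would use absorbing boundaries.

    ```python
    import numpy as np

    nx, nz, dx, dt = 200, 100, 50.0, 0.004   # 10 km x 5 km grid; m and s
    c = np.full((nz, nx), 2000.0)            # bedrock wave speed, m/s
    c[:30, 60:140] = 600.0                   # hypothetical soft basin fill
    u_prev = np.zeros((nz, nx))
    u = np.zeros((nz, nx))

    for it in range(1000):
        # Second-order scheme for u_tt = c^2 * laplacian(u); the time step
        # satisfies the CFL condition c*dt/dx < 1/sqrt(2).
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
        u_next = 2.0 * u - u_prev + (c * dt)**2 * lap
        t = it * dt
        u_next[nz - 10, nx // 2] += np.exp(-((t - 0.5) / 0.1)**2)  # source
        u_prev, u = u, u_next

    print("peak surface motion:", np.abs(u[0]).max())
    ```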
  - Four-dimensional data assimilation for massive Earth system data analysis
    NASA - NASA Goddard, NASA JPL, and Syracuse
    The goal of data assimilation is to calculate consistent, uniform spatial
    and temporal representations of the Earth environment that can be used
    for scientific analysis and synthesis. This involves collecting diverse
    Earth observational data sets and incorporating them into models of the
    ocean, land surface, and atmosphere, including chemical processes. (A
    one-step analysis sketch follows.)
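    The analysis step that blends a model forecast with observations can be
    written as a single linear update. A tiny sketch with synthetic numbers;
    the state, covariances, and observations are all assumed for
    illustration, and operational systems solve the same problem with
    variational methods at vastly larger dimension.

    ```python
    import numpy as np

    # Background state x_b: a model forecast of 5 gridded values (synthetic).
    x_b = np.array([1.0, 2.0, 1.5, 0.5, 1.0])
    B = 0.5 * np.eye(5)              # assumed background-error covariance

    # Two instruments observe grid points 1 and 3.
    H = np.zeros((2, 5)); H[0, 1] = H[1, 3] = 1.0
    y = np.array([2.4, 0.2])         # observations (synthetic)
    R = 0.1 * np.eye(2)              # assumed observation-error covariance

    # Analysis: optimally blend forecast and observations.
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman gain
    x_a = x_b + K @ (y - H @ x_b)
    print("analysis state:", x_a)
    ```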
  - Global climate modeling
    DOE - Los Alamos, Argonne, and Oak Ridge
    Conduct numerical studies of the Earth's climate using general
    circulation models of the atmosphere and ocean.
  - Groundwater transport and remediation
    DOE - Texas A&M, Brookhaven, Oak Ridge, Rice, Stony Brook, South
    Carolina, and Princeton
    Develop software for massively parallel computers that calculates fluid
    flow through permeable media. The project has a dual application,
    focusing on methods that solve modeling problems for petroleum reservoirs
    and for groundwater contamination.
  - High performance computing for land cover dynamics
    NSF - Maryland, New Hampshire, Indiana, and NASA Goddard
    Develop techniques to support access to and analysis of remotely sensed
    data stored on parallel disk systems, and use those techniques to
    facilitate the study of global ecological responses to climate changes
    and human activity.
  - Massively parallel simulation of large scale, high resolution ecosystem
    models
    NSF - Arizona
    Establish new algorithms and implementations for massively parallel
    processing that integrate geographic information system (GIS) databases
    with cellular discrete-event methodology to express large scale,
    realistic ecosystem models and visualize their simulated behavior. The
    focus is on monitoring and predicting landscape and ecosystem changes for
    large geographic regions. (A toy cellular-model sketch follows.)
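    The project's cellular discrete-event methodology couples GIS data to
    event-driven cell models; a synchronous cellular automaton is a
    simplified stand-in that shows the flavor. A toy vegetation/fire model on
    a toroidal grid, with states and rates that are illustrative rather than
    calibrated.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    EMPTY, TREE, FIRE = 0, 1, 2
    grid = np.where(rng.random((128, 128)) < 0.6, TREE, EMPTY)
    p_grow, p_ignite = 0.01, 1e-4    # illustrative regrowth/ignition rates

    for step in range(500):
        burning = grid == FIRE
        # A tree ignites if any von Neumann neighbor burns (wrapped edges).
        neighbor_fire = (np.roll(burning, 1, 0) | np.roll(burning, -1, 0) |
                         np.roll(burning, 1, 1) | np.roll(burning, -1, 1))
        rand = rng.random(grid.shape)
        new = grid.copy()
        new[burning] = EMPTY
        new[(grid == TREE) & (neighbor_fire | (rand < p_ignite))] = FIRE
        new[(grid == EMPTY) & (rand < p_grow)] = TREE
        grid = new

    print("vegetated fraction:", (grid == TREE).mean())
    ```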
- Molecular Biology and Biomedical Imaging
  - Advanced computational approaches to biomolecular modeling and structure
    determination
    NSF - Illinois, Duke, NYU, Yale, and Eli Lilly Corporation
    Develop models and molecular dynamics algorithms for X-PLOR, a widely
    used program for structural biology, in order to advance the fundamental
    understanding of molecular biology and pharmacology. (A minimal
    molecular-dynamics sketch follows.)
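    A molecular dynamics core integrates Newton's equations for atoms under
    an empirical force field. A minimal sketch with a Lennard-Jones fluid and
    the velocity-Verlet integrator, in reduced units; real biomolecular force
    fields add bonded terms and electrostatics, so this only shows the
    integration loop.

    ```python
    import numpy as np

    n_side, spacing, dt, box = 4, 2.0, 0.005, 8.0
    g = np.arange(n_side) * spacing
    pos = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T  # 64-atom lattice
    rng = np.random.default_rng(3)
    vel = rng.normal(0.0, 0.5, pos.shape)
    vel -= vel.mean(axis=0)                                # zero net momentum

    def lj_forces(pos):
        """Lennard-Jones pair forces with minimum-image periodicity."""
        d = pos[:, None, :] - pos[None, :, :]
        d -= box * np.round(d / box)
        r2 = (d ** 2).sum(-1)
        np.fill_diagonal(r2, np.inf)                       # no self-force
        inv6 = r2 ** -3
        return ((24.0 * (2.0 * inv6**2 - inv6) / r2)[..., None] * d).sum(1)

    f = lj_forces(pos)
    for step in range(1000):                   # velocity-Verlet integration
        vel += 0.5 * dt * f
        pos = (pos + dt * vel) % box
        f = lj_forces(pos)
        vel += 0.5 * dt * f

    print("mean kinetic energy per atom:", 0.5 * (vel**2).sum() / len(pos))
    ```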
  - Computational biomolecular design
    NSF - Houston
    Use emerging scalable parallel computers and software to develop and
    implement new methods for solving critical problems in biomolecular
    design.
  - Computational structural biology
    DOE - Caltech, Argonne, University of Washington, and UCLA
    Understand the components of genomes and develop a parallel programming
    environment for structural biology.
  - High performance imaging in biological research
    NSF - Carnegie Mellon and Pittsburgh
    Combine the latest technologies in light microscopy and reagent chemistry
    with advanced techniques for computerized image analysis, processing, and
    display, implemented on high-performance computers, to produce an
    automated, high-speed, interactive tool that makes possible new kinds of
    basic biological research on living cells and tissues.
  - Understanding human joint mechanics through advanced computational models
    NSF - Rensselaer Polytechnic and Columbia
    Develop automated, adaptive, three-dimensional finite element analysis
    and parallel solution strategies to describe the nonlinear moving-contact
    problems characteristic of the biomechanics of joints in the human
    musculoskeletal system, using actual anatomic geometries and the
    multiphasic properties of the tissues in the joint.
- Product Design and Process Optimization
  - First-principles simulation of materials properties
    DOE - Oak Ridge, Brookhaven, and NASA Ames
    Investigate new methods for performing large-scale, first-principles
    simulation of materials properties using a hierarchy of increasingly
    accurate techniques that exploit the power of massively parallel
    computing systems.
  - High capacity atomic-level simulations for the design of materials
    NSF - Caltech, Columbia, and NASA JPL
    Formulate and implement new methodologies for parallel computers to carry
    out high capacity atomic-level simulations for the design of materials,
    and apply the resulting software to critical industrial materials
    problems.
- Space Science
  - Black hole binaries: coalescence and gravitational radiation
    NSF - Texas, Illinois, Syracuse, Pittsburgh, Penn State, Northwestern,
    North Carolina, and Cornell
    Create a computational toolkit of modular development tools to support
    the study, via the numerical solution of Einstein's equations for
    gravitational fields, of the coalescence of astrophysical black holes and
    the gravitational radiation they emit.
  - Convective turbulence and mixing in astrophysics
    NASA - Colorado, Michigan State, Chicago, Argonne, and NCAR
    Develop the next generation of multi-dimensional hydrodynamic codes for
    astrophysical simulations involving turbulent convection, based on the
    use of massively parallel machines.
  - Cosmology and accretion astrophysics
    NASA - Los Alamos, Syracuse, Penn State, Caltech, and the Australian
    National University
    Develop parallel, scalable particle codes (N-body, smoothed particle
    hydrodynamics (SPH), and hybrid) based on hierarchical tree data
    structures, and use them to study astrophysical problems. (A minimal
    N-body sketch follows.)
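    Hierarchical tree codes approximate distant groups of particles by single
    multipoles, cutting the cost of gravity from O(N^2) to roughly
    O(N log N). The sketch below keeps the O(N^2) direct sum for brevity and
    shows the softened-gravity leapfrog loop that a tree code accelerates;
    all parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, dt, eps = 256, 0.01, 0.05        # particles, time step, softening
    pos = rng.normal(0.0, 1.0, (n, 3))
    vel = np.zeros((n, 3))
    m = np.full(n, 1.0 / n)             # equal masses, total mass 1 (G = 1)

    def accel(pos):
        """Direct-sum gravity with Plummer softening: O(N^2) pair forces."""
        d = pos[None, :, :] - pos[:, None, :]
        r3 = ((d ** 2).sum(-1) + eps ** 2) ** 1.5
        return (m[None, :, None] * d / r3[..., None]).sum(axis=1)

    a = accel(pos)
    for step in range(500):             # leapfrog (kick-drift-kick)
        vel += 0.5 * dt * a
        pos += dt * vel
        a = accel(pos)
        vel += 0.5 * dt * a
    ```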
  - The formation of galaxies and large-scale structure
    NSF - Princeton, Illinois, Pittsburgh, MIT, Indiana, and San Diego
    Explore different numerical algorithms, mesh adaptation strategies,
    programming models, and new software technologies in order to obtain
    detailed numerical simulations that can help answer the question: "What
    is the origin of large-scale structure in the universe, and how do
    galaxies form?"
  - Large scale structure and galaxy formation
    NASA - University of Washington and University of Toronto
    Develop the tools needed for high-performance N-body simulations, and use
    them to test the "standard model" for the origin of galaxies and
    large-scale structure by accurately evolving it into its present, highly
    nonlinear state.
  - Radio synthesis imaging
    NSF - Illinois, Wisconsin, Maryland, and Berkeley
    Implement a prototype of the next generation of astronomical telescope
    systems: remotely located telescopes connected by high-speed networks to
    very high performance computers and on-line data archives. (A toy imaging
    sketch follows.)
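    A synthesis array samples the Fourier transform of the sky on its
    baselines; imaging grids those visibility samples and inverse-transforms
    them into a "dirty image". A toy sketch with synthetic data for a single
    point source; a production imager uses convolutional gridding and
    deconvolution such as CLEAN, so all values here are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n, nvis = 256, 5000             # image size, visibility sample count

    # Hypothetical (u, v) sampling and visibilities for one off-center
    # point source; a real array measures these on its baselines.
    u = rng.integers(-n // 2, n // 2, nvis)
    v = rng.integers(-n // 2, n // 2, nvis)
    vis = np.exp(-2j * np.pi * (u * 0.1 + v * 0.05))

    # Grid the samples onto a regular uv-plane (nearest cell for brevity).
    grid = np.zeros((n, n), dtype=complex)
    np.add.at(grid, (v % n, u % n), vis)

    # The dirty image is the inverse Fourier transform of the gridded data.
    dirty = np.fft.fftshift(np.fft.ifft2(grid).real)
    peak = np.unravel_index(dirty.argmax(), dirty.shape)
    print("brightest pixel at:", peak)
    ```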
  - Solar activity and heliospheric dynamics
    NASA - Naval Research Laboratory and NASA Goddard
    Develop parallel algorithms for solar and heliospheric modeling.