This simulation depicts the passive advection of a tracer on the sphere. The tracer (left panel) consists of two cosine bells with maximum values of 2 over a background value of 1. The high-order elements (right panel), represented by the quadrangles, are dynamically load-balanced at each refinement step, and the colors indicate processor numbers. (Click the image to view a brief 1.6 MB animation from this simulation.)
To continue its scientific leadership, NCAR needs a unified simulation environment that can dynamically resolve time and space scales on petascale computers. A framework that encompasses such an environment relieves scientists of the need to become fluent in advanced scalable numerical methods and lets them focus on their research questions.
Spatial and temporal scales projects
This simulation uses the non-hydrostatic equations of the atmosphere, which are valid for all scales relevant to weather prediction and climate simulation. The density perturbation is plotted with respect to a hydrostatically balanced atmosphere at rest. The perturbation consists of a cold air bubble that falls to the ground. The filtering approach described in the text was required to perform this simulation. (Click the image to view a 2.1 MB animation.)
Co-sponsored by an NSF PetaApps award, the Multiscale Unified Simulation Environment (MUSE) is a project that could be used to build such models and enable predictions across scales. Thanks to the growth of the team this year, MUSE development made significant progress.
First, the model was extended to support spherical shells. Unlike other “shell” models, this is not a purely 2D curvilinear implementation: for scalar advection tests, the velocity at the surface of the sphere is in fact three-dimensional, even though only “shell” elements are employed. The coordinate transformation within the model relies only on a numerical differentiation procedure, so arbitrary terrain and elevation are supported automatically.
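The idea behind a purely numerical coordinate transformation can be sketched as follows. This is an illustrative example, not MUSE code: `jacobian` approximates the metric terms of any user-supplied mapping by centered finite differences, so an analytic Jacobian is never required, and `face`, a gnomonic-like mapping of one cubed-sphere face, is an assumed stand-in for a terrain-following mapping.

```python
# Sketch: metric terms obtained by numerical differentiation of an
# arbitrary element mapping (illustrative; not MUSE's actual code).
import math

def jacobian(mapping, xi, eta, h=1e-6):
    """Approximate the 2x2 Jacobian d(x,y)/d(xi,eta) by centered differences."""
    x1, y1 = mapping(xi + h, eta)
    x0, y0 = mapping(xi - h, eta)
    x3, y3 = mapping(xi, eta + h)
    x2, y2 = mapping(xi, eta - h)
    return [[(x1 - x0) / (2 * h), (x3 - x2) / (2 * h)],
            [(y1 - y0) / (2 * h), (y3 - y2) / (2 * h)]]

# Hypothetical mapping: gnomonic-style projection of one cubed-sphere face
def face(xi, eta):
    r = math.sqrt(1.0 + xi * xi + eta * eta)
    return xi / r, eta / r

# At the face center the mapping is locally isometric, so J is the identity
J = jacobian(face, 0.0, 0.0)
```

Because only function evaluations of the mapping are needed, swapping in a terrain-following mapping changes nothing else in the discretization.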
The shallow water equations are used to simulate a barotropic instability problem. In the top panel, a jet with a small perturbation is introduced in the northern hemisphere: initially stable, the jet is broken into waves by the growing perturbation. The bottom panel shows the dynamically adapted cubed-sphere mesh that tracks the waves. The blue regions represent elements with lower accuracy (fourth order), while the red regions represent high-accuracy (sixth-order) elements. (Click the image to view a 2.9 MB animation.)
One of the problems faced by high-order methods is their inability to produce stable solutions in highly under-resolved situations. This year, we addressed this problem with two techniques. The first was to replace the Legendre-Gauss-Lobatto (LGL) points with Gauss-Legendre points, which produced much more stable simulations. The second introduced a sophisticated filter controlled by a “resolution indicator.” The indicator dictates where the solution needs filtering (i.e., diffusion). The diffusion is injected into the model using a fast filter rather than a discontinuous Galerkin discretization of a Laplacian. The results are promising: the approach was tested for advection fidelity on the sphere and for the non-hydrostatic equations in an idealized 2D setting.
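The indicator-controlled filtering idea can be sketched in a few lines. The function names, the skyline-style indicator, and the threshold and filter parameters below are illustrative assumptions, not the actual MUSE implementation: an exponential modal filter damps the high Legendre modes of an element, but only where the indicator flags the solution as under-resolved.

```python
# Sketch of a resolution-indicator-controlled modal filter
# (names and parameter values are illustrative, not MUSE's code).
import math

def resolution_indicator(coeffs):
    """Ratio of energy in the highest mode to total modal energy."""
    total = sum(c * c for c in coeffs)
    if total == 0.0:
        return 0.0
    return coeffs[-1] ** 2 / total

def apply_filter(coeffs, threshold=1e-3, alpha=36.0, order=8):
    """Damp high modes with an exponential filter, but only where the
    indicator says the element is under-resolved."""
    if resolution_indicator(coeffs) < threshold:
        return list(coeffs)          # well resolved: leave untouched
    n = len(coeffs) - 1
    return [c * math.exp(-alpha * (k / n) ** order)
            for k, c in enumerate(coeffs)]

smooth = [1.0, 0.1, 0.001, 1e-6]     # rapidly decaying spectrum
rough = [1.0, 0.5, 0.4, 0.3]         # flat spectrum: under-resolved
```

Because the filter is a per-element modal scaling, it injects diffusion at a fraction of the cost of assembling and applying a DG Laplacian.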
Mesh of the Earth with continents before the initial refinement and mean water depth for the tsunami simulation. Water depth can vary from 5 meters to 9,000 meters in a single grid element because of the ocean floor's topography.
MUSE now supports the shallow water equations on the sphere. This is a fundamental step for the model, since these equations produce most of the wave types relevant to climate modeling. Because the spherical shell implementation in MUSE does not support purely 2D curvilinear transforms, the shallow water equations use a 3D momentum formulation constrained to the sphere through a Lagrange multiplier approach. All of the classical standard test cases were run in both adaptive and non-adaptive configurations.
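At each point, the effect of the Lagrange multiplier term is to remove the component of the 3D momentum normal to the sphere. The sketch below illustrates that projection; the function name and its use at discrete points are assumptions for illustration, not the model's actual formulation.

```python
# Sketch: the spherical constraint as a pointwise tangent-plane projection
# (illustrative; the Lagrange multiplier enforces this in the continuous
# equations rather than as an explicit post-projection).
def project_to_sphere(u, x):
    """Project velocity u onto the plane tangent to the sphere at point x."""
    r2 = sum(xi * xi for xi in x)                    # |x|^2
    dot = sum(ui * xi for ui, xi in zip(u, x))       # u . x
    return [ui - (dot / r2) * xi for ui, xi in zip(u, x)]

# At the north pole of the unit sphere, the radial (z) component is removed
v = project_to_sphere([1.0, 2.0, 3.0], [0.0, 0.0, 1.0])  # -> [1.0, 2.0, 0.0]
```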
In this 24-hour simulation of the 2010 Chilean tsunami, the initial condition is computed by assuming that the initial sea surface displacement equals the static sea floor uplift caused by an abrupt slip at the interface between the Nazca and South American plates. As the tsunami propagates through the Pacific Ocean, the order of interpolation and the mesh are adapted to track the wave. The resolution remains high in the shallow areas along the initial earthquake uplift, where the wave propagates slowly because of the very low depth. The size of the elements varies from 250 km to 0.25 km. This simulation was performed on 256 processors of Lynx, NCAR’s Cray XT5m system. The speedup with respect to real time varies between 20 and 40 depending on how the error-indicator step is performed; hence the 24-hour event is simulated in a little more than 30 minutes. (Click image to view the 7.5 MB animation.)
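The reason the wave slows, and the mesh can stay fine, in shallow water follows from the shallow water gravity wave speed c = sqrt(gH). A quick check at the two depth extremes quoted above (5 m and 9,000 m):

```python
# Shallow water gravity wave speed c = sqrt(g*H) at the depth extremes
# mentioned in the caption (values rounded; g = 9.81 m/s^2).
import math

def wave_speed(depth_m, g=9.81):
    """Phase speed of long gravity waves in water of depth H."""
    return math.sqrt(g * depth_m)

deep = wave_speed(9000.0)     # open Pacific: ~297 m/s
shallow = wave_speed(5.0)     # coastal shelf: ~7 m/s
```

A forty-fold drop in wave speed means the wavelength shrinks by the same factor, so coastal elements can be hundreds of times smaller without shrinking the explicit time step below what the deep ocean already requires.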
Finally, MUSE was employed to create a complete tsunami simulation application, the first realistic application of the model. Such a prediction system, when coupled with an inundation parameterization, could provide early warning of disastrous events linked to tsunami waves generated by seismic phenomena. The application also demonstrates the unstructured meshes that the MUSE model can now handle, as well as the quality of the numerical predictions provided by the high-order discontinuous Galerkin (DG) techniques. In the following figures, the model was used to compute the propagation of the 2010 Chilean tsunami through the global ocean.
Scalable elliptic solver for spectral elements
This year, the multigrid code developed at Argonne National Laboratory was embedded into GASpAR to solve the coarse grid problem associated with the two-level optimized Schwarz preconditioning strategy. The optimized Schwarz strategy is 20-25% faster than the classical Schwarz approach. This enables the spectral element code GASpAR to run more than 10 times faster than before and to handle previously inaccessible problems. However, issues arose within the code; for instance, some of the parallel exchanges must be revisited to enable scaling past 32 processors with large numbers of unknowns. The goal for FY2011 is to extend the preconditioning approach to AMR and 3D spectral elements. For more information on this work, see the GASpAR report.
This work supports CISL’s strategic plan in various ways. It advances CISL’s strategic imperative to produce scientific excellence, and it develops numerical algorithms for petascale and exascale computing. It also supports CISL’s science frontier for algorithmic acceleration, and it fulfills a strategic action item to accelerate applications algorithmically by applying adaptive mesh refinement techniques, new time-stepping techniques, and efficient use of processor technology.
These projects are made possible through NSF Core funding and NSF PetaApps award number 0904599.
The free-surface elevation has been compared with DART data from the NOAA Center for Tsunami Research at eight different stations. The comparison shows that the model accurately estimates the time at which the tsunami reaches each station. The wave amplitudes are also well predicted, except at stations 5 and 8. Because those stations lie in very shallow areas with irregular bathymetry and several small islands, the model resolution is probably insufficient to reproduce the wave heights accurately.