CITA Research September 2002 - August 2003
Computational Astrophysics


CITA's expertise and computing infrastructure continue to drive a strong analysis and simulation effort. Current developments include large-scale parallel N-body, hydrodynamics, and magnetohydrodynamics codes. CITA has hired a permanent parallel programmer to assist in the utilization of its high-performance computing infrastructure. Current hardware includes a 536-CPU 2.4 GHz Intel cluster with 284 GB of memory, a 32-processor 64 GB shared-memory Compaq AlphaServer GS320, 8 quad ES40 AlphaServers, and Intel and Alpha Linux clusters comprising upward of 100 processors. Recent highlights include detailed simulations of the intergalactic medium to model the Sunyaev-Zeldovich effect (Pen and Zhang; Bond, Wadsley and Ruetalo); construction of a dedicated out-of-core machine for cosmological gas and clustering simulations (Pen and Trac); hydrodynamical simulations of cluster formation (Loken and collaborators); hydrodynamical simulations of galaxy interactions (Dubinski and collaborators); simulations of two-phase protoplanetary disks (Humble and collaborators); a new numerical method to solve the equations of MHD (Pen and Arras); and precision calculations of the internal modes of rotating stars (Arras, Pen, and Wu).

General relativistic supernova code

M. Liebendoerfer has completed the world's first detailed documentation of a general relativistic supernova code with Boltzmann neutrino transport. Accurate neutrino transport is essential to quantify the energy deposition of outstreaming neutrinos behind the stalled shock; it is still not known how the shock is revived to produce a supernova explosion. M. Liebendoerfer has parallelized parts of the code with OpenMP and adapted it to the SMP machines at CITA. New tools for the detailed analysis of the calculated neutrino signal have been developed.

Comparison of numerical methods

M. Liebendoerfer, M. Rampp (Max-Planck Institute for Astrophysics, Garching, MPA), H.-T. Janka (MPA), and A. Mezzacappa (Oak Ridge National Laboratory, ORNL) have compared in detail their supernova simulations, which implement either an implicit solution of the Boltzmann transport equation or a variable Eddington factor method for the neutrino transport, and found satisfactory agreement in spherical symmetry.

Singularity excision algorithm

Together with D. Richmond (Univ. of Victoria), M. Liebendoerfer has supplemented the adaptive grid in the hydrodynamics code AGILE with a singularity excision algorithm, to be combined with neutrino transport in future predictions of the neutrino signal after a failed supernova explosion.

Parallel code development

M. Liebendoerfer has created a new and concise implementation of cubic domain decomposition with MPI for distributed-memory computations in the three-dimensional MHD code developed by Ue-Li Pen, Phil Arras, and ShingKwong Wong. A concise and fast code facilitates student projects and extension with additional input physics. The Lattimer-Swesty equation of state has been tabulated for future multidimensional magnetohydrodynamics simulations in the supernova context.
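The indexing logic behind such a cubic domain decomposition can be sketched in a few lines. The following Python sketch is illustrative only: all function names are assumptions, and the actual code uses MPI primitives (e.g. Cartesian communicators) rather than explicit arithmetic. It shows how a linear rank maps to a sub-cube of the global grid and to its six face neighbours, with periodic wrap-around:

```python
# Sketch of cubic domain decomposition: each of p**3 ranks owns one
# sub-cube of a global n**3 grid; face neighbours wrap around periodically,
# as an MPI Cartesian topology would with all periods enabled.

def rank_to_coords(rank, p):
    """Map a linear rank to (ix, iy, iz) on a p x p x p process grid."""
    return (rank // (p * p), (rank // p) % p, rank % p)

def coords_to_rank(ix, iy, iz, p):
    """Inverse map, with periodic wrap-around in every direction."""
    return (ix % p) * p * p + (iy % p) * p + (iz % p)

def local_slab(rank, p, n):
    """Index ranges [lo, hi) of the global n**3 grid owned along each axis."""
    ix, iy, iz = rank_to_coords(rank, p)
    w = n // p                      # cells per process along one axis
    return [(c * w, (c + 1) * w) for c in (ix, iy, iz)]

def neighbours(rank, p):
    """Ranks of the six face neighbours (x-, x+, y-, y+, z-, z+),
    i.e. the exchange partners for boundary-zone communication."""
    ix, iy, iz = rank_to_coords(rank, p)
    return [coords_to_rank(ix - 1, iy, iz, p), coords_to_rank(ix + 1, iy, iz, p),
            coords_to_rank(ix, iy - 1, iz, p), coords_to_rank(ix, iy + 1, iz, p),
            coords_to_rank(ix, iy, iz - 1, p), coords_to_rank(ix, iy, iz + 1, p)]
```

In an actual MPI code, each rank would allocate only its local slab plus ghost zones and exchange boundary data with the six neighbours every time step.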

Hydrodynamics Simulations

The hydrodynamic effort has matured, and a new parallel MHD code (with P. Arras and S. Wong) and out-of-core cosmological hydrocode (with H. Trac) are now in production mode. Five papers were submitted on this topic over the year, covering the methodology, the Sunyaev-Zeldovich effect, and black hole accretion.
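The out-of-core idea behind the cosmological hydrocode can be illustrated with a minimal sketch: the full grid lives on disk, and only one slab at a time is streamed through memory. This is a hypothetical example using numpy's memmap, not the actual Pen and Trac implementation; the function name, file layout, and slab size are invented for illustration:

```python
# Illustrative out-of-core reduction: keep the full density field on disk
# and page only a few planes of it into memory at a time.
import numpy as np

def total_mass(path, n, slab=8):
    """Sum a density field of shape (n, n, n), stored on disk as raw
    float32, while touching at most `slab` z-planes of it at once."""
    rho = np.memmap(path, dtype=np.float32, mode="r", shape=(n, n, n))
    total = 0.0
    for z0 in range(0, n, slab):
        total += float(rho[z0:z0 + slab].sum())   # only this slab is paged in
    return total
```

The same pattern (loop over slabs, read, update, write back) lets a simulation volume far larger than physical memory be evolved at the cost of disk bandwidth.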

Infrastructure

Efforts on the infrastructure have been very successful, with the very efficient acquisition, installation, and operation of the McKenzie cluster. This economical 536-processor Beowulf cluster was first conceived in September 2002, with vendor negotiations through November. The machine was delivered and installed in December, and by January it was running production science code as the fastest computer in Canada. The low cost and fast installation were made possible through the efforts of a key innovative team of R. Humble, C. Loken, P. Martin, and J. Dubinski. This success is now being leveraged in a CFI proposal for a new-generation machine, which could bring Canada into the top three scientific computing nations in the world.

Numerical Studies of Braneworld Dynamics

Working with Johannes Martin, Andrei Frolov, Lev Kofman, and Marco Peloso of CITA, Felder developed a program for calculating the time evolution of a braneworld system. The program self-consistently solves the Einstein equations in the bulk and the junction conditions on the branes. Using this code, they were able to show that such systems generically tend to have multiple metastable states corresponding to different effective cosmological constants on the brane. A transition from such a metastable state to the ground state could account for the onset and end of inflation. This program is being made publicly available under the name BRANECODE.

Dusty Protoplanetary Disks

Robin Humble works with James Murray (Leicester, Swinburne) and Sarah Maddison (Swinburne) on dusty protoplanetary disks. The 3D non-linear Navier-Stokes equations with two fluid phases (dust and gas) are solved numerically in the protoplanetary disk. The dynamics of dust grains 10 microns and larger are followed as they migrate and grow in size. Grain growth means varying the magnitude of the drag forces between gas and dust, and can also mean varying the form of the drag equations themselves. Star, planet, and gas and dust self-gravity are also included in the parallel smoothed particle hydrodynamics (SPH) code. Current focus areas include resolving turbulent dust lofting and gravitational and turbulent drag instabilities. A bonus of the Lagrangian particle technique is that it is trivial to track the thermal and accretion histories of dust grains, which may one day be used to determine the chemical compositions of planets.
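The gas-dust drag coupling at the heart of such two-phase simulations can be illustrated with a toy sketch. Assuming Epstein-regime drag, a standard approximation for small grains (and not necessarily the exact drag law used in this code), the grain velocity relaxes toward the gas velocity on a stopping time proportional to grain size, which is why grain growth changes the strength of the coupling:

```python
# Toy sketch of Epstein-regime gas drag on a dust grain: the grain obeys
# dv_d/dt = -(v_d - v_g) / t_s, with stopping time t_s proportional to
# grain size s. All values and names are illustrative only.

def stopping_time(s, rho_grain, rho_gas, c_s):
    """Epstein stopping time t_s = rho_grain * s / (rho_gas * c_s):
    larger grains decouple from the gas on longer timescales."""
    return rho_grain * s / (rho_gas * c_s)

def evolve_grain(v_d, v_g, t_s, dt, n_steps):
    """Explicit Euler integration of the drag equation; the grain
    velocity v_d relaxes exponentially toward the gas velocity v_g."""
    for _ in range(n_steps):
        v_d += -(v_d - v_g) / t_s * dt
    return v_d
```

In the full SPH code the drag enters as a mutual force between dust and gas particles (with back-reaction on the gas), and a stiff, size-dependent t_s is one reason resolving the drag instabilities is numerically demanding.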



Questions and Comments <liebend@cita.utoronto.ca>