CITA Fellow James Beattie was awarded close to 100 million core-hours to explore the nature of turbulence and dynamos
By Lyuba Encheva, Canadian Institute for Theoretical Astrophysics
CITA and Princeton Postdoctoral Fellow James Beattie and collaborators were awarded close to 100 million core-hours on SuperMUC-NG, the German supercomputer at the Leibniz Supercomputing Center. Beattie will use the immense computing power of SuperMUC-NG to investigate magnetic field generation in merging binary neutron stars and to probe the nature of magnetized turbulence at extremely high resolutions. The compute-time allocation will be divided between two research projects led by Beattie, who is the Principal Investigator (PI) of both. To convey just how much compute time this is, Beattie offered an analogy:
“100 million core-hours is a lot of computation time… one core-hour equates to running a program on your laptop non-stop for an hour. To get an idea of how large a calculation utilizing 100 million core-hours is, I like to think about it in terms of years. For example, if I pressed “start” on our calculations 11,000 years ago on my laptop, they would just be finishing now. The earliest traces of domesticated cattle in human civilization date back to 10,500 years ago. So, this gives you some amazing perspective for how large our calculations can get when we distribute the processing across hundreds of thousands of compute cores in a supercomputer. Indeed, it’s only through utilizing a supercomputer like SuperMUC-NG that we can make such calculations.”
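The arithmetic behind the analogy is easy to check: dividing the allocation by the number of hours in a year gives the single-core runtime Beattie describes. The short Python sketch below reproduces the figure (the allocation is only “close to” 100 million core-hours, so the result is approximate).

```python
# Back-of-envelope check of the core-hour analogy.
# One core-hour is one processor core running for one hour, so on a
# single core the whole allocation would take ~100 million hours.

core_hours = 100e6          # approximate size of the allocation
hours_per_year = 24 * 365   # ignoring leap years

years_on_one_core = core_hours / hours_per_year
print(f"Single-core runtime: about {years_on_one_core:,.0f} years")
# -> about 11,416 years, i.e. roughly the 11,000 years in the analogy
```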
Of the roughly 100 million core-hours, 20 million are allocated to a project that examines the small- and large-scale Kelvin–Helmholtz dynamo at extreme resolutions to unravel the nature of magnetic field growth in binary neutron star mergers. In this project, Beattie and collaborators, including CITA faculty member Bart Ripperda, will explore how merging binary neutron stars build strong magnetic fields in just the few milliseconds before merger. The resulting magnetic storm may also set the conditions for forming and launching jets after the merger. The researchers will collect data from the radiative outbursts (detectable by telescopes) that, along with gravitational waves (ripples in spacetime), are produced by these extreme events. They will zoom in on 10-meter-sized regions between the two neutron stars, where the plasma undergoes a Kelvin–Helmholtz instability that stretches and amplifies the magnetic field exponentially fast. By simulating this instability locally, they will produce extremely high-resolution simulations, capturing details from 10 meters down to a fraction of a centimeter.
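To get a sense of why that range of scales demands a supercomputer, a rough grid-size estimate helps. The article only says “a fraction of a centimeter”, so the half-centimeter value below is a purely illustrative assumption, not a number from the project.

```python
# Illustrative estimate of the grid a uniform mesh would need to span a
# 10-meter region while resolving sub-centimeter structure.
# The 0.5 cm smallest scale is an assumed value chosen only for illustration.

box_size_m = 10.0          # size of the zoomed-in region (from the article)
smallest_scale_m = 0.005   # assumed "fraction of a centimeter": 0.5 cm

cells_per_side = box_size_m / smallest_scale_m
total_cells = cells_per_side ** 3
print(f"~{cells_per_side:,.0f} cells per side, ~{total_cells:,.0f} cells in total")
# -> ~2,000 cells per side and ~8,000,000,000 cells in 3D
```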
The second project, with an allocation of 74 million core-hours, will create the world’s largest compressible magnetohydrodynamic (MHD) turbulence simulation with a large-scale magnetic field. In this project, Beattie will extend some of his previous work (https://arxiv.org/abs/2405.16626) running the highest-resolution MHD turbulence simulations in the world, to garner the most resolved statistics of magnetized turbulence processes available to date. These statistics, Beattie explains, “render a wealth of information on a huge variety of topics such as turbulence-regulated star formation, magnetized turbulence in accretion disks, and the solar wind and plasma turbulence surrounding Earth in the magnetosheath.” “This is what excites me so much about this kind of astrophysical research on fundamental processes like turbulence — it’s extremely universal and essential to understanding astrophysical plasmas across so many scales in the Universe,” he adds.
One question lingers, however: is it going to take another 11,000 years before we see the results of these research projects?
“No,” Beattie responds with a laugh. “The computing power of SuperMUC-NG roughly equals that of 350,000 individual computers. The calculations we are talking about can be distributed across close to 150,000 compute cores and will be completed over a few months. After the simulations have finished running, we will spend the remaining part of the allocation, tens of millions of compute-hours, analysing the huge datasets, which is no easy task. The turbulence simulation alone will create close to 3 petabytes (three million gigabytes) of data! Every part of these projects is only possible with a synergy between high-performance computing, software development, physics and astrophysics.”
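Those figures are straightforward to check. The sketch below simply takes the numbers in the quote at face value (the full allocation spread across roughly 150,000 cores, and a petabyte counted as a million gigabytes) to show why “a few months” and millions of gigabytes are the right ballpark.

```python
# Rough consistency check of the figures quoted above.
# Assumes the whole ~100-million-core-hour allocation ran continuously on
# ~150,000 cores; real runs are queued and split up, so the quoted
# wall-clock time of "a few months" sits comfortably above this lower bound.

core_hours = 100e6
cores = 150_000

wall_clock_days = core_hours / cores / 24
print(f"Continuous run time: ~{wall_clock_days:.0f} days")   # ~28 days

petabytes = 3
gigabytes = petabytes * 1_000_000   # 1 PB = 1,000,000 GB (decimal convention)
print(f"Turbulence dataset: ~{gigabytes:,} GB")              # ~3,000,000 GB
```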