Molecular dynamics approach for predicting release temperatures of noble gases in pre-solar nanodiamonds [CL]

http://arxiv.org/abs/2005.00434


Pre-solar meteoritic nanodiamond grains carry an array of isotopically anomalous noble gas isotopes and provide information on the history of nucleosynthesis, galactic mixing and the formation of the solar system. In this paper, we develop a molecular dynamics approach to predict the thermal release distribution of implanted noble gases (He and Xe) in nanodiamonds. Our simulations show that low-energy ion implantation is a viable way to incorporate noble gases into nanodiamonds. Accordingly, we provide atomistic details of the unimodal temperature release distribution for helium and the bimodal behaviour for xenon. Intriguingly, our model shows that the thermal release process of noble gases is highly sensitive to the impact and annealing parameters as well as to crystallographic orientation. In addition, the model elegantly explains the unimodal and bimodal behaviour via the interstitial and substitutional types of defects formed. In particular, our approach explains the origin of the famous Xe-P3 and Xe-HL peaks, and shows that the P3 component of the meteoritic literature is released not only at low temperature but also at high temperature along with the HL component. This means that the isotopically anomalous HL component must be the sum of the high-temperature part of the P3 component and a pure HL component.
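
The unimodal/bimodal distinction can be illustrated with a reduced kinetic picture, not the authors' molecular-dynamics machinery: first-order Polanyi-Wigner desorption during a linear temperature ramp gives one release peak per binding site, so a single trap type (e.g. interstitial) yields a unimodal curve while two distinct trap types (interstitial plus substitutional) yield a bimodal one. A minimal sketch with illustrative parameters:

```python
import numpy as np

# First-order (Polanyi-Wigner) thermal release during a linear
# temperature ramp. Binding energies, attempt frequency and heating
# rate are illustrative assumptions, not values from the paper.
kB = 8.617e-5            # Boltzmann constant, eV/K
nu = 1e13                # attempt frequency, 1/s (assumed)
beta = 10.0              # heating rate, K/s (assumed)
T = np.arange(300.0, 2500.0, 0.5)
dt = (T[1] - T[0]) / beta

def release_curve(E_bind, N0=1.0):
    """Release rate for a single trap type with binding energy E_bind (eV)."""
    N, rate = N0, np.empty_like(T)
    for i, Ti in enumerate(T):
        k = nu * np.exp(-E_bind / (kB * Ti))
        rate[i] = k * N
        N *= np.exp(-k * dt)          # exact first-order decay over one step
    return rate

unimodal = release_curve(2.0)                       # one site type: one peak
bimodal = release_curve(2.0) + release_curve(3.5)   # two site types: two peaks
print("peak temperatures (K):", T[unimodal.argmax()], T[bimodal.argmax()])
```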

Read this paper on arXiv…

A. Aghajamali and N. Marks
Mon, 4 May 20
2/55

Comments: N/A

Carbon ionization at Gbar pressures: an ab initio perspective on astrophysical high-density plasmas [CL]

http://arxiv.org/abs/2004.13698


A realistic description of partially-ionized matter in extreme thermodynamic states is critical to model the interior and evolution of the multiplicity of high-density astrophysical objects. Current predictions of its essential property, the ionization degree, rely widely on analytical approximations that have been challenged recently by a series of experiments. Here, we propose a novel ab initio approach to calculate the ionization degree directly from the dynamic electrical conductivity using the Thomas-Reiche-Kuhn sum rule. This Density Functional Theory framework genuinely captures the condensed-matter nature and quantum effects typical of strongly-correlated plasmas. We demonstrate this new capability for carbon and hydrocarbon, which most notably serve as ablator materials in inertial confinement fusion experiments aiming at recreating stellar conditions. We find a significantly higher carbon ionization degree than predicted by commonly used models, while validating the qualitative behavior of the average-atom model Purgatorio. Additionally, we find the carbon ionization state to remain unchanged in the environment of fully-ionized hydrogen. Our results will not only serve as a benchmark for traditional models, but more importantly provide an experimentally accessible quantity in the form of the electrical conductivity.
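
A quick aside on the key relation: in a common textbook form of the f-sum (Thomas-Reiche-Kuhn) rule, the frequency-integrated real part of the dynamic conductivity is fixed by the free-electron density $n_e$,

$$\int_0^\infty \mathrm{Re}\,\sigma(\omega)\,\mathrm{d}\omega \;=\; \frac{\pi e^2 n_e}{2 m_e},$$

so integrating the ab initio conductivity up to a suitable cutoff yields $n_e$ and hence the mean ionization $\bar{Z}=n_e/n_i$. The paper's precise partitioning into bound and free contributions is not reproduced here.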

Read this paper on arXiv…

M. Bethkenhagen, B. Witte, M. Schörner, et al.
Wed, 29 Apr 20
34/75

Comments: accepted for publication in Physical Review Research

Dynamical friction with radiative feedback — II. High resolution study of the subsonic regime [EPA]

http://arxiv.org/abs/2004.13422


Recent work has suggested that the net gravitational force acting on a massive and luminous perturber travelling through a gaseous and opaque medium can have the same direction as the perturber's motion (an effect sometimes called negative dynamical friction). Analytic results were obtained using a linear analysis and were later confirmed by means of non-linear numerical simulations which did not resolve the flow within the Bondi sphere of the perturber, and which were hence effectively restricted to weakly perturbed regions of the flow (Paper I). Here we present high-resolution simulations, using either 3D Cartesian or 2D cylindrical meshes, that resolve the flow within the Bondi sphere. We perform a systematic study of the force as a function of the perturber's mass and luminosity in the subsonic regime. We find that perturbers with mass $M$ smaller than a few $M_c\sim \chi c_s/G$ are subjected to a thermal force with a magnitude in good agreement with linear theory ($\chi$ being the thermal diffusivity of the medium, $c_s$ the adiabatic sound speed and $G$ the gravitational constant), while for larger masses, the thermal forces are only a fraction of the linear estimate and decay as $M^{-1}$. Our analysis confirms the possibility of negative friction (hence a propulsion) on sufficiently luminous, low-mass embryos embedded in protoplanetary discs. Finally, we give an approximate expression for the total force at low Mach number, valid both for sub-critical ($M<M_c$) and super-critical ($M>M_c$) perturbers.
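
To get a feel for the critical mass $M_c \sim \chi c_s / G$, here is a back-of-the-envelope evaluation with illustrative disc parameters (the numbers are assumptions, not taken from the paper):

```python
# Back-of-the-envelope critical mass M_c ~ chi * c_s / G (cgs units).
G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
chi = 1.0e15          # thermal diffusivity, cm^2/s (assumed disc value)
c_s = 7.0e4           # adiabatic sound speed, cm/s (assumed)
M_earth = 5.972e27    # g

M_c = chi * c_s / G
print(f"M_c ~ {M_c:.2e} g ~ {M_c / M_earth:.2f} Earth masses")
```

The result, a fraction of an Earth mass, is consistent with the abstract's "low-mass embryos".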

Read this paper on arXiv…

D. Velasco-Romero and F. Masset
Wed, 29 Apr 20
45/75

Comments: N/A

Time-implicit schemes in fluid dynamics? — Their advantage in the regime of ultra-relativistic shock fronts [CL]

http://arxiv.org/abs/2004.12310


Relativistic jets are intrinsic phenomena of active galactic nuclei (AGN) and quasars. They have also been observed to emanate from systems containing compact objects, such as white dwarfs, neutron stars and black hole candidates. The corresponding Lorentz factors, $\Gamma$, were found to correlate with the compactness of the central objects. In the case of quasars and AGNs, plasmas with $\Gamma$-factors larger than $8$ were detected. However, numerically consistent modelling of propagating shock fronts with $\Gamma \geq 4$ is a difficult issue, as the non-linearities underlying the transport operators increase dramatically with $\Gamma$, giving rise to numerical stagnation of the time-advancement procedure, or even causing it to diverge completely. In this paper, we present a unified numerical solver for modelling the propagation of one-dimensional shock fronts with high Lorentz factors. The numerical scheme is based on the finite-volume formulation with adaptive mesh refinement (AMR) and domain decomposition for parallel computation. It unifies both time-explicit and time-implicit numerical schemes within the framework of the pre-conditioned defect-correction iteration solution procedure. We find that time-implicit solution procedures are remarkably superior to their time-explicit counterparts in the very high $\Gamma$-regime and are therefore most suitable for consistent modelling of relativistic outflows in AGNs and micro-quasars.
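
The pre-conditioned defect-correction iteration named here has a compact generic form; a minimal sketch on a toy linear system (the paper's unified explicit/implicit relativistic solver is of course far richer):

```python
import numpy as np

# Generic pre-conditioned defect-correction iteration for A x = b:
# repeatedly solve the cheaper preconditioner P for a correction
# driven by the current defect d = b - A x. A and P are toy choices.
rng = np.random.default_rng(0)
n = 50
A = 2.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
P = np.diag(np.diag(A))              # Jacobi-type preconditioner (assumed)

x = np.zeros(n)
for k in range(100):
    d = b - A @ x                    # defect
    if np.linalg.norm(d) < 1e-12:
        break
    x += np.linalg.solve(P, d)       # preconditioned correction
print(f"converged in {k} iterations, residual {np.linalg.norm(b - A @ x):.1e}")
```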

Read this paper on arXiv…

M. Fischer and A. Hujeirat
Tue, 28 Apr 20
80/81

Comments: N/A

Systematic construction of upwind constrained transport schemes for MHD [CL]

http://arxiv.org/abs/2004.10542


The constrained transport (CT) method reflects the state-of-the-art numerical technique for preserving the divergence-free condition of the magnetic field to machine accuracy in multi-dimensional MHD simulations performed with Godunov-type, or upwind, conservative codes. The evolution of the different magnetic field components, located at zone interfaces using a staggered representation, is achieved by calculating the electric field components at cell edges, in a way that has to be consistent with the Riemann solver used for the update of cell-centered fluid quantities at interfaces. Although several approaches have been undertaken, the purpose of this work is, on the one hand, to compare existing methods in terms of robustness and accuracy and, on the other, to extend the upwind constrained transport (UCT) method of Londrillo & Del Zanna (2004) and Del Zanna et al. (2007) for the systematic construction of new averaging schemes using the information available from 1D Riemann solvers. Our results are presented here in the context of second-order schemes for classical MHD, but they can be easily generalized to higher than second-order schemes, either based on finite volumes or finite differences, and to other physical systems retaining the same structure of the equations, such as that of relativistic or general relativistic MHD.
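
The defining CT property, that the staggered update leaves the discrete divergence unchanged to machine precision, can be checked in a few lines. A minimal 2-D sketch with an arbitrary corner-centred electromotive force (the upwind EMF averaging that is the subject of the paper is not reproduced):

```python
import numpy as np

# 2-D constrained transport on a staggered mesh: face-centred Bx, By
# updated from a corner-centred electromotive force Ez.
rng = np.random.default_rng(1)
nx = ny = 64
dx = dy = dt = 1.0
Ez = rng.standard_normal((nx + 1, ny + 1))   # Ez at cell corners
Bx = rng.standard_normal((nx + 1, ny))       # Bx on x-faces
By = rng.standard_normal((nx, ny + 1))       # By on y-faces

def div(Bx, By):
    return (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy

div0 = div(Bx, By)
# Discrete Faraday law: dBx/dt = -dEz/dy, dBy/dt = +dEz/dx
Bx = Bx - dt * (Ez[:, 1:] - Ez[:, :-1]) / dy
By = By + dt * (Ez[1:, :] - Ez[:-1, :]) / dx
print("max |div B| change:", np.abs(div(Bx, By) - div0).max())  # round-off level
```

The corner contributions cancel exactly in the discrete divergence, whatever the EMF values, which is why any consistent averaging scheme preserves div B.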

Read this paper on arXiv…

A. Mignone and L. Del Zanna
Thu, 23 Apr 20
6/45

Comments: 28 pages, 16 figures

Towards Universal Cosmological Emulators with Generative Adversarial Networks [CEA]

http://arxiv.org/abs/2004.10223


Generative adversarial networks (GANs) have been recently applied as a novel emulation technique for large scale structure simulations. Recent results show that GANs can be used as a fast, efficient and computationally cheap emulator for producing novel weak lensing convergence maps as well as cosmic web data in 2-D and 3-D. However, like any algorithm, the GAN approach comes with a set of limitations, such as an unstable training procedure and the inherent randomness of the produced outputs. In this work we employ a number of techniques commonly used in the machine learning literature to address the mentioned limitations. In particular, we train a GAN to produce both weak lensing convergence maps and dark matter overdensity field data for multiple redshifts, cosmological parameters and modified gravity models. In addition, we train a GAN using the newest Illustris data to emulate dark matter, gas and internal energy distribution data simultaneously. Finally, we apply the technique of latent space interpolation to control which outputs the algorithm produces. Our results indicate a 1-20% difference between the power spectra of the GAN-produced and the training data samples depending on the dataset used and whether Gaussian smoothing was applied. Finally, recent research on generative models suggests that such algorithms can be treated as mappings from a lower-dimensional input (latent) space to a higher dimensional (data) manifold. We explore such a theoretical description as a tool for better understanding the latent space interpolation procedure.

Read this paper on arXiv…

A. Tamosiunas, H. Winther, K. Koyama, et al.
Thu, 23 Apr 20
25/45

Comments: 30 pages, 21 figures, 1 table

MPI-AMRVAC: a parallel, grid-adaptive PDE toolkit [IMA]

http://arxiv.org/abs/2004.03275


We report on the latest additions to our open-source, block-grid adaptive framework MPI-AMRVAC, a general toolkit geared especially toward hyperbolic/parabolic partial differential equations (PDEs). Applications have traditionally focused on shock-dominated, magnetized plasma dynamics described by either Newtonian or special relativistic (magneto)hydrodynamics, but its versatile design easily extends to different PDE systems. Here, we demonstrate applications covering any-dimensional scalar to system PDEs, with e.g. Korteweg-de Vries solutions generalizing early findings on soliton behaviour, shallow-water applications in round or square pools, hydrodynamic convergence tests, as well as challenging computational fluid and plasma dynamics applications. The recent addition of a parallel multigrid solver opens up new avenues where elliptic constraints or stiff source terms also play a central role. This is illustrated here by solving several multi-dimensional reaction-diffusion-type equations. We document the minimal requirements for adding a new physics module governed by any nonlinear PDE system, such that it can directly benefit from the code flexibility in combining various temporal and spatial discretisation schemes. Distributed through GitHub, MPI-AMRVAC can be used to perform 1D, 1.5D, 2D, 2.5D or 3D simulations in Cartesian, cylindrical or spherical coordinate systems, using parallel domain decomposition, or exploiting fully dynamic block quadtree-octree grids.

Read this paper on arXiv…

R. Keppens, J. Teunissen, C. Xia, et al.
Wed, 8 Apr 20
53/72

Comments: 31 pages, 13 figures, accepted for publication by Computers and Mathematics with Applications (CAMWA-10131)

Understanding GPU-Based Lossy Compression for Extreme-Scale Cosmological Simulations [CL]

http://arxiv.org/abs/2004.00224


To help understand our universe better, researchers and scientists currently run extreme-scale cosmology simulations on leadership supercomputers. However, such simulations can generate large amounts of scientific data, which often incur expensive data movement and storage costs. Lossy compression techniques have become attractive because they significantly reduce data size and can maintain high data fidelity for post-analysis. In this paper, we propose to use GPU-based lossy compression for extreme-scale cosmological simulations. Our contributions are threefold: (1) we implement multiple GPU-based lossy compressors into our open-source compression benchmark and analysis framework named Foresight; (2) we use Foresight to comprehensively evaluate the practicality of using GPU-based lossy compression on two real-world extreme-scale cosmology simulations, namely HACC and Nyx, based on a series of assessment metrics; and (3) we develop a general optimization guideline on how to determine the best-fit configurations for different lossy compressors and cosmological simulations. Experiments show that GPU-based lossy compression can provide the necessary accuracy on post-analysis for cosmological simulations and high compression ratios of 5-15x on the tested datasets, as well as much higher compression and decompression throughput than CPU-based compressors.
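
The essential error-bound mechanism behind SZ-style lossy compressors, of which the GPU codes evaluated here are variants, is uniform quantization to a user-set absolute error bound followed by entropy/dictionary coding (real compressors add prediction steps). A CPU-side sketch with zlib standing in for the actual coding stages:

```python
import numpy as np
import zlib

def compress(data, abs_err):
    """Quantize to a guaranteed absolute error bound, then deflate."""
    codes = np.round(data / (2 * abs_err)).astype(np.int32)
    return zlib.compress(codes.tobytes(), level=9)

def decompress(blob, abs_err, shape):
    codes = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
    return codes.reshape(shape) * (2 * abs_err)

field = np.random.default_rng(2).standard_normal((128, 128, 128)).astype(np.float32)
blob = compress(field, abs_err=1e-2)
recon = decompress(blob, 1e-2, field.shape)
print("ratio:", field.nbytes / len(blob), "max err:", np.abs(field - recon).max())
```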

Read this paper on arXiv…

S. Jin, P. Grosset, C. Biwer, et al.
Thu, 2 Apr 20
34/56

Comments: 11 pages, 10 figures, accepted by IEEE IPDPS ’20

Technologies for supporting high-order geodesic mesh frameworks for computational astrophysics and space sciences [CL]

http://arxiv.org/abs/2003.13862


Many important problems in astrophysics, space physics, and geophysics involve flows of (possibly ionized) gases in the vicinity of a spherical object, such as a star or planet. The geometry of such a system naturally favors numerical schemes based on a spherical mesh. Despite its orthogonality property, the polar (latitude-longitude) mesh is ill suited for computation because of the singularity on the polar axis, leading to a highly non-uniform distribution of zone sizes. The consequences are (a) loss of accuracy due to large variations in zone aspect ratios, and (b) poor computational efficiency from a severe limitations on the time stepping. Geodesic meshes, based on a central projection using a Platonic solid as a template, solve the anisotropy problem, but increase the complexity of the resulting computer code. We describe a new finite volume implementation of Euler and MHD systems of equations on a triangular geodesic mesh (TGM) that is accurate up to fourth order in space and time and conserves the divergence of magnetic field to machine precision. The paper discusses in detail the generation of a TGM, the domain decomposition techniques, three-dimensional conservative reconstruction, and time stepping.

Read this paper on arXiv…

V. Florinski, D. Balsara, S. Garain, et al.
Wed, 1 Apr 20
43/83

Comments: 41 pages, 18 figures

Self-similar orbit-averaged Fokker-Planck equation for isotropic spherical dense clusters (i) accurate pre-collapse solution [CL]

http://arxiv.org/abs/2003.12196


This is the first paper in a series of works on the self-similar orbit-averaged Fokker-Planck (OAFP) equation and presents an accurate pre-collapse solution. At the late stage of the relaxation evolution of dense star clusters, standard stellar dynamics predicts that the clusters may evolve in a self-similar fashion, forming a collapsing core. However, the corresponding mathematical model, the self-similar OAFP equation for the distribution function of stars in isotropic star clusters, has never been solved on the whole energy domain $(-1< E < 0)$. The existing works, based on various finite-difference methods, provide solutions only on the truncated domain $-1< E<-0.2$. To broaden the truncated domain, the present work resorts to a highly accurate and efficient Gauss-Chebyshev pseudo-spectral method. We provide a Chebyshev spectral solution, accurate to four significant figures, on the whole domain. The solution can also be reduced to a semi-analytical form of polynomial degree only eighteen while holding three significant figures. We also provide the new eigenvalues $c_{1}=9.0925\times10^{-4}$, $c_{2}=1.1118\times10^{-4}$, $c_{3}=7.1975\times10^{-2}$ and $c_{4}=3.303\times10^{-2}$, corresponding to the core collapse rate $\xi=3.64\times10^{-3}$, scaled escape energy $\chi_\text{esc}=13.881$ and power-law exponent $\alpha=2.2305$. Since the solution on the whole domain is unstable against changes in the degree of the Chebyshev polynomials, we show spectral solutions on truncated domains ($-1< E<E_\text{max}$, where $-0.35<E_\text{max}<-0.03$) to explain how to handle the instability. By reformulating the OAFP equation in several ways, we improve the accuracy of the spectral solution and reproduce an existing self-similar solution; we consider that existing solutions have at most one significant figure.
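
The advertised spectral accuracy is easy to demonstrate in miniature: Chebyshev interpolation at Gauss-Chebyshev nodes converges geometrically for smooth functions, which is what lets a degree-eighteen expansion hold several significant figures. A toy function stands in for the OAFP solution:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.exp(x) / (1 + 4 * x**2)   # smooth stand-in on [-1, 1]

for n in (6, 12, 18, 24):
    # Gauss-Chebyshev nodes: roots of T_{n+1}
    x = np.cos(np.pi * (np.arange(n + 1) + 0.5) / (n + 1))
    coeffs = C.chebfit(x, f(x), n)          # degree-n Chebyshev expansion
    xx = np.linspace(-1, 1, 2001)
    err = np.abs(C.chebval(xx, coeffs) - f(xx)).max()
    print(f"degree {n:2d}: max error {err:.1e}")   # drops geometrically
```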

Read this paper on arXiv…

Y. Ito
Mon, 30 Mar 20
23/44

Comments: N/A

Ion acceleration in non-relativistic quasi-parallel shocks using fully kinetic simulations [HEAP]

http://arxiv.org/abs/2003.07293


The formation of collisionless shock fronts is a ubiquitous phenomenon in space plasma environments. In the solar wind, shocks may accompany coronal mass ejections, while even more violent events, such as supernovae, produce shock fronts traveling at relativistic speeds. While the basic concepts of shock formation and particle acceleration in their vicinity are known, many details at the micro-physical level are still under discussion. In recent years, the hybrid kinetic simulation approach has made it possible to study the dynamics and acceleration of protons and heavier ions in great detail. However, Particle-in-Cell codes allow the process to be studied including also electron dynamics and the radiation pressure. Additionally, a second numerical method allows results to be cross-checked. We therefore investigate shock formation and particle acceleration with a fully kinetic particle-in-cell code. Besides electrons and protons, we also include helium and carbon ions in our simulations of a quasi-parallel shock. We are able to reproduce characteristic features of the energy spectra of the particles, such as the temperature ratios of the different ion species in the downstream, which scale with the ratio of particle mass to charge. We also find that approximately 12-15% of the energy of the unperturbed upstream is transferred to the accelerated particles escaping the shock.

Read this paper on arXiv…

C. Schreiner, P. Kilian, F. Spanier, et al.
Tue, 17 Mar 20
4/63

Comments: 17 pages, 9 figures

Thermodynamic anomalies and three distinct liquid-liquid transitions in warm dense liquid hydrogen [CL]

http://arxiv.org/abs/2003.06629


The properties of hydrogen at high pressure have wide implications in astrophysics and high-pressure physics. Its phase change in the liquid is variously described as metallization, H2 dissociation, a density discontinuity or a plasma phase transition. It has been tacitly assumed that these phenomena coincide at a first-order liquid-liquid transition (LLT). In this work, the relevant pressure-temperature conditions are thoroughly explored with first-principles molecular dynamics. We show there is a strong dependence on the exchange-correlation functional and significant finite-size effects. We use hysteresis in a number of measurable quantities to demonstrate a first-order transition up to a critical point, above which molecular and atomic liquids are indistinguishable. At higher temperature beyond the critical point, H2 dissociation becomes a smooth cross-over in the supercritical region that can be modelled by a pseudo-transition, where the H2-2H transformation is localized and does not cause a density discontinuity at metallization. Thermodynamic anomalies and counter-intuitive transport behavior of protons are also discovered even far beyond the critical point, making this dissociative transition highly relevant to the interior dynamics of Jovian planets. Below the critical point, simulation also reveals a dynamic H2-2H chemical equilibrium with rapid interconversion, showing that H2 and H are miscible. The predicted critical temperature lies well below the ionization temperature. Our calculations unequivocally demonstrate that there are three distinct regimes in the liquid-liquid transition of warm dense hydrogen.

Read this paper on arXiv…

H. Geng, Q. Wu, M. Marqués, et al.
Tue, 17 Mar 20
10/63

Comments: 34 pages, 7 figures, with Supplementary Material

CUBE — Towards an Optimal Scaling of Cosmological N-body Simulations [CL]

http://arxiv.org/abs/2003.03931


N-body simulations are essential tools in physical cosmology for understanding the large-scale structure (LSS) formation of the Universe. Large-scale simulations with high resolution are important for exploring the substructure of the universe and for determining fundamental physical parameters like the neutrino mass. However, traditional particle-mesh (PM) based algorithms use considerable amounts of memory, which limits the scalability of simulations. We therefore designed CUBE, a two-level PM algorithm aimed at optimal memory performance. Using a fixed-point compression technique, CUBE reduces the memory consumption per N-body particle to as little as 6 bytes, an order of magnitude lower than traditional PM-based algorithms. We scaled CUBE to 512 nodes (20,480 cores) on an Intel Cascade Lake based supercomputer with $\simeq$95% weak-scaling efficiency. This scaling test was performed in "Cosmo-$\pi$", a cosmological LSS simulation using $\simeq$4.4 trillion particles, tracing the evolution of the universe over $\simeq$13.7 billion years. To the best of our knowledge, Cosmo-$\pi$ is the largest completed cosmological N-body simulation. We believe CUBE has great potential to scale on exascale supercomputers for larger simulations.
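
The 6-bytes-per-particle figure is the signature of fixed-point position storage: each coordinate is kept as a 2-byte integer offset within the coarse PM cell that owns the particle. A sketch of the encode/decode round trip (CUBE's actual memory layout is not reproduced; mesh and box values are assumed):

```python
import numpy as np

# Fixed-point storage of particle positions: each coordinate becomes a
# 2-byte integer offset within its coarse PM cell, i.e. 3 x 2 = 6 bytes
# per particle for positions. Toy mesh and box values are assumed.
ncell, box = 64, 1.0
cell = box / ncell
rng = np.random.default_rng(3)
pos = rng.random((1_000_000, 3)) * box

idx = np.minimum((pos / cell).astype(np.int64), ncell - 1)    # owning cell
frac = pos / cell - idx                                       # offset in [0, 1)
code = np.minimum(frac * 65536, 65535).astype(np.uint16)      # 2 bytes/coord

decoded = (idx + (code.astype(np.float64) + 0.5) / 65536) * cell
print("bytes per particle (positions):", 3 * code.itemsize)
print("max position error:", np.abs(decoded - pos).max())     # ~ cell / 2**17
```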

Read this paper on arXiv…

S. Cheng, H. Yu, D. Inman, et al.
Tue, 10 Mar 20
36/63

Comments: 6 pages, 5 figures. Accepted for SCALE 2020, co-located as part of the proceedings of CCGRID 2020

Investigating the use of field solvers for simulating classical systems [CL]

http://arxiv.org/abs/2001.05791


We explore the use of field solvers as approximations of classical Vlasov-Poisson systems. This correspondence is investigated in both electrostatic and gravitational contexts. We demonstrate the ability of field solvers to be excellent approximations of problems with cold initial conditions into the nonlinear regime. We also investigate extensions of the Schrödinger-Poisson system that employ multiple stacked cold streams, and the von Neumann-Poisson equation, as methods that can successfully reproduce the classical evolution of warm initial conditions. We then discuss how appropriate simulation parameters need to be chosen to avoid interference terms, aliasing, and wave behavior in the field solver solutions. We present a series of criteria clarifying how parameters need to be chosen in order to effectively approximate classical solutions.
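
Field solvers of this kind are commonly advanced with a split-step (kick-drift) Fourier scheme. A minimal 1-D Schrödinger sketch with a fixed external potential; a real Schrödinger-Poisson solver would recompute the potential from $|\psi|^2$ each step, and the paper's parameter criteria are not encoded here:

```python
import numpy as np

# 1-D split-step Fourier integrator for
# i hbar dpsi/dt = (-hbar^2/(2m) d^2/dx^2 + V) psi, in code units.
hbar = m = 1.0
n, L, dt = 512, 2 * np.pi, 1e-3
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
V = np.cos(x)   # fixed toy potential (self-consistent Poisson update omitted)

psi = np.exp(-((x - L / 2) ** 2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * L / n)

for _ in range(1000):
    psi *= np.exp(-0.5j * V * dt / hbar)                     # half potential kick
    psi = np.fft.ifft(np.exp(-0.5j * hbar * k**2 * dt / m)
                      * np.fft.fft(psi))                     # kinetic drift
    psi *= np.exp(-0.5j * V * dt / hbar)                     # half potential kick
print("norm conserved:", np.sum(np.abs(psi) ** 2) * L / n)
```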

Read this paper on arXiv…

A. Eberhardt, A. Banerjee, M. Kopp, et al.
Wed, 4 Mar 20
17/51

Comments: 28 pages, 14 figures. Matches published version Phys. Rev. D

General Relativistic Hydrodynamics on a Moving-mesh I: Static Spacetimes [CL]

http://arxiv.org/abs/2002.09613


We present the first-ever moving-mesh general relativistic hydrodynamics solver for static spacetimes, as implemented in the code MANGA. Our implementation builds on the architectures of MANGA and the numerical relativity Python package NRPy+. We review the general algorithm to solve these equations and, in particular, detail the time stepping; the Riemann solution across moving faces; the conversion between primitive and conservative variables; the validation and correction of hydrodynamic variables; and the mapping of the metric to a Voronoi moving-mesh grid.
We present test results for the numerical integration of an unmagnetized Tolman-Oppenheimer-Volkoff star for 24 dynamical times. We demonstrate that at a resolution of $10^6$ mesh-generating points, the star is stable and its central density drifts downward by 2% over this timescale. At lower resolution the central density drift increases in a manner consistent with the adopted second-order spatial reconstruction scheme. These results agree well with the exact solutions, and we find the error behavior to be similar to that of Eulerian codes with second-order spatial reconstruction. We also demonstrate that the new code recovers the fundamental mode frequency for the same TOV star with its initial pressure depleted by 10%.

Read this paper on arXiv…

P. Chang and Z. Etienne
Tue, 25 Feb 20
15/76

Comments: 10 pages, 4 figures, submitted to MNRAS

Observational signatures of disk and jet misalignment in images of accreting black holes [GA]

http://arxiv.org/abs/2002.08386


Black hole accretion is one of nature's most efficient energy extraction processes. When gas falls in, a significant fraction of its gravitational binding energy is either converted into radiation or flows outwards in the form of black hole-driven jets and disk-driven winds. Recently, the Event Horizon Telescope (EHT), an Earth-size sub-millimetre radio interferometer, captured the first images of M87's black hole (or M87*). These images were analysed and interpreted using general-relativistic magnetohydrodynamics (GRMHD) models of accretion disks with rotation axes aligned with the black hole spin axis. However, since infalling gas is often insensitive to the black hole spin direction, misalignment between accretion disk and black hole spin may be a common occurrence in nature. In this work, we use the general-relativistic radiative transfer (GRRT) code BHOSS to calculate the first synthetic radio images of (highly) tilted disk/jet models generated by our GPU-accelerated GRMHD code HAMR. While the tilt does not have a noticeable effect on the system dynamics beyond a few tens of gravitational radii from the black hole, the warping of the disk and jet can imprint observable signatures in EHT images on smaller scales. Comparing the images from our GRMHD models to the 43 GHz and 230 GHz EHT images of M87, we find that M87 may feature a tilted disk/jet system. Further, tilted disks and jets display significant time variability in the 230 GHz flux that can be further tested by longer-duration EHT observations of M87.

Read this paper on arXiv…

K. Chatterjee, Z. Younsi, M. Liska, et al.
Fri, 21 Feb 20
41/67

Comments: 17 pages, 19 figures, submitted to MNRAS, for YouTube playlist see this https URL

Honing and proofing Astrophysical codes on the road to Exascale. Experiences from code modernization on many-core systems [CL]

http://arxiv.org/abs/2002.08161


The complexity of modern and upcoming computing architectures poses severe challenges for code developers and application specialists, and forces them to expose the highest possible degree of parallelism in order to make the best use of the available hardware. The second-generation Intel Xeon Phi (code-named Knights Landing, henceforth KNL) is the latest many-core system, which implements several interesting hardware features, such as a large number of cores per node (up to 72), 512-bit-wide vector registers and high-bandwidth memory. The unique features of KNL make this platform a powerful testbed for modern HPC applications. The performance of codes on KNL is therefore a useful proxy for their readiness for future architectures. In this work we describe the lessons learnt during the optimisation of the widely used computational astrophysics codes P-Gadget-3, Flash and Echo. Moreover, we present results for the visualisation and analysis tools VisIt and yt. These examples show that modern architectures benefit from code optimisation at different levels, even more than traditional multi-core systems. However, the level of modernisation of typical community codes still needs improvement for them to fully utilise the resources of novel architectures.

Read this paper on arXiv…

S. Cielo, L. Iapichino, F. Baruffa, et al.
Thu, 20 Feb 20
36/61

Comments: 16 pages, 10 figures, 4 tables. To be published in Future Generation of Computer Systems (FGCS), Special Issue on “On The Road to Exascale II: Advances in High Performance Computing and Simulations”

3D Hydrodynamical Simulations of a Brown Dwarf Accretion by a Main-Sequence Star and its Impact on the Surface Li Abundance [SSA]

http://arxiv.org/abs/2002.05926


Li-depleted (or Li-enhanced) stars on the main sequence (MS) and/or the RGB pose a puzzling mystery. Presently, there is still no clear answer to the mechanism(s) that enables such Li depletion (enhancement). One possible explanation comes from the, still controversial, observational evidence of Li underabundances in MS stars hosting planets, and of a positive correlation between the Li abundance and rotational velocity in some RGB stars, which suggests a stellar collision with a planet-like object as a possible solution. In this study we explore this scenario, performing for the first time 3D hydrodynamical simulations of the collision of a 0.019 $M_\odot$ brown dwarf with a MS star under different initial conditions. This enables us to gather information about the impact on the physical structure and the final Li content of the host star.

Read this paper on arXiv…

C. Abia, R. Cabezón and I. Domínguez
Mon, 17 Feb 20
50/53

Comments: 4 pages, 1 figure, presented in Lithium in the Universe: to Be or not to Be?, Frascati, 2019. To be published in MemSAI

Moving and Reactive Boundary Conditions in Moving-Mesh Hydrodynamics [CL]

http://arxiv.org/abs/2002.04287


We outline the methodology for implementing moving boundary conditions in the moving-mesh code MANGA. The motion of our boundaries is reactive to hydrodynamic and gravitational forces. We discuss the hydrodynamics of a moving boundary as well as the modifications to our hydrodynamic and gravity solvers. Appropriate initial conditions to accurately produce a boundary of arbitrary shape are also discussed. Our code is applied to several test cases, including a Sod shock tube, a Sedov-Taylor blast wave and a supersonic wind on a sphere. We show the convergence of conserved quantities in our simulations. We demonstrate the use of moving boundaries in astrophysical settings by simulating a common envelope phase in a binary system, in which the companion object is modeled by a spherical boundary. We conclude that our methodology is suitable to simulate astrophysical systems using moving and reactive boundary conditions.

Read this paper on arXiv…

L. Prust
Wed, 12 Feb 20
36/58

Comments: 12 pages, 8 figures, submitted to MNRAS

Provably Physical-Constraint-Preserving Discontinuous Galerkin Methods for Multidimensional Relativistic MHD Equations [CL]

http://arxiv.org/abs/2002.03371


We propose and analyze a class of robust, uniformly high-order accurate discontinuous Galerkin (DG) schemes for multidimensional relativistic magnetohydrodynamics (RMHD) on general meshes. A distinct feature of the schemes is their physical-constraint-preserving (PCP) property, i.e., they are proven to preserve the subluminal constraint on the fluid velocity and the positivity of density, pressure, and specific internal energy. Developing PCP high-order schemes for RMHD is highly desirable but remains a challenging task, especially in the multidimensional cases, due to the inherent strong nonlinearity in the constraints and the effect of the magnetic divergence-free condition. Inspired by some crucial observations at the PDE level, we construct the provably PCP schemes by using the locally divergence-free DG schemes of the recently proposed symmetrizable RMHD equations as the base schemes, a limiting technique to enforce the PCP property of the DG solutions, and the strong-stability-preserving methods for time discretization. We rigorously prove the PCP property by using a novel “quasi-linearization” approach to handle the highly nonlinear physical constraints, technical splitting to offset the influence of divergence error, and sophisticated estimates to analyze the beneficial effect of the additional source term in the symmetrizable RMHD system. Several two-dimensional numerical examples are provided to confirm the PCP property and to demonstrate the accuracy, effectiveness and robustness of the proposed PCP schemes.
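
In the notation common to this literature, the PCP property says that every numerical state stays in the physically admissible set, shown here schematically in primitive variables (the paper works with conservative variables, where verifying membership is the hard part):

$$\mathcal{G} = \left\{\, (\rho, \mathbf{v}, p) \;:\; \rho > 0,\ p > 0,\ e > 0,\ |\mathbf{v}| < c \,\right\},$$

with $e$ the specific internal energy and $c$ the speed of light.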

Read this paper on arXiv…

K. Wu and C. Shu
Tue, 11 Feb 20
54/81

Comments: N/A

Gargantuan chaotic gravitational three-body systems and their irreversibility to the Planck length [IMA]

http://arxiv.org/abs/2002.04029


Chaos is present in most stellar dynamical systems and manifests itself through the exponential growth of small perturbations. Exponential divergence drives time irreversibility and increases the entropy in the system. A numerical consequence is that integrations of the N-body problem unavoidably magnify truncation and rounding errors to macroscopic scales. Hitherto, a quantitative relation between chaos in stellar dynamical systems and the level of irreversibility remained undetermined. In this work we study chaotic three-body systems initially in free fall using the accurate and precise N-body code Brutus, which goes beyond standard double-precision arithmetic. We demonstrate that the fraction of irreversible solutions decreases as a power law with numerical accuracy. This can be derived from the distribution of amplification factors of small initial perturbations. Applying this result to systems consisting of three massive black holes with zero total angular momentum, we conclude that up to five percent of such triples would require an accuracy smaller than the Planck length in order to produce a time-reversible solution, thus rendering them fundamentally unpredictable.
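
The reversibility test at the heart of the study is conceptually simple: integrate forward, flip the velocities, integrate back, and compare with the initial condition; chaos amplifies rounding error until the return trip fails at fixed precision. A minimal double-precision sketch with a leapfrog integrator (Brutus instead uses arbitrary-precision arithmetic):

```python
import numpy as np

def accel(pos):
    """Pairwise Newtonian accelerations with G = m = 1 (code units)."""
    a = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i != j:
                r = pos[j] - pos[i]
                a[i] += r / np.linalg.norm(r) ** 3
    return a

def leapfrog(pos, vel, dt, steps):
    for _ in range(steps):
        vel += 0.5 * dt * accel(pos)
        pos += dt * vel
        vel += 0.5 * dt * accel(pos)
    return pos, vel

rng = np.random.default_rng(4)
pos0 = rng.random((3, 3))                     # three bodies, cold (free-fall) start
pos, vel = leapfrog(pos0.copy(), np.zeros((3, 3)), 2e-4, 2000)
pos, vel = leapfrog(pos, -vel, 2e-4, 2000)    # flip velocities, integrate back
print("return error after reversal:", np.abs(pos - pos0).max())
```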

Read this paper on arXiv…

T. Boekholt, S. Portegies Zwart and M. Valtonen
Tue, 11 Feb 20
60/81

Comments: Accepted for publication in MNRAS. 7 pages, 4 figures

Fast inference of Boosted Decision Trees in FPGAs for particle physics [CL]

http://arxiv.org/abs/2002.02534


We describe the implementation of Boosted Decision Trees in the hls4ml library, which allows the conversion of a trained model into FPGA firmware through an automatic high-level-synthesis conversion. Thanks to its fully on-chip implementation, hls4ml allows inference of Boosted Decision Tree models to be performed with extremely low latency. A benchmark model achieving near-state-of-the-art classification performance is implemented on an FPGA with 60 ns inference latency, using 8% of the Look-Up Tables of the target device. This solution is compatible with the needs of fast real-time processing, such as the L1 trigger system of a typical collider experiment.
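
The low latency comes from laying each tree out flat, as arrays of feature indices and thresholds walked for a fixed trip count, so an HLS tool can fully unroll and pipeline the comparisons and sum all trees in parallel. A Python sketch of that layout (hls4ml's generated code and fixed-point types are not reproduced; the tree values are illustrative):

```python
import numpy as np

# One tree = arrays of (feature index, threshold, children, leaf value);
# inference is a fixed-depth walk, friendly to full unrolling in HLS.
class FlatTree:
    def __init__(self, feature, threshold, left, right, value):
        self.f, self.t = np.array(feature), np.array(threshold)
        self.l, self.r = np.array(left), np.array(right)
        self.v = np.array(value)

    def predict(self, x, depth=3):
        node = 0
        for _ in range(depth):               # fixed trip count -> unrollable
            go_left = x[self.f[node]] <= self.t[node]
            node = self.l[node] if go_left else self.r[node]
        return self.v[node]

# Tiny hand-made tree; leaves (nodes 3-6) point to themselves.
tree = FlatTree(feature=[0, 1, 0, 0, 0, 0, 0],
                threshold=[0.5, 0.2, 0.8, 0, 0, 0, 0],
                left=[1, 3, 5, 3, 4, 5, 6], right=[2, 4, 6, 3, 4, 5, 6],
                value=[0, 0, 0, -1.0, 0.3, 0.7, 1.2])
score = sum(t.predict(np.array([0.4, 0.1])) for t in [tree, tree])
print("BDT score:", score)
```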

Read this paper on arXiv…

S. Summers, G. Guglielmo, J. Duarte, et al.
Mon, 10 Feb 20
21/59

Comments: N/A

Accelerating linear system solvers for time domain component separation of cosmic microwave background data [CEA]

http://arxiv.org/abs/2002.02833


Component separation is one of the key stages of any modern cosmic microwave background (CMB) data analysis pipeline. It is an inherently non-linear procedure and typically involves a series of sequential solutions of linear systems with similar, albeit not identical, system matrices, derived for different data models of the same data set. Sequences of this kind arise, for instance, in the maximization of the data likelihood with respect to foreground parameters or in sampling their posterior distribution. However, they are also common in many other contexts. In this work we consider solving the component separation problem directly in the measurement (time) domain, which can have a number of important advantages over the more standard pixel-based methods, in particular if non-negligible time-domain noise correlations are present, as is commonly the case. The time-domain approach implies, however, significant computational effort due to the need to manipulate the full volume of the time-domain data set. To address this challenge, we propose and study efficient solvers adapted to solving time-domain-based component separation systems and their sequences, which are capable of capitalizing on information derived from the previous solutions. This is achieved either by adapting the initial guess of the subsequent system or through so-called subspace recycling, which allows one to construct progressively more efficient two-level preconditioners. We report an overall speed-up of a factor of nearly 7, or 5, over solving the systems independently, in the worked examples inspired respectively by the likelihood maximization and likelihood sampling procedures we consider in this work.
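
The simpler of the two acceleration strategies, reusing the previous solution as the initial guess for the next, slightly perturbed system, amounts to one extra argument with an off-the-shelf conjugate-gradient solver. A toy sketch (the two-level subspace-recycling preconditioner of the paper is substantially more involved):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 2000
A1 = diags([-1, 2.1, -1], [-1, 0, 1], shape=(n, n), format="csr")
A2 = A1 + 1e-3 * diags(np.random.default_rng(5).random(n))  # "next" system
b = np.ones(n)

iters = {"cold": 0, "warm": 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

x1, _ = cg(A1, b, callback=counter("cold"))
_, _ = cg(A2, b, callback=counter("warm"), x0=x1)   # warm start from x1
print(iters)   # the warm-started solve typically needs fewer iterations
```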

Read this paper on arXiv…

J. Papez, L. Grigori and R. Stompor
Mon, 10 Feb 20
39/59

Comments: N/A

Scattering, absorption, and thermal emission by large cometary dust particles: Synoptic numerical solution [EPA]

http://arxiv.org/abs/2002.01250


Context: Remote light scattering and thermal infrared observations provide clues about the physical properties of cometary and interplanetary dust particles. Identifying these properties will lead to a better understanding of the formation and evolution of the Solar System. Aims: We present a numerical solution for the radiative and conductive heat transport in a random particulate medium enclosed by an arbitrarily shaped surface. The method will be applied to study thermal properties of cometary dust particles. Methods: The recently introduced incoherent Monte Carlo radiative transfer method developed for scattering, absorption, and propagation of electromagnetic waves in dense discrete random media is extended for radiative heat transfer and thermal emission. The solution is coupled with the conductive Fourier transport equation that is solved with the finite-element method. Results: The proposed method allows the synoptic analysis of light scattering and thermal emission by large cometary dust particles consisting of submicrometer-sized grains. In particular, we show that these particles can sustain significant temperature gradients resulting in the superheating factor phase function observed for the coma of comet 67P/Churyumov-Gerasimenko.

Read this paper on arXiv…

J. Markkanen and J. Agarwal
Wed, 5 Feb 20
1/67

Comments: 7 pages, 8 figures

Mean shift cluster recognition method implementation in the nested sampling algorithm [CL]

http://arxiv.org/abs/2002.01431


Nested sampling is an efficient algorithm for the calculation of the Bayesian evidence and posterior parameter probability distributions. It is based on the step-by-step exploration of the parameter space by Monte Carlo sampling with a series of sets of values, called live points, that evolve towards the region of interest, i.e. where the likelihood function is maximal. In the presence of several local likelihood maxima, the algorithm converges with difficulty. Some systematic errors can also be introduced by unexplored regions of the parameter volume. To avoid this, different methods have been proposed in the literature for an efficient search for new live points, even in the presence of local maxima. Here we present a new solution based on the mean shift cluster recognition method implemented in a random walk search algorithm. The cluster recognition is integrated within the Bayesian analysis program NestedFit. It is tested with the analysis of some difficult cases. Compared to the analysis results without cluster recognition, the computation time is considerably reduced. At the same time, the entire parameter space is efficiently explored, which translates into a smaller uncertainty of the extracted value of the Bayesian evidence.
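
Mean shift itself is only a few lines: each seed is repeatedly moved to the centroid of the data points within a bandwidth until it settles on a mode, and seeds that settle together form one cluster. A minimal flat-kernel sketch (its integration with NestedFit's live-point machinery is not shown):

```python
import numpy as np

def mean_shift(points, bandwidth=0.5, iters=50):
    """Flat-kernel mean shift; returns the mode each point converges to."""
    modes = points.copy()
    for _ in range(iters):
        for i, p in enumerate(modes):
            near = points[np.linalg.norm(points - p, axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    return modes

rng = np.random.default_rng(6)
pts = np.vstack([rng.normal(0, 0.1, (100, 2)),      # two well-separated
                 rng.normal(2, 0.1, (100, 2))])     # likelihood maxima
modes = mean_shift(pts)
print("clusters found:", len(np.unique(np.round(modes, 1), axis=0)))
```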

Read this paper on arXiv…

M. Trassinelli and P. Ciccodicola
Wed, 5 Feb 20
17/67

Comments: N/A

The generation and sustenance of electric fields in sandstorms [CL]

http://arxiv.org/abs/2001.11503


Sandstorms are frequently accompanied by the generation of intense electric fields and lightning. In a very narrow region close to ground level, sand particles undergo a charge exchange mechanism whereby larger (resp. smaller) sand grains become positively (resp. negatively) charged and are then entrained by the turbulent fluid motion. Our central hypothesis is that differently sized sand particles are differentially transported by the turbulent flow, resulting in a large-scale charge separation, and hence a large-scale electric field. We utilize our simulation framework, comprising large-eddy simulation of the turbulent atmospheric boundary layer along with sand particle transport and an electrostatic Poisson solver, to investigate the physics of electric fields in sandstorms and thus to test our hypothesis. We apply the framework to weak to strong sandstorms, characterized by the number density of the sand particles. Our simulations reproduce observational measurements of both the mean and RMS fluctuation values of the electric field. We propose a scaling law in which the electric field scales as the two-thirds power of the number density, holding for weak-to-medium sandstorms.

Read this paper on arXiv…

M. Rahman, W. Cheng and R. Samtaney
Fri, 31 Jan 20
33/61

Comments: 4 pages, 5 figures, submitted to Physical Review Letters (PRL), American Physical Society

Machine learning applied to simulations of collisions between rotating, differentiated planets [EPA]

http://arxiv.org/abs/2001.09542


In the late stages of terrestrial planet formation, pairwise collisions between planetary-sized bodies act as the fundamental agent of planet growth. These collisions can lead to either growth or disruption of the bodies involved and are largely responsible for shaping the final characteristics of the planets. Despite their critical role in planet formation, an accurate treatment of collisions has yet to be realized. While semi-analytic methods have been proposed, they remain limited to a narrow set of post-impact properties and have only achieved relatively low accuracies. However, the rise of machine learning and access to increased computing power have enabled novel data-driven approaches. In this work, we show that data-driven emulation techniques are capable of predicting the outcome of collisions with high accuracy and are generalizable to any quantifiable post-impact quantity. In particular, we focus on the dataset requirements, training pipeline, and regression performance for four distinct data-driven techniques from machine learning (ensemble methods and neural networks) and uncertainty quantification (Gaussian processes and polynomial chaos expansion). We compare these methods to existing analytic and semi-analytic methods. Such data-driven emulators are poised to replace the methods currently used in N-body simulations. This work is based on a new set of 10,700 SPH simulations of pairwise collisions between rotating, differentiated bodies at all possible mutual orientations.
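
Of the four emulation techniques compared, Gaussian-process regression is the most compact to sketch: fit on (impact parameters, post-impact quantity) pairs and predict with a per-point uncertainty. A toy example with synthetic data standing in for the SPH outcomes (the variables and kernel choices are purely illustrative):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
# Toy stand-ins: (impact velocity, impact angle) -> accreted mass fraction.
X = rng.random((200, 2))
y = np.exp(-3 * X[:, 0]) * np.cos(3 * X[:, 1]) + 0.01 * rng.standard_normal(200)

gp = GaussianProcessRegressor(kernel=RBF(0.3) + WhiteKernel(1e-4),
                              normalize_y=True).fit(X, y)
mean, std = gp.predict(rng.random((5, 2)), return_std=True)
print(np.c_[mean, std])   # prediction plus per-point uncertainty
```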

Read this paper on arXiv…

M. Timpe, M. Veiga, M. Knabenhans, et al.
Tue, 28 Jan 20
46/63

Comments: 28 pages, 8 figures. Submitted to Computational Astrophysics and Cosmology

Investigating the use of field solvers for simulating classical systems [CL]

http://arxiv.org/abs/2001.05791


We explore the use of field solvers as approximations of classical Vlasov-Poisson systems. This correspondence is investigated in both electrostatic and gravitational contexts. We demonstrate the ability of field solvers to be excellent approximations of problems with cold initial conditions into the nonlinear regime. We also investigate extensions of the Schrödinger-Poisson system that employ multiple stacked cold streams, and the von Neumann-Poisson equation, as methods that can successfully reproduce the classical evolution of warm initial conditions. We then discuss how appropriate simulation parameters need to be chosen to avoid interference terms, aliasing, and wave behavior in the field solver solutions. We present a series of criteria clarifying how parameters need to be chosen in order to effectively approximate classical solutions.

Read this paper on arXiv…

A. Eberhardt, A. Banerjee, M. Kopp, et al.
Fri, 17 Jan 20
29/60

Comments: 28 pages, 14 figures. To be submitted to Phys. Rev. D

Direct Calculation of Self-Gravitational Force for Infinitesimally Thin Gaseous Disks Using Adaptive Mesh Refinement [CL]

http://arxiv.org/abs/2001.02765


Yen et al. (2012) advanced a direct approach for the calculation of self-gravitational forces with second-order accuracy based on uniform grid discretization. This method improves the accuracy of N-body calculations by using exact integration of kernel functions and employing the Fast Fourier Transform (FFT) to reduce the complexity of the computation to nearly linear. The direct approach is free of artificial boundary conditions; however, its applicability is limited by the uniform discretization of grids. We report here an advancement of the direct method with the implementation of adaptive mesh refinement (AMR) while maintaining second-order accuracy, which breaks the barrier set by uniform grid discretization. The adoption of graphics processing units (GPUs) can significantly speed up the computation and makes application of this method possible for astrophysical systems of gaseous disk galaxies and protoplanetary disks.
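
The uniform-grid backbone of such direct methods, the potential as a discrete convolution of surface density with a softened $1/r$ kernel evaluated via zero-padded FFTs in $O(N \log N)$, can be sketched briefly (the exact kernel integration of Yen et al. and the AMR/GPU layers are omitted; softening and grid values are assumed):

```python
import numpy as np

# Potential of an infinitesimally thin disk on a uniform grid via
# zero-padded FFT convolution (Hockney-style, non-periodic), G = 1.
n, dx, eps = 128, 1.0, 0.5            # eps: softening length (assumed)
sigma = np.zeros((n, n))
sigma[50:78, 50:78] = 1.0             # toy surface density patch

# Kernel -1/|r| on a doubled, wrapped grid to avoid periodic aliasing.
i = np.fft.fftfreq(2 * n) * 2 * n     # wrapped integer grid coordinates
X, Y = np.meshgrid(i * dx, i * dx, indexing="ij")
kernel = -1.0 / np.sqrt(X**2 + Y**2 + eps**2)

pad = np.zeros((2 * n, 2 * n))
pad[:n, :n] = sigma * dx * dx
phi = np.fft.irfft2(np.fft.rfft2(kernel) * np.fft.rfft2(pad))[:n, :n]
print("potential minimum:", phi.min())
```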

Read this paper on arXiv…

Y. Tseng, H. Shang and C. Yen
Fri, 10 Jan 20
59/65

Comments: 16 pages

Self-gravitational Force Calculation of Second Order Accuracy Using Multigrid Method on Nested Grids [CL]

http://arxiv.org/abs/2001.02327


We present a simple and effective multigrid-based Poisson solver of second-order accuracy in both gravitational potential and forces in terms of the one, two and infinity norms. The method is especially suitable for numerical simulations using nested mesh refinement. The Poisson equation is solved from coarse to fine levels using a one-way interface scheme. We introduce anti-symmetrically linear interpolation for evaluating the boundary conditions across the multigrid hierarchy. The spurious forces commonly observed at the interfaces between refinement levels are effectively suppressed. We validate the method using two- and three-dimensional density-force pairs that are sufficiently smooth for probing the order of accuracy.

Read this paper on arXiv…

H. Wang and C. Yen
Thu, 9 Jan 20
5/61

Comments: accepted for publication in ApJS

Production of nitric oxide by a fragmenting bolide: An exploratory numerical study [EPA]

http://arxiv.org/abs/1912.13130


A meteoroid's hypersonic passage through the Earth's atmosphere results in ablational and fragmentational mass loss. Potential shock waves associated with a parent object as well as its fragments can modify the surrounding atmosphere and produce a range of physico-chemical effects. Some of the thermally driven chemical and physical processes induced by meteoroid-fragment generated shock waves, such as nitric oxide (NO) production, are less understood. Any estimates of meteoric NO production depend not only on a quantifiable meteoroid population and a rate of fragmentation, with a size capable of producing high-temperature flows, but also on understanding the physical properties of the meteor flows along with their thermal history. We performed an exploratory pilot numerical study using the CFD code ANSYS Fluent to investigate the production of NO in the upper atmosphere by small meteoroids (or fragments of meteoroids after they undergo a disruption episode) in the size range from $10^{-2}$ m to 1 m. Our model uses the simulation of a spherical body in the continuum flow at 70 and 80 km altitude to approximate the behaviour of a small meteoroid capable of producing NO. The results presented in this exploratory study are in good agreement with previous studies.

Read this paper on arXiv…

M. Niculescu, E. Silber and R. Silber
Wed, 1 Jan 20
77/88

Comments: 30 pages, 9 figures

Evaporative cooling of icy interstellar grains. I [GA]

http://arxiv.org/abs/1912.11378


Context. While radiative cooling of interstellar grains is a well-known process, little detail is known about the cooling of grains with an icy mantle that contains volatile adsorbed molecules. Aims. We explore basic details for the cooling process of an icy grain with properties relevant to dark interstellar clouds. Methods. Grain cooling was described with a numerical code considering a grain with an icy mantle that is structured in monolayers and containing several volatile species in proportions consistent with interstellar ice. Evaporation was treated as first-order decay. Diffusion and subsequent thermal desorption of bulk-ice species was included. Temperature decrease from initial temperatures of 100, 90, 80, 70, 60, 50, 40, 30, and 20 K was studied, and we also followed the composition of ice and evaporated matter. Results. We find that grain cooling occurs by partially successive and partially overlapping evaporation of different species. The most volatile molecules (N2) first evaporate at the greatest rate and are most rapidly depleted from the outer ice monolayers. The most important coolant is CO, but evaporation of more refractory species, such as CH4 and even CO2, is possible when the former volatiles are not available. Cooling of high-temperature grains takes longer because volatile molecules are depleted faster and the grain has to switch to slow radiative cooling at a higher temperature. For grain temperatures above 40 K, most of the thermal energy is carried away by evaporation. Evaporation of the nonpolar volatile species induces a complete change of the ice surface, as the refractory polar molecules (H2O) are left behind. Conclusions. The effectiveness of thermal desorption from heated icy grains (e.g., the yield of cosmic-ray-induced desorption) is primarily controlled by the thermal energy content of the grain and the number and availability of volatile molecules.
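
The model's core, first-order decay of each adsorbed species with an Arrhenius rate where each evaporation event removes its binding energy from the grain, reduces to a small ODE system. A qualitative sketch with illustrative parameters only (the monolayer bookkeeping and bulk diffusion of the actual code are omitted):

```python
import numpy as np

# Qualitative grain cooling by sequential evaporation of volatiles.
# All binding energies (E_bind/kB, in K), populations, heat capacity
# (in units of kB) and attempt frequency are illustrative assumptions.
species = {"N2": 1000.0, "CO": 1300.0, "CH4": 1600.0}
N = {s: 1e5 for s in species}                 # molecules available
C, T, dt, nu = 1e7, 60.0, 1e-3, 1e12          # heat capacity, K, s, 1/s

for step in range(200_000):
    for s, Eb in species.items():
        k = nu * np.exp(-Eb / T)              # Arrhenius evaporation rate
        dN = min(N[s], N[s] * k * dt)         # first-order decay
        N[s] -= dN
        T -= dN * Eb / C                      # each event removes E_bind
print(f"T = {T:.1f} K, remaining:", {s: f"{n:.0f}" for s, n in N.items()})
```

Run with these numbers, the most volatile species (N2) flashes off first, CO then dominates the cooling, and CH4 survives, mirroring the sequential behaviour described in the Results.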

Read this paper on arXiv…

J. Kalvans and J. Kalnin
Wed, 25 Dec 19
6/31

Comments: A&A, in press

Artificial neural network subgrid models of 2-D compressible magnetohydrodynamic turbulence [CL]

http://arxiv.org/abs/1912.11073


We explore the suitability of deep learning to capture the physics of subgrid-scale ideal magnetohydrodynamics turbulence in 2-D simulations of the magnetized Kelvin-Helmholtz instability. We produce simulations at different resolutions to systematically quantify the performance of neural network models at reproducing the physics of these complex simulations. We compare the performance of our neural networks with that of gradient models, which are extensively used in the magnetohydrodynamics literature. Our findings indicate that neural networks significantly outperform gradient models at reproducing the effects of magnetohydrodynamics turbulence. To the best of our knowledge, this is the first exploratory study on the use of deep learning to learn and reproduce the physics of magnetohydrodynamics turbulence.

Read this paper on arXiv…

S. Rosofsky and E. Huerta
Wed, 25 Dec 19
23/31

Comments: 24 pages, 17 figures

Dynamical evidence for an early giant planet instability [EPA]

http://arxiv.org/abs/1912.10879


The dynamical structure of the Solar System can be explained by a period of orbital instability experienced by the giant planets. While a late instability was originally proposed to explain the Late Heavy Bombardment, recent work favors an early instability. We model the early dynamical evolution of the outer Solar System to self-consistently constrain the most likely timing of the instability. We first simulate the dynamical sculpting of the primordial outer planetesimal disk during the accretion of Uranus and Neptune from migrating planetary embryos during the gas disk phase, and determine the separation between Neptune and the inner edge of the planetesimal disk. We performed simulations with a range of migration histories for Jupiter. We find that, unless Jupiter migrated inwards by 10 AU or more, the instability almost certainly happened within 100 Myr of the start of Solar System formation. There are two distinct possible instability triggers. The first is an instability that is triggered by the planets themselves, with no appreciable influence from the planetesimal disk. Of those, the median instability time is $\sim 4$ Myr. Among self-stable systems — where the planets are locked in a resonant chain that remains stable in the absence of a planetesimal disk — our self-consistently sculpted planetesimal disks nonetheless trigger a giant planet instability with a median instability time of 37-62 Myr for a reasonable range of migration histories of Jupiter. The simulations that give the latest instability times are those that invoked long-range inward migration of Jupiter from 15 AU or beyond; however, these simulations over-excited the inclinations of Kuiper belt objects and are inconsistent with the present-day Solar System. We conclude on dynamical grounds that the giant planet instability is likely to have occurred early in Solar System history.

Read this paper on arXiv…

R. Sousa, A. Morbidelli, S. Raymond, et. al.
Tue, 24 Dec 19
58/79

Comments: 46 pages, 26 figures, Article reference YICAR_113605

An adaptive Gaussian quadrature for the Voigt function [IMA]

http://arxiv.org/abs/1912.08427


We evaluate an adaptive Gaussian quadrature integration scheme that is suitable for the numerical evaluation of generalized redistribution in frequency functions. The latter are indispensable ingredients for “full non-LTE” radiation transfer computations, i.e., computations assuming potential deviations of the velocity distribution of massive particles from the usual Maxwell-Boltzmann distribution. A first validation is made with computations of the usual Voigt profile.
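
The validation target is easy to reproduce. A minimal sketch (ours, with an illustrative adaptive quadrature rather than the authors' scheme) of the Voigt function H(a, v) = (a/pi) * Int exp(-t^2) / ((v - t)^2 + a^2) dt, cross-checked against the Faddeeva-function value:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import wofz

def voigt_quad(a, v):
    """H(a, v) by adaptive quadrature over a wide finite interval."""
    integrand = lambda t: np.exp(-t * t) / ((v - t) ** 2 + a * a)
    val, _err = quad(integrand, -50.0, 50.0, points=[v], limit=200)
    return a / np.pi * val

def voigt_wofz(a, v):
    """Reference value from the Faddeeva function."""
    return wofz(v + 1j * a).real

for a, v in [(0.5, 0.0), (0.1, 2.0), (1.0, 5.0)]:
    print(a, v, voigt_quad(a, v), voigt_wofz(a, v))
```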

Read this paper on arXiv…

F. Paletou, C. Peymirat, E. Anterrieu, et. al.
Thu, 19 Dec 19
52/82

Comments: 5 pages, 5 figures

A simple, entropy-based dissipation trigger for SPH [IMA]

http://arxiv.org/abs/1912.01095


Smoothed Particle Hydrodynamics (SPH) schemes need to be enhanced by dissipation mechanisms to handle shocks. Most SPH formulations rely on artificial viscosity and while this works well in pure shocks, attention has to be paid to avoid dissipation where it is not wanted. Commonly used approaches include limiters and time-dependent dissipation parameters. The former try to distinguish shocks from other types of flows that do not require dissipation, while in the latter approach the dissipation parameters are steered by some source term (“trigger”) and, if not triggered, they decay to a prescribed floor value. The commonly used source terms trigger on either the compression, $-\nabla\cdot\vec{v}$, or its time derivative. Here we explore a novel way to trigger SPH-dissipation: based on the entropy growth rate between two time steps we identify “troubled particles” that need to have dissipation added because they are either passing through a shock wave or becoming noisy. Our new scheme is implemented into the Lagrangian hydrodynamics code MAGMA2 and scrutinized in a number of shock and fluid instability tests. We find excellent results in shocks and only a moderate (and desired) switch-on in instability tests, despite our conservatively chosen trigger parameters. The new scheme is robust, trivial to implement into existing SPH codes and does not add any computational overhead.
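
A conceptual sketch (ours, not the MAGMA2 source) of such a trigger: flag particles whose entropic function grows too fast between steps, switch their dissipation parameter alpha on, and let it decay towards a floor elsewhere. All threshold and decay values below are assumptions.

```python
import numpy as np

def update_alpha(A_old, A_new, dt, alpha, alpha_max=1.0, alpha_floor=0.05,
                 tau=20.0, threshold=1e-4):
    """A is the entropic function (P / rho^gamma) per particle."""
    growth = (A_new - A_old) / (np.maximum(A_old, 1e-30) * dt)   # ~ d ln A / dt
    troubled = growth > threshold                                # shocked or noisy
    alpha = np.where(troubled, alpha_max,                        # switch on, or ...
                     alpha_floor + (alpha - alpha_floor) * np.exp(-dt / tau))  # ... decay
    return alpha, troubled

alpha = np.full(5, 0.05)
A_old = np.ones(5)
A_new = np.array([1.0, 1.0, 1.5, 1.0, 1.0])    # particle 2 crosses a shock
alpha, troubled = update_alpha(A_old, A_new, dt=0.01, alpha=alpha)
print(alpha, troubled)
```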

Read this paper on arXiv…

S. Rosswog
Wed, 4 Dec 19
48/58

Comments: 11 pages, 10 figures

The computation of seismic normal modes with rotation as a quadratic eigenvalue problem [CL]

http://arxiv.org/abs/1912.00114


A new approach is presented to compute the seismic normal modes of a fully heterogeneous, rotating planet. Special care is taken to separate out the essential spectrum in the presence of a fluid outer core. The relevant elastic-gravitational system of equations, including the Coriolis force, is subjected to a mixed finite-element method, while self-gravitation is accounted for with the fast multipole method (FMM). To solve the resulting quadratic eigenvalue problem (QEP), the approach utilizes extended Lanczos vectors forming a subspace computed from a non-rotating planet — with the shape of boundaries of a rotating planet and accounting for the centrifugal potential — to reduce the dimension of the original problem significantly. The subspace is guaranteed to be contained in the space of functions to which the seismic normal modes belong. The reduced system can further be solved with a standard eigensolver. The computational accuracy is illustrated using all the modes with relatively small meshes and also tested against standard perturbation calculations relative to a standard Earth model. The algorithm and code are used to compute the point spectra of eigenfrequencies in several Mars models, studying the effects of heterogeneity on a large range of scales.
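
For readers unfamiliar with QEPs: a problem (lam^2 M + lam C + K) x = 0, with C collecting Coriolis terms, can be reduced to an ordinary generalized eigenproblem by companion linearization. A small self-contained sketch (ours, with toy matrices, not the paper's reduced-subspace solver):

```python
import numpy as np
from scipy.linalg import eig

def solve_qep(M, C, K):
    """Solve (lam^2 M + lam C + K) x = 0 via the first companion linearization."""
    n = M.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[Z, I], [-K, -C]])     # A z = lam B z with z = [x, lam x]
    B = np.block([[I, Z], [Z, M]])
    lam, z = eig(A, B)
    return lam, z[:n]

rng = np.random.default_rng(0)
n = 4
M = np.eye(n)
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)   # SPD "stiffness"
C = rng.standard_normal((n, n)); C = C - C.T                   # skew, Coriolis-like
lam, X = solve_qep(M, C, K)
print(np.sort_complex(lam))   # purely imaginary pairs for this gyroscopic system
```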

Read this paper on arXiv…

J. Shi, R. Li, Y. Xi, et. al.
Tue, 3 Dec 19
69/90

Comments: N/A

The Lagrangian hydrodynamics code MAGMA2 [IMA]

http://arxiv.org/abs/1911.13093


We present the methodology and performance of the new Lagrangian hydrodynamics code MAGMA2, a Smoothed Particle Hydrodynamics code that benefits from a number of non-standard enhancements. It uses high-order smoothing kernels and, wherever gradients are needed, calculates them via accurate matrix inversion techniques. Our default version does not make use of any kernel gradients, but a more conventional formulation has also been implemented for comparison purposes. MAGMA2 uses artificial viscosity, but enhanced by techniques that are commonly used in finite volume schemes such as reconstruction and slope limiting. While simple to implement, this approach efficiently suppresses particle noise, but at the same time drastically reduces dissipation in locations where it is not needed or is actually unwanted. We demonstrate the performance of the new code in a number of challenging benchmark tests, including complex multi-dimensional Riemann problems and more astrophysical problems such as a collision between two stars, which showcase its robustness and excellent conservation properties.
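
A sketch (ours) of a matrix-inversion gradient estimate of the general type described: it is exact for linear functions and needs only kernel values, not kernel gradients. The neighbour search and kernel shape below are simplified assumptions.

```python
import numpy as np

def mi_gradient(pos, f, vol, h, a):
    """Gradient of f at particle a; linear-exact, uses kernel values only."""
    d = pos - pos[a]
    r = np.linalg.norm(d, axis=1)
    near = (r > 0) & (r < 2.0 * h)
    w = vol[near] * np.exp(-(r[near] / h) ** 2)       # Gaussian-like kernel weight
    dx = d[near]
    M = (w[:, None, None] * dx[:, :, None] * dx[:, None, :]).sum(axis=0)
    b = ((w * (f[near] - f[a]))[:, None] * dx).sum(axis=0)
    return np.linalg.solve(M, b)

rng = np.random.default_rng(1)
pos = rng.random((400, 2))
f = 3.0 * pos[:, 0] - 2.0 * pos[:, 1]                 # linear field, grad = (3, -2)
vol = np.full(400, 1.0 / 400)
print(mi_gradient(pos, f, vol, h=0.1, a=200))         # ~ [ 3. -2.]
```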

Read this paper on arXiv…

S. Rosswog
Mon, 2 Dec 19
9/91

Comments: 22 pages, 23 figures

An Arc-Length Approximation For Elliptical Orbits [CL]

http://arxiv.org/abs/1911.10584


In this paper, we overlay a continuum of analytical relations which essentially serve to compute the arc-length described by a celestial body in an elliptic orbit within a stipulated time interval. The formalism is based upon a two-dimensional heliocentric coordinate frame, where both coordinates are parameterized as two infinitely differentiable functions in time by using the Lagrange inversion theorem. The parameterization is first employed to generate a dynamically consistent ephemeris of any celestial object in an elliptic orbit, and thereafter incorporated into a numerical integration routine to approximate the arc-lengths delineated within an arbitrary interval of time. As elucidated, the presented formalism can also be used to quantify the perimeters of elliptic orbits of celestial bodies solely based upon their orbital period and other intrinsic characteristics.
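
As a concrete illustration (ours, not the authors' routine): using the eccentric-anomaly parameterization x = a(cos E - e), y = b sin E and Kepler's equation M = E - e sin E, the arc length between two times reduces to a 1-D quadrature. Orbital elements below are made up.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def eccentric_anomaly(M, e):
    """Solve Kepler's equation M = E - e sin E for E in [0, 2 pi]."""
    return brentq(lambda E: E - e * np.sin(E) - M, -0.1, 2.0 * np.pi + 0.1)

def arc_length(a, e, t0, t1, period):
    """Arc length swept between times t0 and t1 on an elliptic orbit."""
    b = a * np.sqrt(1.0 - e * e)
    E0 = eccentric_anomaly(2.0 * np.pi * t0 / period, e)
    E1 = eccentric_anomaly(2.0 * np.pi * t1 / period, e)
    ds = lambda E: np.sqrt(a * a * np.sin(E) ** 2 + b * b * np.cos(E) ** 2)
    s, _err = quad(ds, E0, E1)
    return s

# first quarter of the period on an a = 1, e = 0.3 orbit
print(arc_length(1.0, 0.3, 0.0, 0.25, 1.0))
```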

Read this paper on arXiv…

A. Jha and A. Karki
Tue, 26 Nov 19
22/66

Comments: (5 pages, 1 figure, 3 tables)

Universality in the structure of dark matter haloes over twenty orders of magnitude in halo mass [CEA]

http://arxiv.org/abs/1911.09720


Dark matter haloes are the basic units of all cosmic structure. They grew by gravitational amplification of weak initial density fluctuations that are still visible on large scales in the cosmic microwave background radiation. Galaxies formed within relatively massive haloes as gas cooled and condensed at their centres, but many hypotheses for the nature of dark matter imply that the halo population should extend to masses many orders of magnitude below those where galaxies can form. Here, we use a novel, multi-zoom technique to create the first consistent simulation of the formation of present-day haloes over the full mass range populated when dark matter is a Weakly Interacting Massive Particle (WIMP) of mass ~100 GeV. The simulation has a dynamic range of 30 orders of magnitude in mass, resolving the internal structure of hundreds of Earth-mass haloes just as well as that of hundreds of rich galaxy clusters. Remarkably, halo density profiles are universal over the entire mass range and are well described by simple two-parameter fitting formulae. Halo mass and concentration are tightly related in a way which depends on cosmology and on the nature of the dark matter. At fixed mass, concentration is independent of local environment for haloes less massive than those of typical galaxies. These results are important for predicting annihilation radiation signals from dark matter, since these should be dominated by contributions from the smallest structures.

Read this paper on arXiv…

J. Wang, S. Bose, C. Frenk, et. al.
Mon, 25 Nov 19
37/55

Comments: 35 pages, 8 figures, submitted

Challenges in fluid flow simulations using Exascale computing [CL]

http://arxiv.org/abs/1911.10020


In this paper, I discuss the challenges in porting hydrodynamic codes to futuristic exascale HPC systems. In particular, I describe the computational complexities of the finite difference method, the pseudo-spectral method, and the Fast Fourier Transform (FFT). I show how global data communication among the processors brings down the efficiency of pseudo-spectral codes and FFT. It is argued that FFT scaling may saturate at half a million processors, whereas finite difference and finite volume codes scale well beyond a million processors; hence they are likely candidates to be tried on exascale systems. Codes based on spectral-element and Fourier-continuation methods, which are more accurate than finite differences, could also scale well on such systems.

Read this paper on arXiv…

M. Verma
Mon, 25 Nov 19
55/55

Comments: N/A

Coupled MHD — Hybrid Simulations of Space Plasmas [CL]

http://arxiv.org/abs/1911.08660


Heliospheric plasmas require multi-scale and multi-physics considerations. On one hand, MHD codes are widely used for global simulations of the solar-terrestrial environments, but do not provide the most elaborate physical description of space plasmas. Hybrid codes, on the other hand, capture important physical processes, such as electric currents and effects of finite Larmor radius, but they can only be used locally, since the limitations in available computational resources do not allow for their use throughout a global computational domain. In the present work, we present a new coupled scheme which allows blocks in the block-adaptive grids to be switched from fluid MHD to hybrid simulations, without modifying the self-consistent computation of the electromagnetic fields acting on fluids (in MHD simulation) or charged ion macroparticles (in hybrid simulation). In this way, the hybrid scheme can refine the description in specified regions of interest without compromising the efficiency of the global MHD code.

Read this paper on arXiv…

S. Moschou, I. Sokolov, O. Cohen, et. al.
Thu, 21 Nov 19
17/57

Comments: 13 pages, 3 figures, ASTRONUM 2019 refereed proceedings paper (in press)

Two-level Dynamic Load Balancing for High Performance Scientific Applications [CL]

http://arxiv.org/abs/1911.06714


Scientific applications are often complex, irregular, and computationally-intensive. To accommodate the ever-increasing computational demands of scientific applications, high-performance computing (HPC) systems have become larger and more complex, offering parallelism at multiple levels (e.g., nodes, cores per node, threads per core). Scientific applications need to exploit all the available multilevel hardware parallelism to harness the available computational power. The performance of applications executing on such HPC systems may be adversely affected by load imbalance at multiple levels, caused by problem, algorithmic, and systemic characteristics. Nevertheless, most existing load balancing methods do not simultaneously address load imbalance at multiple levels. This work investigates the impact of load imbalance on the performance of three scientific applications at the thread and process levels. We jointly apply and evaluate selected dynamic loop self-scheduling (DLS) techniques at both levels. Specifically, we employ the extended LaPeSD OpenMP runtime library at the thread level and extend the DLS4LB MPI-based dynamic load balancing library at the process level. This approach is generic and applicable to any multiprocess-multithreaded computationally-intensive application (programmed using MPI and OpenMP). We conduct an exhaustive set of experiments to assess and compare six DLS techniques at the thread level and eleven at the process level. The results show that improved application performance, by up to 21%, can only be achieved by jointly addressing load imbalance at the two levels. We offer insights into the performance of the selected DLS techniques and discuss the interplay of load balancing at the thread level and process level.

Read this paper on arXiv…

A. Mohammed, A. Cavelan, F. Ciorba, et. al.
Wed, 20 Nov 19
72/73

Comments: N/A

Corrfunc — A Suite of Blazing Fast Correlation Functions on the CPU [CL]

http://arxiv.org/abs/1911.03545


The two-point correlation function (2PCF) is the most widely used tool for quantifying the spatial distribution of galaxies. Since the distribution of galaxies is determined by galaxy formation physics as well as the underlying cosmology, fitting an observed correlation function yields valuable insights into both. The calculation for a 2PCF involves computing pair-wise separations and consequently, the computing time scales quadratically with the number of galaxies. The next-generation galaxy surveys are slated to observe many millions of galaxies, and computing the 2PCF for such surveys would be prohibitively time-consuming. Additionally, modern modelling techniques require the 2PCF to be calculated thousands of times on simulated galaxy catalogues of at least equal size to the data, which would be completely infeasible for the next generation surveys. Thus, calculating the 2PCF forms a substantial bottleneck in improving our understanding of the fundamental physics of the universe, and we need high-performance software to compute the correlation function. In this paper, we present Corrfunc — a suite of highly optimised, OpenMP parallel clustering codes. The improved performance of Corrfunc arises from both efficient algorithms as well as software design that suits the underlying hardware of modern CPUs. Corrfunc can compute a wide range of 2-D and 3-D correlation functions in either simulation (Cartesian) space or on-sky coordinates. Corrfunc runs efficiently in both single- and multi-threaded modes and can compute a typical 2-point projected correlation function ($w_p(r_p)$) for ~1 million galaxies within a few seconds on a single thread. Corrfunc is designed to be both user-friendly and fast and is publicly available at https://github.com/manodeep/Corrfunc.
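
To make the quadratic cost concrete, here is a toy sketch (ours, deliberately naive and not the Corrfunc API): brute-force pair counting and the natural estimator xi = DD/RR - 1 on mock catalogues. Optimised kernels such as Corrfunc's replace exactly this loop.

```python
import numpy as np

def pair_counts(pos, edges):
    """Histogram of pairwise separations; the O(N^2) step that dominates."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    return np.histogram(d[iu], bins=edges)[0]

rng = np.random.default_rng(2)
n = 1000
data = rng.random((n, 3))          # substitute a real galaxy catalogue here
rand = rng.random((n, 3))          # unclustered randoms in the same volume
edges = np.linspace(0.01, 0.25, 10)
xi = pair_counts(data, edges) / np.maximum(pair_counts(rand, edges), 1) - 1.0
print(np.round(xi, 3))             # ~ 0 everywhere: the mock data is unclustered
```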

Read this paper on arXiv…

M. Sinha and L. Garrison
Tue, 12 Nov 19
52/84

Comments: Accepted for publication to MNRAS

Extending and Calibrating the Velocity dependent One-Scale model for Cosmic Strings with One Thousand Field Theory Simulations [CEA]

http://arxiv.org/abs/1911.03163


Understanding the evolution and cosmological consequences of topological defect networks requires a combination of analytic modeling and numerical simulations. The canonical analytic model for defect network evolution is the Velocity-dependent One-Scale (VOS) model. For the case of cosmic strings, this has so far been calibrated using small numbers of Goto-Nambu and field theory simulations, in the radiation and matter eras, as well as in Minkowski spacetime. But the model is only as good as the available simulations, and it should be extended as further simulations become available. In previous work we presented a General Purpose Graphics Processing Unit implementation of the evolution of cosmological domain wall networks, and used it to obtain an improved VOS model for domain walls. Here we continue this effort, exploiting a more recent analogous code for local Abelian-Higgs string networks. The significant gains in speed afforded by this code enabled us to carry out 1032 field theory simulations of $512^3$ size, with 43 different expansion rates. This detailed exploration of the effects of the expansion rate on the network properties in turn enables a statistical separation of various dynamical processes affecting the evolution of the network. We thus extend and accurately calibrate the VOS model for cosmic strings, including separate terms for energy losses due to loop production and scalar/gauge radiation. By comparing this newly calibrated VOS model with the analogous one for domain walls we quantitatively show that energy loss mechanisms are different for the two types of defects.
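
For readers wanting to experiment, a minimal sketch (ours, with illustrative parameter values rather than the paper's calibration) of the canonical string VOS equations, dL/dt = (1 + v^2) H L + (c/2) v and dv/dt = (1 - v^2)(k/L - 2 H v), integrated in a power-law background a ~ t^beta to exhibit the linear-scaling attractor L ~ t:

```python
import numpy as np
from scipy.integrate import solve_ivp

def vos(t, y, beta, c=0.23, k=0.7):
    """VOS right-hand side with H = beta / t; c, k are illustrative constants."""
    L, v = y
    H = beta / t
    dL = (1.0 + v * v) * H * L + 0.5 * c * v
    dv = (1.0 - v * v) * (k / L - 2.0 * H * v)
    return [dL, dv]

for beta, era in [(0.5, "radiation"), (2.0 / 3.0, "matter")]:
    sol = solve_ivp(vos, (1.0, 1e4), [0.1, 0.5], args=(beta,), rtol=1e-8)
    L, v = sol.y[:, -1]
    print(f"{era}: L/t -> {L / sol.t[-1]:.3f}, v -> {v:.3f}")   # scaling attractor
```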

Read this paper on arXiv…

J. Correia and C. Martins
Mon, 11 Nov 19
8/105

Comments: 15 pages, 5 figures, Phys. Rev. D (in press)

21cm Global Signal Extraction: Extracting the 21cm Global Signal using Artificial Neural Networks [CEA]

http://arxiv.org/abs/1911.02580


The study of the cosmic Dark Ages, Cosmic Dawn, and Epoch of Reionization (EoR) using the all-sky averaged redshifted HI 21cm signal is one of the key science goals of most of the ongoing or upcoming experiments, for example, EDGES, SARAS, and the SKA. This signal can be detected by averaging over the entire sky, using a single radio telescope, in the form of a Global signal as a function only of the redshifted HI 21cm frequency. One of the major challenges faced while detecting this signal is the dominant, bright foreground. The success of such a detection lies in the accuracy of the foreground removal. The presence of instrumental gain fluctuations, a chromatic primary beam, radio frequency interference (RFI) and the Earth’s ionosphere corrupts any observation of radio signals from the Earth. Here, we propose the use of Artificial Neural Networks (ANN) to extract the faint redshifted 21cm Global signal buried in a sea of bright Galactic foregrounds and contaminated by different instrumental models. The most striking advantage of using ANN is the fact that, when the corrupted signal is fed into a trained network, we can simultaneously extract the signal as well as the foreground parameters very accurately. Our results show that ANN can detect the Global signal with $\gtrsim 92 \%$ accuracy even in cases of mock observations where the instrument has some residual time-varying gain across the spectrum.

Read this paper on arXiv…

M. Choudhury, A. Datta and A. Chakraborty
Mon, 11 Nov 19
33/105

Comments: 14 pages, 18 figures. Accepted for publication in MNRAS

Rigidly rotating gravitationally bound systems of point particles, compared to polytropes [CL]

http://arxiv.org/abs/1911.01313


In order to simulate rigidly rotating polytropes we have simulated systems of $N$ point particles, with $N$ up to 1800. Two particles at a distance $r$ interact by an attractive potential $-1/r$ and a repulsive potential $1/r^2$. The repulsion simulates the pressure in a polytropic gas of polytropic index $3/2$. We take the total angular momentum $L$ to be conserved, but not the total energy $E$. The particles are stationary in the rotating coordinate system. The rotational energy is $L^2/(2I)$ where $I$ is the moment of inertia. Configurations where the energy $E$ has a local minimum are stable. In the continuum limit $N\to\infty$ the particles become more and more tightly packed in a finite volume, with the interparticle distances decreasing as $N^{-1/3}$. We argue that $N^{-1/3}$ is a good parameter for describing the continuum limit. We argue further that the continuum limit is the polytropic gas of index $3/2$. For example, the density profile of the nonrotating gas approaches that computed from the Lane–Emden equation describing the nonrotating polytropic gas. In the case of maximum rotation the instability occurs by the loss of particles from the equator, which becomes a sharp edge, as predicted by Jeans in his study of rotating polytropes. We describe the minimum energy nonrotating configurations for a number of small values of $N$.

Read this paper on arXiv…

Y. Hopstad and J. Myrheim
Tue, 5 Nov 19
50/72

Comments: 42 pages, 26 figures

Modelling low Mach number stellar hydrodynamics with MAESTROeX [CL]

http://arxiv.org/abs/1910.12979


Modelling long-time convective flows in the interiors of stars is extremely challenging using conventional compressible hydrodynamics codes due to the acoustic timestep limitation. Many of these flows are in the low Mach number regime, which allows us to exploit the relationship between acoustic and advective time scales to develop a more computationally efficient approach. MAESTROeX is an open source low Mach number stellar hydrodynamics code that allows much larger timesteps to be taken, therefore enabling systems to be modelled for much longer periods of time. This is particularly important for the problem of convection in the cores of rotating massive stars prior to core collapse. To fully capture the dynamics, it is necessary to model these systems in three dimensions at high resolution over many rotational periods. We present an overview of MAESTROeX’s current capabilities, describe ongoing work to incorporate the effects of rotation and discuss how we are optimising the code to run on GPUs.

Read this paper on arXiv…

A. Harpole, D. Fan, M. Katz, et. al.
Wed, 30 Oct 19
71/77

Comments: 9 pages, 1 figure, Proceedings for the “ASTRONUM 2019” conference, July 2019, Paris, France

Integral Relations and Control Volume Method for Kinetic Equation with Poisson Brackets [CL]

http://arxiv.org/abs/1910.12636


Simulation of plasmas in electromagnetic fields requires the numerical solution of a kinetic equation describing the time evolution of the particle distribution function. Here, we propose a finite volume scheme based on the integral relation for the Poisson bracket to solve the most fundamental kinetic equation, namely, the Liouville equation. The proposed scheme conserves the number of particles, maintains the total-variation-diminishing (TVD) property, and provides high-quality numerical results. Some other types of kinetic equations may also be formulated in terms of Poisson brackets and solved with the proposed method. Among them is the focused transport equation describing the acceleration and propagation of Solar Energetic Particles (SEPs), which is of practical importance, since high-energy SEPs produce radiation hazards. The newly proposed scheme is demonstrated to be accurate and efficient, which makes it applicable to global simulation systems analysing space weather. We also discuss the role of focused transport and the accuracy of the diffusive approximation in application to SEPs.

Read this paper on arXiv…

I. Sokolov, H. Sun, G. Toth, et. al.
Tue, 29 Oct 19
23/78

Comments: 30 pages, 8 figures

The Castro AMR Simulation Code: Current and Future Developments [IMA]

http://arxiv.org/abs/1910.12578


We describe recent developments to the Castro astrophysics simulation code, focusing on new features that enable our simulations of X-ray bursts. Two highlights of Castro’s ongoing development are the new integration technique to couple hydrodynamics and reactions to high order and GPU offloading. We discuss how these features will help offset some of the computational expense in X-ray burst models.

Read this paper on arXiv…

M. Zingale, A. Almgren, M. Sazo, et. al.
Tue, 29 Oct 19
44/78

Comments: submitted to proceedings of AstroNum 2019

A Surrogate Model for Gravitational Wave Signals from Comparable- to Large- Mass-Ratio Black Hole Binaries [CL]

http://arxiv.org/abs/1910.10473


Gravitational wave signals from compact astrophysical sources such as those observed by LIGO and Virgo require a high-accuracy, theory-based waveform model for the analysis of the recorded signal. Current inspiral-merger-ringdown models are calibrated only up to moderate mass ratios, thereby limiting their applicability to signals from high-mass ratio binary systems. We present EMRISur1dq1e4, a reduced-order surrogate model for gravitational waveforms of 13,500M in duration and including several harmonic modes for non-spinning black hole binary systems with mass-ratios varying from 3 to 10,000 thus vastly expanding the parameter range beyond the current models. This surrogate model is trained on waveform data generated by point-particle black hole perturbation theory (ppBHPT) both for large mass-ratio and comparable mass-ratio binaries. We observe that the gravitational waveforms generated through a simple application of ppBHPT to the comparable mass-ratio cases agree remarkably (and surprisingly) well with those from full numerical relativity after a rescaling of the ppBHPT’s total mass parameter. This observation and the EMRISur1dq1e4 surrogate model will enable data analysis studies in the high-mass ratio regime, including potential intermediate mass-ratio signals from LIGO/Virgo and extreme-mass ratio events of interest to the future space-based observatory LISA.

Read this paper on arXiv…

N. Rifat, S. Field, G. Khanna, et. al.
Thu, 24 Oct 19
54/68

Comments: 8 pages, 3 figures

From Dark Matter to Galaxies with Convolutional Neural Networks [CEA]

http://arxiv.org/abs/1910.07813


Cosmological simulations play an important role in the interpretation of astronomical data, in particular in comparing observed data to our theoretical expectations. However, to compare data with these simulations, the simulations in principle need to include gravity, magneto-hydrodynamics, radiative transfer, etc. These ideal large-volume (gravo-magneto-hydrodynamical) simulations are incredibly computationally expensive, costing tens of millions of CPU hours to run. In this paper, we propose a deep learning approach to map from the dark-matter-only simulation (computationally cheaper) to the galaxy distribution (from the much costlier cosmological simulation). The main challenge of this task is the high sparsity of the target galaxy distribution: space is mainly empty. We propose a cascade architecture composed of a classification filter followed by a regression procedure. We show that our result outperforms a state-of-the-art model used in the astronomical community, and provides a good trade-off between computational cost and prediction accuracy.

Read this paper on arXiv…

J. Yip, X. Zhang, Y. Wang, et. al.
Fri, 18 Oct 19
9/77

Comments: 5 pages, 2 figures. Accepted to the Second Workshop on Machine Learning and the Physical Sciences (NeurIPS 2019)

Visualizing the world's largest turbulence simulation [CL]

http://arxiv.org/abs/1910.07850


In this exploratory submission we present the visualization of the largest interstellar turbulence simulations ever performed, unravelling key astrophysical processes concerning the formation of stars and the relative role of magnetic fields. The simulations, including pure hydrodynamical (HD) and magneto-hydrodynamical (MHD) runs, up to a size of $10048^3$ grid elements, were produced on the supercomputers of the Leibniz Supercomputing Centre and visualized using the hybrid parallel (MPI+TBB) ray-tracing engine OSPRay associated with VisIt. Besides revealing features of turbulence with an unprecedented resolution, the visualizations brilliantly showcase the stretching-and-folding mechanisms through which astrophysical processes such as supernova explosions drive turbulence and amplify the magnetic field in the interstellar gas, and how the first structures, the seeds of newborn stars, are shaped by this process.

Read this paper on arXiv…

S. Cielo, L. Iapichino, J. Günther, et. al.
Fri, 18 Oct 19
39/77

Comments: 6 pages, 5 figures, accompanying paper of SC19 visualization showcase finalist. The full video is publicly available under this https URL

Newton vs the machine: solving the chaotic three-body problem using deep neural networks [GA]

http://arxiv.org/abs/1910.07291


Since its formulation by Sir Isaac Newton, the problem of solving the equations of motion for three bodies under their own gravitational force has remained practically unsolved. Currently, the solution for a given initialization can only be found by performing laborious iterative calculations that have unpredictable and potentially infinite computational cost, due to the system’s chaotic nature. We show that an ensemble of solutions obtained using an arbitrarily precise numerical integrator can be used to train a deep artificial neural network (ANN) that, over a bounded time interval, provides accurate solutions at fixed computational cost and up to 100 million times faster than a state-of-the-art solver. Our results provide evidence that, for computationally challenging regions of phase-space, a trained ANN can replace existing numerical solvers, enabling fast and scalable simulations of many-body systems to shed light on outstanding phenomena such as the formation of black-hole binary systems or the origin of the core collapse in dense star clusters.

Read this paper on arXiv…

P. Breen, C. Foley, T. Boekholt, et. al.
Thu, 17 Oct 19
39/62

Comments: 6 pages, 6 figures

A general-purpose timestep criterion for simulations with gravity [IMA]

http://arxiv.org/abs/1910.06349


We describe a new adaptive timestep criterion for integrating gravitational motion, which uses the tidal tensor to estimate the local dynamical timescale and scales the timestep proportionally. This provides a better candidate for a truly general-purpose gravitational timestep criterion than the usual prescription derived from the gravitational acceleration, which does not respect the equivalence principle, breaks down when $\mathbf{a}=0$, and does not obey the same dimensional scaling as the true timescale of orbital motion. We implement the tidal timestep criterion in the simulation code GIZMO, and examine controlled tests of collisionless galaxy and star cluster models, as well as fully-dynamic galaxy merger and cosmological dark matter simulations. The tidal criterion estimates the dynamical time faithfully, and generally provides a more efficient timestepping scheme compared to an acceleration criterion. Specifically, the tidal criterion achieves order-of-magnitude smaller energy errors for the same number of force evaluations in potentials with inner profiles shallower than $\rho \propto r^{-1}$ (i.e. where $\mathbf{a}\rightarrow 0$), such as star clusters and cored galaxies. For a given problem these advantages must be weighed against the additional overhead of computing the tidal tensor on-the-fly, but in many cases this overhead is small.
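
A sketch (ours) of the criterion's scaling: with dt = eta / sqrt(||T||) and T the tidal tensor, a point-mass potential gives ||T|| ~ GM/r^3, so dt tracks the local dynamical time even where the acceleration vanishes. The choice of matrix norm and eta below are our assumptions.

```python
import numpy as np

def tidal_tensor_point_mass(r_vec, GM=1.0):
    """T_ij = -d^2 phi / dx_i dx_j for phi = -GM/r."""
    r = np.linalg.norm(r_vec)
    return GM * (3.0 * np.outer(r_vec, r_vec) / r ** 5 - np.eye(3) / r ** 3)

def tidal_timestep(r_vec, eta=0.1, GM=1.0):
    T = tidal_tensor_point_mass(r_vec, GM)
    return eta / np.sqrt(np.linalg.norm(T))    # Frobenius norm, as one choice

for r in (0.5, 1.0, 2.0):
    print(r, tidal_timestep(np.array([r, 0.0, 0.0])))
# dt grows as r^(3/2), i.e. in step with the local orbital time
```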

Read this paper on arXiv…

M. Grudić and P. Hopkins
Wed, 16 Oct 19
52/56

Comments: Submitted to MNRAS. 8 pages, 4 figures. Comments welcome!

A relativistic particle pusher for ultra-strong electromagnetic fields [CL]

http://arxiv.org/abs/1910.04591


Kinetic plasma simulations are nowadays commonly used to study a wealth of non-linear behaviours and properties in laboratory and space plasmas. In particular, in high-energy physics and astrophysics, the plasma usually evolves in ultra-strong electromagnetic fields produced by intense laser beams for the former or by rotating compact objects such as neutron stars and black holes for the latter. In ultra-strong electromagnetic fields, the gyro-period is several orders of magnitude smaller than the timescale on which we desire to investigate the plasma evolution. Some approximations are then required, for instance artificially decreasing the electromagnetic field strength, which is certainly not satisfactory. The main flaw of this downscaling is that it cannot reproduce single particle acceleration to ultra-relativistic speeds with Lorentz factors above $\gamma \approx 10^3-10^4$. In this paper, we design a new algorithm able to capture particle motion and acceleration up to Lorentz factors of $10^{15}$ or even higher by using Lorentz boosts to special frames where the electric and magnetic fields are parallel. Assuming that these fields are locally uniform, we solve the equation of motion analytically in a tiny region smaller than the length scale of the field gradient. This analytical integration of the orbit severely reduces the constraint on the time step, allowing us to use very large time steps and avoiding the need to resolve the ultra-high-frequency gyromotion. We performed simulations in ultra-strong, spatially and temporally varying electromagnetic fields, showing that our particle pusher is able to follow the exact analytical solution accurately for very long times. This property is crucial to properly capture lepton electrodynamics in the electromagnetic waves produced by fast-rotating neutron stars.
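
A sketch (ours, not the paper's pusher) of the frame change such schemes rely on: for non-orthogonal, non-null fields there is a boost velocity along E x B satisfying v/(1 + v^2) = |E x B|/(E^2 + B^2) in units c = 1, in whose frame E' and B' are parallel. We solve that quadratic and verify the result with a standard Lorentz transformation of the fields; the example field values are arbitrary.

```python
import numpy as np

def parallel_frame_velocity(E, B):
    """Boost velocity (c = 1) along E x B; assumes fields not already parallel."""
    S = np.cross(E, B)
    w = np.linalg.norm(S) / (E @ E + B @ B)              # w <= 1/2 always
    u = (1.0 - np.sqrt(1.0 - 4.0 * w * w)) / (2.0 * w)   # root with |v| < 1
    return u * S / np.linalg.norm(S)

def boost_fields(E, B, v):
    """Standard Lorentz transformation of E and B to the frame moving at v."""
    g = 1.0 / np.sqrt(1.0 - v @ v)
    n = v / np.linalg.norm(v)
    Ep = g * (E + np.cross(v, B)) - (g - 1.0) * (E @ n) * n
    Bp = g * (B - np.cross(v, E)) - (g - 1.0) * (B @ n) * n
    return Ep, Bp

E = np.array([3.0, 0.0, 0.0])
B = np.array([1.0, 2.0, 0.0])
v = parallel_frame_velocity(E, B)
Ep, Bp = boost_fields(E, B, v)
print(np.linalg.norm(np.cross(Ep, Bp)))   # ~ 0: E' and B' are parallel
```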

Read this paper on arXiv…

J. Pétri
Fri, 11 Oct 19
23/76

Comments: Submitted to Journal of Computational Physics

A deep learning approach to cosmological dark energy models [CEA]

http://arxiv.org/abs/1910.02788


We propose a novel deep learning tool to study the evolution of dark energy models. The aim is to combine the training of Recurrent Neural Networks (RNN) and Bayesian Neural Networks (BNN). The former is capable of learning complex sequential information to classify objects like supernovae, using the light-curves directly to learn from the sequence of observations. Since an RNN cannot quantify uncertainties, a BNN emerges as a solution for problems in deep learning such as overfitting. For the training we use measurements of the distance modulus $\mu(z)$, such as those provided by the Pantheon Type Ia supernovae. In view of our results, the reported approach turns out to be a first promising step on how we can train a neural network for specific cosmological data. It is worth stressing that the technique allows us to reduce the computational load of expensive codes for dark energy models and to probe the necessity of modified dark energy models at higher redshift than that covered by current supernova samples.

Read this paper on arXiv…

C. Escamilla-Rivera, M. Quintero and S. Capozziello
Tue, 8 Oct 19
39/88

Comments: 7 pages, 6 figures

$k$-evolution: a relativistic N-body code for clustering dark energy [CEA]

http://arxiv.org/abs/1910.01104


We introduce $k$-evolution, a relativistic $N$-body code based on $\textit{gevolution}$, which includes clustering dark energy among its cosmological components. To describe dark energy, we use the effective field theory approach. In particular, we focus on $k$-essence with a speed of sound much smaller than unity but we lay down the basis to extend the code to other dark energy and modified gravity models. We develop the formalism including dark energy non-linearities but, as a first step, we implement the equations in the code after dropping non-linear self-coupling in the $k$-essence field. In this simplified setup, we compare $k$-evolution simulations with those of $\texttt{CLASS}$ and $\textit{gevolution}$ 1.2, showing the effect of dark matter and gravitational non-linearities on the power spectrum of dark matter, of dark energy and of the gravitational potential. Moreover, we compare $k$-evolution to Newtonian $N$-body simulations with back-scaled initial conditions and study how dark energy clustering affects massive halos.

Read this paper on arXiv…

F. Hassani, J. Adamek, M. Kunz, et. al.
Thu, 3 Oct 19
13/59

Comments: 38 pages, 19 figures

A resistive extension for ideal MHD [CL]

http://arxiv.org/abs/1906.03150


We present an extension to the special relativistic, ideal magnetohydrodynamics (MHD) equations, designed to capture effects due to resistivity. The extension takes the simple form of an additional source term which, when implemented numerically, is shown to emulate the behaviour produced by a fully resistive MHD description for a range of initial data. The extension is developed from first principle arguments, and thus requires no fine tuning of parameters, meaning it can be applied to a wide range of dynamical systems. Furthermore, our extension does not suffer from the same stiffness issues arising in resistive MHD, and thus can be evolved quickly using explicit methods, with performance benefits of roughly an order of magnitude compared to current methods.

Read this paper on arXiv…

A. Wright and I. Hawke
Thu, 3 Oct 19
36/59

Comments: 18 pages, 10 figures

Spectral shock detection for dynamically developing discontinuities [CL]

http://arxiv.org/abs/1910.00858


Pseudospectral schemes are a class of numerical methods capable of solving smooth problems with high accuracy thanks to their exponential convergence to the true solution. When applied to discontinuous problems, such as fluid shocks and material interfaces, due to the Gibbs phenomenon, pseudospectral solutions lose their superb convergence and suffer from spurious oscillations across the entire computational domain. Luckily, there exist theoretical remedies for these issues which have been successfully tested in practice for cases of well-defined discontinuities. We focus on one piece of this procedure: detecting a discontinuity in spectral data. We show that realistic applications require treatment of discontinuities dynamically developing in time and that it poses challenges associated with shock detection. More precisely, smoothly steepening gradients in the solution spawn spurious oscillations due to insufficient resolution, causing premature shock identification and information loss. We improve existing spectral shock detection techniques to allow us to automatically detect true discontinuities and identify cases for which post-processing is required to suppress spurious oscillations resulting from the loss of resolution. We then apply these techniques to solve an inviscid Burgers’ equation in 1D, demonstrating that our method correctly treats genuine shocks caused by wave breaking and removes oscillations caused by numerical constraints.
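
The underlying idea is easy to demonstrate: for smooth data the spectral coefficients decay exponentially, while a discontinuity forces only algebraic decay of the tail. A minimal sketch (ours, far simpler than the paper's detector; the test functions and threshold interpretation are assumptions):

```python
import numpy as np

def tail_decay_rate(u):
    """Log-slope of the high-order Fourier coefficient magnitudes."""
    c = np.abs(np.fft.rfft(u))
    k = np.arange(len(c))
    tail = slice(len(c) // 2, None)
    s, _ = np.polyfit(k[tail], np.log(c[tail] + 1e-300), 1)
    return -s   # clearly positive: smooth; near zero: discontinuous

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
smooth = 1.0 / (1.05 - np.cos(x))   # analytic: coefficients decay geometrically
shocked = x - np.pi                 # sawtooth: coefficients decay only as 1/k
for name, u in (("smooth", smooth), ("shocked", shocked)):
    print(f"{name}: decay rate = {tail_decay_rate(u):.3f}")
```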

Read this paper on arXiv…

J. Piotrowska and J. Miller
Thu, 3 Oct 19
49/59

Comments: 16 pages, 6 figures

Regression methods in waveform modeling: a comparative study [IMA]

http://arxiv.org/abs/1909.10986


Gravitational-wave astronomy of compact binaries relies on theoretical models of the gravitational-wave signal that is emitted as binaries coalesce. These models not only need to be accurate, they also have to be fast to evaluate in order to compare millions of signals in near real time with the data of gravitational-wave instruments. A variety of regression and interpolation techniques have been employed to build efficient waveform models, but no study has yet systematically compared the performance of these regression methods. Here we provide such a comparison of various techniques, including polynomial fits, radial basis functions, Gaussian process regression and artificial neural networks, specifically for the case of gravitational waveform modeling. We use all these techniques to regress analytical models of non-precessing and precessing binary black hole waveforms, and compare the accuracy as well as computational speed. We find that most regression methods are reasonably accurate, but efficiency considerations favour in many cases the simplest approach. We conclude that sophisticated regression methods are not necessarily needed in standard gravitational-wave modeling applications, although problems of higher complexity than those tested here might be more suitable for machine-learning techniques, and more sophisticated methods may have side benefits.

Read this paper on arXiv…

Y. Setyawati, M. Pürrer and F. Ohme
Wed, 25 Sep 19
70/70

Comments: 31 pages, 5 figures

Time-step dependent force interpolation scheme for suppressing numerical Cherenkov instability in relativistic particle-in-cell simulations [CL]

http://arxiv.org/abs/1909.09613


The WT scheme, a piecewise polynomial force interpolation scheme with time-step dependency, is proposed in this paper for relativistic particle-in-cell (PIC) simulations. The WT scheme removes the lowest order numerical Cherenkov instability (NCI) growth rate for arbitrary time steps allowed by the Courant condition. While NCI from higher order resonances is still present, the numerical tests show that for smaller time steps, the numerical instability grows much slower than using the optimal time step found in previous studies. The WT scheme is efficient for improving the quality and flexibility of relativistic particle-in-cell simulations.

Read this paper on arXiv…

Y. Lu, P. Kilian, F. Guo, et. al.
Mon, 23 Sep 19
12/46

Comments: 10 pages, single column, one figure

Learning Symbolic Physics with Graph Networks [CL]

http://arxiv.org/abs/1909.05862


We introduce an approach for imposing physically motivated inductive biases on graph networks to learn interpretable representations and improved zero-shot generalization. Our experiments show that our graph network models, which implement this inductive bias, can learn message representations equivalent to the true force vector when trained on n-body gravitational and spring-like simulations. We use symbolic regression to fit explicit algebraic equations to our trained model’s message function and recover the symbolic form of Newton’s law of gravitation without prior knowledge. We also show that our model generalizes better at inference time to systems with more bodies than had been experienced during training. Our approach is extensible, in principle, to any unknown interaction law learned by a graph network, and offers a valuable technique for interpreting and inferring explicit causal theories about the world from implicit knowledge captured by deep learning.

Read this paper on arXiv…

M. Cranmer, R. Xu, P. Battaglia, et. al.
Mon, 16 Sep 19
74/74

Comments: 5 pages; 3 figures; submitted to Machine Learning and the Physical Sciences Workshop @ NeurIPS 2019

An Extension of the Athena++ Framework for General Equations of State [IMA]

http://arxiv.org/abs/1909.05274


We present modifications to the Athena++ framework to enable the use of general equations of state (EOS). Part of our motivation for doing so is to model transient astrophysical phenomena, as these types of events are often not well approximated by an ideal gas. This necessitated changes to the Riemann solvers implemented in Athena++. We discuss the adjustments made to the HLLC and HLLD solvers and the EOS calls required for an arbitrary EOS. For the first time, we demonstrate the reliability of our code in a number of tests which utilize a relatively simple, but non-trivial, EOS based on hydrogen ionization, appropriate for the transition from atomic to ionized hydrogen. Additionally, we perform tests using an electron-positron Helmholtz EOS, appropriate for regimes where nuclear statistical equilibrium is a good approximation. Overall, these new complex-EOS tests show that our modifications to Athena++ accurately solve the Riemann problem with the expected linear convergence. We provide our test solutions as a means to check the accuracy of other hydrodynamic codes. Our tests and additions to Athena++ will enable further research into (magneto)hydrodynamic problems where realistic treatments of the EOS are required.

Read this paper on arXiv…

M. Coleman
Fri, 13 Sep 19
41/70

Comments: 29 pages, 11 figures, 16 tables, submitted to ApJS

The Arepo public code release [IMA]

http://arxiv.org/abs/1909.04667


We introduce the public version of the cosmological magnetohydrodynamical moving-mesh simulation code Arepo. This version contains a finite-volume magnetohydrodynamics algorithm on an unstructured, dynamic Voronoi tessellation coupled to a tree-particle-mesh algorithm for the Poisson equation either on a Newtonian or cosmologically expanding spacetime. Time-integration is performed adopting local timestep constraints for each cell individually, solving the fluxes only across active interfaces, and calculating gravitational forces only between active particles, using an operator-splitting approach. This allows simulations with high dynamic range to be performed efficiently. Arepo is a massively distributed-memory parallel code, using the Message Passing Interface (MPI) communication standard and employing a dynamical work-load and memory balancing scheme to allow optimal use of multi-node parallel computers. The employed parallelization algorithms of Arepo are deterministic and produce binary-identical results when re-run on the same machine and with the same number of MPI ranks. A simple primordial cooling and star formation model is included as an example of sub-resolution models commonly used in simulations of galaxy formation. Arepo also contains a suite of computationally inexpensive test problems, ranging from idealized tests for automated code verification to scaled-down versions of cosmological galaxy formation simulations, and is extensively documented in order to assist adoption of the code by new scientific users.

Read this paper on arXiv…

R. Weinberger, V. Springel and R. Pakmor
Thu, 12 Sep 19
4/84

Comments: 33 pages, 6 figures, submitted to ApJS, this https URL, repository: this https URL

Relativistic changes to particle trajectories are difficult to detect [CL]

http://arxiv.org/abs/1909.04652


We study the sensitivity of the computed orbits for the Kepler problem, both for continuous space and for discretizations of space. While it is known that energy can be very well preserved with symplectic methods, the semi-major axis is in general not preserved. We study this spurious shift as a function of the integration method used, and also as a function of an additional interpolation of forces on a 2-dimensional lattice. This is done for several choices of eccentricities and semi-major axes. Using these results, we can predict which precisions and lattice constants allow for a detection of the relativistic perihelion advance. Such bounds are important for calculations in N-body simulations, if one wants to meaningfully add these relativistic effects.
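
A sketch (ours) of the kind of experiment involved: a symplectic kick-drift-kick (leapfrog) integration of the Kepler problem, whose energy error stays bounded over many orbits while phase and orientation errors accumulate; it is drifts of the latter kind that decide whether a relativistic perihelion advance is detectable. Initial conditions and step size are arbitrary choices.

```python
import numpy as np

def accel(r, GM=1.0):
    return -GM * r / np.linalg.norm(r) ** 3

def leapfrog(r, v, dt, n_steps):
    """Symplectic kick-drift-kick integrator for the Kepler problem."""
    a = accel(r)
    for _ in range(n_steps):
        v = v + 0.5 * dt * a
        r = r + dt * v
        a = accel(r)
        v = v + 0.5 * dt * a
    return r, v

def energy(r, v, GM=1.0):
    return 0.5 * v @ v - GM / np.linalg.norm(r)

r0, v0 = np.array([1.0, 0.0]), np.array([0.0, 1.1])   # bound, mildly eccentric
E0 = energy(r0, v0)
r, v = leapfrog(r0, v0, dt=1e-3, n_steps=200_000)     # a couple dozen orbits
print(f"relative energy error: {abs(energy(r, v) - E0) / abs(E0):.2e}")
```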

Read this paper on arXiv…

J. Eckmann and F. Hassani
Thu, 12 Sep 19
63/84

Comments: 14 pages, 8 figures

A Staggered Semi-Analytic Method for Simulating Dust Grains Subject to Gas Drag [EPA]

http://arxiv.org/abs/1909.02006


Numerical simulations of dust-gas dynamics are one of the fundamental tools in astrophysical research, such as the study of star and planet formation. It is common to find tightly coupled dust and gas in astrophysical systems, which demands that any practical integration method be able to take time steps $\Delta t$ much longer than the stopping time $t_{\rm s}$ due to drag. A number of methods have been developed to ensure stability in this stiff ($\Delta t\gg t_{\rm s}$) regime, but there remains large room for improvement in terms of accuracy. In this paper, we describe an easy-to-implement method, the “staggered semi-analytic method” (SSA), and conduct numerical tests to compare it to other implicit and semi-analytic methods, including the $2^{\rm nd}$ order implicit method and the Verlet method. SSA makes use of a staggered step to better approximate the terminal velocity in the stiff regime. In applications to protoplanetary disks, this not only leads to orders-of-magnitude higher accuracy than the other methods, but also provides greater stability, making it possible to take time steps 100 times larger in some situations. SSA is also $2^{\rm nd}$ order accurate and symplectic when $\Delta t \ll t_{\rm s}$. More generally, the robustness of SSA makes it applicable to linear dust-gas drag in virtually any context.
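
The stiffness the abstract describes is easy to see in one line of algebra: linear drag dv/dt = -(v - u)/t_s has the exact one-step solution v(dt) = u + (v0 - u) exp(-dt/t_s). A tiny sketch (ours, not the paper's SSA scheme itself) contrasting this with a naive explicit Euler update in the stiff regime:

```python
import numpy as np

def drag_exact(v, u, dt, ts):
    """Exact one-step solution of dv/dt = -(v - u)/ts; stable for any dt."""
    return u + (v - u) * np.exp(-dt / ts)

def drag_euler(v, u, dt, ts):
    """Naive explicit Euler; unstable once dt > 2 ts."""
    return v - dt / ts * (v - u)

v0, u, ts, dt = 1.0, 0.0, 1e-4, 1e-2      # stiff regime: dt / ts = 100
print("exponential update:", drag_exact(v0, u, dt, ts))   # -> terminal velocity u
print("explicit Euler:    ", drag_euler(v0, u, dt, ts))   # -> -99, blows up
```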

Read this paper on arXiv…

J. Fung and D. Muley
Fri, 6 Sep 19
76/78

Comments: Submitted to ApJ supplement

A Particle Module for the PLUTO Code: III — Dust [EPA]

http://arxiv.org/abs/1908.10793


The implementation of a new particle module describing the physics of dust grains coupled to the gas via drag forces is the subject of this work. The proposed particle-gas hybrid scheme has been designed to work in Cartesian as well as in cylindrical and spherical geometries. The numerical method relies on a Godunov-type second-order scheme for the fluid and an exponential midpoint rule for dust particles which overcomes the stiffness introduced by the linear coupling term. Besides being time-reversible and globally second-order accurate in time, the exponential integrator provides energy errors which are always bounded and it remains stable in the limit of arbitrarily small particle stopping times yielding the correct asymptotic solution. Such properties make this method preferable to the more widely used semi-implicit or fully implicit schemes at a very modest increase in computational cost. Coupling between particles and grid quantities is achieved through particle deposition and field-weighting techniques borrowed from Particle-In-Cell simulation methods. In this respect, we derive new weight factors in curvilinear coordinates that are more accurate than traditional volume- or area-weighting.
A comprehensive suite of numerical benchmarks is presented to assess the accuracy and robustness of the algorithm in Cartesian, cylindrical and spherical coordinates. Particular attention is devoted to the streaming instability which is analyzed in both local and global disk models. The module is part of the PLUTO code for astrophysical gas-dynamics and it is mainly intended for the numerical modeling of protoplanetary disks in which solid and gas interact via aerodynamic drag.

Read this paper on arXiv…

A. Mignone, M. Flock and B. Vaidya
Thu, 29 Aug 19
42/55

Comments: 22 pages, 13 figures

Driving solar coronal MHD simulations on high-performance computers [SSA]

http://arxiv.org/abs/1908.08557


The quality of today’s research is often tightly limited by the available computing power and the scalability of codes to many processors. For example, tackling the problem of heating the solar corona requires a most realistic description of the plasma dynamics and the magnetic field. Numerically solving such a magneto-hydrodynamical (MHD) description of a small active region (AR) on the Sun requires millions of computation hours on current high-performance computing (HPC) hardware. The aim of this work is to describe methods for an efficient parallelization of boundary conditions and data input/output (IO) strategies that allow for a better scaling towards thousands of processors (CPUs). The Pencil Code is tested before and after optimization to compare the performance and scalability of a coronal MHD model above an AR. We present a novel boundary condition for non-vertical magnetic fields in the photosphere, where we approach the realistic pressure increase below the photosphere. With that, magnetic flux bundles become narrower with depth and the flux density increases accordingly. The scalability is improved by more than one order of magnitude through the HPC-friendly boundary conditions and IO strategies. This work also describes the nudging methods needed to drive the MHD model with observed magnetic fields from the Sun’s photosphere. In addition, we present the upper and lower atmospheric boundary conditions (photospheric and towards the outer corona), including swamp layers to diminish perturbations before they reach the boundaries. Altogether, these methods enable more realistic 3D MHD simulations than previous models regarding the coronal heating problem above an AR — simply because of the ability to use a large number of CPUs efficiently in parallel.

Read this paper on arXiv…

P. Bourdin
Mon, 26 Aug 19
38/55

Comments: 26 pages, 12 figures, 1 table, accepted at GAFD

Modeling of rigidity dependent CORSIKA simulations for GRAPES-3 [IMA]

http://arxiv.org/abs/1908.05948


The GRAPES-3 muon telescope located in Ooty, India records 4×10^9 muons daily. These muons are produced by the interaction of primary cosmic rays (PCRs) in the atmosphere. The high statistics of muons enables GRAPES-3 to make precise measurements of various sun-induced phenomena including coronal mass ejections (CMEs), Forbush decreases, geomagnetic storms (GMS) and atmospheric acceleration during the overhead passage of thunderclouds. However, the understanding and interpretation of observed data require Monte Carlo (MC) simulation of PCRs and the subsequent development of showers in the atmosphere. CORSIKA is a standard MC simulation code widely used for this purpose. However, these simulations are time consuming, as a large number of interactions and decays need to be taken into account at various stages of shower development from the top of the atmosphere down to ground level. Therefore, computing resources become an important consideration, particularly when billions of PCRs need to be simulated to match the high statistical accuracy of the data. During the GRAPES-3 simulations, it was observed that over 60% of simulated events never reach the Earth’s atmosphere. The geomagnetic field (GMF) imposes a threshold on PCRs called the cutoff rigidity Rc, a direction-dependent parameter below which PCRs can’t reach the Earth’s atmosphere. However, in CORSIKA there is no provision to set a direction-dependent threshold. We have devised an efficient method that takes this Rc dependence into account. A reduction by a factor of ~3 in simulation time and ~2 in output data size was achieved for GRAPES-3 simulations. This has been incorporated in CORSIKA from version v75600 onwards. The detailed implementation of this method, along with the potential benefits, is discussed in this work.
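
The idea behind the optimisation can be sketched in a few lines (ours, not the CORSIKA implementation): reject primaries whose rigidity R = pc/(Ze) falls below a direction-dependent geomagnetic cutoff before handing them to the expensive shower simulation. The cutoff function below is a toy stand-in for a real Rc table computed for the Ooty site.

```python
import numpy as np

def cutoff_rigidity_GV(zenith_deg, azimuth_deg):
    """Toy direction-dependent cutoff; a real Rc table for the site goes here."""
    return 17.0 + 5.0 * np.sin(np.radians(zenith_deg)) * np.cos(np.radians(azimuth_deg))

def keep_for_simulation(p_GeV, Z, zenith_deg, azimuth_deg):
    """Rigidity in GV for momentum in GeV/c and charge number Z."""
    return p_GeV / Z >= cutoff_rigidity_GV(zenith_deg, azimuth_deg)

rng = np.random.default_rng(3)
n = 100_000
p = rng.uniform(1.0, 100.0, n)            # proton momenta, GeV/c
zen = rng.uniform(0.0, 45.0, n)
azi = rng.uniform(0.0, 360.0, n)
kept = keep_for_simulation(p, 1, zen, azi)
print(f"fraction worth passing to CORSIKA: {kept.mean():.2f}")
```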

Read this paper on arXiv…

B. Hariharan, S. R. Dugad, S. K. Gupta, et. al.
Mon, 19 Aug 19
21/46

Comments: Exp Astron (2019)

Metallic liquid H3O in a thin-shell zone inside Uranus and Neptune [CL]

http://arxiv.org/abs/1908.05821


The Solar System harbors deep unresolved mysteries despite centuries of study. A highly intriguing case concerns the anomalous non-dipolar and non-axisymmetric magnetic fields of Uranus and Neptune, which have long eluded explanation by the prevailing theory. A thin-shell dynamo conjecture captures the observed phenomena but leaves the fundamental material basis and underlying mechanism unexplained. Here, we report the discovery of trihydrogen oxide (H3O) in a metallic liquid state stabilized at the extreme pressure and temperature conditions inside these icy planets. The calculated stability pressure field, compared with the known pressure-radius relations for Uranus and Neptune, places metallic liquid H3O in a thin-shell zone near the planetary cores. These findings from accurate quantum mechanical calculations rationalize the empirically conjectured thin-shell dynamo model and establish key physical benchmarks essential to elucidating the enigmatic magnetic-field anomaly of Uranus and Neptune, resolving a major mystery in planetary science.

Read this paper on arXiv…

P. Huang, H. Liu, J. Lv, et. al.
Mon, 19 Aug 19
44/46

Comments: N/A

Cosmological N-body simulations: a challenge for scalable generative models [CL]

http://arxiv.org/abs/1908.05519


Deep generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders (VAEs), have been demonstrated to produce images of high visual quality. However, the existing hardware on which these models are trained severely limits the size of the images that can be generated. The rapid growth of high-dimensional data in many fields of science therefore poses a significant challenge for generative models. In cosmology, the large-scale, 3D matter distribution, modeled with N-body simulations, plays a crucial role in understanding the evolution of structures in the universe. As these simulations are computationally very expensive, GANs have recently generated interest as a possible method to emulate these datasets, but they have so far been mostly limited to 2D data. In this work, we introduce a new benchmark for the generation of 3D N-body simulations, in order to stimulate new ideas in the machine learning community and move closer to the practical use of generative models in cosmology. As a first benchmark result, we propose a scalable GAN approach for training a generator of N-body 3D cubes. Our technique relies on two key building blocks: (i) splitting the generation of the high-dimensional data into smaller parts, and (ii) using a multi-scale approach that efficiently captures global image features that might otherwise be lost in the splitting process. We evaluate the performance of our model for the generation of N-body samples using various statistical measures commonly used in cosmology. Our results show that the proposed model produces samples of high visual quality, although the statistical analysis reveals that capturing rare features in the data poses significant problems for the generative models. We make the data, quality evaluation routines, and the proposed GAN architecture publicly available at https://github.com/nperraud/3DcosmoGAN
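
Building block (i) can be pictured as partitioning the simulation cube into sub-volumes small enough to fit in memory; a sketch of the bookkeeping only (the released pipeline at the linked repository additionally conditions each patch on its neighbourhood, which this toy version omits):

```python
import numpy as np

def split_cube(cube, patch=32):
    """Split a cubic 3D array into non-overlapping patch^3 sub-cubes."""
    n = cube.shape[0]
    assert n % patch == 0, "cube size must be divisible by patch size"
    m = n // patch
    return (cube.reshape(m, patch, m, patch, m, patch)
                .transpose(0, 2, 4, 1, 3, 5)
                .reshape(-1, patch, patch, patch))

def reassemble(patches, n):
    """Inverse of split_cube."""
    patch = patches.shape[1]
    m = n // patch
    return (patches.reshape(m, m, m, patch, patch, patch)
                   .transpose(0, 3, 1, 4, 2, 5)
                   .reshape(n, n, n))

cube = np.random.rand(128, 128, 128)
patches = split_cube(cube)                      # shape (64, 32, 32, 32)
assert np.allclose(reassemble(patches, 128), cube)
```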

Read this paper on arXiv…

N. Perraudin, A. Srivastava, A. Lucchi, et. al.
Fri, 16 Aug 19
3/54

Comments: N/A

Improved Coupling of Hydrodynamics and Nuclear Reactions via Spectral Deferred Corrections [CL]

http://arxiv.org/abs/1908.03661


Simulations in stellar astrophysics involve the coupling of hydrodynamics and nuclear reactions under a wide variety of conditions, from simmering convective flows to explosive nucleosynthesis. Numerical techniques such as operator splitting (most notably Strang splitting) are usually employed to couple the physical processes, but this can affect the accuracy of the simulation, particularly when the burning is vigorous. Furthermore, Strang splitting does not have a straightforward extension to higher-order integration in time. We present a new temporal integration strategy based on spectral deferred corrections (SDC) and describe the second- and fourth-order implementations in the open-source, finite-volume, compressible hydrodynamics code Castro. One notable advantage of these schemes is that they combine standard low-order discretizations for individual physical processes in a way that achieves an arbitrarily high order of accuracy. We demonstrate the improved accuracy of the new methods on several test problems of increasing complexity.
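
The mechanics of SDC are easiest to see on a scalar model problem y' = f_h(y) + f_r(y) standing in for hydrodynamics plus reactions: a low-order substepping method is swept repeatedly across the timestep, and each correction sweep raises the formal order by one, up to the order of the underlying quadrature. A self-contained sketch of this idea (not the Castro implementation):

```python
import numpy as np

# Model problem standing in for hydro + reactions: y' = f_h(y) + f_r(y).
f_h = lambda y: -y            # "hydrodynamics" piece (illustrative)
f_r = lambda y: -5.0 * y      # "reactions" piece (illustrative)
f   = lambda y: f_h(y) + f_r(y)

def sdc_step(y0, dt, sweeps=3):
    """One explicit SDC step on 3 Gauss-Lobatto nodes {0, 1/2, 1}."""
    tau = np.array([0.0, 0.5, 1.0]) * dt
    # Quadrature weights: integrals over [t_m, t_{m+1}] of the quadratic
    # interpolant through the three node values.
    S = dt * np.array([[ 5/24, 1/3, -1/24],
                       [-1/24, 1/3,  5/24]])
    y = np.full(3, y0)                       # provisional solution
    for _ in range(sweeps):
        F = f(y)
        ynew = np.empty_like(y)
        ynew[0] = y0
        for m in range(2):
            dtm = tau[m + 1] - tau[m]
            ynew[m + 1] = (ynew[m]
                           + dtm * (f(ynew[m]) - F[m])  # low-order correction
                           + S[m] @ F)                  # high-order quadrature
        y = ynew
    return y[-1]

# Convergence check against the exact solution y(t) = exp(-6 t).
for dt in (0.1, 0.05, 0.025):
    y = 1.0
    for _ in range(round(1.0 / dt)):
        y = sdc_step(y, dt)
    print(dt, abs(y - np.exp(-6.0)))
```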

Read this paper on arXiv…

M. Zingale, M. Katz, J. Bell, et. al.
Tue, 13 Aug 19
1/69

Comments: submitted to ApJ; supplementary Jupyter/SymPy notebook with some derivations here: this https URL

MAESTROeX: A Massively Parallel Low Mach Number Astrophysical Solver [CL]

http://arxiv.org/abs/1908.03634


We present MAESTROeX, a massively parallel solver for low Mach number astrophysical flows. The underlying low Mach number equation set allows for efficient, long-time integration of highly subsonic flows compared to compressible approaches. MAESTROeX is suitable for modeling full spherical stars as well as planar simulations of dynamics within localized regions of a star, and can robustly handle several orders of magnitude of density and pressure stratification. Previously, we described the development of the predecessor of MAESTROeX, called MAESTRO, in a series of papers. Here, we present a new, greatly simplified temporal integration scheme that retains the same order of accuracy as our previous approaches. We also explore the use of alternative spatial mappings of the one-dimensional base state onto the full Cartesian grid. The code leverages the new AMReX software framework for block-structured adaptive mesh refinement (AMR) applications, allowing for scalability to large fractions of leadership-class machines. Using our previous studies on the convective phase of single-degenerate progenitor models of Type Ia supernovae as a guide, we characterize the performance of the code and validate the new algorithmic features. Like MAESTRO, MAESTROeX is fully open source.

Read this paper on arXiv…

D. Fan, A. Nonaka, A. Almgren, et. al.
Tue, 13 Aug 19
5/69

Comments: Submitted to the Astrophysical Journal for publication

Biermann battery effects on the turbulent dynamo in colliding plasma jets produced by high-power lasers [CL]

http://arxiv.org/abs/1907.12889


The effect of the Biermann battery (BB) on turbulent magnetic field amplification in colliding plasma jets produced by high-power lasers is studied using the FLASH code. It is found that the BB can play a significant role in turbulent field amplification: the small-scale fluid structures introduced by turbulence allow the BB to amplify the magnetic field effectively. When the flow is perpendicular to the magnetic field, the amplification is shown to be greater than in the case where the flow is parallel.
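
For reference, the battery enters the induction equation through the baroclinic source term (standard form; sign conventions vary between references): $$\left(\frac{\partial\mathbf{B}}{\partial t}\right)_{\mathrm{BB}} = \frac{c}{e\,n_e^{2}}\,\nabla p_e \times \nabla n_e = -\frac{c\,k_B}{e\,n_e}\,\nabla n_e \times \nabla T_e.$$ The term vanishes wherever $\nabla n_e$ and $\nabla T_e$ are parallel, which is why the small-scale misaligned gradients generated by turbulence are what allow the battery to act.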

Read this paper on arXiv…

C. Ryu, H. Tuan and C. Kim
Wed, 31 Jul 19
13/65

Comments: N/A

Hazma: A Python Toolkit for Studying Indirect Detection of Sub-GeV Dark Matter [CL]

http://arxiv.org/abs/1907.11846


With several proposed MeV gamma-ray telescopes on the horizon, it is of paramount importance to perform accurate calculations of gamma-ray spectra expected from sub-GeV dark matter annihilation and decay. We present $\textbf{hazma}$, a python package for reliably computing these spectra, determining the resulting constraints from existing gamma-ray data, and prospects for upcoming telescopes. For high-level analyses, $\textbf{hazma}$ comes with several built-in dark matter models where the interactions between dark matter and hadrons have been determined in detail using chiral perturbation theory. Additionally, $\textbf{hazma}$ provides tools for computing spectra from individual final states with arbitrary numbers of light leptons and mesons, and for analyzing custom dark matter models. $\textbf{hazma}$ can also produce electron and positron spectra from dark matter annihilation, enabling precise derivation of constraints from the cosmic microwave background.
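
As a rough illustration of the intended high-level workflow (the module path, constructor arguments, and method names below are assumptions reconstructed from the paper's description, not verified against the hazma documentation):

```python
# Hypothetical hazma-style workflow; all names are assumptions, not the
# verified API -- consult the package documentation before use.
import numpy as np
from hazma.scalar_mediator import ScalarMediator  # assumed module path

# DM and mediator masses in MeV, couplings dimensionless (argument list assumed).
model = ScalarMediator(mx=200.0, ms=1e3, gsxx=1.0, gsff=0.1,
                       gsGG=0.5, gsFF=0.0, lam=1e5)

e_gams = np.geomspace(1.0, 300.0, 200)         # photon energies [MeV]
e_cm = 2.0 * 200.0 * 1.001                     # CM energy just above threshold
spectrum = model.total_spectrum(e_gams, e_cm)  # assumed dN/dE accessor
```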

Read this paper on arXiv…

A. Coogan, L. Morrison and S. Profumo
Tue, 30 Jul 19
57/79

Comments: 56 pages, 14 figures

High order symplectic integrators for planetary dynamics and their implementation in REBOUND [EPA]

http://arxiv.org/abs/1907.11335


Direct N-body simulations and symplectic integrators are effective tools to study the long-term evolution of planetary systems. The Wisdom-Holman (WH) integrator in particular has been used extensively in planetary dynamics as it allows for large timesteps at good accuracy. One can extend the WH method to achieve even higher accuracy using several different approaches. In this paper we survey integrators developed by Wisdom et al. (1996) and Laskar & Robutel (2001). Since some of these methods are harder to implement and not as readily available to astronomers compared to the standard WH method, they are not used as often. This is somewhat unfortunate given that in typical simulations it is possible to improve the accuracy by up to six orders of magnitude (!) compared to the standard WH method without the need for any additional force evaluations. To change this, we implement a variety of high order symplectic methods in the freely available N-body integrator REBOUND. In this paper we catalogue these methods, discuss their differences, describe their error scalings, and benchmark their speed using our implementations.
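
In practice, switching to a higher-order scheme in REBOUND is a one-line change of integrator settings. A sketch using the documented WHFast symplectic correctors (one of the Wisdom et al. 1996 ingredients surveyed here; the Laskar & Robutel splittings have their own option names, which may vary between REBOUND versions):

```python
import rebound

sim = rebound.Simulation()
sim.add(m=1.0)                       # central star
sim.add(m=1e-3, a=1.0, e=0.05)       # Jupiter-like planet
sim.add(m=3e-4, a=5.2, e=0.05)       # outer planet
sim.move_to_com()

# Wisdom-Holman splitting plus order-17 symplectic correctors: higher
# accuracy per step with no additional force evaluations.
sim.integrator = "whfast"
sim.ri_whfast.corrector = 17
sim.dt = 1e-2                        # small fraction of the inner orbital period
sim.integrate(1e4)
sim.status()                         # prints a summary of the final state
```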

Read this paper on arXiv…

H. Rein, D. Tamayo and G. Brown
Mon, 29 Jul 19
1/52

Comments: 10 pages, 2 figures, submitted, code to reproduce figures available at this https URL

Beyond the Runge-Kutta-Wentzel-Kramers-Brillouin method [CL]

http://arxiv.org/abs/1907.11638


We explore higher dimensional generalisations of the Runge-Kutta-Wentzel-Kramers-Brillouin method for integrating coupled systems of first-order ordinary differential equations with highly oscillatory solutions. Such methods could improve the performance and adaptability of the codes which are used to compute numerical solutions to the Einstein-Boltzmann equations. We test Magnus expansion-based methods on the Einstein-Boltzmann equations for a simple universe model dominated by photons with a small amount of cold dark matter. The Magnus expansion methods achieve an increase in run speed of about 50% compared to a standard Runge-Kutta integration method. A comparison of approximate solutions derived from the Magnus expansion and the Wentzel-Kramers-Brillouin (WKB) method implies that the two are distinct mathematical approaches. Simple Magnus expansion solutions show inferior long-range accuracy compared to WKB. However, we also demonstrate how one can improve on the standard Magnus approach to obtain a new “Jordan-Magnus” method. This has WKB-like performance on simple two-dimensional systems, although its higher-dimensional generalisation remains elusive.
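
For a linear system $\dot{y} = A(t)\,y$, the Magnus expansion writes the solution as $y(t) = \exp[\Omega(t)]\,y(0)$ with (standard result, quoted for orientation) $$\Omega_1(t) = \int_0^t A(t_1)\,\mathrm{d}t_1, \qquad \Omega_2(t) = \frac{1}{2}\int_0^t\!\int_0^{t_1}\big[A(t_1),\,A(t_2)\big]\,\mathrm{d}t_2\,\mathrm{d}t_1, \;\ldots$$ so that truncations of the series still produce a true matrix exponential, preserving structural properties of the exact flow.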

Read this paper on arXiv…

J. Bamber and W. Handley
Mon, 29 Jul 19
11/52

Comments: 13 pages, 9 figures, submitted to PRD

Beyond second-order convergence in simulations of magnetised binary neutron stars with realistic microphysics [HEAP]

http://arxiv.org/abs/1907.10328


We investigate the impact of using high-order numerical methods to study the merger of magnetised neutron stars with finite-temperature microphysics and neutrino cooling in full general relativity. By implementing a fourth-order accurate conservative finite-difference scheme we model the inspiral together with the early post-merger and highlight the differences to traditional second-order approaches at the various stages of the simulation. We find that even for finite-temperature equations of state, convergence orders higher than second order can be achieved in the inspiral and post-merger for the gravitational-wave phase. We further demonstrate that the second-order scheme overestimates the amount of proton-rich shock-heated ejecta, which can have an impact on the modelling of the dynamical part of the kilonova emission. Finally, we show that already at low resolution the growth rate of the magnetic energy is consistently resolved by using a fourth-order scheme.

Read this paper on arXiv…

E. Most, L. Papenfort and L. Rezzolla
Thu, 25 Jul 19
58/72

Comments: 12 pages, 9 figures

Advanced Astrophysics Discovery Technology in the Era of Data Driven Astronomy [IMA]

http://arxiv.org/abs/1907.10558


Experience suggests that structural issues in how institutional Astrophysics approaches data-driven science and the development of discovery technology may be hampering the community’s ability to respond effectively to a rapidly changing environment, in which increasingly complex, heterogeneous datasets are challenging our existing information infrastructure and traditional approaches to analysis. We stand at the confluence of a new epoch of multimessenger science, remote co-location of data and processing power, and new observing strategies based on miniaturized spacecraft. Significant effort will be required by the community to adapt to this rapidly evolving range of possible modes of discovery. In the suggested creation of a new Astrophysics element, Advanced Astrophysics Discovery Technology, we offer an affirmative solution that places the visibility of discovery technologies at a level that we suggest is fully commensurate with their importance to the future of the field.

Read this paper on arXiv…

R. Barry, J. Babu, J. Baker, et. al.
Thu, 25 Jul 19
72/72

Comments: White paper submitted to the ASTRO2020 decadal survey

General relativistic resistive magnetohydrodynamics with robust primitive variable recovery for accretion disk simulations [CL]

http://arxiv.org/abs/1907.07197


Recent advances in black hole astrophysics, particularly the first visual evidence of a supermassive black hole at the center of the galaxy M87 by the Event Horizon Telescope (EHT) and the detection of an orbiting “hot spot” near the event horizon of Sgr A* in the Galactic center by the Gravity Collaboration, require the development of novel numerical methods to understand the underlying plasma microphysics. Non-thermal emission related to such hot spots is conjectured to originate from plasmoids that form due to magnetic reconnection in thin current layers in the innermost accretion zone. Resistivity plays a crucial role in current sheet formation, magnetic reconnection, and plasmoid growth in black hole accretion disks and jets. We include resistivity in the three-dimensional general-relativistic magnetohydrodynamics (GRMHD) code BHAC and present the implementation of an Implicit-Explicit (IMEX) scheme to treat the stiff resistive source terms of the GRMHD equations. The algorithm is tested in combination with adaptive mesh refinement to resolve the resistive scales and a constrained transport method to keep the magnetic field solenoidal. Several novel methods for primitive variable recovery, a key part of relativistic magnetohydrodynamics codes, are presented and compared for accuracy, robustness, and efficiency. We propose a new inversion strategy that allows for resistive-GRMHD simulations of low gas-to-magnetic pressure ratio and highly magnetized regimes, as applicable to black hole accretion disks, jets, and neutron star magnetospheres. We apply the new scheme to study the effect of resistivity on accreting black holes, accounting for dissipative effects such as reconnection.
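
Schematically, the IMEX idea is to advance the non-stiff fluxes explicitly while solving the stiff resistive source implicitly; a generic first-order sketch (not the particular scheme implemented in BHAC) reads $$U^{n+1} = U^{n} + \Delta t\,F_{\mathrm{adv}}\!\left(U^{n}\right) + \Delta t\,S_{\mathrm{res}}\!\left(U^{n+1}\right),$$ so the global timestep is set by the advective CFL condition rather than by the short resistive relaxation timescale $\sim\eta$.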

Read this paper on arXiv…

B. Ripperda, F. Bacchini, O. Porth, et. al.
Thu, 18 Jul 19
18/64

Comments: Submitted to ApJS

Entropy Symmetrization and High-Order Accurate Entropy Stable Numerical Schemes for Relativistic MHD Equations [CL]

http://arxiv.org/abs/1907.07467


This paper presents entropy symmetrization and high-order accurate entropy stable schemes for the relativistic magnetohydrodynamic (RMHD) equations. It is shown that the conservative RMHD equations are not symmetrizable and do not possess an entropy pair. To address this issue, a symmetrizable RMHD system, which admits a convex entropy pair, is proposed by adding a source term to the equations. Arbitrarily high-order accurate entropy stable finite difference schemes are then developed on Cartesian meshes based on the symmetrizable RMHD system. The crucial ingredients of these schemes include (i) affordable explicit entropy conservative fluxes, derived through carefully selected parameter variables, (ii) a special high-order discretization of the source term in the symmetrizable RMHD system, and (iii) suitable high-order dissipative operators based on essentially non-oscillatory reconstruction to ensure entropy stability. Several benchmark numerical tests demonstrate the accuracy and robustness of the proposed entropy stable schemes for the symmetrizable RMHD equations.
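
For context, an entropy pair $(\eta, q)$ for a system $\partial_t U + \nabla\cdot f(U) = 0$ is a convex function $\eta(U)$ together with a flux $q(U)$ satisfying the compatibility condition $q_i'(U) = \eta'(U)\,f_i'(U)$, and entropy stable schemes enforce a discrete analogue of $$\partial_t\,\eta(U) + \nabla\cdot q(U) \le 0.$$ The point of the added source term is to restore symmetrizability, and with it the existence of such a pair, in a manner analogous in spirit to the Godunov-Powell term in non-relativistic MHD.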

Read this paper on arXiv…

K. Wu and C. Shu
Thu, 18 Jul 19
43/64

Comments: 37 pages, 8 figures

SCALAR: an AMR code to simulate axion-like dark matter models [CL]

http://arxiv.org/abs/1906.12160


We present a new code, SCALAR, based on the high-resolution hydrodynamics and N-body code RAMSES, to solve the Schrödinger equation on adaptively refined meshes. The code is intended for simulating axion or fuzzy dark matter models, where the evolution of the dark matter component is determined by a coupled Schrödinger-Poisson system, but it can also be used as a stand-alone solver for both linear and non-linear Schrödinger equations with any given external potential. This paper describes the numerical implementation of our solver and presents tests to demonstrate how accurately it operates.
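
For reference, the governing system is the Schrödinger-Poisson pair, written here in physical units and for one common normalisation convention (SCALAR evolves the comoving analogue): $$i\hbar\,\partial_t\psi = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + m\Phi\,\psi, \qquad \nabla^{2}\Phi = 4\pi G\rho, \qquad \rho = m\,|\psi|^{2}.$$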

Read this paper on arXiv…

M. Mina, D. Mota and H. Winther
Mon, 1 Jul 19
8/52

Comments: 13 pages, 6 figures

Collisional excitation of NH(3Σ-) by Ar: A new ab initio 3D potential energy surface and scattering calculations [CL]

http://arxiv.org/abs/1906.11474


Collisional excitation of light hydrides is important to fully understand the complex chemical and physical processes of atmospheric and astrophysical environments. Here, we focus on the NH(X$^3\Sigma^-$)-Ar van der Waals system. First, we have calculated a new three-dimensional Potential Energy Surface (PES), which explicitly includes the NH bond vibration. We carried out the ab initio calculations of the PES employing the open-shell single- and double-excitation coupled cluster method with a noniterative perturbative treatment of triple excitations. To achieve better accuracy, we first obtained the energies using the augmented correlation-consistent aug-cc-pVXZ (X = T, Q, 5) basis sets and then extrapolated the final values to the complete basis set (CBS) limit. We have also studied the collisional excitation of NH(X$^3\Sigma^-$) by Ar at the close-coupling level, employing our new PES. We calculated collisional excitation cross sections of the fine-structure levels of NH by Ar for energies up to 3000 cm-1. After thermally averaging the cross sections, we obtained the rate coefficients for temperatures up to 350 K. The propensity rules between the fine-structure levels are in good agreement with those of similar collisional systems, even though they are not as strong and pronounced as for lighter systems, such as NH-He. The final theoretical values are also compared with the few available experimental data.
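
The CBS limit referred to here is commonly obtained with the standard inverse-cubic extrapolation of the correlation energy over consecutive cardinal numbers $X$ (a widely used recipe, quoted for orientation; the authors' exact scheme may differ in detail): $$E_{\mathrm{corr}}(X) = E_{\mathrm{corr}}^{\mathrm{CBS}} + B\,X^{-3}, \qquad X = 3, 4, 5 \;(\mathrm{T, Q, 5}),$$ while the Hartree-Fock component converges roughly exponentially and is usually extrapolated separately.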

Read this paper on arXiv…

D. Prudenzano, F. Lique, R. Ramachandran, et. al.
Fri, 28 Jun 19
36/65

Comments: N/A

A Rayleigh-Ritz method based approach to computing seismic normal modes in the presence of an essential spectrum [CL]

http://arxiv.org/abs/1906.11082


An approach based on the Rayleigh-Ritz and mixed Continuous Galerkin finite-element methods is presented to compute the normal modes of a planet in the presence of an essential spectrum. The essential spectrum is associated with a liquid outer core, whose presence necessitates the mixed Continuous Galerkin formulation. Our discretization utilizes fully unstructured tetrahedral meshes for both the solid and fluid domains. The resulting generalized eigenvalue problem is solved by a combination of several highly parallel, computationally efficient methods. Self-gravitation is treated as an N-body problem, and the relevant gravitational potential is evaluated directly and efficiently utilizing the fast multipole method. Computational experiments are performed on constant elastic balls and on the isotropic version of the preliminary reference earth model (PREM) for validation. Our proposed algorithm is illustrated on fully heterogeneous models, including one combined with CRUST 1.0.
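
After discretization, the computation reduces to a generalized eigenvalue problem of the familiar form $$A\,\mathbf{u} = \omega^{2}\,M\,\mathbf{u},$$ with stiffness matrix $A$, mass matrix $M$, and mode frequency $\omega$; the essential spectrum contributed by the liquid outer core is what complicates the discrete problem and motivates the mixed formulation.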

Read this paper on arXiv…

J. Shi, R. Li, Y. Xi, et. al.
Thu, 27 Jun 19
12/62

Comments: N/A

Hierarchical Particle-Mesh: an FFT-accelerated Fast Multipole Method [CL]

http://arxiv.org/abs/1906.10734


I describe a modification to the original Fast Multipole Method (FMM) of Greengard & Rokhlin that approximates the gravitational field of an FMM cell as a small uniform grid (a “gridlet”) of effective masses. The effective masses on a gridlet are set from the requirement that the multipole moments of the FMM cell are reproduced exactly, hence preserving the accuracy of the gravitational field representation. The gravitational field from a multipole expansion can then be computed for all multipole orders simultaneously, with a single Fast Fourier Transform, significantly reducing the computational cost at a given value of the required accuracy. The described approach belongs to the class of “kernel independent” variants of the FMM method and works with any Green function.
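
The core step can be written as a small linear solve: choose the gridlet's effective masses so that its multipole moments match those of the particles in the cell. A 2D toy version (the paper works in 3D and couples the gridlets to an FFT-based field evaluation):

```python
import numpy as np

def gridlet_masses(pos, mass, g=4, p=3):
    """Solve for g*g effective masses on a uniform gridlet such that all
    multipole moments x^a y^b with a+b <= p match the particle moments."""
    gx, gy = np.meshgrid(np.linspace(-0.5, 0.5, g), np.linspace(-0.5, 0.5, g))
    gx, gy = gx.ravel(), gy.ravel()
    rows, rhs = [], []
    for a in range(p + 1):
        for b in range(p + 1 - a):
            rows.append(gx**a * gy**b)
            rhs.append(np.sum(mass * pos[:, 0]**a * pos[:, 1]**b))
    A, rhs = np.array(rows), np.array(rhs)           # 10 moments, 16 unknowns
    m_eff, *_ = np.linalg.lstsq(A, rhs, rcond=None)  # min-norm exact fit
    return np.column_stack([gx, gy]), m_eff

rng = np.random.default_rng(1)
pos = rng.uniform(-0.5, 0.5, size=(100, 2))
mass = rng.uniform(0.5, 1.0, size=100)
grid, m_eff = gridlet_masses(pos, mass)
# Verify: the gridlet reproduces e.g. the total mass and x-dipole exactly.
print(m_eff.sum(), mass.sum())
print(m_eff @ grid[:, 0], np.sum(mass * pos[:, 0]))
```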

Read this paper on arXiv…

N. Gnedin
Thu, 27 Jun 19
22/62

Comments: Accepted for publication in the ApJ

Unconventional phase III of high-pressure solid hydrogen [CL]

http://arxiv.org/abs/1906.10854


We reassess the phase diagram of high-pressure solid hydrogen using mean-field and many-body wave-function-based approaches to determine the nature of phase III of solid hydrogen. To identify the best candidates for phase III, Density Functional Theory with the non-empirical, strongly constrained and appropriately normed (SCAN) meta-generalized-gradient-approximation (meta-GGA) exchange-correlation (XC) functional is employed. We study eleven molecular structures with different symmetries, which are the most competitive phases within the pressure range of 100 to 500 GPa. The SCAN phase diagram predicts that the $C2/c-24$ and $P6_122-36$ structures are the best candidates for phase III, with an energy difference of less than 1 meV/atom. To verify the stability of the competitive insulating structures $C2/c-24$ and $P6_122-36$, we apply diffusion quantum Monte Carlo (DMC) to optimise the percentage of exact exchange ($\alpha$) in the trial many-body wave function. We find that the optimised $\alpha$ equals $40\%$; the corresponding XC functional is named PBE$_{1x}$. The energy gain with respect to the conventional hybrid functional (PBE$_0$) with $\alpha = 25\%$ varies with density and structure. The PBE$_{1x}$-DMC enthalpy-pressure phase diagram predicts that the $P6_122-36$ structure is stable up to 210 GPa, where it transforms to $C2/c-24$. We predict that phase III of high-pressure solid hydrogen is polymorphic.

Read this paper on arXiv…

S. Azadi and T. Kuehne
Thu, 27 Jun 19
62/62

Comments: N/A

Fast quasi-periodic oscillations in the eclipsing polar VV Puppis from VLT and XMM-Newton observations [HEAP]

http://arxiv.org/abs/1906.06985


We present high time resolution optical photometric data of the polar VV Puppis obtained simultaneously in three filters (u’, HeII $\lambda$4686, r’) with the ULTRACAM camera mounted at the ESO-VLT telescope. An analysis of a long 50 ks XMM-Newton observation of the source, retrieved from the database, is also provided. Quasi-periodic oscillations (QPOs) are clearly detected in the optical during the source bright phase intervals when the accreting pole is visible, confirming the association of the QPOs with the base of the accretion column. QPOs are detected in the three filters at a mean frequency of $\sim$ 0.7 Hz with a similar amplitude of $\sim$ 1\%. Mean orbitally-averaged power spectra during the bright phase show a rather broad excess with a quality factor Q = $\nu$/$\Delta \nu$ = 5-7, but smaller data segments commonly show much higher coherency, with Q up to 30. The XMM (0.5–10 keV) observation provides the first accurate estimation of the hard X-ray component, with a high kT $\sim$ 40 keV temperature, and confirms the high EUV-soft/hard ratio in the range (4–15) for VV Pup. The detailed X-ray orbital light curve displays a short $\Delta \phi \simeq 0.05$ ingress into self-eclipse of the active pole, indicative of an accretion shock height of $\sim$ 75 km. No significant X-ray QPOs are detected, with an amplitude upper limit of $\sim$30\% in the range (0.1–5) Hz. Detailed hydrodynamical numerical simulations of the post-shock accretion region with parameters consistent with VV Pup demonstrate that the expected frequencies from the radiative instability are identical in the X-ray and optical regimes, at values $\nu \sim$ (40–70) Hz, more than one order of magnitude higher than observed. This confirms previous statements suggesting that present instability models are unable to explain the full QPO characteristics within the parameters commonly known for polars.

Read this paper on arXiv…

J. Bonnet-Bidaud, M. Mouchet, E. Falize, et. al.
Tue, 18 Jun 19
18/73

Comments: 11 pages, 10 figures, accepted for publication in A&A

Runko: Modern multi-physics toolbox for simulating plasma [CL]

http://arxiv.org/abs/1906.06306


Runko is a new open-source plasma simulation framework implemented in C++ and Python. It is designed to function as an easy-to-extend general toolbox for simulating astrophysical plasmas with different theoretical and numerical models. Computationally intensive low-level “kernels” are written in modern C++14, taking advantage of polymorphic classes, multiple inheritance, and template metaprogramming. High-level functionality is operated with Python3 scripts. This hybrid program design ensures fast code and ease of use. The framework has a modular object-oriented design that allows the user to easily add new numerical algorithms to the system. The code can be run on various computing platforms, ranging from laptops (shared-memory systems) to massively parallel supercomputer architectures (distributed-memory systems). The framework also supports heterogeneous multi-physics simulations in which different physical solvers can be combined and run simultaneously. Here we report on the first results from the framework’s relativistic particle-in-cell (PIC) module. Using the PIC module, we simulate decaying relativistic kinetic turbulence in suddenly stirred, magnetically-dominated pair plasma. We show that the resulting particle distribution can be separated into a thermal part that forms the turbulent cascade and a separate, decoupled non-thermal particle population that acts as an energy sink for the system.

Read this paper on arXiv…

J. Nättilä
Mon, 17 Jun 19
24/53

Comments: 17 pages, 6 figures. Comments welcome! Code available from this https URL

An efficient method for solving highly oscillatory ordinary differential equations with applications to physical systems [CL]

http://arxiv.org/abs/1906.01421


We present a novel numerical routine (oscode) with a C++ and Python interface for the efficient solution of one-dimensional, second-order, ordinary differential equations with rapidly oscillating solutions. The method is based on a Runge-Kutta-like stepping procedure that makes use of the Wentzel-Kramers-Brillouin (WKB) approximation to skip regions of integration where the characteristic frequency varies slowly. In regions where this is not the case, the method is able to switch to a made-to-measure Runge-Kutta integrator that minimises the total number of function evaluations. We demonstrate the effectiveness of the method with example solutions of the Airy equation and an equation exhibiting a burst of oscillations, discussing the error properties of the method in detail. We then show the method applied to physical systems. First, the one-dimensional, time-independent Schr\”odinger equation is solved as part of a shooting method to search for the energy eigenvalues for a potential with quartic anharmonicity. Then, the method is used to solve the Mukhanov-Sasaki equation describing the evolution of cosmological perturbations, and the primordial power spectrum of the perturbations is computed in different cosmological scenarios. We compare the performance of our solver in calculating a primordial power spectrum of scalar perturbations to that of BINGO, an efficient code specifically designed for such applications.
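
As an illustration, the Airy equation $\ddot{x} + t\,x = 0$ has slowly varying frequency $\omega(t) = \sqrt{t}$ and no friction, exactly the regime where WKB steps leap over many oscillations. A sketch of driving the solver on this problem (the pyoscode call signature and return keys are assumptions from memory; check the linked repository):

```python
# NOTE: the pyoscode interface used below is assumed, not verified.
import numpy as np
import pyoscode
from scipy.special import airy

ts = np.linspace(1.0, 1e4, 50000)
ws = np.sqrt(ts)             # frequency omega(t) on a grid
gs = np.zeros_like(ts)       # no damping term

# Exact initial data from the Airy functions: x(t) = Ai(-t).
ai, aip, bi, bip = airy(-1.0)
sol = pyoscode.solve(ts, ws, gs, ti=1.0, tf=1e4, x0=ai, dx0=-aip)
print(sol["t"][-1], sol["sol"][-1])   # dict keys assumed
```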

Read this paper on arXiv…

F. Agocs, W. Handley, A. Lasenby, et. al.
Tue, 11 Jun 19
11/60

Comments: 23 pages, 15 figures. Submitted to Physical Review D. The associated code is available online at this https URL

Learning Radiative Transfer Models for Climate Change Applications in Imaging Spectroscopy [CL]

http://arxiv.org/abs/1906.03479


According to a recent investigation, an estimated 33-50% of the world’s coral reefs have undergone degradation, believed to be a result of climate change. Greenhouse gases such as methane are strong drivers of climate change and the subsequent environmental impact. However, the exact relation of climate change to environmental conditions cannot be easily established. Remote sensing methods are increasingly being used to quantify and draw connections between rapidly changing climatic conditions and environmental impact. A crucial part of this analysis is processing spectroscopy data using radiative transfer models (RTMs), which is a computationally expensive process that limits their use with high-volume imaging spectrometers. This work presents an algorithm that can efficiently emulate RTMs using neural networks, leading to a multifold speedup in processing time and yielding multiple downstream benefits.
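
The emulation idea itself is compact: train a regressor on pairs of RTM inputs (atmospheric and surface parameters) and RTM outputs (radiance spectra), then replace the expensive RTM call at inference time. A generic sketch with scikit-learn and synthetic stand-in data (the paper's architecture and training setup are more elaborate):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-ins for RTM inputs (e.g. water vapour, aerosol depth, geometry)
# and outputs (a 50-channel radiance spectrum); a real emulator would be
# trained on actual RTM runs.
X = rng.uniform(0.0, 1.0, size=(5000, 4))
wl = np.linspace(0.0, 1.0, 50)
Y = np.sin(4 * np.pi * np.outer(X[:, 0], wl)) * X[:, 1:2] + 0.3 * X[:, 2:3]

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
emulator = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500,
                        random_state=0)
emulator.fit(X_tr, Y_tr)                  # slow, done once
spectra = emulator.predict(X_te)          # fast, replaces the RTM calls
print("RMSE:", np.sqrt(np.mean((spectra - Y_te) ** 2)))
```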

Read this paper on arXiv…

S. Deshpande, B. Bue, D. Thompson, et. al.
Tue, 11 Jun 19
27/60

Comments: Accepted to International Conference on Machine Learning (ICML) 2019 Workshop: Climate Change: How Can AI Help?

The formation and dissipation of current sheets and shocks due to compressive waves in a stratified atmosphere containing a magnetic null [SSA]

http://arxiv.org/abs/1906.02317


We study the propagation and dissipation of magnetohydrodynamic waves in a set of numerical models that each include a solar-like stratified atmosphere and a magnetic field with a null point. All simulations have the same magnetic field configuration but different transition region heights. Compressive wave packets introduced in the photospheric portion of the simulations refract towards the null and collapse it into a current sheet, which then undergoes reconnection. The collapsed null forms a current sheet due to a strong magnetic pressure gradient caused by the inability of magnetic perturbations to cross the null. Although the null current sheet undergoes multiple reconnection episodes due to repeated reflections off the lower boundary, we find no evidence of oscillatory reconnection arising from the dynamics of the null itself. Wave mode conversion around the null generates a series of slow mode shocks localized near each separatrix. The shock strength is asymmetric across each separatrix, and subsequent shock damping therefore creates a tangential discontinuity across each separatrix, with long-lived current densities. A parameter study of the wave energy injected to reach the null confirms our previous WKB estimates. Finally, using current estimates of the photospheric acoustic power, we estimate that the shock and Ohmic heating we describe may account for $\approx1-10\%$ of the radiative losses from coronal bright points with similar topologies, and are similarly insufficient to account for losses from larger structures such as ephemeral regions. At the same time, the dynamics are comparable to proposed mechanisms for generating type-II spicules.

Read this paper on arXiv…

L. Tarr and M. Linton
Fri, 7 Jun 19
1/49

Comments: 24 pages, 20 figures. Accepted for publication in ApJ

hankel: A Python library for performing simple and accurate Hankel transformations [IMA]

http://arxiv.org/abs/1906.01088


This paper presents \textsc{hankel}, a pure-python code for solving Hankel-type integrals and transforms. Such transforms are common in the physical sciences, especially appearing as the radial solution to angularly symmetric Fourier Transforms in arbitrary dimensions. The code harnesses the advantages of solving such transforms via the one-dimensional Hankel transform — an increase in conceptual simplicity and efficiency — and implements them in the user-friendly and flexible Python language. We discuss several limitations of the adopted method, and point to the code’s extensive documentation for further examples.
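
Typical usage follows the package's build-once, transform-many pattern (sketched from memory of the project documentation; the quadrature parameters N and h are illustrative, not tuned):

```python
import numpy as np
from hankel import HankelTransform

# Order-zero Hankel transform of a Gaussian; analytically exp(-x^2/2)
# is its own transform, which makes a convenient check.
ht = HankelTransform(nu=0,      # order of the Bessel kernel
                     N=300,     # number of quadrature nodes
                     h=1e-3)    # quadrature step
k = np.logspace(-1, 1, 10)
Fk = ht.transform(lambda x: np.exp(-x**2 / 2), k, ret_err=False)
print(np.max(np.abs(Fk - np.exp(-k**2 / 2))))   # should be small
```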

Read this paper on arXiv…

S. Murray and F. Poulin
Wed, 5 Jun 19
67/74

Comments: N/A