Provably convergent Newton-Raphson methods for recovering primitive variables with applications to physical-constraint-preserving Hermite WENO schemes for relativistic hydrodynamics [CL]

http://arxiv.org/abs/2305.14805


The relativistic hydrodynamics (RHD) equations have three crucial intrinsic physical constraints on the primitive variables: positivity of pressure and density, and subluminal fluid velocity. However, numerical simulations can violate these constraints, leading to nonphysical results or even simulation failure. Designing genuinely physical-constraint-preserving (PCP) schemes is very difficult, as the primitive variables cannot be explicitly reformulated using conservative variables due to relativistic effects. In this paper, we propose three efficient Newton–Raphson (NR) methods for robustly recovering primitive variables from conservative variables. Importantly, we rigorously prove that these NR methods are always convergent and PCP, meaning they preserve the physical constraints throughout the NR iterations. The discovery of these robust NR methods and their PCP convergence analyses are highly nontrivial and technical. As an application, we apply the proposed NR methods to design PCP finite volume Hermite weighted essentially non-oscillatory (HWENO) schemes for solving the RHD equations. Our PCP HWENO schemes incorporate high-order HWENO reconstruction, a PCP limiter, and strong-stability-preserving time discretization. We rigorously prove the PCP property of the fully discrete schemes using convex decomposition techniques. Moreover, we suggest the characteristic decomposition with rescaled eigenvectors and scale-invariant nonlinear weights to enhance the performance of the HWENO schemes in simulating large-scale RHD problems. Several demanding numerical tests are conducted to demonstrate the robustness, accuracy, and high resolution of the proposed PCP HWENO schemes and to validate the efficiency of our NR methods.
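
For readers unfamiliar with why the recovery is iterative at all: below is a minimal sketch of the classical pressure-based Newton-Raphson recovery for 1D special RHD with an ideal-gas EOS. It is illustrative only and is not one of the provably convergent, physical-constraint-preserving NR methods constructed in the paper; the function and variable names are our own.

```python
import numpy as np

def recover_primitives(D, S, tau, gamma=5.0 / 3.0, tol=1e-12, max_iter=100):
    """Recover (rho, v, p) from conserved (D, S, tau) for 1D special RHD with
    p = (gamma - 1) * rho * eps, via a Newton iteration on the pressure.
    Textbook recovery for illustration; NOT the paper's provably convergent
    PCP variants."""
    E = tau + D                              # total energy, E = rho*h*W^2 - p

    def residual(p):
        v = S / (E + p)                      # velocity, |v| < 1
        W = 1.0 / np.sqrt(1.0 - v * v)       # Lorentz factor
        rho = D / W
        h = (E + p) / (D * W)                # specific enthalpy
        eps = h - 1.0 - p / rho              # specific internal energy
        return (gamma - 1.0) * rho * eps - p # EOS consistency residual

    p_floor = max(abs(S) - E, 0.0) + 1e-12   # keeps |v| < 1 during iteration
    p = max((gamma - 1.0) * tau, p_floor)    # crude initial guess
    for _ in range(max_iter):
        f = residual(p)
        dp = 1e-8 * max(p, 1e-14)
        dfdp = (residual(p + dp) - f) / dp   # finite-difference derivative
        p_new = max(p - f / dfdp, p_floor)
        if abs(p_new - p) <= tol * max(p, 1e-30):
            p = p_new
            break
        p = p_new
    v = S / (E + p)
    W = 1.0 / np.sqrt(1.0 - v * v)
    return D / W, v, p
```

Unlike the schemes analysed in the paper, nothing in this sketch guarantees convergence or positivity for every admissible conserved state; that is exactly the gap the proposed NR methods close.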

Read this paper on arXiv…

C. Cai, J. Qiu and K. Wu
Thu, 25 May 23
64/64

Comments: 49 pages

Panchromatic simulated galaxy observations from the NIHAO project [GA]

http://arxiv.org/abs/2305.10232


We present simulated galaxy spectral energy distributions (SEDs) from the far ultraviolet through the far infrared, created using hydrodynamic simulations and radiative transfer calculations, suitable for the validation of SED modeling techniques. SED modeling is an essential tool for inferring star formation histories from nearby galaxy observations, but is fraught with difficulty due to our incomplete understanding of stellar populations, chemical enrichment processes, and the non-linear, geometry-dependent effects of dust on our observations. Our simulated SEDs will allow us to assess the accuracy of these inferences against galaxies with known ground truth. To create the SEDs, we use simulated galaxies from the NIHAO suite and the radiative transfer code SKIRT. We explore different sub-grid post-processing recipes, using color distributions and their dependence on the axis ratios of galaxies in the nearby universe to tune and validate them. We find that sub-grid post-processing recipes that mitigate limitations in the temporal and spatial resolution of the simulations are required for producing FUV to FIR photometry that statistically reproduces the colors of galaxies in the nearby universe. With this paper we release resolved photometry and spatially integrated spectra for our sample galaxies, each from a range of different viewing angles. Our simulations predict that there is a large variation in attenuation laws among galaxies, and that, from any particular viewing angle, the energy balance between dust attenuation and reemission can be violated by up to a factor of 3. These features are likely to affect SED modeling accuracy.

Read this paper on arXiv…

N. Faucher, M. Blanton and A. Macciò
Thu, 18 May 23
19/67

Comments: N/A

Identification and Classification of Exoplanets Using Machine Learning Techniques [EPA]

http://arxiv.org/abs/2305.09596


NASA’s Kepler Space Telescope has been instrumental in the task of finding the presence of exoplanets in our galaxy. This search has been supported by computational data analysis to identify exoplanets from the signals received by the Kepler telescope. In this paper, we consider building upon some existing work on exoplanet identification using residual networks for the data of the Kepler space telescope and its extended mission K2. This paper aims to explore how deep learning algorithms can help in classifying the presence of exoplanets with a smaller amount of data in one case and a more extensive variety of data in another. In addition to the standard CNN-based method, we propose a Siamese architecture that is particularly useful in addressing classification in a low-data scenario. The CNN and ResNet algorithms achieved an average accuracy of 68% for three-class and 86% for two-class classification. However, for both the three-class and two-class cases, the Siamese algorithm achieved 99% accuracy.
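
As a rough illustration of the Siamese idea (two identical encoders sharing weights, trained on pair similarity), here is a minimal PyTorch sketch; the encoder layout, input length, and loss are assumptions, since the abstract does not specify the architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightCurveEncoder(nn.Module):
    """Small 1D-CNN encoder for a flux time series (hypothetical layout)."""
    def __init__(self, n_in=2048, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.Flatten(),
            nn.Linear(32 * (n_in // 16), emb_dim),
        )

    def forward(self, x):            # x: (batch, 1, n_in)
        return self.net(x)

def contrastive_loss(z1, z2, same, margin=1.0):
    """Pull same-class pairs together, push different-class pairs apart."""
    d = F.pairwise_distance(z1, z2)
    return torch.mean(same * d**2 + (1.0 - same) * F.relu(margin - d)**2)

# Siamese step: the *same* encoder embeds both light curves of a pair.
encoder = LightCurveEncoder()
x1, x2 = torch.randn(8, 1, 2048), torch.randn(8, 1, 2048)
same = torch.randint(0, 2, (8,)).float()
loss = contrastive_loss(encoder(x1), encoder(x2), same)
loss.backward()
```

Training on pairs multiplies the number of effective training examples, which is the usual motivation for this architecture in low-data settings.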

Read this paper on arXiv…

P. G and A. Kumari
Wed, 17 May 23
28/67

Comments: 16 pages, 3 figures

Gradient-Annihilated PINNs for Solving Riemann Problems: Application to Relativistic Hydrodynamics [CL]

http://arxiv.org/abs/2305.08448


We present a novel methodology based on Physics-Informed Neural Networks (PINNs) for solving systems of partial differential equations admitting discontinuous solutions. Our method, called Gradient-Annihilated PINNs (GA-PINNs), introduces a modified loss function that requires the model to partially ignore high-gradients in the physical variables, achieved by introducing a suitable weighting function. The method relies on a set of hyperparameters that control how gradients are treated in the physical loss and how the activation functions of the neural model are dynamically accounted for. The performance of our GA-PINN model is demonstrated by solving Riemann problems in special relativistic hydrodynamics, extending earlier studies with PINNs in the context of the classical Euler equations. The solutions obtained with our GA-PINN model correctly describe the propagation speeds of discontinuities and sharply capture the associated jumps. We use the relative $l^{2}$ error to compare our results with the exact solution of special relativistic Riemann problems, used as the reference “ground truth”, and with the error obtained with a second-order, central, shock-capturing scheme. In all problems investigated, the accuracy reached by our GA-PINN model is comparable to that obtained with a shock-capturing scheme and significantly higher than that achieved by a baseline PINN algorithm. An additional benefit worth stressing is that our PINN-based approach sidesteps the costly recovery of the primitive variables from the state vector of conserved ones, a well-known drawback of grid-based solutions of the relativistic hydrodynamics equations. Due to its inherent generality and its ability to handle steep gradients, the GA-PINN method discussed could be a valuable tool to model relativistic flows in astrophysics and particle physics, characterized by the prevalence of discontinuous solutions.
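
The core ingredient is the weighting of the physics loss. The snippet below shows one plausible form of such a gradient-annihilation weight in PyTorch; `pde_residual` is a hypothetical user-supplied helper, and the exact weighting and hyperparameters used by GA-PINNs may differ.

```python
import torch

def ga_weighted_residual_loss(model, x, t, pde_residual, lam=20.0, k=1.0):
    """Down-weight PDE residuals where the solution gradient is large, so the
    network is not forced to resolve the jump itself. `model` maps the
    concatenated (x, t) inputs to the physical variables; `pde_residual` is an
    assumed helper returning the residual of the system at each point."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = model(torch.cat([x, t], dim=1))
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    w = 1.0 / (1.0 + lam * u_x.abs()) ** k     # -> 0 near steep gradients
    r = pde_residual(u, x, t)
    return torch.mean((w.detach() * r) ** 2)
```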

Read this paper on arXiv…

F. Antonio, M. David, R. Roberto, et al.
Wed, 17 May 23
57/67

Comments: 25 pages, 16 figures

RAM: Rapid Advection Algorithm on Arbitrary Meshes [IMA]

http://arxiv.org/abs/2305.05362


The study of many astrophysical flows requires computational algorithms that can capture high Mach number flows, while resolving a large dynamic range in spatial and density scales. In this paper we present a novel method, RAM: Rapid Advection Algorithm on Arbitrary Meshes. RAM is a time-explicit method to solve the advection equation in problems with large bulk velocity on arbitrary computational grids. In comparison with standard up-wind algorithms, RAM enables advection with larger time steps and lower truncation errors. Our method is based on the operator splitting technique and conservative interpolation. Depending on the bulk velocity and resolution, RAM can decrease the numerical cost of hydrodynamics by more than one order of magnitude. To quantify the truncation errors and speed-up with RAM, we perform one and two-dimensional hydrodynamics tests. We find that the order of our method is given by the order of the conservative interpolation and that the effective speed up is in agreement with the relative increment in time step. RAM will be especially useful for numerical studies of disk-satellite interaction, characterized by high bulk orbital velocities, and non-trivial geometries. Our method dramatically lowers the computational cost of simulations that simultaneously resolve the global disk and well inside the Hill radius of the secondary companion.
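
A 1D toy version of the underlying idea (exact integer-cell translation plus a conservative remap of the small residual shift) looks as follows; this is a sketch under simplifying assumptions (uniform periodic grid, donor-cell remap), not the RAM algorithm itself.

```python
import numpy as np

def advect_with_bulk_shift(q, v_bulk, dx, dt):
    """Advect cell averages q by a large bulk velocity: translate by the
    integer number of cells exactly (np.roll), then remap the remaining
    fractional shift conservatively with a donor-cell flux."""
    shift = v_bulk * dt / dx            # total shift in units of cells
    n_int = int(np.floor(shift))        # integer part: translated exactly
    frac = shift - n_int                # residual shift in [0, 1)
    q = np.roll(q, n_int)
    flux = frac * q                     # donor-cell flux for the residual
    return q - flux + np.roll(flux, 1)  # conservative update
```

Because the integer part of the shift is exact, the time step is no longer limited by the bulk velocity but only by the residual motion, which is the intuition behind the speed-up and lower truncation errors reported in the abstract.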

Read this paper on arXiv…

P. Benítez-Llambay, L. Krapp, X. Ramos, et al.
Wed, 10 May 23
14/65

Comments: 15 pages, 7 figures. Submitted to ApJ. Comments are welcome

Timescales of Chaos in the Inner Solar System: Lyapunov Spectrum and Quasi-integrals of Motion [EPA]

http://arxiv.org/abs/2305.01683


Numerical integrations of the Solar System reveal a remarkable stability of the orbits of the inner planets over billions of years, in spite of their chaotic variations characterized by a Lyapunov time of only 5 million years and the lack of integrals of motion able to constrain their dynamics. To open a window on such long-term behavior, we compute the entire Lyapunov spectrum of a forced secular model of the inner planets. We uncover a hierarchy of characteristic exponents that spans two orders of magnitude, manifesting a slow-fast dynamics with a broad separation of timescales. A systematic analysis of the Fourier harmonics of the Hamiltonian, based on computer algebra, reveals three symmetries that characterize the strongest resonances responsible for the orbital chaos. These symmetries are broken only by weak resonances, leading to the existence of quasi-integrals of motion that are shown to relate to the smallest Lyapunov exponents. A principal component analysis of the orbital solutions independently confirms that the quasi-integrals are among the slowest degrees of freedom of the dynamics. Strong evidence emerges that they effectively constrain the chaotic diffusion of the orbits, playing a crucial role in the statistical stability over the Solar System lifetime.
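
For context, the full Lyapunov spectrum of a flow is usually estimated by evolving a set of tangent vectors and periodically re-orthonormalizing them; a generic sketch (not the forced secular model or the integrator used in the paper) is:

```python
import numpy as np

def lyapunov_spectrum(f, jac, x0, dt=1e-2, n_steps=200_000, renorm_every=10):
    """Benettin-style estimate of the Lyapunov spectrum of dx/dt = f(x):
    integrate the tangent flow alongside the trajectory and periodically
    re-orthonormalize the tangent vectors with a QR decomposition.
    Explicit Euler is used only to keep the sketch short."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    Q = np.eye(n)
    log_sums = np.zeros(n)
    elapsed = 0.0
    for step in range(1, n_steps + 1):
        x = x + dt * f(x)               # advance the trajectory
        Q = Q + dt * jac(x) @ Q         # advance the tangent vectors
        if step % renorm_every == 0:
            Q, R = np.linalg.qr(Q)
            log_sums += np.log(np.abs(np.diag(R)))
            elapsed += renorm_every * dt
    return np.sort(log_sums / elapsed)[::-1]   # exponents, largest first
```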

Read this paper on arXiv…

F. Mogavero, N. Hoang and J. Laskar
Thu, 4 May 23
34/60

Comments: 24 pages, 11 figures. Published in Physical Review X

Ameliorating the Courant-Friedrichs-Lewy condition in spherical coordinates: A double FFT filter method for general relativistic MHD in dynamical spacetimes [CL]

http://arxiv.org/abs/2305.01537


Numerical simulations of merging compact objects and their remnants form the theoretical foundation for gravitational wave and multi-messenger astronomy. While Cartesian-coordinate-based adaptive mesh refinement is commonly used for simulations, spherical-like coordinates are more suitable for nearly spherical remnants and azimuthal flows due to lower numerical dissipation in the evolution of fluid angular momentum, as well as requiring fewer computational cells. However, the use of spherical coordinates to numerically solve hyperbolic partial differential equations can result in severe Courant-Friedrichs-Lewy (CFL) stability condition timestep limitations, which can make simulations prohibitively expensive. This paper addresses this issue for the numerical solution of coupled spacetime and general relativistic magnetohydrodynamics evolutions by introducing a double FFT filter and implementing it within the fully MPI-parallelized SphericalNR framework in the Einstein Toolkit. We demonstrate the effectiveness and robustness of the filtering algorithm by applying it to a number of challenging code tests, and show that it passes these tests effectively, demonstrating convergence while also increasing the timestep significantly compared to unfiltered simulations.
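
The reason spherical grids hit the CFL wall is that the azimuthal zone width shrinks like r sin(theta) toward the poles. A minimal sketch of the azimuthal half of such a filter is shown below; the paper's double FFT filter inside SphericalNR is more elaborate, and all names here are illustrative.

```python
import numpy as np

def filter_azimuthal_modes(u, theta, m_max_equator):
    """On each latitude ring, zero the Fourier modes above a cutoff scaling
    with sin(theta), so the smallest effective zone width (and hence the CFL
    time step) no longer collapses toward the poles."""
    u_hat = np.fft.rfft(u, axis=-1)          # u: (n_theta, n_phi) ring data
    m = np.arange(u_hat.shape[-1])
    for j, th in enumerate(theta):
        cutoff = max(1, int(m_max_equator * abs(np.sin(th))))
        u_hat[j, m > cutoff] = 0.0
    return np.fft.irfft(u_hat, n=u.shape[-1], axis=-1)
```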

Read this paper on arXiv…

L. Ji, V. Mewes, Y. Zlochower, et al.
Wed, 3 May 23
55/67

Comments: 15 pages, 13 figures, revtex4-1

Parallelization of the Symplectic Massive Body Algorithm (SyMBA) $N$-body Code [EPA]

http://arxiv.org/abs/2304.07325


Direct $N$-body simulations of a large number of particles, especially in the study of planetesimal dynamics and planet formation, have been computationally challenging even with modern machines. This work combines fully parallelized $N^2/2$ interactions with the GENGA code’s close-encounter pair grouping strategy to enable MIMD parallelization of the Symplectic Massive Body Algorithm (SyMBA) with OpenMP on multi-core CPUs in a shared-memory environment. SyMBAp (SyMBA parallelized) preserves the symplectic nature of SyMBA and shows good scalability, with a speedup of 30.8 times with 56 cores in a simulation with 5,000 fully interactive particles.

Read this paper on arXiv…

T. Lau and M. Lee
Tue, 18 Apr 23
23/80

Comments: Accepted for publication in Research Notes of the AAS

Reducing roundoff errors in numerical integration of planetary ephemeris [EPA]

http://arxiv.org/abs/2304.04458


Modern lunar-planetary ephemerides are numerically integrated over an observational timespan of more than 100 years (with the last 20 years having very precise astrometric data). On such long timespans, not only finite-difference approximation errors but also the accumulating arithmetic roundoff errors become important, because they exceed the random errors of high-precision range observables of the Moon, Mars, and Mercury. One way to tackle this problem is to use the extended-precision arithmetic available on x86 processors. Noting the drawbacks of this approach, we propose an alternative: using double-double arithmetic where appropriate. This allows us to use only double-precision floating-point primitives, which have ubiquitous support.
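
Double-double arithmetic represents a value as an unevaluated sum of two doubles and propagates the rounding error explicitly. A minimal sketch of the addition kernel (Knuth's two-sum) is given below; a production implementation also needs multiplication, division and careful renormalization.

```python
def two_sum(a, b):
    """Error-free transformation: returns (s, e) with s = fl(a + b) and
    a + b = s + e exactly in IEEE double precision."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def dd_add(x, y):
    """Add two double-double numbers x = (hi, lo) and y = (hi, lo)."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    hi, lo = two_sum(s, e)
    return hi, lo

# Example: adding 1e6 increments of 1e-17 to 1.0.  In plain double precision
# 1.0 + 1e-17 == 1.0, so the increments are lost; double-double keeps them.
acc = (1.0, 0.0)
for _ in range(10**6):
    acc = dd_add(acc, (1e-17, 0.0))
print(acc[0], acc[1])   # hi stays 1.0, lo accumulates ~1e-11
```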

Read this paper on arXiv…

M. Subbotin, A. Kodukov and D. Pavlov
Tue, 11 Apr 23
18/63

Comments: N/A

Quantum algorithm for collisionless Boltzmann simulation of self-gravitating systems [CL]

http://arxiv.org/abs/2303.16490


The collisionless Boltzmann equation (CBE) is a fundamental equation that governs the dynamics of a broad range of astrophysical systems from space plasma to star clusters and galaxies. It is computationally expensive to integrate the CBE directly in a phase space, and thus the applications to realistic astrophysical problems have been limited so far. Recently, Todorova & Steijl (2020) proposed an efficient quantum algorithm for solving the CBE with a significantly reduced computational complexity. We extend the method to perform quantum simulations that follow the evolution of self-gravitating systems. We first run a 1+1 dimensional test calculation of free streaming motion on 64$\times$64 grids using 13 simulated qubits and validate our method. We then perform simulations of Jeans collapse, and compare the result with analytic and linear theory calculations. We propose a direct method to generate initial conditions as well as a method to retrieve necessary information from a register of multiple qubits. Our simulation scheme achieves $\mathcal{O}(N_v^3)$ less computational complexity than the classical method, where $N_v$ is the number of discrete velocity grids per dimension. It will thus allow us to perform large-scale CBE simulations on future quantum computers.

Read this paper on arXiv…

S. Yamazaki, F. Uchida, K. Fujisawa, et al.
Thu, 30 Mar 23
39/66

Comments: 10 pages, 9 figures

APES: Approximate Posterior Ensemble Sampler [CEA]

http://arxiv.org/abs/2303.13667


This paper proposes a novel approach to generate samples from target distributions that are difficult to sample from using Markov Chain Monte Carlo (MCMC) methods. Traditional MCMC algorithms often face slow convergence due to the difficulty in finding proposals that suit the problem at hand. To address this issue, the paper introduces the Approximate Posterior Ensemble Sampler (APES) algorithm, which employs kernel density estimation and radial basis interpolation to create an adaptive proposal, leading to fast convergence of the chains. The APES algorithm’s scalability to higher dimensions makes it a practical solution for complex problems. The proposed method generates an approximate posterior probability that closely approximates the desired distribution and is easy to sample from, resulting in smaller autocorrelation times and a higher probability of acceptance by the chain. In this work, we compare the performance of the APES algorithm with the affine invariance ensemble sampler with the stretch move in various contexts, demonstrating the efficiency of the proposed method. For instance, on the Rosenbrock function, the APES presented an autocorrelation time 140 times smaller than the affine invariance ensemble sampler. The comparison showcases the effectiveness of the APES algorithm in generating samples from challenging distributions. This paper presents a practical solution to generating samples from complex distributions while addressing the challenge of finding suitable proposals. With new cosmological surveys set to deal with many new systematics, which will require many new nuisance parameters in the models, this method offers a practical solution for the upcoming era of cosmological analyses.
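
To make the idea concrete, here is a heavily simplified ensemble step in the spirit of APES, using a Gaussian KDE of half the walkers as an independence proposal for the other half; the real algorithm's kernel choices, radial basis interpolation and adaptation are not reproduced, and the function names are ours.

```python
import numpy as np
from scipy.stats import gaussian_kde

def apes_like_step(walkers, log_post, rng):
    """One sweep over the ensemble: fit a KDE to the complementary half of the
    walkers, propose from it, and accept with the Metropolis-Hastings ratio.
    `walkers` has shape (n_walkers, ndim); `rng` is a numpy Generator."""
    n, ndim = walkers.shape
    idx = np.arange(n)
    for block in (idx[: n // 2], idx[n // 2 :]):
        other = np.setdiff1d(idx, block)
        kde = gaussian_kde(walkers[other].T)      # proposal built from the rest
        for i in block:
            prop = kde.resample(1).ravel()
            log_ratio = (log_post(prop) - log_post(walkers[i])
                         + kde.logpdf(walkers[i])[0] - kde.logpdf(prop)[0])
            if np.log(rng.uniform()) < log_ratio:
                walkers[i] = prop
    return walkers
```

Because the proposal approximates the posterior itself, accepted moves are large and nearly independent, which is what drives the small autocorrelation times reported in the abstract.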

Read this paper on arXiv…

S. Vitenti and E. Barroso
Mon, 27 Mar 23
33/59

Comments: 15 pages, 6 figures, 7 tables

First total recovery of Sun global Alfven resonance: least-squares spectra of decade-scale dynamics of N-S-separated fast solar wind reveal solar-type stars act as revolving-field magnetoalternators [SSA]

http://arxiv.org/abs/2301.07219


The Sun reveals itself in the 385.8-2.439-nHz band of polar (φ_Sun > |70°|) fast (>700 km s^-1) solar wind’s decade-scale dynamics as a globally completely vibrating, revolving-field magnetoalternator rather than a proverbial engine. Thus North-South separation of 1994-2008 Ulysses <10 nT wind polar samplings spanning ~1.6×10^7-2.5×10^9-erg base energies reveals Gauss-Vanicek spectral signatures of an entirely >99%-significant Sun-borne global sharp Alfven resonance (AR), P_i = P_S/i, imprinted into the winds to the order n=100+ and co-triggered by the P_S = ~11-yr Schwabe global mode northside, its ~10-yr degeneration equatorially, and ~9-yr degeneration southside. The Sun is a typical ~3-dB-attenuated ring-system of differentially rotating and contrarily (out-of-phase) vibrating conveyor belts and layers with a continuous spectrum and resolution (<81.3 nHz (S), <55.6 nHz (N)) in lowermost frequencies (<2 μHz in most modes). AR is accompanied by an also sharp symmetrical antiresonance P(-), whose two N/S tailing harmonics P(-17) are the well-known P_R = ~154-day Rieger period dominating planetary dynamics and space weather. Unlike a resonating motor restrained from separating its casing, the freely resonating Sun exhausts the wind in an axial shake-off beyond L1 at highly coherent discrete wave modes generated in the Sun, so to understand solar-type stars, only global decadal scales matter. The result is verified against remote data and the experiment, so it instantly replaces dynamo with magnetoalternator and advances Standard Stellar Models, improving fundamental understanding of billions of trillions of solar-type stars. Gauss-Vanicek spectral analysis revolutionizes planetary & space sciences by rigorously simulating multiple spacecraft or fleet formations from a single spacecraft and physics by directly computing nonlinear global dynamics (rendering spherical approximation obsolete).

Read this paper on arXiv…

M. Omerbashich
Thu, 19 Jan 23
51/100

Comments: 31 pages, 7 figures, 3 tables

SFQEDtoolkit: a high-performance library for the accurate modeling of strong-field QED processes in PIC and Monte Carlo codes [CL]

http://arxiv.org/abs/2301.07684


Strong-field QED (SFQED) processes are central in determining the dynamics of particles and plasmas in extreme electromagnetic fields such as those present in the vicinity of compact astrophysical objects or generated with ultraintense lasers. SFQEDtoolkit is an open source library designed to allow users a straightforward implementation of SFQED processes in existing particle-in-cell (PIC) and Monte Carlo codes. Through advanced function approximation techniques, high-energy photon emission and electron-positron pair creation probability rates and energy distributions are calculated within the locally-constant-field approximation (LCFA) as well as with more advanced models [Phys. Rev. A 99, 022125 (2019)]. SFQEDtoolkit is designed to provide users with high performance and high accuracy, and neat examples showing its usage are provided. In the near future, SFQEDtoolkit will be enriched to model the angular distribution of the generated particles, i.e., beyond the commonly employed collinear emission approximation, as well as to model spin- and polarization-dependent SFQED processes. Notably, the generality and flexibility of the presented function approximation approach make it suitable to be employed in other areas of physics, chemistry and computer science.

Read this paper on arXiv…

S. Montefiori and M. Tamburini
Thu, 19 Jan 23
52/100

Comments: 31 pages, 7 figures. Repository with the associated open-source code available on github this https URL

Switching integrators reversibly in the astrophysical $N$-body problem [EPA]

http://arxiv.org/abs/2301.06253


We present a simple algorithm to switch between $N$-body time integrators in a reversible way. We apply it to planetary systems undergoing arbitrarily close encounters and highly eccentric orbits, but the potential applications are broader. Upgrading an ordinary non-reversible switching integrator to a reversible one is straightforward and introduces no appreciable computational burden in our tests. Our method checks whether the integrator chosen during the time step violates a time-symmetric selection condition and redoes the step if necessary. In our experiments a few percent of steps would have violated the condition without our corrections. By eliminating them, the algorithm avoids long-term error accumulation of several orders of magnitude in some cases.
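
Schematically, the redo logic described in the abstract can be written as below; `step_regular`, `step_close` and `need_close` are hypothetical callables standing in for the two integrators and the switching condition, not the paper's API.

```python
def reversible_switch_step(state, dt, step_regular, step_close, need_close):
    """Take the step with the integrator selected from the pre-step state,
    then check that the selection evaluated on the post-step state agrees;
    if not, redo the step with the other integrator so the choice is
    time-symmetric."""
    close_before = need_close(state)
    new_state = (step_close if close_before else step_regular)(state, dt)
    if need_close(new_state) != close_before:
        # selection condition violated across the step: redo with the other one
        new_state = (step_regular if close_before else step_close)(state, dt)
    return new_state
```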

Read this paper on arXiv…

D. Hernandez and W. Dehnen
Wed, 18 Jan 23
49/133

Comments: 10 pages, 8 figures, submitted to MNRAS, comments welcome

Coupling multi-fluid dynamics equipped with Landau closures to the particle-in-cell method [HEAP]

http://arxiv.org/abs/2301.04679


The particle-in-cell (PIC) method is successfully used to study magnetized plasmas. However, this requires large computational costs and limits simulations to short physical run-times and often to setups in less than three spatial dimensions. Traditionally, this is circumvented either via hybrid-PIC methods (adopting massless electrons) or via magneto-hydrodynamic-PIC methods (modelling the background plasma as a single charge-neutral magneto-hydrodynamical fluid). Because both methods preclude modelling important plasma-kinetic effects, we introduce a new fluid-PIC code that couples a fully explicit and charge-conservative multi-fluid solver to the PIC code SHARP through a current-coupling scheme and solve the full set of Maxwell’s equations. This avoids simplifications typically adopted for Ohm’s Law and enables us to fully resolve the electron temporal and spatial scales while retaining the versatility of initializing any number of ion, electron, or neutral species with arbitrary velocity distributions. The fluid solver includes closures emulating Landau damping so that we can account for this important kinetic process in our fluid species. Our fluid-PIC code is second-order accurate in space and time. The code is successfully validated against several test problems, including the stability and accuracy of shocks and the dispersion relation and damping rates of waves in unmagnetized and magnetized plasmas. It also matches growth rates and saturation levels of the gyro-scale and intermediate-scale instabilities driven by drifting charged particles in magnetized thermal background plasmas in comparison to linear theory and PIC simulations. This new fluid-SHARP code is specially designed for studying high-energy cosmic rays interacting with thermal plasmas over macroscopic timescales.

Read this paper on arXiv…

R. Lemmerz, M. Shalaby, T. Thomas, et al.
Fri, 13 Jan 23
67/72

Comments: 17 pages, 11 figures, submitted to MNRAS. Comments are welcome

The Cosmological Simulation Code OpenGadget3 — Implementation of Meshless Finite Mass [IMA]

http://arxiv.org/abs/2301.03612


Subsonic turbulence plays a major role in determining properties of the intracluster medium (ICM). We introduce a new Meshless Finite Mass (MFM) implementation in OpenGadget3 and apply it to this specific problem. To this end, we present a set of test cases to validate our implementation of the MFM framework in our code. These include, but are not limited to: the sound wave and Kepler disk as smooth situations to probe stability, the Rayleigh-Taylor and Kelvin-Helmholtz instabilities as popular mixing instabilities, a blob test as a more complex example including both mixing and shocks, shock tubes with various Mach numbers, a Sedov blast wave, different tests including self-gravity such as gravitational freefall, a hydrostatic sphere, the Zeldovich pancake, and the nifty cluster as a cosmological application. Advantages over SPH include increased mixing and better convergence behavior. We demonstrate that the MFM solver is robust, also in a cosmological context. We show evidence that the solver performs extraordinarily well when applied to decaying subsonic turbulence, a problem very difficult to handle for many methods. MFM captures the expected velocity power spectrum with high accuracy and shows good convergence behavior. Using MFM or SPH within OpenGadget3 leads to a comparable decay in turbulent energy due to numerical dissipation. When studying the energy decay for different initial turbulent energy fractions, we find that MFM performs well down to Mach numbers $\mathcal{M}\approx 0.007$. Finally, we show how important the slope limiter and the energy-entropy switch are to control the behavior and the evolution of the fluids.

Read this paper on arXiv…

F. Groth, U. Steinwandel, M. Valentini, et al.
Wed, 11 Jan 23
68/80

Comments: 27 pages, 24 figures, submitted to MNRAS

Web-based telluric correction made in Spain: spectral fitting of Vega-type telluric standards [IMA]

http://arxiv.org/abs/2212.14068


Infrared spectroscopic observations from the ground must be corrected for telluric contamination to make them ready for scientific analyses. However, telluric correction is often a tedious process that requires significant expertise to yield accurate results in a reasonable time frame. To overcome these inconveniences, we present a new method for telluric correction that employs a roughly simultaneous observation of a Vega analog to measure atmospheric transmission. After continuum reconstruction and spectral fitting, the stellar features are removed from the observed Vega-type spectrum and the result is used to cancel telluric absorption features in science spectra. This method is implemented as TelCorAl (Telluric Correction from Alicante), a Python-based web application with a user-friendly interface, whose beta version will be released soon.
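
The correction step itself reduces to a division once the standard's stellar features have been modelled out. A minimal sketch, assuming a common wavelength grid and leaving aside TelCorAl's continuum reconstruction and fitting:

```python
import numpy as np

def telluric_correct(sci_flux, std_flux, vega_model_flux):
    """Observed Vega-analog spectrum divided by its stellar model leaves the
    atmospheric transmission, which is then divided out of the science
    spectrum. All arrays are assumed to share one wavelength grid."""
    transmission = std_flux / vega_model_flux
    transmission = np.clip(transmission, 1e-3, None)   # avoid division blow-ups
    return sci_flux / transmission
```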

Read this paper on arXiv…

D. Fuente, A. Marco, L. Patrick, et al.
Mon, 2 Jan 23
38/44

Comments: 6 pages, 2 figures. To be published in Highlights of Spanish Astrophysics XI, Proceedings of the XV Scientific Meeting of the Spanish Astronomical Society

Reversible time-step adaptation for the integration of few-body systems [IMA]

http://arxiv.org/abs/2212.09745


The time step criterion plays a crucial role in direct N-body codes. If not chosen carefully, it will cause a secular drift in the energy error. Shared, adaptive time step criteria commonly adopt the minimum pairwise time step, which suffers from discontinuities in the time evolution of the time step. This has a large impact on the functioning of time step symmetrisation algorithms. We provide new demonstrations of previous findings that a smooth and weighted average over all pairwise time steps in the N-body system improves the level of energy conservation. Furthermore, we compare the performance of 27 different time step criteria, by considering 3 methods for weighting time steps and 9 symmetrisation methods. We present performance tests for strongly chaotic few-body systems, including unstable triples, giant planets in a resonant chain, and the current Solar System. We find that the harmonic symmetrisation methods (methods A3 and B3 in our notation) are the most robust, in the sense that the symmetrised time step remains close to the time step function. Furthermore, based on our Solar System experiment, we find that our new weighting method based on direct pairwise averaging (method W2 in our notation) is slightly preferred over the other methods.
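
One simple example of such a smooth, weighted aggregate of the pairwise time steps is a power-law-weighted mean that tends to the minimum for large exponents; this is only illustrative of the class of criteria compared in the paper, not one of its 27 specific methods.

```python
import numpy as np

def smooth_min_timestep(tau_pairs, k=8):
    """Weighted average of all pairwise time steps tau_ij that approaches the
    minimum for large k but varies continuously as the identity of the
    smallest pair changes (no discontinuities in time)."""
    tau = np.asarray(tau_pairs, dtype=float)
    w = tau ** (-k)                      # heavily favour the shortest pairs
    return np.sum(w * tau) / np.sum(w)
```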

Read this paper on arXiv…

T. Boekholt, T. Vaillant and A. Correia
Tue, 20 Dec 22
82/97

Comments: Accepted by MNRAS. 13 pages, 6 figures

Novel Conservative Methods for Adaptive Force Softening in Collisionless and Multi-Species N-Body Simulations [GA]

http://arxiv.org/abs/2212.06851


Modeling self-gravity of collisionless fluids (e.g. ensembles of dark matter, stars, black holes, dust, planetary bodies) in simulations is challenging and requires some force softening. It is often desirable to allow softenings to evolve adaptively, in any high-dynamic range simulation, but this poses unique challenges of consistency, conservation, and accuracy, especially in multi-physics simulations where species with different ‘softening laws’ may interact. We therefore derive a generalized form of the energy-and-momentum conserving gravitational equations of motion, applicable to arbitrary rules used to determine the force softening, together with consistent associated timestep criteria, interaction terms between species with different softening laws, and arbitrary maximum/minimum softenings. We also derive new methods to maintain better accuracy and conservation when symmetrizing forces between particles. We review and extend previously-discussed adaptive softening schemes based on the local neighbor particle density, and present several new schemes for scaling the softening with properties of the gravitational field, i.e. the potential or acceleration or tidal tensor. We show that the ‘tidal softening’ scheme not only represents a physically-motivated, translation and Galilean invariant and equivalence-principle respecting (and therefore conservative) method, but imposes negligible timestep or other computational penalties, ensures that pairwise two-body scattering is small compared to smooth background forces, and can resolve outstanding challenges in properly capturing tidal disruption of substructures (minimizing artificial destruction) while also avoiding excessive N-body heating. We make all of this public in the GIZMO code.

Read this paper on arXiv…

P. Hopkins, E. Nadler, M. Grudic, et al.
Thu, 15 Dec 22
34/75

Comments: 20 pages, 12 figures, submitted to MNRAS. Comments welcome

Solving the Teukolsky equation with physics-informed neural networks [CL]

http://arxiv.org/abs/2212.06103


We use physics-informed neural networks (PINNs) to compute the first quasi-normal modes of the Kerr geometry via the Teukolsky equation. This technique allows us to extract the complex frequencies and separation constants of the equation without the need for sophisticated numerical techniques, and with an almost immediate implementation under the PyTorch framework. We are able to compute the oscillation frequencies and damping times for arbitrary black hole spins and masses, with accuracy typically below the percent level as compared to the accepted values in the literature. We find that PINN-computed quasi-normal modes are indistinguishable from those obtained through existing methods at signal-to-noise ratios (SNRs) larger than 100, making the former reliable for gravitational-wave data analysis in the mid term, before the arrival of third-generation detectors like LISA or the Einstein Telescope, where SNRs of ${\cal O}(1000)$ might be achieved.

Read this paper on arXiv…

R. Luna, J. Bustillo, J. Martínez, et al.
Tue, 13 Dec 22
76/105

Comments: 12 pages, 7 figures

Asteroseismology: Looking for axions in the red supergiant star Alpha Ori [SSA]

http://arxiv.org/abs/2212.01890


In this work, for the first time, we use seismic data as well as surface abundances to model the supergiant $\alpha$-Ori, with the goal of setting an upper bound on the axion-photon coupling constant $g_{a\gamma}$. We found that, in general, the stellar models with $g_{a \gamma} \in [0.002;2.0]\times 10^{-10}{\rm GeV}^{-1}$ agree with observational data, but beyond that upper limit we did not find stellar models compatible with the observational constraints and the current literature. From $g_{a \gamma} = 3.5 \times 10^{-10} {\rm GeV}^{-1}$ on, the algorithm did not find any fitting model. Nevertheless, all axionic models considered presented a distinct internal profile from the reference case without axions. Moreover, as axion energy losses become more significant, the behaviour of the stellar models becomes more diversified, even with very similar input parameters. Nonetheless, the consecutive increments of $g_{a \gamma}$ still show systematic tendencies resulting from the axion energy losses. Moreover, we establish three important conclusions: (1) the increased luminosity and higher neutrino production are measurable effects, possibly associated with axion energy losses; (2) stellar models with axion energy loss show a quite distinct internal structure; (3) future asteroseismic missions will be important for observing low-degree non-radial modes in massive stars: internal gravity waves probe the near-core regions, where axion effects are most intense. Thus, more seismic data will allow us to constrain $g_{a\gamma}$ better and prove or dismiss the existence of axion energy loss inside massive stars.

Read this paper on arXiv…

C. Severino and I. Lopes
Tue, 6 Dec 22
5/87

Comments: 11 pages, accepted by Astrophysical Journal

SuperNest: accelerated nested sampling applied to astrophysics and cosmology [CL]

http://arxiv.org/abs/2212.01760


We present a method for improving the performance of nested sampling as well as its accuracy. Building on previous work by Chen et al., we show that posterior repartitioning may be used to reduce the amount of time nested sampling spends in compressing from prior to posterior if a suitable “proposal” distribution is supplied. We showcase this on a cosmological example with a Gaussian posterior, and release the code as an LGPL-licensed, extensible Python package: https://gitlab.com/a-p-petrosyan/sspr.
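
In its simplest form, posterior repartitioning re-splits the product of prior and likelihood so that nested sampling effectively starts from a distribution close to the posterior, without changing the evidence; a sketch of the identity (the paper's construction may differ in details):

```latex
\tilde{\pi}(\theta) = q(\theta), \qquad
\tilde{\mathcal{L}}(\theta) = \frac{\mathcal{L}(\theta)\,\pi(\theta)}{q(\theta)}, \qquad
Z = \int \mathcal{L}(\theta)\,\pi(\theta)\,\mathrm{d}\theta
  = \int \tilde{\mathcal{L}}(\theta)\,\tilde{\pi}(\theta)\,\mathrm{d}\theta .
```

If the supplied “proposal” q already resembles the posterior, the run spends far less time compressing from prior to posterior, which is the source of the speed-up described above.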

Read this paper on arXiv…

A. Petrosyan and W. Handley
Tue, 6 Dec 22
18/87

Comments: N/A

A Finite Element Method for Angular Discretization of the Radiation Transport Equation on Spherical Geodesic Grids [CL]

http://arxiv.org/abs/2212.01409


Discrete ordinate ($S_N$) and filtered spherical harmonics ($FP_N$) based schemes have been proven to be robust and accurate in solving the Boltzmann transport equation, but they have their own strengths and weaknesses in different physical scenarios. We present a new method based on a finite element approach in angle that combines the strengths of both methods and mitigates their disadvantages. The angular variables are specified on a spherical geodesic grid with functions on the sphere being represented using a finite element basis. A positivity-preserving limiting strategy is employed to prevent non-physical values from appearing in the solutions. The resulting method is then compared with both $S_N$ and $FP_N$ schemes using four test problems and is found to perform well when one of the other methods fails.

Read this paper on arXiv…

M. Bhattacharyya and D. Radice
Tue, 6 Dec 22
82/87

Comments: 24 pages, 13 figures

Collision detection for N-body Kepler systems [EPA]

http://arxiv.org/abs/2212.00783


In a Keplerian system, a large number of bodies orbit a central mass. Accretion disks, protoplanetary disks, asteroid belts, and planetary rings are examples. Simulations of these systems require algorithms that are computationally efficient. The inclusion of collisions in the simulations is challenging but important. We intend to calculate the time of collision of two astronomical bodies in intersecting Kepler orbits as a function of the orbital elements. The aim is to use the solution in an analytic propagator ($N$-body simulation) that jumps from one collision event to the next. We outline an algorithm that maintains a list of possible collision pairs ordered chronologically. At each step (the soonest event on the list), only the particles created in the collision can cause new collision possibilities. We estimate the collision rate, the length of the list, and the average change in this length at an event, and study the efficiency of the method used. We find that the collision-time problem is equivalent to finding the grid point between two parallel lines that is closest to the origin. The solution is based on the continued fraction of the ratio of orbital periods. Due to the large jumps in time, the algorithm can beat tree codes (octree and $k$-d tree codes can efficiently detect collisions) for specific systems such as the Solar System with $N<10^8$. However, the gravitational interactions between particles can only be treated as gravitational scattering or as a secular perturbation, at the cost of reducing the time-step or at the cost of accuracy. While simulations of this size with high-fidelity propagators can already span vast timescales, the high efficiency of the collision detection allows many runs from one initial state or a large sample set, so that one can study statistics.

Read this paper on arXiv…

P. Visser
Fri, 2 Dec 22
60/81

Comments: 15 pages, 14 figures

Revisiting Kinematic Fast Dynamo in 3-dimensional magnetohydrodynamic plasmas: Dynamo transition from non-Helical to Helical flows [CL]

http://arxiv.org/abs/2211.12362


Dynamos, wherein magnetic field is produced from velocity fluctuations, are fundamental to our understanding of several astrophysical and/or laboratory phenomena. Though fluid helicity is known to play a key role in the onset of dynamo action, its effect is yet to be fully understood. In this work, a fluid flow proposed recently [Yoshida et al. Phys. Rev. Lett. 119, 244501 (2017)] is invoked such that one may inject zero or finite fluid helicity using a control parameter at the beginning of the simulation. Using a simple kinematic fast dynamo model, we demonstrate unambiguously the strong dependence of the short-scale dynamo on fluid helicity. In contrast to conventional understanding, it is shown that fluid helicity does strongly influence the physics of the short-scale dynamo. To corroborate our findings, late-time magnetic field spectra for various values of injected fluid helicity are presented along with rigorous “geometric” signatures of the 3D magnetic field surfaces, which show a transition from “untwisted” to “twisted” sheet to “cigar”-like configurations. It is also shown that one of the most studied dynamo models, the ABC dynamo, is not the “fastest” dynamo model for problems with lower magnetic Reynolds number. This work brings out, for the first time, the role of fluid helicity in moving systematically from the “non-dynamo” to the “dynamo” regime.

Read this paper on arXiv…

S. Biswas and R. Ganesh
Wed, 23 Nov 22
4/71

Comments: N/A

Mesh-free hydrodynamics in PKDGRAV3 for galaxy formation simulations [GA]

http://arxiv.org/abs/2211.12243


We extend the state-of-the-art N-body code PKDGRAV3 with the inclusion of mesh-free gas hydrodynamics for cosmological simulations. Two new hydrodynamic solvers have been implemented, the mesh-less finite volume and mesh-less finite mass methods. The solvers manifestly conserve mass, momentum and energy, and have been validated with a wide range of standard test simulations, including cosmological simulations. We also describe improvements to PKDGRAV3 that have been implemented for performing hydrodynamic simulations. These changes have been made with efficiency and modularity in mind, and provide a solid base for the implementation of the required modules for galaxy formation and evolution physics and future porting to GPUs. The code is released in a public repository, together with the documentation and all the test simulations presented in this work.

Read this paper on arXiv…

I. Asensio, C. Vecchia, D. Potter, et al.
Wed, 23 Nov 22
44/71

Comments: 18 pages, 14 figures; accepted for publication in MNRAS

Global MHD simulations of the solar convective zone using a volleyball mesh decomposition. I. Pilot [SSA]

http://arxiv.org/abs/2211.09564


Solar modelling has long been split into “internal” and “surface” modelling, because of the lack of tools to connect the very different scales in space and time, as well as the widely different environments and dominating physical effects involved. Significant efforts have recently been put into resolving this disconnect. We address the outstanding bottlenecks in connecting internal convection zone and dynamo simulations to the surface of the Sun, and conduct a proof-of-concept high resolution global simulation of the convection zone of the Sun, using the task-based DISPATCH code framework. We present a new ‘volleyball’ mesh decomposition, which has Cartesian patches tessellated on a sphere with no singularities. We use our new entropy based HLLS approximate Riemann solver to model magneto-hydrodynamics in a global simulation, ranging between 0.655 and 0.995 R$_\odot$, with an initial ambient magnetic field set to 0.1 Gauss. The simulations develop convective motions with complex, turbulent structures. Small-scale dynamo action twists the ambient magnetic field and locally amplifies magnetic field magnitudes by more than two orders of magnitude within the initial run-time.

Read this paper on arXiv…

A. Popovas, Å. Nordlund and M. Szydlarski
Fri, 18 Nov 22
53/70

Comments: 12 pages, 9 figures, submitted to A&A. Movies available online

i-SPin: An integrator for multicomponent Schrödinger-Poisson systems with self-interactions [CEA]

http://arxiv.org/abs/2211.08433


We provide an algorithm and a publicly available code to numerically evolve multicomponent Schrödinger-Poisson (SP) systems, including attractive or repulsive self-interactions in addition to gravity. Focusing on the case where the SP system represents the non-relativistic limit of a massive vector field, non-gravitational self-interactions (in particular, spin-spin type interactions) introduce new challenges related to mass and spin conservation which are not present in purely gravitational systems. We address these challenges with an analytical solution for the non-trivial ‘kick’ step in the algorithm. Equipped with this analytical solution, the full field evolution is second order accurate, preserves spin and mass to machine precision, and is reversible. Our algorithm allows for: general $n$-component fields with SO$(n)$ symmetry, an expanding universe relevant for cosmology, and the inclusion of external potentials relevant for laboratory settings.
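
For orientation, a single-component, 1D split-step (kick-drift-kick) Schrödinger-Poisson update on a periodic grid is sketched below; the paper's actual contribution, the analytical kick for multicomponent fields with spin-spin self-interactions, is not reproduced here, and units and normalizations are arbitrary.

```python
import numpy as np

def sp_step(psi, dt, dx, m=1.0, hbar=1.0, G=1.0):
    """One kick-drift-kick step of a single-component Schrodinger-Poisson
    system on a periodic 1D grid, using split-step Fourier."""
    n = psi.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

    def potential(p):
        rho = np.abs(p) ** 2
        rho_hat = np.fft.fft(rho - rho.mean())        # remove mean density
        phi_hat = np.zeros_like(rho_hat)
        nz = k != 0
        phi_hat[nz] = -4 * np.pi * G * rho_hat[nz] / k[nz] ** 2  # Poisson solve
        return np.real(np.fft.ifft(phi_hat))

    V = potential(psi)
    psi = np.exp(-1j * m * V * dt / (2 * hbar)) * psi              # half kick
    psi = np.fft.ifft(np.exp(-1j * hbar * k**2 * dt / (2 * m)) * np.fft.fft(psi))  # drift
    V = potential(psi)
    return np.exp(-1j * m * V * dt / (2 * hbar)) * psi             # half kick
```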

Read this paper on arXiv…

M. Jain and M. Amin
Thu, 17 Nov 22
34/63

Comments: 18 pages, 3 figures, 4 appendices. A Python code based on our algorithm is provided at this https URL. Animations of the numerical simulation results can be found at this https URL

Signatures of Strong Magnetization and Metal-Poor Atmosphere for a Neptune-Size Exoplanet [EPA]

http://arxiv.org/abs/2211.05155


The magnetosphere of an exoplanet has yet to be unambiguously detected. Investigations of star-planet interaction and neutral atomic hydrogen absorption during transit to detect magnetic fields in hot Jupiters have been inconclusive, and interpretations of the transit absorption non-unique. In contrast, ionized species escaping a magnetized exoplanet, particularly from the polar caps, should populate the magnetosphere, allowing detection of different regions from the plasmasphere to the extended magnetotail, and characterization of the magnetic field producing them. Here, we report ultraviolet observations of HAT-P-11b, a low-mass (0.08 M_J) exoplanet showing strong, phase-extended transit absorption of neutral hydrogen (maximum and tail transit depths of 32 ± 4% and 27 ± 4%) and singly ionized carbon (15 ± 4% and 12.5 ± 4%). We show that the atmosphere should have less than six times the solar metallicity (at 200 bars), and the exoplanet must also have an extended magnetotail (1.8-3.1 AU). The HAT-P-11b equatorial magnetic field strength should be about 1-5 Gauss. Our panchromatic approach using ionized species to simultaneously derive metallicity and magnetic field strength can now constrain interior and dynamo models of exoplanets, with implications for formation and evolution scenarios.

Read this paper on arXiv…

L. Ben-Jaffel, G. Ballester, A. Muñoz, et al.
Fri, 11 Nov 22
39/58

Comments: 68 pages, 12 figures. Published in Nature Astronomy on December 16 2021. Main draft and Supplementary information are included in a single file. Full-text access to a view-only version of the paper via : this https URL

DISPATCH methods: an approximate, entropy-based Riemann solver for ideal magnetohydrodynamics [IMA]

http://arxiv.org/abs/2211.02438


With the advance of supercomputers, we can now afford simulations with a very large range of scales. In astrophysical applications, e.g. simulating solar, stellar and planetary atmospheres, physical quantities like gas pressure, density, temperature and plasma $\beta$ can vary by orders of magnitude. This requires a robust solver that can deal with a very wide range of conditions and is able to maintain hydrostatic equilibrium. We reformulate a Godunov-type HLLD Riemann solver so that it is suitable for maintaining hydrostatic equilibrium in atmospheric applications and can handle low and high Mach numbers, as well as regimes where kinetic and magnetic energies dominate over thermal energy, without any ad-hoc corrections. We change the solver to use entropy instead of total energy as the ‘energy’ variable in the system of MHD equations. The entropy is not conserved; it increases when kinetic and magnetic energy is converted to heat, as it should. We conduct a series of standard tests with varying conditions and show that the new formulation of the Godunov-type Riemann solver works and is very promising.

Read this paper on arXiv…

A. Popovas
Mon, 7 Nov 22
40/67

Comments: 12 pages, 14 figures, submitted to A&A

Data-Driven Modeling of Landau Damping by Physics-Informed Neural Networks [CL]

http://arxiv.org/abs/2211.01021


Kinetic approaches are generally accurate in dealing with microscale plasma physics problems but are computationally expensive for large-scale or multiscale systems. One of the long-standing problems in plasma physics is the integration of kinetic physics into fluid models, which is often achieved through sophisticated analytical closure terms. In this study, we successfully construct a multi-moment fluid model with an implicit fluid closure included in the neural network using machine learning. The multi-moment fluid model is trained with a small fraction of sparsely sampled data from kinetic simulations of Landau damping, using the physics-informed neural network (PINN) and the gradient-enhanced physics-informed neural network (gPINN). The multi-moment fluid model constructed using either PINN or gPINN reproduces the time evolution of the electric field energy, including its damping rate, and the plasma dynamics from the kinetic simulations. For the first time, we introduce a new variant of the gPINN architecture, namely, gPINN$p$ to capture the Landau damping process. Instead of including the gradients of all the equation residuals, gPINN$p$ only adds the gradient of the pressure equation residual as one additional constraint. Among the three approaches, the gPINN$p$-constructed multi-moment fluid model offers the most accurate results. This work sheds new light on the accurate and efficient modeling of large-scale systems, which can be extended to complex multiscale laboratory, space, and astrophysical plasma physics problems.

Read this paper on arXiv…

Y. Qin, J. Ma, M. Jiang, et al.
Thu, 3 Nov 22
17/59

Comments: 11 pages, 7 figures

Deep Learning application for stellar parameters determination: II- Application to observed spectra of AFGK stars [SSA]

http://arxiv.org/abs/2210.17470


In this follow-up paper, we investigate the use of Convolutional Neural Networks for deriving stellar parameters from observed spectra. Using hyperparameters determined previously, we have constructed a Neural Network architecture suitable for the derivation of Teff, log g, [M/H], and $v_e \sin i$. The network was constrained by applying it to databases of AFGK synthetic spectra at different resolutions. Then, parameters of A stars from the Polarbase, SOPHIE, and ELODIE databases are derived, as well as those of FGK stars from the Spectroscopic Survey of Stars in the Solar Neighbourhood. The network model's average accuracy on the stellar parameters is found to be as low as 80 K for Teff, 0.06 dex for log g, 0.08 dex for [M/H], and 3 km/s for $v_e \sin i$ for AFGK stars.

Read this paper on arXiv…

M. Gebran, F. Paletou, I. Bentley, et al.
Tue, 1 Nov 22
83/100

Comments: 13 pages, 7 figures. Accepted for publication in Open Astronomy, De Gruyter

Loading a relativistic kappa distribution in particle simulations [CL]

http://arxiv.org/abs/2210.15118


A procedure for loading particle velocities from a relativistic kappa distribution in particle-in-cell (PIC) and Monte Carlo simulations is presented. It is based on the rejection method and the beta prime distribution. The rejection part extends an earlier method for the Maxwell-Jüttner distribution, and the acceptance rate reaches ~95%. Utilizing the generalized beta prime distributions, we successfully reproduce the relativistic kappa distribution, including the power-law tail. The derivation of the procedure, mathematical preparations, comparisons with other procedures, and numerical tests are presented.
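
The machinery being extended is plain rejection sampling; a generic skeleton is shown below, with the specific relativistic-kappa target and beta-prime envelope of the paper left as user-supplied callables (hypothetical names).

```python
import numpy as np

def rejection_sample(target_pdf, envelope_sample, envelope_pdf, c, n, rng):
    """Draw n samples from target_pdf by rejection: propose x from the
    envelope distribution and accept with probability target/(c*envelope),
    where c satisfies target_pdf(x) <= c * envelope_pdf(x) for all x.
    The acceptance rate is the inverse of c (times normalizations)."""
    out = []
    while len(out) < n:
        x = envelope_sample(rng)
        if rng.uniform() * c * envelope_pdf(x) <= target_pdf(x):
            out.append(x)
    return np.array(out)
```

The paper's ~95% acceptance rate corresponds to an envelope (built from generalized beta prime distributions) that hugs the relativistic kappa target tightly, so c stays close to 1.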

Read this paper on arXiv…

S. Zenitani and S. Nakano
Fri, 28 Oct 22
14/56

Comments: 33 pages 11 figures; to appear in Physics of Plasmas

Reconnection-Driven Energy Cascade in Magnetohydrodynamic Turbulence [SSA]

http://arxiv.org/abs/2210.10736


Magnetohydrodynamic turbulence regulates the transfer of energy from large to small scales in many astrophysical systems, including the solar atmosphere. We perform three-dimensional magnetohydrodynamic simulations with unprecedentedly large magnetic Reynolds number to reveal how rapid reconnection of magnetic field lines changes the classical paradigm of the turbulent energy cascade. By breaking elongated current sheets into chains of small magnetic flux ropes (or plasmoids), magnetic reconnection leads to a new range of turbulent energy cascade, where the rate of energy transfer is controlled by the growth rate of the plasmoids. As a consequence, the turbulent energy spectra steepen and attain a spectral index of -2.2 that is accompanied by changes in the anisotropy of turbulence eddies. The omnipresence of plasmoids and their consequences on, e.g., solar coronal heating, can be further explored with current and future spacecraft and telescopes.

Read this paper on arXiv…

C. Dong, L. Wang, Y. Huang, et al.
Thu, 20 Oct 22
72/74

Comments: 32 pages, 8 figures, the world’s largest 3D MHD turbulence simulation using a fifth-order scheme

A general relativistic extension to mesh-free methods for hydrodynamics [CL]

http://arxiv.org/abs/2210.05682


The detection of gravitational waves has opened a new era for astronomy, allowing for the combined use of gravitational wave and electromagnetic emissions to directly probe the physics of compact objects, which is still poorly understood. So far, the theoretical modelling of these sources has mainly relied on standard numerical techniques such as grid-based methods or smoothed particle hydrodynamics, with only a few recent attempts at using new techniques such as moving-mesh schemes. Here, we introduce a general relativistic extension to the mesh-less hydrodynamic schemes in the code GIZMO, which benefits from the use of Riemann solvers and at the same time perfectly conserves angular momentum thanks to a generalised leap-frog integration scheme. We benchmark our implementation against many standard tests for relativistic hydrodynamics, in either one or three dimensions, and also test the ability to preserve the equilibrium solution of a Tolman-Oppenheimer-Volkoff compact star. In all the presented tests, the code performs extremely well, at a level at least comparable to other numerical techniques.

Read this paper on arXiv…

A. Lupi
Thu, 13 Oct 22
61/68

Comments: 16 pages, 14 figures, submitted to MNRAS

BIFROST: simulating compact subsystems in star clusters using a hierarchical fourth-order forward symplectic integrator code [IMA]

http://arxiv.org/abs/2210.02472


We present BIFROST, an extended version of the GPU-accelerated hierarchical fourth-order forward symplectic integrator code FROST. BIFROST (BInaries in FROST) can efficiently evolve collisional stellar systems with arbitrary binary fractions up to $f_\mathrm{bin}=100\%$ by using secular and regularised integration for binaries, triples, multiple systems or small clusters around black holes within the fourth-order forward integrator framework. Post-Newtonian (PN) terms up to order PN3.5 are included in the equations of motion of compact subsystems with optional three-body and spin-dependent terms. PN1.0 terms for interactions with black holes are computed everywhere in the simulation domain. The code has several merger criteria (gravitational-wave inspirals, tidal disruption events and stellar and compact object collisions) with the addition of relativistic recoil kicks for compact object mergers. We show that for systems with $N$ particles the scaling of the code remains good up to $N_\mathrm{GPU} \sim 40\times N / 10^6$ GPUs and that increasing the binary fraction up to 100 per cent hardly increases the code running time (by less than a factor of $\sim 1.5$). We also validate the numerical accuracy of BIFROST by presenting a number of star cluster simulations, the most extreme of which include a core collapse and a merger of two intermediate mass black holes with a relativistic recoil kick.

Read this paper on arXiv…

A. Rantala, T. Naab, F. Rizzuto, et al.
Fri, 7 Oct 22
27/62

Comments: 25 pages, 16 figures, submitted to MNRAS

A finite-volume scheme for modeling compressible magnetohydrodynamic flows at low Mach numbers in stellar interiors [SSA]

http://arxiv.org/abs/2210.01641


Fully compressible magnetohydrodynamic (MHD) simulations are a fundamental tool for investigating the role of dynamo amplification in the generation of magnetic fields in deep convective layers of stars. The flows that arise in such environments are characterized by low (sonic) Mach numbers (M_son < 0.01 ). In these regimes, conventional MHD codes typically show excessive dissipation and tend to be inefficient as the Courant-Friedrichs-Lewy (CFL) constraint on the time step becomes too strict. In this work we present a new method for efficiently simulating MHD flows at low Mach numbers in a space-dependent gravitational potential while still retaining all effects of compressibility. The proposed scheme is implemented in the finite-volume Seven-League Hydro (SLH) code, and it makes use of a low-Mach version of the five-wave Harten-Lax-van Leer discontinuities (HLLD) solver to reduce numerical dissipation, an implicit-explicit time discretization technique based on Strang splitting to overcome the overly strict CFL constraint, and a well-balancing method that dramatically reduces the magnitude of spatial discretization errors in strongly stratified setups. The solenoidal constraint on the magnetic field is enforced by using a constrained transport method on a staggered grid. We carry out five verification tests, including the simulation of a small-scale dynamo in a star-like environment at M_son ~ 0.001 . We demonstrate that the proposed scheme can be used to accurately simulate compressible MHD flows in regimes of low Mach numbers and strongly stratified setups even with moderately coarse grids.

Read this paper on arXiv…

G. Leidi, C. Birke, R. Andrassy, et al.
Wed, 5 Oct 22
37/73

Comments: N/A

Continuous Simulation Data Stream: A dynamical timescale-dependent output scheme for simulations [IMA]

http://arxiv.org/abs/2210.00835


Exa-scale simulations are on the horizon but almost no new design for the output has been proposed in recent years. In simulations using individual time steps, traditional snapshots over-resolve the particles/cells with large time steps and under-resolve the particles/cells with short time steps. Therefore, they are unable to follow fast events and to use the storage space efficiently. The Continuous Simulation Data Stream (CSDS) is designed to decrease this space while providing an accurate state of the simulation at any time. It takes advantage of the individual time steps to ensure the same relative accuracy for all the particles. The output consists of a single file representing the full evolution of the simulation. Within this file, the particles are written independently and at their own frequency. Through interpolation of the records, the state of the simulation can be recovered at any point in time. In this paper, we show that the CSDS can reduce the storage space by 2.76x for the same accuracy as snapshots, or increase the accuracy by 67.8x for the same storage space, whilst retaining an acceptable reading speed for analysis. By interpolating between records, the CSDS provides the state of the simulation with high accuracy at any time. This should largely improve the analysis of fast events such as supernovae and simplify the construction of light-cone outputs.
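
A minimal sketch of the record-interpolation idea, assuming per-particle records stored at irregular times and simple linear interpolation between the two records bracketing the requested time (the actual CSDS file format and interpolation order are not reproduced here):

```python
import bisect
import numpy as np

def state_at(times, values, t):
    """Linearly interpolate a particle's recorded state `values` (shape [n, d]),
    written at sorted times `times` (shape [n]), to an arbitrary time t.
    Hypothetical helper illustrating the record-interpolation idea of the CSDS."""
    i = bisect.bisect_right(times, t)
    if i == 0:
        return values[0]
    if i == len(times):
        return values[-1]
    w = (t - times[i - 1]) / (times[i] - times[i - 1])
    return (1.0 - w) * values[i - 1] + w * values[i]

# Example: one particle logged at its own, irregular output frequency.
times = np.array([0.0, 0.5, 2.0, 2.1])
pos = np.array([[0.0, 0.0], [1.0, 0.5], [4.0, 2.0], [4.2, 2.1]])
print(state_at(times, pos, 1.25))   # state recovered between two records
```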

Read this paper on arXiv…

L. Hausammann, P. Gonnet and M. Schaller
Tue, 4 Oct 22
34/71

Comments: Accepted for publication in “Astronomy and Computing”

A database of high precision trivial choreographies for the planar three-body problem [CL]

http://arxiv.org/abs/2210.00594


Trivial choreographies are special periodic solutions of the planar three-body problem. In this work we use a modified Newton’s method based on the continuous analog of Newton’s method and high-precision arithmetic for a specialized numerical search for new trivial choreographies. As a result of the search we computed a high precision database of 462 such orbits, including 397 new ones. The initial conditions and the periods of all found solutions are given with 180 correct decimal digits. 108 of the choreographies are linearly stable, including 99 new ones. The linear stability is tested by high-precision computation of the eigenvalues of the monodromy matrices.
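
A minimal sketch of a damped (continuous-analog-style) Newton iteration in high-precision arithmetic using mpmath; the actual search procedure, Jacobians and 180-digit bookkeeping of the paper are not reproduced, and the constant damping factor is a placeholder:

```python
import mpmath as mp

mp.mp.dps = 60  # working precision in decimal digits (the paper uses far more)

def damped_newton(F, J, x0, tau=mp.mpf("0.5"), tol=mp.mpf("1e-50"), max_iter=200):
    """Damped Newton iteration x_{k+1} = x_k - tau * J(x_k)^{-1} F(x_k),
    a simple stand-in for the 'continuous analog of Newton's method'."""
    x = mp.matrix(x0)
    for _ in range(max_iter):
        fx = F(x)
        if mp.norm(fx) < tol:
            break
        dx = mp.lu_solve(J(x), fx)
        x = x - tau * dx
    return x

# Toy example: solve x^2 - 2 = 0, y^2 - 3 = 0 to 50+ digits.
F = lambda v: mp.matrix([v[0]**2 - 2, v[1]**2 - 3])
J = lambda v: mp.matrix([[2*v[0], 0], [0, 2*v[1]]])
print(damped_newton(F, J, [mp.mpf(1), mp.mpf(1)]))
```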

Read this paper on arXiv…

I. Hristov, R. Hristova, I. Puzynin, et. al.
Tue, 4 Oct 22
37/71

Comments: 10 pages, 3 figures, 1 table. arXiv admin note: substantial text overlap with arXiv:2203.02793

REST: A java package for crafting realistic cosmic dust particles [EPA]

http://arxiv.org/abs/2209.05768


The overall understanding of cosmic dust particles is mainly inferred from the different Earth-based measurements of interplanetary dust particles and space missions such as Giotto, Stardust and Rosetta. The results from these measurements indicate the presence of a wide variety of morphologically significant dust particles. To interpret the light scattering and thermal emission observations arising due to dust in different regions of space, it is necessary to generate computer-modelled realistic dust structures of various shape, size, porosity, bulk density, aspect ratio and material inhomogeneity. The present work introduces a Java package called Rough Ellipsoid Structure Tool (REST), a collection of multiple algorithms that aims to craft realistic rough-surface cosmic dust particles from spheres, super-ellipsoids and fractal aggregates, depending on the measured bulk density and porosity. Initially, spheres having $N_d$ dipoles or lattice points are crafted by selecting random material and space seed cells to generate strongly damaged, rough surface and poked structures. Similarly, REST generates rough surface super-ellipsoids and poked structure super-ellipsoids from initial super-ellipsoid structures. REST also generates rough fractal aggregates, which are fractal aggregates having rough-surfaced irregular grains. REST has been applied to create agglomerated debris, agglomerated debris super-ellipsoids and mixed morphology particles. Finally, the light scattering properties of the respective applied structures are studied to ensure their applicability. REST is a flexible structure tool which should be useful for generating various types of dust structures that can be applied to study the physical properties of dust in different regions of space.

Read this paper on arXiv…

P. Halder
Wed, 14 Sep 22
38/90

Comments: 17 pages, 18 figures, Accepted for publication in The Astrophysical Journal Supplement Series (ApJS)

To E or not to E: Numerical Nuances of Global Coronal Models [SSA]

http://arxiv.org/abs/2209.04481


In recent years, global coronal models have experienced an ongoing increase in popularity as tools for forecasting solar weather. Within a domain extending up to 21.5 Rsun, magnetohydrodynamics (MHD) is used to resolve the coronal structure using magnetograms as inputs at the solar surface. Ideally, these computations would be repeated with every update of the solar magnetogram so that they could be used in the ESA Modelling and Data Analysis Working Group (MADAWG) magnetic connectivity tool (this http URL). Thus, it is crucial that these results are both accurate and efficient. While much work has been published showing the results of these models in comparison with observations, little of it discusses the intricate numerical adjustments required to achieve these results. These range from details of boundary condition formulations to adjustments as large as enforcing parallelism between the magnetic field and velocity. Omitting the electric field in ideal MHD can render the description of the physics insufficient and may lead to excessive diffusion and incorrect profiles. We formulate inner boundary conditions which, along with other techniques, reduce artificial electric field generation. Moreover, we investigate how different outer boundary condition formulations and grid design affect the results and convergence, with special focus on the density and the radial component of the B-field. The significant improvement in accuracy of real magnetic map-driven simulations is illustrated for the example of the 2008 eclipse.

Read this paper on arXiv…

M. Brchnelova, B. Kuźma, B. Perri, et. al.
Tue, 13 Sep 22
69/85

Comments: 28 pages, 26 figures, 3 tables, accepted for publication in ApJS

A direct N-body integrator for modelling the chaotic, tidal dynamics of multi-body extrasolar systems: TIDYMESS [IMA]

http://arxiv.org/abs/2209.03955


Tidal dissipation plays an important role in the dynamical evolution of moons, planets, stars and compact remnants. The interesting complexity originates from the interplay between the internal structure and external tidal forcing. Recent and upcoming observing missions of exoplanets and stars in the Galaxy help to provide constraints on the physics of tidal dissipation. It is timely to develop new N-body codes, which allow for experimentation with various tidal models and numerical implementations. We present the open-source N-body code TIDYMESS, which stands for “TIdal DYnamics of Multi-body ExtraSolar Systems”. This code implements a creep deformation law for the bodies, parametrized by their fluid Love numbers and fluid relaxation times. Due to tidal and centrifugal deformations, we approximate the general shape of a body to be an ellipsoid. We calculate the associated gravitational field to quadrupole order, from which we derive the gravitational accelerations and torques. The equations of motion for the orbits, spins and deformations are integrated directly using a fourth-order integration method based on a symplectic composition. We implement a novel integration method for the deformations, which allows for a time step solely dependent on the orbits, and not on the spin periods or fluid relaxation times. This feature greatly speeds up the calculations, while also improving the consistency when comparing different tidal regimes. We demonstrate the capabilities and performance of TIDYMESS, particularly in the niche regime of parameter space where orbits are chaotic and tides become non-linear.

Read this paper on arXiv…

T. Boekholt and A. Correia
Fri, 9 Sep 22
50/76

Comments: 17 pages, 6 figures

Cosmic Inflation and Genetic Algorithms [CL]

http://arxiv.org/abs/2208.13804


Large classes of standard single-field slow-roll inflationary models consistent with the required number of e-folds, the current bounds on the spectral index of scalar perturbations, the tensor-to-scalar ratio, and the scale of inflation can be efficiently constructed using genetic algorithms. The setup is modular and can be easily adapted to include further phenomenological constraints. A semi-comprehensive search for sextic polynomial potentials results in roughly O(300,000) viable models for inflation. The analysis of this dataset reveals a preference for models with a tensor-to-scalar ratio in the range 0.0001 < r < 0.0004. We also consider potentials that involve cosine and exponential terms. In the last part we explore more complex methods of search relying on reinforcement learning and genetic programming. While reinforcement learning proves more difficult to use in this context, the genetic programming approach has the potential to uncover a multitude of viable inflationary models with new functional forms.

Read this paper on arXiv…

S. Abel, A. Constantin, T. Harvey, et. al.
Wed, 31 Aug 22
34/86

Comments: 13 pages, 13 figures

Leap-frog neural network for learning the symplectic evolution from partitioned data [EPA]

http://arxiv.org/abs/2208.14148


For a Hamiltonian system, this work considers the learning and prediction of the position (q) and momentum (p) variables generated by a symplectic evolution map. Similar to Chen & Tao (2021), the symplectic map is represented by its generating function. In addition, we develop a new learning scheme by splitting the time series (q_i, p_i) into several partitions, and then train a leap-frog neural network (LFNN) to approximate the generating function between the first partition (i.e. the initial condition) and one of the remaining partitions. For predicting the system evolution over short timescales, the LFNN can effectively avoid the issue of accumulated error. The LFNN is then applied to learn the behavior of the 2:3 resonant Kuiper belt objects over a much longer time period, and shows two significant improvements over the neural network constructed in our previous work (Li et al. 2022): (1) conservation of the Jacobi integral; (2) highly accurate prediction of the orbital evolution. We propose that the LFNN may be useful for predicting the long-term evolution of Hamiltonian systems.

Read this paper on arXiv…

X. Li, J. Li and Z. Xia
Wed, 31 Aug 22
60/86

Comments: 10 pages, 5 figures, comments welcome

Inferring the dense matter equation of state from neutron star observations via artificial neural networks [CL]

http://arxiv.org/abs/2208.13163


The difficulty in describing the equation of state (EoS) for nuclear matter at densities above the saturation density ($\rho_0$) has led to the emergence of a multitude of models based on different assumptions and techniques. These EoSs, when used to describe a neutron star (NS), lead to differing values of observables. An outstanding goal in astrophysics is to constrain the dense matter EoS by exploiting astrophysical and gravitational wave measurements. Nuclear matter parameters (NMPs) appear as Taylor coefficients in the expansion of the EoS around the saturation density of symmetric and asymmetric nuclear matter, and provide a physically motivated representation of the EoS. In this paper, we introduce a deep learning-based methodology to predict key neutron star observables such as the NS mass, NS radius, and tidal deformability from a set of nuclear matter parameters. Using generated mock data, we confirm that the neural network model is able to accurately capture the underlying physics of finite nuclei and replicate the inter-correlations between the symmetry energy slope, its curvature and the tidal deformability arising from a set of physical constraints. We also perform a systematic Bayesian estimation of the NMPs in light of recent observational data with the trained neural network and study the effects of correlations among these NMPs. We show that by not considering inter-correlations arising from finite nuclei constraints, an intrinsic uncertainty of up to 30% can be observed in the higher-order NMPs.

Read this paper on arXiv…

A. Thete, K. Banerjee and T. Malik
Tue, 30 Aug 22
25/76

Comments: 23 pages, 5 figures, 7 tables

Influence of turbulence on Lyman-alpha scattering [GA]

http://arxiv.org/abs/2208.13103


We develop a Monte Carlo radiative transfer code to study the effect of turbulence with a finite correlation length on scattering of Lyman-alpha (Ly$\alpha$) photons propagating through neutral atomic hydrogen gas. We investigate how the effective mean free path, the emergent spectrum, and the average number of scatterings that Ly$\alpha$ photons experience change in the presence of turbulence. We find that the correlation length is an important and sensitive parameter, which can significantly, by orders of magnitude, reduce the number of scattering events that the average Ly$\alpha$ photon undergoes before it escapes the turbulent cloud. This can have consequences for the effectiveness of the Wouthuysen-Field coupling of the spin temperature to Ly$\alpha$ radiation as well as affect the polarization of the scattered photons.

Read this paper on arXiv…

V. Munirov and A. Kaurov
Tue, 30 Aug 22
42/76

Comments: N/A

Alouette: Yet another encapsulated TAUOLA, but revertible [CL]

http://arxiv.org/abs/2208.11914


We present an algorithm for simulating reverse Monte Carlo decays given an existing forward Monte Carlo decay engine. This algorithm is implemented in the Alouette library, a TAUOLA thin wrapper for simulating decays of tau-leptons. We provide a detailed description of Alouette, as well as validation results.

Read this paper on arXiv…

V. Niess
Fri, 26 Aug 22
15/49

Comments: 30 pages, 4 figures

Flash-X, a multiphysics simulation software instrument [CL]

http://arxiv.org/abs/2208.11630


Flash-X is a highly composable multiphysics software system that can be used to simulate physical phenomena in several scientific domains. It derives some of its solvers from FLASH, which was first released in 2000. Flash-X has a new framework that relies on abstractions and asynchronous communications for performance portability across a range of increasingly heterogeneous hardware platforms. Flash-X is meant primarily for solving Eulerian formulations of applications with compressible and/or incompressible reactive flows. It also has a built-in, versatile Lagrangian framework that can be used in many different ways, including implementing tracers, particle-in-cell simulations, and immersed boundary methods.

Read this paper on arXiv…

A. Dubey, K. Weide, J. O’Neal, et. al.
Thu, 25 Aug 22
17/43

Comments: 16 pages, 5 Figures, published open access in SoftwareX

Dust Temperature and Emission of FirstLight Simulated Galaxies at Cosmic Dawn [GA]

http://arxiv.org/abs/2208.08658


We study the behavior of dust temperature and the infrared emission of FirstLight simulated galaxies at redshifts of 6 and 8, using POLARIS as a Monte Carlo photon transport simulator. To calculate the dust temperature ($T_{dust}$) of the interstellar medium (ISM) of galaxies, POLARIS requires three essential inputs: (1) the physical characteristics of the galaxies, such as the spatial distribution of stars and dust, which are taken from the FirstLight galaxies; (2) the intrinsic properties of the dust grains, which are derived from the Discrete Dipole Approximation code (DDSCAT); (3) the optical properties of the star particles in the form of their spectral energy distributions (SEDs), which are extracted from the Binary Population and Spectral Synthesis (BPASS) model. Our simulations produce 3D maps of the equilibrium dust temperature along with sight-line infrared emission maps of the galaxies. Our results show the importance of excess heating of dust by Cosmic Microwave Background (CMB) radiation at high redshifts, which results in increased mid- and far-infrared (M-FIR) dust emission. The different dust temperature models relate diversely to the optical and intrinsic properties of the galaxies.

Read this paper on arXiv…

M. Mushtaqa and P. Puttasiddappa
Fri, 19 Aug 22
1/55

Comments: 16 pages, 5 figures; International Journal of Natural Sciences Current and Future Research Trends, Vol 14 No 1, 2022, 109 to 125

LeXInt: Package for Exponential Integrators employing Leja interpolation [CL]

http://arxiv.org/abs/2208.08269


We present a publicly available software package for exponential integrators that computes the $\varphi_l(z)$ functions using polynomial interpolation. The interpolation method at Leja points has recently been shown to be competitive with the traditionally used Krylov subspace method. The developed framework facilitates easy adaptation into any Python software package for time integration.
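
For reference, and independent of LeXInt's own implementation, the $\varphi_l$ functions satisfy $\varphi_0(z)=e^z$ and the recurrence $\varphi_{l+1}(z)=(\varphi_l(z)-1/l!)/z$. A naive dense evaluation for a matrix argument, assuming SciPy is available (the Leja interpolation itself, which is the point of the package, is not shown), might look like:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm, solve

def phi(l: int, A: np.ndarray) -> np.ndarray:
    """Evaluate the exponential-integrator function phi_l(A) for a dense matrix A
    via the recurrence phi_{l+1}(z) = (phi_l(z) - 1/l!) / z.
    Naive illustration only; ill-conditioned for nearly singular A."""
    n = A.shape[0]
    P = expm(A)                      # phi_0(A) = exp(A)
    for k in range(l):
        P = solve(A, P - np.eye(n) / factorial(k))
    return P

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
print(phi(1, A))                     # phi_1(A) = A^{-1} (exp(A) - I)
```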

Read this paper on arXiv…

P. Deka, L. Einkemmer and M. Tokman
Thu, 18 Aug 22
19/45

Comments: Publicly available software available at this https URL, in submission

The PUMAS library [CL]

http://arxiv.org/abs/2206.01457


The PUMAS library is a transport engine for muon and tau leptons in matter. It can operate with a configurable level of detail, from a fast deterministic CSDA mode to a detailed Monte Carlo simulation. A peculiarity of PUMAS is that it is revertible, i.e. it can run in forward or in backward mode. Thus, the PUMAS library is particularly well suited for muography applications. In the present document, we provide a detailed description of PUMAS, of its physics and of its implementation.

Read this paper on arXiv…

V. Niess
Mon, 6 Jun 22
8/41

Comments: 72 pages, 13 figures

AtoMEC: an open-source average-atom Python code [CL]

http://arxiv.org/abs/2206.01074


Average-atom models are an important tool in studying matter under extreme conditions, such as those conditions experienced in planetary cores, brown and white dwarfs, and during inertial confinement fusion. In the right context, average-atom models can yield results with similar accuracy to simulations which require orders of magnitude more computing time, and thus they can greatly reduce financial and environmental costs. Unfortunately, due to the wide range of possible models and approximations, and the lack of open-source codes, average-atom models can at times appear inaccessible. In this paper, we present our open-source average-atom code, atoMEC. We explain the aims and structure of atoMEC to illuminate the different stages and options in an average-atom calculation, and facilitate community contributions. We also discuss the use of various open-source Python packages in atoMEC, which have expedited its development.

Read this paper on arXiv…

T. Callow, D. Kotik, E. Kraisler, et. al.
Fri, 3 Jun 22
6/57

Comments: 9 pages, 8 figures. Submitted to Proceedings of the 21st Python in Science Conference (SciPy 2022)

New families of periodic orbits for the planar three-body problem computed with high precision [CL]

http://arxiv.org/abs/2205.14709


In this paper we use a modified Newton’s method based on the continuous analog of Newton’s method and high-precision arithmetic for a general numerical search of periodic orbits for the planar three-body problem. We consider relatively short periods and a relatively coarse search grid. As a result, we found 123 periodic solutions belonging to 105 new topological families that are not included in the database in [Science China Physics, Mechanics and Astronomy 60.12 (2017)]. The extensive numerical search is achieved by solving many independent tasks in parallel using many cores of a computational cluster.

Read this paper on arXiv…

I. Hristov, R. Hristova, I. Puzynin, et. al.
Tue, 31 May 22
71/89

Comments: 5 pages, 1 figure. arXiv admin note: substantial text overlap with arXiv:2203.02793

A general stability-driven approach for the refinement of multi-planet systems [EPA]

http://arxiv.org/abs/2205.09319


Over the past years, the number of detected multi-planet systems has grown significantly, an important sub-class of which are the compact configurations. A precise knowledge of these systems is crucial to understanding the conditions under which planetary systems form and evolve. However, observations often leave these systems with large uncertainties, notably on the orbital eccentricities. This is especially prominent for systems with low-mass planets detected with Radial Velocities (RV), whose share of the exoplanet population keeps growing. It is becoming a common approach to refine these parameters with the help of orbital stability arguments.
Such dynamical techniques can be computationally expensive. In this work we use an alternative procedure that is orders of magnitude faster than classical N-body integration approaches.
We couple a reliable exploration of the parameter space with the precision of the Numerical Analysis of Fundamental Frequencies (NAFF, Laskar 1990) fast chaos indicator. We also propose a general procedure to calibrate the NAFF indicator on any multi-planet system without additional computational cost. This calibration strategy is illustrated on HD 45364, in addition to yet-unpublished measurements obtained with the HARPS and CORALIE high-resolution spectrographs. We validate the calibration approach on HD 202696. We test the performance of this stability-driven approach on two systems with different architectures. First we study HD 37124, a 3-planet system composed of planets in the Jovian regime. Then, we analyse HD 215152, a compact system of four low-mass planets.
We demonstrate the potential of the NAFF stability-driven approach to refine the orbital parameters and planetary masses. We stress the importance of undertaking systematic global dynamical analyses of every new multi-planet system discovered.

Read this paper on arXiv…

M. Stalport, J. Delisle, S. Udry, et. al.
Fri, 20 May 22
65/65

Comments: 14 pages, 14 figures. Accepted for publication in A&A

Modeling the thermal conduction in the solar atmosphere with the code MANCHA3D [SSA]

http://arxiv.org/abs/2205.08846


Thermal conductivity is one of the important mechanisms of heat transfer in the solar corona. In the limit of strongly magnetized plasma, it is typically modeled by Spitzer’s expression, in which the heat flux is aligned with the magnetic field. This paper describes the implementation of heat conduction into the code MANCHA3D with the aim of extending single-fluid MHD simulations from the upper convection zone into the solar corona. Two different schemes to model heat conduction are implemented: (1) a standard scheme where a parabolic term is added to the energy equation, and (2) a scheme where the hyperbolic heat flux equation is solved. The first scheme limits the time step due to the explicit integration of a parabolic term, which makes the simulations computationally expensive. The second scheme removes this time-step limitation by artificially limiting the heat conduction speed to computationally manageable values. The validation of both schemes is carried out with standard tests in one, two, and three spatial dimensions. Furthermore, we implement the model for heat flux derived by Braginskii (1965) in its most general form, in which the expression for the heat flux depends on the ratio of the collisional to cyclotron frequencies of the plasma and, therefore, on the magnetic field strength. Additionally, our implementation takes into account the heat conduction in parallel, perpendicular, and transverse directions, and provides the contributions from ions and electrons separately. The model also transitions smoothly between field-aligned conductivity and isotropic conductivity for regions with a low or null magnetic field. Finally, we present a two-dimensional test for heat conduction using realistic values of the solar atmosphere where we prove the robustness of the two schemes implemented.
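
For orientation, a common way to write the two treatments contrasted above: the field-aligned Spitzer flux, and a hyperbolic (telegraph-type) equation in which the flux relaxes towards the Spitzer value on a timescale that limits the signal speed. This is only a sketch of the standard forms; the exact coefficients and limiters used in MANCHA3D may differ.

```latex
% Parabolic (Spitzer, field-aligned) heat flux entering the energy equation:
\mathbf{q}_{\mathrm{Sp}} = -\kappa_{\parallel}\,\bigl(\hat{\mathbf{b}}\cdot\nabla T\bigr)\,\hat{\mathbf{b}},
\qquad \hat{\mathbf{b}} = \mathbf{B}/|\mathbf{B}| .
% Hyperbolic treatment: the flux relaxes towards the Spitzer value on a
% timescale tau, which caps the conduction speed and relaxes the CFL limit:
\frac{\partial \mathbf{q}}{\partial t}
  = -\frac{1}{\tau}\left(\mathbf{q} - \mathbf{q}_{\mathrm{Sp}}\right).
```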

Read this paper on arXiv…

A. Navarro, E. Khomenko, M. Modestov, et. al.
Thu, 19 May 22
5/61

Comments: 11 pages, 8 figures

Physics-Informed Machine Learning for Modeling Turbulence in Supernovae [CL]

http://arxiv.org/abs/2205.08663


Turbulence plays an integral role in astrophysical phenomena, including core-collapse supernovae (CCSN). Unfortunately, current simulations must resort to using subgrid models for turbulence treatment, as direct numerical simulations (DNS) are too expensive to run. However, subgrid models used in CCSN simulations lack accuracy compared to DNS results. Recently, Machine Learning (ML) has shown impressive prediction capability for turbulence closure. We have developed a physics-informed, deep convolutional neural network (CNN) to preserve the realizability condition of Reynolds stress that is necessary for accurate turbulent pressure prediction. The applicability of the ML model was tested for magnetohydrodynamic (MHD) turbulence subgrid modeling in both stationary and dynamic regimes. Our future goal is to utilize our ML methodology within the MHD CCSN framework to investigate the effects of accurately-modeled turbulence on the explosion rate of these events.
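
As a point of reference for the realizability condition mentioned above (not the authors' network or loss function): a Reynolds-stress tensor is realizable if its normal stresses are non-negative, its shear stresses satisfy the Cauchy-Schwarz inequality, and its determinant is non-negative. A simple check of these classical constraints might look like:

```python
import numpy as np

def is_realizable(R: np.ndarray, tol: float = 1e-12) -> bool:
    """Check the classical realizability constraints of a 3x3 Reynolds stress
    tensor R_ij = <u_i' u_j'>: non-negative normal stresses, Cauchy-Schwarz
    bounds on the shear stresses, and a non-negative determinant.
    Illustrative helper; the paper's ML model enforces realizability differently."""
    if np.any(np.diag(R) < -tol):
        return False
    for i in range(3):
        for j in range(i + 1, 3):
            if R[i, j] ** 2 > R[i, i] * R[j, j] + tol:
                return False
    return np.linalg.det(R) >= -tol

R = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.1, 0.5]])
print(is_realizable(R))  # True for this positive semi-definite example
```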

Read this paper on arXiv…

P. Karpov, C. Huang, I. Sitdikov, et. al.
Thu, 19 May 22
53/61

Comments: N/A

Reynolds number dependence of Lagrangian dispersion in direct numerical simulations of anisotropic magnetohydrodynamic turbulence [CL]

http://arxiv.org/abs/2205.06879


Large-scale magnetic fields thread through the electrically conducting matter of the interplanetary and interstellar medium, stellar interiors, and other astrophysical plasmas, producing anisotropic flows with regions of high-Reynolds-number turbulence. It is common to encounter turbulent flows structured by a magnetic field with a strength approximately equal to the root-mean-square magnetic fluctuations. In this work, direct numerical simulations of anisotropic magnetohydrodynamic (MHD) turbulence influenced by such a magnetic field are conducted for a series of cases that have identical resolution, and increasing grid sizes up to $2048^3$. The result is a series of closely comparable simulations at Reynolds numbers ranging from 1,400 up to 21,000. We investigate the influence of the Reynolds number from the Lagrangian viewpoint by tracking fluid particles and calculating single-particle and two-particle statistics. The influence of Alfvénic fluctuations and the fundamental anisotropy on the MHD turbulence in these statistics is discussed. Single-particle diffusion curves exhibit mildly superdiffusive behaviors that differ in the direction aligned with the magnetic field and the direction perpendicular to it. Competing alignment processes affect the dispersion of particle pairs, in particular at the beginning of the inertial subrange of time scales. Scalings for relative dispersion, which become clearer in the inertial subrange for larger Reynolds number, can be observed that are steeper than indicated by the Richardson prediction.

Read this paper on arXiv…

J. Pratt, A. Busse and W. Müller
Tue, 17 May 22
32/95

Comments: 23 pages, 11 figures

Front Propagation from Radiative Sources [HEAP]

http://arxiv.org/abs/2205.04526


Fronts are regions of transition from one state to another in a medium. They are present in many areas of science and applied mathematics, and modelling them and their evolution is often an effective way of treating the underlying phenomena responsible for them. In this paper, we propose a new approach to modelling front propagation, which characterises the evolution of structures surrounding radiative sources. This approach is generic and has a wide range of applications, particularly when dealing with the propagation of phase or state transitions in media surrounding radiation emitting objects. As an illustration, we show an application in modelling the propagation of ionisation fronts around early stars during the cosmological Epoch of Reionisation (EoR) and show that the results are consistent with those of existing equations but provide much richer sources of information.

Read this paper on arXiv…

T. Steele and K. Wu
Wed, 11 May 22
52/60

Comments: 10 pages, 4 figures

Automatic Detection of Interplanetary Coronal Mass Ejections in Solar Wind In Situ Data [SSA]

http://arxiv.org/abs/2205.03578


Interplanetary coronal mass ejections (ICMEs) are one of the main drivers for space weather disturbances. In the past, different approaches have been used to automatically detect events in existing time series resulting from solar wind in situ observations. However, accurate and fast detection still remains a challenge when facing the large amount of data from different instruments. For the automatic detection of ICMEs we propose a pipeline using a method that has recently proven successful in medical image segmentation. Comparing it to an existing method, we find that while achieving similar results, our model outperforms the baseline regarding training time by a factor of approximately 20, thus making it more applicable for other datasets. The method has been tested on in situ data from the Wind spacecraft between 1997 and 2015 with a True Skill Statistic (TSS) of 0.64. Out of the 640 ICMEs, 466 were detected correctly by our algorithm, producing a total of 254 False Positives. Additionally, it produced reasonable results on datasets with fewer features and smaller training sets from Wind, STEREO-A and STEREO-B with True Skill Statistics of 0.56, 0.57 and 0.53, respectively. Our pipeline manages to find the start of an ICME with a mean absolute error (MAE) of around 2 hours and 56 minutes, and the end time with a MAE of 3 hours and 20 minutes. The relatively fast training allows straightforward tuning of hyperparameters and could therefore easily be used to detect other structures and phenomena in solar wind data, such as corotating interaction regions.
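
For completeness, the True Skill Statistic quoted above is computed from the confusion matrix as TSS = TP/(TP+FN) − FP/(FP+TN); a one-liner suffices. The true-negative count below is a hypothetical value chosen only for illustration (the abstract does not report it).

```python
def true_skill_statistic(tp: int, fp: int, fn: int, tn: int) -> float:
    """TSS = sensitivity - false alarm rate, the metric quoted in the abstract."""
    return tp / (tp + fn) - fp / (fp + tn)

# tp, fp, fn follow the abstract (466 detections out of 640 ICMEs, 254 false
# positives); tn is a hypothetical count, so the printed value is illustrative.
print(true_skill_statistic(tp=466, fp=254, fn=174, tn=2600))
```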

Read this paper on arXiv…

H. Rüdisser, A. Windisch, U. Amerstorfer, et. al.
Tue, 10 May 22
12/70

Comments: N/A

Long-term instability of the inner Solar System: numerical experiments [EPA]

http://arxiv.org/abs/2205.04170


Apart from being chaotic, the inner planets in the Solar System constitute an open system, as they are forced by the regular long-term motion of the outer ones. No integrals of motion can bound a priori the stochastic wanderings in their high-dimensional phase space. Still, the probability of a dynamical instability is remarkably low over the next 5 billion years, a timescale a thousand times longer than the Lyapunov time. The dynamical half-life of Mercury has indeed been estimated recently at 40 billion years. By means of the computer algebra system TRIP, we consider a set of dynamical models resulting from truncation of the forced secular dynamics recently proposed for the inner planets at different degrees in eccentricities and inclinations. Through ensembles of $10^3$ to $10^5$ numerical integrations spanning 5 to 100 Gyr, we find that the Hamiltonian truncated at degree 4 practically does not allow any instability over 5 Gyr. The destabilisation is mainly due to terms of degree 6. This surprising result suggests an analogy to the Fermi-Pasta-Ulam-Tsingou problem, in which tangency to the Toda Hamiltonian explains the very long timescale of thermalisation, which Fermi unsuccessfully looked for.

Read this paper on arXiv…

N. Hoang, F. Mogavero and J. Laskar
Tue, 10 May 22
30/70

Comments: Accepted for publication in MNRAS. 9 pages, 7 figures

The origin of chaos in the Solar System through computer algebra [EPA]

http://arxiv.org/abs/2205.03298


The discovery of the chaotic motion of the planets in the Solar System dates back more than 30 years. Still, no analytical theory has satisfactorily addressed the origin of chaos so far. Implementing canonical perturbation theory in the computer algebra system TRIP, we systematically retrieve the secular resonances at work along the orbital solution of a forced long-term dynamics of the inner planets. We compare the time statistic of their half-widths to the ensemble distribution of the maximum Lyapunov exponent and establish dynamical sources of chaos in an unbiased way. New resonances are predicted by the theory and checked against direct integrations of the Solar System. The image of an entangled dynamics of the inner planets emerges.

Read this paper on arXiv…

F. Mogavero and J. Laskar
Mon, 9 May 22
20/63

Comments: 16 pages, 8 figures. Astronomy & Astrophysics Letters

Cloud formation in Exoplanetary Atmospheres [EPA]

http://arxiv.org/abs/2205.00454


This invited review for young researchers presents key ideas on cloud formation as a key part of virtual laboratories for exoplanet atmospheres. The basic concepts are presented, followed by a time-scale analysis to disentangle process hierarchies. The kinetic approach to cloud formation modelling is described in some detail to allow the discussion of cloud structures as a prerequisite for future extrasolar weather forecasts.

Read this paper on arXiv…

C. Helling
Tue, 3 May 22
79/82

Comments: 21 pages, 6 figures; accepted as a chapter in the book “Planetary systems now”, eds. Luisa M. Lara and David Jewitt, World Scientific Publishing Co Pte Ltd

Reliable event detection for Taylor methods in astrodynamics [EPA]

http://arxiv.org/abs/2204.09948


We present a novel approach for the detection of events in systems of ordinary differential equations. The new method combines the unique features of Taylor integrators with state-of-the-art polynomial root finding techniques to yield a novel algorithm ensuring strong event detection guarantees at a modest computational overhead. Detailed tests and benchmarks focused on problems in astrodynamics and celestial mechanics (such as collisional N-body systems, spacecraft dynamics around irregular bodies accounting for eclipses, computation of Poincaré sections, etc.) show how our approach is superior in both performance and detection accuracy to strategies commonly employed in modern numerical integration works. The new algorithm is available in our open source Taylor integration package heyoka.
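
A minimal sketch of the basic idea, not the heyoka implementation: within one Taylor step the event function g(t) is itself available as a polynomial in the step time, so its zeros inside the step can be located with a polynomial root finder.

```python
import numpy as np
from numpy.polynomial import Polynomial

def detect_events(taylor_coeffs, h):
    """Given the Taylor coefficients of an event function g(t) over one
    integration step of size h (ascending order, t measured from the step
    start), return the event times inside the step. Illustrative only."""
    g = Polynomial(taylor_coeffs)
    roots = g.roots()
    real = roots[np.abs(roots.imag) < 1e-12].real
    return np.sort(real[(real >= 0.0) & (real <= h)])

# Toy example: g(t) = (t - 0.3)(t - 2.0) = 0.6 - 2.3 t + t^2 over a step h = 1.
print(detect_events([0.6, -2.3, 1.0], h=1.0))   # -> [0.3]
```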

Read this paper on arXiv…

F. Biscani and D. Izzo
Fri, 22 Apr 22
55/64

Comments: Accepted for publication in MNRAS

The Escape of Globular Clusters from the Satellite Dwarf Galaxies of the Milky Way [GA]

http://arxiv.org/abs/2204.08481


Using numerical simulations, we have studied the escape of globular clusters (GCs) from the satellite dwarf spheroidal galaxies (dSphs) of the Milky Way (MW). We start by following the orbits of a large sample of GCs around dSphs in the presence of the MW potential field. We then obtain the fraction of GCs leaving their host dSphs within a Hubble time. We model dSphs by a Hernquist density profile with masses between $10^7\,\mathrm{M}_{\odot}$ and $7\times 10^9\,\mathrm{M}_{\odot}$. All dSphs lie on the Galactic disc plane, but they have different orbital eccentricities and apogalactic distances. We compute the escape fraction of GCs from 13 of the most massive dSphs of the MW, using their realistic orbits around the MW (as determined by Gaia). The escape fraction of GCs from the 13 dSphs is in the range $12\%$ to $93\%$. The average escape time of GCs from these dSphs was less than $8\,\mathrm{Gyr}$, indicating that the escape process of GCs from dSphs is already over. We then adopt a set of observationally constrained density profiles for the specific case of the Fornax dSph. According to our results, the escape fraction of GCs shows a negative correlation with both the mass and the apogalactic distance of the dSphs, as well as a positive correlation with the orbital eccentricity of the dSphs. In particular, we find that the escape fraction of GCs from the Fornax dSph is between $13\%$ and $38\%$. Finally, we observe that when GCs leave their host dSphs, their final orbit around the MW does not differ much from that of their host dSphs.

Read this paper on arXiv…

A. Shirazi, H. Haghi, P. Khalaj, et. al.
Wed, 20 Apr 22
54/62

Comments: 16 pages, 7 figures (including 1 in the appendix), 6 tables (including 2 in the appendix). Accepted for publication in MNRAS

Gliding on ice in search of accurate and cost-effective computational methods for astrochemistry on grains: the puzzling case of the HCN isomerization [CL]

http://arxiv.org/abs/2204.04642


The isomerization of hydrogen cyanide to hydrogen isocyanide on icy grain surfaces is investigated by an accurate composite method (jun-Cheap) rooted in the coupled cluster ansatz and by density functional approaches. After benchmarking density functional predictions of both geometries and reaction energies against jun-Cheap results for the relatively small model system HCN — (H2O)2 the best performing DFT methods are selected. A large cluster containing 20 water molecules is then employed within a QM/QM$’$ approach to include a realistic environment mimicking the surface of icy grains. Our results indicate that four water molecules are directly involved in a proton relay mechanism, which strongly reduces the activation energy with respect to the direct hydrogen transfer occurring in the isolated molecule. Further extension of the size of the cluster up to 192 water molecules in the framework of a three-layer QM/QM’/MM model has a negligible effect on the energy barrier ruling the isomerization. Computation of reaction rates by transition state theory indicates that on icy surfaces the isomerization of HNC to HCN could occur quite easily even at low temperatures thanks to the reduced activation energy that can be effectively overcome by tunneling.

Read this paper on arXiv…

C. Baiano, J. Lupi, V. Barone, et. al.
Tue, 12 Apr 22
41/87

Comments: Accepted on JCTC

Using Kernel-Based Statistical Distance to Study the Dynamics of Charged Particle Beams in Particle-Based Simulation Codes [CL]

http://arxiv.org/abs/2204.04275


Measures of discrepancy between probability distributions (statistical distance) are widely used in the fields of artificial intelligence and machine learning. We describe how certain measures of statistical distance can be implemented as numerical diagnostics for simulations involving charged-particle beams. Related measures of statistical dependence are also described. The resulting diagnostics provide sensitive measures of dynamical processes important for beams in nonlinear or high-intensity systems, which are otherwise difficult to characterize. The focus is on kernel-based methods such as Maximum Mean Discrepancy, which have a well-developed mathematical foundation and reasonable computational complexity. Several benchmark problems and examples involving intense beams are discussed. While the focus is on charged-particle beams, these methods may also be applied to other many-body systems such as plasmas or gravitational systems.
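
A minimal numpy sketch of one of the kernel-based measures named above, the (biased) Maximum Mean Discrepancy with a Gaussian kernel, applied to two particle ensembles; the diagnostics in the paper are more elaborate and the phase-space samples here are invented for illustration:

```python
import numpy as np

def mmd_rbf(X: np.ndarray, Y: np.ndarray, sigma: float = 1.0) -> float:
    """Biased MMD^2 estimate between samples X (n, d) and Y (m, d)
    using a Gaussian (RBF) kernel of bandwidth sigma."""
    def k(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
beam_a = rng.normal(0.0, 1.0, size=(2000, 2))   # e.g. (x, x') phase-space samples
beam_b = rng.normal(0.2, 1.1, size=(2000, 2))   # slightly mismatched beam
print(mmd_rbf(beam_a, beam_a[::-1]), mmd_rbf(beam_a, beam_b))  # ~0 vs. nonzero
```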

Read this paper on arXiv…

C. Mitchell, R. Ryne and K. Hwang
Tue, 12 Apr 22
87/87

Comments: N/A

A Mini-Chemical Scheme with Net Reactions for 3D GCMs I.: Thermochemical Kinetics [EPA]

http://arxiv.org/abs/2204.04201


Growing evidence has indicated that the global composition distribution plays an indisputable role in interpreting observational data. 3D general circulation models (GCMs) with a reliable treatment of chemistry and clouds are particularly crucial in preparing for the upcoming observations. In the effort to achieve 3D chemistry-climate modeling, the challenge mainly lies in the expensive computing power required for treating a large number of chemical species and reactions. Motivated by the need for a robust and computationally efficient chemical scheme, we devise a mini-chemical network with a minimal number of species and reactions for H$_2$-dominated atmospheres. We apply a novel technique to simplify the chemical network from a full kinetics model — VULCAN — by replacing a large number of intermediate reactions with net reactions. The number of chemical species is cut down from 67 to 12, with the major species of thermal and observational importance retained, including H$_2$O, CH$_4$, CO, CO$_2$, C$_2$H$_2$, NH$_3$, and HCN. The total number of reactions is greatly reduced from $\sim$800 to 20. The mini-chemical scheme is validated by verifying the temporal evolution and benchmarking the predicted compositions in four exoplanet atmospheres (GJ 1214b, GJ 436b, HD 189733b, HD 209458b) against the full kinetics of VULCAN. It reproduces the chemical timescales and composition distributions of the full kinetics well within an order of magnitude for the major species in the pressure range of 1 bar to 0.1 mbar across various metallicities and carbon-to-oxygen (C/O) ratios. The small scale of the mini-chemical scheme permits simple use and fast computation, which is optimal for implementation in a 3D GCM or a retrieval framework. We focus on the thermochemical kinetics of net reactions in this paper and address photochemistry in a follow-up paper.

Read this paper on arXiv…

S. Tsai, E. Lee and R. Pierrehumbert
Mon, 11 Apr 22
59/61

Comments: 9 pages, 5 figures, accepted for publication in A&A

Escaping the maze: a statistical sub-grid model for cloud-scale density structures in the interstellar medium [GA]

http://arxiv.org/abs/2204.02053


The interstellar medium (ISM) is a turbulent, highly structured multi-phase medium. State-of-the-art cosmological simulations of the formation of galactic discs usually lack the resolution to accurately resolve those multi-phase structures. However, small-scale density structures play an important role in the life cycle of the ISM, and determine the fraction of cold, dense gas, the amount of star formation and the amount of radiation and momentum leakage from cloud-embedded sources. Here, we derive a statistical model to calculate the unresolved small-scale ISM density structure from coarse-grained, volume-averaged quantities such as the gas clumping factor, $\mathcal{C}$, and mean density $\left<\rho\right>_V$. Assuming that the large-scale ISM density is statistically isotropic, we derive a relation between the three-dimensional clumping factor, $\mathcal{C}_\rho$, and the clumping factor of the $4\pi$ column density distribution on the cloud surface, $\mathcal{C}_\Sigma$, and find $\mathcal{C}_\Sigma=\mathcal{C}_\rho^{2/3}$. Applying our model to calculate the covering fraction, i.e., the $4\pi$ sky distribution of optically thick sight-lines around sources inside interstellar gas clouds, we demonstrate that small-scale density structures lead to significant differences at fixed physical ISM density. Our model predicts that gas clumping increases the covering fraction by up to 30 per cent at low ISM densities compared to a uniform medium. On the other hand, at larger ISM densities, gas clumping suppresses the covering fraction and leads to increased scatter such that covering fractions can span a range from 20 to 100 per cent at fixed ISM density. All data and example code are publicly available on GitHub.
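
To make the quantities concrete (a sketch, not the authors' released code): the clumping factor of a field is the ratio of its mean square to its squared mean, and the relation quoted above links the 3D density clumping to the clumping of the column densities seen from the cloud centre. The lognormal grid below is a stand-in for an unresolved ISM patch.

```python
import numpy as np

def clumping_factor(field: np.ndarray) -> float:
    """C = <f^2> / <f>^2 for any sampled field f (density or column density)."""
    return float(np.mean(field**2) / np.mean(field)**2)

# Toy lognormal density field on a grid standing in for the unresolved ISM.
rng = np.random.default_rng(1)
rho = rng.lognormal(mean=0.0, sigma=1.0, size=(64, 64, 64))

C_rho = clumping_factor(rho)
C_sigma_pred = C_rho ** (2.0 / 3.0)   # the C_Sigma = C_rho^(2/3) relation of the paper
print(f"C_rho = {C_rho:.2f}, predicted C_Sigma = {C_sigma_pred:.2f}")
```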

Read this paper on arXiv…

T. Buck, C. Pfrommer, P. Girichidis, et. al.
Wed, 6 Apr 22
67/68

Comments: 16 pages with 8 figures, 14 pages main text with 7 figures, 1 page references, 1 page appendix with 1 figure, accepted by MNRAS on April 1st 2022

An implicit symplectic solver for high-precision long term integrations of the Solar System [CL]

http://arxiv.org/abs/2204.01539


Compared to other symplectic integrators (the Wisdom and Holman map and its higher order generalizations) that also take advantage of the hierarchical nature of the motion of the planets around the central star, our methods require solving implicit equations at each time-step. We claim that, despite this disadvantage, FCIRK16 is more efficient than explicit symplectic integrators for high precision simulations thanks to: (i) its high order of precision, (ii) its easy parallelization, and (iii) its efficient mixed-precision implementation which reduces the effect of round-off errors. In addition, unlike typical explicit symplectic integrators for near Keplerian problems, FCIRK16 is able to integrate problems with arbitrary perturbations (not necessarily split as a sum of integrable parts). We present a novel analysis of the effect of close encounters in the leading term of the local discretization errors of our integrator. Based on that analysis, a mechanism to detect and refine integration steps that involve close encounters is incorporated in our code. That mechanism allows FCIRK16 to accurately resolve close encounters of arbitrary bodies. We illustrate our treatment of close encounters with the application of FCIRK16 to a point mass Newtonian 15-body model of the Solar System (with the Sun, the eight planets, Pluto, and five main asteroids) and a 16-body model treating the Moon as a separate body. We also present some numerical comparisons of FCIRK16 with a state-of-the-art high order explicit symplectic scheme for the 16-body model that demonstrate the superiority of our integrator when very high precision is required.

Read this paper on arXiv…

M. Antoñana, E. Alberdi, J. J.Makazaga, et. al.
Tue, 5 Apr 22
60/83

Comments: N/A

Near-Linear Orbit Uncertainty Propagation in the Perturbed Two-Body Problem [EPA]

http://arxiv.org/abs/2204.00395


The paper addresses the problem of minimizing the impact of non-linearities when dealing with uncertainty propagation in the perturbed two-body problem. The recently introduced generalized equinoctial orbital element set (GEqOE) is employed as a means to reduce non-linear effects stemming from J$_2$ and higher order gravity field harmonics. The uncertainty propagation performance of the proposed set of elements in different Earth orbit scenarios, including low-thrust orbit raising, is evaluated using a Cramér-von Mises test on the Mahalanobis distance of the uncertainty distribution. A considerable improvement compared to all sets of elements proposed so far is obtained.
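
A sketch of the kind of Gaussianity check described above, under two assumptions not stated in the abstract: SciPy's `cramervonmises` is used as the test, and the squared Mahalanobis distances of an n-dimensional Gaussian sample are compared against a chi-squared law with n degrees of freedom. The paper's propagation machinery is not reproduced here.

```python
import numpy as np
from scipy.stats import cramervonmises

def gaussianity_pvalue(samples: np.ndarray) -> float:
    """Cramer-von Mises p-value that the squared Mahalanobis distances of the
    propagated samples follow a chi2 law with d degrees of freedom, as expected
    if the uncertainty distribution has remained Gaussian."""
    mean = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
    diff = samples - mean
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)   # squared Mahalanobis distances
    return cramervonmises(d2, "chi2", args=(samples.shape[1],)).pvalue

rng = np.random.default_rng(2)
gaussian_cloud = rng.multivariate_normal(np.zeros(6), np.eye(6), size=5000)
print(gaussianity_pvalue(gaussian_cloud))   # large p-value: Gaussianity not rejected
```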

Read this paper on arXiv…

J. Hernando-Ayuso, C. Bombardelli, G. Baù, et. al.
Mon, 4 Apr 22
38/50

Comments: N/A

Time-Varying Magnetopause Reconnection during Sudden Commencement: Global MHD Simulations [CL]

http://arxiv.org/abs/2203.14056


In response to a solar wind dynamic pressure enhancement, the compression of the magnetosphere generates strong ionospheric signatures and a sharp variation in the ground magnetic field, termed sudden commencement (SC). Whilst such compressions have also been associated with a contraction of the ionospheric polar cap due to the triggering of reconnection in the magnetotail, the effect of any changes in dayside reconnection is less clear and is a key component in fully understanding the system response. In this study we explore the time-dependent nature of dayside coupling during SC by performing global simulations using the Gorgon MHD code, and impact the magnetosphere with a series of interplanetary shocks with different parameters. We identify the location and evolution of the reconnection region in each case as the shock propagates through the magnetosphere, finding strong enhancement in the dayside reconnection rate and prompt expansion of the dayside polar cap prior to the eventual triggering of tail reconnection. This effect pervades for a variety of IMF orientations, and the reconnection rate is most enhanced for events with higher dynamic pressure. We explain this by repeating the simulations with a large explicit resistivity, showing that compression of the magnetosheath plasma near the propagating shock front allows for reconnection of much greater intensity and at different locations on the dayside magnetopause than during typical solar wind conditions. The results indicate that the dynamic behaviour of dayside coupling may render steady models of reconnection inaccurate during the onset of a severe space weather event.

Read this paper on arXiv…

J. Eggington, R. Desai, L. Mejnertsen, et. al.
Tue, 29 Mar 22
8/73

Comments: N/A

Provably Positive Central DG Schemes via Geometric Quasilinearization for Ideal MHD Equations [CL]

http://arxiv.org/abs/2203.14853


In the numerical simulation of ideal MHD, keeping the pressure and density positive is essential for both physical considerations and numerical stability. This is a challenge, due to the underlying relation between such positivity-preserving (PP) property and the magnetic divergence-free (DF) constraint as well as the strong nonlinearity of the MHD equations. This paper presents the first rigorous PP analysis of the central discontinuous Galerkin (CDG) methods and constructs arbitrarily high-order PP CDG schemes for ideal MHD. By the recently developed geometric quasilinearization (GQL) approach, our analysis reveals that the PP property of standard CDG methods is closely related to a discrete DF condition, whose form was unknown and differs from the non-central DG and finite volume cases in [K. Wu, SIAM J. Numer. Anal. 2018]. This result lays the foundation for the design of our PP CDG schemes. In the 1D case, the discrete DF condition is naturally satisfied, and we prove that the standard CDG method is PP under a condition that can be enforced with a PP limiter. However, in the multidimensional cases, the discrete DF condition is highly nontrivial yet critical, and we prove that the standard CDG method, even with the PP limiter, is not PP in general, as it fails to meet the discrete DF condition. We address this issue by carefully analyzing the structure of the discrete divergence and then constructing new locally DF CDG schemes for Godunov’s modified MHD equations with an additional source. The key point is to find a suitable discretization of the source term such that it exactly offsets all the terms in the discrete DF condition. Based on the GQL approach, we prove the PP property of the new multidimensional CDG schemes. The robustness and accuracy of the PP CDG schemes are validated by several demanding examples, including the high-speed jets and blast problems with very low plasma beta.

Read this paper on arXiv…

K. Wu, H. Jiang and C. Shu
Tue, 29 Mar 22
50/73

Comments: N/A

Applications of physics informed neural operators [CL]

http://arxiv.org/abs/2203.12634


We present an end-to-end framework to learn partial differential equations that brings together initial data production, selection of boundary conditions, and the use of physics-informed neural operators to solve partial differential equations that are ubiquitous in the study and modeling of physics phenomena. We first demonstrate that our methods reproduce the accuracy and performance of other neural operators published elsewhere in the literature to learn the 1D wave equation and the 1D Burgers equation. Thereafter, we apply our physics-informed neural operators to learn new types of equations, including the 2D Burgers equation in the scalar, inviscid and vector types. Finally, we show that our approach is also applicable to learn the physics of the 2D linear and nonlinear shallow water equations, which involve three coupled partial differential equations. We release our artificial intelligence surrogates and scientific software to produce initial data and boundary conditions to study a broad range of physically motivated scenarios. We provide the source code, an interactive website to visualize the predictions of our physics informed neural operators, and a tutorial for their use at the Data and Learning Hub for Science.

Read this paper on arXiv…

S. Rosofsky and E. Huerta
Fri, 25 Mar 22
11/46

Comments: 15 pages, 10 figures

COSE$ν$: A Collective Oscillation Simulation Engine for Neutrinos [CL]

http://arxiv.org/abs/2203.12866


We introduce the implementation details of the simulation code COSEnu, which numerically solves a set of non-linear partial differential equations that govern the dynamics of neutrino collective flavor conversions. We systematically provide the details of both the finite difference method supported by Kreiss-Oliger dissipation and the finite volume method with a seventh-order weighted essentially non-oscillatory scheme. To ensure the reliability of the code, we compare the simulation results with theoretically obtainable solutions. In order to understand and characterize the error accumulation behavior of the implementations when neutrino self-interactions are switched on, we also analyze the evolution of the deviation of the conserved quantities for different values of the simulation parameters. We report the performance of our code with both CPUs and GPUs. The public version of the COSEnu package is available at https://github.com/COSEnu/COSEnu.
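
For reference, a common fourth-derivative form of the Kreiss-Oliger dissipation named above, added to the right-hand side of a finite-difference update; the precise order, coefficients and boundary handling used in COSEnu are not specified here.

```python
import numpy as np

def kreiss_oliger_rhs(u: np.ndarray, dx: float, eps: float = 0.1) -> np.ndarray:
    """Standard fourth-derivative Kreiss-Oliger dissipation,
    -eps/(16 dx) * (u_{i-2} - 4 u_{i-1} + 6 u_i - 4 u_{i+1} + u_{i+2}),
    with periodic boundaries; to be added to the RHS of du/dt."""
    return -(eps / (16.0 * dx)) * (
        np.roll(u, 2) - 4.0 * np.roll(u, 1) + 6.0 * u
        - 4.0 * np.roll(u, -1) + np.roll(u, -2)
    )

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x) + 0.05 * (-1.0) ** np.arange(x.size)  # smooth + grid noise
print(np.max(np.abs(kreiss_oliger_rhs(u, dx=x[1] - x[0]))))  # damping acts on the noise
```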

Read this paper on arXiv…

M. George, C. Lin, M. Wu, et. al.
Fri, 25 Mar 22
31/46

Comments: 17 pages, 13 figures

sympy2c: from symbolic expressions to fast C/C++ functions and ODE solvers in Python [IMA]

http://arxiv.org/abs/2203.11945


Computer algebra systems play an important role in science as they facilitate the development of new theoretical models. The resulting symbolic equations are often implemented in a compiled programming language in order to provide fast and portable codes for practical applications. We describe sympy2c, a new Python package designed to bridge the gap between the symbolic development and the numerical implementation of a theoretical model. sympy2c translates symbolic equations implemented in the SymPy Python package to C/C++ code that is optimized using symbolic transformations. The resulting functions can be conveniently used as an extension module in Python. sympy2c is used within the PyCosmo Python package to solve the Einstein-Boltzmann equations, a large system of ODEs describing the evolution of linear perturbations in the Universe. After reviewing the functionalities and usage of sympy2c, we describe its implementation and optimization strategies. This includes, in particular, a novel approach to generate optimized ODE solvers making use of the sparsity of the symbolic Jacobian matrix. We demonstrate its performance using the Einstein-Boltzmann equations as a test case. sympy2c is widely applicable and may prove useful for various areas of computational physics. sympy2c is publicly available at https://cosmology.ethz.ch/research/software-lab/sympy2c.html
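
To illustrate the general symbolic-to-C idea (using plain SymPy's `ccode` rather than sympy2c's own interface, which is documented at the link above):

```python
import sympy as sp

# A toy symbolic model: right-hand side of a small ODE system.
x, y, a = sp.symbols("x y a")
rhs = [a * x - x * y, x * y - y]

# Emit C expressions for the right-hand side; sympy2c goes further by wrapping
# such output (plus optimized, sparsity-aware ODE solvers) into a Python
# extension module, which is not shown here.
for i, expr in enumerate(rhs):
    print(f"dydt[{i}] = {sp.ccode(sp.simplify(expr))};")

# Symbolic Jacobian, whose sparsity sympy2c exploits when generating solvers.
J = sp.Matrix(rhs).jacobian([x, y])
print(J)
```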

Read this paper on arXiv…

U. Schmitt, B. Moser, C. Lorenz, et. al.
Thu, 24 Mar 22
40/56

Comments: 28 pages, 5 figures, 5 tables, Link to package: this https URL, the described packaged sympy2c is used within arXiv:2112.08395

On the Jacobi capture origin of binaries with applications to the Earth-Moon system and black holes in galactic nuclei [EPA]

http://arxiv.org/abs/2203.09646


Close encounters between two bodies in a disk often result in a single orbital deflection. However, within their Jacobi volumes, where the gravitational forces between the two bodies and the central body become competitive, temporary captures with multiple close encounters become possible outcomes: a Jacobi capture. We perform 3-body simulations in order to characterise the dynamics of Jacobi captures in the plane. We find that the phase space structure resembles a Cantor set with a fractal dimension of 0.4. The lifetime distribution decreases exponentially, while the distribution of the closest separation follows a power law with index 0.5. In our first application, we consider the Jacobi capture of the Moon. We demonstrate that both tidal captures and giant impacts are possible outcomes. Their respective 1D cross sections differ within an order of magnitude, evaluated at a heliocentric distance of 1 AU. The impact speed is well approximated by a parabolic encounter, while the impact angles follow that of a uniform beam on a circular target. In our second application, we find that Jacobi captures with gravitational wave dissipation can result in the formation of binary black holes in galactic nuclei. The eccentricity distribution is approximately super-thermal and includes both prograde and retrograde orientations. We estimate a cosmic rate density of 0.083 < R < 14 Gpc^-3 yr^-1. We conclude that dissipative Jacobi captures form an efficient channel for binary formation, which motivates further research into establishing the universality of Jacobi captures across multiple astrophysical scales.

Read this paper on arXiv…

T. Boekholt, C. Rowan and B. Kocsis
Mon, 21 Mar 22
8/60

Comments: Submitted to MNRAS. 18 pages, 16 figures

Comptonization by Reconnection Plasmoids in Black Hole Coronae II: Electron-Ion Plasma [HEAP]

http://arxiv.org/abs/2203.02856


We perform two-dimensional particle-in-cell simulations of magnetic reconnection in electron-ion plasmas subject to strong Compton cooling and calculate the X-ray spectra produced by this process. The simulations are performed for trans-relativistic reconnection with magnetization $1\leq \sigma\leq 3$ (defined as the ratio of magnetic tension to plasma rest-mass density), which is expected in the coronae of accretion disks around black holes. We find that magnetic dissipation proceeds with inefficient energy exchange between the heated ions and the Compton-cooled electrons. As a result, most electrons are kept at a low temperature in Compton equilibrium with radiation, and so thermal Comptonization cannot reach photon energies $\sim 100$ keV observed from accreting black holes. Nevertheless, magnetic reconnection efficiently generates $\sim 100$ keV photons because of mildly relativistic bulk motions of the plasmoid chain formed in the reconnection layer. Comptonization by the plasmoid motions dominates the radiative output and controls the peak of the radiation spectrum $E_{\rm pk}$. We find $E_{\rm pk}\sim 40$ keV for $\sigma=1$ and $E_{\rm pk}\sim100$ keV for $\sigma=3$. In addition to the X-ray peak around 100 keV, the simulations show a non-thermal MeV tail emitted by a non-thermal electron population generated near X-points of the reconnection layer. The results are consistent with the typical hard state of accreting black holes. In particular, we find that the spectrum of Cygnus~X-1 is well explained by electron-ion reconnection with $\sigma\sim 3$.
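
In the usual convention (stated here for background; the paper's exact normalization may differ), this magnetization is

$$ \sigma = \frac{B^2}{4\pi \rho c^2}, $$

i.e. the magnetic tension $B^2/4\pi$ compared with the rest-mass energy density $\rho c^2$, so $1 \leq \sigma \leq 3$ means the magnetic energy scale is comparable to the plasma rest-mass energy scale.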

Read this paper on arXiv…

N. Sridhar, L. Sironi and A. Beloborodov
Tue, 8 Mar 22
27/100

Comments: Submitted for publication in MNRAS: 15 pages, 13 figures, 1 table, 7 appendices. Part-I can be accessed at arXiv:2107.00263

New satellites of figure-eight orbit computed with high precision [CL]

http://arxiv.org/abs/2203.02793


In this paper we use a modified Newton’s method, based on the continuous analog of Newton’s method, together with high-precision arithmetic to search for new satellites of the famous figure-eight orbit. This purposeful search yielded over 300 new satellites, including 7 new stable choreographies.
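
A minimal sketch of the two ingredients named here, a damped (continuous-analog) Newton iteration and high-precision arithmetic, applied to a simple scalar root-finding problem; the actual multi-dimensional search for figure-eight satellites is, of course, far more involved:

    # Sketch: damped Newton iteration (the discretized continuous analog of
    # Newton's method) in high-precision arithmetic with mpmath.
    from mpmath import mp, mpf, cos, diff

    mp.dps = 60  # work with 60 significant digits

    def damped_newton(f, x0, tau=mpf("0.5"), tol=mpf("1e-50"), max_iter=200):
        """Iterate x_{k+1} = x_k - tau * f(x_k)/f'(x_k); tau = 1 is classical Newton."""
        x = mpf(x0)
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                break
            x = x - tau * fx / diff(f, x)
        return x

    # Example: solve cos(x) = x to high precision.
    root = damped_newton(lambda x: cos(x) - x, 1.0)
    print(root)

Taking the damping parameter tau < 1 corresponds to a small step along the continuous Newton flow, which enlarges the basin of convergence at the cost of more iterations.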

Read this paper on arXiv…

I. Hristov, R. Hristova, I. Puzynin, et. al.
Tue, 8 Mar 22
63/100

Comments: 11 pages, 9 figures, 1 table

Effects of Mesh Topology on MHD Solution Features in Coronal Simulations [SSA]

http://arxiv.org/abs/2202.13696


Magnetohydrodynamic (MHD) simulations of the solar corona have become more popular with the increased availability of computational power. Modern computational plasma codes, relying upon Computational Fluid Dynamics (CFD) methods, allow for resolving the coronal features using solar surface magnetograms as inputs. These computations are carried out in a full 3D domain and thus selection of the right mesh configuration is essential to save computational resources and enable/speed up convergence. In addition, it has been observed that for MHD simulations close to the hydrostatic equilibrium, spurious numerical artefacts might appear in the solution following the mesh structure, which makes the selection of the grid also a concern for accuracy. The purpose of this paper is to discuss and trade off two main mesh topologies when applied to global solar corona simulations using the unstructured ideal MHD solver from the COOLFluiD platform. The first topology is based on the geodesic polyhedron and the second on UV mapping. Focus will be placed on aspects such as mesh adaptability, resolution distribution, resulting spurious numerical fluxes and convergence performance. For this purpose, firstly a rotating dipole case is investigated, followed by two simulations using real magnetograms from solar minimum (1995) and solar maximum (1999). It is concluded that the most appropriate mesh topology for the simulation depends on several factors, such as the accuracy requirements, the presence of features near the polar regions and/or strong features in the flow field in general. If convergence is of concern and the simulation contains strong dynamics, then grids which are based on the geodesic polyhedron are recommended over the more conventionally used UV-mapped meshes.
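
As a small illustration of why the topology matters (a generic observation, not taken from the paper), the solid angle covered by the cells of a UV-mapped, longitude-latitude grid collapses toward the poles, which is exactly the kind of resolution imbalance a geodesic-polyhedron mesh avoids:

    # Sketch: solid angle per cell on a uniform longitude-latitude (UV) grid.
    # Cells shrink dramatically toward the poles, unlike on a geodesic mesh.
    import numpy as np

    n_lat, n_lon = 90, 180
    theta_edges = np.linspace(0.0, np.pi, n_lat + 1)       # colatitude band edges
    dphi = 2.0 * np.pi / n_lon

    # Solid angle of one cell in each latitude band: dphi * (cos(th0) - cos(th1)).
    band_solid_angle = dphi * (np.cos(theta_edges[:-1]) - np.cos(theta_edges[1:]))

    print(f"cell near equator : {band_solid_angle[n_lat // 2]:.3e} sr")
    print(f"cell at the pole  : {band_solid_angle[0]:.3e} sr")
    print(f"ratio             : {band_solid_angle[n_lat // 2] / band_solid_angle[0]:.1f}")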

Read this paper on arXiv…

M. Brchnelova, F. Zhang, P. Leitner, et. al.
Tue, 1 Mar 22
49/80

Comments: 30 pages, 22 figures, 3 tables

DOME: Discrete oriented muon emission in GEANT4 simulations [IMA]

http://arxiv.org/abs/2202.11487


In this study, we exhibit a number of elementary strategies that might be at one's disposal in diverse computational applications of GEANT4 simulations for the purpose of constructing hemispherical particle sources. In further detail, we initially generate random points on a spherical surface for a sphere of a practical radius by employing Gaussian distributions for the three components of the Cartesian coordinates, thereby obtaining a generating surface for the initial positions of the corresponding particles. Since we do not require the bottom half of the produced spherical surface for our tomographic applications, we take the absolute value of the vertical component in the Cartesian coordinates, leading to a half-spherical shell, which is traditionally called a hemisphere. Last but not least, we direct the generated particles into the target material to be irradiated by favoring a selective momentum direction based on the vector constructed between the random point on the hemispherical surface and the origin of the target material, thereby minimizing particle loss through source biasing. In the end, we incorporate our strategy by using G4ParticleGun in the GEANT4 code. Furthermore, we also exhibit a second scheme based on the coordinate transformation from spherical to Cartesian coordinates, thereby reducing the number of random number generators. While we plan to employ our strategy in computational practice for muon scattering tomography, this source scheme might find straightforward applications in neighboring fields including but not limited to atmospheric sciences, space engineering, and astrophysics, where a 3D particle source is a necessity for the modeling goals.
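
A minimal NumPy sketch of the sampling strategy described above; the actual implementation uses G4ParticleGun inside GEANT4, and the function and variable names below are illustrative only:

    # Sketch of the described sampling: isotropic points on a sphere from three
    # Gaussian coordinates, folded onto the upper hemisphere via |z|, with the
    # momentum of each particle pointed back at the origin (the target).
    import numpy as np

    rng = np.random.default_rng(0)

    def hemispherical_source(n_particles, radius):
        xyz = rng.normal(size=(n_particles, 3))             # 3 Gaussian components
        xyz /= np.linalg.norm(xyz, axis=1, keepdims=True)    # project onto the unit sphere
        xyz[:, 2] = np.abs(xyz[:, 2])                        # keep only the upper hemisphere
        positions = radius * xyz
        directions = -xyz                                    # unit vectors toward the origin
        return positions, directions

    # Hypothetical parameters: 1e5 particles on a hemisphere of radius 5 (arbitrary units).
    pos, mom_dir = hemispherical_source(n_particles=100000, radius=5.0)
    print(pos[:3], mom_dir[:3])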

Read this paper on arXiv…

A. Topuz, M. Kiisk and A. Giammanco
Thu, 24 Feb 22
49/52

Comments: 3 figures

The mechanism of efficient electron acceleration at parallel non-relativistic shocks [HEAP]

http://arxiv.org/abs/2202.05288


Thermal electrons cannot directly participate in the process of diffusive acceleration at electron-ion shocks because their Larmor radii are smaller than the shock transition width: this is the well-known electron injection problem of diffusive shock acceleration. Instead, an efficient pre-acceleration process must exist that scatters electrons off of electromagnetic fluctuations on scales much shorter than the ion gyro radius. The recently found intermediate-scale instability provides a natural way to produce such fluctuations in parallel shocks. The instability drives comoving (with the upstream plasma) ion-cyclotron waves at the shock front and only operates when the drift speed is smaller than half of the electron Alfven speed. Here, we perform particle-in-cell simulations with the SHARP code to study the impact of this instability on electron acceleration at parallel non-relativistic, electron-ion shocks. To this end, we compare a shock simulation in which the intermediate-scale instability is expected to grow to simulations where it is suppressed. In particular, the simulation with an Alfvenic Mach number large enough to quench the intermediate instability shows a great reduction (by two orders of magnitude) of the electron acceleration efficiency. Moreover, the simulation with a reduced ion-to-electron mass ratio (where the intermediate instability is also suppressed) not only artificially precludes electron acceleration but also results in erroneous electron and ion heating in the downstream and shock transition regions. This finding opens up a promising route for a plasma physical understanding of diffusive shock acceleration of electrons, which necessarily requires realistic mass ratios in simulations of collisionless electron-ion shocks.
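
For reference, using the conventional Gaussian-units definition (an assumption about notation, not a quotation from the paper), the electron Alfven speed entering this threshold is

$$ v_{\rm A,e} = \frac{B}{\sqrt{4\pi n_e m_e}}, $$

so the intermediate-scale instability is expected to operate only while the upstream drift speed satisfies $v_{\rm dr} < v_{\rm A,e}/2$.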

Read this paper on arXiv…

M. Shalaby, R. Lemmerz, T. Thomas, et. al.
Mon, 14 Feb 22
13/55

Comments: 11 pages, 8 figures, 1 table, submitted to ApJ

OSIRIS: A New Code for Ray Tracing Around Compact Objects [CL]

http://arxiv.org/abs/2202.00086


The radiation observed in quasars and active galactic nuclei is mainly produced by a relativistic plasma orbiting close to the black hole event horizon, where strong gravitational effects are relevant. The observational data of such systems can be compared with theoretical models to infer the black hole and plasma properties. In the comparison process, ray tracing algorithms are essential for computing the trajectories followed by the photons from the source to our telescopes. In this paper, we present OSIRIS: a new stable FORTRAN code capable of efficiently computing null geodesics around compact objects, including general relativistic effects such as gravitational lensing, redshift, and relativistic boosting. The algorithm is based on the Hamiltonian formulation and uses different integration schemes to evolve null geodesics while tracking the error in the Hamiltonian constraint to ensure physical results. We find from an error analysis that the integration schemes are all stable and that the best one maintains an error below $10^{-11}$. In particular, to test the robustness and ability of the code to evolve geodesics in curved spacetime, we compute the shadow and Einstein rings of a Kerr black hole with different rotation parameters and obtain the image of a thin Keplerian accretion disk around a Schwarzschild black hole. Although OSIRIS is parallelized neither with MPI nor with CUDA, the computation times are of the same order as those reported by other codes with these types of parallel computing platforms.
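
For context, the Hamiltonian formulation referred to here is the standard super-Hamiltonian description of null geodesics (written in generic form, not copied from the paper):

$$ H(x,p) = \tfrac{1}{2}\, g^{\mu\nu}(x)\, p_\mu p_\nu, \qquad \frac{dx^\mu}{d\lambda} = g^{\mu\nu} p_\nu, \qquad \frac{dp_\mu}{d\lambda} = -\tfrac{1}{2}\, \partial_\mu g^{\alpha\beta}\, p_\alpha p_\beta, $$

with the photon constraint $H = 0$ monitored along the integration as the error diagnostic mentioned above.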

Read this paper on arXiv…

V. J.M., A. J.A., F. Lora-Clavijo, et. al.
Wed, 2 Feb 22
10/60

Comments: 13 pages, 13 figures, accepted for publication in The European Physical Journal C

Dealing with density discontinuities in planetary SPH simulations [EPA]

http://arxiv.org/abs/2202.00472


Density discontinuities cannot be precisely modelled in standard formulations of smoothed particles hydrodynamics (SPH) because the density field is defined smoothly as a kernel-weighted sum of neighbouring particle masses. This is a problem when performing simulations of giant impacts between proto-planets, for example, because planets typically do have density discontinuities both at their surfaces and at any internal boundaries between different materials. The inappropriate densities in these regions create artificial forces that effectively suppress mixing between particles of different material and, as a consequence, this problem introduces a key unknown systematic error into studies that rely on SPH simulations. In this work we present a novel, computationally cheap method that deals simultaneously with both of these types of density discontinuity in SPH simulations. We perform standard hydrodynamical tests and several example giant impact simulations, and compare the results with standard SPH. In a simulated Moon-forming impact using $10^7$ particles, the improved treatment at boundaries affects at least 30% of the particles at some point during the simulation.
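
For reference, the kernel-weighted density sum in question is the standard SPH estimate

$$ \rho_i = \sum_j m_j\, W\!\left(|\mathbf{r}_i - \mathbf{r}_j|, h\right), $$

which smooths the density over the kernel support and therefore cannot represent a sharp jump at a planetary surface or an internal material boundary.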

Read this paper on arXiv…

S. Ruiz-Bonilla, V. Eke, J. Kegerreis, et. al.
Wed, 2 Feb 22
25/60

Comments: 9 pages, 8 figures. Submitted to MNRAS

Evolutions in first-order viscous hydrodynamics [CL]

http://arxiv.org/abs/2201.13359


Motivated by the physics of the quark-gluon plasma created in heavy-ion collision experiments, we use holography to study the regime of applicability of various theories of relativistic viscous hydrodynamics. Using the microscopic description provided by holography of a system that relaxes to equilibrium, we obtain initial data with which we perform real-time evolutions of 2+1 dimensional conformal fluids using the first-order viscous relativistic hydrodynamics theory of Bemfica, Disconzi, Noronha and Kovtun (BDNK), the BRSSS theory, and ideal hydrodynamics. By initializing the hydrodynamics codes at different times, we can check the constitutive relations and assess the predictive power and accuracy of each of these theories.

Read this paper on arXiv…

H. Bantilan, Y. Bea and P. Figueras
Tue, 1 Feb 22
5/73

Comments: 6 pages plus appendixes, 8 figures

Conservative finite volume scheme for first-order viscous relativistic hydrodynamics [CL]

http://arxiv.org/abs/2201.12317


We present the first conservative finite volume numerical scheme for the causal, stable relativistic Navier-Stokes equations developed by Bemfica, Disconzi, Noronha, and Kovtun (BDNK). BDNK theory has arisen very recently as a promising means of incorporating entropy-generating effects (viscosity, heat conduction) into relativistic fluid models, appearing as a possible alternative to the so-called Müller-Israel-Stewart (MIS) theory successfully used to model quark-gluon plasma. Both BDNK and MIS-type theories may be understood in terms of a gradient expansion about the perfect (ideal) fluid, wherein BDNK arises at first order and MIS at second order. As such, BDNK has vastly fewer terms and undetermined model coefficients (as is typical for an effective field theory appearing at lower order), allowing for rigorous proofs of stability, causality, and hyperbolicity in full generality which have as yet been impossible for MIS. To capitalize on these advantages, we present the first fully conservative multi-dimensional fluid solver for the BDNK equations suitable for physical applications. The scheme includes a flux-conservative discretization, non-oscillatory reconstruction, and a central-upwind numerical flux, and is designed to smoothly transition to a high-resolution shock-capturing perfect fluid solver in the inviscid limit. We assess the robustness of our new method in a series of flat-spacetime tests for a conformal fluid, and provide a detailed comparison with previous approaches of Pandya & Pretorius (2021).
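
The building blocks listed here (flux-conservative update, non-oscillatory reconstruction, central-upwind numerical flux) can be illustrated on a scalar 1D conservation law; the sketch below shows those generic ingredients for the Burgers equation and is not the authors' BDNK solver:

    # Generic sketch of a flux-conservative finite volume step for u_t + f(u)_x = 0
    # (Burgers flux) with minmod reconstruction and a local Lax-Friedrichs
    # (Rusanov-type central-upwind) numerical flux on a periodic grid.
    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def step(u, dx, dt):
        f = lambda q: 0.5 * q**2                        # Burgers flux
        up = np.roll(u, -1)                             # u_{i+1}
        um = np.roll(u, 1)                              # u_{i-1}
        slope = minmod(u - um, up - u)                  # limited slopes
        uL = u + 0.5 * slope                            # left state at interface i+1/2
        uR = np.roll(u - 0.5 * slope, -1)               # right state at interface i+1/2
        a = np.maximum(np.abs(uL), np.abs(uR))          # local wave speed
        flux = 0.5 * (f(uL) + f(uR)) - 0.5 * a * (uR - uL)
        return u - dt / dx * (flux - np.roll(flux, 1))  # conservative update

    # Example: a sine wave steepening into a shock on a periodic domain.
    n = 400
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.sin(2.0 * np.pi * x)
    dx, dt = 1.0 / n, 0.3 / n
    for _ in range(300):
        u = step(u, dx, dt)
    print(u.min(), u.max())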

Read this paper on arXiv…

A. Pandya, E. Most and F. Pretorius
Mon, 31 Jan 22
7/55

Comments: 23 pages, 9 figures; comments welcome

A hybrid adaptive multiresolution approach for the efficient simulation of reactive flows [CL]

http://arxiv.org/abs/2201.10686


Computational studies that use block-structured adaptive mesh refinement (AMR) approaches suffer from unnecessarily high mesh resolution in regions adjacent to important solution features. This deficiency limits the performance of AMR codes. In this work a novel hybrid adaptive multiresolution (HAMR) approach to AMR-based calculations is introduced to address this issue. The multiresolution (MR) smoothness indicators are used to identify regions of smoothness on the mesh where the computational cost of individual physics solvers may be decreased by replacing direct calculations with interpolation. We suggest an approach to balance the errors due to the adaptive discretization and the interpolation of physics quantities such that the overall accuracy of the HAMR solution is consistent with that of the MR-driven AMR solution. The performance of the HAMR scheme is evaluated for a range of test problems, from pure hydrodynamics to turbulent combustion.
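
A toy illustration of the idea behind an MR smoothness indicator (assuming a Harten-style multiresolution analysis; this is not the authors' implementation): the detail coefficient measures how well a fine-grid value is predicted by interpolation from the next coarser level, and small details flag cells where interpolation can safely replace a direct, expensive physics evaluation.

    # Toy multiresolution smoothness indicator on a 1D profile with one sharp feature.
    import numpy as np

    x_fine = np.linspace(0.0, 1.0, 257)
    u_fine = np.tanh(50.0 * (x_fine - 0.5))       # smooth everywhere except near x = 0.5

    x_coarse = x_fine[::2]                        # restrict to the coarser level
    u_coarse = u_fine[::2]

    # Predict fine values at odd indices by linear interpolation from the coarse grid.
    prediction = np.interp(x_fine[1::2], x_coarse, u_coarse)
    detail = np.abs(u_fine[1::2] - prediction)    # MR detail coefficients

    eps = 1e-3                                    # hypothetical smoothness threshold
    smooth_fraction = np.mean(detail < eps)
    print(f"{smooth_fraction:.1%} of cells flagged as smooth (detail < {eps})")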

Read this paper on arXiv…

B. Gusto and T. Plewa
Thu, 27 Jan 22
34/44

Comments: 24 pages, 13 figures, 3 tables; accepted for publication in Computer Physics Communications; source code available at this https URL

Inference of bipolar neutrino flavor oscillations near a core-collapse supernova, based on multiple measurements at Earth [HEAP]

http://arxiv.org/abs/2201.08505


Neutrinos in compact-object environments, such as core-collapse supernovae, can experience various kinds of collective effects in flavor space, engendered by neutrino-neutrino interactions. These include “bipolar” collective oscillations, which are exhibited by neutrino ensembles where different flavors dominate at different energies. Considering the importance of neutrinos in the dynamics and nucleosynthesis in these environments, it is desirable to ascertain whether an Earth-based detection could contain signatures of bipolar oscillations that occurred within a supernova envelope. To that end, we continue examining a cost-function formulation of statistical data assimilation (SDA) to infer solutions to a small-scale model of neutrino flavor transformation. SDA is an inference paradigm designed to optimize a model with sparse data. Our model consists of two mono-energetic neutrino beams with different energies emanating from a source and coherently interacting with each other and with a matter background, with time-varying interaction strengths. We attempt to infer flavor transformation histories of these beams using simulated measurements of the flavor content at locations in vacuum (that is, far from the source), which could in principle correspond to earth-based detectors. Within the scope of this small-scale model, we found that: (i) based on such measurements, the SDA procedure is able to infer whether bipolar oscillations had occurred within the protoneutron star envelope, and (ii) if the measurements are able to sample the full amplitude of the neutrino oscillations in vacuum, then the amplitude of the prior bipolar oscillations is also well predicted. This result intimates that the inference paradigm can well complement numerical integration codes, via its ability to infer flavor evolution at physically inaccessible locations.

Read this paper on arXiv…

E. Armstrong, A. Patwardhan, A. Ahmetaj, et. al.
Mon, 24 Jan 22
43/59

Comments: 12 pages, 7 figures, 1 table

Modelling astrophysical fluids with particles [IMA]

http://arxiv.org/abs/2201.05896


Computational fluid dynamics is a crucial tool to theoretically explore the cosmos. In the last decade, we have seen a substantial methodological diversification with a number of cross-fertilizations between originally different methods. Here we focus on recent developments related to the Smoothed Particle Hydrodynamics (SPH) method. We briefly summarize recent technical improvements in the SPH-approach itself, including smoothing kernels, gradient calculations and dissipation steering. These elements have been implemented in the Newtonian high-accuracy SPH code MAGMA2 and we demonstrate its performance in a number of challenging benchmark tests. Taking it one step further, we have used these new ingredients also in the first particle-based, general-relativistic fluid dynamics code that solves the full set of Einstein equations, SPHINCS_BSSN. We present the basic ideas and equations and demonstrate the code performance at examples of relativistic neutron stars that are evolved self-consistently together with the spacetime.

Read this paper on arXiv…

S. Rosswog
Wed, 19 Jan 22
8/121

Comments: 16 pages, 7 figures

Towards RNA life on Early Earth: From atmospheric HCN to biomolecule production in warm little ponds [EPA]

http://arxiv.org/abs/2201.00829


The origin of life on Earth involves the early appearance of an information-containing molecule such as RNA. The basic building blocks of RNA could have been delivered by carbon-rich meteorites, or produced in situ by processes beginning with the synthesis of hydrogen cyanide (HCN) in the early Earth’s atmosphere. Here, we construct a robust physical and non-equilibrium chemical model of the early Earth atmosphere. The atmosphere is supplied with hydrogen from impact degassing of meteorites, water evaporated from the oceans, carbon dioxide from volcanoes, and methane from undersea hydrothermal vents, and within it lightning and external UV-driven chemistry produce HCN. This allows us to calculate the rain-out of HCN into warm little ponds (WLPs). We then use a comprehensive sources and sinks numerical model to compute the resulting abundances of nucleobases, ribose, and nucleotide precursors such as 2-aminooxazole resulting from aqueous and UV-driven chemistry within them. We find that at 4.4 bya (billion years ago) peak adenine concentrations in ponds can be maintained at ~2.8$\mu$M for more than 100 Myr. Meteorite delivery of adenine to WLPs produces similar peaks in concentration, but these are destroyed within months by UV photodissociation, seepage, and hydrolysis. The early evolution of the atmosphere is dominated by the decrease of hydrogen due to falling impact rates and atmospheric escape, and the rise of oxygenated species such as OH from H2O photolysis. Our work points to an early origin of RNA on Earth within ~200 Myr of the Moon-forming impact.

Read this paper on arXiv…

B. Pearce, K. Molaverdikhani, R. Pudritz, et. al.
Wed, 5 Jan 22
14/54

Comments: Accepted to ApJ, 27 pages (14 main text), 11 figures, 9 tables

N-body Simulations of the Solar System with CPU-based Parallel Methods [EPA]

http://arxiv.org/abs/2112.15079


Gravitational N-body simulations of the Solar system were performed using different parallel approaches, with comparisons of the computational times and speed-up values carried out for different model sizes and numbers of processors. The numerical integration uses a second-order velocity Verlet approach, which gives acceptable accuracy in the orbits of major bodies and asteroids with a time step size of 0.1 days.
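
A minimal serial sketch of the second-order velocity Verlet update named here, written for a generic softened direct-summation force; the parallel decompositions compared in the paper are not shown:

    # Second-order velocity Verlet step for an N-body system:
    #   v_{n+1/2} = v_n + (dt/2) a(x_n)
    #   x_{n+1}   = x_n + dt v_{n+1/2}
    #   v_{n+1}   = v_{n+1/2} + (dt/2) a(x_{n+1})
    import numpy as np

    def accelerations(pos, masses, G=1.0, eps=1e-9):
        """Pairwise gravitational accelerations (O(N^2), softened)."""
        diff = pos[None, :, :] - pos[:, None, :]               # r_j - r_i
        dist3 = (np.sum(diff**2, axis=-1) + eps**2) ** 1.5
        np.fill_diagonal(dist3, np.inf)                         # no self-force
        return G * np.sum(masses[None, :, None] * diff / dist3[:, :, None], axis=1)

    def verlet_step(pos, vel, masses, dt):
        acc = accelerations(pos, masses)
        vel_half = vel + 0.5 * dt * acc
        pos_new = pos + dt * vel_half
        vel_new = vel_half + 0.5 * dt * accelerations(pos_new, masses)
        return pos_new, vel_new

    # Example: Sun + one planet on a near-circular orbit (G = M_sun = a = 1 units).
    masses = np.array([1.0, 3e-6])
    pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    vel = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    for _ in range(1000):
        pos, vel = verlet_step(pos, vel, masses, dt=0.01)
    print(np.linalg.norm(pos[1] - pos[0]))   # separation stays close to 1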

Read this paper on arXiv…

T. Zhu
Mon, 3 Jan 22
1/49

Comments: 4 pages, 6 figures

Addendum: Precision in high resolution absorption line modelling, analytic Voigt derivatives, and optimisation methods [IMA]

http://arxiv.org/abs/2112.14490


The parent paper to this Addendum describes the optimisation theory on which VPFIT, a non-linear least-squares program for modelling absorption spectra, is based. In that paper, we show that Voigt function derivatives can be calculated analytically using Taylor series expansions and look-up tables, for the specific case of one column density parameter for each absorption component. However, in many situations, modelling requires more complex parameterisation, such as summed column densities over a whole absorption complex, or common pattern relative ion abundances. This Addendum provides those analytic derivatives.
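
For orientation (an independent sketch, not VPFIT's implementation), the Voigt function can be evaluated through the Faddeeva function, and the parameter derivatives that this Addendum supplies analytically are what one would otherwise approximate by finite differences:

    # Voigt profile via the Faddeeva function w(z), plus a crude finite-difference
    # derivative with respect to the Lorentzian width -- the kind of derivative the
    # Addendum instead provides analytically for more general parameterisations.
    import numpy as np
    from scipy.special import wofz

    def voigt(x, sigma, gamma):
        """Voigt profile: Gaussian (std sigma) convolved with Lorentzian (HWHM gamma)."""
        z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
        return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

    x = np.linspace(-5.0, 5.0, 1001)
    sigma, gamma = 1.0, 0.5      # illustrative parameter values

    profile = voigt(x, sigma, gamma)
    h = 1e-6
    dV_dgamma = (voigt(x, sigma, gamma + h) - voigt(x, sigma, gamma - h)) / (2.0 * h)
    print(profile.max(), dV_dgamma[500])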

Read this paper on arXiv…

C. Lee, J. Webb and R. Carswell
Thu, 30 Dec 21
46/71

Comments: 4 pages, 2 figures. Submitted to MNRAS 23 Dec 2021, accepted 24 Dec 2021

Stable and unstable supersonic stagnation of an axisymmetric rotating magnetized plasma [CL]

http://arxiv.org/abs/2112.10828


The Naval Research Laboratory “Mag Noh problem”, described in this paper, is a self-similar magnetized implosion flow, which contains an outward-propagating fast MHD shock of constant velocity. We generalize the classic Noh (1983) problem to include azimuthal and axial magnetic fields as well as rotation. Our family of ideal MHD solutions is five-parametric: each solution has its own self-similarity index, gas gamma, magnetization, ratio of axial to azimuthal field, and rotation. While the classic Noh problem must have a supersonic implosion velocity to create a shock, our solutions have an interesting three-parametric special case with zero initial velocity in which magnetic tension, instead of implosion flow, creates the shock at $t=0+$. Our self-similar solutions are indeed realized when we solve the initial value MHD problem with the finite volume MHD code Athena. We numerically investigate the stability of these solutions and find both stable and unstable regions in parameter space. Stable solutions can be used to test the accuracy of numerical codes. Unstable solutions have also been widely used to test how codes reproduce linear growth, transition to turbulence, and the practically important effects of mixing. Now we offer a family of unstable solutions featuring all three elements relevant to magnetically driven implosions: convergent flow, magnetic field, and a shock wave.

Read this paper on arXiv…

A. Beresnyak, A. Velikovich, J. Giuliani, et. al.
Wed, 22 Dec 21
4/67

Comments: 24 pages, 9 figures, submitted to JFM

Neural Symplectic Integrator with Hamiltonian Inductive Bias for the Gravitational $N$-body Problem [CL]

http://arxiv.org/abs/2111.15631


The gravitational $N$-body problem, which is fundamentally important in astrophysics to predict the motion of $N$ celestial bodies under the mutual gravity of each other, is usually solved numerically because there is no known general analytical solution for $N>2$. Can an $N$-body problem be solved accurately by a neural network (NN)? Can an NN observe long-term conservation of energy and orbital angular momentum? Inspired by Wisdom & Holman (1991)’s symplectic map, we present a neural $N$-body integrator that splits the Hamiltonian into a two-body part, solvable analytically, and an interaction part that we approximate with an NN. Our neural symplectic $N$-body code integrates a general three-body system for $10^{5}$ steps without deviating from the ground truth dynamics obtained from a traditional $N$-body integrator. Moreover, it exhibits good inductive bias by successfully predicting the evolution of $N$-body systems that are not part of the training set.
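
The splitting referred to here is, schematically, of Wisdom-Holman type (written in generic form; the choice of coordinates and the neural parameterization follow the paper, not this sketch):

$$ H = H_{\rm Kepler} + H_{\rm int}, \qquad \Phi_{\Delta t} = e^{\frac{\Delta t}{2}\hat{L}_{\rm int}}\; e^{\Delta t\,\hat{L}_{\rm Kepler}}\; e^{\frac{\Delta t}{2}\hat{L}_{\rm int}}, $$

where $\hat{L}$ denotes the Liouville operator of each part: the Kepler flow is advanced analytically, while the interaction kick is the piece approximated by the neural network, and the symmetric composition gives a second-order symplectic step.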

Read this paper on arXiv…

M. Cai, S. Zwart and D. Podareanu
Fri, 10 Dec 21
37/94

Comments: 7 pages, 2 figures, accepted for publication at the NeurIPS 2021 workshop “Machine Learning and the Physical Sciences”

Solving 3D Magnetohydrostatics with RBF-FD: Applications to the Solar Corona [SSA]

http://arxiv.org/abs/2112.04561


We present a novel magnetohydrostatic numerical model that solves directly for the force-balanced magnetic field in the solar corona. This model is constructed with Radial Basis Function Finite Differences (RBF-FD), specifically 3D polyharmonic splines plus polynomials, as the core discretization. This set of PDEs is particularly difficult to solve since, in the limit of vanishing forcing, it becomes ill-posed with a multitude of solutions. For zero forcing there are no numerically tractable solutions. For finite forcing, the ability to converge onto a physically viable solution is delicate, as will be demonstrated. The static force-balance equations are of a hyperbolic nature, in that information about the magnetic field travels along characteristic surfaces, yet they require an elliptic-type solver approach for a sparse, overdetermined, ill-conditioned system. As an example, we reconstruct a highly nonlinear analytic model designed to represent long-lived magnetic structures observed in the solar corona.
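
For context, the magnetohydrostatic force balance being solved is, in standard form (the paper's notation and normalization may differ),

$$ \frac{1}{\mu_0}\,(\nabla \times \mathbf{B}) \times \mathbf{B} - \nabla p + \rho\, \mathbf{g} = 0, \qquad \nabla \cdot \mathbf{B} = 0, $$

which degenerates to the force-free problem $(\nabla \times \mathbf{B}) \times \mathbf{B} = 0$ as the pressure and gravity forcing vanish, the ill-posed limit discussed above.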

Read this paper on arXiv…

N. Mathews, N. Flyer and S. Gibson
Fri, 10 Dec 21
92/94

Comments: Submitted to Journal of Computational Physics

Searching for Anomalies in the ZTF Catalog of Periodic Variable Stars [SSA]

http://arxiv.org/abs/2112.03306


Periodic variables illuminate the physical processes of stars throughout their lifetime. Wide-field surveys continue to increase our discovery rates of periodic variable stars. Automated approaches are essential to identify interesting periodic variable stars for multi-wavelength and spectroscopic follow-up. Here, we present a novel unsupervised machine learning approach to hunt for anomalous periodic variables using phase-folded light curves presented in the Zwicky Transient Facility Catalogue of Periodic Variable Stars by Chen et al. (2020). We use a convolutional variational autoencoder to learn a low dimensional latent representation, and we search for anomalies within this latent dimension via an isolation forest. We identify anomalies with irregular variability. Most of the top anomalies are likely highly variable Red Giants or Asymptotic Giant Branch stars concentrated in the Milky Way galactic disk; a fraction of the identified anomalies are more consistent with Young Stellar Objects. Detailed spectroscopic follow-up observations are encouraged to reveal the nature of these anomalies.
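
A schematic of the anomaly-ranking stage (using scikit-learn's IsolationForest on a placeholder latent array; the convolutional variational autoencoder that produces the real latent representation is not shown):

    # Sketch: rank objects by anomaly score in a learned latent space using an
    # isolation forest. `latent` stands in for the VAE encodings of the
    # phase-folded light curves (here just random numbers for illustration).
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)
    latent = rng.normal(size=(5000, 16))          # placeholder: 5000 objects, 16-d latent

    forest = IsolationForest(n_estimators=200, random_state=0).fit(latent)
    scores = forest.score_samples(latent)         # lower score = more anomalous

    top_anomalies = np.argsort(scores)[:20]       # indices of the 20 most anomalous objects
    print(top_anomalies)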

Read this paper on arXiv…

H. Chan, V. Villar, S. Cheung, et. al.
Wed, 8 Dec 21
16/77

Comments: 26 pages, 17 figures. The full version of Table 4 and Table 5 are available upon request

GRB prompt phase spectra under backscattering dominated model [HEAP]

http://arxiv.org/abs/2111.14163


We propose a backscattering-dominated emission model for the gamma-ray burst (GRB) prompt phase, in which photons generated through pair annihilation at the centre of the burst are backscattered through Compton scattering by an outflowing stellar cork. We show that the obtained spectra are capable of explaining the low- and high-energy slopes as well as the distribution of spectral peak energies in observed GRB prompt spectra.

Read this paper on arXiv…

M. Vyas, A. Pe’er and D. Eichler
Tue, 30 Nov 21
6/105

Comments: 6 pages, 5 figures

A Convolutional Autoencoder-Based Pipeline for Anomaly Detection and Classification of Periodic Variables [SSA]

http://arxiv.org/abs/2111.13828


The periodic pulsations of stars teach us about their underlying physical processes. We present a convolutional autoencoder-based pipeline as an automatic approach to search for out-of-distribution anomalous periodic variables within The Zwicky Transient Facility Catalog of Periodic Variable Stars (ZTF CPVS). We use an isolation forest to rank each periodic variable by its anomaly score. Our overall most anomalous events have a unique physical origin: they are mostly highly variable and irregular evolved stars. Multiwavelength data suggest that they are most likely Red Giant or Asymptotic Giant Branch stars concentrated in the Milky Way galactic disk. Furthermore, we show how the learned latent features can be used for the classification of periodic variables through a hierarchical random forest. This novel semi-supervised approach allows astronomers to identify the most anomalous events within a given physical class, significantly increasing the potential for scientific discovery.

Read this paper on arXiv…

H. Chan, S. Cheung, V. Villar, et. al.
Tue, 30 Nov 21
105/105

Comments: 7 pages, 4 figures

Inference solves a boundary-value collision problem, with relevance to neutrino flavor transformation [CL]

http://arxiv.org/abs/2111.07412


Understanding neutrino flavor transformation in dense environments such as core-collapse supernovae (CCSN) is critical for inferring nucleosynthesis and interpreting a detected neutrino signal. The role of direction-changing collisions in shaping the neutrino flavor field in these environments is important and poorly understood; it has not been treated self-consistently. There has been progress, via numerical integration, to include the effects of collisions in the dynamics of the neutrino flavor field. While this has led to important insights, integration is limited by its requirement that full initial conditions must be assumed known. On the contrary, feedback from collisions to the neutrino field is a boundary value problem. Numerical integration techniques are poorly equipped to handle that formulation. This paper demonstrates that an inference formulation of the problem can solve a simple collision-only model representing a CCSN core — without full knowledge of initial conditions. Rather, the procedure solves a two-point boundary value problem with partial information at the bounds. The model is sufficiently simple that physical reasoning may be used as a confidence check on the inference-based solution, and the procedure recovers the expected model dynamics. This result demonstrates that inference can solve a problem that is artificially hidden from integration techniques — a problem that is an important feature of flavor evolution in dense environments. Thus, it is worthwhile to explore means of augmenting the existing powerful integration tools with inference-based approaches.

Read this paper on arXiv…

E. Armstrong
Tue, 16 Nov 21
16/97

Comments: 9 pages, 3 figures

High-accuracy numerical models of Brownian thermal noise in thin mirror coatings [IMA]

http://arxiv.org/abs/2111.06893


Brownian coating thermal noise in detector test masses is limiting the sensitivity of current gravitational-wave detectors on Earth. Therefore, accurate numerical models can inform the ongoing effort to minimize Brownian coating thermal noise in current and future gravitational-wave detectors. Such numerical models typically require significant computational resources and time, and often involve closed-source commercial codes. In contrast, open-source codes give complete visibility and control of the simulated physics and enable direct assessment of the numerical accuracy. In this article, we use the open-source SpECTRE numerical-relativity code and adopt a novel discontinuous Galerkin numerical method to model Brownian coating thermal noise. We demonstrate that SpECTRE achieves significantly higher accuracy than a previous approach at a fraction of the computational cost. Furthermore, we numerically model Brownian coating thermal noise in multiple sub-wavelength crystalline coating layers for the first time. Our new numerical method has the potential to enable fast exploration of realistic mirror configurations, and hence to guide the search for optimal mirror geometries, beam shapes and coating materials for gravitational-wave detectors.

Read this paper on arXiv…

N. Fischer, S. Rodriguez, T. Wlodarczyk, et. al.
Tue, 16 Nov 21
78/97

Comments: 9 pages, 5 figures. Results are reproducible with the ancillary input files