A modified Brink-Axel hypothesis for astrophysical Gamow-Teller transitions [CL]

http://arxiv.org/abs/2111.06242


Weak interaction charged current transition strengths from highly excited nuclear states are fundamental ingredients for accurate modeling of compact object composition and dynamics, but are difficult to obtain either from experiment or theory. For lack of alternatives, calculations have often fallen back upon a generalized Brink-Axel hypothesis, that is, assuming the strength function (transition probability) is independent of the initial nuclear state but depends only upon the transition energy and the weak interaction properties of the parent nucleus ground state. Here we present numerical evidence for a modified ‘local’ Brink-Axel hypothesis for Gamow-Teller transitions for $pf$-shell nuclei relevant to astrophysical applications. Specifically, while the original Brink-Axel hypothesis does not hold globally, strength functions from initial states nearby in energy are similar within statistical fluctuations. This agrees with previous work on strength function moments. Using this modified hypothesis, we can tackle strength functions at previously intractable initial energies, using semi-converged initial states at arbitrary excitation energy. Our work provides a well-founded method for computing accurate thermal weak transition rates for medium-mass nuclei at temperatures occurring in stellar cores near collapse. We finish by comparing to previous calculations of astrophysical rates.
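As a concrete illustration of how such a 'local' hypothesis can be used in a rate calculation, the sketch below averages strength functions over initial states that lie close in excitation energy and folds them with Boltzmann weights. This is a toy Python example with invented data and a placeholder phase-space factor, not the authors' code.

```python
import numpy as np

# Illustrative sketch (not the authors' code): thermal Gamow-Teller rate under a
# "local" Brink-Axel assumption, i.e. strength functions averaged over initial
# states that lie close in excitation energy. All inputs below are toy data.

kT = 1.0                                      # temperature in MeV (hypothetical)
E_i = np.array([0.0, 0.3, 0.5, 2.1, 2.4, 2.6])   # initial-state energies (MeV)
E_tr = np.linspace(-5.0, 15.0, 200)              # transition-energy grid (MeV)
rng = np.random.default_rng(0)
# b_gt[i, j]: toy GT strength from initial state i at transition energy E_tr[j]
b_gt = np.exp(-0.5 * ((E_tr[None, :] - 8.0) / 2.0) ** 2) * \
       (1 + 0.1 * rng.normal(size=(E_i.size, E_tr.size)))

def local_brink_axel_strength(E, window=1.0):
    """Average the strength functions of all initial states within +/- window of E."""
    mask = np.abs(E_i - E) < window
    return b_gt[mask].mean(axis=0)

def phase_space(E):
    """Placeholder lepton phase-space factor; a real rate uses the proper integral."""
    return np.maximum(E, 0.0) ** 5

# Thermal rate: Boltzmann-weighted sum over representative excitation energies,
# each using the locally averaged strength function.
E_rep = np.array([0.3, 2.4])                  # representative energies (hypothetical)
weights = np.exp(-E_rep / kT)
weights /= weights.sum()
rate = sum(w * np.trapz(local_brink_axel_strength(E) * phase_space(E_tr), E_tr)
           for w, E in zip(weights, E_rep))
print(f"toy thermal rate = {rate:.3e} (arbitrary units)")
```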

Read this paper on arXiv…

R. Herrera and C. Fuller
Fri, 12 Nov 21
16/53

Comments: 19 pages, 10 figures

Predicting resolved galaxy properties from photometric images using convolutional neural networks [GA]

http://arxiv.org/abs/2111.01154


Multi-band images of galaxies reveal a huge amount of information about their morphology and structure. However, inferring properties of the underlying stellar populations such as age, metallicity or kinematics from those images is notoriously difficult. Traditionally such information is best extracted from expensive spectroscopic observations. Here we present the $Painting\, IntrinsiC\, Attributes\, onto\, SDSS\, Objects$ (PICASSSO) project and test the information content of photometric multi-band images of galaxies. We train a convolutional neural network on 27,558 galaxy image pairs to establish a connection between broad-band images and the underlying physical stellar and gaseous galaxy property maps. We test our machine learning (ML) algorithm with SDSS $ugriz$ mock images for which uncertainties and systematics are exactly known. We show that multi-band galaxy images contain enough information to reconstruct 2d maps of stellar mass, metallicity and age, as well as gas mass, gas metallicity and star formation rate. We recover the true stellar properties on a pixel-by-pixel basis with little scatter, $\lesssim20\%$, compared to $\sim50\%$ statistical uncertainty from traditional mass-to-light-ratio based methods. We further test for any systematics of our algorithm with image resolution, training sample size or wavelength coverage. We find that galaxy morphology alone constrains stellar properties to better than $\sim20\%$, thus highlighting the benefits of including morphology in the parameter estimation. The machine learning approach can predict maps at high resolution, limited only by the resolution of the input bands, thus achieving higher resolution than IFU observations. The network architecture and all code are publicly available on GitHub.
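A minimal sketch of the kind of image-to-map regression described above, using a small fully convolutional network in PyTorch; the channel counts, depth, and random tensors are illustrative assumptions, not the PICASSSO architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of the idea (not the PICASSSO architecture): a fully convolutional
# network mapping 5-band (ugriz) images to per-pixel stellar/gas property maps.
# Channel counts, depth and the random data below are illustrative assumptions.

n_bands, n_props = 5, 6   # ugriz in; e.g. M*, Z*, age, Mgas, Zgas, SFR out

model = nn.Sequential(
    nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, n_props, kernel_size=1),        # per-pixel regression output
)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for image / property-map pairs from the mock catalogue.
images = torch.randn(16, n_bands, 64, 64)
maps = torch.randn(16, n_props, 64, 64)

for step in range(5):                              # toy training loop
    pred = model(images)
    loss = loss_fn(pred, maps)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    print(step, float(loss))
```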

Read this paper on arXiv…

T. Buck and S. Wolf
Wed, 3 Nov 21
1/106

Comments: 14 pages main text, 9 figures, 3 pages appendix with 5 figures, submitted to MNRAS

High resolution calibration of string network evolution [CEA]

http://arxiv.org/abs/2110.15427


The canonical velocity-dependent one-scale (VOS) model for cosmic string evolution contains a number of free parameters which cannot be obtained ab initio. Therefore it must be calibrated using high resolution numerical simulations. We exploit our state-of-the-art, graphically accelerated implementation of the evolution of local Abelian-Higgs string networks to provide a statistically robust calibration of this model. In order to do so, we make use of the largest set of high resolution simulations carried out to date, for a variety of cosmological expansion rates, and explore the impact of key numerical choices on model calibration, including the dynamic range, lattice spacing, and the choice of numerical estimators for the mean string velocity. This sensitivity exploration shows that certain numerical choices do have consequences for observationally crucial parameters, such as the loop chopping parameter. To conclude, we also briefly illustrate how our results impact observational constraints on cosmic strings.
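For reference, the canonical VOS model being calibrated consists of two coupled ODEs for the characteristic length L and mean string velocity v; the sketch below integrates the standard form (Martins & Shellard) with a placeholder value of the loop chopping parameter, which is one of the quantities the paper calibrates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of the canonical VOS model (standard Martins & Shellard form); the
# parameter values below are placeholders, not the calibration results of the paper.

c_tilde = 0.23          # loop chopping efficiency (free parameter to be calibrated)

def k_of_v(v):
    """Momentum parameter ansatz used in the standard VOS model."""
    return (2.0 * np.sqrt(2.0) / np.pi) * (1.0 - 8.0 * v**6) / (1.0 + 8.0 * v**6)

def vos_rhs(t, y, m):
    """y = (L, v); scale factor a(t) ~ t^m, so H = m/t."""
    L, v = y
    H = m / t
    dL = H * L * (1.0 + v**2) + 0.5 * c_tilde * v
    dv = (1.0 - v**2) * (k_of_v(v) / L - 2.0 * H * v)
    return [dL, dv]

m = 0.5                                   # radiation-era expansion rate
sol = solve_ivp(vos_rhs, (1.0, 1e4), [0.3, 0.5], args=(m,), rtol=1e-8)
t_end = sol.t[-1]
L_end, v_end = sol.y[:, -1]
print("scaling ratio L/t =", L_end / t_end, " mean velocity =", v_end)
```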

Read this paper on arXiv…

J. Correia and C. Martins
Mon, 1 Nov 21
10/58

Comments: Summary of a talk given at the From Cosmic Strings to Superstrings parallel session of the Sixteenth Marcel Grossmann Meeting, partially summarizing work previously reported in arXiv:2108.07513. To appear in the proceedings

Bayesian reconstruction of nuclear matter parameters from the equation of state of neutron star matter [CL]

http://arxiv.org/abs/2110.15776


The nuclear matter parameters (NMPs), which underlie the construction of the equation of state (EoS) of neutron star matter, are not directly accessible. A Bayesian approach is applied to reconstruct the posterior distributions of NMPs from the EoS of neutron star matter. The constraints on lower-order parameters imposed by finite-nuclei observables are incorporated through appropriately chosen prior distributions. The calculations are performed with two sets of pseudo data on the EoS whose true models are known. The median values of second- or higher-order NMPs show sizeable deviations from their true values, and the associated uncertainties are also larger. The sources of these uncertainties are intrinsic in nature, identified as (i) the correlations among various NMPs and (ii) the variations in the EoS of symmetric nuclear matter, the symmetry energy, and the neutron-proton asymmetry in such a way that the neutron star matter EoS remains almost unaffected.
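The sketch below illustrates the Bayesian machinery on a deliberately reduced problem: recovering the symmetry-energy parameters (J, L, K_sym) of a Taylor-expanded EoS from pseudo data with a known truth, using flat priors and emcee. The expansion, priors, and pseudo data are illustrative assumptions, not the models used in the paper.

```python
import numpy as np
import emcee

# Minimal sketch (not the authors' model): infer the symmetry-energy parameters of
#   E_sym(rho) = J + L x + K_sym x^2 / 2,   x = (rho - rho0) / (3 rho0)
# from pseudo data with known truth, using flat priors and a Gaussian likelihood.

rho0 = 0.16                                   # saturation density (fm^-3)
rho = np.linspace(0.08, 0.32, 12)
x = (rho - rho0) / (3.0 * rho0)
truth = np.array([32.0, 60.0, -100.0])        # assumed true J, L, K_sym (MeV)
sigma = 1.0                                   # pseudo-data uncertainty (MeV)
rng = np.random.default_rng(1)
data = truth[0] + truth[1] * x + 0.5 * truth[2] * x**2 + rng.normal(0.0, sigma, x.size)

def log_prob(theta):
    J, L, Ksym = theta
    if not (25 < J < 40 and 20 < L < 120 and -400 < Ksym < 100):   # flat priors
        return -np.inf
    model = J + L * x + 0.5 * Ksym * x**2
    return -0.5 * np.sum(((data - model) / sigma) ** 2)

ndim, nwalkers = 3, 32
p0 = truth + 1e-2 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print("posterior medians:", np.median(samples, axis=0))
```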

Read this paper on arXiv…

S. Imam, N. Patra, C. Mondal, et al.
Mon, 1 Nov 21
58/58

Comments: 11 pages, 7 figures, Submitted to PRC

Numerical solutions to linear transfer problems of polarized radiation I. Algebraic formulation and stationary iterative methods [SSA]

http://arxiv.org/abs/2110.11861


Context. The numerical modeling of the generation and transfer of polarized radiation is a key task in solar and stellar physics research and has led to a relevant class of discrete problems that can be reframed as linear systems. In order to solve such problems, it is common to rely on efficient stationary iterative methods. However, the convergence properties of these methods are problem-dependent, and a rigorous investigation of their convergence conditions, when applied to transfer problems of polarized radiation, is still lacking. Aims. After summarizing the most widely employed iterative methods used in the numerical transfer of polarized radiation, this article aims to clarify how the convergence of these methods depends on different design elements, such as the choice of the formal solver, the discretization of the problem, or the use of damping factors. The main goal is to highlight advantages and disadvantages of the different iterative methods in terms of stability and rate of convergence. Methods. We first introduce an algebraic formulation of the radiative transfer problem. This formulation allows us to explicitly assemble the iteration matrices arising from different stationary iterative methods, compute their spectral radii and derive their convergence rates, and test the impact of different discretization settings, problem parameters, and damping factors. Conclusions. The general methodology used in this article, based on a fully algebraic formulation of linear transfer problems of polarized radiation, provides useful estimates of the convergence rates of various iterative schemes. Additionally, it can lead to novel solution approaches as well as analyses for a wider range of settings, including the unpolarized case.
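The algebraic viewpoint can be illustrated in a few lines of NumPy: split a (toy) system matrix, assemble the stationary iteration matrix explicitly, and read off the convergence rate from its spectral radius, optionally with a damping factor. The random test matrix stands in for the actual discretized transfer operator; Jacobi splitting is used only as an example of a stationary method.

```python
import numpy as np

# Sketch of the algebraic approach: write the discrete problem as A x = b, split
# A = M - N for a stationary method, assemble the iteration matrix T = M^{-1} N,
# and read off the convergence rate from its spectral radius. The small random
# system below is only a stand-in for the actual transfer operator.

rng = np.random.default_rng(2)
n = 50
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))     # toy system: identity plus small perturbation

# Jacobi splitting: M = diag(A), N = M - A
M = np.diag(np.diag(A))
T = np.linalg.solve(M, M - A)                     # iteration matrix M^{-1} N

rho = max(abs(np.linalg.eigvals(T)))              # spectral radius
print("spectral radius:", rho)
print("asymptotic rate (-log10 rho):", -np.log10(rho) if rho < 1 else "divergent")

# Effect of a damping factor omega (damped Jacobi): T_omega = (1 - omega) I + omega T
omega = 0.8
T_d = (1 - omega) * np.eye(n) + omega * T
print("damped spectral radius:", max(abs(np.linalg.eigvals(T_d))))
```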

Read this paper on arXiv…

G. Janett, P. Benedusi, L. Belluzzi, et al.
Mon, 25 Oct 21
73/76

Comments: N/A

GP-MOOD: A positive-preserving high-order finite volume method for hyperbolic conservation laws [CL]

http://arxiv.org/abs/2110.08683


We present an a posteriori shock-capturing finite volume method called GP-MOOD that solves a compressible hyperbolic conservative system at high-order solution accuracy (e.g., third-, fifth-, and seventh-order) in multiple spatial dimensions. The GP-MOOD method combines two methodologies, the polynomial-free spatial reconstruction methods of GP (Gaussian Process) and the a posteriori detection algorithms of MOOD (Multidimensional Optimal Order Detection). The spatial approximation of our GP-MOOD method uses GP’s unlimited spatial reconstruction that builds upon our previous studies on GP reported in Reyes et al., Journal of Scientific Computing, 76 (2017) and Journal of Computational Physics, 381 (2019). This paper focuses on extending GP’s flexible variability of spatial accuracy to an a posteriori detection formalism based on the MOOD approach. We show that GP’s polynomial-free reconstruction provides a seamless pathway to the MOOD’s order cascading formalism by utilizing GP’s novel property of variable (2R+1)th-order spatial accuracy on a multidimensional GP stencil defined by the GP radius R, whose size is smaller than that of the standard polynomial MOOD methods. The resulting GP-MOOD method is positivity-preserving. We examine the numerical stability and accuracy of GP-MOOD on smooth and discontinuous flows in multiple spatial dimensions without resorting to any conventional, computationally expensive a priori nonlinear limiting mechanism to maintain numerical stability.

Read this paper on arXiv…

R. Bourgeois and D. Lee
Tue, 19 Oct 21
1/98

Comments: N/A

Accurate Baryon Acoustic Oscillations reconstruction via semi-discrete optimal transport [CEA]

http://arxiv.org/abs/2110.08868


Optimal transport theory has recently reemerged as a vastly resourceful field of mathematics with elegant applications across physics and computer science. Harnessing methods from geometry processing, we report on an efficient implementation for a specific problem in cosmology — the reconstruction of the linear density field from low redshifts, in particular the recovery of the Baryonic Acoustic Oscillation (BAO) scale. We demonstrate our algorithm’s accuracy by retrieving the BAO scale in noiseless cosmological simulations that are designed to cancel cosmic variance; we find uncertainties to be reduced by a factor of 4.3 compared with performing no reconstruction, and a factor of 3.1 compared with standard reconstruction.

Read this paper on arXiv…

S. Hausegger, B. Lévy and R. Mohayaee
Tue, 19 Oct 21
48/98

Comments: Comments welcome! 5 pages excluding references, 2 figures, 1 table

An extension of Gmunu: General-relativistic resistive magnetohydrodynamics based on staggered-meshed constrained transport with elliptic cleaning [IMA]

http://arxiv.org/abs/2110.03732


We present the implementation of general-relativistic resistive magnetohydrodynamics solvers and three divergence-free handling approaches adopted in the General-relativistic multigrid numerical (Gmunu) code.
In particular, implicit-explicit Runge-Kutta schemes are used to deal with the stiff terms in the evolution equations for small resistivity.
The three divergence-free handling methods are (i) hyperbolic divergence cleaning through a generalised Lagrange multiplier (GLM); (ii) staggered-meshed constrained transport (CT) schemes; and (iii) elliptic cleaning through a multigrid (MG) solver, which is applicable to both cell-centred and face-centred (staggered-grid) magnetic fields.
The implementation has been tested with a number of numerical benchmarks, from special-relativistic to general-relativistic cases.
We demonstrate that our code can robustly handle a very wide range of resistivities.
We also illustrate applications in modelling magnetised neutron stars, and compare how the different divergence-free handling methods affect the evolution of the stars.
Furthermore, we show that the preservation of the divergence-free condition of the magnetic field with staggered-meshed constrained transport schemes can be significantly improved by applying elliptic cleaning.
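A minimal sketch of the elliptic-cleaning idea on a periodic, cell-centred 2D grid, using an FFT Poisson solve as a stand-in for the multigrid solver in Gmunu; the toy field and grid are invented for illustration.

```python
import numpy as np

# Sketch of the elliptic-cleaning step: solve div(grad(phi)) = div(B) and correct
# B -> B - grad(phi). Using the modified wavenumbers of the centred-difference
# operators makes the cleaned field divergence-free to round-off in that
# discretisation. (FFT Poisson solve here; Gmunu uses multigrid.)

n, L = 64, 1.0
dx = L / n
rng = np.random.default_rng(3)
Bx, By = rng.normal(size=(2, n, n))                       # toy field, div B != 0

def div(Bx, By):
    return (np.roll(Bx, -1, 0) - np.roll(Bx, 1, 0)) / (2 * dx) + \
           (np.roll(By, -1, 1) - np.roll(By, 1, 1)) / (2 * dx)

def grad(phi, axis):
    return (np.roll(phi, -1, axis) - np.roll(phi, 1, axis)) / (2 * dx)

k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
kxm, kym = np.meshgrid(np.sin(k * dx) / dx, np.sin(k * dx) / dx, indexing="ij")
k2 = kxm**2 + kym**2
k2[k2 == 0.0] = 1.0                     # harmless: these modes carry no divergence

phi_hat = -np.fft.fft2(div(Bx, By)) / k2
phi = np.real(np.fft.ifft2(phi_hat))

Bx -= grad(phi, 0)
By -= grad(phi, 1)
print("max |div B| after cleaning:", np.abs(div(Bx, By)).max())
```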

Read this paper on arXiv…

P. Cheong, A. Yip and T. Li
Mon, 11 Oct 21
43/58

Comments: N/A

Bar-driven Leading Spiral Arms in a Counter-rotating Dark Matter Halo [GA]

http://arxiv.org/abs/2110.02149


An overwhelming majority of galactic spiral arms trail with respect to the rotation of the galaxy, though a small sample of leading spiral arms has been observed. The formation of these leading spirals is not well understood. Here we show, using collisionless $N$-body simulations, that a barred disc galaxy in a retrograde dark matter halo can produce long-lived ($\sim3$ Gyr) leading spiral arms. Due to the strong resonant coupling of the disc to the halo, the bar slows rapidly and spiral perturbations are forced ahead of the bar. We predict that such a system, if observed, will also host a dark matter wake oriented perpendicular to the stellar bar. More generally, we propose that any mechanism that rapidly decelerates the stellar bar will allow leading spiral arms to flourish.

Read this paper on arXiv…

E. Lieb, A. Collier and A. Madigan
Wed, 6 Oct 21
44/56

Comments: N/A

Chaos in self-gravitating many-body systems: Lyapunov time dependence of $N$ and the influence of general relativity [CL]

http://arxiv.org/abs/2109.11012


In self-gravitating $N$-body systems, small perturbations introduced at the start, or infinitesimal errors produced by the numerical integrator or due to limited precision in the computer, grow exponentially with time. For Newton’s gravity, we confirm earlier results by \cite{1992ApJ...386..635K} and \cite{1993ApJ...415..715G} that for relatively homogeneous systems, this rate of growth per crossing time increases with $N$ up to $N \sim 30$, but that for larger systems, the growth rate has a weaker dependence on $N$. For concentrated systems, however, the rate of exponential growth continues to scale with $N$. In relativistic self-gravitating systems, the rate of growth is almost independent of $N$. This effect, however, is only noticeable when the system’s mean velocity approaches the speed of light to within three orders of magnitude. The chaotic behavior of systems with $\gtrsim 10$ bodies under the usually adopted approximation of only solving the pairwise interactions in the Einstein-Infeld-Hoffmann equation of motion is qualitatively different from when the interaction terms (or cross terms) are taken into account. This result provides a strong motivation for follow-up studies on the microscopic effect of general relativity on orbital chaos, and the influence of higher-order cross-terms in the Taylor-series expansion of the EIH equations of motion.

Read this paper on arXiv…

S. Zwart, T. Boekholt, E. Por, et al.
Fri, 24 Sep 21
80/81

Comments: Submitted to A&A

A Co-Scaling Grid for Athena++ [IMA]

http://arxiv.org/abs/2109.03899


We present a co-scaling grid formalism and its implementation in the magnetohydrodynamics code Athena++. The formalism relies on flow symmetries in astrophysical problems involving expansion, contraction, and center-of-mass motion. The grid is evolved at the same time order as the fluid variables. The user specifies grid evolution laws, which can be independent of the fluid motion. Applying our implementation to standard hydrodynamic test cases leads to improved results and higher efficiency, compared to the fixed-grid solutions.

Read this paper on arXiv…

R. Habegger and F. Heitsch
Fri, 10 Sep 21
2/59

Comments: 11 pages, 8 figures

Dissipative Magnetohydrodynamics for Non-Resistive Relativistic Plasmas [HEAP]

http://arxiv.org/abs/2109.02796


Based on a 14-moment closure for non-resistive (general-) relativistic viscous plasmas, we describe a new numerical scheme that is able to handle all first-order dissipative effects (heat conduction, bulk and shear viscosities), as well as the anisotropies induced by the presence of magnetic fields. The latter are parameterized in terms of a thermal gyrofrequency or, equivalently, a thermal Larmor radius, which allows us to correctly capture the thermal Hall effect. By solving an extended Israel-Stewart-like system for the dissipative quantities that enforces algebraic constraints via stiff-relaxation, we are able to cast all first-order dissipative terms in flux-divergence form. This allows us to apply traditional high-resolution shock capturing methods to the equations, making the system suitable for the numerical study of highly turbulent flows. We present several numerical tests to assess the robustness of our numerical scheme in flat spacetime. The 14-moment closure can seamlessly interpolate between the highly collisional limit found in neutron star mergers, and the highly anisotropic limit of relativistic Braginskii magnetohydrodynamics appropriate for weakly collisional plasmas in black-hole accretion problems. We believe that this new formulation and numerical scheme will be useful for a broad class of relativistic magnetized flows.

Read this paper on arXiv…

E. Most and J. Noronha
Wed, 8 Sep 21
67/76

Comments: 24 pages, 7 figures

Applying explicit symplectic integrator to study chaos of charged particles around magnetized Kerr black hole [CL]

http://arxiv.org/abs/2109.02295


In a recent work by Wu, Wang, Sun and Liu, a second-order explicit symplectic integrator was proposed for the integrable Kerr spacetime geometry. It remains suitable for simulating the nonintegrable dynamics of charged particles moving around a Kerr black hole embedded in an external magnetic field. Its successful construction is due to the contribution of a time transformation. The algorithm exhibits good long-term numerical performance in terms of stable Hamiltonian errors and computational efficiency. As an application, the dynamics of order and chaos of charged particles is surveyed. In some circumstances, an increase of the dragging effects of the spacetime seems to weaken the extent of chaos in the global phase-space structure on Poincaré sections. However, an increase of the magnetic parameter strengthens the chaotic properties. On the other hand, fast Lyapunov indicators show that there is no universal rule for the dependence of the transition between different dynamical regimes on the black hole spin. The dragging effects of the spacetime do not always weaken the extent of chaos from a local point of view.

Read this paper on arXiv…

W. Sun, Y. Wang, F. Liu, et al.
Tue, 7 Sep 21
29/89

Comments: 10 pages, 20 figures

Scalar and Gravitational Transient "Hair" for Near-Extremal Black Holes [CL]

http://arxiv.org/abs/2109.02607


We study the existence and nature of Aretakis “hair” and its potentially observable imprint at a finite distance from the horizon (the Ori coefficient) in near-extremal black hole backgrounds. Specifically, we consider the time evolution of horizon-penetrating scalar and gravitational perturbations with compact support on near-extremal Reissner-Nordström (NERN) and Kerr (NEK) backgrounds. We do this by numerically solving the Teukolsky equation and determining the Aretakis charge values on the horizon and at a finite distance from the black hole. We demonstrate that these values are no longer strictly conserved in the non-extremal case; however, their decay rate can be arbitrarily slow as the black hole approaches extremality, allowing for the possibility of their observation as transient hair.

Read this paper on arXiv…

K. Gonzalez-Quesada, S. Sabharwal and G. Khanna
Tue, 7 Sep 21
40/89

Comments: 6 pages; 15 figures

Drift Orbit Bifurcations and Cross-field Transport in the Outer Radiation Belt: Global MHD and Integrated Test-Particle Simulations [CL]

http://arxiv.org/abs/2109.01913


Energetic particle fluxes in the outer magnetosphere present a significant challenge to modelling efforts as they can vary by orders of magnitude in response to solar wind driving conditions. In this article, we demonstrate the ability to propagate test particles through global MHD simulations to a high level of precision and use this to map the cross-field radial transport associated with relativistic electrons undergoing drift orbit bifurcations (DOBs). The simulations predict DOBs primarily occur within an Earth radius of the magnetopause loss cone and appear significantly different for southward and northward interplanetary magnetic field orientations. The changes to the second invariant are shown to manifest as a dropout in particle fluxes with pitch angles close to 90$^\circ$ and indicate DOBs are a cause of butterfly pitch angle distributions within the night-time sector. The convective electric field, not included in previous DOB studies, is found to have a significant effect on the resultant long-term transport, and losses to the magnetopause and atmosphere are identified as a potential method for incorporating DOBs within Fokker-Planck transport models.

Read this paper on arXiv…

R. Desai, J. Eastwood, R. Horne, et al.
Tue, 7 Sep 21
81/89

Comments: 12 pages, 8 figures. Accepted for publication as a Journal of Geophysical Research article on 04 September 2021

Hardware-accelerated Inference for Real-Time Gravitational-Wave Astronomy [CL]

http://arxiv.org/abs/2108.12430


The field of transient astronomy has seen a revolution with the first gravitational-wave detections and the arrival of multi-messenger observations they enabled. Transformed by the first detection of binary black hole and binary neutron star mergers, computational demands in gravitational-wave astronomy are expected to grow by at least a factor of two over the next five years as the global network of kilometer-scale interferometers is brought to design sensitivity. With the increase in detector sensitivity, real-time delivery of gravitational-wave alerts will become increasingly important as an enabler of multi-messenger follow-up. In this work, we report a novel implementation and deployment of deep learning inference for real-time gravitational-wave data denoising and astrophysical source identification. This is accomplished using a generic Inference-as-a-Service model that is capable of adapting to the future needs of gravitational-wave data analysis. Our implementation allows seamless incorporation of hardware accelerators and also enables the use of commercial or private (dedicated) as-a-service computing. Based on our results, we propose a paradigm shift in low-latency and offline computing in gravitational-wave astronomy. Such a shift can address key challenges in peak usage, scalability and reliability, and provide a data analysis platform particularly optimized for deep learning applications. The achieved sub-millisecond scale latency will also be relevant for any machine learning-based real-time control systems that may be invoked in the operation of near-future and next generation ground-based laser interferometers, as well as the front-end collection, distribution and processing of data from such instruments.

Read this paper on arXiv…

A. Gunny, D. Rankin, J. Krupa, et al.
Tue, 31 Aug 21
60/73

Comments: 21 pages, 14 figures

Menura: a code for simulating the interaction between a turbulent solar wind and solar system bodies [EPA]

http://arxiv.org/abs/2108.12252


Despite the close relationship between planetary science and plasma physics, few advanced numerical tools allow one to bridge the two topics. The code Menura offers a breakthrough towards the self-consistent modelling of these overlapping fields, with a novel two-step approach allowing for the global simulation of the interaction between a fully turbulent solar wind and various bodies of the solar system. This article introduces the new code and its two-step global algorithm, illustrated by a first example: the interaction between a turbulent solar wind and a comet.

Read this paper on arXiv…

E. Behar, S. Fatemi, P. Henri, et al.
Mon, 30 Aug 21
5/38

Comments: N/A

Precision in high resolution absorption line modelling, analytic Voigt derivatives, and optimisation methods [IMA]

http://arxiv.org/abs/2108.11218


This paper describes the optimisation theory on which VPFIT, a non-linear least-squares program for modelling absorption spectra, is based. Particular attention is paid to precision. Voigt function derivatives have previously been calculated using numerical finite difference approximations. We show how these can instead be computed analytically using Taylor series expansions and look-up tables. We introduce a new optimisation method for an efficient descent path to the best-fit, combining the principles used in both the Gauss-Newton and Levenberg-Marquardt algorithms. A simple practical fix for ill-conditioning is described, a common problem when modelling quasar absorption systems. We also summarise how unbiased modelling depends on using an appropriate information criterion to guard against over- or under-fitting.
The methods and the new implementations introduced in this paper are aimed at optimal usage of future data from facilities such as ESPRESSO/VLT and HIRES/ELT, particularly for the most demanding applications such as searches for spacetime variations in fundamental constants and attempts to detect cosmological redshift drift.
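The analytic-derivative idea can be illustrated with the Faddeeva function: since H(a,u) = Re[w(u + ia)] and w'(z) = 2i/√π − 2zw(z), the derivative of the Voigt profile follows without finite differences. The snippet below is a minimal standalone version, not VPFIT's implementation.

```python
import numpy as np
from scipy.special import wofz

# Minimal sketch of analytic Voigt derivatives (my own version, not VPFIT's code):
# the Voigt profile is H(a, u) = Re[w(u + i a)], where w is the Faddeeva function,
# and w'(z) = 2i/sqrt(pi) - 2 z w(z), so dH/du follows without finite differences.

def voigt_H(a, u):
    return wofz(u + 1j * a).real

def dvoigt_du(a, u):
    z = u + 1j * a
    w = wofz(z)
    dw = 2j / np.sqrt(np.pi) - 2.0 * z * w      # analytic derivative of w(z)
    return dw.real                              # dH/du = Re[dw/dz], since dz/du = 1

a, u = 1e-3, np.linspace(-5, 5, 11)
# Sanity check against a central finite difference.
h = 1e-6
fd = (voigt_H(a, u + h) - voigt_H(a, u - h)) / (2 * h)
print("max |analytic - finite difference| =", np.max(np.abs(dvoigt_du(a, u) - fd)))
```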

Read this paper on arXiv…

J. Webb, R. Carswell and C. Lee
Thu, 26 Aug 21
29/52

Comments: 15 pages, 7 figures, submitted to MNRAS

Field moment expansion method for interacting Bosonic systems [CL]

http://arxiv.org/abs/2108.08849


We introduce a numerical method and Python package, https://github.com/andillio/CHiMES, that simulates quantum systems initially well approximated by mean field theory, using a second-order extension of the classical field approach. We call this the field moment expansion method. In this way, we can accurately approximate the evolution of first and second field moments beyond where the mean field theory breaks down. This allows us to estimate the quantum breaktime of a classical approximation without any calculations external to the theory. We investigate the accuracy of the field moment expansion using a number of well studied quantum test problems. Interacting Bosonic systems similar to scalar field dark matter are chosen as test problems. We find that successful application of this method depends on two conditions: the quantum system must initially be well described by the classical theory, and the growth of the higher-order moments must be hierarchical.

Read this paper on arXiv…

A. Eberhardt, M. Kopp, A. Zamora, et al.
Mon, 23 Aug 21
16/54

Comments: To be submitted to Phys. Rev. D

A low-dissipation HLLD approximate Riemann solver for a very wide range of Mach numbers [CL]

http://arxiv.org/abs/2108.04991


We propose a new Harten-Lax-van Leer discontinuities (HLLD) approximate Riemann solver to improve the stability of shocks and the accuracy of low-speed flows in multidimensional magnetohydrodynamic (MHD) simulations. Stringent benchmark tests verify that the new solver is more robust against a numerical shock instability and is more accurate for low-speed, nearly incompressible flows than the original solver, whereas additional computational costs are quite low. The novel ability of the new solver enables us to tackle MHD systems, including both high and low Mach number flows.

Read this paper on arXiv…

T. Minoshima and T. Miyoshi
Thu, 12 Aug 21
45/62

Comments: 16 pages, 3 figures, accepted for the publication in Journal of Computational Physics

Multi-Frequency Implicit Semi-analog Monte-Carlo (ISMC) Radiative Transfer Solver in Two-Dimensions (without Teleportation) [CL]

http://arxiv.org/abs/2108.02612


We study the multi-dimensional radiative transfer phenomena using the ISMC scheme, in both gray and multi-frequency problems. Implicit Monte-Carlo (IMC) schemes have been in use for five decades. The basic algorithm yields teleportation errors, where photons propagate faster than the correct heat front velocity. Recently [Poëtte and Valentin, J. Comp. Phys., 412, 109405 (2020)], a new implicit scheme based on the semi-analog scheme was presented and tested in several one-dimensional gray problems. In this scheme, the material energy of the cell is carried by material-particles, and the photons are produced only from existing material particles. As a result, the teleportation errors vanish, due to the infinite discrete spatial accuracy of the scheme. We examine the validity of the new scheme in two-dimensional problems, both in Cartesian and Cylindrical geometries. Additionally, we introduce an expansion of the new scheme for multi-frequency problems. We show that the ISMC scheme presents excellent results without teleportation errors in a large number of benchmarks, especially against the slow classic IMC convergence.

Read this paper on arXiv…

E. Steinberg and S. Heizler
Fri, 6 Aug 21
9/54

Comments: 21 pages, 26 figures

Generalisation of the Menegozzi & Lamb Maser Algorithm to the Transient Superradiance Regime [IMA]

http://arxiv.org/abs/2108.01164


We investigate the application of the conventional quasi-steady state maser modelling algorithm of Menegozzi & Lamb (ML) to the high field transient regime of the one-dimensional Maxwell-Bloch (MB) equations for a velocity distribution of atoms or molecules. We quantify the performance of a first order perturbation approximation available within the ML framework when modelling regions of increasing electric field strength, and we show that the ML algorithm is unable to accurately describe the key transient features of R. H. Dicke’s superradiance (SR). We extend the existing approximation to one of variable fidelity, and we derive a generalisation of the ML algorithm convergent in the transient SR regime by performing an integration on the MB equations prior to their Fourier representation. We obtain a manifestly unique integral Fourier representation of the MB equations which is $\mathcal{O}\left(N\right)$ complex in the number of velocity channels $N$ and which is capable of simulating transient SR processes at varying degrees of fidelity. As a proof of operation, we demonstrate our algorithm’s accuracy against reference time domain simulations of the MB equations for transient SR responses to the sudden inversion of a sample possessing a velocity distribution of moderate width. We investigate the performance of our algorithm at varying degrees of approximation fidelity, and we prescribe fidelity requirements for future work simulating SR processes across wider velocity distributions.

Read this paper on arXiv…

C. Wyenberg, B. Lankhaar, F. Rajabi, et al.
Wed, 4 Aug 21
24/66

Comments: 17 pages, 10 figures. Accepted for publication in Monthly Notices of the Royal Astronomical Society, 2021-08-01

Spherically symmetric model atmospheres using approximate lambda operators V. Static inhomogeneous atmospheres of hot dwarf stars [SSA]

http://arxiv.org/abs/2108.00773


Context. Clumping is a common property of stellar winds and is being incorporated into the solution of the radiative transfer equation coupled with the kinetic equilibrium equations. However, in static hot model atmospheres, clumping and its influence on the temperature and density structures have not been considered and analysed at all to date. This is in spite of the fact that clumping can influence the interpretation of the resulting spectra, as many inhomogeneities can appear there, for example as a result of turbulent motions.
Aims. We aim to investigate the effects of clumping on atmospheric structure for the special case of a static, spherically symmetric atmosphere assuming microclumping and a 1-D geometry.
Methods. Static, spherically symmetric, non-LTE (non-local thermodynamic equilibrium) model atmospheres were calculated using the recent version of our code, which includes optically thin clumping. The matter is assumed to consist of dense clumps and a void interclump medium. Clumping is described by means of clumping and volume filling factors, assuming all clumps are optically thin. The enhanced opacity and emissivity in clumps are multiplied by the volume filling factor to obtain their mean values. These mean values are used in the radiative transfer equation. The equations of kinetic equilibrium and the thermal balance equation use the clump values of the densities. The equations of hydrostatic and radiative equilibrium use the mean values of the densities.
Results. The atmospheric structure was calculated for selected stellar parameters. Moderate differences were found in the temperature structure. However, clumping causes enhanced continuum radiation in the Lyman-line spectral region, while radiation in other parts of the spectrum is lower, depending on the adopted model. The atomic level departure coefficients are influenced by clumping as well.

Read this paper on arXiv…

J. Kubát and B. Kubátová
Tue, 3 Aug 21
78/90

Comments: accepted to Astronomy and Astrophysics

Entropy-Conserving Scheme for Modeling Nonthermal Energies in Fluid Dynamics Simulations [GA]

http://arxiv.org/abs/2107.14240


We compare the performance of energy-based and entropy-conserving schemes for modeling nonthermal energy components, such as unresolved turbulence and cosmic rays, using idealized fluid dynamics tests and isolated galaxy simulations. While both methods aim to model advection and adiabatic compression or expansion of different energy components, the energy-based scheme numerically solves the non-conservative equation for the energy density evolution, while the entropy-conserving scheme uses a conservative equation for a modified entropy. Using the standard shock tube and Zel’dovich pancake tests, we show that the energy-based scheme results in a spurious generation of nonthermal energy on shocks, while the entropy-conserving method evolves the energy adiabatically to machine precision. We also show that, in simulations of an isolated $L_\star$ galaxy, switching between the schemes results in $\approx 20-30\%$ changes in the total star formation rate and a significant difference in morphology, particularly near the galaxy center. We also outline and test a simple method that can be used in conjunction with the entropy-conserving scheme to model the injection of nonthermal energies on shocks. Finally, we discuss how the entropy-conserving scheme can be used to capture the kinetic energy dissipated by numerical viscosity into the subgrid turbulent energy implicitly, without explicit source terms that require calibration and can be rather uncertain. Our results indicate that the entropy-conserving scheme is the preferred choice for modeling nonthermal energy components, a conclusion that is equally relevant for Eulerian and moving-mesh fluid dynamics codes.
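The core difference between the two schemes can be seen already in a zero-dimensional Lagrangian parcel: updating the energy with an explicit dE = −P dV accumulates truncation error during adiabatic compression, while a conserved modified entropy K = P V^γ reproduces the adiabat to machine precision. The toy sketch below illustrates only this point; it is not the paper's hydro scheme.

```python
import numpy as np

# Toy illustration: evolving a nonthermal energy via a non-conservative energy
# equation versus a conserved "modified entropy" K = P V^gamma, for adiabatic
# compression of a single Lagrangian parcel (not the paper's full hydro scheme).

gamma = 5.0 / 3.0
V0, V1, nsteps = 1.0, 0.5, 200
P0 = 1.0
E = P0 * V0 / (gamma - 1.0)          # energy-based variable
K = P0 * V0**gamma                   # entropy-based variable (exactly conserved)

V = V0
dV = (V1 - V0) / nsteps
for _ in range(nsteps):
    P = (gamma - 1.0) * E / V
    E -= P * dV                      # explicit dE = -P dV (adiabatic compression)
    V += dV

P_energy = (gamma - 1.0) * E / V1
P_entropy = K / V1**gamma            # recovered from the conserved entropy
P_exact = P0 * (V0 / V1)**gamma
print("energy-scheme relative error: ", abs(P_energy / P_exact - 1.0))
print("entropy-scheme relative error:", abs(P_entropy / P_exact - 1.0))
```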

Read this paper on arXiv…

V. Semenov, A. Kravtsov and B. Diemer
Mon, 2 Aug 21
72/82

Comments: 19 pages, 9 figures; submitted to ApJS; comments are welcome

The Boosted Potential [CEA]

http://arxiv.org/abs/2107.13008


The global gravitational potential, $\phi$, is not commonly employed in the analysis of cosmological simulations. This is because its level sets do not show any obvious correspondence to the underlying density field or to the persistence of structures. Here, we show that the potential becomes a locally meaningful quantity when considered from a boosted frame of reference, defined by subtracting a uniform gradient term $\phi_{\rm{boost}}(\boldsymbol{x}) = \phi(\boldsymbol{x}) + \boldsymbol{x} \cdot \boldsymbol{a}_0$ with acceleration $\boldsymbol{a}_0$. We study this “boosted potential” in a variety of scenarios and propose several applications: (1) The boosted potential can be used to define a binding criterion that naturally incorporates the effect of tidal fields. This solves several problems of commonly-used self-potential binding checks: i) it defines a tidal boundary for each halo, ii) it is much less likely to consider caustics as haloes (especially in the context of warm dark matter cosmologies), and iii) it performs better at identifying virialized regions of haloes and yields the expected value of 2 for the virial ratio. (2) This binding check can be generalized to filaments and other cosmic structures to define binding energies in one and two dimensions. (3) The boosted potential defines a system which facilitates the understanding of the disruption of satellite subhaloes. We propose a picture where most mass loss is explained through a lowering of the escape energy through the tidal field. (4) We discuss the possibility of understanding the topology of the potential field in a way that is independent of constant offsets in the first derivative $\boldsymbol{a}_0$. We foresee that this novel perspective on the potential can help to develop more accurate models and improve our understanding of structure formation.
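A minimal sketch of the boosted-potential binding check for a toy host-subhalo configuration of two point masses; the potential model, the choice of a0, and the crude stand-in for the saddle-point energy are all illustrative assumptions, not the paper's procedure.

```python
import numpy as np

# Toy sketch of the boosted-potential binding criterion: phi_boost(x) = phi(x) + x . a0,
# with a0 the acceleration of the frame of interest (here a subhalo's centre).
# Point-mass potentials and the "saddle" estimate below are illustrative only.

G = 1.0
def phi_point_masses(x, centres, masses, soft=1e-3):
    r = np.linalg.norm(x[:, None, :] - centres[None, :, :], axis=-1)
    return -(G * masses / np.sqrt(r**2 + soft**2)).sum(axis=-1)

centres = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])   # host, subhalo
masses = np.array([100.0, 1.0])

# Frame acceleration: pull of the host evaluated at the subhalo centre.
d = centres[1] - centres[0]
a0 = -G * masses[0] * d / np.linalg.norm(d)**3

rng = np.random.default_rng(4)
x = centres[1] + 0.3 * rng.normal(size=(1000, 3))          # particles near the subhalo
v = 0.5 * rng.normal(size=(1000, 3))

phi_boost = phi_point_masses(x, centres, masses) + x @ a0
E_boost = 0.5 * (v**2).sum(axis=1) + phi_boost

# Crude stand-in for the saddle point of the boosted potential, one unit from the
# subhalo towards the host; particles below this boosted energy count as bound.
x_saddle = centres[1] - d / np.linalg.norm(d)
phi_esc = phi_point_masses(x_saddle[None, :], centres, masses)[0] + x_saddle @ a0
bound = E_boost < phi_esc
print("bound fraction:", bound.mean())
```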

Read this paper on arXiv…

J. Stücker, R. Angulo and P. Busch
Thu, 29 Jul 21
22/59

Comments: 21 pages, 14 figures. Submitted to MNRAS. Comments and feedback welcome!

A tenuous, collisional atmosphere on Callisto [EPA]

http://arxiv.org/abs/2107.12341


A simulation tool which utilizes parallel processing is developed to describe molecular kinetics in 2D, single- and multi-component atmospheres on Callisto. This expands on our previous study on the role of collisions in 1D atmospheres on Callisto composed of radiolytic products (Carberry Mogan et al., 2020) by implementing a temperature gradient from noon to midnight across Callisto’s surface and introducing sublimated water vapor. We compare single-species, ballistic and collisional O2, H2 and H2O atmospheres, as well as an O2+H2O atmosphere, to 3-species atmospheres which contain H2 in varying amounts. Because the H2O vapor pressure is extremely sensitive to the surface temperatures, the density drops several orders of magnitude with increasing distance from the subsolar point, and the flow transitions from collisional to ballistic accordingly. In an O2+H2O atmosphere the local temperatures are determined by H2O near the subsolar point and transition with increasing distance from the subsolar point to being determined by O2. When radiolytically produced H2 is not negligible in O2+H2O+H2 atmospheres, this much lighter molecule, with a scale height roughly an order of magnitude larger than that for the heavier species, can cool the local temperatures via collisions. In addition, if the H2 component is dense enough, particles originating on the day-side and precipitating into the night-side atmosphere deposit energy via collisions, which in turn heats the local atmosphere relative to the surface temperature. Finally, we discuss the potential implications of this study on the presence of H2 in Callisto’s atmosphere and how the simulated densities correlate with expected detection thresholds at flyby altitudes of the proposed JUpiter ICy moons Explorer (JUICE) spacecraft.

Read this paper on arXiv…

S. Mogan, O. Tucker, R. Johnson, et al.
Tue, 27 Jul 21
74/97

Comments: N/A

Optimizing Parameters of Information-Theoretic Correlation Measurement for Multi-Channel Time-Series Datasets in Gravitational Wave Detectors [IMA]

http://arxiv.org/abs/2107.03516


Data analysis in modern science using extensive experimental and observational facilities, such as a gravitational wave detector, is essential in the search for novel scientific discoveries. Accordingly, various techniques and mathematical principles have been designed and developed to date. A recently proposed approximate correlation method based on information theory is widely adopted in science and engineering. Although the maximal information coefficient (MIC) method is still undergoing algorithmic improvement, it is particularly beneficial in identifying the correlations of multiple noise sources in gravitational-wave detectors, including non-linear effects. This study investigates various prospects for determining MIC parameters to improve the reliability of handling multi-channel time-series data and to reduce high computing costs, and proposes a novel method of determining optimized parameter sets for identifying noise correlations in gravitational wave data.
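A sketch of the kind of parameter scan involved, assuming the MINE interface of the minepy package: the MIC estimator's tunable parameters (the grid exponent alpha and the clumping factor c) are varied over synthetic, nonlinearly coupled channel data. The data and parameter grid are toy choices, not those of the study.

```python
import numpy as np
from minepy import MINE   # assumed interface of the minepy implementation of MIC

# Toy parameter scan: alpha controls the maximal grid size B(n) = n^alpha, and c is
# the clumping factor; both trade reliability against computing cost. The synthetic
# non-linearly coupled "channels" below stand in for detector time series.

rng = np.random.default_rng(5)
t = np.linspace(0, 10, 2000)
aux_channel = np.sin(2 * np.pi * 0.7 * t) + 0.1 * rng.normal(size=t.size)
strain_like = aux_channel**2 + 0.3 * rng.normal(size=t.size)   # non-linear coupling

for alpha in (0.4, 0.55, 0.7):
    for c in (5, 15):
        mine = MINE(alpha=alpha, c=c)
        mine.compute_score(aux_channel, strain_like)
        print(f"alpha={alpha:.2f} c={c:2d}  MIC={mine.mic():.3f}")
```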

Read this paper on arXiv…

P. Jung, S. Oh, Y. Kim, et al.
Fri, 9 Jul 21
51/62

Comments: 11 pages, 8 figures

Extended two-body problem for rotating rigid bodies [EPA]

http://arxiv.org/abs/2107.03274


A new technique that utilizes surface integrals to find the force, torque and potential energy between two non-spherical, rigid bodies is presented. The method is relatively fast, and allows us to solve the full rigid two-body problem for pairs of spheroids and ellipsoids with 12 degrees of freedom. We demonstrate the method with two dimensionless test scenarios, one where tumbling motion develops, and one where the motion of the bodies resembles spinning tops. We also test the method on the asteroid binary (66391) 1999 KW4, where both components are modelled either as spheroids or ellipsoids. The two different shape models have negligible effects on the eccentricity and semi-major axis, but have a larger impact on the angular velocity along the $z$-direction. In all cases, energy and total angular momentum are conserved, and the simulation accuracy is kept at the machine accuracy level.

Read this paper on arXiv…

A. Ho, M. Wold, J. Conway, et al.
Thu, 8 Jul 21
19/52

Comments: 24 pages, 9 figures, accepted for publication in Celestial Mechanics and Dynamical Astronomy

Comptonization by Reconnection Plasmoids in Black Hole Coronae I: Magnetically Dominated Pair Plasma [HEAP]

http://arxiv.org/abs/2107.00263


We perform two-dimensional particle-in-cell simulations of reconnection in magnetically dominated electron-positron plasmas subject to strong Compton cooling. We vary the magnetization $\sigma\gg1$, defined as the ratio of magnetic tension to plasma inertia, and the strength of cooling losses. Magnetic reconnection under such conditions can operate in magnetically dominated coronae around accreting black holes, which produce hard X-rays through Comptonization of seed soft photons. We find that the particle energy spectrum is dominated by a peak at mildly relativistic energies, which results from bulk motions of cooled plasmoids. The peak has a quasi-Maxwellian shape with an effective temperature of $\sim 100$~keV, which depends only weakly on the flow magnetization and the strength of radiative cooling. The mean bulk energy of the reconnected plasma is roughly independent of $\sigma$, whereas the variance is larger for higher magnetizations. The spectra also display a high-energy tail, which receives $\sim 25$% of the dissipated reconnection power for $\sigma=10$ and $\sim 40$% for $\sigma=40$. We complement our particle-in-cell studies with a Monte-Carlo simulation of the transfer of seed soft photons through the reconnection layer, and find the escaping X-ray spectrum. The simulation demonstrates that Comptonization is dominated by the bulk motions in the chain of Compton-cooled plasmoids and, for $\sigma\sim 10$, yields a spectrum consistent with the typical hard state of accreting black holes.

Read this paper on arXiv…

N. Sridhar, L. Sironi and A. Beloborodov
Fri, 2 Jul 21
10/67

Comments: 17 pages, 11 figures, 4 appendices

Chemulator: Fast, accurate thermochemistry for dynamical models through emulation [CL]

http://arxiv.org/abs/2106.14789


Chemical modelling serves two purposes in dynamical models: accounting for the effect of microphysics on the dynamics and providing observable signatures. Ideally, the former must be done as part of the hydrodynamic simulation, but this comes with a prohibitive computational cost, which leads to many simplifications being used in practice. Our aim is to produce a statistical emulator that replicates a full chemical model capable of solving the temperature and abundances of a gas through time. This emulator should suffer only a minor loss of accuracy compared with including a full chemical solver in a dynamical model, at a fraction of the computational cost. The gas-grain chemical code UCLCHEM was updated to include heating and cooling processes, and a large dataset of model outputs from possible starting conditions was produced. A neural network was then trained to map directly from inputs to outputs. Chemulator replicates the outputs of UCLCHEM with an overall mean squared error (MSE) of 0.0002 for a single time step of 1000 yr and is shown to be stable over 1000 iterations, with an MSE of 0.003 on the log-scaled temperature after one time step and 0.006 after 1000 time steps. Chemulator was found to be approximately 50,000 times faster than the time-dependent model it emulates but can introduce a significant error to some models.
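An illustrative emulator sketch (not Chemulator's architecture or training data): a small multi-layer perceptron is trained to map initial conditions to the state after one fixed time step of a stand-in "chemical model", and is then iterated, which is how such an emulator would be embedded in a dynamical simulation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Toy emulator sketch: inputs are (log n_H, log T, log G0); outputs are the state
# after one fixed time step of a fabricated stand-in for the full chemical solver.

rng = np.random.default_rng(6)
n_samples = 5000
X = rng.uniform([2.0, 1.0, -1.0], [6.0, 3.0, 4.0], size=(n_samples, 3))

def toy_chemical_model(X):
    """Stand-in for one time step of the full chemical/thermal solver (invented)."""
    log_T_new = 0.8 * X[:, 1] + 0.1 * X[:, 2] - 0.05 * X[:, 0]
    log_xCO = -4.0 + 0.3 * X[:, 0] - 0.2 * X[:, 2]
    return np.column_stack([log_T_new, log_xCO])

Y = toy_chemical_model(X)

xs, ys = StandardScaler().fit(X), StandardScaler().fit(Y)
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
emulator.fit(xs.transform(X), ys.transform(Y))

# Iterate the emulator by feeding its output back in (only the temperature here),
# mimicking how it would be called repeatedly inside a dynamical model.
x = X[:1].copy()
for _ in range(10):
    y = ys.inverse_transform(emulator.predict(xs.transform(x)))
    x[0, 1] = y[0, 0]            # update log T with the emulated value
print("emulated log T after 10 steps:", x[0, 1])
```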

Read this paper on arXiv…

J. Holdship, S. Viti, T. Haworth, et al.
Tue, 29 Jun 21
42/101

Comments: 16 pages, 12 figures, accepted for publication in A&A

Primordial non-Gaussianity from the Completed SDSS-IV extended Baryon Oscillation Spectroscopic Survey I: Catalogue Preparation and Systematic Mitigation [CEA]

http://arxiv.org/abs/2106.13724


We investigate the large-scale clustering of the final spectroscopic sample of quasars from the recently completed extended Baryon Oscillation Spectroscopic Survey (eBOSS). The sample contains $343708$ objects in the redshift range $0.8<z<2.2$ and $72667$ objects with redshifts $2.2<z<3.5$, covering an effective area of $4699~{\rm deg}^{2}$. We develop a neural network-based approach to mitigate spurious fluctuations in the density field caused by spatial variations in the quality of the imaging data used to select targets for follow-up spectroscopy. Simulations are used with the same angular and radial distributions as the real data to estimate covariance matrices, perform error analyses, and assess residual systematic uncertainties. We measure the mean density contrast and cross-correlations of the eBOSS quasars against maps of potential sources of imaging systematics to assess algorithm effectiveness, finding that the neural network-based approach outperforms standard linear regression. Stellar density is one of the most important sources of spurious fluctuations, and a new template constructed using data from the Gaia spacecraft provides the best match to the observed quasar clustering. The end-product from this work is a new value-added quasar catalogue with improved weights to correct for nonlinear imaging systematic effects, which will be made public. Our quasar catalogue is used to measure the local-type primordial non-Gaussianity in our companion paper, Mueller et al. in preparation.

Read this paper on arXiv…

M. Rezaie, A. Ross, H. Seo, et al.
Mon, 28 Jun 21
12/51

Comments: 17 pages, 13 figures, 2 tables. Accepted for publication in MNRAS. For the associated code and value-added catalogs see this https URL and this https URL

The planar two-body problem for spheroids and disks [EPA]

http://arxiv.org/abs/2106.13558


We outline a new method suggested by Conway (2016) for solving the two-body problem for solid bodies of spheroidal or ellipsoidal shape. The method is based on integrating the gravitational potential of one body over the surface of the other body. When the gravitational potential can be analytically expressed (as for spheroids or ellipsoids), the gravitational force and mutual gravitational potential can be formulated as a surface integral instead of a volume integral, and solved numerically. If the two bodies are infinitely thin disks, the surface integral has an analytical solution. The method is exact as the force and mutual potential appear in closed-form expressions, and does not involve series expansions with subsequent truncation errors. In order to test the method, we solve the equations of motion in an inertial frame, and run simulations with two spheroids and two infinitely thin disks, restricted to torque-free planar motion. The resulting trajectories display precession patterns typical for non-Keplerian potentials. We follow the conservation of energy and orbital angular momentum, and also investigate how the spheroid model approaches the two cases where the surface integral can be solved analytically, i.e. for point masses and infinitely thin disks.

Read this paper on arXiv…

M. Wold and J. Conway
Mon, 28 Jun 21
48/51

Comments: 15 pages, 5 figures, accepted for publication in Celestial Mechanics and Dynamical Astronomy

Construction of explicit symplectic integrators in general relativity. IV. Kerr black holes [CL]

http://arxiv.org/abs/2106.12356


In previous papers, explicit symplectic integrators were designed for nonrotating black holes, such as a Schwarzschild black hole. However, they fail to work in the Kerr spacetime because not all variables are separable, or not all splitting parts have analytical solutions that are explicit functions of proper time. To cope with this difficulty, we introduce a time transformation function to the Hamiltonian of Kerr geometry so as to obtain a time-transformed Hamiltonian consisting of five splitting parts, whose analytical solutions are explicit functions of the new coordinate time. The chosen time transformation function can cause time steps to be adaptive, but it is mainly used to implement the desired splitting of the time-transformed Hamiltonian. In this manner, new explicit symplectic algorithms are easily available. Unlike Runge-Kutta integrators, the newly proposed algorithms exhibit good long-term behavior in the conservation of Hamiltonian quantities when appropriate fixed coordinate time steps are considered. They are more computationally efficient than mixed implicit and explicit symplectic algorithms of the same order and than extended-phase-space explicit symplectic-like methods. The proposed idea for the construction of explicit symplectic integrators is suitable not only for the Kerr metric but also for many other relativistic problems, such as a Kerr black hole immersed in a magnetic field, a Kerr-Newman black hole with an external magnetic field, axially symmetric core-shell systems, and five-dimensional black ring metrics.

Read this paper on arXiv…

X. Wu, Y. Wang, W. Sun, et al.
Thu, 24 Jun 21
25/54

Comments: 12 pages, 12 figures

Efficient Computation of $N$-point Correlation Functions in $D$ Dimensions [IMA]

http://arxiv.org/abs/2106.10278


We present efficient algorithms for computing the $N$-point correlation functions (NPCFs) of random fields in arbitrary $D$-dimensional homogeneous and isotropic spaces. Such statistics appear throughout the physical sciences, and provide a natural tool to describe a range of stochastic processes. Typically, NPCF estimators have $\mathcal{O}(n^N)$ complexity (for a data set containing $n$ particles); their application is thus computationally infeasible unless $N$ is small. By projecting onto a suitably-defined angular basis, we show that the estimators can be written in separable form, with complexity $\mathcal{O}(n^2)$, or $\mathcal{O}(n_{\rm g}\log n_{\rm g})$ if evaluated using a Fast Fourier Transform on a grid of size $n_{\rm g}$. Our decomposition is built upon the $D$-dimensional hyperspherical harmonics; these form a complete basis on the $(D-1)$-sphere and are intrinsically related to angular momentum operators. Concatenation of $(N-1)$ such harmonics gives states of definite combined angular momentum, forming a natural separable basis for the NPCF. In particular, isotropic correlation functions require only states with zero combined angular momentum. We provide explicit expressions for the NPCF estimators as applied to both discrete and gridded data, and discuss a number of applications within cosmology and fluid dynamics. The efficiency of such estimators will allow higher-order correlators to become a standard tool in the analysis of random fields.
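The separable idea can be sketched for the isotropic 3-point function in D = 3: around each primary, neighbours in each radial bin are projected onto spherical harmonics, and pairs of bins are then combined via a sum over m, so the cost scales as O(n²) rather than O(n³). The snippet below is a simplified, unnormalised toy version of such an estimator, not the paper's general D-dimensional algorithm.

```python
import numpy as np
from scipy.special import sph_harm

# Toy separable 3PCF estimator in D = 3: per primary, project neighbours onto
# spherical harmonics in radial bins, then combine bins via sum_m a_lm(r1) a*_lm(r2).
# Binning, edge effects and normalisation are deliberately simplified.

rng = np.random.default_rng(7)
pos = rng.uniform(0.0, 1.0, size=(300, 3))      # toy point set (periodicity ignored)
r_edges = np.linspace(0.05, 0.25, 5)            # radial bins
ells = range(0, 4)
nbins = r_edges.size - 1
zeta = np.zeros((len(ells), nbins, nbins))      # zeta_l(r1, r2), unnormalised

for i, p in enumerate(pos):
    d = np.delete(pos, i, axis=0) - p
    r = np.linalg.norm(d, axis=1)
    sel = (r > r_edges[0]) & (r < r_edges[-1])
    d, r = d[sel], r[sel]
    theta = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)   # azimuthal angle
    phi = np.arccos(np.clip(d[:, 2] / r, -1, 1))         # polar angle
    b = np.digitize(r, r_edges) - 1
    for li, ell in enumerate(ells):
        for m in range(-ell, ell + 1):
            ylm = sph_harm(m, ell, theta, phi)
            # a_lm per radial bin: sum of Y*_lm over neighbours in that bin
            alm = np.array([np.conj(ylm[b == k]).sum() for k in range(nbins)])
            zeta[li] += np.real(np.outer(alm, np.conj(alm))) * 4 * np.pi / (2 * ell + 1)

print("zeta_0 (unnormalised):\n", zeta[0])
```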

Read this paper on arXiv…

O. Philcox and Z. Slepian
Tue, 22 Jun 21
62/71

Comments: 12 pages, 2 figures, submitted to PNAS. Comments welcome!

hankl: A lightweight Python implementation of the FFTLog algorithm for Cosmology [IMA]

http://arxiv.org/abs/2106.06331


We introduce hankl, a lightweight Python implementation of the FFTLog algorithm for Cosmology. The FFTLog algorithm is an extension of the Fast Fourier Transform (FFT) for logarithmically spaced periodic sequences. It can be used to efficiently compute Hankel transformations, which are paramount for many modern cosmological analyses that are based on the power spectrum or the 2-point correlation function multipoles. The code is well-tested, open source, and publicly available.
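For orientation, the transform that FFTLog accelerates is a Hankel-type integral such as the power-spectrum-to-correlation-function monopole; the brute-force O(N²) quadrature below (with a toy P(k)) shows what a logarithmic FFT replaces with an O(N log N) operation on log-spaced samples.

```python
import numpy as np

# Direct quadrature of the monopole correlation function,
#   xi_0(r) = (1 / 2 pi^2) \int dk k^2 P(k) j_0(kr),
# on logarithmic grids. FFTLog evaluates the same Hankel-type integral in
# O(N log N) by treating the log-spaced samples as a periodic sequence.
# The toy P(k) below is a placeholder, not a real cosmological spectrum.

k = np.logspace(-3, 1.5, 4096)
P = 1e4 * (k / 0.05) / (1.0 + (k / 0.05) ** 2) ** 2       # toy broad-band spectrum

r = np.logspace(-1, 1.5, 128)
kr = np.outer(r, k)
j0 = np.sinc(kr / np.pi)                                   # j_0(x) = sin(x)/x
xi = np.trapz(k**2 * P * j0, k, axis=1) / (2.0 * np.pi**2)

print("xi(r=10) ~", np.interp(10.0, r, xi))
```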

Read this paper on arXiv…

M. Karamanis and F. Beutler
Mon, 14 Jun 21
32/58

Comments: 6 pages, 2 figures; Code available at this https URL

Solutions of the imploding shock problem in a medium with varying density [CL]

http://arxiv.org/abs/2106.04971


We consider the solutions of the Guderley problem, consisting of an imploding strong shock wave in an ideal gas with a power law initial density profile. The self-similar solutions, and specifically the similarity exponent which determines the behavior of the accelerating shock, are studied in detail, for cylindrical and spherical symmetries and for a wide range of the adiabatic index and the spatial density exponent. We then demonstrate how the analytic solutions can be reproduced in Lagrangian hydrodynamic codes, thus demonstrating their usefulness as a code validation and verification test problem.

Read this paper on arXiv…

I. Giron, S. Balberg and M. Krief
Thu, 10 Jun 21
26/77

Comments: N/A

Dark Matter as a Possible Solution to the Multiple Stellar Populations Problem in Globular Clusters [GA]

http://arxiv.org/abs/2106.03398


According to the classical view of globular clusters, the stars inside a globular cluster all formed from the same giant molecular cloud, so their chemical compositions should be the same. However, recent photometric and spectroscopic studies of globular clusters reveal the presence of more than one stellar population inside globular clusters. This finding challenges the classical view of globular clusters.
In this work, we investigated the possibility of solving the multiple stellar populations problem in globular clusters using dark matter assumptions. We showed that the presence of dark matter inside globular clusters changes the physical parameters (e.g. chemical composition, luminosity, temperature, age, etc.) of the stars inside them.
We assumed that dark matter is distributed non-uniformly inside globular clusters, which means that stars in high dark matter density environments (like the central regions of globular clusters) are more affected by the presence of dark matter. Using this assumption, we showed that stars in different locations of a globular cluster (corresponding to different dark matter densities) follow different evolutionary paths (e.g. on the Hertzsprung-Russell diagram). From this we infer that the presence of dark matter inside globular clusters can be the reason for the multiple stellar populations.

Read this paper on arXiv…

E. Hassani and S. Mousavi
Tue, 8 Jun 21
19/86

Comments: 6 Pages, 3 Figures, 1 Table, Submitted to MNRAS Journal

Comparison of Graphcore IPUs and Nvidia GPUs for cosmology applications [CL]

http://arxiv.org/abs/2106.02465


This paper presents the first investigation of the suitability and performance of Graphcore Intelligence Processing Units (IPUs) for deep learning applications in cosmology. It benchmarks an Nvidia V100 GPU against a Graphcore MK1 (GC2) IPU on three cosmological use cases: a classical deep neural network and a Bayesian neural network (BNN) for galaxy shape estimation, and a generative network for galaxy image production. The results suggest that IPUs could be a potential avenue to address the increasing computation needs in cosmology.

Read this paper on arXiv…

B. Arcelin
Mon, 7 Jun 21
8/52

Comments: 11 pages, 4 figures

A new broadening technique of numerically unresolved solar transition region and its effect on the spectroscopic synthesis using coronal approximation [SSA]

http://arxiv.org/abs/2106.00864


The transition region is a thin layer of the solar atmosphere that controls the energy loss from the solar corona. Large numbers of grid points are required to resolve this thin transition region fully in numerical modeling. In this study, we propose a new numerical treatment, called LTRAC, which can be easily extended to multi-dimensional domains. We have tested the proposed method using a one-dimensional hydrodynamic model of a coronal loop in an active region. The LTRAC method enables modeling of the transition region with a numerical grid size of 50–100 km, which is about 1000 times larger than the physically required value. We used the velocity differential emission measure to evaluate the possible effects on the optically thin emission. Lower temperature emissions were better reproduced by the LTRAC method than by previous methods. The Doppler shift and non-thermal width of the synthesized line emission agree with those from a high-resolution reference simulation within an error of several km/s above the formation temperature of $10^5$ K.

Read this paper on arXiv…

H. Iijima and S. Imada
Thu, 3 Jun 21
11/55

Comments: 17 pages, 10 figures, accepted for publication in ApJ

Machine-Learning Non-Conservative Dynamics for New-Physics Detection [CL]

http://arxiv.org/abs/2106.00026


Energy conservation is a basic physics principle, the breakdown of which often implies new physics. This paper presents a method for data-driven “new physics” discovery. Specifically, given a trajectory governed by unknown forces, our Neural New-Physics Detector (NNPhD) aims to detect new physics by decomposing the force field into conservative and non-conservative components, which are represented by a Lagrangian Neural Network (LNN) and a universal approximator network (UAN), respectively, trained to minimize the force recovery error plus a constant $\lambda$ times the magnitude of the predicted non-conservative force. We show that a phase transition occurs at $\lambda$=1, universally for arbitrary forces. We demonstrate that NNPhD successfully discovers new physics in toy numerical experiments, rediscovering friction (1493) from a damped double pendulum, Neptune from Uranus’ orbit (1846) and gravitational waves (2017) from an inspiraling orbit. We also show how NNPhD coupled with an integrator outperforms previous methods for predicting the future of a damped double pendulum.
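
The decomposition objective described above is easy to express in code. The following is a minimal PyTorch-style sketch of a loss that combines the force-recovery error with a $\lambda$-weighted penalty on the non-conservative component; the function name, tensor shapes, and weighting are illustrative assumptions, not the authors’ NNPhD implementation.

```python
import torch

def nnphd_style_loss(f_true, f_cons, f_noncons, lam):
    """Force-recovery error plus lam times the magnitude of the non-conservative part.
    Hypothetical helper illustrating the decomposition objective, not the NNPhD code."""
    recovery = torch.mean((f_true - (f_cons + f_noncons)) ** 2)
    penalty = torch.mean(torch.abs(f_noncons))
    return recovery + lam * penalty

# Toy check: if the true force is fully conservative, the loss is minimised by
# routing nothing through the non-conservative branch.
f_true = torch.randn(128, 2)
print(nnphd_style_loss(f_true, f_true, torch.zeros_like(f_true), lam=1.5))
```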

Read this paper on arXiv…

Z. Li, B. Wang, Q. Meng, et. al.
Wed, 2 Jun 21
43/48

Comments: 17 pages, 7 figs, 2 tables

Long-term dynamics of the solar system inner planets [EPA]

http://arxiv.org/abs/2105.14976


Although the discovery of the chaotic motion of the inner planets in the solar system dates back more than thirty years, the secular chaos of their orbits still calls for deeper analytical analysis. Apart from the high-dimensional structure of the motion, this is probably related to the lack of an adequately simple dynamical model. Here, we consider a new secular dynamics for the inner planets, with the aim of retaining a fundamental set of interactions responsible for their chaotic behaviour while remaining consistent with the predictions of the most precise orbital solutions currently available. We exploit the regularity in the secular motion of the outer planets to predetermine a quasi-periodic solution for their orbits. This reduces the secular phase space to the degrees of freedom dominated by the inner planets. On top of that, the smallness of the inner-planet masses and the absence of strong mean-motion resonances permit us to restrict ourselves to first-order secular averaging. The resulting dynamics can be integrated numerically in a very efficient way through Gauss’s method, while computer algebra allows for analytical inspection of planet interactions once the Hamiltonian is truncated at a given total degree in eccentricities and inclinations. The new model matches reference orbital solutions of the solar system very satisfactorily over timescales shorter than or comparable to the Lyapunov time. It correctly reproduces the maximum Lyapunov exponent of the inner system and the statistics of the high eccentricities of Mercury over the next five billion years. The destabilizing role of the $g_1-g_5$ secular resonance also arises. A numerical experiment, consisting of a thousand orbital solutions over one hundred billion years, reveals the essential properties of the stochastic process driving the destabilization of the inner solar system and clarifies its current metastable state.

Read this paper on arXiv…

F. Mogavero and J. Laskar
Tue, 1 Jun 21
19/72

Comments: 25 pages, 10 figures. Accepted for publication in Astronomy & Astrophysics

What Sustained Multi-Disciplinary Research Can Achieve: The Space Weather Modeling Framework [CL]

http://arxiv.org/abs/2105.13227


MHD-based global space weather models have mostly been developed and maintained at academic institutions. While the “free spirit” approach of academia enables the rapid emergence and testing of new ideas and methods, the lack of long-term stability and support makes this arrangement very challenging. This paper describes a successful example of a university-based group, the Center for Space Environment Modeling (CSEM) at the University of Michigan, that developed and maintained the Space Weather Modeling Framework (SWMF) and its core element, the BATS-R-US extended MHD code. It took a quarter of a century to develop this capability and to bring it to its present level of maturity, which makes it suitable for research use by the space physics community through the Community Coordinated Modeling Center (CCMC) as well as for operational use by the NOAA Space Weather Prediction Center (SWPC).

Read this paper on arXiv…

T. Gombosi, Y. Chen, A. Glocer, et. al.
Fri, 28 May 21
8/56

Comments: 105 pages, 36 figures, in press

Atom-in-jellium predictions of the shear modulus at high pressure [CL]

http://arxiv.org/abs/2105.12303


Atom-in-jellium calculations of the Einstein frequency in condensed matter and of the equation of state were used to predict the variation of the shear modulus from zero pressure to $\sim 10^7$ g/cm$^3$, for several elements relevant to white dwarf (WD) stars and other self-gravitating systems. This is by far the widest-ranging electronic structure calculation of the shear modulus reported to date, spanning from ambient conditions through the one-component plasma to extreme relativistic conditions. The predictions are based on a relationship between the Debye temperature and the shear modulus, which we assess to be accurate at the $\mathcal{O}(10\%)$ level; this is the first known use of atom-in-jellium theory to calculate a shear modulus. We assessed the overall accuracy of the method by comparing with experimental measurements and more detailed electronic structure calculations at lower pressures.

Read this paper on arXiv…

D. Swift, T. Lockard, S. Hamel, et. al.
Thu, 27 May 21
33/62

Comments: N/A

ENCORE: Estimating Galaxy $N$-point Correlation Functions in $\mathcal{O}(N_{\rm g}^2)$ Time [IMA]

http://arxiv.org/abs/2105.08722


We present a new algorithm for efficiently computing the $N$-point correlation functions (NPCFs) of a 3D density field for arbitrary $N$. This can be applied both to discrete galaxy surveys and to continuous fields. By expanding the statistics in a separable basis of isotropic functions built from spherical harmonics, the NPCFs can be estimated by counting pairs of particles in space, leading to an algorithm with complexity $\mathcal{O}(N_{\rm g}^2)$ for $N_{\rm g}$ particles, or $\mathcal{O}(N_\mathrm{FFT}\log N_\mathrm{FFT})$ when using a Fast Fourier Transform with $N_\mathrm{FFT}$ grid points. In practice, the rate-limiting step for $N>3$ will often be the summation of the histogrammed spherical harmonic coefficients, particularly if the number of bins is large; in this case, the algorithm scales linearly with $N_{\rm g}$. The approach is implemented in the ENCORE code, which can compute the 4PCF and 5PCF of a BOSS-like galaxy survey in $\sim$ 100 CPU-hours, including the corrections necessary for non-uniform survey geometries. We discuss the implementation in depth, along with its GPU acceleration, and provide a practical demonstration on realistic galaxy catalogs. Our approach can be straightforwardly applied to current and future datasets to unlock the potential of constraining cosmology from higher-point functions.
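
The pair-count-plus-harmonics idea underlying the estimator can be illustrated for the simplest non-trivial case, the 3PCF. The sketch below, which is not the ENCORE code, accumulates per-primary harmonic coefficients $a_{\ell m}$ in radial bins with an explicit $\mathcal{O}(N_{\rm g}^2)$ double loop; the array layout, binning, and units are illustrative assumptions.

```python
import numpy as np
from scipy.special import sph_harm

def harmonic_pair_counts(pos, weights, r_edges, ell_max):
    """Per-primary coefficients a_{l m}(bin) = sum_j w_j Y_{l m}^*(rhat_ij),
    accumulated with an O(N^2) double loop for clarity. Negative m are stored
    via wrap-around indexing along the m axis."""
    n_bins = len(r_edges) - 1
    alm = np.zeros((len(pos), ell_max + 1, 2 * ell_max + 1, n_bins), dtype=complex)
    for i, p in enumerate(pos):
        d = pos - p
        r = np.linalg.norm(d, axis=1)
        sel = (r >= r_edges[0]) & (r < r_edges[-1]) & (r > 0)
        polar = np.arccos(np.clip(d[sel, 2] / r[sel], -1.0, 1.0))
        azim = np.arctan2(d[sel, 1], d[sel, 0])
        bins = np.digitize(r[sel], r_edges) - 1
        for ell in range(ell_max + 1):
            for m in range(-ell, ell + 1):
                y = np.conj(sph_harm(m, ell, azim, polar))  # scipy order: (m, l, azimuth, polar)
                np.add.at(alm[i, ell, m], bins, weights[sel] * y)
    return alm

# Toy usage on random points in a periodic-box-sized volume (arbitrary units):
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 100.0, size=(200, 3))
alm = harmonic_pair_counts(pos, np.ones(len(pos)), r_edges=np.linspace(10, 50, 5), ell_max=2)
```

Isotropic 3PCF multipoles then follow by summing $a_{\ell m}(b_1)\,a^{*}_{\ell m}(b_2)$ over $m$ and over primaries; ENCORE generalizes this construction to higher $N$ and adds the survey-geometry corrections mentioned above.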

Read this paper on arXiv…

O. Philcox, Z. Slepian, J. Hou, et. al.
Thu, 20 May 21
56/56

Comments: 24 pages, 6 figures, submitted to MNRAS. Code available at this https URL

zeus: A Python implementation of Ensemble Slice Sampling for efficient Bayesian parameter inference [IMA]

http://arxiv.org/abs/2105.03468


We introduce zeus, a well-tested Python implementation of the Ensemble Slice Sampling (ESS) method for Bayesian parameter inference. ESS is a novel Markov chain Monte Carlo (MCMC) algorithm specifically designed to tackle the computational challenges posed by modern astronomical and cosmological analyses. In particular, the method requires no hand-tuning of any hyper-parameters, its performance is insensitive to linear correlations, and it can scale up to thousands of CPUs without any extra effort. Furthermore, its locally adaptive nature allows it to sample efficiently even when strong non-linear correlations are present. Lastly, the method achieves high performance even for strongly multimodal distributions in high dimensions. Compared to emcee, a popular MCMC sampler, zeus performs 9 and 29 times better in a cosmological and an exoplanet application, respectively.
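
A minimal usage sketch, assuming the package’s documented emcee-style interface (EnsembleSampler, run_mcmc, get_chain) and a toy Gaussian target; the walker count, step number, and burn-in are arbitrary choices, not a recommendation from the paper.

```python
import numpy as np
import zeus

ndim, nwalkers, nsteps = 5, 10, 1000

def log_prob(theta):
    """Toy target: standard multivariate normal."""
    return -0.5 * np.dot(theta, theta)

start = 1e-3 * np.random.randn(nwalkers, ndim)      # small ball around the origin
sampler = zeus.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(start, nsteps)
chain = sampler.get_chain(flat=True, discard=100)   # flattened post burn-in samples
print(chain.shape, chain.mean(axis=0))
```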

Read this paper on arXiv…

M. Karamanis, F. Beutler and J. Peacock
Tue, 11 May 21
8/93

Comments: 11 pages, 10 figures, 2 tables, submitted to MNRAS; Code available at this https URL

Revisiting high-order Taylor methods for astrodynamics and celestial mechanics [EPA]

http://arxiv.org/abs/2105.00800


We present heyoka, a new, modern and general-purpose implementation of Taylor’s integration method for the numerical solution of ordinary differential equations. Detailed numerical tests focused on difficult high-precision gravitational problems in astrodynamics and celestial mechanics show how our general-purpose integrator is competitive with and often superior to state-of-the-art specialised symplectic and non-symplectic integrators in both speed and accuracy. In particular, we show how Taylor methods are capable of satisfying Brouwer’s law for the conservation of energy in long-term integrations of planetary systems over billions of dynamical timescales. We also show how close encounters are modelled accurately during simulations of the formation of the Kirkwood gaps and of Apophis’ 2029 close encounter with the Earth (where heyoka surpasses the speed and accuracy of domain-specific methods). heyoka can be used from both C++ and Python, and it is publicly available as an open-source project.
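
A minimal sketch following heyoka’s documented Python quick-start interface (make_vars, taylor_adaptive, propagate_until), here for a simple pendulum rather than the gravitational problems studied in the paper; the parameters are illustrative.

```python
import heyoka as hy

# Dynamical variables of a pendulum: angle x and angular velocity v.
x, v = hy.make_vars("x", "v")

# Adaptive Taylor integrator for dx/dt = v, dv/dt = -9.8 * sin(x).
ta = hy.taylor_adaptive(
    sys=[(x, v), (v, -9.8 * hy.sin(x))],
    state=[0.05, 0.025],          # initial conditions
)

ta.propagate_until(10.0)          # integrate to t = 10 with adaptive Taylor steps
print(ta.state)                   # state vector [x(10), v(10)]
```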

Read this paper on arXiv…

F. Biscani and D. Izzo
Tue, 4 May 21
18/72

Comments: N/A

Analytic solutions of the nonlinear radiation diffusion equation with an instantaneous point source in non-homogeneous media [CL]

http://arxiv.org/abs/2104.08475


Analytic solutions to the nonlinear radiation diffusion equation with an instantaneous point source, for a non-homogeneous medium with a power-law spatial density profile, are presented. The solutions are a generalization of the well-known solutions for a homogeneous medium. It is shown that the solutions take various qualitatively different forms according to the value of the spatial exponent. These different forms are studied in detail for linear and nonlinear heat conduction. In addition, by inspecting the generalized solutions, we show that there exist values of the spatial exponent for which the conduction front has constant speed or even accelerates. Finally, the various solution forms are compared in detail to numerical simulations, and good agreement is achieved.

Read this paper on arXiv…

M. Krief
Tue, 20 Apr 2021
62/72

Comments: The following article has been accepted for publication in Physics of Fluids

SpaceHub: A high-performance gravity integration toolkit for few-body problems in astrophysics [SSA]

http://arxiv.org/abs/2104.06413


We present the open-source few-body gravity integration toolkit {\tt SpaceHub}. {\tt SpaceHub} offers a variety of algorithmic methods, including the unique algorithms AR-Radau, AR-Sym6, AR-ABITS and AR-chain$^+$, which we show outperform other methods in the literature and allow for fast, precise and accurate computations of few-body problems ranging from interacting black holes to planetary dynamics. We show that AR-Sym6 and AR-chain$^+$, with algorithmic regularization, the chain algorithm, active round-off error compensation and a symplectic kernel implementation, are the fastest and most accurate algorithms for treating black hole dynamics with extreme mass ratios, extreme eccentricities and very close encounters. AR-Radau, the first regularized Radau integrator with round-off error control down to 64-bit floating-point machine precision, is able to handle extremely eccentric orbits and close approaches in long-term integrations. AR-ABITS, a bit-efficient arbitrary-precision method, achieves any desired precision at lower CPU cost than other open-source arbitrary-precision few-body codes. With deep numerical and code optimization, these new algorithms in {\tt SpaceHub} prove superior to other popular high-precision few-body codes in performance, accuracy and speed.

Read this paper on arXiv…

Y. Wang, N. Leigh, B. Liu, et. al.
Thu, 15 Apr 2021
3/59

Comments: Submitted to MNRAS. Comments are welcome

A New Fast Monte Carlo Code for Solving Radiative Transfer Equations based on Neumann Solution [CL]

http://arxiv.org/abs/2104.07007


In this paper, we propose a new Monte Carlo radiative transport (MCRT) scheme based entirely on the Neumann series solution of the Fredholm integral equation. This scheme indicates that the essence of MCRT is the simultaneous calculation of the infinitely many multiple integrals in the Neumann solution. From this perspective we re-derive the MCRT procedure systematically; the main work amounts to choosing an associated probability distribution function (PDF) for a set of random variables and the corresponding unbiased estimators. One can then select a relatively optimal estimation procedure with lower variance from the infinite set of possible choices, such as term-by-term estimation. In this scheme, MCRT can be regarded as a pure problem of integral evaluation rather than the tracing of random-walking photons; keeping this in mind, one can avoid some subtle intuitive mistakes. In addition, the $\delta$-functions in these integrals can be eliminated in advance by integrating them out directly. This fact, together with optimally chosen random variables, can remarkably improve the Monte Carlo (MC) computational efficiency and accuracy, especially in systems with axial or spherical symmetry. An MCRT code, Lemon (Linear Integral Equations’ Monte Carlo Solver Based on the Neumann solution), has been developed entirely on the basis of this scheme. Finally, to validate Lemon, we reproduce a suite of test problems, mainly restricted to flat spacetime, and illustrate the corresponding results in detail.
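
The term-by-term Neumann-series estimation mentioned above can be illustrated with a one-dimensional toy Fredholm equation; the sketch below (not the Lemon code) samples random walks whose weights make each term of the series an unbiased contribution, with the kernel, absorption probability, and sample size chosen purely for illustration.

```python
import numpy as np

def neumann_mc(f, K, x0, n_chains=100_000, p_absorb=0.3, rng=np.random.default_rng(0)):
    """Estimate u(x0) for u(x) = f(x) + int_0^1 K(x, y) u(y) dy by Monte Carlo:
    each random walk samples y uniformly on [0, 1] and carries a multiplicative
    weight K(x, y) / (1 - p_absorb), so every term of the Neumann series is
    estimated without bias."""
    total = 0.0
    for _ in range(n_chains):
        x, w = x0, 1.0
        while True:
            total += w * f(x)             # contribution of the current series term
            if rng.random() < p_absorb:   # terminate the walk
                break
            y = rng.random()              # uniform proposal on [0, 1]
            w *= K(x, y) / (1.0 - p_absorb)
            x = y
    return total / n_chains

# Toy kernel with a known answer: K = 0.5, f = 1  ->  u = 1 / (1 - 0.5) = 2.
print(neumann_mc(lambda x: 1.0, lambda x, y: 0.5, x0=0.3))
```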

Read this paper on arXiv…

X. Yang, J. Wang and C. Yang
Thu, 15 Apr 2021
33/59

Comments: 37 pages, 28 figures. The code can be download from: this https URL (or this https URL) and this https URL Comments are welcome

JefiGPU: Jefimenko's Equations on GPU [CL]

http://arxiv.org/abs/2104.05426


We have implemented a GPU version of Jefimenko’s equations — JefiGPU. Given the proper distributions of the source terms $\rho$ (charge density) and $\mathbf{J}$ (current density) in the source volume, the algorithm gives the electromagnetic fields in the observational region (which need not overlap the vicinity of the sources). To verify the accuracy of the GPU implementation, we have compared the obtained results with the theoretical ones. Our results show that the deviations of the GPU results from the theoretical ones are around 5\%. Meanwhile, we have also compared the performance of the GPU implementation with a CPU version. The simulation results indicate that the GPU code is significantly faster than the CPU version. Finally, we have studied the parameter dependence of the execution time and memory consumption on one NVIDIA Tesla V100 card. Our code can be consistently coupled to RBG (Relativistic Boltzmann equations on GPUs) and many other GPU-based algorithms in physics.

Read this paper on arXiv…

J. Zhang, J. Chen, G. Peng, et. al.
Tue, 13 Apr 2021
44/93

Comments: 21 pages, 8 figures, 4 tables

Kepler's Goat Herd: An Exact Solution for Elliptical Orbit Evolution [CL]

http://arxiv.org/abs/2103.15829


A fundamental relation in celestial mechanics is Kepler’s equation, linking an orbit’s mean anomaly to its eccentric anomaly and eccentricity. Being transcendental, the equation cannot be directly solved for eccentric anomaly by conventional treatments; much work has been devoted to approximate methods. Here, we give an explicit integral solution, utilizing methods recently applied to the ‘geometric goat problem’ and to the dynamics of spherical collapse. The solution is given as a ratio of contour integrals; these can be efficiently computed via numerical integration for arbitrary eccentricities. The method is found to be highly accurate in practice, with our C++ implementation outperforming conventional root-finding and series approaches by a factor greater than two.
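
The flavour of the approach can be conveyed with a generic argument-principle version of the idea: if exactly one root of $f(E) = E - e\sin E - M$ lies inside a closed contour, the root equals a ratio of two contour integrals, which a uniform quadrature on a circle evaluates to high accuracy. The sketch below is an illustration under that assumption (moderate eccentricities), not the paper’s exact formulation or its C++ implementation.

```python
import numpy as np

def kepler_contour(M, e, radius=None, n_nodes=256):
    """Solve E - e*sin(E) = M via the argument principle: if the contour encloses
    exactly one root, E = (oint z f'(z)/f(z) dz) / (oint f'(z)/f(z) dz).
    Contour: a circle of radius slightly larger than e around M (the real root
    satisfies |E - M| <= e). For e close to 1 the circle may also enclose complex
    zeros, so this generic sketch is only intended for moderate eccentricities."""
    R = (e + 0.05) if radius is None else radius
    theta = 2.0 * np.pi * np.arange(n_nodes) / n_nodes
    z = M + R * np.exp(1j * theta)
    f = z - e * np.sin(z) - M
    fp = 1.0 - e * np.cos(z)
    dz = 1j * R * np.exp(1j * theta)          # dz/dtheta; uniform sum = trapezoid rule
    return (np.sum(z * fp / f * dz) / np.sum(fp / f * dz)).real

M, e = 1.3, 0.5
E = kepler_contour(M, e)
print(E, E - e * np.sin(E) - M)               # root and its (small) residual
```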

Read this paper on arXiv…

O. Philcox, J. Goodman and Z. Slepian
Wed, 31 Mar 2021
51/62

Comments: 7 pages, 5 figures, submitted to MNRAS. Code available at this https URL

The number of populated electronic configurations in a hot dense plasma [CL]

http://arxiv.org/abs/2103.07663


In hot dense plasmas of intermediate- or high-Z elements in local thermodynamic equilibrium, the number of electronic configurations contributing to key macroscopic quantities, such as the spectral opacity and the equation of state, can be enormous. In this work we present systematic methods for analyzing the number of relativistic electronic configurations in a plasma. While the combinatoric number of configurations can be huge even for mid-Z elements, the number of configurations with non-negligible population is much lower and depends strongly and non-trivially on temperature and density. We discuss two useful methods for estimating the number of populated configurations: (i) an exact calculation of the total combinatoric number of configurations within superconfigurations in a converged super-transition-array (STA) calculation, and (ii) an estimate of the multidimensional width of the probability distribution of electronic populations over bound shells, which is binomial if electron exchange and correlation effects are neglected. These methods are analyzed, and the mechanism that leads to the huge number of populated configurations is discussed in detail. Comprehensive average-atom finite-temperature density functional theory (DFT) calculations are performed over a wide range of temperature and density for several low-, mid- and high-Z plasmas. The effects of temperature and density on the number of populated configurations are discussed and explained.
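
The combinatoric count referred to in method (i) can be illustrated directly: the number of configurations with a given number of bound electrons is the coefficient of $x^{N}$ in the product over subshells of $(1 + x + \dots + x^{g_s})$. The snippet below is a textbook-style illustration with an assumed list of relativistic subshell degeneracies, not the STA machinery of the paper.

```python
def count_configurations(degeneracies, n_bound):
    """Number of electronic configurations = number of occupation vectors (q_1..q_S)
    with 0 <= q_s <= g_s and sum q_s = n_bound, i.e. the coefficient of x^n_bound in
    prod_s (1 + x + ... + x^{g_s}). Exact integer polynomial arithmetic."""
    poly = [1]
    for g in degeneracies:
        new = [0] * (len(poly) + g)
        for i, c in enumerate(poly):
            for q in range(g + 1):
                new[i + q] += c
        poly = new
    return poly[n_bound] if n_bound < len(poly) else 0

# Relativistic subshell degeneracies 2j+1 for shells n = 1..4 (illustrative list):
degs = [2, 2, 2, 4, 2, 2, 4, 4, 6, 2, 2, 4, 4, 6, 6, 8]
print(count_configurations(degs, n_bound=26))   # e.g. 26 bound electrons
```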

Read this paper on arXiv…

M. Krief
Tue, 16 Mar 21
59/92

Comments: N/A

Atom-in-jellium equations of state and melt curves in the white dwarf regime [SSA]

http://arxiv.org/abs/2103.03371


Atom-in-jellium calculations of the electron states, and perturbative calculations of the Einstein frequency, were used to construct equations of state (EOS) from around $10^{-5}$ to $10^7$ g/cm$^3$ and $10^{-4}$ to $10^{6}$ eV for elements relevant to white dwarf (WD) stars. This is the widest range reported for self-consistent electronic shell structure calculations. Elements with the same ratio of atomic weight to atomic number were predicted to asymptote to the same $T=0$ isotherm, suggesting that, contrary to recent studies of the crystallization of WDs, the amount of gravitational energy that could be released by separation of oxygen and carbon is small. A generalized Lindemann criterion based on the amplitude of the ion-thermal oscillations calculated using atom-in-jellium theory, previously used to extrapolate melt curves for metals, was found to reproduce previous thermodynamic studies of the melt curve of the one-component plasma with a choice of vibration amplitude consistent with low-pressure results. For elements for which low-pressure melting satisfies the same amplitude criterion, such as Al, this melt model thus gives a likely estimate of the melt curve over the full range of normal electronic matter; for the other elements, it provides a useful constraint on the melt locus.

Read this paper on arXiv…

D. Swift, T. Lockard, S. Hamel, et. al.
Mon, 8 Mar 21
17/65

Comments: N/A

Construction of explicit symplectic integrators in general relativity. II. Reissner-Nordstrom black holes [CL]

http://arxiv.org/abs/2103.02864


In a previous paper, second- and fourth-order explicit symplectic integrators were designed for a Hamiltonian of the Schwarzschild black hole. Following this work, we explore the construction of explicit symplectic integrators for a Hamiltonian of charged particles moving around a Reissner-Nordstrom black hole with an external magnetic field. Such explicit symplectic methods remain available when the Hamiltonian is separated into five independently integrable parts with analytical solutions as explicit functions of proper time. Numerical tests show that the proposed algorithms exhibit the desirable properties of long-term stability, precision, and efficiency for appropriate choices of step size. To demonstrate the applicability of one of the new algorithms, the effects of the black hole’s charge, the Coulomb part of the electromagnetic potential, and the magnetic parameter on the dynamical behavior are surveyed. In some circumstances, the extent of chaos, as seen from the global phase-space structure, grows with increasing magnetic parameter. The regular and chaotic dynamics of particle orbits is considerably more sensitive to variations of the Coulomb part than to variations of the black hole’s charge, and a positive Coulomb part induces chaos more easily than a negative one.

Read this paper on arXiv…

Y. Wang, W. Sun, F. Liu, et. al.
Fri, 5 Mar 21
44/64

Comments: 8 pages,20 figures

Modeling Anharmonic Infrared Spectra of Thermally Excited Pyrene (C$_{16}$H$_{10}$): the combined view of DFT AnharmoniCaOs and approximate DFT molecular dynamics [GA]

http://arxiv.org/abs/2102.06582


Aromatic Infrared Bands (AIBs) are a set of bright and ubiquitous emission bands observed in regions illuminated by stellar ultraviolet photons, from our galaxy all the way out to cosmological distances. The forthcoming James Webb Space Telescope will unveil unprecedented spatial and spectral details in the AIB spectrum; significant advancement is thus needed now to model the infrared emission of polycyclic aromatic hydrocarbons, their presumed carriers, with enough detail to exploit the information content of the AIBs. This requires including anharmonicity in such models, systematically for all species included, which demands a difficult compromise between accuracy and efficiency. We propose a new recipe using minimal assumptions on the general behaviour of band positions and widths with temperature, which can be defined by a small number of empirical parameters. We explore here the performance of a full quantum method, AnharmoniCaOs, relying on an ab initio potential, and of Molecular Dynamics simulations using a Density Functional based Tight Binding potential, to determine these parameters for the case of pyrene, for which high-temperature gas-phase data are available. The first is very accurate and detailed, but becomes computationally very expensive with increasing T; the second trades some accuracy for speed, making it suitable to provide approximate, general trends at high temperatures. We propose to use, for each species and band, the best available empirical parameters for a fast, yet sufficiently accurate spectral model of PAH emission properly including anharmonicity. Modelling accuracy will depend critically on these empirical parameters, allowing for incremental improvement of model results as better estimates gradually become available.

Read this paper on arXiv…

S. Chakraborty, G. Mulas, M. Rapacioli, et. al.
Mon, 15 Feb 21
28/53

Comments: submitted to the Journal of Molecular Spectroscopy

Cadmium Zinc Telluride Detectors for a Next-Generation Hard X-ray Telescope [IMA]

http://arxiv.org/abs/2102.03463


We are currently developing Cadmium Zinc Telluride (CZT) detectors for a next-generation space-borne hard X-ray telescope which can follow up on the highly successful NuSTAR (Nuclear Spectroscopic Telescope Array) mission. Since the launch of NuSTAR in 2012, there have been major advances in the area of X-ray mirrors, and state-of-the-art X-ray mirrors can improve on NuSTAR’s angular resolution of ~1 arcmin Half Power Diameter (HPD) to 15″ or even 5″ HPD. Consequently, the size of the detector pixels must be reduced to match this resolution. This paper presents detailed simulations of relatively thin (1 mm thick) CZT detectors with hexagonal pixels at a next-neighbor distance of 150 $\mu$m. The simulations account for the non-negligible spatial extent of the deposition of the energy of the incident photon, and include detailed modeling of the spreading of the free charge carriers as they move toward the detector electrodes. We discuss methods to reconstruct the energies of the incident photons, and the locations where the photons hit the detector. We show that the charge recorded in the brightest pixel and six adjacent pixels suffices to obtain excellent energy and spatial resolutions. The simulation results are being used to guide the design of a hybrid application-specific integrated circuit (ASIC)-CZT detector package.

Read this paper on arXiv…

J. Tang, F. Kislat and H. Krawczynski
Tue, 9 Feb 21
16/87

Comments: 13 pages, 11 figures. Accepted for publication in Astroparticle Physics

Magnetic field amplification by the Weibel instability at planetary and astrophysical high-Mach-number shocks [HEAP]

http://arxiv.org/abs/2102.04328


Collisionless shocks are ubiquitous in the Universe and are often associated with strong magnetic fields. Here we use large-scale particle-in-cell simulations of non-relativistic perpendicular shocks in the high-Mach-number regime to study the amplification of the magnetic field within shocks. The magnetic field is amplified at the shock transition due to the ion-ion two-stream Weibel instability. The normalized magnetic-field strength strongly correlates with the Alfvénic Mach number. Mock spacecraft measurements derived from the PIC simulations are fully consistent with those taken in situ at Saturn’s bow shock by the Cassini spacecraft.

Read this paper on arXiv…

A. Bohdan, M. Pohl, J. Niemiec, et. al.
Tue, 9 Feb 21
87/87

Comments: Accepted to PRL. 7 pages, 4 figure

Construction of Explicit Symplectic Integrators in General Relativity. I. Schwarzschild Black Holes [CL]

http://arxiv.org/abs/2102.00373


Symplectic integrators that preserve the geometric structure of Hamiltonian flows and do not exhibit secular growth in energy errors are suitable for the long-term integration of N-body Hamiltonian systems in the solar system. However, the construction of explicit symplectic integrators is frequently difficult in general relativity because all variables are inseparable. Moreover, even if two analytically integrable splitting parts exist in a relativistic Hamiltonian, their analytical solutions are not necessarily explicit functions of proper time. Naturally, implicit symplectic integrators, such as the midpoint rule, are applicable to this case; in general, however, they are numerically more expensive than explicit symplectic algorithms of the same order. To address this issue, we split the Hamiltonian of the Schwarzschild spacetime geometry into four integrable parts whose analytical solutions are explicit functions of proper time. In this manner, second- and fourth-order explicit symplectic integrators can be easily constructed. The new algorithms are also useful for modeling the chaotic motion of charged particles around a black hole with an external magnetic field. They demonstrate excellent long-term performance in maintaining bounded Hamiltonian errors and saving computational cost when appropriate proper time steps are adopted.
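
The splitting-and-composition idea described above can be sketched generically: once each sub-Hamiltonian has an exactly solvable flow, a symmetric composition of those flows is explicit, symplectic, and second-order accurate. The toy example below uses a harmonic oscillator split into drift and kick parts; it illustrates the technique only and does not reproduce the paper’s curved-spacetime sub-Hamiltonians.

```python
import numpy as np

def strang_step(state, h, flows):
    """One second-order step built from exactly solvable sub-flows phi_k(state, t):
    phi_1(h/2) ... phi_K(h/2) followed by phi_K(h/2) ... phi_1(h/2)."""
    for f in flows:
        state = f(state, h / 2)
    for f in reversed(flows):
        state = f(state, h / 2)
    return state

# Toy example: H = p^2/2 + q^2/2 split into a drift and a kick, i.e. leapfrog.
drift = lambda s, t: (s[0] + t * s[1], s[1])
kick = lambda s, t: (s[0], s[1] - t * s[0])

state = (1.0, 0.0)
for _ in range(1000):
    state = strang_step(state, 0.01, [drift, kick])
print(state)   # stays on the unit circle up to a bounded O(h^2) error, no secular drift
```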

Read this paper on arXiv…

Y. Wang, W. Sun, F. Liu, et. al.
Tue, 2 Feb 21
11/86

Comments: 10 pages,2 figures

Fuzzy Dark Matter and the 21cm Power Spectrum [CEA]

http://arxiv.org/abs/2101.07177


We model the 21cm power spectrum across the Cosmic Dawn and the Epoch of Reionization (EoR) in fuzzy dark matter (FDM) cosmologies. The suppression of small mass halos in FDM models leads to a delay in the onset redshift of these epochs relative to cold dark matter (CDM) scenarios. This strongly impacts the 21cm power spectrum and its redshift evolution. The 21cm power spectrum at a given stage of the EoR/Cosmic Dawn process is also modified: in general, the amplitude of 21cm fluctuations is boosted by the enhanced bias factor of galaxy hosting halos in FDM. We forecast the prospects for discriminating between CDM and FDM with upcoming power spectrum measurements from HERA, accounting for degeneracies between astrophysical parameters and dark matter properties. If FDM constitutes the entirety of the dark matter and the FDM particle mass is $10^{-21}$ eV, HERA can determine the mass to within 20 percent at 2-sigma confidence.

Read this paper on arXiv…

D. Jones, S. Palatnick, R. Chen, et. al.
Tue, 19 Jan 21
60/92

Comments: 15 pages, 12 figures

A New Method for Simulating Photoprocesses in Astrochemical Models [GA]

http://arxiv.org/abs/2101.01209


We propose a new model for treating solid-phase photoprocesses in interstellar ice analogues. In this approach, photoionization and photoexcitation are included in more detail, and the production of electronically excited (suprathermal) species is explicitly considered. In addition, we have included non-thermal, non-diffusive chemistry to account for the low temperatures characteristic of cold cores. As an initial test of our method, we have simulated two previous experimental studies involving the UV irradiation of pure solid O$_2$. In contrast to previous solid-state astrochemical model calculations, which have used gas-phase photoabsorption cross-sections, we employ solid-state cross-sections in our calculations. This approach allows the model to be tested against well-constrained experiments rather than poorly constrained gas-phase abundances in ISM regions. Our results indicate that the inclusion of non-thermal reactions and suprathermal species allows the reproduction of low-temperature solid-phase photoprocessing in analogues of interstellar ices within cold ($\sim$ 10 K) dense cores such as TMC-1.

Read this paper on arXiv…

E. Mullikin, H. Anderson, N. O’Hern, et. al.
Wed, 6 Jan 21
35/82

Comments: ApJ, accepted: 15 pages, 3 figures

Effects of latitude-dependent gravity wave source variations on the middle and upper atmosphere [CL]

http://arxiv.org/abs/2012.12829


Atmospheric gravity waves (GWs) are generated in the lower atmosphere by various weather phenomena. They propagate upward, carry energy and momentum to higher altitudes, and appreciably influence the general circulation upon depositing them in the middle and upper atmosphere. We use a three-dimensional first-principles general circulation model (GCM) with an implemented nonlinear whole-atmosphere GW parameterization to study the global climatology of wave activity and the effects produced at altitudes up to the upper thermosphere. The numerical experiments were guided by the GW momentum fluxes and temperature variances measured in 2010 by the SABER (Sounding of the Atmosphere using Broadband Emission Radiometry) instrument onboard NASA’s TIMED (Thermosphere Ionosphere Mesosphere Energetics Dynamics) satellite. This includes the latitudinal dependence and magnitude of GW activity in the lower stratosphere for the boreal summer season. The modeling results were compared to the SABER temperature and total absolute momentum flux, and to Upper Atmosphere Research Satellite (UARS) data in the mesosphere and lower thermosphere. Simulations suggest that, in order to reproduce the observed circulation and wave activity in the middle atmosphere, GW fluxes smaller than the measured ones have to be used at the source level in the lower atmosphere. This is because observations contain a broader spectrum of GWs, while parameterizations capture only the portion relevant to middle and upper atmosphere dynamics. Accounting for the latitudinal variations of the source appreciably improves the simulations.

Read this paper on arXiv…

E. Yiğit, A. Medvedev and M. Ern
Thu, 24 Dec 20
45/73

Comments: Submitted to Frontiers in Astronomy and Space Sciences. Research Topic: “Coupling Processes in Terrestrial and Planetary Atmospheres”

Dynamical evolution of star clusters with top-heavy IMF [GA]

http://arxiv.org/abs/2012.09195


Several observational and theoretical studies suggest that the initial mass function (IMF) slope for massive stars in globular clusters (GCs) depends on the initial cloud density and metallicity, such that the IMF becomes increasingly top-heavy with decreasing metallicity and increasing gas density of the forming object. Using N-body simulations of GCs that start with a top-heavy IMF and undergo early gas expulsion within a Milky Way-like potential, we show how such a cluster would evolve. By varying the degree of top-heaviness, we calculate the dissolution time and the minimum cluster mass needed for the cluster to survive after 12 Gyr of evolution.

Read this paper on arXiv…

H. Haghi, G. Safaei, A. Zonoozi, et. al.
Fri, 18 Dec 20
61/78

Comments: 4 pages, 2 figures, Accepted for publication in Proceedings of the International Astronomical Union

Interaction of large- and small-scale dynamos in isotropic turbulent flows from GPU-accelerated simulations [CL]

http://arxiv.org/abs/2012.08758


Magnetohydrodynamical (MHD) dynamos emerge in many different astrophysical situations where turbulence is present, but the interaction between large-scale (LSD) and small-scale dynamos (SSD) is not fully understood. We performed a systematic study of turbulent dynamos driven by isotropic forcing in isothermal MHD with a magnetic Prandtl number of unity, focusing on the exponential growth stage. Both helical and non-helical forcing were employed to separate the effects of the LSD and the SSD in a periodic domain. Reynolds numbers (Rm) up to $\approx 250$ were examined and multiple resolutions were used for convergence checks. We ran our simulations with the Astaroth code, designed to accelerate 3D stencil computations on graphics processing units (GPUs) and to employ multiple GPUs with peer-to-peer communication. We observed a speedup of $\approx 35$ in single-node performance compared to the widely used multi-CPU MHD solver Pencil Code. We estimated the growth rates both from the averaged magnetic fields and from their power spectra. At low Rm, LSD growth dominates, but at high Rm the SSD appears to dominate in both helically and non-helically forced cases. Pure SSD growth rates follow a logarithmic scaling as a function of Rm. Probability density functions of the magnetic field from the growth stage exhibit SSD behaviour in helically forced cases even at intermediate Rm. We estimated mean-field turbulence transport coefficients using closures such as the second-order correlation approximation (SOCA). They yield growth rates similar to the directly measured ones and provide evidence of $\alpha$ quenching. Our results are consistent with the SSD inhibiting the growth of the LSD at moderate Rm, while the dynamo growth is enhanced at higher Rm.

Read this paper on arXiv…

M. Väisälä, J. Pekkilä, M. Käpylä, et. al.
Thu, 17 Dec 20
45/85

Comments: 22 pages, 23 figures, 2 tables, Accepted for publication in the Astrophysical Journal

A fast semi-discrete optimal transport algorithm for a unique reconstruction of the early Universe [CEA]

http://arxiv.org/abs/2012.09074


We leverage powerful mathematical tools stemming from optimal transport theory and transform them into an efficient algorithm to reconstruct the fluctuations of the primordial density field, built on solving the Monge-Ampère-Kantorovich equation. Our algorithm computes the optimal transport between an initial uniform continuous density field, partitioned into Laguerre cells, and a final input set of discrete point masses, linking the early to the late Universe. While existing early universe reconstruction algorithms based on fully discrete combinatorial methods are limited to a few hundred thousand points, our algorithm scales up well beyond this limit, since it takes the form of a well-posed smooth convex optimization problem, solved using a Newton method. We run our algorithm on cosmological $N$-body simulations, from the AbacusCosmos suite, and reconstruct the initial positions of $\mathcal{O}(10^7)$ particles within a few hours with an off-the-shelf personal computer. We show that our method allows a unique, fast and precise recovery of subtle features of the initial power spectrum, such as the baryonic acoustic oscillations.

Read this paper on arXiv…

B. Lévy, R. Mohayaee and S. Hausegger
Thu, 17 Dec 20
62/85

Comments: 22 pages

The Lifetimes of Star Clusters Born with a Top-heavy IMF [GA]

http://arxiv.org/abs/2012.07095


Several observational and theoretical indications suggest that the initial mass function (IMF) becomes increasingly top-heavy (i.e., overabundant in high-mass stars with mass $m > 1M_{\odot}$) with decreasing metallicity and increasing gas density of the forming object. This affects the evolution of globular clusters (GCs) owing to the different mass-loss rates and the number of black holes formed. Previous numerical modeling of GCs usually assumed an invariant canonical IMF. Using the state-of-the-art $NBODY6$ code, we perform a comprehensive series of direct $N$-body simulations to study the evolution of star clusters that start with a top-heavy IMF and undergo early gas expulsion. Utilizing the embedded cluster mass-radius relation of Marks & Kroupa (2012) to initialize the models, and by varying the degree of top-heaviness, we calculate the minimum cluster mass needed for the cluster to survive longer than 12 Gyr. We document how the evolution of different characteristics of star clusters, such as the total mass, the final size, the density, the mass-to-light ratio, the population of stellar remnants, and the survival of GCs, is influenced by the degree of top-heaviness. We find that the lifetimes of clusters with different IMFs moving on the same orbit are proportional to the relaxation time to a power $x$ in the range 0.8 to 1. The observed correlation between concentration and mass-function slope in Galactic GCs is accounted for very well by models that start with a top-heavy IMF and undergo an early phase of rapid gas expulsion.

Read this paper on arXiv…

H. Haghi, G. Safaei, A. Zonoozi, et. al.
Tue, 15 Dec 20
23/136

Comments: 21 pages, 18 figures, 4 tables. Accepted for publication in ApJ

Integration of Few Body Celestial Systems Implementing Explicit Numerical Methods [CL]

http://arxiv.org/abs/2012.03479


The $N$-body problem is of historical significance because it was the first implementation of Newtonian dynamical laws for the description of our Solar System. Motivated by this, the goal of this project is to revisit the problem for small $N$ and obtain the trajectories of specific two-body and three-body configurations, as well as the planetary orbits of our Solar System, using a fourth-order explicit Runge-Kutta iterative method. We find adequate agreement between our results and planetary trajectory data available online.
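
For reference, a fourth-order Runge-Kutta step applied to a two-body orbit in heliocentric units (AU, yr, $GM_\odot = 4\pi^2$) looks as follows; this is a minimal illustration in the spirit of the method named above, not the authors’ code, and the step size and units are illustrative choices.

```python
import numpy as np

def deriv(y, mu=4 * np.pi**2):
    """Planar two-body problem in AU/yr units (mu = G*M_sun): y = [x, y, vx, vy]."""
    r = np.hypot(y[0], y[1])
    return np.array([y[2], y[3], -mu * y[0] / r**3, -mu * y[1] / r**3])

def rk4_step(y, h):
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * h * k1)
    k3 = deriv(y + 0.5 * h * k2)
    k4 = deriv(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Earth-like circular orbit: r = 1 AU, v = 2*pi AU/yr, integrated for one year.
y = np.array([1.0, 0.0, 0.0, 2 * np.pi])
h, n = 1e-3, 1000
for _ in range(n):
    y = rk4_step(y, h)
print(y)   # should return close to the initial state after one period
```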

Read this paper on arXiv…

A. Mavrakis and K. Kritos
Tue, 8 Dec 20
46/73

Comments: 15 pages, 7 figures

Evolution and Mass Loss of Cool Ageing Stars: a Daedalean Story [SSA]

http://arxiv.org/abs/2011.13472


The chemical enrichment of the Universe; the mass spectrum of planetary nebulae, white dwarfs and gravitational wave progenitors; the frequency distribution of Type I and II supernovae; the fate of exoplanets … a multitude of phenomena that are highly regulated by the amounts of mass that stars expel through a powerful wind. For more than half a century, these winds of cool ageing stars have been interpreted within the common interpretive framework of 1-dimensional (1D) models. I here discuss how that framework now appears to be highly problematic.
* Current 1D mass-loss rate formulae differ by orders of magnitude, rendering contemporary stellar evolution predictions highly uncertain.
These stellar winds harbour 3D complexities which bridge 23 orders of magnitude in scale, ranging from the nanometer up to thousands of astronomical units. We need to embrace and understand these 3D spatial realities if we aim to quantify mass loss and assess its effect on stellar evolution. We therefore need to gauge
* the 3D life of molecules and solid-state aggregates: the gas-phase clusters that form the first dust seeds are not yet identified. This limits our ability to predict mass-loss rates using a self-consistent approach.
* the emergence of 3D clumps: they contribute in a non-negligible way to the mass loss, although they seem of limited importance for the wind-driving mechanism.
* the 3D lasting impact of a (hidden) companion: unrecognised binary interaction has biased previous mass-loss rate estimates towards values that are too large.
Only then will it be possible to drastically improve our predictive power of the evolutionary path in 4D (classical) spacetime of any star.

Read this paper on arXiv…

L. Decin
Mon, 30 Nov 20
85/117

Comments: preprint of invited review article to be published in the Annual Review of Astronomy and Astrophysics (2021) – 71 pages, Main Paper 58 pages, Supplemental Material 13 pages, 10 figures of which Figure 8 is also a movie. Movie does not display properly in all pdf file readers; therefore the movie has been uploaded separately or can be obtained via request to the author

FROST: a momentum-conserving CUDA implementation of a hierarchical fourth-order forward symplectic integrator [IMA]

http://arxiv.org/abs/2011.14984


We present a novel hierarchical formulation of the fourth-order forward symplectic integrator and its numerical implementation in the GPU-accelerated direct-summation N-body code FROST. The new integrator is especially suitable for simulations with a large dynamical range due to its hierarchical nature. The strictly positive integrator sub-steps in a fourth-order symplectic integrator are made possible by computing an additional gradient term in addition to the Newtonian accelerations. All force calculations and kick operations are synchronous so the integration algorithm is manifestly momentum-conserving. We also employ a time-step symmetrisation procedure to approximately restore the time-reversibility with adaptive individual time-steps. We demonstrate in a series of binary, few-body and million-body simulations that FROST conserves energy to a level of $|\Delta E / E| \sim 10^{-10}$ while errors in linear and angular momentum are practically negligible. For typical star cluster simulations, we find that FROST scales well up to $N_\mathrm{GPU}^\mathrm{max}\sim 4\times N/10^5$ GPUs, making direct summation N-body simulations beyond $N=10^6$ particles possible on systems with several hundred and more GPUs. Due to the nature of hierarchical integration the inclusion of a Kepler solver or a regularised integrator with post-Newtonian corrections for close encounters and binaries in the code is straightforward.

Read this paper on arXiv…

A. Rantala, T. Naab and V. Springel
Tue, 1 Dec 20
24/108

Comments: 17 pages, 7 figures. Submitted to MNRAS

First optical reconstruction of dust in the region of SNR RX~J1713.7-3946 from astrometric Gaia data [HEAP]

http://arxiv.org/abs/2011.14383


The origin of the radiation observed in the region of the supernova remnant (SNR) RX$\,$J1713.7-3946, one of the brightest TeV emitters, has been debated since its discovery. The existence of atomic and molecular clouds in this object supports the idea that part of the GeV gamma rays in this region originate from proton-proton collisions. However, the observed column density of gas cannot explain the whole emission. Here we present the results of a novel technique that uses ESA/Gaia DR2 data to reveal faint gas and dust structures in the region of RX$\,$J1713.7-3946 by making use of both astrometric and photometric data. These new structures could be an additional target for cosmic-ray protons from the SNR. Our distance-resolved reconstruction of dust extinction towards the SNR indicates the presence of only one faint structure in the vicinity of RX$\,$J1713.7-3946. Considering that the SNR is located in a dusty environment, we set the most precise constraint on the SNR distance to date, at ($1.12 \pm 0.01$) kpc.

Read this paper on arXiv…

R. Leike, S. Celli, A. Krone-Martins, et. al.
Tue, 1 Dec 20
25/108

Comments: N/A

Ultra-fast model emulation with PRISM; analyzing the Meraxes galaxy formation model [IMA]

http://arxiv.org/abs/2011.14530


We demonstrate the potential of an emulator-based approach to analyzing galaxy formation models in the domain where constraining data is limited. We have applied the open-source Python package PRISM to the galaxy formation model Meraxes. Meraxes is a semi-analytic model, purposefully built to study the growth of galaxies during the Epoch of Reionization (EoR). Constraining such models is however complicated by the scarcity of observational data in the EoR. PRISM’s ability to rapidly construct accurate approximations of complex scientific models using minimal data is therefore key to performing this analysis well.
This paper provides an overview of our analysis of Meraxes using measurements of galaxy stellar mass densities; luminosity functions; and color-magnitude relations. We demonstrate the power of using PRISM instead of a full Bayesian analysis when dealing with highly correlated model parameters and a scarce set of observational data. Our results show that the various observational data sets constrain Meraxes differently and do not necessarily agree with each other, signifying the importance of using multiple observational data types when constraining such models. Furthermore, we show that PRISM can detect when model parameters are too correlated or cannot be constrained effectively. We conclude that a mixture of different observational data types, even when they are scarce or inaccurate, is a priority for understanding galaxy formation and that emulation frameworks like PRISM can guide the selection of such data.

Read this paper on arXiv…

E. Velden, A. Duffy, D. Croton, et. al.
Tue, 1 Dec 20
89/108

Comments: 19 pages, 120 figures, submitted to ApJS

Natively Periodic Fast Multipole Method: Approximating the Optimal Green Function [CL]

http://arxiv.org/abs/2011.07099


The Fast Multipole Method (FMM) obeys periodic boundary conditions “natively” if it uses a periodic Green function for computing the multipole expansion in the interaction zone of each FMM oct-tree node. One can define the “optimal” Green function for such a method, which results in the numerical solution converging to the equivalent Particle-Mesh solution in the limit of sufficiently high multipole order. A discrete functional equation for the optimal Green function can be derived, but it is not practically useful, as methods for its solution are not known. Instead, this paper presents an approximation to the optimal Green function that is accurate to better than $10^{-3}$ in the $L_{\rm max}$ norm and $10^{-4}$ in the $L_2$ norm for practically useful multipole counts. Such an approximately optimal Green function offers a practical way to implement FMM with periodic boundary conditions “natively”, without the need to compute lattice sums or to rely on hybrid FMM-PM approaches.

Read this paper on arXiv…

N. Gnedin
Tue, 17 Nov 20
6/83

Comments: Submitted to ApJ. Comments are welcome

Case study on the identification and classification of small-scale flow patterns in flaring active region [SSA]

http://arxiv.org/abs/2011.07634


We propose a novel methodology to identify flows in the solar atmosphere and classify their velocities as supersonic, subsonic, or sonic. The proposed methodology consists of three parts. First, an algorithm is applied to the Solar Dynamics Observatory (SDO) image data to locate and track flows, resulting in the trajectory of each flow over time. Thereafter, the differential emission measure inversion method is applied to six AIA channels along the trajectory of each flow in order to estimate its background temperature and sound speed. Finally, we classify each flow as supersonic, subsonic, or sonic by performing simultaneous hypothesis tests on whether the velocity bounds of the flow are larger than, smaller than, or equal to the background sound speed. The proposed methodology was applied to SDO image data in the 171 Å spectral line on 6 March 2012 from 12:22:00 to 12:35:00 and again on 9 March 2012 from 03:00:00 to 03:24:00. Eighteen plasma flows were detected, 11 of which were classified as supersonic, 3 as subsonic, and 3 as sonic at a $70\%$ level of significance. Of these, 2 flows cannot be strictly ascribed to one of the respective categories as they change from the subsonic state to supersonic and vice versa; we label them as a subclass of transonic flows. The proposed methodology provides an automatic and scalable solution for identifying small-scale flows and classifying their velocities as supersonic, subsonic, or sonic. We identified and classified small-scale flow patterns in flaring loops. The results show that the flows can be classified into four classes: subsonic, supersonic, transonic, and sonic. The detected flows from AIA images can be analyzed in combination with other high-resolution observational data, such as Hi-C 2.1 data, and be used for the development of theories of the formation of flow patterns.

Read this paper on arXiv…

E. Philishvi, B. Shergelashvili, S. Buitendag, et. al.
Tue, 17 Nov 20
17/83

Comments: 13 pages, 7 figures, Accepted for publication in A&A

Methanimine as a key precursor of imines in the interstellar medium: the case of propargylimine [GA]

http://arxiv.org/abs/2010.11651


A gas-phase formation route is proposed for the recently detected propargylimine molecule. In analogy to other imines, such as cyanomethanimine, the addition of a reactive radical (C$_2$H in the present case) to methanimine (CH$_2$NH) leads to reaction channels open also in the harsh conditions of the interstellar medium. Three possible isomers can be formed in the CH$_2$NH + C$_2$H reaction: Z- and E-propargylimine (Z-, E-PGIM) as well as N-ethynyl-methanimine (N-EMIM). For both PGIM species, the computed global rate coefficient is nearly constant in the 20-300 K temperature range, and of the order of 2-3 $\times$ 10$^{-10}$ cm$^3$ molecule$^{-1}$ s$^{-1}$, while that for N-EMIM is about two orders of magnitude smaller. Assuming equal destruction rates for the two isomers, these results imply an abundance ratio for PGIM of [Z]/[E] $\sim$ 1.5, which is only slightly underestimated with respect to the observational datum.

Read this paper on arXiv…

J. Lupi, C. Puzzarini and V. Barone
Fri, 23 Oct 20
63/67

Comments: 10 pages, 4 figures, 2 tables. Accepted in ApJL

Periodicity detection in AGN with the boosted tree method [GA]

http://arxiv.org/abs/2010.07978


We apply a machine learning algorithm called XGBoost to explore the periodicity of two radio sources: PKS~1921-293 (OV~236) and PKS~2200+420 (BL~Lac), using radio-frequency datasets obtained from the University of Michigan Radio Astronomy Observatory (UMRAO) at 4.8 GHz, 8.0 GHz, and 14.5 GHz between 1969 and 2012. We find that XGBoost makes it possible to apply a machine-learning-based methodology to radio datasets and to extract information with strategies quite different from those traditionally used to treat time series, obtaining periodicity through the classification of recurrent events. The results were compared with those of other methods that examined the same dataset and show good agreement with them.
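
As an illustration of casting periodicity detection as the classification of recurrent events with a boosted-tree model, the sketch below trains XGBoost’s scikit-learn-style classifier on lagged-flux features of a synthetic light curve; the feature construction, labels, and hyper-parameters are assumptions for demonstration, not the authors’ pipeline.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
t = np.arange(2000)
flux = np.sin(2 * np.pi * t / 150) + 0.3 * rng.standard_normal(t.size)  # toy periodic light curve

n_lags = 20
X = np.array([flux[i - n_lags:i] for i in range(n_lags, flux.size)])    # lagged-flux features
y = (flux[n_lags:] > 0.8).astype(int)                                   # "high state" label

split = int(0.7 * len(X))
clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
clf.fit(X[:split], y[:split])
print("held-out accuracy:", (clf.predict(X[split:]) == y[split:]).mean())
```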

Read this paper on arXiv…

S. Soltau and L. Botti
Mon, 19 Oct 20
39/44

Comments: Submitted to the Revista Mexicana de Astronom\’ia y Astrof\’isica in Aug 03, 2020. Accepted oct 15, 2020. Will appear in v. 57, n.1, April 2021

Extremely High-Order Convergence in Simulations of Relativistic Stars [CL]

http://arxiv.org/abs/2010.05126


We provide a road towards obtaining gravitational waveforms from inspiraling material binaries with an accuracy viable for third-generation gravitational wave detectors, without necessarily advancing computational hardware or massively-parallel software infrastructure. We demonstrate a proof-of-principle 1+1-dimensional numerical implementation that exhibits up to 7th-order convergence for highly dynamic barotropic stars in curved spacetime, and numerical errors up to 6 orders of magnitude smaller than a standard method. Aside from Runge’s phenomenon, there are no obvious fundamental obstacles to obtaining convergence of even higher order. The implementation uses a novel surface-tracking method, where the surface is evolved and high-order accurate boundary conditions are imposed there. Computational memory does not need to be allocated to fluid variables in the vacuum region of spacetime. We anticipate the application of this new method to full $3+1$-dimensional simulations of the inspiral phase of compact binary systems with at least one material body. The additional challenge of a deformable surface must be addressed in multiple spatial dimensions, but it is also an opportunity to input more precise surface tension physics.

Read this paper on arXiv…

J. Westernacher-Schneider
Tue, 13 Oct 20
71/97

Comments: Pdf ~1 Mb. 13 page body, 2 pages of appendices. Journal submission will be delayed to allow for feedback from readers. Video content available at this https URL

Modeling optical roughness and first-order scattering processes from OSIRIS-REx color images of the rough surface of asteroid (101955) Bennu [EPA]

http://arxiv.org/abs/2010.04032


The dark asteroid (101955) Bennu studied by NASA’s OSIRIS-REx mission has a boulder-rich and apparently dust-poor surface, providing a natural laboratory to investigate the role of single-scattering processes in rough particulate media. Our goal is to define optical roughness and other scattering parameters that may be useful for the laboratory preparation of sample analogs, interpretation of imaging data, and analysis of the sample that will be returned to Earth. We rely on a semi-numerical statistical model aided by digital terrain model (DTM) shadow ray-tracing to obtain scattering parameters at the smallest surface element allowed by the DTM (facets of $\sim$10 cm). Using a Markov Chain Monte Carlo technique, we solved the inversion problem on all four-band images of the OSIRIS-REx mission’s top four candidate sample sites, for which high-precision laser altimetry DTMs are available. We reconstructed the a posteriori probability distribution for each parameter and distinguished primary and secondary solutions. Through the photometric image correction, we found that a mixture of low and average roughness slopes best describes Bennu’s surface for phase angles up to $90^{\circ}$. We detected a low non-zero specular ratio, perhaps indicating exposed sub-centimeter mono-crystalline inclusions on the surface. We report an average roughness RMS slope of $27^{+1}_{-5}$ degrees, a specular ratio of $2.6_{-0.8}^{+0.1}\%$, an approximate single-scattering albedo of $4.64_{-0.09}^{+0.08}\%$ at 550 nm, and two solutions for the back-scattering asymmetry factor, $\xi^{(1)}=-0.360\pm0.030$ and $\xi^{(2)}=-0.444\pm0.020$, for all four sites together.

Read this paper on arXiv…

P. Hasselmann, S. Fornasier, M. Barucci, et. al.
Fri, 9 Oct 20
53/64

Comments: 15 pages, 11 figures

FANTASY: User-Friendly Symplectic Geodesic Integrator for Arbitrary Metrics with Automatic Differentiation [CL]

http://arxiv.org/abs/2010.02237


We present FANTASY, a user-friendly, open-source symplectic geodesic integrator written in Python. FANTASY is designed to work “out-of-the-box” and does not require anything from the user aside from the metric and the initial conditions for the geodesics. FANTASY efficiently computes derivatives up to machine precision using automatic differentiation, allowing the integration of geodesics in arbitrary space(times) without the need for the user to manually input Christoffel symbols or any other metric derivatives. Further, FANTASY utilizes a Hamiltonian integration scheme that doubles the phase space, where two copies of the particle phase space are evolved together. This technique allows for an integration scheme that is both explicit and symplectic, even when the Hamiltonian is not separable. FANTASY comes prebuilt with second and fourth order schemes, and is easily extendible to higher order schemes. As an example application, we apply FANTASY to numerically study orbits in the Kerr-Sen metric.
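
The phase-space-doubling trick can be sketched for a one-dimensional toy problem: two copies (q, p) and (Q, P) are evolved with cross-coupled, exactly solvable sub-flows plus a binding rotation of strength $\omega$, giving an explicit, symplectic, second-order step even though H is non-separable (following Tao 2016, on which this class of schemes is based). The Hamiltonian, $\omega$, and step size below are toy choices, and this is not FANTASY’s code.

```python
import numpy as np

# Toy non-separable Hamiltonian and its analytic gradients (1D):
H  = lambda q, p: 0.5 * (q**2 + 1) * (p**2 + 1)
Hq = lambda q, p: q * (p**2 + 1)       # dH/dq
Hp = lambda q, p: p * (q**2 + 1)       # dH/dp

def phiA(q, p, Q, P, d):               # exact flow of H(q, P): q, P frozen
    return q, p - d * Hq(q, P), Q + d * Hp(q, P), P

def phiB(q, p, Q, P, d):               # exact flow of H(Q, p): Q, p frozen
    return q + d * Hp(Q, p), p, Q, P - d * Hq(Q, p)

def phiC(q, p, Q, P, d, omega):        # exact flow of (omega/2) [(q-Q)^2 + (p-P)^2]
    c, s = np.cos(2 * omega * d), np.sin(2 * omega * d)
    u, v = q - Q, p - P                # difference variables rotate with frequency 2*omega
    u, v = c * u + s * v, -s * u + c * v
    sq, sp = q + Q, p + P              # sums are conserved
    return 0.5 * (sq + u), 0.5 * (sp + v), 0.5 * (sq - u), 0.5 * (sp - v)

def step(state, d, omega):
    """Second-order explicit symplectic step in the doubled phase space."""
    state = phiA(*state, d / 2)
    state = phiB(*state, d / 2)
    state = phiC(*state, d, omega)
    state = phiB(*state, d / 2)
    state = phiA(*state, d / 2)
    return state

state = (1.0, 0.0, 1.0, 0.0)           # the two phase-space copies start identical
for _ in range(10000):
    state = step(state, 1e-3, omega=20.0)
q, p, Q, P = state
print(abs(H(q, p) - H(1.0, 0.0)))      # energy error stays bounded, with no secular drift
```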

Read this paper on arXiv…

P. Christian and C. Chan
Wed, 7 Oct 20
76/76

Comments: N/A

Acoustic wave propagation through solar granulation: Validity of effective-medium theories, coda waves [SSA]

http://arxiv.org/abs/2010.01174


Context. The frequencies, lifetimes, and eigenfunctions of solar acoustic waves are affected by turbulent convection, which is random in space and in time. Since the correlation time of solar granulation and the periods of acoustic waves ($\sim$5 min) are similar, the medium in which the waves propagate cannot a priori be assumed to be time independent. Aims. We compare various effective-medium solutions with numerical solutions in order to identify the approximations that can be used in helioseismology. For the sake of simplicity, the medium is one dimensional. Methods. We consider the Keller approximation, the second-order Born approximation, and spatial homogenization to obtain theoretical values for the effective wave speed and attenuation (averaged over the realizations of the medium). Numerically, we computed the first and second statistical moments of the wave field over many thousands of realizations of the medium (finite-amplitude sound-speed perturbations are limited to a 30 Mm band and have a zero mean). Results. The effective wave speed is reduced for both the theories and the simulations. The attenuation of the coherent wave field and the wave speed are best described by the Keller theory. The numerical simulations reveal the presence of coda waves, trailing the coherent wave packet. These late arrival waves are due to multiple scattering and are easily seen in the second moment of the wave field. Conclusions. We find that the effective wave speed can be calculated, numerically and theoretically, using a single snapshot of the random medium (frozen medium); however, the attenuation is underestimated in the frozen medium compared to the time-dependent medium. Multiple scattering cannot be ignored when modeling acoustic wave propagation through solar granulation.

Read this paper on arXiv…

P. Poulier, D. Fournier, L. Gizon, et. al.
Tue, 6 Oct 20
84/85

Comments: 13 pages, 16 figures, to be published in A&A

Inference of neutrino flavor evolution through data assimilation and neural differential equations [HEAP]

http://arxiv.org/abs/2010.00695


The evolution of neutrino flavor in dense environments such as core-collapse supernovae and binary compact object mergers constitutes an important and unsolved problem. Its solution has potential implications for the dynamics and heavy-element nucleosynthesis in these environments. In this paper, we build upon recent work to explore inference-based techniques for estimation of model parameters and neutrino flavor evolution histories. We combine data assimilation, ordinary differential equation solvers, and neural networks to craft an inference approach tailored for non-linear dynamical systems. Using this architecture, and a simple two-neutrino, two-flavor model, we test various optimization algorithms with the help of four experimental setups. We find that employing this new architecture, together with evolutionary optimization algorithms, accurately captures flavor histories in the four experiments. This work provides more options for extending inference techniques to large numbers of neutrinos.

Read this paper on arXiv…

E. Rrapaj, A. Patwardhan, E. Armstrong, et. al.
Mon, 5 Oct 20
19/61

Comments: N/A

The role of energy in ballistic agglomeration [CL]

http://arxiv.org/abs/2010.01106


We study a ballistic agglomeration process in the reaction-controlled limit. Cluster densities obey an infinite set of Smoluchowski rate equations, with rates dependent on the average particle energy. The latter is the same for all cluster species in the reaction-controlled limit and obeys an equation depending on densities. We express the average energy through the total cluster density that allows us to reduce the governing equations to the standard Smoluchowski equations. We derive basic asymptotic behaviors and verify them numerically. We also apply our formalism to the agglomeration of dark matter.
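
To make the governing equations concrete, the following Python sketch integrates a truncated set of Smoluchowski coagulation equations, $\dot{n}_k = \tfrac{1}{2}\sum_{i+j=k} K_{ij} n_i n_j - n_k \sum_j K_{kj} n_j$, with a constant kernel. The energy-dependent rates of the ballistic, reaction-controlled problem are not reproduced; the kernel, truncation size, and initial condition are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

def smoluchowski_rhs(t, n, K):
    # dn_k/dt = (1/2) sum_{i+j=k} K_ij n_i n_j  -  n_k sum_j K_kj n_j,
    # truncated at kmax cluster masses; n[k] is the density of mass k+1.
    kmax = n.size
    gain = np.zeros(kmax)
    for k in range(1, kmax):
        i = np.arange(k)                     # partner masses i+1 and k-i sum to k+1
        gain[k] = 0.5 * np.sum(K[i, k - 1 - i] * n[i] * n[k - 1 - i])
    loss = n * (K @ n)
    return gain - loss

kmax = 64
K = np.ones((kmax, kmax))                    # constant kernel, purely illustrative
n0 = np.zeros(kmax); n0[0] = 1.0             # monomers only at t = 0
sol = solve_ivp(smoluchowski_rhs, (0.0, 10.0), n0, args=(K,), rtol=1e-8)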

Read this paper on arXiv…

N. Brilliantov, A. Osinsky and P. Krapivsky
Mon, 5 Oct 20
37/61

Comments: N/A

A Discontinuous Galerkin Method for General Relativistic Hydrodynamics in thornado [HEAP]

http://arxiv.org/abs/2009.13025


Discontinuous Galerkin (DG) methods provide a means to obtain high-order accurate solutions in regions of smooth fluid flow while, with the aid of limiters, still resolving strong shocks. These and other properties make DG methods attractive for solving problems involving hydrodynamics; e.g., the core-collapse supernova problem. With that in mind we are developing a DG solver for the general relativistic, ideal hydrodynamics equations under a 3+1 decomposition of spacetime, assuming a conformally-flat approximation to general relativity. With the aid of limiters we verify the accuracy and robustness of our code with several difficult test-problems: a special relativistic Kelvin–Helmholtz instability problem, a two-dimensional special relativistic Riemann problem, and a one- and two-dimensional general relativistic standing accretion shock (SAS) problem. We find good agreement with published results, where available. We also establish sufficient resolution for the 1D SAS problem and find encouraging results regarding the standing accretion shock instability (SASI) in 2D.
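
As a small illustration of the limiting machinery such DG solvers rely on, here is a generic three-argument minmod function of the kind used in slope and moment limiters; it is a textbook building block, not thornado's actual limiter.

import numpy as np

def minmod(a, b, c):
    # Three-argument minmod: returns the smallest-magnitude argument if all
    # three share the same sign, and zero otherwise; used to limit DG slopes
    # or moments near shocks.
    s = (np.sign(a) + np.sign(b) + np.sign(c)) / 3.0
    mag = np.minimum(np.abs(a), np.minimum(np.abs(b), np.abs(c)))
    return np.where(np.abs(s) == 1.0, s * mag, 0.0)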

Read this paper on arXiv…

S. Dunham, E. Endeve, A. Mezzacappa, et. al.
Tue, 29 Sep 20
46/98

Comments: 14 pages, 6 figures

The Ecological Impact of High-performance Computing in Astrophysics [IMA]

http://arxiv.org/abs/2009.11295


The importance of computing in astronomy continues to increase, and so is its impact on the environment. When analyzing data or performing simulations, most researchers raise concerns about the time to reach a solution rather than its impact on the environment. Luckily, a reduced time-to-solution due to faster hardware or optimizations in the software generally also leads to a smaller carbon footprint. This is not the case when the reduced wall-clock time is achieved by overclocking the processor, or when using supercomputers.
The increase in the popularity of interpreted scripting languages, and the general availability of high-performance workstations form a considerable threat to the environment. A similar concern can be raised about the trend of running single-core instead of adopting efficient many-core programming paradigms.
In astronomy, computing is among the top producers of green-house gasses, surpassing telescope operations. Here I hope to raise the awareness of the environmental impact of running non-optimized code on overpowered computer hardware.

Read this paper on arXiv…

S. Zwart
Fri, 25 Sep 20
-1824/62

Comments: Originated at EAS 2020 conference, sustainability session by this https URL – published in Nature Astronomy, September 2020

Identifying magnetic reconnection in 2D Hybrid Vlasov Maxwell simulations with Convolutional Neural Networks [CL]

http://arxiv.org/abs/2008.09463


Magnetic reconnection is a fundamental process that quickly releases magnetic energy stored in a plasma. Identifying, from simulation outputs, where reconnection is taking place is non-trivial and, in general, has to be performed by human experts. Hence, it would be valuable if such an identification process could be automated. Here, we demonstrate that a machine learning algorithm can help to identify reconnection in 2D simulations of collisionless plasma turbulence. Using a Hybrid Vlasov Maxwell (HVM) model, a data set containing over 2000 potential reconnection events was generated and subsequently labeled by human experts. We test and compare two machine learning approaches with different configurations on this data set. The best results are obtained with a convolutional neural network (CNN) combined with an ‘image cropping’ step that zooms in on potential reconnection sites. With this method, more than 70% of reconnection events can be identified correctly. The importance of different physical variables is evaluated by studying how they affect the accuracy of predictions. Finally, we also discuss various possible causes for wrong predictions from the proposed model.
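
A minimal sketch of the kind of classifier described above: a small convolutional network (PyTorch is assumed) that takes cropped patches around candidate sites and outputs a reconnection/no-reconnection logit. The channel choice, network depth, and crop size are illustrative assumptions, not the configuration used in the paper.

import torch
import torch.nn as nn

class ReconnectionCNN(nn.Module):
    # Small CNN that classifies cropped patches around candidate sites as
    # reconnecting or not. The input channels stand for whatever physical
    # fields are fed to the network (assumed here: Bx, By, Jz).
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)   # single logit: P(reconnection)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def crop_around(field, i, j, half=16):
    # The 'image cropping' step: zoom in on a candidate site at pixel (i, j).
    return field[..., i - half:i + half, j - half:j + half]

# Example: a batch of 8 cropped 32x32 patches with 3 field channels.
model = ReconnectionCNN()
logits = model(torch.randn(8, 3, 32, 32))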

Read this paper on arXiv…

A. Hu, M. Sisti, F. Finelli, et. al.
Mon, 24 Aug 20
-1148/52

Comments: 16 pages, 9 figures and 5 tables

Solving Kepler's equation with CORDIC double iterations [IMA]

http://arxiv.org/abs/2008.02894


In a previous work, we developed the idea to solve Kepler’s equation with a CORDIC-like algorithm, which does not require any division but still needs multiplications in each iteration. Here we overcome this major shortcoming and solve Kepler’s equation using only bitshifts, additions, and one initial multiplication. We prescale the initial vector with the eccentricity and the scale correction factor. The rotation direction is decided without correction for the changing scale. We find that double CORDIC iterations are self-correcting and compensate possible wrong rotations in subsequent iterations. The algorithm needs 75\% more iterations and delivers the eccentric anomaly and its sine and cosine terms times the eccentricity. The algorithm can be adapted for the hyperbolic case, too. The new shift-and-add algorithm brings Kepler’s equation close to hardware and makes it possible to solve it with cheap and simple hardware components.
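
For orientation, the sketch below shows the basic CORDIC-like rotation idea for Kepler's equation $E - e\sin E = M$: halve the rotation angle each iteration and pick the rotation direction from the sign of the residual. It keeps the per-iteration multiplications of the earlier variant rather than the new shift-and-add double iterations, and assumes $0 \le M \le \pi$; it is only an illustration, not the authors' algorithm (the paper's own demo code is linked in the comments).

import numpy as np

def kepler_cordic_like(M, e, n=29):
    # Illustrative CORDIC-like bisection for E - e*sin(E) = M with 0 <= M <= pi:
    # rotate the vector (cos E, sin E) by angles pi/4, pi/8, ..., choosing the
    # direction from the sign of the residual. Unlike the shift-and-add scheme,
    # this variant still multiplies by cos/sin of the table angles.
    E, cE, sE = np.pi / 2, 0.0, 1.0
    for k in range(1, n + 1):
        a = np.pi / 2**(k + 1)
        ca, sa = np.cos(a), np.sin(a)      # in hardware: a precomputed table
        if E - e * sE > M:                 # E too large -> rotate backwards
            sa = -sa
        cE, sE = cE * ca - sE * sa, sE * ca + cE * sa
        E += a if sa > 0 else -a
    return E                               # eccentric anomaly (cE, sE also available)

# quick check: the residual should be tiny
E = kepler_cordic_like(2.0, 0.5)
assert abs(E - 0.5 * np.sin(E) - 2.0) < 1e-7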

Read this paper on arXiv…

M. Zechmeister
Mon, 10 Aug 20
-759/53

Comments: 10 pages, 8 figures. Accepted by MNRAS. Demo python code available at /anc/ke_cordic_dbl.py. Further variants and languages at this https URL

Quantifying the effect of cooled initial conditions on cosmic string network evolution [CEA]

http://arxiv.org/abs/2007.12008


Quantitative studies of the evolution and cosmological consequences of networks of cosmic strings (or other topological defects) require a combination of numerical simulations and analytic modeling with the velocity-dependent one-scale (VOS) model. In previous work, we demonstrated that a GPU-accelerated code for local Abelian-Higgs string networks enables a statistical separation of key dynamical processes affecting the evolution of the string networks and thus a precise calibration of the VOS model. Here we further exploit this code in a detailed study of two important aspects connecting the simulations with the VOS model. First, we study the sensitivity of the model calibration to the presence (or absence) of thermal oscillations due to high gradients in the initial conditions. This is relevant since in some Abelian-Higgs simulations described in the literature a period of artificial (unphysical) dissipation—usually known as cooling—is introduced with the goal of suppressing these oscillations and accelerating the convergence to scaling. We show that a small amount of cooling has no statistically significant impact on the VOS model calibration, while a longer dissipation period does have a noticeable effect. Second, in doing this analysis we also introduce an improved Markov Chain Monte Carlo based pipeline for calibrating the VOS model. Comparison to our previous bootstrap-based pipeline shows that the latter accurately determined the best-fit values of the VOS model parameters but underestimated the uncertainties in some of them. Overall, our analysis shows that the calibration pipeline is robust and can be applied to future, much larger field theory simulations.
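
A schematic of what an MCMC-based calibration pipeline of this kind can look like in Python, using emcee with a Gaussian likelihood that compares model predictions to measured scaling quantities. The parameter vector, the toy model, and the data values are placeholders; the actual VOS likelihood and calibration data are not reproduced here.

import numpy as np
import emcee

def log_prob(theta, xi_sim, v_sim, sigma_xi, sigma_v, model):
    # Gaussian log-likelihood comparing model predictions for the mean string
    # separation xi and RMS velocity v with simulation measurements.
    xi_th, v_th = model(theta)
    return -0.5 * (((xi_th - xi_sim) / sigma_xi) ** 2
                   + ((v_th - v_sim) / sigma_v) ** 2)

def toy_model(theta):
    # stand-in for the VOS prediction as a function of the model parameters
    return theta[0], theta[1]

data = dict(xi_sim=0.30, v_sim=0.55, sigma_xi=0.01, sigma_v=0.01, model=toy_model)

ndim, nwalkers = 2, 32
p0 = np.random.rand(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, kwargs=data)
sampler.run_mcmc(p0, 2000)
samples = sampler.get_chain(discard=500, flat=True)   # posterior draws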

Read this paper on arXiv…

J. Correia and C. Martins
Fri, 24 Jul 20
-515/53

Comments: 13 pages, 6 figures, 5 tables; Phys. Rev. D (in press)

Deep rotating convection generates the polar hexagon on Saturn [CL]

http://arxiv.org/abs/2007.08958


Numerous ground- and space-based observations have established that Saturn has a persistent hexagonal flow pattern near its north pole. While observations abound, the physics behind its formation is still uncertain. Although several phenomenological models have been able to reproduce this feature, a self-consistent model for how such a large-scale polygonal jet forms in the highly turbulent atmosphere of Saturn is lacking. Here we present a 3D fully-nonlinear anelastic simulation of deep thermal convection in the outer layers of gas giant planets which spontaneously generates giant polar cyclones, fierce alternating zonal flows, and a high-latitude eastward jet with a polygonal pattern. The analysis of the simulation suggests that self-organized turbulence in the form of giant vortices pinches the eastward jet, forming polygonal shapes. We argue that a similar mechanism is responsible for exciting Saturn’s hexagonal flow pattern.

Read this paper on arXiv…

R. Yadav and J. Bloxham
Mon, 20 Jul 20
-305/85

Comments: 11 pages, 4 main and 5 supplementary figures, 1 animation, 42 references

Beyond moments: relativistic Lattice-Boltzmann methods for radiative transport in computational astrophysics [CL]

http://arxiv.org/abs/2007.05718


We present a new method for the numerical solution of the radiative-transfer equation (RTE) in multidimensional scenarios commonly encountered in computational astrophysics. The method is based on the direct solution of the Boltzmann equation via an extension of the Lattice Boltzmann (LB) methods and allows us to model the evolution of the radiation field as it interacts with a background fluid, via absorption, emission, and scattering. As a first application of this method, we restrict our attention to a frequency independent (“grey”) formulation within a special-relativistic framework, which can be employed also for classical computational astrophysics. For a number of standard tests that consider the performance of the method in optically thin, optically thick and intermediate regimes with a static fluid, we show the ability of the LB method to produce accurate and convergent results matching the analytic solutions. We also contrast the LB method with commonly employed moment-based schemes for the solution of the RTE, such as the M1 scheme. In this way, we are able to highlight that the LB method provides the correct solution for both non-trivial free-streaming scenarios and the intermediate optical-depth regime, for which the M1 method either fails or provides inaccurate solutions. When coupling to a dynamical fluid, on the other hand, we present the first self-consistent solution of the RTE with LB methods within a relativistic-hydrodynamic scenario. Finally, we show that besides providing more accurate results in all regimes, the LB method features smaller or comparable computational costs compared to the M1 scheme. We conclude that LB methods represent a competitive and promising avenue to the solution of radiative transport, one of the most common and yet important problems in computational astrophysics.
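
To illustrate the streaming-and-collision structure that a lattice Boltzmann treatment of transport is built on, here is a toy two-direction ("two-stream") grey model in Python: intensities stream one cell per step and then relax toward their isotropic mean. This is a deliberately minimal sketch, not the relativistic, multi-direction scheme of the paper.

import numpy as np

def two_stream_step(I_plus, I_minus, kappa, dt):
    # Streaming: each intensity moves one cell along its direction (c*dt = dx).
    I_plus, I_minus = np.roll(I_plus, 1), np.roll(I_minus, -1)
    # Collision: relax toward the isotropic mean intensity at rate kappa,
    # mimicking scattering; the total radiation energy is conserved.
    J = 0.5 * (I_plus + I_minus)
    I_plus = I_plus + dt * kappa * (J - I_plus)
    I_minus = I_minus + dt * kappa * (J - I_minus)
    return I_plus, I_minus

# a pulse of right-going radiation gradually isotropizes as it propagates
I_plus = np.zeros(200); I_plus[90:110] = 1.0
I_minus = np.zeros(200)
for _ in range(100):
    I_plus, I_minus = two_stream_step(I_plus, I_minus, kappa=0.5, dt=0.5)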

Read this paper on arXiv…

L. Weih, A. Gabbana, D. Simeoni, et. al.
Tue, 14 Jul 20
-158/97

Comments: 23 pages, 16 figures, submitted to MNRAS

Dense output for highly oscillatory numerical solutions [CL]

http://arxiv.org/abs/2007.05013


We present a method to construct a continuous extension (otherwise known as dense output) for a numerical routine in the special case of the numerical solution being a scalar-valued function exhibiting rapid oscillations. Such cases call for numerical routines that make use of the known global behaviour of the solution, one example being methods using asymptotic expansions to forecast the solution at each step of the independent variable. An example is oscode, a numerical routine which uses the Wentzel-Kramers-Brillouin (WKB) approximation when the solution oscillates rapidly and otherwise behaves as a Runge-Kutta (RK) solver. Polynomial interpolation is not suitable for producing the solution at an arbitrary point mid-step, since efficient numerical methods based on the WKB approximation will step through multiple oscillations in a single step. Instead we construct the continuous solution by extending the numerical quadrature used in computing a WKB approximation of the solution with no additional evaluations of the differential equation or terms within, and provide an error estimate on this dense output. Finally, we draw attention to previous work on the continuous extension of Runge-Kutta formulae, and construct an extension to an RK method based on Gauss–Lobatto quadrature nodes, thus describing how to generate dense output from each of the methods underlying oscode.
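
The idea of WKB-based dense output can be illustrated with a short Python sketch: evaluate the phase $S(t) = \int_{t_0}^{t} \omega\,dt'$ by quadrature on the requested dense grid and assemble the leading-order WKB solution of $x'' + \omega(t)^2 x = 0$ there. This is a minimal stand-in for the idea, assuming a user-supplied, vectorized $\omega(t)$; it is not oscode's actual dense-output routine.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def wkb_dense_output(t_dense, omega, t0, x0, dx0):
    # Leading-order WKB solution of x'' + omega(t)^2 x = 0 evaluated on an
    # arbitrary dense grid (assumed to start at t0):
    #   x(t) ~ (A+ e^{+iS} + A- e^{-iS}) / sqrt(omega),
    # with phase S(t) = integral of omega from t0 to t done by quadrature.
    w = omega(t_dense)
    S = cumulative_trapezoid(w, t_dense, initial=0.0)
    w0 = float(omega(t0))
    # amplitudes matched to x(t0) = x0, x'(t0) = dx0 at leading WKB order
    Ap = 0.5 * np.sqrt(w0) * (x0 - 1j * dx0 / w0)
    Am = 0.5 * np.sqrt(w0) * (x0 + 1j * dx0 / w0)
    return (Ap * np.exp(1j * S) + Am * np.exp(-1j * S)) / np.sqrt(w)

# example: constant frequency, so the dense output reduces to cos(50 t)
t = np.linspace(0.0, 10.0, 2001)
x = wkb_dense_output(t, lambda tt: 50.0 + 0.0 * tt, 0.0, 1.0, 0.0)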

Read this paper on arXiv…

F. Agocs, M. Hobson, W. Handley, et. al.
Mon, 13 Jul 20
-129/64

Comments: 10 pages, 5 figures, 4 tables. Submitted to PRResearch

Sensitivity of stellar physics to the equation of state [CL]

http://arxiv.org/abs/2006.16208


The formation and evolution of stars depends on various physical aspects of stellar matter, including the equation of state (EOS) and transport properties. Although often dismissed as `ideal gas-like’ and therefore simple, states occurring in stellar matter are dense plasmas, and the EOS has not been established precisely. EOS constructed using multi-physics approaches found necessary for laboratory studies of warm dense matter give significant variations in stellar regimes, and vary from the EOS commonly used in simulations of the formation and evolution of stars. We have investigated the sensitivity of such simulations to variations in the EOS, for sun-like and low-mass stars, and found a strong sensitivity of the lifetime of the Sun and of the lower luminosity limit for red dwarfs. We also found a significant sensitivity in the lower mass limit for red dwarfs. Simulations of this type are also used for other purposes in astrophysics, including the interpretation of absolute magnitude as mass, the conversion of inferred mass distribution to the initial mass function using predicted lifetimes, simulations of star formation from nebulae, simulations of galactic evolution, and the baryon census used to bound the exotic contribution to dark matter. Although many of the sensitivities of stellar physics to the EOS are large, some of the inferred astrophysical quantities are also constrained by independent measurements, although the constraints may be indirect and non-trivial. However, it may be possible to use such measurements to constrain the EOS more than presently possible by established plasma theory.

Read this paper on arXiv…

D. Swift, T. Lockard, M. Bethkenhagen, et. al.
Tue, 30 Jun 20
64/86

Comments: N/A

1+1D implicit disk computations [CL]

http://arxiv.org/abs/2006.12939


We present an implicit numerical method to solve the time-dependent equations of radiation hydrodynamics (RHD) in axial symmetry assuming hydrostatic equilibrium perpendicular to the equatorial plane (1+1D) of a gaseous disk. The equations are formulated in conservative form on an adaptive grid and the corresponding fluxes are calculated by a spatially second-order advection scheme. Self-gravity of the disk is included by solving the Poisson equation. We test the resulting numerical method through comparison with a simplified analytical solution as well as through the long-term viscous evolution of a protoplanetary disk, in which matter is transported towards the central host star by viscosity and the disk depletes. The importance of the inner boundary conditions for the structural behaviour of disks is demonstrated with several examples.
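
As a generic illustration of implicit time stepping for stiff, diffusion-type disk evolution, the sketch below advances $u_t = D\,u_{xx}$ with backward Euler and a tridiagonal (banded) solve; the boundary rows pin the end values. The equation, boundary treatment, and grid are illustrative assumptions, not the authors' full 1+1D RHD scheme.

import numpy as np
from scipy.linalg import solve_banded

def implicit_diffusion_step(u, D, dx, dt):
    # Backward-Euler update for u_t = D u_xx: solve (I - dt*D*L) u_new = u
    # with a tridiagonal matrix in banded storage; the first and last rows are
    # identity rows, i.e. Dirichlet boundaries that keep the end values fixed.
    n = u.size
    r = D * dt / dx**2
    ab = np.zeros((3, n))
    ab[0, 2:] = -r               # super-diagonal (interior rows)
    ab[1, :] = 1.0 + 2.0 * r     # main diagonal
    ab[2, :n - 2] = -r           # sub-diagonal (interior rows)
    ab[1, 0] = ab[1, -1] = 1.0   # boundary rows: u_new = u at both ends
    ab[0, 1] = ab[2, n - 2] = 0.0
    return solve_banded((1, 1), ab, u)

# a Gaussian temperature bump diffusing on a fixed grid
x = np.linspace(0.0, 1.0, 201)
u = np.exp(-((x - 0.5) / 0.05)**2)
for _ in range(50):
    u = implicit_diffusion_step(u, D=1e-3, dx=x[1] - x[0], dt=0.1)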

Read this paper on arXiv…

F. Ragossnig, E. Dorfi, B. Ratschiner, et. al.
Wed, 24 Jun 20
38/77

Comments: for details check this https URL&jid=COMPHY&surname=Ragossnig

A single-step third-order temporal discretization with Jacobian-free and Hessian-free formulations for finite difference methods [CL]

http://arxiv.org/abs/2006.00096


Discrete updates of numerical partial differential equations (PDEs) rely on two branches of temporal integration. The first branch is the widely-adopted, traditionally popular approach of the method-of-lines (MOL) formulation, in which multi-stage Runge-Kutta (RK) methods have shown great success in solving ordinary differential equations (ODEs) at high-order accuracy. The clear separation between the temporal and the spatial discretizations of the governing PDEs makes the RK methods highly adaptable. In contrast, the second branch of formulation using the so-called Lax-Wendroff procedure escalates the use of tight couplings between the spatial and temporal derivatives to construct high-order approximations of temporal advancements in the Taylor series expansions. In the last two decades, modern numerical methods have explored the second route extensively and have proposed a set of computationally efficient single-stage, single-step high-order accurate algorithms. In this paper, we present an algorithmic extension of the method called the Picard integration formulation (PIF) that belongs to the second branch of the temporal updates. The extension presented in this paper furnishes ease of calculating the Jacobian and Hessian terms necessary for third-order accuracy in time.
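
The "second branch" described above can be illustrated with the classic Lax-Wendroff (Cauchy-Kowalevski) procedure for linear advection $u_t + a u_x = 0$: temporal derivatives are traded for spatial ones ($u_t = -a u_x$, $u_{tt} = a^2 u_{xx}$, $u_{ttt} = -a^3 u_{xxx}$) and inserted into a single-step Taylor update. The Python sketch below shows this third-order-in-time update with periodic central differences; it illustrates the procedure only, not the authors' Picard integration formulation.

import numpy as np

def lw3_step(u, a, dx, dt):
    # Single-step, third-order-in-time update for u_t + a u_x = 0 via the
    # Lax-Wendroff / Cauchy-Kowalevski procedure: u_t = -a u_x,
    # u_tt = a^2 u_xx, u_ttt = -a^3 u_xxx, inserted into a Taylor series.
    up1, um1 = np.roll(u, -1), np.roll(u, 1)
    up2, um2 = np.roll(u, -2), np.roll(u, 2)
    ux = (up1 - um1) / (2.0 * dx)                       # periodic central differences
    uxx = (up1 - 2.0 * u + um1) / dx**2
    uxxx = (up2 - 2.0 * up1 + 2.0 * um1 - um2) / (2.0 * dx**3)
    return (u - dt * a * ux + 0.5 * dt**2 * a**2 * uxx
            - dt**3 / 6.0 * a**3 * uxxx)

# advect a sine wave one full period on a periodic grid
N, a = 200, 1.0
x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.sin(2 * np.pi * x)
dt = 0.2 / (N * a)
for _ in range(int(round(1.0 / (a * dt)))):
    u = lw3_step(u, a, 1.0 / N, dt)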

Read this paper on arXiv…

Y. Lee and D. Lee
Tue, 2 Jun 20
82/90

Comments: N/A

Abelian-Higgs cosmic string evolution with multiple GPUs [CL]

http://arxiv.org/abs/2005.14454


Topological defects form at cosmological phase transitions by the Kibble mechanism. Cosmic strings and superstrings can lead to particularly interesting astrophysical and cosmological consequences, but this study is currently limited by the availability of accurate numerical simulations, which in turn is bottlenecked by hardware resources and computation time. Aiming to eliminate this bottleneck, in recent work we introduced and validated a GPU-accelerated evolution code for local Abelian-Higgs string networks. While this leads to significant gains in speed, it is still limited by the physical memory available on a graphical accelerator. Here we report on a further step towards our main goal, by implementing and validating a multiple-GPU extension of the earlier code, and further demonstrate its good scalability, both in terms of strong and weak scaling. An $8192^3$ production run, using 4096 GPUs, takes 40.4 minutes of wall-clock time on the Piz Daint supercomputer.

Read this paper on arXiv…

J. Correia and C. Martins
Mon, 1 Jun 20
39/50

Comments: 12 pages, 4 figures, 3 tables

A Deep Dive into the Distribution Function: Understanding Phase Space Dynamics with Continuum Vlasov-Maxwell Simulations [CL]

http://arxiv.org/abs/2005.13539


In collisionless and weakly collisional plasmas, the particle distribution function is a rich tapestry of the underlying physics. However, actually leveraging the particle distribution function to understand the dynamics of a weakly collisional plasma is challenging. The equation system of relevance, the Vlasov-Maxwell-Fokker-Planck (VM-FP) system of equations, is difficult to numerically integrate, and traditional methods such as the particle-in-cell method introduce counting noise into the distribution function.
In this thesis, we present a new algorithm for the discretization of VM-FP system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin (DG) finite element method for the spatial discretization and a third order strong-stability preserving Runge-Kutta for the time discretization, we obtain an accurate solution for the plasma’s distribution function in space and time.
We both prove that the numerical method retains key physical properties of the VM-FP system, such as the conservation of energy and the second law of thermodynamics, and demonstrate these properties numerically. These results are contextualized in the history of the DG method. We discuss the importance of the algorithm being alias-free, a necessary condition for deriving stable DG schemes of kinetic equations so as to retain the implicit conservation relations embedded in the particle distribution function, and the computationally favorable implementation using a modal, orthonormal basis in comparison to traditional DG methods applied in computational fluid dynamics. Finally, we demonstrate how the high-fidelity representation of the distribution function, combined with novel diagnostics, permits detailed analysis of the energization mechanisms in fundamental plasma processes such as collisionless shocks.
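
For readers unfamiliar with modal DG representations, the short Python sketch below projects a function onto an orthonormal Legendre basis on $[-1, 1]$ with Gauss-Legendre quadrature, which is the kind of modal expansion referred to above; it is a generic illustration, not code from the thesis.

import numpy as np

def legendre_modal_coeffs(f, order=3, nquad=8):
    # Project f onto the orthonormal Legendre basis phi_k = sqrt((2k+1)/2) P_k
    # on [-1, 1]: c_k = integral of f(x) phi_k(x) dx, evaluated with
    # Gauss-Legendre quadrature. This is the modal data a DG cell carries.
    x, w = np.polynomial.legendre.leggauss(nquad)
    coeffs = []
    for k in range(order + 1):
        phi_k = np.sqrt((2 * k + 1) / 2.0) * np.polynomial.legendre.Legendre.basis(k)(x)
        coeffs.append(np.sum(w * f(x) * phi_k))
    return np.array(coeffs)

# example: modal coefficients of a Maxwellian-like bump on the reference cell
c = legendre_modal_coeffs(lambda v: np.exp(-4.0 * v**2))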

Read this paper on arXiv…

J. Juno
Fri, 29 May 20
63/75

Comments: N/A

An arbitrary high-order Spectral Difference method for the induction equation [CL]

http://arxiv.org/abs/2005.13563


We study in this paper three variants of the high-order Discontinuous Galerkin (DG) method with Runge-Kutta (RK) time integration for the induction equation, analysing their ability to preserve the divergence free constraint of the magnetic field. To quantify divergence errors, we use a norm based on both a surface term, measuring global divergence errors, and a volume term, measuring local divergence errors. This leads us to design a new, arbitrary high-order numerical scheme for the induction equation in multiple space dimensions, based on a modification of the Spectral Difference (SD) method [1] with ADER time integration [2]. It appears as a natural extension of the Constrained Transport (CT) method. We show that it preserves $\nabla\cdot\vec{B}=0$ exactly by construction, both in a local and a global sense. We compare our new method to the 3 RKDG variants and show that the magnetic energy evolution and the solution maps of our new SD-ADER scheme are qualitatively similar to the RKDG variant with divergence cleaning, but without the need for an additional equation and an extra variable to control the divergence errors.
[1] Liu Y., Vinokur M., Wang Z.J. (2006) Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids. In: Groth C., Zingg D.W. (eds) Computational Fluid Dynamics 2004. Springer, Berlin, Heidelberg
[2] Dumbser M., Castro M., Parés C., Toro E.F. (2009) ADER schemes on unstructured meshes for nonconservative hyperbolic systems: Applications to geophysical flows. In: Computers & Fluids, Volume 38, Issue 9
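
Since the new scheme is presented as a natural extension of constrained transport, the following Python sketch shows the classic 2D CT update it generalizes: face-centered Bx and By are advanced from a corner-centered EMF Ez, and the cell-centered discrete divergence is preserved to machine precision by construction. The grid staggering conventions and random test data are illustrative assumptions.

import numpy as np

def ct_update(Bx, By, Ez, dt, dx, dy):
    # Constrained-transport update of the 2D induction equation on a periodic
    # staggered grid: Bx lives on x-faces, By on y-faces, Ez on cell corners.
    # Ez[i, j] sits at corner (i-1/2, j-1/2); np.roll(..., -1) shifts index +1.
    Bx_new = Bx - dt / dy * (np.roll(Ez, -1, axis=1) - Ez)
    By_new = By + dt / dx * (np.roll(Ez, -1, axis=0) - Ez)
    return Bx_new, By_new

def divergence(Bx, By, dx, dy):
    # Cell-centered discrete divergence for the face-staggered fields.
    return ((np.roll(Bx, -1, axis=0) - Bx) / dx
            + (np.roll(By, -1, axis=1) - By) / dy)

# quick check on random periodic data: the divergence is unchanged by the update
rng = np.random.default_rng(0)
Bx, By, Ez = rng.normal(size=(3, 32, 32))
d0 = divergence(Bx, By, 1.0, 1.0)
d1 = divergence(*ct_update(Bx, By, Ez, 0.1, 1.0, 1.0), 1.0, 1.0)
assert np.allclose(d0, d1)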

Read this paper on arXiv…

M. Veiga, D. Velasco-Romero, Q. Wenger, et. al.
Fri, 29 May 20
74/75

Comments: 26 pages

A Detailed Examination of Anisotropy and Timescales inThree-dimensional Incompressible Magnetohydrodynamic Turbulence [CL]

http://arxiv.org/abs/2005.08815


When magnetohydrodynamic turbulence evolves in the presence of a large-scale mean magnetic field, an anisotropy develops relative to that preferred direction. The well-known tendency is to develop stronger gradients perpendicular to the magnetic field, relative to the direction along the field. This anisotropy of the spectrum is deeply connected with anisotropy of estimated timescales for dynamical processes, and requires reconsideration of basic issues such as scale locality and spectral transfer. Here analysis of high-resolution three-dimensional simulations of unforced magnetohydrodynamic turbulence permits quantitative assessment of the behavior of theoretically relevant timescales in Fourier wavevector space. We discuss the distribution of nonlinear times, Alfvén times, and estimated spectral transfer rates. Attention is called to the potential significance of special regions of the spectrum, such as the two-dimensional limit and the “critical balance” region. A formulation of estimated spectral transfer in terms of a suppression factor supports a conclusion that the quasi two-dimensional fluctuations (characterized by strong nonlinearities) are not a singular limit, but may be in general expected to make important contributions.

Read this paper on arXiv…

R. Chhiber, W. Matthaeus, S. Oughton, et. al.
Tue, 19 May 20
35/92

Comments: In Press at Physics of Plasmas

The effects of density inhomogeneities on the radio wave emission in electron beam plasmas [CL]

http://arxiv.org/abs/2005.08876


Type III radio bursts are radio emissions associated with solar flares. They are considered to be caused by electron beams in the solar corona. Magnetic reconnection, a possible accelerator of electron beams in the course of solar flares, causes unstable distribution functions and density inhomogeneities. The properties of radio emission by electron beams in such environments are, however, still poorly understood. We capture the non-linear kinetic plasma processes of radio emission in such plasmas by utilizing fully kinetic Particle-In-Cell (PIC) numerical simulations. Our model takes into account initial velocity distribution functions as they are supposed to be created by magnetic reconnection. These velocity distribution functions allow two distinct mechanisms of radio wave emission: plasma emissions due to plasma-wave interactions and so-called electron cyclotron maser emissions (ECME) due to wave-particle interactions. Our most important finding is that the number of harmonics of Langmuir waves increases with the density inhomogeneities. The harmonics are generated by the interaction of beam-generated Langmuir waves and their harmonics. In addition, we also find evidence for transverse harmonic electromagnetic wave emissions due to a coalescence of beam-generated and fundamental Langmuir waves with a vanishing wavevector. We investigate the effects of density inhomogeneities on the conversion process of the free energy of the electron beams to electrostatic and electromagnetic waves and the frequency shift of electron resonances caused by perpendicular gradients in the beam velocity distribution function. Our findings explain the observation of Langmuir waves and their harmonics in solar radio bursts, as well as the frequency shifts seen in these emissions.

Read this paper on arXiv…

X. Yao, P. Muñoz and J. Büchner
Tue, 19 May 20
73/92

Comments: 37 pages, 5 tables, 17 figures

The Athena++ Adaptive Mesh Refinement Framework: Design and Magnetohydrodynamic Solvers [CL]

http://arxiv.org/abs/2005.06651


The design and implementation of a new framework for adaptive mesh refinement (AMR) calculations is described. It is intended primarily for applications in astrophysical fluid dynamics, but its flexible and modular design enables its use for a wide variety of physics. The framework works with both uniform and nonuniform grids in Cartesian and curvilinear coordinate systems. It adopts a dynamic execution model based on a simple design called a “task list” that improves parallel performance by overlapping communication and computation, simplifies the inclusion of a diverse range of physics, and even enables multiphysics models involving different physics in different regions of the calculation. We describe physics modules implemented in this framework for both non-relativistic and relativistic magnetohydrodynamics (MHD). These modules adopt mature and robust algorithms originally developed for the Athena MHD code and incorporate new extensions: support for curvilinear coordinates, higher-order time integrators, more realistic physics such as a general equation of state, and diffusion terms that can be integrated with super-time-stepping algorithms. The modules show excellent performance and scaling, with well over 80% parallel efficiency on over half a million threads. The source code has been made publicly available.

Read this paper on arXiv…

J. Stone, K. Tomida, C. White, et. al.
Fri, 15 May 20
53/65

Comments: 50 pages, 41 figures, accepted for publication in American Astronomical Society journals

A new class of discontinuous solar wind solutions [SSA]

http://arxiv.org/abs/2005.06426


A new class of one-dimensional solar wind models is developed within the general polytropic, single-fluid hydrodynamic framework. The particular case of quasi-adiabatic radial expansion with a localized heating source is considered. We consider analytical solutions with continuous Mach number over the entire radial domain while allowing for jumps in the flow velocity, density, and temperature, provided that there exists an external source of energy in the vicinity of the critical point which supports such jumps in physical quantities. This is substantially distinct from both the standard Parker solar wind model and the original nozzle solutions, where such discontinuous solutions are not permissible. We obtain novel sample analytic solutions of the governing equations corresponding to both slow and fast wind.

Read this paper on arXiv…

B. Shergelashvili, V. Melnik, G. Dididze, et. al.
Thu, 14 May 20
18/56

Comments: 13 pages, 3 figures, MNRAS, Accepted for publication

Nonlinear 3D Cosmic Web Simulation with Heavy-Tailed Generative Adversarial Networks [CEA]

http://arxiv.org/abs/2005.03050


Fast and accurate simulations of the non-linear evolution of the cosmic density field are a major component of many cosmological analyses, but the computational time and storage required to run them can be exceedingly large. For this reason, we use generative adversarial networks (GANs) to learn a compressed representation of the 3D matter density field that is fast and easy to sample, and for the first time show that GANs are capable of generating samples at the level of accuracy of other conventional methods. Using sub-volumes from a suite of GADGET-2 N-body simulations, we demonstrate that a deep-convolutional GAN can generate samples that capture both large- and small-scale features of the matter density field, as validated through a variety of n-point statistics. The use of a data scaling that preserves high-density features and a heavy-tailed latent space prior allows us to obtain state-of-the-art results for fast 3D cosmic web generation. In particular, the mean power spectra from generated samples agree to within 5% up to k=3 and within 10% for k<5 when compared with N-body simulations, and similar accuracy is obtained for a variety of bispectra. By modeling the latent space with a heavy-tailed prior rather than a standard Gaussian, we better capture sample variance in the high-density voxel PDF and reduce errors in power spectrum and bispectrum covariance on all scales. Furthermore, we show that a conditional GAN can smoothly interpolate between samples conditioned on redshift. Deep generative models, such as the ones described in this work, provide great promise as fast, low-memory, high-fidelity forward models of large-scale structure.
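
The heavy-tailed latent prior can be illustrated in a few lines of Python (PyTorch assumed): draw the GAN latent vectors from a Student-t distribution instead of a standard Gaussian before feeding them to the generator. The degrees-of-freedom value and dimensions are illustrative; the paper's exact prior is not reproduced here.

import torch

def sample_latent(batch_size, dim, df=3.0):
    # Heavy-tailed latent prior: draw GAN latent vectors from a Student-t
    # distribution instead of a standard Gaussian; df controls tail weight.
    return torch.distributions.StudentT(df).sample((batch_size, dim))

z = sample_latent(16, 128)   # would then be passed to the generator, G(z)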

Read this paper on arXiv…

R. Feder, P. Berger and G. Stein
Fri, 8 May 20
59/72

Comments: 18 pages, 17 figures

Introducing PyCross: PyCloudy Rendering Of Shape Software for pseudo 3D ionisation modelling of nebulae [IMA]

http://arxiv.org/abs/2005.02749


Research into the processes of photoionised nebulae plays a significant part in our understanding of stellar evolution. It is extremely difficult to visually represent or model ionised nebulae, requiring astronomers to employ sophisticated modelling codes to derive temperature, density and chemical composition. Existing codes often require steep learning curves and produce models derived from mathematical functions. In this article we introduce PyCross: PyCloudy Rendering Of Shape Software. This is a pseudo 3D modelling application that generates photoionisation models of optically thin nebulae, created using the Shape software. Currently PyCross has been used for novae and planetary nebulae, and it can be extended to Active Galactic Nuclei or any other type of photoionised axisymmetric nebulae. Functionality, an operational overview, and a scientific pipeline will be described with scenarios where PyCross has been adopted for novae (V5668 Sagittarii (2015) & V4362 Sagittarii (1994)) and a planetary nebula (LoTr1). Unlike the aforementioned photoionisation codes, this application does not require any coding experience, nor the need to derive complex mathematical models, instead utilising select features from Cloudy/PyCloudy and Shape. The software was developed using a formal software development lifecycle, written in Python, and will work without the need to install any development environments or additional python packages. This application, Shape models and PyCross archive examples are freely available to students, academics and the research community on GitHub for download (https://github.com/karolfitzgerald/PyCross_OSX_App).

Read this paper on arXiv…

K. Fitzgerald, E. Harvey, N. Keaveney, et. al.
Thu, 7 May 20
5/62

Comments: 15 pages, 12 figures