Provably convergent Newton-Raphson methods for recovering primitive variables with applications to physical-constraint-preserving Hermite WENO schemes for relativistic hydrodynamics [CL]

http://arxiv.org/abs/2305.14805


The relativistic hydrodynamics (RHD) equations have three crucial intrinsic physical constraints on the primitive variables: positivity of pressure and density, and subluminal fluid velocity. However, numerical simulations can violate these constraints, leading to nonphysical results or even simulation failure. Designing genuinely physical-constraint-preserving (PCP) schemes is very difficult, as the primitive variables cannot be explicitly reformulated using conservative variables due to relativistic effects. In this paper, we propose three efficient Newton–Raphson (NR) methods for robustly recovering primitive variables from conservative variables. Importantly, we rigorously prove that these NR methods are always convergent and PCP, meaning they preserve the physical constraints throughout the NR iterations. The discovery of these robust NR methods and their PCP convergence analyses are highly nontrivial and technical. As an application, we apply the proposed NR methods to design PCP finite volume Hermite weighted essentially non-oscillatory (HWENO) schemes for solving the RHD equations. Our PCP HWENO schemes incorporate high-order HWENO reconstruction, a PCP limiter, and strong-stability-preserving time discretization. We rigorously prove the PCP property of the fully discrete schemes using convex decomposition techniques. Moreover, we suggest the characteristic decomposition with rescaled eigenvectors and scale-invariant nonlinear weights to enhance the performance of the HWENO schemes in simulating large-scale RHD problems. Several demanding numerical tests are conducted to demonstrate the robustness, accuracy, and high resolution of the proposed PCP HWENO schemes and to validate the efficiency of our NR methods.
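For concreteness, the recovery problem can be sketched with the textbook Newton-Raphson pressure iteration for 1D special RHD with an ideal-gas EOS. This is a minimal illustration of the standard (not the paper's provably convergent PCP) approach; the initial guess, positivity safeguard, and finite-difference derivative are ad-hoc choices:

```python
import numpy as np

def recover_pressure(D, S, tau, gamma=5.0 / 3.0, tol=1e-12, max_iter=200):
    """Recover the pressure from conserved variables (D, S, tau) of 1D special
    RHD with an ideal-gas EOS, via Newton-Raphson on the standard residual
    f(p) = (gamma - 1) * rho * eps - p.  Textbook scheme, not the paper's
    provably convergent PCP variants."""
    def residual(p):
        v = S / (tau + D + p)                  # velocity (|v| < 1 if p admissible)
        W = 1.0 / np.sqrt(1.0 - v * v)         # Lorentz factor
        rho = D / W                            # rest-mass density
        eps = (tau + D + p) / (rho * W * W) - 1.0 - p / rho  # specific internal energy
        return (gamma - 1.0) * rho * eps - p

    # initial guess keeping the velocity subluminal: tau + D + p > |S|
    p = max(abs(S) - tau - D, 0.0) + 1e-8
    for _ in range(max_iter):
        f = residual(p)
        dp = 1e-8 * max(p, 1.0)
        dfdp = (residual(p + dp) - f) / dp     # finite-difference derivative
        p_new = p - f / dfdp
        if p_new <= 0.0:                       # crude positivity safeguard
            p_new = 0.5 * p
        if abs(p_new - p) <= tol * max(p_new, 1.0):
            return p_new
        p = p_new
    return p
```

For a state with rho = 1, v = 0.5, p = 1 and gamma = 5/3, the conserved variables are D = rho\*W, S = rho\*h\*W^2\*v, tau = rho\*h\*W^2 - p - D, and the solver recovers p to high accuracy.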

Read this paper on arXiv…

C. Cai, J. Qiu and K. Wu
Thu, 25 May 23
64/64

Comments: 49 pages

Analysis of Prospective Flight Schemes to Venus Accompanied by an Asteroid Flyby [EPA]

http://arxiv.org/abs/2305.08244


This paper addresses the problem of constructing a flight scheme to Venus in which the spacecraft, after a gravity-assist maneuver at the planet and transfer to a resonant orbit for a subsequent re-encounter with Venus, performs a flyby of a minor celestial body. From the NASA JPL catalogue, 117 candidate asteroids with diameters exceeding 1 km were selected. Flight trajectories satisfying the criteria of impulse-free flybys of both Venus and the asteroid, followed by landing on the surface of Venus, were found within the interval of launch dates from 2029 to 2050. The complete spacecraft trajectory from the Earth to Venus, including the Venus and asteroid flybys and the subsequent landing on the surface of Venus, was analyzed.

Read this paper on arXiv…

V. Zubko
Tue, 16 May 23
47/83

Comments: N/A

A well-balanced and exactly divergence-free staggered semi-implicit hybrid finite volume/finite element scheme for the incompressible MHD equations [CL]

http://arxiv.org/abs/2305.06497


We present a new divergence-free and well-balanced hybrid FV/FE scheme for the incompressible viscous and resistive MHD equations on unstructured mixed-element meshes in two and three space dimensions. The equations are split into subsystems. The pressure is defined on the vertices of the primary mesh, while the velocity field and the normal components of the magnetic field are defined on an edge-based/face-based dual mesh in two and three space dimensions, respectively. This allows us to account for the divergence-free conditions of the velocity field and of the magnetic field in a rather natural manner. The nonlinear convective and viscous terms are solved with the aid of an explicit FV scheme, while the magnetic field is evolved in a divergence-free manner via an explicit FV method based on a discrete form of the Stokes law on the edges/faces of each primary element. To achieve higher order of accuracy, a piecewise linear polynomial is reconstructed for the magnetic field, which is guaranteed to be divergence-free via a constrained $L^2$ projection. The pressure subsystem is solved implicitly with the aid of a classical continuous FE method on the vertices of the primary mesh. In order to maintain non-trivial stationary equilibrium solutions of the governing PDE system exactly, which are assumed to be known a priori, each step of the new algorithm takes the known equilibrium solution explicitly into account so that the method becomes exactly well-balanced. This paper includes a very thorough study of the lid-driven MHD cavity problem in the presence of different magnetic fields. We finally present long-time simulations of Soloviev equilibrium solutions in several simplified 3D tokamak configurations, even on very coarse unstructured meshes that, in general, do not need to be aligned with the magnetic field lines.

Read this paper on arXiv…

F. Fambri, E. Zampa, S. Busto, et. al.
Fri, 12 May 23
6/53

Comments: 57 pages, 33 figures, 13 tables, reference-data (supplementary electronic material) will be available after publication on the Journal web-page

SLEPLET: Slepian Scale-Discretised Wavelets in Python [CL]

http://arxiv.org/abs/2304.10680


Wavelets are widely used in various disciplines to analyse signals both in space and scale. Whilst many fields measure data on manifolds (e.g., the sphere), often data are only observed on a partial region of the manifold. Wavelets are a typical approach to data of this form, but the wavelet coefficients that overlap with the boundary become contaminated and must be removed for accurate analysis. Another approach is to estimate the region of missing data and to use existing whole-manifold methods for analysis. However, both approaches introduce uncertainty into any analysis. Slepian wavelets enable one to work directly with only the data present, thus avoiding the problems discussed above. Applications of Slepian wavelets to areas of research measuring data on the partial sphere include gravitational/magnetic fields in geodesy, ground-based measurements in astronomy, measurements of whole-planet properties in planetary science, geomagnetism of the Earth, and cosmic microwave background analyses.

Read this paper on arXiv…

P. Roddy
Mon, 24 Apr 23
26/41

Comments: 4 pages

Multi-scale CLEAN in hard X-ray solar imaging [IMA]

http://arxiv.org/abs/2303.16272


Multi-scale deconvolution is an ill-posed inverse problem in imaging, with applications ranging from microscopy, through medical imaging, to astronomical remote sensing. In the case of high-energy space telescopes, multi-scale deconvolution algorithms need to account for the peculiar property of native measurements, which are sparse samples of the Fourier transform of the incoming radiation. The present paper proposes a multi-scale version of CLEAN, which is the most popular iterative deconvolution method in Fourier space imaging. Using synthetic data generated according to a simulated but realistic source configuration, we show that this multi-scale version of CLEAN performs better than the original one in terms of accuracy, photometry, and regularization. Further, the application to a data set measured by the NASA Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) shows the ability of multi-scale CLEAN to reconstruct rather complex topographies, characteristic of a real flaring event.
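The single-scale baseline that the paper generalises is Hogbom's CLEAN loop. A minimal image-space sketch follows; note the actual RHESSI/STIX setting works with sparse Fourier-space visibilities, which this toy omits:

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=500, threshold=1e-3):
    """Classic single-scale Hogbom CLEAN: repeatedly locate the brightest pixel
    of the residual map and subtract a scaled, shifted copy of the PSF there,
    accumulating the removed flux as point-like clean components."""
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    ny, nx = residual.shape
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)   # PSF peak position
    for _ in range(niter):
        iy, ix = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[iy, ix]
        if abs(peak) < threshold:
            break
        components[iy, ix] += gain * peak
        for py in range(psf.shape[0]):                     # subtract shifted PSF
            for px in range(psf.shape[1]):
                yy, xx = iy + py - cy, ix + px - cx
                if 0 <= yy < ny and 0 <= xx < nx:
                    residual[yy, xx] -= gain * peak * psf[py, px]
    return components, residual
```

For a single point source observed with a known PSF, the loop recovers the source position and total flux; the multi-scale variant replaces the point-like components with extended basis shapes.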

Read this paper on arXiv…

A. Volpara, M. Piana and A. Massone
Thu, 30 Mar 23
31/66

Comments: N/A

A machine learning and feature engineering approach for the prediction of the uncontrolled re-entry of space objects [CL]

http://arxiv.org/abs/2303.10183


The continuously growing number of objects orbiting around the Earth is expected to be accompanied by an increasing frequency of objects re-entering the Earth’s atmosphere. Many of these re-entries will be uncontrolled, making their prediction challenging and subject to several uncertainties. Traditionally, re-entry predictions are based on the propagation of the object’s dynamics using state-of-the-art modelling techniques for the forces acting on the object. However, modelling errors, particularly those related to the prediction of atmospheric drag, may result in poor prediction accuracy. In this context, we explore the possibility of performing a paradigm shift from a physics-based approach to a data-driven approach. To this aim, we present the development of a deep learning model for the re-entry prediction of uncontrolled objects in Low Earth Orbit (LEO). The model is based on a modified version of the Sequence-to-Sequence architecture and is trained on the average altitude profile as derived from a set of Two-Line Element (TLE) data of over 400 bodies. The novelty of the work lies in introducing into the deep learning model, alongside the average altitude, three new input features: a drag-like coefficient (B*), the average solar index, and the area-to-mass ratio of the object. The developed model is tested on a set of objects studied in the Inter-Agency Space Debris Coordination Committee (IADC) campaigns. The results show that the best performances are obtained on bodies characterised by the same drag-like coefficient and eccentricity distribution as the training set.

Read this paper on arXiv…

F. Salmaso, M. Trisolini and C. Colombo
Tue, 21 Mar 23
43/68

Comments: N/A

Ejecta cloud distributions for the statistical analysis of impact cratering events onto asteroids' surfaces: a sensitivity analysis [EPA]

http://arxiv.org/abs/2301.04284


This work presents the model of an ejecta cloud distribution to characterise the plume generated by the impact of a projectile onto asteroid surfaces. A continuum distribution based on the combination of probability density functions is developed to describe the size, ejection speed, and ejection angles of the fragments. The ejecta distribution is used to statistically analyse the fate of the ejecta. By combining the ejecta distribution with a space-filling sampling technique, we draw samples from the distribution and assign them a number of \emph{representative fragments}, so that the evolution in time of a single sample is representative of an ensemble of fragments. Using this methodology, we analyse the fate of the ejecta as a function of different modelling techniques and assumptions. We evaluate the effect of different types of distributions, ejection speed models, coefficients, etc. The results show that some modelling assumptions are more influential than others and, in some cases, they influence different aspects of the ejecta evolution, such as the share of impacting and escaping fragments or the distribution of impacting fragments on the asteroid surface.
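As an illustration of drawing samples from such a distribution, here is a minimal inverse-transform sampler for a truncated power-law size distribution. The exponent and cut-offs below are hypothetical, and the paper's actual continuum distribution combines several PDFs over size, speed, and angles:

```python
import numpy as np

def sample_powerlaw_sizes(n, d_min, d_max, alpha, rng):
    """Inverse-transform sampling of a truncated power-law size distribution
    n(d) proportional to d^(-alpha) on [d_min, d_max] (alpha != 1): invert the
    analytic CDF at uniform random deviates."""
    u = rng.random(n)
    a = 1.0 - alpha
    return (d_min**a + u * (d_max**a - d_min**a)) ** (1.0 / a)
```

Each sample can then be weighted by a number of representative fragments so that propagating one sample stands in for an ensemble.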

Read this paper on arXiv…

M. Trisolini, C. Colombo and Y. Tsuda
Thu, 12 Jan 23
60/68

Comments: N/A

Target selection for Near-Earth Asteroids in-orbit sample collection missions [EPA]

http://arxiv.org/abs/2212.09497


This work presents a mission concept for in-orbit particle collection for sampling and exploration missions towards Near-Earth asteroids. Ejecta are generated via a small kinetic impactor, and two possible collection strategies are investigated: collecting the particles along the anti-solar direction, exploiting the dynamical features of the L$_2$ Lagrangian point, or collecting them while the spacecraft orbits the asteroid, before they re-impact onto the asteroid surface. Combining the dynamics of the particles in the Circular Restricted Three-Body Problem perturbed by Solar Radiation Pressure with models for the ejecta generation, we identify possible target asteroids as a function of their physical properties, by evaluating the potential for particle collection.

Read this paper on arXiv…

M. Trisolini, C. Colombo and Y. Tsuda
Tue, 20 Dec 22
19/97

Comments: N/A

A Lambert's Problem Solution via the Koopman Operator with Orthogonal Polynomials [CL]

http://arxiv.org/abs/2212.01390


Lambert’s problem has long been studied in the context of space operations; its solution enables accurate orbit determination and spacecraft guidance. This work offers an analytical solution to Lambert’s problem using the Koopman Operator (KO). In contrast to previous methods in the literature, the KO provides the analysis of a nonlinear system by seeking a transformation that embeds the nonlinear dynamics into a global linear representation. Our new methodology for computing Lambert solutions considers the position of the system’s eigenvalues in the phase plane, evaluating accurate state-transition polynomial maps for computationally efficient propagation of the dynamics. The methodology used and the multiple-revolution solutions found are compared in accuracy and performance with other techniques in the literature, highlighting the benefits of the newly developed analytical approach over classical numerical methodologies.

Read this paper on arXiv…

J. Pasiecznik, S. Servadio and R. Linares
Tue, 6 Dec 22
46/87

Comments: Conference Proceedings from the 73rd International Astronautical Congress

A Finite Element Method for Angular Discretization of the Radiation Transport Equation on Spherical Geodesic Grids [CL]

http://arxiv.org/abs/2212.01409


Discrete ordinate ($S_N$) and filtered spherical harmonics ($FP_N$) based schemes have proven to be robust and accurate in solving the Boltzmann transport equation, but each has its own strengths and weaknesses in different physical scenarios. We present a new method based on a finite element approach in angle that combines the strengths of both methods and mitigates their disadvantages. The angular variables are specified on a spherical geodesic grid, with functions on the sphere represented in a finite element basis. A positivity-preserving limiting strategy is employed to prevent non-physical values from appearing in the solutions. The resulting method is then compared with both $S_N$ and $FP_N$ schemes on four test problems and is found to perform well when one of the other methods fails.

Read this paper on arXiv…

M. Bhattacharyya and D. Radice
Tue, 6 Dec 22
82/87

Comments: 24 pages, 13 figures

Exponential methods for anisotropic diffusion [HEAP]

http://arxiv.org/abs/2211.08953


The anisotropic diffusion equation is of crucial importance in understanding cosmic ray (CR) diffusion across the Galaxy and its interplay with the Galactic magnetic field. This diffusion term contributes to the highly stiff nature of the CR transport equation. In order to conduct numerical simulations of time-dependent cosmic ray transport, implicit integrators have traditionally been favoured over the CFL-bound explicit integrators in order to be able to take large step sizes. We propose exponential methods that directly compute the exponential of the matrix to solve the linear anisotropic diffusion equation. These methods allow us to take even larger step sizes; in certain cases, we are able to choose a step size as large as the simulation time, i.e., only one time step. This can substantially speed up the simulations whilst generating highly accurate solutions ($l^2$ error $\leq 10^{-10}$). Additionally, we test an approach based on extracting a constant coefficient from the anisotropic diffusion equation, where the constant-coefficient term is solved implicitly or exponentially and the remainder is treated with some explicit method. We find that this approach, for linear problems, is unable to improve on the exponential-based methods that directly evaluate the matrix exponential.
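The core idea, a single exponential step of arbitrary size for a linear diffusion operator, can be sketched for a 1D constant-coefficient problem with `scipy.linalg.expm`. This is a toy version: the paper treats the anisotropic Galactic operator and more scalable evaluations of the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# 1D periodic diffusion u_t = kappa * u_xx, discretised with central differences.
n, kappa = 64, 0.5
dx = 2.0 * np.pi / n
A = np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A[0, -1] = A[-1, 0] = 1.0          # periodic wrap-around
A *= kappa / dx**2

x = dx * np.arange(n)
u0 = np.sin(x)                     # an exact eigenvector of the discrete Laplacian
t = 10.0                           # a single step spanning the whole simulation time
u = expm(t * A) @ u0               # solves u' = A u in one step, no CFL limit
```

Because sin(x) is an eigenvector of the discrete operator, the one-step result matches the exact semi-discrete decay for any step size.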

Read this paper on arXiv…

P. Deka, L. Einkemmer, R. Kissmann, et. al.
Thu, 17 Nov 22
25/63

Comments: in submission

Global high-order numerical schemes for the time evolution of the general relativistic radiation magneto-hydrodynamics equations [CL]

http://arxiv.org/abs/2211.00027


Modeling the transport of neutrinos correctly is crucial in some astrophysical scenarios, such as core-collapse supernovae and binary neutron star mergers. In this paper, we focus on the truncated-moment formalism, considering only the first two moments (M1 scheme) within the grey approximation, which reduces the seven-dimensional Boltzmann equation to a system of $3+1$ equations closely resembling the hydrodynamic ones. Solving the M1 scheme is still mathematically challenging, since it is necessary to model the radiation-matter interaction in regimes where the evolution equations become stiff and behave as an advection-diffusion problem. Here, we present different global, high-order time integration schemes based on Implicit-Explicit Runge-Kutta (IMEX) methods designed to overcome the time-step restriction caused by such behavior while allowing us to use the explicit RK methods commonly employed for the MHD and Einstein equations. Finally, we analyze their performance in several numerical tests.
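The stiffness argument can be illustrated with a first-order IMEX Euler step on the classic Prothero-Robinson test problem. This is a generic sketch, not the paper's IMEX RK schemes for the M1 system:

```python
import numpy as np

# Prothero-Robinson test: u' = -lam*(u - cos t) - sin t, exact solution u = cos t.
# The stiff relaxation term is treated implicitly and the mild source explicitly,
# so the step size is not limited by 1/lam (explicit Euler needs h < 2/lam).
lam, h, steps = 1.0e4, 0.01, 100   # h*lam = 100: far beyond the explicit limit
u, t = 1.0, 0.0
for _ in range(steps):
    # first-order IMEX Euler step, implicit only in the linear stiff term
    u = (u + h * lam * np.cos(t + h) - h * np.sin(t)) / (1.0 + h * lam)
    t += h
```

Only the linear stiff term requires a (cheap) implicit solve; the remaining terms can stay in whatever explicit scheme already evolves the non-stiff physics, which is the design point of IMEX splittings.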

Read this paper on arXiv…

M. Izquierdo, L. Pareschi, B. Miñano, et. al.
Wed, 2 Nov 22
11/67

Comments: 22 pages, 11 figures

Detection and estimation of spacecraft maneuvers for catalog maintenance [CL]

http://arxiv.org/abs/2210.14350


Building and maintaining a catalog of resident space objects involves several tasks, ranging from observations to data analysis. Once acquired, the knowledge of a space object needs to be updated following a dedicated observing schedule. Dynamics mismodeling and unknown maneuvers can alter the catalog’s accuracy, resulting in uncorrelated observations originating from the same object. Starting from two independent orbits, this work presents a novel approach to detect and estimate maneuvers of resident space objects, which allows for correlation recovery. The estimation is performed with successive convex optimization, without a priori assumptions on the thrust arc structure and thrust direction.

Read this paper on arXiv…

L. Pirovano and R. Armellin
Thu, 27 Oct 22
18/55

Comments: 14 pages, 9 figures, 1 table

Conservative Evolution of Black Hole Perturbations with Time-Symmetric Numerical Methods [CL]

http://arxiv.org/abs/2210.02550


The scheduled launch of the LISA Mission in the next decade has called attention to the gravitational self-force problem. Despite an extensive body of theoretical work, long-time numerical computations of gravitational waves from extreme-mass-ratio inspirals (EMRIs) remain challenging. This work proposes a class of numerical evolution schemes suited to this problem, based on Hermite integration. Their most important features are time-reversal symmetry and unconditional stability, which enable these methods to preserve symplectic structure, energy, momentum and other Noether charges over long time periods. We apply Noether’s theorem to the master fields of black hole perturbation theory on a hyperboloidal slice of Schwarzschild spacetime to show that there exist constants of evolution that numerical simulations must preserve. We demonstrate that time-symmetric integration schemes based on a 2-point Taylor expansion (such as Hermite integration) numerically conserve these quantities, unlike schemes based on a 1-point Taylor expansion (such as Runge-Kutta). This makes time-symmetric schemes ideal for long-time EMRI simulations.
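The contrast between 2-point and 1-point schemes can be seen already on a harmonic oscillator, comparing the time-symmetric implicit midpoint rule (a simple relative of the Hermite schemes, which also conserves quadratic invariants exactly) against forward Euler. A minimal sketch:

```python
import numpy as np

# Harmonic oscillator z' = A z, z = (q, p); energy E = (q^2 + p^2)/2 is conserved.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
h, steps = 0.1, 10000
I2 = np.eye(2)

# Time-symmetric implicit midpoint rule: z_{n+1} = (I - hA/2)^{-1} (I + hA/2) z_n.
M = np.linalg.solve(I2 - 0.5 * h * A, I2 + 0.5 * h * A)
# Forward Euler, a 1-point Taylor scheme: z_{n+1} = (I + hA) z_n.
E = I2 + h * A

z_mid = np.array([1.0, 0.0])
z_eul = np.array([1.0, 0.0])
for _ in range(steps):
    z_mid = M @ z_mid
    z_eul = E @ z_eul

energy = lambda z: 0.5 * (z[0] ** 2 + z[1] ** 2)
```

Over 10,000 steps the midpoint energy stays at 0.5 to roundoff (the Cayley-transform update matrix is orthogonal), while forward Euler's energy grows without bound.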

Read this paper on arXiv…

M. O’Boyle, C. Markakis, L. Silva, et. al.
Fri, 7 Oct 22
50/62

Comments: 43 pages, 8 figures

A finite-volume scheme for modeling compressible magnetohydrodynamic flows at low Mach numbers in stellar interiors [SSA]

http://arxiv.org/abs/2210.01641


Fully compressible magnetohydrodynamic (MHD) simulations are a fundamental tool for investigating the role of dynamo amplification in the generation of magnetic fields in deep convective layers of stars. The flows that arise in such environments are characterized by low sonic Mach numbers ($M_{\mathrm{son}} < 0.01$). In these regimes, conventional MHD codes typically show excessive dissipation and tend to be inefficient as the Courant-Friedrichs-Lewy (CFL) constraint on the time step becomes too strict. In this work we present a new method for efficiently simulating MHD flows at low Mach numbers in a space-dependent gravitational potential while still retaining all effects of compressibility. The proposed scheme is implemented in the finite-volume Seven-League Hydro (SLH) code, and it makes use of a low-Mach version of the five-wave Harten-Lax-van Leer discontinuities (HLLD) solver to reduce numerical dissipation, an implicit-explicit time discretization technique based on Strang splitting to overcome the overly strict CFL constraint, and a well-balancing method that dramatically reduces the magnitude of spatial discretization errors in strongly stratified setups. The solenoidal constraint on the magnetic field is enforced by using a constrained transport method on a staggered grid. We carry out five verification tests, including the simulation of a small-scale dynamo in a star-like environment at $M_{\mathrm{son}} \sim 0.001$. We demonstrate that the proposed scheme can be used to accurately simulate compressible MHD flows in regimes of low Mach numbers and strongly stratified setups even with moderately coarse grids.

Read this paper on arXiv…

G. Leidi, C. Birke, R. Andrassy, et. al.
Wed, 5 Oct 22
37/73

Comments: N/A

A database of high precision trivial choreographies for the planar three-body problem [CL]

http://arxiv.org/abs/2210.00594


Trivial choreographies are special periodic solutions of the planar three-body problem. In this work we use a modified Newton's method based on the continuous analog of Newton's method and high-precision arithmetic for a specialized numerical search for new trivial choreographies. As a result of the search, we computed a high-precision database of 462 such orbits, including 397 new ones. The initial conditions and periods of all found solutions are given with 180 correct decimal digits. Of the choreographies, 108 are linearly stable, including 99 new ones. The linear stability is tested by high-precision computation of the eigenvalues of the monodromy matrices.
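The role of high-precision arithmetic in such root-finding can be illustrated with a plain Newton iteration in the stdlib `decimal` module. The paper uses the continuous analog of Newton's method on the three-body shooting equations; this toy replaces that with a scalar square root to show how 180-digit accuracy is reached:

```python
from decimal import Decimal, getcontext

def newton_sqrt(a, digits=180):
    """Plain Newton iteration x <- (x + a/x)/2 for sqrt(a), carried out in
    arbitrary-precision arithmetic with the stdlib `decimal` module.  Quadratic
    convergence roughly doubles the number of correct digits per step."""
    getcontext().prec = digits + 10          # working precision with guard digits
    a = Decimal(a)
    x, tol = Decimal(1), Decimal(10) ** (-digits)
    while True:
        x_new = (x + a / x) / 2
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
```

Because the digit count doubles each step, about nine iterations suffice for 180 digits from a crude starting guess.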

Read this paper on arXiv…

I. Hristov, R. Hristova, I. Puzynin, et. al.
Tue, 4 Oct 22
37/71

Comments: 10 pages, 3 figures, 1 table. arXiv admin note: substantial text overlap with arXiv:2203.02793

Solving nonlinear Klein-Gordon equations on unbounded domains via the Finite Element Method [CL]

http://arxiv.org/abs/2209.07226


A large class of scalar-tensor theories of gravity exhibit a screening mechanism that dynamically suppresses fifth forces in the Solar system and local laboratory experiments. Technically, at the scalar field equation level, this usually translates into nonlinearities which strongly limit the scope of analytical approaches. This article presents $femtoscope$, a Python numerical tool based on the Finite Element Method (FEM) and Newton's method for solving Klein-Gordon-like equations that arise in particular in the symmetron or chameleon models. Regarding the latter, the scalar field behavior is generally known only infinitely far away from its sources. We thus investigate existing and new FEM-based techniques for dealing with asymptotic boundary conditions on finite-memory computers, whose convergence is assessed. Finally, $femtoscope$ is showcased with a study of the chameleon fifth force in Earth orbit.

Read this paper on arXiv…

H. Lévy, J. Bergé and J. Uzan
Fri, 16 Sep 22
27/84

Comments: N/A

LeXInt: Package for Exponential Integrators employing Leja interpolation [CL]

http://arxiv.org/abs/2208.08269


We present publicly available software for exponential integrators that computes the $\varphi_l(z)$ functions using polynomial interpolation. Interpolation at Leja points has recently been shown to be competitive with the traditionally used Krylov subspace method. The developed framework facilitates easy adaptation into any Python software package for time integration.
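What an exponential integrator does with $\varphi_1$ can be sketched on a linear problem, for which exponential Euler is exact at any step size. In this illustration $\varphi_1$ is evaluated densely via `expm` rather than by LeXInt's matrix-free Leja interpolation:

```python
import numpy as np
from scipy.linalg import expm

def phi1(M):
    """Dense evaluation of phi_1(M) = M^{-1} (e^M - I).  LeXInt instead
    evaluates the phi_l functions matrix-free via polynomial interpolation
    at Leja points, which scales to large stiff systems."""
    return np.linalg.solve(M, expm(M) - np.eye(M.shape[0]))

def exponential_euler(A, b, u0, h, steps):
    """Exponential Euler u_{n+1} = u_n + h*phi1(hA)(A u_n + b), which is exact
    for constant-coefficient linear problems u' = A u + b at any step size."""
    u = np.array(u0, dtype=float)
    P = h * phi1(h * A)
    for _ in range(steps):
        u = u + P @ (A @ u + b)
    return u
```

On a nonsingular linear system the stepper reproduces the variation-of-constants solution to machine precision regardless of h, which is the property that makes $\varphi_l$ evaluation the computational core of exponential integrators.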

Read this paper on arXiv…

P. Deka, L. Einkemmer and M. Tokman
Thu, 18 Aug 22
19/45

Comments: Publicly available software available at this https URL, in submission

Uncertainty-Aware Blob Detection with an Application to Integrated-Light Stellar Population Recoveries [GA]

http://arxiv.org/abs/2208.05881


Context. Blob detection is a common problem in astronomy. One example is in stellar population modelling, where the distribution of stellar ages and metallicities in a galaxy is inferred from observations. In this context, blobs may correspond to stars born in-situ versus those accreted from satellites, and the task of blob detection is to disentangle these components. A difficulty arises when the distributions come with significant uncertainties, as is the case for stellar population recoveries inferred from modelling spectra of unresolved stellar systems. There is currently no satisfactory method for blob detection with uncertainties. Aims. We introduce a method for uncertainty-aware blob detection developed in the context of stellar population modelling of integrated-light spectra of stellar systems. Methods. We develop theory and computational tools for an uncertainty-aware version of the classic Laplacian-of-Gaussians method for blob detection, which we call ULoG. This identifies significant blobs considering a variety of scales. As a prerequisite to apply ULoG to stellar population modelling, we introduce a method for efficient computation of uncertainties for spectral modelling. This method is based on the truncated Singular Value Decomposition and Markov Chain Monte Carlo sampling (SVD-MCMC). Results. We apply the methods to data of the star cluster M54. We show that the SVD-MCMC inferences match those from standard MCMC, but are a factor of 5-10 faster to compute. We apply ULoG to the inferred M54 age/metallicity distributions, identifying 2 to 3 significant, distinct populations amongst its stars.
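The classic, uncertainty-free building block of ULoG is the Laplacian-of-Gaussians detector. A minimal sketch on synthetic data (the blob positions and widths below are made up for the demo):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic map with two Gaussian blobs (hypothetical positions and widths).
y, x = np.mgrid[0:64, 0:64]
img = (np.exp(-((y - 20) ** 2 + (x - 20) ** 2) / (2 * 3.0 ** 2))
       + 0.7 * np.exp(-((y - 45) ** 2 + (x - 40) ** 2) / (2 * 5.0 ** 2)))

# Scale-normalised Laplacian-of-Gaussians response; bright blobs appear as
# local minima, strongest when the filter scale matches the blob width.
sigma = 3.0
log_resp = sigma ** 2 * gaussian_laplace(img, sigma)
iy, ix = np.unravel_index(np.argmin(log_resp), log_resp.shape)  # strongest blob
```

ULoG extends this by scanning scales and by propagating the uncertainty of the input map into a significance statement for each detected blob.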

Read this paper on arXiv…

P. Jethwa, F. Parzer, O. Scherzer, et. al.
Fri, 12 Aug 22
16/48

Comments: Submitted to A&A, comments welcome

New families of periodic orbits for the planar three-body problem computed with high precision [CL]

http://arxiv.org/abs/2205.14709


In this paper we use a modified Newton's method based on the continuous analog of Newton's method and high-precision arithmetic for a general numerical search of periodic orbits for the planar three-body problem. We consider relatively short periods and a relatively coarse search grid. As a result, we found 123 periodic solutions belonging to 105 new topological families that are not included in the database in [Science China Physics, Mechanics and Astronomy 60.12 (2017)]. The extensive numerical search is achieved by solving many independent tasks in parallel using many cores of a computational cluster.

Read this paper on arXiv…

I. Hristov, R. Hristova, I. Puzynin, et. al.
Tue, 31 May 22
71/89

Comments: 5 pages, 1 figure. arXiv admin note: substantial text overlap with arXiv:2203.02793

Energy conserving and well-balanced discontinuous Galerkin methods for the Euler-Poisson equations in spherical symmetry [CL]

http://arxiv.org/abs/2205.04448


This paper presents high-order Runge-Kutta (RK) discontinuous Galerkin methods for the Euler-Poisson equations in spherical symmetry. The scheme can preserve a general polytropic equilibrium state and achieve total energy conservation up to machine precision with carefully designed spatial and temporal discretizations. To achieve the well-balanced property, the numerical solutions are decomposed into equilibrium and fluctuation components which are treated differently in the source term approximation. One non-trivial challenge encountered in the procedure is the complexity of the equilibrium state, which is governed by the Lane-Emden equation. For total energy conservation, we present second- and third-order RK time discretizations, where different source term approximations are introduced in each stage of the RK method to ensure the conservation of total energy. A carefully designed slope limiter for spherical symmetry is also introduced to eliminate oscillations near discontinuities while maintaining the well-balanced and total-energy-conserving properties. Extensive numerical examples, including a toy model of stellar core-collapse with a phenomenological equation of state that results in core bounce and shock formation, are provided to demonstrate the desired properties of the proposed methods, including the well-balanced property, high-order accuracy, shock capturing capability, and total energy conservation.

Read this paper on arXiv…

W. Zhang, Y. Xing and E. Endeve
Tue, 10 May 22
17/70

Comments: N/A

Instabilities Appearing in Effective Field theories: When and How? [CL]

http://arxiv.org/abs/2205.01055


Nonlinear partial differential equations appear in many domains of physics, and we study here a typical equation which one finds in effective field theories (EFT) originating from cosmological studies. In particular, we are interested in the equation $\partial_t^2 u(x,t) = \alpha (\partial_x u(x,t))^2 +\beta \partial_x^2 u(x,t)$ in $1+1$ dimensions. It has been known for quite some time that solutions to this equation diverge in finite time when $\alpha >0$. We study the detailed nature of this divergence as a function of the parameters $\alpha>0 $ and $\beta\ge0$. The divergence does not disappear even when $\beta$ is very large, contrary to what one might expect, but it takes longer to develop as $\beta$ increases at fixed $\alpha$. We note that there are two types of divergence, and we discuss the transition between them as a function of parameter choices. The blowup is unavoidable unless the corresponding equations are modified. Our results extend to $3+1$ dimensions.

Read this paper on arXiv…

J. Eckmann, F. Hassani and H. Zaag
Tue, 3 May 22
51/82

Comments: 19 pages, 5 figures

Forward-fitting STIX visibilities [SSA]

http://arxiv.org/abs/2204.14148


Aims. To determine to what extent the problem of forward fitting visibilities measured by the Spectrometer/Telescope for Imaging X-rays (STIX) on board Solar Orbiter is more challenging than the same problem for previous hard X-ray solar imaging missions, and to identify an effective optimization scheme for parametric imaging for STIX. Methods. This paper introduces a Particle Swarm Optimization (PSO) algorithm for forward fitting STIX visibilities and compares its effectiveness with that of the standard simplex-based optimization algorithm used so far for the analysis of visibilities measured by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). This comparison is made by considering experimental visibilities measured by both RHESSI and STIX, and synthetic visibilities generated by accounting for the STIX signal formation model. Results. We found that the parametric imaging approach based on PSO is as reliable as the one based on the simplex method in the case of RHESSI visibilities. However, PSO is significantly more robust when applied to STIX simulated and experimental visibilities. Conclusions. Standard deterministic optimization is not effective enough for forward fitting the few visibilities sampled by STIX in the angular frequency plane. Therefore, a more sophisticated optimization scheme must be introduced for parametric imaging in the case of the Solar Orbiter X-ray telescope. The forward-fitting routine based on PSO introduced in this paper proved to be significantly robust and reliable, and can be considered an effective candidate tool for parametric imaging in the STIX context.
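A generic global-best PSO of the kind used for such forward fits can be sketched on a toy quadratic objective. The swarm hyper-parameters below are conventional defaults, not the paper's settings, and the real objective is the visibility-misfit function rather than this toy:

```python
import numpy as np

def pso(f, bounds, n_particles=40, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimiser: each particle is pulled
    toward its personal best and the swarm's best position.  f maps an
    (n_particles, dim) array of positions to (n_particles,) objective values."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), f(x)
    g = pbest[np.argmin(pval)]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = np.clip(x + v, lo, hi)                             # stay in bounds
        val = f(x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()
```

Unlike simplex-type local searches, the swarm explores the whole bounded parameter box, which is the robustness property exploited when only a few visibilities constrain the source parameters.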

Read this paper on arXiv…

A. Volpara, P. Massa, E. Perracchione, et. al.
Mon, 2 May 22
19/52

Comments: N/A

Robust initial orbit determination for short-arc Doppler radar observations [CL]

http://arxiv.org/abs/2204.13966


A new Doppler radar initial orbit determination algorithm with embedded uncertainty quantification capabilities is presented. The method is based on a combination of Gauss’ and Lambert’s solvers. The whole process is carried out in the Differential Algebra framework, which provides the Taylor expansion of the state estimate with respect to the measurements’ uncertainties. This feature makes the approach particularly suited for handling data association problems. A comparison with the Doppler integration method is performed using both simulated and real data. The proposed approach is shown to be more accurate and robust, and particularly suited for short-arc observations.

Read this paper on arXiv…

M. Losacco, R. Armellin, C. Yanez, et al.
Mon, 2 May 22
25/52

Comments: N/A

A new instability in clustering dark energy? [CEA]

http://arxiv.org/abs/2204.13098


In this paper, we study the effective field theory (EFT) of dark energy for the $k$-essence model beyond linear order. Using particle-mesh $N$-body simulations that consistently solve the dark energy evolution on a grid, we find that the next-to-leading order in the EFT expansion, which comprises the terms of the equations of motion that are quadratic in the field variables, gives rise to a new instability in the regime of low speed of sound (high Mach number). We rule out the possibility of a numerical artefact by considering simplified cases in spherically and plane symmetric situations analytically. If the speed of sound vanishes exactly, the non-linear instability makes the evolution singular in finite time, signalling a breakdown of the EFT framework. The case of finite (but small) speed of sound is subtle, and the local singularity could be replaced by some other type of behaviour with strong non-linearities. While an ultraviolet completion may cure the problem in principle, there is no reason why this should be the case in general. As a result, for a large range of the effective speed of sound $c_s$, a linear treatment is not adequate.

Read this paper on arXiv…

F. Hassani, J. Adamek, M. Kunz, et al.
Thu, 28 Apr 22
43/70

Comments: 24 pages, 10 figures

A novel formulation for the evolution of relativistic rotating stars [CL]

http://arxiv.org/abs/2204.09943


We present a new formulation to numerically construct equilibrium configurations of rotating stars in general relativity. With their quasi-static evolutions in mind, we adopt a Lagrangian formulation of our own devising, in which we solve force-balance equations to find the positions of fluid elements assigned to the grid points, instead of using the ordinary Eulerian formulation. Unlike previous works in the literature, we do not employ the first integral of the Euler equation, which cannot in general be obtained by analytic integration. We assign a mass, specific angular momentum, and entropy to each fluid element, in contrast to previous methods, in which the spatial distribution of the angular velocity or angular momentum is specified. Those distributions are determined after the positions of all fluid elements (or grid points) are derived in our formulation. We solve the large system of nonlinear algebraic equations obtained by discretizing the time-independent Euler and Einstein equations with the finite-element method, using our new multi-dimensional root-finding scheme, named the W4 method. To demonstrate the capability of our new formulation, we construct both barotropic and baroclinic rotating configurations. We also solve three evolutionary sequences that mimic cooling, mass loss, and mass accretion as simple toy models.

Read this paper on arXiv…

H. Okawa, K. Fujisawa, N. Yasutake, et al.
Fri, 22 Apr 22
39/64

Comments: 19 pages, 13 figures

Singularity-Avoiding Multi-Dimensional Root-Finder [CL]

http://arxiv.org/abs/2204.09941


In this paper we propose a new method, named the W4 method, for solving nonlinear systems of equations. It may be regarded as an extension of the Newton-Raphson (NR) method, to be used when that method fails. Indeed, our method can be applied not only to ordinary problems with non-singular Jacobian matrices but also to problems with singular Jacobians, which essentially all previous methods employing inversion of the Jacobian matrix have failed to solve. In this article, we demonstrate that (i) our new scheme can define a non-singular iteration map even for such problems by utilizing the singular value decomposition, (ii) the sequence of vectors produced by the new iteration map converges to the right solution under a certain condition, and (iii) the standard two-dimensional problems in the literature that no single previously proposed method has been able to solve completely are all solved by our new method.
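The W4 scheme itself is given in the paper; as a minimal illustration of its key ingredient, the sketch below replaces the Jacobian inverse in the NR update with an SVD-based pseudoinverse, so the step remains defined when the Jacobian is (near-)singular. The truncation threshold, damping-free update, and test problem are our own illustrative choices, not the paper's.

```python
import numpy as np

def newton_svd(F, J, x0, n_iters=100, tol=1e-12):
    """Newton-Raphson iteration with the Jacobian inverted through an SVD
    pseudoinverse: tiny singular values are dropped instead of inverted."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        U, s, Vt = np.linalg.svd(J(x))
        # zero out directions with negligible singular values (s is descending)
        s_inv = np.where(s > 1e-12 * s[0], 1.0 / s, 0.0)
        x = x - Vt.T @ (s_inv * (U.T @ f))
    return x

# Toy 2-D system with roots at (1, 2), (2, 1) and their negatives.
F = lambda z: np.array([z[0] ** 2 + z[1] ** 2 - 5.0, z[0] * z[1] - 2.0])
J = lambda z: np.array([[2 * z[0], 2 * z[1]], [z[1], z[0]]])
root = newton_svd(F, J, [0.5, 2.5])
```

On well-behaved starting points this reduces to the usual NR iteration; the pseudoinverse only changes the step where the Jacobian loses rank.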

Read this paper on arXiv…

H. Okawa, K. Fujisawa, Y. Yamamoto, et al.
Fri, 22 Apr 22
43/64

Comments: 13 pages, 4 figures

A fast linear system solution with application to spatial source separation for the Cosmic Microwave Background [CL]

http://arxiv.org/abs/2204.08057


Implementation of many statistical methods for large, multivariate data sets requires one to solve a linear system that, depending on the method, is of the dimension of the number of observations or of each individual data vector. This is often the limiting factor in scaling the method with data size and complexity. In this paper we illustrate the use of Krylov subspace methods to address this issue in a statistical solution to a source separation problem in cosmology, where the data size is prohibitively large for direct solution of the required system. Two distinct approaches are described: one that applies the method of conjugate gradients directly to the Kronecker-structured problem, and another that reformulates the system as a Sylvester matrix equation. We show that both approaches produce an accurate solution within an acceptable computation time and with practical memory requirements for the data size that is currently available.
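The first approach, conjugate gradients applied matrix-free to a Kronecker-structured system, can be sketched as follows. With row-major flattening, (A ⊗ B) x equals (A X Bᵀ) flattened for X = reshape(x), so the large matrix is never formed. The small SPD factors below are toy stand-ins for the covariance structure, not the paper's operators.

```python
import numpy as np

def cg(matvec, b, tol=1e-12, max_iters=500):
    """Plain conjugate gradients for an SPD operator given only as a matvec."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy SPD Kronecker system (A kron B) x = b, applied without forming
# the (n*m) x (n*m) matrix: (A kron B) x = (A X B^T).ravel(), X = x.reshape(n, m).
rng = np.random.default_rng(1)
n, m = 30, 40
MA = rng.standard_normal((n, n)); A = np.eye(n) + 0.1 * MA @ MA.T
MB = rng.standard_normal((m, m)); B = np.eye(m) + 0.1 * MB @ MB.T
matvec = lambda v: (A @ v.reshape(n, m) @ B.T).ravel()
x_true = rng.standard_normal(n * m)
x_cg = cg(matvec, matvec(x_true))
```

Each matvec costs two small dense multiplications instead of one huge one, which is what makes the approach feasible at the data sizes the paper targets.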

Read this paper on arXiv…

K. Soodhalter, S. Wilson and D. Pham
Tue, 19 Apr 22
36/52

Comments: submitted for publication

An implicit symplectic solver for high-precision long term integrations of the Solar System [CL]

http://arxiv.org/abs/2204.01539


We present FCIRK16, an implicit symplectic integrator for high-precision long-term integrations of the Solar System. Compared to other symplectic integrators (the Wisdom and Holman map and its higher-order generalizations) that also take advantage of the hierarchical nature of the motion of the planets around the central star, our method requires solving implicit equations at each time step. We claim that, despite this disadvantage, FCIRK16 is more efficient than explicit symplectic integrators for high-precision simulations thanks to: (i) its high order of precision, (ii) its easy parallelization, and (iii) its efficient mixed-precision implementation, which reduces the effect of round-off errors. In addition, unlike typical explicit symplectic integrators for near-Keplerian problems, FCIRK16 is able to integrate problems with arbitrary perturbations (not necessarily split as a sum of integrable parts). We present a novel analysis of the effect of close encounters on the leading term of the local discretization errors of our integrator. Based on that analysis, a mechanism to detect and refine integration steps that involve close encounters is incorporated in our code. That mechanism allows FCIRK16 to accurately resolve close encounters of arbitrary bodies. We illustrate our treatment of close encounters with the application of FCIRK16 to a point-mass Newtonian 15-body model of the Solar System (with the Sun, the eight planets, Pluto, and five main asteroids) and a 16-body model treating the Moon as a separate body. We also present numerical comparisons of FCIRK16 with a state-of-the-art high-order explicit symplectic scheme for the 16-body model that demonstrate the superiority of our integrator when very high precision is required.

Read this paper on arXiv…

M. Antoñana, E. Alberdi, J. J. Makazaga, et al.
Tue, 5 Apr 22
60/83

Comments: N/A

Provably Positive Central DG Schemes via Geometric Quasilinearization for Ideal MHD Equations [CL]

http://arxiv.org/abs/2203.14853


In the numerical simulation of ideal MHD, keeping the pressure and density positive is essential for both physical considerations and numerical stability. This is a challenge, due to the underlying relation between such positivity-preserving (PP) property and the magnetic divergence-free (DF) constraint, as well as the strong nonlinearity of the MHD equations. This paper presents the first rigorous PP analysis of the central discontinuous Galerkin (CDG) methods and constructs arbitrarily high-order PP CDG schemes for ideal MHD. By the recently developed geometric quasilinearization (GQL) approach, our analysis reveals that the PP property of standard CDG methods is closely related to a discrete DF condition, whose form was unknown and differs from the non-central DG and finite volume cases in [K. Wu, SIAM J. Numer. Anal. 2018]. This result lays the foundation for the design of our PP CDG schemes. In the 1D case, the discrete DF condition is naturally satisfied, and we prove that the standard CDG method is PP under a condition that can be enforced with a PP limiter. However, in the multidimensional cases, the discrete DF condition is highly nontrivial yet critical, and we prove that the standard CDG method, even with the PP limiter, is not PP in general, as it fails to meet the discrete DF condition. We address this issue by carefully analyzing the structure of the discrete divergence and then constructing new locally DF CDG schemes for Godunov's modified MHD equations with an additional source. The key point is to find a suitable discretization of the source term such that it exactly offsets all the terms in the discrete DF condition. Based on the GQL approach, we prove the PP property of the new multidimensional CDG schemes. The robustness and accuracy of the PP CDG schemes are validated by several demanding examples, including high-speed jets and blast problems with very low plasma beta.

Read this paper on arXiv…

K. Wu, H. Jiang and C. Shu
Tue, 29 Mar 22
50/73

Comments: N/A

New satellites of figure-eight orbit computed with high precision [CL]

http://arxiv.org/abs/2203.02793


In this paper we use a modified Newton's method, based on the continuous analog of Newton's method, together with high-precision arithmetic to search for new satellites of the famous figure-eight orbit. By making a purposeful search for such satellites, we found over 300 new satellites, including 7 new stable choreographies.

Read this paper on arXiv…

I. Hristov, R. Hristova, I. Puzynin, et al.
Tue, 8 Mar 22
63/100

Comments: 11 pages, 9 figures, 1 table

A Discontinuous Galerkin Solver in the FLASH Multi-Physics Framework [IMA]

http://arxiv.org/abs/2112.11318


In this paper, we present a discontinuous Galerkin solver for magnetohydrodynamics, based on previous work by Markert et al. (2021), in the form of a new fluid solver module integrated into the established and well-known multi-physics simulation code FLASH. Our goal is to enable future research on the capabilities and potential advantages of discontinuous Galerkin methods for complex multi-physics simulations in astrophysical settings. We give specific details and adjustments of our implementation within the FLASH framework and present extensive validations and test cases, in particular of its interaction with several other physics modules such as (self-)gravity and radiative transfer. We conclude that the new DG solver module in FLASH is ready for use in astrophysics simulations and thus ready for assessments and investigations.

Read this paper on arXiv…

J. Markert, S. Walch and G. Gassner
Wed, 22 Dec 21
42/67

Comments: arXiv admin note: text overlap with arXiv:1806.02343 by other authors

A Conservative Finite Element Solver for MHD Kinematics equations: Vector Potential method and Constraint Preconditioning [CL]

http://arxiv.org/abs/2111.11693


A new conservative finite element solver for the three-dimensional steady magnetohydrodynamic (MHD) kinematics equations is presented. The solver uses the magnetic vector potential and the current density as solution variables, which are discretized by H(curl)-conforming edge elements and H(div)-conforming face elements, respectively. As a result, the divergence-free constraints on the discrete current density and magnetic induction are both satisfied. Moreover, the solutions also preserve the total magnetic helicity. The resulting linear algebraic system is a typical dual saddle-point problem that is ill-conditioned and indefinite. To solve it efficiently, we develop a block preconditioner based on the constraint preconditioning framework and devise a preconditioned FGMRES solver. Numerical experiments verify the conservation properties, the convergence rate of the discrete solutions, and the robustness of the preconditioner.

Read this paper on arXiv…

X. Li and L. Li
Wed, 24 Nov 21
6/61

Comments: 13 pages. arXiv admin note: text overlap with arXiv:1712.08922

Implementation paradigm for supervised flare forecasting studies: a deep learning application with video data [SSA]

http://arxiv.org/abs/2110.12554


Solar flare forecasting can be realized by means of the analysis of magnetic data through artificial intelligence techniques. The aim is to predict whether a magnetic active region (AR) will originate solar flares above a certain class within a certain amount of time. A crucial issue is concerned with the way the adopted machine learning method is implemented, since forecasting results strongly depend on the criterion with which training, validation, and test sets are populated. In this paper we propose a general paradigm to generate these sets in such a way that they are independent from each other and internally well-balanced in terms of AR flaring effectiveness. This set generation process provides a ground for comparison for the performance assessment of machine learning algorithms. Finally, we use this implementation paradigm in the case of a deep neural network, which takes as input videos of magnetograms recorded by the Helioseismic and Magnetic Imager on-board the Solar Dynamics Observatory (SDO/HMI). To our knowledge, this is the first time that the solar flare forecasting problem is addressed by means of a deep neural network for video classification, which does not require any a priori extraction of features from the HMI magnetograms.

Read this paper on arXiv…

S. Guastavino, F. Marchetti, F. Benvenuto, et al.
Tue, 26 Oct 21
72/109

Comments: N/A

Numerical solutions to linear transfer problems of polarized radiation II. Krylov methods and matrix-free implementation [CL]

http://arxiv.org/abs/2110.11873


Context. Numerical solutions to transfer problems of polarized radiation in solar and stellar atmospheres commonly rely on stationary iterative methods, which often perform poorly when applied to large problems. In recent times, stationary iterative methods have been replaced by state-of-the-art preconditioned Krylov iterative methods for many applications. However, a general description and a convergence analysis of Krylov methods in the polarized radiative transfer context are still lacking. Aims. We describe the practical application of preconditioned Krylov methods to linear transfer problems of polarized radiation, possibly in a matrix-free context. The main aim is to clarify the advantages and drawbacks of various Krylov accelerators with respect to stationary iterative methods. Methods. We report the convergence rate and the run time of various Krylov-accelerated techniques combined with different formal solvers when applied to a 1D benchmark transfer problem of polarized radiation. In particular, we analyze the GMRES, BICGSTAB, and CGS Krylov methods, preconditioned with Jacobi or (S)SOR. Results. Krylov methods accelerate the convergence, reduce the run time, and improve the robustness of standard stationary iterative methods. Jacobi-preconditioned Krylov methods outperform SOR-preconditioned stationary iterations in all respects. In particular, the Jacobi-GMRES method offers the best overall performance for the problem setting in use. Conclusions. Krylov methods can be more challenging to implement than stationary iterative methods. However, an algebraic formulation of the radiative transfer problem allows one to apply and study Krylov acceleration strategies with little effort. Furthermore, many available numerical libraries implement matrix-free Krylov routines, enabling an almost effortless transition to Krylov methods.
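The best-performing combination reported above, Jacobi-preconditioned GMRES, can be sketched compactly. The following is a minimal unrestarted GMRES via Arnoldi with left diagonal (Jacobi) preconditioning on a toy diagonally dominant nonsymmetric system; it is not the radiative transfer operator, and production codes would use a library routine with restarts and better least-squares updates.

```python
import numpy as np

def gmres_jacobi(A, b, max_krylov=100, tol=1e-10):
    """Unrestarted GMRES (Arnoldi + least squares), left-preconditioned with
    Jacobi diagonal scaling: solves M^{-1} A x = M^{-1} b with M = diag(A)."""
    d_inv = 1.0 / np.diag(A)
    apply_op = lambda v: d_inv * (A @ v)       # preconditioned operator
    r0 = d_inv * b                             # preconditioned residual at x0 = 0
    beta = np.linalg.norm(r0)
    n = len(b)
    Q = np.zeros((n, max_krylov + 1))          # orthonormal Krylov basis
    H = np.zeros((max_krylov + 1, max_krylov)) # Hessenberg matrix
    Q[:, 0] = r0 / beta
    for j in range(max_krylov):
        w = apply_op(Q[:, j])
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        # minimize ||beta*e1 - H y|| over the current Krylov subspace
        e1 = np.zeros(j + 2); e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        res = np.linalg.norm(H[:j + 2, :j + 1] @ y - e1)
        if H[j + 1, j] < 1e-14 or res < tol:
            break
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :j + 1] @ y

rng = np.random.default_rng(2)
n = 60
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally dominant, nonsymmetric
b = rng.standard_normal(n)
x = gmres_jacobi(A, b)
```

The preconditioner only requires the diagonal of the operator, which is why Jacobi pairs so naturally with matrix-free Krylov implementations.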

Read this paper on arXiv…

P. Benedusi, G. Janett, L. Belluzzi, et al.
Mon, 25 Oct 21
49/76

Comments: N/A

A novel fourth-order WENO interpolation technique. A possible new tool designed for radiative transfer [CL]

http://arxiv.org/abs/2110.11885


Context. Several numerical problems require the interpolation of discrete data that present various types of discontinuities. The radiative transfer is a typical example of such a problem. This calls for high-order well-behaved techniques to interpolate both smooth and discontinuous data. Aims. The final aim is to propose new techniques suitable for applications in the context of numerical radiative transfer. Methods. We have proposed and tested two different techniques. Essentially non-oscillatory (ENO) techniques generate several candidate interpolations based on different substencils. The smoothest candidate interpolation is determined from a measure for the local smoothness, thereby enabling the essential non-oscillatory property. Weighted ENO (WENO) techniques use a convex combination of all candidate substencils to obtain high-order accuracy in smooth regions while keeping the essentially non-oscillatory property. In particular, we have outlined and tested a novel well-performing fourth-order WENO interpolation technique for both uniform and nonuniform grids. Results. Numerical tests prove that the fourth-order WENO interpolation guarantees fourth-order accuracy in smooth regions of the interpolated functions. In the presence of discontinuities, the fourth-order WENO interpolation enables the non-oscillatory property, avoiding oscillations. Unlike Bézier and monotonic high-order Hermite interpolations, it does not degenerate to a linear interpolation near smooth extrema of the interpolated function. Conclusions. The novel fourth-order WENO interpolation guarantees high accuracy in smooth regions, while effectively handling discontinuities. This interpolation technique might be particularly suitable for several problems, including a number of radiative transfer applications such as multidimensional problems, multigrid methods, and formal solutions.
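The WENO idea described above, two quadratic candidates blended by smoothness-weighted nonlinear weights, can be sketched for a uniform 4-point stencil. This is a generic textbook-style variant, not the paper's specific fourth-order technique: the smoothness indicators below are one common choice among several, and the linear weights 1/2, 1/2 reproduce the fourth-order cubic interpolant in smooth regions.

```python
import numpy as np

def weno4_midpoint(f, eps=1e-6):
    """Interpolate at the midpoint of the central interval of a uniform 4-point
    stencil f = [f0, f1, f2, f3] (i.e. at x_{3/2}), by a convex combination of
    the two quadratic candidate interpolants."""
    f0, f1, f2, f3 = f
    # quadratic candidates evaluated at the midpoint
    q0 = (-f0 + 6 * f1 + 3 * f2) / 8           # substencil {0, 1, 2}
    q1 = (3 * f1 + 6 * f2 - f3) / 8            # substencil {1, 2, 3}
    # smoothness indicators (one common choice; several variants exist)
    b0 = (f0 - 2 * f1 + f2) ** 2 + 0.25 * (f0 - f2) ** 2
    b1 = (f1 - 2 * f2 + f3) ** 2 + 0.25 * (f1 - f3) ** 2
    # nonlinear weights around the linear weights 1/2, 1/2
    a0 = 0.5 / (eps + b0) ** 2
    a1 = 0.5 / (eps + b1) ** 2
    return (a0 * q0 + a1 * q1) / (a0 + a1)

def interp_error(h):
    """Midpoint interpolation error for sin(x) on a stencil of spacing h."""
    xs = 1.0 + h * np.array([-1.5, -0.5, 0.5, 1.5])
    return abs(weno4_midpoint(np.sin(xs)) - np.sin(1.0))
```

Halving h should shrink the smooth-data error by roughly 2^4 = 16, while near a step ([0, 0, 0, 1]) the weights essentially switch off the substencil that crosses the jump, avoiding the overshoot of the plain cubic.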

Read this paper on arXiv…

G. Janett, O. Steiner, E. Ballester, et al.
Mon, 25 Oct 21
74/76

Comments: N/A

Analytic Correlation of Inflationary Potential to Power Spectrum Shape: Limits of Validity, and `No-Go' for Small Field Model Analytics [CEA]

http://arxiv.org/abs/2110.10557


The primordial power spectrum informs the possible inflationary histories of our universe. Given a power spectrum, the ensuing cosmic microwave background is calculated and compared to the observed one. Thus, one focus of modern cosmology is building well-motivated inflationary models that predict the primordial power spectrum observables. The common practice uses analytic terms for the scalar spectral index $n_s$ and the index running $\alpha$, forgoing the effort required to evaluate the model numerically. However, the validity of these terms has never been rigorously probed and relies on perturbative methods, which may lose their efficacy for large perturbations. The requirement for more accurate theoretical predictions becomes crucial with the advent of highly sensitive measuring instruments. This paper probes the limits of the perturbative treatment that connects inflationary potential parameters to primordial power spectrum observables. We show that the validity of analytic approximations of the scalar index roughly respects the large-field/small-field dichotomy. We supply an easily calculated measure for relative perturbation amplitude and show that, for large-field models, the validity of analytical terms extends to $\sim 3\%$ perturbation relative to a power-law inflation model. Conversely, the analytical treatment loses its validity for small-field models with as little as $0.1\%$ perturbation relative to the small-field test case. By employing the most general artificial neural networks and multinomial functions up to the twentieth degree and demonstrating their shortcomings, we show that no reasonable analytic expression correlating small-field models to the observables they yield exists. Finally, we discuss the possible implications of this work and supply the validity heuristic for large and small field models.

Read this paper on arXiv…

I. Wolfson
Fri, 22 Oct 21
85/133

Comments: 20 pages, 13 figures, 2 tables. Code package INSANE will be made public shortly

GP-MOOD: A positive-preserving high-order finite volume method for hyperbolic conservation laws [CL]

http://arxiv.org/abs/2110.08683


We present an a posteriori shock-capturing finite volume algorithm called GP-MOOD that solves a compressible hyperbolic conservative system at high-order solution accuracy (e.g., third-, fifth-, and seventh-order) in multiple spatial dimensions. The GP-MOOD method combines two methodologies, the polynomial-free spatial reconstruction methods of GP (Gaussian Process) and the a posteriori detection algorithms of MOOD (Multidimensional Optimal Order Detection). The spatial approximation of our GP-MOOD method uses GP's unlimited spatial reconstruction, which builds upon our previous studies on GP reported in Reyes et al., Journal of Scientific Computing, 76 (2017) and Journal of Computational Physics, 381 (2019). This paper focuses on extending GP's flexible variability of spatial accuracy to an a posteriori detection formalism based on the MOOD approach. We show that GP's polynomial-free reconstruction provides a seamless pathway to MOOD's order-cascading formalism by utilizing GP's novel property of variable (2R+1)th-order spatial accuracy on a multidimensional GP stencil defined by the GP radius R, whose size is smaller than that of the standard polynomial MOOD methods. The resulting GP-MOOD method is positivity-preserving. We examine the numerical stability and accuracy of GP-MOOD on smooth and discontinuous flows in multiple spatial dimensions without resorting to any conventional, computationally expensive a priori nonlinear limiting mechanism to maintain numerical stability.

Read this paper on arXiv…

R. Bourgeois and D. Lee
Tue, 19 Oct 21
1/98

Comments: N/A

Accurate Baryon Acoustic Oscillations reconstruction via semi-discrete optimal transport [CEA]

http://arxiv.org/abs/2110.08868


Optimal transport theory has recently reemerged as a vastly resourceful field of mathematics with elegant applications across physics and computer science. Harnessing methods from geometry processing, we report on the efficient implementation for a specific problem in cosmology — the reconstruction of the linear density field from low redshifts, in particular the recovery of the Baryonic Acoustic Oscillation (BAO) scale. We demonstrate our algorithm's accuracy by retrieving the BAO scale in noise-less cosmological simulations that are dedicated to cancel cosmic variance; we find uncertainties to be reduced by a factor of 4.3 compared with performing no reconstruction, and a factor of 3.1 compared with standard reconstruction.

Read this paper on arXiv…

S. Hausegger, B. Lévy and R. Mohayaee
Tue, 19 Oct 21
48/98

Comments: Comments welcome! 5 pages excluding references, 2 figures, 1 table

Adjustment of force-gradient operator in symplectic methods [CL]

http://arxiv.org/abs/2110.03685


Many force-gradient explicit symplectic integration algorithms have been designed in the literature for the Hamiltonian $H=T(\mathbf{p})+V(\mathbf{q})$ with a kinetic energy $T(\mathbf{p})=\mathbf{p}^2/2$. When the force-gradient operator is appropriately adjusted as a new operator, they are still suitable for a class of Hamiltonian problems $H=K(\mathbf{p},\mathbf{q})+V(\mathbf{q})$ with \emph{integrable} part $K(\mathbf{p},\mathbf{q}) = \sum_{i=1}^{n} \sum_{j=1}^{n}a_{ij}p_ip_j+\sum_{i=1}^{n} b_ip_i$, where $a_{ij}=a_{ij}(\textbf{q})$ and $b_i=b_i(\textbf{q})$ are functions of the coordinates $\textbf{q}$. The newly adjusted operator is not a force-gradient operator but is similar to the momentum-version operator associated with the potential $V$. The newly extended (or adjusted) algorithms are no longer solvers of the original Hamiltonian, but are solvers of slightly modified Hamiltonians. They are explicit symplectic integrators with time reversibility and time symmetry. Numerical tests show that the standard symplectic integrators without the new operator are generally poorer than the corresponding extended methods with the new operator in computational accuracy and efficiency. The optimized methods have better accuracy than the corresponding non-optimized methods. Among the tested symplectic methods, the two extended optimized seven-stage fourth-order methods of Omelyan, Mryglod and Folk exhibit the best numerical performance. As a result, one of the two optimized algorithms is used to study the orbital dynamical features of a modified Hénon-Heiles system and a spring pendulum. These extended integrators allow for integrations of Hamiltonian problems such as the spiral structure in self-consistent models of rotating galaxies and the spiral arms in galaxies.
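For context, the force-gradient schemes above refine the basic operator splitting for separable Hamiltonians. The sketch below is only that baseline, a second-order kick-drift-kick (leapfrog/Strang) integrator applied to the standard Hénon-Heiles potential, not the adjusted force-gradient methods of the paper; it illustrates the bounded energy error that characterizes explicit symplectic integration.

```python
import numpy as np

def hh_grad(q):
    """Gradient of the Henon-Heiles potential V = (x^2 + y^2)/2 + x^2 y - y^3/3."""
    x, y = q
    return np.array([x + 2.0 * x * y, y + x * x - y * y])

def hh_energy(q, p):
    x, y = q
    return 0.5 * (p @ p) + 0.5 * (x * x + y * y) + x * x * y - y ** 3 / 3.0

def leapfrog(q, p, dt, n_steps, grad=hh_grad):
    """Kick-drift-kick splitting: explicit, symplectic, time-reversible,
    second-order accurate for separable H = p^2/2 + V(q)."""
    for _ in range(n_steps):
        p = p - 0.5 * dt * grad(q)   # half kick
        q = q + dt * p               # full drift
        p = p - 0.5 * dt * grad(q)   # half kick
    return q, p

q0, p0 = np.array([0.1, -0.1]), np.array([0.1, 0.05])
qf, pf = leapfrog(q0, p0, dt=0.01, n_steps=20000)
energy_drift = abs(hh_energy(qf, pf) - hh_energy(q0, p0))
```

Force-gradient (and momentum-version) operators add commutator-based correction stages to such splittings to raise the order while keeping the scheme explicit and symplectic.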

Read this paper on arXiv…

L. Zhang, X. Wu and E. Liang
Mon, 11 Oct 21
5/58

Comments: 14 pages, 9 figures

Towards Adaptive Simulations of Dispersive Tsunami Propagation from an Asteroid Impact [CL]

http://arxiv.org/abs/2110.01420


The long-term goal of this work is the development of high-fidelity simulation tools for dispersive tsunami propagation. A dispersive model is especially important for short wavelength phenomena such as an asteroid impact into the ocean, and is also important in modeling other events where the simpler shallow water equations are insufficient. Adaptive simulations are crucial to bridge the scales from deep ocean to inundation, but have difficulties with the implicit system of equations that results from dispersive models. We propose a fractional step scheme that advances the solution on separate patches with different spatial resolutions and time steps. We show a simulation with 7 levels of adaptive meshes and onshore inundation resulting from a simulated asteroid impact off the coast of Washington. Finally, we discuss a number of open research questions that need to be resolved for high quality simulations.

Read this paper on arXiv…

M. Berger and R. LeVeque
Tue, 5 Oct 21
11/72

Comments: 16 pages, 5 figures, submitted to Proc. International Congress of Mathematicians, 2022

An Improved Approach to Orbital Determination and Prediction of Near-Earth Asteroids: Computer Simulation, Modeling and Test Measurements [EPA]

http://arxiv.org/abs/2109.07397


In this article, theory-based analytical methodologies of modern astrophysics are operated alongside a test research-grade telescope to image a near-Earth asteroid and determine its orbit from original observations, measurements, and calculations. Subsequently, its intrinsic orbital path has been calculated, including the chance that it would impact Earth in the time ahead. More specifically, this case study applies Gauss's method to obtain the orbital plane components of a planetesimal, elaborating on and extending our probes of a selected near-Earth asteroid (namely 12538 (1998 OH)) through observational data acquired over a six-week period. Utilizing the captured CCD (charge-coupled device) snapshots, we simulate and calculate the orbit of our asteroid, as outlined in detailed explanations. The uncertainties and deviations from the expected values are derived through a systematic statistical analysis, in order to judge whether our empirical findings are truly reliable and representative measurements. Concluding the study by narrating what could have caused any such discrepancies in the findings, measures are put forward that could be undertaken to improve the test case for future investigations. Following the calculation of orbital elements and their uncertainties using Monte Carlo analysis, simulations were executed with various sample celestial bodies to derive a plausible prediction regarding the fate of asteroid 1998 OH. Finally, the astrometric and photometric data, after their precise verification, were officially submitted to the Minor Planet Center: an organization hosted by the Center for Astrophysics, Harvard and Smithsonian, and funded by NASA, for keeping track of the asteroid's potential trajectories.

Read this paper on arXiv…

M. Farae, C. Woo and A. Hu
Thu, 16 Sep 21
28/54

Comments: N/A

Machine Learning for Discovering Effective Interaction Kernels between Celestial Bodies from Ephemerides [EPA]

http://arxiv.org/abs/2108.11894


Building accurate and predictive models of the underlying mechanisms of celestial motion has inspired fundamental developments in theoretical physics. Candidate theories seek to explain observations and predict future positions of planets, stars, and other astronomical bodies as faithfully as possible. We use a data-driven learning approach, developed in Lu et al. (2019) and extended in Zhong et al. (2020), to derive a stable and accurate model for the motion of celestial bodies in our Solar System. Our model is based on a collective dynamics framework and is learned from the NASA Jet Propulsion Laboratory's development ephemerides. By modeling the major astronomical bodies in the Solar System as pairwise interacting agents, our learned model generates extremely accurate dynamics that preserve not only intrinsic geometric properties of the orbits, but also highly sensitive features of the dynamics, such as perihelion precession rates. Our learned model can provide a unified explanation of the observation data, especially in terms of reproducing the perihelion precession of Mars, Mercury, and the Moon. Moreover, our model outperforms Newton's law of universal gravitation in all cases and performs similarly to, and for the Moon exceeds, the Einstein-Infeld-Hoffmann equations derived from Einstein's theory of general relativity.

Read this paper on arXiv…

M. Zhong, J. Miller and M. Maggioni
Fri, 27 Aug 21
52/67

Comments: N/A

Big Data in Astroinformatics — Compression of Scanned Astronomical Photographic Plates [IMA]

http://arxiv.org/abs/2108.08399


The construction of databases of Scanned Astronomical Photographic Plates (SAPPs) and an SVD-based image compression algorithm are considered. Some examples of compression for different plates are shown.
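SVD compression keeps only the leading singular triplets of the image matrix, which by the Eckart-Young theorem gives the best low-rank approximation in the Frobenius norm. The sketch below illustrates the idea on a synthetic stand-in for a scanned plate (smooth background plus a few Gaussian "stars" and noise); the image model and rank choices are illustrative, not taken from the paper.

```python
import numpy as np

def svd_compress(img, rank):
    """Best rank-r approximation of a 2-D image in the Frobenius norm,
    via truncated singular value decomposition (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# Synthetic stand-in for a scanned plate: smooth background, a few
# Gaussian "stars", and pixel noise (all values are illustrative).
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128].astype(float)
img = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 5000.0)
for cx, cy in [(30, 40), (90, 20), (60, 100)]:
    img += np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 20.0)
img += 0.01 * rng.standard_normal(img.shape)

def rel_error(rank):
    """Relative Frobenius reconstruction error at a given rank."""
    return np.linalg.norm(img - svd_compress(img, rank)) / np.linalg.norm(img)
```

Storing a rank-r approximation of an m-by-n plate takes r(m + n + 1) numbers instead of mn, so for plate scans with smooth large-scale structure even modest ranks compress heavily.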

Read this paper on arXiv…

V. Kolev
Fri, 20 Aug 21
23/59

Comments: 9 pages, 4 figures, International Conference on Big Data, Knowledge and Control Systems Engineering,5 – 6 November 2015, Sofia, Bulgaria

Imaging from STIX visibility amplitudes [SSA]

http://arxiv.org/abs/2108.04901


Aims: To provide the first demonstration of STIX Fourier-transform X-ray imaging using semi-calibrated (amplitude-only) visibility data acquired during Solar Orbiter's cruise phase. Methods: We use a parametric imaging approach by which STIX visibility amplitudes are fitted by means of two non-linear optimization methods: a fast meta-heuristic technique inspired by social behavior, and a Bayesian Monte Carlo sampling method, which, although slower, provides better quantification of uncertainties. Results: When applied to a set of solar flare visibility amplitudes recorded by STIX on November 18, 2020, the two parametric methods provide very coherent results. The analysis also demonstrates the ability of STIX to reconstruct high-time-resolution information and, from a spectral viewpoint, shows the reliability of a double-source scenario consistent with a thermal versus nonthermal interpretation. Conclusions: In this preliminary analysis of STIX imaging based only on visibility amplitudes, we formulate the imaging problem as a non-linear parametric problem, which we address by means of two high-performance optimization techniques that both showed the ability to sample the parametric space in an effective fashion, thus avoiding local minima.

Read this paper on arXiv…

P. Massa, E. Perracchione, S. Garbarino, et. al.
Thu, 12 Aug 21
38/62

Comments: N/A

A new non-linear instability for scalar fields [CEA]

http://arxiv.org/abs/2107.14215


In this letter we introduce the non-linear partial differential equation (PDE) $\partial^2_{\tau} \pi \propto (\vec\nabla \pi)^2$ showing a new type of instability. Such equations appear in the effective field theory (EFT) of dark energy for the $k$-essence model as well as in many other theories based on the EFT formalism. We demonstrate the occurrence of instability in the cosmological context using a relativistic $N$-body code, and we study it mathematically in 3+1 dimensions within spherical symmetry. We show that this term dominates for the low speed of sound limit where some important linear terms are suppressed.

Read this paper on arXiv…

F. Hassani, P. Shi, J. Adamek, et. al.
Fri, 30 Jul 21
51/71

Comments: 5 pages, 2 figures

Slepian Scale-Discretised Wavelets on the Sphere [CL]

http://arxiv.org/abs/2106.02023


This work presents the construction of a novel spherical wavelet basis designed for incomplete spherical datasets, i.e. datasets with missing data in a particular region of the sphere. The eigenfunctions of the Slepian spatial-spectral concentration problem (the Slepian functions) are a set of orthogonal basis functions which exist within a defined region. Slepian functions allow one to compute a convolution on the incomplete sphere by leveraging the recently proposed sifting convolution and extending it to any set of basis functions. Through a tiling of the Slepian harmonic line one may construct scale-discretised wavelets. An illustration is presented based on an example region on the sphere defined by the topographic map of the Earth. The Slepian wavelets and corresponding wavelet coefficients are constructed from this region and are used in a straightforward denoising example.

Read this paper on arXiv…

P. Roddy and J. McEwen
Fri, 4 Jun 21
12/71

Comments: 10 pages, 8 figures

Fragmentation model and strewn field estimation for meteoroids entry [EPA]

http://arxiv.org/abs/2105.14776


Every day thousands of meteoroids enter the Earth’s atmosphere. The vast majority burn up harmlessly during the descent, but the larger objects survive, occasionally experiencing intense fragmentation events, and reach the ground. These events can pose a threat to a village or a small city; therefore, models of asteroid fragmentation, along with accurate post-breakup trajectory and strewn field estimation, are needed to enable a reliable risk assessment. In this work, a methodology to describe meteoroid entry, fragmentation, descent, and strewn field is presented by means of a continuum approach. At breakup, a modified version of the NASA Standard Breakup Model is used to generate the distribution of the fragments in terms of their area-to-mass ratio and ejection velocity. This distribution, combined with the meteoroid state, is directly propagated using the continuity equation coupled with the non-linear entry dynamics. At each time step, the probability density evolution of the fragments is reconstructed using GMM interpolation. Using this information, it is then possible to estimate the meteoroid’s ground impact probability. This approach departs from the current state-of-the-art models: it has the flexibility to include large fragmentation events while maintaining a continuum formulation for a better physical representation of the phenomenon. The methodology is also characterised by a modular structure, so that updated asteroid fragmentation models can be readily integrated into the framework, allowing a continuously improving prediction of re-entry and fragmentation events. The propagation of the fragments’ density and its reconstruction, currently considering only one fragmentation point, is first compared against Monte Carlo simulations, and then against real observations. Both deceleration due to atmospheric drag and ablation due to aerothermodynamic effects have been considered.

Read this paper on arXiv…

S. Limonta, M. Trisolini, S. Frey, et. al.
Tue, 1 Jun 21
60/72

Comments: 29 pages, 26 figures, published in Icarus

Sparse image reconstruction on the sphere: a general approach with uncertainty quantification [CL]

http://arxiv.org/abs/2105.04935


Inverse problems defined naturally on the sphere are becoming increasingly of interest. In this article we provide a general framework for evaluation of inverse problems on the sphere, with a strong emphasis on flexibility and scalability. We consider flexibility with respect to the prior selection (regularization), the problem definition – specifically the problem formulation (constrained/unconstrained) and problem setting (analysis/synthesis) – and optimization adopted to solve the problem. We discuss and quantify the trade-offs between problem formulation and setting. Crucially, we consider the Bayesian interpretation of the unconstrained problem which, combined with recent developments in probability density theory, permits rapid, statistically principled uncertainty quantification (UQ) in the spherical setting. Linearity is exploited to significantly increase the computational efficiency of such UQ techniques, which in some cases are shown to permit analytic solutions. We showcase this reconstruction framework and UQ techniques on a variety of spherical inverse problems. The code discussed throughout is provided under a GNU general public license, in both C++ and Python.

Read this paper on arXiv…

M. Price, L. Pratley and J. McEwen
Mon, 17 May 21
4/55

Comments: N/A

Long-Term Orbit Dynamics of Decommissioned Geostationary Satellites [EPA]

http://arxiv.org/abs/2104.01240


In nominal mission scenarios, geostationary satellites perform end-of-life orbit maneuvers to reach suitable disposal orbits, where they do not interfere with operational satellites. This research investigates the long-term orbit evolution of decommissioned geostationary satellites under the assumption that the disposal maneuver does not occur and the orbit evolves with no control. The dynamical model accounts for all the relevant harmonics of the gravity field at the altitude of geostationary orbits, as well as solar radiation pressure and third-body perturbations caused by the Moon and the Sun. Orbit propagations are performed using two algorithms based on different equations of motion and numerical integration methods: (i) Gauss planetary equations for modified equinoctial elements with a Runge-Kutta numerical integration scheme based on 8(7)th-order Dormand and Prince formulas; (ii) Cartesian state equations of motion in an Earth-fixed frame with a Runge-Kutta Fehlberg 7/8 integration scheme. The numerical results exhibit excellent agreement over integration times of decades. Some well-known phenomena emerge, such as the longitudinal drift due to the resonance between the orbital motion and Earth’s rotation, attributable to the J22 term of the geopotential. In addition, the third-body perturbation due to the Sun and the Moon causes two major effects: (a) a precession of the orbital plane, and (b) complex longitudinal dynamics. This study proposes an analytical approach for the prediction of the precessional motion and shows its agreement with the orbit evolution obtained numerically. Moreover, long-term orbit propagations show that the above-mentioned complex longitudinal dynamics persist over time scales of several decades. Frequent and unpredictable migrations toward different longitude regions occur, in contrast with the known effects due only to the J22 perturbation.
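A hedged sketch of the second formulation (Cartesian state equations with a high-order Runge-Kutta integrator; the unperturbed two-body model and parameters below are illustrative, not the paper's full force model) propagates a near-geostationary orbit with SciPy's DOP853 method, which implements 8th-order Dormand and Prince formulas, and monitors the orbital-energy drift:

```python
import numpy as np
from scipy.integrate import solve_ivp

mu = 398600.4418                                   # Earth GM, km^3/s^2
a_geo = 42164.0                                    # ~geostationary radius, km
r0 = np.array([a_geo, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(mu / a_geo), 0.0])     # circular speed, km/s

def two_body(t, y):
    # Cartesian equations of motion for the unperturbed two-body problem.
    r = y[:3]
    acc = -mu * r / np.linalg.norm(r) ** 3
    return np.concatenate([y[3:], acc])

T = 2.0 * np.pi * np.sqrt(a_geo ** 3 / mu)         # orbital period, s
sol = solve_ivp(two_body, (0.0, 10.0 * T), np.concatenate([r0, v0]),
                method="DOP853", rtol=1e-10, atol=1e-10)

def energy(y):
    # Specific orbital energy, conserved by the exact dynamics.
    return 0.5 * np.dot(y[3:], y[3:]) - mu / np.linalg.norm(y[:3])

drift = abs(energy(sol.y[:, -1]) - energy(sol.y[:, 0]))
```

Over ten orbits the energy drift stays tiny at these tolerances; in the paper, agreement between the two formulations is checked over decades with the perturbed dynamics.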

Read this paper on arXiv…

S. Proietti, R. Flores, E. Fantino, et. al.
Tue, 6 Apr 21
42/55

Comments: N/A

Simulations of Dynamical Gas-Dust Circumstellar Disks: Going Beyond the Epstein Regime [EPA]

http://arxiv.org/abs/2102.09155


In circumstellar disks, the size of dust particles varies from submicron to several centimeters, while planetesimals have sizes of hundreds of kilometers. Therefore, various regimes for the aerodynamic drag between solid bodies and gas can be realized in these disks, depending on the grain sizes and velocities: Epstein, Stokes, and Newton, as well as transitional regimes between them. For small bodies moving in the Epstein regime, the time required to establish the constant relative velocity between the gas and bodies can be much less than the dynamical time scale for the problem, i.e. the time for the rotation of the disk about the central body. In addition, the dust may be concentrated in individual regions of the disk, making it necessary to take into account the transfer of momentum between the dust and gas. It is shown that, for a system of equations for gas and monodisperse dust, a semi-implicit first-order time-integration scheme, in which the interphase interaction is calculated implicitly while other forces, such as the pressure gradient and gravity, are calculated explicitly, is suitable for stiff problems with intense interphase interactions and for computations of the drag in non-linear regimes. The piecewise drag coefficient widely used in astrophysical simulations has a discontinuity at some values of the Mach and Knudsen numbers that are realized in a circumstellar disk. A continuous drag coefficient is presented, which corresponds to experimental dependences obtained for various drag regimes.
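The semi-implicit treatment of stiff drag can be illustrated on a zero-dimensional toy (our sketch; the stopping time, time step, and dust-to-gas ratio are arbitrary illustrative values): the linear mutual drag is updated implicitly, which stays stable even when the stopping time is far below the time step, while total momentum is conserved exactly.

```python
import numpy as np

t_s = 1e-6         # stopping time (stiff regime)
dt = 1e-2          # time step >> t_s
eps = 0.01         # dust-to-gas density ratio
vg, vd = 1.0, 0.0  # initial gas and dust velocities

for _ in range(100):
    # Implicit (backward Euler) update of the relative velocity for
    # dv/dt = -(1 + eps) * dv / t_s, exact to solve in the linear case.
    dv = (vg - vd) / (1.0 + dt * (1.0 + eps) / t_s)
    # The centre-of-mass velocity is untouched by the mutual drag.
    vcm = (vg + eps * vd) / (1.0 + eps)
    vg = vcm + eps * dv / (1.0 + eps)
    vd = vcm - dv / (1.0 + eps)

momentum = vg + eps * vd  # total momentum per unit gas mass
```

An explicit update with the same dt would blow up, since dt / t_s is about 10^4 here; the implicit form simply relaxes dv toward zero.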

Read this paper on arXiv…

O. Stoyanovskaya, F. Okladnikov, E. Vorobyov, et. al.
Fri, 19 Feb 21
19/64

Comments: 24 pages, 13 figures

Feature augmentation for the inversion of the Fourier transform with limited data [CL]

http://arxiv.org/abs/2102.03755


We investigate an interpolation/extrapolation method that, given scattered observations of the Fourier transform, approximates its inverse. The interpolation algorithm takes advantage of modelling the available data via a shape-driven interpolation based on Variably Scaled Kernels (VSKs), whose implementation is here tailored for inverse problems. The so-constructed interpolants are used as inputs for a standard iterative inversion scheme. After providing theoretical results concerning the spectrum of the VSK collocation matrix, we test the method on astrophysical imaging benchmarks.
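A minimal one-dimensional sketch of the VSK idea (our reading of the construction; the scale function below is hypothetical, whereas in the paper it is shape-driven and tailored to the inverse problem): each node is augmented with an extra coordinate given by a scale function, and a standard Gaussian kernel is applied in the augmented space.

```python
import numpy as np

def psi(x):
    # Hypothetical scale function; in a VSK it encodes prior information
    # about the underlying signal.
    return 0.5 * np.abs(np.sin(3.0 * x))

def vsk_interpolant(x_nodes, f_nodes, eps=6.0):
    # Augment nodes to (x, psi(x)) and collocate a Gaussian kernel there.
    aug = np.column_stack([x_nodes, psi(x_nodes)])
    d2 = ((aug[:, None, :] - aug[None, :, :]) ** 2).sum(-1)
    coef = np.linalg.solve(np.exp(-eps ** 2 * d2), f_nodes)

    def s(x):
        xa = np.column_stack([np.atleast_1d(x), psi(np.atleast_1d(x))])
        d2x = ((xa[:, None, :] - aug[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps ** 2 * d2x) @ coef

    return s

x_nodes = np.linspace(0.0, 1.0, 15)
f_nodes = np.cos(2.0 * np.pi * x_nodes)
s = vsk_interpolant(x_nodes, f_nodes)
node_err = np.max(np.abs(s(x_nodes) - f_nodes))  # interpolation at the nodes
```

The spectrum of the collocation matrix, analysed theoretically in the paper, governs how well this solve behaves as nodes cluster or the shape parameter varies.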

Read this paper on arXiv…

E. Perracchione, A. Massone and M. Piana
Tue, 9 Feb 21
5/87

Comments: N/A

Reduced Order and Surrogate Models for Gravitational Waves [CL]

http://arxiv.org/abs/2101.11608


We present an introduction to some of the state of the art in reduced order and surrogate modeling in gravitational wave (GW) science. Approaches that we cover include Principal Component Analysis, Proper Orthogonal Decomposition, the Reduced Basis approach, the Empirical Interpolation Method, Reduced Order Quadratures, and Compressed Likelihood evaluations. We divide the review into three parts: representation/compression of known data, predictive models, and data analysis. The targeted audience is that of practitioners in GW science, a field in which building predictive models and data analysis tools that are both accurate and fast to evaluate, especially when dealing with large amounts of data and intensive computations, is necessary yet can be challenging. As such, practical presentations and, sometimes, heuristic approaches are here preferred over rigor when the latter is not available. This review aims to be self-contained, within reasonable page limits, with modest prerequisites (at the undergraduate level) in mathematics, scientific computing, and other disciplines. Emphasis is placed on optimality, as well as the curse of dimensionality and approaches that might have the promise of beating it. We also review most of the state of the art of GW surrogates. Some numerical algorithms, conditioning details, scalability, parallelization and other practical points are discussed. The approaches presented are to a large extent non-intrusive and data-driven and can therefore be applicable to other disciplines. We close with open challenges in high dimension surrogates, which are not unique to GW science.

Read this paper on arXiv…

M. Tiglio and A. Villanueva
Thu, 28 Jan 21
55/64

Comments: Invited article for Living Reviews in Relativity. 93 pages

Propagation and reconstruction of re-entry uncertainties using continuity equation and simplicial interpolation [CL]

http://arxiv.org/abs/2101.10825


This work proposes a continuum-based approach for the propagation of uncertainties in the initial conditions and parameters for the analysis and prediction of spacecraft re-entries. Using the continuity equation together with the re-entry dynamics, the joint probability distribution of the uncertainties is propagated in time for specific sampled points. At each time instant, the joint probability distribution function is then reconstructed from the scattered data using a gradient-enhanced linear interpolation based on a simplicial representation of the state space. Uncertainties in the initial conditions at re-entry and in the ballistic coefficient for three representative test cases are considered: a three-state and a six-state steep Earth re-entry and a six-state unguided lifting entry at Mars. The paper shows the comparison of the proposed method with Monte Carlo based techniques in terms of quality of the obtained marginal distributions and runtime as a function of the number of samples used.

Read this paper on arXiv…

M. Trisolini and C. Colombo
Wed, 27 Jan 21
17/68

Comments: N/A

Visibility Interpolation in Solar Hard X-ray Imaging: Application to RHESSI and STIX [IMA]

http://arxiv.org/abs/2012.14007


Space telescopes for solar hard X-ray imaging provide observations made of sampled Fourier components of the incoming photon flux. The aim of this study is to design an image reconstruction method relying on enhanced visibility interpolation in the Fourier domain. The interpolation-based method is applied on synthetic visibilities generated by means of the simulation software implemented within the framework of the Spectrometer/Telescope for Imaging X-rays (STIX) mission on board Solar Orbiter. An application to experimental visibilities observed by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) is also considered. In order to interpolate these visibility data we have utilized an approach based on Variably Scaled Kernels (VSKs), which are able to realize feature augmentation by exploiting prior information on the flaring source and which are used here, for the first time, for image reconstruction purposes. When compared to an interpolation-based reconstruction algorithm previously introduced for RHESSI, VSKs offer significantly better performance, particularly in the case of STIX imaging, which is characterized by a notably sparse sampling of the Fourier domain. In the case of RHESSI data, this novel approach is particularly reliable when either the flaring sources are characterized by narrow, ribbon-like shapes or high-resolution detectors are utilized for observations. The use of VSKs for interpolating hard X-ray visibilities allows a notable image reconstruction accuracy when the information on the flaring source is encoded by a small set of scattered Fourier data and when the visibility surface is affected by significant oscillations in the frequency domain.

Read this paper on arXiv…

E. Perracchione, P. Massa, A. Massone, et. al.
Tue, 29 Dec 20
43/66

Comments: N/A

A novel structure preserving semi-implicit finite volume method for viscous and resistive magnetohydrodynamics [CL]

http://arxiv.org/abs/2012.11218


In this work we introduce a novel semi-implicit structure-preserving finite-volume/finite-difference scheme for the viscous and resistive equations of magnetohydrodynamics (MHD), based on an appropriate 3-split of the governing PDE system, which is decomposed into a first convective subsystem, a second subsystem involving the coupling of the velocity field with the magnetic field, and a third subsystem involving the pressure-velocity coupling. The nonlinear convective terms are discretized explicitly, while the remaining two subsystems, accounting for the Alfvén waves and the magneto-acoustic waves, are treated implicitly. The final algorithm is at least formally constrained only by a mild CFL stability condition depending on the velocity field of the pure hydrodynamic convection. To preserve the divergence-free constraint of the magnetic field exactly at the discrete level, a proper set of overlapping dual meshes is employed. The resulting linear algebraic systems are shown to be symmetric and can therefore be very efficiently solved by means of a standard matrix-free conjugate gradient algorithm. One of the peculiarities of the presented algorithm is that the magnetic field is defined on the edges of the main grid, while the electric field is on the faces. The final scheme can be regarded as a novel shock-capturing, conservative and structure-preserving semi-implicit scheme for the nonlinear viscous and resistive MHD equations. Several numerical tests are presented to show the main features of our novel solver: linear stability in the sense of Lyapunov is verified at a prescribed constant equilibrium solution; second-order convergence is numerically estimated; shock-capturing capabilities are proven against a standard set of stringent MHD shock problems; accuracy and robustness are verified against a nontrivial set of 2- and 3-dimensional MHD problems.

Read this paper on arXiv…

F. Fambri
Tue, 22 Dec 20
27/89

Comments: 43 pages, 22 figures

A fast semi-discrete optimal transport algorithm for a unique reconstruction of the early Universe [CEA]

http://arxiv.org/abs/2012.09074


We leverage powerful mathematical tools stemming from optimal transport theory and transform them into an efficient algorithm to reconstruct the fluctuations of the primordial density field, built on solving the Monge–Ampère–Kantorovich equation. Our algorithm computes the optimal transport between an initial uniform continuous density field, partitioned into Laguerre cells, and a final input set of discrete point masses, linking the early to the late Universe. While existing early universe reconstruction algorithms based on fully discrete combinatorial methods are limited to a few hundred thousand points, our algorithm scales up well beyond this limit, since it takes the form of a well-posed smooth convex optimization problem, solved using a Newton method. We run our algorithm on cosmological $N$-body simulations, from the AbacusCosmos suite, and reconstruct the initial positions of $\mathcal{O}(10^7)$ particles within a few hours with an off-the-shelf personal computer. We show that our method allows a unique, fast and precise recovery of subtle features of the initial power spectrum, such as the baryonic acoustic oscillations.
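The one-dimensional analogue of the semi-discrete problem is small enough to sketch end-to-end. Below is a hedged toy version (our construction, not the authors' 3-D code): a uniform source density on [0, 1], quadratic cost, and a Newton iteration on the dual weights until every Laguerre cell, here an interval, carries equal mass.

```python
import numpy as np

y = np.sort(np.array([0.1, 0.25, 0.3, 0.7, 0.9]))  # target point masses
n = len(y)
w = np.zeros(n)                                    # Kantorovich dual weights

def cell_bounds(w):
    # Interior Laguerre-cell boundaries for quadratic cost in 1-D.
    b = (y[:-1] + y[1:]) / 2.0 + (w[1:] - w[:-1]) / (2.0 * (y[1:] - y[:-1]))
    return np.concatenate([[0.0], np.clip(b, 0.0, 1.0), [1.0]])

def masses(w):
    return np.diff(cell_bounds(w))

for _ in range(20):
    g = masses(w) - 1.0 / n            # gradient: cell masses minus targets
    if np.max(np.abs(g)) < 1e-13:
        break
    # Tridiagonal Hessian d(mass_j)/d(w_k), assembled densely for clarity.
    H = np.zeros((n, n))
    c = 1.0 / (2.0 * (y[1:] - y[:-1]))
    for j in range(n - 1):
        H[j, j] -= c[j]
        H[j, j + 1] += c[j]
        H[j + 1, j] += c[j]
        H[j + 1, j + 1] -= c[j]
    # Weights are defined up to an additive constant: pin w[0] = 0.
    H[0, :] = 0.0
    H[0, 0] = 1.0
    g[0] = w[0]
    w = w - np.linalg.solve(H, g)

mass_err = np.max(np.abs(masses(w) - 1.0 / n))
bounds = cell_bounds(w)
```

Because the cell masses are affine in the weights in this 1-D setting, the Newton step is exact and the cell boundaries land on the uniform quantiles 0.2, 0.4, 0.6, 0.8; the paper's 3-D problem needs damping, a regular triangulation of the Laguerre cells, and sparse linear algebra, but the convex structure is the same.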

Read this paper on arXiv…

B. Lévy, R. Mohayaee and S. Hausegger
Thu, 17 Dec 20
62/85

Comments: 22 pages

First approximation for spacecraft motion relative to (99942) Apophis [EPA]

http://arxiv.org/abs/2012.06781


We aim at providing a preliminary approach to the dynamics of a spacecraft in orbit about the asteroid (99942) Apophis during its Earth close approach. The physical properties from the polyhedral shape of the target are derived by assigning each tetrahedron to a point mass in its center. That considerably reduces the computation processing time compared to previous methods to evaluate the gravitational potential. The surfaces of section close to Apophis are built considering or not the gravitational perturbations of the Sun, the planets, and the SRP. The Earth is the one that most affects the investigated region, causing the vast majority of the orbits to collide with or escape from the system. Moreover, from numerical analysis of orbits started on March 1, 2029, the least perturbed region is characterized by the variation of the semimajor axis of 40-day orbits, which does not exceed 2 km very close to the central body ($a < 4$ km, $e < 0.4$). However, no region investigated could be a possible option for inserting a spacecraft into natural orbits around Apophis during the close approach with our planet. Finally, to solve the stabilization problem in the system, we apply a robust path-following control law to control the orbital geometry of the spacecraft. We present an example of successful operation of our orbit control with a total $\Delta v$ of 0.495 m/s for 60 days. All our results are gathered in the CPM-ASTEROID database, which will be regularly updated by considering other asteroids.

Read this paper on arXiv…

S. Aljbaae, D. Sanchez, A. Prado, et. al.
Tue, 15 Dec 20
106/136

Comments: 24 pages, 20 figures

Fast error-safe MOID computation involving hyperbolic orbits [IMA]

http://arxiv.org/abs/2011.12148


We extend our previous algorithm computing the minimum orbital intersection distance (MOID) to include hyperbolic orbits, and mixed ellipse–hyperbola combinations. The MOID is computed by finding all stationary points of the distance function, equivalent to finding all the roots of an algebraic polynomial equation of 16th degree. The updated algorithm retains its error-safe handling of numerical errors, and benchmarks confirmed its numerical reliability together with high computing performance.
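For contrast with the algebraic approach, a naive brute-force MOID estimate (purely illustrative orbital elements; elliptic case only, since hyperbolae would need the true anomaly limited to the asymptote range) samples both orbits densely and takes the minimum pairwise distance; robust polynomial root-finding, as in the paper, avoids the resolution limits of such sampling.

```python
import numpy as np

def orbit_points(a, e, inc, n=800):
    # Conic in its orbital plane (true-anomaly sweep), then a simple
    # rotation by the inclination about the x-axis.
    nu = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = a * (1.0 - e ** 2) / (1.0 + e * np.cos(nu))
    x, y = r * np.cos(nu), r * np.sin(nu)
    return np.column_stack([x, y * np.cos(inc), y * np.sin(inc)])

# Two confocal, non-intersecting orbits (hypothetical elements).
p1 = orbit_points(1.0, 0.1, 0.0)
p2 = orbit_points(1.5, 0.2, np.radians(30.0))

# Minimum over all sampled point pairs.
d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=-1)
moid_estimate = d.min()
```

For these elements the closest approach sits near the mutual node on the positive x-axis, at a separation of roughly 0.3; the stationary-point formulation locates such minima without any sampling grid.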

Read this paper on arXiv…

R. Baluev
Wed, 25 Nov 20
52/65

Comments: 15 pages, 5 figures, 2 tables; accepted by Astronomy & Computing

An augmented wavelet reconstructor for atmospheric tomography [IMA]

http://arxiv.org/abs/2011.06842


Atmospheric tomography, i.e. the reconstruction of the turbulence profile in the atmosphere, is a challenging task for adaptive optics (AO) systems of the next generation of extremely large telescopes. Within the AO community the first-choice solver is the so-called Matrix Vector Multiplication (MVM), which directly applies the (regularized) generalized inverse of the system operator to the data. For small telescopes this approach is feasible; however, for larger systems such as the European Extremely Large Telescope (ELT), the atmospheric tomography problem is considerably more complex and computational efficiency becomes an issue. Iterative methods, such as the Finite Element Wavelet Hybrid Algorithm (FEWHA), are a promising alternative. FEWHA is a wavelet-based reconstructor that uses the well-known iterative preconditioned conjugate gradient (PCG) method as a solver. The number of floating point operations and the memory usage are decreased significantly by using a matrix-free representation of the forward operator. A crucial indicator of real-time performance is the number of PCG iterations. In this paper, we propose an augmented version of FEWHA, in which the number of iterations is decreased by $50\%$ using a Krylov subspace recycling technique. We demonstrate that a parallel implementation of augmented FEWHA allows the fulfilment of the real-time requirements of the ELT.
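The matrix-free PCG building block can be sketched generically (our sketch; the operator below is a 1-D Laplacian stand-in, not FEWHA's wavelet-domain operator): the system matrix is never formed, only its action on a vector.

```python
import numpy as np

n = 200

def apply_A(x):
    # Matrix-free SPD operator: shifted 1-D Laplacian tridiag(-1, 2.01, -1).
    y = 2.01 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

def pcg(apply_A, b, M_inv_diag, tol=1e-10, maxit=500):
    # Preconditioned conjugate gradient with a diagonal (Jacobi) preconditioner.
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

b = np.ones(n)
M_inv = np.full(n, 1.0 / 2.01)
x, iters = pcg(apply_A, b, M_inv)
residual = np.linalg.norm(b - apply_A(x)) / np.linalg.norm(b)
```

The iteration count is exactly what Krylov subspace recycling targets in the paper: by reusing spectral information across the closed-loop sequence of right-hand sides, the number of such iterations per frame is halved.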

Read this paper on arXiv…

R. Ramlau and B. Stadler
Mon, 16 Nov 20
31/57

Comments: N/A

Sifting Convolution on the Sphere [CL]

http://arxiv.org/abs/2007.12153


A novel spherical convolution is defined through the sifting property of the Dirac delta on the sphere. The so-called sifting convolution is defined by the inner product of one function with a translated version of another, but with the adoption of an alternative translation operator on the sphere. This translation operator follows by analogy with the Euclidean translation when viewed in harmonic space. The sifting convolution satisfies a variety of desirable properties that are lacking in alternate definitions, namely: it supports directional kernels; it has an output which remains on the sphere; and is efficient to compute. An illustration of the sifting convolution on a topographic map of the Earth demonstrates that it supports directional kernels to perform anisotropic filtering, while its output remains on the sphere.

Read this paper on arXiv…

P. Roddy and J. McEwen
Fri, 24 Jul 20

Comments: 5 pages, 3 figures

A single-step third-order temporal discretization with Jacobian-free and Hessian-free formulations for finite difference methods [CL]

http://arxiv.org/abs/2006.00096


Discrete updates of numerical partial differential equations (PDEs) rely on two branches of temporal integration. The first branch is the widely adopted method-of-lines (MOL) formulation, in which multi-stage Runge-Kutta (RK) methods have shown great success in solving ordinary differential equations (ODEs) at high-order accuracy. The clear separation between the temporal and the spatial discretizations of the governing PDEs makes the RK methods highly adaptable. In contrast, the second branch of formulation, using the so-called Lax-Wendroff procedure, exploits tight couplings between the spatial and temporal derivatives to construct high-order approximations of the temporal advancement via Taylor series expansions. In the last two decades, modern numerical methods have explored the second route extensively and have proposed a set of computationally efficient single-stage, single-step high-order accurate algorithms. In this paper, we present an algorithmic extension of the method called the Picard integration formulation (PIF) that belongs to the second branch of temporal updates. The extension presented in this paper furnishes ease of calculating the Jacobian and Hessian terms necessary for third-order accuracy in time.
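A minimal sketch of the second branch (our illustration, second-order only, whereas the paper's PIF extension reaches third order; grid and CFL values are arbitrary): for $u_t + a u_x = 0$, the temporal derivatives in the Taylor expansion are replaced by spatial ones via $u_{tt} = a^2 u_{xx}$, yielding a single-stage, single-step Lax-Wendroff update with no RK stages.

```python
import numpy as np

a, nx, cfl = 1.0, 200, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
x = (np.arange(nx) + 0.5) * dx
u = np.sin(2.0 * np.pi * x)           # periodic initial profile

nsteps = int(round(1.0 / dt))         # advect once around the periodic domain
for _ in range(nsteps):
    up = np.roll(u, -1)               # u_{i+1} (periodic)
    um = np.roll(u, 1)                # u_{i-1}
    ux = (up - um) / (2.0 * dx)       # central first derivative
    uxx = (up - 2.0 * u + um) / dx ** 2
    # Taylor update: u^{n+1} = u - dt*a*u_x + dt^2/2 * a^2 * u_xx
    u = u - dt * a * ux + 0.5 * dt ** 2 * a ** 2 * uxx

err = np.max(np.abs(u - np.sin(2.0 * np.pi * x)))
```

Third-order accuracy would additionally require the $u_{ttt}$ term, which for nonlinear systems brings in the Jacobian and Hessian of the flux; simplifying exactly those terms is the contribution of the paper.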

Read this paper on arXiv…

Y. Lee and D. Lee
Tue, 2 Jun 20
82/90

Comments: N/A

A Deep Dive into the Distribution Function: Understanding Phase Space Dynamics with Continuum Vlasov-Maxwell Simulations [CL]

http://arxiv.org/abs/2005.13539


In collisionless and weakly collisional plasmas, the particle distribution function is a rich tapestry of the underlying physics. However, actually leveraging the particle distribution function to understand the dynamics of a weakly collisional plasma is challenging. The equation system of relevance, the Vlasov-Maxwell-Fokker-Planck (VM-FP) system of equations, is difficult to numerically integrate, and traditional methods such as the particle-in-cell method introduce counting noise into the distribution function.
In this thesis, we present a new algorithm for the discretization of VM-FP system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin (DG) finite element method for the spatial discretization and a third order strong-stability preserving Runge-Kutta for the time discretization, we obtain an accurate solution for the plasma’s distribution function in space and time.
We both prove that the numerical method retains key physical properties of the VM-FP system, such as the conservation of energy and the second law of thermodynamics, and demonstrate these properties numerically. These results are contextualized in the history of the DG method. We discuss the importance of the algorithm being alias-free, a necessary condition for deriving stable DG schemes for kinetic equations so as to retain the implicit conservation relations embedded in the particle distribution function, and the computationally favorable implementation using a modal, orthonormal basis in comparison to traditional DG methods applied in computational fluid dynamics. Finally, we demonstrate how the high fidelity representation of the distribution function, combined with novel diagnostics, permits detailed analysis of the energization mechanisms in fundamental plasma processes such as collisionless shocks.

Read this paper on arXiv…

J. Juno
Fri, 29 May 20
63/75

Comments: N/A

An arbitrary high-order Spectral Difference method for the induction equation [CL]

http://arxiv.org/abs/2005.13563


We study in this paper three variants of the high-order Discontinuous Galerkin (DG) method with Runge-Kutta (RK) time integration for the induction equation, analysing their ability to preserve the divergence-free constraint of the magnetic field. To quantify divergence errors, we use a norm based on both a surface term, measuring global divergence errors, and a volume term, measuring local divergence errors. This leads us to design a new, arbitrary high-order numerical scheme for the induction equation in multiple space dimensions, based on a modification of the Spectral Difference (SD) method [1] with ADER time integration [2]. It appears as a natural extension of the Constrained Transport (CT) method. We show that it preserves $\nabla\cdot\vec{B}=0$ exactly by construction, both in a local and a global sense. We compare our new method to the three RKDG variants and show that the magnetic energy evolution and the solution maps of our new SD-ADER scheme are qualitatively similar to the RKDG variant with divergence cleaning, but without the need for an additional equation and an extra variable to control the divergence errors.
[1] Liu Y., Vinokur M., Wang Z.J. (2006) Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids. In: Groth C., Zingg D.W. (eds) Computational Fluid Dynamics 2004. Springer, Berlin, Heidelberg
[2] Dumbser M., Castro M., Parés C., Toro E.F. (2009) ADER schemes on unstructured meshes for nonconservative hyperbolic systems: Applications to geophysical flows. In: Computers & Fluids, Volume 38, Issue 9
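The constrained-transport idea the SD-ADER scheme extends can be seen in a few lines: with face-centred magnetic fields updated from corner electric fields, the discrete divergence stays at machine precision because the EMF differences telescope exactly. A minimal 2D sketch with an arbitrary corner EMF (not the paper's scheme):

```python
import numpy as np

nx, ny, dx, dy, dt = 16, 16, 0.1, 0.1, 0.01
rng = np.random.default_rng(0)
Bx = np.zeros((nx + 1, ny))              # B_x on x-faces
By = np.zeros((nx, ny + 1))              # B_y on y-faces
Ez = rng.normal(size=(nx + 1, ny + 1))   # EMF at cell corners (arbitrary here)

# Faraday's law on the staggered mesh: dBx/dt = -dEz/dy, dBy/dt = +dEz/dx
Bx -= dt * (Ez[:, 1:] - Ez[:, :-1]) / dy
By += dt * (Ez[1:, :] - Ez[:-1, :]) / dx

# Cell-centred discrete divergence: the corner EMF contributions cancel exactly
divB = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
```

Starting from a divergence-free field, `divB` remains zero to round-off regardless of the EMF, which is the structural property the CT family is built on.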

Read this paper on arXiv…

M. Veiga, D. Velasco-Romero, Q. Wenger, et. al.
Fri, 29 May 20
74/75

Comments: 26 pages

Phase reconstruction with iterated Hilbert transforms [CL]

http://arxiv.org/abs/2004.13461


We present a novel phase reconstruction method based on iterated Hilbert transform embeddings. We show results for the Stuart-Landau oscillator observed through generic observables. The benefits for reconstruction of the phase response curve are presented, and the method is applied in a setting where the observed system is perturbed by noise.
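The basic (non-iterated) building block, extracting a phase from a scalar observable via the analytic signal, can be sketched with an FFT-based Hilbert transform; the paper's contribution is iterating such embeddings, which is not reproduced here:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero the negative frequencies,
    double the positive ones, and inverse-transform."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

t = np.linspace(0.0, 10.0, 1024, endpoint=False)
x = np.cos(2.0 * np.pi * 1.5 * t)               # observable of a 1.5 Hz oscillator
phase = np.unwrap(np.angle(analytic_signal(x)))
rate = (phase[-1] - phase[0]) / (t[-1] - t[0])  # mean phase velocity
```

For this clean harmonic observable the recovered phase velocity equals the angular frequency 2π·1.5; generic observables distort the protophase, which is what the iterated embeddings correct.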

Read this paper on arXiv…

E. Gengel and A. Pikovsky
Wed, 29 Apr 20
66/75

Comments: The manuscript is based on findings presented in the poster presentation at the Dynamics days Europe in 2019

Thermophysical modelling and parameter estimation of small solar system bodies via data assimilation [IMA]

http://arxiv.org/abs/2003.13804


Deriving thermophysical properties such as thermal inertia from thermal infrared observations provides useful insights into the structure of the surface material on planetary bodies. The estimation of these properties is usually done by fitting temperature variations calculated by thermophysical models to infrared observations. For multiple free model parameters, traditional methods such as Least-Squares fitting or Markov-Chain Monte-Carlo methods become computationally too expensive. Consequently, the simultaneous estimation of several thermophysical parameters together with their corresponding uncertainties and correlations is often not computationally feasible and the analysis is usually reduced to fitting one or two parameters. Data assimilation methods have been shown to be robust while sufficiently accurate and computationally affordable even for a large number of parameters. This paper will introduce a standard sequential data assimilation method, the Ensemble Square Root Filter, to thermophysical modelling of asteroid surfaces. This method is used to re-analyse infrared observations of the MARA instrument, which measured the diurnal temperature variation of a single boulder on the surface of near-Earth asteroid (162173) Ryugu. The thermal inertia is estimated to be $295 \pm 18$ $\mathrm{J\,m^{-2}\,K^{-1}\,s^{-1/2}}$, while all five free parameters of the initial analysis are varied and estimated simultaneously. Based on this thermal inertia estimate the thermal conductivity of the boulder is estimated to be between 0.07 and 0.12 $\mathrm{W\,m^{-1}\,K^{-1}}$ and the porosity to be between 0.30 and 0.52. For the first time in thermophysical parameter derivation, correlations and uncertainties of all free model parameters are incorporated in the estimation procedure and thus, results are more accurate than previously derived parameters.
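The ensemble square-root update for a scalar state with a direct observation can be written in a few lines; this is a generic textbook EnSRF (Whitaker-Hamill form), not the authors' thermophysical pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
ens = rng.normal(2.0, 1.5, N)       # prior ensemble of a scalar parameter
y, R = 3.0, 0.5**2                  # observation and its error variance

xm = ens.mean()
X = ens - xm                        # ensemble perturbations
P = X @ X / (N - 1)                 # ensemble prior variance (H = identity)
K = P / (P + R)                     # Kalman gain

xm_new = xm + K * (y - xm)          # mean update
alpha = 1.0 / (1.0 + np.sqrt(R / (P + R)))  # square-root perturbation factor
X_new = (1.0 - alpha * K) * X       # deterministic perturbation update
ens_new = xm_new + X_new
```

The factor `alpha` is chosen so that the posterior ensemble variance equals the analytic Kalman value (1 - K)P exactly, with no perturbed observations and hence no sampling noise in the update.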

Read this paper on arXiv…

M. Hamm, I. Pelivan, M. Grott, et. al.
Wed, 1 Apr 20
28/83

Comments: N/A

Provably Physical-Constraint-Preserving Discontinuous Galerkin Methods for Multidimensional Relativistic MHD Equations [CL]

http://arxiv.org/abs/2002.03371


We propose and analyze a class of robust, uniformly high-order accurate discontinuous Galerkin (DG) schemes for multidimensional relativistic magnetohydrodynamics (RMHD) on general meshes. A distinct feature of the schemes is their physical-constraint-preserving (PCP) property, i.e., they are proven to preserve the subluminal constraint on the fluid velocity and the positivity of density, pressure, and specific internal energy. Developing PCP high-order schemes for RMHD is highly desirable but remains a challenging task, especially in the multidimensional cases, due to the inherent strong nonlinearity in the constraints and the effect of the magnetic divergence-free condition. Inspired by some crucial observations at the PDE level, we construct the provably PCP schemes by using the locally divergence-free DG schemes of the recently proposed symmetrizable RMHD equations as the base schemes, a limiting technique to enforce the PCP property of the DG solutions, and the strong-stability-preserving methods for time discretization. We rigorously prove the PCP property by using a novel “quasi-linearization” approach to handle the highly nonlinear physical constraints, technical splitting to offset the influence of divergence error, and sophisticated estimates to analyze the beneficial effect of the additional source term in the symmetrizable RMHD system. Several two-dimensional numerical examples are provided to confirm the PCP property and to demonstrate the accuracy, effectiveness and robustness of the proposed PCP schemes.

Read this paper on arXiv…

K. Wu and C. Shu
Tue, 11 Feb 20
54/81

Comments: N/A

Accelerating linear system solvers for time domain component separation of cosmic microwave background data [CEA]

http://arxiv.org/abs/2002.02833


Component separation is one of the key stages of any modern cosmic microwave background (CMB) data analysis pipeline. It is an inherently non-linear procedure and typically involves a series of sequential solutions of linear systems with similar, albeit not identical, system matrices, derived for different data models of the same data set. Sequences of this kind arise for instance in the maximization of the data likelihood with respect to foreground parameters or in sampling of their posterior distribution. However, they are also common in many other contexts. In this work we consider solving the component separation problem directly in the measurement (time) domain, which can have a number of important advantages over the more standard pixel-based methods, in particular if non-negligible time-domain noise correlations are present, as is commonly the case. The time-domain approach implies, however, significant computational effort due to the need to manipulate the full volume of the time-domain data set. To address this challenge, we propose and study efficient solvers adapted to solving time-domain-based component separation systems and their sequences, which are capable of capitalizing on information derived from the previous solutions. This is achieved either by adapting the initial guess of the subsequent system or through so-called subspace recycling, which allows one to construct progressively more efficient two-level preconditioners. We report an overall speed-up of a factor of nearly 7, or 5, over solving the systems independently, in the worked examples inspired respectively by the likelihood maximization and likelihood sampling procedures we consider in this work.
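The simplest of the two acceleration ideas, warm-starting each solve in the sequence from the previous solution, is easy to demonstrate with plain conjugate gradients on a toy SPD system (illustrative only; subspace recycling and the CMB-specific preconditioners are beyond this sketch):

```python
import numpy as np

def cg(A, b, x0, tol=1e-8, maxit=500):
    """Plain conjugate gradient; returns the solution and iteration count."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for k in range(maxit):
        if np.sqrt(rs) < tol:
            return x, k
        Ap = A @ p
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

rng = np.random.default_rng(1)
n = 100
Q = rng.normal(size=(n, n))
A = Q @ Q.T + n * np.eye(n)              # fixed SPD system matrix
b0 = rng.normal(size=n)
x0, it_first = cg(A, b0, np.zeros(n))    # first system, cold start
b1 = b0 + 1e-3 * rng.normal(size=n)      # next, slightly perturbed system
_, it_cold = cg(A, b1, np.zeros(n))      # cold start
_, it_warm = cg(A, b1, x0)               # warm start from previous solution
```

Because the right-hand sides differ only slightly, the warm-started residual begins several orders of magnitude smaller and fewer iterations are needed.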

Read this paper on arXiv…

J. Papez, L. Grigori and R. Stompor
Mon, 10 Feb 20
39/59

Comments: N/A

Andrade rheology in time-domain. Application to Enceladus' dissipation of energy due to forced libration [EPA]

http://arxiv.org/abs/1912.09309


The main purpose of this work is to present a time-domain implementation of the Andrade rheology, instead of the traditional expansion in terms of a Fourier series of the tidal potential. This approach can be used in any fully three dimensional numerical simulation of the dynamics of a system of many deformable bodies. In particular, it allows large eccentricities, large mutual inclinations, and it is not limited to quasi-periodic perturbations. It can take into account an extended class of perturbations, such as chaotic motions, transient events, and resonant librations.
The results are presented by means of a concrete application: the analysis of the libration of Enceladus. This is done by means of both analytic formulas in the frequency domain and direct numerical simulations. We do not assume a priori that Enceladus has a triaxial shape; the eventual triaxiality is a consequence of the satellite's motion and its rheology. As a result we obtain an analytic formula for the amplitude of libration that incorporates a new correction due to the rheology.
Our results provide an estimation of the amplitude of libration of the core of Enceladus as 0.6% of that of the shell. They also reproduce the observed 10 GW of tidal heat generated by Enceladus with a value of $0.17\times 10^{14}$Pa$\cdot$s for the global effective viscosity under both Maxwell and Andrade rheology.

Read this paper on arXiv…

Y. Gevorgyan, G. Boué, C. Ragazzo, et. al.
Fri, 20 Dec 19
29/63

Comments: Preprint ‘elsart’ style, 22 pages, 9 multiple figures. Accepted for publication in Icarus

Spectral shock detection for dynamically developing discontinuities [CL]

http://arxiv.org/abs/1910.00858


Pseudospectral schemes are a class of numerical methods capable of solving smooth problems with high accuracy thanks to their exponential convergence to the true solution. When applied to discontinuous problems, such as fluid shocks and material interfaces, pseudospectral solutions lose their superb convergence due to the Gibbs phenomenon and suffer from spurious oscillations across the entire computational domain. Luckily, there exist theoretical remedies for these issues which have been successfully tested in practice for cases of well-defined discontinuities. We focus on one piece of this procedure: detecting a discontinuity in spectral data. We show that realistic applications require treatment of discontinuities that develop dynamically in time, and that this poses challenges for shock detection. More precisely, smoothly steepening gradients in the solution spawn spurious oscillations due to insufficient resolution, causing premature shock identification and information loss. We improve existing spectral shock detection techniques so as to automatically detect true discontinuities and identify cases for which post-processing is required to suppress spurious oscillations resulting from the loss of resolution. We then apply these techniques to solve the inviscid Burgers’ equation in 1D, demonstrating that our method correctly treats genuine shocks caused by wave breaking and removes oscillations caused by numerical constraints.
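The core of spectral shock detection is monitoring the decay of expansion coefficients: smooth periodic data decay exponentially, while a discontinuity forces only algebraic, O(1/k), decay. A minimal Fourier-space sketch (not the authors' refined detector):

```python
import numpy as np

N = 256
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

def high_mode_level(u):
    """Largest normalized Fourier coefficient in the upper half of the
    resolved spectrum; a large value flags non-smooth data."""
    c = np.abs(np.fft.rfft(u)) / N
    return c[N // 4 :].max()

smooth = np.exp(np.cos(x))                # analytic, periodic: exponential decay
step = np.where(x < np.pi, 1.0, -1.0)     # genuine discontinuity: O(1/k) decay
```

For the smooth field the high modes sit at round-off, while the step keeps coefficients of order 1/k there, so a simple threshold already separates the two; the hard part addressed in the paper is distinguishing a true shock from an underresolved steepening gradient.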

Read this paper on arXiv…

J. Piotrowska and J. Miller
Thu, 3 Oct 19
49/59

Comments: 16 pages, 6 figures

Relativistic changes to particle trajectories are difficult to detect [CL]

http://arxiv.org/abs/1909.04652


We study the sensitivity of the computed orbits for the Kepler problem, both for continuous space and for discretizations of space. While it is known that energy can be very well preserved with symplectic methods, the semi-major axis is in general not preserved. We study this spurious shift as a function of the integration method used, and also as a function of an additional interpolation of forces on a 2-dimensional lattice. This is done for several choices of eccentricities and semi-major axes. Using these results, we can predict which precisions and lattice constants allow for a detection of the relativistic perihelion advance. Such bounds are important for calculations in N-body simulations, if one wants to meaningfully add these relativistic effects.
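The energy-preservation property referred to above is easy to reproduce with a symplectic leapfrog (kick-drift-kick) integrator for the Kepler problem; GM = 1 and the initial conditions are purely illustrative, and the lattice-interpolation study itself is not reproduced:

```python
import numpy as np

def accel(q):
    r = np.linalg.norm(q)
    return -q / r**3                 # Kepler acceleration with GM = 1

def energy(q, p):
    return 0.5 * p @ p - 1.0 / np.linalg.norm(q)

q = np.array([1.0, 0.0])
p = np.array([0.0, 1.1])             # mildly eccentric bound orbit
E0 = energy(q, p)

dt = 1e-3
for _ in range(20000):               # a couple of orbital periods
    p = p + 0.5 * dt * accel(q)      # kick
    q = q + dt * p                   # drift
    p = p + 0.5 * dt * accel(q)      # kick
```

The energy error stays bounded at the O(dt^2) level over the integration instead of drifting, which is the hallmark of symplectic methods; the paper's point is that this does not by itself protect the semi-major axis once lattice interpolation errors enter.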

Read this paper on arXiv…

J. Eckmann and F. Hassani
Thu, 12 Sep 19
63/84

Comments: 14 pages, 8 figures

Numerical integration in celestial mechanics: a case for contact geometry [CL]

http://arxiv.org/abs/1909.02613


Several dynamical systems of interest in celestial mechanics can be written in the form
\begin{equation}
\ddot q + \frac{\partial V(q,t)}{\partial q} + f(t)\,\dot q = 0\,.
\end{equation}
For instance, the modified Kepler problem, the spin–orbit model and the Lane–Emden equation all belong to this class.
In this work we start an investigation of these models from the point of view of contact geometry. In particular we focus on the (contact) Hamiltonisation of these models and on the construction of the corresponding geometric integrators.
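For constant friction f(t) = γ and potential V(q, t) = q²/2, the class above reduces to the damped harmonic oscillator, whose closed-form solution q(t) = e^(−γt/2) cos(ωt) with ω = √(1 − γ²/4) provides a check for any integrator of this family. A plain RK4 sketch (illustrative only; the paper's aim is to build contact/geometric integrators instead):

```python
import numpy as np

gamma = 0.1                              # constant friction coefficient f(t)
omega = np.sqrt(1.0 - gamma**2 / 4.0)

def rhs(t, y):
    q, v = y
    return np.array([v, -q - gamma * v])  # qdd + dV/dq + f(t) qd = 0, V = q^2/2

def rk4_step(t, y, dt):
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Initial data chosen so the exact solution is exp(-gamma*t/2)*cos(omega*t)
y, t, dt = np.array([1.0, -gamma / 2.0]), 0.0, 0.01
for _ in range(1000):                     # integrate to t = 10
    y = rk4_step(t, y, dt)
    t += dt
exact = np.exp(-gamma * t / 2.0) * np.cos(omega * t)
```

RK4 reproduces the exact solution to high accuracy here, but it preserves none of the contact structure; the geometric integrators investigated in the paper are designed to do so.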

Read this paper on arXiv…

A. Bravetti, M. Seri, M. Vermeeren, et. al.
Mon, 9 Sep 19
60/67

Comments: N/A

Barycentric interpolation on Riemannian and semi-Riemannian spaces [IMA]

http://arxiv.org/abs/1907.09487


Interpolation of data represented in curvilinear coordinates and possibly having some non-trivial, typically Riemannian or semi-Riemannian geometry is a ubiquitous task in all of physics. In this work we present a covariant generalization of the barycentric coordinates and the barycentric interpolation method for Riemannian and semi-Riemannian spaces of arbitrary dimension. We show that our new method preserves the linear accuracy property of barycentric interpolation in a coordinate-invariant sense. In addition, we show how the method can be used to interpolate constrained quantities so that the given constraint is automatically respected. We showcase the method with two astrophysics-related examples situated in the curved Kerr spacetime. The first problem is interpolating a locally constant vector field, in which case curvature effects are expected to be maximally important. The second example is a General Relativistic Magnetohydrodynamics simulation of a turbulent accretion flow around a black hole, wherein high intrinsic variability is expected to be at least as important as curvature effects.
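In flat space, the linear-accuracy property the paper generalizes is the statement that barycentric interpolation reproduces affine functions exactly. A Euclidean sketch on a 2D triangle (the covariant, curved-space construction is the paper's contribution and is not attempted here):

```python
import numpy as np

# Triangle vertices and an interior query point
V = np.array([[0.0, 0.0], [1.0, 0.0], [0.2, 1.0]])
p = np.array([0.4, 0.3])

# Barycentric weights: solve sum_i w_i V_i = p together with sum_i w_i = 1
A = np.vstack([V.T, np.ones(3)])          # 3x3 linear system
w = np.linalg.solve(A, np.append(p, 1.0))

f = lambda x: 2.0 + 3.0 * x[0] - 1.5 * x[1]   # affine test function
interp = w @ np.array([f(v) for v in V])       # barycentric interpolation
```

Because the weights reproduce the point and sum to one, any affine f is recovered exactly at p, which is the property that becomes coordinate-invariant in the Riemannian generalization.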

Read this paper on arXiv…

P. Pihajoki, M. Mannerkoski and P. Johansson
Wed, 24 Jul 19
56/60

Comments: 9 pages, 3 figures. Submitted to MNRAS, comments welcome

Entropy Symmetrization and High-Order Accurate Entropy Stable Numerical Schemes for Relativistic MHD Equations [CL]

http://arxiv.org/abs/1907.07467


This paper presents entropy symmetrization and high-order accurate entropy stable schemes for the relativistic magnetohydrodynamic (RMHD) equations. It is shown that the conservative RMHD equations are not symmetrizable and do not possess an entropy pair. To address this issue, a symmetrizable RMHD system, which admits a convex entropy pair, is proposed by adding a source term into the equations. Arbitrarily high-order accurate entropy stable finite difference schemes are then developed on Cartesian meshes based on the symmetrizable RMHD system. The crucial ingredients of these schemes include (i) affordable explicit entropy conservative fluxes which are technically derived through carefully selected parameter variables, (ii) a special high-order discretization of the source term in the symmetrizable RMHD system, and (iii) suitable high-order dissipative operators based on essentially non-oscillatory reconstruction to ensure the entropy stability. Several benchmark numerical tests demonstrate the accuracy and robustness of the proposed entropy stable schemes of the symmetrizable RMHD equations.

Read this paper on arXiv…

K. Wu and C. Shu
Thu, 18 Jul 19
43/64

Comments: 37 pages, 8 figures

An efficient method for solving highly oscillatory ordinary differential equations with applications to physical systems [CL]

http://arxiv.org/abs/1906.01421


We present a novel numerical routine (oscode) with a C++ and Python interface for the efficient solution of one-dimensional, second-order, ordinary differential equations with rapidly oscillating solutions. The method is based on a Runge-Kutta-like stepping procedure that makes use of the Wentzel-Kramers-Brillouin (WKB) approximation to skip regions of integration where the characteristic frequency varies slowly. In regions where this is not the case, the method is able to switch to a made-to-measure Runge-Kutta integrator that minimises the total number of function evaluations. We demonstrate the effectiveness of the method with example solutions of the Airy equation and an equation exhibiting a burst of oscillations, discussing the error properties of the method in detail. We then show the method applied to physical systems. First, the one-dimensional, time-independent Schrödinger equation is solved as part of a shooting method to search for the energy eigenvalues for a potential with quartic anharmonicity. Then, the method is used to solve the Mukhanov-Sasaki equation describing the evolution of cosmological perturbations, and the primordial power spectrum of the perturbations is computed in different cosmological scenarios. We compare the performance of our solver in calculating a primordial power spectrum of scalar perturbations to that of BINGO, an efficient code specifically designed for such applications.
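The idea of skipping oscillations via WKB can be illustrated on the Airy-type equation x'' + t x = 0: at large t, the closed-form WKB phase tracks the true solution closely, so small oscillation-resolving steps are unnecessary there. A sketch comparing direct RK4 integration against the leading-order WKB formula (illustrative; this is not oscode's switching logic):

```python
import numpy as np

def wkb(t):
    """Leading-order WKB solution of x'' + t*x = 0 at large t."""
    return t**-0.25 * np.cos(2.0 / 3.0 * t**1.5)

def dwkb(t):
    return (-0.25 * t**-1.25 * np.cos(2.0 / 3.0 * t**1.5)
            - t**0.25 * np.sin(2.0 / 3.0 * t**1.5))

def rhs(t, y):
    return np.array([y[1], -t * y[0]])

# Match initial data to the WKB form at t = 10, then integrate directly.
t, dt = 10.0, 1e-3
y = np.array([wkb(t), dwkb(t)])
err = 0.0
for _ in range(10000):                   # integrate up to t = 20
    k1 = rhs(t, y)
    k2 = rhs(t + dt / 2, y + dt / 2 * k1)
    k3 = rhs(t + dt / 2, y + dt / 2 * k2)
    k4 = rhs(t + dt, y + dt * k3)
    y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    err = max(err, abs(y[0] - wkb(t)))
```

Over this interval the WKB expression stays within a fraction of a percent of the brute-force integration, which is why a WKB-based stepper can cross many oscillations in a single step.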

Read this paper on arXiv…

F. Agocs, W. Handley, A. Lasenby, et. al.
Tue, 11 Jun 19
11/60

Comments: 23 pages, 15 figures. Submitted to Physical Review D. The associated code is available online at this https URL

A Fast and Accurate Algorithm for Spherical Harmonic Analysis of the Cosmic Microwave Background Radiation [CL]

http://arxiv.org/abs/1904.10514


The Cosmic Microwave Background Radiation (CMBR) represents the first light to travel during the early stages of the universe’s development. This sphere of relic radiation gives the strongest evidence for the Big Bang theory to date, and refined analysis of its angular power spectrum can lead to revolutionary developments in understanding the nature of dark matter and dark energy. Satellites collect CMBR data over a sphere using a Hierarchical Equal Area isoLatitude Pixelation (HEALPix) grid. While this grid gives a quasiuniform discretization of a sphere, it is not well suited for doing fast \emph{and} accurate spherical harmonic analysis — a central component to computing and analyzing the angular power spectrum of the massive CMBR data sets. In this paper, we present a new method that overcomes these issues through a novel combination of a non-uniform fast Fourier transform, the double Fourier sphere method, and Slevinsky’s fast spherical harmonic transform (Slevinsky, 2017). The method has a quasi-optimal computational complexity of $\mathcal{O}(N\log^2 N)$ with an initial set-up cost of $\mathcal{O}(N^{3/2}\log N)$, where $N$ represents the number of points in the HEALPix grid. Additionally, we provide the first analysis of the method used in the current HEALPix software for computing the spherical harmonic coefficients. Numerical results illustrating the effectiveness of the new technique over the current method are also included.

Read this paper on arXiv…

K. Drake and G. Wright
Thu, 25 Apr 19
46/58

Comments: N/A

Desaturating EUV observations of solar flaring storms [SSA]

http://arxiv.org/abs/1904.04211


Image saturation has been an issue for several instruments in solar astronomy, mainly at EUV wavelengths. However, with the launch of the Atmospheric Imaging Assembly (AIA) as part of the payload of the Solar Dynamics Observatory (SDO), image saturation has become a big-data issue, involving around 10^$ frames of the impressive dataset this beautiful telescope has been providing every year since February 2010. This paper introduces a novel desaturation method, which is able to recover the signal in the saturated region of any AIA image by exploiting no information other than that contained in the image itself. This peculiar methodological property, jointly with the unprecedented statistical reliability of the desaturated images, could make this algorithm the perfect tool for the realization of a reconstruction pipeline for AIA data, able to work properly even in the case of long-lasting, very energetic flaring events.

Read this paper on arXiv…

S. Guastavino, M. Piana, A. Massone, et. al.
Tue, 9 Apr 19
23/105

Comments: N/A

The "Sphered Cube": A New Method for the Solution of Partial Differential Equations in Cubical Geometry [CL]

http://arxiv.org/abs/1903.12642


A new gridding technique for the solution of partial differential equations in cubical geometry is presented. The method is based on volume penalization, allowing for the imposition of a cubical geometry inside of its circumscribing sphere. By choosing to embed the cube inside of the sphere, one obtains a discretization that is free of any sharp edges or corners. Taking full advantage of the simple geometry of the sphere, spectral bases based on spin-weighted spherical harmonics and Jacobi polynomials, which properly capture the regularity of scalar, vector and tensor components in spherical coordinates, can be applied to obtain moderately efficient and accurate numerical solutions of partial differential equations in the cube. This technique demonstrates the advantages of these bases over other methods for solving PDEs in spherical coordinates. We present results for a test case of incompressible hydrodynamics in cubical geometry: Rayleigh-Bénard convection with fully Dirichlet boundary conditions. Analysis of the simulations provides what is, to our knowledge, the first result on the scaling of the heat flux with the thermal forcing for this type of convection in a cube in a sphere.

Read this paper on arXiv…

K. Burns, D. Lecoanet, G. Vasil, et. al.
Mon, 1 Apr 19
33/56

Comments: 10 pages, 3 figures, 1 cube, 1 sphere

A high-order weighted finite difference scheme with a multi-state approximate Riemann solver for divergence-free magnetohydrodynamic simulations [IMA]

http://arxiv.org/abs/1903.04759


We design a conservative finite difference scheme for ideal magnetohydrodynamic simulations that attains high-order accuracy, shock-capturing, and the divergence-free condition of the magnetic field. The scheme interpolates pointwise physical variables from computational nodes to midpoints through a high-order nonlinear weighted average. The numerical flux is evaluated at the midpoint by a multi-state approximate Riemann solver for correct upwinding, and its spatial derivative is approximated by a high-order linear central difference to update the variables with the designed order of accuracy and conservation. The magnetic and electric fields are defined at staggered grid points employed in the Constrained Transport (CT) method by Evans & Hawley (1988). We propose a new CT variant, in which the staggered electric field is evaluated so as to be consistent with the base one-dimensional Riemann solver, and the staggered magnetic field is updated to be divergence-free in the designed high-order finite difference representation. We demonstrate various benchmark tests to measure the performance of the present scheme. We discuss in detail the effect of the choice of interpolation methods, Riemann solvers, and the treatment of the divergence-free condition on the quality of the numerical solutions.

Read this paper on arXiv…

T. Minoshima, T. Miyoshi and Y. Matsumoto
Wed, 13 Mar 19
91/125

Comments: 53 pages, 19 figures, 2 tables, submitted to ApJ

Compressed sensing and Sequential Monte Carlo for solar hard X-ray imaging [SSA]

http://arxiv.org/abs/1812.08413


We describe two inversion methods for the reconstruction of hard X-ray solar images. The methods are tested against experimental visibilities recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) and synthetic visibilities based on the design of the Spectrometer/Telescope for Imaging X-rays (STIX).

Read this paper on arXiv…

A. Massone, F. Sciacchitano, M. Piana, et. al.
Fri, 21 Dec 18
45/72

Comments: submitted to ‘Nuovo Cimento’ as proceeding SOHE3

Quantifying Uncertainty in High Dimensional Inverse Problems by Convex Optimisation [CL]

http://arxiv.org/abs/1811.02514


Inverse problems play a key role in modern image/signal processing methods. However, since they are generally ill-conditioned or ill-posed due to lack of observations, their solutions may have significant intrinsic uncertainty. Analysing and quantifying this uncertainty is very challenging, particularly in high-dimensional problems and problems with non-smooth objective functionals (e.g. sparsity-promoting priors). In this article, a series of strategies to visualise this uncertainty are presented, e.g. highest posterior density credible regions, and local credible intervals (cf. error bars) for individual pixels and superpixels. Our methods support non-smooth priors for inverse problems and can be scaled to high-dimensional settings. Moreover, we present strategies to automatically set regularisation parameters so that the proposed uncertainty quantification (UQ) strategies become much easier to use. Also, different kinds of dictionaries (complete and over-complete) are used to represent the image/signal and their performance in the proposed UQ methodology is investigated.
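One of the visualisation tools listed above, a highest-posterior-density credible region, reduces in one dimension to the shortest interval containing a given posterior mass. A sample-based sketch on a standard-normal toy posterior (not the paper's convex-optimisation machinery, which avoids sampling altogether):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = np.sort(rng.normal(0.0, 1.0, 100_000))  # toy posterior samples

def hpd_interval(sorted_s, mass=0.95):
    """Shortest interval containing `mass` of the sorted samples."""
    n = len(sorted_s)
    k = int(np.ceil(mass * n))                    # samples the interval must hold
    widths = sorted_s[k - 1:] - sorted_s[: n - k + 1]
    i = np.argmin(widths)                         # left edge of shortest window
    return sorted_s[i], sorted_s[i + k - 1]

lo, hi = hpd_interval(samples)
```

For a unimodal symmetric posterior the HPD interval coincides with the central interval, here approximately (-1.96, 1.96) for 95% mass; for skewed posteriors the two differ, which is why HPD regions are the quantity of interest.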

Read this paper on arXiv…

X. Cai, M. Pereyra and J. McEwen
Wed, 7 Nov 18
92/94

Comments: 5 pages, 5 figures

Sparse Bayesian Imaging of Solar Flares [IMA]

http://arxiv.org/abs/1807.11287


We consider imaging of solar flares from NASA RHESSI data as a parametric imaging problem, where flares are represented as a finite collection of geometric shapes. We set up a Bayesian model in which the number of objects forming the image is a priori unknown, as well as their shapes. We use a Sequential Monte Carlo algorithm to explore the corresponding posterior distribution. We apply the method to synthetic and experimental data, largely known in the RHESSI community. The method reconstructs improved images of solar flares, with the additional advantage of providing uncertainty quantification of the estimated parameters.

Read this paper on arXiv…

F. Sciacchitano, S. Lugaro and A. Sorrentino
Tue, 31 Jul 18
41/69

Comments: submitted

Provably Positive High-Order Schemes for Ideal Magnetohydrodynamics: Analysis on General Meshes [CL]

http://arxiv.org/abs/1807.11467


This paper proposes and analyzes arbitrarily high-order discontinuous Galerkin (DG) and finite volume methods which provably preserve the positivity of density and pressure for the ideal MHD on general meshes. Unified auxiliary theories are built for rigorously analyzing the positivity-preserving (PP) property of MHD schemes with an HLL-type flux on polytopal meshes in any space dimension. The main challenges overcome here include establishing the relation between the PP property and the discrete divergence of the magnetic field on general meshes, and estimating proper wave speeds in the HLL flux to ensure the PP property. In the 1D case, we prove that the standard DG and finite volume methods with the proposed HLL flux are PP, under a condition accessible by a PP limiter. For the multidimensional conservative MHD system, standard DG methods with a PP limiter are not PP in general, due to the effect of unavoidable divergence error. We construct provably PP high-order DG and finite volume schemes by proper discretization of the symmetrizable MHD system, with two divergence-controlling techniques: locally divergence-free elements and a penalty term. The former leads to zero divergence within each cell, while the latter controls the divergence error across cell interfaces. Our analysis reveals that a coupling of them is important for positivity preservation, as they exactly contribute the discrete divergence terms absent in standard DG schemes but crucial for ensuring the PP property. Numerical tests confirm the PP property and the effectiveness of the proposed PP schemes. Unlike the conservative MHD system, the exact smooth solutions of the symmetrizable MHD system are proved to retain the positivity even if the divergence-free condition is not satisfied. Our analysis and findings further the understanding, at both the discrete and continuous levels, of the relation between the PP property and the divergence-free constraint.

Read this paper on arXiv…

K. Wu and C. Shu
Tue, 31 Jul 18
68/69

Comments: 49 pages, 11 figures

Provably Positive Discontinuous Galerkin Methods for Multidimensional Ideal Magnetohydrodynamics [CL]

http://arxiv.org/abs/1807.00246


The density and pressure are positive physical quantities in magnetohydrodynamics (MHD). Design of provably positivity-preserving (PP) numerical schemes for ideal compressible MHD is highly desirable, but remains a challenge especially in the multidimensional cases. In this paper, we first develop uniformly high-order discontinuous Galerkin (DG) schemes which provably preserve the positivity of density and pressure for multidimensional ideal MHD. The schemes are constructed by using the locally divergence-free DG schemes for the symmetrizable ideal MHD equations as the base schemes, a PP limiter to enforce the positivity of the DG solutions, and the strong stability preserving methods for time discretization. The significant innovation is that we discover and rigorously prove the PP property of the proposed DG schemes by using a novel equivalent form of the admissible state set and very technical estimates. Several two-dimensional numerical examples further confirm the PP property, and demonstrate the accuracy, effectiveness and robustness of the proposed PP methods.
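The PP limiter used in such schemes is, at its simplest, a Zhang-Shu-type scaling of the point values of the DG polynomial toward the (positive) cell average, which enforces a floor while preserving the average. A density-only sketch with uniform quadrature weights (the full limiter also treats pressure, which is nonlinear in the conservative variables):

```python
import numpy as np

def pp_limit(point_vals, eps=1e-13):
    """Scale the point values of one cell toward the cell average so the
    minimum is raised to eps; the cell average is left unchanged."""
    avg = point_vals.mean()    # assumed > eps (guaranteed by the PP scheme)
    m = point_vals.min()
    if m >= eps:
        return point_vals
    theta = (avg - eps) / (avg - m)          # shrink factor in (0, 1)
    return avg + theta * (point_vals - avg)  # convex combination with the mean

vals = np.array([0.8, 1.5, -0.2, 2.0, 0.9])  # one negative point value
limited = pp_limit(vals)
```

Because the limited values are a convex combination of the original polynomial and its average, high-order accuracy is retained while positivity of the point values, and hence of the updated averages, is enforced.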

Read this paper on arXiv…

K. Wu and C. Shu
Tue, 3 Jul 18
90/95

Comments: Submitted for publication on Jan. 31, 2018

Numerical treatment of the nonconservative product in a multiscale fluid model for plasmas in thermal nonequilibrium: application to solar physics [CL]

http://arxiv.org/abs/1806.10436


This contribution deals with the modeling of collisional multicomponent magnetized plasmas in thermal and chemical nonequilibrium, with the aim of simulating and predicting magnetic reconnection in the chromosphere of the Sun. We focus on the numerical simulation of a simplified fluid model in order to properly investigate the influence of a nonconservative product, present in the electron energy equation, on shock solutions. We then derive jump conditions based on travelling-wave solutions and propose an original numerical treatment that avoids non-physical shocks and remains valid for coarse-resolution simulations. A key element of the proposed numerical scheme is the presence of diffusion in the electron variables, consistent with the physically sound scaling used in the model developed by Graille et al. following a multiscale Chapman-Enskog expansion method [M3AS, 19 (2009) 527–599]. The numerical strategy is finally assessed on a solar physics test case. The computational method is able to capture the travelling-wave solutions in both the highly and coarsely resolved cases.

Read this paper on arXiv…

Q. Wargnier, S. Faure, B. Graille, et. al.
Thu, 28 Jun 18
49/60

Comments: N/A

Local, algebraic simplifications of Gaussian random fields [CEA]

http://arxiv.org/abs/1805.03117


Many applications of Gaussian random fields and Gaussian random processes are limited by the computational complexity of evaluating the probability density function, which involves inverting the relevant covariance matrix. In this work, we show how that problem can be completely circumvented for the local Taylor coefficients of a Gaussian random field with a Gaussian (or 'squared exponential') covariance function. Our results hold for any dimension of the field and to any order in the Taylor expansion. We present two applications. First, we show that this method can be used to explicitly generate non-trivial potential energy landscapes with many fields. This application is particularly useful when one is concerned with the field locally around special points (e.g. maxima or minima), as we exemplify with the problem of cosmic 'manyfield' inflation in the early universe. Second, we show that this method has applications in machine learning, and greatly simplifies the regression problem of determining the hyperparameters of the covariance function given a training data set consisting of local Taylor coefficients at a single point. An accompanying Mathematica notebook is available at https://doi.org/10.17863/CAM.22859.
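The underlying idea can be made concrete in one dimension: for a unit-variance field with squared-exponential covariance exp(-r^2/(2 ell^2)), the joint covariance of the Taylor coefficients (f, f', f'') at a point is known in closed form, so they can be sampled directly with no large matrix inversion. A sketch under my own conventions, not the paper's notation:

```python
import numpy as np

def taylor_cov(ell):
    """Joint covariance of (f, f', f'') at one point for a unit-variance
    1D Gaussian field with covariance k(r) = exp(-r^2 / (2 ell^2)).
    Entries follow from derivatives of k at r = 0:
    Var(f)=1, Var(f')=1/ell^2, Cov(f,f'')=-1/ell^2, Var(f'')=3/ell^4."""
    return np.array([
        [1.0, 0.0, -1.0 / ell**2],
        [0.0, 1.0 / ell**2, 0.0],
        [-1.0 / ell**2, 0.0, 3.0 / ell**4],
    ])

rng = np.random.default_rng(0)
C = taylor_cov(ell=0.7)
L = np.linalg.cholesky(C)                 # C is positive definite
f, df, d2f = L @ rng.standard_normal(3)   # one joint sample of (f, f', f'')
```

Odd-order cross-covariances vanish by symmetry of the kernel, which is why the matrix is so sparse; the same pattern persists at higher orders and dimensions.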

Read this paper on arXiv…

T. Bjorkmo and M. Marsh
Wed, 9 May 18
49/55

Comments: 15 pages, 2 figures

Polynomial data compression for large-scale physics experiments [CL]

http://arxiv.org/abs/1805.01844


Next-generation research experiments will add a huge data surge to the continuously increasing data production of current experiments. This surge necessitates efficient compression techniques, which must guarantee an optimal tradeoff between the compression ratio and the corresponding compression/decompression speed, without affecting data integrity.
This work presents a lossless compression algorithm for physics data generated by astronomy, astrophysics, and particle physics experiments.
The developed algorithms have been tuned and tested on a real use case: the next-generation ground-based high-energy gamma-ray observatory, the Cherenkov Telescope Array (CTA), which requires significant compression performance. Used stand-alone, the proposed compression method is very fast and reasonably efficient. Alternatively, applied as a pre-compression stage, it can accelerate common methods such as LZMA while keeping comparable performance.
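The pre-compression idea can be sketched with the simplest polynomial predictor, an order-1 (delta) model: slowly varying sensor integers become a stream of tiny residuals that a general-purpose coder such as LZMA compresses far better. This is a hypothetical illustration, not the CTA algorithm:

```python
import lzma
import numpy as np

def delta_encode(samples):
    """Order-1 polynomial predictor: keep the first value, then store
    successive differences (the prediction residuals)."""
    d = np.empty_like(samples)
    d[0] = samples[0]
    d[1:] = np.diff(samples)
    return d

def delta_decode(deltas):
    """Invert the predictor by cumulative summation."""
    return np.cumsum(deltas)

# smooth hypothetical waveform quantized to 16-bit integers
t = np.linspace(0.0, 4.0 * np.pi, 4096)
signal = (1000.0 * np.sin(t)).astype(np.int16)

raw = lzma.compress(signal.tobytes())                 # LZMA alone
pre = lzma.compress(delta_encode(signal).tobytes())   # delta pre-pass + LZMA
```

The residual alphabet is tiny (here a handful of values around zero), so the pre-pass shrinks the LZMA output while the predictor itself costs only one subtraction per sample.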

Read this paper on arXiv…

P. Aubert, T. Vuillaume, G. Maurin, et. al.
Mon, 7 May 18
32/39

Comments: 12 pages

Tensor calculus in spherical coordinates using Jacobi polynomials. Part-I: Mathematical analysis and derivations [CL]

http://arxiv.org/abs/1804.10320


This paper presents a method for accurate and efficient computations on scalar, vector, and tensor fields in three-dimensional spherical polar coordinates. The method uses spin-weighted spherical harmonics in the angular directions and rescaled Jacobi polynomials in the radial direction. On the 2-sphere, spin-weighted harmonics allow calculations to be automated in a fashion as close to Fourier series as possible: derivative operators act as wavenumber multiplication on a set of spectral coefficients. After transforming the angular directions, a set of orthogonal tensor rotations places the radially dependent spectral coefficients into individual spaces, each obeying a particular regularity condition at the origin. These regularity spaces have remarkably simple properties under standard vector-calculus operations, such as grad and div. We use a hierarchy of rescaled Jacobi polynomials as a basis on these regularity spaces. It is possible to select the Jacobi-polynomial parameters such that all relevant operators act in a minimally banded way. Altogether, the geometric structure allows for the accurate and efficient solution of general partial differential equations in the unit ball.
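For readers unfamiliar with the radial basis, Jacobi polynomials P_n^{(a,b)} are cheap to evaluate by the standard three-term recurrence; a textbook sketch (not the paper's rescaled hierarchy):

```python
def jacobi(n, a, b, x):
    """Evaluate the Jacobi polynomial P_n^{(a,b)}(x) via the standard
    three-term recurrence, stable for moderate n."""
    if n == 0:
        return 1.0
    p_prev = 1.0
    p = 0.5 * (a - b) + 0.5 * (a + b + 2.0) * x
    for k in range(2, n + 1):
        c = 2.0 * k + a + b
        a1 = 2.0 * k * (k + a + b) * (c - 2.0)
        a2 = (c - 1.0) * (a * a - b * b)
        a3 = (c - 1.0) * c * (c - 2.0)
        a4 = 2.0 * (k + a - 1.0) * (k + b - 1.0) * c
        p_prev, p = p, ((a2 + a3 * x) * p - a4 * p_prev) / a1
    return p
```

With a = b = 0 this reduces to the Legendre polynomials, e.g. P_2(x) = (3x^2 - 1)/2, which provides a quick sanity check; scipy.special.eval_jacobi offers a vectorized reference implementation.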

Read this paper on arXiv…

G. Vasil, D. Lecoanet, K. Burns, et. al.
Mon, 30 Apr 18
-122/63

Comments: Submitted to JCP simultaneously with Part-II

Capturing near-equilibrium solutions: a comparison between high-order Discontinuous Galerkin methods and well-balanced schemes [CL]

http://arxiv.org/abs/1803.05919


Equilibrium or stationary solutions usually arise from an exact balance between hyperbolic transport terms and source terms. Such equilibrium solutions are affected by truncation errors that prevent any classical numerical scheme from capturing the evolution of small-amplitude waves of physical significance. To overcome this problem, we compare two commonly adopted strategies: going to very high order to drastically reduce the truncation errors on the equilibrium solution, or designing a scheme that preserves the equilibrium exactly by construction, the so-called well-balanced approach. We present a modern numerical implementation of these two strategies and compare them in detail, using both hydrostatic and dynamical equilibrium solutions on several simple test cases. Finally, we apply our methodology to the simulation of a protoplanetary disc in centrifugal equilibrium around its star and model its interaction with an embedded planet, illustrating the strengths of both methods in a realistic application.
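The core issue can be demonstrated on a toy balance law u_t + u_x = s(x) with a known equilibrium U satisfying U' = s (a hypothetical example, not one of the paper's test cases): a pointwise source discretization leaves an O(dx) residual on the equilibrium, while discretizing the source with the same difference operator as the flux makes the residual vanish exactly.

```python
import numpy as np

# Toy balance law u_t + u_x = s(x), equilibrium U with U' = s.
# Hypothetical choice: U(x) = exp(x), hence s = exp as well.
U = np.exp
s = np.exp

n = 64
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
u = U(x)                               # start exactly on the equilibrium

flux_div = (u[1:] - u[:-1]) / dx       # first-order upwind flux divergence

# naive pointwise source: O(dx) residual that spuriously drives the flow
res_naive = flux_div - s(x[1:])

# well-balanced: discretize the source with the same difference operator
# applied to the equilibrium, so the residual vanishes exactly on U
res_wb = flux_div - (U(x[1:]) - U(x[:-1])) / dx
```

The naive residual shrinks only at the truncation-error rate of the scheme (hence the "go to very high order" strategy), whereas the well-balanced discretization is exact at machine precision at any resolution.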

Read this paper on arXiv…

M. Veiga, R. Abgrall and R. Teyssier
Mon, 19 Mar 2018
4/57

Comments: 31 pages, 21 figures

Solving linear equations with messenger-field and conjugate gradients techniques – an application to CMB data analysis [CEA]

http://arxiv.org/abs/1803.03462


We discuss linear system solvers based on a messenger field and compare them with (preconditioned) conjugate gradient approaches. We show that the messenger-field techniques correspond to fixed-point iterations of an appropriately preconditioned initial system of linear equations. We then argue that a conjugate gradient solver applied to the same preconditioned system, or equivalently a preconditioned conjugate gradient solver using the same preconditioner and applied to the original system, will in general ensure at least comparable and typically better performance in terms of both the number of iterations to convergence and the time-to-solution. We illustrate our conclusions with two common examples drawn from Cosmic Microwave Background (CMB) data analysis: Wiener filtering and map-making. In addition, and contrary to the standard lore in the CMB field, we show that the performance of the preconditioned conjugate gradient solver can depend significantly on the starting vector. This observation seems of particular importance for map-making of high signal-to-noise sky maps and should therefore be of relevance to the next generation of CMB experiments.
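The comparison can be reproduced on a toy SPD system (hypothetical, not the paper's CMB operators): a fixed-point iteration of the preconditioned system, x <- x + M^{-1}(b - Ax), versus preconditioned conjugate gradients with the same preconditioner M.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = Q @ np.diag(np.linspace(1.0, 50.0, n)) @ Q.T   # SPD, condition number 50
b = rng.standard_normal(n)
M_inv = np.eye(n) / 50.0       # one crude preconditioner, shared by both

def fixed_point(A, b, M_inv, tol=1e-8, maxit=20000):
    """x <- x + M^{-1}(b - Ax): messenger-field solvers are fixed-point
    iterations of exactly this preconditioned form."""
    x = np.zeros_like(b)
    for k in range(1, maxit + 1):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, k
        x = x + M_inv @ r
    return x, maxit

def pcg(A, b, M_inv, tol=1e-8, maxit=20000):
    """Preconditioned conjugate gradients with the same preconditioner."""
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv @ r
    p = z.copy()
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(b):
            return x, k
        z_new = M_inv @ r_new
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, maxit

x_fp, k_fp = fixed_point(A, b, M_inv)
x_cg, k_cg = pcg(A, b, M_inv)
```

PCG minimizes the error over the same Krylov space that the fixed-point iteration explores one direction at a time, which is the structural reason it never needs more iterations here.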

Read this paper on arXiv…

J. Papez, L. Grigori and R. Stompor
Mon, 12 Mar 2018
6/45

Comments: N/A

Machine learning in APOGEE: Unsupervised spectral classification with $K$-means [IMA]

http://arxiv.org/abs/1801.07912


The data volume generated by astronomical surveys is growing rapidly. Traditional analysis techniques in spectroscopy either demand intensive human interaction or are computationally expensive. In this scenario, machine learning, and unsupervised clustering algorithms in particular, offer interesting alternatives. The Apache Point Observatory Galactic Evolution Experiment (APOGEE) offers a vast data set of near-infrared stellar spectra which is well suited to testing such alternatives. We apply an unsupervised classification scheme based on $K$-means to the massive APOGEE data set and explore whether the data are amenable to classification into discrete classes. We apply the $K$-means algorithm to 153,847 high-resolution spectra ($R\approx22,500$) and discuss the main virtues and weaknesses of the algorithm, as well as our choice of parameters. We show that a classification based on normalised spectra captures the variations in stellar atmospheric parameters, chemical abundances, and rotational velocity, among other factors. The algorithm is able to separate the bulge and halo populations, and to distinguish dwarfs, sub-giants, RC and RGB stars. However, a discrete classification in flux space does not result in a neat organisation in parameter space. Furthermore, the lack of obvious groups in flux space makes the results fairly sensitive to the initialisation and disrupts the efficiency of commonly used methods for selecting the optimal number of clusters. Our classification is publicly available, including extensive online material associated with APOGEE Data Release 12 (DR12). Our description of the APOGEE database can greatly help with the identification of specific types of targets for various applications. We find a lack of obvious groups in flux space, and identify limitations of the $K$-means algorithm in dealing with this kind of data.
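The algorithm itself is compact; a minimal Lloyd's-iteration sketch on synthetic "spectra" (hypothetical toy data, with a deterministic initialization for reproducibility; the paper runs the same idea at scale on normalized APOGEE spectra):

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal K-means (Lloyd's algorithm) on the rows of X."""
    # deterministic initialization: k rows spread across the data set
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()
    for _ in range(iters):
        # assign each spectrum to its nearest center (Euclidean, flux space)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# two well-separated synthetic clusters of 8-pixel "spectra"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.05, (40, 8)),
               rng.normal(1.0, 0.05, (40, 8))])
labels, centers = kmeans(X, k=2)
```

When the clusters are this well separated the iteration converges almost immediately; the abstract's point is precisely that real flux space lacks such separation, making results initialization-sensitive.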

Read this paper on arXiv…

R. Garcia-Dias, C. Prieto, J. Almeida, et. al.
Thu, 25 Jan 18
56/67

Comments: 23 pages, 24 images and online material

A fourth-order accurate finite volume method for ideal MHD via upwind constrained transport [IMA]

http://arxiv.org/abs/1711.07439


We present a fourth-order accurate finite volume method for the solution of ideal magnetohydrodynamics (MHD). The numerical method combines high-order quadrature rules in the solution of semi-discrete formulations of hyperbolic conservation laws with the upwind constrained transport (UCT) framework to ensure that the divergence-free constraint on the magnetic field is satisfied. A novel implementation of UCT that uses the piecewise parabolic method (PPM) for the reconstruction of magnetic fields at cell corners in 2D is introduced. The resulting scheme can be expressed as an extension of the second-order accurate constrained transport (CT) Godunov-type scheme currently used in the Athena astrophysics code. After validating the base algorithm on a series of hydrodynamics test problems, we present results for multidimensional MHD test problems which demonstrate formal fourth-order convergence for smooth problems, robustness for discontinuous problems, and improved accuracy relative to the second-order scheme.

Read this paper on arXiv…

K. Felker and J. Stone
Tue, 21 Nov 17
24/79

Comments: 36 pages, 16 figures, submitted to J. Comp. Phys

Fast generation of isotropic Gaussian random fields on the sphere [CL]

http://arxiv.org/abs/1709.10314


The efficient simulation of isotropic Gaussian random fields on the unit sphere is a task encountered frequently in numerical applications. We present a fast algorithm based on Markov properties and 1D Fast Fourier Transforms that generates samples on an n x n grid in O(n^2 log n) operations. Furthermore, an efficient method to set up the necessary conditional covariance matrices is derived, and simulations demonstrate the performance of the algorithm.
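The FFT trick underlying such algorithms is easy to show in one dimension (a generic sketch, not the paper's sphere algorithm): on a periodic grid the stationary covariance matrix is circulant, so the FFT of its first row gives its exact eigenvalues, and a sample costs O(n log n) with no matrix ever formed or factorized.

```python
import numpy as np

def sample_gp_ring(n, ell, seed=0):
    """Draw one sample of a stationary Gaussian field on a ring of n points
    in O(n log n): the circulant covariance is diagonalized by the FFT."""
    j = np.arange(n)
    d = np.minimum(j, n - j) / n                 # distances around the ring
    c = np.exp(-d**2 / (2.0 * ell**2))           # first covariance row
    lam = np.fft.fft(c).real                     # exact circulant eigenvalues
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    amp = np.sqrt(np.clip(lam, 0.0, None))       # guard tiny negative modes
    x = np.sqrt(2.0 * n) * np.fft.ifft(amp * xi).real
    return x, c, lam

x, c, lam = sample_gp_ring(n=256, ell=0.1)
```

The sphere case in the paper is harder because latitude rings are not jointly circulant; the Markov property supplies the missing structure there.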

Read this paper on arXiv…

P. Creasey and A. Lang
Mon, 2 Oct 17
28/47

Comments: 13 pages, 3 figures

Solar hard X-ray imaging by means of Compressed Sensing and Finite Isotropic Wavelet Transform [SSA]

http://arxiv.org/abs/1708.03877


This paper shows that compressed sensing, realized by means of regularized deconvolution and the Finite Isotropic Wavelet Transform, is effective and reliable in hard X-ray solar imaging.
The method utilizes the Finite Isotropic Wavelet Transform with the Meyer function as the mother wavelet. Compressed sensing is then realized by optimizing a sparsity-promoting regularized objective function with the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). Finally, the regularization parameter is selected by means of the Miller criterion.
The method is applied to both synthetic data mimicking Spectrometer/Telescope for Imaging X-rays (STIX) measurements and experimental observations provided by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The performance of the method is compared with that of standard visibility-based reconstruction methods.
The results show that the application of the sparsity constraint and the use of a continuous, isotropic framework for the wavelet transform provide notable spatial accuracy and significantly reduce the ringing effects due to the instrument point spread functions.
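The optimization step can be sketched with ISTA, the non-accelerated precursor of FISTA (FISTA adds a momentum extrapolation on top of this update); the blur operator and spike source below are hypothetical, not STIX or RHESSI data:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal map of the l1 sparsity penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding with step 1/L, L the gradient Lipschitz
    constant."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# hypothetical 1D Gaussian blur of a two-spike source
n = 50
A = np.exp(-0.5 * (np.subtract.outer(np.arange(n), np.arange(n)) / 2.0) ** 2)
x_true = np.zeros(n)
x_true[10] = 1.0
x_true[35] = -0.7
b = A @ x_true
x_hat = ista(A, b, lam=0.05)
```

Each iteration is one gradient step on the data-fit term followed by the shrinkage that promotes sparsity; the regularization parameter lam plays the role selected by the Miller criterion in the paper.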

Read this paper on arXiv…

M. Duval-Poo, M. Piana and A. Massone
Tue, 15 Aug 17
38/59

Comments: N/A

Atmospheric turbulence profiling with unknown power spectral density [IMA]

http://arxiv.org/abs/1707.02157


Adaptive optics (AO) is a technology used in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method for retrieving information about the atmosphere from telescope data is so-called SLODAR, in which the atmospheric turbulence profile is estimated from correlations of Shack–Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the problem of jointly estimating the turbulence profile above the ground and the unknown power spectral density at the ground is ill-posed, and we propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Our methods can also accurately locate local perturbations in non-Kolmogorov power spectral densities.

Read this paper on arXiv…

J. Lehtonen, T. Helin, S. Kindermann, et. al.
Mon, 10 Jul 17
40/64

Comments: N/A

Discontinuous Galerkin algorithms for fully kinetic plasmas [CL]

http://arxiv.org/abs/1705.05407


We present a new algorithm for the discretization of the Vlasov-Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high-order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third-order strong-stability-preserving Runge-Kutta method. Since the Vlasov equation in the Vlasov-Maxwell system is a high-dimensional transport equation, in up to six dimensions plus time, we take special care to describe the features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set. A series of benchmarks, from simple wave and shock calculations to a five-dimensional turbulence simulation, is presented to verify the efficacy of our numerical methods and to demonstrate the power of the implemented features.
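The time integrator named in the abstract is the classic Shu-Osher SSP-RK3 scheme, whose stages are convex combinations of forward-Euler updates, so stability properties of the spatial scheme carry over in time. A minimal sketch with a hypothetical scalar test problem:

```python
import math

def ssp_rk3(rhs, u, dt):
    """One Shu-Osher third-order SSP Runge-Kutta step: each stage is a
    convex combination of forward-Euler updates."""
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))

def solve(dt, nsteps):
    """Integrate the toy problem u' = -u, u(0) = 1, to t = nsteps * dt."""
    u = 1.0
    for _ in range(nsteps):
        u = ssp_rk3(lambda v: -v, u, dt)
    return u

# third-order convergence: halving dt should cut the error by about 8x
err_coarse = abs(solve(0.10, 10) - math.exp(-1.0))
err_fine = abs(solve(0.05, 20) - math.exp(-1.0))
```

The convex-combination structure is what makes the method "strong-stability-preserving": any bound or positivity property preserved by a single Euler step is preserved by the full stage sequence.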

Read this paper on arXiv…

J. Juno, A. Hakim, J. TenBarge, et. al.
Wed, 17 May 17
13/65

Comments: N/A