Speeding up a few orders of magnitude the Jacobi method: high order Chebyshev-Jacobi over GPUs [CL]

http://arxiv.org/abs/1705.00103


In this technical note we show how a remarkable speed-up can be achieved when solving elliptic partial differential equations with finite differences, by combining the Chebyshev-Jacobi method with high-order discretizations and a parallel implementation on GPUs.

Read this paper on arXiv…

J. Adsuara, I. Cordero-Carrion, P. Cerda-Duran, et al.
Tue, 2 May 17
39/45

Comments: 18 pages, 4 figures, 3 tables, submitted to JCP

Factorized Runge-Kutta-Chebyshev Methods [CL]

http://arxiv.org/abs/1702.03818


The second-order extended stability Factorized Runge-Kutta-Chebyshev (FRKC2) class of explicit schemes for the integration of large systems of PDEs with diffusive terms is presented. FRKC2 schemes are straightforward to implement through ordered sequences of forward Euler steps with complex stepsizes, and easily parallelised for large scale problems on distributed architectures.
Preserving 7 digits of accuracy at 16-digit precision, the schemes are theoretically capable of maintaining internal stability at acceleration factors in excess of 6000 with respect to standard explicit Runge-Kutta methods. The stability domains have approximately the same extents as those of RKC schemes, and are a third longer than those of RKL2 schemes. Extension of FRKC methods to fourth order, by both complex splitting and Butcher composition techniques, is discussed.
A publicly available implementation of the FRKC2 class of schemes may be obtained from maths.dit.ie/frkc
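
The factorization idea is easy to prototype. Below is a minimal Python sketch of the simpler first-order, real-stepsize analogue: the Chebyshev stability polynomial T_s(1 + w/s^2) factorizes into s forward Euler substeps whose sizes sum to the macro-step H. FRKC2 itself is second order and uses complex stepsizes in a carefully chosen ordering, which this sketch does not reproduce.

    import numpy as np

    # First-order Chebyshev analogue of the factorized idea: the roots w_k of
    # T_s(1 + w/s^2) are real and negative, and the substeps h_k = -H/w_k sum
    # exactly to H, so one macro-step is an ordered sequence of Euler steps.
    def substeps(s, H):
        k = np.arange(1, s + 1)
        w = s**2 * (np.cos((2*k - 1) * np.pi / (2*s)) - 1.0)
        return -H / w

    # Test problem: 1D heat equation u_t = u_xx, periodic, central differences.
    n, L = 128, 2.0 * np.pi
    x = np.linspace(0.0, L, n, endpoint=False)
    dx = L / n
    u = np.sin(x)
    rhs = lambda u: (np.roll(u, -1) - 2.0*u + np.roll(u, 1)) / dx**2

    s = 8
    lam_max = 4.0 / dx**2              # stiffest eigenvalue of the Laplacian
    H = 0.9 * 2.0 * s**2 / lam_max     # ~s^2 times the forward Euler limit
    for h in substeps(s, H):           # substep ordering matters for large s
        u = u + h * rhs(u)             # plain forward Euler substeps
    print(np.abs(u - np.exp(-H) * np.sin(x)).max())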

Read this paper on arXiv…

S. O'Sullivan
Thu, 16 Feb 17
11/45

Comments: 9 pages, 6 figures, accepted to the proceedings of Astronum 2016 – 11th Annual International Conference on Numerical Modeling of Space Plasma Flows, June 6-10, 2016

A numerical scheme for the compressible low-Mach number regime of ideal fluid dynamics [CL]

http://arxiv.org/abs/1612.03910


A new technique based on the Roe solver, which allows low Mach number flows to be represented correctly with a discretization of the compressible Euler equations, was proposed in Miczek et al., New numerical solver for flows at various Mach numbers, A&A 576, A50 (2015). We analyze properties of this scheme and demonstrate that its limit yields a discretization of the continuous limit system. Furthermore, we perform a linear stability analysis for the case of explicit time integration and study the performance of the scheme under implicit time integration via the evolution of its condition number. A numerical implementation demonstrates the capabilities of the scheme on the example of the Gresho vortex, which can be accurately followed down to Mach numbers of $\sim 10^{-10}$.

Read this paper on arXiv…

W. Barsukow, P. Edelmann, C. Klingenberg, et al.
Wed, 14 Dec 16
18/67

Comments: N/A

Space-time adaptive ADER-DG schemes for dissipative flows: compressible Navier-Stokes and resistive MHD equations [CL]

http://arxiv.org/abs/1612.01410


This paper presents an arbitrary high-order accurate ADER discontinuous Galerkin (DG) method on space-time adaptive meshes (AMR) for the solution of two important families of non-linear time-dependent PDEs for compressible dissipative flows: the compressible Navier-Stokes equations and the equations of viscous and resistive MHD in two and three space dimensions. The work continues a recent series of papers concerning the development and application of a proper a posteriori subcell FV limiting procedure suitable for DG methods. It is well known that a major weakness of high-order DG methods lies in the difficulty of limiting discontinuous solutions, which generate spurious oscillations, namely the so-called ‘Gibbs phenomenon’. In the present work the main benefits of the MOOD paradigm, i.e. the computational robustness even in the presence of strong shocks, are preserved, and the numerical diffusion is considerably reduced also for the limited cells by resorting to a proper sub-grid. An important feature of our new scheme is its ability to cure even floating point errors that may occur during a simulation, for example when taking real roots of negative numbers or after divisions by zero. We apply the whole approach for the first time to the equations of compressible gas dynamics and MHD in the presence of viscosity, thermal conductivity and magnetic resistivity, therefore extending our family of adaptive ADER-DG schemes to cases for which the numerical fluxes also depend on the gradient of the state vector. The distinguished high-resolution properties of the presented numerical scheme stand out against a wide number of non-trivial test cases both for the compressible Navier-Stokes and the viscous and resistive MHD equations. The present results show clearly that the shock-capturing capability of the new schemes is significantly enhanced within a cell-by-cell Adaptive Mesh Refinement implementation together with time-accurate local time stepping (LTS).
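
The a posteriori detect-and-recompute logic is independent of the heavy ADER-DG machinery and can be sketched on a 1D scalar law. The following Python toy (our illustration, not the authors' code) computes a high-order candidate update, flags cells that produce non-finite values or violate a discrete maximum principle, and recomputes only those cells with a robust first-order scheme:

    import numpy as np

    # Toy a posteriori (MOOD-style) limiting for 1D linear advection, a > 0:
    # accept a high-order candidate update unless a cell breaks a discrete
    # maximum principle (DMP) or produces non-finite values; recompute only
    # the flagged cells with robust first-order upwinding.
    def step(u, c):                      # c = a*dt/dx, periodic grid
        up1, um1 = np.roll(u, -1), np.roll(u, 1)
        cand = u - 0.5*c*(up1 - um1) + 0.5*c*c*(up1 - 2*u + um1)  # Lax-Wendroff
        lo = np.minimum(np.minimum(um1, u), up1)                  # DMP bounds
        hi = np.maximum(np.maximum(um1, u), up1)
        bad = ~np.isfinite(cand) | (cand < lo - 1e-12) | (cand > hi + 1e-12)
        return np.where(bad, u - c*(u - um1), cand)    # upwind fallback

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = (np.abs(x - 0.5) < 0.1).astype(float)   # square wave: Gibbs for pure LW
    for _ in range(100):
        u = step(u, c=0.5)
    print(u.min(), u.max())                     # stays within [0, 1]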

Read this paper on arXiv…

F. Fambri, M. Dumbser and O. Zanotti
Tue, 6 Dec 16
58/71

Comments: 31 pages, 16 figures

Scaling Laws of Passive-Scalar Diffusion in the Interstellar Medium [GA]

http://arxiv.org/abs/1610.06590


Passive scalar mixing (metals, molecules, etc.) in the turbulent interstellar medium (ISM) is critical for abundance patterns of stars and clusters, galaxy and star formation, and cooling from the circumgalactic medium. However, the fundamental scaling laws remain poorly understood (and usually unresolved in numerical simulations) in the highly supersonic, magnetized, shearing regime relevant for the ISM. We therefore study the full scaling laws governing passive-scalar transport in idealized simulations of supersonic MHD turbulence, including shear. Using simple phenomenological arguments for the variation of diffusivity with scale based on Richardson diffusion, we propose a simple fractional diffusion equation to describe the turbulent advection of an initial passive scalar distribution. These predictions agree well with the measurements from simulations, and vary with turbulent Mach number in the expected manner, remaining valid even in the presence of a large-scale shear flow (e.g. rotation in a galactic disk). The evolution of the scalar distribution is not the same as obtained using a simple, constant “effective diffusivity” as in Smagorinsky models, because the scale-dependence of turbulent transport means an initially Gaussian distribution quickly develops highly non-Gaussian tails. We also emphasize that these are mean scalings that only apply to ensemble behaviors (assuming many different, random scalar injection sites): individual Lagrangian “patches” remain coherent (poorly-mixed) and simply advect for a large number of turbulent flow-crossing times.
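
A fractional diffusion equation of the kind invoked here is trivial to evolve spectrally, since it is diagonal in Fourier space. A minimal 1D Python illustration (the exponent and diffusivity are placeholders, not the paper's fitted values):

    import numpy as np

    # Exact spectral evolution of dc/dt = -kappa * (-Laplacian)^(alpha/2) c:
    # each Fourier mode decays as exp(-kappa |k|^alpha t). alpha < 2 produces
    # the heavier-than-Gaussian tails discussed above.
    n, L = 256, 2.0*np.pi
    x = np.linspace(0.0, L, n, endpoint=False)
    k = 2.0*np.pi*np.fft.fftfreq(n, d=L/n)
    alpha, kappa, t = 1.5, 0.1, 1.0          # illustrative values

    c0 = np.exp(-(x - np.pi)**2 / 0.05)      # initial Gaussian blob
    ct = np.fft.ifft(np.fft.fft(c0)*np.exp(-kappa*np.abs(k)**alpha*t)).real
    print("peak:", ct.max(), "far-field:", ct[0])   # note the fat tails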

Read this paper on arXiv…

M. Colbrook, X. Ma, P. Hopkins, et al.
Mon, 24 Oct 16
7/53

Comments: submitted to MNRAS, 8 pages, 4 figures, comments welcome

Symplectic fourth-order maps for the collisional N-body problem [CL]

http://arxiv.org/abs/1609.09375


We study analytically and experimentally certain symplectic and time-reversible N-body integrators which employ a Kepler solver for each pair-wise interaction, including the method of Hernandez & Bertschinger (2015). Owing to the Kepler solver, these methods treat close two-body interactions correctly, while close three-body encounters contribute to the truncation error at second order and above. The second-order errors can be corrected to obtain a fourth-order scheme with little computational overhead. We generalise this map to an integrator which employs a Kepler solver only for selected interactions and yet retains fourth-order accuracy without backward steps. In this case, however, two-body encounters not treated via a Kepler solver contribute to the truncation error.
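
As a point of reference for fourth-order constructions, the classical Yoshida "triple jump" composes any second-order symplectic map into a fourth-order one, at the price of a backward substep (w0 < 0); the paper's corrector-based scheme achieves fourth order without such backward steps. A self-contained sketch on a harmonic oscillator:

    import numpy as np

    # Yoshida composition: S4(dt) = S2(w1*dt) S2(w0*dt) S2(w1*dt), with the
    # standard coefficients below; note w0 is negative (a backward substep).
    w1 = 1.0 / (2.0 - 2.0**(1.0/3.0))
    w0 = 1.0 - 2.0*w1

    def leapfrog(q, p, dt):      # second-order kick-drift-kick, H = (p^2+q^2)/2
        p = p - 0.5*dt*q         # kick (force = -q)
        q = q + dt*p             # drift
        p = p - 0.5*dt*q
        return q, p

    def yoshida4(q, p, dt):
        for w in (w1, w0, w1):
            q, p = leapfrog(q, p, w*dt)
        return q, p

    q, p, dt = 1.0, 0.0, 0.01
    for _ in range(1000):
        q, p = yoshida4(q, p, dt)
    print("energy error:", abs(0.5*(p*p + q*q) - 0.5))   # scales as dt^4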

Read this paper on arXiv…

W. Dehnen and D. Hernandez
Fri, 30 Sep 16
25/75

Comments: 17 pages, re-submitted to MNRAS

Hybrid Entropy Stable HLL-Type Riemann Solvers for Hyperbolic Conservation Laws [CL]

http://arxiv.org/abs/1607.06240


It is known that HLL-type schemes are more dissipative than schemes based on characteristic decompositions. However, HLL-type methods offer greater flexibility to large systems of hyperbolic conservation laws because the eigenstructure of the flux Jacobian is not needed. We demonstrate in the present work that several HLL-type Riemann solvers are provably entropy stable. Further, we provide convex combinations of standard dissipation terms to create hybrid HLL-type methods that have less dissipation while retaining entropy stability. The decrease in dissipation is demonstrated for the ideal MHD equations with a numerical example.
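
The blending idea can be caricatured on Burgers' equation: take a convex combination of the very dissipative local Lax-Friedrichs flux and the sharper HLL flux. This scalar Python sketch shows only the blending; the paper's contribution is proving entropy stability of such hybrids for systems like ideal MHD.

    import numpy as np

    # Burgers' equation u_t + (u^2/2)_x = 0 with a flux that convexly blends
    # local Lax-Friedrichs (LLF) and HLL dissipation (weight theta).
    def f(u):
        return 0.5*u*u

    def hll(uL, uR):                       # exact wave speeds for Burgers
        sL, sR = np.minimum(uL, uR), np.maximum(uL, uR)
        fhll = (sR*f(uL) - sL*f(uR) + sL*sR*(uR - uL)) \
               / np.where(sR > sL, sR - sL, 1.0)
        return np.where(sL >= 0, f(uL), np.where(sR <= 0, f(uR), fhll))

    def llf(uL, uR):
        a = np.maximum(np.abs(uL), np.abs(uR))
        return 0.5*(f(uL) + f(uR)) - 0.5*a*(uR - uL)

    def hybrid(uL, uR, theta=0.5):     # theta=1: pure LLF, theta=0: pure HLL
        return theta*llf(uL, uR) + (1.0 - theta)*hll(uL, uR)

    n = 200
    dx, dt = 1.0/n, 0.002
    u = np.sin(2.0*np.pi*np.linspace(0.0, 1.0, n, endpoint=False))
    for _ in range(100):               # run past shock formation
        F = hybrid(u, np.roll(u, -1))  # flux at interface i+1/2
        u = u - dt/dx*(F - np.roll(F, 1))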

Read this paper on arXiv…

B. Schmidtmann and A. Winters
Fri, 22 Jul 16
15/57

Comments: 5 pages

A Hybrid Riemann Solver for Large Hyperbolic Systems of Conservation Laws [CL]

http://arxiv.org/abs/1607.05721


We are interested in the numerical solution of large systems of hyperbolic conservation laws, or of systems in which the characteristic decomposition is expensive to compute. Solving such equations with finite volume or discontinuous Galerkin methods requires a numerical flux function which solves local Riemann problems at cell interfaces. There are various ways to define the numerical flux function. At one end there is the robust but very diffusive Lax-Friedrichs solver; at the other, the upwind Godunov solver, which resolves all resulting waves. The drawback of the latter method is the costly computation of the eigensystem.
This work presents a family of simple first order Riemann solvers, named HLLX$\omega$, which avoid solving the eigensystem. The new method reproduces all waves of the system with less dissipation than other solvers with similar input and effort, such as HLL and FORCE. The family of Riemann solvers can be seen as an extension or generalization of the methods introduced by Degond et al. (1999). We only require the same number of input values as HLL, namely the globally fastest wave speeds in both directions, or an estimate of the speeds. Thus, the new family of Riemann solvers is particularly efficient for large systems of conservation laws when the spectral decomposition is expensive to compute or no explicit expression for the eigensystem is available.

Read this paper on arXiv…

B. Schmidtmann and M. Torrilhon
Thu, 21 Jul 16
46/48

Comments: arXiv admin note: text overlap with arXiv:1606.08040

On the equivalence between the Scheduled Relaxation Jacobi method and Richardson's non-stationary method [CL]

http://arxiv.org/abs/1607.03712


The Scheduled Relaxation Jacobi (SRJ) method is an extension of the classical Jacobi iterative method to solve linear systems of equations ($Au=b$) associated with elliptic problems. It inherits its robustness and accelerates its convergence rate by computing a set of $P$ relaxation factors that result from a minimization problem. In a typical SRJ scheme, this set of factors is employed in cycles of $M$ consecutive iterations until a prescribed tolerance is reached. We present the analytic form of the optimal set of relaxation factors for the case in which all of them are different, and find that the resulting algorithm is equivalent to a non-stationary generalized Richardson’s method. Our method for estimating the weights has the advantage that the explicit computation of the maximum and minimum eigenvalues of the matrix $A$ is replaced by the (much easier) calculation of the maximum and minimum frequencies derived from a von Neumann analysis. This set of weights is also optimal for the general problem, resulting in the fastest convergence of all possible SRJ schemes for a given grid structure. We also show that with the set of weights computed for the optimal SRJ scheme for a fixed cycle size it is possible in some cases to estimate numerically the optimal value of the parameter $\omega$ in the Successive Overrelaxation (SOR) method. Finally, we demonstrate with practical examples that our method also works very well for Poisson-like problems in which a high-order discretization of the Laplacian operator is employed. This is of interest since such discretizations do not yield consistently ordered $A$ matrices. Furthermore, the optimal SRJ schemes deduced here are advantageous over existing SOR implementations for high-order discretizations of the Laplacian operator inasmuch as they do not need to resort to multi-coloring schemes for their parallel implementation. (abridged)
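
The equivalence is easy to exercise: run a non-stationary Richardson iteration whose P relaxation factors are the classical Chebyshev choices built from the extreme frequencies of the Jacobi iteration. The sketch below uses the textbook Chebyshev factors (not the tabulated SRJ sets) on a 1D Poisson problem:

    import numpy as np

    # Non-stationary Richardson / SRJ-style cycles for -u'' = b on (0,1) with
    # Dirichlet BCs, A = tridiag(-1,2,-1)/h^2. The P weights are Chebyshev
    # factors built from the extreme eigenvalues ("von Neumann frequencies")
    # of the weighted Jacobi operator D^{-1}A.
    n = 63
    h = 1.0/(n + 1)
    x = np.linspace(h, 1.0 - h, n)
    b = np.sin(np.pi*x)                  # exact solution: sin(pi x)/pi^2

    kmin = 1.0 - np.cos(np.pi*h)         # extreme eigenvalues of D^{-1}A
    kmax = 1.0 + np.cos(np.pi*h)
    P = 16
    j = np.arange(1, P + 1)
    omega = 2.0/(kmax + kmin - (kmax - kmin)*np.cos((2*j - 1)*np.pi/(2*P)))

    def residual(u):                     # D^{-1}(h^2 b - A u), with D = 2 I
        Au = 2*u - np.roll(u, 1) - np.roll(u, -1)
        Au[0] = 2*u[0] - u[1]            # Dirichlet: zero outside the grid
        Au[-1] = 2*u[-1] - u[-2]
        return (h*h*b - Au)/2.0

    u = np.zeros(n)
    for cycle in range(100):             # repeat the P-sweep cycle
        for w in omega:                  # (weight ordering matters for large P)
            u = u + w*residual(u)
    print(np.abs(u - np.sin(np.pi*x)/np.pi**2).max())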

Read this paper on arXiv…

J. Adsuara, I. Cordero-Carrion, P. Cerda-Duran, et al.
Thu, 14 Jul 16
52/72

Comments: 28 pages, 5 figures, submitted to JCP

Hybrid Riemann Solvers for Large Systems of Conservation Laws [CL]

http://arxiv.org/abs/1606.08040


In this paper we present a new family of approximate Riemann solvers for the numerical approximation of solutions of hyperbolic conservation laws. They are approximate, also referred to as incomplete, in the sense that the solvers avoid computing the characteristic decomposition of the flux Jacobian. Instead, they require only an estimate of the globally fastest wave speeds in both directions. Thus, this family of solvers is particularly efficient for large systems of conservation laws, i.e. with many different propagation speeds, and when no explicit expression for the eigensystem is available. Even though only the fastest wave speeds are needed as input values, the new family of Riemann solvers reproduces all waves with less dissipation than HLL, which has the same prerequisites, while requiring only one additional flux evaluation.

Read this paper on arXiv…

B. Schmidtmann, M. Astrakhantceva and M. Torrilhon
Tue, 28 Jun 16
56/58

Comments: 9 pages

The IDSA and the homogeneous sphere: Issues and possible improvements [CL]

http://arxiv.org/abs/1606.04020


In this paper, we are concerned with the study of the Isotropic Diffusion Source Approximation (IDSA) (Baxter et al., Phys. Rev. E 73, 046118, 2006) of radiative transfer. After having recalled well-known limits of the radiative transfer equation, we present the IDSA and adapt it to the case of the homogeneous sphere. We then show that for this example the IDSA suffers from severe numerical difficulties. We argue that these difficulties originate in the min-max switch coupling mechanism used in the IDSA. To overcome this problem we reformulate the IDSA to avoid the problematic coupling. This allows us to access the modeling error of the IDSA for the homogeneous sphere test case. The IDSA is shown to overestimate the streaming component, hence we propose a new version of the IDSA which is numerically shown to be more accurate than the old one.
Analytical results and numerical tests are provided to support the accuracy of the new proposed approximation.

Read this paper on arXiv…

J. Michaud
Tue, 14 Jun 16
25/67

Comments: 25 pages, 8 figures, accepted for publication in DCDS-S

On the kernel and particle consistency in smoothed particle hydrodynamics [CL]

http://arxiv.org/abs/1605.05245


The problem of consistency of smoothed particle hydrodynamics (SPH) has demanded considerable attention in the past few years due to the ever increasing number of applications of the method in many areas of science and engineering. A loss of consistency leads to an inevitable loss of approximation accuracy. In this paper, we revisit the issue of SPH kernel and particle consistency and demonstrate that SPH has a limiting second-order convergence rate. Numerical experiments with suitably chosen test functions validate this conclusion. In particular, we find that when using the root mean square error as a model evaluation statistic, well-known corrective SPH schemes, which were thought to converge to second, or even higher order, are actually first-order accurate, or at best close to second order. We also find that in the joint limit $N\to\infty$, $h\to 0$, and $n\to\infty$, as was recently proposed by Zhu et al., where $N$ is the total number of particles, $h$ is the smoothing length, and $n$ is the number of neighbor particles, standard SPH restores full $C^{0}$ particle consistency for both the estimates of the function and its derivatives and becomes insensitive to particle disorder.
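
Convergence measurements of this kind are straightforward to reproduce for the uncorrected SPH summation estimate. A 1D Python sketch (the kernel, test function, and h-to-spacing ratio are illustrative choices):

    import numpy as np

    def W(q, h):                            # standard 1D cubic spline kernel
        s = 2.0/(3.0*h)
        return np.where(q < 1, s*(1 - 1.5*q*q*(1 - 0.5*q)),
               np.where(q < 2, s*0.25*(2 - q)**3, 0.0))

    # RMSE of the plain SPH summation estimate of f(x) = sin(x) on a periodic
    # 1D particle lattice, with h tied to the spacing (fixed neighbor number).
    for N in (100, 200, 400, 800):
        xs = np.linspace(0.0, 2*np.pi, N, endpoint=False)
        dx = xs[1] - xs[0]
        h = 1.2*dx
        d = np.abs(xs[:, None] - xs[None, :])
        d = np.minimum(d, 2*np.pi - d)      # periodic distances
        est = (dx*W(d/h, h)) @ np.sin(xs)   # f_i ~ sum_j (m/rho)_j f_j W_ij
        print(N, np.sqrt(np.mean((est - np.sin(xs))**2)))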

Read this paper on arXiv…

L. Sigalotti, J. Klapp, O. Rendon, et al.
Wed, 18 May 16
52/67

Comments: 27 pages, 10 figures. Submitted to Journal of Applied Numerical Mathematics

A parallel code for multiprecision computations of the Lane-Emden differential equation [SSA]

http://arxiv.org/abs/1604.08019


We compute multiprecision solutions of the Lane-Emden equation. This differential equation arises when introducing the well-known polytropic model into the equation of hydrostatic equilibrium for a nondistorted star. Since such multiprecision computations are time-consuming, we apply parallel programming techniques to this problem, drastically reducing the execution time of the computations.
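
A serial multiprecision sketch in Python using mpmath conveys the computational setting (the paper's point is parallelising such runs): RK4 integration of theta'' + (2/xi) theta' + theta^n = 0, theta(0)=1, theta'(0)=0, at 50 decimal digits.

    from mpmath import mp, mpf

    mp.dps = 50                        # 50 decimal digits of working precision
    n = mpf(3)                         # polytropic index

    def deriv(xi, th, dth):            # returns (theta', theta'')
        tn = th**n if th > 0 else mpf(0)   # guard near the stellar surface
        if xi == 0:
            return dth, -tn/3          # series limit of the ODE at xi = 0
        return dth, -tn - (2/xi)*dth

    h = mpf(1)/1000
    xi, th, dth = mpf(0), mpf(1), mpf(0)
    while th > 0:                      # integrate out to the first zero
        k1 = deriv(xi, th, dth)
        k2 = deriv(xi + h/2, th + h/2*k1[0], dth + h/2*k1[1])
        k3 = deriv(xi + h/2, th + h/2*k2[0], dth + h/2*k2[1])
        k4 = deriv(xi + h,   th + h*k3[0],   dth + h*k3[1])
        th  += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dth += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        xi += h
    print("first zero xi_1 ~", xi)     # ~6.8968 for n = 3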

Read this paper on arXiv…

V. Geroyannis and V. Karageorgopoulos
Thu, 28 Apr 16
31/57

Comments: 8 pages

Scheduled Relaxation Jacobi method: improvements and applications [CL]

http://arxiv.org/abs/1511.04292


Elliptic partial differential equations (ePDEs) appear in a wide variety of areas of mathematics, physics and engineering. Typically, ePDEs must be solved numerically, which sets an ever-growing demand for efficient and highly parallel algorithms to tackle their computational solution. The Scheduled Relaxation Jacobi (SRJ) is a promising class of methods, atypical for combining simplicity and efficiency, that has been recently introduced for solving linear Poisson-like ePDEs. The SRJ methodology relies on computing the appropriate parameters of a multilevel approach with the goal of minimizing the number of iterations needed to cut down the residuals below specified tolerances. The efficiency in the reduction of the residual increases with the number of levels employed in the algorithm. Applying the original methodology to compute the algorithm parameters with more than 5 levels notably hinders obtaining optimal SRJ schemes, as the mixed (non-linear) algebraic-differential equations from which they result become notably stiff. Here we present a new methodology for obtaining the parameters of SRJ schemes that overcomes the limitations of the original algorithm and provide parameters for SRJ schemes with up to 15 levels and resolutions of up to $2^{15}$ points per dimension, allowing for acceleration factors larger than several hundred with respect to the Jacobi method for typical resolutions and, in some high resolution cases, close to 1000. Furthermore, we extend the original algorithm to apply it to certain systems of non-linear ePDEs.

Read this paper on arXiv…

J. Adsuara, I. Cordero-Carrion, P. Cerda-Duran, et al.
Tue, 17 Nov 15
8/87

Comments: 37 pages, 8 figures, submitted to JCP

Efficient conservative ADER schemes based on WENO reconstruction and space-time predictor in primitive variables [CL]

http://arxiv.org/abs/1511.04728


We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction as well as the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. Since the underlying finite volume scheme is still written in terms of cell averages of the conserved quantities, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are converted into point values of the primitive variables. A second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. We have verified the validity of the new approach on the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and relativistic ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. In all cases we have found that the new ADER schemes provide less oscillatory solutions when compared to ADER finite volume schemes based on the reconstruction in conserved variables, especially for the RMHD and the Baer-Nunziato equations. For the RHD and RMHD equations, the accuracy is improved and the CPU time is reduced by about 25%. We recommend this version of ADER as the standard one in the relativistic framework. The new approach can be extended to ADER-DG schemes on space-time adaptive grids.
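
The conversion step at the cell centers is where the approach pays off for relativistic systems: for the classical Euler equations the conserved-to-primitive map is algebraic, as in the sketch below, whereas for RHD/RMHD it requires an iterative inversion per point.

    import numpy as np

    GAMMA = 1.4   # ideal-gas adiabatic index (illustrative)

    def cons2prim(U):                  # (rho, rho*u, E) -> (rho, u, p)
        rho, mom, E = U
        u = mom/rho
        p = (GAMMA - 1.0)*(E - 0.5*rho*u*u)
        return np.array([rho, u, p])

    def prim2cons(V):                  # (rho, u, p) -> (rho, rho*u, E)
        rho, u, p = V
        return np.array([rho, rho*u, p/(GAMMA - 1.0) + 0.5*rho*u*u])

    V = np.array([1.0, 0.5, 2.0])
    print(np.allclose(cons2prim(prim2cons(V)), V))   # round-trip check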

Read this paper on arXiv…

O. Zanotti and M. Dumbser
Tue, 17 Nov 15
84/87

Comments: N/A

Formulation of discontinuous Galerkin methods for relativistic astrophysics [CL]

http://arxiv.org/abs/1510.01190


The DG algorithm is a powerful method for solving PDEs, especially for evolution equations in conservation form. Since the algorithm involves integration over volume elements, it is not immediately obvious that it will generalize easily to arbitrary time-dependent curved spacetimes. We show how to formulate the algorithm in such spacetimes for applications in relativistic astrophysics. We also show how to formulate the algorithm for equations in non-conservative form, such as Einstein’s field equations themselves. We find two computationally distinct formulations in both cases, one of which has seldom been used before for flat space in curvilinear coordinates but which may be more efficient. We also give a new derivation of the ALE algorithm (Arbitrary Lagrangian-Eulerian) using 4-vector methods that is much simpler than the usual derivation and explains why the method preserves the conservation form of the equations. The various formulations are explored with some simple numerical experiments that also explore the effect of the metric identities on the results.

Read this paper on arXiv…

S. Teukolsky
Tue, 6 Oct 15
55/78

Comments: N/A

Tensor calculus in polar coordinates using Jacobi polynomials [CL]

http://arxiv.org/abs/1509.07624


Spectral methods are an efficient way to solve partial differential equations on domains possessing certain symmetries. The utility of a method depends strongly on the choice of spectral basis. In this paper we describe a set of bases built out of Jacobi polynomials, and associated operators for solving scalar, vector, and tensor partial differential equations in polar coordinates on a unit disk. By construction, the bases satisfy regularity conditions at r=0 for any tensorial field. The coordinate singularity in a disk is a prototypical case for many coordinate singularities. The work presented here extends to other geometries. The operators represent covariant derivatives, multiplication by azimuthally symmetric functions, and the tensorial relationship between fields. These arise naturally from relations between classical orthogonal polynomials, and form a Heisenberg algebra. Other past work uses more specific polynomial bases for solving equations in polar coordinates. The main innovation in this paper is to use a larger set of possible bases to achieve maximum bandedness of linear operations. We provide a series of applications of the methods, illustrating their ease-of-use and accuracy.
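
The 1D ingredient is standard: Jacobi polynomials orthogonal under the weight (1-x)^a (1+x)^b. A quick SciPy check (the paper's disk bases combine such polynomials in radius with Fourier modes in azimuth; that construction is not reproduced here):

    import numpy as np
    from scipy.special import eval_jacobi, roots_jacobi

    # Verify orthogonality of P_n^{(a,b)} under (1-x)^a (1+x)^b using a
    # Gauss-Jacobi quadrature rule (exact for the polynomial degrees below).
    a, b = 1.0, 0.0
    x, w = roots_jacobi(20, a, b)            # 20-point Gauss-Jacobi rule
    P3 = eval_jacobi(3, a, b, x)
    P5 = eval_jacobi(5, a, b, x)
    print((w * P3 * P5).sum())               # ~0: orthogonal
    print((w * P3 * P3).sum())               # positive squared norm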

Read this paper on arXiv…

G. Vasil, K. Burns, D. Lecoanet, et al.
Mon, 28 Sep 15
38/67

Comments: 43 pages, 8 figures. Submitted to SIAM Review

Numerical methods for solution of the stochastic differential equations equivalent to the non-stationary Parker's transport equation [SSA]

http://arxiv.org/abs/1509.06890


We derive numerical schemes for the strong-order integration of the set of stochastic differential equations (SDEs) corresponding to the non-stationary Parker transport equation (PTE). The PTE is a 5-dimensional (3 spatial coordinates, particle energy and time) Fokker-Planck type equation describing the non-stationary transport of galactic cosmic ray (GCR) particles in the heliosphere. We present the formulas for the numerical solution of the obtained set of SDEs driven by a Wiener process in the case of the full three-dimensional diffusion tensor. We introduce the solution applying the strong-order Euler-Maruyama, Milstein and stochastic Runge-Kutta methods. We discuss the advantages and disadvantages of the presented numerical methods in the context of increasing the accuracy of the solution of the PTE.
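
For a scalar SDE dX = a(X)dt + b(X)dW, the two lowest-order strong schemes compared here look as follows (drift and diffusion are illustrative; the paper's problem is 5-dimensional with a full 3-D diffusion tensor):

    import numpy as np

    rng = np.random.default_rng(1)

    def a(x):  return -x             # illustrative drift
    def b(x):  return 0.4*x          # illustrative diffusion
    def db(x): return 0.4            # b'(x), needed by Milstein

    def euler_maruyama(x, dt, dW):
        return x + a(x)*dt + b(x)*dW

    def milstein(x, dt, dW):         # adds 0.5*b*b'*(dW^2 - dt) correction
        return euler_maruyama(x, dt, dW) + 0.5*b(x)*db(x)*(dW*dW - dt)

    T, nsteps = 1.0, 1000
    dt = T/nsteps
    x_em = x_mil = 1.0
    for _ in range(nsteps):          # same Wiener increments for both schemes
        dW = rng.normal(0.0, np.sqrt(dt))
        x_em, x_mil = euler_maruyama(x_em, dt, dW), milstein(x_mil, dt, dW)
    print(x_em, x_mil)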

Read this paper on arXiv…

A. Wawrzynczak, R. Modzelewska and M. Kluczek
Thu, 24 Sep 15
20/60

Comments: 4 pages, 2 figures, presented on 4th International Conference on Mathematical Modeling in Physical Sciences, 2015

Stochastic approach to the numerical solution of the non-stationary Parker's transport equation [SSA]

http://arxiv.org/abs/1509.06523


We present a newly developed stochastic model of galactic cosmic ray (GCR) particle transport in the heliosphere. Mathematically, the Parker transport equation (PTE), describing non-stationary transport of charged particles in a turbulent medium, is of Fokker-Planck type. It is a second-order parabolic, time-dependent, 4-dimensional (3 spatial coordinates and particle energy/rigidity) partial differential equation. It is worth mentioning that in the stationary case the problem remains 3-D parabolic with respect to the particle rigidity R, while for fixed energy it remains 3-D parabolic with respect to time. The proposed method of numerical solution is based on the solution of the system of stochastic differential equations (SDEs) equivalent to Parker’s transport equation. We present the method of deriving from the PTE the equivalent SDEs in the heliocentric spherical coordinate system for the backward approach. The obtained stochastic model of the Forbush decrease of the GCR intensity is in agreement with the experimental data. The advantages and disadvantages of the forward and the backward solution of the PTE are discussed.

Read this paper on arXiv…

A. Wawrzynczak, R. Modzelewska and A. Gil
Wed, 23 Sep 15
38/63

Comments: 4 pages, 2 figures, presented on International Conference on Mathematical Modeling in Physical Sciences, 2014

A stochastic method of solution of the Parker transport equation [SSA]

http://arxiv.org/abs/1509.06519


We present a stochastic model of galactic cosmic ray (GCR) particle transport in the heliosphere. Based on the solution of the Parker transport equation we developed models of short-time variations of the GCR intensity, i.e. the Forbush decrease (Fd) and the 27-day variation of the GCR intensity. The Parker transport equation, being a Fokker-Planck type equation, describes non-stationary transport of charged particles in a turbulent medium. The presented numerical approach is based on solving the set of equivalent stochastic differential equations (SDEs). We demonstrate the method of deriving from the Parker transport equation the corresponding SDEs in the heliocentric spherical coordinate system for the backward approach. Features indicating the advantage of the backward approach over the forward one are stressed. We compare the outcomes of the stochastic model of the Fd and the 27-day variation of the GCR intensity with our former models established by the finite difference method. Both models are in agreement with the experimental data.

Read this paper on arXiv…

A. Wawrzynczak, R. Modzelewska and A. Gil
Wed, 23 Sep 15
46/63

Comments: 8 pages, 7 figures, presented on 24th European Cosmic Ray Symposium 2014

Space-time adaptive ADER discontinuous Galerkin finite element schemes with a posteriori sub-cell finite volume limiting [CL]

http://arxiv.org/abs/1412.0081


In this paper we present a novel arbitrary high order accurate discontinuous Galerkin (DG) finite element method on space-time adaptive Cartesian meshes (AMR) for hyperbolic conservation laws in multiple space dimensions, using a high order a posteriori sub-cell ADER-WENO finite volume limiter. Notoriously, the original DG method produces strong oscillations in the presence of discontinuous solutions and several types of limiters have been introduced over the years to cope with this problem. Following the innovative idea recently proposed in Dumbser et al. (2014), the discrete solution within the troubled cells is recomputed by scattering the DG polynomial at the previous time step onto a suitable number of sub-cells along each direction. Relying on the robustness of classical finite volume WENO schemes, the sub-cell averages are recomputed and then gathered back into the DG polynomials over the main grid. In this paper this approach is implemented for the first time within a space-time adaptive AMR framework in two and three space dimensions, after assuring the proper averaging and projection between sub-cells that belong to different levels of refinement. The combination of the sub-cell resolution with the advantages of AMR allows for an unprecedented ability in resolving even the finest details in the dynamics of the fluid. The spectacular resolution properties of the new scheme have been shown through a wide number of test cases performed in two and in three space dimensions, both for the Euler equations of compressible gas dynamics and for the magnetohydrodynamics (MHD) equations.

Read this paper on arXiv…

O. Zanotti, F. Fambri, M. Dumbser, et al.
Thu, 3 Sep 15
53/58

Comments: Computers and Fluids 118 (2015) 204-224

Orthogonal systems of Zernike type in polygons and polygonal facets [IMA]

http://arxiv.org/abs/1506.07396


Zernike polynomials are commonly used to represent the wavefront phase on circular optical apertures, since they form a complete and orthonormal basis on the unit disk. In [Diaz et al., 2014] we introduced a new Zernike basis for elliptic and annular optical apertures based on an appropriate diffeomorphism between the unit disk and the ellipse and the annulus. Here, we present a generalization of this Zernike basis for a variety of important optical apertures, paying special attention to polygons and the polygonal facets present in segmented mirror telescopes. In contrast to ad hoc solutions, most of them based on the Gram-Schmidt orthonormalization method, here we consider a piecewise diffeomorphism that transforms the unit disk into the polygon under consideration. We use this mapping to define a Zernike-like orthonormal system over the polygon. We also consider ensembles of polygonal facets that are essential in the design of segmented mirror telescopes. This generalization, based on in-plane warping of the basis functions, provides a unique solution and, what is more important, it guarantees a reasonable level of invariance of the mathematical properties and the physical meaning of the initial basis functions. Both the general form and the explicit expressions for a typical example of a telescope optical aperture are provided.
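
The starting point is the standard Zernike basis on the unit disk, which the paper transports to polygons through a piecewise diffeomorphism (not reproduced here). A minimal evaluation sketch:

    import numpy as np
    from math import factorial

    # Radial Zernike polynomial R_n^m and the full basis function on the disk.
    def zernike_R(n, m, r):
        m = abs(m)
        R = np.zeros_like(r, dtype=float)
        for k in range((n - m)//2 + 1):
            c = ((-1)**k * factorial(n - k)
                 / (factorial(k)*factorial((n + m)//2 - k)
                    *factorial((n - m)//2 - k)))
            R += c * r**(n - 2*k)
        return R

    def zernike(n, m, r, phi):           # cosine branch for m >= 0, sine else
        ang = np.cos(m*phi) if m >= 0 else np.sin(-m*phi)
        return zernike_R(n, m, r) * ang

    r = np.linspace(0.0, 1.0, 5)
    print(zernike(2, 0, r, 0.0))         # defocus term: 2 r^2 - 1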

Read this paper on arXiv…

C. Ferreira, J. Lopez, R. Navarro, et al.
Thu, 25 Jun 15
39/45

Comments: 17 pages, 10 figures

WHFast: A fast and unbiased implementation of a symplectic Wisdom-Holman integrator for long term gravitational simulations [EPA]

http://arxiv.org/abs/1506.01084


We present WHFast, a fast and accurate implementation of a Wisdom-Holman symplectic integrator for long-term orbit integrations of planetary systems. WHFast is significantly faster and conserves energy better than all other Wisdom-Holman integrators tested. We achieve this by significantly improving the Kepler-solver and ensuring numerical stability of coordinate transformations to and from Jacobi coordinates. These refinements allow us to remove the linear secular trend in the energy error that is present in other implementations. For small enough timesteps we achieve Brouwer’s law, i.e. the energy error is dominated by an unbiased random walk due to floating-point round-off errors. We implement symplectic correctors up to order eleven that significantly reduce the energy error. We also implement a symplectic tangent map for the variational equations. This allows us to efficiently calculate two widely used chaos indicators: the Lyapunov characteristic number (LCN) and the Mean Exponential Growth factor of Nearby Orbits (MEGNO). WHFast is freely available as a flexible C package, as a shared library, and as an easy-to-use Python module.
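
WHFast ships as part of the authors' REBOUND package; a minimal Python session looks roughly like this (assuming the REBOUND Python API as documented; check the current docs for details):

    import rebound

    sim = rebound.Simulation()
    sim.add(m=1.0)                     # central star
    sim.add(m=1e-3, a=1.0, e=0.05)     # Jupiter-like planet (G = 1 units)
    sim.integrator = "whfast"          # select the WHFast integrator
    sim.dt = 0.01                      # fixed symplectic timestep
    sim.integrate(1000.0)
    print(sim.particles[1].x, sim.particles[1].y)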

Read this paper on arXiv…

H. Rein and D. Tamayo
Thu, 4 Jun 15
19/60

Comments: Accepted by MNRAS, 13 pages, 4 figures, source code and tutorials available at this http URL

Solving the relativistic magnetohydrodynamics equations with ADER discontinuous Galerkin methods, a posteriori subcell limiting and adaptive mesh refinement [HEAP]

http://arxiv.org/abs/1504.07458


We present a new numerical tool for solving the special relativistic ideal MHD equations that is based on the combination of the following three key features: (i) a one-step ADER discontinuous Galerkin (DG) scheme that allows for an arbitrary order of accuracy in both space and time, (ii) an a posteriori subcell finite volume limiter that is activated to avoid spurious oscillations at discontinuities without destroying the natural subcell resolution capabilities of the DG finite element framework and finally (iii) a space-time adaptive mesh refinement (AMR) framework with time-accurate local time-stepping. The divergence-free character of the magnetic field is instead taken into account through the so-called ‘divergence-cleaning’ approach. The convergence of the new scheme is verified up to 5th order in space and time and the results for a sample of significant numerical tests including shock tube problems, the RMHD rotor problem and the Orszag-Tang vortex system are shown. We also consider a simple case of the relativistic Kelvin-Helmholtz instability with a magnetic field, emphasizing the potential of the new method for studying turbulent RMHD flows. We discuss the advantages of our new approach when the equations of relativistic MHD need to be solved with high accuracy within various astrophysical systems.

Read this paper on arXiv…

O. Zanotti, F. Fambri and M. Dumbser
Wed, 29 Apr 15
17/62

Comments: 19 pages, 7 figures

A blind deconvolution method for ground based telescopes and Fizeau interferometers [CL]

http://arxiv.org/abs/1503.05673


In the case of ground-based telescopes equipped with adaptive optics systems, the point spread function (PSF) is only poorly known or completely unknown. Moreover, an accurate modeling of the PSF is in general not available. Therefore in several imaging situations the so-called blind deconvolution methods, aiming at estimating both the scientific target and the PSF from the detected image, can be useful. A blind deconvolution problem is severely ill-posed and, in order to reduce the extremely large number of possible solutions, it is necessary to introduce sensible constraints on both the scientific target and the PSF. In a previous paper we proposed a sound mathematical approach based on a suitable inexact alternating minimization strategy for minimizing the generalized Kullback-Leibler divergence, assuring global convergence. In the framework of this method we showed that an important constraint on the PSF is the upper bound which can be derived from the knowledge of its Strehl ratio. The efficacy of the approach was demonstrated by means of numerical simulations. In this paper, besides improving the previous approach by the use of a further constraint on the unknown scientific target, we extend it to the case of multiple images of the same target obtained with different PSFs. The main application we have in mind is Fizeau interferometry. As is well known, this is a special feature of the Large Binocular Telescope (LBT). The method is applied to realistic simulations of imaging both by single mirrors and Fizeau interferometers. Successes and failures of the method in the imaging of stellar fields are demonstrated in simple cases. These preliminary results look promising at least in specific situations. The IDL code of the proposed method is available on request and will be included in the forthcoming version of the Software Package AIRY (v.6.1).
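
The alternating structure can be conveyed with the much simpler Richardson-Lucy-style blind scheme below (our schematic; the paper's algorithm is an inexact alternating minimization of the generalized Kullback-Leibler divergence with additional constraints, such as the Strehl-ratio bound on the PSF):

    import numpy as np

    # Schematic alternating blind deconvolution: multiplicative Richardson-Lucy
    # updates applied in turn to the object and the PSF (circular convolutions).
    def conv(a, b):
        return np.fft.irfft2(np.fft.rfft2(a)*np.fft.rfft2(b), s=a.shape)

    def rl_update(estimate, other, data, n_inner=5):
        # adjoint of circular convolution = convolution with flipped kernel
        other_flip = np.roll(np.flip(other), (1, 1), axis=(0, 1))
        for _ in range(n_inner):
            ratio = data/np.maximum(conv(estimate, other), 1e-12)
            estimate = estimate*conv(ratio, other_flip)
        return estimate

    rng = np.random.default_rng(0)
    obj = np.zeros((64, 64)); obj[20, 20] = obj[40, 45] = 100.0  # star field
    psf = np.zeros((64, 64)); psf[:3, :3] = 1.0/9.0              # unknown blur
    img = rng.poisson(np.maximum(conv(obj, psf), 0.0)).astype(float)

    o = np.full_like(img, img.mean())       # flat initial guesses
    h = np.full_like(img, 1.0/img.size)
    for _ in range(20):                     # outer alternating loop
        o = rl_update(o, h, img)
        h = rl_update(h, o, img)
        h /= h.sum()                        # keep the PSF normalized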

Read this paper on arXiv…

M. Prato, A. Camera, S. Bonettini, et al.
Fri, 20 Mar 15
11/55

Comments: N/A

Inverse diffraction for the Atmospheric Imaging Assembly in the Solar Dynamics Observatory [IMA]

http://arxiv.org/abs/1501.07805


The Atmospheric Imaging Assembly in the Solar Dynamics Observatory provides full Sun images every 12 seconds in each of 7 Extreme Ultraviolet passbands. However, for a significant number of these images, saturation affects their most intense core, preventing scientists from fully exploiting their physical content. In this paper we describe a mathematical and automatic procedure for the recovery of information in the primary saturation region based on a correlation/inversion analysis of the diffraction pattern associated with the telescope observations. Further, we suggest an interpolation-based method for determining the image background that allows the recovery of information also in the region of secondary saturation (blooming).

Read this paper on arXiv…

G. Torre, R. Schwartz, F. Benvenuto, et al.
Mon, 2 Feb 15
10/49

Comments: N/A

IAS15: A fast, adaptive, high-order integrator for gravitational dynamics, accurate to machine precision over a billion orbits [EPA]

http://arxiv.org/abs/1409.4779


We present IAS15, a 15th-order integrator to simulate gravitational dynamics. The integrator is based on a Gauss-Radau quadrature and can handle conservative as well as non-conservative forces. We develop a step-size control that can automatically choose an optimal timestep. The algorithm can handle close encounters and high-eccentricity orbits. The systematic errors are kept well below machine precision and long-term orbit integrations over $10^9$ orbits show that IAS15 is optimal in the sense that it follows Brouwer’s law, i.e. the energy error behaves like a random walk. Our tests show that IAS15 is superior to a mixed-variable symplectic integrator (MVS) and other high-order integrators in both speed and accuracy. In fact, IAS15 preserves the symplecticity of Hamiltonian systems better than the commonly-used nominally symplectic integrators to which we compared it.
We provide an open-source implementation of IAS15. The package comes with several easy-to-extend examples involving resonant planetary systems, Kozai-Lidov cycles, close encounters, radiation pressure, quadrupole moment, and generic damping functions that can, among other things, be used to simulate planet-disc interactions. Other non-conservative forces can be added easily.
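
Like WHFast, IAS15 is available through REBOUND; switching integrators is essentially a one-line change (again assuming the documented REBOUND Python API):

    import rebound

    sim = rebound.Simulation()
    sim.add(m=1.0)                     # star
    sim.add(m=1e-3, a=1.0, e=0.9)      # high-eccentricity companion
    sim.integrator = "ias15"           # adaptive step-size control is automatic
    sim.integrate(1000.0)
    print(sim.particles[1].x, sim.particles[1].y)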

Read this paper on arXiv…

H. Rein and D. Spiegel
Thu, 18 Sep 14
32/58

Comments: submitted, 14 pages, 7 figures, source code in c and python bindings available at this http URL

Elimination of memory from the equations of motion of hereditary viscoelasticity for increased efficiency of numerical integration [IMA]

http://arxiv.org/abs/1406.7494


A method of eliminating the memory from the equations of motion of linear viscoelasticity is presented. Replacing the unbounded memory by a quadrature over a finite or semi-infinite interval leads to a considerable reduction of computational effort and storage. The method applies to viscoelastic media with separable completely monotonic relaxation moduli with an explicitly known retardation spectrum. In the seismological Strick-Mainardi model the quadrature is a Gauss-Jacobi quadrature. The relation to fractional-order viscoelasticity is shown.
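
The memory-elimination idea in miniature: once the relaxation kernel is approximated as a finite sum of exponentials, K(t) = sum_i w_i exp(-lam_i t) (in the paper the nodes and weights come from a Gauss-Jacobi quadrature of the retardation spectrum; the toy values below are placeholders), the convolution integral I(t) = int_0^t K(t-s) f(s) ds becomes a set of auxiliary ODEs y_i' = -lam_i y_i + w_i f(t) with I = sum_i y_i, so no history needs to be stored.

    import numpy as np

    lam = np.array([0.5, 2.0, 8.0])      # toy spectrum (stand-in for quadrature)
    w = np.array([0.3, 0.5, 0.2])
    f = lambda t: np.sin(t)

    dt, T = 1e-3, 5.0
    y = np.zeros_like(lam)               # one auxiliary variable per mode
    for k in range(int(T/dt)):           # exact per-mode update, f frozen
        e = np.exp(-lam*dt)
        y = y*e + w*f(k*dt)*(1.0 - e)/lam
    I_ode = y.sum()                      # memory integral, no stored history

    # brute-force check that stores the full history
    s = np.arange(0.0, T, dt)
    K = (w[:, None]*np.exp(-lam[:, None]*(T - s))).sum(axis=0)
    I_hist = (K*f(s)).sum()*dt
    print(I_ode, I_hist)                 # should agree to O(dt)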

Read this paper on arXiv…

A. Hanyga
Tue, 1 Jul 14
9/70

Comments: N/A

Fast Direct Methods for Gaussian Processes and the Analysis of NASA Kepler Mission Data [CL]

http://arxiv.org/abs/1403.6015


A number of problems in probability and statistics can be addressed using the multivariate normal (or multivariate Gaussian) distribution. In the one-dimensional case, computing the probability for a given mean and variance simply requires the evaluation of the corresponding Gaussian density. In the $n$-dimensional setting, however, it requires the inversion of an $n \times n$ covariance matrix, $C$, as well as the evaluation of its determinant, $\det(C)$. In many cases, the covariance matrix is of the form $C = \sigma^2 I + K$, where $K$ is computed using a specified kernel, which depends on the data and additional parameters (called hyperparameters in Gaussian process computations). The matrix $C$ is typically dense, causing standard direct methods for inversion and determinant evaluation to require $\mathcal O(n^3)$ work. This cost is prohibitive for large-scale modeling. Here, we show that for the most commonly used covariance functions, the matrix $C$ can be hierarchically factored into a product of block low-rank updates of the identity matrix, yielding an $\mathcal O (n\log^2 n) $ algorithm for inversion, as discussed in Ambikasaran and Darve, $2013$. More importantly, we show that this factorization enables the evaluation of the determinant $\det(C)$, permitting the direct calculation of probabilities in high dimensions under fairly broad assumption about the kernel defining $K$. Our fast algorithm brings many problems in marginalization and the adaptation of hyperparameters within practical reach using a single CPU core. The combination of nearly optimal scaling in terms of problem size with high-performance computing resources will permit the modeling of previously intractable problems. We illustrate the performance of the scheme on standard covariance kernels, and apply it to a real data set obtained from the $Kepler$ Mission.
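
For contrast, the dense $\mathcal O(n^3)$ baseline that the hierarchical factorization replaces is a few lines of NumPy: Cholesky-factor $C = \sigma^2 I + K$, then read off both the solve and $\log\det(C)$ (squared-exponential kernel as an example of the commonly used class):

    import numpy as np

    # Dense Cholesky baseline for the GP log-likelihood (the O(n^3) bottleneck
    # the paper reduces to O(n log^2 n)); kernel hyperparameters illustrative.
    def gp_loglike(x, y, sigma=0.1, amp=1.0, ell=1.0):
        K = amp**2 * np.exp(-0.5*(x[:, None] - x[None, :])**2 / ell**2)
        C = K + sigma**2 * np.eye(len(x))
        L = np.linalg.cholesky(C)                   # O(n^3) factorization
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        logdet = 2.0*np.log(np.diag(L)).sum()       # log det(C) from Cholesky
        return -0.5*(y @ alpha + logdet + len(x)*np.log(2.0*np.pi))

    x = np.linspace(0.0, 10.0, 500)
    y = np.sin(x) + 0.1*np.random.default_rng(0).normal(size=x.size)
    print(gp_loglike(x, y))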

Read this paper on arXiv…

S. Ambikasaran, D. Foreman-Mackey, L. Greengard, et al.
Tue, 25 Mar 14
50/79

Boltzmann Equation Solver Adapted to Emergent Chemical Non-equilibrium [CL]

http://arxiv.org/abs/1403.2019


We present a novel method to solve the spatially homogeneous and isotropic relativistic Boltzmann equation. We employ a basis set of orthogonal polynomials dynamically adapted to allow emergence of chemical non-equilibrium. Two time dependent parameters characterize the set of orthogonal polynomials, the effective temperature $T(t)$ and phase space occupation factor $\Upsilon(t)$. In this first paper we address (effectively) massless fermions and derive dynamical equations for $T(t)$ and $\Upsilon(t)$ such that the zeroth order term of the basis alone captures the number density and energy density of each particle distribution. We validate our method and illustrate the reduced computational cost and the ability to represent final state chemical non-equilibrium by studying a model problem that is motivated by the physics of the neutrino freeze-out processes in the early Universe, where the essential physical characteristics include reheating from another disappearing particle component ($e^\pm$-annihilation).

Read this paper on arXiv…

J. Birrell and J. Rafelski
Tue, 11 Mar 14
22/66

An ADER-WENO Finite Volume AMR code for Astrophysics [IMA]

http://arxiv.org/abs/1401.6448


A high order one-step ADER-WENO finite volume scheme with Adaptive Mesh Refinement (AMR) in multiple space dimensions is presented. A high order one-step time discretization is achieved using a local space-time discontinuous Galerkin predictor method, while a high order spatial accuracy is obtained through a WENO reconstruction. Thanks to the one-step nature of the underlying scheme, the resulting algorithm can be efficiently imported within an AMR framework on space-time adaptive meshes. We provide convincing evidence that the presented high order AMR scheme behaves better than traditional second order AMR methods. Tests are shown of the new scheme for nonlinear systems of hyperbolic conservation laws, including the classical Euler equations and the equations of ideal magnetohydrodynamics. The proposed scheme is likely to become a useful tool in several astrophysical scenarios.

Read this paper on arXiv…

Mon, 27 Jan 14
15/38

A scaled gradient projection method for the X-ray imaging of solar flares [IMA]

http://arxiv.org/abs/1311.5717


In this paper we present a new optimization algorithm for the reconstruction of X-ray images of solar flares by means of the data collected by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The imaging concept of the satellite is based on rotating modulation collimator instruments, which allow the use of both Fourier imaging approaches and reconstruction techniques based on the straightforward inversion of the modulated count profiles. Although in the last decade greater attention has been devoted to the former strategies due to their very limited computational cost, here we consider the latter model and investigate the effectiveness of a scaled gradient projection method for the solution of the corresponding constrained minimization problem. Moreover, regularization is introduced through either an early stopping of the iterative procedure, or a Tikhonov term added to the discrepancy function, by means of a discrepancy principle accounting for the Poisson nature of the noise affecting the data.
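
A bare-bones (unscaled) gradient projection loop for the constrained Poisson problem min_{x >= 0} KL(y || Ax) conveys the skeleton; the paper's scaled gradient projection adds a diagonal scaling and careful step-size rules, and uses the actual RHESSI response (the random matrix here is a stand-in):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((80, 60))                  # stand-in for the instrument response
    x_true = np.maximum(rng.normal(1.0, 0.5, 60), 0.0)
    y = rng.poisson(A @ x_true).astype(float) # Poisson count data

    x = np.ones(60)
    step = 1e-3                               # fixed step (paper: adaptive rules)
    for _ in range(2000):
        Ax = np.maximum(A @ x, 1e-12)
        grad = A.T @ (1.0 - y/Ax)             # gradient of the KL discrepancy
        x = np.maximum(x - step*grad, 0.0)    # project onto the nonneg. orthant
    print("relative error:",
          np.linalg.norm(x - x_true)/np.linalg.norm(x_true))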

Read this paper on arXiv…

Mon, 25 Nov 13
2/48