Particle-in-Cell Simulations of Relativistic Magnetic Reconnection with Advanced Maxwell Solver Algorithms [HEAP]

http://arxiv.org/abs/2304.10566


Relativistic magnetic reconnection is a non-ideal plasma process that is a source of non-thermal particle acceleration in many high-energy astrophysical systems. Particle-in-cell (PIC) methods are commonly used for simulating reconnection from first principles. While much progress has been made in understanding the physics of reconnection, especially in 2D, the adoption of advanced algorithms and numerical techniques for efficiently modeling such systems has been limited. With the GPU-accelerated PIC code WarpX, we explore the accuracy and potential performance benefits of two advanced Maxwell solver algorithms: a non-standard finite difference scheme (CKC) and an ultrahigh-order pseudo-spectral method (PSATD). We find that for the relativistic reconnection problem, CKC and PSATD qualitatively and quantitatively match the standard Yee-grid finite-difference method. CKC and PSATD both admit a time step that is 40% longer than Yee, resulting in a ~40% faster time to solution for CKC, but no performance benefit for PSATD when using a current deposition scheme that satisfies Gauss’s law. Relaxing this constraint maintains accuracy and yields a 30% speedup. Unlike Yee and CKC, PSATD is numerically stable at any time step, allowing for a larger time step than with the finite-difference methods. We found that increasing the time step 2.4-3 times over the standard Yee step still yields accurate results, but only translates to modest performance improvements over CKC due to the current deposition scheme used with PSATD. Further optimization of this scheme will likely improve the effective performance of PSATD.
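
As a rough cross-check of where the ~40% figure comes from, the sketch below compares the commonly quoted 2D stability limits of a Yee-type solver and of CKC on square cells (an assumption; the actual WarpX grid and solver settings are not reproduced here):

    import math

    c = 2.998e8      # speed of light [m/s]
    dx = dy = 1.0    # cell size, arbitrary units (square cells assumed)

    # standard Yee FDTD stability limit in 2D: c*dt <= 1/sqrt(1/dx^2 + 1/dy^2)
    dt_yee = 1.0 / (c * math.sqrt(1.0 / dx**2 + 1.0 / dy**2))

    # CKC relaxes this to roughly c*dt <= min(dx, dy) (assumed here);
    # PSATD is stable for any dt, so only accuracy limits its usable step
    dt_ckc = min(dx, dy) / c

    print(f"dt_CKC / dt_Yee = {dt_ckc / dt_yee:.2f}")  # ~1.41, i.e. ~40% longer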

Read this paper on arXiv…

H. Klion, R. Jambunathan, M. Rowan, et al.
Mon, 24 Apr 23
22/41

Comments: 19 pages, 10 figures. Submitted to ApJ

A machine learning and feature engineering approach for the prediction of the uncontrolled re-entry of space objects [CL]

http://arxiv.org/abs/2303.10183


The continuously growing number of objects orbiting around the Earth is expected to be accompanied by an increasing frequency of objects re-entering the Earth’s atmosphere. Many of these re-entries will be uncontrolled, making their prediction challenging and subject to several uncertainties. Traditionally, re-entry predictions are based on the propagation of the object’s dynamics using state-of-the-art modelling techniques for the forces acting on the object. However, modelling errors, particularly those related to the prediction of atmospheric drag, may result in poor prediction accuracies. In this context, we explore the possibility of performing a paradigm shift, from a physics-based approach to a data-driven approach. To this aim, we present the development of a deep learning model for the re-entry prediction of uncontrolled objects in Low Earth Orbit (LEO). The model is based on a modified version of the Sequence-to-Sequence architecture and is trained on the average altitude profile as derived from a set of Two-Line Element (TLE) data of over 400 bodies. The novelty of the work consists in introducing into the deep learning model, alongside the average altitude, three new input features: a drag-like coefficient (B*), the average solar index, and the area-to-mass ratio of the object. The developed model is tested on a set of objects studied in the Inter-Agency Space Debris Coordination Committee (IADC) campaigns. The results show that the best performances are obtained on bodies characterised by the same drag-like coefficient and eccentricity distribution as the training set.
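
A minimal sketch of the kind of encoder-decoder network described above, with hypothetical layer sizes, forecast horizon and per-epoch feature layout (average altitude, B*, average solar index, area-to-mass ratio); the paper's actual modified Sequence-to-Sequence architecture, preprocessing and training data are not reproduced:

    import torch
    import torch.nn as nn

    class Seq2SeqReentry(nn.Module):
        """Encoder-decoder GRU mapping a past feature sequence to a future
        altitude profile. Assumed per-epoch feature layout:
        [average altitude, B*, average solar index, area-to-mass ratio]."""
        def __init__(self, n_features=4, hidden=64, horizon=30):
            super().__init__()
            self.encoder = nn.GRU(n_features, hidden, batch_first=True)
            self.decoder = nn.GRU(1, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)
            self.horizon = horizon

        def forward(self, past):              # past: (batch, T, n_features)
            _, h = self.encoder(past)         # summary of the observed history
            step = past[:, -1:, :1]           # last observed altitude
            outputs = []
            for _ in range(self.horizon):     # autoregressive decoding
                out, h = self.decoder(step, h)
                step = self.head(out)         # next predicted altitude
                outputs.append(step)
            return torch.cat(outputs, dim=1)  # (batch, horizon, 1)

    model = Seq2SeqReentry()
    dummy = torch.randn(8, 60, 4)             # 8 objects, 60 past epochs
    print(model(dummy).shape)                 # torch.Size([8, 30, 1])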

Read this paper on arXiv…

F. Salmaso, M. Trisolini and C. Colombo
Tue, 21 Mar 23
43/68

Comments: N/A

Target selection for Near-Earth Asteroids in-orbit sample collection missions [EPA]

http://arxiv.org/abs/2212.09497


This work presents a mission concept for in-orbit particle collection for sampling and exploration missions towards Near-Earth asteroids. Ejecta is generated via a small kinetic impactor, and two possible collection strategies are investigated: collecting the particles along the anti-solar direction, exploiting the dynamical features of the L$_2$ Lagrangian point, or collecting them while the spacecraft orbits the asteroid and before they re-impact onto the asteroid surface. Combining the dynamics of the particles in the Circular Restricted Three-Body Problem perturbed by Solar Radiation Pressure with models for the ejecta generation, we identify possible target asteroids as a function of their physical properties, by evaluating the potential for particle collection.
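
The dynamical setting can be sketched as the planar photogravitational CR3BP, i.e. the Circular Restricted Three-Body Problem with Solar Radiation Pressure folded into the solar gravity term; the mass parameter, SRP coefficient and initial state below are purely illustrative, not values from the paper:

    import numpy as np
    from scipy.integrate import solve_ivp

    mu = 3.0e-6    # illustrative mass parameter (a real asteroid's is far smaller)
    beta = 0.01    # illustrative SRP parameter (SRP / solar gravity, cannonball model)

    def crtbp_srp(t, s):
        """Planar CR3BP + SRP in the rotating frame; SRP scales solar gravity by (1 - beta)."""
        x, y, vx, vy = s
        r1 = np.hypot(x + mu, y)          # distance to the Sun
        r2 = np.hypot(x - 1 + mu, y)      # distance to the asteroid
        ax = x + 2*vy - (1 - beta)*(1 - mu)*(x + mu)/r1**3 - mu*(x - 1 + mu)/r2**3
        ay = y - 2*vx - (1 - beta)*(1 - mu)*y/r1**3 - mu*y/r2**3
        return [vx, vy, ax, ay]

    # particle released at rest (rotating frame) just outside the asteroid's Hill sphere
    s0 = [1.02, 0.0, 0.0, 0.0]
    sol = solve_ivp(crtbp_srp, (0.0, 10.0), s0, rtol=1e-10, atol=1e-12)
    print(sol.y[:2, -1])                  # final position in the rotating frame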

Read this paper on arXiv…

M. Trisolini, C. Colombo and Y. Tsuda
Tue, 20 Dec 22
19/97

Comments: N/A

Reliable event detection for Taylor methods in astrodynamics [EPA]

http://arxiv.org/abs/2204.09948


We present a novel approach for the detection of events in systems of ordinary differential equations. The new method combines the unique features of Taylor integrators with state-of-the-art polynomial root finding techniques to yield a novel algorithm ensuring strong event detection guarantees at a modest computational overhead. Detailed tests and benchmarks focused on problems in astrodynamics and celestial mechanics (such as collisional N-body systems, spacecraft dynamics around irregular bodies accounting for eclipses, computation of Poincaré sections, etc.) show how our approach is superior in both performance and detection accuracy to strategies commonly employed in modern numerical integration works. The new algorithm is available in our open source Taylor integration package heyoka.
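
The core idea, representing the event function over one step by a polynomial and handing it to a polynomial root finder, can be sketched as follows; a Chebyshev fit stands in for the Taylor dense output that the integrator already has available, and the degree and test function are arbitrary:

    import numpy as np

    def detect_events(event_fn, t0, t1, degree=12):
        """Zeros of event_fn on [t0, t1]: fit a polynomial over the step and
        root-find it, mimicking the pairing of a Taylor dense output with a
        polynomial root finder."""
        ts = np.linspace(t0, t1, 4 * degree)          # oversampled fit nodes
        poly = np.polynomial.Chebyshev.fit(ts, event_fn(ts), degree)
        roots = poly.roots()
        roots = roots[np.isreal(roots)].real
        return np.sort(roots[(roots >= t0) & (roots <= t1)])

    # example: g(t) = cos(t) on [0, 10] has zeros at pi/2, 3pi/2, 5pi/2
    print(detect_events(np.cos, 0.0, 10.0))           # ~[1.5708, 4.7124, 7.8540]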

Read this paper on arXiv…

F. Biscani and D. Izzo
Fri, 22 Apr 22
55/64

Comments: Accepted for publication in MNRAS

Interface between the long-term propagation and the destructive re-entry phases exploiting the overshoot boundary [EPA]

http://arxiv.org/abs/2202.12898


In recent years, due to the constant increase of the density of satellites in the space environment, several studies have been focused on the development of active and passive strategies to remove and mitigate space debris. This work investigates the feasibility of developing a reliable and fast approach to analyze the re-entry of a satellite. The numerical model interfaces the long-term orbit propagation obtained through semi-analytical methods with the atmospheric destructive re-entry phase exploiting the concept of overshoot boundary, highlighting the effect that an early break-off of the solar panels can have on the re-entry prediction. The re-entry of ESA’s INTEGRAL mission is chosen as a test case to demonstrate the efficiency of the model in producing a complete simulation of the re-entry. The simulation of the destructive re-entry phase is produced using an object-oriented approach, paying attention to the demisability process of the most critical components of the space system.

Read this paper on arXiv…

C. Fusaro, M. Trisolini and C. Colombo
Tue, 1 Mar 22
65/80

Comments: 10 pages, 9 figures, pre-proof of accepted article in Journal of Space Safety Engineering

Mission Design of DESTINY+: Toward Active Asteroid (3200) Phaethon and Multiple Small Bodies [EPA]

http://arxiv.org/abs/2201.01933


DESTINY+ is an upcoming JAXA Epsilon medium-class mission to fly by the Geminids meteor shower parent body (3200) Phaethon. It will be the world’s first spacecraft to escape from a near-geostationary transfer orbit into deep space using a low-thrust propulsion system. In doing so, DESTINY+ will demonstrate a number of technologies that include a highly efficient ion engine system, lightweight solar array panels, and advanced asteroid flyby observation instruments. These demonstrations will pave the way for JAXA’s envisioned low-cost, high-frequency space exploration plans. Following the Phaethon flyby observation, DESTINY+ will visit additional asteroids as its extended mission. The mission design is divided into three phases: a spiral-shaped apogee-raising phase, a multi-lunar-flyby phase to escape Earth, and an interplanetary and asteroid flyby phase. The main challenges include the optimization of the many-revolution low-thrust spiral phase under operational constraints; the design of a multi-lunar-flyby sequence in a multi-body environment; and the design of multiple asteroid flybys connected via Earth gravity assists. This paper shows a novel, practical approach to tackle these complex problems, and presents feasible solutions found within the mass budget and mission constraints. Among them, the baseline solution is shown and discussed in depth; DESTINY+ will spend two years raising its apogee with ion engines, followed by four lunar gravity assists, and a flyby of asteroids (3200) Phaethon and (155140) 2005 UD. Finally, the flight operations plans for the spiral phase and the asteroid flyby phase are presented in detail.

Read this paper on arXiv…

N. Ozaki, T. Yamamoto, F. Gonzalez-Franquesa, et al.
Fri, 7 Jan 22
20/34

Comments: N/A

Asteroid Flyby Cycler Trajectory Design Using Deep Neural Networks [IMA]

http://arxiv.org/abs/2111.11858


Asteroid exploration has been attracting more attention in recent years. Nevertheless, we have visited only tens of asteroids while we have discovered more than one million bodies. As our current observation and knowledge should be biased, it is essential to explore multiple asteroids directly to better understand the remains of planetary building materials. One of the mission design solutions is utilizing asteroid flyby cycler trajectories with multiple Earth gravity assists. An asteroid flyby cycler trajectory design problem is a subclass of global trajectory optimization problems with multiple flybys, involving a trajectory optimization problem for a given flyby sequence and a combinatorial optimization problem to decide the sequence of the flybys. As the number of flyby bodies grows, the computation time of this optimization problem grows rapidly. This paper presents a new method to design asteroid flyby cycler trajectories utilizing a surrogate model constructed by deep neural networks approximating trajectory optimization results. Since one of the bottlenecks of machine learning approaches is to generate massive trajectory databases, we propose an efficient database generation strategy by introducing pseudo-asteroids satisfying the Karush-Kuhn-Tucker conditions. The numerical result applied to JAXA’s DESTINY+ mission shows that the proposed method can significantly reduce the computational time for searching asteroid flyby sequences.
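
The surrogate idea, learning a cheap map from the parameters of a flyby leg to its optimized transfer cost so that candidate sequences can be ranked without re-running the optimizer, can be sketched with a generic regressor on synthetic data (the paper's network architecture, input set and KKT-based database generation are not reproduced):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # synthetic stand-in database: 6 leg parameters -> "optimal transfer cost"
    # (a real database would come from solving many optimal-control problems)
    X = rng.uniform(size=(5000, 6))
    y = np.linalg.norm(X[:, :3] - X[:, 3:], axis=1)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                             random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out legs:", round(surrogate.score(X_te, y_te), 3))

    # during sequence search, this cheap call replaces a full trajectory
    # optimization when ranking candidate flyby sequences
    print("estimated cost of one candidate leg:", surrogate.predict(X_te[:1]))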

Read this paper on arXiv…

N. Ozaki, K. Yanagida, T. Chikazawa, et al.
Wed, 24 Nov 21
22/61

Comments: N/A

Re-entry prediction and demisability analysis for the atmospheric disposal of geosynchronous satellites [EPA]

http://arxiv.org/abs/2110.04862


The paper presents a re-entry analysis of Geosynchronous Orbit (GSO) satellites on disposal trajectories that enhance the effects of the Earth’s oblateness and lunisolar perturbations. These types of trajectories can lead to a natural re-entry of the spacecraft within 20 years. An analysis was performed to characterise the entry conditions for these satellites and the risk they can pose for people on the ground if disposal via re-entry is used. The paper first proposes a methodology to interface the long-term propagation used to study the evolution of the disposal trajectories and the destructive re-entry simulations used to assess the spacecraft casualty risk. This is achieved by revisiting the concept of overshoot boundary. The paper also presents the demisability and casualty risk analysis for a representative spacecraft configuration, showing that the casualty risk is greater than the 10$^{-4}$ threshold and that further actions should be taken to improve the compliance of these satellites in case disposal via re-entry is used.

Read this paper on arXiv…

M. Trisolini and C. Colombo
Tue, 12 Oct 21
37/73

Comments: N/A

Super-resolving star clusters with sheaves [IMA]

http://arxiv.org/abs/2106.08123


This article explains an optimization-based approach for counting and localizing stars within a small cluster, based on photon counts in a focal plane array. The array need not be arranged in any particular way, and relatively small numbers of photons are required in order to ensure convergence. The stars can be located close to one another, as the location and brightness errors were found to be low when the separation was larger than $0.2$ Rayleigh radii. To ensure generality of our approach, it was constructed as a special case of a general theory built upon topological signal processing using the mathematics of sheaves.

Read this paper on arXiv…

M. Robinson and C. Capraro
Wed, 16 Jun 21
33/57

Comments: arXiv admin note: text overlap with arXiv:2106.04445

Modeling of Spiral Structure in a Multi-Component Milky Way-Like Galaxy [GA]

http://arxiv.org/abs/2105.03198


Using recent observational data, we construct a set of multi-component equilibrium models of the disk of a Milky Way-like galaxy. The disk dynamics are studied using collisionless-gaseous numerical simulations, based on the joint integration of the equations of motion for the collisionless particles, using direct integration of the gravitational interaction, and for the gaseous SPH particles. We find that after approximately one Gyr, a prominent central bar is formed, having a semi-axis length of about three kpc, together with a multi-armed spiral pattern represented by a superposition of $m=$ 2-, 3-, and 4-armed spirals. The spiral structure and the bar exist for at least 3 Gyr in our simulations. The existence of the Milky Way bar imposes limitations on the density distributions in the subsystems of the Milky Way galaxy. We find that a bar does not form if the radial scale length of the density distribution in the disk exceeds 2.6 kpc. As expected, the bar formation is also suppressed by a compact massive stellar bulge. We also demonstrate that the maximum value in the rotation curve of the disk of the Milky Way galaxy, as found in its central regions, is explained by non-circular motion due to the presence of a bar and its orientation relative to an observer.

Read this paper on arXiv…

S. Khrapov, A. Khoperskov and V. Korchagin
Mon, 10 May 21
31/60

Comments: 30 pages, 13 figures

Revisiting high-order Taylor methods for astrodynamics and celestial mechanics [EPA]

http://arxiv.org/abs/2105.00800


We present heyoka, a new, modern and general-purpose implementation of Taylor’s integration method for the numerical solution of ordinary differential equations. Detailed numerical tests focused on difficult high-precision gravitational problems in astrodynamics and celestial mechanics show how our general-purpose integrator is competitive with and often superior to state-of-the-art specialised symplectic and non-symplectic integrators in both speed and accuracy. In particular, we show how Taylor methods are capable of satisfying Brouwer’s law for the conservation of energy in long-term integrations of planetary systems over billions of dynamical timescales. We also show how close encounters are modelled accurately during simulations of the formation of the Kirkwood gaps and of Apophis’ 2029 close encounter with the Earth (where heyoka surpasses the speed and accuracy of domain-specific methods). heyoka can be used from both C++ and Python, and it is publicly available as an open-source project.
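
The flavour of Taylor's method can be shown on the harmonic oscillator x'' = -x, whose Taylor coefficients obey a simple recurrence; heyoka derives such recurrences automatically for arbitrary ODEs, so the numpy sketch below only illustrates the principle:

    import numpy as np

    def taylor_step(x0, v0, h, order=20):
        """One Taylor step for x'' = -x: the series coefficients a[k] obey
        a[k+2] = -a[k] / ((k+1)*(k+2)), seeded by the current state."""
        a = np.zeros(order + 1)
        a[0], a[1] = x0, v0
        for k in range(order - 1):
            a[k + 2] = -a[k] / ((k + 1) * (k + 2))
        powers = h ** np.arange(order + 1)
        x_new = np.dot(a, powers)                                     # x(t0 + h)
        v_new = np.dot(np.arange(1, order + 1) * a[1:], powers[:-1])  # x'(t0 + h)
        return x_new, v_new

    x, v, h = 1.0, 0.0, 0.25
    for _ in range(40):                     # integrate to t = 10
        x, v = taylor_step(x, v, h)
    print(x, np.cos(10.0))                  # Taylor solution vs. exact cos(10)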

Read this paper on arXiv…

F. Biscani and D. Izzo
Tue, 4 May 21
18/72

Comments: N/A

Propagation and reconstruction of re-entry uncertainties using continuity equation and simplicial interpolation [CL]

http://arxiv.org/abs/2101.10825


This work proposes a continuum-based approach for the propagation of uncertainties in the initial conditions and parameters for the analysis and prediction of spacecraft re-entries. Using the continuity equation together with the re-entry dynamics, the joint probability distribution of the uncertainties is propagated in time for specific sampled points. At each time instant, the joint probability distribution function is then reconstructed from the scattered data using a gradient-enhanced linear interpolation based on a simplicial representation of the state space. Uncertainties in the initial conditions at re-entry and in the ballistic coefficient for three representative test cases are considered: a three-state and a six-state steep Earth re-entry and a six-state unguided lifting entry at Mars. The paper shows the comparison of the proposed method with Monte Carlo based techniques in terms of quality of the obtained marginal distributions and runtime as a function of the number of samples used.
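
The two ingredients, propagating the density along each sampled trajectory via the continuity equation (d ln p/dt = -div f along characteristics) and reconstructing the PDF on a simplicial mesh of the propagated samples, can be sketched for a toy linear field; the gradient-enhanced interpolation and the actual re-entry dynamics are not reproduced:

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.interpolate import LinearNDInterpolator

    # toy dynamics: linear drag-like field f(x) = A x, so div f = trace(A)
    A = np.array([[-0.3, 0.1],
                  [ 0.0, -0.5]])

    def rhs(t, s):
        x, p = s[:2], s[2]
        return [*(A @ x), -np.trace(A) * p]    # continuity eq.: dp/dt = -div(f) p

    rng = np.random.default_rng(1)
    x0s = rng.normal(size=(200, 2))                              # sampled initial states
    p0s = np.exp(-0.5 * np.sum(x0s**2, axis=1)) / (2 * np.pi)    # initial Gaussian pdf

    pts, vals = [], []
    for x0, p0 in zip(x0s, p0s):
        sol = solve_ivp(rhs, (0.0, 2.0), [*x0, p0], rtol=1e-9)
        pts.append(sol.y[:2, -1])
        vals.append(sol.y[2, -1])

    # reconstruct the pdf anywhere via linear interpolation on the Delaunay mesh
    pdf = LinearNDInterpolator(np.array(pts), np.array(vals))
    print(pdf(0.0, 0.0))                                         # density at the origin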

Read this paper on arXiv…

M. Trisolini and C. Colombo
Wed, 27 Jan 21
17/68

Comments: N/A

A novel structure preserving semi-implicit finite volume method for viscous and resistive magnetohydrodynamics [CL]

http://arxiv.org/abs/2012.11218


In this work we introduce a novel semi-implicit structure-preserving finite-volume/finite-difference scheme for the viscous and resistive equations of magnetohydrodynamics (MHD) based on an appropriate 3-split of the governing PDE system, which is decomposed into a first convective subsystem, a second subsystem involving the coupling of the velocity field with the magnetic field, and a third subsystem involving the pressure-velocity coupling. The nonlinear convective terms are discretized explicitly, while the remaining two subsystems accounting for the Alfvén waves and the magneto-acoustic waves are treated implicitly. The final algorithm is at least formally constrained only by a mild CFL stability condition depending on the velocity field of the pure hydrodynamic convection. To preserve the divergence-free constraint of the magnetic field exactly at the discrete level, a proper set of overlapping dual meshes is employed. The resulting linear algebraic systems are shown to be symmetric and therefore can be very efficiently solved by means of a standard matrix-free conjugate gradient algorithm. One of the peculiarities of the presented algorithm is that the magnetic field is defined on the edges of the main grid, while the electric field is on the faces. The final scheme can be regarded as a novel shock-capturing, conservative and structure-preserving semi-implicit scheme for the nonlinear viscous and resistive MHD equations. Several numerical tests are presented to show the main features of our novel solver: linear stability in the sense of Lyapunov is verified at a prescribed constant equilibrium solution; second-order convergence is numerically estimated; shock-capturing capabilities are proven against a standard set of stringent MHD shock problems; accuracy and robustness are verified against a nontrivial set of 2- and 3-dimensional MHD problems.

Read this paper on arXiv…

F. Fambri
Tue, 22 Dec 20
27/89

Comments: 43 pages, 22 figures

MatDRAM: A pure-MATLAB Delayed-Rejection Adaptive Metropolis-Hastings Markov Chain Monte Carlo Sampler [CL]

http://arxiv.org/abs/2010.04190


Markov Chain Monte Carlo (MCMC) algorithms are widely used for stochastic optimization, sampling, and integration of mathematical objective functions, in particular, in the context of Bayesian inverse problems and parameter estimation. For decades, the algorithm of choice in MCMC simulations has been the Metropolis-Hastings (MH) algorithm. An advancement over the traditional MH-MCMC sampler is the Delayed-Rejection Adaptive Metropolis (DRAM). In this paper, we present MatDRAM, a stochastic optimization, sampling, and Monte Carlo integration toolbox in MATLAB which implements a variant of the DRAM algorithm for exploring the mathematical objective functions of arbitrary-dimensions, in particular, the posterior distributions of Bayesian models in data science, Machine Learning, and scientific inference. The design goals of MatDRAM include nearly-full automation of MCMC simulations, user-friendliness, fully-deterministic reproducibility, and the restart functionality of simulations. We also discuss the implementation details of a technique to automatically monitor and ensure the diminishing adaptation of the proposal distribution of the DRAM algorithm and a method of efficiently storing the resulting simulated Markov chains. The MatDRAM library is open-source, MIT-licensed, and permanently located and maintained as part of the ParaMonte library at https://github.com/cdslaborg/paramonte.
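
The two ideas behind DRAM, an adaptive proposal covariance plus a delayed-rejection retry with a shrunk proposal, can be sketched in a few lines of Python (MatDRAM itself is a MATLAB toolbox; the delayed-rejection acceptance ratio below is simplified, and the monitoring, storage and restart machinery is omitted):

    import numpy as np

    def dram_like(logpdf, x0, n=20000, gamma=0.5, adapt_after=500, seed=0):
        """Minimal adaptive Metropolis with one delayed-rejection stage."""
        rng = np.random.default_rng(seed)
        d = len(x0)
        chain = np.empty((n, d))
        x, lp = np.asarray(x0, float), logpdf(x0)
        cov = np.eye(d)
        for i in range(n):
            if i > adapt_after:                    # adapt the proposal covariance
                cov = np.cov(chain[:i].T) * 2.38**2 / d + 1e-10 * np.eye(d)
            y1 = rng.multivariate_normal(x, cov)
            lp1 = logpdf(y1)
            if np.log(rng.random()) < lp1 - lp:
                x, lp = y1, lp1
            else:                                  # delayed rejection: shrunk retry
                y2 = rng.multivariate_normal(x, gamma**2 * cov)
                lp2 = logpdf(y2)
                # NOTE: simplified acceptance; full DRAM adds a correction factor
                if np.log(rng.random()) < lp2 - lp:
                    x, lp = y2, lp2
            chain[i] = x
        return chain

    # example: sample a strongly correlated 2-D Gaussian
    icov = np.linalg.inv(np.array([[1.0, 0.9], [0.9, 1.0]]))
    logpdf = lambda z: -0.5 * np.asarray(z) @ icov @ np.asarray(z)
    samples = dram_like(logpdf, [3.0, -3.0])
    print(samples[5000:].mean(axis=0), samples[5000:].std(axis=0))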

Read this paper on arXiv…

S. Kumbhare and A. Shahmoradi
Mon, 12 Oct 20
22/59

Comments: N/A

Predicting the vulnerability of spacecraft components: modelling debris impact effects through vulnerable-zones [IMA]

http://arxiv.org/abs/2003.05521


The space environment around the Earth is populated by more than 130 million objects of 1 mm in size and larger, and future predictions show that this amount is destined to increase, even if mitigation measures are implemented at a far better rate than today. These objects can hit and damage a spacecraft or its components. It is thus necessary to assess the risk level for a satellite during its mission lifetime. Few software packages perform this analysis, and most of them employ a time-consuming ray-tracing methodology, where particles are randomly sampled from relevant distributions. In addition, they tend not to consider the risk associated with the secondary debris clouds. The paper presents the development of a vulnerability assessment model, which relies on a fully statistical procedure: the debris fluxes are directly used, combining them with the concept of the vulnerable zone, avoiding random sampling of the debris fluxes. A novel methodology is presented to predict damage to internal components. It models the interaction between the components and the secondary debris cloud through basic geometric operations, considering mutual shielding and shadowing between internal components. The methodologies are tested against state-of-the-art software for relevant test cases, comparing results on external structures and internal components.

Read this paper on arXiv…

M. Trisolini, H. Lewis and C. Colombo
Fri, 13 Mar 20
47/53

Comments: Article accepted for publication in Advances in Space Research

Forecasting Megaelectron-Volt Electrons inside Earth's Outer Radiation Belt: PreMevE 2.0 Based on Supervised Machine Learning Algorithms [CL]

http://arxiv.org/abs/1911.01315


Here we present the recent progress in upgrading a predictive model for Megaelectron-Volt (MeV) electrons inside the Earth’s outer Van Allen belt. This updated model, called PreMevE 2.0, is demonstrated to make much improved forecasts, particularly at outer L-shells, by adding upstream solar wind speeds to the model’s input parameter list. Furthermore, based on several kinds of linear and artificial machine learning algorithms, a list of models was constructed, trained, validated and tested with 42-month MeV electron observations from Van Allen Probes. Out-of-sample test results from these models show that, with optimized model hyperparameters and input parameter combinations, the top performer from each category of models has a similar capability of making reliable 1-day (2-day) forecasts with L-shell-averaged performance efficiency values ~ 0.87 (~0.82). Interestingly, the linear regression model is often the most successful one when compared to other models, which indicates the relationship between 1 MeV electron dynamics and precipitating electrons is dominated by linear components. It is also shown that PreMevE 2.0 can reasonably predict the onsets of MeV electron events in 2-day forecasts. This improved PreMevE model is driven by observations from longstanding space infrastructure (a NOAA satellite in low-Earth-orbit, the solar wind monitor at the L1 point, and one LANL satellite in geosynchronous orbit) to make high-fidelity forecasts for MeV electrons, and thus can be an invaluable space weather forecasting tool for the future.

Read this paper on arXiv…

R. Lima, Y. Chen and Y. Lin
Tue, 5 Nov 19
54/72

Comments: N/A

Spacecraft design optimisation for demise and survivability [CL]

http://arxiv.org/abs/1910.05091


Among the mitigation measures introduced to cope with the space debris issue there is the de-orbiting of decommissioned satellites. Guidelines for re-entering objects call for a ground casualty risk no higher than 0.0001. To comply with this requirement, satellites can be designed through a design-for-demise philosophy. Still, a spacecraft designed to demise has to survive the debris-populated space environment for many years. The demisability and the survivability of a satellite can both be influenced by a set of common design choices such as the material selection, the geometry definition, and the position of the components. Within this context, two models have been developed to analyse the demise and the survivability of satellites. Given the competing nature of the demisability and the survivability, a multi-objective optimisation framework was developed, with the aim of identifying trade-off solutions for the preliminary design of satellites. As the problem is nonlinear and involves the combination of continuous and discrete variables, classical derivative-based approaches are unsuited and a genetic algorithm was selected instead. The genetic algorithm uses the developed demisability and survivability criteria as the fitness functions of the multi-objective algorithm. The paper presents a test case, which considers the preliminary optimisation of tanks in terms of material, geometry, location, and number of tanks for a representative Earth observation mission. The configuration of the external structure of the spacecraft is fixed. Tanks were selected because they are sensitive to both design requirements: they represent critical components in the demise process and impact damage can cause the loss of the mission because of leaking and ruptures. The results present the possible trade-off solutions, constituting the Pareto front obtained from the multi-objective optimisation.
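
The final step, extracting the trade-off (Pareto-optimal) designs from the evaluated candidates, reduces to a non-dominance test; a sketch for two objectives assumed to be maximized, with random scores standing in for the demisability and survivability criteria:

    import numpy as np

    def pareto_front(objectives):
        """Boolean mask of non-dominated rows; every objective is to be maximized."""
        f = np.asarray(objectives, float)
        keep = np.ones(len(f), dtype=bool)
        for i in range(len(f)):
            # j dominates i if it is >= in every objective and > in at least one
            dominates_i = np.all(f >= f[i], axis=1) & np.any(f > f[i], axis=1)
            if dominates_i.any():
                keep[i] = False
        return keep

    rng = np.random.default_rng(3)
    scores = rng.uniform(size=(50, 2))   # 50 candidate tank designs, 2 criteria
    print(scores[pareto_front(scores)])  # the trade-off (Pareto-optimal) designs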

Read this paper on arXiv…

M. Trisolini, H. Lewis and C. Colombo
Mon, 14 Oct 19
40/69

Comments: Paper accepted for publication in Aerospace Science and Technology

WVTICs — SPH initial conditions for everyone [IMA]

http://arxiv.org/abs/1907.11250


We present a novel and fast application to generate glass-like initial conditions for Lagrangian hydrodynamic schemes (e.g. Smoothed Particle Hydrodynamics (SPH)) following arbitrary density models based on weighted Voronoi tessellations and combine it with improved initial configurations and an additional particle reshuffling scheme. We show our application’s ability to sample different kinds of density features and to converge properly towards the given model density as well as a glass-like particle configuration. We analyse convergence with iterations as well as with varying particle number. Additionally, we demonstrate the versatility of the implemented algorithms by providing an extensive test suite for standard (magneto-) hydrodynamic test cases as well as a few common astrophysical applications. We indicate the potential to bridge further between observational astronomy and simulations as well as applicability to other fields of science by advanced features such as describing a density model using gridded data, for example from an image file, instead of an analytic model.

Read this paper on arXiv…

A. Arth, J. Donnert, U. Steinwandel, et al.
Mon, 29 Jul 19
36/52

Comments: 16 pages, 17 figures, submitted to Astronomy & Computing

Trajectory Design of Multiple Near Earth Asteroids Exploration Using Solar Sail Based on Deep Neural Network [CL]

http://arxiv.org/abs/1901.02172


In the preliminary trajectory design of the multi-target rendezvous problem, a model that can quickly estimate the cost of the orbital transfer is essential. The estimation of the transfer time using a solar sail between two arbitrary orbits is difficult and usually requires solving an optimal control problem. Inspired by the successful applications of deep neural networks in nonlinear regression, this work explores the possibility and effectiveness of mapping the transfer time for a solar sail from the orbital characteristics using a deep neural network. Furthermore, the Monte Carlo Tree Search method is investigated and used to search the optimal sequence for the multi-asteroid exploration problem. The sequences obtained in the preliminary design are then verified by sequentially solving the optimal control problems. Two examples of different application backgrounds validate the effectiveness of the proposed approach.

Read this paper on arXiv…

Y. Song and S. Gong
Wed, 9 Jan 19
14/46

Comments: 34 pages, 19 figures

A Semi-Analytical Method for Calculating Revisit Time for Satellite Constellations with Discontinuous Coverage [CL]

http://arxiv.org/abs/1807.02021


This paper presents a unique approach to the problem of calculating revisit time metrics for different satellite orbits, sensor geometries, and constellation configurations with application to early lifecycle design and optimisation processes for Earth observation missions. The developed semi-analytical approach uses an elliptical projected footprint geometry to provide an accuracy similar to that of industry standard numerical orbit simulation software but with an efficiency of published analytical methods. Using the developed method, extensive plots of maximum revisit time are presented for varying altitude, inclination, target latitudes, sensor capabilities, and constellation configuration, providing valuable reference for Earth observation system design.

Read this paper on arXiv…

N. Crisp, S. Livadiotti and P. Roberts
Fri, 6 Jul 18
8/52

Comments: 10 pages, 10 figures

A nonlinear and time-dependent visco-elasto-plastic rheology model for studying shock-physics phenomena [CL]

http://arxiv.org/abs/1805.06453


We present a simple and efficient implementation of a viscous creep rheology based on diffusion creep, dislocation creep and the Peierls mechanism in conjunction with an elasto-plastic rheology model into a shock-physics code, the iSALE open-source impact code. Our approach is based on the calculation of an effective viscosity which is then used as a reference viscosity for any underlying viscoelastic (or even visco-elasto-plastic) model. Here we use a Maxwell model, which best describes stress relaxation and is therefore likely most important for the formation of large meteorite impact basins. While common viscoelastic behavior during mantle convection or other slow geodynamic or geological processes is mostly controlled by diffusion and dislocation creep, we show that the Peierls mechanism dominates at the large strain rates that typically occur during meteorite impacts. Thus, the resulting visco-elasto-plastic rheology allows implementation of a more realistic mantle behavior in computer simulations, especially for those dealing with large meteorite impacts. The approach shown here opens the way to more faithful simulations of large impact basin formation, especially in elucidating the physics behind the formation of the external fault rings characteristic of large lunar basins.
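
The effective viscosity combines mechanisms acting in parallel, so the per-mechanism strain rates add at a common stress and eta_eff = sigma / (2 * total strain rate); the flow-law constants below are purely illustrative placeholders, not the calibrated values used in iSALE:

    import numpy as np

    R = 8.314  # gas constant [J/mol/K]

    def strain_rates(sigma, T):
        """Per-mechanism strain rates [1/s] at stress sigma [Pa], temperature T [K].
        All prefactors and activation energies are illustrative placeholders."""
        diffusion   = 1e-15 * sigma * np.exp(-3.0e5 / (R * T))
        dislocation = 1e-32 * sigma**3.5 * np.exp(-5.3e5 / (R * T))
        # Peierls term: negligible at low stress, grows steeply as sigma -> sigma_P
        peierls     = 1e-5 * np.exp(-5.4e5 / (R * T) * (1.0 - sigma / 9.1e9)**2)
        return diffusion, dislocation, peierls

    def effective_viscosity(sigma, T):
        """Mechanisms act in parallel: strain rates add at common stress,
        and eta_eff = sigma / (2 * total strain rate)."""
        return sigma / (2.0 * sum(strain_rates(sigma, T)))

    for sigma in (1e6, 1e8, 3e9):       # quasi-static vs. impact-like stresses
        rates = strain_rates(sigma, 1600.0)
        print(f"sigma={sigma:.0e} Pa  rates={[f'{r:.1e}' for r in rates]}  "
              f"eta_eff={effective_viscosity(sigma, 1600.0):.2e} Pa s")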

Read this paper on arXiv…

D. Elbeshausen and J. Melosh
Fri, 18 May 18
23/51

Comments: 20 pages 6 figures

Improving Orbit Prediction Accuracy through Supervised Machine Learning [EPA]

http://arxiv.org/abs/1801.04856


Due to the lack of information such as the space environment condition and resident space objects’ (RSOs’) body characteristics, current orbit predictions that are solely grounded on physics-based models may fail to achieve required accuracy for collision avoidance and have led to satellite collisions already. This paper presents a methodology to predict RSOs’ trajectories with higher accuracy than that of the current methods. Inspired by the machine learning (ML) theory through which the models are learned based on large amounts of observed data and the prediction is conducted without explicitly modeling space objects and space environment, the proposed ML approach integrates physics-based orbit prediction algorithms with a learning-based process that focuses on reducing the prediction errors. Using a simulation-based space catalog environment as the test bed, the paper demonstrates three types of generalization capability for the proposed ML approach: 1) the ML model can be used to improve the same RSO’s orbit information that is not available during the learning process but shares the same time interval as the training data; 2) the ML model can be used to improve predictions of the same RSO at future epochs; and 3) the ML model based on a RSO can be applied to other RSOs that share some common features.

Read this paper on arXiv…

H. Peng and X. Bai
Tue, 16 Jan 18
35/79

Comments: 30 pages, 21 figures, 4 tables, Preprint submitted to Advances in Space Research, on December 14, 2017

ENIGMA: Eccentric, Non-spinning, Inspiral Gaussian-process Merger Approximant for the characterization of eccentric binary black hole mergers [CL]

http://arxiv.org/abs/1711.06276


We present $\texttt{ENIGMA}$, a time domain, inspiral-merger-ringdown waveform model that describes non-spinning binary black hole systems that evolve on moderately eccentric orbits. The inspiral evolution is described using a consistent combination of post-Newtonian theory, self-force and black hole perturbation theory. Assuming moderately eccentric binaries that circularize prior to coalescence, we smoothly match the eccentric inspiral with a stand-alone, quasi-circular merger, which is constructed using machine learning algorithms that are trained with quasi-circular numerical relativity waveforms. We show that $\texttt{ENIGMA}$ reproduces with excellent accuracy the dynamics of quasi-circular compact binaries. We validate $\texttt{ENIGMA}$ using a set of $\texttt{Einstein Toolkit}$ eccentric numerical relativity waveforms, which describe eccentric binary black hole mergers with mass ratios $1 \leq q \leq 5.5$, and eccentricities $e_0 \lesssim 0.2$ ten orbits before merger. We use this model to explore in detail the physics that can be extracted with moderately eccentric, non-spinning binary black hole mergers. In particular, we use $\texttt{ENIGMA}$ to show that the gravitational wave transients GW150914, GW151226, GW170104 and GW170814 can be effectively recovered with spinning, quasi-circular templates if the eccentricity of these events at a gravitational wave frequency of 10 Hz satisfies $e_0\leq \{0.175,\, 0.125,\, 0.175,\, 0.175\}$, respectively. We show that if these systems have eccentricities $e_0\sim 0.1$ at a gravitational wave frequency of 10 Hz, they can be misclassified as quasi-circular binaries due to parameter space degeneracies between eccentricity and spin corrections.

Read this paper on arXiv…

E. Huerta, C. Moore, P. Kumar, et al.
Mon, 20 Nov 17
1/56

Comments: 17 pages, 9 figures, 1 Appendix. Submitted to Phys. Rev. D

Galactos: Computing the Anisotropic 3-Point Correlation Function for 2 Billion Galaxies [CEA]

http://arxiv.org/abs/1709.00086


The nature of dark energy and the complete theory of gravity are two central questions currently facing cosmology. A vital tool for addressing them is the 3-point correlation function (3PCF), which probes deviations from a spatially random distribution of galaxies. However, the 3PCF’s formidable computational expense has prevented its application to astronomical surveys comprising millions to billions of galaxies. We present Galactos, a high-performance implementation of a novel, O(N^2) algorithm that uses a load-balanced k-d tree and spherical harmonic expansions to compute the anisotropic 3PCF. Our implementation is optimized for the Intel Xeon Phi architecture, exploiting SIMD parallelism, instruction and thread concurrency, and significant L1 and L2 cache reuse, reaching 39% of peak performance on a single node. Galactos scales to the full Cori system, achieving 9.8PF (peak) and 5.06PF (sustained) across 9636 nodes, making the 3PCF easily computable for all galaxies in the observable universe.

Read this paper on arXiv…

B. Friesen, M. Patwary, B. Austin, et al.
Mon, 4 Sep 17
42/61

Comments: 11 pages, 7 figures, accepted to SuperComputing 2017

Solar hard X-ray imaging by means of Compressed Sensing and Finite Isotropic Wavelet Transform [SSA]

http://arxiv.org/abs/1708.03877


This paper shows that compressed sensing realized by means of regularized deconvolution and the Finite Isotropic Wavelet Transform is effective and reliable in hard X-ray solar imaging.
The method utilizes the Finite Isotropic Wavelet Transform with the Meyer function as the mother wavelet. Further, compressed sensing is realized by optimizing a sparsity-promoting regularized objective function by means of the Fast Iterative Shrinkage-Thresholding Algorithm. Finally, the regularization parameter is selected by means of the Miller criterion.
The method is applied to both synthetic data mimicking the Spectrometer/Telescope for Imaging X-rays (STIX) measurements and experimental observations provided by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). The performance of the method is compared with the results provided by standard visibility-based reconstruction methods.
The results show that the application of the sparsity constraint and the use of a continuous, isotropic framework for the wavelet transform provide a notable spatial accuracy and significantly reduce the ringing effects due to the instrument point spread functions.
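
The reconstruction step amounts to minimizing a least-squares data term plus an l1 penalty on the wavelet coefficients with FISTA; the sketch below uses a generic 1-D deconvolution problem and omits the Finite Isotropic Wavelet Transform entirely, so only the optimization skeleton corresponds to the method described:

    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def fista(A, At, y, lam, L, n_iter=200):
        """FISTA for min_x 0.5*||A(x) - y||^2 + lam*||x||_1; A/At are the forward
        operator and its adjoint, L a Lipschitz constant of the gradient."""
        x = np.zeros_like(At(y))
        z, t = x.copy(), 1.0
        for _ in range(n_iter):
            grad = At(A(z) - y)
            x_new = soft_threshold(z - grad / L, lam / L)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = x_new + (t - 1.0) / t_new * (x_new - x)
            x, t = x_new, t_new
        return x

    # toy problem: deconvolve a blurred, sparse 1-D signal
    rng = np.random.default_rng(0)
    n = 200
    x_true = np.zeros(n); x_true[[30, 90, 150]] = [1.0, -0.7, 0.5]
    psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0)**2); psf /= psf.sum()
    A = lambda v: np.convolve(v, psf, mode="same")
    y = A(x_true) + 0.01 * rng.normal(size=n)

    x_rec = fista(A, A, y, lam=0.01, L=1.0)   # symmetric PSF -> A is self-adjoint
    print("recovered support:", np.nonzero(np.abs(x_rec) > 0.1)[0])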

Read this paper on arXiv…

M. Duval-Poo, M. Piana and A. Massone
Tue, 15 Aug 17
38/59

Comments: N/A

Z-checker: A Framework for Assessing Lossy Compression of Scientific Data [CL]

http://arxiv.org/abs/1707.09320


Because of the vast volume of data being produced by today’s scientific simulations and experiments, lossy data compressors allowing user-controlled loss of accuracy during compression are a relevant solution for significantly reducing the data size. However, lossy compressor developers and users are missing a tool to explore the features of scientific datasets and understand the data alteration after compression in a systematic and reliable way. To address this gap, we have designed and implemented a generic framework called Z-checker. On the one hand, Z-checker combines a battery of data analysis components for data compression. On the other hand, Z-checker is implemented as an open-source community tool to which users and developers can contribute and add new analysis components based on their additional analysis demands. In this paper, we present a survey of existing lossy compressors. Then we describe the design framework of Z-checker, in which we integrated evaluation metrics proposed in prior work as well as other analysis tools. Specifically, for lossy compressor developers, Z-checker can be used to characterize critical properties of any dataset to improve compression strategies. For lossy compression users, Z-checker can assess the compression quality, providing various global distortion analyses comparing the original data with the decompressed data, as well as statistical analysis of the compression error. Z-checker can perform the analysis with either coarse granularity or fine granularity, such that the users and developers can select the best-fit, adaptive compressors for different parts of the dataset. Z-checker features a visualization interface displaying all analysis results in addition to some basic views of the datasets such as time series. To the best of our knowledge, Z-checker is the first tool designed to assess lossy compression comprehensively for scientific datasets.
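
The kind of distortion metrics such a framework reports (maximum pointwise error, RMSE/NRMSE, PSNR, correlation) takes only a few lines of numpy; this is not Z-checker's interface, just the underlying arithmetic:

    import numpy as np

    def distortion_report(original, decompressed):
        """A few standard lossy-compression quality metrics."""
        o = np.asarray(original, dtype=np.float64).ravel()
        d = np.asarray(decompressed, dtype=np.float64).ravel()
        err = d - o
        value_range = o.max() - o.min()
        rmse = np.sqrt(np.mean(err**2))
        return {
            "max_abs_error": np.abs(err).max(),
            "rmse": rmse,
            "nrmse": rmse / value_range,
            "psnr_db": 20.0 * np.log10(value_range / rmse),
            "pearson_corr": np.corrcoef(o, d)[0, 1],
        }

    # toy example: a smooth field "compressed" by rounding to 3 decimals
    field = np.sin(np.linspace(0.0, 8.0 * np.pi, 10000))
    for name, value in distortion_report(field, np.round(field, 3)).items():
        print(f"{name:>13s}: {value:.6g}")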

Read this paper on arXiv…

D. Tao, S. Di, H. Guo, et al.
Mon, 31 Jul 17
22/57

Comments: Submitted to The International Journal of High Performance Computing Application (IJHPCA), first revision, 17 pages, 13 figures, double column

A Hybrid Riemann Solver for Large Hyperbolic Systems of Conservation Laws [CL]

http://arxiv.org/abs/1607.05721


We are interested in the numerical solution of large systems of hyperbolic conservation laws or systems in which the characteristic decomposition is expensive to compute. Solving such equations using finite volumes or Discontinuous Galerkin requires a numerical flux function which solves local Riemann problems at cell interfaces. There are various methods to express the numerical flux function. On the one end, there is the robust but very diffusive Lax-Friedrichs solver; on the other end the upwind Godunov solver which respects all resulting waves. The drawback of the latter method is the costly computation of the eigensystem.
This work presents a family of simple first order Riemann solvers, named HLLX$\omega$, which avoid solving the eigensystem. The new method reproduces all waves of the system with less dissipation than other solvers with similar input and effort, such as HLL and FORCE. The family of Riemann solvers can be seen as an extension or generalization of the methods introduced by Degond et al. (1999). We only require the same number of input values as HLL, namely the globally fastest wave speeds in both directions, or an estimate of the speeds. Thus, the new family of Riemann solvers is particularly efficient for large systems of conservation laws when the spectral decomposition is expensive to compute or no explicit expression for the eigensystem is available.
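
For reference, the plain HLL flux that these solvers improve upon needs only the two outermost signal-speed estimates; a sketch for the 1-D Euler equations with simple Davis-type speed estimates (the HLLX-omega correction itself is not reproduced):

    import numpy as np

    GAMMA = 1.4

    def euler_flux(U):
        """Physical flux of the 1-D Euler equations, U = (rho, rho*u, E)."""
        rho, mom, E = U
        u = mom / rho
        p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
        return np.array([mom, mom * u + p, (E + p) * u])

    def wave_speeds(U):
        rho, mom, E = U
        u = mom / rho
        c = np.sqrt(GAMMA * (GAMMA - 1.0) * (E - 0.5 * rho * u**2) / rho)
        return u - c, u + c

    def hll_flux(UL, UR):
        """HLL flux: only the outermost signal-speed estimates are needed,
        no characteristic decomposition."""
        sL = min(wave_speeds(UL)[0], wave_speeds(UR)[0])
        sR = max(wave_speeds(UL)[1], wave_speeds(UR)[1])
        FL, FR = euler_flux(UL), euler_flux(UR)
        if sL >= 0.0:
            return FL
        if sR <= 0.0:
            return FR
        return (sR * FL - sL * FR + sL * sR * (UR - UL)) / (sR - sL)

    # Sod-like interface states (rho, rho*u, E) with E = p/(gamma-1) + 0.5*rho*u^2
    UL = np.array([1.0, 0.0, 1.0 / (GAMMA - 1.0)])
    UR = np.array([0.125, 0.0, 0.1 / (GAMMA - 1.0)])
    print(hll_flux(UL, UR))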

Read this paper on arXiv…

B. Schmidtmann and M. Torrilhon
Thu, 21 Jul 16
46/48

Comments: arXiv admin note: text overlap with arXiv:1606.08040

Hybrid Riemann Solvers for Large Systems of Conservation Laws [CL]

http://arxiv.org/abs/1606.08040


In this paper we present a new family of approximate Riemann solvers for the numerical approximation of solutions of hyperbolic conservation laws. They are approximate, also referred to as incomplete, in the sense that the solvers avoid computing the characteristic decomposition of the flux Jacobian. Instead, they require only an estimate of the globally fastest wave speeds in both directions. Thus, this family of solvers is particularly efficient for large systems of conservation laws, i.e. with many different propagation speeds, and when no explicit expression for the eigensystem is available. Even though only fastest wave speeds are needed as input values, the new family of Riemann solvers reproduces all waves with less dissipation than HLL, which has the same prerequisites, requiring only one additional flux evaluation.

Read this paper on arXiv…

B. Schmidtmann, M. Astrakhantceva and M. Torrilhon
Tue, 28 Jun 16
56/58

Comments: 9 pages

Data Acquisition and Control System for High-Performance Large-Area CCD Systems [CL]

http://arxiv.org/abs/1507.05391


Astronomical CCD systems based on second-generation DINACON controllers were developed at the SAO RAS Advanced Design Laboratory more than seven years ago and since then have been in constant operation at the 6-meter and Zeiss-1000 telescopes. Such systems use monolithic large-area CCDs. We describe the software developed for the control of a family of large-area CCD systems equipped with a DINACON-II controller. The software suite serves for acquisition, primary reduction, visualization, and storage of video data, and also for the control, setup, and diagnostics of the CCD system.

Read this paper on arXiv…

I. Afanasieva
Tue, 21 Jul 15
33/74

Comments: 6 pages, 5 figures

ASTROMLSKIT: A New Statistical Machine Learning Toolkit: A Platform for Data Analytics in Astronomy [CL]

http://arxiv.org/abs/1504.07865


Astroinformatics is a new impact area in the world of astronomy, occasionally called the final frontier, where several astrophysicists, statisticians and computer scientists work together to tackle various data-intensive astronomical problems. Exponential growth in the data volume and increased complexity of the data add difficult questions to the existing challenges. Classical problems in Astronomy are compounded by the accumulation of astronomical volumes of complex data, rendering the task of classification and interpretation incredibly laborious. The presence of noise in the data makes analysis and interpretation even more arduous. Machine learning algorithms and data analytic techniques provide the right platform for the challenges posed by these problems. A diverse range of open problems, such as star-galaxy separation, detection and classification of exoplanets, and classification of supernovae, is discussed. The focus of the paper is the applicability and efficacy of various machine learning algorithms like K Nearest Neighbor (KNN), random forest (RF), decision tree (DT), Support Vector Machine (SVM), Naïve Bayes and Linear Discriminant Analysis (LDA) in analysis and inference of the decision theoretic problems in Astronomy. The machine learning algorithms, integrated into ASTROMLSKIT, a toolkit developed in the course of the work, have been used to analyze HabCat data and supernovae data. Accuracy has been found to be appreciably good.

Read this paper on arXiv…

S. Saha, S. Agrawal, M. R, et al.
Thu, 30 Apr 15
43/43

Comments: Habitability Catalog (HabCat), Supernova classification, data analysis, Astroinformatics, Machine learning, ASTROMLS toolkit, Naïve Bayes, SVD, PCA, Random Forest, SVM, Decision Tree, LDA

Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models [CL]

http://arxiv.org/abs/1502.07758


Simulating a binary black hole coalescence by solving Einstein’s equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from $1$ to $10$ and durations corresponding to about $15$ orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were not used for the surrogate’s training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second depending on the number of output modes and the sampling rate. Our model includes all spherical-harmonic ${}_{-2}Y_{\ell m}$ waveform modes that can be resolved by the NR code up to $\ell=8$, including modes that are typically difficult to model with other approaches. We assess the model’s uncertainty, which could be useful in parameter estimation studies seeking to incorporate model error. We anticipate NR surrogate models to be useful for rapid NR waveform generation in multiple-query applications like parameter estimation, template bank construction, and testing the fidelity of other waveform models.
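
The reduced-order-modeling backbone, projecting training waveforms onto a reduced basis and fitting the projection coefficients across parameter space, can be sketched on a toy one-parameter waveform family; the greedy basis construction, empirical interpolation and actual NR data are not reproduced:

    import numpy as np

    # toy "waveform" family parameterized by q (a stand-in for the mass ratio)
    t = np.linspace(0.0, 1.0, 512)
    def waveform(q):
        return (np.exp(-q * t) * np.sin(2 * np.pi * 8 * t)
                + (0.5 / q) * np.exp(-3 * t) * np.cos(2 * np.pi * 15 * t))

    q_train = np.linspace(1.0, 10.0, 40)
    training_set = np.array([waveform(q) for q in q_train])     # (40, 512)

    # step 1: reduced basis from an SVD of the training set
    _, s, Vt = np.linalg.svd(training_set, full_matrices=False)
    n_basis = int(np.sum(s / s[0] > 1e-8))
    basis = Vt[:n_basis]                                        # orthonormal rows

    # step 2: fit each projection coefficient as a smooth function of q
    coeffs = training_set @ basis.T                             # (40, n_basis)
    fits = [np.polynomial.Chebyshev.fit(q_train, coeffs[:, j], 12)
            for j in range(n_basis)]

    def surrogate(q):
        """Fast evaluation of the model at an unseen parameter value."""
        return sum(fit(q) * basis[j] for j, fit in enumerate(fits))

    q_test = 3.3
    err = np.max(np.abs(surrogate(q_test) - waveform(q_test)))
    print(f"{n_basis} basis waveforms, max error at q={q_test}: {err:.1e}")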

Read this paper on arXiv…

J. Blackman, S. Field, C. Galley, et al.
Mon, 2 Mar 15
37/39

Comments: 6 pages, 6 figures

The Murchison Widefield Array Correlator [IMA]

http://arxiv.org/abs/1501.05992


The Murchison Widefield Array (MWA) is a Square Kilometre Array (SKA) Precursor. The telescope is located at the Murchison Radio-astronomy Observatory (MRO) in Western Australia (WA). The MWA consists of 4096 dipoles arranged into 128 dual polarisation aperture arrays forming a connected element interferometer that cross-correlates signals from all 256 inputs. A hybrid approach to the correlation task is employed, with some processing stages being performed by bespoke hardware, based on Field Programmable Gate Arrays (FPGAs), and others by Graphics Processing Units (GPUs) housed in general purpose rack mounted servers. The correlation capability required is approximately 8 TFLOPS (Tera FLoating point Operations Per Second). The MWA has commenced operations and the correlator is generating 8.3 TB/day of correlation products, that are subsequently transferred 700 km from the MRO to Perth (WA) in real-time for storage and offline processing. In this paper we outline the correlator design, signal path, and processing elements and present the data format for the internal and external interfaces.
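
The cross-multiply-and-accumulate ("X") stage at the heart of such a correlator is a single contraction over time for every input pair and frequency channel; a numpy sketch with arbitrary small sizes (the MWA's FPGA/GPU pipeline and data format are not reproduced):

    import numpy as np

    rng = np.random.default_rng(0)
    n_input, n_chan, n_time = 8, 32, 1000   # small stand-ins for the real 256 inputs

    # channelised complex voltages: (inputs, frequency channels, time samples)
    volt = (rng.normal(size=(n_input, n_chan, n_time))
            + 1j * rng.normal(size=(n_input, n_chan, n_time)))

    # cross-multiply and accumulate over time: one visibility per input pair and channel
    vis = np.einsum("ict,jct->ijc", volt, np.conj(volt)) / n_time
    print(vis.shape)                        # (8, 8, 32), Hermitian in the input pair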

Read this paper on arXiv…

S. Ord, B. Crosse, D. Emrich, et al.
Tue, 27 Jan 15
30/79

Comments: 17 pages, 9 figures. Accepted for publication in PASA. Some figures altered to meet astro-ph submission requirements

Achieving 100,000,000 database inserts per second using Accumulo and D4M [CL]

http://arxiv.org/abs/1406.4923


The Apache Accumulo database is an open source relaxed consistency database that is widely used for government applications. Accumulo is designed to deliver high performance on unstructured data such as graphs of network data. This paper tests the performance of Accumulo using data from the Graph500 benchmark. The Dynamic Distributed Dimensional Data Model (D4M) software is used to implement the benchmark on a 216-node cluster running the MIT SuperCloud software stack. A peak performance of over 100,000,000 database inserts per second was achieved which is 100x larger than the highest previously published value for any other database. The performance scales linearly with the number of ingest clients, number of database servers, and data size. The performance was achieved by adapting several supercomputing techniques to this application: distributed arrays, domain decomposition, adaptive load balancing, and single-program-multiple-data programming.

Read this paper on arXiv…

J. Kepner, W. Arcand, D. Bestor, et al.
Fri, 20 Jun 14
2/48

Comments: 6 pages; to appear in IEEE High Performance Extreme Computing (HPEC) 2014

Supervised detection of anomalous light-curves in massive astronomical catalogs [CL]

http://arxiv.org/abs/1404.4888


The development of synoptic sky surveys has led to a massive amount of data for which resources needed for analysis are beyond human capabilities. To process this information and to extract all possible knowledge, machine learning techniques become necessary. Here we present a new method to automatically discover unknown variable objects in large astronomical catalogs. With the aim of taking full advantage of all the information we have about known objects, our method is based on a supervised algorithm. In particular, we train a random forest classifier using known variability classes of objects and obtain votes for each of the objects in the training set. We then model this voting distribution with a Bayesian network and obtain the joint voting distribution among the training objects. Consequently, an unknown object is considered as an outlier insofar as it has a low joint probability. Our method is suitable for exploring massive datasets given that the training process is performed offline. We tested our algorithm on 20 million light-curves from the MACHO catalog and generated a list of anomalous candidates. We divided the candidates into two main classes of outliers: artifacts and intrinsic outliers. Artifacts were principally due to air mass variation, seasonal variation, bad calibration or instrumental errors and were consequently removed from our outlier list and added to the training set. After retraining, we selected about 4000 objects, which we passed to a post-analysis stage by performing a cross-match with all publicly available catalogs. Within these candidates we identified certain known but rare objects such as eclipsing Cepheids, blue variables, cataclysmic variables and X-ray sources. For some outliers there was no additional information. Among them we identified three unknown variability types and a few individual outliers that will be followed up for a deeper analysis.
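
The outlier logic, training a random forest on known classes and flagging objects whose vote vectors are jointly unlikely, can be sketched with synthetic features; a kernel density estimate stands in for the Bayesian network used in the paper:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(0)

    # synthetic light-curve features for three well-separated known classes
    X_known = np.vstack([rng.normal(loc=m, scale=0.5, size=(300, 4))
                         for m in (0.0, 4.0, 8.0)])
    y_known = np.repeat([0, 1, 2], 300)

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_known, y_known)

    # model the joint distribution of the training objects' vote vectors
    # (a KDE stands in for the Bayesian network used in the paper)
    vote_model = KernelDensity(bandwidth=0.05).fit(rf.predict_proba(X_known))

    # new objects: 50 ordinary ones plus 5 injected anomalies between classes
    X_new = np.vstack([rng.normal(loc=4.0, scale=0.5, size=(50, 4)),
                       rng.normal(loc=2.0, scale=0.1, size=(5, 4))])
    scores = vote_model.score_samples(rf.predict_proba(X_new))
    print("lowest joint-vote scores:", np.argsort(scores)[:5])   # expect indices 50-54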

Read this paper on arXiv…

I. Nun, K. Pichara, P. Protopapas, et al.
Tue, 22 Apr 14
31/54

Web-Based Visualization of Very Large Scientific Astronomy Imagery [IMA]

http://arxiv.org/abs/1403.6025


Visualizing and navigating through large astronomy images from a remote location with current astronomy display tools can be a frustrating experience in terms of speed and ergonomics, especially on mobile devices. In this paper, we present a high performance, versatile and robust client-server system for remote visualization and analysis of extremely large scientific images. Applications of this work include survey image quality control, interactive data query and exploration, citizen science, as well as public outreach. The proposed software is entirely open source and is designed to be generic and applicable to a variety of data sets. It provides access to full precision floating point data at terabyte scales, with the ability to precisely adjust image settings in real-time. The proposed clients are light-weight, platform-independent web applications built on standard HTML5 web technologies and compatible with both touch-based and mouse-based devices. We put the system to the test, assess its performance, and show that a single server can comfortably handle more than a hundred simultaneous users accessing full precision 32 bit astronomy data.

Read this paper on arXiv…

E. Bertin, R. Pillay and C. Marmo
Tue, 25 Mar 14
74/79

Singular Value Decomposition of Images from Scanned Photographic Plates


We want to approximate the m×n image A from scanned astronomical photographic plates (from the Sofia Sky Archive Data Center) using far fewer entries than in the original matrix. By truncating the SVD to a rank k, with k<m or k<n, we remove redundant information or noise, so that the truncation acts as a Wiener filter. With this approximation, a compression ratio of more than 98% is obtained for an image of an astronomical plate without losing its details. The SVD of images from scanned photographic plates (SPP) and its possible use for image compression are considered.
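
The rank-k truncation described here takes a few lines of numpy; the synthetic "plate" below is a smooth background plus noise, and the compression ratio counts the entries stored by the truncated factors against the full matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, k = 1000, 800, 20

    # synthetic "plate": a smooth background plus noise
    yy, xx = np.mgrid[0:m, 0:n]
    A = np.exp(-((xx - 400)**2 + (yy - 500)**2) / 2.0e5) + 0.01 * rng.normal(size=(m, n))

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_k = (U[:, :k] * s[:k]) @ Vt[:k]        # rank-k approximation (Wiener-like filter)

    stored = k * (m + n + 1)                 # entries kept in U_k, s_k and V_k
    print(f"relative error : {np.linalg.norm(A - A_k) / np.linalg.norm(A):.3f}")
    print(f"entries dropped: {100 * (1 - stored / (m * n)):.1f}%")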

Read this paper on arXiv…

Date added: Tue, 8 Oct 13