Using Mutual Information to measure Time-lags from non-linear processes in Astronomy [IMA]

http://arxiv.org/abs/2106.08623


Measuring time lags between time series or lightcurves at different wavelengths from a variable or transient source in astronomy is an essential probe of the physical mechanisms causing multiwavelength variability. Time lags are typically quantified using discrete correlation functions (DCF), which are appropriate for linear relationships. However, in variable sources like X-ray binaries, active galactic nuclei (AGN) and other accreting systems, the radiative processes and the resulting multiwavelength lightcurves often have non-linear relationships. For such systems it is more appropriate to use non-linear, information-theoretic measures of causation such as mutual information, routinely used in other disciplines. We demonstrate with toy models the pitfalls of using the standard DCF and show improvements when using the mutual information correlation function (MICF). For non-linear correlations, the latter accurately and sharply identifies the lag components, whereas the DCF can be erroneous. We then apply the MICF to the multiwavelength lightcurves of the AGN NGC 4593. We find that X-ray fluxes lead UVW2 fluxes by ~0.2 days, closer to model predictions from reprocessing by the accretion disk than the DCF estimate, although the uncertainties with the current lightcurves are too large to rule out negative lags. Additionally, we find another delay component at ~-1 day, i.e. UVW2 leading X-rays, consistent with inward-propagating fluctuations in the accretion disk scenario; this is not detected by the DCF. Given the non-linear relation between X-ray and UVW2 fluxes, this is worthy of further theoretical investigation. From both toy models and real observations, it is clear that the mutual information based estimator is highly sensitive to complex non-linear correlations. With sufficiently high temporal resolution, we will precisely detect each of the lag features corresponding to these correlations.
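For readers who want to experiment with the idea, the contrast between a linear cross-correlation and a mutual-information lag scan is easy to reproduce on toy data. The sketch below is not the authors' code: it assumes evenly sampled series and uses a plain histogram estimator of mutual information, whereas the paper works with real, discretely sampled lightcurves.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of the mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def lag_scan(a, b, max_lag, dt=1.0, bins=16):
    """MI and Pearson r between a(t) and b(t + lag); positive lag = b lags a."""
    n = len(a)
    lags, mi, cc = [], [], []
    for k in range(-max_lag, max_lag + 1):
        xa, xb = (a[:n - k], b[k:]) if k >= 0 else (a[-k:], b[:n + k])
        lags.append(k * dt)
        mi.append(mutual_information(xa, xb, bins))
        cc.append(np.corrcoef(xa, xb)[0, 1])
    return np.array(lags), np.array(mi), np.array(cc)

# Toy example: b responds to a with a 5-step delay through an even (quadratic),
# hence linearly uncorrelated, transfer function.
rng = np.random.default_rng(1)
a = rng.standard_normal(4000)
b = np.roll(a, 5) ** 2 + 0.3 * rng.standard_normal(4000)
lags, mi, cc = lag_scan(a, b, max_lag=20)
print("MI peaks at lag", lags[np.argmax(mi)])   # should recover +5
print("max |Pearson r| =", np.abs(cc).max())    # stays near zero at all lags
```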

Read this paper on arXiv…

N. Chakraborty and P. Leeuwen
Thu, 17 Jun 21
47/74

Comments: 13 pages, 6 figures

When Outliers Are Different [HEAP]

http://arxiv.org/abs/2106.05212


When does the presence of an outlier in some measured property indicate that the outlying object differs qualitatively, rather than quantitatively, from other members of its apparent class? Historical examples include the many types of supernovae and short vs. long Gamma Ray Bursts. There may be only one parameter and one outlier, so that principal component analyses are inapplicable. A qualitative difference implies that some parameter has a characteristic scale, and hence its distribution cannot be a power law (which can have no such scale). If the distribution is a power law, the objects differ only quantitatively. The applicability of a power law to an empirical distribution may be tested by comparing the most extreme member to its next-most extreme. The probability distribution of their ratio is calculated, and compared to data for stars, radio and X-ray sources, and the fluxes, fluences and rotation measures of Fast Radio Bursts.
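A quick way to see the test in action: for a pure power-law (Pareto) distribution, the ratio of the largest to the second-largest sample is itself power-law distributed with the same index, independent of sample size. The Monte Carlo sketch below checks that survival function numerically; it illustrates the general idea, not the paper's calculation.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, trials = 1.5, 200, 20_000

# Classical Pareto samples with x_min = 1 (numpy's pareto() is Lomax, hence +1).
x = rng.pareto(alpha, size=(trials, n)) + 1.0
top2 = np.sort(x, axis=1)[:, -2:]
ratio = top2[:, 1] / top2[:, 0]          # most extreme / next-most extreme

# Under a pure power law, P(ratio > r) = r**(-alpha), independent of n.
for r in (1.5, 2.0, 4.0):
    print(f"r = {r}: empirical {np.mean(ratio > r):.3f}  vs  r^-alpha {r ** -alpha:.3f}")
```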

Read this paper on arXiv…

J. Katz
Thu, 10 Jun 21
51/77

Comments: 4 pp, 2 figs

Simulating Photometric Images of Moving Targets with Photon-mapping [IMA]

http://arxiv.org/abs/2106.01348


We present a novel, easy-to-use method based on the photon-mapping technique to simulate photometric images of moving targets. Realistic images can be created in two passes: photon tracing and image rendering. The nature of the light sources, the tracking mode of the telescope, the point spread function (PSF), and the specifications of the CCD are taken into account in the imaging process. Photometric images in a variety of observation scenarios can be generated flexibly. We compared the simulated images with observed ones. The residuals between them are negligible, and the correlation coefficients between them are high, with a median of $0.9379_{-0.0201}^{+0.0125}$ for 1020 pairs of images, indicating high fidelity and similarity. The method is versatile and can be used to plan future photometry of moving targets, interpret existing observations, and provide test images for image processing algorithms.

Read this paper on arXiv…

J. Du, S. Hu, X. Chen, et. al.
Thu, 3 Jun 21
45/55

Comments: 17 pages, 7 figures

Nested sampling for frequentist computation: fast estimation of small $p$-values [CL]

http://arxiv.org/abs/2105.13923


We propose a novel method for computing $p$-values based on nested sampling (NS) applied to the sampling space rather than the parameter space of the problem, in contrast to its usage in Bayesian computation. The computational cost of NS scales as $\log^2{1/p}$, which compares favorably to the $1/p$ scaling for Monte Carlo (MC) simulations. For significances greater than about $4\sigma$ in both a toy problem and a simplified resonance search, we show that NS requires orders of magnitude fewer simulations than ordinary MC estimates. This is particularly relevant for high-energy physics, which adopts a $5\sigma$ gold standard for discovery. We conclude with remarks on new connections between Bayesian and frequentist computation and possibilities for tuning NS implementations for still better performance in this setting.
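The scaling argument can be illustrated with a deliberately crude nested-sampling loop on a toy statistic (a chi-square with three degrees of freedom). Everything below is an illustrative sketch under simplifying assumptions (a short constrained random walk as the replacement step, a standard-normal "sampling space"), not the authors' implementation; the point is that the number of iterations grows like nlive*log(1/p) rather than 1/p.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def test_stat(x):
    return np.sum(x ** 2, axis=-1)                 # toy TS: chi^2 with d d.o.f.

def ns_pvalue(t_obs, d=3, nlive=100, steps=20):
    """Crude nested-sampling estimate of p = P(TS > t_obs) under the null.

    Live points are drawn from the null sampling distribution (standard normal);
    each iteration replaces the worst point via a constrained random walk, and
    the enclosed tail probability shrinks by ~exp(-1/nlive) per iteration."""
    live = rng.standard_normal((nlive, d))
    ts = test_stat(live)
    n_iter = 0
    while ts.min() < t_obs:
        worst = int(np.argmin(ts))
        t_min = ts[worst]
        # Start a constrained random walk from a randomly chosen survivor.
        x = live[rng.choice(np.flatnonzero(ts > t_min))].copy()
        for _ in range(steps):
            prop = x + 0.5 * rng.standard_normal(d)
            # Metropolis step w.r.t. the standard normal, restricted to TS > t_min.
            if test_stat(prop) > t_min and rng.random() < np.exp(0.5 * (x @ x - prop @ prop)):
                x = prop
        live[worst], ts[worst] = x, test_stat(x)
        n_iter += 1
    return np.exp(-n_iter / nlive), n_iter

t_obs = stats.chi2(3).isf(2.87e-7)                 # a one-sided '5 sigma' p-value
p_hat, n_iter = ns_pvalue(t_obs)
print(f"true p = 2.9e-07, NS estimate = {p_hat:.1e} after {n_iter} iterations "
      f"(a brute-force MC estimate would need millions of draws)")
```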

Read this paper on arXiv…

A. Fowlie, S. Hoof and W. Handley
Mon, 31 May 21
25/72

Comments: 6 pages, 3 figures

Signal estimation in On/Off measurements including event-by-event variables [CL]

http://arxiv.org/abs/2105.01019


Signal estimation in the presence of background noise is a common problem in several scientific disciplines. An 'On/Off' measurement is performed when the background itself is not known and is estimated from a background control sample. The 'frequentist' and Bayesian approaches for signal estimation in On/Off measurements are reviewed and compared, focusing on the weaknesses of the former and on the advantages of the latter in correctly addressing the Poissonian nature of the problem. In this work, we devise a novel reconstruction method, dubbed BASiL (Bayesian Analysis including Single-event Likelihoods), for estimating the signal rate based on the Bayesian formalism. It uses information on event-by-event individual parameters and their distribution for the signal and background populations. Events are thereby weighted according to their likelihood of being a signal or a background event, and background suppression can be achieved without performing fixed fiducial cuts. Throughout the work, we maintain a general notation that allows the method to be applied generically, and we provide a performance test using real data and simulations of observations with the MAGIC telescopes, as a demonstration of the performance for Cherenkov telescopes. BASiL allows the signal to be estimated more precisely, avoiding loss of exposure due to signal extraction cuts. We expect its applicability to be straightforward in similar cases.
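As context for the comparison above, the standard Bayesian treatment of the On/Off problem (without BASiL's event-by-event weights) can be written in a few lines: the background rate is marginalised numerically and a posterior for the signal rate remains. This is a minimal sketch with flat priors on the non-negative rates, chosen only to illustrate the Poissonian structure of the problem, not the paper's method.

```python
import numpy as np

def signal_posterior(n_on, n_off, alpha, s_grid, b_grid):
    """p(s | n_on, n_off) for the classic On/Off problem.

    Model: n_on ~ Poisson(s + alpha*b), n_off ~ Poisson(b), with flat priors on
    the non-negative rates s and b; b is marginalised on a grid."""
    S, B = np.meshgrid(s_grid, b_grid, indexing="ij")
    mu_on, mu_off = S + alpha * B, B
    log_like = n_on * np.log(mu_on) - mu_on + n_off * np.log(mu_off) - mu_off
    post = np.exp(log_like - log_like.max())
    post_s = post.sum(axis=1)                      # marginalise over b
    return post_s / (post_s.sum() * (s_grid[1] - s_grid[0]))

# Example: 105 events 'On', 300 events 'Off', On/Off exposure ratio of 1/3.
s_grid = np.linspace(0.01, 40.0, 400)
b_grid = np.linspace(50.0, 200.0, 400)
p_s = signal_posterior(105, 300, 1.0 / 3.0, s_grid, b_grid)
print("posterior mean signal:", np.sum(s_grid * p_s) * (s_grid[1] - s_grid[0]))
```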

Read this paper on arXiv…

G. D’Amico, T. Terzić, J. Strišković, et. al.
Tue, 4 May 21
39/72

Comments: Accepted in PRD

Ancient and present surface evolution processes in the Ash region of comet 67P/Churyumov-Gerasimenko [EPA]

http://arxiv.org/abs/2104.13741


The Rosetta mission provided us with detailed data of the surface of the nucleus of comet 67P/Churyumov-Gerasimenko. In order to better understand the physical processes associated with the comet's activity and the surface evolution of its nucleus, we performed a detailed comparative morphometrical analysis of two depressions located in the Ash region. To detect morphological temporal changes, we compared pre- and post-perihelion high-resolution (pixel scale of 0.07-1.75 m) OSIRIS images of the two depressions. We quantified the changes using the dynamic heights and the gravitational slopes calculated from the Digital Terrain Model (DTM) of the studied area using the ArcGIS software before and after perihelion. Our comparative morphometrical analysis allowed us to detect and quantify the temporal changes that occurred in two depressions of the Ash region during the last perihelion passage. We find that the two depressions grew by several meters. The area of the smallest depression (structure I) increased by 90+/-20%, with two preferential growth directions: one close to the cliff, associated with the appearance of new boulders at its foot, and a second one on the opposite side of the cliff. The largest depression (structure II) grew in all directions, increasing in area by 20+/-5%, and no new deposits have been detected. We interpret these depression changes as being driven by the sublimation of ices, which explains their global growth and can also trigger landslides. The deposits associated with depression II reveal a stair-like topography, indicating that they accumulated during several successive landslides from different perihelion passages. Overall, these observations bring additional evidence of complex active processes and reshaping events occurring on short timescales, such as depression growth and landslides, and on longer timescales, such as cliff retreat.

Read this paper on arXiv…

A. Bouquety, L. Jorda, O. Groussin, et. al.
Thu, 29 Apr 21
42/50

Comments: N/A

Via Machinae: Searching for Stellar Streams using Unsupervised Machine Learning [GA]

http://arxiv.org/abs/2104.12789


We develop a new machine learning algorithm, Via Machinae, to identify cold stellar streams in data from the Gaia telescope. Via Machinae is based on ANODE, a general method that uses conditional density estimation and sideband interpolation to detect local overdensities in the data in a model-agnostic way. By applying ANODE to the positions, proper motions, and photometry of stars observed by Gaia, Via Machinae obtains a collection of those stars deemed most likely to belong to a stellar stream. We further apply an automated line-finding method based on the Hough transform to search for line-like features in patches of the sky. In this paper, we describe the Via Machinae algorithm in detail and demonstrate our approach on the prominent stream GD-1. A companion paper contains our identification of other known stellar streams as well as new stellar stream candidates from Via Machinae. Though some parts of the algorithm are tuned to increase sensitivity to cold streams, the Via Machinae technique itself does not rely on astrophysical assumptions, such as the potential of the Milky Way or stellar isochrones. This flexibility suggests that it may have further applications in identifying other anomalous structures within the Gaia dataset, for example debris flows and globular clusters.
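The second stage, finding line-like features among stream-candidate stars, is conceptually simple. Below is a generic Hough-transform accumulator on a synthetic patch of "sky" with an injected thin stream; it is only meant to illustrate the line-finding idea, not the ANODE density estimation or any of Via Machinae's tuning.

```python
import numpy as np

def hough_lines(x, y, n_theta=180, n_rho=200):
    """Accumulate point votes in (theta, rho) space; the peak cell gives the
    best-fitting line rho = x*cos(theta) + y*sin(theta)."""
    theta = np.linspace(0, np.pi, n_theta, endpoint=False)
    rho = x[:, None] * np.cos(theta) + y[:, None] * np.sin(theta)
    rho_edges = np.linspace(rho.min(), rho.max(), n_rho + 1)
    acc = np.zeros((n_theta, n_rho))
    for j in range(n_theta):
        acc[j], _ = np.histogram(rho[:, j], bins=rho_edges)
    jt, jr = np.unravel_index(np.argmax(acc), acc.shape)
    return theta[jt], 0.5 * (rho_edges[jr] + rho_edges[jr + 1]), acc

# Toy patch: 500 'background' stars plus 60 stars along a thin stream.
rng = np.random.default_rng(3)
bg = rng.uniform(-5, 5, size=(500, 2))
t = rng.uniform(-4, 4, size=60)
stream = np.column_stack([t, 0.7 * t + 1.0 + rng.normal(0, 0.05, size=60)])
pts = np.vstack([bg, stream])
theta, rho, _ = hough_lines(pts[:, 0], pts[:, 1])
print(f"detected line: theta = {np.degrees(theta):.1f} deg, rho = {rho:.2f}")
```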

Read this paper on arXiv…

D. Shih, M. Buckley, L. Necib, et. al.
Wed, 28 Apr 21
29/60

Comments: 16 pages, 17 figures

The Solar Cycle Variations of the Anisotropy of Taylor Scale and Correlation Scale in the Solar Wind Turbulence [CL]

http://arxiv.org/abs/2104.04920


The field-aligned anisotropy of the solar wind turbulence, which is quantified by the ratio of the parallel to the perpendicular correlation (and Taylor) length scales, is determined by simultaneous two-point correlation measurements during the time period 2001-2017. Our results show that the correlation scale along the magnetic field is the largest, and the correlation scale in the field-perpendicular directions is the smallest, at both solar maximum and solar minimum. However, the Taylor scale reveals inconsistent results for different stages of the solar cycles. During the years 2001-2004, the Taylor scales are slightly larger in the field-parallel directions, while during the years 2004-2017, the Taylor scales are larger in the field-perpendicular directions. The correlation coefficient between the sunspot number and the anisotropy ratio is employed to describe the effects of solar activity on the anisotropy of solar wind turbulence. The results show that the correlation coefficient regarding the Taylor scale anisotropy (0.65) is larger than that regarding the correlation scale anisotropy (0.43), which indicates that the Taylor scale anisotropy is more sensitive to the solar activity. The Taylor scale and the correlation scale are used to calculate the effective magnetic Reynolds number, which is found to be systematically larger in the field-parallel directions than in the field-perpendicular directions. The correlation coefficient between the sunspot number and the magnetic Reynolds number anisotropy ratio is -0.75. Our results will be meaningful for understanding the solar wind turbulence anisotropy and its long-term variability in the context of solar activity.
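For readers unfamiliar with the two scales, a minimal single-signal illustration: the correlation scale follows from the large-lag fall-off of the two-point correlation function, while the Taylor scale follows from a parabolic fit to its behaviour near zero lag. The sketch below runs on a synthetic time series purely to exercise the estimators; the paper's analysis uses simultaneous two-point (multi-spacecraft) measurements and a different fitting procedure.

```python
import numpy as np

def correlation_function(b, dt, max_lag):
    """Two-point, single-time correlation R(tau) of a zero-mean series."""
    b = b - b.mean()
    lags = np.arange(max_lag)
    R = np.array([np.mean(b[:len(b) - k] * b[k:]) for k in lags])
    return lags * dt, R

def correlation_scale(tau, R):
    """e-folding scale of the normalized correlation function."""
    return tau[np.argmax(R / R[0] < np.exp(-1.0))]

def taylor_scale(tau, R, n_fit=10):
    """Parabolic (Taylor-expansion) fit near the origin:
    R(tau) ~ R(0) * (1 - tau^2 / lambda_T^2)."""
    slope = np.polyfit(tau[:n_fit] ** 2, R[:n_fit] / R[0], 1)[0]
    return np.sqrt(-1.0 / slope)

# Synthetic 'magnetic field' series with a correlation time of roughly 50*dt.
rng = np.random.default_rng(4)
n, dt = 100_000, 1.0
b = np.convolve(rng.standard_normal(n), np.exp(-np.arange(300) / 50.0), mode="same")
tau, R = correlation_function(b, dt, max_lag=300)
print("correlation scale ~", correlation_scale(tau, R))
print("Taylor scale ~", taylor_scale(tau, R))
```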

Read this paper on arXiv…

G. Zhou and H. He
Tue, 13 Apr 21
69/93

Comments: Published in ApJL

A novel approach to the classification of terrestrial drainage networks based on deep learning and preliminary results on Solar System bodies [CL]

http://arxiv.org/abs/2103.04116


Several approaches have been proposed to describe the geomorphology of drainage networks and the abiotic/biotic factors determining their morphology. There is an intrinsic complexity in explicitly qualifying the morphological variations in response to various types of control factors, and a difficulty in expressing the cause-effect links. Traditional methods of drainage network classification are based on the manual extraction of key characteristics, which are then applied as pattern recognition schemes. These approaches, however, have limited predictive power and consistency. We present a different approach, based on data-driven supervised learning from images, extended also to extraterrestrial cases. With deep learning models, the extraction and classification phases are integrated within a more objective, analytical, and automatic framework. Despite the initial difficulties, due to the small number of training images available and the similarity between the different shapes of the drainage samples, we obtained successful results, concluding that deep learning is a valid approach for data exploration in geomorphology and related fields.

Read this paper on arXiv…

C. Donadio, M. Brescia, A. Riccardo, et. al.
Tue, 9 Mar 21
46/68

Comments: Accepted, To be published on Scientific Reports (Nature Research Journal), 22 pages, 3 figures, 4 tables

Nested sampling with any prior you like [IMA]

http://arxiv.org/abs/2102.12478


Nested sampling is an important tool for conducting Bayesian analysis in Astronomy and other fields, both for sampling complicated posterior distributions for parameter inference, and for computing marginal likelihoods for model comparison. One technical obstacle to using nested sampling in practice is the requirement that prior distributions be provided in the form of bijective transformations from the unit hyper-cube to the target prior density. For many applications – particularly when using the posterior from one experiment as the prior for another – such a transformation is not readily available. In this letter we show that parametric bijectors trained on samples from a desired prior density provide a general-purpose method for constructing transformations from the uniform base density to a target prior, enabling the practical use of nested sampling under arbitrary priors. We demonstrate the use of trained bijectors in conjunction with nested sampling on a number of examples from cosmology.
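The requirement being relaxed is easiest to see in code. For simple, factorizable priors the unit-hypercube transformation is just a set of inverse CDFs; when the prior is only available as samples, one needs a monotone map learned from those samples, which is what the parametric bijectors of this letter provide in many dimensions. The 1D empirical-quantile version below is only a toy stand-in for such a trained bijector.

```python
import numpy as np
from scipy import stats, interpolate

# (i) Analytic case: independent priors map from the unit cube via inverse CDFs.
def prior_transform_analytic(u):
    """u in [0,1]^2 -> (mu, sigma) with mu ~ N(0, 5), sigma ~ LogNormal(0, 1)."""
    return np.array([stats.norm(0, 5).ppf(u[0]),
                     stats.lognorm(s=1.0).ppf(u[1])])

# (ii) Sample-defined case: build a monotone map from samples of the prior
# (here 1D, via the empirical quantile function; the paper trains parametric
# bijectors to do the analogous thing in many dimensions).
prior_samples = np.random.default_rng(5).normal(2.0, 0.3, size=20_000)
u_grid = np.linspace(0, 1, 513)
empirical_ppf = interpolate.interp1d(u_grid, np.quantile(prior_samples, u_grid))

def prior_transform_from_samples(u):
    return np.array([empirical_ppf(u[0])])

print(prior_transform_analytic(np.array([0.5, 0.5])))   # ~ [0.0, 1.0]
print(prior_transform_from_samples(np.array([0.975])))  # ~ 2.0 + 1.96*0.3
```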

Read this paper on arXiv…

J. Alsing and W. Handley
Fri, 26 Feb 21
34/60

Comments: 5 pages, 2 figures, prepared for submission as an MNRAS letter

The dynamics of three nearby E0 galaxies in refracted gravity [GA]

http://arxiv.org/abs/2102.12499


We test whether refracted gravity (RG), a modified theory of gravity that describes the dynamics of galaxies without the aid of dark matter, can model the dynamics of the three massive elliptical galaxies NGC 1407, NGC 4486, and NGC 5846 out to $\sim 10R_{\rm e}$, where the stellar mass component fades out and dark matter is required in Newtonian gravity. We probe these outer regions with the kinematics of the globular clusters provided by the SLUGGS survey. RG mimics dark matter with the gravitational permittivity, a monotonic function of the local mass density depending on three parameters, $\epsilon_0$, $\rho_{\rm c}$, and $Q$, that are expected to be universal. RG satisfactorily reproduces the velocity dispersion profiles of the stars and of the red and blue globular clusters, with stellar mass-to-light ratios in agreement with stellar population synthesis models, and orbital anisotropy parameters consistent with previous results obtained in Newtonian gravity with dark matter. The sets of three parameters of the gravitational permittivity found for each galaxy are consistent with each other within $\sim$1$\sigma$. We compare the mean $\epsilon_0$, $\rho_{\rm c}$, and $Q$ found here with the means of the parameters required to model the rotation curves and vertical velocity dispersion profiles of 30 disk galaxies from the DiskMass survey (DMS): $\rho_{\rm c}$ and $Q$ are within 1$\sigma$ of the DMS values, whereas $\epsilon_0$ is within 2.5$\sigma$ of the DMS value. This result suggests the universality of the permittivity function, despite our simplified galaxy model: we treat each galaxy as isolated when, in fact, NGC 1407 and NGC 5846 are members of galaxy groups and NGC 4486 is the central galaxy of the Virgo cluster.

Read this paper on arXiv…

V. Cesare, A. Diaferio and T. Matsakos
Fri, 26 Feb 21
44/60

Comments: 22 pages, 12 figures, 8 tables. Submitted to A&A

Sample variance of rounded variables [CL]

http://arxiv.org/abs/2102.08483


If the rounding errors are assumed to be distributed independently from the intrinsic distribution of the random variable, the sample variance $s^2$ of the rounded variable is given by the sum of the true variance $\sigma^2$ and the variance of the rounding errors (which is equal to $w^2/12$ where $w$ is the size of the rounding window). Here the exact expressions for the sample variance of the rounded variables are examined and it is also discussed when the simple approximation $s^2=\sigma^2+w^2/12$ can be considered valid. In particular, if the underlying distribution $f$ belongs to a family of symmetric normalizable distributions such that $f(x)=\sigma^{-1}F(u)$ where $u=(x-\mu)/\sigma$, and $\mu$ and $\sigma^2$ are the mean and variance of the distribution, then the rounded sample variance scales like $s^2-(\sigma^2+w^2/12)\sim\sigma\Phi'(\sigma)$ as $\sigma\to\infty$ where $\Phi(\tau)=\int_{-\infty}^\infty{\rm d}u\,e^{iu\tau}F(u)$ is the characteristic function of $F(u)$. It follows that, roughly speaking, the approximation is valid for a slowly-varying symmetric underlying distribution with its variance sufficiently larger than the size of the rounding unit.
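The rule of thumb is easy to check numerically. The sketch below rounds Gaussian samples to a unit window and compares the sample variance with the Sheppard-type correction $\sigma^2 + w^2/12$; the approximation degrades once $\sigma$ is no longer large compared to $w$, in line with the result above. (The Gaussian example is an arbitrary choice, not a case treated specifically in the paper.)

```python
import numpy as np

rng = np.random.default_rng(6)
w = 1.0                                # rounding window (unit of the last digit)

for sigma in (0.3, 1.0, 5.0):
    x = rng.normal(0.0, sigma, size=2_000_000)
    x_rounded = w * np.round(x / w)
    s2 = x_rounded.var(ddof=1)
    print(f"sigma={sigma:4.1f}  s^2={s2:8.4f}  "
          f"sigma^2 + w^2/12 = {sigma**2 + w**2 / 12:8.4f}")
```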

Read this paper on arXiv…

J. An
Thu, 18 Feb 21
55/66

Comments: N/A

Real-Time Likelihood-free Inference of Roman Binary Microlensing Events with Amortized Neural Posterior Estimation [IMA]

http://arxiv.org/abs/2102.05673


Fast and automated inference of binary-lens, single-source (2L1S) microlensing events with sampling-based Bayesian algorithms (e.g., Markov Chain Monte Carlo; MCMC) is challenged on two fronts: the high computational cost of likelihood evaluations with microlensing simulation codes, and a pathological parameter space where the negative-log-likelihood surface can contain a multitude of local minima that are narrow and deep. Analysis of 2L1S events usually involves grid searches over some parameters to locate approximate solutions as a prerequisite to posterior sampling, an expensive process that often requires a human in the loop and domain expertise. As the next-generation, space-based microlensing survey with the Roman Space Telescope is expected to yield thousands of binary microlensing events, a new fast and automated method is desirable. Here, we present a likelihood-free inference (LFI) approach named amortized neural posterior estimation, where a neural density estimator (NDE) learns a surrogate posterior $\hat{p}(\theta|x)$ as an observation-parametrized conditional probability distribution, from pre-computed simulations over the full prior space. Trained on 291,012 simulated Roman-like 2L1S events, the NDE produces accurate and precise posteriors within seconds for any observation within the prior support without requiring a domain expert in the loop, thus allowing for real-time and automated inference. We show that the NDE also captures expected posterior degeneracies. The NDE posterior could then be refined into the exact posterior with a downstream MCMC sampler with minimal burn-in steps.

Read this paper on arXiv…

K. Zhang, J. Bloom, B. Gaudi, et. al.
Fri, 12 Feb 21
47/59

Comments: 14 pages, 8 figures, 3 tables. Submitted to AAS journals. This article supersedes arXiv:2010.04156

PyAutoFit: A Classy Probabilistic Programming Language for Model Composition and Fitting [IMA]

http://arxiv.org/abs/2102.04472


A major trend in academia and data science is the rapid adoption of Bayesian statistics for data analysis and modeling, leading to the development of probabilistic programming languages (PPL). A PPL provides a framework that allows users to easily specify a probabilistic model and perform inference automatically. PyAutoFit is a Python-based PPL which interfaces with all aspects of the modeling (e.g., the model, data, fitting procedure, visualization, results) and therefore provides complete management of the modeling workflow. This includes composing high-dimensional models from individual model components, customizing the fitting procedure and performing data augmentation before a model-fit. Advanced features include database tools for analysing large suites of modeling results and exploiting domain-specific knowledge of a problem via non-linear search chaining. Accompanying PyAutoFit is the autofit workspace (see https://github.com/Jammy2211/autofit_workspace), which includes example scripts and the HowToFit lecture series, which introduces non-experts to model-fitting and provides a guide on how to begin a project using PyAutoFit. Readers can try PyAutoFit right now by going to the introduction Jupyter notebook on Binder (see https://mybinder.org/v2/gh/Jammy2211/autofit_workspace/HEAD) or check out our readthedocs (see https://pyautofit.readthedocs.io/en/latest/) for a complete overview of PyAutoFit's features.

Read this paper on arXiv…

J. Nightingale, R. Hayes and M. Griffiths
Wed, 10 Feb 21
6/64

Comments: Published in the Journal of Open Source Software

Fitting very flexible models: Linear regression with large numbers of parameters [CL]

http://arxiv.org/abs/2101.07256


There are many uses for linear fitting; the context here is interpolation and denoising of data, as when you have calibration data and you want to fit a smooth, flexible function to those data. Or you want to fit a flexible function to de-trend a time series or normalize a spectrum. In these contexts, investigators often choose a polynomial basis, or a Fourier basis, or wavelets, or something equally general. They also choose an order, or number of basis functions to fit, and (often) some kind of regularization. We discuss how this basis-function fitting is done, with ordinary least squares and extensions thereof. We emphasize that it is often valuable to choose far more parameters than data points, despite folk rules to the contrary: Suitably regularized models with enormous numbers of parameters generalize well and make good predictions for held-out data; over-fitting is not (mainly) a problem of having too many parameters. It is even possible to take the limit of infinite parameters, at which, if the basis and regularization are chosen correctly, the least-squares fit becomes the mean of a Gaussian process. We recommend cross-validation as a good empirical method for model selection (for example, setting the number of parameters and the form of the regularization), and jackknife resampling as a good empirical method for estimating the uncertainties of the predictions made by the model. We also give advice for building stable computational implementations.
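A minimal numerical illustration of the main point (many more parameters than data, a sensible regularization, and cross-validation to set it) might look like the following. This is a generic sketch, not the authors' notation or code; the basis and the example function are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# Noisy 'calibration' data: 23 points from a smooth function.
n, noise = 23, 0.1
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(7 * x) + 0.5 * x + noise * rng.normal(size=n)

def fourier_design(x, p):
    """p basis functions: a constant, then sine/cosine pairs of rising frequency."""
    X = np.ones((len(x), p))
    for j in range(1, p):
        k = (j + 1) // 2
        X[:, j] = np.sin(np.pi * k * x) if j % 2 else np.cos(np.pi * k * x)
    return X

def ridge_fit(x, y, p, lam):
    """Regularized least squares; well defined even when p >> len(x)."""
    X = fourier_design(x, p)
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def loo_cv(x, y, p, lam):
    """Leave-one-out cross-validation error, used to set the regularization."""
    err = []
    for i in range(len(x)):
        m = np.ones(len(x), bool); m[i] = False
        beta = ridge_fit(x[m], y[m], p, lam)
        err.append((y[i] - fourier_design(x[i:i + 1], p) @ beta)[0] ** 2)
    return np.mean(err)

p = 201                                  # far more parameters than data points
for lam in (1e-4, 1e-2, 1e0):
    print(f"lambda={lam:7.0e}  LOO MSE = {loo_cv(x, y, p, lam):.4f}")
```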

Read this paper on arXiv…

D. Hogg and S. Villar
Wed, 20 Jan 21
46/61

Comments: all code used to make the figures is available at this https URL

Orbits and masses of binaries from Speckle Interferometry at SOAR [SSA]

http://arxiv.org/abs/2101.04537


We present results from speckle interferometric observations of fifteen visual binaries and one double-line spectroscopic binary, carried out with the HRCam speckle camera of the SOAR 4.1 m telescope. These systems were observed as part of an on-going survey to characterize the binary population in the solar vicinity, out to a distance of 250 parsec.
We obtained orbital elements and mass sums for our sample of visual binaries. The orbits were computed using a Markov Chain Monte Carlo algorithm that delivers maximum likelihood estimates of the parameters, as well as posterior probability density functions that allow us to evaluate their uncertainty. Their periods cover a range from 5 yr to more than 500 yr, and their spectral types go from early A to mid M, implying total system masses from slightly more than 4 MSun down to 0.2 MSun. They are located at distances between approximately 12 and 200 pc, mostly at low Galactic latitude.
For the double-line spectroscopic binary YSC8 we present the first combined astrometric/radial velocity orbit resulting from a self-consistent fit, leading to individual component masses of 0.897 +/- 0.027 MSun and 0.857 +/- 0.026 MSun; and an orbital parallax of 26.61 +/- 0.29 mas, which compares very well with the Gaia DR2 trigonometric parallax (26.55 +/- 0.27 mas).
In combination with published photometry and trigonometric parallaxes, we place our objects on an H-R diagram and discuss their evolutionary status. We also present a thorough analysis of the precision and consistency of the photometry available for them.

Read this paper on arXiv…

R. Mendez, R. Claveria and E. Costa
Wed, 13 Jan 21
51/70

Comments: 28 pages, 10 figures, 1 appendix. Accepted for publication in The Astronomical Journal

Towards the processing, review, and delivery of 80% of the ALMA data by the Joint ALMA Observatory (JAO) [IMA]

http://arxiv.org/abs/2101.03427


After eight observing Cycles, the Atacama Large Millimeter/submillimeter Array (ALMA) is capable of observing in eight different bands (covering a frequency range from 84 to 950 GHz), with 66 antennas and two correlators. For the current Cycle (7), ALMA offers up to 4300 hours for the 12-m array, and 3000 hours on both the 7-m array of the Atacama Compact Array (ACA) and the Total Power (TP) Array, plus 750 hours in a supplemental call. From the customer perspective (i.e., the astronomical community), ALMA is an integrated product service provider: it observes in service mode, then processes and delivers the data obtained. The Data Management Group (DMG) is in charge of the processing, reviewing, and delivery of the ALMA data and consists of approximately 60 experts in data reduction, from the ALMA Regional Centers (ARCs) and the Joint ALMA Observatory (JAO), distributed across fourteen countries. Prior to their delivery, the ALMA data products go through a thorough quality assurance (QA) process, so that astronomers can work on their science without the need for significant additional calibration re-processing. Currently, around 90% of the acquired data is processed with the ALMA pipeline (the so-called pipeline-able data), while the remaining 10% is processed completely manually. The Level-1 Key Performance Indicator set by the Observatory for DMG is that 90% of the pipeline-able data sets (i.e. some 80% of the data sets observed during an observing cycle) must be processed, reviewed and delivered within 30 days of data acquisition. This paper describes the methodology followed by the JAO in order to process nearly 80% of the total data observed during Cycle 7, a giant leap with respect to approximately 30% in Cycle 4 (October 2016 – September 2017).

Read this paper on arXiv…

J. Yus, B. Dent, D. Brisbin, et. al.
Tue, 12 Jan 21
59/90

Comments: 18 pages, 11 figures, SPIE conference on Astronomical Telescopes and Instrumentation

Technique for separating velocity and density contributions in spectroscopic data and its application to studying turbulence and magnetic fields [GA]

http://arxiv.org/abs/2012.15776


Based on the theoretical description of Position-Position-Velocity (PPV) statistics in Lazarian & Pogosyan (2000), we introduce a new technique, the Velocity Decomposition Algorithm (VDA), for separating the contribution of turbulent velocity from density fluctuations. Using MHD turbulence simulations, we demonstrate its promise in recovering the velocity caustics in various physical conditions and, in conjunction with the Velocity Gradient Technique (VGT), its prospects in accurately tracing the magnetic field based on pure velocity fluctuations. Employing the theoretical framework developed in Lazarian & Pogosyan (2004), we find that for localized clouds, the velocity fluctuations are most prominent in the wing part of the spectral line, where they dominate the density fluctuations. The same velocity dominance applies to extended HI regions with galactic rotation. Our numerical experiment demonstrates that velocity channels arising from the cold phase of atomic hydrogen (HI) are still strongly affected by velocity caustics on small scales. We apply the VDA to the HI GALFA-DR2 data corresponding to the high-velocity cloud HVC186+19-114 and to high-latitude galactic diffuse HI data. Our study confirms the crucial role of velocity caustics in forming the linear structures observed within PPV cubes. We discuss the implications of the VDA for both magnetic field studies and for predicting the polarized galactic emission that acts as the foreground for Cosmic Microwave Background (CMB) studies. In addition, we address the controversy related to the nature of the filaments in HI channel maps and explain the importance of velocity caustics in the formation of structures in PPV data cubes. The VDA method will allow astronomers to obtain velocity caustics from almost every piece of spectroscopic PPV data and will allow direct investigation of the turbulent velocity field in observations.
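The separation the VDA aims at can be caricatured in a few lines: remove from a velocity-channel map the component that is linearly correlated with the column-density map and keep the orthogonal residual as the velocity-dominated part. The projection below is only an illustrative construction consistent with the description above, not the exact VDA prescription derived in the paper.

```python
import numpy as np

def remove_density_component(p, I):
    """Split a velocity-channel map p into a part correlated with the column
    density I and an orthogonal residual (illustrative projection only)."""
    dp, dI = p - p.mean(), I - I.mean()
    coeff = (dp * dI).sum() / (dI * dI).sum()
    p_density = coeff * dI            # density-like contribution
    p_velocity = dp - p_density       # residual attributed to velocity caustics
    return p_velocity, p_density

# Toy example: a channel map built from a 'density' field plus an independent
# 'velocity-caustic' pattern.
rng = np.random.default_rng(8)
I = rng.normal(size=(128, 128))
caustics = rng.normal(size=(128, 128))
p = 0.6 * I + caustics
p_v, p_d = remove_density_component(p, I)
print("corr(p_v, I) ~", np.corrcoef(p_v.ravel(), I.ravel())[0, 1])   # ~ 0
```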

Read this paper on arXiv…

K. Yuen, K. Ho and A. Lazarian
Fri, 1 Jan 21
59/103

Comments: 47 pages, 11 sections, 6 appendices, 25 figures in the main text, 7 figures in the appendices. Submitted to ApJ

Simple and statistically sound strategies for analysing physical theories [CL]

http://arxiv.org/abs/2012.09874


Physical theories that depend on many parameters or are tested against data from many different experiments pose unique challenges to parameter estimation. Many models in particle physics, astrophysics and cosmology fall into one or both of these categories. These issues are often sidestepped with very simplistic and statistically unsound ad hoc methods, involving naive intersection of parameter intervals estimated by multiple experiments, and random or grid sampling of model parameters. Whilst these methods are easy to apply, they exhibit pathologies even in low-dimensional parameter spaces, and quickly become problematic to use and interpret in higher dimensions. In this article we give clear guidance for going beyond these rudimentary procedures, suggesting some simple methods for performing statistically sound inference, and recommendations of readily-available software tools and standards that can assist in doing so. Our aim is to provide physicists with recommendations for reaching correct scientific conclusions, with only a modest increase in analysis burden.
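A two-line toy example of the kind of pathology the article warns about: two measurements of the same parameter that are compatible at the 2-sigma level can have non-overlapping 1-sigma intervals, so "intersecting intervals" declares an empty allowed region, while combining the likelihoods behaves sensibly. (Hypothetical numbers, used only for illustration.)

```python
import numpy as np

# Two hypothetical measurements of the same parameter theta: (mean, sigma).
measurements = [(1.0, 0.5), (2.4, 0.5)]

# Naive approach: intersect the individual 1-sigma intervals.
intervals = [(m - s, m + s) for m, s in measurements]
lo, hi = max(i[0] for i in intervals), min(i[1] for i in intervals)
print("intersection of 1-sigma intervals:", "empty" if lo > hi else (lo, hi))

# Statistically sound approach: combine the likelihoods (here Gaussian, so the
# combined estimate is an inverse-variance weighted mean).
w = np.array([1 / s**2 for _, s in measurements])
m = np.array([m for m, _ in measurements])
theta_hat = np.sum(w * m) / np.sum(w)
sigma_hat = np.sqrt(1 / np.sum(w))
print(f"combined likelihood: theta = {theta_hat:.2f} +/- {sigma_hat:.2f}")
```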

Read this paper on arXiv…

S. AbdusSalam, F. Agocs, B. Allanach, et. al.
Mon, 21 Dec 20
4/75

Comments: 10 pages, 3 figures

Improving solar wind forecasting using Data Assimilation [CL]

http://arxiv.org/abs/2012.06362


Data Assimilation (DA) has enabled huge improvements in the skill of terrestrial operational weather forecasting. In this study, we use a variational DA scheme with a computationally efficient solar wind model and in situ observations from STEREO A, STEREO B and ACE. This scheme enables solar-wind observations far from the Sun, such as at 1 AU, to update and improve the inner boundary conditions of the solar wind model (at $30$ solar radii). In this way, observational information can be used to improve estimates of the near-Earth solar wind, even when the observations are not directly downstream of the Earth. This allows improved initial conditions of the solar wind to be passed into forecasting models. To this end, we employ the HUXt solar wind model to produce 27-day forecasts of the solar wind during the operational period of STEREO B (01/11/2007-30/09/2014). At ACE, we compare these DA forecasts to the corotation of STEREO B observations and find that the 27-day RMSEs of the STEREO-B corotation and DA forecasts are comparable. However, the DA forecast is shown to improve solar wind forecasts when STEREO-B's latitude is offset from Earth's. Furthermore, the DA scheme improves the representation of the solar wind over the whole model domain between the Sun and the Earth, which will enable improved forecasting of CME arrival time and speed.

Read this paper on arXiv…

M. Lang, J. Witherington, H. Turner, et. al.
Mon, 14 Dec 20
53/74

Comments: 24 pages, 10 figures, 3 tables, under review in Space Weather journal

Statistical estimates of the pulsar glitch activity [HEAP]

http://arxiv.org/abs/2012.01539


A common way to calculate the glitch activity of a pulsar is an ordinary linear regression of the observed cumulative glitch history. This method, however, is likely to underestimate the errors on the activity, as it implicitly assumes a (long-term) linear dependence between glitch sizes and waiting times, as well as equal variance, i.e. homoscedasticity, in the fit residuals, both assumptions that are not well justified by pulsar data. In this paper, we review the extrapolation of the glitch activity parameter and explore two alternatives: the relaxation of the homoscedasticity hypothesis in the linear fit, and the use of the bootstrap technique. Our main finding is a much larger uncertainty on activity estimates with respect to that obtained with an ordinary linear regression. We discuss how this affects the theoretical upper bound on the moment of inertia associated with the region of a neutron star containing the superfluid reservoir of angular momentum released in a stationary sequence of glitches. We find that this upper bound is less tight if one considers the uncertainty on the activity estimated with the bootstrap method, and allows for models in which the superfluid reservoir is entirely in the crust.
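A small sketch of the bootstrap alternative on a hypothetical glitch history: resample glitches with replacement and refit the cumulative history each time. The paper's actual procedure and data differ, so treat this purely as an illustration of why the bootstrap interval can be much wider than the ordinary-least-squares error suggests.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical glitch history: waiting times (days) and glitch sizes (Hz),
# intentionally non-Gaussian and only loosely related to each other.
dt = rng.exponential(500.0, size=25)
sizes = rng.lognormal(mean=-6.5, sigma=1.0, size=25)

def activity(dt, sizes):
    """Slope of an ordinary linear fit to the cumulative glitch history."""
    return np.polyfit(np.cumsum(dt), np.cumsum(sizes), 1)[0]

a_ols = activity(dt, sizes)

# Bootstrap: resample (waiting time, size) pairs with replacement and refit.
boot = []
for _ in range(5000):
    idx = rng.integers(0, len(dt), len(dt))
    boot.append(activity(dt[idx], sizes[idx]))
boot = np.array(boot)

print(f"activity = {a_ols:.3e} Hz/day, "
      f"bootstrap 68% interval: [{np.percentile(boot, 16):.3e}, "
      f"{np.percentile(boot, 84):.3e}]")
```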

Read this paper on arXiv…

A. Montoli, M. Antonelli, B. Haskell, et. al.
Fri, 4 Dec 20
62/77

Comments: 18 pages, 4 figures, comments welcome

Modeling assembly bias with machine learning and symbolic regression [CEA]

http://arxiv.org/abs/2012.00111


Upcoming 21cm surveys will map the spatial distribution of cosmic neutral hydrogen (HI) over unprecedented volumes. Mock catalogues are needed to fully exploit the potential of these surveys. Standard techniques employed to create these mock catalogues, like the Halo Occupation Distribution (HOD), rely on assumptions such as that the baryonic properties of dark matter halos depend only on their masses. In this work, we use the state-of-the-art magneto-hydrodynamic simulation IllustrisTNG to show that the HI content of halos exhibits a strong dependence on their local environment. We then use machine learning techniques to show that this effect can be 1) modeled by these algorithms and 2) parametrized in the form of novel analytic equations. We provide physical explanations for this environmental effect and show that ignoring it leads to underprediction of the real-space 21-cm power spectrum at $k\gtrsim 0.05$ h/Mpc by $\gtrsim$10\%, which is larger than the expected precision from upcoming surveys on such large scales. Our methodology of combining numerical simulations with machine learning techniques is general, and opens a new direction in modeling and parametrizing the complex physics of assembly bias needed to generate accurate mocks for galaxy and line intensity mapping surveys.

Read this paper on arXiv…

D. Wadekar, F. Villaescusa-Navarro, S. Ho, et. al.
Wed, 2 Dec 20
33/71

Comments: 16 pages, 12 figures. To be submitted to PNAS. Figures 3, 5 and 6 show our main results. Comments are welcome

First optical reconstruction of dust in the region of SNR RX J1713.7-3946 from astrometric Gaia data [HEAP]

http://arxiv.org/abs/2011.14383


The origin of the radiation observed in the region of the supernova remnant (SNR) RX J1713.7-3946, one of the brightest TeV emitters, has been debated since its discovery. The existence of atomic and molecular clouds in this object supports the idea that part of the GeV gamma rays in this region originate from proton-proton collisions. However, the observed column density of gas cannot explain the whole emission. Here we present the results of a novel technique that uses the ESA/Gaia DR2 data to reveal faint gas and dust structures in the region of RX J1713.7-3946 by making use of both astrometric and photometric data. These new structures could be an additional target for cosmic ray protons from the SNR. Our distance-resolved reconstruction of dust extinction towards the SNR indicates the presence of only one faint structure in the vicinity of RX J1713.7-3946. Considering that the SNR is located in a dusty environment, we set the most precise constraint on the SNR distance to date, at ($1.12 \pm 0.01$) kpc.

Read this paper on arXiv…

R. Leike, S. Celli, A. Krone-Martins, et. al.
Tue, 1 Dec 20
25/108

Comments: N/A

Evaluation of investigational paradigms for the discovery of non-canonical astrophysical phenomena [IMA]

http://arxiv.org/abs/2011.10086


Non-canonical phenomena – defined here as observables which are either insufficiently characterized by existing theory, or otherwise represent inconsistencies with prior observations – are of burgeoning interest in the field of astrophysics, particularly due to their relevance as potential signs of past and/or extant life in the universe (e.g. off-nominal spectroscopic data from exoplanets). However, an inherent challenge in investigating such phenomena is that, by definition, they do not conform to existing predictions, thereby making it difficult to constrain search parameters and develop an associated falsifiable hypothesis.
In this Expert Recommendation, the authors evaluate the suitability of two different approaches – conventional parameterized investigation (wherein experimental design is tailored to optimally test a focused, explicitly parameterized hypothesis of interest) and the alternative approach of anomaly searches (wherein broad-spectrum observational data is collected with the aim of searching for potential anomalies across a wide array of metrics) – in terms of their efficacy in achieving scientific objectives in this context. The authors provide guidelines on the appropriate use-cases for each paradigm, and contextualize the discussion through its applications to the interdisciplinary field of technosignatures (a discipline at the intersection of astrophysics and astrobiology), which essentially specializes in searching for non-canonical astrophysical phenomena.

Read this paper on arXiv…

C. Singam, J. Haqq-Misra, A. Balbi, et. al.
Mon, 23 Nov 20
61/63

Comments: A product of the TechnoClimes 2020 conference

Case study on the identification and classification of small-scale flow patterns in flaring active region [SSA]

http://arxiv.org/abs/2011.07634


We propose a novel methodology to identify flows in the solar atmosphere and classify their velocities as either supersonic, subsonic, or sonic. The proposed methodology consists of three parts. First, an algorithm is applied to the Solar Dynamics Observatory (SDO) image data to locate and track flows, resulting in the trajectory of each flow over time. Thereafter, the differential emission measure inversion method is applied to six AIA channels along the trajectory of each flow in order to estimate its background temperature and sound speed. Finally, we classify each flow as supersonic, subsonic, or sonic by performing simultaneous hypothesis tests on whether the velocity bounds of the flow are larger than, smaller than, or equal to the background sound speed. The proposed methodology was applied to the SDO image data from the 171 Å spectral line for 6 March 2012 from 12:22:00 to 12:35:00 and again for 9 March 2012 from 03:00:00 to 03:24:00. Eighteen plasma flows were detected, 11 of which were classified as supersonic, 3 as subsonic, and 3 as sonic at a $70\%$ level of significance. Of these cases, 2 flows cannot be strictly ascribed to one of the respective categories as they change from the subsonic state to supersonic and vice versa; we label them as a subclass of transonic flows. The proposed methodology provides an automatic and scalable solution to identify small-scale flows and to classify their velocities as either supersonic, subsonic, or sonic. We identified and classified small-scale flow patterns in flaring loops. The results show that the flows can be classified into four classes: subsonic, supersonic, transonic, and sonic. The detected flows from AIA images can be analyzed in combination with other high-resolution observational data, such as Hi-C 2.1 data, and be used for the development of theories of the formation of flow patterns.
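The final classification step amounts to comparing a flow's velocity bounds with the sound speed implied by the DEM temperature. The sketch below is a simplified thresholding stand-in for the paper's simultaneous hypothesis tests, and the adiabatic index and mean molecular weight are assumptions, not values taken from the paper.

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_P = 1.672622e-27      # proton mass, kg

def sound_speed(T, gamma=5.0 / 3.0, mu=0.6):
    """Adiabatic sound speed in km/s for an assumed fully ionised plasma."""
    return np.sqrt(gamma * K_B * T / (mu * M_P)) / 1e3

def classify_flow(v_lo, v_hi, T):
    """Classify a flow from its velocity bounds [v_lo, v_hi] (km/s) and the
    background temperature T (K); 'sonic' when the interval straddles c_s."""
    cs = sound_speed(T)
    if v_lo > cs:
        return "supersonic", cs
    if v_hi < cs:
        return "subsonic", cs
    return "sonic (consistent with c_s)", cs

# Example flows: (velocity lower bound, upper bound, DEM-estimated temperature).
for v_lo, v_hi, T in [(220, 260, 1.0e6), (90, 120, 1.5e6), (140, 175, 1.2e6)]:
    label, cs = classify_flow(v_lo, v_hi, T)
    print(f"v = [{v_lo}, {v_hi}] km/s, c_s = {cs:5.1f} km/s -> {label}")
```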

Read this paper on arXiv…

E. Philishvi, B. Shergelashvili, S. Buitendag, et. al.
Tue, 17 Nov 20
17/83

Comments: 13 pages, 7 figures, Accepted for publication in A&A

papaya2: 2D Irreducible Minkowski Tensor computation [CL]

http://arxiv.org/abs/2010.15138


A common challenge in scientific and technical domains is the quantitative description of geometries and shapes, e.g. in the analysis of microscope imagery or astronomical observation data. Frequently, it is desirable to go beyond scalar shape metrics such as porosity and surface-to-volume ratios because the samples are anisotropic or because direction-dependent quantities such as conductances or elasticity are of interest. Minkowski Tensors are a systematic family of versatile and robust higher-order shape descriptors that allow for shape characterization of arbitrary order and promise a path to systematic structure-function relationships for direction-dependent properties. Papaya2 is software to calculate 2D higher-order shape metrics with a library interface, support for Irreducible Minkowski Tensors, and interpolated marching squares. Extensions to Matlab, JavaScript and Python are provided as well. While the tensor of inertia is computed by many tools, we are not aware of other open-source software which provides higher-rank shape characterization in 2D.

Read this paper on arXiv…

F. Schaller, J. Wagner and S. Kapfer
Fri, 30 Oct 20
69/74

Comments: 5 pages, 3 figures, published in the Journal of Open Source Software, code available at this https URL

Towards Bayesian Data Compression [CL]

http://arxiv.org/abs/2010.10375


In order to handle the large data sets omnipresent in modern science, efficient compression algorithms are necessary. There exist general-purpose lossless and lossy compression algorithms suited for different situations. Here, a Bayesian data compression (BDC) algorithm that adapts to the specific data set is derived. BDC compresses a data set under conservation of its posterior structure with minimal information loss, given the prior knowledge on the signal, the quantity of interest. BDC works hand in hand with the signal reconstruction from the data. Its basic form is valid for Gaussian priors and likelihoods. This generalizes to non-linear settings with the help of Metric Gaussian Variational Inference. BDC requires the storage of effective instrument response functions for the compressed data and the corresponding noise encoding the posterior covariance structure. Their memory demand counteracts the compression gain. To improve this, sparsity of the compressed responses can be enforced by separating the data into patches and compressing them separately. The applicability of our method is demonstrated by applying it to synthetic data and radio astronomical data. Still, the algorithm needs to be improved further, as the computation time of the compression exceeds the time of the inference with the original data.

Read this paper on arXiv…

J. Harth-Kitzerow, R. Leike, P. Arras, et. al.
Wed, 21 Oct 20
37/79

Comments: 31 pages, 15 figures, 1 table, for code, see this https URL

MatDRAM: A pure-MATLAB Delayed-Rejection Adaptive Metropolis-Hastings Markov Chain Monte Carlo Sampler [CL]

http://arxiv.org/abs/2010.04190


Markov Chain Monte Carlo (MCMC) algorithms are widely used for stochastic optimization, sampling, and integration of mathematical objective functions, in particular in the context of Bayesian inverse problems and parameter estimation. For decades, the algorithm of choice in MCMC simulations has been the Metropolis-Hastings (MH) algorithm. An advancement over the traditional MH-MCMC sampler is the Delayed-Rejection Adaptive Metropolis (DRAM) algorithm. In this paper, we present MatDRAM, a stochastic optimization, sampling, and Monte Carlo integration toolbox in MATLAB which implements a variant of the DRAM algorithm for exploring mathematical objective functions of arbitrary dimension, in particular the posterior distributions of Bayesian models in data science, Machine Learning, and scientific inference. The design goals of MatDRAM include nearly full automation of MCMC simulations, user-friendliness, fully-deterministic reproducibility, and the restart functionality of simulations. We also discuss the implementation details of a technique to automatically monitor and ensure the diminishing adaptation of the proposal distribution of the DRAM algorithm, and a method of efficiently storing the resulting simulated Markov chains. The MatDRAM library is open-source, MIT-licensed, and permanently located and maintained as part of the ParaMonte library at https://github.com/cdslaborg/paramonte.

Read this paper on arXiv…

S. Kumbhare and A. Shahmoradi
Mon, 12 Oct 20
22/59

Comments: N/A

Automating Inference of Binary Microlensing Events with Neural Density Estimation [IMA]

http://arxiv.org/abs/2010.04156


Automated inference of binary microlensing events with traditional sampling-based algorithms such as MCMC has been hampered by the slowness of the physical forward model and the pathological likelihood surface. Current analysis of such events requires both expert knowledge and large-scale grid searches to locate the approximate solution as a prerequisite to MCMC posterior sampling. As the next-generation, space-based microlensing survey with the Roman Space Telescope is expected to yield thousands of binary microlensing events, a new scalable and automated approach is desired. Here, we present an automated inference method based on neural density estimation (NDE). We show that the NDE trained on simulated Roman data not only produces fast, accurate, and precise posteriors but also captures expected posterior degeneracies. A hybrid NDE-MCMC framework can further be applied to produce the exact posterior.

Read this paper on arXiv…

K. Zhang, J. Bloom, B. Gaudi, et. al.
Fri, 9 Oct 20
30/64

Comments: 7 pages, 1 figure. Submitted to the ML4PS workshop at NeurIPS 2020

Mean Estimate Distances for Galaxies with Multiple Estimates in NED-D [CEA]

http://arxiv.org/abs/2010.02997


Numerous research topics rely on an improved cosmic distance scale (e.g., cosmology, gravitational waves), and the NASA/IPAC Extragalactic Database of Distances (NED-D) supports those efforts by tabulating multiple redshift-independent distances for 12,000 galaxies (e.g., Large Magellanic Cloud (LMC) zero-point). Six methods for securing a mean estimate distance (MED) from the data are presented (e.g., indicator and Decision Tree). All six MEDs yield surprisingly consistent distances for the cases examined, including for the key benchmark LMC and M106 galaxies. The results underscore the utility of the NED-D MEDs in bolstering the cosmic distance scale and facilitating the identification of systematic trends.

Read this paper on arXiv…

I. Steer
Thu, 8 Oct 20
14/54

Comments: 18 pages, 5 figures, 6 tables, published in The Astronomical Journal, October 7, 2020

Estimating longterm power spectral densities in AGN from simulations [HEAP]

http://arxiv.org/abs/2010.01038


The power spectral density (PSD) is a key property quantifying the stochastic, random-noise-type fluctuations in variable sources like Active Galactic Nuclei (AGN). In recent years, estimates of the PSD have been refined by improvements in both the quality of observed lightcurves and the simulations used to model them. This has aided in quantifying the variability, including evaluating the significance of quasi-periodic oscillations. A central assumption in making such estimates is that of weak non-stationarity. This is violated for sources with a power-law PSD index steeper than one, as the integral power diverges. As a consequence, estimates of the flux probability density function (PDF) and the PSD are interlinked. In general, evaluating the parameters of both properties from lightcurves requires a multi-dimensional, multi-parameter model, which is complex and computationally expensive, as well as harder to constrain and interpret. However, if we only wish to compute the PSD index, as is often the case, we can use a simpler model. We explore a bending power-law model, instead of a simple power law, as input to time-series simulations to test the quality of reconstruction. Examining the long-term variability of the classical blazar Mrk 421, extending to multiple years as is typical of Fermi-LAT or Swift-BAT lightcurves, we find that a transition from pink (PSD index one) to white noise at a characteristic timescale, $t_b \sim 500-1000$ years, comparable to the viscous timescale at the disk truncation radius, seems to provide a good model for simulations. This is both a physically motivated and a computationally efficient model that can be used to compute the PSD index.
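For concreteness, a standard Timmer & Koenig (1995)-style generator with the bending power-law shape described above (white below the bend frequency, index-one red noise above it) can be written as follows. Normalization, the mean flux, and the handling of the Nyquist bin beyond a token fix are left out, so this is only a sketch, not the simulation setup used in the paper.

```python
import numpy as np

def bending_power_law(f, f_b, index=1.0):
    """Flat (white) below the bend frequency f_b, ~f^-index above it."""
    return 1.0 / (1.0 + (f / f_b) ** index)

def simulate_lightcurve(psd, n, dt, rng):
    """Timmer & Koenig-style Gaussian light curve with the given PSD shape."""
    freqs = np.fft.rfftfreq(n, dt)[1:]
    amp = np.sqrt(0.5 * psd(freqs))
    noise = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    spectrum = np.concatenate([[0.0], amp * noise])
    if n % 2 == 0:
        spectrum[-1] = spectrum[-1].real        # Nyquist bin must be real
    return np.fft.irfft(spectrum, n)

rng = np.random.default_rng(10)
dt, n = 1.0, 4096                               # e.g. a daily-binned lightcurve
lc = simulate_lightcurve(lambda f: bending_power_law(f, f_b=1e-2), n, dt, rng)
print(lc.shape, round(float(lc.std()), 3))
```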

Read this paper on arXiv…

N. Chakraborty and F. Rieger
Mon, 5 Oct 20
12/61

Comments: 14 pages, 6 figures, Submitted

Deep Forest: Neural Network reconstruction of the Lyman-alpha forest [CEA]

http://arxiv.org/abs/2009.10673


We explore the use of Deep Learning to infer physical quantities from the observable transmitted flux in the Lyman-alpha forest. We train a Neural Network using redshift z=3 outputs from cosmological hydrodynamic simulations and mock datasets constructed from them. We evaluate how well the trained network is able to reconstruct the optical depth for Lyman-alpha forest absorption from noisy and often saturated transmitted flux data. The Neural Network outperforms an alternative reconstruction method involving log inversion and spline interpolation by approximately a factor of 2 in the optical depth root mean square error. We find no significant dependence of the improvement on the input data signal-to-noise, although the gain is greatest in high optical depth regions. The Lyman-alpha forest optical depth studied here serves as a simple, one-dimensional example, but the use of Deep Learning and simulations to approach the inverse problem in cosmology could be extended to other physical quantities and higher-dimensional data.
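The baseline the network is compared against is straightforward to write down: invert $F = e^{-\tau}$ where the flux is usable and bridge saturated or noisy pixels with a spline. The snippet below is that baseline in toy form; the flux threshold and the toy spectrum are arbitrary choices, not values from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def tau_log_inversion(wavelength, flux, f_min=0.05):
    """Baseline reconstruction of the Lyman-alpha optical depth:
    tau = -ln(F) where the flux is usable; saturated / noisy pixels
    (F < f_min) are replaced by a cubic-spline interpolation across them."""
    good = flux > f_min
    tau = np.full_like(flux, np.nan)
    tau[good] = -np.log(flux[good])
    spline = CubicSpline(wavelength[good], tau[good])
    tau[~good] = spline(wavelength[~good])
    return tau

# Toy spectrum: smooth 'true' optical depth, flux with noise and saturation.
rng = np.random.default_rng(11)
lam = np.linspace(0.0, 1.0, 500)
tau_true = np.clip(1.5 + np.sin(12 * lam) + 1.2 * np.sin(31 * lam + 1.0), 0, None)
flux = np.exp(-tau_true) + 0.02 * rng.standard_normal(lam.size)
tau_rec = tau_log_inversion(lam, flux)
print("log-inversion RMSE on the toy spectrum:",
      round(float(np.sqrt(np.mean((tau_rec - tau_true) ** 2))), 2))
```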

Read this paper on arXiv…

L. Huang, R. Croft and H. Arora
Wed, 23 Sep 20

Comments: 10 pages, 7 figures, submitted to MNRAS. Code and data used at this https URL

Assessment of Efficiency, Impact Factor, Impact of Probe Mass, Probe Life Expectancy, and Reliability of Mars Missions [IMA]

http://arxiv.org/abs/2009.08534


Mars is the next frontier after the Moon for space explorers to demonstrate the extent of human expedition and technology beyond low-Earth orbit. Government space agencies as well as the private space sector are working extensively toward a better space enterprise. Focusing on the goal of reaching Mars with robotic satellites, we interpret several significant mission parameters, such as the proportionality of mission attempts, the efficiency and reliability of Mars probes, the impact and impact factor of mass on mission duration, the time lag between consecutive mission attempts, and the probe life and transitional region, employing various mathematical analyses. We discuss the importance of these parameters for prospective mission accomplishment. The novel result of this paper is a relation showing that probe mass adversely affects mission duration. Applying this relation, we also estimate the probe life expectancy for upcoming missions.

Read this paper on arXiv…

M. M and R. Annavarapu
Mon, 21 Sep 20

Comments: 14 Pages, 09 Figures, 10 Tables. This Study is Performed at Department of Physics, Pondicherry University, R.V. Nagar, Kalapet, Puducherry – 605 014, India

Extracting the Subhalo Mass Function from Strong Lens Images with Image Segmentation [CEA]

http://arxiv.org/abs/2009.06639


Detecting substructure within strongly lensed images is a promising route to shed light on the nature of dark matter. It is a challenging task which traditionally requires detailed lens modeling and source reconstruction, taking weeks to analyze each system. We use machine learning to circumvent the need for lens and source modeling and develop a method to both locate subhalos in an image and determine their mass, using the technique of image segmentation. The network is trained on images with a single subhalo located near the Einstein ring. Training in this way allows the network to learn the gravitational lensing of light, and it is then able to accurately detect entire populations of substructure, even far from the Einstein ring. In images with a single subhalo and without noise, the network detects subhalos of mass $10^6 M_{\odot}$ 62% of the time, and 78% of these detected subhalos are predicted in the correct mass bin. The detection accuracy increases for heavier masses. When random noise at the level of 1% of the mean brightness of the image is included (which is a realistic approximation for HST, for sources brighter than magnitude 20), the network loses sensitivity to the low-mass subhalos; with noise, the $10^{8.5}M_{\odot}$ subhalos are detected 86% of the time, but the $10^8 M_{\odot}$ subhalos are only detected 38% of the time. The false-positive rate is around 2 false subhalos per 100 images with and without noise, coming mostly from masses $\leq10^8 M_{\odot}$. With good accuracy and a low false-positive rate, counting the number of pixels assigned to each subhalo class over multiple images allows for a measurement of the subhalo mass function (SMF). When measured over five mass bins from $10^8 M_{\odot}$ to $10^{10} M_{\odot}$, the SMF slope is recovered with an error of 14.2 (16.3)% for 10 images, and this improves to 2.1 (2.6)% for 1000 images without (with 1%) noise.

Read this paper on arXiv…

B. Ostdiek, A. Rivero and C. Dvorkin
Wed, 16 Sep 20
-1610/74

Comments: 23 + 5 pages, 12 + 2 figures

Model Dependence of Bayesian Gravitational-Wave Background Statistics for Pulsar Timing Arrays [IMA]

http://arxiv.org/abs/2009.05143


Pulsar timing array (PTA) searches for a gravitational-wave background (GWB) typically include time-correlated “red” noise models intrinsic to each pulsar. Using a simple simulated PTA dataset with an injected GWB signal we show that the details of the red noise models used, including the choice of amplitude priors and even which pulsars have red noise, have a striking impact on the GWB statistics, including both upper limits and estimates of the GWB amplitude. We find that the standard use of uniform priors on the red noise amplitude leads to 95% upper limits, as calculated from one-sided Bayesian credible intervals, that are less than the injected GWB amplitude 50% of the time. In addition, amplitude estimates of the GWB are systematically lower than the injected value by 10-40%, depending on which models and priors are chosen for the intrinsic red noise. We tally the effects of model and prior choice and demonstrate how a “dropout” model, which allows flexible use of red noise models in a Bayesian approach, can improve GWB estimates throughout.
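
For context, the intrinsic red noise discussed here is commonly modelled in PTA analyses as a power-law power spectral density of the timing residuals; the convention below (a reference frequency of 1/yr and the $12\pi^{2}$ normalisation) is the one widely used in the PTA literature and is quoted as background rather than taken from this paper:

$$ P(f) = \frac{A_{\rm red}^{2}}{12\pi^{2}}\left(\frac{f}{f_{\rm yr}}\right)^{-\gamma}{\rm yr}^{3} . $$

Under this parameterisation, a uniform prior on $A_{\rm red}$ concentrates prior mass at large amplitudes relative to a log-uniform prior, which is one route by which the prior choice can propagate into the GWB upper limits and amplitude estimates described above.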

Read this paper on arXiv…

J. Hazboun, J. Simon, X. Siemens, et. al.
Mon, 14 Sep 20
-1584/54

Comments: 18 pages, 5 figures

Measuring the spectral index of turbulent gas with deep learning from projected density maps [GA]

http://arxiv.org/abs/2008.11287


Turbulence plays a key role in star formation in molecular clouds, affecting star cluster primordial properties. As modelling present-day objects hinges on our understanding of their initial conditions, better constraints on turbulence can result in windfalls in Galactic archaeology, star cluster dynamics and star formation. Observationally, constraining the spectral index of turbulent gas usually involves computing spectra from velocity maps. Here we suggest that information on the spectral index might be directly inferred from column density maps (possibly obtained by dust emission/absorption) through deep learning. We generate mock density maps from a large set of adaptive mesh refinement turbulent gas simulations using the hydro-simulation code RAMSES. We train a convolutional neural network (CNN) on the resulting images to predict the turbulence index, optimize hyper-parameters in validation and test on a holdout set. Our adopted CNN model achieves a mean squared error of 0.024 in its predictions on our holdout set, over underlying spectral indexes ranging from 3 to 4.5. We also perform robustness tests by applying our model to altered holdout set images, and to images obtained by running simulations at different resolutions. This preliminary result on simulated density maps encourages further developments on real data, where observational biases and other issues need to be taken into account.
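
As an illustration of the regression task involved, the following is a minimal Python/Keras sketch of a CNN that maps a single-channel projected density map to a scalar spectral index; the 64x64 input size, layer widths and toy training data are placeholder assumptions, not the architecture or data used by the authors:

    import numpy as np
    from tensorflow.keras import layers, models

    # Toy stand-in for projected density maps and their turbulence spectral indexes.
    x_train = np.random.rand(256, 64, 64, 1).astype("float32")
    y_train = np.random.uniform(3.0, 4.5, size=(256, 1)).astype("float32")

    # Small CNN regressor: convolutional blocks -> global pooling -> scalar output.
    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),  # predicted spectral index
    ])
    model.compile(optimizer="adam", loss="mse")  # mean squared error, the metric quoted in the abstract
    model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)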

Read this paper on arXiv…

P. Trevisan, M. Pasquato, A. Ballone, et. al.
Thu, 27 Aug 20
-1257/52

Comments: 7 pages, 7 figures, 1 table

Effects of Solar Activity on Taylor Scale and Correlation Scale in Solar Wind Magnetic Fluctuations [CL]

http://arxiv.org/abs/2008.08542


The correlation scale and the Taylor scale are evaluated for interplanetary magnetic field fluctuations from the two-point, single-time correlation function using the Advanced Composition Explorer (ACE), Wind, and Cluster spacecraft data during the time period from 2001 to 2017, which covers more than an entire solar cycle. The correlation scale and the Taylor scale are each compared with the sunspot number to investigate the effects of solar activity on the structure of the plasma turbulence. Our studies show that the Taylor scale increases with increasing sunspot number, which indicates that the Taylor scale is positively correlated with the energy cascade rate; the correlation coefficient between the sunspot number and the Taylor scale is 0.92. However, these results are not consistent with traditional hydrodynamic dissipation theories. One possible explanation is that in the solar wind, the fluid approximation fails at spatial scales near the dissipation range. Therefore, traditional hydrodynamic turbulence theory is incomplete for describing the physical nature of solar wind turbulence, especially at spatial scales near the kinetic dissipation scales.
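
For reference, the two scales are conventionally extracted from the normalised two-point correlation function $R(r)$ as, for example,

$$ \lambda_{C} = \int_{0}^{\infty}\frac{R(r)}{R(0)}\,dr , \qquad \frac{R(r)}{R(0)} \approx 1 - \frac{r^{2}}{2\lambda_{T}^{2}} \quad (r \to 0), $$

i.e. the correlation scale is an integral (or e-folding) scale of $R(r)$ and the Taylor scale is set by the curvature of $R(r)$ at zero lag, usually obtained from a parabolic fit at small lags. These are standard turbulence definitions quoted here only as background; conventions differ between references by factors of order unity, and the paper's exact fitting procedure is not reproduced.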

Read this paper on arXiv…

G. Zhou, H. He and W. Wan
Thu, 20 Aug 20
-1092/48

Comments: Published in ApJL

BAT.jl — A Julia-based tool for Bayesian inference [CL]

http://arxiv.org/abs/2008.03132


We describe the development of a multi-purpose software for Bayesian statistical inference, BAT.jl, written in the Julia language. The major design considerations and implemented algorithms are summarized here, together with a test suite that ensures the proper functioning of the algorithms. We also give an extended example from the realm of physics that demonstrates the functionalities of BAT.jl.

Read this paper on arXiv…

O. Schulz, F. Beaujean, A. Caldwell, et. al.
Mon, 10 Aug 20
-773/53

Comments: N/A

Artificial intelligence and quasar absorption system modelling; application to fundamental constants at high redshift [CEA]

http://arxiv.org/abs/2008.02583


Exploring the possibility that fundamental constants of Nature might vary temporally or spatially constitutes one of the key science drivers for the European Southern Observatory’s ESPRESSO spectrograph on the VLT and for the HIRES spectrograph on the ELT. High-resolution spectra of quasar absorption systems permit accurate measurements of fundamental constants out to high redshifts. The quality of new data demands completely objective and reproducible methods. We have developed a new fully automated Artificial Intelligence-based method capable of deriving optimal models of even the most complex absorption systems known. The AI structure is built around VPFIT, a well-developed and extensively-tested non-linear least-squares code. The new method forms a sophisticated parallelised system, eliminating human decision-making and hence bias. Here we describe the workings of such a system and apply it to synthetic spectra, in doing so establishing methods of importance for future analyses of VLT and ELT data. The results show that modelling line broadening for high-redshift absorption components should include both thermal and turbulent components. Failing to do so means it is easy to derive the wrong model and hence incorrect parameter estimates. We also argue that model non-uniqueness can be significant, such that it is not feasible to expect to derive an unambiguous estimate of the fine structure constant alpha from one or a small number of measurements. No matter how optimal the modelling method, it is a fundamental requirement to use a large sample of measurements to meaningfully constrain temporal or spatial alpha variation.

Read this paper on arXiv…

C. Lee, J. Webb, R. Carswell, et. al.
Fri, 7 Aug 20
-741/46

Comments: Submitted to MNRAS

Regional study of Europa's photometry [EPA]

http://arxiv.org/abs/2007.11445


The surface of Europa is geologically young and shows signs of current activity. Studying it from a photometric point of view gives us insight into its physical state. We used a collection of 57 images from Voyager’s Imaging Science System and New Horizons’ LOng Range Reconnaissance Imager, for which we corrected the geometric metadata and projected every pixel to compute photometric information (reflectance and geometry of observation). We studied 20 areas scattered across the surface of Europa and characterized their photometric behavior using the Hapke radiative transfer model within a Bayesian framework in order to estimate their microphysical state. We found that most of them were consistent with the bright backscattering behavior of Europa, already observed at a global scale, indicating the presence of grains matured by space weathering. However, we identified very bright areas showing a narrow forward scattering, possibly indicating the presence of fresh deposits that could be attributed to recent cryovolcanism or jets. Overall, we showed that the photometry of Europa’s surface is more diverse than previously thought, and so is its microphysical state.

Read this paper on arXiv…

I. Belgacem, F. Schmidt and G. Jonniaux
Thu, 23 Jul 20
-436/83

Comments: 46 pages, 17 figures, 3 tables

A comparison of g(1)(τ), g(3/2)(τ), and g(2)(τ), for radiation from harmonic oscillators in Brownian motion with coherent background [CL]

http://arxiv.org/abs/2007.06470


We compare the field-field g(1)(\tau), intensity-field g(3/2)(\tau), and intensity-intensity g(2)(\tau) correlation functions for models that are of relevance in astrophysics. We obtain expressions for the general case of a chaotic radiation, where the amplitude is Rician based on a model with an ensemble of harmonic oscillators in Brownian motion. We obtain the signal to noise ratios for two methods of measurement. The intensity-field correlation function signal to noise ratio scales with the first power of |g(1)(\tau)|. This is in contrast with the well-established result of g(2)(\tau) which goes as the square of |g(1)(\tau)|.

Read this paper on arXiv…

A. Siciak, L. Orozco, M. Fouché, et. al.
Tue, 14 Jul 20
-162/97

Comments: 23 pages, 2 figures, 3 Tables

Detection of Gravitational Waves Using Bayesian Neural Networks [IMA]

http://arxiv.org/abs/2007.04176


We propose a new model of Bayesian Neural Networks to not only detect the events of compact binary coalescence in the observational data of gravitational waves (GW) but also identify the time periods of the associated GW waveforms before the events. This is achieved by incorporating the Bayesian approach into the CLDNN classifier, which integrates the Convolutional Neural Network (CNN) and the Long Short-Term Memory Recurrent Neural Network (LSTM). Our model successfully detects all seven BBH events in the LIGO Livingston O2 data, with the periods of their GW waveforms correctly labeled. The ability of a Bayesian approach to estimate uncertainty enables a newly defined `awareness’ state for recognizing the possible presence of signals of unknown types, which would otherwise be rejected in a non-Bayesian model. Such data chunks labeled with the awareness state can then be further investigated rather than overlooked. Performance tests show that our model recognizes 90% of the events when the optimal signal-to-noise ratio $\rho_\text{opt} >7$ (100% when $\rho_\text{opt} >8.5$) and successfully labels more than 95% of the waveform periods when $\rho_\text{opt} >8$. The latency between the arrival of the peak signal and the generation of an alert with the associated waveform period labeled is only about 20 seconds for an unoptimized code on a moderate GPU-equipped personal computer. This makes nearly real-time detection possible with our model, as well as forecasting of coalescence events when assisted by deeper training on a larger dataset using state-of-the-art HPCs.

Read this paper on arXiv…

Y. Lin and J. Wu
Thu, 9 Jul 20
0/70

Comments: 15 pages, 13 figures

Cycle-StarNet: Bridging the gap between theory and data by leveraging large datasets [SSA]

http://arxiv.org/abs/2007.03109


Spectroscopy provides an immense amount of information on stellar objects, and this field continues to grow with recent developments in multi-object data acquisition and rapid data analysis techniques. Current automated methods for analyzing spectra are either (a) data-driven models, which require large amounts of data with prior knowledge of stellar parameters and elemental abundances, or (b) based on theoretical synthetic models that are susceptible to the gap between theory and practice. In this study, we present a hybrid generative domain adaptation method to turn simulated stellar spectra into realistic spectra, learning from the large spectroscopic surveys. We use a neural network to emulate computationally expensive stellar spectra simulations, and then train a separate unsupervised domain-adaptation network that learns to relate the generated synthetic spectra to observational spectra. Consequently, the network essentially produces data-driven models without the need for a labeled training set. As a proof of concept, two case studies are presented. The first is the auto-calibration of synthetic models without using any standard stars. To accomplish this, synthetic models are morphed into spectra that resemble observations, thereby reducing the gap between theory and observations. The second case study is the identification of the elemental source of missing spectral lines in the synthetic modelling. These sources are predicted by interpreting the differences between the domain-adapted and original spectral models. To test our ability to identify missing lines, we use a mock dataset and show that, even with noisy observations, absorption lines can be recovered when they are absent in one of the domains. While we focus on spectral analyses in this study, this method can be applied to other fields that use large data sets and are currently limited by modelling accuracy.

Read this paper on arXiv…

T. O’Briain, Y. Ting, S. Fabbro, et. al.
Wed, 8 Jul 20
37/77

Comments: 20 pages, 11 figures, 1 table, submitted ApJ. A companion 4-page preview is accepted to the ICML 2020 Machine Learning Interpretability for Scientific Discovery workshop. The code used in this study is made publicly available on github: this https URL

The GALAH survey: Characterization of emission-line stars with spectral modelling using autoencoders [SSA]

http://arxiv.org/abs/2006.03062


We present a neural network autoencoder structure that is able to extract essential latent spectral features from observed spectra and then reconstruct a spectrum from those features. Because it is trained on a set of unpeculiar spectra, the network is able to reproduce a spectrum of high signal-to-noise ratio that does not show any spectral peculiarities, even if they are present in an observed spectrum. Spectra generated in this manner were used to identify various emission features among spectra acquired by multiple surveys using the HERMES spectrograph at the Anglo-Australian telescope. Emission features were identified by a direct comparison of the observed and generated spectra. Using the described comparison procedure, we discovered 10,364 candidate spectra with a varying degree of H$\alpha$/H$\beta$ emission component produced by different physical mechanisms. A fraction of those spectra belong to repeated observations that show temporal variability in their emission profiles. Among the emission spectra, we find objects featuring contributions from nearby rarefied gas, identified through the emission of [NII] and [SII] lines in 4004 spectra, not all of which were also identified as having H$\alpha$ emission. Positions of the identified emission-line objects coincide with multiple known regions that harbour young stars. Similarly, the detected nebular emission spectra coincide with visually prominent nebular clouds observable in the red all-sky photographic composites.

Read this paper on arXiv…

K. Čotar, T. Zwitter, G. Traven, et. al.
Mon, 8 Jun 20
10/57

Comments: 14+5 pages, 18 figures, 3 tables, 1 catalogue, submitted to MNRAS

Nested sampling cross-checks using order statistics [CL]

http://arxiv.org/abs/2006.03371


Nested sampling (NS) is an invaluable tool in data analysis in modern astrophysics, cosmology, gravitational wave astronomy and particle physics. We identify a previously unused property of NS related to order statistics: the insertion indexes of new live points into the existing live points should be uniformly distributed. This observation enables a novel cross-check of single NS runs. The tests can detect when an NS run has failed to sample new live points from the constrained prior, and when there are plateaus in the likelihood function; both break an assumption of NS and thus lead to unreliable results. We applied our cross-check to NS runs on toy functions with known analytic results in 2 – 50 dimensions, showing that our approach can detect problematic runs on a variety of likelihoods, settings and dimensions. As an example of a realistic application, we cross-checked NS runs performed in the context of cosmological model selection. Since the cross-check is simple, we recommend that it become a mandatory test for every applicable NS run.
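
The core of the cross-check is straightforward to reproduce: under correct NS sampling, the insertion index of each new live point among the current $n_{\rm live}$ live points should be uniformly distributed. A minimal Python sketch of such a test, using a Kolmogorov-Smirnov comparison against the uniform distribution (the paper's exact test statistic and its handling of varying live-point counts may differ), is:

    import numpy as np
    from scipy import stats

    def insertion_index_test(insertion_indexes, n_live):
        # insertion_indexes: integer indexes in [0, n_live - 1] recorded during the run
        # n_live: number of live points (assumed constant here)
        u = (np.asarray(insertion_indexes) + 0.5) / n_live  # map indexes to (0, 1)
        return stats.kstest(u, "uniform")

    # Example: a well-behaved run (uniform indexes) vs. one stuck near the prior edge.
    rng = np.random.default_rng(0)
    good = rng.integers(0, 500, size=5000)
    bad = rng.integers(0, 50, size=5000)      # indexes piled up at low values
    print(insertion_index_test(good, 500))    # large p-value expected
    print(insertion_index_test(bad, 500))     # tiny p-value: the cross-check fails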

Read this paper on arXiv…

A. Fowlie, W. Handley and L. Su
Mon, 8 Jun 20
47/57

Comments: 8 pages, 1 figure

A Multilevel Empirical Bayesian Approach to Estimating the Unknown Redshifts of 1366 BATSE Catalog Long-Duration Gamma-Ray Bursts [HEAP]

http://arxiv.org/abs/2006.01157


We present a catalog of probabilistic redshift estimates for 1366 individual Long-duration Gamma-Ray Bursts (LGRBs) detected by the Burst And Transient Source Experiment (BATSE). This result is based on a careful selection and modeling of the population distribution of 1366 BATSE LGRBs in the 5-dimensional space of redshift and the four intrinsic prompt gamma-ray emission properties: the isotropic 1024ms peak luminosity, the total isotropic emission, the spectral peak energy, as well as the intrinsic duration, while carefully taking into account the effects of sample incompleteness and the LGRB-detection mechanism of BATSE. Two fundamental plausible assumptions underlie our purely-probabilistic approach: 1. LGRBs trace, either exactly or closely, the Cosmic Star Formation Rate and 2. the joint 4-dimensional distribution of the aforementioned prompt gamma-ray emission properties is well-described by a multivariate log-normal distribution.
Our modeling approach enables us to constrain the redshifts of individual BATSE LGRBs to within $0.36$ and $0.96$ average uncertainty ranges at $50\%$ and $90\%$ confidence levels, respectively. Our redshift predictions are completely at odds with the previous redshift estimates of BATSE LGRBs that were computed via the proposed phenomenological high-energy relations, specifically, the apparently-strong correlation of LGRBs’ peak luminosity with the spectral peak energy, lightcurve variability, and the spectral lag. The observed discrepancies between our predictions and the previous works can be explained by the strong influence of detector threshold and sample-incompleteness in shaping these phenomenologically-proposed high-energy correlations in the literature.

Read this paper on arXiv…

J. Osborne, A. Shahmoradi and R. Nemiroff
Wed, 3 Jun 20
10/83

Comments: 53 pages, 8 figures, 2 tables, submitted to the Astrophysical Journal. This article is a continuation of and builds upon arXiv:1903.06989 [astro-ph.HE]

Data Analysis Recipes: Products of multivariate Gaussians in Bayesian inferences [CL]

http://arxiv.org/abs/2005.14199


A product of two Gaussians (or normal distributions) is another Gaussian. That’s a valuable and useful fact! Here we use it to derive a refactoring of a common product of multivariate Gaussians: The product of a Gaussian likelihood times a Gaussian prior, where some or all of those parameters enter the likelihood only in the mean and only linearly. That is, a linear, Gaussian, Bayesian model. This product of a likelihood times a prior pdf can be refactored into a product of a marginalized likelihood (or a Bayesian evidence) times a posterior pdf, where (in this case) both of these are also Gaussian. The means and variance tensors of the refactored Gaussians are straightforward to obtain as closed-form expressions; here we deliver these expressions, with discussion. The closed-form expressions can be used to speed up and improve the precision of inferences that contain linear parameters with Gaussian priors. We connect these methods to inferences that arise frequently in physics and astronomy.
If all you want is the answer, the question is posed and answered at the beginning of Section 3. We show two toy examples, in the form of worked exercises, in Section 4. The solutions, discussion, and exercises in this Note are aimed at someone who is already familiar with the basic ideas of Bayesian inference and probability.
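
The elementary identity behind the refactoring is the textbook Gaussian product rule, quoted here for convenience (the paper treats the more general case in which parameters enter the likelihood mean linearly):

$$ \mathcal{N}(x\mid a, A)\,\mathcal{N}(x\mid b, B) = \mathcal{N}(a\mid b, A+B)\,\mathcal{N}(x\mid c, C), \qquad C = \left(A^{-1}+B^{-1}\right)^{-1}, \quad c = C\left(A^{-1}a + B^{-1}b\right), $$

where the first factor on the right-hand side plays the role of the marginalized likelihood (evidence) and the second is the posterior, both of which are again Gaussian.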

Read this paper on arXiv…

D. Hogg, A. Price-Whelan and B. Leistedt
Mon, 1 Jun 20
3/50

Comments: a chapter of a book we will never write

Noise Reduction in Gravitational-wave Data via Deep Learning [IMA]

http://arxiv.org/abs/2005.06534


With the advent of gravitational wave astronomy, techniques to extend the reach of gravitational wave detectors are desired. In addition to the stellar-mass black hole and neutron star mergers already detected, many more are below the surface of the noise, available for detection if the noise is reduced enough. Our method (DeepClean) applies machine learning algorithms to gravitational wave detector data and data from on-site sensors monitoring the instrument to reduce the noise in the time-series due to instrumental artifacts and environmental contamination. This framework is generic enough to subtract linear, non-linear, and non-stationary coupling mechanisms. It may also provide handles in learning about the mechanisms which are not currently understood to be limiting detector sensitivities. The robustness of the noise reduction technique in its ability to efficiently remove noise with no unintended effects on gravitational-wave signals is also addressed through software signal injection and parameter estimation of the recovered signal. It is shown that the optimal SNR of the injected signal is enhanced by $\sim 21.6\%$ and the recovered parameters are consistent with the injected set. We present the performance of this algorithm on linear and non-linear noise sources and discuss its impact on astrophysical searches by gravitational wave detectors.

Read this paper on arXiv…

R. Ormiston, T. Nguyen, M. Coughlin, et. al.
Fri, 15 May 20
46/65

Comments: 12 pages, 7 figures

ECoPANN: A Framework for Estimating Cosmological Parameters using Artificial Neural Networks [CEA]

http://arxiv.org/abs/2005.07089


In this work, we present a new method to estimate cosmological parameters accurately based on an Artificial Neural Network (ANN), and a code called ECoPANN (Estimating Cosmological Parameters with ANN) is developed to perform parameter inference. We test the ANN method by estimating the basic parameters of the concordance cosmological model using the simulated temperature power spectrum of the cosmic microwave background (CMB). The results show that the ANN performs excellently on the best-fit values, errors, and correlations of parameters when compared with the Markov chain Monte Carlo (MCMC) method. Moreover, a well-trained ANN model is capable of estimating parameters for multiple experiments of different precision, which can greatly reduce the time and computing resources consumed by parameter inference, and it is capable of probing potential new physics in both current and future higher-precision observations. In addition, we extend the ANN to a multi-branch network to achieve joint constraints on parameters. We test the multi-branch network using the simulated temperature and polarization power spectra of the CMB, type Ia supernovae, and baryon acoustic oscillations, and obtain almost the same results as with the MCMC method. Therefore, we propose that the ANN provides an alternative way to estimate cosmological parameters accurately and quickly, and ECoPANN can be applied to research in cosmology and even other, broader scientific fields.

Read this paper on arXiv…

G. Wang, S. Li and J. Xia
Fri, 15 May 20
57/65

Comments: 21 pages, 12 figures, and 7 tables. Resubmitted to ApJS after revision. The code repository is available at this https URL

Deep-Learning Continuous Gravitational Waves: Multiple detectors and realistic noise [CL]

http://arxiv.org/abs/2005.04140


The sensitivity of wide-parameter-space searches for continuous gravitational waves is limited by computational cost. Recently it was shown that Deep Neural Networks (DNNs) can perform all-sky searches directly on (single-detector) strain data, potentially providing a low-computing-cost search method that could lead to a better overall sensitivity. Here we expand on this study in two respects: (i) using (simulated) strain data from two detectors simultaneously, and (ii) training for directed (i.e.\ single sky-position) searches in addition to all-sky searches. For a data timespan of $T = 10^5\, s$, the all-sky two-detector DNN is about $7\%$ less sensitive (in amplitude $h_0$) at low frequency ($f=20\,Hz$), and about $51\,\%$ less sensitive at high frequency ($f=1000\,Hz$) compared to fully-coherent matched-filtering (using WEAVE). In the directed case the sensitivity gap compared to matched-filtering ranges from about $7-14\%$ at $f=20\,Hz$ to about $37-49\%$ at $f=1500\,Hz$. Furthermore we assess the DNN’s ability to generalize in signal frequency, spindown and sky-position, and we test its robustness to realistic data conditions, namely gaps in the data and using real LIGO detector noise. We find that the DNN performance is not adversely affected by gaps in the test data or by using a relatively undisturbed band of LIGO detector data instead of Gaussian noise. However, when using a more disturbed LIGO band for the tests, the DNN’s detection performance is substantially degraded due to the increase in false alarms, as expected.

Read this paper on arXiv…

C. Dreissigacker and R. Prix
Mon, 11 May 20
59/61

Comments: (12 pages,8 figures, 6 tables)

The STONE curve: A ROC-derived model performance assessment tool [CL]

http://arxiv.org/abs/2005.03542


A new model validation and performance assessment tool is introduced, the sliding threshold of observation for numeric evaluation (STONE) curve. It is based on the relative operating characteristic (ROC) curve technique, but instead of sorting all observations in a categorical classification, the STONE tool uses the continuous nature of the observations. Rather than defining events in the observations and then sliding the threshold only in the classifier (model) data set, the threshold is changed simultaneously for both the observational and model values, with the same threshold value for both data and model. This is only possible if the observations are continuous and the model output is in the same units and scale as the observations, that is, the model is trying to exactly reproduce the data. The STONE curve has several similarities with the ROC curve, plotting probability of detection against probability of false detection, ranging from the (1,1) corner for low thresholds to the (0,0) corner for high thresholds, and values above the zero-intercept unity-slope line indicating better than random predictive ability. The main difference is that the STONE curve can be nonmonotonic, doubling back in both the x and y directions. These ripples reveal asymmetries in the data-model value pairs. This new technique is applied to modeling output of a common geomagnetic activity index as well as energetic electron fluxes in the Earth’s inner magnetosphere. It is not limited to space physics applications but can be used for any scientific or engineering field where numerical models are used to reproduce observations.
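
To make the construction concrete, here is a minimal Python sketch of a STONE curve for paired continuous observations and model values; the variable names, threshold grid and toy data are illustrative choices rather than the authors' code:

    import numpy as np

    def stone_curve(obs, model, thresholds):
        # Probability of detection vs. probability of false detection, with the
        # same threshold applied simultaneously to the observations and the model.
        obs, model = np.asarray(obs), np.asarray(model)
        pod, pofd = [], []
        for t in thresholds:
            event = obs >= t            # event defined in the observations
            predicted = model >= t      # same threshold applied to the model
            hits = np.sum(event & predicted)
            misses = np.sum(event & ~predicted)
            false_alarms = np.sum(~event & predicted)
            correct_nulls = np.sum(~event & ~predicted)
            pod.append(hits / max(hits + misses, 1))
            pofd.append(false_alarms / max(false_alarms + correct_nulls, 1))
        return np.array(pofd), np.array(pod)

    # Toy example: a model that reproduces the data with some scatter.
    rng = np.random.default_rng(1)
    obs = rng.normal(size=2000)
    model = obs + 0.5 * rng.normal(size=2000)
    x, y = stone_curve(obs, model, thresholds=np.linspace(-2, 2, 41))

Unlike a ROC curve, the resulting curve can double back on itself when the data-model value pairs are asymmetric about the threshold, which is exactly the diagnostic feature described above.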

Read this paper on arXiv…

M. Liemohn, A. Azari, N. Ganushkina, et. al.
Fri, 8 May 20
64/72

Comments: 19 pages, including 4 figures. Currently in second-round review with “Earth and Space Science”: this https URL

Phase reconstruction with iterated Hilbert transforms [CL]

http://arxiv.org/abs/2004.13461


We present a study dealing with a novel phase reconstruction method based on iterated Hilbert transform embeddings. We show results for the Stuart-Landau oscillator observed through generic observables. The benefits for reconstruction of the phase response curve are presented, and the method is applied in a setting where the observed system is perturbed by noise.

Read this paper on arXiv…

E. Gengel and A. Pikovsky
Wed, 29 Apr 20
66/75

Comments: The manuscript is based on findings presented in the poster presentation at the Dynamics days Europe in 2019

Parametric unfolding. Method and restrictions [CL]

http://arxiv.org/abs/2004.12766


Parametric unfolding of a true distribution distorted due to finite resolution and limited efficiency for the registration of individual events is discussed. Details of the computational algorithm of the unfolding procedure are presented.

Read this paper on arXiv…

N. Gagunashvili
Tue, 28 Apr 20
68/81

Comments: 14 pages, 9 figures

Ionization Yield in Silicon for eV-Scale Electron-Recoil Processes [IMA]

http://arxiv.org/abs/2004.10709


The development of single charge resolving, macroscopic silicon detectors has opened a window into rare processes at the O(eV) scale. In order to reconstruct the energy of a given event, or model the charge signal obtained for a given amount of energy absorbed by the electrons in a detector, an accurate charge yield model is needed. In this paper we review existing measurements of charge yield in Silicon, focusing in particular on the region below 1 keV. We highlight a calibration gap between 12-50 eV (referred to as the “UV-gap”) and employ a phenomenological model of impact ionization to explore the likely charge yield in this energy regime. Finally, we explore the impact of variations in this model on a test case, that of dark matter scattering off electrons, to illustrate the scientific impact of uncertainties in charge yield.

Read this paper on arXiv…

K. Ramanathan and N. Kurinksy
Thu, 23 Apr 20
12/45

Comments: 13 pages, 13 figures

Lessons learned from CHIME repeating FRBs [HEAP]

http://arxiv.org/abs/2003.12581


CHIME has now detected 18 repeating fast radio bursts (FRBs). We explore what can be learned about the energy distribution and activity level of the repeaters by constructing a realistic FRB population model, taking into account wait-time clustering and cosmological effects. For a power-law energy distribution dN/dE ~ E^{-gamma} for the repeating bursts, a steep energy distribution means that most repeaters should be found in the local Universe with low dispersion measure (DM), whereas a shallower distribution means some repeaters may be detected at large distances with high DM. It is especially interesting that there are two high-DM repeaters (FRB 181017 and 190417) with DM ~ 1000 pc/cm^3. These can be understood if: (i) the energy distribution is shallow gamma = 1.7 + 0.3 – 0.1 (68% confidence) or (ii) a small fraction of sources are extremely active. In the second scenario, these high-DM sources should be repeating more than 100 times more frequently than FRB 121102, and the energy index is constrained to be gamma = 1.9 + 0.3 – 0.2 (68% confidence). In either case, this power-law index is consistent with the energy dependence of the non-repeating ASKAP sample, which suggests that they are drawn from the same population. Finally, we show that the CHIME repeating fraction can be used to infer the distribution of activity level in the whole population.

Read this paper on arXiv…

W. Lu, A. Piro and E. Waxman
Tue, 31 Mar 20
9/94

Comments: 10 pages, 5 figures, submitted to MNRAS

Global Evolution of Solar Magnetic Fields and Prediction of Activity Cycles [SSA]

http://arxiv.org/abs/2003.04563


Prediction of solar activity cycles is challenging because physical processes inside the Sun involve a broad range of multiscale dynamics that no model can reproduce and because the available observations are highly limited and cover mostly surface layers. Helioseismology makes it possible to probe solar dynamics in the convective zone, but variations in differential rotation and meridional circulation are currently available for only two solar activity cycles. It has been demonstrated that sunspot observations, which cover over 400 years, can be used to calibrate the Parker-Kleeorin-Ruzmaikin dynamo model, and that the Ensemble Kalman Filter (EnKF) method can be used to link the modeled magnetic fields to sunspot observations and make reliable predictions of a following activity cycle. However, for more accurate predictions, it is necessary to use actual observations of the solar magnetic fields, which are available only for the last four solar cycles. In this paper I briefly discuss the influence of the limited number of available observations on the accuracy of EnKF estimates of solar cycle parameters, the criteria to evaluate the predictions, and application of synoptic magnetograms to the prediction of solar activity.
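
For readers unfamiliar with the method, the EnKF analysis step combines the model forecast with observations through the standard Kalman update, written here in its generic form (the specific state vector and observation operator for the dynamo model are not reproduced from the paper):

$$ x^{a} = x^{f} + K\left(y - Hx^{f}\right), \qquad K = P^{f}H^{\mathsf{T}}\left(HP^{f}H^{\mathsf{T}} + R\right)^{-1}, $$

where $x^{f}$ and $x^{a}$ are the forecast and analysis states, $P^{f}$ is the forecast error covariance estimated from the ensemble spread, $H$ maps the state onto the observed quantities, and $R$ is the observation error covariance. The limited number of observed cycles enters mainly through the ensemble estimate of $P^{f}$, which is one reason the accuracy of the cycle-parameter estimates depends on how many observations are available.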

Read this paper on arXiv…

I. Kitiashvili
Wed, 11 Mar 20
3/65

Comments: 10 pages, 6 figures, submitted to Proceedings of IAUS #354

Modeling Aerial Gamma-Ray Backgrounds using Non-negative Matrix Factorization [IMA]

http://arxiv.org/abs/2002.10440


Airborne gamma-ray surveys are useful for many applications, ranging from geology and mining to public health and nuclear security. In all these contexts, the ability to decompose a measured spectrum into a linear combination of background source terms can provide useful insights into the data and lead to improvements over techniques that use spectral energy windows. Multiple methods for the linear decomposition of spectra exist but are subject to various drawbacks, such as allowing negative photon fluxes or requiring detailed Monte Carlo modeling. We propose using Non-negative Matrix Factorization (NMF) as a data-driven approach to spectral decomposition. Using aerial surveys that include flights over water, we demonstrate that the mathematical approach of NMF finds physically relevant structure in aerial gamma-ray background, namely that measured spectra can be expressed as the sum of nearby terrestrial emission, distant terrestrial emission, and radon and cosmic emission. These NMF background components are compared to the background components obtained using Noise-Adjusted Singular Value Decomposition (NASVD), which contain negative photon fluxes and thus do not represent emission spectra in as straightforward a way. Finally, we comment on potential areas of research that are enabled by NMF decompositions, such as new approaches to spectral anomaly detection and data fusion.
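
A minimal Python sketch of the decomposition step, using scikit-learn's NMF on a matrix of measured spectra (rows are spectra, columns are energy bins), is shown below; the number of components and the toy data are illustrative assumptions only:

    import numpy as np
    from sklearn.decomposition import NMF

    # Toy stand-in: 500 measured spectra with 128 energy bins each, built as
    # non-negative mixtures of three "background" templates plus a small offset.
    rng = np.random.default_rng(2)
    templates = rng.random((3, 128))
    weights = rng.random((500, 3))
    spectra = weights @ templates + 0.01 * rng.random((500, 128))

    # Decompose into k non-negative components (interpreted in the paper as
    # nearby terrestrial, distant terrestrial, and radon plus cosmic emission).
    nmf = NMF(n_components=3, init="nndsvda", max_iter=500)
    W = nmf.fit_transform(spectra)   # per-spectrum component amplitudes
    H = nmf.components_              # component spectra, non-negative by construction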

Read this paper on arXiv…

M. Bandstra, T. Joshi, K. Bilton, et. al.
Tue, 25 Feb 20
26/76

Comments: 14 pages, 12 figures, accepted for publication in IEEE Transactions on Nuclear Science

The Widely Linear Complex Ornstein-Uhlenbeck Process with Application to Polar Motion [CL]

http://arxiv.org/abs/2001.05965


Complex-valued and widely linear modelling of time series signals are widespread and found in many applications. However, existing models and analysis techniques are usually restricted to signals observed in discrete time. In this paper we introduce a widely linear version of the complex Ornstein-Uhlenbeck (OU) process. This is a continuous-time process which generalises the standard complex-valued OU process such that signals generated from the process contain elliptical oscillations, as opposed to circular oscillations, when viewed in the complex plane. We determine properties of the widely linear complex OU process, including the conditions for stationarity, and the geometrical structure of the elliptical oscillations. We derive the analytical form of the power spectral density function, which then provides an efficient procedure for parameter inference using the Whittle likelihood. We apply the process to measure periodic and elliptical properties of Earth’s polar motion, including that of the Chandler wobble, for which the standard complex OU process was originally proposed.

Read this paper on arXiv…

A. Sykulski, S. Olhede and H. Sykulska-Lawrence
Fri, 17 Jan 20
48/60

Comments: Submitted for peer-review

Dynamic Gauss Newton Metropolis Algorithm [CL]

http://arxiv.org/abs/2001.03530


GNM: The MCMC Jagger. A rocking awesome sampler. This python package is an affine invariant Markov chain Monte Carlo (MCMC) sampler based on the dynamic Gauss-Newton-Metropolis (GNM) algorithm. The GNM algorithm is specialized in sampling highly non-linear posterior probability distribution functions of the form $e^{-||f(x)||^2/2}$, and the package is an implementation of this algorithm. On top of the back-off strategy in the original GNM algorithm, there is the dynamic hyper-parameter optimization feature added to the algorithm and included in the package to help increase performance of the back-off and therefore the sampling. Also, there are the Jacobian tester, error bars creator and many more features for the ease of use included in the code. The problem is introduced and a guide to installation is given in the introduction. Then how to use the python package is explained. The algorithm is given and finally there are some examples using exponential time series to show the performance of the algorithm and the back-off strategy.
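
To illustrate the class of targets the sampler is designed for, without reproducing the package's own API, here is a plain random-walk Metropolis sketch in Python for a posterior of the form $e^{-||f(x)||^2/2}$; the GNM algorithm itself additionally exploits Jacobian information and a back-off strategy in its proposals:

    import numpy as np

    def f(x):
        # Illustrative non-linear residual vector; any user-supplied f(x) would do.
        return np.array([x[0] ** 2 + x[1] - 1.0, x[1] ** 3 - x[0]])

    def log_target(x):
        r = f(x)
        return -0.5 * np.dot(r, r)   # log of exp(-||f(x)||^2 / 2)

    def metropolis(log_p, x0, n_steps=10000, step=0.2, seed=0):
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        lp = log_p(x)
        chain = []
        for _ in range(n_steps):
            prop = x + step * rng.normal(size=x.size)
            lp_prop = log_p(prop)
            if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
                x, lp = prop, lp_prop
            chain.append(x.copy())
        return np.array(chain)

    samples = metropolis(log_target, x0=[0.5, 0.5])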

Read this paper on arXiv…

M. Ugurbil
Mon, 13 Jan 20
1/61

Comments: 21 pages, 5 figures

Deep learning for clustering of continuous gravitational wave candidates [CL]

http://arxiv.org/abs/2001.03116


In searching for continuous gravitational waves over very many ($\approx 10^{17}$) templates, clustering is a powerful tool which increases the search sensitivity by identifying and bundling together candidates that are due to the same root cause. We implement a deep learning network that identifies clusters of signal candidates in the output of continuous gravitational wave searches and assess its performance.

Read this paper on arXiv…

B. Beheshtipour and M. Papa
Fri, 10 Jan 20
8/65

Comments: N/A

Investigating Multiwavelength Lognormality with Simulations : Case of Mrk 421 [HEAP]

http://arxiv.org/abs/2001.02458


Blazars are highly variable and display complex characteristics. A key characteristic is the flux probability distribution function, or flux PDF, whose shape depends upon the form of the underlying physical process driving variability. The BL Lacertae object Mrk 421 is one of the brightest and most variable blazars across the electromagnetic spectrum. Hints of lognormality have been reported in histograms of its observed fluxes across the spectrum from radio to gamma-rays. This would imply that the underlying mechanisms may not conform to the “standard” additive, multi-zone picture, but could potentially involve multiplicative processes. This is investigated by testing the observed lightcurves at different wavelengths against time-series simulations. We find that the simulations reveal a more complex scenario than a single lognormal distribution explaining the multiwavelength lightcurves of Mrk 421.
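
A bare-bones version of the flux-PDF test sketched here can be written in a few lines of Python: take the logarithm of the fluxes and apply a normality test, since a lognormal flux distribution corresponds to Gaussian log-fluxes. The actual analysis compares the observed lightcurves against time-series simulations rather than relying on a single closed-form test:

    import numpy as np
    from scipy import stats

    # Toy lightcurve fluxes drawn from a lognormal distribution.
    rng = np.random.default_rng(3)
    fluxes = rng.lognormal(mean=0.0, sigma=0.5, size=1000)

    log_flux = np.log(fluxes)
    stat, p_value = stats.normaltest(log_flux)  # D'Agostino-Pearson normality test
    print(f"p-value for Gaussian log-flux (i.e. lognormal flux): {p_value:.3f}")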

Read this paper on arXiv…

N. Chakraborty
Thu, 9 Jan 20
59/61

Comments: Accepted in Galaxies

Multifractal signatures of gravitational waves detected by LIGO [HEAP]

http://arxiv.org/abs/1912.12967


We analyze the data from the 6 gravitational wave signals detected by LIGO through the lens of multifractal formalism using the MFDMA method, as well as shuffled and surrogate procedures. We identified two regimes of multifractality in the strain measure of the time series by examining long memory and the presence of nonlinearities. The moment used to divide the series into two parts separates these two regimes and can be interpreted as the moment of collision between the black holes. An empirical relationship between the variation in left side diversity and the chirp mass of each event was also determined.

Read this paper on arXiv…

D. Freitas, M. Nepomuceno and J. Medeiros
Wed, 1 Jan 20
40/88

Comments: 7 pages, 3 figures, proceedings of IAU Symposium 346: High-mass X-ray binaries: illuminating the passage from massive binaries to merging compact objects

Evolution of the accretion disk-corona during bright hard-to-soft state transition: A reflection spectroscopic study with GX 339-4 [HEAP]

http://arxiv.org/abs/1912.11447


We present the analysis of several observations of the black hole binary GX 339–4 during its bright intermediate states from two different outbursts (2002 and 2004), as observed by RXTE/PCA. We perform a consistent study of its reflection spectrum by employing the relxill family of relativistic reflection models to probe the evolutionary properties of the accretion disk including the inner disk radius ($R_{\rm in}$), ionization parameter ($\xi$), temperatures of the inner disk ($T_{\rm in}$), corona ($kT_{\rm e}$), and its optical depth ($\tau$). Our analysis indicates that the disk inner edge approaches the inner-most stable circular orbit (ISCO) during the early onset of bright hard state, and that the truncation radius of the disk remains low ($< 9 R_{\rm g}$) throughout the transition from hard to soft state. This suggests that the changes observed in the accretion disk properties during the state transition are driven by variation in accretion rate, and not necessarily due to changes in the inner disk’s radius. We compare the aforementioned disk properties in two different outbursts, with state transitions occurring at dissimilar luminosities, and find identical evolutionary trends in the disk properties, with differences only seen in corona’s $kT_{\rm e}$ and $\tau$. We also perform an analysis by employing a self-consistent Comptonized accretion disk model accounting for the scatter of disk photons by the corona, and measure low inner disk truncation radius across the bright intermediate states, using the temperature dependent values of spectral hardening factor, thereby independently confirming our results from the reflection spectrum analysis.

Read this paper on arXiv…

N. Sridhar, J. García, J. Steiner, et. al.
Wed, 25 Dec 19
19/31

Comments: Accepted for publication in The Astrophysical Journal. 24 pages, 11 figures (44 panels), 4 tables

Decoding Cosmological Information in Weak-Lensing Mass Maps with Generative Adversarial Networks [CEA]

http://arxiv.org/abs/1911.12890


Galaxy imaging surveys enable us to map the cosmic matter density field through weak gravitational lensing analysis. The density reconstruction is compromised by a variety of noise originating from observational conditions, galaxy number density fluctuations, and intrinsic galaxy properties. We propose a deep-learning approach based on generative adversarial networks (GANs) to reduce the noise in the weak lensing map under realistic conditions. We perform image-to-image translation using conditional GANs in order to produce noiseless lensing maps using the first-year data of the Subaru Hyper Suprime-Cam (HSC) survey. We train the conditional GANs by using 30000 sets of mock HSC catalogs that directly incorporate observational effects. We show that an ensemble learning method with GANs can reproduce the one-point probability distribution function (PDF) of the lensing convergence map within a $0.5-1\sigma$ level. We use the reconstructed PDFs to estimate a cosmological parameter $S_{8} = \sigma_{8}\sqrt{\Omega_{\rm m0}/0.3}$, where $\Omega_{\rm m0}$ and $\sigma_{8}$ represent the mean and the scatter in the cosmic matter density. The reconstructed PDFs place tighter constraints, with the statistical uncertainty in $S_8$ reduced by a factor of $2$ compared to the noisy PDF. This is equivalent to increasing the survey area by a factor of $4$ without denoising by GANs. Finally, we apply our denoising method to the first-year HSC data, to place $2\sigma$-level cosmological constraints of $S_{8} < 0.777 \, ({\rm stat}) + 0.105 \, ({\rm sys})$ and $S_{8} < 0.633 \, ({\rm stat}) + 0.114 \, ({\rm sys})$ for the noisy and denoised data, respectively.

Read this paper on arXiv…

M. Shirasaki, N. Yoshida, S. Ikeda, et. al.
Mon, 2 Dec 19
59/91

Comments: 19 pages, 17 figures, 1 table

Expanding Core-Collapse Supernova Search Horizon of Neutrino Detectors [IMA]

http://arxiv.org/abs/1911.11450


Core-collapse supernovae, failed supernovae and quark novae are expected to release an energy of a few $10^{53}$ ergs through MeV neutrinos, and a network of detectors is operating to look online for these events. However, when the source distance increases and/or the average energy of emitted neutrinos decreases, the signal statistics drop and the identification of these low-statistics astrophysical bursts can be challenging. In a standard search, neutrino detectors characterise the observed clusters of events with a parameter called multiplicity, i.e. the number of collected events in a fixed time window. We discuss a new parameter called $\xi$ (= multiplicity/duration of the cluster) in order to add information on the temporal behaviour of the expected signal with respect to background. By adding this parameter to the multiplicity we optimise the search for astrophysical bursts and increase their detection horizon. Moreover, the use of $\xi$ can easily be implemented in an online system and also applies to a network of detectors like SNEWS. For these reasons this work is relevant in the multi-messenger era, when fast alerts with high significance are mandatory.
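
The proposed statistic is straightforward to compute from a list of event times; a minimal Python sketch, with an illustrative fixed search window rather than the detector-specific implementation, is:

    import numpy as np

    def cluster_xi(event_times, window=20.0):
        # For each fixed-length window ending at an event, return the multiplicity m
        # and xi = m / duration for every window containing at least two events.
        t = np.sort(np.asarray(event_times, dtype=float))
        out = []
        start = 0
        for i in range(len(t)):
            while t[i] - t[start] > window:
                start += 1
            m = i - start + 1
            if m >= 2:
                out.append((m, m / (t[i] - t[start])))
        return out

    # Toy example: Poisson background events plus a short burst of signal events.
    rng = np.random.default_rng(4)
    background = np.cumsum(rng.exponential(5.0, size=200))
    burst = background[100] + rng.uniform(0.0, 2.0, size=15)
    clusters = cluster_xi(np.concatenate([background, burst]))
    print(max(clusters))   # the injected burst yields the highest-multiplicity window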

Read this paper on arXiv…

O. Halim, C. Vigorito, C. Casentini, et. al.
Wed, 27 Nov 19
10/59

Comments: 4 pages, 2 figures, this contribution was accepted by IOP Conference Series – proceedings services for science

Searching for new physics with profile likelihoods: Wilks and beyond [CL]

http://arxiv.org/abs/1911.10237


Particle physics experiments use likelihood ratio tests extensively to compare hypotheses and to construct confidence intervals. Often, the null distribution of the likelihood ratio test statistic is approximated by a $\chi^2$ distribution, following a theorem due to Wilks. However, many circumstances relevant to modern experiments can cause this theorem to fail. In this paper, we review how to identify these situations and construct valid inference.
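
For orientation, the approximation in question is the classical Wilks statement, reproduced here as background (the paper's contribution is cataloguing when its regularity conditions fail and how to construct valid inference in those cases):

$$ \lambda(\mathcal{D}) = -2\,\ln\frac{\sup_{\theta\in\Theta_{0}} L(\theta;\mathcal{D})}{\sup_{\theta\in\Theta} L(\theta;\mathcal{D})} \;\xrightarrow{\ d\ }\; \chi^{2}_{k} \quad \text{as the sample size grows,} $$

where $k$ is the number of parameters fixed by the null hypothesis $\Theta_{0}\subset\Theta$. The convergence requires, among other conditions, that the true parameter not lie on the boundary of the parameter space and that the nested models be identifiable, conditions that commonly fail in new-physics searches, for example when a signal strength is constrained to be non-negative.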

Read this paper on arXiv…

S. Algeri, J. Aalbers, K. Morå, et. al.
Tue, 26 Nov 19
18/66

Comments: Submitted to Nature Expert Recommendations

Machine-learning non-stationary noise out of gravitational wave detectors [CL]

http://arxiv.org/abs/1911.09083


Signal extraction out of background noise is a common challenge in high precision physics experiments, where the measurement output is often a continuous data stream. To improve the signal to noise ratio of the detection, witness sensors are often used to independently measure background noises and subtract them from the main signal. If the noise coupling is linear and stationary, optimal techniques already exist and are routinely implemented in many experiments. However, when the noise coupling is non-stationary, linear techniques often fail or are sub-optimal. Inspired by the properties of the background noise in gravitational wave detectors, this work develops a novel algorithm to efficiently characterize and remove non-stationary noise couplings, provided there exist witnesses of the noise source and of the modulation. In this work, the algorithm is described in its most general formulation, and its efficiency is demonstrated with examples from the data of the Advanced LIGO gravitational wave observatory, where we could obtain an improvement of the detector gravitational wave reach without introducing any bias on the source parameter estimation.

Read this paper on arXiv…

G. Vajente, Y. Huang, M. Isi, et. al.
Thu, 21 Nov 19
12/57

Comments: N/A

The DNNLikelihood: enhancing likelihood distribution with Deep Learning [CL]

http://arxiv.org/abs/1911.03305


We introduce the DNNLikelihood, a novel framework to easily encode, through Deep Neural Networks (DNN), the full experimental information contained in complicated likelihood functions (LFs). We show how to efficiently parametrise the LF, treated as a multivariate function of parameters and nuisance parameters with high dimensionality, as an interpolating function in the form of a DNN predictor. We do not use any Gaussian approximation or dimensionality reduction, such as marginalisation or profiling over nuisance parameters, so that the full experimental information is retained. The procedure applies to both binned and unbinned LFs, and allows for an efficient distribution to multiple software platforms, e.g. through the framework-independent ONNX model format. The distributed DNNLikelihood can be used for different use cases, such as re-sampling through Markov Chain Monte Carlo techniques, possibly with custom priors, combination with other LFs, when the correlations among parameters are known, and re-interpretation within different statistical approaches, i.e. Bayesian vs frequentist. We discuss the accuracy of our proposal and its relations with other approximation techniques and likelihood distribution frameworks. As an example, we apply our procedure to a pseudo-experiment corresponding to a realistic LHC search for new physics already considered in the literature.
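
At its simplest, the idea amounts to supervised regression of $\log L$ over the full vector of parameters and nuisance parameters; the short Python sketch below, using a scikit-learn multilayer perceptron as a stand-in for the deep networks and ONNX export discussed in the paper, shows the shape of the procedure on a toy Gaussian likelihood:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Toy likelihood: a correlated Gaussian in 5 parameters (including nuisances).
    rng = np.random.default_rng(5)
    dim = 5
    cov = np.eye(dim) + 0.3                      # positive-definite toy covariance
    inv_cov = np.linalg.inv(cov)
    theta = rng.uniform(-3, 3, size=(20000, dim))
    log_l = -0.5 * np.einsum("ij,jk,ik->i", theta, inv_cov, theta)

    # Fit an interpolating predictor for log L(theta), then query it like a function.
    dnn_like = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300)
    dnn_like.fit(theta, log_l)
    approx_log_l = dnn_like.predict(theta[:5])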

Read this paper on arXiv…

A. Coccaro, M. Pierini, L. Silvestrini, et. al.
Wed, 13 Nov 19
31/73

Comments: 44 pages, 17 figures, 8 tables

GetDist: a Python package for analysing Monte Carlo samples [IMA]

http://arxiv.org/abs/1910.13970


Monte Carlo techniques, including MCMC and other methods, are widely used and generate sets of samples from a parameter space of interest that can be used to infer or plot quantities of interest. This note outlines the methods used by the Python GetDist package to calculate marginalized one- and two-dimensional densities using Kernel Density Estimation (KDE). Many Monte Carlo methods produce correlated and/or weighted samples, for example produced by MCMC, nested, or importance sampling, and there can be hard boundary priors. GetDist’s baseline method consists of applying a linear boundary kernel, and then using multiplicative bias correction. The smoothing bandwidth is selected automatically following Botev et al., based on a mixture of heuristics and optimization results using the expected scaling with an effective number of samples (defined to account for MCMC correlations and weights). Two-dimensional KDE uses an automatically-determined elliptical Gaussian kernel for correlated distributions. The package includes tools for producing a variety of publication-quality figures using a simple named-parameter interface, as well as a graphical user interface that can be used for interactive exploration. It can also calculate convergence diagnostics, produce tables of limits, and output results in LaTeX.
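
A minimal usage example in Python, assuming the documented MCSamples/plots interface of the released package (the URL in the comments below points to the authoritative documentation), looks like this:

    import numpy as np
    from getdist import MCSamples, plots

    # Toy correlated samples for two parameters, standing in for an MCMC chain.
    rng = np.random.default_rng(6)
    chain = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=20000)

    samples = MCSamples(samples=chain, names=["x", "y"], labels=["x", "y"])
    print(samples.getMeans())                # sample means of the two parameters

    g = plots.get_subplot_plotter()
    g.triangle_plot([samples], filled=True)  # KDE-smoothed 1D and 2D marginals
    g.export("triangle.png")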

Read this paper on arXiv…

A. Lewis
Thu, 31 Oct 19
34/55

Comments: GetDist 1.0 now released, see this https URL

New methods to assess and improve LIGO detector duty cycle [IMA]

http://arxiv.org/abs/1910.12143


A network of three or more gravitational wave detectors simultaneously taking data is required to generate a well-localized sky map for gravitational wave sources, such as GW170817. Local seismic disturbances often cause the LIGO and Virgo detectors to lose light resonance in one or more of their component optic cavities, and the affected detector is unable to take data until resonance is recovered. In this paper, we use machine learning techniques to gain insight into the predictive behavior of the LIGO detector optic cavities during the second LIGO-Virgo observing run. We identify a minimal set of optic cavity control signals and data features which capture interferometer behavior leading to a loss of light resonance, or lockloss. We use these channels to accurately distinguish between lockloss events and quiet interferometer operating times via both supervised and unsupervised machine learning methods. This analysis yields new insights into how components of the LIGO detectors contribute to lockloss events, which could inform detector commissioning efforts to mitigate the associated loss of uptime. Particularly, we find that the state of the component optical cavities is a better predictor of loss of lock than ground motion trends. We report prediction accuracies of 98% for times just prior to lockloss, and 90% for times up to 30 seconds prior to lockloss, which shows promise for this method to be applied in near-real time to trigger preventative detector state changes. This method can be extended to target other auxiliary subsystems or times of interest, such as transient noise or loss in detector sensitivity. Application of these techniques during the third LIGO-Virgo observing run and beyond would maximize the potential of the global detector network for multi-messenger astronomy with gravitational waves.
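
As a schematic of the supervised part of this approach, the sketch below trains a standard classifier on feature vectors built from auxiliary-channel data, labelling each time segment as pre-lockloss or nominal; the channel selection, feature engineering and models actually used in the paper are more involved, and the toy data here are purely illustrative:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    # Toy features: summary statistics of a few control signals per time segment.
    rng = np.random.default_rng(7)
    n_segments, n_features = 4000, 12
    X = rng.normal(size=(n_segments, n_features))
    y = X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n_segments) > 1.0  # pre-lockloss label

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    print("feature importances:", clf.feature_importances_)   # which input channels matter most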

Read this paper on arXiv…

A. Biswas, J. McIver and A. Mahabal
Tue, 29 Oct 19
68/78

Comments: N/A

A Novel CMB Component Separation Method: Hierarchical Generalized Morphological Component Analysis [CEA]

http://arxiv.org/abs/1910.08077


We present a novel technique for Cosmic Microwave Background (CMB) foreground subtraction based on the framework of blind source separation. Inspired by previous work incorporating local variation to Generalized Morphological Component Analysis (GMCA), we introduce Hierarchical GMCA (HGMCA), a Bayesian hierarchical framework for source separation. We test our method on $N_{\rm side}=256$ simulated sky maps that include dust, synchrotron, free-free and anomalous microwave emission, and show that HGMCA reduces foreground contamination by $25\%$ over GMCA in both the regions included and excluded by the Planck UT78 mask, decreases the error in the measurement of the CMB temperature power spectrum to the $0.02-0.03\%$ level at $\ell>200$ (and $<0.26\%$ for all $\ell$), and reduces correlation to all the foregrounds. We find equivalent or improved performance when compared to state-of-the-art Internal Linear Combination (ILC)-type algorithms on these simulations, suggesting that HGMCA may be a competitive alternative to foreground separation techniques previously applied to observed CMB data. Additionally, we show that our performance does not suffer when we perturb model parameters or alter the CMB realization, which suggests that our algorithm generalizes well beyond our simplified simulations. Our results open a new avenue for constructing CMB maps through Bayesian hierarchical analysis.

Read this paper on arXiv…

S. Wagner-Carena, M. Hopkins, A. Rivero, et. al.
Mon, 21 Oct 19
10/54

Comments: 22 pages, 16 figures

Detection of gravitational waves using topological data analysis and convolutional neural network: An improved approach [IMA]

http://arxiv.org/abs/1910.08245


The gravitational wave detection problem is challenging because the noise is typically overwhelming. Convolutional neural networks (CNNs) have been successfully applied, but require a large training set and the accuracy suffers significantly in the case of low SNR. We propose an improved method that employs a feature extraction step using persistent homology. The resulting method is more resilient to noise, more capable of detecting signals with varied signatures and requires less training. This is a powerful improvement as the detection problem can be computationally intense and is concerned with a relatively large class of wave signatures.

Read this paper on arXiv…

C. Bresten and J. Jung
Mon, 21 Oct 19
17/54

Comments: N/A

A blinding solution for inference from astronomical data [CEA]

http://arxiv.org/abs/1910.08533


This paper presents a joint blinding and deblinding strategy for inference of physical laws from astronomical data. The strategy allows for up to three blinding stages, where the data may be blinded, the computations of theoretical physics may be blinded, and –assuming Gaussianly distributed data– the covariance matrix may be blinded. We found covariance blinding to be particularly effective, as it enables the blinder to determine close to exactly where the blinded posterior will peak. Accordingly, we present an algorithm which induces posterior shifts in predetermined directions by hiding untraceable biases in a covariance matrix. The associated deblinding takes the form of a numerically lightweight post-processing step, where the blinded posterior is multiplied with deblinding weights. We illustrate the blinding strategy for cosmic shear from KiDS-450, and show that even though there is no direct evidence of the KiDS-450 covariance matrix being biased, the famous cosmic shear tension with Planck could easily be induced by a mischaracterization of correlations between $\xi_-$ at the highest redshift and all lower redshifts. The blinding algorithm illustrates the increasing importance of accurate uncertainty assessment in astronomical inferences, as otherwise involuntary blinding through biases occurs.
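
The deblinding step amounts to reweighting the blinded posterior; a schematic numpy sketch under a Gaussian likelihood (all quantities below are illustrative toys, not the KiDS-450 analysis):

    # Schematic deblinding: evaluate a posterior with a blinded covariance, then
    # multiply by deblinding weights to recover the unblinded posterior.
    # Data vector, model and covariances are illustrative placeholders.
    import numpy as np

    def log_gauss(data, model, cov):
        r = data - model
        return -0.5 * r @ np.linalg.solve(cov, r)

    data = np.array([1.0, 2.0, 3.0])
    cov_true = np.diag([0.1, 0.1, 0.1])
    cov_blind = cov_true + 0.02 * np.outer([1, -1, 1], [1, -1, 1])  # hidden bias

    theta_grid = np.linspace(0.5, 1.5, 201)
    model = lambda t: t * np.array([1.0, 2.0, 3.0])

    logp_blind = np.array([log_gauss(data, model(t), cov_blind) for t in theta_grid])
    logp_true  = np.array([log_gauss(data, model(t), cov_true)  for t in theta_grid])

    # Deblinding weights: ratio of unblinded to blinded likelihood on the same grid
    weights = np.exp(logp_true - logp_blind)
    post_deblinded = np.exp(logp_blind) * weights   # equals the unblinded posterior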

Read this paper on arXiv…

E. Sellentin
Mon, 21 Oct 19
28/54

Comments: N/A

Identifying extra high frequency gravitational waves generated from oscillons with cuspy potentials using deep neural networks [CL]

http://arxiv.org/abs/1910.07862


During post-inflationary oscillations of the inflaton around the minimum of a cuspy potential, the production of extra high-frequency gravitational waves (HFGWs) in the GHz band has recently been demonstrated. Based on the electromagnetic resonance system for detecting such extra HFGWs, we adopt a new data processing scheme to identify the corresponding GW signal, namely the transverse perturbative photon flux (PPF). To overcome the low efficiency and high interference of traditional data processing methods, we adopt deep learning to extract the PPF and to estimate some source parameters. Deep learning provides an effective way to carry out classification and prediction tasks. We also apply anti-overfitting techniques and adjust some hyperparameters during training, which improves the performance of the classifier and predictor to a certain extent. Here a convolutional neural network (CNN) is used to implement the deep learning process. We investigate how the classification accuracy varies with the ratio between the number of positive and negative samples; when this ratio exceeds 0.11, the accuracy reaches up to 100%.

Read this paper on arXiv…

L. Wang, J. Li, N. Yang, et. al.
Fri, 18 Oct 19
24/77

Comments: N/A

Constraining power of open likelihoods, made prior-independent [CEA]

http://arxiv.org/abs/1910.06646


One of the most criticized features of Bayesian statistics is the fact that credible intervals, especially when open likelihoods are involved, may strongly depend on the prior shape and range. Many analyses involving open likelihoods are affected by the eternal dilemma of choosing between linear and logarithmic prior, and in particular in the latter case the situation is worsened by the dependence on the prior range under consideration. In this letter, using the tools of Bayesian model comparison, we propose a simple method to obtain constraints that depend neither on the prior shape nor range. An application to the case of cosmological bounds on the sum of the neutrino masses is discussed as an example.

Read this paper on arXiv…

S. Gariazzo
Wed, 16 Oct 19
48/56

Comments: 5 pages, 2 figures

Spacecraft design optimisation for demise and survivability [CL]

http://arxiv.org/abs/1910.05091


Among the mitigation measures introduced to cope with the space debris issue there is the de-orbiting of decommissioned satellites. Guidelines for re-entering objects call for a ground casualty risk no higher than 0.0001. To comply with this requirement, satellites can be designed through a design-for-demise philosophy. Still, a spacecraft designed to demise has to survive the debris-populated space environment for many years. The demisability and the survivability of a satellite can both be influenced by a set of common design choices such as the material selection, the geometry definition, and the position of the components. Within this context, two models have been developed to analyse the demise and the survivability of satellites. Given the competing nature of the demisability and the survivability, a multi-objective optimisation framework was developed, with the aim to identify trade-off solutions for the preliminary design of satellites. As the problem is nonlinear and involves the combination of continuous and discrete variables, classical derivative based approaches are unsuited and a genetic algorithm was selected instead. The genetic algorithm uses the developed demisability and survivability criteria as the fitness functions of the multi-objective algorithm. The paper presents a test case, which considers the preliminary optimisation of tanks in terms of material, geometry, location, and number of tanks for a representative Earth observation mission. The configuration of the external structure of the spacecraft is fixed. Tanks were selected because they are sensitive to both design requirements: they represent critical components in the demise process and impact damage can cause the loss of the mission because of leaking and ruptures. The results present the possible trade off solutions, constituting the Pareto front obtained from the multi-objective optimisation.
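
The competing objectives are summarised by a Pareto front of non-dominated designs; a minimal sketch of extracting that front from candidate (demisability, survivability) scores (the genetic-algorithm machinery and fitness functions of the paper are not reproduced; the scores below are random placeholders):

    # Minimal Pareto-front extraction for two objectives to be maximised.
    # Candidate designs and their scores are illustrative placeholders.
    import numpy as np

    def pareto_front(scores):
        """Return indices of non-dominated rows of an (n, 2) array of scores."""
        idx = []
        for i, s in enumerate(scores):
            dominates = np.all(scores >= s, axis=1) & np.any(scores > s, axis=1)
            if not dominates.any():        # no other design dominates design i
                idx.append(i)
        return idx

    rng = np.random.default_rng(2)
    scores = rng.uniform(size=(50, 2))     # columns: demisability, survivability indices
    print('non-dominated designs:', pareto_front(scores))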

Read this paper on arXiv…

M. Trisolini, H. Lewis and C. Colombo
Mon, 14 Oct 19
40/69

Comments: Paper accepted for publication in Aerospace Science and Technology

Footprints of Doppler and Aberration Effects in CMB Experiments: Statistical and Cosmological Implications [CEA]

http://arxiv.org/abs/1910.04315


In the frame of the Solar System, the Doppler and aberration effects cause distortions in the form of mode couplings in the cosmic microwave background (CMB) temperature and polarization power spectra and hence impose biases on the statistics derived by the moving observer. We explore several aspects of such biases and pay close attention to their effects on CMB polarization which have not been examined in detail previously. A potentially important bias that we introduce here is $\textit{boost variance}$—an additional term in cosmic variance, induced by the observer’s motion. Although this additional term is negligible for whole-sky experiments and can be safely neglected, in partial-sky experiments it can change cosmic variance by 10\% in temperature and 20\% in polarization. Furthermore, we investigate the significance of motion-induced $\textit{power}$ and $\textit{parity}$ asymmetries in TT, EE, and TE as well as potential biases induced in cosmological parameter estimation performed with TTTEEE in whole-sky experiments. Using Planck-like simulations, we find that our local motion induces $\sim1-2 \%$ hemispherical asymmetry in a wide range of angular scales in the CMB temperature and polarization power spectra, but not any significant amount of parity asymmetry or shift in cosmological parameters. Finally, we examine the prospects of measuring the velocity of the Solar System w.r.t. the CMB with future experiments via the mode coupling induced by the Doppler and aberration effects. Using the CMB TT, EE, and TE power spectra up to $\ell=4000$, SO and CMB-S4 can make a dipole-independent measurement of our local velocity respectively at $8.5\sigma$ and $20\sigma$.

Read this paper on arXiv…

S. Yasini and E. Pierpaoli
Fri, 11 Oct 19
17/76

Comments: 16 pages, 15 figures, 3 appendices

AOtools — a Python package for adaptive optics modelling and analysis [IMA]

http://arxiv.org/abs/1910.04414


AOtools is an open-source Python package aimed at providing tools for adaptive optics users and researchers. We present version 1.0 which contains tools for adaptive optics processing, including analysing data in the pupil plane, images and point spread functions in the focal plane, wavefront sensors, modelling of atmospheric turbulence, physical optical propagation of wavefronts, and conversion between frequently used adaptive optics and astronomical units. The main driver behind AOtools is that it should be easy to install and use. To achieve this, the project features extensive documentation, automated unit testing and is registered on the Python Package Index. AOtools is under continuous active development to expand the features available and we encourage everyone working in adaptive optics to get involved and contribute to the project.

Read this paper on arXiv…

M. Townson, O. Farley, G. Xivry, et. al.
Fri, 11 Oct 19
43/76

Comments: Accepted in Optics Express

Reconstructing Functions and Estimating Parameters with Artificial Neural Network: a test with Hubble parameter and SNe Ia [CEA]

http://arxiv.org/abs/1910.03636


In this work, we propose a new non-parametric approach for reconstructing a function from observational data using an Artificial Neural Network (ANN), which makes no assumptions about the data and is completely data-driven. We test the ANN method by reconstructing functions of the Hubble parameter measurements $H(z)$ and the distance-redshift relation $D_L(z)$ of type Ia supernovae. We find that both $H(z)$ and $D_L(z)$ can be reconstructed with high accuracy. Furthermore, we estimate cosmological parameters using the reconstructed functions of $H(z)$ and $D_L(z)$ and find the results are consistent with those obtained using the observational data directly. Therefore, we propose that the function reconstructed by the ANN can represent the actual distribution of observational data and can be used for parameter estimation in further cosmological research. In addition, we present a new strategy to train and evaluate the neural network, and a code for reconstructing functions using an ANN has been developed and will be available soon.
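
A schematic of the non-parametric reconstruction idea with a generic small network (the paper uses its own ANN architecture and training strategy; the regressor and data points below are placeholders):

    # Sketch: non-parametric reconstruction of H(z) from (z, H) points with a
    # small neural-network regressor. The data values are placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    z = np.array([0.07, 0.17, 0.27, 0.40, 0.68, 0.90, 1.30, 1.75])
    H = np.array([69.0, 83.0, 77.0, 95.0, 92.0, 117.0, 168.0, 202.0])  # km/s/Mpc (toy)

    net = MLPRegressor(hidden_layer_sizes=(64, 64), activation='tanh',
                       max_iter=20000, random_state=0)
    net.fit(z.reshape(-1, 1), H)

    z_grid = np.linspace(0.0, 2.0, 100)
    H_rec = net.predict(z_grid.reshape(-1, 1))   # smooth reconstructed H(z)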

Read this paper on arXiv…

G. Wang, X. Ma, S. Li, et. al.
Thu, 10 Oct 19
50/63

Comments: 12 pages, 13 figures and 1 table

Application of Synoptic Magnetograms to Global Solar Activity Forecast [SSA]

http://arxiv.org/abs/1910.00820


Synoptic magnetograms provide us with knowledge about the evolution of magnetic fields on the solar surface and present important information for forecasting future solar activity. In this work, poloidal and toroidal magnetic field components derived from synoptic magnetograms are assimilated, using the Ensemble Kalman Filter method, into a mean-field dynamo model based on Parker’s migratory dynamo theory complemented by magnetic helicity conservation. It was found that the predicted toroidal field is in good agreement with observations for almost the entire following solar cycle. However, poloidal field predictions agree with observations only for the first 2 – 3 years of the predicted cycle. The results indicate that the upcoming Solar Maximum of Cycle 25 (SC25) is expected to be weaker than the current Cycle 24. The model results show that a deep extended solar activity minimum is expected during 2019 – 2021, and that the next solar maximum will occur in 2024 – 2025. The sunspot number at the maximum will be about 50 with an error estimate of 15 – 30 %. The maximum will likely have a double peak or show extended periods (for 2 – 2.5 years) of high activity. According to the hemispheric prediction results, SC25 will start in 2020 in the Southern hemisphere, and will have a maximum in 2024 with a sunspot number of about 28. In the Northern hemisphere the cycle will be delayed for about 1 year (with an error of $\pm 0.5$ year), and reach a maximum in 2025 with a sunspot number of about 23.
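
For reference, the Ensemble Kalman Filter analysis step used for the assimilation can be sketched generically in numpy (this is the textbook perturbed-observation update, not the paper's dynamo-model implementation; dimensions and values are illustrative):

    # Generic Ensemble Kalman Filter analysis step: update an ensemble of model
    # states with an observation vector. All quantities are illustrative toys.
    import numpy as np

    def enkf_update(X, y, H, R, rng):
        """X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
        H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs covariance."""
        n_ens = X.shape[1]
        Xm = X - X.mean(axis=1, keepdims=True)
        P = Xm @ Xm.T / (n_ens - 1)                    # sample forecast covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        # Perturbed-observation variant: each member sees a noisy copy of y
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
        return X + K @ (Y - H @ X)

    rng = np.random.default_rng(3)
    X = rng.normal(size=(4, 50))     # toy state ensemble (4 state variables, 50 members)
    H = np.eye(2, 4)                 # observe the first two state components
    R = 0.1 * np.eye(2)
    y = np.array([0.5, -0.2])
    X_analysis = enkf_update(X, y, H, R, rng)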

Read this paper on arXiv…

I. Kitiashvili
Thu, 3 Oct 19
2/59

Comments: 15 figures, 1 table, 29 pages, submitted to ApJ

A Conceptual Introduction to Markov Chain Monte Carlo Methods [CL]

http://arxiv.org/abs/1909.12313


Markov Chain Monte Carlo (MCMC) methods have become a cornerstone of many modern scientific analyses by providing a straightforward approach to numerically estimate uncertainties in the parameters of a model using a sequence of random samples. This article provides a basic introduction to MCMC methods by establishing a strong conceptual understanding of what problems MCMC methods are trying to solve, why we want to use them, and how they work in theory and in practice. To develop these concepts, I outline the foundations of Bayesian inference, discuss how posterior distributions are used in practice, explore basic approaches to estimate posterior-based quantities, and derive their link to Monte Carlo sampling and MCMC. Using a simple toy problem, I then demonstrate how these concepts can be used to understand the benefits and drawbacks of various MCMC approaches. Exercises designed to highlight various concepts are also included throughout the article.
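
As a concrete companion to the conceptual discussion, a minimal random-walk Metropolis sampler for a one-dimensional Gaussian posterior (target, step size and chain length are illustrative):

    # Minimal Metropolis MCMC: sample a 1D Gaussian posterior with a symmetric
    # random-walk proposal. Target and tuning values are illustrative.
    import numpy as np

    def log_post(theta):
        return -0.5 * (theta - 1.0)**2 / 0.5**2    # Gaussian, mean 1, sigma 0.5

    rng = np.random.default_rng(4)
    theta, samples = 0.0, []
    for _ in range(20000):
        prop = theta + rng.normal(scale=0.3)       # propose a random-walk step
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                           # accept; otherwise keep current point
        samples.append(theta)

    samples = np.array(samples[2000:])             # discard burn-in
    print(samples.mean(), samples.std())           # should approach ~1.0 and ~0.5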

Read this paper on arXiv…

J. Speagle
Mon, 30 Sep 19
45/55

Comments: 54 pages, 15 figures, submitted to the Journal of Statistics Education. All comments and feedback greatly appreciated

Exact joint likelihood of pseudo-$C_\ell$ estimates from correlated Gaussian cosmological fields [CEA]

http://arxiv.org/abs/1908.00795


We present the exact joint likelihood of pseudo-$C_\ell$ power spectrum estimates measured from an arbitrary number of Gaussian cosmological fields. Our method is applicable to both spin-0 fields and spin-2 fields, including a mixture of the two, and is relevant to Cosmic Microwave Background, weak lensing and galaxy clustering analyses. We show that Gaussian cosmological fields are mixed by a mask in such a way that retains their Gaussianity, without making any assumptions about the mask geometry. We then show that each auto- or cross-pseudo-$C_\ell$ estimator can be written as a quadratic form, and apply the known joint distribution of quadratic forms to obtain the exact joint likelihood of a set of pseudo-$C_\ell$ estimates in the presence of an arbitrary mask. Considering the polarisation of the Cosmic Microwave Background as an example, we show using simulations that our likelihood recovers the full, exact multivariate distribution of $EE$, $BB$ and $EB$ pseudo-$C_\ell$ power spectra. Our method provides a route to robust cosmological constraints from future Cosmic Microwave Background and large-scale structure surveys in an era of ever-increasing statistical precision.
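
For orientation, the pseudo-$C_\ell$ estimator that the paper treats as a quadratic form is, in standard notation (a generic definition rather than the paper's full multivariate result), $\tilde{a}_{\ell m} = \int W(\hat{n})\, T(\hat{n})\, Y^{*}_{\ell m}(\hat{n})\, \mathrm{d}\Omega$ with $\tilde{C}_\ell = \frac{1}{2\ell+1}\sum_{m=-\ell}^{\ell}|\tilde{a}_{\ell m}|^{2}$, where $W(\hat{n})$ is the mask. Since each $\tilde{a}_{\ell m}$ is linear in the masked Gaussian field, $\tilde{C}_\ell$ is a quadratic form in Gaussian variables, which is the property the exact joint likelihood exploits.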

Read this paper on arXiv…

R. Upham, L. Whittaker and M. Brown
Mon, 5 Aug 19
13/53

Comments: 17 pages, 7 figures. Submitted to MNRAS

Improving Galaxy Clustering Measurements with Deep Learning: analysis of the DECaLS DR7 data [CEA]

http://arxiv.org/abs/1907.11355


Robust measurements of cosmological parameters from galaxy surveys rely on our understanding of systematic effects that impact the observed galaxy density field. In this paper we present, validate, and implement the idea of adopting the systematics mitigation method of Artificial Neural Networks for modeling the relationship between the target galaxy density field and various observational realities including but not limited to Galactic extinction, seeing, and stellar density. Our method by construction does not assume a fitting model a priori and is less prone to over-training by performing k-fold cross-validation and dimensionality reduction via backward feature elimination. By permuting the choice of the training, validation, and test sets, we construct a selection mask for the entire footprint. We apply our method on the extended Baryon Oscillation Spectroscopic Survey (eBOSS) Emission Line Galaxies (ELGs) selection from the Dark Energy Camera Legacy Survey (DECaLS) DR7 data and show that the spurious large-scale contamination due to imaging systematics can be significantly reduced by up-weighting the observed galaxy density using the selection mask from the neural network and that our method is more effective than the conventional linear and quadratic polynomial functions. We perform extensive analyses on simulated mock datasets with and without systematic effects. Our analyses indicate that our methodology is more robust to overfitting compared to the conventional methods. This method can be utilized in the catalog generation of future spectroscopic galaxy surveys such as eBOSS and Dark Energy Spectroscopic Instrument (DESI) to better mitigate observational systematics.
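
A schematic of the mitigation idea with hypothetical imaging-attribute maps (the paper's network architecture, backward feature elimination, and the eBOSS/DECaLS data are not reproduced here):

    # Sketch: predict the observed galaxy density from imaging-systematics maps
    # with k-fold cross-validation, then use the prediction as a selection weight.
    # Feature maps and densities below are hypothetical toy arrays.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(5)
    n_pix = 5000
    X = rng.normal(size=(n_pix, 3))     # e.g. extinction, seeing, stellar density per pixel
    ngal = 1.0 + 0.2 * X[:, 0] + rng.normal(scale=0.1, size=n_pix)  # contaminated density

    selection = np.empty(n_pix)
    for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
        net.fit(X[train], ngal[train])
        selection[test] = net.predict(X[test])   # predicted systematic modulation

    ngal_corrected = ngal / selection            # up-weight the observed density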

Read this paper on arXiv…

M. Rezaie, H. Seo, A. Ross, et. al.
Mon, 29 Jul 19
39/52

Comments: 27 pages, 22 figures

Barycentric interpolation on Riemannian and semi-Riemannian spaces [IMA]

http://arxiv.org/abs/1907.09487


Interpolation of data represented in curvilinear coordinates and possibly having some non-trivial, typically Riemannian or semi-Riemannian geometry is a ubiquitous task in all of physics. In this work we present a covariant generalization of the barycentric coordinates and the barycentric interpolation method for Riemannian and semi-Riemannian spaces of arbitrary dimension. We show that our new method preserves the linear accuracy property of barycentric interpolation in a coordinate-invariant sense. In addition, we show how the method can be used to interpolate constrained quantities so that the given constraint is automatically respected. We showcase the method with two astrophysics-related examples situated in the curved Kerr spacetime. The first problem is interpolating a locally constant vector field, in which case curvature effects are expected to be maximally important. The second example is a General Relativistic Magnetohydrodynamics simulation of a turbulent accretion flow around a black hole, wherein high intrinsic variability is expected to be at least as important as curvature effects.
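
In flat space, barycentric interpolation of samples $f_i$ at the vertices $x_i$ of a simplex reads (the standard definition; the paper's contribution is its covariant generalization to curved spaces) $f(x) = \sum_i \lambda_i(x)\, f_i$, with $\sum_i \lambda_i(x) = 1$ and $\sum_i \lambda_i(x)\, x_i = x$. These conditions guarantee exact reproduction of linear functions, which is the linear accuracy property that the covariant method preserves in a coordinate-invariant sense.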

Read this paper on arXiv…

P. Pihajoki, M. Mannerkoski and P. Johansson
Wed, 24 Jul 19
56/60

Comments: 9 pages, 3 figures. Submitted to MNRAS, comments welcome

The Importance of Telescope Training in Data Interpretation [IMA]

http://arxiv.org/abs/1907.05889


In this State of the Profession Consideration, we will discuss the state of hands-on observing within the profession, including: information about professional observing trends; student telescope training, beginning at the undergraduate and graduate levels, as a key to ensuring a base level of technical understanding among astronomers; the role that amateurs can take moving forward; the impact of telescope training on using survey data effectively; and the need for modest investments in new, standard instrumentation at mid-size aperture telescope facilities to ensure their usefulness for the next decade.

Read this paper on arXiv…

D. Whelan, G. Privon, R. Beaton, et. al.
Tue, 16 Jul 19
58/89

Comments: Astro 2020 APC White Paper, to be published in BAAS

A Comparison of Flare Forecasting Methods. II. Benchmarks, Metrics and Performance Results for Operational Solar Flare Forecasting Systems [SSA]

http://arxiv.org/abs/1907.02905


Solar flares are extremely energetic phenomena in our Solar System. Their impulsive, often drastic radiative increases, in particular at short wavelengths, bring immediate impacts that motivate solar physics and space weather research to understand solar flares to the point of being able to forecast them. As data and algorithms improve dramatically, questions must be asked concerning how well the forecasting performs; crucially, we must ask how to rigorously measure performance in order to critically gauge any improvements. Building upon earlier-developed methodology (Barnes et al, 2016, Paper I), international representatives of regional warning centers and research facilities assembled in 2017 at the Institute for Space-Earth Environmental Research, Nagoya University, Japan to – for the first time – directly compare the performance of operational solar flare forecasting methods. Multiple quantitative evaluation metrics are employed, with focus and discussion on evaluation methodologies given the restrictions of operational forecasting. Numerous methods performed consistently above the “no skill” level, although which method scored top marks is decisively a function of flare event definition and the metric used; there was no single winner. Following in this paper series we ask why the performances differ by examining implementation details (Leka et al. 2019, Paper III), and then we present a novel analysis method to evaluate temporal patterns of forecasting errors in (Park et al. 2019, Paper IV). With these works, this team presents a well-defined and robust methodology for evaluating solar flare forecasting methods in both research and operational frameworks, and today’s performance benchmarks against which improvements and new methods may be compared.
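
One widely used categorical metric in such comparisons is the True Skill Statistic; a minimal sketch from a flare/no-flare contingency table (an illustration of one common metric, not the paper's full evaluation suite; the counts are made up):

    # True Skill Statistic (TSS) from a 2x2 forecast contingency table.
    # The counts below are illustrative placeholders.
    def tss(hits, misses, false_alarms, correct_nulls):
        pod = hits / (hits + misses)                          # probability of detection
        pofd = false_alarms / (false_alarms + correct_nulls)  # probability of false detection
        return pod - pofd

    print(tss(hits=40, misses=10, false_alarms=60, correct_nulls=890))  # ~0.74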

Read this paper on arXiv…

K. Leka, S. Park, K. Kusano, et. al.
Mon, 8 Jul 19
12/43

Comments: 26 pages, 5 figures, accepted for publication in the Astrophysical Journal Supplement Series

A Comparison of Flare Forecasting Methods. III. Systematic Behaviors of Operational Solar Flare Forecasting Systems [SSA]

http://arxiv.org/abs/1907.02909


A workshop was recently held at Nagoya University (31 October – 02 November 2017), sponsored by the Center for International Collaborative Research, at the Institute for Space-Earth Environmental Research, Nagoya University, Japan, to quantitatively compare the performance of today’s operational solar flare forecasting facilities. Building upon Paper I of this series (Barnes et al. 2016), in Paper II (Leka et al. 2019) we described the participating methods for this latest comparison effort, the evaluation methodology, and presented quantitative comparisons. In this paper we focus on the behavior and performance of the methods when evaluated in the context of broad implementation differences. Acknowledging the short testing interval available and the small number of methods available, we do find that forecast performance: 1) appears to improve by including persistence or prior flare activity, region evolution, and a human “forecaster in the loop”; 2) is hurt by restricting data to disk-center observations; 3) may benefit from long-term statistics, but mostly when then combined with modern data sources and statistical approaches. These trends are arguably weak and must be viewed with numerous caveats, as discussed both here and in Paper II. Following this present work, we present in Paper IV a novel analysis method to evaluate temporal patterns of forecasting errors of both types (i.e., misses and false alarms; Park et al. 2019). Hence, most importantly, with this series of papers we demonstrate the techniques for facilitating comparisons in the interest of establishing performance-positive methodologies.

Read this paper on arXiv…

K. Leka, S. Park, K. Kusano, et. al.
Mon, 8 Jul 19
37/43

Comments: 23 pages, 6 figures, accepted for publication in The Astrophysical Journal

Investigating Dark Matter and MOND Models with Galactic Rotation Curve Data: Analysing the Gas-Dominated Galaxies [GA]

http://arxiv.org/abs/1906.09798


In this study the geometry of gas-dominated galaxies in the SPARC database is analyzed in a normalized $(g_{bar},g_{obs})$-space ($g2$-space), where $g_{obs}$ is the observed centripetal acceleration and $g_{bar}$ is the centripetal acceleration as obtained from the observed baryonic matter via Newtonian dynamics. The normalization of $g2$-space significantly reduces the effect of both random and systematic uncertainties and enables a comparison of the geometries of different galaxies. Analyzing the gas-dominated galaxies (as opposed to other galaxies) further suppresses the impact of the mass-to-light ratios.
It is found that the overall geometry of the gas-dominated galaxies in SPARC is consistent with a rightward curving geometry in the normalized $g2$-space (characterized by $r_{obs}>r_{bar}$, where $r_{bar}=\arg \max_r[g_{bar}(r)]$ and $r_{obs}=\arg \max_r[g_{obs}(r)]$). This is in contrast to the overall geometry of all galaxies in SPARC which best approximates a geometry curving nowhere in normalized $g2$-space (characterized by $r_{obs}=r_{bar}$) with a slight inclination toward a rightward curving geometry. The geometry of the gas-dominated galaxies not only indicates the true (independent of mass-to-light ratios to leading order) geometry of data in $g2$-space (which can be used to infer properties of the solution to the missing mass problem) but also – when compared to the geometry of all galaxies – indicates the underlying radial dependence of the disk mass-to-light ratio.
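
A minimal sketch of the geometric classification described above, computing $r_{bar}$ and $r_{obs}$ for a single rotation curve (the acceleration profiles below are toy placeholders, not SPARC data):

    # Classify a rotation curve in g2-space by comparing the radii at which the
    # baryonic and observed accelerations peak. Profiles are toy placeholders.
    import numpy as np

    r     = np.linspace(0.5, 20.0, 100)                  # radius, kpc
    g_bar = 1.0e-10 * np.exp(-(r - 4.0)**2 / 8.0)        # toy baryonic acceleration
    g_obs = 1.2e-10 * np.exp(-(r - 6.0)**2 / 12.0)       # toy observed acceleration

    r_bar = r[np.argmax(g_bar)]
    r_obs = r[np.argmax(g_obs)]
    print('rightward-curving' if r_obs > r_bar else 'not rightward-curving')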

Read this paper on arXiv…

J. Petersen
Tue, 25 Jun 19
5/68

Comments: 10 pages, 6 figures

The 8-parameter Fisher-Bingham distribution on the sphere [CL]

http://arxiv.org/abs/1906.08247


The Fisher-Bingham distribution ($\mathrm{FB}_8$) is an eight-parameter family of probability density functions (PDF) on $S^2$ that, under certain conditions, reduce to spherical analogues of bivariate normal PDFs. Due to difficulties in computing its overall normalization constant, applications have been mainly restricted to subclasses of $\mathrm{FB}_8$, such as the Kent ($\mathrm{FB}_5$) or von Mises-Fisher (vMF) distributions. However, these subclasses often do not adequately describe directional data that are not symmetric along great circles. The normalizing constant of $\mathrm{FB}_8$ can be numerically integrated, and recently Kume and Sei showed that it can be computed using an adjusted holonomic gradient method. Both approaches, however, can be computationally expensive. In this paper, I show that the normalization of $\mathrm{FB}_8$ can be expressed as an infinite sum consisting of hypergeometric functions, similar to that of the $\mathrm{FB}_5$. This allows the normalization to be computed under summation with adequate stopping conditions. I then fit the $\mathrm{FB}_8$ to a synthetic dataset using a maximum-likelihood approach and show its improvements over a fit with the more restrictive $\mathrm{FB}_5$ distribution.

Read this paper on arXiv…

T. Yuan
Thu, 20 Jun 19
14/51

Comments: 8 pages, 4 figures, code available at this https URL

Detecting new signals under background mismodelling [CL]

http://arxiv.org/abs/1906.06615


Searches for new astrophysical phenomena often involve several sources of non-random uncertainties which can lead to highly misleading results. Among these, model uncertainty arising from background mismodelling can dramatically compromise the sensitivity of the experiment under study. Specifically, overestimating the background distribution in the signal region increases the chances of missing new physics. Conversely, underestimating the background outside the signal region leads to an artificially enhanced sensitivity and a higher likelihood of claiming false discoveries. The aim of this work is to provide a unified statistical strategy to perform modelling, estimation, inference, and signal characterization under background mismodelling. The proposed method allows one to incorporate the (partial) scientific knowledge available on the background distribution and provides a data-updated version of it in a purely nonparametric fashion without requiring the specification of prior distributions. Applications in the context of dark matter searches and radio surveys show how the tools presented in this article can be used to incorporate non-stochastic uncertainty due to instrumental noise and to overcome violations of classical distributional assumptions in stacking experiments.

Read this paper on arXiv…

S. Algeri
Tue, 18 Jun 19
59/73

Comments: N/A

Exact enumeration approach to first-passage time distribution of non-Markov random walks [CL]

http://arxiv.org/abs/1906.02081


We propose an analytical approach to study non-Markov random walks by employing an exact enumeration method. Using the method, we derive an exact expansion for the first-passage time (FPT) distribution for any continuous, differentiable non-Markov random walk with Gaussian or non-Gaussian multivariate distribution. As an example, we study the FPT distribution of a fractional Brownian motion with a Hurst exponent $H\in(1/2,1)$ that describes numerous non-Markov stochastic phenomena in physics, biology and geology, and for which the limit $H=1/2$ represents a Markov process.
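
For comparison with such analytical expansions, first-passage times of fractional Brownian motion can also be estimated by direct simulation; a short sketch using exact Cholesky sampling of fractional Gaussian noise (a standard construction, not the paper's enumeration method; the boundary and parameters are illustrative):

    # Monte Carlo first-passage times of fractional Brownian motion (Hurst H),
    # built from exact Cholesky sampling of fractional Gaussian noise.
    import numpy as np

    def fbm_paths(H, n_steps, n_paths, rng):
        k = np.arange(n_steps)
        # Autocovariance of unit-variance fractional Gaussian noise increments
        gamma = 0.5 * (np.abs(k - 1)**(2*H) - 2*np.abs(k)**(2*H) + np.abs(k + 1)**(2*H))
        cov = gamma[np.abs(k[:, None] - k[None, :])]
        L = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))
        noise = L @ rng.normal(size=(n_steps, n_paths))
        return np.cumsum(noise, axis=0)              # fBm paths, shape (n_steps, n_paths)

    rng = np.random.default_rng(6)
    paths = fbm_paths(H=0.75, n_steps=500, n_paths=2000, rng=rng)
    crossed = paths > 1.0                            # absorbing boundary at x = 1
    fpt = np.where(crossed.any(axis=0), crossed.argmax(axis=0), np.nan)
    print('median first-passage step:', np.nanmedian(fpt))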

Read this paper on arXiv…

S. Baghram, F. Nikakhtar, M. Tabar, et. al.
Fri, 7 Jun 19
46/49

Comments: 23 pages, 4 figures, 1 table and 5 appendices. Published version

Evolution of Novel Activation Functions in Neural Network Training with Applications to Classification of Exoplanets [IMA]

http://arxiv.org/abs/1906.01975


We present an analytical exploration of novel activation functions that arises from the integration of several ideas, leading to their implementation and subsequent use in the habitability classification of exoplanets. Neural networks, although a powerful engine in supervised methods, often require expensive tuning efforts for optimized performance. Habitability classes are hard to discriminate, especially when attributes used as hard markers of separation are removed from the data set. The solution is approached by investigating the analytical properties of the proposed activation functions. The theory of ordinary differential equations and fixed points is exploited to justify the “lack of tuning efforts” needed to achieve optimal performance compared to traditional activation functions. Additionally, the relationship between the proposed activation functions and the more popular ones is established through extensive analytical and empirical evidence. Finally, the activation functions have been implemented in a plain vanilla feed-forward neural network to classify exoplanets.

Read this paper on arXiv…

S. Saha, N. Nagaraj, A. Mathur, et. al.
Thu, 6 Jun 19
40/67

Comments: 41 pages, 11 figures

Precise photometric transit follow-up observations of five close-in exoplanets : update on their physical properties [EPA]

http://arxiv.org/abs/1905.11258


We report the results of high-precision photometric follow-up observations of five transiting hot Jupiters – WASP-33b, WASP-50b, WASP-12b, HATS-18b and HAT-P-36b. The observations were made with the 2m Himalayan Chandra Telescope at the Indian Astronomical Observatory, Hanle, and the 1.3m J. C. Bhattacharyya Telescope at the Vainu Bappu Observatory, Kavalur. This exercise is part of the capability testing of the two telescopes and their back-end instruments. Leveraging the large apertures of both telescopes, the images taken during several nights were used to produce transit light curves with high photometric S/N ($>200$) by performing differential photometry. In order to reduce fluctuations in the transit light curves due to various sources such as stellar activity and varying sky transparency, we preprocessed them using wavelet denoising and applied a Gaussian-process correlated-noise modeling technique while modeling the transit light curves. To demonstrate the efficiency of the wavelet denoising process we have also included the results without denoising. A state-of-the-art algorithm used for modeling the transit light curves provided more precise values of the physical parameters of the planets than reported earlier.
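
A minimal sketch of the wavelet-denoising preprocessing step using PyWavelets (the wavelet family, threshold rule, and light curve below are illustrative; the paper's specific choices and its Gaussian-process noise model are not reproduced):

    # Wavelet denoising of a noisy transit light curve with PyWavelets:
    # decompose, soft-threshold the detail coefficients, reconstruct.
    import numpy as np
    import pywt

    rng = np.random.default_rng(7)
    t = np.linspace(-0.1, 0.1, 1024)
    flux = 1.0 - 0.01 * (np.abs(t) < 0.03)             # toy box-shaped transit
    noisy = flux + rng.normal(scale=0.002, size=t.size)

    coeffs = pywt.wavedec(noisy, 'sym5', level=5)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # noise estimate from finest scale
    thresh = sigma * np.sqrt(2 * np.log(noisy.size))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, 'sym5')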

Read this paper on arXiv…

A. Chakrabarty and S. Sengupta
Tue, 28 May 19
22/82

Comments: 24 pages including 12 figures with 13 subfigures and 5 tables. It has been accepted for publishing in the Astronomical Journal

Projected Pupil Plane Pattern (PPPP) with artificial Neural Networks [IMA]

http://arxiv.org/abs/1905.09535


Focus anisoplanatism is a significant measurement error when using one single laser guide star (LGS) in an Adaptive Optics (AO) system, especially for the next generation of extremely large telescopes. An alternative LGS configuration, called Projected Pupil Plane Pattern (PPPP), solves this problem by launching a collimated laser beam across the full pupil of the telescope. If using a linear, modal reconstructor, the high laser power requirement ($\sim1000\,\mbox{W}$) renders PPPP uncompetitive with Laser Tomography AO. This work discusses easing the laser power requirements by using an artificial Neural Network (NN) as a non-linear reconstructor. We find that the non-linear NN significantly reduces the required measurement signal-to-noise ratio (SNR), lowering the PPPP laser power requirement to $\sim200\,\mbox{W}$ for a useful residual wavefront error (WFE). At this power level, the WFE becomes 160\,nm root mean square (RMS) and 125\,nm RMS when $r_0=0.098$\,m and $0.171$\,m respectively for turbulence profiles which are representative of conditions at the ESO Paranal observatory. In addition, it is shown that as a non-linear reconstructor, an NN can perform useful wavefront sensing using a beam-profile from one height as the input instead of the two profiles required as a minimum by the linear reconstructor.

Read this paper on arXiv…

H. Yang, C. Gutierrez, N. Bharmal, et. al.
Fri, 24 May 19
31/60

Comments: N/A

Nested sampling on non-trivial geometries [CL]

http://arxiv.org/abs/1905.09110


Metropolis nested sampling evolves a Markov chain from a current livepoint and accepts new points along the chain according to a version of the Metropolis acceptance ratio modified to satisfy the likelihood constraint, characteristic of nested sampling algorithms. The geometric nested sampling algorithm we present here is based on the Metropolis method, but treats parameters as though they represent points on certain geometric objects, namely circles, tori and spheres. For parameters which represent points on a circle or torus, the trial distribution is `wrapped’ around the domain of the posterior distribution such that samples cannot be rejected automatically when evaluating the Metropolis ratio due to being outside the sampling domain. Furthermore, this enhances the mobility of the sampler. For parameters which represent coordinates on the surface of a sphere, the algorithm transforms the parameters into a Cartesian coordinate system before sampling, which again makes sure no samples are automatically rejected, and provides a physically intuitive way of sampling the parameter space. We apply the geometric nested sampler to two types of toy model which include circular, toroidal and spherical parameters. We find that the geometric nested sampler generally outperforms \textsc{MultiNest} in both cases. Our implementation of the algorithm can be found at \url{https://github.com/SuperKam91/nested_sampling}.
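
The wrapping of the trial distribution for a circular parameter amounts to a modulo operation on the proposal; a minimal sketch (schematic only, not the authors' implementation):

    # Wrapped Gaussian proposal for a circular parameter: a trial point that steps
    # off the domain [0, 2*pi) re-enters on the other side instead of being rejected.
    import numpy as np

    def wrapped_proposal(phi, step, rng):
        return (phi + rng.normal(scale=step)) % (2.0 * np.pi)

    rng = np.random.default_rng(8)
    phi = 6.2                                         # near the upper edge of the domain
    print(wrapped_proposal(phi, step=0.5, rng=rng))   # always lands inside [0, 2*pi)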

Read this paper on arXiv…

K. Javid
Thu, 23 May 19
21/67

Comments: 13 pages, 11 figures, 28 equations

Broadband reflection spectroscopy of MAXI J1535-571 using AstroSat: Estimation of black hole mass and spin [HEAP]

http://arxiv.org/abs/1905.09253


We report the results from \textit{AstroSat} observations of the transient Galactic black hole X-ray binary MAXI J1535-571 during its hard-intermediate state of the 2017 outburst. We systematically study the individual and joint spectra from two simultaneously observing \textit{AstroSat} X-ray instruments, and probe and measure a number of parameter values of accretion disc, corona and reflection from the disc in the system using models with generally increasing complexities. Using our broadband ($1.3-70$ keV) X-ray spectrum, we clearly show that a soft X-ray instrument, which works below $\sim 10-12$ keV, alone cannot correctly characterize the Comptonizing component from the corona, thus highlighting the importance of broadband spectral analysis. By fitting the reflection spectrum with the latest version of the \textsc{relxill} family of relativistic reflection models, we constrain the black hole’s dimensionless spin parameter to be $0.67^{+0.16}_{-0.04}$. We also jointly use the reflection spectral component (\textsc{relxill}) and a general relativistic thin disc component (\texttt{Kerrbb}), and estimate the black hole’s mass and distance to be $10.39_{-0.62}^{+0.61} M_{\odot}$ and $5.4_{-1.1}^{+1.8}$ kpc respectively.

Read this paper on arXiv…

N. Sridhar, S. Bhattacharyya, S. Chandra, et. al.
Thu, 23 May 19
62/67

Comments: Accepted for publication in MNRAS