New radiographic image processing tested on the simple and double-flux platform at OMEGA [CL]

http://arxiv.org/abs/1705.10147


Ablation fronts and shocks are two radiative/hydrodynamic features ubiquitous in inertial confinement fusion (ICF). A specially designed shock-tube experiment was tested on the OMEGA laser facility to observe these two features evolve at once and to assess thermodynamical and radiative properties. It is a basic science experiment aimed at improving our understanding of shocked and ablated matter, which is critical to ICF design. At all times, these two moving “interfaces” separate the tube into three distinct zones where matter is either ablated, shocked or unshocked. The “simple-flux” and “double-flux” experiments, with one or two halfraum-plus-tube assemblies respectively, were devised to observe and image these zones using x-ray and visible imaging diagnostics. The possibility of observing all three regions at once was instrumental in our new radiographic image processing used to remove the backlighter background otherwise detrimental to quantitative measurement. By so doing, after processing the radiographic images of the 15 shots accumulated during the 2013 and 2015 campaigns, a quantitative comparison between experiments and our radiative hydrocode simulations was made possible. Several points of the principal Hugoniot of the aerogel used as a light material in the shock-tube were inferred from that comparison. Most surprisingly, rapid variations of the relative transmission in the ablated region were observed during radiographic irradiations while it remained constant in the shocked region. This effect might be attributed to the spectral distribution variability of the backlighter during the radiographic pulse. Numerically, that distribution is strongly dependent upon NLTE models and it could potentially be used as a means to discriminate among them.

Read this paper on arXiv…

O. Poujade, M. Ferri and I. Geoffray
Tue, 30 May 17
85/66

Comments: N/A

Comparison of statistical sampling methods with ScannerBit, the GAMBIT scanning module [CL]

http://arxiv.org/abs/1705.07959


We introduce ScannerBit, the statistics and sampling module of the public, open-source global fitting framework GAMBIT. ScannerBit provides a standardised interface to different sampling algorithms, enabling the use and comparison of multiple computational methods for inferring profile likelihoods, Bayesian posteriors, and other statistical quantities. The current version offers random, grid, raster, nested sampling, differential evolution, Markov Chain Monte Carlo (MCMC) and ensemble Monte Carlo samplers. We also announce the release of a new standalone differential evolution sampler, Diver, and describe its design, usage and interface to ScannerBit. We subject Diver and three other samplers (the nested sampler MultiNest, the MCMC GreAT, and the native ScannerBit implementation of the ensemble Monte Carlo algorithm TWalk) to a battery of statistical tests. For this we use a realistic physical likelihood function, based on the scalar singlet model of dark matter. We examine the performance of each sampler as a function of its adjustable settings, and the dimensionality of the sampling problem. We evaluate performance on four metrics: optimality of the best fit found, completeness in exploring the best-fit region, number of likelihood evaluations, and total runtime. For Bayesian posterior estimation at high resolution, TWalk provides the most accurate and timely mapping of the full parameter space. For profile likelihood analysis in less than about ten dimensions, we find that Diver and MultiNest score similarly in terms of best fit and speed, outperforming GreAT and TWalk; in ten or more dimensions, Diver substantially outperforms the other three samplers on all metrics.
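
Diver is its own package; as a rough illustration of the differential-evolution family of algorithms it belongs to (not Diver itself), here is a toy profile-likelihood-style fit using SciPy's implementation. The likelihood function and parameter bounds are invented for illustration.

    # Minimal sketch: differential evolution on a toy 2-D log-likelihood.
    # Illustrates the class of algorithm Diver implements; it is not Diver itself.
    import numpy as np
    from scipy.optimize import differential_evolution

    def negative_loglike(theta):
        """Toy bimodal log-likelihood (invented for illustration)."""
        x, y = theta
        peak1 = np.exp(-0.5 * ((x - 1.0)**2 + (y - 1.0)**2) / 0.1)
        peak2 = 0.5 * np.exp(-0.5 * ((x + 1.5)**2 + (y + 0.5)**2) / 0.3)
        return -np.log(peak1 + peak2 + 1e-300)

    bounds = [(-5.0, 5.0), (-5.0, 5.0)]          # parameter ranges
    result = differential_evolution(negative_loglike, bounds,
                                    popsize=20, tol=1e-8, seed=1)
    print("best fit:", result.x, "  -lnL:", result.fun)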

Read this paper on arXiv…

G. Workgroup, G. Martinez, J. McKay, et. al.
Wed, 24 May 17
14/70

Comments: 46 pages, 18 figures, 2 tables, submitted to EPJC

Noisy independent component analysis of auto-correlated components [CL]

http://arxiv.org/abs/1705.02344


We present a new method for the separation of superimposed, independent, auto-correlated components from noisy multi-channel measurements. The presented method simultaneously reconstructs and separates the components, taking all channels into account, and thereby increases the effective signal-to-noise ratio considerably, allowing separations even in the high noise regime. Characteristics of the measurement instruments can be included, allowing for application in complex measurement situations. Independent posterior samples can be provided, permitting error estimates on all desired quantities. Using the concept of information field theory, the algorithm is not restricted to any dimensionality of the underlying space or discretization scheme thereof.

Read this paper on arXiv…

J. Knollmuller and T. Ensslin
Tue, 9 May 17
42/82

Comments: N/A

Recurrence network measures for hypothesis testing using surrogate data: application to black hole light curves [CL]

http://arxiv.org/abs/1704.08606


Recurrence networks and the associated statistical measures have become important tools in the analysis of time series data. In this work, we test how effective the recurrence network measures are in analyzing real world data involving two main types of noise, white noise and colored noise. We use two prominent network measures as discriminating statistics for hypothesis testing using surrogate data for a specific null hypothesis that the data is derived from a linear stochastic process. We show that the characteristic path length is especially efficient as a discriminating measure, with the conclusions remaining reasonably accurate even with a limited number of data points in the time series. We also highlight an additional advantage of the network approach in identifying the dimensionality of the system underlying the time series through a convergence measure derived from the probability distribution of the local clustering coefficients. As examples of real world data, we use the light curves from a prominent black hole system and show that a combined analysis using three primary network measures can provide vital information regarding the nature of temporal variability of light curves from different spectroscopic classes.
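
A minimal sketch of the recurrence-network pipeline described above (time-delay embedding, epsilon-recurrence adjacency matrix, then characteristic path length and local clustering via networkx). The embedding parameters and recurrence rate below are illustrative placeholders, not the paper's choices.

    # Sketch: recurrence network from a scalar time series and its
    # characteristic path length (the discriminating statistic discussed above).
    import numpy as np
    import networkx as nx

    def embed(x, dim=3, tau=5):
        # time-delay embedding; dim and tau are illustrative placeholders
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    def recurrence_network(x, recurrence_rate=0.05, dim=3, tau=5):
        v = embed(x, dim, tau)
        d = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=-1)
        eps = np.percentile(d[d > 0], 100 * recurrence_rate)  # fixes the link density
        A = (d < eps).astype(int)
        np.fill_diagonal(A, 0)                 # no self-loops
        return nx.from_numpy_array(A)

    rng = np.random.default_rng(0)
    x = rng.standard_normal(500)               # white-noise surrogate series
    G = recurrence_network(x)
    giant = G.subgraph(max(nx.connected_components(G), key=len))
    print("characteristic path length:", nx.average_shortest_path_length(giant))
    print("mean local clustering:", np.mean(list(nx.clustering(G).values())))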

Read this paper on arXiv…

R. Jacob, K. Harikrishnan, R. Misra, et. al.
Fri, 28 Apr 17
29/55

Comments: 29 pages, 15 figures, submitted to Communications in Nonlinear Science and Numerical Simulation

Multifractal Analysis of Pulsar Timing Residuals: Assessment of Gravitational Waves Detection [SSA]

http://arxiv.org/abs/1704.08599


Relying on the multifractal behavior of pulsar timing residuals (PTRs), we examine the capability of Multifractal Detrended Fluctuation Analysis (MF-DFA) and Multifractal Detrending Moving Average Analysis (MF-DMA), modified by Singular Value Decomposition (SVD) and Adaptive Detrending (AD), to detect the footprint of gravitational waves (GWs) superimposed on PTRs. These methods enable us to clarify the type of GWs, which is related to the value of the Hurst exponent. We introduce three strategies, based on the generalized Hurst exponent and the width of the singularity spectrum, to determine the dimensionless amplitude of GWs. For a stochastic gravitational wave background with characteristic strain spectrum $\mathcal{H}_c(f)\sim \mathcal{A}f^{\zeta}$, a dimensionless amplitude $\mathcal{A}\gtrsim 10^{-17}$ can be recognized irrespective of the value of $\zeta$. We also utilize MF-DFA and MF-DMA to explore 20 millisecond pulsars observed by the Parkes Pulsar Timing Array (PPTA). Our analysis demonstrates a cross-over in the fluctuation function versus time scale for the observed timing residuals, a universal property, occurring at $s_{\times}\sim 60$ days. To assess the multifractal nature of the observed timing residuals, we apply the AD and SVD algorithms to the time series as pre-processing to remove superimposed trends as much as possible. The scaling exponents determined by MF-DFA and MF-DMA confirm that all data fall in the non-stationary class, a second universality feature. The corresponding Hurst exponent lies in the interval $H \in [0.35,0.85]$. The $q$-dependence of the generalized Hurst exponent demonstrates that the observed PTRs have multifractal behavior, and that this multifractality is mainly attributable to correlations in the data, which is another universality of the observed data sets.
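
The core MF-DFA recipe referred to above (profile, segment-wise detrending, q-th order fluctuation function, generalized Hurst exponents) can be sketched in a few lines. This is a generic textbook-style implementation with illustrative scales, q values and polynomial order, not the authors' code.

    # Sketch of MF-DFA: generalized Hurst exponents h(q) of a time series.
    import numpy as np

    def mfdfa(x, scales, qs, order=1):
        y = np.cumsum(x - np.mean(x))                  # profile
        Fq = np.zeros((len(qs), len(scales)))
        for j, s in enumerate(scales):
            ns = len(y) // s
            # non-overlapping segments, taken from the start and from the end
            segs = np.concatenate([y[:ns * s].reshape(ns, s),
                                   y[-ns * s:].reshape(ns, s)])
            t = np.arange(s)
            f2 = []
            for seg in segs:
                coef = np.polyfit(t, seg, order)       # local detrending
                f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            f2 = np.array(f2)
            for i, q in enumerate(qs):
                if q == 0:
                    Fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
                else:
                    Fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
        # h(q) is the log-log slope of Fq(s) versus s
        return [np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0]
                for i in range(len(qs))]

    rng = np.random.default_rng(1)
    x = rng.standard_normal(4096)                      # white noise: h(q) ~ 0.5
    scales = np.unique(np.logspace(1.2, 3.0, 12).astype(int))
    print(mfdfa(x, scales, qs=[-2, 0, 2]))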

Read this paper on arXiv…

I. Eghdami, H. Panahi and S. Movahed
Fri, 28 Apr 17
30/55

Comments: 17 pages, 13 figures and 2 tables

A Fresh Approach to Forecasting in Astroparticle Physics and Dark Matter Searches [IMA]

http://arxiv.org/abs/1704.05458


We present a toolbox of new techniques and concepts for the efficient forecasting of experimental sensitivities. These are applicable to a large range of scenarios in (astro-)particle physics, and based on the Fisher information formalism. Fisher information provides an answer to the question “what is the maximum extractable information from a given observation?” It is a common tool for the forecasting of experimental sensitivities in many branches of science, but rarely used in astroparticle physics or searches for particle dark matter. After briefly reviewing the Fisher information matrix of general Poisson likelihoods, we propose very compact expressions for estimating expected exclusion and discovery limits (equivalent counts method). We demonstrate by comparison with Monte Carlo results that they remain surprisingly accurate even deep in the Poisson regime. We show how correlated background systematics can be efficiently accounted for by a treatment based on Gaussian random fields. Finally, we introduce the novel concept of Fisher information flux. It can be thought of as a generalization of the commonly used signal-to-noise ratio, while accounting for the non-local properties and saturation effects of background and instrumental uncertainties. It is a powerful and flexible tool ready to be used as a core concept for informed strategy development in astroparticle physics and searches for particle dark matter.
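
The Poisson Fisher matrix mentioned above has a compact form, $I_{ij} = \sum_k \mu_k^{-1}\,\partial_i\mu_k\,\partial_j\mu_k$ for expected bin counts $\mu_k(\theta)$. A toy finite-difference evaluation (the binned power-law-plus-background model is invented for illustration, not taken from the paper):

    # Sketch: Fisher matrix of a binned Poisson likelihood and the resulting
    # forecast parameter errors, evaluated by finite differences.
    import numpy as np

    energies = np.linspace(1.0, 10.0, 20)              # bin centres

    def expected_counts(theta):
        norm, index, bkg = theta
        return norm * energies**(-index) + bkg         # mu_k(theta)

    def fisher_matrix(theta, step=1e-4):
        mu = expected_counts(theta)
        grads = []
        for i in range(len(theta)):
            tp, tm = np.array(theta, float), np.array(theta, float)
            tp[i] += step; tm[i] -= step
            grads.append((expected_counts(tp) - expected_counts(tm)) / (2 * step))
        grads = np.array(grads)
        return grads @ np.diag(1.0 / mu) @ grads.T     # sum_k dmu_i dmu_j / mu_k

    F = fisher_matrix([100.0, 2.0, 5.0])
    print("forecast 1-sigma errors:", np.sqrt(np.diag(np.linalg.inv(F))))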

Read this paper on arXiv…

T. Edwards and C. Weniger
Thu, 20 Apr 17
1/49

Comments: 19 pages, 12 figures

Dynamic nested sampling: an improved algorithm for parameter estimation and evidence calculation [CL]

http://arxiv.org/abs/1704.03459


We introduce dynamic nested sampling: a generalisation of the nested sampling algorithm in which the number of “live points” varies to allocate samples more efficiently. In empirical tests the new method increases accuracy by up to a factor of ~8 for parameter estimation and ~3 for evidence calculation compared to standard nested sampling with the same number of samples – equivalent to speeding up the computation by factors of ~64 and ~9 respectively. In addition, unlike in standard nested sampling, more accurate results can be obtained by continuing the calculation for longer. Dynamic nested sampling can be easily included in existing nested sampling software such as MultiNest and PolyChord.

Read this paper on arXiv…

E. Higson, W. Handley, M. Hobson, et. al.
Thu, 13 Apr 17
36/56

Comments: 16 pages + appendix, 8 figures, submitted to Bayesian Analysis. arXiv admin note: text overlap with arXiv:1703.09701

Periodic behaviour of coronal mass ejections, eruptive events, and solar activity proxies during solar cycles 23 and 24 [SSA]

http://arxiv.org/abs/1704.02336


We report on the parallel analysis of the periodic behaviour of coronal mass ejections (CMEs) based on 21 years [1996-2016] of observations with the SOHO/LASCO-C2 coronagraph, solar flares, prominences, and several proxies of solar activity. We consider values of the rates globally and, whenever possible, distinguish between solar hemispheres and solar cycles 23 and 24. Periodicities are investigated using both frequency (periodogram) and time-frequency (wavelet) analysis. We find that these different processes, in addition to following the ~11-year Solar Cycle, exhibit diverse statistically significant oscillations with properties common to all solar, coronal, and heliospheric processes: variable periodicity, intermittency, asymmetric development in the northern and southern solar hemispheres, and largest amplitudes during the maximum phase of solar cycles, being more pronounced during solar cycle 23 than the weaker cycle 24. However, our analysis reveals an extremely complex and diverse situation. For instance, there exists very limited commonality for periods of less than one year. The few exceptions are the periods of 3.1-3.2 months found in the global occurrence rates of CMEs and in the sunspot area (SSA) and those of 5.9-6.1 months found in the northern hemisphere. Mid-range periods of ~1 and ~2 years are more widespread among the studied processes, but exhibit a very distinct behaviour with the first one being present only in the northern hemisphere and the second one only in the southern hemisphere. These periodic behaviours likely result from the complexity of the underlying physical processes, prominently the emergence of magnetic flux.
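
As a pointer to the frequency-domain (periodogram) part of such an analysis, here is a toy Lomb-Scargle example on synthetic, unevenly sampled rates using astropy; the data and periods are placeholders, not the SOHO/LASCO-C2 series.

    # Sketch: Lomb-Scargle periodogram of an unevenly sampled activity rate.
    import numpy as np
    from astropy.timeseries import LombScargle

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 21 * 365.25, 800))            # days, ~21-yr span
    rate = 10 + 3 * np.sin(2 * np.pi * t / (11 * 365.25)) \
              + 1 * np.sin(2 * np.pi * t / 180.0) + rng.normal(0, 1, t.size)

    frequency, power = LombScargle(t, rate).autopower(maximum_frequency=1 / 30.0)
    best_period = 1.0 / frequency[np.argmax(power)]
    print("strongest period: %.1f days" % best_period)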

Read this paper on arXiv…

T. Barlyaeva, J. Wojak, P. Lamy, et. al.
Tue, 11 Apr 17
23/62

Comments: 32 pages, 15 figures, 2 tables

Fast and scalable Gaussian process modeling with applications to astronomical time series [IMA]

http://arxiv.org/abs/1703.09710


The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large datasets. Gaussian Processes are a popular class of models used for this purpose but, since the computational cost scales as the cube of the number of data points, their application has been limited to relatively small datasets. In this paper, we present a method for Gaussian Process modeling in one dimension where the computational requirements scale linearly with the size of the dataset. We demonstrate the method by applying it to simulated and real astronomical time series datasets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically-driven damped harmonic oscillators – providing a physical motivation for and interpretation of this choice – but we also demonstrate that it is effective in many other cases. We present a mathematical description of the method, the details of the implementation, and a comparison to existing scalable Gaussian Process methods. The method is flexible, fast, and most importantly, interpretable, with a wide range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
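
For contrast, the direct O(N^3) Gaussian-process marginal likelihood that such a scalable method avoids looks like the sketch below. The damped-cosine kernel stands in for a single term of the exponential mixture discussed above; the data and parameters are toy values, and this is not the released implementation.

    # Sketch: direct (cubic-cost) GP log-likelihood with a damped-cosine kernel.
    import numpy as np

    def damped_cosine_kernel(t1, t2, amp=1.0, decay=0.1, omega=2.0):
        tau = np.abs(t1[:, None] - t2[None, :])
        return amp * np.exp(-decay * tau) * np.cos(omega * tau)

    def gp_log_likelihood(t, y, yerr, **kw):
        K = damped_cosine_kernel(t, t, **kw) + np.diag(yerr**2)
        L = np.linalg.cholesky(K)                       # the O(N^3) step
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        return -0.5 * y @ alpha - np.sum(np.log(np.diag(L))) \
               - 0.5 * len(y) * np.log(2 * np.pi)

    rng = np.random.default_rng(3)
    t = np.sort(rng.uniform(0, 50, 300))
    y = np.sin(2.0 * t) * np.exp(-0.02 * t) + rng.normal(0, 0.1, t.size)
    print(gp_log_likelihood(t, y, yerr=np.full(t.size, 0.1)))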

Read this paper on arXiv…

D. Foreman-Mackey, E. Agol, R. Angus, et. al.
Thu, 30 Mar 17
29/69

Comments: Submitted to the AAS Journals. Comments welcome. Code available: this https URL

Flare forecasting at the Met Office Space Weather Operations Centre [SSA]

http://arxiv.org/abs/1703.06754


The Met Office Space Weather Operations Centre produces 24/7/365 space weather guidance, alerts, and forecasts to a wide range of government and commercial end users across the United Kingdom. Solar flare forecasts are one of its products, which are issued multiple times a day in two forms: forecasts for each active region on the solar disk over the next 24 hours, and full-disk forecasts for the next four days. Here the forecasting process is described in detail, as well as a first verification of archived forecasts using methods commonly used in operational weather prediction. Real-time verification available for operational flare forecasting use is also described. The influence of human forecasters is highlighted, with human-edited forecasts outperforming original model results, and forecasting skill decreasing over longer forecast lead times.
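
Two verification measures of the kind commonly used in operational weather prediction, the Brier score and the Brier skill score against a climatological reference, sketched with invented forecast/outcome pairs (treat this as illustrative; the paper's exact metric set is not reproduced here).

    # Sketch: Brier score and Brier skill score for probabilistic flare forecasts.
    import numpy as np

    forecast_prob = np.array([0.7, 0.1, 0.4, 0.9, 0.2, 0.05])  # issued P(flare)
    observed      = np.array([1,   0,   0,   1,   1,   0   ])  # flare occurred?

    brier = np.mean((forecast_prob - observed) ** 2)
    climatology = np.mean(observed)                  # reference: constant base rate
    brier_ref = np.mean((climatology - observed) ** 2)
    skill = 1.0 - brier / brier_ref                  # >0 means better than reference
    print("Brier score: %.3f, skill score: %.3f" % (brier, skill))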

Read this paper on arXiv…

S. Murray, S. Bingham, M. Sharpe, et. al.
Tue, 21 Mar 17
47/80

Comments: Accepted for publication in Space Weather. 18 pages, 8 figures, 3 tables

Parametric analysis of Cherenkov light LDF from EAS in the range 30-3000 TeV for primary gamma rays and nuclei [IMA]

http://arxiv.org/abs/1702.07796


A simple ‘knee-like’ approximation of the Lateral Distribution Function (LDF) of Cherenkov light emitted by EAS (extensive air showers) in the atmosphere is proposed for solving various tasks of data analysis in HiSCORE and other wide angle ground-based experiments designed to detect gamma rays and cosmic rays with energies above tens of TeV. Simulation-based parametric analysis of individual LDF curves revealed that at radial distances of 20-500 m the 5-parameter ‘knee-like’ approximation fits individual LDFs as well as the mean LDF with very good accuracy. In this paper we demonstrate the efficiency and flexibility of the ‘knee-like’ LDF approximation for various primary particles and shower parameters, and the advantages of applying it to suppress the proton background and select primary gamma rays.
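
A sketch of what fitting such a parameterization looks like in practice. The 5-parameter function below is a generic smoothly broken ("knee-like") power law used as a stand-in, not the exact functional form from the paper, and the simulated signals are toy values.

    # Sketch: fitting a knee-like radial profile to Cherenkov-light LDF samples.
    import numpy as np
    from scipy.optimize import curve_fit

    def knee_ldf(r, amp, r_knee, slope1, slope2, smooth):
        """Smoothly broken power law: slope1 inside the knee, slope2 outside."""
        return amp * (r / r_knee) ** (-slope1) * \
               (1.0 + (r / r_knee) ** smooth) ** ((slope1 - slope2) / smooth)

    r = np.linspace(20, 500, 50)                       # core distance [m]
    rng = np.random.default_rng(4)
    true = knee_ldf(r, 1e5, 120.0, 0.8, 2.5, 4.0)
    ldf = true * rng.lognormal(0.0, 0.05, r.size)      # simulated per-station signal

    popt, pcov = curve_fit(knee_ldf, r, ldf, p0=[1e5, 100.0, 1.0, 2.0, 3.0])
    print("fitted knee radius: %.1f m" % popt[1])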

Read this paper on arXiv…

A. Elshoukrofy, E. Postnikov, E. Korosteleva, et. al.
Tue, 28 Feb 17
2/69

Comments: 7 pages, 1 table, 2 figures; Bulletin of the Russian Academy of Sciences: Physics, 81, 4 (2017), in press

Parametric Analysis of Cherenkov Light LDF from EAS for High Energy Gamma Rays and Nuclei: Ways of Practical Application [IMA]

http://arxiv.org/abs/1702.08390


In this paper we propose a ‘knee-like’ approximation of the lateral distribution of the Cherenkov light from extensive air showers in the energy range 30-3000 TeV and study the possibility of its practical application in high energy ground-based gamma-ray astronomy experiments (in particular, in TAIGA-HiSCORE). The approximation has a very good accuracy for individual showers and can be easily simplified for practical application in the HiSCORE wide angle timing array under the condition of a limited number of triggered stations.

Read this paper on arXiv…

A. Elshoukrofy, E. Postnikov, E. Korosteleva, et. al.
Tue, 28 Feb 17
24/69

Comments: 4 pages, 5 figures, proceedings of ISVHECRI 2016 (19th International Symposium on Very High Energy Cosmic Ray Interactions)

Background rejection method for tens of TeV gamma-ray astronomy applicable to wide angle timing arrays [IMA]

http://arxiv.org/abs/1702.07756


A ‘knee-like’ approximation of Cherenkov light Lateral Distribution Functions, which we developed earlier, is now used for practical background rejection in high energy (tens and hundreds of TeV) gamma-ray astronomy. In this work we apply this technique to the HiSCORE wide angle timing array, consisting of Cherenkov light detectors with a spacing of 100 m covering 0.2 km$^2$ at present and up to 5 km$^2$ in the future. However, it can be applied to other similar arrays. We also show that a multivariable approach (using 3 parameters of the knee-like approximation) allows us to reach a high level of background rejection, although the rejection power strongly depends on the number of hit detectors.

Read this paper on arXiv…

A. Elshoukrofy, E. Postnikov and L. Sveshnikova
Tue, 28 Feb 17
33/69

Comments: 5 pages, 3 figures; proceedings of the 2nd International Conference on Particle Physics and Astrophysics (ICPPA-2016)

Primary gamma ray selection in a hybrid timing/imaging Cherenkov array [IMA]

http://arxiv.org/abs/1702.07768


This work is a methodical study of reconstruction techniques for hybrid imaging/timing Cherenkov observations. This type of hybrid array is to be realized at the TAIGA gamma-ray observatory, intended for very high energy gamma-ray astronomy (>30 TeV), and aims at combining the cost-effective timing-array technique with imaging telescopes. Hybrid operation of these two techniques can offer a relatively cheap way to develop a large-area array. The joint approach to gamma-ray event selection was investigated on both types of simulated data: the image parameters from the telescopes, and the shower parameters reconstructed from the timing array. The optimal set of imaging and shower parameters to combine is identified. The cosmic-ray background suppression factor is calculated as a function of distance and energy. The optimal selection technique suppresses the cosmic-ray background by about 2 orders of magnitude at distances up to 450 m for energies greater than 50 TeV.

Read this paper on arXiv…

E. Postnikov, A. Grinyuk, L. Kuzmichev, et. al.
Tue, 28 Feb 17
57/69

Comments: 4 pages, 5 figures; proceedings of the 19th International Symposium on Very High Energy Cosmic Ray Interactions (ISVHECRI 2016)

Hybrid method for identifying mass groups of primary cosmic rays in the joint operation of IACTs and wide angle Cherenkov timing arrays [IMA]

http://arxiv.org/abs/1702.08302


This work is a methodical study of another application of the hybrid method originally aimed at gamma/hadron separation in the TAIGA experiment. In the present paper the technique is applied to distinguish between different mass groups of cosmic rays in the energy range 200 TeV – 500 TeV. The study was based on simulation data for the TAIGA prototype and included analysis of the geometrical form of images produced by different nuclei in the IACT simulation, as well as of the shower core parameters reconstructed from the timing-array simulation. We show that the hybrid method can be effective enough to distinguish precisely between mass groups of cosmic rays.

Read this paper on arXiv…

E. Postnikov, A. Grinyuk, L. Kuzmichev, et. al.
Tue, 28 Feb 17
60/69

Comments: 6 pages, 3 figures; proceedings of the 2nd International Conference on Particle Physics and Astrophysics (ICPPA-2016)

Methodology to create a new Total Solar Irradiance record: Making a composite out of multiple data records [SSA]

http://arxiv.org/abs/1702.02341


Many observational records critically rely on our ability to merge different (and not necessarily overlapping) observations into a single composite. We provide a novel and fully-traceable approach for doing so, which relies on a multi-scale maximum likelihood estimator. This approach overcomes the problem of data gaps in a natural way and uses data-driven estimates of the uncertainties. We apply it to the total solar irradiance (TSI) composite, which is currently being revised and is critical to our understanding of solar radiative forcing. While the final composite is pending decisions on what corrections to apply to the original observations, we find that the new composite is in closest agreement with the PMOD composite and the NRLTSI2 model. In addition, we evaluate long-term uncertainties in the TSI, which reveal a 1/f scaling.

Read this paper on arXiv…

T. Wit, G. Kopp, C. Frohlich, et. al.
Thu, 9 Feb 17
13/67

Comments: slightly expanded version of a manuscript to appear in Geophysical Research Letters (2017)

Corral Framework: Trustworthy and Fully Functional Data Intensive Parallel Astronomical Pipelines [IMA]

http://arxiv.org/abs/1701.05566


Data processing pipelines are among the most common kinds of astronomical software. These programs are chains of processes that transform raw data into valuable information. In this work a Python framework for astronomical pipeline generation is presented. It features a design pattern (Model-View-Controller) on top of a SQL relational database, capable of handling custom data models, processing stages, and result communication alerts, as well as producing automatic quality and structural measurements. This pattern provides separation of concerns between the user logic, the data models, and the processing flow inside the pipeline, delivering multiprocessing and distributed computing capabilities for free. For the astronomical community this means an improvement over previous data processing pipelines, by sparing the programmer from dealing with the processing flow and parallelization issues, and letting them focus solely on the algorithms involved in the successive data transformations. This software, as well as working examples of pipelines, is available to the community at https://github.com/toros-astro.

Read this paper on arXiv…

J. Cabral, B. Sanchez, M. Beroiz, et. al.
Mon, 23 Jan 17
15/55

Comments: 8 pages, 2 figures, submitted for consideration at Astronomy and Computing. Code available at this https URL

From Blackbirds to Black Holes: Investigating Capture-Recapture Methods for Time Domain Astronomy [HEAP]

http://arxiv.org/abs/1701.03801


In time domain astronomy, recurrent transients present a special problem: how to infer total populations from limited observations. Monitoring observations may give a biased view of the underlying population due to limitations on observing time, visibility and instrumental sensitivity. A similar problem exists in the life sciences, where animal populations (such as migratory birds) or disease prevalence must be estimated from sparse and incomplete data. The class of methods termed Capture-Recapture is used to reconstruct population estimates from time-series records of encounters with the study population. This paper investigates the performance of Capture-Recapture methods in astronomy via a series of numerical simulations. The Blackbirds code simulates monitoring of populations of transients, in this case accreting binary stars (neutron star or black hole accreting from a stellar companion), under a range of observing strategies. We first generate realistic light-curves for populations of binaries with contrasting orbital period distributions. These models are then randomly sampled at observing cadences typical of existing and planned monitoring surveys. The classical capture-recapture methods (the Lincoln-Peterson and Schnabel estimators and related techniques) and newer methods implemented in the Rcapture package are compared. A general exponential model based on the radioactive decay law is introduced, and demonstrated to recover (at 95% confidence) the underlying population abundance and duty cycle, in a fraction of the observing visits (10-50%) required to discover all the sources in the simulation. Capture-Recapture is a promising addition to the toolbox of time domain astronomy, and methods implemented in R by the biostats community can be readily called from within Python.
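
For reference, the simplest of the classical estimators mentioned above is the two-visit Lincoln-Peterson estimator (shown here with Chapman's small-sample correction); the counts are invented for illustration.

    # Sketch: Lincoln-Peterson capture-recapture estimate of a total population.
    n1 = 25     # sources detected ("marked") in the first monitoring epoch
    n2 = 30     # sources detected in the second epoch
    m2 = 12     # sources detected in both epochs ("recaptures")

    lincoln_peterson = n1 * n2 / m2                    # classical estimator of N
    chapman = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1       # bias-corrected for small m2

    print("Lincoln-Peterson: %.1f, Chapman: %.1f" % (lincoln_peterson, chapman))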

Read this paper on arXiv…

S. Laycock
Tue, 17 Jan 17
75/81

Comments: Accepted to New Astronomy. 11 pages, 8 figures (refereed version prior to editorial process)

Quasi-oscillatory dynamics observed in ascending phase of the flare on March 6, 2012 [SSA]

http://arxiv.org/abs/1612.09562


Context. The dynamics of the flaring loops in active region (AR) 11429 are studied. The observed dynamics consist of several evolution stages of the flaring loop system during both the ascending and descending phases of the registered M-class flare. The dynamical properties can also be classified by different types of magnetic reconnection, related plasma ejection and aperiodic flows, quasi-periodic oscillatory motions, and rapid temperature and density changes, among others. The focus of the present paper is on a specific time interval during the ascending (pre-flare) phase. Aims. The goal is to understand the quasi-periodic behavior in both space and time of the magnetic loop structures during the considered time interval. Methods. We have studied the characteristic location, motion, and periodicity properties of the flaring loops by examining space-time diagrams and intensity variation analysis along the coronal magnetic loops using AIA intensity and HMI magnetogram images (from the Solar Dynamics Observatory (SDO)). Results. We detected bright plasma blobs along the coronal loop during the ascending phase of the solar flare, the intensity variations of which clearly show quasi-periodic behavior. We also determined the periods of these oscillations. Conclusions. Two different interpretations are presented for the observed dynamics. Firstly, the oscillations are interpreted as the manifestation of non-fundamental harmonics of longitudinal standing acoustic oscillations driven by the thermodynamically nonequilibrium background (with time variable density and temperature). The second possible interpretation we provide is that the observed bright blobs could be a signature of a strongly twisted coronal loop that is kink unstable.

Read this paper on arXiv…

E. Philishvili, B. Shergelashvili, T. Zaqarashvili, et. al.
Mon, 2 Jan 17
15/45

Comments: 12 pages, 10 figures, A&A, in press

Method of frequency dependent correlations: investigating the variability of total solar irradiance [SSA]

http://arxiv.org/abs/1612.07494


This paper contributes to the field of modeling and hindcasting of the total solar irradiance (TSI) based on different proxy data that extend further back in time than the TSI that is measured from satellites.
We introduce a simple method to analyze persistent frequency-dependent correlations (FDCs) between the time series and use these correlations to hindcast missing historical TSI values. We try to avoid arbitrary choices of the free parameters of the model by computing them using an optimization procedure. The method can be regarded as a general tool for pairs of data sets, where correlating and anticorrelating components can be separated into non-overlapping regions in frequency domain.
Our method is based on low-pass and band-pass filtering with a Gaussian transfer function combined with de-trending and computation of envelope curves.
We find a major discrepancy between the historical proxies and satellite-measured targets: a large variance is detected between the low-frequency parts of the targets, while the low-frequency behavior of the different proxy measurement series is mutually consistent with high precision. We also show that even though the rotational signal is not strongly manifested in the targets and proxies, it becomes clearly visible in the FDC spectrum.
The application of the new method to solar data allows us to obtain important insights into the different TSI modeling procedures and their capabilities for hindcasting based on the directly observed time intervals.
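
A sketch of the Gaussian-transfer-function band-pass filtering that the method builds on, applied to a synthetic proxy series; the centre frequency and width below are illustrative choices, not the values used in the paper.

    # Sketch: band-pass filtering with a Gaussian transfer function in the
    # frequency domain, the basic building block of the FDC method above.
    import numpy as np

    def gaussian_bandpass(signal, dt, f0, sigma_f):
        """Multiply the FFT of `signal` by a Gaussian centred on +/- f0."""
        freqs = np.fft.fftfreq(signal.size, dt)
        transfer = np.exp(-0.5 * ((np.abs(freqs) - f0) / sigma_f) ** 2)
        return np.fft.ifft(np.fft.fft(signal) * transfer).real

    t = np.arange(0, 2000.0, 1.0)                            # days
    proxy = np.sin(2 * np.pi * t / 27.0) + 0.5 * np.sin(2 * np.pi * t / 365.0)
    rotational = gaussian_bandpass(proxy, dt=1.0, f0=1 / 27.0, sigma_f=0.005)
    print("variance passed by the rotational band:", rotational.var())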

Read this paper on arXiv…

J. Pelt, M. Kapyla and N. Olspert
Fri, 23 Dec 16
2/60

Comments: 19 pages, 5 figures, accepted for publication in Astronomy & Astrophysics

When "Optimal Filtering" Isn't [CL]

http://arxiv.org/abs/1611.07856


The so-called “optimal filter” analysis of a microcalorimeter’s x-ray pulses is statistically optimal only if all pulses have the same shape, regardless of energy. The shapes of pulses from a nonlinear detector can and do depend on the pulse energy, however. A pulse-fitting procedure that we call “tangent filtering” accounts for the energy dependence of the shape and should therefore achieve superior energy resolution. We take a geometric view of the pulse-fitting problem and give expressions to predict how much the energy resolution stands to benefit from such a procedure. We also demonstrate the method with a case study of K-line fluorescence from several 3d transition metals. The method improves the resolution from 4.9 eV to 4.2 eV at the Cu K$\alpha$ line (8.0 keV).
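
For reference, the standard "optimal filter" estimate that the paper generalizes is a noise-weighted template fit in the frequency domain; a sketch with a synthetic pulse and a flat noise spectrum (placeholders, not the instrument's processing chain).

    # Sketch: classic optimal-filter pulse-height estimate, assuming one fixed pulse shape.
    import numpy as np

    def optimal_filter_amplitude(data, template, noise_psd):
        D = np.fft.rfft(data)
        S = np.fft.rfft(template)
        num = np.sum((np.conj(S) * D / noise_psd).real)
        den = np.sum((np.abs(S) ** 2 / noise_psd).real)
        return num / den          # best-fit amplitude of the template in the data

    n = 1024
    t = np.arange(n)
    template = np.exp(-t / 200.0) - np.exp(-t / 20.0)        # idealized pulse shape
    rng = np.random.default_rng(5)
    pulse = 3.2 * template + rng.normal(0, 0.05, n)          # "measured" pulse
    noise_psd = np.full(n // 2 + 1, 0.05 ** 2 * n)           # flat (white) noise PSD
    print("estimated amplitude:", optimal_filter_amplitude(pulse, template, noise_psd))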

Read this paper on arXiv…

J. Fowler, B. Alpert, W. Doriese, et. al.
Thu, 24 Nov 16
39/54

Comments: Submitted to the Proceedings of the 2016 Applied Superconductivity Conference

Filling the gaps: Gaussian mixture models from noisy, truncated or incomplete samples [IMA]

http://arxiv.org/abs/1611.05806


We extend the common mixtures-of-Gaussians density estimation approach to account for known sample incompleteness by simultaneous imputation from the current model. The method, called GMMis, generalizes existing Expectation-Maximization techniques for truncated data to arbitrary truncation geometries and probabilistic rejection. It can incorporate a uniform background distribution as well as independent multivariate normal measurement errors for each of the observed samples, and recovers an estimate of the error-free distribution from which both observed and unobserved samples are drawn. We compare GMMis to the standard Gaussian mixture model for simple test cases with different types of incompleteness, and apply it to observational data from the NASA Chandra X-ray telescope. The Python code is capable of performing density estimation with millions of samples and thousands of model components and is released as an open-source package at https://github.com/pmelchior/pyGMMis

Read this paper on arXiv…

P. Melchior and A. Goulding
Fri, 18 Nov 16
49/60

Comments: 12 pages, 6 figures, submitted to Computational Statistics & Data Analysis

A model independent safeguard for unbinned Profile Likelihood [CL]

http://arxiv.org/abs/1610.02643


We present a general method to include residual unmodeled background shape uncertainties in profile-likelihood-based statistical tests for high energy physics and astroparticle physics counting experiments. This approach provides a simple and natural protection against undercoverage, thus lowering the chances of a false discovery or of an over-constrained confidence interval, and allows a natural transition to unbinned space. An unbinned likelihood enhances the sensitivity and allows optimal usage of the information in the data and the models.
We show that the asymptotic behavior of the test statistic can be regained in cases where the model fails to describe the true background behavior, and present 1D and 2D case studies for model-driven and data-driven background models. The resulting penalty on sensitivities follows the actual discrepancy between the data and the models, and is asymptotically reduced to zero with increasing knowledge.

Read this paper on arXiv…

N. Priel, L. Rauch, H. Landsman, et. al.
Tue, 11 Oct 16
55/78

Comments: N/A

Long-period oscillations of active region patterns: least-squares mapping on second-order curves [SSA]

http://arxiv.org/abs/1610.01509


Active regions (ARs) are the main sources of variety in solar dynamic events. Automated detection and identification tools need to be developed for solar features for a deeper understanding of the solar cycle. Of particular interest here are the dynamical properties of the ARs, regardless of their internal structure and sunspot distribution. We studied the oscillatory dynamics of two ARs: NOAA 11327 and NOAA 11726 using two different methods of pattern recognition. We developed a novel method of automated AR border detection and compared it to an existing method as a proof of concept. The first method uses least-squares fitting on the smallest ellipse enclosing the AR, while the second method applies regression on the convex hull. After processing the data, we found that the axes and the inclination angle of the ellipse and the convex hull oscillate in time. These oscillations are interpreted as the second harmonic of the standing long-period kink oscillations (with the node at the apex) of the magnetic flux tube connecting the two main sunspots of the ARs. In both ARs we have estimated the distribution of the phase speed magnitude along the magnetic tubes (along the two main spots) by interpreting the obtained oscillation of the inclination angle as the standing second harmonic kink mode. After comparing the obtained results for fast and slow kink modes, we conclude that both of these modes are good candidates to explain the observed oscillations of the AR inclination angles, as in the high plasma $\beta$ regime the phase speeds of these modes are comparable and on the order of the Alfvén speed. Based on the properties of the observed oscillations, we detected the appropriate depth of the sunspot patterns, which coincides with estimates made by helioseismic methods. The latter analysis can be used as a basis for developing a magneto-seismological tool for ARs.

Read this paper on arXiv…

G. Dumbadze, B. Shergelashvili, V. Kukhianidze, et. al.
Thu, 6 Oct 16
22/67

Comments: 10 pages, 6 figures, Accepted for publication in A&A

Solar Activity and Transformer Failures in the Greek National Electric Grid [CL]

http://arxiv.org/abs/1307.1149


We study both the short-term and long-term effects of solar activity on the large transformers (150 kV and 400 kV) of the Greek national electric grid. We use data analysis and various analytic and statistical methods and models. Contrary to the common belief in PPC Greece, we see that there are considerable short-term (immediate) and long-term effects of solar activity on large transformers in a mid-latitude country (latitude approx. 35 – 41 degrees North) like Greece. Our results can be summarized as follows: For the short term effects: During 1989-2010 there were 43 stormy days (namely days with, for example, Ap greater than or equal to 100) and we had 19 failures occurring during a stormy day plus or minus 3 days and 51 failures occurring during a stormy day plus or minus 7 days. All these failures can be directly related to Geomagnetically Induced Currents (GICs). Explicit cases are presented. For the long term effects we have two main results: The annual transformer failure number for the period of study 1989-2010 follows the solar activity pattern (11 year periodicity, bell-shaped graph). Yet the maximum number of transformer failures occurs 3-4 years after the maximum of solar activity. There is a statistical correlation between solar activity, expressed using various newly defined long term solar activity indices, and the annual number of transformer failures. These new long term solar activity indices were defined using both local (from geomagnetic stations in Greece) and global (planetary averages) geomagnetic data. Applying both linear and non-linear statistical regression, we compute the regression equations and the corresponding coefficients of determination.

Read this paper on arXiv…

I. Zois
Tue, 13 Sep 16
50/91

Comments: 45 pages, a summary will be presented at the International Conference on Mathematical Modeling in Physical Sciences, 1-5 September 2013, Prague, Czech Republic. Some preliminary results were presented during the 8th European Space Weather Week in Namur, Belgium, 2011. Another part was presented at the 9th European Space Weather Week at the Académie Royale de Belgique, Brussels, Belgium, 2012

Model-independent inference on compact-binary observations [HEAP]

http://arxiv.org/abs/1608.08223


The recent advanced LIGO detections of gravitational waves from merging binary black holes enhance the prospect of exploring binary evolution via gravitational-wave observations of a population of compact-object binaries. In the face of uncertainty about binary formation models, model-independent inference provides an appealing alternative to comparisons between observed and modelled populations. We describe a procedure for clustering in the multi-dimensional parameter space of observations that are subject to significant measurement errors. We apply this procedure to a mock data set of population-synthesis predictions for the masses of merging compact binaries convolved with realistic measurement uncertainties, and demonstrate that we can accurately distinguish subpopulations of binary neutron stars, binary black holes, and mixed black hole — neutron star binaries.

Read this paper on arXiv…

I. Mandel, W. Farr, A. Colonna, et. al.
Wed, 31 Aug 16
48/61

Comments: N/A

The chaotic four-body problem in Newtonian gravity I: Identical point-particles [SSA]

http://arxiv.org/abs/1608.07286


In this paper, we study the chaotic four-body problem in Newtonian gravity. Assuming point particles and total encounter energies $\le$ 0, the problem has three possible outcomes. We describe each outcome as a series of discrete transformations in energy space, using the diagrams first presented in Leigh & Geller (2012; see the Appendix). Furthermore, we develop a formalism for calculating probabilities for these outcomes to occur, expressed using the density of escape configurations per unit energy, and based on the Monaghan description originally developed for the three-body problem. We compare this analytic formalism to results from a series of binary-binary encounters with identical point particles, simulated using the FEWBODY code. Each of our three encounter outcomes produces a unique velocity distribution for the escaping star(s). Thus, these distributions can potentially be used to constrain the origins of dynamically-formed populations, via a direct comparison between the predicted and observed velocity distributions. Finally, we show that, for encounters that form stable triples, the simulated single star escape velocity distributions are the same as for the three-body problem. This is also the case for the other two encounter outcomes, but only at low virial ratios. This suggests that single and binary stars processed via single-binary and binary-binary encounters in dense star clusters should have a unique velocity distribution relative to the underlying Maxwellian distribution (provided the relaxation time is sufficiently long), which can be calculated analytically.

Read this paper on arXiv…

N. Leigh, N. Stone, A. Geller, et. al.
Mon, 29 Aug 16
4/41

Comments: 18 pages, 12 figures; accepted for publication in MNRAS

Does the Planetary Dynamo Go Cycling On? Re-examining the Evidence for Cycles in Magnetic Reversal Rate [EPA]

http://arxiv.org/abs/1608.07303


The record of reversals of the geomagnetic field has played an integral role in the development of plate tectonic theory. Statistical analyses of the reversal record are aimed at detailing patterns and linking those patterns to core-mantle processes. The geomagnetic polarity timescale is a dynamic record and new paleomagnetic and geochronologic data provide additional detail. In this paper, we examine the periodicity revealed in the reversal record back to 375 Ma using Fourier analysis. Four significant peaks were found in the reversal power spectra within the 16-40-million-year range. Plotting the function constructed from the sum of the frequencies of the proximal peaks yields a transient 26 Myr periodicity, suggesting chaotic motion with a periodic attractor. The possible 16 Myr periodicity, a previously recognized result, may be correlated with “pulsation” of mantle plumes.

Read this paper on arXiv…

A. Melott, A. Pivarunas, J. Meert, et. al.
Mon, 29 Aug 16
20/41

Comments: 4 figures. Submitted to Earth and Planetary Science Letters

Uncertainties in the Sunspot Numbers: Estimation and Implications [SSA]

http://arxiv.org/abs/1608.05261


Sunspot number series are subject to various uncertainties, which are still poorly known. The need for their better understanding was recently highlighted by the major makeover of the international Sunspot Number [Clette et al., Space Science Reviews, 2014]. We present the first thorough estimation of these uncertainties, which behave as Poisson-like random variables with a multiplicative coefficient that is time- and observatory-dependent. We provide a simple expression for these uncertainties, and reveal how their evolution in time coincides with changes in the observations, and processing of the data. Knowing their value is essential for properly building composites out of multiple observations, and for preserving the stability of the composites in time.

Read this paper on arXiv…

T. Wit, L. Lefevre and F. Clette
Fri, 19 Aug 16
25/45

Comments: accepted in Solar Physics (2016), 24 pages

Uncertainty Limits on Solutions of Inverse Problems over Multiple Orders of Magnitude using Bootstrap Methods: An Astroparticle Physics Example [IMA]

http://arxiv.org/abs/1607.07226


Astroparticle experiments such as IceCube or MAGIC require a deconvolution of their measured data with respect to the response function of the detector to provide the distributions of interest, e.g. energy spectra. In this paper, appropriate uncertainty limits that also allow one to draw conclusions on the geometric shape of the underlying distribution are determined using bootstrap methods, which are frequently applied in statistical applications. Bootstrap is a collective term for resampling methods that can be employed to approximate unknown probability distributions or features thereof. A clear advantage of bootstrap methods is their wide range of applicability. For instance, they yield reliable results even if the usual normality assumption is violated.
The use, meaning, and construction of uncertainty limits at any user-specified confidence level, in the form of confidence intervals and confidence levels, are discussed. The precise algorithms for implementing these methods, applicable to any deconvolution algorithm, are given. The proposed methods are applied to Monte Carlo simulations to show their feasibility and their precision in comparison to the statistical uncertainties calculated with the deconvolution software TRUEE.
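
The basic bootstrap idea behind such limits, resampling the data with replacement and reading percentiles of the recomputed statistic, can be sketched as follows (toy data and a toy statistic, not the TRUEE deconvolution setup).

    # Sketch: percentile bootstrap confidence interval for a summary statistic.
    import numpy as np

    rng = np.random.default_rng(6)
    sample = rng.exponential(scale=2.0, size=200)        # e.g. reconstructed energies

    def statistic(x):
        return np.median(x)

    boot = np.array([statistic(rng.choice(sample, size=sample.size, replace=True))
                     for _ in range(5000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])            # 95% confidence limits
    print("median = %.2f, 95%% CI = [%.2f, %.2f]" % (statistic(sample), lo, hi))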

Read this paper on arXiv…

S. Einecke, K. Proksch, N. Bissantz, et. al.
Tue, 26 Jul 16
19/75

Comments: N/A

Deep Recurrent Neural Networks for Supernovae Classification [IMA]

http://arxiv.org/abs/1606.07442


We apply deep recurrent neural networks, which are capable of learning complex sequential information, to classify supernovae. The observational time and filter fluxes are used as inputs to the network, but since the inputs are agnostic, additional data such as host galaxy information can also be included. Using the Supernovae Photometric Classification Challenge (SPCC) data, we find that deep networks are capable of learning about light curves; however, the performance of the network is highly sensitive to the amount of training data. For a training size of 50% of the representational SPCC dataset (around $10^4$ supernovae) we obtain a type Ia vs non type Ia classification accuracy of 94.8%, an area under the Receiver Operating Characteristic curve AUC of 0.986 and an SPCC figure-of-merit F1 = 0.64. We also apply a pre-trained model to obtain classification probabilities as a function of time, and show it can give early indications of supernova type. Our method is competitive with existing algorithms and has applications for future large-scale photometric surveys.
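
A minimal sketch of a recurrent light-curve classifier in the spirit of the paper, though not its exact architecture or data handling: a single LSTM layer over (time, per-filter flux) sequences in Keras, trained on synthetic stand-in data.

    # Sketch: LSTM classifier over light-curve sequences (toy data, toy labels).
    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import LSTM, Dense

    n_sne, n_epochs, n_features = 1000, 40, 5     # features: time + 4 filter fluxes
    rng = np.random.default_rng(7)
    X = rng.normal(size=(n_sne, n_epochs, n_features)).astype("float32")
    y = rng.integers(0, 2, size=n_sne)            # 1 = type Ia, 0 = non-Ia (invented)

    model = Sequential([
        LSTM(16, input_shape=(n_epochs, n_features)),
        Dense(1, activation="sigmoid"),           # P(type Ia)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
    print(model.evaluate(X, y, verbose=0))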

Read this paper on arXiv…

T. Charnock and A. Moss
Mon, 27 Jun 16
27/43

Comments: 6 pages, 3 figures

Tests for Comparing Weighted Histograms. Review and Improvements [CL]

http://arxiv.org/abs/1606.06591


Histograms with weighted entries are used to estimate probability density functions. Computer simulation is the main application of this type of histogram. A review of chi-square tests for comparing weighted histograms is presented in this paper. Improvements to these tests are proposed that give a test size closer to its nominal value. Numerical examples are presented for evaluation and demonstration of various applications of the tests.

Read this paper on arXiv…

N. Gagunashvili
Wed, 22 Jun 16
21/50

Comments: 23 pages, 2 figures. arXiv admin note: text overlap with arXiv:0905.4221

Information Gain in Cosmology: From the Discovery of Expansion to Future Surveys [CEA]

http://arxiv.org/abs/1606.06273


Facing the advent of the next generation of cosmological surveys, we present a method to forecast the knowledge gain on cosmological models. We propose this as a well-defined and general tool to quantify the performance of different experiments in relation to different theoretical models. In particular, the assessment of experimental performance will benefit enormously from the fact that this method is invariant under re-parametrization of the model. We apply this to future surveys and compare expected knowledge advancements to the most relevant experiments performed over the history of modern cosmology. When considering the standard cosmological model, we show that it will rapidly reach knowledge saturation in the near future and that forthcoming improvements will not match the past ones. On the contrary, we find that new observations have the potential for unprecedented knowledge jumps when extensions of the standard scenario are considered.

Read this paper on arXiv…

M. Raveri, M. Martinelli, G. Zhao, et. al.
Tue, 21 Jun 16
44/75

Comments: 6 pages, 2 figures

Evidence for periodicity in 43-year-long monitoring of NGC 5548 [HEAP]

http://arxiv.org/abs/1606.04606


We present an analysis of 43 years (1972 to 2015) of spectroscopic observations of the Seyfert 1 galaxy NGC 5548. This includes 12 years of new unpublished observations (2003 to 2015). We compiled about 1600 H$\beta$ spectra and analyzed the long term spectral variations of the 5100 Å continuum and the H$\beta$ line. Our analysis is based on standard procedures like the Lomb-Scargle method, which is known to be of limited use for such heterogeneous data sets, as well as a new method developed specifically for this project that is more robust and reveals a $\sim$5700 day periodicity in the continuum light curve, the H$\beta$ light curve and the radial velocity curve of the red wing of the H$\beta$ line. The data are consistent with orbital motion inside the broad emission line region of the source. We discuss several possible mechanisms that can explain this periodicity, including orbiting dusty and dust-free clouds, a binary black hole system, tidal disruption events and the effect of an orbiting star periodically passing through an accretion disc.

Read this paper on arXiv…

E. Bon, S. Zucker, H. Netzer, et. al.
Thu, 16 Jun 16
20/67

Comments: Accepted in ApJS, 65 pages, 10 figures and 4 tables

DNest4: Diffusive Nested Sampling in C++ and Python [CL]

http://arxiv.org/abs/1606.03757


In probabilistic (Bayesian) inferences, we typically want to compute properties of the posterior distribution, describing knowledge of unknown quantities in the context of a particular dataset and the assumed prior information. The marginal likelihood, also known as the “evidence”, is a key quantity in Bayesian model selection. The Diffusive Nested Sampling algorithm, a variant of Nested Sampling, is a powerful tool for generating posterior samples and estimating marginal likelihoods. It is effective at solving complex problems including many where the posterior distribution is multimodal or has strong dependencies between variables. DNest4 is an open source (MIT licensed), multi-threaded implementation of this algorithm in C++11, along with associated utilities including: (i) RJObject, a class template for finite mixture models; (ii) a Python package allowing basic use without C++ coding; and (iii) experimental support for models implemented in Julia. In this paper we demonstrate DNest4 usage through examples including simple Bayesian data analysis, finite mixture models, and Approximate Bayesian Computation.

Read this paper on arXiv…

B. Brewer and D. Foreman-Mackey
Tue, 14 Jun 16
40/67

Comments: Submitted. 31 pages, 9 figures

Detecting Damped Lyman-$α$ Absorbers with Gaussian Processes [CEA]

http://arxiv.org/abs/1605.04460


We develop an automated technique for detecting damped Lyman-$\alpha$ absorbers (DLAs) along spectroscopic sightlines to quasi-stellar objects (QSOs or quasars). The detection of DLAs in large-scale spectroscopic surveys such as SDSS-III sheds light on galaxy formation at high redshift, showing the nucleation of galaxies from diffuse gas. We use nearly 50 000 QSO spectra to learn a novel tailored Gaussian process model for quasar emission spectra, which we apply to the DLA detection problem via Bayesian model selection. We propose models for identifying an arbitrary number of DLAs along a given line of sight. We demonstrate our method’s effectiveness using a large-scale validation experiment, with excellent performance. We also provide a catalog of our results applied to 162 861 spectra from SDSS-III data release 12.

Read this paper on arXiv…

R. Garnett, S. Ho, S. Bird, et. al.
Tue, 17 May 16
19/65

Comments: N/A

Track reconstruction through the application of the Legendre Transform on ellipses [CL]

http://arxiv.org/abs/1605.04738


We propose a pattern recognition method that identifies the common tangent lines of a set of ellipses. The detection of the tangent lines is attained by applying the Legendre transform on a given set of ellipses. As context, we consider a hypothetical detector made out of layers of chambers, each of which returns an ellipse as an output signal. The common tangent of these ellipses represents the trajectory of a charged particle crossing the detector. The proposed method is evaluated using ellipses constructed from Monte Carlo generated tracks.

Read this paper on arXiv…

T. Alexopoulos, Y. Bristogiannis and S. Leontsinis
Tue, 17 May 16
22/65

Comments: 17 pages, 12 figures

Application of Bayesian Neural Networks to Energy Reconstruction in EAS Experiments for ground-based TeV Astrophysics [IMA]

http://arxiv.org/abs/1604.06532


A toy detector array has been designed to simulate the detection of cosmic rays in Extensive Air Shower (EAS) experiments for ground-based TeV astrophysics. The primary energies of protons from the Monte-Carlo simulation have been reconstructed with the algorithm of Bayesian neural networks (BNNs) and with a standard method like that of the LHAASO experiment, respectively. The result of the energy reconstruction using BNNs has been compared with the one using the standard method. The energy resolutions are significantly improved using BNNs, and the improvement is more pronounced for high energy protons than for low energy ones.

Read this paper on arXiv…

Y. Bai, Y. Xu, J. Lan, et. al.
Mon, 25 Apr 16
13/40

Comments: 10 pages, 3 figures

Joint signal extraction from galaxy clusters in X-ray and SZ surveys: A matched-filter approach [CEA]

http://arxiv.org/abs/1604.06107


The hot ionized gas of the intra-cluster medium emits thermal radiation in the X-ray band and also distorts the cosmic microwave radiation through the Sunyaev-Zel’dovich (SZ) effect. Combining these two complementary sources of information through innovative techniques can therefore potentially improve the cluster detection rate when compared to using only one of the probes. Our aim is to build such a joint X-ray-SZ analysis tool, which will allow us to detect fainter or more distant clusters while maintaining high catalogue purity. We present a method based on matched multifrequency filters (MMF) for extracting cluster catalogues from SZ and X-ray surveys. We first designed an X-ray matched-filter method, analogous to the classical MMF developed for SZ observations. Then, we built our joint X-ray-SZ algorithm by combining our X-ray matched filter with the classical SZ-MMF, for which we used the physical relation between SZ and X-ray observations. We show that the proposed X-ray matched filter provides correct photometry results, and that the joint matched filter also provides correct photometry when the $F_{\rm X}/Y_{500}$ relation of the clusters is known. Moreover, the proposed joint algorithm provides a better signal-to-noise ratio than single-map extractions, which improves the detection rate even if we do not exactly know the $F_{\rm X}/Y_{500}$ relation. The proposed methods were tested using data from the ROSAT all-sky survey and from the Planck survey.

Read this paper on arXiv…

P. Tarrio, J. Melin, M. Arnaud, et. al.
Fri, 22 Apr 16
45/54

Comments: 22 pages (before appendices), 19 figures, 3 tables, 5 appendices. Accepted for publication in A&A

Using Extreme Value Theory for Determining the Probability of Carrington-Like Solar Flares [CL]

http://arxiv.org/abs/1604.03325


Space weather events can negatively affect satellites, the electricity grid, satellite navigation systems and human health. As a consequence, extreme space weather has been added to the UK and other national risk registers. However, by their very nature, extreme events occur rarely and statistical methods are required to determine the probability of occurrence of solar storms. Space weather events can be characterised by a number of natural phenomena such as X-ray (solar) flares, solar energetic particle (SEP) fluxes, coronal mass ejections and various geophysical indices (Dst, Kp, F10.7). Here we use extreme value theory (EVT) to investigate the probability of extreme solar flares. Previous work has suggested that the distribution of solar flares follows a power law. However, such an approach can lead to overly “fat” tails in the probability distribution function and thus to an underestimation of the return time of such events. Using EVT and GOES X-ray flux data we find that the expected 150-year return level is an X60 flare (6×10^(-3) W m^(-2), 1-8 Å X-ray flux). We also show that the EVT results are consistent with flare data from the Kepler space telescope mission.
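
One standard EVT route, fitting a generalized extreme value distribution to annual block maxima and reading off the 150-year return level, can be sketched with SciPy; the "annual maxima" below are placeholders rather than GOES data, and this is not necessarily the exact procedure of the paper.

    # Sketch: GEV fit to annual maxima and a 150-year return level.
    import numpy as np
    from scipy.stats import genextreme

    rng = np.random.default_rng(8)
    # stand-in annual maxima of 1-8 A X-ray flux [W m^-2] (log-uniform toy values)
    annual_max_flux = 10 ** rng.uniform(-5.0, -3.0, size=40)

    shape, loc, scale = genextreme.fit(np.log10(annual_max_flux))
    # flux level exceeded on average once every 150 years
    return_level = 10 ** genextreme.isf(1.0 / 150.0, shape, loc, scale)
    print("150-year return level: %.1e W m^-2" % return_level)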

Read this paper on arXiv…

S. Elvidge and M. Angling
Wed, 13 Apr 16
43/60

Comments: 10 pages, 3 figures, submitted to Nature

LikeDM: likelihood calculator of dark matter detection [CL]

http://arxiv.org/abs/1603.07119


Given the substantial progress in indirect and direct searches for dark matter (DM) particles, we develop a numerical tool that enables fast calculation of the likelihood of specified DM particle models given a range of observational data, such as charged cosmic rays from space-borne experiments (e.g., PAMELA, AMS-02), $\gamma$-rays from the Fermi space telescope, and underground direct detection experiments. The purpose of this tool, LikeDM — a likelihood calculator of dark matter detection — is to bridge the particle model of DM and the observational data. The intermediate steps between these two, including the astrophysical backgrounds, the propagation of charged particles, the analysis of Fermi $\gamma$-ray data, as well as the DM velocity distribution and the nuclear form factor, are dealt with in the code. We release the first version (v1.0), which focuses on the constraints from charged cosmic rays and gamma rays; the direct detection part will be implemented in the next version. This manual describes the framework, usage, and related physics of the code.

Read this paper on arXiv…

X. Huang, Y. Tsai and Q. Yuan
Thu, 24 Mar 16
1/60

Comments: 26 pages, 5 figures, LikeDM version 1

$K$-corrections: an Examination of their Contribution to the Uncertainty of Luminosity Measurements [GA]

http://arxiv.org/abs/1603.07299


In this paper we provide formulae that can be used to determine the uncertainty contributed to a measurement by a $K$-correction and, thus, valuable information about which flux measurement will provide the most accurate $K$-corrected luminosity. All of this is done at the level of a Gaussian approximation of the statistics involved, that is, where the galaxies in question can be characterized by a mean spectral energy distribution (SED) and a covariance function (spectral 2-point function). This paper also includes approximations of the SED mean and covariance for galaxies, and the three common subclasses thereof, based on applying the templates from Assef et al. (2010) to the objects in zCOSMOS bright 10k (Lilly et al. 2009) and photometry of the same field from Capak et al. (2007), Sanders et al. (2007), and the AllWISE source catalog.

Read this paper on arXiv…

S. Lake and E. Wright
Thu, 24 Mar 16
33/60

Comments: 10 pages, 6 figures, 6 tables (1 extended)

PageRank Pipeline Benchmark: Proposal for a Holistic System Benchmark for Big-Data Platforms [CL]

http://arxiv.org/abs/1603.01876


The rise of big data systems has created a need for benchmarks to measure and compare the capabilities of these systems. Big data benchmarks present unique scalability challenges. The supercomputing community has wrestled with these challenges for decades and developed methodologies for creating rigorous scalable benchmarks (e.g., HPC Challenge). The proposed PageRank pipeline benchmark employs supercomputing benchmarking methodologies to create a scalable benchmark that is reflective of many real-world big data processing systems. The PageRank pipeline benchmark builds on existing scalable benchmarks (Graph500, Sort, and PageRank) to create a holistic benchmark with multiple integrated kernels that can be run together or independently. Each kernel is well defined mathematically and can be implemented in any programming environment. The linear algebraic nature of PageRank makes it well suited to being implemented using the GraphBLAS standard. The computations are simple enough that performance predictions can be made based on simple computing hardware models. The surrounding kernels provide the context for each kernel that allows rigorous definition of both the input and the output for each kernel. Furthermore, since the proposed PageRank pipeline benchmark is scalable in both problem size and hardware, it can be used to measure and quantitatively compare a wide range of present-day and future systems. Serial implementations in C++, Python, Python with Pandas, Matlab, Octave, and Julia have been written and their single-threaded performance has been measured.
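
To make the central kernel concrete, here is a hedged, toy-scale power-iteration PageRank in Python; the benchmark itself also specifies graph generation, sorting and validation kernels, and scalable implementations would not use a dense matrix.

    import numpy as np

    def pagerank(adj, damping=0.85, tol=1e-10, max_iter=200):
        # Power iteration for PageRank on a small dense adjacency matrix (toy scale only).
        n = adj.shape[0]
        out_deg = adj.sum(axis=1)
        out_deg[out_deg == 0] = 1.0                    # crude handling of sink nodes
        transition = adj / out_deg[:, None]            # row-stochastic transition matrix
        rank = np.full(n, 1.0 / n)
        for _ in range(max_iter):
            new_rank = (1.0 - damping) / n + damping * transition.T @ rank
            if np.abs(new_rank - rank).sum() < tol:
                break
            rank = new_rank
        return rank

    adj = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [1, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    print(pagerank(adj))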

Read this paper on arXiv…

P. Dreher, C. Byun, C. Hill, et. al.
Tue, 8 Mar 16
82/83

Comments: 9 pages, 7 figures, to appear in IPDPS 2016 Graph Algorithms Building Blocks (GABB) workshop

Superplot: a graphical interface for plotting and analysing MultiNest output [CL]

http://arxiv.org/abs/1603.00555


We present an application, Superplot, for calculating and plotting statistical quantities relevant to parameter inference from a “chain” of samples drawn from a parameter space, produced by e.g. MultiNest. A simple graphical interface allows one to browse a chain of many variables quickly, and make publication quality plots of, inter alia, profile likelihood, posterior pdf, confidence intervals and credible regions. In this short manual, we document installation and basic usage, and define all statistical quantities and conventions.
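
For readers unfamiliar with the quantity being plotted, a bare-bones (unweighted) profile likelihood over one parameter of a chain can be computed as below; Superplot itself handles sample weights, interval conventions and the plotting, so this is only an illustrative sketch with synthetic chain columns.

    import numpy as np

    def profile_loglike_1d(param, loglike, nbins=50):
        # Profile log-likelihood: the maximum log-likelihood found in each parameter bin.
        edges = np.linspace(param.min(), param.max(), nbins + 1)
        idx = np.clip(np.digitize(param, edges) - 1, 0, nbins - 1)
        profile = np.full(nbins, -np.inf)
        for i, ll in zip(idx, loglike):
            if ll > profile[i]:
                profile[i] = ll
        centres = 0.5 * (edges[:-1] + edges[1:])
        return centres, profile - profile.max()     # relative to the global best fit

    # synthetic "chain": one parameter column and its log-likelihood column
    x = np.random.normal(0.0, 1.0, 10000)
    loglike = -0.5 * x**2
    centres, prof = profile_loglike_1d(x, loglike)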

Read this paper on arXiv…

A. Fowlie and M. Bardsley
Thu, 3 Mar 16
39/75

Comments: 13 pages, 2 colour figures

Unfolding problem clarification and solution validation [CL]

http://arxiv.org/abs/1602.05834


The unfolding problem formulation for correcting experimental data distortions due to finite resolution and limited detector acceptance is discussed. A novel validation of the problem solution is proposed. Attention is drawn to the fact that different unfolded distributions may satisfy the validation criteria, in which case a conservative approach using entropy is suggested. The importance of analysis of residuals is demonstrated.

Read this paper on arXiv…

N. Gagunashvili
Fri, 19 Feb 16
22/50

Comments: 9 pages, 4 figures

Gravitational wave astrophysics, data analysis and multimessenger astronomy [IMA]

http://arxiv.org/abs/1602.05573


This paper reviews gravitational wave sources and their detection. One of the most exciting potential sources of gravitational waves is coalescing binary black hole systems. They can occur on all mass scales and be formed in numerous ways, many of which are not understood. They are generally invisible in electromagnetic waves, and they provide opportunities for deep investigation of Einstein’s general theory of relativity. Sect. 1 of this paper considers ways that binary black holes can be created in the universe, and includes the prediction that binary black hole coalescence events are likely to be the first gravitational wave sources to be detected. The next parts of this paper address the detection of chirp waveforms from coalescence events in noisy data. Such analysis is computationally intensive. Sect. 2 reviews a new and powerful method of signal detection based on GPU-implemented summed parallel infinite impulse response filters. Such filters are intrinsically real-time algorithms that can be used to rapidly detect and localise signals. Sect. 3 of the paper reviews the use of GPU processors for rapid searching for gravitational wave bursts that can arise from black hole births and coalescences. In Sect. 4, the use of GPU processors to enable fast, efficient statistical significance testing of gravitational wave event candidates is reviewed. Sect. 5 of this paper addresses the method of multimessenger astronomy, where the discovery of electromagnetic counterparts of gravitational wave events can be used to identify sources, understand their nature and obtain much greater science outcomes from each identified event.

Read this paper on arXiv…

H. Lee, E. Bigot, Z. Du, et. al.
Fri, 19 Feb 16
42/50

Comments: N/A

Practical Introduction to Clustering Data [CL]

http://arxiv.org/abs/1602.05124


Data clustering is an approach for seeking structure in sets of complex data, i.e., sets of “objects”. The main objective is to identify groups of objects that are similar to each other, e.g., for classification. Here, an introduction to clustering is given and three basic approaches are introduced: the k-means algorithm, neighbour-based clustering, and an agglomerative clustering method. For all cases, C source code examples are given, allowing for an easy implementation.
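
The paper's examples are given in C; a minimal Python transcription of the k-means step conveys the same alternation between assignment and centre update (the initialisation and the toy data are arbitrary choices here).

    import numpy as np

    def kmeans(points, k, n_iter=100, seed=0):
        # Plain k-means: assign points to the nearest centre, then move each centre
        # to the mean of its assigned points, until the centres stop changing.
        rng = np.random.default_rng(seed)
        centres = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(n_iter):
            dist = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
            labels = dist.argmin(axis=1)
            new_centres = np.array([points[labels == j].mean(axis=0)
                                    if np.any(labels == j) else centres[j]
                                    for j in range(k)])
            if np.allclose(new_centres, centres):
                break
            centres = new_centres
        return labels, centres

    data = np.vstack([np.random.normal(0.0, 0.5, (50, 2)),
                      np.random.normal(3.0, 0.5, (50, 2))])
    labels, centres = kmeans(data, k=2)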

Read this paper on arXiv…

A. Hartmann
Wed, 17 Feb 16
30/55

Comments: 22 pages. All source code in anc directory included. Section 8.5.6 of book: A.K. Hartmann, Big Practical Guide to Computer Simulations, World-Scientifc, Singapore (2015)

Looking for a Needle in a Haystack? Look Elsewhere! A statistical comparison of approximate global p-values [CL]

http://arxiv.org/abs/1602.03765


The search for new significant peaks over an energy spectrum often involves a statistical multiple hypothesis testing problem. Separate hypothesis tests are conducted at different locations, producing an ensemble of local p-values, the smallest of which is reported as evidence for the new resonance. Unfortunately, controlling the false detection rate (type I error rate) of such procedures may lead to excessively stringent acceptance criteria. In the recent physics literature, two promising statistical tools have been proposed to overcome these limitations. In 2005, a method to “find needles in haystacks” was introduced by Pilla et al. [1], and a second method was later proposed by Gross and Vitells [2] in the context of the “look elsewhere effect” and trial factors. We show that, for relatively small sample sizes, the former leads to an artificial inflation of statistical power that stems from an increase in the false detection rate, whereas the two methods exhibit similar performance for large sample sizes. Finally, we provide general guidelines to select between statistical procedures for signal detection with respect to the specifics of the physics problem under investigation.

Read this paper on arXiv…

S. Algeri, J. Conrad, D. Dyk, et. al.
Fri, 12 Feb 16
6/48

Comments: Submitted to EPJ C

Dynamic system classifier [CL]

http://arxiv.org/abs/1601.07901


Stochastic differential equations describe many physical, biological and sociological systems well, despite the simplifications often made in their derivation. Here the use of simple stochastic differential equations to characterize and classify complex dynamical systems is proposed within a Bayesian framework. To this end, we develop a dynamic system classifier (DSC). The DSC first abstracts training data of a system in terms of time-dependent coefficients of the descriptive stochastic differential equation. Thereby the DSC identifies unique correlation structures within the training data. For definiteness we restrict the presentation of the DSC to oscillation processes with a time-dependent frequency ω(t) and damping factor γ(t). Although real systems might be more complex, this simple oscillator captures many characteristic features. The ω(t) and γ(t) timelines represent the abstract system characterization and permit the construction of efficient signal classifiers. Numerical experiments show that such classifiers perform well even in the low signal-to-noise regime.
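
A hedged sketch of the forward model the DSC abstracts data into: a stochastically driven oscillator with time-dependent ω(t) and γ(t), integrated with a simple Euler-Maruyama step. The functional forms and noise level below are invented for illustration and are not taken from the paper.

    import numpy as np

    def simulate_oscillator(omega, gamma, sigma=0.1, dt=1e-3, t_max=10.0, seed=1):
        # Euler-Maruyama integration of  x'' + gamma(t) x' + omega(t)^2 x = sigma * xi(t)
        rng = np.random.default_rng(seed)
        n = int(t_max / dt)
        t = np.arange(n) * dt
        x = np.zeros(n)
        v = np.zeros(n)
        x[0] = 1.0
        for i in range(n - 1):
            acc = -gamma(t[i]) * v[i] - omega(t[i])**2 * x[i]
            v[i + 1] = v[i] + acc * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            x[i + 1] = x[i] + v[i] * dt
        return t, x

    # hypothetical timelines: slowly drifting frequency, weak constant damping
    t, x = simulate_oscillator(omega=lambda t: 2.0 * np.pi * (1.0 + 0.05 * t),
                               gamma=lambda t: 0.2)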

Read this paper on arXiv…

D. Pumpe, M. Greiner, E. Muller, et. al.
Fri, 29 Jan 16
23/52

Comments: 11 pages, 8 figures

Processing of X-ray Microcalorimeter Data with Pulse Shape Variation using Principal Component Analysis [CL]

http://arxiv.org/abs/1601.01651


We present a method using principal component analysis (PCA) to process x-ray pulses with severe shape variation where traditional optimal filter methods fail. We demonstrate that PCA is able to noise-filter and extract energy information from x-ray pulses despite their different shapes. We apply this method to a dataset from an x-ray thermal kinetic inductance detector which has severe pulse shape variation arising from position-dependent absorption.
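
As a generic illustration of the approach (not the authors' detector pipeline), PCA of a stack of pulse records reduces to an SVD of the mean-subtracted record matrix; projecting each pulse onto the leading components gives noise-filtered coefficients that can then be calibrated to energy. The toy pulse shape and noise below are invented.

    import numpy as np

    def pca_pulse_scores(pulses, n_components=2):
        # pulses: (n_pulses, n_samples) array of digitised records.
        mean_pulse = pulses.mean(axis=0)
        centred = pulses - mean_pulse
        _, _, vt = np.linalg.svd(centred, full_matrices=False)   # rows of vt = principal shapes
        components = vt[:n_components]
        scores = centred @ components.T        # per-pulse coefficients (energy surrogate)
        return scores, components, mean_pulse

    # toy data: exponential pulses with varying amplitude plus white noise
    t = np.linspace(0.0, 1.0, 500)
    amps = np.random.uniform(0.5, 1.5, 200)
    pulses = amps[:, None] * np.exp(-t / 0.1) * (1.0 - np.exp(-t / 0.01))
    pulses += np.random.normal(0.0, 0.02, pulses.shape)
    scores, comps, mean_pulse = pca_pulse_scores(pulses)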

Read this paper on arXiv…

D. Yan, T. Cecil, L. Gades, et. al.
Fri, 8 Jan 16
13/51

Comments: Accepted for publication in J. Low Temperature Physics, Low Temperature Detectors 16 (LTD-16) conference

On the Solar Component in the Observed Global Temperature Anomalies [CL]

http://arxiv.org/abs/1512.01075


In this paper, starting from the updated time series of global temperature anomalies, Ta, we show how the solar component affects the observed behavior, using the solar sunspot number (SSN) as an indicator of solar activity. The results clearly show that the solar component plays an important role and significantly affects the currently observed stationary behavior of global temperature anomalies. The behavior of solar activity and its future role will therefore be decisive in determining whether the rise in temperature anomalies observed since 1975 will resume.

Read this paper on arXiv…

S. Sello
Fri, 4 Dec 15
23/64

Comments: 9 pages, 7 figures

Frequentist tests for Bayesian models [IMA]

http://arxiv.org/abs/1511.02363


Analogues of the frequentist chi-square and $F$ tests are proposed for testing goodness-of-fit and consistency for Bayesian models. Simple examples exhibit these tests’ detection of inconsistency between consecutive experiments with identical parameters, when the first experiment provides the prior for the second. In a related analysis, a quantitative measure is derived for judging the degree of tension between two different experiments with partially overlapping parameter vectors.

Read this paper on arXiv…

L. Lucy
Tue, 10 Nov 15
27/62

Comments: 8 pages, 4 figures

On the universality of interstellar filaments: theory meets simulations and observations [SSA]

http://arxiv.org/abs/1510.05654


Filaments are ubiquitous in the universe. They are seen in cosmological structures, in the Milky Way centre and in dense interstellar gas. Recent observations have revealed that stars and star clusters form preferentially at the intersection of dense filaments. Understanding the formation and properties of filaments is therefore a crucial step in understanding star formation. Here we perform three-dimensional high-resolution magnetohydrodynamical simulations that follow the evolution of molecular clouds and the formation of filaments and stars within them. We apply a filament detection algorithm and compare simulations with different combinations of physical ingredients: gravity, turbulence, magnetic fields and jet/outflow feedback. We find that gravity-only simulations produce significantly narrower filament profiles than observed, while simulations that at least include turbulence produce realistic filament properties. For these turbulence simulations, we find a remarkably universal filament width of (0.10+/-0.02) pc, which is independent of the evolutionary stage or the star formation history of the clouds. We derive a theoretical model that provides a physical explanation for this characteristic filament width, based on the sonic scale (lambda_sonic) of molecular cloud turbulence. Our derivation provides lambda_sonic as a function of the cloud diameter L, the velocity dispersion sigma_v, the gas sound speed c_s and the strength of the magnetic field parameterised by plasma beta. For typical cloud conditions in the Milky Way spiral arms, we find theoretically that lambda_sonic = 0.04-0.16 pc, in excellent agreement with the filament width of 0.05-0.15 pc found in observations.

Read this paper on arXiv…

C. Federrath
Wed, 21 Oct 15
48/66

Comments: 13 pages, 8 figures, submitted to MNRAS, comments welcome

Effect of data gaps on correlation dimension computed from light curves of variable stars [IMA]

http://arxiv.org/abs/1410.4454


Observational data, especially astrophysical data, are often limited by gaps that arise from a lack of observations for a variety of reasons. Such inadvertent gaps are usually smoothed over using interpolation techniques. However, these smoothing techniques can introduce artificial effects, especially when non-linear analysis is undertaken. We investigate how gaps can affect the computed values of the correlation dimension of a system, without using any interpolation. For this we introduce gaps artificially into synthetic data derived from standard chaotic systems, such as the Rössler and Lorenz systems, with the frequency of occurrence and size of missing data drawn from two Gaussian distributions. We then study the changes in correlation dimension as the distributions of gap position and size are varied. We find that for a considerable range of mean gap frequency and size, the value of the correlation dimension is not significantly affected, indicating that in such specific cases the calculated values can still be reliable and acceptable. Our study thus introduces a method of checking the reliability of computed correlation dimension values by calculating the distribution of gaps with respect to their size and position. This is illustrated for the light curves of three variable stars, R Scuti, U Monocerotis and SU Tauri. We also demonstrate how cubic spline interpolation can cause a time series of Gaussian noise with missing data to be misinterpreted as being chaotic in origin. This is demonstrated for the non-chaotic light curve of the variable star SS Cygni, which gives a saturated D$_{2}$ value when interpolated using a cubic spline. In addition, we find that a careful choice of binning, besides reducing noise, can help shift the gap distribution into the reliable range for D$_{2}$ values.

Read this paper on arXiv…

S. George, G. Ambika and R. Misra
Tue, 13 Oct 15
63/64

Comments: 13 pages, 15 figures

Resolution enhancement by extrapolation of coherent diffraction images: a quantitative study about the limits and a numerical study of non-binary and phase objects [CL]

http://arxiv.org/abs/1510.01654


In coherent diffractive imaging (CDI) the resolution with which the reconstructed object can be obtained is limited by the numerical aperture of the experimental setup. We present here a theoretical and numerical study for achieving super-resolution by post-extrapolation of coherent diffraction images, such as diffraction patterns or holograms. We prove that a diffraction pattern can unambiguously be extrapolated from just a fraction of the entire pattern and that the ratio of the extrapolated signal to the originally available signal is linearly proportional to the oversampling ratio. While there could in principle be other methods to achieve extrapolation, we devote our discussion to employing phase retrieval methods and demonstrate their limits. We present two numerical studies, namely the extrapolation of diffraction patterns of non-binary objects and of phase objects, together with a discussion of the optimal extrapolation procedure.

Read this paper on arXiv…

T. Latychevskaia and H. Fink
Wed, 7 Oct 15
37/72

Comments: N/A

Testing a Novel Self-Assembling Data Paradigm in the Context of IACT Data [IMA]

http://arxiv.org/abs/1509.02202


The process of gathering and associating data from multiple sensors or sub-detectors due to a common physical event (the process of event-building) is used in many fields, including high-energy physics and $\gamma$-ray astronomy. Fault tolerance in event-building is a challenging problem that increases in difficulty with higher data throughput rates and increasing numbers of sub-detectors. We draw on biological self-assembly models in the development of a novel event-building paradigm that treats each packet of data from an individual sensor or sub-detector as if it were a molecule in solution. Just as molecules are capable of forming chemical bonds, “bonds” can be defined between data packets using metadata-based discriminants. A database — which plays the role of a beaker of solution — continually selects pairs of assemblies at random to test for bonds, which allows single packets and small assemblies to aggregate into larger assemblies. During this process higher-quality associations supersede spurious ones. The database thereby becomes fluid, dynamic, and self-correcting rather than static. We will describe tests of the self-assembly paradigm using our first fluid database prototype and data from the VERITAS $\gamma$-ray telescope.

Read this paper on arXiv…

A. Weinstein, L. Fortson, T. Brantseg, et. al.
Wed, 9 Sep 15
2/56

Comments: In Proceedings of the 34th International Cosmic Ray Conference (ICRC2015), The Hague, The Netherlands

Machine Learning Model of the Swift/BAT Trigger Algorithm for Long GRB Population Studies [HEAP]

http://arxiv.org/abs/1509.01228


To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift burst alert telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien 2014 is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of $\gtrsim97\%$ ($\lesssim 3\%$ error), which is a significant improvement over a cut in GRB flux, which has an accuracy of $89.6\%$ ($10.4\%$ error). These models are then used to measure the detection efficiency of Swift as a function of redshift $z$, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of $n_0 \sim 0.48^{+0.41}_{-0.23} \ {\rm Gpc}^{-3} {\rm yr}^{-1}$ with power-law indices of $n_1 \sim 1.7^{+0.6}_{-0.5}$ and $n_2 \sim -5.9^{+5.7}_{-0.1}$ for GRBs above and below a break point of $z_1 \sim 6.8^{+2.8}_{-3.2}$. This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting. The code used in this analysis is publicly available online (https://github.com/PBGraff/SwiftGRB_PEanalysis).
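
A minimal sketch of how one of the classifiers (a random forest) would be trained to stand in for the expensive trigger simulation; the feature columns and labels below are random placeholders, not the Lien 2014 sample.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # placeholder feature table (e.g. peak flux, duration, redshift) and trigger label
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 3))
    y = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0.0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    # the trained classifier then replaces the full trigger simulation when computing
    # the detection efficiency as a function of redshift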

Read this paper on arXiv…

P. Graff, A. Lien, J. Baker, et. al.
Fri, 4 Sep 15
52/58

Comments: 16 pages, 18 figures, 5 tables, submitted to ApJ

Comparing non-nested models in the search for new physics [CL]

http://arxiv.org/abs/1509.01010


Searches for unknown physics and deciding between competing physical models to explain data rely on statistical hypothesis testing. A common approach, used for example in the discovery of the Brout-Englert-Higgs boson, is based on the statistical Likelihood Ratio Test (LRT) and its asymptotic properties. In the common situation when neither of the two models under comparison is a special case of the other, i.e. when the hypotheses are non-nested, this test is not applicable, and so far no efficient solution exists. In physics, this problem occurs when two models that reside in different parameter spaces are to be compared. An important example is the recently reported excess emission in astrophysical $\gamma$-rays and the question whether its origin is known astrophysics or dark matter. We develop and study a new, generally applicable, frequentist method and validate its statistical properties using a suite of simulation studies. We exemplify it on realistic simulated data of the Fermi-LAT $\gamma$-ray satellite, where non-nested hypothesis testing appears in the search for particle dark matter.

Read this paper on arXiv…

S. Algeri, J. Conrad and D. Dyk
Fri, 4 Sep 15
53/58

Comments: We welcome examples of non-nested models testing problems

Performance analysis of the Least-Squares estimator in Astrometry [IMA]

http://arxiv.org/abs/1509.00677


We characterize the performance of the widely-used least-squares estimator in astrometry in terms of a comparison with the Cramer-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not offer a closed-form expression, but a new result is presented (Theorem 1) where both the bias and the mean-square-error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient is the least-squares estimator in comparison with the minimum variance Cramer-Rao bound. Based on our results, we show that, for the high signal-to-noise ratio regime, the performance of the least-squares estimator is significantly poorer than the Cramer-Rao bound, and we characterize this gap analytically. On the positive side, we show that for the challenging low signal-to-noise regime (attributed to either a weak astronomical signal or a noise-dominated condition) the least-squares estimator is near optimal, as its performance asymptotically approaches the Cramer-Rao bound. However, we also demonstrate that, in general, there is no unbiased estimator for the astrometric position that can precisely reach the Cramer-Rao bound. We validate our theoretical analysis through simulated digital-detector observations under typical observing conditions. We show that the nominal value for the mean-square-error of the least-squares estimator (obtained from our theorem) can be used as a benchmark indicator of the expected statistical performance of the least-squares method under a wide range of conditions. Our results are valid for an idealized linear (one-dimensional) array detector where intra-pixel response changes are neglected, and where flat-fielding is achieved with very high accuracy.

Read this paper on arXiv…

R. Lobos, J. Silva, R. Mendez, et. al.
Thu, 3 Sep 15
17/58

Comments: 35 pages, 8 figures. Accepted for publication by PASP

Time Series with Tailored Nonlinearities [CL]

http://arxiv.org/abs/1509.00223


It is demonstrated how to generate time series with tailored nonlinearities by inducing well-defined constraints on the Fourier phases. Correlations between the phase information of adjacent phases and (static and dynamic) measures of nonlinearities are established and their origin is explained. By applying a set of simple constraints on the phases of an originally linear and uncorrelated Gaussian time series, the observed scaling behavior of the intensity distribution of empirical time series can be reproduced. The power law character of the intensity distributions, being typical for, e.g., turbulence and financial data, can thus be explained in terms of phase correlations.
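
The basic machinery is manipulation of Fourier phases at fixed power spectrum. The sketch below shows the limiting case of full phase randomisation (the standard linear-surrogate construction), which destroys rather than induces nonlinearity; the paper's constrained-phase prescription would replace the random phases with correlated ones.

    import numpy as np

    def phase_randomised_surrogate(x, seed=0):
        # Keep |FFT(x)| (the linear correlations), replace all phases with random ones.
        rng = np.random.default_rng(seed)
        spec = np.fft.rfft(x)
        phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
        phases[0] = 0.0                                  # keep the mean real
        return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)

    x = np.cumsum(np.random.normal(size=4096))           # toy series
    surrogate = phase_randomised_surrogate(x)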

Read this paper on arXiv…

C. Raeth and I. Laut
Wed, 2 Sep 15
72/87

Comments: 5 pages, 5 figures, Phys. Rev. E, Rapid Communication, accepted

Statistical framework for estimating GNSS bias [IMA]

http://arxiv.org/abs/1508.02957


We present a statistical framework for estimating global navigation satellite system (GNSS) non-ionospheric differential time delay bias. The biases are estimated by examining differences of measured line-integrated electron densities (TEC) that are scaled to equivalent vertical integrated densities. The spatio-temporal variability, instrumentation-dependent errors, and errors due to inaccurate ionospheric altitude profile assumptions are modeled as structure functions. These structure functions determine how the TEC differences are weighted in the linear least-squares minimization procedure, which is used to produce the bias estimates. A method for automatic detection and removal of outlier measurements that do not fit into a model of receiver bias is also described. The same statistical framework can be used for a single receiver station, but it also scales to a large global network of receivers. In addition to the Global Positioning System (GPS), the method is also applicable to other dual-frequency GNSS systems, such as GLONASS (Globalnaya Navigazionnaya Sputnikovaya Sistema). The use of the framework is demonstrated in practice through several examples. A specific implementation of the methods presented here is used to compute GPS receiver biases for measurements in the MIT Haystack Madrigal distributed database system. Results of the new algorithm are compared with the current MIT Haystack Observatory MAPGPS bias determination algorithm. The new method is found to produce estimates of receiver bias that have reduced day-to-day variability and more consistent coincident vertical TEC values.

Read this paper on arXiv…

J. Vierinen, A. Coster, W. Rideout, et. al.
Thu, 13 Aug 15
20/49

Comments: 18 pages, 5 figures, submitted to AMT

Trans-Dimensional Bayesian Inference for Gravitational Lens Substructures [IMA]

http://arxiv.org/abs/1508.00662


We introduce a Bayesian solution to the problem of inferring the density profile of strong gravitational lenses when the lens galaxy may contain multiple dark or faint substructures. The source and lens models are based on a superposition of an unknown number of non-negative basis functions (or “blobs”) whose form was chosen with speed as a primary criterion. The prior distribution for the blobs’ properties is specified hierarchically, so the mass function of substructures is a natural output of the method. We use reversible jump Markov Chain Monte Carlo (MCMC) within Diffusive Nested Sampling (DNS) to sample the posterior distribution and evaluate the marginal likelihood of the model, including the summation over the unknown number of blobs in the source and the lens. We demonstrate the method on a simulated data set with a single substructure, which is recovered well with moderate uncertainties. We also apply the method to the g-band image of the “Cosmic Horseshoe” system, and find some hints of potential substructures. However, we caution that such results could also be caused by misspecifications in the model (such as the shape of the smooth lens component or the point spread function), which are difficult to guard against in full generality.

Read this paper on arXiv…

B. Brewer, D. Huijser and G. Lewis
Wed, 5 Aug 15
5/46

Comments: Submitted. 10 pages, 10 figures

Weighted ABC: a new strategy for cluster strong lensing cosmology with simulations [CEA]

http://arxiv.org/abs/1507.05617


Comparisons between observed and predicted strong lensing properties of galaxy clusters have been routinely used to claim either tension or consistency with $\Lambda$CDM cosmology. However, standard approaches to such cosmological tests are unable to quantify the preference for one cosmology over another. We advocate using a ‘weighted’ variant of approximate Bayesian computation (ABC), whereby the parameters of the scaling relation between Einstein radii and cluster mass, $\alpha$ and $\beta$, are treated as summary statistics. We demonstrate, for the first time, a method of estimating the likelihood of the data under the $\Lambda$CDM framework, using the X-ray selected $z>0.5$ MACS clusters as a case in point and employing both N-body and hydrodynamic simulations of clusters. We investigate the uncertainty in the calculated likelihood, and the consequential ability to compare competing cosmologies, that arises from incomplete descriptions of baryonic processes, discrepancies in cluster selection criteria, redshift distribution, and dynamical state. The relation between triaxial cluster masses at various overdensities provides a promising alternative to the strong lensing test.
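
For orientation, the plain rejection form of approximate Bayesian computation, on which the paper's weighted variant builds, looks roughly like this; the prior, simulator, summary statistic and tolerance below are toy placeholders, not the cluster scaling-relation setup of the paper.

    import numpy as np

    def abc_rejection(obs_summary, simulate, prior_draw, distance, n_draws=100000, tol=0.1):
        # Keep the prior draws whose simulated summary statistics lie close to the data.
        accepted = []
        for _ in range(n_draws):
            theta = prior_draw()
            if distance(simulate(theta), obs_summary) < tol:
                accepted.append(theta)
        return np.array(accepted)

    # toy example: infer the mean of a Gaussian from its sample-mean summary statistic
    rng = np.random.default_rng(0)
    posterior_samples = abc_rejection(
        obs_summary=1.3,
        simulate=lambda mu: rng.normal(mu, 1.0, 100).mean(),
        prior_draw=lambda: rng.uniform(-5.0, 5.0),
        distance=lambda a, b: abs(a - b),
    )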

Read this paper on arXiv…

M. Killedar, S. Borgani, D. Fabjan, et. al.
Wed, 22 Jul 15
34/59

Comments: 15 pages, 6 figures, 1 table, submitted to MNRAS, comments welcome

Limitation of the Least Square Method in the Evaluation of Dimension of Fractal Brownian Motions [CL]

http://arxiv.org/abs/1507.03250


With the standard deviation for the logarithm of the re-scaled range $\langle |F(t+\tau)-F(t)|\rangle$ of simulated fractal Brownian motions $F(t)$ given in a previous paper, the method of least squares is adopted to determine the slope, $S$, and intercept, $I$, of the $\log(\langle |F(t+\tau)-F(t)|\rangle)$ vs $\log(\tau)$ plot, in order to investigate the limitations of this procedure. It is found that the reduced $\chi^2$ of the fit decreases with increasing Hurst index, $H$ (the expectation value of $S$), which may be attributed to the correlation among the re-scaled ranges. Similarly, it is found that the errors of the fitting parameters $S$ and $I$ are usually smaller than their corresponding standard deviations. These results show the limitations of using the simple least-squares method to determine the dimension of a fractal time series. Nevertheless, they may be used to reinterpret the fitting results of the least-squares method so as to determine the dimension of fractal Brownian motions more self-consistently. The currency exchange rate between the Euro and the Dollar is used as an example to demonstrate this procedure, and a fractal dimension of 1.511 is obtained for spans greater than 30 transactions.

Read this paper on arXiv…

B. Qiao, S. Liu, H. Zeng, et. al.
Tue, 14 Jul 15
45/64

Comments: 7 pages, 23 figures, to appear in Multiscale Modeling and Simulation

Measurement of Hubble constant: Non-Gaussian Errors in HST key project data [CEA]

http://arxiv.org/abs/1506.06212


Random errors in any data set are expected to follow the Gaussian distribution with zero mean. We propose an elegant method based on the Kolmogorov-Smirnov statistic to test this expectation and apply it to the measurement of the Hubble constant, which determines the expansion rate of the Universe. The measurements were made using the Hubble Space Telescope. Our analysis shows that the errors in these measurements are non-Gaussian.
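
The test itself is a one-liner once the residuals are standardised by their quoted errors; a sketch with SciPy, using placeholder numbers rather than the actual HST key project measurements:

    import numpy as np
    from scipy import stats

    # placeholder standardised residuals: (measured H0 - reference) / quoted error
    residuals = np.array([0.3, -1.1, 2.4, 0.7, -0.2, 1.9, -0.5, 3.1])

    # Kolmogorov-Smirnov test of the residuals against a standard normal distribution
    statistic, p_value = stats.kstest(residuals, "norm")
    print(f"KS statistic = {statistic:.3f}, p-value = {p_value:.3f}")
    # a small p-value indicates the errors are unlikely to be zero-mean Gaussian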

Read this paper on arXiv…

M. Singh, S. Gupta and A. Pandey
Tue, 23 Jun 15
27/67

Comments: 3 pages, 2 figures

Investigating the Kinematics of Coronal Mass Ejections with the Automated CORIMP Catalog [SSA]

http://arxiv.org/abs/1506.04046


Studying coronal mass ejections (CMEs) in coronagraph data can be challenging due to their diffuse structure and transient nature, compounded by the variations in their dynamics, morphology, and frequency of occurrence. The large amounts of data available from missions like the Solar and Heliospheric Observatory (SOHO) make manual cataloging of CMEs tedious and prone to human error, and so a robust method of detection and analysis is required and often preferred. A new coronal image processing catalog called CORIMP has been developed in an effort to achieve this, through the implementation of a dynamic background separation technique and multiscale edge detection. These algorithms together isolate and characterise CME structure in the field-of-view of the Large Angle Spectrometric Coronagraph (LASCO) onboard SOHO. CORIMP also applies a Savitzky-Golay filter, along with quadratic and linear fits, to the height-time measurements for better revealing the true CME speed and acceleration profiles across the plane-of-sky. Here we present a sample of new results from the CORIMP CME catalog, and directly compare them with the other automated catalogs of Computer Aided CME Tracking (CACTus) and Solar Eruptive Events Detection System (SEEDS), as well as the manual CME catalog at the Coordinated Data Analysis Workshop (CDAW) Data Center and a previously published study of the sample events. We further investigate a form of unsupervised machine learning by using a k-means clustering algorithm to distinguish detections of multiple CMEs that occur close together in space and time. While challenges still exist, this investigation and comparison of results demonstrates the reliability and robustness of the CORIMP catalog, proving its effectiveness at detecting and tracking CMEs throughout the LASCO dataset.

Read this paper on arXiv…

J. Byrne
Mon, 15 Jun 15
13/45

Comments: 23 pages, 11 figures, 1 table

Solar Axion search with Micromegas detectors in the CAST Experiment with $^{3}$He as buffer gas [SSA]

http://arxiv.org/abs/1506.02601


Axions are well motivated particles proposed in an extension of the SM as a solution to the strong CP problem. There is also the category of axion-like particles (ALPs), which appear in extensions of the SM and share the same phenomenology as the axion. Axions and ALPs are candidates to solve the Dark Matter problem. CAST, the CERN Axion Solar Telescope, has been looking for solar axions since 2003. CAST exploits the helioscope technique, using a decommissioned LHC dipole magnet in which solar axions could be reconverted into photons. Three of the four detectors operating at CAST are of the Micromegas type. The analysis of the data of the three Micromegas detectors during the 2011 data taking campaign at CAST is presented in this thesis, obtaining a limit on the coupling constant of g$_{a \gamma}$ < 3.90 $\times$ 10$^{-10}$ GeV$^{-1}$ at 95$\%$ confidence level, for axion masses from 1 to 1.17 eV. The CAST Micromegas detectors exploit different strategies developed for the reduction of the background level. Moreover, different test benches have been developed in order to understand the origin of the background. The state of the art in low-background techniques is shown in the upgrades of the Micromegas detectors at CAST, which have led to a reduction of the background by a factor of $\sim$6. This translates into an improvement of the sensitivity of CAST by a factor of $\sim$2.5. Beyond CAST, a new-generation axion helioscope has been proposed: IAXO, the International Axion Observatory. IAXO will enhance the helioscope technique by exploiting all the singularities of CAST, implemented in a large superconducting toroidal magnet, dedicated X-ray optics and ultra-low background detectors. A description of the IAXO proposal and a study of the sensitivity of IAXO are presented in this thesis. IAXO will surpass CAST by more than one order of magnitude, entering an unexplored area of parameter space.

Read this paper on arXiv…

J. Garcia
Tue, 9 Jun 15
49/56

Comments: PhD. Thesis

A simple reform for treating the loss of accuracy of Humlicek's W4 algorithm near the real axis [IMA]

http://arxiv.org/abs/1505.05596


We present a simple reform for treating the reported problem of loss of accuracy near the real axis of Humlicek's W4 algorithm, widely used for the calculation of the Faddeyeva or complex probability function. The reformed routine maintains the claimed accuracy of the algorithm over a wide and fine grid that covers the entire domain of the real part, x, of the complex input variable z = x + iy, and values of the imaginary part in the range y ∈ [10^-30, 10^+30].
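
For cross-checking any W4-style implementation, SciPy exposes the same Faddeyeva function w(z) as scipy.special.wofz; the snippet below evaluates it in the small-y regime discussed in the paper and uses it to build a Voigt profile (a standard identity, not code from the paper).

    import numpy as np
    from scipy.special import wofz

    def voigt_profile(x, sigma, gamma):
        # Voigt profile via the Faddeyeva function: V = Re[w(z)] / (sigma * sqrt(2*pi))
        z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
        return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

    # the troublesome regime for W4: points very close to the real axis (tiny y)
    z = np.array([5.0 + 1e-25j, 5.0 + 1e-5j, 5.0 + 1.0j])
    print(wofz(z))
    print(voigt_profile(np.linspace(-3.0, 3.0, 7), sigma=1.0, gamma=0.5))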

Read this paper on arXiv…

M. Zaghloul
Fri, 22 May 15
11/67

Comments: 7 pages 4 figures

Measuring photometric redshifts using galaxy images and Deep Neural Networks [IMA]

http://arxiv.org/abs/1504.07255


We propose a new method to estimate the photometric redshift of galaxies by using the full galaxy image in each measured band. This method draws from the latest techniques and advances in machine learning, in particular Deep Neural Networks. We pass the entire multi-band galaxy image into the machine learning architecture to obtain a redshift estimate that is competitive with the best existing standard machine learning techniques. The standard techniques estimate redshifts using post-processed features, such as magnitudes and colours, which are extracted from the galaxy images and are deemed to be salient by the user. This new method removes the user from the photometric redshift estimation pipeline. However we do note that Deep Neural Networks require many orders of magnitude more computing resources than standard machine learning architectures.

Read this paper on arXiv…

B. Hoyle
Wed, 29 Apr 15
1/62

Comments: 21 pages, 3 figures, 1 table, submitted to Astronomy and Computer Science

Stochastic determination of matrix determinants [CL]

http://arxiv.org/abs/1504.02661


Matrix determinants play an important role in data analysis, in particular when Gaussian processes are involved. Due to currently exploding data volumes, the linear operations (matrices) acting on the data are often not accessible directly, but are only represented indirectly in the form of a computer routine. Such a routine implements the transformation a data vector undergoes under matrix multiplication. While efficient probing routines to estimate a matrix's diagonal or trace, based solely on such computationally affordable matrix-vector multiplications, are well known and frequently used in signal inference, a stochastic estimate of the determinant has so far been lacking. In this work a probing method for the logarithm of the determinant of a linear operator is introduced. This method rests upon a reformulation of the log-determinant via an integral representation and the transformation of the involved terms into stochastic expressions. This stochastic determinant determination enables large-size applications in Bayesian inference, in particular evidence calculations, model comparison, and posterior determination.
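
A hedged sketch of the underlying idea, stochastic trace probing applied to log A through the identity log det A = tr log A; for the small dense test matrix below we cheat and form log A explicitly with SciPy, whereas the point of the paper is to evaluate the required operator applications implicitly.

    import numpy as np
    from scipy.linalg import logm

    def stochastic_logdet(apply_logA, n, n_probes=200, seed=0):
        # Hutchinson-style estimate of tr(log A) = log det A from operator applications.
        rng = np.random.default_rng(seed)
        total = 0.0
        for _ in range(n_probes):
            z = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe vector
            total += z @ apply_logA(z)
        return total / n_probes

    # small symmetric positive-definite test matrix
    rng = np.random.default_rng(1)
    B = rng.normal(size=(50, 50))
    A = B @ B.T + 50.0 * np.eye(50)
    logA = logm(A).real

    print(stochastic_logdet(lambda v: logA @ v, n=50), np.linalg.slogdet(A)[1])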

Read this paper on arXiv…

S. Dorn and T. Ensslin
Mon, 13 Apr 15
49/54

Comments: 8 pages, 5 figures

Fast and accurate prediction of numerical relativity waveforms from binary black hole mergers using surrogate models [CL]

http://arxiv.org/abs/1502.07758


Simulating a binary black hole coalescence by solving Einstein’s equations is computationally expensive, requiring days to months of supercomputing time. In this paper, we construct an accurate and fast-to-evaluate surrogate model for numerical relativity (NR) waveforms from non-spinning binary black hole coalescences with mass ratios from $1$ to $10$ and durations corresponding to about $15$ orbits before merger. Our surrogate, which is built using reduced order modeling techniques, is distinct from traditional modeling efforts. We find that the full multi-mode surrogate model agrees with waveforms generated by NR to within the numerical error of the NR code. In particular, we show that our modeling strategy produces surrogates which can correctly predict NR waveforms that were {\em not} used for the surrogate’s training. For all practical purposes, then, the surrogate waveform model is equivalent to the high-accuracy, large-scale simulation waveform but can be evaluated in a millisecond to a second depending on the number of output modes and the sampling rate. Our model includes all spherical-harmonic ${}_{-2}Y_{\ell m}$ waveform modes that can be resolved by the NR code up to $\ell=8$, including modes that are typically difficult to model with other approaches. We assess the model’s uncertainty, which could be useful in parameter estimation studies seeking to incorporate model error. We anticipate NR surrogate models to be useful for rapid NR waveform generation in multiple-query applications like parameter estimation, template bank construction, and testing the fidelity of other waveform models.

Read this paper on arXiv…

J. Blackman, S. Field, C. Galley, et. al.
Mon, 2 Mar 15
37/39

Comments: 6 pages, 6 figures

Efficient method for measuring the parameters encoded in a gravitational-wave signal [IMA]

http://arxiv.org/abs/1502.05407


Once upon a time, predictions for the accuracy of inference on gravitational-wave signals relied on computationally inexpensive but often inaccurate techniques. Recently, the approach has shifted to actual inference on noisy signals with complex stochastic Bayesian methods, at the expense of significant computational cost. Here, we argue that it is often possible to have the best of both worlds: a Bayesian approach that incorporates prior information and correctly marginalizes over uninteresting parameters, providing accurate posterior probability distribution functions, but carried out on a simple grid at a low computational cost, comparable to the inexpensive predictive techniques.
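
The essence of the proposal is evaluating the posterior on a deterministic grid rather than sampling it; a one-parameter toy version (hypothetical Gaussian likelihood, flat prior) makes the bookkeeping explicit.

    import numpy as np

    # hypothetical data and one-parameter model: d_i = mu + unit-variance Gaussian noise
    rng = np.random.default_rng(0)
    data = rng.normal(1.5, 1.0, size=50)

    mu_grid = np.linspace(-2.0, 5.0, 2001)                       # parameter grid
    loglike = np.array([-0.5 * np.sum((data - mu)**2) for mu in mu_grid])
    log_post = loglike                                           # flat prior: constant offset

    post = np.exp(log_post - log_post.max())                     # avoid underflow
    post /= np.trapz(post, mu_grid)                              # normalise on the grid

    mean = np.trapz(mu_grid * post, mu_grid)                     # posterior summaries
    std = np.sqrt(np.trapz((mu_grid - mean)**2 * post, mu_grid))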

Read this paper on arXiv…

C. Haster, I. Mandel and W. Farr
Fri, 20 Feb 15
28/48

Comments: 17 pages, 5 figures

LUX likelihood and limits on spin-independent and spin-dependent WIMP couplings with LUXCalc [CL]

http://arxiv.org/abs/1502.02667


We present LUXCalc, a new utility for calculating likelihoods and deriving WIMP-nucleon coupling limits from the recent results of the LUX direct search dark matter experiment. After a brief review of WIMP-nucleon scattering, we derive, for the first time, LUX limits on the spin-dependent WIMP-nucleon couplings over a broad range of WIMP masses, under standard assumptions on the relevant astrophysical parameters. We find that, under these and other common assumptions, LUX excludes the entire spin-dependent parameter space consistent with a dark matter interpretation of DAMA’s anomalous signal, the first time a single experiment has been able to do so. We also revisit the case of spin-independent couplings, and demonstrate good agreement between our results and the published LUX results. Finally, we derive constraints on the parameters of an effective dark matter theory in which a spin-1 mediator interacts with a fermionic WIMP and Standard Model fermions via axial-vector couplings. A detailed appendix describes the use of LUXCalc with standard codes to place constraints on generic dark matter theories.

Read this paper on arXiv…

C. Savage, A. Scaffidi, M. White, et. al.
Wed, 11 Feb 15
63/72

Comments: 29 pages, 6 figures. Software package included as ancillary files

Fast Bayesian Inference for Exoplanet Discovery in Radial Velocity Data [IMA]

http://arxiv.org/abs/1501.06952


Inferring the number of planets $N$ in an exoplanetary system from radial velocity (RV) data is a challenging task. Recently, it has become clear that RV data can contain periodic signals due to stellar activity, which can be difficult to distinguish from planetary signals. However, even doing the inference under a given set of simplifying assumptions (e.g. no stellar activity) can be difficult. It is common for the posterior distribution for the planet parameters, such as orbital periods, to be multimodal and to have other awkward features. In addition, when $N$ is unknown, the marginal likelihood (or evidence) as a function of $N$ is required. Rather than doing separate runs with different trial values of $N$, we propose an alternative approach using a trans-dimensional Markov Chain Monte Carlo method within Nested Sampling. The posterior distribution for $N$ can be obtained with a single run. We apply the method to $\nu$ Oph and Gliese 581, finding moderate evidence for additional signals in $\nu$ Oph with periods of 36.11 $\pm$ 0.034 days, 75.58 $\pm$ 0.80 days, and 1709 $\pm$ 183 days; the posterior probability that at least one of these exists is 85%. The results also suggest Gliese 581 hosts many (7-15) “planets” (or other causes of other periodic signals), but only 4-6 have well determined periods. The analysis of both of these datasets shows phase transitions exist which are difficult to negotiate without Nested Sampling.

Read this paper on arXiv…

B. Brewer and C. Donovan
Thu, 29 Jan 15
36/49

Comments: Accepted for publication in MNRAS. 9 pages, 12 figures. Code at this http URL

Sub-pixel resolution with color X-ray camera SLcam(R) [CL]

http://arxiv.org/abs/1501.06825


The color X-ray camera SLcam(R) is a full-field, single-photon detector providing scanning-free, energy- and spatially-resolved X-ray imaging. Spatial resolution is achieved with the use of polycapillary optics guiding X-ray photons from small regions on a sample to distinct energy-dispersive pixels on a CCD. With sub-pixel resolution, signals from individual capillary channels can be distinguished. Accordingly, the spatial resolution of the SLcam(R) is no longer confined to the pixel size but rather to the diameter of the individual polycapillary channels. In this work a new approach to the sub-pixel resolution algorithm, which also includes photon events from the pixel centers, is proposed. The details of the employed numerical method and several sub-pixel resolution examples are presented and discussed.

Read this paper on arXiv…

S. Nowak, A. Bjeoumikhov, J. Borany, et. al.
Wed, 28 Jan 15
15/58

Comments: 8 pages, 7 figures

Sensitivity improvement of a laser interferometer limited by inelastic back-scattering, employing dual readout [CL]

http://arxiv.org/abs/1501.05219


Inelastic back-scattering of stray light is a long-standing problem in high-sensitivity interferometric measurements and a potential limitation for advanced gravitational-wave detectors, in particular at sub-audio-band frequencies. The emerging parasitic interferences cannot be distinguished from a scientific signal via conventional single readout. In this work, we propose and demonstrate the subtraction of inelastic back-scatter signals by employing dual homodyne detection on the output light — here — of a table-top Michelson interferometer. The additional readout contains solely parasitic signals and is used to model the scatter source. Subtraction of the scatter signal reduces the noise spectral density and thus improves the measurement sensitivity. Our scheme is qualitatively different from the previously demonstrated vetoing of scatter signals and opens a new path for improving the sensitivity of future gravitational-wave detectors.

Read this paper on arXiv…

M. Meinders and R. Schnabel
Thu, 22 Jan 15
20/58

Comments: N/A

Sign singularity and flares in solar active region NOAA 11158 [SSA]

http://arxiv.org/abs/1501.04279


Solar Active Region NOAA 11158 has hosted a number of strong flares, including one X2.2 event. The complexity of the current density and current helicity is studied through a cancellation analysis of their sign-singular measure, which features power-law scaling. Spectral analysis is also performed, revealing the presence of two separate scaling ranges with different spectral indices. The time evolution of the parameters is discussed. Sudden changes of the cancellation exponents at the time of large flares, and the presence of correlation with the EUV and X-ray flux, suggest that the eruption of large flares can be linked to the small-scale properties of the current structures.

Read this paper on arXiv…

L. Sorriso-Valvo, G. Vita, M. Kazachenko, et. al.
Tue, 20 Jan 15
45/76

Comments: N/A

The NIFTY way of Bayesian signal inference [IMA]

http://arxiv.org/abs/1412.7160


We introduce NIFTY, “Numerical Information Field Theory”, a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms, prototyped in 1D, can be applied to real world problems in higher-dimensional settings. NIFTY as a versatile library is applicable and already has been applied in 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high energy astronomy.

Read this paper on arXiv…

M. Selig
Wed, 24 Dec 14
18/37

Comments: 6 pages, 2 figures, refereed proceeding of the 33rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2013), software available at this http URL and this http URL

Blurring Out Cosmic Puzzles [CL]

http://arxiv.org/abs/1412.4382


The Doomsday argument and anthropic reasoning are two puzzling examples of probabilistic confirmation. In both cases, a lack of knowledge apparently yields surprising conclusions. Since they are formulated within a Bayesian framework, they constitute a challenge to Bayesianism. Several attempts, some successful, have been made to avoid these conclusions, but some versions of these arguments cannot be dissolved within the framework of orthodox Bayesianism. I show that adopting an imprecise framework of probabilistic reasoning allows for a more adequate representation of ignorance in Bayesian reasoning and explains away these puzzles.

Read this paper on arXiv…

Y. Benetreau-Dupin
Tue, 16 Dec 14
27/78

Comments: 15 pages, 1 figure. To appear in Philosophy of Science (PSA 2014)

Inference for Trans-dimensional Bayesian Models with Diffusive Nested Sampling [CL]

http://arxiv.org/abs/1411.3921


Many inference problems involve inferring the number $N$ of objects in some region, along with their properties $\{\mathbf{x}_i\}_{i=1}^N$, from a dataset $\mathcal{D}$. A common statistical example is finite mixture modelling. In the Bayesian framework, these problems are typically solved using one of the following two methods: i) by executing a Monte Carlo algorithm (such as Nested Sampling) once for each possible value of $N$, and calculating the marginal likelihood or evidence as a function of $N$; or ii) by doing a single run that allows the model dimension $N$ to change (such as Markov Chain Monte Carlo with birth/death moves), and obtaining the posterior for $N$ directly. In this paper we present a general approach to this problem that uses trans-dimensional MCMC embedded {\it within} a Nested Sampling algorithm, allowing us to explore the posterior distribution and calculate the marginal likelihood (summed over $N$) even if the problem contains a phase transition or other difficult features such as multimodality. We present two example problems, finding sinusoidal signals in noisy data, and finding and measuring galaxies in a noisy astronomical image. Both of the examples demonstrate phase transitions in the relationship between the likelihood and the cumulative prior mass.

Read this paper on arXiv…

B. Brewer
Mon, 17 Nov 14
1/52

Comments: Submitted. Comments welcome. 14 pages, 7 figures. Software available at this https URL

Angular Power Spectra with Finite Counts [CEA]

http://arxiv.org/abs/1411.4031


Angular anisotropy techniques for cosmic diffuse radiation maps are powerful probes, even for quite small data sets. A popular observable is the angular power spectrum; we present a detailed study applicable to any unbinned source skymap S(n) from which N random, independent events are observed. Its exact variance, which is due to the finite statistics, depends only on S(n) and N; we also derive an unbiased estimator of the variance from the data. First-order effects agree with previous analytic estimates. Importantly, heretofore unidentified higher-order effects are found to contribute to the variance and may cause the uncertainty to be significantly larger than previous analytic estimates—potentially orders of magnitude larger. Neglect of these higher-order terms, when significant, may result in a spurious detection of the power spectrum. On the other hand, this would indicate the presence of higher-order spatial correlations, such as a large bispectrum, providing new clues about the sources. Numerical simulations are shown to support these conclusions. Applying the formalism to an ensemble of Gaussian-distributed skymaps, the noise-dominated part of the power spectrum uncertainty is significantly increased at high multipoles by the new, higher-order effects. This work is important for harmonic analyses of the distributions of diffuse high-energy gamma-rays, neutrinos, and charged cosmic rays, as well as for populations of sparse point sources such as active galactic nuclei.

Read this paper on arXiv…

S. Campbell
Mon, 17 Nov 14
15/52

Comments: 27 pages, 8 figures

On-off intermittency and amplitude-phase synchronization in Keplerian shear flows [SSA]

http://arxiv.org/abs/1411.3998


We study the development of coherent structures in local simulations of the magnetorotational instability in accretion discs in regimes of on-off intermittency. In a previous paper [Chian et al., Phys. Rev. Lett. 104, 254102 (2010)], we have shown that the laminar and bursty states due to the on-off spatiotemporal intermittency in a one-dimensional model of nonlinear waves correspond, respectively, to nonattracting coherent structures with higher and lower degrees of amplitude-phase synchronization. In this paper we extend these results to a three-dimensional model of magnetized Keplerian shear flows. Keeping the kinetic Reynolds number and the magnetic Prandtl number fixed, we investigate two different intermittent regimes by varying the plasma beta parameter. The first regime is characterized by turbulent patterns interrupted by the recurrent emergence of a large-scale coherent structure known as two-channel flow, where the state of the system can be described by a single Fourier mode. The second regime is dominated by the turbulence with sporadic emergence of coherent structures with shapes that are reminiscent of a perturbed channel flow. By computing the Fourier power and phase spectral entropies in three-dimensions, we show that the large-scale coherent structures are characterized by a high degree of amplitude-phase synchronization.

Read this paper on arXiv…

R. Miranda, E. Rempel and A. Chian
Mon, 17 Nov 14
30/52

Comments: 17 pages, 10 figures

Monte Carlo error analyses of Spearman's rank test [IMA]

http://arxiv.org/abs/1411.3816


Spearman’s rank correlation test is commonly used in astronomy to discern whether two variables are correlated. Unlike most other quantities quoted in the astronomical literature, the Spearman’s rank correlation coefficient is generally quoted with no attempt to estimate the error on its value. This is a practice that would not be accepted for those other quantities, as an estimate of a quantity without an estimate of its associated uncertainty is often regarded as meaningless. This manuscript describes a number of easily implemented, Monte Carlo based methods to estimate the uncertainty on the Spearman’s rank correlation coefficient or, more precisely, to estimate its probability distribution.
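
The recipe is simple to reproduce. The sketch below combines two Monte Carlo ingredients in this spirit, bootstrap resampling of the pairs and, when measurement errors are supplied, Gaussian perturbation of the values, and returns the resulting distribution of the coefficient, from which a median and confidence interval can be quoted. The function and argument names are illustrative; this is not the author's published code.

```python
import numpy as np
from scipy.stats import spearmanr

def spearman_mc(x, y, x_err=None, y_err=None, n_mc=10000, seed=0):
    """Monte Carlo distribution of Spearman's rho for paired data (x, y).

    Each realisation bootstrap-resamples the pairs and, if per-point errors
    are given, perturbs the values with Gaussian noise of that width.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    rhos = np.empty(n_mc)
    for k in range(n_mc):
        idx = rng.integers(n, size=n)            # bootstrap resample of pairs
        xs, ys = x[idx].copy(), y[idx].copy()
        if x_err is not None:
            xs += rng.normal(0.0, np.asarray(x_err, float)[idx])
        if y_err is not None:
            ys += rng.normal(0.0, np.asarray(y_err, float)[idx])
        rhos[k], _ = spearmanr(xs, ys)
    return rhos

# e.g. quote the median and a 68% interval:
#   rhos = spearman_mc(x, y, x_err, y_err)
#   lo, med, hi = np.percentile(rhos, [16, 50, 84])
```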

Read this paper on arXiv…

P. Curran
Mon, 17 Nov 14
38/52

Comments: Unsubmitted manuscript (comments welcome); 5 pages; Code available at this https URL

Target Density Normalization for Markov Chain Monte Carlo Algorithms [CL]

http://arxiv.org/abs/1410.7149


Techniques for evaluating the normalization integral of the target density for Markov Chain Monte Carlo algorithms are described and tested numerically. It is assumed that the Markov Chain algorithm has converged to the target distribution and produced a set of samples from the density. These are used to evaluate sample mean, harmonic mean and Laplace algorithms for the calculation of the integral of the target density. A clear preference for the sample mean algorithm applied to a reduced support region is found, and guidelines are given for implementation.
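
The structure of the preferred estimator is easy to illustrate. In the sketch below, a minimal example rather than the authors' implementation, the reduced support region is taken to be the axis-aligned box spanned by central percentiles of the chain; the integral over that box is estimated by the box volume times the average of the unnormalised target over uniform draws, and division by the fraction of chain samples falling inside the box converts this into an estimate of the full normalisation integral.

```python
import numpy as np

def log_evidence_sample_mean(chain, log_target, n_uniform=100_000,
                             q=(5.0, 95.0), seed=0):
    """Sample-mean estimate of log Z, with Z = integral of exp(log_target).

    chain      : (n_samples, ndim) MCMC samples drawn from the target density
    log_target : callable giving the *unnormalised* log density at a point
    The reduced support A is the box spanned by the per-dimension q-th
    percentiles of the chain; log Z ~ log[vol(A) <f>_A / P(A)], with P(A)
    estimated as the fraction of chain samples inside A.
    """
    rng = np.random.default_rng(seed)
    chain = np.atleast_2d(np.asarray(chain, float))
    lo, hi = (np.percentile(chain, qi, axis=0) for qi in q)
    log_volume = np.log(hi - lo).sum()
    inside = np.all((chain >= lo) & (chain <= hi), axis=1).mean()
    u = rng.uniform(lo, hi, size=(n_uniform, chain.shape[1]))
    logf = np.array([log_target(p) for p in u])
    log_mean_f = logf.max() + np.log(np.mean(np.exp(logf - logf.max())))
    return log_volume + log_mean_f - np.log(inside)
```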

Read this paper on arXiv…

A. Caldwell and C. Liu
Tue, 28 Oct 14
45/67

Comments: N/A

Automatic fault detection on BIPV systems without solar irradiation data [CL]

http://arxiv.org/abs/1410.6946


BIPV (building-integrated photovoltaic) systems are small PV generation units spread across a territory, with very diverse characteristics. This makes it difficult to establish a cost-effective procedure for monitoring, fault detection, performance analysis, operation and maintenance, and as a result many problems affecting BIPV systems go undetected. To carry out effective automatic fault detection, we need a performance indicator that is reliable and that can be applied to many PV systems at very low cost. Existing approaches for analyzing the performance of PV systems are often based on the Performance Ratio (PR), whose accuracy depends on good solar irradiation data, which in turn can be very difficult or cost-prohibitive for the BIPV owner to obtain. We present an alternative fault detection procedure based on a performance indicator that can be constructed solely from the energy production data measured at the BIPV systems. This procedure does not require operating-condition data such as solar irradiation, air temperature, or wind speed. The performance indicator, called Performance to Peers (P2P), is constructed from spatial and temporal correlations between the energy output of neighboring and similar PV systems. The method was developed from an analysis of the energy production data of approximately 10,000 BIPV systems located in Europe. The results of the procedure are illustrated on hourly, daily and monthly data monitored during one year at a BIPV system located in the south of Belgium. Our results confirm that it is possible to carry out automatic fault detection without solar irradiation data. P2P proves to be more stable than PR most of the time, and thus constitutes a more reliable performance indicator for fault detection procedures.
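
The idea behind the indicator can be sketched in a few lines. In the toy version below, a simplified illustration and not the authors' exact definition, a system's energy output is divided by the contemporaneous median output of its peer group, and that ratio is normalised by its own long-term median, so that a sustained drop flags likely underperformance without any irradiation data.

```python
import numpy as np

def p2p_indicator(energy, system, eps=1e-9):
    """Toy Performance-to-Peers (P2P) series for one PV system.

    energy : (n_systems, n_periods) array of measured energy production
             (e.g. hourly, daily or monthly) for similar, neighbouring systems
    system : row index of the system under test
    Returns an (n_periods,) series; values well below 1 over a sustained
    stretch suggest a fault or underperformance relative to the peers.
    """
    energy = np.asarray(energy, float)
    peers = np.delete(energy, system, axis=0)
    ratio = energy[system] / (np.median(peers, axis=0) + eps)
    return ratio / (np.median(ratio) + eps)      # normalise by own baseline

# Example fault flag: more than 30% below the peer group
#   p2p = p2p_indicator(energy, system=0)
#   suspect_periods = p2p < 0.7
```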

Read this paper on arXiv…

J. Leloux, L. Narvarte, A. Luna, et al.
Tue, 28 Oct 14
48/67

Comments: 7 pages, 8 figures, conference proceedings, 29th European Photovoltaic Solar Energy Conference and Exhibition, Amsterdam, 2014

A Comprehensive Search for Dark Matter Annihilation in Dwarf Galaxies [CEA]

http://arxiv.org/abs/1410.2242


We present a new formalism designed to discover dark matter annihilation occurring in the Milky Way’s dwarf galaxies. The statistical framework extracts all available information in the data by simultaneously combining observations of all the dwarf galaxies and incorporating the impact of particle physics properties, the distribution of dark matter in the dwarfs, and the detector response. The method performs maximally powerful frequentist searches and produces confidence limits on particle physics parameters. Probability distributions of test statistics under various hypotheses are constructed exactly, without relying on large sample approximations. The derived limits have proper coverage by construction and claims of detection are not biased by imperfect background modeling. We implement this formalism using data from the Fermi Gamma-ray Space Telescope to search for an annihilation signal in the complete sample of Milky Way dwarfs whose dark matter distributions can be reliably determined. We find that the observed data is consistent with background for each of the dwarf galaxies individually as well as in a joint analysis. The strongest constraints are at small dark matter particle masses. Taking the median of the systematic uncertainty in dwarf density profiles, the cross section upper limits are below the pure s-wave weak scale relic abundance value (2.2 x 10^-26 cm^3/s) for dark matter masses below 26 GeV (for annihilation into b quarks), 29 GeV (tau leptons), 35 GeV (up, down, strange, charm quarks and gluons), 6 GeV (electrons/positrons), and 114 GeV (two-photon final state). For dark matter particle masses less than 1 TeV, these represent the strongest limits obtained to date using dwarf galaxies.

Read this paper on arXiv…

A. Geringer-Sameth, S. Koushiappas and M. Walker
Fri, 10 Oct 14
15/61

Comments: 34 pages, 15 figures, a machine-readable table of observed cross section limits is available as an ancillary file

Densities mixture unfolding for data obtained from detectors with finite resolution and limited acceptance [CL]

http://arxiv.org/abs/1410.1586


A procedure based on a Mixture Density Model for correcting experimental data for distortions due to finite resolution and limited detector acceptance is presented. For the case in which the solution is known to be non-negative, the true distribution is estimated by a weighted sum of probability density functions with positive weights, with the width of the densities acting as a regularisation parameter that controls the smoothness of the result. To obtain better smoothing in less populated regions, the width parameter scales inversely with the square root of the estimated density. Furthermore, the non-negative garrotte method is used to find the most economical representation of the solution. Cross-validation is employed to determine the optimal values of the resolution and garrotte parameters. The proposed approach is directly applicable to multidimensional problems. Numerical examples in one and two dimensions are presented to illustrate the procedure.
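
Stripped of the adaptive widths, the garrotte step and the cross-validation, the core of the method reduces to a non-negative fit of smeared basis densities to the observed spectrum. The sketch below is a simplified illustration under those omissions: the true distribution is written as a sum of Gaussian densities with non-negative weights, each basis density is folded through a known response matrix, and the weights are obtained by non-negative least squares, with the common width acting as the regularisation parameter.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

def unfold_mixture(data, response, true_edges, centres, width):
    """Unfold observed bin counts with a non-negative Gaussian mixture.

    data       : (n_obs,) observed bin contents
    response   : (n_obs, n_true) detector response matrix (smearing + acceptance)
    true_edges : (n_true + 1,) bin edges in true space
    centres    : locations of the Gaussian basis densities in true space
    width      : common width of the basis densities (regularisation parameter)
    Returns (weights, unfolded) with `unfolded` the estimate per true-space bin.
    """
    # basis[k, j] = integral of the k-th Gaussian over the j-th true bin
    basis = np.array([np.diff(norm.cdf(true_edges, loc=c, scale=width))
                      for c in centres])
    folded = basis @ response.T        # each basis density as seen by the detector
    weights, _ = nnls(folded.T, np.asarray(data, float))
    return weights, weights @ basis
```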

Read this paper on arXiv…

N. Gagunashvili
Wed, 8 Oct 14
68/68

Comments: 25 pages, 14 figures. arXiv admin note: text overlap with arXiv:1209.3766

A near-infrared interferometric survey of debris-disc stars. IV. An unbiased sample of 92 southern stars observed in H-band with VLTI/PIONIER [EPA]

http://arxiv.org/abs/1409.6143


Context. Detecting and characterizing circumstellar dust is a way to study the architecture and evolution of planetary systems. Cold dust in debris disks only traces the outer regions. Warm and hot exozodiacal dust needs to be studied in order to trace regions close to the habitable zone.
Aims. We aim to determine the prevalence and to constrain the properties of hot exozodiacal dust around nearby main-sequence stars.
Methods. We search a magnitude-limited (H < 5) sample of 92 stars for bright exozodiacal dust using our VLTI visitor instrument PIONIER in the H-band. We derive statistics of the detection rate with respect to parameters such as the stellar spectral type and age, and the presence of a debris disk in the outer regions of the system. We derive more robust statistics by combining our sample with the results from our CHARA/FLUOR survey in the K-band. In addition, our spectrally dispersed data allow us to put constraints on the emission mechanism and the dust properties in the detected systems.
Results. We find an overall detection rate of bright exozodiacal dust in the H-band of 11% (9 out of 85 targets) and three tentative detections. The detection rate decreases from early-type to late-type stars and increases with the age of the host star. We do not confirm the tentative correlation between the presence of cold and hot dust found in our earlier analysis of the FLUOR sample alone. Our spectrally dispersed data suggest that either the dust is extremely hot or the emission is dominated by scattered light in most cases. The implications of our results for the target selection of future terrestrial planet-finding missions using direct imaging are discussed.

Read this paper on arXiv…

S. Ertel, O. Absil, D. Defrere, et al.
Tue, 23 Sep 14
5/60

Comments: 20 pages, 16 figures, 4 tables, accepted for publication in A&A

Bayesian parameter estimation of core collapse supernovae using gravitational wave simulations [CL]

http://arxiv.org/abs/1407.7549


Using the latest numerical simulations of rotating stellar core collapse, we present a Bayesian framework to extract the physical information encoded in noisy gravitational wave signals. We fit Bayesian principal component regression models with known and unknown signal arrival times to reconstruct gravitational wave signals, and subsequently fit known astrophysical parameters on the posterior means of the principal component coefficients using a linear model. We predict the ratio of rotational kinetic energy to gravitational energy of the inner core at bounce by sampling from the posterior predictive distribution, and find that these predictions are generally very close to the true parameter values, with $90\%$ credible intervals $\sim 0.04$ and $\sim 0.06$ wide for the known and unknown arrival time models respectively. Two supervised machine learning methods are implemented to classify precollapse differential rotation, and we find that these methods discriminate rapidly rotating progenitors particularly well. We also introduce a constrained optimization approach to model selection to find an optimal number of principal components in the signal reconstruction step. Using this approach, we select 14 principal components as the most parsimonious model.
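
The reconstruction-plus-regression pipeline has a simple deterministic skeleton, sketched below with point estimates only (the paper fits the principal-component coefficients and the linear model within a Bayesian framework and samples posterior predictive distributions). A principal-component basis is built from a catalogue of simulated waveforms, a signal is projected onto the leading components, and a physical parameter such as the ratio of rotational kinetic to gravitational energy is predicted from the coefficients with an ordinary linear model.

```python
import numpy as np

def fit_pc_regression(catalogue, params, n_pc):
    """PCA of a waveform catalogue plus a linear map from PC coefficients to a parameter.

    catalogue : (n_waveforms, n_samples) simulated gravitational-wave signals
    params    : (n_waveforms,) known physical parameter for each waveform
    """
    mean = catalogue.mean(axis=0)
    _, _, vt = np.linalg.svd(catalogue - mean, full_matrices=False)
    basis = vt[:n_pc]                              # (n_pc, n_samples)
    coeffs = (catalogue - mean) @ basis.T          # PC coefficients of the catalogue
    design = np.column_stack([np.ones(len(params)), coeffs])
    beta, *_ = np.linalg.lstsq(design, params, rcond=None)
    return mean, basis, beta

def predict_parameter(signal, mean, basis, beta):
    """Project a (possibly noisy) signal onto the PCs and predict the parameter."""
    c = (signal - mean) @ basis.T
    return beta[0] + c @ beta[1:]
```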

Read this paper on arXiv…

M. Edwards, R. Meyer and N. Christensen
Wed, 30 Jul 14
51/65

Comments: N/A

Study of the influence of solar variability on a regional (Indian) climate: 1901-2007 [SSA]

http://arxiv.org/abs/1407.1805


We use more than 100 years of Indian temperature data to study the influence of solar activity on climate. We study the Sun-climate relationship by averaging solar and climate data on various time scales: decadal, solar-activity and solar magnetic cycles. We also consider the minimum and maximum values of the sunspot number (SSN) during each solar cycle. The SSN correlates better with Indian temperature when the data are averaged over solar magnetic polarity epochs (SSN maximum to maximum). Our results indicate that solar variability may still be contributing to ongoing climate change and suggest the need for further investigation.

Read this paper on arXiv…

O. Aslam and Badruddin.
Tue, 8 Jul 14
17/66

Comments: 6 pages, 2 figures and 1 table, Accepted to Advances in Space Research, 2014

Primordial power spectrum from Planck [CEA]

http://arxiv.org/abs/1406.4827


Using a modified Richardson-Lucy algorithm, we reconstruct the primordial power spectrum (PPS) from Planck Cosmic Microwave Background (CMB) temperature anisotropy data. In our analysis we use different combinations of angular power spectra from Planck to reconstruct the shape of the primordial power spectrum and locate possible features. An extensive error analysis shows that the dip near $\ell\sim750-850$ is the most prominent feature in the data. A feature near $\ell\sim1800-2000$ is detectable with high confidence only in the 217 GHz spectrum and is apparently the consequence of a small systematic effect, as described in the revised Planck 2013 papers. Fixing the background cosmological parameters and the foreground nuisance parameters to their best-fit baseline values, we find that the best-fit power-law primordial power spectrum is consistent with the reconstructed form of the PPS at the 2$\sigma$ level of the estimated errors (apart from the local features mentioned above). As a consistency test, we find that the primordial power spectrum reconstructed from Planck temperature data can also substantially improve the fit to the WMAP-9 angular power spectrum (with respect to a power-law PPS) if an overall amplitude shift of $\sim2.5\%$ is allowed. In this context, the low-$\ell$ and 100 GHz spectra from Planck, which overlap in multipole range with the WMAP data, are found to be completely consistent with WMAP-9 (allowing the amplitude shift). As another important result, our reconstruction analysis reveals evidence of gravitational lensing. Finally, we present two smooth forms of the PPS containing only the important features. These smooth forms can provide significant improvements in fitting the data (with respect to the power-law PPS) and may offer hints for inflationary model building.
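
The deconvolution at the core of the analysis builds on the standard Richardson-Lucy iteration, which is compact enough to sketch. Below is the textbook multiplicative update for recovering a non-negative spectrum $P_k$ from data $D_\ell = \sum_k G_{\ell k} P_k$ given a transfer kernel $G$; the modifications used for CMB data (convergence control, error weighting and the handling of negative residuals) are not reproduced here.

```python
import numpy as np

def richardson_lucy(data, kernel, n_iter=200, p0=None):
    """Standard Richardson-Lucy iteration for D_l = sum_k G_lk P_k.

    data   : (n_l,) observed, non-negative spectrum D_l
    kernel : (n_l, n_k) non-negative transfer matrix G_lk
    Returns a non-negative estimate of P_k after n_iter multiplicative updates.
    """
    data = np.asarray(data, float)
    kernel = np.asarray(kernel, float)
    norm = kernel.sum(axis=0)                       # sum_l G_lk
    p = np.ones(kernel.shape[1]) if p0 is None else np.asarray(p0, float)
    for _ in range(n_iter):
        model = kernel @ p                          # current prediction of D_l
        ratio = data / np.where(model > 0, model, 1.0)
        p = p * (kernel.T @ ratio) / np.where(norm > 0, norm, 1.0)
    return p
```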

Read this paper on arXiv…

D. Hazra, A. Shafieloo and T. Souradeep
Thu, 19 Jun 14
27/62

Comments: 31 pages, 11 figures, 1 table

Astrophysical data analysis with information field theory [IMA]

http://arxiv.org/abs/1405.7701


Non-parametric imaging and data analysis in astrophysics and cosmology can be addressed by information field theory (IFT), a means of Bayesian, data-based inference on spatially distributed signal fields. IFT is a statistical field theory that permits the construction of optimal signal recovery algorithms. It exploits spatial correlations of the signal fields even for nonlinear and non-Gaussian signal inference problems. In particular, we discuss how IFT alleviates the perception threshold for recovering signals of unknown correlation structure, as well as a novel improvement to instrumental self-calibration schemes. IFT can be applied to many areas; here, applications in cosmology (cosmic microwave background, large-scale structure) and astrophysics (galactic magnetism, radio interferometry) are presented.

Read this paper on arXiv…

T. Ensslin
Mon, 2 Jun 14
43/56

Comments: 4 pages, 2 figures, accepted chapter to the conference proceedings for MaxEnt 2013, to be published by AIP

Another Look at Confidence Intervals: Proposal for a More Relevant and Transparent Approach [CL]

http://arxiv.org/abs/1405.5010


The behaviors of various confidence/credible interval constructions are explored, particularly in the region of low statistics where methods diverge most. We highlight a number of challenges, such as the treatment of nuisance parameters, and common misconceptions associated with such constructions. An informal survey of the literature suggests that confidence intervals are not always defined in relevant ways and are too often misinterpreted and/or misapplied. This can lead to seemingly paradoxical behaviors and flawed comparisons regarding the relevance of experimental results. We therefore conclude that there is a need for a more pragmatic strategy that recognizes that, while it is critical to objectively convey the information content of the data, there is also a strong desire to derive bounds on models and a natural instinct to interpret things this way. Accordingly, we attempt to put aside philosophical biases in favor of a practical view to propose a more transparent and self-consistent approach that better addresses these issues.

Read this paper on arXiv…

S. Biller and S. Oser
Wed, 21 May 14
21/45

Comments: 23 pages, 11 figures

Software for Geodynamical Researches Used in the LSGER IAA [IMA]

http://arxiv.org/abs/1405.3054


The Laboratory of Space Geodesy and Earth Rotation (LSGER) of the Institute of Applied Astronomy (IAA) of the Russian Academy of Sciences has, since its creation, carried out the computation of geodynamical products: Earth Orientation Parameters (EOP) and station coordinates (TRF) based on observations from space geodesy techniques: Very Long Baseline Interferometry (VLBI), Satellite Laser Ranging (SLR) and the Global Positioning System (GPS). The principal software components used for these investigations include the GROSS package for processing SLR observations, the Bernese package for processing GPS observations, the OCCAM package for processing VLBI observations, software for data exchange, and software for the combination of space geodesy products.

Read this paper on arXiv…

Z. Malkin, A. Voinov and E. Skurikhina
Wed, 14 May 14
20/48

Comments: Full variant of the paper presented at ADASS IX Conference, Hawaii, October 3-6, 1999

Spatially-Aware Temporal Anomaly Mapping of Gamma Spectra [CL]

http://arxiv.org/abs/1405.1135


For security, environmental, and regulatory purposes it is useful to continuously monitor wide areas for unexpected changes in radioactivity. We report on a temporal anomaly detection algorithm which uses mobile detectors to build a spatial map of background spectra, allowing sensitive detection of any anomalies through many days or months of monitoring. We adapt previously developed anomaly detection methods, which compare spectral shape rather than count rate, to function with limited background data, allowing sensitive detection of small changes in spectral shape from day to day. To demonstrate this technique we collected daily observations over a period of six weeks on a 0.33 square mile research campus and performed source injection simulations.
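
A minimal version of a shape-based comparison, not the authors' exact statistic, is sketched below: the spatially matched background spectrum is rescaled to the total counts of the observation, so the resulting chi-square responds to changes in spectral shape rather than in overall count rate. In a deployment the statistic would be compared against its distribution under background-only data accumulated for the same spatial cell.

```python
import numpy as np

def shape_anomaly_statistic(observed, background):
    """Shape-comparison statistic between two gamma-ray count spectra.

    The background is rescaled to the observed total counts, so the statistic
    is insensitive to the overall rate; larger values indicate a larger
    departure in spectral shape.
    """
    observed = np.asarray(observed, float)
    background = np.asarray(background, float)
    expected = background * observed.sum() / background.sum()
    mask = expected > 0
    return float(np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask]))
```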

Read this paper on arXiv…

A. Reinhart, A. Athey and S. Biegalski
Wed, 7 May 14
54/58

Comments: 7 pages, 6 figures. Submitted to IEEE Transactions on Nuclear Science

Parameter estimation on compact binary coalescences with abruptly terminating gravitational waveforms [CL]

http://arxiv.org/abs/1404.2382


Gravitational-wave astronomy seeks to extract information about astrophysical systems from the gravitational-wave signals they emit. For coalescing compact-binary sources this requires accurate model templates for the inspiral and, potentially, the subsequent merger and ringdown. Models with frequency-domain waveforms that terminate abruptly in the sensitive band of the detector are often used for parameter-estimation studies. We show that the abrupt waveform termination contains significant information that affects parameter-estimation accuracy. If the sharp cutoff is not physically motivated, this extra information can lead to misleadingly good accuracy claims. We also show that using waveforms with a cutoff as templates to recover complete signals can lead to biases in parameter estimates. We evaluate when the information content in the cutoff is likely to be important in both cases. We also point out that the standard Fisher matrix formalism, frequently employed for approximately predicting parameter-estimation accuracy, cannot properly incorporate an abrupt cutoff that is present in both signals and templates; this observation explains some previously unexpected results found in the literature. These effects emphasize the importance of using complete waveforms with accurate merger and ringdown phases for parameter estimation.

Read this paper on arXiv…

I. Mandel, C. Berry, F. Ohme, et al.
Thu, 10 Apr 14
30/57

Bayesian Source Separation Applied to Identifying Complex Organic Molecules in Space [IMA]

http://arxiv.org/abs/1403.4626


Emission from a class of benzene-based molecules known as Polycyclic Aromatic Hydrocarbons (PAHs) dominates the infrared spectrum of star-forming regions. The observed emission appears to arise from the combined emission of numerous PAH species, each with its unique spectrum. Linear superposition of the PAH spectra identifies this problem as a source separation problem. It belongs, however, to a formidable class of source separation problems, given that the distinct PAH species potentially number in the hundreds or even thousands, while there is only one measured spectral signal for a given astrophysical site. Fortunately, the source spectra of the PAHs are known, but the signal is also contaminated by other spectral sources. We describe our ongoing work in developing Bayesian source separation techniques relying on nested sampling in conjunction with an ON/OFF mechanism enabling simultaneous estimation of the probability that a particular PAH species is present and its contribution to the spectrum.
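
The ON/OFF construction amounts to multiplying each template by a binary indicator inside an ordinary linear-mixture likelihood, as in the hedged sketch below (Gaussian channel noise is assumed and the names are illustrative; the paper explores the corresponding posterior with nested sampling). The marginal posterior of each indicator then gives the probability that the corresponding PAH species is present.

```python
import numpy as np

def log_likelihood(spectrum, templates, amplitudes, on_off, sigma):
    """Gaussian log-likelihood of an observed spectrum under an ON/OFF mixture.

    spectrum   : (n_channels,) measured spectrum
    templates  : (n_species, n_channels) known PAH emission spectra
    amplitudes : (n_species,) non-negative contributions of each species
    on_off     : (n_species,) binary indicators z_i in {0, 1}
    sigma      : per-channel noise standard deviation
    """
    model = (np.asarray(on_off) * np.asarray(amplitudes)) @ np.asarray(templates)
    resid = np.asarray(spectrum, float) - model
    return (-0.5 * np.sum(resid ** 2) / sigma ** 2
            - resid.size * np.log(sigma * np.sqrt(2.0 * np.pi)))
```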

Read this paper on arXiv…

K. Knuth, M. Tse, J. Choinsky, et al.
Thu, 20 Mar 14
5/51

SCoPE: An efficient method of Cosmological Parameter Estimation [CEA]

http://arxiv.org/abs/1403.1271


Markov Chain Monte Carlo (MCMC) samplers are widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of MCMC sampling, convergence is often very slow. Here we present a fast, independently written Monte Carlo method for cosmological parameter estimation, the Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain and pre-fetching to help an individual chain run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method to calculate and update the proposal covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than $95\%$ and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy; we analyze the cosmological parameters for two illustrative, commonly used parameterisations of dark energy models. We also assess how well the primordial helium fraction in the universe can be constrained by present CMB data from WMAP-9 and Planck. The results of our MCMC analysis on the one hand help us to understand the workings of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
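
The delayed-rejection ingredient can be shown in isolation. The sketch below is a generic two-stage delayed-rejection Metropolis step with Gaussian random-walk proposals in one dimension (the Tierney-Mira construction), not the SCoPE implementation, which additionally uses pre-fetching across CPUs and adaptive inter-chain covariance updates: when the first, wide proposal is rejected, a second, narrower one is tried with an acceptance probability corrected so that detailed balance is preserved.

```python
import numpy as np

def _log_gauss(z, mu, s):
    """Log density of N(mu, s^2) at z, up to an additive constant that cancels."""
    return -0.5 * ((z - mu) / s) ** 2

def dr_metropolis_step(x, log_target, s1, s2, rng):
    """One delayed-rejection Metropolis step with two Gaussian random-walk stages.

    s1, s2 : standard deviations of the first (wide) and second (narrow) proposals.
    Returns the new state (possibly equal to x).
    """
    lx = log_target(x)

    # Stage 1: ordinary symmetric Metropolis move with the wide proposal.
    y1 = x + s1 * rng.standard_normal()
    ly1 = log_target(y1)
    a1 = min(1.0, np.exp(ly1 - lx))
    if rng.random() < a1:
        return y1

    # Stage 2: narrower proposal, accepted with the delayed-rejection ratio
    # that accounts for the already rejected point y1.
    y2 = x + s2 * rng.standard_normal()
    ly2 = log_target(y2)
    a1_rev = min(1.0, np.exp(ly1 - ly2))   # prob. of accepting y1 starting from y2
    if a1_rev >= 1.0:
        return x                           # numerator of the ratio is zero
    log_num = ly2 + _log_gauss(y1, y2, s1) + np.log(1.0 - a1_rev)
    log_den = lx + _log_gauss(y1, x, s1) + np.log(1.0 - a1)
    if rng.random() < np.exp(min(0.0, log_num - log_den)):
        return y2
    return x
```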

Read this paper on arXiv…

S. Das and T. Souradeep
Fri, 7 Mar 14
32/47

Superposition Enhanced Nested Sampling [CL]

http://arxiv.org/abs/1402.6306


The theoretical analysis of many problems in physics, astronomy and applied mathematics requires an efficient numerical exploration of multimodal parameter spaces that exhibit broken ergodicity. Monte Carlo methods are widely used to deal with this class of problems, but such simulations suffer from a ubiquitous sampling problem: the probability of sampling a particular state is proportional to its entropic weight. Devising an algorithm capable of efficiently sampling the full phase space is a long-standing problem. Here we report a new hybrid method for the exploration of multimodal parameter spaces exhibiting broken ergodicity. Superposition enhanced nested sampling (SENS) combines the strengths of global optimization with the unbiased/athermal sampling of nested sampling, greatly enhancing its efficiency with no additional parameters. We report extensive tests of this new approach for atomic clusters that are known to have energy landscapes for which conventional sampling schemes suffer from broken ergodicity. We also introduce a novel parallelization algorithm for nested sampling.

Read this paper on arXiv…

S. Martiniani, J. Stevenson, D. Wales, et al.
Wed, 26 Feb 14
18/51

Matrix-free Large Scale Bayesian inference in cosmology [CEA]

http://arxiv.org/abs/1402.1763


In this work we propose a new matrix-free implementation of the Wiener sampler, which is traditionally applied to high-dimensional analyses when signal covariances are unknown. Specifically, the proposed method addresses the problem of jointly inferring a high-dimensional signal and its corresponding covariance matrix from a set of observations. Our method implements a Gibbs-sampling adaptation of the previously presented messenger approach, permitting the complex multivariate inference problem to be cast as a sequence of univariate random processes. In this fashion, the traditional requirement of inverting high-dimensional matrices is completely eliminated from the inference process, resulting in an efficient algorithm that is trivial to implement. Using cosmic large-scale structure data as a showcase, we demonstrate the capabilities of our Gibbs-sampling approach by performing a joint analysis of three-dimensional density fields and corresponding power spectra from Gaussian mock catalogues. These tests clearly demonstrate the ability of the algorithm to provide accurate measurements of the three-dimensional density field and its power spectrum, together with the corresponding uncertainty quantification. Moreover, they reveal excellent numerical and statistical efficiency, which will generally render the proposed algorithm a valuable addition to the toolbox of large-scale Bayesian inference in cosmology and astrophysics.
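
The matrix-free trick is easiest to see in the non-sampling limit, where the same messenger construction yields the Wiener-filter mean. The sketch below is a one-dimensional periodic toy with diagonal pixel noise, following the messenger algorithm the paper builds on rather than its full Gibbs-sampling scheme (which additionally draws random realisations of the signal and the power spectrum): the auxiliary messenger field is updated in pixel space, the signal in Fourier space, and no large matrix is ever inverted. The power-spectrum convention assumed here is that of raw numpy.fft coefficients.

```python
import numpy as np

def messenger_wiener_filter(data, noise_var, signal_power, n_iter=100):
    """Messenger-field computation of the Wiener-filter mean (1-D periodic toy).

    data         : (n,) observed map, d = s + noise
    noise_var    : (n,) diagonal pixel noise variances (may vary pixel to pixel)
    signal_power : (n,) signal covariance, diagonal in Fourier space, in the
                   convention Cov(fft(s)_k) = signal_power[k] for numpy.fft
    Returns the Wiener-filter estimate of the signal s.
    """
    n = data.size
    tau = noise_var.min()                   # messenger covariance T = tau * I
    nbar = np.maximum(noise_var - tau, 1e-30)
    tau_f = n * tau                         # T expressed for raw numpy.fft modes
    s = np.zeros(n)
    for _ in range(n_iter):
        # Pixel space: combine data and current signal into the messenger field.
        t = (data / nbar + s / tau) / (1.0 / nbar + 1.0 / tau)
        # Fourier space: Wiener-filter the messenger field against tau.
        s = np.fft.ifft(signal_power / (signal_power + tau_f) * np.fft.fft(t)).real
    return s
```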

Read this paper on arXiv…

J. Jasche and G. Lavaux
Tue, 11 Feb 14
52/55