Thorough experimental testing of Multimode interference couplers and expected nulls thereof; for exoplanet detection [IMA]

http://arxiv.org/abs/1802.06727


In exoplanet interferometry, a 40 dB null is a large step toward the ability to directly image an Earth-like planet in the habitable zone of a Sun-like star. Using the standard fabrication procedure at the Australian National University, we have created a nulling interferometer that achieves a 25 dB null in the astronomical L band under laboratory conditions. The device is built on a two-dimensional chalcogenide-glass platform: a three-layered structure of $Ge_{11.5}As_{24}S_{64.5}$ undercladding, a 2 $\mu$m $Ge_{11.5}As_{24}Se_{64.5}$ core, and an angled deposition of $Ge_{11.5}As_{24}S_{64.5}$ as a complete overcladding. Matching RSoft simulations and the measured performance of the individual MMIs, the device should produce a 40 dB null over a 400 nm bandwidth, but due to limitations in the mask design and light contamination only a 25 dB extinction can be reliably achieved.
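For a sense of scale, the null depths quoted above map onto phase-error tolerances via the standard two-beam (Bracewell-style) leakage formula. The sketch below is a generic estimate, not a model of the paper's MMI device:

```python
import math

def null_depth(dphi, damp=0.0):
    """Residual two-beam null for a phase error dphi (radians) and a
    fractional amplitude mismatch damp. Generic Bracewell-style estimate,
    not the paper's device model."""
    e2 = (1.0 + damp) * complex(math.cos(dphi), math.sin(dphi))
    dark = abs(1.0 - e2) ** 2       # destructive (nulled) output
    bright = abs(1.0 + e2) ** 2     # constructive output
    return dark / bright

def to_db(depth):
    return -10.0 * math.log10(depth)

# Phase error alone: a 40 dB null needs dphi ~ 2*sqrt(1e-4) = 0.02 rad,
# while ~0.11 rad already degrades the null to the 25 dB level reported here.
print(to_db(null_depth(0.02)))   # ~40 dB
print(to_db(null_depth(0.112)))  # ~25 dB
```

This illustrates why pushing from 25 dB to 40 dB is demanding: the tolerated phase error shrinks by roughly a factor of six.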

Read this paper on arXiv…

H. Goldsmith, M. Ireland and S. Madden
Tue, 20 Feb 18
17/54

Comments: Only a first draft. Will upload later versions as we go

Spatial field reconstruction with INLA: Application to IFU galaxy data [IMA]

http://arxiv.org/abs/1802.06280


Astronomical observations of extended sources, such as integral field spectroscopy (IFS) data cubes, encode auto-correlated spatial structures that cannot be optimally exploited by standard methodologies. Here we introduce a novel technique to model IFS datasets which treats the observed galaxy properties as manifestations of an unobserved Gaussian Markov random field. The method is computationally efficient, resilient to the presence of low signal-to-noise regions, and uses a fast Bayesian alternative to Markov chain Monte Carlo inference: the Integrated Nested Laplace Approximation (INLA). As a case study, we analyse 721 IFS data cubes of nearby galaxies from the CALIFA and PISCO surveys, from which we retrieve the following physical properties: age, metallicity, mass, and extinction. The proposed Bayesian approach, built on a generative representation of the galaxy properties, enables the creation of synthetic images, the recovery of areas with bad pixels, and increased power to detect structures in datasets subject to substantial noise and/or sparse sampling. A code snippet reproducing the analysis of this paper is available in the COIN toolbox, together with the field reconstructions for the CALIFA and PISCO samples.
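The core Gaussian step underlying the method (bad-pixel recovery via the conditional mean of a Gaussian Markov random field) can be sketched in a few lines of numpy. This is a 1-D toy with made-up precision values, not the paper's INLA pipeline, which adds hyperparameter inference and the full 2-D model on top of this step:

```python
import numpy as np

# Toy 1-D GMRF prior: precision = scaled Laplacian + small nugget.
# All numbers are illustrative.
n = 16
Q = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
Q = 5.0 * Q + 0.1 * np.eye(n)                            # proper prior precision

truth = np.sin(np.linspace(0.0, np.pi, n))
obs = np.arange(0, n, 2)          # observe every other pixel; the rest are "bad"
tau = 100.0                       # observation precision (low noise)

A = np.zeros((obs.size, n))
A[np.arange(obs.size), obs] = 1.0                 # observation operator
Q_post = Q + tau * A.T @ A                        # posterior precision
mu = np.linalg.solve(Q_post, tau * A.T @ truth[obs])  # posterior mean

missing = np.setdiff1d(np.arange(n), obs)
print(np.abs(mu[missing] - truth[missing]).max())  # small: bad pixels recovered
```

The sparse precision matrix is what makes GMRF-based inference cheap compared with working on dense covariances.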

Read this paper on arXiv…

S. Gonzalez-Gaitan, R. Souza, A. Krone-Martins, et al.
Tue, 20 Feb 18
30/54

Comments: 12 pages, 9 figures, submitted to MNRAS (comments welcome)

K2SUPERSTAMP: The release of calibrated mosaics for the Kepler/K2 Mission [IMA]

http://arxiv.org/abs/1802.06354


We describe the release of a new High Level Science Product (HLSP) available at the MAST archive. The HLSP, called K2Superstamp, consists of a series of FITS images for four open star clusters observed by the K2 Mission using so-called “superstamp” pixel masks: M35, the $\sim$150 Myr-old open cluster observed during K2 Campaign 0; M67, the solar-age, solar-metallicity benchmark cluster observed during Campaign 5; Ruprecht 147, the $\sim$3 Gyr-old open cluster observed during Campaign 7; and the Lagoon Nebula (M8/NGC 6530), the high-mass star-forming region observed during Campaign 9. While the data for these regions have long been served on MAST, until now they were only available as a disconnected set of smaller Target Pixel Files (TPFs) because the spacecraft stored these observations in small chunks. As a result, these regions have hitherto been ignored by many lightcurve and planet-search pipelines. With this new release, we have stitched these TPFs together into spatially contiguous FITS images (one per cadence) to make their scientific analysis easier. In addition, each image has been fit with an accurate WCS solution so that any object of interest can be located via its right ascension and declination. We describe here the process of stitching and astrometric calibration.
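The released mosaics carry full FITS TAN WCS headers; as a back-of-envelope illustration of what such a solution encodes, the small-field approximation below maps a pixel to RA/Dec from the CRPIX/CRVAL/CDELT keywords. The numbers are invented for the demo (CRVAL roughly toward the Lagoon Nebula), and real analysis should use astropy.wcs for the exact projection:

```python
import math

def pix_to_sky(px, py, crpix, crval, cdelt):
    """Small-field tangent-plane approximation to a FITS TAN WCS.
    crpix: reference pixel (FITS 1-indexed convention); crval: (RA, Dec) of
    that pixel in degrees; cdelt: degrees per pixel. Illustrative only;
    use astropy.wcs for the exact deprojection."""
    x = cdelt[0] * (px - crpix[0])        # intermediate world coords, degrees
    y = cdelt[1] * (py - crpix[1])
    dec = crval[1] + y
    ra = crval[0] + x / math.cos(math.radians(crval[1]))  # RA compressed by cos(Dec)
    return ra, dec

# At the reference pixel the sky position is CRVAL itself:
print(pix_to_sky(100.0, 100.0, (100.0, 100.0), (271.0, -24.38), (-0.00110, 0.00110)))
# (271.0, -24.38)
```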

Read this paper on arXiv…

A. Cody, G. Barentsen, C. Hedges, et al.
Tue, 20 Feb 18
37/54

Comments: 3 pages, 1 figure, published in RNAAS

Kernel-nulling for a robust direct interferometric detection of extrasolar planets [IMA]

http://arxiv.org/abs/1802.06252


Combining the resolving power of long-baseline interferometry with the high-dynamic-range capability of nulling remains the only technique that can directly sense the presence of structures in the innermost regions of extrasolar planetary systems. Ultimately, the performance of any nuller architecture is constrained by the partial resolution of the on-axis star whose light it attempts to cancel out, so nuller design focuses on increasing the order of the extinction to reduce sensitivity to this effect. From the ground, however, the effective performance of nulling is dominated by residual time-varying instrumental phase errors that keep the instrument off the null, a situation similar to high-contrast imaging and the one we aim to ameliorate. We introduce a modified nuller architecture that enables the extraction of information that is robust against piston excursions. Our method generalizes the concept of kernel, now applied to the outputs of the modified nuller so as to make them robust to second-order pupil phase errors. We present the general method to determine these kernel outputs and highlight the benefits of this novel approach. We present the properties of VIKiNG, the VLTI Infrared Kernel NullinG, an instrument concept within the Hi-5 framework for the 4-UT VLTI infrastructure that takes advantage of the proposed architecture to produce three self-calibrating nulled outputs. Stabilized by a fringe tracker that would bring piston excursions down to 50 nm, this instrument would be able to directly detect more than a dozen extrasolar planets so far detected only by radial velocity, as well as many hot transiting planets and a significant number of very young exoplanets.
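The kernel idea can be demonstrated numerically with a toy 4-beam combiner: the raw null intensities degrade quadratically with piston error, while differenced pairs of mixed nulls (the "kernels") degrade only at third order. The matrices below are a generic Hadamard nuller plus a ±90° mixing stage, chosen for illustration; they are not the VIKiNG design:

```python
import numpy as np

# First stage: 4-beam combiner (Hadamard/2). Row 0 is the bright output,
# rows 1-3 are nulled outputs (each of those rows sums to zero).
N = 0.5 * np.array([[1, 1, 1, 1],
                    [1, 1, -1, -1],
                    [1, -1, 1, -1],
                    [1, -1, -1, 1]], dtype=float)

def nuller(piston):
    """Kernel outputs and raw null intensities for an on-axis star seen
    through 4 apertures with the given piston errors (radians)."""
    fields = np.exp(1j * piston)
    nulls = (N @ fields)[1:]              # three nulled complex amplitudes
    kernels = []
    for j, k in [(0, 1), (0, 2), (1, 2)]:
        # Mix each pair of nulls with a +/-90 deg shift, difference intensities.
        plus = abs(nulls[j] + 1j * nulls[k]) ** 2 / 2
        minus = abs(nulls[j] - 1j * nulls[k]) ** 2 / 2
        kernels.append(plus - minus)
    return np.array(kernels), np.abs(nulls) ** 2

rng = np.random.default_rng(1)
phi = rng.normal(size=4)
for eps in (1e-2, 1e-3):
    K, I = nuller(eps * phi)
    print(f"piston scale {eps:.0e}: null leakage {I.max():.1e}, kernel {np.abs(K).max():.1e}")
# Shrinking the pistons 10x cuts the null leakage ~100x but the kernels ~1000x.
```

The third-order behaviour of the kernels is what makes them self-calibrating against the phase residuals a fringe tracker leaves behind.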

Read this paper on arXiv…

F. Martinache and M. Ireland
Tue, 20 Feb 18
48/54

Comments: 9 pages, 10 figures, submitted to Astronomy and Astrophysics

Superresolution Interferometric Imaging with Sparse Modeling Using Total Squared Variation — Application to Imaging the Black Hole Shadow [IMA]

http://arxiv.org/abs/1802.05783


We propose a new superresolution imaging technique for interferometry using sparse modeling, utilizing two regularization terms: the $\ell_1$-norm and a new function named Total Squared Variation (TSV) of the brightness distribution. TSV is an edge-smoothing variant of Total Variation (TV) that penalizes the sum of squared gradients. First, we demonstrate that our technique may achieve super-resolution of $\sim 30$% compared to the traditional CLEAN beam size using synthetic observations of two point sources. Second, we present simulated observations of three physically motivated static models of Sgr A* with the Event Horizon Telescope (EHT) to show the performance of the proposed techniques in greater detail. We find that $\ell_1$+TSV regularization outperforms both $\ell_1$+TV regularization with the popular isotropic TV term and the Cotton-Schwab CLEAN algorithm, demonstrating that TSV is well-matched to the expected physical properties of astronomical images, which are often nebulous. Remarkably, in both the image and gradient domains, the optimal beam size minimizing the root-mean-squared error is $\lesssim 10$% of the traditional CLEAN beam size for $\ell_1$+TSV regularization, and non-convolved reconstructed images have smaller errors than beam-convolved ones. This indicates that the traditional post-processing step of Gaussian convolution in interferometric imaging may not be required for $\ell_1$+TSV regularization. We also propose a feature-extraction method that detects circular features in the image of a black hole shadow with the circle Hough transform (CHT) and use it to evaluate the performance of the image reconstruction. With our imaging technique and the CHT, the EHT can constrain the radius of the black hole shadow with an accuracy of $\sim 10$-$20$% in the present simulations for Sgr A*.
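The difference between TV and TSV is easy to see on a 1-D profile: a hard step and a smooth ramp with the same total rise have the same TV, but TSV is strictly smaller for the ramp, which is why it favours smooth ("nebulous") structure. A minimal sketch using finite-difference definitions (anisotropic, for brevity; the paper's comparison uses the isotropic TV variant):

```python
import numpy as np

def tv(x):
    """Anisotropic total variation: sum of absolute finite differences."""
    return np.abs(np.diff(x)).sum()

def tsv(x):
    """Total squared variation: sum of squared finite differences."""
    return (np.diff(x) ** 2).sum()

step = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # sharp edge
ramp = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])   # same total rise, smooth

print(tv(step), tv(ramp))    # 1.0 1.0 : TV cannot tell a step from a ramp
print(tsv(step), tsv(ramp))  # 1.0 0.2 : TSV prefers the smooth ramp
```

In a regularized reconstruction, minimizing TSV therefore spreads flux into smooth gradients instead of piecewise-constant patches.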

Read this paper on arXiv…

K. Kuramochi, K. Akiyama, S. Ikeda, et al.
Mon, 19 Feb 18
1/41

Comments: 18 pages, 7 figures, submitted to ApJ, revised in Feb 2018

Projected WIMP sensitivity of the LUX-ZEPLIN (LZ) dark matter experiment [IMA]

http://arxiv.org/abs/1802.06039


LUX-ZEPLIN (LZ) is a next-generation dark matter direct detection experiment that will operate 4850 feet underground at the Sanford Underground Research Facility (SURF) in Lead, South Dakota, USA. Using a two-phase xenon detector with an active mass of 7 tonnes, LZ will search primarily for low-energy interactions with Weakly Interacting Massive Particles (WIMPs), which are hypothesized to make up the dark matter in our galactic halo. In this paper, the projected WIMP sensitivity of LZ is presented based on the latest background estimates and simulations of the detector. For a 1000 live day run using a 5.6 tonne fiducial mass, LZ is projected to exclude at 90% confidence level spin-independent WIMP-nucleon cross sections above $1.6 \times 10^{-48}$ cm$^{2}$ for a 40 $\mathrm{GeV}/c^{2}$ mass WIMP. Additionally, a $5\sigma$ discovery potential is projected, reaching cross sections below the existing and projected exclusion limits of similar experiments that are currently operating. For spin-dependent WIMP-neutron(-proton) scattering, a sensitivity of $2.7 \times 10^{-43}$ cm$^{2}$ ($8.1 \times 10^{-42}$ cm$^{2}$) for a 40 $\mathrm{GeV}/c^{2}$ mass WIMP is expected. With construction well underway, LZ is on track for underground installation at SURF in 2019 and will start collecting data in 2020.
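For intuition about what a "90% CL exclusion" means for a counting experiment: in the idealized zero-background, zero-observed-events limit, the excluded signal expectation is the standard $\approx 2.3$ events, and the cross-section limit is the value that would produce that many events at the given exposure. This is a textbook Poisson calculation, not the LZ analysis itself (which uses a full likelihood with backgrounds):

```python
import math

# P(0 observed | s expected) = exp(-s); a signal s is excluded at 90% CL
# once exp(-s) <= 0.10, i.e. s >= -ln(0.10).
s90 = -math.log(0.10)
print(round(s90, 3))  # 2.303 expected signal events
```

At fixed detection efficiency the excluded cross section scales as s90 divided by exposure, which is why larger fiducial masses and longer runs push limits down.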

Read this paper on arXiv…

D. Akerib, C. Akerlof, S. Alsum, et al.
Mon, 19 Feb 18
17/41

Comments: 14 pages, 11 figures