The Hyper Suprime-Cam Software Pipeline [IMA]

In this paper, we describe the optical imaging data processing pipeline developed for the Subaru Telescope's Hyper Suprime-Cam (HSC) instrument. The HSC Pipeline builds on the prototype pipeline being developed by the Large Synoptic Survey Telescope's Data Management system, adding customizations for HSC, large-scale processing capabilities, and novel algorithms that have since been reincorporated into the LSST codebase. While designed primarily to reduce HSC Subaru Strategic Program (SSP) data, it is also the recommended pipeline for reducing general-observer HSC data. The HSC Pipeline includes high-level processing steps that generate coadded images and science-ready catalogs, as well as low-level detrending and image characterization.

J. Bosch, R. Armstrong, S. Bickerton, et al.
Mon, 22 May 17
5/51

Comments: 39 pages, 21 figures, 2 tables. Submitted to Publications of the Astronomical Society of Japan

|

The first-year shear catalog of the Subaru Hyper Suprime-Cam SSP Survey [CEA]

We present and characterize the catalog of galaxy shape measurements that will be used for cosmological weak lensing measurements in the Wide layer of the first year of the Hyper Suprime-Cam (HSC) survey. The catalog covers an area of 136.9 deg$^2$ split into six fields, with a mean $i$-band seeing of 0.58 arcsec and $5\sigma$ point-source depth of $i\sim 26$. Given conservative galaxy selection criteria for first-year science, the depth and excellent image quality result in unweighted and weighted source number densities of 24.6 and 21.8 arcmin$^{-2}$, respectively. Point-spread function (PSF) modeling is carried out on individual exposures, while galaxy shapes are measured on a linear coaddition. We define the requirements for cosmological weak lensing science with this catalog, characterize potential systematics in the catalog using a series of internal null tests for problems with PSF modeling, shear estimation, and other aspects of the image processing, and describe systematics tests using two different sets of image simulations. Finally, we discuss the dominant systematics and the planned algorithmic changes to reduce them in future data reductions.

R. Mandelbaum, H. Miyatake, T. Hamana, et al.
Mon, 22 May 17
11/51

Comments: 23 figures, 4 tables, submitted to PASJ

|

A generalized approach to model the spectra and radiation dose rate of solar particle events on the surface of Mars [EPA]

For future human missions to Mars, it is important to study the surface radiation environment during extreme and elevated conditions. In the long term, it is mainly Galactic Cosmic Rays (GCRs) modulated by solar activity that contribute to the radiation on the surface of Mars, but intense solar energetic particle (SEP) events may induce acute health effects. Such events may enhance the radiation level significantly and should be detected as early as possible to prevent severe damage to humans and equipment. However, the energetic particle environment on the Martian surface is significantly different from that in deep space due to the influence of the Martian atmosphere and, to a lesser extent, the regolith. Depending on the intensity and shape of the original solar particle spectra, as well as the particle types, the surface spectra may induce entirely different radiation effects. For instance, an intense SEP event with a soft spectrum that would be hazardous on the lunar surface may, in contrast, induce only low levels of radiation on the Martian surface, well within human health tolerances. In order to give immediate and accurate alerts while avoiding unnecessary ones, it is important to model and understand well the atmospheric effect on the incoming SEPs, including both protons and helium ions. In this paper, we develop a generalized approach to quickly model the surface response to any given incoming proton/helium ion spectrum and apply it to a set of historical large solar events, providing insight into the variety of surface radiation environments that may be induced during SEP events.

J. Guo, C. Zeitlin, R. Wimmer-Schweingruber, et al.
Mon, 22 May 17
13/51
|

Deep Full-sky Coadds from Three Years of WISE and NEOWISE Observations [IMA]

We have reprocessed over 100 terabytes of single-exposure WISE/NEOWISE images to create the deepest ever full-sky maps at 3-5 microns. We incorporate all publicly available W1 and W2 imaging – a total of ~8 million exposures in each band – from ~37 months of observations spanning 2010 January to 2015 December. Our coadds preserve the native WISE resolution and feature depth of coverage ~3 times greater than that of the AllWISE Atlas stacks. Our coadds are designed to enable deep forced photometry, in particular for the Dark Energy Camera Legacy Survey (DECaLS) and Mayall z-Band Legacy Survey (MzLS), both of which are being used to select targets for the Dark Energy Spectroscopic Instrument (DESI). We describe newly introduced processing steps aimed at leveraging added redundancy to remove artifacts, with the intent of facilitating uniform target selection and searches for rare/exotic objects (e.g. high-redshift quasars and distant galaxy clusters). Forced photometry depths achieved with these coadds extend 0.56 (0.46) magnitudes deeper in W1 (W2) than is possible with only pre-hibernation WISE imaging.

A. Meisner, D. Lang and D. Schlegel
Mon, 22 May 17
27/51

Comments: data release available at this http URL

|

Simple Stabilized Radio-Frequency Transfer with Optical Phase Actuation [IMA]

We describe and experimentally evaluate a stabilized radio-frequency transfer technique that employs optical phase sensing and optical phase actuation. This technique can be implemented by modifying existing stabilized optical frequency transfer equipment, and it offers advantages over previous stabilized radio-frequency transfer techniques in terms of size and complexity. We demonstrate the stabilized transfer of a 160 MHz signal over a 166 km optical fiber link, achieving an Allan deviation of 9.7×10^-12 Hz/Hz at 1 s of integration, and 3.9×10^-14 Hz/Hz at 1000 s. This technique is being considered for application to the Square Kilometre Array SKA1-low radio telescope.

D. Gozzard, S. Schediwy, R. Whitaker, et al.
Mon, 22 May 17
32/51

Comments: 4 pages, 2 figures, submitted to Optics Letters
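As an aside, the Allan deviation figures quoted in this abstract measure fractional frequency stability as a function of averaging time. A minimal sketch of the standard non-overlapping estimator is below; the function name and the white-noise test signal are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def allan_deviation(y, rate, m):
    """Non-overlapping Allan deviation at averaging factor m.

    y    : fractional frequency samples (dimensionless, Hz/Hz)
    rate : sample rate in Hz
    m    : averaging factor, so tau = m / rate
    """
    n = len(y) // m
    # average the frequency samples in non-overlapping blocks of length m
    ybar = y[: n * m].reshape(n, m).mean(axis=1)
    # Allan variance: half the mean squared first difference of the block means
    avar = 0.5 * np.mean(np.diff(ybar) ** 2)
    return np.sqrt(avar)

# white frequency noise averages down roughly as tau^(-1/2)
rng = np.random.default_rng(0)
y = 1e-11 * rng.standard_normal(100_000)
adev_1 = allan_deviation(y, rate=1.0, m=1)      # tau = 1 s
adev_100 = allan_deviation(y, rate=1.0, m=100)  # tau = 100 s
```

For white frequency noise the deviation at the longer averaging time comes out smaller, which is the qualitative behavior seen in the 1 s versus 1000 s figures above.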

|

Improving galaxy morphology with machine learning [GA]

This paper presents machine learning experiments performed on the results of galaxy classification into ellipticals (E) and spirals (S) using morphological parameters: concentration (CN), asymmetry (A3), smoothness (S3), entropy (H), and the gradient pattern analysis parameter (GA). Except for concentration, all parameters require an image segmentation pre-processing step. For supervision and to compute confusion matrices, we used the GalaxyZoo classifications as true labels. With a dataset of 48145 objects after preprocessing (44760 galaxies labeled S and 3385 labeled E), we performed experiments with Support Vector Machine (SVM) and Decision Tree (DT) classifiers. With a balanced dataset of 1962 objects, we applied K-means and Agglomerative Hierarchical Clustering. All supervised experiments reached an overall accuracy (OA) >= 97%.

P. Barchi, F. Costa, R. Sautter, et al.
Mon, 22 May 17
49/51
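The supervised part of the workflow described above (morphological features in, SVM or Decision Tree classifier, confusion matrix out) can be sketched with scikit-learn. The feature values below are synthetic stand-ins for the (CN, A3, S3, H, GA) parameters, not real GalaxyZoo data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix

# synthetic stand-in for the five morphological parameters;
# class 1 is shifted so the two classes are separable by construction
rng = np.random.default_rng(42)
n = 2000
labels = rng.integers(0, 2, n)                  # 0 = elliptical, 1 = spiral
X = rng.normal(size=(n, 5)) + labels[:, None]

X_tr, X_te, y_tr, y_te = train_test_split(
    X, labels, test_size=0.3, random_state=0)

results = {}
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("DT", DecisionTreeClassifier(max_depth=5, random_state=0))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    results[name] = accuracy_score(y_te, pred)
    print(name, "\n", confusion_matrix(y_te, pred))
```

On this easy synthetic problem both classifiers do well; on the real, heavily imbalanced catalog (44760 S vs. 3385 E), per-class metrics from the confusion matrix are more informative than overall accuracy alone.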

Hyperspectral imaging is a ubiquitous technique in solar physics observations, and recent advances in solar instrumentation have enabled us to acquire and record data at an unprecedented rate. The huge data volumes that will be archived by upcoming solar observatories press us to compress the data in order to reduce storage space and transfer times. The correlation present over all dimensions of solar data-sets — spatial, temporal, and spectral — suggests the use of a 3D wavelet basis decomposition to achieve higher compression rates. In this work, we evaluate the performance of the recent JPEG2000 Part 10 standard, known as JP3D, for the lossless compression of several types of solar data-cubes. We explore the differences in: a) the compressibility of broad-band versus narrow-band time-sequences, and of Stokes I versus V profiles in spectropolarimetric data-sets; b) compressing data in [x,y,$\lambda$] packages at different times versus data in [x,y,t] packages at different wavelengths; c) compressing a single large data-cube versus several smaller data-cubes; d) compressing data that is under-sampled or super-sampled with respect to the diffraction cut-off.
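The core idea above — exploiting correlation along all three axes of a data-cube via a 3D wavelet decomposition — can be illustrated with PyWavelets. This is not JP3D itself (which additionally entropy-codes the coefficients), just a sketch of the separable 3D transform on a random stand-in cube; the cube shape and wavelet choice are assumptions:

```python
import numpy as np
import pywt

# hypothetical (x, y, lambda) solar data-cube -- random stand-in data
rng = np.random.default_rng(1)
cube = rng.normal(size=(32, 32, 16))

# 2-level separable 3D discrete wavelet decomposition along all axes
coeffs = pywt.wavedecn(cube, wavelet="db2", level=2)

# invert the transform; reconstruction is exact up to floating-point
# round-off, which is what makes a lossless coding path possible
recon = pywt.waverecn(coeffs, wavelet="db2")
err = np.max(np.abs(recon[:32, :32, :16] - cube))
```

A JP3D encoder would follow the transform with quantization (trivial in the lossless case) and entropy coding of the subband coefficients; the compression gain over a 2D scheme comes from the decorrelation along the third axis demonstrated here.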