Compressive Shack-Hartmann Wavefront Sensing based on Deep Neural Networks [IMA]

http://arxiv.org/abs/2011.10241


The Shack-Hartmann wavefront sensor is widely used to measure aberrations induced by atmospheric turbulence in adaptive optics systems. However, under strong atmospheric turbulence or when guide stars are dim, the accuracy of wavefront measurements is degraded. In this paper, we propose a compressive Shack-Hartmann wavefront sensing method. Instead of reconstructing wavefronts from the slope measurements of all sub-apertures, our method reconstructs wavefronts from the slope measurements of sub-apertures whose spot images have a high signal-to-noise ratio. We further propose to use a deep neural network to accelerate wavefront reconstruction. During the training stage of the deep neural network, we add a drop-out layer to simulate the compressive sensing process, which speeds up the development of our method. After training, the compressive Shack-Hartmann wavefront sensing method can reconstruct wavefronts at high spatial resolution from the slope measurements of only a small number of sub-apertures. We integrate the compressive Shack-Hartmann wavefront sensing method with an image deconvolution algorithm to develop a high-order image restoration method, and we use images restored by this method to test the performance of our compressive Shack-Hartmann wavefront sensing method. The results show that our method improves the accuracy of wavefront measurements and is suitable for real-time applications.
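
The drop-out trick described above is easy to picture in code. Below is a minimal sketch, not the authors' implementation: layer sizes and names are invented, and the network maps sub-aperture slope vectors to Zernike coefficients while a drop-out layer randomly discards slopes during training to mimic the compressive selection of high-S/N sub-apertures.

```python
# Hypothetical sketch of the drop-out-as-compression idea (PyTorch).
import torch
import torch.nn as nn

class CompressiveReconstructor(nn.Module):
    def __init__(self, n_subapertures=100, n_zernike=36, p_drop=0.5):
        super().__init__()
        # Randomly zeroes slope measurements during training, mimicking
        # sub-apertures whose spots are too noisy to use.
        self.drop = nn.Dropout(p=p_drop)
        self.net = nn.Sequential(
            nn.Linear(2 * n_subapertures, 256),  # x- and y-slope per sub-aperture
            nn.ReLU(),
            nn.Linear(256, n_zernike),           # wavefront as Zernike coefficients
        )

    def forward(self, slopes):
        return self.net(self.drop(slopes))

model = CompressiveReconstructor()
slopes = torch.randn(8, 200)        # a batch of 8 slope vectors
zernike = model(slopes)             # (8, 36) reconstructed coefficients
```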

Read this paper on arXiv…

P. Jia, M. Ma, D. Cai, et al.
Mon, 23 Nov 20
30/63

Comments: Submitted to MNRAS

Smart observation method with wide field small aperture telescopes for real time transient detection [IMA]

http://arxiv.org/abs/2011.10407


Wide field small aperture telescopes (WFSATs) are commonly used for fast sky surveys. Telescope arrays composed of several WFSATs can scan the sky several times per night, producing huge amounts of data that need to be processed immediately. In this paper, we propose ARGUS (Astronomical taRGets detection framework for Unified telescopes) for real-time transient detection. ARGUS uses a deep learning based astronomical detection algorithm, implemented in embedded devices in each WFSAT, to detect astronomical targets. The position of each detection and the probability that it is an astronomical target are sent to a trained ensemble learning algorithm, which outputs information about the celestial sources. After matching these sources with a star catalog, ARGUS directly outputs the type and position of transient candidates. We use simulated data to test the performance of ARGUS and find that it robustly increases the performance of WFSATs in transient detection tasks.

Read this paper on arXiv…

P. Jia, Q. Liu, Y. Sun, et al.
Mon, 23 Nov 20
38/63

Comments: To appear in Proc. of SPIE 2020, Paper Number (11449-80), Comments are welcome

Data–driven Image Restoration with Option–driven Learning for Big and Small Astronomical Image Datasets [IMA]

http://arxiv.org/abs/2011.03696


Image restoration methods are commonly used to improve the quality of astronomical images. In recent years, developments in deep neural networks and the growing number of astronomical images have given rise to many data–driven image restoration methods. However, most of these methods are supervised learning algorithms, which require paired images, either from real observations or from simulated data, as a training set. For some applications it is hard to obtain enough paired images from real observations, and simulated images are quite different from real observed ones. In this paper, we propose a new data–driven image restoration method based on generative adversarial networks with option–driven learning. Our method uses several high resolution images as references and applies different learning strategies depending on the number of reference images. For sky surveys with variable observation conditions, our method obtains very stable image restoration results, regardless of the number of reference images.

Read this paper on arXiv…

P. Jia, R. Ning, R. Sun, et al.
Tue, 10 Nov 20
73/88

Comments: 11 pages. Submitted to MNRAS with minor revision

Machine Learning for Semi-Automated Meteorite Recovery [EPA]

http://arxiv.org/abs/2009.13852


We present a novel methodology for recovering meteorite falls observed and constrained by fireball networks, using drones and machine learning algorithms. This approach uses images of the local terrain for a given fall site to train an artificial neural network designed to detect meteorite candidates. We have field tested our methodology, showing a meteorite detection rate between 75% and 97%, while also providing an efficient mechanism to eliminate false positives. Our tests at a number of locations within Western Australia also showcase the ability of this training scheme to generalize a model to learn localized terrain features. Our model-training approach was also able to correctly identify 3 meteorites in their native fall sites that had been found using traditional searching techniques. Our methodology will be used to recover meteorite falls in a wide range of locations within globe-spanning fireball networks.

Read this paper on arXiv…

S. Anderson, M. Towner, P. Bland, et al.
Wed, 30 Sep 20
37/86

Comments: 15 pages, 3 figures, 2 tables

Predicting galaxy spectra from images with hybrid convolutional neural networks [IMA]

http://arxiv.org/abs/2009.12318


Galaxies can be described by features of their optical spectra such as oxygen emission lines, or morphological features such as spiral arms. Although spectroscopy provides a rich description of the physical processes that govern galaxy evolution, spectroscopic data are observationally expensive to obtain. We are able to robustly predict and reconstruct galaxy spectra directly from broad-band imaging. We present a powerful new approach using a hybrid convolutional neural network with deconvolution instead of batch normalization; this hybrid CNN outperforms other models in our tests. The learned mapping between galaxy imaging and spectra will be transformative for future wide-field surveys, such as with the Vera C. Rubin Observatory and \textit{Nancy Grace Roman Space Telescope}, by multiplying the scientific returns for spectroscopically-limited galaxy samples.

Read this paper on arXiv…

J. Wu and J. Peek
Mon, 28 Sep 20
39/52

Comments: 5 pages, 2 figures, submitted to a NeurIPS 2020 conference workshop

A study of Neural networks point source extraction on simulated Fermi/LAT Telescope images [CL]

http://arxiv.org/abs/2007.04295


Astrophysical images in the GeV band are challenging to analyze due to the strong contribution of background and foreground astrophysical diffuse emission and the relatively broad point spread function of modern space-based instruments. In certain cases, even finding point sources in an image becomes a non-trivial task. We present a method for point source extraction using a convolutional neural network (CNN) trained on our own artificial data set, which imitates images from the Fermi Large Area Telescope. These images are raw count photon maps of 10×10 degrees covering energies from 1 to 10 GeV. We compare different CNN architectures and demonstrate an accuracy increase of ~15% and an inference-time reduction by at least a factor of 4 with respect to similar state-of-the-art models.

Read this paper on arXiv…

M. Drozdova, A. Broilovskiy, A. Ustyuzhanin, et al.
Thu, 9 Jul 20
33/70

Comments: Accepted to Astronomische Nachrichten

Learning to do multiframe blind deconvolution unsupervisedly [IMA]

http://arxiv.org/abs/2006.01438


Observations from ground based telescopes are affected by the presence of the Earth's atmosphere, which severely perturbs them. The use of adaptive optics techniques has allowed us to partly overcome this limitation. However, image selection or post-facto image reconstruction methods are routinely needed to reach the diffraction limit of telescopes. Deep learning has recently been used to accelerate these image reconstructions. Currently, these deep neural networks are trained with supervision, so standard deconvolution algorithms need to be applied a priori to generate the training sets. Our aim is to propose an unsupervised method which can be trained simply with observations, and to check it with data from the FastCam instrument. We use a neural model composed of three neural networks that are trained end-to-end by leveraging the linear image formation theory to construct a physically motivated loss function. The analysis of the trained neural model shows that multiframe blind deconvolution can be trained in a self-supervised way, i.e., using only observations. The outputs of the network are the corrected images and also estimates of the instantaneous wavefronts. The network model is of the order of 1000 times faster than standard deconvolution based on optimization. With some work, the model can be used in real time at the telescope.
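
The physically motivated loss can be stated compactly. The sketch below is a toy under stated assumptions (FFT-based circular convolution, a known frame stack, per-frame PSF estimates produced by the networks) that only illustrates the linear image-formation constraint; it is not the paper's code.

```python
# Toy physics-based loss for self-supervised multiframe deconvolution:
# each observed frame k must equal the estimated object convolved with
# the estimated PSF of that frame, I_k = object * PSF_k.
import torch

def formation_loss(obj_est, psfs_est, observed):
    # obj_est: (H, W); psfs_est, observed: (K, H, W); PSFs normalized to sum 1.
    O = torch.fft.rfft2(obj_est)                       # object spectrum
    P = torch.fft.rfft2(psfs_est)                      # per-frame PSF spectra
    model_frames = torch.fft.irfft2(O * P, s=obj_est.shape)
    return torch.mean((model_frames - observed) ** 2)  # data-fidelity term only
```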

Read this paper on arXiv…

A. Ramos
Wed, 3 Jun 20
26/83

Comments: 11 pages, 8 figures, submitted to A&A

Transformation Based Deep Anomaly Detection in Astronomical Images [CL]

http://arxiv.org/abs/2005.07779


In this work, we propose several enhancements to a geometric transformation based model for anomaly detection in images (GeoTransform). The model assumes that the anomaly class is unknown and that only inlier samples are available for training. We introduce new filter based transformations, useful for detecting anomalies in astronomical images, that highlight artifact properties to make them more easily distinguishable from real objects. In addition, we propose a transformation selection strategy that allows us to find and discard indistinguishable pairs of transformations. This results in an improvement of the area under the Receiver Operating Characteristic curve (AUROC) and accuracy, as well as in a dimensionality reduction. The models were tested on astronomical images from the High Cadence Transient Survey (HiTS) and Zwicky Transient Facility (ZTF) datasets. The best models obtained an average AUROC of 99.20% for HiTS and 91.39% for ZTF. The improvement over the original GeoTransform algorithm and baseline methods, such as One-Class Support Vector Machine and deep learning based methods, is significant both statistically and in practice.
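
The scoring idea behind transformation-based detectors like GeoTransform fits in a few lines. The sketch below is illustrative only: `classifier` is a hypothetical model trained on inliers to predict which transformation was applied, and the transformation set is a stand-in for the paper's filter-based ones.

```python
# Illustrative transformation-based anomaly scoring (not the paper's code).
import numpy as np

def transformations(img):
    # Stand-in set: identity, three rotations, and a horizontal flip.
    return [img, np.rot90(img, 1), np.rot90(img, 2), np.rot90(img, 3),
            np.fliplr(img)]

def anomaly_score(img, classifier):
    # An inlier-trained classifier should confidently recognize which
    # transformation was applied; low confidence suggests an anomaly.
    log_conf = 0.0
    for label, t_img in enumerate(transformations(img)):
        probs = classifier.predict_proba(t_img)  # hypothetical: probs over labels
        log_conf += np.log(probs[label] + 1e-12)
    return -log_conf                             # higher value => more anomalous
```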

Read this paper on arXiv…

E. Reyes and P. Estévez
Tue, 19 May 20
84/92

Comments: 8 pages, 6 figures, 4 tables. Accepted for publication in proceedings of the IEEE World Congress on Computational Intelligence (IEEE WCCI), Glasgow, UK, 19-24 July, 2020

Simulating Anisoplanatic Turbulence by Sampling Inter-modal and Spatially Correlated Zernike Coefficients [CL]

http://arxiv.org/abs/2004.11210


Simulating atmospheric turbulence is an essential task for evaluating turbulence mitigation algorithms and training learning-based methods. Advanced numerical simulators for atmospheric turbulence are available, but they require evaluating wave propagation which is computationally expensive. In this paper, we present a propagation-free method for simulating imaging through turbulence. The key idea behind our work is a new method to draw inter-modal and spatially correlated Zernike coefficients. By establishing the equivalence between the angle-of-arrival correlation by Basu, McCrae and Fiorino (2015) and the multi-aperture correlation by Chanan (1992), we show that the Zernike coefficients can be drawn according to a covariance matrix defining the correlations. We propose fast and scalable sampling strategies to draw these samples. The new method allows us to compress the wave propagation problem into a sampling problem, hence making the new simulator significantly faster than existing ones. Experimental results show that the simulator has an excellent match with the theory and real turbulence data.
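
The core sampling step reduces to standard linear algebra. The sketch below assumes the covariance matrix is already in hand (the paper derives it from turbulence theory; here a placeholder symmetric positive-definite matrix stands in): correlated Zernike draws are a Cholesky factor applied to white Gaussian noise.

```python
# Correlated Zernike draws from a given covariance matrix (placeholder C).
import numpy as np

rng = np.random.default_rng(seed=1)
n_modes = 36                                  # number of Zernike coefficients
A = rng.standard_normal((n_modes, n_modes))
C = A @ A.T + n_modes * np.eye(n_modes)       # placeholder SPD covariance

L = np.linalg.cholesky(C)                     # C = L @ L.T
coeffs = L @ rng.standard_normal((n_modes, 10000))   # 10000 correlated draws
# The empirical covariance of `coeffs` approaches C as the sample count grows.
```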

Read this paper on arXiv…

N. Chimitt and S. Chan
Fri, 24 Apr 20
39/63

Comments: N/A

Inpainting via Generative Adversarial Networks for CMB data analysis [CEA]

http://arxiv.org/abs/2004.04177


In this work, we propose a new method to inpaint the CMB signal in regions masked out following a point source extraction process. We adopt a modified Generative Adversarial Network (GAN) and compare different combinations of internal (hyper-)parameters and training strategies. We study the performance using a suitable $\mathcal{C}_r$ variable in order to estimate the performance regarding the CMB power spectrum recovery. We consider a test set where one point source is masked out in each sky patch with a 1.83 $\times$ 1.83 square degree extension, which, in our gridding, corresponds to 64 $\times$ 64 pixels. The GAN is optimized on, and its performance estimated with, Planck 2018 total intensity simulations. The training makes the GAN effective in reconstructing a masked region of about 1500 pixels with $1\%$ error down to angular scales of about 5 arcminutes.

Read this paper on arXiv…

A. Sadr and F. Farsian
Fri, 10 Apr 20
55/56

Comments: 19 pages, 21 figures. Prepared for submission to JCAP. All codes will be published after acceptance

Non-dimensional Star-Identification [IMA]

http://arxiv.org/abs/2003.13736


This study introduces a new “Non-Dimensional” star identification algorithm to reliably identify the stars observed by a wide field-of-view star tracker when the focal length and optical axis offset values are known with poor accuracy. This algorithm is particularly suited to complement nominal lost-in-space algorithms when they fail the star identification due to focal length and/or optical axis offset deviations from their nominal operational ranges. These deviations may be caused, for example, by launch vibrations or thermal variations in orbit. The algorithm performance is compared in terms of accuracy, speed, and robustness to the Pyramid algorithm. These comparisons highlight the clear advantages that a combined approach of these methodologies provides.

Read this paper on arXiv…

C. Leake, D. Arnas and D. Mortari
Wed, 1 Apr 20
30/83

Comments: 14 pages, 17 figures, 4 tables

PSF–NET: A Non-parametric Point Spread Function Model for Ground Based Optical Telescopes [IMA]

http://arxiv.org/abs/2003.00615


Ground based optical telescopes are seriously affected by atmospheric turbulence induced aberrations. Understanding the properties of these aberrations is important both for instrument design and for the development of image restoration methods. Because the point spread function reflects the performance of the whole optical system, it is appropriate to use it to describe atmospheric turbulence induced aberrations. Assuming that point spread functions induced by atmospheric turbulence with the same profile belong to the same manifold space, we propose a non-parametric point spread function model — the PSF-NET. The PSF-NET has a cycle convolutional neural network structure and is a statistical representation of the manifold space of PSFs induced by atmospheric turbulence with the same profile. Testing the PSF-NET with simulated and real observation data, we find that a well trained PSF-NET can restore any short exposure image blurred by atmospheric turbulence with the same profile. Besides, we further use the impulse response of the PSF-NET, which can be viewed as the statistical mean PSF, to analyze the interpretability of the PSF-NET. We find that variations of statistical mean PSFs are caused by variations of the atmospheric turbulence profile: as the difference between atmospheric turbulence profiles increases, the difference between statistical mean PSFs also increases. The PSF-NET proposed in this paper provides a new way to analyze atmospheric turbulence induced aberrations, which will benefit the development of new observation methods for ground based optical telescopes.

Read this paper on arXiv…

P. Jia, X. Wu, Y. Huang, et al.
Tue, 3 Mar 20
31/68

Comments: Accepted by AJ. The complete code can be downloaded at DOI:10.12149/101014

Comparison of Multi-Class and Binary Classification Machine Learning Models in Identifying Strong Gravitational Lenses [GA]

http://arxiv.org/abs/2002.11849


Typically, binary classification lens-finding schemes are used to discriminate between lens candidates and non-lenses. However, these models often suffer from substantial false-positive classifications. Such false positives frequently occur due to images containing objects such as crowded sources, galaxies with arms, and also images with a central source and smaller surrounding sources. Therefore, a model might confuse the stated circumstances with an Einstein ring. It has been proposed that by allowing such commonly misclassified image types to constitute their own classes, machine learning models will more easily be able to learn the difference between images that contain real lenses, and images that contain lens imposters. Using Hubble Space Telescope (HST) images, in the F814W filter, we compare the usage of binary and multi-class classification models applied to the lens finding task. From our findings, we conclude there is not a significant benefit to using the multi-class model over a binary model. We will also present the results of a simple lens search using a multi-class machine learning model, and potential new lens candidates.

Read this paper on arXiv…

H. Teimoorinia, R. Toyonaga, S. Fabbro, et al.
Fri, 28 Feb 20
49/49

Comments: PASP accepted, 14 pages, 10 figures, 4 tables

Detection and Classification of Astronomical Targets with Deep Neural Networks in Wide Field Small Aperture Telescopes [IMA]

http://arxiv.org/abs/2002.09211


Wide field small aperture telescopes are widely used in optical transient observations. Detection and classification of astronomical targets are important steps during the data post-processing stage. In this paper, we propose an astronomical target detection and classification framework based on deep neural networks for images obtained by wide field small aperture telescopes. Our framework adopts the concept of Faster R-CNN, using a modified Resnet-50 as the backbone network together with a Feature Pyramid Network architecture. To improve the effectiveness of our framework and reduce the requirement for a large training set, we first train our framework with simulated images and then modify its weights with only a small amount of training data through transfer learning. We have tested our framework with simulated and real observation data. Compared with the traditional source detection and classification framework, our framework has better detection ability, particularly for dim astronomical targets. To unleash the transient detection ability of wide field small aperture telescopes, we further propose to deploy our framework in embedded devices to achieve real-time astronomical target detection.

Read this paper on arXiv…

P. Jia, Q. Liu and Y. Sun
Mon, 24 Feb 20
3/49

Comments: Submitted to AAS journal. The complete code can be downloaded from this https URL This code can be directly used to process images obtained by WFSATs. Images obtained by ordinary sky survey telescopes can also be processed with this code, however more annotated images are required to train the neural network

Determination of the relative inclination and the viewing angle of an interacting pair of galaxies using convolutional neural networks [GA]

http://arxiv.org/abs/2002.01238


Constructing dynamical models for an interacting pair of galaxies, as constrained by their observed structure and kinematics, crucially depends on the correct choice of the relative inclination ($i$) between their galactic planes as well as the viewing angle ($\theta$), the angle between the line of sight and the normal to the plane of their orbital motion. We construct Deep Convolutional Neural Network (DCNN) models to determine the relative inclination ($i$) and the viewing angle ($\theta$) of interacting galaxy pairs, using N-body $+$ Smoothed Particle Hydrodynamics (SPH) simulation data from the GALMER database for training. In order to classify galaxy pairs based on their $i$ values only, we first construct DCNN models for a (a) 2-class ($i$ = 0$^{\circ}$, 45$^{\circ}$) and (b) 3-class ($i = 0^{\circ}, 45^{\circ} \text{ and } 90^{\circ}$) classification, obtaining $F_1$ scores of 99% and 98% respectively. Further, for a classification based on both $i$ and $\theta$ values, we develop a DCNN model for a 9-class classification ($(i,\theta) \sim (0^{\circ},15^{\circ}) ,(0^{\circ},45^{\circ}), (0^{\circ},90^{\circ}), (45^{\circ},15^{\circ}), (45^{\circ}, 45^{\circ}), (45^{\circ}, 90^{\circ}), (90^{\circ}, 15^{\circ}), (90^{\circ}, 45^{\circ}), (90^{\circ},90^{\circ})$), and the $F_1$ score was 97$\%$. Finally, we tested our 2-class model on real data of interacting galaxy pairs from the Sloan Digital Sky Survey (SDSS) DR15, achieving an $F_1$ score of 78%. Our DCNN models could be further extended to determine additional parameters needed to model the dynamics of interacting galaxy pairs, which is currently accomplished by trial and error.

Read this paper on arXiv…

P. Prakash, A. Banerjee and P. Perepu
Wed, 5 Feb 20
33/67

Comments: N/A

Hyperspectral and multispectral image fusion under spectrally varying spatial blurs — Application to high dimensional infrared astronomical imaging [CL]

http://arxiv.org/abs/1912.11868


Hyperspectral imaging has become a significant source of valuable data for astronomers over the past decades. Current instrumental and observing time constraints allow direct acquisition of multispectral images, with high spatial but low spectral resolution, and hyperspectral images, with low spatial but high spectral resolution. To enhance scientific interpretation of the data, we propose a data fusion method which combines the benefits of each image to recover a high spatio-spectral resolution datacube. The proposed inverse problem accounts for the specificities of astronomical instruments, such as spectrally variant blurs. We provide a fast implementation by solving the problem in the frequency domain and in a low-dimensional subspace to efficiently handle the convolution operators as well as the high dimensionality of the data. We conduct experiments on a realistic synthetic dataset of simulated observations of the upcoming James Webb Space Telescope, and we show that our fusion algorithm outperforms state-of-the-art methods commonly used in remote sensing for Earth observation.
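
Why the frequency domain pays off can be seen in a toy, spatially invariant special case: a convolution becomes a pointwise product, so a Tikhonov-regularized deblurring step has a closed form per frequency. The sketch below shows only this special case; the paper generalizes it to spectrally variant blurs and a low-dimensional spectral subspace.

```python
# Toy frequency-domain deblurring with Tikhonov regularization.
import numpy as np

def tikhonov_deblur(y, h, lam=1e-2):
    # y: blurred image; h: PSF on the same grid, centred at pixel [0, 0];
    # lam: regularization weight balancing fidelity and noise amplification.
    Y, H = np.fft.fft2(y), np.fft.fft2(h)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)   # closed form per frequency
    return np.real(np.fft.ifft2(X))
```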

Read this paper on arXiv…

C. Guilloteau, T. Oberlin, O. Berné, et al.
Mon, 30 Dec 19
1/51

Comments: N/A

Toward Filament Segmentation Using Deep Neural Networks [SSA]

http://arxiv.org/abs/1912.02743


We use a well-known deep neural network framework, called Mask R-CNN, for the identification of solar filaments in full-disk H-alpha images from Big Bear Solar Observatory (BBSO). The image data, collected from BBSO’s archive, are integrated with the spatiotemporal metadata of filaments retrieved from the Heliophysics Events Knowledgebase (HEK) system. This integrated data is then treated as the ground truth in the training process of the model. The available spatial metadata are the output of a currently running filament-detection module developed and maintained by the Feature Finding Team, an international consortium selected by NASA. Despite the known challenges in the identification and characterization of filaments by the existing module, which are in turn inherited by any other module that intends to learn from such outputs, Mask R-CNN shows promising results. Trained and validated on two years’ worth of BBSO data, the model is then tested on the three following years. Our case-by-case and overall analyses show that Mask R-CNN can clearly compete with the existing module and in some cases even perform better. Several cases of false positives and false negatives that are correctly segmented by this model are also shown. The overall advantages of using the proposed model are two-fold: first, deep neural networks’ performance generally improves as more annotated data, or better annotations, are provided; second, such a model can be scaled up, as a single multi-purpose module, to detect other types of solar events. The results presented in this study introduce a proof of concept of the benefits of employing deep neural networks for the detection of solar events, and in particular, filaments.

Read this paper on arXiv…

A. Ahmadzadeh, S. Mahajan, D. Kempton, et al.
Fri, 6 Dec 19
26/78

Comments: 10 pages, 10 figures, 1 table, accepted in IEEE BigData 2019

Deriving star cluster parameters with convolutional neural networks. II. Extinction and cluster/background classification [GA]

http://arxiv.org/abs/1911.10059


Context. Convolutional neural networks (CNNs) have been established as the go-to method for fast object detection and classification on natural images. This opens the door for astrophysical parameter inference on the exponentially increasing amount of sky survey data. Until now, star cluster analysis was based on integral or resolved stellar photometry, which limits the amount of information that can be extracted from individual pixels of cluster images.
Aims. We aim to create a CNN capable of inferring star cluster evolutionary, structural, and environmental parameters from multi-band images, as well as to demonstrate its capabilities in discriminating genuine clusters from galactic stellar backgrounds.
Methods. A CNN based on the deep residual network (ResNet) architecture was created and trained to infer cluster ages, masses, sizes, and extinctions, with respect to the degeneracies between them. Mock clusters placed on M83 Hubble Space Telescope (HST) images utilizing three photometric passbands (F336W, F438W, and F814W) were used. The CNN is also capable of predicting the likelihood of a cluster’s presence in an image, as well as quantifying its visibility (signal-to-noise).
Results. The CNN was tested on mock images of artificial clusters and has demonstrated reliable inference results for clusters of ages $\lesssim$100 Myr, extinctions $A_V$ between 0 and 3 mag, masses between $3\times10^3$ and $3\times10^5$ ${\rm M_\odot}$, and sizes between 0.04 and 0.4 arcsec at the distance of the M83 galaxy. Real M83 galaxy cluster parameter inference tests were performed with objects taken from previous studies and have demonstrated consistent results.

Read this paper on arXiv…

J. Bialopetravičius, D. Narbutis and V. Vansevičius
Mon, 25 Nov 19
23/55

Comments: 17 pages, 21 figures

Carving out the low surface brightness universe with NoiseChisel [IMA]

http://arxiv.org/abs/1909.11230


NoiseChisel is a program to detect very low signal-to-noise ratio (S/N) features with minimal assumptions about their morphology. It was introduced in 2015 and released within a collection of data analysis programs and libraries known as GNU Astronomy Utilities (Gnuastro). Over the last ten stable releases of Gnuastro, NoiseChisel has improved significantly: it detects even fainter signal, gives the user better control over its inner workings, and includes many bug fixes. The most important change may be that NoiseChisel’s segmentation features have been moved into a new program called Segment. Another major change is the final growth strategy for its true detections; for example, NoiseChisel is able to detect the outer wings of M51 down to S/N of 0.25, or 28.27 mag/arcsec^2 on a single-exposure SDSS image (r-band). Segment is also able to detect the localized HII regions as “clumps” much more successfully. Finally, to orchestrate a controlled analysis, the concept of a “reproducible paper” is discussed: this paper itself is exactly reproducible (snapshot v4-0-g8505cfd).

Read this paper on arXiv…

M. Akhlaghi
Thu, 26 Sep 19
55/61

Comments: Invited talk at IAU Symposium 355 (The Realm of the Low Surface Brightness Universe). The downloadable source (on arXiv) includes the full reproduction info (scripts, config files and input data links) and can reproduce the paper automatically. It is also available with its Git history in this https URL , and in Zenodo at this https URL

A method for Cloud Mapping in the Field of View of the Infra-Red Camera during the EUSO-SPB1 flight [IMA]

http://arxiv.org/abs/1909.05917


EUSO-SPB1 was released on April 24th, 2017, from the NASA balloon launch site in Wanaka (New Zealand) and landed in the South Pacific Ocean on May 7th. The data collected by the instruments onboard the balloon were analyzed to search for UV pulse signatures of UHECR (Ultra High Energy Cosmic Ray) air showers. Indirect measurements of UHECRs can be affected by the presence of clouds during nighttime, so it is crucial to know the meteorological conditions during the observation period of the detector. During the flight, the onboard EUSO-SPB1 UCIRC camera (University of Chicago Infra-Red Camera) acquired images in the field of view of the UV telescope. The available nighttime and daytime images include information on the meteorological conditions of the atmosphere observed in two infra-red bands. The presence of clouds has been investigated using a method developed to provide a dense cloudiness map for each available infra-red image. The final masks are intended to give pixel-level cloudiness information at the IR camera's pixel resolution, which is nearly 4 times higher than that of the UV camera. In this work, cloudiness maps are obtained using an expert system based on the analysis of different low-level image features. Furthermore, an image enhancement step had to be applied as preprocessing to deal with uncalibrated data.

Read this paper on arXiv…

A. Bruno, A. Anzalone and C. Vigorito
Mon, 16 Sep 19
9/74

Comments: 7 pages, 8 figures, 36th International Cosmic Ray Conference -ICRC2019

Astroalign: A Python module for astronomical image registration [IMA]

http://arxiv.org/abs/1909.02946


We present an algorithm implemented in the astroalign Python module for image registration in astronomy. Our module does not rely on WCS information and instead matches 3-point asterisms (triangles) on the images to find the most accurate linear transformation between the two. It is especially useful in the context of aligning images prior to stacking or performing difference image analysis. Astroalign can match images of different point-spread functions, seeing, and atmospheric conditions.
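
Typical usage is short; the calls below follow the module's documented interface at the time of writing (check the astroalign docs for the current API), and the FITS file names are hypothetical.

```python
# Align one frame onto another with astroalign (file names are placeholders).
import astroalign as aa
from astropy.io import fits

source = fits.getdata("frame1.fits")    # image to be warped
target = fits.getdata("frame2.fits")    # reference image

# Matches 3-point asterisms to estimate the transform, then resamples.
registered, footprint = aa.register(source, target)
```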

Read this paper on arXiv…

M. Beroiz, J. Cabral and B. Sanchez
Mon, 9 Sep 19
12/67

Comments: 4 pages, 2 figures, Python package

Contour Detection in Cassini ISS images based on Hierarchical Extreme Learning Machine and Dense Conditional Random Field [IMA]

http://arxiv.org/abs/1908.08279


In Cassini ISS (Imaging Science Subsystem) images, contour detection is often performed on disk-resolved objects to accurately locate their center; contour detection is thus a key problem. Traditional edge detection methods, such as Canny and Roberts, often extract the contour with too much interior detail and noise. Although deep convolutional neural networks have been applied successfully to many image tasks, such as classification and object detection, they need more time and computing resources. In this paper, a contour detection algorithm based on H-ELM (Hierarchical Extreme Learning Machine) and DenseCRF (Dense Conditional Random Field) is proposed for Cassini ISS images. The experimental results show that this algorithm's performance is better than that of traditional machine learning methods, such as SVM and ELM, and even of deep convolutional neural networks, and the extracted contour is closer to the actual contour. Moreover, it can be trained and tested quickly on an ordinary PC configuration, and so can be applied to contour detection for Cassini ISS images.

Read this paper on arXiv…

X. Yang, Q. Zhang and Z. Li
Fri, 23 Aug 19
5/57

Comments: N/A

Solar Image Restoration with the Cycle-GAN Based on Multi-Fractal Properties of Texture Features [IMA]

http://arxiv.org/abs/1907.12192


Texture is one of the most obvious characteristics in solar images and it is normally described by texture features. Because textures from solar images of the same wavelength are similar, we assume texture features of solar images are multi-fractals. Based on this assumption, we propose a pure data-based image restoration method: with several high resolution solar images as references, we use the Cycle-Consistent Adversarial Network to restore blurred images of the same steady physical process, at the same wavelength, obtained by the same telescope. We test our method with simulated and real observation data and find that our method can improve the spatial resolution of solar images, without loss of any frames. Because our method does not need a paired training set or additional instruments, it can be used as a post-processing method for solar images obtained by either seeing limited telescopes or telescopes with a ground layer adaptive optics system.

Read this paper on arXiv…

P. Jia, Y. Huang, B. Cai, et al.
Tue, 30 Jul 19
35/79

Comments: Accepted by APJ Letters

Maximum likelihood estimation for disk image parameters [CL]

http://arxiv.org/abs/1907.10557


We present a novel technique for estimating the parameters of a disc from its 2D image. It is based on the maximum likelihood approach, utilising both edge coordinates and the image intensity gradients. We emphasise the following advantages of our likelihood model. It has closed-form formulae for parameter estimation, therefore requiring fewer computational resources than iterative algorithms. The likelihood model naturally distinguishes the outer and inner annulus edges. The proposed technique was evaluated on both synthetic and real data.

Read this paper on arXiv…

M. Kornilov
Thu, 25 Jul 19
4/72

Comments: 12 pages, 4 figures

deepCR: Cosmic Ray Rejection with Deep Learning [IMA]

http://arxiv.org/abs/1907.09500


Cosmic ray (CR) identification and removal are critical components of imaging and spectroscopic reduction pipelines involving solid-state detectors. We present deepCR, a deep learning based framework for CR identification and subsequent image inpainting based on the predicted CR mask. To demonstrate the effectiveness of our framework, we have trained and evaluated models on Hubble Space Telescope ACS/WFC images of sparse extragalactic fields, globular clusters, and resolved galaxies. We demonstrate that at a reasonable false positive rate of 0.5%, deepCR achieves close to 100% detection rates in both extragalactic and globular cluster fields, and 91% in resolved galaxy fields, a significant improvement over the current state-of-the-art method, LACosmic. Compared to a well-threaded CPU implementation of LACosmic, deepCR mask prediction runs up to 6.5 times faster on CPU and 90 times faster on GPU. For image inpainting, the mean squared error of deepCR predictions is 20 times lower in globular cluster fields, 5 times lower in resolved galaxy fields, and 2.5 times lower in extragalactic fields, compared to the best performing non-neural technique. We present our framework and trained models as an open-source Python project, with a simple-to-use API.
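
Usage is intentionally simple; the snippet below is illustrative, with the model name strings and call signature taken from my reading of the project's README at the time and the input file hypothetical, so treat the details as subject to change.

```python
# Illustrative deepCR usage: predict a CR mask and inpaint the flagged pixels.
from deepCR import deepCR
from astropy.io import fits

image = fits.getdata("acs_wfc_frame.fits")       # hypothetical HST ACS/WFC frame
mdl = deepCR(mask="ACS-WFC-F606W-2-32",          # CR-mask model (per README)
             inpaint="ACS-WFC-F606W-2-32",       # inpainting model
             device="CPU")
mask, cleaned = mdl.clean(image, threshold=0.5)  # boolean CR mask + inpainted image
```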

Read this paper on arXiv…

K. Zhang and J. Bloom
Wed, 24 Jul 19
55/60

Comments: Submitted to AAS Journals. 11 pages, 6 figures. An open-source Python package, deepCR, which implements the approach in this paper is at this https URL Figures and benchmarks can be reproduced using: this https URL

Automated crater shape retrieval using weakly-supervised deep learning [EPA]

http://arxiv.org/abs/1906.08826


Crater shape determination is a complex and time consuming task that so far has evaded automation. We train a state of the art computer vision algorithm to identify craters on the moon and retrieve their sizes and shapes. The computational backbone of the model is MaskRCNN, an “instance segmentation” general framework that detects craters in an image while simultaneously producing a mask for each crater that traces its outer rim. Our post-processing pipeline then finds the closest fitting ellipse to these masks, allowing us to retrieve the crater ellipticities. Our model is able to correctly identify 87% of known craters in the holdout set, while predicting thousands of additional craters not present in our training data. Manual validation of a subset of these craters indicates that a majority of them are real, which we take as an indicator of the strength of our model in learning to identify craters, despite incomplete training data. The crater size, ellipticity, and depth distributions predicted by our model are consistent with human-generated results. The model allows us to perform a large scale search for differences in crater diameter and shape distributions between the lunar highlands and maria, and we exclude any such differences with a high statistical significance.
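
The ellipse-fitting post-processing step has a compact OpenCV analogue. The sketch below illustrates that step only (the Mask R-CNN detector is omitted), assuming one binary mask per crater instance.

```python
# Fit the closest ellipse to a binary crater mask and derive an ellipticity.
import cv2
import numpy as np

def mask_to_ellipse(mask):
    # mask: (H, W) uint8 binary mask for a single crater instance.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    rim = max(contours, key=cv2.contourArea)      # outer rim of the crater
    (cx, cy), axes, angle = cv2.fitEllipse(rim)   # needs >= 5 rim points
    minor, major = sorted(axes)
    return (cx, cy), 1.0 - minor / major, angle   # centre, ellipticity, orientation
```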

Read this paper on arXiv…

M. Ali-Dib, K. Menou, C. Zhu, et al.
Mon, 24 Jun 19
50/56

Comments: 35 pages, 11 figures, submitted to Icarus

A Curated Image Parameter Dataset from Solar Dynamics Observatory Mission [SSA]

http://arxiv.org/abs/1906.01062


We provide a large image parameter dataset extracted from the Solar Dynamics Observatory (SDO) mission’s AIA instrument, for the period of January 2011 through the current date, with a cadence of six minutes, for nine wavelength channels. The volume of the dataset for each year is just short of 1 TiB. Towards achieving better results in the region classification of active regions and coronal holes, we improve upon the performance of a set of ten image parameters, through an in-depth evaluation of various assumptions that are necessary for the calculation of these image parameters. Then, where possible, a method for finding appropriate settings for the parameter calculations was devised, as well as a validation task to show our improved results. In addition, we include comparisons of JP2 and FITS image formats using supervised classification models, by tuning the parameters specific to the format of the images from which they are extracted, and specific to each wavelength. The results of these comparisons show that utilizing JP2 images, which are significantly smaller files, is not detrimental to the region classification task that these parameters were originally intended for. Finally, we compute the tuned parameters on the AIA images and provide a public API (this http URL) to access the dataset. This dataset can be used in a range of studies on AIA images, such as content-based image retrieval or tracking of solar events, where dimensionality reduction on the images is necessary for the feasibility of the tasks.

Read this paper on arXiv…

A. Ahmadzadeh, D. Kempton and R. Angryk
Wed, 5 Jun 19
23/74

Comments: Accepted to The Astrophysical Journal Supplement Series, 2019, 29 pages

Fast Solar Image Classification Using Deep Learning and its Importance for Automation in Solar Physics [SSA]

http://arxiv.org/abs/1905.13575


The volume of data being collected in solar physics has exponentially increased over the past decade and with the introduction of the $\textit{Daniel K. Inouye Solar Telescope}$ (DKIST) we will be entering the age of petabyte solar data. Automated feature detection will be an invaluable tool for post-processing of solar images to create catalogues of data ready for researchers to use. We propose a deep learning model to accomplish this; a deep convolutional neural network is adept at feature extraction and processing images quickly. We train our network using data from $\textit{Hinode/Solar Optical Telescope}$ (SOT) H$\alpha$ images of a small subset of solar features with different geometries: filaments, prominences, flare ribbons, sunspots and the quiet Sun ($\textit{i.e.}$ the absence of any of the other four features). We achieve near perfect performance on classifying unseen images from SOT ($\approx$99.9\%) in 4.66 seconds. We also for the first time explore transfer learning in a solar context. Transfer learning uses pre-trained deep neural networks to help train new deep learning models $\textit{i.e.}$ it teaches a new model. We show that our network is robust to changes in resolution by degrading images from SOT resolution ($\approx$0.33$^{\prime \prime}$ at $\lambda$=6563\AA{}) to $\textit{Solar Dynamics Observatory/Atmospheric Imaging Assembly}$ (SDO/AIA) resolution ($\approx$1.2$^{\prime \prime}$) without a change in performance of our network. However, we also observe where the network fails to generalise to sunspots from SDO/AIA bands 1600/1700\AA{} due to small-scale brightenings around the sunspots and prominences in SDO/AIA 304\AA{} due to coronal emission.

Read this paper on arXiv…

J. Armstrong and L. Fletcher
Mon, 3 Jun 19
18/59

Comments: 19 pages, 9 figures, accepted for publication in Solar Physics

Perception Evaluation — A new solar image quality metric based on the multi-fractal property of texture features [IMA]

http://arxiv.org/abs/1905.09980


Next-generation ground-based solar observations require good image quality metrics for post-facto processing techniques. Based on the assumption that texture features in solar images are multi-fractal and can be extracted as feature maps by a trained deep neural network, we propose a new reduced-reference objective image quality metric: the perception evaluation. The perception evaluation is defined as the cosine distance between the Gram matrices of feature maps extracted from a high resolution reference image and from blurred images. We evaluate the performance of the perception evaluation with simulated and real observation images. The results show that, with a high resolution image as reference, the perception evaluation can give robust estimates of image quality for solar images of different scenes.
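
The metric itself is compact once feature maps are available. The sketch below assumes a feature extractor (e.g. a layer of a pretrained CNN) has already produced (C, H, W) feature maps for the reference and blurred images; it is a minimal reading of the definition above, not the authors' code.

```python
# Perception-evaluation-style metric: cosine distance between Gram matrices.
import numpy as np

def gram(features):
    # features: (C, H, W) feature maps -> (C, C) Gram matrix of channel products.
    f = features.reshape(features.shape[0], -1)
    return f @ f.T

def perception_distance(feat_ref, feat_blur):
    g_ref, g_blur = gram(feat_ref).ravel(), gram(feat_blur).ravel()
    cos_sim = g_ref @ g_blur / (np.linalg.norm(g_ref) * np.linalg.norm(g_blur))
    return 1.0 - cos_sim        # 0 means identical texture statistics
```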

Read this paper on arXiv…

Y. Huang, P. Jia, D. Cai, et al.
Mon, 27 May 19
17/51

Comments: 15 pages, 13 figures, submitted to Solar Physics

Galaxy Zoo: Probabilistic Morphology through Bayesian CNNs and Active Learning [GA]

http://arxiv.org/abs/1905.07424


We use Bayesian convolutional neural networks and a novel generative model of Galaxy Zoo volunteer responses to infer posteriors for the visual morphology of galaxies. Bayesian CNNs can learn from galaxy images with uncertain labels and then, for previously unlabelled galaxies, predict the probability of each possible label. Our posteriors are well-calibrated (e.g. for predicting bars, we achieve coverage errors of 10.6% within 5 responses and 2.9% within 10 responses) and hence are reliable for practical use. Further, using our posteriors, we apply the active learning strategy BALD to request volunteer responses for the subset of galaxies which, if labelled, would be most informative for training our network. We show that training our Bayesian CNNs using active learning requires up to 35-60% fewer labelled galaxies, depending on the morphological feature being classified. By combining human and machine intelligence, Galaxy Zoo will be able to classify surveys of any conceivable scale on a timescale of weeks, providing massive and detailed morphology catalogues to support research into galaxy evolution.
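
The BALD acquisition used here has a standard Monte-Carlo dropout estimate. The sketch below is a generic version, assuming a `(T, N, K)` array of class probabilities from T stochastic forward passes over N galaxies; it is not the Galaxy Zoo codebase.

```python
# BALD scores from Monte-Carlo dropout passes: mutual information between
# predictions and model parameters; the highest-scoring galaxies are the
# most informative ones to send to volunteers for labelling.
import numpy as np

def bald_scores(probs):
    # probs: (T, N, K) probabilities from T dropout passes, N galaxies, K classes.
    mean = probs.mean(axis=0)                                        # (N, K)
    predictive_entropy = -(mean * np.log(mean + 1e-12)).sum(axis=1)
    expected_entropy = -(probs * np.log(probs + 1e-12)).sum(axis=2).mean(axis=0)
    return predictive_entropy - expected_entropy                     # (N,)
```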

Read this paper on arXiv…

M. Walmsley, L. Smith, C. Lintott, et al.
Tue, 21 May 19
33/71

Comments: Submitted to MNRAS

TiK-means: $K$-means clustering for skewed groups [CL]

http://arxiv.org/abs/1904.09609


The $K$-means algorithm is extended to allow for partitioning of skewed groups. Our algorithm is called TiK-Means and contributes a $K$-means type algorithm that assigns observations to groups while estimating their skewness-transformation parameters. The resulting groups and transformation reveal general-structured clusters that can be explained by inverting the estimated transformation. Further, a modification of the jump statistic chooses the number of groups. Our algorithm is evaluated on simulated and real-life datasets and then applied to a long-standing astronomical dispute regarding the distinct kinds of gamma ray bursts.

Read this paper on arXiv…

N. Berry and R. Maitra
Tue, 23 Apr 19
13/58

Comments: 15 pages, 6 figures, to appear in Statistical Analysis and Data Mining – The ASA Data Science Journal

Stokes Inversion based on Convolutional Neural Networks [SSA]

http://arxiv.org/abs/1904.03714


Spectropolarimetric inversions are routinely used in the field of Solar Physics for the extraction of physical information from observations. The application to two-dimensional fields of view often requires the use of supercomputers with parallelized inversion codes. Even in this case, the computing time spent on the process is still very large. Our aim is to develop a new inversion code based on the application of convolutional neural networks that can quickly provide a three-dimensional cube of thermodynamical and magnetic properties from the interpretation of two-dimensional maps of Stokes profiles. We train two different architectures of fully convolutional neural networks. To this end, we use the synthetic Stokes profiles obtained from two snapshots of three-dimensional magneto-hydrodynamic numerical simulations of different structures of the solar atmosphere. We provide an extensive analysis of the new inversion technique, showing that it infers the thermodynamical and magnetic properties with a precision comparable to that of standard inversion techniques. However, it provides several key improvements: our method is around one million times faster, it returns a three-dimensional view of the physical properties of the region of interest in geometrical height, it provides quantities that cannot be obtained otherwise (pressure and Wilson depression) and the inferred properties are decontaminated from the blurring effect of instrumental point spread functions for free. The code is provided for free on a specific repository, with options for training and evaluation.

Read this paper on arXiv…

A. Ramos and C. Baso
Tue, 9 Apr 19
96/105

Comments: 17 pages, 12 figures, submitted to Astronomy & Astrophysics

Filling Factors of Sunspots in SODISM Images [SSA]

http://arxiv.org/abs/1904.01133


The calculated filling factors (FFs) for a feature reflect the fraction of the solar disc covered by that feature and the assignment of reference synthetic spectra. In this paper, the FFs, specified as a function of radial position on the solar disc, are computed for each image in tabular form. The filling factor (FF) is an important parameter, defined as the fraction of the area in a pixel covered by the magnetic field, the rest of the pixel being field-free. However, this alone does not provide extensive information about experiments conducted on tens or hundreds of such images. This is the first time that filling factors for SODISM images have been catalogued in tabular form. This paper presents a new method that provides the means to detect sunspots on full-disk solar images recorded by the Solar Diameter Imager and Surface Mapper (SODISM) on the PICARD satellite. The method is a fully automated detection process that achieves a sunspot recognition rate of 97.6%. The number of sunspots detected by this method strongly agrees with the NOAA catalogue. The sunspot areas calculated by this method have a 99% correlation with SOHO over the same period, and thus help to calculate the filling factor for wavelength (W.L.) 607 nm.

Read this paper on arXiv…

A. Alasta, A. Algamudi, F. Almesrati, et al.
Wed, 3 Apr 19
29/68

Comments: 11 pages, 7 figures, 2 tables This article is an extension of our previous studies investigating the detection of sunspots using SODISM images. The paper presented in August 2018 at the IEEE International Conference on Computing, Electronics and Communications Engineering. this http URL

NEARBY Platform for Automatic Asteroids Detection and EURONEAR Surveys [IMA]

http://arxiv.org/abs/1903.03479


The survey of nearby space and continuous monitoring of Near Earth Objects (NEOs), and especially Near Earth Asteroids (NEAs), are essential for the future of our planet and should represent a priority for solar system research and nearby space exploration. More computing power and sophisticated digital tracking algorithms are needed to cope with the larger astronomy imaging cameras dedicated to survey telescopes. The paper presents the NEARBY platform, which aims to experiment with new algorithms for automatic image reduction, detection, and validation of moving objects in astronomical surveys, specifically NEAs. The NEARBY platform has been developed and tested through collaborative research work between the Technical University of Cluj-Napoca (UTCN) and the University of Craiova, Romania, using the observing infrastructure of the Instituto de Astrofisica de Canarias (IAC) and Isaac Newton Group (ING), La Palma, Spain. The NEARBY platform has been developed and deployed on the UTCN’s cloud infrastructure, and the acquired images are processed remotely by the astronomers, who transfer them from ING through the web interface of the NEARBY platform. The paper analyzes and highlights the main aspects of the NEARBY platform's development, and the results and conclusions of the EURONEAR surveys.

Read this paper on arXiv…

D. Gorgan, O. Vaduvescu, T. Stefanut, et al.
Mon, 11 Mar 19
48/78

Comments: ESA NEO and Debris Detection Conference, ESA/ESOC, Darmstadt, Germany, 22-24 Jan 2019

HexagDLy – Processing hexagonally sampled data with CNNs in PyTorch [CL]

http://arxiv.org/abs/1903.01814


HexagDLy is a Python library extending the PyTorch deep learning framework with convolution and pooling operations on hexagonal grids. It aims to ease the access to convolutional neural networks for applications that rely on hexagonally sampled data, as commonly found, for example, in ground-based astroparticle physics experiments.
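
Usage mirrors plain PyTorch layers; the snippet below follows my reading of the project's README (argument names and the ring-based kernel-size convention may differ in current releases), so treat it as a hedged sketch rather than a definitive example.

```python
# Hexagonal convolution and pooling as drop-in PyTorch layers (per README).
import torch
import hexagdly

x = torch.randn(1, 1, 16, 16)   # hexagonally sampled data stored on a square grid
conv = hexagdly.Conv2d(in_channels=1, out_channels=4, kernel_size=1, stride=1)
pool = hexagdly.MaxPool2d(kernel_size=1, stride=2)
y = pool(conv(x))               # shapes follow the usual PyTorch conventions
```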

Read this paper on arXiv…

C. Steppa and T. Holch
Wed, 6 Mar 19
39/75

Comments: N/A

Automated Prototype for Asteroids Detection [IMA]

http://arxiv.org/abs/1901.10469


Near Earth Asteroids (NEAs) are discovered daily, mainly by a few major surveys; nevertheless, many of them remain unobserved for years, even decades. Even so, there is room for new discoveries, including those submitted by smaller projects and amateur astronomers. Besides the well-known surveys that have their own automated systems of asteroid detection, there are only a few software solutions designed to help amateurs and mini-surveys in NEA discovery. Some of these obtain their results based on the blink method, in which a set of reduced images is shown one after another and the astronomer has to visually detect real moving objects in the series. This technique becomes harder as CCD cameras increase in size. Aiming to replace manual detection, we propose an automated pipeline prototype for asteroid detection, written in Python under Linux, which calls several third-party astrophysics libraries.

Read this paper on arXiv…

D. Copandean, O. Vaduvescu and D. Gorgan
Thu, 31 Jan 19
15/59

Comments: 13th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania. arXiv admin note: text overlap with arXiv:1901.02542

NEARBY Platform for Detecting Asteroids in Astronomical Images Using Cloud-based Containerized Applications [IMA]

http://arxiv.org/abs/1901.04248


The continued monitoring and surveying of nearby space to detect Near Earth Objects (NEOs) and Near Earth Asteroids (NEAs) is essential because of the threats that such objects pose to the future of our planet. We need more computational resources and advanced algorithms to deal with the exponential growth in the performance of digital cameras and to be able to process (in near real time) data coming from large surveys. This paper presents a software platform called NEARBY that supports automated detection of moving sources (asteroids) among stars in astronomical images. The detection procedure is based on classic “blink” detection; after that, the system supports visual analysis techniques to validate the moving sources, assisted by static and dynamic presentations.

Read this paper on arXiv…

V. Bacu, A. Sabou, T. Stefanut, et al.
Tue, 15 Jan 19
2/83

Comments: IEEE 14th International Conference on Intelligent Computer Communication and Processing (ICCP), Cluj-Napoca, Romania

Rotation Invariant Descriptors for Galaxy Morphological Classification [CL]

http://arxiv.org/abs/1812.04706


The detection of objects that are multi-oriented is a difficult pattern recognition problem. In this paper, we propose to evaluate the performance of different families of descriptors for the classification of galaxy morphologies. We investigate the performance of the Hu moments, Flusser moments, Zernike moments, Fourier-Mellin moments, and ring projection techniques based on 1D moments and the Fourier transform. We consider two main datasets for the performance evaluation. The first dataset is an artificial dataset based on representative templates from 11 types of galaxies, which are evaluated with different transformations (noise, smoothing), alone or combined. The evaluation is based on image retrieval performance to estimate the robustness of the rotation invariant descriptors with this type of image. The second dataset is composed of real images extracted from the Galaxy Zoo 2 project. The binary classification of elliptical and spiral galaxies is achieved with pre-processing steps including morphological filtering and a Laplacian pyramid. For the binary classification, we compare the different sets of features with Support Vector Machines (SVM), Extreme Learning Machine, and different types of linear discriminant analysis techniques. The results support the conclusion that the proposed framework for the binary classification of elliptical and spiral galaxies provides an area under the ROC curve reaching 99.54%, proving the robustness of the approach for helping astronomers to study galaxies.
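
One of the descriptor families in that list is easy to reproduce. The sketch below computes the seven Hu moments with OpenCV, with the common log transform applied since the raw invariants span many orders of magnitude; it illustrates one family only, not the paper's full pipeline.

```python
# Rotation-invariant Hu-moment descriptor for a grayscale galaxy cutout.
import cv2
import numpy as np

def hu_descriptor(image):
    # image: 2-D grayscale array.
    m = cv2.moments(image.astype(np.float32))
    hu = cv2.HuMoments(m).ravel()                        # 7 rotation invariants
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)  # log-scaled for comparability
```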

Read this paper on arXiv…

H. Cecotti
Thu, 13 Dec 18
14/50

Comments: 11 pages

DeepSphere: Efficient spherical Convolutional Neural Network with HEALPix sampling for cosmological applications [CEA]

http://arxiv.org/abs/1810.12186


Convolutional Neural Networks (CNNs) are a cornerstone of the Deep Learning toolbox and have led to many breakthroughs in Artificial Intelligence. These networks have mostly been developed for regular Euclidean domains such as those supporting images, audio, or video. Because of their success, CNN-based methods are becoming increasingly popular in Cosmology. Cosmological data often comes as spherical maps, which make the use of the traditional CNNs more complicated. The commonly used pixelization scheme for spherical maps is the Hierarchical Equal Area isoLatitude Pixelisation (HEALPix). We present a spherical CNN for analysis of full and partial HEALPix maps, which we call DeepSphere. The spherical CNN is constructed by representing the sphere as a graph. Graphs are versatile data structures that can act as a discrete representation of a continuous manifold. Using the graph-based representation, we define many of the standard CNN operations, such as convolution and pooling. With filters restricted to being radial, our convolutions are equivariant to rotation on the sphere, and DeepSphere can be made invariant or equivariant to rotation. This way, DeepSphere is a special case of a graph CNN, tailored to the HEALPix sampling of the sphere. This approach is computationally more efficient than using spherical harmonics to perform convolutions. We demonstrate the method on a classification problem of weak lensing mass maps from two cosmological models and compare the performance of the CNN with that of two baseline classifiers. The results show that the performance of DeepSphere is always superior or equal to both of these baselines. For high noise levels and for data covering only a smaller fraction of the sphere, DeepSphere achieves typically 10% better classification accuracy than those baselines. Finally, we show how learned filters can be visualized to introspect the neural network.
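
The sphere-as-graph construction can be sketched with healpy alone: each HEALPix pixel becomes a node connected to its (up to) 8 neighbours, and graph convolutions then operate on that structure. This is a minimal illustration of the representation, not the DeepSphere implementation.

```python
# Build a HEALPix neighbour graph as an edge list (healpy returns -1
# where a neighbour is missing, so those entries are filtered out).
import numpy as np
import healpy as hp

nside = 16
npix = hp.nside2npix(nside)
neighbours = hp.get_all_neighbours(nside, np.arange(npix))  # shape (8, npix)

rows = np.repeat(np.arange(npix), 8)
cols = neighbours.T.ravel()
valid = cols >= 0
edges = np.stack([rows[valid], cols[valid]])   # (2, n_edges) adjacency list
```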

Read this paper on arXiv…

N. Perraudin, M. Defferrard, T. Kacprzak, et. al.
Tue, 30 Oct 18
70/73

Comments: N/A

On the dissection of degenerate cosmologies with machine learning [CEA]

http://arxiv.org/abs/1810.11027


Based on the DUSTGRAIN-pathfinder suite of simulations, we investigate observational degeneracies between nine models of modified gravity and massive neutrinos. Three types of machine learning techniques are tested for their ability to discriminate lensing convergence maps by extracting dimensionally reduced representations of the data. Classical map descriptors such as the power spectrum, peak counts and Minkowski functionals are combined into a joint feature vector and compared to the descriptors and statistics that are common to the field of digital image processing. To learn new features directly from the data we use a Convolutional Neural Network (CNN). For the mapping between feature vectors and the predictions of their underlying model, we implement two different classifiers; one based on a nearest-neighbour search and one based on a fully connected neural network. We find that the neural network provides a much more robust classification than the nearest-neighbour approach and that the CNN provides the most discriminating representation of the data. It achieves the cleanest separation between the different models and the highest classification success rate of 59% for a single source redshift. Once we perform a tomographic CNN analysis, the total classification accuracy increases significantly to 76% with no observational degeneracies remaining. Visualising the filter responses of the CNN at different network depths provides us with the unique opportunity to learn from very complex models and to better understand why they perform so well.

Read this paper on arXiv…

J. Merten, C. Giocoli, M. Baldi, et. al.
Mon, 29 Oct 18
32/45

Comments: 20 pages, 14 figures, 10 tables. Associated code and data respository at this https URL . Submitted to MNRAS, comments welcome

DeepCMB: Lensing Reconstruction of the Cosmic Microwave Background with Deep Neural Networks [CEA]

http://arxiv.org/abs/1810.01483


Next-generation cosmic microwave background (CMB) experiments will have lower noise and therefore increased sensitivity, enabling improved constraints on fundamental physics parameters such as the sum of neutrino masses and the tensor-to-scalar ratio r. Achieving competitive constraints on these parameters requires high signal-to-noise extraction of the projected gravitational potential from the CMB maps. Standard methods for reconstructing the lensing potential employ the quadratic estimator (QE). However, the QE performs suboptimally at the low noise levels expected in upcoming experiments. Other methods, like maximum likelihood estimators (MLE), are under active development. In this work, we demonstrate reconstruction of the CMB lensing potential with deep convolutional neural networks (CNN) – i.e., a ResUNet. The network is trained and tested on simulated data, and otherwise has no physical parametrization related to the physical processes of the CMB and gravitational lensing. We show that, over a wide range of angular scales, ResUNets recover the input gravitational potential with a higher signal-to-noise ratio than the QE method, reaching levels comparable to analytic approximations of MLE methods. We demonstrate that the network outputs quantifiably different lensing maps when given input CMB maps generated with different cosmologies. We also show we can use the reconstructed lensing map for cosmological parameter estimation. This application of CNNs provides a few innovations at the intersection of cosmology and machine learning. First, while training and regressing on images, we predict a continuous-variable field rather than discrete classes. Second, we are able to establish uncertainty measures for the network output that are analogous to those of standard methods. We expect this approach to excel in capturing hard-to-model non-Gaussian astrophysical foreground and noise contributions.

Read this paper on arXiv…

J. Caldeira, W. Wu, B. Nord, et. al.
Thu, 4 Oct 18
1/72

Comments: 17 pages; LaTeX; 11 figures

Scientific image rendering for space scenes with the SurRender software [IMA]

http://arxiv.org/abs/1810.01423


Spacecraft autonomy can be enhanced by vision-based navigation (VBN) techniques. Applications range from manoeuvres around Solar System objects and landing on planetary surfaces, to in-orbit servicing or space debris removal. The development and validation of VBN algorithms rely on the availability of physically accurate, relevant images. Yet archival data from past missions can rarely serve this purpose and acquiring new data is often costly. The SurRender software is an image simulator that addresses the challenges of realistic image rendering, with high representativeness for space scenes. Images are rendered by raytracing, which implements the physical principles of geometrical light propagation, in physical units. A macroscopic instrument model and scene objects' reflectance functions are used. SurRender is specially optimized for space scenes, with huge distances between objects and scenes up to Solar System size. Raytracing conveniently tackles some important effects for VBN algorithms: image quality, eclipses, secondary illumination, subpixel limb imaging, etc. A simulation is easily set up (in MATLAB, Python, and more) by specifying the positions of the bodies (camera, Sun, planets, satellites) over time, 3D shapes, and material surface properties. SurRender comes with its own modelling tool, enabling users to go beyond existing models for shapes, materials and sensors (projection, temporal sampling, electronics, etc.). It is natively designed to simulate different kinds of sensors (visible, LIDAR, etc.). Tools are available for manipulating huge datasets to store albedo maps and digital elevation models, or for procedural (fractal) texturing that generates high-quality images for a large range of observing distances (from millions of km to touchdown). We illustrate SurRender's performance with a selection of case studies, placing particular emphasis on a 900-km Moon flyby simulation.

Read this paper on arXiv…

R. Brochard, J. Lebreton, C. Robin, et. al.
Thu, 4 Oct 18
17/72

Comments: 11 pages, 10 figures, 69th International Astronautical Congress (IAC), Bremen, Germany, 1-5 October 2018, this https URL

Novel Sparse Recovery Algorithms for 3D Debris Localization using Rotating Point Spread Function Imagery [CL]

http://arxiv.org/abs/1809.10541


An optical imager that exploits off-center image rotation to encode both the lateral and depth coordinates of point sources in a single snapshot can perform 3D localization and tracking of space debris. When actively illuminated, unresolved space debris, which can be regarded as a swarm of point sources, can scatter a fraction of laser irradiance back into the imaging sensor. Determining the source locations and fluxes is a large-scale sparse 3D inverse problem, for which we have developed efficient and effective algorithms based on sparse recovery using non-convex optimization. Numerical simulations illustrate the efficiency and stability of the algorithms.
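
As a concrete illustration of the inverse-problem setup, here is a minimal iterative soft-thresholding (ISTA) sketch for l1-regularized sparse recovery. Note the paper uses non-convex optimization, so this convex solver is a simplified stand-in, and all problem sizes are made up.

```python
# Hedged sketch: recover a sparse source vector x from b = A x + noise via ISTA.
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 120, 400, 8                    # measurements, unknowns, true sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true + 1e-3 * rng.standard_normal(m)

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - b)                # gradient of the data-fidelity term
    z = x - step * g
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

print("max abs reconstruction error:", np.max(np.abs(x - x_true)))
```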

Read this paper on arXiv…

C. Wang, R. Plemmons, S. Prasad, et. al.
Fri, 28 Sep 18
27/52

Comments: 16 pages. arXiv admin note: substantial text overlap with arXiv:1804.04000

Deriving star cluster parameters by convolutional neural networks. I. Age, mass, and size [GA]

http://arxiv.org/abs/1807.07658


Context. Convolutional neural networks (CNNs) have proven able to perform fast classification and detection on natural images and have the potential to infer astrophysical parameters from the exponentially increasing amount of sky survey imaging data. The inference pipeline can be trained either on real human-annotated data or on simulated mock observations. Up to now, star cluster analysis has been based on integrated or individually resolved stellar photometry. This limits the amount of information that can be extracted from cluster images.
Aims. We develop a CNN-based algorithm aimed at simultaneously deriving ages, masses, and sizes of star clusters directly from multi-band images, and demonstrate the CNN's capabilities on low-mass, semi-resolved star clusters in a low signal-to-noise regime.
Methods. A CNN was constructed based on the deep residual network (ResNet) architecture and trained on simulated images of star clusters with various ages, masses, and sizes. To provide realistic backgrounds, M31 star fields taken from the PHAT survey were added to the mock cluster images.
Results. The proposed CNN was verified on mock images of artificial clusters and demonstrated high precision and no significant bias for clusters of ages $\lesssim$3 Gyr and masses between 250 and 4,000 ${\rm M_\odot}$. The pipeline is end-to-end, starting from input images all the way to the inferred parameters; no hand-coded steps have to be performed – estimates of the parameters are provided by the neural network in one inferential step from raw images.

Read this paper on arXiv…

J. Bialopetravičius, D. Narbutis and V. Vansevičius
Mon, 23 Jul 18
2/48

Comments: 10 pages, 11 figures

DeepSource: Point Source Detection using Deep Learning [IMA]

http://arxiv.org/abs/1807.02701


Point source detection at low signal-to-noise is challenging for astronomical surveys, particularly in radio interferometry images where the noise is correlated. Machine learning is a promising solution, allowing the development of algorithms tailored to specific telescope arrays and science cases. We present DeepSource – a deep learning solution – that uses convolutional neural networks to achieve these goals. DeepSource enhances the Signal-to-Noise Ratio (SNR) of the original map and then uses dynamic blob detection to detect sources. Trained and tested on two sets of 500 simulated 1 deg x 1 deg MeerKAT images with a total of 300,000 sources, DeepSource is essentially perfect in both purity and completeness down to SNR = 4 and outperforms PyBDSF in all metrics. For uniformly-weighted images it achieves a Purity x Completeness (PC) score at SNR = 3 of 0.73, compared to 0.31 for the best PyBDSF model. For natural-weighting we find a smaller improvement of ~40% in the PC score at SNR = 3. If instead we ask where either of the purity or completeness first drop to 90%, we find that DeepSource reaches this value at SNR = 3.6 compared to the 4.3 of PyBDSF (natural-weighting). A key advantage of DeepSource is that it can learn to optimally trade off purity and completeness for any science case under consideration. Our results show that deep learning is a promising approach to point source detection in astronomical images.
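
A minimal sketch of the second stage described above – blob detection on an (already SNR-enhanced) map – using scikit-image's Laplacian-of-Gaussian detector as a stand-in for DeepSource's own dynamic blob detection. The injected sources and thresholds are toy values.

```python
# Hedged sketch: inject point sources into noise, then detect them with blob_log.
import numpy as np
from skimage.feature import blob_log

rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, (256, 256))           # background noise
yy, xx = np.mgrid[:256, :256]
for y, x in [(40, 60), (128, 200), (220, 30)]:   # three toy point sources
    img += 8.0 * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * 2.0 ** 2))

# threshold chosen well above the smoothed-noise level for this toy example
blobs = blob_log(img, min_sigma=1, max_sigma=4, threshold=1.0)
print(blobs[:, :2])    # (row, col) positions of detected sources
```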

Read this paper on arXiv…

A. Sadr, E. Vos, B. Bassett, et. al.
Tue, 10 Jul 18
3/79

Comments: 15 pages, 13 figures, submitted to MNRAS

A volumetric deep Convolutional Neural Network for simulation of dark matter halo catalogues [CEA]

http://arxiv.org/abs/1805.04537


For modern large-scale structure survey techniques it has become standard practice to test data analysis pipelines on large suites of mock simulations, a task which is currently prohibitively expensive for full N-body simulations. Instead of calculating this costly gravitational evolution, we have trained a three-dimensional deep Convolutional Neural Network (CNN) to identify dark matter protohalos directly from the cosmological initial conditions. Training on halo catalogues from the Peak Patch semi-analytic code, we test various CNN architectures and find they generically achieve a Dice coefficient of ~92% in only 24 hours of training. We present a simple and fast geometric halo finding algorithm to extract halos from this powerful pixel-wise binary classifier and find that the predicted catalogues match the mass function and power spectra of the ground truth simulations to within ~10%. We investigate the effect of long-range tidal forces on an object-by-object basis and find that the network’s predictions are consistent with the non-linear ellipsoidal collapse equations used explicitly by the Peak Patch algorithm.
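
The Dice coefficient quoted above is a standard overlap score for pixel-wise binary masks; a minimal numpy implementation on toy 3D masks:

```python
# Hedged sketch: Dice = 2|P ∩ T| / (|P| + |T|) for boolean masks.
import numpy as np

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

rng = np.random.default_rng(3)
truth = rng.random((64, 64, 64)) > 0.7           # toy 3D "protohalo" mask
pred = truth.copy()
flip = rng.random(truth.shape) < 0.05            # corrupt 5% of voxels
pred[flip] = ~pred[flip]
print(f"Dice = {dice(pred, truth):.3f}")
```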

Read this paper on arXiv…

P. Berger and G. Stein
Tue, 15 May 18
33/87

Comments: 11 pages, 8 figures, 1 table. Comments welcome

Analyzing Solar Irradiance Variation From GPS and Cameras [IMA]

http://arxiv.org/abs/1804.07629


The total amount of solar irradiance falling on the earth’s surface is an important area of study among photovoltaic (PV) engineers and remote sensing analysts. The received solar irradiance impacts the total amount of generated solar energy. However, this generation is often hindered by the high degree of solar irradiance variability. In this paper, we study the main factors behind such variability with the assistance of Global Positioning System (GPS) and ground-based, high-resolution sky cameras. This analysis will also be helpful for understanding cloud phenomena and other events in the earth’s atmosphere.

Read this paper on arXiv…

S. Manandhar, S. Dev, Y. Lee, et. al.
Mon, 23 Apr 18
50/63

Comments: Published in IEEE AP-S Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting, 2018

A Subpixel Registration Algorithm for Low PSNR Images [CL]

http://arxiv.org/abs/1804.00174


This paper presents a fast algorithm for obtaining high-accuracy subpixel translation of low PSNR images. Instead of locating the maximum point on the upsampled images or fitting the peak of the correlation surface, the proposed algorithm is based on the measurement of the centroid on the cross correlation surface by the Modified Moment method. Synthetic images, real solar images and standard testing images with white Gaussian noise added were tested, and the results show that the accuracy of our algorithm is comparable with that of other subpixel registration techniques while the processing speed is higher. Its drawbacks are also discussed at the end of this paper.
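
A minimal numpy sketch of the general approach: locate the peak of the FFT cross-correlation surface, then refine it with an intensity centroid over a small window. This is a simplified stand-in for the paper's Modified Moment method, demonstrated on toy images.

```python
# Hedged sketch: subpixel registration via cross-correlation peak centroid.
import numpy as np

def subpixel_shift(a, b, win=3):
    """Estimate the (dy, dx) shift that aligns image b onto image a."""
    c = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    c = np.fft.fftshift(c)
    py, px = np.unravel_index(np.argmax(c), c.shape)
    patch = c[py - win:py + win + 1, px - win:px + win + 1]
    patch = patch - patch.min()                   # keep centroid weights >= 0
    gy, gx = np.mgrid[-win:win + 1, -win:win + 1]
    dy = py + (gy * patch).sum() / patch.sum() - a.shape[0] // 2
    dx = px + (gx * patch).sum() / patch.sum() - a.shape[1] // 2
    return dy, dx

yy, xx = np.mgrid[:128, :128]
img = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 5.0 ** 2))
shifted = np.roll(np.roll(img, 3, axis=0), -5, axis=1)  # shift by (+3, -5)
print(subpixel_shift(img, shifted))                     # ~(-3.0, 5.0) undoes it
```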

Read this paper on arXiv…

S. Feng, L. Deng, G. Shu, et. al.
Tue, 3 Apr 18
57/57

Comments: in 2012 IEEE 5th Int. Conf. on Advanced Computational Intelligence (ICACI) (New York: IEEE), 626

Image-based deep learning for classification of noise transients in gravitational wave detectors [CL]

http://arxiv.org/abs/1803.09933


The detection of gravitational waves has inaugurated the era of gravitational astronomy and opened new avenues for the multimessenger study of cosmic sources. Thanks to their sensitivity, the Advanced LIGO and Advanced Virgo interferometers will probe a much larger volume of space and expand the capability of discovering new gravitational wave emitters. The characterization of these detectors is a primary task in order to recognize the main sources of noise and optimize the sensitivity of interferometers. Glitches are transient noise events that can impact the data quality of the interferometers and their classification is an important task for detector characterization. Deep learning techniques are a promising tool for the recognition and classification of glitches. We present a classification pipeline that exploits convolutional neural networks to classify glitches starting from their time-frequency evolution represented as images. We evaluated the classification accuracy on simulated glitches, showing that the proposed algorithm can automatically classify glitches on very fast timescales and with high accuracy, thus providing a promising tool for online detector characterization.

Read this paper on arXiv…

M. Razzano and E. Cuoco
Wed, 28 Mar 18
7/148

Comments: 25 pages, 8 figures, accepted for publication in Classical and Quantum Gravity

Classification of simulated radio signals using Wide Residual Networks for use in the search for extra-terrestrial intelligence [IMA]

http://arxiv.org/abs/1803.08624


We describe a new approach and algorithm for the detection of artificial signals and their classification in the search for extraterrestrial intelligence (SETI). The characteristics of radio signals observed during SETI research are often most apparent when those signals are represented as spectrograms. Additionally, many observed signals tend to share the same characteristics, allowing for sorting of the signals into different classes. For this work, complex-valued time-series data were simulated to produce a corpus of 140,000 signals from seven different signal classes. A wide residual neural network was then trained to classify these signal types using the gray-scale 2D spectrogram representation of those signals. An average $F_1$ score of 95.11\% was attained when tested on previously unobserved simulated signals. We also report on the performance of the model across a range of signal amplitudes.

Read this paper on arXiv…

G. Cox, S. Egly, G. Harp, et. al.
Mon, 26 Mar 18
36/43

Comments: 16 pages, 8 figures

Towards understanding feedback from supermassive black holes using convolutional neural networks [IMA]

http://arxiv.org/abs/1712.00523


Supermassive black holes at centers of clusters of galaxies strongly interact with their host environment via AGN feedback. Key tracers of such activity are X-ray cavities — regions of lower X-ray brightness within the cluster. We present an automatic method for detecting and characterizing X-ray cavities in noisy, low-resolution X-ray images. We simulate clusters of galaxies, insert cavities into them, and produce realistic low-quality images comparable to observations at high redshifts. We then train a custom-built convolutional neural network to generate a pixel-wise analysis of the presence of cavities in a cluster. A ResNet architecture is then used to decode the radii of cavities from the pixel-wise predictions. We surpass the accuracy, stability, and speed of current visual-inspection-based methods on simulated data.

Read this paper on arXiv…

S. Fort
Tue, 5 Dec 17
30/96

Comments: 5 pages, 5 figures, accepted at Workshop on Deep Learning for Physical Sciences (DLPS 2017), NIPS 2017, Long Beach, CA, USA

Single-epoch supernova classification with deep convolutional neural networks [IMA]

http://arxiv.org/abs/1711.11526


Supernovae Type-Ia (SNeIa) play a significant role in exploring the history of the expansion of the Universe, since they are the best-known standard candles with which we can accurately measure the distance to the objects. Finding large samples of SNeIa and investigating their detailed characteristics have become an important issue in cosmology and astronomy. Existing methods rely on a photometric approach that first measures the luminance of supernova candidates precisely and then fits the results to a parametric function of temporal changes in luminance. However, this inevitably requires multi-epoch observations and complex luminance measurements. In this work, we present a novel method for classifying SNeIa simply from single-epoch observation images without any complex measurements, by effectively integrating state-of-the-art computer vision methodology into the standard photometric approach. Our method first builds a convolutional neural network for estimating the luminance of supernovae from telescope images, and then constructs another neural network for the classification, where the estimated luminance and observation dates are used as features for classification. Both of the neural networks are integrated into a single deep neural network to classify SNeIa directly from observation images. Experimental results show the effectiveness of the proposed method and reveal classification performance comparable to existing photometric methods with multi-epoch observations.

Read this paper on arXiv…

A. Kimura, I. Takahashi, M. Tanaka, et. al.
Fri, 1 Dec 17
1/68

Comments: 7 pages, published as a workshop paper in ICDCS2017, in June 2017

Pulsar Candidate Identification with Artificial Intelligence Techniques [IMA]

http://arxiv.org/abs/1711.10339


Discovering pulsars is a significant and meaningful research topic in the field of radio astronomy. With the advent of astronomical instruments such as the Five-hundred-meter Aperture Spherical Telescope (FAST) in China, data volumes and data rates are growing exponentially. This fact necessitates a focus on artificial intelligence (AI) technologies that can perform automatic pulsar candidate identification to mine large astronomical data sets. Automatic pulsar candidate identification can be considered as a task of determining potential candidates for further investigation and eliminating noise from radio frequency interference or other non-pulsar signals. It is hard to improve the performance of DCNN-based pulsar identification because the limited training samples prevent the network structure from being designed deep enough to learn good features, and because of the crucial class imbalance problem caused by the very limited number of real pulsar samples. To address these problems, we propose a framework which combines a deep convolution generative adversarial network (DCGAN) with a support vector machine (SVM) to deal with the class imbalance problem and to improve pulsar identification accuracy. The DCGAN is used as a sample generation and feature learning model, and the SVM is adopted as the classifier for predicting candidates’ labels in the inference stage. The proposed framework is a novel technique which not only solves the class imbalance problem but also learns discriminative feature representations of pulsar candidates instead of computing hand-crafted features in preprocessing steps, which makes it more accurate for automatic pulsar candidate selection. Experiments on two pulsar datasets verify the effectiveness and efficiency of our proposed method.

Read this paper on arXiv…

P. Guo, F. Duan, P. Wang, et. al.
Wed, 29 Nov 17
21/69

Comments: arXiv admin note: text overlap with arXiv:1603.05166 by other authors

A Dictionary Approach to Identifying Transient RFI [IMA]

http://arxiv.org/abs/1711.08823


As radio telescopes become more sensitive, the damaging effects of radio frequency interference (RFI) become more apparent. Near radio telescope arrays, RFI sources are often easily removed or replaced; the challenge lies in identifying them. Transient (impulsive) RFI is particularly difficult to identify. We propose a novel dictionary-based approach to transient RFI identification. RFI events are treated as sequences of sub-events, drawn from particular labelled classes. We demonstrate an automated method of extracting and labelling sub-events using a dataset of transient RFI. A dictionary of labels may be used in conjunction with hidden Markov models to identify the sources of RFI events reliably. We attain improved classification accuracy over traditional approaches such as SVMs or a naïve kNN classifier. Finally, we investigate why transient RFI is difficult to classify. We show that cluster separation in the principal components domain is influenced by the mains supply phase for certain sources.

Read this paper on arXiv…

D. Czech, A. Mishra and M. Inggs
Mon, 27 Nov 17
41/78

Comments: N/A

Multiple component decomposition from millimeter single-channel data [IMA]

http://arxiv.org/abs/1711.08456


We present an implementation of a blind source separation algorithm to remove foregrounds from millimeter surveys made by single-channel instruments. In order to make such a decomposition possible over single-wavelength data, we generate levels of artificial redundancy, then perform a blind decomposition, calibrate the resulting maps, and lastly measure physical information. We simulate the reduction pipeline using mock data: atmospheric fluctuations, extended astrophysical foregrounds, and point-like sources, but we apply the same methodology to the AzTEC/ASTE survey of the Great Observatories Origins Deep Survey-South (GOODS-S). In both applications, our technique robustly decomposes redundant maps into their underlying components, reducing flux bias, improving signal-to-noise, and minimizing information loss. In particular, the GOODS-S survey is decomposed into four independent physical components: one of them is the already known map of point sources, two are atmospheric and systematic foregrounds, and the fourth is an extended emission component that can be interpreted as the confusion background of faint sources.
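
A minimal sketch of blind source separation on redundant data, using scikit-learn's FastICA on toy 1D mixtures. The paper's redundancy generation and calibration steps are not reproduced here; the sources and mixing matrix below are stand-ins.

```python
# Hedged sketch: separate toy "signal / atmosphere / systematics" mixtures.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 2000)
sources = np.c_[np.sin(40 * t),                               # toy sky signal
                np.cumsum(rng.standard_normal(t.size)) / 30,  # slow atmosphere
                rng.laplace(size=t.size) * 0.3]               # spiky systematics
A_mix = rng.random((3, 3)) + 0.5              # mixing into 3 redundant "maps"
X = sources @ A_mix.T

ica = FastICA(n_components=3, random_state=0)
S_est = ica.fit_transform(X)                  # recovered independent components
print(S_est.shape, ica.mixing_.shape)
```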

Read this paper on arXiv…

I. Rodriguez-Montoya, D. Sanchez-Arguelles, I. Aretxaga, et. al.
Mon, 27 Nov 17
51/78

Comments: Accepted in ApJS

Reconstructing Video from Interferometric Measurements of Time-Varying Sources [IMA]

http://arxiv.org/abs/1711.01357


Very long baseline interferometry (VLBI) makes it possible to recover images of astronomical sources with extremely high angular resolution. Most recently, the Event Horizon Telescope (EHT) has extended VLBI to short mm wavelengths with a goal of achieving angular resolution sufficient for imaging the event horizons of supermassive black holes. VLBI provides measurements related to the underlying source image through a sparse set of spatial frequencies. An image can then be recovered from these measurements by making assumptions about the underlying image. One of the most important assumptions made by conventional imaging methods is that over the course of a night’s observation the image is static. However, for quickly evolving sources, such as the galactic center’s supermassive black hole (SgrA*) targeted by the EHT, this assumption is violated and these conventional imaging approaches fail. In this work we propose a new way to model VLBI measurements that allows us to recover both the appearance and dynamics of an evolving source by reconstructing a video rather than a static image. By modeling VLBI measurements using a Gaussian Markov Model, we are able to propagate information across observations in time to reconstruct a video, while simultaneously learning about the dynamics of the source’s emission region. We demonstrate our proposed Expectation-Maximization (EM) algorithm, StarWarps, on realistic, synthetic observations of black holes, and show how it substantially improves results compared to conventional imaging algorithms.

Read this paper on arXiv…

K. Bouman, M. Johnson, A. Dalca, et. al.
Tue, 7 Nov 17
86/118

Comments: Submitted to Transactions on Computational Imaging

Muon Trigger for Mobile Phones [CL]

http://arxiv.org/abs/1709.08605


The CRAYFIS experiment proposes to use privately owned mobile phones as a ground detector array for Ultra High Energy Cosmic Rays. Upon interacting with Earth’s atmosphere, these events produce extensive particle showers which can be detected by cameras on mobile phones. A typical shower contains minimally-ionizing particles such as muons. As these particles interact with CMOS image sensors, they may leave tracks of faintly-activated pixels that are sometimes hard to distinguish from random detector noise. Triggers that rely on the presence of very bright pixels within an image frame are not efficient in this case.
We present a trigger algorithm based on Convolutional Neural Networks which selects images containing such tracks and is evaluated in a lazy manner: the response of each successive layer is computed only if the activation of the current layer satisfies a continuation criterion. The use of neural networks increases the sensitivity considerably compared with image thresholding, while the lazy evaluation allows for execution of the trigger under the limited computational power of mobile phones.

Read this paper on arXiv…

M. Borisyak, M. Usvyatsov, M. Mulhearn, et. al.
Tue, 26 Sep 17
43/87

Comments: N/A

Deep-Learnt Classification of Light Curves [IMA]

http://arxiv.org/abs/1709.06257


Astronomy light curves are sparse, gappy, and heteroscedastic. As a result, standard time series methods regularly used for financial and similar datasets are of little help, and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes. In this work, we transform the time series into two-dimensional light curve representations in order to classify them using modern deep learning techniques. In particular, we show that classifiers based on convolutional neural networks work well for broad characterization and classification. We use labeled datasets of periodic variables from the CRTS survey and show how this opens doors for quick classification of diverse classes with several possible exciting extensions.
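
One common 2D light-curve representation in this line of work is a dm-dt image: histogram all pairwise magnitude differences against their time separations and treat the result as an image for a CNN. A minimal numpy sketch on a toy light curve (the binning choices are illustrative):

```python
# Hedged sketch: build a dm-dt image from a sparse, gappy toy light curve.
import numpy as np

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0, 500, 80))               # sparse, gappy sampling
m = 15 + 0.5 * np.sin(2 * np.pi * t / 21) + 0.05 * rng.standard_normal(80)

i, j = np.triu_indices(t.size, k=1)
dt, dm = t[j] - t[i], m[j] - m[i]                  # all pairwise differences

dt_bins = np.logspace(-1, np.log10(500), 24)       # log-spaced time lags
dm_bins = np.linspace(-1.5, 1.5, 24)
img, _, _ = np.histogram2d(dt, dm, bins=[dt_bins, dm_bins])
img = (255 * img / img.max()).astype(np.uint8)     # normalized 23x23 image
print(img.shape)
```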

Read this paper on arXiv…

A. Mahabal, K. Sheth, F. Gieseke, et. al.
Wed, 20 Sep 17
22/57

Comments: 8 pages, 9 figures, 6 tables, 2 listings. Accepted to 2017 IEEE Symposium Series on Computational Intelligence (SSCI)

What does a convolutional neural network recognize in the moon? [CL]

http://arxiv.org/abs/1708.05636


Many people see a human face or animals in the pattern of the maria on the moon. Although the pattern corresponds to the actual variation in composition of the lunar surface, the culture and environment of each society influence the recognition of these objects (i.e., symbols) as specific entities. In contrast, a convolutional neural network (CNN) recognizes objects from characteristic shapes in a training data set. Using a CNN, this study evaluates the probabilities of the pattern of lunar maria being categorized into the shape of a crab, a lion and a hare. If Mare Frigoris (a dark band on the moon) is included in the lunar image, the lion is recognized. However, in an image without Mare Frigoris, the hare has the highest probability of recognition. Thus, the recognition of objects similar to the lunar pattern depends on which part of the lunar maria is taken into account. In human recognition, before we find similarities between the lunar maria and objects such as animals, we may be persuaded in advance to see a particular image from our culture and environment and then adjust the lunar pattern to the shape of the imagined object.

Read this paper on arXiv…

D. Shoji
Mon, 21 Aug 17
37/44

Comments: 12 pages, 6 figures

Colorimetric Calibration of a Digital Camera [CL]

http://arxiv.org/abs/1708.04685


In this paper, we introduce a novel – physico-chemical – approach for the calibration of a digital camera chip. This approach utilizes measurements of the incident light spectra of calibration films at different levels of gray to construct a calibration curve (number of incident photons vs. image pixel intensity) for each camera pixel. We show the spectral characteristics of such corrected digital raw image files (a primary camera signal) and demonstrate their suitability for subsequent image processing and analysis.

Read this paper on arXiv…

R. Rychtarikova, P. Soucek and D. Stys
Thu, 17 Aug 17
29/50

Comments: 14 pages, 6 figures

Flare Prediction Using Photospheric and Coronal Image Data [SSA]

http://arxiv.org/abs/1708.01323


The precise physical process that triggers solar flares is not currently understood. Here we attempt to capture the signature of this mechanism in solar image data of various wavelengths and use these signatures to predict flaring activity. We do this by developing an algorithm that [1] automatically generates features in 5.5 TB of image data taken by the Solar Dynamics Observatory of the solar photosphere, chromosphere, transition region, and corona during the time period between May 2010 and May 2014, [2] combines these features with other features based on flaring history and a physical understanding of putative flaring processes, and [3] classifies these features to predict whether a solar active region will flare within a time period of $T$ hours, where $T$ = 2 and 24. We find that when optimizing for the True Skill Score (TSS), photospheric vector magnetic field data combined with flaring history yields the best performance, and when optimizing for the area under the precision-recall curve, all the data are helpful. Our model performance yields a TSS of $0.84 \pm 0.03$ and $0.81 \pm 0.03$ in the $T$ = 2 and 24 hour cases, respectively, and a value of $0.13 \pm 0.07$ and $0.43 \pm 0.08$ for the area under the precision-recall curve in the $T$ = 2 and 24 hour cases, respectively. These relatively high scores are similar to, but not greater than, other attempts to predict solar flares. Given the similar values of algorithm performance across various types of models reported in the literature, we conclude that we can expect a certain baseline predictive capacity using these data. This is the first attempt to predict solar flares using photospheric vector magnetic field data as well as multiple wavelengths of image data from the chromosphere, transition region, and corona.
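
The True Skill Score (TSS) optimized above is recall minus false-alarm rate, TSS = TP/(TP+FN) - FP/(FP+TN); a minimal implementation on toy labels:

```python
# Hedged sketch: True Skill Score from binary predictions.
import numpy as np

def true_skill_statistic(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    return tp / (tp + fn) - fp / (fp + tn)

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
print(true_skill_statistic(y_true, y_pred))   # 2/3 - 1/5 ≈ 0.467
```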

Read this paper on arXiv…

E. Jonas, M. Bobra, V. Shankar, et. al.
Mon, 7 Aug 17
43/54

Comments: submitted for publication in the Astrophysical Journal

GPU-Accelerated Algorithms for Compressed Signals Recovery with Application to Astronomical Imagery Deblurring [CL]

http://arxiv.org/abs/1707.02244


Compressive sensing promises to enable bandwidth-efficient on-board compression of astronomical data by lifting the encoding complexity from the source to the receiver. The signal is recovered off-line, exploiting GPUs’ parallel computation capabilities to speed up the reconstruction process. However, inherent GPU hardware constraints limit the size of the recoverable signal and the speedup practically achievable. In this work, we design parallel algorithms that exploit the properties of circulant matrices for efficient GPU-accelerated sparse signal recovery. Our approach reduces the memory requirements, allowing us to recover very large signals with limited memory. In addition, it achieves a tenfold signal recovery speedup thanks to ad-hoc parallelization of matrix-vector multiplications and matrix inversions. Finally, we practically demonstrate our algorithms in a typical application of circulant matrices: deblurring a sparse astronomical image in the compressed domain.
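
The memory saving rests on a standard fact: a circulant matrix is diagonalized by the DFT, so a matrix-vector product reduces to FFTs, costing O(n log n) time and O(n) memory instead of storing the full n x n matrix. A minimal numpy check (the GPU kernels themselves are not reproduced here):

```python
# Hedged sketch: circulant matrix-vector product via FFT.
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(6)
c = rng.standard_normal(1024)          # first column defines the whole matrix
x = rng.standard_normal(1024)

y_fft = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real   # O(n log n)
y_dense = circulant(c) @ x                                 # O(n^2), for checking
print(np.allclose(y_fft, y_dense))                         # True
```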

Read this paper on arXiv…

A. Fiandrotti, S. Fosson, C. Ravazzi, et. al.
Mon, 10 Jul 17
12/64

Comments: N/A

Geometric calibration of Colour and Stereo Surface Imaging System of ESA's Trace Gas Orbiter [IMA]

http://arxiv.org/abs/1707.00606


There are many geometric calibration methods for “standard” cameras. These methods, however, cannot be used for the calibration of telescopes with large focal lengths and complex off-axis optics. Moreover, specialized calibration methods for telescopes are scarce in the literature. We describe the calibration method that we developed for the Colour and Stereo Surface Imaging System (CaSSIS) telescope, on board the ExoMars Trace Gas Orbiter (TGO). Although our method is described in the context of CaSSIS, with camera-specific experiments, it is general and can be applied to other telescopes. We further encourage re-use of the proposed method by making our calibration code and data available on-line.

Read this paper on arXiv…

S. Tulyakov, A. Ivanov, N. Thomas, et. al.
Tue, 4 Jul 17
33/74

Comments: Submitted to Advances in Space Research

Deep Transfer Learning: A new deep learning glitch classification method for advanced LIGO [CL]

http://arxiv.org/abs/1706.07446


The exquisite sensitivity of the advanced LIGO detectors has enabled the detection of multiple gravitational wave signals. The sophisticated design of these detectors mitigates the effect of most types of noise. However, advanced LIGO data streams are contaminated by numerous artifacts known as glitches: non-Gaussian noise transients with complex morphologies. Given their high rate of occurrence, glitches can lead to false coincident detections, obscure and even mimic gravitational wave signals. Therefore, successfully characterizing and removing glitches from advanced LIGO data is of utmost importance. Here, we present the first application of Deep Transfer Learning for glitch classification, showing that knowledge from deep learning algorithms trained for real-world object recognition can be transferred for classifying glitches in time-series based on their spectrogram images. Using the Gravity Spy dataset, containing hand-labeled, multi-duration spectrograms obtained from real LIGO data, we demonstrate that this method enables optimal use of very deep convolutional neural networks for classification given small training datasets, significantly reduces the time for training the networks, and achieves state-of-the-art accuracy above 98.8%, with perfect precision-recall on 8 out of 22 classes. Furthermore, new types of glitches can be classified accurately given few labeled examples with this technique. Once trained via transfer learning, we show that the convolutional neural networks can be truncated and used as excellent feature extractors for unsupervised clustering methods to identify new classes based on their morphology, without any labeled examples. Therefore, this provides a new framework for dynamic glitch classification for gravitational wave detectors, which are expected to encounter new types of noise as they undergo gradual improvements to attain design sensitivity.
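
A minimal transfer-learning sketch in PyTorch – an assumption rather than the paper's exact setup (torchvision >= 0.13 API): freeze an ImageNet-pretrained trunk and retrain a new head for the 22 glitch classes.

```python
# Hedged sketch: fine-tune only a new classification head on spectrogram images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():                     # freeze the pretrained trunk
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 22)   # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)    # a toy batch standing in for spectrograms
y = torch.randint(0, 22, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```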

Read this paper on arXiv…

D. George, H. Shen and E. Huerta
Mon, 26 Jun 17
9/40

Comments: N/A

Big Universe, Big Data: Machine Learning and Image Analysis for Astronomy [IMA]

http://arxiv.org/abs/1704.04650


Astrophysics and cosmology are rich with data. The advent of wide-area digital cameras on large aperture telescopes has led to ever more ambitious surveys of the sky. Data volumes of entire surveys a decade ago can now be acquired in a single night, and real-time analysis is often desired. Thus, modern astronomy requires big data know-how; in particular, it demands highly efficient machine learning and image analysis algorithms. But scalability is not the only challenge: astronomy applications touch several current machine learning research questions, such as learning from biased data and dealing with label and measurement noise. We argue that this makes astronomy a great domain for computer science research, as it pushes the boundaries of data analysis. In the following, we will present this exciting application area for data scientists. We will focus on exemplary results, discuss main challenges, and highlight some recent methodological advancements in machine learning and image analysis triggered by astronomical applications.

Read this paper on arXiv…

J. Kremer, K. Stensbo-Smidt, F. Gieseke, et. al.
Tue, 18 Apr 17
33/40

Comments: N/A

Restoration of Images with Wavefront Aberrations [IMA]

http://arxiv.org/abs/1704.00331


This contribution deals with image restoration in optical systems with coherent illumination, which is an important topic in astronomy, coherent microscopy and radar imaging. Such optical systems suffer from wavefront distortions, which are caused by imperfect imaging components and conditions. While known image restoration algorithms work well for incoherent imaging, they fail in the case of coherent images. In this paper a novel wavefront correction algorithm is presented, which allows image restoration under coherent conditions. In most coherent imaging systems, especially in astronomy, the wavefront deformation is known. Using this information, the proposed algorithm allows a high quality restoration even in the case of severe wavefront distortions. We present two versions of this algorithm, which are an evolution of the Gerchberg-Saxton and the Hybrid-Input-Output algorithms. The algorithm is verified on simulated and real microscopic images.
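
For reference, a minimal numpy sketch of the classic Gerchberg-Saxton iteration that the paper's algorithms build upon: alternate between pupil and image planes, imposing the known amplitude in each while keeping the phase. All sizes and amplitudes are toy values.

```python
# Hedged sketch: Gerchberg-Saxton phase retrieval with orthonormal FFTs.
import numpy as np

rng = np.random.default_rng(7)
n = 64
target_amp = np.abs(rng.standard_normal((n, n)))   # desired image-plane amplitude
source_amp = np.ones((n, n))                       # known pupil-plane amplitude

phase = rng.uniform(0, 2 * np.pi, (n, n))          # random initial pupil phase
for _ in range(200):
    field = source_amp * np.exp(1j * phase)                  # pupil plane
    img = np.fft.fft2(field, norm="ortho")                   # to image plane
    img = target_amp * np.exp(1j * np.angle(img))            # impose image amplitude
    back = np.fft.ifft2(img, norm="ortho")                   # back to pupil plane
    phase = np.angle(back)                                   # impose pupil amplitude

final = np.abs(np.fft.fft2(source_amp * np.exp(1j * phase), norm="ortho"))
err = np.linalg.norm(final - target_amp) / np.linalg.norm(target_amp)
print(f"relative amplitude error: {err:.3f}")
```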

Read this paper on arXiv…

C. Zelenka and R. Koch
Tue, 4 Apr 17
52/75

Comments: To appear in the proceedings of the 23rd International Conference on Pattern Recognition (ICPR 2016)

PSF field learning based on Optimal Transport Distances [CL]

http://arxiv.org/abs/1703.06066


Context: in astronomy, observing large fractions of the sky within a reasonable amount of time implies using large field-of-view (FOV) optical instruments that typically have a spatially varying Point Spread Function (PSF). Depending on the scientific goals, galaxy images need to be corrected for the PSF, whereas no direct measurement of the PSF is available at their locations. Aims: given a set of PSFs observed at random locations, we want to estimate the PSFs at galaxy locations in order to correct shape measurements. Contributions: we propose an interpolation framework based on Sliced Optimal Transport. A non-linear dimension reduction is first performed based on local pairwise approximated Wasserstein distances. A low dimensional representation of the unknown PSFs is then estimated, which in turn is used to derive representations of those PSFs in the Wasserstein metric. Finally, the interpolated PSFs are calculated as approximated Wasserstein barycenters. Results: the proposed method was tested on simulated monochromatic PSFs of the Euclid space mission telescope (to be launched in 2020). It achieves a remarkable accuracy in terms of pixel values and shape compared to standard methods such as Inverse Distance Weighting or Radial Basis Function based interpolation methods.
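
A minimal numpy sketch of the sliced Wasserstein idea underlying the framework: project point clouds onto random directions and average the 1D Wasserstein distances between the sorted projections. This illustrates the metric only, not the paper's dimension reduction or barycenter steps; the point clouds are toy data.

```python
# Hedged sketch: sliced Wasserstein-2 distance between 2D point clouds.
import numpy as np

def sliced_wasserstein(X, Y, n_proj=200, seed=0):
    """Approximate SW2 between two equal-size 2D point clouds."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, np.pi, n_proj)
    dirs = np.c_[np.cos(theta), np.sin(theta)]      # unit projection axes
    d2 = 0.0
    for u in dirs:
        px, py = np.sort(X @ u), np.sort(Y @ u)     # sorted = 1D optimal transport
        d2 += np.mean((px - py) ** 2)
    return np.sqrt(d2 / n_proj)

rng = np.random.default_rng(8)
A = rng.normal(0.0, 1.0, (500, 2))       # e.g. pixel samples from one PSF
B = rng.normal(0.5, 1.2, (500, 2))       # ... and from a neighbouring PSF
print(f"SW2(A, B) = {sliced_wasserstein(A, B):.3f}")
```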

Read this paper on arXiv…

F. Mboula and J. Starck
Mon, 20 Mar 17
32/47

Comments: N/A

Deep-HiTS: Rotation Invariant Convolutional Neural Network for Transient Detection [IMA]

http://arxiv.org/abs/1701.00458


We introduce Deep-HiTS, a rotation invariant convolutional neural network (CNN) model for classifying images of transient candidates into artifacts or real sources for the High cadence Transient Survey (HiTS). CNNs have the advantage of learning the features automatically from the data while achieving high performance. We compare our CNN model against a feature engineering approach using random forests (RF). We show that our CNN significantly outperforms the RF model, reducing the error by almost half. Furthermore, for a fixed number of approximately 2,000 allowed false transient candidates per night, we are able to reduce the number of misclassified real transients by approximately 1/5. To the best of our knowledge, this is the first time CNNs have been used to detect astronomical transient events. Our approach will be very useful when processing images from next generation instruments such as the Large Synoptic Survey Telescope (LSST). We have made all our code and data available to the community for the sake of allowing further developments and comparisons at https://github.com/guille-c/Deep-HiTS.

Read this paper on arXiv…

G. Cabrera-Vives, I. Reyes, F. Forster, et. al.
Tue, 3 Jan 17
4/55

Comments: N/A

Astronomical image reconstruction with convolutional neural networks [CL]

http://arxiv.org/abs/1612.04526


State of the art methods in astronomical image reconstruction rely on the resolution of a regularized or constrained optimization problem. Solving this problem can be computationally intensive and usually leads to a quadratic or at least superlinear complexity w.r.t. the number of pixels in the image. We investigate in this work the use of convolutional neural networks for image reconstruction in astronomy. With neural networks, the computationally intensive task is the training step, but the prediction step has a fixed complexity per pixel, i.e. a linear complexity. Numerical experiments show that our approach is both computationally efficient and competitive with other state of the art methods, in addition to being interpretable.

Read this paper on arXiv…

R. Flamary
Thu, 15 Dec 16
33/59

Comments: N/A

Constraint matrix factorization for space variant PSFs field restoration [CL]

http://arxiv.org/abs/1608.08104


Context: in large-scale spatial surveys, the Point Spread Function (PSF) varies across the instrument field of view (FOV). Local measurements of the PSFs are given by the isolated star images. Yet, these estimates may not be directly usable for post-processing because of the observational noise and potentially the aliasing. Aims: given a set of aliased and noisy star images from a telescope, we want to estimate well-resolved and noise-free PSFs at the observed star positions, in particular by exploiting the spatial correlation of the PSFs across the FOV. Contributions: we introduce RCA (Resolved Components Analysis), which is a noise-robust dimension reduction and super-resolution method based on matrix factorization. We propose an original way of using the PSFs’ spatial correlation in the restoration process through sparsity. The introduced formalism can be applied to data sets correlated with respect to any Euclidean parametric space. Results: we tested our method on simulated monochromatic PSFs of the Euclid telescope (launch planned for 2020). The proposed method outperforms existing PSF restoration and dimension reduction methods. We show that a coupled sparsity constraint on individual PSFs and their spatial distribution yields a significant improvement on both the restored PSF shapes and the PSF subspace identification in the presence of aliasing. Perspectives: RCA can be naturally extended to account for the wavelength dependency of the PSFs.

Read this paper on arXiv…

F. Mboula, J. Starck, K. Okumura, et. al.
Tue, 30 Aug 16
69/78

Comments: 33 pages

Star-galaxy Classification Using Deep Convolutional Neural Networks [IMA]

http://arxiv.org/abs/1608.04369


Most existing star-galaxy classifiers use the reduced summary information from catalogs, requiring careful feature extraction and selection. The latest advances in machine learning that use deep convolutional neural networks allow a machine to automatically learn the features directly from data, minimizing the need for input from human experts. We present a star-galaxy classification framework that uses deep convolutional neural networks (ConvNets) directly on the reduced, calibrated pixel values. Using data from the Sloan Digital Sky Survey (SDSS) and the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), we demonstrate that ConvNets are able to produce accurate and well-calibrated probabilistic classifications that are competitive with conventional machine learning techniques. Future advances in deep learning may bring more success with current and forthcoming photometric surveys, such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST), because deep neural networks require very little manual feature engineering.

Read this paper on arXiv…

E. Kim and R. Brunner
Tue, 16 Aug 16
33/57

Comments: 13 page, 13 figures. Submitted to MNRAS. Code available at this https URL

WAHRSIS: A Low-cost, High-resolution Whole Sky Imager With Near-Infrared Capabilities [IMA]

http://arxiv.org/abs/1605.06595


Cloud imaging using ground-based whole sky imagers is essential for a fine-grained understanding of the effects of cloud formations, which can be useful in many applications. Some such imagers are available commercially, but their cost is relatively high, and their flexibility is limited. Therefore, we built a new daytime Whole Sky Imager (WSI) called Wide Angle High-Resolution Sky Imaging System. The strengths of our new design are its simplicity, low manufacturing cost and high resolution. Our imager captures the entire hemisphere in a single high-resolution picture via a digital camera using a fish-eye lens. The camera was modified to capture light across the visible as well as the near-infrared spectral ranges. This paper describes the design of the device as well as the geometric and radiometric calibration of the imaging system.

Read this paper on arXiv…

S. Dev, F. Savoy, Y. Lee, et. al.
Tue, 24 May 16
5/73

Comments: Proc. IS&T/SPIE Infrared Imaging Systems, 2014

A Selection of Giant Radio Sources from NVSS [GA]

http://arxiv.org/abs/1603.06895


Results of the application of pattern recognition techniques to the problem of identifying Giant Radio Sources (GRS) from the data in the NVSS catalog are presented and issues affecting the process are explored. Decision-tree pattern recognition software was applied to training set source pairs developed from known NVSS large angular size radio galaxies. The full training set consisted of 51,195 source pairs, 48 of which were known GRS for which each lobe was primarily represented by a single catalog component. The source pairs had a maximum separation of 20 arc minutes and a minimum component area of 1.87 square arc minutes at the 1.4 mJy level. The importance of comparing resulting probability distributions of the training and application sets for cases of unknown class ratio is demonstrated. The probability of correctly ranking a randomly selected (GRS, non-GRS) pair from the best of the tested classifiers was determined to be 97.8 +/- 1.5%. The best classifiers were applied to the over 870,000 candidate pairs from the entire catalog. Images of higher ranked sources were visually screened and a table of over sixteen hundred candidates, including morphological annotation, is presented. These systems include doubles and triples, Wide-Angle Tail (WAT) and Narrow-Angle Tail (NAT), S- or Z-shaped systems, and core-jets and resolved cores. While some resolved lobe systems are recovered with this technique, generally it is expected that such systems would require a different approach.
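
The quoted pair-ranking probability is exactly the area under the ROC curve (AUC); a minimal check with scikit-learn on toy classifier scores (the class counts loosely mirror the text, all score distributions are made up):

```python
# Hedged sketch: AUC = P(score of a random positive > score of a random negative).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)
y = np.r_[np.ones(48), np.zeros(2000)]             # rare positives, as in the text
scores = np.r_[rng.normal(2.0, 1.0, 48),           # toy classifier output for GRS
               rng.normal(0.0, 1.0, 2000)]         # ... and for non-GRS pairs
print(f"AUC = {roc_auc_score(y, scores):.3f}")
```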

Read this paper on arXiv…

D. Proctor
Wed, 23 Mar 16
34/73

Comments: 20 pages of text, 6 figures, 22 pages tables, total 55 pages. The stub for Table 6 is followed by the complete machine readable file. To be published in The Astrophysical Journal Supplement

Computational Imaging for VLBI Image Reconstruction [IMA]

http://arxiv.org/abs/1512.01413


Very long baseline interferometry (VLBI) is a technique for imaging celestial radio emissions by simultaneously observing a source from telescopes distributed across Earth. The challenges in reconstructing images from fine angular resolution VLBI data are immense. The data is extremely sparse and noisy, thus requiring statistical image models such as those designed in the computer vision community. In this paper we present a novel Bayesian approach for VLBI image reconstruction. While other methods require careful tuning and parameter selection for different types of images, our method is robust and produces good results under different settings such as low SNR or extended emissions. The success of our method is demonstrated on realistic synthetic experiments as well as publicly available real data. We present this problem in a way that is accessible to members of the computer vision community, and provide a dataset website (vlbiimaging.csail.mit.edu) to allow for controlled comparisons across algorithms. This dataset can foster development of new methods by making VLBI easily approachable to computer vision researchers.

Read this paper on arXiv…

K. Bouman, M. Johnson, D. Zoran, et. al.
Mon, 7 Dec 15
17/46

Comments: 10 pages, project website: this http URL

Distributed image reconstruction for very large arrays in radio astronomy [IMA]

http://arxiv.org/abs/1507.00501


Current and future radio interferometric arrays such as LOFAR and SKA are characterized by a paradox. Their large numbers of receptors (up to millions) theoretically allow unprecedentedly high imaging resolution. At the same time, the ultra-massive amounts of samples make the data transfer and computational loads (correlation and calibration) orders of magnitude too high to allow any currently existing image reconstruction algorithm to achieve, or even approach, the theoretical resolution. We investigate here decentralized and distributed image reconstruction strategies which select, transfer and process only a fraction of the total data. The loss in MSE incurred by the proposed approach is evaluated theoretically and numerically on simple test cases.

Read this paper on arXiv…

A. Ferrari, D. Mary, R. Flamary, et. al.
Fri, 3 Jul 15
36/50

Comments: Sensor Array and Multichannel Signal Processing Workshop (SAM), 2014 IEEE 8th, Jun 2014, Coruna, Spain. 2014

Machine learning based data mining for Milky Way filamentary structures reconstruction [IMA]

http://arxiv.org/abs/1505.06621


We present an innovative method called FilExSeC (Filaments Extraction, Selection and Classification), a data mining tool developed to investigate the possibility of refining and optimizing the shape reconstruction of filamentary structures detected with a consolidated method based on flux derivative analysis, through the column-density maps computed from Herschel infrared Galactic Plane Survey (Hi-GAL) observations of the Galactic plane. The methodology is based on a feature extraction module followed by a machine learning model (Random Forest) dedicated to selecting features and classifying the pixels of the input images. From tests on both simulations and real observations, the method appears reliable and robust with respect to the variability of the shape and distribution of filaments. In the case of highly defined filament structures, the presented method is able to bridge the gaps among the detected fragments, thus improving their shape reconstruction. From a preliminary “a posteriori” analysis of the derived filament physical parameters, the method appears potentially able to contribute to completing and refining the filament reconstruction.

Read this paper on arXiv…

G. Riccio, S. Cavuoti, E. Schisano, et. al.
Tue, 26 May 15
1/67

Comments: Accepted by peer reviewed WIRN 2015 Conference, to appear on Smart Innovation, Systems and Technology, Springer, ISSN 2190-3018, 9 pages, 4 figures

A Sparse Gaussian Process Framework for Photometric Redshift Estimation [IMA]

http://arxiv.org/abs/1505.05489


Accurate photometric redshifts are a linchpin for many future experiments to pin down the cosmological model and for studies of galaxy evolution. In this study, a novel sparse regression framework for photometric redshift estimation is presented. Data from a simulated survey were used to train and test the proposed models. We show that approaches which include careful data preparation and model design offer a significant improvement in comparison with several competing machine learning algorithms. Standard implementations of most regression algorithms have as their objective the minimization of the sum of squared errors. For redshift inference, however, this induces a bias in the posterior mean of the output distribution, which can be problematic. In this paper we optimize to directly target minimizing $\Delta z = (z_\textrm{s} - z_\textrm{p})/(1+z_\textrm{s})$ and address the bias problem via a distribution-based weighting scheme, incorporated as part of the optimization objective. The results are compared with other machine learning algorithms in the field such as Artificial Neural Networks (ANN), Gaussian Processes (GPs) and sparse GPs. The proposed framework reaches a mean absolute $\Delta z = 0.002(1+z_\textrm{s})$, with a maximum absolute error of 0.0432, over the redshift range of $0.2 \le z_\textrm{s} \le 2$, a factor of three improvement over standard ANNs used in the literature. We also investigate how the relative size of the training set affects the photometric redshift accuracy. We find that a training set of $>$30 per cent of the total sample size provides little additional constraint on the photometric redshifts, and note that our GP formalism strongly outperforms ANN in the sparse data regime.
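
The error measure being optimized is easy to state in code; a minimal sketch with a toy estimator (the scatter model below is an assumption, not the paper's):

```python
# Hedged sketch: the normalized redshift error Delta z = (z_s - z_p)/(1 + z_s).
import numpy as np

def delta_z(z_spec, z_phot):
    return (np.asarray(z_spec) - np.asarray(z_phot)) / (1 + np.asarray(z_spec))

rng = np.random.default_rng(10)
z_s = rng.uniform(0.2, 2.0, 10000)
z_p = z_s + rng.normal(0, 0.004 * (1 + z_s))   # toy estimator, z-dependent scatter
dz = delta_z(z_s, z_p)
print(f"mean |dz| = {np.mean(np.abs(dz)):.4f}, max |dz| = {np.max(np.abs(dz)):.4f}")
```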

Read this paper on arXiv…

I. Almosallam, S. Lindsay, M. Jarvis, et. al.
Thu, 21 May 15
15/59

Comments: N/A

Meta learning of bounds on the Bayes classifier error [CL]

http://arxiv.org/abs/1504.07116


Meta learning uses information from base learners (e.g. classifiers or estimators) as well as information about the learning problem to improve upon the performance of a single base learner. For example, the Bayes error rate of a given feature space, if known, can be used to aid in choosing a classifier, as well as in feature selection and model selection for the base classifiers and the meta classifier. Recent work in the field of f-divergence functional estimation has led to the development of simple and rapidly converging estimators that can be used to estimate various bounds on the Bayes error. We estimate multiple bounds on the Bayes error using an estimator that applies meta learning to slowly converging plug-in estimators to obtain the parametric convergence rate. We compare the estimated bounds empirically on simulated data and then estimate the tighter bounds on features extracted from an image patch analysis of sunspot continuum and magnetogram images.

Read this paper on arXiv…

K. Moon, V. Delouille and A. Hero
Tue, 28 Apr 15
36/70

Comments: 6 pages, 3 figures

A spectral optical flow method for determining velocities from digital imagery [CL]

http://arxiv.org/abs/1504.04660


We present a method for determining surface flows from solar images based upon optical flow techniques. We apply the method to sets of images obtained by a variety of solar imagers to assess its performance. The {\tt opflow3d} procedure is shown to extract accurate velocity estimates when provided with perfect test data and quickly generates results consistent with completely distinct methods when applied on global scales. We also validate it in detail by comparing it to an established method when applied to high-resolution datasets and find that it provides comparable results without the need to tune, filter or otherwise preprocess the images before its application.
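
The {\tt opflow3d} code itself is not reproduced here, but a generic spectral displacement estimate between two frames, via Fourier phase correlation, conveys the flavour of such methods; dividing the recovered shift by the frame cadence yields a velocity:

```python
# Generic spectral displacement estimate between two frames via phase
# correlation (illustrative; not the opflow3d code itself).
import numpy as np

def phase_correlation_shift(a, b):
    """Return the integer (dy, dx) shift that best aligns frames a and b."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                      # keep phase only
    corr = np.abs(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks beyond the half-size back to negative shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx
```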

Read this paper on arXiv…

N. Hurlburt and S. Jaffey
Tue, 21 Apr 15
57/69

Comments: 12 pages, 5 figures. Submitted to Earth Science Informatics

Image patch analysis of sunspots and active regions. II. Clustering via dictionary learning [SSA]

http://arxiv.org/abs/1504.02762


Separating active regions that are quiet from potentially eruptive ones is a key issue in Space Weather applications. Traditional classification schemes such as Mount Wilson and McIntosh have been effective in relating an active region's large-scale magnetic configuration to its ability to produce eruptive events. However, their qualitative nature prevents, for example, systematic studies of an active region's evolution. We introduce a new clustering of active regions that is based on the local geometry observed in line-of-sight magnetogram and continuum images. We use a reduced-dimension representation of an active region that is obtained by factoring (i.e. applying dictionary learning to) the corresponding data matrix comprised of local image patches. Two factorizations can be compared via the definition of appropriate metrics on the resulting factors. The distances obtained from these metrics are then used to cluster the active regions. We find that these metrics result in natural clusterings of active regions. The clusterings are related to large-scale descriptors of an active region such as its size, its local magnetic field distribution, and its complexity as measured by the Mount Wilson classification scheme. We also find that including data focused on the neutral line of an active region can increase the correspondence between the Mount Wilson classifications and our clustering results. We provide some recommendations for which metrics and matrix factorization techniques to use to study small, large, complex, or simple active regions.
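
A minimal sketch of the patch-factorization step, using scikit-learn's dictionary learner rather than the authors' code (the image, patch size and dictionary size are placeholders):

```python
# Illustrative patch-based factorization: learn a dictionary from local
# image patches and represent each patch by its sparse code.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning

magnetogram = np.random.rand(256, 256)  # placeholder for a LOS magnetogram
patches = extract_patches_2d(magnetogram, (8, 8), max_patches=5000,
                             random_state=0)
P = patches.reshape(len(patches), -1)
P = P - P.mean(axis=1, keepdims=True)   # remove each patch's local mean

dico = MiniBatchDictionaryLearning(n_components=64, alpha=1.0, random_state=0)
codes = dico.fit_transform(P)           # sparse codes; dico.components_ holds the dictionary
```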

Read this paper on arXiv…

K. Moon, V. Delouille, J. Li, et. al.
Mon, 13 Apr 15
16/54

Comments: 31 pages, 15 figures

Linearly Supporting Feature Extraction For Automated Estimation Of Stellar Atmospheric Parameters [SSA]

http://arxiv.org/abs/1504.02164


We describe a scheme to extract linearly supporting (LSU) features from stellar spectra to automatically estimate the atmospheric parameters $T_{eff}$, log$~g$, and [Fe/H]. “Linearly supporting” means that the atmospheric parameters can be accurately estimated from the extracted features through a linear model. The successive steps of the process are as follows: first, decompose the spectrum using a wavelet packet (WP) and represent it by the derived decomposition coefficients; second, detect representative spectral features from the decomposition coefficients using the proposed LARS$_{bs}$ method, built on Least Angle Regression (LARS); third, estimate the atmospheric parameters $T_{eff}$, log$~g$, and [Fe/H] from the detected features using a linear regression method. One prominent characteristic of this scheme is its ability to evaluate quantitatively the contribution of each detected feature to the atmospheric parameter estimate and also to trace back the physical significance of that feature. This work also shows that the usefulness of a component depends on both wavelength and frequency. The proposed scheme has been evaluated on both real spectra from the Sloan Digital Sky Survey (SDSS)/SEGUE and synthetic spectra calculated from Kurucz’s NEWODF models. On real spectra, we extracted 23 features to estimate $T_{eff}$, 62 features for log$~g$, and 68 features for [Fe/H]. Test consistencies between our estimates and those provided by the Spectroscopic Parameter Pipeline of SDSS show that the mean absolute errors (MAEs) are 0.0062 dex for log$~T_{eff}$ (83 K for $T_{eff}$), 0.2345 dex for log$~g$, and 0.1564 dex for [Fe/H]. For the synthetic spectra, the MAE test accuracies are 0.0022 dex for log$~T_{eff}$ (32 K for $T_{eff}$), 0.0337 dex for log$~g$, and 0.0268 dex for [Fe/H].
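
A minimal sketch of the three-step scheme under stated assumptions (PyWavelets for the wavelet packet, scikit-learn's LARS-based LASSO standing in for the feature-detection step; the spectra and labels are synthetic placeholders):

```python
# Illustrative three-step pipeline: wavelet-packet coefficients, sparse
# feature detection, then a linear model for one parameter (here [Fe/H]).
import numpy as np
import pywt
from sklearn.linear_model import LassoLars, LinearRegression

def wp_coeffs(spectrum, wavelet="db4", level=4):
    wp = pywt.WaveletPacket(data=spectrum, wavelet=wavelet, maxlevel=level)
    return np.concatenate([wp[node.path].data
                           for node in wp.get_level(level, "natural")])

rng = np.random.default_rng(0)
spectra = rng.normal(size=(200, 1024))   # placeholder flux vectors
feh = rng.normal(-0.5, 0.5, size=200)    # placeholder [Fe/H] labels

X = np.vstack([wp_coeffs(s) for s in spectra])
sel = LassoLars(alpha=0.01).fit(X, feh)
features = np.flatnonzero(sel.coef_)     # indices of the detected features
model = LinearRegression().fit(X[:, features], feh)
```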

Read this paper on arXiv…

X. Li, Y. Lu, G. Comte, et. al.
Fri, 10 Apr 15
68/68

Comments: 21 pages, 7 figures, 8 tables, The Astrophysical Journal Supplement Series (accepted for publication)

Rotation-invariant convolutional neural networks for galaxy morphology prediction [IMA]

http://arxiv.org/abs/1503.07077


Measuring the morphological parameters of galaxies is a key requirement for studying their formation and evolution. Surveys such as the Sloan Digital Sky Survey (SDSS) have resulted in the availability of very large collections of images, which have permitted population-wide analyses of galaxy morphology. Morphological analysis has traditionally been carried out mostly via visual inspection by trained experts, which is time-consuming and does not scale to large ($\gtrsim10^4$) numbers of images.
Although attempts have been made to build automated classification systems, these have not been able to achieve the desired level of accuracy. The Galaxy Zoo project successfully applied a crowdsourcing strategy, inviting online users to classify images by answering a series of questions. Unfortunately, even this approach does not scale well enough to keep up with the increasing availability of galaxy images.
We present a deep neural network model for galaxy morphology classification which exploits translational and rotational symmetry. It was developed in the context of the Galaxy Challenge, an international competition to build the best model for morphology classification based on annotated images from the Galaxy Zoo project.
For images with high agreement among the Galaxy Zoo participants, our model is able to reproduce their consensus with near-perfect accuracy ($> 99\%$) for most questions. Confident model predictions are highly accurate, which makes the model suitable for filtering large collections of images and forwarding challenging images to experts for manual annotation. This approach greatly reduces the experts’ workload without affecting accuracy. The application of these algorithms to larger sets of training data will be critical for analysing results from future surveys such as the LSST.
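
The competition model itself is more involved, but one cheap way to exploit rotational symmetry, averaging a classifier's predictions over rotated and mirrored views, can be sketched as follows (`model` stands for any image classifier with a predict() method, e.g. a Keras model; this is not the authors' exact scheme, which builds the symmetry into the network):

```python
# Illustrative symmetry-averaged prediction over the dihedral views of an image.
import numpy as np

def symmetric_predict(model, image):
    views = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        rot = np.rot90(image, k)
        views.extend([rot, np.fliplr(rot)])  # plus mirror images
    preds = [model.predict(v[None, ...]) for v in views]
    return np.mean(preds, axis=0)            # symmetry-averaged prediction
```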

Read this paper on arXiv…

S. Dieleman, K. Willett and J. Dambre
Wed, 25 Mar 15
30/38

Comments: Accepted for publication in MNRAS. 20 pages, 14 figures

Towards radio astronomical imaging using an arbitrary basis [IMA]

http://arxiv.org/abs/1503.04338


The new generation of radio telescopes, such as the Square Kilometer Array (SKA), requires dramatic advances in computer hardware and software in order to process the large amounts of data produced efficiently. In this document, we explore a new approach to wide-field imaging. By generalizing the image reconstruction, which is conventionally performed by an inverse Fourier transform, to arbitrary transformations, we gain enormous new possibilities. In particular, we outline an approach that might allow one to obtain a sky image of size $P \times Q$ in (optimal) $O(PQ)$ time. This could be a step in the direction of real-time, wide-field sky imaging for future telescopes.

Read this paper on arXiv…

M. Petschow
Tue, 17 Mar 15
8/79

Comments: N/A

DESAT: an SSW tool for SDO/AIA image de-saturation [IMA]

http://arxiv.org/abs/1503.02302


Saturation affects a significant fraction of images recorded by the Atmospheric Imaging Assembly on the Solar Dynamics Observatory. This paper describes a computational method and a technological pipeline for the de-saturation of such images, based on several mathematical ingredients such as Expectation Maximization, image correlation and interpolation. An analysis of the computational properties and demands of the pipeline, together with an assessment of its reliability, is performed against a set of data recorded during the February 25 2014 flaring event.

Read this paper on arXiv…

R. Schwartz, G. Torre, A. Massone, et. al.
Tue, 10 Mar 15
3/77

Comments: N/A

Montblanc: GPU accelerated Radio Interferometer Measurement Equations in support of Bayesian Inference for Radio Observations [CL]

http://arxiv.org/abs/1501.07719


We present Montblanc, a GPU implementation of the Radio interferometer measurement equation (RIME) in support of the Bayesian inference for radio observations (BIRO) technique. BIRO uses Bayesian inference to select sky models that best match the visibilities observed by a radio interferometer. To accomplish this, BIRO evaluates the RIME multiple times, varying sky model parameters to produce multiple model visibilities. Chi-squared values computed from the model and observed visibilities are used as likelihood values to drive the Bayesian sampling process and select the best sky model.
As most of the elements of the RIME and chi-squared calculation are independent of one another, they are highly amenable to parallel computation. Additionally, Montblanc caters for iterative RIME evaluation to produce multiple chi-squared values. Only modified model parameters are transferred to the GPU between each iteration.
We implemented Montblanc as a Python package based upon NVIDIA’s CUDA architecture. As such, it is easy to extend and implement different pipelines. At present, Montblanc supports point and Gaussian morphologies, but is designed for easy addition of new source profiles. Montblanc’s RIME implementation is performant: on an NVIDIA K40, it is approximately 250 times faster than MeqTrees on a dual hexacore Intel E5-2620 v2 CPU. Compared to the OSKAR simulator’s GPU-implemented RIME components, it is 7.7 and 12 times faster on the same K40 for single and double precision floating point respectively. However, OSKAR’s RIME implementation is more general than Montblanc’s BIRO-tailored RIME.
Theoretical analysis of Montblanc’s dominant CUDA kernel suggests that it is memory bound. In practice, profiling shows that it is balanced between compute and memory, as much of the data required by the problem is retained in L1 and L2 cache.
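
The chi-squared at the heart of the likelihood is simple to state; a NumPy sketch (illustrative only, not Montblanc's CUDA kernels) over complex visibilities with per-visibility weights:

```python
# Illustrative chi-squared between model and observed complex visibilities.
import numpy as np

def chi_squared(model_vis, observed_vis, weights):
    """model_vis, observed_vis: complex arrays; weights: 1/sigma^2 per visibility."""
    residual = observed_vis - model_vis
    return np.sum(weights * (residual.real**2 + residual.imag**2))
```

Because each visibility's contribution is independent, the sum parallelizes trivially, which is the property the GPU implementation exploits.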

Read this paper on arXiv…

S. Perkins, P. Marais, J. Zwart, et. al.
Mon, 2 Feb 15
21/49

Comments: Submitted to Astronomy and Computing (this http URL). The code is available online at this https URL 26 pages long, with 13 figures, 6 tables and 3 algorithms

Non-parametric PSF estimation from celestial transit solar images using blind deconvolution [CL]

http://arxiv.org/abs/1412.6279


Context: Characterization of instrumental effects in astronomical imaging is important in order to extract accurate physical information from the observations. Optics are never perfect, and the non-ideal path through the telescope is usually represented by the convolution of an ideal image with a Point Spread Function (PSF). Other sources of noise (read-out, photon) also contaminate the image acquisition process. The problem of estimating both the PSF filter and a denoised image is called blind deconvolution and is ill-posed.
Aims: We propose a blind deconvolution scheme that relies on image regularization. Contrary to most methods presented in the literature, it does not assume a parametric model of the PSF and can thus be applied to any telescope.
Methods: Our scheme uses a wavelet analysis prior model on the image and weak assumptions on the PSF filter’s response. We use observations from a celestial body transit, where the object can be assumed to be a black disk. Such constraints limit the interchangeability between the filter and the image in the blind deconvolution problem.
Results: Our method is applied to synthetic and experimental data. We compute the PSF of the SECCHI/EUVI instrument using the 2007 lunar transit, and of SDO/AIA using the 2012 Venus transit. Results show that the proposed non-parametric blind deconvolution method is able to estimate the core of the PSF with similar quality to parametric methods proposed in the literature. We also show that, if these parametric estimates are incorporated in the acquisition model, the resulting PSF outperforms both the parametric and non-parametric methods.
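
The regularized scheme with wavelet priors and transit constraints is not reproduced here; as a baseline for comparison, a bare-bones alternating (blind) Richardson-Lucy iteration of the kind such methods improve upon looks like this:

```python
# Bare-bones alternating blind Richardson-Lucy deconvolution (illustrative
# baseline only; the paper's method adds image regularization and transit
# constraints on top of a more careful scheme).
import numpy as np
from scipy.signal import fftconvolve

def blind_rl(y, n_iter=20):
    """y: observed image (non-negative float array). Returns (image, psf)."""
    x = np.full_like(y, y.mean())      # image estimate
    h = np.ones_like(y) / y.size       # PSF estimate on the same grid
    for _ in range(n_iter):
        # image update with current PSF
        ratio = y / (fftconvolve(x, h, mode="same") + 1e-12)
        x = x * fftconvolve(ratio, h[::-1, ::-1], mode="same")
        # PSF update with current image
        ratio = y / (fftconvolve(x, h, mode="same") + 1e-12)
        h = h * fftconvolve(ratio, x[::-1, ::-1], mode="same")
        h = np.clip(h, 0, None)
        h /= h.sum()                   # keep PSF non-negative with unit flux
    return x, h
```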

Read this paper on arXiv…

A. Gonzalez, V. Delouille and L. Jacques
Mon, 19 Jan 15
4/50

Comments: 19 pages, 23 figures

Spectral classification using convolutional neural networks [CL]

http://arxiv.org/abs/1412.8341


There is a great need for accurate and autonomous spectral classification methods in astrophysics. This thesis is about training a convolutional neural network (ConvNet) to recognize an object class (quasar, star or galaxy) from one-dimensional spectra only. The author developed several scripts and C programs for dataset preparation, preprocessing and postprocessing of the data. The EBLearn library (developed by Pierre Sermanet and Yann LeCun) was used to create the ConvNets. Application to a dataset of more than 60000 spectra yielded a success rate of nearly 95%. The thesis demonstrates the great potential of convolutional neural networks and deep learning methods in astrophysics.
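
A minimal modern equivalent of such a network (illustrative, in Keras rather than EBLearn; the input length and layer sizes are placeholders) might look like:

```python
# Minimal 1D ConvNet for 3-class spectral classification (illustrative).
# Input: flux vectors of length 4000; output: P(quasar), P(star), P(galaxy).
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(4000, 1)),
    layers.Conv1D(16, 11, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 11, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```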

Read this paper on arXiv…

P. Hala
Tue, 30 Dec 14
81/83

Comments: 71 pages, 50 figures, Master’s thesis, Masaryk University

High-level numerical simulations of noise in CCD and CMOS photosensors: review and tutorial [IMA]

http://arxiv.org/abs/1412.4031


In many applications, such as development and testing of image processing algorithms, it is often necessary to simulate images containing realistic noise from solid-state photosensors. A high-level model of CCD and CMOS photosensors based on a literature review is formulated in this paper. The model includes photo-response non-uniformity, photon shot noise, dark current Fixed Pattern Noise, dark current shot noise, offset Fixed Pattern Noise, source follower noise, sense node reset noise, and quantisation noise. The model also includes voltage-to-voltage, voltage-to-electrons, and analogue-to-digital converter non-linearities. The formulated model can be used to create synthetic images for testing and validation of image processing algorithms in the presence of realistic image noise. An example of a simulated CMOS photosensor and a comparison with a custom-made CMOS hardware sensor are presented. Procedures for characterising both light and dark noise are described. Experimental results that confirm the validity of the numerical model are provided. The paper addresses the lack of comprehensive high-level photosensor models that enable engineers to simulate realistic noise effects on images obtained from solid-state photosensors.
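
A few terms of such a model are easy to sketch; the following illustrative subset (not the paper's full model, and with made-up parameter values) applies PRNU, shot noise, dark current, read noise and quantisation to an ideal signal:

```python
# Illustrative subset of a photosensor noise model (placeholder parameters).
import numpy as np

rng = np.random.default_rng(0)
signal_e = np.full((128, 128), 500.0)                  # ideal signal, electrons/pixel
prnu = 1.0 + 0.01 * rng.normal(size=signal_e.shape)    # ~1% photo-response non-uniformity
electrons = rng.poisson(signal_e * prnu)               # photon shot noise
electrons = electrons + rng.poisson(20.0, size=electrons.shape)     # dark current (20 e-)
electrons = electrons + rng.normal(0.0, 5.0, size=electrons.shape)  # read noise (5 e- rms)
adu = np.clip(np.round(electrons / 2.0), 0, 4095)      # 2 e-/ADU gain, 12-bit quantisation
```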

Read this paper on arXiv…

M. Konnik and J. Welsh
Mon, 15 Dec 14
45/53

Comments: N/A

Super-resolution method using sparse regularization for point-spread function recovery [CL]

http://arxiv.org/abs/1410.7679


In large-scale spatial surveys, such as the forthcoming ESA Euclid mission, images may be undersampled due to the optical sensors’ sizes. Therefore, one may consider using a super-resolution (SR) method to recover aliased frequencies prior to further analysis. This is particularly relevant for point-source images, which provide direct measurements of the instrument point-spread function (PSF). We introduce SPRITE (SParse Recovery of InsTrumental rEsponse), an SR algorithm using a sparse analysis prior. We show that such a prior provides significant improvements over existing methods, especially on low-SNR PSFs.

Read this paper on arXiv…

F. Mboula, J. Starck, S. Ronayette, et. al.
Wed, 29 Oct 14
60/81

Comments: N/A

Combining human and machine learning for morphological analysis of galaxy images [IMA]

http://arxiv.org/abs/1409.7935


The increasing importance of digital sky surveys collecting many millions of galaxy images has reinforced the need for robust methods that can perform morphological analysis of large galaxy image databases. Citizen science initiatives such as Galaxy Zoo showed that large datasets of galaxy images can be analyzed effectively by non-scientist volunteers, but since databases generated by robotic telescopes grow much faster than the processing power of any group of citizen scientists, it is clear that computer analysis is required. Here we propose to use citizen science data for training machine learning systems, and show experimental results demonstrating that machine learning systems can be trained with citizen science data. Our findings show that the performance of machine learning depends on the quality of the data, which can be improved by using samples that have a high degree of agreement between the citizen scientists. The source code of the method is publicly available.

Read this paper on arXiv…

E. Kuminski, J. George, J. Wallin, et. al.
Tue, 30 Sep 14
6/81

Comments: PASP, accepted

Machine Learning Classification of SDSS Transient Survey Images [IMA]

http://arxiv.org/abs/1407.4118


We show that multiple machine learning algorithms can match human performance in classifying transient imaging data from the SDSS supernova survey into real objects and artefacts. This is the first step in any transient science pipeline and is currently still done by humans, but future surveys such as LSST will necessitate fully machine-enabled solutions. Using features trained from eigenimage analysis (PCA) of single-epoch g, r, i-difference images we can reach a completeness (recall) of 95%, while only incorrectly classifying 18% of artefacts as real objects, corresponding to a precision (purity) of 85%. In general the k-nearest neighbour and the SkyNet artificial neural net algorithms performed most robustly compared to other methods such as naive Bayes and kernel SVM. Our results show that PCA-based machine learning can match human success levels and can naturally be extended by including multiple epochs of data, transient colours and host galaxy information which should allow for significant further improvements, especially at low signal to noise.
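
The feature-plus-classifier setup described above is easy to sketch with scikit-learn (illustrative; cutout size, component count and k are placeholders, not the paper's tuned values):

```python
# Illustrative PCA (eigenimage) features feeding a k-nearest-neighbour
# classifier for real/artefact separation (placeholder data).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
cutouts = rng.normal(size=(1000, 21 * 21))  # flattened difference-image cutouts
labels = rng.integers(0, 2, size=1000)      # 1 = real transient, 0 = artefact

clf = make_pipeline(PCA(n_components=25), KNeighborsClassifier(n_neighbors=5))
clf.fit(cutouts, labels)
```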

Read this paper on arXiv…

L. Buisson, N. Sivanandam, B. Bassett, et. al.
Thu, 17 Jul 14
5/66

Comments: 11 pages, 8 figures

PAINTER: a spatio-spectral image reconstruction algorithm for optical interferometry [IMA]

http://arxiv.org/abs/1407.1885


Astronomical optical interferometers sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid perturbations caused by atmospheric turbulence, the phases of the complex Fourier samples (visibilities) cannot be directly exploited. Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic optical interferometric instruments are now paving the way to multiwavelength imaging. This paper is devoted to the derivation of a spatio-spectral (3D) image reconstruction algorithm, coined PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm relies on an iterative process, which alternates estimation of polychromatic images and of complex visibilities. The complex visibilities are estimated not only from squared moduli and closure phases, but also from differential phases, which help to better constrain the polychromatic reconstruction. Simulations on synthetic data illustrate the efficiency of the algorithm and in particular the relevance of injecting a differential-phase model into the reconstruction.

Read this paper on arXiv…

A. Schutz, A. Ferrari, D. Mary, et. al.
Wed, 9 Jul 14
64/74

Comments: 12 pages, 10 figures

Towards building a Crowd-Sourced Sky Map [CL]

http://arxiv.org/abs/1406.1528


We describe a system that builds a high dynamic-range and wide-angle image of the night sky by combining a large set of input images. The method makes use of pixel-rank information in the individual input images to improve a “consensus” pixel rank in the combined image. Because it only makes use of ranks and the complexity of the algorithm is linear in the number of images, the method is useful for large sets of uncalibrated images that might have undergone unknown non-linear tone mapping transformations for visualization or aesthetic reasons. We apply the method to images of the night sky (of unknown provenance) discovered on the Web. The method permits discovery of astronomical objects or features that are not visible in any of the input images taken individually. More importantly, however, it permits scientific exploitation of a huge source of astronomical images that would not be available to astronomical research without our automatic system.
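
A minimal sketch of a rank-based pixel combination (the paper's consensus algorithm is more involved): replace each registered frame by its pixel ranks, then average the ranks, which is invariant to any per-frame monotone tone mapping:

```python
# Illustrative rank-based consensus over a stack of registered images.
import numpy as np
from scipy.stats import rankdata

def consensus_rank(images):
    """images: stack of registered frames, shape (n_images, H, W)."""
    ranks = np.stack([rankdata(im, method="average").reshape(im.shape)
                      for im in images])
    return ranks.mean(axis=0)   # consensus pixel rank map
```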

Read this paper on arXiv…

D. Lang, D. Hogg and B. Scholkopf
Mon, 9 Jun 14
19/40

Comments: Appeared at AI-STATS 2014

Sparsity averaging for radio-interferometric imaging [IMA]

http://arxiv.org/abs/1402.2335


We propose a novel regularization method for compressive imaging in the context of the compressed sensing (CS) theory with coherent and redundant dictionaries. Natural images are often complicated and several types of structures can be present at once. It is well known that piecewise smooth images exhibit gradient sparsity, and that images with extended structures are better encapsulated in wavelet frames. Therefore, we here conjecture that promoting average sparsity or compressibility over multiple frames rather than single frames is an extremely powerful regularization prior.
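
In optimization form, the average-sparsity prior amounts to an analysis-type problem of roughly the following shape (our paraphrase, with $\Phi$ the measurement operator and $\Psi$ a concatenation of frames $\Psi_1, \ldots, \Psi_q$):

```latex
% Average-sparsity analysis prior over a concatenation of frames
% \Psi = (1/\sqrt{q}) [\Psi_1, \ldots, \Psi_q] (our paraphrase):
\min_{x \ge 0} \; \| \Psi^\dagger x \|_1
\quad \text{subject to} \quad \| y - \Phi x \|_2 \le \epsilon .
```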

Read this paper on arXiv…

R. Carrillo, J. McEwen and Y. Wiaux
Wed, 12 Feb 14
34/67

Reconstruction of Complex-Valued Fractional Brownian Motion Fields Based on Compressive Sampling and Its Application to PSF Interpolation in Weak Lensing Survey [CL]

http://arxiv.org/abs/1311.0124


A new reconstruction method for complex-valued fractional Brownian motion (CV-fBm) fields based on Compressive Sampling (CS) is proposed. The decay of the Fourier coefficient magnitudes of fBm signals/fields indicates that fBms are compressible, so a small number of samples is sufficient for a CS-based method to reconstruct the full field. The effectiveness of the proposed method is shown by simulating, randomly sampling, and reconstructing CV-fBm fields. Performance evaluation shows advantages of the proposed method over boxcar filtering and thin plate methods. It is also found that the reconstruction performance depends on both the fBm’s Hurst parameter and the number of samples, which is consistent with CS reconstruction theory. In contrast to other fBm or fractal interpolation methods, the proposed CS-based method does not require knowledge of the fractal parameters in the reconstruction process; the inherent sparsity is sufficient for the CS reconstruction. Potential applicability of the proposed method in weak gravitational lensing surveys, particularly for interpolating a non-smooth PSF (Point Spread Function) distribution representing distortion by a turbulent field, is also discussed.
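
A generic sketch of such a reconstruction, assuming Fourier-domain compressibility and using plain iterative soft thresholding (not the paper's exact solver; the threshold `lam` must be tuned to the data scale):

```python
# Illustrative CS reconstruction of a field observed at random pixels,
# by iterative soft thresholding of its Fourier spectrum.
import numpy as np

def ist_reconstruct(samples, mask, lam=0.05, n_iter=200):
    """samples: observed values at mask==True pixels; mask: boolean grid."""
    x = np.zeros(mask.shape, dtype=complex)
    x[mask] = samples
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        mag = np.abs(X)
        X = X / (mag + 1e-12) * np.maximum(mag - lam, 0.0)  # complex soft threshold
        x = np.fft.ifft2(X)
        x[mask] = samples        # re-impose consistency with the measurements
    return x
```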

Read this paper on arXiv…

Tue, 5 Nov 13
29/73

A Parallel Compressive Imaging Architecture for One-Shot Acquisition [CL]

http://arxiv.org/abs/1311.0646


A limitation of many compressive imaging architectures lies in the sequential nature of the sensing process, which leads to long sensing times. In this paper we present a novel architecture that uses fewer detectors than the number of reconstructed pixels and is able to acquire the image in a single acquisition. This paves the way for the development of video architectures that acquire several frames per second. We specifically address the diffraction problem, showing that the deconvolution normally used to recover the diffraction blur can be replaced by convolution of the sensing matrix, and how measurements made with a 0/1 physical sensing matrix can be converted to those of a -1/1 compressive sensing matrix without any extra acquisitions. Simulations of our architecture show that the image quality is comparable to that of a classic compressive imaging camera, whereas the proposed architecture avoids the long acquisition times due to sequential sensing. The one-shot procedure also makes it possible to employ a fixed sensing matrix instead of a complex device such as a Digital Micro Mirror array or Spatial Light Modulator, and enables imaging at bandwidths where these are not efficient.
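
The 0/1 to -1/1 conversion mentioned above rests on a simple identity (our rendering; the paper supplies the details of obtaining the total intensity without extra acquisitions):

```latex
% With B a 0/1 sensing matrix and A = 2B - J its -1/1 counterpart
% (J the all-ones matrix), the +-1 measurements follow from the 0/1 ones
% given the total image intensity \mathbf{1}^\top x:
A x = (2B - J)\,x = 2\,Bx - (\mathbf{1}^\top x)\,\mathbf{1}.
```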

Read this paper on arXiv…

Tue, 5 Nov 13
68/73

Feature Selection Strategies for Classifying High Dimensional Astronomical Data Sets


The amount of data collected in many scientific fields is increasing, all of them requiring a common task: to extract knowledge from massive, multi-parametric data sets as rapidly and efficiently as possible. This is especially true in astronomy, where synoptic sky surveys are enabling new research frontiers in time-domain astronomy and posing several new object classification challenges in multi-dimensional spaces; given the high number of parameters available for each object, feature selection is quickly becoming a crucial task in analyzing astronomical data sets. Using data sets extracted from the ongoing Catalina Real-Time Transient Survey (CRTS) and the Kepler Mission, we illustrate a variety of feature selection strategies used to identify the subsets that give the most information and the results achieved by applying these techniques to three major astronomical problems.

Read this paper on arXiv…

Date added: Wed, 9 Oct 13

Singular Value Decomposition of Images from Scanned Photographic Plates


We approximate the m×n image A from scanned astronomical photographic plates (from the Sofia Sky Archive Data Center) using far fewer entries than in the original matrix. By truncating the singular value decomposition (SVD) to rank k, with k<m or k<n, we remove redundant information or noise, in the manner of a Wiener filter. With this approximation, a compression ratio of more than 98% is obtained for an astronomical plate image without losing image detail. The SVD of images from scanned photographic plates (SPP) and its use for image compression are considered.
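
The rank-k approximation itself is a one-liner; a sketch (storing U[:, :k], s[:k] and Vt[:k] requires k(m+n+1) numbers instead of mn):

```python
# Best rank-k approximation of an image matrix via truncated SVD.
import numpy as np

def svd_compress(A, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] * s[:k] @ Vt[:k, :]   # scale columns of U, then project back
```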

Read this paper on arXiv…

Date added: Tue, 8 Oct 13