A Conditional Denoising Diffusion Probabilistic Model for Radio Interferometric Image Reconstruction [IMA]

http://arxiv.org/abs/2305.09121


In radio astronomy, signals from radio telescopes are transformed into images of observed celestial objects, or sources. However, these images, called dirty images, contain real sources as well as artifacts due to signal sparsity and other factors. Therefore, radio interferometric image reconstruction is performed on dirty images, aiming to produce clean images in which artifacts are reduced and real sources are recovered. So far, existing methods have had limited success in recovering faint sources, preserving detailed structures, and eliminating artifacts. In this paper, we present VIC-DDPM, a Visibility and Image Conditioned Denoising Diffusion Probabilistic Model. Our main idea is to use both the original visibility data in the spectral domain and the dirty images in the spatial domain to guide the image generation process with DDPM. This way, we can leverage DDPM to generate fine details and eliminate noise, while utilizing visibility data to separate signals from noise and retaining the spatial information in dirty images. We have conducted experiments comparing our method with both traditional methods and recent deep learning based approaches. Our results show that our method significantly improves the resulting images by reducing artifacts, preserving fine details, and recovering dim sources. This advancement further facilitates radio astronomical data analysis tasks on celestial phenomena.
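
As an illustration of the conditional reverse-diffusion step described above, a minimal sketch (our own, not the authors' code; the network `eps_model` and the schedule arrays `alphas`, `alphas_bar` are assumed placeholders):

    # One reverse (denoising) DDPM step conditioned on visibility data and
    # the dirty image; standard DDPM update with a simple variance choice.
    import numpy as np

    def reverse_step(x_t, t, vis, dirty, eps_model, alphas, alphas_bar):
        eps = eps_model(x_t, t, vis, dirty)                # predicted noise
        coef = (1 - alphas[t]) / np.sqrt(1 - alphas_bar[t])
        mean = (x_t - coef * eps) / np.sqrt(alphas[t])     # posterior mean
        z = np.random.randn(*x_t.shape) if t > 0 else 0.0  # no noise at t = 0
        return mean + np.sqrt(1 - alphas[t]) * z           # sample x_{t-1}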

Read this paper on arXiv…

R. Wang, Z. Chen, Q. Luo, et. al.
Wed, 17 May 23
5/67

Comments: 8 pages

Using a Conditional Generative Adversarial Network to Control the Statistical Characteristics of Generated Images for IACT Data Analysis [IMA]

http://arxiv.org/abs/2211.15807


Generative adversarial networks are a promising tool for image generation in the astronomy domain. Of particular interest are conditional generative adversarial networks (cGANs), which allow one to divide images into several classes according to the value of some property of the image, and then specify the required class when generating new images. In the case of images from Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the total brightness of all image pixels (image size), which is in direct correlation with the energy of primary particles. We used the cGAN technique to generate images similar to those obtained in the TAIGA-IACT experiment. As a training set, we used a set of two-dimensional images generated using the TAIGA Monte Carlo simulation software. We artificially divided the training set into 10 classes, sorting images by size and defining the class boundaries so that the same number of images fall into each class. These classes were used while training our network. The paper shows that for each class, the size distribution of the generated images is close to normal, with the mean value located approximately in the middle of the corresponding class. We also show that for the generated images, the total image size distribution obtained by summing the distributions over all classes is close to the original distribution of the training set. The results obtained will be useful for more accurate generation of realistic synthetic images similar to the ones taken by IACTs.
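
A short sketch of the equal-population class construction described above (our own illustration; ties at quantile boundaries may unbalance classes slightly):

    # Split a training set into 10 classes of (approximately) equal
    # population by total image brightness ("size").
    import numpy as np

    def size_class_labels(images, n_classes=10):
        sizes = images.sum(axis=(1, 2))                    # total pixel brightness
        edges = np.quantile(sizes, np.linspace(0, 1, n_classes + 1))
        return np.digitize(sizes, edges[1:-1])             # labels 0..n_classes-1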

Read this paper on arXiv…

J. Dubenskaya, A. Kryukov, A. Demichev, et. al.
Wed, 30 Nov 22
37/81

Comments: N/A

Automated Sunspot Detection as an Alternative to Visual Observations [SSA]

http://arxiv.org/abs/2211.13552


We developed an automated method for sunspot detection using digital white-light solar images to achieve a performance similar to that of visual drawing observations in sunspot counting. To correctly identify even small, isolated spots, we pay special attention to the accurate derivation of the quiet-disk component of the Sun, which is used as a reference for identifying sunspots with a threshold. This threshold is determined adaptively so that images obtained under various conditions can be processed. To eliminate the seeing effect, our method can process multiple images taken within a short time. We applied the developed method to digital images captured at three sites and compared the detection results with those of visual observations. We conclude that the proposed sunspot detection method has a performance similar to that of visual observation. This method can be widely used by public observatories and amateurs, as well as professional observatories, as an alternative to hand-drawn visual observation for sunspot counting.
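
A minimal sketch of the quiet-disk reference plus adaptive-threshold idea: here a large median filter stands in for the quiet-disk derivation, and `k` and the window size are illustrative values, not the paper's:

    import numpy as np
    from scipy.ndimage import median_filter

    def detect_sunspots(image, disk_mask, k=4.0, window=51):
        quiet = median_filter(image, size=window)          # quiet-disk estimate
        residual = image - quiet
        sigma = residual[disk_mask].std()                  # on-disk scatter
        return disk_mask & (residual < -k * sigma)         # dark-spot mask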

Read this paper on arXiv…

Y. Hanaoka
Mon, 28 Nov 22
84/93

Comments: “Solar Physics”, accepted. 24 pages, 10 figures

Astrometric Calibration and Source Characterisation of the Latest Generation Neuromorphic Event-based Cameras for Space Imaging [CL]

http://arxiv.org/abs/2211.09939


As an emerging approach to space situational awareness and space imaging, the practical use of an event-based camera in space imaging for precise source analysis is still in its infancy. The nature of event-based space imaging and data collection needs to be further explored to develop more effective event-based space imaging systems and to advance the capabilities of event-based tracking systems with improved target measurement models. Moreover, for event measurements to be meaningful, a framework must be investigated for event-based camera calibration to project events from pixel array coordinates in the image plane to coordinates in a target resident space object's reference frame. In this paper, the traditional techniques of conventional astronomy are reconsidered to properly utilise the event-based camera for space imaging and space situational awareness. This paper presents the techniques and systems used for calibrating an event-based camera for reliable and accurate measurement acquisition. These techniques are vital in building event-based space imaging systems capable of real-world space situational awareness tasks. By calibrating sources detected using the event-based camera, the spatio-temporal characteristics of detected sources or 'event sources' can be related to the photometric characteristics of the underlying astrophysical objects. Finally, these characteristics are analysed to establish a foundation for principled processing and observing techniques which appropriately exploit the capabilities of the event-based camera.

Read this paper on arXiv…

N. Ralph, A. Marcireau, S. Afshar, et. al.
Mon, 21 Nov 22
20/66

Comments: N/A

Deep Learning-based galaxy image deconvolution [IMA]

http://arxiv.org/abs/2211.09597


With the onset of large-scale astronomical surveys capturing millions of images, there is an increasing need to develop fast and accurate deconvolution algorithms that generalize well to different images. A powerful and accessible deconvolution method would allow for the reconstruction of a cleaner estimation of the sky. The deconvolved images would be helpful for performing photometric measurements and thereby making progress in the fields of galaxy formation and evolution. We propose a new deconvolution method based on the Learnlet transform. We then investigate and compare the performance of different U-Net architectures and Learnlet for image deconvolution in the astrophysical domain by following a two-step approach: a Tikhonov deconvolution with a closed-form solution, followed by post-processing with a neural network. To generate our training dataset, we extract HST cutouts from the CANDELS survey in the F606W filter (V-band) and corrupt these images to simulate their blurred-noisy versions. Our numerical results based on these simulations show a detailed comparison between the considered methods for different noise levels.
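
A sketch of the first step only, the closed-form Tikhonov deconvolution in Fourier space (the neural post-processing is not shown); the PSF is assumed centred and of the same shape as the image:

    import numpy as np

    def tikhonov(dirty, psf, lam=1e-2):
        H = np.fft.fft2(np.fft.ifftshift(psf))             # transfer function
        Y = np.fft.fft2(dirty)
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)        # argmin ||Hx-y||^2 + lam||x||^2
        return np.real(np.fft.ifft2(X))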

Read this paper on arXiv…

U. Akhaury, J. Starck, P. Jablonka, et. al.
Fri, 18 Nov 22
47/70

Comments: 15 pages, 5 figures

Real-Time Dense Field Phase-to-Space Simulation of Imaging through Atmospheric Turbulence [CL]

http://arxiv.org/abs/2210.06713


Numerical simulation of atmospheric turbulence is one of the biggest bottlenecks in developing computational techniques for solving the inverse problem in long-range imaging. The classical split-step method is based upon numerical wave propagation which splits the propagation path into many segments and propagates every pixel in each segment individually via the Fresnel integral. This repeated evaluation becomes increasingly time-consuming for larger images. As a result, the split-step simulation is often done only on a sparse grid of points followed by an interpolation to the other pixels. Even so, the computation is expensive for real-time applications. In this paper, we present a new simulation method that enables real-time processing over a dense grid of points. Building upon the recently developed multi-aperture model and the phase-to-space transform, we overcome the memory bottleneck in drawing random samples from the Zernike correlation tensor. We show that the cross-correlation of the Zernike modes has an insignificant contribution to the statistics of the random samples. By approximating these cross-correlation blocks in the Zernike tensor, we restore the homogeneity of the tensor which then enables Fourier-based random sampling. On a $512\times512$ image, the new simulator achieves 0.025 seconds per frame over a dense field. On a $3840 \times 2160$ image, which would have taken 13 hours to simulate using the split-step method, the new simulator runs at approximately 60 seconds per frame.

Read this paper on arXiv…

N. Chimitt, X. Zhang, Z. Mao, et. al.
Fri, 14 Oct 22
7/75

Comments: N/A

Attention-Based Generative Neural Image Compression on Solar Dynamics Observatory [CL]

http://arxiv.org/abs/2210.06478


NASA’s Solar Dynamics Observatory (SDO) mission gathers 1.4 terabytes of data each day from its geosynchronous orbit in space. SDO data includes images of the Sun captured at different wavelengths, with the primary scientific goal of understanding the dynamic processes governing the Sun. Recently, end-to-end optimized artificial neural networks (ANN) have shown great potential in performing image compression. ANN-based compression schemes have outperformed conventional hand-engineered algorithms for lossy and lossless image compression. We have designed an ad hoc ANN-based image compression scheme to reduce the amount of data that needs to be stored and retrieved on space missions studying solar dynamics. In this work, we propose an attention module to make use of both local and non-local attention mechanisms in an adversarially trained neural image compression network. We also demonstrate the superior perceptual quality of this neural image compressor. Our proposed algorithm for compressing images downloaded from the SDO spacecraft performs better in the rate-distortion trade-off than popular currently-in-use image compression codecs such as JPEG and JPEG2000. In addition, we show that the proposed method outperforms the state-of-the-art lossy transform coding compression codec BPG.

Read this paper on arXiv…

A. Zafari, A. Khoshkhahtinat, P. Mehta, et. al.
Fri, 14 Oct 22
17/75

Comments: Accepted to ICMLA 2022 (Oral Presentation)

Parallel faceted imaging in radio interferometry via proximal splitting (Faceted HyperSARA): II. Code and real data proof of concept [IMA]

http://arxiv.org/abs/2209.07604


In a companion paper, a faceted wideband imaging technique for radio interferometry, dubbed Faceted HyperSARA, has been introduced and validated on synthetic data. Building on the recent HyperSARA approach, Faceted HyperSARA leverages the splitting functionality inherent to the underlying primal-dual forward-backward algorithm to decompose the image reconstruction over multiple spatio-spectral facets. The approach allows complex regularization to be injected into the imaging process while providing additional parallelization flexibility compared to HyperSARA. The present paper introduces new algorithm functionalities to address real datasets, implemented as part of a fully fledged MATLAB imaging library made available on GitHub. A large-scale proof of concept is proposed to validate Faceted HyperSARA in a new data and parameter scale regime, compared to the state-of-the-art. The reconstruction of a 15 GB wideband image of Cyg A from 7.4 GB of VLA data is considered, utilizing 1440 CPU cores on an HPC system for about 9 hours. The conducted experiments illustrate the reconstruction performance of the proposed approach on real data, exploiting new functionalities to set both an accurate model of the measurement operator accounting for known direction-dependent effects (DDEs) and an effective noise level accounting for imperfect calibration. They also demonstrate that, when combined with a further dimensionality reduction functionality, Faceted HyperSARA enables the recovery of a 3.6 GB image of Cyg A from the same data using only 91 CPU cores for 39 hours. In this setting, the proposed approach is shown to provide a superior reconstruction quality compared to the state-of-the-art wideband CLEAN-based algorithm of the WSClean software.

Read this paper on arXiv…

P. Thouvenin, A. Dabbech, M. Jiang, et. al.
Mon, 19 Sep 22
47/50

Comments: N/A

Field Distortion Model Based on Fredholm Integral [CL]

http://arxiv.org/abs/2205.09022


Field distortion is widespread in imaging systems. If it cannot be measured and corrected well, it will affect the accuracy of photogrammetry. To this end, we propose a general field distortion model based on the Fredholm integral, which uses a reconstructed high-resolution reference point spread function (PSF) and two sets of 4-variable polynomials to describe an imaging system. The model includes the point-to-point positional distortion from object space to image space and the deformation of the PSF, so that an actual field distortion can be measured with arbitrary accuracy. We also derive the formula required for correcting the sampling effect of the image sensor. Through numerical simulation, we verify the effectiveness of the model and the reconstruction algorithm. This model has potential applications in high-precision image calibration, photogrammetry and astrometry.

Read this paper on arXiv…

Y. Sun and J. Zhou
Thu, 19 May 22
32/61

Comments: 11 pages, 9 figures

Image reconstruction algorithms in radio interferometry: from handcrafted to learned denoisers [CL]

http://arxiv.org/abs/2202.12959


We introduce a new class of iterative image reconstruction algorithms for radio interferometry, at the interface of convex optimization and deep learning, inspired by plug-and-play methods. The approach consists in learning a prior image model by training a deep neural network (DNN) as a denoiser, and substituting it for the handcrafted proximal regularization operator of an optimization algorithm. The proposed AIRI (“AI for Regularization in Radio-Interferometric Imaging”) framework, for imaging complex intensity structure with diffuse and faint emission, inherits the robustness and interpretability of optimization, and the learning power and speed of networks. Our approach relies on three steps. Firstly, we design a low dynamic range database for supervised training from optical intensity images. Secondly, we train a DNN denoiser with a basic architecture ensuring positivity of the output image, at a noise level inferred from the signal-to-noise ratio of the data. We use either $\ell_2$ or $\ell_1$ training losses, enhanced with a nonexpansiveness term ensuring algorithm convergence, and including on-the-fly database dynamic range enhancement via exponentiation. Thirdly, we plug the learned denoiser into the forward-backward optimization algorithm, resulting in a simple iterative structure alternating a denoising step with a gradient-descent data-fidelity step. The resulting AIRI-$\ell_2$ and AIRI-$\ell_1$ algorithms were validated against CLEAN and optimization algorithms of the SARA family, propelled by the “average sparsity” proximal regularization operator. Simulation results show that these first AIRI incarnations are competitive in imaging quality with SARA and its unconstrained forward-backward-based version uSARA, while providing significant acceleration. CLEAN remains faster but offers lower reconstruction quality.
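
A minimal sketch of the plug-and-play forward-backward iteration described in the third step: a gradient step on the data fidelity followed by the learned denoiser in place of the proximal operator. `A`, `At` (measurement operator and adjoint) and `denoiser` are placeholders, not the AIRI code:

    def airi(y, A, At, denoiser, gamma, n_iter=200):
        x = At(y)                                          # dirty-image start
        for _ in range(n_iter):
            x = denoiser(x - gamma * At(A(x) - y))         # FB step + denoiser
        return x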

Read this paper on arXiv…

M. Terris, A. Dabbech, C. Tang, et. al.
Tue, 1 Mar 22
24/80

Comments: N/A

Tomographic Muon Imaging of the Great Pyramid of Giza [CL]

http://arxiv.org/abs/2202.08184


The pyramids of the Giza plateau have fascinated visitors since ancient times and are the last of the Seven Wonders of the ancient world still standing. It has been half a century since Luis Alvarez and his team used cosmic-ray muon imaging to look for hidden chambers in Khafre’s Pyramid. Advances in instrumentation for High-Energy Physics (HEP) allowed a new survey, ScanPyramids, to make important new discoveries at the Great Pyramid (Khufu) utilizing the same basic technique that the Alvarez team used, but now with modern instrumentation. The Exploring the Great Pyramid Mission plans to field a very large muon telescope system that will be transformational with respect to the field of cosmic-ray muon imaging. We plan to field a telescope system that has upwards of 100 times the sensitivity of the equipment that has recently been used at the Great Pyramid, will image muons from nearly all angles and will, for the first time, produce a true tomographic image of such a large structure.

Read this paper on arXiv…

A. Bross, E. Dukes, R. Ehrlich, et. al.
Thu, 17 Feb 22
44/60

Comments: N/A

alpha-Deep Probabilistic Inference (alpha-DPI): efficient uncertainty quantification from exoplanet astrometry to black hole feature extraction [IMA]

http://arxiv.org/abs/2201.08506


Inference is crucial in modern astronomical research, where hidden astrophysical features and patterns are often estimated from indirect and noisy measurements. Inferring the posterior of hidden features, conditioned on the observed measurements, is essential for understanding the uncertainty of results and downstream scientific interpretations. Traditional approaches for posterior estimation include sampling-based methods and variational inference. However, sampling-based methods are typically slow for high-dimensional inverse problems, while variational inference often lacks estimation accuracy. In this paper, we propose alpha-DPI, a deep learning framework that first learns an approximate posterior using alpha-divergence variational inference paired with a generative neural network, and then produces more accurate posterior samples through importance re-weighting of the network samples. It inherits strengths from both sampling and variational inference methods: it is fast, accurate, and scalable to high-dimensional problems. We apply our approach to two high-impact astronomical inference problems using real data: exoplanet astrometry and black hole feature extraction.
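
A sketch of the importance re-weighting stage described above: samples from the learned variational posterior q are re-weighted toward the true posterior via self-normalized importance weights. The log-density arrays are assumed inputs, and this is our illustration, not the alpha-DPI code:

    import numpy as np

    def reweight(samples, log_post, log_q):
        # samples: array of posterior draws; log_post, log_q: log densities
        # (unnormalized posterior and variational q) evaluated at the samples.
        log_w = log_post - log_q                           # log importance weights
        w = np.exp(log_w - log_w.max())                    # stabilized weights
        w /= w.sum()                                       # self-normalize
        idx = np.random.choice(len(samples), len(samples), p=w)
        return samples[idx]                                # re-weighted draws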

Read this paper on arXiv…

H. Sun, K. Bouman, P. Tiede, et. al.
Mon, 24 Jan 22
42/59

Comments: N/A

Image Processing Methods for Coronal Hole Segmentation, Matching, and Map Classification [CL]

http://arxiv.org/abs/2201.01380


The paper presents the results from a multi-year effort to develop and validate image processing methods for selecting the best physical models based on solar image observations. The approach consists of selecting the physical models based on their agreement with coronal holes extracted from the images. Ultimately, the goal is to use physical models to predict geomagnetic storms. We decompose the problem into three subproblems: (i) coronal hole segmentation based on physical constraints, (ii) matching clusters of coronal holes between different maps, and (iii) physical map classification. For segmenting coronal holes, we develop a multi-modal method that uses segmentation maps from three different methods to initialize a level-set method that evolves the initial coronal hole segmentation to the magnetic boundary. Then, we introduce a new method based on Linear Programming for matching clusters of coronal holes. The final matching is then performed using Random Forests. The methods were carefully validated using consensus maps derived from multiple readers, manual clustering, manual map classification, and method validation for 50 maps. The proposed multi-modal segmentation method significantly outperformed SegNet, U-net, Henney-Harvey, and FCN by providing accurate boundary detection. Overall, the method gave a 95.5% map classification accuracy.

Read this paper on arXiv…

V. Jatla, M. Pattichis and C. Arge
Thu, 6 Jan 22
31/56

Comments: N/A

Astronomical Image Colorization and upscaling with Generative Adversarial Networks [CL]

http://arxiv.org/abs/2112.13865


Automatic colorization of images without human intervention has been a subject of interest in the machine learning community for a brief period of time. Assigning color to an image is a highly ill-posed problem because of its innate nature of possessing very high degrees of freedom; given an image, there is often no single color combination that is correct. Besides colorization, another problem in the reconstruction of images is Single Image Super Resolution, which aims at transforming low-resolution images to a higher resolution. This research aims to provide an automated approach for the problem by focusing on a very specific domain of images, namely astronomical images, and processing them using Generative Adversarial Networks (GANs). We explore the usage of various models in two different color spaces, RGB and Lab. Owing to a small dataset, we use transfer learning with a pre-trained ResNet-18 as a backbone, i.e. the encoder for the U-Net, and fine-tune it further. The model produces visually appealing images which hallucinate high-resolution, colorized data that does not exist in the original image. We present our results by evaluating the GANs quantitatively using distance metrics such as L1 distance and L2 distance in each of the color spaces across all channels to provide a comparative analysis. We use Fréchet inception distance (FID) to compare the distribution of the generated images with the distribution of the real images to assess the model’s performance.

Read this paper on arXiv…

S. Kalvankar, H. Pandit, P. Parwate, et. al.
Thu, 30 Dec 21
6/71

Comments: 14 pages, 10 figures, 7 tables

Unrolling PALM for sparse semi-blind source separation [IMA]

http://arxiv.org/abs/2112.05694


Sparse Blind Source Separation (BSS) has become a well established tool for a wide range of applications – for instance, in astrophysics and remote sensing. Classical sparse BSS methods, such as the Proximal Alternating Linearized Minimization (PALM) algorithm, nevertheless often suffer from a difficult hyperparameter choice, which undermines their results. To bypass this pitfall, we propose in this work to build on the thriving field of algorithm unfolding/unrolling. Unrolling PALM makes it possible to leverage the data-driven knowledge stemming from realistic simulations or ground-truth data by learning both PALM hyperparameters and variables. In contrast to most existing unrolled algorithms, which assume a fixed known dictionary during the training and testing phases, this article further emphasizes the ability to deal with variable mixing matrices (a.k.a. dictionaries). The proposed Learned PALM (LPALM) algorithm thus enables semi-blind source separation, which is key to increasing the generalization of the learnt model in real-world applications. We illustrate the relevance of LPALM in astrophysical multispectral imaging: the algorithm not only needs up to $10^4-10^5$ times fewer iterations than PALM, but also improves the separation quality, while avoiding the cumbersome hyperparameter and initialization choices of PALM. We further show that LPALM outperforms other unrolled source separation methods in the semi-blind setting.
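
A sketch of one unrolled PALM layer for the model X ≈ A S: a soft-thresholded gradient step on the sparse sources S and a normalized gradient step on the mixing matrix A, with `gamma_S`, `thresh`, `gamma_A` as the learnable per-layer parameters. This is our illustration of the idea, not LPALM itself:

    import numpy as np

    def soft(u, t):
        return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)  # soft threshold

    def palm_layer(X, A, S, gamma_S, thresh, gamma_A):
        S = soft(S - gamma_S * A.T @ (A @ S - X), thresh)    # sparse source step
        A = A - gamma_A * (A @ S - X) @ S.T                  # mixing-matrix step
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)    # normalize columns
        return A, S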

Read this paper on arXiv…

M. Fahes, C. Kervazo, J. Bobin, et. al.
Mon, 13 Dec 21
45/70

Comments: N/A

Big Data in Astroinformatics — Compression of Scanned Astronomical Photographic Plates [IMA]

http://arxiv.org/abs/2108.08399


The construction of Scanned Astronomical Photographic Plate (SAPP) databases and an SVD-based image compression algorithm are considered. Examples of compression for several different plates are shown.
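
A minimal sketch of rank-k SVD compression of a scanned plate (k is the compression knob; the paper's exact pipeline is not shown):

    import numpy as np

    def svd_compress(image, k):
        U, s, Vt = np.linalg.svd(image, full_matrices=False)
        return (U[:, :k] * s[:k]) @ Vt[:k]                 # best rank-k approximation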

Read this paper on arXiv…

V. Kolev
Fri, 20 Aug 21
23/59

Comments: 9 pages, 4 figures, International Conference on Big Data, Knowledge and Control Systems Engineering, 5-6 November 2015, Sofia, Bulgaria

Autofocusing Optimal Search Algorithm for a Telescope System [CL]

http://arxiv.org/abs/2107.05398


Focus accuracy affects the quality of astronomical observations, so auto-focusing is necessary for imaging systems designed for such observations. An automatic focus system searches for the best focus position using a search algorithm whose objective function is the focus level of the image. This paper studies the performance of several search algorithms in order to select a suitable one for developing an automatic focus system for the Kottamia Astronomical Observatory (KAO). The optimal search algorithm is selected by applying the candidate algorithms to five sequences of star-cluster observations and evaluating their performance based on two criteria: accuracy and number of steps. The experimental results show that binary search is the optimal search algorithm.
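
A sketch of a halving search for best focus, assuming the sharpness metric is unimodal in focuser position; `sharpness(pos)` is a placeholder that would move the focuser and score an image, and the paper's exact binary-search variant may differ:

    def focus_search(sharpness, lo, hi, tol=1):
        while hi - lo > tol:
            step = max(1, (hi - lo) // 3)
            m1, m2 = lo + step, hi - step                  # two interior probes
            if sharpness(m1) < sharpness(m2):
                lo = m1                                    # peak lies right of m1
            else:
                hi = m2                                    # peak lies left of m2
        return (lo + hi) // 2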

Read this paper on arXiv…

I. Helmy, A. Hamdy, D. Eid, et. al.
Tue, 13 Jul 21
6/79

Comments: 13 Pages, 10 Figures, 7 Tables

Impact of Scene-Specific Enhancement Spectra on Matched Filter Greenhouse Gas Retrievals from Imaging Spectroscopy [CL]

http://arxiv.org/abs/2107.05578


Matched filter (MF) techniques have been widely used for the retrieval of greenhouse gas enhancements from imaging spectroscopy datasets. While multiple algorithmic techniques and refinements have been proposed, the greenhouse gas target spectrum used for concentration enhancement estimation has remained largely unaltered since the introduction of quantitative MF retrievals. The magnitude of retrieved methane and carbon dioxide enhancements, and thereby the integrated mass enhancement (IME) and estimated flux of point-source emitters, is heavily dependent on this target spectrum. The current standard use of molecular absorption coefficients to create unit enhancement target spectra does not account for absorption by background concentrations of greenhouse gases, solar and sensor geometry, or atmospheric water vapor absorption. We introduce geometric and atmospheric parameters into the generation of scene-specific unit enhancement spectra to provide target spectra that are compatible with all greenhouse gas retrieval MF techniques. For methane plumes, the IME resulting from the use of standard, generic enhancement spectra varied from -22 to +28.7% compared to scene-specific enhancement spectra. Due to differences in spectral shape between the generic and scene-specific enhancement spectra, differences in methane plume IME were linked to surface spectral characteristics in addition to geometric and atmospheric parameters. IME differences were even larger for carbon dioxide plumes, with generic enhancement spectra producing integrated mass enhancements of -76.1 to -48.1% compared to scene-specific spectra. Fluxes calculated from these integrated enhancements would vary by the same percentages, assuming equivalent wind conditions. Methane and carbon dioxide IME were most sensitive to changes in solar zenith angle and ground elevation. Scene-specific target spectra can improve confidence in greenhouse gas retrievals and flux estimates across collections of scenes with diverse geometric and atmospheric conditions.
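
For reference, the classical matched-filter enhancement estimate that the target spectrum t feeds into, given the background mean mu and covariance C (a standard formula, not the paper's code):

    import numpy as np

    def matched_filter(x, mu, C, t):
        # x: one spectrum or an (N, bands) array of pixel spectra.
        Cinv_t = np.linalg.solve(C, t)                     # C^{-1} t
        return (x - mu) @ Cinv_t / (t @ Cinv_t)            # per-pixel enhancement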

Read this paper on arXiv…

M. Foote, P. Dennison, P. Sullivan, et. al.
Tue, 13 Jul 21
43/79

Comments: 14 pages, 5 figures, 3 tables

The Simons Observatory: HoloSim-ML: machine learning applied to the efficient analysis of radio holography measurements of complex optical systems [IMA]

http://arxiv.org/abs/2107.04138


Near-field radio holography is a common method for measuring and aligning mirror surfaces for millimeter and sub-millimeter telescopes. In instruments with more than a single mirror, degeneracies arise in the holography measurement, requiring multiple measurements and new fitting methods. We present HoloSim-ML, a Python code for beam simulation and analysis of radio holography data from complex optical systems. This code uses machine learning to efficiently determine the position of hundreds of mirror adjusters on multiple mirrors with few micron accuracy. We apply this approach to the example of the Simons Observatory 6m telescope.

Read this paper on arXiv…

G. Chesmore, A. Adler, N. Cothard, et. al.
Mon, 12 Jul 21
30/49

Comments: Software is publicly available at: this https URL

A Comparative Study of Convolutional Neural Networks for the Detection of Strong Gravitational Lensing [IMA]

http://arxiv.org/abs/2106.01754


As we enter the era of large-scale imaging surveys with upcoming telescopes such as LSST and SKA, it is envisaged that the number of known strong gravitational lensing systems will increase dramatically. However, these events are still very rare and require the efficient processing of millions of images. In order to tackle this image processing problem, we present Machine Learning techniques and apply them to the Gravitational Lens Finding Challenge. The Convolutional Neural Networks (CNNs) presented have been re-implemented within a new modular and extendable framework, LEXACTUM. We report an Area Under the Curve (AUC) of 0.9343 and 0.9870, and an execution time of 0.0061s and 0.0594s per image, for the Space and Ground datasets respectively, showing that the results obtained by CNNs are very competitive with conventional methods (such as visual inspection and arc finders) for detecting gravitational lenses.

Read this paper on arXiv…

D. Magro, K. Adami, A. DeMarco, et. al.
Fri, 4 Jun 21
34/71

Comments: 12 pages, 13 figures

Geospatial Transformations for Ground-Based Sky Imaging Systems [IMA]

http://arxiv.org/abs/2103.02066


Sky imaging systems use lenses to acquire images, concentrating light beams onto an imager. The light beams received by the imager have an elevation angle with respect to the normal of the device. As a result, the pixels in an image contain information from different areas of the sky within the imaging system’s Field Of View (FOV). The area of the FOV contained in a pixel increases as the elevation angle of the incident light beams decreases. When the sky imaging system is mounted on a solar tracker, the angle of incidence of the light beams varies over time. This investigation introduces a transformation that projects the original Euclidean frame of the imager plane to the geospatial frame of the sky imaging system’s field of view.

Read this paper on arXiv…

G. Terrén-Serrano and M. Martínez-Ramón
Thu, 4 Mar 21
9/83

Comments: N/A

Scattering Networks on the Sphere for Scalable and Rotationally Equivariant Spherical CNNs [CL]

http://arxiv.org/abs/2102.02828


Convolutional neural networks (CNNs) constructed natively on the sphere have been developed recently and shown to be highly effective for the analysis of spherical data. While an efficient framework has been formulated, spherical CNNs are nevertheless highly computationally demanding; typically they cannot scale beyond spherical signals of thousands of pixels. We develop scattering networks constructed natively on the sphere that provide a powerful representational space for spherical data. Spherical scattering networks are computationally scalable and exhibit rotational equivariance, while their representational space is invariant to isometries and provides efficient and stable signal representations. By integrating scattering networks as an additional type of layer in the generalized spherical CNN framework, we show how they can be leveraged to scale spherical CNNs to the high resolution data typical of many practical applications, with spherical signals of many tens of megapixels and beyond.

Read this paper on arXiv…

J. McEwen, C. Wallis and A. Mavor-Parker
Mon, 8 Feb 21
20/46

Comments: 13 pages, 5 figures

Galaxy Image Restoration with Shape Constraint [IMA]

http://arxiv.org/abs/2101.10021


Images acquired with a telescope are blurred and corrupted by noise. The blurring is usually modeled by a convolution with the Point Spread Function and the noise by Additive Gaussian Noise. Recovering the observed image is an ill-posed inverse problem. Sparse deconvolution is well known to be an efficient deconvolution technique, leading to optimized pixel Mean Square Errors, but without any guarantee that the shapes of objects (e.g. galaxy images) contained in the data will be preserved. In this paper, we introduce a new shape constraint and exhibit its properties. By combining it with a standard sparse regularization in the wavelet domain, we introduce the Shape COnstraint REstoration algorithm (SCORE), which performs a standard sparse deconvolution, while preserving galaxy shapes. We show through numerical experiments that this new approach leads to a reduction of galaxy ellipticity measurement errors by at least 44%.

Read this paper on arXiv…

F. Nammour, M. Schmitz, F. Mboula, et. al.
Tue, 26 Jan 21
27/84

Comments: 22 pages, 6 figures, 1 table, accepted in Journal of Fourier Analysis and Applications

Data Processing for Short-Term Solar Irradiance Forecasting using Ground-Based Infrared Images [IMA]

http://arxiv.org/abs/2101.08694


The generation of energy in a power grid which uses Photovoltaic (PV) systems depends on the projection of shadows from moving clouds in the Troposphere. This investigation proposes an efficient method of data processing for the statistical quantification of cloud features using long-wave infrared (IR) images and Global Solar Irradiance (GSI) measurements. The IR images are obtained using a data acquisition system (DAQ) mounted on a solar tracker. We explain how to remove cyclostationary biases in GSI measurements. Seasonal trends are removed from the GSI time series, using the theoretical GSI to obtain the Clear-Sky Index (CSI) time series. We introduce an atmospheric model to remove from IR images both the effect of atmosphere scatter irradiance and the effect of the Sun’s direct irradiance. Scattering is produced by water spots and dust particles on the germanium lens of the enclosure. We explain how to remove the scattering effect produced by the germanium lens attached to the DAQ enclosure window of the IR camera. An atmospheric condition model classifies the sky-conditions in four different categories: clear-sky, cumulus, stratus and nimbus. When an IR image is classified in the category of clear-sky, it is used to model the scattering effect of the germanium lens.
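
A minimal sketch of the detrending step, assuming a theoretical clear-sky GSI model is available: dividing measured GSI by the clear-sky value gives the Clear-Sky Index, removing the seasonal and diurnal cycle:

    import numpy as np

    def clear_sky_index(gsi_measured, gsi_clear, eps=1.0):
        # eps guards against division by near-zero clear-sky values at night.
        return gsi_measured / np.maximum(gsi_clear, eps)   # CSI time series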

Read this paper on arXiv…

G. Terrén-Serrano and M. Martínez-Ramón
Fri, 22 Jan 21
53/69

Comments: arXiv admin note: text overlap with arXiv:2011.12401

Galaxy Image Translation with Semi-supervised Noise-reconstructed Generative Adversarial Networks [CL]

http://arxiv.org/abs/2101.07389


Image-to-image translation with Deep Learning neural networks, particularly with Generative Adversarial Networks (GANs), is one of the most powerful methods for simulating astronomical images. However, current work is limited to utilizing paired images with supervised translation, and there has been little discussion of reconstructing the noise background that encodes instrumental and observational effects. These limitations might be harmful for subsequent scientific applications in astrophysics. Therefore, we aim to develop methods for using unpaired images and preserving noise characteristics in image translation. In this work, we propose a two-way image translation model using GANs that exploits both paired and unpaired images in a semi-supervised manner, and introduce a noise-emulating module that is able to learn and reconstruct noise characterized by high-frequency features. By experimenting on multi-band galaxy images from the Sloan Digital Sky Survey (SDSS) and the Canada France Hawaii Telescope Legacy Survey (CFHT), we show that our method recovers global and local properties effectively and outperforms benchmark image translation models. To the best of our knowledge, this work is the first attempt to apply semi-supervised methods and noise reconstruction techniques in astrophysical studies.

Read this paper on arXiv…

Q. Lin, D. Fouchez and J. Pasquet
Wed, 20 Jan 21
33/61

Comments: Accepted at ICPR 2020

Digital Elevation Model enhancement using Deep Learning [CL]

http://arxiv.org/abs/2101.04812


We demonstrate high-fidelity enhancement of planetary digital elevation models (DEMs) using optical images and deep learning with convolutional neural networks. Enhancement can be applied recursively to the limit of available optical data, representing a 90x resolution improvement in global Mars DEMs. Deep learning-based photoclinometry robustly recovers features obscured by non-ideal lighting conditions. The method can be automated at global scale. Analysis shows that the enhanced DEM slope errors are comparable with those of high-resolution maps produced using conventional, labor-intensive methods.

Read this paper on arXiv…

C. Handmer
Thu, 14 Jan 21
78/79

Comments: 11 pages, 13 figures

PSF Estimation in Crowded Astronomical Imagery as a Convolutional Dictionary Learning Problem [CL]

http://arxiv.org/abs/2101.01268


We present a new algorithm for estimating the Point Spread Function (PSF) in wide-field astronomical images with extreme source crowding. Robust and accurate PSF estimation in crowded astronomical images dramatically improves the fidelity of astrometric and photometric measurements extracted from wide-field sky monitoring imagery. Our radically new approach utilizes convolutional sparse representations to model the continuous functions involved in the image formation. This approach avoids the need to detect and precisely localize individual point sources that is shared by existing methods. In experiments involving simulated astronomical imagery, it significantly outperforms the recent alternative method with which it is compared.

Read this paper on arXiv…

B. Wohlberg and P. Wozniak
Wed, 6 Jan 21
72/82

Comments: N/A

Optical Wavelength Guided Self-Supervised Feature Learning For Galaxy Cluster Richness Estimate [CL]

http://arxiv.org/abs/2012.02368


Most galaxies in the nearby Universe are gravitationally bound to a cluster or group of galaxies. Their optical contents, such as optical richness, are crucial for understanding the co-evolution of galaxies and large-scale structures in modern astronomy and cosmology. The determination of optical richness can be challenging. We propose a self-supervised approach for estimating optical richness from multi-band optical images. The method uses the data properties of the multi-band optical images for pre-training, which enables learning feature representations from a large but unlabeled dataset. We apply the proposed method to the Sloan Digital Sky Survey. The result shows our estimate of optical richness lowers the mean absolute error and intrinsic scatter by 11.84% and 20.78%, respectively, while reducing the need for labeled training data by up to 60%. We believe the proposed method will benefit astronomy and cosmology, where a large number of unlabeled multi-band images are available, but acquiring image labels is costly.

Read this paper on arXiv…

G. Liang, Y. Su, S. Lin, et. al.
Mon, 7 Dec 20
68/69

Comments: Accepted to NeurIPS 2020 Workshop on Machine Learning and the Physical Sciences

Survey2Survey: A deep learning generative model approach for cross-survey image mapping [IMA]

http://arxiv.org/abs/2011.07124


During the last decade, there has been an explosive growth in survey data and deep learning techniques, both of which have enabled great advances for astronomy. The amount of data from various surveys from multiple epochs with a wide range of wavelengths and vast sky coverage, albeit with varying brightness and quality, is overwhelming, and leveraging information from overlapping observations from different surveys has limitless potential in understanding galaxy formation and evolution. Synthetic galaxy image generation using physical models has been an important tool for survey data analysis, while using deep learning generative models shows great promise. In this paper, we present a novel approach for robustly expanding and improving survey data through cross-survey feature translation. We trained two types of generative neural networks to map images from the Sloan Digital Sky Survey (SDSS) into corresponding images from the Dark Energy Survey (DES), increasing the brightness and S/N of the fainter, lower quality source images without losing important morphological information. We demonstrate the robustness of our method by generating DES representations of SDSS images from outside the overlapping region, showing that the brightness and quality are improved even when the source images are of lower quality than the training images. Finally, we highlight several images in which the reconstruction process appears to have removed large artifacts from SDSS images. While only an initial application, our method shows promise as a method for robustly expanding and improving the quality of optical survey data and provides a potential avenue for cross-band reconstruction.

Read this paper on arXiv…

B. Buncher, A. Sharma and M. Kind
Tue, 17 Nov 20
69/83

Comments: 13 pages, 18 figures

Simulating Anisoplanatic Turbulence by Sampling Inter-modal and Spatially Correlated Zernike Coefficients [CL]

http://arxiv.org/abs/2004.11210


Simulating atmospheric turbulence is an essential task for evaluating turbulence mitigation algorithms and training learning-based methods. Advanced numerical simulators for atmospheric turbulence are available, but they require evaluating wave propagation which is computationally expensive. In this paper, we present a propagation-free method for simulating imaging through turbulence. The key idea behind our work is a new method to draw inter-modal and spatially correlated Zernike coefficients. By establishing the equivalence between the angle-of-arrival correlation by Basu, McCrae and Fiorino (2015) and the multi-aperture correlation by Chanan (1992), we show that the Zernike coefficients can be drawn according to a covariance matrix defining the correlations. We propose fast and scalable sampling strategies to draw these samples. The new method allows us to compress the wave propagation problem into a sampling problem, hence making the new simulator significantly faster than existing ones. Experimental results show that the simulator has an excellent match with the theory and real turbulence data.
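
A sketch of the underlying sampling problem: drawing correlated Zernike coefficient vectors from a given covariance matrix, here via Cholesky factorization. The paper's contribution is a fast, scalable sampler achieving the same distribution, which this naive version does not attempt:

    import numpy as np

    def sample_zernike(cov, n_samples):
        # Small jitter keeps the factorization stable for near-singular cov.
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(cov.shape[0]))
        z = np.random.randn(cov.shape[0], n_samples)
        return (L @ z).T                                   # each row ~ N(0, cov)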

Read this paper on arXiv…

N. Chimitt and S. Chan
Fri, 24 Apr 20
39/63

Comments: N/A

Parallel faceted imaging in radio interferometry via proximal splitting (Faceted HyperSARA): when precision meets scalability [IMA]

http://arxiv.org/abs/2003.07358


Upcoming radio interferometers are aiming to image the sky at new levels of resolution and sensitivity, with wide-band image cubes reaching close to the Petabyte scale for SKA. Modern proximal optimization algorithms have shown a potential to significantly outperform CLEAN thanks to their ability to inject complex image models to regularize the inverse problem for image formation from visibility data. They were also shown to be scalable to large data volumes thanks to a splitting functionality enabling the decomposition of data into blocks, for parallel processing of block-specific data-fidelity terms of the objective function. In this work, the splitting functionality is further exploited to decompose the image cube into spatio-spectral facets, and enable parallel processing of facet-specific regularization terms in the objective. The resulting Faceted HyperSARA algorithm is implemented in MATLAB (code available on GitHub). Simulation results on synthetic image cubes confirm that faceting can provide a major increase in scalability at no cost in imaging quality. A proof-of-concept reconstruction of a 15 GB image of Cyg A from 7.4 GB of VLA data, utilizing 496 CPU cores on an HPC system for 68 hours, confirms both scalability and a quantum jump in imaging quality from CLEAN. Assuming a slow spectral slope of Cyg A, we also demonstrate that Faceted HyperSARA can be combined with a dimensionality reduction technique, enabling the use of only 31 CPU cores for 142 hours to form the Cyg A image from the same data, while preserving reconstruction quality. Cyg A reconstructed cubes are available online.

Read this paper on arXiv…

P. Thouvenin, A. Abdulaziz, M. Jiang, et. al.
Wed, 18 Mar 20
41/46

Comments: N/A

Adversarial training applied to Convolutional Neural Network for photometric redshift predictions [IMA]

http://arxiv.org/abs/2002.10154


The use of Convolutional Neural Networks (CNN) to estimate the galaxy photometric redshift probability distribution by analysing images in different wavelength bands has developed in recent years thanks to the rapid growth of the Machine Learning (ML) ecosystem. Authors have set up CNN architectures and studied their performances, and some sources of systematics, using standard methods of training and testing to ensure the generalisation power of their models. So far so good, but one piece was missing: is the model generalisation power actually well measured? The present article shows clearly that very small image perturbations can fool the model completely, opening the Pandora’s box of adversarial attacks. Among the different techniques and scenarios, we have chosen to use the one-step Fast Gradient Sign Method and its iterative extension, Projected Gradient Descent, as the adversarial generator toolkit. However, as unlikely as it may seem, these adversarial samples, which fool not only a single model, reveal a weakness of both the model and the classical training. A revisited algorithm is shown and applied, injecting a fraction of adversarial samples during the training phase. Numerical experiments have been conducted using a specific CNN model for illustration, although our study could be applied to other models – not only CNN ones – and in other contexts – not only redshift measurements – as it deals with the complexity of the boundary decision surface.
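
A sketch of the one-step Fast Gradient Sign Method and of injecting a fraction of adversarial samples into a training batch; `loss_grad` (gradient of the loss with respect to the input image) is a placeholder for the model's backward pass, not the article's code:

    import numpy as np

    def fgsm(x, loss_grad, eps=0.01):
        return x + eps * np.sign(loss_grad(x))             # one-step perturbation

    def mix_adversarial(batch, loss_grad, frac=0.3, eps=0.01):
        n_adv = int(frac * len(batch))
        batch = batch.copy()
        batch[:n_adv] = fgsm(batch[:n_adv], loss_grad, eps)  # adversarial fraction
        return batch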

Read this paper on arXiv…

J. Campagne
Tue, 25 Feb 20
23/76

Comments: 12 pages, 6 figures

Point Spread Function Modelling for Wide Field Small Aperture Telescopes with a Denoising Autoencoder [IMA]

http://arxiv.org/abs/2001.11716


The point spread function reflects the state of an optical telescope and is important for the design of data post-processing methods. For wide field small aperture telescopes, the point spread function is hard to model, because it is affected by many different effects and has strong temporal and spatial variations. In this paper, we propose to use a denoising autoencoder, a type of deep neural network, to model the point spread function of wide field small aperture telescopes. The denoising autoencoder is a purely data-based point spread function modelling method, which uses calibration data from real observations or numerically simulated results as point spread function templates. According to real observation conditions, different levels of random noise or aberrations are added to the point spread function templates, making them realizations of the point spread function, i.e., simulated star images. We then train the denoising autoencoder with these realizations and templates. After training, the denoising autoencoder learns the manifold space of the point spread function and can map any star image obtained by wide field small aperture telescopes directly to its point spread function, which can then be used to design data post-processing or optical system alignment methods.

Read this paper on arXiv…

P. Jia, X. Li, Z. Li, et. al.
Mon, 3 Feb 20
20/46

Comments: 10 pages, 10 figures, Accepted after minor revision by MNRAS

CosmoVAE: Variational Autoencoder for CMB Image Inpainting [CL]

http://arxiv.org/abs/2001.11651


Cosmic microwave background (CMB) radiation is critical to the understanding of the early universe and the precise estimation of cosmological constants. Due to the contamination of thermal dust noise in the galaxy, the CMB map, which is an image on the two-dimensional sphere, has missing observations, mainly concentrated in the equatorial region. The noise of the CMB map has a significant impact on the estimation precision of cosmological parameters, and inpainting the CMB map can effectively reduce the uncertainty of parametric estimation. In this paper, we propose a deep learning-based variational autoencoder, CosmoVAE, to restore the missing observations of the CMB map. The input and output of CosmoVAE are square images. To generate training, validation, and test data sets, we segment the full-sky CMB map into many small images by Cartesian projection. CosmoVAE assigns physical quantities to the parameters of the VAE network by using the angular power spectrum of the Gaussian random field as latent variables. CosmoVAE adopts a new loss function to improve the learning performance of the model, which consists of an $\ell_1$ reconstruction loss, the Kullback-Leibler divergence between the posterior distribution of the encoder network and the prior distribution of the latent variables, a perceptual loss, and a total-variation regularizer. The proposed model achieves state-of-the-art performance for Planck Commander 2018 CMB map inpainting.

Read this paper on arXiv…

K. Yi, Y. Guo, Y. Fan, et. al.
Mon, 3 Feb 20
31/46

Comments: 7 pages, 6 figures

Solar Image Deconvolution by Generative Adversarial Network [SSA]

http://arxiv.org/abs/2001.03850


With the Aperture Synthesis (AS) technique, a number of small antennas can be assembled to form a large telescope whose spatial resolution is determined by the distance between the two farthest antennas rather than the diameter of a single-dish antenna. Unlike a direct imaging system, an AS telescope captures the Fourier coefficients of a spatial object and then applies an inverse Fourier transform to reconstruct the spatial image. Due to the limited number of antennas, the Fourier coefficients are extremely sparse in practice, resulting in a very blurry image. To remove or reduce this blur, “CLEAN” deconvolution has been widely used in the literature. However, it was initially designed for point sources; for extended sources such as the Sun, its performance is unsatisfactory. In this study, a deep neural network, a Generative Adversarial Network (GAN), is proposed for solar image deconvolution. The experimental results demonstrate that the proposed model is markedly better than traditional CLEAN on solar images.

Read this paper on arXiv…

L. Xu, W. Sun, Y. Yan, et. al.
Tue, 14 Jan 20
62/72

Comments: 14 pages, 6 figures, 2 tables

Simulated JWST datasets for multispectral and hyperspectral image fusion [IMA]

http://arxiv.org/abs/2001.02618


This paper aims at providing a comprehensive framework to generate an astrophysical scene and to simulate realistic hyperspectral and multispectral data acquired by two JWST instruments, namely the NIRCam Imager and the NIRSpec IFU. We show that this simulation framework can be used to assess the benefits of fusing these images to recover an image of high spatial and spectral resolutions. To do so, we create a synthetic scene associated with a canonical infrared source, the Orion Bar. This scene combines pre-existing modelled spectra provided by the JWST Early Release Science Program 1288 and real high-resolution spatial maps from the Hubble Space Telescope and ALMA. We develop forward models, including corresponding noise models, for the two JWST instruments based on their technical designs and physical features. JWST observations are then simulated by applying the forward models to the aforementioned synthetic scene. We test a dedicated fusion algorithm we developed on these simulated observations. We show that the fusion process reconstructs the high spatio-spectral resolution scene with good accuracy in most areas, and we identify some limitations of the method to be tackled in future works. The synthetic scene and observations presented in the paper are made publicly available and can be used, for instance, to evaluate instrument models (aboard the JWST or on the ground), pipelines, or more sophisticated algorithms dedicated to JWST data analysis. Besides, fusion methods such as the one presented in this paper are shown to be promising tools to fully exploit the unprecedented capabilities of the JWST.

Read this paper on arXiv…

C. Guilloteau, T. Oberlin, O. Berné, et. al.
Thu, 9 Jan 20
8/61

Comments: N/A

Hyperspectral and multispectral image fusion under spectrally varying spatial blurs — Application to high dimensional infrared astronomical imaging [CL]

http://arxiv.org/abs/1912.11868


Hyperspectral imaging has become a significant source of valuable data for astronomers over the past decades. Current instrumental and observing time constraints allow direct acquisition of multispectral images, with high spatial but low spectral resolution, and hyperspectral images, with low spatial but high spectral resolution. To enhance scientific interpretation of the data, we propose a data fusion method which combines the benefits of each image to recover a high spatio-spectral resolution datacube. The proposed inverse problem accounts for the specificities of astronomical instruments, such as spectrally variant blurs. We provide a fast implementation by solving the problem in the frequency domain and in a low-dimensional subspace to efficiently handle the convolution operators as well as the high dimensionality of the data. We conduct experiments on a realistic synthetic dataset of simulated observation of the upcoming James Webb Space Telescope, and we show that our fusion algorithm outperforms state-of-the-art methods commonly used in remote sensing for Earth observation.

Read this paper on arXiv…

C. Guilloteau, T. Oberlin, O. Berné, et. al.
Mon, 30 Dec 19
1/51

Comments: N/A

Probabilistic Super-Resolution of Solar Magnetograms: Generating Many Explanations and Measuring Uncertainties [CL]

http://arxiv.org/abs/1911.01486


Machine learning techniques have been successfully applied to super-resolution tasks on natural images, where visually pleasing results are sufficient. However, in many scientific domains this is not adequate, and estimates of errors and uncertainties are crucial. To address this issue we propose a Bayesian framework that decomposes uncertainties into epistemic and aleatoric uncertainties. We test the validity of our approach by super-resolving images of the Sun’s magnetic field and by generating maps measuring the range of possible high-resolution explanations compatible with a given low-resolution magnetogram.
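
The standard decomposition behind such frameworks follows the law of total variance: aleatoric uncertainty is the average of the per-pixel variances predicted by the network, while epistemic uncertainty is the variance of the predicted means across stochastic forward passes (e.g. with MC dropout). A minimal sketch, where `model` is a hypothetical callable returning a mean map and a variance map from one stochastic pass:

```python
import numpy as np

def decompose_uncertainty(model, lowres, T=50):
    means, variances = [], []
    for _ in range(T):
        mu, var = model(lowres)       # one stochastic forward pass
        means.append(mu)
        variances.append(var)
    means = np.stack(means)
    aleatoric = np.mean(np.stack(variances), axis=0)  # noise in the data
    epistemic = np.var(means, axis=0)                 # model uncertainty
    return means.mean(axis=0), aleatoric, epistemic
```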

Read this paper on arXiv…

X. Gitiaux, S. Maloney, A. Jungbluth, et. al.
Wed, 6 Nov 19
48/57

Comments: N/A

Deep Learning for space-variant deconvolution in galaxy surveys [IMA]

http://arxiv.org/abs/1911.00443


Deconvolution of large survey images with millions of galaxies requires a new generation of methods that can take into account a space-variant Point Spread Function while being both accurate and fast. In this paper we investigate how Deep Learning could be used to perform this task. We employ a U-NET Deep Neural Network architecture to learn, in a supervised setting, parameters adapted for galaxy image processing, and we study two deconvolution strategies. The first approach is a post-processing of a simple Tikhonov deconvolution with a closed-form solution; the second is an iterative deconvolution framework based on the Alternating Direction Method of Multipliers (ADMM). Our numerical results, based on GREAT3 simulations with realistic galaxy images and PSFs, show that both approaches outperform standard techniques based on convex optimization, whether assessed on galaxy image reconstruction or shape recovery. The approach based on Tikhonov deconvolution leads to the most accurate results, except for ellipticity errors at high signal-to-noise ratio, where the ADMM approach performs slightly better. The Tikhonov approach is also more computationally efficient when processing a large number of galaxies, and is therefore recommended in this scenario.
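
The Tikhonov step that the first strategy post-processes has a well-known closed form in Fourier space: with a circular-convolution model Y = H * X + noise, the regularized estimate is conj(H) Y / (|H|^2 + lambda). A minimal sketch, assuming a periodic convolution model and a PSF of the same shape as the image, centered:

```python
import numpy as np

def tikhonov_deconvolve(dirty, psf, lam=1e-2):
    """Closed-form Tikhonov deconvolution under a circulant-PSF assumption."""
    H = np.fft.fft2(np.fft.ifftshift(psf))     # psf centered, same shape
    Y = np.fft.fft2(dirty)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

In the paper's first strategy, a U-NET is then trained to clean up this estimate.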

Read this paper on arXiv…

F. Sureau, A. Lechat and J. Starck
Mon, 4 Nov 19
37/55

Comments: N/A

HEALPix View-order for 3D Radial Self-Navigated Motion-Corrected ZTE MRI [CL]

http://arxiv.org/abs/1910.10276


Compressed sensing has reinvigorated the field of non-Cartesian sampling in magnetic resonance imaging (MRI). Until now, there has been no 3D radial view-order which meets all the desired characteristics for simultaneous dynamic/high-resolution imaging, such as for self-navigated motion-corrected high-resolution neuroimaging. In this work, we examine the use of Hierarchical Equal Area iso-Latitude Pixelization (HEALPix) for the generation of three-dimensional (3D) radial view-orders for MRI, and compare it to a selection of commonly used 3D view-orders. The resulting trajectories were evaluated through simulation of the point spread function and of a slanted-surface object suitable for modulation transfer function, contrast ratio, and SNR measurements. Results from the HEALPix view-order were compared to the Generalized Spiral, 2D Golden Means, and Random view-orders. Finally, we show the first use of the HEALPix view-order to acquire in-vivo brain images.
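
A minimal sketch of the underlying idea, using the healpy package: the (roughly equal-area) HEALPix pixel centres provide a set of readout directions on the sphere. The choice of nside and of the index ordering below are assumptions; the paper's actual view-order construction may permute these directions differently over time.

```python
import numpy as np
import healpy as hp

nside = 16                                    # 12 * nside**2 = 3072 spokes
npix = hp.nside2npix(nside)
x, y, z = hp.pix2vec(nside, np.arange(npix), nest=True)
directions = np.column_stack([x, y, z])       # unit vectors, one per spoke
```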

Read this paper on arXiv…

C. Corum, S. Kruger and V. Magnotta
Thu, 24 Oct 19
4/68

Comments: 4 pages, 6 figures, 2 tables

Adaptive Proximal Gradient Method for Constrained Matrix Factorization [CL]

http://arxiv.org/abs/1910.10094


The Proximal Gradient Method (PGM) is a robust and efficient way to minimize the sum of a smooth convex function $f$ and a non-differentiable convex function $r$. It determines the sizes of gradient steps according to the Lipschitz constant of the gradient of $f$. For many problems in data analysis, the Lipschitz constants are expensive or impossible to compute analytically because they depend on details of the experimental setup and the noise properties of the data. Adaptive optimization methods like AdaGrad choose step sizes according to on-the-fly estimates of the Hessian of $f$. As quasi-Newton methods, they generally outperform first-order gradient methods like PGM and adjust step sizes iteratively and at low computational cost. We propose an iterative proximal quasi-Newton algorithm, AdaProx, that utilizes the adaptive schemes of Adam and its variants (AMSGrad, AdamX, PAdam) and works for arbitrary proxable penalty functions $r$. In test cases for Constrained Matrix Factorization we demonstrate the advantages of AdaProx in fidelity and performance over PGM, especially when factorization components are poorly balanced. The Python implementation of the algorithm presented here is available as an open-source package at https://github.com/pmelchior/proxmin
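
A minimal sketch in the spirit of this approach (see the proxmin package for the authors' implementation): Adam-style moment estimates set per-coordinate step sizes for the gradient step on $f$, followed by a proximal step on $r$. The prox here is assumed to accept per-coordinate step sizes; the variant details (AMSGrad, AdamX, PAdam) are omitted.

```python
import numpy as np

def adaprox(x, grad_f, prox_r, n_iter=200, lr=1e-2,
            beta1=0.9, beta2=0.999, eps=1e-8):
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, n_iter + 1):
        g = grad_f(x)
        m = beta1 * m + (1 - beta1) * g          # first moment (Adam)
        v = beta2 * v + (1 - beta2) * g ** 2     # second moment (Adam)
        m_hat = m / (1 - beta1 ** t)
        v_hat = v / (1 - beta2 ** t)
        step = lr / (np.sqrt(v_hat) + eps)       # per-coordinate step size
        x = prox_r(x - step * m_hat, step)       # gradient, then prox
    return x

# Example: non-negativity constraint, r = indicator of x >= 0:
# adaprox(x0, grad_f, lambda z, s: np.maximum(z, 0.0))
```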

Read this paper on arXiv…

P. Melchior, R. Joseph and F. Moolekamp
Wed, 23 Oct 19
53/64

Comments: 12 pages, 5 figures; submitted to Optimization & Engineering

Unit panel nodes detection by CNN on FAST reflector [IMA]

http://arxiv.org/abs/1909.11806


The 500-meter Aperture Spherical Radio Telescope (FAST) has an active reflector. During observations, the reflector is deformed into a 300-meter paraboloid. To improve its surface accuracy, we propose a photogrammetry scheme to measure the positions of 2226 nodes on the reflector. Detecting these nodes in the photos is the key problem in photogrammetry. This paper applies a Convolutional Neural Network (CNN) with candidate regions to detect the nodes in the photos. The experimental results show a high recognition rate of 91.5%, which is much higher than the recognition rate of traditional edge detection.

Read this paper on arXiv…

Z. Zhang, L. Zhu, W. Tang, et. al.
Fri, 27 Sep 19
36/64

Comments: 13 pages, 12 figures, 2 tables, CNN applied on FAST’s reflector measurement; matches the published version in RAA

Cleaning our own Dust: Simulating and Separating Galactic Dust Foregrounds with Neural Networks [IMA]

http://arxiv.org/abs/1909.06467


Separating galactic foreground emission from maps of the cosmic microwave background (CMB), and quantifying the uncertainty in the CMB maps due to errors in foreground separation are important for avoiding biases in scientific conclusions. Our ability to quantify such uncertainty is limited by our lack of a model for the statistical distribution of the foreground emission. Here we use a Deep Convolutional Generative Adversarial Network (DCGAN) to create an effective non-Gaussian statistical model for intensity of emission by interstellar dust. For training data we use a set of dust maps inferred from observations by the Planck satellite. A DCGAN is uniquely suited for such unsupervised learning tasks as it can learn to model a complex non-Gaussian distribution directly from examples. We then use these simulations to train a second neural network to estimate the underlying CMB signal from dust-contaminated maps. We discuss other potential uses for the trained DCGAN, and the generalization to polarized emission from both dust and synchrotron.

Read this paper on arXiv…

K. Aylor, M. Haq, L. Knox, et. al.
Tue, 17 Sep 19
73/98

Comments: N/A

Image Processing in Python With Montage [IMA]

http://arxiv.org/abs/1908.09753


The Montage image mosaic engine has found wide applicability in astronomy research, has been integrated into processing environments, and is an exemplar application for the development of advanced cyber-infrastructure. It is written in C to provide performance and portability. Linking C/C++ libraries to the Python kernel at run time as binary extensions allows them to run under Python at compiled speeds and enables users to take advantage of all the functionality in Python. We have built Python binary extensions of the 59 ANSI-C modules that make up version 5 of the Montage toolkit. This has involved turning the code into a C library, with driver code fully separated to reproduce the calling sequence of the command-line tools, and then adding Python and C linkage code with the Cython library, which acts as a bridge between general C libraries and the Python interface. We will demonstrate how to use these Python binary extensions to perform image processing, including reprojecting and resampling images, rectifying background emission to a common level, creating image mosaics that preserve the calibration and astrometric fidelity of the input images, creating visualizations with an adaptive stretch algorithm, processing HEALPix images, and analyzing and managing image metadata.

Read this paper on arXiv…

J. Good and G. Berriman
Tue, 27 Aug 19
85/85

Comments: 4 pages, 1 figure. Submitted to Proceedings of ADASS XXVIII

Contour Detection in Cassini ISS images based on Hierarchical Extreme Learning Machine and Dense Conditional Random Field [IMA]

http://arxiv.org/abs/1908.08279


In Cassini ISS (Imaging Science Subsystem) images, contour detection is often performed on disk-resolved objects to accurately locate their centers, so contour detection is a key problem. Traditional edge detection methods, such as Canny and Roberts, often extract contours with too much interior detail and noise. Although deep convolutional neural networks have been applied successfully to many image tasks, such as classification and object detection, they need more time and computing resources. In this paper, a contour detection algorithm based on H-ELM (Hierarchical Extreme Learning Machine) and DenseCRF (Dense Conditional Random Field) is proposed for Cassini ISS images. The experimental results show that this algorithm’s performance is better than that of traditional machine learning methods such as SVM and ELM, and even of deep convolutional neural networks, and the extracted contours are closer to the actual contours. Moreover, it can be trained and tested quickly on an ordinary PC configuration, and can therefore be applied to contour detection for Cassini ISS images.

Read this paper on arXiv…

X. Yang, Q. Zhang and Z. Li
Fri, 23 Aug 19
5/57

Comments: N/A

Cosmological N-body simulations: a challenge for scalable generative models [CL]

http://arxiv.org/abs/1908.05519


Deep generative models, such as Generative Adversarial Networks (GANs) or Variational Autoencoders, have been demonstrated to produce images of high visual quality. However, the existing hardware on which these models are trained severely limits the size of the images that can be generated. The rapid growth of high-dimensional data in many fields of science therefore poses a significant challenge for generative models. In cosmology, the large-scale 3D matter distribution, modeled with N-body simulations, plays a crucial role in understanding the evolution of structures in the universe. As these simulations are computationally very expensive, GANs have recently generated interest as a possible method to emulate these datasets, but they have so far been mostly limited to 2D data. In this work, we introduce a new benchmark for the generation of 3D N-body simulations, in order to stimulate new ideas in the machine learning community and move closer to the practical use of generative models in cosmology. As a first benchmark result, we propose a scalable GAN approach for training a generator of N-body 3D cubes. Our technique relies on two key building blocks: (i) splitting the generation of the high-dimensional data into smaller parts, and (ii) using a multi-scale approach that efficiently captures global image features that might otherwise be lost in the splitting process. We evaluate the performance of our model for the generation of N-body samples using various statistical measures commonly used in cosmology. Our results show that the proposed model produces samples of high visual quality, although the statistical analysis reveals that capturing rare features in the data poses significant problems for generative models. We make the data, quality evaluation routines, and the proposed GAN architecture publicly available at https://github.com/nperraud/3DcosmoGAN

Read this paper on arXiv…

N. Perraudin, A. Srivastava, A. Lucchi, et. al.
Fri, 16 Aug 19
3/54

Comments: N/A

CMB-GAN: Fast Simulations of Cosmic Microwave background anisotropy maps using Deep Learning [CEA]

http://arxiv.org/abs/1908.04682


The Cosmic Microwave Background (CMB) has been a cornerstone of many cosmology experiments and studies since it was discovered in 1964. Traditional computational models like CAMB that are used for generating CMB anisotropy maps are extremely resource intensive and act as a bottleneck in cosmology experiments that require a large amount of CMB data for analysis. In this paper, we present a new approach to the generation of CMB anisotropy maps using a machine learning technique called a Generative Adversarial Network (GAN). We train our deep generative model to learn the complex distribution of CMB maps and efficiently generate new sets of CMB data in the form of 2D patches of anisotropy maps. We limit our experiment to the generation of 56° and 112° patches of CMB maps. We have also trained a multilayer perceptron model to estimate the baryon density from a CMB map; we use this model to evaluate the performance of our generative model using diagnostic measures such as the histogram of pixel intensities, the standard deviation of the pixel intensity distribution, the power spectrum, the cross power spectrum, the correlation matrix of the power spectrum, and peak counts.

Read this paper on arXiv…

A. Mishra, P. Reddy and R. Nigam
Wed, 14 Aug 19
13/60

Comments: 8 pages, cosmic microwave background radiation, deep learning, generative adversarial network. arXiv admin note: substantial text overlap with arXiv:1903.12253

Maximum likelihood estimation for disk image parameters [CL]

http://arxiv.org/abs/1907.10557


We present a novel technique for estimating disc parameters from a 2D image. It is based on the maximum likelihood approach, utilising both edge coordinates and the image intensity gradients. We emphasise the following advantages of our likelihood model. It has closed-form formulae for parameter estimation, therefore requiring fewer computational resources than iterative algorithms. The likelihood model naturally distinguishes the outer and inner annulus edges. The proposed technique was evaluated on both synthetic and real data.

Read this paper on arXiv…

M. Kornilov
Thu, 25 Jul 19
4/72

Comments: 12 pages, 4 figures

Self-supervised Learning with Physics-aware Neural Networks I: Galaxy Model Fitting [GA]

http://arxiv.org/abs/1907.03957


Estimating the parameters of a model describing a set of observations using a neural network is generally solved in a supervised way. In cases where we do not have access to the model’s true parameters, this approach cannot be applied. Standard unsupervised learning techniques, on the other hand, do not produce meaningful or semantic representations that can be associated with the model’s parameters. Here we introduce a self-supervised hybrid network that combines traditional neural network elements with analytic or numerical models which represent a physical process to be learned by the system. Self-supervised learning is achieved by generating an internal representation equivalent to the parameters of the physical model. This semantic representation is used to evaluate the model and compare it to the input data during training. The Semantic Autoencoder architecture described here shares the robustness of neural networks while including an explicit model of the data, learns in an unsupervised way, and estimates, by construction, parameters with direct physical interpretation. As an illustrative application, we perform unsupervised learning for 2D model fitting of exponential light profiles.
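
A minimal PyTorch sketch of the idea for the exponential-profile example: the encoder's bottleneck *is* the physical parameters, and the "decoder" is the analytic model itself, so training needs no ground-truth parameters. The architecture, image size, and the restriction to a centered profile with two parameters are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

class SemanticAE(nn.Module):
    def __init__(self, npix=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(npix * npix, 128), nn.ReLU(),
            nn.Linear(128, 2), nn.Softplus())  # -> (amplitude, scale length)
        yy, xx = torch.meshgrid(torch.arange(npix), torch.arange(npix),
                                indexing='ij')
        r = torch.sqrt((xx - npix / 2) ** 2 + (yy - npix / 2) ** 2)
        self.register_buffer('r', r)           # fixed radial grid

    def forward(self, img):
        amp, h = self.encoder(img).unbind(dim=1)   # semantic parameters
        model = amp[:, None, None] * torch.exp(-self.r / h[:, None, None])
        return model, (amp, h)

# Training: minimize MSE between `model` and `img` -- self-supervised,
# since the analytic decoder has no trainable weights.
```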

Read this paper on arXiv…

M. Aragon-Calvo
Wed, 10 Jul 19
27/53

Comments: N/A

Fast Fourier-transform calculation of artificial night sky brightness maps [IMA]

http://arxiv.org/abs/1907.02891


Light pollution poses a growing threat to optical astronomy, in addition to its detrimental impacts on the natural environment, the intangible heritage of humankind related to the contemplation of the starry sky and, potentially, on human health. The computation of maps showing the spatial distribution of several light pollution related functions (e.g. the anthropogenic zenithal night sky brightness, or the average brightness of the celestial hemisphere) is a key tool for light pollution monitoring and control, providing the scientific rationale for the adoption of informed decisions on public lighting and astronomical site preservation. The calculation of such maps from satellite radiance data for wide regions of the planet with sub-kilometric spatial resolution often implies a huge amount of basic pixel operations, requiring in many cases extremely large computation times. In this paper we show that, using adequate geographical projections, a wide set of light pollution map calculations can be reframed in terms of two-dimensional convolutions that can be easily evaluated using conventional fast Fourier-transform (FFT) algorithms, with typical computation times smaller than 10^-6 s per output pixel.
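
A minimal sketch of the recasting: if the contribution of a light source to the map value depends only on the source-observer distance, the whole map is a 2D convolution of the radiance map with a radial kernel, which FFT-based routines evaluate cheaply. The kernel below is a placeholder decay law, not an actual atmospheric propagation function.

```python
import numpy as np
from scipy.signal import fftconvolve

radiance = np.random.rand(1024, 1024)      # stand-in satellite radiance map
y, x = np.mgrid[-128:129, -128:129]
dist = np.hypot(x, y) + 1.0                # distance in pixels, avoid r = 0
kernel = 1.0 / dist**2.5                   # placeholder radial decay law
brightness = fftconvolve(radiance, kernel, mode='same')
```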

Read this paper on arXiv…

S. Bará, F. Falchi, R. Furgoni, et. al.
Mon, 8 Jul 19
19/43

Comments: 22 pages, 4 figures

Standardized spectral and radiometric calibration of consumer cameras [CL]

http://arxiv.org/abs/1906.04155


Consumer cameras, particularly onboard smartphones and UAVs, are now commonly used as scientific instruments. However, their data processing pipelines are not optimized for quantitative radiometry and their calibration is more complex than that of scientific cameras. The lack of a standardized calibration methodology limits the interoperability between devices and, in the ever-changing market, ultimately the lifespan of projects using them. We present a standardized methodology and database (SPECTACLE) for spectral and radiometric calibrations of consumer cameras, including linearity, bias variations, read-out noise, dark current, ISO speed and gain, flat-field, and RGB spectral response. This includes golden standard ground-truth methods and do-it-yourself methods suitable for non-experts. Applying this methodology to seven popular cameras, we found high linearity in RAW but not JPEG data, inter-pixel gain variations >400% correlated with large-scale bias and read-out noise patterns, non-trivial ISO speed normalization functions, flat-field correction factors varying by up to 2.79 over the field of view, and both similarities and differences in spectral response. Moreover, these results differed wildly between camera models, highlighting the importance of standardization and a centralized database.

Read this paper on arXiv…

O. Burggraaff, N. Schmidt, J. Zamorano, et. al.
Tue, 11 Jun 19
8/60

Comments: 27 pages, 11 figures, accepted for publication in Optics Express

A Curated Image Parameter Dataset from Solar Dynamics Observatory Mission [SSA]

http://arxiv.org/abs/1906.01062


We provide a large image parameter dataset extracted from the Solar Dynamics Observatory (SDO) mission’s AIA instrument, for the period of January 2011 through the current date, with a cadence of six minutes, for nine wavelength channels. The volume of the dataset for each year is just short of 1 TiB. Towards achieving better results in the region classification of active regions and coronal holes, we improve upon the performance of a set of ten image parameters through an in-depth evaluation of various assumptions that are necessary for the calculation of these image parameters. Then, where possible, we devise a method for finding appropriate settings for the parameter calculations, as well as a validation task to demonstrate our improved results. In addition, we include comparisons of JP2 and FITS image formats using supervised classification models, by tuning the parameters specific to the format of the images from which they are extracted, and specific to each wavelength. The results of these comparisons show that utilizing JP2 images, which are significantly smaller files, is not detrimental to the region classification task that these parameters were originally intended for. Finally, we compute the tuned parameters on the AIA images and provide a public API (this http URL) to access the dataset. This dataset can be used in a range of studies on AIA images, such as content-based image retrieval or tracking of solar events, where dimensionality reduction on the images is necessary for the feasibility of the tasks.

Read this paper on arXiv…

A. Ahmadzadeh, D. Kempton and R. Angryk
Wed, 5 Jun 19
23/74

Comments: Accepted to The Astrophysical Journal Supplement Series, 2019, 29 pages

Fast Solar Image Classification Using Deep Learning and its Importance for Automation in Solar Physics [SSA]

http://arxiv.org/abs/1905.13575


The volume of data being collected in solar physics has increased exponentially over the past decade, and with the introduction of the $\textit{Daniel K. Inouye Solar Telescope}$ (DKIST) we will be entering the age of petabyte solar data. Automated feature detection will be an invaluable tool for post-processing solar images to create catalogues of data ready for researchers to use. We propose a deep learning model to accomplish this: a deep convolutional neural network is adept at feature extraction and at processing images quickly. We train our network using data from $\textit{Hinode/Solar Optical Telescope}$ (SOT) H$\alpha$ images of a small subset of solar features with different geometries: filaments, prominences, flare ribbons, sunspots and the quiet Sun ($\textit{i.e.}$ the absence of any of the other four features). We achieve near-perfect performance on classifying unseen images from SOT ($\approx$99.9\%) in 4.66 seconds. We also explore, for the first time, transfer learning in a solar context. Transfer learning uses pre-trained deep neural networks to help train new deep learning models, $\textit{i.e.}$ it teaches a new model. We show that our network is robust to changes in resolution by degrading images from SOT resolution ($\approx$0.33$^{\prime \prime}$ at $\lambda$=6563\AA{}) to $\textit{Solar Dynamics Observatory/Atmospheric Imaging Assembly}$ (SDO/AIA) resolution ($\approx$1.2$^{\prime \prime}$) without a change in the performance of our network. However, we also observe that the network fails to generalise to sunspots from SDO/AIA bands 1600/1700\AA{} due to small-scale brightenings around the sunspots, and to prominences in SDO/AIA 304\AA{} due to coronal emission.
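
A minimal sketch of the transfer-learning recipe described above, assuming a recent torchvision: start from a network pre-trained on a large generic dataset, replace the classifier head for the five solar classes, and fine-tune. The choice of ResNet-18 and the freezing strategy are illustrative, not the paper's exact setup.

```python
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                # freeze pre-trained features
model.fc = nn.Linear(model.fc.in_features, 5)  # filament, prominence,
                                               # flare ribbon, sunspot, quiet Sun
# Fine-tune model.fc (and optionally later layers) on the solar images.
```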

Read this paper on arXiv…

J. Armstrong and L. Fletcher
Mon, 3 Jun 19
18/59

Comments: 19 pages, 9 figures, accepted for publication in Solar Physics

A parallel & automatically tuned algorithm for multispectral image deconvolution [IMA]

http://arxiv.org/abs/1905.08468


In the era of big data in radio astronomy, image reconstruction algorithms are challenged to estimate clean images given limited computing resources and time. This article is driven by the extensive need for large-scale image reconstruction for the future Square Kilometre Array (SKA), the largest low- and intermediate-frequency radio telescope of the next decades. This work proposes a scalable wideband deconvolution algorithm called MUFFIN, which stands for ‘MUlti Frequency image reconstruction For radio INterferometry’. MUFFIN estimates the sky images at various frequency bands given the corresponding dirty images and point spread functions. The reconstruction is achieved by minimizing a data fidelity term and joint spatial and spectral sparse analysis regularization terms. It is consequently non-parametric with respect to the spectral behaviour of radio sources. The MUFFIN algorithm is endowed with a parallel implementation and an automatic tuning of the regularization parameters, making it scalable and well suited for big data applications such as SKA. Comparisons between MUFFIN and the state-of-the-art wideband reconstruction algorithm are provided.

Read this paper on arXiv…

R. Ammanouil, A. Ferrari, D. Mary, et. al.
Wed, 22 May 19
17/59

Comments: N/A

Development of Systematic Image Preprocessing of LAPAN-A3/IPB Multispectral Images [CL]

http://arxiv.org/abs/1901.09189


As with any other satellite images, LAPAN-A3/IPB multispectral images suffer from both geometric and radiometric distortions, which need to be corrected. LAPAN, as the satellite owner, has developed an image preprocessing algorithm to process raw images into systematically corrected images. This research aims to evaluate the performance of the developed algorithm, particularly the lens vignetting and band co-registration corrections, as well as the performance of direct image georeferencing. Lens vignetting distortion in the images was corrected using pre-flight calibration data, while direct georeferencing was calculated using satellite metadata on position and attitude. Meanwhile, band co-registration correction was conducted entirely on the image being processed, using an image matching approach. The results show that lens vignetting effects can be suppressed significantly, from about 40 percent down to 10 percent; band co-registration error can be reduced to below 2-3 pixels on average; and the calculated direct georeferencing has an accuracy of about 3000 meters. Overall, the developed image preprocessing algorithm performs moderately well on LAPAN-A3/IPB multispectral images.
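
A minimal sketch of the vignetting correction step, assuming a pre-flight calibration exposure of a uniform target: the relative sensitivity of each pixel is known from that exposure, and raw frames are divided by the normalized gain map. Details of the LAPAN pipeline (per-band maps, clipping thresholds) are not reproduced here.

```python
import numpy as np

def correct_vignetting(raw, flat):
    """Divide a raw frame by the normalized pre-flight vignetting profile."""
    gain = flat / flat.max()                 # relative sensitivity, <= 1
    return raw / np.clip(gain, 1e-3, None)   # avoid division by ~0 at edges
```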

Read this paper on arXiv…

P. Hakim, A. Syafrudin, S. Salaswati, et. al.
Tue, 29 Jan 19
19/62

Comments: 10 pages, 16 figures, journal

Rethinking Image Sensor Noise for Forensic Advantage [CL]

http://arxiv.org/abs/1808.07971


Sensor pattern noise has been found to be a reliable tool for providing information relating to the provenance of an image. Conventionally, sensor pattern noise is modelled as a mutual interaction of pixel non-uniformity noise and dark current. By using a wavelet denoising filter it is possible to isolate a unique signal within a sensor caused by the way the silicon reacts non-uniformly to light. This signal is often referred to as a fingerprint. To obtain the estimate of this photo response non-uniformity, multiple sample images are averaged and filtered to derive a noise residue. This process and model, while useful in providing insight into an image’s provenance, fail to take into account additional sources of noise picked up during the process. These other sources of noise include digital processing artefacts collectively known as camera noise, image compression artefacts, lens artefacts, and image content. By analysing the diversity of sources of noise remaining within the noise residue, we show that further insight is possible within a unified sensor pattern noise concept, which opens the field to approaches for obtaining fingerprints using fewer resources with comparable performance to existing methods.
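
A minimal sketch of the conventional pipeline analysed above, using PyWavelets: each image is wavelet-denoised, the residual (image minus denoised image) isolates sensor noise, and averaging residuals over many images estimates the fingerprint. The universal soft-threshold used here is a simple stand-in, not the tuned filter of the forensics literature.

```python
import numpy as np
import pywt

def noise_residual(img, wavelet='db8', level=4):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate
    thr = sigma * np.sqrt(2 * np.log(img.size))          # universal threshold
    den_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode='soft') for c in detail)
        for detail in coeffs[1:]]
    den = pywt.waverec2(den_coeffs, wavelet)[:img.shape[0], :img.shape[1]]
    return img - den                                     # noise residue

def fingerprint(images):
    return np.mean([noise_residual(im) for im in images], axis=0)
```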

Read this paper on arXiv…

R. Matthews, M. Sorell and N. Falkner
Mon, 27 Aug 18
26/46

Comments: 17 pages, 10 figures, preprint for journal submission, paper is based on a chapter of a thesis

Experimental validation of joint phase and amplitude wave-front sensing with coronagraphic phase diversity for high-contrast imaging [IMA]

http://arxiv.org/abs/1807.07140


Context. The next generation of space-borne instruments dedicated to the direct detection of exoplanets requires unprecedented levels of wavefront control precision. Coronagraphic wavefront sensing techniques for these instruments must measure both the phase and amplitude of the optical aberrations using the scientific camera as a wavefront sensor.
Aims. In this paper, we develop an extension of coronagraphic phase diversity to the estimation of the complex electric field, that is, the joint estimation of phase and amplitude.
Methods. We introduce the formalism for complex coronagraphic phase diversity. We demonstrate experimentally on the Très Haute Dynamique testbed at the Observatoire de Paris that it is possible to reconstruct phase and amplitude aberrations with subnanometric precision using coronagraphic phase diversity. Finally, we perform the first comparison between the complex wavefront estimated using coronagraphic phase diversity (which relies on time modulation of the speckle pattern) and the one reconstructed by the self-coherent camera (which relies on spatial modulation of the speckle pattern).
Results. We demonstrate that coronagraphic phase diversity retrieves the complex wavefront with subnanometric precision, in good agreement with the reconstruction performed using the self-coherent camera.
Conclusions. This result paves the way for coronagraphic phase diversity as a candidate coronagraphic wavefront sensor for very high contrast space missions.

Read this paper on arXiv…

O. Herscovici-Schiller, L. Mugnier, P. Baudoz, et. al.
Fri, 20 Jul 18
26/63

Comments: Reproduced with permission from Astronomy & Astrophysics, Copyright ESO

Resolution and accuracy of non-linear regression of PSF with artificial neural networks [CL]

http://arxiv.org/abs/1806.08689


In a previous work we demonstrated a novel numerical model for the point spread function (PSF) of an optical system that can efficiently model both experimental measurements and lens design simulations of the PSF. The novelty lies in the portability and the parameterization of this model, which allows for completely new ways to validate optical systems; this is especially interesting for mass-production optics, as in the automotive industry, but also for ophthalmology. The numerical basis for this model is a non-linear regression of the PSF with an artificial neural network (ANN). In this work we examine two important aspects of this model: the spatial resolution and the accuracy of the model. Measurement and simulation of a PSF can have a much higher resolution than the typical pixel size used in current camera sensors, especially those for the automotive industry. We discuss the influence this has on the topology of the ANN and on the final application where the modeled PSF is actually used. Another important influence on the accuracy of the trained ANN is the error metric used during training. The PSF is a distinctly non-linear function, which varies strongly over field and defocus, but nonetheless exhibits strong symmetries and spatial relations. Therefore we examine different distance and similarity measures and discuss their influence on the modeling performance of the ANN.

Read this paper on arXiv…

M. Lehmann, C. Wittpahl, H. Zakour, et. al.
Mon, 25 Jun 18
31/54

Comments: 12 pages, 9 figures, submitted and accepted for SPIE Optical Systems Design, 2018, Frankfurt, Germany. arXiv admin note: text overlap with arXiv:1801.02197

Wideband Super-resolution Imaging in Radio Interferometry via Low Rankness and Joint Average Sparsity Models (HyperSARA) [CL]

http://arxiv.org/abs/1806.04596


We propose a new approach within the versatile framework of convex optimization to solve the radio-interferometric wideband imaging problem. Our approach, dubbed HyperSARA, solves a sequence of weighted nuclear norm and $\ell_{2,1}$ minimization problems promoting low rankness and joint average sparsity of the wideband model cube. On the one hand, enforcing low rankness enhances the overall resolution of the reconstructed model cube by exploiting the correlation between the different channels. On the other hand, promoting joint average sparsity improves the overall sensitivity by rejecting artefacts present on the different channels. An adaptive preconditioned primal-dual algorithm is adopted to solve the minimization problem. The algorithmic structure is highly scalable to large data sets and allows for imaging in the presence of unknown noise levels and calibration errors. We showcase the superior performance of the proposed approach, reflected in high-resolution images, on simulations and real VLA observations, with respect to single-channel imaging and the CLEAN-based wideband imaging algorithm in the WSCLEAN software. Our MATLAB code is available online on GitHub.
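
A minimal sketch of the two proximal operators at the core of these priors, in their unweighted form (the algorithm itself uses reweighted variants inside a preconditioned primal-dual solver). Here X is the wideband cube flattened to a (pixels, channels) matrix:

```python
import numpy as np

def prox_nuclear(X, tau):
    """Singular-value soft-thresholding: promotes low rankness."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def prox_l21(X, tau):
    """Row-wise group soft-thresholding: promotes joint average sparsity."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```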

Read this paper on arXiv…

A. Abdulaziz, A. Dabbech and Y. Wiaux
Wed, 13 Jun 18
20/57

Comments: N/A

Realistic Image Degradation with Measured PSF [CL]

http://arxiv.org/abs/1801.02197


Training autonomous vehicles requires lots of driving sequences in all situations. Typically, a simulation environment (software-in-the-loop, SiL) accompanies real-world test drives to systematically vary environmental parameters. A missing piece in the optical model of those SiL simulations is the sharpness, given in linear system theory by the point spread function (PSF) of the optical system. We present a novel numerical model for the PSF of an optical system that can efficiently model both experimental measurements and lens design simulations of the PSF. The numerical basis for this model is a non-linear regression of the PSF with an artificial neural network (ANN). The novelty lies in the portability and the parameterization of this model, which allows this model to be applied in basically any conceivable optical simulation scenario, e.g. inserting a measured lens into a computer game to train autonomous vehicles. We present a lens measurement series, yielding a numerical function for the PSF that depends only on the parameters defocus, field and azimuth. By convolving existing images and videos with this PSF model we apply the measured lens as a transfer function, thereby generating an image as if it were seen with the measured lens itself. Applications of this method exist in any optical scenario, but we focus on the context of autonomous driving, where the quality of the detection algorithms depends directly on the optical quality of the camera system used. With the parameterization of the optical model we present a method to validate the functional and safety limits of camera-based ADAS based on the real, measured lens actually used in the product.
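
A minimal sketch of applying such a field-dependent PSF model: the image is tiled, each tile is convolved with the PSF evaluated at its field position and azimuth, and the tiles are reassembled. `psf_model` stands in for the trained ANN described above; blending between tiles and sub-tile PSF variation are omitted for brevity.

```python
import numpy as np
from scipy.signal import fftconvolve

def degrade(image, psf_model, tile=64, defocus=0.0):
    out = np.zeros_like(image, dtype=float)
    cy, cx = np.array(image.shape) / 2.0
    for y in range(0, image.shape[0], tile):
        for x in range(0, image.shape[1], tile):
            ty, tx = y + tile / 2.0, x + tile / 2.0
            field = np.hypot(ty - cy, tx - cx)      # radial field position
            azimuth = np.arctan2(ty - cy, tx - cx)
            psf = psf_model(defocus, field, azimuth)
            patch = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = fftconvolve(patch, psf, mode='same')
    return out
```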

Read this paper on arXiv…

C. Wittpahl, H. Zakour, M. Lehmann, et. al.
Tue, 9 Jan 18
40/94

Comments: 5 pages, 12 figures, submitted and accepted for IS&T Electronic Imaging, Autonomous Vehicles and Machines 2018