Realistic galaxy images and improved robustness in machine learning tasks from generative modelling [GA]

http://arxiv.org/abs/2203.11956


We examine the capability of generative models to produce realistic galaxy images. We show that mixing generated data with the original data improves the robustness in downstream machine learning tasks. We focus on three different data sets: analytical Sérsic profiles, real galaxies from the COSMOS survey, and galaxy images produced with the SKIRT code from the IllustrisTNG simulation. We quantify the performance of each generative model using the Wasserstein distance between the distributions of morphological properties (e.g. the Gini coefficient, asymmetry, and ellipticity), the surface brightness distribution on various scales (as encoded by the power spectrum), the bulge statistic and the colour for the generated and source data sets. With an average Wasserstein distance (Fréchet Inception Distance) of $7.19 \times 10^{-2}\, (0.55)$, $5.98 \times 10^{-2}\, (1.45)$ and $5.08 \times 10^{-2}\, (7.76)$ for the Sérsic, COSMOS and SKIRT data sets, respectively, our best models convincingly reproduce even the most complicated galaxy properties and create images that are visually indistinguishable from the source data. We demonstrate that by supplementing the training data set with generated data, it is possible to significantly improve the robustness against domain shifts and out-of-distribution data. In particular, we train a convolutional neural network to denoise a data set of mock observations. By mixing generated images into the original training data, we obtain an improvement of $11$ and $45$ per cent in the model performance regarding domain shifts in the physical pixel size and background noise level, respectively.
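The abstract's headline metric is the Wasserstein distance between distributions of morphological properties measured on the generated and source samples. As a minimal sketch of that kind of comparison (not the paper's pipeline), the 1-D case can be computed with `scipy.stats.wasserstein_distance`; the Gini-coefficient samples below are synthetic placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Hypothetical morphological-property samples (e.g. Gini coefficients)
# for a source galaxy set and a generated galaxy set. These numbers are
# illustrative only and are not taken from the paper.
rng = np.random.default_rng(0)
gini_source = rng.normal(loc=0.55, scale=0.05, size=1000)
gini_generated = rng.normal(loc=0.56, scale=0.05, size=1000)

# 1-D Wasserstein (earth mover's) distance between the two empirical
# distributions of the property; smaller means the generated sample
# matches the source sample more closely.
w = wasserstein_distance(gini_source, gini_generated)
print(f"Wasserstein distance: {w:.4f}")
```

In the paper this comparison is averaged over several morphological statistics; the snippet shows the per-property building block only.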


B. Holzschuh, C. O’Riordan, S. Vegetti, et al.
Thu, 24 Mar 22

Comments: 33 pages, 21 figures, submitted to MNRAS, comments welcome