http://arxiv.org/abs/2005.07773
The ability to generate physically plausible ensembles of variable sources is critical to the optimization of time-domain survey cadences and the training of classification models on datasets with few to no labels. Traditional data augmentation techniques expand training sets by reenvisioning observed exemplars, seeking to simulate observations of specific training sources under different (exogenous) conditions. Unlike fully theory-driven models, these approaches do not typically allow principled interpolation or extrapolation. Moreover, the principal drawback of theory-driven models lies in the prohibitive computational cost of simulating source observables from ab initio parameters. In this work, we propose a computationally tractable machine learning approach to generating realistic light curves of periodic variables that can take physical parameters and variability classes as inputs. Our deep generative model, inspired by the Transparent Latent Space Generative Adversarial Networks (TL-GANs), uses a Variational Autoencoder (VAE) architecture with Temporal Convolutional Network (TCN) layers, trained on OGLE-III optical light curves and physical characteristics (e.g., effective temperature and absolute magnitude) from Gaia DR2. A test using the temperature–shape relationship of RR Lyrae stars demonstrates the efficacy of our generative “Physics-Enhanced Latent Space VAE” (PELS-VAE) model. Such deep generative models, serving as non-linear, non-parametric emulators, present a novel tool for astronomers to create synthetic time series over arbitrary cadences.
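To make the architecture concrete, below is a minimal sketch of a conditional VAE for light curves that concatenates auxiliary physical parameters (e.g., effective temperature, absolute magnitude, period) to both the encoder features and the latent code, and uses dilated 1D convolutions as a stand-in for the TCN layers. All class names, layer sizes, and hyperparameters here are illustrative assumptions, not the authors' PELS-VAE implementation.

```python
# Illustrative sketch only: a small conditional VAE over fixed-length light curves,
# conditioned on physical parameters. Layer sizes are arbitrary choices.
import torch
import torch.nn as nn

class CondVAE(nn.Module):
    def __init__(self, seq_len=100, latent_dim=8, n_phys=3):
        super().__init__()
        # Encoder: dilated 1D convolutions over the (folded) light curve,
        # loosely mimicking TCN-style temporal receptive fields.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Flatten(),
        )
        enc_out = 32 * seq_len
        self.to_mu = nn.Linear(enc_out + n_phys, latent_dim)
        self.to_logvar = nn.Linear(enc_out + n_phys, latent_dim)
        # Decoder: latent code + physical parameters -> reconstructed light curve.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + n_phys, 64), nn.ReLU(),
            nn.Linear(64, seq_len),
        )

    def forward(self, x, phys):
        # x: (batch, 1, seq_len) magnitudes; phys: (batch, n_phys) conditioning vector
        h = torch.cat([self.encoder(x), phys], dim=1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        x_rec = self.decoder(torch.cat([z, phys], dim=1))
        return x_rec, mu, logvar

def vae_loss(x, x_rec, mu, logvar):
    # Reconstruction term plus KL divergence toward a standard normal prior.
    rec = nn.functional.mse_loss(x_rec, x.squeeze(1), reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```

Once trained, sampling z from the prior and pairing it with chosen physical parameters would emulate light curves of a source with those properties, which is the sense in which such a model acts as a non-parametric emulator.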
J. Martínez-Palomera, J. Bloom and E. Abrahams
Tue, 19 May 20
Comments: 19 pages, 9 figures, 4 tables