http://arxiv.org/abs/1909.05966
We wish to achieve the Holy Grail of Bayesian inference with deep-learning techniques: training a neural network to instantly produce the posterior $p(\theta|D)$ for the parameters $\theta$, given the data $D$. In the setting of gravitational-wave astronomy, we have access to a generative model for signals in noisy data (i.e., we can instantiate the prior $p(\theta)$ and likelihood $p(D|\theta)$), but are unable to economically compute the posterior for even a single realization of $D$. Here we demonstrate how a network may be taught to estimate $p(\theta|D)$ regardless, by simply showing it numerous realizations of $D$.
A. Chua and M. Vallisneri
Mon, 16 Sep 19
Comments: 6 pages, 4 figures
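The core idea — learning the posterior purely from simulated (parameter, data) pairs drawn from the prior and likelihood — can be illustrated in a toy conjugate-Gaussian model. The sketch below is my illustration, not the authors' network: it stands in a full neural posterior estimator with the simplest possible regressor, and shows that least-squares training on simulations recovers the analytic posterior mean and spread.

```python
import numpy as np

# Toy amortized inference (illustration only, not the paper's method):
# prior      theta ~ N(0, 1)
# likelihood D | theta ~ N(theta, sigma^2)
# Analytic posterior: E[theta|D] = D/(1+sigma^2), Var = sigma^2/(1+sigma^2).

rng = np.random.default_rng(0)
sigma = 0.5
n = 200_000  # number of simulated training pairs

theta = rng.standard_normal(n)               # draw parameters from the prior
D = theta + sigma * rng.standard_normal(n)   # simulate data from the likelihood

# "Train" the simplest regressor theta ~ w * D by least squares.
# Minimizing E[(theta - w*D)^2] over simulations drives w toward the
# coefficient of the true posterior mean, 1/(1 + sigma^2) = 0.8 here.
w = np.dot(D, theta) / np.dot(D, D)

# The residual spread approximates the posterior std, sqrt(0.2) ~= 0.447.
post_std = np.std(theta - w * D)

print(f"learned w = {w:.3f}  (analytic 0.800)")
print(f"learned posterior std = {post_std:.3f}  (analytic 0.447)")
```

A real implementation replaces the linear map with a neural density estimator trained on the same kind of simulated pairs, so that a single forward pass yields an approximate posterior for any new realization of D.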