Bayesian error propagation for neural-net based parameter inference [IMA]

http://arxiv.org/abs/2205.11587


Neural nets have become popular for accelerating parameter inference, especially for the upcoming generation of galaxy surveys in cosmology. As neural nets are approximative by nature, a recurring question has been how to propagate the neural net’s approximation error in order to avoid biases in the parameter inference. We present a Bayesian solution to propagating a neural net’s approximation error and thereby debiasing parameter inference. We exploit the fact that a neural net reports its approximation errors during the validation phase. We capture the approximation errors reported in this way via the highest-order summary statistics, allowing us to eliminate the neural net’s bias during inference and to propagate its uncertainties. We demonstrate that our method is quickly implemented and successfully infers parameters even for strongly biased neural nets. In summary, our method provides the missing element for judging the accuracy of a posterior when it cannot be computed with an infinitely accurate theory code.
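As a rough illustration of the idea, here is a minimal sketch of how validation-phase residuals of an emulator might be folded into a Gaussian likelihood: the residuals' mean debiases the prediction and their covariance inflates the data covariance. The Gaussian form, the function names, and the specific debiasing scheme are assumptions made for this sketch, not the paper's exact formulation.

```python
# Illustrative sketch (not the paper's exact method): fold an emulator's
# validation-phase approximation errors into a Gaussian likelihood by
# debiasing the prediction and inflating the covariance.
import numpy as np

def validation_error_stats(true_models, emulated_models):
    """Mean and covariance of the emulator residuals on the validation set
    (true theory prediction minus neural-net prediction)."""
    residuals = true_models - emulated_models      # shape (n_val, n_data)
    delta_mean = residuals.mean(axis=0)            # systematic bias of the net
    delta_cov = np.cov(residuals, rowvar=False)    # scatter of the net's errors
    return delta_mean, delta_cov

def log_likelihood(data, emulator_prediction, data_cov, delta_mean, delta_cov):
    """Gaussian log-likelihood with the emulator's bias subtracted and its
    approximation scatter added to the data covariance (an assumption of
    this sketch)."""
    mean = emulator_prediction + delta_mean        # debiased model prediction
    cov = data_cov + delta_cov                     # propagate emulator scatter
    diff = data - mean
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * cov)
    return -0.5 * (diff @ np.linalg.solve(cov, diff) + logdet)
```

Such a corrected log-likelihood could then be handed to any standard sampler in place of the uncorrected one.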

Read this paper on arXiv…

D. Grandón and E. Sellentin
Wed, 25 May 22

Comments: 7 pages, 9 figures