Massive data compression for parameter-dependent covariance matrices [CEA]

http://arxiv.org/abs/1707.06529


We show how the massive data compression algorithm MOPED can be used to reduce, by orders of magnitude, the number of simulated datasets needed to estimate the covariance matrix required for the analysis of Gaussian-distributed data. This is relevant when the covariance matrix cannot be calculated directly. The compression is especially valuable when the covariance matrix varies with the model parameters, in which case it may be prohibitively expensive to run enough simulations to estimate the full covariance matrix throughout the parameter space. The compression may be particularly valuable for the next generation of weak lensing surveys, such as those proposed for Euclid and LSST, for which the number of summary data (such as band-power or shear-correlation estimates) is very large, $\sim 10^4$, owing to the large number of tomographic redshift bins into which the data will be divided. In the pessimistic case where the covariance matrix is estimated separately at every point in an MCMC analysis, this may require an unfeasible $\sim 10^9$ simulations. We show here that MOPED can reduce this number by a factor of 1000, or by a factor of $\sim 10^6$ if some regularity in the covariance matrix is assumed, reducing the number of simulations required to a manageable $10^3$ and making an otherwise intractable analysis feasible.
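As a rough illustration of the idea, here is a minimal Python sketch of MOPED-style compression, following the standard Gram-Schmidt construction of the compression vectors (one per model parameter). The fiducial covariance, the mean-derivative vectors, and the simulation ensemble below are toy placeholders, not quantities from the paper; the sketch only shows how an $N\sim10^4$-length data vector is reduced to a handful of numbers whose covariance needs far fewer simulations to estimate.

```python
import numpy as np

def moped_vectors(C, mu_derivs):
    """Build MOPED compression vectors b_alpha (one per parameter).

    C         : (N, N) fiducial data covariance
    mu_derivs : (M, N) derivatives of the mean data vector w.r.t. each parameter
    Returns   : (M, N) array of compression vectors with b_a^T C b_b = delta_ab.
    """
    Cinv = np.linalg.inv(C)
    M, N = mu_derivs.shape
    b = np.zeros((M, N))
    for a in range(M):
        # Start from the Fisher-optimal direction for parameter a ...
        v = Cinv @ mu_derivs[a]
        norm2 = mu_derivs[a] @ Cinv @ mu_derivs[a]
        # ... and Gram-Schmidt orthogonalise against the previous vectors
        for b_prev in b[:a]:
            proj = mu_derivs[a] @ b_prev
            v -= proj * b_prev
            norm2 -= proj ** 2
        b[a] = v / np.sqrt(norm2)
    return b

# --- illustrative toy setup (all numbers are placeholders) ---
rng = np.random.default_rng(0)
N, M, n_sims = 200, 3, 1000                  # data size, parameters, simulations
C_fid = np.diag(rng.uniform(0.5, 2.0, N))    # toy fiducial covariance
mu_derivs = rng.normal(size=(M, N))          # toy d(mu)/d(theta) vectors

b = moped_vectors(C_fid, mu_derivs)

# Compress each length-N simulated data vector to just M numbers
sims = rng.multivariate_normal(np.zeros(N), C_fid, size=n_sims)
compressed = sims @ b.T                      # shape (n_sims, M)

# The covariance to be estimated from simulations is now only M x M
C_compressed = np.cov(compressed, rowvar=False)
print(C_compressed.shape)                    # (3, 3)
```

In this toy setup the covariance that must be estimated from simulations shrinks from $N \times N$ to $M \times M$, which is the source of the orders-of-magnitude reduction in the number of simulations quoted in the abstract.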


A. Heavens, E. Sellentin, D. Mijolla, et al.
Fri, 21 Jul 17

Comments: 7 pages. For submission to MNRAS