Sample variance in weak lensing: how many simulations are required? [CEA]

http://arxiv.org/abs/1601.06792


Constraining cosmology using weak gravitational lensing consists of comparing a measured feature vector of dimension $N_b$ with its simulated counterpart. An accurate estimate of the $N_b\times N_b$ feature covariance matrix $\mathbf{C}$ is essential to obtain accurate parameter confidence intervals. When $\mathbf{C}$ is measured from a set of simulations, an important question is how large this set should be. To answer this question, we construct different ensembles of $N_r$ realizations of the shear field, using a common randomization procedure that recycles the outputs from a smaller number $N_s\leq N_r$ of independent ray-tracing $N$-body simulations. We study parameter confidence intervals as a function of $(N_s,N_r)$ in the range $1\leq N_s\leq 200$ and $1\leq N_r\lesssim 10^5$. Previous work has shown that Gaussian noise in the feature vectors (from which the covariance is estimated) leads, at quadratic order, to an $O(1/N_r)$ degradation of the parameter confidence intervals. Using a variety of lensing features measured in our simulations, including shear-shear power spectra and peak counts, we show that cubic and quartic covariance fluctuations lead to an additional $O(1/N_r^2)$ error degradation that is not negligible when $N_r$ is only a factor of a few larger than $N_b$. We study the large-$N_r$ limit, and find that a single $240\,$Mpc$/h$-sized, $512^3$-particle $N$-body simulation ($N_s=1$) can be repeatedly recycled to produce as many as $N_r={\rm few}\times10^4$ shear maps whose power spectra and high-significance peak counts can be treated as statistically independent. As a result, a small number of simulations ($N_s=1$ or $2$) is sufficient to forecast parameter confidence intervals at percent accuracy.
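The core operation the abstract describes, estimating a feature covariance matrix from $N_r$ noisy realizations, can be sketched as follows. This is a minimal illustration, not the paper's pipeline: it draws mock Gaussian feature vectors in place of simulated shear statistics, and applies the standard (Hartlap) debiasing factor $(N_r-N_b-2)/(N_r-1)$ for the inverse sample covariance; the dimensions and the toy covariance are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N_b, N_r = 20, 100  # feature-vector dimension and number of realizations (toy values)

# Toy "true" feature covariance, standing in for the shear-statistic covariance
true_cov = np.diag(np.linspace(1.0, 2.0, N_b))

# Draw N_r mock feature vectors and form the sample covariance estimate C_hat
features = rng.multivariate_normal(np.zeros(N_b), true_cov, size=N_r)
C_hat = np.cov(features, rowvar=False)

# The inverse of a sample covariance is biased high; the standard Hartlap
# correction rescales it by (N_r - N_b - 2)/(N_r - 1) for Gaussian noise
hartlap = (N_r - N_b - 2) / (N_r - 1)
C_inv = hartlap * np.linalg.inv(C_hat)
```

The $O(1/N_r)$ confidence-interval degradation discussed in the abstract is precisely the regime where `hartlap` differs noticeably from 1, i.e. when $N_r$ is not much larger than $N_b$.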


A. Petri, Z. Haiman and M. May
Wed, 27 Jan 16

Comments: 12 pages, 6 figures, 2 tables; submitted to PRD