http://arxiv.org/abs/2303.12030
Over the past decade, hundreds of nights have been spent on the world's largest telescopes to search for and directly detect new exoplanets using high-contrast imaging (HCI). Two scientific goals are of central interest: first, to study the characteristics of the underlying planet population and to distinguish between different planet formation and evolution theories; second, to find and characterize planets in our immediate Solar neighborhood. Both goals rely heavily on the metric used to quantify planet detections and non-detections.
Current standards often rely on several explicit or implicit assumptions about the noise. For example, it is often assumed that the residual noise after data post-processing is Gaussian. Although these assumptions are an inseparable part of the metric, they are rarely verified. This is problematic, as any violation can lead to systematic biases, making it hard, if not impossible, to compare results across datasets or instruments with different noise characteristics.
We revisit the fundamental question of how to quantify detection limits in HCI. We focus our analysis on the error budget resulting from violated assumptions. To this end, we propose a new metric based on bootstrapping that generalizes current standards to non-Gaussian noise. We apply our method to archival HCI data from the VLT/NACO instrument and derive detection limits for different types of noise. Our analysis shows that current standards tend to give detection limits that are about one magnitude too optimistic in the speckle-dominated regime. As a consequence, HCI surveys may have excluded planets that could still exist.
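The core idea of a bootstrap-based detection limit can be illustrated with a small sketch. The snippet below is not the paper's procedure or data: it uses hypothetical Laplace-distributed noise as a stand-in for heavy-tailed speckle residuals, a Mawet-style t-like detection statistic, and a 1% false-positive fraction (far larger than the 5-sigma levels used in real surveys) so that the bootstrap quantile is well sampled. It contrasts a threshold derived under the Gaussian assumption with one obtained by bootstrapping the null distribution of the statistic.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)

# Hypothetical noise values from n reference apertures at one angular
# separation. Laplace noise stands in for heavy-tailed speckle
# residuals (illustrative only; not data from the paper).
n = 12
noise = rng.laplace(scale=1.0, size=n)

# Gaussian assumption: detection threshold from a normal quantile.
# A 1% false-positive fraction (fpf) keeps the bootstrap tractable;
# the standard approach would also apply a Student-t small-sample
# correction, omitted here for brevity.
fpf = 0.01
tau_gauss = NormalDist().inv_cdf(1 - fpf)

# Bootstrap the null distribution of the detection statistic:
# resample the reference apertures with replacement, treat the first
# resampled value as the "signal" aperture, and recompute the t-like
# statistic each time.
n_boot = 200_000
boot = rng.choice(noise, size=(n_boot, n), replace=True)
signal, refs = boot[:, 0], boot[:, 1:]
m = refs.shape[1]
t_null = (signal - refs.mean(axis=1)) / (
    refs.std(axis=1, ddof=1) * np.sqrt(1 + 1 / m)
)
tau_boot = float(np.quantile(t_null, 1 - fpf))

print(f"Gaussian threshold : {tau_gauss:.2f}")
print(f"Bootstrap threshold: {tau_boot:.2f}")
```

If the noise were truly Gaussian, the two thresholds would approximately agree; for heavier-tailed noise the bootstrap quantile can differ, which (in the paper's framing) translates directly into a different, typically less optimistic, detection limit.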
M. Bonse, E. Garvin, T. Gebhard, et al.
Wed, 22 Mar 23
Comments: After first iteration with the referee, resubmitted to AJ. Comments welcome!