http://arxiv.org/abs/2204.03853
Starshades are a leading technology to enable the direct detection and spectroscopic characterization of Earth-like exoplanets. To keep the starshade and telescope aligned over large separations, reliable sensing of the peak of the diffracted light of the occluded star is required. Current techniques rely on image matching or model fitting, both of which place substantial computational burdens on resource-limited spacecraft computers. We present a lightweight image processing method, based on a convolutional neural network paired with a simulation-based inference technique, to estimate the position of the spot of Arago and its uncertainty. The method achieves an accuracy of a few centimeters across the entire pupil plane, while requiring only 1.6 MB of stored data structures and 5.3 MFLOPs (million floating point operations) per image at test time. By deploying our method at the Princeton Starshade Testbed, we demonstrate that the neural network can be trained on simulated images and used on real images, and that it can successfully be integrated into the control system for closed-loop formation flying.
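The abstract describes pairing a small CNN with simulation-based inference to estimate the position of the spot of Arago and its uncertainty from pupil-plane images. The sketch below is a minimal illustration of that general idea, not the authors' architecture or training procedure: it assumes a PyTorch implementation with a heteroscedastic Gaussian output trained on simulated images via a negative log-likelihood loss. The names (`SpotPositionNet`, `gaussian_nll`), layer sizes, and image dimensions are hypothetical.

```python
# Minimal sketch (assumed, not the paper's method): a small CNN mapping a
# pupil-plane image to an estimated (x, y) spot position plus per-axis
# uncertainty, trained on simulated images with a Gaussian NLL loss.
import torch
import torch.nn as nn

class SpotPositionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Four outputs: mean x, mean y, log-variance x, log-variance y.
        self.head = nn.Linear(16 * 4 * 4, 4)

    def forward(self, image):
        h = self.features(image).flatten(start_dim=1)
        out = self.head(h)
        mean, log_var = out[:, :2], out[:, 2:]
        return mean, log_var

def gaussian_nll(mean, log_var, target):
    # Negative log-likelihood of the true position under the predicted
    # diagonal Gaussian; minimizing this encourages calibrated uncertainties.
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).sum(dim=1).mean()

# One training step on simulated data (placeholders for a simulator's output):
model = SpotPositionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(32, 1, 64, 64)    # simulated pupil-plane images (hypothetical size)
positions = torch.randn(32, 2)         # true spot positions in physical units
optimizer.zero_grad()
mean, log_var = model(images)
loss = gaussian_nll(mean, log_var, positions)
loss.backward()
optimizer.step()
```

Training only on simulated images, as in the snippet above, mirrors the deployment strategy reported in the abstract: the network is fit to simulator output and then applied to real testbed images.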
A. Chen, A. Harness and P. Melchior
Mon, 11 Apr 22
Comments: submitted to JATIS