PhotoRedshift-MML: a multimodal machine learning method for estimating photometric redshifts of quasars [GA]

http://arxiv.org/abs/2211.04260


We propose a Multimodal Machine Learning method for estimating the Photometric Redshifts of quasars (PhotoRedshift-MML for short), a problem that has long been the subject of many investigations. Our method comprises two main models: a feature transformation model based on multimodal representation learning, and a photometric redshift estimation model based on multimodal transfer learning. The prediction accuracy of the photometric redshift is significantly improved owing to the large amount of information carried by the spectral features generated from photometric data via MML. A total of 415,930 quasars from the Sloan Digital Sky Survey (SDSS) Data Release 17, with redshifts between 1 and 5, were screened for our experiments. We used $|\Delta z| = |(z_{\rm phot} - z_{\rm spec})/(1 + z_{\rm spec})|$ to evaluate the redshift predictions and demonstrated a 4.04% increase in accuracy. With the help of the generated spectral features, the proportion of test samples with $|\Delta z| < 0.1$ reaches 84.45%, compared with 80.41% for single-modal photometric data alone. Moreover, the Root Mean Square (RMS) of $|\Delta z|$ decreases from 0.1332 to 0.1235. Our method has the potential to be generalized to other astronomical data analyses, such as galaxy classification and redshift prediction. The algorithm code can be found at https://github.com/HongShuxin/PhotoRedshift-MML .
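As a quick reference for the evaluation described above, here is a minimal Python sketch of the $|\Delta z|$ metric, the $|\Delta z| < 0.1$ accuracy fraction, and the RMS. The array values are illustrative only (not the paper's data), and taking the RMS as the root mean square of $|\Delta z|$ is an assumption consistent with common practice.

    import numpy as np

    def delta_z(z_phot, z_spec):
        # Normalized prediction error: |Δz| = |(z_phot - z_spec) / (1 + z_spec)|
        return np.abs((z_phot - z_spec) / (1.0 + z_spec))

    # Toy values for illustration only.
    z_spec = np.array([1.2, 2.5, 3.1, 4.0])
    z_phot = np.array([1.15, 2.7, 3.0, 4.3])

    dz = delta_z(z_phot, z_spec)
    accuracy = np.mean(dz < 0.1)        # fraction of samples with |Δz| < 0.1
    rms = np.sqrt(np.mean(dz ** 2))     # RMS of |Δz| (assumed convention)

    print(f"accuracy (|Δz| < 0.1): {accuracy:.2%}")
    print(f"RMS of |Δz|: {rms:.4f}")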


S. Hong, Z. Zou, A. Luo, et al.
Wed, 9 Nov 22

Comments: 10 pages, 8 figures, accepted for publication in MNRAS