http://arxiv.org/abs/1907.02909
A workshop was recently held at Nagoya University (31 October – 02 November 2017), sponsored by the Center for International Collaborative Research at the Institute for Space-Earth Environmental Research, Nagoya University, Japan, to quantitatively compare the performance of today’s operational solar flare forecasting facilities. Building upon Paper I of this series (Barnes et al. 2016), in Paper II (Leka et al. 2019) we described the participating methods for this latest comparison effort and the evaluation methodology, and presented quantitative comparisons. In this paper we focus on the behavior and performance of the methods when evaluated in the context of broad implementation differences. Acknowledging the short testing interval and the small number of participating methods, we find that forecast performance: 1) appears to improve by including persistence or prior flare activity, region evolution, and a human “forecaster in the loop”; 2) is hurt by restricting data to disk-center observations; 3) may benefit from long-term statistics, but mostly when combined with modern data sources and statistical approaches. These trends are arguably weak and must be viewed with numerous caveats, as discussed both here and in Paper II. Following the present work, in Paper IV we present a novel analysis method to evaluate temporal patterns of forecasting errors of both types (i.e., misses and false alarms; Park et al. 2019). Most importantly, with this series of papers we demonstrate the techniques for facilitating comparisons in the interest of establishing performance-positive methodologies.
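The comparisons described above rest on standard event-verification counts derived from paired forecasts and observations. As a purely illustrative sketch (not code from these papers; the function names are hypothetical), the snippet below tallies the two error types named in the abstract, misses and false alarms, alongside hits and correct nulls, and combines them into the True Skill Statistic, one common verification measure for binary flare forecasts.

```python
# Illustrative sketch: contingency-table counts and a skill score
# for binary (yes/no) flare forecasts versus observed events.

def contingency_counts(forecasts, events):
    """Count hits, misses, false alarms, and correct nulls
    for paired binary forecasts and observed events."""
    hits = misses = false_alarms = correct_nulls = 0
    for f, e in zip(forecasts, events):
        if f and e:
            hits += 1
        elif not f and e:
            misses += 1          # event occurred, no forecast
        elif f and not e:
            false_alarms += 1    # forecast issued, no event
        else:
            correct_nulls += 1
    return hits, misses, false_alarms, correct_nulls

def true_skill_statistic(hits, misses, false_alarms, correct_nulls):
    """TSS = POD - POFD; ranges from -1 to 1, 0 meaning no skill."""
    pod = hits / (hits + misses)                     # probability of detection
    pofd = false_alarms / (false_alarms + correct_nulls)  # prob. of false detection
    return pod - pofd

# Example: eight daily yes/no forecasts against observed flare days.
forecasts = [1, 0, 1, 1, 0, 0, 1, 0]
events    = [1, 0, 0, 1, 1, 0, 1, 0]
h, m, fa, cn = contingency_counts(forecasts, events)
print(f"hits={h} misses={m} false_alarms={fa} "
      f"TSS={true_skill_statistic(h, m, fa, cn):.2f}")
```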
K. Leka, S. Park, K. Kusano, et al.
Mon, 8 Jul 19
Comments: 23 pages, 6 figures, accepted for publication in The Astrophysical Journal