The Performance of FAST with Ultra-Wide Bandwidth Receiver at 500-3300 MHz [IMA]

http://arxiv.org/abs/2304.11895


The Five-hundred-meter Aperture Spherical radio Telescope (FAST) has been running for several years. A new Ultra-Wide Bandwidth (UWB) receiver, simultaneously covering 500-3300 MHz, has been mounted in the FAST feed cabin and has passed a series of observational tests. The whole UWB band is separated into four independent bands. Each band has 1048576 channels in total, resulting in a spectral resolution of 1 kHz. At 500-3300 MHz, the antenna gain is around 14.3-7.7 K/Jy, the aperture efficiency is around 0.56-0.30, the system temperature is around 88-130 K, and the HPBW is around 7.6-1.6 arcmin. The measured standard deviation of the pointing accuracy is better than ~7.9 arcsec when the zenith angle (ZA) is within 26.4 deg. The sensitivity and stability of the UWB receiver are confirmed to meet expectations by spectral observations, e.g., of HI and OH. The FAST UWB receiver already performs well enough for sensitive observations across a variety of scientific goals.
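The quoted gain and aperture efficiency are tied together by $G = \eta_A A_{geo}/(2k_B)$, with $A_{geo}$ the geometric collecting area. A minimal sketch of this relation (assuming FAST's 300 m illuminated aperture, a value not stated in this abstract) reproduces the quoted endpoints:

```python
# Antenna gain (K/Jy) from aperture efficiency: G = eta_A * A_geo / (2 k_B),
# with 1 Jy = 1e-26 W m^-2 Hz^-1. Assumes a 300 m illuminated aperture.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K
JY = 1e-26          # 1 jansky in W m^-2 Hz^-1

def gain_k_per_jy(eta_a, diameter_m=300.0):
    """Antenna gain in K/Jy for a circular aperture of given efficiency."""
    a_geo = math.pi * (diameter_m / 2.0) ** 2  # geometric collecting area, m^2
    return eta_a * a_geo * JY / (2.0 * K_B)

print(round(gain_k_per_jy(0.56), 1))  # low-frequency end -> 14.3
print(round(gain_k_per_jy(0.30), 1))  # high-frequency end -> 7.7
```

The two endpoints match the 14.3-7.7 K/Jy range quoted in the abstract.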

Read this paper on arXiv…

C. Zhang, P. Jiang, M. Zhu, et al.
Tue, 25 Apr 23
15/72

Comments: 11 pages, 7 figures, 2 tables, submitted to Research in Astronomy and Astrophysics

Analyzing the neutron and $γ$-ray emission properties of an americium-beryllium tagged neutron source [CL]

http://arxiv.org/abs/2304.12153


Americium-beryllium (AmBe), a well-known tagged neutron source, is commonly used for evaluating the neutron detection efficiency of detectors used in ultralow background particle physics experiments, such as reactor neutrino and diffuse supernova neutrino background experiments. In particular, AmBe sources are used to calibrate neutron tagging by selecting the 4438-keV $\gamma$-ray signal, which is simultaneously emitted with a neutron signal. Therefore, analyzing the neutron and $\gamma$-ray emission properties of AmBe sources is crucial. In this study, we used the theoretical shape of a neutron energy spectrum, which was divided into three parts, to develop models of the energy spectrum and verify the results using experimental data. We used an AmBe source to measure the energy spectra of simultaneously emitted neutrons and $\gamma$-rays and determine the emission ratio of the neutrons with and without $\gamma$-ray emission. The measured spectrum was consistent with that obtained from the simulated result, whereas the measured emission ratio was significantly different from the corresponding simulated result. Here, we also discuss the feasibility of determining the neutron emission rates from the spectra divided into three parts.

Read this paper on arXiv…

H. Ito, K. Wada, T. Yano, et al.
Tue, 25 Apr 23
17/72

Comments: 8 pages, 10 figures, 2 tables

Fifteen years of millimeter accuracy lunar laser ranging with APOLLO: data reduction and calibration [IMA]

http://arxiv.org/abs/2304.11174


The Apache Point Lunar Laser-ranging Operation (APOLLO) has been collecting lunar range measurements for 15 years at millimeter accuracy. The median nightly range uncertainty since 2006 is 1.7 mm. A recently added Absolute Calibration System (ACS), providing an independent assessment of APOLLO system accuracy and the capability to correct lunar range data, revealed a 0.4% systematic error in the calibration of one piece of hardware that has been present for the entire history of APOLLO. Application of ACS-based timing corrections suggests systematic errors are reduced to < 1 mm, such that overall data accuracy and precision are both 1 mm. This paper describes the processing of APOLLO/ACS data that converts photon-by-photon range measurements into the aggregated normal points that are used for physics analyses. Additionally we present methodologies to estimate timing corrections for range data lacking contemporaneous ACS photons, including range data collected prior to installation of the ACS. We also provide access to the full 15-year archive of APOLLO normal points (2006-04-06 to 2020-12-27).
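As a back-of-envelope check (mean Earth-Moon distance assumed; not a figure from the paper), a 1.7 mm range accuracy over the photon round trip corresponds to picosecond-level timing:

```python
# What round-trip timing precision does 1.7 mm one-way range accuracy imply?
C = 299_792_458.0   # speed of light, m/s
D_MOON = 3.844e8    # mean Earth-Moon distance, m (assumed)

round_trip_s = 2 * D_MOON / C       # photon round-trip time, ~2.56 s
dt_ps = 2 * 1.7e-3 / C * 1e12       # timing equivalent of 1.7 mm range error

print(f"round trip ~ {round_trip_s:.2f} s")   # ~2.56 s
print(f"timing precision ~ {dt_ps:.1f} ps")   # ~11.3 ps
```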

Read this paper on arXiv…

N. Colmenares, J. Battat, D. Gonzales, et al.
Tue, 25 Apr 23
23/72

Comments: 23 pages, 9 figures

First Detection of the Powerful Gamma Ray Burst GRB221009A by the THEMIS ESA and SST particle detectors on October 9, 2022 [HEAP]

http://arxiv.org/abs/2304.11225


We present the first results of a study of the effects of the powerful gamma-ray burst GRB 221009A, which occurred on October 9, 2022, and was serendipitously recorded by electron and proton detectors aboard the four spacecraft of the NASA THEMIS mission. Long-duration gamma-ray bursts (GRBs) are powerful cosmic explosions signaling the death of massive stars; among them, GRB 221009A is so far the brightest burst ever observed, owing to its enormous energy ($E_{\gamma iso}\sim10^{55}$ erg) and proximity (redshift $z\sim 0.1505$). The THEMIS mission, launched in 2008, was designed to study plasma processes in the Earth's magnetosphere and the solar wind. The particle flux measurements from the two inner-magnetosphere THEMIS probes THA and THE and the ARTEMIS spacecraft THB and THC orbiting the Moon captured the dynamics of GRB 221009A with a high time resolution of more than 20 measurements per second. This allowed us to resolve the fine structure of the gamma-ray burst and determine the temporal scales of the two main bursts' spiky structure, complementing the results from gamma-ray space telescopes and detectors.

Read this paper on arXiv…

O. Agapitov, M. Balikhin, A. Hull, et al.
Tue, 25 Apr 23
28/72

Comments: N/A

Prospects for the characterization of habitable planets [EPA]

http://arxiv.org/abs/2304.11570


With thousands of exoplanets now identified, the characterization of habitable planets, and the potential identification of inhabited ones, is a major challenge for the coming decades. We review the current working definition of habitable planets and the upcoming observational prospects for their characterization, and we present an innovative approach to assessing habitability and inhabitation. This integrated method couples, for the first time, atmosphere and interior modeling with biological activity based on ecosystem modeling. We review the first applications of the method to assess the likelihood and impact of methanogenesis for Enceladus, the primitive Earth, and primitive Mars. Informed by these applications to solar system settings where habitability and inhabitation are questioned, we show how the method can be used to inform the design of future space observatories by considering the habitability and inhabitation of Earth-like exoplanets around Sun-like stars.

Read this paper on arXiv…

S. Mazevet, A. Affholder, B. Sauterey, et al.
Tue, 25 Apr 23
30/72

Comments: 16 pages, 4 figures

Using multiobjective optimization to reconstruct interferometric data (I) [IMA]

http://arxiv.org/abs/2304.12107


Imaging in radio astronomy is an ill-posed inverse problem. In particular, the Event Horizon Telescope (EHT) Collaboration faces two major limitations of existing methods when imaging active galactic nuclei (AGN): large, computationally expensive surveys over different optimization parameters must be carried out, and only one local minimum is returned for each instance. With our novel nonconvex, multiobjective optimization modeling approach, we aim to overcome these limitations. To this end we used a multiobjective version of the genetic algorithm (GA): the Multiobjective Evolutionary Algorithm Based on Decomposition (MOEA/D). GA strategies explore the objective function by evolutionary operations to find the different local minima and to avoid getting trapped in saddle points. First, we tested our algorithm (MOEA/D) using synthetic data based on the 2017 EHT array and a possible EHT + next-generation EHT (ngEHT) configuration. We successfully recover a fully evolved Pareto front of non-dominated solutions for these examples. The Pareto front divides into clusters of image morphologies representing the full set of locally optimal solutions. We discuss approaches to finding the most natural guess among these solutions and demonstrate its performance on synthetic data. Finally, we apply MOEA/D to the 2017 EHT observations of the black hole shadow in Messier 87 (M87). MOEA/D is very flexible, faster than any Bayesian method, and explores more solutions than Regularized Maximum Likelihood (RML) methods. We present this new algorithm in two papers: the first explains the basic idea behind multiobjective optimization and MOEA/D and uses it to recover static images, while in the second paper we extend the algorithm to allow dynamic and (static and dynamic) polarimetric reconstructions.
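The Pareto front of non-dominated solutions can be illustrated with a minimal dominance test for minimization problems (an illustrative sketch only, not the authors' MOEA/D implementation):

```python
# Pareto dominance for minimization: x dominates y if x is no worse in
# every objective and strictly better in at least one. A Pareto front
# keeps only the non-dominated candidates.

def dominates(x, y):
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Two objectives, e.g. data fidelity vs. regularization penalty (toy values):
pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (3.0, 3.0)]
print(pareto_front(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```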

Read this paper on arXiv…

H. Müller, A. Mus and A. Lobanov
Tue, 25 Apr 23
31/72

Comments: accepted for publication in A&A, both first authors have contributed equally to this work

Terzina on board NUSES: a pathfinder for EAS Cherenkov Light Detection from space [IMA]

http://arxiv.org/abs/2304.11992


In this paper we introduce the Terzina telescope, part of the NUSES space mission. This telescope aims to detect Ultra High Energy Cosmic Rays (UHECRs) through the Cherenkov light emitted by the extensive air showers (EAS) that they create in the Earth's atmosphere. The Cherenkov photons are aligned within about $0.2-1^{\circ}$ of the shower axis, so they become detectable by Terzina when it points towards the Earth's limb. A sun-synchronous orbit will allow the telescope to observe only the night side of the Earth's atmosphere. In this contribution, we focus on the description of the telescope's detection goals, geometry, optical design, and its photon detection camera composed of Silicon Photo-Multipliers (SiPMs). Moreover, we describe the full Monte Carlo simulation chain developed to estimate Terzina's performance for UHECR detection. The estimates of radiation damage and light background rates, the readout electronics, and the trigger logic are briefly described. Terzina will be able to assess the potential of future physics missions devoted to UHECR detection and UHE neutrino astronomy. It is a pathfinder for missions like POEMMA or future constellations of satellites similar to NUSES.

Read this paper on arXiv…

L. Burmistrov
Tue, 25 Apr 23
37/72

Comments: N/A

Rotational spectroscopy of oxirane-2,2-$d_2$, $c$-CD$_2$CH$_2$O, and its tentative detection toward IRAS 16293$-$2422 B [GA]

http://arxiv.org/abs/2304.12045


We prepared a sample of oxirane doubly deuterated at one C atom and studied its rotational spectrum in the laboratory for the first time between 120 GHz and 1094 GHz. Accurate spectroscopic parameters up to eighth order were determined, and the calculated rest frequencies were used to tentatively identify $c$-CD$_2$CH$_2$O in the interstellar medium in the Atacama Large Millimeter/submillimeter Array Protostellar Interferometric Line Survey (PILS) of the Class 0 protostellar system IRAS 16293$-$2422. The $c$-CD$_2$CH$_2$O to $c$-C$_2$H$_4$O ratio was estimated to be $\sim$0.054 with $T_{\rm rot} = 125$ K. This value translates to a D-to-H ratio of $\sim$0.16 per H atom, which is higher by a factor of 4.5 than the $\sim$0.036 per H atom obtained for $c$-C$_2$H$_3$DO. Such an increase in the degree of deuteration, referenced to one H atom, of multiply deuterated isotopologs relative to their singly deuterated variants has been observed commonly in recent years.

Read this paper on arXiv…

H. Müller, J. Jørgensen, J. Guillemin, et al.
Tue, 25 Apr 23
38/72

Comments: Journal of Molecular Spectroscopy, in press; Per Jensen special issue. 12 pages here

Key Science Goals for the Next-Generation Event Horizon Telescope [HEAP]

http://arxiv.org/abs/2304.11188


The Event Horizon Telescope (EHT) has led to the first images of a supermassive black hole, revealing the central compact objects in the elliptical galaxy M87 and the Milky Way. Proposed upgrades to this array through the next-generation EHT (ngEHT) program would sharply improve the angular resolution, dynamic range, and temporal coverage of the existing EHT observations. These improvements will uniquely enable a wealth of transformative new discoveries related to black hole science, extending from event-horizon-scale studies of strong gravity to studies of explosive transients to the cosmological growth and influence of supermassive black holes. Here, we present the key science goals for the ngEHT and their associated instrument requirements, both of which have been formulated through a multi-year international effort involving hundreds of scientists worldwide.

Read this paper on arXiv…

M. Johnson, K. Akiyama, L. Blackburn, et al.
Tue, 25 Apr 23
41/72

Comments: 32 pages, 11 figures, accepted for publication in a special issue of Galaxies on the ngEHT (this https URL)

Pulsar Candidate Classification Using A Computer Vision Method Combining with Convolution and Attention [IMA]

http://arxiv.org/abs/2304.11604


Artificial intelligence methods are indispensable for identifying pulsars among large numbers of candidates. We develop a new pulsar identification system that uses CoAtNet to score two-dimensional features of candidates, a multilayer perceptron to score one-dimensional features, and logistic regression to judge the scores above. In the data preprocessing stage, we performed two feature fusions separately, one for the one-dimensional features and the other for the two-dimensional features, which are used as inputs for the multilayer perceptron and CoAtNet respectively. The newly developed system achieves 98.77% recall, a 1.07% false positive rate, and 98.85% accuracy on our GPPS test set.
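The final decision stage described above (logistic regression over the two branch scores) can be sketched as follows; the weights and bias here are illustrative placeholders, not the trained values from the paper:

```python
# Sketch of the final stage: a 2D-feature score (image branch) and a
# 1D-feature score (MLP branch) combined by a logistic-regression layer.
# Weights w and bias b are illustrative placeholders.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def judge(score_2d, score_1d, w=(4.0, 4.0), b=-4.0):
    """Return a pulsar probability from the two branch scores."""
    z = w[0] * score_2d + w[1] * score_1d + b
    return sigmoid(z)

print(judge(0.9, 0.8) > 0.5)   # both branches confident -> True
print(judge(0.1, 0.2) > 0.5)   # both branches doubtful -> False
```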

Read this paper on arXiv…

N. Cai, J. Han, W. Jing, et al.
Tue, 25 Apr 23
54/72

Comments: 12 pages, 4 figures, 5 tables

New compound and hybrid binding energy sputter model for modeling purposes in agreement with experimental data [EPA]

http://arxiv.org/abs/2304.12048


Rocky planets and moons experiencing solar wind sputtering continuously supply their enveloping exospheres with ejected neutral atoms. To understand the quantity and properties of the ejecta, well-established Binary Collision Approximation Monte Carlo codes like TRIM with default settings are used predominantly. Improved models such as SDTrimSP have come forward, and together with new experimental data the underlying assumptions have been challenged. We introduce a hybrid model, combining the previous surface binding approach with a new bulk binding model akin to Hofsäss & Stegmaier (2023). In addition, we expand the model implementation by distinguishing between free and bound components sourced from mineral compounds such as oxides or sulfides. The use of oxides and sulfides also enables the correct setting of the mass densities of minerals, which was previously limited to the manual setting of individual atomic densities of elements. All of the energies and densities used are based on tabulated data, so that only minimal user input and no fitting of parameters are required. We found unprecedented agreement between the newly implemented hybrid model and previously published sputter yields for incidence angles up to 45° from the surface normal. Good agreement is found for the angular distribution of mass sputtered from enstatite (MgSiO$_3$) compared with the latest experimental data. Energy distributions reproduce the trends seen in experimental data for oxidized metals, and similar trends are to be expected from future mineral experimental data. The model thus serves its purpose of widespread applicability and ease of use for modelers of rocky-body exospheres.

Read this paper on arXiv…

N. Jäggi, A. Mutzke, H. Biber, et al.
Tue, 25 Apr 23
57/72

Comments: 23 pages, 6 figures, 3 tables

GJ3470-d and GJ3470-e: Discovery of Co-Orbiting Exoplanets in a Horseshoe Exchange Orbit [EPA]

http://arxiv.org/abs/2304.11769


We report the discovery of a pair of exoplanets co-orbiting the red dwarf star GJ3470. The larger planet, GJ3470-d, was observed in a 14.9617-day orbit and the smaller planet, GJ3470-e, in a 14.9467-day orbit. GJ3470-d is sub-Jupiter size, with a 1.4% transit depth and a duration of 3 hours, 4 minutes. The smaller planet, GJ3470-e, currently leads the larger planet by approximately 1.146 days and is extending that lead by about 7.5 minutes (0.0052 d) per orbital cycle. It has an average depth of 0.5% and an average duration of 3 hours, 2 minutes. The larger planet, GJ3470-d, has been observed on seven separate occasions over a 3-year period, allowing for a very precise orbital period calculation. The last transit was observed by three separate observatories in Oklahoma and Arizona. The smaller planet, GJ3470-e, has been observed on five occasions over 2 years. Our data appear consistent with two exoplanets in a horseshoe exchange orbit. When confirmed, these will be the second and third exoplanets discovered and characterized by amateur astronomers without professional data or assistance, and the first ever discovery of co-orbiting exoplanets in a horseshoe exchange orbit.

Read this paper on arXiv…

P. Scott, J. Taylor, L. Beatty, et al.
Tue, 25 Apr 23
60/72

Comments: 10 pages, 4 figures, 3 tables

Identifying Stochasticity in Time-Series with Autoencoder-Based Content-aware 2D Representation: Application to Black Hole Data [CL]

http://arxiv.org/abs/2304.11560


In this work, we report an autoencoder-based 2D representation for classifying a time-series as stochastic or non-stochastic, in order to understand the underlying physical process. We propose a content-aware conversion of a 1D time-series to a 2D representation that simultaneously utilizes time- and frequency-domain characteristics. An autoencoder is trained with a loss function to learn a latent-space representation (using both time and frequency domains) that is designed to be time-invariant. Every element of the time-series is represented as a tuple with two components, one each from the latent-space representations in the time and frequency domains, forming a binary image. In this binary image, the tuples that represent the points of the time-series together form the “Latent Space Signature” (LSS) of the input time-series. The obtained binary LSS images are fed to a classification network. The EfficientNetV2-S classifier is trained using 421 synthetic time-series, with fair representation from both categories. The proposed methodology is evaluated on publicly available astronomical data: 12 distinct temporal classes of time-series pertaining to the black hole GRS 1915+105, obtained from the RXTE satellite. Results obtained using the proposed methodology are compared with existing techniques. The concurrence of the labels obtained across the classes illustrates the efficacy of the proposed 2D representation using the latent-space coordinates. The proposed methodology also outputs the confidence in the classification label.

Read this paper on arXiv…

C. Pradeep and N. Sinha
Tue, 25 Apr 23
69/72

Comments: N/A

US National Gemini Office in the NOIRLab era [IMA]

http://arxiv.org/abs/2304.10657


This article presents an overview of the US National Gemini Office (US NGO) and its role within the International Gemini Observatory user community. Throughout the years, the US NGO charter changed considerably to accommodate the evolving needs of astronomers and the observatory. The current landscape of observational astronomy requires effective communication between stakeholders and reliable/accessible data reduction tools and products, which minimize the time between data gathering and publication of scientific results. Because of that, the US NGO heavily invests in producing data reduction tutorials and cookbooks. Recently, the US NGO started engaging with the Gemini user community through social media, and the results have been encouraging, increasing the observatory’s visibility. The US NGO staff developed tools to assess whether the support provided to the user community is sufficient and effective, through website analytics and social media engagement numbers. These quantitative metrics serve as the baseline for internal reporting and directing efforts to new or current products. In the era of the NSF’s National Optical-Infrared Astronomy Research Laboratory (NOIRLab), the US NGO is well-positioned to be the liaison between the US user base and the Gemini Observatory. Furthermore, collaborations within NOIRLab programs, such as the Astro Data Lab and the Time Allocation Committee, enhance the US NGO outreach to attract users and develop new products. The future landscape laid out by the Astro 2020 report confirms the need to establish such synergies and provide more integrated user support services to the astronomical community at large.

Read this paper on arXiv…

V. Placco and L. Stanghellini
Mon, 24 Apr 23
10/41

Comments: 15 pages, 8 figures, published in the Journal of Astronomical Telescopes, Instruments, and Systems

Simulating Stellar Merger using HPX/Kokkos on A64FX on Supercomputer Fugaku [CL]

http://arxiv.org/abs/2304.11002


The increasing availability of machines relying on non-GPU architectures, such as ARM A64FX, in high-performance computing provides a set of interesting challenges to application developers. In addition to requiring code portability across different parallelization schemes, programs targeting these architectures have to be highly adaptable in terms of compute kernel sizes to accommodate different execution characteristics for various heterogeneous workloads. In this paper, we demonstrate an approach to code and performance portability that is based entirely on established industry standards. In addition to applying Kokkos as an abstraction over the execution of compute kernels in different heterogeneous execution environments, we show that the use of standard C++ constructs, as exposed by the HPX runtime system, enables superb portability in terms of code and performance, based on the real-world Octo-Tiger astrophysics application. We report our experience with porting Octo-Tiger to the ARM A64FX architecture provided by Stony Brook's Ookami and RIKEN's Supercomputer Fugaku, and compare the resulting performance with that achieved on well-established GPU-oriented HPC machines such as ORNL's Summit, NERSC's Perlmutter, and CSCS's Piz Daint. Octo-Tiger scaled well on Supercomputer Fugaku without any major code changes thanks to the abstraction levels provided by HPX and Kokkos. Adding vectorization support for ARM's SVE to Octo-Tiger was trivial thanks to the use of standard C++.

Read this paper on arXiv…

P. Diehl, G. Daiß, K. Huck, et al.
Mon, 24 Apr 23
11/41

Comments: N/A

VLBI Astrometry of Radio Stars to Link Radio and Optical Celestial Reference Frames. I. HD 199178 & AR Lacertae [SSA]

http://arxiv.org/abs/2304.10886


To accurately link the radio and optical Celestial Reference Frames (CRFs) at the optical bright end, i.e., at Gaia G-band magnitude < 13, increasing the number and improving the sky distribution of radio stars with accurate astrometric parameters from both Very Long Baseline Interferometry (VLBI) and Gaia measurements are mandatory. We selected two radio stars, HD 199178 and AR Lacertae, as targets for a pilot program for the frame link, using the Very Long Baseline Array (VLBA) at 15 GHz at six epochs spanning about 1 year to measure their astrometric parameters. The measured parallax of HD 199178 is $8.949 \pm 0.059$ mas and the proper motion is $\mu_\alpha\cos\delta = 26.393 \pm 0.093$, $\mu_\delta = -0.950 \pm 0.083~mas~yr^{-1}$, while the parallax of AR Lac is $23.459 \pm 0.094$ mas and the proper motion is $\mu_\alpha\cos\delta = -51.906 \pm 0.138$, $\mu_\delta = 46.732 \pm 0.131~mas~yr^{-1}$. Our VLBI-measured astrometric parameters have accuracies about 4-5 times better than the corresponding historic VLBI measurements and accuracies comparable to those from Gaia, validating the feasibility of the frame link using radio stars. With the updated astrometric parameters for these two stars, there is a 25% reduction in the uncertainties of the Y-axis orientation and spin parameters.
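For reference, the measured parallaxes convert to distances via $d[\mathrm{pc}] = 1000/\varpi[\mathrm{mas}]$:

```python
# Parallax-to-distance conversion: d [pc] = 1000 / parallax [mas].
parallaxes_mas = {"HD 199178": 8.949, "AR Lac": 23.459}

distances_pc = {star: 1000.0 / plx for star, plx in parallaxes_mas.items()}
for star, d in distances_pc.items():
    print(f"{star}: {d:.1f} pc")  # ~111.7 pc and ~42.6 pc
```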

Read this paper on arXiv…

W. Chen, B. Zhang, J. Zhang, et al.
Mon, 24 Apr 23
17/41

Comments: 11 pages, accepted by MNRAS on 2023 April 20

The Magnetohydrodynamic-Particle-In-Cell Module in Athena++: Implementation and Code Tests [HEAP]

http://arxiv.org/abs/2304.10568


We present a new magnetohydrodynamic-particle-in-cell (MHD-PIC) code integrated into the Athena++ framework. It treats energetic particles as in conventional PIC codes, while the rest of the thermal plasma is treated as a background fluid described by MHD, thus primarily targeting multi-scale astrophysical problems involving the kinetic physics of cosmic rays (CRs). The code is optimized toward efficient vectorization in interpolation and particle deposits, with excellent parallel scaling. The code is also compatible with static/adaptive mesh refinement, with dynamic load balancing to further enhance multi-scale simulations. In addition, we have implemented a compressing/expanding box framework that allows adiabatic driving of CR pressure anisotropy, as well as the $\delta f$ method, which can dramatically reduce Poisson noise in problems where the distribution function $f$ is expected to deviate only slightly from the background. The code performance is demonstrated over a series of benchmark test problems, including particle acceleration in non-relativistic parallel shocks. In particular, we reproduce the linear growth of the CR gyro-resonant (streaming and pressure anisotropy) instabilities in both the periodic and the expanding/compressing box settings. We anticipate the code to open up the avenue for a wide range of astrophysical and plasma physics applications.

Read this paper on arXiv…

X. Sun and X. Bai
Mon, 24 Apr 23
18/41

Comments: 20 pages, 19 figures, submitted to MNRAS

First observations with a GNSS antenna to radio telescope interferometer [CL]

http://arxiv.org/abs/2304.11016


We describe the design of a radio interferometer composed of a Global Navigation Satellite Systems (GNSS) antenna and a Very Long Baseline Interferometry (VLBI) radio telescope. Our eventual goal is to use this interferometer for geodetic applications including local tie measurements. The GNSS element of the interferometer uses a unique software-defined receiving system and modified commercial geodetic-quality GNSS antenna. We ran three observing sessions in 2022 between a 25 m radio telescope in Fort Davis, Texas (FD-VLBA), a transportable GNSS antenna placed within 100 meters, and a GNSS antenna placed at a distance of about 9 km. We have detected a strong interferometric response with a Signal-to-Noise Ratio (SNR) of over 1000 from Global Positioning System (GPS) and Galileo satellites. We also observed natural radio sources including Galactic supernova remnants and Active Galactic Nuclei (AGN) located as far as one gigaparsec, thus extending the range of sources that can be referenced to a GNSS antenna by 18 orders of magnitude. These detections represent the first observations made with a GNSS antenna to radio telescope interferometer. We have developed a novel technique based on a Precise Point Positioning (PPP) solution of the recorded GNSS signal that allows us to extend integration time at 1.5 GHz to at least 20 minutes without any noticeable SNR degradation when a rubidium frequency standard is used.

Read this paper on arXiv…

J. Skeens, J. York, L. Petrov, et al.
Mon, 24 Apr 23
23/41

Comments: 33 pages, 19 figures

SLEPLET: Slepian Scale-Discretised Wavelets in Python [CL]

http://arxiv.org/abs/2304.10680


Wavelets are widely used in various disciplines to analyse signals in both space and scale. Whilst many fields measure data on manifolds (e.g., the sphere), often data are only observed on a partial region of the manifold. Wavelets are a typical approach to data of this form, but the wavelet coefficients that overlap with the boundary become contaminated and must be removed for accurate analysis. Another approach is to estimate the missing data in the region and use existing whole-manifold methods for analysis. However, both approaches introduce uncertainty into any analysis. Slepian wavelets enable one to work directly with only the data present, thus avoiding the problems discussed above. Applications of Slepian wavelets to areas of research measuring data on the partial sphere include gravitational/magnetic fields in geodesy, ground-based measurements in astronomy, measurements of whole-planet properties in planetary science, geomagnetism of the Earth, and cosmic microwave background analyses.

Read this paper on arXiv…

P. Roddy
Mon, 24 Apr 23
26/41

Comments: 4 pages

S-ACF: A selective estimator for the autocorrelation function of irregularly sampled time series [IMA]

http://arxiv.org/abs/2304.10641


We present a generalised estimator for the autocorrelation function, S-ACF, which is an extended version of the standard estimator of the autocorrelation function (ACF). S-ACF is a versatile definition that can robustly and efficiently extract periodicity and signal shape information from a time series, independent of the time sampling and with minimal assumptions about the underlying process. Calculating the autocorrelation of irregularly sampled time series becomes possible by generalising the lag of the standard estimator of the ACF to a real parameter and introducing the notion of selection and weight functions. We show that the S-ACF reduces to the standard ACF estimator for regularly sampled time series. Using a large number of synthetic time series we demonstrate that the performance of the S-ACF is as good or better than commonly used Gaussian and rectangular kernel estimators, and is comparable to a combination of interpolation and the standard estimator. We apply the S-ACF to astrophysical data by extracting rotation periods for the spotted star KIC 5110407, and compare our results to Gaussian process (GP) regression and Lomb-Scargle (LS) periodograms. We find that the S-ACF periods typically agree better with those from GP regression than from LS periodograms, especially in cases where there is evolution in the signal shape. The S-ACF has a wide range of potential applications and should be useful in quantitative science disciplines where irregularly sampled time series occur. A Python implementation of the S-ACF is available under the MIT license.
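For context, the standard ACF estimator that S-ACF generalizes can be sketched as follows (a minimal illustration for regularly sampled data; the paper's contribution is extending the integer lag to a real parameter via selection and weight functions):

```python
# Standard ACF estimator (regular sampling), which S-ACF generalizes:
# rho(k) = sum_t (x_t - xbar)(x_{t+k} - xbar) / sum_t (x_t - xbar)^2.
def acf(x, max_lag):
    n = len(x)
    xbar = sum(x) / n
    dev = [v - xbar for v in x]
    var = sum(d * d for d in dev)
    return [sum(dev[t] * dev[t + k] for t in range(n - k)) / var
            for k in range(max_lag + 1)]

x = [0.0, 1.0, 0.0, -1.0] * 8        # period-4 test signal
r = acf(x, 4)
print(round(r[0], 2), round(r[2], 2), round(r[4], 2))  # 1.0 -0.94 0.88
```

Lag 0 is 1 by construction, the half-period lag is strongly negative, and the full-period lag is strongly positive, which is how the ACF reveals periodicity.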

Read this paper on arXiv…

L. Kreutzer, E. Gillen, J. Briegal, et al.
Mon, 24 Apr 23
28/41

Comments: N/A

Magnetic field measurement from the Davis-Chandrasekhar-Fermi method employed with Atomic Alignment [IMA]

http://arxiv.org/abs/2304.10665


The Davis-Chandrasekhar-Fermi (DCF) method is widely employed to estimate the mean magnetic field strength in astrophysical plasmas. In this study, we present a numerical investigation using the DCF method in conjunction with a promising new diagnostic tool for studying magnetic fields: the polarization of spectral lines resulting from the atomic alignment effect. We obtain synthetic spectro-polarimetric observations from 3D magnetohydrodynamic (MHD) turbulence simulations and estimate the mean magnetic field projected onto the plane of the sky using the DCF method with ground-state alignment (GSA) polarization maps and a modification to account for the driving scale of turbulence. We also compare the method to the classical DCF approach using dust polarization observations. Our observations indicate that the modified DCF method correctly estimates the plane-of-sky projected magnetic field strengths for sub-Alfvénic turbulence using a newly proposed correction factor of $\xi' \in [0.35, 0.75]$. We find that the field strengths are accurately obtained for all magnetic field inclination and azimuth angles. We also observe a minimum threshold for the mean magnetic field inclination angle with respect to the line of sight, $\theta_B \sim 16^\circ$, for the method. The magnetic field dispersion traced by the polarization of the spectral lines is comparable in accuracy to dust polarization, while mitigating some of the uncertainties associated with dust observations. Measuring the DCF observables from the same atomic/ionic line targets ensures the same origin for the magnetic field and velocity fluctuations and offers the possibility of tracing the 3D direction of the magnetic field.
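The classical DCF estimate underlying the method is $B_{pos} = \xi\sqrt{4\pi\rho}\,\sigma_v/\sigma_\theta$ in CGS units. A minimal sketch with placeholder values (illustrative numbers, not the paper's data):

```python
# Classical DCF estimate (CGS): B_pos = xi * sqrt(4*pi*rho) * sigma_v / sigma_theta,
# where rho is the gas mass density, sigma_v the line-of-sight velocity
# dispersion, and sigma_theta the polarization-angle dispersion (radians).
import math

def dcf_field_gauss(rho_g_cm3, sigma_v_cm_s, sigma_theta_rad, xi=0.5):
    return xi * math.sqrt(4.0 * math.pi * rho_g_cm3) * sigma_v_cm_s / sigma_theta_rad

# Placeholder inputs: n_H2 ~ 1e4 cm^-3 molecular gas (mean molecular
# weight 2.8 m_H), 0.5 km/s velocity dispersion, 10 deg angle spread.
rho = 1e4 * 2.8 * 1.6726e-24                              # g cm^-3
b_uG = dcf_field_gauss(rho, 0.5e5, math.radians(10.0)) * 1e6
print(f"{b_uG:.0f} uG")  # ~110 uG
```

The correction factor $\xi$ (here the commonly used 0.5) plays the role the paper's modified $\xi'$ refines.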

Read this paper on arXiv…

P. Pavaskar, H. Yan and J. Cho
Mon, 24 Apr 23
29/41

Comments: N/A

Fifteen years of millimeter accuracy lunar laser ranging with APOLLO: dataset characterization [IMA]

http://arxiv.org/abs/2304.11128


We present data from the Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) covering the 15-year span from April 2006 through the end of 2020. APOLLO measures the earth-moon separation by recording the round-trip travel time of photons from the Apache Point Observatory to five retro-reflector arrays on the moon. The APOLLO data set, combined with the 50-year archive of measurements from other lunar laser ranging (LLR) stations, can be used to probe fundamental physics such as gravity and Lorentz symmetry, as well as properties of the moon itself. We show that range measurements performed by APOLLO since 2006 have a median nightly accuracy of 1.7 mm, which is significantly better than other LLR stations.
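The measurement principle reduces to converting a photon round-trip time into a one-way range, so millimetre accuracy corresponds to picosecond-scale timing. A simplified sketch that ignores the atmospheric and relativistic corrections applied in the real analysis:

```python
# Convert a lunar-laser-ranging round-trip time to a one-way range,
# and the round-trip timing precision implied by a given range accuracy.
C = 299_792_458.0  # speed of light, m/s

def lunar_range_m(round_trip_s):
    """One-way Earth-Moon range from the photon round-trip time."""
    return C * round_trip_s / 2.0

def timing_precision_s(range_accuracy_m):
    """Round-trip timing precision needed for a given range accuracy."""
    return 2.0 * range_accuracy_m / C
```

A mean Earth-Moon distance of ~384,400 km corresponds to a round trip of ~2.56 s, and the quoted 1.7 mm median accuracy to roughly 11 ps of round-trip timing.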

Read this paper on arXiv…

J. Battat, E. Adelberger, N. Colmenares, et. al.
Mon, 24 Apr 23
38/41

Comments: 16 pages, 9 figures

Muons in EASs with $E_0 = 10^{19}$ eV according to data of the Yakutsk Array [HEAP]

http://arxiv.org/abs/2304.09924


Lateral distribution functions of particles in extensive air showers with energy $E_0 \simeq 10^{19}$ eV, recorded by ground-based and underground scintillation detectors with a threshold of $E_{\mu} \simeq 1.0 \times \sec\theta$ GeV at the Yakutsk array during continuous observations from 1986 to 2016, have been analyzed using events with zenith angles $\theta \le 60^{\circ}$. These functions have been compared to the predictions obtained with the QGSJet01 hadron interaction model by applying the CORSIKA code. The entire dataset indicates that cosmic rays consist predominantly of protons.

Read this paper on arXiv…

A. Glushkov, K. Lebedev and A. Sabourov
Fri, 21 Apr 23
3/60

Comments: 11 pages, 5 figures, 2 tables. Accepted for publication in JETP Letters (v.117, no.4, 2023), minor typos fixed

VarIabiLity seLection of AstrophysIcal sources iN PTF (VILLAIN) I. Structure function fits to 71 million objects [GA]

http://arxiv.org/abs/2304.09903


Context. Lightcurve variability is well-suited for characterising objects in surveys with high cadence and long baseline. This is especially relevant in view of the large datasets to be produced by the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST).
Aims. We aim to determine variability parameters for objects in the Palomar Transient Factory (PTF) and explore differences between quasars (QSOs), stars and galaxies. We will relate variability and colour information in preparation for future surveys.
Methods. We fit joint likelihoods to structure functions (SFs) of 71 million PTF lightcurves with a Markov Chain Monte Carlo method. For each object, we assume a power law SF and extract two parameters: the amplitude on timescales of one year, $A$, and a power law index, $\gamma$. With these parameters and colours in the optical (Pan-STARRS1) and mid infrared (WISE), we identify regions of parameter space dominated by different types of spectroscopically confirmed objects from SDSS. Candidate QSOs, stars and galaxies are selected to show their parameter distributions.
Results. QSOs have high-amplitude variations in the $R$ band and the strongest timescale dependence of variability. Galaxies have a broader range of amplitudes and low timescale dependency. With variability and colours, we achieve a photometric selection purity of 99.3 % for QSOs. Even though hard cuts in monochromatic variability alone are not as effective as seven-band magnitude cuts, variability is useful in characterising object sub-classes. Through variability, we also find QSOs that were erroneously classified as stars in the SDSS. We discuss perspectives and computational solutions in view of the upcoming LSST.
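The model being fit is a power-law structure function, $SF(\tau) = A\,(\tau/1\,\mathrm{yr})^{\gamma}$. A numpy-only sketch of a binned first-order SF estimator and a log-log fit — the paper fits joint likelihoods with MCMC, which this simple least-squares version does not reproduce:

```python
import numpy as np

def structure_function(t, m, bins):
    """First-order SF: RMS magnitude difference vs. time lag.
    (A common definition; the paper's exact estimator is not reproduced.)"""
    dt = np.abs(t[:, None] - t[None, :])
    dm2 = (m[:, None] - m[None, :])**2
    iu = np.triu_indices(len(t), k=1)      # unique pairs only
    dt, dm2 = dt[iu], dm2[iu]
    idx = np.digitize(dt, bins)
    return np.array([np.sqrt(np.mean(dm2[idx == i])) if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])

def fit_powerlaw_sf(tau, sf):
    """Least-squares fit of log SF = log A + gamma * log(tau / 1 yr),
    with tau in days. Returns (A, gamma)."""
    ok = np.isfinite(sf) & (sf > 0)
    gamma, logA = np.polyfit(np.log10(tau[ok] / 365.25), np.log10(sf[ok]), 1)
    return 10**logA, gamma
```

Here `A` is the amplitude on one-year timescales and `gamma` the power-law index, matching the two parameters extracted per object in the abstract.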

Read this paper on arXiv…

S. Bruun, A. Agnello and J. Hjorth
Fri, 21 Apr 23
19/60

Comments: Accepted by A&A on 11/04/2023, 16 pages, 14 figures

GSpyNetTree: A signal-vs-glitch classifier for gravitational-wave event candidates [CL]

http://arxiv.org/abs/2304.09977


Despite achieving sensitivities capable of detecting the extremely small amplitude of gravitational waves (GWs), LIGO and Virgo detector data contain frequent bursts of non-Gaussian transient noise, commonly known as ‘glitches’. Glitches come in various time-frequency morphologies, and they are particularly challenging when they mimic the form of real GWs. Given the higher expected event rate in the next observing run (O4), LIGO-Virgo GW event candidate validation will require increased levels of automation. Gravity Spy, a machine learning tool that successfully classified common types of LIGO and Virgo glitches in previous observing runs, has the potential to be restructured as a signal-vs-glitch classifier to accurately distinguish between glitches and GW signals. A signal-vs-glitch classifier used for automation must be robust and compatible with a broad array of background noise, new sources of glitches, and the likely occurrence of overlapping glitches and GWs. We present GSpyNetTree, the Gravity Spy Convolutional Neural Network Decision Tree: a multi-CNN classifier using CNNs in a decision tree sorted via total GW candidate mass tested under these realistic O4-era scenarios.

Read this paper on arXiv…

S. Alvarez-Lopez, A. Liyanage, J. Ding, et. al.
Fri, 21 Apr 23
23/60

Comments: 19 pages, 12 figures, submitted to Classical and Quantum Gravity

TONE: A CHIME/FRB Outrigger Pathfinder for localizations of Fast Radio Bursts using Very Long Baseline Interferometry [IMA]

http://arxiv.org/abs/2304.10534


The sensitivity and field of view of the Canadian Hydrogen Intensity Mapping Experiment (CHIME) have enabled its fast radio burst (FRB) backend to detect thousands of FRBs. However, the low angular resolution of CHIME prevents it from localizing most FRBs to their host galaxies. Very long baseline interferometry (VLBI) can readily provide the subarcsecond resolution needed to localize many FRBs to their hosts. Thus, we developed TONE: an interferometric array of eight $6~\mathrm{m}$ dishes to serve as a pathfinder for the CHIME/FRB Outriggers project, which will use wide-field-of-view cylinders to determine the sky positions of a large sample of FRBs, revealing their positions within their host galaxies to subarcsecond precision. In the meantime, TONE’s $\sim3333~\mathrm{km}$ baseline with CHIME proves to be an excellent testbed for the development and characterization of single-pulse VLBI techniques at the time of discovery. This work describes the TONE instrument, its sensitivity, and its astrometric precision in single-pulse VLBI. We believe that our astrometric errors are dominated by uncertainties in the clock measurements that accumulate between successive Crab-pulsar calibrations, which happen every $\approx 24~\mathrm{h}$; the wider fields of view and higher sensitivity of the Outriggers will provide opportunities for higher-cadence calibration. At present, CHIME-TONE localizations of the Crab pulsar yield systematic localization errors of $0.1$-$0.2~\mathrm{arcsec}$ – comparable to the resolution afforded by state-of-the-art optical instruments ($\sim 0.05~\mathrm{arcsec}$).

Read this paper on arXiv…

P. Sanghavi, C. Leung, K. Bandura, et. al.
Fri, 21 Apr 23
24/60

Comments: 31 Pages, 25 Figures, To be submitted to Journal of Astronomical Instrumentation

VarIabiLity seLection of AstrophysIcal sources iN PTF (VILLAIN) II. Supervised classification of variable sources [GA]

http://arxiv.org/abs/2304.09905


Context. Large, high-dimensional astronomical surveys require efficient data analysis. Automatic fitting of lightcurve variability and machine learning may assist in identification of sources including candidate quasars.
Aims. We aim to classify sources from the Palomar Transient Factory (PTF) as quasars, stars or galaxies, and to examine model performance using variability and colours. We determine the added value of variability information as well as quantifying the performance when colours are not available.
Methods. We use supervised learning in the form of a histogram-based gradient boosting classifier to predict spectroscopic SDSS classes using photometry. For comparison, we create models with structure function variability parameters only, magnitudes only and using all parameters.
Results. We achieve highly accurate predictions for 71 million sources with lightcurves in PTF. The full model correctly identifies 92.49 % of spectroscopically confirmed quasars from the SDSS with a purity of 95.64 %. With only variability, the completeness is 34.97 % and the purity is 58.71 % for quasars. The predictions and probabilities of PTF objects belonging to each class are made available in a catalogue, VILLAIN-Cat, including magnitudes and variability parameters.
Conclusions. We have developed a method for automatic and effective classification of PTF sources using magnitudes and variability. For similar supervised models, we recommend using at least 100,000 labeled objects, and we show how performance scales with data volume.

Read this paper on arXiv…

S. Bruun, J. Hjorth and A. Agnello
Fri, 21 Apr 23
33/60

Comments: 10 pages, 5 figures

Jupiter Science Enabled by ESA's Jupiter Icy Moons Explorer [EPA]

http://arxiv.org/abs/2304.10229


ESA’s Jupiter Icy Moons Explorer (JUICE) will provide a detailed investigation of the Jovian system in the 2030s, combining a suite of state-of-the-art instruments with an orbital tour tailored to maximise observing opportunities. We review the Jupiter science enabled by the JUICE mission, building on the legacy of discoveries from the Galileo, Cassini, and Juno missions, alongside ground- and space-based observatories. We focus on remote sensing of the climate, meteorology, and chemistry of the atmosphere and auroras from the cloud-forming weather layer, through the upper troposphere, into the stratosphere and ionosphere. The Jupiter orbital tour provides a wealth of opportunities for atmospheric and auroral science: global perspectives with its near-equatorial and inclined phases, sampling all phase angles from dayside to nightside, and investigating phenomena evolving on timescales from minutes to months. The remote sensing payload spans far-UV spectroscopy (50-210 nm), visible imaging (340-1080 nm), visible/near-infrared spectroscopy (0.49-5.56 $\mu$m), and sub-millimetre sounding (near 530-625 GHz and 1067-1275 GHz). This is coupled to radio, stellar, and solar occultation opportunities to explore the atmosphere at high vertical resolution, and radio and plasma wave measurements of electric discharges in the Jovian atmosphere and auroras. Cross-disciplinary scientific investigations enable JUICE to explore coupling processes in giant planet atmospheres, showing how the atmosphere is connected to (i) the deep circulation and composition of the hydrogen-dominated interior and (ii) the currents and charged particle environments of the external magnetosphere. JUICE will provide a comprehensive characterisation of the atmosphere and auroras of this archetypal giant planet.

Read this paper on arXiv…

L. Fletcher, T. Cavalié, D. Grassi, et. al.
Fri, 21 Apr 23
45/60

Comments: 83 pages, 24 figures, submitted to Space Science Reviews special issue on ESA’s JUICE mission

Annotated bibliography: Philosophy of Astrophysics [CL]

http://arxiv.org/abs/2304.10067


The following annotated bibliography contains a reasonably complete survey of contemporary work in the philosophy of astrophysics. Spanning approximately forty years from the early 1980s to the present day, the bibliography should help researchers entering the field to acquaint themselves with its major texts, while providing an opportunity for philosophers already working on astrophysics to expand their knowledge base and engage with unfamiliar material.

Read this paper on arXiv…

C. Yetman
Fri, 21 Apr 23
59/60

Comments: 28 pages, 79 entries, forthcoming 2023

Modeling Charge Cloud Dynamics in Cross Strip Semiconductor Detectors [IMA]

http://arxiv.org/abs/2304.09713


When a $\gamma$-ray interacts in a semiconductor detector, the resulting electron-hole charge clouds drift towards their respective electrodes for signal collection. These charge clouds will expand over time due to both thermal diffusion and mutual electrostatic repulsion. Solutions to the resulting charge profiles are well understood for the limiting cases accounting for only diffusion and only repulsion, but the general solution including both effects can only be solved numerically. Previous attempts to model these effects have taken into account the broadening of the charge profile due to both effects, but have simplified the shape of the profile by assuming Gaussian distributions. However, the detailed charge profile can have important impacts on charge sharing in multi-electrode strip detectors. In this work, we derive an analytical approximation to the general solution, including both diffusion and repulsion, that closely replicates both the width and the detailed shape of the charge profiles. This analytical solution simplifies the modeling of charge clouds in semiconductor strip detectors.

Read this paper on arXiv…

S. Boggs
Thu, 20 Apr 23
4/57

Comments: Accepted for publication in Nuclear Instruments and Methods in Physics Research A

The SunPy Project: An Interoperable Ecosystem for Solar Data Analysis [SSA]

http://arxiv.org/abs/2304.09794


The SunPy Project is a community of scientists and software developers creating an ecosystem of Python packages for solar physics. The project includes the sunpy core package as well as a set of affiliated packages. The sunpy core package provides general purpose tools to access data from different providers, read image and time series data, and transform between commonly used coordinate systems. Affiliated packages perform more specialized tasks that do not fall within the more general scope of the sunpy core package. In this article, we give a high-level overview of the SunPy Project, how it is broader than the sunpy core package, and how the project curates and fosters the affiliated package system. We demonstrate how components of the SunPy ecosystem, including sunpy and several affiliated packages, work together to enable multi-instrument data analysis workflows. We also describe members of the SunPy Project and how the project interacts with the wider solar physics and scientific Python communities. Finally, we discuss the future direction and priorities of the SunPy Project.

Read this paper on arXiv…

The SunPy Community, W. Barnes, S. Christe, et. al.
Thu, 20 Apr 23
22/57

Comments: 15 pages, 1 figure, published in Frontiers

Numerically studying the degeneracy problem in extreme finite-source microlensing events [IMA]

http://arxiv.org/abs/2304.09529


Most transit microlensing events due to very low-mass lens objects suffer from extreme finite-source effects. While modeling their light curves, there is a known continuous degeneracy between their relevant lensing parameters, i.e., the source angular radius normalized to the angular Einstein radius $\rho_{\star}$, the Einstein crossing time $t_{\rm E}$, the lens impact parameter $u_{0}$, the blending parameter, and the stellar apparent magnitude. In this work, I numerically study the origin of this degeneracy. I find that these light curves have 5 observational parameters: the baseline magnitude, the maximum deviation in the magnification factor, the Full Width at Half Maximum $\rm{FWHM}=2 t_{\rm{HM}}$, the deviation from the top-hat model, and the time of the maximum time-derivative of the microlensing light curve, $T_{\rm{max}}=t_{\rm E}\sqrt{\rho_{\star}^{2}-u_{0}^{2}}$. For extreme finite-source microlensing events due to uniform source stars, we get $t_{\rm{HM}}\simeq T_{\rm{max}}$, and the deviation from the top-hat model tends to zero; both effects cause the known continuous degeneracy. When either $\rho_{\star}\lesssim10$ or the limb-darkening effect is considerable, $t_{\rm{HM}}$ and $T_{\rm{max}}$ are two independent observational parameters. I use a numerical approach, i.e., Random Forests containing $100$-$120$ Decision Trees, to study how efficiently these observational parameters yield the lensing parameters. These machine-learning models find the mentioned 5 lensing parameters for finite-source microlensing events from uniform and limb-darkened source stars with average $R^{2}$-scores of $0.87$ and $0.84$, respectively. The $R^{2}$-score for evaluating the lens impact parameter gets worse on adding limb darkening, and for extracting the limb-darkening coefficient itself this score falls as low as $0.67$.
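The degeneracy can be made concrete: for a uniform source, $t_{\rm HM}\simeq T_{\rm max}=t_{\rm E}\sqrt{\rho_{\star}^{2}-u_{0}^{2}}$, so distinct parameter sets sharing this combination produce essentially the same observables. The numerical values below are made up for illustration:

```python
import numpy as np

def t_max(tE, rho_star, u0):
    """Time of the maximum light-curve time-derivative,
    T_max = t_E * sqrt(rho_*^2 - u0^2), in the same units as tE."""
    return tE * np.sqrt(rho_star**2 - u0**2)

# Two different (tE, rho_*, u0) sets with identical T_max — the continuous
# degeneracy the paper traces to t_HM ≈ T_max for uniform sources.
a = t_max(10.0, 5.0, 3.0)   # 10 * sqrt(25 - 9)
b = t_max(20.0, 2.5, 1.5)   # 20 * sqrt(6.25 - 2.25)
```

Both calls return the same value, so light curves generated from either parameter set would be observationally indistinguishable in the extreme finite-source limit.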

Read this paper on arXiv…

S. Sajadian
Thu, 20 Apr 23
25/57

Comments: 10 pages, 6 figures

Fabrication of a 64-Pixel TES Microcalorimeter Array with Iron Absorbers Uniquely Designed for 14.4-keV Solar Axion Search [IMA]

http://arxiv.org/abs/2304.09539


If the axion, a hypothetical elementary particle proposed to solve the strong CP problem, exists, a 57Fe nucleus in the solar core could emit a 14.4-keV monochromatic axion through the M1 transition. If such axions are converted back into photons by a 57Fe absorber, a transition edge sensor (TES) X-ray microcalorimeter should be able to detect them efficiently. We have designed and fabricated a dedicated 64-pixel TES array with iron absorbers for the solar axion search. In order to decrease the effect of iron magnetization on spectroscopic performance, the iron absorber is placed next to the TES while maintaining a certain distance, and a gold thermal transfer strap connects them. We have accomplished the electroplating of gold straps with high thermal conductivity: the residual resistivity ratio (RRR) was over 23, more than eight times higher than that of a previously evaporated strap. In addition, we successfully electroplated pure-iron films more than a few micrometers thick for the absorbers and fabricated a 64-pixel TES calorimeter structure.

Read this paper on arXiv…

Y. Yagi, T. Hayashi, K. Tanaka, et. al.
Thu, 20 Apr 23
36/57

Comments: 5 pages, 5 figures, published in IEEE Transactions on Applied Superconductivity on 8 March 2023

Zenith-Angular Characteristics of Particles in EASs with $E_0 \simeq 10^{18}$ eV According to the Yakutsk Array Data [HEAP]

http://arxiv.org/abs/2304.08561


Particle lateral distributions were investigated in cosmic ray air showers with energy $E_0 \simeq 10^{18}$ eV registered at the Yakutsk array with surface and underground scintillation detectors with $\simeq 1 \times \sec\theta$~GeV threshold during the period of continuous observations from 1986 to 2016. The analysis covers events with arrival direction zenith angles $\theta \le 60^{\circ}$ within five intervals with step $\Delta\cos\theta = 0.1$. Experimental values were compared to simulation results obtained with the use of the CORSIKA code within the framework of the QGSJet01 hadron interaction model. The whole dataset points to a probable cosmic-ray composition close to protons.

Read this paper on arXiv…

A. Glushkov, K. Lebedev and A. Sabourov
Wed, 19 Apr 23
5/58

Comments: 14 pages, 6 figures. Accepted for publication in Physics of Atomic Nuclei, volume 86 (2023)

Cosmology with Galaxy Cluster Properties using Machine Learning [CEA]

http://arxiv.org/abs/2304.09142


[Abridged] Galaxy clusters are the most massive gravitationally-bound systems in the universe and are widely considered to be an effective cosmological probe. We propose the first Machine Learning method using galaxy cluster properties to derive unbiased constraints on a set of cosmological parameters, including Omega_m, sigma_8, Omega_b, and h_0. We train the machine learning model with mock catalogs including “measured” quantities from Magneticum multi-cosmology hydrodynamical simulations, like gas mass, gas bolometric luminosity, gas temperature, stellar mass, cluster radius, total mass, velocity dispersion, and redshift, and correctly predict all parameters with uncertainties of the order of ~14% for Omega_m, ~8% for sigma_8, ~6% for Omega_b, and ~3% for h_0. This first test is exceptionally promising, as it shows that machine learning can efficiently map the correlations in the multi-dimensional space of the observed quantities to the cosmological parameter space and narrow down the probability that a given sample belongs to a given cosmological parameter combination. In the future, these ML tools can be applied to cluster samples with multi-wavelength observations from surveys like CSST in the optical band, Euclid and Roman in the near-infrared band, and eROSITA in the X-ray band to constrain both the cosmology and the effect of the baryonic feedback.

Read this paper on arXiv…

L. Qiu, N. Napolitano, S. Borgani, et. al.
Wed, 19 Apr 23
23/58

Comments: 18 pages, submitted to A&A Main Journal. Comments are welcome

Persistent and occasional: searching for the variable population of the ZTF/4MOST sky using ZTF data release 11 [IMA]

http://arxiv.org/abs/2304.08519


We present a variability, color and morphology based classifier, designed to identify transients, persistently variable, and non-variable sources from the Zwicky Transient Facility (ZTF) Data Release 11 (DR11) light curves of extended and point sources. The main motivation to develop this model was to identify active galactic nuclei (AGN) at different redshift ranges to be observed by the 4MOST ChANGES project. Still, it serves as a more general time-domain astronomy study. The model uses nine colors computed from CatWISE and PS1, a morphology score from PS1, and 61 single-band variability features computed from the ZTF DR11 g and r light curves. We trained two versions of the model, one for each ZTF band. We used a hierarchical local classifier per parent node approach, where each node was composed of a balanced random forest model. We adopted a 17-class taxonomy, including non-variable stars and galaxies, three transient classes, five classes of stochastic variables, and seven classes of periodic variables. The macro-averaged precision, recall and F1-score are 0.61, 0.75, and 0.62 for the g-band model, and 0.60, 0.74, and 0.61 for the r-band model. When grouping the four AGN classes into one single class, its precision, recall, and F1-score are 1.00, 0.95, and 0.97, respectively, for both the g and r bands. We applied the model to all the sources in the ZTF/4MOST overlapping sky, avoiding ZTF fields covering the Galactic bulge, including 86,576,577 light curves in the g-band and 140,409,824 in the r-band. Only 0.73% of the g-band light curves and 2.62% of the r-band light curves were classified as stochastic, periodic, or transient with high probability ($P_{init}\geq0.9$). We found that, in general, more reliable results are obtained when using the g-band model. Using the latter, we identified 384,242 AGN candidates, 287,156 of which have $P_{init}\geq0.9$.

Read this paper on arXiv…

P. Sánchez-Sáez, J. Arredondo, A. Bayo, et. al.
Wed, 19 Apr 23
35/58

Comments: Accepted for publication in Astronomy & Astrophysics. Abstract shortened for arXiv. Tables containing the classifications and features for the ZTF g and r bands, and the labeled sets will be available at CDS. Individual catalogs per class and band, as well as the labeled set catalogs, can be downloaded at Zenodo DOI:10.5281/zenodo.7826045

The Simons Observatory: Beam characterization for the Small Aperture Telescopes [IMA]

http://arxiv.org/abs/2304.08995


We use time-domain simulations of Jupiter observations to test and develop a beam reconstruction pipeline for the Simons Observatory Small Aperture Telescopes. The method relies on a map maker that estimates and subtracts correlated atmospheric noise and a beam fitting code designed to compensate for the bias caused by the map maker. We test our reconstruction performance for four different frequency bands against various algorithmic parameters, atmospheric conditions and input beams. We additionally show the reconstruction quality as a function of the number of available observations and investigate how different calibration strategies affect the beam uncertainty. For all of the cases considered, we find good agreement between the fitted results and the input beam model within a ~1.5% error for a multipole range l = 30 – 700.

Read this paper on arXiv…

N. Dachlythra, A. Duivenvoorden, J. Gudmundsson, et. al.
Wed, 19 Apr 23
45/58

Comments: 22 pages, 21 figures, to be submitted to ApJ

The Breakthrough Listen Search for Intelligent Life: Nearby Stars' Close Encounters with the Brightest Earth Transmissions [SSA]

http://arxiv.org/abs/2304.07400


After having left the heliosphere, Voyager 1 and Voyager 2 continue to travel through interstellar space. The Pioneer 10, Pioneer 11, and New Horizons spacecraft are also on paths to pass the heliopause. These spacecraft have communicated with the Deep Space Network (DSN) radio antennas in order to download scientific data and telemetry data. Outward transmissions from the DSN travel to the spacecraft and beyond into interstellar space. These transmissions have encountered and will encounter other stars, introducing the possibility that intelligent life in other solar systems will encounter our terrestrial transmissions. We use the beamwidth of the transmissions between the DSN and interstellar spacecraft to perform a search around the past and future positions of each spacecraft obtained from the JPL Horizons System. By performing this search over the Gaia Catalogue of Nearby Stars (GCNS), a catalogue of precisely mapped stars within 100 pc, we determine which stars the transmissions of these spacecraft will encounter. We highlight stars that are in the background of DSN transmissions and calculate the dates of these encounters to determine the time and place for potential intelligent extraterrestrial life to encounter terrestrial transmissions.
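The underlying geometric test — does a star fall inside the transmission cone? — is a great-circle separation check. A simplified sketch; the paper works from JPL Horizons ephemerides and the GCNS catalogue rather than fixed coordinates:

```python
import numpy as np

def in_beam(ra1, dec1, ra2, dec2, beam_fwhm_deg):
    """Whether a star at (ra2, dec2) lies within a transmission beam aimed
    at (ra1, dec1), all angles in degrees (simplified cone test)."""
    r1, d1, r2, d2 = map(np.radians, (ra1, dec1, ra2, dec2))
    # spherical law of cosines for the angular separation
    cos_sep = np.sin(d1) * np.sin(d2) + np.cos(d1) * np.cos(d2) * np.cos(r1 - r2)
    sep = np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))
    return sep <= beam_fwhm_deg / 2.0
```

Applied over a catalogue of star positions propagated to the transmission epoch, this yields the set of stars lying in the background of each DSN transmission.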

Read this paper on arXiv…

R. Derrick and H. Isaacson
Tue, 18 Apr 23
5/80

Comments: N/A

Spectral classification of young stars using conditional invertible neural networks I. Introducing and validating the method [SSA]

http://arxiv.org/abs/2304.08398


Aims. We introduce a new deep learning tool that estimates stellar parameters (such as effective temperature, surface gravity, and extinction) of young low-mass stars by coupling the Phoenix stellar atmosphere model with a conditional invertible neural network (cINN). Our networks allow us to infer the posterior distribution of each stellar parameter from the optical spectrum.
Methods. We discuss cINNs trained on three different Phoenix grids: Settl, NextGen, and Dusty. We evaluate the performance of these cINNs on unlearned Phoenix synthetic spectra and on the spectra of 36 Class III template stars with well-characterised stellar parameters.
Results. We confirm that the cINNs estimate the considered stellar parameters almost perfectly when tested on unlearned Phoenix synthetic spectra. Applying our networks to Class III stars, we find good agreement with deviations of at most 5–10 per cent. The cINNs perform slightly better for earlier-type stars than for later-type stars like late M-type stars, but we conclude that estimations of effective temperature and surface gravity are reliable for all spectral types within the network’s training range.
Conclusions. Our networks are time-efficient tools applicable to large amounts of observations. Among the three networks, we recommend using the cINN trained on the Settl library (Settl-Net), as it provides the best performance across the largest range of temperature and gravity.

Read this paper on arXiv…

D. Kang, V. Ksoll, D. Itrich, et. al.
Tue, 18 Apr 23
8/80

Comments: 29 pages, 19 figures, Accepted for publication by Astronomy & Astrophysics on 10 April

Noise in the LIGO Livingston Gravitational Wave Observatory due to Trains [IMA]

http://arxiv.org/abs/2304.07477


Environmental seismic disturbances limit the sensitivity of LIGO gravitational wave detectors. Trains near the LIGO Livingston detector produce low frequency (0.5-10 Hz) ground noise that couples into the gravitational wave sensitive frequency band (10-100 Hz) through light reflected in mirrors and other surfaces. We investigate the effect of trains during the Advanced LIGO third observing run, and propose a method to search for narrow band seismic frequencies responsible for contributing to increases in scattered light. Through the use of the linear regression tool Lasso (least absolute shrinkage and selection operator) and glitch correlations, we identify the most common seismic frequencies that correlate with increases in detector noise as 0.6-0.8 Hz, 1.7-1.9 Hz, 1.8-2.0 Hz, and 2.3-2.5 Hz in the LIGO Livingston corner station.
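The role Lasso plays here — picking out the few seismic frequency bands whose ground motion predicts detector noise — can be illustrated with a numpy-only coordinate-descent version. This is a toy setup with made-up data; the study applies the standard Lasso to real seismometer and detector channels:

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    """Minimise (1/2n)||y - Xw||^2 + alpha * ||w||_1 by coordinate descent.
    The L1 penalty drives coefficients of uninformative bands to exactly zero."""
    n, p = X.shape
    w = np.zeros(p)
    z = (X**2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]            # residual excluding feature j
            rho = X[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z[j]  # soft threshold
    return w

# Toy example: detector noise driven by seismic bands 2 and 5 only.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                         # per-band ground-motion RMS
y = 1.5 * X[:, 2] + 0.8 * X[:, 5] + 0.1 * rng.normal(size=500)
w = lasso_cd(X, y, alpha=0.05)
selected = np.flatnonzero(np.abs(w) > 1e-8)           # surviving bands
```

The surviving coefficients identify the bands most correlated with increases in noise, analogous to how the study isolates 0.6-0.8 Hz, 1.7-1.9 Hz, 1.8-2.0 Hz, and 2.3-2.5 Hz ground motion.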

Read this paper on arXiv…

J. Glanzer, S. Soni, J. Spoon, et. al.
Tue, 18 Apr 23
14/80

Comments: 18 pages (including bibliography), 17 figures, 2 tables, and 1 appendix. Submitted to Classical and Quantum Gravity

Parallelization of the Symplectic Massive Body Algorithm (SyMBA) $N$-body Code [EPA]

http://arxiv.org/abs/2304.07325


Direct $N$-body simulations of a large number of particles, especially in the study of planetesimal dynamics and planet formation, have been computationally challenging even with modern machines. This work presents the combination of fully parallelized $N^2/2$ interactions and the incorporation of the GENGA code’s close-encounter pair grouping strategy to enable MIMD parallelization of the Symplectic Massive Body Algorithm (SyMBA) with OpenMP on multi-core CPUs in a shared-memory environment. SyMBAp (SyMBA parallelized) preserves the symplectic nature of SyMBA and shows good scalability, with a speedup of 30.8 times with 56 cores in a simulation with 5,000 fully interactive particles.
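As a back-of-envelope check on the quoted scaling, Amdahl's law converts the 30.8-times speedup on 56 cores into an implied serial fraction. This is a rough diagnostic only; real N-body scaling is also shaped by load balance and synchronization overheads, which Amdahl's law ignores:

```python
def amdahl_serial_fraction(speedup, n_cores):
    """Serial fraction f implied by Amdahl's law, S = 1 / (f + (1 - f) / N)."""
    return (n_cores / speedup - 1.0) / (n_cores - 1.0)

# The paper's quoted figures imply roughly 1.5% of the work is effectively serial.
f = amdahl_serial_fraction(30.8, 56)
```

Equivalently, the parallel efficiency is 30.8 / 56 ≈ 55%, consistent with a small but non-negligible non-parallelized portion of the integrator.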

Read this paper on arXiv…

T. Lau and M. Lee
Tue, 18 Apr 23
23/80

Comments: Accepted for publication in Research Notes of the AAS

Updates to ALMA Site Properties: using the ESO-Allegro Phase RMS database — ALMA Memo 624 [IMA]

http://arxiv.org/abs/2304.08318


We present a long-term overview of the atmospheric phase stability at the Atacama Large Millimeter/submillimeter Array (ALMA) site, using >5 years of data, that acts as the successor to the studies summarized two decades ago by Evans et al. (2003). Importantly, we explore the atmospheric variations, the 'phase RMS', and associated metadata of over 17000 accrued ALMA observations taken since Cycle 3 (2015) by using the Bandpass calibrator source scans. We indicate the temporal phase RMS trends for average baseline lengths of 500, 1000, 5000, and 10000 m, in contrast to the old stability studies that used a single 300 m baseline phase monitor system. At the ALMA site, on the Chajnantor plateau, we report the diurnal variations and monthly changes in the phase RMS on ALMA-relevant baseline lengths, measured directly from the data, and we reaffirm such trends in atmospheric transmission (via Precipitable Water Vapour – PWV). We confirm that day observations have higher phase RMS and PWV than night observations, while the monthly variations show the Chilean winter (June – August) providing the best high-frequency and long-baseline observing conditions – low (stable) phase RMS and low PWV. Yet, not all good phase-stability conditions occur when the PWV is low. Measurements of the phase RMS as a function of short timescales, 30 to 240 s, that tie in with typical target source scan times, and as a function of baseline length indicate that phase variations are smaller for short timescales and baselines and larger for longer timescales and baselines. We illustrate that fast-switching phase-referencing techniques, which allow short target scan times, could work well in reducing the phase RMS to suitable levels, specifically for high frequencies (Bands 8, 9 and 10), long baselines, and the two combined.

Read this paper on arXiv…

L. Maud, A. Pérez-Sánchez, Y. Asaki, et al.
Tue, 18 Apr 23
27/80

Comments: 34 pages, 19 Figures, 10 Tables ALMA Memo 624: this https URL

Gas selection for Xe-based LCP-GEM detectors onboard the CubeSat X-ray observatory NinjaSat [IMA]

http://arxiv.org/abs/2304.08321


We present a gas selection for the Xe-based gas electron multiplier (GEM) detectors, Gas Multiplier Counters (GMCs), onboard the CubeSat X-ray observatory NinjaSat. To achieve an energy bandpass of 2-50 keV, we decided to use a Xe-based gas mixture at a pressure of 1.2 atm that is sensitive to high-energy X-rays. In addition, an effective gain of over 300 is required for a single GEM so that the 2 keV X-ray signal can be sufficiently larger than the electrical noise. First, we measured the effective gains of the GEM in nine Xe-based gas mixtures (combinations of Xe, Ar, CO2, CH4, and dimethyl ether, DME) at 1.0 atm. The highest gains were obtained with Xe/Ar/DME mixtures, while relatively lower gains were obtained with Xe/Ar/CO2, Xe/Ar/CH4, and Xe+quencher mixtures. Based on these results, we selected the Xe/Ar/DME (75%/24%/1%) mixture at 1.2 atm as the sealed gas for the GMC. We then investigated the dependence of the effective gain on the electric fields in the drift and induction gaps, ranging over 100-650 V cm$^{-1}$ and 500-5000 V cm$^{-1}$, respectively, in the selected gas mixture. The effective gain depended weakly on the drift field, while it was almost linearly proportional to the induction field: 2.4 times higher at 5000 V cm$^{-1}$ than at 1000 V cm$^{-1}$. With the optimal induction and drift fields, the flight-model GMC achieves an effective gain of 460 with an applied GEM voltage of 590 V.

Read this paper on arXiv…

T. Takeda, T. Tamagawa, T. Enoto, et al.
Tue, 18 Apr 23
34/80

Comments: 7th international conference on Micro Pattern Gaseous Detectors 2022 – MPGD2022, 3 pages, 2 figures

Using Dark Energy Explorers and Machine Learning to Enhance the Hobby-Eberly Telescope Dark Energy Experiment [IMA]

http://arxiv.org/abs/2304.07348


We present analysis using a citizen science campaign to improve the cosmological measures from the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX). The goal of HETDEX is to measure the Hubble expansion rate, $H(z)$, and angular diameter distance, $D_A(z)$, at $z =$ 2.4, each to percent-level accuracy. This accuracy is determined primarily by the total number of detected Lyman-$\alpha$ emitters (LAEs), the false positive rate due to noise, and the contamination due to [O II] emitting galaxies. This paper presents the citizen science project, Dark Energy Explorers, with the goals of increasing the number of LAEs and decreasing the number of false positives due to noise and [O II] galaxies. Initial analysis shows that citizen science is an efficient and effective tool for classifications most accurately done by the human eye, especially in combination with unsupervised machine learning. The three aspects of the citizen science campaign with the most impact are 1) identifying individual problems with detections, 2) providing a clean sample with 100% visual identification above a signal-to-noise cut, and 3) providing labels for machine learning efforts. Since the end of 2022, Dark Energy Explorers has collected over three and a half million classifications by 11,000 volunteers in over 85 different countries around the world. By incorporating the results of Dark Energy Explorers, we expect to improve the accuracy of the $D_A(z)$ and $H(z)$ parameters at $z =$ 2.4 by 10-30%. While the primary goal is to improve on HETDEX, Dark Energy Explorers has already proven to be a uniquely powerful tool for science advancement and for increasing accessibility to science worldwide.

Read this paper on arXiv…

L. House, K. Gebhardt, K. Finkelstein, et al.
Tue, 18 Apr 23
35/80

Comments: 14 pages, 6 figures, accepted for publication in The Astrophysical Journal

LISAmax: Improving the Gravitational-Wave Sensitivity by Two Orders of Magnitude [CL]

http://arxiv.org/abs/2304.08287


Within its Voyage 2050 planning cycle, the European Space Agency (ESA) is considering long-term large-class science mission themes. Gravitational-wave astronomy is among the topics under study. This paper presents “LISAmax”, a gravitational-wave interferometer concept consisting of three spacecraft located close to the Sun-Earth libration points L3, L4 and L5, forming a triangular constellation with an arm length of 259 million kilometers (to be compared to LISA’s 2.5 million kilometer arms). This is the largest triangular formation that can be reached from Earth without a major leap in mission complexity and cost. The sensitivity curve of such a detector is at least two orders of magnitude lower in amplitude than that of LISA. Depending on the choice of other instrument parameters, this makes the detector sensitive to gravitational waves in the micro-Hertz range and opens a new window for gravitational-wave astronomy, not covered by any other planned detector concept. We analyze in detail the constellation stability for a 10-year mission in the full numerical model and compute the orbit transfers using a European launcher and chemical propulsion. The payload design parameters are assessed, and the expected sensitivity curve is compared with a number of potential gravitational-wave sources. No showstoppers are identified at this point in the analysis.

Read this paper on arXiv…

W. Martens, M. Khan and J. Bayle
Tue, 18 Apr 23
45/80

Comments: 18 pages, 11 figures

New variable sources revealed by DECam toward the LMC: the first 15 deg2 [SSA]

http://arxiv.org/abs/2304.08133


The Dark Energy Camera (DECam) is a sensitive, wide-field instrument mounted at the prime focus of the 4 m V. Blanco Telescope in Chile. Besides its main objectives, i.e., understanding the growth and evolution of structures in the Universe, the camera offers the opportunity to observe a 3 deg2 field of view in a single pointing and, with an adequate cadence, to identify the variable sources contained therein. In this paper, we present the results of a DECam observational campaign toward the LMC and give a catalogue of the observed variable sources. We considered all the available DECam observations of the LMC, acquired during 32 nights over a period of two years (from February 2018 to January 2020), and set up a dedicated pipeline for detecting and characterizing variable sources in the observed fields. Here, we report on the first 15 deg2 in and around the LMC as observed by DECam, testing the capabilities of our pipeline. Since many of the observed fields cover a rather crowded region of the sky, we adopted the ISIS subtraction package which, even in these conditions, can detect variables at a very low signal-to-noise ratio. All the potentially identified variable sources were then analyzed, and each light curve was tested for periodicity by using the Lomb-Scargle and Schwarzenberg-Czerny algorithms. Furthermore, we classified the identified sources by using the UPSILoN neural network. This analysis allowed us to find 70,981 variable stars, 1,266 of which were previously unknown. We estimated the periods of the variables and compared them with the values available in the catalogues. Moreover, for the 1,266 newly detected objects, an attempted classification based on light curve analysis is presented.
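As a toy illustration of the periodicity-test stage (not the paper's pipeline; the sampling pattern, period grid, and signal parameters below are invented), a Lomb-Scargle search over irregularly sampled photometry can be sketched with SciPy:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
# irregular sampling, loosely mimicking sparse observing epochs over ~100 days
t = np.sort(rng.uniform(0, 100, 120))
true_period = 2.7                          # days (hypothetical variable star)
y = 0.3 * np.sin(2 * np.pi * t / true_period) + 0.05 * rng.standard_normal(t.size)
y -= y.mean()                              # center before computing the periodogram

periods = np.linspace(0.5, 10, 5000)       # trial periods in days
omega = 2 * np.pi / periods                # lombscargle expects angular frequencies
power = lombscargle(t, y, omega)
best = periods[np.argmax(power)]
print(f"recovered period: {best:.3f} d (true {true_period} d)")
```

The highest peak of the periodogram recovers the injected period despite the gaps in the time series, which is the property that makes Lomb-Scargle suitable for ground-based cadences.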

Read this paper on arXiv…

A. Franco, A. Nucita, F. Paolis, et al.
Tue, 18 Apr 23
51/80

Comments: 11 pages, 7 figures

Detection of magnetic galactic binaries in quasi-circular orbit with LISA [HEAP]

http://arxiv.org/abs/2304.07294


Laser Interferometer Space Antenna (LISA) will observe gravitational waves from galactic binaries (GBs) of white dwarfs or neutron stars. Some of these objects are among the most magnetic astrophysical objects in the Universe. Magnetism, by secularly disrupting the orbit, can eventually affect the gravitational-wave emission and could then potentially be detected and characterized after several years of observations by LISA. Currently, the data processing pipeline of the LISA Data Challenge (LDC) for GBs considers neither magnetism nor eccentricity. Recently, it was shown [Bourgoin et al. PRD 105, 124042 (2022)] that magnetism induces a shift in the gravitational wave frequencies. Additionally, it was argued that, if the binary’s orbit is eccentric, the presence of magnetism could be detected by LISA. In this work, we explore the consequences of a future data analysis conducted on quasi-circular and magnetic GB systems using the current LDC tools. We first show that a single eccentric GB can be interpreted as several GBs, which can eventually bias population studies deduced from LISA’s future catalog. Then, we confirm that for quasi-circular orbits, the secular magnetic energy of the system can be inferred if the signal-to-noise ratio of the second harmonic is high enough to be detected by traditional quasi-monochromatic source searching algorithms. LISA observations could therefore bring new insights into the nature and origin of magnetic fields in white dwarfs and neutron stars.

Read this paper on arXiv…

E. Savalle, A. Bourgoin, C. Poncin-Lafitte, et al.
Tue, 18 Apr 23
60/80

Comments: 18 pages, 6 figures

Microwave Observations of Venus with CLASS [EPA]

http://arxiv.org/abs/2304.07367


We report on the disk-averaged absolute brightness temperatures of Venus measured at four microwave frequency bands with the Cosmology Large Angular Scale Surveyor (CLASS). We measure temperatures of 432.3 $\pm$ 2.8 K, 355.6 $\pm$ 1.3 K, 317.9 $\pm$ 1.7 K, and 294.7 $\pm$ 1.9 K for frequency bands centered at 38.8, 93.7, 147.9, and 217.5 GHz, respectively. We do not observe any dependence of the measured brightness temperatures on solar illumination in any of the four frequency bands. A joint analysis of our measurements with lower frequency Very Large Array (VLA) observations suggests relatively warmer ($\sim$ 7 K higher) mean atmospheric temperatures and lower abundances of microwave continuum absorbers than those inferred from prior radio occultation measurements.

Read this paper on arXiv…

S. Dahal, M. Brewer, A. Akins, et al.
Tue, 18 Apr 23
65/80

Comments: 10 pages, 3 figures, submitted to PSJ

GREX-PLUS Science Book [CEA]

http://arxiv.org/abs/2304.08104


GREX-PLUS (Galaxy Reionization EXplorer and PLanetary Universe Spectrometer) is a mission candidate for a JAXA strategic L-class mission to be launched in the 2030s. Its primary sciences are two-fold: galaxy formation and evolution, and planetary system formation and evolution. The GREX-PLUS spacecraft will carry a telescope with a 1.2 m primary mirror aperture cooled down to 50 K. Two science instruments will be onboard: a wide-field camera in the 2-8 $\mu$m wavelength band and a high-resolution spectrometer with a wavelength resolution of 30,000 in the 10-18 $\mu$m band. The GREX-PLUS wide-field camera aims to detect the first generation of galaxies at redshift $z>15$. The GREX-PLUS high-resolution spectrometer aims to identify the location of the water “snow line” in proto-planetary disks. Both instruments will provide unique data sets for a broad range of scientific topics, including galaxy mass assembly, the origin of supermassive black holes, infrared background radiation, molecular spectroscopy in the interstellar medium, transit spectroscopy of exoplanet atmospheres, planetary atmospheres in the Solar System, and so on.

Read this paper on arXiv…

G. Team, A. Inoue, Y. Harikane, et al.
Tue, 18 Apr 23
77/80

Comments: This document is the first version of a collection of scientific themes which can be achieved with GREX-PLUS. Each section in Chapters 2 and 3 is based on the presentation at the GREX-PLUS Science Workshop held on 24-25 March, 2022 at Waseda University

A statistical model of stellar variability. I. FENRIR: a physics-based model of stellar activity, and its fast Gaussian process approximation [SSA]

http://arxiv.org/abs/2304.08489


The detection of terrestrial planets by radial velocity and photometry is hindered by the presence of stellar signals. Those are often modeled as stationary Gaussian processes, whose kernels are based on qualitative considerations that do not fully leverage the existing physical understanding of stars. Our aim is to build a formalism that allows us to transfer the knowledge of stellar activity into practical data analysis methods. In particular, we aim at obtaining kernels with physical parameters. This has two purposes: modelling signals of stellar origin better, to find smaller exoplanets, and extracting information about the star from the statistical properties of the data. We consider several observational channels, such as photometry, radial velocity, and activity indicators, and build a model called FENRIR to represent their stochastic variations due to stellar surface inhomogeneities. We compute analytically the covariance of this multi-channel stochastic process and implement it in the S+LEAF framework to reduce the cost of likelihood evaluations from $O(N^3)$ to $O(N)$. We also compute analytically higher-order cumulants of our FENRIR model, which quantify its non-Gaussianity. We obtain a fast Gaussian process framework with physical parameters, which we apply to the HARPS-N and SORCE observations of the Sun, and constrain a solar inclination compatible with the viewing geometry. We then discuss the application of our formalism to granulation. We exhibit non-Gaussianity in solar HARPS radial velocities, and argue that information is lost when stellar activity signals are assumed to be Gaussian. We finally discuss the origin of phase shifts between RVs and indicators, and how to build relevant activity indicators. We provide an open-source implementation of the FENRIR Gaussian process model with a Python interface.

Read this paper on arXiv…

N. Hara and J. Delisle
Tue, 18 Apr 23
79/80

Comments: Submitted to Astronomy \& Astrophysics

How the Moon Impacts Subsea Communication Cables [CL]

http://arxiv.org/abs/2304.06905


We report tidal-induced latency variations on a transpacific subsea cable. Week-long recordings with a precision phase meter suggest length changes in the sub-meter range caused by the Poisson effect. The described method adds to the toolbox of the new field of “optical oceanic seismology”.

Read this paper on arXiv…

L. Moeller
Mon, 17 Apr 23
2/51

Comments: N/A

Radio Galaxy Zoo EMU: Towards a Semantic Radio Galaxy Morphology Taxonomy [GA]

http://arxiv.org/abs/2304.07171


We present a novel natural language processing (NLP) approach to deriving plain-English descriptors for science cases otherwise restricted by obfuscating technical terminology. We apply this approach to address the limitations of common radio galaxy morphology classifications. We experimentally derive a set of semantic tags for the Radio Galaxy Zoo EMU (Evolutionary Map of the Universe) project and the wider astronomical community. We collect 8,486 plain-English annotations of radio galaxy morphology, from which we derive a taxonomy of plain-English tags. The result is an extensible framework which is more flexible, more easily communicated, and more sensitive to rare feature combinations that are indescribable using the current framework of radio astronomy classifications.

Read this paper on arXiv…

M. Bowles, H. Tang, E. Vardoulaki, et al.
Mon, 17 Apr 23
4/51

Comments: 17 pages, 11 Figures, Accepted at MNRAS

Anatomy of parameter-estimation biases in overlapping gravitational-wave signals [IMA]

http://arxiv.org/abs/2304.06734


In future gravitational-wave (GW) detections, a large number of overlapping GW signals will appear in the data stream of detectors. When extracting information from one signal, the presence of other signals can cause large parameter estimation biases. Using the Fisher matrix (FM), we develop a bias analysis procedure to investigate how each parameter of the other signals affects the inference biases. Taking two-signal overlap as an example, we show in detail and quantitatively that the biases essentially originate from the overlap of the frequency evolution. Furthermore, we find that the behaviors of the correlation coefficients between the parameters of the two signals are similar to those of the biases. Both can be used to characterize the influence between signals. We also corroborate the bias results of the FM method with a full Bayesian analysis. Our results provide powerful guidance for parameter estimation, and the analysis methodology is easy to generalize.
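The leading-order FM bias estimate for an unmodeled overlapping signal is commonly written $\Delta\theta_i = (F^{-1})_{ij}\,(\partial_j h \,|\, \delta h)$, where $\delta h$ is the residual signal. A toy white-noise, sinusoid version (the waveforms and parameter values are invented, not the paper's):

```python
import numpy as np

t = np.linspace(0, 10, 4000)
sigma = 1.0
inner = lambda a, b: np.sum(a * b) / sigma**2       # white-noise inner product

def h(A, f, phi):
    """Toy template: a sinusoid with amplitude, frequency, phase."""
    return A * np.sin(2 * np.pi * f * t + phi)

theta = np.array([1.0, 1.00, 0.3])                  # fitted signal (A, f, phi)
overlap = h(0.5, 1.02, 1.0)                         # unmodeled overlapping signal

# central-difference derivatives of the template w.r.t. each parameter
eps = 1e-6
def dh(i):
    tp, tm = theta.copy(), theta.copy()
    tp[i] += eps
    tm[i] -= eps
    return (h(*tp) - h(*tm)) / (2 * eps)

grads = [dh(i) for i in range(3)]
F = np.array([[inner(gi, gj) for gj in grads] for gi in grads])
# leading-order parameter biases induced by the overlapping signal
bias = np.linalg.solve(F, [inner(g, overlap) for g in grads])
print("biases (A, f, phi):", bias)
```

Because the second signal is close in frequency, its projection onto the template derivatives is large, which is the "overlap of the frequency evolution" effect the abstract describes.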

Read this paper on arXiv…

Z. Wang, D. Liang, J. Zhao, et al.
Mon, 17 Apr 23
5/51

Comments: 29 pages, 13 figures

Revisiting the trajectory of the interstellar object 'Oumuamua: preference for a radially directed non-gravitational acceleration? [EPA]

http://arxiv.org/abs/2304.06964


I present a re-analysis of the available observational constraints on the trajectory of ‘Oumuamua, the first confirmed interstellar object discovered in the solar system. ‘Oumuamua passed through the inner solar system on a hyperbolic (i.e., unbound) trajectory. Its discovery occurred after perihelion passage, near the time of its closest approach to Earth. After being observable for approximately four months, the object became too faint and was lost at a heliocentric distance of around 3 au. Intriguingly, analysis of the trajectory of ‘Oumuamua revealed that a dynamical model including only gravitational accelerations does not provide a satisfactory fit to the data, and a non-gravitational term must be included. The detected non-gravitational acceleration is compatible with either solar radiation pressure or recoil due to outgassing. It has, however, proved challenging to reconcile either interpretation with the existing quantitative models of such effects without postulating unusual physical properties for ‘Oumuamua (such as extremely low density, unusual geometry, or non-standard chemistry). My analysis independently confirms the detection of the non-gravitational acceleration. After comparing several possible parametrizations of this effect, I find a strong preference for a radially directed non-gravitational acceleration, pointing away from the Sun, and a moderate preference for a power-law scaling with the heliocentric distance, with an exponent between 1 and 2. These results provide valuable constraints on the physical mechanism behind the effect; a conclusive identification, however, is probably not possible on the basis of dynamical arguments alone.
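The preferred power-law parametrization, $a(r) = A\,(r/r_0)^{-n}$, can be illustrated with a minimal exponent-recovery sketch on synthetic data; only the exponent range 1-2 comes from the abstract, and the amplitude, noise level, and radial coverage below are invented:

```python
import numpy as np

rng = np.random.default_rng(7)
r = np.linspace(0.3, 3.0, 40)                    # heliocentric distance, au (toy)
A_true, n_true = 2.5e-6, 1.5                     # hypothetical amplitude and exponent
# synthetic non-gravitational acceleration samples with 2% scatter
a = A_true * r ** (-n_true) * (1 + 0.02 * rng.standard_normal(r.size))

# a power law is a straight line in log-log space, so a linear fit recovers n
slope, intercept = np.polyfit(np.log(r), np.log(a), 1)
print(f"fitted exponent n = {-slope:.2f} (true {n_true})")
```

In the actual analysis the exponent is constrained jointly with the full orbital fit rather than from acceleration samples, but the log-log linearity is the same property being exploited.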

Read this paper on arXiv…

F. Spada
Mon, 17 Apr 23
16/51

Comments: MATLAB code will be shared upon reasonable request to the author. Comments are welcome!

Lossy Compression of Large-Scale Radio Interferometric Data [IMA]

http://arxiv.org/abs/2304.07050


This work proposes to reduce visibility data volume using a baseline-dependent lossy compression technique that preserves smearing at the edges of the field of view. We exploit the rank of a matrix and the fact that a low-rank approximation can describe the raw visibility data as a sum of basic components, where each basic component corresponds to a specific Fourier component of the sky distribution. As such, the entire visibility data set is represented as a collection of data matrices from baselines, instead of a single tensor. The proposed methods are formulated as follows. Given a large dataset of the entire visibility data, the first algorithm, named $simple~SVD$, projects the data into a regular sampling space of rank-$r$ data matrices. In this space, the data for all the baselines have the same rank, which makes the compression factor equal across all baselines. The second algorithm, named $BDSVD$, projects the data into an irregular sampling space of rank-$r_{pq}$ data matrices. The subscript $pq$ indicates that the rank of the data matrix varies across baselines $pq$, which makes the compression factor baseline-dependent. MeerKAT and the European Very Long Baseline Interferometry Network are used as reference telescopes to evaluate and compare the performance of the proposed methods against traditional methods, such as traditional averaging and baseline-dependent averaging (BDA). For the same spatial resolution threshold, both $simple~SVD$ and $BDSVD$ achieve compression factors two orders of magnitude higher than traditional averaging and BDA. At the same space-saving rate, there is no decrease in spatial resolution, and there is a reduction in the noise variance in the data, which improves the S/N by over $1.5$ dB at the edges of the field of view.
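The low-rank idea behind $simple~SVD$ can be sketched in a few lines of NumPy. This is a toy illustration, not the authors' code: the visibility matrix below is a hypothetical time-by-frequency grid for one baseline, and the compression-factor bookkeeping is the generic count of stored SVD numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_f, r = 200, 256, 8
# toy visibilities: a few Fourier-like (rank-1) components plus thermal noise
t = np.linspace(0, 1, n_t)[:, None]
f = np.linspace(0, 1, n_f)[None, :]
vis = sum(np.exp(2j * np.pi * k * (t + f)) for k in range(4))
vis = vis + 0.01 * rng.standard_normal((n_t, n_f))

# truncated SVD: keep only the first r basic components
U, s, Vh = np.linalg.svd(vis, full_matrices=False)
approx = (U[:, :r] * s[:r]) @ Vh[:r]

rel_err = np.linalg.norm(vis - approx) / np.linalg.norm(vis)
# stored numbers: r*(n_t + n_f + 1) versus the full n_t*n_f grid
cf = (n_t * n_f) / (r * (n_t + n_f + 1))
print(f"rank {r}: relative error {rel_err:.1e}, compression factor {cf:.1f}x")
```

$BDSVD$ would repeat this per baseline $pq$ with a rank $r_{pq}$ chosen from the baseline length, so long baselines (which carry fine spatial structure) keep more components than short ones.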

Read this paper on arXiv…

M. Atemkeng, S. Perkins, E. Seck, et al.
Mon, 17 Apr 23
19/51

Comments: N/A

Optical characteristics and capabilities of the successive versions of Meudon and Haute Provence H$α$ heliographs (1954-2004) [IMA]

http://arxiv.org/abs/2304.07055


H$\alpha$ heliographs are imaging instruments designed to produce monochromatic images of the solar chromosphere at fast cadence (60 s or less). They are designed to monitor efficiently dynamic phenomena of solar activity, such as flares or material ejections. Meudon and Haute Provence observatories started systematic observations with Lyot filters in the framework of the International Geophysical Year (1957). This technology evolved several times until 1985, with tunable filters allowing observations alternately in the line wings and core (variable wavelength). More than 6 million images were produced during 50 years, mostly on 35 mm films (catalogs are available on-line). We present in this paper the optical characteristics and capabilities of the successive versions of the H$\alpha$ heliographs in operation between 1954 and 2004, and briefly describe the new heliograph (MeteoSpace), which will be commissioned in 2023 at Calern observatory.

Read this paper on arXiv…

J. Malherbe
Mon, 17 Apr 23
25/51

Comments: N/A

$\texttt{LIMpy}$: A Semi-analytic Approach to Simulating Multi-line Intensity Maps at Millimetre Wavelengths [GA]

http://arxiv.org/abs/2304.06748


Mapping of multiple lines, such as the fine-structure emission from [CII] (157.7 $\mu \text{m}$) and [OIII] (52 \& 88.4 $\mu \text{m}$) and the rotational emission lines from CO, is of particular interest for upcoming line intensity mapping (LIM) experiments at millimetre wavelengths, due to their brightness features. Several upcoming experiments aim to cover a broad range of scientific goals, from detecting signatures of the epoch of reionization to the physics of star formation and its role in galaxy evolution. In this paper, we develop a semi-analytic approach to modelling line strengths as functions of the star formation rate (SFR) or infrared (IR) luminosity based on observations of local and high-z galaxies. This package, $\texttt{LIMpy}$ (Line Intensity Mapping in Python), estimates the intensity and power spectra of [CII], [OIII], and CO rotational transition lines from the $J$-levels (1-0) to (13-12), based both on analytic formalism and on simulations. We develop a relation among halo mass, SFR, and multi-line intensities that permits us to construct a generic formula for the evolution of several line strengths up to $z \sim 10$. We implement a variety of star formation models and multi-line luminosity relations to estimate the astrophysical uncertainties on the intensity power spectrum of these lines. As a demonstration, we predict the signal-to-noise ratio of [CII] detection for an EoR-Spec-like instrument on the Fred Young Submillimeter Telescope (FYST). Furthermore, the ability to use any halo catalogue allows the $\texttt{LIMpy}$ code to be easily integrated into existing simulation pipelines, providing a flexible tool to study intensity mapping in the context of complex galaxy formation physics.

Read this paper on arXiv…

A. Roy, D. Valentín-Martínez, K. Wang, et al.
Mon, 17 Apr 23
29/51

Comments: 19 pages, 10 figures, comments are welcome

Results from the ARIANNA high-energy neutrino detector [IMA]

http://arxiv.org/abs/2304.07179


The ARIANNA in-ice radio detector explores the detection of UHE neutrinos with shallow detector stations on the Ross Ice Shelf and the South Pole. Here, we present recent results that lay the foundation for future large-scale experiments. We show a limit on the UHE neutrino flux derived from ARIANNA data, measurements of the more abundant air showers, results from in-situ measurement campaigns, a study of a potential background from internal reflection layers, and give an outlook of future detector improvements.

Read this paper on arXiv…

C. Glaser
Mon, 17 Apr 23
30/51

Comments: Proceedings of the 9th ARENA workshop 2022

A brief History of Image Sensors in the Optical [IMA]

http://arxiv.org/abs/2304.07121


Image sensors, most notably the Charge Coupled Device (CCD), have revolutionized observational astronomy as perhaps the most important innovation after photography. Since the 50th anniversary of the invention of the CCD passed in 2019, it is time to review the development of detectors for the visible wavelength range: starting with the discovery of the photoelectric effect and the first experiments to utilize it for the photometry of stars at Sternwarte Babelsberg in 1913, through the invention of the CCD and its development at the Jet Propulsion Laboratory, to the high-performance CCD and CMOS imagers that are available off-the-shelf today.

Read this paper on arXiv…

M. Roth
Mon, 17 Apr 23
34/51

Comments: 9 pages, 10 figures. Presented at SDW2022, accepted for publication in Special Issue of Astronomische Nachrichten

CAPP Axion Search Experiments with Quantum Noise Limited Amplifiers [CL]

http://arxiv.org/abs/2304.07222


The axion is expected to solve the strong CP problem of quantum chromodynamics and is one of the leading candidates for dark matter. CAPP in South Korea runs several axion search experiments based on cavity haloscopes in the frequency range of 1-6 GHz. The main effort focuses on operating the experiments with the highest possible sensitivity. This requires maintaining the haloscopes at the lowest physical temperatures, in the mK range, and using low-noise components to amplify the weak axion signal. We report the development and operation of low-noise amplifiers for 5 haloscope experiments targeting different frequency ranges. The amplifiers show noise temperatures approaching the quantum limit.
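For context on "approaching the quantum limit": a phase-insensitive linear amplifier's noise temperature is bounded below by roughly $T_q = hf/k_B$. A two-line computation over the quoted 1-6 GHz band, using only standard physical constants (no CAPP-specific numbers):

```python
# standard quantum limit on amplifier noise temperature: T_q = h f / k_B
h = 6.62607015e-34      # Planck constant, J s
k_B = 1.380649e-23      # Boltzmann constant, J/K

for f_GHz in (1, 2, 4, 6):
    T_q = h * f_GHz * 1e9 / k_B
    print(f"{f_GHz} GHz: T_q = {T_q * 1e3:.0f} mK")
```

This gives roughly 48 mK per GHz, i.e. tens to a few hundred mK across the band, which is why the haloscopes themselves must be held at mK-range physical temperatures.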

Read this paper on arXiv…

S. Uchaikin, B. Ivanov, J. Kim, et al.
Mon, 17 Apr 23
40/51

Comments: 6 pages, 7 figures, 29th International Conference on Low Temperature Physics, August 18-24, 2022, Sapporo, Japan

Performance of TES X-Ray Microcalorimeters Designed for 14.4-keV Solar Axion Search [IMA]

http://arxiv.org/abs/2304.07068


A 57Fe nucleus in the solar core could emit a 14.4-keV monochromatic axion through the M1 transition if the axion, a hypothetical elementary particle, exists to solve the strong CP problem. Transition edge sensor (TES) X-ray microcalorimeters can detect such axions very efficiently if they are converted back into photons by a 57Fe absorber. We have designed and produced a dedicated TES array with 57Fe absorbers for the solar axion search. The iron absorber is set next to the TES, keeping a certain distance to reduce the effect of iron magnetization on the spectroscopic performance. A gold thermal transfer strap connects them. A sample pixel irradiated by a 55Fe source detected 698 pulses. In comparison with thermal simulations, we consider that the pulses include events produced either in the iron absorber or in the gold strap, at a fraction dependent on the absorption rate of each material. Furthermore, photons deposited on the iron absorber are detected through the strap as intended. The identification of all events has yet to be completed. However, we successfully operated the TES with this unique design under iron magnetization for the first time.

Read this paper on arXiv…

Y. Yagi, R. Konno, T. Hayash, et al.
Mon, 17 Apr 23
48/51

Comments: 10 pages, 6 figures, published in Journal of Low Temperature Physics on 4 February 2023

AutoTAB: Automatic Tracking Algorithm for Bipolar Magnetic Regions [SSA]

http://arxiv.org/abs/2304.06615


Bipolar Magnetic Regions (BMRs) provide crucial information about solar magnetism. They exhibit varying morphology and magnetic properties throughout their lifetime, and studying these properties can provide valuable insights into the workings of the solar dynamo. The majority of previous studies have counted every detected BMR as a new one and have not been able to study the full life history of each BMR. To address this issue, we have developed an Automatic Tracking Algorithm (AutoTAB) for BMRs that tracks them for their entire lifetime or throughout their disk passage. AutoTAB uses the binary maps of detected BMRs to automatically track the regions. This is done by differentially rotating the binary maps of the detected regions and checking for overlaps between them. In this first article of the project, we provide a detailed description of the workings of the algorithm and evaluate its strengths and weaknesses. We also compare its performance with that of other existing tracking techniques. AutoTAB excels at tracking even small features, and it successfully tracks 9152 BMRs over the last two solar cycles (1996-2020), providing a comprehensive dataset that depicts the evolution of various properties for each tracked region. The tracked BMRs follow the familiar properties of solar cycles, represented through the butterfly diagram, except that small BMRs appear at all phases of the solar cycle and show a weak latitudinal dependence. Finally, we discuss the possibility of adapting our algorithm to other datasets and expanding the technique to track other solar features in the future.
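The tracking step described above (differentially rotate the earlier detection, then check for overlap) can be sketched with a toy pixel-list version. AutoTAB itself works on binary maps, and the rotation coefficients, tolerance, and region below are approximate, hypothetical values:

```python
import numpy as np

def diff_rot_deg_per_day(lat_deg):
    """Approximate solar differential rotation profile (hypothetical coefficients)."""
    s = np.sin(np.radians(lat_deg))
    return 14.38 - 2.96 * s**2        # deg/day, slower at higher latitude

def advance_region(lons, lats, dt_days):
    """Differentially rotate a set of (lon, lat) pixels forward in time."""
    return (lons + diff_rot_deg_per_day(lats) * dt_days) % 360.0, lats

def overlap_fraction(lons_a, lats_a, lons_b, lats_b, tol=1.0):
    """Fraction of pixels in A landing within `tol` degrees of any pixel in B."""
    hits = 0
    for lo, la in zip(lons_a, lats_a):
        d_lon = np.minimum(np.abs(lons_b - lo), 360.0 - np.abs(lons_b - lo))
        if np.any((d_lon < tol) & (np.abs(lats_b - la) < tol)):
            hits += 1
    return hits / len(lons_a)

rng = np.random.default_rng(2)
lons0 = np.array([100.0, 101.0, 102.0])      # toy BMR pixels on day 0
lats0 = np.array([20.0, 20.0, 21.0])
# the same region observed one day later, with small measurement scatter
lons_obs = lons0 + diff_rot_deg_per_day(lats0) * 1.0 + 0.1 * rng.standard_normal(3)

pred_lons, pred_lats = advance_region(lons0, lats0, 1.0)
frac = overlap_fraction(pred_lons, pred_lats, lons_obs, lats0)
print(f"overlap fraction after differential rotation: {frac:.2f}")
```

A detection on the later map would be identified as the same BMR when the overlap fraction exceeds some threshold; otherwise it starts a new track.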

Read this paper on arXiv…

A. Sreedevi, B. Jha, B. Karak, et al.
Fri, 14 Apr 23
2/64

Comments: 14 pages including 9 figures; Submitted in ApJS; Comments are welcome

Prospects for detecting anisotropies and polarization of the stochastic gravitational wave background with ground-based detectors [CL]

http://arxiv.org/abs/2304.06640


We build an analytical framework to study the observability of anisotropies and a net chiral polarization of the Stochastic Gravitational Wave Background (SGWB) with a generic network of ground-based detectors. We apply this formalism to perform a Fisher forecast of the performance of a network consisting of the current interferometers (LIGO, Virgo and KAGRA) and planned third-generation ones, such as the Einstein Telescope and Cosmic Explorer. Our results yield limits on the observability of anisotropic modes, spanning across noise- and signal-dominated regimes. We find that if the isotropic component of the SGWB has an amplitude close to the current limit, third-generation interferometers with an observation time of $10$ years can measure multipoles (in a spherical harmonic expansion) up to $\ell = 8$ with ${\cal O }\left( 10^{-3} – 10^{-2} \right)$ accuracy relative to the isotropic component, and an ${\cal O }\left( 10^{-3} \right)$ amount of net polarization. For weaker signals, the accuracy worsens as roughly the inverse of the SGWB amplitude.

Read this paper on arXiv…

G. Mentasti, C. Contaldi and M. Peloso
Fri, 14 Apr 23
9/64

Comments: 40 pages, 7 figures, prepared for submission to JCAP

Fast emulation of cosmological density fields based on dimensionality reduction and supervised machine-learning [CEA]

http://arxiv.org/abs/2304.06099


N-body simulations are the most powerful method to study the non-linear evolution of large-scale structure. However, they require large amounts of computational resources, making their direct adoption unfeasible in scenarios that require broad explorations of parameter spaces. In this work, we show that it is possible to perform fast dark matter density field emulations with competitive accuracy using simple machine-learning approaches. We build an emulator based on dimensionality reduction and machine-learning regression, combining simple Principal Component Analysis and supervised learning methods. For the estimations with a single free parameter, we train on the dark matter density parameter, $\Omega_m$, while for emulations with two free parameters, we train on a range of $\Omega_m$ and redshift. The method first projects a grid of simulations onto a given basis; then, a machine-learning regression is trained on this projected grid. Finally, new density cubes for different cosmological parameters can be estimated without relying directly on new N-body simulations, by predicting and de-projecting the basis coefficients. We show that the proposed emulator can generate density cubes at non-linear cosmological scales with density distributions within a few percent of the corresponding N-body simulations. The method enables gains of three orders of magnitude in CPU run times compared to performing a full N-body simulation, while reproducing the power spectrum and bispectrum within $\sim 1\%$ and $\sim 3\%$, respectively, for the single-free-parameter emulation and $\sim 5\%$ and $\sim 15\%$ for two free parameters. This can significantly accelerate the generation of density cubes for a wide variety of cosmological models, opening the door to previously unfeasible applications, such as parameter and model inference at full survey scales, as for the ESA/NASA Euclid mission.

Read this paper on arXiv…

M. Conceição, A. Krone-Martins, A. Silva, et al.
Fri, 14 Apr 23
12/64

Comments: 10 pages, 6 figures. To be submitted to A&A. Comments are welcome!

Quasi Real-Time Autonomous Satellite Detection and Orbit Estimation [IMA]

http://arxiv.org/abs/2304.06227


A method of near real-time detection and tracking of resident space objects (RSOs) using a convolutional neural network (CNN) and linear quadratic estimator (LQE) is proposed. Advances in machine learning architecture allow the use of low-power/cost embedded devices to perform complex classification tasks. In order to reduce the costs of tracking systems, a low-cost embedded device will be used to run a CNN detection model for RSOs in unresolved images captured by a gray-scale camera and small telescope. Detection results computed in near real-time are then passed to an LQE to compute tracking updates for the telescope mount, resulting in a fully autonomous method of optical RSO detection and tracking. Keywords: Space Domain Awareness, Neural Networks, Real-Time, Object Detection, Embedded Systems.
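
A linear quadratic estimator of the kind described is, in its simplest form, a Kalman filter fusing noisy detections into smoothed mount updates. The sketch below is a generic 1D constant-velocity filter, not the authors' implementation; the noise covariances are invented:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])              # only position (pixel offset) is measured
Q = 1e-4 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.25]])                  # detection noise covariance (assumed)

def kalman_step(x, P, z):
    # Predict the state forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the CNN-detected position z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(1)
x, P = np.zeros(2), np.eye(2)
for t in range(100):
    z = np.array([2.0 * t * dt]) + rng.normal(0, 0.3, 1)   # noisy detection
    x, P = kalman_step(x, P, z)
# x now holds smoothed position and velocity estimates for the mount update.
```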

Read this paper on arXiv…

J. Jordan, D. Posada, M. Gillette, et al.
Fri, 14 Apr 23
25/64

Comments: SPIE Defense and Commercial 2023, Orlando, FL

Cosmology with one galaxy? — The ASTRID model and robustness [CEA]

http://arxiv.org/abs/2304.06084


Recent work has pointed out the potential existence of a tight relation between the cosmological parameter $\Omega_{\rm m}$, at fixed $\Omega_{\rm b}$, and the properties of individual galaxies in state-of-the-art cosmological hydrodynamic simulations. In this paper, we investigate whether such a relation also holds for galaxies from simulations run with a different code and distinct subgrid physics: Astrid. We find that, in this case too, neural networks are able to infer the value of $\Omega_{\rm m}$ with $\sim10\%$ precision from the properties of individual galaxies while accounting for astrophysics uncertainties as modeled in CAMELS. This tight relationship is present at all considered redshifts, $z\leq3$, and the stellar mass, the stellar metallicity, and the maximum circular velocity are among the most important galaxy properties behind the relation. In order to use this method with real galaxies, one needs to quantify its robustness: the accuracy of the model when tested on galaxies generated by codes different from the one used for training. We quantify the robustness of the models by testing them on galaxies from four different codes: IllustrisTNG, SIMBA, Astrid, and Magneticum. We show that the models perform well on a large fraction of the galaxies, but fail dramatically on a small fraction of them. Removing these outliers significantly improves the accuracy of the models across simulation codes.
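
As a toy illustration of the inference (with invented correlations, not CAMELS data), a small neural network can regress $\Omega_{\rm m}$ from per-galaxy properties:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Toy galaxies: stand-ins for stellar mass, metallicity and maximum
# circular velocity, each weakly (and artificially) tied to Omega_m.
n = 2000
omega_m = rng.uniform(0.1, 0.5, n)
galaxies = np.column_stack([
    10.0 + 2.0 * omega_m + rng.normal(0, 0.1, n),   # "stellar mass"
    -1.0 + omega_m + rng.normal(0, 0.05, n),        # "metallicity"
    200.0 * omega_m + rng.normal(0, 5.0, n),        # "V_max"
])

net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 32),
                                 max_iter=3000, random_state=0))
net.fit(galaxies[:1500], omega_m[:1500])

pred = net.predict(galaxies[1500:])
rel_err = np.mean(np.abs(pred - omega_m[1500:]) / omega_m[1500:])
```

The robustness test in the paper corresponds to evaluating `net` on galaxies drawn from a different simulator than the training set.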

Read this paper on arXiv…

N. Echeverri, F. Villaescusa-Navarro, C. Chawak, et al.
Fri, 14 Apr 23
34/64

Comments: 16 pages, 12 figures

Precision measurement of the index of refraction of deep glacial ice at radio frequencies at Summit Station, Greenland [IMA]

http://arxiv.org/abs/2304.06181


Glacial ice is used as a target material for the detection of ultra-high energy neutrinos, by measuring the radio signals that are emitted when those neutrinos interact in the ice. Thanks to the large attenuation length at radio frequencies, these signals can be detected over distances of several kilometers. One experiment taking advantage of this is the Radio Neutrino Observatory Greenland (RNO-G), currently under construction at Summit Station, near the apex of the Greenland ice sheet. These experiments require a thorough understanding of the dielectric properties of ice at radio frequencies. Towards this goal, calibration campaigns have been undertaken at Summit, during which we recorded radio reflections off internal layers in the ice sheet. Using data from the nearby GISP2 and GRIP ice cores, we show that these reflectors can be associated with features in the ice conductivity profiles; we use this connection to determine the index of refraction of the bulk ice as n=1.778 +/- 0.006.
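
The measured index fixes the radio propagation speed in the bulk ice, so a two-way echo time converts directly to reflector depth. The 20 microsecond example below is ours, and the uniform-index assumption ignores the lower-density firn near the surface:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s
n = 1.778           # measured bulk index of refraction of the deep ice

def reflector_depth(two_way_time_s, n_ice=n):
    """Depth of an internal reflecting layer from a two-way radio travel
    time, assuming a uniform bulk index (the firn correction is ignored)."""
    return C * two_way_time_s / (2.0 * n_ice)

depth = reflector_depth(20e-6)   # a 20 microsecond echo: roughly 1.7 km
```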

Read this paper on arXiv…

J. Aguilar, P. Allison, D. Besson, et al.
Fri, 14 Apr 23
35/64

Comments: N/A

Full-frame data reduction method: a data mining tool to detect the potential variations in optical photometry [SSA]

http://arxiv.org/abs/2304.06207


A Synchronous Photometry Data Extraction (SPDE) program, which indiscriminately monitors all stars appearing in the same field of view of an astronomical image, is developed by integrating several Astropy-affiliated packages to make full use of time series observed by traditional small/medium-aperture ground-based telescopes. The complete full-frame stellar photometry data reductions implemented for the two time series of cataclysmic variables, RX J2102.0+3359 and Paloma J0524+4244, produce 363 and 641 optimal light curves, respectively. A cross-identification with SIMBAD finds 23 known stars, of which 16 are red-giant/horizontal-branch stars, 2 are W UMa-type eclipsing variables, 2 are program stars, one is an X-ray source, and 2 are Asteroid Terrestrial-impact Last Alert System variables. Based on the data products of the SPDE program, a follow-up Light Curve Analysis (LCA) program identifies 32 potential variable light curves, of which 18 are from the time series of RX J2102.0+3359 and 14 are from that of Paloma J0524+4244. They are preliminarily separated into periodic, transient, and peculiar types. By querying 58 VizieR online data catalogs, their physical parameters and multi-band brightnesses spanning from X-ray to radio are compiled for future analysis.
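
A numpy-only sketch of the basic full-frame idea (the actual SPDE program builds on Astropy-affiliated packages; the star positions, fluxes, and aperture radius here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 64x64 frame: flat sky of 100 counts plus two Gaussian stars.
yy, xx = np.mgrid[0:64, 0:64]
image = rng.normal(100.0, 0.5, (64, 64))
stars = [(20, 20, 500.0), (40, 45, 300.0)]      # (row, col, total flux)
for r, c, flux in stars:
    psf = np.exp(-((yy - r) ** 2 + (xx - c) ** 2) / (2 * 1.5 ** 2))
    image += flux * psf / (2 * np.pi * 1.5 ** 2)

def aperture_flux(img, row, col, radius=5.0, sky=100.0):
    """Sky-subtracted counts inside a circular aperture."""
    mask = (yy - row) ** 2 + (xx - col) ** 2 <= radius ** 2
    return (img[mask] - sky).sum()

# "Full-frame" reduction: photometer every star in the field at once,
# yielding one light-curve point per star per image.
fluxes = [aperture_flux(image, r, c) for r, c, _ in stars]
```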

Read this paper on arXiv…

Z. Dai, H. Zhou and J. Cao
Fri, 14 Apr 23
39/64

Comments: 35 pages, 8 figures, accepted by RAA

Revisiting K2-233 spectroscopic time-series with multidimensional Gaussian Processes [EPA]

http://arxiv.org/abs/2304.06406


Detecting planetary signatures in radial velocity time-series of young stars is challenging due to their inherently strong stellar activity. However, it is possible to learn information about the properties of the stellar signal by using activity indicators measured from the same stellar spectra used to extract radial velocities. In this manuscript, we present a reanalysis of spectroscopic HARPS data of the young star K2-233, which hosts three transiting planets. We perform a multidimensional Gaussian Process regression on the radial velocity and the activity indicators to characterise the planetary Doppler signals. We demonstrate, for the first time on a real dataset, that the use of a multidimensional Gaussian Process can boost the precision with which we measure the planetary signals compared to a one-dimensional Gaussian Process applied to the radial velocities alone. We measure the semi-amplitudes of K2-233 b, c, and d as $1.31^{+0.81}_{-0.74}$, $1.81^{+0.71}_{-0.67}$, and $2.72^{+0.66}_{-0.70}$ m/s, which translate into planetary masses of $2.4^{+1.5}_{-1.3}$, $4.6^{+1.8}_{-1.7}$, and $10.3^{+2.4}_{-2.6}$, respectively. These new mass measurements make K2-233 d a valuable target for transmission spectroscopy observations with JWST. K2-233 is the only young system with two detected inner planets below the radius valley and a third outer planet above it. This makes it an excellent target to perform comparative studies, to inform our theories of planet evolution, formation, migration, and atmospheric evolution.

Read this paper on arXiv…

O. Barragán, E. Gillen, S. Aigrain, et al.
Fri, 14 Apr 23
45/64

Comments: Accepted for publication in MNRAS

Growing Pains: Understanding the Impact of Likelihood Uncertainty on Hierarchical Bayesian Inference for Gravitational-Wave Astronomy [IMA]

http://arxiv.org/abs/2304.06138


Observations of gravitational waves emitted by merging compact binaries have provided tantalising hints about stellar astrophysics, cosmology, and fundamental physics. However, the physical parameters describing the systems (mass, spin, distance) used to extract these inferences about the Universe are subject to large uncertainties. The current method of performing these analyses requires many Monte Carlo integrals to marginalise over the uncertainty in the properties of the individual binaries and the survey selection bias. These Monte Carlo integrals are subject to fundamental statistical uncertainties. Previous treatments of this statistical uncertainty have focused on ensuring that the precision of the inference is unaffected; however, these works have neglected the question of whether sufficient accuracy can also be achieved. In this work, we provide a practical exploration of the impact of uncertainty in our analyses and provide a suggested framework for verifying that astrophysical inferences made with the gravitational-wave transient catalogue are accurate. Applying our framework to models used by the LIGO-Virgo-KAGRA collaboration, we find that Monte Carlo uncertainty in estimating the survey selection bias is the limiting factor in our ability to probe narrow population models, and this will rapidly grow more problematic as the size of the observed population increases.
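
The core issue can be illustrated with a toy Monte Carlo estimate of a selection-bias-like integral: the estimate carries a statistical error set by the effective number of samples, and that error propagates into the hierarchical inference. The exponential "weights" below are a stand-in, not anything from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in importance weights from 10^4 injected signals; in the real
# analysis each weight involves the detection probability of an injection
# under the population model being tested.
weights = rng.exponential(scale=1.0, size=10_000)

mu = weights.mean()                                   # MC estimate of the integral
sigma = weights.std(ddof=1) / np.sqrt(weights.size)   # its statistical uncertainty
n_eff = weights.sum() ** 2 / (weights ** 2).sum()     # effective sample size

# A narrow population model concentrates the weights, shrinking n_eff and
# inflating sigma -- the failure mode highlighted in the paper.
```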

Read this paper on arXiv…

C. Talbot and J. Golomb
Fri, 14 Apr 23
52/64

Comments: 8 pages, 6 figures

Priors for symbolic regression [CL]

http://arxiv.org/abs/2304.06333


When choosing between competing symbolic models for a data set, a human will naturally prefer the “simpler” expression or the one which more closely resembles equations previously seen in a similar context. This suggests a non-uniform prior on functions, which is, however, rarely considered within a symbolic regression (SR) framework. In this paper we develop methods to incorporate detailed prior information on both functions and their parameters into SR. Our prior on the structure of a function is based on an $n$-gram language model, which is sensitive to the arrangement of operators relative to one another in addition to the frequency of occurrence of each operator. We also develop a formalism based on the Fractional Bayes Factor to treat numerical parameter priors in such a way that models may be fairly compared through the Bayesian evidence, and explicitly compare Bayesian, Minimum Description Length and heuristic methods for model selection. We demonstrate the performance of our priors relative to literature standards on benchmarks and a real-world dataset from the field of cosmology.
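
A minimal version of such a structure prior is a smoothed bigram model over operator sequences, trained on a toy corpus of previously seen expressions in prefix notation (the corpus and smoothing scheme here are ours, not the paper's):

```python
import math
from collections import Counter

# Toy corpus of previously seen expressions, prefix notation.
corpus = [["add", "mul", "x", "x", "c"],
          ["mul", "c", "pow", "x", "c"],
          ["add", "x", "mul", "c", "x"]]

unigrams, bigrams = Counter(), Counter()
for expr in corpus:
    for a, b in zip(["<s>"] + expr, expr):   # "<s>" marks the expression start
        unigrams[a] += 1
        bigrams[(a, b)] += 1

vocab = {tok for expr in corpus for tok in expr} | {"<s>"}

def log_prior(expr, alpha=1.0):
    """Add-alpha smoothed bigram log-probability of a candidate expression."""
    lp = 0.0
    for a, b in zip(["<s>"] + expr, expr):
        lp += math.log((bigrams[(a, b)] + alpha) /
                       (unigrams[a] + alpha * len(vocab)))
    return lp

familiar = log_prior(["add", "mul", "x", "x", "c"])   # resembles the corpus
unusual = log_prior(["pow", "pow", "pow", "x", "x"])  # unseen arrangement
```

Expressions whose operator arrangements resemble the corpus receive higher log-prior, which is exactly the preference the abstract describes.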

Read this paper on arXiv…

D. Bartlett, H. Desmond and P. Ferreira
Fri, 14 Apr 23
62/64

Comments: 8+2 pages, 2 figures. Submitted to The Genetic and Evolutionary Computation Conference (GECCO) 2023 Workshop on Symbolic Regression

Capella: A Space-only High-frequency Radio VLBI Network Formed by a Constellation of Small Satellites [IMA]

http://arxiv.org/abs/2304.06482


Very long baseline radio interferometry (VLBI) with ground-based observatories is limited by the size of Earth, the geographic distribution of antennas, and the transparency of the atmosphere. In this whitepaper, we present Capella, a tentative design of a space-only VLBI system. Using four small (<500 kg) satellites on two orthogonal polar low-Earth orbits, and single-band heterodyne receivers operating at frequencies around 690 GHz, the interferometer is able to achieve angular resolutions of approximately 7 microarcsec. Within a total observing time of three days, a near-complete uv plane coverage can be reached, with a 1-sigma point source sensitivity as good as about 6 mJy for an instantaneous bandwidth of 1 GHz. The required downlink data rates of >10 Gbps can be reached through near-infrared laser communication; depending on the actual downlink speed, one or multiple ground communication stations are necessary. We note that all key technologies required for the Capella system are already available, some of them off-the-shelf. Data can be correlated using dedicated versions of existing Fourier transform (FX) software correlators; dedicated routines will be needed to handle the effects of orbital motion, including relativistic corrections. With the specifications assumed in this whitepaper, Capella will be able to address a range of science cases, including: photon rings around supermassive black holes; the acceleration and collimation zones of plasma jets emitted from the vicinity of supermassive black holes; the chemical composition of accretion flows into active galactic nuclei through observations of molecular absorption lines; mapping supermassive binary black holes; the magnetic activity of stars; and nova eruptions of symbiotic binary stars – and, like any substantially new observing technique, has the potential for unexpected discoveries.
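
The quoted resolution follows from the diffraction limit $\theta \approx \lambda/B$. A back-of-envelope check, assuming a ~550 km orbit altitude (our number, not a whitepaper figure) so that the longest baseline is roughly one orbit diameter:

```python
import math

C = 299_792_458.0                   # m/s
wavelength = C / 690e9              # ~0.43 mm at 690 GHz

# Longest baseline for satellites on low-Earth orbits: roughly one orbit
# diameter; a ~550 km altitude is an assumption for illustration.
baseline = 2.0 * (6371e3 + 550e3)   # m

theta_rad = wavelength / baseline
theta_uas = math.degrees(theta_rad) * 3600e6   # microarcseconds
```

This lands near 6.5 microarcseconds, consistent with the "approximately 7 microarcsec" quoted in the abstract.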

Read this paper on arXiv…

S. Trippe, T. Jung, J. Lee, et al.
Fri, 14 Apr 23
63/64

Comments: 18 pages, 2 figures, 1 table. Whitepaper version 1.0. Living document, will be updated when necessary

Dynamics of space debris removal: A review [IMA]

http://arxiv.org/abs/2304.05709


Space debris, also known as “space junk,” presents a significant challenge for all space exploration activities, including those involving human-onboard spacecraft such as SpaceX’s Crew Dragon and the International Space Station. The amount of debris in space is rapidly increasing and poses a significant environmental concern. Various studies and research have been conducted on space debris capture mechanisms, including contact and contact-less capturing methods, in Earth’s orbits. While advancements in technology, such as telecommunications, weather forecasting, high-speed internet, and GPS, have benefited society, their improper and unplanned usage has led to the creation of debris. The growing amount of debris poses a threat of collision with the International Space Station, shuttle, and high-value satellites, and is present in different parts of Earth’s orbit, varying in size, shape, speed, and mass. As a result, capturing and removing space debris is a challenging task. This review article provides an overview of space debris statistics and specifications, and focuses on ongoing mitigation strategies, preventive measures, and statutory guidelines for removing and preventing debris creation, emphasizing the serious issue of space debris damage to space agencies and relevant companies.

Read this paper on arXiv…

M. Bigdeli, R. Srivastava and M. Scaraggi
Thu, 13 Apr 23
4/59

Comments: N/A

Finding AGN remnant candidates based on radio morphology with machine learning [GA]

http://arxiv.org/abs/2304.05813


Remnant radio galaxies represent the dying phase of radio-loud active galactic nuclei (AGN). Large samples of remnant radio galaxies are important for quantifying the radio galaxy life cycle. The remnants of radio-loud AGN can be identified in radio sky surveys based on their spectral index, or, complementarily, through visual inspection based on their radio morphology. However, this is extremely time-consuming when applied to the new large and sensitive radio surveys. Here we aim to reduce the amount of visual inspection required to find AGN remnants based on their morphology, through supervised machine learning trained on an existing sample of remnant candidates. For a dataset of 4107 radio sources, with angular sizes larger than 60 arcsec, from the LOw Frequency ARray (LOFAR) Two-Metre Sky Survey second data release (LoTSS-DR2), we started with 151 radio sources that were visually classified as ‘AGN remnant candidate’. We derived a wide range of morphological features for all radio sources from their corresponding Stokes-I images: from simple source catalogue-derived properties, to clustered Haralick-features, and self-organising map (SOM) derived morphological features. We trained a random forest classifier to separate the ‘AGN remnant candidates’ from the not yet inspected sources. The SOM-derived features and the total to peak flux ratio of a source are shown to be most salient to the classifier. We estimate that $31\pm5\%$ of sources with positive predictions from our classifier will be labelled ‘AGN remnant candidates’ upon visual inspection, while we estimate the upper bound of the $95\%$ confidence interval for ‘AGN remnant candidates’ in the negative predictions at $8\%$. Visual inspection of just the positive predictions reduces the number of radio sources requiring visual inspection by $73\%$.
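
A skeletal version of the classification step with scikit-learn (random stand-in features; the paper's actual inputs are SOM-derived features, clustered Haralick features, and catalogue properties such as the total-to-peak flux ratio):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Random stand-in morphological features; feature 0 carries most of the
# signal, so the forest should rank it as most important.
n, n_features = 600, 8
X = rng.normal(size=(n, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.5, n) > 1.2).astype(int)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced",
                             random_state=0)
clf.fit(X[:400], y[:400])

proba = clf.predict_proba(X[400:])[:, 1]   # candidate probability per source
importances = clf.feature_importances_     # which features are most salient
```

In the paper's workflow, only sources with high predicted probability are passed on for visual inspection, which is where the quoted 73% reduction comes from.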

Read this paper on arXiv…

R. Mostert, R. Morganti, M. Brienza, et al.
Thu, 13 Apr 23
18/59

Comments: 23 pages; accepted for publication in A&A

A Gaussian process cross-correlation approach to time delay estimation in active galactic nuclei [IMA]

http://arxiv.org/abs/2304.05536


We present a probabilistic cross-correlation approach to estimate time delays in the context of reverberation mapping (RM) of Active Galactic Nuclei (AGN). We reformulate the traditional interpolated cross-correlation method as a statistically principled model that delivers a posterior distribution for the delay. The method employs Gaussian processes as a model for observed AGN light curves. We describe the mathematical formalism and demonstrate the new approach using both simulated light curves and available RM observations. The proposed method delivers a posterior distribution for the delay that accounts for observational noise and the non-uniform sampling of the light curves. This feature allows us to fully quantify its uncertainty and propagate it to subsequent calculations of dependent physical quantities, e.g., black hole masses. It delivers out-of-sample predictions, which enable us to subject it to model selection, and it can calculate the joint posterior delay for more than two light curves. Because of the numerous advantages of our reformulation and the simplicity of its application, we anticipate that our method will find favour not only in the specialised community of RM, but in all fields where cross-correlation analysis is performed. We provide the algorithms and examples of their application as part of our Julia GPCC package.
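
The idea can be sketched (in a much-simplified form, not the authors' Julia GPCC code) by scoring trial delays with the Gaussian-process marginal likelihood of the merged, delay-shifted light curves:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Two toy light curves: the second is a noisy, delayed copy of the first.
true_delay = 5.0
f = lambda t: np.sin(2 * np.pi * t / 20.0)
t1 = np.sort(rng.uniform(0, 50, 40))
y1 = f(t1) + rng.normal(0, 0.05, 40)
t2 = np.sort(rng.uniform(0, 50, 40))
y2 = f(t2 - true_delay) + rng.normal(0, 0.05, 40)

# Shift the second curve by each trial delay, merge, and score the merged
# data with the marginal likelihood of a single latent Gaussian process.
kernel = RBF(length_scale=5.0) + WhiteKernel(noise_level=0.05 ** 2)
delays = np.linspace(0.0, 10.0, 21)
logL = np.array([
    GaussianProcessRegressor(kernel=kernel, optimizer=None)
    .fit(np.concatenate([t1, t2 - d])[:, None], np.concatenate([y1, y2]))
    .log_marginal_likelihood_value_
    for d in delays
])
posterior = np.exp(logL - logL.max())
posterior /= posterior.sum()              # discretised delay posterior
best = delays[np.argmax(posterior)]
```

Because each trial delay is scored probabilistically, the output is a full posterior over the delay rather than a single cross-correlation peak, which is the key advantage the abstract describes.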

Read this paper on arXiv…

F. Nuñez, N. Gianniotis and K. Polsterer
Thu, 13 Apr 23
24/59

Comments: 13 pages, 16 figures, Accepted for publication in Astronomy and Astrophysics

Galactic ChitChat: Using Large Language Models to Converse with Astronomy Literature [CL]

http://arxiv.org/abs/2304.05406


We demonstrate the potential of the state-of-the-art OpenAI GPT-4 large language model to engage in meaningful interactions with Astronomy papers using in-context prompting. To optimize for efficiency, we employ a distillation technique that effectively reduces the size of the original input paper by 50%, while maintaining the paragraph structure and overall semantic integrity. We then explore the model’s responses using a multi-document context (ten distilled documents). Our findings indicate that GPT-4 excels in the multi-document domain, providing detailed answers contextualized within the framework of related research findings. Our results showcase the potential of large language models for the astronomical community, offering a promising avenue for further exploration, particularly the possibility of utilizing the models for hypothesis generation.

Read this paper on arXiv…

I. Ciucă and Y. Ting
Thu, 13 Apr 23
42/59

Comments: 3 pages, submitted to RNAAS, comments very welcome from the community

A Glimpse of International Cooperation in Astrophysical Sciences in India [IMA]

http://arxiv.org/abs/2304.05626


Astronomy and Astrophysics is an observational science dealing with celestial objects. Aryabhatta Research Institute of Observational Sciences (ARIES) is one of the premier institutions in astronomy and astrophysics and has contributed significantly to this field. No doubt, India is a part of several mega-science projects in the domain of Astronomy and Astrophysics, such as the Thirty Meter Telescope (TMT), Square Kilometer Array (SKA), and Laser Interferometer Gravitational-wave Observatory (LIGO) projects. Growing engagement of India with mega-science projects has brought a positive impact on its science and technology landscape. A few such collaborations are mentioned to demonstrate that international cooperation is necessary in the field of astrophysical sciences.

Read this paper on arXiv…

R. Sagar
Thu, 13 Apr 23
43/59

Comments: 4 pages, 1 figure, Invited article

Dynamo modelling for cycle variability and occurrence of grand minima in Sun-like stars: Rotation rate dependence [SSA]

http://arxiv.org/abs/2304.05819


Like the solar cycle, stellar activity cycles are also irregular. Observations reveal that rapidly rotating (young) Sun-like stars exhibit a high level of activity with no Maunder-like grand minima and rarely display smooth regular activity cycles. On the other hand, slowly rotating old stars like the Sun have low activity levels and smooth cycles with occasional grand minima. We, for the first time, try to model these observational trends using flux transport dynamo models. Following previous works, we build kinematic dynamo models of a one solar mass star with different rotation rates. Differential rotation and meridional circulation are specified with a mean-field hydrodynamic model. We include stochastic fluctuations in the Babcock-Leighton source of the poloidal field to capture the inherent fluctuations in the stellar convection. Based on extensive simulations, we find that rapidly rotating stars produce highly irregular cycles with strong magnetic fields and rarely produce Maunder-like grand minima, whereas the slowly rotating stars (with a rotation period of 10 days and longer) produce smooth cycles of weaker strength, long-term modulation in the amplitude, and occasional extended grand minima. The average duration and the frequency of grand minima increase with decreasing rotation rate. These results can be understood as the tendency of a less supercritical dynamo in slower-rotating stars to be more prone to producing extended grand minima.

Read this paper on arXiv…

V. Vashishth, B. Karak and L. Kitchatinov
Thu, 13 Apr 23
58/59

Comments: Accepted in MNRAS

Deep-learning based measurement of planetary radial velocities in the presence of stellar variability [EPA]

http://arxiv.org/abs/2304.04807


We present a deep-learning based approach for measuring small planetary radial velocities in the presence of stellar variability. We use neural networks to reduce stellar RV jitter in three years of HARPS-N sun-as-a-star spectra. We develop and compare dimensionality-reduction and data splitting methods, as well as various neural network architectures including single line CNNs, an ensemble of single line CNNs, and a multi-line CNN. We inject planet-like RVs into the spectra and use the network to recover them. We find that the multi-line CNN is able to recover planets with 0.2 m/s semi-amplitude, 50 day period, with 8.8% error in the amplitude and 0.7% in the period. This approach shows promise for mitigating stellar RV variability and enabling the detection of small planetary RVs with unprecedented precision.

Read this paper on arXiv…

I. Colwell, V. Timmaraju and A. Wise
Wed, 12 Apr 23
5/45

Comments: Draft, unsubmitted, 10 pages, 8 figures

SBI++: Flexible, Ultra-fast Likelihood-free Inference Customized for Astronomical Application [IMA]

http://arxiv.org/abs/2304.05281


Flagship near-future surveys targeting $10^8-10^9$ galaxies across cosmic time will soon reveal the processes of galaxy assembly in unprecedented resolution. This creates an immediate computational challenge on effective analyses of the full data-set. With simulation-based inference (SBI), it is possible to attain complex posterior distributions with the accuracy of traditional methods but with a $>10^4$ increase in speed. However, it comes with a major limitation. Standard SBI requires the simulated data to have identical characteristics to the observed data, which is often violated in astronomical surveys due to inhomogeneous coverage and/or fluctuating sky and telescope conditions. In this work, we present a complete SBI-based methodology, “SBI$^{++}$,” for treating out-of-distribution measurement errors and missing data. We show that out-of-distribution errors can be approximated by using standard SBI evaluations and that missing data can be marginalized over using SBI evaluations over nearby data realizations in the training set. In addition to the validation set, we apply SBI$^{++}$ to galaxies identified in extragalactic images acquired by the James Webb Space Telescope, and show that SBI$^{++}$ can infer photometric redshifts at least as accurately as traditional sampling methods and crucially, better than the original SBI algorithm using training data with a wide range of observational errors. SBI$^{++}$ retains the fast inference speed of $\sim$1 sec for objects in the observational training set distribution, and additionally permits parameter inference outside of the trained noise and data at $\sim$1 min per object. This expanded regime has broad implications for future applications to astronomical surveys.
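
As a toy stand-in for the likelihood-free idea (rejection ABC, the simplest such method, rather than the neural SBI the paper builds on; the simulator and tolerance are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulator": five noisy observables that depend on one parameter.
def simulator(theta, rng):
    return theta + rng.normal(0.0, 0.05, 5)

theta_true = 0.7
x_obs = simulator(theta_true, rng)           # the "observed" galaxy data

# Likelihood-free inference by rejection: simulate from the prior and
# keep parameter draws whose simulated data land close to the observation.
theta_prior = rng.uniform(0.0, 1.0, 50_000)
x_sim = theta_prior[:, None] + rng.normal(0.0, 0.05, (50_000, 5))
dist = np.linalg.norm(x_sim - x_obs, axis=1)
posterior_samples = theta_prior[dist < 0.25]  # approximate posterior draws
```

SBI replaces the rejection step with a learned neural density estimator, which is what makes amortised, ~1 second per-object inference possible; SBI++ additionally handles noise levels and missing bands absent from the training set.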

Read this paper on arXiv…

B. Wang, J. Leja, V. Villar, et al.
Wed, 12 Apr 23
6/45

Comments: 12 pages, 5 figures. Code and a Jupyter tutorial are made publicly available at this https URL

Field-level inference of cosmic shear with intrinsic alignments and baryons [CEA]

http://arxiv.org/abs/2304.04785


We construct a field-based Bayesian Hierarchical Model for cosmic shear that includes, for the first time, the important astrophysical systematics of intrinsic alignments and baryon feedback, in addition to a gravity model. We add to the BORG-WL framework the tidal alignment and tidal torquing model (TATT) for intrinsic alignments and compare them with the non-linear alignment (NLA) model. With synthetic data, we have shown that adding intrinsic alignments and sampling the TATT parameters does not reduce the constraining power of the method and the field-based approach lifts the weak lensing degeneracy. We add baryon effects at the field level using the enthalpy gradient descent (EGD) model. This model displaces the dark matter particles without knowing whether they belong to a halo and allows for self-calibration of the model parameters, which are inferred from the data. We have also illustrated the effects of model misspecification for the baryons. The resulting model now contains the most important physical effects and is suitable for application to data.

Read this paper on arXiv…

N. Porqueres, A. Heavens, D. Mortlock, et al.
Wed, 12 Apr 23
10/45

Comments: N/A

The International Pulsar Timing Array checklist for the detection of nanohertz gravitational waves [IMA]

http://arxiv.org/abs/2304.04767


Pulsar timing arrays (PTAs) provide a way to detect gravitational waves at nanohertz frequencies. In this band, the most likely signals are stochastic, with a power spectrum that rises steeply at lower frequencies. Indeed, the observation of a common red noise process in pulsar-timing data suggests that the first credible detection of nanohertz-frequency gravitational waves could take place within the next few years. The detection process is complicated by the nature of the signals and the noise: the first observational claims will be statistical inferences drawn at the threshold of detectability. To demonstrate that gravitational waves are creating some of the noise in the pulsar-timing data sets, observations must exhibit the Hellings and Downs curve — the angular correlation function associated with gravitational waves — as well as demonstrating that there are no other reasonable explanations. To ensure that detection claims are credible, the International Pulsar Timing Array (IPTA) has a formal process to vet results prior to publication. This includes internal sharing of data and processing pipelines between different PTAs, enabling independent cross-checks and validation of results. To oversee and validate any detection claim, the IPTA has also created an eight-member Detection Committee (DC) which includes four independent external members. IPTA members will only publish their results after a formal review process has concluded. This document is the initial DC checklist, describing some of the conditions that should be fulfilled by a credible detection.
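
The Hellings and Downs curve referred to above has a closed form: with $x = (1-\cos\theta)/2$, the expected correlation for a pulsar pair separated by angle $\theta$ is $\tfrac{3}{2}x\ln x - \tfrac{x}{4} + \tfrac{1}{2}$, excluding the pulsar auto-correlation term:

```python
import numpy as np

def hellings_downs(theta):
    """Expected correlation between timing residuals of two pulsars
    separated by angle theta (radians), for an isotropic stochastic
    gravitational-wave background; auto-correlation term excluded."""
    x = (1.0 - np.cos(theta)) / 2.0
    safe = np.where(x > 0, x, 1.0)   # avoid log(0); the x -> 0 limit is 1/2
    return np.where(x > 0, 1.5 * x * np.log(safe) - x / 4.0 + 0.5, 0.5)

angles = np.linspace(0.0, np.pi, 181)
curve = hellings_downs(angles)       # starts at 0.5, dips negative, ends at 0.25
```

A credible detection claim requires the measured inter-pulsar correlations to follow this curve rather than, say, a uniform (monopole-like) correlation.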

Read this paper on arXiv…

B. Allen, S. Dhurandhar, Y. Gupta, et al.
Wed, 12 Apr 23
12/45

Comments: 6 pages

Spherical Harmonics for the 1D Radiative Transfer Equation II: Thermal Emission [EPA]

http://arxiv.org/abs/2304.04830


Approximate methods to estimate solutions to the radiative transfer equation are essential for the understanding of atmospheres of exoplanets and brown dwarfs. The simplest and most popular choice is the “two-stream method” which is often used to produce simple yet effective models for radiative transfer in scattering and absorbing media. Toon et al. (1989) (Toon89) outlined a two-stream method for computing reflected light and thermal spectra and was later implemented in the open-source radiative transfer model PICASO. In Part I of this series, we developed an analytical spherical harmonics method for solving the radiative transfer equation for reflected solar radiation (Rooney et al. 2023), which was implemented in PICASO to increase the accuracy of the code by offering a higher-order approximation. This work is an extension of this spherical harmonics derivation to study thermal emission spectroscopy. We highlight the model differences in the approach for thermal emission and benchmark the 4-term method (SH4) against Toon89 and a high-stream discrete-ordinates method, CDISORT. By comparing the spectra produced by each model we demonstrate that the SH4 method provides a significant increase in accuracy, compared to Toon89, which can be attributed to the increased order of approximation and to the choice of phase function. We also explore the trade-off between computational time and model accuracy. We find that our 4-term method is twice as slow as our 2-term method, but is up to five times more accurate when compared with CDISORT. Therefore, SH4 provides excellent improvement in model accuracy with minimal sacrifice in numerical expense.

Read this paper on arXiv…

C. Rooney, N. Batalha and M. Marley
Wed, 12 Apr 23
13/45

Comments: Submitted ApJ; 17 pages; 7 figures; Code available at this https URL; Zenodo release at this https URL; Tutorials/figure reproducibility at this https URL;

Spherical Harmonics for the 1D Radiative Transfer Equation I: Reflected Light [EPA]

http://arxiv.org/abs/2304.04829


A significant challenge in radiative transfer theory for atmospheres of exoplanets and brown dwarfs is the derivation of computationally efficient methods that have adequate fidelity to more precise, numerically demanding solutions. In this work, we extend the capability of the first open-source radiative transfer model for computing the reflected light of exoplanets at any phase geometry, PICASO: Planetary Intensity Code for Atmospheric Spectroscopy Observations. Until now, PICASO has implemented two-stream approaches to solving the radiative transfer equation for reflected light, in particular following the derivations of Toon et al. (1989) (Toon89). In order to improve the model accuracy, we have considered higher-order approximations of the phase functions, namely, we have increased the order of approximation from 2 to 4, using spherical harmonics. The spherical harmonics approximation decouples spatial and directional dependencies by expanding the intensity and phase function into a series of spherical harmonics, or Legendre polynomials, allowing for analytical solutions for low-order approximations to optimize computational efficiency. We rigorously derive the spherical harmonics method for reflected light and benchmark the 4-term method (SH4) against Toon89 and two independent and higher-fidelity methods (CDISORT & doubling-method). On average, the SH4 method provides an order of magnitude increase in accuracy, compared to Toon89. Lastly, we implement SH4 within PICASO and observe only a modest increase in computational time, compared to two-stream methods (20% increase).
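
The key ingredient is the truncated Legendre expansion of the phase function. For a Henyey-Greenstein phase function with asymmetry $g$ (our illustrative choice, not the paper's benchmark setup), the exact expansion moments are $(2l+1)g^l$, so the 2-term and 4-term truncations can be compared directly:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Henyey-Greenstein phase function with asymmetry g, and its truncated
# Legendre expansions; more terms capture the forward-scattering peak better.
g = 0.5
mu = np.linspace(-1.0, 1.0, 201)                       # cos(scattering angle)
hg = (1 - g ** 2) / (1 + g ** 2 - 2 * g * mu) ** 1.5   # exact phase function

def truncated(n_terms):
    # Exact HG Legendre moments are (2l + 1) * g**l, ordered from degree 0.
    coeffs = [(2 * l + 1) * g ** l for l in range(n_terms)]
    return L.legval(mu, coeffs)

err2 = np.abs(truncated(2) - hg).max()   # 2-term, two-stream-like
err4 = np.abs(truncated(4) - hg).max()   # 4-term, SH4-like
```

The 4-term truncation tracks the strongly forward-peaked exact function far better than the 2-term one, which mirrors the accuracy gain SH4 achieves over two-stream methods.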

Read this paper on arXiv…

C. Rooney, N. Batalha and M. Marley
Wed, 12 Apr 23
19/45

Comments: Accepted ApJ; 27 pages; 5 figures; Code available at this https URL; Zenodo release at this https URL; Tutorials/figure reproducibility at this https URL

The James Webb Space Telescope Mission [IMA]

http://arxiv.org/abs/2304.04869


Twenty-six years ago a small committee report, building on earlier studies, expounded a compelling and poetic vision for the future of astronomy, calling for an infrared-optimized space telescope with an aperture of at least $4m$. With the support of their governments in the US, Europe, and Canada, 20,000 people realized that vision as the $6.5m$ James Webb Space Telescope. A generation of astronomers will celebrate their accomplishments for the life of the mission, potentially as long as 20 years, and beyond. This report and the scientific discoveries that follow are extended thank-you notes to the 20,000 team members. The telescope is working perfectly, with much better image quality than expected. In this and accompanying papers, we give a brief history, describe the observatory, outline its objectives and current observing program, and discuss the inventions and people who made it possible. We cite detailed reports on the design and the measured performance on orbit.

Read this paper on arXiv…

J. Gardner, J. Mather, R. Abbott, et. al.
Wed, 12 Apr 23
20/45

Comments: Accepted by PASP for the special issue on The James Webb Space Telescope Overview, 29 pages, 4 figures

Multicolor and multi-spot observations of Starlink's Visorsat [IMA]

http://arxiv.org/abs/2304.05191


This study provides the results of simultaneous multicolor observations for the first Visorsat (STARLINK-1436) and the ordinary Starlink satellite, STARLINK-1113 in the $U$, $B$, $V$, $g’$, $r$, $i$, $R_{\rm C}$, $I_{\rm C}$, $z$, $J$, $H$, and $K_s$ bands to quantitatively investigate the extent to which Visorsat reduces its reflected light. Our results are as follows: (1) in most cases, Visorsat is fainter than STARLINK-1113, and the sunshade on Visorsat, therefore, contributes to the reduction of the reflected sunlight; (2) the magnitude at 550 km altitude (normalized magnitude) of both satellites often reaches the naked-eye limiting magnitude ($<$ 6.0); (3) from a blackbody radiation model of the reflected flux, the peak of the reflected components of both satellites is around the $z$ band; and (4) the albedo of the near-infrared range is larger than that of the optical range. Under the assumption that Visorsat and STARLINK-1113 have the same reflectivity, we estimate the covering factor, $C_{\rm f}$, of the sunshade on Visorsat, using the blackbody radiation model: the covering factor ranges from $0.18 \leq C_{\rm f} \leq 0.92$. From the multivariable analysis of the solar phase angle (Sun-target-observer), the normalized magnitude, and the covering factor, the phase angle versus covering factor distribution presents a moderate anti-correlation between them, suggesting that the magnitudes of Visorsat depend not only on the phase angle but also on the orientation of the sunshade along our line of sight. However, the impact on astronomical observations from Visorsat-designed satellites remains serious. Thus, new countermeasures are necessary for the Starlink satellites to further reduce reflected sunlight.
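A much-simplified version of the covering-factor idea: if both satellites have the same intrinsic reflectivity, the sunshade blocks a fraction $C_{\rm f}$ of the reflected flux, so a magnitude difference maps directly to $C_{\rm f}$. This sketch ignores the blackbody spectral modeling the paper actually performs, and the example magnitudes are hypothetical:

```python
def covering_factor(m_visorsat, m_ordinary):
    """Covering factor C_f of the sunshade, assuming equal intrinsic
    reflectivity, so F_vis / F_ord = 1 - C_f and the flux ratio
    follows from the magnitude difference."""
    flux_ratio = 10.0 ** (-0.4 * (m_visorsat - m_ordinary))
    return 1.0 - flux_ratio

# e.g. Visorsat observed 1.0 mag fainter than the ordinary satellite:
cf = covering_factor(6.0, 5.0)  # ~0.60
```

The paper's band-by-band blackbody fit refines this by accounting for the spectral shape of the reflected sunlight rather than a single-band flux ratio.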

Read this paper on arXiv…

T. Horiuchi, H. Hanayama, M. Ohishi, et. al.
Wed, 12 Apr 23
22/45

Comments: 31 pages, 9 figures, published in PASJ

Applications of the gamma/hadron discriminator $LCm$ to realistic air shower array experiments [HEAP]

http://arxiv.org/abs/2304.05348


In this article, it is shown that the $C_k$ and $LCm$ variables, recently introduced as an effective way to discriminate gamma and proton-induced showers in large wide-field gamma-ray observatories, can be generalised to be used in arrays of different detectors and variable fill factors. In particular, the $C_k$ profile discrimination capabilities are evaluated for scintillator and water Cherenkov detector arrays.

Read this paper on arXiv…

R. Conceição, P. Costa, L. Gibilisco, et. al.
Wed, 12 Apr 23
23/45

Comments: N/A

The Large Array Survey Telescope — System Overview and Performances [IMA]

http://arxiv.org/abs/2304.04796


The Large Array Survey Telescope (LAST) is a wide-field visible-light telescope array designed to explore the variable and transient sky with a high cadence. LAST will be composed of 48, 28-cm f/2.2 telescopes (32 already installed) equipped with full-frame backside-illuminated cooled CMOS detectors. Each telescope provides a field of view (FoV) of 7.4 deg^2 with 1.25 arcsec/pix, while the system FoV is 355 deg^2 in 2.9 Gpix. The total collecting area of LAST, with 48 telescopes, is equivalent to a 1.9-m telescope. The cost-effectiveness of the system (i.e., probed volume of space per unit time per unit cost) is about an order of magnitude higher than most existing and under-construction sky surveys. The telescopes are mounted on 12 separate mounts, each carrying four telescopes. This provides significant flexibility in operating the system. The first LAST system is under construction in the Israeli Negev Desert, with 32 telescopes already deployed. We present the system overview and performances based on the system commissioning data. The Bp 5-sigma limiting magnitude of a single 28-cm telescope is about 19.6 (21.0), in 20 s (20×20 s). Astrometric two-axes precision (rms) at the bright end is about 60 (30) mas in 20 s (20×20 s), while absolute photometric calibration, relative to GAIA, provides ~10 millimag accuracy. Relative photometric precision, in a single 20 s (320 s) image, at the bright end measured over a time scale of about 60 min is about 3 (1) millimag. We discuss the system science goals, data pipelines, and the observatory control system in companion publications.
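The quoted equivalence of 48 28-cm apertures to a single 1.9-m telescope, and the 355 deg^2 system FoV, both follow from simple scaling (collecting area goes as N·D^2, FoV adds linearly). A quick check:

```python
import math

N_TELESCOPES = 48
APERTURE_M = 0.28      # each telescope is a 28-cm f/2.2
FOV_DEG2 = 7.4         # per-telescope field of view

# Collecting area scales as N * D^2, so the equivalent single
# aperture is sqrt(N) * D.
total_area = N_TELESCOPES * math.pi * (APERTURE_M / 2.0) ** 2
equiv_diameter = math.sqrt(N_TELESCOPES) * APERTURE_M  # ~1.94 m
system_fov = N_TELESCOPES * FOV_DEG2                   # ~355 deg^2
```

Both numbers reproduce the abstract's figures, which is a useful sanity check when reading array-survey specifications.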

Read this paper on arXiv…

E. Ofek, S. Ben-Ami, D. Polishook, et. al.
Wed, 12 Apr 23
24/45

Comments: Submitted to PASP, 15pp

Measuring tidal effects with the Einstein Telescope: A design study [IMA]

http://arxiv.org/abs/2304.05349


Over the last few years, there has been significant momentum to ensure that the third-generation era of gravitational-wave detectors will find its realisation in the coming decades, and numerous design studies have been ongoing for some time. Some of the main factors determining the cost of the Einstein Telescope lie in the length of the interferometer arms and its shape: L-shaped detectors versus a single triangular configuration. Both designs are further expected to include a xylophone configuration for improvement on both ends of the frequency bandwidth of the detector. We consider binary neutron star sources in our study, as examples of sources already observed with the current-generation detectors and ones that hold the most promise given the broader frequency band and higher sensitivity of the third-generation detectors. We estimate parameters of the sources with different configurations of the Einstein Telescope detector, varying arm-lengths as well as shapes and alignments. Overall, we find little improvement with respect to changing the shape or alignment. However, there are noticeable differences in the estimates of some parameters, including tidal deformability, when varying the arm-length of the detectors. In addition, we also study the effect of changing the laser power, and the lower limit of the frequency band in which we perform the analysis.

Read this paper on arXiv…

A. Puecher, A. Samajdar and T. Dietrich
Wed, 12 Apr 23
25/45

Comments: 11 pages, 7 figures, 4 tables

Simulated observations of star formation regions: infrared evolution of globally collapsing clouds [GA]

http://arxiv.org/abs/2304.04864


The direct comparison between hydrodynamical simulations and observations is needed to improve the physics included in the former and test biases in the latter. Post-processing radiative transfer and synthetic observations are now the standard way to do this. We report on the first application of the SKIRT radiative transfer code to simulations of a star-forming cloud. The synthetic observations are then analyzed following traditional observational workflows. We find that in the early stages of the simulation, stellar radiation is inefficient in heating dust to the temperatures observed in Galactic clouds, thus the addition of an interstellar radiation field is necessary. The spectral energy distribution of the cloud settles rather quickly after $\sim3$ Myr of evolution from the onset of star formation, but its morphology continues to evolve for $\sim8$ Myr due to the expansion of HII regions and the respective creation of cavities, filaments, and ridges. Modeling synthetic Herschel fluxes with 1- or 2-component modified black bodies underestimates total dust masses by a factor of $\sim2$. Spatially-resolved fitting recovers up to about $70\%$ of the intrinsic value. This “missing mass” is located in a very cold dust component with temperatures below $10$ K, which does not contribute appreciably to the far-infrared flux. This effect could bias real observations if such dust exists in large amounts. Finally, we tested observational calibrations of the SFR based on infrared fluxes and concluded that they are in agreement when compared to the intrinsic SFR of the simulation averaged over $\sim100$ Myr.

Read this paper on arXiv…

J. Jáquez-Domínguez, R. Galván-Madrid, J. Fritz, et. al.
Wed, 12 Apr 23
28/45

Comments: N/A

Feature Guided Training and Rotational Standardisation for the Morphological Classification of Radio Galaxies [IMA]

http://arxiv.org/abs/2304.05095


State-of-the-art radio observatories produce large amounts of data which can be used to study the properties of radio galaxies. However, with this rapid increase in data volume, it has become unrealistic to manually process all of the incoming data, which in turn led to the development of automated approaches for data processing tasks, such as morphological classification. Deep learning plays a crucial role in this automation process and it has been shown that convolutional neural networks (CNNs) can deliver good performance in the morphological classification of radio galaxies. This paper investigates two adaptations to the application of these CNNs for radio galaxy classification. The first adaptation consists of using principal component analysis (PCA) during preprocessing to align the galaxies’ principal components with the axes of the coordinate system, which will normalize the orientation of the galaxies. This adaptation led to a significant improvement in the classification accuracy of the CNNs and decreased the average time required to train the models. The second adaptation consists of guiding the CNN to look for specific features within the samples in an attempt to utilize domain knowledge to improve the training process. It was found that this adaptation generally leads to a more stable training process and in certain instances reduced overfitting within the network, as well as the number of epochs required for training.
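The PCA orientation-normalisation idea can be sketched on a 2-D point cloud standing in for a galaxy's bright pixels (the paper applies the same principle to image data; this generic version is not their pipeline code):

```python
import numpy as np

def align_principal_axis(points):
    """Rotate a 2-D point cloud so its first principal component lies
    along the x-axis, normalising the orientation."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, -1]                 # eigh sorts ascending
    angle = np.arctan2(major[1], major[0])
    rot = np.array([[np.cos(-angle), -np.sin(-angle)],
                    [np.sin(-angle),  np.cos(-angle)]])
    return centered @ rot.T
```

After this transform, sources that differ only by rotation look (nearly) identical to the CNN, which is why it both improves accuracy and speeds up training: the network no longer has to learn rotation invariance from data.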

Read this paper on arXiv…

K. Brand, T. Grobler, W. Kleynhans, et. al.
Wed, 12 Apr 23
35/45

Comments: 20 pages, 17 figures, this is a pre-copyedited, author-produced PDF of an article accepted for publication in the Monthly Notices of the Royal Astronomical Society

On the recent discovery claim of a new $z>7$ quasar [GA]

http://arxiv.org/abs/2304.05162


Koptelova et al. 2022 (K22) recently claimed a new quasar discovery at $z=7.46$. After careful consideration of the publicly available data underlying K22’s claim, we find that the observations were contaminated by a moving Solar System object, likely a main-belt asteroid. In the absence of the contaminated photometry, there is no evidence for the nearby, persistent WISE source being a high-redshift object; in fact, a detection of the source in DELS $z$-band rules out a redshift $z>7.3$. We present our findings as a cautionary tale of the dangers that passing asteroids pose to photometric selections.

Read this paper on arXiv…

S. Bosman, F. Davies and E. Bañados
Wed, 12 Apr 23
37/45

Comments: RNAAS; 2 pages, 1 figure

Comparison of modified black-body fits for the estimation of dust optical depths in interstellar clouds [IMA]

http://arxiv.org/abs/2304.05102


When dust far-infrared spectral energy distributions (SEDs) are fitted with a single modified black body (MBB), the optical depths tend to be underestimated. This is caused by temperature variations, and fits with several temperature components could lead to smaller errors. We want to quantify the performance of the standard model of a single MBB in comparison with some multi-component models. We are interested in both the accuracy and computational cost. We examine some cloud models relevant for interstellar medium studies. Synthetic spectra are fitted with a single MBB, a sum of several MBBs, and a sum of fixed spectral templates, in each case keeping the dust opacity spectral index fixed. When observations are used at their native resolution, the beam convolution becomes part of the fitting procedure. This increases the computational cost, but the analysis of large maps is still feasible with direct optimisation or even with Markov chain Monte Carlo methods. Compared to the single MBB fits, multi-component models can show significantly smaller systematic errors, at the cost of more statistical noise. The $\chi^2$ values of the fits are not a good indicator of the accuracy of the $\tau$ estimates, due to the potentially dominant role of the model errors. The single-MBB model also remains a valid alternative if combined with empirical corrections to reduce its bias. It is technically feasible to fit multi-component models to maps of millions of pixels. However, the SED model and the priors need to be selected carefully, and the model errors can only be estimated by comparing alternative models.
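The underestimation bias is easy to reproduce: fit a single MBB to synthetic emission from two dust temperatures along the line of sight. This sketch is not the paper's code; the spectral index $\beta = 1.8$, the reference frequency, and the two-temperature mixture are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def mbb(nu, tau0, t_dust, beta=1.8, nu0=1.2e12):
    """Modified black body: I_nu = tau0 * (nu/nu0)**beta * B_nu(T)."""
    b_nu = 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t_dust))
    return tau0 * (nu / nu0) ** beta * b_nu

# Synthetic emission from equal optical depths of 15 K and 25 K dust
# (total tau0 = 0.01); a single-MBB fit recovers a smaller tau0,
# the bias the abstract describes.
nu = np.linspace(3e11, 3e12, 30)              # ~100-1000 micron
truth = mbb(nu, 0.5e-2, 15.0) + mbb(nu, 0.5e-2, 25.0)
popt, _ = curve_fit(mbb, nu, truth, p0=[1e-2, 20.0])
```

Because `p0` has two entries, `curve_fit` fits only `tau0` and `t_dust`, leaving `beta` and `nu0` at their defaults; the fitted temperature lands between the two true values while the optical depth comes out below the true total.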

Read this paper on arXiv…

M. Juvela
Wed, 12 Apr 23
44/45

Comments: Accepted to A&A

The infrared colors of 51 Eridani b: micrometeoroid dust or chemical disequilibrium? [EPA]

http://arxiv.org/abs/2304.03850


We reanalyze near-infrared spectra of the young extrasolar giant planet 51 Eridani b, originally presented in Macintosh et al. (2015) and Rajan et al. (2017), using modern atmospheric models which include a self-consistent treatment of disequilibrium chemistry due to turbulent vertical mixing. In addition, we investigate the possibility that significant opacity from micrometeors or other impactors in the planet’s atmosphere may be responsible for shaping the observed spectral energy distribution (SED). We find that disequilibrium chemistry is useful for describing the mid-infrared colors of the planet’s spectra, especially with regard to photometric data at M band around 4.5 $\mu$m, which is the result of super-equilibrium abundances of carbon monoxide, while the micrometeors are unlikely to play a pivotal role in shaping the SED. The best-fitting, micrometeoroid-dust-free, disequilibrium chemistry, patchy cloud model has the following parameters: effective temperature $T_\textrm{eff} = 681$ K with clouds (or without clouds, i.e. the grid temperature $T_\textrm{grid}$ = 900 K), surface gravity $g$ = 1000 m/s$^2$, sedimentation efficiency $f_\textrm{sed}$ = 10, vertical eddy diffusion coefficient $K_\textrm{zz}$ = 10$^3$ cm$^2$/s, cloud hole fraction $f_\textrm{hole}$ = 0.2, and planet radius $R_\textrm{planet}$ = 1.0 R$_\textrm{Jup}$.

Read this paper on arXiv…

A. Madurowicz, S. Mukherjee, N. Batalha, et. al.
Tue, 11 Apr 23
4/63

Comments: 22 pages, 14 figures, Accepted to AJ

Measuring the properties of $f$-mode oscillations of a protoneutron star by third generation gravitational-wave detectors [IMA]

http://arxiv.org/abs/2304.04283


Core-collapse supernovae are among the astrophysical sources of gravitational waves that could be detected by third-generation gravitational-wave detectors. Here, we analyze the gravitational-wave strain signals from two- and three-dimensional simulations of core-collapse supernovae generated using the code Fornax. A subset of the two-dimensional simulations has non-zero core rotation at the core bounce. A dominant source of time-changing quadrupole moment is the $l=2$ fundamental mode ($f$-mode) oscillation of the protoneutron star. From the time-frequency spectrogram of the gravitational-wave strain we see that, starting $\sim 400$ ms after the core bounce, most of the power lies within a narrow track that represents the frequency evolution of the $f$-mode oscillations. The $f$-mode frequencies obtained from linear perturbation analysis of the angle-averaged profile of the protoneutron star corroborate what we observe in the spectrograms of the gravitational-wave signal. We explore the measurability of the $f$-mode frequency evolution of the protoneutron star for a supernova signal observed in the third-generation gravitational-wave detectors. Measurement of the frequency evolution can reveal information about the masses, radii, and densities of the protoneutron stars. We find that if the third-generation detectors observe a supernova within 10 kpc, we can measure these frequencies to within $\sim$90\% accuracy. We can also measure the energy emitted in the fundamental $f$-mode using the spectrogram data of the strain signal. We find that the energy in the $f$-mode can be measured to within 20\% error for signals observed by Cosmic Explorer using simulations with successful explosions, assuming source distances within 10 kpc.
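Tracing a narrow track in a time-frequency spectrogram can be sketched generically with `scipy.signal`: take the spectrogram and follow the frequency of maximum power in each time slice. This is a simple ridge-following stand-in, not the paper's analysis, and the synthetic chirp merely mimics a rising $f$-mode frequency:

```python
import numpy as np
from scipy import signal

def dominant_frequency_track(strain, fs):
    """Return (times, ridge): the frequency of maximum spectrogram
    power in each time slice, a crude time-frequency track."""
    f, t, sxx = signal.spectrogram(strain, fs=fs,
                                   nperseg=256, noverlap=192)
    return t, f[np.argmax(sxx, axis=0)]

# Synthetic signal whose frequency rises from 200 Hz to 1000 Hz
fs = 4096.0
t = np.arange(0.0, 1.0, 1.0 / fs)
strain = signal.chirp(t, f0=200.0, t1=1.0, f1=1000.0)
times, ridge = dominant_frequency_track(strain, fs)
```

Real detector data would need noise-weighting and outlier rejection before the argmax ridge is trustworthy, but the recovered track rising with time is exactly the structure described in the abstract.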

Read this paper on arXiv…

C. Afle, S. Kundu, J. Cammerino, et. al.
Tue, 11 Apr 23
5/63

Comments: 17 pages, 11 figures, 2 tables

ASAS-SN Sky Patrol V2.0 [IMA]

http://arxiv.org/abs/2304.03791


The All-Sky Automated Survey for Supernovae (ASAS-SN) began observing in late-2011 and has been imaging the entire sky with nightly cadence since late 2017. A core goal of ASAS-SN is to release as much useful data as possible to the community. Working towards this goal, in 2017 the first ASAS-SN Sky Patrol was established as a tool for the community to obtain light curves from our data with no preselection of targets. Then, in 2020 we released static V-band photometry from 2013–2018 for 61 million sources. Here we describe the next generation ASAS-SN Sky Patrol, Version 2.0, which represents a major progression of this effort. Sky Patrol 2.0 provides continuously updated light curves for 111 million targets derived from numerous external catalogs of stars, galaxies, and solar system objects. We are generally able to serve photometry data within an hour of observation. Moreover, with a novel database architecture, the catalogs and light curves can be queried at unparalleled speed, returning thousands of light curves within seconds. Light curves can be accessed through a web interface (this http URL) or a Python client (https://asas-sn.ifa.hawaii.edu/documentation). The Python client can be used to retrieve up to 1 million light curves, generally limited only by bandwidth. This paper gives an updated overview of our survey, introduces the new Sky Patrol, and describes its system architecture. These results provide significant new capabilities to the community for pursuing multi-messenger and time-domain astronomy.

Read this paper on arXiv…

K. Hart, B. Shappee, D. Hey, et. al.
Tue, 11 Apr 23
8/63

Comments: Light curves can be accessed through a web interface this http URL, or a Python client at this http URL

Prompt-to-afterglow transition of optical emission in a long gamma-ray burst consistent with a fireball [HEAP]

http://arxiv.org/abs/2304.04669


Long gamma-ray bursts (GRBs), which signify the end-life collapse of very massive stars, are produced by extremely relativistic jets colliding with the circumstellar medium. Huge energy is released both in the first few seconds, namely the internal dissipation phase that powers the prompt emission, and in the subsequent self-similar jet-deceleration phase that produces afterglows observed across the broad-band electromagnetic spectrum. However, prompt optical emission from GRBs has rarely been detected, seriously limiting our understanding of the transition between the two phases. Here we report the detection of prompt optical emission from a gamma-ray burst (GRB 201223A) using a dedicated telescope array with high temporal resolution and wide time coverage. The early phase, coincident with the prompt $\gamma$-ray emission, shows a luminosity greatly in excess of the extrapolation from $\gamma$-rays, while the later luminosity bump is consistent with the onset of the afterglow. The clearly detected transition allows us to differentiate the physical processes contributing to early optical emission and to diagnose the composition of the jet.

Read this paper on arXiv…

L. Xin, X. Han, H. Li, et. al.
Tue, 11 Apr 23
17/63

Comments: Authors’ version of article published in Nature Astronomy, see their website for official version

Reducing roundoff errors in numerical integration of planetary ephemeris [EPA]

http://arxiv.org/abs/2304.04458


Modern lunar-planetary ephemerides are numerically integrated over an observational timespan of more than 100 years (with the last 20 years having very precise astrometric data). On such long timespans, not only finite-difference approximation errors but also accumulating arithmetic roundoff errors become important, because they exceed the random errors of high-precision range observables of the Moon, Mars, and Mercury. One way to tackle this problem is to use the extended-precision arithmetic available on x86 processors. Noting the drawbacks of this approach, we propose an alternative: using double-double arithmetic where appropriate. This allows using only double-precision floating-point primitives, which have ubiquitous support.

Read this paper on arXiv…

M. Subbotin, A. Kodukov and D. Pavlov
Tue, 11 Apr 23
18/63

Comments: N/A

Review of X-ray pulsar spacecraft autonomous navigation [IMA]

http://arxiv.org/abs/2304.04154


This article provides a review on X-ray pulsar-based navigation (XNAV). The review starts with the basic concept of XNAV, and briefly introduces the past, present and future projects concerning XNAV. This paper focuses on the advances of the key techniques supporting XNAV, including the navigation pulsar database, the X-ray detection system, and the pulse time of arrival estimation. Moreover, the methods to improve the estimation performance of XNAV are reviewed. Finally, some remarks on the future development of XNAV are provided.

Read this paper on arXiv…

Y. Wang, W. Zheng, S. Zhang, et. al.
Tue, 11 Apr 23
27/63

Comments: has been accepted by Chinese Journal of Aeronautics

Latent Stochastic Differential Equations for Modeling Quasar Variability and Inferring Black Hole Properties [GA]

http://arxiv.org/abs/2304.04277


Active galactic nuclei (AGN) are believed to be powered by the accretion of matter around supermassive black holes at the centers of galaxies. The variability of an AGN’s brightness over time can reveal important information about the physical properties of the underlying black hole. The temporal variability is believed to follow a stochastic process, often represented as a damped random walk described by a stochastic differential equation (SDE). With upcoming wide-field surveys set to observe 100 million AGN in multiple bandpass filters, there is a need for efficient and automated modeling techniques that can handle the large volume of data. Latent SDEs are well-suited for modeling AGN time series data, as they can explicitly capture the underlying stochastic dynamics. In this work, we modify latent SDEs to jointly reconstruct the unobserved portions of multivariate AGN light curves and infer their physical properties such as the black hole mass. Our model is trained on a realistic physics-based simulation of ten-year AGN light curves, and we demonstrate its ability to fit AGN light curves even in the presence of long seasonal gaps and irregular sampling across different bands, outperforming a multi-output Gaussian process regression baseline.
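The damped random walk mentioned here is the Ornstein-Uhlenbeck process, whose exact conditional transition between arbitrary sample times is Gaussian, making it easy to simulate on the irregular cadences the paper targets. A generic sketch (parameter values are illustrative, not from the paper; `sigma` is the asymptotic magnitude scatter and `tau` the damping timescale):

```python
import numpy as np

def simulate_drw(times, tau, sigma, mean_mag, seed=0):
    """Simulate a damped random walk (Ornstein-Uhlenbeck process)
    at arbitrary sample times using its exact Gaussian transition:
    m_i | m_{i-1} ~ N(mean + rho*(m_{i-1} - mean), sigma^2*(1 - rho^2))
    with rho = exp(-dt / tau)."""
    rng = np.random.default_rng(seed)
    mags = np.empty(len(times))
    mags[0] = mean_mag + sigma * rng.standard_normal()
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        rho = np.exp(-dt / tau)
        m = mean_mag + rho * (mags[i - 1] - mean_mag)
        s = sigma * np.sqrt(1.0 - rho**2)
        mags[i] = m + s * rng.standard_normal()
    return mags
```

Because the transition density is exact, arbitrary seasonal gaps pose no numerical problem for the simulator; the inference direction (recovering `tau`, `sigma`, and black hole properties from gappy multivariate light curves) is what the paper's latent-SDE model addresses.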

Read this paper on arXiv…

J. Fagin, J. Park, H. Best, et. al.
Tue, 11 Apr 23
35/63

Comments: 10 pages, 5 figures, accepted at the ICLR 2023 Workshop on Physics for Machine Learning