A guiding center implementation for relativistic particle dynamics in the PLUTO code [IMA]

http://arxiv.org/abs/2212.08064


We present a numerical implementation of the guiding center approximation to describe the relativistic motion of charged test particles in the PLUTO code for astrophysical plasma dynamics. The guiding center approximation (GCA) removes the time step constraint due to particle gyration around magnetic field lines by following the particle's center of motion rather than its full trajectory. The gyration can be detached from the guiding center motion if the electromagnetic fields vary sufficiently slowly compared to the particle's gyration radius and period. Our implementation employs a variable step-size linear multistep method, which is more efficient than traditional one-step Runge-Kutta schemes. A number of numerical benchmarks are presented in order to assess the validity of our implementation.
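
The abstract does not spell out the scheme's coefficients; as a hedged illustration, one member of the variable step-size linear multistep family is the two-step Adams-Bashforth method with non-uniform steps (the standard textbook form, not necessarily PLUTO's actual scheme):

```python
import numpy as np

def ab2_variable_step(f, y0, t_grid):
    """Two-step Adams-Bashforth integrator on a (possibly non-uniform) time grid.

    Uses the variable-step formula
        y_{n+1} = y_n + h_n * [(1 + h_n/(2 h_{n-1})) f_n - (h_n/(2 h_{n-1})) f_{n-1}]
    and bootstraps the first step with Heun's (RK2) method.
    """
    y = np.array(y0, dtype=float)
    ys = [y.copy()]
    f_prev, h_prev = None, None
    for n in range(len(t_grid) - 1):
        h = t_grid[n + 1] - t_grid[n]
        f_n = f(t_grid[n], y)
        if f_prev is None:
            # No history yet: one self-starting RK2 step.
            y = y + 0.5 * h * (f_n + f(t_grid[n] + h, y + h * f_n))
        else:
            w = h / (2.0 * h_prev)
            y = y + h * ((1.0 + w) * f_n - w * f_prev)
        f_prev, h_prev = f_n, h
        ys.append(y.copy())
    return np.array(ys)
```

The efficiency claim in the abstract comes from reusing the stored right-hand side `f_prev`: each step needs only one new evaluation, whereas a one-step Runge-Kutta scheme of comparable order needs several.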

Read this paper on arXiv…

A. Mignone, H. Haudemand and E. Puzzoni
Fri, 16 Dec 22
42/72

Comments: N/A

A Measurement of the Cosmic Optical Background and Diffuse Galactic Light Scaling from the R < 50 AU New Horizons-LORRI Data [CEA]

http://arxiv.org/abs/2212.07449


Direct photometric measurements of the cosmic optical background (COB) provide an important point of comparison to both other measurement methodologies and models of cosmic structure formation, and permit a cosmic consistency test with the potential to reveal additional diffuse sources of emission. The COB has been challenging to measure from Earth due to the difficulty of isolating it from the diffuse light scattered from interplanetary dust in our solar system. We present a measurement of the COB using data taken by the Long-Range Reconnaissance Imager (LORRI) on NASA’s New Horizons mission, considering all data acquired to 47 AU. We employ a blind methodology where our analysis choices are developed against a subset of the full data set, which is then unblinded. Dark current and other instrumental systematics are accounted for, including a number of sources of scattered light. We fully characterize and remove structured and diffuse astrophysical foregrounds including bright stars, the integrated starlight from faint unresolved sources, and diffuse galactic light. For the full data set, we find the surface brightness of the COB to be $\lambda I_{\lambda}^{\mathrm{COB}}$ $=$ 21.98 $\pm$ 1.23 (stat.) $\pm$ 1.36 (cal.) nW m$^{-2}$ sr$^{-1}$. This result supports recent determinations that find a factor of $2{-}3\times$ more light than expected from the integrated light from galaxies, and motivates new diffuse intensity measurements with more capable instruments that can support spectral measurements over the optical and near-IR.

Read this paper on arXiv…

T. Symons, M. Zemcov, A. Cooray, et al.
Fri, 16 Dec 22
44/72

Comments: 36 pages, 22 figures, 8 tables; accepted for publication in ApJ

Fast-Cadence High-Contrast Imaging with Information Field Theory [IMA]

http://arxiv.org/abs/2212.07714


Although many exoplanets have been indirectly detected in recent years, direct imaging of them with ground-based telescopes remains challenging. In the presence of atmospheric fluctuations, it is difficult to resolve the high brightness contrasts at the small angular separation between the star and its potential companions. Post-processing of telescope images has become an essential tool to improve the resolvable contrast ratios. This paper contributes a post-processing algorithm for fast-cadence imaging, which deconvolves sequences of telescope images. The algorithm infers a Bayesian estimate of the astronomical object as well as the atmospheric optical path length, including its spatial and temporal structures. For this, we utilize physics-inspired models for the object, the atmosphere, and the telescope. The algorithm is computationally expensive but makes it possible to resolve high contrast ratios despite short observation times and no field rotation. We test the performance of the algorithm with point-like companions synthetically injected into a real data set acquired with the SHARK-VIS pathfinder instrument at the LBT. Sources with brightness ratios down to $6\cdot10^{-4}$ relative to the star are detected at $185$ mas separation with a short observation time of $0.6\,\text{s}$.

Read this paper on arXiv…

J. Roth, G. Causi, V. Testa, et al.
Fri, 16 Dec 22
56/72

Comments: 12 pages, 6 figures

Consistency tests of field level inference with the EFT likelihood [CEA]

http://arxiv.org/abs/2212.07875


Analyzing the clustering of galaxies at the field level in principle promises access to all the cosmological information available. Given this incentive, in this paper we investigate the performance of a field-based forward modeling approach to galaxy clustering using the effective field theory (EFT) framework of large-scale structure (LSS). We do so by applying this formalism to a set of consistency and convergence tests on synthetic datasets. We explore the high-dimensional joint posterior of LSS initial conditions by combining Hamiltonian Monte Carlo sampling for the field of initial conditions with slice sampling for cosmology and model parameters. We adopt the Lagrangian perturbation theory forward model from [1], up to second order, for biased tracers. We specifically include model mis-specifications in our synthetic datasets within the EFT framework. We achieve this by generating synthetic data at a cutoff scale $\Lambda_0$, which controls which Fourier modes enter the EFT likelihood evaluation, higher than the cutoff $\Lambda$ used in the inference. In the presence of model mis-specifications, we find that the EFT framework still allows for robust, unbiased joint inference of a) cosmological parameters – specifically, the scaling amplitude of the initial conditions – b) the initial conditions themselves, and c) the bias and noise parameters. In addition, we show that in the purely linear case, where the posterior is analytically tractable, our samplers fully explore the posterior surface. We also demonstrate convergence in the cases of nonlinear forward models. Our findings serve as a confirmation of the EFT field-based forward model framework developed in [2-7], and as another step towards field-level cosmological analyses of real galaxy surveys.
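
As a rough illustration of what a cutoff such as $\Lambda$ or $\Lambda_0$ does, a sharp-k filter removes all Fourier modes of a gridded density field above the cutoff wavenumber. The grid size, box size, and function name below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def sharpk_filter(delta, boxsize, Lambda):
    """Zero out Fourier modes with |k| > Lambda in a 3D density field.

    delta: real (n, n, n) array; boxsize: side length (same units as 1/Lambda).
    Only modes below the cutoff survive, mimicking how a cutoff scale
    controls which Fourier modes enter a likelihood evaluation.
    """
    n = delta.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)  # angular wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    dk = np.fft.fftn(delta)
    dk[kmag > Lambda] = 0.0                                # sharp-k cut
    return np.fft.ifftn(dk).real
```

Generating data with cutoff $\Lambda_0 > \Lambda$ and analyzing it with $\Lambda$ then means the data contain modes the model never sees, which is the mis-specification the paper tests.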

Read this paper on arXiv…

A. Kostić, N. Nguyen, F. Schmidt, et al.
Fri, 16 Dec 22
63/72

Comments: 28 + 13 pages, 12 figures; comments welcomed!; prepared for submission to JCAP

25,000 optical fiber positioning robots for next-generation cosmology [IMA]

http://arxiv.org/abs/2212.07908


Massively parallel multi-object spectrographs are on the leading edge of cosmology instrumentation. The highly successful Dark Energy Spectroscopic Instrument (DESI), which began survey operations in May 2021, for example, has 5,000 robotically actuated multimode fibers, which deliver light from thousands of individual galaxies and quasars simultaneously to an array of high-resolution spectrographs off-telescope. The redshifts are individually measured, thus providing 3D maps of the Universe in unprecedented detail and enabling precise measurement of dark energy expansion and other key cosmological parameters. Here we present new work in the design and prototyping of the next generation of fiber-positioning robots. At 6.2 mm center-to-center pitch, with 1–2 μm positioning precision, and in a scalable form factor, these devices will enable the next generation of cosmology instruments, scaling up to instruments with 10,000 to 25,000 fiber robots.

Read this paper on arXiv…

J. Silber, D. Schlegel, R. Araujo, et al.
Fri, 16 Dec 22
66/72

Comments: 6 pages, 8 figures, presented at conference Thirty-Seventh Annual Meeting of The American Society for Precision Engineering, 2022-10-14

Trends in Planetary Science research in the Puna and Atacama desert regions: under-representation of local scientific institutions? [IMA]

http://arxiv.org/abs/2212.07863


In 2019, while launching a multidisciplinary research project aimed at developing the Puna de Atacama region as a natural laboratory, investigators at the University of Atacama (Chile) conducted a bibliographic search identifying previously studied geographical points of the region of potential interest for planetary science and astrobiology research. This preliminary work highlighted a significant absence of local institutional involvement in foreign publications. In light of this, a follow-up study was carried out to confirm or refute these first impressions by comparing searches in two bibliographic databases: Web of Science and Scopus. The results show that almost 60% of the publications based directly on data from the Puna, the Altiplano, or the Atacama Desert with objectives related to planetary science or astrobiology do not include any local institutional partner (Argentina, Bolivia, Chile, and Peru). Indeed, beyond the ethical questions raised by international collaborations, Latin American planetary science deserves strategic structuring and networking, as well as a road map at national and continental scales, not only to enhance research, development, and innovation but also to protect an exceptional natural heritage sampling extreme environmental niches on Earth. Examples of successful international collaborations, such as the fields of meteorites, terrestrial analogues, and space exploration in Chile or astrobiology in Mexico, are given as illustrations and possible directions to follow in order to develop planetary sciences in South America.

Read this paper on arXiv…

A. Tavernier, G. Pinto, M. Valenzuela, et al.
Fri, 16 Dec 22
72/72

Comments: N/A

Image-based searches for pulsar candidates using MWA VCS data [HEAP]

http://arxiv.org/abs/2212.06982


Pulsars have proven instrumental in exploring a wide variety of physics. Studying pulsars at low radio frequencies is crucial to further our understanding of their spectral properties and emission mechanisms. The Murchison Widefield Array Voltage Capture System (MWA-VCS) has been routinely used to study and discover pulsars at low frequencies, offering the unique opportunity of recording complex voltages, which can be beamformed offline or imaged at millisecond time resolution. Devising image-based methods for finding pulsar candidates, which can then be verified in beamformed data, can accelerate the complete process and lead to more pulsar detections by reducing the number of tied-array beams required, increasing compute resource efficiency. Despite a factor of ~4 loss in sensitivity, by searching for pulsar candidates in images from the MWA-VCS we can explore a larger parameter space, potentially leading to discoveries of pulsars missed by high-frequency surveys, such as pulsars obscured in high-time-resolution time series by propagation effects. Image-based searches are also essential for probing parts of parameter space inaccessible to traditional beamformed searches with the MWA. In this paper we describe the innovative approach and capability of dual-processing MWA VCS data, i.e. finding pulsar candidates in images and verifying them by forming tied-array beams. We developed and tested image-based methods of finding pulsar candidates based on pulsar properties such as spectral index, polarisation, and variability. The efficiency of these methodologies has been verified on known pulsars, and the main limitations are explained in terms of sensitivity and the low-frequency spectral turnover of some pulsars. No candidates were confirmed to be a new pulsar. This new capability will now be applied to multiple observations to accelerate pulsar discoveries with the MWA and speed up future searches with the SKA-Low.
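
Two of the image-plane properties the abstract mentions, spectral index and variability, reduce to simple metrics. The thresholds below are illustrative assumptions, not the survey's actual cuts:

```python
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    """Power-law spectral index alpha, assuming S is proportional to nu**alpha."""
    return np.log(s2 / s1) / np.log(nu2 / nu1)

def modulation_index(fluxes):
    """Fractional variability of a light curve: std / mean."""
    fluxes = np.asarray(fluxes, dtype=float)
    return fluxes.std() / fluxes.mean()

def is_candidate(s_low, nu_low, s_high, nu_high, lightcurve,
                 alpha_max=-1.4, m_min=0.3):
    """Flag steep-spectrum or strongly variable sources.

    Pulsars are typically steep-spectrum at low frequencies; the cut
    values alpha_max and m_min here are purely illustrative.
    """
    alpha = spectral_index(s_low, nu_low, s_high, nu_high)
    return alpha <= alpha_max or modulation_index(lightcurve) >= m_min
```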

Read this paper on arXiv…

S. Sett, N. D. R. Bhat, M. Sokolowski, et al.
Thu, 15 Dec 22
4/75

Comments: 12 pages, 9 figures, accepted for publication in Publications of the Astronomical Society of Australia (PASA)

Transmission strings: a technique for spatially mapping exoplanet atmospheres around their terminators [EPA]

http://arxiv.org/abs/2212.07294


Exoplanet transmission spectra, which measure the absorption of light passing through a planet’s atmosphere during transit, are most often assessed globally, resulting in a single spectrum per planetary atmosphere. However, the inherent three-dimensional nature of planetary atmospheres, via thermal, chemical, and dynamical processes, can imprint inhomogeneous structure and properties in the observables. In this work, we devise a technique for spatially mapping the atmospheres of exoplanets in transmission. Our approach relaxes the assumption that transit light curves are created from circular stars occulted by circular planets, and instead we allow for flexibility in the planet’s sky-projected shape. We define the planet’s radius to be a single-valued function of angle around its limb, and we refer to this mathematical object as a transmission string. These transmission strings are parameterised as Fourier series, a choice motivated by their adjustable complexity: they generate physically practical shapes while remaining reducible to the classical circular case. The utility of our technique is primarily intended for high-precision multi-wavelength light curves, from which inferences of transmission spectra can be made as a function of angle around a planet’s terminator, enabling analysis of the multidimensional physics at play in exoplanet atmospheres. More generally, the technique can be applied to any transit light curve to derive the shape of the transiting body. The algorithm we develop is available as an open-source package, called harmonica.
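
A minimal sketch of a transmission string, assuming the Fourier series is written as $r(\theta) = a_0 + \sum_k (a_k \cos k\theta + b_k \sin k\theta)$ (the harmonica package's actual parameterisation may differ):

```python
import numpy as np

def transmission_string(theta, a, b):
    """Planet limb radius r(theta) as a truncated Fourier series.

    theta: angles around the limb (radians).
    a: cosine coefficients [a0, a1, ...]; b: sine coefficients [b1, ...].
    With only a0 nonzero this reduces to the classical circular planet,
    while higher harmonics add controlled departures from circularity.
    """
    r = np.full_like(theta, a[0], dtype=float)
    for k in range(1, len(a)):
        r += a[k] * np.cos(k * theta)
    for k in range(1, len(b) + 1):
        r += b[k - 1] * np.sin(k * theta)
    return r
```

Truncating the series caps the shape's complexity, which is what makes the parameterisation both flexible and well behaved in a fit.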

Read this paper on arXiv…

D. Grant and H. Wakeford
Thu, 15 Dec 22
16/75

Comments: 15 pages, 8 figures, accepted for publication in MNRAS

Hyperion: The origin of the stars. A far-UV space telescope for high-resolution spectroscopy over wide fields [SSA]

http://arxiv.org/abs/2212.06869


We present Hyperion, a mission concept recently proposed to the December 2021 NASA Medium Explorer announcement of opportunity. Hyperion explores the formation and destruction of molecular clouds and planet-forming disks in nearby star-forming regions of the Milky Way. It does this using long-slit, high-resolution spectroscopy of emission from fluorescing molecular hydrogen, which is a powerful far-ultraviolet (FUV) diagnostic. Molecular hydrogen (H2) is the most abundant molecule in the universe and a key ingredient for star and planet formation, but it is typically not observed directly because its symmetric structure and lack of a dipole moment mean there are no spectral lines at visible wavelengths and few in the infrared. Hyperion uses molecular hydrogen’s wealth of FUV emission lines to achieve three science objectives: (1) determining how star formation is related to molecular hydrogen formation and destruction at the boundaries of molecular clouds; (2) determining how quickly and by what process massive star feedback disperses molecular clouds; and (3) determining the mechanism driving the evolution of planet-forming disks around young solar-analog stars. Hyperion conducts this science using a straightforward, highly efficient, single-channel instrument design. Hyperion’s instrument consists of a 48 cm primary mirror with an f/5 focal ratio. The spectrometer has two modes, both covering a 138.5-161.5 nm bandpass. The low-resolution mode has a spectral resolution of R>10,000 with a slit length of 65 arcmin, while the high-resolution mode has a spectral resolution of R>50,000 over a slit length of 5 arcmin. Hyperion occupies a two-week, high-Earth, lunar-resonance, TESS-like orbit and conducts 2 weeks of planned observations per orbit, with time for downlinks and calibrations. Hyperion was reviewed as Category I, the highest rating possible, but was not selected.

Read this paper on arXiv…

E. Hamden, D. Schiminovich, S. Nikzad, et al.
Thu, 15 Dec 22
30/75

Comments: Accepted to JATIS, 9 Figures

Novel Conservative Methods for Adaptive Force Softening in Collisionless and Multi-Species N-Body Simulations [GA]

http://arxiv.org/abs/2212.06851


Modeling the self-gravity of collisionless fluids (e.g. ensembles of dark matter, stars, black holes, dust, planetary bodies) in simulations is challenging and requires some force softening. It is often desirable in any high-dynamic-range simulation to allow softenings to evolve adaptively, but this poses unique challenges of consistency, conservation, and accuracy, especially in multi-physics simulations where species with different ‘softening laws’ may interact. We therefore derive a generalized form of the energy-and-momentum-conserving gravitational equations of motion, applicable to arbitrary rules used to determine the force softening, together with consistent associated timestep criteria, interaction terms between species with different softening laws, and arbitrary maximum/minimum softenings. We also derive new methods to maintain better accuracy and conservation when symmetrizing forces between particles. We review and extend previously discussed adaptive softening schemes based on the local neighbor particle density, and present several new schemes for scaling the softening with properties of the gravitational field, i.e. the potential, acceleration, or tidal tensor. We show that the ‘tidal softening’ scheme is not only a physically motivated, translation- and Galilean-invariant, equivalence-principle-respecting (and therefore conservative) method, but imposes negligible timestep or other computational penalties, ensures that pairwise two-body scattering remains small compared to smooth background forces, and can resolve outstanding challenges in properly capturing tidal disruption of substructures (minimizing artificial destruction) while also avoiding excessive N-body heating. We make all of this public in the GIZMO code.
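
As a hedged sketch of the tidal-softening idea, one illustrative scaling sets the softening from the norm of the local tidal tensor, $\epsilon \sim (G m / \lVert \mathbf{T} \rVert)^{1/3}$, clipped to user-supplied bounds; this is a dimensional-analysis toy, and the exact GIZMO expression may differ:

```python
import numpy as np

G = 4.300917270e-6  # gravitational constant in kpc (km/s)^2 / Msun (illustrative units)

def tidal_softening(m, tidal_tensor, eps_min=0.0, eps_max=np.inf):
    """Softening length set by the local tidal field strength.

    eps ~ (G m / ||T||)^(1/3) keeps the pairwise two-body force at the
    softening scale comparable to the smooth tidal force there, so
    two-body scattering stays subdominant. ||T|| is the Frobenius norm.
    """
    tnorm = np.linalg.norm(tidal_tensor)
    eps = (G * m / tnorm) ** (1.0 / 3.0)
    return float(np.clip(eps, eps_min, eps_max))
```

Because the tidal tensor is built from second derivatives of the potential, the scaling is invariant under uniform translations and boosts, which is the conservation-friendly property highlighted in the abstract.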

Read this paper on arXiv…

P. Hopkins, E. Nadler, M. Grudic, et al.
Thu, 15 Dec 22
34/75

Comments: 20 pages, 12 figures, submitted to MNRAS. Comments welcome

Propagating Uncertainties in the SALT3 Model Training Process to Cosmological Constraints [CEA]

http://arxiv.org/abs/2212.06879


Type Ia supernovae (SNe Ia) are standardizable candles that must be modeled empirically to yield cosmological constraints. To understand the robustness of this modeling to variations in the model training procedure, we build an end-to-end pipeline to test the recently developed SALT3 model. We explore the consequences of removing pre-2000s low-$z$ or poorly calibrated $U$-band data, adjusting the amount and fidelity of SN Ia spectra, and using a model-independent framework to simulate the training data. We find the SALT3 model surfaces are improved by having additional spectra and $U$-band data, and can be shifted by $\sim 5\%$ if host galaxy contamination is not sufficiently removed from SN spectra. We find that resulting measurements of $w$ are consistent to within $2.5\%$ for all training variants explored in this work, with the largest shifts coming from variants that add color-dependent calibration offsets or host galaxy contamination to the training spectra, and those that remove pre-2000s low-$z$ data. These results demonstrate that the SALT3 model training procedure is largely robust to reasonable variations in the training data, but that additional attention must be paid to the treatment of spectroscopic data in the training process. We also find that the training procedure is sensitive to the color distributions of the input data; the resulting $w$ measurement can be biased by $\sim2\%$ if the color distribution is not sufficiently wide. Future low-$z$ data, particularly $u$-band observations and high signal-to-noise ratio SN Ia spectra, will help to significantly improve SN Ia modeling in the coming years.

Read this paper on arXiv…

M. Dai, D. Jones, W. Kenworthy, et al.
Thu, 15 Dec 22
41/75

Comments: 16 pages, 10 figures

Interferometric imaging using shared quantum entanglement [CL]

http://arxiv.org/abs/2212.07395


Entanglement-based imaging promises significantly increased imaging resolution by extending the spatial separation of collection apertures used in very-long-baseline interferometry for astronomy and geodesy. We report a table-top quantum-entanglement-based interferometric imaging technique that utilizes two entangled field modes serving as a phase reference between two apertures. The spatial distribution of the source is determined by interfering light collected at each aperture with one of the entangled fields and making joint measurements. This approach provides a route to increase angular resolution while maximizing the information gained per received photon.

Read this paper on arXiv…

M. Brown, M. Allgaier, V. Thiel, et al.
Thu, 15 Dec 22
53/75

Comments: N/A

Machine learning cosmology from void properties [CEA]

http://arxiv.org/abs/2212.06860


Cosmic voids are the largest and most underdense structures in the Universe. Their properties have been shown to encode precious information about the laws and constituents of the Universe. We show that machine learning techniques can unlock the information in void features for cosmological parameter inference. We rely on thousands of void catalogs from the GIGANTES dataset, where every catalog contains an average of 11,000 voids from a volume of $1~(h^{-1}{\rm Gpc})^3$. We focus on three properties of cosmic voids: ellipticity, density contrast, and radius. We train 1) fully connected neural networks on histograms from void properties and 2) deep sets from void catalogs, to perform likelihood-free inference on the value of cosmological parameters. We find that our best models are able to constrain the value of $\Omega_{\rm m}$, $\sigma_8$, and $n_s$ with mean relative errors of $10\%$, $4\%$, and $3\%$, respectively, without using any spatial information from the void catalogs. Our results provide an illustration for the use of machine learning to constrain cosmology with voids.
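
The deep-set construction used for option 2) can be illustrated with a toy permutation-invariant encoder: each void's properties pass through a shared map and the results are pooled, so the summary does not depend on catalog ordering. The weights and sizes below are arbitrary stand-ins, not the trained models:

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_set_features(catalog, W_phi, pool="mean"):
    """Permutation-invariant summary of a void catalog (deep-set style).

    catalog: (n_voids, n_props) array, e.g. columns = ellipticity,
    density contrast, radius. Each void is encoded by a shared map
    phi(x) = relu(x @ W_phi) and the encodings are pooled, so shuffling
    the rows of the catalog leaves the output unchanged.
    """
    phi = np.maximum(catalog @ W_phi, 0.0)  # shared per-void encoder
    return phi.mean(axis=0) if pool == "mean" else phi.sum(axis=0)

# Toy catalog: 100 voids with 3 properties, random encoder weights.
W = rng.normal(size=(3, 8))
cat = rng.normal(size=(100, 3))
```

In the full model a second network maps these pooled features to cosmological parameters; the pooling step is what lets the network ingest catalogs of varying size without imposing an order on the voids.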

Read this paper on arXiv…

B. Wang, A. Pisani, F. Villaescusa-Navarro, et al.
Thu, 15 Dec 22
57/75

Comments: 11 pages, 4 figures, 1 table, to be submitted to ApJ

GALLIFRAY — A geometric modeling and parameter estimation framework for black hole images using Bayesian techniques [IMA]

http://arxiv.org/abs/2212.06827


Recent observations of the galactic centers of M87 and the Milky Way with the Event Horizon Telescope have ushered in a new era of black hole based tests of fundamental physics using very long baseline interferometry (VLBI). Being a nascent field, there are several different modeling and analysis approaches in vogue (e.g., geometric and physical models, visibility and closure amplitudes, agnostic and multimessenger priors). We present \texttt{GALLIFRAY}, an open-source Python-based framework for estimation/extraction of parameters using VLBI data. It is developed with modularity, efficiency, and adaptability as the primary objectives. This article outlines the design and usage of \texttt{GALLIFRAY}. As an illustration, we fit a geometric and a physical model to simulated datasets using Markov chain Monte Carlo sampling and find good convergence of the posterior distribution. We conclude with an outline of further enhancements currently in development.

Read this paper on arXiv…

S. Saurabh and S. Nampalliwar
Thu, 15 Dec 22
62/75

Comments: 10 pages, 5 figures. Comments are welcome!

Comparison of dynamical and kinematic reference frames via pulsar positions from timing, Gaia, and interferometric astrometry [IMA]

http://arxiv.org/abs/2212.07178


Pulsars are special objects whose positions can be determined independently from timing, radio interferometric, and Gaia astrometry at sub-milliarcsecond (mas) precision; thus, they provide a unique way to monitor the link between dynamical and kinematic reference frames. We aimed to assess the orientation consistency between the dynamical reference frame represented by the planetary ephemeris and the kinematic reference frames constructed by Gaia and VLBI through pulsar positions. We identified 49 pulsars in Gaia Data Release 3 and 62 pulsars with very long baseline interferometry (VLBI) positions from the PSR$\pi$ and MSPSR$\pi$ projects and searched for the published timing solutions of these pulsars. We then compared pulsar positions measured by timing, VLBI, and Gaia to estimate the orientation offsets of the ephemeris frames with respect to the Gaia and VLBI reference frames by iterative fitting. We found orientation offsets of $\sim$10 mas in the DE200 frame with respect to the Gaia and VLBI frame. Our results depend strongly on the subset used in the comparison and could be biased by underestimated errors in the archival timing data, reflecting the limitation of using the literature timing solutions to determine the frame rotation.
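
Estimating the orientation offset between two frames from matched source positions reduces to a linear least-squares fit for a small rotation vector. A minimal sketch using the standard small-rotation model (this is the textbook formulation, not the authors' full iterative pipeline):

```python
import numpy as np

def fit_frame_rotation(ra, dec, dra_cosdec, ddec):
    """Least-squares estimate of a small rotation (r1, r2, r3) between frames.

    For a small rotation vector R, position offsets between the frames obey
      d(alpha) cos(delta) = -r1 cos(a) sin(d) - r2 sin(a) sin(d) + r3 cos(d)
      d(delta)            =  r1 sin(a)        - r2 cos(a)
    Both equations are stacked for every matched source and solved jointly;
    angles and offsets share one unit (e.g. offsets in mas give R in mas).
    """
    A = np.vstack([
        np.column_stack([-np.cos(ra) * np.sin(dec),
                         -np.sin(ra) * np.sin(dec),
                          np.cos(dec)]),
        np.column_stack([np.sin(ra), -np.cos(ra), np.zeros_like(ra)]),
    ])
    y = np.concatenate([dra_cosdec, ddec])
    R, *_ = np.linalg.lstsq(A, y, rcond=None)
    return R
```

The strong subset dependence the authors report corresponds to this system being sensitive to which sources (and which timing errors) populate `y`.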

Read this paper on arXiv…

N. Liu, Z. Zhu, J. Antoniadis, et al.
Thu, 15 Dec 22
63/75

Comments: 22 pages, 15 figures, 3 tables, accepted for publication at A&A

The Standard RV Equation uses $ω_p$, not $ω_*$ [EPA]

http://arxiv.org/abs/2212.06966


Since the discovery of the first exoplanet orbiting a main-sequence star, astronomers have used stellar radial velocity (RV) measurements to infer the orbital properties of planets. For a star orbited by a single planet, the stellar orbit is a dilation and $180^\circ$ rotation of the planetary orbit. Many of the orbital properties of the star are identical to those of the planet, including the orbital period, eccentricity, inclination, longitude of the ascending node, time of periastron passage, and mean anomaly. There is a notable exception to this pattern: the argument of periastron, $\omega$, which is defined as the angle between the periapsis of an orbiting body and its ascending node; in other words, $\omega$ describes the orientation of a body’s elliptical path within the orbital plane. For a star-planet system, the argument of periastron of the star ($\omega_*$) is $180^\circ$ offset from the argument of periastron of the planet ($\omega_p$). For a conventional coordinate system with $\hat{z}$ pointed away from the observer, the standard RV equation is defined with $\omega_p$; however, we find that many interpretations of the RV equation are not self-consistent. For instance, the commonly used radial velocity modeling toolkit \texttt{RadVel} relies on an RV equation that uses the standard $\omega_p$, but its documentation states that it instead models $\omega_*$. As a result, we identify 54 published papers reporting a total of 265 $\omega$ values that are likely $180^\circ$ offset from their true values, and the scope of this issue is potentially even larger.
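
The $180^\circ$ ambiguity can be checked numerically: in the standard RV variation $K[\cos(\nu+\omega) + e\cos\omega]$, shifting $\omega$ by $180^\circ$ simply flips the sign of the signal, which is why a fit that conflates $\omega_p$ and $\omega_*$ still matches the data but reports $\omega$ offset by $180^\circ$:

```python
import numpy as np

def radial_velocity(nu, K, e, omega):
    """Keplerian RV variation versus true anomaly nu.

    K: semi-amplitude, e: eccentricity, omega: argument of periastron.
    Standard form: K * (cos(nu + omega) + e * cos(omega)).
    """
    return K * (np.cos(nu + omega) + e * np.cos(omega))
```

Since $\cos(x+\pi) = -\cos x$ term by term, `radial_velocity(nu, K, e, w + pi)` equals `-radial_velocity(nu, K, e, w)` exactly, so absorbing the star/planet sign convention into $\omega$ reproduces the same curve with the wrong angle.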

Read this paper on arXiv…

A. Householder and L. Weiss
Thu, 15 Dec 22
64/75

Comments: 9 pages,1 figure, 1 table

The Direct Mid-Infrared Detectability of Habitable-zone Exoplanets Around Nearby Stars [EPA]

http://arxiv.org/abs/2212.06993


Giant planets within the habitable zones of the closest several stars can currently be imaged with ground-based telescopes. Within the next decade, the Extremely Large Telescopes (ELTs) will begin to image the habitable zones of a greater number of nearby stars with much higher sensitivity, potentially imaging exo-Earths around the closest stars. To determine the most promising candidates for observations over the next decade, we establish a theoretical framework for the direct detectability of Earth- to super-Jovian-mass exoplanets in the mid-infrared based on available atmospheric and evolutionary models. Of the 83 closest BAFGK-type stars, we select 37 FGK-type stars within 10 pc and 34 BA-type stars within 30 pc with reliable age constraints. We prioritize targets based on a parametric model of a planet’s effective temperature given the star’s luminosity, distance, and age, and the planet’s orbital semi-major axis, radius, and albedo. We then predict the most likely planets to be detectable with current 8-meter telescopes and with a 39-m ELT with up to 100 hours of observation per star. Putting this together, we recommend observation times needed for the detection of habitable-zone exoplanets, spanning the range from very nearby temperate Earth-sized planets to more distant young giant planets. We then recommend ideal initial targets for current telescopes and the upcoming ELTs.
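
A minimal sketch of one ingredient of such a parametric model: the irradiation-equilibrium temperature from the star's luminosity, the orbital distance, and the albedo (full heat redistribution assumed; the paper's model also folds in age-dependent internal heat, which is omitted here):

```python
import numpy as np

SIGMA_SB = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26           # solar luminosity, W
AU = 1.495978707e11        # astronomical unit, m

def equilibrium_temperature(lum_lsun, a_au, albedo=0.3):
    """Planet equilibrium temperature for full heat redistribution.

    T_eq = [F (1 - A) / (4 sigma)]^(1/4), with F the incident stellar flux
    at semi-major axis a. Bond albedo `albedo` is the fraction reflected.
    """
    flux = lum_lsun * L_SUN / (4.0 * np.pi * (a_au * AU) ** 2)
    return (flux * (1.0 - albedo) / (4.0 * SIGMA_SB)) ** 0.25
```

For an Earth twin (L = 1 L_sun, a = 1 AU, A = 0.3) this gives roughly 255 K, the familiar airless-equilibrium value, which is why mid-infrared detectability peaks for warm, young, or close-in planets.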

Read this paper on arXiv…

Z. Werber, K. Wagner and D. Apai
Thu, 15 Dec 22
67/75

Comments: Responded to the first highly positive referee report

A Near-Infrared Pyramid Wavefront Sensor for the MMT [IMA]

http://arxiv.org/abs/2212.06904


The MMTO Adaptive optics exoPlanet characterization System (MAPS) is an ongoing upgrade to the 6.5-meter MMT Observatory on Mount Hopkins in Arizona. MAPS includes an upgraded adaptive secondary mirror (ASM), upgrades to the ARIES spectrograph, and a new AO system containing both an optical and a near-infrared (NIR; 0.9-1.8 um) pyramid wavefront sensor (PyWFS). The NIR PyWFS will utilize an IR-optimized double pyramid coupled with a SAPHIRA detector: a low-read-noise electron avalanche photodiode (eAPD) array. This NIR PyWFS will improve MAPS’s sky coverage by an order of magnitude by allowing redder guide stars (e.g. K and M dwarfs or highly obscured stars in the Galactic plane) to be used. To date, the custom-designed cryogenic SAPHIRA camera has been fully characterized and can reach sub-electron read noise at high avalanche gain. In order to test the performance of the camera in a closed-loop environment prior to delivery to the observatory, an AO testbed was designed and constructed. In addition to testing the SAPHIRA’s performance, the testbed will be used to test and further develop the proposed on-sky calibration procedure for MMTO’s ASM. We will report on the anticipated performance improvements from our NIR PyWFS, the SAPHIRA’s closed-loop performance on our testbed, and the status of our ASM calibration procedure.

Read this paper on arXiv…

J. Taylor, S. Sivanandam, N. Anugu, et al.
Thu, 15 Dec 22
71/75

Comments: SPIE Proceedings, Astronomical Telescopes and Instrumentation, July 2022, 10 pages, 8 figures

Overview of the Observing System and Initial Scientific Accomplishments of the East Asian VLBI Network (EAVN) [IMA]

http://arxiv.org/abs/2212.07040


The East Asian VLBI Network (EAVN) is an international VLBI facility in East Asia, operated under mutual collaboration between East Asian countries as well as some Southeast Asian and European countries. EAVN currently consists of 16 radio telescopes and three correlators located in China, Japan, and Korea, and is operated mainly at three frequency bands, 6.7, 22, and 43 GHz, with a longest baseline of 5078 km, yielding a highest angular resolution of 0.28 milliarcseconds at 43 GHz. One of the distinct capabilities of EAVN is simultaneous multi-frequency data reception at nine telescopes, which enables us to employ the frequency phase transfer technique to obtain better sensitivity at higher observing frequencies. EAVN started its open-use program in the second half of 2018, providing a total observing time of more than 1100 hours per year. EAVN fills a geographical gap in the global VLBI array, enabling contiguous high-resolution VLBI observations. EAVN has produced various scientific accomplishments, especially in observations of active galactic nuclei, evolved stars, and star-forming regions. These activities motivate us to initiate the launch of the ‘Global VLBI Alliance’ to provide opportunities for VLBI observations with the longest baselines on Earth.
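
The quoted resolution follows directly from the interferometer fringe spacing $\theta \approx \lambda/B$; a quick numerical check of the 0.28 mas figure:

```python
import numpy as np

C = 299792458.0                       # speed of light, m/s
RAD_TO_MAS = 180.0 / np.pi * 3.6e6    # radians -> milliarcseconds

def fringe_resolution_mas(freq_hz, baseline_m):
    """Fringe spacing lambda / B of an interferometer, in milliarcseconds."""
    return (C / freq_hz) / baseline_m * RAD_TO_MAS
```

At 43 GHz the wavelength is about 7 mm, and dividing by the 5078 km longest baseline gives approximately 0.28 mas, consistent with the value stated above; the lower bands (6.7 and 22 GHz) give proportionally coarser resolution.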

Read this paper on arXiv…

K. Akiyama, J. Algaba, T. An, et al.
Thu, 15 Dec 22
75/75

Comments: 27 pages, appeared in Galaxies special issue ‘Challenges in Understanding Black Hole Powered Jets with VLBI’ as an invited review

Impact of MgII interstellar medium absorption on near-ultraviolet exoplanet transit measurements [EPA]

http://arxiv.org/abs/2212.06192


Ultraviolet (UV) transmission spectroscopy probes atmospheric escape, which has a significant impact on planetary atmospheric evolution. If unaccounted for, interstellar medium (ISM) absorption at the position of specific UV lines can bias transit depth measurements, and thus potentially affect the (non-)detection of features in transmission spectra. Ultimately, this is connected to the so-called “resolution-linked bias” (RLB) effect. We present a parametric study quantifying the impact of unresolved or unconsidered ISM absorption on transit depth measurements at the position of the MgII h&k resonance lines (2802.705 {\AA} and 2795.528 {\AA}, respectively) in the near-ultraviolet spectral range. We consider main-sequence stars of different spectral types and vary the shape and amount of chromospheric emission, ISM absorption, and planetary absorption, as well as their relative velocities. We also evaluate the role played by the integration bin and spectral resolution. We present an open-source tool enabling one to quantify the impact of unresolved or unconsidered MgII ISM absorption on transit depth measurements, and apply it to several systems that have already been observed or soon will be. On average, we find that ignoring ISM absorption leads to biases in the MgII transit depth measurements comparable to the uncertainties of the observations published to date. However, accounting for the bias induced by ISM absorption may become necessary when analysing observations obtained with next-generation space telescopes with UV coverage (e.g. LUVOIR, HabEx), which will provide transmission spectra with significantly smaller uncertainties than current facilities (e.g. HST).

Read this paper on arXiv…

A. Sreejith, L. Fossati, P. Cubillos, et al.
Wed, 14 Dec 22
55/69

Comments: Accepted for publication in MNRAS

Detecting dense-matter phase transition signatures in neutron star mass-radius measurements as data anomalies using normalising flows [HEAP]

http://arxiv.org/abs/2212.05480


Observations of neutron stars may be used to study aspects of extremely dense matter, specifically the possibility of phase transitions to exotic states such as deconfined quark matter.
We present a novel data analysis method for detecting signatures of dense-matter phase transitions in sets of mass-radius measurements, and study its sensitivity with respect to the size of observational errors and the number of observations. The method is based on machine-learning anomaly detection built on the normalizing-flows technique: an algorithm trained on samples of astrophysical observations featuring no phase transition signatures interprets a phase transition sample as an “anomaly”. For this study, we focus on dense-matter equations of state leading to detached branches of mass-radius sequences (strong phase transitions), use an astrophysically-informed neutron-star mass function, and consider various magnitudes of observational errors and sample sizes.
The method reliably detects mass-radius relations with phase transition signatures, with sensitivity increasing as measurement errors decrease and the number of observations grows. We discuss marginal cases, in which the phase transition mass is located near the edges of the mass function range. Evaluated on the current state-of-the-art selection of real electromagnetic and gravitational-wave measurements, the method gives inconclusive results, which we attribute to the small available sample size, large observational errors, and complex systematics.
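The anomaly-detection logic can be illustrated with a toy sketch (all numbers are invented, and a diagonal Gaussian stands in for the normalizing flow purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the detection logic (a diagonal Gaussian replaces the
# normalizing flow, and all numbers are invented): a density model is
# trained on mass-radius samples WITHOUT a phase transition, and a sample
# is flagged as an "anomaly" when its log-likelihood falls below a
# threshold set from the training data.

def fit_density(samples):
    mu = samples.mean(axis=0)
    sigma = samples.std(axis=0) + 1e-9
    return mu, sigma

def log_likelihood(model, x):
    mu, sigma = model
    z = (x - mu) / sigma
    return -0.5 * np.sum(z**2 + np.log(2.0 * np.pi * sigma**2), axis=-1)

# "No phase transition": one smooth mass-radius branch around R ~ 12 km.
masses = rng.uniform(1.0, 2.0, size=(2000, 1))
radii = 12.0 + 0.5 * rng.normal(size=(2000, 1))
model = fit_density(np.hstack([masses, radii]))

# Detection threshold: 1st percentile of the training log-likelihoods.
train_ll = log_likelihood(model, np.hstack([masses, radii]))
threshold = np.percentile(train_ll, 1.0)

# "Strong phase transition": a point on a detached branch at small radius.
is_anomaly = log_likelihood(model, np.array([[1.8, 9.0]]))[0] < threshold
print(is_anomaly)  # True: the detached-branch point is flagged
```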

Read this paper on arXiv…

F. Morawski and M. Bejger
Tue, 13 Dec 22
2/105

Comments: 9 pages, 6 figures, accepted for publication in Physical Review C

Millimeter/submillimeter VLBI with a Next Generation Large Radio Telescope in the Atacama Desert [IMA]

http://arxiv.org/abs/2212.05118


The proposed next-generation Event Horizon Telescope (ngEHT) concept envisions imaging various astronomical sources on microarcsecond scales in unprecedented detail, with at least two orders of magnitude improvement in image dynamic range over the Event Horizon Telescope (EHT). A key technical component of ngEHT is the use of large-aperture telescopes to anchor the entire array, allowing less sensitive stations to be connected through highly sensitive fringe detections and thus form a dense network across the planet. Here, we introduce two projects for next-generation large radio telescopes planned for the 2030s on the Chajnantor Plateau in the Atacama Desert in northern Chile: the Large Submillimeter Telescope (LST) and the Atacama Large Aperture Submillimeter Telescope (AtLAST). Both are designed to have a 50-meter diameter and to operate at the planned ngEHT frequency bands of 86, 230 and 345\,GHz. A 50\,m aperture co-located with two existing EHT stations, the Atacama Large Millimeter/Submillimeter Array (ALMA) and the Atacama Pathfinder Experiment (APEX) telescope, at the superb observing site of the Chajnantor Plateau will offer excellent capabilities for highly sensitive, multi-frequency, and time-agile millimeter very long baseline interferometry (VLBI) observations with accurate data calibration, relevant to key ngEHT science cases. Beyond ngEHT, the unique location in Chile will substantially improve the angular resolution of the planned Next Generation Very Large Array in North America, or of any future global millimeter VLBI array, if combined. LST and AtLAST will thus be key elements enabling transformative science with next-generation millimeter/submillimeter VLBI arrays.

Read this paper on arXiv…

K. Akiyama, J. Kauffmann, L. Matthews, et al.
Tue, 13 Dec 22
3/105

Comments: 8 pages, 1 figure, submitted to the special issue of Galaxies “From Vision to Instrument: Creating a Next-Generation Event Horizon Telescope for a New Era of Black Hole Science” as a ngEHT white paper

Free-free absorption parameters of Cassiopeia A from low-frequency interferometric observations [HEAP]

http://arxiv.org/abs/2212.06104


Context. Cassiopeia A is one of the most extensively studied supernova remnants (SNRs) in our Galaxy. Analysing its spectral features with low-frequency observations plays an important role in understanding the evolution of the radio source, as its synchrotron emission propagates to observers through the SNR environment and the interstellar medium. Aims. In this paper we present measurements of the integrated spectrum of Cas A to characterize the properties of free-free absorption towards this SNR. We also add new measurements to track its slowly decreasing integrated flux density. Methods. We use the Giant Ukrainian Radio Telescope (GURT) to measure the continuum spectrum of Cassiopeia A within the frequency range 16-72 MHz. The radio flux density of Cassiopeia A relative to the reference source, the radio galaxy Cygnus A, was measured from May to October 2019 with two subarrays of the GURT used as a two-element correlation interferometer. Results. We determine the emission measure, electron temperature, and average ion charge for both the internal and the external absorbing ionized gas towards Cassiopeia A. Generally, their values are close to those suggested by Arias et al. (2018), although some differ slightly. In the absence of clumping, we find unshocked ejecta of M = 2.61 solar masses at an electron density of 15.3 cm^-3 and a gas temperature of T = 100 K. If the clumping factor is 0.67, the unshocked ejecta mass is 0.96 solar masses at an electron density of 18.7 cm^-3. Conclusions. The integrated flux density spectrum of Cassiopeia A obtained with the GURT interferometric observations is consistent with the theoretical model within measurement errors, and is also reasonably consistent with other recent results in the literature.

Read this paper on arXiv…

L. Stanislavsky, I. Bubnov, A. Konovalenko, et al.
Tue, 13 Dec 22
32/105

Comments: 9 pages, 9 figures, 2 tables

Outdoor Systems Performance and Upgrade [CL]

http://arxiv.org/abs/2212.05131


Over the last two decades, the possibility of using RPCs in outdoor systems has increased considerably. Our group has participated in this effort, having installed several systems, and continues to work on their optimization while studying and developing new approaches that can extend the use of RPCs to outdoor applications.
In particular, some detectors deployed in the field at the Pierre Auger Observatory in 2019 remained inactive, awaiting the commissioning of support systems. During the pandemic the detectors were left without gas flow for more than two years, but were recently reactivated with no major problems.
The LouMu project combines particle physics and geophysics to map large geologic structures using muon tomography. The development of the RPC system used and the data from the last two years will be presented.
Finally, recent advances in a large-area (1 m$^2$) double-gap sealed RPC will be presented.

Read this paper on arXiv…

L. Lopes, S. Andringa, P. Assis, et al.
Tue, 13 Dec 22
37/105

Comments: N/A

Target Detection Framework for Lobster Eye X-Ray Telescopes with Machine Learning Algorithms [IMA]

http://arxiv.org/abs/2212.05497


Lobster eye telescopes are ideal monitors for detecting X-ray transients, because they can observe celestial objects over a wide field of view in the X-ray band. However, images obtained by lobster eye telescopes are modified by their unique point spread functions, making it hard to design a high-efficiency target detection algorithm. In this paper, we integrate several machine learning algorithms to build a target detection framework for data obtained by lobster eye telescopes. The framework first generates two 2D images with different pixel scales from the positions of photons on the detector. An algorithm based on morphological operations and two neural networks then detects candidate celestial objects of different fluxes in these 2D images. Finally, a random forest algorithm selects the final detections from the candidates obtained in the previous steps. Tested with simulated data for the Wide-field X-ray Telescope onboard the Einstein Probe, our detection framework achieves over 94% purity and over 90% completeness for targets with fluxes above 3 mCrab ($9.6 \times 10^{-11}$ erg cm$^{-2}$ s$^{-1}$), and more than 94% purity with moderate completeness for fainter targets, at acceptable computational cost. The framework proposed in this paper can serve as a reference for data processing methods developed for other lobster eye X-ray telescopes.
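The framework's first step can be sketched as follows (detector size, pixel scales and the simulated source are assumptions for illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch of the framework's first step (detector size, pixel
# scales and the simulated source are assumptions, not the authors' code):
# photon positions on the detector are binned into two 2D images with
# different pixel scales, from which candidates are later extracted.

def photons_to_images(x, y, det_size=64.0):
    coarse, _, _ = np.histogram2d(x, y, bins=32, range=[[0, det_size]] * 2)
    fine, _, _ = np.histogram2d(x, y, bins=128, range=[[0, det_size]] * 2)
    return coarse, fine

# Simulated photon list: uniform background plus a compact source at (21, 41).
x = np.concatenate([rng.uniform(0, 64, 5000), 21.0 + 0.3 * rng.normal(size=500)])
y = np.concatenate([rng.uniform(0, 64, 5000), 41.0 + 0.3 * rng.normal(size=500)])

coarse, fine = photons_to_images(x, y)
print(coarse.shape, fine.shape)  # (32, 32) (128, 128)

# The compact source dominates one coarse pixel (bin size 2 units/pixel).
bx, by = np.unravel_index(coarse.argmax(), coarse.shape)
print(int(bx), int(by))  # 10 20
```

A real pipeline would pass both images to the morphological and neural-network stages; the two pixel scales let bright and faint candidates be found at different resolutions.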

Read this paper on arXiv…

P. Jia, W. Liu, Y. Liu, et al.
Tue, 13 Dec 22
48/105

Comments: Accepted by the APJS Journal. Full source code could be downloaded from the China VO with DOI of this https URL Docker version of the code could be obtained under request to the corresponding author

SIPGI: an interactive pipeline for spectroscopic data reduction [IMA]

http://arxiv.org/abs/2212.05580


SIPGI is a spectroscopic pipeline for the reduction of optical/near-infrared data acquired by slit-based spectrographs. SIPGI is a complete spectroscopic data reduction environment that retains the high level of flexibility and accuracy typical of standard “by-hand” reduction methods, but with a significantly higher level of efficiency. This is achieved by exploiting three main concepts: 1) a built-in data organiser to classify the data, together with a graphical interface; 2) the instrument model (an analytic description of the main calibration relations); 3) the design and flexibility of the reduction recipes: the number of tasks required to perform a complete reduction is minimised, while preserving the ability to verify the accuracy of the main stages of the data-reduction process. The current version of SIPGI manages data from the MODS and LUCI spectrographs mounted at the Large Binocular Telescope (LBT), with the intention of extending SIPGI to support other through-slit spectrographs.

Read this paper on arXiv…

S. Bisogni, A. Gargiulo, M. Fumana, et al.
Tue, 13 Dec 22
78/105

Comments: 4 pages, 3 figures, to appear in proceedings of the Astronomical Data Analysis Software and Systems (ADASS) XXXII, virtual conference held 31 October – 4 November 2022

Photometric calibration in u-band using blue halo stars [IMA]

http://arxiv.org/abs/2212.05135


We develop a method to calibrate u-band photometry based on the observed color of blue galactic halo stars. The galactic halo stars belong to an old stellar population of the Milky Way and have relatively low metallicity. The “blue tip” of the halo population — the main sequence turn-off (MSTO) stars — is known to have a relatively uniform intrinsic edge u-g color with only slow spatial variation. In SDSS data, the observed variation is correlated with galactic latitude, which we attribute to contamination by higher-metallicity disk stars and fit with an empirical curve. This curve can then be used to calibrate u-band imaging if g-band imaging of matching depth is available. Our approach can be applied to single-field observations at $|b| > 30^\circ$, and removes the need for standard star observations or overlap with calibrated u-band imaging. We include in our method the calibration of g-band data with ATLAS-Refcat2. We test our approach on stars in KiDS DR 4, ATLAS DR 4, and DECam imaging from the NOIRLab Source Catalog (NSC DR2), and compare our calibration with SDSS. For this process, we use synthetic magnitudes to derive the color equations between these datasets, in order to improve zero-point accuracy. We find an improvement for all datasets, reaching a zero-point precision of 0.016 mag for KiDS (compared to the original 0.033 mag), 0.020 mag for ATLAS (originally 0.027 mag), and 0.016 mag for DECam (originally 0.041 mag). Thus, this method alone reaches the goal of 0.02 mag photometric precision in u-band for the Rubin Observatory’s Legacy Survey of Space and Time (LSST).
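The calibration idea can be sketched numerically (the latitude curve, edge percentile and all numbers below are invented placeholders, not the paper's empirical fit):

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal sketch of the calibration idea (the latitude curve, percentile
# choice and all numbers are invented placeholders, not the paper's fit):
# the observed blue edge of the u-g color distribution of halo stars is
# compared with the expected edge at the field's galactic latitude, and
# the difference is applied as a u-band zero-point correction, assuming
# the g band is already calibrated.

def expected_blue_tip(b_deg):
    # Placeholder for the empirical latitude dependence of the blue tip.
    return 0.90 + 0.10 * np.cos(np.radians(b_deg))

def u_zeropoint_offset(u_inst, g_cal, b_deg, edge_percentile=2.0):
    observed_edge = np.percentile(u_inst - g_cal, edge_percentile)
    return expected_blue_tip(b_deg) - observed_edge

# Simulated field at b = 45 deg whose instrumental u is 0.25 mag off.
b, true_zp = 45.0, 0.25
colors = expected_blue_tip(b) + np.abs(0.4 * rng.normal(size=4000))
g_cal = rng.uniform(16.0, 20.0, 4000)
u_inst = g_cal + colors - true_zp

zp = u_zeropoint_offset(u_inst, g_cal, b)
print(abs(zp - true_zp) < 0.05)  # the offset is recovered to a few hundredths
```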

Read this paper on arXiv…

S. Liang and A. Linden
Tue, 13 Dec 22
90/105

Comments: Accepted for publication in MNRAS

A Green Bank Telescope search for narrowband technosignatures between 1.1-1.9 GHz during 12 Kepler planetary transits [EPA]

http://arxiv.org/abs/2212.05137


A growing avenue for determining the prevalence of life beyond Earth is to search for “technosignatures” from extraterrestrial intelligences/agents. Technosignatures require significant energy to be visible across interstellar space and thus intentional signals might be concentrated in frequency, in time, or in space, to be found in mutually obvious places. Therefore, it could be advantageous to search for technosignatures in parts of parameter space that are mutually-derivable to an observer on Earth and a distant transmitter. In this work, we used the L-band (1.1-1.9 GHz) receiver on the Robert C. Byrd Green Bank Telescope (GBT) to perform the first technosignature search pre-synchronized with exoplanet transits, covering 12 Kepler systems. We used the Breakthrough Listen turboSETI pipeline to flag narrowband hits ($\sim$3 Hz) using a maximum drift rate of $\pm$614.4 Hz/s and a signal-to-noise threshold of 5 – the pipeline returned $\sim 3.4 \times 10^5$ apparently-localized features. Visual inspection by a team of citizen scientists ruled out 99.6% of them. Further analysis found 2 signals-of-interest that warrant follow-up, but no technosignatures. If the signals-of-interest are not re-detected in future work, it will imply that the 12 targets in the search are not producing transit-aligned signals from 1.1-1.9 GHz with transmitter powers $>$60 times that of the former Arecibo radar. This search debuts a range of innovative technosignature techniques: citizen science vetting of potential signals-of-interest, a sensitivity-aware search out to extremely high drift rates, a more flexible method of analyzing on-off cadences, and an extremely low signal-to-noise threshold.
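The core of a narrowband drift-rate search of the kind turboSETI performs can be illustrated with a toy sketch (array sizes, drift units and injected SNR are arbitrary illustrations, not the survey's parameters):

```python
import numpy as np

# Toy version of a narrowband drift-rate search of the kind turboSETI
# performs (sizes, drift units and SNR are arbitrary illustrations): a
# drifting tone is recovered by summing the dynamic spectrum along trial
# drift rates and keeping the trial with the largest integrated power.

rng = np.random.default_rng(3)
n_time, n_freq, true_drift = 64, 512, 2  # drift in bins per time step

spec = rng.normal(0.0, 1.0, (n_time, n_freq)) ** 2  # noise power
for t in range(n_time):
    spec[t, 100 + true_drift * t] += 25.0  # injected drifting tone

def dedrift_search(spec, trial_drifts):
    best_drift, best_power = None, -np.inf
    for d in trial_drifts:
        # Shift each time row back by d*t bins, then integrate over time.
        summed = np.zeros(spec.shape[1])
        for t, row in enumerate(spec):
            summed += np.roll(row, -d * t)
        if summed.max() > best_power:
            best_drift, best_power = d, summed.max()
    return best_drift

print(dedrift_search(spec, range(-4, 5)))  # recovers the injected drift, 2
```

Only at the correct trial drift do the tone's samples align in one frequency bin, so its integrated power stands far above the noise.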

Read this paper on arXiv…

S. Sheikh, S. Kanodia, E. Lubar, et al.
Tue, 13 Dec 22
94/105

Comments: 18 pages, 11 figures

A unified model for the LISA measurements and instrument simulations [CL]

http://arxiv.org/abs/2212.05351


LISA is a space-based mHz gravitational-wave observatory, with a planned launch in 2034. It is expected to be the first detector of its kind, and will present unique challenges in instrumentation and data analysis. An accurate pre-flight simulation of LISA data is a vital part of the development of both the instrument and the analysis methods. The simulation must include a detailed model of the full measurement and analysis chain, capturing the main features that affect the instrument performance and processing algorithms. Here, we propose a new model that includes, for the first time, proper relativistic treatment of reference frames with realistic orbits; a model for onboard clocks and clock synchronization measurements; proper modeling of total laser frequencies, including laser locking, frequency planning and Doppler shifts; better treatment of onboard processing and updated noise models. We then introduce two implementations of this model, LISANode and LISA Instrument. We demonstrate that TDI processing successfully recovers gravitational-wave signals from the significantly more realistic and complex simulated data. LISANode and LISA Instrument are already widely used by the LISA community and, for example, currently provide the mock data for the LISA Data Challenges.

Read this paper on arXiv…

J. Bayle and O. Hartwig
Tue, 13 Dec 22
103/105

Comments: 27 pages, 16 figures, 3 tables

A superconducting nanowire photon number resolving four-quadrant detector-based Gigabit deep-space laser communication receiver prototype [CL]

http://arxiv.org/abs/2212.04927


Deep space exploration requires transferring huge amounts of data quickly from very distant targets. Laser communication is a promising technology that can offer data rates orders of magnitude faster than conventional microwave communication, thanks to the fundamentally narrow divergence of light. This study demonstrated a photon-sensitive receiver prototype with an over-Gigabit data rate, immunity to strong background photon noise, and simultaneous tracking ability. These advantages are inherited from a jointly optimized superconducting nanowire single-photon detector (SNSPD) array, designed as a four-quadrant structure with each quadrant capable of resolving six photons. Installed in a free-space-coupled, low-vibration cryostat, the system detection efficiency reached 72.7%, the detector efficiency was 97.5%, and the total photon counting rate was 1.6 Gcps. Communication performance was tested with the pulse position modulation (PPM) format, and a series of signal processing methods were introduced to maximize the performance of the forward error correction (FEC) code. Consequently, the receiver exhibits a faster data rate and about twofold better sensitivity (1.76 photons/bit at 800 Mbps and 3.40 photons/bit at 1.2 Gbps) compared to previously reported results (3.18 photons/bit at 622 Mbps for the Lunar Laser Communication Demonstration). Furthermore, communication in strong background noise and with simultaneous tracking was demonstrated, addressing the challenges of daylight operation and accurate tracking of a dim beacon in deep-space scenarios.
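The PPM decoding principle used by such photon-counting receivers can be sketched in a few lines (slot count and photon rates here are invented, not the paper's link budget):

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of pulse-position modulation (PPM) decoding as used in photon-
# counting laser links (slot count and photon rates here are invented):
# each symbol encodes log2(M) bits in which of M time slots the pulse
# arrives, and the decoder picks the slot with the most detected photons.

M, n_symbols = 16, 1000          # 16-ary PPM: 4 bits per symbol
symbols = rng.integers(0, M, size=n_symbols)

# Detected photons per slot: Poisson background in every slot, plus
# ~3 signal photons on average in the transmitted slot.
counts = rng.poisson(0.05, size=(n_symbols, M))
counts[np.arange(n_symbols), symbols] += rng.poisson(3.0, size=n_symbols)

decoded = counts.argmax(axis=1)
error_rate = float(np.mean(decoded != symbols))
print(error_rate < 0.15)  # only a small fraction of symbols is mis-decoded
```

In a real link, the residual symbol errors (mostly symbols where no signal photon was detected) are what the FEC code is designed to correct.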

Read this paper on arXiv…

H. Hao, Q. Zhao, Y. Huang, et al.
Mon, 12 Dec 22
2/52

Comments: N/A

pyTANSPEC: A Data Reduction Package for TANSPEC [IMA]

http://arxiv.org/abs/2212.04815


The TIFR-ARIES Near Infrared Spectrometer (TANSPEC), mounted on India’s largest ground-based telescope, the 3.6-m Devasthal Optical Telescope at Nainital, India, provides simultaneous wavelength coverage from 0.55 to 2.5 micron. The TANSPEC offers three modes of observation: imaging with various filters, spectroscopy in the low-resolution prism mode with derived R~ 100-400, and the high-resolution cross-dispersed mode (XD-mode) with derived median R~ 2750 for a slit of width 0.5 arcsec. In the XD-mode, ten cross-dispersed orders are packed onto the 2048 x 2048 pixel detector to cover the full wavelength regime. Since the XD-mode is the most utilized, and to ensure consistent data reduction across all orders while reducing data-reduction time, a dedicated pipeline is needed. In this paper, we present the code for TANSPEC XD-mode data reduction, its workflow, its input/output files, and a showcase of its implementation on a particular dataset. This publicly available pipeline, pyTANSPEC, is fully developed in Python and requires nominal human intervention only for quality assurance of the reduced data. Two customized configuration files guide the data reduction. The pipeline creates a log file for all the FITS files in a given data directory from their headers, identifies the correct frames (science, continuum and calibration lamps) based on user input, offers the user an option to inspect and accept or remove frames, cleans the raw science frames, and yields final wavelength-calibrated spectra of all orders simultaneously.

Read this paper on arXiv…

S. Ghosh, J. Ninan, D. Ojha, et al.
Mon, 12 Dec 22
3/52

Comments: 10 pages, 6 figures, accepted for publication in the Special Issue of Journal of Astrophysics & Astronomy, 2022, Star formation studies in context of NIR instruments on 3.6m DOT, held at ARIES, Nainital during 4-7, May, 2022

The MeerKAT Pulsar Timing Array: First Data Release [HEAP]

http://arxiv.org/abs/2212.04648


We present the first 2.5 years of data from the MeerKAT Pulsar Timing Array (MPTA), part of MeerTime, a MeerKAT Large Survey Project. The MPTA aims to precisely measure pulse arrival times from an ensemble of 88 pulsars visible from the Southern Hemisphere, with the goal of contributing to the search, detection and study of nanohertz-frequency gravitational waves as part of the International Pulsar Timing Array. This project makes use of the MeerKAT telescope, and operates with a typical observing cadence of two weeks using the L-band receiver that records data from 856-1712 MHz. We provide a comprehensive description of the observing system, software, and pipelines used and developed for the MeerTime project. The data products made available as part of this data release are from the 78 pulsars that had at least $30$ observations between the start of the MeerTime programme in February 2019 and October 2021. These include both sub-banded and band-averaged arrival times, as well as the initial timing ephemerides, noise models, and the frequency-dependent standard templates (portraits) used to derive pulse arrival times. After accounting for detected noise processes in the data, the frequency-averaged residuals of $67$ of the pulsars achieved a root-mean-square residual precision of $< 1 \mu \rm{s}$. We also present a novel recovery of the clock correction waveform solely from pulsar timing residuals, and an exploration into preliminary findings of interest to the international pulsar timing community. The arrival times, standards and full Stokes parameter calibrated pulsar timing archives are publicly available.

Read this paper on arXiv…

M. Miles, R. Shannon, M. Bailes, et al.
Mon, 12 Dec 22
13/52

Comments: 18 pages, 6 figures

Analytic approximations of scattering effects on beam chromaticity in 21-cm global experiments [IMA]

http://arxiv.org/abs/2212.04526


Scattering from objects near an antenna produces correlated signals from strong compact radio sources, in a manner similar to that used by the Sea Interferometer to measure radio source positions from the fine frequency structure in the total power spectrum of a single antenna. These fringes, or ripples due to correlated signal interference, are present at a low level in the spectrum of any single antenna and are a major source of systematics in systems used to measure the global redshifted 21-cm signal from the early universe. In the Sea Interferometer, a single antenna on a cliff above the sea adds the signal from the direct path to the signal from the path reflected from the sea, thereby forming an interferometer; this was used by Bolton and Slee to map radio sources with a single antenna in the 1950s. In this paper we derive analytic expressions for the level of these ripples and compare the results in a few simple cases with electromagnetic modeling software, verifying that the analytic calculations are sufficient to obtain the magnitude of the scattering effects on measurements of the global 21-cm signal. These analytic calculations are needed to evaluate the magnitude of the effects in cases that are either too complex or too time-consuming to model with software.
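The scale of such ripples can be previewed with the textbook two-path result (a sketch under the assumption of a single scatterer with voltage-amplitude ratio $a \ll 1$ and excess path length $d$; the paper's full expressions account for the antenna and scatterer geometry):

```latex
P(\nu) \;\propto\; \left| 1 + a\, e^{-2\pi i \nu d / c} \right|^{2}
       \;\approx\; 1 + 2a \cos\!\left(\frac{2\pi \nu d}{c}\right),
```

so the fractional ripple amplitude is $2a$ and its period in frequency is $c/d$; for example, a scatterer adding $d = 30$ m of excess path produces a ripple with a 10 MHz period, well within the band of a global 21-cm experiment.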

Read this paper on arXiv…

A. Rogers, J. Barrett, J. Bowman, et al.
Mon, 12 Dec 22
15/52

Comments: N/A

Using Machine Learning to Link Black Hole Accretion Flows with Spatially Resolved Polarimetric Observables [HEAP]

http://arxiv.org/abs/2212.04852


We introduce a new library of 535,194 model images of the supermassive black holes and Event Horizon Telescope (EHT) targets Sgr A* and M87*, computed by performing general relativistic radiative transfer calculations on general relativistic magnetohydrodynamics simulations. Then, to infer underlying black hole and accretion flow parameters (spin, inclination, ion-to-electron temperature ratio, and magnetic field polarity), we train a random forest machine learning model on various hand-picked polarimetric observables computed from each image. Our random forest is capable of making meaningful predictions of spin, inclination, and the ion-to-electron temperature ratio, but has more difficulty inferring magnetic field polarity. To disentangle how physical parameters are encoded in different observables, we apply two different metrics to rank the importance of each observable at inferring each physical parameter. Details of the spatially resolved linear polarization morphology stand out as important discriminators between models. Bearing in mind the theoretical limitations and incompleteness of our image library, for the real M87* data, our machinery favours high-spin retrograde models with large ion-to-electron temperature ratios. Due to the time-variable nature of these targets, repeated polarimetric imaging will further improve model inference as the EHT and next-generation EHT (ngEHT) continue to develop and monitor their targets.

Read this paper on arXiv…

R. Qiu, A. Ricarte, R. Narayan, et al.
Mon, 12 Dec 22
18/52

Comments: 24 pages, 27 figures

Parameter Estimation for Stellar-Origin Black Hole Mergers In LISA [CL]

http://arxiv.org/abs/2212.04600


The population of stellar origin black hole binaries (SOBHBs) detected by existing ground-based gravitational wave detectors is an exciting target for the future space-based Laser Interferometer Space Antenna (LISA). LISA is sensitive to signals at significantly lower frequencies than ground-based detectors. SOBHB signals will thus be detected much earlier in their evolution, years to decades before they merge. The mergers will then occur in the frequency band covered by ground-based detectors. Observing SOBHBs years before merger can help distinguish between progenitor models for these systems. We present a new Bayesian parameter estimation algorithm for LISA observations of SOBHBs that uses a time-frequency (wavelet) based likelihood function. Our technique accelerates the analysis by several orders of magnitude compared to the standard frequency domain approach and allows for an efficient treatment of non-stationary noise.

Read this paper on arXiv…

M. Digman and N. Cornish
Mon, 12 Dec 22
21/52

Comments: 13 pages, 6 figures, 1 table

No need for extreme stellar masses at z~7: a test-case study for COS-87259 [GA]

http://arxiv.org/abs/2212.04511


Recent controversy regarding the existence of massive ($\log(M_*/M_\odot) \gtrsim 11$) galaxies at $z>6$ is posing a challenge for galaxy formation theories. Hence, it is of critical importance to understand the effects of SED fitting methods on stellar mass estimates of Epoch of Reionisation galaxies. In this work, we perform a case study on the AGN-host galaxy candidate COS-87259, with spectroscopic redshift $z_{\rm spec}=6.853$, which is claimed to have an extremely high stellar mass of $\log(M_*/M_\odot) \sim 11.2$. We test a suite of different SED fitting algorithms and stellar population models on our independently measured photometry in 17 broad bands for this source. Between five different code set-ups, the stellar mass estimates for COS-87259 span $\log(M_*/M_\odot) = 10.24$–11.00, whilst the reduced $\chi^2$ values of the fits are all close to unity within $\Delta\chi^2_\nu=0.9$, so that the quality of the SED fits is essentially indistinguishable. Only the Bayesian inference code Prospector, using a non-parametric star formation history model, yields a stellar mass exceeding $\log(M_*/M_\odot)=11$. As this SED fitting prescription is becoming increasingly popular for James Webb Space Telescope high-redshift science, we stress the absolute importance of testing various SED fitting routines, particularly on apparently very massive galaxies at such high redshifts. Ultimately, we conclude that the extremely high stellar mass estimate for COS-87259 is not necessary, as we derive equally good fits with stellar masses $\sim 1$ dex lower.

Read this paper on arXiv…

S. Mierlo, K. Caputi and V. Kokorev
Mon, 12 Dec 22
35/52

Comments: Submitted to ApJL

A Novel JupyterLab User Experience for Interactive Data Visualization [IMA]

http://arxiv.org/abs/2212.03907


In the Jupyter ecosystem, data visualization is usually done with “widgets” created as notebook cell outputs. While this mechanism works well in some circumstances, it is not well-suited to presenting interfaces that are long-lived, interactive, and visually rich. Unlike the traditional Jupyter notebook system, the newer JupyterLab application provides a sophisticated extension infrastructure that raises new design possibilities. Here we present a novel user experience (UX) for interactive data visualization in JupyterLab that is based on an “app” that runs alongside the user’s notebooks, rather than widgets that are bound inside them. We have implemented this UX for the AAS WorldWide Telescope (WWT) visualization tool. JupyterLab’s messaging APIs allow the app to smoothly exchange data with multiple computational kernels, allowing users to accomplish tasks that are not possible using the widget framework. A new Jupyter server extension allows the frontend to request data from kernels asynchronously over HTTP, enabling interactive exploration of gigapixel-scale imagery in WWT. While we have developed this UX for WWT, the overall design and the server extension are portable to other applications and have the potential to unlock a variety of new user activities that aren’t currently possible in “science platform” interfaces.

Read this paper on arXiv…

P. Williams, J. Carifio, H. Norman, et al.
Fri, 9 Dec 22
7/75

Comments: Submitted to proceedings of ADASS32; 8 pages, 3 figures. Try the WWT app at this https URL

The design and performance of the XL-Calibur anticoincidence shield [IMA]

http://arxiv.org/abs/2212.04139


The XL-Calibur balloon-borne hard X-ray polarimetry mission comprises a Compton-scattering polarimeter placed at the focal point of an X-ray mirror. The polarimeter is housed within a BGO anticoincidence shield, which is needed to mitigate the considerable background radiation present at the observation altitude of ~40 km. This paper details the design, construction and testing of the anticoincidence shield, as well as the performance measured during the week-long maiden flight from Esrange Space Centre to the Canadian Northwest Territories in July 2022. The in-flight performance of the shield followed design expectations, with a veto threshold <100 keV and a measured background rate of ~0.5 Hz (20-40 keV). This is compatible with the scientific goals of the mission, where %-level minimum detectable polarisation is sought for a Hz-level source rate.

Read this paper on arXiv…

N. Iyer, M. Kiss, M. Pearce, et al.
Fri, 9 Dec 22
19/75

Comments: Submitted to Nuclear Instruments and Methods A

GONG third generation camera: Detector selection and feasibility study [IMA]

http://arxiv.org/abs/2212.03963


The aging GONG second-generation cameras (Silicon Mountain Design(TM) cameras) were due for replacement after more than a decade of service. This prompted a market-wide search for a replacement detector that meets the GONG science requirements. This report provides some history of the search process, a comparison between CMOS- and CCD-type sensors, and a quantitative evaluation of potential candidates leading to the final selection. Further, a feasibility study of adapting the selected sensor to the GONG optical system was carried out, and the sensor characteristics were independently verified in the laboratory. This technical report describes these studies and tests.

Read this paper on arXiv…

S. Gosain, J. Harvey, D. Branson, et al.
Fri, 9 Dec 22
51/75

Comments: 13 pages, 12 figures

The wide-field, multiplexed, spectroscopic facility WEAVE: Survey design, overview, and simulated implementation [IMA]

http://arxiv.org/abs/2212.03981


WEAVE, the new wide-field, massively multiplexed spectroscopic survey facility for the William Herschel Telescope, will see first light in late 2022. WEAVE comprises a new 2-degree field-of-view prime-focus corrector system, a nearly 1000-multiplex fibre positioner, 20 individually deployable ‘mini’ integral field units (IFUs), and a single large IFU. These fibre systems feed a dual-beam spectrograph covering the wavelength range 366$-$959\,nm at $R\sim5000$, or two shorter ranges at $R\sim20\,000$. After summarising the design and implementation of WEAVE and its data systems, we present the organisation, science drivers and design of a five- to seven-year programme of eight individual surveys to: (i) study our Galaxy’s origins by completing Gaia’s phase-space information, providing metallicities to its limiting magnitude for $\sim$3 million stars and detailed abundances for $\sim1.5$ million brighter field and open-cluster stars; (ii) survey $\sim0.4$ million Galactic-plane OBA stars, young stellar objects and nearby gas to understand the evolution of young stars and their environments; (iii) perform an extensive spectral survey of white dwarfs; (iv) survey $\sim400$ neutral-hydrogen-selected galaxies with the IFUs; (v) study properties and kinematics of stellar populations and ionised gas in $z<0.5$ cluster galaxies; (vi) survey stellar populations and kinematics in $\sim25\,000$ field galaxies at $0.3\lesssim z \lesssim 0.7$; (vii) study the cosmic evolution of accretion and star formation using $>1$ million spectra of LOFAR-selected radio sources; (viii) trace structures using intergalactic/circumgalactic gas at $z>2$. Finally, we describe the WEAVE Operational Rehearsals using the WEAVE Simulator.

Read this paper on arXiv…

S. Jin, S. Trager, G. Dalton, et al.
Fri, 9 Dec 22
59/75

Comments: 41 pages, 27 figures, accepted for publication by MNRAS

On the Application of Bayesian Leave-One-Out Cross-Validation to Exoplanet Atmospheric Analysis [EPA]

http://arxiv.org/abs/2212.03872


Over the last decade, exoplanetary transmission spectra have yielded an unprecedented understanding of the physical and chemical nature of planets outside our solar system. Physical and chemical knowledge is mainly extracted by fitting competing models to spectroscopic data, based on some goodness-of-fit metric. However, currently employed metrics shed little light on how exactly a given model is failing at the individual data point level and where it could be improved. As the quality of our data and the complexity of our models increase, there is an urgent need to better understand which observations are driving our model interpretations. Here we present the application of Bayesian leave-one-out cross-validation to assess the performance of exoplanet atmospheric models and compute the expected log pointwise predictive density (elpd$_\text{LOO}$). elpd$_\text{LOO}$ estimates the out-of-sample predictive accuracy of an atmospheric model at data point resolution, providing interpretable model criticism. We introduce and demonstrate this method on synthetic HST transmission spectra of a hot Jupiter. We apply elpd$_\text{LOO}$ to interpret current observations of HAT-P-41b and assess the reliability of recent inferences of H$^-$ in its atmosphere. We find that previous detections of H$^{-}$ depend solely on a single data point. This new metric for exoplanetary retrievals complements and expands our repertoire of tools to better understand the limits of our models and data. elpd$_\text{LOO}$ provides the means to interrogate models at the single data point level, a prerequisite for robustly interpreting the imminent wealth of spectroscopic information coming from JWST.
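
As a minimal illustration of the idea (not the authors' implementation, which relies on Pareto-smoothed importance sampling via established tooling), the pointwise elpd_LOO can be estimated from posterior draws with plain importance sampling. All data and model choices below are synthetic stand-ins:

```python
import numpy as np

def elpd_is_loo(log_lik):
    """Plain importance-sampling LOO estimate of the expected log
    pointwise predictive density (no Pareto smoothing).

    log_lik : (S, N) array of log p(y_i | theta_s) over S posterior
              draws and N data points.
    Returns the pointwise elpd_loo_i values (length N).
    """
    # elpd_loo_i = -log( (1/S) * sum_s exp(-log_lik[s, i]) )
    S = log_lik.shape[0]
    neg = -log_lik
    # log-sum-exp of the negated log-likelihoods, for numerical stability
    m = neg.max(axis=0)
    lse = m + np.log(np.exp(neg - m).sum(axis=0))
    return -(lse - np.log(S))

# Synthetic example: a unit-variance Gaussian model with posterior draws
# of its mean, evaluated on 20 "observed" data points.
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=20)
theta = rng.normal(0.0, 0.1, size=500)          # posterior draws of the mean
log_lik = -0.5 * (y[None, :] - theta[:, None]) ** 2 - 0.5 * np.log(2 * np.pi)
pointwise = elpd_is_loo(log_lik)
elpd_loo = pointwise.sum()                      # total elpd_LOO
```

The pointwise values are what enable the data-point-level model criticism described in the abstract: a single anomalously low elpd_loo_i flags an observation the model struggles to predict.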

Read this paper on arXiv…

L. Welbanks, P. McGill, M. Line, et al.
Fri, 9 Dec 22
65/75

Comments: Accepted for publication in The Astronomical Journal

Measuring the 8621 Å Diffuse Interstellar Band in Gaia DR3 RVS Spectra: Obtaining a Clean Catalog by Marginalizing over Stellar Types [GA]

http://arxiv.org/abs/2212.03879


Diffuse interstellar bands (DIBs) are broad absorption features associated with interstellar dust and can serve as chemical and kinematic tracers. Conventional measurements of DIBs in stellar spectra are complicated by residuals between observations and best-fit stellar models. To overcome this, we simultaneously model the spectrum as a combination of stellar, dust, and residual components, with full posteriors on the joint distribution of the components. This decomposition is obtained by modeling each component as a draw from a high-dimensional Gaussian distribution in the data-space (the observed spectrum), a method we call “Marginalized Analytic Data-space Gaussian Inference for Component Separation” (MADGICS). We use a data-driven prior for the stellar component, which avoids missing stellar features not included in synthetic line lists. This technique provides statistically rigorous uncertainties and detection thresholds, which are required to work in the low signal-to-noise regime that is commonplace for dusty lines of sight. We reprocess all public Gaia DR3 RVS spectra and present an improved 8621 Å DIB catalog, free of detectable stellar line contamination. We constrain the rest-frame wavelength to $8623.14 \pm 0.087$ Å (vacuum), find no significant evidence for DIBs in the Local Bubble from the $1/6^{\rm{th}}$ of RVS spectra that are public, and show unprecedented correlation with kinematic substructure in Galactic CO maps. We validate the catalog, its reported uncertainties, and biases using synthetic injection tests. We believe MADGICS provides a viable path forward for large-scale spectral line measurements in the presence of complex spectral contamination.

Read this paper on arXiv…

A. Saydjari, C. Zucker, J. Peek, et al.
Fri, 9 Dec 22
66/75

Comments: 25 pages, 25 figures, submitted to ApJ

A Bayesian approach to modelling spectrometer data chromaticity corrected using beam factors — I. Mathematical formalism [CEA]

http://arxiv.org/abs/2212.03875


Accurately accounting for spectral structure in spectrometer data induced by instrumental chromaticity, on scales relevant for detection of the 21-cm signal, is among the most significant challenges in global 21-cm signal analysis. In the publicly available EDGES low-band data set (Bowman et al. 2018), this complicating structure is suppressed using beam-factor-based chromaticity correction (BFCC), which works by dividing the data by a sky-map-weighted model of the spectral structure of the instrument beam. Several analyses of these data have employed models that assume this correction is complete. However, while BFCC mitigates the impact of instrumental chromaticity on the data, under realistic assumptions regarding the spectral structure of the foregrounds the correction is only partial, which complicates the interpretation of fits to the data with intrinsic sky models. In this paper, we derive a BFCC data model from an analytic treatment of BFCC and demonstrate, using simulated observations, that this model enables unbiased recovery of a simulated global 21-cm signal from beam-factor chromaticity-corrected data.
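
The BFCC operation itself can be sketched as a sky-map-weighted average of the chromatic beam, normalised by a reference beam; the one-dimensional sky, beam model, and numbers below are hypothetical. In this idealised setup the correction is exact by construction, whereas the paper's point is that with realistic chromatic foregrounds it is only partial:

```python
import numpy as np

def beam_factor(beam_nu, T_sky, beam_ref):
    """Sky-map-weighted beam factor: the chromatic beam response averaged
    over the foreground sky, normalised by a reference beam.
    beam_nu: (N_nu, N_pix); T_sky, beam_ref: (N_pix,)."""
    return (beam_nu @ T_sky) / (beam_ref @ T_sky)

# Toy sky and a mildly chromatic Gaussian beam on a 1-D "sky".
nu = np.linspace(50e6, 100e6, 64)                  # Hz
pix = np.linspace(-1.0, 1.0, 200)                  # sky coordinate
T_sky = 1e3 * (1.0 + 0.5 * np.cos(np.pi * pix))    # K, illustrative
width = 0.5 * (nu / 75e6) ** -0.1                  # beam narrows with frequency
beam_nu = np.exp(-0.5 * (pix[None, :] / width[:, None]) ** 2)
beam_ref = np.exp(-0.5 * (pix / 0.5) ** 2)         # fixed reference beam

bf = beam_factor(beam_nu, T_sky, beam_ref)
T_data = bf * T_sky.mean()        # stand-in "measured" chromatic spectrum
T_corr = T_data / bf              # BFCC: divide out the beam factor
```

Here the recovered spectrum is flat because the simulated data contain only the beam-induced structure; spectral mismatch between the assumed and true foregrounds is what leaves the residual structure the paper models analytically.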

Read this paper on arXiv…

P. Sims, J. Bowman, N. Mahesh, et al.
Fri, 9 Dec 22
70/75

Comments: 26 pages, 8 figures

An analysis of the effect of data processing methods on magnetic propeller models in short GRBs [HEAP]

http://arxiv.org/abs/2212.04428


We present an analysis of observational data from the Swift Burst Analyser for a sample of 15 short gamma-ray bursts with extended emission (SGRBEEs), processed such that error propagation from Swift’s count-rate-to-flux conversion factor is applied to the flux measurements. We apply this propagation to data presented by the Burst Analyser at 0.3-10 keV and at 15-50 keV, and identify clear differences in the morphologies of the light-curves in the different bands. In performing this analysis with data presented at 0.3-10 keV, at 15-50 keV, and at a combination of both bands, we highlight the impact of extrapolating data from their native bandpasses on the light-curve. We then test these data by fitting to them a magnetar-powered model for SGRBEEs, and show that while the model is consistent with the data in both bands, the model’s derived physical parameters are generally very loosely constrained when this error propagation is included, and are inconsistent across the two bands. In this way, we highlight the importance of the Swift data processing methodology to the details of physical model fits to SGRBEEs.

Read this paper on arXiv…

T. Meredith, G. Wynn and P. Evans
Fri, 9 Dec 22
75/75

Comments: 14 pages, accepted for publication in MNRAS

A renewable power system for an off-grid sustainable telescope fueled by solar power, batteries and green hydrogen [CL]

http://arxiv.org/abs/2212.03823


A large portion of astronomy’s carbon footprint stems from fossil fuels supplying the power demand of astronomical observatories. Here, we explore various isolated low-carbon power system setups for the newly planned Atacama Large Aperture Submillimeter Telescope, and compare them to a business-as-usual diesel-generated power system. Technologies included in the designed systems are photovoltaics, concentrated solar power, diesel generators, batteries, and hydrogen storage. We adapt the electricity system optimization model highRES to this case study and feed it with the telescope’s projected energy demand, cost assumptions for the year 2030, and site-specific capacity factors. Our results show that the lowest-cost system, with an LCOE of $116/MWh, mainly uses photovoltaics paired with batteries and fuel cells running on imported green hydrogen, with some diesel generators kept for backup. This solution would reduce the telescope’s power-side carbon footprint by 95% compared to the business-as-usual case.

Read this paper on arXiv…

I. Viole, G. Valenzuela-Venegas, M. Zeyringer, et al.
Thu, 8 Dec 22
1/63

Comments: 16 pages, 10 figures

A large facility for photosensors test at cryogenic temperature [CL]

http://arxiv.org/abs/2212.02296


The current generation of detectors using noble gases in liquid phase for direct dark matter searches and neutrino physics needs large-area photosensors. Silicon-based photodetectors are innovative light-collecting devices and represent a successful technology in these research fields. The DarkSide collaboration started a dedicated development and customization of SiPM technology for its specific needs, resulting in the design, production, and assembly of large-surface modules of 20$\times$20 cm$^{2}$, named Photo Detection Units, for the DarkSide-20k experiment. Production of the large number of such devices needed to cover about 20 m$^{2}$ of active surface inside the DarkSide-20k detector requires a robust testing and validation process. To meet this requirement, a dedicated photosensor test facility was designed and commissioned at the INFN-Naples laboratory. The first commissioning test was successfully performed in 2021; since then, a number of testing campaigns have been carried out. A detailed description of the facility is reported, as well as results of some tests.

Read this paper on arXiv…

Z. Balmforth, A. Basco, A. Boiano, et al.
Thu, 8 Dec 22
2/63

Comments: Prepared for submission to JINST – LIDINE2022 – September 21-23, 2022 – University of Warsaw Library

Inverse MultiView II: Microarcsecond Trigonometric Parallaxes for Southern Hemisphere 6.7~GHz Methanol Masers G232.62+00.99 and G323.74$-$00.26 [GA]

http://arxiv.org/abs/2212.03555


We present the first results from the Southern Hemisphere Parallax Interferometric Radio Astrometry Legacy Survey: $10\,\mu$as-accurate parallaxes and proper motions for two southern hemisphere 6.7~GHz methanol masers obtained using the inverse MultiView calibration method. Using an array of radio telescopes in Australia and New Zealand, we measured the trigonometric parallax and proper motions of the masers associated with the star formation region G232.62+00.99: $\pi = 0.610\pm0.011$~mas, $\mu_x=-2.266\pm0.021$~mas/yr and $\mu_y=2.249\pm0.049$~mas/yr, implying a distance of $d=1.637\pm0.029$~kpc. These measurements represent an improvement in accuracy by more than a factor of 3 over the previous measurements obtained through Very Long Baseline Array observations of the 12~GHz methanol masers associated with this region. We also measure the trigonometric parallax and proper motion of G323.74$-$00.26: $\pi = 0.364\pm0.009$~mas, $\mu_x=-3.239\pm0.025$~mas/yr and $\mu_y=-3.976\pm0.039$~mas/yr, implying a distance of $d=2.747\pm0.068$~kpc. These are the most accurate measurements of trigonometric parallax obtained for 6.7~GHz class II methanol masers to date. We confirm that G232.62+00.99 is in the Local arm and find that G323.74$-$00.26 is in the Scutum-Centaurus arm. We also investigate the structure and internal dynamics of G323.74$-$00.26.
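
To first order, the quoted distances follow from inverting the parallax; a quick sketch with first-order error propagation (the paper's quoted values reflect a fuller treatment, so the naive inversion agrees only approximately):

```python
# Distance from trigonometric parallax, with first-order error propagation:
#   d [kpc] = 1 / pi [mas],   sigma_d = sigma_pi / pi**2
def parallax_to_distance(pi_mas, sigma_mas):
    d = 1.0 / pi_mas
    return d, sigma_mas / pi_mas**2

d, sig = parallax_to_distance(0.610, 0.011)   # G232.62+00.99
# d ~ 1.64 kpc, sig ~ 0.030 kpc, consistent with the quoted 1.637 +/- 0.029 kpc
```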

Read this paper on arXiv…

L. Hyland, M. Reid, G. Orosz, et al.
Thu, 8 Dec 22
3/63

Comments: 12 pages, 8 figures, 3 tables. Submitted to AAS

Electron polarization in ultrarelativistic plasma current filamentation instabilities [CL]

http://arxiv.org/abs/2212.03303


Plasma current filamentation of an ultrarelativistic electron beam impinging on an overdense plasma is investigated, with emphasis on radiation-induced electron polarization. Particle-in-cell simulations provide the classification and in-depth analysis of three different regimes of the current filaments, namely, the normal filament, abnormal filament, and quenching regimes. We show that electron radiative polarization emerges during the instability along the azimuthal direction in the momentum space, which significantly varies across the regimes. We put forward an intuitive Hamiltonian model to trace the origin of the electron polarization dynamics. In particular, we discern the role of nonlinear transverse motion of plasma filaments, which induces asymmetry in radiative spin flips, yielding an accumulation of electron polarization. Our results break the conventional perception that quasi-symmetric fields are inefficient for generating radiative spin-polarized beams, suggesting the potential of electron polarization as a source of new information on laboratory and astrophysical plasma instabilities.

Read this paper on arXiv…

Z. Gong, K. Hatsagortsyan and C. Keitel
Thu, 8 Dec 22
11/63

Comments: 7 pages, 5 figures

Ground-based Synoptic Studies of the Sun [IMA]

http://arxiv.org/abs/2212.03247


Ground-based synoptic solar observations provide critical contextual data used to model the large-scale state of the heliosphere. The next decade will see a combination of ground-based telescopes and space missions that will study the microscopic processes of our Sun’s atmosphere in unprecedented detail. This white paper describes the contextual observations from a ground-based network needed to fully exploit this new knowledge of the underlying physics that leads to the magnetic linkages between the heliosphere and the Sun. This combination of a better understanding of small-scale processes and the appropriate global context will enable a physics-based approach to Space Weather comparable to Terrestrial Weather forecasting.

Read this paper on arXiv…

S. Gosain, V. Pillet, A. Pevtsov, et al.
Thu, 8 Dec 22
12/63

Comments: 10 pages, 5 figures, White paper submitted to Heliodecadal 2024, Category: Basic Research, Solar Physics. arXiv admin note: text overlap with arXiv:1903.06944

A novel energy reconstruction method for the MAGIC stereoscopic observation [IMA]

http://arxiv.org/abs/2212.03592


We present a new gamma-ray energy reconstruction method based on the Random Forest algorithm, to be commonly used for the data analysis of the MAGIC Telescopes, a system of two Imaging Atmospheric Cherenkov Telescopes.
The energy resolution with the new reconstruction improves on that obtained with the look-up-table (LUT) method. For standard observations, i.e. dark conditions with pointing zenith distance (Zd) less than 35 deg for a point-like source, the energy resolution goes from $\sim 20\%$ at 100 GeV to $\sim 10\%$ at a few TeV.
In addition, the new method suppresses the outlier population in the energy error distribution, which is thus better described by a Gaussian distribution. The new energy reconstruction enhances reliability, especially for sources with steep spectra, at higher energies, and/or in observations at higher Zd pointings.
We validate the new method in different ways and demonstrate some cases of its remarkable benefit in spectral analysis with simulated observation data.
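
A hedged sketch of the approach using scikit-learn's RandomForestRegressor (assumed available): the regression is trained in log-energy on shower-image parameters, and a resolution proxy is read off the relative-error distribution. The two features below are invented stand-ins, not MAGIC's actual image parameters or training pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 4000
log_E = rng.uniform(2.0, 4.0, n)             # log10(E/GeV): 100 GeV - 10 TeV
# Hypothetical image parameters loosely correlated with energy.
size = log_E + rng.normal(0.0, 0.05, n)      # ~ log10 of total image charge
length = 0.3 * log_E + rng.normal(0.0, 0.03, n)
X = np.column_stack([size, length])

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X[:3000], log_E[:3000])               # train in log-energy
pred = rf.predict(X[3000:])                  # held-out "test" events

rel_err = 10 ** (pred - log_E[3000:]) - 1.0  # (E_rec - E_true)/E_true
resolution = np.std(rel_err)                 # a simple resolution proxy
```

Training in log-energy is the natural choice for a steeply falling spectrum spanning decades in energy; the outlier suppression described in the abstract would show up here as shorter tails in `rel_err`.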

Read this paper on arXiv…

K. Ishio and D. Paneque
Thu, 8 Dec 22
13/63

Comments: Submitted version

A Neural Network Approach for Selecting Track-like Events in Fluorescence Telescope Data [IMA]

http://arxiv.org/abs/2212.03787


In 2016-2017, TUS, the world’s first experiment for testing the possibility of registering ultra-high-energy cosmic rays (UHECRs) by their fluorescent radiation in the night atmosphere of Earth, was carried out. Since 2019, the Russian-Italian fluorescence telescope (FT) Mini-EUSO (“UV Atmosphere”) has been operating on the ISS. The stratospheric experiment EUSO-SPB2, which will employ an FT for registering UHECRs, is planned for 2023. We show how a simple convolutional neural network can be effectively used to find track-like events in the variety of data obtained with such instruments.

Read this paper on arXiv…

M. Zotov and D. Sokolinskii
Thu, 8 Dec 22
15/63

Comments: 5 pages, to be published in proceedings of the 37th Russian Cosmic Ray Conference (2022)

Empirically-Driven Multiwavelength K-corrections At Low Redshift [CEA]

http://arxiv.org/abs/2212.03263


K-corrections, conversions of flux in observed bands to flux in rest-frame bands, are critical for comparing galaxies at various redshifts. These corrections often rely on fits to empirical or theoretical spectral energy distribution (SED) templates of galaxies. However, the templates limit reliable K-corrections to regimes where SED models are robust. For instance, the templates are not well-constrained in some bands (e.g., WISE W4), which results in ill-determined K-corrections for these bands. We address this shortcoming by developing an empirically-driven approach to K-corrections as a means to mitigate dependence on SED templates. We perform a polynomial fit for the K-correction as a function of a galaxy’s rest-frame color determined in well-constrained bands (e.g., rest-frame (g-r)) and redshift, exploiting the fact that galaxy SEDs can be described as a one-parameter family at low redshift (0.01 < z < 0.09). For bands well-constrained by SED templates, our empirically-driven K-corrections are comparable to the SED fitting method of Kcorrect and the SED template fitting employed in the GSWLC-M2 catalogue (the updated medium-deep GALEX-SDSS-WISE Legacy Catalogue). However, our method dramatically outperforms the available SED fitting K-corrections for WISE W4. Our method also mitigates incorrect template assumptions and enforces the K-correction to be 0 at z = 0. Our K-corrected photometry and code are publicly available.
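
The fitting idea can be sketched with ordinary least squares: build polynomial terms in redshift and rest-frame colour, with every term proportional to z so that K(z=0) = 0 holds by construction. The polynomial degrees and synthetic data below are illustrative, not the paper's choices:

```python
import numpy as np

def design(z, color, deg_z=2, deg_c=2):
    """Polynomial terms z^i * color^j with i >= 1, so K(z=0) = 0 exactly."""
    cols = [z**i * color**j for i in range(1, deg_z + 1)
                            for j in range(0, deg_c + 1)]
    return np.column_stack(cols)

rng = np.random.default_rng(3)
z = rng.uniform(0.01, 0.09, 500)           # low-redshift sample
gr = rng.uniform(0.2, 1.0, 500)            # rest-frame (g-r), illustrative
K_true = 1.5 * z + 2.0 * z * gr            # toy "truth", vanishes at z = 0
K_obs = K_true + rng.normal(0.0, 0.01, 500)

A = design(z, gr)
coef, *_ = np.linalg.lstsq(A, K_obs, rcond=None)
K_fit = A @ coef
# Every basis term carries a factor of z, so the fitted K-correction
# is exactly 0 at z = 0, matching the physical boundary condition.
```

Omitting the constant and colour-only terms from the design matrix is what encodes the z = 0 constraint, rather than fitting freely and hoping the intercept comes out near zero.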

Read this paper on arXiv…

C. Fielder, B. Andrews, J. Newman, et al.
Thu, 8 Dec 22
36/63

Comments: 15 pages, 9 figures, submitted to MNRAS

#Change: How Social Media is Accelerating STEM Inclusion [IMA]

http://arxiv.org/abs/2212.03245


The vision of 2030STEM is to address systemic barriers in institutional structures and funding mechanisms required to achieve full inclusion in Science, Technology, Engineering, and Mathematics (STEM) and provide leadership opportunities for individuals from underrepresented populations across STEM sectors. 2030STEM takes a systems-level approach to create a community of practice that affirms diverse cultural identities in STEM. This is the first in a series of white papers based on 2030STEM Salons – discussions that bring together visionary stakeholders in STEM to think about innovative ways to infuse justice, equity, diversity, and inclusion into the STEM ecosystem. Our salons identify solutions that come from those who have been most affected by systemic barriers in STEM. Our first salon focused on the power of social media to accelerate inclusion and diversity efforts in STEM. Social media campaigns, such as the #XinSTEM initiatives, are powerful new strategies for accelerating change towards inclusion and leadership by underrepresented communities in STEM. This white paper highlights how #XinSTEM campaigns are redefining community, and provides recommendations for how scientific and funding institutions can improve the STEM ecosystem by supporting the #XinSTEM movement.

Read this paper on arXiv…

J. Adams, C. Berry, R. Cohen, et al.
Thu, 8 Dec 22
42/63

Comments: 13 pages, 2 Figures, Also uploaded to the Biorxiv

Photometry and astrometry with JWST — II. NIRCam geometric distortion correction [IMA]

http://arxiv.org/abs/2212.03256


In preparation to make the most of our own planned James Webb Space Telescope investigations, we take advantage of publicly available calibration and early-science observations to independently derive and test a geometric-distortion solution for the NIRCam detectors. Our solution is able to correct the distortion to better than ~0.2 mas. Current data indicate that the solution is stable and constant over the investigated filters, temporal coverage, and even over the available filter combinations. We successfully tested our geometric-distortion solution by matching the JWST and archival HST catalogues. We considered three different applications: (i) cluster-field separation for the stars in the globular cluster M92; (ii) measuring the internal proper motions of M92’s stars; (iii) measuring the internal proper motions of the stars in the Large Magellanic Cloud system. While we were not able to detect significant variations of the geometric-distortion solution over 22 days, it is clear that more data are still necessary to better understand the instrument and to characterise the solution to a higher level of accuracy. To our knowledge, the geometric-distortion solution for NIRCam derived here is the best available, and we publicly release it, as many other investigations could potentially benefit from it. Along with our geometric-distortion solution, we also release a Python tool to convert the raw-pixel coordinates of each detector into distortion-free positions, and to place all ten detectors of NIRCam in a common reference system.

Read this paper on arXiv…

M. Griggio, D. Nardiello and L. Bedin
Thu, 8 Dec 22
43/63

Comments: 13 pages, 12 figures (4 in low resolution), 2 tables. Submitted. Associated files soon at this https URL

Optical and Opto-mechanical Analysis and Design of the Telescope for the Ariel Mission [IMA]

http://arxiv.org/abs/2212.03686


The Atmospheric Remote-sensing Infrared Exoplanet Large-survey (Ariel) is the first space mission dedicated to measuring the chemical composition and thermal structures of thousands of transiting exoplanets. Ariel was adopted in 2020 as the M4 mission in ESA’s “Cosmic Vision” program, with launch expected in 2029. The mission will operate from the Sun-Earth Lagrange Point L2. The scientific payload consists of two instruments: a high-resolution spectrometer in the waveband 1.95-7.8 microns, and a fine guidance system / visible photometer / low-resolution near-infrared spectrometer. The instruments are fed a collimated beam from an unobscured, off-axis Cassegrain telescope. Instruments and telescope will operate at a temperature below 50 K. Telescope mirrors and supporting structures will be realized in aerospace-grade aluminum. Given the large aperture of the primary mirror (0.6 m$^2$), this choice of material requires careful optical and opto-mechanical design, and technological advances in three areas: mirror substrate thermal stabilization, optical surface polishing, and optical coating. This thesis presents the work done by the author in these areas, as a member of the team responsible for designing and manufacturing the telescope and mirrors, starting with a systematic review of the optical and opto-mechanical requirements and design choices of the Ariel telescope, in the context of previous development work and the scientific goals and requirements of the mission. The review then progresses with the opto-mechanical design, examining the most important choices in terms of structural and thermal design, and with a statistical analysis of the deformations of the optical surface of the telescope mirrors and of their alignment in terms of rigid-body motions. The details of the qualification work on thermal stabilization, polishing, and coating are then presented.

Read this paper on arXiv…

P. Chioetto
Thu, 8 Dec 22
51/63

Comments: Pre-print of the final dissertation for the PhD Course in “Sciences, Technologies and Measurements for Space”, 35th Series, at the Center for Studies and Space Activities “G.Colombo” – CISAS, University of Padova, Italy. Course coordinator: Prof. Francesco Picano, Supervisor: Dr. Paola Zuppella, Co-supervisor: Dr. Vania Da Deppo

JWST MIRI/MRS in-flight absolute flux calibration and tailored fringe correction for unresolved sources [IMA]

http://arxiv.org/abs/2212.03596


The MRS is one of the four observing modes of JWST/MIRI. Using JWST in-flight data of unresolved (point) sources, we can derive the MRS absolute spectral response function (ASRF) starting from raw data. Spectral fringing plays a critical role in the derivation and interpretation of the MRS ASRF. In this paper, we present an alternative way to calibrate the data. Firstly, we aim to derive a fringe correction that accounts for the dependence of the fringe properties on the MIRI pupil illumination and detector pixel sampling of the point spread function. Secondly, we aim to derive the MRS ASRF using an absolute flux calibrator observed across the full 5 to 28 $\mu$m wavelength range of the MRS. Thirdly, we aim to apply the new ASRF to the spectrum of a G dwarf and compare with the output of the JWST/MIRI default data reduction pipeline. Finally, we examine the impact of the different fringe corrections on the detectability of molecular features in the G dwarf and a K giant. The absolute flux calibrator HD 163466 (an A star) is used to derive tailored point-source fringe flats at each of the default dither locations of the MRS. The fringe-corrected point-source integrated spectrum of HD 163466 is used to derive the MRS ASRF using a theoretical model for the stellar continuum. A cross-correlation is run to quantify the uncertainty on the detection of CO, SiO, and OH in the K giant and CO in the G dwarf for different fringe corrections. The point-source-tailored fringe correction and ASRF are found to perform at the same level as the current corrections, beating down the fringe contrast to the sub-percent level whilst mitigating the alteration of real molecular features. The same tailored solutions can be applied to other MRS unresolved targets. A pointing repeatability issue in the MRS limits the effectiveness of the tailored fringe flats at short wavelengths.
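
The principle of a point-source-tailored fringe flat can be shown with a toy sinusoidal fringe shared between calibrator and target; in reality pupil-illumination and pointing differences make the match only approximate, which is precisely why dither-dependent flats are needed. All numbers below are illustrative:

```python
import numpy as np

wav = np.linspace(5.0, 5.5, 400)                         # wavelength, micron
fringe = 1.0 + 0.15 * np.sin(2 * np.pi * wav / 0.02)     # toy etalon fringe

continuum = 1e3 * (wav / 5.0) ** -2                      # target continuum
target_obs = continuum * fringe                          # fringed target

# Calibrator with a known smooth continuum, observed with the same fringes:
cal_true = 5e3 * np.ones_like(wav)
cal_obs = cal_true * fringe
fringe_flat = cal_obs / cal_true          # point-source-tailored fringe flat

target_corr = target_obs / fringe_flat    # fringe-corrected target spectrum
residual = np.max(np.abs(target_corr / continuum - 1.0))
```

Because the flat is built from a source with the same (here, identical) fringe pattern, real spectral features of the target survive the division, unlike aggressive fits that can erase molecular bands.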

Read this paper on arXiv…

D. Gasman, I. Argyriou, G. Sloan, et al.
Thu, 8 Dec 22
58/63

Comments: N/A

The Importance of Co-located VLBI Intensive Stations and GNSS Receivers: A case study of the Maunakea VLBI and GNSS stations during the 2018 Hawai`i earthquake [IMA]

http://arxiv.org/abs/2212.03453


Frequent, low-latency measurements of the Earth’s rotation phase, UT1$-$UTC, critically support the current estimate and short-term prediction of this highly variable Earth Orientation Parameter (EOP). Very Long Baseline Interferometry (VLBI) Intensive sessions provide the required data. However, the Intensive UT1$-$UTC measurement accuracy depends on the accuracy of numerous models, including the VLBI station position. Intensives observed with the Maunakea (Mk) and Pie Town (Pt) stations of the Very Long Baseline Array (VLBA) illustrate how a geologic event (i.e., the $M_w$ 6.9 Hawai`i Earthquake of May 4th, 2018) can cause a station displacement and an associated offset in the values of UT1$-$UTC measured by that baseline, rendering the data from the series useless until the offset is corrected. Using the non-parametric Nadaraya-Watson estimator to smooth the measured UT1$-$UTC values before and after the earthquake, we calculate the offset in the measurement to be 75.7 $\pm$ 4.6 $\mu$s. Analysis of the sensitivity of the Mk-Pt baseline’s UT1$-$UTC measurement to station position changes shows that the measured offset is consistent with the 67.2 $\pm$ 5.9 $\mu$s expected offset based on the 12.4 $\pm$ 0.6 mm total coseismic displacement of the Maunakea VLBA station determined from the displacement of the co-located global navigation satellite system (GNSS) station. GNSS station position information is known with a latency on the order of tens of hours, and thus can be used to correct the a priori position model of a co-located VLBI station such that it can continue to provide accurate measurements of the critical EOP UT1$-$UTC as part of Intensive sessions. By contrast, the VLBI station position model would likely not be updated for several months. This contrast highlights the benefit of co-located GNSS and VLBI stations in support of the monitoring of UT1$-$UTC with single-baseline Intensives. Abridged.
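
A minimal sketch of the Nadaraya-Watson estimator used to smooth such a series on either side of an event; the Gaussian kernel, bandwidth, and data below are synthetic stand-ins for the UT1$-$UTC residuals (the 75.7 $\mu$s step size is borrowed from the abstract only to make the toy concrete):

```python
import numpy as np

def nadaraya_watson(x_eval, x, y, h):
    """Gaussian-kernel Nadaraya-Watson regression estimate at x_eval:
    a locally weighted mean with weights K((x_eval - x)/h)."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

# Synthetic residual series with a 75.7 us step at t = 0 (the earthquake
# epoch); times in days, values in microseconds.
rng = np.random.default_rng(4)
t = np.sort(rng.uniform(-200, 200, 300))
y = np.where(t > 0, 75.7, 0.0) + rng.normal(0.0, 15.0, 300)

# Smooth each side separately and difference the smoothed levels.
before = nadaraya_watson(np.array([-30.0]), t[t < 0], y[t < 0], h=20.0)
after = nadaraya_watson(np.array([30.0]), t[t > 0], y[t > 0], h=20.0)
offset = after[0] - before[0]      # recovered step, close to 75.7 us
```

Smoothing each side independently avoids the kernel averaging across the discontinuity, which would bias the estimated step toward zero.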

Read this paper on arXiv…

C. Dieck, M. Johnson and D. MacMillan
Thu, 8 Dec 22
61/63

Comments: 18 pages, 4 figures, accepted for publication in Journal of Geodesy

Relative astrometry in an annular field [IMA]

http://arxiv.org/abs/2212.03001


Background. Relative astrometry at or below the micro-arcsec level with a 1m-class space telescope has been repeatedly proposed as a tool for exo-planet detection and characterization, as well as for several topics at the forefront of Astrophysics and Fundamental Physics. Aim. This paper investigates the potential benefits of an instrument concept based on an annular field of view, as compared to a traditional focal plane imaging a contiguous area close to the telescope optical axis. Method. Basic aspects of relative astrometry are reviewed as a function of the distribution on the sky of reference stars brighter than G = 12 mag (from Gaia EDR3). Statistics of field stars for targets down to G = 8 mag are evaluated by analysis and simulation. Results. Observation efficiency benefits from prior knowledge of individual targets, since the source model is improved with few measurements. Dedicated observations (10-20 hours) can constrain the orbital inclination of exoplanets to a few degrees. The observing strategy can be tailored to include a sample of stars, materialising the reference frame, sufficiently large to average down the residual catalogue errors to the desired micro-arcsec level. For most targets, the annular field provides more reference stars than the conventional field, typically by a factor of four to seven in our case. The brightest reference stars for each target are up to 2 mag brighter. Conclusions. The proposed annular-field telescope concept improves on observation flexibility and/or astrometric performance with respect to conventional designs. It therefore appears an appealing contribution to the optimization of future relative astrometry missions.

Read this paper on arXiv…

M. Gai, A. Vecchiato, A. Riva, et. al.
Wed, 7 Dec 22
4/74

Comments: 20 pages, 16 figures

Real-time Data Ingestion at the Keck Observatory Archive (KOA) [IMA]

http://arxiv.org/abs/2212.02576


In February of this year, KOA began to prepare, transfer, and ingest data as they are acquired, in near-real time; in most cases data are available to observers through KOA within one minute of acquisition. Real-time ingestion will be complete for all active instruments by the end of Summer 2022. The observatory is supporting the development of modern Python data reduction pipelines which, when delivered, will automatically create science-ready data sets at the end of each night for ingestion into the archive. This presentation will describe the infrastructure developed to support real-time data ingestion, itself part of a larger initiative at the Observatory to modernize end-to-end operations.
During telescope operations, the software at WMKO is executed automatically when a newly acquired file is recognized through monitoring a keyword-based observatory control system; this system is used at Keck to execute virtually all observatory functions. The monitor uses callbacks built into the control system to begin data preparation of files for transmission to the archive on an individual basis: scheduling scripts or file-system-related triggers are unnecessary. An HTTP-based system built with the Flask micro-framework enables file transfers between WMKO and NExScI and triggers data ingestion at NExScI. The ingestion system at NExScI is a compact (4 KLOC), highly fault-tolerant, Python-based system. It uses a shared file system to transfer data from WMKO to NExScI. The ingestion code is instrument agnostic, with instrument parameters read from configuration files. It replaces an unwieldy (50 KLOC) C-based system that had been in use since 2004.
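The callback-driven triggering described above can be sketched generically: a handler is registered against a keyword, and each update fires per-file preparation with no polling, scheduling scripts, or file-system triggers. All class, keyword, and path names below are hypothetical stand-ins for the Keck keyword system:

```python
from typing import Callable, Dict, List

class KeywordMonitor:
    """Minimal stand-in for a keyword-based control system with callbacks."""

    def __init__(self) -> None:
        self._callbacks: Dict[str, List[Callable[[str], None]]] = {}

    def register(self, keyword: str, callback: Callable[[str], None]) -> None:
        # Callbacks replace scheduling scripts or file-system triggers.
        self._callbacks.setdefault(keyword, []).append(callback)

    def broadcast(self, keyword: str, value: str) -> None:
        # The control system invokes every handler when a keyword updates.
        for cb in self._callbacks.get(keyword, []):
            cb(value)

prepared: List[str] = []
monitor = KeywordMonitor()
# Prepare each newly written file for transfer as soon as the keyword updates.
monitor.register("LASTFILE", prepared.append)
monitor.broadcast("LASTFILE", "/sdata125/nires/s221206_0042.fits")  # illustrative path
```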

Read this paper on arXiv…

G. Berriman, M. Brodheim, M. Brown, et. al.
Wed, 7 Dec 22
19/74

Comments: 4 pages, 3 figures

Topological Designs for Scalar Vortex Coronagraphs [IMA]

http://arxiv.org/abs/2212.02633


The detection and characterization of Earth-like exoplanets around Sun-like stars for future flagship missions requires coronagraphs to achieve contrasts on the order of 1e-10 at close angular separations and over large spectral bandwidths (>=20%). We present our progress thus far on exploring the potential for scalar vortex coronagraphs (SVCs) in direct exoplanet imaging. SVCs are an attractive alternative to vector vortex coronagraphs (VVCs), which have recently demonstrated 6e-9 raw contrast in 20% broadband light but are polarization dependent. SVCs imprint the same phase ramp on the incoming light and do not require polarization splitting, but are inherently limited by their chromatic behavior. Several SVC designs have been proposed in recent years to solve this issue by modulating or wrapping the azimuthal phase function according to specific patterns. For one such design, the staircase SVC, we present our best experimental SVC results demonstrating raw contrast of 2e-7 in 10% broadband light. Since SVC broadband performance and aberration sensitivities are highly dependent on topology, we conducted a comparative study of several SVC designs to optimize for high contrast across a range of bandwidths. Furthermore, we present a new coronagraph optimization tool to predict performance in order to find an achromatic solution.

Read this paper on arXiv…

N. Desai, J. Llop-Sayson, A. Bertrou-Cantou, et. al.
Wed, 7 Dec 22
21/74

Comments: N/A

Self-supervised component separation for the extragalactic submillimeter sky [CEA]

http://arxiv.org/abs/2212.02847


We use a new approach based on self-supervised deep learning networks originally applied to transparency separation in order to simultaneously extract the components of the extragalactic submillimeter sky, namely the Cosmic Microwave Background (CMB), the Cosmic Infrared Background (CIB), and the Sunyaev-Zel’dovich (SZ) effect. In this proof-of-concept paper, we test our approach on the WebSky extragalactic simulation maps in a range of frequencies from 93 to 545 GHz, and compare with MILCA, one of the state-of-the-art traditional methods, for the case of SZ. We first compare the images visually, and then compare the full-sky reconstructed high-resolution maps statistically using power spectra. We study the contamination from other components with cross spectra, particularly emphasizing the correlation between the CIB and the SZ effect, and compute SZ fluxes around the positions of galaxy clusters. The independent networks learn how to reconstruct the different components with less contamination than MILCA. Although this is tested here in an ideal case (without noise, beams, or foregrounds), this method shows good potential for application in future experiments such as the Simons Observatory (SO) in combination with the Planck satellite.

Read this paper on arXiv…

V. Bonjean, H. Tanimura, N. Aghanim, et. al.
Wed, 7 Dec 22
24/74

Comments: 10 pages, 7 figures

Coma Off It: Removing Variable Point Spread Functions from Astronomical Images [IMA]

http://arxiv.org/abs/2212.02594


We describe a method for regularizing, post facto, the point-spread function (PSF) of a telescope or other imaging instrument, across its entire field of view. Imaging instruments in general blur point sources of light by local convolution with a PSF that varies slowly across the field of view, due to coma, spherical aberration, and similar effects. It is possible to regularize the PSF in post-processing, producing data with a uniform and narrow “effective PSF” across the entire field of view. In turn, the method enables seamless wide-field astronomical mosaics at higher resolution than would otherwise be achievable, and potentially changes the design trade space for telescopes, lenses, and other optical systems where data uniformity is important. The method does not require access to the instrument that acquired the data, and can be bootstrapped from existing data sets that include starfield images.
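A common building block for this kind of PSF regularization is a Fourier-space kernel that maps the local PSF onto a chosen target PSF, with Wiener-style damping where the local PSF has no power. This is a sketch of the general idea, not the authors' specific algorithm (the regularization constant `eps` is an arbitrary choice):

```python
import numpy as np

def homogenization_kernel(psf_local, psf_target, eps=1e-3):
    """Kernel K such that psf_local (*) K ~= psf_target, built in Fourier
    space with Wiener-style regularisation to suppress noise amplification
    at frequencies where the local PSF carries no signal."""
    P = np.fft.fft2(psf_local)
    T = np.fft.fft2(psf_target)
    K = T * np.conj(P) / (np.abs(P) ** 2 + eps)
    return np.real(np.fft.ifft2(K))
```

Convolving each image tile with the kernel derived from its local PSF yields a uniform effective PSF across the mosaic; when the local and target PSFs coincide, the kernel reduces to (approximately) a delta function.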

Read this paper on arXiv…

J. Hughes, C. DeForest and D. Seaton
Wed, 7 Dec 22
27/74

Comments: 11 pages; submitted to Astrophysical Journal

Improving machine learning-derived photometric redshifts and physical property estimates using unlabelled observations [GA]

http://arxiv.org/abs/2212.02537


In the era of huge astronomical surveys, machine learning offers promising solutions for the efficient estimation of galaxy properties. The traditional ‘supervised’ paradigm for the application of machine learning involves training a model on labelled data, and using this model to predict the labels of previously unlabelled data. The semi-supervised ‘pseudo-labelling’ technique offers an alternative paradigm, allowing the model training algorithm to learn from both labelled data and as-yet unlabelled data. We test the pseudo-labelling method on the problems of estimating redshift, stellar mass, and star formation rate, using COSMOS2015 broad band photometry and one of several publicly available machine learning algorithms, and we obtain significant improvements compared to purely supervised learning. We find that the gradient-boosting tree methods CatBoost, XGBoost, and LightGBM benefit the most, with reductions of up to ~15% in metrics of absolute error. We also find similar improvements in the photometric redshift catastrophic outlier fraction. We argue that the pseudo-labelling technique will be useful for the estimation of redshift and physical properties of galaxies in upcoming large imaging surveys such as Euclid and LSST, which will provide photometric data for billions of sources.
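The pseudo-labelling loop itself is simple: train on the labelled set, predict labels for the unlabelled set, then retrain on the union. A sketch of that data flow, with a trivial least-squares regressor standing in for the paper's gradient-boosted trees:

```python
import numpy as np

def fit(X, y):
    """Ordinary least squares with a bias term (toy stand-in model)."""
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    return np.hstack([X, np.ones((len(X), 1))]) @ coef

def pseudo_label(X_lab, y_lab, X_unlab):
    """One round of pseudo-labelling: label the unlabelled set with a model
    trained on the labelled set, then retrain on the combined sample."""
    coef = fit(X_lab, y_lab)
    y_pseudo = predict(coef, X_unlab)
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, y_pseudo])
    return fit(X_all, y_all)
```

In practice the benefit comes from the unlabelled sample reshaping the learner's view of feature space (and, for tree ensembles, the split structure), not from the pseudo-labels adding new information per se.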

Read this paper on arXiv…

A. Humphrey, P. Cunha, A. Paulino-Afonso, et. al.
Wed, 7 Dec 22
35/74

Comments: 10 pages, 6 figures, accepted for publication in MNRAS

Astronomical source detection in radio continuum maps with deep neural networks [IMA]

http://arxiv.org/abs/2212.02538


Source finding is one of the most challenging tasks in upcoming radio continuum surveys with SKA precursors, such as the Evolutionary Map of the Universe (EMU) survey of the Australian SKA Pathfinder (ASKAP) telescope. The resolution, sensitivity, and sky coverage of such surveys are unprecedented, requiring new features and improvements to be made in existing source finders. Among them are reducing the false detection rate, particularly in the Galactic plane, and the ability to associate multiple disjoint islands into physical objects. To bridge this gap, we developed a new source finder, based on the Mask R-CNN object detection framework, capable of both detecting and classifying compact, extended, spurious, and poorly imaged sources in radio continuum images. The model was trained using ASKAP EMU data, observed during the Early Science and pilot survey phase, and previous radio survey data, taken with the VLA and ATCA telescopes. On the test sample, the final model achieves an overall detection completeness above 85\%, a reliability of $\sim$65\%, and a classification precision/recall above 90\%. Results obtained for all source classes are reported and discussed.

Read this paper on arXiv…

S. Riggi, D. Magro, R. Sortino, et. al.
Wed, 7 Dec 22
36/74

Comments: 18 pages, 11 figures

Probing Accretion Turbulence in the Galactic Center with EHT Polarimetry [IMA]

http://arxiv.org/abs/2212.02544


Magnetic fields grown by instabilities driven by differential rotation are believed to be essential to accretion onto black holes. These instabilities saturate in a turbulent state; therefore, the spatial and temporal variability in the horizon-resolving images of Sagittarius A* (Sgr A*) will be able to empirically assess this critical aspect of accretion theory. However, interstellar scattering blurs high-frequency radio images from the Galactic center and introduces spurious small-scale structures, complicating the interpretation of spatial fluctuations in the image. We explore the impact of interstellar scattering on the polarized images of Sgr A* and demonstrate that for credible physical parameters, the intervening scattering is non-birefringent. Therefore, we construct a scattering mitigation scheme that exploits horizon-resolving polarized millimeter/submillimeter VLBI observations to generate statistical measures of the intrinsic spatial fluctuations and therefore the underlying accretion flow turbulence. An optimal polarization basis is identified, corresponding to measurements of the fluctuations in magnetic field orientation in three dimensions. We validate our mitigation scheme using simulated data sets and find that current and future ground-based experiments will readily be able to accurately measure the image-fluctuation power spectrum.

Read this paper on arXiv…

C. Ni, A. Broderick and R. Gold
Wed, 7 Dec 22
62/74

Comments: N/A

Gravoturbulence in local disk simulations with an adaptive moving mesh [EPA]

http://arxiv.org/abs/2212.02526


Self-gravity plays an important role in the evolution of rotationally supported systems such as protoplanetary disks, accretion disks around black holes, or galactic disks, as it can both feed turbulence and lead to gravitational fragmentation. While such systems can be studied in the shearing box approximation with high local resolution, the large density contrasts that are possible in the case of fragmentation still limit the utility of Eulerian codes with constant spatial resolution. In this paper, we present a novel self-gravity solver for the shearing box based on the TreePM method of the moving-mesh code AREPO. The spatial gravitational resolution is adaptive, which is important to make full use of the quasi-Lagrangian hydrodynamical resolution of the code. We apply our new implementation to two- and three-dimensional, self-gravitating disks combined with a simple $\beta$-cooling prescription. For weak cooling we find a steady, gravoturbulent state, while for strong cooling the formation of fragments is inevitable. To reach convergence for the critical cooling efficiency above which fragmentation occurs, we require a smoothing of the gravitational force in the two-dimensional case that mimics the stratification of the three-dimensional simulations. The critical cooling efficiency we find, $\beta \approx 3$, as well as box-averaged quantities characterizing the gravoturbulent state, agree well with various previous results in the literature. Interestingly, we observe stochastic fragmentation for $\beta > 3$, which slightly decreases the cooling efficiency required to observe fragmentation over the lifetime of a protoplanetary disk. The numerical method outlined here appears well suited to study the problem of galactic disks as well as magnetized, self-gravitating disks.
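The $\beta$-cooling prescription referenced here fixes the local cooling time to $t_{\rm cool} = \beta/\Omega$, so the internal energy decays as $du/dt = -u\,\Omega/\beta$. A one-line sketch of the corresponding update (the exact per-step exponential, which is unconditionally stable, rather than any particular code's implementation):

```python
import numpy as np

def beta_cool(u, omega, beta, dt):
    """Beta cooling: du/dt = -u / t_cool with t_cool = beta / Omega.
    Applies the closed-form exponential decay over one timestep dt."""
    return u * np.exp(-omega * dt / beta)
```

Small $\beta$ means cooling faster than the orbital time, which is what drives the fragmentation boundary at $\beta \approx 3$ discussed above.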

Read this paper on arXiv…

O. Zier and V. Springel
Wed, 7 Dec 22
73/74

Comments: 20 pages, 16 figures, submitted to MNRAS

A Standardized Framework for Collecting Graduate Student Input in Faculty Searches [IMA]

http://arxiv.org/abs/2212.01456


We present a procedure designed to standardize input received during faculty searches with the goal of amplifying student voices. The framework was originally used to collect feedback from graduate students, but it can be adapted easily to collect feedback from undergraduate students, faculty, staff or other stakeholders. Implementing this framework requires agreement across participating parties and minimal organization prior to the start of faculty candidate visits.

Read this paper on arXiv…

Y. Asali, K. Gerbig, A. Ghosh, et. al.
Tue, 6 Dec 22
3/87

Comments: 9 Pages, 6 Figures, Posted on Bulletin of the American Astronomical Society (BAAS)

panco2: a Python library to measure intracluster medium pressure profiles from Sunyaev-Zeldovich observations [IMA]

http://arxiv.org/abs/2212.01439


We present panco2, an open-source Python library designed to extract galaxy cluster pressure profiles from maps of the thermal Sunyaev-Zeldovich effect. The extraction is based on forward modeling of the total observed signal, making it possible to account for the usual features of millimeter observations, such as beam smearing, data processing filtering, and point source contamination. panco2 offers great flexibility in the inputs that can be handled and in the analysis options, enabling refined analyses and studies of systematic effects. We detail the functionalities of the code, the algorithm used to infer pressure profile measurements, and the typical data products. We present examples of running sequences, and the validation on simulated inputs. The code is available on GitHub at https://github.com/fkeruzore/panco2, and comes with extensive technical documentation to complement this paper at https://panco2.readthedocs.io.
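The forward model in such analyses starts from a parameterised pressure profile; a common choice is the generalised NFW (gNFW) form (shown here as an illustration of the ingredient being fitted — the abstract does not state which parameterisations panco2 supports):

```python
import numpy as np

def gnfw_pressure(r, p0, rp, a, b, c):
    """Generalised NFW pressure profile:
    P(r) = p0 / [ (r/rp)^c * (1 + (r/rp)^a)^((b - c)/a) ],
    with inner slope c, outer slope b, and turnover sharpness a."""
    x = np.asarray(r, dtype=float) / rp
    return p0 / (x ** c * (1.0 + x ** a) ** ((b - c) / a))
```

The forward model then projects this profile along the line of sight to a Compton-y map, convolves with the beam and the processing transfer function, and compares with the observed map inside the sampler.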

Read this paper on arXiv…

F. Kéruzoré, F. Mayet, E. Artis, et. al.
Tue, 6 Dec 22
7/87

Comments: 15 pages, 12 figures, for submission to the Open Journal of Astrophysics

Algorithms and radiation dynamics for the vicinity of black holes I. Methods and codes [IMA]

http://arxiv.org/abs/2212.01532


We examine radiation and its effects on accretion disks orbiting astrophysical black holes. These disks are thermally radiating and can be geometrically and optically thin or thick. In this first paper of the series, we discuss the physics and the formulation required for this study. Subsequently, we construct and solve the relativistic radiative transfer equation, or find suitable solutions where that is not possible. We continue by presenting some of the accretion disks we considered for this work. We then describe the families of codes developed in order to study particle trajectories in strong gravity, calculate radiation forces exerted onto the disk material, and generate observation pictures of black hole systems at infinity. Furthermore, we also examine the veracity and accuracy of our work. Finally, we investigate how we can further use our results to estimate the black hole spin and the motion of disk material subjected to these radiation forces.

Read this paper on arXiv…

L. Koutsantoniou
Tue, 6 Dec 22
11/87

Comments: 22 pages, 20 figures, 4 tables, accepted in Astronomy & Astrophysics

Methods for quasar absorption system measurements of the fine structure constant in the 2020s and beyond [CEA]

http://arxiv.org/abs/2212.02458


This article reviews the two major recent developments that significantly improved cosmological measurements of fundamental constants derived from high resolution quasar spectroscopy. The first one is the deployment of astronomical Laser Frequency Combs on high resolution spectrographs and the second one is the development of spectral analysis tools based on Artificial Intelligence methods. The former all but eliminated the previously dominant source of instrumental uncertainty whereas the latter established optimal methods for measuring the fine structure constant ($\alpha$) in quasar absorption systems. The methods can be used on data collected by the new ESPRESSO spectrograph and the future ANDES spectrograph on the Extremely Large Telescope to produce unbiased $\Delta\alpha/\alpha$ measurements with unprecedented precision.

Read this paper on arXiv…

D. Milaković, C. Lee, P. Molaro, et. al.
Tue, 6 Dec 22
16/87

Comments: 10 pages, part of the HACK100 conference proceedings

Polarized Maser Emission with In-Source Faraday Rotation [IMA]

http://arxiv.org/abs/2212.01410


We discuss studies of polarization in astrophysical masers with particular emphasis on the case where the Zeeman splitting is small compared to the Doppler profile, resulting in a blend of the transitions between magnetic substates. A semi-classical theory of the molecular response is derived, and coupled to radiative transfer solutions for 1 and 2-beam linear masers, resulting in a set of non-linear, algebraic equations for elements of the molecular density matrix. The new code, PRISM, implements numerical methods to compute these solutions. Using PRISM, we demonstrate a smooth transfer between this case and that of wider splitting. For a J=1-0 system, with parameters based on the $v=1, J=1-0$ transition of SiO, we investigate the behaviour of linear and circular polarization as a function of the angle between the propagation axis and the magnetic field, and with the optical depth, or saturation state, of the model. We demonstrate how solutions are modified by the presence of Faraday rotation, generated by various abundances of free electrons, and that strong Faraday rotation leads to additional angles where Stokes-Q changes sign. We compare our results to a number of previous models, from the analytical limits derived by Goldreich, Keeley and Kwan in 1973, through computational results by W. Watson and co-authors, to the recent work by Lankhaar and Vlemmings in 2019. We find that our results are generally consistent with those of other authors given the differences of approach and the approximations made.

Read this paper on arXiv…

T. Tobin, M. Gray and A. Kemball
Tue, 6 Dec 22
17/87

Comments: 36 pages, 15 figures. Accepted for publication in ApJ

Giant Planet Observations in NASA's Planetary Data System [EPA]

http://arxiv.org/abs/2212.02492


While there have been far fewer missions to the outer Solar System than to the inner Solar System, spacecraft destined for the giant planets have conducted a wide range of fundamental investigations, returning data that continues to reshape our understanding of these complex systems, sometimes decades after the data were acquired. These data are preserved and accessible from national and international planetary science archives. For all NASA planetary missions and instruments the data are available from the science discipline nodes of the NASA Planetary Data System (PDS). Looking ahead, the PDS will be the primary repository for giant planets data from several upcoming missions and derived datasets, as well as supporting research conducted to aid in the interpretation of the remotely sensed giant planets data already archived in the PDS.

Read this paper on arXiv…

N. Chanover, J. Bauer, J. Blalock, et. al.
Tue, 6 Dec 22
36/87

Comments: Contributed to the special issue of Remote Sensing entitled “Remote Sensing Observations of the Giant Planets”

Analytic auto-differentiable $\Lambda$CDM cosmography [IMA]

http://arxiv.org/abs/2212.01937


I present general analytic expressions for distance calculations (comoving distance, time coordinate, and absorption distance) in the standard $\Lambda$CDM cosmology, allowing for the presence of radiation and for non-zero curvature. The solutions utilise the symmetric Carlson basis of elliptic integrals, which can be evaluated with fast numerical algorithms that allow trivial parallelisation on GPUs and automatic differentiation without the need for additional special functions. I introduce a PyTorch-based implementation in the phytorch.cosmology package and briefly examine its accuracy and speed in comparison with numerical integration and other known expressions (for special cases). Finally, I demonstrate an application to high-dimensional Bayesian analysis that utilises automatic differentiation through the distance calculations to efficiently derive posteriors for cosmological parameters from up to $10^6$ mock type Ia supernovae using variational inference.
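The quantity being put in closed form is the standard comoving-distance integral $D_C = (c/H_0)\int_0^z dz'/E(z')$. A brute-force quadrature sketch (trapezoidal rule, illustrative default parameters), against which Carlson-integral expressions of this kind can be checked:

```python
import numpy as np

def comoving_distance(z, h=0.7, om=0.3, orad=0.0, ol=0.7, n=100001):
    """LCDM line-of-sight comoving distance in Mpc by trapezoidal quadrature:
    D_C = (c/H0) * Integral_0^z dz' / E(z'),
    E(z) = sqrt(orad(1+z)^4 + om(1+z)^3 + ok(1+z)^2 + ol), ok = 1-om-orad-ol."""
    c = 299792.458  # speed of light, km/s
    ok = 1.0 - om - orad - ol
    zz = np.linspace(0.0, z, n)
    ez = np.sqrt(orad * (1 + zz) ** 4 + om * (1 + zz) ** 3
                 + ok * (1 + zz) ** 2 + ol)
    f = 1.0 / ez
    dz = zz[1] - zz[0]
    integral = dz * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    return c / (100.0 * h) * integral
```

The advantage of the analytic Carlson-basis route described in the abstract is precisely avoiding this quadrature loop, which makes batched GPU evaluation and automatic differentiation with respect to the cosmological parameters straightforward.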

Read this paper on arXiv…

K. Karchev
Tue, 6 Dec 22
37/87

Comments: 13 pages, 5 figures + appendix; phytorch available at this https URL

Improved binary black hole searches through better discrimination against noise transients [CL]

http://arxiv.org/abs/2212.02026


The short-duration noise transients in LIGO and Virgo detectors significantly affect the search sensitivity of compact binary coalescence (CBC) signals, especially in the high mass region. In previous work by the authors [Joshi_2021], a $\chi^2$ statistic was proposed to distinguish them from CBCs. This work is an extension where we demonstrate the improved noise-discrimination of the optimal $\chi^2$ statistic in real LIGO data. The tuning of the optimal $\chi^2$ includes accounting for the phase of the CBC signal and a well-informed choice of sine-Gaussian basis vectors to discern how CBC signals and some of the most worrisome noise transients project differently on them [sunil_2022]. We take real blip glitches (a type of short-duration noise disturbance) from the second observational (O2) run of the LIGO-Hanford and LIGO-Livingston detectors. The binary black hole signals were simulated using the IMRPhenomPv2 waveform and injected into real LIGO data from the same run. We show that in comparison to the traditional $\chi^2$, the optimal $\chi^2$ improves the signal detection rate by around 4\% in a lower-mass bin ($m_1,m_2 \in [20,40]M_{\odot}$) and by more than 5\% in a higher-mass bin ($m_1,m_2 \in [60,80]M_{\odot}$), at a false alarm probability of $10^{-3}$. We find that the optimal $\chi^2$ also achieves significant improvement over the sine-Gaussian $\chi^2$.

Read this paper on arXiv…

S. Choudhary, S. Bose, P. Joshi, et. al.
Tue, 6 Dec 22
38/87

Comments: 12 pages, 6 figures

Observation of night-time emissions of the Earth in the near UV range from the International Space Station with the Mini-EUSO detector [IMA]

http://arxiv.org/abs/2212.02353


Mini-EUSO (Multiwavelength Imaging New Instrument for the Extreme Universe Space Observatory) is a telescope observing the Earth from the International Space Station since 2019. The instrument employs a Fresnel-lens optical system and a focal surface composed of 36 multi-anode photomultiplier tubes, 64 channels each, for a total of 2304 channels with single photon counting sensitivity. Mini-EUSO also contains two ancillary cameras to complement measurements in the near infrared and visible ranges. The scientific objectives of the mission range from the search for extensive air showers generated by Ultra-High Energy Cosmic Rays (UHECRs) with energies above 10$^{21}$ eV, the search for nuclearites and Strange Quark Matter (SQM), up to the study of atmospheric phenomena such as Transient Luminous Events (TLEs), meteors and meteoroids. Mini-EUSO can map the night-time Earth in the near UV range (between 290-430 nm) with a spatial resolution of about 6.3 km (full field of view of 44°) and a maximum temporal resolution of 2.5 $\mu$s, observing our planet through a nadir-facing UV-transparent window in the Russian Zvezda module. The detector saves triggered transient phenomena with a sampling rate of 2.5 $\mu$s and 320 $\mu$s, as well as continuous acquisition at 40.96 ms scale. In this paper we discuss the detector response and the flat-fielding and calibration procedures. Using the 40.96 ms data, we present $\simeq$6.3 km resolution night-time Earth maps in the UV band, and report on various emissions of anthropogenic and natural origin. We measure ionospheric airglow emissions of dark moonless nights over the sea and ground, studying the effect of clouds, moonlight, and artificial (towns, boats) lights. In addition to paving the way forward for the study of long-term variations of natural and artificial light, we also estimate the observation live-time of future UHECR detectors.

Read this paper on arXiv…

M. Casolino, D. Barghini, M. Battisti, et. al.
Tue, 6 Dec 22
42/87

Comments: 49 pages, 27 figures, 1 table, published in Remote Sensing of Environment

Astrometric precision tests on TESS data [IMA]

http://arxiv.org/abs/2212.02357


Background. Astrometry at or below the micro-arcsec level with an imaging telescope assumes that the uncertainty on the location of an unresolved source can be an arbitrarily small fraction of the detector pixel, given a sufficient photon budget. Aim. This paper investigates the geometric limiting precision, in terms of CCD pixel fraction, achieved by a large set of star field images, selected among the publicly available science data of the TESS mission. Method. The statistics of the distance between selected bright stars ($G \simeq 5\,mag$), in pixel units, is evaluated, using the position estimate provided in the TESS light curve files. Results. The dispersion of coordinate differences appears to be affected by long-term variation and noisy periods, at the level of $0.01$ pixel. The residuals with respect to low-pass filtered data (tracing the secular evolution), which are interpreted as the experimental astrometric noise, reach the level of a few milli-pixel or below, down to $1/5,900$ pixel. Saturated images are present, evidencing that the astrometric precision is mostly preserved across the CCD columns, whereas it features a graceful degradation in the along-column direction. The cumulative performance of the image set is a few micro-pixel across columns, or a few tens of micro-pixels along columns. Conclusions. The idea of astrometric precision down to a small fraction of a CCD pixel, given sufficient signal to noise ratio, is confirmed by real data from an in-flight science instrument to the $10^{-6}$ pixel level. Implications for future high precision astrometry missions are briefly discussed.
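The precision metric described here is the scatter of the inter-star distance, in pixel units, after removing the slow secular trend with a low-pass filter. A minimal sketch (moving-average low-pass; the window length is a hypothetical choice, not the paper's filter):

```python
import numpy as np

def astrometric_noise(x1, y1, x2, y2, window=51):
    """Std of the residual pixel distance between two stars after
    subtracting a moving-average low-pass trend (the secular evolution)."""
    d = np.hypot(np.asarray(x1) - np.asarray(x2),
                 np.asarray(y1) - np.asarray(y2))
    kernel = np.ones(window) / window
    trend = np.convolve(d, kernel, mode="same")
    # Trim the edges, where the moving average is biased by zero-padding.
    m = window // 2
    resid = (d - trend)[m:-m]
    return resid.std(ddof=1)
```

Applied per axis (across-column vs. along-column coordinate differences), the same statistic separates the two precision regimes reported in the results.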

Read this paper on arXiv…

M. Gai, A. Vecchiato, A. Riva, et. al.
Tue, 6 Dec 22
49/87

Comments: 13 pages, 8 figures

Using the SourceXtractor++ package for data reduction [IMA]

http://arxiv.org/abs/2212.02428


The Euclid satellite is an ESA mission scheduled for launch in September 2023. To optimally perform critical stages of the data reduction, such as object detection and morphology determination, a new and modern software package was required. We have developed SourceXtractor++ as open source software for detecting and measuring sources in astronomical images. It is a complete redesign of the original SExtractor, written mainly in C++. The package follows a modular approach and facilitates the analysis of multiple overlapping sources over many images with different pixel grids. SourceXtractor++ is already operational in many areas of the Euclid processing, and we demonstrate here the capabilities of the current version v0.19 on the basis of a set of typical use cases, which are available for download.

Read this paper on arXiv…

M. Kümmel, A. Álvarez-Ayllón, E. Bertin, et. al.
Tue, 6 Dec 22
50/87

Comments: 4 pages, 2 figures

Detection of separatrices and chaotic seas based on orbit amplitudes [EPA]

http://arxiv.org/abs/2212.02200


The Maximum Eccentricity Method (MEM) is a standard tool for the analysis of planetary systems and their stability. The method amounts to estimating the maximal stretch of orbits over sampled domains of initial conditions. The present paper leverages the MEM to introduce a sharp detector of separatrices and chaotic seas. After introducing the MEM analogue for nearly-integrable action-angle Hamiltonians, i.e., diameters, we use low-dimensional dynamical systems with multi-resonant modes and junctions, supporting chaotic motions, to recognise the drivers of the diameter metric. Once this is appreciated, we present a second-derivative-based index measuring the regularity of this application. This quantity turns out to be a sensitive and robust indicator for detecting separatrices, resonant webs and chaotic seas. We discuss practical applications of this framework in the context of $N$-body simulations for the planetary case affected by mean-motion resonances, and demonstrate the ability of the index to distinguish minute structures of the phase space, otherwise undetected with the original MEM.
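The two ingredients, an orbit "diameter" per initial condition and a second-derivative index over the grid of initial conditions, can be sketched as follows (schematic only; the paper applies this to full orbit integrations, and the finite-difference form here is one obvious discretisation):

```python
import numpy as np

def diameter(trajectory):
    """Orbit 'diameter': maximal stretch of a tracked variable
    (e.g. eccentricity or an action) along one sampled orbit."""
    trajectory = np.asarray(trajectory)
    return trajectory.max() - trajectory.min()

def regularity_index(diameters, dx):
    """Magnitude of the second derivative of the diameter map over a 1-D
    grid of initial conditions (central differences); spikes in this index
    flag separatrices and the edges of chaotic seas, where the diameter
    map loses smoothness."""
    d = np.asarray(diameters, dtype=float)
    d2 = (d[2:] - 2.0 * d[1:-1] + d[:-2]) / dx ** 2
    return np.abs(d2)
```

On a regular region the diameter map is smooth and the index stays flat; across a separatrix the map has a kink or jump and the index spikes.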

Read this paper on arXiv…

J. Daquin and C. Charalambous
Tue, 6 Dec 22
59/87

Comments: Under review at Celestial Mechanics and Dynamical Astronomy. 8 Figures, 59 references, 17 pages. Comments and feedback welcome

A quantum-mechanical investigation of O($^3P$) + CO scattering cross sections at superthermal collision energies [CL]

http://arxiv.org/abs/2212.01799


The kinetics and energetic relaxation associated with collisions between fast and thermal atoms are of fundamental interest for escape, and therefore also for the evolution of the Mars atmosphere. The total and differential cross-sections of fast O($^3P$) atom collisions with CO have been calculated from quantum mechanical calculations. The cross-sections are computed at collision energies from 0.4 to 5 eV in the center-of-mass frame, relevant to planetary science and astrophysics. All three potential energy surfaces ($^3$A’, $^3$A” and 2 $^3$A” symmetry) of O($^3P$) + CO collisions separating to the atomic ground state have been included in the calculations of the cross-sections. The cross-sections are computed for all three isotopes of energetic O($^3P$) atoms colliding with CO. The isotope dependence of the cross-sections is compared. Our newly calculated data on the energy relaxation of O atoms and their isotopes with CO molecules will be very useful for improving the modeling of escape and energy transfer processes in Mars’ upper atmosphere.

Read this paper on arXiv…

S. Chhabra, M. Gacesa, M. Khalil, et. al.
Tue, 6 Dec 22
62/87

Comments: 8 pages, 6 figures

Searching for Intelligent Life in Gravitational Wave Signals Part I: Present Capabilities and Future Horizons [IMA]

http://arxiv.org/abs/2212.02065


We show that the Laser Interferometer Gravitational Wave Observatory (LIGO) is a powerful instrument in the Search for Extra-Terrestrial Intelligence (SETI). LIGO’s ability to detect gravitational waves (GWs) from accelerating astrophysical sources, such as binary black holes, also provides the potential to detect extra-terrestrial mega-technology, such as Rapid And/or Massive Accelerating spacecraft (RAMAcraft). We show that LIGO is sensitive to RAMAcraft of $1$ Jupiter mass accelerating to a fraction of the speed of light (e.g. $10\%$) up to about $100\,{\rm kpc}$. Existing SETI searches probe on the order of thousands to tens of thousands of stars for human-scale technology (e.g. radiowaves), whereas LIGO can probe all $10^{11}$ stars in the Milky Way for RAMAcraft. Moreover, thanks to the $f^{-1}$ scaling of the GW signal produced by these sources, our sensitivity to these objects will increase as low-frequency, space-based detectors are developed and improved. In particular, we find that DECIGO and the Big Bang Observer (BBO) will be about 100 times more sensitive than LIGO, increasing the search volume by 10$^{6}$. In this paper, we calculate the waveforms for linearly accelerating RAMAcraft in a form suitable for LIGO, Virgo, or KAGRA searches and provide the range for a variety of possible masses and accelerations. We expect that the current and upcoming GW detectors will soon become an excellent complement to the existing SETI efforts.

Read this paper on arXiv…

L. Sellers, A. Bobrick, G. Martire, et al.
Tue, 6 Dec 22
63/87

Comments: 18 pages, 12 figures, to be submitted to MNRAS, comments welcome

An Unsupervised Machine Learning Method for Electron–Proton Discrimination of the DAMPE Experiment [IMA]

http://arxiv.org/abs/2212.01843


Galactic cosmic rays are mostly made up of energetic nuclei, with less than $1\%$ of electrons (and positrons). Precise measurement of the electron and positron component requires a very efficient method to reject the nuclei background, mainly protons. In this work, we develop an unsupervised machine learning method to identify electrons and positrons from cosmic ray protons for the Dark Matter Particle Explorer (DAMPE) experiment. Compared with the supervised learning method used in the DAMPE experiment, this unsupervised method relies solely on real data except for the background estimation process. As a result, it could effectively reduce the uncertainties from simulations. For three energy ranges of electrons and positrons, 80–128 GeV, 350–700 GeV, and 2–5 TeV, the residual background fractions in the electron sample are found to be about (0.45 $\pm$ 0.02)$\%$, (0.52 $\pm$ 0.04)$\%$, and (10.55 $\pm$ 1.80)$\%$, and the background rejection power is about (6.21 $\pm$ 0.03) $\times$ $10^4$, (9.03 $\pm$ 0.05) $\times$ $10^4$, and (3.06 $\pm$ 0.32) $\times$ $10^4$, respectively. This method gives a higher background rejection power in all energy ranges than the traditional morphological parameterization method and reaches comparable background rejection performance compared with supervised machine learning methods.
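As a hedged illustration of the idea (not DAMPE’s actual pipeline; the discriminating variable, numbers, and function names below are invented for the toy), an unsupervised two-component Gaussian mixture fit to a single shower-shape variable can separate an electron-like and a proton-like population without simulation-based labels:

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """Unsupervised EM fit of a two-component 1-D Gaussian mixture."""
    mu = np.percentile(x, [25, 75]).astype(float)   # crude initialisation
    sigma = np.full(2, x.std())
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each event
        d = (x[:, None] - mu) / sigma
        pdf = w * np.exp(-0.5 * d**2) / (sigma * np.sqrt(2 * np.pi))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: update the weights, means and widths
        n = resp.sum(axis=0)
        w, mu = n / len(x), (resp * x[:, None]).sum(axis=0) / n
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sigma, resp

rng = np.random.default_rng(0)
# toy shower-shape variable: electron-like events peak low, proton-like high
x = np.concatenate([rng.normal(0.0, 1.0, 5000), rng.normal(5.0, 1.0, 5000)])
truth_is_proton = np.repeat([False, True], 5000)

w, mu, sigma, resp = em_two_gaussians(x)
elec = np.argmin(mu)                        # component with the lower mean
sel = resp[:, elec] > 0.9                   # "electron sample" selection
residual_bkg = truth_is_proton[sel].mean()  # computable only because it is a toy
```

The residual background fraction is directly computable here only because the toy has truth labels; in real data it must be estimated independently, which is the one step where the abstract notes simulations still enter.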

Read this paper on arXiv…

Z. Xu, X. Li, M. Cui, et al.
Tue, 6 Dec 22
66/87

Comments: 10 pages, 5 figures, 1 table

High Resolution VLBI Astrometry with the $θ-θ$ Transform [IMA]

http://arxiv.org/abs/2212.01417


The recent development of $\theta-\theta$ techniques in pulsar scintillometry has opened the door for new high resolution imaging techniques of the scattering medium. By solving the phase retrieval problem and recovering the wavefield from a pulsar dynamic spectrum, the Doppler shift, time delay, and phase offset of individual images can be determined. However, the results of phase retrieval from a single dish are only known up to a constant phase rotation, which prevents their use for astrometry using Very Long Baseline Interferometry. We present an extension to previous $\theta-\theta$ methods using the interferometric visibilities between multiple stations to calibrate the wavefields. When applied to existing data for PSR B0834+06 we measure the effective screen distance and lens orientation with five times greater precision than previous works.

Read this paper on arXiv…

D. Baker, W. Brisken, M. Kerkwijk, et al.
Tue, 6 Dec 22
70/87

Comments: N/A

Correction factors of the measurement errors of the LAMOST-LRS stellar parameters [IMA]

http://arxiv.org/abs/2212.02018


We aim to investigate the reliability of the stellar parameter errors in the official data release of the LAMOST low-resolution spectroscopy (LRS) survey. We diagnose the errors of radial velocity (RV), the atmospheric parameters ([Fe/H], $T_{\rm eff}$, $\log g$), and $\alpha$-enhancement ([$\alpha$/M]) for the latest data release, DR7, comprising 6,079,235 effective spectra of 4,546,803 stars. Based on the duplicate observational sample, and by comparing the deviation of multiple measurements to their quoted errors, we find that, in general, the error of [$\alpha$/M] is largely underestimated, while the error of radial velocity is slightly overestimated. We define a correction factor $k$ to quantify these misestimations and correct the errors so that they can be interpreted as proper internal uncertainties. Using this self-calibration technique, we find that the $k$-factors vary significantly with stellar spectral type and the spectral signal-to-noise ratio (SNR). In particular, we reveal a strange but evident trend between the $k$-factors and the errors themselves for all five stellar parameters: larger errors tend to have smaller $k$-factor values, i.e., they are more overestimated. After the correction, we recover and quantify the tight correlations between SNR and the errors for all five parameters, and show that these correlations depend on spectral type. This also suggests that the parameter errors from each spectrum should be corrected individually. Finally, we provide the error correction factors for each derived parameter of each spectrum in the entire LAMOST-LRS DR7.
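The self-calibration idea can be sketched as follows (a minimal toy, not the paper’s exact estimator; the function name, sample sizes, and error values are ours): for repeat observations of the same star, the error-normalized difference between the two measurements should have unit scatter if the quoted errors are proper internal uncertainties, and the factor $k$ by which the scatter deviates from unity rescales the errors.

```python
import numpy as np

def k_factor(p1, p2, e1, e2):
    """Error-correction factor from duplicate observations.
    z = (p1 - p2) / sqrt(e1^2 + e2^2) should have unit standard deviation
    if e1, e2 are proper internal uncertainties; k > 1 means the quoted
    errors are underestimated."""
    z = (p1 - p2) / np.hypot(e1, e2)
    # robust (MAD-based) width estimate to limit the influence of outliers
    return 1.4826 * np.median(np.abs(z - np.median(z)))

rng = np.random.default_rng(1)
n = 20000
true_feh = rng.normal(-0.2, 0.3, n)      # toy [Fe/H] values
e_true, e_quoted = 0.20, 0.10            # real scatter vs quoted error
p1 = true_feh + rng.normal(0, e_true, n) # first epoch
p2 = true_feh + rng.normal(0, e_true, n) # second epoch
k = k_factor(p1, p2, np.full(n, e_quoted), np.full(n, e_quoted))
# corrected errors would then be e_corrected = k * e_quoted; here k is near 2
```

In the paper the same factor is derived per spectral type, SNR bin, and error magnitude, which is why the correction must be applied spectrum by spectrum.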

Read this paper on arXiv…

S. Zhang, G. Hu, R. Liu, et al.
Tue, 6 Dec 22
78/87

Comments: 17 pages, 9 figures, 4 Tables, Accepted for publication in Research in Astronomy and Astrophysics (RAA)

Applications of AI in Astronomy [IMA]

http://arxiv.org/abs/2212.01493


We provide a brief, and inevitably incomplete, overview of the use of Machine Learning (ML) and other AI methods in astronomy, astrophysics, and cosmology. Astronomy entered the big data era with the first digital sky surveys in the early 1990s and the resulting terascale data sets, which required the automation of many data processing and analysis tasks, for example star-galaxy separation, with billions of feature vectors in hundreds of dimensions. The exponential data growth continued with the rise of synoptic sky surveys and time-domain astronomy, with the resulting petascale data streams and the need for real-time processing, classification, and decision making. A broad variety of classification and clustering methods have been applied to these tasks, and this remains a very active area of research. Over the past decade we have seen an exponential growth of the astronomical literature involving a variety of ML/AI applications of ever increasing complexity and sophistication. ML and AI are now a standard part of the astronomical toolkit. As data complexity continues to increase, we anticipate further advances leading towards collaborative human-AI discovery.

Read this paper on arXiv…

S. Djorgovski, A. Mahabal, M. Graham, et al.
Tue, 6 Dec 22
79/87

Comments: 12 pages, 1 figure, an invited review chapter, to appear in: Artificial Intelligence for Science, eds. A. Choudhary, G. Fox and T. Hey, Singapore: World Scientific, in press (2023)

The Tracking Tapered Gridded Estimator (TTGE) for the power spectrum from drift scan observations [IMA]

http://arxiv.org/abs/2212.01251


Intensity mapping with the redshifted 21-cm line is an emerging tool in cosmology. Drift scan observations, where the antennas are fixed to the ground and the telescope’s pointing center (PC) changes continuously on the sky due to Earth’s rotation, provide the broad sky coverage and sustained instrumental stability needed for 21-cm intensity mapping. Here we present the Tracking Tapered Gridded Estimator (TTGE) to quantify the power spectrum of the sky signal estimated directly from the visibilities measured in drift scan radio interferometric observations. The TTGE uses the data from the different PCs to estimate the power spectrum of the signal from a small angular region located around a fixed tracking center (TC). The size of this angular region is set by a suitably chosen tapering window function, which serves to reduce the foreground contamination from bright sources located at large angles from the TC. It is possible to cover the angular footprint of the drift scan observations using multiple TCs, and to combine the estimated power spectra to increase the signal-to-noise ratio. Here we have validated the TTGE using simulations of $154 \, {\rm MHz}$ MWA drift scan observations. We show that the TTGE can recover the input model angular power spectrum $C_{\ell}$ to within $20 \%$ accuracy over the range $40 < \ell < 700$.

Read this paper on arXiv…

S. Chatterjee, S. Bharadwaj, S. Choudhuri, et al.
Mon, 5 Dec 22
12/63

Comments: Accepted for publication in MNRAS

Characterization of Low-noise Backshort-Under-Grid Kilopixel Transition Edge Sensor Arrays for PIPER [IMA]

http://arxiv.org/abs/2212.01370


We present laboratory characterization of kilo-pixel, filled backshort-under-grid (BUG) transition-edge sensor (TES) arrays developed for the Primordial Inflation Polarization ExploreR (PIPER) balloon-borne instrument. PIPER is designed to map the polarization of the CMB on the largest angular scales and characterize dust foregrounds by observing a large fraction of the sky in four frequency bands in the range 200 to 600 GHz. The BUG TES arrays are read out by planar SQUID-based time-division multiplexer chips (2dMUX) of matching form factor and hybridized directly with the detector arrays through indium bump bonding. Here, we discuss the performance of the 2dMUX and present measurements of the TES transition temperature, thermal conductance, saturation power, and preliminary noise performance. The detectors achieve saturation power below 1 pW and a phonon noise equivalent power (NEP) on the order of a few aW/$\sqrt{\rm Hz}$. Detector performance is further verified through pre-flight tests in the integrated PIPER receiver, performed in an environment simulating balloon float conditions.

Read this paper on arXiv…

R. Datta, S. Dahal, E. Switzer, et al.
Mon, 5 Dec 22
28/63

Comments: 11 pages, 11 figures

Managing Activities at the Lunar Poles for Science [CL]

http://arxiv.org/abs/2212.01363


The lunar poles are unique environments of both great scientific and, increasingly, commercial interest. Consequently, a tension exists between the twin objectives of (a) Exploring the lunar poles for both scientific and commercial purposes and ultimately supporting a lunar economy; and (b) Minimising the environmental impacts on the lunar polar regions so as to preserve them for future scientific investigations. We suggest that the best compromise between these equally valuable objectives would be to restrict scientific and commercial activities to the lunar South Pole, while placing a moratorium on activities at the North Pole until the full consequences of human activities at the South Pole are fully understood and mitigation protocols established. Depending on the pace at which lunar exploration proceeds, such a moratorium might last for several decades in order to properly assess the effects of exploration and commercial activities in regions surrounding the South Pole. A longer term possibility might be to consider designating the lunar North Polar region as a (possibly temporary) Planetary Park. Similar protected status might also be desirable for other unique lunar environments, and, by extension, other scientifically important localities elsewhere in the Solar System.

Read this paper on arXiv…

I. Crawford, P. Prem, C. Peters, et al.
Mon, 5 Dec 22
31/63

Comments: Accepted for publication in Space Research Today; 7 pages, 1 figure

Ice Giant Exploration Philosophy: Simple, Affordable [IMA]

http://arxiv.org/abs/2212.00803


The key to the exploration of the Ice Giant planets is avoiding cutting-edge technology. Complexity produces delay and financial roadblocks. Simple robot scouts can be launched in time to utilize gravity assists from Jupiter in the early 2030s. Demands on NASA’s budget from large missions, such as Mars sample return, will not allow Flagship missions to Uranus and Neptune in the near term. The science goals of Ice Giant exploration can be accomplished by a series of fast, simple, affordable (FSA) craft. Separate lines of cost-capped Orbiters and Probes would be launched at a cadence dictated by trajectories and funding. Contractors would be selected using competitive Announcements of Opportunity (AO). The march of progress in spacecraft technology offers hope and a path forward. The key is to start small and keep it affordable.

Read this paper on arXiv…

P. Horzempa
Mon, 5 Dec 22
35/63

Comments: N/A

Wide-spectrum optical synthetic aperture imaging via spatial intensity interferometry [CL]

http://arxiv.org/abs/2212.01036


High-resolution imaging is achieved using increasingly larger apertures and successively shorter wavelengths. Optical aperture synthesis is an important high-resolution imaging technology in astronomy. Conventional long-baseline amplitude interferometry is susceptible to uncontrollable phase fluctuations, and the technical difficulty increases rapidly as the wavelength decreases. Intensity interferometry, inspired by the Hanbury Brown-Twiss (HBT) experiment, is essentially insensitive to phase fluctuations, but suffers from a narrow spectral bandwidth, which results in a lack of detection sensitivity. In this study, we propose optical synthetic aperture imaging based on spatial intensity interferometry. This not only realizes diffraction-limited optical aperture synthesis in a single shot, but also enables imaging with a wide spectral bandwidth. The method is also insensitive to the optical path difference between the sub-apertures. Simulations and experiments demonstrate diffraction-limited optical aperture synthesis through spatial intensity interferometry over a 100 nm spectral width of visible light, with a maximum optical path difference between the sub-apertures reaching $69.36\lambda$. This technique is expected to provide a solution for optical aperture synthesis over kilometer-long baselines at optical wavelengths.

Read this paper on arXiv…

C. Chu, Z. Liu, M. Chen, et al.
Mon, 5 Dec 22
37/63

Comments: N/A

Astro-COLIBRI 2 — an advanced platform for real-time multi-messenger discoveries [IMA]

http://arxiv.org/abs/2212.00805


The study of flaring astrophysical events in the multi-messenger approach requires instantaneous follow-up observations to better understand the nature of these events through complementary observational data. We present Astro-COLIBRI, a meta-platform for the patchwork of specific tools in the real-time multi-messenger ecosystem. The Astro-COLIBRI platform bundles and evaluates alerts about transients from various channels. It further automates the coordination of follow-up observations by providing and linking detailed information through its comprehensive graphical user interface. We present its functionalities using documented examples of community usage since the public release in August 2021. We highlight the use cases of Astro-COLIBRI for planning follow-up observations by professional and amateur astronomers, as well as for checking predictions from theoretical models.

Read this paper on arXiv…

P. Reichherzer, F. Schüssler, V. Lefranc, et al.
Mon, 5 Dec 22
46/63

Comments: Platform website: www.astro-colibri.com

Which countries are leading high-impact science in astronomy? [IMA]

http://arxiv.org/abs/2212.01295


Recent news reports claim that China is overtaking the United States and all other countries in scientific productivity and scientific impact. A straightforward analysis of high-impact papers in astronomy reveals that this is not true in our field. In fact, the United States continues to host, by a large margin, the authors that lead high-impact papers. Moreover, this analysis shows that 90% of all high-impact papers in astronomy are led by authors based in North America and Europe. That is, only about 10% of countries in the world host astronomers that publish “astronomy’s greatest hits”.

Read this paper on arXiv…

J. Madrid
Mon, 5 Dec 22
49/63

Comments: N/A

Morphological Parameters and Associated Uncertainties for 8 Million Galaxies in the Hyper Suprime-Cam Wide Survey [GA]

http://arxiv.org/abs/2212.00051


We use the Galaxy Morphology Posterior Estimation Network (GaMPEN) to estimate morphological parameters and associated uncertainties for $\sim 8$ million galaxies in the Hyper Suprime-Cam (HSC) Wide survey with $z \leq 0.75$ and $m \leq 23$. GaMPEN is a machine learning framework that estimates Bayesian posteriors for a galaxy’s bulge-to-total light ratio ($L_B/L_T$), effective radius ($R_e$), and flux ($F$). By first training on simulations of galaxies and then applying transfer learning using real data, we trained GaMPEN with $<1\%$ of our dataset. This two-step process will be critical for applying machine learning algorithms to future large imaging surveys, such as the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), the Nancy Grace Roman Space Telescope (NGRST), and Euclid. By comparing our results to those obtained using light-profile fitting, we demonstrate that GaMPEN’s predicted posterior distributions are well-calibrated ($\lesssim 5\%$ deviation) and accurate. This represents a significant improvement over light-profile fitting algorithms, which underestimate uncertainties by as much as $\sim60\%$. For an overlapping sub-sample, we also compare the derived morphological parameters with values in two external catalogs and find that the results agree within the limits of the uncertainties predicted by GaMPEN. This step also permits us to define an empirical relationship between the Sérsic index and $L_B/L_T$ that can be used to convert between these two parameters. The catalog presented here represents a significant improvement in size ($\sim10 \times$), depth ($\sim4$ magnitudes), and uncertainty quantification over previous state-of-the-art bulge+disk decomposition catalogs. With this work, we also release GaMPEN’s source code and trained models, which can be adapted to other datasets.

Read this paper on arXiv…

A. Ghosh, C. Urry, A. Mishra, et al.
Fri, 2 Dec 22
1/81

Comments: Submitted to ApJ. Comments welcome. arXiv admin note: text overlap with arXiv:2207.05107

The eOSSR library [IMA]

http://arxiv.org/abs/2212.00499


The astronomy, astroparticle and particle physics communities are brought together through the ESCAPE (European Science Cluster of Astronomy and Particle Physics ESFRI research infrastructures) project to create a cluster focused on common issues in data-driven research. Among the ESCAPE work packages, the OSSR (ESCAPE Open-source Scientific Software and Service Repository) is a curated, long-term, open-access repository that makes it possible for scientists to exchange software and services and promote open science. It has been developed on top of a Zenodo community, connected to other services. A Python library, the eOSSR, has been developed to handle the interactions between Zenodo, these services and OSSR users, allowing automated handling of OSSR records. In this work, we present the eOSSR, its main functionalities, and how it has been used in the ESCAPE context to ease the publication of scientific software, analyses, and datasets by researchers.

Read this paper on arXiv…

T. Vuillaume, E. Garcia, C. Tacke, et al.
Fri, 2 Dec 22
13/81

Comments: N/A

Measuring the fine structure constant on white dwarf surfaces; uncertainties from continuum placement variations [IMA]

http://arxiv.org/abs/2212.00434


Searches for variations of fundamental constants require accurate measurement errors. There are several potential sources of error, and quantifying each one accurately is essential. This paper addresses one source of uncertainty relating to measuring the fine structure constant on white dwarf surfaces. Detailed modelling of photospheric absorption lines requires knowing the underlying spectral continuum level. Here we describe the development of a fully automated, objective, and reproducible continuum estimation method, based on fitting cubic splines to carefully selected data regions. Example fits to the Hubble Space Telescope spectrum of the white dwarf G191-B2B are given. We carry out measurements of the fine structure constant using two continuum models. The results show that continuum placement variations result in small systematic shifts in the centroids of narrow photospheric absorption lines, which significantly impact fine structure constant measurements. This effect must therefore be included in the overall error budget of future measurements. Our results also suggest that continuum placement variations should be investigated in other contexts, including fine structure constant measurements in stars other than white dwarfs, quasar absorption line measurements of the fine structure constant, and quasar measurements of cosmological redshift drift.
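A minimal sketch of the spline-continuum idea (using SciPy; the wavelength range, knot placement, and line mask below are hypothetical, and the paper’s automated, objective region selection is not reproduced here):

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def fit_continuum(wave, flux, line_mask, n_interior_knots=8):
    """Cubic-spline continuum fitted only to pixels outside known
    absorption features (line_mask == True marks pixels to exclude)."""
    w, f = wave[~line_mask], flux[~line_mask]
    # evenly spaced interior knots over the retained wavelength range
    knots = np.linspace(w[0], w[-1], n_interior_knots + 2)[1:-1]
    return LSQUnivariateSpline(w, f, knots, k=3)(wave)

rng = np.random.default_rng(2)
wave = np.linspace(1160.0, 1180.0, 2000)      # hypothetical range, Angstrom
true_cont = 1.0 + 0.02 * (wave - 1170.0)      # gently sloping continuum
line = 0.6 * np.exp(-0.5 * ((wave - 1170.0) / 0.05) ** 2)  # narrow absorption
flux = true_cont * (1 - line) + rng.normal(0, 0.005, wave.size)
mask = np.abs(wave - 1170.0) < 0.5            # exclude the line region
cont = fit_continuum(wave, flux, mask)
```

Repeating the fit with different masks or knot densities gives alternative continuum models, whose induced shifts in line centroids are exactly the systematic the abstract quantifies.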

Read this paper on arXiv…

C. Lee, J. Webb, D. Dougan, et al.
Fri, 2 Dec 22
18/81

Comments: 10 pages, 4 figures. 4 additional files provided as supplementary material. Submitted to MNRAS 1 Dec 2022

Improving astroBERT using Semantic Textual Similarity [CL]

http://arxiv.org/abs/2212.00744


The NASA Astrophysics Data System (ADS) is an essential tool for researchers that allows them to explore the astronomy and astrophysics scientific literature, but it has yet to exploit recent advances in natural language processing. At ADASS 2021, we introduced astroBERT, a machine learning language model tailored to the text used in astronomy papers in ADS. In this work we:
– announce the first public release of the astroBERT language model;
– show how astroBERT improves over existing public language models on astrophysics specific tasks;
– and detail how ADS plans to harness the unique structure of scientific papers, the citation graph and citation context, to further improve astroBERT.

Read this paper on arXiv…

F. Grezes, T. Allen, S. Blanco-Cuaresma, et al.
Fri, 2 Dec 22
20/81

Comments: N/A

The First Flight of the Marshall Grazing Incidence X-ray Spectrometer (MaGIXS) [SSA]

http://arxiv.org/abs/2212.00665


The Marshall Grazing Incidence X-ray Spectrometer (MaGIXS) sounding rocket experiment launched on July 30, 2021 from the White Sands Missile Range in New Mexico. MaGIXS is a unique solar observing telescope developed to capture X-ray spectral images, in the 6–24 Å wavelength range, of coronal active regions. Its novel design takes advantage of recent technological advances related to fabricating and optimizing X-ray optical systems, as well as breakthroughs in inversion methodologies necessary to create spectrally pure maps from overlapping spectral images. MaGIXS is the first instrument of its kind to provide spatially resolved soft X-ray spectra across a wide field of view. The plasma diagnostics available in this spectral regime make this instrument a powerful tool for probing solar coronal heating. This paper presents details from the first MaGIXS flight, the captured observations, the data processing and inversion techniques, and the first science results.

Read this paper on arXiv…

S. Savage, A. Winebarger, K. Kobayashi, et al.
Fri, 2 Dec 22
32/81

Comments: 20 pages, 18 figures

Detecting complex sources in large surveys using an apparent complexity measure [IMA]

http://arxiv.org/abs/2212.00349


Large-area astronomical surveys will almost certainly contain new objects of a type that has never been seen before. The detection of ‘unknown unknowns’ by an algorithm is a difficult problem to solve, as unusual things are often easier for a human to spot than for a machine. We use the concept of apparent complexity, previously applied to detect multi-component radio sources, to scan the radio continuum Evolutionary Map of the Universe (EMU) Pilot Survey data for complex and interesting objects in a fully automated and blind manner. Here we describe how the complexity is defined and measured, how we applied it to the Pilot Survey data, and how we calibrated the completeness and purity of these interesting objects using a crowd-sourced ‘zoo’. The results are also compared to unexpected and unusual sources already detected in the EMU Pilot Survey, including Odd Radio Circles, which were found by human inspection.
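As a hedged illustration of compression-based complexity (a crude proxy, not the paper’s exact definition; the coarse-graining scale and noise normalization are invented for the toy): coarse-grain the cutout to suppress uncorrelated noise, quantize it in units of the expected noise, and take the gzip-compressed size as the complexity score.

```python
import numpy as np
import zlib

def apparent_complexity(img, noise_rms, block=4):
    """Gzip size of a coarse-grained cutout quantized in noise units:
    featureless noise compresses to almost nothing, real structure survives."""
    h = img.shape[0] - img.shape[0] % block
    w = img.shape[1] - img.shape[1] % block
    # block-average to wash out pixel-to-pixel noise
    coarse = img[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    step = 3.0 * noise_rms / block   # block averaging suppresses noise by 1/block
    q = np.round(coarse / step).astype(np.int8)
    return len(zlib.compress(q.tobytes(), 9))

rng = np.random.default_rng(3)
yy, xx = np.mgrid[0:64, 0:64]
noise_only = rng.normal(0, 1.0, (64, 64))                      # empty sky
source = 20.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 8.0 ** 2))
with_source = source + rng.normal(0, 1.0, (64, 64))            # extended source

c_empty = apparent_complexity(noise_only, noise_rms=1.0)
c_source = apparent_complexity(with_source, noise_rms=1.0)     # larger score
```

Ranking cutouts by such a score, then calibrating which high-scoring objects humans actually find interesting, mirrors the automated-scan-plus-crowd-calibration workflow the abstract describes.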

Read this paper on arXiv…

D. Parkinson and G. Segal
Fri, 2 Dec 22
33/81

Comments: 6 pages, 4 figures. Prepared for the proceedings of the International Astronomical Union Symposium 368 “Machine Learning in Astronomy: Possibilities and Pitfalls”

A Framework for Obtaining Accurate Posteriors of Strong Gravitational Lensing Parameters with Flexible Priors and Implicit Likelihoods using Density Estimation [IMA]

http://arxiv.org/abs/2212.00044


We report the application of implicit likelihood inference to the prediction of the macro-parameters of strong lensing systems with neural networks. This allows us to perform deep learning analysis of lensing systems within a well-defined Bayesian statistical framework to explicitly impose desired priors on lensing variables, to obtain accurate posteriors, and to guarantee convergence to the optimal posterior in the limit of perfect performance. We train neural networks to perform a regression task to produce point estimates of lensing parameters. We then interpret these estimates as compressed statistics in our inference setup and model their likelihood function using mixture density networks. We compare our results with those of approximate Bayesian neural networks, discuss their significance, and point to future directions. Based on a test set of 100,000 strong lensing simulations, our amortized model produces accurate posteriors for any arbitrary confidence interval, with a maximum percentage deviation of $1.4\%$ at $21.8\%$ confidence level, without the need for any added calibration procedure. In total, inferring 100,000 different posteriors takes a day on a single GPU, showing that the method scales well to the thousands of lenses expected to be discovered by upcoming sky surveys.
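The calibration claim above can be sketched as an empirical coverage test (a generic sketch, not the authors’ code; the toy Gaussian posteriors and variable names are ours): for each nominal credible level, count how often the truth falls inside the corresponding central credible interval of the inferred posterior.

```python
import numpy as np

def empirical_coverage(post_samples, truths, levels):
    """post_samples: (n_objects, n_samples) posterior draws per object.
    Returns the fraction of truths inside the central credible interval
    at each nominal level; a calibrated posterior gives coverage ~ level."""
    cov = []
    for lv in levels:
        lo = np.percentile(post_samples, 50 * (1 - lv), axis=1)
        hi = np.percentile(post_samples, 50 * (1 + lv), axis=1)
        cov.append(np.mean((truths >= lo) & (truths <= hi)))
    return np.array(cov)

rng = np.random.default_rng(4)
n_obj, n_samp = 5000, 1000
truths = rng.normal(0.0, 1.0, n_obj)        # toy lensing parameter
obs = truths + rng.normal(0.0, 1.0, n_obj)  # noisy point estimate
# for this toy model the posterior given obs is N(obs, 1): draw samples
post = obs[:, None] + rng.normal(0.0, 1.0, (n_obj, n_samp))
levels = np.array([0.218, 0.5, 0.9])
cov = empirical_coverage(post, truths, levels)  # ~ levels when calibrated
```

The paper’s quoted figure (maximum deviation of $1.4\%$ at the $21.8\%$ level) is exactly the maximum of $|{\rm cov} - {\rm level}|$ in such a test, evaluated on their 100,000 simulated lenses.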

Read this paper on arXiv…

R. Legin, Y. Hezaveh, L. Perreault-Levasseur, et al.
Fri, 2 Dec 22
35/81

Comments: Accepted for publication in The Astrophysical Journal, 17 pages, 11 figures

On the application of Jacobian-free Riemann solvers for relativistic radiation magnetohydrodynamics under M1 closure [HEAP]

http://arxiv.org/abs/2212.00370


Radiative transfer plays a major role in high-energy astrophysics. In multiple scenarios and in a broad range of energy scales, the coupling between matter and radiation is essential to understand the interplay between theory, observations and numerical simulations. In this paper, we present a novel scheme for solving the equations of radiation relativistic magnetohydrodynamics within the parallel code Lóstrego. These equations, which are formulated by taking successive moments of the Boltzmann radiative transfer equation, are solved under the gray-body approximation and the M1 closure using an IMEX time integration scheme. The main novelty of our scheme is that we introduce for the first time in the context of radiation magnetohydrodynamics a family of Jacobian-free Riemann solvers based on internal approximations to the Polynomial Viscosity Matrix, which were demonstrated to be robust and accurate for non-radiative applications. The robustness and the limitations of the new algorithms are tested by solving a collection of one-dimensional and multi-dimensional test problems, both in the free-streaming and in the diffusion radiation transport limits. Due to its stable performance, the applicability of the scheme presented in this paper to real astrophysical scenarios in high-energy astrophysics is promising. In future simulations, we expect to be able to explore the dynamical relevance of photon-matter interactions in the context of relativistic jets and accretion discs, from microquasars and AGN to gamma-ray bursts.

Read this paper on arXiv…

J. López-Miralles, J. Martí and M. Perucho
Fri, 2 Dec 22
38/81

Comments: 21 pages, 13 figures. Accepted for publication in Computer Physics Communications

Supervised machine learning on Galactic filaments Revealing the filamentary structure of the Galactic interstellar medium [GA]

http://arxiv.org/abs/2212.00463


Context. Filaments are ubiquitous in the Galaxy, and they host star formation. Detecting them in a reliable way is therefore key towards our understanding of the star formation process.
Aims. We explore whether supervised machine learning can identify filamentary structures on the whole Galactic plane.
Methods. We used two versions of UNet-based networks for image segmentation. We used H$_2$ column density images of the Galactic plane obtained from Herschel Hi-GAL data as input. We trained the UNet-based networks with skeletons (spine plus branches) of filaments that were extracted from these images, together with background and missing-data masks that we produced. We tested eight training scenarios to determine the best scenario for our astrophysical purpose of classifying pixels as filaments.
Results. The training of the UNets allows us to create a new image of the Galactic plane by segmentation, in which pixels belonging to filamentary structures are identified. With this new method, we classify more pixels (by a factor of 2 to 7, depending on the classification threshold used) as belonging to filaments than the spine-plus-branches structures used as input. New structures are revealed, mainly low-contrast filaments that were not detected before. We use standard metrics to evaluate the performance of the different training scenarios. This allows us to demonstrate the robustness of the method and to determine an optimal threshold value that maximizes the recovery of the input labelled pixel classification.
Conclusions. This proof-of-concept study shows that supervised machine learning can reveal filamentary structures that are present throughout the Galactic plane. The detection of these structures, including low-density and low-contrast structures that have never been seen before, offers important perspectives for the study of these filaments.

Read this paper on arXiv…

A. Zavagno, F. Dupé, S. Bensaid, et al.
Fri, 2 Dec 22
40/81

Comments: 27 pages, 22 figures, accepted by Astronomy & Astrophysics