Data Reduction Process and Pipeline for the NIC Polarimetry Mode in Python, NICpolpy [IMA]

http://arxiv.org/abs/2212.14167


A systematic data reduction procedure for the Nishiharima Infrared Camera (NIC) polarimetry mode has been devised and implemented in an open-source package called NICpolpy, written in Python (tested on versions 3.8–3.10 as of writing). On top of classical methods, including vertical-pattern removal, a new way of removing the diagonal (Fourier) pattern has been implemented. Each image undergoes four reduction steps, resulting in “level 1” to “level 4” products, as well as nightly calibration frames. A simple tutorial, in-depth descriptions, and descriptions of the algorithms are provided. The dome flat frames (taken on UT 2020-06-03) were analyzed, and the pixel positions vulnerable to flat error were identified. Using the dark and flat frames, the detector parameters, namely the gain factor (the conversion factor) and the readout noise, are also updated. We found that the gain factor and readout noise are likely constant over pixels or “quadrants”.
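
As a rough illustration of how a conversion factor and readout noise can be derived from calibration frames, the classical mean-variance (photon-transfer) relations applied to one flat pair and one dark pair look like the sketch below. This is a generic textbook estimate, not the NICpolpy implementation; all function and variable names are ours.

```python
import numpy as np

def gain_and_readnoise(flat1, flat2, dark1, dark2):
    """Estimate gain [e-/ADU] and readout noise [e-] from two flat and two
    dark frames (Janesick-style mean-variance method; a sketch only)."""
    flat_diff = flat1.astype(float) - flat2.astype(float)
    dark_diff = dark1.astype(float) - dark2.astype(float)
    mean_flat = 0.5 * (flat1.mean() + flat2.mean())
    mean_dark = 0.5 * (dark1.mean() + dark2.mean())
    var_flat = flat_diff.var() / 2.0     # variance of a single flat frame
    var_dark = dark_diff.var() / 2.0     # variance of a single dark frame
    gain = (mean_flat - mean_dark) / (var_flat - var_dark)   # e-/ADU
    readnoise = gain * np.sqrt(var_dark)                     # e- (RMS)
    return gain, readnoise
```

Applied separately to sub-arrays of each quadrant, a calculation of this kind is one way to check whether the derived gain and readout noise are indeed consistent across pixels and quadrants.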

Read this paper on arXiv…

Y. Bach, M. Ishiguro, J. Takahashi, et al.
Mon, 2 Jan 23
24/44

Comments: For the PyPI of the package NICpolpy, see this https URL; for the development at GitHub, see this https URL; this http URL

Exploring the role of SKA surveys with upcoming cosmic microwave background missions in probing primordial features [CEA]

http://arxiv.org/abs/2212.14101


This article is dedicated to thoroughly exploring the capability of upcoming Cosmic Microwave Background (CMB) missions in synergy with Square Kilometre Array (SKA) surveys to detect features in the primordial power spectrum. Features are, by definition, specific scale-dependent modifications to the minimal power-law power spectrum. The functional form of the features depends on the inflationary scenario under consideration. The identification of any conclusive deviation from the featureless power-law power spectrum would allow us to better understand the microphysics of the primordial universe. Here, we consider three theoretically motivated feature models, namely the Sharp feature signal, the Resonance feature signal, and the Bump feature. To investigate these features, we associate each feature model with a specific scale-dependent function called a template. We explore three distinct fiducial models for each feature model, and for each fiducial model we compare the sensitivity of 36 different combinations of cosmological surveys. We implement the Fisher matrix forecast method to obtain the possible constraints on the feature-model parameters for the future CMB missions PICO, CORE-M5, LiteBIRD, and CMB-S4 in synergy with upcoming SKA surveys, specifically the SKA-Cosmic Shear and SKA-Intensity Mapping surveys. Furthermore, the significance of combining the EUCLID-Galaxy survey with the SKA-Intensity Mapping survey is also explored. To account for the propagation of theoretical uncertainties from nonlinear scales into the uncertainties on the feature parameters, we adopt redshift-dependent upper limits on the scales used. To demonstrate the relative sensitivities of these future surveys to the parameters of the feature models, we present a comparative analysis of all three feature models.
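
For readers unfamiliar with the forecasting technique, a Fisher matrix under a Gaussian likelihood is built from numerical derivatives of the observable (e.g. a binned power spectrum) with respect to the parameters, weighted by the inverse data covariance. The sketch below is generic; the finite-difference step, names, and Gaussian-likelihood assumption are our own choices, not taken from the paper.

```python
import numpy as np

def fisher_matrix(model, theta0, cov, step=1e-4):
    """Generic Gaussian-likelihood Fisher matrix.

    model : callable(theta) -> data vector (e.g. binned power spectrum)
    theta0: fiducial parameter values
    cov   : data covariance matrix
    """
    theta0 = np.asarray(theta0, dtype=float)
    inv_cov = np.linalg.inv(cov)
    derivs = []
    for i in range(len(theta0)):
        dtheta = np.zeros_like(theta0)
        dtheta[i] = step * max(abs(theta0[i]), 1.0)
        # central finite difference of the observable w.r.t. parameter i
        derivs.append((model(theta0 + dtheta) - model(theta0 - dtheta))
                      / (2.0 * dtheta[i]))
    derivs = np.array(derivs)                 # (n_params, n_data)
    fisher = derivs @ inv_cov @ derivs.T      # (n_params, n_params)
    sigma = np.sqrt(np.diag(np.linalg.inv(fisher)))  # 1-sigma marginalised errors
    return fisher, sigma
```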

Read this paper on arXiv…

D. Chandra
Mon, 2 Jan 23
25/44

Comments: 63 pages, 10 figures, 21 tables

White dwarf binary modulation can help stochastic gravitational wave background search [CL]

http://arxiv.org/abs/2212.14519


For stochastic gravitational wave background (SGWB) searches centred at the milli-Hz band, the galactic foreground produced by white dwarf binaries (WDBs) within the Milky Way severely contaminates the extra-galactic signal. Because of the anisotropic distribution of the WDBs and the motion of the spaceborne gravitational wave interferometer constellation, the time-domain data stream shows an annual modulation. This property is fundamentally different from that of the SGWBs. In this Letter, we propose a new filtering method for the data vector based on this annual modulation. We apply the resulting inverse-variance filter to the LISA Data Challenge. The results show that for a weaker SGWB signal, such as an energy density $\Omega_{\rm astro}=5\times10^{-12}$, the filtering method prominently enhances the peak of the posterior distribution. For a stronger signal, such as $\Omega_{\rm astro}=15\times10^{-12}$, the method improves the Bayesian evidence from 'substantial' to 'strong' against the null hypothesis. This method is model-independent and self-contained: it does not require any information other than the gravitational wave data.
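
The general idea of inverse-variance weighting that exploits the annual modulation can be sketched as follows: time segments in which the (modulated) galactic foreground is weak receive larger weights. This is only a schematic illustration of inverse-variance combination, not the filter actually constructed in the Letter; all names are ours.

```python
import numpy as np

def inverse_variance_combine(segments, foreground_var, instrument_var):
    """Combine time segments of a data stream with inverse-variance weights.

    segments       : array (n_seg, n_samples) of per-segment statistics
    foreground_var : array (n_seg,) variance of the annually modulated
                     galactic foreground in each segment
    instrument_var : scalar or array (n_seg,) instrumental noise variance
    """
    w = 1.0 / (np.asarray(foreground_var) + np.asarray(instrument_var))
    w = w / w.sum()
    # weighted average over segments; low-foreground epochs dominate
    return np.tensordot(w, segments, axes=1)
```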

Read this paper on arXiv…

S. Lin, B. Hu, X. Zhang, et al.
Mon, 2 Jan 23
31/44

Comments: 5 pages, 3 figures

Mid-Infrared Spectroscopy of Components in Chondrites: Search for Processed Materials in Young Solar Systems and Comets [EPA]

http://arxiv.org/abs/2212.14835


We obtained mid-infrared spectra of chondrules, matrix, CAIs, and bulk material from primitive type 1-4 chondrites in order to compare them with the dust in young, forming solar systems and around comets. Our aim is to investigate whether there are similarities between the first processed materials in our early Solar System and the protoplanetary disks currently forming around other stars. Chondrule spectra can be divided into two groups: 1) chondrules dominated by olivine features at 11.3 micron and 10.0 micron; 2) mesostasis-rich chondrules with main features at 10 micron. Bulk ordinary chondrites show features similar to both groups. Fine-grained matrix falls into three groups: 1) phyllosilicate-rich, with a main band at 10 micron; 2) olivine-rich, with bands at 11.3 micron and 10 micron; 3) pyroxene-rich. Impact-shock-processed matrix from Murchison (CM2) shows features from phyllosilicate-rich, amorphous, and olivine-rich material. Astronomical spectra are divided into four groups based on their spectral characteristics: amorphous (group 1), pyroxene-rich (group 2), olivine-rich (group 3), and complex (group 4). Group 2 is similar to enstatite-rich fine-grained material such as Kakangari (K3) matrix. Groups 3 and 4 can be explained by a combination of varying concentrations of olivine- and mesostasis-rich chondrules and fine-grained matrix, but also show very good agreement with shock-processed material. Comparison of band ratios confirms the similarity with chondritic material, e.g. for HD100546, while the inner disk of HD142527 shows no sign of chondrule material. Comparison between spectra indicates a general similarity between primitive solar system materials and circumstellar dust and comets.

Read this paper on arXiv…

A. Morlok, C. Lisse, A. Mason, et al.
Mon, 2 Jan 23
32/44

Comments: N/A

Web-based telluric correction made in Spain: spectral fitting of Vega-type telluric standards [IMA]

http://arxiv.org/abs/2212.14068


Infrared spectroscopic observations from the ground must be corrected for telluric contamination before they are ready for scientific analysis. However, telluric correction is often a tedious process that requires significant expertise to yield accurate results in a reasonable time frame. To address these issues, we present a new method for telluric correction that employs a roughly simultaneous observation of a Vega analog to measure the atmospheric transmission. After continuum reconstruction and spectral fitting, the stellar features are removed from the observed Vega-type spectrum and the result is used to cancel telluric absorption features in science spectra. This method is implemented as TelCorAl (Telluric Correction from Alicante), a Python-based web application with a user-friendly interface, whose beta version will be released soon.
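
The final correction step amounts to dividing the science spectrum by the atmospheric transmission inferred from the standard. A minimal sketch, assuming all spectra are already on a common wavelength grid; this is an illustration of the general idea, not the TelCorAl code itself.

```python
import numpy as np

def telluric_correct(sci_flux, std_flux, std_model):
    """Divide out the atmospheric transmission measured on a Vega-type standard.

    sci_flux  : observed science spectrum
    std_flux  : observed spectrum of the Vega-type telluric standard
    std_model : fitted stellar model of the standard (continuum + H lines),
                i.e. what the standard would look like without the atmosphere
    """
    transmission = std_flux / std_model                 # telluric absorption only
    transmission = np.clip(transmission, 1e-3, None)    # avoid division blow-ups
    return sci_flux / transmission
```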

Read this paper on arXiv…

D. Fuente, A. Marco, L. Patrick, et al.
Mon, 2 Jan 23
38/44

Comments: 6 pages, 2 figures. To be published in Highlights of Spanish Astrophysics XI, Proceedings of the XV Scientific Meeting of the Spanish Astronomical Society

Population of ground and lowest excited states of Sulfur via the dissociative recombination of SH+ in the diffuse interstellar medium [IMA]

http://arxiv.org/abs/2212.13538


Our previous study of the dissociative recombination of ground-state SH$^+$ into $^2\Pi$ states of SH is extended by taking into account the contribution of $^4\Pi$ states recently explored with quantum chemistry methods. Multichannel quantum defect theory is employed to compute cross sections and rate coefficients for dissociative recombination, but also for vibrational excitation. Furthermore, we produce the atomic yields resulting from recombination, quantifying the generation of sulfur atoms in their ground ($^3$P) and lowest excited ($^1$D) states, respectively.

Read this paper on arXiv…

J. Boffelli, F. Gauchet, D. Kashinski, et al.
Thu, 29 Dec 22
4/47

Comments: 9 pages, 8 figures, 3 tables

Deep Learning for Space Weather Prediction: Bridging the Gap between Heliophysics Data and Theory [IMA]

http://arxiv.org/abs/2212.13328


Traditionally, data analysis and theory have been viewed as separate disciplines, each feeding into fundamentally different types of models. Modern deep learning technology is beginning to unify these two disciplines and will produce a new class of predictively powerful space weather models that combine the physical insights gained from data and theory. We call on NASA to invest in the research and infrastructure necessary for the heliophysics community to take advantage of these advances.

Read this paper on arXiv…

J. Dorelli, C. Bard, T. Chen, et al.
Thu, 29 Dec 22
8/47

Comments: Heliophysics 2050 White Paper

KNIFE, KAshima Nobeyama InterFErometer [IMA]

http://arxiv.org/abs/2212.13331


By connecting two antennas, the Kashima 34 m and the Nobeyama 45 m, an east-west baseline of 200 km is formed. At that time, because the Nobeyama 45 m had the highest sensitivity in the world in the 43 GHz band, and the Kashima 34 m was the world's third largest, the Kashima-Nobeyama baseline provided the highest sensitivity for 43 GHz VLBI (Figure 1). The construction of the Kashima 34 m antenna began in 1988, and almost at the same time a domestic mm-VLBI project (KNIFE, KAshima Nobeyama InterFErometer) started. Nobeyama Radio Observatory provided the first cooled-HEMT 43 GHz receiver in the world to the Kashima 34 m. In October 1989, the first fringe at 43 GHz was detected. We here review the achievements of KNIFE at that time.

Read this paper on arXiv…

M. Miyoshi
Thu, 29 Dec 22
11/47

Comments: 5 pages, 5 figures, manuscript in Proceedings of the 18th NICT TDC Symposium (Kashima, October 1, 2020)

United by skies, divided by language — astronomy publishing in languages with small reader base [CL]

http://arxiv.org/abs/2212.13434


The mysteries of the Universe are international; the skies are not crossed by borders. However, knowledge is transmitted by language, imposing linguistic barriers that are often difficult to break through. Bulgaria is considered here as an example of a country with a relatively small reader base — it has a population of about 6.5 million (2021) and the Bulgarian language has probably $\sim$7 million speakers, if the diaspora in the US, Germany, and elsewhere is accounted for. The smaller market, in comparison with larger non-English-speaking countries, poses a number of limitations on the publishing landscape: (i) local authors are discouraged from writing both popular and scientific astronomy books because of the limited financial incentive; (ii) the market is heavily dominated by translations (from Russian before 1989, from English nowadays), but even those are fewer than in bigger countries, because the translation overhead costs are spread over smaller print runs. The history of astronomy publishing in Bulgaria is summarized, with some distinct periods: pre-1944, the communist era 1944-1989, and the modern times after 1989. A few notable publications are reviewed. Finally, some practices to help astronomy book publishing in languages with smaller reader bases are suggested, taking advantage of recent technological developments.

Read this paper on arXiv…

V. Ivanov
Thu, 29 Dec 22
12/47

Comments: This is an extended write up of a poster presented at the European Week of Astronomy and Space Science (EAS) held in Valencia, Spain, Jun 26 — Jul 1, 2022, Special Session 34: Diversity and Inclusion in European Astronomy (8 pages, 5 figures)

ISAI: Investigating Solar Axion by Iron-57 [IMA]

http://arxiv.org/abs/2212.13708


The existence of the axion is a unique solution to the strong CP problem, and the axion is one of the most promising candidates for dark matter. Investigating Solar Axion by Iron-57 (ISAI) is being prepared as a complementary table-top experiment to confirm the solar axion scenario. Probing an X-ray emission from the nuclear transitions associated with the axion-nucleon coupling is a leading approach. ISAI searches for the monochromatic 14.4 keV X-ray from the first excited state of 57Fe using a state-of-the-art pixelized silicon detector, dubbed XRPIX, under an extremely low-background environment. We highlight the scientific objectives, experimental design, and latest status of ISAI.

Read this paper on arXiv…

T. Ikeda, T. Fujii, T. Tsuru, et al.
Thu, 29 Dec 22
19/47

Comments: N/A

Heliophysics Discovery Tools for the 21st Century: Data Science and Machine Learning Structures and Recommendations for 2020-2050 [IMA]

http://arxiv.org/abs/2212.13325


Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.

Read this paper on arXiv…

R. McGranaghan, B. Thompson, E. Camporeale, et al.
Thu, 29 Dec 22
35/47

Comments: 4 pages; Heliophysics 2050 White Paper

Artificial Intelligence to Enhance Mission Science Output for In-situ Observations: Dealing with the Sparse Data Challenge [IMA]

http://arxiv.org/abs/2212.13289


In the Earth's magnetosphere, there are fewer than a dozen dedicated probes beyond low-Earth orbit making in-situ observations at any given time. As a result, we poorly understand its global structure and evolution, and the mechanisms of its main activity processes: magnetic storms and substorms. New Artificial Intelligence (AI) methods, including machine learning, data mining, and data assimilation, as well as new AI-enabled missions, will need to be developed to meet this Sparse Data challenge.

Read this paper on arXiv…

M. Sitnov, G. Stephens, V. Merkin, et al.
Thu, 29 Dec 22
36/47

Comments: 4 pages, 1 figure; Heliophysics 2050 White Paper

New Particle Identification Approach with Convolutional Neural Networks in GAPS [IMA]

http://arxiv.org/abs/2212.13454


The General Antiparticle Spectrometer (GAPS) is a balloon-borne experiment that aims to measure low-energy cosmic-ray antiparticles. GAPS has developed a new antiparticle identification technique based on exotic-atom formation by incident particles, realised with the ten-layer Si(Li) detector tracker of GAPS. The conventional analysis uses the physical quantities of the reconstructed incident and secondary particles. In parallel with this, we have developed a complementary approach based on deep neural networks. This paper presents a new convolutional neural network (CNN) technique. A three-dimensional CNN takes energy depositions as three-dimensional inputs and learns to identify their positional and energy correlations. The combination of the physical quantities and the CNN technique is also investigated. The findings show that the new technique outperforms existing machine-learning-based methods in particle identification.
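
A three-dimensional CNN of the kind described takes a voxelised cube of energy depositions and outputs a particle-class score. The toy PyTorch model below illustrates the idea; the layer sizes and input shape are invented for illustration and are not the GAPS network.

```python
import torch
import torch.nn as nn

class DepositionCNN3D(nn.Module):
    """Toy 3-D CNN mapping a voxelised energy-deposition cube to class scores."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):            # x: (batch, 1, D, H, W) energy depositions
        h = self.features(x).flatten(1)
        return self.classifier(h)

# usage sketch: class scores for a batch of 4 random 10x16x16 deposition cubes
scores = DepositionCNN3D()(torch.randn(4, 1, 10, 16, 16))
```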

Read this paper on arXiv…

M. Yamatani, Y. Nakagami, H. Fuke, et al.
Thu, 29 Dec 22
37/47

Comments: 7 pages, 10 figures

Adaptive Optics system of the Evanescent Wave Coronagraph (EvWaCo): optimised phase plate and DM characterisation [IMA]

http://arxiv.org/abs/2212.13818


The Evanescent Wave Coronagraph (EvWaCo) is an achromatic coronagraph mask with adjustable size over the spectral domain [600 nm, 900 nm] that will be installed at the Thai National Observatory. In this work we present the development of a bench to characterise its Extreme Adaptive Optics (XAO) system, comprising a DM192 ALPAO deformable mirror (DM) and a 15×15 Shack-Hartmann wavefront sensor (SH-WFS). In this bench, the turbulence is simulated using a rotating phase plate in a pupil plane. In general, such components are designed using a randomly generated phase screen, but a single realisation does not necessarily provide the desired structure function. We present a solution for designing the printed pattern that ensures the beam sees strict and controlled Kolmogorov statistics with the correct 2D structure function. This is essential for controlling the experimental conditions in order to compare the bench results with numerical simulations and predictions. This bench is further used to characterise in depth the full 27 mm pupil of the ALPAO DM using a 54×54 ALPAO SH-WFS. We measure the average shape of its influence functions as well as the influence function of each individual actuator to study their dispersion. We study the linearity of the actuator amplitude with the command, as well as the linearity of the influence-function profile. We also study the actuator offsets and the membrane shape at zero command. This knowledge is critical for building a forward model of the DM for the XAO control loop.
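
A common starting point for such a phase plate is an FFT-generated Kolmogorov phase screen, whose empirical structure function can then be checked against the Kolmogorov prediction $D(r) = 6.88\,(r/r_0)^{5/3}$; the paper's contribution is precisely to go beyond a single random realisation. The sketch below is the plain textbook method, not the optimised design of the paper, and its normalisation is schematic and should be verified against the target structure function.

```python
import numpy as np

def kolmogorov_screen(n, pixel_scale, r0, seed=0):
    """FFT-based Kolmogorov phase screen (textbook method, schematic scaling).

    n           : grid size [pixels]
    pixel_scale : metres per pixel
    r0          : Fried parameter [m]
    """
    rng = np.random.default_rng(seed)
    dk = 1.0 / (n * pixel_scale)                     # frequency step [1/m]
    fx = np.fft.fftfreq(n, d=pixel_scale)
    kx, ky = np.meshgrid(fx, fx)
    k = np.hypot(kx, ky)
    k[0, 0] = dk                                     # avoid division by zero
    psd = 0.023 * r0**(-5.0 / 3.0) * k**(-11.0 / 3.0)  # Kolmogorov phase PSD
    psd[0, 0] = 0.0                                  # remove piston
    cn = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) * np.sqrt(psd) * dk
    return np.real(np.fft.ifft2(cn)) * n * n         # phase screen [rad]

def structure_function(screen, shift):
    """Empirical structure function for a pixel shift along one axis."""
    diff = screen[:, shift:] - screen[:, :-shift]
    return np.mean(diff**2)
```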

Read this paper on arXiv…

A. Berdeu, S. Sukpholtham, P. Kongkaew, et al.
Thu, 29 Dec 22
45/47

Comments: N/A

Modeling noise propagation in Fourier-filtering wavefront sensing, fundamental limits and quantitative comparison [IMA]

http://arxiv.org/abs/2212.13577


Adaptive optics (AO) is a technique that drastically improves the angular resolution of ground-based telescopes. The wavefront sensor (WFS) is one of the key components of such systems, driving their fundamental performance limitations. In this paper, we focus on a specific class of WFS: the Fourier-filtering wavefront sensors (FFWFS). This class is known for its extremely high sensitivity. However, a clear and comprehensive noise propagation model valid for any kind of FFWFS has been lacking. Considering read-out noise and photon noise, we derive a simple and comprehensive model describing how these noises propagate into the phase reconstruction in the linear framework. This new noise propagation model works for any kind of FFWFS and allows us to revisit the fundamental sensitivity limit of these sensors. Furthermore, a new comparison between widely used FFWFS is presented. We focus on the two most widely used FFWFS classes, the Zernike WFS (ZWFS) and the pyramid WFS (PWFS), bringing new understanding of their behaviour.
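
In the linear framework, noise propagation through any wavefront sensor reduces to propagating the measurement-noise covariance through the (pseudo-inverse) reconstructor. The sketch below shows that generic calculation, not the FFWFS-specific model derived in the paper; all names are ours.

```python
import numpy as np

def noise_propagation(interaction_matrix, noise_cov):
    """Propagate WFS measurement noise through a linear least-squares reconstructor.

    interaction_matrix : (n_meas, n_modes) sensor response to each phase mode
    noise_cov          : (n_meas, n_meas) covariance of the measurement noise
                         (e.g. diagonal terms for read-out and photon noise)
    """
    reconstructor = np.linalg.pinv(interaction_matrix)      # (n_modes, n_meas)
    cov_modes = reconstructor @ noise_cov @ reconstructor.T
    return np.diag(cov_modes)   # noise variance propagated onto each mode
```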

Read this paper on arXiv…

V. Chambouleyron, O. Fauvarque, C. Plantet, et al.
Thu, 29 Dec 22
47/47

Comments: N/A

CUBES: a UV spectrograph for the future [IMA]

http://arxiv.org/abs/2212.12791


In spite of the advent of extremely large telescopes in the UV/optical/NIR range, the current generation of 8-10m facilities is likely to remain competitive at ground-UV wavelengths for the foreseeable future. The Cassegrain U-Band Efficient Spectrograph (CUBES) has been designed to provide high-efficiency (>40%) observations in the near UV (305-400 nm requirement, 300-420 nm goal) at a spectral resolving power of R>20,000, although a lower-resolution, sky-limited mode of R ~ 7,000 is also planned.
CUBES will offer new possibilities in many fields of astrophysics, providing access to key lines of stellar spectra: a tremendous diversity of iron-peak and heavy elements, lighter elements (in particular beryllium) and light-element molecules (CO, CN, OH), as well as Balmer lines and the Balmer jump (particularly important for young stellar objects). The UV range is also critical in extragalactic studies: the circumgalactic medium of distant galaxies, the contribution of different types of sources to the cosmic UV background, the measurement of H2 and primordial deuterium in a regime of relatively transparent intergalactic medium, and follow-up of explosive transients.
The CUBES project completed a Phase A conceptual design in June 2021 and has now entered the Phase B dedicated to detailed design and construction. First science operations are planned for 2028. In this paper, we briefly describe the CUBES project development and goals, the main science cases, the instrument design and the project organization and management.

Read this paper on arXiv…

S. Covino, S. Cristiani, J. Alcalá, et al.
Tue, 27 Dec 22
1/30

Comments: Proceedings for the HACK100 conference, Trieste, June 2022. arXiv admin note: substantial text overlap with arXiv:2208.01672

Transformers as Strong Lens Detectors - From Simulation to Surveys [GA]

http://arxiv.org/abs/2212.12915


With upcoming large-scale surveys like LSST, we expect to find approximately $10^5$ strong gravitational lenses in datasets many orders of magnitude larger. In this scenario, non-automated techniques are too time-consuming and hence impractical for science. For this reason, machine learning techniques have become an alternative to previous methods. We propose a new machine learning architecture, based on the principle of self-attention, trained to find strong gravitational lenses in simulated data from the Bologna Lens Challenge. Self-attention-based models have clear advantages over simpler CNNs and highly competitive performance compared to current state-of-the-art CNN models. We apply the proposed model to the Kilo Degree Survey, identifying some new strong lens candidates; however, these were identified among a plethora of false positives, which made the application of this model less advantageous. Therefore, throughout this paper we investigate the pitfalls of this approach and propose possible solutions, such as transfer learning.
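
A self-attention lens detector of this kind typically embeds image patches as tokens and feeds them to a transformer encoder. The toy PyTorch sketch below illustrates the idea; the patch size, embedding dimension, and depth are chosen for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn

class LensTransformer(nn.Module):
    """Toy self-attention classifier for lens/non-lens image cutouts."""
    def __init__(self, img_size=64, patch=8, dim=64, depth=2, heads=4):
        super().__init__()
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches, dim))  # positional embedding
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Linear(dim, 1)

    def forward(self, x):                    # x: (batch, 1, H, W) survey cutout
        tokens = self.patchify(x).flatten(2).transpose(1, 2) + self.pos
        return self.head(self.encoder(tokens).mean(dim=1))  # lens score (logit)

# usage sketch: logits for a batch of 4 random 64x64 cutouts
logits = LensTransformer()(torch.randn(4, 1, 64, 64))
```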

Read this paper on arXiv…

H. Thuruthipilly, M. Grespan and A. Zadrożny
Tue, 27 Dec 22
4/30

Comments: 8 pages, 7 figures

It's your software! Get it cited the way you want! [IMA]

http://arxiv.org/abs/2212.12683


Are others using software you've written in their research and citing it as you want it to be cited? Software can be cited in different ways, some good and some not good at all for tracking and counting citations in indexers such as ADS and Clarivate's Web of Science. Generally, these resources need to match citations to the resources they ingest, such as journal articles or software records. This presentation covered common reasons why a code might not be cited well (in a trackable and countable way), which citation methods are trackable, how to specify this information for your software, and where this information should be placed. It also covered standard software metadata files, how to create them, and how to use them. Creating a metadata file, such as a CITATION.cff or codemeta.json, and adding it to the root of your code repo is easy to do with the ASCL's metadata file creation overlay, and will help anyone wanting to give you credit for your computational method, whether it's a huge carefully-written and tested package or a short quick-and-dirty-but-oh-so-useful code.
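
As an illustration of what such a metadata file contains, the snippet below writes a minimal CITATION.cff with placeholder values (the ASCL overlay mentioned above will generate these files for you; every field value here is purely illustrative).

```python
from pathlib import Path

# Minimal CITATION.cff with placeholder values; adapt to your own code.
CITATION_CFF = """\
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "MyAstroCode"
version: "1.0.0"
doi: "10.5281/zenodo.0000000"
date-released: "2022-12-27"
authors:
  - family-names: "Doe"
    given-names: "Jane"
"""

# Place the file at the root of the code repository so indexers can find it.
Path("CITATION.cff").write_text(CITATION_CFF)
```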

Read this paper on arXiv…

A. Allen
Tue, 27 Dec 22
7/30

Comments: 2 figures, 1 table

Constraining polarisation flux density and angle of point sources by training a convolutional neural network [CEA]

http://arxiv.org/abs/2212.13055


Constraining the polarisation properties of extragalactic point sources is a relevant task not only because they are one of the main contaminants for primordial cosmic microwave background B-mode detection if the tensor-to-scalar ratio is lower than r = 0.001, but also for a better understanding of the properties of radio-loud active galactic nuclei. We develop and train a machine learning model based on a convolutional neural network to estimate the polarisation flux density and angle of point sources embedded in cosmic microwave background images, knowing only their positions. To train the neural network, we use realistic simulations of 32×32 pixel patches at the 217 GHz Planck channel with point sources injected at their centres. The patches also contain a realistic background composed of dust, the CMB, and instrumental noise. Firstly, we compare true and estimated polarisation flux densities for P, Q, and U. Secondly, we compare true and estimated polarisation angles. Finally, we study the performance of our model on real data and compare our results against the PCCS2. We find that our model reliably constrains the polarisation flux above 80 mJy, with errors below 30% at this limit. Training the same network with Q and U, the reliability limit is above ±250 mJy for determining the polarisation angle of both Q and U sources, with 1-sigma uncertainties of ±29 deg and ±32 deg for Q and U sources, respectively. We obtain results similar to the PCCS2 for some sources, although we also find discrepancies in the 300-400 mJy flux density range with respect to the Planck catalogue. Based on these results, our model appears to be a promising tool for estimating the polarisation flux densities and angles of point sources above 80 mJy in any catalogue with practically negligible computational time.
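
Schematically, the regression network maps a 32×32 patch centred on a known source position to the polarisation quantities of interest. The toy PyTorch sketch below has invented layer sizes and is not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class PolFluxCNN(nn.Module):
    """Toy CNN regressing (P, Q, U) flux densities from a 32x32 patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16x16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8x8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 3),          # (P, Q, U) flux density estimates
        )

    def forward(self, patch):          # patch: (batch, 1, 32, 32), source at centre
        return self.net(patch)

# usage sketch: estimates for a batch of 8 random patches
estimates = PolFluxCNN()(torch.randn(8, 1, 32, 32))
```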

Read this paper on arXiv…

J. Casas, L. Bonavera, J. González-Nuevo, et al.
Tue, 27 Dec 22
9/30

Comments: 8 pages, 9 Figures. Proposed for acceptance in the Astronomy & Astrophysics journal

Using the Astrophysics Source Code Library: Find, cite, download, parse, study, and submit [IMA]

http://arxiv.org/abs/2212.12682


The Astrophysics Source Code Library (ASCL) contains 3000 metadata records about astrophysics research software and serves primarily as a registry of software, though it also can and does accept code deposits. Though the ASCL was started in 1999, many astronomers, especially those new to the field, are not very familiar with it. This hands-on virtual tutorial was geared toward new users of the resource, teaching them how to use the ASCL, with a focus on finding software and information about software not only in this resource but also by using Google and NASA's Astrophysics Data System (ADS). With computational methods so important to research, finding these methods is useful for examining them (for transparency) and possibly reusing the software (for reproducibility or to enable new research). Metadata about software is useful, for example, for knowing how to cite software when it is used for research and for studying trends in the computational landscape. Though the tutorial was primarily aimed at new users, advanced users were also likely to learn something new.

Read this paper on arXiv…

A. Allen
Tue, 27 Dec 22
13/30

Comments: 4 figures

The Aditya-L1 mission of ISRO [SSA]

http://arxiv.org/abs/2212.13046


Aditya-L1 is the first space-based solar observatory of the Indian Space Research Organisation (ISRO). The spacecraft will carry seven payloads providing uninterrupted observations of the Sun from the first Lagrangian point. Aditya-L1 comprises four remote sensing instruments, viz. a coronagraph observing in the visible and infrared, a full-disk imager in the Near Ultra-Violet (NUV), and two full-Sun integrated spectrometers in soft X-rays and hard X-rays. In addition, there are three instruments for in-situ measurements, including a magnetometer, to study the magnetic field variations during energetic events. Aditya-L1 is truly a mission for multi-messenger solar astronomy from space that will provide comprehensive observations of the Sun across the electromagnetic spectrum and in-situ measurements over a broad range of energies, including magnetic field measurements at L1.

Read this paper on arXiv…

D. Tripathi, D. Chakrabarty, B. Prasad, et al.
Tue, 27 Dec 22
14/30

Comments: 10 pages, 6 figures

The contribution of the modern amateur astronomer to the science of astronomy [IMA]

http://arxiv.org/abs/2212.12543


An amateur astronomer in the modern world has the opportunity not only to make visual observations for their own interest, but also to make scientific astronomical observations and new discoveries in astronomy.
Using my own example, as an amateur astronomer who is entirely self-taught, I report my discoveries: a possible dwarf nova found on old digitized photographic plates and new variable stars found in sky survey data by means of data mining; how I discovered, in sky survey images, astronomical transients, supernovae, planetary nebula candidates, and new binary systems in the Gaia DR2 data; and my discoveries of three novae in the Andromeda Galaxy.
I report on some of my scientific observations using remote telescopes: of superhumps of cataclysmic variable stars; of an echo outburst of an AM CVn star; of blazars at maximum brightness; of optical afterglows of gamma-ray bursts (including GRB 221009A); of microlensing events; and of the rotation of near-Earth asteroid 2022 AB. I also describe my photometric follow-up observations of novae (including V1405 Cas and V1674 Her) and my astrometric observations of Solar System objects (including the confirmation of objects posted on the Confirmation Pages of the Minor Planet Center), among them comet 2I/Borisov and asteroids 2020 AV2 and (65803) Didymos. I also describe some of my observations of occultations: of a star by asteroid (159) Aemilia, of a star by Saturn's moon Titan, and of Uranus by the Moon during the total lunar eclipse on November 8, 2022; as well as visual observations of variable stars, meteors, and sunspots (including during the transit of Venus in 2012).
Some of my data have already been used in scientific papers; others were sent to databases. I share my experience of discovering and researching astronomical objects, and through my example I show that an amateur astronomer can make a real contribution to science.

Read this paper on arXiv…

F. Romanov
Tue, 27 Dec 22
16/30

Comments: 22 pages, 32 figures, 1 table. Presented as e-Poster during the IAUGA 2022: XXXIst General Assembly of the International Astronomical Union (August 2-11, 2022, in Busan, Republic of Korea), at the IAU Focus Meeting 10 “Synergy of Small Telescopes and Large Surveys for Solar System and Exoplanetary Bodies Research”

ELOISE — Reliable background simulation at sub-keV energies [CL]

http://arxiv.org/abs/2212.12634


$CaWO_4$ and $Al_2O_3$ are well-established target materials used by experiments searching for rare events such as the elastic scattering of a hypothetical dark matter particle. In recent years, experiments have reached detection thresholds for nuclear recoils at the 10 eV scale. At this energy scale, a reliable Monte Carlo simulation of the expected background is crucial. However, none of the publicly available general-purpose simulation packages are validated at this energy scale and for these targets. The recently started ELOISE project aims to provide reliable simulations of electromagnetic particle interactions for this use case by obtaining experimental reference data, validating the simulation code against them, and, if needed, calibrating the code to the reference data.

Read this paper on arXiv…

H. Kluck
Tue, 27 Dec 22
19/30

Comments: IDM2022 proceedings submitted to SciPost

Thermal Control System to Easily Cool the GAPS Balloon-borne Instrument on the Ground [IMA]

http://arxiv.org/abs/2212.12862


This study developed a novel thermal control system to cool the detectors of the General AntiParticle Spectrometer (GAPS) before its flights. GAPS is a balloon-borne cosmic-ray observation experiment. In its payload, GAPS contains over 1000 silicon detectors that must be cooled below $-40^{\circ}$C. All detectors are thermally coupled to a unique heat-pipe system (HPS) that transfers heat from the detectors to a radiator. The radiator is designed to be cooled below $-50^{\circ}$C during the flight by exposure to space. The pre-flight state of the detectors is checked on the ground at 1 atm and ambient room temperature, but the radiator cannot be cooled in the same way there. The authors have developed a ground cooling system (GCS) to chill the detectors for ground testing. The GCS consists of a cold plate, a chiller, and insulating foam. The cold plate is designed to be attached to the radiator and cooled by a coolant pumped by the chiller. The payload configuration, including the HPS, can be the same as that of the flight. The GCS design was validated by thermal tests using a scale model. The GCS design is simple and provides a practical guideline, including a simple estimate of the appropriate thermal insulation thickness, which can be easily adapted to other applications.
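
The kind of simple insulation-thickness estimate mentioned above can be made with one-dimensional steady-state conduction, $Q = kA\Delta T/d$. The back-of-the-envelope sketch below uses illustrative numbers of our own choosing, not the GAPS analysis.

```python
def insulation_thickness(k, area, delta_t, q_allowed):
    """Minimum foam thickness [m] so that the conductive heat leak through the
    insulation stays below q_allowed [W] (1-D steady-state conduction, Q = k*A*dT/d).

    k       : thermal conductivity of the foam [W/(m K)]
    area    : insulated surface area [m^2]
    delta_t : temperature difference across the foam [K]
    """
    return k * area * delta_t / q_allowed

# e.g. 0.03 W/(m K) foam, 2 m^2 surface, 70 K across it, 50 W allowed -> ~0.084 m
print(insulation_thickness(0.03, 2.0, 70.0, 50.0))
```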

Read this paper on arXiv…

H. Fuke, S. Okazaki, A. Kawachi, et al.
Tue, 27 Dec 22
20/30

Comments: 8 pages, 14 figures, 3 tables

Satellite edge computing for real-time and very-high resolution Earth observation [CL]

http://arxiv.org/abs/2212.12912


In real-time and high-resolution Earth observation, Low Earth Orbit (LEO) satellites capture images that are subsequently transmitted to the ground to create an updated map of an area of interest. Such maps provide valuable information for meteorology or environmental monitoring, but can also be employed in near-real-time operation for disaster detection, identification, and management. However, the amount of data generated by these applications can easily exceed the communication capabilities of LEO satellites, leading to congestion and packet dropping. To avoid these problems, Inter-Satellite Links (ISLs) can be used to distribute the data among the satellites for processing. In this paper, we address an energy minimization problem based on a general satellite mobile edge computing (SMEC) framework for real-time and very-high-resolution Earth observation. Our results illustrate that the optimal allocation of data and selection of the compression parameters increase the number of images that the system can support by a factor of 12 when compared to directly downloading the data. Further, energy savings greater than 11% were observed in a real-life scenario of imaging a volcanic island, while a sensitivity analysis of the image acquisition process demonstrates that potential energy savings can be as high as 92%.

Read this paper on arXiv…

I. Leyva-Mayorga, M. Gost, M. Moretti, et al.
Tue, 27 Dec 22
24/30

Comments: submitted for publication to IEEE Transactions on Communications

Impact of a binary black hole on its outer circumbinary disc [HEAP]

http://arxiv.org/abs/2212.12005


Accreting supermassive binary black holes (SMBBHs) are potential targets for multi-messenger astronomy, as they emit gravitational waves (GW) while their environment emits electromagnetic (EM) waves. In order to get the most out of a joint GW-EM detection, we first need theoretically predicted EM signals unambiguously linked to BBHs. In that respect, this is the first of a series of papers dedicated to accreting pre-merger BBHs and their associated EM observables. Here, we extend our Numerical Observatory of Violent Accreting systems, e-NOVAs, to any spacetime. Unlike previous studies, almost exclusively focused on the inner regions, we investigate the impact of the BBH on its outer circumbinary disc, located in the radiation (or wave) zone, after implementing an approximate analytical spacetime of spinning, inspiralling BBHs in e-NOVAs. We follow the formation of a weak spiral structure in the disc density arising from retardation effects in the radiation-zone metric. Simulation data are then post-processed with a general-relativistic ray-tracing code incorporating the same BBH spacetime, assuming SMBBH sources. The density spiral creates a small (<1%) but unambiguous modulation of the lightcurve at the semi-orbital period. This signal, although weak, is fundamentally different from that of an axisymmetric disc around a single BH, providing a lower limit on the impact of a BBH on its outer disc. Having established this difference, we study how the binary parameters affect the modulation and find that the optimal case is a high source inclination, for any binary mass ratio (from 0.1 to 1).

Read this paper on arXiv…

R. Mignon-Risse, P. Varniere and F. Casse
Mon, 26 Dec 22
10/39

Comments: 14 pages, 11 figures. Accepted for publication in MNRAS

The Low Energy Module (LEM): development of a CubeSat spectrometer for sub-MeV particles and Gamma Ray Burst detection [CL]

http://arxiv.org/abs/2212.12351


Accurate flux measurement of low-energy charged particles trapped in the magnetosphere is necessary for Space Weather characterization and for studying the coupling between the lithosphere and the magnetosphere, allowing the investigation of correlations between seismic events and particle precipitation from the Van Allen Belts. In this work, the project of a CubeSat space spectrometer, the Low Energy Module (LEM), is presented. The detector will be able to perform an event-based measurement of the energy, arrival direction, and composition of low-energy charged particles down to 0.1 MeV. Moreover, thanks to a CdZnTe mini-calorimeter, the LEM spectrometer also allows photon detection in the sub-MeV range, joining the quest to investigate the nature of Gamma Ray Bursts. The particle identification of the LEM relies on the $\Delta E - E$ technique performed with thin silicon detectors. This multipurpose spectrometer will fit within a 10x10x10 $\text{cm}^3$ CubeSat frame and will be constructed as a joint project between the University of Trento, FBK, and INFN-TIFPA. To fulfil the size and mass requirements, an innovative approach based on active particle collimation was designed for the LEM; this avoids the heavy and bulky passive collimators of previous space detectors. In this paper, we present the LEM geometry, its detection concept, and the results from the developed GEANT4 simulation.
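
The $\Delta E - E$ technique exploits the fact that, for a thin transmission detector, the Bethe formula gives roughly $dE/dx \propto mZ^2/E$, so the product of the energy loss in the thin layer and the total energy separates particle species into bands. The sketch below is a crude textbook illustration, not the LEM reconstruction.

```python
import numpy as np

def delta_e_e_pid(delta_e, e_total):
    """Crude Delta-E / E particle-identification parameter.

    delta_e : energy deposited in the thin transmission detector [MeV]
    e_total : total (or residual) energy of the particle [MeV]
    Particles of different mass and charge cluster into separate bands
    of Delta-E * E (approximately proportional to m*Z^2).
    """
    return np.asarray(delta_e) * np.asarray(e_total)

# events with a larger PID value populate the heavier / higher-Z bands
pid = delta_e_e_pid([0.3, 1.2], [5.0, 4.0])   # illustrative numbers only
```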

Read this paper on arXiv…

R. Nicolaidis, F. Nozzoli, R. Iuppa, et. al.
Mon, 26 Dec 22
14/39

Comments: N/A

First Flight Performance of the Micro-X Microcalorimeter X-Ray Sounding Rocket [IMA]

http://arxiv.org/abs/2212.12064


The flight of the Micro-X sounding rocket on July 22, 2018 marked the first operation of Transition-Edge Sensors and their SQUID readouts in space. The instrument combines the microcalorimeter array with an imaging mirror to take high-resolution spectra from extended X-ray sources. The first flight target was the Cassiopeia A Supernova Remnant. While a rocket pointing malfunction led to no time on-target, data from the flight was used to evaluate the performance of the instrument and demonstrate the flight viability of the payload. The instrument successfully achieved a stable cryogenic environment, executed all flight operations, and observed X-rays from the on-board calibration source. The flight environment did not significantly affect the performance of the detectors compared to ground operation. The flight provided an invaluable test of the impact of external magnetic fields and the instrument configuration on detector performance. This flight provides a milestone in the flight readiness of these detector and readout technologies, both of which have been selected for future X-ray observatories.

Read this paper on arXiv…

J. Adams, R. Baker, S. Bandler, et. al.
Mon, 26 Dec 22
15/39

Comments: N/A

J-PLUS Tracking Tool: Scheduler and Tracking software for the Observatorio Astrofísico de Javalambre (OAJ) [IMA]

http://arxiv.org/abs/2212.12270


The Javalambre Photometric Local Universe Survey (J-PLUS) is an ongoing 12 band photometric optical survey, observing thousands of square degrees of the Northern Hemisphere from the dedicated JAST80 telescope at the Observatorio Astrofísico de Javalambre (OAJ). Observational strategy is a critical point in this large survey. To plan the best observations, it is necessary to select pointings depending on object visibility, pointing priority and status, and the location and phase of the Moon. In this context, the J-PLUS Tracking Tool, a web application, has been implemented, which includes tools to plan the best observations, create the command files for the telescope, track the observations, and monitor the status of the survey. Robustness is essential in this environment; to achieve it, a feedback software system has been implemented. This software automatically decides and marks which observations are valid or which must be repeated. It bases its decision on the data obtained from the data management pipeline database using a complex system of pointing and filter statuses. This contribution presents the J-PLUS Tracking Tool and the complete feedback software system.

Read this paper on arXiv…

T. Civera
Mon, 26 Dec 22
18/39

Comments: 4 pages, 2 figures, to be published in Proc. ADASS XXXII (2022)

The influence of laser relative intensity noise in the Laser Interferometer Space Antenna [CL]

http://arxiv.org/abs/2212.12052


LISA is an upcoming ESA mission that will detect gravitational waves in space by interferometrically measuring the separation between free-falling test masses at picometer precision. To reach the desired performance, LISA will employ the noise reduction technique time-delay interferometry (TDI), in which multiple raw interferometric readouts are time shifted and combined into the final scientific observables. Evaluating the performance in terms of these TDI variables requires careful tracking of how different noise sources propagate through TDI, as noise correlations might affect the performance in unexpected ways. One example of such potentially correlated noise is the relative intensity noise (RIN) of the six lasers aboard the three LISA satellites, which will couple into the interferometric phase measurements. In this article, we calculate the expected RIN levels based on the current mission architecture and the envisaged mitigation strategies. We find that strict requirements on the technical design reduce the effect from approximately 8.7 pm/rtHz per inter-spacecraft interferometer to that of a much lower sub-1 pm/rtHz noise, with typical characteristics of an uncorrelated readout noise after TDI. Our investigations underline the importance of sufficiently balanced detection of the interferometric measurements.

Read this paper on arXiv…

L. Wissel, O. Hartwig, J. Bayle, et. al.
Mon, 26 Dec 22
23/39

Comments: 15 pages, 10 figures, 2 tables

Detection of Solar Filaments using Suncharts from Kodaikanal Solar Observatory Archive Employing a Clustering Approach [SSA]

http://arxiv.org/abs/2212.12176


With over 100 years of solar observations, the Kodaikanal Solar Observatory (KoSO) is a one-of-a-kind solar data repository in the world. Among its many data catalogues, the 'suncharts' at KoSO are of particular interest. These suncharts (1904-2020) are coloured drawings of different solar features, such as sunspots, plages, filaments, and prominences, made on papers with a Stonyhurst latitude-longitude grid etched on them. In this paper, we analyze this unique data by first digitizing each sunchart using an industry-standard scanner and saving the digital images in high-resolution '.tif' format. We then examine the Cycle 19 and Cycle 20 data (two of the strongest cycles of the last century) with the aim of detecting filaments. To this end, we employed the 'k-means clustering' method and obtained different filament parameters such as position, tilt angle, length, and area. Our results show that filament length (and area) increases with latitude and the pole-ward migration is clearly dominated by a particular tilt sign. Lastly, we cross-verified our findings with results from the KoSO digitized photographic plate database for the overlapping time period and obtained a good agreement between them. This work, acting as a proof-of-concept, will kick-start new efforts to effectively use the entire hand-drawn series of multi-feature, full-disk solar data and enable researchers to extract new science, such as the generation of pseudo-magnetograms for the last 100 years.

Read this paper on arXiv…

A. Priyadarshi, M. Hegde, B. Jha, et. al.
Mon, 26 Dec 22
33/39

Comments: 12 pages, 7 Figures, Accepted for publication in ApJ

Failure type detection and predictive maintenance for the next generation of imaging atmospheric Cherenkov telescopes [IMA]

http://arxiv.org/abs/2212.12381


The next generation of imaging atmospheric Cherenkov telescopes will be composed of hundreds of telescopes working together to attempt to unveil some fundamental physics of the high-energy Universe. Along with the scientific data, a large volume of housekeeping and auxiliary data coming from weather stations, instrumental sensors, logging files, etc., will be collected as well. Driven by supervised and reinforcement learning algorithms, such data can be exploited for applying predictive maintenance and failure type detection to these astrophysical facilities. In this paper, we present the project aiming to trigger the development of a model that will be able to predict, just in time, forthcoming component failures along with their kind and severity.

Read this paper on arXiv…

F. Incardona, A. Costa and K. Munari
Mon, 26 Dec 22
35/39

Comments: N/A

Detecting neutrinos in IceCube with Cherenkov light in the South Pole ice [IMA]

http://arxiv.org/abs/2212.12142


The IceCube Neutrino Observatory detects GeV-to-PeV+ neutrinos via the Cherenkov light produced by secondary charged particles from neutrino interactions with the South Pole ice. The detector consists of over 5000 spherical Digital Optical Modules (DOMs), each deployed with a single downward-facing photomultiplier tube (PMT) and arrayed across 86 strings over a cubic kilometer. IceCube has measured the astrophysical neutrino flux, searched for its origins, and constrained neutrino oscillation parameters and cross sections. These results were made possible by an in-depth characterization of the glacial ice, which has been refined over time, and novel approaches in reconstructions that utilize fast approximations of Cherenkov yield expectations.
After over a decade of nearly continuous IceCube operation, the next generation of neutrino telescopes at the South Pole are taking shape. The IceCube Upgrade will add seven additional strings in a dense infill configuration. Multi-PMT OMs will be attached to each string, along with improved calibration devices and new sensor prototypes. Its denser OM and string spacing will extend sensitivity to lower neutrino energies and further constrain neutrino oscillation parameters. The calibration goals of the Upgrade will help guide the design and construction of IceCube Gen2, which will increase the effective volume by nearly an order of magnitude.

Read this paper on arXiv…

T. Yuan
Mon, 26 Dec 22
38/39

Comments: 5 pages, 5 figures, proceeding from the 11th International Workshop on Ring Imaging Cherenkov Detectors (RICH2022)

NIRCam Performance on JWST In Flight [IMA]

http://arxiv.org/abs/2212.12069


The Near Infrared Camera for the James Webb Space Telescope is delivering the imagery that astronomers have hoped for ever since JWST was proposed back in the 1990s. In the Commissioning Period that extended from right after launch to early July 2022, NIRCam was subjected to a number of performance tests and operational checks. The camera is exceeding pre-launch expectations in virtually all areas, with very few surprises discovered in flight. NIRCam also delivered the imagery needed by the Wavefront Sensing Team for use in aligning the telescope mirror segments (Acton et al. 2022; McElwain et al. 2022).

Read this paper on arXiv…

M. Rieke, D. Kelly, K. Misselt, et. al.
Mon, 26 Dec 22
39/39

Comments: 17 pages, 18 figures Accepted for publication in PASP

Doppler effect in TianQin time-delay interferometry [CL]

http://arxiv.org/abs/2212.11437


The current design of space-based gravitational wave detectors utilizes heterodyne laser interferometry in inter-satellite science measurements. Frequency variations of the heterodyne beatnotes are predominantly caused by the Doppler effect from relative satellite motion along lines of sight. Generally considered to be outside the measurement band, the Doppler effect appears to have been largely overlooked in literature on numerical simulations of time-delay interferometry (TDI). However, the potential impact on the effectiveness of TDI should be assessed. The issue is particularly relevant to TianQin that features geocentric orbits, because of strong gravity disturbances from the Earth-Moon system at $<1\times 10^{-4}$ Hz. In this paper, based on high-precision orbital data obtained from detailed gravity field modeling, we incorporate the Doppler shift in the generation of TianQin’s beatnote phase signals. To remove the large-scale Doppler phase drift at $<1\times 10^{-4}$ Hz, we develop a high-performance high-pass filter and consider two possible processing sequences, i.e., applying the filter before or after TDI combinations. Our simulation results favor the former and demonstrate successful removal of the low-frequency gravity disturbances for TianQin without degrading the TDI performance, assuming 10 m pseudo-ranging uncertainty. The filtering scheme can be used in developing the initial noise-reduction pipeline for TianQin.

Read this paper on arXiv…

L. Zheng, S. Yang and X. Zhang
Fri, 23 Dec 22
10/58

Comments: 9 pages, 11 figures

CubeSats for Gamma-Ray Astronomy [IMA]

http://arxiv.org/abs/2212.11413


After many years of flying in space primarily for educational purposes, CubeSats – tiny satellites with form factors corresponding to arrangements of “1U” units, or cubes, each 10 cm on a side – have come into their own as valuable platforms for technology advancement and scientific investigations. CubeSats offer comparatively rapid, low-cost access to space for payloads that can be built, tested, and operated by relatively small teams, with substantial contributions from students and early career researchers. Continuing advances in compact, low-power detectors, readout electronics, and flight computers have now enabled X-ray and gamma-ray sensing payloads that can fit within the constraints of CubeSat missions, permitting in-orbit demonstrations of new techniques and innovative high-energy astronomy observations. Gamma-ray-sensing CubeSats are certain to make an important contribution in the new era of multi-messenger, time-domain astronomy by detecting and localizing bright transients such as gamma-ray bursts, solar flares, and terrestrial gamma-ray flashes; however, other astrophysical science areas requiring long observations in a low-background environment, including gamma-ray polarimetry, studies of nuclear lines, and measurement of diffuse backgrounds, will likely benefit as well. We present the primary benefits of CubeSats for high-energy astronomy, highlight the scientific areas currently or soon to be studied, and review the missions that are currently operating, under development, or proposed. A rich portfolio of CubeSats for gamma-ray astronomy already exists, and the potential for a broad range of creative and scientifically productive missions in the near future is very high.

Read this paper on arXiv…

P. Bloser, D. Murphy, F. Fiore, et. al.
Fri, 23 Dec 22
13/58

Comments: Book chapter for the “Handbook of X-ray and Gamma-ray Astrophysics”, Section “Optics and Detectors for Gamma-ray Astrophysics” (Editors in chief: C. Bambi and A. Santangelo, Springer Singapore). 33 pages, 11 figures

Automatic Spectroscopic Data Reduction using BANZAI [IMA]

http://arxiv.org/abs/2212.11381


Time domain astronomy has both increased the data volume and the urgency of data reduction in recent years. Spectra provide key insights into astrophysical phenomena but require complex reductions. Las Cumbres Observatory has six spectrographs: two low-dispersion FLOYDS instruments and four NRES high-resolution echelle spectrographs. We present an extension of the data reduction framework, BANZAI, to process spectra automatically, with no human interaction. We also present interactive tools we have developed for human vetting and improvement of the spectroscopic reduction. Tools like those presented here are essential to maximize the scientific yield from current and future time domain astronomy.

Read this paper on arXiv…

C. McCully, M. Daily, G. Brandt, et. al.
Fri, 23 Dec 22
26/58

Comments: 12 pages, 8 figures, SPIE Proceedings 2022

Primary Objective Grating Telescopy: Optical Properties and Feasibility of Applications [IMA]

http://arxiv.org/abs/2212.11443


We develop the theoretical foundation for primary objective grating (POG) telescopy. In recent years, a wide range of telescope designs that collect the light over a large grating and focus it with a secondary receiving optic that is placed at grazing exodus have been proposed by Thomas D. Ditto, and are sometimes referred to as Dittoscopes. Applications include discovery and characterization of exoplanets, discovery of near-Earth asteroids, and spectroscopic surveys of the sky. These telescopes would have small aerial mass, and therefore provide a path forward to launch large telescopes into space. Because this series of telescope designs departs from traditional telescope designs, it has been difficult to evaluate which applications are most advantageous for this design. Here, we define a new figure of merit, the “modified etendue,” that characterizes the photon collection capability of a POG. It is demonstrated that the diffraction limit for observations is determined by the length of the grating. We evaluate the effects of atmospheric seeing for ground-based applications and the disambiguation of position vs. wavelength in the focal plane using a second dispersing element. Finally, some strategies for fully reaping the benefits of POG optical characteristics are discussed.

Read this paper on arXiv…

L. Swordy, H. Newberg and T. Ditto
Fri, 23 Dec 22
29/58

Comments: 25 pages, 12 figures, submitted to JATIS

Effect of solar free oscillations on TianQin's range acceleration noise [CL]

http://arxiv.org/abs/2212.11450


TianQin is a proposed space-based gravitational-wave detector mission to be deployed and operated in high Earth orbits. As a sequel to [Zhang et al. Phys. Rev. D 103, 062001 (2021)], we investigate a type of “orbital noise” in TianQin’s range acceleration that is caused by gravitational perturbation associated with solar free oscillations. Frequencies of such oscillations are typically within TianQin’s measurement band of 0.1 mHz–1 Hz, and the disturbance level needs careful assessment. By using high-precision orbit propagation and adding the Sun’s time-variable oblateness $J_2$ to detailed gravity-field models, we examine the effect in the frequency domain and show that the solar free oscillation noise is expected to be two orders of magnitude lower than the noise requirement on single links and hence has little impact on the mission.

Read this paper on arXiv…

K. Liu, C. Luo and X. Zhang
Fri, 23 Dec 22
31/58

Comments: 10 pages, 5 figures

Identify 46 New Open Clusters Candidates In Gaia EDR3 Using pyUPMASK and Random Forest Hybrid Method [SSA]

http://arxiv.org/abs/2212.11569


Open clusters (OCs) are regarded as tracers to understand stellar evolution theory and validate stellar models. In this study, we present a robust approach to identifying OCs. A hybrid method of pyUPMASK and random forest (RF) is first used to remove field stars and determine more reliable members. An identification model based on the RF algorithm, built on 3714 OC samples from Gaia DR2 and EDR3, is then applied to identify OC candidates. The OC candidates are obtained after isochrone fitting, the advanced stellar population synthesis (ASPS) model fitting, and visual inspection. Using the proposed approach, we revisited 868 candidates and preliminarily clustered them with the friends-of-friends algorithm in Gaia EDR3. Excluding the open clusters that have already been reported, we focused on the remaining 300 unknown candidates. From high to low fitting quality, these unrevealed candidates were further classified into Class A (59), Class B (21), and Class C (220), respectively. As a result, 46 new reliable open cluster candidates among classes A and B are identified after visual inspection.
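
As a rough illustration of the random-forest step only (not the authors' pipeline: pyUPMASK, the actual Gaia features, and the ASPS fitting are omitted, and the data below are synthetic), a membership classifier separating a compact cluster clump from field stars might look like this.

# Minimal sketch: random-forest classification of cluster members vs field stars
# from astrometric features. Synthetic data; feature names are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
field = rng.normal(0.0, 3.0, size=(2000, 3))                   # pmra, pmdec, parallax (field)
members = rng.normal([1.5, -2.0, 0.8], 0.15, size=(300, 3))    # compact cluster clump
X = np.vstack([field, members])
y = np.r_[np.zeros(len(field)), np.ones(len(members))]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))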

Read this paper on arXiv…

H. Chi, S. Wei, F. Wang, et. al.
Fri, 23 Dec 22
32/58

Comments: 16 Pages, 14 figures, 4 tables, accepted by APJs

The ngEHT Analysis Challenges [IMA]

http://arxiv.org/abs/2212.11355


The next-generation Event Horizon Telescope (ngEHT) will be a significant enhancement of the Event Horizon Telescope (EHT) array, with $\sim 10$ new antennas and instrumental upgrades of existing antennas. The increased $uv$-coverage, sensitivity, and frequency coverage allow a wide range of new science opportunities to be explored. The ngEHT Analysis Challenges have been launched to inform development of the ngEHT array design, science objectives, and analysis pathways. For each challenge, synthetic EHT and ngEHT datasets are generated from theoretical source models and released to the challenge participants, who analyze the datasets using image reconstruction and other methods. The submitted analysis results are evaluated with quantitative metrics. In this work, we report on the first two ngEHT Analysis Challenges. These have focused on static and dynamical models of M87* and Sgr A*, and shown that high-quality movies of the extended jet structure of M87* and near-horizon hourly timescale variability of Sgr A* can be reconstructed by the reference ngEHT array in realistic observing conditions, using current analysis algorithms. We identify areas where there is still room for improvement of these algorithms and analysis strategies. Other science cases and arrays will be explored in future challenges.

Read this paper on arXiv…

F. Roelofs, L. Blackburn, G. Lindahl, et. al.
Fri, 23 Dec 22
37/58

Comments: 32 pages, 14 figures, accepted for publication in Galaxies

Gaia Data Release 3: Gaia scan-angle dependent signals and spurious periods [IMA]

http://arxiv.org/abs/2212.11971


Context: Gaia DR3 time series data may contain spurious signals related to the time-dependent scan angle. Aims: We aim to explain the origin of scan-angle dependent signals and how they can lead to spurious periods, provide statistics to identify them in the data, and suggest how to deal with them in Gaia DR3 data and in future releases. Methods: Using real Gaia data, alongside numerical and analytical models, we visualise and explain the features observed in the data. Results: We demonstrated with Gaia data that source structure (multiplicity or extendedness) or pollution from close-by bright objects can cause biases in the image parameter determination from which photometric, astrometric and (indirectly) radial velocity time series are derived. These biases are a function of the time-dependent scan direction of the instrument and thus can introduce scan-angle dependent signals, which in turn can result in specific spurious periodic signals. Numerical simulations qualitatively reproduce the general structure observed in the spurious period and spatial distribution of photometry and astrometry. A variety of statistics allows for identification of affected sources. Conclusions: The origin of the scan-angle dependent signals and subsequent spurious periods is well-understood and is mostly caused by fixed-orientation optical pairs with separation <0.5″ (amongst which binaries with P>>5y) and (cores of) distant galaxies. Though the majority of sources with affected derived parameters have been filtered out from the Gaia archive, there remain Gaia DR3 data that should be treated with care (e.g. gaia_source was untouched). Finally, the various statistics discussed in the paper can not only be used to identify and filter affected sources, but can also reveal new information about them that is not available through other means, especially in terms of binarity on sub-arcsecond scale.

Read this paper on arXiv…

B. Holl, C. Fabricius, J. Portell, et. al.
Fri, 23 Dec 22
46/58

Comments: 60 Figures, 2 Tables, submitted to A&A (v1: first partial revision incorporating various, but not yet all comments from the referee)

PACMAN: A pipeline to reduce and analyze Hubble Wide Field Camera 3 IR Grism data [IMA]

http://arxiv.org/abs/2212.11421


Here we present PACMAN, an end-to-end pipeline developed to reduce and analyze HST/WFC3 data. The pipeline includes both spectral extraction and light curve fitting. The foundation of PACMAN has already been used in numerous publications (e.g., Kreidberg et al., 2014; Kreidberg et al., 2018) and these papers have already accumulated hundreds of citations. The Hubble Space Telescope (HST) has become the preeminent workhorse facility for the characterization of extrasolar planets. HST currently has two of the most powerful space-based tools for characterizing exoplanets over a broad spectral range: The Space Telescope Imaging Spectrograph (STIS) in the UV and the Wide Field Camera 3 (WFC3) in the Near Infrared. With the introduction of a spatial scan mode on WFC3, where the star moves perpendicular to the dispersion direction during an exposure, WFC3 observations have become very efficient due to the reduction of overhead time and the possibility of longer exposures without saturation. For exoplanet characterization, WFC3 is used for transit and secondary eclipse spectroscopy, and phase curve observations. The instrument has two different grisms: G102 with a spectral range from 800 nm up to 1150 nm and G141 encompassing 1075 nm to about 1700 nm. The spectral range of WFC3/G141 is primarily sensitive to molecular absorption from water at approximately 1.4 microns. This led to the successful detection of water in the atmospheres of over a dozen exoplanets. The bluer part of WFC3, the G102 grism, is also sensitive to water and most notably led to the first detection of a helium exosphere.

Read this paper on arXiv…

S. Zieba and L. Kreidberg
Fri, 23 Dec 22
49/58

Comments: 8 pages, 2 figures, Published in JOSS, GitHub: this https URL

The Gaia AVU-GSR parallel solver: preliminary studies of a LSQR-based application in perspective of exascale systems [IMA]

http://arxiv.org/abs/2212.11675


The Gaia Astrometric Verification Unit-Global Sphere Reconstruction (AVU-GSR) Parallel Solver aims to find the astrometric parameters for $\sim$10$^8$ stars in the Milky Way, the attitude and the instrumental specifications of the Gaia satellite, and the global parameter $\gamma$ of the post Newtonian formalism. The code iteratively solves a system of linear equations, $\mathbf{A} \times \vec{x} = \vec{b}$, where the coefficient matrix $\mathbf{A}$ is large ($\sim$$10^{11} \times 10^8$ elements) and sparse. To solve this system of equations, the code exploits a hybrid implementation of the iterative PC-LSQR algorithm, where the computation related to different horizontal portions of the coefficient matrix is assigned to separate MPI processes. In the original code, each matrix portion is further parallelized over the OpenMP threads. To further improve the code performance, we ported the application to the GPU, replacing the OpenMP parallelization language with OpenACC. In this port, $\sim$95% of the data is copied from the host to the device at the beginning of the entire cycle of iterations, making the code compute bound rather than data-transfer bound. The OpenACC code presents a speedup of $\sim$1.5 over the OpenMP version but further optimizations are in progress to obtain higher gains. The code runs on multiple GPUs and it was tested on the CINECA supercomputer Marconi100, in anticipation of a port to the pre-exascale system Leonardo, that will be installed at CINECA in 2022.
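
The numerical core described above, an iterative least-squares solution of a large sparse system A x = b, can be sketched on a toy problem with SciPy's LSQR; the actual AVU-GSR solver uses its own preconditioned, MPI/OpenACC-parallel implementation at vastly larger scale, so the snippet below is only an illustration of the algorithmic idea.

# Toy LSQR solve of a sparse least-squares system A x = b (SciPy implementation,
# standing in for the parallel PC-LSQR solver described in the abstract).
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = sprandom(10_000, 500, density=1e-2, random_state=0, format="csr")
x_true = rng.normal(size=500)
b = A @ x_true

x_est, istop, itn = lsqr(A, b, atol=1e-10, btol=1e-10)[:3]
rel_err = np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)
print("iterations:", itn, " relative error:", rel_err)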

Read this paper on arXiv…

V. Cesare, U. Becciani, A. Vecchiato, et. al.
Fri, 23 Dec 22
57/58

Comments: 18 pages, 8 figures, 3 pseudocodes, published in Astronomy and Computing, Volume 41, October 2022, 100660, accepted for publication on 4th October 2022

XMM-Newton [IMA]

http://arxiv.org/abs/2212.10995


The X-ray Multi-mirror Mission (XMM-Newton) provides simultaneous non-dispersive spectroscopic X-ray imaging and timing, medium resolution dispersive X-ray spectroscopy and optical/UV imaging, spectroscopy and timing. In combination, the imaging cameras offer an effective area over the energy range from 150 eV to 12 keV of up to 2500 cm$^2$ at 1.5 keV and $\sim$1800 cm$^2$ at 5 keV. The gratings cover an energy range from 0.4 keV to 2.2 keV with a combined effective area of up to 120 cm$^2$ at 0.8 keV. XMM-Newton offers unique opportunities for a wide variety of sensitive X-ray observations accompanied by simultaneous optical/UV measurements. The majority of XMM-Newton’s observing time is made available to the astronomical community by peer-reviewed Announcements of Opportunity. The scientific exploitation of XMM-Newton data is aided by an observatory-class X-ray facility which provides analysis software, pipeline processing, calibration and catalogue generation. Around 380 refereed papers based on XMM-Newton data are published each year with a high fraction of papers reporting transformative scientific results.

Read this paper on arXiv…

N. Schartel, R. González-Riestra, P. Kretschmar, et. al.
Thu, 22 Dec 22
28/59

Comments: 37 pages, 28 figures, Invited chapter for the “Handbook of X-ray and Gamma-ray Astrophysics” (Eds. C. Bambi and A. Santangelo, Springer Singapore), expected in 2022

Parameter Estimation of Eccentric Gravitational Waves with Decihertz Observatory and Its Cosmological Implications [CL]

http://arxiv.org/abs/2212.11131


Eccentricity of compact binaries can improve the parameter estimation of gravitational waves (GWs), which is due to the fact that the multiple harmonics induced by eccentricity can provide more information and break the degeneracy between waveform parameters. In this paper, we first investigate the parameter estimation of eccentric GWs with decihertz observatory. We consider two scenarios for the configuration of DECIGO, i.e., the one cluster of DECIGO with its design sensitivity and B-DECIGO which also has one cluster but with inferior sensitivity as a comparison. We adopt the Fisher matrix to estimate the parameter errors. By mocking up the typical binaries in GWTC-3, we find a nonvanishing eccentricity can significantly improve the estimation for almost all waveform parameters. In particular, the localization of typical binary black holes (BBH) can achieve $\mathcal{O}(10-10^{3.5})$ factors of improvement when the initial eccentricity $e_0=0.4$ at 0.1 Hz. The precise localization of binary neutron stars (BNS) and neutron star–black hole binaries (NSBH), together with the large improvement of localization of BBH from eccentricity in the mid-band, inspire us to construct the catalogs of golden dark sirens whose host galaxies can be uniquely identified. We find that with only one cluster of DECIGO running 1 year in its design sensitivity, hundreds of golden dark BNS, NSBH, and tens of golden dark BBH can be observed. Eccentricity can greatly increase the population of golden dark BBH from $\sim 7~(e_0=0)$ to $\sim 65~(e_0=0.2)$. Such an increase of population of golden dark BBH events can improve the precision of Hubble constant measurement from 2.06% to 0.68%, matter density parameter from 64% to 16% in $\Lambda$CDM model. Through the phenomenological parameterization of GW propagation, the constraints of modified gravity can be improved from 6.2% to 1.6%.
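
For readers unfamiliar with the forecasting technique: the Fisher matrix is built from the derivatives of the signal model with respect to its parameters, F_ij = Σ_k (∂m_k/∂θ_i)(∂m_k/∂θ_j)/σ_k², and the forecast 1σ errors are the square roots of the diagonal of F⁻¹. A minimal sketch with a toy sinusoid standing in for the eccentric waveform (all numbers illustrative, not the paper's setup):

# Generic Fisher-matrix forecast: numerical model derivatives -> F -> sqrt(diag(F^-1)).
import numpy as np

def model(t, params):
    A, f, phi = params
    return A * np.sin(2 * np.pi * f * t + phi)

t = np.linspace(0, 10, 2000)
theta0 = np.array([1.0, 0.5, 0.3])       # fiducial amplitude, frequency, phase
sigma = 0.1                               # per-sample noise level
eps = 1e-6

derivs = []
for i in range(len(theta0)):
    dp = np.zeros_like(theta0); dp[i] = eps
    derivs.append((model(t, theta0 + dp) - model(t, theta0 - dp)) / (2 * eps))
derivs = np.array(derivs)

F = derivs @ derivs.T / sigma**2          # Fisher matrix
errors = np.sqrt(np.diag(np.linalg.inv(F)))
print("forecast 1-sigma errors on (A, f, phi):", errors)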

Read this paper on arXiv…

T. Yang, R. Cai, Z. Cao, et. al.
Thu, 22 Dec 22
38/59

Comments: 28 pages, 23 figures

Measures of Variance on Windowed Gaussian Processes [IMA]

http://arxiv.org/abs/2212.10684


The variance and fractional variance on a fixed time window (variously known as “rms percent” or “modulation index”) are commonly used to characterize the variability of astronomical sources. We summarize properties of this statistic for a Gaussian process.
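
The statistic in question is simply the standard deviation over the mean of the flux within each fixed window; a minimal sketch (synthetic white-noise light curve, arbitrary window length):

# Modulation index ("rms percent") per fixed window: m = std(flux) / mean(flux).
import numpy as np

rng = np.random.default_rng(1)
flux = 10.0 + rng.normal(0.0, 0.5, size=10_000)    # synthetic light curve
window = 500                                        # samples per window

chunks = flux[: len(flux) // window * window].reshape(-1, window)
m = chunks.std(axis=1) / chunks.mean(axis=1)
print("modulation index per window:", np.round(m[:5], 3), "...")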

Read this paper on arXiv…

D. Lee and C. Gammie
Thu, 22 Dec 22
55/59

Comments: 5 pages, 1 figure, submitted to RNAAS

Progress towards a 3D Monte Carlo radiative transfer code for outflow wind modelling [SSA]

http://arxiv.org/abs/2212.11016


Context: Radiative transfer modelling of expanding stellar envelopes is an important task in their analysis. To account for inhomogeneities and deviations from spherical symmetry, it is necessary to develop a 3D approach to radiative transfer modelling.
Aims: We present a 3D Monte Carlo code for radiative transfer modelling, which aims to calculate the plasma ionisation and excitation state with the statistical equilibrium equations and to implement photon-matter coupling. As a first step, we present our Monte Carlo radiation transfer routines, developed and tested from scratch.
Methods: The background model atmosphere (the temperature, density, and velocity structure) can use an arbitrary grid referred to as the model grid (modgrid). The radiative transfer was solved using the Monte Carlo method in a Cartesian grid, referred to as the propagation grid (propgrid). This Cartesian grid was created based on the structure of the modgrid; correspondence between these two grids was set at the beginning of the calculations and then kept fixed. The propgrid can be either regular or adaptive; two modes of adaptive grids were tested. The accuracy and calculation speed for different propgrids were analysed. Photon interaction with matter was handled using Lucy's macroatom approach. Test calculations using our code were compared with the results obtained by a different Monte Carlo radiative transfer code.
Results: Our method and the related code for the 3D radiative transfer using the Monte Carlo and macroatom methods offer an accurate and reliable solution for the radiative transfer problem, and are especially promising for the inclusion and treatment of 3D inhomogeneities.

Read this paper on arXiv…

J. Fišák, J. Kubát, B. Kubátová, et. al.
Thu, 22 Dec 22
59/59

Comments: 21 pages

E-TEST prototype design report [IMA]

http://arxiv.org/abs/2212.10083


E-TEST (Einstein Telescope Euregio-Meuse-Rhin Site and Technology) is a project recently funded by the European program Interreg Euregio Meuse-Rhine. This program is dedicated to innovative cross-border activities between Belgium, the Netherlands, and Germany. With a total budget of 15 M€ and a consortium of 11 partners from the three countries, the objective of the project is twofold. Firstly, to develop eco-friendly and non-invasive imaging of the geological conditions, as well as an observatory of the underground in the EMR region. Secondly, to develop technologies necessary for 3rd generation gravitational wave detectors. In particular, it is proposed to develop a prototype of a large suspended cryogenic silicon mirror, isolated from seismic vibrations at low frequency. The total budget of the project is equally spread over the two activities. The first activity is not discussed at all in this report. The E-TEST prototype will have some key unique features: a silicon mirror of 100 kg, a radiative cooling strategy (non-contact), a low-frequency hybrid isolation stage, cryogenic sensors and electronics, a laser and optics at 2 microns, and a low thermal noise coating.

Read this paper on arXiv…

A. Sider, L. Amez-Droz, A. Amorosi, et. al.
Wed, 21 Dec 22
8/81

Comments: N/A

Galaxy Image Classification using Hierarchical Data Learning with Weighted Sampling and Label Smoothing [IMA]

http://arxiv.org/abs/2212.10081


With the development of a series of galaxy sky surveys in recent years, the volume of observations has increased rapidly, making machine learning methods for galaxy image recognition a hot research topic. Existing automatic galaxy image recognition studies are hampered by the large differences in similarity between categories, the imbalance of data between different classes, and the discrepancy between the discrete representation of galaxy classes and the essentially gradual changes from one morphological class to the adjacent class (DDRGC). These limitations have motivated several astronomers and machine learning experts to design projects with improved galaxy image recognition capabilities. Therefore, this paper proposes a novel learning method, “Hierarchical Imbalanced data learning with Weighted sampling and Label smoothing” (HIWL). The HIWL consists of three key techniques, respectively dealing with the above-mentioned three problems: (1) a hierarchical galaxy classification model based on an efficient backbone network; (2) a weighted sampling scheme to deal with the imbalance problem; and (3) a label smoothing technique to alleviate the DDRGC problem. We applied this method to galaxy photometric images from the Galaxy Zoo-The Galaxy Challenge, exploring the recognition of completely round smooth, in-between smooth, cigar-shaped, edge-on, and spiral galaxies. The overall classification accuracy is 96.32%, and the advantages of the HIWL are shown in terms of recall, precision, and F1-score in comparison with related works. In addition, we also explored the visualization of the galaxy image features and model attention to understand the foundations of the proposed scheme.
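
Two of the HIWL ingredients, weighted sampling for class imbalance and label smoothing, are standard and can be sketched in a few lines of PyTorch. The backbone network, the class hierarchy, and the actual Galaxy Zoo data are omitted here; all names and numbers below are illustrative stand-ins, not the authors' configuration.

# Sketch: class-balanced weighted sampling + label-smoothed cross-entropy loss.
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

X = torch.randn(1000, 16)                                      # stand-in features
y = torch.cat([torch.zeros(900), torch.ones(100)]).long()      # imbalanced labels

class_counts = torch.bincount(y).float()
weights = (1.0 / class_counts)[y]                              # per-sample weights
sampler = WeightedRandomSampler(weights, num_samples=len(y), replacement=True)
loader = DataLoader(TensorDataset(X, y), batch_size=64, sampler=sampler)

criterion = torch.nn.CrossEntropyLoss(label_smoothing=0.1)     # label smoothing
model = torch.nn.Linear(16, 2)                                 # stand-in for the backbone
for xb, yb in loader:
    loss = criterion(model(xb), yb)
    loss.backward()
    break                                                      # single illustrative step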

Read this paper on arXiv…

X. Ma, X. Li, A. Luo, et. al.
Wed, 21 Dec 22
17/81

Comments: accepted by MNRAS

Demonstration of Ultrawideband Polarimetry Using VLBI Exploration of Radio Astrometry (VERA) [IMA]

http://arxiv.org/abs/2212.10144


We report on recent technical developments in the front- and back-ends for the four 20 m radio telescopes of the Japanese Very-Long-Baseline Interferometry (VLBI) project, VLBI Exploration of Radio Astrometry (VERA). We present a brief overview of the dual-circular-polarization receiving and ultrawideband (16 gigabit s$^{-1}$) recording systems that were installed on each of the four telescopes operating at the 22 and 43 GHz bands. The wider-band capability improves the sensitivity of VLBI observations for continuum emission, and the dual-polarization capability enables the study of magnetic fields in relativistic jets ejected from supermassive black holes in active galactic nuclei and in sites of star formation and around evolved stars. We present the linear polarization intensity maps of extragalactic sources at 22 and 43 GHz obtained from the most recent test observations to show the state of the art of the VERA polarimetric observations. At the end of this article, given the realization of VLBI polarimetry with VERA, we describe the future prospects for scientific aims and further technical developments.

Read this paper on arXiv…

Y. Hagiwara, K. Hada, M. Takamura, et. al.
Wed, 21 Dec 22
23/81

Comments: 14 pages, 7 figures, 3 tables, published in the Special issue ” Challenges in Understanding Black Hole Powered Jets with VLBI”, Galaxies Journal

Cats vs Dogs, Photons vs Hadrons [IMA]

http://arxiv.org/abs/2212.10281


In gamma ray astronomy with Cherenkov telescopes, machine learning models are needed to infer what kind of particle generated the detected light, as well as its energy and direction. The focus in this work is on the classification task, training a simple convolutional neural network suitable for binary classification (as it could be a cats vs dogs classification problem), using as input uncleaned images generated from Monte Carlo data for a single ASTRI telescope. Results show an enhanced discriminant power with respect to classical random forest methods.

Read this paper on arXiv…

F. Visconti
Wed, 21 Dec 22
33/81

Comments: 4 pages, 3 figures, 2 tables, to be published in Proceedings of ML4ASTRO conference, Poster category: this https URL

Using Machine Learning to Determine Morphologies of $z<1$ AGN Host Galaxies in the Hyper Suprime-Cam Wide Survey [GA]

http://arxiv.org/abs/2212.09984


We present a machine-learning framework to accurately characterize morphologies of Active Galactic Nucleus (AGN) host galaxies within $z<1$. We first use PSFGAN to decouple host galaxy light from the central point source, then we invoke the Galaxy Morphology Network (GaMorNet) to estimate whether the host galaxy is disk-dominated, bulge-dominated, or indeterminate. Using optical images from five bands of the HSC Wide Survey, we build models independently in three redshift bins: low $(0<z<0.25)$, medium $(0.25<z<0.5)$, and high $(0.5<z<1.0)$. By first training on a large number of simulated galaxies, then fine-tuning using far fewer classified real galaxies, our framework predicts the actual morphology for $\sim$ $60\%-70\%$ of host galaxies from test sets, with a classification precision of $\sim$ $80\%-95\%$, depending on redshift bin. Specifically, our models achieve disk precision of $96\%/82\%/79\%$ and bulge precision of $90\%/90\%/80\%$ (for the 3 redshift bins), at thresholds corresponding to indeterminate fractions of $30\%/43\%/42\%$. The classification precision of our models has a noticeable dependency on host galaxy radius and magnitude. No strong dependency is observed on contrast ratio. Comparing classifications of real AGNs, our models agree well with traditional 2D fitting with GALFIT. The PSFGAN+GaMorNet framework does not depend on the choice of fitting functions or galaxy-related input parameters, runs orders of magnitude faster than GALFIT, and is easily generalizable via transfer learning, making it an ideal tool for studying AGN host galaxy morphology in forthcoming large imaging surveys.

Read this paper on arXiv…

C. Tian, C. Urry, A. Ghosh, et. al.
Wed, 21 Dec 22
40/81

Comments: Accepted for publication in The Astrophysical Journal. 35 Pages. 25 Figures

Using Pulsar Parameter Drifts to Detect Sub-Nanohertz Gravitational Waves [HEAP]

http://arxiv.org/abs/2212.09751


Gravitational waves with frequencies below 1 nHz are notoriously difficult to detect. With periods exceeding current experimental lifetimes, they induce slow drifts in observables rather than periodic correlations. Observables with well-known intrinsic contributions provide a means to probe this regime. In this work, we demonstrate the viability of using observed pulsar timing parameters to discover such “ultralow” frequency gravitational waves, presenting two complementary observables for which the systematic shift induced by ultralow-frequency gravitational waves can be extracted. Using existing data for these parameters, we search the ultralow frequency regime for continuous-wave signals, finding a sensitivity near the expected prediction from supermassive black hole mergers. We do not see an excess in the data, setting a limit on the strain of $7.1 \times 10^{-14}$ at 1 nHz with a sensitivity dropping approximately quadratically with frequency until 10 pHz. Our search method opens a new frequency range for gravitational wave detection and has profound implications for astrophysics, cosmology, and particle physics.

Read this paper on arXiv…

W. DeRocco and J. Dror
Wed, 21 Dec 22
51/81

Comments: 12 pages, 2 figures, 3 appendices

Image enhancement with wavelet-optimized whitening [IMA]

http://arxiv.org/abs/2212.10134


Due to its physical nature, the solar corona exhibits large spatial variations of intensity that make it difficult to simultaneously visualize the features present at all levels and scales. Many general-purpose and specialized filters have been proposed to enhance coronal images. However, most of them require the ad hoc tweaking of parameters to produce subjectively good results. Our aim was to develop a general purpose image enhancement technique that would produce equally good results, but based on an objective criterion. The underlying principle of the method is the equalization, or whitening, of power in the à trous wavelet spectrum of the input image at all scales and locations. An edge-avoiding modification of the à trous transform that uses bilateral weighting by the local variance in the wavelet planes is used to suppress the undesirable halos otherwise produced by discontinuities in the data. Results are presented for a variety of extreme ultraviolet (EUV) and white light images of the solar corona. The proposed filter produces sharp and contrasted output, without requiring the manual adjustment of parameters. Furthermore, the built-in denoising scheme prevents the explosion of high-frequency noise typical of other enhancement methods, without smoothing statistically significant small-scale features. The standard version of the algorithm is about two times faster than the widely used multiscale Gaussian normalization (MGN). The bilateral version is slower, but provides significantly better results in the presence of spikes or edges. Comparisons with other methods suggest that the whitening principle may correspond to the subjective criterion of most users when adjusting free parameters.
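
The whitening principle can be illustrated with a stripped-down version of the idea: decompose the image into à trous wavelet planes with the usual B3-spline kernel and divide each plane by its rms before recombining. The sketch below equalizes power only globally per scale, on a synthetic image, and omits the local equalization and the bilateral, edge-avoiding weighting that the paper uses to suppress halos.

# Simplified wavelet-whitening sketch: 'a trous decomposition + per-scale rms normalization.
import numpy as np
from scipy.ndimage import convolve1d

def atrous_planes(img, n_scales=4):
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0   # B3-spline smoothing kernel
    planes, smooth = [], img.astype(float)
    for s in range(n_scales):
        k = np.zeros(4 * 2**s + 1)
        k[::2**s] = kernel                  # insert 2**s - 1 zeros ("holes") between taps
        smoother = convolve1d(convolve1d(smooth, k, axis=0, mode="reflect"),
                              k, axis=1, mode="reflect")
        planes.append(smooth - smoother)    # wavelet plane at scale s
        smooth = smoother
    return planes, smooth                   # detail planes + smooth residual

rng = np.random.default_rng(0)
img = rng.random((256, 256)) + np.linspace(0.0, 50.0, 256)    # toy image with strong gradient
planes, residual = atrous_planes(img)
whitened = residual + sum(p / p.std() for p in planes)        # equalize power per scale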

Read this paper on arXiv…

F. Auchère, E. Soubrié, G. Pelouze, et. al.
Wed, 21 Dec 22
52/81

Comments: N/A

Radiofrequency Ice Dielectric Measurements at Summit Station, Greenland [CL]

http://arxiv.org/abs/2212.10285


We recently reported on the radio-frequency attenuation length of cold polar ice at Summit Station, Greenland, based on bistatic radar measurements of radio-frequency bedrock echo strengths taken during the summer of 2021. Those data also include echoes attributed to stratified impurities or dielectric discontinuities within the ice sheet (layers), which allow studies of a) estimation of the relative contribution of coherent (discrete layers, e.g.) vs. incoherent (bulk volumetric, e.g.) scattering, b) the magnitude of internal layer reflection coefficients, c) limits on the azimuthal asymmetry of reflections (birefringence), and d) limits on signal dispersion in-ice over a bandwidth of ~100 MHz. We find that i) after averaging 10000 echo triggers, reflected signals observable above the thermal floor (to depths of approximately 1500 m) are consistent with being entirely coherent, ii) internal layer reflection coefficients are measured at approximately -60 to -70 dB, iii) birefringent effects for vertically propagating signals are smaller by an order of magnitude relative to comparable studies performed at South Pole, and iv) within our experimental limits, glacial ice is non-dispersive over the frequency band relevant for neutrino detection experiments.

Read this paper on arXiv…

J. Aguilar, P. Allison, D. Besson, et. al.
Wed, 21 Dec 22
56/81

Comments: N/A

Identifying hot subdwarf stars from photometric data using Gaussian mixture model and graph neural network [SSA]

http://arxiv.org/abs/2212.10072


Hot subdwarf stars are very important for understanding stellar evolution, stellar astrophysics, and binary star systems. Identifying more such stars can help us better understand their statistical distribution, properties, and evolution. In this paper, we present a new method to search for hot subdwarf stars in photometric data (b, y, g, r, i, z) using a machine learning algorithm, a graph neural network, and a Gaussian mixture model. We use a Gaussian mixture model and the Markov distance to build the graph structure, and on this graph structure we use a graph neural network to identify hot subdwarf stars from 86 084 stars, with the recall, precision, and F1 score maximized on the original, weighted, and synthetic minority oversampling technique datasets. Finally, from 21 885 candidates, we selected approximately 6 000 stars that are most similar to hot subdwarf stars.
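
A simplified reading of the graph-construction step, with synthetic colours and a plain nearest-neighbour linking in the space of Gaussian-mixture responsibilities standing in for the paper's GMM plus Markov-distance construction (the graph neural network itself is not shown), might look like this.

# Sketch: fit a Gaussian mixture to photometric colours, then build a graph by
# linking each star to its nearest neighbours in GMM-responsibility space.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
colours = np.vstack([rng.normal(0.0, 0.3, (1000, 4)),      # field-like stars
                     rng.normal(-0.8, 0.1, (50, 4))])      # hot-subdwarf-like clump

gmm = GaussianMixture(n_components=5, random_state=0).fit(colours)
resp = gmm.predict_proba(colours)                           # soft component memberships
adjacency = kneighbors_graph(resp, n_neighbors=8, mode="connectivity")
print("graph:", adjacency.shape, "with", adjacency.nnz, "edges")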

Read this paper on arXiv…

W. Liu, Y. Bu, X. Kong, et. al.
Wed, 21 Dec 22
58/81

Comments: N/A

Spectral performance of the Microchannel X-ray Telescope on board the SVOM mission [IMA]

http://arxiv.org/abs/2212.09863


The Microchannel X-ray Telescope (MXT) is an innovative compact X-ray instrument on board the SVOM astronomical mission dedicated to the study of transient phenomena such as gamma-ray bursts. Over three weeks, we tested the MXT flight model at the Panter X-ray test facility under the nominal temperature and vacuum conditions that MXT will undergo in-flight. We collected data at a series of characteristic energies probing the entire MXT energy range, from 0.28 keV up to 9 keV, for multiple source positions with the center of the point spread function (PSF) inside and outside the detector field of view (FOV). We stacked the data of the positions with the PSF outside the FOV to obtain a uniformly illuminated matrix and reduced all data sets using a dedicated pipeline. We determined the best spectral performance of MXT using an optimized data processing, especially for the energy calibration and the charge sharing effect induced by the pixel low energy thresholding. Our results demonstrate that MXT is compliant with the instrument requirement regarding the energy resolution (<80 eV at 1.5 keV), the low and high energy thresholds, and the accuracy of the energy calibration ($\pm$20 eV). We also determined the charge transfer inefficiency (~$10^{-5}$) of the detector and modeled its evolution with energy prior to the irradiation that MXT will undergo during its in-orbit lifetime. Finally, we measured the relation of the energy resolution as a function of the photon energy. We determined an equivalent noise charge of 4.9 $\pm$ 0.2 e- rms for the MXT detection chain and a Fano factor of 0.131 $\pm$ 0.003 in silicon at 208 K, in agreement with previous works. This campaign confirmed the promising scientific performance that MXT will be able to deliver during the mission lifetime.

Read this paper on arXiv…

B. Schneider, N. Renault-Tinacci, D. Götz, et. al.
Wed, 21 Dec 22
59/81

Comments: 20 pages, 10 figures, accepted for publication in Experimental Astronomy

Se-ResNet+SVM model: an effective method of searching for hot subdwarfs from LAMOST [SSA]

http://arxiv.org/abs/2212.10372


In this paper, we apply the feature-integration idea to fuse the abstract features extracted by Se-ResNet with experience features into hybrid features and input the hybrid features to a Support Vector Machine (SVM) to classify hot subdwarfs. Based on this idea, we construct a Se-ResNet+SVM model, including a binary classification model and a four-class classification model. The four-class classification model can further screen the hot subdwarf candidates obtained by the binary classification model. The F1 values derived by the binary and the four-class classification models on the test set are 96.17% and 95.64%, respectively. Then, we use the binary classification model to classify 333,534 non-FGK-type spectra in the low-resolution spectra of LAMOST DR8 and obtain a catalog of 3,266 hot subdwarf candidates, of which 1223 are newly determined. Subsequently, the four-class classification model further filtered the 3,266 candidates; 409 and 296 are newly determined when the thresholds are set at 0.5 and 0.9, respectively. Through manual inspection, the true numbers of hot subdwarfs in the three newly determined candidate sets are 176, 63, and 41, and the corresponding precision of the classification model in the three cases is 67.94%, 84.88%, and 87.60%, respectively. Finally, we train a Se-ResNet regression model with MAE values of 1212.65 K for Teff, 0.32 dex for log g and 0.24 for [He/H], and predict the atmospheric parameters of these 176 hot subdwarf stars. This provides a useful sample for future studies of hot subdwarfs.

Read this paper on arXiv…

C. Zhongding, K. Xiaoming, W. Tianmin, et. al.
Wed, 21 Dec 22
61/81

Comments: N/A

TelePix — A fast region of interest trigger and timing layer for the EUDET Telescopes [CL]

http://arxiv.org/abs/2212.10248


Test beam facilities are essential to study the response of novel detectors to particles. At the DESY II Test Beam facility, users can test their detectors with an electron beam with a momentum of 1-6 GeV. To track the beam particles, EUDET-style telescopes are provided in each beam area. They provide excellent spatial resolution, but the time resolution is limited by the rolling shutter architecture to a precision of approximately 230 $\mu$s. Since the demand on particle rates, and hence track multiplicities, is increasing, timing is becoming more relevant. DESY foresees several upgrades of the telescopes. TelePix is an upgrade project to provide track timestamping with a precision of better than 5 ns and a configurable region of interest to trigger the telescope readout. Small scale prototypes have been characterised in laboratory and test beam measurements. Laboratory tests with an injection corresponding to 2300 electrons show an S/N above 20. Test beam characterization shows efficiencies above 99% over a threshold range of more than 100 mV and time resolutions of 2.4 ns at low noise rates.

Read this paper on arXiv…

H. Augustin, S. Dittmeier, J. Hammerich, et. al.
Wed, 21 Dec 22
65/81

Comments: Preprint submitted to Proceedings of the 15th Pisa Meeting on Advanced Detectors

A New Period Determination Method for Periodic Variable Stars [IMA]

http://arxiv.org/abs/2212.10037


Variable stars play a key role in understanding the Milky Way and the universe. The era of astronomical big data presents new challenges for quick identification of interesting and important variable stars. Accurately estimating the periods is the most important step to distinguish different types of variable stars. Here, we propose a new method of determining the variability periods. By combining the statistical parameters of the light curves, the colors of the variables, the window function, and the GLS algorithm, the aperiodic variables are excluded, the periodic variables are divided into eclipsing binaries and NEB variables (periodic variable stars other than eclipsing binaries), and the periods of these two main types of variables are derived. We construct a random forest classifier based on 241,154 periodic variables from the ASAS-SN and OGLE datasets of variables. The random forest classifier is trained on 17 features, among which 11 are extracted from the light curves and 6 are from the Gaia Early DR3, ALLWISE and 2MASS catalogs. The variables are classified into 7 superclasses and 17 subclasses. In comparison with the ASAS-SN and OGLE catalogs, the classification accuracy is generally above approximately 82% and the period accuracy is 70%-99%. To further test the reliability of the new method and classifier, we compare our results with the results of Chen et al. (2020) for ZTF DR2. The classification accuracy is generally above 70%. The period accuracy of the EW and SR variables is 50% and 53%, respectively. The period accuracy of other types of variables is 65%-98%.
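
The period-search backbone, a generalised Lomb-Scargle (GLS) periodogram, can be sketched with astropy on a synthetic, irregularly sampled light curve; the colour cuts, window-function vetting, and random-forest classification described above are omitted.

# Minimal GLS period search on an irregularly sampled synthetic light curve.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 400))                 # irregular sampling [days]
true_period = 2.37
y = 1.0 + 0.3 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.05, t.size)

frequency, power = LombScargle(t, y).autopower(minimum_frequency=0.01,
                                               maximum_frequency=5.0)
best_period = 1.0 / frequency[np.argmax(power)]
print("recovered period:", round(best_period, 3), "days")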

Read this paper on arXiv…

X. Xu, Q. Zhu, X. Li, et. al.
Wed, 21 Dec 22
81/81

Comments: 23 pages, 10 figures

Fastcc: fast colour corrections for broadband radio telescope data [IMA]

http://arxiv.org/abs/2212.09488


Broadband receiver data need colour corrections applied to account for the different source spectra across their wide bandwidths. The full integration over a receiver bandpass may be computationally expensive and redundant when repeated many times. Colour corrections can be applied, however, using a simple quadratic fit based on the full integration instead. Here we describe fastcc and interpcc, quick Python and IDL codes that return colour correction coefficients for power-law spectral indices and modified black bodies, respectively, for various Cosmic Microwave Background related experiments. The codes are publicly available, and can be easily extended to support additional telescopes.
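
The quadratic-fit idea can be illustrated directly: integrate a power-law spectrum over a bandpass to get the exact colour correction C(alpha), then fit C(alpha) ≈ a0 + a1*alpha + a2*alpha² so later evaluations are just a polynomial. The sketch below uses a toy top-hat bandpass and a simple flat-spectrum calibration convention, not fastcc's actual bandpasses, conventions, or coefficients.

# Quadratic colour-correction fit for power-law spectra over a toy bandpass.
import numpy as np

nu = np.linspace(20.0, 30.0, 500)        # toy top-hat bandpass [GHz]
g = np.ones_like(nu)                     # flat bandpass response
nu0 = 25.0                               # reference frequency [GHz]

def colour_correction(alpha):
    # ratio of the band average assumed under a flat-spectrum calibration
    # to the band average of a source S ~ nu**alpha, referred to nu0
    return np.sum(g) / np.sum(g * (nu / nu0) ** alpha)

alphas = np.linspace(-3.0, 3.0, 25)
cc = np.array([colour_correction(a) for a in alphas])
coeffs = np.polyfit(alphas, cc, 2)       # [a2, a1, a0]
max_resid = np.max(np.abs(np.polyval(coeffs, alphas) - cc))
print("quadratic coefficients:", np.round(coeffs, 4), " max residual:", max_resid)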

Read this paper on arXiv…

M. Peel, R. Genova-Santos, C. Dickinson, et. al.
Tue, 20 Dec 22
4/97

Comments: 3 pages, 1 figure. Published in RNAAS

Taylor-Couette flow for astrophysical purposes [CL]

http://arxiv.org/abs/2212.08741


A concise review is given of astrophysically motivated experimental and theoretical research on Taylor-Couette flow. The flows of interest rotate differentially, with the inner cylinder faster than the outer one, but are linearly stable against Rayleigh’s inviscid centrifugal instability. At shear Reynolds numbers as large as 10^6, hydrodynamic flows of this type (quasi-keplerian) appear to be nonlinearly stable: no turbulence is seen that cannot be attributed to interaction with the axial boundaries, rather than the radial shear itself. Direct numerical simulations agree, although they cannot yet reach such high Reynolds numbers. This result indicates that accretion-disc turbulence is not purely hydrodynamic in origin, at least insofar as it is driven by radial shear. Theory, however, predicts linear magnetohydrodynamic (MHD) instabilities in astrophysical discs: in particular, the standard magnetorotational instability (SMRI). MHD Taylor-Couette experiments aimed at SMRI are challenged by the low magnetic Prandtl numbers of liquid metals. High fluid Reynolds numbers and careful control of the axial boundaries are required. The quest for laboratory SMRI has been rewarded with the discovery of some interesting inductionless cousins of SMRI, and the recently reported success in demonstrating SMRI itself by taking advantage of conducting axial boundaries. Some outstanding questions and near-future prospects are discussed, especially in connection with astrophysics.

Read this paper on arXiv…

H. Ji and J. Goodman
Tue, 20 Dec 22
16/97

Comments: 17 pages, 6 figures, and 2 tables, accepted as part of Theme issue: Taylor-Couette and Related Flows on the Centennial of Taylor’s Seminal Philosophical Transactions Paper in Phil. Trans. R. Soc. A

Direct D-atom incorporation in radicals: An overlooked pathway for deuterium fractionation [IMA]

http://arxiv.org/abs/2212.08680


Direct D-H exchange in radicals is investigated in a quasi-uniform flow employing chirped pulse mm-wave spectroscopy. Inspired by the H-atom catalyzed isomerization of C3H2 reported in our previous study, D atom reactions with the propargyl (C3H3) radical and its photoproducts were investigated. We observed very efficient D atom enrichment in the photoproducts through an analogous process of D addition/H elimination to C3H2 isomers occurring at 40 K or below. Cyclic C3HD is the only deuterated isomer observed, consistent with the expected addition/elimination yielding the lowest energy product. The other expected addition/elimination product, deuterated propargyl, is not directly detected, although its presence is inferred by the observations in the latter part of the flow. There, in the high-density region of the flow, we observed both isotopomers of singly deuterated propyne attributed to stabilization of the H + C3H2D or D + C3H3 adducts. The implications of these observations for the deuterium fractionation of hydrocarbon radicals in astrochemical environments are discussed with the support of a monodeuterated chemical kinetic model.

Read this paper on arXiv…

N. Dias, R. Gurusinghe, B. Broderick, et. al.
Tue, 20 Dec 22
24/97

Comments: N/A

Data mining techniques on astronomical spectra data. II : Classification Analysis [IMA]

http://arxiv.org/abs/2212.09286


Classification is valuable and necessary in spectral analysis, especially for data-driven mining. Along with the rapid development of spectral surveys, a variety of classification techniques have been successfully applied to astronomical data processing. However, it is difficult to select an appropriate classification method in practical scenarios due to the different algorithmic ideas and data characteristics. Here, we present the second work in the data mining series: a review of spectral classification techniques. This work also consists of three parts: a systematic overview of the current literature, experimental analyses of commonly used classification algorithms, and the source codes used in this paper. Firstly, we carefully investigate the current classification methods in the astronomical literature and organize these methods into ten types based on their algorithmic ideas. For each type of algorithm, the analysis is organized from three perspectives: (1) their current applications and usage frequencies in spectral classification are summarized; (2) their basic ideas are introduced and preliminarily analysed; (3) the advantages and caveats of each type of algorithm are discussed. Secondly, the classification performance of different algorithms on unified data sets is analysed. Experimental data are selected from the LAMOST and SDSS surveys. Six groups of spectral data sets are designed, according to data characteristics, data quality, and data volume, to examine the performance of these algorithms. The scores of nine basic algorithms are then shown and discussed in the experimental analysis. Finally, the source codes of the nine basic algorithms, written in Python, are provided, together with manuals for their usage and improvement.
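As an illustration of the kind of baseline comparison described above (a sketch only, not the authors' released code, and with random data standing in for the LAMOST/SDSS spectra):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# X: (n_spectra, n_features) array of fluxes or extracted features;
# y: class labels (e.g. star / galaxy / QSO). Random data stands in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (KNeighborsClassifier(), RandomForestClassifier(random_state=0)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, accuracy_score(y_test, clf.predict(X_test)))
```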

Read this paper on arXiv…

H. Yang, L. Zhou, J. Cai, et. al.
Tue, 20 Dec 22
26/97

Comments: 25 pages, 41 figures

Eliminating polarization leakage effect for neutral hydrogen intensity mapping with deep learning [IMA]

http://arxiv.org/abs/2212.08773


The neutral hydrogen (HI) intensity mapping (IM) survey is regarded as a promising approach for studies of the cosmic large-scale structure (LSS). A major issue for the HI IM survey is the removal of the bright foreground contamination. A key to successfully removing the bright foreground is to control well, or eliminate, the instrumental effects. In this work, we consider the instrumental effect of polarization leakage and use the U-Net approach, a deep learning-based foreground removal technique, to eliminate the polarization leakage effect. In this method, principal component analysis (PCA) foreground subtraction is used as a preprocessing step for the U-Net foreground subtraction. Our results show that the additional U-Net processing can either remove the foreground residual left after a conservative PCA subtraction or compensate for the signal loss caused by an aggressive PCA preprocessing. Finally, we test the robustness of the U-Net foreground subtraction technique and show that it remains reliable given the existing uncertainty on the HI fluctuation amplitude.
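A minimal sketch of the PCA preprocessing step mentioned above, assuming a simple (n_freq, n_pix) layout for the intensity-mapping cube (the U-Net post-processing itself is beyond a short example):

```python
import numpy as np

def pca_foreground_subtract(cube, n_modes=3):
    """Remove the n_modes strongest frequency-frequency eigenmodes.

    cube: array of shape (n_freq, n_pix). Returns the residual cube.
    Aggressive choices of n_modes remove more foreground but also more HI signal.
    """
    mean = cube.mean(axis=1, keepdims=True)
    x = cube - mean
    cov = x @ x.T / x.shape[1]              # frequency-frequency covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    modes = eigvecs[:, -n_modes:]           # dominant (foreground) modes
    return x - modes @ (modes.T @ x)        # project them out

residual = pca_foreground_subtract(np.random.normal(size=(64, 1024)), n_modes=3)
```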

Read this paper on arXiv…

L. Gao, Y. Li, S. Ni, et. al.
Tue, 20 Dec 22
32/97

Comments: 12 pages, 11 figures

Skyglow inside your eyes: intraocular scattering and artificial brightness of the night sky [IMA]

http://arxiv.org/abs/2212.09103


The visual perception of the natural night sky in many places of the world is strongly disturbed by anthropogenic light. Part of this artificial light is scattered in the atmosphere and propagates towards the observer, adding to the natural brightness and producing a light polluted sky. However, atmospheric scattering is not the only mechanism contributing to the visual skyglow. The rich and diverse biological media forming the human eye also scatter light very efficiently and contribute, in some cases substantially, to the total sky brightness detected by the retinal photoreceptors. In this paper we quantify this effect and assess its relevance when the eye pupil is illuminated by light sources within the visual field. Our results show that intraocular scattering constitutes a significant part of the perceived sky brightness at short distances from streetlights. These results provide quantitative support to the everyday experience that substantial gains in naked-eye star limiting magnitudes can be achieved by blocking the direct light from the lamps that reaches the eye pupil.

Read this paper on arXiv…

S. Bará and C. Bao-Varela
Tue, 20 Dec 22
48/97

Comments: 9 pages, 5 figures

Ultra-Low-Frequency Radio Astronomy Observations from a Selenocentric Orbit: first results of the Longjiang-2 experiment [IMA]

http://arxiv.org/abs/2212.09590


This paper introduces the first results of observations with the Ultra-Long-Wavelength (ULW) Low Frequency Interferometer and Spectrometer (LFIS) on board the selenocentric satellite Longjiang-2. We present a brief description of the satellite and focus on the LFIS payload. The in-orbit commissioning confirmed a reliable operational status of the instrumentation. We also present results of a transition observation, which offers unique measurements on several novel aspects. We estimate the RFI suppression required for such radio astronomy instrumentation at the Moon's distance from Earth to be of the order of 80 dB. We analyse a method of separating Earth- and satellite-originated radio frequency interference (RFI). It is found that the RFI level at frequencies lower than a few MHz is below the receiver noise floor.

Read this paper on arXiv…

J. Yan, J. Wu, L. Gurvits, et. al.
Tue, 20 Dec 22
49/97

Comments: Accepted for publication in Experimental Astronomy; 22 pages, 11 figures

Applications of the source-frequency phase-referencing technique for ngEHT observations [IMA]

http://arxiv.org/abs/2212.08994


The source-frequency phase-referencing (SFPR) technique has been demonstrated to have great advantages for mm-VLBI observations. By implementing simultaneous multi-frequency receiving systems on the next-generation Event Horizon Telescope (ngEHT) antennas, it is feasible to carry out a frequency phase transfer (FPT), which can calibrate the non-dispersive propagation errors and significantly increase the phase coherence in the visibility data. Such an increase offers an efficient approach for the detection of weak sources or structures. SFPR also enables high-precision astrometry, including core-shift measurements up to sub-mm wavelengths for Sgr A*, M87*, and other sources. We also briefly discuss the technical and scheduling considerations for future SFPR observations with the ngEHT.
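Schematically (the relation below is the standard FPT scaling rather than anything specified in the abstract), the frequency phase transfer exploits the fact that non-dispersive propagation errors scale linearly with frequency, so the phase solution obtained at the low frequency can be scaled and subtracted at the high frequency,

$$ \phi_{\rm high}^{\rm cal} = \phi_{\rm high} - \frac{\nu_{\rm high}}{\nu_{\rm low}}\,\phi_{\rm low}, $$

leaving mainly dispersive and instrumental terms, which SFPR then removes with interleaved observations of a reference source.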

Read this paper on arXiv…

W. Jiang, G. Zhao, Z. Shen, et. al.
Tue, 20 Dec 22
63/97

Comments: 9 pages, in the special issue for ngEHT

A study on Performance Boost of a 17~m class Cherenkov telescope with a SiPM-based camera [IMA]

http://arxiv.org/abs/2212.09456


The current generation of Imaging Atmospheric Cherenkov Telescopes (IACTs), comprising major installations such as the MAGIC telescopes, H.E.S.S., and VERITAS, is classified as the 3$^{\mathrm{rd}}$ generation of such instruments. These telescopes use multipixel cameras composed of thousands of photomultiplier tubes (PMTs). The total light throughput of such instruments depends, besides the PMT photon detection efficiency (PDE), on the mirror dish reflectivity and the light absorption by the camera window. The supremacy of PMTs is currently being challenged by photon sensors rapidly spreading in popularity, the silicon photomultipliers (SiPMs), which are becoming a valid alternative thanks to their high PDE, low operating voltage, and flexibility in installation. In this report, we investigate the performance of an existing 3$^{\mathrm{rd}}$-generation IACT array (taking MAGIC as an example) in which the PMTs would be replaced with SiPMs, with minimal further hardware intervention. This would mean that the other systems of the telescope responsible for the light collection, in particular the optics, would remain the same, and only the electronics that steer the different photodetectors would be modified. We find an increase in sensitivity of up to a factor of 2 for energies below 200~GeV. Interestingly, we also find that the stronger sensitivity of SiPMs in the red part of the spectrum, a source of background for IACTs, does not affect this conclusion.

Read this paper on arXiv…

C. Arcaro, M. Doro, J. Sitarek, et. al.
Tue, 20 Dec 22
66/97

Comments: N/A

Can the gravitational effect of Planet X be detected in current-era tracking of the known planets? [EPA]

http://arxiv.org/abs/2212.09594


Using Fisher information matrices, we forecast the uncertainties $\sigma_M$ on the measurement of a “Planet X” at heliocentric distance $d_X$ via its tidal gravitational field’s action on the known planets. Using planetary measurements currently in hand, including ranging from the Juno, Cassini, and Mars-orbiting spacecraft, we forecast a median uncertainty (over all possible sky positions) of $\sigma_M=0.22\,M_\oplus\,(d_X/400\,\textrm{AU})^3$. A definitive $(5\sigma)$ detection of a $5M_\oplus$ Planet X at $d_X=400$ AU should be possible over the full sky but over only 5% of the sky at $d_X=800$ AU. The gravity of an undiscovered Earth- or Mars-mass object should be detectable over 90% of the sky to a distance of 260 or 120 AU, respectively. Upcoming Mars ranging improves these limits only slightly. We also investigate the power of high-precision astrometry of $\approx8000$ Jovian Trojans over the 2023–2035 period from the upcoming Legacy Survey of Space and Time (LSST). We find that the dominant systematic errors in optical Trojan astrometry (photocenter motion, non-gravitational forces, and differential chromatic refraction) can be solved internally with minimal loss of information. The Trojan data allow useful cross-checks with Juno/Cassini/Mars ranging, but do not significantly improve the best-achievable $\sigma_M$ values until they are $\gtrsim10\times$ more accurate than expected from LSST. The ultimate limiting factor in searches for a Planet X tidal field is confusion with the tidal field created by the fluctuating quadrupole moment of the Kuiper Belt as its members orbit. This background will not, however, become the dominant source of Planet X uncertainty until the data get substantially better than they are today.
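As a worked check of the quoted scaling, the median forecast uncertainty at $d_X = 800$ AU is

$$ \sigma_M = 0.22\,M_\oplus \left(\frac{800}{400}\right)^{3} \approx 1.8\,M_\oplus, $$

which exceeds the $1\,M_\oplus$ required for a $5\sigma$ detection of a $5M_\oplus$ planet, consistent with such a detection being possible over only about 5% of the sky at that distance.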

Read this paper on arXiv…

D. Gomes, Z. Murray, R. Gomes, et. al.
Tue, 20 Dec 22
67/97

Comments: To be submitted to Planetary Science Journal

The VISCACHA survey — VI. Dimensional study of the structure of 82 star clusters in the Magellanic Clouds [SSA]

http://arxiv.org/abs/2212.09685


We present a study of the internal structure of 82 star clusters located at the outer regions of the Large Magellanic Cloud and the Small Magellanic Cloud using data from the VISCACHA Survey. Through the construction of the minimum spanning tree, which analyzes the relative positions of stars within a given cluster, it was possible to characterize the internal structure and explore the fractal or subclustered distribution of each cluster. We computed the parameters m (the average length of the connected segments, normalized by the area), s (the mean separation between points, in units of the cluster radius), and Q (the ratio of these components). These parameters are useful to distinguish between radial, homogeneous, and substructured distributions of stars. The dependence of these parameters on different characteristics of the clusters, such as their ages and spatial distribution, was also studied. We found that most of the studied clusters present a homogeneous stellar distribution or a distribution with a radial concentration. Our results are consistent with the models, suggesting that more dynamically evolved clusters seem to have larger Q values, confirming previous results from numerical simulations. There also seems to be a correlation between the internal structure of the clusters and their galactocentric distances, in the sense that, for both galaxies, the more distant clusters have larger Q values. We also paid particular attention to the effects of contamination by non-member field stars and its consequences, finding that field-star decontamination is crucial for these kinds of studies.
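A minimal sketch of how Q can be computed from projected star positions with a minimum spanning tree (the normalisations below follow the usual Cartwright & Whitworth-style conventions and may differ in detail from those adopted in the paper):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def q_parameter(xy):
    """Return (m, s, Q) for an (N, 2) array of projected star positions."""
    n = len(xy)
    d = squareform(pdist(xy))
    mst = minimum_spanning_tree(d).toarray()
    edges = mst[mst > 0]                          # the N-1 MST edge lengths

    centre = xy.mean(axis=0)
    r_cl = np.linalg.norm(xy - centre, axis=1).max()   # cluster radius
    area = np.pi * r_cl**2

    m = edges.mean() / np.sqrt(area / n)          # mean MST edge length, normalised
    s = pdist(xy).mean() / r_cl                   # mean separation in cluster radii
    return m, s, m / s

m, s, q = q_parameter(np.random.uniform(size=(200, 2)))
print(q)   # low Q suggests substructure, high Q a centrally concentrated distribution
```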

Read this paper on arXiv…

M. Rodríguez, C. Feinstein, G. Baume, et. al.
Tue, 20 Dec 22
69/97

Comments: 9 pages, 11 figures, accepted for publication in MNRAS

Wide-scale Monitoring of Satellite Lifetimes: Pitfalls and a Benchmark Dataset [EPA]

http://arxiv.org/abs/2212.08662


An important task within the broader goal of Space Situational Awareness (SSA) is to observe changes in the orbits of satellites, where the data spans thousands of objects over long time scales (decades). The Two-Line Element (TLE) data provided by the North American Aerospace Defense Command is the most comprehensive and widely-available dataset cataloguing the orbits of satellites. This makes it a highly attractive data source on which to perform this observation. However, when attempting to infer changes in satellite behaviour from TLE data, there are a number of potential pitfalls. These mostly relate to specific features of the TLE data which are not always clearly documented in the data sources or popular software packages for manipulating them. These quirks produce a particularly hazardous data type for researchers from adjacent disciplines (such as anomaly detection or machine learning). We highlight these features of TLE data and the resulting pitfalls in order to save future researchers from falling into them. A separate, significant issue is that existing contributions to manoeuvre detection from TLE data evaluate their algorithms on different satellites, making comparison between these methods difficult. Moreover, the ground-truth in these datasets is often poor quality, sometimes being based on subjective human assessment. We therefore release and describe in-depth an open, curated, benchmark dataset containing TLE data for 15 satellites alongside high-quality ground-truth manoeuvre timestamps.
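As a small, concrete example of the conventions involved (illustrative only, not part of the released benchmark tooling), even reading the epoch out of TLE line 1 requires knowing the two-digit-year pivot, one of the quirks alluded to above:

```python
def tle_epoch(line1):
    """Decode the epoch field (columns 19-32) of TLE line 1.

    The two-digit year convention (57-99 => 1900s, 00-56 => 2000s) is one of the
    details that makes TLE data easy to misinterpret.
    """
    yy = int(line1[18:20])
    year = 1900 + yy if yy >= 57 else 2000 + yy
    day_of_year = float(line1[20:32])      # fractional day of year, 1.0 = Jan 1, 00:00 UTC
    return year, day_of_year

# Example ISS-style line 1 (values are illustrative, not current elements)
example_line1 = "1 25544U 98067A   20344.91667824  .00001264  00000-0  29621-4 0  9990"
print(tle_epoch(example_line1))            # -> (2020, 344.91667824)
```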

Read this paper on arXiv…

D. Shorten, Y. Yang, J. Maclean, et. al.
Tue, 20 Dec 22
78/97

Comments: N/A

Reversible time-step adaptation for the integration of few-body systems [IMA]

http://arxiv.org/abs/2212.09745


The time-step criterion plays a crucial role in direct N-body codes. If not chosen carefully, it will cause a secular drift in the energy error. Shared, adaptive time-step criteria commonly adopt the minimum pairwise time step, which suffers from discontinuities in the time evolution of the time step. This has a large impact on the functioning of time-step symmetrisation algorithms. We provide new demonstrations of previous findings that a smooth and weighted average over all pairwise time steps in the N-body system improves the level of energy conservation. Furthermore, we compare the performance of 27 different time-step criteria, by considering 3 methods for weighting time steps and 9 symmetrisation methods. We present performance tests for strongly chaotic few-body systems, including unstable triples, giant planets in a resonant chain, and the current Solar System. We find that the harmonic symmetrisation methods (methods A3 and B3 in our notation) are the most robust, in the sense that the symmetrised time step remains close to the time-step function. Furthermore, based on our Solar System experiment, we find that our new weighting method based on direct pairwise averaging (method W2 in our notation) is slightly preferred over the other methods.
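The abstract does not give the weighting formulae themselves, but the idea of replacing the discontinuous minimum pairwise time step with a smooth weighted average can be sketched as follows (the weights here are illustrative and are not the paper's W2 or A3/B3 definitions):

```python
import numpy as np

def pairwise_timesteps(pos, vel, eta=0.01):
    """Illustrative pairwise time steps tau_ij = eta * |r_ij| / |v_ij|."""
    taus = []
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(pos[i] - pos[j])
            v = np.linalg.norm(vel[i] - vel[j]) + 1e-30
            taus.append(eta * r / v)
    return np.array(taus)

def shared_timestep(taus, p=4.0):
    """Smooth weighted average that emphasises the smallest pairwise steps.

    As p grows the result approaches the (discontinuous) minimum; a finite p keeps
    the shared step a smooth function of the phase-space coordinates.
    """
    w = taus**(-p)
    return (w @ taus) / w.sum()

pos = np.random.normal(size=(4, 3))
vel = np.random.normal(size=(4, 3))
print(shared_timestep(pairwise_timesteps(pos, vel)))
```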

Read this paper on arXiv…

T. Boekholt, T. Vaillant and A. Correia
Tue, 20 Dec 22
82/97

Comments: Accepted by MNRAS. 13 pages, 6 figures

Exploring the Universe via the Wide, Deep Near-infrared Imaging ESO Public Survey SHARKS [IMA]

http://arxiv.org/abs/2212.09471


The ESO Public Survey Southern H-ATLAS Regions Ks-band Survey (SHARKS) comprises 300 square degrees of deep imaging at 2.2 microns (the Ks band) with the VISTA InfraRed CAMera (VIRCAM) at the 4-metre Visible and Infrared Survey Telescope for Astronomy (VISTA). The first data release of the survey, comprising 5% of the data, was published via the ESO database on 31 January 2022. We describe the strategy and status of the first data release and present the data products. We discuss briefly different scientific areas being explored with the SHARKS data and conclude with an outline of planned data releases.

Read this paper on arXiv…

H. Dannerbauer, A. Carnero, N. Cross, et. al.
Tue, 20 Dec 22
83/97

Comments: Published in the ESO messenger #187: this https URL

Data mining techniques on astronomical spectra data. I : Clustering Analysis [IMA]

http://arxiv.org/abs/2212.08419


Clustering is an effective tool for astronomical spectral analysis, to mine clustering patterns among data. With the implementation of large sky surveys, many clustering methods have been applied to tackle spectroscopic and photometric data effectively and automatically. Meanwhile, the performance of clustering methods under different data characteristics varies greatly. With the aim of summarizing astronomical spectral clustering algorithms and laying the foundation for further research, this work gives a review of clustering methods applied to astronomical spectra data in three parts. First, many clustering methods for astronomical spectra are investigated and analysed theoretically, looking at algorithmic ideas, applications, and features. Secondly, experiments are carried out on unified datasets constructed using three criteria (spectra data type, spectra quality, and data volume) to compare the performance of typical algorithms; spectra data are selected from the Large Sky Area Multi-Object Fibre Spectroscopic Telescope (LAMOST) survey and Sloan Digital Sky Survey (SDSS). Finally, source codes of the comparison clustering algorithms and manuals for usage and improvement are provided on GitHub.
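As with the classification companion paper, a minimal example of the kind of baseline clustering run compared in such experiments (a sketch with random data standing in for LAMOST/SDSS spectra, not the authors' released code):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# X: (n_spectra, n_features) flux matrix; random data stands in for real spectra
X = np.random.default_rng(1).normal(size=(500, 100))
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)
print(np.bincount(labels))   # cluster occupancies
```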

Read this paper on arXiv…

H. Yang, C. Shi, J. Cai, et. al.
Mon, 19 Dec 22
5/62

Comments: 28 pages, 53 figures

Global Extinction: Combined Gemini North and South GMOS Photometry Relative to the Gaia Catalog, and Long-Term Atmospheric Change [IMA]

http://arxiv.org/abs/2212.08093


Effects of long-term atmospheric change were searched for in photometry from the archival data of the twin Gemini North and South Multi-Object Spectrographs (GMOS-N and GMOS-S). The whole GMOS imaging database, spanning 2003 to 2021, was compared against the all-sky Gaia object catalog, yielding ~10^6 Sloan r’-filter samples. These were combined with reported sky and meteorological conditions and compared with a simple model of the atmosphere plus cloud together with simulated throughputs. One episode of exceptionally strong extinction in 2009 is seen, as is a trend (similar at both sites) of about 2 mmag of worsening attenuation per decade. This is consistent with solar-radiance transmissivity records going back over six decades, aerosol density measurements, and a rise in global air temperature of more than 0.2 deg C per decade, which has implications for the calibration of historic datasets and future surveys.

Read this paper on arXiv…

E. Steinbring
Mon, 19 Dec 22
9/62

Comments: 12 pages, 13 figures, to appear in PASP

Temperature dependence of radiation damage annealing of Silicon Photomultipliers [IMA]

http://arxiv.org/abs/2212.08474


The last decade has seen increasing use of silicon photomultipliers (SiPMs) instead of photomultiplier tubes (PMTs). This is due to various advantages of the former over the latter, such as smaller size, lower operating voltage, higher detection efficiency, insensitivity to magnetic fields, and mechanical robustness to launch vibrations. All these features make SiPMs ideal for use on space-based experiments, where the detectors are required to be compact, lightweight, and capable of surviving launch conditions. A downside of this novel type of detector in space conditions is its susceptibility to radiation damage. In order to understand the lifetime of SiPMs in space, both the damage sustained due to radiation and the subsequent recovery, or annealing, from this damage have to be studied. Here we present these studies for three different types of SiPMs from the Hamamatsu S13360 series. We present both their behaviour after sustaining radiation equivalent to 2 years in low Earth orbit of a typical mission and the recovery of these detectors while stored in different conditions. The storage conditions varied in temperature as well as in operating voltage. The study found that the annealing depends significantly on the temperature of the detectors, with those stored at high temperatures recovering significantly faster and recovering closer to their original performance. Additionally, no significant effect from a reasonable bias voltage on the annealing was observed. Finally, the annealing rate as a function of temperature is presented, along with various operating strategies for the future SiPM-based astrophysical detector POLAR-2 as well as for future SiPM-based space-borne missions.

Read this paper on arXiv…

N. Angelis, M. Kole, F. Cadoux, et. al.
Mon, 19 Dec 22
27/62

Comments: 33 pages, 24 figures

Microcanonical Hamiltonian Monte Carlo [CL]

http://arxiv.org/abs/2212.08549


We develop Microcanonical Hamiltonian Monte Carlo (MCHMC), a class of models that follow fixed-energy Hamiltonian dynamics, in contrast to Hamiltonian Monte Carlo (HMC), which follows the canonical distribution spanning different energy levels. MCHMC tunes the Hamiltonian function such that the marginal of the uniform distribution on the constant-energy surface over the momentum variables gives the desired target distribution. We show that MCHMC requires occasional energy-conserving billiard-like momentum bounces for ergodicity, analogous to momentum resampling in HMC. We generalize the concept of bounces to a continuous version with partial direction-preserving bounces at every step, which gives an energy-conserving underdamped Langevin-like dynamics with non-Gaussian noise (MCLMC). MCHMC and MCLMC exhibit favorable scalings with condition number and dimensionality. We develop an efficient hyperparameter tuning scheme that achieves high performance and consistently outperforms NUTS HMC on several standard benchmark problems, in some cases by more than an order of magnitude.
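The tuning requirement stated above can be written compactly: the Hamiltonian $H(x,p)$ is chosen so that the microcanonical (fixed-energy) ensemble, marginalised over the momenta, reproduces the target density,

$$ p(x) \;\propto\; \int \delta\big(H(x,p) - E\big)\, dp \;=\; p_{\rm target}(x), $$

in contrast to HMC, where samples are drawn from the canonical ensemble $\propto e^{-H(x,p)}$ spanning many energy levels.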

Read this paper on arXiv…

J. Robnik, G. Luca, E. Silverstein, et. al.
Mon, 19 Dec 22
28/62

Comments: 32 pages, 10 figures

Micro-arcsecond Astrometry Technology: Detector and Field Distortion Calibration [IMA]

http://arxiv.org/abs/2212.08129


Microarcsecond (uas) astrometry provides an indispensable way to survey earth-like exoplanets and fully characterize the orbits and masses for assessing their habitability. Highly accurate astrometric measurements can also probe the nature of dark matter, the primordial universe, black holes, and neutron stars for new astrophysics. This paper presents technology for calibrating array detectors and field distortions to achieve narrow field uas astrometry using a 6 m telescope with a focal plane array detector.

Read this paper on arXiv…

M. Shao, C. Zhai, B. Nemati, et. al.
Mon, 19 Dec 22
29/62

Comments: 16 pages, 10 figures, to be submitted to PASP

Morphological Classification of Radio Galaxies with wGAN-supported Augmentation [IMA]

http://arxiv.org/abs/2212.08504


Machine learning techniques that perform morphological classification of astronomical sources often suffer from a scarcity of labelled training data. Here, we focus on the case of supervised deep learning models for the morphological classification of radio galaxies, which is particularly topical for the forthcoming large radio surveys. We demonstrate the use of generative models, specifically Wasserstein GANs (wGANs), to generate data for different classes of radio galaxies. Further, we study the impact of augmenting the training data with images from our wGAN on three different classification architectures. We find that this technique makes it possible to improve models for the morphological classification of radio galaxies. A simple Fully Connected Neural Network (FCN) benefits most from including generated images into the training set, with a considerable improvement of its classification accuracy. In addition, we find it is more difficult to improve complex classifiers. The classification performance of a Convolutional Neural Network (CNN) can be improved slightly. However, this is not the case for a Vision Transformer (ViT).
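A minimal sketch of the augmentation step itself (the wGAN and the classifiers are out of scope here; `generator` is a hypothetical pre-trained generator, and the tensor shapes are assumptions):

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

def augment_with_gan(real_ds, generator, n_fake, label, latent_dim=128):
    """Append n_fake generator samples of a given class to the real training set."""
    with torch.no_grad():
        z = torch.randn(n_fake, latent_dim)
        fake_images = generator(z)                      # expected shape (n_fake, C, H, W)
    fake_ds = TensorDataset(fake_images,
                            torch.full((n_fake,), label, dtype=torch.long))
    return ConcatDataset([real_ds, fake_ds])

# Hypothetical usage, once `train_ds` and `generator` exist:
# train_loader = DataLoader(augment_with_gan(train_ds, generator, 1000, label=2),
#                           batch_size=64, shuffle=True)
```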

Read this paper on arXiv…

L. Rustige, J. Kummer, F. Griese, et. al.
Mon, 19 Dec 22
43/62

Comments: 12 pages, 7+2 figures, 1+2 tables. Submitted, comments welcome

Small Spacecraft for Global Greenhouse Gas Monitoring [CL]

http://arxiv.org/abs/2212.07680


This work is devoted to analysing the capabilities of a constellation of small spacecraft, developed using CubeSat technology, to address promising Earth remote sensing problems in the area of greenhouse gas emissions. This paper presents the scientific needs for such tasks, followed by descriptions and discussions of micro-technology applications both in the small satellite platform design and in the payload design. An overview of comparable spacecraft is given. The design of a new spacecraft is introduced for determining the oxygen and carbon dioxide concentrations in the air column along the spacecraft's line of sight when it is illuminated by reflected sunlight. A mock-up of the device for greenhouse gas remote sensing, a Fourier Transform Infrared (FTIR) spectroradiometer, was made and integrated into the small spacecraft design. The results of long-term measurements of greenhouse gas concentrations using the developed Fourier spectrometer mock-up are presented.

Read this paper on arXiv…

V. Mayorova, A. Morozov, I. Golyak, et. al.
Fri, 16 Dec 22
1/72

Comments: 8 pages, 4 figures, 2 tables

Calculation of the High-Energy Neutron Flux for Anticipating Errors and Recovery Techniques in Exascale Supercomputer Centres [CL]

http://arxiv.org/abs/2212.07770


The age of exascale computing has arrived, and the risks associated with neutron and other atmospheric radiation are becoming more critical as the computing power increases; hence, the expected Mean Time Between Failures will be reduced because of this radiation. In this work, a new and detailed calculation of the neutron flux for energies above 50 MeV is presented. This has been done by using state-of-the-art Monte Carlo astroparticle techniques and including real atmospheric profiles at each of the 23 upcoming exascale supercomputing facilities. The atmospheric impact on the flux and its seasonal variations were observed and characterised, and the barometric coefficient for high-energy neutrons at each site was obtained. With these coefficients, potential risks of errors associated with an increase in the flux of energetic neutrons, such as the occurrence of single event upsets or transients, and the corresponding failure-in-time rates, can be anticipated simply from the atmospheric pressure before the assignment of resources to critical tasks at each exascale facility. For clarity, examples of how the failure rate is affected by cosmic rays are included, so that administrators can better anticipate which more or less restrictive actions to take to overcome errors.
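As an illustration of how such a barometric coefficient would be used operationally (the coefficient value and reference pressure below are placeholders, not the paper's fitted values), the expected flux modulation, and hence the scaling of the failure-in-time rate, follows directly from the measured pressure:

```python
import math

def neutron_flux_scale(pressure_hpa, ref_pressure_hpa, beta_per_hpa):
    """Relative high-energy neutron flux from the barometric correction:
    flux / flux_ref = exp(-beta * (P - P_ref)).  Higher pressure means more
    atmosphere overhead and therefore a lower flux at ground level.
    """
    return math.exp(-beta_per_hpa * (pressure_hpa - ref_pressure_hpa))

# Placeholder coefficient: a 10 hPa drop below the site reference pressure
scale = neutron_flux_scale(1003.0, 1013.0, beta_per_hpa=0.007)
print(f"expected flux (and SEU/FIT rate) scaling: {scale:.2f}x")
```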

Read this paper on arXiv…

H. Asorey and R. Mayo-García
Fri, 16 Dec 22
3/72

Comments: 23 pages, 6 figures, 2 tables

Dark Matter stimulated neutrinoless double beta decay [CL]

http://arxiv.org/abs/2212.07832


Nuclei that are unstable with respect to double beta decay are investigated in this work for a novel Dark Matter (DM) direct detection approach. In particular, the diagram responsible for neutrinoless double beta decay is considered for a possible detection technique in which a Majorana DM fermion inelastically scatters on a double-beta-unstable nucleus, stimulating its decay. The exothermic nature of the stimulated double beta decay would also allow the direct detection of a light DM fermion, a class of DM candidates that are difficult or impossible to investigate with the traditional elastic scattering techniques. The expected signal distribution for different DM masses and the upper limits on the nucleus scattering cross sections are shown and compared with the existing data for the case of the $^{136}$Xe nucleus.

Read this paper on arXiv…

F. Nozzoli and C. Cernetti
Fri, 16 Dec 22
4/72

Comments: 4 pages, 4 figures

Modeling Results and Baseline Design for an RF-SoC-Based Readout System for Microwave Kinetic Inductance Detectors [IMA]

http://arxiv.org/abs/2212.07938


Building upon existing signal processing techniques and open-source software, this paper presents a baseline design for an RF System-on-Chip Frequency Division Multiplexed readout for a spatio-spectral focal plane instrument based on low temperature detectors. A trade-off analysis of different FPGA carrier boards is presented in an attempt to find an optimum next-generation solution for reading out larger arrays of Microwave Kinetic Inductance Detectors (MKIDs). The ZCU111 RF SoC FPGA board from Xilinx was selected, and it is shown how this integrated system promises to increase the number of pixels that can be read out (per board) which enables a reduction in the readout cost per pixel, the mass and volume, and power consumption, all of which are important in making MKID instruments more feasible for both ground-based and space-based astrophysics. The on-chip logic capacity is shown to form a primary constraint on the number of MKIDs which can be read, channelised, and processed with this new system. As such, novel signal processing techniques are analysed, including Digitally Down Converted (DDC)-corrected sub-maximally decimated sampling, in an effort to reduce logic requirements without compromising signal to noise ratio. It is also shown how combining the ZCU111 board with a secondary FPGA board will allow all 8 ADCs and 8 DACs to be utilised, providing enough bandwidth to read up to 8,000 MKIDs per board-set, an eight-fold improvement over the state-of-the-art, and important in pursuing 100,000 pixel arrays. Finally, the feasibility of extending the operational frequency range of MKIDs to the 5 – 10 GHz regime (or possibly beyond) is investigated, and some benefits and consequences of doing so are presented.

Read this paper on arXiv…

C. Bracken, E. Baldwin, G. Ulbricht, et. al.
Fri, 16 Dec 22
14/72

Comments: N/A

Passive bistatic radar probes of the subsurface on airless bodies using high energy cosmic rays via the Askaryan effect [EPA]

http://arxiv.org/abs/2212.07689


We present a new technique to perform passive bistatic subsurface radar probes on airless planetary bodies. This technique uses the naturally occurring radio impulses generated when high-energy cosmic rays impact the body’s surface. As in traditional radar sounding, the downward-beamed radio emission from each individual cosmic ray impact will reflect off subsurface dielectric contrasts and propagate back up to the surface to be detected. We refer to this technique as Askaryan radar after the fundamental physics process, the Askaryan effect, that produces this radio emission. This technique can be performed from an orbiting satellite, or from a surface lander, but since the radio emission is generated beneath the surface, an Askaryan radar can completely bypass the effects of surface clutter and backscatter typically associated with surface-penetrating radar. We present the background theory of Askaryan subsurface radar and show results from both finite-difference time-domain (FDTD) and Monte Carlo simulations that confirm that this technique is a promising planetary radar sounding method, producing detectable signals for realistic planetary science applications.

Read this paper on arXiv…

R. Prechelt, E. Costello, R. Ghent, et. al.
Fri, 16 Dec 22
23/72

Comments: 18 pages, 5 figures

TrExoLiSTS: Transiting Exoplanets List of Space Telescope Spectroscopy [EPA]

http://arxiv.org/abs/2212.07966


We present the STScI WFC3 project webpage, Transiting Exoplanets List of Space Telescope Spectroscopy (TrExoLiSTS). It tabulates existing observations of transiting exoplanet atmospheres, available in the MAST archive, made with HST WFC3 using the stare or spatial scan mode. A parallel page is available for all instruments aboard JWST using the spectral Time Series Observation (TSO) mode. The web pages include observations obtained during primary transits, secondary eclipses, and phase curves. TrExoLiSTS facilitates the preparation of proposals that are highly complementary to existing programs in terms of targets and wavelength coverage, and reduces duplication and redundant effort. Quality references for HST WFC3 visits taken more than 1.5 years ago are made available, including diagrams of the direct image, the white-light curve, and the drift of the spectral time series across the detector. Future improvements to the webpage will include expanding the program query to other HST instruments and providing quality references for JWST visits.

Read this paper on arXiv…

N. Nikolov, A. Kovacs and C. Martlin
Fri, 16 Dec 22
30/72

Comments: Accepted for publication in Research Notes of the AAS