Agile Systems Engineering for sub-CubeSat scale spacecraft [IMA]

http://arxiv.org/abs/2210.10653


Space systems miniaturization has grown increasingly popular over the past decades, with over 1600 CubeSats and 300 sub-CubeSat-sized spacecraft estimated to have been launched since 1998. This trend towards decreasing size enables the execution of unprecedented missions in terms of quantity, cost and development time, allowing for massively distributed satellite networks and rapid prototyping of space equipment. Pocket-sized spacecraft can be designed in-house in less than a year and can weigh less than 10 g, reducing the considerable effort typically associated with orbital flight. However, while Systems Engineering methodologies have been proposed for missions down to CubeSat size, there is still a gap regarding design approaches for picosatellites and smaller spacecraft, which can exploit their potential for iterative and accelerated development. In this paper, we propose a Systems Engineering methodology that abstains from the classic waterfall-like approach in favor of agile practices, focusing on available capabilities, delivery of features and design “sprints”. Our method, originating from the software engineering disciplines, allows quick adaptation to imposed constraints, changes to requirements and unexpected events (e.g. chip shortages or delays) by making the design flexible to well-defined modifications. Two femtosatellite missions, currently under development and due to be launched in 2023, are used as case studies for our approach, showing how miniature spacecraft can be designed, developed and qualified from scratch in 6 months or less. We claim that the proposed method can simultaneously increase confidence in the design and decrease turnaround time for extremely small satellites, allowing unprecedented missions to take shape without the overhead traditionally associated with sending cutting-edge hardware to space.

Read this paper on arXiv…

K. Kanavouras, A. Hein and M. Sachidanand
Thu, 20 Oct 22
10/74

Comments: 15 pages, 6 figures, 3 tables, presented at the 73rd International Astronautical Congress

A Brief Analysis of the Apollo Guidance Computer [CL]

http://arxiv.org/abs/2201.08230


The Apollo Guidance Computer (AGC) was designed with the sole purpose of providing navigational guidance and spacecraft control during the Apollo program throughout the 1960s and early 1970s. The AGC sported 72 KB of ROM, 4 KB of RAM, and a whopping 14,245 FLOPS, roughly 30 million times fewer than the computer on which this report is being written. These limitations are what make the AGC so interesting, as its programmers had to ration each individual word of memory due to the bulk of memory technology at the time. Despite these limitations (or perhaps because of them), the AGC was highly optimized and arguably the most advanced computer of its time, as its computational power was only matched in the late 1970s by computers like the Apple II. It is safe to say that the AGC had no intended market; it was designed explicitly to enhance control of the Apollo Command Module and Apollo Lunar Module. The AGC was not entirely internal to NASA, however: it was designed at MIT’s Instrumentation Laboratory and manufactured by Raytheon, a weapons and defense contractor.

Read this paper on arXiv…

C. Averill
Fri, 21 Jan 22
27/60

Comments: N/A

Z-checker: A Framework for Assessing Lossy Compression of Scientific Data [CL]

http://arxiv.org/abs/1707.09320


Because of the vast volume of data being produced by today’s scientific simulations and experiments, lossy data compressors that allow user-controlled loss of accuracy during compression are a relevant solution for significantly reducing the data size. However, lossy compressor developers and users are missing a tool to explore the features of scientific datasets and to understand, in a systematic and reliable way, how the data are altered by compression. To address this gap, we have designed and implemented a generic framework called Z-checker. On the one hand, Z-checker combines a battery of data analysis components for data compression. On the other hand, Z-checker is implemented as an open-source community tool to which users and developers can contribute and add new analysis components based on their additional analysis demands. In this paper, we present a survey of existing lossy compressors. We then describe the design of the Z-checker framework, in which we integrated evaluation metrics proposed in prior work as well as other analysis tools. Specifically, for lossy compressor developers, Z-checker can be used to characterize critical properties of any dataset in order to improve compression strategies. For lossy compression users, Z-checker can assess compression quality, providing various global distortion analyses that compare the original data with the decompressed data, as well as statistical analysis of the compression error. Z-checker can perform the analysis with either coarse or fine granularity, such that users and developers can select the best-fit, adaptive compressors for different parts of the dataset. Z-checker features a visualization interface displaying all analysis results in addition to some basic views of the datasets, such as time series. To the best of our knowledge, Z-checker is the first tool designed to comprehensively assess lossy compression for scientific datasets.
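
For illustration, the sketch below (not Z-checker’s actual API; the function and metric names are ours) shows the kind of distortion metrics such a framework reports when comparing an original array against its decompressed counterpart.

```python
# Illustrative sketch, not the Z-checker API: typical lossy-compression
# quality metrics computed between original and decompressed arrays.
import numpy as np

def distortion_report(original: np.ndarray, decompressed: np.ndarray,
                      compressed_bytes: int) -> dict:
    """Compute a few common lossy-compression quality metrics."""
    diff = original.astype(np.float64) - decompressed.astype(np.float64)
    value_range = float(original.max() - original.min())
    max_abs_err = float(np.abs(diff).max())
    mse = float(np.mean(diff ** 2))
    # Peak signal-to-noise ratio, taking the data's value range as the peak.
    psnr = 20 * np.log10(value_range) - 10 * np.log10(mse) if mse > 0 else np.inf
    return {
        "max_abs_error": max_abs_err,
        "max_rel_error": max_abs_err / value_range,
        "psnr_db": psnr,
        "compression_ratio": original.nbytes / compressed_bytes,
    }

# Example usage with synthetic data standing in for a simulation field:
field = np.sin(np.linspace(0, 8 * np.pi, 1_000_000)).astype(np.float32)
lossy = np.round(field, 2)  # stand-in for a lossy codec's reconstruction
print(distortion_report(field, lossy, compressed_bytes=field.nbytes // 10))
```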

Read this paper on arXiv…

D. Tao, S. Di, H. Guo, et al.
Mon, 31 Jul 17
22/57

Comments: Submitted to The International Journal of High Performance Computing Applications (IJHPCA), first revision, 17 pages, 13 figures, double column

Online characterization of planetary surfaces: PlanetServer, an open-source analysis and visualization tool [EPA]

http://arxiv.org/abs/1701.01726


In this paper we present the new PlanetServer, a set of tools comprising a web Geographic Information System (GIS) and a recently developed Python API capable of analyzing a wide variety of hyperspectral data from different planetary bodies. The research case studies focus on (1) the characterization of different hydrosilicates, such as chlorites, prehnites and kaolinites, in the Nili Fossae area on Mars, and (2) the characterization of ice (CO2 and H2O ice) in two different areas of Mars where ice was reported in a nearly pure state. Results show positive outcomes in hyperspectral analysis and visualization compared to previous literature; we therefore suggest using PlanetServer for such investigations.
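
As an illustration of the kind of analysis involved (this is not the PlanetServer API; the function, cube layout and wavelengths are assumptions for the example), a band-depth index of the sort used to map hydrated minerals can be computed from a hyperspectral cube as follows.

```python
# Hypothetical sketch, not the PlanetServer API: a band-depth index of the
# kind used to map hydrated minerals in hyperspectral cubes.
import numpy as np

def band_depth(cube: np.ndarray, wavelengths: np.ndarray,
               left_um: float, center_um: float, right_um: float) -> np.ndarray:
    """Band depth at `center_um` relative to a linear continuum between
    `left_um` and `right_um`. `cube` has shape (rows, cols, bands)."""
    def band(um):
        return cube[:, :, np.argmin(np.abs(wavelengths - um))]
    r_left, r_center, r_right = band(left_um), band(center_um), band(right_um)
    # Linear continuum interpolated at the absorption center.
    t = (center_um - left_um) / (right_um - left_um)
    continuum = (1 - t) * r_left + t * r_right
    return 1.0 - r_center / continuum

# Example: a 2.3-micron band depth (often diagnostic of Fe/Mg phyllosilicates
# such as chlorite) computed on a synthetic cube.
wl = np.linspace(1.0, 2.6, 240)  # band-center wavelengths in microns
cube = np.random.default_rng(0).uniform(0.2, 0.4, (64, 64, wl.size))
bd23 = band_depth(cube, wl, left_um=2.25, center_um=2.30, right_um=2.35)
```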

Read this paper on arXiv…

R. Figuera, B. Huu, A. Rossi, et al.
Tue, 10 Jan 17
43/75

Comments: N/A

Operations in the era of large distributed telescopes [IMA]

http://arxiv.org/abs/1612.02652


The previous generation of astronomical instruments tended to consist of single receivers at the focal point of one or more physical reflectors. Because of this, most astronomical data sets were small enough that the raw data could easily be downloaded and processed on a single machine.
In the last decade, several large, complex radio astronomy instruments have been built, and the SKA is currently being designed. Many of these instruments have been designed by international teams and, in the case of LOFAR, span an area larger than a single country. Such systems are effectively ICT telescopes, consisting mainly of complex software. As a result, the main operational issues relate to the ICT systems rather than to the telescope hardware. It is nevertheless important that the operations of the ICT systems are coordinated with the traditional operational work. Managing the operations of such telescopes therefore requires an approach that differs significantly from classical telescope operations.
The goal of this session is to bring together members of operational teams responsible for such large-scale ICT telescopes. The gathering will be used to exchange experiences and knowledge between those teams. We also consider such a meeting to be valuable input for future instrumentation, especially the SKA and its regional centres.

Read this paper on arXiv…

Y. Grange, K. Vinsen, J. Guzman, et al.
Fri, 9 Dec 16
56/62

Comments: 4 pages; to be published in ADASS XXVI (held October 16-20, 2016) proceedings. Recording can be found here

Spectral Clustering for Optical Confirmation and Redshift Estimation of X-ray Selected Galaxy Cluster Candidates in the SDSS Stripe 82 [GA]

http://arxiv.org/abs/1607.04193


We develop a galaxy cluster finding algorithm based on the spectral clustering technique to identify optical counterparts and estimate optical redshifts for X-ray selected cluster candidates. As an application, we run our algorithm on a sample of X-ray cluster candidates selected from the third XMM-Newton serendipitous source catalog (3XMM-DR5) that are located in Stripe 82 of the Sloan Digital Sky Survey (SDSS). Our method works on galaxies described in the color-magnitude feature space. We begin by examining 45 galaxy clusters with published spectroscopic redshifts in the range of 0.1 to 0.8, with a median of 0.36. We are able to identify their optical counterparts and estimate their photometric redshifts, which have a typical accuracy of 0.025 and agree with the published values. We then investigate another 40 X-ray cluster candidates (from the same cluster survey) with no redshift information in the literature and find that 12 candidates are galaxy clusters in the redshift range from 0.29 to 0.76, with a median of 0.57. These systems are newly discovered clusters in both X-ray and optical data. Among them, 7 clusters have spectroscopic redshifts for at least one member galaxy.
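
A minimal sketch of the underlying idea, assuming scikit-learn and illustrative feature choices rather than the authors’ actual pipeline: galaxies are grouped in a color-magnitude feature space with spectral clustering, which can isolate a tight red-sequence-like population from the field.

```python
# Minimal sketch, not the authors' pipeline: spectral clustering of galaxies
# in (magnitude, color) feature space. Feature choices are assumptions.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import StandardScaler

def group_galaxies(r_mag: np.ndarray, g_minus_r: np.ndarray, n_groups: int = 3):
    """Cluster galaxies using (r magnitude, g-r color) as features."""
    features = StandardScaler().fit_transform(np.column_stack([r_mag, g_minus_r]))
    model = SpectralClustering(n_clusters=n_groups, affinity="nearest_neighbors",
                               n_neighbors=10, random_state=0)
    return model.fit_predict(features)

# Synthetic example: a tight red-sequence-like group plus a broad field population.
rng = np.random.default_rng(1)
r = np.concatenate([rng.normal(19.5, 0.5, 80), rng.uniform(17, 22, 200)])
gr = np.concatenate([rng.normal(1.4, 0.05, 80), rng.uniform(0.2, 1.8, 200)])
labels = group_galaxies(r, gr)
```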

Read this paper on arXiv…

E. Mahmoud, A. Takey and A. Shoukry
Fri, 15 Jul 16
13/54

Comments: 15 pages, 7 figures, 3 tables, 1 appendix, Accepted by the journal Astronomy and Computing

Explicit Integration with GPU Acceleration for Large Kinetic Networks [CL]

http://arxiv.org/abs/1409.5826


We demonstrate the first implementation of recently developed fast explicit kinetic integration algorithms on modern graphics processing unit (GPU) accelerators. Taking as a generic test case a Type Ia supernova explosion with an extremely stiff thermonuclear network having 150 isotopic species and 1604 reactions, coupled to hydrodynamics using operator splitting, we demonstrate the capability to solve of order 100 realistic kinetic networks in parallel in the same time that standard implicit methods can solve a single such network on a CPU. This orders-of-magnitude decrease in compute time for solving systems of realistic kinetic networks implies that important coupled, multiphysics problems in various scientific and technical fields that were intractable, or could be simulated only with highly schematic kinetic networks, are now computationally feasible.
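
As context, the sketch below shows the explicit asymptotic update that fast explicit kinetic integrators of this kind are built around; it is a plain NumPy illustration with made-up rates and abundances, not the authors’ GPU code.

```python
# Minimal sketch, not the authors' GPU implementation. Each species i evolves
# as dy_i/dt = Fplus_i - k_i * y_i, where Fplus_i collects production fluxes
# and k_i * y_i the destruction terms; the asymptotic form below remains
# stable even when dt * k_i is large (the stiff regime).
import numpy as np

def asymptotic_step(y: np.ndarray, f_plus: np.ndarray, k: np.ndarray,
                    dt: float) -> np.ndarray:
    """One explicit asymptotic update for a stiff kinetic network."""
    return (y + dt * f_plus) / (1.0 + dt * k)

# On a GPU the same elementwise update is applied to many networks in
# parallel (e.g., one network per thread block); here we simply vectorize
# over species with NumPy. Values below are illustrative.
y = np.array([1.0, 0.5, 0.1])        # abundances
f_plus = np.array([0.0, 2.0, 5.0])   # production fluxes
k = np.array([1e6, 1e3, 10.0])       # destruction rate coefficients (stiff)
y_next = asymptotic_step(y, f_plus, k, dt=1e-3)
```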

Read this paper on arXiv…

B. Brock, A. Belt, J. Billings, et al.
Tue, 23 Sep 14
36/60

Comments: 19 pages, 8 figures, submitted to Computational Science and Discovery

Computational Gravitational Dynamics with Modern Numerical Accelerators [IMA]

http://arxiv.org/abs/1409.5474


We review recent optimizations of gravitational $N$-body kernels for running them on graphics processing units (GPUs), both on single hosts and on massively parallel platforms. For each of the two main $N$-body techniques, direct summation and tree codes, we discuss the optimization strategy, which is different for each algorithm. Because both the accuracy and the performance characteristics differ, hybridizing the two algorithms is essential when simulating a large $N$-body system with high-density structures containing few particles and low-density structures containing many particles. We demonstrate how this can be realized by splitting the underlying Hamiltonian, and we subsequently demonstrate the efficiency and accuracy of the hybrid code by simulating a group of 11 merging galaxies with massive black holes in their nuclei.
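
For reference, the sketch below shows the O(N^2) direct-summation force evaluation that GPU N-body kernels optimize; it is a plain NumPy illustration with an assumed softening length and unit system, not one of the reviewed GPU implementations.

```python
# Minimal CPU/NumPy sketch of direct-summation gravity:
# a_i = G * sum_j m_j * (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2)
import numpy as np

def direct_accelerations(pos: np.ndarray, mass: np.ndarray,
                         eps: float = 1e-3, G: float = 1.0) -> np.ndarray:
    """Pairwise gravitational accelerations for positions of shape (N, 3)."""
    dx = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # r_j - r_i, shape (N, N, 3)
    r2 = np.sum(dx ** 2, axis=-1) + eps ** 2             # softened squared distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                        # exclude self-interaction
    return G * np.sum(mass[np.newaxis, :, np.newaxis] * dx *
                      inv_r3[:, :, np.newaxis], axis=1)

# Example: accelerations for a small random particle set of total mass 1.
rng = np.random.default_rng(2)
acc = direct_accelerations(rng.normal(size=(256, 3)), np.full(256, 1.0 / 256))
```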

Read this paper on arXiv…

S. Portegies Zwart and J. Bédorf
Mon, 22 Sep 14
5/47

Comments: Accepted for publication in IEEE Computer