Wavelet Coherence Of Total Solar Irradiance and Atlantic Climate [CL]

http://arxiv.org/abs/2305.02319


The oscillations of climatic parameters of the North Atlantic Ocean play an important role in various events in North America and Europe. Several climatic indices are associated with these oscillations. Long-term Atlantic temperature anomalies are described by the Atlantic Multidecadal Oscillation (AMO). The AMO, also known as the Atlantic Multidecadal Variability (AMV), is the variability of the sea surface temperature (SST) of the North Atlantic Ocean on a timescale of several decades. The AMO is correlated with air temperatures and rainfall over much of the Northern Hemisphere, in particular with the summer climate in North America and Europe. The long-term variations of surface temperature are driven mainly by the cycles of solar activity, represented by variations of the Total Solar Irradiance (TSI). The frequency and amplitude dependences between the TSI and the AMO are analyzed by wavelet coherence of millennial time series from 800 AD to the present. The results of the wavelet coherence are compared with the common solar and climate cycles detected in narrow frequency bands by the method of Partial Fourier Approximation. The long-term coherence between TSI and AMO can help to better understand recent climate change and can improve long-term forecasts.
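
The core technique here — coherence between two time series as a function of frequency — can be sketched with ordinary Welch-style Fourier coherence, a simpler cousin of the wavelet coherence the paper uses. The series below are synthetic stand-ins sharing an artificial ~60-year cycle, not real TSI or AMO data:

```python
import numpy as np

def msc(x, y, nperseg):
    """Magnitude-squared coherence via Welch-style segment averaging."""
    nseg = len(x) // nperseg
    segs = lambda s: s[:nseg * nperseg].reshape(nseg, nperseg)
    X = np.fft.rfft(segs(x) * np.hanning(nperseg), axis=1)
    Y = np.fft.rfft(segs(y) * np.hanning(nperseg), axis=1)
    Sxy = (X * np.conj(Y)).mean(axis=0)
    Sxx = (np.abs(X) ** 2).mean(axis=0)
    Syy = (np.abs(Y) ** 2).mean(axis=0)
    f = np.fft.rfftfreq(nperseg, d=1.0)          # cycles per year
    return f, np.abs(Sxy) ** 2 / (Sxx * Syy)

rng = np.random.default_rng(0)
t = np.arange(1200)                              # ~1200 yearly samples
common = np.sin(2 * np.pi * t / 60.0)            # shared ~60-year cycle
tsi = common + 0.5 * rng.standard_normal(t.size)
amo = common + 0.5 * rng.standard_normal(t.size)

f, C = msc(tsi, amo, nperseg=240)
peak = f[np.argmax(C[1:]) + 1]
print(f"peak coherence {C.max():.2f} at period {1/peak:.0f} yr")
```

Coherence is bounded in [0, 1], and the shared cycle shows up as a peak near a 60-year period; wavelet coherence additionally localizes such peaks in time.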

Read this paper on arXiv…

V. Kolev and Y. Chapanov
Thu, 4 May 23

Comments: 12 pages, Proceedings of the XIII Bulgarian-Serbian Astronomical Conference (XIII BSAC), Velingrad, Bulgaria, 2022

Architecting Complex, Long-Lived Scientific Software [IMA]

http://arxiv.org/abs/2304.13797


Software is a critical aspect of large-scale science, providing essential capabilities for making scientific discoveries. Large-scale scientific projects are vast in scope, with lifespans measured in decades and costs exceeding hundreds of millions of dollars. Successfully designing software that can exist for that span of time, at that scale, is challenging for even the most capable software companies. Yet scientific endeavors face challenges with funding and staffing, and operate in complex, poorly understood software settings. In this paper we discuss the practice of early-phase software architecture in the Square Kilometre Array Observatory’s Science Data Processor, a critical software component of this next-generation radio astronomy instrument. We customized an existing set of processes for software architecture analysis and design to the project’s unique circumstances, and we report on the series of comprehensive software architecture plans that resulted. The plans were used to obtain construction approval in a critical design review with outside stakeholders. We conclude with implications for other long-lived software architectures in the scientific domain, including potential risks and mitigations.

N. Ernst, J. Klein, M. Bartolini, et. al.
Fri, 28 Apr 23

Comments: published at Journal of Systems and Software as In Practice article. Data package at doi:10.5281/zenodo.7868987

Software Architecture and System Design of Rubin Observatory [IMA]

http://arxiv.org/abs/2211.13611


Starting from a description of the Rubin Observatory Data Management System Architecture, and drawing on our experience with and involvement in a range of other projects including Gaia, SDSS, UKIRT, and JCMT, we derive a series of generic design patterns and lessons learned.

W. O’Mullane, F. Economou, K. Lim, et. al.
Mon, 28 Nov 22

Comments: 10 pages, ADASS XXXII submission

Agile Scrum Development in an ad hoc Software Collaboration [CL]

http://arxiv.org/abs/2101.07779


Developing cyberinfrastructure for the growing needs of multi-messenger astrophysics requires expertise in both software development and domain science. However, due to the nature of scientific software development, many scientists neglect best practices for software engineering which results in software that is difficult to maintain. We present here a mitigation strategy where scientists adopt software development best practices by collaborating with professional software developers. Such a partnership brings inherent challenges. For the scientists, this can be a dependence on external resources and lack of control in the development process. For developers, this can be a reduction in effort available for core, non-scientific development goals. These issues can be alleviated by structuring the partnership using established software development practices, such as the Agile Scrum framework. This paper presents a case study wherein a scientist user group, the SuperNova Early Warning System (SNEWS), collaborated with a group of scientific software developers, the Scalable Cyberinfrastructure for Multi-Messenger Astrophysics (SCiMMA) project. The two organizations utilized an Agile Scrum framework to address the needs of each organization, mitigate the concerns of collaboration, and avoid pitfalls common to scientific software development. In the end, the scientists profited from a successful prototype and the software developers benefited from enhanced cyberinfrastructure and improved development skills. This suggests that structured collaborations could help address the prevailing difficulties in scientific computing.

A. Baxter, S. BenZvi, W. Bonivento, et. al.
Wed, 20 Jan 21

Comments: N/A

Agile methodologies in teams with highly creative and autonomous members [CL]

http://arxiv.org/abs/2009.05048


The Agile manifesto encourages us to value individuals and interactions over processes and tools, while Scrum, the most widely adopted Agile development methodology, is essentially based on roles, events, artifacts, and the rules that bind them together (i.e., processes). Moreover, it is generally proclaimed that whenever a Scrum project does not succeed, it is because Scrum was not implemented correctly, and not because Scrum may have its own flaws. This grants irrefutability to the methodology, discouraging deviations that would fit the actual needs and peculiarities of the developers. In particular, the members of the NASA ADS team are highly creative and autonomous individuals whose motivation can suffer if their freedom is too strongly constrained. We present our experience following Agile principles, reusing certain Scrum elements, and seeking the satisfaction of the team members, while rapidly reacting to and keeping the project in line with our stakeholders’ expectations.

S. Blanco-Cuaresma, A. Accomazzi, M. Kurtz, et. al.
Mon, 14 Sep 20

Comments: To appear in the proceedings of the 29th annual international Astronomical Data Analysis Software & Systems (ADASS XXIX)

Introducing PyCross: PyCloudy Rendering Of Shape Software for pseudo 3D ionisation modelling of nebulae [IMA]

http://arxiv.org/abs/2005.02749


Research into the processes of photoionised nebulae plays a significant part in our understanding of stellar evolution. It is extremely difficult to visually represent or model ionised nebulae, requiring astronomers to employ sophisticated modelling codes to derive temperature, density and chemical composition. Existing codes often require steep learning curves and produce models derived from mathematical functions. In this article we introduce PyCross: PyCloudy Rendering Of Shape Software. This is a pseudo 3D modelling application that generates photoionisation models of optically thin nebulae, created using the Shape software. So far PyCross has been used for novae and planetary nebulae, and it can be extended to Active Galactic Nuclei or any other type of photoionised axisymmetric nebula. Its functionality, an operational overview, and a scientific pipeline are described, with scenarios where PyCross has been adopted for novae (V5668 Sagittarii (2015) & V4362 Sagittarii (1994)) and a planetary nebula (LoTr1). Unlike the aforementioned photoionisation codes, this application does not require any coding experience or the derivation of complex mathematical models, instead utilising selected features from Cloudy/PyCloudy and Shape. The software was developed using a formal software development lifecycle, is written in Python, and works without the need to install any development environments or additional Python packages. The application, Shape models and PyCross archive examples are freely available to students, academics and the research community on GitHub (https://github.com/karolfitzgerald/PyCross_OSX_App).

K. Fitzgerald, E. Harvey, N. Keaveney, et. al.
Thu, 7 May 20

Comments: 15 pages, 12 figures

SADAS: an integrated software system for the data of the SuperAGILE experiment [IMA]

http://arxiv.org/abs/2004.02237


SuperAGILE (SA) is a detection system on board the AGILE satellite (Astro-rivelatore Gamma a Immagini LEggero), a gamma-ray astronomy mission approved by the Italian Space Agency (ASI) as the first project of the Program for Small Scientific Missions, with launch planned in the second part of 2005. Developing and testing the instrument required a substantial effort in software building and applications, so we realized an integrated system to handle and analyse measurement data from prototype tests through flight observations. The software system was created with an object-oriented design approach, which permits us to employ suitable libraries developed by other research teams and to integrate applications developed during our past work. This method allowed us to apply our schemas and written code to several prototypes, and to share the work among different developers with the help of standard modelling instruments such as UML schemas. We also used SQL-based database techniques to access large amounts of data stored in the archives, which will improve the scientific return from space observations. All this has allowed our team to minimize the cost of development in terms of manpower and resources, to have at our disposal a flexible system able to face the future needs of the mission, and to reuse it in other experiments.
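
The SQL-based archive access mentioned above can be illustrated with a toy example using an in-memory SQLite database; the schema and query below are invented for the sketch (the abstract does not describe SADAS's actual tables):

```python
import sqlite3

# Hypothetical two-column event table; real mission archives are far richer.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (obs_time REAL, counts INTEGER)")
db.executemany("INSERT INTO events VALUES (?, ?)",
               [(0.0, 12), (1.5, 7), (3.0, 25)])

# Select events above a count threshold, as an analysis step might.
rows = db.execute(
    "SELECT obs_time, counts FROM events WHERE counts > ? ORDER BY obs_time",
    (10,)).fetchall()
print(rows)  # [(0.0, 12), (3.0, 25)]
```

The point of the SQL layer is exactly this: analysis code expresses selections declaratively and the database handles the large archived volumes.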

F. Lazzarotto and E. Monte
Tue, 7 Apr 20

Comments: 5 pages, 4 figures

High Quality Software for Planetary Science from Space [IMA]

http://arxiv.org/abs/2003.06248


Planetary science space missions need high quality software and efficient algorithms in order to extract innovative scientific results from flight data. Reliable and efficient software technologies are increasingly vital to improve and prolong the exploitation of a mission's results, to allow the application of established algorithms and technologies to future space missions, and for the scientific analysis of archived data. Hereafter we give an in-depth analysis, accompanied by implementation examples from ESA and ASI missions, and some remarkable results that are the fruit of decades of experience gained by space agencies and research institutes in the field. Software quality analysis for space applications is not different from other high-tech, high-reliability application contexts. We describe a software quality study in general, then focus on the quality of space mission software (s/w), with details on some notable cases.

F. Lazzarotto, G. Cremonese, A. Lucchetti, et. al.
Mon, 16 Mar 20

Comments: presentation at XVI Congresso Nazionale di Scienze Planetarie (National Conference on Planetary Sciences), held at Centro Culturale San Gaetano, via Altinate 71, Padova, Italy, 3-7 February 2020. Affiliation: University of Padova. 6 pages, 4 figures

Using the Agile software development lifecycle to develop a standalone application for generating colour magnitude diagrams [IMA]

http://arxiv.org/abs/1906.11147


Virtual observatories provide the means by which an astronomer is able to discover, access, and process data seamlessly, regardless of its physical location. However, steep learning curves are often required to become proficient in the software employed to access, analyse and visualise this trove of data. It would be desirable, for both research and educational purposes, to have applications which allow users to visualise data at the click of a button. We have therefore developed a standalone application (written in Python) for plotting photometric Colour Magnitude Diagrams (CMDs) – one of the most widely used tools for studying and teaching about astronomical populations. The CMD Plot Tool application functions “out of the box” without the need for the user to install code interpreters, additional libraries and modules, or to modify system paths; and it is available on multiple platforms. Interacting via a graphical user interface (GUI), users can quickly and easily generate high quality plots, annotated and labelled as desired, from various data sources. This paper describes how CMD Plot Tool was developed using object-oriented programming and a formal software development lifecycle (SDLC). We highlight the need for the astronomical software development culture to identify appropriate programming paradigms and SDLCs. We outline the functionality and uses of CMD Plot Tool, with examples of star cluster photometry. All result plots were created using CMD Plot Tool on data readily available from various online virtual observatories, or acquired from observations and reduced with IRAF/PyRAF.
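
The quantity a CMD plots is simple: for each star, a colour index (e.g. B − V) against an apparent magnitude. A toy sketch with made-up magnitudes (CMD Plot Tool itself reads them from photometric catalogues):

```python
# Each star record carries magnitudes in two photometric bands.
stars = [
    {"id": "s1", "B": 15.2, "V": 14.6},
    {"id": "s2", "B": 16.8, "V": 15.1},
    {"id": "s3", "B": 14.1, "V": 13.9},
]

# A CMD point is (colour, magnitude) = (B - V, V).
points = [(s["B"] - s["V"], s["V"]) for s in stars]
for colour, v in points:
    print(f"B-V = {colour:+.2f}, V = {v:.2f}")
```

Plotting these points (V increasing downward, by convention) is then a standard scatter-plot task for any graphics backend.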

K. Fitzgerald, L. Browne and R. Butler
Thu, 27 Jun 19

Comments: N/A

A Scientific Workflow System for Satellite Data Processing with Real-Time Monitoring [IMA]

http://arxiv.org/abs/1812.02236


This paper provides a case study on satellite data processing, storage, and distribution in the space weather domain by introducing the Satellite Data Downloading System (SDDS). The approach proposed in this paper was evaluated through real-world scenarios and addresses the challenges related to the specific field. Although SDDS is used for satellite data processing, it can potentially be adapted to a wide range of data processing scenarios in other fields of physics.

M. Nguyen
Fri, 7 Dec 18

Comments: 4 pages, 1 figure, Mathematical Modeling and Computational Physics 2017 (MMCP 2017)

Monitoring activities of satellite data processing services in real-time with SDDS Live Monitor [IMA]

http://arxiv.org/abs/1812.02239


This work describes Live Monitor, the monitoring subsystem of SDDS – an automated system for space experiment data processing, storage, and distribution created at SINP MSU. Live Monitor allows operators and developers of satellite data centers to quickly identify errors that occur in data processing and to prevent further consequences of those errors. All activities of the whole data processing cycle are shown via a web interface in real time. Notification messages are delivered to the people responsible via email and the Telegram messenger service. The flexible monitoring mechanism implemented in Live Monitor allows us to dynamically change and control the events shown on the web interface on demand. Physicists whose space weather analysis models run on satellite data provided by SDDS can use the RESTful API we developed to monitor their own events and deliver customized notification messages to suit their needs.
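
The report-and-notify flow described above can be sketched as a minimal observer pattern; the class and method names below are illustrative, not SDDS's actual API:

```python
class Monitor:
    def __init__(self):
        self.events, self.notifiers = [], []

    def subscribe(self, fn):
        self.notifiers.append(fn)

    def report(self, stage, level, message):
        self.events.append((stage, level, message))  # full activity log
        if level == "ERROR":                         # only errors notify
            for fn in self.notifiers:
                fn(f"[{stage}] {message}")

sent = []
mon = Monitor()
mon.subscribe(sent.append)                  # stand-in for email/Telegram
mon.report("download", "INFO", "file received")
mon.report("decode", "ERROR", "corrupt frame")
print(sent)  # ['[decode] corrupt frame']
```

A web dashboard would render `mon.events` in real time, while the subscriber list is where pluggable channels such as email or Telegram attach.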

M. Nguyen
Fri, 7 Dec 18

Comments: 7 pages, 3 figures

A distributed data warehouse system for astroparticle physics [IMA]

http://arxiv.org/abs/1812.01906


A distributed data warehouse system is one of the pressing issues in the field of astroparticle physics. Famous experiments such as TAIGA and KASCADE-Grande produce tens of terabytes of data measured by their instruments. It is critical to have a smart on-site data warehouse system to store the collected data effectively for further distribution. It is also vital to provide scientists with a handy and user-friendly interface to access the collected data with proper permissions, not only on-site but also online. The latter case is handy when scientists need to combine data from different experiments for analysis. In this work, we describe an approach to implementing a distributed data warehouse system that allows scientists to acquire just the necessary data from different experiments via the Internet on demand. The implementation is based on CernVM-FS, with additional components we developed to search through all the available data sets and deliver their subsets to users’ computers.

M. Nguyen, A. Kryukov, J. Dubenskaya, et. al.
Thu, 6 Dec 18

Comments: 5 pages, 3 figures, The 8th International Conference “Distributed Computing and Grid-technologies in Science and Education” (GRID 2018)

Computational astrophysics for the future: An open, modular approach with agreed standards would facilitate astrophysical discovery [IMA]

http://arxiv.org/abs/1809.02600


Scientific discovery is mediated by ideas that, after being formulated in hypotheses, can be tested, validated, and quantified before they eventually lead to accepted concepts. Computer-mediated discovery in astrophysics is no exception, but antiquated code that is only intelligible to scientists who were involved in writing it is holding up scientific discovery in the field. A bold initiative is needed to modernize astrophysics code and make it transparent and useful beyond a small group of scientists. (abridged)

S. Zwart
Mon, 10 Sep 18

Comments: Published in Science

Inside a VAMDC data node – Putting standards into practical software [IMA]

http://arxiv.org/abs/1803.09217


Access to molecular and atomic data is critical for many forms of remote sensing analysis across different fields. Many atomic and molecular databases are, however, highly specialized for their intended application, complicating querying and combining data between sources. The Virtual Atomic and Molecular Data Centre, VAMDC, is an electronic infrastructure that allows each database to register as a “node”. Through services such as VAMDC’s portal website, users can then access and query all nodes in a homogenized way. Today all major atomic and molecular databases are attached to VAMDC.
This article describes the software tools we developed to help data providers create and manage a VAMDC node. It gives an overview of the VAMDC infrastructure and of the various standards it uses. The article then discusses the development choices made and how the standards are implemented in practice. It concludes with a full example of implementing a VAMDC node using a real-life case as well as future plans for the node software.

S. Regandell, T. Marquart and N. Piskunov
Wed, 28 Mar 18

Comments: 12 pages, 2 figures

Agile Software Engineering and Systems Engineering at SKA Scale [IMA]

http://arxiv.org/abs/1712.00061


Systems Engineering (SE) is the set of processes and documentation required for successfully realising large-scale engineering projects, but the classical approach is not a good fit for software-intensive projects, especially when the needs of the different stakeholders are not fully known from the beginning, and requirement priorities might change. The SKA is the ultimate software-enabled telescope, with enormous amounts of computing hardware and software required to perform its data reduction. We give an overview of the system and software engineering processes in the SKA1 development, and the tension between classical and agile SE.

J. Santander-Vela
Mon, 4 Dec 17

Comments: 4 pages, proceedings of the ADASS XXVII conference held in Santiago, October 2017

Cosmological Simulations in Exascale Era [IMA]

http://arxiv.org/abs/1712.00252


The architecture of Exascale computing facilities, which involves millions of heterogeneous processing units, will deeply impact scientific applications. Future astrophysical HPC applications must be designed to make such computing systems exploitable. The ExaNeSt H2020 EU-funded project aims to design and develop an exascale-ready prototype based on low-energy-consumption ARM64 cores and FPGA accelerators. We participate in the design of the platform and in the validation of the prototype with cosmological N-body and hydrodynamical codes suited to performing large-scale, high-resolution numerical simulations of the formation and evolution of cosmic structures. We discuss our activities on astrophysical applications to take advantage of the underlying architecture.

D. Goz, L. Tornatore, G. Taffoni, et. al.
Mon, 4 Dec 17

Comments: submitted to ASP

Use of Docker for deployment and testing of astronomy software [CL]

http://arxiv.org/abs/1707.03341


We describe preliminary investigations of using Docker for the deployment and testing of astronomy software. Docker is a relatively new containerisation technology that is developing rapidly and being adopted across a range of domains. It is based upon virtualization at operating system level, which presents many advantages in comparison to the more traditional hardware virtualization that underpins most cloud computing infrastructure today. A particular strength of Docker is its simple format for describing and managing software containers, which has benefits for software developers, system administrators and end users.
We report on our experiences from two projects — a simple activity to demonstrate how Docker works, and a more elaborate set of services that demonstrates more of its capabilities and what they can achieve within an astronomical context — and include an account of how we solved problems through interaction with Docker’s very active open source development community, which is currently the key to the most effective use of this rapidly-changing technology.
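
As an illustration of Docker's "simple format for describing and managing software containers", a minimal Dockerfile for a hypothetical Python-based astronomy tool might look like this (the base image, packages, and script name are examples only, not taken from the paper):

```dockerfile
# Minimal container for a hypothetical Python-based astronomy tool.
FROM python:3.11-slim
# Install the analysis dependencies inside the image (examples only).
RUN pip install --no-cache-dir numpy astropy
WORKDIR /app
COPY analyse.py .
ENTRYPOINT ["python", "analyse.py"]
```

Such an image is built with `docker build -t astro-tool .` and run with `docker run --rm astro-tool`; because the whole environment is captured in the image, the same container runs identically on a developer laptop and a deployment host.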

D. Morris, S. Voutsinas, N. Hambly, et. al.
Thu, 13 Jul 17

Comments: 29 pages, 9 figures, accepted for publication in Astronomy and Computing, ref ASCOM199

Corral Framework: Trustworthy and Fully Functional Data Intensive Parallel Astronomical Pipelines [IMA]

http://arxiv.org/abs/1701.05566


Data processing pipelines are among the most common astronomical software. These programs are chains of processes that transform raw data into valuable information. In this work a Python framework for astronomical pipeline generation is presented. It features a design pattern (Model-View-Controller) on top of a SQL relational database capable of handling custom data models, processing stages, and result communication alerts, as well as producing automatic quality and structural measurements. This pattern provides separation of concerns between the user logic and data models and the processing flow inside the pipeline, delivering multiprocessing and distributed computing capabilities for free. For the astronomical community this means an improvement on previous data processing pipelines, freeing the programmer from dealing with the processing flow and parallelization issues, and letting them focus on just the algorithms involved in the successive data transformations. This software, as well as working examples of pipelines, is available to the community at https://github.com/toros-astro.
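
The core idea — a pipeline as a chain of processing stages, with the flow handled by the framework rather than by user code — can be sketched as follows; the class names are invented for illustration and do not reflect Corral's actual API:

```python
class Step:
    """User code subclasses Step and only writes the transformation."""
    def process(self, data):
        raise NotImplementedError

class Calibrate(Step):
    def process(self, data):
        return [x - 2 for x in data]        # subtract a fake bias level

class Detect(Step):
    def process(self, data):
        return [x for x in data if x > 5]   # keep "sources" above threshold

def run_pipeline(steps, raw):
    """The framework's job: drive data through the chain of steps."""
    for step in steps:
        raw = step.process(raw)
    return raw

print(run_pipeline([Calibrate(), Detect()], [3, 8, 10, 4]))  # [6, 8]
```

In the real framework the driver also persists intermediate results to the SQL database and can dispatch independent steps to multiple processes, which is exactly what the user is freed from writing.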

J. Cabral, B. Sanchez, M. Beroiz, et. al.
Mon, 23 Jan 17

Comments: 8 pages, 2 figures, submitted for consideration at Astronomy and Computing. Code available at this https URL

Porting the LSST Data Management Pipeline Software to Python 3 [IMA]

http://arxiv.org/abs/1611.00751


The LSST data management science pipelines software consists of more than 100,000 lines of Python 2 code. LSST operations will begin after support for Python 2 has been dropped by the Python community in 2020, and we must therefore plan to migrate the codebase to Python 3. During the transition period we must also support our community of active Python 2 users and this complicates the porting significantly. We have decided to use the Python future package as the basis for our port to enable support for Python 2 and Python 3 simultaneously, whilst developing with a mindset more suited to Python 3. In this paper we report on the current status of the port and the difficulties that have been encountered.
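
The heart of the approach — one codebase with Python 3 semantics that still runs under Python 2 — rests on `__future__` imports, which the `future` package builds upon (it additionally backports Python 3 builtins). A minimal sketch:

```python
# With these imports, Python 2 adopts Python 3 semantics for the
# constructs below, so the same file runs under both interpreters.
from __future__ import division, print_function

def pixel_scale(width_arcsec, npix):
    # True division in both interpreters: 3 / 2 == 1.5, not 1.
    return width_arcsec / npix

print(pixel_scale(3, 2))  # 1.5 on Python 2 and Python 3 alike
```

The hard part of a 100,000-line port is not these mechanical imports but the behavioural differences (bytes vs. str, iterator-returning builtins) that the `future` package only partially smooths over.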

T. Jenness
Thu, 3 Nov 16

Comments: 4 pages, presented at Astronomical Data Analysis Software and Systems XXVI conference, Trieste, Italy, October 2016

The offline software framework of the DAMPE experiment [IMA]

http://arxiv.org/abs/1604.03219


A software framework has been developed for the DArk Matter Particle Explorer (DAMPE) mission, a satellite-based experiment. The DAMPE software framework is written mainly in C++, while applications built on it are steered in Python scripts. The framework comprises four principal parts: an event data model, which contains all reconstruction and simulation information and is based on ROOT input/output (I/O) streaming; a collection of processing modules, called algorithms, which process the event data; common tools, which provide general functionality such as data communication between algorithms; and event filters. This article presents an overview of the DAMPE offline software framework and the major architectural design choices made during its development.
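
The event data model / algorithm / filter structure described above can be sketched in the Python steering layer; all names below are invented for illustration, not DAMPE's actual classes:

```python
class Event:
    """Toy event data model: raw signal plus reconstruction output."""
    def __init__(self, raw):
        self.raw, self.reco = raw, None

def energy_filter(event):
    # Event filter: drop events below a raw-signal threshold.
    return event.raw > 10

def reconstruct(event):
    # Algorithm: fill the reconstruction part of the event.
    event.reco = {"energy": event.raw * 0.5}
    return event

events = [Event(r) for r in (4, 20, 31)]
processed = [reconstruct(e) for e in events if energy_filter(e)]
print([e.reco["energy"] for e in processed])  # [10.0, 15.5]
```

In the real framework the loop, filters, and algorithm scheduling live in C++, with ROOT I/O persisting the event objects; the Python layer only configures and steers the chain.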

C. Wang, D. Liu, Y. Wei, et. al.
Wed, 13 Apr 16

Comments: 5 pages, 2 figures

IVOA recommendation: Parameter Description Language Version 1.0 [IMA]

http://arxiv.org/abs/1509.08303


This document discusses the definition of the Parameter Description Language (PDL). In this language, parameters are described in a rigorous data model. With no loss of generality, we represent this data model using XML. PDL is intended to be an expressive language for self-descriptive web services, exposing the semantic nature of input and output parameters as well as all necessary complex constraints. PDL is a step forward towards true web service interoperability.
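
To give a flavour of describing a parameter with constraints in XML, in the spirit of PDL, here is a hypothetical description parsed with the standard library; the element and attribute names are invented, not the actual PDL schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical parameter description: a real-valued input with a range.
doc = """
<service>
  <parameter name="wavelength" type="real" unit="nm">
    <min>100</min><max>3000</max>
  </parameter>
</service>
"""
root = ET.fromstring(doc)
p = root.find("parameter")
lo, hi = float(p.find("min").text), float(p.find("max").text)
print(p.get("name"), p.get("unit"), (lo, hi))
```

A client that understands such a description can validate user input (e.g. reject a wavelength outside [100, 3000] nm) before ever calling the service, which is the interoperability payoff PDL aims for.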

C. Zwolf, P. Harrison, J. Garrido, et. al.
Tue, 29 Sep 15

Comments: N/A

Learning from FITS: Limitations in use in modern astronomical research [IMA]

http://arxiv.org/abs/1502.00996


The Flexible Image Transport System (FITS) standard has been a great boon to astronomy, allowing observatories, scientists and the public to exchange astronomical information easily. The FITS standard, however, is showing its age. Developed in the late 1970s, the FITS authors made a number of implementation choices that, while common at the time, are now seen to limit its utility with modern data. The authors of the FITS standard could not anticipate the challenges which we are facing today in astronomical computing. Difficulties we now face include, but are not limited to, addressing the need to handle an expanded range of specialized data product types (data models), being more conducive to the networked exchange and storage of data, handling very large datasets, and capturing significantly more complex metadata and data relationships.
There are members of the community today who find some or all of these limitations unworkable, and have decided to move ahead with storing data in other formats. If this fragmentation continues, we risk abandoning the advantages of broad interoperability, and ready archivability, that the FITS format provides for astronomy. In this paper we detail some selected important problems which exist within the FITS standard today. These problems may provide insight into deeper underlying issues which reside in the format and we provide a discussion of some lessons learned. It is not our intention here to prescribe specific remedies to these issues; rather, it is to call attention of the FITS and greater astronomical computing communities to these problems in the hope that it will spur action to address them.
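
Several of the limitations above trace back to FITS's fixed-width card format: headers are sequences of 80-character records with an 8-character keyword field, a design choice from the punched-card era. A deliberately simplified parser for a single card:

```python
def parse_card(card):
    """Parse one (simplified) FITS header card into (keyword, value, comment)."""
    assert len(card) == 80, "FITS cards are exactly 80 characters"
    keyword = card[:8].strip()               # columns 1-8: keyword
    value, _, comment = card[10:].partition("/")  # after '= ', '/' starts comment
    return keyword, value.strip(), comment.strip()

card = "NAXIS1  =                 2048 / length of data axis 1".ljust(80)
print(parse_card(card))  # ('NAXIS1', '2048', 'length of data axis 1')
```

The 8-character keyword limit and flat keyword=value structure are precisely what makes rich, hierarchical metadata awkward to express, motivating the extensions and alternative formats discussed in the paper. (Real FITS parsing has more cases, e.g. string values containing `/`.)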

B. Thomas, T. Jenness, F. Economou, et. al.
Wed, 4 Feb 15

Comments: N/A

Architecture, implementation and parallelization of the software to search for periodic gravitational wave signals [CL]

http://arxiv.org/abs/1410.3677


The parallelization, design and scalability of the \sky code to search for periodic gravitational waves from rotating neutron stars is discussed. The code is based on an efficient implementation of the F-statistic using the Fast Fourier Transform algorithm. To perform an analysis of data from the advanced LIGO and Virgo gravitational wave detectors’ network, which will start operating in 2015, hundreds of millions of CPU hours will be required – a code utilizing the potential of massively parallel supercomputers is therefore mandatory. We have parallelized the code using the Message Passing Interface standard and implemented a mechanism for combining the searches at different sky positions and frequency bands into one extremely scalable program. The parallel I/O interface is used to avoid bottlenecks when writing the generated data to the file system. This allowed us to develop a highly scalable code that enables data analysis at large scales on acceptable time scales. Benchmarking of the code on a Cray XE6 system was performed to demonstrate the efficiency of our parallelization concept and scaling up to 50 thousand cores.
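
The combining mechanism — distributing independent (sky position, frequency band) search cells over many processes — can be sketched without MPI; in an MPI code each rank would obtain its share of the work from a partitioning function like this (names and scheme are illustrative, not the paper's implementation):

```python
def cells_for_rank(rank, size, n_sky, n_bands):
    """Round-robin assignment of all sky x band search cells to one rank."""
    cells = [(s, b) for s in range(n_sky) for b in range(n_bands)]
    return cells[rank::size]

# 3 "ranks" covering 4 sky positions x 2 bands = 8 cells, with no
# overlap and full coverage:
all_cells = [c for r in range(3) for c in cells_for_rank(r, 3, 4, 2)]
print(sorted(all_cells) == [(s, b) for s in range(4) for b in range(2)])  # True
```

Because the cells are independent, this decomposition scales to tens of thousands of cores; the remaining bottleneck is output, which is why the paper resorts to parallel I/O.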

G. Poghosyan, S. Matta, A. Streit, et. al.
Tue, 3 Feb 15

Comments: 11 pages, 9 figures. Submitted to Computer Physics Communications

SAMP, the Simple Application Messaging Protocol: Letting applications talk to each other [IMA]

http://arxiv.org/abs/1501.01139


SAMP, the Simple Application Messaging Protocol, is a hub-based communication standard for the exchange of data and control between participating client applications. It has been developed within the context of the Virtual Observatory with the aim of enabling specialised data analysis tools to cooperate as a loosely integrated suite, and is now in use by many and varied desktop and web-based applications dealing with astronomical data. This paper reviews the requirements and design principles that led to SAMP’s specification, provides a high-level description of the protocol, and discusses some of its common and possible future usage patterns, with particular attention to those factors that have aided its success in practice.
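
SAMP's hub pattern — clients subscribe to message types ("mtypes") and a hub routes notifications to subscribers — can be mimicked in-process. Real SAMP clients register with a separate hub process over XML-RPC; this toy only mirrors the shape (`table.load.votable` is a genuine SAMP mtype, the classes are not SAMP's API):

```python
class Hub:
    def __init__(self):
        self.subs = {}                       # mtype -> list of callbacks

    def subscribe(self, mtype, callback):
        self.subs.setdefault(mtype, []).append(callback)

    def notify_all(self, mtype, params):
        for cb in self.subs.get(mtype, []):  # route only to subscribers
            cb(params)

received = []
hub = Hub()
hub.subscribe("table.load.votable", received.append)
hub.notify_all("table.load.votable", {"url": "http://example.org/t.vot"})
hub.notify_all("image.load.fits", {"url": "ignored"})   # no subscriber
print(received)  # [{'url': 'http://example.org/t.vot'}]
```

The loose coupling is the point: a catalogue tool can broadcast "table loaded" without knowing which plotting or imaging applications, if any, will react.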

M. Taylor, T. Boch and J. Taylor
Wed, 7 Jan 15

Comments: 12 pages, 3 figures. Accepted for Virtual Observatory special issue of Astronomy and Computing

Virtual Observatory Publishing with DaCHS [IMA]

http://arxiv.org/abs/1408.5733


The Data Center Helper Suite DaCHS is an integrated publication package for building Virtual Observatory (VO) and Web services, supporting the entire workflow from ingestion to data mapping to service definition. It implements all major data discovery, data access, and registry protocols defined by the VO. DaCHS in this sense works as glue between data produced by the data providers and the standard protocols and formats defined by the VO. This paper discusses central elements of the design of the package and gives two case studies of how VO protocols are implemented using DaCHS’ concepts.

M. Demleitner, M. Neves, F. Rothmaier, et. al.
Tue, 26 Aug 14

Comments: N/A

Your data is your dogfood: DevOps in the astronomical observatory [IMA]

http://arxiv.org/abs/1407.6463


DevOps is the contemporary term for a software development culture that purposefully blurs the distinction between software development and IT operations by treating “infrastructure as code.” DevOps teams typically implement practices summarised by the colloquial directive to “eat your own dogfood,” meaning that software tools developed by a team should be used internally rather than thrown over the fence to operations or users. We present a brief overview of how DevOps techniques bring proven software engineering practices to IT operations. We then discuss the application of these practices to astronomical observatories.

F. Economou, J. Hoblitt and P. Norris
Fri, 25 Jul 14

Comments: 7 pages, invited talk at Software and Cyberinfrastructure for Astronomy III, SPIE Astronomical Telescopes and Instrumentation conference, June 2014, Paper ID 9152-38

IVOA Recommendation: DALI: Data Access Layer Interface Version 1.0 [IMA]

http://arxiv.org/abs/1402.4750


This document describes the Data Access Layer Interface (DALI). DALI defines the base web service interface common to all Data Access Layer (DAL) services. This standard defines the behaviour of common resources, the meaning and use of common parameters, success and error responses, and DAL service registration. The goal of this specification is to define the common elements that are shared across DAL services in order to foster consistency across concrete DAL service specifications and to enable standard re-usable client and service implementations and libraries to be written and widely adopted.
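
From a client's perspective, DALI's common parameters mean a request to any DAL service can be assembled the same way. A sketch using two parameters DALI defines for all services, MAXREC and RESPONSEFORMAT (the endpoint URL is a placeholder, not a real service):

```python
from urllib.parse import urlencode

base = "http://example.org/tap/sync"        # placeholder service endpoint
# DALI-defined parameters: output format and maximum number of records.
params = {"RESPONSEFORMAT": "votable", "MAXREC": 1000}
url = base + "?" + urlencode(params)
print(url)
```

Because these parameters behave identically across concrete DAL services (TAP, SIA, and so on), a single client library can reuse this request-building logic everywhere, which is the consistency goal the abstract describes.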

P. Dowler, M. Demleitner, M. Taylor, et. al.
Thu, 20 Feb 14

A practical approach to ontology-enabled control systems for astronomical instrumentation [IMA]

http://arxiv.org/abs/1310.5488


Even though modern service-oriented and data-oriented architectures promise to deliver loosely coupled control systems, they are inherently brittle as they commonly depend on a priori agreed interfaces and data models. At the same time, the Semantic Web and a whole set of accompanying standards and tools are emerging, advocating ontologies as the basis for knowledge exchange. In this paper we aim to identify a number of key ideas from the myriad of knowledge-based practices that can readily be implemented by control systems today. We demonstrate with a practical example (a three-channel imager for the Mercator Telescope) how ontologies developed in the Web Ontology Language (OWL) can serve as a meta-model for our instrument, covering as many engineering aspects of the project as needed. We show how a concrete system model can be built on top of this meta-model via a set of Domain Specific Languages (DSLs), supporting both formal verification and the generation of software and documentation artifacts. Finally we reason how the available semantics can be exposed at run-time by adding a “semantic layer” that can be browsed, queried, monitored etc. by any OPC UA-enabled client.

Date added: Tue, 22 Oct 13