From Thread to Transcontinental Computer: Disturbing Lessons in Distributed Supercomputing [IMA]

http://arxiv.org/abs/1507.01138


We describe the political and technical complications encountered during the astronomical CosmoGrid project. CosmoGrid is a numerical study on the formation of large-scale structure in the universe. The simulations are challenging due to the enormous dynamic range in spatial and temporal coordinates, as well as the enormous computer resources required. In CosmoGrid we dealt with the computational requirements by connecting up to four supercomputers via an optical network and making them operate as a single machine. This was challenging, if only because the supercomputers of our choice are separated by half the planet: three of them are scattered across Europe and the fourth is in Tokyo. The co-scheduling of multiple computers and the ‘gridification’ of the code enabled us to achieve an efficiency of up to $93\%$ for this distributed intercontinental supercomputer. In this work, we find that high-performance computing on a grid can be done much more effectively if the sites involved are willing to be flexible about their user policies, and that having facilities to provide such flexibility could be key to strengthening the position of the HPC community in an increasingly Cloud-dominated computing landscape. Given that smaller computer clusters owned by research groups or university departments usually have flexible user policies, we argue that it could be easier to instead realize distributed supercomputing by combining tens, hundreds or even thousands of these resources.

Read this paper on arXiv…

D. Groen and S. Zwart
Tue, 7 Jul 15
26/65

Comments: Accepted for publication in IEEE conference on ERRORs

Separable projection integrals for higher-order correlators of the cosmic microwave sky: Acceleration by factors exceeding 100 [CL]

http://arxiv.org/abs/1503.08809


We study the optimisation and porting of the “Modal” code on Intel(R) Xeon(R) processors and/or Intel(R) Xeon Phi(TM) coprocessors using methods which should be applicable to more general compute-bound codes. “Modal” is used by the Planck satellite experiment for constraining general non-Gaussian models of the early universe via the bispectrum of the cosmic microwave background. We focus on the hot spot of the code, which is the projection of bispectra from the end of inflation to the spherical shell at decoupling that defines the CMB we observe. This projection involves a three-dimensional inner product between two functions, one of which requires an integral, on a non-rectangular sparse domain. We show that by employing separable methods this calculation can be reduced to a one-dimensional summation plus two integrations, reducing the dimensionality from four to three. The introduction of separable functions also solves the issue of the domain, allowing efficient vectorisation and load balancing. This method becomes unstable in certain cases, so we present a discussion of the optimisation of both approaches. By making bispectrum calculations competitive with those for the power spectrum, we are now able to consider joint analysis for cosmological science exploitation of new data. We demonstrate speed-ups of over 100x, arising from a combination of algorithmic improvements and architecture-aware optimisations targeted at improving thread and vectorisation behaviour. The resulting MPI/OpenMP code is capable of executing on clusters containing Intel(R) Xeon(R) processors and/or Intel(R) Xeon Phi(TM) coprocessors, with a strong-scaling efficiency of 98.6% on up to 16 nodes. We find that a single coprocessor outperforms two processor sockets by a factor of 1.3x and that running the same code across a combination of processors and coprocessors improves performance-per-node by a factor of 3.38x.
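
To make the separability trick concrete, here is a hedged toy sketch (not the Modal code itself): when both tensors in a three-dimensional inner product admit a separable expansion, the full contraction collapses into products of one-dimensional sums. All sizes and basis functions below are invented for illustration.

```python
# Toy illustration (not the Modal code): a separable expansion turns a
# 3D inner product over a grid into products of 1D sums.
import numpy as np

rng = np.random.default_rng(0)
N, nmodes = 64, 8                      # grid points per dimension, basis modes

# 1D basis functions for each dimension (hypothetical stand-ins)
X = rng.standard_normal((nmodes, N))
Y = rng.standard_normal((nmodes, N))
Z = rng.standard_normal((nmodes, N))
alpha = rng.standard_normal(nmodes)    # expansion coefficients of tensor A
beta = rng.standard_normal(nmodes)     # expansion coefficients of tensor B

# Brute force: build the full 3D tensors and contract (O(N^3) memory and work)
A = np.einsum("n,ni,nj,nk->ijk", alpha, X, Y, Z)
B = np.einsum("n,ni,nj,nk->ijk", beta, X, Y, Z)
brute = np.sum(A * B)

# Separable: only 1D sums are needed (O(nmodes^2 * N) work)
Gx = X @ X.T                           # pairwise 1D sums per dimension
Gy = Y @ Y.T
Gz = Z @ Z.T
separable = np.einsum("n,m,nm,nm,nm->", alpha, beta, Gx, Gy, Gz)

print(np.isclose(brute, separable))    # True: same answer, far less work
```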

Read this paper on arXiv…

J. Briggs, J. Jaykka, J. Fergusson, et al.
Fri, 3 Apr 15
1/43

Comments: N/A

StratOS: A Big Data Framework for Scientific Computing [IMA]

http://arxiv.org/abs/1503.02233


We introduce StratOS, a Big Data platform for general computing that allows a datacenter to be treated as a single computer. With StratOS, the process of writing a massively parallel program for a datacenter is no more complicated than writing a Python script for a desktop computer. Users can run pre-existing analysis software on data distributed over thousands of machines with just a few keystrokes. This greatly reduces the time required to develop distributed data analysis pipelines. The platform is built upon industry-standard, open-source Big Data technologies, from which it inherits fast data throughput and fault tolerance. StratOS enhances these technologies by adding an intuitive user interface, automated task monitoring, and other usability features.
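
As a rough illustration of the workflow StratOS automates — and emphatically not its actual API — the following sketch maps a pre-existing command-line tool over a set of data files in parallel on a single machine, using only the standard library; StratOS extends this pattern across a datacenter.

```python
# Hypothetical single-machine analogue of the StratOS workflow; the `wc -c`
# command is a stand-in for a real analysis executable.
import subprocess
import tempfile
from concurrent.futures import ProcessPoolExecutor

def analyze(path):
    # Stand-in analysis step: count bytes; replace with the real tool.
    result = subprocess.run(["wc", "-c", path], capture_output=True, text=True)
    return result.stdout.strip()

if __name__ == "__main__":
    # Create a few throwaway "data files" so the example is self-contained.
    paths = []
    for i in range(4):
        f = tempfile.NamedTemporaryFile(suffix=f"-part{i}.dat", delete=False)
        f.write(b"\x00" * 1024 * (i + 1))
        f.close()
        paths.append(f.name)

    with ProcessPoolExecutor() as pool:
        for summary in pool.map(analyze, paths):
            print(summary)
```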

Read this paper on arXiv…

N. Stickley and M. Aragon-Calvo
Tue, 10 Mar 15
4/77

Comments: 10 pages, 7 figures

Building a scalable global data processing pipeline for large astronomical photometric datasets [IMA]

http://arxiv.org/abs/1502.02821


Astronomical photometry is the science of measuring the flux of a celestial object. Since its introduction, the CCD has been the principal method of measuring flux to calculate the apparent magnitude of an object. Each CCD image taken must go through a process of cleaning and calibration prior to its use. As the number of research telescopes increases, the overall computing resources required for image processing also increase. Existing processing techniques are primarily sequential in nature, requiring increasingly powerful servers, faster disks and faster networks to process data. Existing High Performance Computing solutions involving high-capacity data centres are complex in design and expensive to maintain, while providing resources primarily to high-profile science projects. This research describes three distributed pipeline architectures: a virtualised cloud-based IRAF; the Astronomical Compute Node (ACN), a private cloud-based pipeline; and NIMBUS, a globally distributed system. The ACN pipeline processed data at a rate of 4 Terabytes per day, demonstrating data compression and upload to a central cloud storage service at a rate faster than data generation. The primary contribution of this research is NIMBUS, which is rapidly scalable, resilient to failure and capable of processing CCD image data at a rate of hundreds of Terabytes per day. This pipeline is implemented using a decentralised web queue to control the compression of data, the uploading of data to distributed web servers, and the creation of web messages to identify the location of the data. Using distributed web queue messages, images are downloaded by computing resources distributed around the globe. Rigorous experimental evidence is presented verifying the horizontal scalability of the system, which has demonstrated a processing rate of 192 Terabytes per day with clear indications that higher processing rates are possible.
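
The NIMBUS design described above can be sketched in miniature: producers compress images and post messages advertising where the data lives, while workers pull messages and process the corresponding files. The snippet below is a single-machine stand-in (an in-memory queue and gzip), not the actual NIMBUS implementation, and the storage URL is a placeholder.

```python
# Minimal sketch of the queue-driven pattern: an in-memory queue stands in
# for NIMBUS's distributed web queue, gzip for the image compression step.
import gzip
import queue
import threading

work_queue = queue.Queue()

def producer(images):
    for name, pixels in images:
        blob = gzip.compress(pixels)                     # compress a CCD frame
        url = f"https://storage.example.org/{name}.gz"   # hypothetical upload target
        # In NIMBUS the blob would be uploaded here; we just enqueue its location.
        work_queue.put({"name": name, "url": url, "size": len(blob)})

def worker(worker_id):
    while True:
        msg = work_queue.get()
        if msg is None:
            break
        # A real worker would download msg["url"], decompress and calibrate it.
        print(f"worker {worker_id} processing {msg['name']} ({msg['size']} bytes)")
        work_queue.task_done()

images = [(f"frame_{i:04d}", bytes(1024)) for i in range(8)]
threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
producer(images)
work_queue.join()
for _ in threads:
    work_queue.put(None)
for t in threads:
    t.join()
```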

Read this paper on arXiv…

P. Doyle
Wed, 11 Feb 15
18/72

Comments: PhD Thesis, Dublin Institute of Technology

Distributed Radio Interferometric Calibration [IMA]

http://arxiv.org/abs/1502.00858


Increasing data volumes delivered by a new generation of radio interferometers require computationally efficient and robust calibration algorithms. In this paper, we propose distributed calibration as a way of improving both the computational cost and the robustness of calibration. We exploit the data parallelism across frequency that is inherent in radio astronomical observations, which are recorded as multiple channels at different frequencies. Moreover, we also exploit the smoothness of the variation of calibration parameters across frequency. Data parallelism enables us to distribute the computing load across a network of compute agents. Smoothness in frequency enables us to reformulate calibration as a consensus optimization problem. With this formulation, we enable the flow of information between compute agents calibrating data at different frequencies, without actually passing the data, thereby improving robustness. We present simulation results to show the feasibility as well as the advantages of distributed calibration as opposed to conventional calibration.
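
A hedged toy version of this idea, not the paper's actual formulation: each frequency channel fits its own scalar gain from local data, while an ADMM-style consensus step enforces a low-order polynomial in frequency, so only the consensus variables (never the visibilities) are exchanged. All sizes, noise levels and the polynomial order below are arbitrary.

```python
# Toy consensus sketch: per-channel gain estimation coupled by a smooth
# polynomial model across frequency via ADMM-style updates.
import numpy as np

rng = np.random.default_rng(1)
freqs = np.linspace(0.0, 1.0, 32)                  # normalised frequencies
true_gain = 1.0 + 0.3 * freqs - 0.2 * freqs**2     # smooth gain vs frequency
model = rng.standard_normal(32) + 2.0              # per-channel model visibility
data = true_gain * model + 0.2 * rng.standard_normal(32)

rho, order = 1.0, 2
g = np.zeros(32)        # local (per-agent) gain estimates
z = np.zeros(32)        # consensus (smooth) gains
u = np.zeros(32)        # scaled dual variables

for _ in range(50):
    # Local step: each channel solves its own regularised least squares.
    g = (model * data + rho * (z - u)) / (model**2 + rho)
    # Consensus step: project g + u onto a smooth polynomial in frequency.
    coeffs = np.polyfit(freqs, g + u, order)
    z = np.polyval(coeffs, freqs)
    # Dual update.
    u += g - z

print(np.max(np.abs(z - true_gain)))   # consensus gains approach the smooth truth
```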

Read this paper on arXiv…

S. Yatawatta
Wed, 4 Feb 15
48/59

Comments: submitted to MNRAS, low resolution figures

Architecture, implementation and parallelization of the software to search for periodic gravitational wave signals [CL]

http://arxiv.org/abs/1410.3677


The parallelization, design and scalability of the \sky code to search for periodic gravitational waves from rotating neutron stars is discussed. The code is based on an efficient implementation of the F-statistic using the Fast Fourier Transform algorithm. To perform an analysis of data from the advanced LIGO and Virgo gravitational wave detectors’ network, which will start operating in 2015, hundreds of millions of CPU hours will be required – a code that exploits the potential of massively parallel supercomputers is therefore mandatory. We have parallelized the code using the Message Passing Interface standard and implemented a mechanism for combining the searches at different sky positions and frequency bands into one extremely scalable program. The parallel I/O interface is used to avoid bottlenecks when writing the generated data to the file system. This allowed us to develop a highly scalable computation code, which enables data analysis at large scales on acceptable time scales. Benchmarking of the code on a Cray XE6 system was performed to show the efficiency of our parallelization concept and to demonstrate scaling up to 50 thousand cores in parallel.
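
A minimal sketch of this kind of work distribution, assuming mpi4py and a placeholder search routine (the real F-statistic code is far more involved): search cells indexed by sky position and frequency band are dealt out to MPI ranks, and the candidate lists are gathered at the end.

```python
# Hedged sketch: (sky position, frequency band) search cells scattered
# across MPI ranks; the per-cell search is a placeholder.
from mpi4py import MPI
import itertools

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

sky_positions = range(100)        # placeholder sky-grid indices
frequency_bands = range(20)       # placeholder band indices
cells = list(itertools.product(sky_positions, frequency_bands))

def search_cell(sky, band):
    # Stand-in for the FFT-based F-statistic search of one cell.
    return (sky, band, 0.0)

# Round-robin assignment: rank r takes cells r, r+size, r+2*size, ...
local_results = [search_cell(s, b) for s, b in cells[rank::size]]

# Gather candidate lists on rank 0 (a real code would use parallel I/O).
all_results = comm.gather(local_results, root=0)
if rank == 0:
    print(sum(len(r) for r in all_results), "cells searched")
```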

Read this paper on arXiv…

G. Poghosyan, S. Matta, A. Streit, et al.
Tue, 3 Feb 15
37/80

Comments: 11 pages, 9 figures. Submitted to Computer Physics Communications

Montblanc: GPU accelerated Radio Interferometer Measurement Equations in support of Bayesian Inference for Radio Observations [CL]

http://arxiv.org/abs/1501.07719


We present Montblanc, a GPU implementation of the Radio interferometer measurement equation (RIME) in support of the Bayesian inference for radio observations (BIRO) technique. BIRO uses Bayesian inference to select sky models that best match the visibilities observed by a radio interferometer. To accomplish this, BIRO evaluates the RIME multiple times, varying sky model parameters to produce multiple model visibilities. Chi-squared values computed from the model and observed visibilities are used as likelihood values to drive the Bayesian sampling process and select the best sky model.
As most of the elements of the RIME and chi-squared calculation are independent of one another, they are highly amenable to parallel computation. Additionally, Montblanc caters for iterative RIME evaluation to produce multiple chi-squared values. Only modified model parameters are transferred to the GPU between each iteration.
We implemented Montblanc as a Python package based upon NVIDIA’s CUDA architecture. As such, it is easy to extend and implement different pipelines. At present, Montblanc supports point and Gaussian morphologies, but is designed for easy addition of new source profiles. Montblanc’s RIME implementation is performant: on an NVIDIA K40, it is approximately 250 times faster than MeqTrees on a dual hexacore Intel E5-2620 v2 CPU. Compared to the OSKAR simulator’s GPU-implemented RIME components it is 7.7 and 12 times faster on the same K40 for single- and double-precision floating point respectively. However, OSKAR’s RIME implementation is more general than Montblanc’s BIRO-tailored RIME.
Theoretical analysis of Montblanc’s dominant CUDA kernel suggests that it is memory bound. In practice, profiling shows that it is balanced between compute and memory, as much of the data required by the problem is retained in L1 and L2 cache.
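
For reference, the chi-squared and likelihood bookkeeping that drives the sampling loop can be written in a few lines of numpy; Montblanc evaluates the same quantity (plus the RIME itself) on the GPU. Array sizes and noise values below are arbitrary.

```python
# Minimal numpy sketch of the chi-squared term used to weight sky models.
import numpy as np

rng = np.random.default_rng(2)
nbl, nchan = 351, 64                             # baselines, channels (toy sizes)
observed = rng.standard_normal((nbl, nchan)) + 1j * rng.standard_normal((nbl, nchan))
sigma = 0.1 * np.ones((nbl, nchan))              # per-visibility noise std dev

def chi_squared(model_vis):
    residual = observed - model_vis
    return np.sum((residual.real**2 + residual.imag**2) / sigma**2)

def log_likelihood(model_vis):
    # Gaussian likelihood up to a constant; used to rank sky models.
    return -0.5 * chi_squared(model_vis)

trial_model = observed + 0.05 * rng.standard_normal((nbl, nchan))
print(chi_squared(trial_model), log_likelihood(trial_model))
```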

Read this paper on arXiv…

S. Perkins, P. Marais, J. Zwart, et al.
Mon, 2 Feb 15
21/49

Comments: Submitted to Astronomy and Computing (this http URL). The code is available online at this https URL 26 pages long, with 13 figures, 6 tables and 3 algorithms

Delivering SKA Science [IMA]

http://arxiv.org/abs/1501.05367


The SKA will be capable of producing a stream of science data products that are Exa-scale in terms of their storage and processing requirements. This Google-scale enterprise is attracting considerable international interest and excitement from within the industrial and academic communities. In this chapter we examine the data flow, storage and processing requirements of a number of key SKA survey science projects to be executed on the baseline SKA1 configuration. Based on a set of conservative assumptions about trends for HPC and storage costs, and the data flow process within the SKA Observatory, it is apparent that survey projects of the scale proposed will potentially drive construction and operations costs beyond the current anticipated SKA1 budget. This implies a sharing of the resources and costs to deliver SKA science between the community and what is contained within the SKA Observatory. A similar situation was apparent to the designers of the LHC more than 10 years ago. We propose that it is time for the SKA project and community to consider the effort and process needed to design and implement a distributed SKA science data system that leans on the lessons of other projects and looks to recent developments in Cloud technologies to ensure an affordable, effective and global achievement of SKA science goals.

Read this paper on arXiv…

P. Quinn, T. Axelrod, I. Bird, et al.
Fri, 23 Jan 15
19/65

Comments: 27 pages, 14 figures, Conference: Advancing Astrophysics with the Square Kilometre Array June 8-13, 2014 Giardini Naxos, Italy

24.77 Pflops on a Gravitational Tree-Code to Simulate the Milky Way Galaxy with 18600 GPUs [GA]

http://arxiv.org/abs/1412.0659


We have simulated, for the first time, the long term evolution of the Milky Way Galaxy using 51 billion particles on the Swiss Piz Daint supercomputer with our $N$-body gravitational tree-code Bonsai. Herein, we describe the scientific motivation and numerical algorithms. The Milky Way model was simulated for 6 billion years, during which the bar structure and spiral arms were fully formed. This improves upon previous simulations by using 1000 times more particles, and provides a wealth of new data that can be directly compared with observations. We also report the scalability on both the Swiss Piz Daint and the US ORNL Titan. On Piz Daint the parallel efficiency of Bonsai was above 95%. The highest performance was achieved with a 242 billion particle Milky Way model using 18600 GPUs on Titan, thereby reaching a sustained GPU and application performance of 33.49 Pflops and 24.77 Pflops respectively.

Read this paper on arXiv…

J. Bedorf, E. Gaburov, M. Fujii, et al.
Wed, 3 Dec 14
31/60

Comments: 12 pages, 4 figures, Published in: ‘Proceeding SC ’14 Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis’. Gordon Bell Prize 2014 finalist

Accelerating Cosmic Microwave Background map-making procedure through preconditioning [CEA]

http://arxiv.org/abs/1408.3048


Estimation of the sky signal from sequences of time ordered data is one of the key steps in Cosmic Microwave Background (CMB) data analysis, commonly referred to as the map-making problem. Some of the most popular and general methods proposed for this problem involve solving generalised least squares (GLS) equations with non-diagonal noise weights given by a block-diagonal matrix with Toeplitz blocks. In this work we study new map-making solvers potentially suitable for applications to the largest anticipated data sets. They are based on iterative conjugate gradient (CG) approaches enhanced with novel, parallel, two-level preconditioners. We apply the proposed solvers to examples of simulated non-polarised and polarised CMB observations, and a set of idealised scanning strategies with sky coverage ranging from nearly a full sky down to small sky patches. We discuss in detail their implementation for massively parallel computational platforms and their performance for a broad range of parameters characterising the simulated data sets. We find that our best new solver can outperform carefully-optimised standard solvers used today by a factor of as much as 5 in terms of the convergence rate and a factor of up to $4$ in terms of the time to solution, and to do so without significantly increasing the memory consumption and the volume of inter-processor communication. The performance of the new algorithms is also found to be more stable and robust, and less dependent on specific characteristics of the analysed data set. We therefore conclude that the proposed approaches are well suited to address successfully challenges posed by new and forthcoming CMB data sets.
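
The structure of the map-making problem can be illustrated with a toy preconditioned CG solve of the GLS normal equations, using a diagonal (Jacobi) preconditioner and diagonal noise for simplicity; the paper's two-level preconditioners and Toeplitz noise weights are considerably more sophisticated.

```python
# Toy GLS map-maker: solve (P^T N^-1 P) m = P^T N^-1 d with preconditioned CG.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
nt, npix = 20000, 500
pix = rng.integers(0, npix, size=nt)              # which pixel each sample hits
P = csr_matrix((np.ones(nt), (np.arange(nt), pix)), shape=(nt, npix))

true_map = rng.standard_normal(npix)
noise_var = 0.5 + rng.random(nt)                  # diagonal noise (toy; real N has Toeplitz blocks)
tod = P @ true_map + np.sqrt(noise_var) * rng.standard_normal(nt)

Ninv = 1.0 / noise_var
def normal_matvec(m):
    return P.T @ (Ninv * (P @ m))                 # apply (P^T N^-1 P)

A = LinearOperator((npix, npix), matvec=normal_matvec)
b = P.T @ (Ninv * tod)

# Jacobi preconditioner: inverse diagonal of P^T N^-1 P (weighted hit counts).
diag = np.asarray(P.multiply(P).T @ Ninv).ravel()
M = LinearOperator((npix, npix), matvec=lambda m: m / diag)

solution, info = cg(A, b, M=M, maxiter=200)
print(info, np.max(np.abs(solution - true_map)))
```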

Read this paper on arXiv…

M. Szydlarski, L. Grigori and R. Stompor
Thu, 14 Aug 14
36/54

Comments: 18 pages

Optimizing performance per watt on GPUs in High Performance Computing: temperature, frequency and voltage effects [IMA]

http://arxiv.org/abs/1407.8116


The magnitude of the real-time digital signal processing challenge attached to large radio astronomical antenna arrays motivates use of high performance computing (HPC) systems. The need for high power efficiency (performance per watt) at remote observatory sites parallels that in HPC broadly, where efficiency is an emerging critical metric. We investigate how the performance per watt of graphics processing units (GPUs) is affected by temperature, core clock frequency and voltage. Our results highlight how the underlying physical processes that govern transistor operation affect power efficiency. In particular, we show experimentally that GPU power consumption grows non-linearly with both temperature and supply voltage, as predicted by physical transistor models. We show that lowering the GPU supply voltage and increasing the clock frequency while maintaining a low die temperature increases the power efficiency of an NVIDIA K20 GPU by up to 37-48% over default settings when running xGPU, a compute-bound code used in radio astronomy. We discuss how temperature-aware power models could be used to reduce power consumption for future HPC installations. Automatic temperature-aware and application-dependent voltage and frequency scaling (T-DVFS and A-DVFS) may provide a mechanism to achieve better power efficiency for a wider range of codes running on GPUs.
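
Measurements of this kind start with logging power draw, die temperature and clocks while a workload runs; a minimal sampling loop around nvidia-smi (standard query fields, sketch only) might look like the following. Frequency changes of the sort studied in the paper correspond to adjusting the GPU's application clocks, which nvidia-smi also exposes.

```python
# Sample GPU power, temperature and SM clock via nvidia-smi while a workload
# runs. Requires an NVIDIA driver; the fields are standard query properties.
import subprocess
import time

QUERY = ["nvidia-smi",
         "--query-gpu=timestamp,power.draw,temperature.gpu,clocks.sm",
         "--format=csv,noheader"]

def sample(n=10, interval=1.0):
    rows = []
    for _ in range(n):
        out = subprocess.run(QUERY, capture_output=True, text=True, check=True)
        rows.extend(line.strip() for line in out.stdout.splitlines())
        time.sleep(interval)
    return rows

if __name__ == "__main__":
    for row in sample(n=3):
        print(row)   # e.g. "2015/07/07 12:00:00.000, 85.30 W, 56, 705 MHz"
```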

Read this paper on arXiv…

D. Price, M. Clark, B. Barsdell, et al.
Thu, 31 Jul 14
12/52

Comments: submitted to Computer Physics Communications

A Framework for HI Spectral Source Finding Using Distributed-Memory Supercomputing [IMA]

http://arxiv.org/abs/1407.4958


The latest generation of radio astronomy interferometers will conduct all-sky surveys with data products consisting of petabytes of spectral line data. Traditional approaches to identifying and parameterising the astrophysical sources within this data will not scale to datasets of this magnitude, since the performance of workstations will not keep up with the real-time generation of data. For this reason, it is necessary to employ high performance computing systems consisting of a large number of processors connected by a high-bandwidth network. In order to make use of such supercomputers, substantial modifications must be made to serial source-finding code. To ease the transition, this work presents the Scalable Source Finder Framework (SSoFF), a framework providing storage access, networking communication and data composition functionality, which can support a wide range of source-finding algorithms provided they can be applied to subsets of the entire image. Additionally, the Parallel Gaussian Source Finder (PGSF) was implemented using SSoFF, utilising Gaussian filters, thresholding, and local statistics. PGSF was able to search a 256 GB simulated dataset in under 24 minutes, significantly less than the 8 to 12 hour observation that would generate such a dataset.
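
A hedged sketch of the per-subcube work a PGSF-like finder performs, with scipy standing in for the framework's own kernels: smooth, threshold against a noise estimate, and label connected detections. The cube, injected source and thresholds below are invented.

```python
# Toy per-subcube source finding: Gaussian smoothing, thresholding and
# labelling of connected detections.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
subcube = rng.normal(0.0, 1.0, size=(64, 64, 128))      # (ra, dec, velocity) noise
subcube[30:34, 30:34, 60:70] += 5.0                      # injected toy source

smoothed = ndimage.gaussian_filter(subcube, sigma=(1, 1, 2))

# Threshold relative to a robust noise estimate (stand-in for local statistics).
noise = 1.4826 * np.median(np.abs(smoothed - np.median(smoothed)))
mask = smoothed > 4.0 * noise

labels, nsources = ndimage.label(mask)
for i, sl in enumerate(ndimage.find_objects(labels), start=1):
    flux = subcube[sl].sum()
    print(f"candidate {i}: bounding box {sl}, summed flux {flux:.1f}")
print(nsources, "candidate sources in this subcube")
```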

Read this paper on arXiv…

S. Westerlund and C. Harris
Mon, 21 Jul 14
54/55

Comments: 15 pages, 6 figures

D4M 2.0 Schema: A General Purpose High Performance Schema for the Accumulo Database [CL]

http://arxiv.org/abs/1407.3859


Non-traditional, relaxed consistency, triple store databases are the backbone of many web companies (e.g., Google Big Table, Amazon Dynamo, and Facebook Cassandra). The Apache Accumulo database is a high performance open source relaxed consistency database that is widely used for government applications. Obtaining the full benefits of Accumulo requires using novel schemas. The Dynamic Distributed Dimensional Data Model (D4M)[this http URL] provides a uniform mathematical framework based on associative arrays that encompasses both traditional (i.e., SQL) and non-traditional databases. For non-traditional databases D4M naturally leads to a general purpose schema that can be used to fully index and rapidly query every unique string in a dataset. The D4M 2.0 Schema has been applied with little or no customization to cyber, bioinformatics, scientific citation, free text, and social media data. The D4M 2.0 Schema is simple, requires minimal parsing, and achieves the highest published Accumulo ingest rates. The benefits of the D4M 2.0 Schema are independent of the D4M interface. Any interface to Accumulo can achieve these benefits by using the D4M 2.0 Schema.
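
The "index every unique string" idea can be illustrated with a toy associative-array schema (no Accumulo involved): each field|value pair of a record becomes a column set to 1, and a transpose index gives fast lookups by any value. Row names and records below are invented.

```python
# Toy "exploded" schema with a transpose index, mimicking the associative-array view.
from collections import defaultdict

table = {}                               # row -> {"field|value": 1}
transpose = defaultdict(set)             # "field|value" -> set of rows

def insert(row_id, record):
    cols = {f"{field}|{value}": 1 for field, value in record.items()}
    table[row_id] = cols
    for col in cols:
        transpose[col].add(row_id)

insert("tweet-001", {"user": "alice", "word": "telescope"})
insert("tweet-002", {"user": "bob", "word": "telescope"})

# Query: which rows mention the word "telescope"? (single index lookup)
print(sorted(transpose["word|telescope"]))   # ['tweet-001', 'tweet-002']
```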

Read this paper on arXiv…

J. Kepner, C. Anderson, W. Arcand, et al.
Wed, 16 Jul 14
41/48

Comments: 6 pages; IEEE HPEC 2013

GPU accelerated Hybrid Tree Algorithm for Collision-less N-body Simulations [IMA]

http://arxiv.org/abs/1406.6158


We propose a hybrid tree algorithm for reducing the calculation and communication cost of collision-less N-body simulations. The concept of our algorithm is that we split the interaction force into two parts, a hard force from neighbor particles and a soft force from distant particles, and apply different time integration schemes to each. For the hard-force calculation, we can efficiently reduce the calculation and communication cost of the parallel tree code because we only need data of neighbor particles for this part. We implement the algorithm on GPU clusters to accelerate force calculation for both the hard and soft forces. As a result of implementing the algorithm on GPU clusters, we were able to reduce the communication cost and the total execution time to 40% and 80% of those of a normal tree algorithm, respectively. In addition, the reduction factor relative to the normal tree algorithm is smaller for large numbers of processes, and we expect that the execution time can ultimately be reduced down to about 70% of the normal tree algorithm.
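
A hedged toy of the hard/soft force split, assuming Plummer-softened direct summation instead of a tree and RESPA-style sub-cycling for the neighbour forces; the cutoff, time steps and particle count are arbitrary.

```python
# Toy two-timescale integrator: short steps for neighbour ("hard") forces,
# long steps for distant ("soft") forces.
import numpy as np

rng = np.random.default_rng(5)
n, eps, r_cut = 64, 0.05, 0.3
pos = rng.random((n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)

def forces(split):
    dr = pos[None, :, :] - pos[:, None, :]
    r2 = (dr**2).sum(-1) + eps**2
    near = r2 < r_cut**2
    np.fill_diagonal(near, False)
    sel = near if split == "hard" else (~near & ~np.eye(n, dtype=bool))
    w = np.where(sel, mass[None, :] / r2**1.5, 0.0)
    return (w[:, :, None] * dr).sum(axis=1)

dt_soft, n_sub = 0.01, 4
dt_hard = dt_soft / n_sub
for _ in range(10):                       # outer (soft) steps
    vel += 0.5 * dt_soft * forces("soft")
    for _ in range(n_sub):                # inner (hard) sub-steps
        vel += 0.5 * dt_hard * forces("hard")
        pos += dt_hard * vel
        vel += 0.5 * dt_hard * forces("hard")
    vel += 0.5 * dt_soft * forces("soft")
```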

Read this paper on arXiv…

T. Watanabe and N. Nakasato
Wed, 25 Jun 14
61/67

Comments: Paper presented at Fifth International Symposium on Highly-Efficient Accelerators and Reconfigurable Technologies (HEART2014)

Achieving 100,000,000 database inserts per second using Accumulo and D4M [CL]

http://arxiv.org/abs/1406.4923


The Apache Accumulo database is an open source relaxed consistency database that is widely used for government applications. Accumulo is designed to deliver high performance on unstructured data such as graphs of network data. This paper tests the performance of Accumulo using data from the Graph500 benchmark. The Dynamic Distributed Dimensional Data Model (D4M) software is used to implement the benchmark on a 216-node cluster running the MIT SuperCloud software stack. A peak performance of over 100,000,000 database inserts per second was achieved, which is 100x larger than the highest previously published value for any other database. The performance scales linearly with the number of ingest clients, number of database servers, and data size. The performance was achieved by adapting several supercomputing techniques to this application: distributed arrays, domain decomposition, adaptive load balancing, and single-program-multiple-data programming.
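
The ingest-side domain decomposition can be sketched as follows; the insert call is a placeholder rather than the Accumulo client API, and the edge list is a stand-in for Graph500 data.

```python
# Minimal sketch: each SPMD ingest client takes a disjoint shard of the edge
# list and writes it in batches.
def send_batch(batch):
    pass  # placeholder for a batched write to the database tablet servers

def ingest_client(client_id, n_clients, edges, batch_size=1000):
    my_edges = edges[client_id::n_clients]              # disjoint round-robin shard
    for start in range(0, len(my_edges), batch_size):
        send_batch(my_edges[start:start + batch_size])  # batched insert

edges = [(i, (i * 48271) % 100000) for i in range(10000)]   # stand-in edge list
for cid in range(4):
    ingest_client(cid, 4, edges)
```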

Read this paper on arXiv…

J. Kepner, W. Arcand, D. Bestor, et al.
Fri, 20 Jun 14
2/48

Comments: 6 pages; to appear in IEEE High Performance Extreme Computing (HPEC) 2014

Efficient and Scalable Algorithms for Smoothed Particle Hydrodynamics on Hybrid Shared/Distributed-Memory Architectures [CL]

http://arxiv.org/abs/1404.2303


This paper describes a new fast and implicitly parallel approach to neighbour-finding in multi-resolution Smoothed Particle Hydrodynamics (SPH) simulations. This new approach is based on hierarchical cell decompositions and sorted interactions, within a task-based formulation. It is shown to be faster than traditional tree-based codes, and to scale better than domain decomposition-based approaches on hybrid shared/distributed-memory parallel architectures, e.g. clusters of multi-cores, achieving a $40\times$ speedup over the Gadget-2 simulation code.
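
A toy version of cell-based neighbour finding, the starting point for the sorted, task-based interactions the paper describes: bin particles into cells of side at least the smoothing length, then search only the 27 surrounding cells. The particle count and smoothing length are arbitrary.

```python
# Toy cell-list neighbour search for SPH-like interactions.
import numpy as np
from collections import defaultdict
from itertools import product

rng = np.random.default_rng(6)
pos = rng.random((5000, 3))
h = 0.05                                        # smoothing length ~ cell size

cells = defaultdict(list)
for idx, p in enumerate(pos):
    cells[tuple((p // h).astype(int))].append(idx)

def neighbours(i):
    ci = tuple((pos[i] // h).astype(int))
    out = []
    for offset in product((-1, 0, 1), repeat=3):
        cell = tuple(c + o for c, o in zip(ci, offset))
        for j in cells.get(cell, ()):
            if j != i and np.sum((pos[i] - pos[j])**2) < h * h:
                out.append(j)
    return out

print(len(neighbours(0)), "neighbours within h of particle 0")
```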

Read this paper on arXiv…

P. Gonnet
Thu, 10 Apr 14
36/57

Creating A Galactic Plane Atlas With Amazon Web Services [IMA]

http://arxiv.org/abs/1312.6723


This paper describes by example how astronomers can use cloud-computing resources offered by Amazon Web Services (AWS) to create new datasets at scale. We have created from existing surveys an atlas of the Galactic Plane at 16 wavelengths from 1 µm to 24 µm with pixels co-registered at a spatial sampling of 1 arcsec. We explain how open source tools support management and operation of a virtual cluster on AWS platforms to process data at scale, and describe the technical issues that users will need to consider, such as optimization of resources, resource costs, and management of virtual machine instances.
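
Provisioning the worker nodes of such a virtual cluster takes only a few lines with boto3; the AMI, key pair and instance type below are placeholders, and the paper's own tooling may well differ.

```python
# Hedged sketch of launching virtual-cluster workers on AWS EC2 with boto3.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-xxxxxxxx",          # placeholder: image with the processing stack installed
    InstanceType="c4.xlarge",        # placeholder instance type
    MinCount=1,
    MaxCount=8,                      # size of the virtual cluster
    KeyName="my-keypair",            # placeholder SSH key pair
)

for inst in instances:
    inst.wait_until_running()
    inst.reload()                    # refresh attributes such as the public IP
    print(inst.id, inst.public_ip_address)
```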

Read this paper on arXiv…

Wed, 25 Dec 13
11/23

Calculation of Stochastic Heating and Emissivity of Cosmic Dust Grains with Optimization for the Intel Many Integrated Core Architecture [IMA]

http://arxiv.org/abs/1311.4627


Cosmic dust particles effectively attenuate starlight. Their absorption of starlight produces emission spectra from the near- to far-infrared, which depend on the sizes and properties of the dust grains and the spectrum of the heating radiation field. The near- to mid-infrared is dominated by the emission of very small grains. Modeling the absorption of starlight by these particles is, however, computationally expensive and a significant bottleneck for self-consistent radiation transport codes treating the heating of dust by stars. In this paper, we summarize the formalism for computing the stochastic emissivity of cosmic dust, which was developed in earlier works, and present a new library, HEATCODE, implementing this formalism for the calculation for arbitrary grain properties and heating radiation fields. Our library is highly optimized for general-purpose processors with multiple cores, vector instructions, and a hierarchical memory cache structure. The HEATCODE library also efficiently runs on co-processor cards implementing the Intel Many Integrated Core (Intel MIC) architecture. We discuss in detail the optimization steps that we took in order to optimize for the Intel MIC architecture, which also significantly benefited the performance of the code on general-purpose processors, and provide code samples and performance benchmarks for each step. The HEATCODE library performance on a single Intel Xeon Phi coprocessor (Intel MIC architecture) is approximately 2 times that of a general-purpose two-socket multicore processor system with approximately the same nominal power consumption. The library supports heterogeneous calculations employing host processors simultaneously with multiple coprocessors, and can be easily incorporated into existing radiation transport codes.
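
The stochastic-heating formalism can be illustrated in miniature: discretise grain temperature into bins, fill a transition-rate matrix, and solve the steady-state master equation for the probability of finding a grain in each bin. The rates below are made up; HEATCODE computes them from real grain cross-sections and radiation fields.

```python
# Toy steady-state solution of the stochastic-heating master equation.
import numpy as np

nbins = 50

# Transition rates R[f, i]: probability per unit time of jumping from bin i
# to bin f. Heating (f > i) comes from photon absorptions; cooling is taken
# to proceed only to the adjacent lower bin, as in the usual approximation.
R = np.zeros((nbins, nbins))
for i in range(nbins):
    for f in range(i + 1, nbins):
        R[f, i] = 1.0 / (f - i)**2          # made-up heating spectrum
    if i > 0:
        R[i - 1, i] = 50.0 * i              # made-up radiative cooling rate

# Steady state: sum_i M[f, i] P[i] = 0 with sum(P) = 1.
M = R.copy()
M[np.diag_indices(nbins)] = -R.sum(axis=0)
A = np.vstack([M[:-1], np.ones(nbins)])     # replace one equation by normalisation
b = np.zeros(nbins)
b[-1] = 1.0
P = np.linalg.solve(A, b)

print(P.min() >= -1e-12, P.sum())           # a valid probability distribution
```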

Read this paper on arXiv…

Wed, 20 Nov 13
18/54

2HOT: An Improved Parallel Hashed Oct-Tree N-Body Algorithm for Cosmological Simulation [IMA]

http://arxiv.org/abs/1310.4502


We report on improvements made over the past two decades to our adaptive treecode N-body method (HOT). A mathematical and computational approach to the cosmological N-body problem is described, with performance and scalability measured up to 256k ($2^{18}$) processors. We present error analysis and scientific application results from a series of more than ten 69 billion ($4096^3$) particle cosmological simulations, accounting for $4 \times 10^{20}$ floating point operations. These results include the first simulations using the new constraints on the standard model of cosmology from the Planck satellite. Our simulations set a new standard for accuracy and scientific throughput, while meeting or exceeding the computational efficiency of the latest generation of hybrid TreePM N-body methods.

Read this paper on arXiv…

Date added: Fri, 18 Oct 13

Porting Large HPC Applications to GPU Clusters: The Codes GENE and VERTEX


We have developed GPU versions for two major high-performance-computing (HPC) applications originating from two different scientific domains. GENE is a plasma microturbulence code which is employed for simulations of nuclear fusion plasmas. VERTEX is a neutrino-radiation hydrodynamics code for “first principles”-simulations of core-collapse supernova explosions. The codes are considered state of the art in their respective scientific domains, both concerning their scientific scope and functionality as well as the achievable compute performance, in particular parallel scalability on all relevant HPC platforms. GENE and VERTEX were ported by us to HPC cluster architectures with two NVidia Kepler GPUs mounted in each node in addition to two Intel Xeon CPUs of the Sandy Bridge family. On such platforms we achieve up to twofold gains in the overall application performance in the sense of a reduction of the time to solution for a given setup with respect to a pure CPU cluster. The paper describes our basic porting strategies and benchmarking methodology, and details the main algorithmic and technical challenges we faced on the new, heterogeneous architecture.

Read this paper on arXiv…

Date added: Tue, 8 Oct 13