SLEPLET: Slepian Scale-Discretised Wavelets in Python [CL]

http://arxiv.org/abs/2304.10680


Wavelets are widely used in various disciplines to analyse signals in both space and scale. Whilst many fields measure data on manifolds (e.g., the sphere), data are often observed only on a partial region of the manifold. Wavelets are a typical approach to data of this form, but the wavelet coefficients that overlap with the boundary become contaminated and must be removed for accurate analysis. Another approach is to estimate the region of missing data and to use existing whole-manifold methods for analysis. However, both approaches introduce uncertainty into any analysis. Slepian wavelets enable one to work directly with only the data present, thus avoiding the problems discussed above. Applications of Slepian wavelets to areas of research measuring data on the partial sphere include gravitational/magnetic fields in geodesy, ground-based measurements in astronomy, measurements of whole-planet properties in planetary science, geomagnetism of the Earth, and cosmic microwave background analyses.

Read this paper on arXiv…

P. Roddy
Mon, 24 Apr 23
26/41

Comments: 4 pages

On the best lattice quantizers [CL]

http://arxiv.org/abs/2202.09605


A lattice quantizer approximates an arbitrary real-valued source vector with a vector taken from a specific discrete lattice. The quantization error is the difference between the source vector and the lattice vector. In a classic 1996 paper, Zamir and Feder show that the globally optimal lattice quantizer (which minimizes the mean square error) has white quantization noise: for a uniformly distributed source, the covariance of the error is the identity matrix, multiplied by a positive real factor. We generalize the theorem, showing that the same property holds (i) for any locally optimal lattice quantizer and (ii) for an optimal product lattice, if the component lattices are themselves locally optimal. We derive an upper bound on the normalized second moment (NSM) of the optimal lattice in any dimension, by proving that any lower- or upper-triangular modification to the generator matrix of a product lattice reduces the NSM. Using these tools and employing the best currently known lattice quantizers to build product lattices, we construct improved lattice quantizers in dimensions 13 to 15, 17 to 23, and 25 to 48. In some dimensions, these are the first reported lattices with normalized second moments below the Zador upper bound.
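
In the notation typically used (not spelled out in the abstract), the NSM is $G(\Lambda) = \frac{1}{n}\,\mathbb{E}\|e\|^2 / V^{2/n}$ for Voronoi-cell volume $V$; for $\mathbb{Z}^n$ the nearest lattice point is plain rounding and $G = 1/12$ in every dimension. A Monte Carlo sketch of that definition (for the integer lattice only; the improved lattices of the paper require genuine nearest-point solvers):

```python
import numpy as np

def nsm_zn(n, num_samples=200_000, seed=0):
    """Monte Carlo normalized second moment of Z^n (unit Voronoi volume)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-0.5, 0.5, size=(num_samples, n))  # uniform over the Voronoi cell
    e = u - np.round(u)                                # quantization error
    return np.mean(np.sum(e**2, axis=1)) / n

# ~1/12 = 0.0833... for every n; good lattices push the NSM down toward
# 1/(2*pi*e) = 0.0585..., the large-dimension limit of the optimum.
print(nsm_zn(3))
```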

Read this paper on arXiv…

E. Agrell and B. Allen
Thu, 24 Feb 22
47/52

Comments: N/A

FAIR high level data for Cherenkov astronomy [CL]

http://arxiv.org/abs/2201.03247


We highlight here several solutions developed to make high-level Cherenkov data FAIR: Findable, Accessible, Interoperable and Reusable. The first three FAIR principles may be ensured by properly indexing the data and using community standards, protocols and services, provided for example by the International Virtual Observatory Alliance (IVOA). However, the reusability principle is particularly subtle, as it raises the question of trust. Provenance information, which describes the data origin and all transformations performed, is essential to ensure this trust, and it should come with the proper granularity and level of detail. We developed a prototype platform to make the first H.E.S.S. public test data findable and accessible through the Virtual Observatory (VO). The exposed high-level data follow the gamma-ray astronomy data format (GADF), proposed as a community standard to ensure wider interoperability. We also designed a provenance management system in connection with the development of pipelines and analysis tools for CTA (ctapipe and gammapy), in order to collect rich and detailed provenance information, as recommended by the FAIR reusability principle. The prototype platform thus implements the main functionalities of a science gateway, including data search and access, online processing, and traceability of the various actions performed by a user.

Read this paper on arXiv…

M. Servillat, C. Boisson, M. Fuessling, et al.
Tue, 11 Jan 22
55/95

Comments: N/A

Towards a Provenance Management System for Astronomical Observatories [CL]

http://arxiv.org/abs/2109.07751


We present here a provenance management system adapted to the needs of astronomical projects. We collected use cases from various astronomy projects and defined a data model in the ecosystem developed by the IVOA (International Virtual Observatory Alliance). From those use cases, we observed that some projects already have data collections generated and archived, from which the provenance has to be extracted (provenance “on top”), whereas other projects are building complex pipelines that automatically capture provenance information during the data processing (capture “inside”). Different tools and prototypes have been developed and tested to capture, store, access and visualize the provenance information, which together shape a full provenance management system able to handle detailed provenance information.
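
As a minimal illustration of the “capture inside” idea (a sketch only, unrelated to the IVOA data model or the ctapipe/gammapy implementations; all names below are hypothetical), a pipeline step can record its own provenance as it runs:

```python
import functools
import hashlib
import json
import time

def capture_provenance(log):
    """Record each decorated processing step (activity) as it executes."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = func(*args, **kwargs)
            log.append({
                "activity": func.__name__,
                "started": start,
                "duration_s": time.time() - start,
                # content hashes stand in for persistent entity identifiers
                "inputs": hashlib.sha256(repr((args, kwargs)).encode()).hexdigest()[:12],
                "output": hashlib.sha256(repr(result).encode()).hexdigest()[:12],
            })
            return result
        return wrapper
    return decorator

provenance = []

@capture_provenance(provenance)
def calibrate(counts, gain):
    return [c * gain for c in counts]

calibrate([1, 2, 3], gain=2.0)
print(json.dumps(provenance, indent=2))
```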

Read this paper on arXiv…

M. Servillat, F. Bonnarel, C. Boisson, et. al.
Fri, 17 Sep 21
41/67

Comments: N/A

Limits of Detecting Extraterrestrial Civilizations [CL]

http://arxiv.org/abs/2107.09794


The search for extraterrestrial intelligence (SETI) is a scientific endeavor which struggles with a unique issue: a strong indeterminacy in what data to look for and when to look for it. This has led to attempts at finding both fundamental limits of the communication between extraterrestrial intelligence and human civilizations, as well as benchmarks for predicting what kinds of signals we might most expect. Previous work has been formulated in terms of the information-theoretic task of communication, but we instead argue it should be viewed as a detection problem, specifically one-shot (asymmetric) hypothesis testing. With this new interpretation, we develop fundamental limits as well as provide simple examples of how to use this framework to analyze and benchmark different possible signals from extraterrestrial civilizations. We show that electromagnetic signaling for detection requires much less power than for communication, that detection as a function of power can be non-linear, and that much of the analysis in this framework may be addressed using computationally efficient optimization problems, thereby demonstrating tools for further inquiry.
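
A toy classical instance of the detection view (not the paper's framework, which treats the general one-shot asymmetric setting): testing a known deterministic signal against pure Gaussian noise, where the type-II performance is an explicitly non-linear function of signal power.

```python
import numpy as np
from scipy.stats import norm

def detection_probability(snr, p_false_alarm):
    """Pd of the Neyman-Pearson test between N(0, 1) and N(sqrt(snr), 1)."""
    threshold = norm.isf(p_false_alarm)       # fix the type-I (false alarm) error
    return norm.sf(threshold - np.sqrt(snr))  # resulting detection probability

# Detection probability rises non-linearly with received power.
for snr_db in [0, 3, 6, 10]:
    snr = 10 ** (snr_db / 10)
    print(f"{snr_db:>2} dB -> Pd = {detection_probability(snr, 1e-3):.4f}")
```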

Read this paper on arXiv…

I. George, X. Chen and L. Varshney
Thu, 22 Jul 21
3/59

Comments: Main Text: 16 pages, 1 Figure. Comments welcome

Slepian Scale-Discretised Wavelets on the Sphere [CL]

http://arxiv.org/abs/2106.02023


This work presents the construction of a novel spherical wavelet basis designed for incomplete spherical datasets, i.e., datasets with missing data in a particular region of the sphere. The eigenfunctions of the Slepian spatial-spectral concentration problem (the Slepian functions) are a set of orthogonal basis functions concentrated within a defined region. Slepian functions allow one to compute a convolution on the incomplete sphere by leveraging the recently proposed sifting convolution and extending it to any set of basis functions. Through a tiling of the Slepian harmonic line one may construct scale-discretised wavelets. An illustration is presented based on an example region on the sphere defined by the topographic map of the Earth. The Slepian wavelets and corresponding wavelet coefficients are constructed from this region, and are used in a straightforward denoising example.
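
A rough sketch of the tiling idea (not the paper's construction, which uses a smooth $C^\infty$ kernel rather than the raised-cosine windows chosen here): dyadic windows on the harmonic line whose squares sum to one, so the wavelet coefficients across scales jointly preserve the signal's energy.

```python
import numpy as np

def harmonic_tiling(L, lam=2.0):
    """Squared-cosine windows on geometric scales lam**j covering [1, L)."""
    ells = np.arange(L)
    j_max = int(np.ceil(np.log(L) / np.log(lam)))
    windows = []
    for j in range(j_max + 1):
        lo, hi = lam ** (j - 1), lam ** (j + 1)
        kappa = np.zeros(L)
        band = (ells > lo) & (ells < hi)
        t = np.log(ells[band] / lo) / np.log(lam)  # t in (0, 2): rise, then fall
        kappa[band] = np.where(t <= 1.0,
                               np.sin(np.pi * t / 2),
                               np.cos(np.pi * (t - 1) / 2))
        windows.append(kappa)
    return np.array(windows)

W = harmonic_tiling(64)
# Partition of unity for ell >= 1 (ell = 0 belongs to a scaling function,
# omitted in this sketch): sum_j kappa_j(ell)^2 = 1.
print(np.allclose((W**2).sum(axis=0)[1:], 1.0))
```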

Read this paper on arXiv…

P. Roddy and J. McEwen
Fri, 4 Jun 21
12/71

Comments: 10 pages, 8 figures

Sparse image reconstruction on the sphere: a general approach with uncertainty quantification [CL]

http://arxiv.org/abs/2105.04935


Inverse problems defined naturally on the sphere are of increasing interest. In this article we provide a general framework for the evaluation of inverse problems on the sphere, with a strong emphasis on flexibility and scalability. We consider flexibility with respect to the prior selection (regularization), the problem definition – specifically the problem formulation (constrained/unconstrained) and problem setting (analysis/synthesis) – and the optimization adopted to solve the problem. We discuss and quantify the trade-offs between problem formulation and setting. Crucially, we consider the Bayesian interpretation of the unconstrained problem which, combined with recent developments in probability density theory, permits rapid, statistically principled uncertainty quantification (UQ) in the spherical setting. Linearity is exploited to significantly increase the computational efficiency of such UQ techniques, which in some cases are shown to permit analytic solutions. We showcase this reconstruction framework and UQ techniques on a variety of spherical inverse problems. The code discussed throughout is provided under a GNU general public license, in both C++ and Python.
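
For reference, in generic notation not fixed by the abstract ($\Phi$ the measurement operator, $\Psi$ the wavelet dictionary, $\sigma$ the noise level), the unconstrained analysis problem and its Bayesian reading take the form

$$
x^\star = \arg\min_x \; \frac{1}{2\sigma^2}\|y - \Phi x\|_2^2 + \lambda\|\Psi^\dagger x\|_1,
\qquad
p(x \mid y) \propto \exp\!\Big(-\frac{1}{2\sigma^2}\|y - \Phi x\|_2^2 - \lambda\|\Psi^\dagger x\|_1\Big),
$$

so the variational solution coincides with the maximum a posteriori estimate, and the posterior is what supports the uncertainty quantification described above.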

Read this paper on arXiv…

M. Price, L. Pratley and J. McEwen
Mon, 17 May 21
4/55

Comments: N/A

Bayesian variational regularization on the ball [CL]

http://arxiv.org/abs/2105.05518


We develop variational regularization methods which leverage sparsity-promoting priors to solve severely ill-posed inverse problems defined on the 3D ball (i.e., the solid sphere). Our method solves the problem natively on the ball and thus does not suffer from the discontinuities that plague alternative approaches in which each spherical shell is considered independently. Additionally, we leverage advances in probability density theory to produce Bayesian variational methods which benefit from the computational efficiency of advanced convex optimization algorithms, whilst supporting principled uncertainty quantification. We showcase these variational regularization and uncertainty quantification techniques on an illustrative example. The C++ code discussed throughout is provided under a GNU general public license.

Read this paper on arXiv…

M. Price and J. McEwen
Thu, 13 May 21
52/60

Comments: N/A

Morphological components analysis for circumstellar disks imaging [IMA]

http://arxiv.org/abs/2101.12706


Recent developments in astronomical observations enable direct imaging of circumstellar disks. Precise characterization of such extended structures is essential to our understanding of stellar systems. However, the faint intensity of circumstellar disks compared to the brightness of the host star compels astronomers to use tailored observation strategies, in addition to state-of-the-art optical devices. Even then, extracting the signal of circumstellar disks heavily relies on post-processing techniques. In this work, we propose a morphological component analysis (MCA) approach that leverages low-complexity models of both the disks and the stellar light corrupting the data. In addition to disks, our method allows one to image exoplanets. Our approach is tested through numerical experiments.
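
A 1D toy version of the MCA idea (a sketch under strong simplifications, not the paper's dictionaries or data model): two components, one sparse in the DCT ("disk-like", smooth) and one sparse in the pixel basis ("point-source-like"), separated by alternating soft-thresholding with a decreasing threshold.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(1)
n = 256
smooth = np.cos(2 * np.pi * 3 * np.arange(n) / n)      # "disk": sparse in DCT
spikes = np.zeros(n)
spikes[rng.choice(n, 5, replace=False)] = 4.0          # "point sources": sparse in pixels
data = smooth + spikes + 0.05 * rng.standard_normal(n)

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

s_est = np.zeros(n)   # smooth component estimate
p_est = np.zeros(n)   # point-like component estimate
for threshold in np.linspace(2.0, 0.1, 30):            # MCA: decreasing threshold
    # re-estimate each component from the residual left by the other
    s_est = idct(soft(dct(data - p_est, norm="ortho"), threshold), norm="ortho")
    p_est = soft(data - s_est, threshold)

print("recovered spikes:", np.nonzero(p_est > 1.0)[0])
print("true spikes:     ", np.sort(np.nonzero(spikes)[0]))
```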

Read this paper on arXiv…

B. Pairet, F. Cantalloube and L. Jacques
Mon, 1 Feb 21
65/69

Comments: in Proceedings of iTWIST’20, Paper-ID: 44, Nantes, France, December, 2-4, 2020

Sifting Convolution on the Sphere [CL]

http://arxiv.org/abs/2007.12153


A novel spherical convolution is defined through the sifting property of the Dirac delta on the sphere. The so-called sifting convolution is defined by the inner product of one function with a translated version of another, but with the adoption of an alternative translation operator on the sphere. This translation operator follows by analogy with the Euclidean translation when viewed in harmonic space. The sifting convolution satisfies a variety of desirable properties that are lacking in alternative definitions: it supports directional kernels; its output remains on the sphere; and it is efficient to compute. An illustration of the sifting convolution on a topographic map of the Earth demonstrates that it supports directional kernels to perform anisotropic filtering, while its output remains on the sphere.
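
Since translation acts per harmonic coefficient by the Euclidean analogy, the convolution reduces to a pointwise product in harmonic space. The sketch below assumes that product form (whether the kernel enters conjugated depends on the inner-product convention, which the abstract does not fix) and uses the common $\ell(\ell+1)+m$ flat indexing:

```python
import numpy as np

def elm2ind(ell, m):
    """Flatten (ell, m) to a 1D index with the ell*(ell+1)+m convention."""
    return ell * (ell + 1) + m

def sifting_convolution(flm, glm):
    # Assumed harmonic-space form: a per-(ell, m) product of coefficients,
    # by analogy with Fourier convolution on the line.
    return flm * np.conj(glm)

L = 8
rng = np.random.default_rng(0)
flm = rng.standard_normal(L * L) + 1j * rng.standard_normal(L * L)
glm = np.zeros(L * L, dtype=complex)
glm[elm2ind(2, 1)] = 1.0            # a single directional (m != 0) kernel mode
hlm = sifting_convolution(flm, glm)
print(np.flatnonzero(hlm))          # only the (2, 1) mode survives
```

Because every m of the kernel participates, directional (anisotropic) kernels are supported, unlike axisymmetric convolutions that use only the kernel's m = 0 coefficients.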

Read this paper on arXiv…

P. Roddy and J. McEwen
Fri, 24 Jul 20
44/53

Comments: 5 pages, 3 figures

Applying Information Theory to Design Optimal Filters for Photometric Redshifts [IMA]

http://arxiv.org/abs/2001.01372


In this paper we apply ideas from information theory to create a method for the design of optimal filters for photometric redshift estimation. We show the method applied to a series of simple example filters in order to motivate an intuition for how photometric redshift estimators respond to the properties of photometric passbands. We then design a realistic set of six filters covering optical wavelengths that optimize photometric redshifts for $z \leq 2.3$ and $i < 25.3$. We create a simulated catalog for these optimal filters and use our filters with a photometric redshift estimation code to show that we can reduce the standard deviation of the photometric redshift error by 7.1% overall and reduce outliers by 9.9% over the standard filters proposed for the Large Synoptic Survey Telescope (LSST). We compare features of our optimal filters to LSST and find that the LSST filters incorporate key features for optimal photometric redshift estimation. Finally, we describe how information theory can be applied to a range of optimization problems in astronomy.
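
The figure of merit in this style of analysis is the mutual information between redshift and the observed photometry. A plug-in histogram estimate (a generic sketch; the paper's estimator and simulation details are not reproduced here) already distinguishes an informative passband from a shuffled one:

```python
import numpy as np

def mutual_information(x, y, bins=30):
    """Plug-in mutual information estimate (in bits) from a 2D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz]))

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 2.3, 100_000)                    # redshifts over the target range
color = z + 0.2 * rng.standard_normal(z.size)         # a hypothetical informative color
print(mutual_information(z, color))                   # substantially > 0 bits
print(mutual_information(z, rng.permutation(color)))  # ~ 0 bits
```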

Read this paper on arXiv…

J. Kalmbach, J. VanderPlas and A. Connolly
Tue, 7 Jan 20
69/71

Comments: 29 pages, 17 figures, accepted to ApJ

Comparing Multi-class, Binary and Hierarchical Machine Learning Classification schemes for variable stars [IMA]

http://arxiv.org/abs/1907.08189


Upcoming synoptic surveys are set to generate an unprecedented amount of data. This requires an automatic framework that can quickly and efficiently provide classification labels for several new object classification challenges. Using data describing 11 types of variable stars from the Catalina Real-Time Transient Surveys (CRTS), we illustrate how to capture the most important information from computed features and describe detailed methods of how to robustly use Information Theory for feature selection and evaluation. We apply three Machine Learning (ML) algorithms and demonstrate how to optimize these classifiers via cross-validation techniques. For the CRTS dataset, we find that the Random Forest (RF) classifier performs best in terms of balanced accuracy and geometric means. We demonstrate substantially improved classification results by converting the multi-class problem into a binary classification task, achieving a balanced-accuracy rate of $\sim$99 per cent for the classification of $\delta$-Scuti and Anomalous Cepheids (ACEP). Additionally, we describe how classification performance can be improved via converting a ‘flat-multi-class’ problem into a hierarchical taxonomy. We develop a new hierarchical structure and propose a new set of classification features, enabling the accurate identification of subtypes of Cepheids, RR Lyrae and eclipsing binary stars in CRTS data.
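
The evaluation loop described here maps directly onto standard tooling; a minimal sketch with stand-in features (the paper's features are statistics computed from CRTS light curves, not generated data):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for a table of light-curve features and variable-star labels.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print(f"balanced accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```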

Read this paper on arXiv…

Z. Hosenie, R. Lyon, B. Stappers, et al.
Fri, 19 Jul 19
68/78

Comments: 16 pages, 11 figures, accepted for publication in MNRAS

Glancing Through Massive Binary Radio Lenses: Hardware-Aware Interferometry With 1-Bit Sensors [IMA]

http://arxiv.org/abs/1905.12528


Energy consumption and hardware cost of signal digitization, together with the management of the resulting data volume, form serious issues for high-rate measurement systems with multiple sensors. Switching to binary sensing front-ends results in resource-efficient systems but is commonly associated with significant distortion due to the nonlinear signal acquisition. In particular, for applications that require solving high-resolution processing tasks under extreme conditions, it is a widely held belief that low-complexity 1-bit analog-to-digital conversion leads to unacceptable performance degradation. In the Big Science context of radio astronomy, we propose a telescope architecture based on simplistic binary sampling, precise hardware-aware probabilistic modeling, and advanced statistical data processing. We sketch the main principles, system blocks and advantages of such a radio telescope system, which we refer to as The Massive Binary Radio Lenses. The open engineering science questions which have to be answered before building a physical prototype are outlined. We set sail for the academic technology study by deriving an algorithm for interferometric imaging from binary radio array measurements. Without bias, the method aims at extracting the full discriminative information about the spatial power distribution embedded in a binary sensor data stream. We use radio measurements obtained with the LOFAR telescope to test the developed imaging technique and present visual and quantitative results. These assessments shed light on the fact that binary radio telescopes are suited for surveying the universe.
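
The classical fact underpinning 1-bit interferometry is that, for Gaussian inputs, the correlation between two antenna streams survives hard quantization and can be recovered through the arcsine law (the Van Vleck correction). This sketch illustrates just that building block, not the paper's full imaging algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
rho_true = 0.3                          # true correlation between two antennas
n = 1_000_000
shared = rng.standard_normal(n)
x = np.sqrt(rho_true) * shared + np.sqrt(1 - rho_true) * rng.standard_normal(n)
y = np.sqrt(rho_true) * shared + np.sqrt(1 - rho_true) * rng.standard_normal(n)

# 1-bit quantization discards amplitude entirely...
r_binary = np.mean(np.sign(x) * np.sign(y))

# ...yet E[sign(x) sign(y)] = (2/pi) arcsin(rho), which inverts to:
rho_recovered = np.sin(np.pi / 2 * r_binary)
print(rho_true, round(rho_recovered, 4))
```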

Read this paper on arXiv…

M. Stein
Thu, 30 May 19
56/57

Comments: N/A

Information theory for fields [CEA]

http://arxiv.org/abs/1804.03350


A physical field has an infinite number of degrees of freedom, as it has a field value at each location of a continuous space. Knowing a field exactly from finite measurements alone is therefore impossible. Prior information on the field is essential for field inference, but will not specify the field entirely. An information theory for fields is needed to join the measurement and prior information into probabilistic statements on field configurations. Such an information field theory (IFT) is built on the language of mathematical physics, in particular on field theory and statistical mechanics. IFT permits the mathematical derivation of optimal imaging algorithms, data analysis methods, and even computer simulation schemes. The application of such IFT algorithms to astronomical datasets provides high fidelity images of the Universe and facilitates the search for subtle statistical signals from the Big Bang. The concepts of IFT might even pave the road to novel computer simulations that are aware of their own uncertainties.
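
The simplest algorithm IFT reproduces is the Wiener filter, which is a concrete entry point to the formalism. A self-contained 1D sketch (with an exponential prior covariance and a masking response chosen here for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 100
x = np.arange(npix)

# Gaussian prior: signal covariance S with a 5-pixel correlation length.
S = np.exp(-np.abs(x[:, None] - x[None, :]) / 5.0)
s = rng.multivariate_normal(np.zeros(npix), S)

R = np.eye(npix)[::2]                  # response: observe every second pixel
sigma_n = 0.3
d = R @ s + sigma_n * rng.standard_normal(R.shape[0])

# Wiener filter: m = D j, with D = (S^-1 + R^T N^-1 R)^-1 and j = R^T N^-1 d.
N_inv = np.eye(R.shape[0]) / sigma_n**2
D = np.linalg.inv(np.linalg.inv(S) + R.T @ N_inv @ R)
m = D @ (R.T @ N_inv @ d)
print(np.mean((m - s) ** 2), "<", np.mean(s ** 2))   # posterior beats the prior
```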

Read this paper on arXiv…

T. Ensslin
Wed, 11 Apr 18
52/54

Comments: 13 pages, 4 figures, submitted

Robust period estimation using mutual information for multi-band light curves in the synoptic survey era [IMA]

http://arxiv.org/abs/1709.03541


The Large Synoptic Survey Telescope (LSST) will produce an unprecedented amount of light curves using six optical bands. Robust and efficient methods that can aggregate data from multidimensional sparsely-sampled time series are needed. In this paper we present a new method for light curve period estimation based on the quadratic mutual information (QMI). The proposed method does not assume a particular model for the light curve nor its underlying probability density, and it is robust to non-Gaussian noise and outliers. By combining the QMI from several bands the true period can be estimated even when no single-band QMI yields the period. Period recovery performance as a function of average magnitude and sample size is measured using 30,000 synthetic multi-band light curves of RR Lyrae and Cepheid variables generated by the LSST Operations and Catalog simulators. The results show that aggregating information from several bands is highly beneficial in LSST sparsely-sampled time series, obtaining an absolute increase in period recovery rate of up to 50%. We also show that the QMI is more robust to noise and light curve length (sample size) than the multiband generalizations of the Lomb-Scargle and Analysis of Variance periodograms, recovering the true period in 10-30% more cases than its competitors. A Python package containing efficient Cython implementations of the QMI and other methods is provided.
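
A single-band sketch of the idea (using the Cauchy-Schwarz form of the QMI with Gaussian Parzen windows; the kernel widths here are ad hoc, and the paper's weighting and multi-band aggregation are not reproduced): the QMI between folded phase and magnitude peaks at the true period.

```python
import numpy as np

def cs_qmi(x, y, sigma_x, sigma_y):
    """Cauchy-Schwarz quadratic mutual information, Gaussian Parzen windows."""
    Kx = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma_x) ** 2)
    Ky = np.exp(-0.5 * ((y[:, None] - y[None, :]) / sigma_y) ** 2)
    n = len(x)
    v_j = np.mean(Kx * Ky)                                  # joint potential
    v_m = np.mean(Kx) * np.mean(Ky)                         # marginal potential
    v_c = np.mean(Kx.sum(axis=1) * Ky.sum(axis=1)) / n**2   # cross potential
    return np.log(v_j * v_m / v_c**2)                       # >= 0, 0 iff independent

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 200))                       # irregular sampling
mag = np.sin(2 * np.pi * t / 7.7) + 0.2 * rng.standard_normal(t.size)

trial_periods = np.linspace(5, 10, 200)
qmi = [cs_qmi((t % p) / p, mag, 0.05, 0.3) for p in trial_periods]
print(trial_periods[int(np.argmax(qmi))])                   # ~ 7.7
```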

Read this paper on arXiv…

P. Huijse, P. Estevez, F. Forster, et al.
Wed, 13 Sep 17
9/72

Comments: Accepted for publication ApJ Supplement Series: Special Issue on Solar/Stellar Astronomy Big Data

Exploration of Pattern-Matching Techniques for Lossy Compression on Cosmology Simulation Data Sets [CL]

http://arxiv.org/abs/1707.08205


Because of the vast volume of data being produced by today’s scientific simulations, lossy compression allowing user-controlled information loss can significantly reduce the data size and the I/O burden. However, for large-scale cosmology simulations, such as the Hardware/Hybrid Accelerated Cosmology Code (HACC), where memory overhead constraints restrict compression to only one snapshot at a time, the lossy compression ratio is extremely limited because of the fairly low spatial coherence and high irregularity of the data. In this work, we propose a pattern-matching (similarity-searching) technique to optimize the prediction accuracy and compression ratio of the SZ lossy compressor on the HACC data sets. We evaluate our proposed method with different configurations and compare it with state-of-the-art lossy compressors. Experiments show that our proposed optimization approach can improve the prediction accuracy and reduce the compressed size of quantization codes compared with SZ. We present several lessons useful for future research involving pattern-matching techniques for lossy compression.
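
For orientation, the prediction-plus-quantization core that SZ-style compressors build on looks as follows (a deliberately simplified sketch with a plain predecessor predictor; the paper's contribution is to replace such simple predictors with pattern matching against earlier data):

```python
import numpy as np

def compress(data, error_bound):
    """Error-bounded predictive quantization: |recon - data| <= error_bound."""
    codes = np.empty(len(data), dtype=np.int64)
    recon = np.empty(len(data))
    prev = 0.0
    for i, v in enumerate(data):
        residual = v - prev                                  # prediction error
        codes[i] = int(np.round(residual / (2 * error_bound)))
        recon[i] = prev + codes[i] * 2 * error_bound
        prev = recon[i]                                      # predict from decoded values
    return codes, recon

rng = np.random.default_rng(0)
data = np.cumsum(0.01 * rng.standard_normal(10_000))         # smooth 1D field
codes, recon = compress(data, error_bound=1e-3)
assert np.max(np.abs(recon - data)) <= 1e-3
print("distinct codes:", np.unique(codes).size)              # few codes -> compressible
```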

Read this paper on arXiv…

D. Tao, S. Di, Z. Chen, et al.
Thu, 27 Jul 17
9/49

Comments: 12 pages, 4 figures, accepted for DRBSD-1 in conjunction with ISC’17

Computing Entropies With Nested Sampling [CL]

http://arxiv.org/abs/1707.03543


The Shannon entropy, and related quantities such as mutual information, can be used to quantify uncertainty and relevance. However, in practice, it can be difficult to compute these quantities for arbitrary probability distributions, particularly if the probability mass functions or densities cannot be evaluated. This paper introduces a computational approach, based on Nested Sampling, to evaluate entropies of probability distributions that can only be sampled. I demonstrate the method on three examples: (i) a simple Gaussian example where the key quantities are available analytically; (ii) an experimental design example about scheduling observations in order to measure the period of an oscillating signal; and (iii) predicting the future from the past in a heavy-tailed scenario.

Read this paper on arXiv…

B. Brewer
Thu, 13 Jul 17
43/60

Comments: Submitted to Entropy. 18 pages, 3 figures. Software available at this https URL

Precise Real-Time Navigation of LEO Satellites Using a Single-Frequency GPS Receiver and Ultra-Rapid Ephemerides [IMA]

http://arxiv.org/abs/1704.02094


Precise (sub-meter level) real-time navigation using a space-capable single-frequency global positioning system (GPS) receiver and ultra-rapid (real-time) ephemerides from the International GNSS Service (IGS) is proposed for low-Earth-orbiting (LEO) satellites. The C/A code and L1 carrier phase measurements are combined and single-differenced to eliminate first-order ionospheric effects and receiver clock offsets. A random-walk process is employed to model the phase ambiguities in order to absorb the time-varying and satellite-specific higher-order measurement errors as well as the GPS clock correction errors. A sequential Kalman filter which incorporates the known orbital dynamic model is developed to estimate orbital states and phase ambiguities without matrix inversion. Real flight data from the single-frequency GPS receiver onboard China’s SJ-9A small satellite are processed to evaluate the orbit determination accuracy. Statistics from internal orbit assessments indicate that three-dimensional accuracies of better than 0.50 m and 0.55 mm/s are achieved for position and velocity, respectively.
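
"Sequential" here refers to the standard trick of processing scalar measurements one at a time, so the innovation-covariance inverse collapses to a scalar division. A minimal sketch of that update (generic Kalman algebra, not the paper's full filter with orbital dynamics and ambiguity states):

```python
import numpy as np

def sequential_update(x, P, measurements, h_rows, variances):
    """Kalman measurement update, one scalar observation at a time."""
    for z, h, r in zip(measurements, h_rows, variances):
        s = h @ P @ h + r              # scalar innovation variance (no inversion)
        k = P @ h / s                  # Kalman gain vector
        x = x + k * (z - h @ x)
        P = P - np.outer(k, h @ P)
    return x, P

# Toy: estimate a 2-vector from three scalar observations.
x, P = np.zeros(2), 10.0 * np.eye(2)
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([1.02, 1.98, 3.05])
x, P = sequential_update(x, P, z, H, [0.01, 0.01, 0.01])
print(np.round(x, 3))                  # ~ [1, 2]
```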

Read this paper on arXiv…

X. Sun, C. Han and P. Chen
Mon, 10 Apr 17
25/36

Comments: 25 pages, 6 figures, Ready for publication in Aerospace Science and Technology, 2017

Correlated signal inference by free energy exploration [CL]

http://arxiv.org/abs/1612.08406


The inference of correlated signal fields with unknown correlation structures is of high scientific and technological relevance, but poses significant conceptual and numerical challenges. To address these, we develop the correlated signal inference (CSI) algorithm within information field theory (IFT) and discuss its numerical implementation. To this end, we introduce the free energy exploration (FrEE) strategy for numerical information field theory (NIFTy) applications. The FrEE strategy is to let the mathematical structure of the inference problem determine the dynamics of the numerical solver. FrEE uses the Gibbs free energy formalism for all involved unknown fields and correlation structures without marginalization of nuisance quantities. It thereby avoids the complexity that marginalization often imposes on IFT equations. FrEE simultaneously solves for the mean and the uncertainties of signal, nuisance, and auxiliary fields, while exploiting any analytically calculable quantity. Finally, FrEE uses a problem-specific and self-tuning exploration strategy to swiftly identify the optimal field estimates as well as their uncertainty maps. For all estimated fields, properly weighted posterior samples drawn from their exact, fully non-Gaussian distributions can be generated. Here, we develop the FrEE strategies for the CSI of a normal, a log-normal, and a Poisson log-normal IFT signal inference problem and demonstrate their performances via their NIFTy implementations.

Read this paper on arXiv…

T. Ensslin and J. Knollmuller
Wed, 28 Dec 16
31/46

Comments: 19 pages, 5 figures, submitted

Real-time kinematic positioning of LEO satellites using a single-frequency GPS receiver [IMA]

http://arxiv.org/abs/1611.04683


Due to their low cost and low power consumption, single-frequency GPS receivers are considered suitable for low-cost space applications such as small satellite missions. Recently, requirements have emerged for real-time accurate orbit determination at the sub-meter level in order to carry out onboard geocoding of high-resolution imagery, open-loop operation of altimeters and radio occultation. This study proposes an improved real-time kinematic positioning method for LEO satellites using single-frequency receivers. The C/A code and L1 phase are combined to eliminate ionospheric effects. The epoch-differenced carrier phase measurements are utilized to acquire receiver position changes which are further used to smooth the absolute positions. A kinematic Kalman filter is developed to implement kinematic orbit determination. Actual flight data from the Chinese small satellite SJ-9A are used to test the navigation performance. Results show that the proposed method outperforms the traditional kinematic positioning method in terms of accuracy. A 3D position accuracy of 0.72 m and 0.79 m has been achieved using the predicted portion of IGS ultra-rapid products and broadcast ephemerides, respectively.
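
Smoothing noisy but unambiguous code ranges with precise epoch-differenced carrier phase is the classic Hatch-filter idea; a one-dimensional sketch (illustrative only; the paper applies the smoothing to positions within a Kalman filter):

```python
import numpy as np

def hatch_filter(code, phase, window=100):
    """Carrier-smoothed code: propagate with phase increments, blend in code."""
    smoothed = np.empty_like(code)
    smoothed[0] = code[0]
    for i in range(1, len(code)):
        n = min(i + 1, window)
        # the constant carrier ambiguity cancels in the epoch difference
        predicted = smoothed[i - 1] + (phase[i] - phase[i - 1])
        smoothed[i] = code[i] / n + predicted * (n - 1) / n
    return smoothed

rng = np.random.default_rng(0)
true_range = 2.0e7 + 100.0 * np.sin(np.arange(600) / 50.0)
code = true_range + 0.5 * rng.standard_normal(600)             # m-level code noise
phase = true_range + 10.0 + 0.002 * rng.standard_normal(600)   # ambiguity + mm noise
print(np.std(code - true_range))                        # ~ 0.5 m raw
print(np.std(hatch_filter(code, phase) - true_range))   # strongly reduced
```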

Read this paper on arXiv…

P. Chen, J. Zhang and X. Sun
Wed, 16 Nov 16
32/64

Comments: 27 pages, 8 figures, ready for publication in GPS Solutions

Sparse image reconstruction on the sphere: analysis vs synthesis [CL]

http://arxiv.org/abs/1608.00553


We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularisation, exploiting sparsity in both axisymmetric and directional scale-discretised wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularisation problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353 GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
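
The two settings compared here differ in where the wavelet transform enters (generic notation assumed, with $\Phi$ the measurement operator and $\Psi$ the wavelet synthesis operator):

$$
\text{synthesis:}\quad \min_{\alpha}\ \tfrac{1}{2}\|y - \Phi\Psi\alpha\|_2^2 + \lambda\|\alpha\|_1,\quad x^\star = \Psi\alpha^\star;
\qquad
\text{analysis:}\quad \min_{x}\ \tfrac{1}{2}\|y - \Phi x\|_2^2 + \lambda\|\Psi^\dagger x\|_1 .
$$

The weighting of the l1 norm mentioned above multiplies individual wavelet coefficients inside the sparsity term.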

Read this paper on arXiv…

C. Wallis, Y. Wiaux and J. McEwen
Thu, 22 Sep 16
24/62

Comments: 11 pages, 6 Figures

Autonomous Orbit Determination via Kalman Filtering of Gravity Gradients [IMA]

http://arxiv.org/abs/1609.05225


Spaceborne gravity gradients are proposed in this paper to provide autonomous orbit determination capabilities for near Earth satellites. The gravity gradients contain useful position information which can be extracted by matching the observations with a precise gravity model. The extended Kalman filter is investigated as the principal estimator. The stochastic model of orbital motion, the measurement equation and the model configuration are discussed for the filter design. An augmented state filter is also developed to deal with unknown significant measurement biases. Simulations are conducted to analyze the effects of initial errors, data-sampling periods, orbital heights, attitude and gradiometer noise levels, and measurement biases. Results show that the filter performs well with additive white noise observation errors. Degraded observability for the along-track position is found for the augmented state filter. Real flight data from the GOCE satellite are used to test the algorithm. Radial and cross-track position errors of less than 100 m have been achieved.

Read this paper on arXiv…

X. Sun, P. Chen, C. Macabiau, et al.
Tue, 20 Sep 16
71/74

Comments: 29 pages, 15 figures, Ready for Publication in IEEE Transactions on Aerospace and Electronic Systems

Gravity Gradient Tensor Eigendecomposition for Spacecraft Positioning [IMA]

http://arxiv.org/abs/1608.03366


In this Note, a new approach to spacecraft positioning based on gravity gradient tensor (GGT) inversion is presented. The GGT is initially measured in the gradiometer reference frame (GRF) and then transformed to the Earth-Centered Earth-Fixed (ECEF) frame via attitude information as well as Earth rotation parameters. Matrix eigendecomposition is introduced to translate the GGT directly into position, based on the fact that the eigenvalues and eigenvectors of the GGT are simple functions of the spherical coordinates of the observation position. Unlike the strategy of inertial navigation aiding, no prediction or first guess of the spacecraft position is needed. The method makes use of the J2 gravity model, and is suitable for space navigation where higher-frequency terrain contributions to the GGT signals can be neglected.
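
For the monopole-only field the eigenstructure is explicit, which makes the inversion a three-line computation (a sketch of the principle; the paper works with the J2 model, which perturbs these simple expressions):

```python
import numpy as np

GM = 3.986004418e14          # Earth's gravitational parameter [m^3 s^-2]

def ggt_monopole(r_vec):
    """Gravity gradient tensor of a point-mass Earth at ECEF position r_vec."""
    r = np.linalg.norm(r_vec)
    return GM / r**5 * (3.0 * np.outer(r_vec, r_vec) - r**2 * np.eye(3))

def position_from_ggt(T):
    """Invert the GGT: eigenvalues are (2GM/r^3, -GM/r^3, -GM/r^3), and the
    eigenvector of the largest eigenvalue is the radial direction."""
    w, v = np.linalg.eigh(T)                   # ascending eigenvalues
    r = (2.0 * GM / w[-1]) ** (1.0 / 3.0)
    return r * v[:, -1]                        # eigenvector sign is ambiguous

r_true = np.array([4.0e6, 3.0e6, 4.0e6])
print(r_true)
print(position_from_ggt(ggt_monopole(r_true)))  # equal up to an overall sign
```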

Read this paper on arXiv…

P. Chen, X. Sun and C. Han
Fri, 12 Aug 16
1/38

Comments: 18 pages, 9 figures

Low-Earth Orbit Determination from Gravity Gradient Measurements [IMA]

http://arxiv.org/abs/1608.03367


An innovative orbit determination method which makes use of gravity gradients for Low-Earth-Orbiting satellites is proposed. The measurement principle of gravity gradiometry is briefly reviewed and the sources of measurement error are analyzed. An adaptive hybrid least squares batch filter based on linearization of the orbital equation and unscented transformation of the measurement equation is developed to estimate the orbital states and the measurement biases. The algorithm is tested with the actual flight data from the European Space Agency Gravity field and steady-state Ocean Circulation Explorer. The orbit determination results are compared with the GPS-derived orbits. The radial and cross-track position errors are on the order of tens of meters, whereas the along-track position error is over one order of magnitude larger. The gravity gradient based orbit determination method is promising for potential use in GPS-denied spacecraft navigation.

Read this paper on arXiv…

X. Sun, P. Chen, C. Macabiau, et al.
Fri, 12 Aug 16
20/38

Comments: 34 pages, 8 figures

Second-Generation Curvelets on the Sphere [CL]

http://arxiv.org/abs/1511.05578


Curvelets are efficient to represent highly anisotropic signal content, such as local linear and curvilinear structure. First-generation curvelets on the sphere, however, suffered from blocking artefacts. We present a new second-generation curvelet transform, where scale-discretised curvelets are constructed directly on the sphere. Scale-discretised curvelets exhibit a parabolic scaling relation, are well-localised in both spatial and harmonic domains, support the exact analysis and synthesis of both scalar and spin signals, and are free of blocking artefacts. We present fast algorithms to compute the exact curvelet transform, reducing computational complexity from $\mathcal{O}(L^5)$ to $\mathcal{O}(L^3\log_{2}{L})$ for signals band-limited at $L$. The implementation of these algorithms is made publicly available. Finally, we present an illustrative application demonstrating the effectiveness of curvelets for representing directional curve-like features in natural spherical images.

Read this paper on arXiv…

J. Chan, B. Leistedt, T. Kitching, et al.
Thu, 19 Nov 15
59/73

Comments: 10 pages, 7 figures, Code available at this http URL

Directional spin wavelets on the sphere [CL]

http://arxiv.org/abs/1509.06749


We construct a directional spin wavelet framework on the sphere by generalising the scalar scale-discretised wavelet transform to signals of arbitrary spin. The resulting framework is the only wavelet framework defined natively on the sphere that is able to probe the directional intensity of spin signals. Furthermore, directional spin scale-discretised wavelets support the exact synthesis of a signal on the sphere from its wavelet coefficients and satisfy excellent localisation and uncorrelation properties. Consequently, directional spin scale-discretised wavelets are likely to be of use in a wide range of applications and in particular for the analysis of the polarisation of the cosmic microwave background (CMB). We develop new algorithms to compute (scalar and spin) forward and inverse wavelet transforms exactly and efficiently for very large data-sets containing tens of millions of samples on the sphere. By leveraging a novel sampling theorem on the rotation group developed in a companion article, only half as many wavelet coefficients as alternative approaches need be computed, while still capturing the full information content of the signal under analysis. Our implementation of these algorithms is made publicly available.

Read this paper on arXiv…

J. McEwen, B. Leistedt, M. Buttner, et al.
Thu, 24 Sep 15
10/60

Comments: 11 pages, 7 figures. Code available on www.s2let.org

3D weak lensing with spin wavelets on the ball [CEA]

http://arxiv.org/abs/1509.06750


We construct the spin flaglet transform, a wavelet transform to analyse spin signals in three dimensions. Spin flaglets can probe signal content localised simultaneously in space and frequency and, moreover, are separable so that their angular and radial properties can be controlled independently. They are particularly suited to the analysis of cosmological observations such as the weak gravitational lensing of galaxies. Such observations have a unique 3D geometrical setting since they are natively made on the sky, have spin angular symmetries, and are extended in the radial direction by additional distance or redshift information. Flaglets are constructed in the harmonic space defined by the Fourier-Laguerre transform, previously defined for scalar functions and extended here to signals with spin symmetries. Thanks to various sampling theorems, both the Fourier-Laguerre and flaglet transforms are theoretically exact when applied to band-limited signals. In other words, in numerical computations the only loss of information is due to the finite representation of floating point numbers. We develop a 3D framework relating the weak lensing power spectrum to covariances of flaglet coefficients. We suggest that the resulting novel flaglet weak lensing estimator offers a powerful alternative to common 2D and 3D approaches to accurately capture cosmological information. While standard weak lensing analyses focus on either real or harmonic space representations (i.e., correlation functions or Fourier-Bessel power spectra, respectively), a wavelet approach inherits the advantages of both techniques, where both complicated sky coverage and uncertainties associated with the physical modelling of small scales can be handled effectively. Our codes to compute the Fourier-Laguerre and flaglet transforms are made publicly available.

Read this paper on arXiv…

B. Leistedt, J. McEwen, T. Kitching, et al.
Thu, 24 Sep 15
11/60

Comments: 24 pages, 4 figures

Localisation of directional scale-discretised wavelets on the sphere [CL]

http://arxiv.org/abs/1509.06767


Scale-discretised wavelets yield a directional wavelet framework on the sphere where a signal can be probed not only in scale and position but also in orientation. Furthermore, a signal can be synthesised from its wavelet coefficients exactly, in theory and practice (to machine precision). Scale-discretised wavelets are closely related to spherical needlets (both were developed independently at about the same time) but relax the axisymmetric property of needlets so that directional signal content can be probed. Needlets have been shown to satisfy important quasi-exponential localisation and asymptotic uncorrelation properties. We show that these properties also hold for directional scale-discretised wavelets on the sphere and derive similar localisation and uncorrelation bounds in both the scalar and spin settings. Scale-discretised wavelets can thus be considered as directional needlets.

Read this paper on arXiv…

J. McEwen, C. Durastanti and Y. Wiaux
Thu, 24 Sep 15
16/60

Comments: 29 pages, 8 figures

A novel sampling theorem on the rotation group [CL]

http://arxiv.org/abs/1508.03101


We develop a novel sampling theorem for functions defined on the three-dimensional rotation group SO(3) by associating the rotation group with the three-torus through a periodic extension. Our sampling theorem requires $4L^3$ samples to capture all of the information content of a signal band-limited at $L$, reducing the number of required samples by a factor of two compared to other equiangular sampling theorems. We present fast algorithms to compute the associated Fourier transform on the rotation group, the so-called Wigner transform, which scale as $O(L^4)$, compared to the naive scaling of $O(L^6)$. For the common case of a low directional band-limit $N$, complexity is reduced to $O(N L^3)$. Our fast algorithms will be of direct use in speeding up the computation of directional wavelet transforms on the sphere. We make our SO3 code implementing these algorithms publicly available.

Read this paper on arXiv…

J. McEwen, M. Buttner, B. Leistedt, et al.
Fri, 14 Aug 15
33/49

Comments: 5 pages, 2 figures

Complementary Lattice Arrays for Coded Aperture Imaging [CL]

http://arxiv.org/abs/1506.02160


In this work, we consider complementary lattice arrays in order to enable a broader range of designs for coded aperture imaging systems. We provide a general framework and methods that generate richer and more flexible designs than existing ones. In addition, we review and interpret the state-of-the-art uniformly redundant array (URA) designs, broaden the related concepts, and propose some new design methods.
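
For context, the classic URA family the paper builds on is generated from quadratic residues; the sketch below constructs the standard modified URA (MURA) mask of Gottesman & Fenimore for a prime size (one reference design, not the paper's new constructions):

```python
import numpy as np

def mura(p):
    """p x p modified uniformly redundant array, p prime."""
    residues = {(k * k) % p for k in range(1, p)}
    c = np.array([1 if i in residues else -1 for i in range(p)])
    a = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(p):
            if i == 0:
                a[i, j] = 0                  # first row closed
            elif j == 0:
                a[i, j] = 1                  # first column open
            elif c[i] * c[j] == 1:
                a[i, j] = 1                  # open where residue signs agree
    return a

mask = mura(13)
print(mask.sum(), "open elements out of", mask.size)  # roughly half open
```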

Read this paper on arXiv…

J. Ding, M. Noshad and V. Tarokh
Tue, 9 Jun 15
41/56

Comments: N/A

Meta learning of bounds on the Bayes classifier error [CL]

http://arxiv.org/abs/1504.07116


Meta learning uses information from base learners (e.g. classifiers or estimators) as well as information about the learning problem to improve upon the performance of a single base learner. For example, the Bayes error rate of a given feature space, if known, can be used to aid in choosing a classifier, as well as in feature selection and model selection for the base classifiers and the meta classifier. Recent work in the field of f-divergence functional estimation has led to the development of simple and rapidly converging estimators that can be used to estimate various bounds on the Bayes error. We estimate multiple bounds on the Bayes error using an estimator that applies meta learning to slowly converging plug-in estimators to obtain the parametric convergence rate. We compare the estimated bounds empirically on simulated data and then estimate the tighter bounds on features extracted from an image patch analysis of sunspot continuum and magnetogram images.
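
A concrete instance of such bounds (standard textbook material, not the paper's estimator): the Bhattacharyya coefficient sandwiches the Bayes error of two equiprobable classes, here unit-variance Gaussians:

```python
import numpy as np

# BC = exp(-(mu1 - mu0)^2 / (8 s^2)) for two equal-variance Gaussians, and
# (1 - sqrt(1 - BC^2)) / 2  <=  Pe  <=  BC / 2  for equal priors.
mu0, mu1, s = 0.0, 2.0, 1.0
bc = np.exp(-((mu1 - mu0) ** 2) / (8 * s**2))
lower = 0.5 * (1.0 - np.sqrt(1.0 - bc**2))
upper = 0.5 * bc

# Monte Carlo Bayes error of the optimal rule (threshold at the midpoint).
rng = np.random.default_rng(0)
x0 = rng.normal(mu0, s, 500_000)
x1 = rng.normal(mu1, s, 500_000)
mid = 0.5 * (mu0 + mu1)
pe = 0.5 * np.mean(x0 > mid) + 0.5 * np.mean(x1 < mid)
print(f"{lower:.4f} <= {pe:.4f} <= {upper:.4f}")
```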

Read this paper on arXiv…

K. Moon, V. Delouille and A. Hero
Tue, 28 Apr 15
36/70

Comments: 6 pages, 3 figures

The NIFTY way of Bayesian signal inference [IMA]

http://arxiv.org/abs/1412.7160


We introduce NIFTY, “Numerical Information Field Theory”, a software package for the development of Bayesian signal inference algorithms that operate independently from any underlying spatial grid and its resolution. A large number of Bayesian and Maximum Entropy methods for 1D signal reconstruction, 2D imaging, as well as 3D tomography, appear formally similar, but one often finds individualized implementations that are neither flexible nor easily transferable. Signal inference in the framework of NIFTY can be done in an abstract way, such that algorithms prototyped in 1D can be applied to real-world problems in higher-dimensional settings. As a versatile library, NIFTY is applicable to, and has already been applied in, 1D, 2D, 3D and spherical settings. A recent application is the D3PO algorithm, targeting the non-trivial task of denoising, deconvolving, and decomposing photon observations in high-energy astronomy.

Read this paper on arXiv…

M. Selig
Wed, 24 Dec 14
18/37

Comments: 6 pages, 2 figures, refereed proceeding of the 33rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2013), software available at this http URL and this http URL

A Novel, Fully Automated Pipeline for Period Estimation in the EROS 2 Data Set [IMA]

http://arxiv.org/abs/1412.1840


We present a new method to discriminate periodic from non-periodic irregularly sampled lightcurves. We introduce a periodic kernel and maximize a similarity measure derived from information theory to estimate the periods and a discriminator factor. We tested the method on a dataset containing 100,000 synthetic periodic and non-periodic lightcurves with various periods, amplitudes and shapes generated using a multivariate generative model. We correctly identified periodic and non-periodic lightcurves with a completeness of 90% and a precision of 95%, for lightcurves with a signal-to-noise ratio (SNR) larger than 0.5. We characterize the efficiency and reliability of the model using these synthetic lightcurves and applied the method to the EROS-2 dataset. A crucial consideration is the speed at which the method can be executed. Using a hierarchical search and some simplifications of the parameter search, we were able to analyze 32.8 million lightcurves in 18 hours on a cluster of GPGPUs. Using the sensitivity analysis on the synthetic dataset, we infer that 0.42% of the sources in the LMC and 0.61% in the SMC show periodic behavior. The training set, the catalogs and source code are all available in this http URL

Read this paper on arXiv…

P. Protopapas, P. Huijse, P. Estevez, et al.
Mon, 8 Dec 14
5/61

Comments: N/A

On spin scale-discretised wavelets on the sphere for the analysis of CMB polarisation [IMA]

http://arxiv.org/abs/1412.1340


A new spin wavelet transform on the sphere is proposed to analyse the polarisation of the cosmic microwave background (CMB), a spin $\pm 2$ signal observed on the celestial sphere. The scalar directional scale-discretised wavelet transform on the sphere is extended to analyse signals of arbitrary spin. The resulting spin scale-discretised wavelet transform probes the directional intensity of spin signals. A procedure is presented using this new spin wavelet transform to recover E- and B-mode signals from partial-sky observations of CMB polarisation.

Read this paper on arXiv…

J. McEwen, M. Buttner, B. Leistedt, et al.
Thu, 4 Dec 14
3/82

Comments: 4 pages, Proceedings IAU Symposium No. 306, 2014 (A. F. Heavens, J.-L. Starck, A. Krone-Martins eds.)

Application of Lossless Data Compression Techniques to Radio Astronomy Data flows [CL]

http://arxiv.org/abs/1405.5634


The modern practice of radio astronomy is characterized by extremes of data volume and rate, principally because of the direct relationship between the achievable signal-to-noise ratio and the need to Nyquist-sample the supporting RF bandwidth. The transport of these data flows is costly. By examining the statistical nature of typical data flows and applying well-known techniques from the field of Information Theory, the following work shows that lossless compression of typical radio astronomy data flows is in theory possible. The key parameter in determining the degree of compression possible is the standard deviation of the data. The practical application of compression could prove beneficial in reducing the costs of data transport and (arguably) storage for new-generation instruments such as the Square Kilometer Array.
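
The standard-deviation dependence follows from the entropy of quantized Gaussian noise: roughly $0.5\log_2(2\pi e) + \log_2(\sigma/\Delta)$ bits per sample for quantization step $\Delta$. A sketch comparing that bound with what a generic byte-oriented compressor actually achieves (zlib will not reach the bound, but lands well under the raw word length):

```python
import math
import zlib

import numpy as np

rng = np.random.default_rng(0)
sigma = 64.0                                 # std dev in quantization steps
samples = np.round(rng.normal(0.0, sigma, 1_000_000)).astype(np.int16)

h_theory = 0.5 * math.log2(2 * math.pi * math.e) + math.log2(sigma)
compressed = zlib.compress(samples.tobytes(), level=9)
bits_per_sample = 8 * len(compressed) / len(samples)
print(f"entropy bound: {h_theory:.2f} bits/sample")
print(f"zlib achieves: {bits_per_sample:.2f} of 16 raw bits/sample")
```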

Read this paper on arXiv…

T. Natusch
Fri, 23 May 14
12/44

Comments: In preparation for submission

Slepian Spatial-Spectral Concentration on the Ball [CL]

http://arxiv.org/abs/1403.5553


We formulate and solve the Slepian spatial-spectral concentration problem on the three-dimensional ball. Both the standard Fourier-Bessel and also the Fourier-Laguerre spectral domains are considered, since the latter exhibits a number of practical advantages (spectral decoupling and exact computation). The Slepian spatial and spectral concentration problems are formulated as eigenvalue problems, the eigenfunctions of which form an orthogonal family of concentrated functions. Equivalence between the spatial and spectral problems is shown. The spherical Shannon number on the ball is derived, which acts as the analog of the space-bandwidth product in the Euclidean setting, giving an estimate of the number of concentrated eigenfunctions and thus the dimension of the space of functions that can be concentrated in both the spatial and spectral domains simultaneously. Various symmetries of the spatial region are considered that reduce considerably the computational burden of recovering eigenfunctions, either by decoupling the problem into smaller subproblems or by affording analytic calculations. The family of concentrated eigenfunctions forms a Slepian basis that can be used to represent concentrated signals efficiently. We illustrate our results with numerical examples and show that the Slepian basis indeed permits a sparse representation of concentrated signals.
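
The 1D discrete analogue of this eigenvalue problem ships with SciPy as the discrete prolate spheroidal (Slepian) sequences, and it exhibits the same behaviour the abstract describes: concentration eigenvalues near 1 up to the Shannon number, then a plunge toward 0.

```python
import numpy as np
from scipy.signal.windows import dpss

N, NW = 128, 4                     # length and time-bandwidth product
K = 12                             # request more tapers than the Shannon number
tapers, ratios = dpss(N, NW, Kmax=K, return_ratios=True)

# Shannon number ~ 2*NW = 8: eigenvalues ~1 up to it, then drop sharply.
print(np.round(ratios, 6))
```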

Read this paper on arXiv…

Z. Khalid, R. Kennedy and J. McEwen
Mon, 24 Mar 14
49/50

On the art and theory of self-calibration [IMA]

http://arxiv.org/abs/1312.1349


Calibration is the process of inferring how much measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate the “art” of self-calibration, which augments an external calibration solution using a known reference signal with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration. This can be understood in terms of maximizing their joint probability. Thus, the full uncertainty structure of this probability around its maximum is not taken into account by these schemes. Therefore better schemes, in the sense of minimal squared error, can be designed that also reflect the uncertainties of signal and calibration reconstructions. We argue that at least the signal uncertainty should not be neglected in typical measurement situations, since the calibration solutions suffer from a systematic bias otherwise, which consequently distorts the signal reconstruction. Furthermore, we argue that non-parametric, signal-to-noise filtered calibration should provide more accurate reconstructions than the common bin averages, and we provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.
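
A toy of the self-consistent alternation that contemporary self-cal performs (a deliberately simple least-squares sketch, not the paper's improved scheme): several channels observe the same signal with unknown gains, and the scheme alternates signal and gain solutions, anchoring the degenerate overall scale to the external calibration.

```python
import numpy as np

rng = np.random.default_rng(0)
A, n = 5, 300                            # channels (antennas) and samples
g_true = rng.uniform(0.8, 1.2, A)        # unknown per-channel gains
s_true = rng.standard_normal(n)
d = g_true[:, None] * s_true + 0.1 * rng.standard_normal((A, n))

g = np.ones(A)                           # external calibration starting point
for _ in range(20):
    s = (g @ d) / (g @ g)                # least-squares signal given gains
    g = (d @ s) / (s @ s)                # least-squares gain given signal
    g /= g.mean()                        # fix the gain/signal scale degeneracy
print(np.round(g_true / g_true.mean(), 3))
print(np.round(g, 3))
```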

Read this paper on arXiv…

Fri, 6 Dec 13
27/55

D3PO – Denoising, Deconvolving, and Decomposing Photon Observations [IMA]

http://arxiv.org/abs/1311.1888


The analysis of astronomical images is a non-trivial task. The D3PO algorithm addresses the inference problem of denoising, deconvolving, and decomposing photon observations. The primary goal is the simultaneous reconstruction of the diffuse and point-like photon flux from a given photon count image. In order to discriminate between these morphologically different signal components, a probabilistic algorithm is derived in the language of information field theory based on a hierarchical Bayesian parameter model. The signal inference exploits prior information on the spatial correlation structure of the diffuse component and the brightness distribution of the spatially uncorrelated point-like sources. A maximum a posteriori solution and a solution minimizing the Gibbs free energy of the inference problem using variational Bayesian methods are discussed. Since the derivation of the solution does not depend on the underlying position space, the implementation of the D3PO algorithm uses the NIFTY package to ensure operationality on various spatial grids and at any resolution. The fidelity of the algorithm is validated by the analysis of simulated data, including a realistic high energy photon count image showing a 32 x 32 arcmin^2 observation with a spatial resolution of 0.1 arcmin. In all tests the D3PO algorithm successfully denoised, deconvolved, and decomposed the data into a diffuse and a point-like signal estimate for the respective photon flux components.

Read this paper on arXiv…

Mon, 11 Nov 13
31/39