Article

Use of the MULTINEST algorithm for gravitational wave data analysis

Classical and Quantum Gravity (Impact Factor: 3.17), 26(21), April 2009. DOI: 10.1088/0264-9381/26/21/215003
Source: arXiv
ABSTRACT
We describe an application of the MultiNest algorithm to gravitational wave data analysis. MultiNest is a multimodal nested sampling algorithm designed to efficiently evaluate the Bayesian evidence and return posterior probability densities for likelihood surfaces containing multiple secondary modes. The algorithm employs a set of live points which are updated by partitioning the set into multiple overlapping ellipsoids and sampling uniformly from within them. This set of live points climbs up the likelihood surface through nested iso-likelihood contours, and the evidence and posterior distributions can be recovered from the evolution of the point set. The algorithm is model-independent in the sense that the specific problem being tackled enters only through the likelihood computation and does not change how the live point set is updated. In this paper, we consider the use of the algorithm for gravitational wave data analysis by searching a simulated LISA data set containing two non-spinning supermassive black hole binary signals. The algorithm is able to rapidly identify all the modes of the solution and recover the true parameters of the sources to high precision. Comment: 18 pages, 4 figures, submitted to Class. Quantum Grav.; v2 includes various changes in light of the referee's comments
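The sampling scheme the abstract describes can be illustrated with a minimal nested-sampling sketch. The toy Gaussian likelihood, the parameter choices, and the use of plain rejection sampling from the prior (in place of MultiNest's ellipsoidal decomposition of the live set) are our own illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta):
    # Toy unimodal Gaussian likelihood centred at 0.5 with width 0.01
    # (a stand-in for a real gravitational-wave likelihood).
    return -0.5 * np.sum((theta - 0.5) ** 2) / 0.01 ** 2

def nested_sampling(n_live=50, n_iter=300, ndim=2):
    """Estimate log-evidence log(Z) by climbing nested likelihood contours.

    Replacement points are drawn by rejection sampling from the unit-cube
    prior; MultiNest instead samples uniformly from overlapping ellipsoids
    fitted to the live set, which is what makes it efficient and multimodal.
    """
    live = rng.random((n_live, ndim))              # live points ~ prior
    live_logL = np.array([log_likelihood(p) for p in live])
    log_Z, log_X = -np.inf, 0.0                    # evidence, prior volume left
    for i in range(n_iter):
        worst = np.argmin(live_logL)               # lowest-likelihood point
        log_X_new = -(i + 1) / n_live              # expected volume shrinkage
        log_w = np.log(np.exp(log_X) - np.exp(log_X_new))
        log_Z = np.logaddexp(log_Z, live_logL[worst] + log_w)
        # Replace the worst point with a prior draw inside the contour.
        while True:
            cand = rng.random(ndim)
            cand_logL = log_likelihood(cand)
            if cand_logL > live_logL[worst]:
                live[worst], live_logL[worst] = cand, cand_logL
                break
        log_X = log_X_new
    # Remaining live points contribute mean(L) times the leftover volume.
    lmax = live_logL.max()
    log_Z = np.logaddexp(
        log_Z, lmax + np.log(np.mean(np.exp(live_logL - lmax))) + log_X)
    return log_Z
```

For this toy problem the analytic evidence is Z = 2πσ² ≈ 6.3 × 10⁻⁴ (log Z ≈ −7.4), which the sketch should recover to within the usual O(1/√n_live) scatter of nested sampling.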
  • Source
    • "The reported computational cost of the Bayesian algorithm appears to be significantly higher than that of either of the Frequentist methods. For example, 48 cores are used in Taylor et al. (2014) to run a parallelized implementation of the MultiNest algorithm (Feroz et al. 2009), and the analysis is reported to typically take up to 45 minutes to complete at a network S/N ρ_n = 10. The computational advantage of MaxPhase may become an increasingly important consideration as the number of pulsars in a PTA increases."
    ABSTRACT: Supermassive black hole binaries are one of the primary targets for gravitational wave searches using pulsar timing arrays. Gravitational wave signals from such systems are well represented by parametrized models, allowing the standard Generalized Likelihood Ratio Test (GLRT) to be used for their detection and estimation. However, there is a dichotomy in how the GLRT can be implemented for pulsar timing arrays: there are two possible ways in which one can split the set of signal parameters for semi-analytical and numerical extremization. The straightforward extension of the method used for continuous signals in ground-based gravitational wave searches, where the so-called pulsar phase parameters are maximized numerically, was addressed in an earlier paper (Wang et al. 2014). In this paper, we report the first study of the performance of the second approach where the pulsar phases are maximized semi-analytically. This approach is scalable since the number of parameters left over for numerical optimization does not depend on the size of the pulsar timing array. Our results show that, for the same array size (9 pulsars), the new method performs somewhat worse in parameter estimation, but not in detection, than the previous method where the pulsar phases were maximized numerically. The origin of the performance discrepancy is likely to be in the ill-posedness that is intrinsic to any network analysis method. However, scalability of the new method allows the ill-posedness to be mitigated by simply adding more pulsars to the array. This is shown explicitly by taking a larger array of pulsars.
    Full-text · Article · Jun 2015 · The Astrophysical Journal
  • Source
    • "Some methods focus on making individual inner products computationally cheaper: this may be achieved across regions of parameter space through direct interpolation [7] [8], or more generally by using a reduced order quadrature [9] [10]. Other methods seek to reduce the number of required inner products: either by accelerating the convergence to correlation maxima in a stochastic search [11] [12] [13], or through compressed-basis decomposition of the template bank in a grid search [14–16]."
    ABSTRACT: One strategy for reducing the computational cost of matched-filter searches for gravitational wave sources is to introduce a compressed basis for the waveform template bank in a grid-based search. In this paper, we propose and investigate several tunable compression schemes that slide between maximal sensitivity and maximal compression; these might be useful for the fast detection and identification of sources with moderate to high signal-to-noise ratios. Lossless compression schemes offer automatic identification of the signal upon detection, but their accuracy is significantly reduced in the presence of noise. A lossy scheme that uses a straightforward partition of the template bank is found to yield better detection and identification performance at the same level of compression.
    Preview · Article · Apr 2015
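The compressed-basis idea quoted above can be sketched generically. The sinusoidal toy bank, the rank-k SVD truncation, and every name below are our own illustrative assumptions; the paper's tunable lossless and lossy schemes are more elaborate than this plain projection:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_templates, k = 512, 64, 32
t = np.linspace(0.0, 1.0, n_samples)

# Toy template bank: unit-norm sinusoids on a fine frequency grid,
# deliberately redundant so a low-rank basis can capture most of it.
freqs = np.linspace(10.0, 20.0, n_templates)
bank = np.array([np.sin(2.0 * np.pi * f * t) for f in freqs])
bank /= np.linalg.norm(bank, axis=1, keepdims=True)

# Compress: keep the top-k right singular vectors as an orthonormal basis.
_, _, Vt = np.linalg.svd(bank, full_matrices=False)
basis = Vt[:k]                        # (k, n_samples) compressed basis
proj_bank = bank @ basis.T            # template coefficients, precomputed offline

# Simulated data: one bank template at 5x amplitude plus white noise.
data = 5.0 * bank[20] + rng.normal(0.0, 1.0, n_samples)

full_snr = bank @ data                # n_templates inner products online
coeffs = basis @ data                 # only k inner products online
compressed_snr = proj_bank @ coeffs   # statistics rebuilt from k coefficients
```

The online cost drops from n_templates to k data inner products, at the price of discarding whatever template power lies outside the rank-k basis; sliding k trades sensitivity against compression, which is the tunable knob the abstract refers to.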
  • Source
    • "This method, proposed by Skilling (2004), uses an unusual change of variables to calculate the model evidence. It has recently seen a surge of interest; e.g. for cosmological model fitting (Mukherjee et al. 2006; Mukherjee & Parkinson 2008), and the analysis of simulated gravitational wave data (Aylott et al. 2009; Feroz et al. 2009). The key idea of nested sampling is to populate the allowed prior space with a large number (∼ 100) of 'active points' which are initially chosen at random and subsequently evolved towards ensemble states of successively higher likelihood using MCMC methods. "
    ABSTRACT: This paper revisits a sample of ultracool dwarfs in the Solar neighborhood previously observed with the Hubble Space Telescope's NICMOS NIC1 instrument. We have applied a novel high angular resolution data analysis technique based on the extraction and fitting of kernel phases to archival data. This was found to deliver a dramatic improvement over earlier analysis methods, permitting a search for companions down to projected separations of $\sim$1 AU on NIC1 snapshot images. We reveal five new close binary candidates and present revised astrometry on previously known binaries, all of which were recovered with the technique. The new candidate binaries have sufficiently close separation to determine dynamical masses in a short-term observing campaign. We also present four marginal detections of objects which may be very close binaries or high contrast companions. Including only confident detections within 19 parsecs, we report a binary fraction of at least $\epsilon_b = 17.2^{+5.7}_{-3.7}\%$. The results reported here provide new insights into the population of nearby ultracool binaries, while also offering an incisive case study of the benefits conferred by the kernel phase approach in the recovery of companions within a few resolution elements of the PSF core.
    Full-text · Article · Feb 2013 · The Astrophysical Journal