Publication History

  • ABSTRACT: We extend the framework of Inductive Logic to Second Order languages and introduce Wilmers' Principle, a rational principle for probability functions on Second Order languages. We derive a representation theorem for functions satisfying this principle and investigate its relationship to the first order principles of Regularity and Super Regularity.
    Journal of Applied Logic 07/2014; DOI:10.1016/j.jal.2014.07.002
  • ABSTRACT: Linear least squares problems are commonly solved by QR factorization. When multiple solutions need to be computed with only minor changes in the underlying data, knowledge of the difference between the old data set and the new can be used to update an existing factorization at reduced computational cost. We investigate the viability of implementing QR updating algorithms on GPUs and demonstrate that GPU-based updating for removing columns achieves speed-ups of up to 13.5x compared with full GPU QR factorization. We characterize the conditions under which other types of updates also achieve speed-ups.
    Parallel Computing 07/2014; 40(7). DOI:10.1016/j.parco.2014.03.003
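The column-removal update described in the abstract can be sketched on the CPU with NumPy (a minimal illustration of the Givens-rotation idea, not the paper's GPU implementation; the function name is ours):

```python
import numpy as np

def qr_delete_col(Q, R, k):
    """Update a thin QR factorization A = Q R after deleting column k of A.

    Deleting a column leaves R upper Hessenberg from column k on; a short
    sequence of Givens rotations restores triangular form, which is far
    cheaper than refactorizing from scratch.
    """
    R = np.delete(R, k, axis=1)  # n x (n-1), Hessenberg from column k on
    Q = Q.copy()
    n = R.shape[0]
    for j in range(k, R.shape[1]):
        a, b = R[j, j], R[j + 1, j]
        r = np.hypot(a, b)
        if r == 0.0:
            continue
        c, s = a / r, b / r
        G = np.array([[c, s], [-s, c]])
        R[j:j + 2, j:] = G @ R[j:j + 2, j:]  # rotate rows j, j+1 of R
        Q[:, j:j + 2] = Q[:, j:j + 2] @ G.T  # apply inverse rotation to Q
    return Q[:, :n - 1], R[:n - 1, :]       # last row of R is now zero

# usage: compare against deleting the column from A directly
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
Q, R = np.linalg.qr(A)
Q1, R1 = qr_delete_col(Q, R, 2)
A1 = np.delete(A, 2, axis=1)
print(np.allclose(Q1 @ R1, A1))  # True
```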
  • ABSTRACT: The need to estimate structured covariance matrices arises in a variety of applications and the problem is widely studied in statistics. A new method is proposed for regularizing the covariance structure of a given covariance matrix whose underlying structure has been blurred by random noise, particularly when the dimension of the covariance matrix is high. The regularization is made by choosing an optimal structure from an available class of covariance structures in terms of minimizing the discrepancy, defined via the entropy loss function, between the given matrix and the class. A range of potential candidate structures comprising tridiagonal Toeplitz, compound symmetry, AR(1), and banded Toeplitz is considered. It is shown that for the first three structures local or global minimizers of the discrepancy can be computed by one-dimensional optimization, while for the fourth structure Newton’s method enables efficient computation of the global minimizer. Simulation studies are conducted, showing that the proposed new approach provides a reliable way to regularize covariance structures. The approach is also applied to real data analysis, demonstrating the usefulness of the proposed approach in practice.
    Computational Statistics & Data Analysis 04/2014; 72:315–327. DOI:10.1016/j.csda.2013.10.004
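Structure selection by entropy loss can be illustrated for two of the candidate classes, AR(1) and compound symmetry (a toy NumPy sketch; the grid search below stands in for the one-dimensional optimization the abstract refers to, and the helper names are ours):

```python
import numpy as np

def entropy_loss(Sigma, S):
    """Entropy loss tr(Sigma^{-1} S) - log det(Sigma^{-1} S) - p;
    zero exactly when Sigma = S, so smaller means a closer fit."""
    p = S.shape[0]
    M = np.linalg.solve(Sigma, S)
    sign, logdet = np.linalg.slogdet(M)
    return np.trace(M) - logdet - p

def ar1(rho, p):
    """AR(1) correlation structure: Sigma_ij = rho**|i - j|."""
    idx = np.arange(p)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def compound_symmetry(rho, p):
    """Compound symmetry: ones on the diagonal, rho off the diagonal."""
    return (1 - rho) * np.eye(p) + rho * np.ones((p, p))

def best_fit(S, builder, grid):
    """One-dimensional grid search over the structure parameter."""
    losses = [entropy_loss(builder(r, S.shape[0]), S) for r in grid]
    i = int(np.argmin(losses))
    return grid[i], losses[i]

# usage: an AR(1) target is matched by the AR(1) class, not by CS
p, grid = 6, np.linspace(0.05, 0.9, 18)
S = ar1(0.5, p)
rho_ar, loss_ar = best_fit(S, ar1, grid)
rho_cs, loss_cs = best_fit(S, compound_symmetry, grid)
print(round(rho_ar, 2), loss_ar < loss_cs)  # 0.5 True
```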
  • ABSTRACT: We investigate the effect of thermal expansion and gravity on the propagation of a triple flame in a horizontal channel with porous walls, where the fuel and oxidiser concentrations are prescribed. The triple flame therefore propagates in a direction perpendicular to the direction of gravity, a configuration that does not seem to have received any dedicated investigation in the literature. In particular, we examine the effect of the non-dimensional flame-front thickness ∊ on the propagation speed U of the triple flame for different values of the thermal expansion coefficient α and the Rayleigh number Ra. When gravity is not accounted for (Ra = 0), and for small values of ∊, the numerically calculated propagation speed is found to agree with predictions made in previous studies based on scaling laws [1]. We show that the well known monotonic relationship between U and ∊, which exists in the constant density case when the Lewis numbers are of order unity or larger, persists for triple flames undergoing thermal expansion. Under strong enough gravitational effects (Ra ≫ 1), however, the relationship is no longer found to be monotonic. The relationship between the Rayleigh number and the propagation speed is shown to vary qualitatively depending on the value of ∊ chosen, exhibiting hysteresis if ∊ is small enough and displaying local maxima, local minima or monotonic behaviour for other values of ∊. All of the steady solutions presented in the paper have been found to be stable, except for those on the middle branches of the hysteresis curves.
    Combustion and Flame 12/2013; 160(12):2800–2809. DOI:10.1016/j.combustflame.2013.06.017
  • ABSTRACT: We propose an Analogy Principle in the context of Unary Inductive Logic and characterize the probability functions which satisfy it. In particular, in the case of a language with just two predicates, the probability functions satisfying this principle correspond to solutions of Skyrms’ ‘Wheel of Fortune’.
    Annals of Pure and Applied Logic 12/2013; 164(12):1293–1321. DOI:10.1016/j.apal.2013.06.013
  • ABSTRACT: We consider the problem of how to cloak objects from antiplane elastic waves using two alternative techniques. The first is the use of a layered metamaterial in the spirit of the work of Torrent and Sanchez-Dehesa (2008) who considered acoustic cloaks, motivated by homogenization theories, whilst the second is the use of a hyperelastic cloak in the spirit of the work of Parnell et al. (2012). We extend the hyperelastic cloaking theory to the case of a Mooney–Rivlin material since this is often considered to be a more realistic constitutive model of rubber-like media than the neo-Hookean case studied by Parnell et al. (2012), certainly at the deformations required to produce a significant cloaking effect. Although not perfect, the Mooney–Rivlin material appears to be a reasonable hyperelastic cloak. This is clearly encouraging for applications. We quantify the effectiveness of the various cloaks considered by plotting the scattering cross section as a function of frequency, noting that this would be zero for a perfect cloak.
    Wave Motion 11/2013; 50(7):1140-1152. DOI:10.1016/j.wavemoti.2013.06.006
  • ABSTRACT: In this article we address the problem of the nonlinear interaction of subdiffusive particles. We introduce a random walk model in which statistical characteristics of a random walker, such as the escape rate and jump distribution, depend on the mean density of particles. We derive a set of nonlinear subdiffusive fractional master equations and consider their diffusion approximations. We show that these equations describe the transition from an intermediate subdiffusive regime to an asymptotically normal advection-diffusion transport regime. This transition is governed by a nonlinear tempering parameter that generalizes the standard linear tempering. We illustrate the general results with examples from cell and population biology. We find that a nonuniform anomalous exponent has a strong influence on the aggregation phenomenon.
    Physical Review E 09/2013; 88(3-1):032104. DOI:10.1103/PhysRevE.88.032104
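The density-dependent escape rate at the heart of the model can be mimicked with a simple lattice master equation (purely illustrative; the rate law below is our own stand-in, not the paper's fractional equations):

```python
import numpy as np

def step(p, escape):
    """One update of a lattice master equation whose escape rate depends on
    the local density (an illustrative stand-in for the paper's model)."""
    out = escape(p) * p  # mass leaving each site this step
    # each site receives half of what its two neighbours emit (periodic)
    return p - out + 0.5 * (np.roll(out, 1) + np.roll(out, -1))

# usage: crowding suppresses escape (rate falls with density); total mass
# is conserved and the profile stays non-negative
escape = lambda rho: 0.4 / (1.0 + 5.0 * rho)  # illustrative rate law
p = np.zeros(50)
p[25] = 1.0
for _ in range(200):
    p = step(p, escape)
print(np.isclose(p.sum(), 1.0), bool((p >= 0).all()))  # True True
```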
  • ABSTRACT: In this paper, we propose a reduced version of the new modified Weibull (NMW) distribution due to Almalki and Yuan in order to avoid some estimation problems. The NMW distribution has five parameters; the reduced version has three. We study mathematical properties as well as maximum likelihood estimation of the reduced version. Four real data sets (two of them complete and the other two censored) are used to compare the flexibility of the reduced version versus the NMW distribution. It is shown that the reduced version has the same desirable properties of the NMW distribution in spite of having two fewer parameters. The NMW distribution did not provide a significantly better fit than the reduced version for any of the four data sets.
    Reliability Engineering & System Safety 07/2013; 111. DOI:10.1016/j.ress.2012.10.018
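Maximum likelihood fitting of a three-parameter modified Weibull can be sketched as follows (hedged illustration: the cumulative hazard H(t) = a·t^b·e^(λt) used here is a generic modified Weibull stand-in, not necessarily the paper's exact reduced NMW form):

```python
import numpy as np
from scipy.optimize import minimize

def nll(params, t):
    """Negative log-likelihood for a lifetime model with cumulative hazard
    H(t) = a * t**b * exp(lam * t); the density is h(t) * exp(-H(t)) with
    hazard h(t) = a * t**(b-1) * exp(lam*t) * (b + lam*t)."""
    a, b, lam = params
    if a <= 0 or b <= 0 or lam < 0:
        return np.inf
    H = a * t**b * np.exp(lam * t)
    log_h = np.log(a) + (b - 1) * np.log(t) + lam * t + np.log(b + lam * t)
    return -(log_h.sum() - H.sum())

def invert_H(target, a, b, lam, hi=50.0):
    """Solve H(t) = target by bisection (H is increasing in t)."""
    lo = 1e-9
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if a * mid**b * np.exp(lam * mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# usage: simulate lifetimes by inverse transform, then fit by ML
rng = np.random.default_rng(1)
true = (0.5, 1.5, 0.1)
t = np.array([invert_H(-np.log(1 - u), *true) for u in rng.uniform(size=300)])
start = np.array([1.0, 1.0, 0.05])
fit = minimize(nll, start, args=(t,), method="Nelder-Mead")
print(nll(fit.x, t) <= nll(start, t))  # the fit never worsens the likelihood
```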
  • ABSTRACT: Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the object being imaged, and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated, enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
    Physics in Medicine and Biology 07/2013; 58(15):5061-5083. DOI:10.1088/0031-9155/58/15/5061
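The image-based EM sub-problem that the nested scheme iterates is, in image space, a Richardson-Lucy style multiplicative update; a 1-D sketch (illustrative only, not the paper's reconstruction code):

```python
import numpy as np

def blur(x, psf):
    return np.convolve(x, psf, mode="same")

def em_deconv(b, x, psf, iters):
    """Image-space EM (Richardson-Lucy) iterations for a 1-D blur: the
    deconvolution sub-problem the nested algorithm interleaves with the
    tomographic EM update."""
    sens = blur(np.ones_like(b), psf[::-1])      # adjoint applied to ones
    for _ in range(iters):
        ratio = b / np.maximum(blur(x, psf), 1e-12)
        x = x / sens * blur(ratio, psf[::-1])    # multiplicative EM update
    return x

def poisson_nll(b, x, psf):
    """Poisson negative log-likelihood (up to a constant)."""
    m = np.maximum(blur(x, psf), 1e-12)
    return (m - b * np.log(m)).sum()

# usage: more EM iterations never worsen the Poisson likelihood
psf = np.array([0.25, 0.5, 0.25])
truth = np.zeros(40)
truth[18:22] = 10.0
b = blur(truth, psf)
x0 = np.ones_like(b)
x1 = em_deconv(b, x0, psf, 1)
x20 = em_deconv(b, x0, psf, 20)
print(poisson_nll(b, x20, psf) <= poisson_nll(b, x1, psf))  # True
```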
  • ABSTRACT: We present an experimental and computational pipeline for the generation of kinetic models of metabolism, and demonstrate its application to glycolysis in Saccharomyces cerevisiae. Starting from an approximate mathematical model, we employ a "cycle of knowledge" strategy, identifying the steps with most control over flux. Kinetic parameters of the individual isoenzymes within these steps are measured experimentally under a standardised set of conditions. Experimental strategies are applied to establish a set of in vivo concentrations for isoenzymes and metabolites. The data are integrated into a mathematical model that is used to predict a new set of metabolite concentrations and reevaluate the control properties of the system. This bottom-up modelling study reveals that control over the metabolic network most directly involved in yeast glycolysis is more widely distributed than previously thought.
    FEBS letters 07/2013; 587(17). DOI:10.1016/j.febslet.2013.06.043
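The "steps with most control over flux" are ranked by flux control coefficients, which can be computed numerically for a toy two-step pathway (an illustration of the concept, not the paper's glycolysis model):

```python
def steady_flux(k1, k2, x0=1.0):
    """Steady-state flux of a toy linear pathway X0 -(k1)-> S -(k2)-> :
    J = k1 * k2 * x0 / (k1 + k2)."""
    return k1 * k2 * x0 / (k1 + k2)

def control_coefficient(i, rates, h=1e-6):
    """Flux control coefficient C_i = (k_i / J) * dJ/dk_i, estimated by a
    central difference on a relative perturbation of rate constant i."""
    up, dn = list(rates), list(rates)
    up[i] *= 1 + h
    dn[i] *= 1 - h
    J = steady_flux(*rates)
    return (steady_flux(*up) - steady_flux(*dn)) / (2 * h * J)

# usage: the slow step (k2) carries most of the control, and the
# summation theorem C1 + C2 = 1 holds
rates = (2.0, 0.5)
C = [control_coefficient(i, rates) for i in range(2)]
print([round(c, 3) for c in C])  # [0.2, 0.8]
```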

Top publications last week

Numerical Methods for Fluid Dynamics II, edited by KW Morton and MJ Baines, 07/1986, pages 671-679; Oxford University Press. ISBN: 0-19-853610-0
Physica D: Nonlinear Phenomena 06/1986; 20:217-236. DOI:10.1016/0167-2789(86)90031-X