Article

Bayesian and non-Bayesian probabilistic models for medical image analysis

Imaging Science and Biomedical Engineering Division, Medical School, University of Manchester, Stopford Building, Oxford Road, Manchester M13 9PT, UK
Image and Vision Computing (Impact Factor: 1.58). 09/2003; DOI: 10.1016/S0262-8856(03)00072-6
Source: DBLP

ABSTRACT: Bayesian approaches to data analysis are popular in machine vision, and yet the main advantage of Bayes theory, the ability to incorporate prior knowledge in the form of the prior probabilities, may lead to problems in some quantitative tasks. In this paper we demonstrate examples of Bayesian and non-Bayesian techniques from the area of magnetic resonance image (MRI) analysis. Issues raised by these examples are used to illustrate difficulties in Bayesian methods and to motivate an approach based on frequentist methods. We believe this approach to be more suited to quantitative data analysis, and provide a general theory for the use of these methods in learning (Bayes risk) systems and for data fusion. Proofs are given for the more novel aspects of the theory. We conclude with a discussion of the strengths and weaknesses, and the fundamental suitability, of Bayesian and non-Bayesian approaches for MRI analysis in particular, and for machine vision systems in general.
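
To make the tension described in the abstract concrete, here is a minimal toy sketch (not taken from the paper) contrasting a likelihood-only decision with a Bayesian posterior for a single ambiguous voxel. The Gaussian tissue models and the prior weights are assumed values chosen purely for illustration; the example simply shows how a strong prior can dominate the answer when the data alone are uninformative, which is the property that can be problematic for quantitative tasks.

# Illustrative sketch only: a two-tissue voxel classification toy example
# showing how an assumed prior shifts a Bayesian posterior away from the
# likelihood-only (frequentist-style) answer. Class means, widths and
# priors below are made-up numbers, not values from the paper.
import numpy as np
from scipy.stats import norm

# Hypothetical Gaussian intensity models for two tissue classes.
classes = {
    "grey_matter":  norm(loc=100.0, scale=10.0),
    "white_matter": norm(loc=120.0, scale=10.0),
}
priors = {"grey_matter": 0.8, "white_matter": 0.2}  # assumed prior knowledge

intensity = 110.0  # observed voxel intensity, equidistant from both class means

# Class-conditional likelihoods p(x | class)
lik = {k: m.pdf(intensity) for k, m in classes.items()}

# Likelihood-only normalisation: no prior weighting.
z_lik = sum(lik.values())
p_lik = {k: v / z_lik for k, v in lik.items()}

# Bayesian posterior p(class | x) proportional to p(x | class) * p(class)
post = {k: lik[k] * priors[k] for k in classes}
z_post = sum(post.values())
p_post = {k: v / z_post for k, v in post.items()}

print("likelihood-only:", p_lik)   # ~50/50: the data alone cannot decide
print("with prior:     ", p_post)  # the prior tips the balance to grey matter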

  • Source
    ABSTRACT: This document is dedicated to my grandmother, who taught me to play cards and who died on 21/10/2007, aged 95. Defining Probability for Science. Preface: For many years I have recommended the short reference on statistics [2] and similar introductory texts to my students and researchers. Barlow's book can be considered a fair reflection of the mainstream view of the topic [13]. Yet despite this, I generally gave the caveat that I disagreed with some parts of the book, and indeed with the conventional view. On a recent reading of the book I concluded that many of my objections were confined to chapter 7, in particular the discussion regarding definitions of probability. I therefore decided to write this document to provide a record of how I would have presented these sections, for distribution within our group. The document explains a physics-based approach motivated by the scientific considerations of uniqueness, falsifiability and quantitation. These considerations are intended to eliminate aspects of 'black magic' or arbitrariness, a view which seems to me to be important yet lacking from general texts. It summarises what I regard as the reasons I work as I do when designing and testing algorithms and systems for computer vision and image processing. Although this document is self-contained, the interested reader might wish to look at the original version first, before reading mine. You would then be in a good position to decide whether to continue to take the conventional view of the topic, or to take the rather bold step of being more critical and forming your own conclusions.
  • Source
    ABSTRACT: One of the most common problems in image analysis is the estimation and removal of noise or other artefacts (e.g.
  • Source
    ABSTRACT: We present a new algorithm for the feature-space based segmentation of medical image volumes, based on a unified mathematical framework that incorporates both intensity and local gradient information. The algorithm addresses the problem of partial volume tissue estimation and is capable of using multiple image volumes, and the associated multi-dimensional image gradient, to increase tissue separability. Clustering is performed in the combined intensity and gradient histogram, followed by the use of Bayes theory to generate probability maps showing the most likely tissue volume fractions within each voxel, rather than a classification to a single tissue type. The approach also supports reconstruction of images from the estimates of volumetric voxel contents and the tissue model parameters. Evaluation of the algorithm comprised three stages. First, objective measurements of segmentation accuracy, and the increase in accuracy when local gradient information was included in the feature space, were produced using simulated magnetic resonance (MR) images of the normal brain. Second, application to clinical MR data was demonstrated using an exemplar medical problem, the measurement of cerebrospinal fluid (CSF) volume in 70 normal volunteers, through comparison to a "bronze-standard" consisting of previously published measurements. Third, the accuracy of the multi-dimensional approach was demonstrated by assessing the errors on reconstructed images produced from the segmentation result. We conclude that the inclusion of gradient information in the feature space can result in significant improvements in segmentation accuracy compared to the use of intensity information alone.
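
As a rough illustration of the kind of computation described in the abstract above, the following sketch builds soft tissue probability maps from a joint (intensity, gradient-magnitude) feature space using Bayes' theorem. It is not the authors' implementation: the tissue class means, covariances and priors are hypothetical placeholders, whereas in the method described above they would be obtained by clustering the combined intensity/gradient histogram.

# Minimal sketch (not the authors' implementation): per-voxel tissue
# probability maps from a joint (intensity, gradient-magnitude) feature
# space using Bayes' theorem. Tissue class parameters are hypothetical;
# in practice they would come from clustering the 2-D histogram.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.ndimage import gaussian_gradient_magnitude

def tissue_probability_maps(volume, tissue_params, priors, sigma=1.0):
    """Return a dict of p(tissue | intensity, gradient) maps, one per tissue."""
    grad = gaussian_gradient_magnitude(volume.astype(float), sigma=sigma)
    features = np.stack([volume.ravel(), grad.ravel()], axis=1)  # shape (N, 2)

    # Class-conditional likelihoods from 2-D Gaussian models of each tissue.
    lik = {
        t: multivariate_normal(mean=p["mean"], cov=p["cov"]).pdf(features)
        for t, p in tissue_params.items()
    }
    # Bayes' theorem: posterior proportional to likelihood * prior,
    # normalised over tissues so each voxel's values sum to one.
    unnorm = {t: lik[t] * priors[t] for t in lik}
    total = sum(unnorm.values()) + 1e-12
    return {t: (unnorm[t] / total).reshape(volume.shape) for t in unnorm}

# Hypothetical two-tissue example on a random "volume".
vol = np.random.normal(100, 15, size=(16, 16, 16))
params = {
    "csf":   {"mean": [70.0, 5.0],  "cov": [[100.0, 0.0], [0.0, 25.0]]},
    "brain": {"mean": [110.0, 8.0], "cov": [[150.0, 0.0], [0.0, 36.0]]},
}
maps = tissue_probability_maps(vol, params, priors={"csf": 0.2, "brain": 0.8})
print({t: float(m.mean()) for t, m in maps.items()})

Returning normalised posterior maps rather than hard labels keeps a graded value per voxel, which is what allows the approach above to represent partial-volume tissue fractions instead of a single tissue classification.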
