George Biros

University of Texas at Austin, Austin, Texas, United States


Publications (105) · 100.62 Total Impact

  • ABSTRACT: Several studies have examined correlations between imaging features of neoplasms and patient survival or tumor genetic composition; however, few have generated predictive models robust enough to enter clinical practice. In this study, we use advanced pattern analysis and machine learning to identify a combination of imaging features on initial magnetic resonance (MR) images to predict overall survival and molecular subtype in patients with glioblastoma (GB). We performed a retrospective followed by a prospective cohort study of GB patients. Imaging features were extracted from structural, diffusion, and perfusion MR images at time of diagnosis. A machine-learning algorithm was used to examine multiple features simultaneously to determine which set of features was most predictive of survival. The model was tested prospectively in a separate cohort of patients. In a subset of patients for which genetic data were obtained, machine learning was used to classify the likelihood of molecular subtype affiliation based on imaging. Tenfold cross-validation was performed. The accuracy of the model in predicting survival was 77% in the retrospective study (n = 105) and 79% in the prospective study (n = 29). Constellations of imaging markers related to infiltration and diffusion of tumor cells into edema, microvascularity, and blood-brain barrier compromise were predictive of shortened survival. A separate model was generated to predict molecular subtype. The accuracy of individual subtype predictions was 85% for classical (n = 20), 84% for mesenchymal (n = 28), 88% for neural (n = 29), and 86% for proneural (n = 22). Unlike prior studies, we analyzed the entirety of imaging data in an integrative fashion, leveraging the power of pattern analysis and machine learning to predict survival and molecular subtype with high accuracy and reproducibility in GB. Our noninvasive model utilizes multiparametric imaging obtained routinely for GB patients, making it readily translatable to the clinic.
    Neurosurgery 08/2015; 62(Suppl 1):209. DOI:10.1227/01.neu.0000467097.06935.d9 · 3.03 Impact Factor
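    As a hedged illustration of the evaluation protocol described above (tenfold cross-validation of a classifier on imaging features), here is a minimal sketch; the feature matrix, labels, and the SVM choice are placeholders, not the study's actual pipeline.

```python
# Tenfold cross-validated classification of imaging features; the data
# and model below are dummies standing in for the study's MR-derived
# features and its (unspecified) machine-learning algorithm.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((105, 60))    # 105 patients x 60 imaging features
y = rng.integers(0, 2, size=105)      # dummy short/long survival labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(f"10-fold CV accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```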
  • Amir Gholami · Judith Hill · Dhairya Malhotra · George Biros
    ABSTRACT: We present a new library for parallel distributed Fast Fourier Transforms (FFT). Despite the large amount of work on FFTs, we show that significant speedups can be achieved for distributed transforms. The importance of FFT in science and engineering and the advances in high performance computing necessitate further improvements. AccFFT extends existing FFT libraries for x86 architectures (CPUs) and CUDA-enabled Graphics Processing Units (GPUs) to distributed memory clusters using the Message Passing Interface (MPI). Our library uses specially optimized all-to-all communication algorithms to efficiently perform the communication phase of the distributed FFT algorithm. The GPU-based algorithm effectively hides the overhead of PCIe transfers. We present numerical results on the Maverick and Stampede platforms at the Texas Advanced Computing Center (TACC) and on the Titan system at the Oak Ridge National Laboratory (ORNL). We compare the CPU version of AccFFT with the P3DFFT and PFFT libraries and show a consistent $2-3\times$ speedup across a range of processor counts and problem sizes. A comparison of the GPU code with the FFTE library shows a similar trend, with a $2\times$ speedup. The library is tested on up to 131K cores and 4,096 GPUs of Titan, and on up to 16K cores of Stampede.
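    To make the structure of a distributed transform concrete, here is a minimal slab-decomposed 3D FFT sketch using mpi4py and NumPy. It illustrates only the local-FFT / all-to-all-transpose / local-FFT pattern the abstract refers to; it is not AccFFT's API, and it assumes the grid size is divisible by the number of ranks.

```python
# Minimal slab-decomposed 3D FFT: local 2D FFTs, a global transpose via
# MPI_Alltoall, then local 1D FFTs. A sketch of the communication
# pattern only -- no pencil decomposition, GPUs, or tuned all-to-all.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
P, rank = comm.Get_size(), comm.Get_rank()
N = 64
assert N % P == 0
nl = N // P                                  # slab thickness per rank

slab = np.random.rand(nl, N, N).astype(np.complex128)  # this rank's x-slab

# 1) FFT the two locally complete axes (y and z).
stage1 = np.fft.fft2(slab, axes=(1, 2))

# 2) Transpose so each rank owns full x-pencils: split the y-axis into
#    P blocks and exchange block p with rank p.
send = np.ascontiguousarray(
    stage1.reshape(nl, P, nl, N).transpose(1, 0, 2, 3))
recv = np.empty_like(send)
comm.Alltoall(send, recv)
pencils = recv.reshape(N, nl, N)             # full x extent, local y block

# 3) FFT the now-local x axis; the result is distributed along y.
out = np.fft.fft(pencils, axis=0)
```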
  • Amir Gholami · Andreas Mang · George Biros
    ABSTRACT: We present a numerical scheme for solving a parameter estimation problem for a model of low-grade glioma growth. Our goal is to estimate the spatial distribution of tumor concentration, as well as the magnitude of anisotropic tumor diffusion. We use a constrained optimization formulation with a reaction-diffusion model that results in a system of nonlinear partial differential equations. In our formulation, we estimate the parameters using partially observed, noisy tumor concentration data at two different time instances, along with white matter fiber directions derived from diffusion tensor imaging. The optimization problem is solved with a Gauss-Newton reduced space algorithm. We present the formulation and outline the numerical algorithms for solving the resulting equations. We test the method using a synthetic dataset and compute the reconstruction error for different noise levels and detection thresholds for monofocal and multifocal test cases.
    Journal of Mathematical Biology 05/2015; DOI:10.1007/s00285-015-0888-x · 2.39 Impact Factor
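    Schematically, and in generic notation rather than the paper's, the estimation problem has the PDE-constrained form

    $$
    \min_{c_0,\,k}\;\frac{1}{2}\sum_{i=1}^{2}\big\|\mathcal{O}\,c(t_i)-d_i\big\|_{L^2}^2+\beta\,\mathcal{R}(c_0,k)
    \quad\text{s.t.}\quad
    \partial_t c=\nabla\cdot\big(k\,\mathbf{D}(x)\nabla c\big)+\rho\,c\,(1-c),
    $$

    where $c$ is the tumor concentration with initial condition $c_0$, $\mathbf{D}$ is an anisotropic diffusion tensor aligned with the DTI-derived fiber directions, $\mathcal{O}$ is a partial observation operator, and $d_1, d_2$ are the noisy data at the two time instances; in a reduced-space Gauss-Newton method, gradients are typically obtained through adjoint equations.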
  • Andreas Mang · George Biros
    ABSTRACT: We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by a velocity field. Quadratic Tikhonov regularization ensures well-posedness of the problem. Our scheme augments standard vectorial smoothness operators based on $H^1$- and $H^2$-seminorms with a constraint on the divergence of the velocity field. Our formulation is motivated by Stokes flows in fluid mechanics. We invert for a stationary velocity field as well as a mass source map. This allows us to explicitly control the compressibility of the deformation map and, by that, the determinant of the deformation gradient. In addition, we design a novel regularization model that allows us to control shear. We use a globalized, preconditioned, matrix-free (Gauss-)Newton-Krylov scheme. We exploit variable elimination techniques to reduce the number of unknowns of our system: we only iterate on the reduced space of the velocity field. Our scheme can be used for problems in which the deformation map is expected to be nearly incompressible, as is often the case in medical imaging. Numerical experiments demonstrate that we can explicitly control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid over-smoothing of the deformation map. We demonstrate that our new formulation allows us to promote or penalize shear while controlling the determinant of the deformation gradient.
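    In generic notation (a schematic of the formulation, not the paper's exact statement), the problem reads

    $$
    \min_{v,\,w}\;\frac{1}{2}\big\|m(\cdot,1)-m_T\big\|_{L^2}^2
    +\frac{\beta_v}{2}\,|v|_{H^k}^2+\frac{\beta_w}{2}\,\|w\|^2
    \quad\text{s.t.}\quad
    \partial_t m+v\cdot\nabla m=0,\quad m(\cdot,0)=m_0,\quad \nabla\cdot v=w,
    $$

    where $m$ is the transported image intensity, $v$ the stationary velocity field, $k\in\{1,2\}$ selects the $H^1$- or $H^2$-seminorm, and the mass source $w$ relaxes exact incompressibility: taking $w=0$ forces the determinant of the deformation gradient to one, while penalizing $w$ controls it.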
  • Andreas Mang · George Biros
    ABSTRACT: We propose numerical algorithms for solving large deformation diffeomorphic image registration problems. We formulate the nonrigid image registration problem as a problem of optimal control. This leads to an infinite-dimensional partial differential equation (PDE) constrained optimization problem. The PDE constraint consists, in its simplest form, of a hyperbolic transport equation for the evolution of the image intensity. The control variable is the velocity field. Tikhonov regularization on the control ensures well-posedness. We consider standard smoothness regularization based on $H^1$- or $H^2$-seminorms. We augment this regularization scheme with a constraint on the divergence of the velocity field rendering the deformation incompressible and thus ensuring that the determinant of the deformation gradient is equal to one, up to the numerical error. We use a Fourier pseudospectral discretization in space and a Chebyshev pseudospectral discretization in time. We use a preconditioned, globalized, matrix-free, inexact Newton-Krylov method for numerical optimization. A parameter continuation is designed to estimate an optimal regularization parameter. Regularity is ensured by controlling the geometric properties of the deformation field. Overall, we arrive at a black-box solver. We study spectral properties of the Hessian, grid convergence, numerical accuracy, computational efficiency, and deformation regularity of our scheme. We compare the designed Newton-Krylov methods with a globalized preconditioned gradient descent. We study the influence of a varying number of unknowns in time. The reported results demonstrate excellent numerical accuracy, guaranteed local deformation regularity, and computational efficiency with an optional control on local mass conservation. The Newton-Krylov methods clearly outperform the Picard method if high accuracy of the inversion is required.
    SIAM Journal on Imaging Sciences 02/2015; 8(2):1030-1069. DOI:10.1137/140984002 · 2.87 Impact Factor
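    The parameter continuation mentioned in the abstract can be pictured with the following toy loop (a sketch under assumed interfaces: `solve` and `min_det_grad` are hypothetical placeholders for the registration solver and a deformation-regularity monitor, not functions from the paper).

```python
# Toy continuation in the regularization parameter beta: solve, then
# reduce beta until a user bound on det(grad(phi)) would be violated.
# `solve` and `min_det_grad` are assumed/hypothetical interfaces.
def continuation(solve, min_det_grad, jmin=0.25, beta=1.0,
                 shrink=0.5, beta_min=1e-6):
    v_prev, beta_prev = None, None
    while beta > beta_min:
        v = solve(beta, warm_start=v_prev)   # registration solve at beta
        if min_det_grad(v) < jmin:           # deformation too irregular:
            return v_prev, beta_prev         # keep last admissible pair
        v_prev, beta_prev = v, beta
        beta *= shrink
    return v_prev, beta_prev
```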
  • Dhairya Malhotra · Amir Gholami · George Biros
    ABSTRACT: We present a novel numerical scheme for solving the Stokes equation with variable coefficients in the unit box. Our scheme is based on a volume integral equation formulation. Compared to finite element methods, our formulation decouples the velocity and pressure, generates velocity fields that are by construction divergence-free to high accuracy, and has performance that does not depend on the order of the basis used for discretization. In addition, we employ a novel adaptive fast multipole method for volume integrals to obtain a scheme that is algorithmically optimal. Our scheme supports non-uniform discretizations and is spectrally accurate. To increase per-node performance, we have integrated our code with both NVIDIA and Intel accelerators. In our largest scalability test, we solved a problem with 20 billion unknowns, using a 14th-order approximation for the velocity, on 2048 nodes of the Stampede system at the Texas Advanced Computing Center. We achieved 0.656 petaFLOPS for the overall code (23% efficiency) and one petaFLOPS for the volume integrals (33% efficiency). As an application example, we simulate Stokes flow in a porous medium with highly complex pore structure using a penalty formulation to enforce the no-slip condition.
    SC14; 11/2014
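    Schematically (our notation, not the paper's exact operators): splitting the viscosity into a constant background plus a perturbation and moving the variable part to the right-hand side yields a second-kind volume integral equation,

    $$
    u(x)=\int_{\Omega}G(x-y)\,g(y)\,dy,\qquad g+\mathcal{K}[g]=f,
    $$

    where $G$ is the free-space Stokes Green's function (the Stokeslet), $g$ is an unknown volume density, and $\mathcal{K}$ is a volume-integral operator carrying the coefficient variation. Velocities recovered from the Stokeslet representation are divergence-free by construction, and the adaptive FMM evaluates the volume potential in optimal time.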
  • William B. March · Bo Xiao · George Biros
    ABSTRACT: We present a fast algorithm for kernel summation problems in high dimensions. These problems appear in computational physics, numerical approximation, non-parametric statistics, and machine learning. In our context, the sums depend on a kernel function that is a pair potential defined on a dataset of points in a high-dimensional Euclidean space. A direct evaluation of the sum scales quadratically with the number of points. Fast kernel summation methods can reduce this cost to linear complexity, but the constants involved do not scale well with the dimensionality of the dataset. The main algorithmic components of fast kernel summation algorithms are the separation of the kernel sum between near and far field (which is the basis for pruning) and the efficient and accurate approximation of the far field. We introduce novel methods for pruning and approximating the far field. Our far-field approximation requires only kernel evaluations and does not use analytic expansions. Pruning is not done using bounding boxes but rather combinatorially, using a sparsified nearest-neighbor graph of the input. The time complexity of our algorithm depends linearly on the ambient dimension. The error in the algorithm depends on the low-rank approximability of the far field, which in turn depends on the kernel function and on the intrinsic dimensionality of the distribution of the points. The error of the far-field approximation does not depend on the ambient dimension. We present the new algorithm along with experimental results that demonstrate its performance. We report results for Gaussian kernel sums for 100 million points in 64 dimensions, for one million points in 1000 dimensions, and for problems in which the Gaussian kernel has a variable bandwidth. To the best of our knowledge, all of these experiments are impossible or prohibitively expensive with existing fast kernel summation methods.
    SIAM Journal on Scientific Computing 10/2014; 37(2). DOI:10.1137/140989546 · 1.94 Impact Factor
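    The near/far split at the core of the method can be written schematically (notation ours, not the paper's) as

    $$
    u(x_i)=\underbrace{\sum_{j\in\mathcal{N}(i)}K(x_i,x_j)\,w_j}_{\text{exact, near field}}
    \;+\;\underbrace{\sum_{j\notin\mathcal{N}(i)}K(x_i,x_j)\,w_j}_{\approx\,(UV^{\top}w)_i,\ \text{low rank}},
    $$

    where the near field $\mathcal{N}(i)$ is read off a sparsified nearest-neighbor graph of the input rather than bounding boxes, and the far-field block is compressed using kernel evaluations alone, with error governed by the intrinsic (not ambient) dimensionality of the data.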
  • William B. March · George Biros
    ABSTRACT: We consider fast kernel summations in high dimensions: given a large set of points in $d$ dimensions (with $d \gg 3$) and a pair-potential function (the kernel function), we compute a weighted sum of all pairwise kernel interactions for each point in the set. Direct summation is equivalent to a (dense) matrix-vector multiplication and scales quadratically with the number of points. Fast kernel summation algorithms reduce this cost to log-linear or linear complexity. Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by constructing approximate representations of interactions of points that are far from each other. In algebraic terms, these representations correspond to low-rank approximations of blocks of the overall interaction matrix. Existing approaches require an excessive number of kernel evaluations with increasing $d$ and number of points in the dataset. To address this issue, we use a randomized algebraic approach in which we first sample the rows of a block and then construct its approximate, low-rank interpolative decomposition. We examine the feasibility of this approach theoretically and experimentally. We provide a new theoretical result showing a tighter bound on the reconstruction error from uniformly sampling rows than the existing state-of-the-art. We demonstrate that our sampling approach is competitive with existing (but prohibitively expensive) methods from the literature. We also construct kernel matrices for the Laplacian, Gaussian, and polynomial kernels -- all commonly used in physics and data analysis. We explore the numerical properties of blocks of these matrices, and show that they are amenable to our approach. Depending on the data set, our randomized algorithm can successfully compute low-rank approximations in high dimensions. We report results for data sets from four dimensions up to 1000 dimensions.
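    A minimal sketch of the row-sampling idea follows: a generic randomized interpolative decomposition built from pivoted QR on uniformly sampled rows. This is the technique the abstract names, but it is not the paper's code, and the toy kernel block is an assumption.

```python
# Column interpolative decomposition of a kernel block K, built only
# from uniformly sampled rows: pivoted QR picks skeleton columns J and
# a least-squares fit gives interpolation weights T with K ~= K[:, J] @ T.
import numpy as np
from scipy.linalg import qr, lstsq

def sampled_id(K, rank, n_samples, rng=np.random.default_rng(0)):
    m, _ = K.shape
    rows = rng.choice(m, size=min(n_samples, m), replace=False)
    Ks = K[rows]                                   # sampled row block
    _, _, piv = qr(Ks, pivoting=True, mode="economic")
    J = piv[:rank]                                 # skeleton columns
    T, *_ = lstsq(Ks[:, J], Ks)                    # interpolation matrix
    return J, T

# Example: a smooth Gaussian kernel block is numerically low-rank.
X = np.random.default_rng(1).random((2000, 3))
K = np.exp(-((X[:500, None] - X[None, 500:]) ** 2).sum(-1))
J, T = sampled_id(K, rank=40, n_samples=200)
print(np.linalg.norm(K - K[:, J] @ T) / np.linalg.norm(K))
```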
  • Bryan Quaife · George Biros
    ABSTRACT: We construct a high-order adaptive time stepping scheme for vesicle suspensions with viscosity contrast. The high-order accuracy is achieved using a spectral deferred correction (SDC) method, and adaptivity is achieved by estimating the local truncation error from the numerical drift of quantities that are physically constant. Numerical examples demonstrate that our method can handle suspensions with vesicles that are tumbling, tank-treading, or both. Moreover, we demonstrate that a user-prescribed tolerance can be automatically achieved for simulations with long time horizons.
    Procedia IUTAM 08/2014; 16. DOI:10.1016/j.piutam.2015.03.011
  • Amir Gholami · Dhairya Malhotra · Hari Sundar · George Biros
    ABSTRACT: We discuss the fast solution of the Poisson problem on a unit cube. We benchmark the performance of the most scalable methods for the Poisson problem: the Fast Fourier Transform (FFT), the Fast Multipole Method (FMM), geometric multigrid (GMG), and algebraic multigrid (AMG). The GMG and FMM solvers are novel parallel schemes, developed in our group, that use high-order approximation for Poisson problems. The FFT code is from the P3DFFT library and the AMG code is from the ML package of the Trilinos library. We examine and report results for weak scaling, strong scaling, and time to solution for uniform and highly refined grids. We present results on the Stampede system at the Texas Advanced Computing Center and on the Titan system at the Oak Ridge National Laboratory. In our largest test case, we solved a problem with 600 billion unknowns on 229,379 cores of Titan. Overall, all methods scale quite well to these problem sizes. We have tested all of the methods with different source distributions. Our results show that FFT is the method of choice for smooth source functions that can be resolved with a uniform mesh. However, it loses its performance advantage in the presence of highly localized features in the source function. FMM and GMG considerably outperform FFT in those cases.
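    For reference, the uniform-grid FFT baseline in such a comparison is only a few lines. Below is a hedged, serial toy: a spectral Poisson solve on the periodic unit cube with NumPy (not the P3DFFT-based code benchmarked in the paper).

```python
# Solve -Laplace(u) = f on the periodic unit cube with zero-mean f.
import numpy as np

def poisson_fft(f, L=1.0):
    n = f.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                 # dodge divide-by-zero at the mean mode
    u_hat = np.fft.fftn(f) / k2
    u_hat[0, 0, 0] = 0.0              # pin the free additive constant
    return np.real(np.fft.ifftn(u_hat))

# Check against a manufactured solution u = sin(2 pi x) sin(2 pi y):
n = 64
x = np.arange(n) / n
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)
f = 8 * np.pi**2 * u                  # -Laplace(u) = 8 pi^2 u
print(np.abs(poisson_fft(f) - u).max())
```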
  • Amir Gholami · Andreas Mang · George Biros
    ABSTRACT: We present a numerical scheme for solving a parameter estimation problem for a model of low-grade glioma growth. Our goal is to estimate tumor infiltration into the brain parenchyma for a reaction-diffusion tumor growth model. We use a constrained optimization formulation that results in a system of nonlinear partial differential equations (PDEs). In our formulation, we estimate the parameters using the data from segmented images at two different time instances, along with white matter fiber directions derived from diffusion tensor imaging (DTI). The parameters we seek to estimate are the spatial tumor concentration and the extent of anisotropic tumor diffusion. The optimization problem is solved with a Gauss-Newton reduced space algorithm. We present the formulation, outline the numerical algorithms and conclude with numerical experiments on synthetic datasets. Our results show the feasibility of the proposed methodology.
  • Bryan Quaife · George Biros
    ABSTRACT: The discretization of the double-layer potential integral equation for the interior Dirichlet Laplace problem in a domain with smooth boundary results in a linear system that has a bounded condition number. Thus, the number of iterations required for the convergence of a Krylov method is, asymptotically, independent of the discretization size $N$. Using the Fast Multipole Method (FMM) to accelerate the matrix-vector products, we obtain an optimal $O(N)$ solver. In practice, however, when the geometry is complicated, the number of Krylov iterations behaves in an $N$-dependent manner and can be quite large. In many applications, such a cost is prohibitively expensive. There is a need, therefore, for designing preconditioners that reduce the number of Krylov iterations. We summarize the different methodologies that have appeared in the literature (single-grid, multigrid, approximate sparse inverses) and we propose a new class of preconditioners based on an FMM-based spatial decomposition of the double-layer operator. We present an experimental study in which we compare the different approaches, and we discuss the merits and shortcomings of our approach.
    Numerical Linear Algebra with Applications 06/2014; DOI:10.1002/nla.1940 · 1.42 Impact Factor
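    For context, the equation in question is the classical second-kind jump relation for the double-layer representation $u=\mathcal{D}[\sigma]$ of the interior Dirichlet problem:

    $$
    -\tfrac{1}{2}\,\sigma(x)+\int_{\partial\Omega}\frac{\partial G(x,y)}{\partial n_y}\,\sigma(y)\,ds_y=g(x),\qquad x\in\partial\Omega,
    $$

    where $G$ is the Laplace Green's function and $g$ the Dirichlet data; on smooth boundaries the operator is a compact perturbation of the identity, which is what makes the condition number bounded.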
  • Bryan Quaife · George Biros
    ABSTRACT: We present an adaptive arbitrary-order accurate time-stepping numerical scheme for the flow of vesicles suspended in Stokesian fluids. Our scheme can be summarized as an approximate implicit spectral deferred correction (SDC) method. Applying a textbook fully implicit SDC scheme to vesicle flows is prohibitively expensive. For this reason we introduce several approximations. Our scheme is based on a semi-implicit linearized low-order time stepping method. (Our discretization is spectrally accurate in space.) We also use invariant properties of vesicle flows, constant area and boundary length in two dimensions, to reduce the computational cost of error estimation for adaptive time stepping. We present results in two dimensions for single-vesicle flows, constricted geometry flows, converging flows, and flows in a Couette apparatus. We experimentally demonstrate that the proposed scheme enables automatic selection of the step size and high-order accuracy.
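    The invariant-based error estimation can be pictured with this toy controller (a hedged sketch: `step`, `area`, and `length` are placeholder callables for one SDC step and the two conserved geometric quantities; the accept/shrink logic is a generic step-size controller, not the paper's exact rule).

```python
# Toy adaptive time stepping driven by the drift of quantities that the
# exact flow conserves (enclosed area and boundary length in 2D).
def adaptive_march(step, area, length, x, T, dt, tol, order=2):
    t = 0.0
    while t < T:
        x_new = step(x, dt)                       # one SDC time step
        drift = max(abs(area(x_new) - area(x)) / abs(area(x)),
                    abs(length(x_new) - length(x)) / abs(length(x)))
        target = tol * dt / T                     # error budget for this step
        if drift <= target:
            x, t = x_new, t + dt                  # accept the step
        dt *= 0.9 * (target / max(drift, 1e-16)) ** (1.0 / order)
        dt = min(dt, T - t)                       # do not overshoot T
    return x
```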
  • Hari Sundar · Georg Stadler · George Biros
    ABSTRACT: We present a comparison of different multigrid approaches for the solution of systems arising from high-order continuous finite element discretizations of elliptic partial differential equations on complex geometries. We consider the pointwise Jacobi, the Chebyshev-accelerated Jacobi, and the symmetric successive over-relaxation (SSOR) smoothers, as well as elementwise block Jacobi smoothing. Three approaches for the multigrid hierarchy are compared: 1) high-order $h$-multigrid, which uses high-order interpolation and restriction between geometrically coarsened meshes; 2) $p$-multigrid, in which the polynomial order is reduced while the mesh remains unchanged, and the interpolation and restriction incorporate the different-order basis functions; and 3) a first-order approximation multigrid preconditioner constructed using the nodes of the high-order discretization. This latter approach is often combined with algebraic multigrid for the low-order operator and is attractive for high-order discretizations on unstructured meshes, where geometric coarsening is difficult. Based on a simple performance model, we compare the computational cost of the different approaches. Using scalar test problems in two and three dimensions with constant and varying coefficients, we compare the performance of the different multigrid approaches for polynomial orders up to 16. Overall, both $h$- and $p$-multigrid work well; the first-order approximation is less efficient. For constant coefficients, all smoothers work well. For variable coefficients, Chebyshev and SSOR smoothing outperform Jacobi smoothing. While all of the tested methods converge in a mesh-independent number of iterations, none of them behaves completely independently of the polynomial order. When multigrid is used as a preconditioner in a Krylov method, the iteration number decreases significantly compared to using multigrid as a solver.
    Numerical Linear Algebra with Applications 02/2014; 22(4). DOI:10.1002/nla.1979 · 1.42 Impact Factor
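    As a minimal illustration of the building blocks being compared (smoother, restriction, coarse solve, prolongation), here is a toy two-grid cycle for the 1D Poisson problem with a damped-Jacobi smoother; the paper's setting (high-order finite elements on complex 2D/3D geometries) is of course far richer.

```python
# Toy two-grid cycle for -u'' = f with homogeneous Dirichlet BCs.
# Assumes an odd number of grid points (including the two boundary nodes).
import numpy as np

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps=3, omega=2/3):
    for _ in range(sweeps):                  # damped Jacobi smoothing
        u = u + omega * (h**2 / 2) * residual(u, f, h)
    return u

def two_grid(u, f, h):
    u = jacobi(u, f, h)                      # pre-smooth
    r = residual(u, f, h)
    rc = r[::2].copy()                       # restrict by injection
    n_c = len(rc)
    # direct coarse solve of the tridiagonal Poisson operator
    A = (np.diag(2*np.ones(n_c-2)) - np.diag(np.ones(n_c-3), 1)
         - np.diag(np.ones(n_c-3), -1)) / (2*h)**2
    ec = np.zeros(n_c)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])
    # linear prolongation of the coarse-grid error correction
    e = np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)
    return jacobi(u + e, f, h)               # post-smooth

# Typical use: iterate u = two_grid(u, f, h) to convergence on a grid
# of 2**k + 1 points with h = 1 / 2**k.
```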
  • Bryan Quaife · George Biros
    ABSTRACT: We consider numerical algorithms for the simulation of the rheology of two-dimensional vesicles suspended in a viscous Stokesian fluid. The vesicle evolution dynamics is governed by hydrodynamic and elastic forces. The elastic forces are due to local inextensibility of the vesicle membrane and resistance to bending. Numerically resolving vesicle flows poses several challenges. For example, we need to resolve moving interfaces, address stiffness due to bending, enforce the inextensibility constraint, and efficiently compute the (non-negligible) long-range hydrodynamic interactions. Our method is based on the work of Rahimian, Veerapaneni, and Biros, "Dynamic simulation of locally inextensible vesicles suspended in an arbitrary two-dimensional domain, a boundary integral method", Journal of Computational Physics, 229(18), 2010. It is a boundary integral formulation of the Stokes equations coupled to the interface mass continuity and force balance. We extend the algorithms presented in that paper to increase the robustness of the method and enable simulations with concentrated suspensions. In particular, we propose a scheme in which both intra-vesicle and inter-vesicle interactions are treated semi-implicitly. In addition we use special integration for near-singular integrals and we introduce a spectrally accurate collision detection scheme. We test the proposed methodologies on both unconfined and confined flows for vesicles whose internal fluid may have a viscosity contrast with the bulk medium. Our experiments demonstrate the importance of treating both intra-vesicle and inter-vesicle interactions accurately.
    Journal of Computational Physics 09/2013; 274. DOI:10.1016/j.jcp.2014.06.013 · 2.49 Impact Factor
  • Hari Sundar · Dhairya Malhotra · George Biros
    ABSTRACT: In this paper, we present HykSort, an optimized comparison sort for distributed memory architectures that attains more than a 2× improvement over bitonic sort and samplesort. The algorithm is based on the hypercube quicksort, but instead of a binary recursion, we perform a k-way recursion in which the pivots are selected accurately with an iterative parallel select algorithm. The single-node sort is performed using a vectorized and multithreaded merge sort. The advantages of HykSort are lower communication costs, better load balancing, and avoidance of O(p)-collective communication primitives. We also present a staged communication samplesort, which is more robust than the original samplesort for large core counts. We conduct an experimental study in which we compare hypercube sort, bitonic sort, the original samplesort, the staged samplesort, and HykSort. We report weak and strong scaling results and study the effect of the grain size. It turns out that no single algorithm performs best, and a hybridization strategy is necessary. As a highlight of our study, in our largest experiment, on 262,144 AMD cores of the Cray XK7 "Titan" platform at the Oak Ridge National Laboratory, we sorted 8 trillion 32-bit integer keys in 37 seconds, achieving 0.9 TB/s effective throughput.
    Proceedings of the 27th international ACM conference on International conference on supercomputing; 06/2013
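    A sequential toy of the k-way partition step (the heart of the algorithm) follows; the real HykSort selects the k−1 splitters to tight rank tolerances with an iterative parallel select and then exchanges buckets across process groups over MPI, neither of which is modeled here.

```python
# Toy k-way partition: sample-based splitter selection plus stable
# bucketing. In HykSort this runs per recursion level, with exact
# splitter ranks from a parallel select and an MPI data exchange.
import numpy as np

def kway_partition(keys, k, oversample=8, rng=np.random.default_rng(0)):
    sample = np.sort(rng.choice(keys, size=k * oversample))
    splitters = sample[oversample::oversample]     # k-1 splitters
    buckets = np.searchsorted(splitters, keys, side="right")
    order = np.argsort(buckets, kind="stable")     # group keys by bucket
    return keys[order], np.bincount(buckets, minlength=k)

keys = np.random.default_rng(1).integers(0, 2**32, size=1_000_000,
                                         dtype=np.uint64)
parted, counts = kway_partition(keys, k=8)         # 8 buckets to recurse on
```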
  • ABSTRACT: A self-aware aerospace vehicle can dynamically adapt the way it performs missions by gathering information about itself and its surroundings and responding intelligently. Achieving this DDDAS paradigm enables a revolutionary new generation of self-aware aerospace vehicles that can perform missions that are impossible using current design, flight, and mission planning paradigms. To make self-aware aerospace vehicles a reality, fundamentally new algorithms are needed that drive decision-making through dynamic response to uncertain data, while incorporating information from multiple modeling sources and multiple sensor fidelities. In this work, we consider the specific challenge of a vehicle that can dynamically and autonomously sense, plan, and act. The challenge is to achieve each of these tasks in real time, executing online models and exploiting dynamic data streams, while also accounting for uncertainty. We employ a multifidelity approach to inference, prediction, and planning: an approach that incorporates information from multiple modeling sources, multiple sensor data sources, and multiple fidelities.
    Procedia Computer Science 12/2012; 9:1206-1210. DOI:10.1016/j.procs.2012.04.130
  • ABSTRACT: We present a generative approach for simultaneously registering a probabilistic atlas of a healthy population to brain magnetic resonance (MR) scans showing glioma and segmenting the scans into tumor as well as healthy tissue labels. The proposed method is based on the expectation maximization (EM) algorithm and incorporates a glioma growth model for atlas seeding, a process which modifies the original atlas into one with tumor and edema adapted to best match a given set of patient images. The modified atlas is registered into the patient space and utilized for estimating the posterior probabilities of various tissue labels. EM iteratively refines the estimates of the posterior probabilities of tissue labels, the deformation field, and the tumor growth model parameters. Hence, in addition to segmentation, the proposed method results in atlas registration and a low-dimensional description of the patient scans through estimation of tumor model parameters. We validate the method by automatically segmenting 10 MR scans and comparing the results to those produced by clinical experts and two state-of-the-art methods. The resulting segmentations of tumor and edema outperform the results of the reference methods and achieve accuracy similar to that of a second human rater. We additionally apply the method to 122 patient scans and report the estimated tumor model parameters and their relations with the segmentation and registration results. Based on the results from this patient population, we construct a statistical atlas of glioma by inverting the estimated deformation fields to warp the tumor segmentations of the patient scans into a common space.
    IEEE Transactions on Medical Imaging 08/2012; 31(10):1941-54. DOI:10.1109/TMI.2012.2210558
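    The E-step at the center of such a scheme has the standard form (schematic; the paper's model additionally couples in the atlas deformation and the growth-model parameters):

    $$
    p(l\mid y_x)=\frac{\pi_l(x)\,\mathcal{N}(y_x;\,\mu_l,\Sigma_l)}{\sum_{l'}\pi_{l'}(x)\,\mathcal{N}(y_x;\,\mu_{l'},\Sigma_{l'})},
    $$

    where $y_x$ is the multichannel MR intensity at voxel $x$ and $\pi_l$ is the tumor-adapted, warped atlas prior for tissue label $l$; the M-step then re-estimates the class statistics $(\mu_l,\Sigma_l)$, the deformation field, and the tumor growth parameters.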
  • Stéphanie Chaillat · George Biros
    ABSTRACT: We propose an algorithm to compute an approximate singular value decomposition (SVD) of least-squares operators related to linearized inverse medium problems with multiple events. Such factorizations can be used to accelerate matrix-vector multiplications and to precondition iterative solvers. We describe the algorithm in the context of an inverse scattering problem for the low-frequency time-harmonic wave equation with broadband and multi-point illumination. This model finds many applications in science and engineering (e.g., seismic imaging, subsurface imaging, impedance tomography, non-destructive evaluation, and diffuse optical tomography). We consider small perturbations of the background medium and, by invoking the Born approximation, we obtain a linear least-squares problem. The scheme we describe in this paper constructs an approximate SVD of the Born operator (the operator in the linearized least-squares problem). The main feature of the method is that it can accelerate the application of the Born operator to a vector. If $N_\omega$ is the number of illumination frequencies, $N_s$ the number of illumination locations, $N_d$ the number of detectors, and $N$ the discretization size of the medium perturbation, a dense singular value decomposition of the Born operator requires $O(\min(N_s N_\omega N_d, N)^2 \times \max(N_s N_\omega N_d, N))$ operations. The application of the Born operator to a vector requires $O(N_\omega N_s \mu(N))$ work, where $\mu(N)$ is the cost of solving a forward scattering problem. We propose an approximate SVD method that, under certain conditions, reduces these work estimates significantly. For example, the asymptotic cost of factorizing and applying the Born operator becomes $O(\mu(N) N_\omega)$. We provide numerical results that demonstrate the scalability of the method.
    Journal of Computational Physics 06/2012; 231(12):4403–4421. DOI:10.1016/j.jcp.2012.02.006 · 2.49 Impact Factor
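    A generic randomized SVD built from black-box operator applications conveys the flavor of such factorizations (hedged: this is the standard range-finder scheme, not the paper's frequency-structured algorithm; `apply_A` and `apply_At` are assumed wrappers for the Born operator and its adjoint, each hiding forward solves).

```python
# Randomized SVD of an operator available only through products with
# A and A^T; `apply_A`/`apply_At` are placeholder callables.
import numpy as np

def randomized_svd(apply_A, apply_At, n, rank, oversample=10,
                   rng=np.random.default_rng(0)):
    Omega = rng.standard_normal((n, rank + oversample))
    Q, _ = np.linalg.qr(apply_A(Omega))   # orthonormal basis for range(A)
    B = apply_At(Q).T                     # B = Q^T A, formed via the adjoint
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]  # A ~= U diag(s) V^T
```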
  • Wei Zhu · Sung Ha Kang · George Biros
    ABSTRACT: We propose a novel geodesic-active-contour-based (GAC-based) variational model that uses two level-set functions to segment the right and left ventricles and the epicardium in short-axis magnetic resonance (MR) images. For the right ventricle, the myocardial wall is typically very thin and hard to identify at the resolution of existing MR scanners. We propose to use two level sets to identify the endocardial wall by pushing one level-set function away from the other, in the setting of the edge-driven GAC model with a new edge detection function. Existing edge detection functions place strict restrictions on the location of initial contours. We develop a new edge detection function that relaxes this restriction and propose an iterative method that uses a sequence of edge detection functions to minimize the energy of our model successively. Experimental results are presented to validate the effectiveness of the proposed model.
    International Journal of Computer Mathematics 06/2012; 1–16. DOI:10.1080/00207160.2012.695355 · 0.72 Impact Factor
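    For reference, the classical ingredients that the paper modifies are the edge-detection function and the GAC level-set evolution (standard forms shown here; the paper replaces $g$ with a new detector and couples two level-set functions):

    $$
    g(|\nabla I|)=\frac{1}{1+|\nabla(G_\sigma\!*\!I)|^2/\lambda^2},
    \qquad
    \phi_t=g\,|\nabla\phi|\,\nabla\!\cdot\!\left(\frac{\nabla\phi}{|\nabla\phi|}\right)+\nabla g\cdot\nabla\phi,
    $$

    where $G_\sigma\!*\!I$ is the Gaussian-smoothed image; detectors of this type vanish only near strong edges, which is the source of the initialization restriction that the proposed sequence of edge functions relaxes.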

Publication Stats

2k Citations
100.62 Total Impact Points

Institutions

  • 2005–2014
    • University of Texas at Austin
      • Institute for Computational Engineering and Sciences
      Austin, Texas, United States
  • 2008–2011
    • Georgia Institute of Technology
      • Department of Biomedical Engineering
      Atlanta, Georgia, United States
  • 2005–2010
    • University of Pennsylvania
      • Section of Biomedical Image Analysis - SBIA
      • Department of Mechanical Engineering and Applied Mechanics
      • Department of Computer and Information Science
      Philadelphia, Pennsylvania, United States
  • 2007–2008
    • William Penn University
      Philadelphia, Pennsylvania, United States
  • 1999–2005
    • Carnegie Mellon University
      Pittsburgh, Pennsylvania, United States
  • 2002–2004
    • CUNY Graduate Center
      New York, New York, United States
  • 1970
    • Oak Ridge National Laboratory
      Oak Ridge, Tennessee, United States