George Biros

University of Texas at Austin, Austin, Texas, United States

Publications (108)

  • ABSTRACT: Background: Glioblastoma is an aggressive and highly infiltrative brain cancer. Standard surgical resection is guided by enhancement on postcontrast T1-weighted (T1) magnetic resonance imaging, which is insufficient for delineating surrounding infiltrating tumor. Objective: To develop imaging biomarkers that delineate areas of tumor infiltration and predict early recurrence in peritumoral tissue. Such markers would enable intensive, yet targeted, surgery and radiotherapy, thereby potentially delaying recurrence and prolonging survival. Methods: Preoperative multiparametric magnetic resonance images (T1, T1-gadolinium, T2-weighted, T2-weighted fluid-attenuated inversion recovery, diffusion tensor imaging, and dynamic susceptibility contrast-enhanced magnetic resonance images) from 31 patients were combined using machine learning methods, thereby creating predictive spatial maps of infiltrated peritumoral tissue. Cross-validation was used in the retrospective cohort to achieve generalizable biomarkers. Subsequently, the imaging signatures learned from the retrospective study were used in a replication cohort of 34 new patients. Spatial maps representing the likelihood of tumor infiltration and future early recurrence were compared with regions of recurrence on postresection follow-up studies with pathology confirmation. Results: This technique produced predictions of early recurrence with a mean area under the curve of 0.84, sensitivity of 91%, specificity of 93%, and odds ratio estimates of 9.29 (99% confidence interval: 8.95-9.65) for tissue predicted to be heavily infiltrated in the replication study. Regions of tumor recurrence were found to have subtle, yet fairly distinctive multiparametric imaging signatures when analyzed quantitatively by pattern analysis and machine learning. Conclusion: Visually imperceptible imaging patterns discovered via multiparametric pattern analysis methods were found to estimate the extent of infiltration and location of future tumor recurrence, paving the way for improved targeted treatment. Abbreviations: AUC, area under the curve; BBB, blood-brain barrier; DSC, dynamic susceptibility contrast-enhanced; DTI, diffusion tensor imaging; FLAIR, fluid-attenuated inversion recovery; PC, principal component; ROI, region of interest; T1, T1-weighted; T2, T2-weighted.
    Article · Jan 2016 · Neurosurgery
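A minimal sketch of the cross-validated, multiparametric voxel classification workflow this kind of study describes; all data, features, and the classifier below are synthetic stand-ins, not the paper's pipeline:

```python
# Toy sketch: cross-validated classification of "infiltrated" vs. "normal"
# peritumoral voxels from multiparametric MRI features. Everything here is
# synthetic; the six feature columns stand in for the paper's MRI channels
# (e.g., T1, T1-Gd, T2, FLAIR, DTI, DSC measurements).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_voxels, n_features = 5000, 6
X = rng.normal(size=(n_voxels, n_features))
# Labels carry a weak, "visually imperceptible" linear signal plus noise.
w = rng.normal(size=n_features)
y = (X @ w + 0.5 * rng.normal(size=n_voxels)) > 0

clf = RandomForestClassifier(n_estimators=200, random_state=0)
# Cross-validation guards against overfitting, mirroring the paper's protocol.
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.2f}")
```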
  • ABSTRACT: In this paper we describe the algorithms needed to handle flows with viscosity contrast accurately and efficiently. We show that a globally semi-implicit method does not have any time-step stability constraint for flows with single and multiple vesicles with moderate viscosity contrast and the computational cost per simulation unit time is comparable to or less than that of an explicit scheme. Automatic oversampling adaptation enables us to achieve high accuracy with very low spectral resolution. We conduct numerical experiments to investigate the stability, accuracy, and the computational cost of the algorithms. Overall, our method achieves several orders of magnitude speed-up compared to the standard explicit schemes.
    Article · Oct 2015
  • Dhairya Malhotra · George Biros
    ABSTRACT: We describe our implementation of a parallel fast multipole method for evaluating potentials for discrete and continuous source distributions. The first requires summation over the source points and the second requires integration over a continuous source density. Both problems require $O(N^2)$ work when computed directly; however, they can be accelerated to $O(N)$ time using the FMM. In our PVFMM software library, we use a kernel-independent FMM, which allows us to compute potentials for a wide range of elliptic kernels. Our method is high-order, adaptive, and scalable. In this paper, we discuss several algorithmic improvements and performance optimizations, including cache locality, vectorization, shared-memory parallelism, and the use of coprocessors. Our distributed-memory implementation uses a space-filling curve for partitioning data and a hypercube communication scheme. We present convergence results for Laplace, Stokes, and Helmholtz (low-wavenumber) kernels for both particle and volume FMM. We measure the efficiency of our method in terms of CPU cycles per unknown for different accuracies and different kernels. We also demonstrate the scalability of our implementation up to several thousand processor cores on the Stampede platform at the Texas Advanced Computing Center.
    Article · Sep 2015 · Communications in Computational Physics
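For orientation, the direct $O(N^2)$ particle summation that an FMM such as PVFMM reduces to $O(N)$; a minimal numpy sketch for the 3-D Laplace kernel (the library itself is kernel-independent, adaptive, and distributed):

```python
# Direct N-body evaluation of u(x_i) = sum_j q_j / (4*pi*|x_i - x_j|),
# the O(N^2) computation that a kernel-independent FMM reduces to O(N).
import numpy as np

rng = np.random.default_rng(0)
N = 2000
X = rng.random((N, 3))          # source/target points in the unit cube
q = rng.normal(size=N)          # source densities

diff = X[:, None, :] - X[None, :, :]        # (N, N, 3) pairwise differences
r = np.linalg.norm(diff, axis=-1)
np.fill_diagonal(r, np.inf)                 # exclude self-interaction
u = (q / (4.0 * np.pi * r)).sum(axis=1)     # O(N^2) work and memory
print(u[:3])
```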
  • ABSTRACT: Since exact evaluation of a kernel matrix requires $O(N^2)$ work, scalable learning algorithms using kernels must approximate the kernel matrix. This approximation must be robust to the kernel parameters, for example, the bandwidth of the Gaussian kernel. We consider two approximation methods: Nyström and an algebraic treecode developed in our group. Nyström methods construct a global low-rank approximation of the kernel matrix. Treecodes approximate just the off-diagonal blocks, typically using a hierarchical decomposition. We present a theoretical error analysis of our treecode and relate it to the error of Nyström methods. Our analysis reveals how the block-rank structure of the kernel matrix controls the performance of the treecode. We evaluate our treecode by comparing it to the classical Nyström method and a state-of-the-art fast approximate Nyström method. We test the kernel matrix approximation accuracy for several different bandwidths and datasets. On the MNIST2M dataset (2M points in 784 dimensions) for a Gaussian kernel with bandwidth h=1, the Nyström methods' error is over 90%, whereas our treecode delivers an error of less than 1%. We also test the performance of the three methods on binary classification using two models: a Bayes classifier and kernel ridge regression. Our evaluation reveals the existence of bandwidth values that should be examined in cross-validation but whose corresponding kernel matrices cannot be approximated well by Nyström methods. In contrast, the treecode scheme performs much better for these values.
    Conference Paper · Aug 2015
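A minimal numpy sketch of the classical Nyström approximation the paper compares against; the dataset, landmark count, and bandwidths below are illustrative stand-ins, not the paper's MNIST2M experiment:

```python
# Classical Nystrom: sample m landmark points S, keep C = K[:, S] and
# W = K[S, S], and use K ~ C @ pinv(W) @ C.T as a global low-rank surrogate.
import numpy as np

def gaussian_kernel(X, h):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * h * h))

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
S = rng.choice(len(X), size=100, replace=False)   # 100 landmarks

for h in (8.0, 1.0):                              # large vs. small bandwidth
    K = gaussian_kernel(X, h)
    C, W = K[:, S], K[np.ix_(S, S)]
    K_nys = C @ np.linalg.pinv(W) @ C.T
    err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
    print(f"h = {h}: relative Frobenius error {err:.2%}")
```

The small-bandwidth case makes the kernel matrix nearly diagonal, so no global low-rank model can capture it; this is exactly the regime where the abstract reports Nyström errors above 90%.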
  • ABSTRACT: Background: MRI characteristics of brain gliomas have been used to predict clinical outcome and molecular tumor characteristics. However, previously reported imaging biomarkers have not been sufficiently accurate or reproducible to enter routine clinical practice and often rely on relatively simple MRI measures. The current study leverages advanced image analysis and machine learning algorithms to identify complex and reproducible imaging patterns predictive of overall survival and molecular subtype in glioblastoma (GB). Methods: One hundred five patients with GB were first used to extract approximately 60 diverse features from preoperative multiparametric MRIs. These imaging features were used by a machine learning algorithm to derive imaging predictors of patient survival and molecular subtype. Cross-validation ensured generalizability of these predictors to new patients. Subsequently, the predictors were evaluated in a prospective cohort of 29 new patients. Results: Survival curves yielded a hazard ratio of 10.64 for predicted long versus short survivors. The overall, 3-way (long/medium/short survival) accuracy in the prospective cohort approached 80%. Classification of patients into the 4 molecular subtypes of GB achieved 76% accuracy. Conclusions: By employing machine learning techniques, we were able to demonstrate that imaging patterns are highly predictive of patient survival. Additionally, we found that GB subtypes have distinctive imaging phenotypes. These results reveal that when imaging markers related to infiltration, cell density, microvascularity, and blood-brain barrier compromise are integrated via advanced pattern analysis methods, they form very accurate predictive biomarkers. These predictive markers rely solely on preoperative images; hence, they can significantly augment the diagnosis and treatment of GB patients.
    Article · Aug 2015 · Neurosurgery
  • Amir Gholami · Judith Hill · Dhairya Malhotra · George Biros
    ABSTRACT: We present a new library for parallel distributed Fast Fourier Transforms (FFT). Despite the large amount of work on FFTs, we show that significant speedups can be achieved for distributed transforms. The importance of the FFT in science and engineering and the advances in high performance computing necessitate further improvements. AccFFT extends existing FFT libraries for x86 architectures (CPUs) and CUDA-enabled Graphics Processing Units (GPUs) to distributed-memory clusters using the Message Passing Interface (MPI). Our library uses specifically optimized all-to-all communication algorithms to efficiently perform the communication phase of the distributed FFT algorithm. The GPU-based algorithm effectively hides the overhead of PCIe transfers. We present numerical results on the Maverick and Stampede platforms at the Texas Advanced Computing Center (TACC) and on the Titan system at the Oak Ridge National Laboratory (ORNL). We compare the CPU version of AccFFT with the P3DFFT and PFFT libraries and show a consistent $2-3\times$ speedup across a range of processor counts and problem sizes. The comparison of the GPU code with the FFTE library shows a similar trend, with a $2\times$ speedup. The library has been tested up to 131K cores and 4,096 GPUs of Titan, and up to 16K cores of Stampede.
    Article · Jun 2015
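The serial skeleton a distributed FFT parallelizes: a 3-D transform computed as batches of 1-D FFTs along each axis. In a pencil decomposition, the data reshuffles between axes become the all-to-all exchanges that AccFFT optimizes. A single-node numpy sketch (the library's actual decomposition, communication overlap, and GPU staging are far more involved):

```python
# A 3-D FFT as three batches of 1-D FFTs separated by axis changes; in a
# pencil-decomposed distributed FFT, each change of axis is an MPI all-to-all.
import numpy as np

def fft3d_by_axes(f):
    f = np.fft.fft(f, axis=0)     # local 1-D FFTs along x
    f = np.fft.fft(f, axis=1)     # (distributed code would transpose here)
    f = np.fft.fft(f, axis=2)     # (and here, via all-to-all exchanges)
    return f

f = np.random.default_rng(0).normal(size=(32, 32, 32))
assert np.allclose(fft3d_by_axes(f), np.fft.fftn(f))
```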
  • William B. March · Bo Xiao · Chenhan D. Yu · George Biros
    ABSTRACT: We present a parallel tree code for fast kernel summation in high dimensions -- a common problem in data analysis and computational statistics. Fast kernel summations can be viewed as approximation schemes for dense kernel matrices. Tree code algorithms (or simply tree codes) construct low-rank approximations of certain off-diagonal blocks of the kernel matrix. These blocks are identified with the help of spatial data structures, typically trees. There is extensive work on tree codes and their parallelization for kernel summations in three dimensions, but there is little work on high-dimensional problems. Recently, we introduced a novel tree code, ASKIT, which resolves most of the shortcomings of existing methods. We introduce novel parallel algorithms for ASKIT, derive complexity estimates, and demonstrate scalability on synthetic, scientific, and image datasets. In particular, we introduce a local essential tree construction that extends to arbitrary dimensions in a scalable manner. We introduce data transformations for memory locality and use GPU acceleration. We report results on the "Maverick" and "Stampede" systems at the Texas Advanced Computing Center. Our largest computations involve two billion points in 64 dimensions on 32,768 x86 cores and 8 million points in 784 dimensions on 16,384 x86 cores.
    Conference Paper · May 2015
  • Amir Gholami · Andreas Mang · George Biros
    ABSTRACT: We present a numerical scheme for solving a parameter estimation problem for a model of low-grade glioma growth. Our goal is to estimate the spatial distribution of tumor concentration, as well as the magnitude of anisotropic tumor diffusion. We use a constrained optimization formulation with a reaction-diffusion model that results in a system of nonlinear partial differential equations. In our formulation, we estimate the parameters using partially observed, noisy tumor concentration data at two different time instances, along with white matter fiber directions derived from diffusion tensor imaging. The optimization problem is solved with a Gauss-Newton reduced space algorithm. We present the formulation and outline the numerical algorithms for solving the resulting equations. We test the method using a synthetic dataset and compute the reconstruction error for different noise levels and detection thresholds for monofocal and multifocal test cases.
    Article · May 2015 · Journal of Mathematical Biology
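A minimal 1-D sketch of a reaction-diffusion forward model of the kind inverted here; isotropic diffusion and logistic (Fisher-KPP) growth stand in for the paper's anisotropic, DTI-informed 3-D model, and all coefficients are arbitrary:

```python
# Forward model sketch: c_t = k * c_xx + rho * c * (1 - c) with crude no-flux
# boundaries, marched by explicit finite differences. The paper inverts for the
# initial tumor concentration and diffusion magnitude; here we only march forward.
import numpy as np

n, L = 200, 1.0
dx = L / (n - 1)
k, rho = 1e-3, 5.0                 # diffusion and proliferation coefficients
dt = 0.4 * dx * dx / k             # respect the explicit stability limit
c = np.exp(-((np.linspace(0, L, n) - 0.3) ** 2) / 1e-3)  # seed "tumor"

for _ in range(int(1.0 / dt)):
    lap = np.empty_like(c)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]          # crude no-flux boundaries
    c = c + dt * (k * lap + rho * c * (1 - c))

print(f"tumor mass at t = 1: {c.sum() * dx:.3f}")
```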
  • Andreas Mang · George Biros
    ABSTRACT: We propose regularization schemes for deformable registration and efficient algorithms for their numerical approximation. We treat image registration as a variational optimal control problem. The deformation map is parametrized by its velocity. Tikhonov regularization ensures well-posedness. Our scheme augments standard smoothness regularization operators based on $H^1$- and $H^2$-seminorms with a constraint on the divergence of the velocity field, which resembles variational formulations for Stokes incompressible flows. In our formulation, we invert for a stationary velocity field and a mass-source map. This allows us to explicitly control the compressibility of the deformation map and, thereby, the determinant of the deformation gradient. We also introduce a new regularization scheme that allows us to control shear. We use a globalized, preconditioned, matrix-free, reduced-space Gauss-Newton-Krylov scheme for numerical optimization. We exploit variable-elimination techniques to reduce the number of unknowns of our system; we only iterate on the reduced space of the velocity field. The numerical experiments demonstrate that we can control the determinant of the deformation gradient without compromising registration quality. This additional control allows us to avoid oversmoothing of the deformation map. We also demonstrate that we can promote or penalize shear while controlling the determinant of the deformation gradient.
    Article · Mar 2015
  • Andreas Mang · George Biros
    ABSTRACT: We propose numerical algorithms for solving large deformation diffeomorphic image registration problems. We formulate the nonrigid image registration problem as a problem of optimal control. This leads to an infinite-dimensional partial differential equation (PDE) constrained optimization problem. The PDE constraint consists, in its simplest form, of a hyperbolic transport equation for the evolution of the image intensity. The control variable is the velocity field. Tikhonov regularization on the control ensures well-posedness. We consider standard smoothness regularization based on $H^1$- or $H^2$-seminorms. We augment this regularization scheme with a constraint on the divergence of the velocity field rendering the deformation incompressible and thus ensuring that the determinant of the deformation gradient is equal to one, up to the numerical error. We use a Fourier pseudospectral discretization in space and a Chebyshev pseudospectral discretization in time. We use a preconditioned, globalized, matrix-free, inexact Newton-Krylov method for numerical optimization. A parameter continuation is designed to estimate an optimal regularization parameter. Regularity is ensured by controlling the geometric properties of the deformation field. Overall, we arrive at a black-box solver. We study spectral properties of the Hessian, grid convergence, numerical accuracy, computational efficiency, and deformation regularity of our scheme. We compare the designed Newton-Krylov methods with a globalized preconditioned gradient descent. We study the influence of a varying number of unknowns in time. The reported results demonstrate excellent numerical accuracy, guaranteed local deformation regularity, and computational efficiency with an optional control on local mass conservation. The Newton-Krylov methods clearly outperform the Picard method if high accuracy of the inversion is required.
    Article · Feb 2015 · SIAM Journal on Imaging Sciences
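In compact form, the optimal control formulation described in this and the preceding abstract reads (our notation; $m$ is the transported image intensity, $m_0$ and $m_T$ the template and reference images, $v$ the stationary velocity control, $\beta$ the regularization weight):

```latex
\min_{v}\ \frac{1}{2}\int_\Omega \bigl(m(x,1) - m_T(x)\bigr)^2\,dx
          \;+\; \frac{\beta}{2}\,\|v\|_{H^k}^2, \qquad k \in \{1,2\},
\quad \text{subject to}\quad
\partial_t m + v\cdot\nabla m = 0,\qquad m(\cdot,0) = m_0,
\qquad \bigl(\text{optionally } \nabla\cdot v = 0\bigr).
```

The optional divergence constraint is what renders the deformation incompressible, forcing the determinant of the deformation gradient to one up to numerical error.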
  • Dhairya Malhotra · Amir Gholami · George Biros
    ABSTRACT: We present a novel numerical scheme for solving the Stokes equation with variable coefficients in the unit box. Our scheme is based on a volume integral equation formulation. Compared to finite element methods, our formulation decouples the velocity and pressure, generates velocity fields that are by construction divergence-free to high accuracy, and its performance does not depend on the order of the basis used for discretization. In addition, we employ a novel adaptive fast multipole method for volume integrals to obtain a scheme that is algorithmically optimal. Our scheme supports non-uniform discretizations and is spectrally accurate. To increase per-node performance, we have integrated our code with both NVIDIA and Intel accelerators. In our largest scalability test, we solved a problem with 20 billion unknowns, using a 14th-order approximation for the velocity, on 2048 nodes of the Stampede system at the Texas Advanced Computing Center. We achieved 0.656 petaFLOPS for the overall code (23% efficiency) and one petaFLOPS for the volume integrals (33% efficiency). As an application example, we simulate Stokes flow in a porous medium with a highly complex pore structure, using a penalty formulation to enforce the no-slip condition.
    Conference Paper · Nov 2014
  • William B. March · Bo Xiao · George Biros
    ABSTRACT: We present a fast algorithm for kernel summation problems in high dimensions. These problems appear in computational physics, numerical approximation, non-parametric statistics, and machine learning. In our context, the sums depend on a kernel function that is a pair potential defined on a dataset of points in a high-dimensional Euclidean space. A direct evaluation of the sum scales quadratically with the number of points. Fast kernel summation methods can reduce this cost to linear complexity, but the constants involved do not scale well with the dimensionality of the dataset. The main algorithmic components of fast kernel summation algorithms are the separation of the kernel sum between near and far field (which is the basis for pruning) and the efficient and accurate approximation of the far field. We introduce novel methods for pruning and approximating the far field. Our far field approximation requires only kernel evaluations and does not use analytic expansions. Pruning is not done using bounding boxes but rather combinatorially using a sparsified nearest-neighbor graph of the input. The time complexity of our algorithm depends linearly on the ambient dimension. The error in the algorithm depends on the low-rank approximability of the far field, which in turn depends on the kernel function and on the intrinsic dimensionality of the distribution of the points. The error of the far field approximation does not depend on the ambient dimension. We present the new algorithm along with experimental results that demonstrate its performance. We report results for Gaussian kernel sums for 100 million points in 64 dimensions, for one million points in 1000 dimensions, and for problems in which the Gaussian kernel has a variable bandwidth. To the best of our knowledge, all of these experiments are impossible or prohibitively expensive with existing fast kernel summation methods.
    Article · Oct 2014 · SIAM Journal on Scientific Computing
  • William B. March · George Biros
    ABSTRACT: We consider fast kernel summations in high dimensions: given a large set of points in $d$ dimensions (with $d \gg 3$) and a pair-potential function (the kernel function), we compute a weighted sum of all pairwise kernel interactions for each point in the set. Direct summation is equivalent to a (dense) matrix-vector multiplication and scales quadratically with the number of points. Fast kernel summation algorithms reduce this cost to log-linear or linear complexity. Treecodes and Fast Multipole Methods (FMMs) deliver tremendous speedups by constructing approximate representations of interactions of points that are far from each other. In algebraic terms, these representations correspond to low-rank approximations of blocks of the overall interaction matrix. Existing approaches require an excessive number of kernel evaluations with increasing $d$ and number of points in the dataset. To address this issue, we use a randomized algebraic approach in which we first sample the rows of a block and then construct its approximate, low-rank interpolative decomposition. We examine the feasibility of this approach theoretically and experimentally. We provide a new theoretical result showing a tighter bound on the reconstruction error from uniformly sampling rows than the existing state-of-the-art. We demonstrate that our sampling approach is competitive with existing (but prohibitively expensive) methods from the literature. We also construct kernel matrices for the Laplacian, Gaussian, and polynomial kernels -- all commonly used in physics and data analysis. We explore the numerical properties of blocks of these matrices, and show that they are amenable to our approach. Depending on the data set, our randomized algorithm can successfully compute low rank approximations in high dimensions. We report results for data sets from four dimensions up to 1000 dimensions.
    Article · Sep 2014 · Applied and Computational Harmonic Analysis
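A minimal numpy/scipy sketch of the approach analyzed here: uniformly sample rows of an off-diagonal kernel block, choose skeleton columns by a pivoted QR on the sample, and reconstruct the block from those columns. The sizes, kernel, and data below are illustrative stand-ins; the low intrinsic dimension is what makes the block approximable, as the abstract notes:

```python
# Approximate an off-diagonal Gaussian-kernel block from a uniform sample of
# its rows: pick skeleton columns by pivoted QR on the sampled rows, then
# express every column of the block in terms of the skeleton columns.
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 64))              # intrinsic dimension 3, ambient 64
X = rng.normal(size=(2000, 3)) @ A        # "targets"
Y = rng.normal(size=(2000, 3)) @ A + 4.0  # well-separated "sources"
h = 8.0
d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
K = np.exp(-d2 / (2.0 * h * h))           # numerically low-rank far-field block

s, r = 200, 50                            # sampled rows, target rank
rows = rng.choice(K.shape[0], size=s, replace=False)
_, _, piv = qr(K[rows], mode="economic", pivoting=True)
cols = piv[:r]                            # column skeleton from the sample
# Interpolation weights fitted on the sampled rows only (no access to all of K).
T, *_ = np.linalg.lstsq(K[np.ix_(rows, cols)], K[rows], rcond=None)
err = np.linalg.norm(K - K[:, cols] @ T) / np.linalg.norm(K)
print(f"relative error, {s} sampled rows, rank {r}: {err:.1e}")
```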
  • Bryan Quaife · George Biros
    ABSTRACT: We construct a high-order adaptive time-stepping scheme for vesicle suspensions with viscosity contrast. The high-order accuracy is achieved using a spectral deferred correction (SDC) method, and adaptivity is achieved by estimating the local truncation error from the numerical drift of quantities that the physics keeps constant. Numerical examples demonstrate that our method can handle suspensions with vesicles that are tumbling, tank-treading, or both. Moreover, we demonstrate that a user-prescribed tolerance can be automatically achieved for simulations with long time horizons.
    Article · Aug 2014
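The adaptivity idea in isolation: estimate the local truncation error from the per-step drift of a quantity the exact flow conserves, then adjust the step size with a standard controller. A toy sketch with forward Euler on a harmonic oscillator (the paper uses high-order SDC, and the conserved quantities are the vesicles' area and arc length):

```python
# Adaptive time stepping driven by an invariant: for x'' = -x the exact flow
# conserves E = x^2 + v^2, so the per-step drift of E estimates the local
# error (a stand-in for the physically constant area/length of a vesicle).

def step(x, v, dt):          # forward Euler, order p = 1
    return x + dt * v, v - dt * x

tol, p = 1e-4, 1
t, dt, x, v = 0.0, 0.01, 1.0, 0.0
while t < 10.0:
    xn, vn = step(x, v, dt)
    err = abs((xn * xn + vn * vn) - (x * x + v * v))   # invariant drift
    if err < tol:                                      # accept the step
        t, x, v = t + dt, xn, vn
    # Standard controller: grow/shrink dt toward the per-step tolerance.
    dt = 0.9 * dt * (tol / max(err, 1e-16)) ** (1.0 / (p + 1))

print(f"energy drift at t = 10: {x * x + v * v - 1.0:+.1e}")
```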
  • Amir Gholami · Dhairya Malhotra · Hari Sundar · George Biros
    ABSTRACT: We discuss the fast solution of the Poisson problem on a unit cube. We benchmark the performance of the most scalable methods for the Poisson problem: the Fast Fourier Transform (FFT), the Fast Multipole Method (FMM), geometric multigrid (GMG), and algebraic multigrid (AMG). The GMG and FMM are novel parallel schemes using high-order approximation for Poisson problems developed in our group. The FFT code is from the P3DFFT library and the AMG code from the ML package of the Trilinos library. We examine and report results for weak scaling, strong scaling, and time to solution for uniform and highly refined grids. We present results on the Stampede system at the Texas Advanced Computing Center and on the Titan system at the Oak Ridge National Laboratory. In our largest test case, we solved a problem with 600 billion unknowns on 229,379 cores of Titan. Overall, all methods scale quite well to these problem sizes. We have tested all of the methods with different source distributions. Our results show that FFT is the method of choice for smooth source functions that can be resolved with a uniform mesh. However, it loses its performance in the presence of highly localized features in the source function. FMM and GMG considerably outperform FFT for those cases.
    Article · Aug 2014
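The FFT option in its simplest setting: on a periodic unit cube, a spectral Poisson solve is one forward transform, a pointwise division by $|k|^2$, and one inverse transform. A minimal numpy sketch (the benchmarked codes are distributed and far more general):

```python
# Spectral solution of -Laplace(u) = f on the periodic unit cube.
import numpy as np

n = 64
x = np.linspace(0, 1, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u_exact = np.sin(2 * np.pi * X) * np.sin(4 * np.pi * Y) * np.cos(2 * np.pi * Z)
f = (4 + 16 + 4) * np.pi**2 * u_exact          # f = -Laplace(u_exact)

k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # angular wavenumbers
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
k2 = KX**2 + KY**2 + KZ**2
k2[0, 0, 0] = 1.0                              # avoid 0/0; pins the mean to 0

u = np.fft.ifftn(np.fft.fftn(f) / k2).real
print(f"max error: {np.abs(u - u_exact).max():.2e}")
```

The pointwise solve in Fourier space is what ties the method to uniform grids; a highly localized source forces a globally fine mesh, which is where the abstract reports FMM and GMG winning.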
  • Amir Gholami · Andreas Mang · George Biros
    ABSTRACT: We present a numerical scheme for solving a parameter estimation problem for a model of low-grade glioma growth. Our goal is to estimate tumor infiltration into the brain parenchyma for a reaction-diffusion tumor growth model. We use a constrained optimization formulation that results in a system of nonlinear partial differential equations (PDEs). In our formulation, we estimate the parameters using the data from segmented images at two different time instances, along with white matter fiber directions derived from diffusion tensor imaging (DTI). The parameters we seek to estimate are the spatial tumor concentration and the extent of anisotropic tumor diffusion. The optimization problem is solved with a Gauss-Newton reduced space algorithm. We present the formulation, outline the numerical algorithms and conclude with numerical experiments on synthetic datasets. Our results show the feasibility of the proposed methodology.
    Article · Aug 2014
  • Bryan Quaife · George Biros
    ABSTRACT: The discretization of the double-layer potential integral equation for the interior Dirichlet Laplace problem in a domain with smooth boundary results in a linear system that has a bounded condition number. Thus, the number of iterations required for the convergence of a Krylov method is, asymptotically, independent of the discretization size $N$. Using the Fast Multipole Method (FMM) to accelerate the matrix-vector products, we obtain an optimal $O(N)$ solver. In practice, however, when the geometry is complicated, the number of Krylov iterations behaves in an $N$-dependent manner and can be quite large. In many applications, such cost is prohibitively expensive. There is a need, therefore, for designing preconditioners that reduce the number of Krylov iterations. We summarize the different methodologies that have appeared in the literature (single-grid, multigrid, approximate sparse inverses) and we propose a new class of preconditioners based on an FMM-based spatial decomposition of the double-layer operator. We present an experimental study in which we compare the different approaches and we discuss the merits and shortcomings of our approach.
    Article · Jun 2014 · Numerical Linear Algebra with Applications
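A toy illustration of the premise with scipy: GMRES on a second-kind system $A = I/2 + K$, with and without a simple block-Jacobi preconditioner. The diagonal blocks stand in for a near-field/far-field spatial decomposition; this is not the paper's FMM-based preconditioner:

```python
# GMRES iteration counts for a second-kind system (I/2 + K) x = b, with and
# without a block-Jacobi preconditioner built from the diagonal blocks of A.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n, nb = 512, 8                        # unknowns, number of diagonal blocks
m = n // nb
K = 0.01 * rng.normal(size=(n, n))    # weak global coupling ("far field")
for i in range(nb):                   # strong, stiff local blocks ("near field")
    W = rng.normal(size=(m, m))
    K[i*m:(i+1)*m, i*m:(i+1)*m] += W @ W.T / m
A = 0.5 * np.eye(n) + K
b = rng.normal(size=n)

def gmres_iters(M=None):
    res = []
    gmres(A, b, M=M, rtol=1e-10, restart=200,
          callback=lambda r: res.append(r), callback_type="pr_norm")
    return len(res)

# Preconditioner: exact inverses of the diagonal blocks of A.
inv_blocks = [np.linalg.inv(A[i*m:(i+1)*m, i*m:(i+1)*m]) for i in range(nb)]
M = LinearOperator((n, n), matvec=lambda v: np.concatenate(
    [inv_blocks[i] @ v[i*m:(i+1)*m] for i in range(nb)]))

print("unpreconditioned GMRES iterations:", gmres_iters())
print("block-Jacobi GMRES iterations:    ", gmres_iters(M))
```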
  • Bryan Quaife · George Biros
    ABSTRACT: We present an adaptive arbitrary-order accurate time-stepping numerical scheme for the flow of vesicles suspended in Stokesian fluids. Our scheme can be summarized as an approximate implicit spectral deferred correction (SDC) method. Applying a textbook fully implicit SDC scheme to vesicle flows is prohibitively expensive. For this reason we introduce several approximations. Our scheme is based on a semi-implicit linearized low-order time stepping method. (Our discretization is spectrally accurate in space.) We also use invariant properties of vesicle flows, constant area and boundary length in two dimensions, to reduce the computational cost of error estimation for adaptive time stepping. We present results in two dimensions for single-vesicle flows, constricted geometry flows, converging flows, and flows in a Couette apparatus. We experimentally demonstrate that the proposed scheme enables automatic selection of the step size and high-order accuracy.
    Article · May 2014 · Journal of Computational Physics
  • Hari Sundar · Georg Stadler · George Biros
    ABSTRACT: We present a comparison of different multigrid approaches for the solution of systems arising from high-order continuous finite element discretizations of elliptic partial differential equations on complex geometries. We consider the pointwise Jacobi, the Chebyshev-accelerated Jacobi and the symmetric successive over-relaxation (SSOR) smoothers, as well as elementwise block Jacobi smoothing. Three approaches for the multigrid hierarchy are compared: 1) high-order $h$-multigrid, which uses high-order interpolation and restriction between geometrically coarsened meshes; 2) $p$-multigrid, in which the polynomial order is reduced while the mesh remains unchanged, and the interpolation and restriction incorporate the different-order basis functions; and 3) a first-order approximation multigrid preconditioner constructed using the nodes of the high-order discretization. This latter approach is often combined with algebraic multigrid for the low-order operator and is attractive for high-order discretizations on unstructured meshes, where geometric coarsening is difficult. Based on a simple performance model, we compare the computational cost of the different approaches. Using scalar test problems in two and three dimensions with constant and varying coefficients, we compare the performance of the different multigrid approaches for polynomial orders up to 16. Overall, both $h$- and $p$-multigrid work well; the first-order approximation is less efficient. For constant coefficients, all smoothers work well. For variable coefficients, Chebyshev and SSOR smoothing outperforms Jacobi smoothing. While all of the tested methods converge in a mesh-independent number of iterations, none of them behaves completely independently of the polynomial order. When multigrid is used as a preconditioner in a Krylov method, the iteration number decreases significantly compared to using multigrid as a solver.
    Article · Feb 2014 · Numerical Linear Algebra with Applications
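The common building block behind the compared hierarchies, reduced to its simplest instance: a two-grid cycle for the 1-D Poisson problem with weighted-Jacobi smoothing. The $h$-, $p$-, and first-order variants in the paper change the operators and grid transfers, not this structure. A minimal sketch:

```python
# Two-grid cycle for -u'' = f on (0,1) with homogeneous Dirichlet BCs:
# weighted-Jacobi smoothing, full-weighting restriction, linear interpolation,
# and an exact coarse solve. Residual reduction per cycle is mesh-independent.
import numpy as np

def poisson(n):
    """(n-1) x (n-1) second-difference matrix for mesh width h = 1/n."""
    h = 1.0 / n
    return (2 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h**2

def jacobi(A, u, f, nu, w=2.0 / 3.0):
    d = np.diag(A)
    for _ in range(nu):
        u = u + w * (f - A @ u) / d
    return u

def prolong(ec, n):
    """Linear interpolation from the n/2-grid to the n-grid."""
    e = np.zeros(n - 1)
    e[1::2] = ec                          # coincident coarse nodes
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    e[2:-2:2] = 0.5 * (ec[:-1] + ec[1:])  # remaining midpoints
    return e

def two_grid(A, Ac, u, f, n):
    u = jacobi(A, u, f, nu=3)                           # pre-smooth
    r = f - A @ u
    rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])   # full weighting
    u = u + prolong(np.linalg.solve(Ac, rc), n)         # coarse correction
    return jacobi(A, u, f, nu=3)                        # post-smooth

n = 128
A, Ac = poisson(n), poisson(n // 2)
f, u = np.ones(n - 1), np.zeros(n - 1)
for k in range(8):
    u = two_grid(A, Ac, u, f, n)
    print(f"cycle {k + 1}: residual norm {np.linalg.norm(f - A @ u):.2e}")
```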
  • Bryan Quaife · George Biros
    ABSTRACT: We consider numerical algorithms for the simulation of the rheology of two-dimensional vesicles suspended in a viscous Stokesian fluid. The vesicle evolution dynamics is governed by hydrodynamic and elastic forces. The elastic forces are due to the local inextensibility of the vesicle membrane and its resistance to bending. Numerically resolving vesicle flows poses several challenges. For example, we need to resolve moving interfaces, address stiffness due to bending, enforce the inextensibility constraint, and efficiently compute the (non-negligible) long-range hydrodynamic interactions. Our method is based on the work of Rahimian, Veerapaneni, and Biros, "Dynamic simulation of locally inextensible vesicles suspended in an arbitrary two-dimensional domain, a boundary integral method," Journal of Computational Physics, 229(18), 2010. It is a boundary integral formulation of the Stokes equations coupled to the interface mass continuity and force balance. We extend the algorithms presented in that paper to increase the robustness of the method and to enable simulations with concentrated suspensions. In particular, we propose a scheme in which both intra-vesicle and inter-vesicle interactions are treated semi-implicitly. In addition, we use specialized quadrature for near-singular integrals and we introduce a spectrally accurate collision detection scheme. We test the proposed methodologies on both unconfined and confined flows for vesicles whose internal fluid may have a viscosity contrast with the bulk medium. Our experiments demonstrate the importance of treating both intra-vesicle and inter-vesicle interactions accurately.
    Article · Sep 2013 · Journal of Computational Physics

Publication Stats

2k Citations
109.82 Total Impact Points

Institutions

  • 2005-2015
    • University of Texas at Austin
      • Institute for Computational Engineering and Sciences
      Austin, Texas, United States
  • 2009-2011
    • University of Houston
      • Department of Computer Science
      Houston, TX, United States
  • 1970-2011
    • Georgia Institute of Technology
      • Department of Biomedical Engineering
      Atlanta, Georgia, United States
  • 2004-2009
    • University of Pennsylvania
      • Section of Biomedical Image Analysis - SBIA
      • Department of Mechanical Engineering and Applied Mechanics
      • Department of Computer and Information Science
      Philadelphia, Pennsylvania, United States
  • 2008
    • William Penn University
      Philadelphia, Pennsylvania, United States
  • 1999-2005
    • Carnegie Mellon University
      Pittsburgh, Pennsylvania, United States
  • 2002-2004
    • CUNY Graduate Center
      New York, New York, United States