This paper describes an algorithm to extract adaptive and quality 3D meshes directly from volumetric imaging data. The extracted tetrahedral and hexahedral meshes are extensively used in the Finite Element Method (FEM). A top-down octree subdivision coupled with the dual contouring method is used to rapidly extract adaptive 3D finite element meshes with correct topology from volumetric imaging data. Edge contraction and smoothing methods are used to improve the mesh quality. The main contribution is extending the dual contouring method to crack-free interval-volume 3D meshing with feature-sensitive adaptation. Compared to other methods that extract tetrahedral meshes from imaging data, our method generates adaptive and quality 3D meshes without introducing any hanging nodes. The algorithm has been successfully applied to construct the geometric model of a biomolecule for finite element calculations.
This paper describes an automatic and efficient approach to construct unstructured tetrahedral and hexahedral meshes for a composite domain made up of heterogeneous materials. The boundaries of these material regions form non-manifold surfaces. In earlier papers, we developed an octree-based isocontouring method to construct unstructured 3D meshes for a single-material (homogeneous) domain with manifold boundary. In this paper, we introduce the notion of a material change edge and use it to identify the interface between two or more different materials. A novel method to calculate the minimizer point for a cell shared by more than two materials is provided, which forms a non-manifold node on the boundary. We then mesh all the material regions simultaneously and automatically while conforming to their boundaries directly from volumetric data. Both material change edges and interior edges are analyzed to construct tetrahedral meshes, and interior grid points are analyzed for proper hexahedral mesh construction. Finally, edge-contraction and smoothing methods are used to improve the quality of tetrahedral meshes, and a combination of pillowing, geometric flow and optimization techniques is used for hexahedral mesh quality improvement. The shrink set of pillowing schemes is defined automatically as the boundary of each material region. Several application results of our multi-material mesh generation method are also provided.
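As an illustration of the minimizer-point idea, standard dual contouring places one vertex per cell at the point minimizing a quadratic error function (QEF) built from the edge-intersection points and their normals. The following is a minimal numpy sketch of that single-material QEF solve; the paper's multi-material extension for non-manifold nodes adds constraints not shown here.

```python
import numpy as np

def qef_minimizer(points, normals):
    """Dual-contouring minimizer point: the x minimizing
    sum_i (n_i . (x - p_i))^2 over the cell's edge-intersection
    points p_i (rows of `points`) with unit normals n_i."""
    A = np.asarray(normals, dtype=float)
    p = np.asarray(points, dtype=float)
    b = np.einsum("ij,ij->i", A, p)        # b_i = n_i . p_i
    # least squares handles under-determined cells (planar/edge features)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

For three mutually orthogonal intersection planes the minimizer is simply their common corner, which makes the routine easy to sanity-check.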
A three-dimensional viscous finite element model is presented in this paper for the analysis of acoustic fluid-structure interaction systems including, but not limited to, cochlear-based transducers. The model consists of a three-dimensional viscous acoustic fluid medium interacting with a two-dimensional flat structure domain. The fluid field is governed by the linearized Navier-Stokes equations, with the fluid displacements and the pressure chosen as independent variables. The mixed displacement/pressure based formulation is used in the fluid field in order to alleviate locking in the nearly incompressible fluid. The structure is modeled as a Mindlin plate with or without residual stress. Hinton-Huang's 9-noded Lagrangian plate element is chosen in order to be compatible with the 27/4 u/p fluid elements. The results from the full 3D FEM model are in good agreement with experimental results and with other FEM results, including Beltman's thin-film viscoacoustic element and two-and-a-half-dimensional inviscid elements. Although computationally expensive, the model provides a benchmark solution against which other numerical models or approximations can be compared besides experiments, and it is capable of modeling irregular geometries and material properties to which other numerical models may not be applicable.
We present a variational approach to smooth molecular (proteins, nucleic acids) surface constructions, starting from atomic coordinates, as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations, traditionally used in understanding protein and nucleic-acid folding processes, are based on molecular force fields and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse-grained molecular models, which represent clusters of atoms with similar physical properties by pseudo-atoms, resulting in coarser-resolution molecular surfaces. We consider generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts from deriving a general-form second-order geometric partial differential equation in the level-set formulation, by minimizing a first-order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C^2 interpolation algorithm is also utilized. A narrow-band, tri-cubic B-spline level-set method is then used to provide C^2-smooth and resolution-adaptive molecular surfaces.
This paper describes an algorithm to extract adaptive and quality quadrilateral/hexahedral meshes directly from volumetric data. First, a bottom-up surface topology preserving octree-based algorithm is applied to select a starting octree level. Then the dual contouring method is used to extract a preliminary uniform quad/hex mesh, which is decomposed into finer quads/hexes adaptively without introducing any hanging nodes. The positions of all boundary vertices are recalculated to approximate the boundary surface more accurately. Mesh adaptivity can be controlled by a feature sensitive error function, the regions that users are interested in, or finite element calculation results. Finally, a relaxation based technique is deployed to improve mesh quality. Several demonstration examples are provided from a wide variety of application domains. Some extracted meshes have been extensively used in finite element simulations.
The vasculature consists of a complex network of vessels ranging from large arteries to arterioles, capillaries, venules, and veins. This network is vital for the supply of oxygen and nutrients to tissues and the removal of carbon dioxide and waste products from tissues. Because of its primary role as a pressure-driven chemomechanical transport system, it should not be surprising that mechanics plays a vital role in the development and maintenance of the normal vasculature as well as in the progression and treatment of vascular disease. This review highlights some past successes of vascular biomechanics, but emphasizes the need for research that synthesizes complementary advances in molecular biology, biomechanics, medical imaging, computational methods, and computing power for purposes of increasing our understanding of vascular physiology and pathophysiology as well as improving the design of medical devices and clinical interventions, including surgical procedures. That is, computational mechanics has great promise to contribute to the continued improvement of vascular health.
Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived and inherently uncertain. While earlier methods have used a brute-force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters.
We observe that the prestretches of elastin and collagen are most critical to maintaining homeostasis, while the values of the material properties are most critical in determining response time. Finally, we outline several challenges to the G&R community for future work. We suggest that these tools provide the first systematic and efficient framework to quantify uncertainties and optimally identify G&R model parameters.
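The advantage of collocation over plain sampling can be illustrated in one dimension: a handful of deterministic quadrature nodes can match the accuracy of thousands of random samples. This toy sketch uses an assumed model output f(ξ) = exp(ξ/2) with ξ ~ N(0,1), whose exact mean is exp(1/8); the paper's adaptive Smolyak sparse grid is not reproduced here.

```python
import numpy as np

def mc_mean(f, n_samples, rng):
    """Monte Carlo estimate of E[f(xi)], xi ~ N(0,1)."""
    return np.mean(f(rng.standard_normal(n_samples)))

def collocation_mean(f, n_nodes):
    """Gauss-Hermite (probabilists') collocation estimate of the same
    expectation: n_nodes weighted, deterministic model evaluations."""
    x, w = np.polynomial.hermite_e.hermegauss(n_nodes)
    return np.sum(w * f(x)) / np.sqrt(2.0 * np.pi)
```

Five collocation nodes already reach roughly single-precision accuracy on this smooth model, whereas ten thousand Monte Carlo samples still carry an error of order 1e-2.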
Articular cartilage exhibits viscoelasticity in response to mechanical loading that is well described using biphasic or poroelastic continuum models. To date, boundary element methods (BEMs) have not been employed in modeling biphasic tissue mechanics. A three-dimensional direct poroelastic BEM, formulated in the Laplace transform domain, is applied to modeling stress relaxation in cartilage. Macroscopic stress relaxation of a poroelastic cylinder in uniaxial confined compression is simulated and validated against a theoretical solution. Microscopic cell deformation due to poroelastic stress relaxation is also modeled. An extended Laplace inversion method is employed to accurately represent mechanical responses in the time domain.
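The paper's extended Laplace inversion method is not reproduced here, but the flavor of recovering time-domain responses from Laplace-domain solutions can be sketched with the classical Gaver-Stehfest algorithm, which combines evaluations of the transform at a few real points.

```python
import numpy as np
from math import factorial

def stehfest_invert(F, t, N=12):
    """Gaver-Stehfest numerical inversion of a Laplace transform F(s),
    evaluated at time t > 0. N must be even; N = 12 is a common choice
    in double precision."""
    ln2 = np.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        # Stehfest weight V_k
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)) / (
                factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                * factorial(k - j) * factorial(2 * j - k))
        V = (-1) ** (k + N // 2) * s
        total += V * F(k * ln2 / t)
    return total * ln2 / t
```

Known transform pairs such as F(s) = 1/s^2 with f(t) = t, or F(s) = 1/(s+1) with f(t) = exp(-t), serve as convenient accuracy checks.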
Asynchronous variational integration (AVI) is a tool that improves the numerical efficiency of explicit time stepping schemes when applied to finite element meshes with local spatial refinement. This is achieved by associating an individual time step length with each spatial domain. Furthermore, long-term stability is ensured by its variational structure. This article presents AVI in the context of finite elements based on a weakened weak form (W2) (Liu, 2009), exemplified by continuous assumed gradient elements (Wolff and Bucher, 2011). The article presents the main ideas of the modified AVI, gives implementation notes, and provides a recipe for estimating the critical time step.
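Critical time step recipes for explicit schemes are typically CFL-type bounds built from the element size and the material wave speed. The sketch below is this generic estimate (with an assumed safety factor), not the W2-specific recipe of the article; the point is that AVI lets each element keep its own step instead of forcing the global minimum.

```python
import numpy as np

def elemental_time_steps(h, E, rho, safety=0.5):
    """Per-element critical time steps from the CFL-type estimate
    dt_e = safety * h_e / c, with wave speed c = sqrt(E / rho).
    AVI advances each element with its own dt_e, rather than the
    global minimum used by synchronous explicit schemes."""
    c = np.sqrt(E / rho)
    return safety * np.asarray(h, dtype=float) / c
```

On a locally refined mesh, only the small elements then pay the price of a small time step.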
This article presents a novel approach to collision detection based on distance fields. A novel interpolation ensures stability of the distances in the vicinity of complex geometries. An assumed gradient formulation is introduced, leading to a distance function with a continuous gradient. The gap function is re-expressed, allowing penalty and Lagrange multiplier formulations. The article introduces a node-to-element integration for first-order elements, but also discusses signed distances, partial updates, intermediate surfaces, mortar methods and higher-order elements. The algorithm is fast, simple and robust for complex geometries and self contact. The computed tractions conserve linear and angular momentum even in infeasible contact. Numerical examples illustrate the new algorithm in three dimensions.
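The penalty variant of such a gap-function formulation can be sketched compactly: a node with negative signed distance (penetration) receives a force along the field gradient, scaled by the penetration depth. This is a toy numpy sketch with an assumed penalty stiffness k, not the article's full node-to-element integration.

```python
import numpy as np

def penalty_contact_force(node, sdf, sdf_grad, k=1e4):
    """Penalty contact force for one node against a signed distance
    field: zero when the gap g = sdf(x) is non-negative, otherwise
    k * (-g) * n, pushing along the outward normal n = grad(sdf)/|grad(sdf)|."""
    g = sdf(node)
    if g >= 0.0:
        return np.zeros(len(node))
    n = sdf_grad(node)
    n = n / np.linalg.norm(n)
    return -k * g * n
```

A unit sphere gives a simple check: a node halfway inside is pushed radially outward, a node outside feels nothing.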
In this paper, we develop a geometrically flexible technique for computational fluid–structure interaction (FSI). The motivating application is the simulation of tri-leaflet bioprosthetic heart valve function over the complete cardiac cycle. Due to the complex motion of the heart valve leaflets, the fluid domain undergoes large deformations, including changes of topology. The proposed method directly analyzes a spline-based surface representation of the structure by immersing it into a non-boundary-fitted discretization of the surrounding fluid domain. This places our method within an emerging class of computational techniques that aim to capture geometry on non-boundary-fitted analysis meshes. We introduce the term “immersogeometric analysis” to identify this paradigm.
Many researchers have proposed the use of biomechanical models for high-accuracy soft-organ non-rigid image registration, but one main problem in using comprehensive models is the long computation time required to obtain the solution. In this paper we propose to use the Total Lagrangian formulation of the finite element method together with Dynamic Relaxation for computing intra-operative organ deformations. We study the best ways of estimating the parameters involved, and we propose a termination criterion that can be used to obtain fast results with prescribed accuracy. The simulation results demonstrate the accuracy and computational efficiency of the method, even in cases involving large deformations, nonlinear materials and contacts.
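The dynamic relaxation idea, reaching static equilibrium by integrating a fictitious damped dynamic system until the out-of-balance force vanishes, can be sketched for a linear system. The paper applies it to the nonlinear Total Lagrangian equations with carefully estimated mass and damping parameters; the fixed values below are illustrative assumptions only.

```python
import numpy as np

def dynamic_relaxation(K, f, dt=0.1, damping=1.0, tol=1e-8, max_steps=20000):
    """Dynamic relaxation: find the static equilibrium K u = f by
    integrating a damped pseudo-dynamic system with fictitious lumped
    masses until the residual (out-of-balance force) drops below tol."""
    n = len(f)
    u = np.zeros(n)
    v = np.zeros(n)
    m = np.ones(n)  # fictitious mass: tunes stability, not the answer
    for _ in range(max_steps):
        r = f - K @ u                  # residual = out-of-balance force
        if np.linalg.norm(r) < tol:
            break
        v = v + dt * (r / m - damping * v)   # damped explicit update
        u = u + dt * v
    return u
```

Because only matrix-vector products appear in the loop, the scheme is fully explicit, which is what makes it attractive for real-time intra-operative use.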
A multiscale procedure to couple a mesoscale discrete particle model and a macroscale continuum model of incompressible fluid flow is proposed in this study. We call this procedure the mesoscopic bridging scale (MBS) method since it is developed on the basis of the bridging scale method for coupling molecular dynamics and finite element models [G.J. Wagner, W.K. Liu, Coupling of atomistic and continuum simulations using a bridging scale decomposition, J. Comput. Phys. 190 (2003) 249-274]. We derive the governing equations of the MBS method and show that the differential equations of motion of the mesoscale discrete particle model and finite element (FE) model are only coupled through the force terms. Based on this coupling, we express the finite element equations, which rely on the Navier-Stokes and continuity equations, in a way that the internal nodal FE forces are evaluated using viscous stresses from the mesoscale model. The dissipative particle dynamics (DPD) method for the discrete particle mesoscale model is employed. The entire fluid domain is divided into a local domain and a global domain. Fluid flow in the local domain is modeled with both the DPD and FE methods, while fluid flow in the global domain is modeled by the FE method only. The MBS method is suitable for modeling complex (colloidal) fluid flows, where continuum methods are sufficiently accurate only in the large fluid domain, while small, local regions of particular interest require detailed modeling by mesoscopic discrete particles. Solved examples - simple Poiseuille flow and driven cavity flow - illustrate the applicability of the proposed MBS method.
It is now well known that altered hemodynamics can alter the genes that are expressed by diverse vascular cells, which in turn plays a critical role in the ability of a blood vessel to adapt to new biomechanical conditions and governs the natural history of the progression of many types of disease. Fortunately, when taken together, recent advances in molecular and cell biology, in vivo medical imaging, biomechanics, computational mechanics, and computing power provide an unprecedented opportunity to begin to understand such hemodynamic effects on vascular biology, physiology, and pathophysiology. Moreover, with increased understanding will come the promise of improved designs for medical devices and clinical interventions. The goal of this paper, therefore, is to present a new computational framework that brings together recent advances in computational biosolid and biofluid mechanics that can exploit new information on the biology of vascular growth and remodeling as well as in vivo patient-specific medical imaging so as to enable realistic simulations of vascular adaptations, disease progression, and clinical intervention.
We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture.
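The projection-based model order reduction underlying such approaches can be sketched with a plain POD-Galerkin reduction: extract a basis from solution snapshots via the SVD, then project the full system onto it. This toy numpy sketch omits the paper's domain partitioning entirely.

```python
import numpy as np

def pod_basis(snapshots, energy=0.9999):
    """Proper orthogonal decomposition: the truncated left singular
    vectors of the snapshot matrix capturing the requested fraction
    of the squared singular value content."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]

def reduce_system(K, f, V):
    """Galerkin projection of K u = f onto span(V): solve the small
    r x r reduced system, then lift back to the full dimension."""
    Kr = V.T @ K @ V
    fr = V.T @ f
    return V @ np.linalg.solve(Kr, fr)
```

When the true solution lies in the span of the snapshots, the reduced solve reproduces it exactly, which is the usual consistency check for a POD-Galerkin implementation.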
Application of biomechanical modeling techniques in the area of medical image analysis and surgical simulation implies two conflicting requirements: accurate results and high solution speeds. Accurate results can be obtained only by using appropriate models and solution algorithms. In our previous papers we have presented algorithms and solution methods for performing accurate nonlinear finite element analysis of brain shift (which includes mixed mesh, different nonlinear material models, finite deformations and brain-skull contacts) in less than a minute on a personal computer for models having up to 50,000 degrees of freedom. In this paper we present an implementation of our algorithms on a Graphics Processing Unit (GPU) using the new NVIDIA Compute Unified Device Architecture (CUDA), which leads to a more than 20-fold increase in computation speed. This makes it possible to use meshes with more elements, which better represent the geometry, are easier to generate, and provide more accurate results.
This paper presents the formulation and implementation of an Error in Constitutive Equations (ECE) method suitable for large-scale inverse identification of linear elastic material properties in the context of steady-state elastodynamics. In ECE-based methods, the inverse problem is postulated as an optimization problem in which the cost functional measures the discrepancy in the constitutive equations that connect kinematically admissible strains and dynamically admissible stresses. Furthermore, in a more recent modality of this methodology introduced by Feissel and Allix (2007), referred to as the Modified ECE (MECE), the measured data is incorporated into the formulation as a quadratic penalty term. We show that a simple and efficient continuation scheme for the penalty term, suggested by the theory of quadratic penalty methods, can significantly accelerate the convergence of the MECE algorithm. Furthermore, a (block) successive over-relaxation (SOR) technique is introduced, enabling the use of existing parallel finite element codes with minimal modification to solve the coupled system of equations that arises from the optimality conditions in MECE methods. Our numerical results demonstrate that the proposed methodology can successfully reconstruct the spatial distribution of elastic material parameters from partial and noisy measurements in as few as ten iterations in a 2D example and fifty in a 3D example. We show (through numerical experiments) that the proposed continuation scheme can improve the rate of convergence of MECE methods by at least an order of magnitude versus the alternative of using a fixed penalty parameter. Furthermore, the proposed block SOR strategy coupled with existing parallel solvers produces a computationally efficient MECE method that can be used for large scale materials identification problems, as demonstrated on a 3D example involving about 400,000 unknown moduli. 
Finally, our numerical results suggest that the proposed MECE approach can be significantly faster than the conventional approach of L2 minimization using quasi-Newton methods.
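The continuation idea for the penalty term can be illustrated on a small equality-constrained quadratic program: solve a sequence of penalized problems with a geometrically increasing penalty parameter. In this toy sketch each penalized problem is solved directly; in MECE each would be an iterative solve warm-started from its predecessor, which is where the speedup comes from.

```python
import numpy as np

def penalty_continuation(Q, b, A, d, kappas=(1.0, 10.0, 100.0, 1e4, 1e6)):
    """Quadratic-penalty continuation for
        min 0.5 x'Qx - b'x   subject to   A x = d.
    The penalized stationarity condition (Q + k A'A) x = b + k A'd is
    solved for geometrically increasing penalties k; the penalized
    minimizer approaches the constrained one as k grows."""
    x = np.zeros(Q.shape[0])
    for k in kappas:
        x = np.linalg.solve(Q + k * (A.T @ A), b + k * (A.T @ d))
    return x
```

The penalty error decays like O(1/k), so a modest geometric schedule already reaches engineering accuracy.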
We present a method to solve a convection-reaction system based on a least-squares finite element method (LSFEM). For steady-state computations, issues related to recirculation flow are stated and demonstrated with a simple example. The method can compute concentration profiles in open flow even when the generation term is small, which is the case when estimating hemolysis in blood. Time-dependent flows are computed with the space-time LSFEM discretization. We observe that the computed hemoglobin concentration can become negative in certain regions of the flow, which is physically unacceptable. To prevent this, we propose a quadratic transformation of variables. The transformed governing equation can be solved in a straightforward way by LSFEM with no sign of unphysical behavior. The effect of localized high shear on blood damage is shown in a circular Couette-flow-with-blade configuration, and a physiological condition is tested in an arterial graft flow.
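The positivity mechanism of the quadratic transformation can be seen in a toy setting: substituting c = v^2 and evolving v keeps c nonnegative by construction, even when a scheme applied to c directly would undershoot. The sketch below uses explicit Euler on a simple decay equation purely to illustrate the mechanism; it is not the space-time LSFEM of the paper.

```python
import numpy as np

def euler_direct(c0, k, dt, steps):
    """Explicit Euler on dc/dt = -k c; overshoots negative for k*dt > 1."""
    c = c0
    for _ in range(steps):
        c = c + dt * (-k * c)
    return c

def euler_transformed(c0, k, dt, steps):
    """Same decay, but integrating v with c = v^2:
    2 v v' = -k v^2  =>  v' = -k v / 2, so c = v^2 >= 0 always."""
    v = np.sqrt(c0)
    for _ in range(steps):
        v = v + dt * (-k * v / 2.0)
    return v * v
```

With k = 1 and dt = 1.5 the direct update immediately produces a negative concentration, while the transformed update cannot, whatever real value v takes.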
In this paper, we develop a "modified" immersed finite element method (mIFEM), a non-boundary-fitted numerical technique, to study fluid-structure interactions. Using this method, we can more precisely capture the solid dynamics by solving the solid governing equation instead of imposing it based on the fluid velocity field as in the original immersed finite element method (IFEM). Using the IFEM may lead to severe solid mesh distortion because the solid deformation is over-estimated, especially for high Reynolds number flows. In the mIFEM, the solid dynamics is solved using appropriate boundary conditions generated from the surrounding fluid, and therefore produces more accurate and realistic coupled solutions. We show several 2-D and 3-D test cases where the mIFEM has a noticeable advantage in handling complicated fluid-structure interactions when the solid behavior dominates the fluid flow.
We have recently developed and tested an efficient algorithm for solving the nonlinear inverse elasticity problem for a compressible hyperelastic material. The data for this problem are the quasi-static deformation fields within the solid measured at two distinct overall strain levels. The main ingredients of our algorithm are a gradient based quasi-Newton minimization strategy, the use of adjoint equations and a novel strategy for continuation in the material parameters. In this paper we present several extensions to this algorithm. First, we extend it to incompressible media, thereby extending its applicability to tissues which are nearly incompressible under slow deformation. We achieve this by solving the forward problem using a residual-based, stabilized, mixed finite element formulation which circumvents the Ladyzhenskaya-Babuska-Brezzi condition. Second, we demonstrate how the recovery of the spatial distribution of the nonlinear parameter can be improved either by preconditioning the system of equations for the material parameters, or by splitting the problem into two distinct steps. Finally, we present a new strain energy density function with an exponential stress-strain behavior that yields a deviatoric stress tensor, thereby simplifying the interpretation of pressure when compared with other exponential functions. We test the overall approach by solving for the spatial distribution of material parameters from noisy, synthetic deformation fields.
An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number.
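The non-uniformly distributed Gauss-Lobatto-Legendre nodal points mentioned above are the endpoints ±1 together with the roots of the derivative of the degree-N Legendre polynomial. A short numpy sketch:

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(N):
    """Gauss-Lobatto-Legendre nodes of a degree-N spectral element:
    the endpoints -1 and +1 plus the roots of P_N'(x), the derivative
    of the degree-N Legendre polynomial."""
    PN = legendre.Legendre.basis(N)
    interior = PN.deriv().roots()
    return np.sort(np.concatenate(([-1.0], interior.real, [1.0])))
```

For N = 2 this gives {-1, 0, 1}; at higher orders the nodes cluster toward the endpoints, which is what counteracts Runge's phenomenon in high-order Lagrangian interpolation.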
This study presents the optimization of the maintenance scheduling of mechanical components under fatigue loading. The cracks of damaged structures may be detected during non-destructive inspection and subsequently repaired. Fatigue crack initiation and growth show inherent variability, as does the outcome of inspection activities. The problem is addressed within the framework of reliability-based optimization. The initiation and propagation of fatigue cracks are efficiently modeled using cohesive zone elements. The applicability of the method is demonstrated by a numerical example, which involves a plate with two holes subject to alternating stress.
Predicting the outcome of thermotherapies in cancer treatment requires an accurate characterization of the bioheat transfer processes in soft tissues. Due to the biological and structural complexity of tumor (soft tissue) composition and vasculature, it is often very difficult to obtain reliable tissue properties, which are one of the key factors for accurate treatment outcome prediction. Efficient algorithms employing in vivo thermal measurements to determine heterogeneous thermal tissue properties, in conjunction with a detailed sensitivity analysis, can produce essential information for model development and optimal control. The goals of this paper are to present a general formulation of the bioheat transfer equation for heterogeneous soft tissues, review models and algorithms developed for cell damage, heat shock proteins, and soft tissues with nanoparticle inclusions, and demonstrate an overall computational strategy for developing a laser treatment framework with the ability to perform real-time robust calibrations and optimal control. This computational strategy can be applied to other thermotherapies using heat sources such as radio frequency or high-intensity focused ultrasound.
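Formulations of this kind are typically built on the Pennes bioheat equation, which augments heat conduction with a perfusion sink toward arterial temperature and a volumetric source. A minimal 1-D explicit-update sketch follows; the coefficient values are order-of-magnitude placeholders for soft tissue, not fitted data.

```python
import numpy as np

def pennes_step(T, dx, dt, k=0.5, rho_c=3.6e6, w_cb=2e3, Ta=37.0, Q=0.0):
    """One explicit step of the 1-D Pennes bioheat equation
        rho*c dT/dt = k T_xx + w_b*c_b (T_a - T) + Q,
    with zero-flux (insulated) ends. Units are SI; parameter values
    are illustrative soft-tissue magnitudes only."""
    lap = np.empty_like(T)
    lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2
    lap[0] = 2.0 * (T[1] - T[0]) / dx**2      # zero-flux boundary
    lap[-1] = 2.0 * (T[-2] - T[-1]) / dx**2
    return T + dt / rho_c * (k * lap + w_cb * (Ta - T) + Q)
```

With no source, a uniform field at arterial temperature is a fixed point, and a local hot spot relaxes back toward it through both conduction and perfusion.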
For chemical systems involving both fast and slow scales, stiffness presents challenges for efficient stochastic simulation. Two different avenues have been explored to solve this problem. One is the slow-scale stochastic simulation (ssSSA) based on the stochastic partial equilibrium assumption. The other is the tau-leaping method. In this paper we propose a new algorithm, the slow-scale tau-leaping method, which combines some of the best features of these two methods. Numerical experiments are presented which illustrate the effectiveness of this approach.
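The basic (non-slow-scale) tau-leaping step that the new algorithm builds on can be sketched in a few lines: propensities are frozen over a leap of length tau, and each reaction channel fires a Poisson-distributed number of times. This toy numpy sketch clamps the state at zero, a common practical safeguard; the slow-scale partial-equilibrium machinery of the paper is not shown.

```python
import numpy as np

def tau_leap(x0, rates, stoich, tau, steps, rng):
    """Explicit tau-leaping: within each leap of length tau, channel j
    fires a Poisson(a_j(x) * tau) number of times, where a_j is its
    propensity and stoich[j] its state-change vector."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        a = np.array([r(x) for r in rates])   # propensities at current state
        k = rng.poisson(a * tau)              # firings per channel
        x = np.maximum(x + k @ stoich, 0.0)   # apply changes; clamp at zero
    return x
```

A birth-death process with constant birth rate 10 and death rate 0.1x, for example, fluctuates around its stationary mean of 100 under this scheme.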