It has been recognized that fluid-structure interactions (FSI) play an important role in cardiovascular disease initiation and development. However, in vivo MRI multi-component FSI models for human carotid atherosclerotic plaques with bifurcation, and quantitative comparisons of FSI models with fluid-only or structure-only models, are currently lacking in the literature. A 3D non-Newtonian multi-component FSI model based on in vivo/ex vivo MRI images of human atherosclerotic plaques was introduced to investigate flow and plaque stress/strain behaviors which may be related to plaque progression and rupture. Both the artery wall and plaque components were assumed to be hyperelastic, isotropic, incompressible and homogeneous. Blood flow was assumed to be laminar, non-Newtonian, viscous and incompressible. In vivo/ex vivo MRI images were acquired using histologically-validated multi-spectral MRI protocols. The 3D FSI models were solved and the results were compared with those from a Newtonian FSI model and from wall-only/fluid-only models. A 145% difference in maximum principal stress (Stress-P(1)) between the FSI and wall-only models and a 40% difference in flow maximum shear stress (MSS) between the FSI and fluid-only models were found at the throat of the plaque for a severe plaque sample (70% severity by diameter). Flow MSS from the rigid-wall model was much higher (by 20-40% in maximum MSS values and by 100-150% in the stagnation region) than that from the FSI models.
Noninvasively identifying ventricle material properties and the infarct area after heart attack is of great importance in clinical applications. An echo-based computational modeling approach was proposed to investigate left ventricle (LV) mechanical properties and stress conditions using patient-specific data. Echo data were acquired from one healthy volunteer (male, age: 58) and a male patient (age: 60) who had had an acute inferior myocardial infarction one week before echo image acquisition. Standard echocardiograms were obtained using an ultrasound machine (E9, GE Medical Systems, Milwaukee, Wisconsin) with a 3V probe, and the data were segmented for model construction. Finite element models were constructed to obtain ventricle stress and strain conditions. A pre-shrink process was applied so that the model ventricle geometries under end-of-systole pressure matched the in vivo data. Our results indicate that the modeling approach has the potential to be used to determine ventricle material properties. The equivalent Young's modulus of the healthy LV (LV1) was about 30% lower than that of the infarct LV (LV2) at end of diastole, but about 100% higher than that of LV2 at end of systole. This can be explained by LV1 having more active contraction, reflected in the stiffness variations. Using averaged values, at end-systole, the longitudinal curvature of LV2 was 164% higher than that of LV1, and LV stress of LV2 was 82% higher than that of LV1. At end-diastole, the longitudinal curvature of LV2 was still 132% higher than that of LV1, while LV stress of LV2 was only 9% higher than that of LV1. Longitudinal curvature and stress showed the largest differences between the two ventricles, with the infarct LV having the higher longitudinal curvature and stress values. Large-scale studies are needed to further confirm our findings.
Previously, we introduced a computational procedure based on a three-dimensional meshless generalized finite difference (MGFD) method and serial magnetic resonance imaging (MRI) data to quantify patient-specific carotid atherosclerotic plaque growth functions and simulate plaque progression. Structure-only models were used in our previous report. In this paper, fluid-structure interaction (FSI) was added to improve prediction accuracy. One participating patient was scanned three times (T1, T2, and T3, at intervals of about 18 months) to obtain plaque progression data. Blood flow was assumed to be laminar, Newtonian, viscous and incompressible. The Navier-Stokes equations with arbitrary Lagrangian-Eulerian (ALE) formulation were used as the governing equations. Plaque material was assumed to be uniform, homogeneous, isotropic, linear, and nearly incompressible, and the linear elastic model was used. The 3D FSI plaque model was discretized and solved using a meshless generalized finite difference (GFD) method. Growth functions with a) morphology alone; b) morphology and plaque wall stress (PWS); c) morphology and flow shear stress (FSS); and d) morphology, PWS and FSS were introduced to predict future plaque growth based on previous time point data. Starting from the T2 plaque geometry, plaque progression was simulated by solving the FSI model and adjusting plaque geometry using the plaque growth functions iteratively until T3 was reached. Numerically simulated plaque progression agreed very well with the target T3 plaque geometry, with errors of 8.62%, 7.22%, 5.77% and 4.39% for the four growth functions, respectively; the growth function including morphology, plaque wall stress and flow shear stress terms gave the best predictions. Adding the flow shear stress term to the growth function improved the prediction error from 7.22% to 4.39%.
Right and left ventricle (RV/LV) combination models with three different patch materials (Dacron scaffold, treated pericardium, and contracting myocardium), two-layer construction, fiber orientation, and active anisotropic material properties were introduced to evaluate the effects of patch materials on RV function. A material-stiffening approach was used to model active heart contraction. Cardiac magnetic resonance (CMR) imaging was performed to acquire patient-specific ventricular geometries and cardiac motion from a patient with severe RV dilatation due to pulmonary regurgitation who needed RV remodeling and pulmonary valve replacement surgery. Computational models were constructed and solved to obtain RV stroke volume, ejection fraction, patch area variations, and stress/strain data for patch comparisons. Our results indicate that the patch model with contracting myocardium leads to a decreased stress level in the patch area, improved RV function and improved patch area contractility. The maximum Stress-P(1) (maximum principal stress) value at the center of the patch from the Dacron scaffold patch model was 350% higher than that from the other two models. The patch area reduction ratio was 0.3%, 3.1% and 27.4% for the Dacron scaffold, pericardium, and contracting myocardium patches, respectively. These findings suggest that the contracting myocardium patch model may lead to improved recovery of RV function in patients with severe chronic pulmonary regurgitation.
Atherosclerotic plaque rupture and progression have been the focus of intensive investigations in recent years. Plaque rupture is closely related to most severe cardiovascular syndromes such as heart attack and stroke. A computational procedure based on meshless generalized finite difference (MGFD) method and serial magnetic resonance imaging (MRI) data was introduced to quantify patient-specific carotid atherosclerotic plaque growth functions and simulate plaque progression. Participating patients were scanned three times (T(1), T(2), and T(3), at intervals of about 18 months) to obtain plaque progression data. Vessel wall thickness (WT) changes were used as the measure for plaque progression. Since there was insufficient data with the current technology to quantify individual plaque component growth, the whole plaque was assumed to be uniform, homogeneous, hyperelastic, isotropic and nearly incompressible. The linear elastic model was used. The 2D plaque model was discretized and solved using a meshless generalized finite difference (GFD) method. Starting from the T(2) plaque geometry, plaque progression was simulated by solving the solid model and adjusting wall thickness using plaque growth functions iteratively until T(3) is reached. Numerically simulated plaque progression agreed very well with actual plaque geometry at T(3) given by MRI data. We believe this is the first time plaque progression simulation based on multi-year patient-tracking data was reported. Serial MRI-based progression simulation adds time dimension to plaque vulnerability assessment and will improve prediction accuracy for potential plaque rupture risk.
Cardiovascular disease (CVD) is becoming the number one cause of death worldwide. Atherosclerotic plaque rupture and progression are closely related to the most severe cardiovascular syndromes, such as heart attack and stroke. Mechanisms governing plaque rupture and progression are not well understood. A computational procedure based on a three-dimensional meshless generalized finite difference (MGFD) method and serial magnetic resonance imaging (MRI) data was introduced to quantify patient-specific carotid atherosclerotic plaque growth functions and simulate plaque progression. Participating patients were scanned three times (T1, T2, and T3, at intervals of about 18 months) to obtain plaque progression data. Vessel wall thickness (WT) changes were used as the measure of plaque progression. Since there were insufficient data with the current technology to quantify individual plaque component growth, the whole plaque was assumed to be uniform, homogeneous, isotropic, linear, and nearly incompressible, and the linear elastic model was used. The 3D plaque model was discretized and solved using a meshless generalized finite difference (GFD) method. Four growth functions with different combinations of wall thickness, stress, and neighboring point terms were introduced to predict future plaque growth based on previous time point data. Starting from the T2 plaque geometry, plaque progression was simulated by solving the solid model and adjusting wall thickness using the plaque growth functions iteratively until T3 was reached. Numerically simulated plaque progression agreed very well with the target T3 plaque geometry, with errors of 11.56%, 6.39%, 8.24%, and 4.45% for the four growth functions, respectively. We believe this is the first time a 3D plaque progression simulation based on multi-year patient-tracking data has been reported. Serial MRI-based progression simulation adds a time dimension to plaque vulnerability assessment and will improve prediction accuracy for potential plaque rupture risk.
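As a rough illustration of the iterative simulation loop described above (solve the mechanical model, adjust wall thickness with a growth function, and repeat until the next scan time is reached), the following Python sketch uses a hypothetical linear growth function and a crude Laplace-law stress surrogate; the coefficients, the stress model and the nodal data are all placeholders, not the growth functions or the GFD solver of the study.

```python
import numpy as np

def hypothetical_growth_function(wt, pws, a=(0.02, 0.05, -0.001)):
    """Hypothetical linear growth law dWT/dt = a0 + a1*WT + a2*PWS (illustrative only)."""
    a0, a1, a2 = a
    return a0 + a1 * wt + a2 * pws

def wall_stress_surrogate(wt, pressure=13.0, radius=4.0):
    """Crude Laplace-law stress estimate (kPa), standing in for the structural solve."""
    return pressure * radius / (2.0 * wt)

def simulate_progression(wt_T2, n_steps=36):
    """March nodal wall thickness from T2 toward T3 by repeated solve/adjust cycles."""
    wt = np.asarray(wt_T2, dtype=float).copy()
    dt = 1.0 / n_steps                     # fraction of the T2->T3 interval per iteration
    for _ in range(n_steps):
        pws = wall_stress_surrogate(wt)    # stress from the current geometry
        wt += dt * hypothetical_growth_function(wt, pws)
    return wt

if __name__ == "__main__":
    wt_T2 = np.array([0.8, 1.1, 1.6, 2.4, 1.9])   # sample nodal wall thicknesses at T2 (mm)
    print("predicted wall thickness at T3:", simulate_progression(wt_T2).round(3))
```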
We present a new method for simulating two-phase flows in complex geometries, taking into account contact lines separating immiscible incompressible components. We combine the diffuse domain method for solving PDEs in complex geometries with the diffuse-interface (phase-field) method for simulating multiphase flows. In this approach, the complex geometry is described implicitly by introducing a new phase-field variable, which is a smooth approximation of the characteristic function of the complex domain. The fluid and component concentration equations are reformulated and solved in a larger, regular domain, with the boundary conditions implicitly modeled using source terms. The method is straightforward to implement using standard software packages; we use adaptive finite elements here. We present numerical examples demonstrating the effectiveness of the algorithm. We simulate multiphase flow in a driven cavity on an extended domain and find very good agreement with results obtained by solving the equations and boundary conditions in the original domain. We then consider successively more complex geometries and simulate a droplet sliding down a rippled ramp in 2D and 3D, a droplet flowing through a Y-junction in a microfluidic network and finally chaotic mixing in a droplet flowing through a winding, serpentine channel. The latter example actually incorporates two different diffuse domains: one describes the evolving droplet where mixing occurs while the other describes the channel.
A highly accurate and efficient method combining the Symmetric Galerkin Boundary Element Method (SGBEM) with the Finite Element Method (FEM), an SGBEM-FEM alternating method, is proposed for analyzing three-dimensional non-planar cracks and their growth. The cracks are modeled with the symmetric Galerkin boundary element method as distributions of displacement discontinuities in an infinite medium, while the finite element method is used only to analyze the stresses in the uncracked body. The solution for the cracked structural component is determined by an iterative procedure that alternates between the FEM solution for the uncracked body and the SGBEM solution for the crack in an infinite body. The numerical procedure, implemented in Java, is used to evaluate stress intensity factors and to model fatigue crack growth. Examples of non-planar cracks in infinite media and planar cracks in finite bodies, as well as crack growth under fatigue, demonstrate the accuracy of the method.
This paper provides a numerical method for solving two- and three-dimensional unsteady incompressible flows. The vorticity-velocity formulation of the Navier–Stokes equations is considered, employing the vorticity transport equation and a second-order Poisson equation for the velocity. Second-order-accurate centred finite differences on a staggered grid are used for the space discretization. The vorticity equation is discretized in time using a fully implicit three-level scheme. At each physical time level, a dual-time stepping technique is used to solve the coupled system of nonlinear algebraic equations by various efficient relaxation schemes. Steady flows are computed by dropping the physical time derivative and converging the pseudo-time-dependent problem. A domain decomposition of the physical space is also employed: the multi-block algorithm allows one to handle multiply-connected domains and complex configurations and, more importantly, to solve each grid block on a single processor of a parallel platform. The accuracy and efficiency of the proposed methodology are demonstrated by solving well-known two-dimensional flow problems. Then, the steady and unsteady flows inside a cubic cavity are considered and the numerical results are compared with experimental and numerical data.
Adomian polynomials (APs) are expressed in terms of new objects called reduced polynomials (RPs). These new objects, which carry two subscripts, are independent of the form of the nonlinear operator. Apart from the two well-known properties of APs, curiously enough no further properties are discussed in the literature. We derive and discuss in full detail the properties of the RPs and APs. We focus on the case where the nonlinear operator depends on one variable and construct the most general analytical expressions of the RPs for small values of the difference of their subscripts. It is shown that each RP depends on a number of functions equal to the difference of its subscripts plus one. These new properties allow us to implement a remarkably simple and compact Mathematica program for the derivation of individual RPs and APs in their general forms, and they provide useful hints for elegant hand calculations of APs. An application of the program is considered.
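The classical definition of the Adomian polynomials for a one-variable nonlinearity, A_n = (1/n!) d^n/dλ^n N(Σ_k u_k λ^k) evaluated at λ = 0, is easy to implement in a computer-algebra system. The sketch below uses Python/SymPy rather than the Mathematica program described above, and implements only this standard definition, not the reduced-polynomial construction of the paper.

```python
import sympy as sp

def adomian_polynomials(f, n_max):
    """Return A_0..A_{n_max} for the nonlinearity f(u), using the classical definition
    A_n = (1/n!) d^n/dlam^n f(sum u_k lam^k) evaluated at lam = 0."""
    lam = sp.Symbol('lambda')
    u = sp.symbols(f'u0:{n_max + 1}')                 # solution components u_0..u_{n_max}
    series = sum(u[k] * lam**k for k in range(n_max + 1))
    A = []
    for n in range(n_max + 1):
        An = sp.diff(f(series), lam, n).subs(lam, 0) / sp.factorial(n)
        A.append(sp.expand(An))
    return A

if __name__ == "__main__":
    # Example: quadratic nonlinearity N(u) = u**2, giving A0 = u0**2, A1 = 2*u0*u1, ...
    for n, An in enumerate(adomian_polynomials(lambda x: x**2, 4)):
        print(f"A_{n} =", An)
```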
An accurate and yet simple Meshless Local Petrov-Galerkin (MLPG) formulation for analyzing beam problems is presented. In the formulation, simple weight functions are chosen as test functions, as in the conventional MLPG method. Linear test functions are also chosen, leading to a variation of the MLPG method that is computationally efficient compared with the conventional implementation. The MLPG method is evaluated by applying the formulation to a variety of patch tests, thin beam problems, and problems with load discontinuities. The formulation successfully reproduces exact solutions to machine accuracy when higher order power and spline functions or the linear test function are chosen as test functions, provided that, in constructing the trial functions, the order of the basis function is properly balanced by the order of the weight function. For mixed boundary value problems, deflections, slopes, moments, and shear forces are calculated to the same accuracy by the MLPG method without the use of elaborate post-processing techniques. Problems with load discontinuities require special care; when a reasonable number of nodes is used, the method yields very accurate results.
We consider a composite medium, which consists of a homogeneous matrix
containing a statistically homogeneous set of multimodal spherical inclusions.
This model is used to represent the morphology of heterogeneous solid
propellants (HSP) that are widely used in the rocket industry. The
Lubachevsky-Stillinger algorithm is used to generate morphological models of
HSP with large polydisperse packs of spherical inclusions. We modify the
algorithm by proposing a random shaking procedure that leads to the
stabilization of a statistical distribution of the simulated structure that is
homogeneous, highly mixed, and protocol independent (in the sense that the
statistical parameters estimated do not depend on the basic simulation
algorithm). Increasing the number of shakings has a twofold effect. First, the
system becomes more homogeneous and well mixed. Second, the stochastic
fluctuations of statistical parameters (such as the radial distribution
function, RDF), estimated by averaging over these structures, tend to diminish.
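For reference, the radial distribution function mentioned above can be estimated from a generated packing with a basic pair-counting routine such as the one below: a generic O(N^2) estimator for point centres in a periodic cubic box, assumed here for illustration. It is not the authors' implementation and ignores polydispersity.

```python
import numpy as np

def radial_distribution_function(centers, box, r_max, n_bins=100):
    """Estimate g(r) for particle centres in a periodic cubic box of side `box`."""
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    rho = n / box**3                         # number density
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = centers[i + 1:] - centers[i]
        d -= box * np.round(d / box)         # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r, bins=edges)[0]
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell_vol * n / 2.0        # expected pair counts for an ideal gas
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, hist / ideal

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((500, 3)) * 10.0        # toy configuration in a 10x10x10 box
    r, g = radial_distribution_function(pts, box=10.0, r_max=4.0)
    print(g[:5])                              # should fluctuate around 1 for a random gas
```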
We consider a linearly elastic composite medium, which consists of a
homogeneous matrix containing a statistically homogeneous set of multimodal
spherical inclusions modeling the morphology of heterogeneous solid propellants
(HSP). Estimates of effective elastic moduli are performed using the
multiparticle effective field method (MEFM) directly taking into account the
interaction of different inclusions. Because of this, the effective elastic
moduli of the HSP evaluated by the MEFM are sensitive to both the relative size
of the inclusions (i.e., their multimodal nature) and the RDFs estimated from
experimental data, as well as from the ensembles generated by the method
proposed. Moreover, the detected increase in stress concentration factors at the
larger particles, in comparison with the smaller particles in bimodal structures,
is critical for any nonlinear localized phenomena such as the onset of yielding,
failure initiation, and damage accumulation.
The effects of uncertainties on the predicted strength of a single lap shear joint are examined. Probabilistic and possibilistic methods are used to account for uncertainties. A total of ten variables are assumed to be random, with normal distributions. Both Monte Carlo simulation and the First Order Reliability Method are used to determine the probability of failure. Triangular membership functions with upper and lower bounds located at plus or minus three standard deviations are used to model uncertainty in the possibilistic analysis. The alpha-cut (or vertex) method is used to evaluate the possibility of failure. Linear and geometrically nonlinear finite element analyses are used to calculate the response of the joint; fracture in the adhesive and material strength failure in the strap are used to evaluate its strength. Although probabilistic and possibilistic analyses provide significantly more information than do conventional deterministic analyses, they are computationally expensive. A novel scaling approach is developed and used to substantially reduce the computational cost of the probabilistic and possibilistic analyses. The possibilistic approach for treating uncertainties appears to be viable during the conceptual and preliminary design stages, when limited data are available and high accuracies are not needed. However, this viability is mixed with several cautions that are discussed herein.
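A minimal sketch of the Monte Carlo side of such a study is given below: it estimates the probability of failure P[g(X) <= 0] for independent normally distributed variables, using a placeholder limit-state function in place of the finite element response of the joint.

```python
import numpy as np

def monte_carlo_pof(limit_state, means, stds, n_samples=100_000, seed=0):
    """Crude Monte Carlo estimate of P[g(X) <= 0] for independent normal variables.
    Illustrates only the sampling step; the structural response model is a placeholder."""
    rng = np.random.default_rng(seed)
    x = rng.normal(means, stds, size=(n_samples, len(means)))
    g = np.apply_along_axis(limit_state, 1, x)
    pf = np.mean(g <= 0.0)
    cov = np.sqrt((1.0 - pf) / (pf * n_samples)) if pf > 0 else np.inf   # estimator c.o.v.
    return pf, cov

if __name__ == "__main__":
    # Hypothetical limit state: adhesive fracture toughness minus an energy-release-rate surrogate
    def g(x):
        toughness, load, thickness = x
        return toughness - 0.004 * load**2 / thickness    # placeholder response surface
    pf, cov = monte_carlo_pof(g, means=[0.5, 8.0, 1.0], stds=[0.05, 0.8, 0.05])
    print(f"Pf ~ {pf:.4f} (c.o.v. {cov:.2f})")
```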
A numerical study of multi-phase granular materials based upon
micro-mechanical modelling is proposed. Discrete element simulations are used
to investigate capillary induced effects on the friction properties of a
granular assembly in the pendular regime. Capillary forces are described at the
local scale through the Young-Laplace equation and are superimposed on the
standard dry particle interaction, which is usually well described by an
elastic-plastic relationship. Both the effects of the pressure difference between
liquid and gas phases and of the surface tension at the interface are
integrated into the interaction model. Hydraulic hysteresis is accounted for
based on the possible mechanism of formation and breakage of capillary menisci
at contacts. In order to upscale the interparticular model, triaxial loading
paths are simulated on a granular assembly and the results interpreted through
the Mohr-Coulomb criterion. The micro-mechanical approach is validated by the
capillary cohesion it induces at the macroscopic scale. It is shown that
interparticular menisci contribute to the soil resistance by increasing the normal
forces at contacts. In addition, our findings highlight that, more than the
capillary pressure level or the degree of saturation, it is the number density of
liquid bonds that controls the overall behaviour of the material.
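The macroscopic interpretation step mentioned above, fitting the Mohr-Coulomb criterion to the peak states of simulated triaxial tests, can be sketched as follows. The relation sigma1 = sigma3*tan^2(45° + phi/2) + 2*c*tan(45° + phi/2) is standard; the sample data are hypothetical.

```python
import numpy as np

def mohr_coulomb_fit(sigma3, sigma1_peak):
    """Fit friction angle (deg) and apparent cohesion from triaxial peak states,
    using sigma1 = M*sigma3 + B with M = (1+sin(phi))/(1-sin(phi)) and
    B = 2*c*cos(phi)/(1-sin(phi)). Generic post-processing, not the DEM code."""
    M, B = np.polyfit(np.asarray(sigma3, float), np.asarray(sigma1_peak, float), 1)
    sin_phi = (M - 1.0) / (M + 1.0)
    phi = np.degrees(np.arcsin(sin_phi))
    cohesion = B * (1.0 - sin_phi) / (2.0 * np.cos(np.radians(phi)))
    return phi, cohesion

if __name__ == "__main__":
    # Hypothetical peak axial stresses (kPa) at confining pressures 10, 20, 40 kPa
    phi, c = mohr_coulomb_fit([10.0, 20.0, 40.0], [48.0, 82.0, 150.0])
    print(f"friction angle ~ {phi:.1f} deg, apparent cohesion ~ {c:.1f} kPa")
```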
The effects of several critical assumptions and parameters on the computation of strain energy release rates for delamination and debond configurations modeled with plate elements have been quantified. The method of calculation is based on the virtual crack closure technique (VCCT) and on models of the upper and lower surfaces of the delamination or debond that use two-dimensional (2D) plate elements rather than three-dimensional (3D) solid elements. The major advantages of the plate element modeling technique are a smaller model size and simpler configurational modeling. Specific issues discussed include: constraint of translational degrees of freedom, rotational degrees of freedom, or both in the neighborhood of the debond front; shear deformation assumptions; and continuity of material properties and section stiffness in the vicinity of the debond front. Where appropriate, the plate element analyses are compared with corresponding two-dimensional plane strain analyses.
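For context, the quantity computed in such analyses is the one-step VCCT estimate G_i = F_i * du_i / (2 * delta_a * b), with F_i the nodal forces at the debond front, du_i the relative displacements of the node pair just behind it, delta_a the element length at the front and b the width associated with the node. A minimal illustration with hypothetical numbers is given below.

```python
def vcct_mode_components(F, du, delta_a, width):
    """Textbook one-step VCCT formula G_i = F_i * du_i / (2 * delta_a * width).
    Shown only to make the computed quantity explicit; units must be consistent."""
    return [f * d / (2.0 * delta_a * width) for f, d in zip(F, du)]

if __name__ == "__main__":
    # Hypothetical nodal forces (opening, sliding, tearing) and relative displacements
    G1, G2, G3 = vcct_mode_components(F=(120.0, 35.0, 5.0),
                                      du=(0.004, 0.002, 0.0005),
                                      delta_a=0.5, width=25.0)
    print(f"G_I={G1:.4f}, G_II={G2:.4f}, G_III={G3:.4f} N/mm, G_T={G1 + G2 + G3:.4f}")
```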
In this paper, a pore-scale network modeling method, based on the flow
continuity residual in conjunction with a Newton-Raphson non-linear iterative
solving technique, is proposed and used to obtain the pressure and flow fields
in a network of interconnected distensible ducts representing, for instance,
blood vasculature or deformable porous media. A previously derived analytical
expression correlating boundary pressures to volumetric flow rate in compliant
tubes for a pressure-area constitutive elastic relation has been used to
represent the underlying flow model. A comparison with a preceding equivalent
method, a one-dimensional Navier-Stokes finite element formulation, was made and
the results were analyzed. The advantages of the new method have been highlighted
and practical computational issues, related mainly to the rate and speed of
convergence, have been discussed.
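A minimal sketch of the solution strategy described above, driving the nodal flow-continuity residuals to zero with Newton-Raphson iterations, is given below. The compliant-tube flow law used here (a conductance that grows with the mean pressure) is only a placeholder for the analytical pressure-area relation referenced in the abstract, and the Jacobian is built by finite differences for brevity.

```python
import numpy as np

def tube_flow(p_in, p_out, cond=1.0, beta=0.05):
    """Hypothetical distensible-duct law: conductance grows with the mean pressure."""
    return cond * (1.0 + beta * 0.5 * (p_in + p_out)) * (p_in - p_out)

def residuals(p_int, nodes_int, p_bc, edges):
    """Net inflow at every interior node; vanishes at the converged pressure field."""
    p = dict(p_bc)
    p.update(zip(nodes_int, p_int))
    r = {n: 0.0 for n in nodes_int}
    for a, b in edges:
        q = tube_flow(p[a], p[b])            # flow from node a to node b
        if a in r:
            r[a] -= q
        if b in r:
            r[b] += q
    return np.array([r[n] for n in nodes_int])

def newton_solve(nodes_int, p_bc, edges, p0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton-Raphson on the continuity residuals with a finite-difference Jacobian."""
    p = np.array(p0, dtype=float)
    for _ in range(max_iter):
        f = residuals(p, nodes_int, p_bc, edges)
        if np.linalg.norm(f) < tol:
            break
        J = np.empty((len(p), len(p)))
        for j in range(len(p)):
            dp = p.copy()
            dp[j] += h
            J[:, j] = (residuals(dp, nodes_int, p_bc, edges) - f) / h
        p -= np.linalg.solve(J, f)
    return p

if __name__ == "__main__":
    # Tiny Y-shaped network: fixed-pressure nodes 0 (inlet), 3 and 4 (outlets); interior nodes 1, 2
    edges = [(0, 1), (1, 2), (2, 3), (2, 4)]
    p_bc = {0: 10.0, 3: 1.0, 4: 0.0}
    print("interior pressures:", newton_solve([1, 2], p_bc, edges, p0=[5.0, 5.0]))
```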
We consider a fuzzy linear system with crisp coefficient matrix and with an
arbitrary fuzzy number in parametric form on the right-hand side. It is known
that the condition of the well-known existence and uniqueness theorem for a strong
fuzzy solution is equivalent to the following: the coefficient matrix is the
product of a permutation matrix and a diagonal matrix. This means that the theorem
is applicable only to a special form of linear systems, namely, only when the
system consists of equations each of which contains exactly one variable. We prove
an existence and uniqueness theorem that can be used for more general systems.
The necessary and sufficient conditions of the theorem are dependent on both
the coefficient matrix and the right-hand side. This theorem is a
generalization of the well-known existence and uniqueness theorem for the
strong solution.
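The matrix condition quoted above, that the coefficient matrix is the product of a permutation matrix and a diagonal matrix, amounts to requiring exactly one nonzero entry in every row and every column (a generalized permutation matrix). A simple check of this condition is sketched below; it illustrates only the classical condition, not the more general theorem proved in the paper.

```python
import numpy as np

def is_generalized_permutation(A, tol=1e-12):
    """True if A is square with exactly one nonzero entry in every row and column,
    i.e. A = P * D for a permutation matrix P and a diagonal matrix D."""
    A = np.asarray(A, dtype=float)
    if A.shape[0] != A.shape[1]:
        return False
    nz = np.abs(A) > tol
    return bool(np.all(nz.sum(axis=0) == 1) and np.all(nz.sum(axis=1) == 1))

if __name__ == "__main__":
    print(is_generalized_permutation([[0, 2, 0], [3, 0, 0], [0, 0, -1]]))  # True
    print(is_generalized_permutation([[1, 1], [0, 2]]))                    # False
```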
In this paper, linear systems with a crisp real coefficient matrix and with a
vector of fuzzy triangular numbers on the right-hand side are studied. A new
method, which is based on the geometric representations of linear
transformations, is proposed to find solutions. The method uses the fact that a
vector of fuzzy triangular numbers forms a rectangular prism in n-dimensional
space and that the image of a parallelepiped is also a parallelepiped under a
linear transformation. The suggested method clarifies why, in the general case,
different approaches do not generate solutions that are fuzzy numbers. It is
geometrically proved that if the coefficient matrix is a generalized
permutation matrix, then the solution of a fuzzy linear system (FLS) is a
vector of fuzzy numbers irrespective of the vector on the right-hand side. The
most important difference between this and previous papers on FLS is that the
solution is sought as a fuzzy set of vectors (with real components) rather than
a vector of fuzzy numbers. Each vector in the solution set solves the given FLS
with a certain possibility. The suggested method can also be applied in the
case when the right-hand side is a vector of fuzzy numbers in parametric form.
However, in this case, the α-cuts of the solution cannot be determined by geometric
similarity and additional computations are needed.
Petroleum reservoir modelling requires effective multiscale methods for the numerical simulation of two-phase flow in porous media. This paper proposes the application of a novel meshfree particle method to the Buckley-Leverett model. The utilized meshfree advection scheme, called AMMoC, is essentially a method of characteristics, which combines an adaptive semi-Lagrangian method with local meshfree interpolation by polyharmonic splines. The method AMMoC is applied to the five-spot problem, a well-established model problem in petroleum reservoir simulation. The numerical results and subsequent numerical comparisons with two leading commercial reservoir simulators, ECLIPSE and FrontSim, show the good performance of our meshfree advection scheme AMMoC.
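For readers unfamiliar with the model problem, the Buckley-Leverett equation transports water saturation with a characteristic speed proportional to the derivative of the fractional flow function. A small sketch using the common quadratic relative-permeability choice (an assumption made here for illustration, not taken from the paper) is shown below.

```python
import numpy as np

def fractional_flow(Sw, mu_w=1.0, mu_o=4.0):
    """Water fractional flow with quadratic relative permeabilities krw = Sw^2,
    kro = (1-Sw)^2 (a textbook choice, assumed only for illustration)."""
    krw, kro = Sw**2, (1.0 - Sw)**2
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)

def dfw_dSw(Sw, h=1e-6, **kw):
    """Characteristic speed is proportional to dfw/dSw (central difference)."""
    return (fractional_flow(Sw + h, **kw) - fractional_flow(Sw - h, **kw)) / (2.0 * h)

if __name__ == "__main__":
    S = np.linspace(0.05, 0.95, 10)
    for s, f, df in zip(S, fractional_flow(S), dfw_dSw(S)):
        print(f"Sw={s:.2f}  fw={f:.3f}  dfw/dSw={df:.3f}")
```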
An advanced boundary element method (BEM) is developed in this paper for analyzing thin layered structures, such as thin films and coatings, under the thermal loading. The boundary integral equation (BIE) formulation for steady-state thermoelasticity is reviewed and a special case, that is, the BIE for a uniform distribution of the temperature change, is presented. The new nearly-singular integrals arising from the applications of the BIE/BEM to thin layered structures under thermal loading are treated in the same way as developed earlier for thin structures under the mechanical loading. Three 2-D test problems involving layered thin films and coatings on an elastic body are studied using the developed thermal BEM and a commercial FEM software. Numerical results for displacements and interfacial stresses demonstrate that the developed BIE/BEM remains to be very accurate, efficient in modeling, and surprisingly stable, for thin elastic materials with the thickness-to-length ratios down to 10 -9 (the nano-scale). This thermal BEM capability can be employed to investigate other more important and realistic thin film and coating problems, such as residual stresses, interfacial crack initiation and propagation (peelingoff) , in electronic packaging or other engineering applications. Correspondence to: Yijun Liu (E-mail: yijun.liu@uc.edu) Thermal stress analysis of multi-layer thin films and coatings by an advanced BEM 2 X. L. Chen and Y. J. Liu University of Cincinnati 1
To conduct numerical simulations by finite element methods, we often need to generate a high quality mesh, yet with a smaller number of elements. Moreover, the size of each of the elements in the mesh should be approximately equal to a given size requirement. Li et al. recently proposed a new method, named biting, which combines the strengths of advancing front and sphere packing. It generates high quality meshes with a theoretical guarantee. In this paper, we show that biting squares instead of circles not only generates high quality meshes but also has the following advantages. It is easier to generate high quality elements near the boundary with theoretical guarantee; it is very efficient time-wise; in addition, it is easier to implement. Furthermore, it provides simple and straightforward boundary protections in three dimensions.
We develop an unsplit convolutional perfectly matched layer (CPML) technique to efficiently absorb compressible viscous flows, in their supersonic and subsonic regimes, at the outer boundary of a distorted computational domain. In particular, subsonic outgoing flows and subsonic wall-boundary layers close to the PML are well absorbed, which is difficult to obtain without creating numerical instabilities over long time periods. This new PML (CPML) introduces the calculation of auxiliary memory variables at each time step and allows an unsplit formulation of the PML. Damping functions involving a large shift in the frequency domain allow a much better absorption of the flow than cases with no shift or a low-frequency shift. The CPML is convenient because the time evolution of the damping mechanisms does not need to be split, and only the space derivatives of the fluxes and primitive variables (velocities and temperature) need to be stored at each time step, thereby reducing the number of computational arrays used in the numerical code. The results obtained show that the CPML can efficiently absorb the outgoing subsonic and supersonic fluxes at the outlet condition with very few reflections propagating back into the main domain. The Navier-Stokes equations are applied in an extremely wide variety of industrial processes and geophysical flow simulations. As an example of interest for industry, the CPML is applied to the particular case of a critical air ejector-diffuser simulation in which the flow propagates along a converging-diverging tube, one main goal being to obtain an efficient tool for numerically modeling different diffuser designs. In this context we investigate the impact of the PML on an unsteady flow subjected to supersonic expansion at the end of the ejector-diffuser while it remains subsonic in the wall-boundary layer. The numerical integration of the whole system of equations uses a two-step predictor-corrector time-stepping scheme and a finite difference spatial discretization with a curvilinear coordinate transformation adapted to the ejector geometry. In this distorted mesh, the spatial finite difference scheme involves a backward-forward discretization, and the CPML is able to deal with the distorted mesh in the direction parallel to the base of the PML layer.
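The memory-variable update at the heart of an unsplit CPML can be summarized by the standard recursive-convolution coefficients b = exp(-(d/kappa + alpha)*dt) and a = d*(b - 1)/(kappa*(d + kappa*alpha)), after which each stored spatial derivative is replaced by its stretched value plus the memory variable. The sketch below shows this generic update for a 1D array of grid points; the damping, stretching and frequency-shift profiles are illustrative choices, not those tuned for the ejector-diffuser case.

```python
import numpy as np

def cpml_coefficients(d, kappa, alpha, dt):
    """Recursive-convolution coefficients for an unsplit CPML memory variable.
    The profiles must keep d + kappa*alpha > 0 inside the layer."""
    b = np.exp(-(d / kappa + alpha) * dt)
    a = d * (b - 1.0) / (kappa * (d + kappa * alpha))
    return a, b

def pml_corrected_derivative(psi, dfdx, a, b, kappa):
    """Advance the memory variable one step and return the corrected derivative
    dfdx/kappa + psi to be used in place of dfdx inside the layer."""
    psi = b * psi + a * dfdx
    return psi, dfdx / kappa + psi

if __name__ == "__main__":
    nx, dt = 101, 1e-3
    x = np.linspace(0.0, 1.0, nx)
    s = np.clip((x - 0.8) / 0.2, 0.0, 1.0)        # 0 outside the layer, ramps to 1 at the outer edge
    d = 200.0 * s**2                              # damping profile (illustrative)
    kappa = 1.0 + 3.0 * s**2                      # coordinate-stretching profile (illustrative)
    alpha = 10.0 * (1.0 - s)                      # frequency shift, largest at the inner PML edge
    a, b = cpml_coefficients(d, kappa, alpha, dt)
    psi = np.zeros(nx)
    dfdx = np.cos(2.0 * np.pi * x)                # stand-in for a flux/primitive-variable derivative
    psi, dfdx_pml = pml_corrected_derivative(psi, dfdx, a, b, kappa)
    print("corrected derivative near the outer edge:", dfdx_pml[-3:])
```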
Mesh adaptation is a fairly established tool for obtaining numerically accurate solutions of flow problems. Computational efficiency is, however, not always guaranteed for the adaptation strategies found in the literature; typically, excessive mesh growth diminishes the potential efficiency gain. This paper, therefore, extends the strategy proposed by [Aftosmis and Berger (2002)] to compute the refinement threshold. The extended strategy computes the refinement threshold based on a user-desired number of grid cells and adaptations, thereby ensuring high computational efficiency. Because our main interest is flow around wind turbines, the adaptation strategy has been optimized for flow around wind turbine airfoils. The proposed strategy was found to yield computationally efficient solutions for flow around wind turbine airfoils as well as for other flow problems.
A technique for adaptive random field refinement for stochastic finite element reliability analysis of structures is presented in this paper. Refinement indicators based on global importance measures are proposed and used for carrying out adaptive random field mesh refinements. A reliability-index-based error indicator is proposed and used for assessing the percentage error in the estimation of the notional failure probability. Adaptive mesh refinement is carried out using a hierarchical graded mesh obtained through bisection of elements. Spatially varying stochastic system parameters (such as Young's modulus and mass density) and load parameters are modeled in general as non-Gaussian random fields with prescribed marginal distributions and covariance functions, in conjunction with Nataf's model. The expansion optimal linear estimation method is used for random field discretisation. A framework is developed for the spatial discretisation of random fields for system/load parameters, considering the Gaussian/non-Gaussian nature of the random fields, multidimensional random fields, and multiple random fields. Structural reliability analysis is carried out using the first order reliability method with a few refined features, such as the treatment of multiple design points and/or multiple regions of comparable importance. The gradients of the performance function are computed using the direct differentiation method. Problems with multiple performance functions, either in series or in parallel, are handled using the method based on the product of conditional marginals. The efficacy of the proposed adaptive technique is illustrated by carrying out numerical studies on a set of examples covering linear static, free vibration and forced vibration problems.
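The reliability computation referred to above can be illustrated with the classical Hasofer-Lind/Rackwitz-Fiessler (HL-RF) iteration of the first order reliability method, written here in standard normal space with a hypothetical limit state; the Nataf/EOLE transformations, the multiple-design-point treatment and the system-reliability steps of the paper are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def form_hlrf(g, u0, tol=1e-8, max_iter=100, h=1e-6):
    """HL-RF iteration in standard normal space; `g` is the limit-state function of
    the standard normal vector u (after the probability transformation, not shown).
    Returns the reliability index beta and the design point."""
    u = np.array(u0, dtype=float)
    for _ in range(max_iter):
        gu = g(u)
        grad = np.array([(g(u + h * e) - gu) / h for e in np.eye(len(u))])  # forward differences
        u_new = (grad @ u - gu) / (grad @ grad) * grad                       # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u

if __name__ == "__main__":
    # Hypothetical limit state in standard normal space
    beta, u_star = form_hlrf(lambda u: 3.0 - u[0] - 0.5 * u[1]**2, u0=[0.1, 0.1])
    print(f"beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.2e}, design point = {u_star.round(3)}")
```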
Unsplit convolutional perfectly matched layers (CPML) for the velocity and stress formulation of the seismic wave equation are classically computed based on a second-order finite-difference time scheme. However it is often of interest to increase the order of the time-stepping scheme in order to increase the accuracy of the algorithm. This is important for instance in the case of very long simulations. We study how to define and implement a new unsplit non-convolutional PML called the Auxiliary Differential Equation PML (ADE-PML), based on a high-order Runge-Kutta time-stepping scheme and optimized at grazing incidence. We demonstrate that when a second-order time-stepping scheme is used the convolutional PML can be derived from that more general non-convolutional ADE-PML formulation, but that this new approach can be generalized to high-order schemes in time, which implies that it can be made more accurate. We also show that the ADE-PML formulation is numerically stable up to 100,000 time steps.
Recent advances in numerical simulation technologies for various dynamic fracture phenomena are summarized. First, the basic concepts of fracture simulations are explained together with pertinent simulation results. Next, examples of dynamic fracture simulations are presented.
A coupled hydro-mechanical formulation is presented for the analysis of landslide motion during crisis episodes. The mathematical formulation is used to model a natural slope affected by a multiple slip surface failure mechanism, in which pore water pressure evolution was identified as the main cause for movement accelerations. An elasto-plastic constitutive model is adopted for the behaviour of slip surfaces. Material parameters are obtained by combining the available laboratory tests and the back analysis of some crisis episodes. After being calibrated and validated, the model is applied to improve the understanding of the physical processes involved and to predict the landslide behaviour under different possible scenarios.
Computer-aided design of powerful gyrotrons for electron cyclotron resonance heating and current drive of fusion plasmas requires adequate physical models and efficient software packages for the analysis, comparison and optimization of their electron-optical systems through numerical experiments. In this paper, we present and discuss the current status of the simulation tools available to the researchers involved in the development of multi-megawatt gyrotrons for the ITER project, review some of their recent upgrades, and formulate directions for further modifications and improvements. The illustrative examples used represent results from recent numerical investigations of real devices. Some physical problems that are outside the capabilities of the existing computer programs and call for the development of a new generation of codes are also examined. The ongoing work in this direction, as well as the most characteristic features of the codes under development, is briefly reviewed.
The present paper focuses on the micromechanical phenomena occurring in polycrystalline metal materials. Correlations between material hardening and plastic lattice dislocations are discussed in the presence of grain boundaries. The characteristic distribution of the plastic strain gradient is numerically recognized, and hence the validity of incorporating the strain gradient term in the constitutive law is demonstrated. In addition, inclusion interface sliding and debonding were modeled based on the equivalent inclusion theory to develop a constitutive law for the composite. The sliding model is considered effective for modeling the superplastic behavior of highly ductile metals. The superplastic phenomenon was reproduced in numerical tests using the proposed particle-dispersed model, and an explanation of its mechanism was attempted.
In this work we developed methods to automatically extract significant points of objects such as hand palms and faces represented in images, which can then be used to build Point Distribution Models automatically. These models are further used to segment the modelled objects in new images through the use of Active Shape Models or Active Appearance Models. Such models have been shown to be efficient in the segmentation of objects, but have the drawback that the labelling of the landmark points is usually done manually and is consequently time-consuming. Thus, in this paper we describe several methods capable of extracting significant points of objects such as hand palms, and we compare the segmentation results obtained in new images.
This paper investigates the use of Genetic Programming (GP) to create an approximate model for the non-linear relationship between flexural stiffness, length, mass per unit length and rotation speed associated with rotating beams and their natural frequencies. GP, a relatively new form of artificial intelligence derived from the Darwinian concepts of evolution and genetics, creates computer programs to solve problems by manipulating their tree structures. GP predicts the size and structural complexity of the empirical model by minimizing the mean square error at the specified points of the input-output relationship dataset. This dataset is generated using a finite element model. The validity of the GP-generated model is tested by comparing the natural frequencies at the training points and at additional input data points. It is found that by using a non-dimensional stiffness, it is possible to obtain a simple and accurate function approximation for the natural frequency. This function approximation model is then used to study the relationships between natural frequency and various influencing parameters for uniform and tapered beams. The relations obtained with the GP model agree well with FEM results and can be used for preliminary design and structural optimization studies.
The virtual-internal-bond (VIB) model [see e.g. H. Gao and P. Klein, J. Mech. Phys. Solids 46, No. 2, 187–218 (1998; Zbl 0974.74008)] has incorporated a cohesive-type law into the constitutive law of solids, such that fracture and failure of solids become a coherent part of the constitutive law, and no separate fracture or failure criteria are needed. A numerical algorithm is developed in this study for the VIB model under static loadings. The model is applied to study three examples, namely the crack nucleation and propagation from stress concentration, kinking and subsequent propagation of a mode II crack, and the buckling-driven delamination of a thin film from a substrate. The results demonstrate that the VIB model provides an effective method to study crack nucleation and propagation in engineering materials and systems.
A generalization of a NURBS based parametric mesh-free method (NPMM), recently proposed by Shaw and Roy (2008), is considered. A key feature of this parametric formulation is a geometric map that provides a local bijection between the physical domain and a rectangular parametric domain. This enables constructions of shape functions and their derivatives over the parametric domain whilst satisfying polynomial reproduction and interpolation properties over the (non-rectangular) physical domain. Hence the NPMM enables higher-dimensional B-spline based functional approximations over non-rectangular domains even as the NURBS basis functions are constructed via the usual tensor products of their one-dimensional counterparts. Nevertheless the method still lacks the universality that the FEM enjoys. In particular, for many non-simply connected domains, the geometric map may not be locally bijective everywhere and this severely restricts the general applicability of the NPMM. In this paper, a piecewise form of the NPMM is proposed, wherein the domain is decomposed into a collection of simply connected sub-domains or element patches (analogous to the FEM). The NPMM is then employed over each sub-domain without affecting the continuity of approximated functions across inter-sub-domain boundaries. This is quite unlike the usual FEM. The proposed procedure not only possesses the generality of the FEM, it is also equipped with higher order, globally smooth and interpolating basis functions. It may thus be interpreted as a seamless bridge between the FEM and mesh-free methods. In the context of weak implementations of the piecewise NPMM, we propose a conformal knot-grid integration scheme. Finally, we illustrate these schemes for weak numerical solutions of a few linear and nonlinear boundary value problems of engineering interest.
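The one-dimensional B-spline basis underlying the NURBS approximations discussed above is given by the Cox-de Boor recursion, whose tensor products furnish the higher-dimensional basis. A generic evaluation routine (not the NPMM code) is sketched below.

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th degree-p B-spline basis function at u.
    Half-open knot spans are used, so the right end of the patch needs special care."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    if knots[i + p + 1] > knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

if __name__ == "__main__":
    knots = [0.0, 0.0, 0.0, 0.5, 1.0, 1.0, 1.0]    # clamped quadratic basis, 4 functions
    u = 0.3
    vals = [bspline_basis(i, 2, u, knots) for i in range(4)]
    print(vals, "sum =", sum(vals))                # partition of unity in the interior
```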
The material point method (MPM) is a numerical method for the solution of problems in continuum mechanics, including situations of large deformations. A generalization (GMPM) of this method was introduced by Bardenhagen and Kober (2004) in order to avoid some computational instabilities inherent to the original method (MPM). This generalization leads to a method more akin to a Petrov-Galerkin procedure. Although examples of the derivation and application of the MPM/GMPM to specific problems can be found in the literature, its detailed implementation is yet to be presented. Therefore, this paper describes all the steps required for an explicit implementation of the material point method, including its generalization. Moreover, some caveats arising during the implementation are addressed; for example, the setting up of boundary conditions and the steps for the computation of values at nodes and material points are discussed. The influences of the time and space discretizations are also verified based on numerical analyses. Two strategies for the stress update, known as update stress first (USF) and update stress last (USL), are numerically investigated. It is shown that both the order of computation of the boundary conditions and the way the grid values are extrapolated have a strong impact on the accuracy of the solution. The complete 3D algorithm is detailed and summarized in order to make the implementation of the GMPM/MPM easier.
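To make the particle-grid transfers and the USF/USL orderings concrete, the following is a deliberately simplified 1D explicit MPM step for a linear-elastic bar with linear (tent) grid shape functions; it is a generic sketch under these assumptions, not the 3D GMPM algorithm detailed in the paper.

```python
import numpy as np

def shape(xp, xi, dx):
    """Linear (tent) grid shape function evaluated at particle position xp."""
    return max(1.0 - abs(xp - xi) / dx, 0.0)

def grad_shape(xp, xi, dx):
    """Spatial derivative of the tent function at xp (zero outside its support)."""
    if abs(xp - xi) >= dx or xp == xi:
        return 0.0
    return -1.0 / dx if xp > xi else 1.0 / dx

def stress_update(xp, sig, vg, grid_x, dx, dt, E):
    """1D linear elasticity: sigma_dot = E * dv/dx interpolated at each particle."""
    sig = sig.copy()
    for p in range(len(xp)):
        dvdx = sum(grad_shape(xp[p], grid_x[i], dx) * vg[i] for i in range(len(grid_x)))
        sig[p] += E * dvdx * dt
    return sig

def mpm_step(xp, vp, sig, mp, Vp, grid_x, dx, dt, E, update="USL"):
    nn = len(grid_x)
    mg, pg = np.zeros(nn), np.zeros(nn)
    for p in range(len(xp)):                       # particle-to-grid: mass and momentum
        for i in range(nn):
            N = shape(xp[p], grid_x[i], dx)
            mg[i] += N * mp[p]
            pg[i] += N * mp[p] * vp[p]
    vg = np.divide(pg, mg, out=np.zeros(nn), where=mg > 1e-12)
    vg[0] = 0.0                                    # fixed left end
    if update == "USF":                            # update stress first, from projected velocities
        sig = stress_update(xp, sig, vg, grid_x, dx, dt, E)
    fg = np.zeros(nn)
    for p in range(len(xp)):                       # internal grid forces from particle stresses
        for i in range(nn):
            fg[i] -= Vp[p] * sig[p] * grad_shape(xp[p], grid_x[i], dx)
    vg_new = vg + dt * np.divide(fg, mg, out=np.zeros(nn), where=mg > 1e-12)
    vg_new[0] = 0.0
    for p in range(len(xp)):                       # grid-to-particle: FLIP velocity and position
        N = [shape(xp[p], grid_x[i], dx) for i in range(nn)]
        vp[p] += sum(N[i] * (vg_new[i] - vg[i]) for i in range(nn))
        xp[p] += dt * sum(N[i] * vg_new[i] for i in range(nn))
    if update == "USL":                            # update stress last, from updated velocities
        sig = stress_update(xp, sig, vg_new, grid_x, dx, dt, E)
    return xp, vp, sig

if __name__ == "__main__":
    L, ncell, E, rho = 1.0, 10, 100.0, 1.0
    dx = L / ncell
    grid_x = np.linspace(0.0, L, ncell + 1)
    xp = np.arange(0.25 * dx, L, 0.5 * dx)         # two particles per cell
    Vp = np.full_like(xp, 0.5 * dx)
    mp = rho * Vp
    vp = 0.1 * np.sin(0.5 * np.pi * xp / L)        # first axial mode of a fixed-free bar
    sig = np.zeros_like(xp)
    dt = 0.2 * dx / np.sqrt(E / rho)               # CFL-limited time step
    for _ in range(200):
        xp, vp, sig = mpm_step(xp, vp, sig, mp, Vp, grid_x, dx, dt, E, update="USL")
    print("free-end velocity after 200 steps:", round(vp[-1], 4))
```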
In this study, we clarified the micro- to mesoscopic deformation behavior of a semicrystalline polymer by employing a large-deformation finite element homogenization method. The crystalline plasticity theory with a penalty method for the inextensibility of the chain direction and the nonaffine molecular chain network theory were applied to represent the deformation behavior of the crystalline and amorphous phases, respectively, in the composite microstructure of the semicrystalline polymer. The 3D structure of lamellae in the spherulite of high-density polyethylene was modeled, and the tensile and compressive deformation behaviors were investigated. A series of computational simulations clarified the difference in the degree of strain hardening between tension and compression due to the different directional chain orientations. In the spherulite, localized deformation occurred depending on the initial distribution of the lamella direction. Due to their interaction with their surroundings, the individual material points of the mesoscopic domain showed a conservative response compared with that of the unit cell, and a nonuniform response depending on the location of the material point was observed; these are typical mesoscopic responses of semicrystalline polymers.
We present numerical enhancements of a multiscale domain decomposition strategy based on a LaTIn solver and dedicated to the computation of debonding in laminated composites. We show that the classical scale separation is irrelevant in the process zones, which results in a drop in the convergence rate of the strategy. We show that performing nonlinear subresolutions in the vicinity of the crack front at each prediction stage of the iterative solver restores the effectiveness of the method.
With the progress of miniaturization, in many modern applications the characteristic dimensions of the physical volume occupied by particle-reinforced composites are becoming comparable with the reinforcement size, and many of these composite materials undergo plastic deformation. In both experimental and modelling contexts, it is therefore very important to know whether, and up to which characteristic size, the description of the composites in terms of effective, homogenized properties is sufficiently accurate to represent their response in the actual geometry. Herein, the case of particle-reinforced composites with elastoviscoplastic matrix materials and polyhedral, randomly arranged, linear elastic reinforcement is considered, since it is representative of many metal matrix composites of technical interest. A large parametric study based on 3D finite element microstructural models is carried out to study the dependence of the Representative Volume Element (RVE) size on the mechanical properties of the constituents, the reinforcement volume fraction and the average strain level. The results show that the RVE size mainly depends on the reinforcement volume fraction and on the macroscopic strain level. The estimated RVE size for elastoplastic composites with 5% to 10% volume fraction of reinforcements is found to be in the range of 5-6 times the average size of the reinforcement particles, while for higher volume fractions, e.g. 15% to 25% vol., the RVE size increases rapidly to 10 to 20 times the reinforcement size. Moreover, insights into the influence of mesh refinement and boundary conditions on finite element homogenization analyses are obtained.
The aim of this work is to predict numerically the turbulent flow through a straight square duct using a nonlinear stress-strain model. The paper considers the application of the model of Craft et al. [Craft, Launder, and Suga (1996)] to the case of turbulent incompressible flow in a straight square duct. In order to handle wall proximity effects, damping functions are introduced. Using a priori and a posteriori investigations, we show the performance of this model in predicting such flows. The analysis of the flow anisotropy is made using the anisotropy-invariant map proposed by Lumley and Newman [Lumley and Newman (1977)]. This map shows the various possible states of the turbulence. The mean flow field and the turbulent statistics are compared with existing numerical and experimental data for square and rectangular duct flows. Overall, the model performance is shown to be satisfactory. In particular, the mean secondary velocity field and the streamwise vorticity are well predicted.
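The anisotropy-invariant map referred to above is built from the invariants of the Reynolds-stress anisotropy tensor b_ij = <u_i u_j>/(2k) - delta_ij/3; one common convention uses II = b_ij b_ji and III = b_ij b_jk b_ki. A small post-processing sketch is shown below (generic, not part of the nonlinear model itself).

```python
import numpy as np

def anisotropy_invariants(uu):
    """Return the anisotropy tensor b and its invariants II = b_ij b_ji and
    III = b_ij b_jk b_ki from a 3x3 Reynolds-stress tensor <u_i u_j>."""
    uu = np.asarray(uu, dtype=float)
    k = 0.5 * np.trace(uu)
    b = uu / (2.0 * k) - np.eye(3) / 3.0
    II = np.tensordot(b, b)            # b_ij b_ij (= b_ij b_ji since b is symmetric)
    III = np.trace(b @ b @ b)          # b_ij b_jk b_ki
    return b, II, III

if __name__ == "__main__":
    # Example: mildly anisotropic Reynolds stresses (arbitrary units)
    uu = np.array([[1.2, 0.1, 0.0],
                   [0.1, 0.8, 0.05],
                   [0.0, 0.05, 0.6]])
    b, II, III = anisotropy_invariants(uu)
    print("II =", round(II, 4), " III =", round(III, 5))
```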
This paper presents a new assignment algorithm with an order restriction. Our optimization algorithm, developed using dynamic programming, was implemented and tested to determine the best global matching that preserves the order of the points defining the two contours to be matched. In the experimental tests, we used the affinity matrix obtained via the method proposed by Shapiro, based on geometric modeling and modal matching. The proposed algorithm showed the best performance when compared with classic assignment algorithms: the Hungarian method, the Simplex method for flow problems, and LAPm. Indeed, the quality of the matching improved compared with these three algorithms, owing to the disappearance of the crossed matchings that conventional assignment algorithms allow. Moreover, the computational cost of this algorithm is much lower than that of the other three, leading to shorter execution times.
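An order-restricted matching of two ordered point sets can be computed with a simple dynamic program over an affinity matrix, as sketched below; this is a generic formulation that forbids crossed assignments and allows unmatched points, and it may differ in detail from the algorithm of the paper (e.g. in the treatment of closed contours).

```python
import numpy as np

def order_preserving_matching(score):
    """Maximize total affinity over matchings of two ordered point sets such that
    matched indices are strictly increasing in both sets (no crossings).
    `score[i, j]` is the affinity of matching point i of contour A to point j of contour B."""
    m, n = score.shape
    dp = np.zeros((m + 1, n + 1))
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i, j] = max(dp[i - 1, j],                  # leave point i of A unmatched
                           dp[i, j - 1],                  # leave point j of B unmatched
                           dp[i - 1, j - 1] + score[i - 1, j - 1])
    pairs, i, j = [], m, n                                # backtrack the matched pairs
    while i > 0 and j > 0:
        if dp[i, j] == dp[i - 1, j - 1] + score[i - 1, j - 1]:
            pairs.append((i - 1, j - 1)); i -= 1; j -= 1
        elif dp[i, j] == dp[i - 1, j]:
            i -= 1
        else:
            j -= 1
    return dp[m, n], pairs[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    affinity = rng.random((6, 7))
    total, pairs = order_preserving_matching(affinity)
    print("total affinity:", round(total, 3), "pairs:", pairs)
```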
In this paper we investigate the essentially unexplored area of three-dimensional dynamic fracture mechanics. The general objective sought by this investigation is the understanding of three-dimensional dynamic crack propagation and arrest, and, specifically, the effect that the specimen thickness has on the dynamic fracture mechanism. In particular, in the context of the present paper, it is intended to provide a summary of the achievements on the issue of three-dimensional dynamic fracture parameters. Furthermore, the behavior of the three-dimensional field near the crack front is investigated. The issue that will be addressed is the extent of regions over which plane stress and plane strain analyses provide a good approximation to the actual three-dimensional fields. The results obtained in this paper offer some important new insights into the effect of the specimen thickness on dynamic fracture.
A simple back force model is proposed for a dislocation cutting into a γ′ precipitate, taking into account the work for making and recovering an anti-phase boundary (APB). The first dislocation, or the leading partial of a superdislocation, is acted upon by a back force whose magnitude is equal to the APB energy. The second dislocation, or the trailing partial of a superdislocation, is attracted by the APB with a force of the same magnitude. The model is encoded in a 3D discrete dislocation dynamics (DDD) code and applied to the cutting behavior of dislocations at a γ/γ′ interface covered by an interfacial dislocation network. Dislocations are generated from Frank-Read sources and approach the interface. The first dislocation piles up at the interface not because of the stress field of the network but because of the back force opposing the creation of an APB. The second dislocation, however, stands off from the interface owing to the stress fields of the first dislocation and the dislocation network. The finer the mesh of the network, the farther away the second dislocation piles up. These two dislocations cut into the precipitate, forming a superdislocation, under the force exerted by follow-on dislocations. It is also clarified that penetration takes place at the interspaces of the network.
In the dynamic fracture of metallic materials, some cracks propagate with the occurrence of plastic deformation, and distinct plastic strain remains near the post-propagation area. In order to elucidate these dynamic nonlinear fracture processes, a moving finite element method is developed for nonlinear crack propagation. The T* integral is used as the parameter to estimate the crack tip condition. First, the effects of material viscosity and crack propagation velocity are discussed based on the numerical results for fracture under pure mode I high-speed loading. Under mixed-mode loading, numerical simulations for fracture path prediction are demonstrated for various crack propagation velocities. In these numerical simulations, the maximum hoop stress criterion is used to predict the fracture path.
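The fracture-path criterion mentioned above has a closed form: the maximum hoop (tangential) stress criterion of Erdogan and Sih predicts a kink angle theta_c = 2*atan((K_I - sqrt(K_I^2 + 8*K_II^2))/(4*K_II)). A small helper that evaluates it is given below, purely to make the criterion explicit; it is not tied to the moving finite element implementation.

```python
import numpy as np

def max_hoop_stress_angle(KI, KII):
    """Kink angle (radians) predicted by the maximum hoop stress criterion: the root of
    KI*sin(t) + KII*(3*cos(t) - 1) = 0 that maximizes the hoop stress. Returns 0 for pure mode I."""
    if KII == 0.0:
        return 0.0
    return 2.0 * np.arctan((KI - np.sqrt(KI**2 + 8.0 * KII**2)) / (4.0 * KII))

if __name__ == "__main__":
    KI, KII = 1.0, 0.4     # sample mixed-mode stress intensity factors (consistent units)
    print(f"predicted kink angle: {np.degrees(max_hoop_stress_angle(KI, KII)):.1f} degrees")
    print(f"pure mode II limit  : {np.degrees(max_hoop_stress_angle(0.0, 1.0)):.1f} degrees")  # about -70.5
```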
Meshless methods continue to generate strong interest as alternatives to conventional finite element methods. One major area of application as yet relatively unexplored with meshless methods is elasto-plasticity. In this paper we extend a novel numerical method, based on the Meshless Local Petrov-Galerkin (MLPG) method, to the modelling of elasto-plastic materials. The extended method is particularly suitable for problems in geomechanics, as it permits inclusion of infinite boundaries, and is demonstrated here on footing problems. The current usage of meshless methods for problems involving plasticity is reviewed and guidance is provided in the choice of various modelling parameters.
The recently proposed String Gradient Weighted Moving Finite Element (SGWMFE) method is extended to include remeshing and refining. The method simultaneously determines, at each time step, the solution of the governing partial differential equations and an optimal location of the finite element nodes. It has previously been applied to the nonlinear time-dependent two-dimensional shallow water equations, under the demanding conditions of large Coriolis forces, inducing large mesh and field rotation. Such effects are of major importance in geophysical fluid dynamics applications. Two deficiencies of the original SGWMFE method are (1) possible tangling of the mesh which causes the method's failure, and (2) no mechanism for global refinement when necessary due to the constant number of degrees of freedom. Here the method is extended in order to continue computing solutions when the meshes become too distorted, which happens quickly when the flow is rotationally dominant. Optimal rates of convergence are obtained when remeshing is applied. The method is also extended to include refinement to enable handling of new physical phenomena of a smaller scale which may appear during the solution process. It is shown that the errors in time are kept under control when refinement is necessary. Results of the extended method for some example problems of water hump release are presented.
Modeling plant microstructure is of great interest to food engineers for studying and explaining material properties related to mass transfer and mechanical deformation. In this paper, a novel ellipse tessellation algorithm to generate a 2D geometrical model of apple tissue is presented. Ellipses were used to quantify the orientation and aspect ratio of cells on a microscopic image. The area and centroid of each cell were also determined by means of a numerical procedure. These characteristic quantities were then described by means of probability density functions. The model tissue geometry was generated from the ellipses, which were truncated where neighbouring areas overlapped. As a result, a virtual microstructure consisting of truncated ellipses fills the entire space with the same number of cells as the microscopic images and with similar area, orientation and aspect ratio distributions. Statistical analysis showed that the virtual geometry generated with this approach yields geometries spatially equivalent to real fruit microstructures. Compared with the more common Voronoi tessellation algorithm, ellipse tessellation was superior for generating the microstructure of fruit tissues. The extension of the algorithm to 3D is straightforward. These representative tissues can readily be exported into a finite element environment via interfacing codes to perform in silico experiments for estimating gas and moisture diffusivities and investigating their relation with fruit microstructure.
This paper presents an empirical study of the accuracy of multipole expansions of Helmholtz-like kernels with complex wavenumbers of the form $k=(\alpha+\rmi\beta)\vartheta$, with $\alpha=0,\pm1$ and $\beta>0$, which, the paucity of available studies notwithstanding, arise for a wealth of different physical problems. It is suggested that a simple point-wise error indicator can provide an a-priori indication on the number $N$ of terms to be employed in the Gegenbauer addition formula in order to achieve a prescribed accuracy when integrating single layer potentials over surfaces. For $\beta\geq 1$ it is observed that the value of $N$ is independent of $\beta$ and of the size of the octree cells employed while, for $\beta<1$, simple empirical formulas are proposed yielding the required $N$ in terms of $\beta$.
A novel approach is presented for the identification of constitutive parameters of linear poroelastic materials from indentation tests. Load-controlled spherical indentation with a ramp-hold creep profile is considered. The identification approach is based on the normalization of the time-displacement indentation response, in analogy to the well-known one-dimensional consolidation problem. The identification algorithm consists of two nested optimization routines, one in the time-displacement domain and the other in a normalized domain. The procedure is validated by identifying poroelastic parameters from the displacement-time outputs of finite element simulations; the new identification scheme proves both quantitatively reliable and fast. The procedure is also tested on the identification of the constitutive parameters of gelatin gel and bone from experimental indentation data and succeeds in providing quantitative results almost in real time.
The goal of this work is to automatically extract the contour of an object and to simulate its deformation using a physical approach. To segment an object represented in an image, an initial contour is manually defined for it, which then evolves automatically until it matches the border of the desired object. In our approach, the contour is modelled by a physical formulation, using the Finite Element Method (FEM), and its temporal evolution towards the desired final contour is driven by internal forces, defined by the intrinsic material characteristics adopted for the physical model and the interrelation between its nodes, and by external forces, determined from the image features that are most representative of the object to be segmented. To build the physical model of the contour used in the segmentation process, we adopted the isoparametric finite element proposed by Sclaroff, and to obtain its evolution towards the object border we used the methodology presented by Nastar, which consists in solving the dynamic equilibrium equation between two consecutive instants. As for the simulation of the deformation of one object into another, or between two different instances of one object, after the initial and final contours have been properly modelled, again using the FEM, modal analysis complemented with a global optimization technique is employed to establish correspondences between their nodes (data points). After the matching phase, the displacement field between the two contours is simulated using the dynamic equilibrium equation. Our approach gives very satisfactory results in the segmentation of objects represented in images as well as in the simulation of the deformation between two images. Moreover, it is governed by physical principles, so the results are consistent with the expected behaviour of the modelled objects.
The standard velocity projection scheme for the Material Point Method (MPM) and a typical form of the GIMP Method are examined. It is demonstrated that the fidelity of information transfer from a particle representation to the computational grid is strongly dependent on particle density and location. In addition, use of non-uniform grids and even non-uniform particle sizes are shown to introduce error. An enhancement to the projection operation is developed which makes use of already available velocity gradient information. This enhancement facilitates exact projection of linear functions and reduces the dependence of projection accuracy on particle location and density for non-linear functions. The efficacy of this formulation for reducing error is demonstrated in solid mechanics simulations in one and two dimensions.
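A 1D sketch of the gradient-enhanced projection described above is given below: each particle contributes a locally linear velocity field v_p + grad(v)_p*(x_i - x_p) to the mass-weighted grid projection, which reproduces linear velocity fields exactly, whereas the standard projection does not. This is a schematic illustration under simple assumptions (regular grid, tent shape functions), not the paper's implementation.

```python
import numpy as np

def project_velocity(x_p, v_p, gradv_p, m_p, x_g, dx, use_gradient=True):
    """Particle-to-grid velocity projection on a regular 1D grid with tent shape functions.
    With use_gradient=True each particle contributes v_p + gradv_p*(x_i - x_p) at node i;
    with False it reduces to the standard mass-weighted MPM projection."""
    n = len(x_g)
    mom, mass = np.zeros(n), np.zeros(n)
    for xp, vp, gp, mp in zip(x_p, v_p, gradv_p, m_p):
        for i, xi in enumerate(x_g):
            w = max(1.0 - abs(xp - xi) / dx, 0.0)
            if w == 0.0:
                continue
            v_at_node = vp + (gp * (xi - xp) if use_gradient else 0.0)
            mass[i] += w * mp
            mom[i] += w * mp * v_at_node
    return np.divide(mom, mass, out=np.zeros(n), where=mass > 1e-12)

if __name__ == "__main__":
    x_g = np.linspace(0.0, 1.0, 6)
    dx = 0.2
    rng = np.random.default_rng(3)
    x_p = rng.uniform(0.05, 0.95, 20)          # irregularly placed particles
    v_p = 2.0 * x_p + 1.0                      # linear velocity field v(x) = 2x + 1
    gradv_p = np.full_like(x_p, 2.0)
    m_p = np.full_like(x_p, 0.01)
    print("exact    :", 2.0 * x_g + 1.0)
    print("standard :", project_velocity(x_p, v_p, gradv_p, m_p, x_g, dx, use_gradient=False))
    print("enhanced :", project_velocity(x_p, v_p, gradv_p, m_p, x_g, dx, use_gradient=True))
```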