Archives of Computational Methods in Engineering

Published by Springer Verlag
Online ISSN: 1886-1784
Print ISSN: 1134-3060
[Figure captions: geometric measures (length; area). Linearly precise metric coordinate shape functions (boundary node; interior node). Voronoi cell and natural neighbors (filled circles) of point p. Maximum entropy shape functions in one dimension. MAXENT computations on element A (φa; maximum entropy distribution).]
This paper is an overview of recent developments in the construction of finite element interpolants which are C^0-conforming on polygonal domains. In 1975, Wachspress proposed a general method for constructing finite element shape functions on convex polygons. Only recently has renewed interest in such interpolants surfaced in various disciplines, including geometric modeling, computer graphics, and finite element computations. This survey focuses specifically on polygonal shape functions that satisfy the properties of barycentric coordinates: (a) they form a partition of unity and are non-negative; (b) they interpolate nodal data (Kronecker-delta property); (c) they are linearly complete, i.e., satisfy linear precision; and (d) they are smooth within the domain. We compare and contrast the construction and properties of various polygonal interpolants: Wachspress basis functions, mean value coordinates, the metric coordinate method, natural neighbor-based coordinates, and maximum entropy shape functions. Numerical integration of the Galerkin weak form on polygonal domains is discussed, and the performance of these polygonal interpolants on the patch test is studied.
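As an illustration of the barycentric-coordinate properties listed above, one of the surveyed constructions, mean value coordinates, admits a compact closed form. The sketch below (illustrative Python with hypothetical names, not code from the paper) evaluates them at a point strictly inside an arbitrary polygon, so that partition of unity and linear precision can be checked numerically.

```python
import math

def mean_value_coords(p, verts):
    """Mean value coordinates of point p inside a polygon (Floater's formula).

    Assumes p lies strictly inside the polygon (not at a vertex, where the
    distance in the denominator would vanish)."""
    n = len(verts)
    # distance from p to each vertex
    d = [math.hypot(vx - p[0], vy - p[1]) for vx, vy in verts]
    ang = []
    for i in range(n):
        ax, ay = verts[i][0] - p[0], verts[i][1] - p[1]
        bx, by = verts[(i + 1) % n][0] - p[0], verts[(i + 1) % n][1] - p[1]
        # signed angle alpha_i subtended at p by the edge (v_i, v_{i+1})
        ang.append(math.atan2(ax * by - ay * bx, ax * bx + ay * by))
    # weight w_i = (tan(alpha_{i-1}/2) + tan(alpha_i/2)) / |v_i - p|
    w = [(math.tan(ang[i - 1] / 2) + math.tan(ang[i] / 2)) / d[i]
         for i in range(n)]
    s = sum(w)
    return [wi / s for wi in w]  # normalize: partition of unity
```

For a convex polygon the resulting coordinates are also non-negative, matching property (a) above.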
 
The present study is divided into two parts. In the first, a complete elasto-plastic microcontact model of anisotropic rough surfaces is given. Rough surfaces are modelled as a random process in which the height of the surface is treated as a two-dimensional random variable, and the surface is assumed to be statistically homogeneous. The description of anisotropic random surfaces concentrates on strongly rough surfaces, for which the summits are represented by highly eccentric elliptic paraboloids. The model is based on the volume conservation of asperities, with the plasticity index modified to suit more general geometric contact shapes during the plastic deformation process. This model is used to determine the total contact area, contact load and contact stiffness, each a combination of elastic, elasto-plastic and plastic components. The elastic and elasto-plastic stiffness coefficients decrease with increasing variance of the surface height about the mean plane, while the standard deviations of slopes and of curvatures have no observable effect on the normal contact stiffness. The second part deals with the solution of the fully three-dimensional contact/friction problem, taking into account the contact stiffnesses in the normal and tangential directions. An incremental non-associated hardening friction law, analogous to the classical theory of plasticity, is used. Two numerical examples are presented to show the applicability of the proposed method.
 
The paper is devoted to the motion of meteoric bodies in the Earth's atmosphere. It begins with a review of the existing models describing the motion of a meteoric body in the atmosphere (deceleration, ablation and fragmentation), namely the single-body theory and the theory of successive fragmentation. Methods for determining meteor-body parameters from observational data are reviewed. The described models and methods are then applied to the analysis of the trajectories of several bolides. It is found that the single-body model accounting for ablation describes the trajectories of the considered bolides with the best accuracy. A trajectory analysis of the Benešov bolide, for which detailed observational data exist, is carried out; the basic parameters of the bolide, including its initial mass, are determined, and the results are compared with those of other authors. The second part of the work is devoted to the interaction of meteoroid fragments in a supersonic flow. An approximation of the numerical data for the transverse force coefficient by a simple analytical function is proposed. An analytical solution is then obtained for the separation of two spherical fragments under a decreasing transverse force in the absence of resistance. On the basis of the analytical solutions derived in this work and the numerical data, a new model of layer-by-layer scattering of meteoroid fragments moving as a system of bodies is constructed.
 
This article is a bibliographical review of acoustic absorbing materials, also known as poroelastic materials. These absorbing materials are passive media used extensively in industry to reduce noise. The review presents the fundamental parameters that define each of the parts comprising these materials, as well as the current experimental methods used to measure those parameters. We then analyze the principal characterization models used to study the behaviour of poroelastic materials. Given the limited accuracy of the standing wave method, three absorbing materials are characterized using these models, and a comparison between measurements with the standing wave method and the surface impedance predicted by the models is shown.
 
The main purpose of this paper is to review a posteriori error estimators for the simulation of acoustic wave propagation problems by computational methods. Residual-type (explicit and implicit) and recovery-type estimators are presented in detail for the Helmholtz problem. Recent work on goal-oriented error estimation techniques with respect to so-called quantities of interest or output functionals is also accounted for. Fundamental results from a priori error estimation are presented, and issues concerning pollution error at large wave numbers are extensively discussed.
 
Over the last decade, Computer Aided Engineering (CAE) tools have become essential in the assessment and optimization of the acoustic characteristics of products and processes. The possibility of evaluating these characteristics on virtual prototypes at almost any stage of the design process reduces the need for very expensive and time-consuming physical prototype testing. However, despite their steady improvements and extensions, CAE techniques are still primarily used by analysis specialists. In order to turn them into easy-to-use, versatile tools that are also easily accessible for designers, several bottlenecks have to be resolved. These include, amongst others, the lack of efficient numerical techniques for solving system-level functional performance models in a wide frequency range. This paper reviews the CAE modelling techniques which can be used for the analysis of time-harmonic acoustic problems and focuses on techniques which have the Trefftz approach as baseline methodology. The basic properties of the different methods are highlighted and their strengths and limitations are discussed. Furthermore, an overview is given of the state of the art of the extensions and enhancements which have been recently investigated to enlarge the application range of the different techniques. Specific attention is paid to one very promising Trefftz-based technique, the so-called wave based method. This method has all the necessary attributes for taking the next step in the evolution towards truly virtual product design.
 
The paper presents a hierarchic model for the analysis of multifield problems related to multilayered plates subjected to mechanical, electric and thermal loads. In the framework of a unified formulation (UF), the finite element method has been used to derive a complete family of plate elements distinguished from one another by the variational statement and the laminate kinematic assumptions upon which each of them is based. Depending on the accuracy required by the analysis, the most appropriate element can easily be derived by choosing the primary unknowns of the model, selecting between displacement and partially mixed formulations (Principle of Virtual Displacements, Reissner's mixed variational theorem). The complete derivation of fully coupled variational statements (classical and partially mixed) is also given for thermopiezoelastic analysis. The description of the unknowns can then be chosen between global (ESL) and layerwise (LW), and the order of the expansion can be set in the range from 1 to 4, thus selecting first-order or higher-order plate models. Finally, results in the form of tables and graphs are given in order to validate the proposed elements.
 
The paper discusses error estimation and adaptive finite element procedures for elasto-static and dynamic problems based on superconvergent patch recovery (SPR) techniques. SPR is a postprocessing procedure that obtains improved finite element solutions by least-squares fitting of superconvergent stresses at certain sampling points in local patches. An enhancement of the original SPR that accounts for the equilibrium equations and boundary conditions is proposed; it improves the quality of the postprocessed solutions considerably and thus provides an even more effective error estimate. The patch configuration of SPR can be either the union of elements surrounding a vertex node (the node patch) or the union of elements surrounding an element (the element patch); it is shown that these two choices normally give postprocessed solutions of comparable quality. The paper is also concerned with the application of SPR techniques to a wide range of problems. The plate bending problem posed in mixed form, where force and displacement variables are used simultaneously as unknowns, is considered. For eigenvalue problems, a procedure for improving eigenpairs and estimating the error of the eigenfrequency is presented. A postprocessed type of error estimate and an adaptive procedure for the semidiscrete finite element method are discussed; it is shown that the procedure is able to update the spatial mesh and the time step size so that both spatial and time discretization errors are controlled within specified tolerances. A discontinuous Galerkin method for solving structural dynamics problems is also presented.
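To make the recovery step concrete, here is a minimal one-dimensional sketch (illustrative only, not the authors' formulation): superconvergent stress samples taken at element midpoints within a node patch are fitted by a linear polynomial in the least-squares sense, and the fit is evaluated at the patch node to give the recovered nodal stress.

```python
def spr_recover(node_x, sample_x, sample_sigma):
    """Recover a nodal stress by least-squares fitting sigma*(x) = a0 + a1*x
    to superconvergent samples in a 1D patch, then evaluating at the node.

    Solves the 2x2 normal equations of the linear fit in closed form."""
    n = len(sample_x)
    sx = sum(sample_x)
    sxx = sum(x * x for x in sample_x)
    ss = sum(sample_sigma)
    sxs = sum(x * s for x, s in zip(sample_x, sample_sigma))
    det = n * sxx - sx * sx
    a0 = (sxx * ss - sx * sxs) / det
    a1 = (n * sxs - sx * ss) / det
    return a0 + a1 * node_x
```

Because the fit is exact for any linear stress field, the recovered value reproduces linear solutions exactly, which is the consistency property underlying the superconvergence of the recovered field.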
 
Embedded mesh, immersed body or fictitious domain techniques have been used for many years as a way to discretize geometrically complex domains with structured grids. The use of such techniques within adaptive, unstructured grid solvers is relatively recent. The combination of body-fitted functionality for some portion of the domain, together with embedded mesh or immersed body functionality for another portion of the domain offers great advantages, which are increasingly being exploited. The present paper reviews the methodologies pursued so far, addresses implementational issues and shows the possibilities such techniques offer.
 
This paper reviews the state of the art and discusses very recent mathematical developments in the field of adaptive boundary element methods. This includes an overview of available a posteriori error estimates as well as a state-of-the-art formulation of convergence and quasi-optimality of adaptive mesh-refining algorithms.
 
This paper focuses on discrete and continuous adjoint approaches and direct differentiation methods that can be used efficiently in aerodynamic shape optimization problems. The advantage of the adjoint approach is that it computes the gradient of the objective function at a cost which does not depend upon the number of design variables. An extra advantage of the formulation presented below, for the computation of either first- or second-order sensitivities, is that the resulting sensitivity expressions are free of field integrals even if the objective function is a field integral. This is demonstrated using three possible objective functions for internal aerodynamic problems: the first is for inverse design problems, where a target pressure distribution along the solid walls must be reproduced; the other two quantify viscous losses in duct or cascade flows, cast as either the reduction in total pressure between the inlet and outlet or the field integral of entropy generation. From the mathematical point of view, the three functions are defined over different parts of the domain or its boundaries, and this strongly affects the adjoint formulation. In the second part of the paper, the same discrete and continuous adjoint formulations are combined with direct differentiation methods to compute the Hessian matrix of the objective function. Although direct differentiation for the computation of the gradient is time consuming, it may support the adjoint method in calculating the exact Hessian matrix components at minimum CPU cost. Since, however, this CPU cost is proportional to the number of design variables, a well-performing optimization scheme is proposed, based on the exactly computed Hessian during the starting cycle and a quasi-Newton (BFGS) scheme during the subsequent cycles.
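The cost-independence claim is easiest to see in the discrete adjoint setting. For a linear state equation A u = b(alpha) and objective J = c^T u, a single adjoint solve A^T lam = c yields the gradient dJ/dalpha = lam^T db/dalpha for any number of design variables. The toy 2x2 sketch below (a hypothetical example, not taken from the paper, with db/dalpha obtained by finite differences standing in for an analytic derivative) verifies this identity.

```python
def solve2(M, b):
    """Solve a 2x2 linear system M x = b by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(M[1][1] * b[0] - M[0][1] * b[1]) / det,
            (M[0][0] * b[1] - M[1][0] * b[0]) / det]

def adjoint_gradient(A, b_of_alpha, c, alpha, h=1e-6):
    """Gradient dJ/dalpha of J = c^T u subject to A u = b(alpha).

    One adjoint solve A^T lam = c is performed; its cost does not depend
    on the number of design variables (here a single alpha, for clarity)."""
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]  # transpose of A
    lam = solve2(At, c)
    bp, bm = b_of_alpha(alpha + h), b_of_alpha(alpha - h)
    db = [(p - m) / (2 * h) for p, m in zip(bp, bm)]  # db/dalpha
    return lam[0] * db[0] + lam[1] * db[1]
```

For A = [[2, 1], [0, 3]], b(alpha) = [alpha, alpha^2] and c = [1, 1], eliminating u by hand gives J(alpha) = alpha/2 + alpha^2/6, so dJ/dalpha at alpha = 1 is 5/6, which the adjoint evaluation reproduces without ever differentiating the state u.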
 
The aim of this article is twofold. The first purpose is to review existing RANS-LES methods; this is addressed in the first part in a comprehensive way, detailing the advantages and drawbacks of the different techniques. In the second part, a hybrid RANS-LES approach is presented, which can be interpreted as the most general case of the NLDE approach as defined by Morris et al. A decomposition of the exact solution of the Navier-Stokes equations into three parts is considered: mean flow, resolved fluctuations and unresolved (subgrid) fluctuations. The mean flow is computed using a classical RANS method, while the resolved fluctuations are obtained from an LES method. Several features of this approach are first discussed, namely: the development of a non-zero mean in the resolved fluctuations (hereafter also called details), the computational problems due to the use of different schemes and meshes for the RANS and LES calculations, and the use of a boundary condition suited to the fluctuating part of the field. This approach is then used to simulate the acoustic sources of the flow around the slat of a high-lift system in landing configuration. The main instabilities of the flow are studied and the resulting acoustic near field is carefully investigated.
 
Numerical modelling of non-Newtonian flows usually involves coupling between the equations of motion, which have an elliptic character, and the fluid constitutive equation, which defines an advection problem linked to the fluid history. Different numerical techniques exist to treat the hyperbolic advection equations. In non-recirculating flows, Eulerian discretizations can give a convergent solution within a short computing time. However, the existence of steady recirculating flow areas induces additional difficulties: in such flows, neither boundary conditions nor initial conditions are known. In this paper we compare different advanced strategies (some of them recently proposed, and extended here to address complex flows) applied to the solution of the kinetic theory description of short-fiber suspension flows.
 
A review of methods applicable to the study of historical masonry construction, encompassing both classical and advanced ones, is presented. Firstly, the paper discusses the main challenges posed by historical structures and the desirable features of approaches oriented to the modeling and analysis of this type of structure. Secondly, the main methods currently used to study historical masonry structures are described and discussed. The main available strategies, including limit analysis, simplified methods, FEM macro- or micro-modeling and discrete element methods (DEM), are considered with regard to their realism, computational efficiency, data availability and real applicability to large structures. A set of final considerations is offered on the real possibility of carrying out realistic analyses of complex historical masonry structures. In spite of modern developments, the study of historical buildings still faces significant difficulties linked to computational effort, the possibility of input data acquisition and the limited realism of the methods.
 
This paper revisits a powerful discretization technique, the Proper Generalized Decomposition (PGD), illustrating its ability to solve highly multidimensional models. The technique operates by constructing a separated representation of the solution, such that the solution complexity scales linearly with the dimension of the space in which the model is defined, instead of the exponentially growing complexity characteristic of mesh-based discretization strategies. The PGD makes possible the efficient solution of models defined in multidimensional spaces, such as those encountered in quantum chemistry, the kinetic theory description of complex fluids, genetics (chemical master equation), financial mathematics, … but also of models classically defined in standard space and time to which new extra-coordinates can be added (parametric models, …), opening numerous possibilities (optimization, inverse identification, real-time simulations, …).
 
In this article recent advances in the MAC method are reviewed. The MAC technique dates back to the early sixties at the Los Alamos Laboratories; this paper starts with a historical review, followed by a summary of related techniques. Improvements since the early days of MAC (and the Simplified MAC, SMAC) include automatic time-stepping, the use of the conjugate gradient method to solve the Poisson equation for the corrected velocity potential, greater efficiency through stripping out all particles (markers) other than those near the free surface, more accurate approximations of the free surface boundary conditions, the addition of bounded high-accuracy upwinding for the convected terms (thereby enabling the solution of higher Reynolds number flows), and a (dynamic) flow visualization facility. This article concentrates mainly on a three-dimensional version of the SMAC method. It shows how to approximate curved boundaries by considering one configurational example in detail; the same is done for the free surface. Rather than validation, the article focuses on many of the examples and applications that the MAC method can solve, from turbulent flows to rheology. It concludes with some speculative comments on the future direction of the methodology.
 
There is a growing awareness of the impact of non-deterministic model properties on the numerical simulation of physical phenomena. These non-deterministic aspects are of great importance when a large amount of information is to be retrieved from the numerical analysis, for instance in a numerical reliability study or in reliability-based optimisation during a design process. The non-deterministic properties therefore form an essential part of a trustworthy virtual prototyping environment, whose implementation requires the inclusion of non-deterministic properties in the numerical finite element framework. This article gives an overview of the emerging non-probabilistic approaches for non-deterministic numerical analysis and compares them to the classical probabilistic methodology. Their applicability in the context of engineering design is discussed, and the typical implementation strategies applied in the literature are reviewed. A new concept is introduced for the calculation of envelope frequency response functions; this method is explained in detail and illustrated on a numerical example.
 
The Dirichlet-to-Neumann (DtN) finite element method is a general technique for the solution of problems in unbounded domains, which arise in many fields of application. Its name comes from the fact that it involves the nonlocal Dirichlet-to-Neumann (DtN) map on an artificial boundary which encloses the computational domain. The method was originally developed for the solution of linear elliptic problems, such as wave scattering problems governed by the Helmholtz equation or by the equations of time-harmonic elasticity. Recently, the method has been extended in a number of directions, and further analyzed and improved, by the author's group and by others. This article is a state-of-the-art review of the method. In particular, it concentrates on two major recent advances: (a) the extension of the DtN finite element method to nonlinear elliptic and hyperbolic problems; (b) procedures for localizing the nonlocal DtN map, which lead to a family of finite element schemes with local artificial boundary conditions. Possible future research directions and additional extensions are also discussed.
 
In this paper we survey the development of fast iterative solvers aimed at solving 2D/3D Helmholtz problems. In the first half of the paper, a survey on some recently developed methods is given. The second half of the paper focuses on the development of the shifted Laplacian preconditioner used to accelerate the convergence of Krylov subspace methods applied to the Helmholtz equation. Numerical examples are given for some difficult problems, which had not been solved iteratively before.
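For reference, the preconditioner in question takes, in one common form (the notation here is generic and may differ from the paper's), the discretized Helmholtz operator and moves its zeroth-order term into the complex plane:

```latex
A_h = -\Delta_h - k^2 I,
\qquad
M_{\beta_1,\beta_2} = -\Delta_h - (\beta_1 - \mathrm{i}\,\beta_2)\,k^2 I .
```

The Krylov method is then applied to the right-preconditioned operator $A_h M_{\beta_1,\beta_2}^{-1}$. A complex shift $\beta_2 > 0$ damps the operator so that $M$ can be inverted approximately by standard means (e.g., one multigrid cycle), while the spectrum of the preconditioned operator remains favourably clustered for the Krylov iteration; the choice of $(\beta_1, \beta_2)$ trades off these two effects.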
 
Linear parabolic diffusion theories based on Fourier’s or Fick’s laws predict that disturbances can propagate at infinite speed. Although in some applications, the infinite speed paradox may be ignored, there are many other applications in which a theory that predicts propagation at finite speed is mandatory. As a consequence, several alternatives to the linear parabolic diffusion theory, that aim at avoiding the infinite speed paradox, have been proposed over the years. This paper is devoted to the mathematical, physical and numerical analysis of a hyperbolic convection-diffusion theory.
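A minimal numerical illustration of the finite-speed property (a sketch under standard Cattaneo-type assumptions, not taken from the paper): the hyperbolic model tau*u_tt + u_t = D*u_xx propagates disturbances at speed c = sqrt(D/tau), so an initial pulse remains exactly zero beyond its domain of dependence, unlike the parabolic limit tau -> 0, which spreads it everywhere instantly.

```python
def cattaneo_step(u_prev, u_now, tau, D, dx, dt):
    """One explicit step of tau*u_tt + u_t = D*u_xx (Cattaneo/telegraph model),
    central differences in space and time, homogeneous Dirichlet ends.

    Stable for c*dt/dx <= 1 with c = sqrt(D/tau) (wave-type CFL condition)."""
    n = len(u_now)
    a = tau / dt**2 + 1.0 / (2.0 * dt)   # coefficient of u^{n+1}
    b = tau / dt**2 - 1.0 / (2.0 * dt)   # coefficient of u^{n-1}
    u_next = [0.0] * n
    for i in range(1, n - 1):
        uxx = (u_now[i - 1] - 2.0 * u_now[i] + u_now[i + 1]) / dx**2
        u_next[i] = (D * uxx + 2.0 * tau / dt**2 * u_now[i] - b * u_prev[i]) / a
    return u_next
```

Because the three-point stencil widens the discrete support by one cell per step, the numerical solution is identically zero outside a cone around the initial pulse, mirroring the finite propagation speed of the continuous model.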
 
In this paper we consider (hierarchical, Lagrange) reduced basis approximation and a posteriori error estimation for linear functional outputs of affinely parametrized elliptic coercive partial differential equations. The essential ingredients are: (primal-dual) Galerkin projection onto a low-dimensional space associated with a smooth "parametric manifold" (dimension reduction); efficient and effective greedy sampling methods for identification of optimal and numerically stable approximations (rapid convergence); a posteriori error estimation procedures (rigorous and sharp bounds for the linear-functional outputs of interest); and Offline-Online computational decomposition strategies (minimum marginal cost for high performance in the real-time/embedded (e.g., parameter estimation, control) and many-query (e.g., design optimization, multi-model/scale) contexts). We present illustrative results for heat conduction, convection-diffusion, inviscid flow, and linear elasticity; outputs include transport rates, added mass, and stress intensity factors.
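The Offline-Online split hinges on the affine parameter dependence mentioned above (standard in the reduced basis literature; the notation here is generic, not copied from the paper):

```latex
A(\mu) = \sum_{q=1}^{Q} \theta_q(\mu)\, A_q
\quad\Longrightarrow\quad
A_N(\mu) = \sum_{q=1}^{Q} \theta_q(\mu)\,
\underbrace{Z^{\mathsf T} A_q Z}_{\text{precomputed offline}} ,
```

where the columns of $Z$ are the $N$ reduced basis vectors. Online, assembling and solving the $N \times N$ reduced system costs $O(Q N^2) + O(N^3)$ operations, independent of the size of the underlying "truth" discretization, which is what makes the real-time and many-query contexts tractable.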
 
This work is an overview of algebraic pressure segregation methods for the incompressible Navier-Stokes equations. These methods can be understood as an inexact LU block factorization of the original system matrix. We consider a wide set of methods: algebraic pressure correction methods, algebraic velocity correction methods and the Yosida method. Higher-order schemes, based on improved factorizations, are also introduced. We also explain the relationship between these pressure segregation methods and some widely used preconditioners, and we introduce predictor-corrector methods: one-loop algorithms in which the nonlinearity and the iterations towards the monolithic system are coupled.
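The common starting point (in standard notation, not necessarily the paper's) is the block LU factorization of the discrete incompressible system, with velocity matrix $A$, discrete gradient $G$ and divergence $D$:

```latex
\begin{pmatrix} A & G \\ D & 0 \end{pmatrix}
=
\begin{pmatrix} A & 0 \\ D & -\,D A^{-1} G \end{pmatrix}
\begin{pmatrix} I & A^{-1} G \\ 0 & I \end{pmatrix} .
```

Pressure-correction schemes replace the exact (dense) $A^{-1}$ appearing in the Schur complement by a cheap approximation, for instance $\Delta t\, M^{-1}$ with $M$ a (lumped) mass matrix, which turns $D A^{-1} G$ into the familiar pressure Poisson operator $\Delta t\, D M^{-1} G$; the accuracy of this approximation governs the splitting error of the resulting segregated scheme.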
 
This paper describes a fully implicit algorithm developed and optimized to simulate sheet metal forming processes, implemented in the in-house code DD3IMP. Attention is paid to the augmented Lagrangian method adopted to treat the frictional contact problem. The global resolution of the coupled equilibrium and contact problem is performed in a single loop, with a static implicit iterative Newton-Raphson scheme. This demands particular attention in the contact search algorithm, which in this case adopts a parametric description of the tools. In order to highlight the adopted strategies, a review of the state of the art in sheet metal forming simulation is presented, with respect to model reliability and efficiency.
 
Via new perspectives on the time dimension, the present exposition overviews new and recent advances describing a standardized formal theory for the evolution, classification, characterization and generic design of time-discretized operators for transient/dynamic applications. Of fundamental importance here are the developments encompassing the evolution of time-discretized operators, leading to the theoretical design of computational algorithms and their subsequent classification and characterization. The overall developments differ significantly from the way the familiar traditional modal-type approaches and the wide variety of step-by-step time-marching approaches have been developed and described in the research literature and in standard textbooks over the years. The theoretical ideas leading to a generalized methodology and formulations emanate from, and are explained via, a generalized time-weighted philosophy applied to the semi-discretized equations of transient/dynamic systems. It is hypothesized herein that integral operators and their associated representations, together with a wide variety of so-called integration operators, belong to the same family: a virtual or weighted time field is introduced specifically for the time discretization, enacted in a mathematically consistent manner so as first to permit obtaining the adjoint operator of the original semi-discretized equation system.
Subsequently, the selection of these virtual or weighted time fields determines the formal development and outcome of "exact integral operators" and "approximate integral operators". This provides avenues for the design of new computational algorithms that have not been exploited or explored to date, recovers most existing algorithms, and systematically bridges the relationships leading to a wide variety of "integration operators". The overall developments thus not only serve as a prelude to the formal development of "exact integral operators", but also demonstrate that the resulting "approximate integral operators" and a wide variety of new and existing integration operators and known methods are simply subsets of the generalizations of a standardized W_p-family and emanate from the principles presented herein. Particular attention is paid to the developments leading first to integral operators in time and then, systematically, to generalized integration operators in time, of which single-step time integration operators and various widely recognized algorithms are simply subsets; to the associated multi-step time integration operators; to a class of finite-element-in-time integration operators; and to their interrelationships. The theoretical design developments encompass and explain a variety of time-discretized operators, recover various original methods of algorithmic development, yield new computational algorithms not exploited or explored to date, and furthermore permit time-discretized operators to be uniquely classified and characterized by algorithmic markers.
The resulting so-called discrete numerically assigned (DNA) algorithmic markers not only serve as a prelude to a standardized formal theory of the development of time-discretized operators, and as a forum for selecting and identifying them, but also permit lucid communication when referring to various time-discretized operators. The DNA algorithmic markers that characterize time-discretized operators comprise both: (i) the weighted time fields introduced to enact the time discretization process, and (ii) the corresponding conditions (if any) that these weighted time fields impose upon the approximations for the dependent field variables and their updates in the theoretical development of time-discretized operators. As such, recent advances encompassing the theoretical design and development of computational algorithms for the transient/dynamic analysis of time-dependent phenomena encountered in the engineering, mathematical and physical sciences are overviewed.
 
The objective of this paper is to investigate the efficiency of various optimization methods based on mathematical programming and evolutionary algorithms for solving structural optimization problems under static and seismic loading conditions. Particular emphasis is given to modified versions of the basic evolutionary algorithms aimed at improving the performance of the optimization procedure. Modified versions of both genetic algorithms and evolution strategies, combined with mathematical programming methods to form hybrid methodologies, are also tested and compared, and prove particularly promising. Furthermore, the structural analysis phase is replaced by a neural network prediction for the computation of the data required by the evolutionary algorithms. Advanced domain decomposition techniques, particularly tailored for the parallel solution of large-scale sensitivity analysis problems, are also implemented. The efficiency of a rigorous approach for treating seismic loading is investigated and compared with a simplified dynamic analysis adopted by seismic codes, in the framework of finding the optimum design of structures with minimum weight. In this context a number of accelerograms are produced from the elastic design response spectrum of the region; these accelerograms constitute the multiple loading conditions under which the structures are optimally designed. The numerical tests presented demonstrate the computational advantages of the discussed methods, which become more pronounced in large-scale optimization problems.
 
The numerical treatment of contact problems involves the formulation of the geometry, the statement of interface laws, the variational formulation and the development of algorithms. In this paper we give an overview of the different topics involved when contact problems have to be simulated. To be most general, we derive a geometrical model for contact which is valid for large deformations. Furthermore, interface laws are discussed for the normal and tangential stress components in the contact area. Different variational formulations can be applied to treat the variational inequalities arising from contact, and several of these techniques are presented. The discretization of a contact problem in time and space is also of great importance and has to be chosen with regard to the nature of the contact problem. Thus the standard discretization schemes are discussed, as well as techniques to search for contact in the case of large deformations.
 
The available theories and finite elements developed for multilayered, anisotropic, composite plate and shell structures are reviewed. The different approaches to plate and shell structures are listed as three-dimensional approaches, continuum-based methods, axiomatic and asymptotic two-dimensional theories, and equivalent single layer and layerwise variable descriptions. The extension of finite elements developed for isotropic one-layered structures to multilayered plates and shells is also discussed.
 
This work is an overview of available constitutive laws used in finite element codes to model elastoplastic metal anisotropy behaviour at a macroscopic level. It focuses on models with strong links to the phenomena occurring at the microscopic level. Starting from well-known macroscopic models such as Hill's or Barlat's laws, the limits of these macroscopic phenomenological yield loci are defined, which helps to understand the current trend to develop micro-macro laws. The characteristics of micro-macro laws, where the physical behaviour at the level of grains and crystals is taken into account to provide an average macroscopic answer, are described. Some basic knowledge about crystal plasticity models is given for non-specialists, so everyone can understand the microscopic models used to reach macroscopic values. The assumptions defining the transition between the microscopic and macroscopic scales are summarized: the full-constraint or relaxed Taylor models, the self-consistent approach, and homogenisation techniques. Then, the two generic families of micro-macro models are presented: macroscopic laws without a yield locus, where computations on a discrete set of crystals provide the macroscopic material behaviour, and macroscopic laws with a macroscopic yield locus defined by microscopic computations. The models proposed by Anand, Dawson, Miehe, Geers, Kalidindi or Nakamachi belong to the first family, while the proposals by Montheillet, Lequeu, Darrieulat, Arminjon, Van Houtte and Habraken belong to the second. The characteristics of all these models are presented and commented on. This paper highlights the interest of each model and suggests possible future developments.
 
Fabricated two-material composite substrate (left) (LTCC with ε = 100 filled with Stycast polymer of ε = 3) and measured return-loss behaviour with a probe-fed patch antenna (Figure 5)
Optimization history for the spectral filter design with the double-layer FSS geometry (Figure 8). Design parameters: n = 2, η = 60%, ε_solid = 4.84 (ZnS) and ε_initial = 1.1
Transmission response for the initial (ε = 1.1) vs. the optimized material distribution of the TPV filter employing the double-layer FSS geometry (Figure 8). Design parameters: n = 2, η = 60%, ε_solid = 4.84 (ZnS) and ε_initial = 1.1
In this paper a novel design procedure based on the integration of full wave Finite Element Analysis (FEA) and a topology design method employing Sequential Linear Programming (SLP) is introduced. The employed design method is the Solid Isotropic Material with Penalization (SIMP) technique formulated as a general non-linear optimization problem. SLP is used to solve the optimization problem with the sensitivity analysis based on the adjoint variable method for complex variables. A key aspect of the proposed design method is the integration of optimization tools with a fast simulator based on the finite element-boundary integral (FE-BI) method. The capability of the design method is demonstrated by two design examples. First, we developed a metamaterial substrate with arbitrary material composition and subject to a pre-specified antenna bandwidth enhancement. The design is verified and its performance is evaluated via measurements and simulation. As a second example, the material distribution for a Thermo-Photovoltaic (TPV) filter subject to pre-specified bandwidth and compactness criteria is designed. Results show that the proposed design method is capable of designing full three-dimensional volumetric material textures and printed conductor topologies for filters and patch antennas with enhanced performance.
 
This article presents a review of the state of the art in computational stochastic structural dynamics using finite elements. Linear systems under random excitations, quasi-linear systems, nonlinear systems excited by stochastic disturbances, and spatially stochastic nonlinear systems are included. For spatially stochastic linear and nonlinear systems under stochastic and deterministic excitations, alternative approaches to the so-called stochastic finite element method, proposed by the author, are introduced. The introduced approaches are free from the limitations of: (a) normal mode analysis for linear and quasi-linear systems, (b) normal mode analysis with statistical linearization for nonlinear systems, and (c) perturbation approximation techniques for spatially stochastic systems. The introduced approaches for nonlinear systems are applicable to systems undergoing large deformations with finite strains and finite rotations. Selected computed results are included to demonstrate their simplicity and efficiency.
 
General criteria of instability in time-independent elastic-plastic solids and the related computational approaches are reviewed. The distinction between instability of equilibrium and instability of a deformation process is discussed with reference to instabilities of dynamic, geometric or material type. Comparison is made between the bifurcation, energy and initial imperfection approaches. The effect of incremental nonlinearity of the constitutive law, associated with formation of a yield-surface vertex, on instability predictions is examined. A survey of the methods of post-critical analysis is presented.
 
The numerical treatment of large sparse linear systems of algebraic equations, derived mainly from the discretization of partial differential equations, by preconditioning techniques, and the production of related software, has attracted the attention of many researchers. In this paper we give an overview of explicit approximate inverse matrix techniques for computing explicitly various families of approximate inverses, based on Cholesky- and LU-type approximate factorization procedures, for solving sparse linear systems derived from the finite difference, finite element and domain decomposition discretization of elliptic and parabolic partial differential equations. Composite iterative schemes, using inner-outer schemes in conjunction with Picard and Newton methods, based on approximate inverse matrix techniques for solving non-linear boundary value problems, are presented. Additionally, isomorphic iterative methods are introduced for the efficient solution of non-linear systems. Explicit preconditioned conjugate gradient-type schemes in conjunction with approximate inverse matrix techniques are presented for the efficient solution of linear and non-linear systems of algebraic equations. Theoretical estimates of the rate of convergence and computational complexity of the explicit preconditioned conjugate gradient method are also presented. Applications of the proposed methods to characteristic linear and non-linear problems are discussed and numerical results are given.
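The explicit approximate inverse idea can be illustrated schematically: form an approximate inverse M explicitly (here from a truncated Neumann series, standing in for the Cholesky/LU-type factored inverses of the paper) and use it inside a preconditioned conjugate gradient loop, so that the preconditioning step is just an explicit matrix-vector product. The 1D Poisson matrix and all sizes are illustrative assumptions.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def pcg(A, M, b, tol=1e-10, maxit=100):
    """Preconditioned CG: z = M r is an explicit matrix-vector product
    with the precomputed approximate inverse M."""
    n = len(b)
    x = [0.0] * n
    r = b[:]
    z = matvec(M, r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(maxit):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = matvec(M, r)
        rz, rz_old = dot(r, z), rz
        beta = rz / rz_old
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# 1D Poisson (tridiagonal, SPD) test matrix.
n = 8
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
# Explicit approximate inverse from a truncated Neumann series:
# with D = diag(A) and N = I - D^{-1} A,  M = (I + N + N^2) D^{-1}.
I = [[float(i == j) for j in range(n)] for i in range(n)]
Dinv = [[1.0 / A[i][i] if i == j else 0.0 for j in range(n)] for i in range(n)]
DA = matmul(Dinv, A)
N = [[I[i][j] - DA[i][j] for j in range(n)] for i in range(n)]
N2 = matmul(N, N)
S = [[I[i][j] + N[i][j] + N2[i][j] for j in range(n)] for i in range(n)]
M = matmul(S, Dinv)

b = [1.0] * n
x = pcg(A, M, b)
res = [bi - ai for bi, ai in zip(b, matvec(A, x))]
```

The point of the explicit form is that, unlike a triangular solve, applying M parallelizes trivially, which is what makes these techniques attractive for the large-scale settings the paper targets.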
 
Short fiber reinforced composites have gained increasing technological importance due to a versatility that lends them to a wide range of applications. These composites are useful because they combine a reinforcing phase, in which high tensile strengths can be reached, with a matrix that holds the reinforcement and transfers the applied stress to it. It is well known that such materials can have excellent mechanical, thermal and electrical properties, which makes them widely used in industry. During the manufacturing process, fibers adopt a preferential orientation that can vary significantly across the geometry. Once the suspension is cooled or cured to make a solid composite, the fiber orientation becomes a key feature of the final product, since it affects the elastic modulus, the thermal and electrical conductivities, and the strength of the composite material. In this work we analyze the state of the art and the recent developments in the numerical modeling of short fiber suspensions involved in industrial flows.
 
This article presents an up-to-date review and discussion of approaches used to describe the mechanical behavior of artery walls. The physiology of artery walls and its relation to the models is discussed. The models presented range from the simplest 0D and 1D ones to the most sophisticated approaches, based on the theory of 3D nonlinear elasticity, on which the emphasis is put. An alternative approach, which consists in a simple delinearization of the Koiter shell equations, is also presented.
 
Design knowledge incorporates information about designed objects and their attributes, as well as about other aspects of the design process. Such information about designed artifacts and any associated design concepts can be represented in several different forms or languages. This paper describes the languages of design, emphasizing particularly the representation of designed objects. Inasmuch as some of these design languages derive from computational styles, and since all are used to develop computational models of design, these languages form a useful backdrop for understanding and furthering the role of computers in engineering design.
 
Bodies with exotic properties display material substructural complexity from the nano- to the meso-level. Various models have been built up in condensed matter physics to represent the behavior of special classes of complex bodies. In general, they fall within the setting of an abstract model-building framework which is not only a unifying structure for existing models but, above all, a tool to construct special models of new exotic materials. We describe here the basic elements of this framework, that of multifield theories, trying to furnish a clear idea of the subtle theoretical and computational problems arising within it. We present the matter in a form that allows one to construct appropriate algorithms in special cases of physical interest. We also discuss issues related to the construction of compatible and mixed finite elements in the linearized setting, the extension of extended finite element methods to analyze the influence of material substructures on crack growth, and the evolution of sharp discontinuity surfaces in complex bodies. Concrete examples of complex bodies are also presented in some detail.
 
This work is a sequel to a previous article by the author: “Theories and Finite Elements for Multilayered, Anisotropic, Composite Plates and Shells”, Archives of Computational Methods in Engineering, Vol. 9, No. 2, 2002, in which a literature overview of available modelings for layered flat and curved structures was given. The two following topics, which were not addressed in the previous work, are detailed in this review: 1. the derivation of governing equations and finite element matrices for some of the most relevant plate/shell theories; 2. an extensive numerical evaluation of available results, along with assessment and benchmarking. The article content has been divided into four parts. An introduction to the content of this review is given in Part I. A unified description of several modelings based on displacement and transverse stress assumptions is given in Part II. The order of the expansion in the thickness direction has been taken as a free parameter. Two-dimensional modelings which include zig-zag effects and interlaminar continuity, as well as Layer-Wise (LW) and Equivalent Single Layer (ESL) descriptions, have been addressed. Part III quotes governing equations and FE matrices which have been written in a unified manner by making extensive use of array notations. Governing differential equations of doubly curved shells and finite element matrices of multilayered plates are considered. The Principle of Virtual Displacements (PVD) and Reissner's Mixed Variational Theorem (RMVT) have been employed as statements to derive variationally consistent conditions, e.g. C_z^0 requirements, on the assumed displacement and transverse stress fields. The number of nodes in the element has been taken as a free parameter. As a result, both the differential governing equations and the finite element matrices have been written in terms of a few 3×3 fundamental nuclei which have only 9 terms each. A vast and detailed numerical investigation is given in Part IV.
The performance of available theories and finite elements has been compared by building about 40 tables and 16 figures. More than fifty available theories and finite elements have been compared to those developed in the framework of the unified notation discussed in Parts II and III. Closed-form solutions and finite element results related to the bending and vibration of plates and shells have been addressed. Zig-zag effects and interlaminar continuity have been evaluated for a number of problems. Different possibilities to obtain transverse normal stresses have been compared. LW results have been systematically compared to ESL ones. Detailed evaluations of transverse normal stress effects are given. An exhaustive assessment has been conducted in Tables 28–39, which compare more than 40 models to evaluate the local and global response of layered structures. A final Meyer-Piening problem is used to assess two-dimensional modelings vs. the description of local effects.
 
Various sorts of asymptotic-numerical methods have been proposed in the literature: the reduced basis technique, direct computation of series, or the use of Padé approximants. The efficiency of the method may also depend on the chosen path parameter, on the truncation order and on alternative parameters. In this paper, we compare the three classes of asymptotic-numerical methods, with a view to defining the “best” numerical strategy.
 
The finite element method can be viewed as a machine that automates the discretization of differential equations, taking as input a variational problem, a finite element and a mesh, and producing as output a system of discrete equations. However, the generality of the framework provided by the finite element method is seldom reflected in implementations (realizations), which are often specialized and can handle only a small set of variational problems and finite elements (but are typically parametrized over the choice of mesh). This paper reviews ongoing research in the direction of a complete automation of the finite element method. In particular, this work discusses algorithms for the efficient and automatic computation of a system of discrete equations from a given variational problem, finite element and mesh. It is demonstrated that by automatically generating and compiling efficient low-level code, it is possible to parametrize a finite element code over variational problem and finite element in addition to the mesh.
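A minimal caricature of this "machine" view, assuming nothing about the paper's actual code generator: the assembly loop below is generic, and the user supplies only the variational (bilinear) form, here evaluated on linear (P1) elements of a 1D mesh.

```python
def assemble(form, nodes):
    """Generic assembly loop: visit each element, evaluate the user-supplied
    bilinear form on the local basis, scatter into the global matrix."""
    n = len(nodes)
    K = [[0.0] * n for _ in range(n)]
    for e in range(n - 1):
        h = nodes[e + 1] - nodes[e]
        grads = [-1.0 / h, 1.0 / h]   # derivatives of the two P1 hat functions
        for i in range(2):
            for j in range(2):
                K[e + i][e + j] += form(grads[i], grads[j], h)
    return K

# The 'input' to the machine: a(u, v) = integral of u' v'  (Poisson).
poisson = lambda du, dv, h: du * dv * h
K = assemble(poisson, [0.0, 0.25, 0.5, 0.75, 1.0])
```

For this uniform mesh of size h = 0.25 the loop reproduces the familiar tridiagonal stiffness pattern; switching to another equation means changing only the one-line form, which is the essence of the automation the paper pursues (there, with generated and compiled low-level code rather than an interpreted loop).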
 
Depth averaged models are widely used in engineering practice in order to model environmental flows in river and coastal regions, as well as shallow flows in hydraulic structures. This paper deals with depth averaged turbulence modelling. The most important and widely used depth averaged turbulence models are reviewed and discussed, and a depth averaged algebraic stress model is presented. A finite volume model for solving the depth averaged shallow water equations coupled with several turbulence models is described, with special attention to the modelling of wet-dry fronts. In order to assess the performance of the model, several flows are modelled and the numerical results are compared with experimental data.
 
The problem addressed in the paper is the coupling between heat radiation and convection in participating media. While convection is modelled by finite volumes, heat radiation is solved using the boundary element method (BEM). The latter is a technique for solving the integral equations of radiation using weighted residuals. BEM can be seen as an alternative approach to the well-established zoning method, or to FEM, its higher-order generalization. When compared with these approaches, BEM offers substantial computing time economy due to the reduction of the integration dimension and the lack of volume integrals. Coupling the convection solution with heat radiation is accomplished in an iterative way. First the initial temperature field of the medium is computed by the convection solver for given wall temperatures, assuming no interaction with radiation. Using this temperature field, the radiative heat fluxes and sources are computed and their values substituted into the corrected energy convection equation. The procedure is repeated until the required accuracy is reached. Mild underrelaxation of the heat sources improves the convergence. The BEM radiation procedure requires numerical integration over all discrete surface elements, and ray tracing of the Gaussian rays connecting the collocation point and the nodes of the Gaussian quadrature. The latter is the most time-consuming operation. Numerical tests have shown that standard ray tracing on convection meshes leads to prohibitively long computing times. To accelerate the procedure, the ray tracing is performed on a much coarser structured grid. This is an acceptable approximation, as the heat radiation volumetric grid does not need to capture small-scale phenomena, in contrast with the convection grid, where the resolution of the resulting fields depends strongly on the mesh density. This assumption accelerates the ray tracing by at least two orders of magnitude.
The transition between the radiative and convection nodes is accomplished using the radial basis function network concept. Several industrial problems have been solved using this model. The commercial CFD code Fluent has been used to solve the convection equations. The interaction between the in-house radiative code BERTA and Fluent was maintained by modifying the source term of the energy balance equation within the latter. The coupling was programmed at the level of a script. The results have been compared with some available benchmark solutions and with the radiative transfer solvers (Discrete Ordinates and Discrete Transfer) installed in the CFD code. Very good agreement has been observed. The ray tracing concept has been extended to cylindrical coordinate systems to solve axisymmetric problems. The technique has also been used to model the interaction of radiation and conduction in semitransparent, non-gray media. Numerical results for both benchmark solutions and industrial problems are shown in the paper.
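The iterative coupling described above reduces, in caricature, to a fixed-point loop with underrelaxation of the radiative source. The scalar stand-in "solvers" below are invented for the sketch; they have nothing to do with Fluent or BERTA, and the coefficients are arbitrary.

```python
def couple(convection, radiation, omega=0.5, tol=1e-8, maxit=200):
    """Fixed-point coupling: convection gives T from the radiative source q,
    radiation gives q from T; q is mildly under-relaxed for convergence."""
    q = 0.0                       # start with no radiative interaction
    T = convection(q)
    for _ in range(maxit):
        q = (1.0 - omega) * q + omega * radiation(T)
        T_new = convection(q)
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

# Scalar stand-ins (assumptions for the sketch, not the paper's solvers):
convection = lambda q: 300.0 + 0.1 * q            # gas T from source q
radiation = lambda T: 1e-9 * (400.0**4 - T**4)    # net source from hot walls
T = couple(convection, radiation)
```

The under-relaxation factor omega damps the strongly nonlinear (fourth-power) feedback of temperature on the radiative source, which is exactly why the paper reports that mild underrelaxation improves convergence.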
 
Inclined cracks that take place in reinforced concrete elements due to tangential internal forces, such as shear and torsion, produce a non-isotropic response of the structure in the post-cracked regime and up to failure, also known as crack-induced anisotropy. The result is that all six internal forces acting in a cross-section are generally coupled. A generalized beam formulation for the nonlinear coupled analysis of non-isotropic elements under six internal forces is presented. The theory is based on a cross-section analysis approach with both warping and distortion capabilities, which were proved necessary to correctly handle the problem with frame element analysis. In this paper, the non-linear mechanical aspects of cracked concrete structures under tangential forces are summarized. A state-of-the-art review of beam formulations for the non-linear analysis of concrete structures is presented, and the approaches followed to account for the interaction of shear and torsion forces are discussed. After presenting the proposed formulation, its capabilities are shown by means of an application example of a cross-section under coupled bending, shear and torsion, and finally the main conclusions are drawn.
 
The edge-based Galerkin finite element formulation is used as the basic building block for the construction of multidimensional generalizations, on unstructured grids, of several higher-order upwind biased procedures originally designed for the solution of the 1D compressible Euler system of equations. The use of a central-type discretization for the viscous flux terms enables the simulation of multidimensional flows governed by the laminar compressible Navier-Stokes equations. Numerical issues related to the development and implementation of multidimensional solution algorithms are considered. A number of inviscid and viscous flow simulations, in different flow regimes, are analyzed to enable the reader to assess the performance of the surveyed formulations.
 
Higher-order upwind biased procedures for solving the equations of 1-D compressible unsteady flow are surveyed. Approaches based upon the use of either switched artificial viscosity, flux limiting or slope limiting are considered and described within a unified framework. The approaches are implemented within the context of an edge-based finite element solution algorithm, which represents the basis for a future multi-dimensional extension on general grids. The performance of the different approaches is illustrated by application to the solution of the shock tube problem in different flow regimes.
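As a generic illustration of the slope-limiting idea (using the minmod limiter, one common choice rather than the specific limiters surveyed here), the sketch below computes limited cell slopes for a piecewise-linear reconstruction; the slope is zeroed at local extrema, which is what prevents the scheme from creating new oscillations near shocks.

```python
def minmod(a, b):
    """Return the argument of smaller magnitude if signs agree, else 0."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Limited slopes for a MUSCL-type piecewise-linear reconstruction:
    each cell compares its backward and forward differences."""
    s = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        s[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    return s

s = limited_slopes([0.0, 0.2, 1.0, 1.8, 2.0])   # a smeared discontinuity
```

On the monotone data above the limiter simply picks the shallower of the two one-sided slopes; on data with a local extremum, e.g. [0, 1, 0], the middle slope is set to zero.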
 
First-order upwind biased formulations for simulating 1-D compressible flows are presented within a unified framework. A detailed study and comparison of the different formulations is important, as high-resolution extensions are known to inherit both the good and bad features of the first-order formulations used for their construction. The most popular flux difference splitting, flux vector splitting and some recently proposed hybrid splitting methodologies are considered. A finite element solution approach is adopted, as this provides a framework for the multidimensional extension of the solvers. Representative one-dimensional test cases are considered in order to provide evidence of the effectiveness and performance of the formulations. The results that are presented, together with the corresponding exact solutions, provide a set of benchmark test cases for comparison purposes.
 
This work aims to present a valuable and predictive model for the description of polymeric soft foams, which are widely exploited as packaging, padding, or cushioning materials. Belonging to the class of cellular solids, polymeric foams and synthetic sponges can be regarded as permeable or impermeable fluid-saturated porous materials depending on whether their microstructure is open-celled or closed-celled. The pore space formed by the cellular polymer skeleton is filled with a pore fluid (e.g., liquid or gas) which can significantly affect the transient compressive and impact response, making air-filled foamed plastics in particular a lightweight and shock-absorbing material par excellence. In view of their practical application, the intention is to model the macroscopic bulk response of high-porosity foams without getting lost in the description of the deformation of individual cells of the microstructure. The goal is rather to phenomenologically capture the coupled dissipative phenomena stemming from the viscoelasticity of the polymer matrix and the usually underestimated or even neglected influence of the viscous pore-fluid motion, which essentially constitute the favorable characteristics of these materials. Therefore, a general framework is derived by merging the advances in porous media theories and the state of the art in finite solid viscoelasticity. Against a thermodynamically consistent background, a constitutive setting is presented where, based on the internal variable concept, an extended Ogden-type viscoelasticity law is embedded into the macroscopic Theory of Porous Media (TPM). By regarding foamed polymers as immiscible binary solid-fluid aggregates, essential nonlinearities are modularly included in the formulation.
In detail, the developed biphasic continuum approach accounts for the relevant physical properties emanating from the cellular microstructure, such as the densification behavior under finite compression, the trapped or potentially moving and interacting viscous pore fluid (compressible or incompressible), and the superimposed intrinsic viscoelasticity of the polymeric skeleton itself. The numerical treatment of this strongly coupled solid-fluid problem is carried out through the Finite Element Method (FEM), where the governing equations are formulated in the primary variables, solid displacement and pore-fluid pressure. The two-field variational problem is then efficiently solved by recourse to stable mixed finite element formulations and an appropriate implicit time-stepping scheme, in combination with a two-stage solution strategy for the discrete nonlinear equation system. Finally, in order to demonstrate its capabilities, the presented model is calibrated to the energy-absorbing response of an open-celled polyurethane (PUR) foam, as it exhibits all conceivable nonlinearities under finite viscoelastic deformations. The FE simulations of a real impact experiment on fragmented PUR foam blocks and of the fast compressive loading of a whole car seat cushion excellently mimic the flow-dependent size effect and the pneumatic damping behavior of the material. Moreover, the overall performance of the numerical implementation is shown by convergence studies for different mixed FE discretizations. In summary, the presented details convincingly demonstrate that for a quantitative simulation, particularly of permeable soft foams under compression, the consideration of an independent pore-fluid phase is indispensable. It moreover becomes apparent that only a multiphasic continuum approach which accounts for the essential nonlinearities and rate-dependent solid and fluid properties is appropriate for the realistic description of viscoelastic low-density foams.
 
Blanking and machining are processes commonly used to obtain the final shape of many mechanical parts. Although a considerable number of experimental results exist, certain essential aspects of cutting are still not well understood. This comes from the complexity of the thermomechanical phenomena induced by the material separation, as well as from the complexity of the dynamical behaviour of the whole workpiece/tool/machine system. Numerical simulations make it possible to go further in the comprehension and prediction of machining and cutting processes. In this work the state of the art is analysed and we present the most recent developments in the contribution of computational mechanics to the numerical simulation of machining and blanking. This contribution is, on one hand, developed at a very global scale, called the macroscopic scale. At this scale a representation of the deformations of the piece is necessary, for example when thin walls are present, and when predictions of the geometrical state of the final surface and/or the stability of the process are expected. On the other hand, the contribution is also located at a more local scale: the mesoscopic scale. At this scale, the aim is the determination of the thermomechanical loads applied to the tool, the simulation of chip formation, or the description of residual states (mechanical, chemical) inside the workpiece after machining.
 
A review of the state of the art in computational modeling and analysis of the mechanical behavior of living bone is given. Particular attention is paid to algorithms for the simulation of stress- or strain-induced remodeling processes. A special remodeling algorithm is presented which allows the simulation of internal bone remodeling, taking into account not only the adaptation of the spatial distribution of the effective mass density, but also the adaptation of the orientation of the material axes and of the orientation-dependent stiffness parameters. Such remodeling algorithms require a sound formulation of the constitutive relations of bony material. For this purpose some micro-macro mechanical descriptions of bone in its different microstructural configurations are discussed. In conjunction with the above-mentioned remodeling algorithm, a new unified material model is derived for describing the linear elastic, orthotropic behavior of bone in the full range of microstructures of cancellous and cortical bone. The application of the novel remodeling algorithm is demonstrated in an example.
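The flavor of such a remodeling algorithm can be conveyed by a minimal density-update rule of the classical strain-energy-driven type. This scalar sketch is an assumption for illustration only; the paper's unified model also adapts the material axes and stiffness parameters, which a scalar rule cannot capture.

```python
def remodel(rho0, stimulus, ref=1.0, rate=0.5, dt=0.1, steps=200,
            rho_min=0.01, rho_max=1.8):
    """Internal remodeling: density grows where the mechanical stimulus
    exceeds a reference value and resorbs where it falls below,
    clamped to physiological bounds."""
    rho = rho0
    for _ in range(steps):
        rho += dt * rate * (stimulus(rho) - ref)
        rho = min(rho_max, max(rho_min, rho))
    return rho

# Stand-in stimulus: strain energy per unit mass for a fixed energy U,
# so equilibrium is reached where U / rho equals the reference value.
stimulus = lambda rho, U=0.8: U / rho
rho_eq = remodel(0.5, stimulus)
```

In an FE setting the same update would run per element, with the stimulus recomputed from a stress analysis at each remodeling step, so the density field and the load transfer co-evolve.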
 
Evolution of the Bouc-Wen model literature
Methodologies for the analysis of the variation of w̄(x̄)
Structural systems often show nonlinear behavior under the severe excitations generated by natural hazards. In that condition, the restoring force becomes highly nonlinear, showing significant hysteresis. The hereditary nature of this nonlinear restoring force indicates that the force cannot be described as a function of the instantaneous displacement and velocity. Accordingly, many hysteretic restoring force models were developed to capture this time-dependent nature using a set of differential equations. This survey contains a review of past and recent developments and implementations of the Bouc-Wen model, which is used extensively in modeling the hysteresis phenomenon in dynamically excited nonlinear structures.
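The core of the Bouc-Wen model is a single differential equation for the hysteretic variable z. The minimal explicit-Euler sketch below integrates it for a sinusoidal displacement history; the parameter values A = 1, β = γ = 0.5, n = 1 are illustrative, chosen so the ultimate value of |z| is (A/(β+γ))^(1/n) = 1.

```python
import math

def bouc_wen(x_of_t, T=20.0, dt=1e-3, A=1.0, beta=0.5, gamma=0.5, n=1):
    """Explicit-Euler integration of the Bouc-Wen hysteretic variable:
       dz = A*dx - beta*|dx|*|z|**(n-1)*z - gamma*dx*|z|**n."""
    z, t = 0.0, 0.0
    x_prev = x_of_t(0.0)
    history = []
    while t < T:
        t += dt
        x = x_of_t(t)
        dx = x - x_prev
        z += (A * dx - beta * abs(dx) * abs(z) ** (n - 1) * z
              - gamma * dx * abs(z) ** n)
        history.append((x, z))
        x_prev = x
    return history

# Sinusoidal displacement: z saturates, tracing a hysteresis loop in (x, z).
loop = bouc_wen(lambda t: 2.0 * math.sin(t))
```

Because z depends on the entire displacement history through the differential equation, the same x yields different z on loading and unloading, which is exactly the hereditary behavior the survey discusses; a restoring force is then typically built as a weighted sum of an elastic term in x and a hysteretic term in z.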
 
Boundary element methodologies for the determination of the response of inelastic two- and three-dimensional solids and structures, as well as beams and flexural plates, to dynamic loads are briefly presented and critically discussed. Elastoplastic and viscoplastic material behaviour in the framework of small deformation theories is considered. These methodologies can be separated into four main categories: those which employ the elastodynamic fundamental solution in their formulation, those which employ the elastostatic fundamental solution, those which combine boundary and finite elements to create an efficient hybrid scheme, and those representing special boundary element techniques. The first category, in addition to the boundary discretization, requires a discretization of those parts of the interior domain expected to become inelastic, while the second requires a discretization of the whole interior domain, unless the inertial domain integrals are transformed by the dual reciprocity technique into boundary ones, in which case only the inelastic parts of the domain have to be discretized. The third category employs finite elements for one part of the structure and boundary elements for the remaining part, in an effort to combine the advantages of both methods. Finally, the fourth category includes special boundary element techniques for inelastic beams and plates and symmetric boundary element formulations. The discretized equations of motion in all the above methodologies are solved by efficient step-by-step time integration algorithms. Numerical examples involving two- and three-dimensional solids and structures and flexural plates are presented to illustrate all these methodologies and demonstrate their advantages. Finally, directions for future research in the area are suggested.
 