Probabilistic Engineering Mechanics

Published by Elsevier BV

Print ISSN: 0266-8920

Articles


Micromechanics as a basis of random elastic continuum approximations. Probabilistic Engineering Mechanics, 8(2): 107-114

December 1993

The problem of determining stochastic constitutive laws for input to continuum-type boundary value problems is analyzed from the standpoint of the micromechanics of polycrystals and matrix-inclusion composites. Passage to a sought-for random continuum is based on a scale-dependent window playing the role of a Representative Volume Element (RVE). It turns out that an elastic microstructure with piecewise continuous realizations of random tensor fields of stiffness cannot be uniquely approximated by a random field of stiffness with continuous realizations. Rather, two random continuum fields may be introduced to bound the material response from above and from below. As the size of the RVE relative to the crystal size increases to infinity, both fields converge to a deterministic continuum with progressively decreasing strength of fluctuations. Since the RVE corresponds to a single finite element cell not infinitely larger than the crystal size, two random fields are to be used to bound the solution of a given boundary value problem at a given scale of resolution. The method applies to a number of other elastic microstructures and provides the basis for stochastic finite difference and finite element methods. The latter point is illustrated by an example of a stochastic boundary value problem of a heterogeneous membrane.
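
A minimal numerical sketch of the scale effect described above, under assumed values: for a 1-D two-phase bar, window averages of stiffness under uniform-strain (Dirichlet-type) and uniform-stress (Neumann-type) conditions give the Voigt and Reuss apparent moduli, and the gap between these bounds and their fluctuations shrink as the window grows. The phase contrast (1000:10) and 35% volume fraction are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cells = 2**14
# Two-phase microstructure: stiff inclusions (C=1000) in a soft matrix (C=10),
# 35% volume fraction -- assumed values for illustration only.
C = np.where(rng.random(n_cells) < 0.35, 1000.0, 10.0)

for window in (4, 16, 64, 256, 1024):
    blocks = C[: n_cells - n_cells % window].reshape(-1, window)
    voigt = blocks.mean(axis=1)                # uniform-strain (Dirichlet-type) bound
    reuss = 1.0 / (1.0 / blocks).mean(axis=1)  # uniform-stress (Neumann-type) bound
    print(f"window {window:5d}: Voigt mean {voigt.mean():7.1f} (sd {voigt.std():6.1f}) | "
          f"Reuss mean {reuss.mean():6.2f} (sd {reuss.std():5.2f})")
```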

Fig. 1. A macroscopic body of size L_macro with a mesoscale window of size L, in which a microstructure of size d is shown. d-dependence of apparent moduli of a disk-matrix composite at contrasts C(i)/C(m) = 10 (a) and C(i)/C(m) = 1000 (b), at volume fraction 35%, under boundary conditions (2.11)-(2.16).
Fig. 2. Anti-plane elastic responses of a matrix-inclusion composite, with nominal 35% volume fraction of inclusions, at decreasing contrasts: (a) C(i)/C(m) = 1, (b) C(i)/C(m) = 0.2, (c) C(i)/C(m) = 0.05, (d) C(i)/C(m) = 0.02. For (b)-(d), the first figure shows the response under Dirichlet boundary conditions, while the second shows the response under Neumann boundary conditions with σ0 equal to the volume average of the stress computed in the Dirichlet problem.
Fig. 3. Possible loading under the orthogonal-mixed boundary condition (2.13).
Fig. 4. Six tests (#1, #2, ..., #6, from left to right) to determine the six unknowns of the in-plane stiffness tensor C_ijkl.
Fig. 5. Mesoscale bounds on tr(C_eff) for a random disk-matrix composite at contrasts 10

Ostoja-Starzewski, M.: Material spatial randomness: from statistical to representative volume element. Probab. Eng. Mech. 21(2), 112-132

April 2006

The material spatial randomness forces one to re-examine various basic concepts of continuum solid mechanics. In this paper we focus on the Representative Volume Element (RVE), which is commonly taken for granted in most of deterministic as well as stochastic solid mechanics, although in the latter case it is called the Statistical Volume Element (SVE). The key issue is the scale over which homogenization is carried out: the mesoscale, separating the microscale (level of microheterogeneities) from the macroscale (level of the RVE). As the mesoscale grows, the SVE tends to become the RVE. This occurs in terms of two hierarchies of bounds stemming from Dirichlet and Neumann boundary value problems on the mesoscale, respectively. Since there is generally no periodicity in real random media, the RVE can only be approached approximately on finite scales. We review results on this subject in the settings of linear elasticity, finite elasticity, plasticity, viscoelasticity, thermoelasticity, and permeability.

A univariate dimension-reduction method for multi-dimensional integration in stochastic mechanics (vol 19, pg 393, 2004)

October 2004

This paper presents a new univariate dimension-reduction method for calculating statistical moments of the response of mechanical systems subject to uncertainties in loads, material properties, and geometry. The method involves an additive decomposition of a multi-dimensional response function into multiple one-dimensional functions, an approximation of response moments by moments of single random variables, and a moment-based quadrature rule for numerical integration. The resultant moment equations entail evaluating N one-dimensional integrals, which is substantially simpler and more efficient than performing a single N-dimensional integration. The proposed method requires neither the calculation of partial derivatives of the response, as in commonly used Taylor expansion/perturbation methods, nor the inversion of random matrices, as in Neumann expansion methods. Nine numerical examples involving elementary mathematical functions and solid-mechanics problems illustrate the proposed method. Results indicate that the univariate dimension-reduction method provides more accurate estimates of statistical moments or multidimensional integrals than first- and second-order Taylor expansion methods, the second-order polynomial chaos expansion method, the second-order Neumann expansion method, statistically equivalent solutions, quasi-Monte Carlo simulation, and the point estimate method. While the accuracy of the univariate dimension-reduction method is comparable to that of the fourth-order Neumann expansion, a comparison of CPU time suggests that the former is computationally far more efficient than the latter.
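
The additive decomposition lends itself to a compact sketch. Below, assuming independent standard normal inputs, each one-dimensional expectation is evaluated by Gauss-Hermite quadrature, and the variance follows from the independence of the one-dimensional terms; the test function and mean vector are arbitrary stand-ins, and the result is the moments of the additive approximation, not the paper's full moment-based quadrature rule.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss   # probabilists' Hermite rule

def udr_moments(y, mu, n_quad=7):
    """Mean and variance of the univariate (additive) approximation of y(X),
    with X_i ~ independent N(mu_i, 1): each term depends on one X_i only."""
    N = len(mu)
    x, w = hermegauss(n_quad)
    w = w / np.sqrt(2 * np.pi)          # so that E[f(Z)] = w @ f(x), Z ~ N(0,1)
    y0 = y(mu)
    mean, var = -(N - 1) * y0, 0.0
    for i in range(N):
        pts = np.tile(mu, (n_quad, 1))
        pts[:, i] = mu[i] + x           # vary only the i-th coordinate
        yi = np.array([y(p) for p in pts])
        m1 = w @ yi
        mean += m1
        var += w @ yi**2 - m1**2        # independent terms: variances add
    return mean, var

# Assumed test function y(x) = x0*x1 + x2^2 with mean vector (1, 2, 0):
m, v = udr_moments(lambda x: x[0]*x[1] + x[2]**2, np.array([1.0, 2.0, 0.0]))
print(m, v)   # mean is exact here (3.0); the variance is the additive approximation
```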

Simulation of earthquake records using time-varying ARMA (2,1) model

January 2002

In this paper, the time-varying autoregressive moving average (ARMA) process is used as a simple yet efficient method for simulating earthquake ground motions. This model is capable of reproducing the nonstationary amplitude as well as the frequency content of earthquake ground accelerations. The moving time-window technique is used to estimate the time variation of the model parameters from actual earthquake records. The method is applied to synthesize the near-field earthquakes Naghan 1977, Tabas 1978, and Manjil 1990, recorded on dense soils in Iran, as well as the Mexico City 1985 earthquake, recorded on a site with soft soil. It is shown that the selected ARMA(2,1) model and the algorithm used for generating the accelerograms are able to preserve the features of real earthquake records with different frequency content. Using a general guideline, it is found that a reasonable estimate of the moving-window and overlap sizes can be obtained within a few trials; the required sizes generally decrease as the local soil of the site becomes stiffer. The statistical response and Fourier spectra of the simulated accelerograms are compared with those of the actual records, and reasonable agreement is found. The relationship between the ARMA(2,1) model and the continuous Kanai–Tajimi model is also studied. It is shown that the parameters of the nonstationary Kanai–Tajimi model for simulation of artificial earthquakes can be evaluated using the present modeling procedure.
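
A minimal simulation sketch of the time-varying ARMA(2,1) recursion follows, under assumed (not record-fitted) parameter histories; in the paper these histories would be supplied by the moving-window estimation from a real record. The sampling rate, coefficient drifts, and innovation envelope below are all illustrative assumptions.

```python
import numpy as np

def simulate_tv_arma21(phi1, phi2, theta1, sigma_e, rng=None):
    """Simulate a time-varying ARMA(2,1) series; all coefficient arrays share
    one time axis, as would be estimated per moving window from a record."""
    rng = rng or np.random.default_rng()
    n = len(phi1)
    a = np.zeros(n)
    e = rng.standard_normal(n) * sigma_e
    for t in range(2, n):
        a[t] = phi1[t]*a[t-1] + phi2[t]*a[t-2] + e[t] + theta1[t]*e[t-1]
    return a

# Illustrative parameter histories on a 20 s record at 50 Hz (assumed values):
n, dt = 1000, 0.02
t = np.arange(n) * dt
phi1 = 1.6 - 0.2 * t / t[-1]                 # slowly drifting AR coefficient
phi2 = -0.8 * np.ones(n)
theta1 = -0.5 * np.ones(n)
sigma_e = np.exp(-((t - 6.0) / 4.0)**2)      # amplitude nonstationarity (envelope)
acc = simulate_tv_arma21(phi1, phi2, theta1, sigma_e)
```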

Analysis of Bernoulli beams with 3D stochastic heterogeneity

October 2003

The behavior of stochastically heterogeneous beams, composed of isotropic sub-elements of randomly distributed stiffness, is studied. Both cross-sectional and longitudinal heterogeneity are included. Average displacements, reaction forces and their statistical variance are found analytically by a functional perturbation method. The ratio of sub-element to beam characteristic size is not negligible, so the use of an equivalent homogeneous structure with the classical effective material properties is not sufficient. The major aim is to study the relation between various microstructure properties (grain size, shape, modulus, statistical correlation lengths, etc.) and the overall behavior of linear elastic Bernoulli beams. For the statically determinate case, only the cross-sectional 2D microstructure statistics are found to affect the elastic response, so that an equal average displacement can be achieved by an equivalent, non-isotropic homogeneous beam. For the indeterminate case, the average values of macro properties are affected by the 3D morphological features; the proper equivalent homogeneous beam therefore has to include non-local elastic properties. A simple reciprocal relation connecting two separate loading systems is found, relating their external forces and displacement statistical variances. Morphological parameters, such as two-point probability moments, used in the final results are derived analytically, and their physical interpretations are discussed.

Dynamics analysis of distributed parameter system subjected to a moving oscillator with random mass, velocity and acceleration

January 2002

The problem of calculating the response of a distributed parameter system excited by a moving oscillator with random mass, velocity and acceleration is investigated. The system response is a stochastic process even though the system characteristics are assumed to be deterministic. In this paper, the distributed parameter system is modelled as a beam with Bernoulli–Euler-type analytical behaviour. By adopting Galerkin's method, a set of approximate governing equations of motion possessing time-dependent uncertain coefficients and forcing function is obtained. The statistical characteristics of the deflection of the beam are computed by an improved perturbation approach with respect to the mean value. The method, which captures the stochastic structural effects due to the oscillator–beam interaction, is simple and leads to results very close to Monte Carlo simulation over a wide interval of variation of the uncertainties.

Generation of response-spectrum-compatible artificial earthquake accelerograms with random joint time–frequency distributions

April 2012

The sustained dissemination of databases of recorded accelerograms, along with the increasing number of strong-motion networks installed worldwide, has revealed a drawback of current methodologies for simulating artificial earthquakes: the simulated time histories do not manifest the large variability of the seismological parameters, or of the joint time-frequency distribution, observed in natural accelerograms. As a consequence, the dispersion of the output of structural response analysis can be underestimated. In order to take the natural variability of earthquakes into account, a methodology is proposed in this paper for simulating artificial earthquake accelerograms matching mean and mean ± standard deviation response spectra. This dispersion can be determined from attenuation relationships or evaluated from selected accelerograms of a strong-motion database. The procedure requires the definition of an evolutionary response-spectrum-compatible power spectral density function with random parameters. It is shown that the simulated ground motion time histories manifest variability similar to that observed in natural records.

Highlights:
• Current methodologies simulate accelerograms that are too similar to each other.
• The proposed methodology accounts for the natural variability of real earthquakes.
• The accelerograms match mean and mean ± standard deviation response spectra.
• The theory of fully non-stationary Gaussian random processes is exploited.
• An evolutionary power spectral density function with random coefficients is used.
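
A sketch of the simulation step using the classical spectral-representation formula with an evolutionary PSD follows. The Kanai–Tajimi-like spectrum shape, the envelope, and the ranges from which the random parameters are drawn are all assumptions for illustration, not the paper's calibrated, response-spectrum-compatible model; redrawing the parameters per realisation is what produces record-to-record variability.

```python
import numpy as np

def simulate_evolutionary(S, omegas, t, rng=None):
    """Spectral-representation sample of a non-stationary process with
    evolutionary one-sided PSD S(omega, t) (Shinozuka-Deodatis form)."""
    rng = rng or np.random.default_rng()
    dw = omegas[1] - omegas[0]
    phases = rng.uniform(0, 2*np.pi, size=len(omegas))
    x = np.zeros_like(t)
    for wk, ph in zip(omegas, phases):
        x += np.sqrt(2.0 * S(wk, t) * dw) * np.cos(wk * t - ph)
    return x

def make_psd(rng):
    """Kanai-Tajimi-like evolutionary PSD with random parameters (assumed ranges)."""
    wg = rng.uniform(10.0, 20.0)         # random dominant ground frequency [rad/s]
    zg = rng.uniform(0.3, 0.7)           # random ground damping ratio
    S0 = 0.03
    def S(w, t):
        env = (t / 4.0) * np.exp(1.0 - t / 4.0)      # deterministic time modulation
        num = wg**4 + (2.0*zg*wg*w)**2
        den = (wg**2 - w**2)**2 + (2.0*zg*wg*w)**2
        return S0 * num / den * env**2
    return S

rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 2001)
omegas = np.linspace(0.5, 80.0, 160)
acc = simulate_evolutionary(make_psd(rng), omegas, t, rng)   # one artificial record
```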

Modeling of damage accumulation for Duffing-type systems under severe random excitations

January 2004

The focus of this investigation is on the prediction of the fatigue life of aircraft panels subjected to thermal effects and a severe random acoustic excitation. The prototypical equations for this problem, i.e. the single and double well Duffing oscillators subjected to a bandlimited white noise, are first considered. A review of some currently available approaches, i.e. the Rayleigh approximation and the single spectral moment method both with and without Gaussianity correction, strongly suggests that an accurate prediction of the fatigue life for this non-linear system requires a dedicated model. To this end, an approximation of the probability density function of the peaks of the stationary response of the Duffing oscillators is derived. This model is then used in conjunction with either a narrowband assumption or the single spectral moment methodology to yield a prediction of the fatigue life. The application of this approach to simulation data from both single and double well Duffing oscillators, as well as on the experimental response of an unbuckled panel, demonstrates the reliability of this novel approximation.

A method for accurate estimation of the fatigue damage induced by bimodal processes

January 2010

This paper presents a method for calculating the fatigue damage from a stochastic bimodal process in which the high-frequency (HF) and low-frequency (LF) components are narrowband Gaussian processes. Rainflow cycle counting identifies two classes of cycles: small but numerous, and large but few. In existing methods, the small-cycle amplitudes are assumed to be identical to those of the HF cycles, whereas the large-cycle amplitudes are approximated as the sum of the HF and LF amplitudes. The novelty of the present approach lies in the recognition and incorporation of two effects: the reduction of the small-cycle amplitudes caused by the LF process, and the offset between the HF and LF peaks. Parametric studies are conducted over a wide range of parameters. Using time-domain simulation as a benchmark, the present method is found to provide a vast improvement over existing methods, with a root-mean-square error of ∼1%.
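
For context, here is a Monte Carlo sketch of the baseline "simple sum" combination that the paper refines (no small-cycle reduction and no HF/LF peak offset): Rayleigh-distributed narrowband amplitudes, Miner's rule with an S-N curve N = K*S^(-m), and large cycles occurring at the LF rate. The component standard deviations, crossing rates, S-N constants and exposure time are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(2)
sig_hf, sig_lf = 1.0, 2.0        # std devs of the narrowband components (assumed)
nu_hf, nu_lf = 5.0, 0.5          # mean up-crossing rates [Hz] (assumed)
m, K, T = 3.0, 1e12, 3600.0      # S-N exponent/constant, exposure time (assumed)

n_mc = 200_000
A_hf = sig_hf * np.sqrt(-2.0 * np.log(rng.random(n_mc)))   # Rayleigh amplitudes
A_lf = sig_lf * np.sqrt(-2.0 * np.log(rng.random(n_mc)))

d_large = nu_lf * T * np.mean((A_hf + A_lf)**m) / K        # few, large cycles
d_small = (nu_hf - nu_lf) * T * np.mean(A_hf**m) / K       # many, small cycles
print("baseline damage estimate:", d_large + d_small)
```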

Introduction to computational models of damage dynamics under stochastic actions

April 1996

This paper reviews and discusses some basic ingredients necessary for the study of damaged continua with diffused defects like microcracks, pores, dislocations, etc., under stochastic loading histories and, in particular, under sequences of impulses described by Poisson arrival processes. The mechanical model of a continuum with microstructure is adopted: in other words, the state of the continuum is described by the usual displacement field and by an additional field of a second-order non-symmetric tensor which describes the microstructural rearrangement of the material due to the presence of defects. It is shown that the time evolution of this tensor, usually assumed empirically on the basis of experimental results, is governed by a balance equation. The discretization of the problem and integral measures of damage, useful for the numerical solutions, are also discussed.

First passage probability of elasto-plastic systems by importance sampling with adapted process

April 2008

A new importance sampling method is presented for computing the first-passage probability of elasto-plastic systems under white noise excitations. The importance sampling distribution corresponds to shifting the mean of the excitation to an 'adapted' ('predictable') stochastic process whose future is determined from information only up to the present. Choosing the adapted process involves designing an adaptive control force algorithm in a stochastic environment that aims to drive the response to first-passage failure based on information up to the present. Algorithms for single-degree-of-freedom linear and elasto-plastic systems are proposed and their computational efficiency investigated. Numerical results show that the use of an adapted process is particularly useful for nonlinear hysteretic systems, where hysteretic effects undermine the effectiveness of the conventional importance sampling method based on fixed design points.
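
As a point of reference, the sketch below implements the conventional fixed mean-shift importance sampling for the first-passage probability of a linear SDOF under discrete white noise, shifting the excitation mean to the classical design point for end-of-record exceedance; the paper's adapted-process shift would instead react to the running response. The system parameters, barrier level and sample size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
dt, n = 0.01, 1000
wn, zeta = 2*np.pi, 0.05                            # assumed SDOF: 1 Hz, 5% damping
th = np.arange(n) * dt
wd = wn * np.sqrt(1 - zeta**2)
h = np.exp(-zeta*wn*th) * np.sin(wd*th) / wd * dt   # discrete impulse response

a = h[::-1].copy()                  # response at the final time is u @ a
sigma = np.linalg.norm(a)           # its standard deviation under unit white noise
b = 3.0 * sigma                     # barrier at three sigma (assumed)
s = b * a / (a @ a)                 # fixed design point: cheapest path to x(T) = b

n_mc, acc = 2000, 0.0
for _ in range(n_mc):
    u = rng.standard_normal(n) + s                  # sample under the shifted law
    x = np.convolve(u, h)[:n]                       # response time history
    if x.max() >= b:
        acc += np.exp(-(s @ u) + 0.5 * (s @ s))     # likelihood-ratio weight
print("first-passage probability ~", acc / n_mc)
```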

An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis

April 2010

Polynomial chaos (PC) expansions are used in stochastic finite element analysis to represent the random model response by a set of coefficients in a suitable (so-called polynomial chaos) basis. The number of terms to be computed grows dramatically with the size of the input random vector, which makes the computational cost of classical solution schemes (be they intrusive, i.e. of Galerkin type, or non-intrusive) unaffordable when the deterministic finite element model is expensive to evaluate. To address such problems, this paper describes a non-intrusive method that builds a sparse PC expansion. An adaptive regression-based algorithm is proposed for automatically detecting the significant coefficients of the PC expansion. Besides the sparsity of the basis, the experimental design used at each step of the algorithm is systematically complemented in order to ensure the well-posedness of the various regression problems. The accuracy of the PC model is checked using classical tools of statistical learning theory (e.g. leave-one-out cross-validation). As a consequence, a rather small number of PC terms is eventually retained (sparse representation), which may be obtained at a reduced computational cost compared to the classical "full" PC approximation. The convergence of the algorithm is shown on an academic example. The method is then illustrated on two stochastic finite element problems, namely a truss and a frame structure involving 10 and 21 input random variables, respectively.
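
A compact sketch of regression-based sparse PC in the spirit described: greedy forward selection over a multivariate Hermite candidate basis, scored by the leave-one-out error computed from the regression hat matrix. The candidate degree, term budget, and test model below are assumptions, and the paper's actual algorithm also adapts the experimental design, which this sketch does not.

```python
import numpy as np
from itertools import product
from numpy.polynomial.hermite_e import hermeval

def herme_multi(X, alpha):
    """Multivariate probabilists' Hermite polynomial He_alpha, row-wise."""
    out = np.ones(len(X))
    for j, d in enumerate(alpha):
        c = np.zeros(d + 1); c[d] = 1.0
        out *= hermeval(X[:, j], c)
    return out

def sparse_pc(X, y, max_deg=4, max_terms=15):
    """Greedy forward selection of PC terms by leave-one-out (LOO) error."""
    dim = X.shape[1]
    cand = [a for a in product(range(max_deg + 1), repeat=dim) if sum(a) <= max_deg]
    chosen, Phi, best_loo = [], np.empty((len(X), 0)), np.inf
    while len(chosen) < max_terms:
        scores = []
        for a in cand:
            if a in chosen:
                scores.append(np.inf); continue
            P = np.column_stack([Phi, herme_multi(X, a)])
            beta, *_ = np.linalg.lstsq(P, y, rcond=None)
            H = P @ np.linalg.pinv(P.T @ P) @ P.T
            r = (y - P @ beta) / (1.0 - np.clip(np.diag(H), 0.0, 0.999))
            scores.append(np.mean(r**2))              # LOO mean-square error
        j = int(np.argmin(scores))
        if scores[j] >= best_loo:
            break                                     # no candidate improves LOO
        best_loo = scores[j]
        chosen.append(cand[j])
        Phi = np.column_stack([Phi, herme_multi(X, cand[j])])
    return chosen, best_loo

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 3))                     # experimental design (assumed)
y = 1 + X[:, 0] + 0.5 * X[:, 1] * X[:, 2]             # stand-in "model" to surrogate
terms, loo = sparse_pc(X, y)
print(terms, loo)   # should recover the constant, x0, and x1*x2 terms
```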

Probabilistic approach to corrosion risk due to carbonation via an adaptive response surface method

July 2006

A study of a probabilistic approach to the corrosion risk of reinforcements embedded in concrete due to carbonation is presented. The carbonation model is based on a single non-linear diffusion equation for carbon dioxide. A global balance relationship between the carbon dioxide partial pressure and the solid calcium content in the hydrates of concrete is used to express the sink term of the equation and to make the solution easily tractable in a classical finite element analysis. The performance function invoked in the probabilistic approach is the deviation between the carbonation depth, i.e. the output of the carbonation model, and the concrete cover. The Hasofer–Lind reliability index is determined by the Rackwitz–Fiessler algorithm, in which the performance function is replaced by a quadratic response surface in order to reduce computational cost and gain accuracy. An adaptive construction of the numerical experimental design is proposed: points efficiently positioned with respect to the design point are re-used in the next iteration of the experimental design. For explicit performance functions frequently reported in the literature, comparisons with previously developed response surface techniques demonstrate the merit of the proposed technique. A practical application to a concrete girder shows that the reliability index decreases significantly with time.
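
The Rackwitz–Fiessler (HL-RF) iteration itself is compact; a sketch in standard normal space with a numerical gradient follows, applied to an assumed explicit limit-state function. In the paper, g would be the quadratic response surface fitted around the design point rather than an explicit expression.

```python
import numpy as np

def hlrf(g, u0, tol=1e-6, max_iter=50, h=1e-6):
    """Hasofer-Lind reliability index by the Rackwitz-Fiessler (HL-RF)
    iteration in standard normal space, forward-difference gradient."""
    u = np.asarray(u0, float)
    for _ in range(max_iter):
        g0 = g(u)
        grad = np.array([(g(u + h*e) - g0) / h for e in np.eye(len(u))])
        u_new = (grad @ u - g0) / (grad @ grad) * grad   # HL-RF update
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u), u          # beta and the design point

# Assumed explicit limit state g(u) = 3 - u1 - u2 (linear, so beta = 3/sqrt(2)):
beta, u_star = hlrf(lambda u: 3.0 - u[0] - u[1], np.zeros(2))
print(beta)   # ~2.1213
```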

Adaptive polynomial chaos expansions applied to statistics of extremes in nonlinear random vibration

April 1998

This paper presents a new module towards the development of efficient computational stochastic mechanics. Specifically, the possibility of an adaptive polynomial chaos expansion is investigated. Adaptivity in this context refers to retaining, through an iterative procedure, only those terms in a representation of the solution process that are significant to the numerical evaluation of the solution. The technique can be applied to the calculation of statistics of extremes for non-Gaussian processes. The only assumption involved is that these processes be the response of a nonlinear oscillator excited by a general stochastic process. The proposed technique is an extension of a technique developed by the second author for the solution of general nonlinear random vibration problems. Accordingly, the response process is represented using its Karhunen–Loève expansion. This expansion allows for the optimal encapsulation of the information contained in the stochastic process into a set of discrete random variables. The response process is then expanded using the polynomial chaos basis, a complete orthogonal set in the space of second-order random variables. The time-dependent coefficients in this expansion are then computed using a Galerkin projection scheme that minimizes the approximation error incurred by using a finite-dimensional subspace. These coefficients completely characterize the solution process, and the accuracy of the approximation can be assessed by comparing the contributions of successive coefficients. A significant contribution of this paper is the development and implementation of adaptive schemes for the polynomial chaos expansion. These schemes permit the inclusion of only those terms in the expansion that have a significant contribution.

An efficient simulation method for reliability analysis using simple additive rules of probability

January 2005

This paper presents a simulation technique for the reliability analysis of linear dynamical systems. It is based on simple additive rules of probability (in contrast to other probabilistic approaches such as importance sampling). It is shown that the proposed approach is identical to a newly developed approach, Importance Sampling using Elementary Events (ISEE) [Au SK, Beck JL. First excursion probabilities for linear systems by very efficient importance sampling. Probab Eng Mech 2001;16(3):193–208]. A simple formula for the coefficient of variation of the failure probability estimator is also given. A 10-story building model with nonstationary excitation is used to demonstrate the accuracy and efficiency of the proposed method.

Adhesively bonded joints composed of pultruded adherends: Considerations at the upper tail of the material strength statistical distribution

July 2009

The Weibull distribution, used to describe the scaling of material strength, has been verified on a wide range of materials and geometries; however, the quality of the fit tends to deteriorate towards the upper tail. Building on a previously developed probabilistic strength prediction method for adhesively bonded joints composed of pultruded glass fiber-reinforced polymer (GFRP) adherends, where it was verified that a two-parameter Weibull distribution cannot accurately model the upper tail of a material strength distribution, several improved probabilistic distributions were compared in order to enhance the quality of strength predictions. The following distributions were examined: the two-parameter Weibull (as a reference), the m-fold Weibull, a grafted distribution, a Birnbaum–Saunders distribution and a generalized lambda distribution. The generalized lambda distribution turned out to be the best analytical approximation for the strength data, providing a good fit to the experimental data and leading to more accurate joint strength predictions than the original two-parameter Weibull distribution. Proper modeling of the upper tail was found to noticeably increase the quality of the predictions.
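
A minimal sketch of the tail diagnostic that motivates the comparison: fit a two-parameter Weibull to strength data and compare its upper-tail quantiles against the empirical ones. The data below are synthetic stand-ins, and the improved alternatives named above would be benchmarked the same way.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Assumed synthetic "strength" data, not the paper's experimental values:
data = rng.weibull(8.0, 500) * 300.0 + rng.normal(0.0, 5.0, 500)

shape, loc, scale = stats.weibull_min.fit(data, floc=0)   # two-parameter fit
for q in (0.90, 0.95, 0.99):
    print(q, np.quantile(data, q), stats.weibull_min.ppf(q, shape, loc, scale))
```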


Polynomial approximation of aerodynamic coefficients based on the statistical description of the wind incidence

April 2009

In civil engineering applications, aerodynamic coefficients are usually measured in wind tunnels for several wind incidences. The measurement results need to be linearized in order to perform the design of the structure. This paper justifies the use of different linearization techniques for different assessments, such as divergence or buffeting analysis. In the latter context, it is proposed to linearize the aerodynamic coefficient by the least-squares method, using the probability density function of the wind incidence as a weighting function. First, this probability density function is computed for a 2-D wind flow, as a function of the wind intensities and their correlation. Then, a comparison of results from different linearization techniques provides surprising results, indicating that what is usually done in practice should be considered with care.
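
The proposed weighting reduces to weighted normal equations; a sketch follows with an assumed cubic coefficient curve and a zero-mean Gaussian incidence density (both stand-ins, not the paper's wind-tunnel data or its 2-D flow-derived density).

```python
import numpy as np

def linearize(coeff, pdf, alphas):
    """Least-squares linear fit a + b*alpha of an aerodynamic coefficient,
    using the incidence probability density as the weighting function."""
    dx = alphas[1] - alphas[0]
    integ = lambda f: np.sum(f) * dx             # uniform-grid quadrature
    w = pdf(alphas); w = w / integ(w)            # normalise the weight
    A = np.array([[integ(w),          integ(w*alphas)],
                  [integ(w*alphas),   integ(w*alphas**2)]])
    rhs = np.array([integ(w*coeff(alphas)), integ(w*alphas*coeff(alphas))])
    return np.linalg.solve(A, rhs)               # (intercept, slope)

# Assumed cubic coefficient curve and zero-mean Gaussian incidence (sigma = 3 deg):
alphas = np.linspace(-0.3, 0.3, 601)             # incidence grid [rad]
sigma = np.deg2rad(3.0)
pdf = lambda a: np.exp(-a**2 / (2.0 * sigma**2))
print(linearize(lambda a: 1.2 + 4.0*a + 30.0*a**3, pdf, alphas))
# slope -> 4 + 90*sigma**2 for a Gaussian weight, steeper than the tangent at zero
```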

First Passage Problem: A Probabilistic Dynamic Analysis for Hot Aerospace Components

September 1991

Structures subjected to non-white random excitations with uncertain system parameters affected by surrounding environments are studied. Methods are developed to determine the statistics of dynamic responses such as time-varying mean, standard deviation and autocorrelation functions. Moreover, the first-passage problems with deterministic and stationary/nonstationary random barriers are evaluated. Time-varying (joint) mean crossing rate and the probability density function of the first-passage time for various random barriers are derived.
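 
For the stationary Gaussian special case, the mean up-crossing rate (Rice's formula) and the Poisson, i.e. independent-crossings, first-passage estimate each take one line; the paper's time-varying statistics and random barriers generalize this. All numerical values below are assumed.

```python
import numpy as np

sigma_x, sigma_xdot = 1.0, 6.0     # response and response-velocity std devs (assumed)
b, T = 3.0, 30.0                   # deterministic barrier and exposure time (assumed)

# Rice's formula for the mean up-crossing rate of level b:
nu_up = (sigma_xdot / (2.0*np.pi*sigma_x)) * np.exp(-b**2 / (2.0*sigma_x**2))
p_fail = 1.0 - np.exp(-nu_up * T)  # Poisson (independent crossings) approximation
print(nu_up, p_fail)
```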

Decentralized random decrement technique for efficient data aggregation and system identification in wireless smart sensor networks

January 2011

Smart sensors have been recognized as a promising technology with the potential to overcome many of the inherent difficulties and limitations associated with traditional wired structural health monitoring (SHM) systems. The unique features offered by smart sensors, including wireless communication, on-board computation, and cost effectiveness, enable deployment of the dense arrays of sensors needed for monitoring large-scale civil infrastructure. Despite the many advances in smart sensor technologies, power consumption is still considered one of the most important challenges that must be addressed before smart sensors can be more widely adopted in SHM applications. Data communication, the most significant source of power consumption, can be reduced by appropriately selecting data processing schemes and the related network topology. This paper presents a new decentralized data aggregation approach for system identification based on the Random Decrement Technique (RDT). Following a brief overview of the RDT, an output-only system identification approach, a decentralized hierarchical approach is described and shown to be suitable for implementation in the intrinsically distributed computing environment found in wireless smart sensor networks (WSSNs). RDT-based decentralized data aggregation is then implemented on the Imote2 smart sensor platform based on the Illinois Structural Health Monitoring Project (ISHMP) Services Toolsuite. Finally, the efficacy of the RDT method is demonstrated experimentally in terms of the required data communication and the accuracy of the identified dynamic properties.
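
The RDT core is a conditional ensemble average and fits in a few lines: segments starting at level up-crossings of the response are averaged, and the resulting signature approximates the free decay from which modal properties can be identified. In the sketch below the test signal is an assumed white-noise-driven SDOF, and the trigger level and segment length are assumptions.

```python
import numpy as np

def random_decrement(x, trigger, seg_len):
    """Random decrement signature: ensemble average of segments that start
    wherever the response up-crosses the trigger level."""
    idx = np.flatnonzero((x[:-1] < trigger) & (x[1:] >= trigger))
    idx = idx[idx + seg_len < len(x)]
    segs = np.stack([x[i:i + seg_len] for i in idx])
    return segs.mean(axis=0), len(idx)

# Assumed test signal: a lightly damped SDOF driven by white noise.
rng = np.random.default_rng(4)
dt, wn, zeta = 0.01, 2*np.pi*2.0, 0.02
x = np.zeros(50_000); v = 0.0
for k in range(1, len(x)):                   # simple explicit time stepping
    a = -2*zeta*wn*v - wn**2*x[k-1] + rng.standard_normal()*50.0
    v += a*dt
    x[k] = x[k-1] + v*dt

signature, n_triggers = random_decrement(x, trigger=x.std(), seg_len=500)
# 'signature' now resembles the free decay at 2 Hz with ~2% damping.
```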

Response probability structure of a structurally nonlinear fluttering airfoil in turbulent flow

April 2003

The stationary probability structure of the aeroelastic response of a structurally nonlinear fluttering airfoil subject to random turbulent flow is examined numerically. The airfoil is modelled as a two-dimensional flat plate with degrees of freedom in torsion and heave (vertical displacement). The nonlinearity is a hardening cubic stiffness force in the torsional direction. The aerodynamic force and moment are assumed to be linear, thus limiting the analysis to small oscillations; unsteady effects are retained. Furthermore, both parametric and external random coloured excitations are considered. It is found that, depending on the values of the turbulence variance and the nonlinear cubic stiffness coefficient, the pitch marginal probability density function (PDF) exhibits uni-, bi- or double bi-modality; the nature of the bi-modality is not unique. An explanation of this behaviour is provided via an analysis of the joint PDF in pitch and pitch rate, for which both the deterministic and random responses are examined. More generally, it is found that the random excitation effectively 'decouples' the nonlinear responses, such that the pitch, pitch rate, heave and heave rate marginal PDFs transition from uni- to bi-modality at different airspeeds. It is argued that a fundamental cause of the observed behaviour is the synergy between the nonlinearity and the random external excitation.

Non-Gaussian simulation using Hermite polynomial expansion: Convergences and algorithms

July 2002

Mathematical justifications are given for a Monte Carlo simulation technique based on memoryless transformations of Gaussian processes. Different types of convergences are given for the approaching sequence. Moreover an original numerical method is proposed in order to solve the functional equation yielding the underlying Gaussian process autocorrelation function.
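
A sketch of the underlying-correlation problem for a memoryless transformation X = g(Z): the map from the Gaussian correlation to the non-Gaussian one is evaluated by tensorized Gauss-Hermite quadrature and inverted by bisection (it is monotone for monotone g). The lognormal choice g = exp is an assumed example; the paper works with Hermite polynomial expansions and establishes the convergence properties.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def rho_x(g, rho_z, n=24):
    """Correlation of X1 = g(Z1), X2 = g(Z2) for standard bivariate normal
    (Z1, Z2) with correlation rho_z, via tensorised Gauss-Hermite quadrature."""
    z, w = hermegauss(n)
    w = w / np.sqrt(2.0 * np.pi)
    gz = g(z)
    m = w @ gz
    var = w @ gz**2 - m**2
    Z2 = rho_z * z[:, None] + np.sqrt(1.0 - rho_z**2) * z[None, :]
    e12 = np.einsum('i,j,i,ij->', w, w, gz, g(Z2))   # E[g(Z1) g(Z2)]
    return (e12 - m**2) / var

def rho_z_for(g, rho_target, tol=1e-6):
    """Solve rho_x(rho_z) = rho_target by bisection."""
    lo, hi = -1.0 + 1e-9, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rho_x(g, mid) < rho_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Assumed lognormal target X = exp(Z): the Gaussian process would be simulated
# with this corrected autocorrelation, then mapped through g point by point.
print(rho_z_for(np.exp, 0.5))   # ~0.62 (vs. the naive 0.5)
```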

Reliability of structures in high dimensions, Part I: Algorithms and applications

October 2004

The present paper is concerned with the estimation of structural reliability when a large number of random variables is present. A sampling technique that uses lines to probe the failure domain is presented. It is employed in conjunction with a stepwise procedure that makes use of Markov chains. The resulting algorithm exhibits accelerated convergence.
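
The description matches the general line sampling idea, and a sketch under that reading follows: lines parallel to an important direction are drawn from random points in the orthogonal hyperplane, and each line contributes the Gaussian probability mass beyond its crossing of the limit state. The 100-dimensional linear limit state (exact answer Phi(-3)) and the known important direction are assumptions; the paper's Markov-chain stepwise procedure is not reproduced.

```python
import numpy as np
from math import erf, sqrt

def Phi(x):                        # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def line_sampling(g, alpha, dim, n_lines=100, rng=None):
    """Line sampling: probe the failure domain {g <= 0} along the important
    direction alpha from random points in the orthogonal hyperplane."""
    rng = rng or np.random.default_rng()
    alpha = alpha / np.linalg.norm(alpha)
    pf = []
    for _ in range(n_lines):
        u = rng.standard_normal(dim)
        u -= (u @ alpha) * alpha                   # project onto the hyperplane
        lo, hi = 0.0, 10.0
        if g(u + hi * alpha) > 0:                  # line never reaches failure
            pf.append(0.0); continue
        for _ in range(60):                        # bisect for the crossing distance
            mid = 0.5 * (lo + hi)
            (lo, hi) = (mid, hi) if g(u + mid * alpha) > 0 else (lo, mid)
        pf.append(1.0 - Phi(0.5 * (lo + hi)))      # mass beyond the crossing
    return float(np.mean(pf))

# Assumed linear limit state g(u) = beta - u[0] in 100 dimensions, alpha = e1:
dim, beta = 100, 3.0
alpha = np.zeros(dim); alpha[0] = 1.0
g = lambda u: beta - u[0]
print(line_sampling(g, alpha, dim, rng=np.random.default_rng(5)))  # ~1.35e-3
```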

The Chebyshev norm as a deterministic alternative for problems with uncertainties

October 1996

The problem of determining, for a dynamic system, the absolute optimum response characteristics when portions of the system are not fully prescribed, and the best and worst possible performance when the excitation is uncertain, is formulated as a mathematical programming problem with equality and inequality constraints. The objective is to extremize the maximum time value of a response, i.e. a Chebyshev norm is employed. A linear example problem and its solution are presented to illustrate the formulation.

Bayesian time–domain approach for modal updating using ambient data

July 2001

The problem of identification of the modal parameters of a structural model using measured ambient response time histories is addressed. A Bayesian time-domain approach for modal updating is presented, based on an approximation of a conditional probability expansion of the response. It allows one to obtain not only the optimal values of the updated modal parameters but also their associated uncertainties, calculated from their joint probability distribution. Calculation of the uncertainties of the identified modal parameters is very important if one plans to proceed in a subsequent step with the updating of a theoretical finite-element model based on modal estimates. The proposed approach requires only one set of response data. It is found that the updated PDF can be well approximated by a Gaussian distribution centered at the optimal parameters at which the updated PDF is maximized. Examples using simulated data are presented to illustrate the proposed method.
