It is shown that the determinant of the tangent stiffness matrix has a maximum in the prebuckling regime if and only if the determinant of a specific linear combination of the first and the third derivative of this matrix with respect to a dimensionless load factor vanishes. The mathematical tool for this proof is the so-called consistently linearized eigenproblem in the framework of the Finite Element Method. Physically, this maximum corresponds to a minimum of the percentage of bending energy in the total strain energy. The paper provides mathematical and physical background for numerical results that were obtained 35 years ago.
A review of work in the field of space-time signal processing (STSP) that jointly takes into account complicated antenna motion, noise, and the medium is presented. The work proceeded in three directions: the first is the development of the theory of STSP for moving antennas, the second is the development of fast STSP algorithms that realize complicated multichannel processing with little computational effort, and the third is the implementation of the results in sonar. A new approach for optimizing STSP in complicated dynamic conditions is proposed, and with it new STSP algorithms for coherent and stochastic signals are developed. Research on these algorithms shows that in some cases complicated antenna motion plays a strongly positive role: STSP systems with mobile antennas may have essentially higher space-time selectivity and noise immunity than systems with static antennas. These effects may be used for system improvement. In systems with line arrays they can be used, in particular, to eliminate ambiguity in the estimation of signal direction. A series of fast STSP algorithms for different types of moving antennas is developed, with results obtained both for coherent and for stochastic signals. It is found that, in the case of parallel processing in millions of space-time channels, the new fast algorithms decrease the computational effort by a factor of ten or more. The main propositions of the theory are confirmed by experimental results.
In recent years, new and more effective procedures for applying collocation have been published. This article presents a review of this subject and complements its developments. From the general theory two broad approaches are derived, which yield the direct and the indirect TH-collocation methods. The former approach had not been published before, and it is a dual of the indirect approach. In particular, second-order differential equations of elliptic type are considered and several orthogonal collocation algorithms are developed for them. In TH-collocation, the approximations on the internal boundary and in the subdomain interiors are completely independent. This yields clear computational advantages that are illustrated through the construction of such algorithms. The implementations presented include three-dimensional problems. In passing, single-point-collocation methods that have been the subject of several recent publications are reviewed.
This paper gives a bibliographical review of object-oriented programming applied to the finite element method as well as to the boundary element method. The bibliography at the end of the paper contains references to papers, conference proceedings and theses/dissertations on the subject that were published between 1990 and 2003. The following topics are included: finite element method—object-oriented programming philosophy, mesh modelling, distributed/parallel processing, toolkits and libraries, object-oriented specific applications (aerospace, civil engineering/geomechanics, coupled problems, dynamical systems, electromagnetics, fracture mechanics and contact problems, material simulations/manufacturing, mechanical engineering, nonlinear structural simulations, optimization, others); boundary element method. In total, 408 references are listed.
When conducting a finite element analysis (FEA), one way to reduce the total number of degrees of freedom is to use a mixed-dimensional model. Using beam elements to model long and slender components can significantly reduce the total number of elements. Problems arise when trying to connect elements with different dimensions, in part due to incompatible degrees of freedom between different types of finite elements. This paper focuses on problems that occur in coupling beams and solids, that is, coupling 1D and 3D finite elements. It presents a mesh-based solution to these problems using only specific arrangements of classical 1D and 3D finite elements, without requiring additional constraint equations. Two alternative solutions are detailed, evaluated and compared through a series of computational experiments. The implementation of both solutions is also presented and involves mesh and geometry processing operations along with an adaptation of classical boundary representation (BREP) data structures.
In this paper, the complete multiple reciprocity method is adopted to solve the one-dimensional (1D) Helmholtz equation for the semi-infinite domain. In order to recover the information that is missing when the conventional multiple reciprocity method is used, an appropriate complex number in the zeroth order fundamental solution is added such that the kernels derived using this proposed method are fully equivalent to those derived using the complex-valued formulation. Two examples including the Dirichlet and Neumann boundary conditions are investigated to show the validity of the proposed method analytically and numerically. The numerical results show good agreement with the analytical solutions.
In recent years, microprocessor-based multiprocessors with a memory hierarchy have become increasingly popular. In this paper, we discuss implementation details and performance results on the SGI Cray Origin 2000 system. We choose an inexact additive Schwarz preconditioned conjugate gradient (AS PCG) method as our benchmarking kernel. This method is quite effective in utilizing cache, particularly when the number of sweeps on the subdomain is large. We investigate the performance characteristics of this method in two different shared-memory programming models. On 64 processors, the parallel speedup of the optimized computer program is approximately 50, and the maximum performance is approximately 6 GFlops/s for the Richardson-iterative AS PCG method and 8.5 GFlops/s for the SSOR-iterative AS PCG method.
In this study, the use and comparison of several discontinuous boundary elements (constant, linear and quadratic) are investigated for 2D soil–structure interaction (SSI) problems. Based on the formulations presented in this study, general-purpose computer programs coded in FORTRAN77 are developed for each type of discontinuous boundary element for elastic or visco-elastic 2D SSI problems. The programs perform the analysis in Fourier transform space. The results of 2D dynamic SSI problems are compared with those in the literature. The examples studied here indicate that the present formulations have sufficient computational accuracy for analyzing 2D SSI problems. The study also shows that the constant element is more suitable than the other element types.
As the use of solid models becomes more widespread, it is important to have a mechanism by which existing engineering drawings can be converted into solid models. Therefore, a geometric assistant that can aid in visual reasoning and in the construction of solid models is beneficial. In this paper, we present key operations for a system called the Assistant for Reasoning and Construction of Solids (ARCS), which provides this assistance given a set of two orthographic views. The geometric domain of ARCS encompasses curved solids with cylindrical and spherical surfaces, such as those found in typical mechanical parts. We have devised the Cylindrical and Spherical Warping operations to create cylindrical and spherical surfaces; these operations use interactive computer graphics to guide a human user to the curved faces of a solid. The operations are then illustrated with examples using ARCS to create solid models of typical mechanical parts from their orthographic projections.
A computer program, idor2d GIS, was developed to simulate the currents and pollutant transport in lakes and coastal areas. The model is a closely coupled two-dimensional hydrodynamic/pollutant transport geographic information systems (GIS) model that operates within ArcView GIS. The use of this GIS-based interface module facilitates improved communication of the basic patterns and relationships associated with hydrodynamic/pollutant transport simulation and the application of this information to water resources planning and management. Model functionality for data capture, data editing, pre-processing, embedded artificial intelligence, and result interpretation is described. The functionality of the model is illustrated through a case study of Suda Bay, located in Crete, Greece.
Modern digital technology has made it possible to manipulate multi-dimensional signals with systems that range from simple digital circuits to advanced parallel computers. The goal of this manipulation can be divided into three categories:
• Image Processing: image in → image out.
• Image Analysis: image in → measurements out.
• Image Understanding: image in → high-level description out.
Further, we restrict ourselves to two-dimensional (2D) image processing, although most of the concepts and techniques to be described can be extended easily to three or more dimensions. The Wiener filter is a solution to the restoration problem based upon the hypothesized use of a linear filter and the minimum mean-square (or rms) error criterion. In the example given below, the image a[m, n] was distorted by a bandpass filter and then white noise was added to achieve an SNR = 30 dB.
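A minimal frequency-domain sketch of such a Wiener restoration follows. It uses an illustrative Gaussian low-pass distortion in place of the text's bandpass filter, oracle power spectra, and arbitrary sizes and noise levels; all names and parameters here are assumptions, not the article's code.

```python
import numpy as np

rng = np.random.default_rng(0)
M = N = 64

a = rng.standard_normal((M, N))               # original image a[m, n]
fx = np.fft.fftfreq(M)[:, None]
fy = np.fft.fftfreq(N)[None, :]
H = np.exp(-(fx**2 + fy**2) / (2 * 0.15**2))  # illustrative low-pass distortion

A = np.fft.fft2(a)
sigma = 0.05
c = np.real(np.fft.ifft2(H * A)) + sigma * rng.standard_normal((M, N))

# Wiener filter: W = conj(H) * S_aa / (|H|^2 * S_aa + S_nn),
# the linear minimum mean-square error restoration filter.
S_aa = np.abs(A) ** 2                         # (oracle) signal power spectrum
S_nn = sigma**2 * M * N                       # white-noise power per frequency bin
W = np.conj(H) * S_aa / (np.abs(H) ** 2 * S_aa + S_nn)

a_hat = np.real(np.fft.ifft2(W * np.fft.fft2(c)))

err_observed = np.linalg.norm(c - a)          # error of the raw observation
err_restored = np.linalg.norm(a_hat - a)      # error after Wiener restoration
```

When the assumed spectra are accurate, the restored error falls below the error of the raw observation.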
A computer aided design (CAD) tool has been specifically developed for rapid and easy design of solid models of surfboard and sailboard fins. This tool simplifies the lofting of advanced fin cross-sectional foils, in this instance based upon the family of standard airfoil series set by the National Advisory Committee for Aeronautics (NACA), whilst retaining a basic parametric description at each cross-section. This paper describes the way in which non-uniform rational B-spline (NURBS) surfaces are created from 2D profile splines and are then used to generate 3D geometrical surfaces of the fins, which can be imported directly into commercial software packages for finite element stress analysis (FEA) and computational fluid dynamics (CFD). Pressure distributions and lift and drag forces are determined from a CFD flow analysis for various fins designed with this tool, and the results suggest that the incorporation of advanced foils into surfboard fins could indeed lead to increased performance over fins foiled using current standard techniques.
With the advent of the Intel 80386 chip, the road has been cleared for application programmers to build large PC software products. This article describes our experience developing a complex engineering package designed specifically for the PC-386. The topics discussed include planning of the development approach, choice of the operating system and the programming language, programming methods and tools, and software integration techniques. The integration involves connecting analytical modules with each other and with menu-driven interfaces and graphics. The construction of a simple built-in text editor linked to specialized graphics modules is discussed. The paper also contains our comments and suggestions regarding the development tools and standards for programming languages. We hope our experience will be useful for application programmers, system programmers and developers of programming tools.
A fixed cylindrical circular cavity and a cylindrical circular column of fluid of infinite length, submerged in a homogeneous fluid medium and subjected to a pressure point source, for which closed-form solutions are known, are used to assess the performance of constant, linear and quadratic boundary elements in the analysis of acoustic scattering. This aim is accomplished by evaluating the error committed by the boundary element method (BEM) for a wide range of frequencies and wave numbers. First, the positions of the dominant BEM errors in the frequency versus spatial wave number domain are identified and related to the natural modes of vibration of the cylindrical circular inclusion. Then, the errors that occur when using constant, linear and quadratic elements are compared when the inclusion is modelled with the same number of nodes (i.e. maintaining the computational cost). Finally, the importance of the position of the nodal points inside discontinuous boundary elements is analysed.
The objective of vehicle crash accident reconstruction is to determine the pre-impact velocity. The elastic–plastic deformation of the vehicle and of the collision objects is important information produced during vehicle crash accidents, and this information can be fully utilized with the finite element method (FEM), which has been widely used as a simulation tool for crashworthiness analysis and structural optimization design. However, the FEM has not become popular in accident reconstruction because it requires many crash simulation cycles, and the FE models keep getting bigger, which increases simulation time and cost. The use of neural networks as a global approximation tool in accident reconstruction is investigated here. Neural networks are used to map the relation between the initial crash parameters and the deformation, which can reduce the number of simulation cycles appreciably. The inputs and outputs of the artificial neural networks (ANN) for the training process are obtained by explicit finite element analyses performed with LS-DYNA. The procedure is applied to a typical traffic accident as a validation. The deformation of key points on the frontal longitudinal beam and the mudguard can be measured and compared with the simulation results. These results are used to train neural networks adopting the back-propagation learning rule. The pre-impact velocity can then be obtained from the trained neural networks, which provides a scientific foundation for accident judgments and can be used for vehicle accidents without tire marks.
This paper presents 3D non-linear FEM models developed to predict the mechanical behaviour of timber–concrete joints made with dowel-type fasteners. They consider isotropic behaviour for steel and concrete and orthotropic behaviour for timber; all the materials are modelled with non-linear mechanical behaviour. In addition, the interaction between the materials is modelled using contact elements with friction. The results obtained in the numerical simulations are evaluated and compared with results obtained in laboratory shear tests. The model developed showed the capacity to simulate the behaviour of the joints if the materials used are properly modelled. Nevertheless, further research is still necessary to improve the modelling of the materials, particularly timber.
The authors have applied the 3D FDTD technique to simulate the propagation of electrical signals on a microstrip antenna using a four-transputer array connected to a 386-based PC. This relates to work currently being carried out on the propagation of very high speed digital pulses in printed circuit boards in high-bandwidth systems. The authors have devised methods that reduce the amount of stored data and improve run-time; these methods include subgridding, shielded volumes and concurrent processing of the problem. Results are presented in this paper to show improved accuracy and decreased run-time. An analysis of run-time versus the number of transputers in an array for optimum operation is presented. A trade-off between inter-transputer communication of segment boundary conditions and the size of each segment is also discussed.
The method proposed here for the correction of finite-element meshes is based on the modification of the coordinates of certain mesh nodes, the aim being to derive elements with an aspect ratio as close as possible to unity. This entirely automatic method can be applied to any finite-element mesh corresponding to a 3D solid model, and it is easily adaptable to various other types, including plane models or surfaces. The outcome in each case is a mesh of higher quality, a sound basis for a self-adaptive mesh scheme, and a promise of better results.
This article reviews the prevailing geometric modelling techniques, based on Non-Uniform Rational B-Splines (NURBS). Emphasis is placed on the most important properties of NURBS surfaces and the available techniques for modelling real natural or artificial objects given a cloud of three-dimensional data points on their surface, possibly taken from a scanning device.
The extension of an approach suitable for the impact computation of bolted structures with a large number of unilateral frictional contact surfaces and with local plasticity of the bolts is presented. It is a modular approach based on a mixed domain decomposition method and the LATIN method. This iterative resolution process operates over the entire time–space domain. A 3D finite-element code is presented, dedicated to applications concerning refined connection models in which the structural components are assumed elastic. Several examples are analysed to show the method's capability of describing shocks through a real three-dimensional assembly. Comparisons with the classical dynamic code LS-DYNA3D are presented.
This paper describes an investigation into the error bounds of Gauss-Legendre integration for non-singular integrals in three-dimensional boundary element analyses. Based on these numerical results, reliable and efficient criteria for adaptive integration routines are proposed. Because these criteria are in simple forms, they can be implemented within boundary element algorithms without difficulty. Numerical examples have been presented to demonstrate the efficiency and accuracy of the proposed integration schemes.
This paper presents an Internet-based finite-element analysis framework, named Web-FEM, which allows users to access finite-element analysis services from remote sites over the Internet using only an Internet-connected machine. The implementation utilizes modern computer graphics, parallel processing, and information technology to provide features such as platform independence, a 3D graphical interface, system performance, multiple-user management, and fault tolerance in comparison with other Internet-based analytical systems. These features make using the system feel like using a traditional finite-element package installed and run on a local machine, with the performance of a high-performance computing facility. The object model design and the implementation of this system are presented in detail.
This paper presents research that led to the design and implementation of an extensible and scalable software framework for the dynamic 3D visualization of simulated construction operations. In the domain of operations design and analysis, the ability to see a 3D animation of processes that have been simulated allows for two very important things: verification and validation. In addition, a model can be communicated effectively which, coupled with verification and validation, makes it "credible" and thus usable in making decisions. In the presented research, a set of core animation statements of the most general use from the extension viewpoint is first identified. Second, methods to design an add-on interface to the identified core animation methods are investigated by capitalizing on documented principles of application framework design. Finally, the designed add-on interface and its scalability are validated by implementing the extensible framework on multiple computing platforms and then extending the language with several non-trivial extensions using the designed add-on interface. The research concluded that geometric transformation-based animation statements are: (1) collectively sufficient to visually describe a broad class of common construction processes, and (2) at a level of abstraction such that they can be logically concatenated to describe higher-level motion dynamics involved in performing construction. In addition, it was found that an open, loosely coupled visualization scheme and direct interface methods to append the animation statement interpreter's vocabulary with new add-on statements: (1) allow language extensions without modification to or understanding of the underlying methods, and (2) present users with a consistent interface to visually describe construction processes, thereby providing complexity concealment and a uniform end-user interface. The presented framework is implemented as an extension (add-on) interface to the VITASCOPE visualization system.
Total hip replacement (THR) is a surgical procedure that replaces a diseased hip joint with implants. Because the size and shape of the human hip joint vary between individuals, a chosen commercial hip implant sometimes may not be the best fit for a patient, or may not be applicable at all because of the mismatch. To solve the problem of a possible geometric mismatch between a selected implant and the hip joint, we develop a software system that designs a patient-specific hip implant by investigating the anatomical geometry of the patient's hip joints. The major technical challenge of the proposed system is to extract typical 3D geometry parameters from the patient's 3D bone anatomy and then create a custom-made hip implant based upon the extracted parameters. This paper describes the overall procedure of creating a patient-specific hip implant based upon the geometry of the patient's femur. The parameters include the femoral shaft isthmus, anatomical femoral axis, femoral head center/radius, head offset length, femoral neck, neck-shaft angle, anteversion, and canal flare index (CFI). Some of them are recognized and extracted semi-automatically, but others must be determined with the surgeon's intervention. All the parameters are sufficient to construct a primitive 3D geometry model of the specified femur, so that a patient-specific hip implant for an individual patient can then be determined from the extracted geometry parameters. The stability of patient-specific implants is increased by maximizing the contact area with the bone. Currently the proposed system is exclusively for the design of a hip implant; however, the concepts are general enough to be applied to other human joints such as the shoulder, the knee, or the spine. The feasibility and reliability of the method have been tested on several examples.
This study presents efforts to archive Chinese architecture using a long-range 3D laser scanner. A historical building, the main hall of the Pao-An Temple, was preserved in digital format, with the architectural shapes retrieved more accurately than traditional manual measurements allow. The difficulties in measuring as-built free forms and curves up to the size of a building were encountered and solved, enabling the display of the hidden interrelationship between outdoor and indoor profiles through sections. This research identified the most error-prone measurements of the traditional approach by comparing the original drawings with the final models, which registered 1958 scans and sub-scans. To represent the special characteristics of the as-built 3D temple form, the study covers the application of metadata in architecture, the information management of digital data, and the Internet display of large 3D data sets.
The 3D traffic situation simulation system combines a multibody-based mathematical model of the vehicle, a multibody mathematical model of the human body, a database of vehicle and human body data, and a display subsystem. Together with a model of the driving surface, the system can be used to simulate and analyse the behaviour of a vehicle and its occupants under different road conditions and different driving regimes. The results obtained this way can be used to investigate safety-related parameters and to optimise the driver–vehicle–road system with respect to arbitrary criteria (safety, comfort, speed, etc.). The results of the simulations are available as numerical data as well as animations in a virtual 3D environment.
Many environmental processes can be modelled as transient convection–diffusion–reaction problems. This is the case, for instance, of the operation of activated-carbon filters. For industrial applications there is a growing demand for 3D simulations, so efficient linear solvers are a major concern. We have compared the numerical performance of two families of incomplete Cholesky factorizations as preconditioners of conjugate gradient iterations: drop-tolerance and prescribed-memory strategies. Numerical examples show that the former are computationally more efficient, but the latter may be preferable due to their predictable memory requirements.
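The drop-tolerance strategy can be illustrated roughly as follows: an incomplete factorization whose fill is controlled by a drop tolerance is used to precondition conjugate gradient iterations. In this sketch, `scipy`'s incomplete LU stands in for the incomplete Cholesky factorizations of the text, and the test matrix and parameters are illustrative assumptions, not the paper's problem.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse import linalg as spla

# A small SPD tridiagonal system standing in for a discretized
# 3D convection-diffusion-reaction operator.
n = 50
A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Drop-tolerance incomplete factorization: entries below drop_tol are
# discarded, so memory use depends on the matrix (not prescribed).
# spilu computes an incomplete LU; for an SPD matrix it plays the role
# of the incomplete Cholesky factorization discussed in the text.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

# Preconditioned conjugate gradient iterations.
x, info = spla.cg(A, b, M=M)
```

A prescribed-memory variant would instead fix `fill_factor` tightly and accept whatever accuracy the retained fill provides, trading iterations for predictable storage.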
Many physical phenomena in science and engineering can be modeled by partial differential equations and solved by means of the finite element method. Such a method uses as computational spatial support a mesh of the domain where the equations are formulated. The 'mesh quality' is a key point for the accuracy of the numerical simulation. One can show that this quality is related to the shape and the size of the mesh elements. In the case where the element sizes are not specified in advance, a quality mesh is a regular mesh (whose elements are almost equilateral). This problem is a particular case of a more general mesh generation problem whose purpose is to construct meshes conforming to a prespecified isotropic size field associated with the computational domain. Such meshes can be seen as 'unit meshes' (whose elements are of unit size) in an appropriate non-Euclidean metric. In this case, a quality mesh of the domain is a unit mesh that is as regular as possible. In this paper, we are concerned with the generation of such a mesh and we propose a method to achieve this goal. First, the boundary of the domain is meshed using an indirect scheme via parametric domains, and then the mesh of the three-dimensional (3D) domain is generated. In both cases, an empty mesh is first constructed, then enriched by field points, and finally optimized. The field points are defined following an algebraic or an advancing-front approach and are connected using a generalized Delaunay type method. To show the overall meshing process, we give an example of a 3D domain encountered in a classical computational fluid dynamics problem.
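A toy two-dimensional version of the final stage (connecting field points with a Delaunay method and checking element regularity) can be sketched as below. The point set, the unit-square domain, and the inradius/circumradius quality measure are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)

# Field points in the unit square plus its corners (a stand-in for the
# boundary mesh enriched by internal field points).
pts = np.vstack([rng.random((200, 2)),
                 [[0, 0], [0, 1], [1, 0], [1, 1]]])

# Connect the points with a Delaunay triangulation.
tri = Delaunay(pts)

def quality(p):
    """Shape quality of one triangle: normalized inradius/circumradius
    ratio, 1.0 for an equilateral element, -> 0 for a degenerate one."""
    a = np.linalg.norm(p[1] - p[0])
    b = np.linalg.norm(p[2] - p[1])
    c = np.linalg.norm(p[0] - p[2])
    s = 0.5 * (a + b + c)
    area = max(s * (s - a) * (s - b) * (s - c), 0.0) ** 0.5  # Heron
    if area == 0.0:
        return 0.0
    r = area / s                  # inradius
    R = a * b * c / (4.0 * area)  # circumradius
    return 2.0 * r / R            # Euler's inequality gives R >= 2r

q = np.array([quality(pts[s]) for s in tri.simplices])
```

An optimization pass, as in the paper, would then relocate nodes or swap edges to push the low-quality elements toward `q = 1`.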
An efficient technique to visualize primary and secondary results for combined finite element method/boundary element method models as contours is presented. The technique is based on dividing higher-order surfaces into triangles and on using texture interpolation to produce contour plots. Since results of high accuracy with significant gradients can be obtained using sparse meshes of boundary elements and finite elements, special attention is devoted to element face subdivision. Subdivision density is defined on the basis of both face edge curvature and ranges of result fields over element faces. Java 3D API is employed for code development.
It is critical to detect the spatio-temporal conflicts in a project schedule, since many construction conflicts occur due to constraints in construction space and unavailability of intermediate functions of the in-progress building. This paper introduces a temporal 3D space system modelling method using a COmponent State network CEntric Model (COSCEM) to integrate such project aspects as product, process, space, and intermediate functions. Based on COSCEM, a 3D CAD model can be transformed into a temporal 3D space system. The concept of ‘existence vector’ and the Boolean logic operations are defined for depicting and deriving the dynamic characteristics of project entities. The procedures for detecting spatio-temporal conflicts are also presented. A case study of moving a truck crane on an excavated access road is selected to illustrate the proposed spatio-temporal detection methodology.
Tire tread is composed of many grooves and blocks in a complex pattern that governs the major tire running performances, but 3D tire analysis has conventionally been performed by either neglecting the tread blocks or modeling only the circumferential grooves. As a result, such simplified analyses lead to considerably poor numerical predictions. In this context, this paper addresses an effective mesh generation procedure for 3D automobile tires in which the detailed tread blocks with variable anti-skid depth are fully considered. The tire body and tread pattern meshes are constructed separately in the beginning, and both are then assembled by the incompatible tying method. The detailed pattern mesh is inserted either partially or fully, depending on the analysis purpose. Through tire contact analysis, we verified that the meshing technique introduced does not cause any meshing error and that the detailed tire mesh predicts contact pressure more consistent with the experimental results.
Constitutive models for concrete based on the microplane concept have repeatedly proven their ability to reproduce well the non-linear response of concrete on the material as well as the structural scale. The major obstacle to routine application of this class of models is, however, the calibration of microplane-related constants from macroscopic data. The goal of this paper is two-fold: (i) to introduce the basic ingredients of a robust inverse procedure for determining the dominant parameters of the M4 model proposed by Bazant and co-workers, based on cascade Artificial Neural Networks trained by an Evolutionary Algorithm, and (ii) to validate the proposed methodology against a representative set of experimental data. The results obtained demonstrate that the soft-computing-based method is capable of delivering the sought response with an accuracy comparable to the values obtained by expert users.
Over the last decade, there has been an increased awareness of the benefits of employing Object-Oriented (OO) design and methodologies for development of software. Among the various languages available for OO development, Fortran 95 has some clear advantages for scientific and engineering programming. It offers features similar to other OO languages like C++ and Smalltalk as well as extensive and efficient numerical abilities. This paper will describe the OO design and implementation of P-adaptive finite element analysis (FEA) using Fortran. We will demonstrate how various OO principles were successfully employed to achieve greater flexibility, easier maintainability and extensibility. This is helpful for a complex program like an adaptive finite element implementation.
The accuracy of numerical predictions of outdoor sound propagation made with the newly developed pyramid tracing algorithm was tested using a directive loudspeaker as the sound source. The pyramid tracing code was compared against experimental measurements and against the ISO-DIS 9613 standard. Both the experimental and numerical data were used to build graphical plots enabling a direct comparison of the results: they show that the capability of accurately modeling the source directivity generally produces a better estimate with the pyramid tracing algorithm, but the shielding effects and excess attenuation are more accurately modeled by the ISO 9613 code.
Numerical and experimental analyses are carried out to study the free convection above a horizontal plate subjected to a uniform heat flux and placed in a semi-infinite medium (Pr = 0.7 or 7). The surface of the plate in contact with the fluid is described by a sinusoidal profile. The natural convection flow is considered laminar and two-dimensional. The numerical results show that the flow pattern is strongly dependent on the operating heat flux and on the parameters of the surface profile (amplitude and period). The isothermal lines spread along the surface of the plate, so the local Nusselt number is maximal above the top of the protuberances and minimal in the hollows of the profile. We also show that the local heat transfer decreases with the amplitude and increases with the period of the sinusoidal profile. Whatever surface parameters are used, the heat transfer above a sinusoidal plate is lower than that obtained from a plane plate of equal projected area. The numerical results are checked experimentally using an apparatus that allows us to measure the air temperature and the position of this measurement above the plate.
This paper concerns the development of a crashworthiness maximization technique for tubular structures and its application to the axial crushing of cylindrical tubes. In the program system presented in this study, an explicit finite element code, DYNA3D, is adopted to simulate the complicated crushing behavior of tubular structures. A series of aluminum cylindrical tubes is tested under axial impact conditions to validate the numerical solutions experimentally. Moreover, the response surface approximation technique is applied to construct an approximate design subproblem for optimization over a pre-assigned design space, using design-of-experiments techniques. The approximate subproblem is then solved by standard mathematical programming. These optimization steps are repeated until the given convergence conditions are satisfied.
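The iterative response-surface loop described in this abstract can be sketched compactly. The following is a minimal one-variable illustration, not the authors' DYNA3D-based system: `expensive_simulation` is a hypothetical stand-in for a crash analysis, and the quadratic surrogate plus shrinking design window are simple choices made here for concreteness.

```python
import numpy as np

def expensive_simulation(t):
    # Hypothetical stand-in objective; in the paper this would be a
    # DYNA3D crushing analysis evaluated at each DOE point.
    return (t - 2.3) ** 2 + 0.1 * np.sin(5 * t)

lo, hi = 0.0, 5.0
for it in range(5):
    t_doe = np.linspace(lo, hi, 5)                 # design-of-experiments points
    y = np.array([expensive_simulation(t) for t in t_doe])
    a, b, c = np.polyfit(t_doe, y, 2)              # quadratic response surface
    t_star = -b / (2 * a) if a > 0 else t_doe[np.argmin(y)]
    t_star = float(np.clip(t_star, lo, hi))
    half = 0.3 * (hi - lo)                         # shrink the design space
    lo, hi = max(0.0, t_star - half), min(5.0, t_star + half)

print(t_star)  # surrogate optimum after a few shrink-and-refit cycles
```

Each cycle fits a cheap surrogate to a handful of expensive evaluations, optimizes the surrogate, and recenters the design space, mirroring the repeat-until-convergence scheme the abstract describes.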
The AC servo motor system is a nonlinear system with parameter uncertainty; it is therefore not easy to identify with a traditional mathematical model. In this paper, four back-propagation training algorithms (SDBP, MOBP, VLBP and LMBP) are evaluated, and the best one is chosen to identify the system. The resulting controller design method is simple and can be applied conveniently in engineering. Simulation results show that LMBP converges fastest and is well suited to system identification and controller design. In the proposed scheme, we successfully integrate MATLAB/Simulink and LabVIEW with the neural network to develop a SCADA system for the AC servo motor. Numerical simulation results confirm the performance and effectiveness of the proposed control approach.
A method for accelerating a fully implicit solution of a nonlinear unsteady boundary value problem is presented. The principle of the acceleration is to provide the inexact Newton backtracking method with a better initial guess for the current time step than the conventional choice, the solution from the previous time step. This initial guess is built from a reduced model obtained by a proper orthogonal decomposition (POD) of the solutions at previous time steps. The approach is appealing for grid computing: spare processors may help to improve the numerical efficiency and to manage the computation in a reliable way.
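The POD-based prediction can be illustrated in a few lines. The sketch below is a generic illustration under assumed names (`pod_initial_guess`, a synthetic snapshot history), not the authors' code: it builds a POD basis from past converged solutions and linearly extrapolates the reduced coordinates to form the initial guess for the next Newton solve. Linear extrapolation is one simple choice among several.

```python
import numpy as np

def pod_initial_guess(snapshots, r=3):
    """Predict the next step's solution from a POD of past snapshots.

    snapshots: (n_dof, n_steps) array of converged solutions.
    """
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    Phi = U[:, :r]                    # POD basis of rank r
    q = Phi.T @ snapshots             # reduced coordinates of each snapshot
    q_next = 2 * q[:, -1] - q[:, -2]  # linear extrapolation in reduced space
    return Phi @ q_next               # lift back to the full space

# Synthetic "time history": a smoothly evolving field on 200 dofs
x = np.linspace(0.0, 1.0, 200)
snaps = np.column_stack([np.sin(np.pi * x) * (1 + 0.1 * k) for k in range(6)])
guess = pod_initial_guess(snaps)
exact_next = np.sin(np.pi * x) * 1.6  # what step 6 would look like

# The POD guess is far closer to the next solution than the previous step,
# which is exactly the point of the acceleration described above.
print(np.linalg.norm(guess - exact_next))
```

Here the guess is essentially exact because the synthetic history varies linearly in time; for a genuinely nonlinear evolution the guess is only a head start for Newton, not the solution itself.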
Peak ground acceleration is a very important factor that must be considered at a construction site when examining the potential damage resulting from earthquakes. Actual seismometer records at stations near the site may be taken as a basis, but a reliable estimation method can provide more detailed information about strong-motion characteristics. The purpose of this study was therefore to develop a model based on back-propagation neural networks for estimating peak ground acceleration along two main line sections of the Kaohsiung Mass Rapid Transit in Taiwan. Additionally, microtremor measurements processed with the Nakamura transformation technique were used to further validate the estimates. Three neural network models with different inputs, including epicentral distance, focal depth and magnitude of the earthquake records, were trained, and the outputs were compared with an available nonlinear regression analysis. The comparisons showed that the present neural network model performed better than the other methods, as its results were more reasonable and closer to the actual seismic records. Moreover, the distributions of estimated peak ground acceleration from both computations and measurements may provide valuable information from theoretical and practical standpoints.
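A back-propagation network with the three inputs named in the abstract can be sketched as follows. This is a generic steepest-descent back-propagation (SDBP-style) example on synthetic data drawn from a hypothetical attenuation relation, not the study's earthquake records or trained models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data from a hypothetical attenuation relation
# (illustration only): PGA grows with magnitude, decays with distance/depth.
n = 400
dist = rng.uniform(5.0, 100.0, n)     # epicentral distance (km)
depth = rng.uniform(5.0, 40.0, n)     # focal depth (km)
mag = rng.uniform(4.0, 7.0, n)        # magnitude
pga = np.exp(0.8 * mag) / (dist + depth) ** 1.2

X = np.column_stack([dist, depth, mag])
X = (X - X.mean(axis=0)) / X.std(axis=0)     # normalised inputs
y = np.log(pga)
y = (y - y.mean()) / y.std()                 # normalised target

# One hidden layer trained with plain gradient-descent back-propagation
W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    out = (h @ W2 + b2).ravel()
    g_out = 2.0 * (out - y)[:, None] / n     # d(MSE)/d(out)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)    # back-propagate through tanh
    W2 -= lr * (h.T @ g_out); b2 -= lr * g_out.sum(axis=0)
    W1 -= lr * (X.T @ g_h);   b1 -= lr * g_h.sum(axis=0)

mse = float(np.mean((out - y) ** 2))
print(mse)  # training error well below the target variance of 1.0
```

Variants such as MOBP, VLBP or LMBP differ only in how the weight update is computed from these same gradients.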
The following problem is solved: Given a Cellular Automaton with continuous state space which simulates a physical system or process, use a Genetic Algorithm in order to find a Cellular Automaton with discrete state space, having the smallest possible lattice size and the smallest possible number of discrete states, the results of which are as close as possible to the results of the Cellular Automaton with continuous state space. The Cellular Automaton with discrete state space evolves much faster than the Cellular Automaton with continuous state space. The state spaces of two Cellular Automata have been discretized using a Genetic Algorithm. The first Cellular Automaton simulates the two-dimensional photoresist etching process in integrated circuit fabrication and the second is used to predict forest fire spreading. A general method for the discretization of the state space of Cellular Automata using a Genetic Algorithm is also presented. The aim of this work is to provide a method for accelerating the execution of algorithms based on Cellular Automata (Cellular Automata algorithms) and to build a bridge between Cellular Automata as models for physical systems and processes and Cellular Automata as a VLSI architecture.
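The discretization problem can be illustrated with a toy continuous-state CA and a minimal GA. The averaging rule, the fitness weighting and the mutation scheme below are assumptions chosen for brevity; they are not the paper's photoresist-etching or forest-fire models.

```python
import numpy as np

rng = np.random.default_rng(1)

def continuous_ca(state, steps):
    # Continuous-state CA: a toy local-averaging (diffusion-like) rule
    for _ in range(steps):
        state = (np.roll(state, 1) + state + np.roll(state, -1)) / 3.0
    return state

def discrete_ca(state, steps, levels):
    # Same rule, but the state is quantised to `levels` values in [0, 1]
    q = lambda s: np.round(s * (levels - 1)) / (levels - 1)
    state = q(state)
    for _ in range(steps):
        state = q((np.roll(state, 1) + state + np.roll(state, -1)) / 3.0)
    return state

init = rng.random(64)
target = continuous_ca(init, 20)

def fitness(levels):
    # Penalise deviation from the continuous CA plus the number of
    # discrete states (smaller state spaces evolve faster in hardware)
    err = np.abs(discrete_ca(init, 20, int(levels)) - target).mean()
    return err + 0.001 * int(levels)

# Minimal GA: keep the fittest half, add mutated offspring
pop = rng.integers(2, 257, 12)
for gen in range(30):
    pop = np.array(sorted(pop, key=fitness))
    parents = pop[:6]
    children = np.clip(parents + rng.integers(-8, 9, 6), 2, 256)
    pop = np.concatenate([parents, children])

best = int(min(pop, key=fitness))
print(best, fitness(best))
```

The GA trades closeness to the continuous CA against the number of discrete states, which is the objective the abstract describes.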
This paper describes a prototype implementation of an engineering data access system for a finite element analysis (FEA) program. The system incorporates a commercial off-the-shelf (COTS) database as the backend to store selected analysis results, and the Internet is utilized as the communication channel to access them. The objective of using an engineering database is to provide users with the needed engineering information from readily accessible sources in a ready-to-use format for further manipulation. Three key issues regarding the engineering data access system are discussed: data modeling, data representation, and data retrieval. The engineering data access system gives great flexibility and extensibility to data management in finite element programs and can provide additional features that enhance the applicability of FEA software.
Today's grid applications require not only large amounts of computational power but also data at very large scale. Although grid computing was initially conceptualized as a technology focused on solving compute-intensive problems, this focus has gradually shifted to applications where data are distributed over various locations. Accessing these data resources, stored in heterogeneous grid storage systems at geographically distributed virtual organizations, in an integrated and uniform way is a challenging problem. The Web Services Resource Framework (WSRF) has recently emerged as the standard for the development and integration of grid services. This paper proposes and presents Gravy4WS, a middleware architecture based on WSRF Web services that enables dynamic access to virtualized grid data resources. A novel scheduling algorithm called DCE (Delegating-Cluster-Execution based Scheduling) is proposed to improve the load balancing of the system. The implementation of Gravy4WS using the WSRF libraries and services provided by Globus Toolkit 4 is described, together with its performance evaluation.
A Python tool for manipulating netCDF files in a parallel infrastructure is proposed. The parallel interface, PyPnetCDF, manages netCDF properties in a similar way to the serial version from ScientificPython, but hides the parallelism from the user. Implementation details and capabilities of the developed interfaces are given. Numerical experiments showing the ease of use of the interfaces and their behaviour compared with the native routines are presented.
Recent legal changes have increased the need for developing accessible user interfaces in computer-based systems. Accordingly, existing user interfaces are to be modified, and new user interfaces designed, taking accessibility guidelines into account. Typically, model-based approaches have been used when developing accessible user interfaces or redefining existing ones, but the use of static models leads to user interfaces that are not dynamically adaptable. Dynamic adaptation in accessible user interfaces is important because the interaction difficulties of people with disabilities may change over time. In this paper, we present some contributions that can be obtained from applying the Dichotomic View of plasticity to the personalization of user interfaces. With the double perspective defined in this approach, the aim is to go beyond mere adaptation to certain user stereotypes and also offer dynamic support for the real limitations or difficulties users can encounter while using the UI. This goal is achieved by analyzing user logs with an inference engine that dynamically infers modifications to the user interface to adjust it to varying user needs. A case study is presented to show how the guidelines and software support defined in the Dichotomic View of plasticity can be applied to develop a component for a particular system aimed at performing dynamic user interface adaptations for accessibility purposes. This approach includes some innovations that distinguish it in important respects from conventional adaptable mechanisms applied to accessibility.
This paper studies the feasibility of investigating a traffic accident and providing initial data for traffic accident reconstruction (TAR) using a photogrammetric technique. Compared with the conventional roller tape used by the traffic police of the Shanghai Municipal Bureau of Public Security in 142 traffic cases, photogrammetry proves to be a time-saving and cost-effective method for accident investigation. The 2D photogrammetry method and the trajectory-analysis accident reconstruction technique are applied to actual traffic accidents. With the assistance of a Portable-Control System as the reference system, the 3D photogrammetry method can be used in a vehicle deformation survey. The measurement results obtained in accordance with the CRASH survey criteria for vehicle deformation can be adopted as initial information for the damage-analysis reconstruction technique. It appears that photogrammetry has great potential for application to traffic accident investigation and reconstruction.
A database system for a road accident analysis application has been developed using the new functional database language PFL. This application requires extensive data validation and restructuring, and queries (and hence data retrieval patterns) tend to be of a complex and ad hoc nature. PFL adapts functional programming to deductive databases and possesses features that are desirable for applications characterised by large volumes of both data and programs, and by complex structures of both data and queries. In this paper we describe the application domain and provide an overview of the salient features of PFL. We then discuss the development of a road accident database using PFL and comment upon the insight this has provided in terms of both the application and the language.
Road accident records in Britain each comprise two components: coded data in a predefined format, and plain English in free format. This paper describes a natural language understanding system for information retrieval from the latter to verify and extend the former. We adopt the description logic system BACK to achieve a common representation of information from each of the two sources to facilitate comparison. A sub-category grammar is adapted to achieve automatic classification in BACK, and a bidirectional chart parser is adapted to operate with this grammar. This gives good independence between grammar rules, and provides flexibility, expressiveness, and the ability to resolve ambiguities.
As a basic study toward establishing an accuracy estimation method for the finite element method, this paper deals with problems of transverse bending of thin, flat plates. From numerical experiments with uniform mesh division, the following relation was deduced: ε ∝ (h/a)^k, k ≥ 1, where ε is the error of the value computed by the finite element method relative to the exact solution and h/a is the dimensionless mesh size. Using this relation, an accuracy estimation method based on the adaptive determination of local mesh sizes from two preceding analyses with uniform mesh division was presented. A computer program using this accuracy estimation method was developed and applied to 28 problems with various shapes and loading conditions. The usefulness of the accuracy estimation method was illustrated by these application results.
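The deduced relation lends itself to a small worked example. With hypothetical numbers (not taken from the paper's 28 test problems), the rate k follows from two uniform-mesh analyses, and the relation then fixes the mesh size expected to reach a target error:

```python
import math

def convergence_rate(h1, e1, h2, e2):
    """Estimate k in e ∝ (h/a)^k from two uniform-mesh analyses."""
    return math.log(e1 / e2) / math.log(h1 / h2)

def mesh_for_target(h1, e1, k, e_target):
    """Mesh size expected to reach e_target, from e/e1 = (h/h1)^k."""
    return h1 * (e_target / e1) ** (1 / k)

# Hypothetical numbers: halving the mesh reduced the error by 4x, so k = 2
k = convergence_rate(0.2, 1.6e-2, 0.1, 4.0e-3)
h = mesh_for_target(0.2, 1.6e-2, k, 1.0e-3)
print(k, h)  # k = 2.0, h = 0.05
```

Applied locally, the same arithmetic yields the adaptively determined local mesh sizes the abstract refers to.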
Our earlier paper demonstrated that the singular boundary element (BE) theory, which hitherto had been considered unsuitable for the highly nonlinear variably saturated transient flow problem, could be applied to that problem when implemented along the lines of the Green element method (GEM) (Taigbenu, A. E. and Onyejekwe, O. O., Green element simulations of the transient nonlinear unsaturated flow equation, Applied Mathematical Modelling, 1995, 19, 675–684). Here, the transient one-dimensional unsaturated flow problem is revisited with a Green element model that incorporates cubic Hermitian interpolation basis functions to approximate the distribution of the primary variable and the soil constitutive relations. Because the soil parameters vary appreciably, by several orders of magnitude over small intervals of soil moisture, approximating those parameters by linear interpolation functions, as done earlier, can be inadequate; this is demonstrated here using two numerical examples of infiltration into vertical soil columns. In the first example, there is good agreement between the solutions of the Hermitian GEM and the Hermitian finite element method (FEM), confirming that the previously observed underestimation of soil moisture in the linear GEM solution is due to interpolation error. The second example shows that comparable accuracy between the Hermitian and linear GE models can be achieved with fewer elements in the former.
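The advantage of cubic Hermitian over linear interpolation for steep constitutive relations can be seen with a minimal sketch. The exponential relation below is a hypothetical stand-in for a rapidly varying unsaturated soil property (e.g. hydraulic conductivity), not the paper's constitutive data:

```python
import numpy as np

def hermite(x, x0, x1, f0, f1, d0, d1):
    """Cubic Hermite interpolant on [x0, x1] from end values and slopes."""
    t = (x - x0) / (x1 - x0)
    h = x1 - x0
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1

# Steep constitutive-type relation (hypothetical): K(theta) = exp(10*theta)
f = lambda t: np.exp(10 * t)
df = lambda t: 10 * np.exp(10 * t)

x0, x1 = 0.0, 0.3
xs = np.linspace(x0, x1, 101)
lin = f(x0) + (f(x1) - f(x0)) * (xs - x0) / (x1 - x0)   # linear interpolant
her = hermite(xs, x0, x1, f(x0), f(x1), df(x0), df(x1)) # cubic Hermite

# The Hermite interpolant tracks the steep curve far more closely over
# the same single element, which is the point made in the abstract.
print(np.max(np.abs(lin - f(xs))), np.max(np.abs(her - f(xs))))
```

Because the Hermitian basis also matches slopes at the element ends, one Hermitian element can do the work of several linear ones on such curves.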
A mixed, eight-node solid element is developed with the aim of accurately and efficiently capturing local stresses in composites. The nodal degrees of freedom are the three displacements and the three interlaminar stresses. C0, tri-linear, serendipity shape functions are used to interpolate these quantities across the element volume. With this choice, the intra-element stress fields satisfy the equilibrium equations in integral form. Integration is exact and is carried out with a symbolic calculus tool. To test the element's performance, the intricate stress fields of thick sandwich composites with undamaged and damaged face layers, of piezoelectrically actuated beams, of thermally loaded laminates, and near a two-material wedge singularity are investigated. The element appears robust, stable and rather accurate with reasonably fine meshes. Compared with displacement-based counterpart elements, the computational effort is not larger.
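The tri-linear shape functions mentioned above are standard, and their basic properties (nodal interpolation, partition of unity) are easy to verify; the element's mixed formulation itself is not reproduced here.

```python
import numpy as np

# Node coordinates of the eight-node brick in the parent domain [-1, 1]^3
nodes = np.array([[sx, sy, sz]
                  for sz in (-1, 1) for sy in (-1, 1) for sx in (-1, 1)])

def shape_functions(xi, eta, zeta):
    """C0 tri-linear shape functions:
    N_i = (1 + xi*xi_i)(1 + eta*eta_i)(1 + zeta*zeta_i) / 8."""
    return ((1 + xi * nodes[:, 0])
            * (1 + eta * nodes[:, 1])
            * (1 + zeta * nodes[:, 2])) / 8.0

N = shape_functions(0.2, -0.5, 0.7)
print(N.sum())  # partition of unity: the eight functions sum to 1
```

In the mixed element, the same eight functions interpolate both the three displacements and the three interlaminar stresses from their nodal values.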