Conference Paper

Tensor-Decomposition-Based Sequential Design of Experiments for Computer Simulations

Article
TWINKLE is a library for building families of solvers that perform Canonical Polyadic Decomposition (CPD) of tensors. The common characteristic of these solvers is that the data structure supporting the tuneable solution strategy is based on a Galerkin projection of the phase space, which allows processing and recovering tensors described by highly sparse and unstructured data. For high performance, TWINKLE is written in C++ and uses the open-source Armadillo library for linear algebra and scientific computing, which builds on LAPACK (Linear Algebra PACKage) and BLAS (Basic Linear Algebra Subprograms) routines. The library was implemented with future extensibility in mind, so that it can be adapted to the needs of different users in academia and industry for Reduced Order Modelling (ROM) and data analysis by means of tensor decomposition. It is especially focused on post-processing data from Computer-Aided Engineering (CAE) simulation tools.
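As background for the kind of CPD solvers TWINKLE builds, the sketch below shows a generic rank-R CPD of a dense 3-way tensor via alternating least squares (ALS) in NumPy. This is the textbook algorithm only, not TWINKLE's C++/Armadillo API; the names khatri_rao and cp_als are hypothetical.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of two factor matrices."""
    r = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, r)

def cp_als(X, rank, n_iter=100, seed=0):
    """Fit A, B, C so that X[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r]."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode-n unfoldings: matricize X along each axis (C-order reshapes).
    X0 = X.reshape(I, -1)
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)
    for _ in range(n_iter):
        # Each factor is the least-squares solution against the Khatri-Rao
        # product of the other two factors, holding those two fixed.
        A = X0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C
```

A fit can be checked by reconstructing the tensor from the factors, e.g. `X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)`, and comparing against X.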
Conference Paper
The sequential design methodology for global surrogate modeling of complex systems iteratively trains the model on a growing set of samples. Sample selection is a critical step in this process and influences the final quality of the model: it is desirable to use as few samples as possible while building an accurate model, exploiting the insight gained in previous iterations. A robust sampling scheme is considered that employs Monte Carlo Voronoi tessellations for exploration and linear gradients for exploitation, and several schemes for balancing this trade-off are investigated. Experimental results on benchmark examples indicate that some schemes can yield a substantially smaller model error, especially when the system under consideration exhibits highly non-linear behavior.
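Below is a minimal sketch of one iteration of such an exploration/exploitation scheme, under stated assumptions: the design space is normalized to the unit hypercube, at least two samples already exist, and the candidate-based Voronoi estimate, the finite-difference gradient proxy, and the blending weight w_explore are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def next_sample(X, y, n_candidates=2000, w_explore=0.5, seed=0):
    """Pick the next design point from random candidates in [0, 1]^d.

    Exploration: a candidate's Voronoi-cell size around the existing samples
    X is estimated Monte-Carlo style by its distance to the nearest sample.
    Exploitation: local non-linearity is proxied by the finite-difference
    gradient between the two samples nearest to each candidate.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    cand = rng.random((n_candidates, d))
    dists = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    nearest = dists[np.arange(n_candidates), order[:, 0]]
    # Crude linear-gradient proxy: |y difference| per unit distance between
    # the two existing samples closest to each candidate.
    a, b = order[:, 0], order[:, 1]
    gap = np.linalg.norm(X[a] - X[b], axis=1) + 1e-12
    grad = np.abs(y[a] - y[b]) / gap
    explore = nearest / nearest.max()
    exploit = grad / (grad.max() + 1e-12)
    score = w_explore * explore + (1.0 - w_explore) * exploit
    return cand[np.argmax(score)]
```

In use, `x_new = next_sample(X, y)` would be evaluated on the simulator and appended to the design, after which the surrogate is retrained; varying w_explore between 0 and 1 realizes the exploration/exploitation trade-off discussed above.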
Article
In this article, we present a detailed overview of the literature on the design of computer experiments. We classify the existing literature broadly into two categories: static and adaptive design of experiments (DoE). We begin with the abundant literature on static DoE, its chronological evolution, and its pros and cons. Our discussion naturally points to the challenges faced by static techniques. Adaptive DoE techniques employ intelligent, iterative strategies to address these challenges by combining system knowledge with space-filling criteria for sample placement. We critically analyze the adaptive DoE literature based on the key features of its placement strategies. Our numerical and visual analyses of static DoE techniques reveal the excellent performance of Sobol sampling (SOB3) for higher dimensions, and that of Hammersley (HAM) and Halton (HAL) sampling for lower dimensions. Finally, we identify several promising directions for future DoE research.
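As a hedged illustration of the static space-filling designs compared above, the sketch below draws Sobol and Halton samples with SciPy's quasi-Monte Carlo module (scipy.stats.qmc, available from SciPy 1.7; Hammersley sampling is not provided there). The dimension, sample count, and the Latin hypercube baseline are arbitrary choices, not the survey's exact setup.

```python
# Compare static space-filling designs in [0, 1]^d via their discrepancy.
from scipy.stats import qmc

d = 8                                                    # design dimension
sobol = qmc.Sobol(d=d, scramble=True).random_base2(m=8)  # 2**8 = 256 points
halton = qmc.Halton(d=d, scramble=True).random(n=256)
lhs = qmc.LatinHypercube(d=d).random(n=256)              # baseline design

# Centered L2 discrepancy: lower means more uniform coverage of the cube.
for name, pts in [("Sobol", sobol), ("Halton", halton), ("LHS", lhs)]:
    print(name, qmc.discrepancy(pts))
```

Sobol points are drawn in powers of two (random_base2) because the sequence is balanced at those sample sizes, which is consistent with its strong high-dimensional performance noted in the abstract.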