Clemens Elster’s research while affiliated with German National Metrology Institute and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (289)


Example of a successful adversarial attack on an image from the Caltech 256 dataset. The perturbation in (b) is generated using the proposed adversarial attack based on the OMIG method. The Carlini-Wagner perturbation is shown in (c). Reproduced from [18]. CC BY 4.0.
Comparison of perturbations δ for the input images (a) generated by the CW attack in (b) and the OMIG attack with and without certain components (c–e), cf. subcaptions. Reproduced from [18]. CC BY 4.0.
Four analyzed examples from the Caltech 256 dataset. The true classes are ‘porcupine’, ‘chimp’, ‘ostrich’, and ‘gorilla’. Reproduced from [18]. CC BY 4.0.
Comparison between the analyzed adversarial attacks for Caltech 256. The first rows show the adversarial perturbations δ, while the second rows show the attacked images x′. Each column represents an adversarial attack, from left to right: OMIG, DeepFool, CW, PGD, FGSM, and BIM. The values in black rectangles represent SSIM(δ, x) for each attack, with maximum values marked in bold. Reproduced from [18]. CC BY 4.0.
Four analyzed examples from the CUB-200 dataset. The true classes are ‘Brewer’s Sparrow’, ‘grasshopper sparrow’, ‘clay-colored sparrow’, and ‘hooded merganser’. Reproduced from [46]. CC BY 4.0.


Creating adversarial attacks for neural networks with improved interpretability
  • Article
  • Full-text available

May 2025 · 17 Reads

N Amanova · … · C Elster

Adversarial attacks pose a significant threat to the robustness of neural networks. Recent studies have shown that an imperceptible altering of the input can fool trained models to yield false predictions. However, the perturbations that underlie adversarial attacks are usually hard to interpret. This paper proposes a novel interpretable adversarial attack based on a recently introduced explainability technique known as oriented, modified integrated gradients. We will demonstrate the effectiveness of the proposed method on popular image classification datasets and compare it with state-of-the-art adversarial attacks. In addition, we will analyze the visual similarity between original images and the corresponding adversarial perturbations, both qualitatively and quantitatively, and demonstrate that the proposed approach generates adversarial perturbations with significantly improved interpretability.
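The OMIG attack itself is not reproduced here, but the core gradient-based idea shared by the baseline attacks it is compared against (e.g. FGSM) can be sketched on a toy logistic model. Everything below, the weights, the input, and the step size eps, is an illustrative assumption, not the paper's method:

```python
import numpy as np

# Toy binary classifier: logistic regression on a 2-D input.
# All weights and inputs below are made-up illustration values.
w = np.array([2.0, -1.0])
b = 0.1

def predict_prob(x):
    """P(class 1 | x) under the toy logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm_perturbation(x, y_true, eps):
    """Fast Gradient Sign Method: take one epsilon-sized step in the
    direction that increases the cross-entropy loss for the true label."""
    p = predict_prob(x)
    grad_x = (p - y_true) * w      # d(loss)/dx for logistic regression
    return eps * np.sign(grad_x)

x = np.array([0.5, 0.5])           # clean input, true label 1
delta = fgsm_perturbation(x, y_true=1.0, eps=0.8)
x_adv = x + delta                  # small change, flipped prediction

print(predict_prob(x), predict_prob(x_adv))
```

The sign-of-gradient perturbation is bounded in magnitude by eps per coordinate, which is why such attacks can remain visually imperceptible while flipping the prediction.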



Using virtual experiments to improve data analysis

March 2025 · 10 Reads

In the data analysis of measurements, the simplified assumption of homoscedastic Gaussian noise is often made to account for the random fluctuations between observations. This assumption may be inadequate and can deteriorate the results of a data analysis. Repeated measurements, which could be used to infer the true distribution of the data, may be inaccessible, making that distribution hard to determine. In such circumstances, carefully designed virtual experiments (VEs) can mimic real and possibly complex measurement processes to infer the true data distribution, which can subsequently be accounted for in an improved analysis of real observations. We explore the potential benefit of such an approach for a metrological application, the tilted-wave interferometer (TWI). Our VE for the TWI yields not just the mean of the data, but also their physically modelled random fluctuations arising in repeated observations. We use the virtual data to derive a statistical data model that includes correlations and heteroscedasticity. Applying a Bayesian data analysis procedure that utilises this statistical model together with a vague prior for the quantity of interest, we analyse virtual data with a known ground truth and assess the quality of the resulting estimates. In addition, a comparison is carried out with the often-employed simplified approach that assumes homoscedastic, independent noise. We observe a significant improvement in the results when a more adequate statistical model for the data is utilised, along with a reliable uncertainty quantification. This work proposes extending the use of a VE to inferring the noise characteristics of real observations, in turn leading to significantly improved data analysis procedures; the potential benefit is shown to be substantial for the considered metrological case study. Future research is discussed, including other ways that VEs could be used to further improve data analysis.
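A minimal sketch, not the TWI itself, of the benefit the abstract describes: virtual data are generated with a hypothetical x-dependent noise model, and a weighted least-squares analysis that uses the VE-derived variances is compared against the simplified homoscedastic analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical virtual experiment: straight-line response with
# heteroscedastic noise whose standard deviation grows with x.
x = np.linspace(1.0, 10.0, 20)
true_a, true_b = 0.5, 2.0
sigma = 0.1 * x                      # noise model assumed known from the VE

def simulate():
    """One virtual measurement run."""
    return true_a + true_b * x + rng.normal(0.0, sigma)

def fit(y, weights):
    """Weighted least-squares fit of a straight line, returns (a, b)."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Compare the simplified homoscedastic analysis (equal weights) with an
# analysis that uses the VE-derived heteroscedastic variances.
err_ols, err_wls = 0.0, 0.0
for _ in range(500):
    y = simulate()
    err_ols += (fit(y, np.ones_like(x))[1] - true_b) ** 2
    err_wls += (fit(y, 1.0 / sigma**2)[1] - true_b) ** 2

print(err_ols / 500, err_wls / 500)
```

Because the weighted fit downweights the noisier observations, its slope estimates scatter less around the truth than the equal-weight fit, mirroring the improvement the abstract reports for an adequate statistical model.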


Uncertainty evaluation using virtual experiments: bridging JCGM 101 and a Bayesian framework

March 2025 · 5 Reads · tm - Technisches Messen

The applications of virtual metrology and the concept of using virtual experiments (VEs) or digital twins in measurement tasks have gained significant interest in recent years. When real measurements are costly or scarce, a VE can replicate complex mathematical and physical models. As a digital representation of a measurement process, a VE can, among other use cases, be used in an uncertainty evaluation. Here, simulated and real observations can be combined without explicit knowledge of a model for the measurand – a usual requirement for uncertainty evaluation according to the GUM and its supplement JCGM 101. Including the variability of repeated measurements in a VE can be a challenging task. However, metrologists are aware of the potential impact of repeatability conditions and usually have significant knowledge about the precision of their measurement instrument. In this work, we develop a Monte Carlo sampling approach and software to perform an uncertainty evaluation that includes such prior knowledge about the precision of the measurement device, requiring only evaluations of the VE. Using an example of a calibrated measurement application, we demonstrate that there is a notable benefit to using available prior knowledge during an uncertainty evaluation, particularly in the case of limited observations.
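As a hedged illustration of the idea (not the paper's software), consider a hypothetical additive virtual experiment whose noise level is known only through prior knowledge about the device precision; a plain Monte Carlo loop that only evaluates the VE then yields an estimate and an associated uncertainty:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical additive virtual experiment: the instrument reports the
# measurand plus zero-mean noise; the precision sigma is known only
# through prior knowledge (a normal prior here, purely for illustration).
def virtual_experiment(x, sigma):
    return x + rng.normal(0.0, sigma)

y_obs = 10.02                              # single real observation
prior_mean, prior_sd = 0.05, 0.01          # prior knowledge of the precision

M = 5_000
x_hat = y_obs                              # estimate for the additive model
draws = np.empty(M)
for i in range(M):
    sigma_i = abs(rng.normal(prior_mean, prior_sd))       # sample precision
    noise_i = virtual_experiment(x_hat, sigma_i) - x_hat  # VE evaluation only
    draws[i] = x_hat - noise_i             # propagate to the measurand

print(draws.mean(), draws.std())           # estimate and standard uncertainty
```

Note that no explicit measurement model is inverted; the spread of the draws reflects both the simulated repeatability and the prior uncertainty about the device precision.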


Deep Errors-in-Variables using a diffusion model

February 2025 · 5 Reads · Machine Learning

Errors-in-Variables is the statistical concept used to explicitly model input variable errors caused, for example, by noise. While it has long been known in statistics that not accounting for such errors can produce a substantial bias, the vast majority of deep learning models have thus far neglected Errors-in-Variables approaches. Reasons for this include a significant increase of the numerical burden and the challenge of assigning an appropriate prior in a Bayesian treatment. To date, the attempts made to use Errors-in-Variables for neural networks do not scale to deep networks or are too simplistic to enhance the prediction performance. This work shows for the first time how Bayesian deep Errors-in-Variables models can increase the prediction performance. We present a scalable variational inference scheme for Bayesian Errors-in-Variables and demonstrate a significant increase in prediction performance for the case of image classification. Concretely, we use a diffusion model as input posterior to obtain a distribution over the denoised image data. We also observe that training the diffusion model on a noise-free surrogate dataset can suffice to achieve an improved prediction performance on noisy data.
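The paper's diffusion-based method is far beyond a snippet, but the underlying Errors-in-Variables bias is easy to demonstrate in the classical linear setting; the data, noise levels, and the method-of-moments correction below are textbook assumptions, not the paper's approach:

```python
import numpy as np

rng = np.random.default_rng(3)

# True regression: y = 2 * x. The inputs are observed with noise,
# which a naive least-squares analysis ignores.
n = 20_000
x_true = rng.normal(0.0, 1.0, n)
y = 2.0 * x_true + rng.normal(0.0, 0.1, n)
sigma_in = 0.5
x_obs = x_true + rng.normal(0.0, sigma_in, n)   # Errors-in-Variables

# Naive slope: attenuated towards zero by var(x) / (var(x) + sigma_in^2).
slope_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs)

# Method-of-moments correction using the known input-noise variance.
slope_corrected = np.cov(x_obs, y)[0, 1] / (np.var(x_obs) - sigma_in**2)

print(slope_naive, slope_corrected)
```

The naive slope lands near 1.6 instead of 2.0, which is exactly the "substantial bias" the abstract refers to; the corrected estimator removes it when the input-noise variance is known.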



Figure 1. Experimental setting for the generic polynomial regression example of Section 4.1. The dotted green line indicates the unknown polynomial, from which noisy observations (black crosses) can be obtained at the nominal measurement locations {1, . . . , 10}. The resulting (mean) estimates of the virtual experiment approach (solid red line) and the JCGM-102 Monte Carlo approach (dashed blue line) are shown.
Figure 2. Marginal densities of the measurand, i.e., polynomial coefficients, of the generic polynomial regression example are shown. The solid red line represents the result of the application of Algorithm 2 using only the virtual experiment and its gradient. The dashed blue line is the result of the JCGM-102 Monte Carlo approach and Algorithm 1 using the corresponding measurement model (13).
Figure 3. Marginal densities of the measurand, in terms of circle radius and position, of the coordinate measuring machine example are shown. The solid red line represents the result of applying Algorithm 2 using only the virtual experiment and its gradient. The dashed blue line is the result of the JCGM-102 Monte Carlo approach and Algorithm 1 using the corresponding measurement model (15), and the dotted green line corresponds to the JCGM-102 Monte Carlo approach and Algorithm 1 applied to the alternative measurement model (19).
Using a Multivariate Virtual Experiment for Uncertainty Evaluation with Unknown Variance

October 2024 · 47 Reads · 3 Citations · Metrology

Virtual experiments are a digital representation of a real measurement and play a crucial role in modern measurement sciences and metrology. Beyond their common usage as a modeling and validation tool, a virtual experiment may also be employed to perform a parameter sensitivity analysis or to carry out a measurement uncertainty evaluation. For the latter to be compliant with statistical principles and metrological guidelines, the procedure to obtain an estimate and a corresponding measurement uncertainty requires careful consideration. We employ a Monte Carlo sampling procedure using a virtual experiment that allows one to perform a measurement uncertainty evaluation according to the Monte Carlo approach of JCGM-101 and JCGM-102, two widely applied guidelines for uncertainty evaluation in metrology. We extend and formalize a previously published approach for simple additive models to account for a large class of non-linear virtual experiments and measurement models, for multidimensional data and output quantities, and for the case of unknown variance of repeated measurements. The algorithm developed here provides a simple procedure for the evaluation of measurement uncertainty that may be applied in various applications whose virtual experiment admits a certain structure. Moreover, the measurement model commonly employed for uncertainty evaluation according to JCGM-101 and JCGM-102 is not required for this algorithm; only evaluations of the virtual experiment are performed to obtain an estimate and an associated uncertainty of the measurand. We demonstrate the efficacy of the developed approach and the effect of the underlying assumptions for a generic polynomial regression example and for a simplified coordinate measuring machine and its virtual representation. The results of this work highlight that considerable effort, diligence, and statistical consideration are needed to use a virtual experiment for uncertainty evaluation in a way that ensures equivalence with the accepted guidelines.
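A rough sketch of the Monte Carlo idea for the generic polynomial regression example, under assumed (hypothetical) coefficients and noise level; the paper's algorithms are considerably more general:

```python
import numpy as np

rng = np.random.default_rng(4)

# Generic polynomial regression, loosely following the paper's example:
# noisy observations of an unknown quadratic at nominal locations 1..10.
locations = np.arange(1.0, 11.0)
true_coeffs = np.array([1.0, -0.5, 0.05])     # hypothetical ground truth
sigma = 0.2                                    # repeatability of the VE

def virtual_experiment(coeffs):
    """One virtual measurement run at all nominal locations."""
    design = np.vander(locations, 3, increasing=True)
    return design @ coeffs + rng.normal(0.0, sigma, locations.size)

def estimate(data):
    """Least-squares estimate of the coefficients from one data set."""
    design = np.vander(locations, 3, increasing=True)
    return np.linalg.lstsq(design, data, rcond=None)[0]

# Monte Carlo: re-run the VE many times, re-estimate each time, and take
# the spread of the estimates as the uncertainty of the measurand.
samples = np.array([estimate(virtual_experiment(true_coeffs))
                    for _ in range(2000)])
print(samples.mean(axis=0), samples.std(axis=0))
```

The per-coefficient standard deviations play the role of the marginal uncertainties whose densities Figures 1 and 2 compare against the JCGM-102 Monte Carlo approach.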


Overview of the methodology and contributions in this work: informative Bayesian Type A uncertainty evaluation and subsequent Monte Carlo propagation to obtain a state-of-knowledge distribution for the measurand. If independent sampling from the marginal posterior πμ(μ|x1, …, xn), cf. (1), is possible, plain Monte Carlo sampling can be used to sample from the posterior distribution for the measurand by combining samples from πμ(μ|x1, …, xn) with samples from the Type B state-of-knowledge PDF πZ(z). The theoretical justification is given in Section 2 and practical examples are given in Section 3.
Results for the example of Section 3.1. We show the resulting state-of-knowledge distribution for the measurand mX, denoting the deviation of a weight of mass 10 kg. The PDFs resulting from the noninformative prior (NIP) are shown in blue, from the mildly informative prior (MIP) in green, and from the strongly informative prior (SIP) in red. Both MIP and SIP are computed once using plain Monte Carlo (MC) and once using a vague prior for the measurand and MCMC.
Results for the lognormal example of Section 3.2. We show the state of knowledge distributions for the measurand δL. The resulting PDF using the noninformative prior (NIP) is shown as a solid blue line and the informative prior (SIP (MC)) as a solid red line. The PDF resulting from the conventional Bayesian approach (SIP (MCMC)) using a vague measurand prior is shown by the dotted red line.
Results for the example of Section 4. We show the resulting state of knowledge distribution for the measurand A, denoting the activity of a radiating specimen in kBq. We show the PDFs resulting from the noninformative prior (JCGM 101) in blue, the informative χ² prior in green and the truncated normal prior in red. (MC) denotes the use of plain Monte Carlo sampling and (MCMC) indicates Markov Chain Monte Carlo sampling.
Utilizing prior knowledge about the measurement process for uncertainty evaluation through plain Monte Carlo sampling

August 2024 · 72 Reads · 1 Citation · International Journal of Metrology and Quality Engineering

Type A uncertainty evaluation can significantly benefit from incorporating prior knowledge about the precision of an employed measurement device, which allows for reliable uncertainty assessments with limited observations. The Bayesian framework, employing Bayes' theorem and Markov Chain Monte Carlo (MCMC), is recommended to incorporate such prior knowledge in a statistically rigorous way. While MCMC is recommended, metrologists are usually well familiar with plain Monte Carlo sampling, and previous work demonstrated the integration of similar prior knowledge into an uncertainty evaluation framework following the plain Monte Carlo sampling of JCGM 101, the Supplement 1 to the GUM. In this work, we explore the potential and limitations of such an approach, presenting classes of data distributions for informative Type A uncertainty evaluations. Our work justifies an informative extension of the JCGM 101 Type A uncertainty evaluation from a statistical perspective, providing theoretical insight and practical guidance. Explicit distributions are proposed for input quantities in Type A scenarios, aligning with Bayesian uncertainty evaluations. In addition, inherent limitations of the JCGM 101 Monte Carlo approach are discussed concerning general Bayesian inference. Metrological examples support the theoretical findings, significantly expanding the applicability of the JCGM 101 Monte Carlo technique from a Bayesian perspective.
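The plain Monte Carlo combination of Type A and Type B knowledge that this line of work extends can be sketched as follows; the data and the rectangular Type B contribution are made up, and the scaled-t posterior used here is the standard JCGM 101 noninformative-prior Type A recipe, not the informative extension the paper proposes:

```python
import numpy as np

rng = np.random.default_rng(5)

# Type A: a few repeated indications of the same quantity (made-up data).
x = np.array([9.98, 10.03, 10.01, 9.99, 10.02, 9.97])
n, xbar, s = x.size, x.mean(), x.std(ddof=1)

M = 200_000
# Posterior for the mean under the standard noninformative prior:
# a shifted and scaled Student-t with n-1 degrees of freedom.
mu = xbar + (s / np.sqrt(n)) * rng.standard_t(n - 1, M)

# Type B: e.g. a rectangular (uniform) correction from a certificate.
z = rng.uniform(-0.02, 0.02, M)

# Plain Monte Carlo: combine independent samples to obtain the
# state-of-knowledge distribution of the measurand.
y = mu + z
print(y.mean(), y.std())
```

Replacing the scaled-t draws with samples from an informative posterior (reflecting prior knowledge of the device precision) is exactly the kind of extension the abstract analyses.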


Bayesian uncertainty evaluation applied to the tilted-wave interferometer

May 2024 · 11 Reads · 3 Citations

The tilted-wave interferometer is a promising technique for the development of a reference measurement system for the highly accurate form measurement of aspheres and freeform surfaces. The technique combines interferometric measurements, acquired with a special setup, and sophisticated mathematical evaluation procedures. To determine the form of the surface under test, a computational model is required that closely mimics the measurement process of the physical measurement instruments. The parameters of the computational model, which include the sought surface under test, are then tuned by solving an inverse problem. Due to this embedding of the computational model in the real experiment and the overall complexity, a thorough uncertainty evaluation is challenging. In this work, a Bayesian approach is proposed to tackle the inverse problem, based on a statistical model derived from the computational model of the tilted-wave interferometer. Such a procedure naturally allows for uncertainty quantification. We present an approximate inference scheme to efficiently sample quantities of the posterior using Monte Carlo sampling involving the statistical model. In particular, the derived methodology is applied to the tilted-wave interferometer to obtain an estimate and corresponding uncertainty of the pixel-by-pixel form of the surface under test for two typical surfaces, taking into account a number of key influencing factors. A statistical analysis using experimental design is employed to identify the main influencing factors, and a subsequent analysis confirms the efficacy of the derived method.


Flexible Bayesian reliability demonstration testing

April 2024 · 4 Reads · Applied Stochastic Models in Business and Industry

The aim is to demonstrate the reliability of a population at consecutive points in time, where a sample at each current point must prove that at least a specified proportion of the devices functions until the next point with at least a specified probability. To test the reliability of the population, we flexibilise standard lifetime models by allowing the unknown parameter(s) of the corresponding counting process to vary in time. At the same time, we assign a prior distribution that assumes the parameters to be constant within a certain interval. This flexibilisation has several advantages: it can be applied to all parametric lifetimes; its Markov property allows the efficient derivation of the number of defective devices, even for a large number of testing times; and the inference is less certain and hence more realistic, leading to less frequent acceptance of poor-quality populations. On the other hand, the inference is stabilised by the informative prior. Based on the flexibilisation of the homogeneous Poisson process (HPP), we derive acceptance sampling plans to test the future reliability of a population. Applying the zero-failure sampling plans to simulations of Weibull processes shows their good frequentist properties and their robustness. In the case of utility meters subject to German regulations (Mess- und Eichverordnung (MessEV). 2014: 2010–2073.), applying the derived sequential sampling plans when their conditions are met can lead to an extension of the verification validity period. These sampling plans protect the consumer better than those from an HPP and are still cost efficient.
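For the homogeneous Poisson process baseline, a zero-failure acceptance sampling plan reduces to one line of algebra; the failure rate, test horizon, and consumer's risk below are illustrative assumptions, not values from the paper:

```python
import math

# Zero-failure acceptance sampling under a homogeneous Poisson process:
# accept the population only if none of the n tested devices fails during
# the test time t. Consumer's risk: if the true failure rate is lam_bad,
# the probability of (wrongly) accepting is exp(-n * lam_bad * t), which
# the plan bounds by beta.
def required_sample_size(lam_bad, t, beta):
    """Smallest n with P(zero failures | lam_bad) <= beta."""
    return math.ceil(math.log(1.0 / beta) / (lam_bad * t))

n = required_sample_size(lam_bad=0.05, t=2.0, beta=0.1)  # failures/yr, years
print(n)  # 24
```

The paper's flexibilised plans relax the constant-rate assumption behind this formula, which is why they remain protective when the true lifetimes follow, e.g., a Weibull process.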


Citations (65)


... In metrology, VEs support the development of novel and improved measurement systems. For instance, in scenarios where knowledge that was previously required is unknown, such as the actual measurement model, VEs of a certain form can be utilised to provide a valid uncertainty evaluation following uncertainty frameworks in metrology [10][11][12] or the Bayesian paradigm [13,14]. Furthermore, VEs through their ability to generate data are not restricted by the potentially limited availability of existing observations, and allow the user to control input parameters to assess the relevant behaviours of their measurement process, device, or instrument [15]. ...

Reference:

Using virtual experiments to improve data analysis
JCGM 101-compliant uncertainty evaluation using virtual experiments
  • Citing Article
  • January 2025

Measurement Sensors


Using a Multivariate Virtual Experiment for Uncertainty Evaluation with Unknown Variance

Metrology


Bayesian uncertainty evaluation applied to the tilted-wave interferometer

... The OMIG method was originally introduced for machine learning-based mammography image quality assessment [16], where it has been shown to produce explainability maps that are more interpretable than those obtained by other popular methods, thereby confirming the trustworthiness of the employed models. In [17] the method was applied as an explainability method for classification problems. Although of different origin, the OMIG method works similarly to the well-established Carlini-Wagner adversarial attack. ...

Finding the input features that reduce the entropy of a neural network’s prediction

Applied Intelligence

... This leads to an expansion of the sample which, in turn, generates oscillations of the AFM-cantilever. Therefore, the chemical nature of the substance and the location can be assessed at a resolution similar to that of AFM [152]. The amplitude of the cantilever ringdown decay curve directly correlates with the IR absorbance of the sample. ...

Compressed AFM-IR hyperspectral nanoimaging

... Recent advances in the field of machine learning have opened the way for new approaches to represent the prior PDF. In particular, Generative Adversarial Networks (GANs), a class of unsupervised machine learning algorithms (Goodfellow et al., 2014), aim at generating data that mimic a target distribution, and have been leveraged to represent the prior (Arridge et al., 2019; Holden et al., 2022; Patel et al., 2022; Marschall et al., 2023). ...

Machine learning based priors for Bayesian inversion in MR imaging

... Another approach involved identifying the largest coherent subset that exhibited pairwise equivalence, based on bilateral degrees of equivalence to a desired significance level [13,14]. Achieving consistency on subsets was also investigated by Bergoglio et al [15] and Bodnar et al [16,17,18], who also included polynomial interpolation of the drift trend. Since no clear outliers were identified despite observing high χ² values, we employed an estimator proposed by Paule and Mandel [19]. ...

On modelling of artefact instability in interlaboratory comparisons

... The inclusion of an additional probabilistic element following a homoscedastic Gaussian distribution to account for noise in data analysis has been applied recently in cases where VEs are used in conjunction with real measurements, e.g. [22,23]. However, this can have potential issues in cases where the homoscedastic Gaussian assumption for the errors is not the most appropriate option. ...

Virtual experiments for the assessment of data analysis and uncertainty quantification methods in scatterometry

... Most of the employed approaches learn an inverse mapping from a large database containing pairs of data and the corresponding solutions [27], or they are designed to further improve approximate solutions such as the Moore-Penrose inverse [28]. In some cases, deep learning has also been utilized to determine prior distributions in a Bayesian inversion through so-called generative models [29,30,31]. Typical approaches to generative models are variational autoencoders (VAEs) [32] and generative adversarial networks (GANs) [33]. ...

Generative models and Bayesian inversion using Laplace approximation

Computational Statistics