Article

Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation


Abstract

An algorithm, the bootstrap filter, is proposed for implementing recursive Bayesian filters. The required density of the state vector is represented as a set of random samples, which are updated and propagated by the algorithm. The method is not restricted by assumptions of linearity or Gaussian noise: it may be applied to any state transition or measurement model. A simulation example of the bearings only tracking problem is presented. This simulation includes schemes for improving the efficiency of the basic algorithm. For this example, the performance of the bootstrap filter is greatly superior to the standard extended Kalman filter.
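The predict-update-resample cycle summarized in the abstract can be stated compactly in code. Below is a minimal sketch in Python (not the paper's original pseudocode); the transition and likelihood functions are illustrative stand-ins for whatever state and measurement models are being filtered.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter_step(particles, y, transition, likelihood):
    """One predict-update-resample cycle over N equally weighted particles."""
    # Prediction: propagate each sample through the state transition model.
    particles = transition(particles, rng)
    # Update: weight each sample by the likelihood of the new measurement.
    weights = likelihood(y, particles)
    weights = weights / weights.sum()
    # Resampling: draw N indices with replacement, proportional to weight.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Illustrative model: random-walk state, Gaussian measurement noise.
transition = lambda x, rng: x + rng.normal(0.0, 0.5, size=x.shape)
likelihood = lambda y, x: np.exp(-0.5 * (y - x) ** 2)

particles = rng.normal(0.0, 1.0, size=1000)   # samples from the prior
for y in (0.3, 0.1, -0.2):                    # stream of measurements
    particles = bootstrap_filter_step(particles, y, transition, likelihood)
print(particles.mean())                       # posterior mean estimate
```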


... The original idea of using resampling as a solution to the degeneracy of the samples in an SMC method was proposed in [5], which presents a resampling scheme called multinomial resampling. The objective of resampling is to replace the current population of N weighted samples with a new degeneracy-free population of N samples by removing the samples with low weights and replacing them with copies of the samples with high weights (i.e. the healthy samples), in such a way that the estimates after resampling remain unbiased. ...
... Several alternative resampling schemes have been proposed since the original multinomial resampling in [5]. Some of the most commonly used schemes are stratified resampling [3,8], and systematic resampling [1,8]. ...
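As a concrete illustration of the schemes named in these excerpts, here is a brief Python sketch of multinomial resampling (independent draws) alongside systematic resampling (a single random offset on a regular grid over the cumulative weights); stratified resampling differs from systematic only in drawing a fresh uniform variate per stratum. This is a generic sketch, not code from any of the cited papers.

```python
import numpy as np

def multinomial_resample(weights, rng):
    """Original scheme of [5]: N independent draws from the weight distribution."""
    n = len(weights)
    return rng.choice(n, size=n, p=weights)

def systematic_resample(weights, rng):
    """One uniform offset, then N evenly spaced points over the cumulative weights."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

rng = np.random.default_rng(1)
w = np.array([0.5, 0.3, 0.1, 0.1])    # normalized importance weights
print(multinomial_resample(w, rng))   # ancestor indices; vary per run
print(systematic_resample(w, rng))    # lower-variance ancestor indices
```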
... where the fraction immediately preceding the expectation results from the sum of the small weights being less than unity. The proof of the remainder of (21) is provided in [5] and is omitted here for brevity. For the term (II) on the right-hand side of (20), we consider IS conditional on the value of θ, so that we pose an estimate of a function of x by averaging over samples of θ, as follows: ...
Conference Paper
Sequential Monte Carlo (SMC) samplers are a family of powerful Bayesian inference methods that combine sampling and resampling to sample from challenging posterior distributions. This makes SMC widely used in several application domains of statistics and Machine Learning. The aim of this paper is to introduce a new resampling framework, called Conditional Importance Resampling (CIR), that reduces the quantization error arising in the application of traditional resampling schemes. To assess the impact of this approach, we conduct a comparative study between two SMC samplers, differing solely in their resampling schemes: one utilizing systematic resampling and the other employing CIR. The overall improvement is demonstrated by theoretical results and numerical experiments for sampling a forest of Bayesian Decision Trees, focusing on its application in classification and regression tasks.
... The underlying principle behind PFs is that of approximating posteriors with weighted samples, which are drawn from user-chosen distributions called proposals. The publication of the bootstrap PF (BPF) [38] sparked decades of methodological research into PFs, which are impossible to summarize fully and spanned several communities including statistics [35], signal processing [18], and machine learning [34]. [21] and later [24] described the use of optimal proposals within PFs. [17] introduced sequential Monte Carlo samplers (SMC), i.e., a framework that generalizes PFs and makes the methodology applicable to general sequences of distributions outside the context of SSMs. ...
... This is called adaptive resampling (see, e.g., [8, Chapter 10.2]). The most popular choice for q(x_t | y_t, x_{t-1}^{(m)}) is f(x_t | x_{t-1}^{(m)}), which leads to the bootstrap particle filter (BPF) [38]. ...
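The adaptive resampling mentioned in the excerpt is usually triggered by the effective sample size (ESS) of the normalized weights. A small sketch, with the common N/2 threshold as an illustrative choice rather than a prescription from the cited works:

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum(w_i^2): equals N for uniform weights, 1 under full degeneracy."""
    return 1.0 / np.sum(weights ** 2)

def maybe_resample(particles, weights, rng, threshold_frac=0.5):
    """Resample only when the ESS falls below threshold_frac * N."""
    n = len(weights)
    if effective_sample_size(weights) < threshold_frac * n:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```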
... Now, since given A_{t-1} the particles at time t are conditionally independent with pdf ψ_t(x_t), we have that the integrals within (38) are identical: ...
... To calculate messages that, due to nonlinearities in the system model, cannot be evaluated in closed form, SPAbased methods for MOT typically rely on particle-based computations that closely follow the bootstrap particle filter (BPF) [23]- [25] and rely on importance sampling. A known drawback of this approach is that it typically fails in tracking problems where (i) the states of individual objects have dimensions higher than four, (ii) measurements are very informative compared to the predicted/prior pdfs. ...
... Sometimes particle degeneracy can be avoided by using vast numbers of particles or by implementing regularization strategies [16], [25], [27]. A straightforward approach to improve sampling efficiency and avoid particle degeneracy is to design proposal pdfs that are similar to the posterior pdfs [23]-[25]. However, finding a distribution that is easy to sample from and simultaneously similar to the posterior pdfs is challenging. (Figure caption: assuming no measurement noise, the 1-D TDOA measurement describes potential 3-D object locations on the hyperboloid shown in red.) ...
... A simple choice for the proposal pdf used in the update step of the conventional "bootstrap" particle filter [23], [25] is the prior pdf f (x). However, for a feasible number of particles N p and most choices of the proposal pdf, importance sampling can suffer from particle degeneracy [26]. ...
Article
Full-text available
Passive monitoring of acoustic or radio sources has important applications in modern convenience, public safety, and surveillance. A key task in passive monitoring is multiobject tracking (MOT). This paper presents a Bayesian method for multisensor MOT for challenging tracking problems where the object states are high-dimensional, and the measurements follow a nonlinear model. Our method is developed in the framework of factor graphs and the sum-product algorithm (SPA) and implemented using random samples or “particles”. The multimodal probability density functions (pdfs) provided by the SPA are effectively represented by a Gaussian mixture model (GMM). To perform the operations of the SPA with improved sample efficiency, we make use of particle flow (PFL). Here, particles are migrated towards regions of high likelihood based on the solution of a partial differential equation. This makes it possible to obtain good object detection and tracking performance even in challenging multisensor MOT scenarios with single sensor measurements that have a lower dimension than the object positions. We perform a numerical evaluation in a passive acoustic monitoring scenario where multiple sources are tracked in 3-D from 1-D time-difference-of-arrival (TDOA) measurements provided by pairs of hydrophones. Our numerical results demonstrate favorable detection and estimation accuracy compared to state-of-the-art reference techniques.
... The state space model is an elegant statistical framework for describing time series data, with practical applications in various research fields due to its flexibility for interpreting observed data [14][15][16][17][18] . Simple state space models, such as linear and Gaussian type models, can be estimated efficiently using the Kalman-filter 19 . ...
... Furthermore, seismic activity varies in time, suggesting that the optimal window width also varies in time. ...
... A nonlinear and non-Gaussian state space model, in which the Kalman-filter might not be effective, can be estimated using a particle filter, also known as the sequential Monte Carlo method. The particle filter approximates the posterior probability density function of the state variables using a set of particles, in which each particle represents a possible state of the system and its weight reflects the likelihood of the observations 14,15,17 . A flexible and widely applicable method that combines a state space model and a particle filter enables real-time estimation that robustly follows unsteady-changing objects such as the b value of the GR law. ...
Article
Full-text available
Earthquakes follow an exponential distribution referred to as the Gutenberg–Richter law, which is characterized by the b value that represents a ratio of the number of large earthquakes to that of small earthquakes. Spatial and temporal variation in the b value is important for assessing the probability of a larger earthquake. Conventionally, the b value is obtained by a maximum-likelihood estimation based on past earthquakes with a certain sample size. To properly assess the occurrence of earthquakes and understand their dynamics, determining this parameter with a statistically optimal method is important. Here, we discuss a method that uses a state space model and a particle filter, as a framework for time-series data, to estimate temporal variation in the b value. We then compared our output with that of a conventional method using data of earthquakes that occurred in Tohoku and Kumamoto regions in Japan. Our results indicate that the proposed method has the advantage of estimating temporal variation of the b value and forecasting magnitude. Moreover, our research suggests no heightened probability of a large earthquake in the Tohoku region, in contrast to previous studies. Simultaneously, there is the potential of a large earthquake in the Kumamoto region, emphasizing the need for enhanced monitoring.
... And many important problems in time series analysis can be solved using state-space models (Kitagawa and Gersch 1984). In the 1990s, various sequential Monte Carlo methods, referred to as bootstrap filters, Monte Carlo filters, and particle filters, were developed (Gordon et al. 1993, Kitagawa 1993, 1996, Doucet et al. 2000, 2001). In these methods, arbitrary distributions of the state and the system noise are expressed by many particles. ...
... Sequential Monte Carlo filter and smoother, hereinafter referred to as particle filter, were developed to mitigate this problem. In this method, each distribution appearing in the recursive filter and smoother is approximated by many "particles" that can be considered as realizations from that distribution (Gordon et al. 1993, Kitagawa 1993, 1996). ...
Preprint
Particle filters are applicable to a wide range of nonlinear, non-Gaussian state-space models and have already been applied to a variety of problems. However, there is a problem in the calculation of smoothed distributions, where particles gradually degenerate and accuracy is reduced. The purpose of this paper is to consider the possibility of generating multiple particles in the prediction step of the particle filter and to empirically verify the effect using real data.
... For nonlinear and non-Gaussian Bayesian state estimation, [12] propose a bootstrap filter, informed by the results of [30], to represent and recursively generate an approximation to the state distribution, using a set of random samples. For recursive Bayesian estimation, the general model takes the following form (Gordon et al. [12]): ...
... For nonlinear time series in particular, sequential sampling methods will be confronted with outlying or extreme values, the presence of which can necessitate an extremely large number of particles, N p , as observed by [12]. As we shall see in Sect. ...
Article
Full-text available
We present an approach to selecting the distributions in sampling-resampling which improves the efficiency of the weighted bootstrap. To complement the standard scheme of sampling from the prior and reweighting with the likelihood, we introduce a reversed scheme, which samples from the (normalized) likelihood and reweights with the prior. We begin with some motivating examples, before developing the relevant theory. We then apply the approach to the particle filtering of time series, including nonlinear and non-Gaussian Bayesian state-space models, a task that demands efficiency, given the repeated application of the weighted bootstrap. Through simulation studies on a normal dynamic linear model, Poisson hidden Markov model, and stochastic volatility model, we demonstrate the gains in efficiency obtained by the approach, involving the choice of the standard or reversed filter. In addition, for the stochastic volatility model, we provide three real-data examples, including a comparison with importance sampling methods that attempt to incorporate information about the data indirectly into the standard filtering scheme and an extension to multivariate models. We determine that the reversed filtering scheme offers an advantage over such auxiliary methods owing to its ability to incorporate information about the data directly into the sampling, an ability that further facilitates its performance in higher-dimensional settings.
... To capture these properties, we incorporated the particle filtering (PF) method to cope with the uncertainties in the non-stationary aging process. The PF method [34,35] reformulates the model under the Bayesian framework and utilizes a set of particles associated with weights to approximate the state probability density function (pdf). Particles are sampled from a prior estimation of the state pdf and propagated through the modeling process. ...
... However, the variance of these weights increases stochastically over time, which ends up with a few particles dominating the approximation of the state pdf and ignoring other particles [36]. This degeneracy phenomenon prevents the particle filter from being useful until the resampling stage has been included [34]. Model parameters can be assumed as another kind of state to be tracked. ...
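The last sentence of this excerpt (treating model parameters as additional states to be tracked) is commonly implemented by augmenting each particle with a parameter component that receives small artificial noise, so the parameter cloud survives repeated resampling; kernel smoothing, as in the paper below, is a refinement of the same idea. A hypothetical sketch, with the linear model x_t = θ·x_{t-1} + noise invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def propagate_augmented(particles, jitter=0.01):
    """Each particle is a pair (x, theta); theta gets artificial jitter to avoid collapse."""
    x, theta = particles[:, 0], particles[:, 1]
    theta = theta + rng.normal(0.0, jitter, size=theta.shape)  # parameter "dynamics"
    x = theta * x + rng.normal(0.0, 0.1, size=x.shape)         # hypothetical state model
    return np.column_stack([x, theta])
```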
Article
Full-text available
Accurate prediction of remaining useful life (RUL) is crucial to the safety and reliability of the lithium-ion battery management system (BMS). However, the performance of a lithium-ion battery deteriorates nonlinearly and is heavily affected by capacity-regeneration phenomena during practical usage, which makes battery RUL prediction challenging. In this paper, a rest-time-based regeneration-phenomena-detection module is proposed and incorporated into the Coulombic efficiency-based degradation model. The model is estimated with the particle filter method to deal with the nonlinear uncertainty during the degradation and regeneration process. The discrete regeneration-detection results should be reflected by the model state instead of the model parameters during the particle filter-estimation process. To decouple the model state and model parameters during the estimation process, a dual-particle filtering estimation framework is proposed to update the model parameters and model state, respectively. A kernel smoothing method is adopted to further smooth the evolution of the model parameters, and the regeneration effects are imposed on the model states during the updating. Our proposed model and the dual-estimation framework were verified with the NASA battery datasets. The experimental results demonstrate that our proposed method is capable of modeling capacity-regeneration phenomena and provides a good RUL-prediction performance for lithium-ion batteries.
... Note that Eq. (15) is analytically intractable except in some special cases using linear models and Gaussian uncertainties. A common solution for the general case of both nonlinear and non-Gaussian state-space models is the adoption of particle methods like particle filters (PF) (Gordon, Salmond, & Smith, 1993; Arulampalam et al., 2002) to obtain an approximation of the required posterior PDF by means of a set of K samples or particles. (Footnotes: 2. The maximum-entropy PDF for the error terms is the one that produces the most prediction uncertainty, i.e., the largest Shannon entropy. 3. The conditioning on θ has been dropped for simpler notation.) ...
... Then, by substituting Eq. (15) into Eq. (17), and by assuming q(x_n | x_{n-1}) = p(x_n | x_{n-1}) (Gordon et al., 1993; Tanizaki & Mariano, 1998), the unnormalized importance weight for the i-th particle at cycle n can be rewritten as: ...
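The excerpt is truncated before the expression it announces. Under the stated choice q(x_n | x_{n-1}) = p(x_n | x_{n-1}), the proposal cancels the transition density, so the standard simplification (a reconstruction in generic notation, not necessarily the paper's exact typesetting) is:

```latex
w_n^{(i)} \;\propto\; w_{n-1}^{(i)}\,
  \frac{p(y_n \mid x_n^{(i)})\, p(x_n^{(i)} \mid x_{n-1}^{(i)})}
       {q(x_n^{(i)} \mid x_{n-1}^{(i)})}
  \;=\; w_{n-1}^{(i)}\, p\!\left(y_n \mid x_n^{(i)}\right).
```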
Article
Railway track geometry deterioration due to traffic loading is a complex problem with important implications in cost and safety. Without appropriate maintenance, track deterioration can lead to severe speed restrictions or disruptions, and in extreme cases, to train derailment. This paper proposes a physics-based reliability-based prognostics framework as a paradigm shift to approach the problem of railway track management. As key contribution, a geo-mechanical elastoplastic model for cyclic ballast settlement is adopted and embedded into a particle filtering algorithm for sequential state estimation and RUL prediction. The suitability of the proposed methodology is investigated and discussed through a case study using published data taken from a laboratory simulation of train loading and tamping on ballast carried out at the University of Nottingham (UK).
... The model has the capability to select the optimal state model with a sliding window, according to the characteristics of the data. PF, which is based on the sequential Monte Carlo method and Bayesian inference, is specialized for tackling nonlinear systems [15]. The key step of the PF-based degradation trend prediction method is the establishment of the state space model. ...
... where p(x_t | x_{t-1}) is the state transition PDF, which can be calculated by (15). In the update step, the observation value y_t at time t is used to correct the prior PDF, so that the posterior PDF can be obtained, that is ...
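This excerpt also truncates before its expression. The prediction-update recursion it refers to is the standard Bayesian filtering pair, written here in generic notation (which may differ from the paper's equation numbering):

```latex
\text{Prediction:}\quad
p(x_t \mid y_{1:t-1}) = \int p(x_t \mid x_{t-1})\, p(x_{t-1} \mid y_{1:t-1})\, dx_{t-1},
\qquad
\text{Update:}\quad
p(x_t \mid y_{1:t}) \propto p(y_t \mid x_t)\, p(x_t \mid y_{1:t-1}).
```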
Article
Full-text available
High accuracy prediction of degradation trend provides valuable information in establishing reasonable maintenance decision-making with the goal of improving the maintenance efficiency and avoiding sudden downtime. The extraction of degradation features and the prediction algorithm are the key factors in degradation trend prediction. In this work, based on composite multiscale grey entropy (CMGE) and dynamic particle filter (PF), a novel prediction architecture is proposed to improve accuracy under different working conditions. The CMGE is proposed as the degradation feature indicator (DFI) extracted from rolling bearing vibration signal. The dynamic PF is proposed to predict the degradation trend of rolling bearing. Two rolling bearing accelerated life tests were conducted to evaluate the performance of the proposed method for rolling bearing degradation trend prediction. Experimental results demonstrate CMGE has good monotonicity and weak data length dependence, which can effectively describe the degradation trend of rolling bearing, and the proposed dynamic PF achieves higher prediction accuracy than the traditional PF and GM model, respectively.
... However, as it involves linearization of the state space, the filtering equation is suboptimal. On the other hand, for discrete-time state space, particle filtering has been predominantly used for the estimation, see (Arulampalam et al., 2002;Doucet et al., 2001;Gordon et al., 1993;Kitagawa, 1993;Ramadan and Bitmead, 2022;Sarkka and Svensson, 2023;Surya, 2024). ...
... Identity (11) will be used in the derivation of the dynamics of x_t. Unlike in (discrete-time) Bayesian filtering (Arulampalam et al., 2002; Doucet et al., 2001; Gordon et al., 1993; Kitagawa, 1993), (11) considers x_t as an unknown fixed value rather than a random quantity, which is the common view in the Bayesian method. ...
Preprint
Full-text available
This paper develops a novel method for solving maximum a posteriori (MAP) and maximum likelihood (ML) nonlinear filtering problems in continuous-time state space. Some distributional identities for the statistics of incomplete information are established as the continuous-time counterpart of those given in \cite{Surya}. Using these identities and the duality principle between estimation theory and optimal control, which was pointed out in \cite{Kalman} for discrete-time state space, a new set of exact explicit filtering equations is derived. They consist of the governing dynamics of the MAP and ML state estimators and the corresponding covariance matrix. In particular, except in the linear state-space case, the governing dynamics of the covariance matrix are influenced/corrected by the observation process in similar fashion to those of the state estimator. This finding constitutes the main appealing feature of the new nonlinear filtering equations. The results generalize earlier works of \cite{Bucy}, in particular \citep{Bryson,Cox,Jazwinski,Mortensen}, for nonlinear state-space. A discrete-time representation of the obtained filtering equation is provided. It serves as an alternative to the extended Kalman-Bucy filter.
... This is an inverse problem and is addressed through data assimilation [1]. Typical data assimilation includes an ensemble Kalman filter (EnKF) [2,3], an EnKF with multiple data assimilations (EnKF-MDA) [4], an ensemble smoother (ES) [5,6], an ES with multiple data assimilations (ES-MDA) [7][8][9], and a particle filter [10][11][12]. ...
... where d ij is the normalized Euclidean distance between elites i and j calculated from (10). A decrease in D implies a decrease in model diversity among elites. ...
Article
Full-text available
The nonlinearity nature of land subsidence and limited observations cause premature convergence in typical data assimilation methods, leading to both underestimation and miscalculation of uncertainty in model parameters and prediction. This study focuses on a promising approach, the combination of evolutionary-based data assimilation (EDA) and ensemble model output statistics (EMOS), to investigate its performance in land subsidence modeling using EDA with a smoothing approach for parameter uncertainty quantification and EMOS for predictive uncertainty quantification. The methodology was tested on a one-dimensional subsidence model in Kawajima (Japan). The results confirmed the EDA’s robust capability: Model diversity was maintained even after 1000 assimilation cycles on the same dataset, and the obtained parameter distributions were consistent with the soil types. The ensemble predictions were converted to Gaussian predictions with EMOS using past observations statistically. The Gaussian predictions outperformed the ensemble predictions in predictive performance because EMOS compensated for the over/under-dispersive prediction spread and the short-term bias, a potential weakness for the smoothing approach. This case study demonstrates that combining EDA and EMOS contributes to groundwater management for land subsidence control, considering both the model parameter uncertainty and the predictive uncertainty.
... Using SMC to approximate the posterior distribution p(x_{1:t} | y_{1:t}) is also synonymously called particle filtering. The idea was proposed in the beginning of the 1990s (Stewart and McCarty, 1992; Gordon et al., 1993; Kitagawa, 1993) and there are by now several tutorials (Doucet and Johansen, 2011; Naesseth et al., 2019) and textbooks (Särkkä, 2013; Chopin and Papaspiliopoulos, 2020) available on the subject. The use of a particle filter for localization in a known magnetic field map is illustrated in Figure 2 (estimated magnetic anomaly map to the left, localization result to the right). ...
Article
Full-text available
Simultaneous localization and mapping (SLAM) is the task of building a map representation of an unknown environment while at the same time using it for positioning. A probabilistic interpretation of the SLAM task allows for incorporating prior knowledge and for operation under uncertainty. Contrary to the common practice of computing point estimates of the system states, we capture the full posterior density through approximate Bayesian inference. This dynamic learning task falls under state estimation, where the state-of-the-art is in sequential Monte Carlo methods that tackle the forward filtering problem. In this paper, we introduce a framework for probabilistic SLAM using particle smoothing that does not only incorporate observed data in current state estimates, but it also backtracks the updated knowledge to correct for past drift and ambiguities in both the map and in the states. Our solution can efficiently handle both dense and sparse map representations by Rao-Blackwellization of conditionally linear and conditionally linearized models. We show through simulations and real-world experiments how the principles apply to radio (Bluetooth low-energy/Wi-Fi), magnetic field, and visual SLAM. The proposed solution is general, efficient, and works well under confounding noise.
... • First of all, the robot's localization is usually considered as a probabilistic and multi-sensor fusion issue, considering that the sensor data are influenced by measurement errors. Consequently, according to the literature, the localization can be estimated by using Markov and Bayesian methods [22], Extended Kalman Filters, Particle Filters, Factor graphs, and evolutionary techniques using differential scaling, genetic approaches, and particle swarm optimization [23]. ...
... One technique for implementing this recursive estimation is the PF [49]. The PF approximates the probability distributions using a set of elementary particles (samples), enabling the handling of non-normal and arbitrarily shaped distributions. ...
Article
Full-text available
This paper presents a methodology that aims to enhance the accuracy of probability density estimation in mobility pattern analysis by integrating prior knowledge of system dynamics and contextual information into the particle filter algorithm. The quality of the data used for density estimation is often inadequate due to measurement noise, which significantly influences the distribution of the measurement data. Thus, it is crucial to augment the information content of the input data by incorporating additional sources of information beyond the measured position data. These other sources can include the dynamic model of movement and the spatial model of the environment, which influences motion patterns. To effectively combine the information provided by positional measurements with system and environment models, the particle filter algorithm is employed, which generates discrete probability distributions. By subjecting these discrete distributions to exploratory techniques, it becomes possible to extract more certain information compared to using raw measurement data alone. Consequently, this study proposes a methodology in which probability density estimation is not solely based on raw positional data but rather on probability-weighted samples generated through the particle filter. This approach yields more compact and precise modeling distributions. Specifically, the method is applied to process position measurement data using a nonparametric density estimator known as kernel density estimation. The proposed methodology is thoroughly tested and validated using information-theoretic and probability metrics. The applicability of the methodology is demonstrated through a practical example of mobility pattern analysis based on forklift data in a warehouse environment.
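The pipeline described in this abstract, kernel density estimation over probability-weighted particle samples, has a direct expression in code. A minimal sketch with a Gaussian kernel; the bandwidth and toy data are illustrative choices, not values from the paper:

```python
import numpy as np

def weighted_kde(samples, weights, grid, bandwidth=0.3):
    """Gaussian KDE built from probability-weighted particles (weights sum to 1)."""
    diffs = (grid[:, None] - samples[None, :]) / bandwidth
    kernels = np.exp(-0.5 * diffs ** 2) / (bandwidth * np.sqrt(2.0 * np.pi))
    return kernels @ weights  # density evaluated at each grid point

rng = np.random.default_rng(3)
samples = rng.normal(size=500)       # particle positions
weights = np.full(500, 1.0 / 500)    # uniform weights, e.g. after resampling
grid = np.linspace(-3.0, 3.0, 200)
density = weighted_kde(samples, weights, grid)
```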
... The same uncertainty limits the animals' predictive capacities and licenses plasticity (which is often known as conditioning). Different Bayesian models represent this uncertainty accurately (in closed form) or via various deterministic (Kruschke, 2011) or stochastic (Gordon et al., 1993) approximations. Different models of the latent characteristics, of their lability over time, and of their connection to observations that the animals can make (such as pairings between stimuli and outcomes) provide accounts of many different results in conditioning. ...
... Furthermore, the continuing growth in compute hardware performance has allowed running increasingly complex and high-resolution models which are able to resolve non-linear processes happening at a fine spatial scale (Vetra-Carvalho et al., 2018), creating an increasing demand for DA methods which are able to accurately quantify uncertainty in such settings. Particle filters (PFs) (Gordon et al., 1993) are an alternative approach which offer the promise of consistent DA for problems with non-linear dynamics and non-Gaussian noise distributions. Traditionally the main difficulty with particle filtering techniques has been the "curse of dimensionality" (Bickel et al., 2008; Snyder, 2011), where in high-dimensional settings, filtering leads to degeneracy of the importance weights associated with each particle and loss of diversity within an ensemble unless the ensemble size scales exponentially with the observation dimension. ...
Article
Full-text available
Digital twins of physical and human systems informed by real-time data are becoming ubiquitous across weather forecasting, disaster preparedness, and urban planning, but researchers lack the tools to run these models effectively and efficiently, limiting progress. One of the current challenges is to assimilate observations in highly non-linear dynamical systems, as the practical need is often to detect abrupt changes. We have developed a software platform to improve the use of real-time data in non-linear system representations where non-Gaussianity limits the applicability of data assimilation algorithms such as the ensemble Kalman filter and variational methods. Particle-filter-based data assimilation algorithms have been implemented within a user-friendly open-source software platform in Julia – ParticleDA.jl. To ensure the applicability of the developed platform in realistic scenarios, emphasis has been placed on numerical efficiency and scalability on high-performance computing systems. Furthermore, the platform has been developed to be forward-model agnostic, ensuring that it is applicable to a wide range of modelling settings, for instance unstructured and non-uniform meshes in the spatial domain or even state spaces that are not spatially organized. Applications to tsunami and numerical weather prediction demonstrate the computational benefits and ease of using the high-level Julia interface with the package to perform filtering in a variety of complex models.
... PF adopts a set of particles with associated weights to approximate the fault state. The objective is to obtain a new set of particles by propagating the previous particles based on the fault growth model (Gordon, Salmond, & Smith, 1993) and to represent the fault state estimate by the new set of particles. The details of PF are described as follows: ...
Article
Lebesgue sampling-based fault diagnosis and prognosis (LSFDP) is developed with the advantage of less computation requirement and smaller uncertainty accumulation. Same as other diagnostic and prognostic approaches, the accuracy and precision of LS-FDP are significantly influenced by the diagnostic and prognostic models. The predicted results will show great discrepancy with the real remaining useful life (RUL) in applications if the model is not accurate. In addition, the fixed model parameters cannot accommodate the varying stress factors that affect the fault dynamics. To address this problem, the parameters in the models are treated as time-varying ones and are adjusted online to accommodate changing dynamics. In this paper, a recursive least square (RLS) based method with a forgetting factor is employed to make the diagnostic and prognostic models online adaptive in LS-FDP. The design and implementation of LS-FDP are based on a particle filtering algorithm and are illustrated with experiments of Li-ion batteries. The experimental results show that the performance of LS-FDP with model adaptation is improved on both battery capacity estimation and RUL prediction.
... The nonlinear dynamic adjustments in unobserved states and parameters require simulation methods instead of analytical methods. This is a topic of much recent research, see, for instance, Herbst and Schorfheide (2016) and the references cited there, in particular the seminal paper on Sequential Monte Carlo due to Gordon, Salmond, and Smith (1993). ...
Article
Full-text available
This essay is about Bayesian econometrics with a purpose. Specifically, six societal challenges and research opportunities that confront twenty first century Bayesian econometricians are discussed using an important feature of modern Bayesian econometrics: conditional probabilities of a wide range of economic events of interest can be evaluated by using simulation-based Bayesian inference. The enormous advances in hardware and software have made this Bayesian computational approach a very attractive vehicle of research in many subfields in economics where novel data patterns and substantial model complexity are predominant. In this essay the following challenges and opportunities are briefly discussed, including the scientific results obtained in the twentieth century leading up to these challenges: Posterior and predictive analysis of everything: connecting micro-economic causality with macro-economic issues; the need for speed: model complexity and the golden age of algorithms; learning about models, forecasts and policies including their uncertainty; temporal distributional change due to polarisation, imbalances and shocks; climate change and the macroeconomy; finally and most importantly, widespread, accessible, advanced high-level training.
... Due to the limited computational power of computing devices at that time and the complexity and degeneracy of SIS computation, this method faced the problems of inefficiency, long computation time, and inaccurate results. In 1993, Gordon et al. proposed a Bayesian bootstrap filtering method [9], which introduced a resampling step on top of the original sequential importance sampling, thus reducing the effect of particle degeneracy, and produced the sampling importance resampling (SIR) particle filter algorithm. In the literature [10] the name particle filter (PF) was formally proposed. ...
Article
Full-text available
The problem of weight degradation is inevitable in particle filtering algorithms, and the resampling approach is an important method to reduce the particle degradation phenomenon. To solve the problem of particle diversity loss in existing resampling methods, this paper proposes a new digital twin-based resampling algorithm to improve the accuracy of particle filter estimation based on the traditional resampling algorithm. The digital twin-based resampling algorithm continuously improves the resampling process through the data interaction between the data model and the physical model, and realizes the real-time correction capability of particle weights that traditional resampling methods do not have. The new algorithm's calibration rules are divided according to the size of particle weights, with particles of large weight retained and particles of small weight selectively processed. Compared with the traditional resampling algorithm, the new resampling algorithm reduces the mean square error of the particle filter estimation results by 16.62%, 16.49%, and 13.86%, and improves the computing speed by 7.67%, 2.25%, and 7.54%, respectively, in simulation experiments of nonlinear systems with a univariate unsteady state growth model. The algorithm is experimentally demonstrated to accurately track a person in motion in an indoor building in a non-rigid target tracking application, which illustrates the effectiveness and reasonableness of the digital twin-based resampling algorithm.
... Iterated filtering uses a basic bootstrap particle filter (Gordon et al., 1993) and, therefore, avoids the need to directly evaluate the transition density f(x_t | x_{t-1}; θ). Instead, it only requires the capability to simulate from this density (simulation-based approach). ...
Article
Full-text available
Aim: The paper aims to propose a new estimation method for the Cholesky Multivariate Stochastic Volatility Model based on the iterated filtering algorithm (Ionides et al., 2006, 2015). Methodology: The iterated filtering method is a frequentist-based technique that through multiple repetitions of the filtering process, provides a sequence of iteratively updated parameter estimates that converge towards the maximum likelihood estimate. Results: The effectiveness of the proposed estimation method was shown in an empirical example in which the Cholesky Multivariate Stochastic Volatility Model was used in a study on safe-haven assets of one market index: Standard and Poor’s 500 and three safe-haven candidates: gold, Bitcoin and Ethereum. Implications and recommendations: In further research, the iterating filtering method may be used for more advanced multivariate stochastic volatility models that take into account, for example, the leverage effect (as in Ishihara et al., 2016) and heavy-tailed errors (as in Ishihara and Omori, 2012). Originality/Value: The main contribution of the paper is the proposition of a new estimation method for the Cholesky Multivariate Stochastic Volatility Model based on iterated filtering algorithm This is one of the few frequentist-based statistical inference methods for multivariate stochastic volatility models.
... As one of the major directions of the SMC method, the particle filter (PF) uses many weighted particles to describe the posterior PDF and simulate the propagation characteristics of the probability distribution [27,28]. Weighted particles can flexibly describe arbitrary distributions, which gives PF great potential to solve complex nonlinear/non-Gaussian filtering problems [29,31]. However, the development of the PF has long been hampered by particle degeneracy, impoverishment problems, and the curse of dimensionality, which can affect accuracy and efficiency and even lead to filtering divergence [32][33][34][35][36]. ...
Article
Full-text available
The confidence partitioning sampling filter (CPSF) method proposed in this paper is a novel approach for solving the generic nonlinear filtering problem. First, the confidence probability space (CPS) is defined, which restricts the state transition to a bounded and closed state space in the recursive Bayesian filtering. In the posterior CPS, the weighted grid samples, representing the posterior PDF, are obtained by using the partitioning sampling technique (PST). Each weighted grid sample is treated as an impulse function. The approximate expression of the posterior PDF, which is key for the PST implementation, is obtained by using the properties of the impulse function in the integral operation. By executing the selection of the CPS and the PST step repeatedly, the CPSF framework is formed to achieve the approximation of the recursive Bayesian filtering. Second, the difficulty of the CPSF framework implementation lies in obtaining the real posterior CPS. Therefore, the space intersection (SI) method is suggested to obtain the approximate posterior CPS. On this basis, the SI_CPSF algorithm, as an executable algorithm, is formed to solve the generic nonlinear filtering problem. Third, the approximation error between the CPSF framework and the recursive Bayesian filter is analyzed theoretically. The consistency of the CPSF framework with the recursive Bayesian filter is proved. Finally, the performance of the SI_CPSF algorithm, including robustness, accuracy and efficiency, is evaluated using four representative simulation experiments. The simulation results showed that SI_CPSF requires far fewer samples than the particle filter (PF) at the same accuracy. Its computation is on average one order of magnitude less than that of the PF. The robustness of the proposed algorithm is also evaluated in the simulations.
... When the particle number is sufficiently large, the posterior probability density function of the particles will be sufficiently accurate to guarantee the accuracy of the state mean and variance. However, the particle filter suffers from particle degradation, and there can be difficulty in selecting an appropriate importance density function for importance sampling [18][19][20]. Research efforts have been dedicated to improving the PF. Pitt et al. used auxiliary samples to adjust the resampling order to improve the PF's performance [21]. ...
Article
Full-text available
In vehicle navigation, it is quite common that the dynamic system is subject to various constraints, which increases the difficulty in nonlinear filtering. To address this issue, this paper presents a new constrained cubature particle filter (CCPF) for vehicle navigation. Firstly, state constraints are incorporated in the importance sampling process of the traditional cubature particle filter to enhance the accuracy of the importance density function. Subsequently, the Euclidean distance is employed to optimize the resampling process by adjusting particle weights to avoid particle degradation. Further, the convergence of the proposed CCPF is also rigorously proved, showing that the posterior probability function is converged when the particle number N → ∞. Our experimental results and the results of a comparative analysis regarding GNSS/DR (Global Navigation Satellite System/Dead Reckoning)-integrated vehicle navigation demonstrate that the proposed CCPF can effectively estimate system state under constrained conditions, leading to higher estimation accuracy than the traditional particle filter and cubature particle filter.
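One simple way to honor interval constraints in a particle filter is to project propagated particles back onto the feasible set. The sketch below is only an illustrative device under that assumption, not the CCPF's cubature-based constrained importance sampling described above:

```python
import numpy as np

def project_to_constraints(particles, lower, upper):
    """Clip each state component to its feasible interval [lower, upper]."""
    return np.clip(particles, lower, upper)

# Example: keep hypothetical 2-D vehicle states inside a known corridor.
particles = np.random.default_rng(4).normal(size=(1000, 2))
constrained = project_to_constraints(particles, lower=[-1.0, -0.5], upper=[1.0, 0.5])
```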
... Each ensemble member was assigned an unnormalized weight ′ =  ( ;̂) , thus emphasizing those whose subsidence matched the observations (Hesterberg, 1995). Formally, this corresponds to importance sampling using the prior distribution as proposal, similar to the original particle filter (Gordon et al., 1993). Importance sampling can suffer from large variance induced by a handful of samples being assigned exceptionally large weights. ...
Article
Full-text available
Excess ground ice formation and melt drive surface heave and settlement, and are critical components of the water balance in Arctic soils. Despite the importance of excess ice for the geomorphology, hydrology and biogeochemistry of permafrost landscapes, we lack fine‐scale estimates of excess ice profiles. Here, we introduce a Bayesian inversion method based on remotely sensed subsidence. It retrieves near‐surface excess ice profiles by probing the ice content at increasing depths as the thaw front deepens over summer. Ice profiles estimated from Sentinel‐1 interferometric synthetic aperture radar (InSAR) subsidence observations at 80 m resolution were spatially associated with the surficial geology in two Alaskan regions. In most geological units, the estimated profiles were ice poor in the central and, to a lesser extent, the upper active layer. In a warm summer, units with ice‐rich permafrost had elevated inferred ice contents at the base of the active layer and the (previous years') upper permafrost. The posterior uncertainty and accuracy varied with depth. In simulations, they were best (≲0.1) in the central active layer, deteriorating (≳0.2) toward the surface and permafrost. At two sites in the Brooks Foothills, Alaska, the estimates compared favorably to coring‐derived profiles down to 35 cm, while the increase in excess ice below the long‐term active layer thickness of 40 cm was only reproduced in a warm year. Pan‐Arctic InSAR observations enable novel observational constraints on the susceptibility of permafrost landscapes to terrain instability and on the controls, drivers and consequences of ground ice formation and loss.
... Another interesting approach is Markov Localization, a probabilistic algorithm that maintains a probability distribution over all hypotheses instead of just one [8]. An alternative method is Monte Carlo Localization, which uses numerous samples (particles) with different weights, representing hypotheses of the variable of interest [9], [10]. ...
Chapter
Full-text available
Accurate localization in autonomous robots enables effective decision-making within their operating environment. Various methods have been developed to address this challenge, encompassing traditional techniques, fiducial marker utilization, and machine learning approaches. This work proposes a deep-learning solution employing Convolutional Neural Networks (CNN) to tackle the localization problem, specifically in the context of the RobotAtFactory 4.0 competition. The proposed approach leverages transfer learning from the pre-trained VGG16 model to capitalize on its existing knowledge. To validate the effectiveness of the approach, a simulated scenario was employed. The experimental results demonstrated an error within the millimeter scale and rapid response times in milliseconds. Notably, the presented approach offers several advantages, including a consistent model size regardless of the number of training images utilized and the elimination of the need to know the absolute positions of the fiducial markers.
... Thereafter, we calculated the average of analysis errors in 1000 steps. The definition of the analysis error is the same as that expressed in Eq. (9). However, we excluded the time steps before step 30 from the average because the DA was stable after step 30. Figure 9 implies that mean analysis errors are sensitive to p min ≤ 0.001 but less sensitive to any α and p min ≥ 0.002 except if these parameters are exceedingly large. ...
Preprint
Full-text available
Estimating the states of chaotic cellular automata (CA) based on noisy imperfect data is challenging due to the nonlinearity and discreteness of the dynamical system. This paper proposes particle filter (PF)-based data assimilation (DA) for three-state chaotic CA and demonstrates that the PF-based DA can predict the present and future state even with noisy and sparse observations. The chaotic CA used in the present study comprised a competitive system of land, grass, and sheep. To the best of the authors' knowledge, this is the first application of DA to chaotic CA. The performance of DA for different observation sets was evaluated in terms of observational error, density, and frequency, and a series of sensitivity tests of the internal parameters in the DA was conducted. The inflation and localization parameters were tuned according to the sensitivity tests.
... The Particle Filter algorithm, introduced initially in 1955 [21], estimates system state by simulating numerous particles or "molecules." The algorithm was later renamed as the Bootstrap filter in 1993 [22], upon its implementation as a recursive Bayesian filter. The fundamental premise of this algorithm is to construct a posterior distribution using differently weighted samples, which are calculated and updated at each sampling interval based on measurement data. ...
Article
Full-text available
The collision-free movement for a mobile robot in the presence of dynamic obstacles remains a significant challenge. In addition to self-localization, we also need to worry about the location of the moving obstacles, taking into account the noise in the sensors and the uncertainty in the movement of these obstacles. In this paper, we propose an approach for omnidirectional robot maneuvering in a 2D workspace that combines a Particle Filter for the estimation of the obstacles from LiDAR data and a variation of the Velocity Obstacles (VO) technique. The position and the velocity vector of the obstacles can be perceived by the Particle Filter, and an uncertainty degree is also calculated. These outputs are combined with the VO algorithm to achieve motion planning that takes into account the current level of uncertainty as well as a cost function that expresses the risk tolerance of the user. We validate the approach in simulation and in experiments with a physical robot.
Article
Full-text available
A prevalent problem in statistical signal processing, applied statistics, and time series analysis arises from the attempt to identify the hidden state of a Markov process based on a set of available noisy observations. In the context of sequential data, filtering refers to the probability distribution of the underlying Markovian system given the measurements made at or before the time of the estimated state. In addition to filtering, the smoothing distribution is obtained by incorporating measurements made after the time of the estimated state into the filtered solution. This work proposes a number of new filters and smoothers that, in contrast to the traditional schemes, systematically make use of the process noises to give rise to enhanced performance in addressing the state estimation problem. In doing so, our approaches to the resolution are characterized by the application of graphical models; the graph-based framework not only provides a unified perspective on the existing filters and smoothers but leads us to design new algorithms in a consistent and comprehensible manner. Moreover, the graph models facilitate the implementation of the suggested algorithms through message passing on the graph.
Article
Full waveform inversion has emerged as the state-of-the-art high resolution seismic imaging technique, both in seismology for global and regional scale imaging and in the industry for exploration purposes. While gaining in popularity, full waveform inversion, at an operational level, remains a heavy computational process involving the repeated solution of large-scale 3D wave propagation problems. For this reason it is a common practice to focus the interpretation of the results on the final estimated model. This overlooks that full waveform inversion is an ill-posed inverse problem in a high dimensional space for which the solution is intrinsically non-unique. This is the reason why being able to qualify and quantify the uncertainty attached to a model estimated by full waveform inversion is key. To this end, we propose to extend at an operational level the concepts introduced in a previous study related to the coupling between ensemble Kalman filters and full waveform inversion. These concepts had been developed for 2D frequency-domain full waveform inversion. We extend them here to the case of 3D time-domain full waveform inversion, relying on a source subsampling strategy to progressively assimilate the data within the Kalman filter. We apply our strategy to an ocean bottom cable field dataset from the North Sea to illustrate its feasibility. We explore the convergence of the filter in terms of number of elements, and extract variance and covariance information showing which parts of the model are well constrained and which are not. Analyzing the variance helps to gain insight on how well the final estimated model is constrained by the whole full waveform inversion workflow. The variance maps appear as the superposition of a smooth trend related to the geometrical spreading and a high resolution trend related to reflectors. Mapping lines of the covariance (or correlation) matrix to the model space helps to gain insight into the local resolution. Through a wave propagation analysis, we are also able to relate variance peaks in the model space to variance peaks in the data space. Compared to other posterior-covariance approximation schemes, our combination of the ensemble Kalman filter and full waveform inversion is intrinsically scalable, making it a good candidate for exploiting the recent exascale high performance computing machines.
Research
Full-text available
In this paper an Edge Preserved Particle Filter (EP-PF) has been developed for edge preservation and noise reduction. The probability density function (PDF) aims to find a spatial representation of different structures through the computation of the relative position of similar patches. Given this density, the most appropriate set of neighbors is determined for the estimation of the noise-free intensity of a given pixel. The Laplacian of Gaussian (LoG) operator is used for measuring the strength of the edges. The magnitude of the LoG strength reflects the local edge structure and can distinguish smooth regions from edge regions. For evaluation, a surveillance image is degraded with additive Gaussian white noise at levels ranging from 10 to 50. The results obtained from the proposed EP-PF approach have been compared with the standard particle filter (S-PF). The new approach proves very effective at reducing noise while preserving the edge information in digital images. A quantitative comparison in terms of PSNR, MSE, NAE, SNR, MAE, and NCC shows that the proposed method offers superior performance compared to S-PF in both noise reduction and edge preservation.
Article
Lithium-ion batteries (LIBs) have gained immense popularity as a power source in various applications. Accurately predicting the health status of these batteries is crucial for optimizing their performance, minimizing operating expenses, and preventing failures. In this paper, we present a comprehensive review of the latest developments in predicting the state of charge (SOC), state of health (SOH), and remaining useful life (RUL) of LIBs, and particularly focus on machine learning techniques. This paper delves into the degradation mechanisms of LIBs and their underlying theories, providing an in-depth analysis of the strengths and limitations of various machine learning techniques used to predict SOC, SOH and RUL. Furthermore, this review sheds light on the challenges encountered in the practical application of electric vehicles, especially concerning battery degradation. It also offers valuable insights into the future research directions for LIBs. While machine learning methods hold great promise in enhancing the accuracy of predicting SOC, SOH, and RUL, there remain numerous technical and practical obstacles that must be overcome to make them more applicable in real-world scenarios.
Article
An essential type of Bayesian recursive filter known as the sequential Monte Carlo method (alias, the particle filter) is used to estimate hidden Markov target states from noisy sensor data. Utilising sensor data and a collection of weighted particles, the filter approximates the posterior probability density of the target state. These particles recursively propagate in time and are then updated using the incoming sensor information. The auxiliary particle filter improves over the traditional particle filter by guiding particles into regions of importance of the probability density using a lookahead scheme. This facilitates the use of fewer particles and improved accuracy. However, when the sensor observations are extremely informative and the state transition noise is strong, the filter suffers badly. This is because, under high state transition noise, the particles that are determined to be important by the lookahead step can end up in unimportant regions of the posterior after the final sampling process. Recent improvements of the auxiliary particle filter have explored better weighting strategies, but this problem has not been explored closely. This paper seeks to solve the problem by adopting an auxiliary lookahead technique with two predictive support points to estimate the particles that will be located in regions of high importance after final sampling. The proposed method is successfully tested on a nonlinear model in simulations.
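To make the lookahead mechanism discussed in this abstract concrete, here is a minimal single-support-point auxiliary particle filter step in Python. This is the classical scheme the paper improves upon, sketched under a toy linear-Gaussian model; the paper's two-support-point variant is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)

def apf_step(particles, y, trans_mean, trans_sample, likelihood):
    """One auxiliary particle filter step with a one-step lookahead."""
    n = len(particles)
    # First stage: score each particle by the likelihood of y at a point
    # prediction (here the transition mean), then resample ancestors.
    mu = trans_mean(particles)
    first_stage = likelihood(y, mu)
    first_stage = first_stage / first_stage.sum()
    idx = rng.choice(n, size=n, p=first_stage)
    # Second stage: propagate the chosen ancestors and correct the weights.
    new = trans_sample(particles[idx], rng)
    w = likelihood(y, new) / likelihood(y, mu[idx])
    return new, w / w.sum()

# Toy model: x_t = 0.9 x_{t-1} + noise, y_t = x_t + noise.
trans_mean   = lambda x: 0.9 * x
trans_sample = lambda x, rng: 0.9 * x + rng.normal(0.0, 1.0, size=x.shape)
likelihood   = lambda y, x: np.exp(-0.5 * (y - x) ** 2) + 1e-300  # guard against underflow

particles, weights = apf_step(rng.normal(size=500), 0.7,
                              trans_mean, trans_sample, likelihood)
```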