Greedy selection of the next sensor involves choosing the next pivot column of Ψᵀ from the set of allowable sensor locations specified by the constraint.

Source publication
Article
Full-text available
The deployment of extensive sensor arrays in nuclear reactors is infeasible due to challenging operating conditions and inherent spatial limitations. Strategically placing sensors within defined spatial constraints is essential for the reconstruction of reactor flow fields and the creation of nuclear digital twins. We develop a data-driven techniqu...

Context in source publication

Context 1
... main driver of this optimization is the point at which constraints are introduced into the pivoting procedure: allowing upper triangularization to proceed normally in the starting iterations maximizes the leading diagonal entries of R, so that domain-specific constraints do not drastically affect the diagonal-dominance property but only the trailing Rᵢᵢ, which are optimized by choosing the best pivot from the allowable locations. The three types of spatial constraints handled by the algorithm (Figure 2) are: 1) Region constrained: this type of constraint arises when either at most s or exactly s sensors can be placed in a certain region, while the remaining r − s sensors must be placed outside the constrained region. ...
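A minimal sketch of this constrained pivoting idea, assuming a tailored basis Psi_r (e.g., POD modes) whose transpose is pivoted, a boolean mask of allowable locations, and a cutoff iteration after which the constraint is enforced. The function name, signature, and the modified Gram-Schmidt formulation are illustrative, not the authors' implementation:

```python
import numpy as np

def constrained_qr_pivots(Psi_r, allowed, n_sensors, n_free):
    """Greedy sensor selection equivalent to column-pivoted QR on Psi_r.T,
    restricting later pivots to allowable locations.

    Psi_r     : (n, r) basis of the field (e.g., POD modes)
    allowed   : boolean mask of length n marking admissible sensor locations
    n_sensors : number of sensors (pivots) to select, assumed <= r here
    n_free    : number of initial unconstrained pivots; the constraint is
                only enforced from iteration n_free onward
    """
    A = Psi_r.T.astype(float).copy()       # (r, n): columns = candidate locations
    n = A.shape[1]
    chosen = np.zeros(n, dtype=bool)
    pivots = []

    for k in range(n_sensors):
        # Residual column norms play the role of the trailing R_ii candidates.
        norms = np.linalg.norm(A, axis=0)
        norms[chosen] = -np.inf
        if k >= n_free:                    # introduce the spatial constraint
            norms[~allowed] = -np.inf
        j = int(np.argmax(norms))
        pivots.append(j)
        chosen[j] = True

        # Deflate the selected direction from all columns
        # (one modified Gram-Schmidt / pivoted-QR elimination step).
        q = A[:, j] / np.linalg.norm(A[:, j])
        A = A - np.outer(q, q @ A)

    return np.array(pivots)
```

For the "at most s sensors in a region" case, the allowed mask could additionally be switched off inside the region once s pivots have landed there; the "exactly s" case needs extra bookkeeping not shown in this sketch.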

Citations

... Ref. [59] introduces a 5G-enabled battery-less smart skins sensor that adopts a Van Atta array design, enabling ubiquitous local strain monitoring of the monitored target with a wide detection angle. Ref. [60] proposes a data-driven sensor layout optimization technique for nuclear digital twins, integrating spatial constraints into the optimization framework to minimize reconstruction errors under noisy sensor measurement conditions, ensuring that data transmitted from physical processes enable remote monitoring and real-time control. ...
Article
Full-text available
Digital twin technology, a digital technology that has emerged in recent years, enables real-time simulation, prediction, and optimization by digitally modeling the physical world. It offers new ideas and methods for the design, operation, and management of water conservancy projects and is significant for the transition from water conservancy informatization to intelligent water conservancy. This paper systematically discusses the concept and development history of digital twin smart water conservancy, compares it with traditional water conservancy models, and proposes a five-dimensional model of digital twin smart water conservancy. Based on this five-dimensional model, the research progress of digital twin smart water conservancy is summarized across six aspects: data perception, data transmission, data analysis and processing, model construction, interaction and collaboration, and service application, along with the challenges and problems of applying digital twin technology to smart water conservancy. Finally, development trends and directions for technological breakthroughs are outlined, aiming to provide reference and guidance for research on digital twin technology in the field of smart water conservancy and to promote further development of the field.
... In essence, these heuristic approaches offer a balance between speed and optimality. They can be a valuable tool for initial sensor placement, but further refinement might be needed using more rigorous optimization methods (Gao et al., 2023; Manohar et al., 2019), especially when constraints are involved (Karnik et al., 2024). ...
... Extracted mrDMD modes are then used to determine the optimal sensor placement while considering real-world system constraints. The study proposes a novel adaptation of a pivoted QR factorization technique (Drmač & Gugercin, 2016; Higham, 2000; Karnik et al., 2024; Manohar et al., 2018, 2022). This adaptation introduces a regularization term that builds upon the algorithm's existing constraints. ...
... As established in the unconstrained QR decomposition with pivoting, the (k+1)-th iteration selects a column from the trailing submatrix. To incorporate system constraints and the regularization term (introduced in Equation 14) into the QR decomposition with pivoting, the column selection process is modified (Karnik et al., 2024). Unlike the unconstrained case, where the column of the trailing (2,2) block with the maximum two-norm is chosen, the constrained scenario restricts the selection to a subset of allowable indices that adhere to the imposed limitations. ...
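A hedged sketch of just this per-iteration selection rule, assuming the candidate column two-norms of the trailing (2,2) block have already been computed. The reg argument is a stand-in for the regularization term of Equation 14, whose exact form is not reproduced in the snippet, and the function name, signature, and sign convention are illustrative assumptions:

```python
import numpy as np

def select_constrained_pivot(col_norms, allowed, reg=None, lam=0.0):
    """Pick the next pivot at one iteration of constrained pivoted QR.

    col_norms : two-norms of the not-yet-selected candidate columns of the
                trailing (2,2) block
    allowed   : boolean mask of indices admissible under the constraints
    reg       : optional per-candidate regularization values (stand-in for
                the paper's Equation 14; exact form not shown here)
    lam       : regularization weight (sign and scale are assumptions)
    """
    score = np.asarray(col_norms, dtype=float).copy()
    if reg is not None:
        score = score + lam * np.asarray(reg, dtype=float)
    score[~np.asarray(allowed, dtype=bool)] = -np.inf  # forbid disallowed columns
    return int(np.argmax(score))
```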
Article
Full-text available
Accurate wind pressure analysis on high-rise buildings is critical for wind load prediction. However, traditional methods struggle with the inherent complexity and multiscale nature of these data. Furthermore, the high cost and practical limitations of deploying extensive sensor networks restrict the data collection capabilities. This study addresses these limitations by introducing a novel framework for optimal sensor placement on high-rise buildings. The framework leverages the strengths of multiresolution dynamic mode decomposition (mrDMD) for feature extraction and incorporates a novel regularization term within an existing sensor placement algorithm under constraints. This innovative term enables the algorithm to consider real-world system constraints during sensor selection, leading to a more practical and efficient solution for wind pressure analysis. mrDMD effectively analyzes the multiscale features of wind pressure data. The extracted mrDMD modes, combined with the enhanced constrained QR decomposition technique, guide the selection of informative sensor locations. This approach minimizes the required number of sensors while ensuring accurate pressure field reconstruction and adhering to real-world placement constraints. The effectiveness of this method is validated using data from a scaled building model tested in a wind tunnel. This approach has the potential to revolutionize wind pressure analysis for high-rise buildings, paving the way for advancements in digital twins, real-time monitoring, and risk assessment of wind loads.
... Recent work [16] formulates D-optimal criteria to minimize the error covariance of physics-based reconstruction, achieving near-optimal sensor placements and reconstruction under noisy measurements; it has been extended to actuator placement for control [36], greedy cost constraints [37,38], multi-fidelity sensors [39], and multi-scale physics [40]. These approaches can be further extended to provide a sensor optimization landscape based on data-induced interactions to inform the placement of new sensors and quantify reconstruction uncertainty at each grid point [41], and to incorporate placement constraints during optimization [42]. This constrained sensing approach is leveraged in this work to optimize sensor placement for components of NPPs. ...
... This work leverages computational models developed for nuclear subsystems to apply a constrained data-driven sensor optimization approach [42] to establish instrumentation during the design stage of NPPs. Our target application is the reconstruction of fields of interest from optimized sensor measurements of temperature, pressure, velocity, and heat flux during the service phase. ...
... Each case study is analyzed using optimal sensor placement with uncertainty developed in our previous work [41,42], summarized next. The reconstruction setup encodes sensor placement as a sparse measurement operator acting on high-dimensional system states, where the resulting vector of p measurements (y ∈ ℝᵖ) can contain additive zero-mean Gaussian noise η ∼ N(0, β²):
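The measurement equation introduced by the colon is cut off in this snippet. Below is a minimal sketch of the standard sparse-measurement model y = C x + η with a least-squares reconstruction through a reduced basis; all sizes, variable names, and the randomly drawn sensor set are illustrative placeholders, and the cited works use optimized (constrained, D-optimal) placements and a full uncertainty analysis not shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the papers): n grid points, r modes, p sensors.
n, r, p = 2000, 10, 12
Psi_r, _ = np.linalg.qr(rng.standard_normal((n, r)))  # surrogate orthonormal basis
x_true = Psi_r @ rng.standard_normal(r)               # a field living in that basis

sensors = rng.choice(n, size=p, replace=False)        # stand-in for optimized pivots
C = np.zeros((p, n))
C[np.arange(p), sensors] = 1.0                        # sparse measurement operator

beta = 0.01
y = C @ x_true + rng.normal(0.0, beta, size=p)        # y = C x + eta, eta ~ N(0, beta^2)

# Least-squares reconstruction through the basis: x_hat = Psi_r (C Psi_r)^+ y
a_hat, *_ = np.linalg.lstsq(C @ Psi_r, y, rcond=None)
x_hat = Psi_r @ a_hat
print("relative reconstruction error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```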
Article
Full-text available
Nuclear power plants (NPPs) require continuous monitoring of various systems, structures, and components to ensure safe and efficient operations. The critical safety testing of new fuel compositions and the analysis of the effects of power transients on core temperatures can be achieved through modeling and simulations. They capture the dynamics of the physical phenomena associated with failure modes and facilitate the creation of digital twins (DTs). Accurate reconstruction of fields of interest (e.g., temperature, pressure, velocity) from sensor measurements is crucial to establish two-way communication between physical experiments and models. Sensor placement is highly constrained in most nuclear subsystems due to challenging operating conditions and inherent spatial limitations. This study develops optimized data-driven sensor placements for full-field reconstruction within reactor and steam generator subsystems of NPPs. Optimized constrained sensors reconstruct fields of interest within a tri-structural isotropic (TRISO) fuel irradiation experiment, a lumped parameter model of a nuclear fuel test rod, and a steam generator. The optimization procedure leverages reduced-order models of flow physics to provide highly accurate full-field reconstruction of responses of interest, noise-induced uncertainty quantification, and physically feasible sensor locations. Accurate sensor-based reconstructions establish a foundation for the digital twinning of subsystems, culminating in a comprehensive DT aggregate of an NPP.
... Peherstorfer et al. [32] carry out a probabilistic analysis to quantify the reconstruction error of DEIM when the observations are noisy. A similar analysis, in the context of D-optimal design, is carried out in [23,29]. Callaham et al. [8] propose several methods which regularize the underlying DEIM optimization problem in order to decrease its sensitivity to noise. ...
Preprint
Full-text available
Discrete empirical interpolation method (DEIM) estimates a function from its incomplete pointwise measurements. Unfortunately, DEIM suffers large interpolation errors when few measurements are available. Here, we introduce Sparse DEIM (S-DEIM) for accurately estimating a function even when very few measurements are available. To this end, S-DEIM leverages a kernel vector which has been neglected in previous DEIM-based methods. We derive theoretical error estimates for S-DEIM, showing its relatively small error when an optimal kernel vector is used. When the function is generated by a continuous-time dynamical system, we propose a data assimilation algorithm which approximates the optimal kernel vector using sparse observational time series. We prove that, under certain conditions, data assimilated S-DEIM converges exponentially fast towards the true state. We demonstrate the efficacy of our method on two numerical examples.
Article
The complex shapes of structures and the demand for high precision in digital twin modeling pose challenges for sensor placement optimization. A novel optimal sensor placement towards the high-precision digital twin (OSP-HDT) method is proposed for complex curved structures. It comprises three key aspects. First, leveraging a spatial dimensionality reduction method, the complex curved surface is simplified into a planar representation. Candidate sensor placement points can then be easily identified by dividing the background mesh in the plane and screening them within the curved surface; these candidate points are binary encoded to facilitate the subsequent optimization. Second, the method collects result data from the finite element model and treats it as virtual sensor data. Using these data, a surrogate model is constructed, and the objective function is formulated based on both the global precision and the precision in local critical areas of the surrogate model. Third, the sensor placement optimization model is constructed, followed by optimization design using an efficient multi-objective covariance matrix adaptive evolutionary strategy. Through these steps, the optimal sensor placement can be identified. To validate the proposed OSP-HDT method, an experiment is conducted on an S-shaped variable cross-section stiffened shell, with construction of the corresponding digital twin. Compared to uniform placement with an equivalent number of sensors, the OSP-HDT method demonstrated a significant 9.0% improvement in global precision and a remarkable 62.1% enhancement in the local precision of critical areas. Furthermore, compared to random sensor placement strategies, the OSP-HDT method exhibited a 20.5% increase in global precision, together with a 44.2% increase in local precision. Notably, even when compared to full sensor placement, the OSP-HDT method maintains comparable local precision while significantly reducing the number of sensors by 77.6%. These comparisons indicate that the proposed OSP-HDT method can build a digital twin model with higher global and local precision for complex structures.