Article

Output-only modal identification by compressed sensing: Non-uniform low-rate random sampling

... [1][2][3][19] In the field of civil engineering, a series of problems can be formulated as a general sparse inverse problem with noisy linear measurements (the nonlinear components can be converted into a linear space by kernel functions, or linearly truncated by the Taylor expansion), where the measured data are often insufficient. For example, in structural health monitoring (SHM), it is important to perform accurate identification of load [20], modal properties [21], and damage parameters [22] to facilitate the automatic assessment of structural conditions, while only spatially sparse sensors are available. Similarly, in non-destructive testing (NDT), there is a need to reconstruct meaningful information from a limited number of sensors in order to detect structural damage while reducing the testing time and data collection volume. ...
... Some researchers in civil engineering have also studied these CS algorithms based on convex optimization techniques [18,20,21]. Reference [67] introduced how to solve the ℓp-norm regularized problem. In addition to the greedy algorithms and the convex relaxation algorithms, there is another, combinatorial algorithm [68] that can also solve the CS problem, but its model is too specific and its suitable scenarios are limited. ...
... Yang et al. 21 performed modal identification based on non-uniform, low-rate randomly sampled response signals (i.e. the signals are under-sampled in the temporal domain, as shown in Fig. 6). They first obtained the mode shapes by blind source separation, and then reconstructed the uniform, high-rate modal responses based on CS, from which the frequencies and damping ratios were also accurately identified. ...
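The non-uniform, low-rate random sampling described in this snippet can be sketched in a few lines of NumPy. This is an illustrative toy construction, not the authors' implementation: the signal, rates, and variable names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nominal uniform acquisition: a 20 Hz modal response sampled at 1000 Hz.
fs = 1000.0                     # uniform (Nyquist-respecting) rate, Hz
N = 1000                        # one second of data
t_uniform = np.arange(N) / fs
x_uniform = np.cos(2 * np.pi * 20.0 * t_uniform)

# Non-uniform low-rate random sampling: keep only M randomly chosen
# time instants, i.e. under-sample the signal in the temporal domain.
M = 100                         # 10% of the data, ~100 Hz average rate
keep = np.sort(rng.choice(N, size=M, replace=False))
t_random = t_uniform[keep]
y = x_uniform[keep]             # the compressed measurements

avg_rate = M / (t_uniform[-1] - t_uniform[0])
print(f"average sampling rate: {avg_rate:.1f} Hz")
```

The uniform, high-rate modal responses would then be reconstructed from `y` by a sparse recovery step, from which frequencies and damping ratios follow.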
Article
Full-text available
Compressive sampling (CS) is a novel signal processing paradigm whereby data compression is performed simultaneously with sampling, by measuring some linear functionals of the original signal in the analog domain. Provided the signal is sufficiently sparse under some basis, it is strictly guaranteed that the original signal can be stably decompressed/reconstructed from significantly fewer measurements than required by the sampling theorem, bringing considerable practical convenience. In the field of civil engineering, there are numerous application scenarios for CS, as many civil engineering problems can be formulated as sparse inverse problems with linear measurements. In recent years, CS has seen extensive theoretical development and many practical applications in civil engineering. Inevitable modelling and measurement uncertainties have motivated the Bayesian probabilistic perspective on the inverse problem of CS reconstruction. Furthermore, the advancement of deep learning techniques for efficient representation has also contributed to the elimination of the strict sparsity assumption in CS. This paper reviews the advancements and applications of CS in civil engineering, focusing on challenges arising from data acquisition and analysis. The reviewed theories are also applicable to inverse problems in broader scientific fields.
... CS has been used successfully in structural monitoring of civil infrastructure (Yang & Nagarajaiah, 2015). Other related work discussed the use of CS techniques with video of vibrating structures (Martinez et al., 2020). ...
... Though there exist some reports of prototypes of CS hardware samplers (Yazicigil et al., 2019), those too are not commercially available. As such, like others before us (Yang & Nagarajaiah, 2015;Martinez et al., 2020), we too will demonstrate the use of CS by randomly down-sampling data originally sampled at uniform rates. However, instead of down-sampling real measured signals, we will demonstrate the methods with synthesized signals that are representative of real cutting tool signals. ...
... We will keep δ = 0 for the entirety of this paper for the sake of generality. Hence, with δ=0, Eq. (5) becomes the basis pursuit problem (Yang & Nagarajaiah, 2015;Chen & Donoho, 1994): ...
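With δ = 0, the recovery reduces to the equality-constrained ℓ1 problem (basis pursuit), which can be posed as a linear program. The sketch below is a toy construction under stated assumptions, not the cited paper's code: a 2-sparse coefficient vector in a normalized cosine dictionary is recovered from 32 of 64 randomly retained time samples using `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

N = 64                                   # full signal length
n = np.arange(N)
# Cosine (DCT-II type) synthesis dictionary, columns normalized: x = Psi @ s.
Psi = np.cos(np.pi * np.outer(n + 0.5, n) / N)
Psi = Psi / np.linalg.norm(Psi, axis=0)

s_true = np.zeros(N)
s_true[[5, 12]] = [1.0, -0.7]            # 2-sparse coefficient vector
x = Psi @ s_true

# Non-uniform random sampling: keep M of the N time samples.
M = 32
rows = np.sort(rng.choice(N, size=M, replace=False))
A = Psi[rows, :]                         # M x N effective sensing matrix
y = x[rows]

# Basis pursuit: min ||s||_1  s.t.  A s = y, via the split s = u - v, u, v >= 0.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
s_hat = res.x[:N] - res.x[N:]
print("recovery error:", np.linalg.norm(s_hat - s_true))
```

Time samples (spikes) are maximally incoherent with the cosine basis, which is why such a heavily under-sampled record still admits exact recovery here.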
Article
A change in the modal parameters of cutting tools could signal tool wear, tool breakage, or other instabilities. The cutting process must be continuously monitored using vibration signals to detect such changes. Since tools vibrate with frequencies of up to a few kHz, continuous monitoring requires sampling at rates of tens of kHz to respect the Nyquist limit. Processing and storing such large data for decision making is cumbersome. To address this issue, this paper discusses the use of a compressed sensing framework that enables non-uniform random sampling at rates below the Nyquist limit. For cutting tools, we show for the first time, using synthesized data, that it is possible to reconstruct original signals from as few as 1% of the original data. We numerically test the method to characterize the influence of damping, noise, and multiple modes. Modal parameters recovered from the reconstructed signal agree with those obtained from properly sampled signals.
... Bao et al [27] presented a group sparse optimization algorithm for reconstructing the original measurements, with good accuracy. Yang and Nagarajaiah [28] proposed an output-only modal identification method that combined CS with blind source separation to recover the modal responses from non-uniform low-rate random samples. Gkoktsi and Giaralis [29] compared the CS-based method and the power spectrum blind sampling technique to recover the power spectral density matrix. ...
... To obtain compressed measurements by using random matrices, we need to select optimal sampling schemes. Uniform sampling multiplied by a Bernoulli random matrix (an M × L random matrix containing only the elements 1 and −1) and non-uniform low-rate random sampling (L randomly selected time points) have shown good performance in other methods [28][29][30][31][32][33], so they are used to compress the displacement matrix X for the proposed SDPI. ...
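The two sampling schemes mentioned in the snippet can be written down directly. The matrix sizes, compression ratio, and variable names below are illustrative assumptions (the snippet's L corresponds to the record length N here), not the cited paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

N = 512          # length of each uniformly sampled response record
M = 64           # number of compressed measurements (CR = N/M = 8)

# Scheme 1: uniform sampling multiplied by a Bernoulli random matrix,
# an M x N matrix whose entries are only +1 and -1.
Phi_bernoulli = rng.choice([-1.0, 1.0], size=(M, N))

# Scheme 2: non-uniform low-rate random sampling -- a selection matrix
# whose rows each pick one of M randomly chosen time points.
picks = np.sort(rng.choice(N, size=M, replace=False))
Phi_select = np.zeros((M, N))
Phi_select[np.arange(M), picks] = 1.0

# Compress a displacement matrix X (N samples x 3 sensor channels).
X = rng.standard_normal((N, 3))
Y1 = Phi_bernoulli @ X           # dense random projections
Y2 = Phi_select @ X              # equivalent to X[picks, :]
```

Scheme 2 is just sub-sampling in disguise, which is what makes it attractive for hardware: no analog multiply-accumulate stage is needed, only a randomized sample clock.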
... Hence, even for CR = 100, SDPI can still accurately identify modal parameters. Note that SDPI failed to identify the first-order mode shape at CR = 100, as this mode was weakly excited, as discussed in [28]. As the accuracy of SDPI is over 98%, SDPI demonstrates good performance using compressed measurements in random vibration tests. ...
Article
Compressive Sensing (CS) provides a solution for modal analysis with fewer/compressed measurements. However, operational modal analysis by using compressed measurements directly has not been studied. In this work, a new approach is proposed, which deploys the identified modal parameters from the most recent step as prior information to automatically set the frequency and damping ratio search ranges and uses CS to identify modal parameters from compressed measurements. Numerical studies on a spring-mass system and experimental studies on a wind turbine model are performed to demonstrate the effectiveness of the proposed approach. The results are accurate on damped structures including those with heavy noise and complex responses, under free or random vibration conditions. Its performance using compressed measurements is comparable to those of state-of-the-art methods using original uncompressed measurements. It may find practical applications as an efficient and accurate operational modal analysis tool for online structural health monitoring.
... Based on CS theory, structural responses can be effectively compressed, and applications of the compressed coefficients have been studied in the SHM field [25][26][27]. Because the structural responses are not sparse in the time domain, dictionaries are established to obtain sparse expressions of the responses. ...
... Because the structural responses are not sparse in the time domain, dictionaries are established to obtain sparse expressions of the responses. The Fourier basis [25], the sinusoid basis [26] and the Daubechies wavelet family [27] are respectively adopted as dictionaries in these studies. These studies involve applications to modal identification [26], response signal recovery and structural damage detection [25,27]. ...
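The role of such sparsifying dictionaries can be illustrated numerically: a lightly damped modal response is dense in the time domain but highly compressible on a Fourier basis. The signal parameters below are illustrative assumptions.

```python
import numpy as np

fs, N = 256.0, 1024
t = np.arange(N) / fs
# A lightly damped modal response: not sparse in the time domain...
x = np.exp(-0.3 * t) * np.cos(2 * np.pi * 12.0 * t)

# ...but its expression on a Fourier dictionary is highly compressible:
# almost all of the energy sits in a handful of coefficients near 12 Hz.
c = np.fft.rfft(x) / np.sqrt(N)
energy = np.abs(c) ** 2
top = np.sort(energy)[::-1]
ratio = top[:10].sum() / energy.sum()
print(f"fraction of energy in 10 largest Fourier coefficients: {ratio:.3f}")
```

A wavelet family plays the same role for transient, non-stationary responses, where a global Fourier basis is less efficient.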
Article
Moving force identification (MFI) techniques have been widely studied in recent years. However, the contradiction between response acquisition and energy consumption limits applications of existing MFI methods and has become one of the most prominent issues in the field of structural health monitoring (SHM). In fact, the sample length of response data can be shortened by exploiting compressed coefficients of responses based on compressed sensing (CS) theory. In order to mitigate this contradiction and to study whether these compressed coefficients can be efficiently exploited for MFI, a novel method is proposed for MFI based on CS with redundant dictionaries in this study. Firstly, a redundant dictionary is designed for creating a sparse expression of each moving force based on prior knowledge of moving forces. Then, with the aid of the relationship between moving forces and responses, an indirect way is presented to design dictionaries for different types of structural responses, sparse expressions of the responses are established simultaneously, and an MFI governing equation is formulated by directly exploiting the compressed coefficients of responses via CS. Moreover, sparse regularization is introduced to ensure the accuracy of the MFI results. Finally, the proposed method is validated by both numerical simulations and experimental verifications. The illustrated results show that the sample length of each acquired record can be noticeably shortened and that the compressed coefficients, rather than the structural responses, can be directly used for MFI. The identified moving forces are in good agreement with the true ones, which shows the effectiveness and applicability of the proposed method. In addition, the proposed method can estimate the total weight of the car with good accuracy and strong robustness to noise.
... Nonetheless, wireless sensors are constrained by frequent battery replacement requirements, leading to increased maintenance costs, while their bandwidth limitations restrict the amount of data that can be reliably transmitted. It has been established that the above disadvantages may be alleviated by considering system identification techniques using measurements sampled at low rates, significantly below the nominal application-dependent Nyquist rate [4][5][6][7][8][9]. ...
... Most of these techniques rely on the compressive sensing (CS) paradigm, in which response acceleration time-histories are randomly sampled in time at sub-Nyquist rates at the front-end (sensor level) and, then, sparse recovery algorithms are applied to the compressed measurements at the back-end (base-station level) to retrieve the acceleration time-series [4,7] or, directly, modal data [5,6]. In CS-based techniques, the achieved level of data compression (sub-Nyquist rate) for faithful time-series recovery and/or modal property extraction depends on the sparsity of the acceleration signals, i.e., the number of non-zero signal coefficients on a given basis. ...
... The Multiple Signal Classification (MUSIC) algorithm is a super-resolution pseudospectrum estimation method that relies on the eigenvalue decomposition of autocorrelation matrices estimated from field measurements (e.g., [14]). Herein, the MUSIC algorithm is applied to the autocorrelation matrix Rss in Eq. (6), which is decomposed as
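A bare-bones version of the MUSIC step (eigendecomposing an estimated autocorrelation matrix and scanning a noise-subspace pseudospectrum) looks as follows. The two-tone test signal and all parameter choices are illustrative assumptions, and the matrix Rss of Eq. (6) is replaced here by a plain sample autocorrelation matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two sinusoids in white noise, normalized frequencies 0.10 and 0.22.
N, m, p = 400, 20, 4   # samples, covariance order, 2 real tones = 4 exponentials
n = np.arange(N)
x = np.sin(2*np.pi*0.10*n) + np.sin(2*np.pi*0.22*n) + 0.1*rng.standard_normal(N)

# Sample autocorrelation matrix from overlapping length-m snapshots.
snaps = np.stack([x[i:i + m] for i in range(N - m)])
R = snaps.T @ snaps / (N - m)

# Eigendecomposition: the noise subspace spans the m - p smallest eigenvalues.
w, V = np.linalg.eigh(R)          # eigenvalues in ascending order
En = V[:, :m - p]

# MUSIC pseudospectrum: large where steering vectors are orthogonal to En.
freqs = np.linspace(0.01, 0.49, 961)
k = np.arange(m)
P = np.array([1.0 / np.linalg.norm(En.T @ np.exp(2j*np.pi*f*k))**2 for f in freqs])

# Pick the peak in a band around each expected tone.
f1 = freqs[np.argmax(np.where((freqs > 0.05) & (freqs < 0.15), P, 0))]
f2 = freqs[np.argmax(np.where((freqs > 0.17) & (freqs < 0.27), P, 0))]
print(f1, f2)
```

The spikes in P mark the natural frequencies; no sparsity condition on x is needed, which is the point made by the cited approach.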
Conference Paper
Full-text available
Motivated by practical needs to reduce data transmission payloads in wireless sensors for vibration-based monitoring of civil engineering structures, this paper proposes a novel approach for identifying resonant frequencies of white-noise excited structures using acceleration measurements acquired at rates significantly below the Nyquist rate. The approach adopts the deterministic co-prime sub-Nyquist sampling scheme, originally developed to facilitate telecommunication applications, to estimate the autocorrelation function of response acceleration time-histories of low-amplitude white-noise excited structures treated as realizations of a stationary stochastic process. This is achieved without posing any sparsity conditions on the signals. Next, the standard MUSIC algorithm is applied to the estimated autocorrelation function to derive a denoised super-resolution pseudo-spectrum in which natural frequencies are marked by prominent spikes. The accuracy and applicability of the proposed approach are numerically assessed using computer-generated noise-corrupted acceleration time-history data obtained by a simulation-based framework pertaining to a white-noise excited structural system with two closely-spaced modes of vibration carrying the same amount of energy, and a third isolated weakly excited vibrating mode. All three natural frequencies are accurately identified by sampling at as low as 78% below the Nyquist rate for signal-to-noise ratios as low as 0 dB (i.e., energy of additive white noise equal to the signal energy), suggesting that the proposed approach is robust and noise-immune while it can reduce data transmission requirements in acceleration wireless sensors for natural frequency identification of engineering structures.
... Moreover, RS, as a NUS process, is proposed in DASP in order to take advantage of its anti-aliasing and low-frequency sampling properties, which were proven by Shapiro and Silverman in [27], [28]. Finally, a new way of sampling sparse signals is proposed to decrease the data storage and the sampling rate, in order to minimize the energy consumed by sensors used in WSNs [29]. This sampling process is known as Compressed Sensing (CS); it is based on random sensing matrices [30], which is, in a sense, equivalent to RS. ...
... Therefore, CS is used in many different domains where data compression is needed or the number of measurements is limited. For instance, CS is an active research subject in Magnetic Resonance Imaging (MRI) [39], [40], astronomical imaging [41], microarray sequencing in biology [42], seismic imaging, and modal identification within civil engineering [38], as well as in many WSN-based solutions [29]. ...
... Most of the recent studies on RS are based on the theory developed by Frederick Beutler and Oscar Leneman in the 1960s, where they concluded some important properties concerning random point processes and their stationarity conditions [29]-[30]. Thus, they introduced the random impulse process as defined in (1.7). ...
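The additive random point process underlying RS, as studied by Beutler and Leneman, can be simulated directly. The mean rate, interval law, and names below are illustrative assumptions; exponential intervals give the Poisson (alias-free) case studied by Shapiro and Silverman.

```python
import numpy as np

rng = np.random.default_rng(4)

# Additive random sampling: t_k = t_{k-1} + tau_k, with i.i.d. positive
# inter-sample intervals tau_k. Exponential intervals yield Poisson
# sampling, whose spectrum is alias-free regardless of the mean rate.
mean_rate = 50.0                      # average samples per second
K = 2000
tau = rng.exponential(1.0 / mean_rate, size=K)
t = np.cumsum(tau)                    # strictly increasing sampling instants

# Sample a sinusoid at the random instants.
y = np.sin(2 * np.pi * 5.0 * t)

print("empirical mean rate:", K / t[-1])
```

Other RS modes (e.g., jittered uniform sampling) differ only in the law of the intervals `tau`, which is exactly the choice of mode the thesis below examines.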
Thesis
Nowadays, machine monitoring and supervision has become one of the most important domains of research. Many axes of exploration are involved in this domain: signal processing, machine learning and several others. Besides, industrial systems can now be remotely monitored thanks to internet availability. In fact, like many other systems, machines can now be connected to any network by a specified address owing to the Internet of Things (IoT) concept. However, this combination is challenging in terms of data acquisition and storage. In 2004, compressive sensing was introduced to provide data at a low rate in order to save energy within wireless sensor networks. The same goal can also be achieved using random sampling (RS). This approach is advantageous in acquiring data randomly at a low frequency (much lower than the Nyquist rate) while guaranteeing an aliasing-free spectrum. However, this method of sampling is not yet available as commercial hardware. Thus, a comprehensive review of its concept, its impact on the sampled signal and its hardware implementation is conducted. In this thesis, a study of RS and its different modes is presented, with their conditions and limitations in the time domain. A detailed examination of the spectral analysis of RS is then given, from which the features of RS are derived. Recommendations regarding the choice of the adequate mode with convenient parameters are also proposed. In addition, some spectral analysis techniques are proposed for RS signals in order to provide an enhanced spectral representation. To validate the properties of such sampling, simulation and practical studies are shown. The research concludes with an application to vibration signals acquired from a bearing and a gear. The obtained results are satisfying, which shows that RS is quite promising and can be adopted as a solution for reducing sampling frequencies and decreasing the amount of stored data.
In conclusion, RS is an advantageous sampling process due to its anti-aliasing property. Further studies can be conducted on reducing its added noise, which was proven to be cyclostationary of order 1 or 2 depending on the chosen parameters.
... The signal-based damage index method directly extracts dynamic characteristics (or dynamic fingerprints) that reflect structural damage from monitoring signals, and directly determines whether the structure is damaged according to changes in these characteristics. These dynamic characteristics include the structural frequency [8][9][10], structural mode shape (including the modal assurance criterion (MAC) and the coordinate modal assurance criterion (COMAC)) [11,12], strain mode or curvature mode [13][14][15][16], modal flexibility [17] (including the damage localization vector (DLV) method [18] and the uniform load surface method [19,20]), modal strain energy [21][22][23][24][25], etc. Farrar's damage test study on the I-40 bridge clarified that although various damage index methods can correctly diagnose the most severe damage conditions, their performance is not satisfactory when the damage is mild [26,27]. Several recent studies have demonstrated that environmental conditions, particularly changes in temperature, interfere significantly with damage diagnosis [28][29][30][31]. ...
... Step 8: recursively update W(t) and P(t). Step 9: solve for the system matrices A and C. Step 10: obtain the structural modal parameters f_t, ζ_t and φ_t at the current time t. Step 11: set t = t + 1 and repeat the solution process of Steps 7 to 10 until the end of the data period. ...
Article
Full-text available
In bridge health monitoring, in order to closely monitor structural state changes of the bridge under heavy traffic load and other harsh environments, the monitoring system is required to track the evolution of structural modal parameters. Given the variables monitored during bridge operation, the evaluation needs to be completed by recursive identification of modal parameters under environmental excitation, especially recursive versions of the stochastic subspace method, which offers high identification accuracy. We have studied recursive identification methods for the covariance-driven and data-driven stochastic subspace categories respectively, established the corresponding recursive formats, and used the model structure of the ASCE structural health monitoring benchmark problem as a numerical example to verify the reliability of the proposed methods. First, based on the similar interference environment of the observation data at the same time instant, a reference-point covariance-driven stochastic subspace recursive algorithm (IV-RSSI/Cov) based on the instrumental variable projection approximation subspace tracking (IV-PAST) algorithm is established, and the recursive format of the system matrix and modal parameters is obtained. Based on Givens rotations, the rank-2 update form of the row-space projection matrix is established, and the recursive format of the data-driven recursive stochastic subspace method (RSSI/Data) under the PAST algorithm is obtained. Then, based on the ASCE-SHM benchmark problem, the response of the model structure under environmental excitation is numerically simulated; the frequency, damping ratio and mode shapes of the structure are recursively tracked; and their reliability and shortcomings are studied. After improving the recursive method, the frequency tracking accuracy is improved, with a maximum accuracy of 99.8%.
... The use of complex self-powered energy modules and high-performance processors increases the size and power consumption of microsystems, while the use of low-power components reduces system performance. A microsystem with limited performance, such as low-precision sensing and low computing power, is hindered in its signal sensing and target recognition capabilities [13]. In many cases, data acquired from microsystem sensors are sent to the parent system for further processing, which places a burden on the parent system [14][15][16]. ...
... During this process, due to the interference of the propagation properties of the medium, sensor errors, and other vibration source disturbances, the signal-to-noise ratio of the target vibration signal decreases rapidly as the distance between the sensor and the target increases, which affects the detection and recognition of the target. Selecting appropriate methods to preprocess the original data can significantly improve signal quality [13]. In this paper, mean filtering and the autocovariance method are used for preprocessing, with small computational complexity, as shown in Figure 6a. ...
Article
Full-text available
Microsystems play an important role in the Internet of Things (IoT). In many unattended IoT applications, microsystems with small size, light weight, and long life are urgently needed to achieve covert, large-scale, and long-term deployment for target detection and recognition. This paper presents, for the first time, a low-power, long-life microsystem that integrates self-powering, event wake-up, continuous vibration sensing, and target recognition. The microsystem is mainly used for unattended long-term target perception and recognition. A composite energy source of solar energy and battery is designed to achieve self-powering. The microsystem's sensing module, circuit module, signal processing module, and transceiver module are optimized to further realize small size and low power consumption. A low-computation recognition algorithm based on support vector machine learning is designed and ported onto the microsystem. Taking pedestrians, wheeled vehicles, and tracked vehicles as targets, the proposed microsystem, measuring 15 cm3 and 35 g, successfully realizes target recognition both indoors and outdoors with accuracy rates of over 84% and 65%, respectively. Self-powering of the microsystem delivers up to 22.7 mW under midday sunlight, and 11 min of self-powering can maintain 24 h of operation of the microsystem in sleep mode.
... This framework is based on an intimate combination of compressive sampling theory with blind source separation (BSS)-based [45] techniques, implemented through a low-rate, random sampling of the frames. CS enables low-rate random sampling, and has been demonstrated in structural dynamics and health monitoring applications [46][47][48][49][50][51]. Compressive sampling is able to exactly recover a sparse signal from far fewer incoherent random measurements than is suggested by the Nyquist sampling theorem. ...
... On closer inspection, one can see that the algorithm for full-field high-resolution structural identification inherently possesses properties of compressive sampling similar to those found in [47]. In fact, the algorithm in its current form allows automatic estimation of the mode shapes when only a randomly selected subset of the frames is used for analysis. ...
Article
Full-text available
Video-based techniques for identification of structural dynamics have the advantage that they are very inexpensive to deploy compared to conventional accelerometer or strain gauge techniques. When structural dynamics identification from video is accomplished using full-field, high-resolution analysis techniques that apply algorithms such as principal component analysis and solutions to blind source separation to the pixel time series, the added benefit of high-resolution, full-field modal identification is achieved. An important property of video of vibrating structures is that it is particularly sparse. Typically, video of vibrating structures has a dimensionality of many thousands or even millions of pixels and hundreds to thousands of frames; however, the motion of the vibrating structure can be described using only a few mode shapes and their associated time series. As a result, emerging techniques for sparse and random sampling, such as compressive sensing, should be applicable to performing modal identification on video. This work presents how full-field, high-resolution structural dynamics identification frameworks can be coupled with compressive sampling. The techniques described in this work are demonstrated to recover mode shapes from experimental video of vibrating structures when 70% to 90% of the frames captured in the conventional manner are removed.
... Still, wireless sensors are constrained by frequent battery replacement requirements, leading to increased maintenance costs, while their bandwidth limitations restrict the amount of acceleration measurements that can be reliably transmitted. To this end, it has recently been established that the above disadvantages of wireless sensors may be alleviated by developing compressive system identification approaches using acceleration measurements sampled at rates significantly below the nominal application-dependent Nyquist rate [13][14][15][16][17][18][19][20][21][22]. ...
... directly from sub-Nyquist measurements, with natural frequencies seen as by-products or completely overlooked [14,15]. In other approaches, the sub-Nyquist sampled data are post-processed using various off-line computational techniques to recover/estimate either Nyquist-sampled time-histories [13,16,19] or their Fourier-based power spectral density function (PSD) [17,18,20,21,22], from which natural frequencies may be estimated using standard time-domain or frequency-domain OMA system identification, respectively. ...
Article
Full-text available
Motivated by practical needs to reduce data transmission payloads in wireless sensors for vibration-based monitoring of engineering structures, this paper proposes a novel approach for identifying resonant frequencies of white-noise excited structures using acceleration measurements acquired at rates significantly below the Nyquist rate. The approach adopts the deterministic co-prime sub-Nyquist sampling scheme, originally developed to facilitate telecommunication applications, to estimate the autocorrelation function of response acceleration time-histories of low-amplitude white-noise excited structures treated as realizations of a stationary stochastic process. Next, the standard multiple signal classification (MUSIC) spectral estimator is applied to the estimated autocorrelation function, enabling the identification of structural natural frequencies with high resolution by simple peak picking in the frequency domain, without posing any sparsity conditions on the signals. This is achieved by processing autocorrelation estimates without undertaking any (typically computationally expensive) signal reconstruction step in the time domain, as required by various sub-Nyquist compressive sensing-based approaches for structural health monitoring recently proposed in the literature, while filtering out any broadband noise added during data acquisition. The accuracy and applicability of the proposed approach are first numerically assessed using computer-generated noise-corrupted acceleration time-history data obtained by a simulation-based framework examining white-noise excited structural systems with two closely-spaced modes of vibration carrying the same amount of energy, and a third isolated weakly excited vibrating mode.
Further, the damage detection potential of the developed method is numerically illustrated using a white-noise excited reinforced concrete 3-storey frame in a healthy and two damaged states caused by ground motions of increased intensity. The damage assessment relies on shifts in natural frequencies between the pre-earthquake and post-earthquake states. Overall, numerical results demonstrate that the considered approach can accurately identify structural resonances and detect structural damage associated with changes to natural frequencies as minor as 1% by sampling up to 78% below the Nyquist rate for signal-to-noise ratios as low as 10 dB. These results suggest that the adopted approach is robust and noise-immune while reducing data transmission requirements in acceleration wireless sensors for natural frequency identification and damage detection in engineering structures.
(Gkoktsi K. and Giaralis A. (2020) A compressive MUSIC spectral approach for identification of closely-spaced structural natural frequencies and post-earthquake damage detection. Probabilistic Engineering Mechanics, accepted 27/12/2019.)
... Although structural responses are not sparse in the time domain, they can be readily transformed into another domain using sparsifying dictionaries. The sparsity of vibration data in the transformed domain paves the way for the application of advanced signal processing tools like Sparse Component Analysis (SCA) and Compressive Sensing (CS) in structural engineering [17,18]. In this study, CS is applied to ü(t), u̇(t) and u(t). Compressive sensing attempts to capture potentially sparse signals with far fewer measurements. ...
... because there is no randomness in the associated encoding stage, all iterations lead to the same results. The success of recovery is measured by Eq. (18). ...
Article
Compressive Sensing (CS) is an emerging signal sampling technique, which can be useful in the long-term monitoring of civil infrastructure by drastically reducing the required storage space and transmission bandwidth. In this paper, a fully deterministic approach is adopted to enhance previously proposed CS-based methods. First, the sparsifying dictionary is trained using a vibration data set, and a deterministic projection matrix is computed based on the trained dictionary. Second, a new index is defined to determine the number of measurements in advance, without any trial and error in the reconstruction stage. This index, coined the Normalized Power Index (NPI), is derived using the singular value decomposition of the trained dictionary. The capability of the proposed method is investigated using vibration signals of the Tianjin Yonghe Bridge under traffic excitation. Both the accuracy and the computational time of the deterministic CS were compared to those of different data compression algorithms.
... Sparse recovery assuming a DFT expansion basis, as well as an empirically specified level of sparsity, was applied to the compressed data to estimate the response acceleration power spectral density (PSD) matrix. The latter matrix was used in conjunction with the standard frequency domain decomposition (FDD) algorithm to extract mode shapes and natural frequencies within OMA. Yang and Nagarajaiah (2015) and Park et al. (2014) contributed two significantly different approaches for mode shape estimation from CS-based non-uniform-in-time random sampling of structural vibration time-histories at sub-Nyquist rates. In Yang and Nagarajaiah (2015), mode shape estimation relies on modal structural responses obtained by applying blind source separation directly to the compressed measurements of structural response signals. ...
... Yang and Nagarajaiah (2015) and Park et al. (2014) contributed two significantly different approaches for mode shape estimation from CS-based non-uniform-in-time random sampling of structural vibration time-histories at sub-Nyquist rates. In Yang and Nagarajaiah (2015), mode shape estimation relies on modal structural responses obtained by applying blind source separation directly to the compressed measurements of structural response signals. Sparse signal recovery (reconstruction) in the time domain is then applied to each compressed modal response vector to retrieve the underlying structural natural frequencies and modal damping ratios. ...
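The recovery step described in this snippet — reconstructing a uniform-rate signal from non-uniform sub-Nyquist random samples by exploiting frequency-domain sparsity — can be sketched as follows. Orthogonal matching pursuit is used here as a generic stand-in for the l1 solver used in the cited work, and all signal parameters (rates, sparsity level) are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse coefficient
    vector from y = A x (stand-in for l1 minimization)."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x[support] = coef
    return x

# Modal response sparse in the DFT basis, observed at random time instants.
N, fs = 256, 64.0                       # uniform grid length and rate (Hz)
n = np.arange(N)
f_true = 5.0                            # modal frequency (Hz)
signal = np.cos(2 * np.pi * f_true * n / fs)
Psi = np.fft.ifft(np.eye(N)) * np.sqrt(N)   # orthonormal inverse-DFT basis
rng = np.random.default_rng(1)
idx = np.sort(rng.choice(N, size=48, replace=False))  # ~19% random samples
y = signal[idx].astype(complex)
x_hat = omp(Psi[idx, :], y, k=2)        # a real cosine occupies 2 DFT bins
freqs = np.fft.fftfreq(N, d=1 / fs)
f_est = abs(freqs[int(np.argmax(np.abs(x_hat)))])
```

Once the coefficients are recovered, the full uniform-rate time-history follows as `Psi @ x_hat`, from which frequency and damping can be extracted by conventional means.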
Article
Full-text available
The consideration of wireless acceleration sensors is highly promising for cost-effective output-only system identification in the context of operational modal analysis (OMA) of large-scale civil structures, as they alleviate the need for wiring. However, practical monitoring implementations for OMA using wireless units suffer from a number of drawbacks related to the wireless transmission of densely sampled acceleration time-series, including the limited energy self-sustainability of the sensing nodes. In this work, two recently proposed approaches for output-only modal identification addressing the above issues through balancing monitoring accuracy with data transmission costs are comparatively studied and numerically assessed using field-recorded acceleration datasets from two different structures: (i) an operating on-shore wind turbine, and (ii) an open-to-traffic highway bridge. One approach utilizes non-uniform-in-time deterministic multi-coset sampling at sub-Nyquist rates to capture structural response acceleration time-series under ambient excitation, assuming stationary signal conditions. In this approach, a power spectrum blind sampling technique is used to estimate the response acceleration power spectral density matrix from the low-rate sampled measurements and is coupled with the Frequency Domain Decomposition method of OMA. The other is a spectro-temporal compressive sensing approach which recovers response acceleration signals through time-series reconstruction in the time domain from sub-Nyquist non-uniform-in-time randomly sampled measurements. Prior knowledge of signal structure in the spectral domain is exploited through smart on-sensor operations and sensor/server communication. The benefits and limitations of the considered approaches are discussed and demonstrated by processing the field-recorded datasets for different levels of signal compression and by estimating the battery lifetime gains at a single sensor achieved by reduced data transmission.
It is concluded that the two approaches are readily applicable in OMA of large-scale structures and can be used complementarily depending on the requirements of any particular acceleration monitoring campaign: time-series extraction for further interrogation versus solely modal properties estimation.
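The Frequency Domain Decomposition step that the first approach feeds into can be sketched in a few lines: estimate the cross-PSD matrix of the multichannel record and take the first left singular vector at a spectral peak as the mode-shape estimate. This is a minimal FDD sketch on synthetic two-channel data; the windowing choices and the test signal are assumptions.

```python
import numpy as np
from scipy.signal import csd

def fdd_mode_shape(Y, fs, nperseg=512):
    """Minimal FDD: locate the dominant peak of the first singular value of
    the cross-PSD matrix and return the peak frequency and mode shape."""
    n_ch = Y.shape[0]
    f, _ = csd(Y[0], Y[0], fs=fs, nperseg=nperseg)
    G = np.zeros((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(Y[i], Y[j], fs=fs, nperseg=nperseg)
    s1 = np.array([np.linalg.svd(Gk, compute_uv=False)[0] for Gk in G])
    k_peak = int(np.argmax(s1))
    U, _, _ = np.linalg.svd(G[k_peak])
    return f[k_peak], U[:, 0]      # peak frequency, mode-shape estimate

# Two channels vibrating in phase at 3 Hz with shape [1, 0.5], plus noise.
fs = 100.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(2)
q = np.sin(2 * np.pi * 3.0 * t)
Y = np.outer([1.0, 0.5], q) + 0.05 * rng.standard_normal((2, t.size))
f_peak, phi = fdd_mode_shape(Y, fs)
```

In the multi-coset approach above, the same SVD step is applied, only with the PSD matrix estimated blindly from low-rate samples rather than via Welch averaging.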
... An over-smooth solution is inconsistent with structural local damages because local damage cases have few element stiffness changes, and the distribution of damage parameters is sparse. Recently, the concept of sparsity has been applied to structural health monitoring in the form of l1-norm regularization [33][34][35]. Studies have shown that sparse regularization is superior to the classical sensitivity method for structural local damages. ...
... However, in accordance with the probability density function presented in Figure 22, the total observation noise can be considered similar to Gaussian noise. Therefore, the damage identification problem in Equations (35) and (36) can be analyzed directly by the proposed EKF algorithm. Notably, the response u_r related to road roughness will increase for poor road conditions, and the resulting increase in the difference between the observation noise Ṽ and standard Gaussian noise may lead to a decrease in identification accuracy. ...
Article
An innovative damage detection method for bridge structures under moving vehicular load is proposed on the basis of the extended Kalman filter (EKF) and l1-norm regularization. An augmented state vector includes the structural damage parameters and the motion state variables of the bridge and vehicle. Through a recursive process of the EKF, the structural damage parameters and state variables of a bridge are updated continually to obtain an optimal estimate using bridge responses due to a moving vehicle. The distribution of element stiffness reduction in a structure with local damage is sparse. Thus, l1-norm regularization is introduced into the updating process of the EKF using pseudo-measurement (PM) technology to improve the ill-posedness of the inverse problem. Numerical studies on simply supported and continuous beam bridge decks with a smooth road surface, subjected to a moving vehicle, are performed to test the proposed approach. Furthermore, exploiting the robustness of the EKF, the proposed algorithm is applied as a simplified method to the case where a bridge deck with road roughness is considered. Results show that the proposed identification algorithm is robust and effective for different vehicle speeds and measurement noises under smooth and good road conditions.
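The l1 pseudo-measurement idea can be sketched as an extra Kalman update against the fictitious observation 0 = ||θ||₁ + v. This is a generic illustration of the PM technique, not the paper's full vehicle-bridge EKF; the variable names and noise values are assumed.

```python
import numpy as np

def l1_pm_update(x, P, idx, r_pm=1e-2):
    """One l1 pseudo-measurement update: treat 0 = sum(|x[idx]|) + v,
    v ~ N(0, r_pm), as an extra observation so the filter drives the
    damage parameters x[idx] toward a sparse estimate. (Generic sketch.)"""
    H = np.zeros((1, x.size))
    H[0, idx] = np.sign(x[idx])            # Jacobian of the l1 "measurement"
    innovation = -np.sum(np.abs(x[idx]))   # observed 0 minus predicted value
    S = H @ P @ H.T + r_pm                 # innovation covariance
    K = P @ H.T / S                        # Kalman gain
    x = x + (K * innovation).ravel()
    P = (np.eye(x.size) - K @ H) @ P
    return x, P

# Repeated PM updates shrink a mostly-zero damage vector toward sparsity.
x = np.array([0.4, 0.0, 0.25, 0.0])        # hypothetical damage parameters
P = 0.01 * np.eye(4)
for _ in range(5):
    x, P = l1_pm_update(x, P, idx=[0, 1, 2, 3])
```

In the actual method, this update is interleaved with the regular EKF prediction/correction steps driven by the measured bridge responses.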
... To further increase the data-reconstruction accuracy of CS in SHM, Bao et al. [18] have proposed a group sparse optimization algorithm that considers the group sparsity of the structure vibration signal (the measured vibration data at different locations of a structure has a very similar sparse structure in the frequency domain) for CS data reconstruction; this algorithm will be discussed in Section 2. Using the idea of random data sampling and the data reconstruction of CS theory, Bao et al. [19] and Zou et al. [20] have proposed CS-based data-loss recovery methods for wireless sensors and sensor networks. The CS method has also been utilized for system identification tasks, such as structural modal identification, structural damage identification, and load identification [21][22][23][24][25][26][27][28][29][30]. For structural modal identification, modal parameters are directly identified from the compressed measurements [21,22]; however, for structural damage identification and load identification, the spatial sparsity of structural damage and load distributions is utilized to solve optimization problems involved in the identification [23][24][25][26][27][28][29][30]. ...
... The CS method has also been utilized for system identification tasks, such as structural modal identification, structural damage identification, and load identification [21][22][23][24][25][26][27][28][29][30]. For structural modal identification, modal parameters are directly identified from the compressed measurements [21,22]; however, for structural damage identification and load identification, the spatial sparsity of structural damage and load distributions is utilized to solve optimization problems involved in the identification [23][24][25][26][27][28][29][30]. ...
Article
Full-text available
Structural health monitoring (SHM) is a multi-discipline field that involves the automatic sensing of structural loads and response by means of a large number of sensors and instruments, followed by a diagnosis of the structural health based on the collected data. Because an SHM system implemented into a structure automatically senses, evaluates, and warns about structural conditions in real time, massive data is a significant feature of SHM. The techniques related to massive data are referred to as data science and engineering, and include acquisition techniques, transition techniques, management techniques, and processing and mining algorithms for massive data. This paper provides a brief review of the state of the art of data science and engineering in SHM as investigated by these authors, and covers the compressive sampling-based data-acquisition algorithm, the anomaly data diagnosis approach using a deep learning algorithm, crack identification approaches using computer vision techniques, and condition assessment approaches for bridges using machine learning algorithms. Future trends are discussed in the conclusion.
... For displacement measurements obtained from radar, RTK-GNSS, or vision cameras, it is not possible to use an anti-aliasing filter. Some techniques have attempted to address the temporal aliasing issue when using non-uniform [24] or uniform [25] low-rate vision measurements, but they focus only on structural modal identification. ...
Article
This paper proposes an FIR filter-based two-stage fusion technique for high-sampled (HS) structural displacement estimation using HS acceleration and temporally aliased low-sampled (TLS) displacement measurements. First, the temporally aliased error in the TLS displacement measurement is estimated using the acceleration measurement and then eliminated to obtain an anti-aliased low-sampled (ALS) displacement. Next, a low-frequency displacement is estimated from the ALS displacement, and a high-frequency displacement is estimated from the HS acceleration measurement. Finally, the HS displacement is estimated by combining the estimated low- and high-frequency displacements. The proposed technique is also applied to estimate the HS structural displacement by fusing a vision camera and an accelerometer. An automated algorithm is proposed to estimate a scale factor for converting a translation in a pixel unit to a displacement in a length unit and to align measurements of two sensors using short-period HS acceleration and TLS vision measurements. The performance of the proposed technique was numerically and experimentally validated. A significant improvement in the displacement estimation accuracy was achieved compared to existing FIR filter-based techniques owing to the explicit elimination of the temporally aliased error.
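The low-/high-frequency fusion idea can be illustrated with an idealized frequency-domain split. The paper uses FIR filters; the spectral masks, crossover frequency, and test signal below are assumptions for a noise-free, periodic case, chosen so that double integration of acceleration is exact.

```python
import numpy as np

fs, T = 100.0, 10.0
t = np.arange(0, T, 1 / fs)
f_lo, f_hi = 0.2, 8.0                    # slow drift + fast vibration (Hz)
d_true = np.sin(2 * np.pi * f_lo * t) + 0.05 * np.sin(2 * np.pi * f_hi * t)
# Analytic acceleration of the same motion.
a = -(2 * np.pi * f_lo) ** 2 * np.sin(2 * np.pi * f_lo * t) \
    - 0.05 * (2 * np.pi * f_hi) ** 2 * np.sin(2 * np.pi * f_hi * t)

freqs = np.fft.fftfreq(t.size, d=1 / fs)
cut = 1.0                                # crossover frequency (Hz)

# Low-frequency part from the (aliasing-corrected) displacement channel.
D = np.fft.fft(d_true)
d_low = np.fft.ifft(np.where(np.abs(freqs) < cut, D, 0)).real

# High-frequency part by double integration of acceleration in frequency domain.
A = np.fft.fft(a)
with np.errstate(divide="ignore", invalid="ignore"):
    Dh = np.where(np.abs(freqs) >= cut, A / -((2 * np.pi * freqs) ** 2), 0)
d_high = np.fft.ifft(Dh).real

d_est = d_low + d_high                   # fused high-sampled displacement
```

With real measurements, the FIR formulation of the paper handles finite records, noise, and the aliased error explicitly; the spectral version above only conveys the complementary split.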
... To address these issues, this paper introduces the CP algorithm for extracting structural damage sources and establishes the damage thresholds using the extreme value theory. CP as a new BSS technology has been introduced in studies [30][31][32][33] to decompose the vibrational response of structures into individual modal contributions. Compared to other renowned BSS methods, such as ICA and SOBI, the CP algorithm has been proven to have numerous advantages, including computational efficiency, user-friendliness, and better separation capabilities for tightly spaced and highly correlated source signals. ...
Article
Full-text available
Bridge structures are susceptible to environmental and operational variations (EOVs). Improperly handling these influences may result in incorrect assessments of the bridge’s health condition. Blind source separation (BSS) techniques show promising potential in suppressing the effects of EOVs. However, major challenges such as high data variability, difficulty in parameter selection, lack of reliable decision thresholds, and practical engineering validation have seriously hindered the application of such techniques in bridge health monitoring. Consequently, this paper proposes a new method for bridge damage detection that combines complexity pursuit (CP) and extreme value theory (EVT). This method first uses the exponentially weighted moving average (EWMA) technique to preprocess the measured modal frequencies. The CP algorithm and information entropy are then used to extract structural damage sources from the preprocessed data automatically. Based on the extracted structural damage sources, the damage index (DI) is defined using k-means clustering and Euclidean distance. Following that, the generalized extreme value (GEV) distribution is used to fit the DI data under the normal condition of the bridge, and the damage detection threshold is given according to the fitted distribution. Benchmark data of the KW51 railway bridge are considered to verify the effectiveness of the proposed method along with several comparative studies. The results show that even under strong EOV influences, the proposed method still maintains good damage detection accuracy and robustness, and its effectiveness is superior to some well-known damage detection methods.
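The EWMA preprocessing and GEV thresholding steps can be sketched as follows, assuming a synthetic healthy-state damage-index stream and a block-maxima fit; the 99% quantile as alarm threshold and all numerical values are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.stats import genextreme

def ewma(x, lam=0.1):
    """Exponentially weighted moving average preprocessing."""
    out = np.empty(len(x))
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = lam * x[i] + (1 - lam) * out[i - 1]
    return out

rng = np.random.default_rng(3)
# Synthetic damage-index stream under the healthy (baseline) condition.
di = ewma(np.abs(rng.normal(size=2000)))
block_max = di.reshape(40, 50).max(axis=1)     # block maxima for EVT
c, loc, scale = genextreme.fit(block_max)      # fit a GEV distribution
threshold = genextreme.ppf(0.99, c, loc=loc, scale=scale)
```

A new block maximum exceeding `threshold` would then flag a potential damage state, with the false-alarm rate controlled by the chosen quantile.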
... The utilization of high-performance processors and high-precision sensors increases the size and power consumption of microsystems. In contrast, the use of low-power devices, such as low-performing processors and low-precision sensors, impedes their intelligent sensing and processing capabilities [8]. Sending the data collected by microsystems to high-performance equipment for further processing results in an increase in communication costs and response time. ...
Article
Full-text available
Hou, L.; Duan, W.; Xuan, G.; Xiao, S.; Li, Y.; Li, Y.; Zhao, J. Intelligent Microsystem for Sound Event Recognition in Edge Computing Using End-to-End Mesh Networking. Sensors 2023, 23, 3630. Wireless acoustic sensor networks (WASNs) and intelligent microsystems are crucial components of the Internet of Things (IoT) ecosystem. In various IoT applications, small, lightweight, and low-power microsystems are essential to enable autonomous edge computing and networked cooperative work. This study presents an innovative intelligent microsystem with wireless networking capabilities, sound sensing, and sound event recognition. The microsystem is designed with optimized sensing, energy supply, processing, and transceiver modules to achieve small size and low power consumption. Additionally, a low-complexity sound event recognition algorithm based on a Convolutional Neural Network has been designed and integrated into the microsystem. Multiple microsystems are connected using low-power Bluetooth Mesh wireless networking technology to form a meshed WASN, which is easily accessible, flexible to expand, and straightforward to manage with smartphones. The microsystem is 7.36 cm³ in size and weighs 8 g without housing. The microsystem can accurately recognize sound events in both trained and untrained data tests, achieving an average accuracy of over 92.50% for alarm sounds above 70 dB and water-flow sounds above 55 dB. The microsystems can communicate wirelessly over a direct range of 5 m. They can be applied in the field of home IoT and border security.
... Mascarenas et al. [36] heuristically selected a fixed value of 1.0 as the l1-regularization factor for compressed sensing data reconstruction in structural health monitoring. Yang and Nagarajaiah [37] adopted the l1-regularized least-squares algorithm to solve the modal identification problem with a regularization parameter of 0.01, and the solution was not sensitive to the regularization parameter. Wang and Lu [38] proposed a threshold-setting method to properly determine the sparse regularization parameter for a damage identification approach that combined incomplete modal data with sparse regularization. ...
Article
Full-text available
Structural damage detection is usually an ill-posed inverse problem due to the contamination of measurement noise and model error in structural health monitoring. To deal with the ill-posed damage detection problem, l2-regularization is widely used. However, l2-regularization tends to provide nonsparse solutions and distribute identified damage to many undamaged elements, potentially leading to false alarms. Therefore, an adaptive sparse regularization method is proposed, which considers spatially sparse damage as a prior constraint since structural damage often occurs in some locations with stiffness reduction at the sparse elements out of the large total number of elements in an entire structure. First, a response covariance-based convex cost function is established by incorporating an l1-regularized term and an adaptive regularization factor to formulate the sparse regularization-based damage detection problem. Then, optimal sensor placement is conducted to determine the optimal measurement locations where the acceleration responses are adopted for computing the response covariance-based damage index and cost function. Further, the predictor-corrector primal-dual path-following approach, an efficient and robust convex optimization algorithm, is applied to search for solutions to the damage detection problem. Finally, a comparison study with the Tikhonov regularization-based damage detection method is conducted to examine the performance of the proposed adaptive sparse regularization-based method by using an overhanging beam model subjected to different damage scenarios and noise levels. The numerical study demonstrates that the proposed method can effectively and accurately identify damage under multiple damage scenarios with various noise levels, and it outperforms the Tikhonov regularization-based method in terms of high accuracy and few false alarms. 
The analyses on time consumption, adaptiveness of the sparse regularization factor, model-error resistance, and sensor number influence are conducted for further discussions of the proposed method.
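The l1-regularized convex problem at the heart of such methods can be solved with a basic proximal-gradient (ISTA) loop. This is a generic stand-in for the paper's predictor-corrector primal-dual path-following solver, with illustrative problem sizes and a fixed regularization weight rather than the paper's adaptive factor.

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=1000):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by iterative soft-thresholding
    (a generic stand-in for the paper's primal-dual solver)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - b) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((60, 100)) / np.sqrt(60)   # sensitivity-like matrix
x_true = np.zeros(100)
x_true[[7, 42, 77]] = [1.0, -0.8, 0.6]             # 3 damaged elements of 100
b = A @ x_true
x_hat = ista(A, b)
support = set(np.where(np.abs(x_hat) > 0.3)[0])    # identified damage locations
```

The soft-thresholding step is exactly what makes the solution sparse: coefficients whose gradient-updated magnitude falls below lam/L are zeroed, which is why l1 regularization avoids the smeared solutions of l2 (Tikhonov) regularization discussed above.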
... In practice, for mechanical systems, responses can only be measured at specific strategic locations owing to limitations in the acquisition hardware, and internal full-model operators are seldom available. Sparse system identification techniques such as blind source separation [34,35,63,64], sparse component analysis [65,66], empirical mode decomposition [67-69], and the synchro-squeezed transform [70,71] have been effectively applied for modal identification of dynamical systems with limited measurements, in the context of structural health monitoring. However, incorporating these techniques in the framework of DMD is challenging; hence, researchers have taken inspiration from Takens' embedding theorem [72] and proposed applying the DMD procedure on time-shifted coordinates [40,50,54-56]. ...
Article
Dynamic mode decomposition (DMD) has emerged as a popular data-driven modeling approach to identifying spatio-temporal coherent structures in dynamical systems, owing to its strong relation with the Koopman operator. For dynamical systems with external forcing, the identified model should not only be suitable for a specific forcing function but should generally approximate the input–output behavior of the underlying dynamics. A novel methodology for modeling those classes of dynamical systems is proposed in the present work, using wavelets in conjunction with the input–output dynamic mode decomposition (ioDMD). The wavelet-based dynamic mode decomposition (WDMD) builds on the ioDMD framework without the restrictive assumption of full state measurements. Our non-intrusive approach constructs numerical models directly from trajectories of the full model’s inputs and outputs, without requiring the full-model operators. These trajectories are generated by running a simulation of the full model or observing the original dynamical systems’ response to inputs in an experimental framework. Hence, the present methodology is applicable for dynamical systems whose internal state vector measurements are not available. Instead, data from only a few output locations are only accessible, as often the case in practice. The present methodology’s applicability is explained by modeling the input–output response of an Euler–Bernoulli finite element beam model. The WDMD provides a linear state-space representation of the dynamical system using the response measurements and the corresponding input forcing functions. The developed state-space model can then be used to simulate the beam’s response towards different types of forcing functions. 
The method is further validated on a real (experimental) data set using modal analysis on a simple free–free beam, demonstrating the efficacy of the proposed methodology as an appropriate candidate for modeling practical dynamical systems despite having no access to internal state measurements and treating the full model as a black-box.
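The exact-DMD core that WDMD builds on can be sketched directly: SVD of the snapshot matrix, a projected low-rank operator, and its eigenvalues as the identified discrete-time dynamics. The six-channel linear system below is an assumed toy example standing in for output measurements of a structure.

```python
import numpy as np

def dmd(X1, X2, r):
    """Exact DMD: fit X2 ≈ A X1 with a rank-r operator; return its
    discrete-time eigenvalues and spatial modes."""
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, V = U[:, :r], s[:r], Vh[:r].conj().T
    A_tilde = U.conj().T @ X2 @ V / s      # projected low-rank operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (X2 @ V / s) @ W               # exact DMD modes
    return eigvals, modes

# Snapshots of a lightly damped 1.5 Hz oscillation seen through 6 channels.
dt = 0.1
theta, rho = 2 * np.pi * 1.5 * dt, 0.99   # per-step rotation and decay
A2 = rho * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
rng = np.random.default_rng(5)
C = rng.standard_normal((6, 2))            # observation map to 6 outputs
states = np.empty((2, 121))
states[:, 0] = [1.0, 0.0]
for k in range(120):
    states[:, k + 1] = A2 @ states[:, k]
Y = C @ states
eigvals, modes = dmd(Y[:, :-1], Y[:, 1:], r=2)
freqs_id = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)   # identified Hz
```

The eigenvalue magnitude encodes damping and its angle encodes frequency; WDMD augments this core with wavelet coordinates and input terms to handle forced systems with few output channels.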
... Blind source separation is a signal processing technique that separates the source signals from the observed mixed signals when both the mixing mode and the source signals are unknown [1]. It has been widely used in a variety of fields, including speech signal processing [2], fault diagnosis [3], electromagnetic signal processing [4], and biomedical signal processing [5]. Since the number of potential source signals is generally unknown, the number of observation devices is usually smaller than the number of source signals; hence, research on underdetermined blind source separation (UBSS) is essential [6]. ...
Article
Full-text available
It is essential to accurately estimate the mixing matrix and determine the number of source signals in the problem of underdetermined blind source separation. The problem is solved in this paper via sparse subspace clustering, which can be used to find low-dimensional data structures in observed data. To enhance the linear clustering characteristics of time-frequency points, the high-energy points are retained first, and the angle difference of the real and imaginary parts is employed to screen single-source points. After that, the time-frequency points are clustered using sparse subspace clustering, and the number of source signals is identified. Finally, the local density of eigenvectors is used to determine the mixing matrix. The proposed algorithm is capable of accurately estimating the mixing matrix; it has strong robustness and is adaptable to a wide range of mixing conditions. The proposed method's effectiveness is demonstrated by theoretical analysis and experimental data.
... As particular applications, CS has been employed to collect vibration data of cantilever beams [61-63], concrete slabs [64], bridges [65], steel pipes [66], and railroad systems [67,68]. Furthermore, bases other than the frequency domain and mode shapes [69] have been employed to express SHM/NDE data in a sparse format. These bases include a frequency-warped basis [70], Fourier [71,72], the discrete Fourier transform [73], the discrete cosine transform [74], adaptive wavelets [75], and trained dictionaries [76,77]. ...
Article
This paper aims to review high-dimensional data analytic (HDDA) methods for structural health monitoring (SHM) and non-destructive evaluation (NDE) applications. High-dimensional data is a type of data in which the number of features for each observation is much larger than the number of all observations. High-dimensional data may violate assumptions of the classic methods for statistical modeling and data analysis. Then, classic statistical modeling will no longer be applicable. HDDA methods were developed to overcome this challenge and analyze these types of data. In the field of SHM/NDE, there are several sources of high-dimensionality. Examples include a large number of data points in continuous waves/signals or high-resolution images/videos. HDDA methods are used as a dimension-reduction tool to preprocess data for further analysis, or they are directly implemented for damage detection and localization. This paper reviews six HDDA methods as well as existing and potential applications in SHM/NDE. Particularly, this paper discusses the vast range of implemented SHM/NDE applications from crack detection to missing data imputation. Furthermore, experimental and simulated datasets have been used to show the application of HDDA methods as hands-on examples. It is shown that the potential of HDDA for SHM/NDE studies is significantly more than the existing studies in the literature, and these methods can be used as a powerful tool that provides vast opportunities in SHM/NDE.
... CS has also been employed for memory-efficient and economical wireless data transmission, where only a few measurements are transmitted wirelessly and CS reconstructs the full signal from those few measurements at the data storage end [38-40]. The health monitoring community also uses CS for modal identification from low-rate samples [41,42], data compression [43,44], spatio-temporal full-field data compression [45], and the compression and transmission of images taken from UAVs and robotic sensors [46]. In wireless systems, the recovery of lost data packets by CS has also been studied [32,47,48]. ...
Article
Full-text available
Stay-cables in a cable-stayed bridge are among the most vital components, as they carry the bridge deck's load and transmit the forces to the bridge pylons. However, dynamic loads due to vortex-induced vibration, ambient wind excitation, and even vehicular vibration cause fatigue in the stay-cables. Hence, continuous real-time performance monitoring of such cables is necessary for maintenance, to avoid any kind of damage to the cable. Wireless sensors are contact-based sensors that provide accurate measurements without the wiring cost of conventional wired sensors. Monitoring cable health using such wireless sensors is a good choice, provided the packet loss that invariably occurs while transmitting the measured data to the base station is addressed by data processing. Such discontinuity in the data (due to packet loss) may interrupt the real-time/online cable health monitoring process, depending on the window length of the data loss. In general, online health monitoring using multiple sensors reduces the estimation errors. In this paper, we propose a framework that takes the wireless sensor data as input, reconstructs the samples lost to packet loss (if any), and finally provides a real-time tension estimate as output. The framework first adopts a compressive sensing algorithm to reconstruct the data lost to packet loss. Subsequently, we synthesize the reconstructed responses from multiple sensors to estimate the real-time frequency variation using a blind source separation (BSS) technique. As the cable response due to ambient vibration contains a large number of modes, the dominant modal response, or the corresponding dominant frequency, is estimated from very few measurements using a variant of BSS named Sparse Component Analysis (SCA). Finally, the real-time cable tension is estimated from the frequency variation using taut-string theory. The proposed technique is applied to a real full-scale cable-stayed bridge.
The mean tension obtained from the framework is comparable with the cable’s actual design tension. The accurate estimation of real-time stay-cable tension by the proposed algorithm shows great potential in the field of structural health monitoring.
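The final step above, taut-string tension estimation from an identified frequency, is a one-line formula: for mode n, f_n = (n/2L)√(T/m), so T = 4 m L² (f_n/n)². The cable parameters below are hypothetical, chosen only to illustrate the computation.

```python
def cable_tension(f_n, n, length, mass_per_len):
    """Taut-string theory: f_n = (n / 2L) * sqrt(T / m)  =>
    T = 4 * m * L^2 * (f_n / n)^2  (tension in newtons)."""
    return 4.0 * mass_per_len * length**2 * (f_n / n) ** 2

# Hypothetical stay-cable: L = 100 m, m = 60 kg/m, first-mode frequency 1.2 Hz.
T = cable_tension(1.2, 1, 100.0, 60.0)   # -> 3.456e6 N
```

Because T depends only on the ratio f_n/n, any reliably identified mode gives the same tension estimate, which is why tracking the dominant frequency alone suffices in the proposed framework.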
... Alternatively, a CS-based sparse coding strategy was efficiently complemented with a nonconvex shrinkage algorithm to reconstruct the original data from incomplete measurements in the field of large-span structures [12]. Furthermore, Yang & Nagarajaiah explored the possibility to extract mode shapes directly from low-rate random measurements by tackling the modal identification task with a CS-driven Blind Source Separation (BSS-CS) approach [13]. ...
Article
The main challenge in the implementation of long-lasting vibration monitoring systems is to tackle the complexity of modern 'mesoscale' structures. Thus, the design of energy-aware solutions is promoted for the joint optimization of data sampling rates, on-board storage requirements, and communication data payloads. In this context, the present work explores the feasibility of the model-assisted rakeness-based compressed sensing (MRak-CS) approach to tune the sensing mechanism to the second-order statistics of measured data by pivoting on numerical priors. Moreover, a signal-adapted sparsity basis relying on the Wavelet Packet Transform is conceived, which aims at maximizing the signal sparsity while allowing for precise time-frequency localization. The adopted solutions were tested with experiments performed on a sensorized pinned-pinned steel beam. Results prove that the proposed compression strategies are superior to conventional eigenvalue approaches and to standard CS methods. The achieved compression ratio is equal to 7, and the quality of the reconstructed structural parameters is preserved even in the presence of defective configurations.
... Further, a scheme was devised in [159] based on a combination of blind feature extraction and sparse representation classification, in conjunction with a modal analysis treatment, for locating the structural damage and assessing its severity. The same authors proposed an output-only identification approach in [160] by coupling CS with blind source separation schemes for determining the mode shape matrix of the structural model. In [161] the approach was modified to account for video camera based vibration measurements. ...
Article
Full-text available
A Wiener path integral variational formulation with free boundaries is developed for determining the stochastic response of high-dimensional nonlinear dynamical systems in a computationally efficient manner. Specifically, a Wiener path integral representation of a marginal or lower-dimensional joint response probability density function is derived. Due to this a priori marginalization, the associated computational cost of the technique becomes independent of the degrees of freedom (d.f.) or stochastic dimensions of the system, and thus, the ‘curse of dimensionality’ in stochastic dynamics is circumvented. Two indicative numerical examples are considered for highlighting the capabilities of the technique. The first relates to marine engineering and pertains to a structure exposed to nonlinear flow-induced forces and subjected to non-white stochastic excitation. The second relates to nano-engineering and pertains to a 100-d.f. stochastically excited nonlinear dynamical system modelling the behaviour of large arrays of coupled nano-mechanical oscillators. Comparisons with pertinent Monte Carlo simulation data demonstrate the computational efficiency and accuracy of the developed technique.
... Mascarenas et al. [127] heuristically set the regularisation parameter to unity. Yang and Nagarajaiah [128] set the regularisation parameter to 0.01 in CS-based modal identification. Yang and Nagarajaiah [129] calculated the regularisation parameter as β = 1/√N, where N is the number of time-history sampling points, corresponding to the dimension of the unknown vector. ...
Article
Structural damage identification has received considerable attention during the past decades. Although several reviews have been presented, some new developments have emerged in this area, particularly machine learning and artificial intelligence techniques. This article reviews the progress in the area of vibration-based damage identification methods over the past 10 years. These methods are classified in terms of different damage indices and analytical/numerical techniques used with discussions of their advantages and disadvantages. The challenges and future research for vibration-based damage identification are summarised. This review aims to help researchers and practitioners in implementing existing damage detection algorithms effectively and developing more reliable and practical methods for civil engineering structures in the future.
... This study presents a time-domain output-only identification model for pipeline magnetic field data using the unsupervised blind source separation technique termed complexity pursuit (CP) [22], which was independently formulated in [23]. CP learning algorithms have been successfully applied to system identification and damage detection in [24-27]. The main contribution of this paper is to adapt the CP algorithms and demonstrate their applicability to accurate time-domain modal identification from pipelines' noisy magnetic field data. ...
... 21 CS has been widely used in many fields, including consumer camera imaging, 22,23 medical magnetic resonance imaging, 13 remote sensing, 24 seismic exploration, 25 and communications, especially for wireless sensor networks (WSN). 26,27 In SHM, the applications of CS theory have also been investigated in structural vibration data acquisition, 28 wireless data transmission and lost data recovery, 29-34 structural modal identification, 35,36 structural sparse damage identification, [37][38][39][40][41][42] and sparse heavy-vehicle-load identification of long-span bridges. 43 As mentioned above, the nature of CS is to solve an ill-posed inverse problem. ...
Article
Full-text available
Compressive sensing has been studied and applied in structural health monitoring for data acquisition and reconstruction, wireless data transmission, structural modal identification, and sparse damage identification. The key issue in compressive sensing is finding the optimal solution for sparse optimization. In the past several years, many algorithms have been proposed in the field of applied mathematics. In this article, we propose a machine learning–based approach to solve the compressive-sensing data-reconstruction problem. By treating a computation process as a data flow, the solving process of compressive sensing–based data reconstruction is formalized into a standard supervised-learning task. The prior knowledge, i.e. the basis matrix and the compressive sensing–sampled signals, are used as the input and the target of the network; the basis coefficient matrix is embedded as the parameters of a certain layer; and the objective function of conventional compressive sensing is set as the loss function of the network. Regularized by l1-norm, these basis coefficients are optimized to reduce the error between the original compressive sensing–sampled signals and the masked reconstructed signals with a common optimization algorithm. In addition, the proposed network is able to handle complex bases, such as a Fourier basis. Benefiting from the nature of a multi-neuron layer, multiple signal channels can be reconstructed simultaneously. Meanwhile, the disassembled use of a large-scale basis makes the method memory-efficient. A numerical example of multiple sinusoidal waves and an example of field-test wireless data from a suspension bridge are carried out to illustrate the data-reconstruction ability of the proposed approach. The results show that high reconstruction accuracy can be obtained by the machine learning–based approach.
In addition, the parameters of the network have clear meanings; the inference of the mapping between input and output is fully transparent, making the compressive-sensing data-reconstruction neural network interpretable.
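The ℓ1-regularised loss the article describes can be sketched without any neural-network machinery: treat the basis coefficients as the trainable parameters and minimise the reconstruction error on the CS-sampled entries. The cosine basis, sizes, and proximal-gradient optimiser below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
t = np.arange(n)
# Orthonormal DCT-II basis as the sparsifying dictionary (assumed choice)
Psi = np.array([np.cos(np.pi * (t + 0.5) * k / n) for k in range(n)]).T
Psi[:, 0] *= 1 / np.sqrt(2)
Psi *= np.sqrt(2.0 / n)

# Signal sparse in that basis: two cosine components
c_true = np.zeros(n); c_true[5], c_true[20] = 1.0, 0.6
signal = Psi @ c_true

# CS sampling: keep 40 random time samples (the "input/target" pair)
keep = np.sort(rng.choice(n, 40, replace=False))
Phi_Psi = Psi[keep, :]
y = signal[keep]

# "Training": proximal gradient on 0.5*||y - Phi*Psi*c||^2 + lam*||c||_1,
# the same l1-regularised loss the network minimises over its coefficients
lam, step = 1e-3, 1.0 / np.linalg.norm(Phi_Psi, 2) ** 2
c = np.zeros(n)
for _ in range(2000):
    z = c - step * (Phi_Psi.T @ (Phi_Psi @ c - y))
    c = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

reconstructed = Psi @ c
```

The optimised coefficients recover the two active basis entries, and the full-length signal is reconstructed from the 40 kept samples.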
... This paper presents a time-domain output-data identification model for pipeline magnetic field data using the unsupervised blind source separation technique termed complexity pursuit (CP) [24], which was independently formulated in [25]. CP learning algorithms have been successfully applied to system identification and damage detection in [26][27][28][29][30]. The main contribution of this paper is to apply the CP algorithms to noisy pipeline magnetic field data for accurate time-based modal identification. ...
Article
Full-text available
The measured sensor data of underground ferromagnetic pipelines contain hidden damage information that can be explored by developing output-data identification models, since only output signal responses are available. This work elaborates the output-only modal identification method complexity pursuit, which is based on blind signal separation. The complexity pursuit algorithms are applied to time-based damage detection of underground ferromagnetic pipelines, blindly estimating the modal parameters from the measured magnetic field signals in order to target pipeline defect locations. Numerical simulations for multi-degree-of-freedom systems show that the proposed method can precisely identify the structural parameters. Experiments are conducted first under well-equipped, controlled laboratory conditions and then confirmed in the field on pipeline magnetic field data recorded by high-precision magnetic field sensors. The measured structural responses are given as input to the blind source separation model, where the complexity pursuit algorithms blindly extract the least complex signals from the observed mixtures, which are guaranteed to be the source signals. The output power spectral densities calculated from the estimated modal responses reveal a clear physical interpretation of the underground ferromagnetic pipeline structures.
... CS has been widely used in many fields, including consumer camera imaging [22,23], medical magnetic-resonance imaging [13], remote sensing [24], seismic exploration [25], and communications, especially for wireless sensor networks (WSN) [26,27]. In SHM, the applications of CS theory have also been investigated in structural vibration data acquisition [28], wireless data transmission and lost data recovery [29][30][31][32][33][34], structural modal identification [35,36], structural sparse damage identification [37][38][39][40][41][42], and sparse heavy-vehicle-loads identification of long-span bridges [43]. ...
Preprint
Full-text available
Compressive sensing (CS) has been studied and applied in structural health monitoring for wireless data acquisition and transmission, structural modal identification, and sparse damage identification. The key issue in CS is finding the optimal solution for sparse optimization. In the past few years, many algorithms have been proposed in the field of applied mathematics. In this paper, we propose a machine-learning-based approach to solve the CS data-reconstruction problem. By treating a computation process as a data flow, the process of CS-based data reconstruction is formalized into a standard supervised-learning task. The prior knowledge, i.e., the basis matrix and the CS-sampled signals, are used as the input and the target of the network; the basis coefficient matrix is embedded as the parameters of a certain layer; the objective function of conventional compressive sensing is set as the loss function of the network. Regularized by l1-norm, these basis coefficients are optimized to reduce the error between the original CS-sampled signals and the masked reconstructed signals with a common optimization algorithm. Also, the proposed network can handle complex bases, such as a Fourier basis. Benefiting from the nature of a multi-neuron layer, multiple signal channels can be reconstructed simultaneously. Meanwhile, the disassembled use of a large-scale basis makes the method memory-efficient. A numerical example of multiple sinusoidal waves and an example of field-test wireless data from a suspension bridge are carried out to illustrate the data-reconstruction ability of the proposed approach. The results show that high reconstruction accuracy can be obtained by the machine learning-based approach. Also, the parameters of the network have clear meanings; the inference of the mapping between input and output is fully transparent, making the CS data reconstruction neural network interpretable.
Chapter
Cutting tools vibrate with small motion at frequencies ranging from a few hundred hertz to a few kilohertz. Estimating small motion over this wide range of frequencies using newer vision-based modal analysis methods requires video to be recorded at high frame rates and high resolutions: high frame rates are necessary for proper temporal resolution, and high resolution is necessary for properly spatially resolving small motion. However, since most cameras trade resolution for speed, registering high-frequency small motion with video becomes nontrivial. To recover cutting tool modal parameters from high-resolution but potentially temporally aliased video, this paper discusses the use of the compressed sensing technique. Compressed sensing enables non-uniform random sampling at sub-Nyquist rates and leverages the sparse structure of signals to allow exact recovery without aliasing. Though compressed sensing has significant potential, it requires video to be randomly sampled at the time of acquisition. Since existing camera hardware does not yet allow this, this paper instead demonstrates modal parameter recovery from motion registered in video that was originally properly and uniformly sampled and then randomly downsampled at non-uniform rates. Recovered parameters from aliased video agree with those from video sampled properly.
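The core idea — that non-uniform random samples taken below the average Nyquist rate still pin down a spectrally sparse vibration — can be sketched as follows. The 70 Hz tone, sampling rates, dictionary grid, and ℓ1 solver are illustrative assumptions, not the chapter's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n = 200.0, 200                       # original uniform rate (Hz) and length
t = np.arange(n) / fs
f_true = 70.0                            # assumed tool "vibration" frequency (Hz)
x = np.sin(2 * np.pi * f_true * t)

# Non-uniform random downsampling: keep 50 of 200 frames (avg. 50 fps,
# well below the 140 Hz Nyquist rate this sinusoid would demand)
keep = np.sort(rng.choice(n, 50, replace=False))
y = x[keep]

# Cos/sin dictionary on the integer-Hz grid up to fs/2
freqs = np.arange(1, 100)
D = np.hstack([np.cos(2 * np.pi * np.outer(t, freqs)),
               np.sin(2 * np.pi * np.outer(t, freqs))])[keep, :]

# l1 recovery by proximal gradient (ISTA)
lam, L = 0.5, np.linalg.norm(D, 2) ** 2
c = np.zeros(D.shape[1])
for _ in range(3000):
    z = c - (D.T @ (D @ c - y)) / L
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

amps = np.hypot(c[:99], c[99:])          # amplitude per frequency bin
f_rec = freqs[np.argmax(amps)]
```

The dominant recovered frequency matches the true 70 Hz tone even though the average sample rate is sub-Nyquist for it.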
Article
The inverse analysis and evaluation of in-service bridges based on structural health monitoring (SHM) data, which is something of a black-box problem, is usually affected by a number of uncertain factors such as the monitoring items, the monitoring cycles and the SHM data analysis methods. It is also generally accepted that storage and computing resources, such as hard-disk capacity and CPU time, become more demanding when dealing with large amounts of data. In addition, data-quality issues such as excessive noise, poor periodicity and incomplete data are often encountered in the analysis of large volumes of data. As a result, it has become the first choice of many researchers to extract a subset of good-quality data as samples for analysis; for example, a sample of one day's or one week's data is usually selected in traditional statistical analysis. However, the SHM data are then not used to their full potential, and the sampled data are often accompanied by a degree of subjectivity, randomness and fuzziness. More importantly, deeper multidimensional characterisation and visualisation of the massive SHM data set itself is also in urgent need of development. This usually includes the sparse-matrix characteristics of the SHM data set as well as the common correlation of different monitoring items. Therefore, it is necessary to analyse the SHM data as a whole. In this study, the multidimensional tensor analysis method from computational mathematics is applied, and a tensor analysis strategy is proposed for SHM data. As a case study, an in-service prefabricated slab-on-girder bridge is presented. In particular, all the monitoring items and the whole monitoring cycle can be taken into account and visualised in a better way.
A flow chart consisting of five stages is also provided, including the initial tensor construction, the tensor decomposition, the tensor prediction, the tensor reconstruction and the error analysis. The multidimensional tensor coupling is also a further development from the traditional correlation analysis of different types of monitoring data. It is then to be expected that this will further reflect the actual operational status of the bridge in-service. Some critical issues such as the rank value and the dynamic tensor model are also discussed. It is expected that the strategies proposed herein will be applied not only to the construction of smart cities, but also to big data in the industrial sector.
Article
This paper studies the effectiveness of joint compression and denoising strategies with realistic, long-term guided wave structural health monitoring data. We leverage the high correlation between nearby collections of guided waves in time to create sparse and low-rank representations. While compression and denoising schemes are not new, they are almost exclusively designed and studied with relatively simple datasets. In contrast, guided wave structural health monitoring datasets have much more complex operational and environmental conditions, such as temperature, that distort data and for which the requirements to achieve effective compression and denoising are not well understood. The paper studies how to optimize our data collection and algorithms to best utilize guided wave data for compression, denoising, and damage detection based on seven million guided wave measurements collected over 2 years.
Article
Compressed sensing (CS) utilizes a signal's sparsity to reconstruct it from far fewer linear measurements. However, ambient vibration responses in structural dynamics typically lack sparsity on a regular transform basis. Hence, when such vibration signals are reconstructed through CS, significant errors are unavoidably induced, especially at high compression ratios, limiting the applicability of CS in structural health monitoring and damage detection. To address these issues, this paper proposes an enhanced error reduction method, exploited as a post-processing scheme for signal reconstruction. The suggested method constructs an autoregression model whose residual, according to empirical observations, grows with the reconstruction error. By minimizing this residual under the constraint of the compressed measurements, the reconstruction error is minimized, which yields an optimized reconstructed signal. The suggested method is validated using ambient vibration data collected from a laboratory-scale shear-frame model and a full-scale cable-stayed bridge.
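The empirical observation underlying this scheme — that the autoregression residual tracks the reconstruction error — can be illustrated in a few lines. The signals, noise level, and AR order below are toy assumptions, not the paper's data or model.

```python
import numpy as np

def ar_residual(x, p=4):
    """Fit an AR(p) model to x by least squares and return the RMS residual."""
    # Row n predicts x[n] from x[n-1], ..., x[n-p]
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    target = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.sqrt(np.mean((target - X @ coeffs) ** 2))

rng = np.random.default_rng(6)
t = np.arange(2000) / 100.0
# A two-mode vibration-like signal: satisfies a 4th-order linear recurrence,
# so an AR(4) model fits it almost exactly
clean = np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.sin(2 * np.pi * 3.7 * t)
# Stand-in for a poorly reconstructed CS signal: same signal plus error
corrupted = clean + 0.3 * rng.standard_normal(t.size)
```

For the clean signal the AR residual is near machine precision, while the corrupted one's residual is dominated by the added error, which is what makes the residual a usable proxy for reconstruction quality.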
Article
Structured compressed sensing takes signal structure into account, thereby outperforming earlier compressed sensing methods. However, results are usually based on sampling in the Fourier domain, such as in Magnetic Resonance Imaging. In the time domain, the benefits of structured compressed sensing are still unknown. This paper introduces concepts that incorporate the signal structure into both the acquisition and reconstruction of compressed sensing in time and image domain applications. First, a stratified-random sampling pattern is proposed to improve the recovery of the dominant low-frequency range of natural signals. A heuristic decay of primes criterion is developed to evaluate the properties of the sensing matrix and is used to optimize the sampling pattern. Second, the sparsity of the Fourier transform as the representation domain is improved by estimating the signal structure in a preprocessing step, and then adapting the grid of the Fourier transform. In contrast to existing methods, grid stretching is integrated into the fast Fourier transform to reduce computational complexity. Both structured acquisition and reconstruction are evaluated using simulations, as well as two real-world applications: wireless sensor networks in structural health monitoring and electron microscopy. Results show that both reconstruction errors and robustness can significantly be improved by incorporating structure into the acquisition and reconstruction.
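A stratified-random sampling pattern of the kind described (one random sample drawn per equal-width stratum; the signal length and stratum count here are arbitrary assumptions) can be generated as:

```python
import numpy as np

def stratified_random_pattern(n, m, rng):
    """Pick one random sample index per stratum of width n // m.

    Compared with fully random sampling, this guarantees an even spread of
    samples over time, which favours recovery of the dominant low-frequency
    content of natural signals.
    """
    width = n // m
    starts = np.arange(m) * width
    return starts + rng.integers(0, width, size=m)

rng = np.random.default_rng(3)
idx = stratified_random_pattern(1024, 128, rng)
```

Every stratum contributes exactly one index, so the largest gap between consecutive samples is bounded by twice the stratum width — unlike a fully random pattern, which can leave long stretches unsampled.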
Research
Abstract: Modal parameters characterize how tools vibrate. Correct evaluation of modal parameters depends on how signals are sampled. Since tools can vibrate at frequencies of up to several kHz, respecting the Nyquist criterion requires sampling at potentially tens of kHz. This is easy enough with modern data acquisition systems. However, if/when using modal parameters to monitor the condition of tools, transmitting, storing, and processing large data sets becomes difficult. Moreover, when extracting modal parameters using newer vision-based methods, it may not always be possible to acquire high-resolution images at rates that avoid aliasing. This paper presents solutions to address such cases by recovering modal parameters from signals sampled potentially below the Nyquist limit. No a priori knowledge of the system order is assumed, and folding properties of signals are leveraged to recover parameters from fractionally uncorrelated signals using notions of set theory. To aid recovery, we suggest formal procedures to group candidates of likely modes together and to resolve the case of modes being potentially confounded. Special spatiotemporal decoupling properties inherent to modal analysis are leveraged to recover eigenvectors from potentially aliased signals. Recovery is illustrated using the eigensystem realization algorithm. Numerical experiments with systems of different orders, with noisy signals, and with systems in which likely modes can be confounded with other likely candidates are designed to test the robustness of the method. These findings guide experiments with measured accelerations and video of an end mill. Results confirm that parameters recovered using the proposed methods agree with those extracted from accelerations and videos sampled properly. Keywords: Dynamics, Cutting tool, Signal aliasing, Sub-Nyquist sampling, Modal parameters, Visual vibrometry
Article
Vision-based modal analysis methods are non-contact, do not require data acquisition systems, and facilitate full-field shape analysis. Leveraging these advantages for industrial use is precluded by the need for expensive high-speed cameras. This paper presents new methods to recover modal parameters from potentially temporally aliased video recordings of cutting tools using economical medium-speed cameras. Folding properties of fractionally uncorrelated aliased signals are used together with the eigensystem realization algorithm to recover modal parameters from tool motion extracted using image processing schemes. Results agree with those from accelerations sampled properly. Methods are generalized for use with other sensors.
Article
The resiliency of the communication channels to data loss is of prime importance in the networked control of civil structures. In this study, compressive sensing (CS) as an emerging data acquisition technique is used to recover the lost packets in real‐time in the communication channel from sensors to the controller. The basic idea is to apply CS to the state vector, for example, displacement and velocity profile of the building, in the feedback channel of the closed‐loop control system. The encoded measurement vector is first packetized and then transmitted over the unreliable communication channel. On the controller side, rather than waiting for the unreceived packets to be received, the state vector is recovered using the partially observed data. Dictionary learning is used to train the sparsifying dictionary via the application‐specific data set. In addition, the smoothed zero norm (SL‐0) algorithm is employed for solving the underdetermined system of linear equations at the decoding stage. This approach is fully data‐driven and does not require knowledge of the system. Once the state vector is reconstructed, any state feedback control strategy can be used to calculate the required control force. In this study, a 76‐story benchmark building equipped with an active tuned mass damper (ATMD) is used to investigate the performance of the proposed data transmission scheme. The reconstruction accuracy of the signals is compared to the K‐nearest neighbor (KNN) method and the perfect communication case. Simulation results revealed that the CS‐based approach yields high accuracy with reasonable computational time.
Article
Onboard monitoring plays an important role in real‐time condition assessment of rail systems. However, the data amount is typically tremendous due to the high sampling rate needed and long traveling distance, especially for vibration data collected from high‐speed trains (HSTs). As for fault diagnosis of mechanical systems, compressive sensing (CS) has been increasingly adopted to reduce the data amount. In comparison to rotary bearings and bolted joints in machinery that operate in relatively steady working environments, HSTs run in an open and varying environment throughout the traveling mileage, and the data amount is normally much larger, making it tricky to directly apply the classical CS methods. This study aims to bridge the gap by investigating the sparsity of HST vibration signals and CS approaches. Considering the lack of sparsity and long reconstruction time, we propose an efficient adaptive CS approach for dynamic responses of HSTs. More specifically, we unroll the iterative soft thresholding algorithm (ISTA) in a deep learning (DL) framework and configure it into a data reconstruction machine. Compared to the conventional CS methods, our approach exhibits two advantages: (i) The dictionary learning and signal reconstruction are integrated into one neural network and can be conducted in an end‐to‐end manner; (ii) the process is highly efficient since encapsulating ISTA in a DL framework can naturally leverage the capability of GPU. The proposed approach is validated using data collected from an in‐service HST, and results show that our approach achieves superior reconstruction performance over fixed bases and redundant dictionaries.
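The unrolling idea — one ISTA iteration per network layer, each a thresholded linear update — can be sketched as below. The weights here are fixed to the classical ISTA choices for a random dictionary, standing in for the learned parameters of the authors' network; sizes and sparsity are assumptions.

```python
import numpy as np

def soft(x, theta):
    """Soft-thresholding: the nonlinearity of each unrolled layer."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, W_e, S, theta, n_layers):
    """Unrolled ISTA: each 'layer' computes soft(W_e @ y + S @ x, theta).

    In learned ISTA, W_e, S and theta would be trained end-to-end; here they
    are the classical ISTA values for a dictionary D.
    """
    x = soft(W_e @ y, theta)
    for _ in range(n_layers - 1):
        x = soft(W_e @ y + S @ x, theta)
    return x

rng = np.random.default_rng(4)
m, n = 40, 100
D = rng.standard_normal((m, n)) / np.sqrt(m)     # assumed dictionary
L = np.linalg.norm(D, 2) ** 2
lam = 0.02
W_e = D.T / L                                    # classical ISTA weights
S = np.eye(n) - (D.T @ D) / L
x_true = np.zeros(n); x_true[[7, 42, 77]] = [1.5, -1.0, 0.8]
y = D @ x_true
x_hat = unrolled_ista(y, W_e, S, lam / L, n_layers=300)
```

The identity x_{k+1} = soft(x_k − (1/L)·Dᵀ(D x_k − y), λ/L) = soft(S x_k + W_e y, λ/L) is exactly what makes each iteration expressible as a network layer.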
Article
Experimental Modal Analysis (EMA) makes it possible to assess the dynamical properties of a mechanical component or structure by estimating its modal parameters. Whereas EMA is usually based on local accelerometer or laser vibrometer data, in this paper we focus on camera-based EMA, as cameras offer full-field and contact-less data. However, apart from a few very specific controlled cases, camera-based EMA is limited by the low frame rate of the camera in comparison to accelerometers and vibrometers. In this paper we propose a novel acquisition scheme that allows estimating modal parameters above the Nyquist–Shannon limit (i.e., half of the camera frame rate) by employing a random sampling scheme in time in combination with one accelerometer. With this information we reconstruct the Impulse Response Function (IRF) modal model through a nonlinear optimization problem, where the accelerometer ensures a global solution by providing an initial guess of the eigenfrequencies. We investigate numerically the accuracy of the methodology by simulating multiple damped sine waves. Furthermore, we present an experimental validation on a clamped–clamped beam excited by an impact hammer. Thereby, the displacement information is captured by a single camera triggered by random pulses, and computed by Lucas–Kanade (LK) optical flow. The complexity and modal assurance criterion (MAC) of the modes show that all modes whose amplitudes are higher than the noise level are measured successfully with only one excitation hit, where the highest mode, at 218 Hz, is measured with a random sampling scheme comparable to 50 fps (to reach 218 Hz, regular sampling at 436 fps would be required).
Article
The present state of condition monitoring of civil infrastructure involves the application of a large number of contact-based vibration sensors at different locations of the structures. Traditional vibration sensors, such as accelerometers or strain gauges, require considerable effort in laying out connection wires, and their cost of maintenance is quite high considering the harsh environments they are exposed to. In recent times, there is growing interest in developing contactless, vision-based vibration sensors that potentially alleviate these drawbacks. In a previous paper, the authors proposed a framework for acquiring the full-field spatiotemporal Lagrangian displacement response of a continuous vibrating edge from its video by computing the optical flow of the edge using a d’Alembertian of Gaussian filter. Such spatially dense measurements are required to compute full-field mode shapes. But, from the perspective of condition monitoring of large infrastructure, such spatially dense measurements record a large amount of high-dimensional data over the whole acquisition period, which poses a considerable challenge in terms of storage and data-transmission capacity. In this paper, these drawbacks are addressed by suggesting a computational strategy of spatiotemporal compressive sensing of the full-field Lagrangian displacement response from the video of a vibrating structure with unknown geometric properties and boundary conditions. The data, non-uniformly sampled in both the spatial and temporal dimensions, still retain the low-order dynamics of the system, from which the full-field, high-dimensional displacement responses are reconstructed. Subsequently, the modal parameters and full-field mode shapes are estimated from the reconstructed full-field Lagrangian displacement response.
In the experimental validation, the proposed method satisfactorily reconstructs the dense full-field displacement response and estimates the full-field dynamic modes from the low-dimensional, spatiotemporally sub-sampled measurement data for a three-story steel frame and an aluminum cantilever beam at different compression ratios. The applicability of the proposed method is demonstrated on a simulated wind-turbine tower model with unknown geometry and boundary conditions. The full-field displacement response and mode shapes of the non-prismatic wind-turbine tower with a concentrated mass at the top are successfully reconstructed from the noisy sub-sampled data.
Article
In recent years, blind source separation (BSS) techniques have been demonstrated as a promising tool for operational modal identification of large-scale engineering structures from output responses only. However, many BSS identification approaches are based on the assumption that the sources are sparse or statistically independent, limiting their scope of application. Furthermore, it has been challenging to perform operational modal identification in underdetermined cases, where the number of observed sensors is less than the number of active modes. To address these problems, a novel tensor-based approach for operational modal identification with limited sensors is proposed, in which the low-rank characteristics of vibration measurements are utilized. This paper reveals the intrinsic connection between tensor decomposition and modal expansion. Firstly, a third-order tensor is constructed from a set of generated matrices, in which each observed signal is reshaped into a matrix by a segmentation operation. Then, the tensorial observed signals are decomposed into multilinear rank-(Lr,Lr,1) terms by block term decomposition (BTD), and a collection of sub-tensors corresponding to the mode-shape matrix and the modal responses is obtained, from which the modal parameters are estimated. Finally, the effectiveness of the proposed method is validated with a series of numerical studies and experimental investigations, including cases with closely spaced modes. The simulation and experimental results indicate that the proposed method can identify the modal parameters accurately and robustly in both determined and underdetermined situations.
Article
In multi-sensor, long-distance fault monitoring of rolling bearings, the bearing signals are compressively sampled, transmitted, and reconstructed according to the theory of compressive sensing. However, the reconstruction accuracy and speed are limited and are affected by the noise afflicting the collected signals. In this paper, a high-precision signal recovery method, based on signal fusion and the variable stepsize forward–backward pursuit (VSFBP) algorithm, is proposed. First, the method adaptively adjusts the best estimate of the traditional random weighted fusion algorithm, by using the relative fluctuation value, which can fuse variable signals and reduce the noise component of the detection signal. Second, two fuzzy parameters are used to control the step sizes of the atom selection and deletion in the two-stage matching pursuit algorithm; this improves the reconstruction accuracy and speed of the algorithm under a high compression ratio. Finally, to prevent excessive backtracking, in the two-stage matching pursuit algorithm, the observation matrix is updated after each iteration, which improves the reconstruction accuracy of the algorithm further. Simulation and experimental results are compared to verify the effectiveness of the proposed method.
Article
We demonstrate a new class of ultra-wideband (UWB) sparse radio frequency (RF) signal receiver relying on direct sub-sampling in frequency domain and compressive sensing (CS) techniques. The sampling process is realized with a hybrid RF-photonic architecture for discrete Fourier transform (DFT) on broadband RF signals using a variable-pitch frequency comb. Sub-sampled DFT coefficients are used to recover the input RF signal which is sparse in a chosen domain. Instead of digitizing signals with a full-rate analog to digital converter (ADC), the new approach requires sub-rate quantization, reducing hardware complexity via sub-sampling in frequency domain. Both simulated and experimental verifications were achieved by high-fidelity reception of various sparse signals within 3 GHz to 7.9 GHz band.
Article
Full-text available
Impulse response functions (IRFs) are an ideal structural damage index for identifying structural damage associated with changes in modal properties. However, IRFs from multiple excitations applied at different degrees of freedom jointly contribute to the dynamic response, and their estimation is often underdetermined. Although some efforts have been devoted to estimating the IRF of a structure under a single excitation, the case of multiple excitations has not been fully investigated. The estimation of IRFs under multiple excitations is generally an ill-conditioned inverse problem, such that an incorrect or non-feasible solution is common, preventing its application to damage detection. This work explores this problem by introducing dimensionality-reduction transformation matrices relating two sets of IRFs of a structure, with a discussion of the performance of the non-unique transformation matrices. Then, the extraction of IRFs via wavelet-based and Tikhonov-regularization-based methods is compared. Finally, a numerical study on a truss structure is conducted to validate the estimation of the IRFs and to demonstrate their applicability for damage detection under seismic excitations. Both the damage locations and severity are accurately identified, indicating that the proposed methodology enables IRF estimation under multiple excitations for successful damage detection.
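Tikhonov regularisation of an ill-conditioned inverse problem of this kind can be sketched as follows. The synthetic operator (singular values spanning six orders of magnitude), the smooth "IRF", and the noise level are illustrative assumptions, not the paper's excitation matrices.

```python
import numpy as np

def tikhonov_solve(A, y, alpha):
    """Minimise ||A h - y||^2 + alpha^2 ||h||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ y)

rng = np.random.default_rng(5)
m, n = 120, 60

# Synthetic ill-conditioned forward operator standing in for the mapping
# from an IRF to measured responses
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)                    # singular values 1 .. 1e-6
A = U @ np.diag(s) @ V.T

h_true = V @ np.exp(-0.2 * np.arange(n))     # "IRF" with decaying spectrum
y = A @ h_true + 1e-3 * rng.standard_normal(m)

h_tik = tikhonov_solve(A, y, alpha=1e-2)     # regularised estimate
h_naive = np.linalg.lstsq(A, y, rcond=None)[0]   # unregularised comparison
```

The unregularised least-squares solution amplifies the measurement noise through the tiny singular values, while the Tikhonov estimate filters those directions and stays close to the true vector.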
Article
In-service pipelines need to be inspected periodically to ascertain their structural health status. Ultrasonic guided waves, which can propagate along pipelines with little energy loss, provide an efficient method for long-term in situ inspection. Guided waves can detect both corrosion and cracks in structures. To cope with the huge amounts of data while maintaining defect-identification accuracy, a compressed sensing method for guided wave inspection is proposed. The compression process is essentially an analog-to-information conversion scheme: it is accomplished by random demodulation, and the equivalent sampling rate below the Nyquist rate saves most of the storage. The compressed data are recovered in a sparse spatial domain based on a dictionary constructed from a guided wave propagation model. To verify the effectiveness of the proposed method, both numerical simulations and experimental investigations are conducted. The results indicate the availability of compression and high accuracy of defect location after recovery. The influences of different compression schemes and compression ratios are further analyzed. In addition, comparisons with direct recovery without compression and with traditional analysis methods demonstrate the advantageous performance of the proposed method.
Article
Optically-acquired data, typically from digital image correlation, is increasingly being used in the area of structural dynamics, particularly modal testing and damage identification. One of the problems with such data is its extremely large size. Single images regularly extend to tens or even hundreds of thousands of data points and many thousands of images may be required for a vibration test. Such data must be stored and transmitted efficiently for later remote reconstruction and analysis, typically operational modal analysis. It is this requirement that is addressed in the research presented in this paper. This research builds upon previous work whereby digitised optical data was projected onto an orthogonal basis with coefficients (shape descriptors) of either greater or lesser significance; those deemed to be insignificant, according to a chosen threshold, being removed. Data reduction by a combination of shape-descriptor decomposition and compressed sensing is applied to an industrial printed circuit board and reconstructed for operational modal analysis by ℓ1 optimisation.
Article
Full-text available
We present a wide bandwidth, compressed sensing based nonuniform sampling (NUS) system with a custom sample-and-hold chip designed to take advantage of a low average sampling rate. By sampling signals nonuniformly, the average sample rate can be more than an order of magnitude lower than the Nyquist rate, provided that these signals have a relatively low information content as measured by the sparsity of their spectrum. The hardware design combines a wideband Indium-Phosphide heterojunction bipolar transistor sample-and-hold with a commercial off-the-shelf analog-to-digital converter to digitize an 800 MHz to 2 GHz band (having 100 MHz of noncontiguous spectral content) at an average sample rate of 236 Ms/s. Signal reconstruction is performed via a nonlinear compressed sensing algorithm, and the challenges of developing an efficient implementation are discussed. The NUS system is a general purpose digital receiver. As an example of its real signal capabilities, measured bit-error-rate data for a GSM channel is presented, and comparisons to a conventional wideband 4.4 Gs/s ADC are made.
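The core NUS principle can be sketched in a few lines: keep a random subset of time samples of a spectrally sparse signal, then reconstruct the full-rate signal by sparse recovery. The grid size, sample count, tone locations, and the use of orthogonal matching pursuit are illustrative assumptions, not the hardware parameters of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 256, 96, 3          # Nyquist grid, random samples kept, active tones

n = np.arange(N)
freqs = np.array([7, 33, 90])                      # assumed active tone indices
x = np.cos(2 * np.pi * np.outer(n, freqs) / N).sum(axis=1)

keep = np.sort(rng.choice(N, size=M, replace=False))  # non-uniform sample times
y = x[keep]                                           # sub-Nyquist measurements

D = np.cos(2 * np.pi * np.outer(n, np.arange(1, N // 2)) / N)  # tone dictionary
A = D[keep]                                           # dictionary rows at sampled times

# Orthogonal matching pursuit: greedily pick the tones explaining the samples
r, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

s = np.zeros(D.shape[1])
s[support] = coef
x_hat = D @ s                                         # full Nyquist-rate reconstruction
```

The average sample rate here is under 40% of the Nyquist grid, yet the sparse-spectrum signal is reconstructed exactly.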
Article
Full-text available
In recent years, there has been an increasing interest in the adoption of emerging sensing technologies for instrumentation within a variety of structural systems. Wireless sensors and sensor networks are emerging as sensing paradigms that the structural engineering field has begun to consider as substitutes for traditional tethered monitoring systems. A benefit of wireless structural monitoring systems is that they are inexpensive to install because extensive wiring is no longer required between sensors and the data acquisition system. Researchers are discovering that wireless sensors are an exciting technology that should not be viewed as simply a substitute for traditional tethered monitoring systems. Rather, wireless sensors can play greater roles in the processing of structural response data; this feature can be utilized to screen data for signs of structural damage. Also, wireless sensors have limitations that require novel system architectures and modes of operation. This paper is intended to serve as a summary review of the collective experience the structural engineering community has gained from the use of wireless sensors and sensor networks for monitoring structural performance and health.
Article
Full-text available
This work introduces the use of compressed sensing (CS) algorithms for data compression in wireless sensors to address the energy and telemetry bandwidth constraints common to wireless sensor nodes. Circuit models of both analog and digital implementations of the CS system are presented that enable analysis of the power/performance costs associated with the design space for any potential CS application, including analog-to-information converters (AIC). Results of the analysis show that a digital implementation is significantly more energy-efficient for the wireless sensor space where signals require high gain and medium to high resolutions. The resulting circuit architecture is implemented in a 90 nm CMOS process. Measured power results correlate well with the circuit models, and the test system demonstrates continuous, on-the-fly data processing, resulting in more than an order of magnitude compression for electroencephalography (EEG) signals while consuming only 1.9 μW at 0.6 V for sub-20 kS/s sampling rates. The design and measurement of the proposed architecture is presented in the context of medical sensors, however the tools and insights are generally applicable to any sparse data acquisition.
Article
Full-text available
The base-isolated University of Southern California (USC) hospital building experienced strong motion during the 1994 Northridge earthquake. California Strong Motion Instrumentation Program data of the response are available for performance evaluation. The objective of this study is to evaluate the seismic performance of the base-isolated USC hospital building during the 1994 Northridge earthquake. A nonlinear analytical model of the USC hospital building is developed and verified using system identification. The response computed, using the presented analytical modeling techniques, is verified using recorded data. Structural behavior during the Northridge earthquake is evaluated in detail. The base-isolated USC hospital building performed well and reduced the response when compared to a fixed-base structure. The free-field acceleration was 0.49g and peak foundation/ground acceleration was 0.37g. The peak roof acceleration was reduced to 0.21g, nearly 50% of the peak ground acceleration. The peak drift was <30% of the code specification. The bearings yielded and dissipated energy (20%). The superstructure was elastic due to the effectiveness of base isolation. The building is expected to perform well in future earthquakes similar to those used in the original design.
Article
Full-text available
The primary objective of this paper is to develop output-only modal identification and structural damage detection. Identification of multi-degree-of-freedom (MDOF) linear time-invariant (LTI) and linear time-variant (LTV, due to damage) systems based on time-frequency (TF) techniques—such as the short-time Fourier transform (STFT), empirical mode decomposition (EMD), and wavelets—is proposed. STFT, EMD, and wavelet methods developed to date are reviewed in detail. In addition, a Hilbert transform (HT) approach to determine frequency and damping is also presented. In this paper, STFT, EMD, HT, and wavelet techniques are developed for decomposition of the free vibration response of MDOF systems into their modal components. Once the modal components are obtained, each one is processed using the Hilbert transform to obtain the modal frequency and damping ratios. In addition, the ratios of modal components at different degrees of freedom facilitate determination of the mode shapes. In cases of output-only modal identification using ambient/random response, the random decrement technique is used to obtain the free vibration response. The advantage of TF techniques is that they are signal based and hence can be used for output-only modal identification. A three-degree-of-freedom 1:10 scale model test structure is used to validate the proposed output-only modal identification techniques based on STFT, EMD, HT, and wavelets. Both measured free vibration and forced vibration (white noise) responses are considered. The secondary objective of this paper is to show the relative ease with which the TF techniques can be used for modal identification and their potential for real-world applications where output-only identification is essential. Recorded ambient vibration data processed using techniques such as the random decrement technique can be used to obtain the free vibration response, so that further processing using TF-based modal identification can be performed.
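The Hilbert-transform step described above can be sketched for a single free-decay modal response: the analytic signal yields an instantaneous phase whose slope is the damped frequency, and a log-envelope whose slope gives the damping. The test-signal parameters below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import hilbert

fs, f_n, zeta = 200.0, 5.0, 0.02            # assumed sampling rate, frequency, damping
t = np.arange(0, 5, 1 / fs)
w_n = 2 * np.pi * f_n
w_d = w_n * np.sqrt(1 - zeta**2)
x = np.exp(-zeta * w_n * t) * np.cos(w_d * t)   # free-decay modal response

z = hilbert(x)                                   # analytic signal
trim = slice(200, -200)                          # discard Hilbert end effects
# slope of unwrapped phase = damped circular frequency
w_d_est = np.polyfit(t[trim], np.unwrap(np.angle(z))[trim], 1)[0]
# slope of log-envelope = -zeta * w_n
a_est = -np.polyfit(t[trim], np.log(np.abs(z))[trim], 1)[0]
f_est = w_d_est / (2 * np.pi)
zeta_est = a_est / np.sqrt(a_est**2 + w_d_est**2)
```

For an MDOF system one would first decompose the response into modal components (via STFT, EMD, or wavelets, as the paper does) and then apply this step per component.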
Conference Paper
Full-text available
Separation of underdetermined mixtures is an important problem in signal processing that has attracted a great deal of attention over the years. Prior knowledge is required to solve such problems and one of the most common forms of structure exploited is sparsity. Another central problem in signal processing is sampling. Recently, it has been shown that it is possible to sample well below the Nyquist limit whenever the signal has additional structure. This theory is known as compressed sensing or compressive sampling and a wealth of theoretical insight has been gained for signals that permit a sparse representation. In this paper we point out several similarities between compressed sensing and source separation. We here mainly assume that the mixing system is known, i.e. we do not study blind source separation. With a particular view towards source separation, we extend some of the results in compressed sensing to more general overcomplete sparse representations and study the sensitivity of the solution to errors in the mixing system.
Article
Full-text available
This work studies the problem of simultaneously separating and reconstructing signals from compressively sensed linear mixtures. We assume that all source signals share a common sparse representation basis. The approach combines classical Compressive Sensing (CS) theory with a linear mixing model. It allows the mixtures to be sampled independently of each other. If samples are acquired in the time domain, this means that the sensors need not be synchronized. Since Blind Source Separation (BSS) from a linear mixture is only possible up to permutation and scaling, factoring out these ambiguities leads to a minimization problem on the so-called oblique manifold. We develop a geometric conjugate subgradient method that scales to large systems for solving the problem. Numerical results demonstrate the promising performance of the proposed algorithm compared to several state of the art methods.
Article
Full-text available
The blind source separation problem is to extract the underlying source signals from a set of linear mixtures, where the mixing matrix is unknown. This situation is common in acoustics, radio, medical signal and image processing, hyperspectral imaging, and other areas. We suggest a two-stage separation process: a priori selection of a possibly overcomplete signal dictionary (for instance, a wavelet frame or a learned dictionary) in which the sources are assumed to be sparsely representable, followed by unmixing the sources by exploiting their sparse representability. We consider the general case of more sources than mixtures, but also derive a more efficient algorithm in the case of a nonovercomplete dictionary and equal numbers of sources and mixtures. Experiments with artificial signals and musical sounds demonstrate significantly better separation than other known techniques.
Article
Full-text available
A measure of temporal predictability is defined and used to separate linear mixtures of signals. Given any set of statistically independent source signals, it is conjectured here that a linear mixture of those signals has the following property: the temporal predictability of any signal mixture is less than (or equal to) that of any of its component source signals. It is shown that this property can be used to recover source signals from a set of linear mixtures of those signals by finding an unmixing matrix that maximizes a measure of temporal predictability for each recovered signal. This matrix is obtained as the solution to a generalized eigenvalue problem; such problems have scaling characteristics of O(N³), where N is the number of signal mixtures. In contrast to independent component analysis, the temporal predictability method requires minimal assumptions regarding the probability density functions of source signals. It is demonstrated that the method can separate signal mixtures in which each mixture is a linear combination of source signals with super-Gaussian, sub-Gaussian, and Gaussian probability density functions and on mixtures of voices and music.
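The generalized-eigenvalue formulation above can be sketched as follows: exponential moving averages define short- and long-term predictions, and maximizing the ratio of long- to short-term prediction-error variance for each recovered signal is exactly a generalized eigenproblem on the two error covariance matrices. The mixing matrix, smoothing constants, and test sources are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def ema(x, h):
    # exponential moving average along the time axis (one-step prediction)
    y = np.zeros_like(x)
    for i in range(1, x.shape[-1]):
        y[..., i] = h * y[..., i - 1] + (1 - h) * x[..., i - 1]
    return y

def predictability_unmix(X, h_short=0.1, h_long=0.99):
    e_s = X - ema(X, h_short)        # short-term prediction error
    e_l = X - ema(X, h_long)         # long-term prediction error
    U = e_s @ e_s.T                  # short-term error covariance
    V = e_l @ e_l.T                  # long-term error covariance
    # each generalized eigenvector maximizes (w'Vw)/(w'Uw) in turn
    _, W = eigh(V, U)
    return W.T @ X                   # recovered signals, one per eigenvector

t = np.arange(4000)
S = np.vstack([np.sin(2 * np.pi * t / 400),       # slow, highly predictable source
               np.sin(2 * np.pi * t / 13)])       # fast, less predictable source
A = np.array([[1.0, 0.6], [0.4, 1.0]])            # assumed mixing matrix
Y = predictability_unmix(A @ S)
```

Because the two covariance matrices are simultaneously diagonalized by the true unmixing matrix, the recovered rows correlate almost perfectly with the sources (up to scale and order).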
Article
Full-text available
Suppose x is an unknown vector in ℝ^m (a digital image or signal); we plan to measure n general linear functionals of x and then reconstruct. If x is known to be compressible by transform coding with a known transform, and we reconstruct via the nonlinear procedure defined here, the number of measurements n can be dramatically smaller than the size m. Thus, certain natural classes of images with m pixels need only n = O(m^(1/4) log^(5/2)(m)) nonadaptive nonpixel samples for faithful recovery, as opposed to the usual m pixel samples. More specifically, suppose x has a sparse representation in some orthonormal basis (e.g., wavelet, Fourier) or tight frame (e.g., curvelet, Gabor), so the coefficients belong to an ℓ_p ball for 0 < p ≤ 1. The N most important coefficients in that expansion allow reconstruction with ℓ₂ error O(N^(1/2−1/p)). It is possible to design n = O(N log(m)) nonadaptive measurements allowing reconstruction with accuracy comparable to that attainable with direct knowledge of the N most important coefficients. Moreover, a good approximation to those N important coefficients is extracted from the n measurements by solving a linear program—Basis Pursuit in signal processing. The nonadaptive measurements have the character of "random" linear combinations of basis/frame elements. Our results use the notions of optimal recovery, of n-widths, and information-based complexity. We estimate the Gel'fand n-widths of ℓ_p balls in high-dimensional Euclidean space in the case 0 < p ≤ 1, and give a criterion identifying near-optimal subspaces for Gel'fand n-widths. We show that "most" subspaces are near-optimal, and show that convex optimization (Basis Pursuit) is a near-optimal way to extract information derived from these near-optimal subspaces.
Article
Full-text available
The grey level profiles of adjacent image regions tend to be different, whilst the ‘hidden’ physical parameters associated with these regions (e.g. surface depth, edge orientation) tend to have similar values. We demonstrate that a network in which adjacent units receive inputs from adjacent image regions learns to code for hidden parameters. The learning rule takes advantage of the spatial smoothness of physical parameters in general to discover particular parameters embedded in grey level profiles which vary rapidly across an input image. We provide examples in which networks discover stereo disparity and feature orientation as invariances underlying image data.
Article
Bridge operation safety is critical to national security and people's livelihood. The structural health monitoring of bridges has emerged as an increasingly active research area. Wireless sensor networks (WSNs) technology is known to be easy to deploy and inexpensive to maintain. It is thus suitable for structural health monitoring of bridges. This paper gives a survey on structural health monitoring systems based on WSNs technology. Some basic theories and typical methods in subsystems are presented, and critical technologies are analyzed. At the end, various issues in existing systems and directions for future work are analyzed and summarized.
Article
The time-frequency and time-scale communities have recently developed a large number of overcomplete waveform dictionaries—stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the method of frames (MOF), matching pursuit (MP), and, for special dictionaries, the best orthogonal basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and superresolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation denoising, and multiscale edge denoising. BP in highly overcomplete dictionaries leads to large-scale optimization problems. With signals of length 8192 and a wavelet packet dictionary, one gets an equivalent linear program of size 8192 by 212,992. Such problems can be attacked successfully only because of recent advances in linear programming by interior-point methods. We obtain reasonable success with a primal-dual logarithmic barrier method and conjugate-gradient solver.
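The Basis Pursuit principle—minimize the ℓ1 norm of the coefficients subject to exact reconstruction—becomes a linear program after the standard split s = u − v with u, v ≥ 0. The sketch below uses SciPy's general-purpose LP solver rather than the paper's interior-point implementation; the random Gaussian dictionary and sparsity level are assumptions chosen so the ℓ1 solution coincides with the sparse generator.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """min ||s||_1 subject to A s = y, posed as an LP via s = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v) = ||s||_1
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(7)
A = rng.standard_normal((25, 50))            # overcomplete dictionary (assumed)
s0 = np.zeros(50)
s0[[4, 17, 40]] = [1.5, -2.0, 0.7]           # sparse ground-truth coefficients
s_hat = basis_pursuit(A, A @ s0)             # recovers s0 exactly
```

At this small scale a generic LP solver suffices; the paper's contribution is making the same program tractable when the dictionary has hundreds of thousands of columns.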
Article
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ ℂ^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes, f(t) = Σ_{τ∈T} f(τ)δ(t − τ), obeying |T| ≤ C_M · (log N)^(−1) · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^(−M)), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^(−M)) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples—provided that the number of jumps (discontinuities) obeys the condition above—by minimizing other convex functionals such as the total variation of f.
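The nonlinear sampling theorem above can be demonstrated directly: observe a spike train only at a random subset of frequencies and recover it by ℓ1 minimization, which becomes a real-valued linear program after stacking real and imaginary parts and splitting the signed amplitudes. Signal length, spike locations, and |Ω| below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)
N = 64
f = np.zeros(N)
f[[9, 30, 51]] = [1.0, -0.7, 1.3]                       # unknown spike train

omega = np.sort(rng.choice(N, size=24, replace=False))  # observed frequency set
F = np.fft.fft(np.eye(N))[omega]                        # partial DFT rows
y = F @ f                                               # incomplete Fourier data

# Stack real and imaginary parts so the equality constraints are real-valued,
# then split f = u - v with u, v >= 0 to get min ||f||_1 as an LP.
A = np.vstack([F.real, F.imag])
b = np.concatenate([y.real, y.imag])
res = linprog(np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=b,
              bounds=(0, None), method="highs")
f_hat = res.x[:N] - res.x[N:]
```

With |T| = 3 spikes and |Ω| = 24 random frequencies (well within the |T| · log N regime), the convex program returns the spike train exactly.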
Article
Output-only algorithms are needed for modal identification when only structural responses are available. The recent years have witnessed the fast development of blind source separation (BSS) as a promising signal processing technique, pursuing to recover the sources using only the measured mixtures. As the most popular tool solving the BSS problem, independent component analysis (ICA) is able to directly extract the time-domain modal responses, which are viewed as virtual sources, from the observed system responses; however, it has been shown that ICA loses accuracy in the presence of higher-level damping. In this study, the modal identification issue, which is incorporated into the BSS formulation, is transformed into a time-frequency framework. The sparse time-frequency representations of the monotone modal responses are proposed as the targeted independent sources hidden in those of the system responses which have been short-time Fourier-transformed (STFT); they can then be efficiently extracted by ICA, whereby the time-domain modal responses are recovered such that the modal parameters are readily obtained. The simulation results of a multidegree-of-freedom system illustrate that the proposed output-only STFT-ICA method is capable of accurately identifying modal information of lightly and highly damped structures, even in the presence of heavy noise and nonstationary excitation. The laboratory experiment on a highly damped three-story frame and the analysis of the real measured seismic responses of the University of Southern California hospital building demonstrate the capability of the method to perform blind modal identification in practical applications.
Article
Output-only modal identification is needed when only structural responses are available. As a powerful unsupervised learning algorithm, the blind source separation (BSS) technique is able to recover the hidden sources and the unknown mixing process using only the observed mixtures. This paper proposes a new time-domain output-only modal identification method based on a novel BSS learning algorithm, complexity pursuit (CP). The proposed concept—independent 'physical systems' living on the modal coordinates—connects the constituent sources (and their mixing process) targeted by the CP learning rule with the modal responses (and the mode matrix), which can then be directly extracted by the CP algorithm from the measured free or ambient system responses. Numerical simulation results show that the CP method realizes accurate and robust modal identification even in the closely spaced mode and highly damped mode cases subject to non-stationary ambient excitation, and provides an excellent approximation to the non-diagonalizable highly damped (complex) modes. Experimental and real-world seismic-excited structure examples are also presented to demonstrate its capability of blindly extracting modal information from system responses. The proposed CP method is shown to yield clear physical interpretation in modal identification; it is computationally efficient, user-friendly, and automatic, requiring little expert interaction for implementation. Copyright © 2013 John Wiley & Sons, Ltd.
Article
One of the key tasks in cognitive radio and communications intelligence is to detect active bands in the radio-frequency (RF) spectrum. In order to perform spectral activity detection in wideband RF signals, expensive and energy-inefficient high-rate analog-to-digital converters (ADCs) in combination with sophisticated digital detection circuitry are typically used. In many practical situations, however, the RF spectrum is sparsely populated, i.e., only a few frequency bands are active at a time. This property enables the design of so-called analog-to-information (A2I) converters, which are capable of acquiring and directly extracting the spectral activity information at low cost and low power by means of compressive sensing (CS). In this paper, we present the first very-large-scale integration (VLSI) design of a monolithic wideband CS-based A2I converter that includes a signal acquisition stage capable of acquiring RF signals having large bandwidths and a high-throughput spectral activity detection unit. Low-cost wideband signal acquisition is obtained via CS-based randomized temporal subsampling in combination with a 4-bit flash ADC. High-throughput spectrum activity detection from the coarsely quantized and compressive measurements is achieved by means of a massively-parallel VLSI design of a novel accelerated sparse spectrum dequantization (ASSD) algorithm. The resulting monolithic A2I converter is designed in 28 nm CMOS, acquires RF signals up to 6 GS/s, and the on-chip ASSD unit detects the active RF bands at a rate 30× below real-time.
Article
The long-standing analog-to-digital conversion paradigm based on Shannon/Nyquist sampling has been challenged lately, mostly in situations such as radar and communication signal processing where signal bandwidth is so large that sampling architectures constraints are simply not manageable. Compressed sensing (CS) is a new emerging signal acquisition/compression paradigm that offers a striking alternative to traditional signal acquisition. Interestingly, by merging the sampling and compression steps, CS also removes a large part of the digital architecture and might thus considerably simplify analog-to-information (A2I) conversion devices. This so-called “analog CS,” where compression occurs directly in the analog sensor readout electronics prior to analog-to-digital conversion, could thus be of great importance for applications where bandwidth is moderate, but computationally complex, and power resources are severely constrained. In our previous work (Mamaghanian, 2011), we quantified and validated the potential of digital CS systems for real-time and energy-efficient electrocardiogram compression on resource-constrained sensing platforms. In this paper, we review the state-of-the-art implementations of CS-based signal acquisition systems and perform a complete system-level analysis for each implementation to highlight their strengths and weaknesses regarding implementation complexity, performance and power consumption. Then, we introduce the spread spectrum random modulator pre-integrator (SRMPI), which is a new design and implementation of a CS-based A2I read-out system that uses spread spectrum techniques prior to random modulation in order to produce the low rate set of digital samples. 
Finally, we experimentally built an SRMPI prototype to compare it with state-of-the-art CS-based signal acquisition systems, focusing on critical system design parameters and constraints, and show that this new proposed architecture offers a compelling alternative, in particular for low-power and computationally constrained embedded systems.
Article
Blind source separation (BSS) based methods have been shown to be efficient and powerful for output-only modal identification. Existing BSS modal identification methods, however, require the number of sensors to be at least equal to that of the sources (active modes). This paper proposes a new modal identification algorithm based on a novel BSS technique termed sparse component analysis (SCA) to handle even the underdetermined problem, where sensors may be highly limited compared to the number of active modes. The developed SCA method reveals the essence of modal expansion: the monotone modal responses with disjoint sparsest representations in the frequency domain naturally cluster in the directions of the mode matrix's columns (mode shapes), which are readily extracted from the measured system responses using a simple clustering algorithm. Then, in the determined case where the sensor number equals that of the modes, the estimated square mode matrix directly decouples the system responses to obtain the modal responses, from which their frequencies and damping ratios are computed; with limited sensors, the modal responses are instead efficiently recovered via the ℓ1-minimization sparse recovery technique from the incomplete knowledge of the partial mode matrix and the system responses of the inadequate sensors. Numerical simulations and an experimental example show that, whether in determined or underdetermined situations, the SCA method performs accurate and robust identification of a wide range of structures, including those with closely spaced and highly damped modes. The SCA method is simple and efficient for reliable output-only modal identification even with limited sensors.
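The key observation behind the SCA method—that frequency bins dominated by one mode align with that mode's shape—can be sketched on a 2-DOF example. Here a peak-picking simplification stands in for the general clustering step: the cross-channel vector is read at each spectral peak after rotating its phase to the first channel. The mode matrix and modal frequencies are illustrative assumptions.

```python
import numpy as np

fs, T = 100.0, 20.0
t = np.arange(0, T, 1 / fs)
Phi = np.array([[1.0, 1.0], [0.8, -0.6]])          # columns = assumed mode shapes
Q = np.vstack([np.sin(2 * np.pi * 3 * t),          # modal responses at 3 Hz and 8 Hz
               np.sin(2 * np.pi * 8 * t)])
X = Phi @ Q                                         # measured 2-channel responses

Xf = np.fft.rfft(X, axis=1)
mag = np.abs(Xf).sum(axis=0)                        # total spectral magnitude

shapes = []
for _ in range(2):                                  # two dominant "clusters"
    b = int(np.argmax(mag))                         # bin dominated by one mode
    # rotate so channel 0 is real-positive; the vector then aligns with a mode shape
    v = (Xf[:, b] * np.exp(-1j * np.angle(Xf[0, b]))).real
    shapes.append(v / np.linalg.norm(v))
    mag[max(b - 5, 0): b + 6] = 0.0                 # suppress this peak, find the next
shapes = np.array(shapes).T                         # estimated mode matrix (columns)
```

With the shapes in hand, the abstract's next step—decoupling the responses (determined case) or ℓ1 recovery of the modal responses (underdetermined case)—follows.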
Article
Second-order blind source separation (SOBSS) has gained recent interest in operational modal analysis (OMA), since it is able to separate a set of system responses into modal coordinates from which the system poles can be extracted by single-degree-of-freedom techniques. In addition, SOBSS returns a mixing matrix whose columns are the estimates of the system mode shapes. The objective of this paper is threefold. First, a theoretical analysis of current SOBSS methods is conducted within the OMA framework and its precise conditions of applicability are established. Second, a new separation method is proposed that fixes current limitations of SOBSS: It returns estimate of complex mode shapes, it can deal with more active modes than the number of available sensors, and it shows superior performance in the case of heavily damped and/or strongly coupled modes. Third, a theoretical connection is drawn between SOBSS and stochastic subspace identification (SSI), which stands as one of the points of reference in OMA. All approaches are finally compared by means of numerical simulations.
Article
Blind Source Separation (BSS) is an important issue in the coherent processing of multi-dimensional data. To recover and separate the sources from underdetermined mixtures, some prior information, such as a sparse representation, is required. The principle is very similar to the new technique named Compressed Sensing (CS), which asserts that one can recover a sparse signal from a limited number of random projections. In this paper, the relationship between BSS and CS is studied by equivalent transformation; we then propose a linear operator by which the relationship between the sources and the mixtures is modeled in two ways, RIP and incoherence, and give some instructive conclusions for the operator design. Numerical simulations applying the FOOMP algorithm and an operator we propose are conducted to demonstrate the good performance of the whole framework.
Article
In structural health monitoring (SHM) of civil structures, data compression is often needed to reduce the cost of data transfer and storage, because of the large volumes of sensor data generated by the monitoring system. The traditional framework for data compression is to first sample the full signal and then compress it. Recently, a new data compression method named compressive sampling (CS), which can acquire the data directly in compressed form by using special sensors, has been presented. In this article, the potential of CS for data compression of vibration data is investigated using simulation of the CS sensor algorithm. For reconstruction of the signal, both wavelet and Fourier orthogonal bases are examined. The acceleration data collected from the SHM system of Shandong Binzhou Yellow River Highway Bridge is used to analyze the data compression ability of CS. For comparison, both the wavelet-based and Huffman coding methods are employed to compress the data. The results show that the compression ratios achieved using CS are not high, because the vibration data used in SHM of civil structures are not naturally sparse in the chosen bases.
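The sparsity question underlying that finding is easy to probe: the achievable CS compression ratio hinges on how many basis coefficients carry the signal energy. The sketch below counts Fourier coefficients needed for 99% of the energy of a clean versus a noisy signal; the test signals are illustrative assumptions, not the bridge data.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 8 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)      # measurement noise floor

def coeffs_for_energy(x, frac=0.99):
    # number of Fourier coefficients carrying `frac` of the signal energy
    p = np.sort(np.abs(np.fft.rfft(x)) ** 2)[::-1]
    cum = np.cumsum(p)
    return int(np.searchsorted(cum, frac * cum[-1]) + 1)

c_clean = coeffs_for_energy(clean)    # a handful of coefficients suffice
c_noisy = coeffs_for_energy(noisy)    # the noise floor spreads energy everywhere
```

The clean two-tone signal is compressible to a few coefficients, while the noisy version needs hundreds, which mirrors the article's observation that measured vibration data are not naturally sparse in these bases.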
Article
Recently, blind source separation (BSS) methods have gained significant attention in the area of signal processing. Independent component analysis (ICA) and second-order blind identification (SOBI) are two popular BSS methods that have been applied to modal identification of mechanical and structural systems. Published results by several researchers have shown that ICA performs satisfactorily for systems with very low levels of structural damping, for example, for damping ratios of the order of 1% critical. For practical structural applications with higher levels of damping, methods based on SOBI have shown significant improvement over ICA methods. However, traditional SOBI methods suffer when nonstationary sources are present, such as those that occur during earthquakes and other transient excitations. In this paper, a new technique based on SOBI, called the modified cross-correlation method, is proposed to address these shortcomings. The conditions in which the problem of structural system identification can be posed as a BSS problem is also discussed. The results of simulation described in terms of identified natural frequencies, mode shapes, and damping ratios are presented for the cases of synthetic wind and recorded earthquake excitations. The results of identification show that the proposed method achieves better performance over traditional ICA and SOBI methods. Both experimental and large-scale structural simulation results are included to demonstrate the applicability of the newly proposed method to structural identification problems.
Book
Preface. 1. Introduction. 2. Finite Element Modelling. 3. Vibration Testing. 4. Comparing Numerical Data with Test Results. 5. Estimation Techniques. 6. Parameters for Model Updating. 7. Direct Methods Using Modal Data. 8. Iterative Methods Using Modal Data. 9. Methods Using Frequency Domain Data. 10. Case Study: an Automobile Body M. Brughmans, J. Leuridan, K. Blauwkamp. 11. Discussion and Recommendations. Index.
Book
Edited by the people who were forerunners in creating the field, together with contributions from 34 leading international experts, this handbook provides the definitive reference on Blind Source Separation, giving a broad and comprehensive description of all the core principles and methods, numerical algorithms and major applications in the fields of telecommunications, biomedical engineering and audio, acoustic and speech processing. Going beyond a machine learning perspective, the book reflects recent results in signal processing and numerical analysis, and includes topics such as optimization criteria, mathematical tools, the design of numerical algorithms, convolutive mixtures, and time-frequency approaches. This Handbook is an ideal reference for university researchers, R&D engineers and graduates wishing to learn the core principles, methods, algorithms, and applications of Blind Source Separation. Covers the principles and major techniques and methods in one book. Edited by the pioneers in the field, with contributions from 34 of the world's experts. Describes the main existing numerical algorithms and gives practical advice on their design. Covers the latest cutting-edge topics: second-order methods; algebraic identification of under-determined mixtures; time-frequency methods; Bayesian approaches; blind identification under non-negativity constraints; semi-blind methods for communications. Shows the applications of the methods in key areas such as telecommunications, biomedical engineering, and speech, acoustic, audio and music processing, while also giving a general method for developing applications.
Conference Paper
The problem of underdetermined blind audio source separation is usually addressed under the framework of sparse signal representation. In this paper, we develop a novel algorithm for this problem based on compressed sensing, which is an emerging technique for efficient data reconstruction. The proposed algorithm consists of two stages. The unknown mixing matrix is first estimated from the audio mixtures in the transform domain, as in many existing methods, by a K-means clustering algorithm. Different from conventional approaches, in the second stage, the sources are recovered by using a compressed sensing approach. This is motivated by the similarity between the mathematical models adopted in compressed sensing and source separation. Numerical experiments, including a comparison with a recent sparse representation approach, are provided to show the good performance of the proposed method.
Article
The present study carries out output-only modal analysis using two blind source separation (BSS) techniques, namely independent component analysis and second-order blind identification. The concept of virtual source is exploited and renders the application of these BSS techniques possible. The proposed modal analysis method is illustrated using numerical and experimental examples.
Article
This paper focuses on the relation between the vibration modes of mechanical systems and the modes computed through a blind source separation technique called independent component analysis (ICA). For free and random vibrations of weakly damped systems, a one-to-one relationship between the vibration modes and the ICA modes is demonstrated using the concept of virtual source. Based on this theoretical link, a time-domain structural system identification technique is proposed and is illustrated using numerical examples.
Article
In this paper, a second-order statistical method employed in blind source separation (BSS) is adapted for use in modal parameter identification. Modal responses and mode shapes are estimated by the use of second-order blind identification (SOBI) on an expanded and pre-treated dataset. Frequency and damping can be obtained from the modal responses by simple single degree of freedom methods. Using this approach, a class of new non-parametric output-only modal identification algorithms is proposed and examples of its use are provided. It is demonstrated that the proposed methodology provides a novel and robust approach to modal identification. For the example shown, the quality of the modal parameters produced by the method is competitive with state-of-the-art parametric methods.
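The last step described above, extracting frequency and damping from an isolated modal response with simple single degree of freedom methods, can be illustrated with a small synthetic sketch (all signals and values below are illustrative assumptions, not the paper's code):

```python
import numpy as np

# Illustrative sketch: a free-decay modal response with assumed true
# parameters, from which frequency and damping are re-estimated.
fs = 200.0                          # sampling rate, Hz
t = np.arange(0.0, 20.0, 1.0 / fs)
fn, zeta = 3.0, 0.02                # assumed natural frequency / damping ratio
wn = 2.0 * np.pi * fn
wd = wn * np.sqrt(1.0 - zeta**2)
x = np.exp(-zeta * wn * t) * np.sin(wd * t)    # free-decay modal response

# Frequency estimate: dominant FFT peak
X = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
f_est = freqs[np.argmax(X)]

# Damping estimate: logarithmic decrement over 10 successive positive peaks
peaks = [i for i in range(1, len(x) - 1)
         if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
delta = np.log(x[peaks[0]] / x[peaks[10]]) / 10.0
zeta_est = delta / np.sqrt(4.0 * np.pi**2 + delta**2)
```

With clean data both estimates land close to the assumed values; in practice windowing and noise would call for more careful peak fitting.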
Article
Stone's method is one of the novel approaches to the blind source separation (BSS) problem and is based on Stone's conjecture. However, this conjecture has not been proved. We present a simple simulation to demonstrate that Stone's conjecture is incorrect. We then modify Stone's conjecture and prove this modified conjecture as a theorem, which can be used as a basis for BSS algorithms.
Article
Separation of sources consists of recovering a set of signals of which only instantaneous linear mixtures are observed. In many situations, no a priori information on the mixing matrix is available: the linear mixture should be "blindly" processed. This typically occurs in narrowband array processing applications when the array manifold is unknown or distorted. This paper introduces a new source separation technique exploiting the time coherence of the source signals. In contrast with other previously reported techniques, the proposed approach relies only on stationary second-order statistics, based on a joint diagonalization of a set of covariance matrices. Asymptotic performance analysis of this method is carried out; some numerical simulations are provided to illustrate the effectiveness of the proposed method.
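The joint-diagonalization idea can be shown in miniature by diagonalizing a single time-lagged covariance matrix after whitening (the AMUSE special case of this family of methods). The sources, mixing matrix, and lag below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Two synthetic sources with different time coherence
t = np.arange(0.0, 10.0, 0.01)
S = np.vstack([np.sin(2.0 * np.pi * 1.0 * t),
               np.sign(np.sin(2.0 * np.pi * 3.1 * t))])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])           # "unknown" instantaneous mixing matrix
X = A @ S                            # observed mixtures

# Whitening from the zero-lag covariance
Xc = X - X.mean(axis=1, keepdims=True)
R0 = Xc @ Xc.T / Xc.shape[1]
d, E = np.linalg.eigh(R0)
W = E @ np.diag(d ** -0.5) @ E.T
Z = W @ Xc

# Diagonalize one symmetrized lagged covariance of the whitened data
tau = 5
R_tau = Z[:, :-tau] @ Z[:, tau:].T / (Z.shape[1] - tau)
_, V = np.linalg.eigh((R_tau + R_tau.T) / 2.0)
S_hat = V.T @ Z                      # sources up to order, sign, and scale

# Worst-case |correlation| between a recovered source and its best match
C = np.abs(np.corrcoef(np.vstack([S_hat, S]))[:2, 2:])
match = float(C.max(axis=1).min())
```

SOBI proper jointly diagonalizes several lags for robustness; one lag suffices here because the two sources have clearly distinct lagged autocorrelations.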
Article
Recently, a lot of attention has been paid to regularization based methods for sparse signal reconstruction (e.g., basis pursuit denoising and compressed sensing) and feature selection (e.g., the Lasso algorithm) in signal processing, statistics, and related fields. These problems can be cast as ℓ1-regularized least-squares programs (LSPs), which can be reformulated as convex quadratic programs, and then solved by several standard methods such as interior-point methods, at least for small and medium size problems. In this paper, we describe a specialized interior-point method for solving large-scale ℓ1-regularized LSPs that uses the preconditioned conjugate gradients algorithm to compute the search direction. The interior-point method can solve large sparse problems, with a million variables and observations, in a few tens of minutes on a PC. It can efficiently solve large dense problems, that arise in sparse signal recovery with orthogonal transforms, by exploiting fast algorithms for these transforms. The method is illustrated on a magnetic resonance imaging data set.
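The underlying problem, minimize (1/2)||Ax - y||² + λ||x||₁, is easy to state; the sketch below solves a small noiseless instance with plain proximal-gradient iterations (ISTA) rather than the specialized interior-point method the abstract describes. All sizes and values are illustrative:

```python
import numpy as np

# Small synthetic l1-regularized least-squares instance
rng = np.random.default_rng(1)
n, m, k = 200, 80, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 3.0 * rng.standard_normal(k)
y = A @ x_true                       # noiseless measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth term
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - y) / L                          # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

err = float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

ISTA is far slower per digit of accuracy than an interior-point solver, but its two-line iteration makes the structure of the problem, a gradient step followed by soft thresholding, explicit.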
Article
Conventional approaches to sampling signals or images follow Shannon's theorem: the sampling rate must be at least twice the maximum frequency present in the signal (Nyquist rate). In the field of data conversion, standard analog-to-digital converter (ADC) technology implements the usual quantized Shannon representation - the signal is uniformly sampled at or above the Nyquist rate. This article surveys the theory of compressive sampling, also known as compressed sensing or CS, a novel sensing/sampling paradigm that goes against the common wisdom in data acquisition. CS theory asserts that one can recover certain signals and images from far fewer samples or measurements than traditional methods use.
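A toy version of the CS claim, close in spirit to the non-uniform low-rate random sampling studied in this article: a signal sparse in the DCT basis is reconstructed from a random quarter of its samples by orthogonal matching pursuit (OMP). The sizes and the choice of greedy solver are illustrative assumptions, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, K = 256, 64, 4                 # length, measurements, sparsity

# Orthonormal DCT-II synthesis basis
i = np.arange(N)
Psi = np.sqrt(2.0 / N) * np.cos(np.pi * (i[:, None] + 0.5) * i[None, :] / N)
Psi[:, 0] /= np.sqrt(2.0)

c = np.zeros(N)
c[rng.choice(N, K, replace=False)] = 5.0 * rng.standard_normal(K)
x = Psi @ c                                           # sparse in DCT domain

rows = np.sort(rng.choice(N, M, replace=False))       # random low-rate samples
y, A = x[rows], Psi[rows, :]

# Orthogonal matching pursuit: grow the support greedily, refit by least squares
support, r = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ sol

c_hat = np.zeros(N)
c_hat[support] = sol
rel_err = float(np.linalg.norm(c_hat - c) / np.linalg.norm(c))
```

With M well above the K log N regime the CS literature identifies, recovery from the subsampled rows is essentially exact despite sampling far below the Nyquist rate.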
Article
The Time-Frequency and Time-Scale communities have recently developed a large number of overcomplete waveform dictionaries --- stationary wavelets, wavelet packets, cosine packets, chirplets, and warplets, to name a few. Decomposition into overcomplete systems is not unique, and several methods for decomposition have been proposed, including the Method of Frames (MOF), Matching Pursuit (MP), and, for special dictionaries, the Best Orthogonal Basis (BOB). Basis Pursuit (BP) is a principle for decomposing a signal into an "optimal" superposition of dictionary elements, where optimal means having the smallest ℓ1 norm of coefficients among all such decompositions. We give examples exhibiting several advantages over MOF, MP, and BOB, including better sparsity and super-resolution. BP has interesting relations to ideas in areas as diverse as ill-posed problems, abstract harmonic analysis, total variation de-noising, and multi-scale edge de-noising. Basis Pursuit in highly ...
A compressed sensing camera: new theory and an implementation using digital micromirrors
  • D Takhar
  • V Bansal
  • M Wakin
  • M Duarte
  • D Baron
  • K F Kelly
  • R G Baraniuk
D. Takhar, V. Bansal, M. Wakin, M. Duarte, D. Baron, K.F. Kelly, R.G. Baraniuk, A compressed sensing camera: new theory and an implementation using digital micromirrors, in: Proceedings of the Conference on Computational Imaging IV at SPIE Electronic Imaging, San Jose, California, 2006.
Independent Component Analysis
  • A Hyvärinen
  • J Karhunen
  • E Oja
A. Hyvärinen, J. Karhunen, E. Oja, Independent Component Analysis, Wiley, New York, NY, 2001.
Y. Yang, S. Nagarajaiah / Mechanical Systems and Signal Processing 56-57 (2015) 15–34
An introduction to compressive sampling
  • E Candès
  • M Wakin
E. Candès, M. Wakin, An introduction to compressive sampling, IEEE Signal Process. Magazine 25 (2008) 21-30.
Fig. 12. (a) The estimated modeshapes of the USC building by CP–CS in the non-uniform low-rate random sensing framework, compared with those by CP in the Nyquist uniform sensing framework and with those from FEM; (b) from M = 100 non-uniform low-rate random samples (M = N/15), CP–CS recovers the N = 1500-dimensional uniform modal responses.
Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
  • E Candès
  • J Romberg
  • T Tao
E. Candès, J. Romberg, T. Tao, Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inf. Theory 52 (2006) 489-509.