Article

Smoothing and differentiation of data by simplified least square procedure

... To calculate the cell growth rate, λ(t), for a given cell, we first calculated the natural logarithm of the cell length in μm. Then, we used a Savitzky-Golay filter [46] with a third-order polynomial and a window size of eight data points to smooth the signal. The growth rate was calculated as the first derivative of this smoothed signal (the difference between sequential data points divided by the imaging interval). ...
... Result of applying the Savitzky-Golay filter [46] to the natural logarithm of the cell length using a third-order polynomial and a window size of eight data points. ...
... From the YFP signal, an approximate production start time was calculated using the slope of the YFP signal. The YFP signal was first filtered using a Savitzky-Golay filter [46], with a third-order polynomial and a window size of eight data points. The first derivative of this filtered signal was then calculated to determine the slope. ...
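The growth-rate pipeline described in the excerpts above (log-transform, Savitzky-Golay smoothing, first derivative over the imaging interval) can be sketched in Python with SciPy. This is a minimal illustration, not the authors' code: SciPy's `savgol_filter` requires an odd window length, so a 9-point window stands in for the 8-point window quoted above, and the function name and arguments are illustrative. The SG derivative option replaces the finite difference of sequential points used in the excerpt.

```python
import numpy as np
from scipy.signal import savgol_filter

def growth_rate(lengths_um, dt, window=9, polyorder=3):
    """Estimate a single-cell elongation rate from a length trace.

    `lengths_um` is the cell length over time (in µm) and `dt` the
    imaging interval. Window/order follow the excerpt, except that
    scipy needs an odd window (9 instead of the quoted 8).
    """
    log_len = np.log(lengths_um)                          # ln(length)
    smoothed = savgol_filter(log_len, window, polyorder)  # denoised signal
    # First derivative of the smoothed signal, per unit time.
    rate = savgol_filter(log_len, window, polyorder, deriv=1, delta=dt)
    return smoothed, rate
```

For exponential growth, the log-length is linear in time, so the recovered rate is constant.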
Preprint
Full-text available
When a lytic bacteriophage infects a cell, it hijacks the cell’s resources to replicate itself, ultimately causing cell lysis and the release of new virions. As phages function as obligate parasites, each stage of the infection process depends on the physiological parameters of the host cell. Given the inherent variability of bacterial physiology, we ask how the phage infection dynamic reflects such heterogeneity. Here, we introduce a pioneering assay for investigating the kinetics of individual infection steps by a single T7 phage on a single bacterium. The high-throughput, time-resolved nature of the assay allows us to monitor the infection progression simultaneously in multiple cells, revealing substantial heterogeneity in each step and correlations between the infection dynamics and the infected cell’s properties. Simulations of competing phage populations with distinct lysis time distributions indicate that this heterogeneity can have considerable impact on phage fitness, recognizing variability in infection kinetics as a potential evolutionary driver of phage-bacteria interactions.
... Figure 2 shows the use of the Mahalanobis distance to remove outliers; a total of 6 abnormal samples were removed. In addition, to amplify information and further reduce noise, the following methods were applied to spectral pretreatment: (i) Savitzky-Golay smoothing: A window of size 5 and a polynomial of order 2 were chosen for smoothing to achieve maximum noise removal [41]. (ii) First derivatives: First derivatives were used to achieve the effect of enhancing the detailed features of a spectrum and eliminating baseline drift and background noise interference [42]. ...
... The model consists of an input layer, a hidden layer, and an output layer; the model's memory depends on one or more memory units in the hidden layer. A memory cell (Figure 3) can be regarded as a neuron with the ability to remember; each cell contains an internal state (the cell state) and an external state h, representing the long-term and short-term memory of the cell, respectively. ...
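The Mahalanobis-distance outlier screen and the SG pretreatment (5-point window, order 2, then a first derivative) quoted above can be sketched as follows. This is an illustrative NumPy/SciPy reconstruction, not the cited authors' implementation; the pseudo-inverse is my own choice for numerical stability, and any distance threshold applied to the output would be study-specific.

```python
import numpy as np
from scipy.signal import savgol_filter

def mahalanobis_distances(X):
    """Distance of each spectrum (row of X) from the sample mean;
    large values flag candidate outliers. `pinv` keeps this stable
    when the covariance matrix is near-singular."""
    mu = X.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(X, rowvar=False))
    diff = X - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))

def pretreat(X, window=5, polyorder=2):
    """SG smoothing (window 5, order 2, as quoted) and a first
    derivative, both along the wavelength axis."""
    smoothed = savgol_filter(X, window, polyorder, axis=1)
    first_deriv = savgol_filter(X, window, polyorder, deriv=1, axis=1)
    return smoothed, first_deriv
```

The first derivative removes constant baseline offsets, which is exactly the "baseline drift" elimination the excerpt describes.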
Article
Full-text available
Soil analysis using near-infrared spectroscopy has shown great potential to be an alternative to traditional laboratory analysis, and there is continuously increasing interest in building large-scale soil spectral libraries (SSLs). However, due to issues such as high non-linearity in soil spectral data and complexity in soil spatial variation, the establishment of robust prediction models for soil spectral libraries remains a challenge. This study aimed to investigate the performance of deep learning algorithms, including long short-term memory (LSTM) and LSTM–convolutional neural networks (LSTM–CNN) integrated models, to predict the soil organic matter (SOM) of a provincial-scale SSL, and compare it to the normally used local weighted regression (LWR) model. The Hebei soil spectral library (HSSL) contains 425 topsoil samples (0–20 cm), of which every 3 soil samples were collected from dry land, irrigated land, and paddy fields, respectively, in different counties of Hebei Province, China. The results show that the accuracy of the validation dataset rank as follows: LSTM–CNN (R²p = 0.96, RMSEp = 1.66 g/kg) > LSTM (R²p = 0.83, RMSEp = 3.42 g/kg) > LWR (R²p = 0.82, RMSEp = 3.79 g/kg). The LSTM–CNN model performed the best, mainly due to its comprehensive ability to effectively extract spatial and temporal features. Meanwhile, the LSTM model achieved higher accuracy than the LWR model, owing to its built-in memory unit and its advantage of faster feature band extraction. Thus, it was suggested to use deep learning algorithms for SOM predictions in SSLs. However, their performance on larger-scale SSLs such as continental/global SSLs still needs to be further investigated.
... Data were analyzed in Origin software (Origin 2018, OriginLab Corporation). Following the Savitzky-Golay smoothing filtering method (Steinier et al. 1972; Weng 2016), smoothing filtering models of 5, 7, 11, 13, 15, 17, 21, and 23 points were compared, and a 13-point model was finalized for the smoothing filtering. ...
... The original Micro-FTIR infrared spectra in Fig. 8a-e were processed into second-derivative spectra (2nd Der) in Fig. 8f. The second-derivative spectrum removed baseline influences, improved the resolution and accuracy of absorption-peak positions, and its valleys roughly corresponded to the peaks of the original spectrum (Steinier et al. 1972; Weng 2016; Qin et al. 2012). For parenchyma cell radial and tangential walls, the heights of valleys from adjacent parenchyma cells' radial walls were similar to the corresponding valleys from the adjacent tangential walls, except for 1512 cm⁻¹ of the aromatic ring in lignin (Chen et al. 2022; Zhu et al. 2021). ...
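The property the excerpt relies on — second-derivative valleys aligning with the original absorption peaks while baseline offsets vanish — is easy to demonstrate. A minimal sketch, assuming SciPy; the 13-point window and cubic order here are illustrative, not taken from the cited FTIR study.

```python
import numpy as np
from scipy.signal import savgol_filter

def second_derivative(spectrum, window=13, polyorder=3):
    """SG second-derivative spectrum: a constant baseline maps to zero,
    and minima (valleys) line up with the original absorption peaks."""
    return savgol_filter(spectrum, window, polyorder, deriv=2)
```

Applying this to a Gaussian band sitting on a flat baseline, the deepest valley of the second derivative falls at the band centre, and the baseline contributes nothing.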
Article
Full-text available
The anisotropic mechanical properties of bamboo significantly affect its processing and wide utilization. It is reasonable to speculate that bamboo's anisotropic mechanical performance is related to parenchyma cells, given their considerable volume fraction and the different fracture modes observed when bamboo is loaded in different directions. However, few studies have investigated the anisotropic mechanical properties of parenchyma tissue and their effects on bamboo anisotropy, owing to the difficulty of preparing parenchyma tissue samples in different directions. This paper investigated the mechanical properties of parenchyma tissue in three typical directions, namely the longitudinal, radial, and tangential directions, and summarized the tensile fracture behaviors and modes of parenchyma tissue and their influencing factors. The results indicated that the anisotropic tensile performance of parenchyma tissue was reflected in the tensile properties, fracture modes, and crack paths. The longitudinal, radial, and tangential tensile strengths were 20.92 MPa, 24.22 MPa, and 27.59 MPa, respectively. The transverse, radial, and tangential cracks propagated along different interfaces at different crack deflection angles. The microfibril angle, chemical composition, pit size and distribution density, cell quantity, and cell dislocation degree affected the parenchyma tissue's anisotropic mechanical properties. Compared to its effect on bamboo strength, the parenchyma tissue's anisotropic mechanical performance might have a greater effect on the zigzag crack growth path of bamboo failure. This finding could help in designing biomimetic materials with high toughness.
... Prior to modelling, spectra from the two spots of the same leaf (three spectra × two spots) were averaged and pre-processed with standard normal variate (SNV) (Naes et al. 2002) and the Savitzky-Golay filter (n smoothing points, 2nd polynomial order, and y derivative order), and further mean centred. The PLSDA models were optimized for: (I) the spectral regions (15 distinct regions, defined in Fig. S1 of the supplementary material, alone and/or combined in the following manner: 1 - Region R1; 2 - Region R2; 3 - Region R3; 4 - Region R4; 5 - Region R1 + R2 + R3 + R4; 6 - Region R1 + R2; 7 - Region R1 + R3; 8 - Region R1 + R4; 9 - Region R2 + R3; 10 - Region R2 + R4; 11 - Region R3 + R4; 12 - Region R1 + R2 + R3; 13 - Region R1 + R2 + R4; 14 - Region R1 + R3 + R4; 15 - Region R2 + R3 + R4); (II) the spectra pre-processing method (Savitzky-Golay filter parameters: 9, 12, and 15 smoothing points and 1st- and 2nd-order derivative) (Steinier et al. 1972); and (III) the number of latent variables (LVs: 2, 3, 4, and 5). ...
... For photosynthetic pigments (chlorophyll a, chlorophyll b, total chlorophyll, anthocyanins, and carotenoids), antioxidant activity (ABTS and DPPH scavenging effect), and mineral concentration estimates, partial least squares (PLS) regression models (Geladi and Kowalski 1986) were used. Prior to modelling, spectra were pre-processed with SNV (Naes et al. 2002) and the Savitzky-Golay filter (9, 12, and 15 smoothing points, 2nd polynomial order, and 1st- and 2nd-order derivative) (Steinier et al. 1972) to remove baseline drifts, and further mean centred. The numbers of LVs used were 3, 4, 5, 6, and 7. Before the development of the PLS models, spectral triplicates were averaged (three spectra obtained on the same leaf spot) and randomly divided into two independent data sets (70%/30%) to be used for the calibration/cross-validation (70%) and validation (30%) of the PLS models. ...
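The SNV and mean-centring steps that bracket the Savitzky-Golay filtering in the excerpts above are simple row- and column-wise operations. A minimal NumPy sketch (illustrative, not the cited authors' code):

```python
import numpy as np

def snv(X):
    """Standard normal variate: centre each spectrum (row) and scale it
    to unit standard deviation, removing multiplicative scatter."""
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)

def mean_center(X):
    """Column-wise mean centring, applied after SNV and SG filtering
    and before fitting the PLS/PLSDA model."""
    return X - X.mean(axis=0)
```

After `snv`, every spectrum has zero mean and unit standard deviation across wavelengths; after `mean_center`, every wavelength channel has zero mean across samples.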
Article
Full-text available
Aims The excessive use of fertilizers is a problem in current agricultural systems, and sustainable farming practices, including precision agriculture, demand the use of new technologies to manage plant stress at an early stage. To sustainably manage iron (Fe) fertilization in agricultural fields, it is urgent to develop early detection methods for Fe deficiency, and linked oxidative stress, in plant leaves. Herein, the potential of using Fourier Transform Infrared (FTIR) spectroscopy for Fe deficiency and oxidative stress detection in soybean plants was evaluated. Methods After a period of two weeks of hydroponic growth under optimum conditions, soybean plants were grown under Fe-sufficient (Fe+) and Fe-deficient (Fe–) hydroponic conditions for four weeks. Sampling occurred every week, infrared (IR) spectra were acquired and biological parameters (total chlorophyll, anthocyanins and carotenoids concentration, and ABTS and DPPH free radical scavenging ability), mineral concentrations, and the Fe-related genes’ expression - FRO2- and IRT1-like - were evaluated. Results Two weeks after imposing Fe deficiency, plants displayed decreased antioxidant activity, and increased expression levels of FRO2- and IRT1-like genes. Regarding the PLS models developed to estimate the biological parameters and mineral concentrations, satisfactory calibration models were globally obtained with R²C from 0.93 to 0.99. FTIR spectroscopy was also able to discriminate between Fe + and Fe– plants from an early stage of stress induction with 96.3% of correct assignments. Conclusion High reproducibility was observed among the different spectra of each sample and FTIR spectroscopy may be an early, non-invasive, cheap, and environmentally friendly technique for IDC management.
... Experimental γ(f) measured while phonating the neutral vowel /ə/ (in the target word "herd"): magnitude (above) and phase (below). The blue line shows the raw measurement, while the red line shows the spectra smoothed with the Savitzky-Golay algorithm [14] and interpolated (having removed harmonics of the voice), revealing vocal tract resonances as "notches" (magnitude spectrum) and "dips" (phase spectrum), indicated by yellow vertical bands; (b) corresponding analytical γ(f) spectra modeled for an ideal baffled open-closed cylinder with a length of 170 mm; the "notches" and "dips" correspond closely in frequency and general structure with the experimental spectra in (a). ...
Article
Full-text available
Broadband excitation introduced at the speaker’s lips and the evaluation of its corresponding relative acoustic impedance spectrum allow for fast, accurate and non-invasive estimations of vocal tract resonances during speech and singing. However, due to radiation impedance interactions at the lips at low frequencies, it is challenging to make reliable measurements of resonances lower than 500 Hz due to poor signal to noise ratios, limiting investigations of the first vocal tract resonance using such a method. In this paper, various physical configurations which may optimize the acoustic coupling between transducers and the vocal tract are investigated and the practical arrangement which yields the optimal vocal tract resonance detection sensitivity at low frequencies is identified. To support the investigation, two quantitative analysis methods are proposed to facilitate comparison of the sensitivity and quality of resonances identified. Accordingly, the optimal configuration identified has better acoustic coupling and low-frequency response compared with existing arrangements and is shown to reliably detect resonances down to 350 Hz (and possibly lower), thereby allowing the first resonance of a wide range of vowel articulations to be estimated with confidence.
... To assess the performance of the model, we typically rely on metrics such as the coefficient of determination (R 2 ), mean absolute error (MAE), and root mean square error (RMSE). A higher R 2 value and lower MAE and RMSE values indicate better preprocessing effects [15]. The specific results of the LSTM correction model are presented in Table 1. ...
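The three metrics named in the excerpt above (R², MAE, RMSE) have standard definitions that can be computed directly; a plain-NumPy sketch (the function name is illustrative):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Coefficient of determination (R^2), mean absolute error (MAE),
    and root mean square error (RMSE)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    resid = y_true - y_pred
    ss_res = np.sum(resid ** 2)                       # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    mae = np.mean(np.abs(resid))
    rmse = np.sqrt(np.mean(resid ** 2))
    return r2, mae, rmse
```

A perfect prediction gives (1, 0, 0); a constant offset of 1 gives MAE = RMSE = 1 with R² reduced accordingly.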
Article
Full-text available
The origin of agricultural products is crucial to their quality and safety. This study explored the differences in chemical composition and structure of rice from different origins using fluorescence detection technology. These differences are mainly affected by climate, environment, geology, and other factors. By identifying the fluorescence characteristic absorption peaks of the same rice seed varieties from different origins, and comparing them with known or standard samples, this study aims to authenticate rice, protect brands, and achieve traceability. The study selected the same variety of rice seed planted in different regions of Jilin Province in the same year as samples. Fluorescence spectroscopy was used to collect spectral data, which were preprocessed by normalization, smoothing, and wavelet transformation to remove noise, scattering, and burrs. The processed spectral data were used as input for the long short-term memory (LSTM) model. The study focused on the processing and analysis of rice spectra based on NZ-WT-processed data. To simplify the model, uninformative variable elimination (UVE) and the successive projections algorithm (SPA) were used to screen the best wavelengths. These wavelengths were used as input for the support vector machine (SVM) prediction model to achieve efficient and accurate predictions. Within the fluorescence spectral ranges of 475–525 nm and 665–690 nm, absorption peaks of nicotinamide adenine dinucleotide (NADPH), riboflavin (B2), starch, and protein were observed. The origin tracing prediction model established using SVM exhibited stable performance, with a classification accuracy of up to 99.5%. The experiment demonstrated that fluorescence spectroscopy technology has high discrimination accuracy in tracing the origin of rice, providing a new method for rapid identification of rice origin.
... This preprocessing method aims to mitigate data interference and enhance the resolution between the raw spectra, thereby facilitating more robust and reliable experimental data. Three preprocessing methods were used in this study: moving average (MA) [19], Savitzky-Golay (SG) [20], and multivariate scatter correction (MSC) [21]. ...
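Of the three preprocessing methods named above, SG filtering appears in earlier excerpts; the other two can be sketched as follows. This is an illustrative NumPy reconstruction under common definitions (MSC as a regression of each spectrum against the mean or a supplied reference), not the cited authors' code.

```python
import numpy as np

def moving_average(X, window=5):
    """Moving-average (MA) smoothing along the wavelength axis."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, X)

def msc(X, reference=None):
    """Scatter correction (MSC): regress each spectrum on a reference
    (the mean spectrum by default) and remove the fitted slope and
    offset."""
    ref = X.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(X, dtype=float)
    for i, row in enumerate(X):
        slope, intercept = np.polyfit(ref, row, 1)   # row ≈ slope*ref + intercept
        corrected[i] = (row - intercept) / slope
    return corrected
```

For spectra that are exact affine transforms of a reference, `msc` recovers the reference exactly.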
Article
Full-text available
As a popular nutraceutical, collagen powder is the target of adulteration like other high-value food products, therefore detection of adulteration in collagen powder is of great practical significance to ensure market order and the health of the people. To achieve accurate detection of adulterants in collagen powder, a modeling method of fluorescence hyperspectral technology combined with machine learning algorithm was proposed. This study employed various preprocessing methods for denoising and spectral correction to enhance the effective spectral data. Principal component analysis was used to visualize the spectral data and initially revealed the spatial distribution of adulterated collagen powder. Genetic algorithm-k nearest neighbor, particle swarm optimization-support vector machine, and gradient boosting decision tree were used to construct classification models to identify adulterated collagen powder, with the best 2-class discriminant model accuracy, 4-class discriminant model accuracy, and 5-class discriminant model accuracy reaching 99%, 94%, and 98%, respectively. In the quantitative detection models of adulteration level, the random forest model performed best with correlation coefficient (R²) of 0.95 to 0.99. These results suggested that fluorescence hyperspectral technology combined with machine learning algorithm can be effectively used to detect adulterated collagen powder.
... The SG filter was proposed by Savitzky and Golay in 1972 [31]. Its advantage is that it can effectively retain the detailed information of the signal while smoothing, and can make model predictions based on data characteristics. ...
Article
Full-text available
The Global Navigation Satellite System interferometric reflectometry (GNSS-IR) technique has been confirmed to retrieve sea level using signal-to-noise ratio (SNR) data. To investigate the suitability of different GNSS-IR sea level retrieval methods, several strategies were tested during data processing, including: the whole-arc Lomb-Scargle periodogram (whole arc LSP) and window LSP (WinLSP) methods for spectral analysis of SNR data; the tidal harmonic analysis (THA) and dynamic SNR methods for dynamic correction of retrievals; and a moving window smoothing method proposed for processing the retrievals. Furthermore, the THA method is improved by segmenting the SNR data, and moving window smoothing with robust local weighted regression (RLOWESS) and the Savitzky-Golay (SG) filter was adopted for better serviceability. One month of data from the SC02 station was used to test all the strategies by comparison with local tide gauge records, and the HKQT station further verified the usability of the moving window smoothing. The results confirmed that the WinLSP method can obtain more retrievals, implying higher temporal resolution, but the whole arc LSP method reaches better precision; the former is easily affected by the sampling rate of the SNR data. Correction results of the dynamic SNR method are better than those of the segmented THA method. When the former is applied to retrievals of the WinLSP method, the retrievals of all windows from multiple SNR arcs should be adjusted simultaneously. The moving window smoothing based on RLOWESS and the SG filter is more applicable to the retrievals corrected by the segmented THA method, and the smoothing effects of the two methods differ little. In addition, wind speeds greater than 20 m/s seriously affect GNSS-IR sea level retrieval. The GNSS-IR technique has the potential to retrieve sea level at the centimeter level.
... On the other hand, the independent variables in Figure 3 went through a filtering sub-stage applying a Savitzky-Golay filter with a 7-point window and a second-degree polynomial. The most used smoothing and differencing technique in chemometrics is the Savitzky-Golay method [36]-[38], which consists of a local polynomial regression requiring equidistant spectrum values, as shown in (1):

\( Y_j^{*} = \frac{1}{N} \sum_{h=-m}^{m} C_h \, Y_{j+h} \)  (1)

where \( Y_j^{*} \) is the value of the curve to be smoothed or derived, \( N \) is the number of points in the window, \( C_h \) are the convolution coefficients, and \( Y_{j+h} \) are the original values within the window. ...
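The convolution weights of the local polynomial regression described above can be derived directly from a least-squares fit over a centred window. A minimal sketch (function name illustrative): fit a polynomial of the chosen order to the window abscissae and read off the fitted value (or derivative) at the centre; the resulting weights should agree with SciPy's `scipy.signal.savgol_coeffs(..., use='dot')` convention.

```python
import math
import numpy as np

def sg_weights(window, polyorder, deriv=0):
    """Savitzky-Golay convolution weights from the least-squares
    definition: fit a degree-`polyorder` polynomial over the centred
    window and evaluate the `deriv`-th derivative at the centre."""
    half = window // 2
    h = np.arange(-half, half + 1)                     # centred abscissae
    A = np.vander(h, polyorder + 1, increasing=True)   # design matrix [1, h, h^2, ...]
    # Row `deriv` of the pseudo-inverse gives the fitted coefficient
    # a_deriv; the derivative at the centre is deriv! * a_deriv.
    return np.linalg.pinv(A)[deriv] * math.factorial(deriv)
```

Dotting these weights with a window of data gives the smoothed value (or slope) at the window centre; for exactly linear data the fit is exact.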
Article
Full-text available
Mango is a very popular climacteric fruit in America and Europe. Among the internal properties of the mango, total soluble solids (TSS) are an adequate indicator to estimate the quality of mango; however, the measurement of this indicator requires destructive tests. Several studies have addressed similar issues; they have made use of pre-processing transformations without making it clear which of them is statistically better. Here, we created a new spectral database to build machine learning (ML) models. We analyzed a total of 18 principal component regression (PCR) models and 18 partial least squares regression (PLSR) models, combining 4 types of transformations, 3 different feature extractors, and 3 different pre-processing techniques. The research proposes a double cross-validation (CV) both to determine the optimal number of components and to obtain the final metrics. The best model had a root mean square error (RMSE) of 1.1382 °Brix and an RMSE on the transformed scale of 0.5140. The best model used 4 components, the y² transformation, reflectance R as the independent variable, and MSC as a pre-processing technique.
... The experimental data was smoothed by using a Savitzky-Golay filter [64] to fit a second order polynomial function over a span of 20 data points. A sample data of the obtained free vibration is presented in Fig. 4, where time histories of mass displacement and acceleration and phase trajectory of the mass are shown. ...
Article
Full-text available
Impacting systems are widely used in many engineering applications, such as self-propelled robots, energy harvesting and percussive drilling, which exhibit rich and complex nonlinear phenomena. Among these applications, predicting nonlinearities and estimating system parameters are of great interest of nonlinear dynamics research community. Backbone curve is an analytical tool that captures the frequency-amplitude dependence of nonlinear systems. In this paper, we estimate the impacting stiffness of a single-degree-of-freedom non-smooth dynamical system qualitatively by using the backbone curve. It was found that an increase of the impacting stiffness may lead to lowering the backbone curve. An adaptive differential evolution algorithm with the Metropolis criterion is proposed to identify the parameters of the impacting system quantitatively using experimental data, which are consistent with our theoretical predictions. Finally, the identified parameters are verified, and the limitations of the backbone curve are drawn. The nonlinear characteristics identification method studied in this paper could be extended to other vibro-impact systems and is potentially applicable to structural health monitoring and robotic sensing.
... The derivative algorithm eliminates interference from baseline drift or background, separates overlapping peaks, and improves resolution and sensitivity [36]. Smoothing can be used to reduce random errors [37]. Indeed, different data sources require special treatment according to their characteristics. ...
Article
Full-text available
Traditional Chinese medicine is characterized by numerous chemical constituents, complex components, and unpredictable interactions among constituents. Therefore, a single analytical technique is usually unable to obtain comprehensive chemical information. Data fusion is an information processing technology that can improve the accuracy of test results by fusing data from multiple devices; it has broad application prospects through the use of chemometric methods, the adoption of low-level, mid-level, and high-level data fusion techniques, and the establishment of final classification or prediction models. This paper summarizes the current status of the application of data fusion strategies based on spectroscopy, mass spectrometry, chromatography, and sensor technologies to traditional Chinese medicine (TCM), in light of the latest research progress in data fusion technology. It also gives an outlook on the development of data fusion technology in TCM analysis to provide references for the research and development of TCM.
... The cubes span a pixel of size 3 km × 3 km centered on the EC towers in each location. We further preprocessed the data by averaging the NDVI for each timestep as the mean of all the pixels within the cube, see 1. Additionally, we smoothed the signal using a Savitzky-Golay filter (Savitzky and Golay, 1964; Steinier et al., 1972) with a 7-day window and a polynomial order of 3 (Chen et al., 2004) to eliminate any potential artifacts caused by noise. We selected study sites based on their forest cover percentage, ensuring over 80% forest cover in each cube, see 1. ...
Preprint
Full-text available
Vegetation state variables are key indicators of land-atmosphere interactions characterized by long-term trends, seasonal fluctuations, and responses to weather anomalies. This study investigates the potential of neural networks in capturing vegetation state responses, including extreme behavior driven by atmospheric conditions. While machine learning methods, particularly neural networks, have significantly advanced in modeling nonlinear dynamics, it has become standard practice to approach the problem using recurrent architectures capable of capturing nonlinear effects and accommodating both long and short-term memory. We compare four recurrence-based learning models, which differ in their training and architecture: 1) recurrent neural networks (RNNs), 2) long short-term memory-based networks (LSTMs), 3) gated recurrent unit-based networks (GRUs), and 4) echo state networks (ESNs). While our results show minimal quantitative differences in their performances, ESNs exhibit slightly superior results across various metrics. Overall, we show that recurrent network architectures prove generally suitable for vegetation state prediction yet exhibit limitations under extreme conditions. This study highlights the potential of recurrent network architectures for vegetation state prediction, emphasizing the need for further research to address limitations in modeling extreme conditions within ecosystem dynamics.
... The employed spectral transformation methods included Standard Normal Variate (SNV), Multiplicative Scatter Correction (MSC), First Order Derivative (D1), Second Order Derivative (D2), Continuum Removal (CR), and Logarithm Reciprocal (CL). Further, the Savitzky-Golay (SG) algorithm with a window size of 12 and a polynomial order of 2 [59] was used to reduce noise and enhance signals. The PCC was then used to select the wavelength bands in the vis-NIR and XRF spectra that exhibited significant correlations with soil Pb content. ...
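The Pearson-correlation (PCC) band screening described in the excerpt above can be sketched with plain NumPy. This is an illustrative reconstruction, not the cited authors' code; the correlation threshold is hypothetical, since the excerpt only says bands with "significant correlations" were selected.

```python
import numpy as np

def select_bands(X, y, threshold=0.5):
    """Keep wavelength bands (columns of X) whose Pearson correlation
    with the target y (e.g. soil Pb content) exceeds `threshold` in
    absolute value. Returns the selected indices and all correlations."""
    Xc = X - X.mean(axis=0)                      # centre each band
    yc = y - y.mean()                            # centre the target
    r = (Xc * yc[:, None]).sum(axis=0) / np.sqrt(
        (Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    return np.flatnonzero(np.abs(r) >= threshold), r
```

Bands that are exact (anti-)copies of the target score r = ±1 and are kept; uncorrelated noise bands are dropped.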
Article
Full-text available
Traditional methods for obtaining soil heavy metal content are expensive, inefficient, and limited in monitoring range. In order to meet the needs of soil environmental quality evaluation and health status assessment, visible near-infrared spectroscopy and XRF spectroscopy for monitoring heavy metal content in soil have attracted much attention, because of their rapid, nondestructive, economical, and environmentally friendly features. The use of either of these spectra alone cannot meet the accuracy requirements of traditional measurements, while the synergistic use of the two spectra can further improve the accuracy of monitoring heavy metal lead content in soil. Therefore, this study applied various spectral transformations and preprocessing to vis-NIR and XRF spectra; used the whale optimization algorithm (WOA) and competitive adaptive re-weighted sampling (CARS) algorithms to identify feature spectra; designed a combination variable model (CVM) based on multi-layer spectral data fusion, which improved the spectral preprocessing and spectral feature screening process to increase the efficiency of spectral fusion; and established a quantitative model for soil Pb concentration using partial least squares regression (PLSR). The estimation performance of three spectral fusion strategies, CVM, outer-product analysis (OPA), and Granger-Ramanathan averaging (GRA), was discussed. The results showed that the accuracy and efficiency of the CARS algorithm in the fused spectra estimation model were superior to those of the WOA algorithm, with an average coefficient of determination (R2) value of 0.9226 and an average root mean square error (RMSE) of 0.1984. The accuracy of the estimation models established, based on the different spectral types, to predict the Pb content of the soil was ranked as follows: the CVM model > the XRF spectral model > the vis-NIR spectral model. 
Within the CVM fusion strategy, the estimation model based on CARS and PLSR (CARS_D1+D2) performed the best, with R2 and RMSE values of 0.9546 and 0.2035, respectively. Among the three spectral fusion strategies, CVM had the highest accuracy, OPA had the smallest errors, and GRA showed a more balanced performance. This study provides technical means for on-site rapid estimation of Pb content based on multi-source spectral fusion and lays the foundation for subsequent research on dynamic, real-time, and large-scale quantitative monitoring of soil heavy metal pollution using high-spectral remote sensing images.
... MOD11C3 and MOD13C2 were monthly synthetic LST and NDVI/spectral reflectance products, respectively, with a spatial resolution of 0.05°. For the MODIS products, quality control files were used to eliminate invalid values in the images, and Savitzky-Golay filtering was used to correct low-quality values in the images (Steinier et al., 1972). ...
Article
Drought has a serious impact on the health of terrestrial ecosystems, socio-economy and human health. It is important to accurately monitor drought conditions. Soil moisture (SM) can directly characterize surface dryness and wetness. Precipitation (PPT), evapotranspiration (ET), land surface temperature (LST), and vegetation status indirectly relate to SM, and they influence each other. Therefore, using one index cannot comprehensively and objectively reflect the real situation of surface dryness and wetness. In this study, based on the Z-score data of LST, OCO-2-based solar-induced chlorophyll fluorescence product (GOSIF) and water balance (WB) from 2003 to 2018, a new three-dimensional (3D) drought index, namely temperature-SIF-Water Balance Dryness Index (TSWDI), was constructed using Euclidean distance method. The accuracy of TSWDI was verified by soil moisture (SM), standardized precipitation index (SPI), and standardized precipitation evapotranspiration index (SPEI), and compared with other recognized drought indices. The results showed that TSWDI had a higher correlation with SM (R = 0.340) and SPEI (R = 0.533) than vegetation health index (VHI) (R was 0.201 and 0.207, respectively), temperature-vegetation-soil moisture dryness index (TVMDI) (R was −0.022 and −0.031, respectively), and temperature vegetation precipitation dryness index (TVPDI) (R was 0.123 and 0.132, respectively). Furthermore, taking the frequency of drought occurrences monitored by SPEI and the normalized anomaly of SM as the benchmark, the errors of drought frequency between TSWDI/TVMDI/TVPDI/VHI and SM/SPEI were obtained, respectively. The results showed that the mean error between TSWDI and SM, SPEI were 0.12 and 0.21, respectively, which were lower than that of VHI (0.19, 0.24), TVMDI (0.87, 0.63) and TVPDI (0.89, 0.69). These all indicated that TSWDI could better monitor regional drought conditions than VHI, TVMDI, and TVPDI in our study area. 
TSWDI had good applicability in monitoring meteorological and agricultural drought. In addition, the spatial variations and patterns of drought conditions in the study area were explored through the annual time series of TSWDI. The results showed that the study area became wetter at an average rate of 0.0064/year. Pixels reaching significant wetting (P < 0.05) accounted for 49.18%, and only 12.14% of the area reached significant drying (P < 0.05), which was consistent with SPEI. Finally, the results from the cross-correlation method indicated that the percentages of TSWDI that led, were synchronized with, and lagged SM changes were 10.13%, 89.47%, and 0.4%, respectively. TSWDI was further employed to identify drought events in China. It demonstrated that TSWDI and VHI can more accurately capture the onset, evolution, and end of drought than TVMDI and TVPDI in the five southwestern provinces of China. Moreover, R (0.627) between TSWDI and the normalized anomaly of SM was higher than that of VHI (R = 0.619). This indicated that TSWDI can identify drought events in the five southwestern provinces of China more accurately than TVMDI, TVPDI, and VHI. Therefore, TSWDI can more accurately monitor regional drought conditions and respond in advance of, or synchronously with, SM changes. TSWDI contains several key indicators (SIF, LST, and WB) of surface drought: SIF reflects the physiological characteristics of vegetation, temperature affects vegetation transpiration, and WB affects the vegetation's uptake of water from the soil. When drought occurs, TSWDI can synthesize the changes in these three indicators. It can be concluded that TSWDI has great potential for evaluating drought conditions in China.
... The prospectr package (Mullen and Stokkum, 2007) was used for spectral processing to effectively avoid errors caused by noise and baseline translation (Divya and Gopinathan, 2019). FD and SD spectral calculations were performed using the Savitzky-Golay function (Steinier et al., 1972) with a window size of 13. The standard Normal Variate function and MSC function (Rogers et al., 1995) were used to obtain SNV and MSC spectral data, respectively. ...
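For illustration, first-derivative (FD) and second-derivative (SD) spectra with a 13-point Savitzky-Golay window can be computed as below. The polynomial order, sampling step, and synthetic absorption feature are assumptions; the text only specifies the window size:

```python
import numpy as np
from scipy.signal import savgol_filter

# Illustrative reflectance spectrum: a Gaussian absorption feature on a baseline.
wavelengths = np.arange(400, 1000, 2.0)           # nm, hypothetical 2 nm sampling
reflectance = 0.6 - 0.2 * np.exp(-((wavelengths - 680) / 30) ** 2)

# FD and SD spectra with a 13-point Savitzky-Golay window as in the text;
# the polynomial order (2) is an assumption. delta matches the sampling step
# so derivatives are per nm.
fd = savgol_filter(reflectance, window_length=13, polyorder=2, deriv=1, delta=2.0)
sd = savgol_filter(reflectance, window_length=13, polyorder=2, deriv=2, delta=2.0)

# The first derivative crosses zero at the absorption minimum,
# where the second derivative is positive.
i_min = np.argmin(reflectance)
print(float(wavelengths[i_min]))
```

Derivative spectra suppress additive baseline offsets, which is why FD/SD preprocessing is common before chemometric modelling.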
Article
Full-text available
Pigment content is a critical assessment indicator in the study of plant physiological metabolism, stress resistance, ornamental characteristics, and forest health. Spectral imaging technology is widely used for rapid and non-destructive determination of plant physicochemical parameters. To address the shortcomings of previous models that predicted needle chlorophyll content from spectral reflectance only from the perspective of traditional algorithms while ignoring physical models, this research integrates variable complexity and refined classification of physical models to validate the increased accuracy of both the conventional partial least squares (PLS) method and the traditional neural network algorithm. The results of the conifer chlorophyll models of Picea koraiensis Nakai with different needle ages, based on spectral reflectance and vegetation index parameters, showed that the improved nonlinear state transition algorithm-backpropagation (STA-BP) neural network model approach (R² of 0.73–0.89) and the nonlinear Stacking partial least squares (Stacking-PLS) model approach (R² of 0.67–0.85) are slightly more robust than the traditional nonlinear BP model (R² of 0.63–0.82) and linear PLS model (R² of 0.60–0.76). This finding suggests that the nonlinear fitting of chlorophyll content in needles of different needle ages in P. koraiensis Nakai surpasses the traditional linear model fitting methodology. Furthermore, the model fitting of chlorophyll content in conifers of different needle ages outperforms the mixed P. koraiensis Nakai model, suggesting that chlorophyll models using needle refinement classification help to improve model robustness. This study provides data and theoretical support for rapid and non-invasive characterization of physiological and biochemical properties of needles of different needle ages using spectral imaging techniques to predict growth and community structure productivity of forest trees in the coming years.
... These techniques allow simultaneous analysis of multiple wavenumbers or wavelengths, facilitating comprehensive biochemical analysis. The advantage of this approach is that it allows the evaluation of different sample components without the need for spectral resolution or deconvolution [36,37]. Analytically referenced calibration models provide the basis for accurate multivariate calibrations in NIR spectroscopy. ...
Article
Full-text available
Early cancer diagnosis plays a critical role in improving treatment outcomes and increasing survival rates for certain cancers. NIR spectroscopy offers a rapid and cost-effective approach to evaluate the optical properties of tissues at the microvessel level and provides valuable molecular insights. The integration of NIR spectroscopy with advanced data-driven algorithms in portable instruments has made it a cutting-edge technology for medical applications. NIR spectroscopy is a simple, non-invasive and affordable analytical tool that complements expensive imaging modalities such as functional magnetic resonance imaging, positron emission tomography and computed tomography. By examining tissue absorption, scattering, and concentrations of oxygen, water, and lipids, NIR spectroscopy can reveal inherent differences between tumor and normal tissue, often revealing specific patterns that help stratify disease. In addition, the ability of NIR spectroscopy to assess tumor blood flow, oxygenation, and oxygen metabolism provides a key paradigm for its application in cancer diagnosis. This review evaluates the effectiveness of NIR spectroscopy in the detection and characterization of disease, particularly in cancer, with or without the incorporation of chemometrics and machine learning algorithms. The report highlights the potential of NIR spectroscopy technology to significantly improve discrimination between benign and malignant tumors and accurately predict treatment outcomes. In addition, as more medical applications are studied in large patient cohorts, consistent advances in clinical implementation can be expected, making NIR spectroscopy a valuable adjunct technology for cancer therapy management. Ultimately, the integration of NIR spectroscopy into cancer diagnostics promises to improve prognosis by providing critical new insights into cancer patterns and physiology.
... The spectra of the samples were measured using an Analytical Spectral Devices (ASD) FieldSpec4 portable spectrometer, excluding the ranges of 350-400 nm and 2451-2500 nm due to moisture interference and systematic errors associated with the instrument. The resulting data, consisting of 2050 spectral bands, were processed using the Savitzky-Golay (SG) smoothing method [26] to eliminate noise. Additionally, five nonlinear transformations were applied to the original spectrum, namely reciprocal (1/R), reciprocal logarithmic (ln(1/R)), square (R²), cubic (R³), and square root (√R), to further explore the indicative signatures. ...
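A minimal sketch of these five transformations applied to an SG-smoothed spectrum follows; the synthetic spectrum, window length, and polynomial order are all assumptions for illustration:

```python
import numpy as np
from scipy.signal import savgol_filter

# Hypothetical reflectance spectrum (values in (0, 1)), then SG smoothing.
reflectance = np.linspace(0.1, 0.6, 50) + 0.01 * np.sin(np.linspace(0, 20, 50))
r = savgol_filter(reflectance, window_length=9, polyorder=2)
r = np.clip(r, 1e-6, None)            # guard against non-positive values before log

# The five nonlinear transformations named in the text.
transforms = {
    "1/R": 1.0 / r,
    "ln(1/R)": np.log(1.0 / r),
    "R^2": r ** 2,
    "R^3": r ** 3,
    "sqrt(R)": np.sqrt(r),
}
for name, t in transforms.items():
    print(name, t.shape)
```

Each transformation reshapes the reflectance-to-property relationship differently, which is why several candidates are usually compared before model fitting.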
Article
Full-text available
How to extract the indicative signatures from the spectral data is an important issue for further retrieval based on remote sensing technique. This study provides new insight into extracting indicative signatures by identifying oblique extremum points, rather than local extremum points traditionally known as absorption points. A case study on retrieving soil organic matter (SOM) contents from the black soil region in Northeast China using spectral data revealed that the oblique extremum method can effectively identify weak absorption signatures hidden in the spectral data. Moreover, the comparison of retrieval outcomes using various indicative signature extraction methods reveals that the oblique extremum method outperforms the correlation analysis and traditional extremum methods. The experimental findings demonstrate that the radial basis function (RBF) neural network retrieval model exposes the nonlinear relationship between reflectance (or reflectance transformation results) and the SOM contents. Additionally, an improved oblique extremum method based on the second-order derivative is provided. Overall, this research presents a novel perspective on indicative signature extraction, which could potentially offer better retrieval performance than traditional methods.
... The smoothed value is computed as Y*_j = (Σ_{i=−m}^{+m} C_i · Y_{j+i}) / N, where Y*_j is the newly obtained NDVI; Y_{j+i} is the i-th NDVI value before or after Y_j in the sliding window prior to smoothing; N is the normalizing factor of the moving window of width (2m + 1); and C_i is the convolution coefficient applied to the i-th NDVI value in the smoothing window (Steinier et al., 1972). We eliminate this noise with an S-G filter, and the final constructed NDVI time series of evergreen forest, deciduous forest, farmland, IS, and water are shown in Fig. 5 (f-j). ...
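The weighted moving-window smoothing described above is just a convolution with a fixed set of Savitzky-Golay coefficients; a small sketch, where the half-width m and polynomial order are illustrative choices:

```python
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

# Y*_j = (sum_i C_i * Y_{j+i}) / N is a convolution with SG coefficients
# (scipy returns them already normalized); m and the polynomial order
# below are illustrative assumptions.
m, poly = 2, 2                         # window width 2m + 1 = 5 points
coeffs = savgol_coeffs(2 * m + 1, poly)

y = np.array([0.2, 0.25, 0.3, 0.6, 0.35, 0.3, 0.28])   # toy NDVI with a spike
j = 3
y_star = np.dot(coeffs, y[j - m : j + m + 1])           # smoothed value at index j

# Same result as scipy's filter at interior points.
print(np.isclose(y_star, savgol_filter(y, 2 * m + 1, poly)[j]))
```

For deriv=0 the coefficient set is symmetric and sums to one, so applying it as a dot product over the window reproduces the weighted-average form of the formula.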
... The experimental data were smoothed with the well-known algorithm by A. Savitzky and M.J.E. Golay (described in Ref. 37 and corrected by Steinier, Termonia, and Deltour in Ref. 38). The measurements were performed at room temperature in the angular range of 10° < 2Θ < 110° with 0.025° resolution and an exposure of 1 s. ...
Article
Full-text available
The present work focuses on the synthesis of Fe3O4 magnetite core@shell-type nanoparticles modified with three types of ligands: magnetite with activated carbon (MAC), magnetite with silica (tetraethoxysilane, TEOS, and 3-aminopropyltriethoxysilane, APTES) (MTA), and magnetite with silica, APTES and humic acids (MTAH). The MTAH sample shows greater porosity than the MTA and MAC samples. The band gap of MTAH is 4.08 eV, higher than that of MTA (2.92 eV) and MAC (2.80 eV). Rietveld quantitative phase analysis was performed and compared for all three samples. LPG sensing at room temperature shows the highest sensor response of 9.42 for MTAH, compared with 3.87 for MAC and 4.60 for MTA. This roughly twofold increase in sensor response is explained by the band gap, porosity, and particle size of the three samples. The MTAH sample also shows the lowest response and recovery times, of 9.33 and 10.78 s respectively, in comparison to the MAC and MTA samples.
... Thus, the canopy spectral reflectance at 350-1350 nm was selected to estimate the CCC in this study. Savitzky-Golay smoothing with a second-order polynomial and nine smoothing points was applied to all the canopy reflectance curves to filter the noise, and the canopy reflectance was resampled to a spectral interval of 1 nm to obtain the OS [38,39]. Finally, the first derivative spectrum (FDS) [40] and continuum removal spectrum (CRS) [41] were determined based on the OS. ...
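A sketch of this preprocessing chain, smoothing with a second-order polynomial over nine points and then resampling to a 1 nm grid; the synthetic spectrum, its 3 nm sampling, and the use of linear interpolation for resampling are assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter

# Canopy reflectance sampled every 3 nm (hypothetical), with a little noise.
wl = np.arange(400, 1351, 3.0)
refl = (0.4 + 0.1 * np.sin(wl / 50.0)
        + 0.005 * np.random.default_rng(0).standard_normal(wl.size))

# SG smoothing: second-order polynomial, nine smoothing points, as in the text.
smoothed = savgol_filter(refl, window_length=9, polyorder=2)

# Resample the smoothed spectrum to a 1 nm interval (linear interpolation
# is an assumption; the paper does not state the resampling method).
wl_1nm = np.arange(400.0, 1351.0, 1.0)
refl_1nm = np.interp(wl_1nm, wl, smoothed)

print(refl_1nm.size)
```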
Article
Full-text available
Canopy chlorophyll content (CCC) is closely related to crop nitrogen status, crop growth and productivity, detection of diseases and pests, and final yield. Thus, accurate monitoring of chlorophyll content in crops is of great significance for decision support in precision agriculture. In this study, winter wheat in the Guanzhong Plain area of the Shaanxi Province, China, was selected as the research subject to explore the feasibility of canopy spectral transformation (CST) combined with a machine learning method to estimate CCC. A hyperspectral canopy ground dataset in situ was measured to construct CCC prediction models for winter wheat over three growth seasons from 2014 to 2017. Sensitive-band reflectance (SR) and narrow-band spectral index (NSI) were established based on the original spectrum (OS) and CSTs, including the first derivative spectrum (FDS) and continuum removal spectrum (CRS). Winter wheat CCC estimation models were constructed using univariate regression, partial least squares (PLS) regression, and random forest (RF) regression based on SR and NSI. The results demonstrated the reliability of CST combined with the machine learning method to estimate winter wheat CCC. First, compared with OS-SR (683 nm), FDS-SR (630 nm) and CRS-SR (699 nm) had a larger correlation coefficient between canopy reflectance and CCC; secondly, among the parametric regression methods, the univariate regression method with CRS-NDSI as the independent variable achieved satisfactory results in estimating the CCC of winter wheat; thirdly, as a machine learning regression method, RF regression combined with multiple independent variables had the best winter wheat CCC estimation accuracy (the determination coefficient of the validation set (Rv2) was 0.88, the RMSE of the validation set (RMSEv) was 3.35 and relative prediction deviation (RPD) was 2.88). Thus, this modeling method could be used as a basic method to predict the CCC of winter wheat in the Guanzhong Plain area.
... A smoothing filter (Savitzky-Golay filter [29,30]) was applied to the elevation recording to improve the road grade calculation. The filter is applied in a window of 31 consecutive values. ...
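A sketch of this idea: smoothing a noisy elevation profile in a 31-point window and taking the Savitzky-Golay derivative directly as the road grade. The sampling interval, noise level, polynomial order, and the 3% synthetic grade are all assumptions:

```python
import numpy as np
from scipy.signal import savgol_filter

# Elevation profile sampled every metre (hypothetical); grade = d(elevation)/dx.
rng = np.random.default_rng(1)
distance = np.arange(0, 500.0)                                   # m
elevation = 0.03 * distance + rng.normal(0, 0.5, distance.size)  # 3% grade + noise

# Smooth in a 31-point window as in the text (polynomial order 2 is an
# assumption) and estimate the grade from the SG first derivative.
grade = savgol_filter(elevation, window_length=31, polyorder=2, deriv=1, delta=1.0)

print(round(float(np.median(grade)), 3))
```

Differencing the raw elevation directly would amplify the metre-scale noise; the windowed least-squares fit suppresses it before differentiation.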
... To get the instantaneous growth rate, we applied the Savitzky-Golay filter 59 to the {t_i, ln(OD600,i)} dataset. Recall that, because of the medium switch, OD600 was not continuous at t = 0. Therefore, we scaled the data in the post-shift region before calculating the growth rate, making the first OD600 after the shift (t < 5 min) equal to the value exponentially extrapolated from the pre-shift data. ...
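The two steps described, rescaling the post-shift OD600 for continuity and then taking the SG derivative of ln(OD600), can be sketched on toy data. The growth rates, sampling interval, offset, and filter settings below are invented for illustration:

```python
import numpy as np
from scipy.signal import savgol_filter

# Toy OD600 time course around a medium shift at t = 0 (all values hypothetical):
# 0.02/min growth pre-shift, 0.01/min post-shift, plus a spurious offset
# introduced by the medium switch.
t = np.arange(-60, 121, 5.0)                       # minutes
od = np.where(t < 0, 0.1 * np.exp(0.02 * t), 0.1 * np.exp(0.01 * t) * 1.15)

# Rescale post-shift data so the first post-shift point matches the value
# exponentially extrapolated from the pre-shift trend, as in the text.
pre = t < 0
slope, intercept = np.polyfit(t[pre], np.log(od[pre]), 1)
first_post = np.argmax(t >= 0)
expected = np.exp(intercept + slope * t[first_post])
od_scaled = od.copy()
od_scaled[first_post:] *= expected / od[first_post]

# Instantaneous growth rate = SG derivative of ln(OD600) (window/order assumed).
rate = savgol_filter(np.log(od_scaled), window_length=7, polyorder=3, deriv=1,
                     delta=5.0)
print(round(float(rate[2]), 3), round(float(rate[-3]), 3))
```

On this noise-free toy series the recovered rates match the pre- and post-shift growth rates away from the shift point.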
Article
Full-text available
Bacterial fitness depends on adaptability to changing environments. In rich growth medium, which is replete with amino acids, Escherichia coli primarily expresses protein synthesis machineries, which comprise ~40% of cellular proteins and are required for rapid growth. Upon transition to minimal medium, which lacks amino acids, biosynthetic enzymes are synthesized, eventually reaching ~15% of cellular proteins when growth fully resumes. We applied quantitative proteomics to analyse the timing of enzyme expression during such transitions, and established a simple positive relation between the onset time of enzyme synthesis and the fractional enzyme ‘reserve’ maintained by E. coli while growing in rich media. We devised and validated a coarse-grained kinetic model that quantitatively captures the enzyme recovery kinetics in different pathways, solely on the basis of proteomes immediately preceding the transition and well after its completion. Our model enables us to infer regulatory strategies underlying the ‘as-needed’ gene expression programme adopted by E. coli.
... For this purpose, the Savitzky-Golay transformation (Savitzky & Golay, 1964) was employed, using a localized least-squares regression of several neighboring points to determine the best-fit polynomial, which is then differentiated and evaluated at the x values (in this case, energy values). The differentiation procedure is performed by a convolution with a set of derived coefficients (Steinier et al., 1972). A Savitzky-Golay first-order smoothing normalization coupled with the first derivative of the spectral data was performed using the mdatools package (Kucheryavskiy, 2020) in RStudio (version 2021.09.0 Build 351). ...
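The point that differentiation reduces to a convolution with derived coefficients can be shown directly: the coefficient set is fixed by the window and polynomial order, and a dot product over each window yields the derivative. The window, order, and test signal below are illustrative (shown in Python rather than the R/mdatools workflow of the paper):

```python
import numpy as np
from scipy.signal import savgol_coeffs

# Savitzky-Golay differentiation as a convolution: the derivative coefficients
# depend only on the window and polynomial order (9 points, order 2 here,
# illustrative choices, not the paper's settings).
window, order = 9, 2
c1 = savgol_coeffs(window, order, deriv=1, delta=1.0, use="dot")

x = np.arange(50, dtype=float)
y = 0.5 * x + 3.0                       # a straight line: derivative is exactly 0.5

half = window // 2
deriv = np.array([np.dot(c1, y[i - half : i + half + 1])
                  for i in range(half, y.size - half)])
print(np.allclose(deriv, 0.5))
```

Because the fit is exact for any polynomial up to the chosen order, the SG derivative of a straight line recovers its slope exactly at every interior point.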
Article
Full-text available
The high demand and economic relevance of cephalopods make them prone to food fraud, including fraud related to harvest location. Therefore, there is a growing need to develop tools to unequivocally confirm their capture location. Cephalopod beaks are non-edible, making this material ideal for traceability studies as it can also be removed without loss of commodity economic value. Within this context, common octopus (Octopus vulgaris) specimens were captured in five fishing areas along the Portuguese coast. Untargeted multi-elemental TXRF analysis of the octopus beaks revealed a high abundance of Ca, Cl, K, Na, S and P, concomitant with the keratin and calcium phosphate nature of the material. We tested a suite of discrimination models on both elemental and spectral data, where the elements contributing most to discriminating capture location were typically associated with diet (As), human-related pressures (Zn, Se, Mn) or geological features (P, S, Mn, Zn). Among the six different chemometric approaches used to classify individuals to their capture location according to their beaks' element concentration, classification trees attained a classification accuracy of 76.7%, whilst reducing the number of explanatory variables for sample classification and highlighting variable importance for group discrimination. However, using X-ray spectral features of the octopus beaks further improved classification accuracy, with the highest accuracy of 87.3% found with Partial Least Squares Discriminant Analysis (PLS-DA). Ultimately, element and spectral analyses of non-edible structures such as octopus beaks can provide an important, complementary, and easily accessible means to support seafood provenance and traceability, whilst integrating anthropogenic and/or geological gradients.
Article
Full-text available
In Mycobacterium tuberculosis, Rv3806c is a membrane-bound phosphoribosyltransferase (PRTase) involved in cell wall precursor production. It catalyses pentosyl phosphate transfer from phosphoribosyl pyrophosphate to decaprenyl phosphate, to generate 5-phospho-β-ribosyl-1-phosphoryldecaprenol. Despite Rv3806c being an attractive drug target, structural and molecular mechanistic insight into this PRTase is lacking. Here we report cryogenic electron microscopy structures for Rv3806c in the donor- and acceptor-bound states. In a lipidic environment, Rv3806c is trimeric, creating a UbiA-like fold. Each protomer forms two helical bundles, which, alongside the bound lipids, are required for PRTase activity in vitro. Mutational and functional analyses reveal that decaprenyl phosphate and phosphoribosyl pyrophosphate bind the intramembrane and extramembrane cavities of Rv3806c, respectively, in a distinct manner to that of UbiA superfamily enzymes. Our data suggest a model for Rv3806c-catalysed phosphoribose transfer through an inverting mechanism. These findings provide a structural basis for cell wall precursor biosynthesis that could have potential for anti-tuberculosis drug development.
Article
Raman spectroscopy is a powerful technique widely used in analytical chemistry. However, spectral noise arising during detection can compromise the signal-to-noise ratio, undermining the accuracy and reliability of sample analyses. This limitation has driven the development of Raman spectrum denoising algorithms to extend the application of Raman spectroscopy to more complicated domains of analytical chemistry, including unknown compound identification, trace detection, single-particle sensing, ultrafast imaging, and in-depth in vivo detection. In this review, we outline the essential concepts of Raman spectroscopy, the origins of spectral noise, and noise reduction through various strategies. Furthermore, we present a comprehensive summary of Raman spectral denoising algorithms and their progress in three categories: moving-window smoothing, power spectrum estimation, and deep learning algorithms. Finally, challenges and future directions in denoising algorithms are discussed. This review serves as a valuable resource shedding light on algorithmic solutions that enhance Raman spectroscopy analysis.
Article
Background Risk stratification for ventricular arrhythmias currently relies on static measurements that fail to adequately capture dynamic interactions between arrhythmic substrate and triggers over time. We trained and internally validated a dynamic machine learning (ML) model and neural network that extracted features from longitudinally collected electrocardiograms (ECG), and used these to predict the risk of malignant ventricular arrhythmias. Methods A multicentre study in patients implanted with an implantable cardioverter-defibrillator (ICD) between 2007 and 2021 in two academic hospitals was performed. Variational autoencoders (VAEs), which combine neural networks with variational inference principles, and can learn patterns and structure in data without explicit labelling, were trained to encode the mean ECG waveforms from the limb leads into 16 variables. Supervised dynamic ML models using these latent ECG representations and clinical baseline information were trained to predict malignant ventricular arrhythmias treated by the ICD. Model performance was evaluated on a hold-out set, using time-dependent receiver operating characteristic (ROC) and calibration curves. Findings 2942 patients (61.7 ± 13.9 years, 25.5% female) were included, with a total of 32,129 ECG recordings during a mean follow-up of 43.9 ± 35.9 months. The mean time-varying area under the ROC curve for the dynamic model was 0.738 ± 0.07, compared to 0.639 ± 0.03 for a static (i.e. baseline-only model). Feature analyses indicated dynamic changes in latent ECG representations, particularly those affecting the T-wave morphology, were of highest importance for model predictions. Interpretation Dynamic ML models and neural networks effectively leverage routinely collected longitudinal ECG recordings for personalised and updated predictions of malignant ventricular arrhythmias, outperforming static models. 
Funding This publication is part of the project DEEP RISK ICD (project number 452019308) of the research programme Rubicon, which is (partly) financed by the Dutch Research Council (NWO). This research is partly funded by the Amsterdam Cardiovascular Sciences (personal grant F.V.Y.T).
Article
Full-text available
Feature selection is a difficult but highly important preliminary step for estimating the remaining useful life (RUL) of bearings. To avoid the weight-setting problem of hybrid metrics, this work conducts feature selection using a single metric. Due to noise and outliers, an existing feature selection metric used for estimating bearing RUL, called monotonicity, requires data smoothing before it can be implemented adequately. Such smoothing may remove a significant part of the meaningful information in the data. To overcome this issue, a feature selection metric based on mixture distribution analysis is proposed. Moreover, based on this new metric, a feature selection approach for bearing RUL estimation is proposed. Numerical experiments benchmarking the proposed method against the existing monotonicity metric on available real datasets highlight its effectiveness.
Article
Full-text available
The exploration of the chiral configurations of enantiomers represents a highly intriguing realm of scientific inquiry due to the distinct roles played by each enantiomer (D and L) in chemical reactions and their practical utilities. This study introduces a pioneering analytical methodology, termed fast Fourier transform capacitance voltammetry (FFT-CPV), in conjunction with principal component analysis (PCA), for the identification and quantification of the chiral forms of tartaric acid (TA), serving as a representative model system for materials exhibiting pronounced chiral characteristics. The proposed methodology relies on the principle of chirality, wherein the capacitance signal generated by the adsorption of D-TA and L-TA onto the surface of a platinum electrode (Pt-electrode) in an acidic solution is harnessed. The capacitance voltammograms were meticulously recorded under optimized experimental conditions. To compile the final dataset for the analyte, the average of the FFT capacitance voltammograms of the acidic solution (without the presence of the analyte) was subtracted from those containing the analyte. A distinct arrangement was obtained by employing PCA as a linear data transformation method, representing D-TA and L-TA in a two/three-dimensional space. The outcomes of the study reveal the successful detection of the two chiral forms of TA with a considerable degree of precision and reproducibility. Moreover, the proposed method facilitated the establishment of two linear response ranges for the concentration values of each enantiomer, spanning from 1 to 20 µM, and 50 to 500 µM. The respective detection limits were also determined to be 0.4 µM for L-TA and 1.3 µM for D-TA. These findings underscore the satisfactory sensitivity and efficiency of the proposed method in both qualitative and quantitative assessments of the chiral forms of TA.
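The data treatment described, subtracting the mean blank voltammogram and then projecting onto principal components to separate the two enantiomers, can be sketched with synthetic data. The group shapes, sizes, and noise level below are all invented for illustration, and PCA is computed via SVD rather than a chemometrics package:

```python
import numpy as np

# Synthetic stand-ins for FFT capacitance voltammograms (all data invented).
rng = np.random.default_rng(0)
baseline = rng.normal(0, 0.01, (20, 100))          # 20 blank (analyte-free) scans
d_ta = rng.normal(0, 0.01, (15, 100)) + np.sin(np.linspace(0, 3, 100))
l_ta = rng.normal(0, 0.01, (15, 100)) - np.sin(np.linspace(0, 3, 100))

# Subtract the average blank from every analyte voltammogram, as in the text.
data = np.vstack([d_ta, l_ta]) - baseline.mean(axis=0)

# PCA via SVD: centre, then project onto the top two principal components.
centred = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[:2].T                         # 2-D representation

# The two enantiomer groups fall on opposite sides of the first component.
print(np.sign(scores[:15, 0].mean()) != np.sign(scores[15:, 0].mean()))
```

With a real dataset the separation would depend on the chirality-induced capacitance differences rather than an imposed sign flip, but the linear-algebra pipeline is the same.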
Article
Ferrocene was dissolved in a water‐in‐salt solution containing a high concentration of a supporting electrolyte. Open‐circuit potential, electronic absorption spectra, and cyclic voltammetry revealed that a part of the dissolved ferrocene existed as ferrocenium ions. Spectroelectrochemical measurements and higher‐order derivative spectra revealed that the water‐in‐salt electrolyte aqueous solution is an excellent medium for electrochemical and spectroscopic measurement of redox‐active species with water insolubility.
Article
Dynamic crushing of composites is studied proposing an optimised experimental design: flat, millimetre-scale specimens reduce geometrical influences and facilitate detailed numerical studies. Crushing behaviour is investigated on a universal testing machine and a drop-tower at varying loading rates using two different force measurement techniques. The results show a decrease in the maximum stress and absorbed energy at elevated rates due to a higher extent of delamination, caused by strain-rate-induced increase of intralaminar damage property values. The experiments are simulated in an explicit finite element solver with different through-thickness discretisation, showing that a stacked-laminate model is incapable of reproducing the crushing response at high velocities. However, a discrete-ply model is capable of describing the main failure mechanisms and energy absorption for all loading rates. Further improvement of the model’s predictive capability could be achieved by adding a plasticity formulation in the ply material model.
Article
Full-text available
The objective of this research was to compare technical skills, composed of kinematic and kinetic variables, in the complex motor task of a tumble turn between 9 elite and 9 sub-elite female swimmers. The best tumble turn among three attempts was analyzed using a three-dimensional underwater protocol. A total of 37 kinematic variables were derived from a Direct Linear Transformation algorithm for 3D reconstruction, and 16 kinetic variables were measured by a piezoelectric 3D force platform. Data were analyzed by Student's t-test and effect size statistics. Pearson correlations were applied to the data of the eighteen swimmers to relate the 53 kinematic and kinetic variables to tumble-turn performance (3-m round-trip time, 3mRTT). The approach and whole-turn times were faster for elite swimmers compared to sub-elites (1.09±0.06 vs. 1.23±0.08 s, and 2.89±0.07 vs. 3.15±0.11 s), as were the horizontal speeds of the swimmers' head 1 m before the rotation (1.73±0.13 vs. 1.57±0.13 m/s), at the end of the push-off on the force platform (2.55±0.15 vs. 2.31±0.22 m/s) and 3 m after the wall (2.01±0.19 vs. 1.68±0.12 m/s). Large differences (|d| > 0.8) in favor of the elite swimmers were identified for the index of upper body extension at the beginning of the push-off, the lower limb extension index at the end of the push-off, and, among the kinetic variables, the horizontal impulse and lateralization of the push-off. Correlations for the whole group revealed a moderate to strong relationship between 6 body extension indices and 3mRTT performance. For the kinetic variables, the correlations indicated that the fastest swimmers in 3mRTT showed a large lateral impulse during placement (r=0.46), maximum horizontal force during the push-off (r=0.45) and lateralization of the push-off (r=0.44) (all p<0.05).
Elite female swimmers had higher approach and push-off speeds, were more streamlined through the contact, and showed a higher horizontal impulse and lateralization of the push-off, than their sub-elite counterparts.
Article
Full-text available
Depth-resolved functional magnetic resonance imaging (fMRI) is an emerging field growing in popularity given the potential of separating signals from different computational processes in cerebral cortex. Conventional acquisition schemes suffer from low spatial and temporal resolutions. Line-scanning methods allow depth-resolved fMRI by sacrificing spatial coverage to sample blood oxygenated level-dependent (BOLD) responses at ultra-high temporal and spatial resolution. For neuroscience applications, it is critical to be able to place the line accurately to (1) sample the right neural population and (2) target that neural population with tailored stimuli or tasks. To this end, we devised a multi-session framework where a target cortical location is selected based on anatomical and functional properties. The line is then positioned according to this information in a separate second session, and we tailor the experiment to focus on the target location. Anatomically, the precision of the line placement was confirmed by projecting a nominal representation of the acquired line back onto the surface. Functional estimates of neural selectivities in the line, as quantified by a visual population-receptive field model, resembled the target selectivities well for most subjects. This functional precision was quantified in detail by estimating the distance between the visual field location of the targeted vertex and the location in visual cortex (V1) that most closely resembled the line-scanning estimates; this distance was on average ~5.5 mm. Given the dimensions of the line, differences in acquisition, session, and stimulus design, this validates that line-scanning can be used to probe local neural sensitivities across sessions. In summary, we present an accurate framework for line-scanning MRI; we believe such a framework is required to harness the full potential of line-scanning and maximize its utility. 
Furthermore, this approach bridges canonical fMRI experiments with electrophysiological experiments, which in turn allows novel avenues for studying human physiology non-invasively.
Article
Full-text available
We study, for the first time, the physical coupling and detectability of meteotsunamis in the earth’s atmosphere. We study the June 13, 2013 event off the US East Coast using Global Navigation Satellite System (GNSS) radio occultation (RO) measurements, Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) temperatures, and ground-based GNSS ionospheric total electron content (TEC) observations. Hypothesizing that meteotsunamis also generate gravity waves (GWs), similar to tsunamigenic earthquakes, we use linear GW theory to trace their dynamic coupling in the atmosphere by comparing theory with observations. We find that RO data exhibit distinct stratospheric GW activity at near-field that is captured by SABER data in the mesosphere with increased vertical wavelength. Ground-based GNSS-TEC data also detect a far-field ionospheric response 9 h later, as expected by GW theory. We conclude that RO measurements could increase understanding of meteotsunamis and how they couple with the earth’s atmosphere, augmenting ground-based GNSS TEC observations.
Article
Traditional mechanical models and sensors face challenges in obtaining the dynamometer diagram of the sucker rod pump system (SRPS) due to difficulties in model solving, high application costs, and maintenance difficulties. Since the electric motor powers the SRPS, its power output is highly correlated with the working state of the entire device. Therefore, a hybrid method based on electric motor power and SRPS mechanical parameter prediction is proposed to predict the dynamometer diagram. First, a long short-term memory neural network (LSTM) is used to establish the LSTM-L model for predicting the dynamometer load based on electric motor power. Then, a mathematical and physical calculation model (FLM-D) of the dynamometer diagram displacement at the hanging point is constructed by combining the four-bar linkage structure of the sucker rod pump. Finally, experimental production data from oil wells are collected through an edge computing device to verify the prediction performance of the LSTM-L&FLM-D hybrid model. Experimental results show that the proposed LSTM-L&FLM-D model achieves a high fitting degree of 99.3%, is more robust than the other models considered in this study, and exhibits better generalization ability.
Article
Full-text available
The mafic-ultramafic cumulate rocks and mantle peridotites of an ophiolite are excellent recorders of the geodynamic evolution of a fossil subduction zone. The spectroradiometric absorption features of olivine, orthopyroxene, clinopyroxene and serpentine, which are the dominant constituents of these ultramafic rocks, have emerged as essential tools for their identification. This study uses six different kinds of ultramafic rock samples: spinel lherzolite, spinel harzburgite, wehrlite, orthopyroxenite, olivine websterite, and dunite compositions from the Nagaland Ophiolite Complex, NE India. These are investigated with integrated petrographic, mineral and bulk rock compositional and reflectance spectroscopic studies to identify the constituent minerals. The study indicates that the most dominant olivine absorption features vary from 1038 to 1049 nm in Ni-poor ultramafic cumulates of wehrlite (Mg# = 89, Ni = 355 ppm) and olivine websterite (Mg# = 90.5, Ni = 361 ppm) to 1066-1097 nm in nickeliferous mantle peridotites of dunite (Mg# = 87, Ni = 3658 ppm) and spinel lherzolite (Mg# = 87, Ni = 2708 ppm) compositions. Reversal of the 625 nm reflectance maximum in olivine occurs for Ni concentrations in the mineral between 2200 and 2955 ppm. The low-Ca pyroxene (cf. orthopyroxene) species show characteristic absorption features at 902-926 nm in lherzolite and harzburgite samples and 918 nm in monomineralic orthopyroxenite. High-Ca pyroxene (cf. clinopyroxene) in most rocks records absorption features at 643-666 and 709-811 nm. The study also recognises major absorption bands at 1382-1400, 1913-1948, and 2304-2326 nm as indicative of secondary hydrous minerals formed by alteration in the investigated rocks. The significance of major (Mg-Fe) and minor (Mn, Ni and Cr) element compositions of these ultramafic rocks in explaining specific spectral signatures in olivine, orthopyroxene and clinopyroxene is discussed.
The findings are expected to aid the interpretation of remotely sensed data from terrestrial and extra-terrestrial bodies, mapping ultramafic cumulate and mantle peridotite rocks in remote, covered, and unmapped terranes of ophiolitic origin, and in the process, identification of fossil oceanic subduction systems.
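Band positions such as the 1038-1097 nm olivine features quoted above are typically located by removing a continuum from the reflectance spectrum and then finding the wavelength of the deepest absorption. A minimal sketch on a synthetic spectrum follows; the band shape, depth, and wavelength grid are assumptions for illustration, not measured data from this study.

```python
import numpy as np

# Synthetic reflectance spectrum: flat continuum with a Gaussian
# absorption band centred near 1040 nm (illustrative values only).
wl = np.arange(800, 1300, 2.0)                 # wavelength grid, nm
true_centre = 1040.0
refl = 0.55 - 0.20 * np.exp(-((wl - true_centre) / 60.0) ** 2)

# Straight-line continuum between the band shoulders, then continuum removal.
continuum = np.interp(wl, [wl[0], wl[-1]], [refl[0], refl[-1]])
cr = refl / continuum                          # continuum-removed spectrum

band_centre = wl[np.argmin(cr)]                # wavelength of deepest absorption
```

In practice the continuum endpoints are chosen at the local reflectance maxima bracketing the band, and the minimum is often refined by fitting a polynomial or Gaussian around the deepest channel.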
Article
Full-text available
Pair distribution function (PDF) analysis is a powerful technique to understand atomic scale structure in materials science. Unlike X-ray diffraction (XRD)-based PDF analysis, the PDF calculated from electron diffraction patterns (EDPs) using transmission electron microscopy can provide structural information from specific locations with high spatial resolution. The present work describes a new software tool for both periodic and amorphous structures that addresses several practical challenges in calculating the PDF from EDPs. The key features of this program include accurate background subtraction using a nonlinear iterative peak-clipping algorithm and automatic conversion of various types of diffraction intensity profiles into a PDF without requiring external software. The present study also evaluates the effect of background subtraction and the elliptical distortion of EDPs on PDF profiles. The EDP2PDF software is offered as a reliable tool to analyse the atomic structure of crystalline and non-crystalline materials.
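The nonlinear iterative peak-clipping background subtraction mentioned above can be sketched in a few lines: on each pass, every point is replaced by the minimum of itself and the average of its two neighbours a growing distance away, which clips sharp peaks while preserving a slowly varying background. The following is a minimal SNIP-style sketch on synthetic data, not the EDP2PDF implementation itself.

```python
import numpy as np

def peak_clip_background(y, iterations=40):
    """Estimate a smooth background under a 1-D diffraction profile by
    nonlinear iterative peak clipping (SNIP-style sketch)."""
    bg = y.astype(float).copy()
    for p in range(1, iterations + 1):
        clipped = bg.copy()
        # Average of the two points p channels away; peaks get clipped down,
        # while a locally linear background is left unchanged.
        clipped[p:-p] = np.minimum(bg[p:-p], 0.5 * (bg[:-2 * p] + bg[2 * p:]))
        bg = clipped
    return bg

# Synthetic profile: slowly varying background plus two sharp peaks.
x = np.linspace(0, 10, 500)
background = 5.0 + 0.3 * x
signal = (background
          + 8 * np.exp(-((x - 3) / 0.08) ** 2)
          + 5 * np.exp(-((x - 7) / 0.10) ** 2))

est = peak_clip_background(signal, iterations=60)
net = signal - est          # background-subtracted profile
```

The number of iterations sets the widest feature that gets clipped, so it should exceed the half-width of the broadest peak (in channels) while staying below the scale on which the true background varies.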
Article
Full-text available
The content of nicotine, a critical component of tobacco, significantly influences the quality of tobacco leaves. Near-infrared (NIR) spectroscopy is a widely used technique for rapid, non-destructive, and environmentally friendly analysis of nicotine levels in tobacco. In this paper, we propose a novel regression model, a lightweight one-dimensional convolutional neural network (1D-CNN), for predicting nicotine content in tobacco leaves from 1D NIR spectral data. This study employed Savitzky-Golay (SG) smoothing to preprocess the NIR spectra and randomly generated representative training and test datasets. Batch normalization was used for network regularization to reduce overfitting and improve the generalization of the Lightweight 1D-CNN model on a limited training dataset. The network consists of four convolutional layers that extract high-level features from the input data; their output is fed into a fully connected layer with a linear activation function that outputs the predicted nicotine value. Comparing multiple regression models, including support vector regression (SVR), partial least squares regression (PLSR), 1D-CNN, and Lightweight 1D-CNN, all with SG-smoothing preprocessing, we found that the Lightweight 1D-CNN regression model with batch normalization achieved a root mean square error (RMSE) of 0.14, a coefficient of determination (R²) of 0.95, and a residual prediction deviation (RPD) of 5.09. These results demonstrate that the Lightweight 1D-CNN model is objective, robust, and more accurate than the existing methods, and it has the potential to significantly improve quality control in the tobacco industry by analyzing nicotine content accurately and rapidly.
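The Savitzky-Golay preprocessing step used here fits a low-order polynomial in a sliding window and is available directly in SciPy. A minimal sketch on a synthetic spectrum follows; the window length, polynomial order, noise level, and band positions are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic NIR-like spectrum: smooth band structure plus noise
# (wavelength range and band positions are illustrative only).
rng = np.random.default_rng(0)
wn = np.linspace(1000, 2500, 600)           # wavelength axis, nm
clean = (np.exp(-((wn - 1700) / 120.0) ** 2)
         + 0.5 * np.exp(-((wn - 2200) / 80.0) ** 2))
noisy = clean + rng.normal(0.0, 0.02, wn.size)

# SG smoothing: window_length must be odd and larger than polyorder.
smoothed = savgol_filter(noisy, window_length=15, polyorder=3)

rmse_before = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_after = np.sqrt(np.mean((smoothed - clean) ** 2))
```

A wider window suppresses more noise but starts to flatten narrow bands, so the window length is usually tuned against the narrowest spectral feature of interest.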
Article
Reliable origin certification methods are essential for protecting high-value genuine medicinal materials with designated origins and geographical indication (GI) products. Aconiti Lateralis Radix Praeparata (Fuzi), a well-known traditional Chinese medicine and GI product, has remarkable efficacy and wide clinical application, with high demand in domestic and international markets. The efficacy and price of Fuzi from different origins vary, and it is difficult for the general public to identify them accurately through traditional experience. Mass spectrometry detection based on plant metabolomics requires tedious and lengthy sample preparation, is complicated to operate, takes a long time, and has low reproducibility. As a sophisticated, green, fast, and low-loss detection technique, infrared spectroscopy combined with machine learning offers new ways to regulate and control the quality of traditional Chinese medicines. An analytical method based on mid-infrared spectroscopy combined with a random forest algorithm was developed to verify the geographical origin of authentic herbs and/or GI products. The method successfully predicted and classified three varieties of Chinese GI Fuzi and four varieties of non-GI Fuzi. In this study, an environmentally friendly traceability strategy with fast analysis, low sample loss, and high precision provides a new approach to identifying the origin of Fuzi.
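The mid-infrared-plus-random-forest classification step can be sketched with scikit-learn: each spectrum is a feature vector and the origin is the class label. The toy spectra below, with one characteristic band per hypothetical origin, are purely illustrative assumptions, not Fuzi data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for mid-IR spectra from two hypothetical origins: each class
# has a characteristic band position, plus random noise.
rng = np.random.default_rng(1)
wn = np.linspace(400, 4000, 200)            # wavenumber axis, cm^-1

def make_spectra(centre, n):
    """n noisy spectra sharing one Gaussian band centred at `centre`."""
    base = np.exp(-((wn - centre) / 150.0) ** 2)
    return base + rng.normal(0.0, 0.05, (n, wn.size))

X = np.vstack([make_spectra(1600, 30), make_spectra(2900, 30)])
y = np.array([0] * 30 + [1] * 30)           # 0 = origin A, 1 = origin B

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)
```

In a real certification workflow the model would be validated on spectra from samples held out of training, typically with cross-validation across collection batches.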