Article

# Support Vector Networks


## Abstract

The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensure high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.
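The abstract's central idea, a non-linear map to a high-dimensional feature space where a linear surface suffices, can be illustrated with the degree-2 polynomial case. The sketch below (function names `phi` and `dot` are ours, purely illustrative, not the paper's construction) shows an explicit quadratic feature map whose feature-space inner product equals the polynomial kernel (x·z)² evaluated directly in input space, which is what keeps high-dimensional feature spaces computationally tractable.

```python
import math

def phi(x):
    """Explicit degree-2 feature map for a 2-D input x = (x1, x2):
    phi(x) . phi(z) equals the polynomial kernel (x . z)**2."""
    x1, x2 = x
    return [x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x, z = [1.0, 2.0], [3.0, -1.0]
in_feature_space = dot(phi(x), phi(z))  # inner product after the mapping
via_kernel = dot(x, z) ** 2             # same value, computed in input space
```

Both quantities agree, so the decision surface can be trained using only input-space kernel evaluations, never materializing the feature space.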


... Over the past decades, the support vector machine (SVM) introduced by Vapnik [1,2] has become a popular machine learning tool for classification problems because of its solid mathematical foundation and excellent performance on many real-world applications such as face recognition, text categorization, and bioinformatics [3][4][5]. The classical SVM finds an optimal hyperplane separating the positive and negative classes of input vectors with maximum margin [1,2]. This leads to solving a convex quadratic programming problem (QPP) [2,6] consisting of regularization and misclassification error terms, where the error is computed using the hinge loss. ...
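The max-margin objective with hinge loss described in the snippet above is normally solved as a QPP; as an illustrative stand-in, the toy sketch below minimizes the same soft-margin objective by stochastic sub-gradient descent. Function names, hyper-parameters, and the dataset are our own assumptions, not taken from the cited work.

```python
import random

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200, seed=0):
    """Stochastic sub-gradient descent on the soft-margin objective
    0.5*||w||^2 + C * sum_i max(0, 1 - y_i * (w.x_i + b))."""
    random.seed(seed)
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for i in random.sample(range(n), n):      # one pass in random order
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:                        # hinge term is active
                w = [wj - lr * (wj - C * y[i] * xj) for wj, xj in zip(w, X[i])]
                b += lr * C * y[i]
            else:                                 # only the regularizer acts
                w = [wj - lr * wj for wj in w]
    return w, b

def predict(w, b, x):
    score = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if score >= 0 else -1

# Two linearly separable clusters with labels +1 / -1
X = [[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],
     [-1.0, -1.0], [-1.5, -2.0], [-2.0, -1.5]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_linear_svm(X, y)
```

On separable data like this, the learned hyperplane classifies all training points correctly; a QP solver would additionally recover the exact maximum-margin solution.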
Article
In this paper, a novel robust $L_1$-norm based twin bounded support vector machine with pinball loss, having regularization, scatter loss, and misclassification loss terms, is proposed to enhance robustness in the presence of feature noise and outliers. Unlike in the twin bounded support vector machine (TBSVM), pinball loss is used as the misclassification loss in place of hinge loss to reduce noise sensitivity. To further boost robustness, the scatter loss of the class of vectors is minimized using the $L_1$-norm. As an equivalent problem in simple form, a pair of quadratic programming problems (QPPs) is constructed (L1-Pin-TBSVM) with m variables, where m is the number of training vectors. Unlike TBSVM, the proposed L1-Pin-TBSVM is free of the inverse kernel matrix, and the non-linear problem can be obtained directly from its linear formulation by applying the kernel trick. The efficacy and robustness of L1-Pin-TBSVM have been demonstrated by experiments performed on synthetic and UCI datasets in the presence of noise.
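The hinge-versus-pinball distinction the abstract relies on fits in a few lines. In the sketch below, u = 1 - y*f(x); the pinball form follows the common quantile-loss definition, which is an assumption on our part, as the cited paper's exact parameterisation may differ.

```python
def hinge_loss(u):
    """Hinge loss on u = 1 - y*f(x): zero once the margin exceeds 1."""
    return max(0.0, u)

def pinball_loss(u, tau=0.5):
    """Pinball (quantile-style) loss on u = 1 - y*f(x). Unlike hinge, it
    also charges -tau*u when u < 0, i.e. for points classified with a
    margin greater than 1; spreading the penalty over well-classified
    points is what dampens sensitivity to noise near the boundary.
    (Common definition; the paper's parameterisation may differ.)"""
    return u if u >= 0 else -tau * u
```

A point far inside its own class (u = -2) costs nothing under hinge loss but still contributes under pinball loss, so the fitted hyperplane depends less on the few points nearest the boundary.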
... In 1986, David Rumelhart, Geoff Hinton and Ronald J. Williams popularised the method of backpropagation for training neural networks [114] (continuous backpropagation was initially derived by Henry J. Kelley [59] in the context of control theory). A number of more powerful supervised machine learning algorithms, including random forest [9] and support vector machines (SVMs) [22], were published in the 1990s (the original linear support vector machines were invented and published by Vladimir Vapnik and Alexey Chervonenkis in the early 1960s [133]). In 1997, Sepp ...
... Platt's method, also known as "Platt scaling" or "Platt calibration", was designed to learn a function mapping the classification scores produced by an SVM classifier to a probability distribution over classes. The method (described by Platt in [109]) was invented to address a specific disadvantage of a very popular classification algorithm, support vector machines [22], which produced accurate assignments of objects to classes but required further calibration of the class scores to align them with true class probabilities. Support vector machines (SVMs) do not produce class probabilities directly, but instead output uncalibrated classification scores that reflect the distance to the maximum-margin hyperplane. ...
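Platt scaling as described above can be sketched compactly: fit a sigmoid P(y=1|s) = 1/(1 + exp(A*s + B)) to the raw SVM scores. The toy below uses plain gradient descent on the negative log-likelihood with hard 0/1 labels; Platt's actual procedure uses smoothed targets and a second-order optimizer, so treat the learning rate, epoch count, and data here as our assumptions.

```python
import math

def platt_scale(scores, labels, lr=0.01, epochs=2000):
    """Fit P(y=1|s) = 1 / (1 + exp(A*s + B)) by gradient descent on the
    negative log-likelihood. Labels are plain 0/1 here; Platt's original
    method uses smoothed targets and a Levenberg-Marquardt style fit."""
    A, B = 0.0, 0.0
    for _ in range(epochs):
        gA = gB = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(A * s + B))
            gA -= (p - y) * s    # d(NLL)/dA
            gB -= (p - y)        # d(NLL)/dB
        A -= lr * gA
        B -= lr * gB
    return A, B

# Toy uncalibrated SVM scores (signed distance to the hyperplane) and labels
scores = [-2.0, -1.0, 1.0, 2.0]
labels = [0, 0, 1, 1]
A, B = platt_scale(scores, labels)

def prob(s):
    """Calibrated probability of the positive class for raw score s."""
    return 1.0 / (1.0 + math.exp(A * s + B))
```

After fitting, large positive scores map to probabilities near 1 and large negative scores to probabilities near 0, which is exactly the calibration step the snippet describes.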
Thesis
Prediction is the key objective of many machine learning applications. Accurate, reliable and robust predictions are essential for optimal and fair decisions by downstream components of artificial intelligence systems, especially in high-stakes applications such as personalised health, self-driving cars, finance, new drug development, and forecasting of election outcomes and pandemics. Many modern machine-learning algorithms output overconfident predictions, resulting in incorrect decisions and technology acceptance issues. Classical calibration methods rely on artificial assumptions and often result in overfitting, whilst modern calibration methods attempt to solve calibration issues by modifying components of black-box deep-learning systems. While this provides a partial solution, such modifications do not provide mathematical guarantees of prediction validity, are intrusive, complex, and costly to implement. This thesis introduces novel methods for producing well-calibrated probabilistic predictions for machine learning classification and regression problems. A new method for multi-class classification problems is developed and compared to traditional calibration approaches. In the regression setting, the thesis develops novel methods for probabilistic regression to derive predictive distribution functions that are valid under a nonparametric IID assumption in terms of guaranteed coverage, and that contain more information than classical conformal prediction methods whilst improving computational efficiency. Experimental studies of the methods introduced in this thesis demonstrate advantages over the state of the art. The main advantage of split conformal predictive systems is their guaranteed validity, whilst cross-conformal predictive systems enjoy higher predictive efficiency and empirical validity in the absence of excess randomisation.
... In the support vector machine (SVM) algorithm the X block is non-linearly mapped into a high-dimensional feature space using a specific kernel function (Cortes and Vapnik, 1995). A linear decision surface (hyperplane) is then constructed in this feature space to solve the regression or classification problem. ...
... The slack variables ξ_i and ξ_i* are used as limits to determine the support vectors (Üstün et al., 2005). Comprehensive accounts of SVM can be found in Cortes and Vapnik (1995) and Üstün et al. (2005). Throughout this process, the parameters ε, C and γ must be optimized. ...
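The three parameters named in the snippet can be made concrete with a minimal sketch: γ (gamma) shapes the RBF kernel, ε defines the tube within which SVR residuals cost nothing (the slack variables ξ measure excursions beyond it), and C weights how heavily that slack is penalised. The parameter values below are placeholders.

```python
import math

def rbf_kernel(x, z, gamma=0.5):
    """Gaussian (RBF) kernel: k(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def eps_insensitive(residual, eps=0.1):
    """Epsilon-insensitive loss used by SVR: residuals inside the
    eps-tube cost nothing; outside it, the slack grows linearly
    (and is weighted by C in the full objective)."""
    return max(0.0, abs(residual) - eps)
```

Identical inputs give a kernel value of exactly 1, and a residual of 0.3 with ε = 0.1 leaves a slack of 0.2 to be penalised.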
Article
Precision and sustainable agriculture requires information about soil pH and organic matter (OM) content at higher spatial and temporal scales than current agronomic sampling and analytical methods allow. This study examined the accuracy of spectral models using high throughput screening (HTS) in diffuse reflectance mode in mid-infrared (MIR)/DRIFT combined with machine learning algorithms to predict soil pH(CaCl2) and %OM in shallow and deeper topsoils compared to laboratory methods. Models were developed from an archive of samples taken on a 4 km² grid from the northern half of Ireland (Terra Soil project), which includes 18,859 samples (9,396 shallow + 9,463 deeper). The application of Cubist models showed that the different depths have minor differences in spectral group associations with pH and %OM values. These differences resulted in a loss of accuracy when extrapolating the topsoil model to predict values from deeper topsoils, or vice versa. Therefore, we recommend the use of samples from both depths to build a calibration model. The proposed methodology was able to determine %OM and pH using a single multivariate regression model for both depths, with RMSEP values of 1.12 and 0.89%; RPIQ values of 42.34 and 38.48; and R²val of 0.9989 and 0.9993 for %OM determinations in shallow and deeper topsoils, respectively. For pH determinations the RMSEP values obtained were 0.25 and 0.34; RPIQ values 6.04 and 4.94; R²val 0.9385 and 0.8954. Both regression models are classified as excellent prediction models, yielding RPIQ values >4.05 for shallow and deeper topsoils. The results demonstrated the high potential of HTS-DRIFT combined with machine learning algorithms as a rapid, accurate, and cost-effective method to build large soil spectral libraries, displaying predicted results similar to two separate soil laboratory methods (pH and LOI).
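The RPIQ values reported above are the ratio of the interquartile range of the reference values to the RMSEP, so larger is better. A minimal sketch (the quartile interpolation convention is an assumption; here the "exclusive" method of Python's `statistics.quantiles`):

```python
from statistics import quantiles

def rpiq(y_reference, rmsep):
    """RPIQ = IQR of the reference values divided by the RMSEP.
    Quartiles via statistics.quantiles (exclusive interpolation),
    one of several common conventions."""
    q1, _, q3 = quantiles(y_reference, n=4)
    return (q3 - q1) / rmsep
```

For reference values 1..8 the IQR is 4.5 under this convention, so an RMSEP of 1.5 gives RPIQ = 3.0.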
... Support vector machine (SVM) is a popular supervised learning method, which is widely used for classification. The linear SVM problem can be expressed as [7,38] min ...
... Here, the second inequality follows from the Lipschitz continuity of Θ over {y^k}_{k∈ℕ}, the third inequality is from (7), and the equality is due to the definition of m. Since γ > 0 is taken arbitrarily, we have shown Θ(y^k) − Θ̄ → 0 as k → ∞. ...
Preprint
Nonsmooth optimization finds wide applications in many engineering fields. In this work, we propose to utilize the Randomized Coordinate Subgradient method (RCS) for solving both nonsmooth convex and nonsmooth nonconvex (nonsmooth weakly convex) optimization problems. At each iteration, RCS randomly selects one block coordinate rather than all the coordinates to update. Motivated by practical applications, we consider a linearly bounded subgradients assumption for the objective function, which is much more general than the Lipschitz continuity assumption. Under this general assumption, we conduct a thorough convergence analysis of RCS in both the convex and nonconvex cases and establish both expected convergence rates and almost sure asymptotic convergence results. To derive these convergence results, we establish a convergence lemma and the relationship between the global metric subregularity properties of a weakly convex function and its Moreau envelope, which are fundamental and of independent interest. Finally, we conduct several experiments to show the possible superiority of RCS over the subgradient method.
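The per-iteration rule the abstract describes, pick one coordinate at random and take a subgradient step on it alone, can be sketched on a toy nonsmooth objective. The objective, constant step size, and iteration budget below are our assumptions for illustration, not the paper's block structure or step-size schedule.

```python
import random

def rcs_minimize(subgrad_i, x0, steps=5000, lr=0.01, seed=1):
    """Randomized coordinate subgradient sketch: each iteration picks ONE
    coordinate uniformly at random and takes a subgradient step on it,
    instead of updating every coordinate as the full subgradient method does."""
    random.seed(seed)
    x = list(x0)
    for _ in range(steps):
        i = random.randrange(len(x))
        x[i] -= lr * subgrad_i(x, i)
    return x

# Toy nonsmooth convex objective f(x) = sum_i |x_i - c_i|, minimized at x = c
c = [1.0, -2.0, 0.5]

def subgrad_i(x, i):
    d = x[i] - c[i]
    return (d > 0) - (d < 0)   # a valid subgradient of |x_i - c_i|

x_opt = rcs_minimize(subgrad_i, [0.0, 0.0, 0.0])
```

With a constant step size the iterates settle into a small oscillation around the minimizer, which is the typical behaviour of subgradient schemes on nonsmooth objectives.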
... (Abdou & Pointon, 2011). PCC (classification indicator or accuracy) is a standard for assessing discrimination (Cortes & Vapnik, 1995). Sensitivity refers to the ratio between a correctly classified positive borrower and the total number of positive borrowers, and specificity refers to the ratio between a correctly classified negative borrower and the total number of negative borrowers. ...
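PCC, sensitivity, and specificity as defined in the snippet above follow directly from the confusion matrix. A minimal sketch, assuming the encoding 1 = positive borrower and 0 = negative borrower:

```python
def confusion_metrics(y_true, y_pred):
    """PCC (accuracy), sensitivity and specificity for binary labels,
    with 1 = positive borrower and 0 = negative (assumed encoding)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    pcc = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)      # correctly classified positives
    specificity = tn / (tn + fp)      # correctly classified negatives
    return pcc, sensitivity, specificity

pcc, sens, spec = confusion_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

In this toy example two of three positives and one of two negatives are classified correctly, so PCC = 0.6, sensitivity = 2/3, and specificity = 0.5.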
Conference Paper
Healthcare systems face challenges posed by internal and external factors, including economic crises and pandemic outbreaks. In the COVID-19 pandemic, particular attention was focused on financing activities to mitigate its impact, strengthening emergency medicine and healthcare human and technical resources. At the same time, it has become noticeable that the proposed solutions and significant financial resources are not sufficient or appropriate for resolving potential and existing challenges. Consequently, the question arises as to whether there are any hidden obstructions in the healthcare system itself that hinder its operation. Therefore, this research aimed to identify areas in which in-depth research would contribute to the performance of healthcare systems, alongside the allocation of funding. To achieve the goal of the study, a systematic literature review was conducted to identify the views of scientists on healthcare systems' operation during the COVID-19 spread period. The results of the literature review indicated several obstructions and potentials of healthcare systems. Dominant attention is dedicated to shifting from hierarchical structural governance principles to a people-centered approach: towards collaborative, inclusive, and participative practices of engagement and involvement of healthcare professionals and patients; moving from fragmentation to adaptive self-organization; and creating well-integrated, equitable, and prosperous societies by redesigning health policy thinking and respecting the principles of democracy and human rights. Further theoretical research could strengthen the practical implementation of more sustainable and inclusive healthcare systems.
... Different variants of the SVM algorithm have been proposed by different authors in recent times. Although originally proposed in (Cortes and Vapnik 1995) for classification problems, SVMs are capable of handling regression-based problems, including function fitting, time-series prediction, and nonlinear modelling (Wong and Hsu 2006; Ahmad et al. 2014). ...
Article
The process of material discovery and design can be simplified and accelerated if we can effectively learn from existing data. In this study, we explore the use of machine learning techniques to learn the relationship between the structural properties of pyrochlore compounds and their lattice constants. We propose support vector regression (SVR) and artificial neural network (ANN) models to predict the lattice constants of pyrochlore materials. Our study revealed that the lattice constants of pyrochlore compounds, generically represented as A₂B₂O₇ (A and B cations), can be effectively predicted from the ionic radii and electronegativity data of the constituent elements. Furthermore, we compared the accuracy of our ANN and SVR models with an existing linear model in the literature. The analysis revealed that the SVR model exhibits better accuracy, with a correlation coefficient of 99.34 percent against the experimental data. Therefore, the proposed SVR model provides an avenue toward a precise estimation of the lattice constants of pyrochlore compounds.
... As synthetic screening experiments are considered limited in their ability to study the overall relationship between different parts of gene regulatory structures and co-regulation, deep convolutional neural networks (DCNNs) were used to predict gene expression levels from natural DNA sequences (Agarwal and Shendure, 2020; Zrimec et al., 2020). The support-vector network (SVM) is a two-group classification model, whose basic form is a linear classifier with maximum margin on the feature space; its learning strategy is margin maximization, which can ultimately be translated into the solution of a convex quadratic programming problem (Cortes and Vapnik, 1995). Lin et al. (2020) proposed a new ML pipeline using the support-vector regression (SVR) algorithm, which is the regression version of SVM. ...
Article
Drug repurposing is of interest for therapeutics innovation in many human diseases including coronavirus disease 2019 (COVID-19). Methodological innovations in drug repurposing are currently being empowered by convergence of omics systems science and digital transformation of life sciences. This expert review article offers a systematic summary of the application of artificial intelligence (AI), particularly machine learning (ML), to drug repurposing and classifies and introduces the common clustering, dimensionality reduction, and other methods. We highlight, as a present-day high-profile example, the involvement of AI/ML-based drug discovery in the COVID-19 pandemic and discuss the collection and sharing of diverse data types, and the possible futures awaiting drug repurposing in an era of AI/ML and digital technologies. The article provides new insights on convergence of multi-omics and AI-based drug repurposing. We conclude with reflections on the various pathways to expedite innovation in drug development through drug repurposing for prompt responses to the current COVID-19 pandemic and future ecological crises in the 21st century.
... The SVM [28,29] is a popular ML approach because it produces significant accuracy while requiring low computing costs. The goal of the SVM is to find a hyperplane in N-dimensional space (N: number of input features) that clearly classifies the data by maximizing the distance between data points of both classes. ...
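The distance being maximized has a closed form: a point x lies at |w·x + b| / ||w|| from the hyperplane w·x + b = 0, and the SVM picks w and b so that the smallest such distance over the training set is as large as possible. A small sketch (function name is ours):

```python
import math

def margin_distance(w, b, x):
    """Geometric distance from point x to the hyperplane w.x + b = 0.
    The SVM chooses w, b to maximize the smallest such distance over
    the training set."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return abs(sum(wi * xi for wi, xi in zip(w, x)) + b) / norm
```

For example, with w = (3, 4) and b = 0, the point (3, 4) lies at |9 + 16| / 5 = 5 from the hyperplane.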
Article
Postural sway indicates controlling stability in response to standing balance perturbations and determines the risk of falling. Assessing balance and postural sway requires costly laboratory equipment, making it impractical for clinical settings. The study aimed to develop a triaxial inertial sensor and apply machine learning (ML) algorithms for predicting the trajectory of the center of pressure (COP) path of postural sway. Fifty-three healthy adults, with a mean age of 46 years, participated. The inertial sensor prototype was investigated for its concurrent validity relative to the COP path length obtained from the force platform measurement. Then, ML was applied to predict the COP path by using sensor-sway metrics as the input. The results of the study revealed that all variables from the sensor prototype demonstrated high concurrent validity against the COP path from the force platform measurement (ρ > 0.75; p < 0.001). The sway metrics derived from the sensor and ML algorithms illustrated good to excellent agreement (ICC 0.89–0.95) with the COP paths from the force plate measurement. This study demonstrated that the inertial sensor, in comparison to the standard tool, would be an option for balance assessment since it is low-cost, conveniently portable, and comparable in accuracy to standard force platform measurement.
... A classification algorithm is necessary to perform the IFS method. In this study, four different classification algorithms were selected, namely, RF [39], kNN [42], support vector machine (SVM) [43], and decision tree (DT) [44]. We compared the performances of the classifiers generated based on these algorithms in IFS. ...
Article
Background: COVID-19 displays an increased mortality rate and a higher risk of severe symptoms with increasing age, which is thought to be a result of the compromised immunity of elderly patients. However, the underlying mechanisms of aging-associated immunodeficiency against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) remain unclear. Epigenetic modifications show considerable changes with age, causing altered gene regulation and cell function during the aging process. The DNA methylation patterns of patients with coronavirus disease 2019 (COVID-19) of different ages were compared to explore the effect of aging-associated methylation modifications on SARS-CoV-2 infection. Methods: Patients with COVID-19 were divided into three groups according to age. Boruta was used on the DNA methylation profiles of the patients to remove irrelevant features and retain essential signature sites, identifying substantial aging-associated DNA methylation changes in COVID-19. Next, these features were ranked using the minimum redundancy maximum relevance (mRMR) method, and the feature list generated by mRMR was processed by the incremental feature selection method with decision tree (DT), random forest, k-nearest neighbor, and support vector machine classifiers to obtain the key methylation sites, optimal classifier, and decision rules. Results: Several key methylation sites showing distinct patterns among patients with COVID-19 of different ages were identified, and these methylation modifications may play crucial roles in regulating immune cell functions. An optimal classifier was built based on the selected methylation signatures, which can be useful for predicting the aging-associated disease risk of COVID-19.
Conclusions: Existing works and our predictions suggest that the methylation modifications of genes such as NHLH2, ZEB2, NWD1, ELOVL2, FGGY, and FHL2 are closely associated with age in patients with COVID-19, and the 39 decision rules extracted with the optimal DT classifier provide quantitative context for the methylation modifications in elderly patients with COVID-19. Our findings contribute to the understanding of the epigenetic regulation of aging-associated COVID-19 symptoms and provide potential methylation targets for intervention strategies in elderly patients.
... In machine learning, SVMs are highly effective models with the ability to solve non-linear problems using less training data [326]. Vapnik et al. [327] developed this popular ML algorithm in 1995. SVMs can be employed for both regression and classification problems. ...
Article
The central task of electric power companies is load forecasting. Decision-makers and think tanks of the power sector should forecast the future demand for electricity with high accuracy and small error to provide consumers with uninterrupted, load-shedding-free power. The demand for electricity can be forecasted by many Machine Learning (ML), Deep Learning (DL) and Artificial Intelligence (AI) techniques, among which hybrid methods are the most popular. This paper reviews the present technologies of load forecasting and recent work combining various ML, DL and AI algorithms. A comprehensive review of single and hybrid forecasting models, with their functions, advantages and disadvantages, is presented. The performance of the models in terms of Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE) is compared and discussed against the literature on different models to support researchers in selecting the best model for load prediction. This comparison validates the fact that hybrid forecasting models provide a more optimal solution. INDEX TERMS load forecasting, machine learning, load shedding, root mean squared error, mean absolute percentage error.
... Two approaches intended to improve the learning quality of neural networks were introduced a few years later, namely support vector machines (SVMs) 93 and random forest (RF), 94 and both were used to refine the quality of glycosite prediction. All of the corresponding contributions demonstrated that they outperformed NetNGlyc and NetOGlyc following this enhancement. ...
Article
Artificial intelligence (AI) methods have been and are now being increasingly integrated in prediction software implemented in bioinformatics and its glycoscience branch known as glycoinformatics. AI techniques have evolved in the past decades, and their applications in glycoscience are not yet widespread. This limited use is partly explained by the peculiarities of glyco-data that are notoriously hard to produce and analyze. Nonetheless, as time goes, the accumulation of glycomics, glycoproteomics, and glycan-binding data has reached a point where even the most recent deep learning methods can provide predictors with good performance. We discuss the historical development of the application of various AI methods in the broader field of glycoinformatics. A particular focus is placed on shining a light on challenges in glyco-data handling, contextualized by lessons learnt from related disciplines. Ending on the discussion of state-of-the-art deep learning approaches in glycoinformatics, we also envision the future of glycoinformatics, including development that need to occur in order to truly unleash the capabilities of glycoscience in the systems biology era.
... Methods. Popular single-class classification methods detect novelty directly by measuring the gap between the original and reconstructed inputs, such as the Support Vector Machine (SVM) [44,47] and SVDD [48,49]. In general, video-level methods treat anomaly detection as a novelty detection problem. ...
Article
As reported by the United Nations in 2021, road accidents cause 1.3 million deaths and 50 million injuries worldwide each year. Detecting traffic anomalies in a timely manner and taking immediate emergency response and rescue measures are essential to reduce casualties, economic losses, and traffic congestion. This paper proposes a three-stage method for video-based traffic anomaly detection. In the first stage, the ViViT network is employed as a feature extractor to capture spatiotemporal features from the input video. In the second stage, the class and patch tokens are fed separately to the segment-level and video-level traffic anomaly detectors. In the third stage, the entire composite traffic anomaly detection framework is completed by fusing the outputs of the two traffic anomaly detectors above, which operate at different granularities. Experimental evaluation demonstrates that the proposed method outperforms the SOTA method by 2.07% AUC on the TAD overall testing set and 1.43% AUC on the TAD anomaly testing subset. This work provides a new reference for traffic anomaly detection research.
... Its working principle is to maximize the distance between the nearest sample points on both sides by constructing an optimal separating hyperplane. It has the advantages of high accuracy, strong generalizability, and good performance in processing high-dimensional, unseen data [58]. In this research, the training set is set as
Article
After the "5·12" Wenchuan earthquake in 2008, collapses and landslides have occurred continuously, resulting in the accumulation of a large quantity of loose sediment on slopes or in gullies, providing rich material source reserves for the occurrence of debris flow and flash flood disasters. Therefore, it is of great significance to build a collapse and landslide susceptibility evaluation model of Wenchuan County for local disaster prevention and mitigation. Taking Wenchuan County as the research object and according to the data of 1081 historical collapse and landslide disaster points, as well as the natural environment, this paper first selects six categories of environmental factors (13 environmental factors in total), including topography (slope, aspect, curvature, terrain relief, TWI), geological structure (lithology, soil type, distance to fault), meteorology and hydrology (rainfall, distance to river), seismic impact (PGA), ecological impact (NDVI), and impact of human activity (land use). It then builds three single models (LR, SVM, RF) and three CF-based hybrid models (CF-LR, CF-SVM, CF-RF), and makes a comparative analysis of the accuracy and reliability of the models, thereby obtaining the optimal model for the research area. Finally, this study discusses the contribution of environmental factors to the collapse and landslide susceptibility prediction of the optimal model. The research results show that (1) the areas predicted as extremely highly prone to collapse and landslide by the six models (LR, CF-LR, SVM, CF-SVM, RF and CF-RF) cover 730.595 km², 377.521 km², 361.772 km², 372.979 km², 318.631 km², and 306.51 km², respectively, and the frequency ratio precision of collapses and landslides is 0.916, 0.938, 0.955, 0.956, 0.972, and 0.984, respectively; (2) the ranking of the comprehensive index based on the confusion matrix is CF-RF>RF>CF-SVM>CF-LR>SVM>LR and the ranking of the AUC value is CF-RF>RF>CF-SVM>CF-LR>SVM>LR.
To a certain extent, the coupling models can improve precision over the single models. The CF-RF model ranks highest on all indexes, with a POA value of 257.046 and an AUC value of 0.946; (3) rainfall, soil type, and distance to river are the three most important environmental factors, accounting for 24.216%, 22.309%, and 11.41%, respectively. Therefore, it is necessary to strengthen the monitoring of mountains and rock masses close to rivers in case of rainstorms in Wenchuan County and other similar areas prone to post-earthquake landslides.
... 4.2.1. This list comprised recognition models such as Random Forest [26], Ridge classifier [27], Support Vector Machine (SVM) [28], and Extra-Trees classifier [29]. Random forests were grown with 50 and 100 trees; the Ridge classifier was trained using a regularization strength of α = 40; the SVM was trained using a linear kernel and with a regularization parameter C = 1; and the Extra-Trees were grown with fifty trees. ...
Article
Identifying objects during the early phases of robotic grasping in unstructured environments is a crucial step toward successful dexterous robotic manipulation. Underactuated hands are versatile and quickly conform to unknown object surfaces to ensure a firm grasp. The trade-off of using such hands is that extracting information and recognizing objects is challenging due to the uncertainty introduced by the hand’s flexibility and unexpected object movements under manipulation. Combining tactile sensors and machine learning models can provide valuable information about manipulated objects to overcome such drawbacks. The present paper explores tactile object identification under two situations: single grasp, analogous to the haptic glance in humans, and through brief exploratory procedures where a robotic thumb displaces the grasped object to excite the sensors. In both scenarios, a fuzzy controller ensures that data collection occurs under approximately the same conditions in terms of forces and vibrations. Machine learning methods used for the single-grasp and short-exploratory data confirm that the former can improve object recognition.
... Finally, the predicted RSSI value was evaluated as the average RSSI value of the training set points that were associated with the same cluster. (6) Support vector machine (SVM)-based method [26]: fits the best line within a predefined error threshold. In our study, we used the radial basis function (RBF) kernel with gamma="scale" and the default values of the other parameters. ...
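For reference, scikit-learn's gamma="scale" setting used above resolves to gamma = 1 / (n_features * Var(X)), with the population variance taken over all entries of the training matrix. A small pure-Python reproduction of that heuristic (function name is ours):

```python
def gamma_scale(X):
    """Reproduce scikit-learn's gamma="scale" heuristic for RBF kernels:
    gamma = 1 / (n_features * Var(X)), with the (population) variance
    taken over every entry of the training matrix X."""
    flat = [v for row in X for v in row]
    mean = sum(flat) / len(flat)
    var = sum((v - mean) ** 2 for v in flat) / len(flat)
    return 1.0 / (len(X[0]) * var)
```

For a 2-feature matrix whose entries have variance 1, this yields gamma = 0.5, i.e. a moderately wide RBF kernel that adapts to the scale of the data.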
Article
The advances made in wireless communication technology have led to efforts to improve the quality of reception, prevent poor connections, and avoid disconnections between wireless and cellular devices. One of the most important steps toward preventing communication failures is to correctly estimate the received signal strength indicator (RSSI) of a wireless device. RSSI prediction is important for addressing various challenges such as localization, power control, link quality estimation, terminal connectivity estimation, and handover decisions. In this study, we compare different machine learning (ML) techniques that can be used to predict the received signal strength values of a device, given the received signal strength values of other devices in the region. We consider various ML methods, such as multi-layer ANN, K nearest neighbors, decision trees, random forest, and the K-means based method, for the prediction challenge. We checked the accuracy level of the learning process using a real dataset provided by a major national cellular operator. Our results show that the weighted K nearest neighbors algorithm, for K=3 neighbors, achieved, on average, the most accurate RSSI predictions. We conclude that in environments where the size of the data is relatively small, and data from geographically close points is available, a method that predicts the coverage of a point using the coverage of nearby geographical points can be more successful and more accurate than other ML methods.
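The study's best performer, weighted K nearest neighbours with K=3, can be sketched as follows. The inverse-distance weighting and the toy coordinates/RSSI values are assumptions on our part; the abstract does not spell out the exact weighting scheme.

```python
import math

def weighted_knn_predict(points, values, query, k=3):
    """Inverse-distance-weighted K nearest neighbours regression
    (the weighting scheme is an assumption, not from the paper)."""
    nearest = sorted(
        (math.dist(p, query), v) for p, v in zip(points, values)
    )[:k]
    if nearest[0][0] == 0.0:      # query coincides with a training point
        return nearest[0][1]
    weight_sum = sum(1.0 / d for d, _ in nearest)
    return sum(v / d for d, v in nearest) / weight_sum

# Toy RSSI measurements (dBm) at known coordinates
points = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]]
rssi = [-60.0, -70.0, -80.0, -90.0]
```

Querying at a training point returns its exact measurement, while intermediate locations get a prediction pulled toward the nearest neighbours, which matches the paper's observation that nearby geographical points carry most of the signal-coverage information.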
... Support vector regression (SVR) is based on the Vapnik-Chervonenkis dimension theory and the structural risk minimization principle. 27 SVR has been widely used in the field of PHM. 35 The basic idea of SVR is to find a function f(x) = ⟨w, x⟩ + b such that the predicted value f(x) has the smallest difference from the training label, where ⟨⋅⟩ denotes the inner product, w is the normal vector, and b denotes the bias. ...
Article
Remaining useful life (RUL) plays an important role in prognostics and health management to reduce maintenance costs and avoid possible accidents. Massive multi-sensor data makes it challenging to extract degradation features and predict RUL. This paper develops a novel RUL prediction framework consisting of a multi-sensor fusion model and a hybrid prediction model. HI curves are constructed by synthesizing multiple metrics including time correlation, monotonicity, robustness, and consistency, making our data fusion method different from existing methods considering only one metric. In many practical situations, the initial degradation levels of obtained HI curves are different, and treating all curves without classification will result in errors in RUL prediction. The K-means method is used to partition degradation paths into discrete states, and the obtained HI curves are grouped by their initial states and then trained separately. A hybrid prediction model combining Long Short-Term Memory (LSTM) network and Support Vector Regression (SVR) is developed to predict the RUL. The dataset of turbofan engines is used to verify the proposed method. The results show that the proposed method performs better than many existing methods.
... Recently, leverage scores have found applications in other areas, such as kernel methods (Schölkopf et al., 2002). Despite their potential for effective machine-learning algorithm design (Cortes & Vapnik, 1995), kernels often require handling large matrices. For that reason, sampling methods and approximate factorizations are often studied in this context. ...
Preprint
Full-text available
In problems involving matrix computations, the concept of leverage has found a large number of applications. In particular, leverage scores, which relate the columns of a matrix to the subspaces spanned by its leading singular vectors, are helpful in revealing column subsets to approximately factorize a matrix with quality guarantees. As such, they provide a solid foundation for a variety of machine-learning methods. In this paper we extend the definition of leverage scores to relate the columns of a matrix to arbitrary subsets of singular vectors. We establish a precise connection between column and singular-vector subsets, by relating the concepts of leverage scores and principal angles between subspaces. We employ this result to design approximation algorithms with provable guarantees for two well-known problems: generalized column subset selection and sparse canonical correlation analysis. We run numerical experiments to provide further insight on the proposed methods. The novel bounds we derive improve our understanding of fundamental concepts in matrix approximations. In addition, our insights may serve as building blocks for further contributions.
... SVM was first proposed by Cortes and Vapnik (1995); it minimizes structural risk and has strong generalization performance. The linear estimation function of SVR is denoted as: ...
Article
Accurate carbon price forecasting is essential to reduce carbon dioxide emissions and slow down global warming. However, a key issue in the carbon trading market is the diversity and uncertainty of external factors. Some studies began to focus on the impact of a single external factor, but few of them considered the application of multi-source information on carbon prices. In addition, the selection of the decomposition method is still controversial, making carbon price forecasting inefficient and unstable. Therefore, this paper proposes a carbon price forecasting method based on multi-source information fusion (MSIF) and hybrid multi-scale decomposition (HMSD). First, MSIF can provide complete, interactive, and timely information for raw carbon prices, including historical data, influencing factors (coal prices, oil prices), and unstructured data (Baidu index, social media sentiment). Second, HMSD is used to completely extract the internal features of multi-source information and avoid the problem of decomposition method selection. Third, due to the linear and nonlinear characteristics of carbon prices, a combination strategy based on Holt, ARIMA, SVR, BPNN, and LSTM can achieve satisfactory results. Finally, to evaluate the effectiveness of the proposed framework, seven types of comparative experiments (based on historical data, influencing factors, Baidu index, and sentiment analysis) are carried out. The results show that MSIF is superior to single-source information in improving carbon price forecasting performance. Furthermore, the HMSD is stronger than the single multi-scale decomposition method in information extraction. Therefore, the proposed hybrid framework is a state-of-the-art carbon price forecasting approach.
... Support Vector Machine (SVM) classifier (Cortes and Vapnik, 1995) is widely used in monitoring wetland vegetation because it deals better with the imbalance of wetland vegetation samples (Ahmed et al., 2021; Zhang and Lin, 2022). The SVM classifier first normalizes the data; the data to be classified are then mapped to a high-dimensional feature space to find the optimal decision boundary and classify the data into different categories. ...
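A minimal sketch of that normalize-then-classify pipeline in scikit-learn; the imbalanced synthetic data stands in for the wetland vegetation samples:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic imbalanced two-class data standing in for wetland samples.
X, y = make_classification(n_samples=300, n_features=8, weights=[0.8, 0.2],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0,
                                                    stratify=y)

# Normalize first; the RBF kernel then performs the implicit mapping to a
# high-dimensional feature space where the decision boundary is sought.
clf = make_pipeline(StandardScaler(),
                    SVC(kernel="rbf", class_weight="balanced"))
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

`class_weight="balanced"` is one common way to handle the sample imbalance the snippet mentions.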
Article
Full-text available
Vegetation is the functional subject in the wetland ecosystem and plays an irreplaceable role in biodiversity conservation. It is of great significance to monitor wetland vegetation for scientific assessment of the impact of vegetation on the ecological environment and biodiversity. In this paper, a method for extracting wetland vegetation based on a short time series Normalized Difference Vegetation Index (NDVI) data set was constructed. First, time series NDVI data were constructed using Sentinel-2 images. Then, the Support Vector Machine (SVM) classifier was used to classify the wetland vegetation types. The distributions of the main wetland vegetation in the study area in 2018 and 2020 were obtained. Finally, the land cover transfer matrix was calculated to analyze the spatial pattern and change of wetland vegetation from 2018 to 2020. Based on 46 Sentinel-2 images acquired in 2018 and 2020, the spatial pattern and change of vegetation in the Yellow River Delta wetlands were extracted and analyzed in this paper. The results show that: (1) The method for extracting wetland vegetation in an estuary delta based on the PIE-Engine platform and short time series NDVI data constructed in this paper can effectively extract the wetland vegetation information. The overall accuracy of the classification results reached 90.47% in 2018 and 80.30% in 2020. The Kappa coefficients of the classification results are 0.874 in 2018 and 0.739 in 2020, respectively. Compared with the results from the random forest classification method and the maximum likelihood classification method, the accuracy is improved by 6.40% and 13.04%, and the Kappa coefficient is improved by 0.055 and 0.069. (2) There were significant changes in vegetation coverage in the Yellow River Delta wetlands from 2018 to 2020. The Spartina alterniflora increased by 3.74 km2. The Suaeda salsa degraded seriously, and the total area decreased by 20.38 km2.
In addition, the increase of Spartina alterniflora effectively guaranteed the stability of the coastline in the study area. This study can provide a theoretical basis for wetland vegetation classification, and the classification results can provide scientific reference for protecting the ecological environment of wetlands and maintaining ecological stability.
... The LBP histogram features extracted by using the MB-LBP operators are in a large number of dimensions. The SVM method can be applied to effectively solve the data problems of small samples, non-linearity, high dimensions, and local minima (Cortes and Vapnik, 1995;Burges, 1998). Therefore, the SVM method was used to build distinction models of the images of ring rot and anthracnose on apple fruits in this study. ...
Article
Full-text available
Ring rot caused by Botryosphaeria dothidea and anthracnose caused by Colletotrichum gloeosporioides are two important apple fruit diseases. It is critical to conduct timely and accurate distinction and diagnosis of the two diseases for apple disease management and apple quality control. The automatic distinction between the two diseases was investigated based on image processing technology in this study. The acquired disease images were preprocessed via image scaling, color image contrast stretching, and morphological opening and closing reconstruction. Then, two lesion segmentation methods based on circle fitting were proposed and used to conduct lesion segmentation. After comparison with the manual segmentation results obtained via the software Adobe Photoshop CC, Lesion segmentation method 1 was chosen for further disease image processing. The gray images on the nine components in the RGB, HSI, and L*a*b* color spaces of the segmented lesion images were filtered by using multi-scale block local binary pattern operators with the sizes of pixel blocks of 1 × 1, 2 × 2, and 3 × 3, respectively, and the corresponding local binary pattern (LBP) histogram vectors were calculated as the features of the lesion images. Subsequently, support vector machine (SVM) models and random forest models were built based on individual LBP histogram features or different LBP histogram feature combinations for distinguishing the diseases. The optimal SVM model with the distinction accuracies of the training and testing sets equal to 100 and 95.12% and the optimal random forest model with the distinction accuracies of the training and testing sets equal to 100 and 90.24% were achieved. The results indicated that the distinction between the two diseases could be implemented with high accuracy by using the proposed method. In this study, a method based on image processing technology was provided for the distinction of ring rot and anthracnose on apple fruits.
... The basic principles of these four classifiers are introduced as follows. Support Vector Machine (SVM), proposed by Vapnik and colleagues [5], is a generalized linear classifier that separates data in a supervised learning manner; its decision boundary is the maximum-margin hyperplane. When there are more than two categories to be divided, SVM converts the problem into multiple binary classification problems. ...
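The binary decomposition mentioned above can be made explicit in scikit-learn: `OneVsRestClassifier` trains one binary SVM per class, while `SVC` itself decomposes multi-class problems one-vs-one internally (illustrative sketch on the Iris data):

```python
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)            # 3 classes

# One-vs-rest decomposition: one binary SVM per class.
ovr = OneVsRestClassifier(SVC(kernel="linear")).fit(X, y)
n_binary = len(ovr.estimators_)              # 3 binary sub-problems

# SVC alone decomposes one-vs-one: k*(k-1)/2 pairwise binary SVMs.
ovo = SVC(kernel="linear", decision_function_shape="ovo").fit(X, y)
n_pairs = ovo.decision_function(X[:1]).shape[1]
```

For 3 classes, both decompositions happen to produce 3 binary problems; for k > 3 the one-vs-one count grows quadratically.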
Article
Full-text available
Classifier ensemble is an important research topic in ensemble learning, which combines several base classifiers to achieve better performance. However, the ensemble strategy always brings difficulties in integrating multiple classifiers. To address this issue, this paper proposes a multi-classifier ensemble algorithm based on D-S evidence theory. The principle of the proposed algorithm rests on two primary aspects. (a) Four probability classifiers are developed to provide redundant and complementary decision information, which is regarded as independent evidence. (b) A distinguishing fusion strategy based on D-S evidence theory is proposed to combine the evidence of multiple classifiers and avoid the misclassification caused by conflicting evidence. The performance of the proposed algorithm has been tested on eight different public datasets, and the results show higher performance than other methods.
... ‖β^(g)‖_2 ≤ r_{g+G}, g = 1, . . . , G. Support vector machines (Cortes & Vapnik, 1995) are supervised learning methods for binary classification. Support vector machines solve ...
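For reference, the standard soft-margin formulation that the truncated sentence above presumably continues with is:

```latex
\min_{w,\,b,\,\xi}\ \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{n}\xi_{i}
\quad \text{s.t.} \quad y_{i}\,\bigl(w^{\top}x_{i} + b\bigr) \ge 1 - \xi_{i},
\qquad \xi_{i} \ge 0,\quad i = 1,\dots,n.
```

Here the slack variables ξ_i are the extension to non-separable data introduced in the paper this page cites.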
Preprint
Full-text available
Gradient-based optimization methods for hyperparameter tuning guarantee theoretical convergence to stationary solutions when for fixed upper-level variable values, the lower level of the bilevel program is strongly convex (LLSC) and smooth (LLS). This condition is not satisfied for bilevel programs arising from tuning hyperparameters in many machine learning algorithms. In this work, we develop a sequentially convergent Value Function based Difference-of-Convex Algorithm with inexactness (VF-iDCA). We show that this algorithm achieves stationary solutions without LLSC and LLS assumptions for bilevel programs from a broad class of hyperparameter tuning applications. Our extensive experiments confirm our theoretical findings and show that the proposed VF-iDCA yields superior performance when applied to tune hyperparameters.
... In sensitivity analysis, we applied other frequently used machine learning algorithms such as random forest and support vector machine to our dataset for comparison (28,29). Additionally, we assessed the performance of the SOFA score, SAPS-II and XGBoost model using data gathered in the early period after ICU admission, i.e., the first 12 h, in predicting in-hospital mortality in the first 28 days. ...
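The sensitivity analysis described above, comparing alternative learners on the same data, can be sketched as follows; the dataset here is synthetic, not the MIMIC-IV/eICU data used in the study:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic tabular data standing in for the clinical dataset.
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": make_pipeline(StandardScaler(), SVC()),
}
# Mean ROC AUC over 5 folds for each candidate model.
auc = {name: cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean()
       for name, m in models.items()}
```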
Article
Full-text available
Background Sepsis-associated acute kidney injury (SA-AKI) is common in critically ill patients, which is associated with significantly increased mortality. Existing mortality prediction tools showed insufficient predictive power or failed to reflect patients' dynamic clinical evolution. Therefore, the study aimed to develop and validate machine learning-based models for real-time mortality prediction in critically ill patients with SA-AKI. Methods The multi-center retrospective study included patients from two distinct databases. A total of 12,132 SA-AKI patients from the Medical Information Mart for Intensive Care IV (MIMIC-IV) were randomly allocated to the training, validation, and internal test sets. An additional 3,741 patients from the eICU Collaborative Research Database (eICU-CRD) served as an external test set. For every 12 h during the ICU stays, the state-of-the-art eXtreme Gradient Boosting (XGBoost) algorithm was used to predict the risk of in-hospital death in the following 48, 72, and 120 h and in the first 28 days after ICU admission. Area under the receiver operating characteristic curves (AUCs) were calculated to evaluate the models' performance. Results The XGBoost models, based on routine clinical variables updated every 12 h, showed better performance in mortality prediction than the SOFA score and SAPS-II. The AUCs of the XGBoost models for mortality over different time periods ranged from 0.848 to 0.804 in the internal test set and from 0.818 to 0.748 in the external test set. The shapley additive explanation method provided interpretability for the XGBoost models, which improved the understanding of the association between the predictor variables and future mortality. Conclusions The interpretable machine learning XGBoost models showed promising performance in real-time mortality prediction in critically ill patients with SA-AKI, which are useful tools for early identification of high-risk patients and timely clinical interventions.
... In the final diagnosis module, the segmentation of OD and OC is used to extract the features mean cup-to-disk ratio (MCDR) and ISNT (inferior, superior, nasal, and temporal) score. Then, a two-dimensional classification line is established by support vector machines (SVM) [25]. The final diagnostic criteria are as follows: (1) if there is an RNFLD, then it is predicted as glaucoma; (2) if there is no RNFLD, then make a prediction based on the two-dimensional classification line. ...
Article
Full-text available
Purpose: By comparing the performance of different models between artificial intelligence (AI) and doctors, we aim to evaluate and identify the optimal model for future usage of AI. Methods: A total of 500 fundus images of glaucoma and 500 fundus images of normal eyes were collected and randomly divided into five groups, with each group corresponding to one round. The AI system provided diagnostic suggestions for each image. Four doctors provided diagnoses without the assistance of the AI in the first round and with the assistance of the AI in the second and third rounds. In the fourth round, doctor B and doctor D made diagnoses with the help of the AI and the other two doctors without the help of the AI. In the last round, doctor A and doctor B made diagnoses with the help of AI and the other two doctors without the help of the AI. Results: Doctor A, doctor B, and doctor D had a higher accuracy in the diagnosis of glaucoma with the assistance of AI in the second (p=0.036, p=0.003, and p ≤ 0.000) and the third round (p=0.021, p ≤ 0.000, and p ≤ 0.000) than in the first round. The accuracy of at least one doctor was higher than that of AI in the second and third rounds, in spite of no detectable significance (p=0.283, p=0.727, p=0.344, and p=0.508). The four doctors' overall accuracy (p=0.004 and p ≤ 0.000) and sensitivity (p=0.006 and p ≤ 0.000) as a whole were significantly improved in the second and third rounds. Conclusions: This "Doctor + AI" model can clarify the role of doctors and AI in medical responsibility and ensure the safety of patients, and importantly, this model shows great potential and application prospects.
... In the same year, Shafiq et al. [3] proposed a new feature-selection metric called CorrAUC, based on the combination of two other metrics: CAE (Correlation Attribute Evaluation) and AUC (Area Under the Curve). The proposal was evaluated on the Bot-IoT dataset [4] with four machine learning models: the C4.5 decision tree [5], the naïve Bayes classifier, Random Forest [6], and the support vector machine [7]. The proposed solution achieved malicious-traffic detection precision above 96%. ...
Preprint
Full-text available
IoT devices are increasingly present in our daily lives, both in private contexts and in public environments. Consequently, their security must also be treated with care. After reviewing several implementations from the literature, this work proposes an approach to threat detection based on network traffic analysis, performed by machine learning models. After extensive experimentation and evaluation, it was possible to produce a quickly trainable and highly reliable model, proving the effectiveness of the proposal and pointing out directions for future work. Keywords—Internet of Things, cybersecurity, machine learning.
... A further reduction was applied to the feature subsets, which was also useful for working with a homogeneous number of features throughout all sub-classifications. A wrapper-based feature selector employing a soft-margin linear SVM [42], trained with Platt's SMO optimizer [43] on a single feature at a time, was used to perform ranking. We empirically considered the first 50 ranked features and retained them as the final set to train the AdaBoost classifier. ...
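The single-feature wrapper ranking described above can be sketched as follows; scikit-learn's `SVC` (whose underlying libsvm solver is an SMO-type optimizer) stands in for the SMO setup, and the data and top-3 cutoff are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                           random_state=0)

# Wrapper ranking: train a soft-margin linear SVM on one feature at a time
# and score it by cross-validated accuracy.
scores = [cross_val_score(SVC(kernel="linear", C=1.0), X[:, [j]], y, cv=5).mean()
          for j in range(X.shape[1])]
ranking = np.argsort(scores)[::-1]           # best single features first
selected = ranking[:3]                       # keep a small top-ranked subset
```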
Article
Alongside the currently used nasal swab testing, the COVID-19 pandemic situation would gain noticeable advantages from low-cost tests that are available at any time, anywhere, at a large scale, and with real-time answers. A novel approach for COVID-19 assessment is adopted here, discriminating negative subjects versus positive or recovered subjects. The scope is to identify potential discriminating features, highlight mid- and short-term effects of COVID on the voice and compare two custom algorithms. A pool of 310 subjects took part in the study; recordings were collected in a low-noise, controlled setting employing three different vocal tasks. Binary classifications followed, using two different custom algorithms. The first was based on the coupling of boosting and bagging, with an AdaBoost classifier using Random Forest learners. A feature selection process was employed for the training, identifying a subset of features acting as clinically relevant biomarkers. The other approach was centred on two custom CNN architectures applied to mel-spectrograms, with a custom knowledge-based data augmentation. Performances, evaluated on an independent test set, were comparable: AdaBoost and CNN differentiated COVID-19 positive from negative with accuracies of 100% and 95% respectively, and recovered from negative individuals with accuracies of 86.1% and 75% respectively. This study highlights the possibility to identify COVID-19 positive subjects, foreseeing a tool for on-site screening, while also considering recovered subjects and the effects of COVID-19 on the voice. The two proposed novel architectures allow for the identification of biomarkers and demonstrate the ongoing relevance of traditional ML versus deep learning in speech analysis.
... Along with ensemble machine learning and genetic programming algorithms, SVM represents a newer and more advanced generation of ML algorithms (Du et al. 2020). In simple terms, SVM is a binary classifier that constructs an optimal separating hyperplane in a high-dimensional space (Cortes and Vapnik 1995). In generic machine learning, the algorithm is trained to minimize the empirical training error, which leads to an overfitting problem (Deiss et al., 2020). ...
Article
Full-text available
The present study has attempted to address the issue of sensitivity of different clusters of factors towards gully erosion in the Mayurakshi river basin. Firstly, the gully erosion susceptibility of the basin area has been mapped by integrating 18 parameters divided into four factor clusters, viz. erodibility, erosivity, resistance, and topography, with the help of four machine learning (ML) models: random forest (RF), gradient boost (GBM), extreme gradient boost (XGB), and support vector machine (SVM). Results show that almost 20% and 25% of the upper catchment of the basin belongs to extreme and high gully erosion susceptibility, respectively. Among the applied algorithms, RF appeared as the best-performing model. The spatial association of factor cluster-based models with the final susceptibility model is found to be highest for the erosivity cluster, followed by the erodibility cluster. From the sensitivity analysis, it becomes clear that geology and soil texture are the dominant contributing factors to gully erosion susceptibility. The geological formation of unclassified granite gneiss and the geomorphological formation of denudational-origin pediment-pediplain complex are dominant over the entire upper catchment of the basin and can therefore be considered regional factors of importance. Since the study has identified the different grades of susceptible areas along with the dominant factors and factor clusters, it would be useful for devising planning for gully erosion check measures. From an economic, particularly food security, perspective it is essential, since it concerns the loss of precious soil and negative effects on agriculture.
... In this study, we apply the support vector machine algorithm SVC [26] to predict protein complexes from the protein interaction network. SVC is a support vector machine algorithm mainly used to solve classification problems. ...
Article
Full-text available
Background Protein complexes are essential for biologists to understand cell organization and function effectively. In recent years, predicting complexes from protein–protein interaction (PPI) networks through computational methods has become one of the current research hotspots. Many methods for protein complex prediction have been proposed. However, how to use the information of known protein complexes is still a fundamental problem that needs to be solved urgently in predicting protein complexes. Results To solve these problems, we propose a supervised learning method based on network representation learning and gene ontology knowledge, which can fully use the information of known protein complexes to predict new protein complexes. This method first constructs a weighted PPI network based on gene ontology knowledge and topology information, reducing the network's noise problem. On this basis, the topological information of known protein complexes is extracted as features, and the supervised learning model SVCC is obtained by training on these features. At the same time, the SVCC model is used to predict candidate protein complexes from the protein interaction network. Then, we use the network representation learning method to obtain the vector representation of the protein complexes and train the random forest model. Finally, we use the random forest model to classify the candidate protein complexes to obtain the final predicted protein complexes. We evaluate the performance of the proposed method on two public PPI data sets. Conclusions Experimental results show that our method can effectively improve the performance of protein complex recognition compared with existing methods. In addition, we also analyze the biological significance of the protein complexes predicted by our method and other methods. The results show that the protein complexes predicted by our method have high biological significance.
... Support vector machine (SVM) is a typical algorithm for forming a margin boundary. The ideal hyperplane is the one with the largest separation (or margin) between the different classes [10]. For example, in [5], the authors treated abnormal detection in multiple-unit spatio-temporal sequences in video and text processing. ...
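A small illustration of the maximum-margin idea: after fitting a linear SVM on separable data, the geometric margin width is 2/‖w‖ and only the support vectors determine the boundary (synthetic data, illustrative only):

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated clusters, so a clean maximum-margin hyperplane exists.
X, y = make_blobs(n_samples=100, centers=2, cluster_std=0.8, random_state=0)

clf = SVC(kernel="linear", C=1e3).fit(X, y)  # large C approximates a hard margin
w = clf.coef_[0]
margin = 2.0 / np.linalg.norm(w)             # geometric margin width
n_sv = clf.support_vectors_.shape[0]         # only these points fix the boundary
```

Removing any non-support-vector point and refitting would leave the hyperplane unchanged, which is what makes the margin boundary "supported" by those vectors.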
Article
Full-text available
Intelligent detection of abnormal behaviors meets the need of engineering applications for identifying anomalies and alerting operators. However, most existing methods tackle high-dimensional sequential video data with key frame extraction, which ignores the redundancy effect of inter- and intra-video frames. In this paper, a novel Abnormal Detection method based on double-sparseness LSSVMoc (AD_LSSVMoc) is proposed, which combines both sample (i.e., frame) selection and feature selection simultaneously in a uniform sparse model. For feature extraction, both handcrafted features and learned features are aggregated into effective descriptors. To achieve feature selection and sample selection, an improved LSSVMoc with a sparse primal and dual optimization strategy is proposed, and the alternating direction method of multipliers is used to solve the constrained linear equations problem raised in AD_LSSVMoc. Experiments show that the proposed AD_LSSVMoc method achieves competitive detection performance and high detection speed compared to state-of-the-art methods.
... Support Vector Machine (SVM) (Cortes and Vapnik, 1995) is a kind of machine learning method. The SVM first normalizes the data; the data to be classified are then mapped to a high-dimensional feature space to find the optimal decision boundary and divide the data into different categories. ...
Article
Full-text available
In recent years, the Yellow River Delta has been affected by the invasive species Spartina alterniflora (S. alterniflora), resulting in a fragile ecological environment. It is of great significance to monitor the ground object types in the Yellow River Delta wetlands. The classification accuracy based on the Synthetic Aperture Radar (SAR) backscattering coefficient is limited by the small difference between some ground objects. To solve this problem, a decision tree classification method for extracting the ground object types in wetland, combining time series SAR backscattering and coherence characteristics, was proposed. The Yellow River Delta was taken as the study area, and 112 Sentinel-1A GRD data with VV/VH dual-polarization and 64 Sentinel-1A SLC data with VH polarization were used. The decision tree method was established based on the annual mean VH and VV backscattering characteristics, the newly constructed radar backscattering indices, and the annual mean VH coherence characteristics, which were suitable for extracting the wetlands in the Yellow River Delta. Then the classification results in the Yellow River Delta wetlands from 2018 to 2021 were obtained using the new method proposed in this paper. The results show that the overall accuracy and Kappa coefficient of the proposed method were 89.504% and 0.860, which were 9.992% and 0.127 higher than multi-temporal classification with the Support Vector Machine classifier. Compared with the decision tree without coherence, the overall accuracy and Kappa coefficient were improved by 8.854% and 0.108. The spatial distributions of wetland types in the Yellow River Delta from 2018 to 2021 were obtained using the constructed decision tree, and a spatio-temporal evolution analysis was conducted. The results showed that the area of S. alterniflora decreased significantly in 2020 but increased back to the 2018 level in 2021. In addition, S. alterniflora seriously affected the living space of Phragmites australis (P. australis), and in 4 years, 10.485 km2 of the living space of P. australis was occupied by S. alterniflora. The proposed method can provide a theoretical basis for higher accuracy SAR wetland classification, and the monitoring results can provide an effective reference for local wetland protection.
Purpose In the field of hospitality, most studies use English reviews and neglect non-English sources. The purpose of this paper is to exploit a predictive framework for review helpfulness that can process both Chinese and English textual comments. Design/methodology/approach This study develops some methods for feature extraction from Chinese online reviews, extracts more comprehensive predictors and proposes a novel prediction framework of classification before regression. Hofstede's cultural theory is used to explain differences in the determinants of the helpfulness of reviews in Chinese and English. Findings The findings reveal that travelers from various countries do have discrepant perspectives on review helpfulness. Chinese tourists pay more attention to the reviewer profiles, whereas American tourists pay more attention to the review-related features. Practical implications This research offers hoteliers actionable implications for meeting the needs of travelers from dissimilar cultural societies. The authors' prediction framework can be used by website developers to create a review helpfulness rating system that allows visitors to acquire beneficial information. Originality/value On the one hand, the methods developed for extracting features of Chinese reviews, the hybrid set of features with several novel predictors and the prediction framework proposed in this study contribute to the methodology. On the other hand, this study is one of the few articles based on Hofstede's cultural theory to guide a cross-cultural study on review helpfulness in the hotel sector, which in turn contributes to the theory.
Chapter
With the ubiquity of voice assistants across the UK and the world, speech recognition of the regional accents across the British Isles has proven challenging due to varying pronunciations. This paper proposes an automated recognition of the geographical origin and gender of a voice sample based on the six regional dialects of the United Kingdom. Twenty six features are extracted from 17,877 voice samples and then used to design, implement and evaluate machine learning classifiers based on Artificial Neural Networks (ANNs), Support Vector Machine (SVM), Random Forest (RF) and k-nearest neighbors (k-NN) algorithms. The results suggest that the proposed approach could be applicable for areas such as e-commerce and the service industry, and it provides a contribution to NLP audio research.
Chapter
In modern health care research, protein phosphorylation has gained enormous attention from researchers across the globe and requires automated approaches to process a huge volume of data on proteins and their modifications at the cellular level. The data generated at the cellular level are unique as well as arbitrary, and the accumulation of a massive volume of information is inevitable. Biological research has revealed that the vast array of cellular communication aided by protein phosphorylation and other similar mechanisms carries diverse meanings. This has led to the collection of huge volumes of data to understand the biological functions of human evolution, especially for combating diseases in a better way. Text mining, an automated approach to mining information from unstructured data, finds application in extracting protein phosphorylation information from biomedical literature databases such as PubMed. This chapter outlines a recent text mining protocol that applies natural language parsing (NLP) for named entity recognition and text processing, and support vector machines (SVM), a machine learning algorithm, for classifying the processed text related to human protein phosphorylation. We discuss the evaluation of the resulting text mining system on three corpora, namely the human Protein Phosphorylation (hPP) corpus, the Integrated Protein Literature Information and Knowledge corpus (iProLink), and the Phosphorylation Literature corpus (PLC). We also present a basic understanding of the chemistry and biology that drive the protein phosphorylation process in the human body. We believe that this basic understanding will be useful for advancing existing text mining systems for extracting protein phosphorylation information from PubMed.
Article
Hydration free energy (HFE) is a key factor in improving protein-ligand binding free energy (BFE) prediction accuracy. The HFE itself can be calculated using the three-dimensional reference interaction site model (3D-RISM); however, BFE predictions evaluated solely using 3D-RISM do not correlate with experimental BFEs across many protein-ligand pairs. In this study, to predict the BFE for multiple sets of protein-ligand pairs, we propose a machine learning approach incorporating the HFEs obtained using 3D-RISM, termed 3D-RISM-AI. In the learning process, structural metrics, intra-/intermolecular energies, and HFEs obtained via 3D-RISM for ∼4000 complexes in the PDBbind database (ver. 2018) were used. The BFEs predicted using 3D-RISM-AI correlated well with the experimental data (Pearson's correlation coefficient of 0.80 and root-mean-square error of 1.91 kcal/mol). As important factors for the prediction, the difference in solvent-accessible surface area between the bound and unbound structures and the hydration properties of the ligands were detected during the learning process.
Article
Full-text available
Classical dancers all over the world use various body gestures to communicate the intended meaning to the audience. The study of these gestures can help in better understanding of the dance forms and also for annotation purposes. Bharatanatyam, an Indian classical dance, uses elegant hand gestures (mudras), facial expressions and body movements to convey the meaning to the audience. There are 28 Asamyukta Hastas (single-hand gestures) and 23 Samyukta Hastas (double-hand gestures) in Bharatanatyam. Open datasets on Bharatanatyam dance gestures are not presently available. An exhaustive open dataset of 15,396 distinct single-hand gesture images and 13,035 distinct double-hand gesture images was created. In this paper, we intend to find an optimal feature descriptor for this dataset. Various feature descriptors like scale-invariant feature transform (SIFT), speeded-up robust features (SURF), oriented FAST and rotated BRIEF (ORB), KAZE, KAZE extended, accelerated-KAZE (A-KAZE), binary robust invariant scalable keypoints (BRISK) and SIKA were explored. The feature descriptors were coded using bag of visual words and classified using several classifiers like support vector machines (SVM), multilayer perceptron (MLP), Naive Bayes, logistic regression, decision tree, AdaBoost and random forest. From all these investigations, it is observed that the combination of the KAZE descriptor and the random forest classifier achieves the highest classification accuracy.
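The bag-of-visual-words pipeline described above (local descriptors clustered into a vocabulary, histogram encoding, then a classifier) can be sketched as follows. This is a minimal illustration on synthetic descriptor vectors, not the Bharatanatyam dataset; real KAZE or SIFT descriptors would come from an image library such as OpenCV.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_classes, n_images, n_words = 4, 200, 32

# Synthetic stand-in for keypoint descriptors (64-D vectors): each
# "image" yields a variable number of descriptors drawn around a
# class-specific center.
labels = rng.integers(0, n_classes, n_images)
centers = rng.normal(0.0, 1.0, (n_classes, 64))
images = [centers[c] + rng.normal(0.0, 0.5, (int(rng.integers(30, 60)), 64))
          for c in labels]

# Visual vocabulary: cluster all descriptors, then encode each image as
# a normalized histogram of visual-word occurrences (bag of visual words).
vocab = KMeans(n_clusters=n_words, n_init=10, random_state=0)
vocab.fit(np.vstack(images))
bovw = np.array([np.bincount(vocab.predict(d), minlength=n_words) / len(d)
                 for d in images])

# Classify the histograms, as the paper does with random forest and others.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(rf, bovw, labels, cv=5).mean()
print(f"cross-validated accuracy: {acc:.3f}")
```

The same encoded histograms can be fed to any of the other classifiers mentioned (SVM, MLP, Naive Bayes) by swapping the final estimator.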
Article
Full-text available
Electroencephalogram (EEG) plays a crucial role in the study of working memory, which involves the complex coordination of brain regions. In this research, we designed and conducted a series of memory experiments with various memory loads or target forms and simultaneously collected behavioral data as well as 32-lead EEG. Combined with behavioral data analysis, we segmented the EEG into slices; then, we calculated the phase-locking value (PLV) of Gamma rhythms between every two leads, conducted binarization, constructed a brain function network, and extracted three network characteristics of node degree, local clustering coefficient, and betweenness centrality. Finally, we input these network characteristics of all leads into support vector machines (SVM) for classification and obtained decent performance; i.e., all classification accuracies were greater than 0.78 on an independent test set. Notably, PLV is normally restricted to narrow-band signals, and few successful applications to the EEG Gamma rhythm, defined as broadly as 30-100 Hz, have been reported. To address this limitation, we used simulation on band-pass filtered noise with the same frequency band as Gamma to help determine the PLV binarization threshold. It turns out that network characteristics based on binarized PLV can distinguish the presence or absence of memory, as well as the intensity of the mental workload at the moment of memory. This work sheds light on phase-locking investigation between relatively wide-band signals, as well as on memory research via EEG.
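The phase-locking value at the heart of this pipeline has a compact definition: take instantaneous phases from the analytic signal and average the phase-difference phasors. A minimal sketch of the generic PLV (not the authors' Gamma-band thresholding procedure):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two equal-length signals: magnitude of
    the mean phase-difference phasor (1 = perfect locking, 0 = no
    consistent phase relation)."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

# Two 40 Hz sinusoids with a fixed phase lag are almost perfectly locked.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
a = np.sin(2 * np.pi * 40 * t)
b = np.sin(2 * np.pi * 40 * t + 0.5)
print(round(plv(a, b), 3))
```

A binarized matrix of pairwise PLVs over all leads would then serve as the adjacency matrix from which node degree, clustering coefficient, and betweenness centrality are computed.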
Article
Recently, integrated machine learning (ML) metaheuristic algorithms, such as the artificial bee colony (ABC) algorithm, genetic algorithm (GA), gray wolf optimization (GWO) algorithm, particle swarm optimization (PSO) algorithm, and water cycle algorithm (WCA), have become predominant approaches for landslide displacement prediction. However, these algorithms suffer from poor reproducibility across replicate cases. In this study, a hybrid approach integrating k-fold cross validation (CV), metaheuristic support vector regression (SVR), and the nonparametric Friedman test is proposed to enhance reproducibility. The five previously mentioned metaheuristics were compared in terms of accuracy, computational time, robustness, and convergence. The results obtained for the Shuping and Baishuihe landslides demonstrate that the hybrid approach can be utilized to determine the optimum hyperparameters and present statistical significance, thus enhancing accuracy and reliability in ML-based prediction. Significant differences were observed among the five metaheuristics. Based on the Friedman test, which was performed on the root mean square error (RMSE), Kling-Gupta efficiency (KGE), and computational time, PSO is recommended for hyperparameter tuning for SVR-based displacement prediction due to its ability to maintain a balance between precision, computational time, and robustness. The nonparametric Friedman test is promising for presenting statistical significance, thus enhancing reproducibility.
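The nonparametric Friedman test used to compare the five metaheuristic tuners is available in SciPy. A sketch on hypothetical RMSE values; the numbers below are illustrative only, not results from the Shuping or Baishuihe cases:

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical RMSE values (rows = replicate cases, one array per
# metaheuristic tuner). Real values would come from repeated SVR
# trainings on the displacement series.
rng = np.random.default_rng(0)
abc = rng.normal(1.20, 0.05, 10)
ga = rng.normal(1.25, 0.05, 10)
gwo = rng.normal(1.22, 0.05, 10)
pso = rng.normal(1.05, 0.05, 10)
wca = rng.normal(1.30, 0.05, 10)

# Friedman test: do the five tuners differ significantly in RMSE
# across the same replicate cases?
stat, p = friedmanchisquare(abc, ga, gwo, pso, wca)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```

The same test can be repeated on KGE and computational time, as the study does, before recommending a tuner.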
Article
Full-text available
The number of buds is an important index for the classification of cut lily flowers. Since manual counting is time-consuming and laborious, the cut flowers can be easily damaged and their quality greatly affected. To build an efficient, non-destructive automatic counting and grading system for cut lily flowers, we proposed a method for counting lily buds based on machine vision. In the images, however, the color of immature buds, stems, and leaves is similar, and buds obscured by each other and by leaves may affect bud counting accuracy. In this paper, threshold segmentation with color space transformation is applied for rough segmentation. Then an SVM is used for a second segmentation to extract the complete flower buds. For flower buds shaded by each other and by leaves, ellipse fitting and bud counting were finally performed by arc combination and the direct least-squares method. A total of 80 cut lily images (292 flower buds) were counted by the method; the counting accuracy rate is 81.2%, and 91.4% of flower buds were successfully fitted. The fitting accuracy of 91 flower buds was analyzed, and the mean relative errors of the long axis and short axis of the fitted ellipses were less than 5%. The algorithm took 2.371 s per image. The experimental results show that the proposed algorithm counts and fits flower buds better than other algorithms, which lays a foundation for the automatic classification of cut lily flowers to save labor costs and provides ideas and methods for ellipse fitting of mutually shaded ellipsoid-like objects.
Article
Full-text available
Artificial intelligence is increasingly widespread in all areas of life and enables machines to imitate human behavior. Machine learning is a subset of artificial intelligence techniques that uses statistical methods to enable machines to improve with experience. As technology advances and science develops, the interest in and need for machine learning grow day by day, and people use machine learning techniques in daily life without realizing it. This study examines ensemble learning algorithms, one family of machine learning techniques; the methods used are the Bagging and AdaBoost ensemble learning algorithms. The main purpose of this study is to find the best-performing classifier with the Classification and Regression Trees (CART) base classifier on three different data sets taken from the UCI machine learning database, and then to obtain ensemble learning algorithms that make this performance better and more stable using the two ensemble learning algorithms. For this purpose, the performance measures of the single base classifier and the ensemble learning algorithms were compared.
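The comparison described above, a single CART tree versus Bagging and AdaBoost ensembles built on tree base learners, can be sketched with scikit-learn. The dataset here is a stand-in, not one of the three UCI sets used in the study:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Stand-in dataset for illustration.
X, y = load_breast_cancer(return_X_y=True)

models = {
    "CART": DecisionTreeClassifier(random_state=0),
    "Bagging": BaggingClassifier(DecisionTreeClassifier(random_state=0),
                                 n_estimators=50, random_state=0),
    # AdaBoost's default base learner is a depth-1 CART stump.
    "AdaBoost": AdaBoostClassifier(n_estimators=50, random_state=0),
}

# Compare the base classifier against the two ensembles by 5-fold CV.
scores = {}
for name, clf in models.items():
    scores[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {scores[name]:.3f}")
```

On most tabular datasets the ensembles match or exceed the single tree, which is the pattern the study reports.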
Chapter
In this work we present an empirical study in which we demonstrate the possibility of developing an artificial agent that is capable of autonomously exploring an experimental scenario. During exploration, the agent is able to discover and learn interesting options that allow it to interact with the environment without any assigned task, and then to abstract and re-use the acquired knowledge to solve assigned tasks. We test the system in the so-called Treasure Game domain described in the recent literature and empirically demonstrate that the discovered options can be abstracted into a probabilistic symbolic planning model (using the PPDDL language), which allows the agent to generate symbolic plans to achieve extrinsic goals.
Article
For the purpose of better evaluating ground fissure susceptibility (GFS), this study developed a hybrid model based on factor optimization and support vector machines (SVM). Firstly, an evaluation index system of GFS containing 15 influence factors was established. Then, the data sample was normalized by certainty factors (CF) in preparation for data analysis and machine learning. In the factor optimization process, Schmidt orthogonalization (SO) was used to reduce collinearity in the data, and Pearson correlation coefficient (PCC) analysis was utilized to estimate its effect. Next, principal component analysis (PCA) was applied to integrate the samples and reduce their dimension, and 9 component vectors were ultimately selected for the construction of the SVM. In the SVM modeling procedure, the modeling data set was composed of 140 ground fissure points and 140 non-deformation points. The cross-validation (CV) method was adopted to determine the required parameters, and the performance of the completed model was verified by statistical analysis and a receiver operating characteristic (ROC) curve. The prediction accuracy of the final model reached 0.852 with the selected parameters. Finally, the GFS values for all data in the study area were calculated by the trained model, and a GFS map was produced and its quality assessed. In conclusion, the model in this research can evaluate GFS accurately and has high value in both scientific research and engineering application.
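The core of this pipeline, standardization, PCA down to 9 components, then a cross-validated SVM scored with ROC analysis, can be sketched as below. The data are synthetic stand-ins for the 140 fissure and 140 non-deformation samples, not the study's field data:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 280 samples with 15 "influence factors",
# mirroring the paper's 140 fissure + 140 non-deformation points.
X, y = make_classification(n_samples=280, n_features=15, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

# Scale, reduce to 9 principal components, then fit an RBF-kernel SVM.
pipe = make_pipeline(StandardScaler(), PCA(n_components=9),
                     SVC(probability=True, random_state=0))

# Cross-validation to pick the SVM hyperparameters, as in the study.
grid = GridSearchCV(pipe, {"svc__C": [1, 10],
                           "svc__gamma": ["scale", 0.1]}, cv=5)
grid.fit(X_tr, y_tr)

# ROC analysis on held-out data.
auc = roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1])
print(f"ROC AUC: {auc:.3f}")
```

The fitted pipeline would then score every grid cell of the study area to produce the susceptibility map.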
Article
An arrayed host:guest fluorescence sensor system can discriminate DNA G-quadruplex structures that differ only in the presence of single oxidation or methylation modification in the guanine base. These small modifications make subtle changes to G4 folding that are often not detectable by CD but induce differential fluorescence responses in the array. The sensing is functional in diluted serum and is capable of distinguishing individual modifications in DNA mixtures, providing a powerful method of detecting folding changes caused by DNA damage.
Article
Polymer-surface interactions are crucial to many biological processes and industrial applications. Here we propose a machine learning method to connect a model polymer's sequence with its adhesion to decorated surfaces. We simulate the adhesive free energies of 20000 unique coarse-grained one-dimensional polymer sequences interacting with functionalized surfaces and build support vector regression models that demonstrate inexpensive and reliable prediction of the adhesive free energy as a function of sequence. Our work highlights the promising integration of coarse-grained simulation with data-driven machine learning methods for the design of functional polymers and represents an important step toward linking polymer compositions with polymer-surface interactions.
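Support vector regression from sequence-derived features can be sketched as follows. The toy "sequences" and free energies below are constructed for illustration (the target is a smooth function of composition and blockiness by design) and are not the authors' coarse-grained simulation data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Toy stand-in: binary 1-D "monomer sequences" whose adhesive free
# energy depends on composition and blockiness, plus a little noise.
rng = np.random.default_rng(0)
seqs = rng.integers(0, 2, size=(2000, 20))
composition = seqs.mean(axis=1)
blockiness = (seqs[:, 1:] == seqs[:, :-1]).mean(axis=1)
free_energy = -5 * composition - 2 * blockiness + rng.normal(0, 0.1, 2000)

# Sequence -> feature vector -> SVR prediction of adhesive free energy.
X = np.column_stack([composition, blockiness])
X_tr, X_te, y_tr, y_te = train_test_split(X, free_energy, random_state=0)

model = SVR(kernel="rbf", C=10.0, epsilon=0.05)
model.fit(X_tr, y_tr)
print(f"test R^2 = {model.score(X_te, y_te):.3f}")
```

In practice richer sequence encodings (k-mer counts, one-hot windows) replace the two hand-picked features, but the regression step is the same.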
Article
Full-text available
Early prediction of grain yield helps scientists make better breeding decisions for wheat. Use of machine learning (ML) methods for fusion of unmanned aerial vehicle (UAV)-based multi-sensor data can improve the prediction accuracy of crop yield. To this end, five ML algorithms including Cubist, support vector machine (SVM), deep neural network (DNN), ridge regression (RR) and random forest (RF) were used for multi-sensor data fusion and ensemble learning for grain yield prediction in wheat. A set of thirty wheat cultivars and breeding lines was grown under three irrigation treatments (light, moderate and high) to evaluate the yield prediction capabilities of a low-cost multi-sensor (RGB, multi-spectral and thermal infrared) UAV platform. Multi-sensor data fusion-based yield prediction showed higher accuracy than individual-sensor data in each ML model. The coefficient of determination (R²) values for the Cubist, SVM, DNN and RR models for grain yield prediction ranged from 0.527 to 0.670. Moreover, ensemble learning integrating the above models further increased accuracy, with R² values up to 0.692, higher than any individual ML model across the multi-sensor data. Root mean square error (RMSE), residual prediction deviation (RPD) and ratio of prediction performance to inter-quartile range (RPIQ) were calculated to be 0.916 t ha⁻¹, 1.771 and 2.602, respectively. The results proved that low-altitude UAV-based multi-sensor data can be used for early grain yield prediction using data fusion and an ensemble learning framework with high accuracy. This high-throughput phenotyping approach is valuable for improving the efficiency of selection in large breeding programs. Supplementary information: The online version contains supplementary material available at 10.1007/s11119-022-09938-8.
Article
Full-text available
The aim of this research is to introduce an innovative automated text mining process to extract operational risks from accounting narratives and to further examine the association between these risk types and operating performance. Specifically, we perform topic modeling to decompose a large amount of unstructured textual disclosure into topics and preserve those relevant to business operation risk. Subsequently, we propose a measure for the degree of financial default, referred to as the “intensity of risk-word list,” through joint utilization of text mining and a statistical approach. The analyzed results are then fed into a support vector machine-based model to construct the forecasting model. The results show that the textual risk indicators are significantly and positively related to corporate operating efficiency. This study also echoes the recent trend in financial reporting regulations of adding a new section on risk factors to annual reports.
Article
Full-text available
Unsuccessful drillings are an issue in groundwater exploration using electrical resistivity profiling (ERP) and vertical electrical sounding (VES). Many geophysical companies spend a lot of money without obtaining the flow rate (FR) required during campaigns for drinking water supply (CDWS). To solve this problem, we applied support vector machines (SVMs) to real-world data to predict FRs before any drilling operations. First, from the ERP and VES, features such as shape, type, power, magnitude, pseudo-fracturing index, and ohmic area were defined, including the geology of the survey area. Secondly, the FRs were categorized into four classes (dry: FR0 (FR=0); unsustainable: FR1 (0<FR≤1); and productive boreholes: FR2 (1<FR≤3) and FR3 (FR>3 m^3/h)) and associated with the features to compose two separate datasets: a multiclass dataset (D) for common prediction during the CDWS and a binary dataset D_b (FR<FR2, FR≥FR2) aimed at populations living in rural areas. Features were vectorized and the data transformed before being fed to the SVM algorithms. As a result, the SVM models achieved 77% correct predictions on D and 83% on D_b. Better performances with the optimal hyper-parameters on D (81.61%) and D_b (87.36%) were achieved using the polynomial and radial basis function kernels, respectively. Furthermore, the learning curves show that the performance scores on D can be improved if larger training data become available (at least 275 test samples), while this is not necessarily so for D_b. As a benefit, the proposed approach could minimize the rate of unsuccessful drillings during future CDWS.
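Comparing polynomial and RBF kernels on a four-class problem, as done for the FR0-FR3 flow-rate classes, can be sketched with scikit-learn. The features below are synthetic stand-ins for the ERP/VES-derived predictors, not the authors' field data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the four flow-rate classes with six features.
X, y = make_classification(n_samples=400, n_features=6, n_informative=4,
                           n_classes=4, n_clusters_per_class=1,
                           random_state=0)

# Evaluate both kernels with 5-fold cross-validation.
accs = {}
for kernel in ("poly", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, C=10.0))
    accs[kernel] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{kernel}: {accs[kernel]:.3f}")
```

The binary FR<FR2 vs. FR≥FR2 task would use the same pipeline with a two-class label vector.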
Article
Full-text available
The performance of a state-of-the-art neural network classifier for hand-written digits is compared to that of a k-nearest-neighbor classifier and to human performance. The neural network has a clear advantage over the k-nearest-neighbor method, but at the same time does not yet reach human performance. Two methods for combining neural-network ideas and the k-nearest-neighbor algorithm are proposed. Numerical experiments for these methods show an improvement in performance.
Article
Linear procedures for classifying an observation as coming from one of two multivariate normal distributions are studied in the case that the two distributions differ both in mean vectors and covariance matrices. We find the class of admissible linear procedures, which is the minimal complete class of linear procedures. It is shown how to construct the linear procedure which minimizes one probability of misclassification given the other and how to obtain the minimax linear procedure; Bayes linear procedures are also discussed.
We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but the architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has a 1% error rate and about a 9% reject rate on zipcode digits provided by the U.S. Postal Service. 1 INTRODUCTION The main point of this paper is to show that large back-propagation (BP) networks can be applied to real image-recognition problems without a large, complex preprocessing stage requiring detailed engineering. Unlike most previous work on the subject (Denker et al., 1989), the learning network is fed directly with images, rather than feature vectors, thus demonstrating the ability of BP networks to deal with large amounts of low-level information. Previous work performed on simple digit images (Le Cun, 1989) showed that the architecture of the network s...
A training algorithm for optimal margin classifiers
• B E Boser
• I Guyon
• V N Vapnik
Boser, B.E., Guyon, I., & Vapnik, V.N. (1992). A training algorithm for optimal margin classifiers. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 5, 144-152, Pittsburgh, ACM.
Comparison of classifier methods: A case study in handwritten digit recognition
• L Bottou
• C Cortes
• J S Denker
• H Drucker
• I Guyon
• L D Jackel
• Y Lecun
• E Sackinger
• P Simard
• V Vapnik
• U A Müller
Bottou, L., Cortes, C., Denker, J.S., Drucker, H., Guyon, I., Jackel, L.D., LeCun, Y., Sackinger, E., Simard, P., Vapnik, V., & Müller, U.A. (1994). Comparison of classifier methods: A case study in handwritten digit recognition. Proceedings of the 12th International Conference on Pattern Recognition and Neural Networks.
• R Courant
• D Hilbert
Courant, R., & Hilbert, D. (1953). Methods of Mathematical Physics, Interscience, New York.
A la Frontiere de l'Intelligence Artificielle des Sciences de la Connaissance des Neurosciences
• Y Lecun
LeCun, Y. (1985). Une procedure d'apprentissage pour reseau a seuil assymetrique. Cognitiva 85: A la Frontiere de l'Intelligence Artificielle des Sciences de la Connaissance des Neurosciences, 599-604, Paris.
Principles of Neurodynamics
• F Rosenblatt
Rosenblatt, F. (1962). Principles of Neurodynamics, Spartan Books, New York.