September 1984 · 4,810 Reads · 14,507 Citations · Biometrics
January 1984 · 690 Reads · 17,942 Citations
January 1984 · 357 Reads · 1,081 Citations
January 1984 · 310 Reads · 433 Citations
January 1983 · 603 Reads · 2,028 Citations
January 1981 · 18 Reads · 7 Citations
This research has focused on the problem of obtaining confidence intervals for extreme quantiles based on a random sample from a distribution of unknown form. Three confidence interval procedures were studied, both analytically and by means of an extensive Monte Carlo experiment. The experiment involved three sample sizes (100, 200, 400) and twenty underlying distributions (five Weibulls, five mixed Weibulls, five lognormals, five mixed lognormals). The Monte Carlo results show that all three procedures work quite well and point the way to further improvement.
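The kind of experiment the abstract describes can be illustrated with a minimal sketch: a Monte Carlo estimate of the coverage of a confidence interval for a quantile. The three original procedures are not reproduced here; a simple order-statistic interval for the 0.90 quantile of an exponential distribution stands in, and all indices and counts below are illustrative choices, not values from the study.

```python
# Hedged sketch (not the original procedures): Monte Carlo coverage of a
# simple order-statistic confidence interval for a quantile.
import math
import random

def order_stat_ci(sample, lo_idx, up_idx):
    """Interval between two order statistics (0-indexed) of the sample."""
    s = sorted(sample)
    return s[lo_idx], s[up_idx]

random.seed(1)
n, p = 100, 0.90             # sample size and target quantile level
true_q = -math.log(1 - p)    # true 0.90 quantile of Exp(1)
lo_idx, up_idx = 84, 95      # illustrative indices bracketing n * p = 90
trials = 2000

hits = 0
for _ in range(trials):
    x = [random.expovariate(1.0) for _ in range(n)]
    lo, up = order_stat_ci(x, lo_idx, up_idx)
    hits += (lo <= true_q <= up)

print(f"empirical coverage ≈ {hits / trials:.3f}")
```

Repeating this over many sample sizes and underlying distributions, as the abstract describes, is what turns a single coverage estimate into a systematic comparison of interval procedures.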
December 1979
·
20 Reads
·
7 Citations
This research has focused on the problem of estimating probabilities in the upper tail of an underlying distribution, and the corresponding quantiles, based on a random sample from the distribution. Two estimation procedures, exponential tail and transformed exponential tail, were defined, and their bias and variance properties were thoroughly studied both analytically and by means of an extensive Monte Carlo experiment. The experiment involved several forms of each procedure; twenty underlying distributions were simulated, including a variety of Weibull and lognormal distributions, at four sample sizes: 100, 200, 400, and 800. Careful study of the analytic and Monte Carlo results showed that both the exponential tail and transformed exponential tail procedures worked quite well, and indicated a potential for substantial further improvement by properly combining them.
... Our evaluation process involved four machine learning models: Logistic Regression with elastic net penalization [35,36], Decision Tree [36,37], Random Forest [36,38], and XGBoost [39]. We chose these classical machine learning algorithms because of the small amount of available data (229 participants) and their reputation for working well in such scenarios. ...
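A comparison like the one this snippet describes can be sketched with scikit-learn. The sketch below uses synthetic data (the study's 229-participant dataset is not available here), and scikit-learn's `GradientBoostingClassifier` stands in for XGBoost so the example stays within one library; all names are illustrative.

```python
# Hedged sketch: cross-validated comparison of the four model families
# named above, on synthetic data sized like the study (229 samples).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=229, n_features=10, random_state=0)

models = {
    # Elastic-net penalization requires the saga solver in scikit-learn.
    "logreg_en": LogisticRegression(penalty="elasticnet", solver="saga",
                                    l1_ratio=0.5, max_iter=5000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
    "boost": GradientBoostingClassifier(random_state=0),  # XGBoost stand-in
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```

With so few samples, cross-validation rather than a single train/test split is what makes the comparison between these small-data-friendly models meaningful.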
January 1984
... On the other hand, the condition ( ( , )/| |) represents membership in the given class. By separating the different probabilities in the formula, the different situations are evaluated in more detail (Breiman et al. 1984; Pal 2005). ...
January 1983
... To predict the climatic match, six methods were fitted to determine model performance and to relate the predictor variables that best described each avian species' distribution suitability. The six methods were selected from the 15 supported by the SDM package: boosted regression tree (BRT: Friedman 2001), classification and regression trees (CART: Breiman 1984), generalised linear model (GLM: McCullagh 1989), multivariate adaptive regression spline (MARS: Friedman 1991), random forest (RF: Breiman 2001), and support vector machine (SVM: Vapnik 1995). The models were evaluated over 100 bootstrap replications to give each model adequate time to converge. ...
January 1984
... The machine learning algorithms used in the experiment were Support Vector Classification (SVC) [49], k-Nearest Neighbor (k-NN) [50], Decision Tree (DT) [51], Random Forest (RF) [52], Artificial Neural Network (ANN) [53], Gradient Boosting Decision Tree (GBDT) [54], and TabNet [55]. All algorithms except TabNet were implemented with scikit-learn [56]; TabNet was implemented with the pytorch-tabnet library [57]. ...
September 1984 · Biometrics
... We tested which subsets of the 17 variables were most strongly associated with the establishment and impact stages of invasion in each region by using a classification tree analysis in the program Salford Predictive Modeler (v. 8, Minitab, State College, Pennsylvania). Classification tree analysis is a machine learning method that employs binary recursive partitioning to model categorical response variables (Breiman et al. 1984). The tree is constructed by repeatedly splitting the data into two groups (nodes) defined by a threshold value (continuous data) or category (categorical data) of a single independent variable that maximizes homogeneity of outcome (e.g., established versus not established) within the two groups created by the split (De'ath and Fabricius 2000). ...
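The split search at the heart of the binary recursive partitioning described above can be sketched in a few lines: for one continuous predictor, find the threshold that minimizes the weighted impurity of the two child nodes (Gini impurity is used here as the homogeneity measure; the variable names and toy data are illustrative, not from the cited study).

```python
# Minimal sketch of one split in binary recursive partitioning:
# choose the threshold on x that minimizes the weighted Gini
# impurity of the two resulting groups (nodes).
import numpy as np

def gini(y):
    """Gini impurity of a node with class labels y."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Return the threshold on x minimizing weighted child impurity."""
    best_t, best_imp = None, np.inf
    for cand in np.unique(x)[:-1]:          # candidate thresholds
        left, right = y[x <= cand], y[x > cand]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if imp < best_imp:
            best_t, best_imp = cand, imp
    return best_t, best_imp

# Toy data: a perfectly separable threshold exists at x = 3.0.
x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])
t, imp = best_split(x, y)
print(f"threshold={t}, impurity={imp:.3f}")  # threshold=3.0, impurity=0.000
```

A full tree repeats this search over all predictors at every node, then recurses into the two child nodes until a stopping rule halts the partitioning.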
January 1984
... Bagging and Random Forests [8][9][10]. Contributions from computer science came later, in the 1990s, with Neural Networks, Boosting, and Support Vector Machines (SVM) [11,120]. ...
December 1979
... The concept of fitting an exponential curve to the tail of a sorted sample is not a new one. Breiman, Gins and Stone (1979) propose taking the maximum likelihood estimator of the tail of the sample and then fitting an exponential curve with this estimated parameter value to predict unobserved tail quantiles. Ott (1995) discusses various methods for fitting a two-parameter exponential curve to the tail of the data. ...
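The procedure this snippet attributes to Breiman, Gins and Stone (1979) can be sketched as follows: take the excesses over a high threshold, estimate the exponential scale by maximum likelihood (the mean excess), and extrapolate to an unobserved tail quantile. This is a hedged sketch in the spirit of that description; the details of the original procedure may differ, and the choice of `k` below is an illustrative assumption.

```python
# Hedged sketch: exponential-tail quantile estimation from the k largest
# observations, with the exponential scale fit by maximum likelihood.
import math
import random

def exp_tail_quantile(sample, k, p):
    """Estimate the quantile exceeded with probability p from the tail.

    The excesses over the threshold u (the (k+1)-th largest value) are
    modeled as exponential; the MLE of the scale is their mean, and the
    tail is approximated as P(X > x) ≈ (k/n) * exp(-(x - u) / beta).
    """
    n = len(sample)
    s = sorted(sample, reverse=True)
    u = s[k]                                   # threshold
    beta = sum(v - u for v in s[:k]) / k       # MLE scale of exceedances
    return u + beta * math.log(k / (n * p))

random.seed(0)
data = [random.expovariate(1.0) for _ in range(400)]
q = exp_tail_quantile(data, k=40, p=0.001)
# The true 0.999 quantile of Exp(1) is -ln(0.001) ≈ 6.91.
print(round(q, 2))
```

The point of the extrapolation is that `p = 0.001` lies well beyond the empirical range of a 400-point sample, so the quantile must come from the fitted curve rather than from an order statistic.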
January 1981