Article

Beyond regression: new tools for prediction and analysis in the behavioral sciences

Authors: Paul J. Werbos

Abstract

Thesis (Ph.D.), Harvard University, 1975. Includes bibliographical references.
... In general, three types of architecture stand out in ANN studies: single-layer feedforward networks, multi-layer feedforward networks, and recurrent networks, as described in [6]. In this work, we use the multi-layer feedforward network (known as the multi-layer perceptron, MLP) with backpropagation, since it can recognize and separate linearly non-separable patterns and, according to [16], backpropagation minimizes the error produced by the network. Results using this technique can be seen in Table I. 2) Support Vector Machines: SVMs were developed by Vapnik [15], and his theory establishes a series of principles that must be followed to obtain classifiers with good generalization, defined as the ability to correctly predict the class of new data from the same domain in which the learning occurred. ...
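The excerpt above pairs the MLP with backpropagation because it can separate linearly non-separable patterns. A minimal NumPy sketch of that idea on XOR, the classic linearly non-separable problem; the network size, seed, learning rate, and iteration count are illustrative assumptions, not values from the cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: feed the output error back through the network
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient-descent parameter updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

pred = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

A single-layer perceptron has no hidden layer to reshape the input space, so it cannot drive this error to zero; the hidden layer is what makes XOR learnable here.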
... The ELM algorithm greatly improves the learning speed of the single-layer feedforward network (SLFN) and is efficient at reaching a global optimum. The learning speed of ELM can be thousands of times faster than that of the back-propagation algorithm (Werbos, 1974), with better generalization ability. At the same time, it is widely used in various fields (Huang et al., 2006, 2011, 2016; Sattar et al., 2019). ...
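The speed advantage of ELM over backpropagation comes from its closed-form training: hidden weights stay random and untrained, and only the output weights are solved by least squares. A minimal sketch under illustrative assumptions (toy regression data, 40 tanh hidden units):

```python
import numpy as np

rng = np.random.default_rng(42)
# toy regression data (stand-in for any SLFN training set)
X = np.linspace(-1, 1, 50).reshape(-1, 1)
y = np.sin(3 * X).ravel()

n_hidden = 40
W = rng.normal(size=(1, n_hidden))   # random input weights, never trained
b = rng.normal(size=n_hidden)        # random biases, never trained
H = np.tanh(X @ W + b)               # hidden-layer activations

# the only "training": solve output weights in closed form via pseudoinverse
beta = np.linalg.pinv(H) @ y
pred = H @ beta
```

One linear solve replaces thousands of iterative gradient steps, which is the source of the speedup the excerpt describes.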
Article
Full-text available
With economic and technological development, annual Gross Domestic Product (GDP) and carbon dioxide (CO2) emissions change over time. The relationship between economic growth and CO2 emissions is considered one of the most important empirical relationships. In this study, we focus on members of the Shanghai Cooperation Organization, including China, Russia, India, and Pakistan, and collect CO2 emission and annual GDP data from 1969 to 2014. Statistical methods and tests are used to find the relationship between annual GDP and CO2 emissions in these countries. Based on this relationship, a novel multi-step prediction algorithm called Extreme Learning Machine with Artificial Bee Colony (ELM-ABC) is proposed for forecasting annual GDP from CO2 emission and historical GDP features. The experimental results show that the proposed model has superior forecasting ability in GDP prediction and can predict annual GDP ten years ahead for the corresponding countries. Moreover, the forecasts indicate that the annual GDP of China and Pakistan will continue to grow, but growth will slow after 2025; annual GDP in India will exhibit unstable growth; and the trend for Russia will follow the pattern observed between 2010 and 2016.
... Later, Werbos [124] developed a method known as backpropagation to accelerate the training of multi-layer networks, which feeds the error value back through the network to modify the parameters of each neuron. By computing the gradient of the loss function between the ground-truth value and the predicted value (e.g., mean squared error), the parameters in the hidden layers are updated and gradually approach their optimal values. ...
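The gradient computation described above can be illustrated on a one-parameter model with a mean-squared-error loss; the analytic derivative is checked against a central finite difference, and one gradient step is shown to reduce the loss. The data and learning rate are toy assumptions:

```python
import numpy as np

def loss(w, x, y):
    # mean squared error between prediction w*x and ground truth y
    return float(np.mean((w * x - y) ** 2))

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])   # ground truth generated by w = 2
w = 0.5

# analytic gradient of the MSE loss: dL/dw = mean(2 * (w*x - y) * x)
analytic = float(np.mean(2 * (w * x - y) * x))

# numerical check via central finite difference
eps = 1e-6
numeric = (loss(w + eps, x, y) - loss(w - eps, x, y)) / (2 * eps)

# one gradient-descent update moves the parameter toward its optimal value
w_new = w - 0.01 * analytic
```

In a multi-layer network the same computation is repeated layer by layer via the chain rule, which is exactly what backpropagation organizes.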
Thesis
This dissertation aims at developing a machine learning workflow for solving design-related problems, taking as an example a data-driven structural design method with topological data using graphic statics. It shows the advantages of building machine learning surrogate models for learning the design topology, i.e., the relationships between design elements. It points to a future coexistence of the human designer and the machine, in which the machine learns the appearance of and correlations within design data, while the human supervises the learning process. Theoretically, with the commencement of the age of Big Data and Artificial Intelligence, machine learning is widely applied to design problems. Existing research mainly focuses on machine learning of geometric data; however, the internal logic of a design is represented by its topology, which describes the relationships between design elements. The topology cannot easily be represented in a form the human designer understands, yet it is readable and understandable by the machine, which suggests using machine learning techniques to learn the intrinsic logic of a design through its topology. Technically, we propose to use machine learning as a framework and graphic statics as a supporting method to provide training data, suggesting a new design methodology based on machine learning of the topology. Different from previous geometry-based design, in which only the design geometry is presented and considered, in this new topology-based design the human designer employs the machine and provides training materials showing the topology of a design. The machine finds the design rules related to the topology and applies the trained machine learning models to generate new design cases, as both geometry and topology.
... Extending the perceptron to multiple layers was challenging at the time due to a lack of sufficient computational power (Minsky and Papert 1969). However, these ideas eventually led to the widespread use of artificial neural networks, whose building blocks are nodes, or neurons, and whose weights and biases are adjusted repeatedly in a process called backpropagation (Werbos 1975) in order to minimise a loss function. Artificial neural networks contain at least an input layer, an output layer, and some number of layers in between, referred to as hidden layers. ...
Thesis
Today, a plethora of model-based diffusion MRI (dMRI) techniques exist that aim to provide quantitative metrics of cellular-scale tissue properties. In the brain, many of these techniques focus on cylindrical projections such as axons and dendrites. Capturing additional tissue features is challenging, as conventional dMRI measurements have limited sensitivity to different cellular components, and modelling cellular architecture is not trivial in heterogeneous tissues such as grey matter. Additionally, fitting complex non-linear models with traditional techniques can be time-consuming and prone to local minima, which hampers their widespread use. In this thesis, we harness recent advances in measurement technology and modelling efforts to tackle these challenges. We probe the utility of B-tensor encoding, a technique that offers additional sensitivity to tissue microstructure compared to conventional measurements, and observe that B-tensor encoding provides unique contrast in grey matter. Motivated by this and recent work showing that the diffusion signature of soma in grey matter may be captured with spherical compartments, we use B-tensor encoding measurements and a biophysical model to disentangle spherical and cylindrical cellular structures. We map apparent markers of these geometries in healthy human subjects and evaluate the extent to which they may be interpreted as correlates of soma and projections. To ensure fast and robust model fitting, we use supervised machine learning (ML) to estimate parameters. We explore limitations in ML fitting in several microstructure models, including the model developed here, and demonstrate that the choice of training data significantly impacts estimation performance. We highlight that high precision obtained using ML may mask strong biases and that visual assessment of the parameter maps is not sufficient for evaluating the quality of the estimates. 
We believe that the methods developed in this work provide new insight into the reliability and potential utility of advanced dMRI and ML in microstructure imaging.
Chapter
Nowadays, it is extremely difficult to choose among wines as there are numerous wine manufacturers. In response to the growing customer base for wine, wine companies need to improve their quality and sales. There have been many attempts to develop a methodological approach for the assessment of wine quality. In this paper, machine learning methods such as decision tree, random forest and support vector classifiers are used to assess the quality of two types of wine: red and white. This work takes various ingredients of wine into account to predict its quality. The experiments show the superiority of random forest over decision tree and support vector classifiers.
Keywords: Red wine, White wine, Decision tree, Random forest, Support vector
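As a rough sketch of the comparison described (not the chapter's actual experiments or data), the three classifier families can be run side by side with scikit-learn. Here scikit-learn's built-in wine dataset stands in for the red/white quality data, and all hyperparameters are library defaults:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# stand-in data: scikit-learn's wine cultivar dataset, NOT the
# red/white quality dataset used in the chapter
X, y = load_wine(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("random forest", RandomForestClassifier(random_state=0)),
                  ("svm", SVC())]:
    clf.fit(X_tr, y_tr)
    scores[name] = accuracy_score(y_te, clf.predict(X_te))
```

Holding the train/test split fixed across models, as above, is what makes the accuracy comparison between the three classifiers meaningful.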
Preprint
This paper studies a continuous-time stochastic linear-quadratic (SLQ) optimal control problem on infinite-horizon. A data-driven policy iteration algorithm is proposed to solve the SLQ problem. Without knowing three system coefficient matrices, this algorithm uses the collected data to iteratively approximate a solution of the corresponding stochastic algebraic Riccati equation (SARE). A simulation example is provided to illustrate the effectiveness and applicability of the algorithm.
Preprint
The purpose of this paper is to study the fractal phenomena in large data sets and the associated questions of dimension reduction. We examine situations where the classical Principal Component Analysis is not effective in identifying the salient underlying fractal features of the data set. Instead, we employ the discrete energy, a technique borrowed from geometric measure theory, to limit the number of points of a given data set that lie near a $k$-dimensional hyperplane, or, more generally, near a set of a given upper Minkowski dimension. Concrete motivations stemming from naturally arising data sets are described and future directions outlined.
Chapter
Power error loss (PEL) has recently been suggested as a more efficient generalization of binary or categorical cross entropy (BCE/CCE). However, as PEL requires adapting the exponent q of a power function to the training data and the learning progress, it has been argued that the observed improvements may be due to implicitly optimizing the learning rate. Here we invalidate this argument by optimizing the learning rate in each training step. We find that PEL clearly remains superior to BCE/CCE if q is properly decreased during learning. This shows that the dominant mechanism of PEL is better adaptation to output error distributions, rather than implicit manipulation of the learning rate.
Keywords: Cross entropy, Power error loss, Learning rate, Learning schedule, Random grid path search
Chapter
An online adaptive dynamic programming algorithm is presented to obtain the optimal control policy for a partially unknown system. First, we develop an actor-critic neural network approximation structure based on the integral reinforcement learning approach. Then, we design a novel tuning law for the weights of the neural networks by adding experience replay. Approximate convergence to the optimal control is proven. To check the stability of the adaptive system, we propose a safety check module. Finally, a MATLAB simulation example verifies the theoretical result.
Keywords: Optimal control, Adaptive dynamic programming, Actor-critic, Integral reinforcement learning, Experience replay, Safety check
Chapter
In the field of object recognition, feature descriptors have proven able to provide accurate representations of objects, facilitating the recognition task. In this sense, Histograms of Oriented Gradients (HOG), a descriptor that uses this approach, together with Support Vector Machines (SVM), have proven to be a successful human detection method. In this paper, we propose a scheme consisting of an improved HOG and a classifier with a neural approach to produce a robust system for object recognition. The main contributions of this work are as follows. First, we propose an improved gradient calculation that allows better discrimination by the classifier, which consists of thresholding both the magnitude and the direction of the gradients; this improvement reduces the rate of false positives. Second, although HOG is particularly suited to human detection, we demonstrate that it can represent different objects accurately and even perform well in multi-class applications. Third, we show that a neural classifier is an excellent complement to a HOG-based feature extractor. Finally, experimental results on the well-known Caltech 101 dataset illustrate the benefits of the proposed scheme.
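The thresholded-gradient idea in the first contribution can be sketched as follows; the random test image, the 0.5-quantile magnitude threshold, and the 9-bin unsigned-orientation histogram are illustrative assumptions, not the chapter's exact settings:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))  # stand-in grayscale image

# image gradients: np.gradient returns derivatives along rows then columns
gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)                        # gradient magnitude
ang = np.degrees(np.arctan2(gy, gx)) % 180    # unsigned orientation in [0, 180)

# thresholding step suggested by the chapter: keep only strong gradients,
# discarding weak ones that tend to produce false positives
keep = mag > np.quantile(mag, 0.5)

# 9-bin orientation histogram over surviving pixels, weighted by magnitude
hist, _ = np.histogram(ang[keep], bins=9, range=(0, 180), weights=mag[keep])
hist = hist / (np.linalg.norm(hist) + 1e-12)  # L2 normalisation
```

In full HOG the image is tiled into cells, one such histogram is built per cell, and the block-normalised histograms are concatenated into the feature vector fed to the classifier.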