February 2020 · 77 Reads · 4 Citations
November 2017 · 80 Reads · 4 Citations
March 2014 · 10 Reads · 2 Citations
Psychonomic Science
Fourteen rhesus monkeys and two human Os were trained to discriminate between identical blocks of wood placed 13 in apart, using cues that were provided by a pointer that was placed at random in positions spaced 1.0 in apart between the manipulanda. Monkeys made increasingly more errors as a function of increasing distance between the manipulandum and discriminandum, and extensive practice did not alter this relationship. The human Os, however, made no errors at positions of the pointer other than the center.
January 2004 · 456 Reads · 893 Citations
Topics covered: multiple simultaneous constraints; parallel distributed processing (PDP); examples of PDP models; representation and learning in PDP models; origins of parallel distributed processing.
September 2002 · 881 Reads · 762 Citations
Computational modeling plays a central role in cognitive science. This book provides a comprehensive introduction to computational models of human cognition. It covers major approaches and architectures, both neural network and symbolic; major theoretical issues; and specific computational models of a variety of cognitive processes, ranging from low-level (e.g., attention and memory) to higher-level (e.g., language and reasoning). The articles included in the book provide original descriptions of developments in the field. The emphasis is on implemented computational models rather than on mathematical or nonformal approaches, and on modeling empirical data from human subjects.
December 1999 · 82 Reads · 82 Citations
We discuss the development of a neural network for facial expression recognition. It aims at recognizing and interpreting facial expressions in terms of signaled emotions and level of expressiveness. We use the backpropagation algorithm to train the system to differentiate between facial expressions. We show how the network generalizes to new faces and we analyze the results. In our approach, we acknowledge that facial expressions can be very subtle, and propose strategies to deal with the complexity of various levels of expressiveness. Our database includes a variety of different faces, including individuals of different gender and race, and different features such as glasses, mustaches, and beards. Even given the variety of the database, the network learns fairly successfully to distinguish various levels of expressiveness, and generalizes to new faces as well. Introduction Within the field of computer vision, there has recently been an increasing interest to rec...
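A minimal sketch of the kind of backpropagation classifier this abstract describes. The input size, number of expression classes, and random stand-in data are assumptions for illustration, not the authors' actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy stand-ins for flattened face images and expression labels (assumed sizes).
X = rng.normal(size=(200, 64))          # 200 "images", 64 features each
y = rng.integers(0, 4, size=200)        # 4 hypothetical expression classes
Y = np.eye(4)[y]                        # one-hot targets

# One hidden layer, as in a classic backprop network.
W1 = rng.normal(scale=0.1, size=(64, 32))
W2 = rng.normal(scale=0.1, size=(32, 4))

lr = 0.5
for epoch in range(500):
    # Forward pass.
    H = sigmoid(X @ W1)                 # hidden activations
    P = sigmoid(H @ W2)                 # output activations
    # Backward pass: squared-error gradients propagated through the sigmoids.
    dP = (P - Y) * P * (1 - P)
    dH = (dP @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dP / len(X)
    W1 -= lr * X.T @ dH / len(X)

print("training accuracy:", np.mean(P.argmax(1) == y))
```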
June 1999 · 660 Reads · 8 Citations
We present a speaker-independent, continuous-speech recognition system based on a hybrid multilayer perceptron (MLP)/hidden Markov model (HMM). The system combines the advantages of both approaches by using MLPs to estimate the state-dependent observation probabilities of an HMM. New MLP architectures and training procedures are presented that allow the modeling of multiple distributions for phonetic classes and context-dependent phonetic classes. Comparisons with a pure HMM system illustrate advantages of the hybrid approach both in recognition accuracy and in number of parameters required. 1. INTRODUCTION Hidden Markov models (HMMs) are used in most state-of-the-art continuous-speech recognition systems. This approach is limited by the need for strong statistical assumptions that are unlikely to be valid for speech. Techniques using multilayer perceptrons (MLPs) for probability estimation have recently been introduced [1], which reduce the assumption of independen...
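A hedged sketch of the hybrid idea: the MLP outputs state posteriors P(q | x), and Bayes' rule turns them into scaled likelihoods p(x | q) ∝ P(q | x) / P(q) for use inside the HMM's Viterbi search. The posteriors, priors, and transition matrix below are toy values, not the paper's models:

```python
import numpy as np

posteriors = np.array([                 # MLP output, one row per frame
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.1, 0.3, 0.6],
])
priors = np.array([0.5, 0.3, 0.2])      # state priors from training data
A = np.array([[0.8, 0.2, 0.0],          # left-to-right HMM transitions
              [0.0, 0.8, 0.2],
              [0.0, 0.0, 1.0]])

scaled_lik = posteriors / priors        # the Bayes-rule division

# Standard Viterbi in the log domain over the scaled likelihoods.
logA = np.log(A + 1e-12)
logB = np.log(scaled_lik)
T, S = logB.shape
delta = np.full((T, S), -np.inf)
delta[0, 0] = logB[0, 0]                # assume the path starts in state 0
back = np.zeros((T, S), dtype=int)
for t in range(1, T):
    scores = delta[t - 1][:, None] + logA   # scores[i, j]: i -> j
    back[t] = scores.argmax(0)
    delta[t] = scores.max(0) + logB[t]
path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
print("best state path:", path[::-1])
```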
June 1999 · 60 Reads
Earlier hybrid multilayer perceptron (MLP)/hidden Markov model (HMM) continuous speech recognition systems have not modeled context-dependent phonetic effects, sequences of distributions for phonetic models, or gender-based speech consistencies. In this paper we present a new MLP architecture and training procedure for modeling context-dependent phonetic classes with a sequence of distributions. A new training procedure that "smooths" networks with different degrees of context-dependence is proposed in order to obtain a robust estimate of the context-dependent probabilities. We have used this new architecture to model generalized biphone phonetic contexts. Tests with the speaker-independent DARPA Resource Management database have shown average reductions in word error rates of 20% in both the word-pair grammar and no-grammar cases, compared with our earlier context-independent MLP/HMM hybrid. 1. Introduction Previous work by Morgan, Bourlard, et al. [1, 2] has shown both the...
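A hedged sketch of the "smoothing" idea: interpolate a sparse context-dependent estimate with a robust context-independent one, trusting the context-dependent network more when its context was seen often in training. The count-based weighting rule below is an illustrative choice, not the paper's exact procedure:

```python
import numpy as np

def smoothed_posterior(p_cd, p_ci, context_count, k=100.0):
    """Interpolate context-dependent (p_cd) and context-independent
    (p_ci) phone posteriors; lam -> 1 as the context count grows."""
    lam = context_count / (context_count + k)
    return lam * p_cd + (1.0 - lam) * p_ci

p_cd = np.array([0.80, 0.15, 0.05])     # biphone-context network output
p_ci = np.array([0.50, 0.30, 0.20])     # context-independent output

for n in (10, 100, 1000):               # training examples of this context
    print(n, smoothed_posterior(p_cd, p_ci, n))
```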
June 1999 · 118 Reads · 15 Citations
In this paper we present a hybrid multilayer perceptron (MLP)/hidden Markov model (HMM) speaker-independent continuous-speech recognition system, in which the advantages of both approaches are combined by using MLPs to estimate the state-dependent observation probabilities of an HMM. New MLP architectures and training procedures are presented which allow the modeling of multiple distributions for phonetic classes and context-dependent phonetic classes. Comparisons with a pure HMM system illustrate advantages of the hybrid approach both in terms of recognition accuracy and number of parameters required. 1. INTRODUCTION Hidden Markov models (HMMs) are used in most current state-of-the-art continuous-speech recognition systems. This approach is limited by the need for strong statistical assumptions that are unlikely to be valid for speech. Techniques using multilayer perceptrons (MLPs) for probability estimation have recently been introduced [1] which reduce the assumption o...
June 1999 · 90 Reads · 5 Citations
In this paper we present a training method and a network architecture for estimating context-dependent observation probabilities in the framework of a hybrid hidden Markov model (HMM)/multilayer perceptron (MLP) speaker-independent continuous speech recognition system. The context-dependent modeling approach we present here computes the HMM context-dependent observation probabilities using a Bayesian factorization in terms of context-conditioned posterior phone probabilities, which are computed with a set of MLPs, one for every relevant context. The proposed network architecture shares the input-to-hidden layer among the set of context-dependent MLPs in order to reduce the number of independent parameters. Multiple states for phone models, with different context dependence for each state, are used to model the different context effects at the beginning and end of phonetic segments. A new training procedure that "smooths" networks with different degrees of context-dependence is proposed ...
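A hedged sketch of the shared-layer idea: one input-to-hidden layer feeds a separate output layer per phonetic context, so the context-conditioned posteriors P(phone | x, context) share most of their parameters. The layer sizes, tanh/softmax choices, and random weights are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_phones, n_contexts = 26, 64, 10, 3

W_shared = rng.normal(scale=0.1, size=(n_in, n_hid))       # shared layer
W_out = rng.normal(scale=0.1, size=(n_contexts, n_hid, n_phones))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def context_posterior(x, c):
    """P(phone | x, context=c) from the context-c output layer."""
    h = np.tanh(x @ W_shared)            # hidden layer shared by all contexts
    return softmax(h @ W_out[c])

x = rng.normal(size=n_in)                # one acoustic feature frame
for c in range(n_contexts):
    print(c, context_posterior(x, c).round(3))
```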
... This limitation may result in decreased model performance when dealing with complex patterns and long time-series data. Therefore, in practical applications, CNNs are often combined with other techniques, such as recurrent neural networks (RNNs) [8] or attention mechanisms, to compensate for the shortcomings of receptive fields and enhance the overall effectiveness of fault diagnosis. RNNs are specifically designed to handle sequential data. ...
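A hedged sketch of the CNN-plus-RNN combination the excerpt mentions: a 1D convolution extracts local features from a vibration-style signal, and an LSTM models the longer-range sequence structure the CNN's limited receptive field misses. The layer sizes, class count, and random input are illustrative assumptions:

```python
import torch
import torch.nn as nn

class CNNRNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3),  # local features
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.rnn = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, 1, time)
        f = self.conv(x)                      # (batch, 16, time/4)
        out, _ = self.rnn(f.transpose(1, 2))  # sequence over time
        return self.head(out[:, -1])          # classify from last step

model = CNNRNN()
signal = torch.randn(8, 1, 256)               # 8 toy fault-diagnosis signals
print(model(signal).shape)                    # torch.Size([8, 4])
```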
April 1988
... For example, specific deep neural networks, in particular so-called convolutional networks, are often used as models of human vision (but see Bowers et al. 2022 for criticism). Moreover, the patterns of learning in artificial networks themselves are sometimes held to be informative in one way or another of other phenomena, for example, the past tense of English verbs (Rumelhart and McClelland 1986), or particular optimizations of human vision (Kanwisher, Khosla, and Dobs 2023). ...
November 1993
... Surrogate models are capable of effectively producing the dynamic responses of FWTs under various conditions while maintaining satisfactory accuracy. Various surrogate models have been developed, including polynomial regression models (Tibshirani, 1996), radial basis function models (Buhmann, 2003), Kriging (Krige, 1951), polynomial chaos expansion (PCE), and artificial neural networks (ANN) (Rumelhart et al., 1986). In recent years, the use of surrogate models for FWT-related problems has seen significant developments. ...
September 2002
... There are several methods for estimating the parameters of a language model [Federico 1998]. The most common is maximum likelihood estimation, whose name indicates that the resulting probability distribution of the language model is the one that maximizes the likelihood of the training corpus: ...
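A minimal sketch of the maximum-likelihood estimate the excerpt refers to: for an n-gram language model, the ML probabilities are simply normalized counts from the training corpus, e.g. P(w2 | w1) = count(w1, w2) / count(w1) for bigrams. The tiny corpus here is illustrative only:

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])             # counts of bigram left contexts

def p_ml(w2, w1):
    """P(w2 | w1) = count(w1, w2) / count(w1)."""
    return bigrams[(w1, w2)] / unigrams[w1]

print(p_ml("cat", "the"))   # 2/3: "the" is followed by "cat" 2 times out of 3
```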
October 1992
... An unsupervised machine learning technique, a K-SOM [17,18] or self-organizing feature map, was used to produce a low-dimensional representation of a higher-dimensional data set with complex structures while preserving the topological structure of the data. During the K-SOM analysis, a competitive learning algorithm [19-21] along with a Best Matching Unit (BMU) strategy was employed to identify the "winner" nodes/neurons for an 8 × 8 topology. The cover steps for the initial covering of the input space during the ordering phase of the K-SOM were set to 100. ...
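A hedged sketch of the competitive-learning step the excerpt describes: for each sample, find the Best Matching Unit (BMU) on the map and pull it and its neighbors toward the sample. The 8 × 8 topology matches the excerpt; the input dimensionality, learning rate, and neighborhood schedule are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
grid = 8                                  # 8 x 8 map, as in the excerpt
dim = 5                                   # input dimensionality (assumed)
W = rng.uniform(size=(grid, grid, dim))   # one weight vector per node
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

X = rng.normal(size=(500, dim))           # toy high-dimensional data

for t, x in enumerate(X):
    # Best Matching Unit: the node whose weights are closest to x.
    d = np.linalg.norm(W - x, axis=-1)
    bmu = np.unravel_index(d.argmin(), d.shape)
    # Shrinking neighborhood and learning rate over time (assumed schedule).
    sigma = 3.0 * np.exp(-t / 200)
    lr = 0.5 * np.exp(-t / 200)
    dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
    h = np.exp(-dist2 / (2 * sigma**2 + 1e-12))
    W += lr * h[..., None] * (x - W)      # pull the neighborhood toward x

print("trained map shape:", W.shape)
```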
January 1986
... As Ramsey pointed out, this entire process can be understood without the need for representational explanations. On the other hand, tacit-representations apply to neural networks not because they are triggered by a stimulus, but because the entire network encodes the information through its connections (Rumelhart et al., 1986a, 1986b). Even though this might seem similar to S-representations, tacit-representations do not share structural isomorphism with their target; rather, the representational information is distributed across the network's connections. ...
January 1987
... However, graceful degradation is another important motivation for feedback in interactive activation models (Dell, Chang & Griffin, 1999; McClelland & Elman, 1986 [e.g., pp. 6-7]; McClelland & Rumelhart, 1981, 1989), which turns out to have important implications for the feedback vs. autonomy debate. Graceful degradation seems to be less familiar to most cognitive scientists (e.g., it received no discussion in the Norris et al., 2000, target article or in the accompanying commentaries), although it is one of the original, primary motivations for feedback in interactive activation models (for example, when noise is added to inputs, feedback promotes gradual declines in performance rather than an abrupt collapse; McClelland & Rumelhart, 1981). ...
October 1989
... Rumelhart used his interactive reading model to demonstrate the weaknesses of the bottom-up and top-down models which respectively proceed only in one direction, while, for the interactive model, each level communicates with those immediately above and below it (Rumelhart & McClelland, 1981). The interactive model has the advantage of providing young readers with opportunities to develop foundational reading skills such as letter-sound knowledge which helps them decode familiar and unfamiliar words accurately to comprehend what they are reading. ...
January 1981
... The Backpropagation Neural Network (BPNN) is a popular algorithm for training ANNs, as it acts as a functional estimator between input and output data. Error backpropagation was initially introduced by Werbos (1974), followed by Rumelhart et al. (1986). The BPNN consists of three layers: input, hidden, and output. ...
January 1986
Nature
... As one kind of intelligent optimization algorithm, the BP multi-layer feed-forward ANN algorithm was first proposed by D.E. Rumelhart [31]. In complicated systems with several effective input parameters, an ANN can be used to predict output data. ...
January 1990