Article · PDF available

A General Framework for Parallel Distributed Processing

Authors: David E. Rumelhart, Geoffrey E. Hinton, James L. McClelland
... An intuitive way around this difficulty is offered by distributed representations which have played a prominent role in the Parallel Distributed Processing (PDP) tradition (Rumelhart et al., 1986) and in modern deep learning algorithms. Distributed representations entail a many-to-many relationship between a concept and a set of units in a neural network (or neurons in a brain) (Plaut & McClelland, 2010). ...
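As a loose illustration of the many-to-many mapping described above (not drawn from any of the cited works), the toy Python snippet below contrasts a localist code, in which one unit stands for one concept, with a distributed code, in which each concept is a pattern over many shared units; the concepts, vectors, and similarity measure are invented for the example.

```python
# Toy contrast between localist and distributed representations.
# All vectors and concept names here are invented for illustration.
import numpy as np

concepts = ["apple", "banana", "cookie"]

# Localist code: one dedicated unit per concept (one-hot vectors).
localist = {c: np.eye(len(concepts))[i] for i, c in enumerate(concepts)}

# Distributed code: each concept is a pattern over many shared units,
# so related concepts (apple, banana: both fruit) share active units.
distributed = {
    "apple":  np.array([1, 1, 0, 1, 0, 0]),
    "banana": np.array([1, 1, 0, 0, 1, 0]),
    "cookie": np.array([0, 0, 1, 0, 0, 1]),
}

def overlap(a, b):
    """Cosine similarity between two representation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(overlap(localist["apple"], localist["banana"]))        # 0.0: no shared units
print(overlap(distributed["apple"], distributed["banana"]))  # > 0: shared structure
```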
Article
Full-text available
Self-control is core to human well-being. However, the lack of a well-specified, computationally tractable framework for self-control makes it difficult to clarify underlying mechanisms, interpret relevant empirical phenomena, or develop interventions that promote self-control. To help address this gap, we invite consideration of the Comparison with Goal States Model (CGSM) for self-control. The CGSM amplifies activations related to available options whose representations are similar to representations of relevant goals and diminishes activations related to available options whose representations are dissimilar to representations of relevant goals. For example, influenced by healthy eating goals, the CGSM would amplify activations related to an apple and diminish activations related to a cookie, leading to an eventual preference for the apple, even though the cookie might be initially preferred. The CGSM explicates observations related to reaction time in food choice and the dynamics reflected in mouse-tracking trajectories, and showcases a mechanism by which hyperbolic discount curves might emerge in temporal discounting contexts. We use the CGSM to propose theoretical constraints on the nature of self-control and describe how multiple strategies have the potential to promote self-control.
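A minimal sketch of the kind of goal-similarity weighting the abstract describes is given below; the feature vectors, gain parameter, and update rule are illustrative assumptions, not the authors' CGSM implementation.

```python
# Hedged sketch of goal-similarity-based amplification of option activations,
# in the spirit of the CGSM description above. The vectors, gain parameter,
# and update rule are illustrative assumptions, not the authors' model.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented feature vectors for a goal and two options.
goal   = np.array([1.0, 0.9, 0.1])   # e.g., "eat healthily"
apple  = np.array([0.9, 1.0, 0.2])   # similar to the goal
cookie = np.array([0.1, 0.2, 1.0])   # dissimilar to the goal

# Initial activations: the cookie starts out preferred.
activation = {"apple": 0.4, "cookie": 0.6}
options = {"apple": apple, "cookie": cookie}

gain = 0.3  # how strongly goal similarity modulates activation (assumed)
for step in range(10):
    for name, vec in options.items():
        # Amplify options similar to the goal, diminish dissimilar ones.
        activation[name] += gain * (cosine(goal, vec) - 0.5) * activation[name]
    total = sum(activation.values())
    activation = {k: v / total for k, v in activation.items()}  # normalize

print(activation)  # the apple eventually overtakes the initially preferred cookie
```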
... Consistent with a previous study on false memory tasks in people with DS, this study revealed that people with WS also have atypical semantic organization, resulting in unusual lexical semantics in their long-term memory and difficulties with cognitive abilities such as attention, learning, and executive functions. According to the parallel distributed processing model (McClelland & Cleeremans, 2009; Rumelhart, Hinton, & McClelland, 1986), it is hypothesized that the input layer in people with WS is awry, leading to atypical output in language processing and cognitive functions. ...
Preprint
Full-text available
Purpose: People with Williams syndrome (WS, n = 20, CA = 12.5, MA = 8.9) have strong verbal short-term memory but impaired verbal long-term memory. This study aimed to evaluate the memory abilities of people with WS and examine the possibility of improving their memory performance. Methods: The study comprised twelve navigation tasks, with fourteen indexes used for each task. Control groups matched in chronological age (CA, n = 20, mean age = 12.5) and mental age (MA, n = 20, mean age = 8.8), as well as 5th graders (n = 20, mean age = 10.3) and college students (CS, n = 20, mean age = 20.2), were recruited. Results: The WS group was slower and erred more in completing tasks than the CA and MA groups. The error patterns revealed group differences in the number of extra items purchased and missed correct items. The 5th graders erred more than the CS group on extra items purchased; the CA group erred more than the CS group and the 5th graders on missed correct items. The WS group erred more than the CA and MA groups on incorrect items purchased, extra items purchased, missed correct items, and incorrect types purchased. The error patterns included replacements and confusions. Semantic features attributable to atypical processing during navigation emerged in people with WS. Conclusion: These findings suggest that people with WS have developmental deficits in verbal long-term memory. The current study serves as a foundation for future interventions for people with WS.
Article
This study proposes portfolio construction strategies based on novel sentiment, ESG, and SDG scores. We utilize natural language processing to establish a novel daily scoring system that mitigates concerns about differing rating standards. The portfolios are optimized via machine learning algorithms on a monthly basis using daily historical returns. Using equal-weighted portfolios as benchmarks, we empirically show that our optimized portfolios exhibit better trading performance in both the SPX500 and STOXX600 indices. The findings demonstrate that nonlinear models such as random forests, neural networks, and genetic algorithms can outperform other machine learning models in portfolio management.
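As a rough illustration of monthly rebalancing on daily returns (not the authors' pipeline), the sketch below compares an equal-weighted benchmark against a simple long-only mean-variance optimizer; the simulated returns, asset count, and Sharpe-ratio objective are assumptions standing in for the paper's sentiment/ESG/SDG scores and machine learning models.

```python
# Hedged sketch of monthly-rebalanced portfolio construction on daily returns,
# with an equal-weighted benchmark and a simple mean-variance objective standing
# in for the paper's ML-based optimizers. Data and assets are simulated.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_assets, n_days = 5, 252
daily_returns = rng.normal(0.0004, 0.01, size=(n_days, n_assets))  # simulated

def max_sharpe_weights(returns):
    """Long-only weights maximizing in-sample Sharpe ratio (illustrative only)."""
    mu, cov = returns.mean(axis=0), np.cov(returns.T)
    neg_sharpe = lambda w: -(w @ mu) / np.sqrt(w @ cov @ w)
    w0 = np.full(n_assets, 1.0 / n_assets)
    res = minimize(neg_sharpe, w0,
                   bounds=[(0.0, 1.0)] * n_assets,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    return res.x

# Rebalance monthly (~21 trading days) using the trailing history of daily returns.
equal_w = np.full(n_assets, 1.0 / n_assets)
opt_perf, eq_perf = [], []
for start in range(21, n_days - 21, 21):
    w = max_sharpe_weights(daily_returns[:start])     # fit on history
    month = daily_returns[start:start + 21]           # hold for one month
    opt_perf.append((month @ w).sum())
    eq_perf.append((month @ equal_w).sum())

print("optimized monthly returns:   ", np.round(opt_perf, 4))
print("equal-weight monthly returns:", np.round(eq_perf, 4))
```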
Chapter
Full-text available
Artificial intelligence (AI) is rapidly being implemented in the healthcare sphere, threatening the ability of patients to make judgments tailored to their personal circumstances and beliefs. This book is concerned with the ability of two legal systems, those of the UK and the U.S., to meet the resulting challenges posed to patient autonomy. It deploys a forward-looking analysis to identify the unique problems raised by clinical AI and to anticipate the responses that are offered by the common law’s doctrine of informed consent. This assessment culminates in a concrete proposal for the regulation of medical AI and an affirmation of the law’s fundamental role in societies’ adaptation to innovative technologies.
Chapter
The foundation of the connectionist framework stems from connectionism, which simulates the information processing and learning processes of the brain through artificial neural networks. Taking an overview of the framework, we introduce connectionist learning theories, including Hebbian theory, parallel distributed processing, connectionist models, and neural network theory. Next, we explain the basic units of neural networks, including the biological neuron, artificial neuron, spiking neuron, long short-term memory (LSTM) unit, gated recurrent unit (GRU), and capsule unit. Then, we discuss the structures of neural networks, namely feedforward, feedback, and symmetric neural networks. After that, we introduce optimization methods for neural networks, such as backpropagation, evolutionary methods, and weight-agnostic methods that do not require training the network weights. Finally, single- and multi-agent networks are explained with two case studies.
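As a minimal, self-contained sketch of one of the optimization methods listed above, the snippet below trains a small feedforward network with backpropagation on the XOR problem; the architecture, data, and hyperparameters are toy assumptions, not taken from the chapter.

```python
# Minimal feedforward network trained with backpropagation (one of the
# optimization methods listed above). Architecture, data, and hyperparameters
# are toy assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # XOR inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

# One hidden layer of 4 units with sigmoid activations.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```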