Article (PDF available)

A Spreading Activation Theory of Semantic Processing

Abstract

Presents a spreading-activation theory of human semantic processing, which can be applied to a wide range of recent experimental results. The theory is based on M. R. Quillian's (1967) theory of semantic memory search and semantic preparation, or priming. In conjunction with this, several misconceptions concerning Quillian's theory are discussed. A number of additional assumptions are proposed for his theory to apply it to recent experiments. The present paper shows how the extended theory can account for the results of several production experiments by E. F. Loftus; J. F. Juola and R. C. Atkinson's (1971) multiple-category experiment; C. Conrad's (1972) sentence-verification experiments; and several categorization experiments on the effect of semantic relatedness and typicality by K. J. Holyoak and A. L. Glass (1975), L. J. Rips et al. (1973), and E. Rosch (1973). The paper also provides a critique of the Rips et al. model for categorization judgments.
... Accordingly, a great deal of research investigating the structure of semantic memory and the mechanisms governing performance on semantic tasks has been conducted. During the last 30 years, the study of semantic memory has been dominated by both network theories, such as those of Collins and Loftus (1975), and feature theories, such as those of Smith, Shoben, and Rips (1974). Recently, a number of researchers have extended feature-based theories by instantiating them in distributed connectionist attractor networks (e.g., Becker, Hinton, & Shallice, 1991; Masson, 1995; McRae, de Sa, & Seidenberg, 1997). ...
... Results of semantic priming experiments have often been interpreted in terms of spreading-activation theory (Anderson, 1983; Collins & Loftus, 1975; McNamara, 1992). In this account, recognizing a word involves activating its corresponding node in a hierarchically structured semantic network. ...
... In experiments based on semantic network theory, primes and targets have typically been treated as semantically related if they are exemplars of the same superordinate category; for example, beans and peas are both vegetables (Moss et al., 1995). Priming is expected because activation is assumed to spread from the prime to the target via links with their shared category node (Collins & Loftus, 1975). However, an attractor network such as ours does not contain superordinate category nodes, so priming must instead be a featural-overlap epiphenomenon. ...
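The featural-overlap account mentioned in the excerpt above, in which priming arises from shared features rather than from a shared category node, can be sketched with hand-picked feature sets. This is a minimal illustrative sketch: the feature sets, concept names, and the Jaccard measure are assumptions for demonstration, not the attractor network's actual distributed representations.

```python
def feature_overlap(a, b):
    """Jaccard overlap between two concepts, each represented as a set of
    semantic features (a simple stand-in for distributed feature vectors)."""
    return len(a & b) / len(a | b)

# Hypothetical feature sets, invented for illustration.
pea  = {"green", "small", "round", "edible", "grows_in_pod"}
bean = {"green", "small", "edible", "grows_in_pod", "kidney_shaped"}
door = {"rectangular", "wooden", "opens", "man_made"}

# Related exemplars share many features; unrelated concepts share none,
# so only the former are predicted to prime each other.
overlap_related = feature_overlap(pea, bean)
overlap_unrelated = feature_overlap(pea, door)
```

On this account the category label "vegetable" never appears anywhere in the representation; the priming prediction falls out of feature sharing alone.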
... In order to take advantage of these models, we have to face the problem of how to represent semantic information in devices' memory, how semantic information is retrieved and exchanged upon contact between nodes in physical proximity, and how content is finally selected for dissemination, based on the semantic data exchange that has been carried out. For each node, the internal memory representation of semantic concepts is inspired by the associative network (AN) models [26,27] of human memory coming from the cognitive psychology field. In AN models, semantic concepts are represented by nodes that are interconnected by paths that vary in strength, reflecting the degree of association between each pair of concepts. ...
... Thus, we used one of the memory models present in the cognitive science field. Two categories of models are now well established in the literature, namely the Associative Network Models and the Connectionist Models [26,27]. ...
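The associative-network picture described in these excerpts — concepts as nodes, weighted links reflecting degree of association, activation spreading outward from a stimulated concept — can be sketched as follows. This is a minimal illustrative sketch, not the Collins and Loftus model itself; the decay factor, the firing threshold, the toy network, and all names are assumptions.

```python
from collections import defaultdict

def spread_activation(graph, source, decay=0.5, threshold=0.05):
    """Spread activation outward from `source` over a weighted network.

    graph: dict mapping concept -> list of (neighbour, link_strength) pairs,
    where link_strength in (0, 1] reflects the degree of association.
    Activation attenuates by `decay` at each hop, and spreading stops once
    it falls below `threshold`. Each node is activated at most once.
    """
    activation = defaultdict(float)
    activation[source] = 1.0
    frontier, visited = [source], {source}
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbour, strength in graph.get(node, []):
                a = activation[node] * strength * decay
                if a > threshold and neighbour not in visited:
                    activation[neighbour] = a
                    visited.add(neighbour)
                    next_frontier.append(neighbour)
        frontier = next_frontier
    return dict(activation)

# Toy network: "pea" reaches "bean" only via the shared "vegetable" node.
net = {
    "pea":       [("vegetable", 0.9)],
    "vegetable": [("bean", 0.8), ("carrot", 0.7)],
    "bean":      [],
    "carrot":    [],
}
acts = spread_activation(net, "pea")
```

Residual activation on "bean" after stimulating "pea" is what the network account offers as the mechanism of category-mediated priming.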
Preprint
Full-text available
In cyber-physical convergence scenarios, information flows seamlessly between the physical and the cyber worlds. Here, users' mobile devices represent a natural bridge through which users process acquired information and perform actions. The sheer amount of data available in this context calls for novel, autonomous and lightweight data-filtering solutions, where only relevant information is finally presented to users. Moreover, in many real-world scenarios data is not categorised into predefined topics, but is generally accompanied by semantic descriptions possibly describing users' interests. In these complex conditions, user devices should autonomously become aware not only of the existence of data in the network, but also of their semantic descriptions and the correlations between them. To tackle these issues, we present a set of algorithms for knowledge and data dissemination in opportunistic networks, based on simple and very effective models (called cognitive heuristics) coming from the cognitive sciences. We show how to exploit them to disseminate both semantic data and the corresponding data items. We provide a thorough performance analysis under a variety of conditions, comparing our results against non-cognitive solutions. Simulation results demonstrate the superior performance of our solution towards more effective semantic knowledge acquisition and representation, and more tailored content acquisition.
... In the 1960s, efforts were made to produce programmes that could operate on the cognitive frameworks of human beings. Furthermore, different dictionaries were used to disambiguate the word sense of problematic words (Collins & Loftus, 1975). ...
Thesis
Full-text available
ABSTRACT Thesis Title: Unveiling Knowledge Patterns in Intermediate English Textbooks through Voyant Text Mining Tools: A Digital Humanities Study. The contemporary digital era faces the challenge of extracting knowledge patterns from big, diversified data that are difficult to read with the traditional "close reading" method. Likewise, traditional paper textbooks are considered inadaptable and less appealing, so reading them becomes uninteresting, time-consuming and less knowledge-investigative. This dissertation on text mining primarily aims to discover interactive knowledge patterns and innovative, idiosyncratic knowledge-bearing dimensions through "distant reading". To address the research problem, intermediate English textbooks have been analysed with five Voyant tools: Summary, Cirrus, Phrases, Links and Contexts. The main focus of the analysis is the transformation of static traditional Pakistani intermediate English textbooks into interactive data visuals of Summary, Cirrus, Phrases, Links and Contexts. Theoretical triangulation integrates Knowledge Discovery Theory and Hermeneutica Theory. Accordingly, the textbooks have been analysed with mixed methods to explore new interactive knowledge patterns. Results have been displayed in the form of data visualization and tabular, qualitative and quantitative data. The current research finds that the Summary tool precisely quantifies stylometric features of total words, unique words, vocabulary density, average sentence length and the most frequent themes in each piece of writing. Cirrus discovers most of the key themes and characters. The Phrases tool extracts 168 of the most repeated standard collocation patterns. It was also found that the Links tool interrelates almost all key ideas with one another through accurate Knowledge Graphs. Further, the Contexts tool disambiguates word sense by discriminating context, contextual meanings and parts of speech.
The current study contributes by resolving the research problem, saving time through distant reading and adding aesthetic appeal for Voyant users. Finally, the pedagogical implications of the current study introduce autonomous learning and teaching of textbooks, corpus building, visual generation, interesting knowledge-pattern discovery and data unification for libraries. Moreover, the current study also directs students, teachers and publishers toward digitally text-mined learning, teaching and publishing.
... Moreover, since the digital medium externalizes cognitive associations in a hypermedia environment (Riva and Galimberti 1997;Hoffman and Novak 1996), memetic patterns can actually be coded into intelligent content objects. Much like the traditional concept of "associative networks" as patterns that activate cognitive meaning (Collins and Loftus 1975;Anderson and Bower 1973), memetic codes can design apparel shopping websites where digital mannequins and material mirror the mind of the market. ...
Article
Full-text available
Digital markets demand new marketing management competencies. This study advances new marketing management competencies to meet the challenges of an evolving digital market. Until recently, digital marketing strategy has focused on information control and the advantages of computing technology. Future strategic value, however, will be largely derived from collaborative intelligence and applications of intelligent digital content. These future trends point to emerging marketing management information competencies comprised of cross-disciplinary techniques which leverage the malleability of digital content, most notably the confluence of enterprise and ethics intelligence. After identifying the stages of digital market evolution, information competency is defined as the salient skill for achieving digital marketing management success. The delineation of information competency techniques is then extended to formulate a digital marketing competency rubric with confluent strategic dimensions and societal domains. This proposed Information Competency Codes Typology (ICCT) draws upon contributions from marketing management, as well as management information systems, information economics, and computer/information ethics research. As a digital marketing strategy heuristic, the ICCT directs confluence of enterprise objectives with ethics outcomes.
... The latter is built on the two theoretical notions of associative networks in semantic memory and automatic activation. Concepts in semantic memory are assumed to be linked together in the form of associative networks, with associated concepts having stronger links, or being closer together, than unrelated concepts (Collins and Loftus 1975). A stereotypical association might be stored in semantic memory and automatically activated, hence producing an implicit stereotype effect (Devine 1989). ...
Article
Full-text available
Biases in cognition are ubiquitous. Social psychologists suggested biases and stereotypes serve a multifarious set of cognitive goals, while at the same time stressing their potential harmfulness. Recently, biases and stereotypes became the purview of heated debates in the machine learning community too. Researchers and developers are becoming increasingly aware of the fact that some biases, like gender and race biases, are entrenched in the algorithms some AI applications rely upon. Here, taking into account several existing approaches that address the problem of implicit biases and stereotypes, we propose that a strategy to cope with this phenomenon is to unmask those found in AI systems by understanding their cognitive dimension, rather than simply trying to correct algorithms. To this extent, we present a discussion bridging together findings from cognitive science and insights from machine learning that can be integrated in a state-of-the-art semantic network. Remarkably, this resource can be of assistance to scholars (e.g., cognitive and computer scientists) while at the same time contributing to refine AI regulations affecting social life. We show how only through a thorough understanding of the cognitive processes leading to biases, and through an interdisciplinary effort, we can make the best of AI technology.
... Each of these modalities contributes to enriching our multimodal conceptual representations (Dilkina & Lambon Ralph, 2013; Dove, 2011; Reilly, Peelle, Garcia & Crutch, 2016; Vigliocco, Meteyard, Andrews & Kousta, 2009). This view of concepts grounded in different modalities is illustrated in the literature by considerable research within the theoretical framework of embodied cognition (Barsalou, 1999, 2008; Buccino, Colagè, Gobbi & Bonaccorso, 2016; Wilson, 2002) and is opposed to theories of semantics that propose a complete independence between the semantic and sensorimotor systems (Collins & Loftus, 1975; Fodor, 1987; Levelt, 1993). There is now growing and undeniable empirical behavioural and neurophysiological evidence of strong interactions between these systems (for reviews see Binder, 2016; Martin, 2007; Meteyard & Vigliocco, 2008; Meteyard, Cuadrado, Bahrami & Vigliocco, 2012; Patterson, Nestor & Rogers, 2007; Thompson-Schill, 2003). ...
Article
The embodied approach postulates that knowledge and conceptual representations are grounded in action and perception. In order to investigate the involvement of sensorimotor information in conceptual and cognitive processing, researchers have collected various norms in young adults. For instance, perceptual strength (PS) assesses the perceptual experience (i.e. visual, auditory, haptic, gustatory, olfactory) associated with a concept, and body-object interaction (BOI) assesses the ease with which a human body can interact with the referent of a word. The importance of both BOI and PS in the multimodal composition of word meaning is today well recognized. However, given the sensorimotor development of the individual from childhood to later life, it is likely that different age periods are associated with different perceptual experience and capacity to interact with objects. The purpose of this research is to conduct an exploratory investigation of the effect of age on PS and BOI by comparing the evaluation of 270 French-language words by young adults and healthy older people. The results showed that older adults presented similar or even higher PS for some modalities (e.g. gustatory and olfactory), and in particular for certain categories of words, while BOI decreased. In addition to the importance of adjusting the verbal stimuli used in aging studies when dealing with multimodal representations, our results lead us to discuss the evolution of sensorimotor representations with age.
Article
Questions about measurement of individual differences in implicit attitudes, which have been the focus so far in this exchange, should be distinguished from more general questions about whether implicit attitudes exist and operate in our minds. Theorists frequently move too quickly from pessimistic results regarding the first set of questions to pessimistic conclusions about the second. That is, they take evidence that indirect measures such as the implicit association test (IAT) disappoint as individual difference measures and use it to (mistakenly) suggest that people do not in fact have implicit attitudes directed at stigmatized groups. In this commentary, I dissect this mistake in detail, drawing key lessons from a parallel debate that has unfolded in cognitive science about “conflict tasks” such as the Stroop task. I argue that the evidence overall supports a nuanced conclusion: Indirect measures such as the IAT measure individual differences in implicit attitudes poorly, but they—via distinct lines of evidence—still support the view that implicit attitudes exist. This article is categorized under: Psychology > Theory and Methods
Article
Full-text available
When naming a sequence of pictures of the same semantic category (e.g., furniture), response latencies systematically increase with each named category member. This cumulative semantic interference effect has become a popular tool to investigate the cognitive architecture of language production. However, not all processes underlying the effect itself are fully understood, including the question of where the effect originates. While some researchers assume the interface of the conceptual and lexical level as its origin, others suggest the conceptual-semantic level. The latter assumption follows from the observation that cumulative effects, namely cumulative facilitation, can also be observed in purely conceptual-semantic tasks. Another unanswered question is whether cumulative interference is affected by the morphological complexity of the experimental targets. In two experiments with the same participants and the same material, we investigated both of these issues. Experiment 1, a continuous picture naming task, investigated whether morphologically complex nouns (e.g., kitchen table) elicit levels of cumulative interference identical to those of morphologically simple nouns (e.g., table). Our results show this to be the case, indicating that cumulative interference is unaffected by lexical information such as morphological complexity. In Experiment 2, participants classified the same target objects as either man-made or natural. As expected, we observed cumulative facilitation. A separate analysis showed that this facilitation effect can be predicted by individuals' effect sizes of cumulative interference, suggesting a strong functional link between the two effects. Our results thus point to a conceptual-semantic origin of cumulative semantic interference.
Article
This study investigates how skepticism toward a firm's overall CSR practices spills over to consumers' evaluations of an actual incident, by examining recent crises involving Gucci and H&M. A total of 531 responses obtained through an online survey were analyzed, and the results revealed that a low level of consumer skepticism toward a firm's corporate social responsibility (CSR) practices increased trust, which in turn encouraged resilience (forgiveness) intentions regarding the firm's misconduct. Moreover, resilience intention mitigated the extent to which consumers attributed the incident to the firm, perceived the incident as severe, and perceived the firm as having self-interest motives. The findings suggest that a firm's continuous and consistent CSR engagement can play a buffering role in alleviating negative consequences when brand crises occur. The findings of this study will therefore inform risk management by highlighting the protective effect of a firm's continuous CSR efforts.
Preprint
Full-text available
Creative cognition is conceived as the process whereby something novel and appropriate is generated. However, the respective contributions of novelty and appropriateness to creativity are far from understood, especially during the developmental age. Here, we asked children aged 10 to 11 years to perform a word association task under three instructions, which triggered a more appropriate (ordinary), novel (random), or balanced (creative) response. Results revealed that children exhibited greater cognitive flexibility in the creative condition than in the control conditions, as indicated by the structure and resiliency of the semantic networks. Moreover, word embeddings of the responses, extracted from pre-trained deep neural networks, showed that semantic distance and the category-switching index increased in the creative condition with respect to the ordinary condition and decreased compared to the random condition. Our findings provide evidence that children balance novelty and appropriateness to generate creative associations, corroborating previous findings on the adult population and highlighting the relevant contribution of both components to the overall creative process.
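The semantic-distance measure used in studies like the one above is typically the cosine distance between word vectors: a more novel association lies farther from the cue in embedding space. The sketch below illustrates the computation only; the three-dimensional toy vectors are invented for demonstration and stand in for real pre-trained embeddings.

```python
import math

def cosine_distance(u, v):
    """1 minus the cosine similarity of two vectors: 0 for identical
    directions, approaching 2 for opposite ones."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return 1.0 - dot / (norm_u * norm_v)

# Toy 3-d embeddings (illustrative values, not real pre-trained vectors).
emb = {
    "dog":    [0.9, 0.1, 0.0],
    "cat":    [0.8, 0.2, 0.1],
    "galaxy": [0.0, 0.1, 0.9],
}

# An ordinary association (dog -> cat) sits close in embedding space;
# a more novel one (dog -> galaxy) sits farther away.
d_ordinary = cosine_distance(emb["dog"], emb["cat"])
d_novel = cosine_distance(emb["dog"], emb["galaxy"])
```

Averaging such distances over a participant's responses gives a single semantic-distance score per condition.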
Article
Two groups of Ss compared a target word with a memory set consisting of from one to four words (Group W) or from one to four semantic categories (Group C). The Ss made a positive or a negative response to indicate whether or not a target word matched one of the words in the memory set for Group W, or whether or not the target word was an exemplar of one of the categories in the memory set for Group C. Reaction times for negative responses were linear functions of the memory set size for both groups, but the slope of the function for Group C was about four times the slope for Group W. The results were discussed in terms of alternative memory search mechanisms and the possible serial and parallel scanning models that were consistent with the data.

Landauer and Freedman (1968) studied information retrieval from long-term memory in an experiment designed to test the effects of category size on classification time. The Ss were shown single words and identified them as belonging (positive response) or not belonging (negative response) to well-known semantic categories. It was shown that latencies for both positive and negative responses were greater for large categories than for small ones. Two of the explanations offered for the
Article
Reviews research undertaken to create a theory of human natural language understanding. It is noted that those who have made attempts to solve the problem have had to restrict the domain of the particular problem that they were trying to solve, and sacrifice theoretical considerations for programming considerations. Psychiatric interviewing programs that have been written have not attempted to do much more than demonstrate that it is possible to have conversations with machines. It is suggested that what is needed, and what has been lacking, is a cohesive theory of how humans understand natural language without regard to particular subparts of that problem. A theory is described which is also intended to be a basis for computer programs that understand natural language. The initial premise of the theory is that the basis of natural language is conceptual. This base is interlingual, its elements are concepts not words. The conceptual content underlying an utterance is its meaning. Components of the system are a syntactic processor, a conceptual processor, and memories. Dependency relations between concepts form a network constituting the conceptual base. (29 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
The Teachable Language Comprehender (TLC) is a program designed to be capable of being taught to “comprehend” English text. When text which the program has not seen before is input to it, it comprehends that text by correctly relating each (explicit or implicit) assertion of the new text to a large memory. This memory is a “semantic network” representing factual assertions about the world. The program also creates copies of the parts of its memory which have been found to relate to the new text, adapting and combining these copies to represent the meaning of the new text. By this means, the meaning of all text the program successfully comprehends is encoded into the same format as that of the memory. In this form it can be added into the memory. Both factual assertions for the memory and the capabilities for correctly relating text to the memory's prior content are to be taught to the program as they are needed. TLC presently contains a relatively small number of examples of such assertions and capabilities, but within the system, notations for expressing either of these are provided. Thus the program now corresponds to a general process for comprehending language, and it provides a methodology for adding the additional information this process requires to actually comprehend text of any particular kind. The memory structure and comprehension process of TLC allow new factual assertions and capabilities for relating text to such stored assertions to generalize automatically. That is, once such an assertion or capability is put into the system, it becomes available to help comprehend a great many other sentences in the future. Thus the addition of a single factual assertion or linguistic capability will often provide a large increment in TLC's effective knowledge of the world and in its overall ability to comprehend text. The program's strategy is presented as a general theory of