Kevin D Shabahang
University of Melbourne | MSD · Melbourne School of Psychological Sciences
BSc Psychology (honours), B.Comp.
About
12 Publications · 1,438 Reads
55 Citations
Introduction
My interests lie at the intersection of memory and language. My approach follows the distributional assumption of semantics and is implemented within an associative framework. The general problem is to specify a computationally tractable account of human information processing that can explain a range of laboratory and non-laboratory paradigms. Whereas classic models depend on a priori specification of control procedures, our work aims to offload as much of the control onto the data as possible. Herbert Simon's parable of the ant on the beach nicely captures the general ethos of the research enterprise.
Publications (12)
Transformer models of language represent a step change in our ability to account for cognitive phenomena. Although the specific architecture that has garnered recent interest is quite young, many of its components have antecedents in the cognitive science literature. In this article, we start by providing an introduction to large language models ai...
Models of word meaning that exploit patterns of word usage across large text corpora to capture semantic relations, like the topic model and word2vec, condense word‐by‐context co‐occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy, etc.). However, their relia...
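A minimal sketch of the word-by-context co-occurrence counting and dimensionality reduction that such models condense (illustrative only; the toy corpus, function names, and SVD step are assumptions, not the specific models discussed in the paper):

    import numpy as np

    def cooccurrence_matrix(corpus, vocab, window=2):
        """Count word-by-context co-occurrences within a sliding window."""
        index = {w: i for i, w in enumerate(vocab)}
        counts = np.zeros((len(vocab), len(vocab)))
        for sentence in corpus:
            tokens = [t for t in sentence if t in index]
            for i, w in enumerate(tokens):
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[index[w], index[tokens[j]]] += 1
        return counts

    def reduced_vectors(counts, k=2):
        """Condense the counts into k dimensions via SVD, yielding word vectors."""
        U, S, _ = np.linalg.svd(np.log1p(counts), full_matrices=False)
        return U[:, :k] * S[:k]

    corpus = [["her", "cat", "sat"], ["her", "dog", "ran"], ["the", "car", "ran"]]
    vocab = ["her", "the", "cat", "dog", "car", "sat", "ran"]
    vectors = reduced_vectors(cooccurrence_matrix(corpus, vocab))
    print(dict(zip(vocab, np.round(vectors, 2).tolist())))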
Without having seen a bigram like “her buffalo”, you can easily tell that it is congruent because “buffalo” can be aligned with more common nouns like “cat” or “dog” that have been seen in contexts like “her cat” or “her dog”—the novel bigram structurally aligns with representations in memory. We present a new class of associative nets we call Dyna...
In a Linear Associative Net (LAN), all input settles to a single pattern, so Anderson, Silverstein, Ritz, and Jones (1977) introduced saturation to force the system to reach other steady states in the Brain-State-in-a-Box (BSB). Unfortunately, the BSB is limited in its ability to generalize because its responses are restricted to previously...
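For context, the BSB dynamics referred to here can be sketched in a few lines: activation is fed back through a Hebbian auto-associative weight matrix and each unit is clipped to [-1, +1] (the "box"), so the state settles toward a stored corner of the hypercube. A minimal sketch under those standard assumptions (the toy patterns and parameter values are invented):

    import numpy as np

    def hebbian_weights(patterns):
        """Outer-product (Hebbian) auto-associative weight matrix."""
        W = np.zeros((patterns.shape[1], patterns.shape[1]))
        for p in patterns:
            W += np.outer(p, p)
        return W / len(patterns)

    def bsb_settle(W, x, alpha=0.2, n_steps=50):
        """Feed activation back through W and clip each unit to [-1, +1]."""
        for _ in range(n_steps):
            x = np.clip(x + alpha * W @ x, -1.0, 1.0)
        return x

    rng = np.random.default_rng(0)
    stored = np.sign(rng.standard_normal((2, 16)))      # two +/-1 patterns
    W = hebbian_weights(stored)
    probe = stored[0] + 0.5 * rng.standard_normal(16)    # noisy version of pattern 0
    print(bsb_settle(W, probe))                          # should settle near a stored corner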
Recognition memory models posit that false alarm rates increase as the global similarity between the probe cue and the contents of memory is increased. Global similarity predictions have been commonly tested using category length designs where it has been found that false alarm rates increase as the number of studied items from a common category is...
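As a rough illustration of the global-similarity logic, a generic summed-similarity rule (not the specific recognition model tested in the paper; all names and parameter values below are invented) predicts that the match for an unstudied lure grows with the number of studied items from its category:

    import numpy as np

    def global_similarity(probe, memory, tau=3.0):
        """Summed similarity of a probe to every stored trace; similarity
        falls off exponentially with distance."""
        dists = np.linalg.norm(memory - probe, axis=1)
        return float(np.sum(np.exp(-tau * dists)))

    rng = np.random.default_rng(1)
    center = rng.standard_normal(20)                   # category prototype
    lure = center + 0.3 * rng.standard_normal(20)      # unstudied category member

    # more studied items from the probe's category -> larger global match,
    # hence a higher predicted false-alarm rate for the lure
    for n in (2, 8, 32):
        studied = center + 0.3 * rng.standard_normal((n, 20))
        print(n, round(global_similarity(lure, studied), 3))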
Linear-Associative-Nets respond with the same pattern regardless of input, motivating Anderson, Silverstein, Ritz, and Jones (1977) to introduce saturation to facilitate other response states in the Brain-State-in-a-Box. The Brain-State-in-a-Box is also limited, however, because it only responds with previously stored patterns. We present a new cla...
We present a new version of the Syntagmatic-Paradigmatic model (SP; Dennis, 2005) as a representational substrate for encoding meaning from textual input. We depart from the earlier SP model in three ways. Instead of two multi-trace memory stores, we adopt an auto-associative network. Instead of treating a sentence as the unit of representation, we...
Propositional accounts of organization in memory have dominated theory in compositional semantics, but it is an open question whether their adoption has been necessitated by the data. We present data from a narrative comprehension experiment, designed to distinguish between a propositional account of semantic representation and an associative accou...
Recall decreases across a series of subspan immediate-recall trials but rebounds if the semantic category of the words is changed, an example of release from proactive interference (RPI). The size of the rebound depends on the semantic categories used and ranges from 0% to 95%. We used a corpus of novels to create vectors representing the meaning o...
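A minimal sketch of how corpus-derived vectors could quantify the semantic shift between successive categories (illustrative only; word_vectors is a hypothetical mapping from word to vector, and the category words are made up, not the materials used in the study):

    import numpy as np

    def category_vector(words, word_vectors):
        """Average the corpus-derived vectors for a category's words."""
        return np.mean([word_vectors[w] for w in words], axis=0)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # hypothetical usage: a larger semantic shift between the old and new
    # categories would predict a larger release from proactive interference
    # old_cat = category_vector(["apple", "pear", "plum"], word_vectors)
    # new_cat = category_vector(["hammer", "wrench", "drill"], word_vectors)
    # semantic_shift = 1.0 - cosine(old_cat, new_cat)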