Michael Spivey

University of California, Merced | UCM · Department of Cognitive Science

Ph.D., Brain & Cognitive Sciences, U. Rochester (Advisor: Mike Tanenhaus)

About

188 Publications
55,327 Reads
11,759 Citations
Introduction
Who You Are: The Science of Connectedness (2020, MIT Press), by Michael J. Spivey. Why you are more than just a brain, more than just a brain-and-body, and more than all your assumptions about who you are. https://mitpress.mit.edu/books/who-you-are https://shepherd.com/best-books/the-mind-as-more-than-a-brain
Additional affiliations
June 2008 - February 2020
University of California, Merced
July 1996 - June 2009
Cornell University
Position
  • Professor
Education
July 1991 - June 1996
University of Rochester
Field of study
  • Brain and Cognitive Sciences
September 1987 - May 1991
University of California, Santa Cruz
Field of study
  • Psychology

Publications

Publications (188)
Article
Full-text available
All scientists use data visualizations to discover patterns in their phenomena that may have otherwise gone unnoticed. Likewise, we also use scientific visualizations to help us describe our verbal theories and predict those data patterns. But scientific visualization may also constitute a hindrance to theory development when new data cannot be acc...
Book
Full-text available
Summary. Why you are more than just a brain, more than just a brain-and-body, and more than all your assumptions about who you are. Who are you? Are you just a brain? A brain and a body? All the things you have done and the friends you have made? Many of us assume that who we really are is something deep inside us, an inner sanctuary that contains...
Conference Paper
Full-text available
A current debate concerns the degree to which moral reasoning is susceptible to bias from low-level perceptual cues. Pärnamets et al. (2015) reported that moral decisions could be biased by manipulating the timing of a prompt to respond via measurement of eye gaze, but these results were critiqued by Newell and Le Pelley (2018) as a potential desig...
Article
Full-text available
If our choices make us who we are, then what does that mean when these choices are made in the human-machine interface? Developing a clear understanding of how human decision making is influenced by automated systems in the environment is critical because, as human-machine interfaces and assistive robotics become even more ubiquitous in everyday li...
Article
Full-text available
While the notion of the brain as a prediction machine has been extremely influential and productive in cognitive science, there are competing accounts of how best to model and understand the predictive capabilities of brains. One prominent framework is of a “Bayesian brain” that explicitly generates predictions and uses resultant errors to guide ad...
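The contrast at issue can be made concrete with a toy example: for a Gaussian belief, explicit Bayesian updating is arithmetically the same as nudging the prediction by a precision-weighted prediction error. The sketch below (Python, hypothetical numbers, not the article's model) shows that equivalence:

# Minimal sketch (hypothetical, not the article's model): Bayesian updating of a
# Gaussian belief, expressed as a precision-weighted prediction-error correction.

def bayes_update(prior_mean, prior_var, obs, obs_var):
    """Posterior of a Gaussian prior after one Gaussian observation."""
    k = prior_var / (prior_var + obs_var)        # precision weighting (Kalman gain)
    prediction_error = obs - prior_mean          # what the "prediction machine" got wrong
    post_mean = prior_mean + k * prediction_error
    post_var = (1.0 - k) * prior_var
    return post_mean, post_var

# Example: a confident prior is nudged only slightly by each noisy observation.
mean, var = 0.0, 1.0
for observation in [2.0, 1.5, 1.8]:
    mean, var = bayes_update(mean, var, observation, obs_var=4.0)
    print(round(mean, 3), round(var, 3))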
Article
Full-text available
Humans interact with other humans at a variety of timescales and in a variety of social contexts. We exhibit patterns of coordination that may differ depending on whether we are genuinely interacting as part of a coordinated group of individuals vs merely co-existing within the same physical space. Moreover, the local coordination dynamics of an in...
Chapter
Full-text available
Bilingual Lexical Ambiguity Resolution, edited by Roberto R. Heredia (January 2020).
Article
Full-text available
Heyes’ book is an important contribution that rightly integrates cognitive development and cultural evolution. However, understanding the cultural evolution of cognitive gadgets requires a deeper appreciation of complexity, feedback, and self-organization than her book exhibits.
Article
Full-text available
In a science of language, it can be useful to partition different formats of linguistic information into different categories, such as phonetics, phonology, semantics, and syntax. However, when the actual phenomena of language processing cross those boundaries and blur those lines, it can become difficult to understand how these different formats o...
Article
Full-text available
The distinction between ontological ground-truth phenomena and epistemic measurements of those phenomena is discussed and analyzed in the context of complex cognitive systems research. A common style of computational simulation is identified as a Dissect-the-Simulation motif. In this style, a model of cognition is designed and when it generally mim...
Article
Full-text available
A few decades ago, cognitive psychologists generally took for granted that the reason we perceive our visual environment as one contiguous stable whole (i.e., space constancy) is because we have an internal mental representation of the visual environment as one contiguous stable whole. They supposed that the non-contiguous visual images that are ga...
Article
Full-text available
In previous decades, the language sciences made important advances by dividing language into its different information formats, such as phonetics, semantics, and syntax. Such division generally implied that language processing is divorced from context. In more recent decades, however, important advances in the language sciences have been made in un...
Conference Paper
Full-text available
Rumors inundate every social network. Some of them are true, but many of them are false. On rare occasions, a false rumor is exposed as the lie that it is. But more commonly, false rumors have a habit of obtaining apparent verification, by corroboration from what seems to be a second independent source. However, in complex social networks, the conn...
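The underlying worry can be illustrated with a small simulation: in a connected network, two seemingly independent reports of a rumor can trace back to one and the same origin. The sketch below uses a made-up friendship graph (names and structure are illustrative, not from the paper):

# Minimal sketch (illustrative only, not the paper's model): in a connected social
# network, two "independent" reports of a rumor can share a single upstream origin.

from collections import deque

# Hypothetical friendship network as an adjacency list.
network = {
    "origin": ["alice", "bob"],
    "alice": ["origin", "carol"],
    "bob": ["origin", "dave"],
    "carol": ["alice", "eve"],
    "dave": ["bob", "eve"],
    "eve": ["carol", "dave"],
}

def rumor_paths(source, target, net):
    """Return all shortest gossip chains from source to target (breadth-first)."""
    paths, queue, best = [], deque([[source]]), None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            continue
        node = path[-1]
        if node == target:
            best = len(path)
            paths.append(path)
            continue
        for nxt in net[node]:
            if nxt not in path:
                queue.append(path + [nxt])
    return paths

# "eve" hears the rumor from both carol and dave: seemingly two sources,
# yet every chain bottoms out at the same origin.
for p in rumor_paths("origin", "eve", network):
    print(" -> ".join(p))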
Article
A number of studies have suggested that perception of actions is accompanied by motor simulation of those actions. To further explore this proposal, we applied transcranial magnetic stimulation (TMS) to the left primary motor cortex during the observation of handwritten and typed language stimuli, including words and non-word consonant clusters. We...
Article
Full-text available
We review a variety of studies in the neural and cognitive sciences that progressively move from the level of neural systems to the level of individual behavior to the level of group behavior. At each step along the way, the evidence suggests that a cognitive process observed at one spatiotemporal scale of analysis is inseparable from the larger su...
Book
Full-text available
By examining a brief history of psycholinguistics and its various approaches to research on sentence processing, we point to a general convergence toward evidence that multiple different linguistic constraints interact in real-time to allow for successful comprehension of a sentence. While some traditions emphasized the unique importance of syntactic...
Chapter
A number of connectionist models (inspired by biological neural networks) have been designed to simulate human data in bilingual word reading tasks. These models have in common a reliance on neuron-like nodes that are connected by a distributed pattern of synapse-like connections. When some nodes become active due to linguistic input, this activati...
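The shared mechanic of such models is easy to sketch: localist nodes hold activation, and each update passes activation through weighted excitatory and inhibitory connections. The snippet below is a generic toy network with made-up weights, not any specific published bilingual model:

# Minimal sketch (generic, made-up weights; not any specific bilingual model):
# localist nodes pass activation through weighted connections each time step.

import numpy as np

nodes = ["input_f", "input_i", "word_fire", "word_fuego"]   # hypothetical letter/word nodes
W = np.array([
    # from: input_f  input_i  word_fire  word_fuego   (rows are the receiving nodes)
    [0.0,  0.0,  0.0,  0.0],   # input_f
    [0.0,  0.0,  0.0,  0.0],   # input_i
    [0.6,  0.5,  0.0, -0.3],   # word_fire  (excited by both letters, inhibited by fuego)
    [0.6,  0.0, -0.3,  0.0],   # word_fuego (excited by "f" only, inhibited by fire)
])

act = np.array([1.0, 1.0, 0.0, 0.0])      # clamp the input nodes "f" and "i"

for step in range(5):
    net_input = W @ act                    # weighted sum of incoming activation
    act = np.clip(act + 0.2 * net_input, 0.0, 1.0)
    act[:2] = 1.0                          # keep inputs clamped
    print(step, dict(zip(nodes, act.round(3))))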
Article
Full-text available
The main question that Firestone & Scholl (F&S) pose is whether “what and how we see is functionally independent from what and how we think, know, desire, act, and so forth” (sect. 2, para. 1). We synthesize a collection of concerns from an interdisciplinary set of coauthors regarding F&S's assumptions and appeals to intuition, resulting in their t...
Article
Although relational reasoning has been described as a process at the heart of human cognition, the exact character of relational representations remains an open debate. Symbolic-connectionist models of relational cognition suggest that relations are structured representations, but that they are ultimately grounded in feature sets; thus, they predic...
Chapter
As the field of psycholinguistics gradually confronts the evidence that language is enmeshed with perceptual-motor processes, it may be in for a shock when it learns that perceptual-motor processes are themselves comprised of the relationship between organism and environment (e.g., ecological perception and active externalism). Therefore, not only...
Article
Full-text available
Eye gaze is a window onto cognitive processing in tasks such as spatial memory, linguistic processing, and decision making. We present evidence that information derived from eye gaze can be used to change the course of individuals' decisions, even when they are reasoning about high-level, moral issues. Previous studies have shown that when an exper...
Article
Full-text available
Learning of feature-based categories is known to interact with feature-variation in a variety of ways, depending on the type of variation (e.g., Markman and Maddox, 2003). However, relational categories are distinct from feature-based categories in that they determine membership based on structural similarities. As a result, the way that they inter...
Conference Paper
Full-text available
In this paper, we propose an auditory search task using a virtual ambisonic environment presented through static Head-Related Transfer Functions (HRTFs). Head-tracking using a magnetometer captures the listener’s orientation and presents an interactive auditory scene. Reaction times from 15 participants are compared for Simple and Complex auditory...
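At its core, static HRTF presentation amounts to convolving a mono source with a left-ear and a right-ear impulse response chosen for the source's direction relative to the tracked head orientation. A minimal sketch, using random stand-in impulse responses rather than the measured set from the study:

# Minimal sketch (hypothetical impulse responses, not the study's apparatus):
# binaural rendering = convolve a mono source with direction-specific HRIRs.

import numpy as np
from scipy.signal import fftconvolve

fs = 44_100
mono = np.random.randn(fs)                       # 1 s of a mono "target" sound

# Stand-in HRIRs for one direction; real ones come from a measured HRTF set.
hrir_left = np.random.randn(256) * np.hanning(256)
hrir_right = np.random.randn(256) * np.hanning(256)

left = fftconvolve(mono, hrir_left)[:fs]
right = fftconvolve(mono, hrir_right)[:fs]
binaural = np.stack([left, right], axis=1)       # samples x 2 channels

# With head tracking, the HRIR pair would be re-selected (or interpolated)
# from the relative source-to-head angle on every orientation update.
print(binaural.shape)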
Article
Full-text available
When humans perform a response task or timing task repeatedly, fluctuations in measures of timing from one action to the next exhibit long-range correlations known as 1/f noise. The origins of 1/f noise in timing have been debated for over 20 years, with one common explanation serving as a default: humans are composed of physiological processes thr...
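A common way to quantify 1/f noise in such series is to fit the slope of log power against log frequency, where a slope near -1 indicates 1/f (pink) noise and a slope near 0 indicates white noise. The sketch below applies that generic recipe (Welch periodogram plus a least-squares fit) to a simulated series; it is not the analysis pipeline of any particular study:

# Minimal sketch (generic method, not a specific study's pipeline): estimate the
# spectral slope of a timing series; a slope near -1 indicates 1/f (pink) noise.

import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
n = 4096
white = rng.standard_normal(n)
freqs_full = np.fft.rfftfreq(n)
freqs_full[0] = freqs_full[1]                      # avoid division by zero at DC
pink = np.fft.irfft(np.fft.rfft(white) / np.sqrt(freqs_full), n)  # toy 1/f series

freqs, power = welch(pink, nperseg=1024)
keep = freqs > 0                                   # drop the DC bin before taking logs
slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(power[keep]), 1)
print(f"estimated spectral slope: {slope:.2f}")    # should land near -1 for this toy series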
Article
Full-text available
Recent studies have shown that, instead of a dichotomy between parallel and serial search strategies, in many instances we see a combination of both search strategies utilized. Consequently, computational models and theoretical accounts of visual search processing have evolved from traditional serial-parallel descriptions to a continuum from 'effi...
Article
Full-text available
Recent research using eye-tracking typically relies on constrained visual contexts within particular goal-oriented tasks: viewing a small array of objects on a computer screen and performing some overt decision or identification. Eye-tracking paradigms that use pictures as a measure of word or sentence comprehension are sometimes touted as ecological...
Article
Full-text available
In the present study, we investigated how degree of certainty modulates anticipatory processes using a modified spatial cuing task in which participants made an anticipatory hand movement with the computer mouse toward one of two probabilistic targets. A cue provided information of the location of the upcoming target with 100% validity (certain con...
Article
Full-text available
Grammatical aspect is known to shape event understanding. However, little is known about how it interacts with other important temporal information, such as recent and distant past. The current work uses computer-mouse tracking (Spivey et al., 2005) to explore the interaction of aspect and temporal context. Participants in our experiment listened t...
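Mouse-tracking studies typically quantify attraction toward the unchosen alternative with trajectory measures such as maximum deviation from the straight start-to-end line. The sketch below computes maximum deviation for a toy trajectory; it is a generic illustration, not the scoring code from this experiment:

# Minimal sketch (generic mouse-tracking measure, not this study's code):
# maximum deviation of a cursor trajectory from the straight start-to-end line.

import numpy as np

def max_deviation(xy):
    """Peak perpendicular distance of the trajectory from the start->end line."""
    xy = np.asarray(xy, dtype=float)
    start, end = xy[0], xy[-1]
    line = end - start
    rel = xy - start
    # 2-D cross product gives (signed) area, i.e. distance times line length.
    cross = rel[:, 0] * line[1] - rel[:, 1] * line[0]
    return np.abs(cross).max() / np.linalg.norm(line)

# Toy trajectory curving toward a competitor before settling on the target.
trajectory = [(0, 0), (5, 40), (25, 80), (60, 95), (100, 100)]
print(round(max_deviation(trajectory), 1))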
Article
In Van Orden and Holden's (2002) article, “Intentional Contents and Self-Control,” they provide an outline for how a research program can preserve the notion of intentionality in goal-directed behavior without reifying it or treating it as a singular efficient cause all its own. In a conventional theoretical framework where intention is treated as...
Article
Full-text available
Eyes move to gather visual information for the purpose of guiding behavior. This guidance takes the form of perceptual-motor interactions on short timescales for behaviors like locomotion and hand-eye coordination. More complex behaviors require perceptual-motor interactions on longer timescales mediated by memory, such as navigation, or designing...
Data
Individual trial examples with fixations. One example image (A) and corresponding drawing (B) from each of the 11 participants, with eye tracking positions down-sampled to 15 Hz to reduce visual clutter. Five of six images are shown twice, and each image is shown at least once. (TIF)
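Down-sampling of this kind can be done by keeping every k-th sample, where k is the ratio of the recording rate to the plotting rate. A minimal sketch, assuming (the caption does not say) a 60 Hz recording thinned to 15 Hz:

# Minimal sketch (recording rate assumed, not given in the caption):
# thin a gaze stream to ~15 Hz for plotting by keeping every k-th sample.

import numpy as np

recorded_hz, target_hz = 60, 15
gaze_xy = np.random.rand(600, 2) * [1024, 768]     # 10 s of fake gaze positions

step = recorded_hz // target_hz                    # keep every 4th sample here
gaze_15hz = gaze_xy[::step]
print(gaze_xy.shape, "->", gaze_15hz.shape)        # (600, 2) -> (150, 2)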
Data
Comparison co-location plot. Plots of co-location functions averaged for each participant (left column) and each image (right column), separated into three comparison conditions: XYgd × XYpd (top), XYgs × XYgd (middle), and XYgs × XYpd (bottom). The periodic pattern in some functions was likely due to differences in sample rates. (TIF)
Data
Saliency maps of stimulus images. Saliency heat maps for each of the six images, overlaid with example samples from their corresponding probability distributions. (TIF)
Data
Allan Factor functions. Plots of Allan factor functions averaged for each participant in the gaze-study (top-left), gaze-draw (top-right), and pen-draw conditions (bottom-left), and for each image (bottom-right). (TIF)
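The Allan factor of a point process at counting-window size T is the mean squared difference between adjacent window counts divided by twice the mean count, AF(T) = mean((N_{i+1} - N_i)^2) / (2 * mean(N_i)); values near 1 across timescales indicate Poisson-like behavior, while growth with T indicates clustering. The sketch below is a generic implementation on simulated event times, not the paper's analysis code:

# Minimal sketch (generic Allan factor, not the paper's analysis code):
# AF(T) = mean((N_{i+1} - N_i)**2) / (2 * mean(N_i)), with N_i = counts in window i.

import numpy as np

def allan_factor(event_times, window):
    """Allan factor of a point process at one counting-window size."""
    edges = np.arange(0.0, event_times.max() + window, window)
    counts, _ = np.histogram(event_times, bins=edges)
    diffs = np.diff(counts)
    return np.mean(diffs ** 2) / (2.0 * np.mean(counts))

rng = np.random.default_rng(1)
fixation_onsets = np.cumsum(rng.exponential(0.3, size=2000))  # Poisson-like toy events

for T in (0.5, 1.0, 2.0, 4.0):
    print(T, round(allan_factor(fixation_onsets, T), 2))      # near 1 for Poisson data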
Data
Ca,b(S) functions. Plots of Ca,b(S) functions averaged per participant for each of the series shown in Figure 3B of the main text. (TIF)
Data
Supplementary Materials and Methods. File contains: Table S1 Means of Ca,b(S). Means of Ca,b(S) functions minus their respective baselines, for each of the conditions shown in Figure 4 from the main text. (DOCX)
Article
Full-text available
Previous research on language comprehension has used the eyes as a window into processing. However, these methods are entirely reliant upon using visual or orthographic stimuli that map onto the linguistic stimuli being used. The potential danger of this method is that the pictures used may not perfectly match the internal aspects of language proce...
Article
Spatial formats of information are ubiquitous in the cognitive and neural sciences. There are neural uses of space in the topographic maps found throughout cortex. There are metaphorical uses of space in cognitive linguistics, physical uses of space in ecological psychology, and mathematical uses of space in dynamical systems theory. These varied i...
Article
Full-text available
Grounded theories assume that there is no central module for cognition. According to this view, all cognitive phenomena, including those considered the province of amodal cognition such as reasoning, numeric, and language processing, are ultimately grounded in (and emerge from) a variety of bodily, affective, perceptual, and motor processes. The de...
Article
Within the context of the theory of embodied cognition, our most frequent motor movements-eye movements-are sure to play an important role in our cognitive processes. Not only do eye movements provide the experimenter with a special window into these cognitive processes, they provide the individual with a way to modify their cognitive processes. Th...
Article
Full-text available
Spoken language comprehension research with eyetracking typically uses concurrent auditory and visual stimuli, often referred to as the visual-world paradigm (Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy, 1995). Even with no visual stimuli at all, directions of saccades are congruent with movement direction in a story (e.g. more downward saccades...
Article
Full-text available
Embodied theories are increasingly challenging traditional views of cognition by arguing that conceptual representations that constitute our knowledge are grounded in sensory and motor experiences, and processed at this sensorimotor level, rather than being represented and processed abstractly in an amodal conceptual system. Given the established e...
Article
Full-text available
Prior research indicates that synchronized tapping performance is very poor with flashing visual stimuli compared with auditory stimuli. Three finger-tapping experiments compared flashing visual metronomes with visual metronomes containing a spatial component, either compatible, incompatible, or orthogonal to the tapping action. In Experiment 1, sy...
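Synchronization performance in such tapping tasks is commonly summarized by tap-to-onset asynchronies (tap time minus the nearest metronome onset) and their variability. The sketch below scores a toy series this way; it is a generic illustration, not the analysis from these experiments:

# Minimal sketch (generic synchronization scoring, not these experiments' analysis):
# asynchrony = tap time minus the nearest metronome onset; report mean and SD.

import numpy as np

ioi = 0.6                                              # 600 ms inter-onset interval
onsets = np.arange(0, 30, ioi)                         # metronome onsets (s)

rng = np.random.default_rng(2)
taps = onsets + rng.normal(-0.03, 0.02, onsets.size)   # taps tend to anticipate the beat

nearest = onsets[np.argmin(np.abs(taps[:, None] - onsets[None, :]), axis=1)]
asynchrony = taps - nearest
print(f"mean asynchrony: {asynchrony.mean()*1000:.0f} ms, SD: {asynchrony.std()*1000:.0f} ms")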
Article
Because of the strong associations between verbal labels and the visual objects that they denote, hearing a word may quickly guide the deployment of visual attention to the named objects. We report six experiments in which we investigated the effect of hearing redundant (noninformative) object labels on the visual processing of multiple objects fro...
Article
Why are people more irritated by nearby cell-phone conversations than by conversations between two people who are physically present? Overhearing someone on a cell phone means hearing only half of a conversation--a "halfalogue." We show that merely overhearing a halfalogue results in decreased performance on cognitive tasks designed to reflect the...
Article
Full-text available
Recent converging evidence suggests that language and vision interact immediately in non-trivial ways, although the exact nature of this interaction is still unclear. Not only does linguistic information influence visual perception in real-time, but visual information also influences language comprehension in real-time. For example, in visual searc...