Article
Literature Review

Why cognitive penetration of our perceptual experience is still the most plausible account

... The reported studies use adjustment or matching of colour/luminance as a way of assessing participants' perceptual processes, which prevents perceptual effects from being confounded with post-perceptual judgements (Marchi & Newen, 2015), which, according to defenders of impenetrability, would invalidate conclusions of cognitive penetrability. These experiments measured perceptual experience directly, without having to rely on memory (Newen & Vetter, 2017). Moreover, the methodology used in the three studies made it possible to rule out some alternative explanations of the reported findings besides penetrability. ...
... Despite these criticisms, the findings of Hansen et al. (2006) and of Levin and Banaji (2006) reaffirm the idea that early vision goes beyond the encoding and processing of the physical properties of stimuli, and they would count as evidence for the penetrability of early vision, since they show that early vision is influenced by abstract concepts or memorized visual templates, which are too complex to be generated or processed by early visual areas alone (Newen & Vetter, 2017). ...
... The findings presented in this review would support the modular thesis of authors such as Pinker (2005), who hold that modularity does not necessarily imply encapsulation. Thus, there can be brain areas highly specialized in processing a given type of information without this implying impenetrability (Newen & Vetter, 2017). Under this relaxation of the encapsulation requirement, it is reasonable to posit the existence of other non-peripheral, non-encapsulated mental modules. ...
Article
Full-text available
Against the theoretical background of Fodor's (2001) and Pinker's (2005) modular conceptions of the mind, the aim of this paper is to qualitatively analyse the strength of the experimental evidence in a sample of articles published between 2002 and 2017 that support the thesis of cognitive penetrability in early visual perception. The study is justified by the implications these findings may have for different conceptions of mental architecture in perceptual functions, intra- and inter-modular information processing, and isomorphism between mental and brain architecture. The methodology involved stating the thesis and the inclusion criteria for the articles to be reviewed, final selection of the most representative articles in the chosen subareas, analysis of their methodological quality and results, identification of each study's specific contribution to the thesis, and interpretation and synthesis of the findings. Of 26 articles reviewed on the topic, 7 are reported and analysed, considered representative of 4 subareas: penetrability of expectations, of colour perception, of facial features, and of object recognition. It is concluded that there is broad and solid converging (perceptual and neurophysiological) evidence for penetration phenomena in early vision, which would indirectly support Pinker's hypothesis of the permeability of mental modules. Recommendations are made regarding aspects for further research and variables to control in experiments on this topic.
... In this work, we will review and evaluate existing anatomical and functional data suggesting that early vision consists of two processing components that work in parallel, but independently of each other by utilizing distinct communication channels. Arguments for CPV, reviewed above, rely heavily on the dominance of the FB pathway in shaping visual processing (Newen & Vetter, 2017; Vetter & Newen, 2014), but whether this is so remains unclear. There are two parallel counterstreams that run through the visual hierarchy, each consisting of FF and FB pathways (Markov et al., 2014; Vezoli et al., 2021). ...
... Its FB pathway is constrained by FF activity and it participates in input amplification, as described in Section 3, and in contextual interactions that contribute to contour integration and figure-ground organization, as described in Section 4. By contrast, the infragranular counterstream is an interface for unconstrained interaction between vision and cognition. The infragranular FB pathway allows for a global integration of vision with a wide range of extra-visual sources including audition, touch, object recognition, emotions, and efferent copies of motor commands (Newen & Vetter, 2017; O'Callaghan et al., 2017; Vetter & Newen, 2014). In addition, it may support mental imagery (Koenig-Robert & Pearson, 2020) and memory retrieval (Takeda et al., 2018). ...
Article
According to a predictive coding framework, visual processing involves the computation of prediction errors between sensory data and a generative model that is supplied via feedback projections. This implies that vision is cognitively penetrable by all sorts of top-down influences. In this paper, we review anatomical and functional data which suggest that feedforward and feedback projections are organized into two parallel processing streams: the supragranular and the infragranular counterstreams. The supragranular counterstream computes surface and motion representation in depth. It represents the best interpretation of what is given in the input image based on physical regularities that are built into this network. By contrast, the infragranular counterstream integrates vision with cognition, because it represents what is likely to be found in the environment based on the predictions derived from learned statistical regularities. The two counterstreams work in parallel, but independently of each other. They compete for dominance, and only one is allowed to deliver its output to higher-order areas at any instant in time. Such an arrangement allows the supragranular counterstream to remain cognitively impenetrable to top-down influences.
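The error-minimization loop at the heart of the predictive coding framework described in this abstract can be sketched in a few lines. This is a generic, illustrative toy (a single scalar channel with a made-up learning rate), not the counterstream model the article proposes:

```python
# Minimal predictive-coding sketch (illustrative assumption, not the
# article's model): an error unit computes the mismatch between sensory
# input and a top-down prediction, and the prediction is nudged to
# reduce that error on each pass.

def predictive_coding_step(sensory, prediction, rate=0.1):
    """Return (updated prediction, prediction error) for one scalar channel."""
    error = sensory - prediction          # bottom-up prediction error
    return prediction + rate * error, error

sensory_input = 1.0    # hypothetical sensory evidence
prediction = 0.0       # initial top-down prediction
for _ in range(100):
    prediction, error = predictive_coding_step(sensory_input, prediction)
# repeated updates drive the prediction toward the input and shrink the error
```

Repeated updates drive the prediction toward the sensory evidence, which is the sense in which a top-down generative model can be said to shape what is represented.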
... Several authors have observed that predictive coding implies cognitive penetrability of vision (CPV), suggesting that generative models directly modulate and shape visual perception (Newen & Vetter, 2017). For example, Lupyan (2015a, p. 547) explicitly stated that "expectations, knowledge, and task demands can shape perception at multiple levels, leaving no part untouched." ...
... An opposing perspective argues for the complementarity of perception and cognition based on the idea that cognition supplies contextual information that may disambiguate contradictory sensory evidence, or it may fill in missing parts (Goldstone et al., 2015; Lupyan, 2012). A large amount of behavioral and brain data has been accumulated over the years suggesting that vision is indeed cognitively penetrable (Newen & Vetter, 2017). Such findings fit well within a predictive coding framework. ...
Thesis
This doctoral thesis aims to develop new neural network models that will explore how feedback projections in the visual cortex contribute to top-down modulations of visual perception. Two types of top-down effects are considered: 1) Selective visual attention and 2) prior expectations. The models represent modifications and extensions of previously published models of lateral inhibition and adaptive resonance theory. The proposed models are thoroughly evaluated using computer simulations implemented in MATLAB. The models’ outputs are compared with behavioral and neural data. The first part of this thesis develops a model of the recurrent competitive network with the ability to flexibly orient attention in a spatial map to either a single location in space, all locations occupied by an object, or all locations occupied by the feature value. To achieve this property, the network was augmented by biophysically plausible mechanisms emulating properties of synaptic and dendritic computation. The proposed network can simulate object-based attention and implement visual routines, such as mental contour tracing, when further embedded in a more extensive multi-scale neural architecture for boundary detection. The second part of this thesis develops a neural network for color perception based on adaptive resonance theory. The model explains how feedback projections contribute to the stable learning of color codes and conscious experience of colors. The model demonstrates that the same mechanisms that assure learning stability are also responsible for constraining the effect of top-down expectations on color perception. In general, the model indicates that top-down predictions, to a large extent, do not alter the content of conscious visual perception.
... The first treats multisensory integration in V1 as bottom-up, and horizontal, rather than hierarchical (Watkins et al. 2006). However, given the prevalence of top-down multisensory signals reviewed in the last section, this leads to the suggestion that bottom-up processes in V1 are subject to top-down 'cognitive penetration' (Vetter and Newen 2014; Newen and Vetter 2017). What this means is that these top-down signals are influencing our visual experience. ...
... The second, more radical, but increasingly orthodox paradigm in the multisensory literature treats perception as one large predictive process (Rao and Ballard 1999; Keller and Mrsic-Flogel 2018; Petro et al. 2017; Newen and Vetter 2017 also frame their work in these terms). On this account, perception is hierarchical, and the perception/cognition distinction is effectively eradicated. ...
Article
Full-text available
We typically distinguish between V1 as an egocentric perceptual map and the hippocampus as an allocentric cognitive map. In this article, we argue that V1 also functions as a post-perceptual egocentric cognitive map. We argue that three well-documented functions of V1, namely (i) the estimation of distance, (ii) the estimation of size, and (iii) multisensory integration, are better understood as post-perceptual cognitive inferences. This argument has two important implications. First, we argue that V1 must function as the neural correlates of the visual perception/cognition distinction and suggest how this can be accommodated by V1’s laminar structure. Second, we use this insight to propose a low-level account of visual consciousness in contrast to mid-level accounts (recurrent processing theory; integrated information theory) and higher-level accounts (higher-order thought; global workspace theory). Detection thresholds have been traditionally used to rule out such an approach, but we explain why it is a mistake to equate visibility (and therefore the presence/absence of visual experience) with detection thresholds.
... There is growing evidence that perception is affected by higher-level cognitive states, like the desirability of objects [6], action capabilities [7-9], arousal [10], and categorical knowledge [11-15]. However, whether perception actually is [44,45] or is not [46-48] penetrable by cognition is presently a hotly debated topic. There is strong evidence from theoretical, behavioral, neurophysiological and clinical studies supporting the predictive coding framework of perception, in which top-down pathways send information based on prior experiences, which is then combined with incoming sensory information [44,45,49,50]. ...
... However, whether perception actually is [44,45] or is not [46-48] penetrable by cognition is presently a hotly debated topic. There is strong evidence from theoretical, behavioral, neurophysiological and clinical studies supporting the predictive coding framework of perception, in which top-down pathways send information based on prior experiences, which is then combined with incoming sensory information [44,45,49,50]. For instance, imagined and real natural sounds (e.g. ...
Article
Full-text available
The memory of an object’s property (e.g. its typical colour) can affect its visual perception. We investigated whether memory of the softness of everyday objects influences their haptic perception. We produced bipartite silicone rubber stimuli: one half of the stimuli was covered with a layer of an object (sponge, wood, tennis ball, foam ball); the other half was uncovered silicone. Participants were not aware of the partition. They first used their bare finger to stroke laterally over the covering layer to recognize the well-known object and then indented the other half of the stimulus with a probe to compare its softness to that of an uncovered silicone stimulus. Across four experiments with different methods we showed that silicone stimuli covered with a layer of rather hard objects (tennis ball and wood) were perceived as harder than the same silicone stimuli when covered with a layer of rather soft objects (sponge and foam ball), indicating that haptic perception of softness is affected by memory.
... A similar ongoing debate is observed in psychology. Theoretical approaches (Balcetis, 2016; Churchland et al., 1994; Collins and Olson, 2014; Hohwy, 2013, 2017; Lupyan, 2012, 2015; Lupyan et al., 2010; Newen and Vetter, 2017; Vetter and Newen, 2014) and empirical evidence seem to show that higher influences on visual processing have consequences for human behaviour. Penetrating higher states might harm social interaction if faces look angrier than they really are (Zhang et al., 2017), discourage actions if distances or heights look bigger than expected (Storbeck and Stefanucci, 2014; Stefanucci and Proffitt, 2009), alter the performance of a task if objects look different (den Daas et al., 2013; Witt and Proffitt, 2005), affect business if beverages taste less palatable than they normally do (Harrar et al., 2011; Piqueras-Fiszman and Spence, 2012; Wanab et al., 2015), and the like. ...
... Some researchers have focused on understanding the nature of bottom-up and top-down processes in visual perception (Teufel and Nanay, 2017; see Rauss et al., 2011; Gilbert and Sigman, 2007; Gilbert and Li, 2013; for scientific literature). Other philosophers have argued for the necessity of top-down signals to achieve perceptual computation (Marchi and Newen, 2015; Newen and Vetter, 2017; see Cheung and Bar, 2014; O'Callaghan et al., 2017; Piëch et al., 2013; for non-philosophical sources). Furthermore, theoretical approaches have postulated that top-down effects result from predictive processes in the brain (Clark, 2013; see Fenske et al., 2006; Hohwy, 2013, 2017; Lupyan, 2015; for scientific references). ...
Article
Cognitive and affective penetration of perception refers to the influence that higher mental states such as beliefs and emotions have on perceptual systems. Psychological and neuroscientific studies appear to show that these states modulate the visual system at the visuomotor, attentional, and late levels of processing. However, empirical evidence showing that similar consequences occur in early stages of visual processing seems to be scarce. In this paper, I argue that psychological evidence does not seem to be either sufficient or necessary to argue in favour of or against the cognitive penetration of perception in either late or early vision. In order to do that we need to have recourse to brain imaging techniques. Thus, I introduce a neuroscientific study and argue that it seems to provide well-grounded evidence for the cognitive penetration of early vision in face perception. I also examine and reject alternative explanations to my conclusion.
... In fact, I will ignore the third claim since it is mostly based on a study that "does not directly show influence on visual perception per se" (Newen & Vetter, 2017, p. 31, their emphasis). What I want to propose is that if one approaches the strong impenetrability claim in terms of neurophysiology, it is better to first focus on the neural correlates of visual experiences and then consider the reasons why those neural correlates are or could be influenced by other (sub)cortical areas. In practice, this means that three interrelated questions must be addressed: First, what are the neural correlates of the contents of our perceptual experiences? ...
... Their argumentation for cognitive penetration can be objected to, however, because they explicitly exclude perceptual learning from their consideration. They do so because perceptual learning cannot explain their last example (Newen & Vetter, 2017, p. 32). This line of reasoning is highly unusual because candidates for cases of cognitive penetration differ considerably, and thus it is common to explain different candidates by different means. ...
Article
Full-text available
Albert Newen and Petra Vetter argue that neurophysiological considerations and psychophysical studies provide striking evidence for cognitive penetration. This commentary focuses mainly on the neurophysiological considerations, which have thus far remained largely absent in the philosophical debate concerning cognitive penetration, and on the cognitive penetration of perceptual experiences, which is the form of cognitive penetration philosophers have debated the most. It is argued that Newen and Vetter's evidence for cognitive penetration is unpersuasive because they do not sufficiently scrutinize the details of the empirical studies they make use of; such details are crucial also when the studies are used in philosophical debates. The foregoing does not mean that cognitive penetration could not occur. Quite the contrary, details of the feedback connections to the visual perceptual module and one of the candidates presented by Newen and Vetter suggest that cognitive penetration can occur in rare cases.
... Some biases in behaviour may be thus explained if, for example, the mere perception of a Black face triggers an affective response that biases behaviour (Azevedo et al. 2017), or if the visual perception of mouth movements influences which phoneme is heard (McGurk and MacDonald 1976). Determining whether the influencing states or associations are "cognitive enough" to count as cognitive penetration is increasingly at odds with current views where there is no sharp distinction between cognition and perception (Newen and Vetter 2017). Nevertheless, the distinction could be maintained as a matter of degree by determining whether the effect is responsive to other clearly cognitive states, such as explicit beliefs or intentions (Deroy 2019). ...
Chapter
Full-text available
Cognitive states, such as beliefs, desires and intentions, may influence how we perceive people and objects. If this is the case, are those influences worse when they occur implicitly rather than explicitly? Here we show that cognitive penetration in perception generally involves an implicit component. First, the process of influence is implicit, making us unaware that our perception is misrepresenting the world. This lack of awareness is the source of the epistemic threat raised by cognitive penetration. Second, the influencing state can be implicit, though it can also be or become explicit. Being unaware of the content of the influencing state, we argue, does not make as much difference to the epistemic threat as it does to the epistemic responsibility of the agent. Implicit influencers cannot be examined for their accuracy and justification, and cannot be voluntarily accepted by the perceiver. Conscious awareness, however, is not sufficient for attributing blame to the agent. An equally important condition is the degree of control that they can exercise to change the contents that influence perception or stop their influence. Here we suggest that such control can also result from social influence, and that cognitive penetrability of perception is therefore also a social issue.
... The proponents of cognitive/agency penetrability base their arguments on points like (a) the presence of downstream projections to the sensory areas in the brain (Newen and Vetter, 2017; O'Callaghan et al., 2017), (b) the visual system is not encapsulated or modular (Masrour et al., 2015; Ogilvie and Carruthers, 2016; Briscoe, forthcoming), (c) there is no distinction between perception and cognition (Vetter and Newen, 2014; Lupyan, 2015), (d) perception is for action (Nanay, 2012; Gross and Proffitt, 2014), (e) perception is theory-laden (Brewer, 2015), (f) perception is predictively coded (Clark, 2014; Lupyan and Clark, 2015; Litwin, 2017; Newen, Marchi and Brössel, 2017), (g) the presence of perceptual learning (Fridland, 2015; but see Arstila, 2016), etc. However, the proposal of top-down penetrability into perceptions is not universally accepted. ...
Thesis
Full-text available
The sense of agency (SoA) as conceived in experimental paradigms adheres to "cognitive penetration" and "cognitive phenomenology." Cognitive penetrability is the assumption that agency states penetrate sensory modalities like time perception – the intentional binding (IB) hypothesis – and auditory, visual and tactile perceptions – the sensory attenuation (SA) hypothesis. Cognitive phenomenology, on the other hand, assumes that agency states are perceptual or experiential, akin to sensory states. I critically examine these operationalizations and argue that the SoA is a judgment effect rather than a perceptual/phenomenal state. My thesis criticizes the experimentally operationalized implicit SoA (in chapter 2), explicit SoA (in chapter 3) and cue-integrated SoA (in chapter 4) by arguing that: (a) there is uncertainty in the SoA experimental operationalization (making the participants prone to judgment effects); (b) there are inconsistencies and incoherence between different findings and reports in the SoA domain; (c) the SoA reports are influenced by prior as well as online-generated beliefs (under uncertainty); (d) the SoA operationalizations used an inaccuracy or approximation standard for measuring the perception/experience of agency; (e) under a certainty and accuracy standard (for perception), the (biased or nonveridical) SoA reports might not have occurred at all; and (f) the reported inconsistencies and the effects of beliefs can be parsimoniously accounted for by the compositional nature of judgment. Thus, my thesis concludes that SoA reports are not instances of feelings/perceptions but are judgments.
... The proponents of cognitive/agency penetrability base their arguments on points like (a) the presence of downstream projections to the sensory areas in the brain (Newen and Vetter, 2017; O'Callaghan et al., 2017), (b) the visual system is not encapsulated or modular (Masrour et al., 2015; Ogilvie and Carruthers, 2016; Briscoe, forthcoming), (c) there is no distinction between perception and cognition (Vetter and Newen, 2014; Lupyan, 2015), (d) perception is for action (Nanay, 2012; Gross and Proffitt, 2014), (e) perception is theory-laden (Brewer, 2015), (f) perception is predictively coded (Clark, 2014; Lupyan and Clark, 2015; Litwin, 2017; Newen, Marchi and Brössel, 2017), (g) the presence of perceptual learning (Fridland, 2015; but see Arstila, 2016), etc. However, the proposal of top-down penetrability into perceptions is not universally accepted. ...
Article
How does one know that (s)he is the causal agent of their motor actions? Earlier theories of the sense of agency have attributed the capacity for perception of self-agency to the comparator process of the motor-control/action system. However, with the advent of findings implying a role for non-motor cues (like affective states, beliefs, primed concepts, and social instructions or previews of actions) in the sense-of-agency literature, the perception of self-agency is hypothesized to be generated even by non-motor cues (based on their relative reliability or weighting estimate); this theory has come to be known as the cue-integration theory of the sense of agency. However, cue-integration theory motivates skepticism about whether it is falsifiable and whether it is plausible that non-motor cues that are sensorily unrelated to typical sensory processes of self-agency have the capacity to produce a perception of self-agency. To substantiate this skepticism, I critically analyze the experimental operationalizations of cue integration – with the (classic) vicarious agency experiment as a case study – to show that (1) the participants in these experiments are ambiguous about their causal agency over motor actions, (2) these participants therefore resort to reports of self-agency as heuristic judgments (under ambiguity) rather than due to cue integration per se, and (3) cue-integration-based self-agency reports might not have occurred if these experimental operationalizations had eliminated ambiguity about causal agency. Thus, I conclude that the reports of self-agency (observed in typical non-motor-cue-based cue-integration experiments) are not instances of a perceptual effect – hypothesized to be produced by non-motor cues – but of a heuristic judgment effect.
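The reliability-based weighting that cue-integration theory appeals to is usually formalized as inverse-variance weighting. The sketch below is a generic Bayesian cue-combination toy with made-up numbers, not the specific operationalization criticized in the article:

```python
# Reliability-weighted (inverse-variance) cue combination: each cue's
# weight is proportional to 1/variance, so reliable cues dominate the
# combined agency estimate. All numbers are purely illustrative.
def integrate_cues(estimates, variances):
    """Combine cue estimates, weighting each by its inverse variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# a reliable motor cue (variance 0.1) dominates a noisy non-motor cue (0.4)
combined = integrate_cues([0.8, 0.2], [0.1, 0.4])  # lands closer to 0.8
```

On this scheme a low-variance cue pulls the combined estimate toward itself, which is what makes the theory hard to falsify: almost any report can be attributed to some weighting of cues.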
... Recent research also stresses that vision is a flexible mechanism, influenced in a top-down fashion by endogenous factors, such as attention, memory, and emotion (Panichello et al. 2013;Newen and Vetter 2017). Indeed, healthy individuals use SF flexibly depending on the needs of the tasks and categories of visual stimuli to be processed (Morrison and Schyns 2001). ...
Article
Full-text available
Rationale Visuo-perceptive deficits in severe alcohol use disorder (SAUD) remain little understood, notably regarding the respective involvement of the two main human visual streams, i.e., magnocellular (MC) and parvocellular (PC) pathways, in these deficits. Besides, in healthy populations, low-level visual perception can adapt depending on the nature of visual cues, among which emotional features, but this MC and PC pathway adaptation to emotional content is unexplored in SAUD. Objectives To assess MC and PC functioning as well as their emotional modulations in SAUD. Methods We used sensitivity indices (d′) and repeated-measures analyses of variance to compare orientation judgments of Gabor patches sampled at various MC- and PC-related spatial frequencies in 35 individuals with SAUD and 38 matched healthy controls. We then explored how emotional content modulated performances by introducing neutral or fearful face cues immediately before the Gabor patches and added the type of cue in the analyses. Results SAUD patients showed a general reduction in sensitivity across all spatial frequencies, indicating impoverished processing of both coarse and fine-scale visual content. However, we observed selective impairments depending on facial cues: individuals with SAUD processed intermediate spatial frequencies less efficiently than healthy controls following neutral faces, whereas group differences emerged for the highest spatial frequencies following fearful faces. Altogether, SAUD was associated with mixed MC and PC deficits that may vary according to emotional content, in line with a flexible but suboptimal use of low-level visual content. Such subtle alterations could have implications for everyday life’s complex visual judgments.
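The sensitivity index d′ used in the study above is the standard signal-detection measure: the difference between the z-transformed hit rate and false-alarm rate. A minimal sketch with illustrative rates (not data from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# rates placed symmetrically around 0.5 give a d' of roughly 2
sensitivity = d_prime(0.84, 0.16)
```

Computing d′ separately for each spatial-frequency condition, as the study does, separates true discriminability from response bias.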
... Exploring MC and PC pathways is also relevant regarding bidirectional influences between low-level vision and high-level cognition, including attentional, executive, and affective processes (Barrett and Bar, 2009;Newen and Vetter, 2017;O'Callaghan et al., 2017). Indeed, bidirectional visual feedback depends on their integrity and smooth interplay: The MC pathway promotes a rapid but coarse analysis of incoming visual information thanks to its high temporal but low spatial frequency sensitivity. ...
Article
Visuospatial impairments have long been reported in severe alcohol use disorder (SAUD) but remain poorly understood, notably regarding the involvement of magnocellular (MC) and parvocellular (PC) pathways. This empirical gap hampers understanding the implications of these visual changes, especially since the MC and PC pathways are thought to sustain central bottom-up and top-down processes during cognitive processing. They thus influence our ability to efficiently monitor our environment and make the most effective decisions. To overcome this limitation, we measured PC-inferred spatial and MC-inferred temporal resolution in 35 individuals with SAUD and 30 healthy controls. We used Landolt circles displaying small apertures outside the sensitivity range of MC cells or flickering at a temporal frequency exceeding PC sensitivity. We found evidence of preserved PC spatial resolution combined with impaired MC temporal resolution in SAUD. We also measured how spatial and temporal sensitivity is influenced by the prior presentation of fearful faces – as emotional content could favor MC processing over PC processing – but found no evidence of emotional modulation in either group. This spatio-temporal dissociation implies that individuals with SAUD may process visual details efficiently but perceive rapidly updating visual information at a slower pace. This deficit has implications for the tracking of rapidly changing stimuli in experimental tasks, but also for the decoding of crucial everyday visual incentives such as faces, whose micro-expressions vary continuously. Future studies will help further specify the visual profile of individuals with SAUD to incorporate disparate findings within a theoretically grounded model of vision.
... This would be a view according to which memory recall is a purely bottom-up modular process independent from conceptualizations. We argue that, in parallel to the claim of cognitive penetration of our perceptual experience (Macpherson 2012; Newen & Vetter 2017), it also seems plausible to allow for a cognitive penetration of episodically recalled scenarios. ... we discuss how the narrative self might, in addition to content, also change the phenomenology of memories through the conceptualization route. Research shows that various experiential dimensions of memory, such as those distinguished by Sutin and Robins (2007), tend to cluster together (cf. ...
Article
Full-text available
Episodic memories can no longer be seen as the re-activation of stored experiences but are the product of an intense construction process based on a memory trace. Episodic recall is the result of a process of scenario construction. If one accepts this generative framework of episodic memory, there is still a big gap in understanding the role of the narrative self in shaping scenario construction. Some philosophers are sceptical in principle, claiming that a narrative self cannot be more than a causally inefficacious attributed entity anyway. Thus, we first characterize a narrative self in detail and, second, we clarify its influential causal role in shaping our episodic memories by influencing the process of scenario construction. This happens at three stages, namely at the level of the input, the output and the process of scenario construction.
... Note that this whole debate is better understood if connected with the parallel but distinct issue regarding cognitive penetrability. Cognitive penetrability can be defined as the property of perceptual experience to be influenced by what happens at the so-called higher cognitive level; in other words, we speak of cognitive penetration when perceptual experience is influenced by beliefs, desires, intentions and concepts (Newen and Vetter 2017). In a way, the debate can be conceived to proceed hand in hand with the issue treated here: admitting an influence of linguistic information on non-linguistic processing means admitting permeability of perceptual experience. ...
Chapter
Full-text available
This paper connects the issue of the influence of language on conceptual representations, known as Linguistic Relativity, with some issues pertaining to concepts’ structure and retrieval. In what follows, I present a model of the relation between linguistic information and perceptual information in concepts using frames as a format of mental representation, and argue that this model not only accommodates the empirical evidence presented by the linguistic relativity debate, but also sheds some light on unanswered questions regarding conceptual representations’ structure. A fundamental assumption is that mental representations can be conceptualised as complex functional structures whose components can be dynamically and flexibly recruited depending on the tasks at hand; the components include linguistic and non-linguistic elements. This kind of model allows for the representation of the interaction between linguistic and perceptual information and accounts for the variable influence that color labels have on non-linguistic tasks. The paper provides some examples of strategy shifting and flexible recruitment of linguistic information available in the literature and explains them using frames.
... Thus, the assumption of cognitive penetration proposes that either perceptual overestimation or perceptual underestimation occur in accordance with the cognitive-scaling of perception that is unique (or uniquely functional) to the corresponding cognitive factor. Furthermore, the proponents of cognitive penetration substantiate their hypothesis on the basis of an existence of downstream projections into the sensory areas in the brain (Newen and Vetter, 2017;O'Callaghan et al., 2017). According to the proponents of cognitive penetrability, the sensory areas in the brain embrace downward projections from the non-sensory areas and thus, there occurs (downward) cognitive penetration. ...
Article
Full-text available
Cognitive penetration is the assumption that non-sensory factors influence sensory perception at the core level of sensory processing, thus generating or modifying the contents of perception. However, the experimental instances of cognitive penetration can be argued to be instances of experimental confounding that occur because perception is operationalized as a magnitude estimation activity rather than as a category identification task. Magnitudinal stimuli can confound the experiments as they tend to generate perceptual fuzziness and thus lead to non-veridical overestimations as well as underestimations of those stimuli. These non-veridical estimations or approximations cannot be distinguished as either (sensory) perceptual errors or (non-sensory) response biases. Moreover, the typical cognitive-penetration-like effects will not be observed if perception is operationalized as a category identification activity, as categorical stimuli are not fuzzy and do not lead to response biases. Thus, the purported instances of cognitive penetration can be argued to be mere instances of experimental confounding, and hence cognitive penetration is not a valid psychological phenomenon.
... In other words, cognition may penetrate taste perception. While many studies addressing cognitive penetration focus on visual perception (MacPherson, 2012; Cecchi, 2014, 2018; Vetter and Newen, 2014; Silins, 2016; Newen and Vetter, 2017; Raftopoulos, 2016, 2019), only a few have focused on taste perception (Wansink et al., 2000; McClure et al., 2004; Lee, Frederick and Ariely, 2006). ...
Article
Full-text available
The relevance of cognitive penetration has been pointed out concerning three fields within philosophy: philosophy of science, philosophy of mind, and epistemology. This paper argues that this phenomenon is also relevant to the philosophy of language. First, I will defend that there are situations where ethical, social, or cultural rules can affect our taste perceptions. This influence can cause speakers to utter conflicting contents that lead them to disagree and, subsequently, to negotiate the circumstances of application of the taste predicates they have used to describe or express their taste perceptions. Then, to account for the proper dynamics of these cases, I will develop a theoretical framework built upon two elements: the Lewisian idea of the score of a conversation (Lewis, 1979), and Richard's (2008) taxonomy of the different attitudes speakers can have in taste disagreements. In a nutshell, I will argue that speakers can accommodate these conflicting contents as exceptions to the rule that determines the circumstances of application of taste predicates.
... Visuoperception is critical for humans considering not only the temporal precedence of vision in the continuum of cognitive processing but also its interplay with higher-level cerebral systems, including attentional, executive, and emotional ones (Creupelandt et al., 2019). Indeed, current models of vision stress that vision and cognition act together and do not strictly represent two independent and successive steps of cerebral processing (Newen & Vetter, 2017;O'Callaghan et al., 2017). In this framework, the efficiency of visuoperception arises from the integration of both intra-visual connections and communication paths between visual regions and higher-order areas, including, but not limited to, the frontal cortex. ...
Article
Visuoperceptive deficits are frequently reported in severe alcohol use disorder (SAUD) and are considered as pervasive and persistent in time. While this topic of investigation has previously driven researchers’ interest, far fewer studies have focused on visuoperception since the ‘90s, leaving open central questions regarding the origin and implications of these deficits. To renew research in the field and provide a solid background to work upon, this paper reviews the neural correlates of visuoperception in SAUD, based on data from neuroimaging and electrophysiological studies. Results reveal structural and functional changes within the visual system but also in the connections between occipital and frontal areas. We highlight the lack of integration of these findings in the dominant models of vision which stress the dynamic nature of the visual system and consider the presence of both bottom-up and top-down cerebral mechanisms. Visuoperceptive changes are also discussed in the framework of long-lasting debates regarding the influence of demographic and alcohol-related factors, together stressing the presence of inter-individual differences. Capitalizing on this review, we provide guidelines to inform future research, and ultimately improve clinical care.
... This question is critical to understand the mechanisms underlying widespread higher-order processing deficits in SAUD, as low-level visual deficits might impair the subsequent cognitive and emotional stages, and patients with SAUD may have to base their decisions and judgments on degraded visual information (Creupelandt et al., 2019). Besides, recent theoretical frameworks (Firestone and Scholl, 2016;Newen and Vetter, 2017;O'Callaghan et al., 2017) postulate early connections between the visual system and higher-level cerebral areas (e.g., orbitofrontal cortex) implicated in a variety of crucial executive and emotional processes (Bar, 2003;Kauffmann et al., 2015;Kveraga et al., 2007). Applying such models to SAUD would thus renew the understanding of visual impairments, as well as their influence on the largely explored high-level cognitive functions (Creupelandt et al., 2019). ...
Article
Background: Severe Alcohol Use Disorder (SAUD) is associated with widespread cognitive impairments, including low-level visual processing deficits persisting even after prolonged abstinence. However, the extent and characteristics of these visual deficits remain largely undetermined, impeding the identification of their underlying mechanisms and influence on higher-order processing. In particular, little work has been conducted to assess the integrity of the magnocellular (MC) and parvocellular (PC) visual pathways, namely the two main visual streams that convey information from the retina up to striate, extra-striate, and ventral/dorsal cerebral regions. Methods: We investigated achromatic luminance contrast processing mediated by inferred MC and PC pathways in 33 patients with SAUD and 32 matched healthy controls using two psychophysical pedestal contrast discrimination tasks promoting responses of inferred MC or PC pathways. We used a staircase procedure to assess participants' ability to detect small changes in luminance within an array of four grey squares that were either continuously presented (steady pedestal, MC-biased) or briefly flashed (pulsed pedestal, PC-biased). Results: We replicated the expected pattern of MC and PC contrast responses in healthy controls. We found a preserved dissociation between MC and PC contrast signatures in SAUD, but also higher MC-mediated mean contrast discrimination thresholds compared to healthy controls, combined with a steeper PC-mediated contrast discrimination slope. Conclusion: These findings indicate altered MC-mediated contrast sensitivity and PC-mediated contrast gain, confirming the presence of early sensory disturbances.
Such low-level deficits, while usually overlooked, might influence higher-order abilities (e.g., memory, executive functions) in SAUD by disturbing the "coarse-to-fine" tuning of the visual system, which relies on the distinct functional properties of MC and PC pathways and ensures proper and efficient monitoring of the environment.
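The adaptive staircase logic used in studies like the one above can be sketched in code. The following is a minimal illustrative 2-down/1-up staircase (which converges near 70.7% correct), not the authors' actual procedure; the `respond` callable and all parameter values are assumptions for the example.

```python
def two_down_one_up_staircase(respond, start_contrast=0.5, step=0.05,
                              min_contrast=0.01, n_reversals=8):
    """Estimate a contrast discrimination threshold with a 2-down/1-up
    staircase: two consecutive correct responses make the task harder
    (lower contrast), one error makes it easier (higher contrast).

    `respond(contrast)` is a hypothetical callable returning True when
    the observer correctly detects the contrast increment.
    """
    contrast = start_contrast
    correct_streak = 0
    direction = None          # last step direction: 'down' or 'up'
    reversals = []            # contrast values at which direction flipped

    while len(reversals) < n_reversals:
        if respond(contrast):
            correct_streak += 1
            if correct_streak == 2:            # two correct -> harder
                correct_streak = 0
                if direction == 'up':
                    reversals.append(contrast)
                direction = 'down'
                contrast = max(min_contrast, contrast - step)
        else:
            correct_streak = 0                 # one error -> easier
            if direction == 'down':
                reversals.append(contrast)
            direction = 'up'
            contrast += step

    # Threshold estimate: mean contrast over the recorded reversals.
    return sum(reversals) / len(reversals)
```

With a deterministic simulated observer who is correct whenever the contrast exceeds some true threshold, the staircase oscillates around that threshold and the reversal average lands just below it.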
... 3 This research puts the modular accounts that favor the view that early vision is contentfully encapsulated under considerable pressure. The philosophical community has presented strong arguments, based on the best experimental evidence, in favor of cognitive penetrability (Arstila 2017;Marchi, 2017;Newen and Vetter, 2017;Stokes 2017;Briscoe 2015;MacPherson 2015;Siegel 2012;Lyons 2011). Summing up a conclusion from a wealth of similar work, Carruthers (2015) reports that "increasingly it has been argued that perceptual processing is deeply interactive at many different levels simultaneously" (p. ...
... Similar techniques can be adopted to investigate how object representations are affected by different cognitive processes such as belief, desire, and concepts (e.g. Pylyshyn, 1999;Raftopoulos, 2014;Firestone & Scholl, 2016;Newen & Vetter, 2017;Teufel & Nanay, 2017). ...
Preprint
Full-text available
At which phase(s) does task demand affect object processing? Previous studies showed that task demand affects object representations in higher-level visual areas but not so much in earlier areas. There are, however, limitations in those studies concerning the relatively weak manipulation of task due to the use of familiar real-life objects, and/or the low temporal resolution in brain activation measures such as fMRI. In the current study, observers categorized images of artificial objects in one of two orthogonal dimensions, shape and texture. Electroencephalogram (EEG), a technique with higher temporal resolution, and multivariate pattern analysis (MVPA) were employed to reveal object processing across time under different task demands. Results showed that object processing along the task-relevant dimension was enhanced starting from a relatively late time (~230ms after image onset), within the time range of the event-related potential (ERP) components N170 and N250. The findings are consistent with the view that task exerts an effect on object processing at the later phases of processing in the ventral visual pathway.
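The time-resolved MVPA approach described in the abstract can be conveyed with a toy sketch: at each time point, a classifier is trained and tested on the spatial pattern across channels, and above-chance accuracy indicates when task information is present. This is a minimal nearest-centroid illustration on hypothetical data (array shapes and parameters are assumptions), not the study's actual pipeline.

```python
import numpy as np

def decode_over_time(X, y, n_folds=4, rng=None):
    """Time-resolved decoding: at each time point, classify binary
    condition labels from the spatial pattern across channels using a
    nearest-centroid classifier and simple cross-validation.

    X: (trials, channels, timepoints) array; y: (trials,) 0/1 labels.
    Returns an array of decoding accuracies, one per time point.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n_trials, _, n_time = X.shape
    order = rng.permutation(n_trials)
    folds = np.array_split(order, n_folds)
    acc = np.zeros(n_time)
    for t in range(n_time):
        correct = 0
        for test_idx in folds:
            train_idx = np.setdiff1d(order, test_idx)
            patt, lab = X[train_idx, :, t], y[train_idx]
            c0 = patt[lab == 0].mean(axis=0)   # class-0 centroid
            c1 = patt[lab == 1].mean(axis=0)   # class-1 centroid
            for i in test_idx:
                d0 = np.linalg.norm(X[i, :, t] - c0)
                d1 = np.linalg.norm(X[i, :, t] - c1)
                correct += int((d1 < d0) == bool(y[i]))
        acc[t] = correct / n_trials
    return acc
```

On synthetic data where the class difference appears only from some time point onward, accuracy hovers at chance early and rises above chance late, mirroring the logic of the ~230 ms onset reported in the abstract.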
... An opposing perspective argues for the complementarity of perception and cognition based on the idea that cognition supplies contextual information that may disambiguate contradictory sensory evidence, or it may fill in missing parts (Goldstone, de Leeuw, & Landy, 2015; Lupyan, 2012, 2015a, 2017a, 2017b). A large amount of behavioral and brain data has been accumulated over the years suggesting that vision is indeed cognitively penetrable (O'Callaghan, Kveraga, Shine, Adams, & Bar, 2017; Newen & Vetter, 2017; Vetter & Newen, 2014). Such findings fit well within a predictive coding framework (Clark, 2013; Hohwy, 2013, 2017). ...
Article
The memory color effect and Spanish castle illusion are taken as evidence of the cognitive penetrability of vision. In the same manner, the successful decoding of color-related brain signals in functional neuroimaging studies suggests the retrieval of memory colors associated with a perceived gray object. Here, we offer an alternative account of these findings based on the design principles of adaptive resonance theory (ART). In ART, conscious perception is a consequence of a resonant state. Resonance emerges in a recurrent cortical circuit when a bottom-up spatial pattern agrees with the top-down expectation. When they do not agree, a special control mechanism is activated that resets the network and clears off erroneous expectation, thus allowing the bottom-up activity to always dominate in perception. We developed a color ART circuit and evaluated its behavior in computer simulations. The model helps to explain how traces of erroneous expectations about incoming color are eventually removed from the color perception, although their transient effect may be visible in behavioral responses or in brain imaging. Our results suggest that the color ART circuit, as a predictive computational system, is almost never penetrable, because it is equipped with computational mechanisms designed to constrain the impact of the top-down predictions on ongoing perceptual processing.
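The match-and-reset mechanism that the abstract describes can be conveyed with a toy sketch in the spirit of ART: a bottom-up pattern is compared against stored top-down expectations, and any expectation failing a vigilance criterion is reset, so bottom-up activity ultimately dominates perception. This is an illustrative simplification under assumed vector patterns and a vigilance parameter, not the authors' color ART circuit.

```python
import numpy as np

def art_match_cycle(bottom_up, expectations, vigilance=0.8):
    """One ART-style search cycle: pick the stored expectation with the
    strongest overlap with the bottom-up pattern; if it fails the
    vigilance test, reset it (exclude it) and try the next candidate.
    Returns the index of the resonating expectation, or None.
    """
    active = list(range(len(expectations)))
    while active:
        # Choose the expectation with the strongest fuzzy overlap.
        overlaps = [np.minimum(bottom_up, expectations[j]).sum()
                    for j in active]
        j = active[int(np.argmax(overlaps))]
        match = np.minimum(bottom_up, expectations[j]).sum() / bottom_up.sum()
        if match >= vigilance:
            return j          # resonance: expectation confirms the input
        active.remove(j)      # reset: clear the erroneous expectation
    return None               # no expectation survives; input dominates
```

A well-matching expectation resonates immediately, while mismatching ones are reset one by one; if none passes the vigilance test, no erroneous top-down expectation is allowed to shape the percept.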
... It remains an open question whether or not there is sufficiently dynamic interaction between perceptual and cognitive processes at various stages in more elaborate networks to doubt modularity or dissolve a strict perception-cognition border. One might object here, for example, that the amygdala's influence implies an implausibly "large" visual module, which might lead to a rejection of modularity (a similar point is made by Newen and Vetter 2017). 6 However, nothing in the proposed model necessarily bears on whether or not vision is modular, and there are certainly other grounds for distinguishing perceptual and cognitive processes besides proposing a stark contrast between modular and non-modular processes. ...
Article
Full-text available
There is ongoing philosophical debate about the kinds of properties that are represented in visual perception. Both “rich” and “thin” accounts of perceptual content are concerned with how prior assumptions about the world influence the construction of perceptual representations. However, the idea that biased assumptions resulting from oppressive social structures contribute to the contents of perception has been largely neglected historically in this debate in the philosophy of perception. I draw on neurobiological evidence of the role of the amygdala in visual processing to show that the influence of biased assumptions on visual perception gives us a unique path to rich evaluative content that does not require an appeal to controversial mechanisms like top-down modulation.
... See Raftopoulos (2013) for a discussion of a minimal part of vision that might be considered completely encapsulated. However, Vetter and Newen (2014) argue that even such a minimal part is not isolated from higher-level feedback (see also Newen & Vetter, 2017). ...
Article
Full-text available
There is a view on consciousness that has strong intuitive appeal and empirical support: the intermediate-level theory of consciousness, proposed mainly by Ray Jackendoff and by Jesse Prinz. This theory identifies a specific “intermediate” level of representation as the basis of human phenomenal consciousness, which sits between high-level non-perspectival thought processes and low-level disjointed feature-detection processes in the perceptual and cognitive processing hierarchy. In this article, we show that the claim that consciousness arises at an intermediate-level is true of some cognitive systems, but only in virtue of specific constraints on their active interactions with the environment. We provide ecological reasons for why certain processing levels in a cognitive hierarchy are privileged with respect to consciousness. We do this from the perspective of a prediction-error minimization model of perception and cognition, relying especially on the notion of active inference: the privileged level for consciousness depends on the specific dispositions of an organism concerned with inferring its policies for action. Such a level is indeed intermediate for humans, but this depends on the spatiotemporal resolution of the typical actions that a human organism can normally perform. Thus, intermediateness is not an essential feature of consciousness. In organisms with different action dispositions the privileged level or levels may differ as well.
... After all, to assess the relationship between two phenomena, a minimal differentiability between them is to be expected: for the information of a cognitive state (C) to influence the information of a perceptual state (P), there should be some visible demarcation between (C) and (P). Whereas impenetrability defenders advocate a clear-cut division (Pylyshyn, 1999; Raftopoulos, 2009; Firestone & Scholl, 2016), penetrability supporters give up the idea of a sharp division, thus considering the line that divides perception from cognition imprecise and non-fixed (Newen & Vetter, 2017). In the more extreme position are those who deny any real distinction. ...
Article
Despite the extensive body of psychological findings suggesting that cognition influences perception, the debate between defenders and detractors of the cognitive penetrability of perception persists. While detractors demand more strictness in psychological experiments, proponents consider that empirical studies show that cognitive penetrability occurs. These considerations have led some theorists to propose that the debate has reached a dead end. The issue about where perception ends and cognition begins is, I argue, one of the reasons why the debate is cornered. Another reason is the inability of psychological studies to present uncontroversial interpretations of the results obtained. To dive into other kinds of empirical sources is, therefore, required to clarify the debate. In this paper, I explain where the debate is blocked, and suggest that neuroscientific evidence together with the predictive coding account, might decant the discussion on the side of the penetrability thesis.
... For these reasons and others, the memory color effect is considered by many researchers to be among the most promising candidates for a genuine top-down effect of cognition on perception. Indeed, effects of knowledge on color appearance have played a central role in recent arguments for the cognitive penetrability of perception (e.g., Macpherson, 2012;Newen & Vetter, 2017;Vetter & Newen, 2014), have helped to motivate new perspectives on cognitive architecture more generally (e.g., Barsalou, 2008;Lupyan, 2015a), and have even appeared in popular perception textbooks (e.g., Goldstein & Brockmole, 2016;Schwartz & Krantz, 2017). ...
... In the same way, a phenomenological perspective is often used to argue for the rich content of our perceptual experience in social cognition, prominently defended by Gallagher (2008) and Zahavi (2011). The general line of argument can be roughly characterized as follows: perceptual experiences can be cognitively penetrated and can thereby involve rich content (Macpherson 2012; Vetter & Newen 2014; Newen & Vetter 2016). Expert perception, we may say, is different from the perception of laypersons. ...
Article
Full-text available
What would be an adequate theory of social understanding? In the last decade, the philosophical debate has focused on Theory Theory, Simulation Theory and Interaction Theory as the three possible candidates. In the following, we look carefully at each of these and describe its main advantages and disadvantages. Based on this critical analysis, we formulate the need for a new account of social understanding. We propose the Person Model Theory as an independent new account which has greater explanatory power compared to the existing theories.
... Indeed, effects of knowledge on color appearance have played a central role in recent arguments for the cognitive penetrability of perception (e.g., Macpherson, 2012;Newen & Vetter, 2017;Vetter & Newen, 2014), have helped to motivate new perspectives on cognitive architecture more generally (e.g., Barsalou, 2008;Lupyan, 2015a), and have even appeared in popular perception textbooks (e.g., Goldstein & Brockmole, 2016;Schwartz & Krantz, 2017). ...
... They expressed one worry: "Why should we accept the generalization of exceptional cases of visual illusions to any case of everyday perceptual experience?" [14, p. 28] We find their argument from the supposed exceptionality of visual illusions superficial and limited: it cannot explain the experimental statistics of the strange-face-in-the-mirror illusion, which show that some illusions are standardly effective (and typified) only in healthy subjects, and less effective (and unstandardized) in patients diagnosed with a psychological problem or a psychiatric illness. We analyse the findings of G. B. Caputo [15][16] on the strange-face-in-the-mirror illusion as an example to show that Newen & Vetter's second argument against cognitive impenetrability does not hold [14]. Caputo's experiments show that in healthy, average observers, gazing at one's own face in the mirror for a few minutes at a low illumination level produces the apparition of strange faces. ...
Conference Paper
Full-text available
The paper is an extended summary of a larger research project. It explores the hypothesis that there might be a certain type of probabilistically acquired embodied calculus that mediates our sensory perception. Our intention is to reflect upon some problems connected to modeling consciousness while weighing sensory and social perception against each other in the theater of action, where the Bayesian brain "plays us" while playing with itself.
... In typical cases of automatic or effortless inference, you can infer that someone is late by looking at their facial expression or how they are looking at their watch, but this does not mean that you are seeing "lateness." Emotion perception is more complicated, but it might be susceptible to similar interpretative treatments (for dissent, see Siegel, 2006;Newen and Vetter, 2017). We can infer someone's joy through their facial expressions, but we do not necessarily see the actual feeling of joy. ...
Article
Full-text available
The main thesis of this paper is that two prevailing theories about cognitive penetration are too extreme, namely, the view that cognitive penetration is pervasive and the view that there is a sharp and fundamental distinction between cognition and perception, which precludes any type of cognitive penetration. These opposite views have clear merits and empirical support. To eliminate this puzzling situation, we present an alternative theoretical approach that incorporates the merits of these views into a broader and more nuanced explanatory framework. A key argument we present in favor of this framework concerns the evolution of intentionality and perceptual capacities. An implication of this argument is that cases of cognitive penetration must have evolved more recently and that this is compatible with the cognitive impenetrability of early perceptual stages of processing information. A theoretical approach that explains why this should be the case is the consciousness and attention dissociation framework. The paper discusses why concepts, particularly issues concerning concept acquisition, play an important role in the interaction between perception and cognition.
Article
In the Weapon Identification Task (WIT), Black faces prime the identification of guns compared with tools. We measured race-induced changes in visual awareness of guns and tools using continuous flash suppression (CFS). Eighty-four participants, primed with Black or Asian faces, indicated the location of a gun or tool target that was temporarily rendered invisible through CFS, which provides a sensitive measure of effects on early visual processing. The same participants also completed a standard (non-CFS) WIT. We replicated the standard race-priming effect in the WIT. In the CFS task, Black and Asian primes did not affect the time guns and tools needed to enter awareness. Thus, race priming does not alter early visual processing but does change the identification of guns and tools. This confirms that race-priming originates from later post-perceptual memory- or response-related processing.
Chapter
In this chapter, we discuss the problems of the human now that attracted much attention in the twentieth century, and a number of comprehensive accounts of them proposed at the beginning of the twenty-first century. We combine these accounts under the term temporal experience or temporal consciousness and analyze them in detail. During this analysis, in parallel, we formulate some of the main premises making up the gist of our account of human temporality and describe the basic elements of the individual temporal dimension attributed to the mind. In particular, the following issues are discussed in detail: the structure of a single unit of temporal experience; how these units are combined into the stream of experiences and form the diachronic unit of temporal experience; and the available experimental data elucidating the details and particular time scales characterizing the experiential now.
Article
Full-text available
Hedonic adaptation has come to play a large role in wellbeing studies and in practical philosophy more generally. We argue that hedonic adaptation has been too closely assimilated to sensory adaptation. Sensation and selective attention do indeed play a role in adaptation; but so do judgment, articulation, contextualization and background assumptions, as well as coping strategies and features of one’s social and physical environment. Hence the notion of hedonic adaptation covers not a single uniform phenomenon, but a whole range of different processes and mechanisms. We present a taxonomy of different forms of hedonic adaptation, pointing especially to the importance of coping strategies and socially supported adaptation, which have been overlooked or misdescribed by adaptation theory, but implicitly recognized by empirical research. We further argue that the differences between types of adaptive processes have ramifications for normative theories. Adaptation can work both for good and for bad, depending on the psychological and contextual details. Acknowledging the many forms of hedonic adaptation, and the ubiquitous role of mutual adjustments of values, standards of judgment, emotional tendencies, behavior and environmental factors in achieving wellbeing, also gives support to a more complex and dynamic view of wellbeing as such.
Article
Full-text available
Moral‐value perception occurs when sensory perception is directed by cognitive schemas, which selectively sample information in objects and events that represent personally valued means or ends. As a result of the perception of such value representations, people experience affect ranging from strongly negative to strongly positive. If one's schemas direct attention to elements of objects or actions that verify schematic expectations, and spark little or no noticeable affect, the new information automatically produces minor alterations of the schema. However, if attention is directed or drawn to information that produces strong affect and contradicts activated schemas, one is likely to engage in conscious assessment, through which the information is re‐conceptualized in a way that preserves schematic integrity. Based on multidisciplinary analysis, this paper (1) addresses the ways in which the perception of things within the natural world is represented in moral‐value perceptions, (2) identifies important cognitive, affective and emotional processes involved in the ongoing experience of such perception, and (3) illustrates some of the ways in which moral‐value perceptions influence moral assessments and judgements.
Article
Full-text available
The present literature review aimed at offering a comprehensive and critical view of the behavioral data collected during the past seventy years concerning visuoperception in severe alcohol use disorders (SAUD). To pave the way for a renewal of research and clinical approaches in this very little understood field, this paper: (1) provides a critical review of previous behavioral studies exploring visuoperceptive processing in SAUD (2) identifies the alcohol-related parameters and demographic factors that influence the deficits; and (3) addresses the limitations of this literature and their implications for current clinical strategies. By doing so, this review highlights the presence of visuoperceptive deficits but also shows how the lack of in-depth studies exploring the visual system in this clinical population results in the current absence of integration of these deficits in the dominant models of vision. Given the predominance of vision in everyday life, we stress the need to better delineate the extent, the specificity, and the actual implications of the deficits for SAUD.
Chapter
In this chapter, I present the cognitive penetrability hypothesis in full detail. In Sect. 3.1, I offer an historical contextualization of the discussion of top-down effects of cognition on perception, of which cognitive penetrability is a special case. I discuss some of the most relevant objections to the occurrence of such effects and elucidate why the issue of whether cognitive penetrability occurs is considered a very pressing one in cognitive science. In Sect. 3.2, I present the recent developments of the cognitive penetrability debate. I narrow down four definitions of cognitive penetrability that are traceable to the most recent literature and reflect different aspects of the phenomenon that should not be conflated. In Sect. 3.3, I present some of the evidence that has been proposed to support the cognitive penetrability hypothesis.
Chapter
In this chapter I introduce the terminology and concepts that are crucial for the development of my arguments in the book and discuss how the two key elements of my discussion, namely perception and cognition, can be kept apart in a mental processing system. In the overarching argumentative line of this book, which revolves around the cognitive penetrability of perceptual experience, keeping perception and cognition apart is a fundamental requirement. If perception and cognition cannot be separated, an issue immediately arises for the possibility of asking questions about their interactions. The structure of the chapter is as follows: Sect. 1.1 outlines the main theoretical commitments that form the backdrop of the discussion in this book. Section 1.2 is devoted to terminological clarifications and conceptual stage-setting. Section 1.3 explores how a clear-cut distinction between perception and cognition may be drawn.
Thesis
My thesis explores the interplay between perceptual awareness, metacognition and metarepresentations using various experimental manipulations. It is articulated around two approaches. The first part of my thesis seeks to better understand how perceptual awareness and metacognition are modulated by experimentally-induced metarepresentations in two studies based on belief manipulation. We use placebo suggestions aiming at improving perceptual awareness at different levels of processing in a first set of visual experiments, and we study the impact of a negative placebo suggestion on perceptual awareness and metacognitive abilities in a second set of tactile experiments. Our results suggest that placebo suggestions lead to fragile, if not non-existent, effects in non-noxious perception and that high-level cognitive-affective components may be essential for the placebo effect to occur. The second part is focused on the relationship between perceptual consciousness and a core metarepresentation, the self. In particular, it aims at deepening our understanding of whether bodily self-consciousness has a role in shaping perceptual consciousness. This fundamental relation has surprisingly remained overlooked so far, perceptual and bodily self-consciousness having largely been studied independently. This second part is composed of three studies. The first examines how body movement can influence vision and metacognition through sensory attenuation. The second study investigates how manipulating one's sense of self through sensorimotor conflicts alters perception and metacognition. The third study explores whether self-metacognition requires embodiment and to what extent one can evaluate the (un)certainty of others.
Taken together, our findings suggest that the brain — and consciousness — cannot be studied in isolation, and that in order to reach a deeper understanding of perceptual consciousness it is essential to take into account our body and our actions in the world, as well as the fact that we live in a social environment.
Article
Full-text available
Knowing the identity of an object can powerfully alter perception. Visual demonstrations of this, such as Gregory's (1970) hidden Dalmatian, affirm the existence of both top-down and bottom-up processing. We consider a third processing pathway: lateral connections between the parts of an object. Lateral associations are assumed by theories of object processing and hierarchical theories of memory, but little evidence attests to them. If they exist, their effects should be observable even in the absence of object identity knowledge. We employed Continuous Flash Suppression (CFS) while participants studied object images, such that visual details were learned without explicit object identification. At test, lateral associations were probed using a part-to-part matching task. We also tested whether part-whole links were facilitated by prior study using a part-naming task, and included another study condition (Word), in which participants saw only an object's written name. The key question was whether CFS study (which provided visual information without identity) would better support part-to-part matching (via lateral associations) whereas Word study (which provided identity without the correct visual form) would better support part-naming (via top-down processing). The predicted dissociation was found and confirmed by state-trace analyses. Thus, lateral part-to-part associations were learned and retrieved independently of object identity representations. This establishes novel links between perception and memory, demonstrating that (a) lateral associations at lower levels of the object identification hierarchy exist and contribute to object processing and (b) these associations are learned via rapid, episodic-like mechanisms previously observed for the high-level, arbitrary relations comprising episodic memories.
Chapter
In this chapter, I defend the thesis that early vision is Cognitively Impenetrable (CI) against very recent criticisms, some of them aimed specifically at my arguments, which state that neurophysiological evidence shows that early vision is affected in a top-down manner by cognitive states. This criticism comes from (a) studies on fast object recognition; (b) pre-cueing studies; and (c) imaging studies that examine the recurrent processes in the brain during visual perception. I argue that upon closer examination, all this evidence supports rather than defeats the thesis that early vision is CI, because it shows that (a) the information used in early vision to recognize objects very fast is not cognitive information; (b) the processes of early vision do not use the cognitive information that issues cognitive demands guiding attention or expectation in pre-cueing studies; and (c) the recurrent processes in early vision are purely stimulus-driven and do not involve any cognitive signals.
Chapter
In this chapter, I assess the definitions of CI in the literature and synthesize them to propose a new definition of CP that incorporates the heated discussion about the effects of CP on the epistemic role of perception. I distinguish this definition from the other definitions I have examined, underlining at the same time its commonalities with them. Then, I propose to approach CP by factoring in the epistemic role of perception in justifying perceptual beliefs. This means that one should determine and assess the epistemic role of each stage of visual processing separately, in view of the different roles that the two stages play in perception and in view of the fact that cognition affects early and late vision differently. In view of the two threads that exist in definitions of CP, one imposing the demand that for a perceptual process to be CP it must be directly affected by cognition, and the other imposing the demand that for a perceptual process to be CP cognition should affect its epistemic role in an interesting way, I discuss the relation between these two conditions. Finally, in view of my thesis that late vision is CP because cognitive states affect hitherto purely perceptual processes, I propose a way in which states with cognitive contents that are symbolically structured could affect states with purely iconic or analog contents.
Article
Visuoperceptive impairments are among the most frequently reported deficits in alcohol-use disorders, but only very few studies have investigated their origin and interactions with other categories of dysfunctions. Besides, these deficits have generally been interpreted in a linear bottom-up perspective, which appears very restrictive with respect to the new models of vision developed in healthy populations. Indeed, new theories highlight the predictive nature of the visual system and demonstrate that it interacts with higher-level cognitive functions to generate top-down predictions. These models notably posit that a fast but coarse visual analysis involving magnocellular pathways helps to compute heuristic guesses regarding the identity and affective value of inputs, which are used to facilitate conscious visual recognition. Building on these new proposals, the present review stresses the need to reconsider visual deficits in alcohol-use disorders as they might have crucial significance for core features of the pathology, such as attentional bias, loss of inhibitory control and emotion decoding impairments. Centrally, we suggest that individuals with severe alcohol-use disorders could present with magnocellular damage and we defend a dynamic explanation of the deficits. Rather than being restricted to high-level processes, deficits could start at early visual stages and then extend and potentially intensify during following steps due to reduced cerebral connectivity and dysfunctional cognitive/emotional regions. A new research agenda is specifically provided to test these hypotheses.
Preprint
Full-text available
I argue that pain sensations are perceptual states, namely states that represent (actual or potential) damage. I defend this position against the objection that pains, unlike standard perceptual states, do not allow for an appearance-reality distinction by arguing that in the case of pain as well as in standard perceptual experiences, cognitive penetration or malfunctions of the underlying sensory systems can lead to a dissociation between the sensation on the one hand, and what is represented on the other hand. Moreover, I refute the objection that the allegedly weak correlation between pain and bodily damage forces intentionalist accounts of pain to postulate so many malfunctions (misrepresentations respectively) that such accounts become implausible. I also rebut Murat Aydede's objection that our linguistic practice supposedly shows that there is a conceptual difference between standard perceptual experiences and pain sensations by challenging Aydede's premise that we always withdraw standard perceptual reports in case of counterevidence, while we never do that with pain reports. At the end, I propose an explanation as to why we do not express perceptual reports of (potential) bodily damage in objectivist, but in mental terms.
Article
Full-text available
If beliefs and desires affect perception, at least in certain specified ways, then cognitive penetration occurs. Whether it occurs is a matter of controversy. Recently, some proponents of the predictive coding account of perception have claimed that the account entails that cognitive penetration occurs. I argue that the relationship between the predictive coding account and cognitive penetration depends on both the specific form of the predictive coding account and the specific form of cognitive penetration. In so doing, I spell out different forms of each and the relationships that hold between them. Thus, mere acceptance of the predictive coding approach to perception does not determine whether one should think that cognitive penetration exists. Moreover, given that there are such different conceptions of both predictive coding and cognitive penetration, researchers should cease talking of either without making clear which form they refer to, if they aspire to make true generalisations.
Article
Full-text available
The debate about direct perception encompasses different topics, one of which concerns the richness of the contents of perceptual experiences. Can we directly perceive only low-level properties, like edges, colors etc. (the sparse-content view), or can we perceive high-level properties and entities as well (the liberal-content view)? The aim of the paper is to defend the claim that the content of our perceptual experience can include emotions and also person impressions. Using these examples, an argument is developed to defend a liberal-content view for core examples of social cognition. This view is developed and contrasted with accounts which claim that in the case of registering another person’s emotion while seeing them, we have to describe the relevant content not as the content of a perceptual experience, but of a perceptual belief. The paper defends the view that perceptual experiences can have a rich content yet remain separable from beliefs formed on the basis of the experience. How liberal and enriched the content of a perceptual experience is will depend upon the expertise a person has developed in the field. This is supported by the argument that perceptual experiences can be systematically enriched by perceiving affordances of objects, by pattern recognition or by top-down processes, as analyzed by processes of cognitive penetration or predictive coding.
Article
Full-text available
The human visual system must extract reliable object information from cluttered visual scenes several times per second, and this temporal constraint has been taken as evidence that the underlying cortical processing must be strictly feedforward. Here we use a novel rapid reinforcement paradigm to probe the temporal dynamics of the neural circuit underlying rapid object shape perception and thus test this feedforward assumption. Our results show that two shape stimuli are optimally reinforcing when separated in time by ∼60 ms, suggesting an underlying recurrent circuit with a time constant (feedforward + feedback) of 60 ms. A control experiment demonstrates that this is not an attentional cueing effect. Instead, it appears to reflect the time course of feedback processing underlying the rapid perceptual organization of shape. Significance statement: Human and nonhuman primates can spot an animal shape in complex natural scenes with striking speed, and this has been taken as evidence that the underlying cortical mechanisms are strictly feedforward. Using a novel paradigm to probe the dynamics of shape perception, we find that two shape stimuli are optimally reinforcing when separated in time by 60 ms, suggesting a fast but recurrent neural circuit. This work (1) introduces a novel method for probing the temporal dynamics of cortical circuits underlying perception, (2) provides direct evidence against the feedforward assumption for rapid shape perception, and (3) yields insight into the role of feedback connections in the object pathway.
Article
Full-text available
Visual stimuli quickly activate a broad network of brain areas that often show reciprocal structural connections between them. Activity at short latencies (<100 ms) is thought to represent a feed-forward activation of widespread cortical areas, but fast activation combined with reciprocal connectivity between areas in principle allows for two-way, recurrent interactions to occur at short latencies after stimulus onset. Here we combined EEG source-imaging and Granger-causal modeling with high temporal resolution to investigate whether recurrent and top-down interactions between visual and attentional brain areas can be identified and distinguished at short latencies in humans. We investigated the directed interactions between widespread occipital, parietal and frontal areas that we localized within participants using fMRI. The connectivity results showed two-way interactions between area MT and V1 already at short latencies. In addition, the results suggested a large role for lateral parietal cortex in coordinating visual activity that may be understood as an ongoing top-down allocation of attentional resources. Our results support the notion that indirect pathways allow early, evoked driving from MT to V1 to highlight spatial locations of motion transients, while influence from parietal areas is continuously exerted around stimulus onset, presumably reflecting task-related attentional processes.
Article
Full-text available
Mental imagery research has weathered both disbelief of the phenomenon and inherent methodological limitations. Here we review recent behavioral, brain imaging, and clinical research that has reshaped our understanding of mental imagery. Research supports the claim that visual mental imagery is a depictive internal representation that functions like a weak form of perception. Brain imaging work has demonstrated that neural representations of mental and perceptual images resemble one another as early as the primary visual cortex (V1). Activity patterns in V1 encode mental images and perceptual images via a common set of low-level depictive visual features. Recent translational and clinical research reveals the pivotal role that imagery plays in many mental disorders and suggests how clinicians can utilize imagery in treatment. Highlights: Recent research suggests that visual mental imagery functions as if it were a weak form of perception. Evidence suggests overlap between visual imagery and visual working memory: those with strong imagery tend to utilize it for mnemonic performance. Brain imaging work suggests that representations of perceived stimuli and mental images resemble one another as early as V1. Imagery plays a pivotal role in many mental disorders, and clinicians can utilize imagery to treat such disorders.
Article
Full-text available
What determines what we see? In contrast to the traditional “modular” understanding of perception, according to which visual processing is encapsulated from higher-level cognition, a tidal wave of recent research alleges that states such as beliefs, desires, emotions, motivations, intentions, and linguistic representations exert direct top-down influences on what we see. There is a growing consensus that such effects are ubiquitous, and that the distinction between perception and cognition may itself be unsustainable. We argue otherwise: none of these hundreds of studies — either individually or collectively — provide compelling evidence for true top-down effects on perception, or “cognitive penetrability”. In particular, and despite their variety, we suggest that these studies all fall prey to only a handful of pitfalls. And whereas abstract theoretical challenges have failed to resolve this debate in the past, our presentation of these pitfalls is empirically anchored: in each case, we show not only how certain studies could be susceptible to the pitfall (in principle), but how several alleged top-down effects actually are explained by the pitfall (in practice). Moreover, these pitfalls are perfectly general, with each applying to dozens of other top-down effects. We conclude by extracting the lessons provided by these pitfalls into a checklist that future work could use to convincingly demonstrate top-down effects on visual perception. The discovery of substantive top-down effects of cognition on perception would revolutionize our understanding of how the mind is organized; but without addressing these pitfalls, no such empirical report will license such exciting conclusions.
Article
Full-text available
We develop a version of a direct perception account of emotion recognition on the basis of a metaphysical claim that emotions are individuated as patterns of characteristic features. On our account, emotion recognition relies on the same type of pattern recognition as is described for object recognition. The analogy allows us to distinguish two forms of directly perceiving emotions, namely perceiving an emotion in the (near) absence of any top-down processes, and perceiving an emotion in a way that significantly involves some top-down processes (including expectations and background knowledge); and, in addition, an inference-based evaluation of an emotion. Our model clarifies the epistemology of emotion recognition.
Article
Full-text available
Closing the structure-function divide is more challenging in the brain than in any other organ (Lichtman and Denk, 2011). For example, in early visual cortex, feedback projections to V1 can be quantified (e.g., Budd, 1998) but the understanding of feedback function is comparatively rudimentary (Muckli and Petro, 2013). Focusing on the function of feedback, we discuss how textbook descriptions mask the complexity of V1 responses, and how feedback and local activity reflects not only sensory processing but internal brain states.
Article
Full-text available
I elaborate on Pylyshyn's definition of the cognitive impenetrability (CI) of early vision, and draw on the role of concepts in perceptual processing, which links the problem of the CI or cognitive penetrability (CP) of early vision with the problem of the nonconceptual content (NCC) of perception. I explain, first, the sense in which the content of early vision is CI and I argue that if some content is CI, it is conceptually encapsulated, that is, it is NCC. Then, I examine the definitions of NCC and argue that they lead to the view that the NCC of perception is retrieved in a stage of visual processing that is CI. Thus, the CI of a state and content is a sufficient and necessary condition for the state and its content to be purely NCC, the CI ≡ NCC thesis. Since early vision is CI, the purely NCC of perception is formed in early vision. I defend the CI ≡ NCC thesis by arguing against objections raised against both the sufficient and the necessary part of the thesis.
Article
Full-text available
Human early visual cortex was traditionally thought to process simple visual features such as orientation, contrast and spatial frequency via feed-forward input from the lateral geniculate nucleus [e.g. 1]. However, the role of non-retinal influence on early visual cortex is so far insufficiently investigated, despite feedback connections greatly outnumbering feed-forward connections [2-5]. Here we explored in five fMRI experiments how information originating from audition and imagery affects the brain activity patterns in early visual cortex in the absence of any feed-forward visual stimulation. We show that category-specific information from both complex natural sounds and imagery can be read out from early visual cortex activity in blindfolded participants. The coding of non-retinal information in the activity patterns of early visual cortex is common across actual auditory perception and imagery and may be mediated by higher-level multi-sensory areas. Furthermore, this coding is robust to mild manipulations of attention and working memory but affected by orthogonal, cognitively demanding visuo-spatial processing. Crucially, the information fed down to early visual cortex is category-specific and generalises to sound exemplars of the same category, providing evidence for abstract information feedback rather than precise pictorial feedback. Our results suggest that early visual cortex receives non-retinal input from other brain areas, both when it is generated by auditory perception or imagery, and this input carries common abstract information. Our findings are compatible with feedback of predictive information to the earliest visual input level [e.g. 6], in line with predictive coding models [7-10].
Article
Full-text available
Small-world networks provide an appealing description of cortical architecture owing to their capacity for integration and segregation combined with an economy of connectivity. Previous reports of low-density interareal graphs and apparent small-world properties are challenged by data that reveal high-density cortical graphs in which economy of connections is achieved by weight heterogeneity and distance-weight correlations. These properties define a model that predicts many binary and weighted features of the cortical network including a core-periphery, a typical feature of self-organizing information processing systems. Feedback and feedforward pathways between areas exhibit a dual counterstream organization, and their integration into local circuits constrains cortical computation. Here, we propose a bow-tie representation of interareal architecture derived from the hierarchical laminar weights of pathways between the high-efficiency dense core and periphery.
Article
Full-text available
Given the vast amount of sensory information the brain has to deal with, predicting some of this information based on the current context is a resource-efficient strategy. The framework of predictive coding states that higher-level brain areas generate a predictive model to be communicated via feedback connections to early sensory areas. Here, we directly tested the necessity of a higher-level visual area, V5, in this predictive processing in the context of an apparent motion paradigm. We flashed targets on the apparent motion trace in-time or out-of-time with the predicted illusory motion token. As in previous studies, we found that predictable in-time targets were better detected than unpredictable out-of-time targets. However, when we applied functional magnetic resonance imaging-guided, double-pulse transcranial magnetic stimulation (TMS) over left V5 at 13–53 ms before target onset, the detection advantage of in-time targets was eliminated; this was not the case when TMS was applied over the vertex. Our results are causal evidence that V5 is necessary for a prediction effect, which has been shown to modulate V1 activity (Alink et al. 2010). Thus, our findings suggest that information processing between V5 and V1 is crucial for visual motion prediction, providing experimental support for the predictive coding framework.
Article
Full-text available
Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this "hierarchical prediction machine" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.
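The prediction-error minimization at the heart of this "hierarchical prediction machine" account can be made concrete with a deliberately minimal sketch (hypothetical illustrative code, not from the article): a single top-down estimate repeatedly issues a prediction, receives the bottom-up prediction error, and updates itself to reduce that error.

```python
# Minimal illustrative sketch of prediction-error minimization:
# a higher level holds an estimate of the cause of its input, sends a
# top-down prediction, and is corrected by the bottom-up prediction error.

def settle(sensory_input, estimate=0.0, lr=0.1, steps=100):
    """Iteratively reduce prediction error for a single scalar feature."""
    for _ in range(steps):
        prediction = estimate                # top-down prediction
        error = sensory_input - prediction   # bottom-up prediction error
        estimate += lr * error               # update to shrink the error
    return estimate

# As the prediction error shrinks, the estimate converges on the input.
final = settle(sensory_input=5.0)
```

In the full framework this update runs simultaneously at every level of a cortical hierarchy, with errors weighted by their estimated precision (which the account links to attention); the toy version above only shows the core error-correction loop.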
Article
Full-text available
Can the phenomenal character of perceptual experience be altered by the states of one's cognitive system, for example, one's thoughts or beliefs? If one thinks that this can happen (at least in certain ways that are identified in the paper) then one thinks that there can be cognitive penetration of perceptual experience; otherwise, one thinks that perceptual experience is cognitively impenetrable. I claim that there is one alleged case of cognitive penetration that cannot be explained away by the standard strategies one can typically use to explain away alleged cases. The case is one in which it seems subjects' beliefs about the typical colour of objects affects their colour experience. I propose a two-step mechanism of indirect cognitive penetration that explains how cognitive penetration may occur. I show that there is independent evidence that each step in this process can occur. I suspect that people who are opposed to the idea that perceptual experience is cognitively penetrable will be less opposed to the idea when they come to consider this indirect mechanism and that those who are generally sympathetic to the idea of cognitive penetrability will welcome the elucidation of this plausible mechanism.
Article
Full-text available
In this paper, I examine the processes that occur in late vision and address the problem of whether late vision should be construed as a properly speaking perceptual stage, or as a thought-like discursive stage. Specifically, I argue that late vision, its (partly) conceptual nature notwithstanding, neither is constituted by nor does it implicate what I call pure thoughts, that is, propositional structures that are formed in the cognitive areas of the brain through, and participate in, discursive reasoning and inferences. At the same time, the output of late vision, namely an explicit belief concerning the identity and category membership of an object (that is, a recognitional belief) or its features, eventually enters into discursive reasoning. Using Jackendoff's distinction between visual awareness, which characterizes perception, and visual understanding, which characterizes pure thought, I claim that the contents of late vision belong to visual awareness and not to visual understanding and that although late vision implicates beliefs, either implicit or explicit, these beliefs are hybrid visual/conceptual constructs and not pure thoughts. Distinguishing between these hybrid representations and pure thoughts and delineating the nature of the representations of late vision lays the ground for examining, among other things, the process of conceptualization that occurs in visual processing and the way concepts modulate perceptual content affecting either its representational or phenomenal character. I also do not discuss the epistemological relations between the representations of late vision and the perceptual judgments they "support" or "guide" or "render possible" or "evidence" or "entitle." However, the specification of the epistemology of late vision lays the ground for attacking that problem as well.
Article
Full-text available
We report a series of experiments utilizing the binocular rivalry paradigm designed to investigate whether auditory semantic context modulates visual awareness. Binocular rivalry refers to the phenomenon whereby when two different figures are presented to each eye, observers perceive each figure as being dominant in alternation over time. The results demonstrate that participants report a particular percept as being dominant for less of the time when listening to an auditory soundtrack that happens to be semantically congruent with the other alternative (i.e., the competing) percept, as compared to when listening to an auditory soundtrack that was irrelevant to both visual figures (Experiment 1A). When a visually presented word was provided as a semantic cue, no such semantic modulatory effect was observed (Experiment 1B). We also demonstrate that the crossmodal semantic modulation of binocular rivalry was robustly observed irrespective of participants' attentional control over the dichoptic figures and the relative luminance contrast between the figures (Experiments 2A and 2B). The pattern of crossmodal semantic effects reported here cannot simply be attributed to the meaning of the soundtrack guiding participants' attention or biasing their behavioral responses. Hence, these results support the claim that crossmodal perceptual information can serve as a constraint on human visual awareness in terms of their semantic congruency.
Article
Full-text available
This article presents the principles of an adaptive mixed reality rehabilitation (AMRR) system, as well as the training process and results from 2 stroke survivors who received AMRR therapy, to illustrate how the system can be used in the clinic. The AMRR system integrates traditional rehabilitation practices with state-of-the-art computational and motion capture technologies to create an engaging environment to train reaching movements. The system provides real-time, intuitive, and integrated audio and visual feedback (based on detailed kinematic data) representative of goal accomplishment, activity performance, and body function during a reaching task. The AMRR system also provides a quantitative kinematic evaluation that measures the deviation of the stroke survivor's movement from an idealized, unimpaired movement. The therapist, using the quantitative measure and knowledge and observations, can adapt the feedback and physical environment of the AMRR system throughout therapy to address each participant's individual impairments and progress. Individualized training plans, kinematic improvements measured over the entire therapy period, and the changes in relevant clinical scales and kinematic movement attributes before and after the month-long therapy are presented for 2 participants. The substantial improvements made by both participants after AMRR therapy demonstrate that this system has the potential to considerably enhance the recovery of stroke survivors with varying impairments for both kinematic improvements and functional ability.
Article
Full-text available
Expertise with unfamiliar objects ('greebles') recruits face-selective areas in the fusiform gyrus (FFA) and occipital lobe (OFA). Here we extend this finding to other homogeneous categories. Bird and car experts were tested with functional magnetic resonance imaging during tasks with faces, familiar objects, cars and birds. Homogeneous categories activated the FFA more than familiar objects. Moreover, the right FFA and OFA showed significant expertise effects. An independent behavioral test of expertise predicted relative activation in the right FFA for birds versus cars within each group. The results suggest that level of categorization and expertise, rather than superficial properties of objects, determine the specialization of the FFA.
Article
Full-text available
Visual cognition is limited by computational capacity, because the brain can process only a fraction of the visual sensorium in detail, and by the inherent ambiguity of the information entering the visual system. Two mechanisms mitigate these burdens: attention prioritizes stimulus processing on the basis of motivational relevance, and expectations constrain visual interpretation on the basis of prior likelihood. Of the two, attention has been extensively investigated while expectation has been relatively neglected. Here, we review recent work that has begun to delineate a neurobiology of visual expectation, and contrast the findings with those of the attention literature, to explore how these two central influences on visual perception overlap, differ and interact.
Article
Full-text available
Our voluntary behaviors are thought to be controlled by top-down signals from the prefrontal cortex that modulate neural processing in the posterior cortices according to the behavioral goal. However, we have insufficient evidence for the causal effect of the top-down signals. We applied a single-pulse transcranial magnetic stimulation over the human prefrontal cortex and measured the strength of the top-down signals as an increase in the efficiency of neural impulse transmission. The impulse induced by the stimulation transmitted to different posterior visual areas depending on the domain of visual features to which subjects attended. We also found that the amount of impulse transmission was associated with the level of attentional preparation and the performance of visual selective-attention tasks, consistent with the causal role of prefrontal top-down signals.
Article
Full-text available
Perception arises through an interaction between sensory input and prior knowledge. We propose that at least two brain areas are required for such an interaction: the 'site' where analysis of afferent signals occurs and the 'source' which applies the relevant prior knowledge. In the human brain, functional imaging studies have demonstrated that selective attention modifies activity in early visual processing areas specific to the attended feature. Early processing areas are also modified when prior knowledge permits a percept to emerge from an otherwise meaningless stimulus. Sources of this modification have been identified in parietal cortex and in prefrontal cortex. Modification of early processing areas also occurs on the basis of prior knowledge about the predicted sensory effects of the subject's own actions. Activity associated with mental imagery resembles that associated with response preparation (for motor imagery) and selective attention (for sensory imagery) suggesting that mental imagery reflects the effects of prior knowledge on sensory processing areas in the absence of sensory input. Damage to sensory processing areas can lead to a form of sensory hallucination which seems to arise from the interaction of prior knowledge with random sensory activity. In contrast, hallucinations associated with schizophrenia may arise from a failure of prior knowledge about motor intentions to modify activity in relevant sensory areas. When functioning normally, this mechanism permits us to distinguish our own actions from those of independent agents in the outside world. Failure to make this distinction correctly may account for the strong association between hallucinations and paranoid delusions in schizophrenia; the patient not only hears voices, but attributes (usually hostile) intentions to these voices.
Article
Full-text available
An analysis of response latencies shows that when an image is presented to the visual system, neuronal activity is rapidly routed to a large number of visual areas. However, the activity of cortical neurons is not determined by this feedforward sweep alone. Horizontal connections within areas, and higher areas providing feedback, result in dynamic changes in tuning. The differences between feedforward and recurrent processing could prove pivotal in understanding the distinctions between attentive and pre-attentive vision as well as between conscious and unconscious vision. The feedforward sweep rapidly groups feature constellations that are hardwired in the visual brain, yet is probably incapable of yielding visual awareness; in many cases, recurrent processing is necessary before the features of an object are attentively grouped and the stimulus can enter consciousness.
Article
Full-text available
We previously showed that feedback connections from MT play a role in figure/ground segmentation. Figure/ground coding has been described at the V1 level in the late part of the neuronal responses to visual stimuli, and it has been suggested that these late modulations depend on feedback connections. In the present work we tested whether it actually takes time for this information to be fed back to lower order areas. We analyzed the extracellular responses of 169 V1, V2, and V3 neurons that we recorded in two anesthetized macaque monkeys. MT was inactivated by cooling. We studied the time course of the responses of the neurons that were significantly affected by the inactivation of MT to see whether the effects were delayed relative to the onset of the response. We first measured the time course of the feedback influences from MT on V1, V2, and V3 neurons tested with moving stimuli. For the large majority of the 51 neurons for which the response decreased, the effect was present from the beginning of the response. In the responses averaged after normalization, the decrease of response was significant in the first 10-ms bin of response. A similar result was found for six neurons for which the response significantly increased when MT was inactivated. We then looked at the time course of the responses to flashed stimuli (95 neurons). We observed 15 significant decreases of response and 14 significant increases. In both populations, the effects were significant within the first 10 ms of response. For some neurons with increased responses we even observed a shorter latency when MT was inactivated. We measured the latency of the response to the flashed stimuli. We found that even the earliest responding neurons were affected early by the feedback from MT. This was true for the response to flashed and to moving stimuli. These results show that feedback connections are recruited very early for the processing of visual information. They further indicate that the presence or absence of feedback effects cannot be deduced from the time course of the response modulations.
Article
Full-text available
This study provides a time frame for the initial trajectory of activation flow along the dorsal and ventral visual processing streams and for the initial activation of prefrontal cortex in the human. We provide evidence that this widespread system of sensory, parietal, and prefrontal areas is activated in less than 30 ms, which is considerably shorter than typically assumed in the human event-related potential (ERP) literature and is consistent with recent intracranial data from macaques. We find a mean onset latency of activity over occipital cortex (C1(e)) at 56 ms, with dorsolateral frontal cortex subsequently active by just 80 ms. Given that activity in visual sensory areas typically continues for 100-400 ms prior to motor output, this rapid system-wide activation provides a time frame for the initiation of feedback processes onto sensory areas. There is clearly sufficient time for multiple iterations of interactive processing between sensory, parietal, and frontal areas during brief (e.g., 200 ms) periods of information processing preceding motor output. High-density electrical mapping also suggested activation in dorsal stream areas preceding ventral stream areas. Our data suggest that multiple visual generators are active in the latency range of the traditional C1 component of the ERP, which has often been taken to represent V1 activity alone. Based on the temporal pattern of activation shown in primate recordings and the evidence from these human recordings, we propose that only the initial portion of the C1 component (approximately the first 10-15 ms; C1(e)) is likely to represent a response that is predominated by V1 activity. These data strongly suggest that activity represented in the "early" ERP components such as P1 and N1 (and possibly even C1) is likely to reflect relatively late processing, after the initial volley of sensory afference through the visual system and involving top-down influences from parietal and frontal regions.
Article
Full-text available
Frontal eye field (FEF) neurons discharge in response to behaviorally relevant stimuli that are potential targets for saccades. Distinct visual and motor processes have been dissociated in the FEF of macaque monkeys, but little is known about the visual processing capacity of FEF in humans. We used double-pulse transcranial magnetic stimulation [(d)TMS] to investigate the timing of target discrimination during visual conjunction search. We applied dual TMS pulses separated by 40 msec over the right FEF and vertex. These were applied in five timing conditions to sample separate time windows within the first 200 msec of visual processing. (d)TMS impaired search performance, reflected in reduced d' scores. This effect was limited to a time window between 40 and 80 msec after search array onset. These parameters correspond with single-cell activity in FEF that predicts monkeys' behavioral reports on hit, miss, false alarm, and correct rejection trials. Our findings demonstrate a crucial early role for human FEF in visual target discrimination that is independent of saccade programming.
Article
Full-text available
A key question in understanding visual awareness is whether any single cortical area is indispensable. In a transcranial magnetic stimulation experiment, we show that observers' awareness of activity in extrastriate area V5 depends on the amount of activity in striate cortex (V1). From the timing and pattern of effects, we infer that back-projections from extrastriate cortex influence information content in V1, but it is V1 that determines whether that information reaches awareness.
Article
Using fMRI decoding techniques we recently demonstrated that early visual cortex contains content-specific information from sounds in the absence of visual stimulation (Vetter, Smith & Muckli, Current Biology, 2014). Here we studied whether the emotional valence of sounds can be decoded in early visual cortex during emotionally ambiguous visual stimulation. Participants viewed video clips in which two point-light walkers interacted with each other, either emotionally neutrally (having a normal conversation) or emotionally negatively (having an argument). Videos were paired with low-pass filtered soundtracks of this interaction either congruently or incongruently. Participants' task was to judge the overall emotion of the interaction. The emotionally ambiguous condition consisted of the neutral visual stimulus which could be interpreted as either a negative or neutral interaction depending on the soundtrack. The emotionally unambiguous condition consisted of the negative visual stimulus which was judged as negative independently of soundtrack (as confirmed behaviourally). Functional MRI data were recorded while participants viewed and judged the interaction. Activity patterns from early visual cortex (as identified with individual retinotopic mapping) were fed into a multi-variate pattern classification analysis. When the visual stimulus was neutral, and thus emotionally ambiguous, the emotional valence of sounds could be decoded significantly above chance in V1. However, when the visual stimulus was negative, and thus emotionally unambiguous, emotional valence of sounds could not be decoded in early visual cortex. Furthermore, emotional valence of the visual stimulus was decoded in both early visual and auditory cortex independent of soundtrack. The results suggest that emotional valence of sounds is contained in early visual cortex activity when visual information is emotionally ambiguous, but not when it is emotionally unambiguous. 
Feedback from audition may thus help the visual system to resolve ambiguities when interpreting a visual scene, and so may serve a function in perception. Meeting abstract presented at VSS 2016
Article
Some everyday objects are associated with a particular color, such as bananas, which are typically yellow. Behavioral studies show that perception of these so-called color-diagnostic objects is influenced by our knowledge of their typical color, referred to as memory color [1,2]. However, neural representations of memory colors are unknown. Here we investigated whether memory color can be decoded from visual cortex activity when color-diagnostic objects are viewed as grayscale images. We trained linear classifiers to distinguish patterns of fMRI responses to four different hues. We found that activity in V1 predicted the memory color of color-diagnostic objects presented in grayscale in naive participants performing a motion task. The results imply that higher areas feed back memory-color signals to V1. When classifiers were trained on neural responses to some exemplars of color-diagnostic objects and tested on others, areas V4 and LOC also predicted memory colors. Representational similarity analysis showed that memory-color representations in V1 were correlated specifically with patterns in V4 but not LOC. Our findings suggest that prior knowledge is projected from midlevel visual regions onto primary visual cortex, consistent with predictive coding theory [3].
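As a rough illustration of the decoding logic this abstract describes, the sketch below trains a simple nearest-centroid classifier on simulated voxel patterns for four hues. All data, dimensions, and names are invented for illustration; this is not the authors' analysis pipeline (which used linear classifiers on real fMRI responses):

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 20
hues = ["red", "green", "blue", "yellow"]

# Simulated V1 response patterns: each hue gets a distinct mean pattern
# (its "neural prototype") plus trial-by-trial Gaussian noise.
prototypes = {h: rng.normal(0.0, 1.0, n_voxels) for h in hues}

def simulate(hue, n):
    return prototypes[hue] + rng.normal(0.0, 0.5, (n, n_voxels))

train_set = {h: simulate(h, n_trials) for h in hues}
test_set = {h: simulate(h, n_trials) for h in hues}

# Nearest-centroid decoder: assign each held-out pattern to the hue
# whose mean training pattern it is closest to in Euclidean distance.
centroids = {h: train_set[h].mean(axis=0) for h in hues}

def decode(pattern):
    return min(hues, key=lambda h: np.linalg.norm(pattern - centroids[h]))

correct = sum(decode(x) == h for h in hues for x in test_set[h])
accuracy = correct / (len(hues) * n_trials)
print(accuracy)  # well above the 0.25 chance level for these noise settings
```

Above-chance accuracy on held-out trials is the criterion the decoding literature uses to say a region "contains" information about a stimulus feature.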
Article
The fusiform face area (FFA) is a well-studied human brain region that shows strong activation for faces. In functional MRI studies, FFA is often assumed to be a homogeneous collection of voxels with similar visual tuning. To test this assumption, we used natural movies and a quantitative voxelwise modeling and decoding framework to estimate category tuning profiles for individual voxels within FFA. We find that the responses in most FFA voxels are strongly enhanced by faces, as reported in previous studies. However, we also find that responses of individual voxels are selectively enhanced or suppressed by a wide variety of other categories and that these broader tuning profiles differ across FFA voxels. Cluster analysis of category tuning profiles across voxels reveals three spatially segregated functional subdomains within FFA. These subdomains differ primarily in their responses for nonface categories, such as animals, vehicles, and communication verbs. Furthermore, this segregation does not depend on the statistical threshold used to define FFA from responses to functional localizers. These results suggest that voxels within FFA represent more diverse information about object and action categories than generally assumed.
Article
What kind of information is found in visual experience, and what kind can be found only in judgments made on its basis? Do we visually experience arrays of colored shapes, variously illuminated, and sometimes moving? Or does visual experience involve more complex features, such as personal identity, causation, and kinds such as bicycle, keys, and cars? This chapter argues that kind properties can be represented in experience. The contents of visual experience are not limited to color, shape, illumination, and motion.
Article
Re-entrant or feedback pathways between cortical areas carry rich and varied information about behavioural context, including attention, expectation, perceptual tasks, working memory and motor commands. Neurons receiving such inputs effectively function as adaptive processors that are able to assume different functional states according to the task being executed. Recent data suggest that the selection of particular inputs, representing different components of an association field, enable neurons to take on different functional roles. In this Review, we discuss the various top-down influences exerted on the visual cortical pathways and highlight the dynamic nature of the receptive field, which allows neurons to carry information that is relevant to the current perceptual demands.
Article
Sounds can modulate visual perception as well as neural activity in retinotopic cortex. Most studies in this context investigated how sounds change neural amplitude and oscillatory phase reset in visual cortex. However, recent studies in macaque monkeys show that congruence of audio-visual stimuli also modulates the amount of stimulus information carried by spiking activity of primary auditory and visual neurons. Here, we used naturalistic video stimuli and recorded the spatial patterns of functional MRI signals in human retinotopic cortex to test whether the discriminability of such patterns varied with the presence and congruence of co-occurring sounds. We found that incongruent sounds significantly impaired stimulus decoding from area V2 and there was a similar trend for V3. This effect was associated with reduced inter-trial reliability of patterns (i.e. higher levels of noise), but was not accompanied by any detectable modulation of overall signal amplitude. We conclude that sounds modulate naturalistic stimulus encoding in early human retinotopic cortex without affecting overall signal amplitude. Subthreshold modulation, oscillatory phase reset and dynamic attentional modulation are candidate neural and cognitive mechanisms mediating these effects.
Article
Humans can see and name thousands of distinct object and action categories, so it is unlikely that each category is represented in a distinct brain area. A more efficient scheme would be to represent categories as locations in a continuous semantic space mapped smoothly across the cortical surface. To search for such a space, we used fMRI to measure human brain activity evoked by natural movies. We then used voxelwise models to examine the cortical representation of 1,705 object and action categories. The first few dimensions of the underlying semantic space were recovered from the fit models by principal components analysis. Projection of the recovered semantic space onto cortical flat maps shows that semantic selectivity is organized into smooth gradients that cover much of visual and nonvisual cortex. Furthermore, both the recovered semantic space and the cortical organization of the space are shared across different individuals.
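The principal-components step described above can be sketched on simulated voxelwise model weights. The low-dimensional latent structure here is fabricated purely to show how PCA recovers a compact semantic space from a voxels-by-categories weight matrix; the sizes and generative model are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels, n_categories, n_dims = 200, 50, 3

# Simulated voxelwise model weights: each voxel's tuning across categories
# is generated from a 3-dimensional latent semantic space plus small noise.
latent = rng.normal(0.0, 1.0, (n_categories, n_dims))
voxel_loadings = rng.normal(0.0, 1.0, (n_voxels, n_dims))
weights = voxel_loadings @ latent.T + rng.normal(0.0, 0.1, (n_voxels, n_categories))

# PCA via SVD of the centered weight matrix: the leading right singular
# vectors are the dominant semantic dimensions shared across voxels.
centered = weights - weights.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = (S ** 2) / (S ** 2).sum()
print(explained[:3].sum())  # the first few components capture most variance
```

Projecting each voxel's weights onto the leading components (rows of `Vt`) would give the per-voxel coordinates that the study maps onto cortical flat maps.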
Article
Prior expectations about the visual world facilitate perception by allowing us to quickly deduce plausible interpretations from noisy and ambiguous data. The neural mechanisms of this facilitation remain largely unclear. Here, we used functional magnetic resonance imaging (fMRI) and multivariate pattern analysis (MVPA) techniques to measure both the amplitude and representational content of neural activity in the early visual cortex of human volunteers. We find that while perceptual expectation reduces the neural response amplitude in the primary visual cortex (V1), it improves the stimulus representation in this area, as revealed by MVPA. This informational improvement was independent of attentional modulations by task relevance. Finally, the informational improvement in V1 correlated with subjects' behavioral improvement when the expected stimulus feature was relevant. These data suggest that expectation facilitates perception by sharpening sensory representations.
Article
The conceptual system contains categorical knowledge about experience that supports the spectrum of cognitive processes. Cognitive science theories assume that categorical knowledge resides in a modular and amodal semantic memory, whereas neuroscience theories assume that categorical knowledge is grounded in the brain's modal systems for perception, action, and affect. Neuroscience has influenced theories of the conceptual system by stressing principles of neural processing in neural networks and by motivating grounded theories of cognition, which propose that simulations of experience represent knowledge. Cognitive science has influenced theories of the conceptual system by documenting conceptual phenomena and symbolic operations that must be grounded in the brain. Significant progress in understanding the conceptual system is most likely to occur if cognitive and neural approaches achieve successful integration.
Article
We report a series of experiments designed to demonstrate that the presentation of a sound can facilitate the identification of a concomitantly presented visual target letter in the backward masking paradigm. Two visual letters, serving as the target and its mask, were presented successively at various interstimulus intervals (ISIs). The results demonstrate that the crossmodal facilitation of participants' visual identification performance elicited by the presentation of a simultaneous sound occurs over a very narrow range of ISIs. This critical time-window lies just beyond the interval needed for participants to differentiate the target and mask as constituting two distinct perceptual events (Experiment 1) and can be dissociated from any facilitation elicited by making the visual target physically brighter (Experiment 2). When the sound is presented at the same time as the mask, a facilitatory, rather than an inhibitory effect on visual target identification performance is still observed (Experiment 3). We further demonstrate that the crossmodal facilitation of the visual target by the sound depends on the establishment of a reliable temporally coincident relationship between the two stimuli (Experiment 4); however, by contrast, spatial coincidence is not necessary (Experiment 5). We suggest that when visual and auditory stimuli are always presented synchronously, a better-consolidated object representation is likely to be constructed (than that resulting from unimodal visual stimulation).
Article
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
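The signal-detection quantities this abstract relies on (sensitivity d' and the response criterion c) can be computed from trial counts as below. The log-linear correction and the example counts are illustrative assumptions, not values from the study:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute d' (sensitivity) and c (response criterion) from trial counts.

    A simple correction (add 0.5 to each count's numerator, 1 to each
    denominator) keeps rates away from 0 and 1, avoiding infinite z-scores.
    """
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts from a picture detection task.
d, c = sdt_measures(hits=40, misses=10, false_alarms=5, correct_rejections=45)
print(round(d, 2), round(c, 2))
```

Separating d' from c is what lets studies like this one claim a genuine change in perceptual sensitivity rather than a shift in the observer's willingness to respond "present".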
Article
Although it is known that sounds can affect visual perception, the neural correlates for crossmodal interactions are still disputed. Previous tracer studies in non-human primates revealed direct anatomical connections between auditory and visual brain areas. We examined the structural connectivity of the auditory cortex in normal humans by diffusion-weighted tensor magnetic resonance imaging and probabilistic tractography. Tracts were seeded in Heschl's region or the planum temporale. Fibres crossed hemispheres at the posterior corpus callosum. Ipsilateral fibres seeded in Heschl's region projected to the superior temporal sulcus, the supramarginal gyrus and intraparietal sulcus and the occipital cortex including the calcarine sulcus. Fibres seeded in the planum temporale terminated primarily in the superior temporal sulcus, the supramarginal gyrus, the central sulcus and adjacent regions. Our findings suggest the existence of direct white matter connections between auditory and visual cortex--in addition to subcortical, temporal and parietal connections.
Article
Are the kinds of abnormal cross-modal interactions seen in synaesthesia or following brain damage due to hyperconnectivity between or within brain areas, or are they a result of lack of inhibition? This question is highly contested. Here we show that posthypnotic suggestion induces abnormal cross-modal experience similar to that observed in congenital grapheme-color synaesthesia. Given the short time frame of the experiment, it is unlikely that new cortical connections were established, so we conclude that synaesthesia can result from disinhibition between brain areas.
Article
A single visual stimulus activates neurons in many different cortical areas. A major challenge in cortical physiology is to understand how the neural activity in these numerous active zones leads to a unified percept of the visual scene. The anatomical basis for these interactions is the dense network of connections that link the visual areas. Within this network, feedforward connections transmit signals from lower-order areas such as V1 or V2 to higher-order areas. In addition, there is a dense web of feedback connections which, despite their anatomical prominence, remain functionally mysterious. Here we show, using reversible inactivation of a higher-order area (monkey area V5/MT), that feedback connections serve to amplify and focus activity of neurons in lower-order areas, and that they are important in the differentiation of figure from ground, particularly in the case of stimuli of low visibility. More specifically, we show that feedback connections facilitate responses to objects moving within the classical receptive field; enhance suppression evoked by background stimuli in the surrounding region; and have the strongest effects for stimuli of low salience.
Article
To test whether the human fusiform face area (FFA) responds not only to faces but to anything human or animate, we used fMRI to measure the response of the FFA to six new stimulus categories. The strongest responses were to stimuli containing faces: human faces (2.0% signal increase from fixation baseline) and human heads (1.7%), with weaker but still strong responses to whole humans (1.5%) and animal heads (1.3%). Responses to whole animals (1.0%) and human bodies without heads (1.0%) were significantly stronger than responses to inanimate objects (0.7%), but responses to animal bodies without heads (0.8%) were not. These results demonstrate that the FFA is selective for faces, not for animals.
Article
Prior to the twentieth century, theories of knowledge were inherently perceptual. Since then, developments in logic, statistics, and programming languages have inspired amodal theories that rest on principles fundamentally different from those underlying perception. In addition, perceptual approaches have become widely viewed as untenable because they are assumed to implement recording systems, not conceptual systems. A perceptual theory of knowledge is developed here in the context of current cognitive science and neuroscience. During perceptual experience, association areas in the brain capture bottom-up patterns of activation in sensory-motor areas. Later, in a top-down manner, association areas partially reactivate sensory-motor areas to implement perceptual symbols. The storage and reactivation of perceptual symbols operates at the level of perceptual components--not at the level of holistic perceptual experiences. Through the use of selective attention, schematic representations of perceptual components are extracted from experience and stored in memory (e.g., individual memories of green, purr, hot). As memories of the same component become organized around a common frame, they implement a simulator that produces limitless simulations of the component (e.g., simulations of purr). Not only do such simulators develop for aspects of sensory experience, they also develop for aspects of proprioception (e.g., lift, run) and introspection (e.g., compare, memory, happy, hungry). Once established, these simulators implement a basic conceptual system that represents types, supports categorization, and produces categorical inferences. These simulators further support productivity, propositions, and abstract concepts, thereby implementing a fully functional conceptual system. Productivity results from integrating simulators combinatorially and recursively to produce complex simulations. 
Propositions result from binding simulators to perceived individuals to represent type-token relations. Abstract concepts are grounded in complex simulations of combined physical and introspective events. Thus, a perceptual theory of knowledge can implement a fully functional conceptual system while avoiding problems associated with amodal symbol systems. Implications for cognition, neuroscience, evolution, development, and artificial intelligence are explored.
Article
Although the study of visual perception has made more progress in the past 40 years than any other area of cognitive science, there remain major disagreements as to how closely vision is tied to cognition. This target article sets out some of the arguments for both sides (arguments from computer vision, neuroscience, psychophysics, perceptual learning, and other areas of vision science) and defends the position that an important part of visual perception, corresponding to what some people have called early vision, is prohibited from accessing relevant expectations, knowledge, and utilities in determining the function it computes – in other words, it is cognitively impenetrable. That part of vision is complex and involves top-down interactions that are internal to the early vision system. Its function is to provide a structured representation of the 3-D surfaces of objects sufficient to serve as an index into memory, with somewhat different outputs being made available to other systems such as those dealing with motor control. The paper also addresses certain conceptual and methodological issues raised by this claim, such as whether signal detection theory and event-related potentials can be used to assess cognitive penetration of vision.
Article
Much is known about the pathways from photoreceptors to higher visual areas in the brain. However, how we become aware of what we see or of having seen at all is a problem that has eluded neuroscience. Recordings from macaque V1 during deactivation of MT+/V5 and psychophysical studies of perceptual integration suggest that feedback from secondary visual areas to V1 is necessary for visual awareness. We used transcranial magnetic stimulation to probe the timing and function of feedback from human area MT+/V5 to V1 and found its action to be early and critical for awareness of visual motion.