Optimizing vs. Matching: Response Strategy in a Probabilistic Learning Task Is Associated with Negative Symptoms of Schizophrenia

Department of Psychiatry, Maryland Psychiatric Research Center, University of Maryland School of Medicine, PO Box 21247, Baltimore, MD 21228, USA.
Schizophrenia Research (Impact Factor: 4.43). 04/2011; 127(1-3):215-22. DOI: 10.1016/j.schres.2010.12.003
Source: PubMed

ABSTRACT Previous research indicates that behavioral performance in simple probability learning tasks can be organized into response strategy classifications that are thought to predict important personal characteristics and individual differences. Typically, a relatively small proportion of subjects can be identified as optimizers, who effectively exploit the environment by choosing the more rewarding stimulus nearly all of the time. In contrast, the vast majority of subjects behave sub-optimally, adopting a matching or super-matching strategy and apportioning their responses in a way that matches or slightly exceeds the probabilities of reinforcement. In the present study, we administered a two-choice probability learning paradigm to 51 individuals with schizophrenia (SZ) and 29 healthy controls (NC) to examine whether there are differences in the proportion of subjects falling into these response strategy classifications, and to determine whether task performance is differentially associated with symptom severity and neuropsychological functioning. Although the sample of SZ patients did not differ from NC in overall rate of learning or end performance, significant clinical differences emerged when patients were divided into optimizing, super-matching and matching subgroups based upon task performance. Patients classified as optimizers, who adopted the most advantageous learning strategy, exhibited higher levels of positive and negative symptoms than their matching and super-matching counterparts. Importantly, when both positive and negative symptoms were considered together, only negative symptom severity was a significant predictor of whether a subject would behave optimally, with each one standard deviation increase in negative symptoms increasing the odds of a patient being an optimizer by as much as 80%. These data provide a rare example of a greater clinical impairment being associated with better behavioral performance.
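The strategy classifications described above can be illustrated with a minimal sketch. The reinforcement probability of the richer stimulus (P_RICH) and the classification cutoffs below are illustrative assumptions for a hypothetical 70/30 schedule, not the criteria used in the study:

```python
import random

# Hypothetical sketch: classify response strategies in a two-choice
# probability learning task. P_RICH and the cutoffs are illustrative
# assumptions, not the thresholds from the paper.

P_RICH = 0.7  # richer option reinforced on 70% of trials (assumed)

def classify(prop_rich, p=P_RICH, tol=0.05):
    """Label a subject by the proportion of choices made to the richer option."""
    if prop_rich >= 0.9:
        return "optimizer"       # exploits the richer option nearly all the time
    if prop_rich > p + tol:
        return "super-matcher"   # choice rate exceeds the reinforcement probability
    return "matcher"             # choice rate roughly matches the 70/30 schedule

def observed_choice_rate(choice_rate, n_trials=200, seed=0):
    """Simulate a subject who chooses the richer option at a fixed rate."""
    rng = random.Random(seed)
    choices = [rng.random() < choice_rate for _ in range(n_trials)]
    return sum(choices) / n_trials

for rate in (0.70, 0.80, 0.97):
    print(rate, classify(observed_choice_rate(rate)))
```

Under this scheme a matcher's choice rate tracks the 0.7 reinforcement probability, while an optimizer approaches 1.0, the response-allocation pattern the abstract distinguishes.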

  • Source
    ABSTRACT: Abnormalities in the dopamine system have long been implicated in explanations of reinforcement learning and psychosis. The reward prediction error (RPE), a discrepancy between the predicted and actual rewards, is thought to be encoded by dopaminergic neurons. Dysregulation of dopamine systems could alter the appraisal of stimuli and eventually lead to schizophrenia. Accordingly, the measurement of RPE provides a potential behavioral index for the evaluation of brain dopamine activity and psychotic symptoms. Here, we assess two features potentially crucial to the RPE process, namely belief formation and belief perseveration, via a probability learning task and reinforcement-learning modeling. Forty-five patients with schizophrenia [26 high-psychosis and 19 low-psychosis, based on their p1 and p3 scores in the positive-symptom subscales of the Positive and Negative Syndrome Scale (PANSS)] and 24 controls were tested in a feedback-based dynamic reward task for their RPE-related decision making. While task scores across the three groups were similar, matching law analysis revealed that the reward sensitivities of both psychosis groups were lower than that of controls. Trial-by-trial data were further fit with a reinforcement learning model using the Bayesian estimation approach. Model fitting results indicated that both psychosis groups tended to update their reward values more rapidly than controls. Moreover, among the three groups, high-psychosis patients had the lowest degree of choice perseveration. Pooling the patients' data, we also found that patients' perseveration appears to be negatively correlated (p = 0.09, trending toward significance) with their PANSS p1 + p3 scores. Our method provides an alternative for investigating reward-related learning and decision making in basic and clinical settings.
    Frontiers in Psychology 11/2014; 5:1282. DOI:10.3389/fpsyg.2014.01282 · 2.80 Impact Factor
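The RPE update and choice-perseveration ideas in the abstract above can be sketched with a standard Rescorla-Wagner-style model. The learning rate (alpha), inverse temperature (beta), and perseveration weight (phi) below are generic free parameters of this family of models; the paper's actual model specification and fitted values may differ:

```python
import math

# Minimal Rescorla-Wagner sketch of a reward prediction error (RPE) update
# plus a softmax choice rule with a perseveration bonus. All parameter
# values are illustrative, not the paper's fitted estimates.

def rpe_update(value, reward, alpha):
    """delta = reward - prediction; move the value toward the reward by alpha."""
    delta = reward - value          # the reward prediction error
    return value + alpha * delta, delta

def choice_probs(values, last_choice, beta=3.0, phi=0.5):
    """Softmax over values, with a sticky bonus (phi) for the previous choice."""
    bonuses = [phi if i == last_choice else 0.0 for i in range(len(values))]
    exps = [math.exp(beta * v + b) for v, b in zip(values, bonuses)]
    z = sum(exps)
    return [e / z for e in exps]

# A higher learning rate updates the reward value more rapidly,
# the pattern the abstract reports for the psychosis groups.
v_fast, _ = rpe_update(0.5, 1.0, alpha=0.8)   # 0.9
v_slow, _ = rpe_update(0.5, 1.0, alpha=0.2)   # 0.6
print(v_fast, v_slow)
```

A lower phi weakens the pull toward repeating the previous response, which corresponds to the reduced choice perseveration the abstract describes in high-psychosis patients.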
  • Source
    ABSTRACT: Abnormalities in reinforcement learning are a key finding in schizophrenia and have been proposed to be linked to elevated levels of dopamine neurotransmission. Behavioral deficits in reinforcement learning and their neural correlates may contribute to the formation of clinical characteristics of schizophrenia. The ability to form predictions about future outcomes is fundamental for environmental interactions and depends on neuronal teaching signals, like reward prediction errors. While aberrant prediction errors, which encode non-salient events as surprising, have been proposed to contribute to the formation of positive symptoms, a failure to build neural representations of decision values may result in negative symptoms. Here, we review behavioral and neuroimaging research in schizophrenia and focus on studies that implemented reinforcement learning models. In addition, we discuss studies that combined reinforcement learning with measures of dopamine. Thereby, we suggest how reinforcement learning abnormalities in schizophrenia may contribute to the formation of psychotic symptoms and may interact with cognitive deficits. These ideas point toward an interplay of more rigid versus flexible control over reinforcement learning. Pronounced deficits in the flexible or model-based domain may allow for a detailed characterization of well-established cognitive deficits in schizophrenia patients based on computational models of learning. Finally, we propose a framework based on the potentially crucial contribution of dopamine to dysfunctional reinforcement learning on the level of neural networks. Future research may strongly benefit from computational modeling but also requires further methodological improvement for clinical group studies. These research tools may help to improve our understanding of disease-specific mechanisms and may help to identify clinically relevant subgroups of the heterogeneous entity schizophrenia.
    Frontiers in Psychiatry 01/2013; 4:172. DOI:10.3389/fpsyt.2013.00172
  • Source
    ABSTRACT: Schizophrenia is associated with upregulation of dopamine (DA) release in the caudate nucleus. The caudate has dense connections with the orbitofrontal cortex (OFC) via the frontostriatal loops, and both areas exhibit pathophysiological change in schizophrenia. Despite evidence that abnormalities in dopaminergic neurotransmission and prefrontal cortex function co-occur in schizophrenia, the influence of OFC DA on caudate DA and reinforcement processing is poorly understood. To test the hypothesis that OFC dopaminergic dysfunction disrupts caudate dopamine function, we selectively depleted dopamine from the OFC of marmoset monkeys and measured striatal extracellular dopamine levels (using microdialysis) and dopamine D2/D3 receptor binding (using positron emission tomography), while modeling reinforcement-related behavior in a discrimination learning paradigm. OFC dopamine depletion caused an increase in tonic dopamine levels in the caudate nucleus and a corresponding reduction in D2/D3 receptor binding. Computational modeling of behavior showed that the lesion increased response exploration, reducing the tendency to persist with a recently chosen response side. This effect is akin to increased response switching previously seen in schizophrenia and was correlated with striatal but not OFC D2/D3 receptor binding. These results demonstrate that OFC dopamine depletion is sufficient to induce striatal hyperdopaminergia and changes in reinforcement learning relevant to schizophrenia.
    The Journal of Neuroscience : The Official Journal of the Society for Neuroscience 05/2014; 34(22):7663-76. DOI:10.1523/JNEUROSCI.0718-14.2014 · 6.75 Impact Factor
