Gilles Dutilh's scientific contributions

Publications (16)

Publications citing this author (681)

    • In a 4-parameter version of the model by Wallsten et al. (2005), for example, the α and µ parameters control the learning rate at which one's belief that a balloon will burst on a given trial is updated, γ+ represents the general propensity to take risks, and β captures the behavioral consistency of the agent. Crucially, Wallsten and colleagues showed that parameters recovered in their study correlated with self-reported indices of risky behaviors, supporting the view that their specification of the model captures the cognitive components of risk taking in the BART (but see a discussion of alternative models by van Ravenzwaaij et al., 2011). In fact, research has found that the differences between young and old adults in BART performance can be attributed to heightened reward sensitivity and the initial perception of risk (Cavanagh et al., 2012), as opposed to differences in the ability to update beliefs based on observed outcomes (Rolison et al., 2012).
    Full-text · Article · Aug 2016
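The excerpt above describes the four parameters only verbally. A minimal sketch of how such a BART model could be implemented is given below; the Beta-style belief update and the logistic stopping rule are our own illustrative assumptions, not necessarily the exact functional forms of Wallsten et al. (2005).

```python
import math

def update_belief(successes, failures, alpha, mu):
    """Beta-style update of the belief that a pump bursts the balloon.
    alpha sets the weight of the prior (learning rate), mu its mean.
    Illustrative assumption, not Wallsten et al.'s exact rule."""
    return (alpha * mu + failures) / (alpha + successes + failures)

def pump_probability(k, belief_burst, gamma_plus, beta):
    """Probability of taking pump k given the current burst belief.
    The target pump count grows with risk propensity gamma_plus and
    shrinks as believed burst risk rises; beta (behavioral consistency)
    sharpens or flattens the logistic stopping rule. Illustrative forms."""
    target = gamma_plus / -math.log(1.0 - belief_burst)
    return 1.0 / (1.0 + math.exp(beta * (k - target)))
```

With these stand-in forms, a more risk-seeking agent (larger γ+) keeps pumping longer, and a larger β makes stopping more deterministic around the target.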
    • However, when different RT data sets that each exhibited this pattern were analyzed with an evidence accumulation model, researchers found that the locus of the slow-down in older people was higher response caution, not a lower processing speed (Ratcliff, Thapar, & McKoon, 2001; Ratcliff, Thapar, Gomez, & McKoon, 2004). A parsimonious evidence accumulation model that retains all of the explanatory power of more complex models (e.g., Ratcliff & Rouder, 1998; Usher & McClelland, 2001), while having the advantage of being tractable, is the linear ballistic accumulator (LBA; Brown & Heathcote, 2008). The LBA has been applied to a number of perceptual discrimination paradigms (e.g., Ho, Brown, & Serences, 2009; Forstmann, Brown, Dutilh, Neumann, & Wagenmakers, 2010; Forstmann et al., 2008; Cassey, Heathcote, & Brown, 2014; van Ravenzwaaij, Provost, & Brown, 2016) and has been fit to tasks where the responses are categories (e.g., Hawkins et al., 2014; Trueblood, Brown, & Heathcote, 2014), which is the same set-up as used in a phoneme categorization task. Therefore, the LBA can be feasibly extended to model phonological decisions in categorization tasks that yield choice data and RTs.
    ABSTRACT: Listeners rely on multiple acoustic cues to recognize any phoneme. The relative contribution of these cues to listeners’ perception is typically inferred from listeners’ categorization of sounds in a two-alternative forced-choice task. Here we advocate the use of an evidence accumulation model to analyze categorization as well as response time data from such cue weighting paradigms in terms of the processes that underlie the listeners’ categorization. We tested 30 Dutch listeners on their categorization of speech sounds that varied between typical /A/ and /a:/ in vowel quality (F1 and F2) and duration. Using the linear ballistic accumulator model, we found that the changes in spectral quality and duration lead to changes in the speed of information processing, and the effects were larger for spectral quality. In addition, for stimuli with atypical spectral information, listeners accumulate evidence faster for /A/ compared to /a:/. Finally, longer durations of sounds did not produce longer estimates of perceptual encoding time. Our results demonstrate the utility of evidence accumulation models for learning about the latent processes that underlie phoneme categorization. The implications for current theory in speech perception as well as future directions for evidence accumulation models are discussed.
    Full-text · Article · Nov 2016
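Part of the LBA's tractability, mentioned in the excerpt above, is that each trial is a deterministic race once a start point and drift are drawn, so a single trial can be simulated directly. A sketch under illustrative parameter values (the threshold b, start-point range A, non-decision time t0, and drift standard deviation below are placeholders, not estimates from any data set):

```python
import random

def simulate_lba_trial(drifts, b=1.0, A=0.5, t0=0.2, sd=0.3, rng=random):
    """Simulate one linear ballistic accumulator (LBA; Brown & Heathcote,
    2008) trial. Each accumulator starts at a uniform point in [0, A] and
    moves ballistically toward threshold b with a normally distributed
    drift; the first to reach b determines the response."""
    finish = []
    for v in drifts:
        k = rng.uniform(0, A)        # start point for this accumulator
        d = rng.gauss(v, sd)         # trial-specific drift rate
        # a non-positive drift never reaches threshold (finite-time)
        finish.append((b - k) / d if d > 0 else float("inf"))
    t_min = min(finish)
    choice = finish.index(t_min)
    return choice, t0 + t_min        # (winning accumulator, RT)
```

Fitting the model then amounts to finding the drift, threshold, and start-point parameters whose predicted choice and RT distributions match the observed ones.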
    • Although this research has identified an error monitoring system in the medial frontal cortex that rapidly detects and evaluates errors (Ridderinkhof et al., 2004), the cognitive and behavioral consequences of error monitoring are still unclear. Some studies demonstrated that errors lead to adaptive adjustments that aim to prevent further errors (e.g., Dutilh et al., 2011; Maier et al., 2011), whereas others suggested that errors primarily elicit nonadaptive adjustments that impair performance even further (e.g., Van der Borght et al., 2014). In the present study, we applied a visual search task that allows for distinguishing between two stages of selective attention – target selection and target identification.
    ABSTRACT: Errors in speeded choice tasks can lead to post-error adjustments both on the behavioral and on the neural level. There is an ongoing debate whether such adjustments result from adaptive processes that serve to optimize performance or whether they reflect interference from error monitoring or attentional orientation. The present study aimed at identifying adaptive adjustments in a two-stage visual search task, in which participants had to select and subsequently identify a target stimulus presented to the left or right visual hemifield. Target selection and identification can be measured by two distinct event-related potentials, the N2pc and the SPCN. Using a decoder analysis based on multivariate pattern analysis, we were able to isolate the processing stages related to error sources and post-error adjustments. Whereas errors were linked to deviations in both the N2pc and the SPCN, only for the N2pc did we identify a post-error adjustment that exhibits key features of source-specific adaptivity. While errors were associated with an increased N2pc, post-error adjustments consisted of an N2pc decrease. We interpret this as an adaptive adjustment of target selection to prevent errors due to disproportionate processing of the task-irrelevant target location. Our study thus provides evidence for adaptive post-error adjustments in visual search.
    Full-text · Article · Apr 2017
    • Performance monitoring was studied in conditions with and without auditory cues by comparing response times (RTs) in pre-error trials to RTs in post-error trials. As discussed elsewhere (Dutilh, Vandekerckhove, Forstmann, Keuleers, Brysbaert, & Wagenmakers, 2012), this method is more valid than other methods, such as comparing post-correct trials to post-error trials. The average number of post-error trials was 7.2.
    ABSTRACT: The question of interest in this study was whether bilingual individuals show superior executive control compared to monolingual participants. Findings are mixed, with studies showing an advantage, a disadvantage, or no difference between bilingual and monolingual speakers. In this study, we used different experimental conditions to examine implicit learning, resistance to interference, monitoring, and switching independently. In addition, we matched our monolingual and bilingual participants on baseline response time. Bilingual participants demonstrated faster implicit learning, greater resistance to interference, and more efficient switching compared to monolingual participants. The groups did not differ in monitoring. In conclusion, depending on task complexity and on the target executive control component, there are different patterns of bilingual advantage, beyond the globally faster processing speed documented in previous studies. Bilingual young adults showed more efficient adjustments of the cognitive system in response to changes in task demands.
    Full-text · Article · Jan 2017
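The pre-error vs. post-error comparison referred to in the excerpt above (Dutilh et al., 2012) pairs each error with its own surrounding trials, so both means are sampled from the same stretches of the experiment. A minimal sketch; requiring the two flanking trials to be correct is our own simplifying assumption for the illustration:

```python
def robust_pes(rts, correct):
    """Post-error slowing as the mean of paired differences
    RT(post-error) - RT(pre-error), one pair per error, in the spirit of
    Dutilh et al. (2012). `rts` is a list of response times; `correct` is
    a same-length list of booleans (True = correct trial)."""
    diffs = []
    for i, ok in enumerate(correct):
        if not ok and 0 < i < len(correct) - 1:
            # only use errors flanked by correct trials, so the comparison
            # is not contaminated by further errors (illustrative choice)
            if correct[i - 1] and correct[i + 1]:
                diffs.append(rts[i + 1] - rts[i - 1])
    return sum(diffs) / len(diffs) if diffs else float("nan")
```

A positive value indicates slowing after errors; because pre- and post-error trials come from the same local stretch, slow drifts in vigilance cancel out, which is the advantage over the post-correct vs. post-error comparison noted above.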
    • Another theory posits that errors delay the start of information accrual on the following trial due to task-irrelevant factors like affective responses (Rabbitt & Rodgers, 1977). Recent computational work using diffusion models supported the idea that increased response caution gives rise to post-error adjustments (Dutilh et al., 2012), but other task-related factors and individual differences can also affect the psychological processes that result in post-error slowing, including error-related distraction away from task demands and delayed accumulation of sensory information (Dutilh, Forstmann, Vandekerckhove, & Wagenmakers, 2013). Although not explicitly accounted for in drift diffusion models, it remains plausible that the affective response to errors underlies response caution.
    ABSTRACT: Sexual dimorphism in the brain and cognition is a topic of widespread interest. Many studies of sex differences have focused on visuospatial and verbal abilities, but few studies have investigated sex differences in executive functions. We examined two key components of executive function - response inhibition and response monitoring - in healthy men (n = 285) and women (n = 346) performing the Stop-signal task. In this task, participants are required to make a key press to a stimulus, unless a tone is presented at some delay following the initial stimulus presentation; on these infrequent trials, participants are instructed to inhibit their planned response. Response inhibition was assessed with an estimate of the latency needed to inhibit a response (stop-signal reaction time), and response monitoring was measured by calculating the degree to which participants adjusted their reaction times based on the immediately preceding trial (e.g., speeding following correct trials and slowing following errors). There were no sex differences in overall accuracy or response inhibition, but women showed greater sensitivity to trial history. Women sped up more than men following correct 'Go' trials, and slowed down more than men following errors. These small but statistically significant effects (Cohen's d = 0.25-0.3) suggest more flexible adjustments in speed-accuracy trade-offs in women and greater cognitive flexibility associated with the responsive control of action.
    Full-text · Article · May 2014
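The stop-signal reaction time mentioned in the abstract above is commonly estimated with the integration method of the horse-race model (Logan & Cowan, 1984): find the go-RT quantile that matches the probability of responding on stop trials, then subtract the mean stop-signal delay. The abstract does not state which estimation method was used, so this is an illustrative sketch:

```python
def ssrt_integration(go_rts, p_respond_given_signal, mean_ssd):
    """Estimate stop-signal reaction time (SSRT) via the integration
    method: the nth quantile of the go-RT distribution (n = probability
    of responding despite the stop signal) minus the mean stop-signal
    delay (SSD). All times in seconds."""
    srt = sorted(go_rts)
    n = len(srt)
    # index of the go-RT at the p_respond quantile (0-based, clamped)
    idx = min(n - 1, max(0, int(round(p_respond_given_signal * n)) - 1))
    return srt[idx] - mean_ssd
```

For example, if a participant responds on half of the stop trials, the median go RT minus the mean SSD estimates how long the inhibition process needs to win the race.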
    • For example, Germar et al. (2014) fixed all three intertrial variabilities at zero (see also Ratcliff and Childers, 2015). Note that in earlier work, too, the intertrial variabilities have sometimes been fixed at zero, because the application of the EZ method does not allow these parameters to be included (e.g., Schmiedek et al., 2007; Wagenmakers et al., 2007, 2008b; Grasman et al., 2009; van Ravenzwaaij et al., 2012; Dutilh et al., 2013). Whereas Ratcliff and Rouder (1998) and Ratcliff and Tuerlinckx (2002), who argued for the inclusion of intertrial variabilities, typically used very high trial numbers (at least 1000 trials per participant), more recently the model has also been applied to data sets with considerably smaller trial numbers (e.g., with only 100; see Metin et al., 2013).
    ABSTRACT: The diffusion model (Ratcliff, 1978) takes into account the reaction time distributions of both correct and erroneous responses from binary decision tasks. This high degree of information usage allows the estimation of different parameters mapping cognitive components such as speed of information accumulation or decision bias. For three of the four main parameters (drift rate, starting point, and non-decision time), trial-to-trial variability is allowed. We investigated the influence of these variability parameters, drawing both on simulation studies and on data from an empirical test–retest study using different optimization criteria and different trial numbers. Our results suggest that less complex models (fixing intertrial variabilities of the drift rate and the starting point at zero) can improve the estimation of the psychologically most interesting parameters (drift rate, threshold separation, starting point, and non-decision time).
    Full-text · Article · Sep 2016
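The EZ method mentioned in the excerpt (Wagenmakers et al., 2007) maps three summary statistics — accuracy and the mean and variance of correct RTs — onto drift rate, boundary separation, and non-decision time in closed form, which is exactly why the intertrial variability parameters cannot be included. A sketch of those closed-form equations:

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimates (Wagenmakers et al., 2007).

    pc  : proportion of correct responses (must not be 0, 0.5, or 1)
    vrt : variance of correct response times (s^2)
    mrt : mean of correct response times (s)
    s   : scaling parameter (conventionally 0.1)
    Returns (drift rate v, boundary separation a, non-decision time Ter).
    """
    l = math.log(pc / (1.0 - pc))                      # logit of accuracy
    x = l * (l * pc**2 - l * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x ** 0.25   # drift rate
    a = s**2 * l / v                                   # boundary separation
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))
    return v, a, mrt - mdt                             # Ter = MRT - mean decision time
```

Because the three data moments determine the three parameters exactly, there is no freedom left to estimate trial-to-trial variability in drift, starting point, or non-decision time.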
    • Third, because the monkeys' performance in the tokens task was usually very good, in both the slow and fast blocks of trials (SP = 0.78 vs 0.73 for Monkey S, 0.74 vs 0.68 for Monkey Z), we tested for the possibility that most of our post-error trials come from periods of reduced performance, and that the differences we report are actually confounded by factors such as general vigilance and attention. To eliminate this confound, we followed the approach proposed by previous studies (Dutilh et al. 2012a; Purcell and Kiani 2016) and defined a new category of trials, named post-correct-pre-error trials (PCPE), whereby a trial following a correct choice also preceded an error. This criterion ensured that post-error and post-correct-pre-error trials were sampled from periods of time with similar levels of performance, and balanced the number of trials between the two categories.
    ABSTRACT: Recent studies have shown that activity in sensorimotor structures varies depending on the speed-accuracy trade-off (SAT) context in which a decision is made. Here, we test the hypothesis that the same areas also reflect a more local adjustment of SAT established between individual trials, based on the outcome of the previous decision. Two monkeys performed a reaching decision task in which sensory evidence continuously evolves during the time course of a trial. In two SAT contexts, we compared neural activity in trials following a correct choice versus those following an error. In dorsal premotor cortex (PMd), we found that 23% of cells exhibited significantly weaker baseline activity after error trials, and for approximately 30% of them this effect persisted into the deliberation epoch. These cells also contributed to the process of combining sensory evidence with the growing urgency to commit to a choice. We also found that the activity of 22% of PMd cells was increased after error trials. These neurons appeared to carry less information about sensory evidence and time-dependent urgency. For most of these modulated cells, the effect was independent of whether the previous error was expected or unexpected. We found similar phenomena in primary motor cortex (M1), with 25% of cells decreasing and 34% increasing activity after error trials, but unlike PMd, these neurons showed less clear differences in their response properties. These findings suggest that PMd and M1 belong to a network of brain areas involved in SAT adjustments established using the recent history of reinforcement.
    Full-text · Article · Nov 2016
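The PCPE selection rule described in the excerpt can be made concrete in a few lines; trial outcomes are taken as booleans, and the exact edge-handling at the ends of the sequence is our assumption:

```python
def pcpe_indices(correct):
    """Indices of post-correct-pre-error (PCPE) trials, as in the control
    analysis described above (Dutilh et al. 2012a; Purcell and Kiani
    2016): trials that follow a correct choice AND precede an error, so
    they come from the same stretches of performance as post-error trials."""
    return [i for i in range(1, len(correct) - 1)
            if correct[i - 1] and not correct[i + 1]]

def post_error_indices(correct):
    """Indices of trials immediately following an error."""
    return [i for i in range(1, len(correct))
            if not correct[i - 1]]
```

Comparing behavior or neural activity at `pcpe_indices` against `post_error_indices` then contrasts trials matched for local performance level, which is the point of the control.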
    • We used the diffusion model to further analyze the effects of our emotion induction. As for RTs, and in line with Dutilh et al. (2011), we found an overall sequence effect on all the model parameters (Ter, a, z, and v). However, happiness and sadness appear to have impeded the training effect, as we found a significant interaction between sequence and induction only for response-encoding time.
    ABSTRACT: This study measured the effect of emotional states on lexical decision task performance and investigated which underlying components (physiological, attentional orienting, executive, lexical, and/or strategic) are affected. We did this by assessing participants' performance on a lexical decision task, which they completed before and after an emotional state induction task. The sequence effect, usually produced when participants repeat a task, was significantly smaller in participants who had received one of the three emotion inductions (happiness, sadness, embarrassment) than in control group participants (neutral induction). Using the diffusion model (Ratcliff, 1978) to resolve the data into meaningful parameters that correspond to specific psychological components, we found that emotion induction only modulated the parameter reflecting the physiological and/or attentional orienting components, whereas the executive, lexical, and strategic components were not altered. These results suggest that emotional states have an impact on the low-level mechanisms underlying mental chronometric tasks.
    Full-text · Article · Mar 2017
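The diffusion-model parameters listed in the excerpt (Ter, a, z, v) can be read off a simple Euler simulation of a single trial; the step size and parameter values below are illustrative:

```python
import random

def simulate_ddm_trial(v, a, z, ter, dt=0.001, s=0.1, rng=random):
    """Euler simulation of one diffusion-model trial (Ratcliff, 1978).
    Evidence starts at z (0 < z < a), drifts at rate v with diffusion
    coefficient s, and a response is given when it crosses 0 or a;
    ter is added as non-decision (encoding + response) time."""
    x, t = z, 0.0
    step_sd = s * dt ** 0.5          # noise scales with sqrt(dt)
    while 0.0 < x < a:
        x += v * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (x >= a), ter + t         # (upper-boundary response?, RT)
```

Shifting z toward one boundary implements decision bias, widening a implements response caution, and changes in Ter show up as a pure shift of the whole RT distribution, which is how the sequence and induction effects above are separated.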
    • However, the latter investigations did not consider response times (RTs), which could potentially support different conclusions (Ratcliff & Starns, 2009). This turned out to be the case, with our analysis based on both RTs and accuracy clearly rejecting the equal-variance assumption (see also Dutilh et al., 2009, 2011, 2012). These results imply that researchers should take account of factors that affect the variability in evidence as well as its mean.
    ABSTRACT: The lexical-decision task is among the most commonly used paradigms in psycholinguistics. In both the signal-detection theory and Diffusion Decision Model (DDM; Ratcliff, Gomez, & McKoon, 2004) frameworks, lexical decisions are based on a continuous source of word-likeness evidence for both words and non-words. The Retrieving Effectively from Memory model of Lexical-Decision (REM–LD; Wagenmakers et al., 2004) provides a comprehensive explanation of lexical-decision data and makes the prediction that word-likeness evidence is more variable for words than non-words and that higher frequency words are more variable than lower frequency words. To test these predictions, we analyzed five lexical-decision data sets with the DDM. For all data sets, drift-rate variability changed across word frequency and non-word conditions. For the most part, REM–LD’s predictions about the ordering of evidence variability across stimuli in the lexical-decision task were confirmed.
    Full-text · Article · Feb 2017
    • Importantly, the decision bound did not differ between the two instruction conditions. As expected, instructions concerning speed/accuracy modulated the height of the decision bound (see also Forstmann et al., 2008; King, Korb, & Egner, 2012). The influence of these instructions was, however, not restricted to the decision bound.
    ABSTRACT: When performing a conflict task, performance is typically worse on trials with conflict between two responses (i.e., incongruent trials) compared to when there is no conflict (i.e., congruent trials), a finding known as the congruency effect. The congruency effect is reduced when the proportion of incongruent trials is high, relative to when most of the trials are congruent (i.e., the proportion congruency effect). In the current work, it was tested whether different kinds of instructions can be used to induce a proportion congruency effect, while holding the actual proportion of congruent trials constant. Participants were instructed to strategically use the (invalid) information that most of the trials would be congruent versus incongruent, or they were told to adopt a liberal versus a conservative response threshold. All strategies effectively altered the size of the congruency effect relative to baseline, although in terms of statistical significance the effect was mostly limited to the error rates. A diffusion-model analysis of the data was partially consistent with the hypothesis that both types of instructions induced a proportion congruency effect by means of different underlying mechanisms.
    Full-text · Article · Apr 2017