Emotional states and self-confidence independently
fluctuate at different time scales
María da Fonseca1,2*, Giovanni Maffei3, Aleksandar Matic3, Rubén Moreno-Bote1,4,5,
Alexandre Hyafil6
1Center for Brain and Cognition, Barcelona, Spain
2Laboratory of Applied Artificial Intelligence, Computer Science Institute, School of Exact
and Natural Science, University of Buenos Aires, Argentina
3Koa Health B.V., Barcelona, Spain
4Department of Information and Communication Technologies, Universitat Pompeu Fabra,
Barcelona, Spain
5Serra Hunter Fellow Programme, Universitat Pompeu Fabra, Barcelona, Spain
6Centre de Recerca Matemàtica, Bellaterra, Spain
*e-mail: mariadafon@gmail.com
Abstract
Emotional states are an important ingredient of decision-making. Human beings are immersed in a sea of emotions where episodes of high mood alternate with episodes of low mood. While changes in mood are well characterized, little is known about how these
fluctuations interact with metacognition, and in particular with our perception of having made
the right choice. Here, we evaluate how implicit measurements of confidence are related
with the emotional states of human participants through two online longitudinal experiments
involving mood self-reports and visual discrimination decision-making tasks. Self-confidence
was assessed in each session by monitoring the proportion of opt-out responses when an opt-out option was available, as well as the mean reaction times on standard correct trials. We first
report a strong coupling between the mood, stress, food enjoyment and quality of sleep
reported by participants in the same session. Second, we confirmed that the proportion of
opt-out responses as well as reaction times in non-opt-out trials provided reliable indices of
self-confidence in each session. We introduce a normative measure of overconfidence
based on the pattern of opt-out selection and the signal-detection-theory framework. Finally
and crucially, we found that mood, sleep quality, food enjoyment and stress level are not
coupled with self-confidence, but rather fluctuate at different time scales: emotional states exhibit faster fluctuations (over one day or half a day) than the self-confidence level (two-and-a-half days). Therefore, our findings suggest that emotional states and confidence
in decision making spontaneously fluctuate in an independent manner in the healthy adult
population.
Highlights
- Longitudinal study tracking affective states and decision uncertainty of subjects for a
period of 10 consecutive days in everyday life settings.
- Self-reported emotional states significantly correlate with each other.
- The proportion of opt-out responses (allowing skipping of the decision) and reaction
time in non-optout correct trials reflect decision uncertainty in two discrimination
tasks.
- There is no significant correlation between daily fluctuations of emotional states and
self-confidence markers.
- Emotional states and self-confidence fluctuate at different time scales.
Introduction
Emotions and cognition have long been known to interact (Damasio, 2008). In particular,
emotional states modulate decision-making and especially meta-cognition, that is, the monitoring of one's own thought processes and performance. For example, inducing a
transient state of sadness or anxiety can shift a subject’s willingness to perform a risky
decision (Raghunathan & Pham, 1999) or boost the accuracy of confidence judgments
(Massoni, 2014). On a longer time scale, personality traits also affect meta-cognitive
judgments (H. Xu, 2020), while metacognitive impairments and emotional dysregulation are
associated in various psychiatric disorders (Rouault et al., 2018). In particular, these
disorders lead to an unbalanced sense of self-confidence (or simply confidence), which
refers to our capacity to perform and report robust evaluations of our decisions, and use
these evaluations to control our decision-making (Yeung & Summerfield, 2012). Confidence
allows us to know when a decision is too risky to take and to revert decisions based on more
recent and more compelling evidence. For example, confidence judgments are impaired in depressed patients (Fu et al., 2005), who consistently underestimate their performance level, which may prevent these patients from taking the appropriate decisions to improve their condition. Conversely, schizophrenic and psychotic patients usually display strong overconfidence (Averbeck et al., 2011; Jardri & Denève, 2013; Moutoussis et al., 2011; Rubio et al., 2011).
Our understanding of the association between emotions and meta-cognition is limited by the
gap in the time-scale between short-lived experimentally-induced emotional states and
long-lasting states such as affective disorders. Fundamentally, it is currently unknown
whether spontaneous daily fluctuations in mood and meta-cognition are coupled together. In
fact, whether confidence fluctuates and at which time scale has to our knowledge never
been studied, beyond the mere introspective experience that we sometimes feel more
confident. Popular sayings seem to take such associations between mood state and
alterations of meta-cognition for granted: “Do not promise when you are happy, do not
decide when you are sad”. Testing such association could be performed by tracking
spontaneous emotional states and cognition simultaneously through longitudinal studies.
Online tools now make longitudinal studies with many behavioral sessions in a cohort of
participants much more affordable to the experimenter (Gillan & Rutledge, 2021). This could
potentially help the diagnosis of affective disorders and could also support interventions for crisis prevention, in particular through digital phenotyping, i.e. inferring emotions from patients' digital data "in the wild" (Dagum, 2018; Jones et al., 2021; Taylor et al., 2017). To our knowledge, only one study so far has investigated the coupling between fluctuations of mood and cognition (more specifically, value-based decision-making patterns) through longitudinal studies in a group of healthy adult participants (Eldar et al., 2018). But it remains unknown whether daily mood
fluctuations interact with metacognitive states such as self-confidence.
Based on the ideas exposed above, we hypothesize a possible association between daily
fluctuations in mood and self-confidence. To address the above hypothesis, we developed
two online longitudinal experiments where adult volunteers reported their mood, sleep quality, food enjoyment and stress level, and performed one of two simple visual discrimination tasks twice per day during ten consecutive days. We inferred the level of self-confidence by tracking how often subjects chose an opt-out option, available in a fraction of the visual task trials, which allowed them to avoid reporting the perceived stimulus. If mood and confidence fluctuations are indeed linked, we expect the participants to opt out less often during high mood episodes and more often during low mood episodes. We also used average reaction times in a session as another indicator of the
participant’s confidence (Moreno-Bote, 2010; Urai et al., 2017; Vickers, 1979, 2014). Our
results showed that both proxies of self-confidence are reliable markers of the session
confidence level. Self-reported emotional states were highly correlated with each other. In
contrast to our original hypothesis, we found that spontaneous fluctuations in mood and
confidence were not coupled, but evolved on different time scales. Our results challenge the
idea that fluctuations of emotional and meta-cognitive states are intrinsically coupled in the
healthy adult population.
Methods
General structure of the paradigm
All participants were invited to complete twenty sessions over ten consecutive days, that is, two sessions per day, one in the morning (8 AM-12 PM) and the other in the afternoon (4-8 PM),
starting on Thursday morning and ending on Saturday afternoon (Fig. 1). The sessions were
targeted to take about 10 minutes. In total 27 participants (20 female, 6 male, 1 other) and
23 participants (18 female, 4 male, 1 preferred not to answer) were recruited for the
Numerosity (NT) and the Orientation task (OT), respectively (10 of them participated in
both), mainly among students from the Pompeu Fabra University. The median age was 25
(minimum 19, maximum 42) for the NT and 23 (minimum 20, maximum 34) for the OT. We
accepted all healthy Spanish-speaking adults with normal or corrected-to-normal vision. One participant (in each task) reported a neurological or psychological/psychiatric disorder diagnosed by a professional, and 2 (NT) and 1 (OT) preferred not to report about a possible disorder. We obtained online confirmation of informed consent to the conditions and the
payment modalities of the task. Irrespective of their performance, they were paid 2.5 € per
session (5 € per day) and 40 € bonus for having completed all the sessions properly.
Additionally, they had the chance to obtain a bonus payment which was determined by their
final score in the session; the two highest scoring participants of each session received a 4 €
bonus. The session score was computed as the sum of the scores of each trial: 3 (0) points for correct (incorrect) answers; 2 points for deterministic opt-out selection; 3 or 0 points, randomly assigned, for stochastic opt-out choice (see below). The score and bonus
schemes were explained to the subject by written task instructions. We informed participants
of the bonus money they received after the end of the whole experiment, to prevent
feedback biases and maintain their level of motivation.
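For concreteness, a minimal sketch of this scoring rule (the function name and arguments are ours for illustration; the actual experiment was implemented in JavaScript):

import random

def trial_score(correct, opt_out, stochastic_stage):
    # Points for one trial under the scheme described above.
    # correct: whether the left/right answer was correct (ignored on opt-out).
    # opt_out: whether the participant selected the opt-out option.
    # stochastic_stage: True in the SO stage, False in the DO stage.
    if opt_out:
        if stochastic_stage:
            return 3 if random.random() < 0.8 else 0  # 3 points with p = 0.8
        return 2                                      # certain 2 points (DO)
    return 3 if correct else 0                        # 3 correct, 0 incorrect

# session score = sum of trial scores, e.g.:
# score = sum(trial_score(c, o, s) for c, o, s in trial_outcomes)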
Participants were allowed to take a break between stages. A fraction laid out in the instructions title marked the current stage of the experiment, and the approximate duration of the current stage was also displayed. We excluded data from three sessions of
one participant in the NT, and two sessions (from two different participants) in the OT, where
median reaction time for some difficulty levels (see Methods - Stimuli & responses)
exceeded 2 seconds, and data from three incomplete sessions (from two different
participants) in the OT. The study was approved by the Ethics Committee of the Department
(CIREP approval #121).
Stimuli & responses
The participants performed the experiment via a browser on their personal computers
through the Jatos online platform (Lange et al., 2015). The experiment was written in
JavaScript and data were collected on an institutional server managed by the Jatos team
(Lange et al., 2015). Prior to the first session, we provided to each participant a personal link
to be used once per session and an online presentation with detailed instructions and a few
examples of the decision-making task. The first screen in each session instructed the
participants to make sure they had the right environmental conditions: be in an indoor room, turn the screen brightness up to maximum, avoid any light source behind the screen, and place themselves at 60 cm from the center of the screen.
A demographic questionnaire was displayed in the first session with questions about
participant's age, gender, country of residence, education level, use of lenses, and whether they were diagnosed with a neurological or psychological/psychiatric disorder and/or took medication. The answer options were predetermined alternatives to scroll through and select.
In each session (Fig. 1), we presented a questionnaire with 3 or 4 questions about their quality
of life. In the morning session, questions were: ‘How have you felt this morning?’ (mood),
‘How did you sleep last night?’ (sleep), ‘How did you enjoy your last meal/snack?’ (food), and
‘How did you feel about your personal and working problems this morning?’ (stress). In the
afternoon session, questions were the same mood, food and stress inquiries asked in the
morning session, with the word ‘afternoon’ instead of ‘morning’, while the sleep question was
skipped. To answer each of the 4 (morning) or 3 (afternoon) self-reports, participants placed
a cursor with the mouse along a horizontal continuous-scale with sad and smiling emoji
faces at the ends (see supplementary Figure 1). The cursors for the different self-reports
were initially placed at the middle of the corresponding bar (all presented on the same
screen), and responses could not be validated until all cursors were moved, preventing
skipping the report. The response was linearly mapped onto the interval [0, 1]. Because the sad (smiling) emoji face was placed at the left (right) extreme, the corresponding quantity for the stress report was inverted as (1 − cursor position) so that a high value indicates a high level of stress.
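As a sketch, the mapping from cursor position to report value (assuming positions already normalized to [0, 1]; names ours):

def report_value(cursor_position, is_stress_report=False):
    # Sad emoji at the left end: the stress scale is inverted so that
    # a high value means a high level of stress.
    return 1 - cursor_position if is_stress_report else cursor_position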
After the questionnaire, we presented a two-alternative (left/right) visual discrimination task.
The participants answered after a 300 ms presentation of the stimulus by pressing the
left/right arrow of the keyboard. If the participants chose the correct answer, they received 3
points. An incorrect choice did not yield any point. In a fraction of trials, the participants had
the possibility to select an opt-out option reflected by a third response option represented by
the up arrow of the keyboard. By choosing this option, the subject skipped the decision and
passed to the next trial, but obtained a certain or stochastic reward, explained in what
follows. There were two stages in each session which differed in the reward scheme for
opt-out responses, and were composed of the same number of trials (120 trials for NT and
90 trials for OT). In both stages, two-thirds of the trials corresponded to opt-out trials (80
trials for NT, 60 for OT), while the remaining third of trials corresponded to non-optout trials
where the option was not presented (40 trials for NT, 30 for OT). In the first stage, the
Deterministic Opt-out (DO) stage, the opt-out option (in opt-out trials) was represented by a
question mark icon displayed during the response window (see Figure 1) and returned a
fixed number of 2 points. In the second stage, the Stochastic Opt-out (SO) stage, the opt-out
option was represented by a dice icon and the number of points were chosen randomly: in
80% of the trials the opt-out option returned 3 points, while the other 20% did not return
points. After selecting the opt-out option in the SO stage, participants received a feedback
coin indicating the number of points they received. This contrasted with the non-optout
response trials and opt-out responses in the DO stage where no feedback was provided.
Written instructions were displayed at the beginning of each stage informing participants of the score scheme in place for that stage.
Before each stimulus presentation, a cue consisting of either a cross, a question mark or a
dice was presented for 300 ms, indicating that the trial would be a non-optout, DO or SO
trial, respectively. In the NT (see Figure 1), two empty circles (radius = 15% of screen width)
were presented on each side of a central cross (size = 20 x 20 pix.) for 300 ms. Then, white
dots (radius = 20 pix.) appeared within each of the circles for 300 ms (adequate grid spacing
was introduced to prevent the circles from overlapping) (Fleming et al., 2016). One of the
circles always contained 50 dots and the other a larger number of dots. The difficulty of the
trial was manipulated by controlling the number of dots in the larger set of dots, which could
be either 52, 56, 60 or 64. Participants were instructed to maintain fixation on the central
small cross placed between the two circles and report whether the left or right circle included
more dots. In the OT, the stimulus consisted of a noisy Gabor patch (radius = 216 pix.,
period = 72 pix., phase = 0; envelope = 150 pix., middle contrast, middle average luminance)
tilted either to the left (-45º) or right (45º) presented at fixed contrast in the center of the
screen on a middle gray background. After the stimulus presentation, the Gabor image
disappeared for a short delay of 300 ms followed by presentation of the different response
options. The difficulty of the trial was controlled by manipulating the level of noise, from 0
(noiseless Gabor patch) to 1 (completely noisy Gabor patch), following (Wyart et al., 2012).
We used three levels of difficulty corresponding to noise levels {n_s − 0.08, n_s, n_s + 0.08}, where n_s is a session-adjusted noise level defined using an adaptive procedure (see below).
In both tasks, following stimulus presentation, the response options appeared on the screen,
in accordance with the cue, without time limitation to answer. The options disappeared
immediately after the participant's response, and the following trial started 300 ms after the
button press. In a small fraction of trials, the actual stimulus presentation lasted longer than 300 ms due to small timing inaccuracies in the Internet browser. We excluded all trials
where the stimulus display exceeded 350 ms.
Training and adaptive procedure. In the NT, on every session, a practice stage of 10 trials
was performed before the DO and SO stages. The stimulus difficulty of the practice stage
followed a staircase procedure, starting with 20 points of difference between the two circles and decreasing (increasing) the difference by 3 points after correct (incorrect) replies. During this practice stage, performance feedback was provided after each trial, consisting of the image of a coin with 3 points for correct and 0 points for incorrect trials.
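A minimal sketch of this practice staircase, assuming a callback that runs one trial and returns whether the reply was correct (the names and the callback interface are ours):

def practice_staircase(run_trial, n_trials=10, start_diff=20, step=3):
    # One-up/one-down staircase over the dot-count difference (NT practice):
    # harder (smaller difference) after a correct reply, easier after an error.
    diff = start_diff
    for _ in range(n_trials):
        correct = run_trial(diff)        # run_trial(diff) -> True if correct
        diff = max(1, diff - step) if correct else diff + step
    return diff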
In the OT, a practice block of 10 trials was performed in the participant's first session only. Noise level was set to 30%, and feedback was provided as for practice trials in the NT. On every session, following task instructions, participants performed a block of 60 non-optout trials where the level of stimulus noise was adjusted with an adaptive staircase so that the participant's average accuracy reached 66%. The value of the noise at the end of this block defined n_s, i.e. the level of noise for the middle-difficulty trials in the rest of the session. Stimuli were picked from a library of pre-selected images generated with noise levels sampled according to their energy (Wyart et al., 2012).
Autocorrelation analysis
We computed the autocorrelation for report variables (mood, stress, etc.) and psychometric
variables extracted from the decision making task in each session (proportion of opt-out
responses, overall performance in non-optout trials, mean reaction time in non-optout correct
trials, etc.). For each variable, we first calculated the autocorrelation (AC) as the Pearson
correlation coefficient between the temporal series (consisting of one data point of each
variable per session) and the same series shifted by ksessions (or kdays for sleep). For the
psychometric variables, we subtracted the mean across participants to the variable in order
to remove possible biases due to learning effects consistent across subjects. The AC of
short temporal series is biased negatively (Marriott & Pope, 1954). We removed this
negative bias by substracting from the AC vector the average AC vector from 100 randomly
shuffled versions of the corresponding temporal series. Finally, for each variable, we
compute the mean and standard error of the mean (s.e.m.) of the bias-corrected AC across
participants. We corrected the p-values for comparison at different lags by using the
following iterative procedure: we first tested significance at lag 1; if the p-value was larger
than 0.05, then autocorrelation coefficients were judged non-significant at all lags; if the
p-value was lower than 0,05, the coefficient was judged significant at lag 1 and we moved on
to assessing p-value at lag 2; and iteratively until we found the first lag where the associated
p-value was larger than 0.05.
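A minimal sketch of the bias-corrected autocorrelation and the iterative lag test described above (assuming one NumPy array of per-session values per participant; names ours):

import numpy as np
from scipy.stats import ttest_1samp

def autocorrelation(x, max_lag):
    # Pearson correlation between the series and itself shifted by k sessions
    return np.array([np.corrcoef(x[:-k], x[k:])[0, 1]
                     for k in range(1, max_lag + 1)])

def bias_corrected_ac(x, max_lag, n_shuffles=100, seed=0):
    # Remove the negative small-sample bias (Marriott & Pope, 1954) by
    # subtracting the mean AC of shuffled versions of the same series
    rng = np.random.default_rng(seed)
    shuffled = np.array([autocorrelation(rng.permutation(x), max_lag)
                         for _ in range(n_shuffles)])
    return autocorrelation(x, max_lag) - shuffled.mean(axis=0)

def last_significant_lag(ac_per_subject, alpha=0.05):
    # ac_per_subject: (n_subjects, n_lags) matrix of bias-corrected ACs.
    # Test lags in order and stop at the first non-significant one.
    for k in range(ac_per_subject.shape[1]):
        if ttest_1samp(ac_per_subject[:, k], 0).pvalue >= alpha:
            return k                     # lags 1..k are significant
    return ac_per_subject.shape[1]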
Psychometric curves
We fitted the responses of each participant in each session by a psychometric curve,
separately in non-optout, DO and SO trials. The psychometric curve captures the proportion of each response type (left, right, opt-out) as a function of the signed stimulus evidence (a positive stimulus indicates evidence in favor of the right option). In the NT, the signed stimulus evidence was defined as the difference in the number of dots between the right and left circles. In the OT, the signed stimulus evidence was defined as the difference in stimulus energy between 45 and -45 degrees. The psychometric curve for non-optout trials
was computed by grouping non-optout trials of DO and SO stages from the same session.
Following Signal Detection Theory (SDT), the psychometric curve is determined by the level of perceptual and decision noise σ² and the decision boundary H. When we present a stimulus of strength e, the participant observes ê = e + η, where η ~ N(0, σ²), with N the Gaussian distribution, and the participant categorizes the stimulus to the right category if ê > H, and to the left if ê < H. Thus, the probability of answering rightwards can be written as

$$p(\mathrm{rightward} \mid e) = \int_{H}^{+\infty} \mathcal{N}(\hat e;\, e, \sigma^2)\, d\hat e = \Phi\!\left(\frac{e - H}{\sigma}\right), \qquad (1)$$

where Φ is the standard normal cumulative density function. We estimated the internal noise σ and the decision boundary H by fitting the psychometric curve with logistic regression using the sklearn Python library (Fig. 4A).
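A minimal sketch of this fit (assuming arrays of signed evidence and binary rightward choices; names ours; the logistic slope approximates the probit slope only up to a scale factor of about 1.7):

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_nonoptout(evidence, chose_right):
    # Fit p(rightward | e) with a sigmoid; recover boundary H and noise sigma.
    model = LogisticRegression(C=1e6)    # large C: essentially unregularized
    model.fit(np.asarray(evidence).reshape(-1, 1), np.asarray(chose_right))
    slope = model.coef_[0, 0]
    H = -model.intercept_[0] / slope     # evidence level where p = 0.5
    sigma = 1.0 / slope                  # inverse sensitivity (logit scale;
                                         # the probit sigma differs by ~1.7)
    return H, sigma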
In opt-out trials, participants could select between 3 options: L (leftward), R (rightward) and O (opt-out) (Fig. 4B). Following SDT, we now postulate that participants apply two decision boundaries in the perceptual space: H_L between the leftward and opt-out responses, and H_R between the opt-out and rightward responses (García-Pérez & Alcalá-Quintana, 2017). This gives rise to the following equations for the proportions of leftward, rightward and opt-out responses as a function of stimulus evidence:

$$p_L(e) = 1 - \Phi\!\left(\frac{e - H_L}{\sigma}\right), \qquad p_R(e) = \Phi\!\left(\frac{e - H_R}{\sigma}\right), \qquad p_O(e) = 1 - p_L(e) - p_R(e).$$

We estimated H_L, H_R and σ from each session's data through maximum likelihood estimation. In the few sessions where the participant did not opt out on a single trial, we instead fitted the psychometric curve as in the non-optout trials.
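A minimal sketch of this maximum-likelihood fit under the two-boundary model (using scipy; the names and choice encoding are ours):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_optout(evidence, choice):
    # MLE of (H_L, H_R, sigma) from opt-out trials.
    # evidence: signed stimulus evidence; choice: array of 'L', 'R' or 'O'.
    e, c = np.asarray(evidence), np.asarray(choice)

    def neg_log_lik(params):
        h_l, h_r, log_sigma = params
        sigma = np.exp(log_sigma)                  # keep sigma positive
        p_l = 1 - norm.cdf((e - h_l) / sigma)
        p_r = norm.cdf((e - h_r) / sigma)
        p_o = np.clip(1 - p_l - p_r, 1e-10, 1.0)   # opt-out probability
        p = np.where(c == 'L', p_l, np.where(c == 'R', p_r, p_o))
        return -np.sum(np.log(np.clip(p, 1e-10, 1.0)))

    res = minimize(neg_log_lik, x0=[-1.0, 1.0, 0.0], method='Nelder-Mead')
    h_l, h_r, log_sigma = res.x
    return h_l, h_r, np.exp(log_sigma)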
Optimal decision boundary in opt-out trials. If the participants know their internal perceptual noise level, then they can adjust the boundaries in the opt-out conditions in order to maximize the expected number of points associated with the response (Barrett et al., 2013). In deterministic opt-out trials, given the point scheme, the expected numbers of points of the opt-out, left and right responses are 2 points, 3p(e < 0 | ê) and 3p(e > 0 | ê), respectively, where the conditional probabilities refer to the probability that the true stimulus evidence e is positive or negative given the observed evidence ê. According to the optimal observer model, the rightward decision boundary H_R should be set at the observed evidence ê where the expected numbers of points for rightward and opt-out responses are equated, i.e. where p(e > 0 | ê) = 2/3. In other words, the optimal observer should select the rightward response only if its expected accuracy is above 2/3; otherwise, it is more convenient to opt out and collect 2 points. Thus, the boundary H_R^opt should be placed such that p(e > 0 | ê = H_R^opt) = 2/3, while the boundary H_L^opt should be placed where p(e > 0 | ê = H_L^opt) = 1/3.
Bayes' rule gives us that p(e | ê) ∝ p(e) p(ê | e), i.e. the posterior over each stimulus strength depends both on the prior over the stimulus strength and on the likelihood of the observed evidence given the stimulus strength. Assuming that the bias H displayed by participants in the trials without the opt-out option is a perceptual bias, then p(ê | e) = N(ê; e + H, σ²), where the variance of the sensory noise σ² can be estimated from the fit of the psychometric curves and e corresponds to the signed strength of the presented stimulus. Here we assume that the prior over the stimulus strength is a Gaussian centered on 0, i.e. p(e) = N(e; 0, ε²), where ε is the standard deviation of e. While the true prior distribution is over a set of 6 or 8 discrete values, it is unlikely that participants notice such a discrete distribution from a noisy set of observations. Rather, we assume that they use a Gaussian with the same standard deviation as the true distribution (notice that the Gaussian is the maximum-entropy distribution, i.e. the least informative distribution when the mean and standard deviation are known). We can then further develop:

$$p(e \mid \hat e) \propto p(e)\,p(\hat e \mid e) = \mathcal{N}(e;\, 0, \varepsilon^2)\,\mathcal{N}(\hat e;\, e + H, \sigma^2) \propto \mathcal{N}(e;\, \mu, v^2),$$

with μ = (ê − H) ε²/(ε² + σ²) and v² = σ²ε²/(ε² + σ²).
Thus

$$p(e > a \mid \hat e) = \int_{e > a} p(e \mid \hat e)\, de \propto \int_{e > a} \mathcal{N}(e;\, \mu, v^2)\, de = \Phi\!\left(\frac{\mu - a}{v}\right).$$

Using the limit a → −∞, where p(e > a | ê) → 1, we see that the coefficient of proportionality is 1. Hence:

$$p(e > 0 \mid \hat e) = \Phi\!\left(\frac{\mu}{v}\right).$$

The implicit equation p(e > 0 | ê = H_L^opt) = 1/3 can now be solved:

$$\Phi\!\left(\frac{\mu}{v}\right) = 1/3 \;\Rightarrow\; \mu = v\,\Phi^{-1}(1/3) \;\Rightarrow\; H_L^{opt} = H + \frac{\varepsilon^2 + \sigma^2}{\varepsilon^2}\, v\,\Phi^{-1}(1/3) = H + \sigma\sqrt{1 + (\sigma/\varepsilon)^2}\;\Phi^{-1}(1/3).$$

Similarly, for the optimal right boundary:

$$H_R^{opt} = H + \sigma\sqrt{1 + (\sigma/\varepsilon)^2}\;\Phi^{-1}(2/3).$$

In the stochastic opt-out trials, the expected number of points for the opt-out response is 0.8 × 3 = 2.4 (as the 3-point reward is obtained in 80% of trials), so the optimal boundaries are computed accordingly.
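A minimal sketch of the optimal boundaries derived above, parameterized by the expected value of the opt-out so that the same function covers the DO (2 points) and SO (2.4 points) stages (names ours):

import numpy as np
from scipy.stats import norm

def optimal_boundaries(H, sigma, eps, optout_value=2.0, correct_value=3.0):
    # Opt out unless the expected accuracy exceeds optout_value/correct_value
    # (2/3 in the DO stage, 0.8 in the SO stage).
    q = optout_value / correct_value
    scale = sigma * np.sqrt(1 + (sigma / eps) ** 2)
    h_l_opt = H + scale * norm.ppf(1 - q)   # H + scale * Phi^-1(1/3) in DO
    h_r_opt = H + scale * norm.ppf(q)       # H + scale * Phi^-1(2/3) in DO
    return h_l_opt, h_r_opt

# DO stage: optimal_boundaries(H, sigma, eps, optout_value=2.0)
# SO stage: optimal_boundaries(H, sigma, eps, optout_value=0.8 * 3)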
Overconfidence (OC). We defined the overconfidence associated with a participant's session as

$$OC = \frac{\left|H_L^{opt} - H_R^{opt}\right| - \left|H_L - H_R\right|}{\left|H_L^{opt} - H_R^{opt}\right| + \left|H_L - H_R\right|}.$$

The value is bounded between −1 and 1: it is positive when the participant opted out less than prescribed by the optimal strategy, reflecting overconfidence, and negative otherwise, reflecting underconfidence. Notice that the distance between the optimal boundaries is independent of the perceptual bias H but scales with the standard deviation σ of the perceptual noise:

$$\left|H_L^{opt} - H_R^{opt}\right| = 2\sigma\sqrt{1 + (\sigma/\varepsilon)^2}\;\Phi^{-1}(2/3). \qquad (2)$$
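A minimal sketch of the OC index, combining the fitted and optimal boundaries (names ours):

def overconfidence(h_l, h_r, h_l_opt, h_r_opt):
    # Positive when the fitted inter-boundary distance is smaller than the
    # optimal one (less opting out than optimal = overconfidence); in [-1, 1].
    d_opt = abs(h_l_opt - h_r_opt)
    d_fit = abs(h_l - h_r)
    return (d_opt - d_fit) / (d_opt + d_fit)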
Risk aversion (RA). We defined risk aversion as the difference between the proportion of opt-out responses in the deterministic (DO) and stochastic (SO) stages of each session (bounded between −1 and 1).
Results
In two longitudinal online experiments, participants reported their emotional states and
performed a decision-making task twice a day, once in the morning and once in the
afternoon, during 10 consecutive days, for a total of 20 sessions per participant (see Fig. 1).
In each session, participants started by answering 4 (morning sessions) or 3 (afternoon sessions) questions about their mood, the quality of their sleep, their food enjoyment and their stress
level. After this self-report questionnaire, they performed one of two two-alternative forced
choice decision-making tasks: one cohort of subjects (n= 27 participants) performed a
numerosity task (NT) where they reported which of two simultaneously presented circles
contained more dots (Fleming et al., 2016), and the other cohort (n= 23 participants)
performed an orientation task (OT) where they reported whether a noisy tilted Gabor patch was tilted to the left or right (Wyart et al., 2012); see Methods. We controlled the difficulty of each trial by manipulating the difference between the number of dots in the circles (NT) or the difference in stimulus energy between rightward and leftward orientations of the noisy Gabor patches (OT). In a fraction of the trials, participants were offered an option to
opt out and skip the decision, which resulted in a certain or stochastic number of points.
Participants were encouraged to use the opt-out option when they were uncertain of the
stimulus category in order to maximize the collection of points over the session. Correct answers were worth three points, incorrect ones zero. The opt-out
option, when available, ensured a fixed amount of two points in the first stage of the session
(Deterministic Opt-out, DO) or 3 points with probability 80% and 0 points with probability
20% in the second stage of the session (Stochastic Opt-out, SO). We hypothesized that the
proportion of opt-out responses in the session would represent a robust index of the current
level of self-confidence of the subject (Grimaldi et al., 2015). To determine whether the
proportion of opt-outs is inflated by a risk-averse strategy, we introduced the SO task in the
second stage in each session, where the levels of risk between the opt-out and non-optout
strategies are similar.
We used another implicit measurement of the level of participant confidence on each
session: the reaction time in trials with no opt-out option (non-optout trial), which is known to
be larger for uncertain decisions (Moreno-Bote, 2010; Urai et al., 2017; Vickers, 1979). We
avoided explicit measures such as reporting decision confidence on a scale for each trial
(Martino et al., 2013; Schustek et al., 2019) in order to prevent the possible direct influence
of the emotional states over explicit meta-cognitive assessments. As introduced above, the
SO stage allowed disentangling the effects of self-confidence and risk aversion in the
selection of the opt-out options (Dienes & Seth, 2010). We used the difference between the
proportion of opt-out responses in the DO and SO stages in a single session as a direct
index of risk aversion.
Figure 1. Experimental paradigm. The online experiment lasts 10 days with two sessions
per day of approx. 10 minutes: morning (8-12 am) and afternoon session (4-8 pm). Each
session starts with 3 or 4 questions about the participant's mood, the quality of their sleep,
their enjoyment of the food, and their stress level. After the questionnaire, the participant
completed a two-alternative forced-choice perceptual task. In two-thirds of the trials, they
could select a third opt-out option (‘?’ symbol on the top left trial) which allowed skipping the
decision. Participants were instructed to select the opt-out option when they were unsure of
the stimulus category in order to maximize their cumulative score on the session. Two
stages of trials in each session differed in the contingencies of the opt-out option. In the first block, called 'Deterministic Opt-out' (DO stage; top row), the opt-out option returns a fixed number of 2 points. In the second block, called 'Stochastic Opt-out' (SO stage; bottom row), the opt-out option returns 3 points with an 80% chance (as in the example) and no points otherwise (in contrast to all other options, the number of points obtained was displayed after choosing the stochastic opt-out option). On each trial, a cue is presented prior to the stimulus, indicating the available options (a cross for non-optout trials, a question mark for DO and a dice for SO).
Daily fluctuations in self-reported emotional states covary
The average values over participants of the self-report variables (mood, quality of sleep,
enjoyment of food, stress) per session are illustrated in Figure 2A. They appeared mostly
stable across days, with a significant difference between the mean across weekend and week sessions for mood (two-tailed paired t-test across NT and OT participants: t=2.1, p=0.04), stress (t=-4.02, p<10^-3) and food (t=3.8, p<10^-3) reports. The difference was not significant for
the sleep report (t=1.1, p=0.3). This result indicates that, on average, participants were in a
better mood, less stressed and ate better during weekend sessions compared with week
sessions (Larsen & Kasimatis, 1990). At the individual level, these variables fluctuated
substantially per session. We found that mood, food and stress self-reports presented a
significant positive autocorrelation at lag 1 (i.e., around 4-16 hours difference between the
two reports) in both tasks (two-tailed t-test on 1-lag correlation values across subjects,
p<0.05 for each variable in both experiments; Figure 2B). For the food self-reports, the
autocorrelation stayed significant at lag 2 (i.e. ~24 hours difference between the two reports).
This suggests that these markers fluctuate at the scale of one or half a day: slower
fluctuations were not observed. Furthermore, we found strong correlations between all
self-reported emotional states (Figure 2C): mood, food and sleep indicators were positively
correlated, and negatively correlated with the stress level, as expected (all p-values
remained significant after Bonferroni correction). This result indicates that, on average, when
the participants slept and ate better, they also experienced a better mood and were less
stressed. Remarkably the whole pattern of correlations between reports was very similar in
both experiments (Figure 2C). Finally, we found significant correlations at the individual level
in a large proportion of participants (Figure 2D, E). Overall, these results suggest a strong
and reliable association between the daily fluctuations in mood, quality of sleep, stress and
enjoyment of food.
10
Figure 2. Session-to-session fluctuations in quality of life reports. A. Mean values per
session for mood, food, stress and sleep self-reports across participants in the numerosity
task group (solid lines; n=27) and orientation task group (dashed lines; n=23). Coloured shaded
areas indicate standard error of the mean (s.e.m.). Gray shaded areas indicate weekend
sessions. B. Autocorrelation coefficients in self-reports (star: t-test across participants, p <
0.05 after correction for multiple lags, see Methods). C. Average correlation heat map of the
self-report variables across participants (** p < 0.01; **** p < 0.0001). D. Mood vs. stress
reports for one exemplar participant. Each dot represents a session. The line represents a
linear fit while the shaded area represents the standard error of the estimated slope. E.
Distribution of Pearson correlation coefficients between mood and stress (mood and sleep)
in the left (right) panel across subjects. Filled bars indicate subjects with significant
correlation at individual level (p < 0.05, uncorrected). Upright and inverted axes correspond
to the numerosity and orientation task groups, respectively.
Opt-out selection reflects the session-specific level of self-confidence.
Next, we analyzed behavioral data in the numerosity task and assessed whether the
proportion of opt-out options and reaction times constituted solid markers of choice
confidence. As expected, the overall accuracy level dropped for increasing stimulus difficulty
for non-optout, deterministic opt-out and stochastic opt-out trials (Figure 3A, left).
Interestingly, accuracy was higher in trials where the opt-out option (either stochastic or
deterministic) was presented but finally discarded by the participant than in trials where it
was not presented. This was confirmed in a factorial 2-way ANOVA with difficulty and opt-out
condition (non-optout, stochastic opt-out, deterministic opt-out) as factors: we found a main
effect of opt-out condition (F_condition=18.3, p<10^-7) as well as difficulty (F_difficulty=451.4, p<10^-113), with no significant interaction between opt-out condition and difficulty (F=0.6, p>0.7). Post-hoc t-tests showed that accuracy was lower in non-optout trials compared to deterministic (t=-5.1, p<10^-4 corrected) or stochastic opt-out trials (t=-9.2, p<10^-8 corrected). This suggests that subjects were able to estimate on a trial-by-trial basis when their choice was likely to be erroneous, and opted out in those trials when this was an option. In
line with this account, the proportion of opt-out responses strongly increased for larger
difficulty (Figure 3A, middle). Overall, this suggests that the selection of the opt-out option
reflects the level of confidence at a particular trial. We also confirmed that reaction times
provide another indicator of confidence. As expected, in non-optout trials, the average
reaction time as a function of stimulus difficulty displayed the classical X-pattern (Kepecs et al., 2008; Urai et al., 2017): reaction time increased for more difficult stimuli in correct trials, and decreased for more difficult stimuli in error trials (Figure 3A, right).
Since both opt-out selection (for opt-out trials) and reaction time (for non-optout trials) are robust indicators of choice confidence on a particular trial, the overall proportion of opt-out responses and the average reaction time in a session likely reflect the level of subject self-confidence in that particular period of time. We found that these indicators were largely
stable across the 20 sessions at the population level, whereas the overall accuracy
displayed a modest improvement likely due to learning effects in the first two days of the
experiment (Figure 3B, top panel). However, at an individual level, these measures
displayed substantial fluctuations which may provide a window into subject-specific variations
in self-confidence. In particular, the autocorrelograms indicated a slow component to the
fluctuations in opt-out tendency (Figure 3C). Significant positive correlations between the
proportions of opt-out responses in two sessions were seen when these two sessions were
separated as much as two-and-a-half days (both for deterministic and stochastic opt-outs;
Figure 3C middle panel). The same tendency was found for the average reaction time
series, although the significance of auto-correlation weights did not survive correction for
multiple comparisons (bottom panel). The autocorrelation of accuracy across sessions was somewhat shorter-lived, with significant correlation between two consecutive sessions (top
panel). Finally, we investigated the possible coupling between the values of the different
psychometric variables in a single session. We found that the tendencies to opt out in the deterministic and stochastic stages were very strongly correlated across sessions (average r across subjects: 0.5; t-test: p<10^-11; see Figure 3D-E). By contrast, we found no significant mean correlation across sessions between mean reaction times and opt-out tendency (either in the stochastic or the deterministic stage; both p>0.2). The average accuracy in non-optout trials in
a session also appeared to be uncorrelated with all three proxies for confidence (opt-out
tendency in DO, SO; and average RT). Overall, these results suggest that the tendency to
opt-out and average reaction time in the perceptual task provide robust markers of a slowly
fluctuating subject confidence. Note that a very similar set of results was found for the
orientation task (Supp Figure 2).
Figure 3. Fluctuations of psychometric variables (Numerosity task): A. Mean across
participants and sessions, as a function of stimulus difficulty. Left panel: accuracy
(percentage in correct responses) in NO (non-optout), DO (deterministic opt-out) and SO
(stochastic opt-out) trials. In opt-out trials, the accuracy is expressed as the percentage of
correct responses out of the non-optout responses. Middle panel: percentage of opt-out
responses, in deterministic and stochastic opt-out trials. Right panel: normalized reaction time in correct and incorrect non-optout trials. Reaction times are normalized by the
median reaction time across all trials in each session. In all panels shaded areas indicate
standard error of the mean (s.e.m.). B. Psychometric values as a function of session,
averaged across participants (shaded area: s.e.m). Gray shaded area indicates weekend
sessions. C. Autocorrelation coefficients of accuracy in non-optout trials (top panel),
proportion of opt-out responses (middle) and reaction time of correct non-optout trials
(bottom). Star: t-test across participants, p < 0.05 after correction, see Methods. D.
Distribution of Pearson coefficients of across-session correlations of the proportion of
opt-out responses in deterministic (DO) and stochastic (SO) opt-out stages. Filled bars
indicate subjects with significant correlation (p < 0.05) at individual level. E. Matrix
representing the mean correlation coefficient across participants (**** p < 0.0001;
significance remains after a Bonferroni correction).
A normative approach to overconfidence
While the proportion of opt-out responses in a session provides an intuitive measure of the
confidence level of the participant, we also used a model-based approach based on signal
detection theory (SDT) to define a normative measure of overconfidence. According to the
SDT framework, in non-opt-out trials there is a single boundary in the observer perceptual
space that is used as a decision criterion to select between a leftward or a rightward
response. A simple fit to participant responses in non-optout trials allows us to infer the decision boundary H as well as the perceptual noise (Figure 4A). When the opt-out option is present, there are now two decision boundaries, H_L and H_R: the first separates leftward from opt-out responses, the second opt-out from rightward responses. An optimal
observer would place these boundaries so as to maximize the expected number of points. For example, in the deterministic opt-out stage where correct responses are rewarded by 3 points while opt-outs yield 2 points, the optimal boundary H_L is placed at the point in the perceptual space corresponding to a 2/3 probability of a leftward stimulus (and similarly for H_R). The position of this point can be evaluated analytically and depends on the value of the perceptual noise (see Methods for details). We can then estimate the boundaries H_L and H_R from
responses in the opt-out condition and compare the distance between these estimated
boundaries to the distance between the optimal boundaries (Figure 4B). A larger than
optimal inter-boundary distance reflects a larger than optimal use of the opt-out option, i.e.
can be associated with an underestimation in the probability of a correct judgment. By
contrast, a smaller than optimal inter-boundary distance reflects less frequent than optimal
resorting to the opt-out response, which signals an overestimation of the probability for a
correct judgment. We define an overconfidence metric (see Methods) based on the
relationship of the estimated inter-boundary distance to the optimal inter-boundary distance,
as a metric bounded between -1 and 1, where 0 reflects optimal distance, positive values
reflect overconfidence and negative values reflect underconfidence (see fluctuations on
overconfidence across sessions in Supp. Figure 3).
We estimated decision boundaries and perceptual noise from the pattern of responses in
non-optout, deterministic opt-out and stochastic trials in each session and each participant
(Figure 4C,F). The SDT framework predicts that only the position of boundaries should vary
between opt-out and non-optout trials, while the perceptual noise should not. Indeed we
found a strong correlation between the perceptual noise estimated from opt-out and
non-optout trials belonging to the same session (Figure 4E). This was observed for
deterministic as well as stochastic opt-out trials, and for both NT and OT. We also predict
that the decision bias should be preserved in opt-out trials. If the value of the decision
boundary H reflects a bias in the mapping of the stimulus onto the perceptual space (e.g. if the number of dots in the left circle is consistently underestimated), then the boundaries H_L and H_R should be similarly biased. Indeed, we also found a strong correlation across sessions between the bias in the non-optout trials (defined simply as H) and the bias in the opt-out trials (defined as the middle point between H_L and H_R) (Figure 4D). These analyses confirm
that SDT provides a sound framework for understanding participant responses, and validate
our normative approach to overconfidence derived from this framework.
Figure 4. Psychometric curves. A. Example psychometric curve in non-optout trials in one
session for a participant performing the numerosity task. The sigmoid reflects the fit by a
probit sigmoid function (H: estimated decision boundary; see methods). B. Optimal
psychometric curves inferred for deterministic opt-out trials of this participant-session. The
grey, red and black curves represent the probability of leftwards, opt-out and rightwards
responses, respectively ( and : left and right optimal decision boundaries). C, F.
𝐻𝑜𝑝𝑡
𝐿 𝐻𝑜𝑝𝑡
𝑅
Example of the psychometric curve in deterministic (C) and stochastic (F) opt-out trials for
the same participant and session as in panel A. Lines representing the psychometric fits for
each type of choice and : left and right decision boundaries). D, E. Distribution of
(𝐻𝐿𝐻𝑅
across-session Pearson correlation coefficient of bias (D) and perceptual noise (E) between
non-optout and opt-out trials (DO: red/upright histogram; SO: blue/inverted histogram) in the
numerosity task. Filled bars indicate subjects with significant correlation (p < 0.05) at
individual level.
Emotional states and self-confidence fluctuate independently
In the prior analyses, we have characterized the dynamics and relationships between quality
of life reports on one hand, and between psychometric markers on the other hand. We next
evaluated whether the evolution of self-reports and psychometric markers were coupled. In
particular, we hypothesized that mood could correlate across sessions with the different
proxies for self-confidence. At odds with our hypothesis, at the population level, we found no
correlation across sessions between the self-confidence proxies (proportion of opt-out
choice, median reaction times) derived from the perceptual tasks and mood or any of the
other quality of life self-reports. Mood and median reaction time were negatively correlated, as predicted (mean r=-0.09), but the marginal significance (p=0.03 uncorrected) did not survive Bonferroni correction for multiple comparisons (Figure 5A). To
further test whether a correlation could emerge for specific components of opt-out behavior, we separated the stimulus-insensitive and stimulus-sensitive portions of the opt-out responses
in each session by extracting the slope and the intercept of a simple linear regression of the
opt-out proportion by stimulus difficulty. Neither the intercepts nor the slopes correlated
across sessions with any of the quality of life reports. Furthermore, we reasoned that our
normative definition of overconfidence may provide a better proxy to self-confidence than the
raw proportion of opt-out responses in the session, and so we also tested for correlations of
the overconfidence index with the quality of life self-reports. None of the correlations were
significant. Self-reports were also not coupled significantly with other psychometric variables
such as the overall accuracy (for non-optout trials) in the session, or the degree of risk
aversion. Overall, we did not find any significant mean correlation between a self-report and
a psychometric variable.
At the individual level, however, we found that a significant subset of participants (4 out of 27, binomial test with p=0.01 uncorrected in NT) exhibited a significant correlation between the proportion of opt-out choices and mood: two subjects opted out significantly less in sessions where they reported better mood, while two other subjects opted out significantly more (Figure 5B top-left panel). This result could indicate a subject-specific relationship between
mood state and the level of confidence. This pattern was not found in the OT (no significant
correlation in any participant).
We further investigated whether self-confidence and mood states could be coupled but with
a certain delay by computing the cross-correlation between the proportion of opt-out
responses and the different self-report measures. The cross-correlograms were all flat
(Figure 5C), indicating an absence of coupling at the population level between emotional
states and self-confidence proxies.
Figure 5. Lack of robust correlation between self-reports & psychometric variables
(Numerosity task group). A. Mean of correlation coefficients between self-report and
psychometric variables (proportion of opt-out responses in DO stage, proportion of opt-out
responses in SO stage, mean accuracy in non-optout trials, mean reaction time in correct
non-optout responses, overconfidence in DO stage, overconfidence in SO stage, risk
aversion), averaged across participants (* p < 0.05 uncorrected). B. Distribution of Pearson
coefficients of correlation between the proportion of opt-out responses (deterministic:
red/upright histogram; stochastic: blue/inverted histogram) and each self-report along
sessions. Filled bars indicate subjects with significant correlation (p < 0.05) at individual
level. C. Cross-correlation coefficients between both opt-out selection types (DO in solid line
and SO in dashed line) and each self-report.
Discussion
In this longitudinal online study, we assessed for the first time whether daily fluctuations in
mood and related variables (stress, sleep, food enjoyment) are coupled to fluctuations in
meta-cognitive states (including self-confidence, response vigor, discrimination performance
and risk aversion). Participants directly reported their emotional states at the beginning of each twice-daily session, while meta-cognitive states were inferred from their behavior in a
simple discrimination task with an opt-out option (Figure 1). First, we found a strong
correlation between the different mood-related variables, visible at the level of individual
participants (Figure 2). The different reports fluctuated rapidly, as the value of one session correlated significantly with the value half a day later but not one day later (except for food
enjoyment). Second, the proportion of opt-out responses and the average reaction times on
non-optout trials provided solid markers of the confidence of the participant in a particular
session (Figure 3). We also derived participant overconfidence in a normative setting based
on the signal detection theory framework (Figure 4). Self-confidence fluctuated at a slow
time scales, with auto-correlations of the time series with a lag up to two-and-a-half days.
Finally, none of the mood-related variables correlated significantly with any metacognitive
variable in the same session at the population level (Figure 5). In particular, mood did not
appear to correlate with self-confidence. An exploratory analysis revealed, however, that a correlation between mood and the proportion of opt-out responses was present at the individual level in a significant fraction of participants in the NT. Overall, our results suggest
that mood-related states and metacognitive states fluctuate independently in the healthy
adult, with slower fluctuations for self-confidence than for the mood-related variables.
Our results related to the discrimination task follow a long tradition of measuring implicit markers of meta-cognitive states (including confidence) from behavioral measures. The confidence that humans have in decision-making tasks is often measured by asking them to explicitly report their confidence in a decision or series of decisions (Martino et al., 2013; Schustek et al., 2019). However, these reports have some drawbacks: they are not intuitive, participants may feel little motivation to accurately report their confidence level, and they can be contaminated by learning processes (Solovey et al., 2016). Importantly, emotional states
and mood in particular may interfere with the process of explicitly reporting a confidence
measure. Here, we used an implicit measure of confidence in order to test core relationships
between confidence and mood states without potential contamination by the reporting
process. Implicit markers of confidence rely on an economic paradigm where confidence is linked to the willingness to bet on one's own choice and the magnitude of the bet (Grimaldi et al., 2015; Insabato et al., 2016; Sanders et al., 2016). Here, we used an opt-out mechanism where in a portion of trials participants could, instead of reporting the stimulus category, secure a fixed number of points (DO stage) or bet on a lottery (SO stage). Such opt-out or wagering schemes have been used in a large range of settings, from perceptual discrimination in non-human primates (Kiani & Shadlen, 2009), rodents (Kepecs et al., 2008) and humans (García-Pérez & Alcalá-Quintana, 2017) to movie selection in adults (Bhatia & Mullett, 2016).
In line with these studies, we found that the use of opt-out responses in our participants
could be tied to the expected accuracy in the discrimination, i.e. the confidence about the
choice. First, resorting to the opt-out increased with stimulus difficulty, when expected
accuracy is bound to decrease. Second, for a fixed stimulus difficulty, the accuracy of
responses increased when the opt-out response was presented but waived compared to
trials where the opt-out response was not presented. This suggests that participants
selected the opt-out response when their internal estimate of the stimulus category did not
ensure a high probability of correct response, thus avoiding potential errors (Kiani &
Shadlen, 2009). We also used reaction time in non-optout trials as another proxy for
self-confidence (Martino et al., 2013; Moreno-Bote, 2010; Vickers, 1979). We replicated the finding that reaction times in such discrimination tasks display the typical X-pattern expected of a confidence proxy, with faster responses associated with larger stimulus strength in correct trials, but slower responses associated with larger stimulus strength in error trials (Sanders et al., 2016; Urai et al., 2017). Both proxies of confidence fluctuated slowly across sessions, with a time scale of up to two days (4 sessions). Of note, the two confidence proxies (proportion of opt-out responses and median reaction time) appeared to be little correlated or uncorrelated across sessions (Figure 3E). This suggests that they might capture different dimensions of the confidence construct. It is also possible that reaction times in a session are influenced by other factors such as vigor or the particular position of the subject's hand on the keyboard.
We introduced a normative estimation of overconfidence that assesses how much
participants resort to the opt-out option compared to what would be optimal given their level
of perceptual noise (Moore & Healy, 2008). The definition is based on the framework of
signal detection theory (Barrett et al., 2013; Massoni et al., 2014). We confirmed beforehand
that signal detection theory provided a reasonable account of the pattern of choices both in
non-optout and opt-out trials. In opt-out trials, we assumed that the decision space is split
according to two boundaries: the boundary between left and opt-out responses, and the
boundary between opt-out and right responses (García-Pérez & Alcalá-Quintana, 2010,
2017; Pritchett & Murray, 2015). We also found that both the level of perceptual noise and
the bias were consistent between the opt-out and non-optout trials in the same session. The
overconfidence index is based on comparing the distance between the decision boundaries used by
participants and the distance between optimal boundaries, which depends on the level of
perceptual noise. Thus, underestimating the perceptual noise leads to overconfidence, as
measured by our normative index. We believe this definition offers a normative approach to
measuring implicitly the level of overconfidence in healthy subjects and pathological
populations.
Here, we tested for the first time (to our knowledge) the possible link between spontaneous
fluctuations in mood states and confidence in the healthy adult population. No significant
correlation at the group level was found between the reported quality of life states and the
implicit self-confidence markers extracted from the decision-making task. It is important to
note that these non-significant effects are unlikely to reflect a lack of statistical power in
our data, as some temporal correlations between quality of life states and between
self-confidence markers were highly significant. The autocorrelograms of
the different variables revealed that confidence proxies fluctuated at a slower time scale (up
to two-and-a-half days) than mood states (half-a-day). Altogether, we found support for
fluctuations of mood and confidence with different dynamical properties and little to no
interaction. We did, however, find a small subset of participants that displayed significant
correlations (some positive, some negative) between mood reports and the proportion of
opt-out responses in the NT cohort. This suggests that a subject-specific link between mood
and confidence may exist in some individuals. Testing a larger cohort would be needed to
confirm this exploratory finding and to explore the demographic or personality features
associated with such a link.
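
For reference, the sketch below illustrates one simple way to estimate a session-level autocorrelogram and read out its decay time scale. The 1/e criterion, the conversion assuming two sessions per day, and the simulated series are illustrative assumptions rather than a description of our exact procedure.

import numpy as np

def autocorrelogram(x, max_lag):
    # Sample autocorrelation of a per-session time series at lags 1..max_lag.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:-k], x[k:]) / (len(x) * var)
                     for k in range(1, max_lag + 1)])

def decay_timescale(acf, sessions_per_day=2):
    # Rough time scale: first lag at which the autocorrelation drops below
    # 1/e, converted to days (two sessions per day, morning and afternoon).
    below = np.where(acf < 1.0 / np.e)[0]
    lag = below[0] + 1 if below.size else len(acf)
    return lag / sessions_per_day

# Simulated per-session series standing in for mood reports (fast,
# unstructured fluctuations) and a confidence proxy (slower drift).
rng = np.random.default_rng(0)
mood = rng.normal(size=20)
conf = np.convolve(rng.normal(size=24), np.ones(5) / 5, mode="valid")
print(decay_timescale(autocorrelogram(mood, 8)))   # short time scale
print(decay_timescale(autocorrelogram(conf, 8)))   # longer time scale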
The absence of a consistent link suggested by this study challenges our original hypothesis,
which was based on a series of adjacent findings. For example, Massoni showed that anxiety
about future performance in a numerosity task affects meta-cognition on that task, and was in
particular associated with a lower level of overconfidence (Massoni, 2014). Three key
differences between the paradigms could explain the discrepancy in results: i) in his
experiment, anxiety was directed at the task specifically, while participants in our
experiment reported their mood irrespective of the task (and prior to performing it); ii) anxiety
was induced by the paradigm in his study, while we only measured spontaneous variations
in mood; iii) anxiety in his study was a short-lived emotional state, evolving from one block of
trials to the next, while reported mood states in our experiment fluctuated over days. It has
also long been known that induced sadness reduces risk aversion, while induced anxiety
enhances it (Hartley & Phelps, 2012; Raghunathan & Pham, 1999). We failed to find any link
between natural daily fluctuations in either mood or stress level and risk aversion (Figure 5).
Future studies are needed to understand which of these factors are key to the association
between emotional states and meta-cognitive states, including confidence and risk aversion:
the spontaneous vs. induced nature of the emotional state, the time scale of its fluctuations
(minutes, days or years), and the attachment of the emotion to the task. Note that while we
looked for within-subject coupling between emotional states and meta-cognition, there are
clear signs of across-subject associations between psychopathological symptoms and
metacognitive abilities. For example, a symptom dimension related to anxiety and
depression correlates with lower confidence and heightened metacognitive efficiency on the
numerosity task (Rouault et al., 2018), while stable anxiety traits are also associated with
increased risk aversion (Maner et al., 2007).
The lack of correlation between daily fluctuations of mood and meta-cognition also comes as
a surprise given the strong association between deficits in emotional and meta-cognitive
processes in several psychiatric disorders. For example, depressed patients, who primarily
suffer from sustained negative mood states, also display increased underconfidence in their
own decisions (Fu et al., 2005). By contrast, schizophrenic patients, who suffer from
dysregulated mood, generally express strong overconfidence in their own decisions and
poor insight into their dysfunctions (David et al., 2014), a phenomenon usually linked to
circular inference (Jardri et al., 2017; Jardri & Denève, 2013). Again, we can speculate about
why these associations are present in psychiatric disorders but apparently absent in
spontaneous fluctuations in the healthy population. The link may only be present in
pathological conditions, or for very slow and stable features (i.e. the association may be
across individuals but not within individuals). Given the very diverse etiology of these
different disorders, it is also possible that the mood and metacognitive dysregulations
represent independent manifestations of the disorder.
Beyond confidence and risk aversion, other markers of cognitive function are thought to be
affected by mood states and quality of sleep. For example, extreme sleep deprivation and
stress exposure lead to a decrease in perceptual task performance and a slowing of
responses (Lieberman et al., 2002). We found no sign of such a relationship between
spontaneous, moderate fluctuations in sleep and stress and either the accuracy or speed of
responses. More recently, lower sleep quality in the general population was found to
abolish the differential sensitivity to risk under gain and loss frames (S. Xu et al.,
2021). This behavioral result was mirrored, in poor sleepers, by a reduced difference
between the EEG responses to negative feedback in the gain vs. loss frames.
Moreover, depression is associated both with lower mood and reduced motor vigor (Carland
et al., 2019; Van De Leemput et al., 2014). In contrast to these results, we did not find any
relationship between spontaneous fluctuations in mood and response times in a
discrimination task.
In recent decades, hand in hand with the spread of smartphones and large-scale data
recording, interest has grown in predicting emotional states so as to anticipate states of
mental health and illness (Gillan & Rutledge, 2021; Sano et al., 2015; Taquet et al., 2020,
2021; Taylor et al., 2017). Our experimental design aimed to track longitudinally the evolution
of the affective and metacognitive states of subjects in ecologically valid conditions, in order
to unveil how these dimensions evolve and co-evolve over a period of two weeks. Findings in
this area could contribute to a better understanding of affective states and affective
disorders. Indeed, the standard clinical symptom-based categorisation of affective disorders
has been widely criticised due to the absence of objective diagnostic criteria that take into
account a) the neurobiology of affective states and b) the broader context in which they take
place during the life of an individual (Hyman, 2010). For this reason, novel approaches to the
diagnosis of mental disorders, such as the RDoC (Insel, 2014), propose to look at mental
conditions through the lens of a series of constructs representing specific functional
dimensions, from low-level factors (e.g. genes, cells and brain circuits) up to
observable units of analysis such as behaviour and self-reports. This holistic view
necessarily stresses the importance of taking a longitudinal approach to the study of
affective states at multiple scales and multiple time frames.
Daily fluctuations in emotional states are well documented by longitudinal studies using
smartphone recordings over large sets of participants and long periods of time (Moturu et
al., 2011; Sano et al., 2015; Triantafillou et al., 2019). These longitudinal studies show that
poor sleepers report high scores on the self-reported Perceived Stress Scale (PSS) and low
subjective happiness. Moreover, a longitudinal study with a large female-population sample
showed that better sleep was associated with positive affect on the next day
(Wild-Hartmann et al., 2013). Our dataset supports these previous findings, showing a
significant association between sleep quality, stress level and mood (Moturu et al., 2011;
Sano et al., 2015; Triantafillou et al., 2019). As far as we know, food enjoyment had never
before been linked with mood, stress level and quality of sleep. Interestingly, we found
strong evidence that participants enjoy their meals more during episodes of high mood, low
stress and better sleep.
Understanding the underlying factors of these fluctuations can also be relevant to
characterise mental disorders, such as depression (Rutledge et al., 2014). For example, Van
De Leemput et al. (2014) found that, in the transition to a depressed state, the mood of
patients fluctuates more slowly (higher autocorrelation), consistent with the approach to a
tipping point in a dynamical system. The authors also found a negative correlation between
contentment and anxiety, a result compatible with the negative relationship between mood
and stress reported in our data.
Similarly, recent studies support the longitudinal approach to studying depression by
stressing its intrinsic dynamic nature and its characterisation in terms of this interaction
between micro- and macro-behavioural patterns repeated over time (Wichers, 2014). This
dynamical view is also supported by computational models that look at network
categorization of affective states based on the predictive within-subject and between-subject
relationship of different underlying factors extracted from longitudinal self-reports (Bringmann
et al., 2013). Our findings are consistent with these results, as we find that mood fluctuations
are predictable over time and strongly correlate with fluctuations of other mental and
physical dimensions (e.g. appetite and sleep). In this way, we extend the notion of wellbeing
from the purely affective domain to a broader spectrum that includes both mental and
physical states, pointing to an underlying holistic well-being model comprising multiple
dimensions and timescales (Rutledge et al., 2014).
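
As an aside, the signature of critical slowing down invoked above, namely autocorrelation rising as a dynamical system approaches a tipping point, can be reproduced with a minimal simulation. The AR(1) form and all parameter values below are illustrative assumptions, not a model fitted to our data.

import numpy as np

def ar1_series(phi, n=2000, noise=1.0, seed=0):
    # AR(1) process x[t] = phi * x[t-1] + noise; phi approaching 1 mimics a
    # system losing resilience near a tipping point (critical slowing down).
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + noise * rng.standard_normal()
    return x

def lag1_autocorr(x):
    # Lag-1 autocorrelation, the early-warning indicator in this literature.
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

for phi in (0.2, 0.9):  # far from vs. close to the tipping point
    print(phi, round(lag1_autocorr(ar1_series(phi)), 2))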
Acknowledgements
We thank Guillermo Solovey and Andrés Taraciuk for productive discussions on the
experimental code development. MdF was funded by the grant “Linking Mood and Metacognition
through a mobile-based experimental platform” from Koa Health B.V. (formerly Telefonica
Innovation Alpha). RM-B is supported by grant BFU2017-85936-P from MINECO (Spain), the
Howard Hughes Medical Institute (HHMI; ref 55008742), an ICREA Academia award, and
the Bial Foundation (grant number 117/18). AH is funded by the Spanish Ministry of
Economy (grant PSI-2015-74644-JIN from the Jóvenes Investigadores programme and
grant RYC-2017-23231 from the Ramón y Cajal programme).
References
Averbeck, B. B., Evans, S., Chouhan, V., Bristow, E., & Shergill, S. S. (2011). Probabilistic
learning and inference in schizophrenia. Schizophrenia Research, 127(1–3), 115–122.
Barrett, A. B., Dienes, Z., & Seth, A. K. (2013). Measures of metacognition on
signal-detection theoretic models. Psychological Methods, 18(4), 535–552.
https://doi.org/10.1037/a0033268
Bhatia, S., & Mullett, T. L. (2016). The dynamics of deferred decision. Cognitive Psychology,
86, 112–151.
Bringmann, L. F., Vissers, N., Wichers, M., Geschwind, N., Kuppens, P., Peeters, F.,
Borsboom, D., & Tuerlinckx, F. (2013). A network approach to psychopathology: New
insights into clinical longitudinal data. PLoS ONE, 8(4), e60188.
https://doi.org/10.1371/journal.pone.0060188
Carland, M. A., Thura, D., & Cisek, P. (2019). The urge to decide and act: Implications for
brain function and dysfunction. The Neuroscientist, 25(5), 491–511.
Dagum, P. (2018). Digital biomarkers of cognitive function. npj Digital Medicine, 1(1), 10.
Damasio, A. (2008). Descartes’ Error: Emotion, Reason and the Human Brain. Random
House.
David, A. S., Bedford, N., Wiffen, B., & Gilleen, J. (2014). Failures of metacognition and lack
of insight in neuropsychiatric disorders. In The Cognitive Neuroscience of Metacognition
(pp. 345–365). https://doi.org/10.1007/978-3-642-45190-4_15
Dienes, Z., & Seth, A. (2010). Gambling on the unconscious: A comparison of wagering and
confidence ratings as measures of awareness in an artificial grammar task.
Consciousness and Cognition, 19(2), 674–681.
https://doi.org/10.1016/j.concog.2009.09.009
Eldar, E., Roth, C., Dayan, P., & Dolan, R. J. (2018). Decodability of reward learning signals
predicts mood fluctuations. Current Biology, 28(9), 1433–1439.e7.
Fleming, S. M., Massoni, S., Gajdos, T., & Vergnaud, J.-C. (2016). Metacognition about the
past and future: Quantifying common and distinct influences on prospective and
retrospective judgments of self-performance. Neuroscience of Consciousness, 2016(1).
https://doi.org/10.1093/nc/niw018
Fu, T., Koutstaal, W., Fu, C. H. Y., Poon, L., & Cleare, A. J. (2005). Depression, confidence,
and decision: Evidence against depressive realism. Journal of Psychopathology and
Behavioral Assessment, 27(4), 243–252. https://doi.org/10.1007/s10862-005-2404-x
García-Pérez, M. A., & Alcalá-Quintana, R. (2010). The difference model with guessing
explains interval bias in two-alternative forced-choice detection procedures. Journal of
Sensory Studies, 25(6), 876–898. https://doi.org/10.1111/j.1745-459x.2010.00310.x
García-Pérez, M. A., & Alcalá-Quintana, R. (2017). The indecision model of psychophysical
performance in dual-presentation tasks: Parameter estimation and comparative analysis
of response formats. Frontiers in Psychology, 8.
https://doi.org/10.3389/fpsyg.2017.01142
Gillan, C. M., & Rutledge, R. B. (2021). Smartphones and the neuroscience of mental
health. Annual Review of Neuroscience.
https://doi.org/10.1146/annurev-neuro-101220-014053
Grimaldi, P., Lau, H., & Basso, M. A. (2015). There are things that we know that we know,
and there are things that we do not know we do not know: Confidence in
decision-making. Neuroscience and Biobehavioral Reviews, 55.
https://doi.org/10.1016/j.neubiorev.2015.04.006
Hartley, C. A., & Phelps, E. A. (2012). Anxiety and decision-making. Biological Psychiatry,
72(2), 113–118. https://doi.org/10.1016/j.biopsych.2011.12.027
Hyman, S. E. (2010). The diagnosis of mental disorders: The problem of reification. Annual
Review of Clinical Psychology, 6(1), 155–179.
https://doi.org/10.1146/annurev.clinpsy.3.022806.091532
Insabato, A., Pannunzi, M., & Deco, G. (2016). Neural correlates of metacognition: A critical
perspective on current tasks. Neuroscience & Biobehavioral Reviews, 71, 167–175.
https://doi.org/10.1016/j.neubiorev.2016.08.030
Insel, T. R. (2014). The NIMH Research Domain Criteria (RDoC) Project: Precision medicine
for psychiatry. The American Journal of Psychiatry, 171(4), 395–397.
Jardri, R., & Denève, S. (2013). Circular inferences in schizophrenia. Brain.
https://doi.org/10.1093/brain/awt257
Jardri, R., Duverne, S., Litvinova, A. S., & Denève, S. (2017). Experimental evidence for
circular inference in schizophrenia. Nature Communications, 8(1), 1–13.
Jones, N. M., Johnson, M., Sathappan, A. V., & Torous, J. (2021). Benefits and limitations of
implementing mental health apps among the working population. Psychiatric Annals,
51(2), 76–83.
Kepecs, A., Uchida, N., Zariwala, H. A., & Mainen, Z. F. (2008). Neural correlates,
computation and behavioural impact of decision confidence. Nature, 455(7210),
227–231. https://doi.org/10.1038/nature07200
Kiani, R., & Shadlen, M. N. (2009). Representation of confidence associated with a decision
by neurons in the parietal cortex. Science, 324(5928), 759–764.
Lange, K., Kühn, S., & Filevich, E. (2015). “Just Another Tool for Online Studies” (JATOS):
An easy solution for setup and management of web servers supporting online studies.
PLOS ONE, 10(6), e0130834. https://doi.org/10.1371/journal.pone.0130834
Larsen, R. J., & Kasimatis, M. (1990). Individual differences in entrainment of mood to the
weekly calendar. Journal of Personality and Social Psychology, 58(1), 164–171.
https://doi.org/10.1037/0022-3514.58.1.164
Lieberman, H. R., Tharion, W. J., Shukitt-Hale, B., Speckman, K. L., & Tulley, R. (2002).
Effects of caffeine, sleep loss, and stress on cognitive performance and mood during
U.S. Navy SEAL training. Psychopharmacology, 164(3), 250–261.
Maner, J. K., Anthony Richey, J., Cromer, K., Mallott, M., Lejuez, C. W., Joiner, T. E., &
Schmidt, N. B. (2007). Dispositional anxiety and risk-avoidant decision-making.
Personality and Individual Differences, 42(4), 665–675.
https://doi.org/10.1016/j.paid.2006.08.016
Marriott, F. H. C., & Pope, J. A. (1954). Bias in the estimation of autocorrelations.
Biometrika, 41(3/4), 390. https://doi.org/10.2307/2332719
De Martino, B., Fleming, S. M., Garrett, N., & Dolan, R. J. (2013). Confidence in value-based
choice. Nature Neuroscience, 16(1), 105–110. https://doi.org/10.1038/nn.3279
Massoni, S. (2014). Emotion as a boost to metacognition: How worry enhances the quality of
confidence. Consciousness and Cognition, 29, 189–198.
Massoni, S., Gajdos, T., & Vergnaud, J.-C. (2014). Confidence measurement in the light of
signal detection theory. Frontiers in Psychology, 5, 1455.
Moore, D. A., & Healy, P. J. (2008). The trouble with overconfidence. Psychological Review,
115(2), 502–517.
Moreno-Bote, R. (2010). Decision confidence and uncertainty in diffusion models with
partially correlated neuronal integrators. Neural Computation, 22(7), 1786–1811.
Moturu, S. T., Khayal, I., Aharony, N., Pan, W., & Pentland, A. S. (2011). Sleep, mood and
sociability in a healthy population. Annual International Conference of the IEEE
Engineering in Medicine and Biology Society, 2011, 5267–5270.
Moutoussis, M., Bentall, R. P., El-Deredy, W., & Dayan, P. (2011). Bayesian modelling of
jumping-to-conclusions bias in delusional patients. Cognitive Neuropsychiatry, 16(5),
422–447.
Pritchett, L. M., & Murray, R. F. (2015). Classification images reveal decision variables and
strategies in forced choice tasks. Proceedings of the National Academy of Sciences,
112(23), 7321–7326. https://doi.org/10.1073/pnas.1422169112
Raghunathan, R., & Pham, M. T. (1999). All negative moods are not equal: Motivational
influences of anxiety and sadness on decision making. Organizational Behavior and
Human Decision Processes, 79(1), 56–77. https://doi.org/10.1006/obhd.1999.2838
Rouault, M., Seow, T., Gillan, C. M., & Fleming, S. M. (2018). Psychiatric symptom
dimensions are associated with dissociable shifts in metacognition but not task
performance. Biological Psychiatry, 84(6), 443–451.
https://doi.org/10.1016/j.biopsych.2017.12.017
Rubio, J. L., Ruiz-Veguilla, M., Hernández, L., Barrigón, M. L., Salcedo, M. D., Moreno, J.
M., Gómez, E., Moritz, S., & Ferrín, M. (2011). Jumping to conclusions in psychosis: A
faulty appraisal. Schizophrenia Research, 133(1–3), 199–204.
Rutledge, R. B., Skandali, N., Dayan, P., & Dolan, R. J. (2014). A computational and neural
model of momentary subjective well-being. Proceedings of the National Academy of
Sciences of the United States of America, 111(33), 12252–12257.
Sanders, J. I., Hangya, B., & Kepecs, A. (2016). Signatures of a statistical computation in
the human sense of confidence. Neuron, 90(3), 499–506.
https://doi.org/10.1016/j.neuron.2016.03.025
Sano, A., Phillips, A. J., Yu, A. Z., McHill, A. W., Taylor, S., Jaques, N., Czeisler, C. A.,
Klerman, E. B., & Picard, R. W. (2015). Recognizing academic performance, sleep
quality, stress level, and mental health using personality traits, wearable sensors and
mobile phones. 2015 IEEE 12th International Conference on Wearable and Implantable
Body Sensor Networks (BSN), 1–6.
Schustek, P., Hyafil, A., & Moreno-Bote, R. (2019). Human confidence judgments reflect
reliability-based hierarchical integration of contextual information. Nature
Communications, 10(1). https://doi.org/10.1038/s41467-019-13472-z
Solovey, G., Shalom, D., Pérez-Schuster, V., & Sigman, M. (2016). Perceptual learning effect
on decision and confidence thresholds. Consciousness and Cognition, 45, 24–36.
Taquet, M., Quoidbach, J., Fried, E. I., & Goodwin, G. M. (2021). Mood homeostasis before
and during the coronavirus disease 2019 (COVID-19) lockdown among students in the
Netherlands. JAMA Psychiatry, 78(1), 110–112.
https://doi.org/10.1001/jamapsychiatry.2020.2389
Taquet, M., Quoidbach, J., Gross, J. J., Saunders, K. E. A., & Goodwin, G. M. (2020). Mood
homeostasis, low mood, and history of depression in 2 large population samples. JAMA
Psychiatry, 77(9), 944–951.
Taylor, S. A., Jaques, N., Nosakhare, E., Sano, A., & Picard, R. (2017). Personalized
multitask learning for predicting tomorrow’s mood, stress, and health. IEEE
Transactions on Affective Computing, 1–1.
Triantafillou, S., Saeb, S., Lattie, E. G., Mohr, D. C., & Kording, K. P. (2019). Relationship
between sleep quality and mood: Ecological momentary assessment study. JMIR
Mental Health, 6(3), e12613.
Urai, A. E., Braun, A., & Donner, T. H. (2017). Pupil-linked arousal is driven by decision
uncertainty and alters serial choice bias. Nature Communications, 8(1), 1–11.
Van De Leemput, I. A., Wichers, M., Cramer, A. O. J., Borsboom, D., Tuerlinckx, F.,
Kuppens, P., Van Nes, E. H., Viechtbauer, W., Giltay, E. J., Aggen, S. H., Derom, C.,
Jacobs, N., Kendler, K. S., Van Der Maas, H. L. J., Neale, M. C., Peeters, F., Thiery, E.,
Zachar, P., & Scheffer, M. (2014). Critical slowing down as early warning for the onset
and termination of depression. Proceedings of the National Academy of Sciences of the
United States of America, 111(1), 87–92.
Vickers, D. (1979). Confidence. In Decision Processes in Visual Perception (pp. 171–200).
https://doi.org/10.1016/b978-0-12-721550-1.50011-9
Vickers, D. (2014). Decision Processes in Visual Perception. Academic Press.
Wichers, M. (2014). The dynamic nature of depression: A new micro-level perspective of
mental disorder that meets current challenges. Psychological Medicine, 44(7),
1349–1360. https://doi.org/10.1017/s0033291713001979
de Wild-Hartmann, J. A., Wichers, M., van Bemmel, A. L., Derom, C., Thiery, E., Jacobs, N.,
van Os, J., & Simons, C. J. P. (2013). Day-to-day associations between subjective sleep
and affect in regard to future depression in a female population-based sample. British
Journal of Psychiatry, 202(6), 407–412. https://doi.org/10.1192/bjp.bp.112.123794
Wyart, V., Nobre, A. C., & Summerfield, C. (2012). Dissociable prior influences of signal
probability and relevance on visual contrast sensitivity. Proceedings of the National
Academy of Sciences of the United States of America.
https://doi.org/10.1073/pnas.1120118109
Xu, H. (2020). Big five personality traits and ambiguity management in career
decision-making. The Career Development Quarterly, 68(2), 158–172.
https://doi.org/10.1002/cdq.12220
Xu, S., Liu, Q., & Wang, C. (2021). Self-reported daily sleep quality modulates the impact of
the framing effect on outcome evaluation in decision-making under uncertainty: An ERP
study. Neuropsychologia, 157, 107864.
https://doi.org/10.1016/j.neuropsychologia.2021.107864
Yeung, N., & Summerfield, C. (2012). Metacognition in human decision-making: Confidence
and error monitoring. Philosophical Transactions of the Royal Society of London. Series
B, Biological Sciences, 367(1594), 1310–1321.
SUPPLEMENTARY FIGURES
Supplementary figure 1. Quality of life self-reported questionnaire. Participants
answered 3 or 4 questions about their quality of life states by moving cursors on continuous
scales. In the morning sessions, they reported their mood (‘How have you felt this
morning?’), sleep quality (‘How did you sleep last night?’), food enjoyment (‘How did you
enjoy your last meal/snack?’), and stress level (‘How did you feel about your personal and
work-related problems this morning?’). The same questions were asked in the afternoon
sessions, replacing the word ‘mañana’ (morning) with ‘tarde’ (afternoon), except for the
sleep question, which was skipped.
Supplementary figure 2. Fluctuations of psychometric variables (orientation task): A.
Mean values across participants and sessions vs. stimulus difficulty. Shaded areas indicate
the standard error of the mean (s.e.m.). Left panel: accuracy percentage for the three types
of trials. Middle panel: percentage of opt-out responses at each stage. Right panel: reaction
time for correct trials and for incorrect non-optout trials. B. Mean (line) and s.e.m. (shaded
areas) values across participants. Gray shaded areas indicate weekend sessions. C.
Autocorrelation coefficients for non-optout accuracy, opt-out selection and reaction time of
correct non-optout trials (star: p < 0.05 after correction). D. Distribution of Pearson
across-session correlation coefficients between the proportions of deterministic (DO) and
stochastic (SO) opt-outs. Filled bars indicate subjects with a significant correlation (p < 0.05)
at the individual level. E. Matrix representing the mean correlation coefficient across
participants (** p < 0.01, *** p < 0.001, **** p < 10⁻⁴, uncorrected).
Supplementary figure 3. Overconfidence & risk aversion fluctuations: A. Mean (line)
and s.e.m. (shaded areas) values across participants. Gray shaded area indicates weekend
sessions. B. Autocorrelation coefficients (star: p < 0.05 after correction).