A Metacognitive Triggering Mechanism for Anticipatory Thinking
Alexander R. Hough1,2, Othalia Larue1, and Ion Juvina1
1Wright State University, Psychology Department, ASTECCA laboratory, Dayton, OH 45435, USA
2ORISE at Air Force Research Laboratory, Wright-Patterson Air Force Base, OH 45433, USA
hough.15@wright.edu, othalia.larue@wright.edu, and ion.juvina@wright.edu
Abstract
Current autonomous systems have the ability to adapt to environmental changes in real-time, but limited ability to engage in anticipatory thinking (AT) with the flexibility to generalize and consider hypothetical future situations. We argue that metacognitive processes are important for AT and provide supporting literature primarily from psychology. As an example, we present a metacognitive monitoring mechanism implemented in a cognitive model and discuss ways to extend the mechanism to allow for dynamic behavior and anticipatory thinking capabilities.
Anticipatory Thinking
Anticipatory thinking (AT) is the deliberate exploration and consideration of hypothetical future outcomes in order to identify an appropriate action or plan (Amos-Binks and Dannenhauer 2019; Geden et al. 2018). AT involves an array of cognitive processes (Klein et al. 2003; Koziol, Budding, and Chidekel 2012), such as mental simulation, recognition, preparation, and development of expectancies, which are not completely understood prior to an event occurring (Klein, Snowden, and Pin 2011; Warwick and Hutton 2007). It is considered distinct from prediction (Klein et al. 2011) and is described as gambling with attention in hopes of directing it towards the most relevant event (Klein et al. 2007).
Geden et al. (2019) identified three forms of AT: how past states led to current states (retrospective branching), anticipating future states and their indicators (prospective branching), and focusing on a potential future and working backwards (backcasting). Klein et al. (2011) take a more naturalistic approach to AT, emphasizing the detection of discrepancies through recognition and degree of match between past, current, and future situations (pattern matching), using "trajectory" to prepare for the future and extrapolate trends (trajectory tracking), and being mindful of connections, implications, and interdependencies between events (conditionals). Geden et al. (2019) and Klein et al. (2011) have slightly different approaches; however, they both identified similar AT processes: recognizing feature and cue relationships between situations, extrapolation or generalization to other states, and construction of mental models based on available evidence.
Anticipating future events is crucial. The real world is dynamic, often unpredictable, may have ill-defined goals, and may involve high stakes. The ability to generate, use, and reason about plans or goals to direct behavior and adapt to changes is important for intelligent behavior (Newell and Simon 1972; Schank and Abelson 1977) and autonomy (Johnson et al. 2016; Vattam et al. 2013). Goal-driven behavior leverages discrepancies between expectations and the environment in real-time; when discrepancies are detected, they are addressed by modifying goals, reasoning about goals, and learning (Aha 2018; Cox et al. 2016; Muñoz-Avila et al. 2019; Pozanco, Fernández, and Borrajo 2018; Roberts et al. 2018). Amos-Binks and Dannenhauer (2019) suggest most current systems lack AT capabilities, such as the ability to address unknown hypothetical future events by identifying and avoiding errors or discrepancies before they occur, and the ability to effectively trade off the costs of computation against the benefits of considering a large number of possible futures. AT systems need to strike a balance between flexibility and stability in order to adapt to dynamic real-world environments before conditions change (Bratman, Israel, and Pollack 1988).
Klein et al. (2011) suggest good AT requires one to be sensitive to constraints and affordances based on one's own beliefs, capabilities, and the current situation. Metacognitive monitoring could help overcome the computation problem and some of the barriers to AT identified by Klein et al. (2011), such as taking a passive stance, becoming fixated on patterns, explaining away evidence or interpretations, and being overconfident. Similar to the view of AT as a metacognitive capability (Amos-Binks and Dannenhauer 2019), we emphasize how the calibration between metacognitive monitoring and reality could help indicate when AT is needed, how to accomplish it efficiently, and how to reduce the number of futures to consider. We explore psychological measurements of metacognitive conflict detection and resolution processes in simpler tasks and discuss how these capabilities could be extended to AT.
A Critical Role for Metacognition
Humans use heuristics to make efficient and accurate decisions (Cosmides and Tooby 1996; Gigerenzer and Gaissmaier 2011), but this can lead to systematic error in inappropriate, novel, or misleading environments (Evans and Stanovich 2013; Kahneman 2011; Kahneman and Klein 2009). A critical ability is recognizing when an approach is inadequate and suppressing it to come up with an alternative (Stanovich 2018). This ability is metacognition, which serves to detect conflict or mismatch between an environment and a strategy, type of processing, or expectation. This detection depends on predictability and cues in the environment, the ability to recognize relevant cues, and whether goals are reachable (Dannenhauer et al. 2018; Evans and Stanovich 2013; Johnson et al. 2016; Klein 1998; Klein et al. 2007; Pennycook, Fugelsang, and Koehler 2015a; Stanovich 2018; Vattam et al. 2013). This process is referred to as metacognitive monitoring or experience; it provides feedback, leads to control decisions, activates knowledge, and can be calibrated through experience, leading to better regulation of behavior (Efklides 2006; Efklides, Samara, and Petropoulou 1999; Flavell 1979). Conflict often triggers the need for a different approach towards solving a problem or completing a task (Butcher and Sumner 2011; Dannenhauer et al. 2018; Pennycook 2017; Stanovich 2018). However, it does not always lead to efficient processing (Pennycook et al. 2015a, 2015b; Swan, Calvillo, and Revlin 2018) or solutions that resolve the conflict (Novick and Holyoak 1991).
Monitoring and Conflict
There are several methods for measuring metacognitive monitoring (e.g., Gascoine, Higgins, and Wall 2017). Two common methods in psychology are the performance-based Cognitive Reflection Test (CRT; Frederick 2005), which requires overriding a primed heuristic response in favor of a more deliberate response, and the subjective Feeling of Rightness (FOR; Thompson et al. 2011), which indicates one's accuracy and awareness of one's own metacognitive monitoring. Mata, Ferreira, and Sherman (2013) found that those with better metacognitive awareness as measured by the CRT were able to generate both heuristic and deliberate answers, more accurately rate the performance of others and themselves, and better focus on the most relevant features of a problem. Epstein et al. (1996) found that the ability to shift between heuristic and deliberate thinking was better than exclusively relying on one. Metacognitive monitoring as measured by the CRT, FOR, and related tasks may be more closely related to actual intelligence than traditional measures, because it includes motivation and ability (Frederick 2005; Toplak, West, and Stanovich 2011). For instance, Barr et al. (2015) found the CRT correlates positively with cognitive ability, need for cognition, analogies, and the remote associates test, and negatively with faith in intuition. Furthermore, the CRT correlates with cognitive ability, performance on heuristics-and-biases tasks, belief bias, rational thinking, set shifting, and working memory, and predicts rational thinking performance independent of intelligence, executive functioning, and thinking dispositions (Toplak et al. 2012).
Conflict associated with metacognitive experience has been measured using response times (De Neys and Glumicic 2008; Pennycook et al. 2015a), the FOR (Thompson et al. 2011), the CRT (Frederick 2005), and activation of specific brain regions often associated with cognitive control, including the anterior cingulate cortex (Croxson et al. 2009; Kennerley et al. 2009) and medial prefrontal cortex (Botvinick et al. 1999; Cohen, Botvinick, and Carter 2000). This conflict is still observed when manipulations are in place to minimize deliberation (Pennycook et al. 2015a; Thompson and Johnson 2014), and error signals during comprehension (Glenberg, Wilkinson, and Epstein 1982; McNamara et al. 1996) and disfluency appear to prompt similar responses (Alter et al. 2007). Metacognitive monitoring appears to involve both top-down and bottom-up processes: the willingness or motivation to engage in analytic thinking (e.g., CRT) appears to be top-down, while the detection of conflicts that triggers the engagement (e.g., FOR) appears to be bottom-up (Pennycook et al. 2015b; Stanovich 2018). Here, we focus primarily on bottom-up processes, but plan to further address top-down processes in future work.
Dynamic Behavior
Metacognitive monitoring is effective but not perfect. It may fail to detect conflict (Swan et al. 2018) or may result in faulty judgments after conflict is detected. Detection may not direct one to the knowledge necessary to solve the problem or implement a strategy (Novick and Holyoak 1991), and the outcome might be influenced by biases, such as overconfidence in naive individuals (Fischhoff 2012) or confirmation bias in the more experienced (Kahneman 2011). Klein et al. (2006b) acknowledge that detecting such a conflict or recognizing insufficient performance is important, but understanding how to modify thinking processes to address the problem is more valuable. Metacognition could help guide conflict resolution by helping to determine whether the environment calls for more deliberate processing or quick, less elaborate responding. Although humans are often good at sizing up the environment (Klein 1998) and making efficient tradeoffs between speed and accuracy (Payne, Bettman, and Johnson 1988), exerting mental effort is often experienced as aversive (Halpern 2014; Kahneman 2011) and may be avoided based on an individual's subjective cost of effort (Westbrook, Kester, and Braver 2013). Similarly, in AT the environment may favor considering more hypothetical future situations, exploring some more deeply, or quickly anticipating and preparing for a few. Although typically applied to current events, metacognitive monitoring could be extended to future events to help determine which approach fits the environment better. For instance, if an individual engages in the three types of AT (i.e., pattern matching, trajectory tracking, and conditionals) identified by Klein et al. (2011) regarding a hypothetical future, the presence of conflict among them could inform whether that hypothetical future is appropriate or whether it should be discarded or modified. Metacognitive calibration could help determine the appropriate response based on one's understanding of one's own abilities and knowledge, and how that corresponds to a situation. Research addressing the understanding process, sensemaking, critical thinking, forecasting, and counterfactual thinking provides examples of how to identify the source of conflict, how to make sense of and resolve it, and how to determine which potential outcomes are most likely.
The understanding process was recently defined in a multidisciplinary review as "the acquisition, organization, and appropriate use of knowledge to produce a response directed towards a goal, when that action is taken with awareness of its perceived purpose" (Hough and Gluck forthcoming, p. 11). The review revealed common features of understanding and discussed how computer science, education, psychology, and philosophy all emphasize the importance of metacognition for understanding capabilities. Metacognition was described as a self-evaluative feedback mechanism for identifying faulty knowledge or gaps; it triggers additional processing or information search, which helps calibrate mental representations with the environment (Butcher and Sumner 2011; Forbus and Hinrichs 2006; Kirk and Laird 2014; Mayer 1998; Perkins 1998; Perkins and Simmons 1988; Woodward 2003). Better understanding, like expertise, could increase the quality of AT by directing attention toward the most relevant features.
Sensemaking models also involve components of understanding, such as abstraction of knowledge, development of relations, and the ability to transfer knowledge to distant situations, and they often involve leveraging domain and context information to develop frames (Hough and Gluck forthcoming). Pirolli and Card's (2005) sensemaking model includes an information foraging loop (Pirolli and Card 1999) and a sensemaking loop (Russell et al. 1993). During foraging, an individual searches for and filters information, then applies effort to give it more structure in an iterative process. During sensemaking, one utilizes schemas to form hypotheses and conclusions, similar to the construction of a mental model (e.g., Johnson-Laird 2013). Although not explicit in the model, there appears to be a metacognitive process: if there is insufficient evidence for a hypothesis, a case cannot be built, or a discrepancy is detected, the agent goes back to the foraging loop to fill in the gaps or gather evidence for a new schema or hypothesis. Similarly, the data/frame model (Klein, Moon, and Hoffman 2006a, 2006b; Klein et al. 2007) does not explicitly mention metacognition, but does involve "questioning the frame," which includes anomaly detection and expectancy violations. If there is a discrepancy, the existing frame can be discarded, elaborated, preserved, reframed, or compared to another.
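The control structure implied by these models can be sketched in a few lines of Python (a minimal illustration of the two loops and the implicit metacognitive check, not a reimplementation of either cited model; `forage`, `build_schema`, and `supports` are hypothetical placeholders for the processes named above):

```python
# A minimal sketch of the control flow implied by the foraging and
# sensemaking loops, with the implicit metacognitive check that sends
# the agent back to foraging when a hypothesis cannot be supported.

def sensemaking(topic, forage, build_schema, supports, max_cycles=10):
    evidence = []
    for _ in range(max_cycles):
        evidence.extend(forage(topic))        # foraging loop: search and filter
        hypothesis = build_schema(evidence)   # sensemaking loop: schema -> hypothesis
        # Implicit metacognitive check: is the hypothesis adequately supported?
        if hypothesis is not None and supports(evidence, hypothesis):
            return hypothesis                 # a case can be built
        # Insufficient evidence or a detected discrepancy: forage again.
    return None
```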
Critical thinking is described as the ability to explain, justify, extrapolate, relate, and apply in ways that go beyond knowledge and skill, and training in critical thinking and metacognitive monitoring can enhance understanding and generalizability (Halpern 1998, 2014; Willingham 2007). Similar to metacognitive monitoring, after controlling for cognitive ability, critical thinking correlates with the ability to avoid cognitive biases by thinking logically even when logic conflicts with prior beliefs and thinking dispositions (West, Toplak, and Stanovich 2008). To better understand and teach these skills, Halpern (2010) developed a comprehensive measure, the Halpern Critical Thinking Assessment (HCTA), which includes decision making, problem solving, hypothesis testing, argument analysis, likelihoods and uncertainties, and verbal reasoning. The HCTA generalizes across various populations, correlates positively with years of education, and correlates negatively with the frequency of negative life events in a real-world outcome inventory (Butler 2012). Critical thinking has some similarities to sensemaking, but may be more generalizable and appropriate for AT when little knowledge is available.
Forecasting involves predicting the probability that specific events will occur. Its associated processes could be used during AT to help reduce unnecessary consideration of unlikely future outcomes or to determine which outcomes are more likely and should be better prepared for. Accurate forecasters typically have higher cognitive ability, motivation, CRT scores, and open-minded thinking (Juvina et al. under review; Mellers et al. 2015; Tetlock and Gardner 2015). In addition, they often respond faster, have better discriminability and calibration, and learn faster.
Counterfactual thinking occurs after an event is experienced and involves considering forgone outcomes (Byrne 2016; Kahneman and Miller 1986). It is more typical after failures or shortcomings (Hur 2001; Roese and Olson 1997; Sanna and Turley 1996; Sanna and Turley-Ames 2000) and often involves ways to correct or improve upon previous behaviors (Markman et al. 1993; Roese 1997; Roese, Hur, and Pennington 1999). Epstude and Roese (2008) suggest that this may depend on the realization that there is a problem or that goals are not sufficiently met, which is a form of metacognitive monitoring. Improving future outcomes may be achieved through goal-oriented reasoning (Epstude and Roese 2008; Roese and Epstude 2017) or through increases in motivation, persistence, and performance (Dyczewski and Markman 2012; Markman, McMullen, and Elizaga 2008). Although it occurs after the fact, this type of thinking could provide experience to help calibrate metacognitive processes, provide more constructive ways to think about future events, and help identify relevant alternative possibilities.
A Cognitive Model with Metacognitive Monitoring
We previously developed a model of the Wason card selection task (Wason 1966, 1968) with initial aspects of metacognitive monitoring (see Larue, Hough, and Juvina 2018 for a full description). Our approach was informed by mental models (Johnson-Laird 2013) and dual-process theories, specifically Stanovich's (2009) tripartite framework. Stanovich's (2009) framework explains how reflective and adaptive (characterized by reactivity) human behavior emerges from the interaction of three distinct cognitive levels or "minds." The autonomous mind, responsible for fast behaviors, includes instinctive and over-learned processes, domain-specific knowledge, and emotional regulation. The algorithmic mind, responsible for cognitive control, can carry out decoupling (i.e., simulation) and serial associative processes. The reflective mind, responsible for deliberative processing, can trigger or suppress the algorithmic mind's decoupling and serial associative processes. In this framework, the reflective mind would be the center for metacognitive monitoring.
Here we briefly describe our model and then discuss how it could be augmented and applied to AT. In the Wason card selection task, two cards (A and 7) out of four (A, D, 3, and 7) must be flipped over to verify a rule: if A is on one side, then there is a 3 on the other. "A" and "3" are intuitively compelling to flip over because they are both present in the rule. Flipping over "A" is necessary because it can falsify the rule; however, flipping over "3" is unnecessary because it may confirm the rule but cannot falsify it. Two types of logical errors are common: the selection of the unnecessary card (3) and the non-selection of the necessary card (7). We believe selecting the unnecessary card results from a metacognitive failure to detect its inadequacy, whereas not selecting the necessary card involves incomplete decoupling after the detection and override of selecting the unnecessary card. The incomplete decoupling may result from participants applying modus ponens (if P then Q) but failing in the application of modus tollens (if not Q then not P), a "partial insight" (Evans 1977; Wason 1969). Flipping only the correct intuitively compelling card (A) occurs because modus tollens requires simulating more intermediary mental models than modus ponens.
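The normative logic of the task can be made concrete in a few lines of Python (a minimal illustration, separate from the ACT-R model described below; the letter and number alphabets are hypothetical placeholders):

```python
# A card must be flipped iff some hidden face could falsify the rule
# "if A is on one side, then there is a 3 on the other."

LETTERS = ["A", "B", "C", "D"]   # hypothetical letter alphabet
NUMBERS = [3, 7, 2, 9]           # hypothetical number alphabet

def violates(letter, number):
    """The rule is violated only when an A is paired with a non-3."""
    return letter == "A" and number != 3

def must_flip(visible):
    """Flip a card iff a possible hidden face would violate the rule."""
    if visible in LETTERS:
        return any(violates(visible, n) for n in NUMBERS)   # modus ponens side
    return any(violates(l, visible) for l in LETTERS)       # modus tollens side

for face in ["A", "D", 3, 7]:
    print(face, "-> flip" if must_flip(face) else "-> ignore")
# Prints: A -> flip, D -> ignore, 3 -> ignore, 7 -> flip
```

Only "A" and "7" can reveal a violating pairing, which is why "3" confirms but never falsifies.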
Our model was implemented in the ACT-R cognitive architecture (Anderson 2007) with a core affect mechanism (see Juvina et al. 2017) and a FOR component to drive decoupling behavior. Rethinking times, answer changes, and fluency are functions of the FOR. The FOR is computed from the time required to achieve the initial retrieval of the answer for the two intuitively compelling cards through the initial priming rule (autonomous mind), and it serves as a gateway for further processing. We use the temporal module in ACT-R (Taatgen, van Rijn, and Anderson 2007) to measure time in ticks, which are noisy and increase in duration in a fashion similar to human time estimation. In the Wason task, the FOR is computed as a function of the time required to achieve the initial retrieval of "A" and "3" through the initial priming rule. When the FOR is high, the model goes with the initial answer (i.e., heuristic processing); when it is low, cognitive decoupling is launched by the reflective mind and carried out by the algorithmic mind. In the model, the time required to achieve the initial retrievals is assigned to FOR-inverse (i.e., a higher time means a lower FOR). When FOR-inverse is below threshold (see Figure 1), the model goes with the initial answer (i.e., type 1 processing). When FOR-inverse is above threshold, cognitive decoupling is launched and the model engages in further processing (i.e., type 2 processing) with representations that are copied from its working memory based on activation during open retrieval (those with the highest activation are retrieved). The representations are used in an inner cognitive simulation to indicate which rules from the reflective mind can be applied. The process by which representations are copied and used in a separate inner simulation is "cognitive decoupling." The importance of further processing is a function of the FOR, which determines the extent to which a decoupling result (i.e., wrong, partial, or complete) is taken into account in the final answer. The model produces an answer when the valuation of a representation is above a certain threshold. The valuation and arousal values, which are sub-symbolic quantities added to the current sub-symbolic equations of ACT-R, help define the core affect. When a reward is triggered, valuations are updated. Rewards are a function of the initial FOR (a negative factor in the case of negative reward), which affects answer selection ("yes" or "no" answers are produced according to how the model "feels" about the answer) (see Figure 1).
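The gating role of FOR-inverse can be summarized schematically (a minimal Python sketch, not the ACT-R implementation; the threshold value and noise model are hypothetical placeholders standing in for ACT-R's temporal module):

```python
# Schematic sketch of FOR-based gating: fluent (fast) retrieval yields a
# high FOR and a type 1 answer; slow retrieval yields a low FOR and
# triggers cognitive decoupling (type 2 processing).
import random

THRESHOLD = 12          # hypothetical FOR-inverse threshold (in ticks)

def retrieval_ticks(base_latency):
    """Noisy tick count for the initial retrieval, loosely mimicking
    ACT-R's temporal module (Taatgen, van Rijn, and Anderson 2007)."""
    return max(1, int(random.gauss(base_latency, 2)))

def answer(base_latency):
    for_inverse = retrieval_ticks(base_latency)  # slower retrieval -> lower FOR
    if for_inverse < THRESHOLD:
        return "type 1: keep intuitive answer (high FOR)"
    # Low FOR: the reflective mind launches decoupling; the algorithmic mind
    # simulates the rule on representations copied from working memory.
    return "type 2: launch cognitive decoupling (low FOR)"

print(answer(base_latency=8))    # fluent retrieval, likely heuristic
print(answer(base_latency=16))   # slow retrieval, likely decoupling
```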
A training procedure was used to simulate individual differences in heuristic and analytical behavior, where different degrees of reinforcement allowed the model to learn logical skills and vary in FOR. Metacognitive monitoring implemented by the FOR determined whether and how much additional processing occurred. Model simulations produced three types of outcomes that are typically observed in humans: (1) complete reliance on the autonomous mind (no decoupling), leading to the observed common error (correctly selecting "A" and incorrectly selecting "3," guided by confirmation bias); (2) partial decoupling (partial insight), leading to correctly selecting "A" and not falling for the confirmation bias (already simulated in a possible world); and (3) complete decoupling, allowing for the activation of the counter-information rule, which is less often activated.

[Figure 1. Answer Selection Processes in Our Cognitive Model]
Discussion
We presented some perspectives from metacognitive monitoring research and some dynamic higher-level cognitive processes. This research informed the development of our model, and we believe the interaction between the FOR (metacognitive monitoring) and decoupling (mental simulation) in our model could be applied to AT.
In the model discussed here, the FOR determines whether there is a need for more deliberation or a different approach to complete the task. The FOR could be extended to simulate future situations through decoupling and to include how much the agent "knows" about the environment. For instance, the FOR could determine how long decoupling should continue (e.g., generating counterfactuals) and when it should stop. Sensemaking and critical thinking emphasize generating hypotheses and gathering evidence to indicate the degree to which each is supported. The FOR could be informative when there is a lack of knowledge, a failure to find matching strategies or procedures, or a lack of evidence. A partial matching procedure could be used based on the degree of the FOR, a lower FOR could increase the breadth of future situations to consider, and a strategy or approach could be chosen out of a hierarchy based on the FOR. Learning can occur over time, and counterfactuals could be generated and used to learn what could have happened based on hypothetical actions corresponding to a given situation or environment. Although these processes typically apply to situations in real time, they could be extended through mental simulation to hypothetical future states. Furthermore, when hypothetical futures are considered, they could be weighted by the predicted probability of their occurrence through forecasting. This process, in combination with the FOR, could also help inform when enough hypothetical futures have been considered. These types of processes could help highlight the most relevant and likely future outcomes, so that less preparation and planning is required. As the architecture learns more and is placed in context, it reacts to events that it might have previously encountered. If there is a match (i.e., full or partial) with a previously developed strategy from cognitive decoupling, this strategy is recorded and reinforced in the architecture. The reinforcement of this strategy will lead to its prioritization in procedural memory over other strategies, and its declarative components will be faster to retrieve from declarative memory. This means that the next time this strategy is used, the FOR will be better calibrated, resulting in more accurate and adaptive behavior with the potential for AT capabilities.
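One way to make this proposal concrete is the following speculative Python sketch, in which the FOR sets the breadth of futures generated by decoupling and forecast probabilities prune the set; all function names and numeric values are hypothetical placeholders rather than an implemented system:

```python
# Speculative sketch of FOR-driven anticipatory thinking: a lower FOR
# (less confidence in the current frame) broadens the search over
# hypothetical futures, and forecast probabilities prune unlikely ones.

def generate_futures(state, breadth):
    """Stand-in for decoupled mental simulation: propose candidate futures."""
    return [f"{state}-future-{i}" for i in range(breadth)]

def forecast(future):
    """Stand-in for a forecasting process returning P(future occurs)."""
    return 1.0 / (1 + len(future) % 5)   # arbitrary placeholder probability

def anticipate(state, feeling_of_rightness, keep_threshold=0.3):
    # Lower FOR -> less confidence in the current frame -> broader search.
    breadth = max(1, int((1.0 - feeling_of_rightness) * 10))
    candidates = generate_futures(state, breadth)
    # Weight candidates by forecast probability and prune unlikely ones.
    weighted = [(f, forecast(f)) for f in candidates]
    return [f for f, p in weighted if p >= keep_threshold]

print(anticipate("patrol", feeling_of_rightness=0.2))  # broad exploration
print(anticipate("patrol", feeling_of_rightness=0.9))  # narrow, confident
```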
Acknowledgements
This research was supported in part by Alex Hough's appointment to the Student Research Participation Program at the U.S. Air Force Research Laboratory, 711th Human Performance Wing, Cognitive Science, Models, and Agents Branch, administered by the Oak Ridge Institute for Science and Education. Alex would like to thank the Chief Scientist's office of the 711th Human Performance Wing for the opportunity to participate in the Repperger internship program. We also thank Kevin Gluck and two anonymous reviewers for constructive feedback on an earlier draft.
References
Alter, A. L.; Oppenheimer, D. M.; Epley, N.; and Eyre, R. N. 2007. Overcoming Intuition: Metacognitive Difficulty Activates Analytic Reasoning. Journal of Experimental Psychology: General 136(4): 569-576. doi.org/10.1037/0096-3445.136.4.569
Anderson, J. R. 2007. How Can the Human Mind Occur in the Physical Universe? New York: Oxford University Press.
Amos-Binks, A.; and Dannenhauer, D. 2019. Anticipatory Thinking: A Metacognitive Capability. arxiv.org/pdf/1906.12249.pdf
Barr, N.; Pennycook, G.; Stolz, J. A.; and Fugelsang, J. A. 2015. Reasoned Connections: A Dual-Process Perspective on Creative Thought. Thinking & Reasoning 21(1): 61-75. doi.org/10.1080/13546783.2014.895915
Botvinick, M.; Nystrom, L. E.; Fissell, K.; Carter, C. S.; and Cohen, J. D. 1999. Conflict Monitoring Versus Selection-for-Action in Anterior Cingulate Cortex. Nature 402(6758): 179.
Bratman, M. E.; Israel, D. J.; and Pollack, M. E. 1988. Plans and Resource-Bounded Practical Reasoning. Computational Intelligence 4: 349-355. doi.org/10.1111/j.1467-8640.1988.tb00284.x
Butcher, K.; and Sumner, T. 2011. Self-Directed Learning and the Sensemaking Paradox. Human-Computer Interaction 26: 123-159. doi.org/10.1080/07370024.2011.556552
Butler, H. A. 2012. Halpern Critical Thinking Assessment Predicts Real-World Outcomes of Critical Thinking. Applied Cognitive Psychology 26: 721-729. doi.org/10.1002/acp.2851
Byrne, R. M. 2016. Counterfactual Thought. Annual Review of Psychology 67: 135-157. doi.org/10.1146/annurev-psych-122414-033249
Cohen, J. D.; Botvinick, M.; and Carter, C. S. 2000. Anterior Cingulate and Prefrontal Cortex: Who's in Control? Nature Neuroscience 3(5): 421-423. doi.org/10.1038/74783
Cosmides, L.; and Tooby, J. 1996. Are Humans Good Intuitive Statisticians After All? Rethinking Some Conclusions from the Literature on Judgment Under Uncertainty. Cognition 58(1): 1-73.
Croxson, P. L.; Walton, M. E.; O'Reilly, J. X.; Behrens, T. E.; and Rushworth, M. F. 2009. Effort-Based Cost-Benefit Valuation and the Human Brain. Journal of Neuroscience 29(14): 4531-4541. doi.org/10.1523/JNEUROSCI.4515-08.2009
Cox, M. T.; Alavi, Z.; Dannenhauer, D.; Eyorokon, V.; Munoz-Avila, H.; and Perlis, D. 2016. MIDCA: A Metacognitive, Integrated Dual-Cycle Architecture for Self-Regulated Autonomy. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. Phoenix, AZ: AAAI Press. www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/download/12292/12151
Dannenhauer, D.; Cox, M. T.; and Munoz-Avila, H. 2018. Declarative Metacognitive Expectations for High-Level Cognition. Advances in Cognitive Systems 6: 231-250. www.cogsys.org/papers/ACSvol6/papers/paper-6-15.pdf
De Neys, W.; and Glumicic, T. 2008. Conflict Monitoring in Dual Process Theories of Thinking. Cognition 106: 1248-1299. doi.org/10.1016/j.cognition.2007.06.002
Dyczewski, E. A.; and Markman, K. D. 2012. General Attainability Beliefs Moderate the Motivational Effects of Counterfactual Thinking. Journal of Experimental Social Psychology 48(5): 1217-1220. doi.org/10.1016/j.jesp.2012.04.016
Efklides, A. 2006. Metacognition and Affect: What Can Metacognitive Experiences Tell Us About the Learning Process? Educational Research Review 1(1): 3-14. doi.org/10.1016/j.edurev.2005.11.001
Efklides, A.; Samara, A.; and Petropoulou, M. 1999. Feeling of Difficulty: An Aspect of Monitoring That Influences Control. European Journal of Psychology of Education 14(4): 461-476. www.jstor.org/stable/23420265
Epstein, S.; Pacini, R.; Denes-Raj, V.; and Heier, H. 1996. Individual Differences in Intuitive-Experiential and Analytical-Rational Thinking Styles. Journal of Personality and Social Psychology 71(2): 390-405. doi.org/10.1037/0022-3514.71.2.390
Epstude, K.; and Roese, N. J. 2008. The Functional Theory of Counterfactual Thinking. Personality and Social Psychology Review 12(2): 168-192.
Evans, J. S. B. 1977. Linguistic Factors in Reasoning. Quarterly Journal of Experimental Psychology 29(2): 297-306. www.tandfonline.com/doi/abs/10.1080/14640747708400605
Evans, J. S. B.; and Stanovich, K. E. 2013. Dual-Process Theories of Higher Cognition: Advancing the Debate. Perspectives on Psychological Science 8(3): 223-241. doi.org/10.1177/1745691612460685
Fischhoff, B. 2012. Judgment and Decision Making. London and New York: Earthscan.
Flavell, J. H. 1979. Metacognition and Cognitive Monitoring: A New Area of Cognitive-Developmental Inquiry. American Psychologist 34(10): 906-911. dx.doi.org/10.1037/0003-066X.34.10.906
Forbus, K. D.; and Hinrichs, T. R. 2006. Companion Cognitive Systems: A Step Toward Human-Level AI. AI Magazine 27: 83-95. doi.org/10.1609/aimag.v27i2.1882
Frederick, S. 2005. Cognitive Reflection and Decision Making. Journal of Economic Perspectives 19(4): 25-42. pubs.aeaweb.org/doi/pdfplus/10.1257/089533005775196732
Gascoine, L.; Higgins, S.; and Wall, K. 2017. The Assessment of Metacognition in Children Aged 4-16 Years: A Systematic Review. Review of Education 5(1): 3-57. doi.org/10.1002/rev3.3077
Geden, M.; Smith, A.; Campbell, J.; Amos-Binks, A.; Mott, B.; Feng, J.; and Lester, J. 2018. Towards Adaptive Support for Anticipatory Thinking. In Proceedings of the Technology, Mind, and Society Conference. Washington, DC: ACM. doi.org/10.1145/3183654.3183665
Geden, M.; Smith, A.; Campbell, J.; Spain, R.; Amos-Binks, A.; Mott, B.; ... and Lester, J. 2019. Construction and Validation of an Anticipatory Thinking Assessment. PsyArXiv. doi.org/10.31234/osf.io/9reby
Gigerenzer, G.; and Gaissmaier, W. 2011. Heuristic Decision Making. Annual Review of Psychology 62: 451-482. doi.org/10.1146/annurev-psych-120709-145346
Gilovich, T. 1983. Biased Evaluation and Persistence in Gambling. Journal of Personality and Social Psychology 44(6): 1110. pdfs.semanticscholar.org/f644/196083e55f0f824f849625f6e5592aa1d686.pdf
Glenberg, A. M.; Wilkinson, A. C.; and Epstein, W. 1982. The Illusion of Knowing: Failure in the Self-Assessment of Comprehension. Memory and Cognition 10: 597-602. link.springer.com/content/pdf/10.3758%2FBF03202442.pdf
Halpern, D. F. 1998. Teaching Critical Thinking for Transfer Across Domains: Disposition, Skills, Structure Training, and Metacognitive Monitoring. American Psychologist 53(4): 449. doi.org/10.1037/0003-066x.53.4.449
Halpern, D. F. 2010. Halpern Critical Thinking Assessment. Moedling, Austria: SCHUHFRIED (Vienna Test System). www.schuhfried.com/viennatestsystem10/tests-test-sets/all-tests-from-a-z/test/hcta-halpern-critical-thinking-assessment-1/
Halpern, D. F. 2014. Thought and Knowledge: An Introduction to Critical Thinking. New York: Psychology Press.
Hough, A. R.; and Gluck, K. Forthcoming. The Understanding Problem in Cognitive Science. Advances in Cognitive Systems 7.
Hur, T. 2001. The Role of Regulatory Focus in Activation of Counterfactual Thinking. Korean Journal of Social and Personality Psychology 15: 159-171.
Johnson, B.; Roberts, M.; Apker, T.; and Aha, D. W. 2016. Goal Reasoning with Information Measures. In Proceedings of the Fourth Conference on Advances in Cognitive Systems 4: 1-14.
Johnson-Laird, P. N. 2013. The Mental Models Perspective. In The Oxford Handbook of Cognitive Psychology, edited by D. Reisberg, 650-667. New York: Oxford University Press. doi.org/10.1093/oxfordhb/9780195376746.013.0041
Juvina, I.; Larue, O.; and Hough, A. 2017. Modeling Valuation and Core Affect in a Cognitive Architecture: The Impact of Valence and Arousal on Memory and Decision-Making. Cognitive Systems Research 48: 4-24. doi.org/10.1016/j.cogsys.2017.06.002
Juvina, I.; Larue, O.; Widmer, C.; Ganapathy, S.; Nadella, S.; and Minnery, B. Under review. Computer-Supported Collaborative Information Search for Geopolitical Forecasting.
Kahneman, D. 2011. Thinking, Fast and Slow. New York: Farrar, Straus, and Giroux.
Kahneman, D.; and Klein, G. 2009. Conditions for Intuitive Expertise: A Failure to Disagree. American Psychologist 64(6): 515-526. doi.org/10.1037/a0016755
Kahneman, D.; and Miller, D. T. 1986. Norm Theory: Comparing Reality to Its Alternatives. Psychological Review 93(2): 136. pdfs.semanticscholar.org/9809/8ee48700173e2f09aeff48c406ef943918b5.pdf
Kennerley, S. W.; Dahmubed, A. F.; Lara, A. H.; and Wallis, J. D. 2009. Neurons in the Frontal Lobe Encode the Value of Multiple Decision Variables. Journal of Cognitive Neuroscience 21(6): 1162-1178. www.mitpressjournals.org/doi/abs/10.1162/jocn.2009.21100
Kirk, J. R.; and Laird, J. 2014. Interactive Task Learning for Simple Games. Advances in Cognitive Systems 3: 13-30. www.cogsys.org/pdf/paper-9-3-33.pdf
Klein, G. 1998. Sources of Power: How People Make Decisions. Cambridge, MA: MIT Press.
Klein, G.; Moon, B.; and Hoffman, R. R. 2006a. Making Sense of Sensemaking 1: Alternative Perspectives. IEEE Intelligent Systems 21(4): 70-73. doi.org/10.1109/MIS.2006.75
Klein, G.; Moon, B.; and Hoffman, R. R. 2006b. Making Sense of Sensemaking 2: A Macrocognitive Model. IEEE Intelligent Systems 21(5): 88-92. doi.org/10.1109/MIS.2006.100
Klein, G.; Phillips, J. K.; Rall, E.; and Peluso, D. A. 2007. A Data/Frame Theory of Sensemaking. In Expertise Out of Context: Proceedings of the Sixth International Conference on Naturalistic Decision Making, edited by R. R. Hoffman, 118-160. New York: Psychology Press.
Klein, G.; Ross, K. G.; Moon, B. M.; Klein, D. E.; Hoffman, R. R.; and Hollnagel, E. 2003. Macrocognition. IEEE Intelligent Systems 18(3): 81-85. doi.org/10.1109/MIS.2003.1200735
Klein, G.; Snowden, D.; and Pin, C. L. 2007. Anticipatory Thinking. In Proceedings of the Eighth International NDM Conference, edited by K. Mosier and U. Fischer, 1-8. Pacific Grove, CA, June 2007.
Kurtz, C. F.; and Snowden, D. J. 2003. The New Dynamics of Strategy: Sense-Making in a Complex and Complicated World. IBM Systems Journal 42(3): 462-483. doi.org/10.1147/sj.423.0462
Larue, O.; Hough, A.; and Juvina, I. 2018. A Cognitive Model of Switching Between Reflective and Reactive Decision Making in the Wason Task. In Proceedings of the Sixteenth International Conference on Cognitive Modeling, edited by I. Juvina, J. Houpt, and C. Myers, 55-60. Madison, WI: University of Wisconsin.
Markman, K. D.; Gavanski, I.; Sherman, S. J.; and McMullen, M. N. 1993. The Mental Simulation of Better and Worse Possible Worlds. Journal of Experimental Social Psychology 29: 87-109. doi.org/10.1006/jesp.1993.1005
Markman, K. D.; McMullen, M. N.; and Elizaga, R. A. 2008. Counterfactual Thinking, Persistence, and Performance: A Test of the Reflection and Evaluation Model. Journal of Experimental Social Psychology 44(2): 421-428. doi.org/10.1016/j.jesp.2007.01.001
Mata, A.; Ferreira, M. B.; and Sherman, S. J. 2013. The Metacognitive Advantage of Deliberative Thinkers: A Dual-Process Perspective on Overconfidence. Journal of Personality and Social Psychology 105(3): 353. dx.doi.org/10.1037/a0033640
Mayer, R. E. 1998. Cognitive, Metacognitive, and Motivational Aspects of Problem Solving. Instructional Science 26: 49-63. link.springer.com/content/pdf/10.1023%2FA%3A1003088013286.pdf
McNamara, D. S.; Kintsch, E.; Songer, N. B.; and Kintsch, W. 1996. Are Good Texts Always Better? Interactions of Text Coherence, Background Knowledge, and Levels of Understanding in Learning from Text. Cognition and Instruction 14: 1-43. doi.org/10.1207/s1532690xci1401_1
Mellers, B.; Stone, E.; Murray, T.; Minster, A.; Rohrbaugh, N.; Bishop, M.; ... and Ungar, L. 2015. Identifying and Cultivating Superforecasters as a Method of Improving Probabilistic Predictions. Perspectives on Psychological Science 10(3): 267-281. doi.org/10.1177/1745691615577794
Muñoz-Avila, H.; Dannenhauer, D.; and Reifsnyder, N. 2019. Is Everything Going According to Plan? Expectations in Goal Reasoning Agents. In Proceedings of the AAAI Conference on Artificial Intelligence 33: 9823-9829. doi.org/10.1609/aaai.v33i01.33019823
Muñoz-Avila, H.; and Cox, M. T. 2008. Case-Based Plan Adaptation: An Analysis and Review. IEEE Intelligent Systems 23: 75-81. doi.org/10.1109/MIS.2008.59
Newell, A.; and Simon, H. 1972. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall.
Novick, L. R.; and Holyoak, K. J. 1991. Mathematical Problem Solving by Analogy. Journal of Experimental Psychology: Learning, Memory, and Cognition 17(3): 398-415. pdfs.semanticscholar.org/6bc0/a20b01f0ebfc243d00de3e55f6387f8a77b8.pdf
Payne, J. W.; Bettman, J. R.; and Johnson, E. J. 1988. Adaptive Strategy Selection in Decision Making. Journal of Experimental Psychology: Learning, Memory, and Cognition 14(3): 534-552. apps.dtic.mil/dtic/tr/fulltext/u2/a170858.pdf
Pennycook, G. 2017. A Perspective on the Theoretical Foundation of Dual Process Models. In Dual Process Theory 2.0, edited by W. De Neys, 6-26. New York: Routledge.
Pennycook, G.; Fugelsang, J. A.; and Koehler, D. J. 2015a. What Makes Us Think? A Three-Stage Dual-Process Model of Analytic Engagement. Cognitive Psychology 80: 34-72. doi.org/10.1016/j.cogpsych.2015.05.001
Pennycook, G.; Fugelsang, J. A.; and Koehler, D. J. 2015b. Everyday Consequences of Analytic Thinking. Current Directions in Psychological Science 24(6): 425-432. doi.org/10.1177/0963721415604610
Perkins, D. N. 1998. What Is Understanding? In Teaching for Understanding: Linking Research with Practice, edited by M. S. Wiske, 39-57. San Francisco, CA: Jossey-Bass.
Perkins, D. N.; and Simmons, R. 1988. Patterns of Misunderstanding: An Integrative Model for Science, Math, and Programming. Review of Educational Research 58: 303-326. doi.org/10.3102/00346543058003303
Pirolli, P.; and Russell, D. M. 2011. Introduction to This Special Issue on Sensemaking. Human-Computer Interaction 26: 1-8. www.tandfonline.com/doi/full/10.1080/07370024.2011.556557
Pirolli, P.; and Card, S. K. 1999. Information Foraging. Psychological Review 106: 643-675.
Pirolli, P.; and Card, S. K. 2005. The Sensemaking Process and Leverage Points for Analyst Technology. Paper presented at the International Conference on Intelligence Analysis, McLean, VA. pdfs.semanticscholar.org/f6b5/4379043d0ee28bea45555f481af1a693c16c.pdf
Pozanco, A.; Fernández, S.; and Borrajo, D. 2018. Learning-Driven Goal Generation. AI Communications 31: 137-150. doi.org/10.3233/AIC-180754
Roberts, M.; Borrajo, D.; Cox, M. T.; and Yorke-Smith, N. 2018. Special Issue on Goal Reasoning. AI Communications 31: 115-116.
Roese, N. J.; and Epstude, K. 2017. The Functional Theory of Counterfactual Thinking: New Evidence, New Challenges, New Insights. In Advances in Experimental Social Psychology: Vol. 56, edited by J. M. Olson, 1-79. Cambridge, MA: Academic Press. doi.org/10.1016/bs.aesp.2017.02.001
Roese, N. J.; and Olson, J. M. 1997. Counterfactual Thinking: The Intersection of Affect and Function. In Advances in Experimental Social Psychology: Vol. 29, edited by M. P. Zanna, 1-59. San Diego, CA: Academic Press. doi.org/10.1016/S0065-2601(08)60015-5
Roese, N. J.; Hur, T.; and Pennington, G. L. 1999. Counterfactual Thinking and Regulatory Focus: Implications for Action Versus Inaction and Sufficiency Versus Necessity. Journal of Personality and Social Psychology 77(6): 1109. doi.org/10.1037//0022-3514.77.6.1109
Russell, D. M.; Stefik, M. J.; Pirolli, P.; and Card, S. K. 1993. The Cost Structure of Sensemaking. Paper presented at the INTERCHI 1993 Conference on Human Factors in Computing Systems, Amsterdam, the Netherlands. doi.org/10.1145/169059.169209
Sanna, L. J.; and Turley, K. J. 1996. Antecedents to Spontaneous Counterfactual Thinking: Effects of Expectancy Violation and Outcome Valence. Personality and Social Psychology Bulletin 22: 906-919. doi.org/10.1177%2F0146167296229005
Sanna, L. J.; and Turley-Ames, K. J. 2000. Counterfactual Intensity. European Journal of Social Psychology 30: 273-296. doi.org/10.1002/(SICI)1099-0992(200003/04)30:2%3C273::AID-EJSP993%3E3.0.CO;2-Y
Schank, R. C.; and Abelson, R. P. 1977. Scripts, Plans, Goals, and Understanding: An Inquiry into Human Knowledge Structures. Hillsdale, NJ: Lawrence Erlbaum Associates.
Stanovich, K. E. 2009. Distinguishing the Reflective, Algorithmic, and Autonomous Minds: Is It Time for a Tri-Process Theory? In In Two Minds: Dual Processes and Beyond, edited by J. S. B. T. Evans and K. Frankish, 55-88. New York: Oxford University Press.
Stanovich, K. E. 2018. Miserliness in Human Cognition: The Interaction of Detection, Override and Mindware. Thinking and Reasoning 24(4): 423-444. doi.org/10.1080/13546783.2018.1459314
Swan, A. B.; Calvillo, D. P.; and Revlin, R. 2018. To Detect or Not to Detect: A Replication and Extension of the Three-Stage Model. Acta Psychologica 187: 54-65. doi.org/10.1016/j.actpsy.2018.05.003
Taatgen, N. A.; van Rijn, H.; and Anderson, J. 2007. An Integrated Theory of Prospective Time Interval Estimation: The Role of Cognition, Attention, and Learning. Psychological Review 114(3): 577-598. doi.org/10.1037/0033-295X.114.3.577
Tetlock, P. E.; and Gardner, D. 2015. Superforecasting: The Art and Science of Prediction. New York, NY: Crown.
Thompson, V. A.; and Johnson, S. C. 2014. Conflict, Metacognition, and Analytic Thinking. Thinking and Reasoning 20(2): 215-244. doi.org/10.1080/13546783.2013.869763
Thompson, V. A.; Evans, J. S. B.; and Campbell, J. I. 2013. Matching Bias on the Selection Task: It's Fast and Feels Good. Thinking and Reasoning 19: 431-452. doi.org/10.1080/13546783.2013.820220
Thompson, V. A.; Prowse Turner, J. A.; and Pennycook, G. 2011. Intuition, Reason, and Metacognition. Cognitive Psychology 63: 107-140. doi.org/10.1016/j.cogpsych.2011.06.001
Toplak, M. E.; West, R. F.; and Stanovich, K. E. 2011. The Cognitive Reflection Test as a Predictor of Performance on Heuristics-and-Biases Tasks. Memory and Cognition 39(7): 1275-1289. doi.org/10.3758/s13421-011-0104-1
Vattam, S.; Klenk, M.; Molineaux, M.; and Aha, D. W. 2013. Breadth of Approaches to Goal Reasoning: A Research Survey. In Goal Reasoning: Papers from the ACS Workshop, edited by D. W. Aha, M. T. Cox, and H. Muñoz-Avila. Technical Report CS-TR-5029. College Park, MD: University of Maryland, Department of Computer Science. apps.dtic.mil/dtic/tr/fulltext/u2/a603400.pdf
Warwick, W.; and Hutton, R. J. B. 2007. Computational and Theoretical Perspectives on Recognition-Primed Decision Making. In Expertise Out of Context: Proceedings of the Sixth International Conference on Naturalistic Decision Making, edited by R. R. Hoffman, 429-452. New York: Psychology Press.
Wason, P. C. 1966. Reasoning. In New Horizons in Psychology, edited by B. Foss, 135-151. London, UK: Penguin.
Wason, P. C. 1968. Reasoning About a Rule. Quarterly Journal of Experimental Psychology 20: 273-281. doi.org/10.1080%2F14640746808400161
Wason, P. C. 1969. Regression in Reasoning? British Journal of Psychology 60(4): 471-480. doi.org/10.1111/j.2044-8295.1969.tb01221.x
Weick, K. E.; and Sutcliffe, K. M. 2001. Managing the Unexpected: Assuring High Performance in an Age of Complexity. San Francisco: Jossey-Bass.
West, R. F.; Toplak, M. E.; and Stanovich, K. E. 2008. Heuristics and Biases as Measures of Critical Thinking: Associations with Cognitive Ability and Thinking Dispositions. Journal of Educational Psychology 100(4): 930-941. doi.org/10.1037/a0012842
Westbrook, A.; Kester, D.; and Braver, T. S. 2013. What Is the Subjective Cost of Cognitive Effort? Load, Trait, and Aging Effects Revealed by Economic Preference. PLoS ONE 8(7): e68210. doi.org/10.1371/journal.pone.0068210
Willingham, D. T. 2007. Critical Thinking: Why Is It So Hard to Teach? American Educator 31(3): 8-19. doi.org/10.3200/AEPR.109.4.21-32
Woodward, J. 2003. Making Things Happen: A Theory of Causal Explanation. New York, NY: Oxford University Press.