How Might We Design Systems to
Build Human Intelligence?
This case study presents the design of “an intelligence-
building app for kids” as a worked example, with the
goal of sharing our accumulated design knowledge. Our
design goals are motivated by the fact that researchers
in artificial intelligence assume that "intelligence" is a
property that can be improved while researchers in
human intelligence do not have this confidence. Why?
Contemporary models of human intelligence largely
consist of mental characteristics that cannot be easily
improved (e.g., working memory & processing speed).
Thus, we wish to share our rationale for a digital-social
system designed to build human intelligence.
Using a worked example structure, we describe the
problem of designing systems for intelligence-building,
review the intelligence literature and its discontents,
describe our key design goals and then provide the
rationale for our specific design decisions. A primary
contribution of this case study is design-based theory
generation; specifically, we generate an alternative
model of intelligence that is based on improvable
mental characteristics.

Author Keywords
Learning; intelligence; apps; game design
James Derek Lomas
The Design Lab
UC San Diego
9500 Gilman Drive
La Jolla, CA 92093 USA
ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g.,
HCI): Miscellaneous.
Introduction
The goal of this paper is to articulate new generalizable
knowledge about the design of systems for building
human intelligence. Our purpose is theory building, not
theory testing: thus, we do not provide evidence
validating whether our designs worked. Instead, we
aim to describe the importance of our problem, the
designs we used to address the problem and the
reasoned arguments to explain our design decisions.
We frame this case study as a “worked example”.
Worked examples in instructional design are often
conceived as showing the solutions to problems, but it
is better to conceive of a “worked example” as a way of
sharing expert thinking during a decision-making or
problem-solving process. For the field of design, a
worked example “stresses examples (cases, specifics)
that are ‘worked’ or explicated in an overt way to make
thinking public” (p. 52). A successful worked
example makes the design theory, process or artifact
visible through documentation and annotated
reflection. In the format of a worked example, we will
present the problem, present a model of our decision-
making used to address the problem (with designs),
and justify our decisions using theoretical rationale. The
goal of this worked example is to support generalizable
knowledge that is useful for future problem solving.
Problem: Improving Human Intelligence
Researchers in artificial intelligence (AI) view
“intelligence” as a property that can be improved in
computational systems . Researchers in human
intelligence, however, tend to view intelligence as IQ,
which is claimed to be largely inherited through genes
and unchangeable over one’s lifetime. Only a
handful of environmental interventions consistently
improve IQ—but typically, only for children born into
disadvantaged circumstances.
The idea that intelligence is genetically determined and
unchangeable has been described as an affront to
modern value systems [21,30]. Moreover, a substantial
body of research has shown that people who believe
their intelligence is fixed have a reduced capacity to
learn , due to a reluctance to persist through
challenge. Furthermore, scientific advances in artificial
intelligence threaten the long-term economic value of
human labor. Thus, we view the design of systems to
improve human intelligence as a societal “grand
challenge” of critical importance.
Our primary problem is the lack of system designs that
can provably improve human intelligence [41,31]. This
problem has negative outcomes in the form of social
inequity (if there is no way to help individuals with low
intelligence), loss of human well-being (if reduced
success potential reduces well-being), and the
existential dangers of artificial intelligence (if billions of
humans can’t maintain economic value in the face of
automation). Our secondary problem comes from the
threat of a widespread perception that intelligence is
fixed, which itself can cause reduced learning potential
and social stratification. We do not aim to
solve either of these problems in this paper; we do,
however, hope to contribute a better understanding of
the problems and present a rationale for design
examples that can inform other design attempts.
What is Intelligence?
This section communicates background information that
supports the viability of “intelligence-building systems.”
We point out, as others have, that IQ is not the same
as intelligence. Distinguishing general intelligence (i.e.,
IQ) from integrated intelligence (i.e., intelligence-as-a-
whole) opens up meaningful approaches to improving
intelligence even as IQ remains stable.
What is IQ?
IQ is a measure of what is known as general
intelligence, which reflects the tendency for different
tests of intelligence to be predicted by a single
underlying factor, or Spearman’s g. IQ is important
because it is a strong predictor of school success (~0.5
correlation with GPA [20,29]) and of future income
(~0.3 correlation). Frustratingly for
many, IQ appears to be strongly heritable through
genes (heritability estimates between .4-.8 [21,30]),
only weakly influenced by shared environment, and
relatively stable over a person’s lifetime [21,29]. It is
also hard to train [25,31,41]. Only a small set of
interventions  have been consistently shown to
increase IQ, primarily for disadvantaged children and
fetuses (interventions include DHA nutritional
supplements, preschool, additional years of higher
education, and certain parenting behaviors). Direct
training approaches for neurotypical subjects have been
largely unsuccessful; however, there are some possibilities
that have yet to be broadly explored [1,26].
IQ tests often involve a very wide range of different
items; in fact, the success of the construct of general
intelligence comes from the fact that people with a high
IQ tend to do well on all the different types of items.
Many have searched for a psychometric basis for
multiple factors in intelligence, as would be expected
under “multiple intelligences” , but the general factor g
remains dominant [6,29]. The components of
intelligence tests that load most strongly on g are the
so-called “culture-fair” tests of fluid intelligence, such
as Raven’s Matrices (Figure 1). Fluid intelligence is hard
to train, it is believed, because it depends on
working memory, which does not appear to be
improvable through practice.
All in all, pretty depressing constraints for a designer.
Yet, there are important caveats. IQ is especially
predictive for younger and less experienced people’s
performance. For instance, IQ tests tend to strongly
predict initial job performance, but over time, the
predictive utility is reduced; similarly, the tests are
more predictive for younger vs. older workers .
Additionally, IQ has been found to be a very weak
predictor of the motivational aspects of job
performance, such as “cooperating with colleagues,
showing initiative and leadership.” Other studies of
motivational attributes have found that “self-discipline”
explained more than twice as much variance in academic
performance as IQ. So, IQ isn’t everything.
What is Intelligence?
If we were trying to improve IQ, we might have given
up in the face of the evidence. Yet, there is an under-
acknowledged conceptual difference between general
intelligence (g) and intelligence-as-a-whole (for which
we suggest the term integrated intelligence). They are
often conflated, even though they refer to different
constructs. General intelligence is a unitary construct
that explains the correlation of intelligence tests. IQ is
an important component of intelligence, but they aren’t
the same thing. So, what is intelligence, then?
Figure 1: Example Raven’s Matrix
Intelligence has been defined in many dozens of ways.
We reviewed many definitions of intelligence, but
sought one that would remain valid across both
humans and non-humans (animals and other life,
computers, social groups and socio-technical systems).
Intelligence is clearly key to evolutionary fitness, and
diverse researchers view intelligence as a capacity to
adapt.
The scientific consensus on intelligence (to the extent
that scientists agree on anything) is that it is a person’s
capacity to adapt to one’s environment . For
instance, Alfred Binet (who developed the original IQ
test), defined intelligence as “the faculty of adapting
one’s self to circumstances” . Scientists developing
artificial intelligence use similar definitions [22,23].
But that can’t be all: clay is highly adaptive in that it
can mould to fit any environment. So, an entity’s
adaptation must support some goal or purpose. Thus,
the most satisfying definitions of intelligence were
those that describe intelligence as the ability to support
success in uncertain environments [41,22]. Even more
simply, we can say that intelligence consists of the
collection of mental abilities that support one’s success
potential.1
“Success” is highly relative—it depends on an
individual’s idiosyncratic and ever-changing goals. Yet,
psychologists tend to assume that all people share the
general motivations to achieve well-being . So,
1 One’s “success potential” is also strongly influenced by one’s
material and social resources; our definition of individual
intelligence is focused on the mental qualities, thus we include
the mental qualities that can take advantage of these
resources, but not the resources themselves.
increasing intelligence should increase the probability of
achieving personal success and well-being.
This definition could be operationalized into the
ultimate intelligence test: did one achieve and maintain
success and well-being in their life? However, from a
psychometric perspective, this is not a very good test
because it violates a key assumption of traditional
psychometrics: measurable “test items” should have a
fixed relationship to the theoretical construct they seek
to measure. For instance, doing well on arithmetic or
calculus items will always be supportive evidence for
one’s general math ability. On the other hand,
completing a PhD is not always supportive evidence for
one’s intelligence, as it has a variable relationship to
one’s success and happiness. It might be hard and
seem smart, but it might not actually be the smartest
thing to do. Contrast this with getting better at
arithmetic or calculus—one definitely improved one’s
general math ability. But again, one who gets a PhD
will not necessarily be better positioned for success and
happiness—it just depends on their situation. This can
be acceptable, though, if we just treat markers of
mental abilities (like a degree) as a probabilistic
contributor to one’s intelligence: so, while completing
high school has a high probability of contributing to
one’s success potential, completing a PhD has a less
certain one.
Moving towards a system design
From the above argument, it is reasonable that a
system for improving human intelligence should focus
on developing the mental abilities that probabilistically
support one’s success potential. But, while some
abilities may be highly predictive of success, they may
not be improvable (e.g., “working memory”). Thus,
we propose a model of improvable intelligence.
The basis of validity in this model of intelligence
building is “usability”—that is, the degree to which
individuals, parents or teachers can understand and
use the model to support cognitive development. An
improvable model of intelligence is still valid to the
extent that improvements actually lead to future
success. Long before we can measure a person’s
ultimate success, however, we can assess the
usability of the model. And, even before assessing
usability, we can design a model of improvable
intelligence with usability in mind, which is what we
describe next.
The 4Cs: A Model of Improvable Intelligence
To develop an improvable model of intelligence, we
first reviewed a vast array of psychological literature.
We collected individual traits and attributions within
this literature in a manner resembling Sternberg’s
methods for studying lay-conceptions of intelligence.
We specifically sought “intelligence-building skills”
according to the following criteria: they should be
abilities or attitudes that, when improved/acquired,
would 1. enhance one’s potential for success, 2.
accelerate one’s future learning or 3. improve one’s
performance in a broad manner.
After using these criteria to identify about 30 different
“intelligence-building skills”, we then explored
different modes for categorization. Initially, we sought
to incorporate or extend existing models for “multiple
intelligence” [13,41], but ultimately desired to create
our own model to meet our unique needs for usability
by parents, teachers and individuals. We wanted a
model of intelligence that was memorable (i.e., short
and comprehensible), comprehensive, largely non-
overlapping, improvable, and supported by scientific
research. Ultimately, we came up with what we called
“the 4Cs of improvable intelligence”: Curiosity,
Creativity, Character and Cognitive Skills.
Curiosity: High levels of curiosity are a defining
feature of intelligent people [40,37,27]. When people
are curious about a topic, their learning is accelerated
due to increased motivation to learn. Therefore,
we adopt the hypothesis that cultivating curiosity in a
range of topics will accelerate future learning in those
topics.
Creativity: The capacity for creative problem solving is
another defining feature of intelligent people.
There are a number of different techniques that can
help people improve their capacity for creative problem
solving , including methods for need finding
(problem identification), brainstorming without
judgment and rapidly iterating within a solution space.
While many of these methods should broadly improve
one’s performance, an explicit goal of these techniques
is to increase a person’s confidence that any problem is
solvable. This increase in self-efficacy alone would
meet our criteria for broadly improving performance.
Character: Nobel Prize winning economist James
Heckman has elegantly shown the importance of
character skills in life success and their capacity for
improvement. “Character” represents a diverse
range of different abilities (e.g., social skills and
emotional control) as well as different motivational
attitudes (e.g., commitment to honesty and kindness).

Table 1: Examples of videos (e.g., “Size of universe”)
found in the different curiosity categories.
Cognitive Skills: There are a number of improvable
cognitive skills that can accelerate future learning and
performance. For instance, fluent reading skills , a
strong number sense  and reasoning skills
[28,26,32] are improvable and broadly improve
future performance. It is plausible that these skills can
be tested and improved through digital experiences.
Design Goals and Examples
We had three key design goals for our
“intelligence-building program for kids.” First, we
needed to consistently deliver experiences that could
support measurable improvements in our learning
goals. Second, our platform needed to engage users
(kids and parents) and keep them coming back.
Third, we needed to measure our key theoretical
outcomes to support data-driven continuous
improvement.
After completing designs that satisfied these needs, our
user testing revealed several key shortcomings: our
virtual pet approach appeared to reduce intrinsic
motivation for the content, our UI needed to directly
support goal-directed deliberate practice, and we
needed a design that wasn’t “kiddish” for older kids.
The worked examples of our designs are presented in
the following section. To familiarize readers with the
basic idea of our program, we first present our user
experience.
Our first version, illustrated in Figures 2, 3 and 4, was
designed for kids 3-8. After the parent registered, kids
selected pictures of topics that interested them during
the personalization section. Then, they were given a
short tutorial, where they were shown how to feed their
virtual pet and then spin the spinner. The spinner
would land on a particular activity type (e.g., video,
learning game, trivia, etc.), which would give the player
a choice of 1-3 different activities. This was the main
mechanism for delivering “intelligence-building” digital
media to players.
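The spin-then-choose flow described above can be sketched in a few lines. This is a minimal illustration, not the deployed code: the activity types and the 1–3 choice range come from the description, while the catalog contents, function names and uniform randomness are our assumptions.

```python
import random

# Activity types the spinner could land on (from the description above).
ACTIVITY_TYPES = ["video", "learning_game", "trivia", "factbook"]

# Hypothetical catalog of intelligence-building content per activity type.
CATALOG = {
    "video": ["volcano video", "deep sea video", "space video"],
    "learning_game": ["number sense game", "pattern game"],
    "trivia": ["animal trivia", "geography trivia"],
    "factbook": ["dinosaur factbook"],
}

def spin(catalog, rng=random):
    """One spin: land on an activity type, then offer the player
    a choice of 1-3 activities of that type."""
    activity_type = rng.choice(ACTIVITY_TYPES)
    options = catalog[activity_type]
    n_choices = min(len(options), rng.randint(1, 3))
    return activity_type, rng.sample(options, n_choices)

activity_type, choices = spin(CATALOG)
print(activity_type, choices)
```

A uniform spinner is the simplest reading; in practice the weights could be tuned per child, e.g. toward topics from the interest inventory.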
Our model of change was based on the hypothesis that
experiences, digital and non-digital, can produce
cognitive and motivational changes that meet our
criteria for intelligence building. Therefore, following
a backward design approach , we first sought to
identify a broad set of assessments for our goals and
then identify experiences that could reasonably “move
the needle” on these assessments.
To measure and improve curiosity, we used a visual
interest inventory during the “personalization” section
to measure the topics that a child wanted to engage
with. Our instructional design goal, then, was to provide
experiences that would 1. broaden a child’s set of
interests in this inventory and 2. lengthen the duration
of voluntary time that children spent engaging with
those interests. This would show that we increased the
range of a child’s intrinsic motivation  to engage in
diverse subjects. To accomplish this, we collected a
broad range of interesting and unusual videos from
Youtube (examples in Table 1).
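The two curiosity outcomes above (a broader set of interests, and longer voluntary engagement with them) could be operationalized as simple pre/post session metrics. The data shape, field names and 30-second threshold below are illustrative assumptions, not the app's actual logging schema.

```python
# Hypothetical session logs: topic -> voluntary viewing durations (seconds).
pre_sessions = {"dinosaurs": [120, 90], "space": [30]}
post_sessions = {"dinosaurs": [150], "space": [200, 60], "oceans": [45]}

def interest_breadth(sessions, min_seconds=30):
    """Count distinct topics the child voluntarily engaged with,
    ignoring topics below a minimal total duration."""
    return sum(1 for durations in sessions.values()
               if sum(durations) >= min_seconds)

def mean_engagement(sessions):
    """Average voluntary time (seconds) spent per topic."""
    totals = [sum(durations) for durations in sessions.values()]
    return sum(totals) / len(totals)

print(interest_breadth(pre_sessions), interest_breadth(post_sessions))  # 2 3
print(mean_engagement(pre_sessions), mean_engagement(post_sessions))
```

An increase on both metrics between the pre and post inventories would correspond to the two instructional goals stated above.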
Figure 2: Virtual pet and spinner,
which gave access to games, videos,
trivia, facts and star coins.
Figure 3: Completing spinner content
advances the star level (top right) to
unlock new resources, like games
Figure 4: After 5 spins, the pet falls
asleep. This limits a player to content
that requires more reflective thought.
To measure and improve creative problem solving, we
used variations on the Torrance Tests of Creative
Thinking . We also used a performance assessment
based on a person’s understanding of design thinking
methods. Our instructional goal, then, became to
increase a child’s creative fluency, flexibility and
originality (as we could measure them), and to provide
experiences that would enable a child to participate in a
structured creative problem solving session. To support
these goals, we provided open-ended creative design
tasks in a novel image manipulation interface for
storyboarding. We also developed short “parent
recipes” for guiding children through structured
problem solving and IDEO-style design thinking.
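Torrance-style divergent-thinking responses are conventionally scored on fluency (number of ideas), flexibility (number of distinct idea categories) and originality (statistical rarity). A minimal scoring sketch follows; the prompt, category tags, norm table and 5% rarity cutoff are all hypothetical.

```python
# Hypothetical responses to a divergent-thinking prompt
# ("unusual uses for a brick"), each tagged with an idea category.
responses = [
    ("doorstop", "weight"),
    ("paperweight", "weight"),
    ("garden border", "building"),
    ("grind into pigment", "material"),
]

# Assumed norm table: fraction of a reference sample giving each response.
norms = {"doorstop": 0.40, "paperweight": 0.30,
         "garden border": 0.10, "grind into pigment": 0.01}

fluency = len(responses)                                   # how many ideas
flexibility = len({category for _, category in responses})  # distinct categories
# Originality: credit ideas given by fewer than 5% of the reference sample.
originality = sum(1 for idea, _ in responses if norms.get(idea, 0.0) < 0.05)

print(fluency, flexibility, originality)  # 4 3 1
```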
To measure and improve character skills, we adopted
parent assessments from the Strengths and
Difficulties Questionnaire  and assessments from
Heckman’s studies of character skills . We looked
at developing these skills through recommendations
from the body of “effective parenting” literature
[33,10,18] and cognitive skills associated with
character, such as emotional discrimination. We sought
to support these recommendations and skills through a
mix of digital media (e.g., videos that highlighted the
importance of character skills in successful people) and
emails to parents (which contained either structured
non-digital activities or direct parenting tips).
To measure and improve cognitive skills, we developed
a game-based suite of cognitive skill assessments
commonly used in the neuropsychological literature
[e.g., 9]. Some of these games had the potential for
training the cognitive skills directly (such as the
number sense games), whereas others were likely only
suitable for assessment (such as the working memory
games). Nevertheless, these games could provide
improvements in self-efficacy, and thereby improve
performance broadly. Altogether, the games could also
serve as a longitudinal mechanism for measuring
changes in cognitive skills—changes that we could, in
the future, attribute to interventions from our program.
Engagement and Retention
To support player retention, we used a virtual pet game
mechanic. Players needed to take care of their pet
(feeding, bathing, brushing teeth, shaving, etc.). They
also had the ability to upgrade it, through new clothes,
tools, etc. By completing intelligence-building content
(games, videos, trivia, or “factbooks,” i.e., short
presentations), the player would get “star power.” When
their star power meter filled up, they advanced to the
next “star level.” Each level brought access to some
new resource—e.g., new food, clothes and tools for the
pet—and new books, music, and games for the player.
Players were able to access intelligence-building digital
media through the “spinner”, the randomness of which was
intended to provide variable rewards [12]. To
prevent the player from getting bored, we had the
virtual pet fall asleep after 5 spins of the spinner. Our
goal was to have the pet get tired before the player, so
that the player was forced to disengage before getting
bored. While the pet was sleeping, the player could also
access more “passive” activities such as books, music
and design tools.
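The fatigue gating above can be sketched as a small state machine. The 5-spin limit and the sleep-time shift to passive content come from the text; the class, method names and the list of passive activities shown are our illustrative choices.

```python
class VirtualPet:
    """Sketch of the fatigue mechanic: the pet tires before the player does."""

    SPINS_BEFORE_SLEEP = 5  # limit stated in the text

    def __init__(self):
        self.spins = 0

    @property
    def asleep(self):
        return self.spins >= self.SPINS_BEFORE_SLEEP

    def available_activities(self):
        if self.asleep:
            # While the pet sleeps, only passive/reflective content is offered.
            return ["books", "music", "design tools"]
        return ["spinner"]

    def spin(self):
        if self.asleep:
            raise RuntimeError("The pet is asleep; no more spins this session.")
        self.spins += 1

pet = VirtualPet()
for _ in range(5):
    pet.spin()
print(pet.asleep, pet.available_activities())
```

Forcing disengagement through the pet, rather than the player, keeps the session ending on the system's terms before boredom sets in.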
Table 2: Each of the 14 minigames provided assessment
or instruction in the above topics.

It was also essential that we engaged the player’s
parent, as we had specific learning goals that were best
produced through non-digital means (e.g., goals for
emotional control or grit). We sent out emails with
these “activity recipes” for parents and “intelligence-
building tips.” But, we felt that the emails would be
more likely to be meaningful to parents if they helped
connect them to their child’s behavior in the app.
Therefore, we provided kids with a share button that
would share videos or games with their parent’s email
address. This kid-shared content felt more meaningful
than simple reports sent by the system.
Reflection on Initial Design
After conducting a series of user tests with parents and
children, we found two major issues. First, our efforts
to deliver engagement to the child were conflicting with
parents’ perception of our product integrity (Figure 4).
That is, they felt that it didn’t look like it was improving
intelligence—since it was all about this virtual pet.
Second, older children didn’t really like the virtual pet.
While the virtual pet could be redesigned, we were
disturbed by the idea that the pet was affecting
parents’ perception of our product integrity. We had
talked about integrity a lot – we viewed it as the core
challenge in our attempt to build an intelligence-
building app. We really had no interest in developing a
“smoke screen” that only appeared to work. The
frustrating thing was that the different parts of our
product really did support intelligence-building, even
the virtual pet (which aimed to provide the retention
necessary for the content to have an effect). We
investigated various models of product integrity  and
developed them further to explain this occurrence. As
we see it now, according to this model (Figure 4), even
though we had aligned our product features to the
user’s purpose and processes, users did not perceive
that alignment.
To address the concern that the product didn’t “look”
like it was improving intelligence, we developed a new
screen (Figure 5) that showed how the player had
“leveled up” within each of the different intelligence
areas. Within each area, parents could also see the
activities in that area and their child’s level of
performance in those activities. These “levels” are not
the same as “stars”, as commonly used in games to
indicate performance levels (like the 3 stars used in
Angry Birds levels). All players needed to move
incrementally from level 0 to level 5, no matter how
strong their performance – this allowed us to place a
greater emphasis on a person’s deliberate practice in
these areas than on their initial skill levels. This also
avoided the need to have strong psychometric
reliability and validity prior to user testing.
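The incremental leveling rule can be made concrete with a short sketch: levels advance one step at a time from 0 to 5 based purely on completed activities, never on measured performance. The three-activities-per-level threshold is an illustrative assumption; the text specifies only the incremental 0-to-5 progression.

```python
MAX_LEVEL = 5

def level_after_activity(current_level, activities_at_level, per_level=3):
    """Advance one level once enough activities are completed at the
    current level, regardless of how well the player performed.
    `per_level` is an assumed practice threshold."""
    if current_level < MAX_LEVEL and activities_at_level + 1 >= per_level:
        return current_level + 1, 0  # level up, reset the activity count
    return current_level, activities_at_level + 1

level, done = 0, 0
for _ in range(9):  # nine completed activities
    level, done = level_after_activity(level, done)
print(level)  # 3
```

Because progression depends only on practice volume, strong and weak performers follow the same path, which is what lets the design sidestep psychometric reliability concerns during user testing.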
Figure 5: Screen showing the “leveling up” of the
different intelligence areas.
Figure 4: A Model of Product Integrity.
As this screen was so warmly welcomed by users, we
began to reconceive our product with a greater
emphasis on deliberate self-improvement. We decided
to create a stripped down version of the original
without the virtual pet, which was designed with an
adult aesthetic sensibility.
While this version lacked the retention-supporting
virtual pet, it instead treated the content itself as
intrinsically motivating. After all, by rewarding
children with instrumental items following their
performance in the first version, we might have been
reducing their intrinsic motivation via the
overjustification effect .
So, in our version 2 (Figure 6), we simply provided
intelligence building media on the left side of the screen
and the intelligence-building areas on the right side of
the screen. This version is much clearer about its
goals than the previous version.
Figure 6: Version 2 of our Intelligence-Building App.
Limitations
A key limitation of this paper is our focus on our actual
system designs and our design rationale, to the
exclusion of discussions of the design process.
Furthermore, we have excluded qualitative and
quantitative evidence to support these claims.
Discussion
The purpose of this paper is to use a worked example
format to communicate the design of a system to build
human intelligence. We hope that our theoretical
justifications of our design decisions can support theory
generation. However, we do not aim to test theories at
this point, and so future empirical research will be
required to investigate the actual efficacy of these designs.
What have we accomplished? We believe the most
important contribution is a model of improvable
intelligence. While we expect that the components of
our model may change, our contribution is a
theoretically justifiable model of the elements of
intelligence that are susceptible to development.
We use the term "Integrated Intelligence" to distinguish
general intelligence (g) from the broader set of
information processing characteristics that tend to
support successful environmental adaptation. In this
usage, "integrated intelligence" (intelligence-as-a-
whole) refers to all aspects of a person's adaptive
capacity: the cognitive, the non-cognitive, the social,
the technological etc. Thus, when approaching the
challenge of improving human intelligence from a
design perspective, we avoid g (at least until there is a
mechanistic theory for its change). However, we have
made an argument that there are many aspects of
integrated intelligence that can be improved. The
question remaining, of course, is whether our
interventions (and any in the future) can actually lead
to improved adaptivity—as measured by improved
success and happiness.
We suggest that height and strength can serve as an
analogy for IQ and integrated intelligence: while
height might be largely fixed by genetics, and
strength correlates with height, everyone has the
capacity to become stronger. Similarly, while many
aspects of intelligence are not controllable, we argue
that everyone has the capacity to become more
intelligent. To advance this argument, we have offered
a system design.
When scientific theory is used to drive system design, the
design process can usefully articulate the shortcomings of the
scientific theory. For instance, we needed an improvable model
of intelligence – and our design goals led us to one. In
general, we would like to see more contributions of practitioner
theory so that theorists can understand the shortcomings of
their theories. Through this manner of theory generation and
perturbation, we believe that design worked examples can
motivate the evolution of generalizable theory.
Education vs Intelligence
One wonders how our educational system would
operate if it were oriented towards a comprehensive
model of human intelligence, or in any way focused on
teaching the qualities that lead to success and well-
being. For one thing, instructional goals could be
subject to data-driven continuous improvement; today,
assessments and instruction can be continuously
improved, but there is no accepted model for deciding
what goals should be taught.
In contrast, by focusing on intelligence (the mental
characteristics that support one’s success potential),
there is a way to approach educational goals from first
principles, such that the goals themselves can be
continuously improved. We hope that our preliminary
work can suggest the broader feasibility of teaching for
intelligence. Building upon similar suggestions from
researchers in intelligence [1,41], we view our primary
contribution as framing the development of human
intelligence as a grand challenge for design and HCI.
References
1. Adey, P., Csapó, B., Demetriou, A., Hautamäki, J.,
& Shayer, M. (2007). Can we be intelligent about
intelligence? Why education needs the concept of
plastic general ability. Educational Research
Review, 2(2), 75–97.
2. Bandura, A. (1993). Perceived Self-Efficacy in
Cognitive Development and Functioning.
3. Blackwell, L. S., Trzesniewski, K. H., & Dweck, C.
S. (2007). Implicit theories of intelligence predict
achievement across an adolescent transition: A
longitudinal study and an intervention. Child
development, 78(1), 246-263.
4. Brown, T. (2009). Change by design. Harper
Business.
5. Carpenter, P. A., Just, M. A., & Shell, P. (1990).
What one intelligence test measures: a theoretical
account of the processing in the Raven Progressive
Matrices. Psychological Review, 97(3), 404–431.
6. Cianciolo, A. T., & Sternberg, R. J. (2008).
Intelligence: A Brief History.
7. Clark, K. B., & Fujimoto, T. (1989). The power of
product integrity. Harvard Business Review, 68(6).
8. Gee, J. P. (2010). New Digital Media and Learning
as an Emerging Area and “Worked Examples” as
One Way Forward. Digital Media. MIT Press.
9. Godwin, K. E., Lomas, D., Koedinger, K. R., &
Fisher, A. V. (2015). Monster Mischief: Designing a
Video Game to Assess Selective Sustained
Attention. International Journal of Gaming and
Computer-Mediated Simulations 7(4), 18-39.
10. Diamond, A. (2012). Activities and Programs That
Improve Children’s Executive Functions. Current
Directions in Psychological Science, 21(5), 335–
11. Duckworth, A. "Intelligence is not enough: Non-IQ
predictors of achievement" (January 1,
2006). Dissertations available from ProQuest.
12. Eyal, N. (2014). Hooked: How to build habit-
forming products. Penguin Canada.
13. Gardner, H. (2011). Frames of mind: The theory of
multiple intelligences. Basic books.
14. Goodman, R. (2001). Psychometric properties of
the strengths and difficulties questionnaire. Journal
of the American Academy of Child and Adolescent
Psychiatry, 40(11), 1337–45.
15. Heckman, J. J. J., & Kautz, T. (2013). Fostering
and Measuring Skills: Interventions That Improve
Character and Cognition.
16. Hunt, E. (1995). The role of intelligence in modern
society. American Scientist, 83, 356-356.
17. IDEO (2015). The Field Guide to Human-Centered
Design.
18. Joussemet, M., Mageau, G. a., & Koestner, R.
(2014). Promoting Optimal Parenting and
Children’s Mental Health: A Preliminary Evaluation
of the How-to Parenting Program. Journal of Child
and Family Studies, 23(6), 949–964.
19. Keyes, C. L. M., Shmotkin, D., & Ryff, C. D. (2002).
Optimizing well-being: the empirical encounter of
two traditions. Journal of Personality and Social
Psychology, 82(6), 1007–1022.
20. Laidra, K., Pullmann, H., & Allik, J. (2007).
Personality and intelligence as predictors of
academic achievement: A cross-sectional study
from elementary to secondary school. Personality
and Individual Differences, 42(3), 441–451.
21. Lee, J. J. (2010). Review of intelligence and how to
get it: Why schools and cultures count, R.E.
Nisbett, Norton, New York, NY. Personality and
Individual Differences, 48(2), 247–255.
22. Legg, S., & Hutter, M. (2007). Universal
Intelligence: A Definition of Machine Intelligence.
Minds and Machines, 17(4), 391–444.
23. Legg, S., & Hutter, M. (2007). A Collection of
Definitions of Intelligence, 1–12.
24. Lomas, J. D. (2014). Optimizing motivation and
learning with large-scale game design experiments
(Unpublished Doctoral Dissertation). HCI Institute,
Carnegie Mellon University.
25. Melby-Lervåg, M., & Hulme, C. (2013). Is working
memory training effective? A meta-analytic review.
Developmental Psychology, 49(2), 270–291.
26. Mackey, A. P., Hill, S. S., Stone, S. I., & Bunge, S.
A. (2011). Differential effects of reasoning and
speed training in children. Developmental Science.
27. McCall, R., & Carriger, M. (1993). A meta-analysis
of infant habituation and recognition memory
performance as predictors of later IQ. Child
Development, 64(1), 57–79.
28. Mercer, N., Wegerif, R., & Dawes, L. (1999).
Children’s Talk and the Development of Reasoning
in the Classroom. British Educational Research
Journal, 25(1), 95–111.
29. Neisser, U., Boodoo, G., Bouchard Jr., T. J.,
Boykin, A. W., Brody, N., Ceci, S. J., … Urbina, S.
(1996). Intelligence: Knowns and unknowns.
American Psychologist, 51(2), 77–101.
30. Nisbett, R. E., Aronson, J., Blair, C., Dickens, W.,
Flynn, J., Halpern, D. F., & Turkheimer, E. (2012).
Intelligence: New findings and theoretical
developments. American Psychologist, 67(2),
130–159.
31. Protzko, J., Aronson, J., & Blair, C. (2013). How to
Make a Young Child Smarter: Evidence From the
Database of Raising Intelligence. Perspectives on
Psychological Science, 8(1), 25–40.
32. Rapport, M. D., Orban, S. A., Kofler, M. J., &
Friedman, L. M. (2013). Do programs designed to
train working memory, other executive functions,
and attention benefit children with ADHD? A
meta-analytic review of cognitive, academic, and
behavioral outcomes. Clinical Psychology Review.
33. Rasmussen, K. N. (2009). Effective Parenting. In
Encyclopedia of Positive Psychology.
34. Resnick, L. B., Michaels, S., & O'Connor, C. (2010).
How (well structured) talk builds the mind.
Innovations in educational psychology: Perspectives
on learning, teaching and human development.
35. Schiefele, U. (1991). Interest, Learning, and
Motivation. Educational Psychologist.
36. Siegler, R. S., Thompson, C. A., & Schneider, M.
(2011). An integrated theory of whole number and
fractions development. Cognitive Psychology.
37. Silvia, P. J., & Sanders, C. E. (2010). Why are
smart people curious? Fluid intelligence, openness
to experience, and interest. Learning and Individual
Differences, 20(3), 242–245.
38. Srinivasan, M., Dunham, Y., Hicks, C. M., & Barner,
D. (2015). Do attitudes toward societal structure
predict beliefs about free will and achievement?
Evidence from the Indian caste system.
39. Stanovich, K. E. (1986). Matthew effects in
reading: Some consequences of individual
differences in the acquisition of literacy. Reading
Research Quarterly, 21(4), 360–407.
40. Sternberg, R., & Conway, B. (1981). People’s
conceptions of intelligence. Journal of Personality
and Social Psychology, 41(1), 37–55.
41. Sternberg, R. J., Kaufman, J. C., & Grigorenko, E.
L. (2008). Applied Intelligence. Cambridge
University Press.
42. Strenze, T. (2007). Intelligence and socioeconomic
success: A meta-analytic review of longitudinal
research. Intelligence, 35, 401–426.
43. Sweller, J., & Cooper, G. A. (1985). The Use of
Worked Examples as a Substitute for Problem
Solving in Learning Algebra. Cognition and
Instruction, 2(1), 59–89.
44. Torrance, E. (1972). Predictive Validity of the
Torrance Tests of Creative Thinking. The Journal of
Creative Behavior, 6(4), 236-262.
45. Veenhoven, R., & Choi, Y. (2012). Does intelligence
boost happiness? Smartness of all pays more than
being smarter than others. International Journal of
Happiness and Development, 1(1), 5-27.
46. Zagorsky, J. L. (2007). Do you have to be smart to
be rich? The impact of IQ on wealth, income and
financial distress. Intelligence, 35(5), 489-501.