Deciphering The Cookie Monster: A case study in impossible combinations
Maria M. Hedblom
Institute of Artificial Intelligence
University of Bremen, Germany
hedblom@uni-bremen.de
Guendalina Righetti
KRDB, Computer Science Faculty
Free University of Bozen-Bolzano, Italy
guendalina.Righetti@stud-inf.unibz.it
Oliver Kutz
KRDB, Computer Science Faculty
Free University of Bozen-Bolzano, Italy
okutz@unibz.it
Abstract
In conceptual blending, the transfer of properties from
the input spaces relies on a shared semantic base. At
the same time, interesting blends are supposed to re-
solve deep semantic clashes where many concept com-
binations correspond to impossible blends, i.e. blends
whose input spaces lack any obvious similarities. In-
stead of a shared structure, the blends are based on bi-
directional affordance structures. While humans can
easily map this information, computational systems for
creative constructions require an understanding of how
these features relate to one another. In this paper, we
discuss this problem from the perspective of linguistics
and computational blending and propose a method com-
bining theory weakening and semantic prioritisation. To
demonstrate the problem space, we look at the Sesame
Street character ‘The Cookie Monster’ and formalise
the blending process using description logic.
Introduction
Conceptual blending (CB) has been proposed as a model
of combinational creativity (Boden, 1998; Fauconnier and
Turner, 2008). Based on the principle of analogical trans-
fer, where information of one domain is transferred onto
another based on their shared structure, CB suggests that
creativity arises as a conceptual merge of two input spaces.
Cognitively speaking, blending is a dynamic process guided
by a series of optimality principles that repeatedly up-
date the interpretation of both the shared structure and the
blended concept (Fauconnier and Turner, 2008). In com-
putational approaches to CB, a static knowledge represen-
tation is required, limiting the possibilities to model the
dynamics of emergent processes in cognition (e.g. Kutz
et al. (2014); Pereira and Cardoso (2002)). This becomes
a problem when concepts from (very) different ontologi-
cal branches are merged, a phenomenon called impossible
blends (Turner, 1996). While humans can, through mental elaboration principles, find shared structure and connections between concepts that have little to nothing in common, computers are forced to rely on the information they are presented with; innovative methods to identify potentially shared structure are therefore required to advance computational CB.
In comparison to computational blending, the creative process of noun-noun (NN) compound word construction demonstrates how humans can easily blend words from different ontological branches based on criteria other than directly shared structure. Concepts like coffee cup and face-
palm do not result from some manipulation of the inter-
section of their respective input spaces. Instead, a coffee
cup is a cup particularly designated to contain coffee based
on the inverse role and bidirectional affordances of how a
cup is a container for liquids and how liquids need a con-
tainer. For a face-palm, there exists a shared ontological
structure, namely body-part. However, a face-palm is not
a body-part, but rather slang for the emotional reaction cap-
tured in the embodied action of covering your eyes with your
hand. Computational blending systems that rely on map-
ping the intersection of two input spaces would not be able
to reach these interpretations as they involve an understand-
ing of embodied experiences that is not shared between the
input spaces.
In order to formally deal with impossible blends, they
need to be treated with respect to the semantic components
within the respective input spaces and how these components could re-
late to one another. As in the examples above, some im-
portant semantic components are object affordances (Gib-
son, 1977) and semantic components describing perceptive
and embodied experiences, such as those found in image
schemas (Johnson, 1987).
To learn how to better deal with impossible blends in com-
putational blending, we present a top-down analysis of the
creative process that takes place when conceptualising the
impossible blend found in the Sesame Street character The
Cookie Monster. Our method utilises ontological weakening of the input spaces to identify shared semantic structure, with an emphasis on identifying transitive and inverse roles of affordances, and by performing property interpretation in the form of semantic prioritisation. In a miniature setting,
we formalise the spaces using Description Logic (DL).
All that combines is not Blending
Compound words are lexical compositions in which one do-
main inherits properties from another domain by merging
two words, e.g. blackbird and coffee cup. In English, two
parts can be distinguished: the Head, denoting the class,
and the Modifier, restricting the meaning of the word. For
instance, a compound concept such as Cookie Monster
would be differently interpreted than a Monster Cookie:
in the first case the word Monster plays the role of the
Head, modified by the word Cookie (possibly a Cookie eat-
ing Monster, see below). In the second one, it is the other
way around, and the concept is more likely to be interpreted
as something like a Cookie which is monstrous in some re-
spects.
According to Wisniewski (1997), NN combinations can
be interpreted in three ways: 1) The first is the relation-
linking interpretation, where some kind of relation between
the components is highlighted (the Cookie Monster is a
monster that eats cookies). 2) The second is the property
interpretation, where one or more properties of the Modi-
fier noun apply to the Head concept (the Cookie Monster
is a monster that is as sweet as cookies). 3) The third is
called hybridisation, which is a “combination of the two
constituents [...] or a conjunction of the constituents” (Wis-
niewski, 1997, p.169). The result of the combination corre-
sponds essentially to a ‘mash-up’ or ‘blend’ of both com-
ponents (the Cookie Monster would then be both a cookie
and a monster).
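For a computational treatment, these three readings need to be represented explicitly before any of them can be selected. A minimal sketch of such a representation in Python (the enum, the glosses and the candidate DL readings are our own illustrations, not part of Wisniewski's account) could be:

from enum import Enum

class NNInterpretation(Enum):
    # The three readings of a noun-noun compound according to Wisniewski (1997).
    RELATION_LINKING = "a relation links the Head and the Modifier"
    PROPERTY = "a property of the Modifier applies to the Head"
    HYBRIDISATION = "a mash-up or conjunction of both constituents"

# Candidate readings for the compound Cookie Monster (Modifier: Cookie, Head: Monster).
candidates = {
    NNInterpretation.RELATION_LINKING: "Monster ⊑ ∃eats.Cookie",
    NNInterpretation.PROPERTY: "Monster ⊑ ∃hasProperty.Sweetness",
    NNInterpretation.HYBRIDISATION: "Cookie ⊓ Monster",
}
for reading, dl_reading in candidates.items():
    print(reading.name, "->", dl_reading)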
However, even with this differentiation, the inheritance
relationship from the input spaces is not always straightfor-
ward. Consider the difference between the NN compound
words snowman and the ice-cream man. Ontologically, the
input spaces snow and ice-cream share several properties
such as being cold and fluffy, yet the compound words are
ontologically distinct based on essential properties and what
they are ‘used’ for. In the case of a snowman, the Mod-
ifier’s properties are transferred in their entirety as the compound refers to a man made out of snow¹. Hence, snowman
corresponds to a hybridisation of the two concepts. In the
ice-cream man, the result has little to do with any proper-
ties of ice-cream. Instead, the ice-cream man blend calls for
a relation-linking interpretation on weakened input spaces
based on functionality. Here, man:ability to bring is treated
in relation to the ice-cream space, essentially making the
blend a man who brings ice-cream.
In comparison to the linguistic research on compound
words, CB is the emergent process that finds this intersection
during (primarily) hybridisation. The blend inherits proper-
ties from both input spaces and, through emergent properties and optimality principles, the blend is ensured to make sense from a cognitive perspective (Fauconnier and Turner,
1998; Pereira and Cardoso, 2003).
Turning such cognitive processes into ‘artificially intelli-
gent’ identification of shared structure and projection of rel-
evant information is a non-trivial problem. One important
feature is that (most often) the most salient and semantically
rich features should be inherited by the blend. Arguably, the
essence of objects is tightly connected to the affordances
they offer (Gibson, 1977), especially in terms of their func-
tional, spatiotemporal behaviours.
¹ Arguably, it would be possible to claim that a snowman is just snow that inherited the shape of a man. However, we maintain that the snowman is inadvertently also given individual identity through anthropomorphism.

Affordances have an interesting feature. They are essentially bi-directional dispositions (Beßler et al., 2020) with transitive or inverse roles of the participants (e.g. inverse roles in DL (Horrocks and Sattler, 1999)). For instance,
Food has properties that offer the affordance ToBeEaten as
its most essential property is to be edible. Simultaneously,
a LivingCreature has the behaviour CanEat, as it is essential that it eats, else it is not alive (for long). These kinds
of essential properties are of crucial importance when per-
forming CB and interpreting compound words and should,
therefore, be incorporated in the formal blending process.
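As a minimal illustration of such bi-directional dispositions in DL (the concept and role names below are our own simplified choices, not taken from an existing affordance ontology), one could write:

Food ⊑ ∃offersAffordance.ToBeEaten      (food is essentially edible)
LivingCreature ⊑ ∃hasBehaviour.CanEat    (a living creature must be able to eat)
eatenBy ≡ eats⁻                          (the two dispositions meet in a role and its inverse)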
Two important suggestions for improving semantics for
computational CB have been introduced (e.g. see (Eppe et
al., 2018; Hedblom, 2020)). The first is theory weakening,
in which the input spaces are ontologically generalised into
spaces of less detail, or higher-order components, to bet-
ter identify potentially shared structure. Axiomatised theory
weakening has been applied in logical approaches to anal-
ogy and CB (e.g. Gentner (1983); Schmidt et al. (2014)).
However, these approaches lack semantic selection. The second suggestion is semantic prioritisation², which promotes that the most important attributes and properties should be transferred into the blend. An example is the property interpretation found in the houseboat blend: despite being a house to live in, it also moves on water, as the most salient image-schematic affordances of the Modifier, boat, are projected into the blend.
In the next section, we demonstrate how these two methods
are involved in deconstructing the Cookie Monster blend.
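For instance, one possible (simplified, our own) DL rendering of the houseboat case just mentioned keeps the Head concept as the classifier while projecting the boat's salient image-schematic affordance into the blend:

HouseBoat ⊑ House
HouseBoat ⊑ ∃locatedOn.Water ⊓ ∃affords.Movement

Here locatedOn and affords are illustrative role names; only the general shape of the projection matters.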
The Complexity of the Cookie Monster
Actually named Sidney Monster, The Cookie Monster is a
blue hand-puppet from Sesame Street’s Muppet cast, famous for his obses-
sion with eating cookies. The conceptual complexity that
emerges when looking at the Cookie Monster as a compound
blend is the following: While the character is a Monster by
Muppet-classification, the most appropriate interpretation is
that the epithet monster is there to describe an unnatural rela-
tionship to cookies. Compare it with calling a (non-monster)
friend a “cookie monster” if s/he eats a lot of cookies.
From a formal blending perspective, two interesting
things happen in this blend:
1) The blended space is not an intersection between the in-
put spaces Monster and Cookie. Instead, it corresponds to
a relation-linking interpretation, based on a conceptual map-
ping between the inverse roles edible and canEat.
2) The second thing that happens is that the blend is not
exclusively a cookie-eating monster. Cookie Monster is a
sweet character that simply eats cookies in a monstrous way
(over-consumption, guzzling, etc.). This is a form of prop-
erty interpretation where the sweetness of the cookies is transferred onto the blend and, even more interestingly, the unnaturalness of monsters is transposed onto the cookie-
eating property.
As cookies and monsters have nothing in common in their conceptual spaces, they need to be generalised to the point at which a connection can be made. For this, we use the ontological branches to identify the first bi-directional affordance relationships and ‘step up’ in the object classes.

² Hedblom (2020) calls this image schema prioritisation and focuses on spatiotemporal relationships. However, the idea can be transposed onto any conceptual component of semantic importance.

Figure 1: Blending diagram of The Cookie Monster
In Figure 1, the blending diagram for Cookie Monster is
presented with respect to both theory weakening and seman-
tic prioritisation based on property interpretation, explained
in more detail below.
Initiation of Computing the Impossible
Step 1: Relation Linking through Weakening. A com-
putational approach looking for a relation-linking interpre-
tation needs to identify a relation that holds between the
two input spaces. In our top-down example, and follow-
ing a logic-based representation, this corresponds to looking
for a role R holding between the instances of the classes in
the input spaces. If such a role is identified, then the task
would be easily solved. For instance, for the input spaces
‘Human’ and ‘Monster’ one trivial relation-linking interpre-
tation would be R: scares, as monsters are commonly perceived as dangerous to humans³. However, for the input
spaces Monster and Cookie, ontologically represented in
Figure 1, no such obvious relation exists⁴.
Addressing this, one (or both) input spaces need to be gen-
eralised until a shared role is found. This corresponds to a
form of theory weakening, where the weakening allows one to
‘step up’ in the ontological branches by exploiting the sub-
sumption relations holding between the concepts in the on-
tologies. One possibility to formally capture this is to utilise
a generalisation operator as described in (Confalonieri et al.,
2020) and as exploited in (Confalonieri and Kutz, 2020). In
short, a generalisation operator with respect to an ontology
is a function γ_O that takes a concept C and returns the set γ_O(C) of the super-concepts of C⁵. Intuitively, a concept D is a generalised super-concept of the concept C with respect to an ontology O if in every model of the ontology all instances of C are also instances of D.

³ Based on the assumption that monsters are inherently scary (Neuhaus et al., 2014).
⁴ In the context of a formal ontology, Monster ⊑ LivingBeing and LivingBeing ⊑ ∃eats.Food imply Monster ⊑ ∃eats.Food. Ideally, theory weakening could be set to exploit logical inference directly.
⁵ Conversely, a specialisation operator can also be defined.
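Read literally, the prose definition of the generalisation operator above amounts to the following (our paraphrase; the cited works define more refined, axiom-level variants):

γ_O(C) = { D | O ⊨ C ⊑ D }

In the running example, LivingBeing ∈ γ_O(Monster) and Food ∈ γ_O(Cookie).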
In our example, the ontological assumption presented
in Figure 1 claims that LivingBeing is a super-concept of
the concept Monster. Applying theory weakening, the in-
put space Monster is then generalised into LivingBeing,
and a relation holding between LivingBeing and Cookie is
sought. If a relation is found, it is returned. Otherwise, as in
this case, the other input space also needs to be generalised
from Cookie into Food. Here, the role eats holds between
the instances of LivingBeing and Food and constitutes the
shared structure that belongs to the generic space.
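A toy sketch of this alternating weakening search in Python (the taxonomy, the role table and the function names are hypothetical illustrations, not an interface of any existing system) could look as follows:

# Toy sketch of relation linking through theory weakening (illustrative names only).

# Hypothetical subsumption hierarchy: concept -> direct super-concept.
TAXONOMY = {
    "Monster": "LivingBeing",
    "Cookie": "Food",
    "LivingBeing": "Entity",
    "Food": "Entity",
}

# Hypothetical roles known to hold between (possibly generalised) input spaces.
ROLES = {
    ("LivingBeing", "Food"): "eats",
    ("Monster", "Human"): "scares",
}

def generalisations(concept):
    """Return the concept followed by its super-concepts, bottom-up."""
    chain = [concept]
    while concept in TAXONOMY:
        concept = TAXONOMY[concept]
        chain.append(concept)
    return chain

def find_linking_relation(head, modifier):
    """Weaken the input spaces step by step until a linking role is found."""
    for h in generalisations(head):
        for m in generalisations(modifier):
            if (h, m) in ROLES:
                return h, ROLES[(h, m)], m
    return None

print(find_linking_relation("Monster", "Cookie"))
# -> ('LivingBeing', 'eats', 'Food'): the shared structure placed in the generic space.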
Following blending heuristics, the information in the
generic space constitutes the foundation for the blend by
adding the specific information from the input spaces, gen-
erating the blend CookieMonster ⊑ ∃eats.Cookie. However, as CookieMonster ⊑ Monster is also a correct inter-
pretation, and following the transfer of information between
Head and Modifier, the blend will also be defined by the ax-
ioms describing the Head ontology of Monster. Yet, one
more complexity arises due to the nature of this impossible
blend. This leads us to step 2.
Step 2: Semantic Property Prioritisation. Semantic pri-
oritisation is a form of formal property interpretation. It
suggests that the most salient features should be identified
in the input spaces and inherited into the blend. One such
interesting semantic transfer is the mapping of the abnor-
mality of Monster into the role identified in the previous
step, namely the eats role. This role is enhanced by an abnormal relationship, i.e. by defining a role inclusion such as abnormallyEats ⊑ eats (Horrocks and Sattler, 1999).
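Under this reading, a possible (our own, simplified) axiomatisation of the Step 2 outcome combines the role inclusion with the axioms obtained in Step 1:

abnormallyEats ⊑ eats
CookieMonster ⊑ Monster ⊓ ∃abnormallyEats.Cookie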
Finally, as with all blending, the Modifier concept is
there to alter the nature of the Head concept. For Cookie,
the most salient feature, i.e. the one distinguishing it from other foods, is ∃hasProperty.Sweetness, and its conceptual space extends beyond just sugary foods to sweet and desirable characteristics. In contrast, monsters are scary, and their conceptual space is directly inconsistent with that of sweetness. Directly transferring the salient features
of both input spaces, therefore, may create a logical impos-
sibility. Working top-down we already know that Cookie
Monster is a charming fellow, hence priority is given to the
Modifier. In unknown combinations or situations with mul-
tiple salient features, prioritisation strategies need to be ap-
plied to identify the most appropriate mapping.
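A toy sketch of such a prioritisation step in Python, with made-up salience scores and a hard-coded preference for the Modifier on clashes (both simplifying assumptions), might look as follows:

# Toy sketch of semantic property prioritisation (illustrative values only).

# Hypothetical salient properties of the input spaces with made-up salience scores.
SALIENT = {
    "Cookie": {"sweet": 0.9, "baked": 0.4},
    "Monster": {"scary": 0.8, "abnormal": 0.7},
}

# Hypothetical property clashes: transferring both members would be inconsistent.
CLASHES = {frozenset({"sweet", "scary"})}

def prioritise(head, modifier):
    """Select the properties projected into the blend, preferring the Modifier on clashes."""
    blend = dict(SALIENT[modifier])            # Modifier properties are projected first.
    for prop, score in SALIENT[head].items():
        clashes = any(frozenset({prop, kept}) in CLASHES for kept in blend)
        if not clashes:
            blend[prop] = score                # Head properties only if they do not clash.
    return sorted(blend, key=blend.get, reverse=True)

print(prioritise(head="Monster", modifier="Cookie"))
# -> ['sweet', 'abnormal', 'baked']: 'scary' is dropped in favour of the Modifier's 'sweet'.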
Identifying the salient features to be transferred is one of
the biggest challenges for future work.
Discussion and future work
Computational blending has become one of the most widely
used methods for simulating computational creativity, yet
human ability still far exceeds the current state of the art
of computational systems.
To contribute to this research agenda, we took a brief
look into impossible blends by using the Cookie Monster as a case study of a compound of two ontologically distinct branches. Building on linguistic research on noun-noun
compound words, one of the paper’s main contributions to
formal CB is showing how theory weakening could be em-
ployed to identify relation-linking interpretations, as well as
utilising semantic prioritisation of salient features to more
accurately deal with property projection.
The ideas follow the large body of work aiming to
improve computational blending (e.g. Eppe et al. (2018);
Neuhaus et al. (2014); Veale, Seco, and Hayes (2004)). The
distinction between Head and Modifier is also reminiscent of
asymmetric amalgams as described in (Besold, Kühnberger, and Plaza, 2017), with the difference that our approach does
not identify a traditional generic space and uses different
computational techniques as well.
Many challenges remain in order to utilise these ideas to
deal with impossible blends in computational CB. To ad-
dress these, future work includes incorporating the work on
generalisation operators together with a system for semantic
prioritisation. More precisely, by building on previous re-
search and on empirical results regarding concept salience
in compound words (Devereux and Costello, 2012), we plan
to combine the formal work on inverse and transitive roles
(Horrocks and Sattler, 1999) with an ontological
repository of affordances and other semantic components.
Acknowledgements
Special thanks to Prof. Mark Turner for providing valuable
input on conceptual blending during the completion of this
research.
The research reported in this paper has been supported
by the German Research Foundation DFG, as part of Col-
laborative Research Center (Sonderforschungsbereich) 1320
“EASE - Everyday Activity Science and Engineering”, Uni-
versity of Bremen (http://www.ease-crc.org/).
References
Besold, T. R.; Kühnberger, K.-U.; and Plaza, E. 2017. Towards a computational- and algorithmic-level account of concept blending using analogies and amalgams. Connection Science 29(4):387–413.
Beßler, D.; Porzel, R.; Pomarlan, M.; Beetz, M.; Malaka,
R.; and Bateman, J. 2020. A formal model of affordances
for flexible robotic task execution. In Proc. of the 24th
ECAI.
Boden, M. A. 1998. Creativity and artificial intelligence.
Artificial intelligence 103(1-2):347–356.
Confalonieri, R., and Kutz, O. 2020. Blending under de-
construction. Annals of Mathematics and Artificial Intel-
ligence 88(5):479–516.
Confalonieri, R.; Galliani, P.; Kutz, O.; Porello, D.; Righetti,
G.; and Troquard, N. 2020. Towards even more irre-
sistible axiom weakening. In Borgwardt, S., and Meyer,
T., eds., Proc. of DL, volume 2663. CEUR-WS.org.
Devereux, B. J., and Costello, F. J. 2012. Learning to in-
terpret novel noun-noun compounds: Evidence from cat-
egory learning experiments. In Cognitive aspects of com-
putational language acquisition. Springer. 199–234.
Eppe, M.; Maclean, E.; Confalonieri, R.; Kutz, O.; Schorlemmer, M.; Plaza, E.; and Kühnberger, K.-U. 2018. A computational framework for conceptual blending. Artificial Intelligence 256:105–129.
Fauconnier, G., and Turner, M. 1998. Conceptual integra-
tion networks. Cognitive Science 22(2):133–187.
Fauconnier, G., and Turner, M. 2008. The way we think:
Conceptual blending and the mind’s hidden complexities.
Basic Books.
Gentner, D. 1983. Structure mapping: A theoretical frame-
work for analogy. Cognitive Science 7(2):155–170.
Gibson, J. J. 1977. The theory of affordances. In Shaw,
R., and Bransford, J., eds., Perceiving, Acting, and Know-
ing: Toward an Ecological Psychology. Hillsdale, NJ:
Lawrence Erlbaum. 67–82.
Hedblom, M. M. 2020. Image Schemas and Concept Inven-
tion: Cognitive, Logical, and Linguistic Investigations.
Cognitive Technologies. Springer Computer Science.
Horrocks, I., and Sattler, U. 1999. A description logic with
transitive and inverse roles and role hierarchies. Journal
of logic and computation 9(3):385–410.
Johnson, M. 1987. The Body in the Mind: The Bodily Ba-
sis of Meaning, Imagination, and Reason. Chicago and
London: The University of Chicago Press.
Kutz, O.; Bateman, J.; Neuhaus, F.; Mossakowski, T.; and
Bhatt, M. 2014. E pluribus unum: Formalisation, Use-
Cases, and Computational Support for Conceptual Blend-
ing. In Computational Creativity Research: Towards Cre-
ative Machines, Thinking Machines. Atlantis/Springer.
Neuhaus, F.; Kutz, O.; Codescu, M.; and Mossakowski, T.
2014. Fabricating monsters is hard: towards the automa-
tion of conceptual blending. In Proc. of C3GI. Citeseer.
Pereira, F. C., and Cardoso, A. 2002. Conceptual Blending
and the Quest for the Holy Creative Process. In Proc. of
the 2nd Workshop on Creative Systems.
Pereira, F. C., and Cardoso, A. 2003. Optimality principles
for conceptual blending: A first computational approach.
AISB Journal 1(4):351–369.
Schmidt, M.; Krumnack, U.; Gust, H.; and Kühnberger, K.-U. 2014. Heuristic-Driven Theory Projection: An
Overview. In Prade, H., and Richard, G., eds., Com-
putational Approaches to Analogical Reasoning: Cur-
rent Trends, volume 548 of Computational Intelligence.
Springer-Verlag.
Turner, M. 1996. The literary mind: The origins of thought
and language. Oxford University Press.
Veale, T.; Seco, N.; and Hayes, J. 2004. Creative discov-
ery in lexical ontologies. In Proc. of the 20th COLING, 1333–1338.
Wisniewski, E. 1997. When concepts combine. Psycho-
nomic Bulletin & Review 4:167–183.