Article

Integrating Structure and Meaning: A Distributed Model of Analogical Mapping

Authors:
Chris Eliasmith, Paul Thagard

Abstract

In this paper we present Drama, a distributed model of analogical mapping that integrates semantic and structural constraints on constructing analogies. Specifically, Drama uses holographic reduced representations (Plate, 1994), a distributed representation scheme, to model the effects of structure and meaning on human performance of analogical mapping. Drama is compared to three symbolic models of analogy (SME, Copycat, and ACME) and one partially distributed model (LISA). We describe Drama’s performance on a number of example analogies and assess the model in terms of neurological and psychological plausibility. We argue that Drama’s successes are due largely to integrating structural and semantic constraints throughout the mapping process. We also claim that Drama is an existence proof of using distributed representations to model high-level cognitive phenomena.
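
The abstract's key mechanism, binding structure into distributed vectors and comparing them by dot product, can be sketched in a few lines of numpy. This is a hedged toy illustration of HRR-style encoding, not Drama's actual implementation; the roles, vocabulary, and dimensionality are assumptions.

```python
import numpy as np

D = 1024                                   # HRR dimensionality (assumed)
rng = np.random.default_rng(0)

def vec():
    """Random HRR vector with expected unit length."""
    return rng.normal(0.0, 1.0 / np.sqrt(D), D)

def bind(a, b):
    """Circular convolution via FFT, the HRR binding operation."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

agent, obj, rel = vec(), vec(), vec()      # role vectors
dog, cat, planet, sun, chase, orbit = (vec() for _ in range(6))

# Whole propositions as single distributed vectors
s1 = bind(rel, chase) + bind(agent, dog) + bind(obj, cat)      # "dog chases cat"
s2 = bind(rel, chase) + bind(agent, cat) + bind(obj, dog)      # "cat chases dog"
s3 = bind(rel, orbit) + bind(agent, planet) + bind(obj, sun)   # "planet orbits sun"

# Dot products reflect shared content and structure: the two chase
# propositions share the bind(rel, chase) term; the orbit one shares nothing.
print(np.dot(s1, s2))   # noticeably above zero
print(np.dot(s1, s3))   # near zero
```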


... The technical areas most relevant to the author's contributions are listed in Table 1-2 with example references of recent prior work (2000 and later) as well as the research conducted by the author in this thesis (Combs, 2021) or a separate article (Combs, Bihl, Ganapathy, & Staples, 2022):
- …: prior work: (Eliasmith & Thagard, 2001); (Doumas et al., 2008); (Lu et al., 2012); (Levy & Goldberg, 2014); (Drozd et al., 2016); (Speer et al., 2017); author: (Combs, 2021)
- AR Algorithm Comparisons: prior work: (French, 2002); (Kokinov & French, 2003); (Leech et al., 2008); (Gentner & Forbus, 2010); (Rogers et al., 2017); (Mikolov et al., 2018); (Peterson et al., 2020); author: (Combs, 2021); (Combs et al., 2022)
- Interdisciplinary AR Comparison with Metrics: author: (Combs, 2021); (Combs et al., 2022)
- Image-based AR: prior work: (Yaner & Goel, 2006); (Doumas & Hummel, 2010); (Hwang, Grauman, & Sha, 2013); (Sadeghi, Zitnick, & Farhadi, 2015); (Reed, Zhang, Zhang, & Lee, 2015)
- Image and text to AR: prior work: (Lu, Liu, Ichien, Yuille, & Holyoak, 2019)
- Image-text to AR: author: (Combs, 2021)
- AR Algorithm Taxonomy: author: (Combs, 2021); (Combs et al., 2022)
- AR Comparison Metrics: prior work: (Leech et al., 2008); (Gentner & Forbus, 2010); (Rogers et al., 2017); (Mikolov et al., 2018); (Peterson et al., 2020); author: (Combs, 2021); (Combs et al., 2022)
- Correctness Metric: prior work: (Morrison et al., 2004); author: (Combs, 2021); (Combs et al., 2022)
- Goodness Metric: author: (Combs, 2021); (Combs et al., 2022)
- Contextual Metrics: author: (Combs, 2021) ...
... Translating the explanation of Figure 2-10 into a textual representation using Structure Mapping Theory (Falkenhainer & Forbus, 1989) resulted in Figure 2-11. Understanding the context of the analogy (Eliasmith & Thagard, 2001), STAR-2 follows the steps outlined below: ...
... Though there are several parallels between LISA and STAR (such as both being neural networks), the former algorithm's mental capacity is based on limiting the number of firings done synchronously (Hummel & Holyoak, 2005), whereas STAR reduces the number of "chunks" that can be evaluated simultaneously (Halford et al., 1994). LISA's limitation (the ability to fire only three propositions at once (Hummel & Holyoak, 2005)) could potentially represent the constraints on a human's short-term memory compared to other algorithms (Eliasmith & Thagard, 2001). LISA is partly biased due to relational concepts being hard-coded into the program and its inability to learn new predicates (Lu, Chen, & Holyoak, 2012). ...
Thesis
Full-text available
There is a continual push to make Artificial Intelligence (AI) as human-like as possible; however, this is a difficult task because of AI's inability to learn beyond its current comprehension. Analogical reasoning (AR) has been proposed as one method to achieve this goal. Current literature lacks a technical comparison of psychologically-inspired and natural-language-processing-produced AR algorithms using consistent metrics on multiple-choice word-based analogy problems. Assessment is based on "correctness" and "goodness" metrics. There is not a one-size-fits-all algorithm for all textual problems. As a contribution in visual AR, a convolutional neural network (CNN) is integrated with the AR vector space model Global Vectors (GloVe) in the proposed Image Recognition Through Analogical Reasoning Algorithm (IRTARA). Given images outside of the CNN's training data, IRTARA produces contextual information by leveraging semantic information from GloVe. IRTARA's quality of results is measured by definition, AR, and human factors evaluation methods, which showed consistency at the extreme ends. The research shows the potential for AR to facilitate a more human-like AI through its ability to understand concepts beyond its foundational knowledge in both a textual and a visual problem space.
... Two types of similarity that influence processing of analogical episodes are distinguished. Structural similarity (which should not be confused with the structural similarity in HDC/VSA) reflects how the elements of analogs are arranged with respect to each other, that is, in terms of the relations between the elements [74,117,118]. Analogs are also matched by the "surface" or "superficial" similarity [85,114] based on common analogs' elements or a broader "semantic" similarity [74,154,401], based on, e.g., joint membership in a taxonomic category or on similarity of characteristic feature vectors. Experiments based on human assessment of similarities and analogies confirmed that both surface (semantic) and structural similarity are necessary for sound retrieval [85]. ...
... One of the limitations of these studies is that the approach was not demonstrated to be scalable to large analogical episodes. HRR was also used in another model for analogical mapping, called DRAMA [74], where the similarity between HVs was used to initialize a localist network involved in the mapping. ...
Article
Full-text available
This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and vector distributed representations. Holographic Reduced Representations [322, 327] is an influential HDC/VSA model that is well-known in the machine learning domain and often used to refer to the whole family. However, for the sake of consistency, we use HDC/VSA to refer to the field. Part I of this survey [223] covered foundational aspects of the field, such as the historical context leading to the development of HDC/VSA, key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, as well as directions for future work. Most of the applications lie within the Machine Learning/Artificial Intelligence domain, however, we also cover other applications to provide a complete picture. The survey is written to be useful for both newcomers and practitioners.
... Despite using ACME as its basis, DRAMA has been generally accepted to be a hybrid model [46]. DRAMA uses holographic reduced representations (HRRs) (as discussed by Plate in [47]) and manipulates them through convolution and superimposition [46]. By nature, HRRs are influenced by noise, and experimental data shows that HRRs can yield results similar to human recollection [46]. DRAMA compares elements in the source and target by taking their dot product and dividing it by an arbitrary weight on semantics called the "semantic similarity" parameter, which is incorporated into the "activation" variable directly used to determine the analogy's final mapping [46]. ...
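
Read literally, the excerpt describes a simple comparison step; a minimal sketch of it is below. The vector contents and the weight's value are illustrative assumptions, not DRAMA's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 512
semantic_weight = 2.0    # the "semantic similarity" parameter (assumed value)

source_elem = rng.normal(size=D)   # distributed vector for a source element
target_elem = rng.normal(size=D)   # distributed vector for a target element

# Dot product scaled by the semantic weight, as the excerpt describes;
# the result seeds the "activation" used to settle the final mapping.
activation = np.dot(source_elem, target_elem) / semantic_weight
print(activation)
```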
... Two types of similarity that influence processing of analogical episodes are distinguished. Structural similarity reflects how the elements of analogs are arranged with respect to each other [Eliasmith and Thagard, 2001], [Gentner and Smith, 2012], [Gentner and Maravilla, 2017]. Analogs are also matched by the "surface" or "superficial" similarity [Gentner, 1983], [Forbus et al., 1995], based on common analogs' elements, or a broader "semantic" similarity [Hummel and Holyoak, 1997], [Thagard et al., 1990], [Eliasmith and Thagard, 2001], based on, e.g., joint membership in a taxonomic category or on similarity of characteristic feature vectors. Experiments based on human assessment of similarities and analogies confirmed that both surface (semantic) and structural similarity are necessary for sound retrieval [Forbus et al., 1995]. ...
... One of the limitations of these studies is that the approach was not demonstrated to be scalable to large analogical episodes. HRR was also used in another model for analogical mapping, called DRAMA [Eliasmith and Thagard, 2001], where the similarity between HVs was used to initialize a localist network involved in the mapping. ...
Preprint
Full-text available
This is Part II of the two-part comprehensive survey devoted to a computing framework most commonly known under the names Hyperdimensional Computing and Vector Symbolic Architectures (HDC/VSA). Both names refer to a family of computational models that use high-dimensional distributed representations and rely on the algebraic properties of their key operations to incorporate the advantages of structured symbolic representations and vector distributed representations. Holographic Reduced Representations [Plate, 1995], [Plate, 2003] is an influential HDC/VSA model that is well-known in the machine learning domain and often used to refer to the whole family. However, for the sake of consistency, we use HDC/VSA to refer to the area. Part I of this survey [Kleyko et al., 2021c] covered foundational aspects of the area, such as the historical context leading to the development of HDC/VSA, key elements of any HDC/VSA model, known HDC/VSA models, and the transformation of input data of various types into high-dimensional vectors suitable for HDC/VSA. This second part surveys existing applications, the role of HDC/VSA in cognitive computing and architectures, as well as directions for future work. Most of the applications lie within the machine learning/artificial intelligence domain; however, we also cover other applications to provide a thorough picture. The survey is written to be useful for both newcomers and practitioners.
... They apply the multiconstraint theory, which assumes that analogical reasoning is guided by three broad classes of constraints: similarity, structure, and purpose, which contribute to coherence in analogical reasoning (Holyoak & Thagard, 1997; Keane, 1988). The three constraints are considered "soft" and do not provide strict rules (Eliasmith & Thagard, 2001). Because they interact with each other, they should rather be seen as guiding influences on the analogical process (Holyoak & Thagard, 1997). ...
... Purpose and goals should be made explicit, especially when using analogical reasoning in new product development (Holyoak & Thagard, 1997). A specific purpose affects how a person makes an analogy, as the focus of attention will be on the elements linked to this purpose (Clement & Gentner, 1991; Eliasmith & Thagard, 2001). The amount of initial knowledge of the target's structure can also be important (Ward, 1998). ...
... (1) elements that correspond to each other in the source and the target are matched, then (2) a part of the element structure is transferred from the source to the target, which forms the basis of analogical inference (Eliasmith & Thagard, 2001). Similarities between the source and the target context of the analogical reasoning process thus take place on two distinct levels: superficial features and structural relationships (Blanchette & Dunbar, 2001; Grégoire, Barr, & Shepherd, 2010; Ross, 1989). ...
Thesis
Full-text available
To remain successful, a company needs to anticipate and react to changes in its environment. Business model innovation enables companies to react to change and can help them commercialize new technologies while gaining competitive advantage. Through the development of LED technology, Philips Lighting is confronted with significant change in the outdoor lighting market. However, they struggle to react accordingly. Therefore, the first objective of this research is to analyze three other industries that have undergone a business model transition similar to the one Philips is facing: Philips Healthcare, the print industry, and the music industry. The second objective of this study is to analyze how the business model innovations from another industry can influence the innovation processes at Philips Lighting. In other words, how can Philips Lighting learn from analogies? This research thus focuses on how analogical reasoning can influence business model innovation. The verbal protocol method was applied, resulting in 18 analyzable protocols. This qualitative research design was chosen in order to analyze the concurrent thought processes of Philips Lighting managers. During the analogical thought process, superficial features as well as structural relationships are transferred from one industry to a new business model for Philips Lighting. The results demonstrate that analogical reasoning can be an effective method for business model innovation. Individuals are forced to think beyond their local knowledge and analyze which capabilities Philips needs in order to react to change. Near analogies lead to more incremental business model solutions, whereas far analogies result in more radical business model innovations. Several propositions are developed and recommendations are given as to how Philips Lighting should innovate its business model for outdoor lighting.
... Moreover, the built-in representation of LTM makes LISA an uncreative system with low flexibility, since it demands the explicit coding of each proposition. Drama is a system that aims to integrate semantic and structural information in analogy making [Eliasmith and Thagard, 2001]. It has a set of particularities that make it unique among its peers. ...
... An error-cleaning mechanism is therefore necessary. It is claimed that HRRs are cognitively plausible models of memory [Eliasmith and Thagard, 2001]. Other systems also apply distributed representations such as neural networks (e.g. ...
... "Beth" with "Mary" -both are women) which would have little coactivation with the former nodes. Then, with a spreading activation process (as in Sapper), it would select the mapping sets that best satisfy the constraints of similarity, structure and purpose, as defined in [Eliasmith and Thagard, 2001] and [Holyoak and Thagard, 1989]. In theory, Drama can integrate both structure and meaning, which would be a major breakthrough in analogy research, but, since the ground concepts are given random vectors, the meaning is entirely dependent on the property and ISA relations, which end up as being structural knowledge as any other relation. ...
... In recent years, we and others have made progress in advancing a vector space computing framework called Vector Symbolic Architecture (VSA), or synonymously Hyperdimensional (HD) computing, that both enables variable binding and is fully transparent [12,21,30,44,48] (see also the survey in [35,36]). In VSA, symbols, data, or other entities are represented by randomly mapping them into a vector space of fixed dimensionality. ...
... By drawing the base vector of an FPE from distributions other than the uniform band-limited distribution (12), one can design kernels with shapes that differ from the sinc function. The Bochner theorem (5) states that any kernel whose Fourier transform is a proper density function can be represented with Fourier features drawn from this density [51]. ...
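
The kernel-shaping claim can be checked numerically: with base-vector phases drawn uniformly from [-π, π] (the band-limited uniform case), the inner product of two fractional power encodings approximates a sinc kernel. A hedged numpy sketch, with dimension and sampling choices as assumptions:

```python
import numpy as np

D = 2048
rng = np.random.default_rng(2)
phases = rng.uniform(-np.pi, np.pi, D)   # band-limited uniform base spectrum

def fpe(x):
    """Fractional power encoding of scalar x: exponentiate the base phases."""
    return np.exp(1j * phases * x) / np.sqrt(D)

def kernel(x, y):
    """Real part of the inner product of two encodings."""
    return np.real(np.vdot(fpe(x), fpe(y)))

# Empirical kernel values track the normalized sinc function sin(pi*d)/(pi*d)
for delta in (0.0, 0.5, 1.0, 2.5):
    print(delta, round(kernel(0.0, delta), 3), round(np.sinc(delta), 3))
```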
... HRR-variant memory models such as CHARM (Metcalfe-Eich, 1982) and TODAM (Murdock, 1983) can explain and predict a variety of human memory phenomena. HRRs have also been used to model how people understand analogies (Eliasmith & Thagard, 2001; Plate, 2000) and language (Jones & Mewhort, 2007). In addition to cognitive models, HRRs have also been used as the basis for recommender systems, though due to time and space requirements this is somewhat impractical (Rutledge-Taylor, Vellino, & West, 2008). ...
... Because HRR similarity can reflect both content and structure, Plate (2000) has used HRRs to model human performance on analogy retrieval and processing, and Eliasmith and Thagard (2001) have further developed an HRR model of analogical reasoning. Plate (1997) has also suggested that HRRs could potentially be used in case-based reasoning systems. ...
Thesis
Full-text available
In this thesis, we build upon the work of Plate by advancing the theory and utility of Holographic Reduced Representations (HRRs). HRRs are a type of linear, associative memory developed by Plate and are an implementation of Hinton's reduced representations. HRRs and HRR-like representations have been used to model human memory, to model understanding analogies, and to model the semantics of natural language. However, in previous research, HRRs are restricted to storing and retrieving vectors of random numbers, limiting both the ability of HRRs to model human performance in detail, and the potential applications of HRRs. We delve into the theory of HRRs and develop techniques to store and retrieve images, or other kinds of structured data, in an HRR. We also investigate square matrix representations as an alternative to HRRs, and use iterative training algorithms to improve HRR performance. This work provides a foundation for cognitive modellers and computer scientists to explore new applications of HRRs.
... Models of analogical reasoning like LISA (Hummel and Holyoak 1997, 2003) and DORA (Doumas et al. 2008; Doumas and Hummel 2013) also compare representations, but additionally factor in the overlap between what Hummel et al. (Hummel 2016) would consider sub-structural elements (products of a representational system that contains both localist and distributed concepts). Contrast this with models making use of fully distributed representations, such as those which are vector-based (Eliasmith and Thagard 2001; Emruli and Sandin 2013). Gentner and Forbus (2011) summarize the many computational models of analogy and their variations, the vast majority of which make use of structural similarity criteria. ...
... - interesting, surprising, or informative (Bowers and Davis 2012a, b; Thibodeau et al. 2016; Aouchiche and Hansen 2010; Gopnik 2011; Licato et al. 2014a; Gauthier et al. 2016; Ireland and Bundy 1996; Bundy et al. 2005)
- reconstructible: minimizes some reconstruction error of a loss function (Hinton and Zemel 1994; Han et al. 2016)
- correct: generates truthful inferences
- re-usable or abstract (Bengio et al. 2013)
- fruitful (Carnap 1950)
- deep (Gerring 1999)
- normative (Marcus and Davis 2012, 2015)
- falsifiable (Jones and Love 2011; Marcus and Davis 2012; Bowers and Davis 2012a)
- otherwise useful, e.g. "as input to a supervised predictor" (Bengio et al. 2013)
6. Robust
- "Disentangles the factors of variation" (Bengio et al. 2013)
- denoising (Vincent et al. 2010; Han et al. 2016)
- minimizes reconstruction error (Hinton and Zemel 1994; Han et al. 2016)
7. Appropriately structured
- distributed or sparse (Bengio et al. 2009, 2013; Hinton et al. 1986; Han et al. 2016; Eliasmith and Thagard 2001; Stewart and Eliasmith 2012; Emruli and Sandin 2013; Rachkovskij et al. 2013)
- localist (Falkenhainer et al. 1989; Gentner 1983; Gust et al. 2003, 2006; Schwering et al. 2009; Schmidt et al. 2014; Fodor 1980; Fodor and Pylyshyn 1988; Fodor 1998)
- hybrid (Hummel and Biederman 1992; Hummel and Holyoak 1997; Hummel 2001; Sun 1991, 2002, 2004)
The seven categories of criteria in the preceding list are generally applicable to one type of representational object: Categories 1, 2, 4, and 7 tend to apply to representational systems; categories 3, 5, and 6 to representational spaces. Categories 1, 3, 5, and 6 can also apply to individual representations. ...
Article
Full-text available
All artificial reasoners work within representational systems. These systems, which may have varying levels of formality or detail, determine the space of possible representations over which the artificial reasoner can operate, by defining the syntactic and semantic properties of the symbols, structures, and inferences that they manipulate. But we are now seeing an increasing need for the ability to reason over representational systems, rather than just working within them. A prerequisite of performing such reasoning is the ability to evaluate and compare representational objects (and to know the difference between them). We survey the criteria that are used for such evaluations in AI, machine learning, and other AI-related fields. To aid our survey, we introduce a formalism of representations, representational systems, and representational spaces that lends itself nicely to an analysis of the criteria typically used for evaluating them.
... The third and last problem has to do with particular implementations of ICS. [34,15] use tensor binding representations to model grammar in the following manner: words such as funny or joke are tagged in some way with their parts of speech. This tagging takes the form of a Kronecker product with circular convolution applied to the resulting matrix. ...
... Now that we can see that ICS and CatCo have the same sort of structure, we can cross-fertilize in order to reap the maximum benefit from each representation. The ICS representation has been developed with connectionist implementations in mind, and therefore methods developed in [32], and implementations such as [14,34,15,16] can be used to develop the CatCo model into a cognitive system rather than the purely linguistic system that it is currently used for. ...
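
To make the tagging idea concrete, here is a small numpy sketch of tensor-product (Kronecker) role-filler binding in the Smolensky/ICS style the excerpts reference: a word vector is bound to a part-of-speech role by an outer product, and contracting with the role recovers the word. Orthonormal roles and random fillers are simplifying assumptions, not the cited implementation.

```python
import numpy as np

D = 256
rng = np.random.default_rng(3)
unit = lambda v: v / np.linalg.norm(v)

# Filler (word) vectors and orthonormal role (part-of-speech) vectors
funny, joke = (unit(rng.normal(size=D)) for _ in range(2))
adj, noun = np.eye(2)                 # ADJ = [1, 0], NOUN = [0, 1]

# Superposed tensor-product binding: funny (x) ADJ + joke (x) NOUN
T = np.outer(funny, adj) + np.outer(joke, noun)

# Unbinding: contract the tensor with a role to retrieve its filler
print(np.allclose(T @ adj, funny))    # True: exact for orthonormal roles
print(round(np.dot(T @ noun, joke), 3))  # ~1.0
```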
Article
Full-text available
We accommodate the Integrated Connectionist/Symbolic Architecture (ICS) of [32] within the categorical compositional semantics (CatCo) of [13], forming a model of categorical compositional cognition (CatCog). This resolves intrinsic problems with ICS such as the fact that representations inhabit an unbounded space and that sentences with differing tree structures cannot be directly compared. We do so in a way that makes the most of the grammatical structure available, in contrast to strategies like circular convolution. Using the CatCo model also allows us to make use of tools developed for CatCo such as the representation of ambiguity and logical reasoning via density matrices, structural meanings for words such as relative pronouns, and addressing over- and under-extension, all of which are present in cognitive processes. Moreover the CatCog framework is sufficiently flexible to allow for entirely different representations of meaning, such as conceptual spaces. Interestingly, since the CatCo model was largely inspired by categorical quantum mechanics, so is CatCog.
... However, isomorphism does not take into account the similarity of the components themselves, nor does it reflect the peculiarities of similarity assessment that psychologists have found in human analogical reasoning. Given the importance of modeling analogical reasoning for artificial intelligence (AI), it is relevant to develop approaches and methods that would reach the level of results of the best known (symbolic) models [Falkenhainer et al., 1989], [Holyoak & Thagard, 1989] while overcoming their shortcomings [Eliasmith & Thagard, 2001], [Hummel & Holyoak, 1997], [Kanerva, 2000], [Plate, 2003] (high computational complexity and weak accounting of the semantic similarity of the analogs' components). ...
... To account more adequately for the semantics of analogs, to scale approaches to cases with a large number of potential analogs, and to increase the neurobiological relevance of the models, newer models of analogical reasoning began to use distributed representations. However, an analysis of the analogical reasoning models LISA [Hummel & Holyoak, 1997] and DRAMA [Eliasmith & Thagard, 2001] shows that their use of distributed representations is fragmentary and inconsistent, or that their authors managed to work only with the simplest analogs. ...
Article
Gentner, 1983], [Hummel & Holyoak, 1997]. Thinking with analogies is one of the most important processes of intelligent human activity, and a large body of work has been devoted to modeling it [Gentner, 1983], [Holyoak & Thagard, 1989], [Hummel & Holyoak, 1997], [Markman, 1997], [Gladun, 2000], [Gladun et al.]. The first three stages of analogical reasoning are considered to be [Falkenhainer et al., 1989], [Holyoak & Thagard, 1989], [Hummel & Holyoak, 1997]: retrieval (finding in memory the analog closest to the input), mapping (establishing a correspondence between the components of two analogs), and analogical inference (transferring knowledge from one analog to the other). All of these stages require processing the structured information contained in the representations of the analogs.
... Different combinations produce different qualitative experiences because of the different neural firings that contribute to the resulting semantic pointer. Situations can be represented purely perceptually, or by verbal sentences, which can be neurally constructed by the same kinds of binding operations that produce semantic pointers (Eliasmith, 2013; Eliasmith & Thagard, 2001). Semantic pointers explain why we can talk about our conscious experiences, because semantic pointers can function as symbols that can be bound into verbal reports such as "My toe hurts" or "I see a blue sky." ...
... The solution to this problem is to build a neural network that can bind two patterns together (Thagard & Stewart, 2011). For binding, we use circular convolution, a mathematical operation that takes in two patterns and produces a third, novel pattern (Eliasmith & Thagard, 2001; Plate, 2003). This function is approximately invertible: given the novel pattern and one of the two original patterns, the other pattern can be recovered. ...
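
The approximate invertibility mentioned here is easy to demonstrate: convolving the bound pattern with the involution of one factor yields a noisy copy of the other, and a cleanup against known vectors finishes the job. A hedged sketch with assumed dimension and vocabulary:

```python
import numpy as np

D = 1024
rng = np.random.default_rng(4)
vocab = {name: rng.normal(0, 1 / np.sqrt(D), D)
         for name in ("red", "circle", "blue", "square")}

def bind(a, b):
    """Circular convolution via FFT."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def inv(a):
    """Involution a* = (a0, a_{D-1}, ..., a1); bind(x, inv(x)) ~ identity."""
    return np.concatenate(([a[0]], a[:0:-1]))

bound = bind(vocab["red"], vocab["circle"])
noisy = bind(bound, inv(vocab["red"]))        # ~ "circle" plus noise

# Cleanup: compare the noisy result against all known vectors
best = max(vocab, key=lambda n: np.dot(vocab[n], noisy))
print(best)                                    # "circle"
```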
... The vector dot product, the measure of similarity proposed for analogical reasoning with symbolic representations (Plate 1993; Eliasmith and Thagard 2001), also provides a meaningful measure of similarity between continuous-valued data. Specifically, the dot product between real-valued vector data represented using SSPs is a product of sinc functions (e.g., Komer et al. 2019; Voelker 2020; Dumont and Eliasmith 2020; Furlong, Stewart, and Eliasmith 2022). ...
Article
Recent developments in generative models have demonstrated that with the right data set, techniques, computational infrastructure, and network architectures, it is possible to generate seemingly intelligent outputs, without explicitly reckoning with underlying cognitive processes. The ability to generate novel, plausible behaviour could be a boon to cognitive modellers. However, insights for cognition are limited, given that generative models' black-box nature does not provide readily interpretable hypotheses about underlying cognitive mechanisms. On the other hand, cognitive architectures make very strong hypotheses about the nature of cognition, explicitly describing the subjects and processes of reasoning. Unfortunately, the formal framings of cognitive architectures can make it difficult to generate novel or creative outputs. We propose to show that cognitive architectures that rely on certain Vector Symbolic Algebras (VSAs) are, in fact, naturally understood as generative models. We discuss how memories of VSA representations of data form distributions, which are necessary for constructing distributions used in generative models. Finally, we discuss the strengths, challenges, and future directions for this line of work.
... To select the analogical reasoning algorithm (involved in processes 2, 3, and 4 of Figure 1), the review of AR algorithms from (Combs, Bihl, Ganapathy, & Staples, 2022) was used and the following methods were considered: Bayesian Analogy with Relational Transformations (BART) 1.0 (Lu, Chen, & Holyoak, 2012) and 2.0 (Lu, Wu, & Holyoak, 2019), 3 Cosine Average (3CosAvg) (Drozd, Gladkova, & Matsuoka, 2016), Distributed Representation Analogy MApper (DRAMA) (Eliasmith & Thagard, 2001), Linear Regression Cosine (LRCos) (Drozd, Gladkova, & Matsuoka, 2016), GloVe (Pennington, Socher, & Manning, 2014), and Word2Vec (Mikolov, Sutskever, Chen, Corrado, & Dean, 2013) (Mikolov, Tomas, Yih, & Zweig, 2013). To select an AR method for IRTARA, the adjusted correctness (based on selection of the correct answer) and goodness (how close the correct answer is to an "ideal" analogy according to the algorithm) metrics were used (Combs, Bihl, Ganapathy, & Staples, 2022). ...
Conference Paper
Full-text available
Current state-of-the-art artificial intelligence struggles with accurate interpretation of out-of-library (OOL) objects. One proposed remedy is analogical reasoning (AR), which utilizes abductive reasoning to draw inferences on an unfamiliar scenario given knowledge about a similar familiar scenario. Currently, applications of visual AR gravitate toward analogy-formatted image problems rather than computer vision data sets. The Image Recognition Through Analogical Reasoning Algorithm (IRTARA) approach described herein shows how AR can be leveraged to improve computer vision in OOL situations. IRTARA produces a word-based term frequency list that characterizes the OOL object of interest. To evaluate the quality of IRTARA's results, both quantitative and qualitative assessments are used, including a baseline to compare the automated methods with human-generated results. Fifteen OOL objects were tested using IRTARA, which showed consistent results across all three evaluation methods for the objects that performed exceptionally well or poorly overall.
... However, searching analogical inspirations in a large corpus of papers remains a longstanding challenge [34,44,83,99]. Previous systems for retrieving analogies have largely focused on modeling analogical relations in non-scientific domains and/or in limited scopes (e.g., structure-mapping [36-38, 42, 106], multiconstraint-based [33,59,65], connectionist [57], rule-based reasoning [3,15,16,111] systems), and the prohibitive costs of creating highly structured representations prevented hand-crafted systems (e.g., DANE [65,110]) from having a broad coverage of topics and being deployed for realistic use. Conversely, scalable computational approaches such as keyword or citation based search engines have been limited by a dependence on surface or domain similarity. ...
Preprint
Full-text available
Analogies have been central to creative problem-solving throughout the history of science and technology. As the number of scientific papers continues to increase exponentially, there is a growing opportunity for finding diverse solutions to existing problems. However, realizing this potential requires the development of a means for searching through a large corpus that goes beyond surface matches and simple keywords. Here we contribute the first end-to-end system for analogical search on scientific papers and evaluate its effectiveness with scientists' own problems. Using a human-in-the-loop AI system as a probe we find that our system facilitates creative ideation, and that ideation success is mediated by an intermediate level of matching on the problem abstraction (i.e., high versus low). We also demonstrate a fully automated AI search engine that achieves a similar accuracy with the human-in-the-loop system. We conclude with design implications for enabling automated analogical inspiration engines to accelerate scientific innovation.
... (Eliasmith & Thagard, 2001) divide analogical reasoning into three stages, namely retrieval, mapping, and application. Gentner (1983) likewise identified three stages of analogical reasoning, namely access, mapping, and evaluation and use. ...
Article
Full-text available
Different instruments yield different analogical reasoning components. With people-piece analogies, verbal analogies, and geometric analogies, the analogical reasoning components consist of encoding, inferring, mapping, and application. Meanwhile, with analogical problems (algebra, where the source problem and target problem are equal), the analogical reasoning components consist of structuring, mapping, applying, and verifying. The instrument used was analogical problems consisting of two problems, where the source problem was a symbolic quadratic equation problem and the target problems were a trigonometric equation problem and a word problem. This study aims to provide information on the analogical reasoning process in solving indirect analogical problems and, in addition, to identify the analogical reasoning components in solving indirect analogical problems. Using a qualitative design approach, the study was conducted at two schools in Mataram city of Nusa Tenggara Barat, Indonesia. The results of the study provide an overview of the analogical reasoning of the students in solving indirect analogical problems, and they reveal an additional component, the representation and mathematical model, in solving indirect analogical problems. The analogical reasoning components in solving indirect analogical problems are therefore representation and mathematical modeling, structuring, mapping, applying, and verifying. This means that there are additional components of analogical reasoning beyond those developed by Ruppert. The analogical reasoning components in problem-solving depend on the analogical problem given.
... In recent years, we and others have made progress in advancing a vector space computing framework called Vector Symbolic Architecture (VSA), or synonymously Hyperdimensional (HD) computing, that both enables variable binding and is fully transparent (Plate, 1994a; Kanerva, 1996; Gayler, 1998a; Eliasmith and Thagard, 2001; Rachkovskij and Kussul, 2001). In VSA, symbols, data, or other entities are represented by randomly mapping them into a vector space of fixed dimensionality. ...
Preprint
Full-text available
Vector space models for symbolic processing that encode symbols by random vectors have been proposed in cognitive science and connectionist communities under the names Vector Symbolic Architecture (VSA), and, synonymously, Hyperdimensional (HD) computing. In this paper, we generalize VSAs to function spaces by mapping continuous-valued data into a vector space such that the inner product between the representations of any two data points represents a similarity kernel. By analogy to VSA, we call this new function encoding and computing framework Vector Function Architecture (VFA). In VFAs, vectors can represent individual data points as well as elements of a function space (a reproducing kernel Hilbert space). The algebraic vector operations, inherited from VSA, correspond to well-defined operations in function space. Furthermore, we study a previously proposed method for encoding continuous data, fractional power encoding (FPE), which uses exponentiation of a random base vector to produce randomized representations of data points and fulfills the kernel properties for inducing a VFA. We show that the distribution from which elements of the base vector are sampled determines the shape of the FPE kernel, which in turn induces a VFA for computing with band-limited functions. In particular, VFAs provide an algebraic framework for implementing large-scale kernel machines with random features, extending Rahimi and Recht, 2007. Finally, we demonstrate several applications of VFA models to problems in image recognition, density estimation and nonlinear regression. Our analyses and results suggest that VFAs constitute a powerful new framework for representing and manipulating functions in distributed neural systems, with myriad applications in artificial intelligence.
... There are many computational models of analogy [20]. For example, LISA [21], DORA [22] and DRAMA [23] all explore neural models of versions of structure-mapping. It is unclear whether their relational capacity is sufficient to support visual reasoning. ...
Article
Visual reasoning tasks involving comparison provide interesting insights into how people make similarity and difference judgments. This review summarizes work that provides evidence that the same structure-mapping comparison processes that appear to be used elsewhere in cognition can also be used to model comparison in human visual reasoning tasks. These models rely on qualitative representations, which provide symbolic descriptions of continuous properties, an important kind of relational representation. Cognitive simulations of multiple human visual reasoning tasks, using the same model of high-level vision to compute relational representations, achieve human-like performance, both in terms of accuracy and estimating the relative difficulty of problems.
... Holographic memory has also been used to model analogical reasoning (Plate, 2000; Eliasmith & Thagard, 2001) and how humans perform simple problem-solving tasks such as playing rock, paper, scissors (DSHM; Rutledge-Taylor et al., 2014) or solving Raven's progressive matrices (Eliasmith, 2013). Knowledge in SPAUN, the world's largest functional brain model (Eliasmith, 2013), is represented using holographic memory. ...
Thesis
Full-text available
Computational memory models can explain the behaviour of human memory in diverse experimental paradigms—whether it be recall or recognition, short-term or long-term retention, implicit or explicit learning. Simulation has led to parsimonious theories of memory, but at a cost of a profusion of competing models. As different models focus on different phenomena, there is no best model. However, the models share many characteristics, indicating wide agreement on the mathematics of how memory works in the brain. On the basis of an analysis of computational memory models, we argue that these models can be understood in terms of a single neurally-plausible computational and theoretical framework. We present a proof of concept neural implementation, integration with the ACT-R cognitive architecture, and demonstrate model performance on procedural, declarative, episodic, and semantic learning tasks. This research aims to advance cognitive psychology towards a single integrated, computational model of human memory that can account for human performance on diverse experimental tasks, that can be implemented at a neural level of detail, and can be scaled to modelling arbitrarily long-term learning.
... The STAR architecture of [24] used tensor product representations of structured data to perform simple analogies of the form R(x, y) ⇒ S(f(x), f(y)), though their method could not operate over higher-order structural inputs. Drama [25] was an implementation of the multi-constraint theory of analogy [12] that employed a holographic representation similar to tensor products to embed structure. While more capable of handling higher-order structure than STAR, Drama had difficulty when dealing with several propositions simultaneously due to the noisiness of its method for composing several distributed representations together. ...
Preprint
Analogy is core to human cognition. It allows us to solve problems based on prior experience, it governs the way we conceptualize new information, and it even influences our visual perception. The importance of analogy to humans has made it an active area of research in the broader field of artificial intelligence, resulting in data-efficient models that learn and reason in human-like ways. While analogy and deep learning have generally been considered independently of one another, the integration of the two lines of research seems like a promising step towards more robust and efficient learning techniques. As part of the first steps towards such an integration, we introduce the Analogical Matching Network; a neural architecture that learns to produce analogies between structured, symbolic representations that are largely consistent with the principles of Structure-Mapping Theory.
... Computer scientists compare understanding-related processing in humans and computers. They note that understanding involves (1) performance (Lindsay, 1963), (2) incremental steps (Moore & Newell, 1974), (3) efficient search and use of indexed information (Simon, 1980;Simon & Gilmartin, 1973), (4) networks of relations and the ability to perform in novel situations (Eliasmith & Thagard, 2001;Forbus & Gentner, 1997), and (5) information about the environment (Schank & Abelson, 1977;Simon, 1980). Some argue for unorganized encoding so analogies can be actively constructed upon retrieval (Veloso & Carbonell, 1993), while others suggest organized encoding for efficient identification and processing of relations (Simon, 1980). ...
Article
Understanding is at the core of higher-level information processing and has a long history in the cognitive sciences. It is often described as a complex phenomenon with many dimensions, which makes it difficult to define with precision. Many researchers have noted that understanding is often ill defined, indirectly addressed, or avoided altogether. This is particularly disappointing considering that understanding has been a topic of interest since the ancient Greeks. In order to address this problem with our understanding of understanding, we reviewed literature from philosophy, psychology, education, neuroscience, and computer science. Here we summarize insights from that review, focusing on similarities and differences across those domains, as well as implications for the nature, measurement, and modeling of understanding. http://www.cogsys.org/papers/ACSvol8/papers/article-8-3.pdf
... In addition to cognitive analytics as a general topic, primary concerns for analytics in general for C5ISR include developing abilities to handle unexpected events. Notably, some work has been done to date in this area, including: analogical reasoning [33], out-of-library considerations [34], and transfer learning [35]. However, capabilities are still limited as noted in [24]. ...
... Vector Symbolic Architectures (VSAs), a term coined by Gayler (2003; but see also Plate, 1995), are a set of techniques for instantiating and manipulating symbolic structures in distributed representations. VSAs have been used to successfully model a number of different cognitive processes (e.g., analogical mapping in Eliasmith & Thagard, 2001; letter position coding in Hannagan, Dupoux, & Christophe, 2011; semantic memory in Jones & Mewhort, 2007). It has been argued that VSAs provide a bridge between conventional symbolic modelling and both connectionist modelling (Rutledge-Taylor & West, 2008) and more realistic models of neural processing (Eliasmith, 2007). ...
Preprint
Full-text available
To achieve a full, theoretical understanding of a cognitive process, explanations of the process need to be provided at both symbolic (i.e., representational) and sub-symbolic levels of description. We argue that cognitive models implemented in vector-symbolic architectures (VSAs) intrinsically operate at both levels and thus provide a needed bridge. We characterize the sub-symbolic level of VSAs in terms of a small set of linear algebra operations. We characterize the symbolic level of VSAs in terms of cognitive processes, in particular how information is represented, stored, and retrieved, and classify vector-symbolic cognitive models in the literature according to their implementation of these processes. On the basis of our analysis, we speculate on avenues for future research, and suggest means for theoretical unification of existent models.
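
As a concrete reading of that claim, the sub-symbolic tool-kit of a typical VSA can be written down in a handful of numpy one-liners. This sketch uses a MAP-style algebra (bipolar vectors, elementwise product); the particular operation choices are assumptions for illustration, not the authors' formalism.

```python
import numpy as np

D = 4096
rng = np.random.default_rng(5)
rand_vec = lambda: rng.choice([-1.0, 1.0], D)   # random bipolar symbol

superpose = lambda *vs: np.sum(vs, axis=0)      # bundling (set-like storage)
bind = lambda a, b: a * b                       # elementwise product, self-inverse
permute = lambda a, k=1: np.roll(a, k)          # protects sequence position
sim = lambda a, b: np.dot(a, b) / D             # normalized similarity

a, b = rand_vec(), rand_vec()
pair = bind(a, b)
print(sim(bind(pair, a), b))   # 1.0: binding with a factor inverts it
print(round(sim(a, b), 3))     # ~0.0: random symbols are dissimilar
```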
... As far as neurological research is concerned, Boroojerdi et al. (2001) found that the prefrontal cortex participates in analogical reasoning, consistent with findings that reasoning involving complex relations requires the participation of the left prefrontal cortex (Christoff et al., 2001; Kroger et al., 2002). Other specialists have created computational models of analogy using artificial neural networks whose behavior approximates that of biological neural networks (Hummel & Holyoak, 1997, 2003; Eliasmith & Thagard, 2001). ...
Article
About the representational nature of the mind. The representational mind. Abstract: According to cognitive science, thinking is understood in terms of structures of mental representations on which computational processes operate. The representational-computational model of the mind resorts to a complex triadic analogy that links mind, brain, and computers. Most of these models are symbolic, although there are also non-symbolic representational models (connectionism) and non-representational cognitive models of the mind. The analysis of the various cognitive approaches to representations and mental processes within the framework of cognitive science, and of their advantages and limitations, reveals that these approaches need not be mutually exclusive and in many cases complement each other, although the lack of a unified theory on the matter is also noted. After considering the weak points of both the classic computational symbolic model and connectionism, while acknowledging the significant progress both have brought to the study of the mind, we conclude that there is still no computational model with the representational capacity to cover the whole of human thinking.
... There are a number of computer models of analogy processing, with a great deal of associated experimental research validating some of the assumptions of these models (Gentner and Forbus 2011; Holyoak and Thagard 1995). Neurologically plausible models of analogy processing have been developed (Eliasmith and Thagard 2001; Knowlton et al. 2012), and there is research on neural correlates of analogy and metaphor processing (e.g., Bassok et al. 2012; Chettih et al. 2012; Green et al. 2012; Knowlton et al. 2012; Maguire et al. 2012; Prat et al. 2012). ...
Article
Purpose Agent-based models are typically “simple-agent” models, in which agents behave according to simple rules, or “complex-agent” models which incorporate complex models of cognitive processes. I argue that there is also an important role for agent-based computer models in which agents incorporate cognitive models of moderate complexity. In particular, I argue that such models have the potential to bring insights from the humanistic study of culture into population-level modeling of cultural change. Methods I motivate my proposal in part by describing an agent-based modeling framework, POPCO, in which agents’ communication of their simulated beliefs depends on a model of analogy processing implemented by artificial neural networks within each agent. I use POPCO to model a hypothesis about causal relations between cultural patterns proposed by Peggy Sanday. Results In model 1, empirical patterns like those reported by Sanday emerge from the influence of analogies on agents’ communication with each other. Model 2 extends model 1 by allowing the components of a new analogy to diffuse through the population for reasons unrelated to later effects of the analogy. This illustrates a process by which novel cultural features might arise. Conclusions The inclusion of relatively simple cognitive models in agents allows modeling population-level effects of inferential and cultural coherence relations, including symbolic cultural relationships. I argue that such models of moderate complexity can illuminate various causal relationships involving cultural patterns and cognitive processes.
... As described earlier, ABSURDIST offers a complementary approach to analogical reasoning between domains. Most existing models of analogical comparison, including SME, SIAM, LISA, Drama, and ACME (Eliasmith & Thagard, 2001; Falkenhainer et al., 1989; Goldstone, 1994; Holyoak & Thagard, 1989; Hummel & Holyoak, 1997, 2003), represent the domains to be compared in terms of richly structured propositions. This is a useful strategy when the knowledge of a domain can be easily and unambiguously expressed in terms of symbolic predicates, attributes, functions, and higher-order relations. ...
Chapter
Full-text available
Category learning not only depends upon perceptual and semantic representations; it also leads to the generation of these representations. We describe two series of experiments that demonstrate how categorization experience alters, rather than simply uses, descriptions of objects. In the first series, participants first learned to categorize objects on the basis of particular sets of line segments. Subsequently, participants were given a perceptual part/whole judgment task. Categorization training influenced participants’ part/whole judgments, indicating that whole objects were more likely to be broken down into parts that were relevant during categorization. In the second series, correlations were created or broken between semantic features of word concepts (e.g., ferocious vs. timid and group-oriented vs. solitary animals). The best transfer was found between category learning tasks that shared the same semantic organization of concepts. Together, the experiments support models of category learning that simultaneously create the elements of categorized objects’ descriptions and associate those elements with categories.
... Symbolic systems made use of symbolic logic, means-ends analysis, and classical logical techniques [1,5,21]. Connectionist systems made use of networks, with spreading activation and backpropagation building networks of similarity between domains [4,12]. Hybrid models often combined other models and made use of an agent-based, distributed structure [16,17]. ...
Conference Paper
Full-text available
It has been demonstrated that computational evolution can be utilised in the creation of aesthetic analogies between two artistic domains by the use of mapping expressions. When given an artistic input these mapping expressions can be used to guide the generation of content in a separate domain. For example, a piece of music can be used to create an analogous visual display. In this paper we examine the implementation and performance of such a system. We explore the practical implementation of real-time evaluation of evolved mapping expressions, possible musical input and visual output approaches, and the challenges faced therein. We also present the results of an exploratory study testing the hypothesis that an evolved mapping expression between the measurable attributes of musical and visual harmony will produce an improved aesthetic experience compared to a random mapping expression. Expressions of various fitness values were used and the participants were surveyed on their enjoyment, interest, and fatigue. The results of this study indicate that further work is necessary to produce a strong aesthetic response. Finally, we present possible approaches to improve the performance and artistic merit of the system.
... Gentner & Markman (1992) stated that the ability of a connectionist system to perform analogical reasoning would constitute a watershed moment. Numerous papers exist demonstrating the ability of VSAs to solve simple analogies such as the following where the goal is to retrieve " Peso " (Eliasmith & Thagard, 2001; R. W. Gayler & Levy, 2009; R. W. Gayler & Sandin, 2013; Halford, Wiles, Humphreys, & Wilson, 1993; Kanerva, 2010; Plate, 1994 Plate, , 2000 W. H. Wilson, Street, & Halford, 1995). 78 í µí±ˆí µí±›í µí±–í µí±¡í µí±’í µí±‘ í µí±†í µí±¡í µí±Ží µí±¡í µí±’í µí± ∶ í µí±€í µí±’í µí±¥í µí±–í µí±í µí±œ ⋮⋮ í µí°·í µí±œí µí±™í µí±™í µí±Ží µí±Ÿ ∶ ? ...
Article
This dissertation explores the implications of computational cognitive modeling for information retrieval. The parallel between information retrieval and human memory is that the goal of an information retrieval system is to find the set of documents most relevant to the query whereas the goal for the human memory system is to access the relevance of items stored in memory given a memory probe (Steyvers & Griffiths, 2010). The two major topics of this dissertation are desirability and information scent. Desirability is the context independent probability of an item receiving attention (Recker & Pitkow, 1996). Desirability has been widely utilized in numerous experiments to model the probability that a given memory item would be retrieved (Anderson, 2007). Information scent is a context dependent measure defined as the utility of an information item (Pirolli & Card, 1996b). Information scent has been widely utilized to predict the memory item that would be retrieved given a probe (Anderson, 2007) and to predict the browsing behavior of humans (Pirolli & Card, 1996b). In this dissertation, I proposed the theory that desirability observed in human memory is caused by preferential attachment in networks. Additionally, I showed that documents accessed in large repositories mirror the observed statistical properties in human memory and that these properties can be used to improve document ranking. Finally, I showed that the combination of information scent and desirability improves document ranking over existing well-established approaches.
... It is possible to model analogical mapping as a purely algorithmic process. However, we are concerned with physiological plausibility and consequently limit our attention to connectionist models of analogical mapping such as ACME (Holyoak & Thagard, 1989), AMBR (Kokinov, 1988), DRAMA (Eliasmith & Thagard, 2001), and LISA (Hummel & Holyoak, 1997). These models vary in their theoretical emphases and the details of their connectionist implementations, but they all share a problem in the scalability of the representation or construction of the connectionist mapping network. ...
Article
Full-text available
We are concerned with the practical feasibility of the neural basis of analogical mapping. All existing connectionist models of analogical mapping rely to some degree on localist representation (each concept or relation is represented by a dedicated unit/neuron). These localist solutions are implausible because they need too many units for human-level competence or require the dynamic re-wiring of networks on a sub-second time-scale. Analogical mapping can be formalised as finding an approximate isomorphism between graphs representing the source and target conceptual structures. Connectionist models of analogical mapping implement continuous heuristic processes for finding graph isomorphisms. We present a novel connectionist mechanism for finding graph isomorphisms that relies on distributed, high-dimensional representations of structure and mappings. Consequently, it does not suffer from the problems of the number of units scaling combinatorially with the number of concepts or requiring dynamic network re-wiring.
... Third, in a similar fashion, one might ask how much bending is sufficient for analogy work to be successful. Prior work on cognition suggests that structural consistency is an essential aspect of successful analogies (Gentner and Markman 1997, Eliasmith and Thagard 2001), yet insights from institutional theory would indicate that formal structure may at times be decoupled from activity (e.g., Meyer and Rowan 1977). As such, more research is needed to gain a full understanding of the nature and limitations of bending as an activity. ...
Article
Full-text available
Analogies to financial markets have proven powerful in establishing novel or potentially controversial business concepts, even in contexts that deviate significantly from financial markets. This phenomenon challenges theory that suggests analogies work best when elements from a source and target domain map closely to each other. To develop a theory that explains how organizations make initially imperfect analogies “work,” we use a case study of online advertising exchanges, a market-inspired model for buying and selling online advertising space. We find that as organizations stretch an initially misfitting exchange analogy from financial markets to online advertising, they iteratively bend their activities in superficial, structural, and generative ways to match the analogy and position themselves for advantage in the new space being created. Whereas prior studies emphasize shared cognition about familiar domains as the reason why analogies work, our study offers a dynamic account in which stretching, bending, and positioning combine to not only establish the financial market analogy but also subtly change the understanding of markets.
... Combination of vectors should not be simple addition and subtraction, but rather a more mathematically complicated operation such as circular convolution (Plate, 2003). Eliasmith and Thagard (2001) describe how convolution of vectors can represent many kinds of information. ...
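For reference, circular convolution binds two n-dimensional vectors into a third vector of the same dimensionality (indices taken modulo n):

\[
(x \circledast y)_j \;=\; \sum_{k=0}^{n-1} x_k \, y_{(j-k) \bmod n}, \qquad j = 0, \dots, n-1.
\]

The operation is commutative and associative, distributes over addition, and is approximately invertible, which is what suits it to binding role-filler pairs without growing the dimensionality of the representation.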
Chapter
Full-text available
This chapter discusses the relevance of emotion to urban planning and design, arguing for the following conclusions. Urban planning requires values, which are emotional mental/neural representations of things and situations. Decisions about how to design cities are based on emotional coherence. The concepts used in urban planning are best understood as a new kind of neural representation called semantic pointers. The social processes of planning cities and living in them can be modeled using multi-agent systems, where the agents are emotional. The cognitive, emotional, and social mechanisms underlying urban development are complex in that they are nonlinear, emergent, chaotic, synergistic, amplified by feedback loops, and result in tipping points.
... A first step is to develop a model of problem restructuring using a "reaction network" inspired model that has as its basic unit not catalytic molecules but interacting concepts. There are various methods for going about this, for example using Concat or Holographic Reduced Representations to computationally model the convolution or 'twisting together' of mental representations (Aerts, Czachor, & De Moor, 2009; Eliasmith & Thagard, 2001; Thagard & Stewart, 2011). Another promising route is to use a quantum-inspired theory of concepts such as SCOP that incorporates the notion of context-driven actualization of potential (Aerts & Gabora, 2005a,b; Gabora & Aerts, 2002a,b). ...
Chapter
Full-text available
The speed and transformative power of human cultural evolution is evident from the change it has wrought on our planet. This chapter proposes a human computation program aimed at (1) distinguishing algorithmic from non-algorithmic components of cultural evolution, (2) computationally modeling the algorithmic components, and amassing human solutions to the non-algorithmic (generally, creative) components, and (3) combining the two to develop human-machine hybrids with previously unforeseen computational power that can be used to solve real problems. Drawing on recent insights into the origins of evolutionary processes from biology and complexity theory, human minds are modeled as self-organizing, interacting, autopoietic networks that evolve through a Lamarckian (non-Darwinian) process of communal exchange. Existing computational models as well as directions for future research are discussed.
... This model builds on Hinton's earlier "family tree" simulations (1989) as well as R&M's (2008). Our model differs from previous computational models of analogy/metaphor (e.g., Falkenhainer et al., 1989; Eliasmith & Thagard, 2001; Hummel & Holyoak, 1997) in that the conceptual representations are entirely learned and distributed. ...
Conference Paper
Full-text available
Metaphors are pervasive in our discussions of abstract and complex ideas (Lakoff & Johnson, 1980), and have been shown to be instrumental in problem solving and building new conceptual structure (e.g., Gentner & Gentner, 1983; Nersessian, 1992; Boroditsky, 2000). In this paper we look at the role of metaphor in framing social issues. Our language for discussing war, crime, politics, healthcare, and the economy is suffused with metaphor (Schön, 1993; Lakoff, 2002). Does the way we reason about such important issues as crime, war or the economy depend on the metaphors we use to talk about these topics? Might changing metaphors lead us to different conceptions and in turn different social policies? In this paper we focused on the domain of crime and asked whether two different metaphorical systems we have for talking about crime can lead people to different ways of approaching and reasoning about it. We find that framing the issue of crime metaphorically as a predator yielded systematically different suggestions for solving the crime problem than when crime was described as a virus. We then present a connectionist model that explores the mechanistic underpinnings of the role of metaphor.
... HDC is based on the idea that the distances between concepts in our minds correspond to distances between points in a very high-dimensional space (Kanerva, 2009). Since its introduction, HDC has been used successfully in the modeling of analogical processing (Plate, 1995; see also Eliasmith & Thagard, 2001), latent semantic analysis (Kanerva et al., 2000), multimodal data fusion and prediction (Räsänen & Kakouros, 2014), robotics (Jockel, 2010), and cognitive architectures (Rachkovskij et al., 2013; see also Levy & Gayler, 2008; Kelly & West, 2012), as it successfully bridges the gap between symbolic processing and connectionist systems. ...
Conference Paper
Full-text available
Hyperdimensional computing (HDC) refers to the representation and manipulation of data in a very high dimensional space using random vectors. Due to the high dimensionality, vectors of the space can code large amounts of information in a distributed manner, are robust to variation, and are easily distinguished from random noise. More importantly, HDC can be used to represent compositional and hierarchical relationships and recursive operations between entities using fixed-size representations, making it intriguing from a cognitive modeling point of view. However, the majority of the existing work in this area has focused on modeling discrete categorical data. This paper presents a new method for mapping continuous-valued multivariate data into hypervectors, enabling construction of compositional representations from non-categorical data. The mapping is studied in a word classification task, showing how rich distributed representations of spoken words can be encoded using HDC-based representations.
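As one illustration of the general problem the paper addresses, the sketch below uses a simple "level encoding" scheme that is common in the HDC literature, not necessarily the paper's own method: a continuous scalar is quantized, and each successive level flips a fresh slice of components, so nearby values receive highly similar hypervectors while the ends of the range are roughly orthogonal.

```python
# Level encoding of a continuous scalar into a bipolar hypervector
# (a generic HDC technique; the range, level count, and dimensionality
# below are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(1)
D = 10_000          # hypervector dimensionality
LEVELS = 64         # quantization levels across the value range
LO, HI = 0.0, 1.0   # assumed value range

# Each level flips D // (2 * LEVELS) previously untouched components
# of its predecessor, so the two ends of the range differ in ~D/2
# components and are therefore nearly orthogonal.
base = rng.choice([-1, 1], D)
flip = D // (2 * LEVELS)
perm = rng.permutation(D)
levels = [base.copy()]
for i in range(1, LEVELS):
    v = levels[-1].copy()
    v[perm[(i - 1) * flip : i * flip]] *= -1
    levels.append(v)

def encode(x):
    # Quantize x to a level index and return that level's hypervector
    i = int(np.clip((x - LO) / (HI - LO) * (LEVELS - 1), 0, LEVELS - 1))
    return levels[i]

sim = lambda a, b: np.dot(a, b) / D
print(sim(encode(0.50), encode(0.52)))  # nearby values: similarity near 1
print(sim(encode(0.0), encode(1.0)))    # range endpoints: similarity near 0
```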
... Cognitive models based on holographic memory can explain and predict a variety of human memory phenomena, such as the serial position curve in free recall (Franklin & Mewhort, 2015). Holographic memory has also been used to model analogical reasoning (Plate, 2000; Eliasmith & Thagard, 2001) and how humans perform simple problem-solving tasks such as playing rock, paper, scissors (DSHM; Rutledge-Taylor et al., 2014) or solving Raven's progressive matrices (Eliasmith, 2013). Knowledge in SPAUN, the world's largest functional brain model (Eliasmith, 2013), is represented using holographic memory. ...
Conference Paper
Full-text available
We present Holographic Declarative Memory (HDM), a new memory module for ACT-R and alternative to ACT-R's Declarative Memory (DM). ACT-R is a widely used cognitive architecture that models many different aspects of cognition, but is limited by its use of symbols to represent concepts or stimuli. HDM replaces the symbols with holographic vectors. Holographic vectors retain the expressive power of symbols but have a similarity metric, allowing for shades of meaning, fault tolerance, and lossy compression. The purpose of HDM is to enhance ACT-R's ability to learn associations, learn over the long-term, and store large quantities of data. To demonstrate HDM, we fit performance of an ACT-R model that uses HDM to a benchmark memory task, the fan effect. We analyze how HDM produces the fan effect and how HDM relates to the standard DM model of the fan effect.
... Within the cognitive science community, vector symbolic approaches to reasoning have been developed and evaluated for their ability to perform tasks traditionally accomplished by discrete symbolic models, often motivated by Fodor and Pylyshyn's critique concerning the limited ability of existing connectionist models of cognition to address such issues (Fodor and Pylyshyn, 1988). One prominent area of application concerns analogical reasoning (Plate, 1994, 2000; Kanerva et al., 2001; Eliasmith and Thagard, 2001). This work demonstrates that analogical mapping can be accomplished by a VSA trained on a small set of propositions that have been deliberately constructed to represent a well-defined analogical reasoning problem. ...
Article
Full-text available
See http://jigpal.oxfordjournals.org/content/early/2014/12/03/jigpal.jzu028.short This article describes the use of continuous vector space models for reasoning with a formal knowledge base. The practical significance of these models is that they support fast, approximate but robust inference and hypothesis generation, which is complementary to the slow, exact, but sometimes brittle behaviour of more traditional deduction engines such as theorem provers. The article explains the way logical connectives can be used in semantic vector models, and summarizes the development of Predication-based Semantic Indexing, which involves the use of Vector Symbolic Architectures to represent the concepts and relationships from a knowledge base of subject-predicate-object triples. Experiments show that the use of continuous models for formal reasoning is not only possible, but already demonstrably effective for some recognized informatics tasks, and showing promise in other traditional problem areas. Examples described in this article include: predicting new uses for existing drugs in biomedical informatics; removing unwanted meanings from search results in information retrieval and concept navigation; type inference from attributes; comparing words based on their orthography; and representing tabular data, including modelling numerical values. The algorithms and techniques described in this article are all publicly released and freely available in the Semantic Vectors open-source software package.
... It is important to note that this is not the first time that HRRs have been used to create language models. Eliasmith and Thagard [11] have previously shown that HRRs can be used to model both syntactic and semantic psychological data. As well, Eliasmith [10] has shown that HRRs can be successfully applied to model cognitive language processing. ...
... (Barsalou, 2008; Harnad, 1990), and randomly initialized structures that become gradually associated in a system-wide memory through learning (Emruli and Sandin, 2014; Kanerva, 1988). For related discussions see Eliasmith and Thagard (2001); Neumann (2002); Plate (1995); Sutton and Whitehead (1993). In the following sections of this paper, we present the proposed communication architecture, the simulation results demonstrating that the approach enables interoperability in the form of learning context-dependent prediction of state transitions, and related work. ...
Article
The rapid integration of physical systems with cyberspace infrastructure, the so-called Internet of Things, is likely to have a significant effect on how people interact with the physical environment and design information and communication systems. Internet-connected systems are expected to vastly outnumber people on the planet in the near future, leading to grand challenges in software engineering and automation in application domains involving complex and evolving systems. Several decades of artificial intelligence research suggests that conventional approaches to making such systems automatically interoperable using handcrafted “semantic” descriptions of services and information are difficult to apply. In this paper we outline a bioinspired learning approach to creating interoperable systems, which does not require handcrafted semantic descriptions and rules. Instead, the idea is that a functioning system (of systems) can emerge from an initial pseudorandom state through learning from examples, provided that each component conforms to a set of information coding rules. We combine a binary vector symbolic architecture (VSA) with an associative memory known as sparse distributed memory (SDM) to model context-dependent prediction by learning from examples. We present simulation results demonstrating that the proposed architecture can enable system interoperability by learning, for example by human demonstration.
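A minimal sketch of the general idea, assuming binary vectors, XOR binding, and a brute-force nearest-neighbor lookup standing in for the sparse distributed memory (this is my illustration, not the authors' implementation): a context-state pair is bound into a cue, and the memory returns the next state stored under the most similar cue.

```python
# Context-dependent prediction with a binary VSA: bind context and
# state with XOR, then retrieve the stored next state whose cue is
# nearest in Hamming distance. All entity names are illustrative.
import numpy as np

rng = np.random.default_rng(2)
D = 8192
rand_vec = lambda: rng.integers(0, 2, D, dtype=np.uint8)

ctx_kitchen, ctx_garage = rand_vec(), rand_vec()
s_lights_off, s_lights_on, s_door_open = rand_vec(), rand_vec(), rand_vec()

# Training examples: the same state leads to different successors
# depending on the context it is observed in.
memory = [
    (ctx_kitchen ^ s_lights_off, s_lights_on),
    (ctx_garage ^ s_lights_off, s_door_open),
]

def predict(context, state):
    cue = context ^ state
    dists = [np.count_nonzero(cue ^ stored_cue) for stored_cue, _ in memory]
    return memory[int(np.argmin(dists))][1]

# In the kitchen, "lights off" predicts "lights on"
print(np.array_equal(predict(ctx_kitchen, s_lights_off), s_lights_on))  # True
```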
Article
Analogy is core to human cognition. It allows us to solve problems based on prior experience, it governs the way we conceptualize new information, and it even influences our visual perception. The importance of analogy to humans has made it an active area of research in the broader field of artificial intelligence, resulting in data-efficient models that learn and reason in human-like ways. While cognitive perspectives of analogy and deep learning have generally been studied independently of one another, the integration of the two lines of research is a promising step towards more robust and efficient learning techniques. As part of a growing body of research on such an integration, we introduce the Analogical Matching Network: a neural architecture that learns to produce analogies between structured, symbolic representations that are largely consistent with the principles of Structure-Mapping Theory.
Article
Full-text available
We present Graph Mapping – a simple and effective computerized test of fluid intelligence (reasoning ability). The test requires structure mapping – a key component of the reasoning process. Participants are asked to map a pair of corresponding nodes across two mathematically isomorphic but visually different graphs. The test difficulty can be easily manipulated – the more structurally complex and visually dissimilar the graphs, the higher the response error rate. Graph Mapping offers high flexibility in item generation, ranging from trivial to extremely difficult items, supporting progressive item sequences suitable for correlational studies. It also allows multiple item instances (clones) at a fixed difficulty level as well as full item randomization, both particularly suitable for within-subject experimental designs, longitudinal studies, and adaptive testing. The test has short administration times and is unfamiliar to participants, yielding practical advantages. Graph Mapping has excellent psychometric properties: Its convergent validity and reliability are comparable to the three leading traditional fluid reasoning tests. The convenient software allows a researcher to design the optimal test variant for a given study and sample. Graph Mapping can be downloaded from: https://osf.io/wh7zv/
Article
One of the central issues in cognitive science is the nature of human representations. We argue that symbolic representations are essential for capturing human cognitive capabilities. We start by examining some common misconceptions found in discussions of representations and models. Next we examine evidence that symbolic representations are essential for capturing human cognitive capabilities, drawing on the analogy literature. Then we examine fundamental limitations of feature vectors and other distributed representations that, despite their recent successes on various practical problems, suggest that they are insufficient to capture many aspects of human cognition. After that, we describe the implications for cognitive architecture of our view that analogy is central, and we speculate on roles for hybrid approaches. We close with an analogy that might help bridge the gap.
Article
Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before.
Chapter
This book addresses three areas of current and varied interest: common sense, reasoning, and rationality. While common sense and rationality often have been viewed as two distinct features in a unified cognitive map, this book offers novel, even paradoxical, views of the relationship. The book considers what constitutes human rationality, behavior, and intelligence, while covering diverse areas of philosophy, psychology, cognitive science, and computer science.
Article
There are two views of cognition in general and of language comprehension in particular. According to the traditional view (Chomsky, 1957; Fodor, 1983; Pylyshyn, 1986), the human mind is like a bricklayer, or maybe a contractor, who puts together bricks to build structures. The malleable clay of perception is converted to the neat mental bricks we call words and propositions, units of meaning, which can be used in a variety of structures. But whereas bricklayers and contractors presumably know how bricks are made, cognitive scientists and neuroscientists have no idea how the brain converts perceptual input to abstract lexical and propositional representations – it is simply taken as a given that this occurs (Barsalou, 1999). According to an alternative and emerging view, there are no clear demarcations between perception, action, and cognition. Interactions with the world leave traces of experience in the brain. These traces are (partially) retrieved and used in the mental simulations that make up cognition. Crucially, these traces bear a resemblance to the perceptual/action processes that generated them (Barsalou, 1999) and are highly malleable. Words and grammar are viewed as a set of cues that activate and combine experiential traces in the mental simulation of the described events (Zwaan, 2004). The main purpose of this chapter is to provide a discussion of this view of language comprehension. To set the stage for this discussion we first analyze a series of linguistic examples that present increasingly larger problems for the traditional view.
Conference Paper
Full-text available
We demonstrate that distributed vector representations are capable of hierarchical reasoning by summing sets of vectors representing hyponyms (subordinate concepts) to yield a vector that resembles the associated hypernym (superordinate concept). These distributed vector representations constitute a potentially neurally plausible model while demonstrating a high level of performance in many different cognitive tasks. Experiments were run using DVRS, a word embedding system designed for the Sigma cognitive architecture, and Word2Vec, a state-of-the-art word embedding system. These results contribute to a growing body of work demonstrating the various tasks on which distributed vector representations perform competently.
Article
The article re-engages with the 9th century CE temple complex of Prambanan, in Central Java, as a performance locus, discussing the different phases of a bodily interaction with the site from the reconstitution of its dance units, retrievable from the dance reliefs of the main temple, to an exploration of the temple-dance-site connection. The author proposes that archaeology can be conceived and experienced as an embodied and performative practice: the Prambanan site has been incorporated in the archaeological process of dance movement reconstitution and its re-embodiment. This in turn has enabled a choreography of the site through an exploration of the architecture/dance relationship, mutually inscribed as a corporeality.
Data
Full-text available
Supplement to the paper BICA 2013
Article
Analogical cognition refers to the ability to detect, process, and learn from relational similarities. The study of analogical and similarity cognition is widely considered one of the ‘success stories’ of cognitive science, exhibiting convergence across many disciplines on foundational questions. Given the centrality of analogy to mind and knowledge, it would benefit philosophers investigating topics in epistemology and the philosophies of mind and language to become familiar with empirical models of analogical cognition. The goal of this essay is to describe recent empirical work on analogical cognition as well as model applications to philosophical topics. Topics to be discussed include the epistemological distinction between implicit knowledge and explicit knowledge, the debate between empiricists and nativists, the frame problem, expertise, creativity and autism, cognitive architecture, and relational knowledge. Particular attention is given to Dedre Gentner and colleagues’ structure-mapping theory – the most developed and widely accepted model of analogical cognition.
Article
Full-text available
Binocular depth perception, or stereopsis, depends on matching corresponding points in two images taken from two vantage points. In random-dot stereograms the features to be matched are individual pixels. We have used the recurrent backpropagation learning algorithm of Pineda (1987) to construct network models with lateral and feedback connections that can solve the correspondence problem for random-dot stereograms. The network learned the uniqueness and continuity constraints originally proposed by Marr and Poggio (1976) from a training set of dense random-dot stereograms. We also constructed networks that can solve sparse random-dot stereograms of transparent surfaces. The success of the learning algorithm depended on taking advantage of translation invariance and restrictions on the range of interactions.
Article
Full-text available
A set of hypotheses is formulated for a connectionist approach to cognitive modeling. These hypotheses are shown to be incompatible with the hypotheses underlying traditional cognitive models. The connectionist models considered are massively parallel numerical computational systems that are a kind of continuous dynamical system. The numerical variables in the system correspond semantically to fine-grained features below the level of the concepts consciously used to describe the task domain. The level of analysis is intermediate between those of symbolic cognitive models and neural models. The explanations of behavior provided are like those traditional in the physical sciences, unlike the explanations provided by symbolic models. Higher-level analyses of these connectionist models reveal subtle relations to symbolic models. Parallel connectionist memory and linguistic processes are hypothesized to give rise to processes that are describable at a higher level as sequential rule application. At the lower level, computation has the character of massively parallel satisfaction of soft numerical constraints; at the higher level, this can lead to competence characterizable by hard rules. Performance will typically deviate from this competence since behavior is achieved not by interpreting hard rules but by satisfying soft constraints. The result is a picture in which traditional and connectionist theoretical constructs collaborate intimately to provide an understanding of cognition.
Article
Full-text available
This article describes an integrated theory of analogical access and mapping, instantiated in a computational model called LISA (Learning and Inference with Schemas and Analogies). LISA represents predicates and objects as distributed patterns of activation that are dynamically bound into propositional structures, thereby achieving both the flexibility of a connectionist system and the structure sensitivity of a symbolic system. The model treats access and mapping as types of guided pattern classification, differing only in that mapping is augmented by a capacity to learn new correspondences. The resulting model simulates a wide range of empirical findings concerning human analogical access and mapping. LISA also has a number of inherent limitations, including capacity limits, that arise in human reasoning and suggests a specific computational account of these limitations. Extensions of this approach also account for analogical inference and schema induction.
Article
Full-text available
Connectionism and classicism, it generally appears, have at least this much in common: both place some notion of internal representation at the heart of a scientific study of mind. In recent years, however, a much more radical view has gained increasing popularity. This view calls into question the commitment to internal representation itself. More strikingly still, this new wave of anti-representationalism is rooted not in armchair theorizing but in practical attempts to model and understand intelligent, adaptive behavior. In this paper we first present, and then critically assess, a variety of recent anti-representationalist treatments. We suggest that so far, at least, the sceptical rhetoric outpaces both evidence and argument. Some probable causes of this premature scepticism are isolated. Nonetheless, the anti-representationalist challenge is shown to be both important and progressive insofar as it forces us to see beyond the bare representational/non-representational dichotomy and to recognize instead a rich continuum of degrees and types of representationality.
Article
Full-text available
An algebraic characterization of convolution and correlation is outlined. The basic algebraic structures generated on a suitable vector space by the two operations are described. The convolution induces an associative Abelian algebra over the real field; the correlation induces a non-associative, non-commutative, but Lie-admissible, algebra with a left unity. The algebraic connection between the two algebras is found to coincide with the relation of isotopy, an extension of the concept of equivalence. The interest of these algebraic structures with respect to information processing is discussed.
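For concreteness, and using the circular convolution defined earlier, circular correlation is

\[
(x \,\#\, y)_j \;=\; \sum_{k=0}^{n-1} x_k \, y_{(j+k) \bmod n},
\]

and the two operations are linked through the involution \(x^{*}_k = x_{(-k) \bmod n}\) by the standard identity \(x \,\#\, y = x^{*} \circledast y\), which is why correlation serves as the approximate decoding operation in holographic memories.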
Conference Paper
Full-text available
Neural network models have been criticized for their inability to make use of compositional representations. In this paper, we describe a series of psychological phenomena that demonstrate the role of structured representations in cognition. These findings suggest that people compare relational representations via a process of structural alignment. This process will have to be captured by any model of cognition, symbolic or subsymbolic.
Chapter
Full-text available
There is now a reasonable amount of consensus that an analogy entails a mapping from one structure, the base or source, to another structure, the target (Gentner, 1983, 1989; Holyoak & Thagard, 1989). Theories of human analogical reasoning have been reviewed by Gentner (1989), who concludes that there is basic agreement on the one-to-one mapping of elements and the carry-over of predicates. Furthermore, as Palmer (1989) points out, some of the theoretical differences represent different levels of description rather than competing models. Despite this consensus about the central role of structure mapping, it really only treats the syntax of analogies, and there are also important pragmatic factors, as has been pointed out by Holland, Holyoak, Nisbett, and Thagard (1986) and Holyoak and Thagard (1989). However, in this chapter we are primarily concerned with the problem of how to model the structure mapping or syntactic component of analogical reasoning in terms of parallel distributed processing (PDP) architectures. According to Gentner (1983), attributes are not normally mapped in analogies, and only certain relations are mapped, the selection being based on systematicity, or the degree to which relations enter into a coherent structure. Gentner (1983) defines an attribute as a predicate taking one argument, whereas a relation is a predicate taking two arguments. Strictly, this only covers binary relations; in general, a relation is a predicate taking two or more arguments, so ternary relations have three arguments, quaternary relations four arguments, and so on. For our purposes a predicate is essentially an N-place relation; it can be defined as an N-place function from the Cartesian product of the N sets to the set {T,F}. This includes unary relations, which are predicates with one argument, and are equivalent to attributes in Gentner's terms. Our derivations based on relations can be applied to functions.
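In symbols, the definition of a predicate given above is

\[
P : A_1 \times A_2 \times \cdots \times A_N \longrightarrow \{T, F\},
\]

with attributes as the N = 1 case and binary relations as the N = 2 case.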
Article
Full-text available
Kosslyn (psychology, Harvard U.) presents a 20-year research program on the nature of high-level vision and mental imagery--offering his research as a definitive resolution of the long-standing "imagery debate," which centers on the nature of the internal representation of visual mental imagery. He combines insights and empirical results from computer vision, neurobiology, and cognitive science to develop a general theory of visual mental imagery, its relation to visual perception, and its implementation in the human brain.
Article
Full-text available
The barn owl accurately localizes sounds in the azimuthal plane, using interaural time difference as a cue. The time-coding pathway in the owl's brainstem encodes a neural map of azimuth, by processing interaural timing information. We have built a silicon model of the time-coding pathway of the owl. The integrated circuit models the structure as well as the function of the pathway; most subcircuits in the chip have an anatomical correlate. The chip computes all outputs in real time, using analog, continuous-time processing.
Chapter
The retinal image of a visual scene consists of a two-dimensional continuous distribution of grey levels. In order to identify particular figures or objects it needs to be determined which of the local luminance gradients result from particular objects and which are generated from the embedding background. Some grouping must be performed in order to associate these luminance distributions with the contours of a single object, to segregate signals from objects with overlapping contours from each other and from the signals generated by the background. These operations are commonly addressed as scene segmentation or figure-ground segregation. Because most of them are usually carried out subconsciously and do not require directing selective attention to particular features of the scene, these operations are called “preattentive visual processes” or “early visual processes” (for reviews and examples see Julesz 1971; Marr 1976; Treisman 1986; Ramachandran 1988).
Conference Paper
The chapter provides an alternative theory of the way visual representations and environmental cues are processed. The chapter argues against the common assumption that what we perceive is processed in a logical manner, similar to how we connect sentences through criteria of coherence, truth, and probability. The chapter proposes a different method of processing, similar to the way the human brain manages stimuli, wherein input vectors go through a large mesh of synaptic connections, initiating a cycle of neural activation of certain packets throughout the entire system. With this alternative view, a discussion on the process of stereo vision is undertaken, which aims to explain how we are able to perceive objects in three-dimensional space, given what we know of human anatomy, neuroscience, and the functional capacity of binocular vision. In the succeeding sections, several computational models of the proposed theory are discussed in detail, each with their pros and cons.
Article
To study productive thinking where it is most conspicuous in great achievements is certainly a temptation, and without a doubt, important information about the genesis of productive thought could be found in biographical material. A problem arises when a living creature has a goal but does not know how this goal is to be reached. Whenever one cannot go from the given situation to the desired situation simply by action, then there has to be recourse to thinking. The subjects (Ss), who were mostly students of universities or of colleges, were given various thinking problems, with the request that they think aloud. This instruction, "Think aloud", is not identical with the instruction to introspect which has been common in experiments on thought-processes. While the introspecter makes himself as thinking the object of his attention, the subject who is thinking aloud remains immediately directed to the problem, so to speak allowing his activity to become verbal. It is the shift of function of the components of a complex mathematical pattern—a shift which must so often occur if a certain structure is to be recognized in a given pattern—it is this restructuration, more precisely: this transformation of function within a system, which causes more or less difficulty for thinking, as one individual or another tries to find a mathematical proof.
Article
In Discovering Complexity, William Bechtel and Robert Richardson examine two heuristics that guided the development of mechanistic models in the life sciences: decomposition and localization.
Article
An analysis of the process of analogical thinking predicts that analogies will be noticed on the basis of semantic retrieval cues and that the induction of a general schema from concrete analogs will facilitate analogical transfer. These predictions were tested in experiments in which subjects first read one or more stories illustrating problems and their solutions and then attempted to solve a disparate but analogous transfer problem. The studies in Part I attempted to foster the abstraction of a problem schema from a single story analog by means of summarization instructions, a verbal statement of the underlying principle, or a diagrammatic representation of it. None of these devices achieved a notable degree of success. In contrast, the experiments in Part II demonstrated that if two prior analogs were given, subjects often derived a problem schema as an incidental product of describing the similarities of the analogs. The quality of the induced schema was highly predictive of subsequent transfer performance. Furthermore, the verbal statements and diagrams that had failed to facilitate transfer from one analog proved highly beneficial when paired with two. The function of examples in learning was discussed in light of the present study.
Article
A remarkable ability of people is to easily understand new situations by analogy to old ones, to comprehend metaphors, and to solve problems based on previously solved, analogous problems. All of these may be considered abilities of analogical reasoning (AR). In artificial intelligence (AI), we would like to understand these so that we may capture them in intelligent machines.
Article
A general method, the tensor product representation, is defined for the connectionist representation of value/variable bindings. The technique is a formalization of the idea that a set of value/variable pairs can be represented by accumulating activity in a collection of units each of which computes the product of a feature of a variable and a feature of its value. The method allows the fully distributed representation of bindings and symbolic structures. Fully and partially localized special cases of the tensor product representation reduce to existing cases of connectionist representations of structured data. The representation rests on a principled analysis of structure; it saturates gracefully as larger structures are represented; it permits recursive construction of complex representations from simpler ones; it respects the independence of the capacities to generate and maintain multiple bindings in parallel; it extends naturally to continuous structures and continuous representational patterns; it permits values to also serve as variables; and it enables analysis of the interference of symbolic structures stored in associative memories. It has also served as the basis for working connectionist models of high-level cognitive tasks.
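As a worked micro-example of the scheme (my illustration, not drawn from the article): writing each binding as an outer product, a set of value/variable pairs is superimposed into one matrix, and any value is recovered by contracting with its variable, exactly when the variable (role) vectors are orthonormal:

\[
S \;=\; \sum_{i} r_i \otimes f_i \;=\; \sum_{i} r_i f_i^{\top},
\qquad
S^{\top} r_j \;=\; \sum_{i} f_i \,(r_i^{\top} r_j) \;=\; f_j
\quad \text{when } r_i^{\top} r_j = \delta_{ij}.
\]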
Article
Analogical reasoning processes were studied in third- and fourth-grade children. During acquisition, each child received analogs of one of the following seven-move scheduling problems: “farmer's dilemma,” “three missionaries/two cannibals,” or “tower of Hanoi.” On each presentation of a problem, the child heard a list of statements representing the exact series of moves necessary to solve the problem and was immediately asked to recall the list. Physical materials representing the problem were then produced and the child was asked to solve it. A trial was terminated and a new one begun when an error was made on the physical task. Following acquisition, the children were transferred to either an isomorphic analog or an analog from one of the other training conditions. During acquisition, the propositions representing the solution to each problem were acquired piecemeal and incrementally, but consolidation of the generalizable problem representation was abrupt. Isomorphic transfer was good in all conditions, but nonisomorphic transfer was unexpectedly asymmetrical. This finding was discussed in terms of each problem space, number of solution paths, and similarity relations.
Article
The study investigated the effect of transfer between two problems having similar (homomorphic) problem states. The results of three experiments revealed that although transfer occurred between repetitions of the same problem, transfer occurred between the Jealous Husbands problem and the Missionary-Cannibal problem only when (a) Ss were told the relationship between the two problems and (b) the Jealous Husbands problem was given first. The results are related to the formal structure of the problem space and to alternative explanations of the use of analogy in problem solving. These include memory for individual moves, memory for general strategies, and practice in applying operators.
Article
This paper provides a computational characterization of coherence that applies to a wide range of philosophical problems and psychological phenomena. Maximizing coherence is a matter of maximizing satisfaction of a set of positive and negative constraints. After comparing five algorithms for maximizing coherence, we show how our characterization of coherence overcomes traditional philosophical objections about circularity and truth.
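In that characterization (following the paper's formulation), the elements E are partitioned into an accepted set A and a rejected set R so as to maximize the total weight of satisfied constraints, where a positive constraint is satisfied when its two elements fall on the same side of the partition and a negative constraint when they fall on opposite sides:

\[
\max_{E = A \cup R} \; W \;=\;
\sum_{\substack{(i,j) \in C^{+} \\ \text{same side}}} w_{ij}
\;+\;
\sum_{\substack{(i,j) \in C^{-} \\ \text{opposite sides}}} w_{ij}.
\]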
Article
A theory of analogical mapping between source and target analogs based upon interacting structural, semantic, and pragmatic constraints is proposed here. The structural constraint of isomorphism encourages mappings that maximize the consistency of relational correspondences between the elements of the two analogs. The constraint of semantic similarity supports mapping hypotheses to the degree that mapped predicates have similar meanings. The constraint of pragmatic centrality favors mappings involving elements the analogist believes to be important in order to achieve the purpose for which the analogy is being used. The theory is implemented in a computer program called ACME (Analogical Constraint Mapping Engine), which represents constraints by means of a network of supporting and competing hypotheses regarding what elements to map. A cooperative algorithm for parallel constraint satisfaction identifies mapping hypotheses that collectively represent the overall mapping that best fits the interacting constraints. ACME has been applied to a wide range of examples that include problem analogies, analogical arguments, explanatory analogies, story analogies, formal analogies, and metaphors. ACME is sensitive to semantic and pragmatic information if it is available, and yet able to compute mappings between formally isomorphic analogs without any similar or identical elements. The theory is able to account for empirical findings regarding the impact of consistency and similarity on human processing of analogies.
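The cooperative algorithm referred to above repeatedly updates every unit's activation from its weighted neighbors until the network settles; reconstructed from the published description (parameter names are mine), one common statement of the update is

\[
a_j(t+1) \;=\; a_j(t)(1-d) \;+\;
\begin{cases}
\mathrm{net}_j \,\bigl(a_{\max} - a_j(t)\bigr) & \text{if } \mathrm{net}_j > 0,\\[2pt]
\mathrm{net}_j \,\bigl(a_j(t) - a_{\min}\bigr) & \text{otherwise,}
\end{cases}
\qquad
\mathrm{net}_j = \sum_i w_{ij}\, \max\bigl(a_i(t), 0\bigr),
\]

with d a decay parameter and activations clipped to [a_min, a_max].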
Article
A neural network learning procedure has been applied to the classification of sonar returns from two undersea targets, a metal cylinder and a similarly shaped rock. Networks with an intermediate layer of hidden processing units achieved a classification accuracy as high as 100% on a training set of 104 returns. These networks correctly classified up to 90.4% of 104 test returns not contained in the training set. This performance was better than that of a nearest neighbor classifier, which was 82.7%, and was close to that of an optimal Bayes classifier. Specific signal features extracted by hidden units in a trained network were identified and related to coding schemes in the pattern of connection strengths between the input and the hidden units. Network performance and classification strategy was comparable to that of trained human listeners.
Article
This research investigates the development of analogy: In particular, we wish to study the development of systematicity in analogy. Systematicity refers to the mapping of systems of mutually constraining relations, such as causal chains or chains of implication. A preference for systematic mappings is a central aspect of analogical processing in adults ([20] and [21]). This research asks two questions: Does systematicity make analogical mapping easier? And, if so, when, developmentally, do children become able to utilize systematicity? Children aged 5–7 and 8–10 acted out stories with toy characters. Then they were asked to act out the same stories with new characters. Two variables were manipulated: systematicity, or the degree of explicit causal structure in the original stories, and the transparency of the object-mappings. Transparency was manipulated by varying the similarity between the original characters and the corresponding new characters: it was included in order to vary the difficulty of the transfer task. If children can utilize systematicity, then their transfer accuracy should be greater for systematic stories. The results show: (1) As expected, transparency strongly influenced transfer accuracy (for both age groups, transfer accuracy dropped sharply as the object correspondences became less transparent); and (2) for the older group, there was also a strong effect of systematicity and an interaction between the two variables. Given a systematic story, 9-year-olds could transfer it accurately regardless of the transparency of the object correspondence.
Article
Cognitive grammar takes a nonstandard view of linguistic semantics and grammatical structure. Meaning is equated with conceptualization. Semantic structures are characterized relative to cognitive domains, and derive their value by construing the content of these domains in a specific fashion. Grammar is not a distinct level of linguistic representation, but reduces instead to the structuring and symbolization of conceptual content. All grammatical units are symbolic: Basic categories (e.g., noun and verb) are held to be notionally definable, and grammatical rules are analyzed as symbolic units that are both complex and schematic. These concepts permit a revealing account of grammatical composition with notable descriptive advantages.
Article
We describe a computational model of how analogs are retrieved from memory using simultaneous satisfaction of a set of semantic, structural, and pragmatic constraints. The model is based on psychological evidence suggesting that human memory retrieval tends to favor analogs that have several kinds of correspondences with the structure that prompts retrieval: semantic similarity, isomorphism, and pragmatic relevance. We describe ARCS, a program that demonstrates how these constraints can be used to select relevant analogs by forming a network of hypotheses and attempting to satisfy the constraints simultaneously. ARCS has been tested on several data bases that display both its psychological plausibility and computational power.
Article
Analogical reasoning has a long history in artificial intelligence research, primarily because of its promise for the acquisition and effective use of knowledge. Defined as a representational mapping from a known “source” domain into a novel “target” domain, analogy provides a basic mechanism for effectively connecting a reasoner's past and present experience. Using a four-component process model of analogical reasoning, this paper reviews sixteen computational studies of analogy. These studies are organized chronologically within broadly defined task domains of automated deduction, problem solving and planning, natural language comprehension, and machine learning. Drawing on these detailed reviews, a comparative analysis of diverse contributions to basic analogy processes identifies recurrent problems for studies of analogy and common approaches to their solution. The paper concludes by arguing that computational studies of analogy are in a state of adolescence: looking to more mature research areas in artificial intelligence for robust accounts of basic reasoning processes and drawing upon a long tradition of research in other disciplines.
Article
This paper describes the structure-mapping engine (SME), a program for studying analogical processing. SME has been built to explore Gentner's structure-mapping theory of analogy, and provides a “tool kit” for constructing matching algorithms consistent with this theory. Its flexibility enhances cognitive simulation studies by simplifying experimentation. Furthermore, SME is very efficient, making it a useful component in machine learning systems as well. We review the structure-mapping theory and describe the design of the engine. We analyze the complexity of the algorithm, and demonstrate that most of the steps are polynomial, typically bounded by O(N²). Next we demonstrate some examples of its operation taken from our cognitive simulation studies and work in machine learning. Finally, we compare SME to other analogy programs and discuss several areas for future work.
Article
A theory of analogy must describe how the meaning of an analogy is derived from the meanings of its parts. In the structure‐mapping theory, the interpretation rules are characterized as implicit rules for mapping knowledge about a base domain into a target domain. Two important features of the theory are (a) the rules depend only on syntactic properties of the knowledge representation, and not on the specific content of the domains; and (b) the theoretical framework allows analogies to be distinguished cleanly from literal similarity statements, applications of abstractions, and other kinds of comparisons. Two mapping principles are described: (a) Relations between objects, rather than attributes of objects, are mapped from base to target; and (b) The particular relations mapped are determined by systematicity, as defined by the existence of higher‐order relations.
Article
From the point of view of psychology and cognitive science, much of modern linguistics is too formal and mathematical to be of much use. The New Psychology of Language volumes broke new ground by introducing functional and cognitive approaches to language structure in terms already familiar to psychologists, thus defining the next era in the scientific study of language. The Classic Edition volumes re-introduce some of the most important cognitive and functional linguists working in the field. They include a new introduction by Michael Tomasello in which he reviews what has changed since the volumes first published and highlights the fundamental insights of the original authors. The New Psychology of Language volumes are a must-read for anyone interested in understanding how cognitive and functional linguistics has become the thriving perspective on the scientific study of language that it is today.
Article
In this toy model of the simplest form of categorization performed by neural nets, CP effects arise as a natural side-effect of the way these particular nets accomplish categorization. Whether the CP effect is universal or peculiar to some kinds of nets (cf. Grossberg 1984), whether the nets' capacity to do simple one-dimensional categorization will scale up to the full multidimensional categorization capacities of human beings, how the grounded labels of these sensory categories are to be combined into strings of symbols that function as propositions about higher-order category membership, and how the nonarbitrary "shape" constraints these symbols inherit from their grounding will affect the functioning of such a hybrid symbol system remain questions for future research. If these results can be generalized, however, the "warping" of analog similarity space may be a significant factor in grounding.
Article
The use of an analogy from a semantically distant domain to guide the problem-solving process was investigated. The representation of analogy in memory and processes involved in the use of analogies were discussed theoretically and explored in five experiments. In Experiment I oral protocols were used to examine the processes involved in solving a problem by analogy. In all experiments subjects who first read a story about a military problem and its solution tended to generate analogous solutions to a medical problem (Duncker's "radiation problem"), provided they were given a hint to use the story to help solve the problem. Transfer frequency was reduced when the problem presented in the military story was substantially disanalogous to the radiation problem, even though the solution illustrated in the story corresponded to an effective radiation solution (Experiment II). Subjects in Experiment III tended to generate analogous solutions to the radiation problem after providing their own solutions to the military problem. Subjects were able to retrieve the story from memory and use it to generate an analogous solution, even when the critical story had been memorized in the context of two distractor stories (Experiment IV). However, when no hint to consider the story was given, frequency of analogous solutions decreased markedly. This decrease in transfer occurred when the story analogy was presented in a recall task along with distractor stories (Experiment IV), when it was presented alone, and when it was presented in between two attempts to solve the problem (Experiment V). Component processes and strategic variations in analogical problem solving were discussed. Issues related to noticing analogies and accessing them in memory were also examined, as was the relationship of analogical reasoning to other cognitive tasks.