January 2025 · 6 Reads
October 2024 · 62 Reads
A common goal in cognitive science is to explain and predict human performance in experimental settings. This study proposes GEMS, a computational scientific discovery framework that automatically generates multiple models for verbal learning simulations. GEMS achieves this by combining simple and complex cognitive mechanisms with genetic programming. This approach evolves populations of interpretable cognitive agents, with each agent learning by chunking and incorporating long-term memory (LTM) and short-term memory (STM) stores, as well as attention and perceptual mechanisms. The models simulate two different verbal learning tasks: the first investigates the effect of prior knowledge on the learning rate of stimulus-response (S-R) pairs, and the second examines how backward recall is affected by the similarity of the stimuli. The models produced by GEMS are compared to both human data and EPAM, a different verbal learning model that utilises hand-crafted, task-specific strategies. The models automatically evolved by GEMS produced a good fit to the human data in both studies, improving on EPAM's measures of fit by almost a factor of three in some of the pattern recall conditions. These findings offer further support to the mechanisms proposed by chunking theory (Simon, 1974), connect them to the evolutionary approach, and make further inroads towards a Unified Theory of Cognition (Newell, 1990).
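As an illustration of the kind of evolutionary loop the abstract describes, the sketch below evolves short sequences of cognitive operators against a target error rate. It is not the GEMS implementation; the primitive names, the placeholder fitness function and all parameters are assumptions made purely for illustration.

```python
# Minimal sketch of a GP-style loop over cognitive strategies (illustrative only).
import random

# Hypothetical primitive operations an agent's strategy can be built from.
PRIMITIVES = ["attend_stimulus", "chunk_stm", "store_ltm", "retrieve_ltm", "respond"]


def random_strategy(max_len=6):
    """A strategy is modelled here as a short sequence of cognitive operators."""
    return [random.choice(PRIMITIVES) for _ in range(random.randint(2, max_len))]


def fitness(strategy, human_data):
    """Placeholder fitness: distance between simulated and human error rates.
    A real system would run the strategy through a cognitive simulation."""
    simulated_error = 1.0 / (1.0 + strategy.count("chunk_stm") + strategy.count("store_ltm"))
    return abs(simulated_error - human_data["error_rate"])


def evolve(human_data, pop_size=50, generations=100):
    population = [random_strategy() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half, then refill with point-mutated copies.
        population.sort(key=lambda s: fitness(s, human_data))
        survivors = population[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[random.randrange(len(child))] = random.choice(PRIMITIVES)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda s: fitness(s, human_data))


best = evolve({"error_rate": 0.35})
print(best)
```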
October 2024 · 11 Reads
July 2024 · 17 Reads
A fundamental issue in cognitive science concerns the interaction of the cognitive "how" operations with the genetic/memetic "why" processes, and the means by which this interaction results in constrained variability and individual differences. This study proposes GEVL, a single model that combines complex cognitive mechanisms with a genetic programming approach. The model evolves populations of cognitive agents, with each agent learning by chunking and incorporating long-term memory (LTM) and short-term memory (STM) stores, as well as attention. The model simulates two different verbal learning tasks: one investigates the effect of stimulus-response (S-R) similarity on the learning rate, and the other examines how learning time is affected by changes in stimulus presentation times. GEVL's results are compared to both human data and EPAM, a different verbal learning model that utilises hand-crafted, task-specific strategies. The semi-automatically evolved GEVL strategies produced a good fit to the human data in both studies, improving on EPAM's scores by as much as a factor of two in some of the pattern similarity conditions. These findings offer further support to the mechanisms proposed by chunking theory, connect them to the evolutionary approach, and make further inroads towards a Unified Theory of Cognition (Newell, 1990).
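The sketch below is a toy illustration, not GEVL itself, of one ingredient the abstract mentions: a chunking agent with a capacity-limited STM and a growing LTM whose learning depends on presentation time. The capacity, the consolidation rule and the nonsense-syllable pairs are all illustrative assumptions.

```python
# Toy chunking agent: longer presentation times speed up S-R learning (illustrative only).
import random

LEARN_TIME_PER_CHUNK = 8.0   # hypothetical seconds needed to commit a chunk to LTM
STM_CAPACITY = 4             # hypothetical short-term memory span (in chunks)


class ChunkingAgent:
    def __init__(self):
        self.stm = []        # most recent chunks, capacity-limited
        self.ltm = {}        # stimulus -> response chunks

    def study(self, stimulus, response, presentation_time):
        # Attend to the pair: it enters STM, displacing the oldest chunk if full.
        self.stm.append((stimulus, response))
        if len(self.stm) > STM_CAPACITY:
            self.stm.pop(0)
        # Longer presentation times give a higher chance of LTM consolidation.
        if random.random() < min(1.0, presentation_time / LEARN_TIME_PER_CHUNK):
            self.ltm[stimulus] = response

    def recall(self, stimulus):
        return self.ltm.get(stimulus)


def trials_to_learn(pairs, presentation_time):
    agent, trials = ChunkingAgent(), 0
    while any(agent.recall(s) != r for s, r in pairs):
        trials += 1
        for s, r in pairs:
            agent.study(s, r, presentation_time)
    return trials


pairs = [("DAX", "ZIF"), ("KEB", "MUR"), ("LUM", "POV")]
for t in (2.0, 4.0, 8.0):
    print(t, "s per item ->", trials_to_learn(pairs, t), "trials")
```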
November 2023 · 185 Reads · 2 Citations
Lecture Notes in Computer Science
How can we infer the strategies that human participants adopt to carry out a task? One possibility, which we present and discuss here, is to develop a large number of strategies that participants could have adopted, given a cognitive architecture and a set of possible operations. Subsequently, the (often many) strategies that best explain a dataset of interest are highlighted. To generate and select candidate strategies, we use genetic programming, a heuristic search method inspired by evolutionary principles. Specifically, combinations of cognitive operators are evolved and their performance compared against human participants' performance on a specific task. We apply this methodology to a typical decision-making task, in which human participants were asked to select the brighter of two stimuli. We discover several understandable, psychologically plausible strategies that offer explanations of participants' performance. The strengths, applications and challenges of this methodology are discussed.
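A minimal sketch of the representation the abstract suggests: strategies as combinations of cognitive operators for the brightness-choice task, scored by how well they reproduce a participant's choices. The operator set, the trial data and the scoring are hypothetical; the authors' actual operators and fitness measure are not reproduced here.

```python
# Candidate strategies as small compositions of cognitive operators (illustrative only).
import random

random.seed(1)  # make the stochastic parts of the example reproducible

# Assumed operator set: each takes the two brightness values and a decision state.
def compare(left, right, state):
    return "left" if left > right else "right"

def guess(left, right, state):
    return random.choice(["left", "right"])

def repeat_last(left, right, state):
    return state.get("last", "left")

OPERATORS = [compare, guess, repeat_last]


def run_strategy(strategy, trials):
    """A strategy is a sequence of operators; the last operator's output is the choice."""
    state, choices = {}, []
    for left, right in trials:
        choice = "left"
        for op in strategy:
            choice = op(left, right, state)
        state["last"] = choice
        choices.append(choice)
    return choices


def fit_to_humans(strategy, trials, human_choices):
    model_choices = run_strategy(strategy, trials)
    return sum(m == h for m, h in zip(model_choices, human_choices)) / len(trials)


# Hypothetical data: brightness pairs and the responses one participant gave.
trials = [(0.8, 0.3), (0.2, 0.9), (0.6, 0.5), (0.4, 0.7)]
human_choices = ["left", "right", "left", "right"]

candidates = [[random.choice(OPERATORS) for _ in range(2)] for _ in range(20)]
best = max(candidates, key=lambda s: fit_to_humans(s, trials, human_choices))
print([op.__name__ for op in best], fit_to_humans(best, trials, human_choices))
```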
July 2022 · 96 Reads · 21 Citations
Data Mining and Knowledge Discovery
Genetic programming (GP), a widely used Evolutionary Computing technique, suffers from bloat, the problem of excessive growth in the size of individuals. As a result, its ability to explore complex search spaces efficiently is reduced, and the resulting solutions are less robust and generalisable. Moreover, it is difficult to understand and explain models which contain bloat. This phenomenon is well researched, primarily from the angle of controlling bloat. Our focus in this paper is instead to review the literature from an explainability point of view, looking at how simplification can make GP models more explainable by reducing their size. Simplification is a code editing technique whose primary purpose is to make GP models more explainable; however, it can offer bloat control as an additional benefit when implemented and applied with caution. Researchers have proposed several simplification techniques and adopted various strategies to implement them. We organise the literature along multiple axes to identify the relative strengths and weaknesses of simplification techniques and to identify emerging trends and areas for future exploration. We highlight design and integration challenges and propose several avenues for research. One of them is to consider simplification as a standalone operator, rather than an extension of the standard crossover or mutation operators; its role is then more clearly complementary to other GP operators, and it can be integrated as an optional feature into an existing GP setup. Another proposed avenue is to address the limited use of complexity measures in simplification: so far, size is the most discussed measure, with only two pieces of prior work pointing out the benefits of using time as a measure when controlling bloat.
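To make the idea of simplification as a standalone operator concrete, here is a minimal sketch that rewrites a GP expression tree into a smaller, behaviourally equivalent one via constant folding and algebraic identities. The tuple encoding and the rule set are assumptions chosen for illustration, not code taken from the paper.

```python
# Simplification as a standalone editing operator on expression trees (illustrative only).
# Trees are nested tuples: ("+", a, b), ("*", a, b), a variable name, or a number.

def simplify(tree):
    if not isinstance(tree, tuple):
        return tree
    op, left, right = tree
    left, right = simplify(left), simplify(right)
    # Constant folding.
    if isinstance(left, (int, float)) and isinstance(right, (int, float)):
        return left + right if op == "+" else left * right
    # Algebraic identities that remove redundant sub-trees (a common source of bloat).
    if op == "+" and left == 0:
        return right
    if op == "+" and right == 0:
        return left
    if op == "*" and (left == 0 or right == 0):
        return 0
    if op == "*" and left == 1:
        return right
    if op == "*" and right == 1:
        return left
    return (op, left, right)


# A bloated individual: x * 1 + (0 + 2 * 3) simplifies to ("+", "x", 6).
bloated = ("+", ("*", "x", 1), ("+", 0, ("*", 2, 3)))
print(simplify(bloated))
```

Because such an operator leaves the tree's behaviour unchanged, it can be applied optionally and independently of crossover or mutation, which is the complementary role the review argues for.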
April 2022 · 22 Reads · 1 Citation
Lecture Notes in Computer Science
The performance of cloud computing depends in part on job-scheduling algorithms, but also on the connection structure. Previous work on this structure has mostly looked at fixed and static connections. However, we argue that such static structures cannot be optimal in all situations. We introduce a dynamic hierarchical connection system of sub-schedulers between the scheduler and servers, and use artificial intelligence search algorithms to optimise this structure. Due to its dynamic and flexible nature, this design enables the system to adaptively accommodate heterogeneous jobs and resources and to make the best use of resources. Experimental results compare genetic algorithms and simulated annealing for optimising the structure, and demonstrate that a dynamic hierarchical structure can significantly reduce the total makespan (the maximum processing time for the given jobs) of heterogeneous tasks allocated to heterogeneous resources, compared with a one-layer structure. This reduction is particularly pronounced when resources are scarce.
Keywords: Cloud computing, Dynamic hierarchical job scheduling structure, Genetic algorithms, Optimisation
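As a concrete illustration of one of the optimisers compared in the paper, the sketch below runs simulated annealing over a flat job-to-server assignment with makespan as the cost. The hierarchical sub-scheduler structure itself is not reproduced, and all job sizes, server speeds and annealing parameters are invented for the example.

```python
# Simulated annealing over a job-to-server assignment, minimising makespan (illustrative only).
import math
import random

job_sizes = [4, 7, 2, 9, 5, 3, 8]     # hypothetical job processing requirements
server_speeds = [1.0, 2.0, 0.5]       # hypothetical heterogeneous server speeds


def makespan(assignment):
    """Maximum finishing time over servers, given which server each job goes to."""
    finish = [0.0] * len(server_speeds)
    for job, server in zip(job_sizes, assignment):
        finish[server] += job / server_speeds[server]
    return max(finish)


def simulated_annealing(steps=5000, temp=10.0, cooling=0.999):
    current = [random.randrange(len(server_speeds)) for _ in job_sizes]
    best = current[:]
    for _ in range(steps):
        # Neighbour: reassign one randomly chosen job to a random server.
        neighbour = current[:]
        neighbour[random.randrange(len(neighbour))] = random.randrange(len(server_speeds))
        delta = makespan(neighbour) - makespan(current)
        # Accept improvements always, and worse moves with a temperature-dependent probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = neighbour
            if makespan(current) < makespan(best):
                best = current[:]
        temp *= cooling
    return best, makespan(best)


assignment, cost = simulated_annealing()
print(assignment, round(cost, 2))
```

A genetic algorithm would explore the same assignment space with a population, crossover and mutation instead of a single annealed solution, which is the comparison the paper reports.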
December 2020 · 7 Reads · 5 Citations
August 2020 · 182 Reads · 6 Citations
A fundamental issue in cognitive science concerns the mental processes that underlie the formation and retrieval of concepts in short-term and long-term memory (STM and LTM, respectively). This study advances Chunking Theory and its computational embodiment, CHREST, to propose a single model that accounts for significant aspects of concept formation in the domains of literature and music. The proposed model inherits CHREST's architecture with its integrated STM/LTM stores, while also adding a moving attention window and an "LTM chunk activation" mechanism. These additions address the overly destructive nature of the primacy effect in discrimination-network-based architectures and expand Chunking Theory to account for the learning, retrieval and categorisation of complex sequential symbolic patterns, such as real-life text and written music scores. The model was trained through exposure to labelled stimuli and learned to categorise classical poets/writers and composers. The model categorised previously unseen literature pieces by Homer, Chaucer, Shakespeare, Walter Scott, Dickens and Joyce, as well as unseen sheet music scores by Bach, Mozart, Beethoven and Chopin. These findings offer further support to the mechanisms proposed by Chunking Theory and expand it into the psychology of music.
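The toy sketch below illustrates, under heavy simplification, the two additions the abstract highlights: a moving attention window that carves a symbol sequence into chunks, and label activations on stored chunks that drive categorisation of unseen material. It is not CHREST; the window size, the activation rule and the training snippets are assumptions.

```python
# Moving attention window plus chunk-activation categorisation (illustrative only).
from collections import defaultdict

WINDOW = 3  # assumed size of the moving attention window


def chunks(sequence, window=WINDOW):
    """Slide the attention window over the sequence, yielding fixed-size chunks."""
    for i in range(len(sequence) - window + 1):
        yield tuple(sequence[i:i + window])


class ChunkCategoriser:
    def __init__(self):
        # LTM: chunk -> label -> activation count.
        self.ltm = defaultdict(lambda: defaultdict(int))

    def train(self, sequence, label):
        for chunk in chunks(sequence):
            self.ltm[chunk][label] += 1

    def categorise(self, sequence):
        # Sum the activations of every stored chunk the window passes over.
        scores = defaultdict(float)
        for chunk in chunks(sequence):
            for label, activation in self.ltm.get(chunk, {}).items():
                scores[label] += activation
        return max(scores, key=scores.get) if scores else None


model = ChunkCategoriser()
model.train("to be or not to be that is the question", "shakespeare")
model.train("it was the best of times it was the worst of times", "dickens")
print(model.categorise("to be or not to be"))   # expected: shakespeare
```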
September 2019 · 869 Reads · 7 Citations
Is it reasonable to talk about scientific discoveries in the social sciences? This chapter briefly reviews the status of scientific research in the social sciences and some of the arguments for and against the notion of scientific discovery in those sciences. After providing definitions of “scientific discovery” and “social sciences”, the chapter notes the large variety of epistemological views and methodologies drawn on by the social sciences. It discusses the extent to which the social sciences use precise formalisms for expressing theories. Critiques of the use and reliability of the scientific method in the social sciences are discussed. In spite of these critiques, it is argued that it is possible to speak of scientific discovery in the social sciences. The chapter ends with a preview of the book.
... In job scheduling, evolutionary techniques help optimize resource allocation and job sequencing, adapting to changing conditions while avoiding local optima. Lane et al. (2022) investigated the role of optimization algorithms in enhancing cloud computing performance through job scheduling. The authors proposed a dynamic hierarchical connection system that optimizes the scheduling using artificial intelligence search algorithms. ...
April 2022
Lecture Notes in Computer Science
... To find these, techniques from subgroup discovery are adapted to an XAI framework. Finally, Javed et al. (2022) focus on a class of methods that has so far not received much attention in XAI, namely the learning of computational models via genetic programming. Such methods are also susceptible to learning overly specific models, which can be countered by a variety of techniques, which are discussed in this introductory survey. ...
July 2022
Data Mining and Knowledge Discovery
... The EPAM architecture (Feigenbaum & Simon, 1984), the initial goal of which was to model memory and perception, has recently been extended into a running production system (Gobet & Jansen, 1994; Lane, Cheng, & Gobet, 1999). The chunks learned while interacting with the task environment can later be used as conditions of productions. ...
December 2020
... First, in line with expert memory theories, music reading expertise also involves structural processing. Studies have investigated the application of theories such as chunking theory (Halpern & Bower, 1982), long-term WM theory (Drai-Zerbib & Baccino, 2005; Williamon & Valentine, 2002), template theory (Sheridan & Kleinsmith, 2022), or even the CHREST model, a computational application of the chunking theory (Bennett et al., 2020), within the context of music. As in other expertise domains, expert musicians have been shown to benefit from a larger perceptual span than nonexpert musicians in studies involving musical material. ...
August 2020
... Triangulation, or what Diesling (1972) refers to as contextual validation, is "where a piece of evidence can be assessed by comparing it with other pieces of evidence on the same point" (p. 147). ...
January 2019
... The automation of statistical inference, however, is mostly confined to the deduction of new knowledge based on prespecified statistical models. scientific inference is termed computational scientific discovery and has so far centered on identifying models or laws that elucidate specific phenomena (22,23,72). One instance of computational scientific discovery involves the identification of equations ("symbolic regression") to uncover quantitative laws governing a given dataset. ...
September 2019
... Challenges in reproducibility in complex systems include the sampling methods and data availability, [7], [8], in addition to the general challenges of computational reproducibility, e.g., [9]. For example, a study of complex systems analyses of Twitter datasets found that even with the public scrapable data, reproducibility was not necessarily possible [10]. ...
February 2018
Information and Software Technology
... Given the complexity of expertise, it can be argued that formal (mathematical and computational) modelling will be key. This complexity results from several factors, including cognitive mechanisms that operate in parallel, the need to adopt different levels of analysis (for example, micro-mechanisms for working memory and macro-mechanisms for planning), the need to consider different time scales (from milliseconds to years), and the need to consider the environment in which experts develop and perform (Gobet, 2016; Gobet, Lloyd-Kelly & Lane, 2018). The importance of the environment can be seen in the fact that experts in different areas face very diverse demands, whether related to knowledge in science, to endurance or speed in sport, or to emotions in the case of art. ...
September 2017
... Due to the length of the article, further explanations are omitted (Table 4). The results of this section are consistent with the findings of Schubert et al. 108, Panday et al. 109, and Masoudi-Sobhanzadeh et al. 45. ...
September 2017
Information Processing Letters
... Popper 1959). An excellent account is given by Sozou et al. (2017) and an account focussed on GIScience is provided by Gahegan (2005). Under what Kuhn (1962) would call 'normal science' we posit a hypothesis, build an analytical model, run experiments, evaluate the results, iterate with modifications and refinements, then when we are confident in our findings, we share them. ...
January 2017