NEWELL AND SIMON’S LOGIC THEORIST:
HISTORICAL BACKGROUND AND IMPACT ON COGNITIVE MODELING
Leo Gugerty
Psychology Department, Clemson University, Clemson, SC USA
Fifty years ago, Newell and Simon (1956) invented a “thinking machine” called the
Logic Theorist. The Logic Theorist was a computer program that could prove theorems in
symbolic logic from Whitehead and Russell’s Principia Mathematica. This was perhaps
the first working program that simulated some aspects of people’s ability to solve
complex problems. The Logic Theorist and other cognitive simulations developed by
Newell and Simon in the late 1950s had a large impact on the newly developing field of
information-processing (or cognitive) psychology. Many of the novel ideas about mental
representation and problem solving instantiated in the Logic Theorist are still a central
part of the theory of cognitive psychology, and are still used in modeling the complex
tasks studied in human factors psychology. This paper presents some of the theoretical
precursors of the Logic Theorist, describes the principles and implementation of the
program, and discusses its immediate and long-term impacts.
INTRODUCTION
Fifty years ago, in early 1956, Herbert Simon told
a group of students that “over the Christmas holiday, Al
Newell and I invented a thinking machine” (Simon,
1996, p. 206). This thinking machine was a program
called the Logic Theorist (Newell & Simon, 1956). Its
thinking consisted of creating proofs for theorems in
propositional logic. In fact, it could prove 38 of the 52
theorems in Chapter 2 of Whitehead and Russell’s
Principia Mathematica (1910). The Logic Theorist was
one of the first, and perhaps the first, working program
that simulated some aspects of people’s ability to solve
complex problems. The Logic Theorist and other
cognitive simulations developed by Newell, Simon and
Cliff Shaw in the late 1950s had a large impact on the
newly developing field of information-processing (or
cognitive) psychology. Many of the novel ideas about
mental representation and problem solving instantiated
in the Logic Theorist are still a central part of the theory
of cognitive psychology, and many of these ideas are
still used in modeling the complex tasks studied in
human factors psychology. This paper presents some of
the theoretical ideas that influenced the development of
the Logic Theorist, describes the principles and
implementation of the program, and discusses its
immediate and long-term impacts.
FORERUNNERS
In his autobiography, Herbert Simon (1996) notes
that he and Allen Newell were influenced by
theoreticians in mathematical logic and information
theory – such as Gödel (1931), Turing (1936) and
Shannon (1948) – who showed that complex
mathematical ideas and processes could be represented
by formal systems of symbols that were manipulated
according to well-defined rules. Some of these logicians,
especially Turing, claimed that these formal symbol
systems could be instantiated in physical machines. In
the late 1940s, these ideas led directly to the creation of
digital computers. Although digital computers initially
were used to implement primarily numerical
calculations, Turing (1950) and others felt strongly that
computer implementations of formal symbol systems
such as the Turing Machine could eventually exhibit a
variety of complex thinking processes. In addition to this
influence from formal logic, Simon also notes the
influence of researchers in cybernetics – such as
McCulloch and Pitts (1943) – who developed formal
models of mental processes with a close connection to
neurophysiology.
A number of researchers have noted that the
emergence of information-processing and cognitive
psychology was also influenced by a shift in the type of
tasks researchers attempted to study and model. In
particular, researchers shifted from the very simple tasks
of behaviorist psychology to the complex tasks studied
in human factors research, i.e., tasks involving problem
solving, communication and use of technology. For
example, after studying communication and decision-
making in submarine crews in 1947, Jerome Bruner
(1983, p. 62) commented “develop a sufficiently
complex technology and there is no alternative but to
develop cognitive principles in order to explain how
people can manage it.” Another key driving force in
cognitive psychology, George Miller, studied how noise
affected radio communication during WW II (Waldrop,
2001). Simon (1996) cites human factors work during
WW II as influencing his thinking. And in work prior to
the Logic Theorist, Simon and Newell studied
organizational decision-making and military command
& control systems.
In the early 1950s, Simon and Newell began
collaborating and set themselves the goal of developing
a formal symbolic system that could execute a complex
thought process. They first focused on the task of chess
and then moved on to geometry proofs, but later dropped
both of these visual tasks because of difficulties in
formalizing the perceptual processes involved. In the fall
of 1955, Simon and Newell switched to modeling a less
visual task – proving theorems in propositional logic.
DEVELOPING THE LOGIC THEORIST
The major insight that helped Newell and Simon
understand how people generated logic proofs was to
focus on people’s heuristics. While an undergraduate at
Stanford, Newell had learned about the importance of
heuristics in problem solving from the mathematician
George Polya. Simon and Newell discovered potential
heuristics by noticing and recording their own mental
processes while working on proofs. By December of
1955, they had implemented some promising heuristics
in a fairly complete version of the Logic Theorist and
hand-simulated the operation of this program. In January
1956, they performed a detailed hand-simulation with
their family members and students acting out the various
“methods” of the program. In conjunction with this work
on the Logic Theorist program, Newell and Shaw
worked on developing a list-processing language (IPL)
that could implement the program on a computer. In
August 1956, the first Logic Theorist proof was run on a
JOHNNIAC computer (named after John von Neumann)
using IPL. In September 1956, the first published
description of the Logic Theorist was presented at the
Second Symposium on Information Theory at MIT
(Newell & Simon, 1956). Since the Logic Theorist was
then capable of proving a number of the theorems in
Whitehead and Russell’s Principia Mathematica, Simon
informed Russell of this fact, no doubt savoring the irony that a program built on the foundations of symbolic logic could now generate proofs of important theorems in symbolic logic.
DESCRIPTION OF THE LOGIC THEORIST
The basic principles underlying the Logic Theorist
are:
- Thinking is seen as processing (i.e., transforming) symbols in short-term and long-term memories. These symbols were abstract and amodal, that is, not connected to sensory information.
- Symbols are seen as carrying information, partly by representing things and events in the world, but mainly by affecting other information processes so as to guide the organism’s behavior. In Newell and Simon’s words, “symbols function as information entirely by virtue of their making the information processes act differentially” (1956, p. 62).
- Symbols represent knowledge hierarchically. For example, the representation of a logic expression in the Logic Theorist was hierarchical, with elements and sub-elements. Also, the processes used by the Logic Theorist were hierarchical, in that processes would set sub-goals that initiated new processes.
- Complex problems are solved by the use of heuristics that are fairly efficient but do not guarantee a solution. The Logic Theorist works backwards from the theorem to be proved, using the heuristics to make valid inferences until it has reached an axiom.
The claim here is not that Newell and Simon
initiated any of these principles, but that they integrated
and applied them to develop a working system that could
solve complex problems.
Knowledge Representation
A logic expression (e.g., ~P ⊃ (Q v ~P), read as “not P implies Q or not P”) is represented in the Logic Theorist as a hierarchy of elements and sub-elements. The main connective (here ⊃) is the main element. Other elements include the left (~P) and right sub-elements. In this expression the right sub-element is a sub-expression, (Q v ~P), which has its own main and
sub-elements. Each element (E) in an expression
contains up to 11 attributes, including:
- the number of negation signs before E,
- the connective in E (if any),
- the name of the variable or sub-expression in E (if any),
- E’s position in the expression, and
- the location of the expression containing E in storage memory.
The Logic Theorist contains two kinds of
memories, “working memories” for temporary storage
and “storage memory” for longer-term storage. A single
working memory holds a single element and its
attributes while solving a single problem. Usually one to
three working memories are used. Storage memory is
used for storage of expressions (e.g., axioms and proved
theorems) across problems, and for temporary storage of
expressions and sets of elements while solving a single
problem. Storage memory consists of lists, with each list
containing a whole expression or a set of elements. Each
list has a location label that is used to index the list from
working memories.
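To make this concrete, the hierarchical representation and the two kinds of memories can be sketched roughly as follows. This is a minimal illustration in Python, not the original implementation (the Logic Theorist was written in the IPL list-processing language), and the class, field, and label names are assumptions made for readability.

```python
# A minimal, hypothetical sketch of the Logic Theorist's hierarchical expression
# representation and its two kinds of memories; all names are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Element:
    """One element of a logic expression, with a few of its up-to-11 attributes."""
    negations: int = 0                      # number of negation signs before the element
    connective: Optional[str] = None        # main connective, e.g. "implies" or "or"
    variable: Optional[str] = None          # variable name, if the element is atomic
    sub_elements: list = field(default_factory=list)   # left and right sub-elements
    storage_location: Optional[str] = None  # label of the containing list in storage memory

# The expression ~P implies (Q v ~P), as a hierarchy of elements and sub-elements.
expression = Element(connective="implies", sub_elements=[
    Element(negations=1, variable="P"),              # left sub-element: ~P
    Element(connective="or", sub_elements=[           # right sub-element: (Q v ~P)
        Element(variable="Q"),
        Element(negations=1, variable="P"),
    ]),
])

# Each working memory holds a single element; usually one to three are in use.
working_memories = [expression, None, None]

# Storage memory is a set of lists (axioms, proved theorems, temporary results),
# each indexed by a location label.
storage_memory = {"axioms": [], "proved_theorems": [expression]}
```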
Information Processes
The lowest-level unit in the Logic Theorist’s
information processes is an “instruction.” An example
instruction is: “find the right sub-element of the
expression in working-memory 1 and put this sub-
element in working-memory 2”. Instructions can also
shift control of processing to other instructions, using a
branching technique similar to “goto” statements in a
computer program.
The next-highest level in the Logic Theorist’s
information processes is “elementary processes.” Each
elementary process is a sequentially-executed list of
instructions and their associated control flow that
achieves a specific goal. Elementary processes are
similar to “routines” in a computer program or methods
in a GOMS model (Card, Moran & Newell, 1983).
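Continuing the same hypothetical sketch, an instruction and an elementary process might look roughly like the functions below; the names and the particular goal being tested are illustrative assumptions, not the program’s actual routines.

```python
# Hypothetical sketch of the two lowest processing levels, continuing the
# Element / working-memory sketch above (illustrative names, not the IPL originals).

def move_right_subelement(working_memories, src=0, dest=1):
    """Instruction: find the right sub-element of the expression in working
    memory `src` and put it in working memory `dest`."""
    expression = working_memories[src]
    working_memories[dest] = expression.sub_elements[-1] if expression.sub_elements else None

def main_connectives_match(working_memories):
    """Elementary process: a short, sequentially executed series of instructions
    that achieves one specific goal -- here, testing whether the expressions in
    working memories 0 and 1 have the same main connective."""
    left, right = working_memories[0], working_memories[1]
    return (left is not None and right is not None
            and left.connective == right.connective)
```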
The next-highest level in the Logic Theorist’s
information processes is “methods.” Each method is a
sequentially-executed list of elementary processes, along
with associated control flow. There are four main
methods in the Logic Theorist, each instantiating a
heuristic for proving logic theorems. The methods are:
- Substitution – this method seeks to transform one logic expression (e.g., the theorem to be proved) into another (e.g., an axiom) via a series of logically-valid substitutions of variables and replacements of connectives.
- Detachment – this method implements the logical inference rule of modus ponens, that is, if the goal is to prove theorem B and the method can prove the theorems A ⊃ B and A, then B is a proven theorem. If the goal is to prove theorem B, the detachment method first attempts to find a proved theorem in storage memory of the form A ⊃ B where the right side either matches B or can be made to match B by substitution. If successful, a sub-goal is set to prove theorem A. If A is not in the list of proved theorems, the detachment method attempts to prove A by substitution.
- Chaining forward – this method implements the transitive rule: if A ⊃ B and B ⊃ C, then A ⊃ C. If the goal is to prove A ⊃ C, this method first searches for a theorem of the form A ⊃ B (or one that can be transformed into A ⊃ B by substitution). If successful, the method then attempts to prove B ⊃ C by substitution.
- Chaining backward – in a similar manner, this method attempts to prove A ⊃ C by first proving B ⊃ C, and then A ⊃ B.
The highest-level information process in the Logic
Theorist is the executive control method. This method
applies the substitution, detachment, forward chaining,
and backward chaining methods, in turn, to each
proposed theorem.
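Very loosely, the executive control method can be pictured as a loop that tries each of the four methods in turn and recurses on any sub-goals a method sets. The sketch below continues the hypothetical Python illustration; the “proved”/sub-goal return convention and the depth cutoff are assumed simplifications rather than the program’s actual control structure.

```python
# Assumed sketch of executive control: apply substitution, detachment, forward
# chaining, and backward chaining in turn to a proposed theorem, recursing on
# any sub-goals a method sets (e.g. "prove A" for detachment).

def prove(theorem, proved_theorems, methods, depth=0, max_depth=3):
    if depth > max_depth:
        return None                          # give up: heuristics do not guarantee a solution
    for method in methods:                   # e.g. [substitution, detachment, chain_fwd, chain_bwd]
        result = method(theorem, proved_theorems)
        if result == "proved":               # the method closed the goal directly
            return [theorem]
        if isinstance(result, list):         # the method set sub-goals
            sub_proofs = [prove(goal, proved_theorems, methods, depth + 1, max_depth)
                          for goal in result]
            if all(sub_proofs):              # every sub-goal was proved
                return [step for proof in sub_proofs for step in proof] + [theorem]
    return None                              # no method succeeded on this theorem
```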
Newell, Shaw and Simon (1958) saw the Logic
Theorist as an example of a program composed from
primitive information processes that could generate (or
perform) a complex behavior. They also pointed out that
information processing programs such as the Logic
Theorist offer explanations of the cognitive control
structures and processes underlying complex human
behavior.
Performance
In one test, the Logic Theorist was started with the
axioms of propositional logic in its storage memory and
then presented with 52 theorems to prove from Chapter
2 of Principia Mathematica. The theorems were
presented in the same order as in the book. Upon
proving a theorem, the Logic Theorist added it to its
storage memory for use in later proofs. Given these
constraints, the Logic Theorist was able to prove 73% of the 52
theorems. Using a computer that took 30 ms per
elementary information process, half of the theorems
were proved in less than 1 minute, and most in less than
5 minutes.
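Under the same assumptions as the earlier sketches, this test procedure amounts to a driver loop like the one below (the theorem list, axioms, and methods are placeholders), which makes explicit how each newly proved theorem becomes available for later proofs.

```python
# Hypothetical driver for the test described above: present the Chapter 2 theorems
# in book order, adding each newly proved theorem to storage memory for later use.

def run_test(theorems_in_book_order, axioms, methods):
    proved = list(axioms)                    # storage memory starts with the axioms
    outcomes = []
    for theorem in theorems_in_book_order:
        proof = prove(theorem, proved, methods)
        outcomes.append(proof is not None)
        if proof is not None:
            proved.append(theorem)           # proved theorems are reused in later proofs
    return sum(outcomes), len(outcomes)      # e.g. (38, 52) in the test described above
```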
EVALUATION OF THE LOGIC THEORIST
In the rest of this paper, I will evaluate the Logic
Theorist by considering whether it is an artificial
intelligence (AI) program or a cognitive simulation, and
by assessing its immediate and long-term impacts on
theory and models in cognitive psychology.
AI program or cognitive simulation?
In an article in the Psychological Review in 1958,
Newell, Shaw and Simon pointed out that the elementary
information processes in the Logic Theorist were not
modeled after human thinking, and that the model was
not shaped by fitting to quantitative human data. Also,
the branching control structure and the list-based
knowledge representation of the Logic Theorist were
later determined to be psychologically implausible.
These considerations support the conclusion that the
Logic Theorist does not simulate human cognitive
processes, and therefore, given its intelligent behavior, is
an AI program.
On the other hand, the higher-level information
processes in the Logic Theorist – the methods
instantiating the four heuristics – were explicitly
modeled after the introspective protocols of Simon and
Newell themselves. Newell and Simon explicitly claim
that heuristics are a good way to model the quick but
error-prone nature of human problem solving, and they
used heuristics to model other kinds of problem solving
(e.g., chess) around this time. In their 1958
Psychological Review article, Newell et al. point out a
number of other similarities in how people and the Logic
Theorist solve logic problems – e.g., both generate sub-
goals, and both learn from previously solved problems.
These considerations suggest that in terms of higher-
level information processes such as heuristics,
subgoaling, and learning, the Logic Theorist was a
simulation of human cognition.
Immediate Impact of the Logic Theorist
The completed Logic Theorist was initially
presented to the research world on September 11, 1956
at a star-studded conference, the Second Symposium on
Information Theory at MIT. In addition to Newell’s
presentation, Noam Chomsky presented his ideas on
transformational grammar, George Miller discussed
limitations in short-term memory, and John Swets
applied signal detection theory to perceptual recognition.
Miller has called this day the “moment of conception” of
cognitive science (2003, p. 142).
Other evidence for the impact of the Logic
Theorist on other researchers is contained in Miller,
Galanter and Pribram’s book, Plans and the Structure of
Behavior (1960), which was itself a seminal early work
in cognitive psychology. This book outlines a theory of
how people use plans – structured knowledge – to guide
behavior, and it sketches out a formal, computer-
program-like mechanism for plans based on test-operate-
test-exit units. Thus, in a sense Plans was a
generalization of the ideas that Newell, Simon and Shaw
had actually implemented in the Logic Theorist. The
following excerpts from Plans demonstrate the strong
influence of the Logic Theorist on Miller et al.’s ideas.
- “The first intensive effort to meet this need … to simulate the human chess player or logician … was the work of Newell, Shaw and Simon (1956), who have advanced the business of psychological simulation further than anyone else” (p. 55).
- Referring to the use of a formalized program to solve complex problems, Miller et al. praise Newell and Simon’s “demonstration that what so many have long described has finally come to pass.”
Miller et al. agreed with the emphasis of Newell
and Simon on heuristics as a general method for
modeling human problem solving, and they describe two
other heuristics used by Newell and Simon – means-ends
analysis and simplification (constraint relaxation) – in
work that led up to their General Problem Solver.
Finally, Miller et al. anticipated and gave answers
to some of the common criticisms of cognitive
simulations – criticisms that are still relevant today. The
first criticism is that cognitive simulations are too
complex and have too many parameters to qualify as a
valid, general model of behavior. Miller et al. reply that
“if the description is valid … the fact that it is
complicated can’t be helped. No benign and
parsimonious deity has issued us an insurance policy
against complexity.” While later cognitive modelers
have agreed that parsimony is not the most important
criterion for evaluating models of complex thinking
(Anderson, 1983), they have also tried to reduce the
number of free parameters in their models by using
consistent parameter estimates across models based on
empirical research in cognitive psychology (Card et al.,
1983; Kieras, Wood & Meyer, 1997).
The second criticism of cognitive simulations
anticipated by Miller et al. is the homunculus problem,
i.e., that cognitive simulations may need to posit a smart
but unexplained mental process to interpret mental
representations and make decisions. Miller et al.’s
response to this is that cognitive simulations solve
complex problems without a homunculus, using only
decision-making processes that are explicit and evident
in their rules and heuristics. The third criticism focuses
on how cognitive simulations are to be validated. Miller
et al. suggest Newell and Simon’s main validation
technique, verbal and behavioral protocols, as one way
of validating simulations. Later, proponents of cognitive
simulation developed other kinds of data to validate
models against, including human response times, error
rates, and eye movements.
Long-Term Impact of the Logic Theorist
In a review of the construct of mental
representation in cognitive psychology, Markman and
Dietrich (2000) describe the classical view of
representation in the same way that Newell and Simon
did for the Logic Theorist – i.e., that information
processing consists of transforming amodal mental
symbols so as to guide the organism’s action. Newell
and Simon were key figures in developing the classical
view of representation, which is still followed in a
number of cognitive modeling systems, including
GOMS (Card et al., 1983), ACT-R (Anderson &
Lebiere, 1998), SOAR (Newell, 1990), and EPIC
(Meyer & Kieras, 1997).
BEYOND THE LOGIC THEORIST
In the 1960s, Newell and Simon continued their
work on information processing programs for complex
problem solving. This work was published in their book
Human Problem Solving in 1972. In the early 1970s,
Newell initiated the use of production systems as an
alternative to the branching control structure of the
Logic Theorist (Newell, 1973). The modular nature of
productions is now felt by many to be a better
description of human procedural knowledge (e.g.,
Anderson & Lebiere, 1998) and production systems are
widely used in cognitive modeling architectures.
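As an illustrative contrast with the Logic Theorist’s goto-style branching, a generic production-system cycle can be sketched as a set of independent condition-action rules repeatedly matched against working memory. The sketch below is a textbook-style simplification in the same hypothetical Python style, not ACT-R, SOAR, or any other specific architecture.

```python
# Generic, illustrative production-system cycle: each production is an independent
# condition-action pair matched against working memory, in contrast to the Logic
# Theorist's branching, goto-style control flow.

def production_cycle(working_memory, productions, max_cycles=100):
    for _ in range(max_cycles):
        fired = False
        for condition, action in productions:
            if condition(working_memory):    # does this rule match the current state?
                action(working_memory)       # the matching rule fires and updates memory
                fired = True
                break                        # simple conflict resolution: first match wins
        if not fired:
            break                            # no rule matches, so the cycle halts
    return working_memory
```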
In addition to the switch from branching control
structures to production systems, cognitive modelers
have also updated a number of the other techniques used
in the Logic Theorist. In the 1990s, some cognitive
modelers integrated sub-symbolic, connectionist
processes into classical symbol-processing models. For
example, ACT-R now conditions retrieval of
information from declarative memory on the flow of
activation in a memory network with varying strengths
of associations among nodes. Also in the 1990s, when
creating the EPIC modeling architecture, Meyer and
Kieras (1997) integrated perceptual and motor processes
into the heretofore purely cognitive architectures.
Other changes in cognitive modeling architectures
are still in progress. These include shifting from amodal
symbols to symbols that include sensori-motor
representations (e.g., Barsalou, 1999), and integrating
emotional and stress responses into cognitive models.
However, Newell and Simon’s demonstration in
the Logic Theorist that an information processing
program could manipulate symbols so as to perform
complex problem-solving tasks is still reflected in
current cognitive modeling. Also, their use of heuristics
as the core of these information processing programs is
still very influential.
REFERENCES
Anderson, J. (1983). The Architecture of Cognition.
Cambridge, MA: Harvard University Press.
Anderson, J. R., Bothell, D., Byrne, M. D., Douglass, S., Lebiere, C., & Qin, Y. (2004). An integrated theory of the mind. Psychological Review, 111, 1036-1060.
Anderson, J. & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Erlbaum.
Barsalou, L. (1999). Perceptual symbol systems. Behavioral
and Brain Sciences, 22, 577-660.
Bruner, J. (1983). In Search of Mind: Essays in Autobiography.
New York, NY: Harper & Row.
Card, S., Moran, T. & Newell, A. (1983). The Psychology of
Human Computer Interaction. Mahwah, NJ: Erlbaum.
Gödel, K. (1931). On formally undecidable propositions of Principia Mathematica and related systems I. Monatshefte für Mathematik und Physik, 38, 173-198.
Kieras, D., Wood, S. & Meyer, D. (1997). Predictive
engineering models based on the EPIC architecture for a
multimodal high-performance human-computer
interaction task. ACM Transactions on Computer-Human
Interaction, 4(3), 230-275.
Markman, A. & Dietrich, E. (2000). Extending the classical
view of representation. TRENDS in Cognitive Sciences,
4(12), 470-475.
McCulloch, W. & Pitts, W. (1943) A logical calculus of the
ideas immanent in nervous activity. Bulletin of
Mathematical Biophysics, 5, 115-130.
Meyer, D. & Kieras, D. (1997). A computational theory of
executive cognitive processes and multiple-task
performance: I. Basic mechanisms. Psychological Review,
104(1), 3-65.
Miller, G. A. (2003). The cognitive revolution: A historical
perspective. TRENDS in Cognitive Sciences, 7(3), 141-
144.
Miller, G., Galanter, E. & Pribram, K. (1960). Plans and the
Structure of Behavior. New York, NY: Henry Holt and
Company.
Newell, A. (1973). Production systems: Models of control
structures. In W. Chase (Ed.), Visual Information
Processing, Oxford, England: Academic.
Newell, A. (1990). Unified Theories of Cognition. Cambridge,
MA: Harvard University Press.
Newell, A. & Simon, H. (1956). The logic theory machine: A
complex information processing system. IRE
Transactions on Information Theory, 2, 61-79.
Newell, A., Shaw, J. C. & Simon, H. A. (1958). Elements of a theory of human problem solving. Psychological Review, 65(3), 151-166.
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379-423 and 623-656.
Simon H. (1996). Models of My Life. Cambridge, MA: MIT
Press.
Turing, A. (1936). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, Series 2, 42, 230-265.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.
Waldrop, M. M. (2001). The Dream Machine. New York, NY:
Penguin Group.
Whitehead, A. & Russell, B. (1910). Principia Mathematica.
Cambridge, UK: Cambridge University Press.