Human Intelligence and Artificial Intelligence:
Divergent or Complementary Intelligences?
Ma, Shanshan
University of North Texas, shanshanma@my.unt.edu
Spector, Jonathan M.*
University of North Texas, mike.spector@unt.edu
Abstract
Artificial intelligence (AI) is a relatively new phenomenon. The general
consensus holds that its field of exploration began in 1956 at a summer
conference at Dartmouth College sponsored by the Defense Advanced
Research Projects Agency. Somewhat earlier, Alan Turing sparked interest in
AI in a paper published in Mind in 1950 by wondering if a machine could think
and how that could be determined. AI has progressed significantly since then.
Meanwhile, human intelligence (HI) has been studied by philosophers and
researchers for centuries but has remained relatively unchanged in terms of
its fundamental nature (e.g., logic and problem solving). In this chapter, we
examine how AI and HI have converged in certain contexts but how they
remain distinct in other aspects. Two areas will be highlighted in the
discussion, namely expertise and embodiment.
Keywords:
Embodiment; expertise; OK test; Turing test
Introduction
Chess has a very long history dating back 1500 years to northern India.
In the intervening years, chess has undergone some changes and spread
throughout the world. The first recognized world chess championship was held in
1886 and won by Wilhelm Steinitz, a Prague citizen (see
https://www.britannica.com/topic/chess/History). Since then, there have
been many chess masters and grandmasters, including Magnus Carlsen, Garry
Kasparov, Bobby Fischer, and Anatoly Karpov. Chess grandmasters are often
considered among the most intelligent humans.
As mainframe computers began to find applications outside research
laboratories, Alan Turing argued that a computer could be programmed to
play chess (for example, see AMT/D 3 in the Turing archive located at
http://www.turingarchive.org/browse.php/D/3). By the 1980s, interest in
computers playing chess had drawn the interest of large computer companies,
including IBM, which commissioned the developers at Carnegie Mellon
University to further develop the program. In 1989, IBM’s chess program was
renamed Deep Blue. In 1996, Deep Blue defeated world champion Garry
Kasparov in game one of a six-game match that Kasparov eventually won 4-2.
In 1997, there was a rematch, which Deep Blue won. Now there is an annual
world computer chess championship match. Few master chess players could
win against the best of those computer games.
To round out this historical introduction to the relationships of AI and
HI, there is one additional historical development worth considering, namely
the Turing Test (Turing, 1950). Turing described an imitation game
involving an interrogator, a computer, and a human respondent, the latter
two hidden behind a curtain or in another room, with written messages being
passed back and forth. The interrogator is asked to determine which is the
human respondent and which is the computer. If the interrogator cannot
distinguish the human respondent from the computer, then one must conclude
that it is reasonable to call the computer a thinking machine. Processing questions
posed by the interrogator involves natural language processing which was
then believed to be a uniquely human capability. However, Joseph
Weizenbaum (1966) demonstrated that one could program a machine to
respond in a manner one could not distinguish from a human therapist. While
some will claim this was not a genuine Turing Test, Weizenbaum
demonstrated that one could process natural language in a human-like
manner with just a couple hundred lines of programming code.
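The core mechanism behind ELIZA was simple keyword matching with response templates. The sketch below illustrates that idea in Python; the patterns and canned responses are invented for illustration and are not Weizenbaum's original DOCTOR script.

```python
import random
import re

# Illustrative keyword -> response-template rules, in the spirit of ELIZA.
# These specific patterns and replies are hypothetical examples.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), ["Why do you say you are {0}?",
                                        "How long have you been {0}?"]),
    (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?"]),
    (re.compile(r"\bmy (\w+)", re.I), ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please go on.", "I see. Can you elaborate?"]

def reply(utterance: str) -> str:
    """Return the first matching rule's response, echoing the captured text."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)

print(reply("I am worried about my exams"))
```

Even this toy version shows why ELIZA could seem human-like: the reflected fragments give an illusion of understanding without any model of meaning.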
These two cases, chess and counseling, are used to frame the
subsequent elaboration of the differences and similarities of human and
computer intelligence. Where the two converge and where they diverge are
also discussed.
Dimensions of intelligence
A global dimension
Figure 1 depicts how one might conceptualize intelligence globally.
Figure 1. A global perspective.
Considering the hierarchy in Figure 1, it is possible to characterize in a
very general way how education has evolved over the centuries. That
evolution has hardly been linear as the hierarchy might suggest. Early
apprentice training skipped many of the very low levels and integrated them
in problem-solving activities such as hunting or building shelters. One might
conclude that prehistoric education was aimed at survival. Knowledge was
passed from one generation to the next in terms of survival skills. While this
characterization is probably an oversimplification, it sets the stage for an
evolutionary, multifaceted, and dynamic perspective of education.
Spector and Ren (2015) noticed that education in the USA by British
colonialists focused on reading and arithmetic as those were skills that early
American traders needed to report back to British overseers. In a sense,
American colonists needed those skills to survive under British rule.
One might also recall Maslow’s (1943) hierarchy of needs with security
and safety at the bottom of the hierarchy and belongingness and esteem needs in
the middle and self-actualization at the top. In that hierarchy, deficiency needs
had to be satisfied before growth needs could be satisfied, and motivation
could be explained by where an individual was in terms of satisfying deficiency
needs or working on growth needs. Maslow’s model has been critiqued and
expanded (Maslow, 1987) to include cognitive needs, aesthetic needs, and
transcendence (above self-actualization) which expands the growth needs. In
spite of numerous critiques, especially about the order of needs and whether
several can be addressed at the same time, there is general acceptance that
Maslow’s hierarchy of needs is fairly general and cuts across different cultures.
It is worth noting at this point that AI programs do not have such a
hierarchy of needs, although AI programs do have requirements for different
kinds of input and searches to conduct depending on a specific situation. To
bring the discussion back to HI and AI, within this global dimension, it is
worthwhile to consider Dreyfus and Dreyfus’ (1980) five stages of expertise:
(a) novice, (b) advanced beginner, (c) competent performer, (d) proficient
performer, and (e) intuitive expert. The argument is that instruction can be
tailored to the learner’s level and help learners progress from the first stage to
the fourth stage. However, instruction is difficult to design to help learners
progress to stage five and not many people reach that stage. The implication
is that computers can be programmed to perform at any of the first four levels,
but intuitive expertise (high-level mastery or wisdom in Figure 1) is the
domain of a few exceptional humans and resists specification in a way that
would lend itself to a programmed replacement.
The reader should now be in a position to challenge the implication that
computers cannot reach the fifth level of expertise based on the earlier
discussion of computer chess programs, at least in the domain of playing a
complex and challenging game such as chess. In addition, within this global
perspective, there are apparent differences between humans and computers
and those differences affect the ability to acquire knowledge and solve
problems. Finally, it is also worth noting that while intelligence tends to grow
within an individual over that person’s life, the overall growth of human
intelligence has not significantly progressed since ancient times. For example,
writing has existed for at least 5,000 years. Logic was codified several
thousand years ago, as was the calendar. The abacus calculating device was
invented thousands of years ago (Spector & Ren, 2015). On the other hand,
artificial intelligence has grown exponentially in the last 50 years as
demonstrated by the computer chess discussion. Moreover, it is now
becoming clear that a computer program can grow in terms of problem-
solving ability just as a human can gain expertise over a lifetime. That
similarity will be clear in the next section on a problem-solving dimension of
intelligence.
Problem-solving and learning dimensions
Early examples of applications of AI came in the form of expert systems,
especially in the medical domain (Miller, Pople, & Myers, 1982). Early expert
systems can be traced back to Feigenbaum’s (1980) doctoral dissertation at
Carnegie Mellon University that was entitled “Information Theories of Human
Verbal Learning” and supervised by Herbert Simon. These systems typically
had a knowledge or rule base compiled by domain experts, a description of
the current state of affairs, and an inference engine to search and rank
potential rules to apply in the current situation. Such systems lent themselves
to applications in a number of problem-solving domains, including medical
diagnosis and business decision making.
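The architecture just described can be made concrete with a minimal forward-chaining inference engine: a rule base, a set of known facts representing the current state of affairs, and a loop that fires any rule whose conditions are satisfied. The medical-style rules below are invented for illustration and are far simpler than those in a system such as Internist-I.

```python
# Each rule pairs a set of required facts with a conclusion to assert.
# These rules are hypothetical, for illustration only.
RULES = [
    ({"fever", "cough"}, "respiratory_infection"),
    ({"respiratory_infection", "chest_pain"}, "suspect_pneumonia"),
    ({"suspect_pneumonia"}, "order_chest_xray"),
]

def infer(facts: set) -> set:
    """Repeatedly fire rules whose conditions are met until no new
    conclusion can be drawn, then return the expanded fact set."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "chest_pain"}))
```

Chaining is what gives such systems their power: the second and third rules fire only because the first rule's conclusion became a new fact.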
Early expert systems were somewhat limited as they had a static set of
rules and used standard logic to match a situation to a rule. Subsequent
advances included the use of fuzzy logic and a way for a system to add new
rules to the knowledge base, both of which were previously human activities.
Those and other refinements showed some convergence of artificial and
human reasoning. The ability of an expert system to expand its knowledge
base required the application of more recent advances in machine learning.
The ability of a machine to learn and use that learning to improve problem
solving is an example of how human intelligence and artificial
intelligence are fundamentally similar. This apparent similarity will be
discussed in a subsequent section.
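To make the fuzzy-logic refinement concrete, the sketch below shows how a rule condition can hold to a degree in [0, 1] rather than being strictly true or false, so a situation can partially match a rule. The membership functions and the engine-temperature scenario are assumptions made for illustration.

```python
# Fuzzy membership functions: each returns a degree of truth in [0, 1]
# rather than a boolean. The ramps below are arbitrary illustrative choices.
def membership_hot(temp_c: float) -> float:
    """Degree to which a temperature counts as 'hot' (ramp from 80 to 100 C)."""
    if temp_c <= 80:
        return 0.0
    if temp_c >= 100:
        return 1.0
    return (temp_c - 80) / 20

def membership_high_load(load: float) -> float:
    """Degree to which a load fraction counts as 'high' (ramp from 0.5 to 0.9)."""
    if load <= 0.5:
        return 0.0
    if load >= 0.9:
        return 1.0
    return (load - 0.5) / 0.4

def rule_overheat_risk(temp_c: float, load: float) -> float:
    """Fuzzy AND (minimum) of the two conditions gives the rule's strength."""
    return min(membership_hot(temp_c), membership_high_load(load))

print(round(rule_overheat_risk(90, 0.7), 3))
```

A classical rule would reject a 90-degree, 70%-load situation outright; the fuzzy rule instead reports that it half applies, which an inference engine can use to rank competing rules.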
Other early examples of educational applications of AI came in the form
of intelligent tutoring systems (ITSs) (Shute & Psotka, 1994). Early ITSs were
somewhat akin to expert systems in that there was a knowledge base of a
subject domain, a model of instruction that included a representation of
knowledge to be acquired by a learner, a model of what a learner currently
knows about that subject, a database of common misconceptions, and an
inference engine to provide feedback or new information or a new problem to
solve. Early examples showed a positive impact on learning in highly restricted
domains such as two-column arithmetic and simple programming skills.
Early ITSs also evolved in ways somewhat similar to expert systems.
Rather than have a fixed set of common misconceptions, using techniques
involving big data and learning analytics, an ITS could examine what other
similar students were doing and what seemed to work well in terms of new
problems or feedback and provide much better feedback. In addition, the
model of the learner began to grow beyond a model of what that learner knew
in a particular domain to include learning styles, performance in other subject
domains, and interests (Graf & Kinshuk, 2015). A more robust model of the
learner and a system that could determine what instructional treatment was
effective for similarly situated learners shows great promise for the future of
learning and instruction.
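A toy version of such a learner model can be sketched as follows: the tutor keeps a running mastery estimate per skill, updates it after each response, and chooses the next instructional move accordingly. The update rule and thresholds are arbitrary values chosen for illustration, not parameters from any published ITS.

```python
class LearnerModel:
    """A minimal learner model: one mastery estimate per skill."""

    def __init__(self):
        self.mastery = {}  # skill name -> estimate in [0, 1]

    def update(self, skill: str, correct: bool) -> None:
        """Nudge the estimate toward 1 on a correct answer, toward 0 otherwise."""
        est = self.mastery.get(skill, 0.5)
        target = 1.0 if correct else 0.0
        self.mastery[skill] = est + 0.3 * (target - est)

    def next_action(self, skill: str) -> str:
        """Pick an instructional move based on the current estimate."""
        est = self.mastery.get(skill, 0.5)
        if est < 0.4:
            return "remediate"   # reteach with corrective feedback
        if est < 0.8:
            return "practice"    # more problems at the same level
        return "advance"         # move on to a harder problem

model = LearnerModel()
for correct in (True, True, True):
    model.update("two_column_subtraction", correct)
print(model.next_action("two_column_subtraction"))  # three successes -> advance
```

The big-data refinement described above would replace the hand-set constants here with values estimated from the behaviour of many similarly situated learners.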
Again, progress in ITS technology seems to show a convergence of what a
skilled human tutor can do and what an intelligent tutor can do. Once again
there seems to be some convergence between human intelligence and artificial
intelligence in the domain of tutoring.
Underlying foundations
There are two, possibly more, underlying foundations to consider with
regards to convergence or divergence between human and artificial
intelligence. One concerns pattern matching and the other concerns neural
networks. After discussing these two areas, the notion of weak and strong AI
will be introduced.
Pattern matching
Pattern recognition is a core function of human intelligence and
reasoning (Mattson, 2014). A familiar occurrence is the ability to recognize an
acquaintance merely by seeing the back of that person’s head. The percept
involved in such recognition is incomplete and relatively unfamiliar, yet
humans are able to master such complex forms of pattern recognition.
Likewise, computers have been used for decades to support pattern
recognition in many industrial applications (Bishop, 2006). Moreover,
computers are now starting to be used to identify patterns in disparate forms
of input, which is something that some humans are also able to do. For
example, an experienced automobile mechanic may listen to a running engine
and use that perceptual pattern in combination with a readout from a
diagnostic check to form a conclusion about the cause of the presenting
problem.
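The mechanic example can be caricatured in code: each stored case combines features from two disparate sources (an acoustic noise level and a diagnostic error-code count), and a new case is classified by its nearest labelled neighbour. The cases and feature values below are fabricated for illustration.

```python
import math

# Stored cases: (noise_level, error_code_count) -> diagnosis label.
# All values here are invented for illustration.
CASES = [
    ((0.2, 0), "healthy"),
    ((0.3, 1), "healthy"),
    ((0.8, 3), "worn_bearing"),
    ((0.9, 4), "worn_bearing"),
]

def classify(features):
    """Return the label of the closest stored case (1-nearest-neighbour)."""
    nearest_features, label = min(
        CASES, key=lambda case: math.dist(case[0], features)
    )
    return label

print(classify((0.85, 3)))  # close to the worn-bearing cases
```

The point of the sketch is the fusion step: both signals enter one feature vector, so a pattern across disparate inputs is matched exactly as the mechanic combines what he hears with what the readout shows.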
Image processing and pattern recognition are clearly at the core of
recent AI research and development (Bishop, 2006; Burns, 2020). As it
happens, the ability of a computer to access images and patterns in a large
database and form a conclusion exceeds the ability of most humans. While
human memory is vast, recall and analysis are more efficient in a modern
computer. Moreover, human capabilities in this area begin to fade with age
whereas computers can be continually upgraded with new processors and
databases. In that sense, divergence in human and artificial intelligence is
beginning to occur.
Neural networks
There is a great deal of knowledge about artificial neural networks with
regard to their architecture and capabilities (Parsons, 2017). Artificial neural
networks were initially modelled after how neuroscientists believed human
neural networks were structured. While artificial neural networks are entirely
electrical, human neural networks are partly electrical and partly chemical in
operation. It is generally fair to say that less is known about the specific
architecture and capabilities of human neural networks, although more is
being learned every year (Parsons, Lin, & Cockerham, 2018). As more is
learned about both kinds of neural networks, the forms and extent of
convergence and divergence will be determined.
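The basic unit shared by both kinds of network can be sketched in a few lines: a weighted sum of inputs passed through a nonlinear activation function. The weights below are hand-set so the unit behaves like a soft AND gate; in a real artificial neural network such weights are learned from data rather than chosen by hand.

```python
import math

def sigmoid(x: float) -> float:
    """Squash any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, then sigmoid."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Hand-set weights making the unit fire only when both inputs are on.
weights, bias = [10.0, 10.0], -15.0
for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(neuron([a, b], weights, bias), 3))
```

The electrical-versus-electrochemical contrast in the text shows up even here: this unit reduces signalling to arithmetic, whereas a biological neuron's chemical synapses have no such compact description.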
Weak and strong AI
Before concluding this excursion into the similarities and differences in
human and artificial intelligence, it is worth a short side trip to goals. The goals
of human intelligence are many and varied. Some seek fame and fortune
through their intelligence. Others seek to improve the life and welfare of
people through their intelligence. Some seek power and influence while others
seek to simply add to what humans know and can do.
On the other hand, the goals of AI can be classified in two distinct
categories, strong and weak (Spector & Anderson, 2000; Spector, Polson, &
Muraida, 1993). Strong AI systems are those which are intended to replace an
activity previously performed by a human, such as driving an automobile.
Weak AI systems are those which are intended to enable less experienced
persons to perform more like highly experienced persons, such as collision
avoidance systems in many automobiles. There are clearly appropriate
applications for each kind of AI system, although many will argue that strong
AI systems are growing in terms of funding as well as in areas of application.
This fundamental difference is mentioned as a transition to the area of
measurement, as measurements and evaluations are generally made in
relation to intended use and purpose. The measurement and evaluation
communities typically argue that measurements and evaluations should occur
in the context of aims and goals. As mentioned previously, human goals vary
significantly. Some may be placed near the bottom or in the middle of the
pyramid depicted in Figure 1. As Dreyfus and Dreyfus (1980) argued,
computers can be programmed to move up that pyramid but not to the
topmost layer, wisdom. On the other hand, some humans are widely
recognized as having some wisdom in some domains of interest.
Measuring intelligence
While there are different kinds of human intelligence (Gardner, 1999)
and many methods used to measure human intelligence, few of those methods
address the ability of humans to solve complex, dynamic, and ill-structured
problems. Such outcomes of learning and intelligence are difficult to measure
as there are a number of acceptable approaches, solutions, and outcomes.
Other forms of human intelligence and learning are much more easily
measured.
On the other hand, measuring artificial intelligence is in its infancy.
Some measures include how many records were examined and how long it
took to find a conclusion, which says little about the nature of the outcome,
which is so crucial in measuring human intelligence. More typically, when it
comes to measuring artificial intelligence in terms of outcomes, the results are
often compared with those of a few human experts.
Conclusion
While focusing on goals and outcomes, it is worth reconsidering Figure
1 and the top level of that pyramid, namely wisdom. To make this final point
concrete, I wish to introduce the notion of embodied cognition and embodied
educational motivation (Spector & Park, 2018; Wilson & Foglia, 2017). People
are more than cognitive processors. People do more than process images,
access memory, repeat information and solve problems. Some people manage
to do remarkable things that others thought impossible. Were this an
interactive session, I would ask for examples. Think of a few before continuing to the
next paragraph. Then consider the person whom you know or about whom
you have read that you regard as the wisest person you can name. Remember
that wisdom is at the top of the pyramid in Figure 1.
Meanwhile, remember that people have moods and physical
limitations. Perceptions vary significantly from one person to another as do
knowledge, experience, and goals. In addition, recall Jonassen’s (2000) type of
problems and Gardner’s (1999) types of intelligence, and reconsider who that
wisest person might be.
Having posed this challenge, it seems appropriate to seed other
responses with my own. The wisest person I have known is Oets Kolk
Bouwsma as he managed to turn my world upside down and change my
perspective on life in an incidental exchange in an optional seminar on a Friday
afternoon with doctoral students at the University of Texas at Austin. One
doctoral student brought the initial question to consider, namely, what was
John Locke’s conception of substance. After an hour’s discussion of a
paragraph in Locke during which time Bouwsma did not speak, he brought the
discussion to an early end with this remark: “I guess that substance is what
properties get stuck in.” The group of about a dozen students knew it was time
to move on, and another student said he wanted to discuss Plato’s Symposium.
Bouwsma then framed the central question of that text: “What is love?” Being
the group’s notetaker and a rare contributor to the discussions, I ventured a
remark aimed at the kind of irony I found in many of Bouwsma’s writings: “Love
is what people get stuck in.” Everyone laughed for a few seconds. Then
Bouwsma tilted his head toward me and with piercing blue eyes asked me to
repeat what I had said. Having no means of escape, I repeated my ironic retort.
Bouwsma then said, without hesitation: “I thought it was the glue that binds
us together.”
My life changed that day in the 1970s. I am convinced that Bouwsma
knew I was in need of such a change. I felt small and insignificant but also
strangely liberated at the same time. Bouwsma was wise beyond words. The
ability to transform a person or situation in such a positive way is what I now
call the OK Test. Few people have passed it, and no machines as yet have been
tested in that way. Someday, there may be a computer that achieves wisdom and
passes my vaguely elaborated OK Test. Someday there may be a wise ruler of
the kind Plato imagined in The Republic leading this republic. Someday soon, I
hope.
References:
Bishop, C. M. (2006). Pattern recognition and machine learning. Heidelberg,
Germany: Springer.
Burns, E. (2020). In-depth guide to machine learning in the enterprise.
SearchEnterpriseAI, June 2020. Retrieved from
https://searchenterpriseai.techtarget.com/In-depth-guide
Dewey, J. (1910). How we think. New York: D. C. Heath. Retrieved from
https://www.gutenberg.org/files/37423/37423-h/37423-h.htm
Dreyfus, S. E., & Dreyfus, H. L. (1980). Mind over machine: The power of human
intuition and expertise in the era of the computer. New York: Free Press.
Feigenbaum, E. (1980). Information theories of human verbal learning (Unpublished
doctoral dissertation). Carnegie Mellon University, Pittsburgh, PA.
Gardner, H. E. (1999). Intelligence reframed: Multiple intelligences for the 21st century.
New York: Basic Books.
Graf, S, & Kinshuk (2015). Dynamic student modelling of learning styles for advanced
adaptability in learning management systems. International Journal of
Information Systems and Social Change, 4(1), 85-100.
DOI: 10.4018/jissc.2013010106
Hartley, R., Kinshuk, Koper, R., Okamoto, T., & Spector, J. M. (2010). The education and
training of learning technologists: A competences approach. Educational
Technology & Society, 13(2), 206-216. Retrieved from
https://drive.google.com/file/d/1ZTLFIskL5fJwedBKznO3Idrbb6y28N1O/v
iew
Jonassen, D. H. (2000). Toward a design theory of problem solving. Educational
Technology Research & Development, 48(4), 63-85.
Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50(4),
370-396.
Maslow, A. H. (1987). Motivation and personality (3rd ed.). Delhi, India: Pearson
Education.
Mattson, M. P. (2014). Superior pattern processing is the essence of the evolved
human brain. Frontiers in Neuroscience, 8, 265.
doi: 10.3389/fnins.2014.00265
Miller, R. A., Pople, H. E. Jr., & Myers, J. D. (1982). Internist-I, an experimental
computer-based diagnostic consultant for general internal medicine. New
England Journal of Medicine, 307(8), 468-476.
doi:10.1056/NEJM198208193070803
Parsons, T. D. (2017). Cyberpsychology and the brain: The interaction of neuroscience
and affective computing. Cambridge: Cambridge University Press.
Parsons, T. D., Lin, L., & Cockerham, D. (2018). Mind, brain, and technology: How people
learn in the age of new technologies. New York: Springer.
Shute, V. J., & Psotka, J. (1994). Intelligent tutoring systems: Past, present, and future.
[AL/HR-TP-1994-0005]. Brooks AFB, TX: Human Resources Directorate.
Retrieved from https://apps.dtic.mil/dtic/tr/fulltext/u2/a280011.pdf
Spector, J. M., & Anderson, T. M. (Eds.) (2000). Integrated and holistic perspectives on
learning, instruction and technology: Understanding complexity. Dordrecht:
Kluwer Academic Press.
Spector, J. M., & Park, S. W. (2018). Motivation, learning and technology: Embodied
educational motivation. New York: Routledge.
Spector, J. M., Polson, M. C., & Muraida, D. J. (Eds.) (1993). Automating instructional
design: Concepts and issues. Englewood Cliffs, NJ: Educational Technology.
Spector, J. M., & Ren, Y. (2015). History of educational technology. In J. M. Spector
(Ed.), The SAGE Encyclopedia of educational technology (pp. 335-345).
Thousand Oaks, CA: Sage Publications.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.
Retrieved from https://phil415.pbworks.com/f/TuringComputing.pdf
Weizenbaum, J. (1966). ELIZA: A computer program for the study of natural language
communication between man and machine. Communications of the ACM, 9(1),
36-45. Retrieved from http://www.universelle-automation.de/1966_Boston.pdf
Wilson, R. A., & Foglia, L. (2017). Embodied cognition. In E. N. Zalta (Ed.), The Stanford
Encyclopedia of Philosophy. Retrieved from
https://plato.stanford.edu/archives/spr2017/entries/embodied-cognition
Acknowledgements
My sincere and profound thanks to Oets Kolk Bouwsma
(https://en.wikipedia.org/wiki/Oets_Kolk_Bouwsma), one of my philosophy
professors, for showing me the nature of intelligence. Thanks also to Lemoyne Dunn
for providing feedback on this chapter.
Author Information
Ma, Shanshan
University of North Texas
3940 N. Elm Street, Denton, TX 76207 USA
TEL: +1 940 535 8909
Email address: shanshanma@my.unt.edu
Short biographical sketch:
Shanshan Ma is a doctoral graduate at the University of North Texas with abundant
experience in international cooperation. She has been cooperating with professors
and research associates with different backgrounds from several countries (e.g., the
USA, China, India, and the UK). Her research interests include technology-supported
teaching and learning strategies, educational technology design, learning technology
integration theory, game-based learning, and instructional design. Her recent
research focuses on critical thinking development in K-12 education and critical
thinking teaching integration competence in teachers. She is a reviewer for several
research journals, such as the Journal of Smart Learning Environments, Computers
in Human Behavior, and, Contemporary Issues in Technology and Teacher Education
(Science), and she is also a member of several professional associations including
Association for Education Communication Technology (AECT) and American
Education Research Association (AERA), and Texas Center Education Technology
(TCET). She has presented at five international conferences (i.e., SITE 2018, AECT
2018, UCSEC 2019, AECT 2019, PPTELL 2020) and one workshop funded by NSF, and
co-held one seminar on critical thinking at AECT 2019. She has several publications,
including two chapters and three journal papers, one under review, and two under
revision.
Spector, Jonathan Michael
University of North Texas
3940 N. Elm Street, Denton, TX 76207 USA
1501 Greenside Drive, Round Rock, TX USA 78665
TEL: +1 706 202 9350 / +1 950 369 5070
FAX: +1 940 565 4194
Email address: mike.spector@unt.edu
Website: https://sites.google.com/site/jmspector007/
Short biographical sketch:
J. Michael Spector, Professor at UNT, was previously Professor of Educational
Psychology at the University of Georgia, Associate Director of the Learning Systems
Institute at Florida State University, Chair of Instructional Design, Development and
Evaluation at Syracuse University, and Director of the Educational Information
Science and Technology Research Program at the University of Bergen. He earned a
Ph.D. from The University of Texas. He is a visiting research professor at Beijing
Normal University, at East China Normal University, and the Indian Institute of
Technology-Kharagpur. His research focuses on assessing learning in complex
domains, inquiry and critical thinking skills, and program evaluation. He was
Executive Director of the International Board of Standards for Training, Performance
and Instruction and a Past-president of the Association for Educational and
Communications Technology. He is Editor Emeritus of Educational Technology
Research & Development; he edited two editions of the Handbook of Research on
Educational Communications and Technology and the SAGE Encyclopedia of
Educational Technology, and he has more than 150 publications to his credit.
ResearchGate has not been able to resolve any citations for this publication.
Book
Full-text available
Cyberpsychology is a relatively new discipline that is growing at an alarming rate. While a number of cyberpsychology-related journals and books have emerged, none directly address the neuroscience behind it. This book proposes a framework for integrating neuroscience and cyberpsychology for the study of social, cognitive, and affective processes, and the neural systems that support them. A brain-based cyberpsychology can be understood as a branch of psychology that studies the neurocognitive, affective, and social aspects of humans interacting with technology, as well as the affective computing aspects of humans interacting with computational devices or systems. As such, a cyberpsychologist working from a brain-based cyberpsychological framework studies both the ways in which persons make use of devices and the neurocognitive processes, motivations, intentions, behavioural outcomes, and effects of online and offline uses of technology. Cyberpsychology and the Brain brings researchers into the vanguard of cyberpsychology and brain research.
Article
Full-text available
Humans have long pondered the nature of their mind/brain and, particularly why its capacities for reasoning, communication and abstract thought are far superior to other species, including closely related anthropoids. This article considers superior pattern processing (SPP) as the fundamental basis of most, if not all, unique features of the human brain including intelligence, language, imagination, invention, and the belief in imaginary entities such as ghosts and gods. SPP involves the electrochemical, neuronal network-based, encoding, integration, and transfer to other individuals of perceived or mentally-fabricated patterns. During human evolution, pattern processing capabilities became increasingly sophisticated as the result of expansion of the cerebral cortex, particularly the prefrontal cortex and regions involved in processing of images. Specific patterns, real or imagined, are reinforced by emotional experiences, indoctrination and even psychedelic drugs. Impaired or dysregulated SPP is fundamental to cognitive and psychiatric disorders. A broader understanding of SPP mechanisms, and their roles in normal and abnormal function of the human brain, may enable the development of interventions that reduce irrational decisions and destructive behaviors.
Article
Full-text available
In this paper, we address many aspects of Intelligent Tutoring Systems (ITS) in our search for answers to the following main questions; (a) What are the precursors of ITS? (b) What does the term mean? (c) What are some important milestones and issues across the 20+ year history of ITS? (d) What is the status of ITS evaluations? and (e) What is the future of ITS? We start with an historical perspective.
Book
One outcome of recent progress in educational technology is strong interest in providing effective support for learning in complex and ill-structured domains. We know how to use technology to promote understanding in simpler domains (e.g., orientation information, procedures with minimal-branching, etc.), but we are less sure how to use technology to support understanding in more complex domains (e.g., managing limited resources, understanding environmental impacts, etc.). Such domains are increasingly significant for society. Technology (e.g., collaborative tele-learning, digital repositories, interactive simulations, etc.) can provide conceptually and functionally rich domains for learning. However, this introduces the problem of determining what works in which circumstances and why. Research and development on these matters is reflected in this collection of papers. This research suggests a need to rethink foundational issues in educational philosophy and learning technology. One major theme connecting these papers is the need to address learning in the large - from a more holistic perspective. A second theme concerns the need to take learners where and as they are, integrating technology into effective learning places. Significant and systematic progress in learning support for complex domains demands further attention to these important issues.
Article
Learning management systems (LMSs) are commonly used in e-learning; however, they typically do not consider the individual differences of students, including their different background knowledge, cognitive abilities, motivation, and learning styles. A basic requirement for enabling such systems to consider students’ individual characteristics is to know these characteristics first. This paper focuses on the consideration of learning styles and introduces a dynamic student modelling approach that monitors students’ behaviour over time and uses these data to build an accurate student model by frequently refining the information in the student model as well as by responding to changes in students’ learning styles over time. The proposed approach is especially useful for LMSs, which are commonly used by educational institutions for whole programs of study and therefore can monitor students’ behaviour over time, in different courses. The paper demonstrates how this approach can be integrated in an adaptive mechanism that enables LMSs to automatically generate courses that fit students’ learning styles and discusses how dynamic student modelling can help in identifying students’ learning styles more accurately, which enables the LMS to provide more accurate adaptivity and therefore support students’ learning processes more effectively.