
The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years.

James Moor

AI Magazine, Volume 27, Number 4 (2006). © 2006 American Association for Artificial Intelligence.
The Dartmouth College Artificial Intelligence Conference: The Next 50 Years (AI@50) took place July 13–15, 2006. The conference had three objectives: to celebrate the Dartmouth Summer Research Project, which occurred in 1956; to assess how far AI has progressed; and to project where AI is going or should be going. AI@50 was generously funded by the office of the Dean of Faculty and the office of the Provost at Dartmouth College, by DARPA, and by some private donors.
Reflections on 1956
Dating the beginning of any movement is difficult, but the Dartmouth Summer Research Project of 1956 is often taken as the event that initiated AI as a research discipline. John McCarthy, a mathematics professor at Dartmouth at the time, had been disappointed that the papers in Automata Studies, which he coedited with Claude Shannon, did not say more about the possibilities of computers possessing intelligence. Thus, in the proposal written by John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester for the 1956 event, McCarthy wanted, as he explained at AI@50, "to nail the flag to the mast." McCarthy is credited for coining the phrase "artificial intelligence" and solidifying the orientation of the field. It is interesting to speculate whether the field would have been any different had it been called "computational intelligence" or any of a number of other possible labels.

Five of the attendees from the original project attended AI@50 (figure 1). Each gave some recollections. McCarthy acknowledged that the 1956 project did not live up to expectations in terms of collaboration. The attendees did not come at the same time and most kept to their own research agenda. McCarthy emphasized that nevertheless there were important research developments at the time, particularly Allen Newell, Cliff Shaw, and Herbert Simon's Information Processing Language (IPL) and the Logic Theory Machine.
Marvin Minsky commented that, although he had been working on neural nets for his dissertation a few years prior to the 1956 project, he discontinued this earlier work because he became convinced that advances could be made with other approaches using computers. Minsky expressed the concern that too many in AI today try to do what is popular and publish only successes. He argued that AI can never be a science until it publishes what fails as well as what succeeds.
Oliver Selfridge highlighted the importance of many related areas of research before and after the 1956 summer project that helped to propel AI as a field. The development of improved languages and machines was essential. He offered tribute to many early pioneering activities, such as J. C. R. Licklider developing time-sharing, Nat Rochester designing IBM computers, and Frank Rosenblatt working with perceptrons.
Trenchard More was sent to the summer project for two separate weeks by the University of Rochester. Some of the best notes describing the AI project were taken by More, although ironically he admitted that he never liked the use of "artificial" or "intelligence" as terms for the field.
Ray Solomonoff said he went to the summer project hoping to convince everyone of the importance of machine learning. He came away knowing a lot about Turing machines, which informed his future work.
Thus, in some respects the 1956 summer research project fell short of expectations. The participants came at various times and worked on their own projects, and hence it was not really a conference in the usual sense. There was no agreement on a general theory of the field and in particular on a general theory of learning. The field of AI was launched not by agreement on methodology or choice of problems or general theory, but by the shared vision that computers can be made to perform intelligent tasks. This vision was stated boldly in the proposal for the 1956 conference: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Evaluations at 2006
There were more than three dozen excellent presentations and events at AI@50, and there is not space to give them the individual treatment each deserves. Leading researchers reported on learning, search, networks, robotics, vision, reasoning, language, cognition, and game playing.¹
These presentations documented significant accomplishments in AI over the past half century. Consider robotics as one example. As Daniela Rus pointed out, 50 years ago there were no robots as we know them. There were fixed automata for specific jobs. Today robots are everywhere. They vacuum our homes, explore the oceans, travel over the Martian surface, and win the DARPA Grand Challenge in a race of 132 miles in the Mojave Desert. Rus speculated that in the future we might have our own personal robots as we now have our own personal computers, robots that could be tailored to help us with the kind of activities that each of us wants to do. Robot parts might be smart enough to self-assemble to become the kind of structure we need at a given time.
Much has been accomplished in robotics, and much more seems not too far over the horizon.
Although AI has enjoyed much success over the last 50 years, numerous dramatic disagreements remain within the field. Different research areas frequently do not collaborate, researchers utilize different methodologies, and there still is no general theory of intelligence or learning that unites the discipline.
One of the disagreements that was debated at AI@50 is whether AI should be logic based or probability based. McCarthy continues to be fond of a logic-based approach. Ronald Brachman argued that a core idea in the proposal for the 1956 project was that "a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture" and that this key idea has served as a common basis for much of AI during the past 50 years. This was the AI revolution or, as McCarthy explained, the counter-revolution, as it was an attack on behaviorism, which had become the dominant position in psychology in the 1950s.

Figure 1. Trenchard More, John McCarthy, Marvin Minsky, Oliver Selfridge, and Ray Solomonoff. (Photograph by Joe Mehling.)
David Mumford argued on the contrary that the last 50 years have seen the gradual displacement of brittle logic by probabilistic methods. Eugene Charniak supported this position by explaining how natural language processing is now statistical natural language processing. He stated frankly, "Statistics has taken over natural language processing because it works."
Another axis of disagreement, correlated with the logic versus probability issue, is the psychology versus pragmatic paradigm debate. Pat Langley, in the spirit of Allen Newell and Herbert Simon, vigorously maintained that AI should return to its psychological roots if human-level AI is to be achieved. Other AI researchers are more inclined to explore what succeeds even if done in nonhuman ways. Peter Norvig suggested that searching, particularly given the huge repository of data on the web, can show encouraging signs of solving traditional AI problems, though not in terms of human psychology. For instance, machine translation with a reasonable degree of accuracy between Arabic and English is now possible through statistical methods, though nobody on the relevant research staff speaks Arabic.
Finally, there is the ongoing debate of how useful neural networks might be in achieving AI. Simon Osindero, working with Geoffrey Hinton, discussed more powerful networks. Both Terry Sejnowski and Rick Granger explained how much we have learned about the brain in the last decade and how this information is very suggestive for building computer models of intelligent activity.

Figure 2. Dartmouth Hall, Where the Original Activities Took Place. (Photograph by Joe Mehling.)
These various differences can be taken as a sign of health in the field. As Nils Nilsson put it, there are many routes to the summit. Of course, not all of the methods may be fruitful in the long run. Since we don't know which way is best, it is good to have many explored. Despite all the differences, as in 1956, there is a common vision that computers can do intelligent tasks. Perhaps this vision is all it takes to unite the field.
Projections to 2056
Many predictions about the future of AI were given at AI@50. When asked what AI will be like 50 years from now, the participants from the original conference had diverse positions. McCarthy offered his view that human-level AI is likely but not assured by 2056. Selfridge claimed that computers will do more planning and will incorporate feelings and affect by then but will not be up to human-level AI. Minsky thought what is needed for significant future progress is a few bright researchers pursuing their own good ideas, not doing what their advisors have done. He lamented that too few students today are pursuing such ideas but rather are attracted into entrepreneurship or law. More hoped that machines would always be under the domination of humans and suggested that machines were very unlikely ever to match the imagination of humans. Solomonoff predicted on the contrary that really smart machines are not that far off. The danger, according to him, is political: today disruptive technologies like computing put a great deal of power, power that can be misused, in the hands of individuals and governments.

Figure 3. Dartmouth Hall Commemorative Plaque. (Photograph by Joe Mehling.)
Ray Kurzweil offered a much more optimistic view about progress and claimed that we can be confident of Turing test–capable AI within a quarter century, a prediction with which many disagreed. Forecasting technological events is always hazardous. Simon once predicted a computer chess champion within 10 years. He was wrong about the 10 years, but it did happen within 40 years. Thus, given an even longer period, another 50 years, it is fascinating to ponder what AI might accomplish.
Sherry Turkle wisely pointed out that the human element is easily overlooked in technological development. Eventually we must relate to such advanced machines if they are developed. The important issue for us may be less about the capabilities of the computers than about our own vulnerabilities when confronted with very sophisticated artificial intelligences.
Several dozen graduate and postdoctoral students were sponsored by DARPA to attend AI@50. Our hope is that many will be inspired by what they observed. Perhaps some of those will present their accomplishments at the 100-year celebration of the Dartmouth Summer Research Project.
Postscript
A plaque honoring the 1956 summer research project has been recently mounted in Dartmouth Hall, the building in which the 1956 summer activities took place (figure 3).

For details of AI@50 and announcements of products resulting from AI@50, please check the conference website.²
Notes
1. For more details on the speakers and topics, check www.dartmouth.edu/~ai50/.

2. www.dartmouth.edu/~ai50/.
Figure 4. The 2006 Conference Logo.

James Moor is a professor of philosophy at Dartmouth College. He is an adjunct professor with The Centre for Applied Philosophy and Public Ethics (CAPPE) at the Australian National University. He earned his PhD in history and philosophy of science at Indiana University. He publishes on philosophy of artificial intelligence, computer ethics, philosophy of mind, philosophy of science, and logic. He is the editor of the journal Minds and Machines and is the president of the International Society for Ethics and Information Technology (INSEIT). He is a recipient of the Association for Computing Machinery SIGCAS "Making a Difference" award.