The Dartmouth College Artificial Intelligence Conference: The Next Fifty Years

James Moor

AI Magazine, Volume 27, Number 4 (Winter 2006)
The Dartmouth College Artificial Intelligence Conference: The Next 50 Years (AI@50) took place July 13–15, 2006. The conference had three objectives: to celebrate the Dartmouth Summer Research Project, which occurred in 1956; to assess how far AI has progressed; and to project where AI is going or should be going. AI@50 was generously funded by the office of the Dean of Faculty and the office of the Provost at Dartmouth College, by DARPA, and by some private donors.
Reflections on 1956
Dating the beginning of any movement is difficult, but the Dartmouth Summer Research Project of 1956 is often taken as the event that initiated AI as a research discipline. John McCarthy, a mathematics professor at Dartmouth at the time, had been disappointed that the papers in Automata Studies, which he coedited with Claude Shannon, did not say more about the possibilities of computers possessing intelligence. Thus, in the proposal written by John McCarthy, Marvin Minsky, Claude Shannon, and Nathaniel Rochester for the 1956 event, McCarthy wanted, as he explained at AI@50, "to nail the flag to the mast." McCarthy is credited with coining the phrase "artificial intelligence" and solidifying the orientation of the field. It is interesting to speculate whether the field would have been any different had it been called "computational intelligence" or any of a number of other possible labels.

Five of the attendees from the original project attended AI@50 (figure 1). Each gave some recollections. McCarthy acknowledged that the 1956 project did not live up to expectations in terms of collaboration. The attendees did not come at the same time, and most kept to their own research agendas. McCarthy emphasized that there were nevertheless important research developments at the time, particularly Allen Newell, Cliff Shaw, and Herbert Simon's Information Processing Language (IPL) and the Logic Theory Machine.

Marvin Minsky commented that, although he had been working on neural nets for his dissertation a few years prior to the 1956 project, he discontinued this earlier work because he became convinced that advances could be made with other approaches using computers. Minsky expressed the concern that too many in AI today try to do what is popular and publish only successes. He argued that AI can never be a science until it publishes what fails as well as what succeeds.
Oliver Selfridge highlighted the importance of many related areas of research, before and after the 1956 summer project, that helped to propel AI as a field. The development of improved languages and machines was essential. He paid tribute to many early pioneering activities, such as J. C. R. Licklider's development of time-sharing, Nat Rochester's design of IBM computers, and Frank Rosenblatt's work with perceptrons.
Trenchard More was sent to the summer project for two separate weeks by the University of Rochester. Some of the best notes describing the AI project were taken by More, although, ironically, he admitted that he never liked "artificial" or "intelligence" as terms for the field.

Ray Solomonoff said he went to the summer project hoping to convince everyone of the importance of machine learning. He came away knowing a lot about Turing machines, knowledge that informed his future work.
Thus, in some respects the 1956 summer research project fell short of expectations. The participants came at various times and worked on their own projects; hence it was not really a conference in the usual sense. There was no agreement on a general theory of the field, and in particular on a general theory of learning. The field of AI was launched not by agreement on methodology or choice of problems or general theory, but by the shared vision that computers can be made to perform intelligent tasks. This vision was stated boldly in the proposal for the 1956 conference: "The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Evaluations at 2006
There were more than three dozen excellent presentations and events at AI@50, and there is not space to give them the individual treatment each deserves. Leading researchers reported on learning, search, networks, robotics, vision, reasoning, language, cognition, and game playing.1

These presentations documented significant accomplishments in AI over the past half century. Consider robotics as one example. As Daniela Rus pointed out, 50 years ago there were no robots as we know them; there were fixed automata for specific jobs. Today robots are everywhere. They vacuum our homes, explore the oceans, travel over the Martian surface, and win the DARPA Grand Challenge in a race of 132 miles in the Mojave Desert. Rus speculated that in the future we might have our own personal robots as we now have our own personal computers, robots that could be tailored to help us with the kinds of activities each of us wants to do. Robot parts might be smart enough to self-assemble into the kind of structure we need at a given time. Much has been accomplished in robotics, and much more seems not too far over the horizon.
Although AI has enjoyed much success over the last 50 years, numerous dramatic disagreements remain within the field. Different research areas frequently do not collaborate, researchers utilize different methodologies, and there still is no general theory of intelligence or learning that unites the discipline.
One of the disagreements debated at AI@50 is whether AI should be logic based or probability based. McCarthy continues to be fond of a logic-based approach. Ronald Brachman argued that a core idea in the proposal for the 1956 project was that "a large part of human thought consists of manipulating words according to rules of reasoning and rules of conjecture" and that this key idea has served as a common basis for much of AI during the past 50 years. This was the AI revolution or, as McCarthy explained, the counter-revolution, as it was an attack on behaviorism, which had become the dominant position in psychology in the 1950s.

Figure 1. Trenchard More, John McCarthy, Marvin Minsky, Oliver Selfridge, and Ray Solomonoff. (Photograph: Joe Mehling.)
David Mumford argued, on the contrary, that the last 50 years have seen the gradual displacement of brittle logic by probabilistic methods. Eugene Charniak supported this position by explaining how natural language processing has become statistical natural language processing. He stated frankly, "Statistics has taken over natural language processing because it works."
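
Charniak's point can be made concrete with a toy sketch. The following is an illustration, not anything presented at AI@50: the tiny corpus, the bigram model, and the add-one smoothing are all invented here to show the flavor of the statistical approach, in which probabilities estimated from counted data, rather than hand-written grammar rules, decide which word sequences are plausible.

from collections import Counter

# Tiny invented corpus; a real system would use millions of sentences.
corpus = "the dog barks . the cat meows . the dog runs .".split()

unigrams = Counter(corpus)                  # single-word counts
bigrams = Counter(zip(corpus, corpus[1:]))  # adjacent-pair counts
vocab = len(unigrams)

def prob(w1, w2):
    # P(w2 | w1) with add-one smoothing (an assumption of this sketch),
    # so pairs never seen in the corpus still get a little probability.
    return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)

def score(sentence):
    # Multiply conditional probabilities along the sentence; the model
    # favors word orders for which the corpus provides evidence.
    result = 1.0
    for w1, w2 in zip(sentence, sentence[1:]):
        result *= prob(w1, w2)
    return result

print(score("the dog barks".split()))  # relatively high
print(score("barks dog the".split()))  # relatively low

No grammar rule declares "barks dog the" ill formed; the counts alone demote it, which is the sense in which statistics "works."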
Another axis of disagreement, correlated with the logic versus probability issue, is the debate between the psychological and the pragmatic paradigms. Pat Langley, in the spirit of Allen Newell and Herbert Simon, vigorously maintained that AI should return to its psychological roots if human-level AI is to be achieved. Other AI researchers are more inclined to explore what succeeds, even if it is done in nonhuman ways. Peter Norvig suggested that search, particularly given the huge repository of data on the web, can show encouraging signs of solving traditional AI problems, though not in terms of human psychology. For instance, machine translation between Arabic and English with a reasonable degree of accuracy is now possible through statistical methods, though nobody on the relevant research staff speaks Arabic.
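
The kind of system Norvig described is commonly formulated, in the statistical machine translation literature rather than in the talk itself, as a noisy-channel model: choose the English sentence e that maximizes

\hat{e} = \arg\max_e P(e \mid a) = \arg\max_e P(a \mid e)\, P(e)

where P(e) is a language model estimated from English text alone and P(a | e) is a translation model estimated from aligned Arabic–English sentence pairs. Both factors come from counted data, which is why no Arabic speaker is needed on the research staff.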
Finally, there is the ongoing debate over how useful neural networks might be in achieving AI. Simon Osindero, working with Geoffrey Hinton, discussed more powerful networks. Both Terry Sejnowski and Rick Granger explained how much we have learned about the brain in the last decade and how suggestive this information is for building computer models of intelligent activity.

Figure 2. Dartmouth Hall, Where the Original Activities Took Place. (Photograph: Joe Mehling.)
These various differences can be taken as a sign of health in the field. As Nils Nilsson put it, there are many routes to the summit. Of course, not all of the methods may be fruitful in the long run. Since we don't know which way is best, it is good to have many explored. Despite all the differences, as in 1956, there is a common vision that computers can do intelligent tasks. Perhaps this vision is all it takes to unite the field.
Projections to 2056
Many predictions about the future of AI were given at AI@50. When asked what AI will be like 50 years from now, the participants from the original conference had diverse positions. McCarthy offered his view that human-level AI is likely, but not assured, by 2056. Selfridge claimed that computers will do more planning and will incorporate feelings and affect by then but will not be up to human-level AI. Minsky thought that what is needed for significant future progress is a few bright researchers pursuing their own good ideas, not doing what their advisors have done. He lamented that too few students today are pursuing such ideas, being attracted instead into entrepreneurship or law. More hoped that machines would always be under the domination of humans and suggested that machines were very unlikely ever to match the imagination of humans. Solomonoff predicted, on the contrary, that really smart machines are not that far off. The danger, according to him, is political: today disruptive technologies like computing put a great deal of power, a power that can be misused, in the hands of individuals and governments.

Figure 3. Dartmouth Hall Commemorative Plaque. (Photograph: Joe Mehling.)
Ray Kurzweil offered a much more optimistic view about progress and claimed that we can be confident of Turing test–capable AI within a quarter century, a prediction with which many disagreed. Forecasting technological events is always hazardous. Simon once predicted a computer chess champion within 10 years. He was wrong about the 10 years, but it did happen within 40 years. Thus, given an even longer period, another 50 years, it is fascinating to ponder what AI might accomplish.
Sherry Turkle wisely pointed out that the human element is easily overlooked in technological development. Eventually we must relate to such advanced machines if they are developed. The important issue for us may be less about the capabilities of the computers than about our own vulnerabilities when confronted with very sophisticated artificial intelligences.
Several dozen graduate and postdoctoral students were sponsored by DARPA to attend AI@50. Our hope is that many will be inspired by what they observed. Perhaps some of them will present their accomplishments at the 100-year celebration of the Dartmouth Summer Research Project.
Postscript
A plaque honoring the 1956 summer research project has recently been mounted in Dartmouth Hall, the building in which the 1956 summer activities took place (figure 3).

For details of AI@50 and announcements of products resulting from AI@50, please check the conference website.2
Notes

1. For more details on the speakers and topics, check www.dartmouth.edu/~ai50/.

2. www.dartmouth.edu/~ai50/.
Figure 4. The 2006 Conference Logo.

James Moor is a professor of philosophy at Dartmouth College. He is an adjunct professor with the Centre for Applied Philosophy and Public Ethics (CAPPE) at the Australian National University. He earned his PhD in history and philosophy of science at Indiana University. He publishes on philosophy of artificial intelligence, computer ethics, philosophy of mind, philosophy of science, and logic. He is the editor of the journal Minds and Machines and is the president of the International Society for Ethics and Information Technology (INSEIT). He is a recipient of the Association for Computing Machinery (ACM) SIGCAS "Making a Difference" award.