History of Artificial Intelligence
Maad M. Mijwel
Computer science, college of science,
University of Baghdad
Baghdad, Iraq
maadalnaimiy@yahoo.com
April 2015
__________________________________________________*****_________________________________________________
Abstract - Artificial intelligence is finding its way into ever more areas of life. The latest trend is AI chips and related applications on smartphones. However, the technology has its origins in the 1950s, with the Dartmouth Summer Research Project on Artificial Intelligence at Dartmouth College, USA. Its beginnings go back even further, to the work of Alan Turing - to whom the well-known Turing test is owed - as well as Allen Newell and Herbert A. Simon. With IBM's chess computer Deep Blue, which in 1997 became the first machine to beat the then-reigning world chess champion Garry Kasparov in a match, artificial intelligence moved into the focus of the world public. In data centers and on mainframes, AI algorithms have been in use for many years.
__________________________________________________*****_________________________________________________
I. INTRODUCTION
In recent years, incredible progress has been made in computer science and AI. Watson, Siri, and deep learning show that AI systems now deliver services that must be considered intelligent and creative. Fewer and fewer companies today can do without artificial intelligence if they want to optimize their business or save money.
AI systems are undoubtedly very useful. As the world becomes more complex, we need to make the most of our human resources, and high-quality computer systems help with this. That also applies to applications that require intelligence. The other side of the AI coin is that the possibility of a machine possessing intelligence scares many people. Most people believe that intelligence is something unique, something that distinguishes Homo sapiens. But if intelligence can be mechanized, what is unique about humans, and what sets them apart from machines?
The quest for an artificial copy of man, and the complex of questions it raises, is not new. The reproduction and imitation of thought already occupied our ancestors. From the sixteenth century onward, legends and real attempts at artificial creatures abounded: homunculi, mechanical automata, the golem, the Mälzel chess automaton, and Frankenstein were all imaginative or real attempts of past centuries to produce intelligence artificially and to imitate what is essential to us. The idea of turning inanimate objects into intelligent beings by giving them life has long fascinated mankind. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons. The beginnings of modern artificial intelligence can be traced to the classical philosophers' attempt to describe human thought as a symbolic system. However, the field of artificial intelligence was not formally established until 1956, when a conference on "Artificial Intelligence" was held for the first time at Dartmouth College in Hanover, New Hampshire. Cognitive scientist Marvin Minsky of MIT and the other scientists participating in the conference were quite optimistic about the future of artificial intelligence. As Minsky is quoted in Daniel Crevier's book "AI: The Tumultuous History of the Search for Artificial Intelligence": "Within a generation, the problem of creating artificial intelligence will be substantially solved."
One of the most important visionaries and theoreticians was Alan Turing (1912-1954): in 1936, the British mathematician proved that a universal computing machine - now known as the Turing machine - is possible. Turing's central insight was that such a machine is capable of solving any problem that can be represented and solved by an algorithm. Transferred to human intelligence, this means that if cognitive processes can be broken down into finite, well-defined individual steps, they can be executed by a machine. A few decades later, the first practical digital computers were actually built, and thus the "physical vehicle" for artificial intelligence became available.
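To make Turing's insight concrete, the following minimal Python sketch simulates a Turing-style machine driven by an explicit transition table. It is only an illustrative assumption for this article (the table merely flips the bits of a binary string), not material from Turing's own work; any procedure that can be written as such a finite table of well-defined steps could in principle be run the same way.

# Minimal, illustrative Turing-machine simulator (assumed example).
# The transition table maps (state, symbol) -> (new_state, symbol_to_write, head_move).

def run_turing_machine(tape, transitions, state="scan", blank="_", max_steps=1000):
    """Run the machine until it reaches the 'halt' state or max_steps is exceeded."""
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head] if head < len(tape) else blank
        state, write, move = transitions[(state, symbol)]
        if head >= len(tape):
            tape.append(blank)      # extend the tape on demand
        tape[head] = write
        head += move
    return "".join(tape)

# Example table: invert every bit, halt at the first blank cell.
flip_bits = {
    ("scan", "0"): ("scan", "1", +1),
    ("scan", "1"): ("scan", "0", +1),
    ("scan", "_"): ("halt", "_", 0),
}

print(run_turing_machine("10110_", flip_bits))   # prints 01001_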
Alan Turing
Turing's electromechanical machine, considered a precursor of modern computers, managed to break the code used by German submarines in the Atlantic. His work at Bletchley Park, an isolated country house north of London, is considered key to the end of World War II; it only became public in the 1970s, when the role of the brilliant mathematician in the war was revealed. The cryptographers who worked there are credited with helping to shorten World War II by about two years, by deciphering around 3,000 German military messages a day.
The British mathematician Alan Turing, father of modern computing and a key figure in the British victory in World War II for cracking the "unbreakable" Nazi Enigma code, finally received a royal pardon that seeks to amend his criminal conviction for homosexuality, a conviction that led to his suicide.
Nazi code decryption machine
Turing's team deciphered the Enigma code, which the Germans considered unbreakable, and Bletchley Park also designed and built Colossus, one of the first programmable computers. After the war, however, Prime Minister Winston Churchill ordered the destruction of the Colossus computers and of some 200 'Turing bombe' machines to keep them secret from the Soviet Union.
II. ARTIFICIAL INTELLIGENCE HISTORY
To understand the history of artificial intelligence, it is necessary to go back to antiquity. Already in the Ancient Greek era, various ideas about humanoid robots can be found; an example is Daedalus, who according to myth attempted to create artificial humans. Modern artificial intelligence first appears in history as the attempt to describe the classical philosophers' account of human thought as a symbolic system. The nineteenth century is also very important for artificial intelligence: Charles Babbage worked on a mechanical machine that was to exhibit intelligent behavior. However, as a result of these studies, he concluded that he would not be able to produce a machine exhibiting behavior as intelligent as a human being's, and he suspended his work. In 1950, Claude Shannon introduced the idea that computers could play chess. Work on artificial intelligence then continued slowly until the early 1960s.
The official emergence of artificial intelligence dates back to 1956, when the subject was presented for the first time at the Dartmouth College conference. Marvin Minsky predicted, as quoted above, that "the problem of creating artificial intelligence will be substantially solved within a generation." The first artificial intelligence applications were introduced during this period; they were based on logic theorems and the game of chess. The programs developed in this period could also handle the geometric forms used in intelligence tests, which encouraged the idea that intelligent computers could be created.
III. MILESTONES FOR AI HISTORY
In 1950, Alan Turing created a test to determine whether a machine is intelligent. The test measures the intelligence attributed to computers, and at that time the intelligence level of machines that passed it was considered adequate.

LISP (List Processing Language), developed by John McCarthy in 1958, is a functional programming language created for artificial intelligence. One of the oldest and most powerful programming languages, LISP allows flexible programs to be built from basic operations on list structures (a small illustrative sketch appears just before the chronological list below).

The years between 1965 and 1970 could be called a dark period for artificial intelligence. The developments of this period were too few to be put to the test. The hasty and optimistic attitude driven by unrealistic expectations had suggested that it would be easy to produce machines with intelligence, but the period earned its dark name because the idea of creating intelligent machines simply by loading them with data did not succeed.

Between 1970 and 1975, artificial intelligence gained momentum. Thanks to the successes achieved by artificial intelligence systems developed for subjects such as disease diagnosis, the basis of today's artificial intelligence was established. During the period 1975-1980, researchers developed the idea that artificial intelligence could benefit from other branches of science such as psychology.

Artificial intelligence began to be used in large projects with practical applications in the 1980s. From then on, artificial intelligence was adapted to solve real-life problems. Even where the needs of users were already met by traditional methods, the use of artificial intelligence reached a much wider range thanks to more economical software and tools.
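As mentioned above, here is a small illustrative sketch of the list-based symbolic style that LISP pioneered, written in Python for readability; the tiny evaluator and the sample expression are assumptions made for this article, not code from the historical LISP systems.

# Symbolic expressions represented as nested lists, in the spirit of LISP
# (an assumed illustration, not historical code).

def evaluate(expr):
    """Evaluate a tiny arithmetic expression written as nested lists,
    e.g. ["+", 1, ["*", 2, 3]] evaluates to 7."""
    if isinstance(expr, (int, float)):
        return expr
    operator, *operands = expr
    values = [evaluate(arg) for arg in operands]
    if operator == "+":
        return sum(values)
    if operator == "*":
        result = 1
        for value in values:
            result *= value
        return result
    raise ValueError("unknown operator: " + str(operator))

print(evaluate(["+", 1, ["*", 2, 3]]))   # prints 7

The point of the example is that the program's data (the nested list) has the same shape as a program, which is what made LISP so flexible for early AI work.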
History of AI in chronological order:
1st century AD: Heron of Alexandria made automatons with mechanical mechanisms powered by water and steam.
1206: Al-Jazari (Ismail ibn al-Razzaz al-Jazari), one of the pioneers of cybernetics, built water-powered, automatically controlled machines.
1623: Wilhelm Schickard invented a mechanical calculator capable of the four basic arithmetic operations.
1672: Gottfried Leibniz developed the binary number system that forms the abstract basis of today's computers.
1822-1859: Charles Babbage worked on mechanical calculating machines. Ada Lovelace is regarded as the first
computer programmer because of the work she did with punched cards for Babbage's machines; her work includes algorithms.
1923: Karel Čapek first introduced the concept of the robot in his theater play R.U.R. (Rossum's Universal Robots).
1931: Kurt Gödel introduced the incompleteness theorems that now bear his name.
1936: Konrad Zuse developed a programmable computer, the Z1, with a 64-word memory.
1946: ENIAC (Electronic Numerical Integrator and Computer), a room-sized, 30-ton machine and one of the first electronic computers, began operating.
1948: John von Neumann introduced the idea of the self-replicating program.
1950: Alan Turing, founder of computer science,
introduced the concept of the Turing Test.
1951: The first artificial intelligence programs were written for the Ferranti Mark 1 machine.
1956: The Logic Theorist (LT) program for proving mathematical theorems was introduced by Newell, Shaw, and Simon; it is regarded as the first artificial intelligence program.
Late 1950s to early 1960s: Margaret Masterman and colleagues developed semantic networks for machine translation.
1958: John McCarthy of MIT created the LISP (List Processing) language.
1960: J. C. R. Licklider described the human-machine relationship in his work "Man-Computer Symbiosis".
1962: Unimation was established as the first company
to produce robots for the industrial field.
1965: ELIZA, an artificial intelligence conversation program, was written.
1966: Shakey, the first general-purpose mobile robot, was produced at the Stanford Research Institute (SRI).
1973: DARPA began development of the protocols that became TCP/IP.
1974: The Internet began to be used for the first time.
1978: Herbert Simon won the Nobel Prize in Economics for his theory of bounded rationality, an important contribution related to artificial intelligence.
1981: IBM introduced its first personal computer.
1993: Production of Cog, a humanoid robot, began at MIT.
1997: The supercomputer Deep Blue defeated world chess champion Garry Kasparov.
1998: Furby, the first widely marketed artificial-intelligence toy, was brought to market.
2000: Kismet, a robot that can use gestures and facial expressions in communication, was introduced.
2005: ASIMO, the robot that comes closest to human ability and skill among artificial intelligence robots, was introduced.
2010: ASIMO was made to act using signals from the human mind (a brain-machine interface).
IV. WHAT IS ARTIFICIAL INTELLIGENCE?
Artificial intelligence is the general name for the technology of developing machines that are created entirely by artificial means, without making use of any living organism, and that can exhibit human-like behavior.
Artificial intelligence products that, in the idealized view, are completely human-like and can do things such as feel, foresee, and make decisions are generally called robots.
The first steps toward artificial intelligence were taken with Alan Mathison Turing's question "Can machines think?"; the development of computers and of various military weapon technologies during World War II was one of the most important factors in its emergence.
The concept of machine intelligence, which emerged from various coding algorithms and data studies, shows that all the technological devices produced, from the first computers to today's smartphones, have been developed with human beings as their model. Artificial intelligence, which advanced very slowly in earlier periods but now takes important steps day by day, shows how much progress has been made with the emergence of today's gifted robots.
McCulloch and Pitts, drawing on artificial intelligence studies, artificial nerve cells, and other branches of science, introduced the possibility of assigning various functions to machines in product development modeled on human behavior. In the same spirit, the first steps toward one-armed robot workers in factories were taken. In 1956, in the course of the work carried out by McCarthy, Minsky, Shannon, and Rochester, the name "artificial intelligence" was put forward by McCarthy, who can therefore be described as the father of the term.
Warren McCulloch & Walter Pitts
Although symbolic and cybernetic artificial intelligence studies represent different currents, the two currents got off to a bad
start and could not progress as expected on either side. In symbolic artificial intelligence studies, robots could not give exactly the expected responses and answers to people's questions, while on the cybernetic side the artificial neural networks likewise fell short of expectations, so the work on neither side could be called a literal success.
After these failures in symbolic and cybernetic artificial intelligence studies, which had developed on different sides, the field turned toward specialized artificial intelligence work that pursues a single purpose rather than many different branches and approaches.
While the concept of artificial intelligence stimulated further studies, the fact that artificial intelligence products did not have enough knowledge of the domains they were working in brought various problems. However, the developers who found rational solutions to these problems brought artificial intelligence to a commercial level, and the artificial intelligence industry that emerged in the following periods showed that successful work can translate into billions of dollars.
Recent developments in artificial intelligence studies have revealed the importance of language. As anthropology and the human sciences show, people think with language and carry out various functions through it, which is why language has moved to the forefront of artificial intelligence studies in recent years.
Later, a number of artificial intelligence markup languages appeared out of the language studies pursued by supporters of symbolic artificial intelligence. Today, studies carried out by symbolic artificial intelligence developers benefit from these artificial intelligence languages and have even made it possible to demonstrate robots that can speak.
V. EXPERT SYSTEMS 1975 TO 1985
In the third era, starting in the mid-1970s, researchers broke away from toy worlds and tried to build practically usable systems, with methods of knowledge representation in the foreground. AI left its ivory tower, and AI research became known to a wider public. Initiated by the US computer scientist Edward Feigenbaum, expert system technology was initially limited to universities. Little by little, however, expert systems developed into a modest commercial success and, for many, became synonymous with AI research as a whole, just as machine learning is often equated with AI today.
In an expert system, the knowledge of a particular subject area is represented in the form of rules and large knowledge bases. The best-known expert system was MYCIN, developed by Edward Shortliffe at Stanford University. It was used to support diagnostic and therapeutic decisions for blood infections and meningitis. An evaluation attested that its decisions were as good as those of an expert in the field and better than those of a non-expert.
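As a rough illustration of how such rule-based knowledge can be processed, the following Python sketch implements a tiny forward-chaining engine. The rules and facts are invented placeholders chosen only to suggest the idea; they are not taken from MYCIN's actual knowledge base, which also attached certainty factors to its conclusions.

# Tiny forward-chaining rule engine in the spirit of an expert system
# (rules and facts are invented placeholders, not MYCIN's real rules).

rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "positive_culture"}, "recommend_antibiotic"),
]

def forward_chain(facts, rules):
    """Fire every rule whose conditions are all satisfied until nothing new can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"fever", "stiff_neck", "positive_culture"}, rules))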
Starting with MYCIN, a large number of other expert systems with more complex architectures and extensive rule sets were developed and used in various fields: in medicine, for example, PUFF (interpretation of lung-test data) and CADUCEUS (diagnostics in internal medicine); in chemistry, DENDRAL (analysis of molecular structure); in geology, PROSPECTOR (analysis of rock formations); and in computer science, the system R1 for configuring computers, which saved Digital Equipment Corporation (DEC) an estimated $40 million a year.
Even the area of language processing, in the shadow of the expert-system euphoria, was oriented toward practical problems. A typical example is the dialog system HAM-ANS, with which a dialogue can be conducted in various fields of application. Natural-language interfaces to databases and operating systems, such as Intellect, F&A, or DOS-MAN, penetrated the commercial market.
VI. THE RENAISSANCE OF NEURAL NETWORKS, 1985 TO 1990
In the early 1980s, Japan announced the ambitious "Fifth Generation" project, which was intended, among other things, to carry out practically applicable cutting-edge AI research. For AI development the Japanese favored the programming language PROLOG, which had been introduced in the seventies as the European counterpart to the US-dominated LISP. In PROLOG, a certain form of predicate logic can be used directly as a programming language. Japan and Europe were subsequently largely PROLOG-dominated, while the US continued to rely on LISP.
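The flavor of this idea can be suggested with a short sketch, written here in Python for consistency with the other examples rather than in PROLOG itself; the family-relation facts and the grandparent rule are assumptions made purely for illustration.

# Prolog-style knowledge: facts plus a rule, queried by deduction
# (illustrative assumption, not actual PROLOG code).

parent_facts = {("alice", "bob"), ("bob", "carol")}   # parent(alice, bob). parent(bob, carol).

def grandparent(x, z):
    """grandparent(X, Z) :- parent(X, Y), parent(Y, Z)."""
    intermediates = {child for (_, child) in parent_facts}
    return any((x, y) in parent_facts and (y, z) in parent_facts for y in intermediates)

print(grandparent("alice", "carol"))   # prints True

In PROLOG the rule in the docstring would itself be the executable program; the point here is only that a statement of predicate logic can double as code.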
In the mid-1980s, symbolic AI got competition from the resurrected neural networks. Based on results from brain research, McCulloch, Pitts, and Hebb had developed the first mathematical models of artificial neural networks in the 1940s, but powerful computers were lacking at the time. Now, in the eighties, the McCulloch-Pitts neuron experienced a renaissance in the form of so-called connectionism.
Unlike symbol-processing AI, connectionism is oriented more toward the biological model of the brain. Its basic idea is that information processing is based on the interaction of many simple, uniform processing elements and is highly parallel. Neural networks offered impressive performance, especially in the field of learning. The NETtalk program was able to learn to speak from example material: given a limited set of written words together with their pronunciations as phoneme chains, such a net could learn how to pronounce English words correctly and apply what it had learned to unknown words.
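A minimal sketch of the building block behind connectionism, a McCulloch-Pitts-style threshold neuron, is shown below in Python; the particular weights and threshold (which make the unit compute a logical AND) are illustrative assumptions, and real networks such as NETtalk combined many such units and learned their weights from data.

# A single threshold neuron in the McCulloch-Pitts spirit (assumed illustration).

def neuron(inputs, weights, threshold):
    """Fire (output 1) when the weighted sum of the inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With weights (1, 1) and threshold 2 the unit behaves like logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron((a, b), (1, 1), threshold=2))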
But even this second attempt came too early for neural networks. Although funding was booming, the limits were also clear: there was not enough training data, solutions for structuring and modularizing the networks were missing, and the computers available before the millennium were still too slow.
VII. THE 5 BEST FILMS ABOUT A.I.
1- Bicentennial Man (1999)
It is based on a homonymous story by Isaac Asimov himself. NDR ("Andrew") is a robot that has been acquired by a family to perform cleaning tasks. But there is something that makes him special: he is able to identify emotions, something no robot is programmed to do.

2- I, Robot (2004)
This time the star of the cast is Will Smith as the detective. The film was not very well received among followers of Asimov, because the bulk of the plot is not based on any of his books and only borrows some of their elements. The script is signed by Jeff Vintar, who had to incorporate, at the request of the producers, the Three Laws of Robotics and other ideas of Isaac Asimov after the producer acquired the rights to the title of that author's book.

3- Artificial Intelligence: AI (2001)
Steven Spielberg adapted a story written by Brian Aldiss entitled "Supertoys Last All Summer Long", with some influence from "The Adventures of Pinocchio". In the film we meet David, a robot child capable of showing feelings such as love. It grew out of a Stanley Kubrick project started at the beginning of the 70s that could not be made in his day because computer-generated imagery was not yet advanced enough; for that reason, at the end of Spielberg's film there is a dedication: "For Stanley Kubrick".

4- Blade Runner (1982)
The film bears the signature of Ridley Scott, which in itself is a reason to want to see it. It delves into the consequences of the penetration of technology into a society of a not-so-distant future and is considered a cult movie. It is based on the novel "Do Androids Dream of Electric Sheep?" by Philip K. Dick, an author who has inspired countless films. This film is essential in any science-fiction library.

5- Ex Machina (2015)
We finish with a more recent production, so it is still too early to see what mark this film will leave on moviegoers. What we can confirm is that it won an Oscar for Best Visual Effects and enjoyed a very good reception from the public and critics. We follow the story of Caleb, a programmer who is invited by his company to perform a test with an android that has artificial intelligence.