History of Artificial Intelligence
Maad M. Mijwel
Computer Science, College of Science,
University of Baghdad
Baghdad, Iraq
maadalnaimiy@yahoo.com
April 2015
__________________________________________________*****_________________________________________________
Abstract — Artificial intelligence is finding its way into ever more areas of life. The latest craze is AI chips and related applications on the smartphone. Yet the technology's history began as early as the 1950s, with the Dartmouth Summer Research Project on Artificial Intelligence at Dartmouth College, USA. Its beginnings go back even further, to the work of Alan Turing, to whom the well-known Turing test traces back, and of Allen Newell and Herbert A. Simon. With IBM's chess computer Deep Blue, which in 1997 became the first machine to beat the then-reigning world chess champion, Garry Kasparov, in a match, artificial intelligence moved into the focus of the world public. In data centers and on mainframes, AI algorithms have been in use for many years.
__________________________________________________*****_________________________________________________
I. INTRODUCTION
In recent years, incredible progress has been made in computer science and AI. Watson, Siri, and deep learning show that AI systems now deliver services that must be considered intelligent and creative. And fewer and fewer companies today can do without artificial intelligence if they want to optimize their business or save money.
AI systems are undoubtedly very useful. As the world becomes more complex, we must make the most of our human resources, and high-quality computer systems can help with that. This also applies to applications that require intelligence. The other side of the coin is that the possibility of a machine possessing intelligence scares many people. Most believe that intelligence is something unique, the very thing that distinguishes Homo sapiens. But if intelligence can be mechanized, what is unique about humans, and what sets them apart from machines?
The quest for an artificial copy of man, and the complex of questions it raises, is not new. The reproduction and imitation of thought already occupied our ancestors. From the sixteenth century onward, legends of artificial creatures and real attempts to build them abounded. Homunculi, mechanical automata, the golem, Maelzel's chess automaton, and Frankenstein's monster were all imaginative or real attempts of past centuries to artificially produce intelligence and to imitate what is essential to us.
The idea of giving life to inanimate objects and making them intelligent beings has long fascinated mankind. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons. The traces of modern artificial intelligence begin with the attempt to describe the classical philosophers' account of human thought as a symbolic system. However, the field of artificial intelligence was not formally established until 1956, when a conference on "Artificial Intelligence" was held for the first time at Dartmouth College in Hanover, New Hampshire. Cognitive scientist Marvin Minsky of MIT and the other scientists attending the conference were quite optimistic about the future of artificial intelligence. As Minsky is quoted in Daniel Crevier's book "AI: The Tumultuous History of the Search for Artificial Intelligence": "Within a generation the problem of creating 'artificial intelligence' will substantially be solved."
One of the most important visionaries and theoreticians was Alan Turing (1912-1954). In 1936, the British mathematician proved that a universal computing machine - now known as the Turing machine - is possible. Turing's central insight is that such a machine is capable of solving any problem, as long as the problem can be represented and solved by an algorithm. Transferred to human intelligence, this means that if cognitive processes can be broken down into finite, well-defined individual steps of an algorithm, they can be executed on a machine. A few decades later, the first practical digital computers were actually built, and thus the "physical vehicle" for artificial intelligence became available.
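To make Turing's insight concrete, here is a minimal sketch of a Turing machine simulator in Python. It is not part of the original paper: the particular machine, which merely flips the bits of a binary string, and all names in it are invented for illustration.

```python
# A minimal Turing machine simulator: a finite control reads and writes
# symbols on a tape, one well-defined step at a time. This illustrative
# machine flips every bit of a binary input and then halts.
def run_turing_machine(tape, rules, state="scan", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else blank
        write, move, state = rules[(state, symbol)]  # one algorithmic step
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Transition table: (state, symbol read) -> (symbol written, move, next state)
rules = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("10110", rules))  # -> 01001
```

Any procedure that can be written as such a finite transition table can be run by the same simulator loop; that is the sense in which a single machine can, in principle, execute any algorithm.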
[Photo: Alan Turing]
Turing's electromechanical machine, considered a precursor of modern computers, managed to unlock the code used by the German submarines in the Atlantic. His work at Bletchley Park, an isolated country house north of London, is considered key to ending World War II; it was made public only in the 1970s, when the brilliant mathematician's role in the war was revealed. The cryptographers who worked there helped shorten World War II by about two years by deciphering around 3,000 German military messages a day.
[Photo caption: The British mathematician Alan Turing, father of modern computing and a key figure in the British victory in World War II for cracking the "unbreakable" Nazi Enigma code, finally received a royal pardon seeking to amend his criminal conviction for homosexuality, a conviction that led to his suicide.]
[Photo: Nazi code decryption machine]
Turing's team deciphered the Enigma code, which the Germans had considered unbreakable, and Bletchley Park also designed and developed Colossus, one of the first programmable computers. After the war, however, Prime Minister Winston Churchill ordered the destruction of the Colossus computers and some 200 "Turing bombe" machines to keep them secret from the Soviet Union.
II. ARTIFICIAL INTELLIGENCE HISTORY
To understand the history of artificial intelligence, it is necessary to go back to antiquity. From the Ancient Greek era there is evidence of various ideas about humanoid machines; one example is Daedalus, said in mythology to command the winds, who is held to have tried to create artificial humans. Modern artificial intelligence first appears in history as the attempt to define the philosophers' account of the system of human thought. The nineteenth century is also very important for artificial intelligence: Charles Babbage worked then on a mechanical machine that would exhibit intelligent behavior. As a result of these studies, however, he decided that he could not produce a machine that behaved as intelligently as a human being, and he suspended his work. In 1950, Claude Shannon introduced the idea that computers could play chess. Work on artificial intelligence then continued slowly until the early 1960s.
The official emergence of artificial intelligence dates back to 1956, when a conference session on artificial intelligence was introduced for the first time at Dartmouth College. Marvin Minsky's prediction that "the problem of creating artificial intelligence will be substantially solved within a generation" dates from this optimistic period. The first artificial intelligence applications were also introduced at this time, based on logic theorem proving and the game of chess. The programs developed during this period could distinguish the geometric forms used in intelligence tests, which encouraged the idea that intelligent computers could be created.
III. MILESTONES IN AI HISTORY
In 1950, Alan Turing devised a test to determine whether a machine is intelligent: the Turing test measures the intelligence attributed to computers, and at the time, the intelligence level of a machine that passed the test was considered adequate.
LISP (List Processing language), developed by John McCarthy in 1958, is a functional programming language created for artificial intelligence. One of the oldest and most powerful programming languages still in use, LISP makes it possible to build flexible programs whose basic operations work on list structures.
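Since the paper shows no LISP itself, the following Python sketch (an invented toy example, not from the original) illustrates the list-processing idea LISP pioneered: a program is itself a nested list, and computation is recursion over that list structure.

```python
# LISP's core idea sketched in Python: programs and data are both nested
# lists, and evaluation is recursion over those lists.
def evaluate(expr):
    if not isinstance(expr, list):        # an atom evaluates to itself
        return expr
    op, *args = expr
    values = [evaluate(a) for a in args]  # evaluate sub-lists recursively
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

# The LISP expression (+ 1 (* 2 3)) written as a nested list:
print(evaluate(["+", 1, ["*", 2, 3]]))  # -> 7
```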
The period between 1965 and 1970 could be called a dark period for artificial intelligence: developments in these years were too few to be tested. The hasty, optimistic attitude born of unrealistic expectations had suggested that machines with intelligence would be easy to produce; because the idea of creating intelligent machines simply by loading in data did not succeed, the period came to be known as a dark one for artificial intelligence. Between 1970 and 1975, artificial intelligence gained momentum: thanks to successes in systems developed for subjects such as disease diagnosis, the foundation of today's artificial intelligence was laid. During 1975-1980, the idea developed that artificial intelligence could benefit from other branches of science, such as psychology.
Artificial intelligence began to be used in large projects with practical applications in the 1980s. From that point on, artificial intelligence was adapted to solving real-life problems. Even where users' needs were already met by traditional methods, more economical software and tools allowed the use of artificial intelligence to reach a much wider range.
History of AI in Chronological Order:
1st century AD: Heron of Alexandria made automatons with mechanical mechanisms working on water and steam power.
1206: Ismail al-Jazari, one of the pioneers of cybernetics, built water-powered, automatically controlled machines.
1623: Wilhelm Schickard invented a mechanical calculator capable of the four basic operations.
1672: Gottfried Leibniz developed the binary number system that forms the abstract basis of today's computers.
1822-1859: Charles Babbage worked on mechanical calculating machines. Ada Lovelace is regarded as the first computer programmer for the work she did with Babbage on the punched cards for his machines; her notes include the first published algorithms.
1923: Karel Čapek introduced the robot concept for the first time in his theatre play R.U.R. (Rossum's Universal Robots).
1931: Kurt Gödel introduced the incompleteness theorems that now bear his name.
1936: Konrad Zuse began developing the Z1, a programmable computer with a 64-word memory.
1946: ENIAC (Electronic Numerical Integrator and Computer), a room-sized, 30-ton machine and one of the first general-purpose electronic computers, began operating.
1948: John von Neumann introduced the idea of the self-replicating program.
1950: Alan Turing, a founder of computer science, introduced the concept of the Turing test.
1951: The first artificial intelligence programs were written for the Ferranti Mark 1 machine.
1956: The Logic Theorist (LT) program for proving mathematical theorems was introduced by Newell, Shaw, and Simon; it is regarded as the first artificial intelligence system.
Late 1950s to early 1960s: Margaret Masterman and colleagues developed semantic networks for machine translation.
1958: John McCarthy of MIT created the LISP (List Processing) language.
1960: J.C.R. Licklider described the human-machine relationship in his work "Man-Computer Symbiosis".
1962: Unimation was established as the first company producing robots for industry.
1965: The conversational artificial intelligence program ELIZA was written by Joseph Weizenbaum.
1966: "Shakey", the first mobile robot, was produced at the Stanford Research Institute.
1973: DARPA began developing the protocols that became known as TCP/IP.
1974: The Internet was used for the first time.
1978: Herbert Simon received the Nobel Prize in Economics for his theory of bounded rationality, an important contribution to artificial intelligence.
1981: IBM produced its first personal computer.
1993: Production of Cog, a humanoid robot, began at MIT.
1997: The supercomputer Deep Blue defeated world chess champion Garry Kasparov.
1998: Furby, the first mass-market artificial intelligence toy, was brought to market.
2000: Kismet, a robot that can use gestures and facial expressions in communication, was introduced.
2005: ASIMO, the robot then closest to human ability and skill, was introduced.
2010: ASIMO was made to act using brain signals (a brain-machine interface).
IV. WHAT IS ARTIFICIAL INTELLIGENCE?
Artificial intelligence is the general name for the technology of developing machines that are created entirely by artificial means, without using any living organism, and that can exhibit human-like behaviors. Artificial intelligence products that, in the idealized view, are completely human-like and can feel, foresee, and make decisions are generally called robots.
Artificial intelligence, whose first steps were taken with Alan Mathison Turing's question "Can machines think?", owes its emergence in no small part to the various military weapon technologies and the development of computers during World War II.
The concept of machine intelligence, which emerged from work on code, algorithms, and data, shows that every technological device produced, from the first computers to today's smartphones, has been developed by taking people as the model. Artificial intelligence, which developed very slowly in earlier periods but now takes important steps day by day, reveals how much progress has been made with the emergence of today's capable robots.
Building on studies of artificial nerve cells and several other branches of science, McCulloch and Pitts introduced the possibility of assigning various human-oriented functions to machines, and the first steps toward one-armed robot workers in factories followed. In 1956, McCarthy, Minsky, Shannon, and Rochester conducted the study in which the name "artificial intelligence" was put forward; McCarthy, who proposed the term, can be described as the father of the name.
[Photo: Warren McCulloch and Walter Pitts]
Although symbolic and cybernetic artificial intelligence research formed different currents, both got off to a bad start and neither could be sustained as expected. In symbolic artificial intelligence studies, robots could not give exactly the responses and answers people expected to their questions, while on the cybernetic side the artificial neural networks did not meet expectations either, so the work on both sides could not be called a literal success. After these failures in the symbolic and cybernetic programs, which had developed along different lines, the field turned instead to specialized artificial intelligence efforts, each pursuing a single purpose rather than many different branches and approaches.
While the concept of artificial intelligence stimulated further research, the fact that artificial intelligence products did not have enough knowledge of the domains they worked on brought various problems. However, the developers who found rational solutions to these problems carried artificial intelligence to a commercial level, and the artificial intelligence industry that emerged in the following periods showed that successful work could be achieved, with billions of dollars at stake.
Recent developments in artificial intelligence research have revealed the importance of language. As studies in anthropology and the human sciences show, people think with language and express their various capacities through it, so in recent years language has moved to the forefront of artificial intelligence research.
Later, a number of artificial intelligence markup languages emerged from the language studies behind symbolic artificial intelligence. Today, studies carried out by symbolic artificial intelligence developers make use of these languages and have even made it possible to demonstrate robots that can speak.
V. EXPERT SYSTEMS 1975 TO 1985
In the third era, starting in the mid-1970s, researchers broke away from toy worlds and tried to build practically usable systems, with methods of knowledge representation in the foreground. AI left its ivory tower, and AI research became known to a wider public. Initiated by the US computer scientist Edward Feigenbaum, expert-system technology was at first limited to universities. Little by little, however, expert systems developed into a modest commercial success, and for many they were identical with AI research as a whole - just as machine learning is identical with AI for many today.
In an expert system, the knowledge of a particular subject area is represented in the form of rules and large knowledge bases. The best-known expert system was MYCIN, developed by E. Shortliffe at Stanford University. It was used to support diagnostic and therapeutic decisions for blood infections and meningitis. An evaluation attested that its decisions were as good as those of an expert in the field and better than those of a non-expert.
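As an illustration of this rules-plus-knowledge-base architecture, here is a minimal forward-chaining sketch in Python. The medical facts and rules are invented toy examples, not MYCIN's actual rule base.

```python
# A minimal forward-chaining rule engine in the spirit of expert systems:
# a rule fires whenever all of its premises are present in the fact base,
# adding its conclusion, until no rule produces anything new.
# These "medical" rules are invented toy examples, not MYCIN's rules.
rules = [
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "positive_culture"}, "bacterial_meningitis"),
    ({"bacterial_meningitis"}, "recommend_antibiotics"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)   # the rule fires
                changed = True
    return facts

print(forward_chain({"fever", "stiff_neck", "positive_culture"}, rules))
```

Real systems such as MYCIN layered certainty factors and explanation facilities on top of this basic match-and-fire loop.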
Starting with MYCIN, a large number of other expert systems with more complex architectures and extensive rule sets were developed and used in various fields: in medicine, for example, PUFF (interpretation of lung-test data) and CADUCEUS (diagnostics in internal medicine); in chemistry, DENDRAL (analysis of molecular structure); in geology, PROSPECTOR (analysis of rock formations); and in computer science, the system R1 for configuring computers, which saved Digital Equipment Corporation (DEC) $40 million a year.
Even the area of language processing, in the shadow of the expert-system euphoria, oriented itself toward practical problems. A typical example is the dialogue system HAM-ANS, with which a dialogue can be conducted in various fields of application. Natural-language interfaces to databases and operating systems, such as Intellect, F&A, and DOS-MAN, penetrated the commercial market.
VI. THE RENAISSANCE OF NEURAL NETWORKS 1985 TO 1990
In the early 1980s, Japan announced the ambitious "Fifth Generation" project, which was designed, among other things, to carry out practically applicable cutting-edge AI research. For AI development the Japanese favored the programming language PROLOG, which had been introduced in the seventies as the European counterpart to the US-dominated LISP. In PROLOG, a certain form of predicate logic can be used directly as a programming language. Japan and Europe subsequently remained largely PROLOG-dominated, while the US continued to rely on LISP.
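As a rough illustration of how predicate logic can serve directly as a programming language, here is a much-simplified backward-chaining sketch in Python. Real PROLOG also performs unification over logical variables, which this propositional toy version omits; the family facts are invented.

```python
# A toy backward chainer in the spirit of PROLOG: a query succeeds if it
# is a known fact, or if some rule concludes it and all of that rule's
# premises can themselves be proven.
facts = {"parent(tom, bob)", "parent(bob, ann)"}
rules = [
    # "tom is a grandparent of ann if tom is a parent of bob
    #  and bob is a parent of ann" (variables omitted for simplicity)
    (["parent(tom, bob)", "parent(bob, ann)"], "grandparent(tom, ann)"),
]

def prove(goal):
    if goal in facts:
        return True
    return any(all(prove(p) for p in premises)
               for premises, conclusion in rules if conclusion == goal)

print(prove("grandparent(tom, ann)"))  # -> True
```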
In the mid-1980s, symbolic AI got competition from the resurrected neural networks. Drawing on results from brain research, McCulloch, Pitts, and Hebb had developed the first mathematical models of artificial neural networks back in the 1940s, but powerful computers were lacking at the time. Now, in the eighties, the McCulloch-Pitts neuron experienced a renaissance in the form of so-called connectionism.
Unlike symbol-processing AI, connectionism is oriented more toward the biological model of the brain. Its basic idea is that information processing rests on the interaction of many simple, uniform processing elements working in a highly parallel fashion. Neural networks offered impressive performance, especially in the field of learning. The NETtalk program was able to learn to speak from example sentences: given a limited set of written words together with their pronunciations as phoneme chains, such a net could learn to pronounce English words correctly and apply what it had learned to unknown words.
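To make the idea of "many simple, uniform processing elements" concrete, here is a sketch of a McCulloch-Pitts style unit in Python (an invented illustration, not NETtalk's architecture): the unit fires when the weighted sum of its inputs reaches a threshold, and with suitable weights it realizes elementary logical functions.

```python
# A McCulloch-Pitts style neuron: outputs 1 when the weighted sum of its
# binary inputs reaches a threshold. Networks of such simple, uniform
# units are the processing elements connectionism builds on.
def mcp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    # With unit weights, threshold 2 realizes AND and threshold 1 realizes OR.
    print(x, "AND:", mcp_neuron(x, (1, 1), 2), "OR:", mcp_neuron(x, (1, 1), 1))
```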
But this second attempt, too, came too early for neural networks. Funding boomed, but the limits also became clear: there was not enough training data, solutions for structuring and modularizing the networks were missing, and the computers of the pre-millennium era were still too slow.
VII. THE BEST 5 FILMS IN A.I.
1- Bicentennial Man (1999)
It is based on a story of the same name by Isaac Asimov himself. NDR ("Andrew") is a robot acquired by a family to perform household tasks. But there is something that makes him special: he is able to identify emotions, something no robot is programmed to do.
2- I, Robot (2004)
This time the star of the cast is Will Smith as the detective. The script is signed by Jeff Vintar, who had to incorporate, at the producers' request, the Three Laws of Robotics and other ideas of Isaac Asimov after the producer acquired the rights to the title of that author's book. The film was not very well received among Asimov's followers, because the bulk of the plot is not based on any of his books and only borrows some of their elements.
3- Artificial Intelligence: A.I. (2001)
Steven Spielberg adapted a story written by Brian Aldiss entitled "Supertoys Last All Summer Long", with some influence from "The Adventures of Pinocchio". In the film we meet David, a robot child capable of showing feelings such as love. It all grew out of a Stanley Kubrick project begun in the early 1970s that could not be realized in his day because computer-generated imagery was not yet advanced enough. Accordingly, at the end of Spielberg's film there is a dedication: "For Stanley Kubrick".
4- Blade Runner (1982)
The film bears the signature of Ridley Scott, which in itself is a reason to want to see it, and it is considered a cult movie. It is based on the novel "Do Androids Dream of Electric Sheep?" by Philip K. Dick, an author whose books (for example, "Ubik") have inspired countless films. The film delves into the consequences of the penetration of technology into the society of a not-so-distant future, and it is essential in any library of the science-fiction genre.
5- Ex Machina (2015)
We finish with a more recent production, which tells the story of Caleb, a programmer who is invited by his company to perform a test with an android endowed with artificial intelligence. It is still too early to say what mark this film will leave on moviegoers; what we can confirm is that it won the Oscar for Best Visual Effects and was very well received by the public and critics.