History of Artificial Intelligence
Maad M. Mijwel
Computer Science, College of Science,
University of Baghdad
Baghdad, Iraq
maadalnaimiy@yahoo.com
April 2015
__________________________________________________*****_________________________________________________
Abstract___Artificial intelligence is finding its way into ever more areas of life. The latest craze is AI chips and related applications on the smartphone. However, the technology began as early as the 1950s with the Dartmouth Summer Research Project on Artificial Intelligence at Dartmouth College, USA. The beginnings go even further back, to the work of Alan Turing - to whom the well-known Turing test goes back - and of Allen Newell and Herbert A. Simon. With IBM's chess computer Deep Blue, which in 1997 became the first machine to defeat the reigning world chess champion Garry Kasparov in a match, artificial intelligence moved into the focus of the world public. In data centers and on mainframes, AI algorithms have been in use for many years.
__________________________________________________*****_________________________________________________
I. INTRODUCTION
In recent years, incredible progress has been made in
computer science and AI. Watson, Siri or Deep Learning show
that AI systems are now delivering services that must be
considered intelligent and creative. And there are fewer and
fewer companies today that can do without artificial
intelligence if they want to optimize their business or save
money.
AI systems are undoubtedly very useful. As the world becomes more complex, we need to leverage our human resources, and high-quality computer systems can help; this also applies to applications that require intelligence. The other side of the AI coin is that the possibility of a machine possessing intelligence scares many. Most people believe that intelligence is something unique, something that distinguishes Homo sapiens. But if intelligence can be mechanized, what is unique about humans, and what sets them apart from the machine?
The quest for an artificial copy of man, and the complex of questions it raises, is not new. The reproduction and imitation of thought already occupied our ancestors. From the sixteenth century onward, legends and real attempts at artificial creatures abounded: homunculi, mechanical automata, the golem, the Mälzel chess automaton, and Frankenstein were all imaginative or real attempts of past centuries to produce intelligence artificially and to imitate what is essential to us. The idea of turning inanimate objects into intelligent beings by giving them life has long fascinated mankind. The ancient Greeks had myths about robots, and Chinese and Egyptian engineers built automatons. The beginnings of modern artificial intelligence can be traced to the attempt to describe the classical philosophers' account of human thought as a symbolic system. However, the field of artificial intelligence was not formally established until 1956, when a conference on "artificial intelligence" was held for the first time at Dartmouth College in Hanover, New Hampshire. Cognitive scientist Marvin Minsky of MIT and the other scientists participating in the conference were quite optimistic about the future of artificial intelligence. As Minsky is quoted in the book "AI: The Tumultuous History of the Search for Artificial Intelligence": "Within a generation, the problem of creating artificial intelligence will be substantially solved."
One of the most important visionaries and theoreticians was Alan Turing (1912-1954). In 1936, the British mathematician proved that a universal calculating machine - now known as the Turing machine - is possible. Turing's central insight is that such a machine is capable of solving any problem, as long as the problem can be represented and solved by an algorithm. Transferred to human intelligence, this means that if cognitive processes can be broken down into finite, well-defined individual steps of an algorithm, they can be executed on a machine. A few decades later, the first practical digital computers were actually built, and thus the "physical vehicle" for artificial intelligence was available.
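The idea that any algorithmically describable procedure can be carried out by one universal machine can be made concrete with a small simulation. The sketch below is not from the paper; the state names and the binary-increment transition table are illustrative assumptions chosen for the example, written in Python.

def run_turing_machine(tape, transitions, state="scan", blank="_", max_steps=10_000):
    """Run a one-tape Turing machine until it reaches the halt state."""
    cells = dict(enumerate(tape))      # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Illustrative transition table: "add 1 to a binary number".
# The machine scans right to the end of the input, then propagates the carry leftwards.
INCREMENT = {
    ("scan",  "0"): ("0", "R", "scan"),
    ("scan",  "1"): ("1", "R", "scan"),
    ("scan",  "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),   # 1 plus carry -> 0, keep carrying
    ("carry", "0"): ("1", "L", "halt"),    # 0 plus carry -> 1, done
    ("carry", "_"): ("1", "L", "halt"),    # carry past the left edge: prepend a 1
}

print(run_turing_machine("1011", INCREMENT))   # prints "1100", i.e. 11 + 1 = 12

Any procedure that can be encoded as such a transition table can, in principle, be run on the same simulator; only the table changes, not the machine.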
Alan Turing
Turing's electromechanical machine, considered a precursor of modern computers, managed to break the code used by the German submarines in the Atlantic. His work at Bletchley Park, an isolated country house north of London, is considered key to the end of World War II; it was made public only in the 1970s, when the role of the brilliant mathematician in the war was revealed. The cryptographers who worked there helped shorten World War II by about two years by deciphering around 3,000 German military messages a day.
The British mathematician Alan Turing, father of modern computing and a key figure in the British victory in World War II for cracking the "unbreakable" Nazi Enigma code, finally received a royal pardon that seeks to amend his criminal conviction for being homosexual, a fact that led to his suicide.
Nazi code decryption machine
Turing's team deciphered the 'Enigma' code, which the Germans considered unbreakable, and Bletchley Park also designed and developed Colossus, one of the first programmable computers. After the war, however, Prime Minister Winston Churchill ordered the destruction of the Colossus computers and of around 200 'Turing bombe' machines to keep them secret from the Soviet Union.
II. ARTIFICIAL INTELLIGENCE HISTORY
To trace the history of artificial intelligence, it is necessary to go back to dates before the Common Era. In the ancient Greek era, various ideas about humanoid robots were already entertained; an example is Daedalus, who according to mythology is said to have tried to create artificial humans. Modern artificial intelligence first appears in history as the attempt to describe the philosophers' account of human thought as a symbolic system. The nineteenth century is very important for artificial intelligence: Charles Babbage worked on a mechanical machine that would exhibit intelligent behavior. However, as a result of these studies, he concluded that he would not be able to produce a machine that behaved as intelligently as a human being, and he suspended his work. In 1950, Claude Shannon introduced the idea that computers could play chess. Work on artificial intelligence then continued slowly until the early 1960s.
The official emergence of artificial intelligence dates back to 1956, when an artificial intelligence session was held for the first time at a conference at Dartmouth College. As noted above, Marvin Minsky predicted that "the problem of creating artificial intelligence will be substantially solved within a generation." The first artificial intelligence applications were introduced during this period; they were based on logic theorems and the game of chess. The programs developed during this period could also handle the geometric figures used in intelligence tests, which led to the idea that intelligent computers could be created.
III. MILESTONES FOR AI HISTORY
In 1950, Alan Turing devised a test to determine whether a machine is intelligent; the test measures the intelligence attributed to computers, and at the time the intelligence level of machines that passed it was considered adequate. LISP (List Processing Language), developed by John McCarthy in 1958, is a functional programming language created for artificial intelligence; one of the oldest and most powerful programming languages, it allows flexible programs whose basic operations work on list structures.

The period between 1965 and 1970 could be called a dark period for artificial intelligence. Developments during these years were too few to be put to the test. The hasty and optimistic attitude born of unrealistic expectations had led to the idea that it would be easy to produce intelligent machines, but the period came to be seen as a dark age for artificial intelligence because simply loading machines with data did not succeed in creating intelligence.

Between 1970 and 1975, artificial intelligence gained momentum. Thanks to the success of systems developed for subjects such as disease diagnosis, the basis of today's artificial intelligence was established. During the period 1975-1980, the idea developed that artificial intelligence could benefit from other branches of science, such as psychology.

Artificial intelligence began to be used in large projects with practical applications in the 1980s. In the period that followed, it was adapted to solve real-life problems. Even where the needs of users were already met by traditional methods, the use of artificial intelligence reached a much wider range thanks to more economical software and tools.
History of AI in Chronological Order:
1st century AD: Heron of Alexandria made automatons with mechanical mechanisms driven by water and steam power.
1206: Al-Jazari (Ebu'l-İz Bin Rezzaz El-Cezeri), one of the pioneers of cybernetic science, built water-powered, automatically controlled machines.
1623: Wilhelm Schickard invented a mechanical calculator capable of the four basic arithmetic operations.
1672: Gottfried Leibniz developed the binary number system that forms the abstract basis of today's computers.
1822-1859: Charles Babbage designed and worked on mechanical calculating machines. Ada Lovelace is regarded as the first computer programmer for the work she did with Babbage on punched-card programs for his machines; her notes include algorithms for the machine.
1923: Karel Čapek introduced the concept of the robot for the first time in his play R.U.R. (Rossum's Universal Robots).
1931: Kurt Gödel introduced the incompleteness theorem that now bears his name.
1936: Konrad Zuse built the Z1, a programmable computer with a 64-word memory.
1946: ENIAC (Electronic Numerical Integrator and Computer), a room-sized machine weighing 30 tons and one of the first electronic general-purpose computers, began operation.
1948: John von Neumann introduced the idea of a self-replicating program.
1950: Alan Turing, a founder of computer science, introduced the concept of the Turing Test.
1951: The first artificial intelligence programs were written for the Ferranti Mark 1 machine.
1956: The Logic Theorist (LT) program for proving mathematical theorems was introduced by Newell, Shaw, and Simon; it is regarded as the first artificial intelligence system.
Late 1950s to early 1960s: A semantic network for machine translation was developed by Margaret Masterman and colleagues.
1958: John McCarthy of MIT created the LISP (List Processing) language.
1960: J. C. R. Licklider described the human-machine relationship in his work.
1962: Unimation was established as the first company to produce robots for industrial use.
1965: The artificial intelligence program ELIZA was written.
1966: The first mobile robot, "Shakey", was produced at the Stanford Research Institute.
1973: DARPA began development of the protocols now called TCP/IP.
1974: The Internet began to be used for the first time.
1978: Herbert Simon received the Nobel Prize for his theory of bounded rationality, an important contribution to artificial intelligence.
1981: IBM introduced its first personal computer.
1993: Work began at MIT on Cog, a humanoid robot.
1997: The supercomputer Deep Blue defeated the world chess champion Garry Kasparov.
1998: Furby, the first mass-market artificial intelligence toy, was brought to market.
2000: The robot Kismet, which can use gestures and facial expressions in communication, was introduced.
2005: ASIMO, an artificial intelligence robot that comes closest to human ability and skill, was introduced.
2010: ASIMO was made to act using thought control via a brain-machine interface.
IV. WHAT IS ARTIFICIAL INTELLIGENCE?
Artificial intelligence is the general name for the technology of developing machines that are created entirely by artificial means, without making use of any living organism, and that can exhibit human-like behavior. Artificial intelligence products that, in the idealized view, are completely human-like and can do things such as feel, foresee, and make decisions are generally called robots.
Artificial intelligence, whose first steps were taken with Alan Mathison Turing's question "Can machines think?", owes its emergence in large part to the development of computers and of various military weapon technologies during World War II. The concept of machine intelligence, which emerged from various coding algorithms and data studies, shows that all the technological devices produced, from the first computers to today's smartphones, have been developed with human beings as the model. Artificial intelligence, which advanced very slowly in earlier periods but now takes important steps day by day, shows how much progress has been made with the emergence of today's gifted robots.
McCulloch and Pitts, drawing on artificial intelligence studies, artificial nerve cells, and other branches of science, introduced the possibility of assigning various functions to machines in product development aimed at human behavior; in this context the first steps toward one-armed robot workers in factories were taken. In 1956, in the course of the study conducted by McCarthy, Minsky, Shannon, and Rochester, the name "artificial intelligence" was put forward, and McCarthy can be described as the father of the term.
Warren McCulloch & Walter Pitts
Although symbolic and cybernetic artificial intelligence studies represent different currents, both got off to a bad start and could not be sustained as expected. In symbolic artificial intelligence studies, robots could not give exactly the expected responses and answers to people's questions, while on the cybernetic side the artificial neural networks did not meet expectations, so the work on neither side could truly be called a success. After these failures in the symbolic and cybernetic artificial intelligence studies developed on the two sides, artificial intelligence turned toward specialized efforts pursuing a single purpose rather than different branches and minds.
While the concept of artificial intelligence stimulated further studies, the fact that artificial intelligence products did not have enough knowledge of the domain they were working in brought various problems. However, the artificial intelligence developers who found rational solutions to these problems brought artificial intelligence to a commercial level, and the artificial intelligence industry that emerged in the following periods showed that successful work is rewarded with billion-dollar investments.
Recent developments in artificial intelligence studies have revealed the importance of language. As anthropology and the human sciences show, people think with language and express various functions through it, and so in recent years language has moved to the forefront of artificial intelligence studies. Later, a number of artificial intelligence markup languages appeared out of the language studies pursued by the supporters of symbolic artificial intelligence. Today, studies carried out by symbolic artificial intelligence developers benefit from these artificial intelligence languages and have even made it possible to demonstrate robots that can speak.
V. EXPERT SYSTEMS 1975 TO 1985
In the third era, starting in the mid-1970s, AI researchers broke away from toy worlds and tried to build practically usable systems, with methods of knowledge representation in the foreground. AI left its ivory tower, and AI research became known to a wider public. Initiated by the US computer scientist Edward Feigenbaum, expert system technology was initially limited to the university sphere. Little by little, however, expert systems developed into a modest commercial success, and for many they were identical with all of AI research, just as for many today machine learning is identical with AI.
In an expert system, the knowledge of a particular subject area is represented in the form of rules and large knowledge bases. The best-known expert system was MYCIN, developed by Ted Shortliffe at Stanford University. It was used to support diagnostic and therapeutic decisions for blood infections and meningitis. An evaluation attested that its decisions were as good as those of an expert in the field and better than those of a non-expert.
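To make the idea of rule-based knowledge representation concrete, the following toy forward-chaining engine is sketched in Python. It only illustrates the principle: MYCIN itself was written in Lisp and reasoned with certainty factors, and the facts and rule names below are invented purely for this example.

# Each rule is a pair: (set of conditions that must all hold, conclusion to add).
RULES = [
    ({"gram_negative", "rod_shaped", "anaerobic"}, "organism_may_be_bacteroides"),
    ({"organism_may_be_bacteroides"}, "consider_therapy_X"),
]

def forward_chain(facts, rules):
    """Fire rules whose conditions are satisfied until no new fact can be derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print(forward_chain({"gram_negative", "rod_shaped", "anaerobic"}, RULES))
# both conclusions are derived and added to the initial findings

A real expert system adds an explanation component and, in MYCIN's case, a calculus for combining uncertain evidence, but the separation of a general inference engine from a domain-specific rule base is exactly the architectural idea described above.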
Starting with MYCIN, a large number of other expert systems with more complex architectures and extensive rule bases were developed and used in various fields: in medicine, for example, PUFF (interpretation of lung-function tests) and CADUCEUS (diagnostics in internal medicine); in chemistry, DENDRAL (analysis of molecular structure); in geology, PROSPECTOR (analysis of rock formations); and in computer science, the system R1 for configuring computers, which saved Digital Equipment Corporation (DEC) $40 million a year.
Even the area of language processing, in the shadow of the expert-system euphoria, was oriented towards practical problems. A typical example is the dialog system HAM-ANS, with which a dialogue can be conducted in various fields of application. Natural-language interfaces to databases and operating systems, such as Intellect, F&A, and DOS-MAN, penetrated the commercial market.
VI. THE RENAISSANCE OF NEURAL NETWORKS 1985 TO 1990
In the early 1980s, Japan announced the ambitious "Fifth
Generation Project," which was designed, among other things,
to carry out practically applicable AI cutting-edge research.
For the AI development, the Japanese favored the
programming language PROLOG, which had been introduced
in the seventies as the European counterpart to the US-
dominated LISP. In PROLOG, a certain form of predicate
logic can be used directly as a programming language. Japan
and Europe were largely PROLOG-dominated in the
sequence, in the US continued to rely on LISP.
In the mid-1980s, symbolic AI got competition from the resurrected neural networks. Based on results from brain research, McCulloch, Pitts, and Hebb had developed the first mathematical models for artificial neural networks in the 1940s, but powerful computers were lacking at the time. Now, in the eighties, the McCulloch-Pitts neuron experienced a renaissance in the form of so-called connectionism.
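The McCulloch-Pitts model can be stated in a few lines: a unit fires (outputs 1) when the weighted sum of its binary inputs reaches a threshold. The Python sketch below is an illustration only; the weights and thresholds shown are the usual textbook choices for realizing AND and OR gates, not values taken from this paper.

def mcculloch_pitts(inputs, weights, threshold):
    """Return 1 if the weighted sum of the binary inputs reaches the threshold, else 0."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    return 1 if activation >= threshold else 0

def AND(a, b):
    return mcculloch_pitts((a, b), weights=(1, 1), threshold=2)

def OR(a, b):
    return mcculloch_pitts((a, b), weights=(1, 1), threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))

The connectionist networks of the 1980s combined many such units into layers and, crucially, learned the weights from data instead of fixing them by hand.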
Unlike symbol-processing AI, connectionism is oriented more towards the biological model of the brain. Its basic idea is that information processing is based on the interaction of many simple, uniform processing elements and is highly parallel. Neural networks offered impressive performance, especially in the field of learning. The NETtalk program was able to learn to speak from example sentences: given a limited set of written words together with their pronunciation as phoneme chains, such a net could learn how to pronounce English words correctly and apply what it had learned to unknown words.
But even this second attempt came too early for neural networks. Funding was booming, yet the limits also became clear: there was not enough training data, solutions for structuring and modularizing the networks were missing, and the computers of the pre-millennium era were still too slow.
VII. THE BEST 5 FILMS ON A.I.
1- Bicentennial Man (1999)
It is based on a homonymous story by Isaac Asimov himself. NDR ("Andrew") is a robot acquired by a family to perform cleaning tasks. But there is something that makes him special: he is able to identify emotions, something no robot is programmed to do.
2- I, Robot (2004)
This time the star of the cast is Will Smith as the detective. The script is signed by Jeff Vintar, who had to incorporate, at the request of the producers, the Three Laws of Robotics and other ideas of Isaac Asimov after the producer acquired the rights to the title of that author's book. The film was not well received by followers of Asimov, because the bulk of the plot is not based on any of his books and only borrows some of their elements.
3- Artificial Intelligence: AI (2001)
Steven Spielberg adapted a story written by Brian Aldiss entitled "Supertoys Last All Summer Long", with some influence from "The Adventures of Pinocchio". In the film we meet David, a robot child capable of showing feelings such as love. It is part of a Stanley Kubrick project begun at the start of the 1970s that could not be made in his day because computer-generated imagery was not yet advanced enough; therefore, at the end of Spielberg's film there is a dedication: "For Stanley Kubrick".
4- Blade Runner (1982)
The film bears the signature of Ridley Scott, which in itself is a reason to want to see it. Considered a cult movie, it is based on the novel "Do Androids Dream of Electric Sheep?" by Philip K. Dick, an author who has inspired countless films, including, for example, one based on his book "Ubik"; the one that concerns us here is Blade Runner. The film delves into the consequences of the penetration of technology into a society of a not-so-distant future and is essential in any science fiction library.
5- Ex Machina (2015)
We finish with a more recent production: the story of Caleb, a programmer who is invited by his company to perform a test with an android that has artificial intelligence. It is still too early to see what mark this film will leave on moviegoers. What we can confirm is that it won an Oscar for Best Visual Effects and was very well received by the public and critics.