
Historical Evolution, Current & Future Of Artificial intelligence (AI)

Introduction
Artificial Intelligence was not new in the 1960s. With the growth of real-world applications in recent years, however, interest in AI has been reignited. The rapid and dynamic pace of AI development has made its future path very difficult to predict and is enabling it to alter our world in ways we have yet to comprehend. As a result, law and policy have stayed one step behind the development of the technology.
Understanding and analysing the existing literature on AI is a necessary precursor to recommending policy on the matter. By researching and examining academic and news articles, policy papers and position papers from across the globe, this literature review aims to provide a broad overview of Artificial Intelligence from multiple perspectives.
The structure of this literature review is as follows:
1) Overview of AI's historical development.
2) Definitional analyses.
3) Ethical, social, economic and political impacts and specific solutions.
4) The regulatory way forward.
This literature review establishes an understanding of the existing paradigms around Artificial Intelligence before narrowing to specific applications and, subsequently, policy recommendations.
Historical Evolution of AI
The historical development of Artificial Intelligence is fairly recent, with its birth traced to the mid-20th century. Despite this seemingly recent origin, there are influences that have contributed greatly, albeit indirectly, to the conceptualization of AI. The genesis of Artificial Intelligence can be credited to the contributions of diverse academic fields: history, art, philosophy, logic and mathematics. This section identifies some of those factors and, in addition, provides a brief historical account of the notable breakthroughs in the evolution of AI.
Contributions to the Genesis of Artificial Intelligence
Mathematics & Logic
Once thinking came to be seen as a form of computation, the next steps were to formalize and mechanize it.
Luger defines this as a phase involving the “development of formal logic”.
The work of both Ada Lovelace and Charles Babbage focused on this: patterns of algebraic relationships were treated as entities that could be studied in their own right, resulting in a formal language for thought.
The author also credits George Boole for his contribution to the field; Boole's operations of 'AND', 'OR' and 'NOT' have remained the basis for all operations in formal logic.
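To make Boole's operations concrete, the short Python sketch below (added purely for illustration; it is not drawn from the reviewed literature) prints the truth tables for AND, OR and NOT, which Python supports directly through its built-in Boolean operators:

    # Illustrative truth tables for Boole's three basic operations.
    for a in (False, True):
        print(f"NOT {a} = {not a}")
    for a in (False, True):
        for b in (False, True):
            print(f"{a} AND {b} = {a and b};  {a} OR {b} = {a or b}")

Every operation of classical propositional logic can be expressed by composing these three operators, which is why they remain foundational both to formal logic and to digital circuit design.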
Russell and Whitehead's work has also been acknowledged by Luger; their treatment of mathematical reasoning in purely formal terms acted as a basis for computer automation.
Biology
Apart from logic and philosophy, Nils Nilsson believes that aspects of biology, and of life in general, have provided important clues about intelligence. Norvig and Russell are more specific, pointing to neuroscience: the discovery that the human brain is similar to a computer in some ways provided an intuitive basis for AI. The idea of humans and animals as nothing but machines that process information was then supplemented by psychology.
Historical Account of AI
The White House's National Science and Technology Council traces the roots of Artificial Intelligence to the 1940s, and in particular to McCulloch and Pitts' 1943 paper, "A Logical Calculus of the Ideas Immanent in Nervous Activity".
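Although the cited report does not include one, a minimal sketch of a McCulloch-Pitts style neuron helps make that paper's idea concrete: a unit with binary inputs fires when the weighted sum of those inputs reaches a threshold. The weights and threshold below are chosen only for illustration; with unit weights and a threshold of 2, the unit realizes logical AND, showing how networks of such units can implement a "logical calculus".

    # Illustrative McCulloch-Pitts threshold unit with binary inputs.
    def mcp_neuron(inputs, weights, threshold):
        """Return 1 (fire) if the weighted sum of inputs reaches the threshold."""
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # With unit weights and threshold 2, the unit behaves like logical AND.
    for x in (0, 1):
        for y in (0, 1):
            print(x, y, "->", mcp_neuron([x, y], [1, 1], threshold=2))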
The idea of AI was crystallized by Alan Turing in his famous 1950 paper "Computing Machinery and Intelligence". The fundamental question posed in that paper was "Can machines think?", which Turing sought to answer using what came to be known as the Turing Test. He also believed that a machine could be programmed to be self-learning and to learn from experience, just as a child does. However, the term 'Artificial Intelligence' itself was not coined until 1955. The Turing Test became the gold standard for AI development. Luger identifies its defining features (a structural sketch of the test follows this list):
• It provides an objective notion of intelligence.
• It enables a unidimensional focus by providing a single standard of measurement. This avoids side-tracking into questions such as, "Does a machine really know that it is thinking?"
• It tests only for problem solving, and not for other forms of human intelligence.
• By using a human standard to measure machine intelligence, the Test straitjackets it into a human mould. This avoids any consideration of the possibility that human and machine intelligence are simply different and cannot be compared and contrasted.
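As a structural illustration only (the respondents and the interrogator's judgement below are placeholders, not an actual implementation of Turing's imitation game), the following sketch shows the shape of the test: an interrogator questions two hidden respondents, one human and one machine, and must decide which is which.

    import random

    # Structural sketch of the imitation game.
    def human_respondent(question):
        return "I would answer that the way any person would."

    def machine_respondent(question):
        # A real machine would try to produce answers indistinguishable from a human's.
        return "I would answer that the way any person would."

    def run_imitation_game(questions):
        respondents = {"A": human_respondent, "B": machine_respondent}
        for label, ask in respondents.items():
            answers = [ask(q) for q in questions]
            print(label, "answered:", answers)
        # Placeholder judgement: a real interrogator would compare the answers;
        # here the guess is random, so the machine is identified only by chance.
        guess = random.choice(list(respondents))
        return guess == "B"  # True if the interrogator correctly picked the machine

    print(run_imitation_game(["Can machines think?", "Describe your childhood."]))

In Turing's formulation the machine "passes" when the interrogator cannot reliably tell it apart from the human, which is why the test measures outward behaviour rather than internal mechanism.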
According to Nilsson, the emergence of Artificial Intelligence as an independent field of research was strengthened by three important meetings: a Session on Learning Machines held in conjunction with the 1955 Western Joint Computer Conference in Los Angeles, the 1956 summer research project on Artificial Intelligence convened at Dartmouth College, and a 1958 symposium on the "Mechanization of Thought Processes" sponsored by the National Physical Laboratory. Initially, AI development was mainly directed at solving mathematical problems, games or puzzles by relying on simple symbol structures. In the 1960s, however, programs were required to perform more intellectual tasks such as solving geometric and analogy problems, answering questions, storing information and creating semantic networks, therefore requiring more complex symbol structures termed semantic representations.
The following breakthrough was the creation of the General Problem Solver (GPS). GPS made the first approach to "thinking humanly": it was designed to imitate human problem-solving protocols by solving puzzles using the same steps a human would, repeatedly reducing the difference between the current state and the goal (a toy sketch of this idea follows the list below). In 1958, the computer scientist John McCarthy made three crucial contributions to AI:
1) He defined Lisp, a language that would later become dominant and efficient for AI programming.
2) He invented time-sharing.
3) In a 1958 paper, he described the Advice Taker, which can be seen as the first complete AI system.
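GPS itself is not reproduced in the literature reviewed, but a toy difference-reduction search in the same spirit (the numeric state, operators and goal below are invented purely for this example) conveys the flavour of means-ends analysis: at each step, apply the operator that most reduces the difference between the current state and the goal.

    # Toy means-ends analysis in the spirit of GPS: repeatedly apply the
    # operator whose result is closest to the goal state.
    OPERATORS = {
        "add 3": lambda s: s + 3,
        "subtract 2": lambda s: s - 2,
        "double": lambda s: s * 2,
    }

    def difference(state, goal):
        return abs(goal - state)

    def means_ends_solve(start, goal, max_steps=50):
        state, plan = start, []
        for _ in range(max_steps):
            if state == goal:
                return plan
            name, op = min(OPERATORS.items(),
                           key=lambda kv: difference(kv[1](state), goal))
            state = op(state)
            plan.append((name, state))
        return None  # no plan found within the step limit

    print(means_ends_solve(start=2, goal=11))

The real GPS operated over symbolic states and operator preconditions rather than numbers, but this greedy difference-reduction loop captures the core idea it shared with human problem-solving protocols.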
Denition and Compositional Aspect Of AI
What is AI?
Pei Wang does not believe that there is a single fixed definition of AI. He lists five ways in which AI can be defined: by structure, behaviour, capability, function and principle. These are in order of decreasing specificity and increasing generality.
Nils J. Nilsson breaks the term AI into its components: Artificial (meaning machine, as opposed to human) + Intelligence. On this view, he believes that with the increasing complexity of machines comes an increasing amount of intelligence.
Ben Coppin notes a definition of AI as "the study of systems that act in a way that, to any observer, would appear intelligent." However, this is an incomplete definition, since artificial intelligence is also used to solve the simple and complex problems that form the basis of the intelligent behaviour of humans and other animals. According to Coppin, the second definition is used when distinguishing between Strong AI and Weak AI.
Sectoral Impact - AI
This section examines the ethical, social, economic and political impact of AI. Under each subhead, literature on both negative and positive implications is detailed, along with existing literature regarding potential solutions to the negative impacts.
Ethical & Social Impact
The ethical and social impact of AI is divided into two distinct areas of study: the AI perspective and the human perspective. The first part looks at the ethical and social aspects of AI's impact on humans. Subsequently, the implications are examined for the way in which the technology itself may be perceived.
How will the expansion of artificial intelligence transform society?
Nick Bostrom examines whether the intersection of AI and behavioural augmentation is a threat to human dignity. He ultimately sides with the Transhumanists, who believe in the broadest possible technological choice for the individual, while addressing the concerns of conservatives who call for a ban on human augmentation. His argument underlines that dignity is not restricted to the state of being human alone.
Jason Borenstein and Yvette Pearson discuss the application of AI (specifically robots) to caregiving. Utilizing a capabilities-approach analysis, the authors believe that the use of robots can maximize freedom and care for the recipients of such care.
However, authors such as Sharkey do not agree with utilizing AI for caregiving, whether the care of the elderly or of children. With regard to children, he notes that severe dysfunction occurs in infants that develop attachments to inanimate entities. With regard to the elderly, he notes that leaving them in the care of machines would deprive them of the human contact that is currently provided by caregivers.
Legal Impact
There is wide agreement that the law will struggle to keep up to date with the rapid changes in AI. This part of the paper considers the legal implications of artificial intelligence in the areas of legal liability, cybersecurity, privacy and intellectual property. It also analyses the lenses through which various authors have looked at these issues and attempts to provide some solutions.
Liability – Civil and Criminal
Andreas Matthias observes that liability with regard to machines is normally contingent upon control: whichever entity exercises control over the machine accepts responsibility for its failures. He argues that a "responsibility gap" arises when traditional modes of attribution cannot be transposed to a new class of machine actions, since nobody has enough control to assume the responsibility. Matthias also details the shift from technology over which the coder exercises control to types of technology where that control gradually recedes, while the influence of the environment in which the technology operates increases.
Cybersecurity Risks & Impact
The cybersecurity threats posed by AI will differ depending on both the nature and the type of AI application. Sapolsky classifies and surveys the potential kinds of dangerous AI. He claims that cybersecurity threats from artificial intelligence will be less of the robotic science-fiction kind, and will more often derive from deliberate human action, poor design, side effects or factors attributable to the surrounding environment. He concludes that the most dangerous types of AI would be those created to harm (for military use). He also states that deciding what constitutes malevolent AI is an important problem in AI safety research, and prescribes that the intentional creation of malevolent AI be recognized as a crime.
Economic Impact
AI affects the economy at both the jobs level and the economic-development level. From the literature analysed, authors seem to have mixed opinions on the magnitude of the impact AI will have and whether it will be, on balance, positive or negative.
How will AI affect employment?
In the past, the fear of jobs being replaced by robots was mostly confined to industrial manufacturing, where a single robot can do the work of almost seven people. The number of industrial robots in the U.S. quadrupled between the 1990s and the 2000s and is predicted to do so again by 2025. By 2025, scientists predict, nearly half of all jobs will be at risk of automation.
With recent advances in artificial intelligence, automation is overtaking jobs once reserved for humans. The Associated Press (AP) publishes articles written by AI, while IBM's Watson helps to treat cancer patients. For now, people are still needed to double-check and polish the work done by AI, yet the number of such jobs is dwindling.
The following lists the commonly cited advantages and disadvantages of AI.

Advantages:
• Errors are reduced and greater precision and accuracy are possible.
• Using AI, decisions can be taken very quickly.
• AI is already used in many everyday applications, e.g. Google's "OK Google", which lets users interact with a device by voice, making work easier.
• Machines can work 24/7 without a break, unlike humans.
• Robots can be used instead of humans to avoid risky tasks.

Disadvantages:
• Both hardware and software must be continually updated to meet the latest requirements.
• An increasing unemployment rate.
• Robots only follow what they are programmed to do; they cannot act or think outside of whatever algorithm or programming is stored in their internal circuits.
• Machines work efficiently but cannot replace the human connection that makes a team.
• Machines cannot develop a bond with humans.
Conclusion
The term "Artificial Intelligence" includes within its scope a wide range of technological processes, making it difficult to understand and hence to create policy for. This literature synthesis provides a broad overview of the particular technologies that compose the "black-box" term referred to as artificial intelligence, and of the key issues common to its different disciplines. As is evident from this synthesis, the field of AI holds great promise as a source of solutions and improved performance for many of the problems we face as humans. However, AI also raises key normative and practical questions of governance and ethics that will play an increasingly central role as these technologies are adopted. While the tensions between the efficiencies promised by AI and the criticisms raised by those advocating greater caution in its adoption may appear incompatible, it is important to engage with these points of conflict so that we can rethink some of the existing regulatory paradigms and create new ones where required.
Bibliography
Long, J. (2017, May 21). BrainLeaf. Retrieved from https://www.brainleaf.com/blog/contracts/building-scope-work-sow-document-website-project/
Mahabir, D. (2018, October). Requirements Capture. Retrieved from https://www.ielts-mentor.com/reading-sample/academic-reading/733-ielts-academic-reading-sample-81
Russell, S., & Norvig, P. (1995). Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice-Hall, 25, 27.
Veronica S. Moertini, S. S. (n.d.). Requirement Analysis Method of E-Commerce. Retrieved from http://airccse.org/journal/ijsea/papers/5214ijsea02.pdf