Artificial Intelligence: Positive or Negative Innovation

Uchenna Nzenwata (1), Olayemi Bakare (2), Obumneme K. Ukandu (3)
(1) Department of Computer Science, Babcock University, Ogun State, Nigeria, nzenwatau@pg.babcock.edu.ng
(2) Department of Computer Science, Babcock University, Ogun State, Nigeria, bakare0371@pg.babcock.edu.ng
(3) Department of Computer Science, Babcock University, Ogun State, Nigeria, ukandu0183@pg.babcock.edu.ng

International Journal of Emerging Trends in Engineering Research, Volume 11, No. 7, July 2023, pp. 245-252
ISSN 2347-3983
Received: May 28, 2023; Accepted: June 24, 2023; Published: July 07, 2023
Available online at http://www.warse.org/IJETER/static/pdf/file/ijeter031172023.pdf
https://doi.org/10.30534/ijeter/2023/031172023
ABSTRACT
Undoubtedly, innovations play a vital role in enhancing the well-being and progress of humanity. It is crucial, therefore, to acknowledge that technology and other forms of innovation should not be immediately dismissed as unfavorable. However, it is of utmost importance to exercise caution when distinguishing between beneficial innovations and those that may pose risks or dangers. Currently, the concept of Artificial Intelligence (AI) is a subject of intense debate worldwide. According to Sebastian Thrun, the Head of Google's Self-driving car initiative, AI research is expected to span a century before reaching its full potential. Thrun suggests that AI is gradually gaining control over various aspects of our world, potentially diminishing the dominance of human beings, although they may still retain some level of control in certain domains. The rapid progress and advancement of AI technology have raised concerns among experts. While some researchers argue that AI holds the potential to revolutionize numerous fields, others express apprehension regarding its negative consequences, such as job displacement and compromised privacy. This review paper aims to explore both the positive and negative aspects of AI innovation in selected sectors, while also examining the potential future trajectory of this innovation.
Key words: Artificial Intelligence, Innovation, Technology,
Impact, Consequences.
1. INTRODUCTION
Throughout history, innovation has been the driving force behind improved standards of living. Nonetheless, innovation can cause significant disruptions. Technologies such as the Internet of Things (IoT), big data, data science, cloud computing, and artificial intelligence (AI) have existed for at least 25 years but only recently gained mainstream acceptance and viability for commercial applications [1].
These technologies have broad applications in various fields,
transforming human lives and society.
AI has a long history, dating back to the 1950s when the first AI programs were developed. The term "Artificial Intelligence" was coined by John McCarthy in 1956 as "the science and engineering of making intelligent machines". The field experienced a period of low interest and funding, called the AI winter [2], but has continued to grow ever since as an important field of computer science. There are two broad concepts of AI: General AI, which is focused on machines with human-level intelligence, and Narrow AI, the common type of technology used to perform specific tasks [3].
Existing literature provides ample evidence that AI is an innovation capable of driving significant transformations: enhancing the accuracy and effectiveness of medical services and thereby improving human lives, improving transportation with autonomous cars that could reduce accidents, personalizing the human learning experience, and more. Despite all of this, however, AI has negative consequences for privacy and the right to personal data, raises ethical and social concerns, and contributes to labour displacement and replacement.
This paper explores the impacts of AI in contemporary industries with a focus on the pros and cons. Additionally, the paper examines the consequences of AI and what the future holds. Conclusions and recommendations on the best ways to mitigate the implications and consequences, as well as maximize the impact of AI, close the research work.
1.1 History of AI and its Evolution
AI was established as a field of its own in 1956 by a group of scientists led by John McCarthy, professor emeritus of computer science at Stanford University.
In the 1950s and 1960s, researchers created machines that could solve simple mathematical problems and play chess. The programming language Lisp, developed by John McCarthy in 1958, is still in use today in AI research. The General Problem Solver (GPS), developed in 1957 by Newell, Shaw, and Simon, allowed for the resolution of problems across different domains.
The 1970s marked the "AI winter," characterized by a
decrease in public interest in AI and a slowdown in research
progress due to reduced funding. Some hypothetical products, made possible by AI, were postulated in 1973 [4]; a few of these products are a reality today (see Table 1).
During the 1980s, AI research shifted focus towards
developing expert systems that could make decisions based on
a set of rules, but these systems had limitations in their ability
to tackle complex problems and required extensive
development time and effort.
Currently, AI is increasingly integrated into daily life (Table
1), from virtual assistants to self-driving cars with active use of
neural networks, deep learning and machine learning
algorithms.
Table 1: AI technology predictions from 1973 and their reality today. Source: [4].

S/N | Product idea | Predicted ability | Today's reality
1 | Automatic language translator | "Language translating device capable of high-quality translation of text in one foreign language to another. (Both technical and commercial material)." | -
2 | Automatic identification system | "System for automatically determining a person's identity by recognizing his voice, fingerprints, face, etc." | Identity Check with NuData Security
3 | Automatic diagnostician | "A system capable of interactive and/or automatic medical diagnosis based on querying the patient, an examination of biological tests, etc." | Cognitive App in collaboration with IBM Watson
4 | Industrial robots | "An autonomous industrial robot capable of product inspection and assembly in an automated factory, using both visual and manipulative skills." | Mitsubishi Robots
5 | Robot chauffeur | "Robot cars capable of operation on standard city streets and country highways, using visual sensors." | -
6 | Universal game player | "A system capable of playing Chess, Checkers, Kalah, Go, Bridge, Scrabble, Monopoly, etc., at a controllable level of proficiency, from master level to novice." | -
2. AI AND ITS CATEGORIES
There are different types of AI, categorized based on capabilities and functionalities. Based on capabilities, Narrow AI (or weak AI) is the sort of AI capable of performing a single intelligent task. This type of AI is the most common and the only one currently accessible; because it is programmed for one specific task, it cannot perform outside its boundaries. Examples of narrow AI include Apple's Siri, self-driving cars, speech recognition, and picture identification. The other capability-based types are General Artificial Intelligence and Super Artificial Intelligence. General AI is capable of
doing as much intellectual work as a human and its goal is to
create a system that can mimic human beings. Super AI is a
futuristic idea and proposed to be a type of intelligence
capable of outsmarting humans and executing tasks better than
them.
Based on functionality, AI ranges from the most basic kind, reactive machines, through limited memory, to theory of mind and self-awareness. Reactive machines do not keep track of previous experiences or memories. Limited-memory AI behaves like a reactive machine but adds memory capabilities that enable it to leverage past experiences in making better decisions; real-life examples of technologies that use this type of AI include autonomous cars and phone apps. Theory of mind and self-aware AI remain theoretical ideas for now and may take decades, if not centuries, to actualize.
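To make the reactive versus limited-memory distinction concrete, the sketch below contrasts a reactive agent, which maps only the current observation to an action, with a limited-memory agent that keeps a short window of past observations. This is an illustrative toy, not any production system; the class names, thresholds, and readings are invented for the example.

```python
from collections import deque

class ReactiveAgent:
    """Reactive machine: decides from the current observation only."""
    def act(self, distance_to_obstacle: float) -> str:
        return "brake" if distance_to_obstacle < 10.0 else "cruise"

class LimitedMemoryAgent:
    """Limited memory: also considers a short history of observations."""
    def __init__(self, window: int = 3):
        self.history = deque(maxlen=window)  # recent distance readings

    def act(self, distance_to_obstacle: float) -> str:
        closing_fast = (
            len(self.history) > 0
            and distance_to_obstacle < self.history[-1] - 5.0  # gap shrinking quickly
        )
        self.history.append(distance_to_obstacle)
        if distance_to_obstacle < 10.0 or closing_fast:
            return "brake"
        return "cruise"

agent = LimitedMemoryAgent()
for reading in [40.0, 37.0, 18.0]:  # obstacle approaching rapidly
    print(agent.act(reading))       # cruise, cruise, brake
```

Unlike the reactive agent, the limited-memory agent brakes early because its stored history reveals that the gap is closing fast, even though the current distance alone looks safe.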
2.1 AI in Contemporary Industries/Sectors
Artificial Intelligence (AI) refers to the creation of computer
programs that perform tasks that typically require human
intelligence, such as decision-making, problem-solving, visual
perception, and language translation. This technology is rapidly evolving, transforming various industries and fundamentally changing how we work and live.
The healthcare industry benefits from AI technology by
employing AI algorithms to create personalized treatment
plans, diagnose diseases, and predict patients' health.
Similarly, in finance, AI powers the decision-making of
traders and investors through real-time data analysis. Fraud
detection systems and chatbots for banking customers are also
developed using AI. In transportation, self-driving cars and trucks promise to improve traffic flow and reduce accidents. AI is also used in
manufacturing to develop smart factories and predictive
maintenance systems that reduce downtime and waste.
Personalized learning systems and virtual tutoring can be
developed with AI, to adapt to individual student needs in
education. In customer service, AI-powered chatbots handle
customer queries and complaints. AI algorithms analyze
customer data to develop targeted marketing strategies,
increasing sales and customer satisfaction [5].
2.2 AI in the Job Market and Economy
According to AIDA (Artificial Intelligence Development Agency), more than 50% of children starting school now will hold jobs that do not yet exist by the time they graduate from high school or college. The future of work also requires workers to be creative, critical thinkers, and good decision makers, as a majority of current jobs and roles will be taken over by AI. McKinsey [6] adds that by 2030, AI could add around 16% to cumulative global GDP, with some economies seeing boosts as high as 26%.
"Will robots really steal our jobs?" A report by PwC [7] categorized the introduction of AI technologies into the job market and workforce into three phases: algorithm, augmentation, and autonomy. The algorithm phase involves basic tasks that do not significantly affect the job market, and the autonomy phase is not an immediate threat to people's jobs either, as it may take decades to develop. Augmentation, however, is already in effect and requires workers to adapt to new skill sets.
Based on reports [8], [9], AI will affect a broad range of professions, with some jobs disappearing, others adapting to new circumstances, and new AI-related jobs emerging. The World Economic Forum expects about 133 million new jobs to be created by 2022 [8], and while many jobs will be replaced, especially repetitive and mundane tasks like data entry and processing, jobs that require human skills such as empathy cannot easily be replaced by AI. There will also be massive demand for workers with sufficient skills for technology-rich environments; according to AIDA, only about 31% of workers currently have such skills.
2.3 The Role of AI in Healthcare and Medical Diagnosis
AI has been in use in medicine since the 1950s, in the form of computer-aided programs to detect symptoms and enhance diagnoses. Recent advancements and increased computing power have spurred interest and progress in medical AI, with disease diagnosis being the main focus. Other areas of application of AI in medicine include clinical, diagnostic, rehabilitative, surgical, and predictive practices.
By analyzing massive amounts of data across various modalities, AI technologies can detect diseases and guide clinical decisions [12]-[14]. AI can also identify new drugs for health services management and patient care treatments, and reveal new information that would otherwise remain hidden in medical big data [15]-[17].
The potential of this technology also includes reducing care
costs, streamlining repetitive operations, and enabling the
medical profession to focus on critical thinking and clinical
creativity [12]. Existing healthcare systems can significantly
benefit from this technology by alleviating pressure [18]. In
2020, Babylon, a digital health tech company, developed a
new AI-based symptom checker that improves disease
diagnosis and could reduce diagnostic errors in primary care.
The University of Bonn also developed an AI-based machine
learning program to improve leukemia diagnosis by evaluating
the presence of cancer in the lymphatic system in blood or
bone marrow.
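The internals of the systems cited above are not public, but as a simplified, hypothetical illustration of how a symptom checker can be built, the sketch below trains a small decision-tree classifier on made-up symptom vectors. The symptoms, conditions, and data are all invented for the example.

```python
# Minimal sketch of a symptom-checker classifier; data and labels are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row: [fever, cough, fatigue, joint_pain] as 0/1 indicators.
X = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 0, 1],
]
y = ["flu", "flu", "arthritis", "cold", "arthritis", "arthritis"]

model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

patient = [[1, 1, 1, 0]]             # fever + cough + fatigue
print(model.predict(patient))        # likely "flu" on this toy data
print(model.predict_proba(patient))  # class probabilities for triage
```

A real system would be trained on thousands of clinically validated records and evaluated for safety before deployment; this toy only shows the shape of the pipeline.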
At Queen Mary University in London, researchers have
discovered a way to analyze blood from rheumatoid arthritis
patients using AI and predict their response to treatment in
advance. Meanwhile, Dr. Vathsala Patil and colleagues in
India have explored the potential of AI to improve the work of
radiologists, noting that learning algorithms have significantly
improved in recent years, enabling machines to perform tasks
previously limited to humans.
The use of AI in analyzing data from various sources such as
government and healthcare can aid in predicting and
monitoring the spread of communicable diseases. With its
potential in global public health, AI can serve as a significant
tool in combating pandemics such as COVID-19 and other
illnesses.
However, despite significant research efforts, the overall
diagnostic accuracy of AI still lags behind that of doctors [19].
As Dr. Jonathan Richens and colleagues at Babylon conclude,
the combined diagnosis of doctors and algorithms is likely to
be more accurate than either one alone.
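As a minimal sketch of one way such a combination could work, assuming both the clinician and the model express their diagnosis as a probability, the function below blends the two estimates. The weighting scheme is invented for illustration and is not the method of the cited study.

```python
def combined_diagnosis(p_doctor: float, p_model: float, w_doctor: float = 0.5) -> float:
    """Blend a clinician's and a model's probability that a disease is present.

    A simple weighted average; real systems would calibrate both sources first.
    """
    return w_doctor * p_doctor + (1.0 - w_doctor) * p_model

# Doctor suspects disease at 70%, model says 40%; blended estimate is 55%.
print(combined_diagnosis(0.70, 0.40))
```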
2.4 Impact of AI on Education and Personalized Learning
Education has perhaps witnessed the most visible application of AI in recent times. Education in this context refers to all forms of education as well as the media they exist in, including edutainment. From virtual classrooms to the metaverse, AI is
actively revolutionizing the learning process across various
areas for both students and teachers. With AI algorithms,
educational institutions can offer tailored learning experiences
that identify learning gaps and provide resources for
improvement [20]. AI can also automate grading, offer instant
feedback, and create virtual tutors that offer personalized
guidance.
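A minimal sketch of the gap-identification idea, assuming per-topic quiz scores are available; the topics, scores, and mastery threshold are invented for the example.

```python
# Toy learning-gap detector: flag topics where a student's average quiz
# score falls below a mastery threshold, then recommend those for review.
scores = {
    "fractions": [0.9, 0.85, 0.8],
    "algebra":   [0.5, 0.45, 0.6],
    "geometry":  [0.7, 0.65, 0.75],
}
MASTERY = 0.7  # assumed mastery cut-off

gaps = {
    topic: sum(s) / len(s)
    for topic, s in scores.items()
    if sum(s) / len(s) < MASTERY
}
for topic, avg in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"Review recommended: {topic} (average score {avg:.2f})")
```

Production systems use far richer signals (response times, error patterns, curriculum graphs), but the core loop of measuring mastery per topic and recommending remediation is the same.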
AI has the potential to revolutionize the education sector by
facilitating interactive and captivating learning experiences,
improving precision and efficiency in educational materials,
and providing data-driven insights that help educators
understand their students' needs better. Additionally,
AI-driven technology can assist educators in developing more
effective assessments and adjusting instruction to enhance
student learning outcomes.
AI technology allows educators to create an efficient and
personalized learning environment that promotes student
engagement and motivation. It offers data-driven insights to
assist educators in improving instruction and assessment and
providing students with a customized learning experience.
However, despite its numerous benefits, AI-powered personalized learning has its limitations. The major challenge is cost: implementing AI-powered systems demands significant investment in hardware and software, which may be expensive for educational institutions, and these systems require ongoing maintenance and updates that further increase the cost. Another challenge is reduced human interaction. AI cannot substitute for human instructors, and over-reliance on it can erode the social interaction between students and teachers that is an essential element of the learning process.
3. LANDSCAPE OF ARTIFICIAL INTELLIGENCE
Artificial Intelligence (AI) has become an integral part of our
lives, revolutionizing various sectors and offering
unprecedented opportunities for advancement. From
cybersecurity and online privacy to social media and
transportation, AI has significantly transformed the way we
navigate and interact with the world. However, along with its
undeniable benefits, AI also presents a series of risks, ethical
concerns, and potential implications that require careful
examination [9]. In this article, we delve into the multifaceted
landscape of AI, exploring its impact on cybersecurity, online
privacy, social media, transportation, logistics, ethical
decision-making, and its potential future impact on society [4].
By investigating these crucial areas, we aim to shed light on
the challenges and opportunities that AI brings, ultimately
paving the way for a deeper understanding of its complex
influence on our lives.
3.1 The Risks of AI in Cybersecurity and Online Privacy
Artificial Intelligence (AI) has become an essential tool for
enhancing cybersecurity by automating threat detection and
response. However, it also introduces new vulnerabilities that
can be exploited by attackers. Adversarial attacks are one of
the most significant risks of AI in cybersecurity, where
attackers manipulate AI algorithms to evade detection or cause
false positives [21]. Such attacks are particularly concerning in
critical infrastructure such as power grids or water treatment
plants, where successful attacks can cause severe damage.
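To illustrate the mechanics of an evasion-style adversarial attack in the simplest possible setting, the sketch below nudges an input against the gradient of a linear malware-scoring model until it falls below the detection threshold. The model weights, features, and step size are invented for the example.

```python
import numpy as np

# Toy linear detector: score = w . x + b, flagged malicious if score > 0.
w = np.array([2.0, -1.0, 0.5])   # invented feature weights
b = -0.5
x = np.array([1.0, 0.2, 0.8])    # feature vector of a malicious sample

score = lambda v: float(w @ v + b)
print("original score:", score(x))         # 1.7 -> detected

# Evasion: step the input opposite to the score gradient (which is just w
# for a linear model), keeping the perturbation bounded.
eps = 0.5
x_adv = x - eps * np.sign(w)                # FGSM-style step
print("adversarial score:", score(x_adv))  # -0.05 -> now evades detection
```

Real attacks target nonlinear models and constrain perturbations to remain functional and inconspicuous, but the principle of following the model's gradient is the same.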
Another risk of AI in cybersecurity is bias in AI algorithms.
AI algorithms learn from large datasets, and if the data is
biased, the algorithms will be biased too. For example, if an AI
algorithm is trained on data that contains a disproportionate
number of male subjects, it may have difficulty accurately
identifying females [22]. AI algorithms can also be vulnerable
to poisoning attacks, where an attacker manipulates the
training data to bias the algorithm's output. For instance, an
attacker may add fake data to trick the algorithm into
identifying a specific type of malware as benign.
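The sketch below demonstrates label-flipping data poisoning on a toy detector: injecting malware-like samples mislabeled as benign shifts the learned boundary and degrades detection. All data here is synthetic.

```python
# Toy demonstration of training-data poisoning via label flipping.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, (100, 2))
malware = rng.normal(3.0, 1.0, (100, 2))
X = np.vstack([benign, malware])
y = np.array([0] * 100 + [1] * 100)

clean = LogisticRegression().fit(X, y)

# Attacker injects malware-like points labeled as benign.
poison = rng.normal(3.0, 0.5, (60, 2))
Xp = np.vstack([X, poison])
yp = np.concatenate([y, np.zeros(60, dtype=int)])
poisoned = LogisticRegression().fit(Xp, yp)

test = rng.normal(3.0, 1.0, (50, 2))  # fresh malware samples
print("clean model detection rate:   ", clean.predict(test).mean())
print("poisoned model detection rate:", poisoned.predict(test).mean())
```

The poisoned model's detection rate on fresh malware drops markedly, showing why the provenance and integrity of training data matter as much as the algorithm itself.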
AI also poses new risks to online privacy. One of the
significant concerns is the use of AI to infer sensitive
information about individuals. For example, an AI algorithm
could analyze a person's browsing history and infer their
political affiliation or sexual orientation [23]. AI can also be
used to de-anonymize data, where an attacker can identify an
individual's identity from anonymized data. For instance,
researchers at the University of Texas were able to identify
individuals in a large dataset of anonymized taxi rides using
machine learning algorithms [24]. Moreover, AI can be used to create realistic deepfakes, videos or images that are manipulated using AI to make them appear authentic. Deepfakes can be used to spread disinformation, defame individuals, or even extort them [25].
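A minimal sketch of the linkage idea behind such de-anonymization attacks: joining an "anonymized" dataset to public auxiliary data on shared quasi-identifiers can re-identify records. All records below are fabricated for illustration.

```python
# Toy linkage attack: re-identify "anonymized" rides using auxiliary data.
anonymized_rides = [
    {"pickup": "5th Ave", "time": "08:05", "fare": 12.5},
    {"pickup": "Main St", "time": "17:40", "fare": 9.0},
]
# Public auxiliary data, e.g. a person photographed entering a taxi.
auxiliary = [{"name": "Alice", "pickup": "5th Ave", "time": "08:05"}]

for ride in anonymized_rides:
    for aux in auxiliary:
        if ride["pickup"] == aux["pickup"] and ride["time"] == aux["time"]:
            print(f"Re-identified {aux['name']}: fare was {ride['fare']}")
```

Removing names alone is not anonymization: any combination of attributes that is unique to a person can serve as a fingerprint once an attacker holds matching side information.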
To mitigate the risks of AI in cybersecurity and online privacy,
several solutions can be implemented. First, AI algorithms
must be transparent and explainable, so that their outputs can
be audited and validated. This will enable experts to detect and
correct any biases or adversarial attacks [26]. Second, data
privacy laws and regulations must be strengthened to protect
individuals' privacy rights. Companies that collect and use
personal data must be held accountable for the security of that
data and be transparent about how it is being used [27]. Third,
security systems must be multi-layered and incorporate
multiple types of AI algorithms to reduce the risk of
adversarial attacks. For example, using multiple machine
learning algorithms with different architectures and training
data can increase the system's resilience to attacks [28].
Finally, individuals must be educated about the risks of AI and
how to protect their online privacy. They must be aware of the
data they share online and how it can be used to infer sensitive
information. They must also be vigilant about the authenticity
of online content and be cautious about sharing personal
information with unreliable sources [29].
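A minimal sketch of the multi-layered mitigation described above: combining heterogeneous models by majority vote, so that fooling any single model is not enough to fool the system. The dataset and model choices are illustrative only.

```python
# Minimal sketch of a multi-model defense: majority vote over diverse models.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)

# Different architectures and inductive biases reduce shared blind spots.
detector = VotingClassifier(
    estimators=[
        ("linear", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",  # an attacker must now fool a majority of the models
).fit(X, y)

print(detector.predict(X[:5]))
```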
3.2 The Influence of AI in Social Media and Spread of
Misinformation
Artificial Intelligence (AI) has become an essential tool in the
world of social media. With its capability to learn and analyze
vast amounts of data, AI has revolutionized social media by
enabling businesses, organizations, and individuals to reach
their target audiences more effectively. Nevertheless, the
increasing use of AI in social media has sparked concerns over
the dissemination of fake news, propaganda, and
misinformation.
In their article "A Comprehensive Review of AI in Social Media," Gu and Xu [30] discuss how AI has revolutionized social media, from personalization to content moderation.
According to them, AI has enabled social media platforms to
tailor content to the interests and behavior of individual users,
thus improving the user experience. The use of AI algorithms
in social media advertising has also led to higher engagement
rates, conversion rates, and ROI for businesses. They also
acknowledge that one of the biggest challenges of AI in social
media is the spread of misinformation.
Misinformation refers to intentionally or unintentionally false
or misleading information. Chen and Subrahmanian [31]
discuss the role of algorithms in the spread of misinformation
on social media platforms, through deep fake videos, fake
social media accounts, and bots, which can be used to spread
false information and manipulate public opinion.
The propagation of misinformation on social media can have
grave real-world implications. For example, the use of social media to spread fake news and propaganda during the 2016 US presidential election is widely believed to have influenced the election's outcome.
Moreover, during the COVID-19 pandemic, the spread of
misinformation on social media has led to mistrust, confusion,
and non-compliance with public health guidelines.
Chen and Subrahmanian [31] also argue that AI can be
leveraged to address the problem of misinformation on social
media by using it to detect and remove false information
quickly. Additionally, social media users must be educated
about the risks of misinformation and taught how to identify
false information. Social media platforms must also take
responsibility for the content posted on their platforms and be
transparent about their content moderation policies. Finally,
governments and regulatory bodies can implement laws and
regulations to address the spread of misinformation on social
media.
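As a hedged sketch of the detection idea (not the method of the cited work), the example below trains a tiny TF-IDF text classifier to flag suspicious posts. The training sentences and labels are fabricated; real systems use far larger corpora, fact-checking signals, and human review.

```python
# Toy misinformation flagger: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Health officials release vaccine trial results",
    "Miracle cure suppressed by doctors, share before deleted",
    "City council approves new bus routes",
    "Secret plot revealed, mainstream media hiding the truth",
]
labels = [0, 1, 0, 1]  # 0 = legitimate, 1 = misinformation (fabricated)

flagger = make_pipeline(TfidfVectorizer(), LogisticRegression()).fit(posts, labels)

# Score a new post; flagged items would go to human moderators, not auto-removal.
print(flagger.predict(["Shocking cure they do not want you to see"]))
```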
3.3 Impact of AI in Transportation
According to research, the global AI-in-transportation market is projected to roughly double its 2017 value by 2023. Through innovations that enable safer and cleaner travel across different modes of transportation, AI is having a huge impact in revolutionizing the sector.
One of its applications is the prediction of accidents based on
environmental and other factors. Several companies, such as
Geotab, Sfara, and Zendrive, are utilizing AI to predict
crashes, enabling proactive measures to prevent accidents.
Another significant development is the integration of electric
vehicles with AI. Electric vehicles have lower emissions and
can greatly aid in reducing environmental pollution. Connect
Transit is an example of a company using electric buses
integrated with AI to optimize the routes and reduce energy
consumption. The AI system adjusts the bus schedules based
on traffic and weather conditions, providing a more efficient
and eco-friendly service.
AI-powered self-driving cars are another development that has
the potential to make transportation safer. Self-driving cars
equipped with AI have the ability to detect and avoid
collisions with pedestrians and cyclists, reducing the number
of accidents caused by human error. Companies such as
Waymo, Tesla, and Uber are already testing self-driving cars
on roads.
In addition, AI can be used to reduce traffic congestion and
ensure a smooth flow of traffic. AI-powered traffic
management systems are used by many smart cities worldwide
to optimize traffic flow and reduce congestion.
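A minimal sketch of the route-optimization idea behind such systems: model the road network as a weighted graph whose edge weights reflect current travel times, and re-run a shortest-path search as conditions change. The network, place names, and weights are invented for the example.

```python
# Toy route optimizer: Dijkstra over a road graph with live travel times.
import heapq

def shortest_route(graph, start, goal):
    """Return (total_minutes, route) using Dijkstra's algorithm."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return float("inf"), []

# Edge weights = current travel time in minutes (congestion-adjusted).
roads = {
    "depot": {"A": 7, "B": 4},
    "A": {"stadium": 6},
    "B": {"A": 2, "stadium": 11},
}
print(shortest_route(roads, "depot", "stadium"))  # (12.0, ['depot', 'B', 'A', 'stadium'])
```

In a deployed system the edge weights would be refreshed continuously from sensor and weather feeds, and schedules or routes recomputed as they change.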
3.4 Artificial Intelligence in Logistics
The logistics industry has experienced a significant
transformation over the last few years due to the adoption of
artificial intelligence (AI).
AI's impact has been seen in several areas, including supply
chain management, warehouse management, and
transportation management. AI algorithms can help optimize
supply chain management by analyzing data on inventory
levels, shipping times, and customer demand, leading to
efficient inventory management, reduced shipping times, and
enhanced customer satisfaction. Warehouse management can
also benefit from AI-powered robots that automate picking
and packing, optimize warehouse layouts, and improve
inventory management.
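A minimal sketch of one classic calculation underlying AI-assisted inventory management: a reorder point computed from recent demand and supplier lead time, with safety stock to absorb demand variability. All figures are invented.

```python
# Toy inventory check: reorder point = lead-time demand + safety stock.
import statistics

daily_demand = [32, 28, 41, 35, 30, 38, 33]  # recent units sold per day
lead_time_days = 5                           # supplier delivery time
service_factor = 1.65                        # ~95% service level (z-score)

mean_d = statistics.mean(daily_demand)
std_d = statistics.stdev(daily_demand)

safety_stock = service_factor * std_d * lead_time_days ** 0.5
reorder_point = mean_d * lead_time_days + safety_stock

on_hand = 180
print(f"Reorder point: {reorder_point:.0f} units")
if on_hand <= reorder_point:
    print("Place replenishment order now.")
```

AI-driven systems replace the simple historical average with demand forecasts that account for seasonality and promotions, but the trigger logic is essentially this.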
Transportation management can be optimized using AI
algorithms to reduce fuel consumption, improve delivery
times, and optimize routes. Self-driving trucks can transport goods over long distances, while drones can deliver small packages efficiently. Although AI technology's
implementation in logistics can automate some job roles, it
also creates new job opportunities in data analysis, robotics,
and AI development.
However, AI technology's use in logistics poses a significant
challenge to data privacy and security. Sensitive data, such as
customer information, could be at risk of data breaches and
cyber-attacks. It is critical for logistics companies to
implement robust security measures to safeguard their data
and comply with data privacy regulations.
3.5 AI and the Ethical Implications of Decision-making
Despite its potential to improve efficiency, reduce costs, and
speed up research and development, there are certain ethical
and social concerns regarding AI. These arise from private companies employing AI software to make critical decisions
concerning health, employment, creditworthiness, and
criminal justice, without any substantial oversight from the
government. Consequently, there are concerns about how
these companies ensure that their programs are not structurally
biased, either consciously or unconsciously [32]. Meanwhile, AI has become integral to the strategy of virtually every big company, and AI systems are being entrusted with increasingly sophisticated assignments; without oversight, this could cause more societal harm than economic benefit.
Some researchers believe that AI offers a way to remove
human subjectivity and bias, while others warn that many
algorithms used in decision-making processes may already be
influenced by societal biases. There are however opinions that
concerns about AI injecting bias into everyday life on a large
scale may be exaggerated [32]. The sentiment is that biases
already exist, especially in situations where human decision-making is involved.
When used carefully and thoughtfully, AI can help to reduce
the potential for human favoritism or prejudice although some
philosophers think that AI will always reproduce human
biases.
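One way such structural bias can be audited is a simple demographic-parity check: compare a model's approval rates across groups. The sketch below is a toy audit over fabricated decisions; real audits use many metrics and statistical tests.

```python
# Toy bias audit: demographic parity gap in automated loan decisions.
decisions = [  # (group, approved) pairs, fabricated for illustration
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

def approval_rate(group):
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = abs(approval_rate("A") - approval_rate("B"))
print(f"Group A: {approval_rate('A'):.0%}, Group B: {approval_rate('B'):.0%}")
print(f"Demographic parity gap: {gap:.0%}")  # large gaps warrant review
```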
3.6 The Future of AI and its Potential Impact on Society
Artificial Intelligence is rapidly evolving and driving the
emergence of other technologies. AI will continue to expand in
its innovations and adoptions, with approximately 44% of
companies planning to make serious investments in it. IBM
inventors received more than 2,300 AI-related patents out of the 9,130 U.S. patents granted to IBM in 2020 [33].
already gained mainstream attention as examples of generative
AI.
The potential applications of Artificial Intelligence (AI) are
not limited to technological advancements alone. In addition,
AI has the potential to significantly impact sustainability,
climate change, and environmental concerns. One of the areas
where AI can have a positive impact is in the development of
smart cities. The use of advanced sensors and machine
learning algorithms in cities can result in reduced congestion,
pollution, and improved quality of life for citizens. For
instance, AI can be used to optimize traffic flow, reduce
energy consumption, and improve waste management
practices.
According to a report by the United Nations [34], the use of AI
in smart cities has the potential to reduce energy consumption
by up to 30% and reduce greenhouse gas emissions by up to
15%. Moreover, the deployment of AI-powered sensors in the
transportation sector can help to reduce emissions by
optimizing routes and reducing congestion. These applications
of AI in smart cities are expected to create new job
opportunities in the fields of data analysis, machine learning,
and smart city planning, among others.
At present, AI cannot fully understand natural language the way humans do. If AI could truly understand human languages, however, it could in principle read and absorb everything ever written. While AI can improve efficiency and
augment human work, it can also have both positive and
negative impacts on society. The use of AI has the potential to
improve job satisfaction and increase human productivity, as
repetitive or hazardous tasks can be delegated to machines.
This allows humans to focus on creative and empathetic work,
leading to a more fulfilling work environment [35].
AI can also impact healthcare by enhancing the operations of
medical facilities, and reducing operating costs. Patients could
also benefit from personalized treatment plans and drug
protocols.
The use of AI in the judiciary, including facial recognition technology, presents opportunities to solve crimes more effectively, though safeguards are needed so that individuals' privacy is not compromised.
Overall, AI will have a tremendous impact on society, provided that its misuse is curbed and the ethical concerns it raises are properly managed.
4. CONCLUSION AND RECOMMENDATION
The rapid growth of AI holds immense potential for
revolutionary advancements across various domains,
including healthcare, finance, transportation, and marketing.
However, the widespread application of AI technologies raises
significant concerns. The societal impacts of AI have sparked
extensive debates in scholarly literature. Some researchers
argue that AI has the capacity to bring substantial benefits to
society, such as improved healthcare outcomes. Conversely,
others express concerns about potential repercussions such as
widespread unemployment, data breaches, privacy violations,
cybercrime, and negative effects on social and environmental
sustainability.
In conclusion, the influence of AI on society is complex and
multifaceted. While the future of AI remains uncertain, it is
clear that its impact will continue to expand across numerous
industries and societal aspects. However, it is vital to address
the ethical and social implications associated with AI. This
necessitates a comprehensive assessment of the ethical and
societal consequences during the development and
implementation of AI technologies, along with responsible
governance practices. By doing so, we can minimize
unfavorable effects and maximize positive outcomes, ensuring
that AI contributes to a better society.
REFERENCES
[1] S. Marston, Z. Li, S. Bandyopadhyay, J. Zhang, and A. Ghalsasi, "Cloud computing - The business perspective," Decision Support Systems, vol. 51, no. 1, pp. 176-189, 2011.
[2] D. Crevier, "AI: The Tumultuous History of the Search
for Artificial Intelligence," New York, NY: BasicBooks,
1993.
[3] M. Copeland, "What’s the difference between artificial
intelligence, machine learning, and deep learning?"
[Online]. Available:
https://blogs.nvidia.com/blog/2016/07/29/whats-differe
nce-artificial-intelligence-machine-learning-deep-learni
ng-ai/. [Accessed: April, 2023].
[4] O. Firschein, M.A. Fischler, L.S. Coles, and J.M.
Tenenbaum, "Forecasting and assessing the impact of
artificial intelligence on society," in Proceedings of the
5th International Joint Conference on Artificial
Intelligence (IJCAI 5), 1973, pp. 105-120.
[5] M.S. Anant and B.H. Wasif, "Artificial Intelligence,"
IJRASET, vol. 7, no. 10, pp. 1-7, 2022, doi:
10.22214/ijraset.2022.4430.
[6] McKinsey Global Institute. (2018). Notes from the
frontier: Modeling the impact of AI on the world
economy.
https://www.mckinsey.com/~/media/McKinsey/Feature
d%20Insights/Artificial%20Intelligence/Notes%20from
%20the%20frontier%20Modeling%20the%20impact%
20of%20AI%20on%20the%20world%20economy/MGI
-Notes-from-the-AI-frontier-Modeling-the-impact-of-AI
-on-the-world-economy-September-2018.ashx
[7] PwC UK, "Will Robots Really Steal Our Jobs?,"
https://www.pwc.co.uk/economic-services/assets/intern
ational-impact-of-automation-feb-2018.pdf
[8] World Economic Forum, “The Future of Jobs Report
2018,” Future of Jobs 2018, [Online]. Available:
http://reports.weforum.org/future-of-jobs-2018/?doing_
wp_cron=1615199949.969011146697998046875.
[Accessed: April, 2023]
[9] "The ZipRecruiter Future of Work Report,"
ZipRecruiter, Feb. 27, 2020. [Online]. Available:
https://www.ziprecruiter.com/blog/future-of-work-repor
t-2019/.
[10] X. Yang, Y. Wang, R. Byrne, G. Schneider, and S. Yang, "Concepts of artificial intelligence for computer-assisted drug discovery," Chem. Rev., vol. 119, no. 18, pp. 10520-10594, 2019.
[11] R. J. Burton, M. Albur, M. Eberl, and S. M. Cuff, "Using
artificial intelligence to reduce diagnostic workload
without compromising detection of urinary tract
infections," BMC Med Inform Decis Mak, vol. 19, no. 1,
p. 171, 2019.
[12] B. Meskó, Z. Drobni, E. Bényei, B. Gergely, and Z.
Gyorffy, "Digital health is a cultural transformation of
traditional healthcare," Mhealth, vol. 3, p. 38, 2017.
[13] S. Hamid, "The opportunities and risks of artificial
intelligence in medicine and healthcare," 2016. [Online].
Available:
http://www.cuspe.org/wp-content/uploads/2016/09/Ham
id_2016.pdf.
[14] B.-J. Cho, Y.-J. Choi, M.-J. Lee, J.-H. Kim, G.-H. Son,
S.-H. Park, et al., "Classification of cervical neoplasms
on colposcopic photography using deep learning,"
Scientific Reports, vol. 10, no. 1, p. 13652, 2020.
[15] O. M. Doyle, N. Leavitt, and J. A. Rigg, "Finding
undiagnosed patients with hepatitis C infection: an
application of artificial intelligence to patient claims
data," Scientific Reports, vol. 10, no. 1, p. 10521, 2020.
[16] E. H. Shortliffe and M. J. Sepúlveda, "Clinical decision
support in the era of artificial intelligence," JAMA, vol.
320, no. 21, pp. 2199-2200, 2018.
[17] M. Massaro, J. Dumay, and J. Guthrie, "On the shoulders
of giants: undertaking a structured literature review in
accounting," Account. Auditing Account. J., vol. 29, no.
5, pp. 767-801, 2016.
[18] B. Wahl, A. Cossy-Gantner, S. Germann, N.R.
Schwalbe, “Artificial intelligence (AI) and global health:
how can AI contribute to health in resource-poor
settings?,” BMJ Global Health. Vol. 3, Issue 4, 2018.
[19] J. Collingwood, "Artificial Intelligence in Medical
Diagnosis," Southern Medical Association. [Online].
Available: https://sma.org/ai-in-medical-diagnosis/.
[Accessed: April, 2023].
[20] F. Marcin, "Artificial Intelligence in Education: The Rise
of Personalized Learning," Artificial Intelligence.
[Online]. Available:
https://ts2.space/en/artificial-intelligence-in-education-t
he-rise-of-personalized-learning/. [Accessed: Apr.,
2023].
[21] V. Gupta, A. Chakraborty, and D. Singh, "AI-based
Cyber Security: An Overview of Recent Advancements,
Opportunities and Challenges," in 2021 IEEE
International Conference on Computing,
Communication and Automation (ICCCA), 2021, pp.
217-222.
[22] E. Kairouz, B. McMahan, and A. Avent, "Advances and
open problems in federated learning," in Proceedings of
the 2nd Workshop on Open Problems in Network
Security (iNetSec), 2019.
[23] A. Cavoukian, K. Jonas, and J. Castro, "The Future of
Privacy in AI: Principles to Guide a Human-Centred
Approach," The Global Privacy & Security by Design
Centre, Ryerson University, 2021.
[24] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, "Membership inference attacks against machine learning models," in
IEEE Symposium on Security and Privacy, 2017, pp.
3-18.
[25] Y. Wang and M. Kosinski, "Deep neural networks are
more accurate than humans at detecting sexual
orientation from facial images," Journal of Personality
and Social Psychology, vol. 114, no. 2, pp. 246-257,
2018.
[26] A. Weller, "The challenges of securing artificial
intelligence," Nature Machine Intelligence, vol. 3, no. 7,
pp. 291-293, 2021.
[27] H. Shin, T. Kwon, and J. Hong, "Survey on the Security
of Artificial Intelligence in Autonomous Driving," IEEE
Access, vol. 8, pp. 42074-42091, 2020.
[28] K. Koscher, A. Czeskis, and F. Roesner, "Experimental
security analysis of a modern automobile," in
Proceedings of the 2010 IEEE Symposium on Security
and Privacy, 2010, pp. 447-462.
[29] O. Tene and J. Polonetsky, "Big data for all: Privacy and
user control in the age of analytics," Northwestern
Journal of Technology and Intellectual Property, vol. 16,
no. 5, pp. 239-274, 2018.
[30] Gu, B., & Xu, J. (2021). A Comprehensive Review of AI
in Social Media. IEEE
[31] Chen, J. Y., & Subrahmanian, V. S. (2021). AI and
Misinformation: The Role of Algorithms in the Spread of
Misinformation. IEEE Security & Privacy, 19(3), 87-91.
[32] P. Christina, "Ethical concerns mount as AI takes a
bigger decision-making role in more industries," The
Harvard Gazette, Oct. 2020. [Online]. Available:
https://news.harvard.edu/gazette/story/2020/10/ethical-c
oncerns-mount-as-ai-takes-bigger-decision-making-role/
. [Accessed: Apr., 2023].
[33] IBM, "IBM Tops U.S. Patent List for 28th Consecutive
Year with Innovations in Artificial Intelligence, Hybrid
Cloud, Quantum Computing and Cyber Security," Jan.
12, 2021. [Online]. Available:
https://newsroom.ibm.com/2021-01-12-IBM-Tops-U-S-
Patent-List-for-28th-Consecutive-Year-with-Innovation
s-in-Artificial-Intelligence-Hybrid-Cloud-Quantum-Co
mputing-and-Cyber-Security. [Accessed: April , 2023]
[34] United Nations Department of Economic and Social
Affairs, "Artificial Intelligence for Sustainable
Development," United Nations, 2018.
[35] J. Lee, "The Positive Impact of Artificial Intelligence on
Society," Forbes, September 2, 2020. [Online].
Available:
https://www.forbes.com/sites/jasonlee1/2020/09/02/the-
positive-impact-of-artificial-intelligence-on-society/?sh
=18be58a33a98. [Accessed: April, 2023].
Article
For some time now,artificial intelligence(Al) has been receiving unprecedented attention.Why is this?Because it is making a genuine leap forward as a combined result of four factors:the rapid advance in communications that sends all forms of expression hurtling across the planet at the speed of light, computer processing power (now measured in quadrillions of operations per second), the explosion of available data and the progress of machine learning. Hence, as Andr6-Yves Portnoff and Jean-Frangois Soupizet assert, a whole new ecosystem is emerging. What might the applications of Al be? There are already countless possible uses, ranging from the milking of goats, banking services, autonomous vehicles, digital marketing and smart cities to health and sabotage...Some experts who subscribe to the "technological singularity" theory even believe that Al could take over the planet, an assertion staunchly contested here by our authors who do, however, stress how much the division of roles between men and machines needs to be rethought, as does the relationship between them. They also point out, incidentally, that the spread of Al within businesses hasn't gone as far as all that, since that would imply profound changes in forms of organization and management-in short, a cultural revolution, and culture does not move at the same pace as technological advance!Turning to the question of the players involved, they stress the conflict between the new entrants (the American and Chinese Internet giants) and traditional companies, together with states whose sovereignty is seriously impaired as a result; but these latter may discover that Al affords them the means to restore their power,for better or for worse, in years to come. Drawing in this article on a foresight analysis carried out for the members of the Futur- ibles International association, Andr6-Yves Portnoff and Jean-Francois Soupizet venture to outline a number of possible futures. These are not scenarios properly so-called, but contrasting models. They include the "privatized digital panopticon", characterized by the supremacy of the digital giants; the "statized digital panopticon", which would see the Chinese regime and the IT giants coming together in their own shared interest; the "enlightened long-termist" model; and that of "digital criminalities". In doing so,the authors show once again how technologies are double-edged and how it is important that we-and particularly we Europeans-take responsibility when choices are being made that will undoubtedly shape the future for many years to come.