Artificial Intelligence Project Success – Emerging Trends
Gloria J. Miller, DBA
IEEE Senior Member, Distinguished Contributor Computer Society
Abstract—This study used a systematic literature review, topic
modeling, emerging trend measures, and qualitative analysis to
identify emerging trends relevant to artificial intelligence (AI)
projects. Project teams are the moral agents accountable for the
harms or benefits of the AI systems they develop. In view of
the transformative benefits and risks expected from AI, project
managers and sponsors need to adapt their project approaches
to address the emerging concerns with AI systems. This study
identified nine emerging topics and short descriptions for each
topic. The study contributes to the project management literature
and goes beyond pointing out ethical AI principles. It provides
practical topics professionals can use as directional input for
planning and controlling AI projects.
Keywords—Algorithms - Artificial intelligence - Big data -
Critical success factors - Project management
I. INTRODUCTION
A moral agent is a person who makes a decision even
when the decision-maker may not recognize that a moral
issue is at stake [1]. The project teams implementing artificial
intelligence (AI) systems are moral agents. They are account-
able for the harms or benefits to individuals, society, and
the environment caused by developing or using their systems
[2]–[4].
AI systems can learn, reason, draw conclusions, process natural language, and perceive and comprehend visual scenes, and they are ultimately implemented as algorithms. Algorithmic decision-making replaces or augments human decision-making across
many industries and functions [5], [6]. Scholars anticipate that
AI will significantly impact society, generating productivity
and efficiency gains and changing the way people work [7].
In view of AI’s transformative benefits and risks, project
managers and sponsors need to adapt their project approaches
to address the emerging concerns with AI systems.
This study builds on a study of AI project success factors
by using quantitative methods to identify the emerging trends
in the literature on AI. It answers the following questions:
What main topics are emerging in the scholarly literature of
interest to AI project success? How much emphasis does the
literature place on each of the topics identified? What is the
level of scientific impact of each topic? And most importantly,
the study uses qualitative methods to analyze each topic and
answer the question: What do the trends mean in practical
terms? The research methodology used includes a systematic
literature review, topic modeling, emerging trend measures,
and qualitative analysis.
This study makes three contributions. First, it introduces the
emerging trends, AI-relevant topics, and AI success factors
into the project management literature. Second, it provides
practical narratives professionals can use for planning and con-
trolling AI projects. Third, it uses systematic literature review
and topic modeling in an innovative way to provide practical
details relevant to researchers and professional practitioners;
it addresses the limitation of bibliographic studies that do not address the "so what" question.
The paper is structured as follows. Section Two provides
a review of related research. Section Three describes the
research methodology, and Section Four outlines the findings.
Section Five discusses the results and conclusions, including
the study's contribution, implications, limitations, and considerations for future research.
II. RELATED RESEARCH
A. AI project success
In the existing project management literature, the differ-
ences between AI and other information technology projects
are not heavily researched. The information technology (IT)
[8]–[17] and project management success models [18]–[20] do
not directly address the challenges of successfully managing
AI projects [21]. This presents a dangerous gap, as project
teams are moral agents. The decisions made within the project
may cause harm to individuals, society, and the environment.
Moreover, as [3] explains, "developers are those most capable of enacting change in the design and are sometimes the only individuals in a position to change the algorithm" [3, p. 844].
Scholarly literature on AI ethics has focused on defining
values, principles, frameworks, and guidelines for ethical AI
development and deployment [22], [23]. However, translating concepts, theories, and values into practice is difficult. Specifically, [23] explains that the translation process is likely to "encounter incommensurable moral norms and frameworks which present true moral dilemmas that principlism cannot resolve" [23, p. 6].
Ryan and Stahl [24] describe what users and developers should do to realize their moral responsibilities in an AI project. The paper
provides detailed explanations of the normative implication of
AI ethics guidelines for developers and organizational users.
However, it limits the scope to ethical considerations and
excludes activities relevant to success by other stakeholders.
Miller [4] identified five categories of AI project success factors in 17
groups related to moral decision-making with algorithms. The
study translates AI ethical principles into practical project de-
liverables and actions that underpin the success of AI projects.
However, the study did not address external critical success
factors such as organizational and environmental factors.
B. AI bibliometrics studies
Several bibliometric and literature studies address the ethical
concerns of AI applications. Specifically, [25] used bibliomet-
rics on a Web of Science (WoS) corpus with 4375 articles to
study AI ethics and privacy. Similarly, the bibliometric analysis
from [26] studied the ethical concerns of big data using 892 ar-
ticles from WoS. Other researchers have investigated AI ethics
with systematic literature reviews from various perspectives:
sustainable development goals [27], ethical data [28], project
success [21], human rights [29], algorithmic accountability
[30], trust and justice [31], and health applications [32].
While taking different approaches and having differing
scopes, the studies identified similar themes and issues: AI
techniques, applications, devices, and products; legal, tech-
nological and political issues; data privacy; and privacy and
ethics in the health care and medical field. The studies also
recognized that bibliographic studies are limited in their ability to answer "how," "why," and "so what."
As [33] points out, detecting emerging research topics is
important for understanding the attributes of emergence and
as a starting point for policy and funding decisions. However,
some bibliographic approaches are not transparent about the
links between the attributes of emergence and the indicators.
III. METHODOLOGY
A systematic literature review, topic modeling, emerging
trend measures, and qualitative trend and article analysis were
selected as the research methodology. Figure 1 provides a
process overview of the methodology.
Fig. 1. Methodology Process: Systematic Literature Review → Topic Modeling → Emerging Trend Calculations → Trend & Topic Analysis
A. Systematic Literature Review
1) Procedure: A single researcher used a systematic review
of the literature to explore the research questions. The procedure included identifying bibliographic databases, defining the search process (including keywords and the search string), determining inclusion and exclusion criteria, removing duplicates, and screening the articles. The
review used peer-reviewed articles from the ProQuest, Emer-
ald, ScienceDirect, IEEE Xplore, and ACM Digital Library
bibliographic databases. The literature search was conducted
in 2020 and 2021 as part of a study on AI project success;
the methodology used for the systematic literature review is
described in [4] and summarized here.
2) Keywords: The focal keywords were chosen after reviewing articles in an initial search using the keywords "stakeholder" and "algorithm"; that search yielded 26 articles. The
ultimate search emphasized accountability because account-
ability focuses on the relationship between project actors and
those to whom the actors should be accountable [30]. Because
not all databases allowed wild cards, variations of the search
string were used, and adjustments were made in the syntax
for each search engine. The keywords in the search titles were
as follows: accountab* AND (machine learning OR artificial intelligence OR AI OR big data OR algorithm*) AND (fair* OR ethic* OR moral* OR success OR transparency OR expla* OR accountab*).
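Because not all databases allow wild cards, variations of the search string had to be assembled per engine. The following is a hypothetical sketch of how such variants can be built; the stems, expansions, and function names are illustrative assumptions, not the study's actual per-database syntax.

```python
# Sketch: expand wildcard stems (e.g., "accountab*") into explicit
# variants for search engines that do not support wild cards.
# Stems and expansions below are illustrative assumptions.

WILDCARD_EXPANSIONS = {
    "accountab*": ["accountability", "accountable"],
    "algorithm*": ["algorithm", "algorithms", "algorithmic"],
    "fair*": ["fair", "fairness"],
    "ethic*": ["ethics", "ethical"],
    "moral*": ["moral", "morality"],
    "expla*": ["explain", "explainable", "explanation"],
}

def expand_term(term, supports_wildcards):
    """Return the term itself, or an OR-group of explicit variants."""
    if supports_wildcards or term not in WILDCARD_EXPANSIONS:
        return term
    return "(" + " OR ".join(WILDCARD_EXPANSIONS[term]) + ")"

def build_query(supports_wildcards=True):
    """Assemble the title search string from the study's keyword groups."""
    subjects = ["machine learning", "artificial intelligence", "AI",
                "big data", expand_term("algorithm*", supports_wildcards)]
    facets = [expand_term(t, supports_wildcards)
              for t in ["fair*", "ethic*", "moral*", "success",
                        "transparency", "expla*", "accountab*"]]
    return (expand_term("accountab*", supports_wildcards)
            + " AND (" + " OR ".join(subjects) + ")"
            + " AND (" + " OR ".join(facets) + ")")
```

For an engine with wildcard support, `build_query(True)` reproduces the string above; `build_query(False)` substitutes explicit word variants.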
3) Exclusions/Inclusions: The search results were limited to peer-reviewed journal articles and conference papers in English. The search did not filter on
date. The 26 articles from the initial search were retained
for analysis. Duplicate entries and entries with no available
document were removed. Next, literature was excluded or
retained in an iterative process based on reading the title,
abstract, or both.
4) Results: In total, 144 articles were retained and used in
the analysis. Dummy records were added for years 2012 and
2016 where no articles were retained; this was necessary for
the emerging trend calculations. These entries did not play a
role in the topic analysis.
The number of citations and downloads were retrieved
for each article. Depending upon the bibliographic database,
downloads may be captured as downloads, full-text reviews,
readers, or accesses. Google Scholar was used when the
number of citations was not available at the bibliographic
database. When the number of downloads was not available,
it was computed as the number of citations multiplied by a
download-uplift factor. The download-uplift factor is a ratio
of downloads per citation per bibliographic database and
year. This step was taken to not disadvantage articles in the
computations where the data were not available.
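The download-uplift imputation described above can be sketched as follows; this is a minimal illustration, and the column names, sample data, and function name are assumptions rather than the study's actual code.

```python
import pandas as pd

# Sketch: where a database does not report downloads, estimate them as
# citations times the downloads-per-citation ratio for that database
# and year, as described in the text. Column names are assumptions.

def impute_downloads(df):
    known = df.dropna(subset=["downloads"])
    known = known[known["citations"] > 0]
    grouped = known.groupby(["database", "year"])
    # Download-uplift factor per (database, year)
    uplift = grouped["downloads"].sum() / grouped["citations"].sum()

    def fill(row):
        if pd.notna(row["downloads"]):
            return row["downloads"]
        factor = uplift.get((row["database"], row["year"]), 0.0)
        return row["citations"] * factor

    out = df.copy()
    out["downloads"] = out.apply(fill, axis=1)
    return out
```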
The proceedings from the fairness, accountability, and transparency (FAT*, FAccT) and AI, ethics, and society (AIES)
conferences and the journal on Computer Law & Security
Review were the top publications by number of articles. Table
I lists the number of citations, downloads, and articles per
bibliographic database.
TABLE I
PUBLICATIONS BY DATABASE
Database Citations Downloads Articles
ACM 2089 66,927 57
Emerald 8 2198 5
IEEE Xplore 11 3848 14
Initial search 1496 45,507 14
Proquest 633 116,409 35
Science Direct 264 1595 19
*_None 0 0 2
*Entries with no articles; needed for the analysis.
B. Topic Modeling
Topic modeling extracts a set of words from documents
based on the statistical probabilities that the words belong in
a specific cluster. The method has three benefits in manage-
ment research [34]. First, the word sets are created without
requiring advanced semantic annotations. Thus, a dictionary
or interpretive rules do not have to be imposed in advance.
Next, it can identify themes that humans may miss. Finally,
words may appear across topics with differing probabilities.
Thus, topics are not mutually exclusive and may overlap.
This research extracted the abstract from the article meta-
data and loaded it into MS Excel. The MS Excel data were
imported into R Studio version 1.3.1073 and R version 4.0.2
for analysis. Latent Dirichlet allocation (LDA) and structural
topic model (STM) methods were evaluated for the topic
modeling. STM was selected as it “provides a general way
to incorporate corpus structure or document metadata into the
standard topic model" [35, p. 1]. In R, the stm package [36]
was used for processing: the textProcessor function was used
to prepare the document-term-matrix, the searchK function
was used to execute several iterations to determine the optimal
number of topics, and the stm function was executed to build
the model.
The topics were created by excluding words used in the
structured literature review query. This step was taken to
cluster articles by the subtopics they discussed instead of
clustering the existing topics according to methods used by
other bibliometric studies on AI. The topicCorr function was
used to calculate and visualize the correlation between topics.
The correlation threshold was set to 0.01. The topic model was built from lowercase words, excluding numbers, with no stemming or other transformations applied.
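As a rough Python analogue of this pipeline: the study used R's stm package, so the sketch below substitutes scikit-learn's plain LDA (which has no document covariates), and the query-word list is an illustrative subset, not the study's exact exclusion list.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Illustrative subset of the search-query words excluded before modeling
QUERY_WORDS = ["accountability", "accountable", "machine", "learning",
               "artificial", "intelligence", "ai", "big", "data",
               "algorithm", "algorithms", "fairness", "ethics", "moral",
               "success", "transparency"]

def fit_topic_model(abstracts, n_topics=9):
    # Lowercase terms, alphabetic tokens only (numbers dropped), no
    # stemming, query words excluded -- mirroring the preprocessing
    # described in the text.
    vectorizer = CountVectorizer(lowercase=True, stop_words=QUERY_WORDS,
                                 token_pattern=r"(?u)\b[a-z][a-z]+\b")
    dtm = vectorizer.fit_transform(abstracts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(dtm)  # per-document topic proportions
    return vectorizer, lda, doc_topics
```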
Topic quality was evaluated using topic semantic coherence,
exclusivity, and residuals based on recommendations from
[37]. Semantic coherence measures the quality of the topics
computed by the topic models (i.e., how well word sets fit
together). The closer the semantic coherence is to zero, the
higher the topic coherence [38]. Topic exclusivity measures
the frequency of words used in a single topic. The residuals measure captures the multinomial dispersion of the residuals. The
number of topics was chosen based on the model with the
highest semantic coherence and exclusivity and the lowest
residuals.
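The semantic coherence measure of Mimno et al. cited above can be sketched from a binary document-term matrix: for each topic's top words, sum log((D(wi, wj) + 1) / D(wj)) over word pairs, where D counts documents containing the word(s). Values are typically negative; values closer to zero indicate word sets that co-occur more often. Function and variable names here are our own.

```python
import numpy as np

def semantic_coherence(doc_term, top_word_ids):
    """Mimno-style semantic coherence for one topic's top-word indices,
    computed over a (documents x terms) count or binary matrix."""
    X = (np.asarray(doc_term) > 0).astype(int)
    score = 0.0
    for i, wi in enumerate(top_word_ids[1:], start=1):
        for wj in top_word_ids[:i]:
            co_docs = int((X[:, wi] * X[:, wj]).sum())  # D(wi, wj)
            docs_j = int(X[:, wj].sum())                # D(wj), assumed > 0
            score += np.log((co_docs + 1) / docs_j)
    return score
```

Model selection then keeps the number of topics whose model scores highest on coherence and exclusivity with the lowest residuals, as described above.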
C. Emerging Trend Calculations
Emerging topics were identified based on the model from [33] because their approach identifies emerging topics in a conceptually straightforward, operationally transparent way and can be adjusted for various purposes. The authors summarize that emerging topics should satisfy four criteria: rapid growth, radical novelty, major scientific impact, and coherence.
Rapid growth captures the growth rate in the number of pub-
lications. The publication ratio is smoothed across several time
periods to account for random fluctuations. Radical novelty
captures a sudden emergence of the topic; it is measured by
the number of articles being relatively small at the beginning
of the time period. Scientific impact reflects the use of the topic
in research. [33] uses citation counts as a measure of scientific
impact. However, in computer science, frontier research is
mainly presented at conferences [33], which would mean
too few topics would be identified by relying on citations.
Thus, we used the number of downloads and citation counts.
Coherence measures how closely the publications in the topic
are connected. [33] argues that the number of citations in the
cluster should not be higher than the number of publications in
the cluster. The number of within-cluster citations is divided
by the total number of publications for the measure. We use
the topic semantic coherence from the topic modeling as an
alternative measure for coherence.
Given the rapid changes in the field of AI, two years were
used for smoothing, and the analysis was for 10 years, 2011
to 2021. The measures were computed for each topic and year. The means for the number of citations, downloads,
and growth factor were established as minimal measures. A
topic was considered emergent in a given year when it met
all criteria simultaneously for a year; for scientific impact, the
topic had to meet the criteria for downloads or citations. A
topic could have multiple emergent periods because all criteria
could be satisfied in multiple years. We recorded all emergent
periods.
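The emergence test described above can be sketched as follows. A topic-year is flagged emergent only when it simultaneously passes the growth, coherence, and scientific-impact thresholds (impact via downloads or citations); the radical-novelty check (few articles early in the period) is assumed to have been applied per topic beforehand. Thresholds correspond to the corpus means mentioned in the text; the function and variable names are ours.

```python
import numpy as np

def smoothed_growth(counts, window=2):
    """Ratio of this year's publication count to the mean of the prior
    `window` years, smoothing out random fluctuations."""
    counts = np.asarray(counts, dtype=float)
    growth = np.zeros(len(counts))
    for t in range(window, len(counts)):
        prev = counts[t - window:t].mean()
        growth[t] = counts[t] / prev if prev > 0 else 0.0
    return growth

def emergent_years(years, counts, citations, downloads, coherence,
                   min_growth, min_cites, min_downloads, min_coherence):
    """Return the years in which a topic meets all criteria at once;
    a topic may have multiple emergent periods."""
    growth = smoothed_growth(counts)
    result = []
    for t, year in enumerate(years):
        impact = citations[t] >= min_cites or downloads[t] >= min_downloads
        if (growth[t] >= min_growth and impact
                and coherence >= min_coherence):
            result.append(year)
    return result
```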
D. Thematic analysis
Labeling the topics was done iteratively. First, the top two
most frequent words were automatically used to name the
topic. For each topic, the content of the top articles was
analyzed based on their correlation with the topic keywords,
number of citations, and number of downloads. The full
text of the top articles was read, and the key themes were
summarized; the themes are described in the findings section.
Finally, the topic names were refined to be meaningful based
on the topic’s themes and across topics.
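The first labeling pass above, naming each topic by its two most probable words, can be sketched as follows; the sample matrix and vocabulary are illustrative.

```python
import numpy as np

def initial_labels(topic_word, vocab, n_words=2):
    """Name each topic by its n_words most probable words, given a
    (topics x vocabulary) word-probability matrix."""
    labels = []
    for row in np.asarray(topic_word, dtype=float):
        top = np.argsort(row)[::-1][:n_words]  # highest-probability words
        labels.append(" ".join(vocab[i] for i in top))
    return labels
```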
E. Validity and Reliability
The systematic literature review checklist and phase flow
from the preferred reporting items for systematic reviews
and meta-analyses (PRISMA) statement guided the struc-
tured literature review [39]. The indicators and attributes of
emergence were determined following the definitions from
[33]. External validity was ensured by using the peer-reviewed literature as primary and validation sources. Because the research was conducted by a single researcher, the topic results were compared with other bibliographic research on big data and AI [21], [40].
IV. RESEARCH FINDINGS
A. Topic Analysis
Nine topics were chosen based on the quality analysis of the
semantic coherence, exclusivity, and residuals. The topics were
then assigned to each article. Table II provides a summary of the topics and article demographics and answers the research question: What main topics of interest to AI project success are emerging in the scholarly literature? A short description of each topic is given in this section.
TABLE II
TOPICS WITH ARTICLE MEASURES
Topic Label Art Prev Coh Excl
T001 Ethics framework 19 0.12 -114.49 9.11
T002 Impact assessments 17 0.12 -64.23 8.62
T003 Legal protections 11 0.08 -85.35 9.42
T004 Business benefits 14 0.10 -135.09 9.11
T005 Design patterns 21 0.15 -77.63 8.47
T006 Trustworthy data 17 0.12 -86.03 8.57
T007 Stakeholder acceptance 21 0.13 -62.70 8.90
T008 Trustworthy models 19 0.13 -82.43 8.75
T009 Environmental factors 7 0.05 -151.81 9.73
Legend: Art-Number Articles; Prev-Prevalence;
Coh-Semantic Coherence; Excl-Exclusivity
• Topic T001 Ethics framework addresses activities necessary from systems developers and their organizations for ethical AI systems development and from the governmental perspective for holding organizations accountable. The topics include practical guidelines, human-centered governance, individual accountability, and stakeholder-oriented documents.
• Topic T002 Impact assessments emphasizes identifying and assessing application quality and societal impacts. The topic includes impact assessments and ethical frameworks.
• Topic T003 Legal protections focuses on AI systems' legal, regulatory, and usage concerns. The topic includes human rights, data protection, other laws and regulations, intellectual property, and harmful usage.
• Topic T004 Business models focuses on the impacts of AI implementations on organizational business models and the considerations AI implementations should give to those impacts. It includes business benefits, business transformation, disclosures, AI-aware business models, sustainable business models, and project efficiency.
• Topic T005 Design patterns covers many topics that require choices, considerations, and trade-offs in designing and implementing AI solutions. The topic includes a built-in ethical core, questioning past practices, focus on resources, disaggregated evaluations, and transparency mechanisms.
• Topic T006 Trustworthy data focuses on the life cycle management of the data used in machine learning processes. It includes rule annotations, datasheets, data engineering, decision engineering, and performance metrics.
• Topic T007 Stakeholder acceptance focuses on the attitudes, perceptions, and expectations stakeholders have about algorithmic outcomes. The topics include decision accountability, developer orientation, developer bias, and user perceptions.
• Topic T008 Trustworthy models focuses on the decisions and trade-offs needed in the model and algorithm creation and maintenance process to address ethical expectations such as explainability, transparency, and reliability. It includes model cards, requirements elicitation, model trust, and transparency documentation.
• Topic T009 Environmental factors brings together a set of high-level environmental items reflected in many of the other topics. The topic includes sustainable ecosystem, environment context, fairness, and representativeness.
There was no overlap or significant correlation between the
topics based on the topicCorr function’s correlation analysis.
This is not surprising since topics were created by excluding
words used in the structured literature review query.
B. Trend and Topic Analysis
The four attributes for emerging research include radical
novelty, fast growth, coherence, and scientific impact. The
emerging parameters were established per topic according to
the process defined by [33]. For a topic-year pairing to be
considered emergent, it had to meet all criteria for that year.
Table III shows the emergence attributes and the pattern for
articles between 2011 and 2021 for each topic; the red dots
represent the years with the minimum and maximum articles.
The table answers two of our research questions: How much emphasis did the literature put on each topic identified? What is the level of scientific impact of each topic? The majority of the AI topics have emerged since 2019. The topics
have trended from innovative uses of technology in decision
making, to addressing the general public’s concerns about
AI, to addressing some of the problems and challenges with
AI usage, towards finding who should be accountable for AI
decisions, and presently, applying techniques to address some
of the challenges and problems identified earlier.
1) Novelty: T008 Trustworthy models is the most novel
topic due to the number of publications in 2021. The articles
in this topic focus on the decisions and trade-offs needed in
the model and algorithm creation and maintenance process to address ethical expectations such as explainability, transparency, and reliability.
2) Coherence: T007 Stakeholder acceptance is the most
coherent topic group, whereas T009 Environmental factors
is the least coherent. Both topics emerged in 2021. T009 Environmental factors brings together a set of high-level environmental items reflected in many other topics; thus, it
covers a wide variety of themes in a few articles. Conversely,
T007 Stakeholder acceptance focuses on people’s attitudes,
perceptions, and expectations about algorithmic outcomes.
These articles concentrate on the human elements of AI
development and usage. The concerns seem to be similar
across time.
3) Radical growth: T001 Ethics framework and T002 Im-
pact assessments demonstrated the highest growth factors.
Both address AI ethics but from completely different perspectives. T001 Ethics framework addresses activities necessary from systems developers and their organizations for
ethical AI systems development and usage. T002 Impact
assessments is about ethical data-driven decision-making from
a technology perspective (e.g., big data, digitization, algo-
rithms).
TABLE III
TOPIC EMERGENCE ATTRIBUTES WITH TREND BY YEAR
Topic Emergent Emergent Measures Trend
ID Label Year(s) Grw Nov Coh Cite Dwnl 2011-2021
T001 Ethics framework 2013, 2020, 2021 1.89 5.33 -114 164 22232
T002 Impact assessments 2019, 2020 1.35 4.67 -64 55 23089
T003 Legal protections 2017, 2018, 2020, 2021 1.20 2.00 -85 29 21885
T004 Business models 2017, 2018, 2019, 2020, 2021 0.82 2.67 -135 55 29872
T005 Design patterns 2019, 2020, 2021 1.09 5.33 -78 64 30591
T006 Trustworthy data 2020, 2021 1.50 5.00 -86 194 20496
T007 Stakeholder acceptance 2019, 2020, 2021 0.78 5.67 -63 63 14117
T008 Trustworthy models 2021 0.41 6.00 -82 125 7030
T009 Environmental factors 2021 0.21 2.33 -152 374 1302
Legend: Grw-Radical Growth; Nov-Novelty; Coh-Semantic coherence; Cite-Citations; Dwnl-Downloads
*T003 contains the dummy records for years 2012 and 2016.
4) Scientific impact: Finally, the highest scientific impact
based on citations is T009 Environmental factors, and the highest based on downloads is T005 Design patterns. T005 Design patterns covers several topics that require choices,
considerations, and trade-offs in designing and implementing
AI solutions. It has a variety of well-read journals across
diverse industry sectors. Conversely, T009 has a journal article
published in 2011 that has enjoyed an impressive number of
citations, 1310. Otherwise, the entries are mostly conference
papers.
V. DISCUSSION AND CONCLUSIONS
AI projects differ from other IT projects based on the
impacts of the project outcomes on society. The emergent
trends identified in this study underscore these differences. The
results answer the research question: What do the trends mean
in practical terms? Table IV compares the emerging trends and
the project management critical success factors drawn from Pinto and Slevin [19], a highly referenced model [18].
The topics identify trends to which project managers and
sponsors should pay attention. Specifically, there is a close
relationship between the business model of the organization,
the AI system scope, and the legal and ethical concerns of
external stakeholders. Stakeholder acceptance depends on the
perception and biases of the development team and external
stakeholders. Thus, the ethical requirements go beyond ethical
policies and practices towards individual accountability and
questioning past practices (i.e., what the data represents).
The trustworthiness of the data and models is not only a
technical issue; it requires careful orchestration of require-
ments elicitation, documentation, engineering, and decision
accountability. Finally, many high-level environmental items
such as government policies, industry laws, and regulations
affect whether the project can meet its goal. In summary, the topics provide a roadmap that helps project teams recognize decisions in developing or using their AI systems that may harm individuals, society, or the environment.
This study contributes to project management research
by investigating the emerging topics unique to AI projects.
Because few studies focus on the concerns of AI from a project
perspective, the results of this study are novel. Furthermore,
the results are timely, given that many articles on AI were published in the six months before this study.
There are some limitations to the study. The articles were
extracted at a single point in time. Articles on emerging
topics are published every day. Thus, the very latest trends
are not considered. The literature review was conducted by
an individual researcher, and it is possible that important and
relevant articles may have been overlooked. Future research
could include qualitative analysis methods with project par-
ticipants to understand how these trends materialize in reality.
Another avenue of research could be to investigate patterns
across cases.
REFERENCES
[1] T. M. Jones, “Ethical decision making by individuals in organizations:
An issue-contingent model,” Academy of Management Review, vol. 16,
no. 2, pp. 366–395, 1991.
[2] N. Manders-Huits, “Moral responsibility and it for human enhancement,”
in SAC 2006: Proceedings of the 2006 ACM Symposium on Applied
Computing, pp. 267–271, ACM, 2006.
[3] K. Martin, “Ethical implications and accountability of algorithms,”
Journal of Business Ethics, vol. 160, no. 4, pp. 835–850, 2019.
[4] G. J. Miller, “Moral decision-making with algorithms: Artificial intelli-
gence project success factors,” in 2021 16th Conference on Computer
Science and Intelligence Systems (FedCSIS), (online), 2021.
[5] S. Garfinkel, J. Matthews, S. S. Shapiro, and J. M. Smith, “Toward
algorithmic transparency and accountability,” Communications of the
ACM, vol. 60, no. 9, p. 5, 2017.
TABLE IV
SUCCESS FACTOR AND TREND COMPARISON
Success Factors [19] Emerging Trend
1 Project mission Business models
Legal protections
2 Top management support -
3 Project schedule/plan Ethics framework
4 Client consultation Impact assessment
5 Personnel -
6 Technical tasks Design patterns
Trustworthy data, models
7 Client acceptance Stakeholder acceptance
8 Monitoring/feedback -
9 Communication -
10 Troubleshooting -
External factors Environmental factors
[6] N. Helberger, T. Araujo, and C. H. de Vreese, “Who is the fairest of them
all? public attitudes and expectations regarding automated decision-
making,” Computer Law & Security Review, vol. 39, p. 116, 2020.
[7] E. Bonsón, D. Lavorato, R. Lamboglia, and D. Mancini, “Artificial
intelligence activities and ethical approaches in leading listed companies
in the european union,” International Journal of Accounting Information
Systems, vol. 43, p. 100535, 2021.
[8] P. Chatzoglou, D. Chatzoudes, L. Fragidis, and S. Symeonidis, “Exam-
ining the critical success factors for erp implementation: An explanatory
study conducted in smes,” in Information Technology for Management:
New Ideas and Real Solutions. ISM 2016, AITM 2016. Lecture Notes
in Business Information Processing (E. Ziemba, ed.), vol. 277 of
Information Technology for Management: New Ideas and Real Solutions,
(Cham), pp. 179–201, Springer International Publishing, 2017.
[9] C. Leyh, “Critical success factors for erp projects in small and medium-
sized enterprises - the perspective of selected german smes,” in Pro-
ceedings of the 2014 Federated Conference on Computer Science and
Information Systems (M. Ganzha, L. Maciaszek, and M. Paprzycki, eds.),
vol. 2, pp. 1181–1190, ACSIS, 2014.
[10] C. Leyh, A. Gebhardt, and P. Berton, “Implementing erp systems in
higher education institutes critical success factors revisited,” in Pro-
ceedings of the 2017 Federated Conference on Computer Science and
Information Systems (M. Ganzha, L. Maciaszek, and M. Paprzycki, eds.),
pp. 913–917, ACSIS, 2017.
[11] C. Leyh, K. Köppel, S. Neuschl, and M. Pentrack, “Critical success fac-
tors for digitalization projects,” in Proceedings of the 16th Conference on
Computer Science and Intelligence Systems (M. Ganzha, L. Maciaszek,
M. Paprzycki, and D. Ślęzak, eds.), vol. 25, pp. 427–436, ACSIS, 2021.
[12] G. J. Miller, “A conceptual framework for interdisciplinary decision
support project success,” in TEMSCON 2019: Proceedings of the 2019
IEEE Technology & Engineering Management Society Conference, p. 18,
IEEE, 2019.
[13] G. J. Miller, “Quantitative comparison of big data analytics and business
intelligence project success factors,” in Information Technology for
Management: Emerging Research and Applications. AITM 2018, ISM
2018. Lecture Notes in Business Information Processing (E. Ziemba,
ed.), vol. 346, (Cham), pp. 53–72, Springer International Publishing, 2019.
[14] S. Petter and E. R. McLean, “A meta-analytic assessment of the delone
and mclean is success model: An examination of is success at the
individual level," Information & Management, vol. 46, no. 3, pp. 159–166, 2009.
[15] P. Ralph and P. Kelly, “The dimensions of software engineering suc-
cess,” in Proceedings of the 36th International Conference on Software
Engineering, pp. 24–35, ACM, 2014.
[16] M. Umar Bashir, S. Sharma, A. K. Kar, and G. Manmohan Prasad, “Crit-
ical success factors for integrating artificial intelligence and robotics,”
Digital Policy, Regulation and Governance, vol. 22, no. 4, pp. 307–331, 2020.
[17] R. Wodarski and A. Poniszewska-Marada, “Measuring dimensions of
software engineering projects’ success in an academic context,” in
Proceedings of the 2017 Federated Conference on Computer Science
and Information Systems (M. Ganzha, L. Maciaszek, and M. Paprzycki,
eds.), vol. 11 of ACSIS, pp. 1207–1210, ACSIS, 2017.
[18] K. Davis, “An empirical investigation into different stakeholder groups
perception of project success,” International Journal of Project Man-
agement, vol. 35, no. 4, pp. 604–617, 2017.
[19] J. K. Pinto and D. P. Slevin, “Critical success factors across the project
life cycle," Project Management Journal, vol. 19, no. 3, pp. 67–75, 1988.
[20] A. J. Shenhar, D. Dvir, O. Levy, and A. C. Maltz, “Project success:
a multidimensional strategic concept,” Long range planning, vol. 34,
no. 6, pp. 699–725, 2001.
[21] D. Magaña and J. C. Fernández Rodríguez, “Artificial intelligence
applied to project success: A literature review,” International Journal
of Artificial Intelligence and Interactive Multimedia, vol. 3, pp. 77–84,
2015.
[22] A. Jobin, M. Ienca, and E. Vayena, “The global landscape of ai ethics
guidelines," Nature Machine Intelligence, vol. 1, no. 9, pp. 389–399, 2019.
[23] B. Mittelstadt, “Principles alone cannot guarantee ethical ai,” Nature
Machine Intelligence, vol. 1, no. 11, pp. 501–507, 2019.
[24] M. Ryan and B. C. Stahl, “Artificial intelligence ethics guidelines for de-
velopers and users: clarifying their content and normative implications,”
Journal of Information, Communication and Ethics in Society, vol. 19,
no. 1, pp. 61–86, 2021.
[25] Y. Zhang, M. Wu, G. Y. Tian, G. Zhang, and J. Lu, “Ethics and privacy of
artificial intelligence: Understandings from bibliometrics,” Knowledge-
Based Systems, vol. 222, p. 106994, 2021.
[26] M. Kuc-Czarnecka and M. Olczyk, “How ethics combine with big data:
a bibliometric analysis,” Humanities & Social Sciences Communications,
vol. 7, no. 1, 2020.
[27] A. Di Vaio, R. Palladino, R. Hassan, and O. Escobar, “Artificial
intelligence and business models in the sustainable development goals
perspective: A systematic literature review,” Journal of Business Re-
search, vol. 121, pp. 283–314, 2020.
[28] M. M. Rantanen, S. Hyrynsalmi, and S. M. Hyrynsalmi, “Towards
ethical data ecosystems: A literature study,” in 2019 IEEE International
Conference on Engineering, Technology and Innovation (ICE/ITMC),
17-19 June 2019, pp. 1–9, 2019.
[29] Z. Miao, “Investigation on human rights ethics in artificial intelligence
researches with library literature analysis method,” The Electronic Li-
brary, vol. 37, no. 5, pp. 914–926, 2018.
[30] M. Wieringa, “What to account for when accounting for algorithms:
a systematic literature review on algorithmic accountability,” in FAT*
2020: Proceedings of the 2020 Conference on Fairness, Accountability,
and Transparency, pp. 1–18, ACM, 2020.
[31] J. P. Woolley, “Trust and justice in big data analytics: Bringing the
philosophical literature on trust to bear on the ethics of consent,”
Philosophy & Technology, vol. 32, no. 1, pp. 111–134, 2019.
[32] K. Murphy, R. Erica Di, R. Upshur, D. J. Willison, N. Malhotra, J. C.
Cai, N. Malhotra, V. Lui, and J. Gibson, “Artificial intelligence for good
health: a scoping review of the ethics literature,” BMC Medical Ethics,
vol. 22, pp. 1–17, 2021.
[33] Q. Wang, “A bibliometric model for identifying emerging research top-
ics,” Journal of the Association for Information Science and Technology,
vol. 69, no. 2, pp. 290–304, 2018.
[34] T. R. Hannigan, R. F. J. Haans, K. Vakili, H. Tchalian, V. L. Glaser,
M. S. Wang, S. Kaplan, and P. D. Jennings, “Topic modeling in man-
agement research: Rendering new theory from textual data,” Academy
of Management Annals, vol. 13, no. 2, pp. 586–632, 2019.
[35] M. E. Roberts, B. M. Stewart, D. Tingley, and E. M. Airoldi, “The
structural topic model and applied social science,” in Advances in neural
information processing systems workshop on topic models: computation,
application, and evaluation, vol. 4, pp. 1–20, Harrahs and Harveys, Lake
Tahoe, 2013.
[36] M. Roberts, B. Stewart, D. Tingley, and K. Benoit, “stm: Estimation of
the structural topic model,” 2014.
[37] M. Roberts, B. Stewart, and D. Tingley, “stm : An r package for
structural topic models,” Journal of Statistical Software, vol. 91, no. 2,
2019.
[38] D. Mimno, H. M. Wallach, E. Talley, M. Leenders, and A. McCallum,
“Optimizing semantic coherence in topic models,” in Proceedings of
the Conference on Empirical Methods in Natural Language Processing,
(Edinburgh, United Kingdom), pp. 262–272, 2011.
[39] D. Moher, A. Liberati, J. Tetzlaff, and D. G. Altman, “Preferred reporting
items for systematic reviews and meta-analyses: The prisma statement,”
International Journal of Surgery, vol. 8, no. 5, pp. 336–341, 2010.
[40] Y. Liu, M. Feng, and C. MacDonald, "A big-data approach to understanding the thematic landscape of the field of business ethics, 1982–2016: JBE," Journal of Business Ethics, vol. 160, no. 1, pp. 127–150, 2019.