Literature Review of Ethical Concerns in the Use of Disruptive Technologies in
Government 3.0
Alexander Ronzhyn
University of Koblenz-Landau,
Universitätsstr. 1, 56070 Koblenz, Germany
Nationales E-Government Kompetenzzentrum e.V.,
Schiffbauerdamm 40 10117 Berlin, Germany
Maria A. Wimmer
University of Koblenz-Landau,
Universitätsstr. 1, 56070 Koblenz, Germany
Nationales E-Government Kompetenzzentrum e.V.,
Schiffbauerdamm 40 10117 Berlin, Germany
Abstract—‘Government 3.0’ as the new paradigm brings new disruptive technologies into the digitization of the public sector. The massive use of Artificial Intelligence, Machine Learning, Big Data Analytics, the Internet of Things and other technologies in public service provisioning, which have the potential to significantly influence the lives of a large number of citizens, demands a thorough investigation of the ethical concerns. Through a literature review, this paper investigates the ethical issues associated with the implementation of disruptive technologies in the public sector. In the first part of the paper, ten categories of ethical concerns in e-government are identified. Subsequently, these ten categories guide a more detailed review of 74 articles dealing with specific ethical concerns in relation to the implementation of Artificial Intelligence and Big Data in e-government. The literature review revealed important similarities and differences in the ethical issues relating to the two technologies.

Keywords—ethics, government 3.0, e-government, disruptive technologies.
I. INTRODUCTION

The discussion of ethics should be an integral part of e-government research, in particular when new disruptive technologies are to be deployed. Often, however, ethical considerations are relegated to the “Discussion” or “Future research” sections of papers. This paper therefore studies existing literature on ethics in e-government. Furthermore, ethical implications of the introduction of new disruptive technologies in e-government are identified.
Ethics has been defined by Aristotle as “the art of living well” (cited in [1]) and has been one of the most discussed philosophical concepts ever since [2]. Treviño et al. define ethical behavior as doing the right thing, showing concern for people and treating people right, being open and communicative, and demonstrating morality in one’s personal life [3, pp. 131–132]. Ethics in government refers to ethical behavior and to the approach of organizing the processes and rules of governance in a way that shows concern for citizens, and is transparent and accountable (cf. good governance principles [4]).
The discussion of ethics in e-government lies at the intersection of ethics in government and Information and Communication Technologies (ICT) ethics. In his 1986 paper, Mason identified four major ethical issues in ICT: privacy, accuracy, property and accessibility [5]. More than thirty years later, these issues are even more important and contentious than at the dawn of the Internet era, for several reasons (particularly in regard to e-government): firstly, the relationship between the government and a citizen is an unequal one: the citizen is dependent and vulnerable [6]; secondly, ICTs have an effect on public values, and their transformative potential should also be viewed in this dimension [7][8]; thirdly, the landscape of the public sphere differs from the private sphere, as the ultimate aims of the organizations involved are very different [9].
This paper studies the ethical implementation of e-government and the ethical introduction and use of ICTs in the public sphere; we do not discuss questions of ethical decision-making by individual officials in government. The research is part of the Erasmus+ Gov 3.0 project, which aims to establish Government 3.0 as a research domain. The project team defines Government 3.0 as follows:
Government 3.0 refers to the use of new disruptive ICTs
(such as blockchain, big data and artificial intelligence
technologies), in combination with established ICTs (such
as distributed technologies for data storage and service
delivery) and taking advantage of the wisdom of crowd
(crowd/citizen-sourcing and value co-creation), towards
data-driven and evidence-based decision and policy
making. [10, p.2]
The Gov 3.0 project identifies and describes new technologies, trends and concepts associated with the Government 3.0 paradigm. Some of these technologies are termed “disruptive”, as they are likely to have a significant impact on how e-government will be shaped in the future. During the project, the authors conducted several workshops discussing the Government 3.0 concept and the use of disruptive technologies in the public sphere. Ethical issues were among the most discussed topics in these workshops. Yet despite ethics being one of the biggest concerns of academics in the implementation of new technologies, no systematic review of literature on ethics in e-government has been found.
In this paper, we therefore investigate ethics in the implementation of the most significant disruptive technologies, namely Artificial Intelligence (AI) and Big Data.
The structure of the paper is as follows: Section II presents the research methodology of the paper and outlines the research questions; Section III presents the results of the literature review of ethical considerations in e-government, identifying the main ethical themes in the research; Section IV presents the results of the literature review of ethical issues concerning AI and Big Data use in Government 3.0; Section V discusses the results of the literature review, concludes with an outlook on future research on ethics in e-government, and reflects on the limitations of the paper.

Copyright (c) IARIA, 2019. ISBN: 978-1-61208-685-9
ICDS 2019 : The Thirteenth International Conference on Digital Society and eGovernments
II. RESEARCH METHODOLOGY

The aim of the paper is to scope the understanding of ethics in e-government and to spot the needs for ethical considerations in Government 3.0, specifically with regard to new disruptive technologies and technological trends. The paper is descriptive and based on a systematic literature review.
Three research questions guide this research:
1. What are the main ethical considerations within the
e-government domain?
2. What ethical issues can be identified concerning the
implementations of AI and Big Data within e-government?
3. What are the research needs concerning ethical
issues of disruptive technologies in e-government?
Following Kitchenham and Charters [11], the articles were collected from four databases: SpringerLink, IEEE Xplore, ACM and DGRL (V. 14, only for the first stage). The search was carried out in autumn 2018. It was restricted to the title and abstract of the papers and used the search string: “ethics AND (‘digital government’ OR ‘e-government’)”.
This allowed identifying the main ethical considerations and themes in e-government, presented in Section III. Reviewing the results of the searches ensured that the chosen papers focus on ethical issues: papers that did not include ethical issues as a main or at least a secondary topic, were not published in a peer-reviewed journal or conference proceedings, or were not accessible in full text were excluded (exclusion criteria).
For the second stage, literature on ethical issues of the specific technologies was searched and reviewed. The search strings “AI | Artificial Intelligence | Big Data AND (‘ethical issues’ OR ‘ethics’)” were used, initially resulting in 645 references. After the exclusion criteria were applied, 74 papers remained (27 AI and 47 Big Data papers). The first exclusion was made after examining the metadata of the articles, while the second was based on full-text scans of the articles. From the remaining papers, we extracted ethical issues applicable to the use of these technologies in e-government.
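The two-stage exclusion process described above can be sketched as a simple filtering pipeline. The record fields and predicate names below are hypothetical illustrations of the criteria (ethics as a main or secondary topic, peer review, full-text accessibility), not the actual tooling used in the review.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Record:
    """Candidate paper as returned by a database search (hypothetical fields)."""
    title: str
    abstract: str
    peer_reviewed: bool
    full_text: Optional[str]  # None if the full text is not accessible

def passes_metadata_screen(r: Record) -> bool:
    # Stage 1: ethics must appear in the title/abstract metadata,
    # and the paper must be peer-reviewed.
    text = (r.title + " " + r.abstract).lower()
    return "ethic" in text and r.peer_reviewed

def passes_fulltext_screen(r: Record) -> bool:
    # Stage 2: the full text must be accessible and actually discuss ethics.
    return r.full_text is not None and "ethic" in r.full_text.lower()

def apply_exclusion(records: List[Record]) -> List[Record]:
    stage1 = [r for r in records if passes_metadata_screen(r)]
    return [r for r in stage1 if passes_fulltext_screen(r)]
```

In the review itself, each stage was of course a manual judgment; the sketch only makes the two-pass structure (metadata first, full text second) explicit.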
To analyze ethics aspects specifically related to the disruptive technologies, we used the concept-centric approach suggested by Webster and Watson [12]: the list of broad ethical themes resulting from the first review cycle was used to code the presence or absence of each theme in every paper on ethics in AI and Big Data. The results of this literature review are described in Section IV.
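The concept-centric coding can be illustrated with a minimal concept-matrix sketch: papers on the rows, ethical themes on the columns, 1/0 for presence or absence. The theme list follows Section III; the paper identifiers and codings are invented for illustration.

```python
# Ethical themes from the first review cycle (see Section III).
THEMES = ["inclusivity", "privacy", "data use", "accuracy", "transparency",
          "accountability", "ownership", "trust", "values", "cost"]

def code_paper(found_themes):
    """Return a 0/1 coding row for one paper."""
    return {t: int(t in found_themes) for t in THEMES}

def theme_counts(matrix):
    """Count how many papers mention each theme (the totals behind the tables)."""
    return {t: sum(row[t] for row in matrix.values()) for t in THEMES}

# Invented example: two papers coded against the theme list.
matrix = {
    "paper_A": code_paper({"privacy", "accountability"}),
    "paper_B": code_paper({"privacy", "values"}),
}
```

Summing each column of such a matrix yields per-theme frequencies of the kind reported for AI and Big Data in Section IV.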
III. ETHICAL CONSIDERATIONS IN E-GOVERNMENT

The literature review identified 22 articles focusing on ethics in e-government. Table I lists the ten ethical considerations in e-government along with the supporting literature. Subsequently, we summarize the main aspects of these ethical considerations.
TABLE I. ETHICAL CONSIDERATIONS IN E-GOVERNMENT AND ARTICLES REVIEWED

Inclusivity: [6], [7], [9], [13]–[18]
Privacy: [6], [16], [17], [19]–[25]
Data use: [6], [21], [24], [26]
Quality/Accuracy of information: [9], [23], [25]
Transparency: [24]
Accountability: [19], [27]
Information ownership: [20], [23]
Trust: [6], [7], [9], [19], [23], [28]
Alignment of values: [13], [17], [23], [29], [30]
Cost: [6], [16], [25]
Inclusivity refers to the concern about the inability of some groups of citizens to make use of digital government services. It is discussed in the context of the digital divide, either within a society or between countries. The most common factors causing the digital divide are disparities in access to technology, wealth, education or age-related differences [14]. Inclusivity is a significant concern, as in some cases e-government services are replacing the traditional ways to interact with the government, so citizens who are unable to use the new services are put at a significant disadvantage.
Privacy is the concern about the unauthorized or
inappropriate use of individual information by the government
or other actors. Privacy is the most discussed ICT-related
ethical issue, especially after the advent of social media and
large-scale personal data collection [31].
Data use refers to the concern about inappropriate use of collected data. This includes, for example, the aggregation of data from different sources to infer new information or to de-anonymize individual citizens. This is not a new issue [32]; however, with the increase in the amount of data about any particular person, cross-referencing different databases has become a significant concern, threatening citizens’ privacy.
Concerns about the quality and accuracy of information relate to the imperfect digitalization of certain data in the transition to digital services. Data errors or incomplete information in databases may result in additional costs for citizens [25].
Transparency is the concern that certain processes in e-government may become black boxes that individual citizens cannot understand. Lack of transparency may lead to inequality of treatment, when certain decisions are made using invisible decision processes based on data only available to the system [24].
Accountability is related to transparency and concerns the
responsibility of government toward an individual citizen in
case of problems with or misuse of the digital government
system. Accountability is necessary to improve citizen trust in
e-government [33].
Information ownership is about the ability of a digital government system’s user to change or restrict access to their own information. It also concerns the re-use of certain information from e-government systems by third parties [34][35].
Trust is a general consideration of the effect that the automation (and associated de-humanization) of government services may have on an individual citizen. It also encompasses the issues of government control and surveillance [7][31].
Alignment of values refers to a mismatch between the values of the government and the citizens. Sometimes the motivation of the government to introduce digital services (e.g., cutting costs, improving efficiency) may not be aligned with the interests of the citizens, who value accountability and inclusivity of the services [6][16]. This concern is connected to the inclusivity and trust concerns. The discussion of values in this context also touches on differences in attitudes toward the free speech versus security dilemma [17][36] and on differences in values across countries, e.g., the imposition of Western values on developing countries [13].
Cost refers not only to the financial cost of implementing and running digital government services but also to trade-offs for the citizens associated with the implementation of e-government services: ensuring inclusive access to government services may increase the workload for civil servants and thus the cost of public services [6].
IV. ETHICAL ISSUES OF AI AND BIG DATA IN GOVERNMENT 3.0

The second stage of the literature review identifies specific ethical issues related to the new disruptive technologies AI and Big Data in e-government. The issues are categorized along the ten ethical considerations identified in Section III.
A. Artificial Intelligence
The use of AI in government is expected to increase, as is the significance of its effects on issues with a moral component [37][38]. The literature distinguishes between ‘Artificial Intelligence’ (AI) and ‘Artificial General Intelligence’, A(G)I. ‘AI in e-government’ refers to the use of elements of artificial intelligence to facilitate some of the services and processes, while A(G)I relates to autonomous decision-making and AI-supported robots in the society of the future [39]. While the latter has implications for government as well [37], it is not part of our current investigation, as we focus on current implementations of AI in e-government.
TABLE II. ETHICAL CONSIDERATIONS IN AI AND SUPPORTING LITERATURE

Quality/Accuracy of information: [37], [47]
Accountability: [37], [45], [47]–[52]
Information ownership: [37], [40], [47], [54], [55]
Alignment of values: [37], [42], [44]–[48], [53], [55]–[60]
Cost: [41], [42], [48], [61], [62]
Of the literature reviewed, 27 papers (Table II) deal with ethical issues in the application of AI. The most commonly mentioned categories of ethical concerns are values (14), accountability (9), privacy (8), inclusivity (6) and cost (6). In most cases, the issues relate to AI-assisted Big Data use for decision making (either autonomous by AI or AI-assisted). The most common ethical issues in each category are described below.
1) Major issues
Accountability: The concerns of this category relate to automated decision-making by AI systems. Who is responsible or liable for AI making a bad decision (ethically, legally or otherwise)? This is a significant concern in the private sector (especially relating to autonomous vehicles), but also a huge issue in government, where decisions can have implications on a very large scale [52]. Thus, the question of liability should not only be discussed when implementing decision-making systems but also be explicitly addressed in laws [50][51].
Value alignment: While the decisions made by autonomous AI systems should ideally be based on hard data, there is a concern that such decisions might not be objective [48]. What values should be programmed into an AI making complex data-based decisions is an open question [50][59]. For simple decisions, the rules may be straightforward; for others, choosing between two sub-optimal options may amount to a value judgment [55]. Ensuring transparency and providing sufficient discussion of such algorithms may help address these concerns [55][56].
Privacy: Ethical concerns include using AI for surveillance [45] and for profiling [46], and the leakage of personal data (especially in sensitive settings like healthcare) [48][49].
Inclusivity: AI may also increase inequality between those who control AI and other people [44][45]. The effect of AI on society needs to be studied to ensure inclusive realization with respect to human rights [40][42][43].
Cost: AI can be a costly endeavor, especially in regard to indirect costs: the increase in automation and the move towards automated decision-making is forecast to lead to a profound shift in the structure of the labor market [42][48][61]. Brandy argues that changes may affect public services twofold: directly, as some public officials will lose their jobs when services are automated; and indirectly, as the resulting increase in unemployment will put growing pressure on the public sector [62].
2) Minor issues
Transparency: AI systems need to be able to "explain"
why a certain decision has been made [37].
Trust: There is an issue of trust towards autonomous or AI-
assisted decisions [40][55], especially in sensitive settings like
healthcare [54].
B. Big Data
Big Data already plays an important role in many
domains, for example: disaster management [63], healthcare
[64][65], food security [66], law enforcement [67] and smart
cities [68]. In some cases, Big Data is used for automated
decision making, sometimes in conjunction with AI [69].
Ethics and ethical issues emerged as one of the important topics in the Big Data literature review by Lu and Liu [70], appearing in 97 of the collected sources. Other major topics included healthcare applications of Big Data and privacy (which was the fourth most prominent topic related to Big Data overall, trailing only behind technology-related topics).
The review identified 47 papers dealing with ethical issues in Big Data, as shown in Table III. The most named ethical concerns are privacy (40), data ownership (10), data accuracy (9), values (9), data use (6) and inclusivity (6). Descriptions of these ethical concerns along Big Data use in e-government follow below.
TABLE III. ETHICAL CONSIDERATIONS IN BIG DATA AND SUPPORTING LITERATURE

Inclusivity: [67], [69], [71]–[74]
Privacy: [34], [48], [65], [67], [71]–[106]
Data use: [77], [79], [86], [90], [107], [108]
Quality/Accuracy of information: [65], [77], [95]–[97], [99], [100], [102], [109]
Transparency: [74], [81], [110]
Accountability: [48], [74], [75], [110]
Information ownership: [34], [48], [69], [71], [78], [85], [97], [98], [106], [111]
Trust: [95], [105]
Alignment of values: [34], [48], [77], [81], [91], [94], [102], [107], [109]
Cost: [48], [65], [95], [98], [103]
1) Major issues
Privacy: The main concern about Big Data is the ever-increasing amount of personal information collected [34][82], often without the subjects being aware of that collection [90]. Even with de-personalised information, there is a significant concern about the cross-referencing of data between different datasets to identify anonymised individuals [76][80][112]. Given the large amounts of information collected and the improvements in Big Data Analytics and Machine Learning technologies, it is very difficult to guarantee full anonymity of data [78][96]. In the government context, this concern is connected to the worry about the surveillance state, in which the government “knows everything” [88][104]. The benefits of the use of Big Data for security and surveillance need to be balanced against personal freedom and privacy; otherwise, it may lead to a significant erosion of trust towards government [100][101].
Data ownership: Organisations involved in data collection (e.g., social media companies [113]) may accumulate very large amounts of personal data. While the data may include identifiable and potentially sensitive information, it does not actually belong to the person: often individuals do not even know what kind of information is collected about them [90][106]. Ethical concerns regard organisations making use of personal data for their own benefit (or even for the benefit of society) without explicit consent from individuals.
Data accuracy: In e-government contexts, the collected data can be used for decision making or the provision of personalised e-government services. Inaccurate or incomplete data can lead to erroneous or biased decisions [95][100][102]. These issues are more significant in the public sector, as citizens cannot always opt out of a service and the potential harm from incorrect data can be larger [109].
Data use: Data misuse is a concern about the use of citizen data for purposes other than the ones for which explicit consent has been given [86][90][108] or a legal ground exists. However, in a dynamically evolving field such as e-government, it may not be easy to predict every possible scenario in which the data might be used. A balance should be found for the ethical use of data that still allows creating innovative public services [79][107].
Alignment of values: Similar to the issues discussed for AI, the use of Big Data may lead to a conflict between the values of the government and citizens: between the individual and the public good [81][94]. It is also necessary to consider the implications of decisions based on biased Big Data for societal stability [48][102].
2) Minor issues
Inclusivity: There is a certain risk of discrimination based on the dataset used. This can lead to stigmatisation [72], wrong identification in criminal cases [67] and an increasing digital divide [71].
Transparency and accountability: In the public sector, it is important to indicate when and how data is collected and for what purposes it is used [74][92], while the creators of algorithms need to be accountable for their product [48][110].
Trust: Improper management of data may lead to issues with citizen trust towards government. Data management becomes an important concern for agencies dealing with Big Data, requiring skills and effort [95][105].
Costs: There are cost issues related to the storage and processing of Big Data. By definition, Big Data requires significant resources that need to be diverted from elsewhere. When implementing Big Data-based decision-making systems, it is necessary to assess the possible trade-offs [48][65][98].
V. DISCUSSION AND CONCLUSION

Deploying disruptive technologies in public services brings new ethical challenges that need to be addressed by researchers and practitioners of e-government. From the literature research, we extracted ten ethical considerations that should be carefully reflected upon in each project aiming to deploy disruptive technologies in e-government. Ensuring that the implementation of new services properly addresses the inclusivity, privacy, data use, data accuracy, transparency, accountability, ownership, trust, alignment of values and cost concerns will help move towards a more responsible design and implementation of the new Government 3.0 paradigm.
This research provides a description of ethical issues in AI and Big Data along the ten ethical considerations. Ethical concerns in the use of AI relate mostly to the accountability of autonomous decision-makers (who is accountable for an AI making a wrong decision?) and value alignment (what will be the basis for AI decisions?). Privacy and inclusivity are other important issues.
In Big Data, the main concern is privacy: what data should be collected and for what purposes? Information ownership and consent are important ethical issues as well. There is a significant worry about the improper use of Big Data for surveillance, and there is an apparent need for ethical discussion regarding the limits of data collection and balancing the benefits of Big Data with its drawbacks.
Finally, the need for legal frameworks and regulation of the use of disruptive technologies arises in both AI and Big Data ethical discussions [37][51][85][95].
Kidder [114] argues that ethical responsibilities increase with the increase of potential harm resulting from an unethical decision. Both AI and Big Data offer significant benefits for the public sector while at the same time having considerable potential for misuse. With the widespread use of ICTs by governments and the digital transformation of governance processes, the main ethical concerns shift from individual decision-making by government officials to the ethical implementation and management of ICTs and tools in the public sector.
Therefore, further research is needed to provide adequate frameworks for the introduction of disruptive technologies in e-government, which help answer the ethical considerations described in Sections III and IV and guide researchers and practitioners in the assessment of ethics. Further empirical and theoretical research is necessary to address the issues arising from the implementation of disruptive technologies and to provide a basis for drafting legal frameworks regulating these technologies.
Future research should also assess ethics in the implementation of other disruptive technologies identified as part of the Government 3.0 paradigm [115] (e.g., Augmented and Virtual Reality (see [116] for a discussion of ethical challenges), the Internet of Things, Blockchain, etc.).
Limitations of the research conducted in this paper may be imposed by the methodology chosen: some relevant papers dealing with ethical issues may have been excluded if they had no “ethics/ethical issues” in their title or abstract. Likewise, some papers might not be in any of the databases used for the literature review but still contain valuable information. A more extensive literature review is needed to overcome these limitations. Despite them, we are confident that the literature review presented here is representative enough to provide valuable insights into the ethical issues in e-government and a useful outline of future research directions.
This research is developed in the context of the Gov 3.0
Project. It was funded by the Erasmus+ Knowledge Alliance,
Project Reference No. 588306-EPP-1-2017-1-EL-EPPKA2-
[1] E. M. Hartman, “Socratic questions and aristotelian answers: A virtue-based approach to business ethics,” J. Bus. Ethics, vol. 78, no. 3, pp. 313–328, 2008.
[2] D. Guttmann, Ethics in social work: A context of caring. Routledge, 2013.
[3] L. K. Treviño, L. P. Hartman, and M. Brown, “Moral Person and Moral Manager: How Executives Develop a Reputation for Ethical Leadership,” Calif. Manage. Rev., vol. 42, no. 4, pp. 128–142, 2000.
[4] OECD, “The OECD principles of corporate governance,” Contaduria y Adm., no. 216, pp. 183–194, 2004.
[5] R. O. Mason, “Four Ethical Issues of the Information Age,” MIS Q., vol. 10, no. 1, pp. 5–12, 1986.
[6] J. B. Berger, “Coercive E-Government Policy Imposing Harm: The Need for a Responsible E-Government Ethics,” Proc. 49th Hawaii Int. Conf. Syst. Sci., pp. 2830–2839, 2016.
[7] F. Bannister and R. Connolly, “ICT, public values and transformative government: A framework and programme for research,” Gov. Inf. Q., vol. 31, no. 1, pp. 119–128, 2014.
[8] L. Royakkers, J. Timmer, L. Kool, and R. van Est, “Societal and ethical issues of digitization,” Ethics Inf. Technol., vol. 20, no. 2, pp. 127–142, 2018.
[9] H. Mullen and D. S. Horner, “Ethical Problems for e-Government: An Evaluative Framework,” Electron. J. e-Government, vol. 2, no. 3, pp. 187–196, 2004.
[10] G. V. Pereira et al., “Scientific foundations training and entrepreneurship activities in the domain of ICT-enabled governance,” in Proc. 19th Annu. Int. Conf. dg.o ’18, pp. 1–2.
[11] B. Kitchenham and S. Charters, “Guidelines for performing Systematic Literature reviews in Software Engineering Version 2.3,” EBSE, UK, 2007.
[12] J. Webster and R. T. Watson, “Analyzing the past to prepare for the future,” MIS Q., pp. xiii–xxiii, 2002.
[13] S. Basu, “Digital Divide, Digital Ethics, and E-government,” in ICTs in Developing Countries, London: Palgrave Macmillan UK, 2016, pp. 161–169.
[14] E. Mordini et al., “Senior citizens and the ethics of e-inclusion,” Ethics Inf. Technol., vol. 11(3), pp. 203–220, 2009.
[15] E. Easton-Calabria and W. L. Allen, “Developing ethical approaches to data and civil society: from availability to accessibility,” Innovation, vol. 28(1), pp. 52–62, 2015.
[16] B. N. Fairweather and S. Rogerson, “Towards morally defensible e-government interactions with citizens,” J. Inf., Commun. Ethics Soc., vol. 4(4), pp. 173–180, 2006.
[17] N. Kapucu, “Ethics of Digital Government,” in Encyclopedia of Digital Government, A.-V. Anttiroiko and M. Malkia, Eds. IGI Global, 2007, pp. 745–748.
[18] J. W. Weiss, G. J. Gulati, D. J. Yates, and L. E. Yates, “Mobile broadband affordability and the global digital divide - An information ethics perspective,” Proc. HICSS, vol. 2015-March, pp. 2177–2186, 2015.
[19] J. Lodge, “The dark side of the moon: Accountability, ethics and new biometrics,” in Second generation biometrics: The ethical, legal and social context, Springer, 2012, pp. 305–328.
[20] R. Heersmink, J. van den Hoven, N. J. van Eck, and J. van den Berg, “Bibliometric mapping of computer and information ethics,” Ethics Inf. Technol., vol. 13(3), pp. 241–249, 2011.
[21] R. Domanski, E. Estevez, E. Styrin, M. Alfano, and T. M. Harrison, “Toward an ethics of digital government,” Proc. 19th Annu. Int. Conf. dg.o ’18, pp. 1–4, 2018.
[22] I. Büschel, R. Mehdi, A. Cammilleri, Y. Marzouki, and B. Elger, “Protecting Human Health and Security in Digital Europe: How to Deal with the ‘Privacy Paradox’?,” Sci. Eng. Ethics, vol. 20(3), pp. 639–658, 2014.
[23] G. Kaisara and S. Pather, “Relevance of Ethics in e-Government: An Analysis of Developments in the WWW era,” in Proceedings of the 6th International Conference on E-Government, 2010, pp. 45–53.
[24] P. Henman, “E-Government, Targeting and Data Profiling,” J. E-Government, vol. 2, no. 1, pp. 79–98, Dec. 2005.
[25] R. E. Anderson, “Ethics and Digital Government,” in Digital Government: Principles and Best Practices, 2004.
[26] D. Goroff, J. Polonetsky, and O. Tene, “Privacy Protective Research: Facilitating Ethically Responsible Access to Administrative Data,” Ann. Am. Acad. Pol. Soc. Sci., vol. 675(1), pp. 46–66, 2018.
[27] P. Henman, “E-Government, Targeting and Data Profiling,” J. E-Government, vol. 2(1), pp. 79–98, Dec. 2005.
[28] G. Sharma, X. Bao, and L. Peng, “Public Participation and Ethical Issues on E-governance: A Study Perspective in Nepal,” Electron. J. e-Government, vol. 12(1), pp. 82–96, 2014.
[29] E. Wihlborg, H. Larsson, and K. Hedström, “‘The computer says no!’ - A case study on automated decision-making in public authorities,” Proc. HICSS, vol. 2016-March, pp. 2903–2912, 2016.
[30] A. V. Roman, “Framing the Questions of E-Government Ethics: An Organizational Perspective,” Am. Rev. Public Adm., vol. 45(2), pp. 216–236, 2015.
[31] F. Belanger and J. S. Hiller, “A framework for e-government: Privacy implications,” Bus. Process Manag. J., vol. 12(1 SPEC. ISS.), pp. 48–60, 2006.
[32] R. O. Mason, “Four Ethical Issues of the Information Age,” MIS Q., vol. 10(1), pp. 5–12, 1986.
[33] E. W. Welch, C. C. Hinnant, and M. J. Moon, “Linking citizen satisfaction with e-government and trust in government,” Journal of Public Administration Research and Theory, pp. 371–391, 2005.
[34] L. Voronova and N. Kazantsev, “The Ethics of Big Data: Analytical Survey,” in 2015 IEEE 17th Conf. on Business Informatics, 2015, pp. 57–63.
[35] R. Heersmink, J. van den Hoven, N. J. van Eck, and J. van den Berg, “Bibliometric mapping of computer and information ethics,” Ethics Inf. Technol., pp. 241–249, 2011.
[36] G. A. Sandy, “Mandatory ISP filtering for a clean feed to australian internet subscribers,” 15th Am. Conf. Inf. Syst. 2009, AMCIS 2009, vol. 10, pp. 6951–6958, 2009.
[37] A. Dameski, “A Comprehensive Ethical Framework for AI
Entities: Foundations,” in Artificial General Intelligence, M.
Iklé, A. Franz, R. Rzepka, and B. Goertzel, Eds. 2018, pp. 42
[38] A. Smith and J. Anderson, “Digital Life in 2025: AI, Robotics
and the Future of Jobs,” Pew Res. Cent., 6. 2014.
[39] M. Anderson and S. L. Anderson, “Machine ethics,” IEEE
Intelligent Systems, 2006.
[40] M. Brundage, “Artificial Intelligence and Responsible
Innovation,” in Fundamental Issues of Artificial Intelligence,
Cham: Springer International Publishing, 2016, pp. 543554.
[41] F. Kile, “Artificial intelligence and society: a furtive
transformation,” AI Soc., vol. 28(1), pp. 107115, 2013.
[42] W. Zhao, “Research on Social Responsibility of Artificial
Intelligence Based on ISO 26000,” 2019, pp. 130–137.
[43] P. Boddington, “Some Characteristic Pitfalls in Considering
the Ethics of AI, and what to do about them,” 2017, pp. 85–97.
[44] M. Drev, “Work Task Automation and Artificial Intelligence:
Implications for the Role of Government,” in At His
Crossroad, Cham: Springer, 2019, pp. 3541.
[45] T. Meek, H. Barham, N. Beltaif, A. Kaadoor, and T. Akhter,
“Managing the ethical and risk implications of rapid advances
in artificial intelligence: A literature review,” in 2016 PICMET,
2016, pp. 682693.
[46] K. Y. Lee, H. Y. Kwon, and J. I. Lim, “Legal Consideration on
the Use of Artificial Intelligence Technology and Self-
regulation in Financial Sector: Focused on Robo-Advisors,”
2018, pp. 323335.
[47] K. Shahriari and M. Shahriari, “IEEE standard review —
Ethically aligned design: A vision for prioritizing human
wellbeing with artificial intelligence and autonomous
systems,” in 2017 IEEE Canada IHTC, 2017, pp. 197201.
[48] B. C. Stahl and D. Wright, “Ethics and Privacy in AI and Big
Data: Implementing Responsible Research and Innovation,”
IEEE Secur. Priv., vol. 16(3), pp. 2633, 2018.
[49] K. Miller, “Can We Program Ethics into AI? [Reflections],”
IEEE Technol. Soc. Mag., vol. 36(2), pp. 2930, 2017.
[50] A. Etzioni and O. Etzioni, “AI assisted ethics,” Ethics Inf.
Technol., vol. 18(2), pp. 149156, 2016.
[51] R. Rault and D. Trentesaux, “Artificial Intelligence,
Autonomous Systems and Robotics: Legal Innovations,” 2018,
pp. 19.
[52] Y.-J. Lee and J.-Y. Park, “Identification of future signal based
on the quantitative and qualitative text mining: a case study on
ethical issues in artificial intelligence,” Qual. Quant., vol.
52(2), pp. 653667, 2018.
[53] D. Helbing et al., “Will Democracy Survive Big Data and
Artificial Intelligence?,” in Towards Digital Enlightenment,
Cham: Springer International Publishing, 2019, pp. 7398.
[54] J. Danaher, “Toward an Ethics of AI Assistants: an Initial
Framework,” Philos. Technol., vol. 31(4_, pp. 629653, 2018.
[55] V. Vakkuri and P. Abrahamsson, “The Key Concepts of Ethics
of Artificial Intelligence,” in 2018 IEEE ICE/ITMC, 2018, pp.
[56] E. Yudkowsky, “Complex Value Systems in Friendly AI,”
2011, pp. 388393.
[57] J. M. Kizza, “New Frontiers for Computer Ethics: Artificial
Intelligence, Virtualization, and Cyberspace,” 2016, pp. 207–
[58] S. D. Baum, “Social choice ethics in artificial intelligence,” AI
Soc., pp. 1-12, 2017.
[59] A. Giubilini and J. Savulescu, “The Artificial Moral Advisor.
The ‘Ideal Observer’ Meets Artificial Intelligence,” Philos.
Technol., vol. 31(2), pp. 169188, 2018.
[60] S. A. Sora, “Artificial intelligence in a value based
management system,” Int. J. Value-Based Manag., vol. 1(1),
pp. 2733, Feb. 1988.
[61] G. Su, “Unemployment in the AI Age,” AI Matters, vol. 3(4),
pp. 3543, Feb. 2018.
[62] J. Bandy, “Automation moderation,” AI Matters, vol. 3(4), pp.
5962, Feb. 2018.
[63] S. Akter and S. F. Wamba, “Big data and disaster management:
a systematic review and agenda for future research,” Annals of
Operations Research, pp.121, 2017.
[64] L. Yin, D. Fassi, H. Cheng, H. Han, and S. He, “Health Co-
Creation in Social Innovation: Design Service for Health-
Empowered Society in China,” Des. J., vol. 20(sup1), pp.
S2293S2303, 2017.
[65] A. Stylianou and M. A. Talias, “Big data in healthcare: a
discussion on the big challenges,” Health Technol., vol. 7(1),
pp. 97107, Mar. 2017.
[66] B. Evans, “Using Big Data to Achieve Food Security,” in Big
Data Challenges, London: Palgrave, 2016, pp. 127135.
[67] M. Phillips, E. S. Dove, and B. M. Knoppers, “Criminal
Prohibition of Wrongful Re‑identification: Legal Solution or
Minefield for Big Data?,” J. Bioeth. Inq., vol. 14(4), pp. 527
539, Dec. 2017.
[68] M. Batty, “Big data, smart cities and city planning,” Dialogues
Hum. Geogr., vol. 3(13), pp. 274-279, 2013.
90Copyright (c) IARIA, 2019. ISBN: 978-1-61208-685-9
ICDS 2019 : The Thirteenth International Conference on Digital Society and eGovernments
[69] S. Monteith and T. Glenn, “Automated Decision-Making and
Big Data: Concerns for People With Mental Illness,” Curr.
Psychiatry Rep., vol. 18(112), p. 1-12, Dec. 2016.
[70] L. Y. Y. Lu and J. S. Liu, “The Major Research Themes of Big
Data Literature: From 2001 to 2016,” in 2016 IEEE CIT, 2016,
pp. 586590.
[71] C. F. Breidbach, M. Davern, G. Shanks, and I. Asadi-Someh,
“On the Ethical Implications of Big Data in Service Systems,”
2019, pp. 661674.
[72] H. Mamiya, A. Shaban-Nejad, and D. L. Buckeridge, “Online
Public Health Intelligence: Ethical Considerations at the Big
Data Era,” 2017, pp. 129–148.
[73] L. Kammourieh et al., “Group Privacy in the Age of Big Data,”
in Group Privacy, Cham: Springer, 2017, pp. 3766.
[74] M. Cuquet, A. Fensel, and L. Bigagli, “A European research
roadmap for optimizing societal impact of big data on
environment and energy efficiency,” in 2017 GIoTS, 2017, pp.
[75] A. Zwitter, “The Network Effect on Ethics in the Big Data
Age,” in Big Data Challenges, London: Palgrave, 2016, pp.
[76] P. A. Chow-White, M. MacAulay, A. Charters, and P. Chow,
“From the bench to the bedside in the big data age: ethics and
practices of consent and privacy for clinical genomics and
personalized medicine,” Ethics Inf. Technol., vol. 17(3), pp.
189200, Sep. 2015.
[77] J. S. Saltz, “A Framework to Explore Ethical Issues When
Using Big Data Analytics on the Future Networked Internet of
Things,” in FNSS 2018, 2018, pp. 4960.
[78] A. Richterich, “Using Transactional Big Data for
Epidemiological Surveillance: Google Flu Trends and Ethical
Implications of ‘Infodemiology,’” in The Ethics of Biomedical
Big Data, Cham: Springer, 2016, pp. 4172.
[79] G. O. Schaefer, M. K. Labude, and H. U. Nasir, “Big Data:
Ethical Considerations,” in The Palgrave Handbook of
Philosophy and Public Policy, Springer, 2018, pp. 593607.
[80] J. N. Gathegi, “Clouding Big Data: Information Privacy
Considerations,” in IMCW2018, 2014, pp. 6469.
[81] C. Garattini, J. Raffle, D. N. Aisyah, F. Sartain, and Z.
Kozlakidis, “Big Data Analytics, Infectious Diseases and
Associated Ethical Impacts,” Philos. Technol., Aug., pp. 117,
[82] J.-L. Monino, “Data Value, Big Data Analytics, and Decision-
Making,” J. Knowl. Econ., pp.112, Aug. 2016.
[83] D. Nunan and M. Di Domenico, “Big Data: A Normal Accident
Waiting to Happen?,” J. Bus. Ethics, vol. 145(3), pp. 481491,
Oct. 2017.
[84] J. Collmann, K. T. FitzGerald, S. Wu, J. Kupersmith, and S. A.
Matei, “Data Management Plans, Institutional Review Boards,
and the Ethical Management of Big Data About Human
Subjects,” in Ethical Reasoning in Big Data, Cham: Springer,
2016, pp. 141184.
[85] K. Pormeister, “The GDPR and Big Data: Leading the Way for
Big Genetic Data?,” in Annual Privacy Forum, 2017, pp. 3
[86] N. Dorasamy and N. Pomazalová, “Social Impact and Social
Media Analysis Relating to Big Data,” in Data Science and Big
Data Computing, Cham: Springer, 2016, pp. 293313.
[87] M. Steinmann et al., “Embedding Privacy and Ethical Values
in Big Data Technology,” in Transparency in Social Media,
Cham: Springer, 2015, pp. 277301.
[88] H. Hijmans, “Internet and Loss of Control in an Era of Big Data
and Mass Surveillance,” 2016, pp. 77–123.
[89] A. Narayanan, J. Huey, and E. W. Felten, A Precautionary
Approach to Big Data Privacy,” 2016, pp. 357–385.
[90] B. van der Sloot, “Is the Human Rights Framework Still Fit for
the Big Data Era? A Discussion of the ECtHR’s Case Law on
Privacy Violations Arising from Surveillance Activities,”
2016, pp. 411436.
[91] B. Mittelstadt, “From Individual to Group Privacy in Big Data
Analytics,” Philos. Technol., vol. 30(4), pp. 475494, 2017.
[92] M. Andrejevic, “Surveillance in the Big Data Era,” in PICT,
Dordrecht: Springer, 2014, pp. 5569.
[93] Y. Wang, “Big Opportunities and Big Concerns of Big Data in
Education,” TechTrends, vol. 60(4), pp. 381384, Jul. 2016.
[94] M. Fuller, “Some Practical and Ethical Challenges Posed by
Big Data,” 2016, pp. 119–127.
[95] D. Helbing, “Societal, Economic, Ethical and Legal Challenges
of the Digital Revolution: From Big Data to Deep Learning,
Artificial Intelligence, and Manipulative Technologies,” in
Towards Digital Enlightenment, Springer, 2019, pp. 4772.
[96] X. Zhang and S. Xiang, “Data Quality, Analytics, and Privacy
in Big Data,” 2015, pp. 393–418.
[97] V. Morabito, “Big Data and Analytics for Government
Innovation,” in Big Data and Analytics, 2015, pp. 2345.
[98] L. Taylor and C. Richter, “Big Data and Urban Governance,”
in Geographies of Urban Governance, Cham: Springer, 2015,
pp. 175191.
[99] Aqeel-ur-Rehman, I. U. Khan, and S. ur Rehman, “A Review
on Big Data Security and Privacy in Healthcare Applications,”
in Big Data Management, Cham: Springer, 2017, pp. 7189.
[100]I. Stanier, “Enhancing Intelligence-Led Policing: Law
Enforcement’s Big Data Revolution,” in Big Data Challenges,
London: Palgrave, 2016, pp. 97113.
[101]A. Gerdes, “Big Data—Fighting Organized Crime Threats
While Preserving Privacy,” in Using Open Data to Detect
Organized Crime Threats, Cham: Springer International
Publishing, 2017, pp. 103117.
[102]R. Kitchin, “The real-time city? Big data and smart urbanism,”
GeoJournal, vol. 79(1), pp. 114, Feb. 2014.
[103]I. Olaronke and O. Oluwaseun, “Big data in healthcare:
Prospects, challenges and resolutions,” in 2016 FTC, 2016, pp.
[104]J. Richards, “Needles in Haystacks: Law, Capability, Ethics,
and Proportionality in Big Data Intelligence-Gathering,” in Big
Data Challenges, London: Palgrave, 2016, pp. 7384.
[105]M. Mulqueen, “Sustainable Innovation: Placing Ethics at the
Core of Security in a Big Data Age,” in Big Data Challenges,
London: Palgrave, 2016, pp. 6171.
[106]M. Alemany Oliver and J.-S. Vayre, “Big data and the future
of knowledge production in marketing research: Ethics, digital
traces, and abductive reasoning,” J. Mark. Anal., vol. 3(1), pp.
513, Mar. 2015.
[107]P. Prinsloo and S. Slade, “Big Data, Higher Education and
Learning Analytics: Beyond Justice, Towards an Ethics of
Care,” in Big Data and Learning Analytics in Higher
Education, Cham: Springer, 2017, pp. 109124.
[108]R. Finn and A. Donovan, “Big Data, Drone Data: Privacy and
Ethical Impacts of the Intersection Between Big Data and Civil
Drone Deployments,” 2016, pp. 47–67.
[109]W. J. Radermacher, “Official statistics in the era of big data
opportunities and threats,” Int. J. Data Sci. Anal., vol. 6(3), pp.
225231, Nov. 2018.
[110]M. Whitman, C. Hsiang, and K. Roark, “Potential for
91Copyright (c) IARIA, 2019. ISBN: 978-1-61208-685-9
ICDS 2019 : The Thirteenth International Conference on Digital Society and eGovernments
participatory big data ethics and algorithm design,” in
Proceedings of the 15th PDC ’18, 2018, pp. 16.
[111]M. Sax, “Big data: Finders keepers, losers weepers?,” Ethics
Inf. Technol., vol. 18(1), pp. 2531, Mar. 2016.
[112]V. Narayanan, P. N. Howard, B. Kollanyi, and M. Elswah,
“Russian Involvement and Junk News during Brexit,” 2017.
[113]L. Sax, S. Gilmartin, and A. Bryant, “Assessing response rates
and nonresponse bias in web and paper surveys,” Res. High.
Educ., vol. 44(4), pp. 409432, 2003.
[114]R. M. Kidder, “Ethics: A matter of survival,” Futurist, vol.
26(2), p. 10, 1992.
[115]Z. Lachana, C. Alexopoulos, E. Loukis, and Y. Charalabidis,
“Identifying the Different Generations of Egovernment: an
Analysis Framework,” in The 12th MCIS 2018, 2018, pp. 1
[116]C. Botella, A. Garcia-Palacios, R. M. Baños, and S. Quero,
“Cybertherapy: Advantages, limitations, and ethical issues,”
PsychNology J., 2009.
92Copyright (c) IARIA, 2019. ISBN: 978-1-61208-685-9
ICDS 2019 : The Thirteenth International Conference on Digital Society and eGovernments
... El uso de los chatbots en el gobierno, al igual que todas las herramientas que funcionan a partir de IA, han generado ciertos debates en los ámbitos público y académico en cuanto a las preocupaciones éticas que emergen al implementar dichas herramientas. Esto se debe a que, si bien el uso de ciertas tecnologías -como es el caso de la IA-puede representar una oportunidad para mejorar la gestión de algunos servicios, también puede aumentar la desigualdad en el acceso y el control de la información, acrecentar la vulnerabilidad y privacidad de los datos personales, generar mal uso de los datos, hacer difusa la responsabilidad del gobierno y dificultar la alineación de valores en los servicios públicos (Ronzhyn y Wimmer, 2019). ...
... Es fundamental que el espacio público a cargo de los gobiernos sea regulado por valores normativos que definan ventajas, restricciones y sanciones, así como un sistema de incentivos que motive a los individuos a obrar con sentido constructivo en la vida democrática (Uvalle, 2014). La legalidad, la imparcialidad, la justicia, la igualdad, la libertad, el pluralismo, la responsabilidad, la inclusión, la participación y la transparencia son algunos de los valores de un gobierno democrático que tienen como horizonte la preservación del interés público, generar conductas en favor de la vida colectiva y el sentido de pertenencia a la vida comunitaria, así como asegurar el bienestar general de los ciudadanos y los grupos de interés (Uvalle, 2014;Ronzhyn y Wimmer, 2019). Por lo tanto, la ética de gobierno es el elemento fundamental para garantizar la confianza y la legitimidad del gobierno ante la sociedad, lo cual es vital en cualquier sistema democrático, en donde los ciudadanos esperan que la acción pública sirva a la pluralidad de intereses con equidad y que se administren los recursos de forma correcta (Bautista, 2007). ...
... Esto se debe a que dichas tecnologías permiten generar, recolectar, almacenar, procesar y distribuir importantes cantidades de información, gran parte de la cual está conformada por datos personales de los usuarios que utilizan los servicios públicos; estos datos son considerados sen-sibles, pues contienen información sobre la identidad, las características y las acciones de los ciudadanos (Luna-Reyes et al., 2015). Adicionalmente, el uso de estas tecnologías acentúa la relación desigual entre el gobierno y los ciudadanos, ya que el gobierno tiene el control de la información, mientras que los ciudadanos pueden ser dependientes y vulnerables (Ronzhyn y Wimmer, 2019). Finalmente, las TIC tienen un potencial transformador en la vida y los valores públicos; por lo tanto, pueden ser usadas para modificar las conductas de los ciudadanos (Ronzhyn y Wimmer, 2019). ...
Full-text available
El uso de la inteligencia artificial en los gobiernos ha aumentado en la última década en diversos países, adquiriendo mayor importancia en el contexto de la pandemia por COVID-19. Los chatbots son una de las principales herramientas que funcionan a partir de la inteligencia artificial y que han sido utilizados por los gobiernos para brindar información y servicios a los ciudadanos, así como para el seguimiento, monitoreo y control de la COVID-19. El uso de estas herramientas ha generado debates éticos sobre el uso de datos personales, la privacidad, la transparencia, la rendición de cuentas y el derecho de acceso a la información. El objetivo de este trabajo es identificar y analizar las principales preocupaciones éticas que emergen en torno a los chatbots implementados en el gobierno de México en el contexto de la COVID-19; para ello se analizan los casos de Susana Distancia y Dr. Armando Vaccuno. La metodología consiste en un sondeo abierto de percepción ciudadana. Los resultados muestran que las principales preocupaciones éticas son la transparencia, la rendición de cuentas y la privacidad, las cuales han generado una falta de confianza por parte de la ciudadanía hacia los chatbots, que se ha traducido en un bajo nivel de uso. Para subsanar esto, es necesario eliminar los vacíos regulatorios en torno a la transparencia y la protección de datos implicados en estas nuevas tecnologías.
... Secondly, good governance and sustainability group of smart city governance objectives is mostly impacted by lack of human capacities, inadequate organizational structure and processes and lack of policies and guidelines. These results are supported by (Meijer & Bolivar, 2016;Janowski et al., 2018;Janssen et al., 2020;Ronzhyn et al., 2019;Wimmer et al., 2020) which stress the importance of limiting the need of internal transformation to achieve sustainability of smart city initiatives through increasing human capacities of local government. Thus, lack of human capacities strongly influences smart urban collaboration, citizen centered transformation and decision-making processes and to a lesser extent impact smart administration objectives. ...
... ). As data volume and availability increases, so does the capacity for linkage, correlation, and de-anonymization or re-sensitization of datasets (Lim & Taeihagh, 2019;Lytras & Visvizi, 2018;Ronzhyn & Wimmer, 2019), which may violate individuals' privacy, and fosters distrust of municipal governments and technology providers(Cugurullo, 2020;Pelton et al., 2019). Next, poor data quality could cause misleading administrative decisions that lead to undesirable outcomes(Al Nuaimi et al., 2015;Ubaldi, Ooijen, et al., 2019;Wahyudi et al., 2018). ...
... In any case, moral and social barriers can be distinguished within the selection of AI, and brought about from needs in citizen believe on machine intelligence and the uneasiness on the substitution of representatives by machines (Androutsopoulou, Karacapilidis, Loukis, & Charalabidis, 2019). (Ronzhyn & Wimmer, 2019) conducted a study on the moral issues with troublesome advances, concluding that there's a noteworthy number of moral issues associated to the implementation of troublesome innovations in open administrations. In expansion, (Alexopoulos et al., 2019) prescribe advance inquire about in security and ethical issues within the collection of individual information and the proprietorship of such information by machine learning in government administrations. ...
Full-text available
This study explored how Namibia's public sector management is impacted by information and communication technologies (ICT). Information technology is currently undergoing significant change, beginning with the impact it is having on public sector employees' attendance at training sessions so they can learn new skills like using IT developments, gaining new knowledge, and skills, and utilizing programs in various fields where more productive and profitable results are obtained. Therefore, it is the responsibility of the public sector, civil society, and international actors to work together to develop the policies and programs that will fully realize the potential of digital government for Namibia's public sector at all levels
... Heersmink et al. [110] provided the first bibliometric mapping of RAI-related concepts in AI's parent field of information and communication technology (ICT). Since then, RAI concepts have been mapped for various AI technologies and specific subfields of interest [83,93,[97][98][99][100]102,105,106]. ...
Full-text available
Industry is adopting artificial intelligence (AI) at a rapid pace and a growing number of countries have declared national AI strategies. However, several spectacular AI failures have led to ethical concerns about responsibility in AI development and use, which gave rise to the emerging field of responsible AI (RAI). The field of responsible innovation (RI) has a longer history and evolved toward a framework for the entire research, development, and innovation life cycle. However, this research demonstrates that the uptake of RI by RAI has been slow. RAI has been developing independently, with three times the number of publications than RI. The objective and knowledge contribution of this research was to understand how RAI has been developing independently from RI and contribute to how RI could be leveraged toward the progression of RAI in a causal loop diagram. It is concluded that stakeholder engagement of citizens from diverse cultures across the Global North and South is a policy leverage point for moving the RI adoption by RAI toward global best practice. A role-specific recommendation for policy makers is made to deploy modes of engaging with the Global South with more urgency to avoid the risk of harming vulnerable populations. As an additional methodological contribution, this study employs a novel method, systematic science mapping, which combines systematic literature reviews with science mapping. This new method enabled the discovery of an emerging ‘axis of adoption’ of RI by RAI around the thematic areas of ethics, governance, stakeholder engagement, and sustainability. 828 Scopus articles were mapped for RI and 2489 articles were mapped for RAI. The research presented here is by any measure the largest systematic literature review of both fields to date and the only crossdisciplinary review from a methodological perspective.
... Releasing the robot is generally more prioritized than making sure it functions correctly. A robot that does not work correctly may give considerable consequences to the users, which is the citizens in the case of public organizations (55). ...
Full-text available
Robotic Process Automation (RPA) substitutes repetitive tasks and processes with a bot that uses standard interfaces. It can improve business outcomes and support business transformation, making it popular in many private organizations. RPA is an "add-on" rather than a separate system. It can work with legacy systems, making it an attractive solution for public organizations that move at a slower technological pace and may still use outdated software. Because this technology automates some manual tasks, it changes the existing work routines of some categories of employees who, as a consequence, may become resistant to change brought by automation. In the context of organizational change management, resistance to change is one of the critical factors why implementation cases fail. This is especially true in public organizations where the workforce generally favors stability over innovation. Moreover, as public organizations do not generate profit as private businesses do, they have less incentive to change. Critical Success Factor(s) (CSFs) is a set of key operational areas that need to perform well for the organization to increase the chances to grow and thrive. Identifying these areas is essential to a manager or strategist to ensure possible organizational success. Limited research about CSFs and change management when implementing RPA makes it difficult for public organizations to thoroughly succeed with the implementation process. Knowing the CSFs of change management is therefore essential when implementing RPA in a public organization. The research question this work addresses: What critical success factors for implementing RPA in Swedish public organizations can be identified based on relevant theories of change management? A theoretical evaluation supported the development of a theoretical framework. The framework comprises a mix of change management models and previous research on change management in public organizations. 
This framework was then empirically tested using a survey study. Nineteen employees experienced in implementing RPA in Swedish public organizations answered the questionnaire. The results were analyzed by descriptive statistics. Results display that most of the framework's CSFs were present during the implementation process. However, two external CSFs outside the organization were identified to impact the implementation process. These external factors were not identified in any of the studied change management models or generally mentioned in the previous research. Furthermore, the results suggest that RPA seems to provide an operational benefit. This research contributes to the limited knowledge pool and provides practical implications for RPA implementation in Swedish public organizations.
... It is important to consider legal, managerial and ethical issues of the introduction of digital public services (Pereira et al., 2017). Ethical issues need to be considered at the stage of development and implementation of new government services, otherwise they may contribute to the increasing digital divide (Ronzhyn & Wimmer, 2019). ...
Full-text available
Emerging technologies and digital transformation in government and society, called Government 3.0, put forward new training needs for graduates in the area. The ERASMUS+ research project “Scientific Foundations Training and Entrepreneurship Activities in the Domain of ICT-enabled Governance” (Gov 3.0) established the scientific domain of Government 3.0 as a vivid scientific domain, encompassing electronic government, ICT-enabled governance and digital government towards decision support for public value creation. To ensure the necessary competencies in achieving such public value creation along the digital transformation of government and society, training needs were analysed and discussed as part of the project Gov 3.0. The result is a baseline for a digital governance curriculum, providing a description of a generic training programme for digital governance and its implementation in the European context. It is complemented with a Master Programme in Digital Governance to build up a comprehensive understanding of the domain of digital government with particular focus on emerging technologies that have the potential to disrupt public governance. The programme deepens the fundamental understanding of digitalization contexts and related organizational modernization of the public sector, knowledge of information systems in the public sector, knowledge of the decision-making systems in public sector and public sector automatization.
... Still, personal and demographic information are seen as an important part of making use of these services and to gain value from big data. Therefore, the protection of personal privacy has to be an inherent design characteristic, with respect to legal, technical, and organizational issues (Schomakers et al., 2021;Wilkowska et al., 2020;Ziefle et al., 2016), but also regarding ethical issues, addressing data use, information and data ownership -a discussion about values and targets of data use (Gupta et al., 2018;Helbing, 2019;Ronzhyn & Wimmer, 2019). It is a mandatory and urgent task of future efforts to carefully balance the tradeoff between the need of data collection (surveillance), the consideration of users' wishes, dignity, private data, and the protection of users. ...
The ongoing digitization and novel smart technologies can deliver enormous benefits to society and individuals. However, smart services often require the vast collection of data conflicting with users’ privacy. The integration of privacy concerns into acceptance research is needed to portray and predict user decisions in smart technologies. To allow a privacy-integrated prediction of users’ technology acceptance, we apply the privacy calculus theory to adoption behavior. We test the new model in three usage contexts, autonomous driving (transportation), activity trackers (fitness), and cardiac device remote monitoring (medical treatment). Using an online questionnaire, 624 German participants evaluated all three technologies. The model fits well in all contexts, although also context-related differences in the weighing of perceived benefits and privacy concerns showed. It is concluded that the trade-off between perceived benefits and privacy concerns is a central keystone not only for the willingness to disclose information but also for technology acceptance.
Full-text available
p> Objetivo : determinar en qué medida existe un consenso en la literatura sobre la caracterización de las diferentes etapas de desarrollo del gobierno electrónico en función de cuatro categorías: las tecnologías empleadas, los objetivos perseguidos, los resultados/servicios obtenidos y el tipo de interacción con los usuarios. Diseño metodológico : se realizó una revisión cualitativa y sistematizada de la literatura científica que aborda las etapas de desarrollo del gobierno electrónico, para buscar definiciones puntuales sobre dichas etapas que, al ser contrastadas y analizadas, permitieron determinar los puntos convergentes. Resultados: existe amplio consenso entre los autores en cuanto a las características del gobierno electrónico 1.0 y 2.0, cierto consenso con respecto al gobierno electrónico 3.0, mientras que no existe suficiente información para determinar el consenso sobre el gobierno electrónico 4.0. Limitaciones de la investigación : el análisis efectuado se centró en las definiciones puntuales sobre las etapas de desarrollo del gobierno electrónico contenidas en los documentos científicos revisados, es necesario hacer un análisis más profundo de estos documentos para determinar con mayor rigor el consenso sobre las características de dichas etapas. Hallazgos: el estudio científico sobre las etapas de desarrollo del gobierno electrónico es relativamente reciente y, en la actualidad, gran parte de los trabajos se enfocan en desarrollar y analizar la discusión teórica, poniendo relativamente menos atención en la evidencia empírica, por ello algunas de las etapas de desarrollo del gobierno electrónico son consideradas prospectivamente. </p
Full-text available
The broad diffusion of so-called disruptive technologies in the public sector is expected to heavily impact and give a strong digital boost to public service provisioning. To ensure acceptance and sustainability, the benefits and challenges of using disruptive technologies in public service provisioning need to be well researched. This chapter applies scenario-based science and technology roadmapping to outline potential future uses of disruptive technologies. It develops a roadmap of research for Government 3.0. Based on a literature review of disruptive technologies in Government 3.0, thirteen scenariosScenarios sketch possible use of internet of things, artificial intelligence, machine learning, virtual and augmented reality, big data and other disruptive technologies in public service provisioning. Subsequently, gap analysis is applied to derive a roadmap of research, which outlines nineteen research actions to boost innovation in public serviceInnovation in public service with the use of disruptive technologies, thereby building on engagement of and interaction with expert stakeholders from different fields. We conclude with recommendations for a broader and more informed discussion about how such new (disruptive) technologies can be successfully deployed in the public sector—leveraging the expected benefits of these technologies while at the same time mitigating the drawbacks affiliated with them.
Among the biggest challenges rural areas currently face are the impact of demographic change and limited mobility offers. While younger people tend to either move to urban areas or have their own car at hand for satisfying everyday needs, elderly people or people with compromised health suffer most from these challenges. They prefer to live in their habitual and familiar environment and to remain mobile for as long as possible. To enable self-determined living in rural areas and to make rural areas more attractive as places to live, municipalities and regions with a substantially rural character need to develop concepts and modern digital solutions to achieve these objectives. In this paper, we introduce stakeholder-driven requirements elicitation for a mobility platform, which supports local communities and citizens in remaining autonomous in their living. The aim of the mobile app is to intelligently match offers of different mobility providers (e.g. voluntary drivers, public transportation services, taxi drivers etc.) with citizens’ mobility demands in rural areas. Citizens needing a ride can find, book and pay for the best offer for their journeys through the app. They can also rate the mobility service after its use. In the paper, we present the findings from 60 semi-structured face-to-face interviews with elderly people in rural areas. The interviews were transcribed, categorized and analyzed through both qualitative and quantitative content analysis. We conclude with recommendations for the design and implementation of the mobility app.
As e-government adoption becomes widespread, governments face a myriad of challenges which inevitably arise when technology is introduced into organisational processes. Among these challenges, governments are faced with ethical dilemmas associated with the use of ICTs to provide services to citizens. Hence, questions on what constitutes ethical or unethical action remain poorly understood in e-government. Ethical issues in the context of business information systems have been widely investigated for some time. With the advent of e-commerce, several studies have focused on ethical concerns in the online business-to-business (B2B) and business-to-consumer (B2C) contexts. Issues such as privacy, security, spamming, and the rights of e-customers have, inter alia, been highlighted. Due to the fundamental differences in the objectives of e-commerce and e-government, ethical dilemmas in respect of the latter are potentially different. A review of the literature indicates that e-government ethics have not been widely studied. This paper therefore examines the nature of ethics in e-government with a focus on government-to-citizen interactions. The paper adds to e-government discourse as it provides an analysis of ethics in the public sector and proceeds to examine these traditional views of ethics in the context of e-government. A comparative analysis of ethical dilemmas is made between e-commerce and e-government. Subsequently, the ethical challenges facing e-government planners are highlighted. We identify culture and issues related to inclusivity as important factors in the formulation of e-government ethical frameworks. Additionally, the concept of trust is found to consist of different dimensions in an e-government context. Lastly, we examine the implications of these findings for e-government in South Africa. Directions are suggested for e-government planners in South Africa who wish to develop new ethical frameworks, as well as enforcement strategies.
The vigorous development of artificial intelligence has had a profound, long-term impact on human production and life. It is a double-edged sword: while people enjoy the good life created by new technology, they also feel its negative effects, such as infringements of privacy and new forms of inequality. The social responsibility of artificial intelligence has become a hot topic in academic circles over the past two years. This article adopts the research framework of ISO 26000 to comprehensively analyze the problems of artificial intelligence social responsibility in theory and practice, and offers our own reflections. It concludes that, in the age of artificial intelligence, the seven core subjects of this standard should guide efforts to enhance the social responsibility of artificial intelligence and, by adopting the international social responsibility standard ISO 26000, ultimately achieve its sustainable development.
Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.
The integrity of businesses and markets is central to the vitality and stability of our economies. So good corporate governance - the rules and practices that govern the relationship between the managers and shareholders of corporations, as well as stakeholders like employees and creditors - contributes to growth and financial stability by underpinning market confidence, financial market integrity and economic efficiency. Recent corporate scandals have focussed the minds of governments, regulators, companies, investors and the general public on weaknesses in corporate governance systems and the need to address this issue.
The participation of AI in society is expected to increase significantly, and with that the scope, intensity and significance of morally-burdened effects produced or otherwise related to AI, and the possible future advent of AGI. There is a lack of a comprehensive ethical framework for AI and AGI, which can help manage moral scenarios in which artificial entities are participants. Therefore, I propose the foundations of such a framework in this text, and suggest that it can enable artificial entities to make morally sound decisions in complex moral scenarios.
Emerging combinations of artificial intelligence, big data, and the applications these enable are receiving significant media and policy attention. Much of the attention concerns privacy and other ethical issues. In our article, we suggest that what is needed now is a way to comprehensively understand these issues and find mechanisms of addressing them that involve stakeholders, including civil society, to ensure that these technologies’ benefits outweigh their disadvantages. We suggest that the concept of responsible research and innovation (RRI) can provide the framing required to act with a view to ensuring that the technologies are socially acceptable, desirable, and sustainable. We draw from our work on the Human Brain Project, one potential driver for the next generation of these technologies, to discuss how RRI can be put in practice.
Artificial Intelligence (AI) technology is being used throughout industry with the advent of the Fourth Industrial Revolution. In the financial industry, AI technology is used in sales and marketing, fraud and illegality prevention, credit evaluation and screening, chat-bots and more. Robo-advisors apply AI to investment advisory services, providing extensive and cost-effective portfolio investment information; they also benefit the field by broadening the investor base and creating new customers and services. However, AI-based robo-advisors are still at an early stage of adoption, and there are currently legal, institutional and policy limitations to providing comprehensive and customized advisory services. This paper therefore first considers the legal arguments on AI-related issues of legal status and liability, financial IT, security and privacy. Focusing on robo-advisors, the main issues concerning the current legal system and security self-regulation are elucidated and analyzed to provide a basic direction of regulation for the development and use of AI technology in the financial sector. In an environment that is shifting from ex-ante to ex-post regulation, the current paradigm of financial IT security regulation, we propose specific measures to strengthen the use of regulatory sandboxes as an autonomous regulatory scheme for new technologies such as AI, in order to modernize regulations for the digital age.
The emergence of artificial intelligence technologies presents itself as one of the most fundamental technological-economic revolutions in recent history. The implications of their emergence bear the utmost significance for the labor market where ‘smart’ machines have the potential to rapidly displace a material share of work tasks currently performed by humans. Such a situation will require a prudent short-term and long-term governmental response. Moreover, a new type of cooperation between the state and the private enterprises will be needed to successfully overcome the societal and economic challenges posed by work task automation by artificial intelligence technologies.
The networked future will generate a huge amount of data. With this in mind, big data analytics will be an important capability required to fully leverage the knowledge within the data. However, collecting, storing and analyzing the data can create many ethical situations that data scientists have yet to ponder. Hence, this paper explores some of the possible ethical conundrums that might have to be addressed within a "network of the future" big data project and proposes a framework that can be used by data scientists working within such a context. These ethical challenges are explored using the example of future networked vehicles. In short, the framework focuses on two high-level ethical considerations: data-related challenges and model-related challenges.