GAME-BASED LEARNING AS A SUITABLE APPROACH FOR
TEACHING DIGITAL ETHICAL THINKING IN THE FIELD OF
ARTIFICIAL INTELLIGENCE
M. Bloomfield1, C. Lemke2, D. Monett2
1York Associates (UNITED KINGDOM)
2Berlin School of Economics and Law (GERMANY)
Abstract
The comprehensive digitalisation of society and the economy is a chance to carry technological leadership into the digital era. Above all, it requires a solid education in basic digital skills and core competencies. In our technology-driven world, where we are increasingly interwoven with technologies that are no longer just tools but have become part of our identity, it is essential that these digital skills are complemented with ethical thinking. This touches on the understanding of different norms, values and ethical perspectives, and especially their implications for the design and usage of technology. In this paper, we present a taxonomy of ethic games that foster self-awareness in digital ethical thinking. The taxonomy demonstrates types and subtypes of ethical games as a pathway for teaching ethical awareness. These ethic games help learners recognise the differences between ethical frameworks and understand which of these frameworks they most naturally align themselves with. This way, game-based digital ethical thinking can become an integral part of teaching digital skills that empower the employability of each digital citizen.
Keywords: Digital ethics, digital ethical thinking, game-based learning, neurodiversity, ethic games.
1 INTRODUCTION
Developments in the fields of modern technologies progress fast in the digital age. For example, artificial intelligence (AI) is a key emerging technology that is dramatically influencing and changing both the economy and society. The ability of intelligent algorithms to automate some human tasks completely and to support or even take over certain human decision-making processes is fuelling forecasts of further growth of this technology in the years to come.
These technological dynamics simultaneously influence our attitudes as humans, as well as the values and norms that we adopt in our interaction with technological artefacts. On the one hand, they open up plenty of new options for action and fields of application. On the other hand, these new areas of application reveal to us a variety of previously non-existent ethical issues and conflicts. In addition, established legal norms and traditional moral values are profoundly challenged, since they cannot keep up with the pace of technological change and thus cannot be adequately modified nor easily adapted to regulate the new ethical issues and conflicts that arise. We as individuals, as well as society as a whole, require a digitally-driven ethical mindset that builds on a set of new questions, their possible answers, and new kinds of regulation.
The fast-moving, technology-oriented fields of education especially demand a learner’s digital ethical self-awareness as a basis for both a holistic understanding and a consequent shaping of emerging technologies. Teaching learners in the fields of today’s technologies means teaching digital-ethical basics in a target group-oriented manner, on top of the necessary basic knowledge in those fields. Such ethical knowledge is particularly important in and for the field of AI. The aim of digital ethical thinking should then be to increase awareness of ethical issues and their moral implications. As a result, both AI developers and AI users can make better decisions when interacting with intelligent algorithms in particular, and smart technologies in general. Therefore, one of the most fundamental demands in dealing with AI, namely to have ethically-coherent solutions that avoid discrimination and bias, protect our privacy, and preserve our free will, among others, will be much easier to fulfil. This skill extension also increases learners’ employability and fosters the necessary economic transition into a digitalised, networked world.
There are two main challenges behind the imperative of digital ethical thinking. First, technology cannot
be neutral. Rather, it includes the values of its creators: those who design, develop, deploy, and control
technology have a particular, monocultural perspective with which they imbue the technology. Second,
different cultures have different ethical perspectives. It is not possible to be ethical from competing
ethical perspectives at the same time.
The question then arises as to which concepts and structures teachers should consider, especially those that are particularly suitable for educating in ethical thinking. In this regard, gamification seems to be a particularly suitable approach to learning and teaching digital ethical thinking. In the context of digital technologies for education, game-based learning allows for a playful experience when dealing with ethical implications and for combining them with the learning of basic ethical, culturally dependent views.
2 DIGITAL ETHICS IN EMERGING TECHNOLOGIES
2.1 Ethics, Culture, and Ethical Frameworks
The term ethics is closely tied to moral philosophy. Morality is understood as the totality of moral norms, feelings, attitudes, and actions. Ethics as a science reflects on morality by describing moral phenomena and their theoretical prerequisites, on the one hand, and by examining the rights and wrongs of a matter, on the other. The valid norms of a society, however, do not arise from the outset; rather, they require active debate by us humans. The search for optimal ways to deal appropriately not only with problems and challenges, but also with conflicts and cases of doubt, as well as to find solutions for them, requires deep ethical consideration. For this, we need a considered attitude towards such matters. We must understand what the problem is in the first place, how it manifests itself, and which interests are associated with it.
There are three broad frameworks of ethical thinking, which we can categorise in simple terms as consequentialism, objectivism, and virtue ethics [1]. While the complexities of human nature and the ambiguities of our moral choices often mean that very few people consistently adhere to one and only one of these frameworks, they are prima facie incompatible. Consequentialism (exemplified by Bentham, Mill, and Singer)¹ holds that the moral value of an action is to be judged by its consequences. Consequentialists are therefore likely to argue that “the ends justify the means,” or that “the needs of the many outweigh the needs of the few.” Objectivists (such as Kant, Habermas, and Rawls),² on the other hand, typically disagree with this analysis and hold that what is morally right is universal, on the one hand, and inalienable, on the other. Objectivists are more likely to argue that “it doesn’t matter that it came good in the end, it was the wrong thing to do,” or that one must “stick to one’s principles.” The third framework (held by Confucius, Anscombe, and Foot)³ is called virtue ethics, and holds that acting in accordance with a person’s ethical character is, de facto, acting ethically. The development of a person’s character is therefore primary, rather than the calculation of end benefits or the search for objective ethical truths. Virtue ethicists are far more likely to argue that “you just have to be true to yourself,” or that “we don’t do that because we’re not barbarians.”
Habermas [2] and West [3], [4] have both argued that ethical norms are contingent on a group’s historical
and cultural development. Where no absolute framework for judging right and wrong can be found, each
moral statement we make is therefore incomplete without the phrase “according to one or another ethical
framework.” Christianised civilisations for example typically develop along ethically divergent lines from
Confucian civilisations; and other underlying values and social norms within cultures (such as the role a
family plays, military achievements, the function of political and social hierarchies, gender roles, and
economic aspiration, to name but a few) complicate this development further.
Our individual cultural background defines our ethical thinking and determines legal regulation and governance [5]. It forms our individual and societal norms and values, i.e. our morality [6]. The two dimensions of the culture map from Inglehart and Welzel [6], for example, represent around 70% of cross-national cultural differences: one axis shows variances between what are termed “traditional” values and what are termed “secular-rational” values; the other shows variances between what are termed “survival” values and what are termed “self-expression” values. Ethical differences can also be plotted on this map, with virtue ethics predominating in Confucian cultures, for instance, and consequentialist ethical values predominating where the secular-rational and self-expression zones converge.
Different cultures have different ethical perspectives. Different groups think differently. With this, it is
crucial to understand how different cultural values manifest themselves so that specific ethical issues
1
See for instance https://plato.stanford.edu/entries/consequentialism/.
2
See for instance https://plato.stanford.edu/entries/ethics-deontological/.
3
See for instance https://plato.stanford.edu/entries/ethics-virtue/.
3448
can be categorised correctly. Ethical theory together with the three most established ethical frameworks
allow for posing different moral statements. As mentioned in [7], “[t]he distinction between morality as
social fact and ethical theory as reflection, while not universally accepted, is widely recognized …, even
though sometimes slightly different terminology is used.”
2.2 AI as Emerging Technology
We are living in times with a huge variety of emerging technologies across different fields like material
science, agriculture and information technology [8]. Especially the latter, the field of information
technology, includes emerging technologies such as 5G, Blockchain, AI, or the Internet of Things. They
are largely influencing our lives in the present and will continue to do so in the years to come. AI is
leading those technologies [9], yet the science behind this field has been around since the 1950s.
Despite its maturity as a field, AI researchers have not been able to agree on a definition of artificial (or machine) intelligence [10]. One possible explanation is that we do not have an overarching definition of human intelligence either, i.e. of the very human capability that smart artefacts are expected to replicate, simulate, or augment. Several definitions for both concepts (i.e. human and artificial intelligence) are available in the literature (see for instance [10], [11], [12]). However, several reasons demonstrate that there is still a long path to go regarding digital ethics in emerging technologies, especially intelligent ones. These include the lack of a conceptual consensus on the boundaries of the intelligence discourse, the poor public understanding of what AI is or is capable of, the hype around some of the techniques used in recent data-driven algorithms (like deep learning, a well-known technique from machine learning, itself a subfield of AI), as well as the negative ethical implications that some of these algorithms have caused [13].
Among the most important determinants of AI are the capabilities to adopt human decision-making processes and to act autonomously. Other often-mentioned capabilities of intelligent systems are being able to operate in environments and to adapt to them with insufficient knowledge and resources [14]. To do that, AI systems resort to different techniques and approaches, hybrid ones included, the most prominent one being machine learning, which has achieved resonance in the media in recent years.
The delegation to AI systems of decision-making tasks previously done by humans is associated with a number of ethical issues. These concern, above all, the lack of explainability and transparency of such systems, not only when biased results are obtained, but also in general. As humans, we want to know, for example, why a system has made a decision and on which assumptions, because the result could disadvantage certain groups of individuals, in most cases minorities. In addition, the acquisition, processing, and use of data by AI systems pose serious ethical issues and concerns; AI-driven systems can not only undermine our most fundamental rights, but also violate our privacy, discriminate against individuals, and be used with malicious intentions.
Further applications of AI and their penetration into more and more areas of life and work require a targeted ethical debate. Not only new ethical principles, but also the sovereignty of interpretation and
transfer into legal documents are important here. AI practitioners should consider ethical thinking in the
design and development of these systems; ethical awareness must be part of education in AI and related
fields.
2.3 Digital Ethics and Digital Ethical Thinking
Digital ethics is “the branch of ethics that studies and evaluates moral problems relating to data and
information (including generation, recording, curation, processing, dissemination, sharing and use),
algorithms (including AI, artificial agents, machine learning and robots) and corresponding practices and
infrastructures (including responsible innovation, programming, hacking, professional codes and
standards), in order to formulate and support morally good solutions (e.g. good conduct or good values)”
[5].
Digital ethics deals with the interactions between people and technology, especially the modern
technologies of the digital age, reflects on moral values, and contributes to knowledge by means of
ethical principles. A common assumption is that technologies of the digital age are universal: standards, models and approaches for a given technology are taken to be generally valid worldwide. The values of its creators, however, shape the concrete design of an application or service based on a specific technology stack. The cultural background, the morality, and the ethical principles, as well as legal regulations, dictate the digital ethical framework for implementation and usage. Those who design, develop, deploy, and control technology have a particular, monocultural perspective. As mentioned above, technology is not neutral: it depends on those perspectives.
Additionally, nowadays we live “onlife,” which means that we are simultaneously offline and online [5]. Digital technologies are especially interwoven with both our individual and social identity; they are no longer a mere tool. Therefore, we need a new mindset of moral statements on how we want to interact with these digital technologies in order to reach a fulfilled life. This may imply, first, the need to understand these digital technologies. Then, we need to understand under what conditions and how we want to shape these technologies for the benefit of humanity. This digital ethical thinking builds
upon a fundamental understanding of different ethical frameworks and their dependency on cultural
values. The better we understand our own position and the position of others, the better we can make
decisions. More understanding correlates with better dialogue [15]; better dialogue correlates with less
conflict [15]; interpreting the risks and benefits of an emerging technology like AI should consider these
facts [16].
3 LEARNING AND TEACHING IN THE DIGITAL AGE
3.1 Technology Understanding, Technology Usage
Teaching and learning in the fast-moving field of emerging technologies like AI demands some specific
considerations. On the one hand, they concern the content for teaching and learning. On the other hand,
they relate to the technologies themselves and to the design for teaching and learning. Furthermore, the
understanding and shaping of technologies require a common ground of fundamental knowledge, such as methods and approaches in the specific field, as well as a multidisciplinary way of thinking like digital ethical thinking. The question then arises as to how this kind of knowledge can best be imparted.
Especially in the field of teaching AI as a technology, the learners have to understand some important
topics such as:
- the models, approaches and methods used in AI algorithms,
- the gathering and mining of data to be processed by AI algorithms,
- special aspects concerning human-machine interaction,
- the power of decision making in relation to autonomy and autonomous machines,
- the boundaries for and of automated decision making, and
- digital ethical thinking, i.e. values and norms like free will, freedom and truth in human-machine relationships, privacy, cultural issues, the influence on privacy of a data and algorithm economy, and political as well as legal boundaries.
Traditionally, the last of these topics have not been part of AI education in particular, nor of computer science or technology education in general. They are mainly relegated to elective courses in STEM study programmes or are not given the importance they truly deserve.
3.2 Game-based Learning and Teaching in the Field of Technology Education
A game-based approach to learning fundamental ethical questions promises to counter the above-mentioned problems [17], promotes an understanding of the behaviour of different cultural groups, and sharpens the view towards an ethical assessment of the results generated by AI systems. Furthermore, learners may be more motivated and engaged to learn; they can also automatically get feedback about their individual learning progress. The combination of formal and informal social learning best reflects the changing behaviour of the digital native generation [18]. A game-based approach is particularly useful for teaching technological knowledge, as it presents the duality of technology very well: technology as a tool and technology as learning content itself, together with its purposes, aims, and risks.
Game-based learning and teaching in the field of digital ethics encourages ethical awareness in learners and helps to clarify key differences between the three essential ethical frameworks introduced in Section 2.1. It also facilitates a better understanding of these ethical frameworks and allows learners to locate themselves within the moral theory toward which they most naturally orient. By choosing a specific ethics game, for instance, basic norms and values can be internalised before they are tested against the beliefs of one of the moral theories and become the object of reflection.
With such an approach, learners gain an extensive set of competencies, such as:
- an understanding of different ethical frameworks,
- an understanding of the non-neutrality of ethical frameworks,
- an understanding of the relationship of ethical frameworks to cultures,
- an understanding of the decisions that are made within these ethical frameworks,
- an understanding of the practical outcomes of their ethical decisions,
- the ability to critically reflect upon and develop their own work from the perspective of understanding these frameworks,
- the development of ethical thinking skills in the context of digital ethics for AI, integrating them into the learners’ decision making,
- the development of specific ethical thinking skills that will enable them to reduce discrimination and unconscious biases, and to develop equality for all regardless of race, gender, sexual orientation, nationality or culture,
- the ability to apply these understandings to deep current and future AI questions through the use of critically reflective activities,
- the creative application of their knowledge to real-life situations,
- the acquisition of a self-developed ethical maturity that leads to greater responsibility, critical thinking, self-reflection, and big-picture thinking, and
- an awareness of the impact of AI systems on environmental issues, our economies, and social structures.
As a result, game-based learning creates a potential for operating with emerging technologies with a
greater emphasis on social responsibility and ethical perspectives of digital sovereignty.
3.3 Neurodiversity and Interculturality
Meanwhile, any form of education should inclusively support all cultural and learning groups with specific
learning needs. Only in this way will all people have a chance for equal education. When putting together
learning pathways for students, we must consider diversity including neurodiversity.
Up to 20% of the population has some form of dyslexia, autistic spectrum disorder (ASD), or attention-deficit hyperactivity disorder (ADHD). The British Dyslexia Association puts the number of dyslexics at between 10% and 15% [19]; the International Dyslexia Association places the number of dyslexics at “perhaps as many as 15-20% of the population as a whole.”⁴ The prevalence of autism is variously estimated to be anywhere between less than 1% of the population and approaching 2% [20], [21]; and while the ADHD Institute estimates over 2% of the population as having ADHD, the Centers for Disease Control and Prevention put this figure at nearly 9.5%.⁵
Without understanding how these students learn, we risk failing them and ourselves. Neurodiverse learners, including those with various forms of dyslexia, ASD, or ADHD, need to be considered inclusively, yet explicitly. Up to 15% of the population of Germany, for example, has some form and degree of dyslexia. Roughly 1% of the general population has ASD, and between 2% and 5% of the population may have ADHD. These figures fall as students enter (or fail to enter) higher education, with about 5% of students in UK higher education institutions currently being recognised as dyslexic, although it should be noted that this figure appears to be rising [22].
Neurodiversity addresses the biological diversity of the human brain and its processing. The concept
rejects the stigmatisation of people with neurological differences as pathological expressions in the sense of a disease, disorder or impairment. Neurodiversity accepts the human brain’s natural differences in
terms of sociability, learning, attention, mood and other critical mental functions [23]. Neurodiversity
recognises all people’s diversity so that no one is “neurotypical,” where “typical” is either descriptive or
normative; nevertheless, categorical differences are taken into account. An analogy would be that
physically there is no such thing as a “typical” human being (each person differs from others in terms of
characteristics and abilities). There is no universally accepted standard that judges one group as
superior to another. Despite prejudices in dealing with our physical and physiological diversity, no one
should be judged as better or worse.
⁴ See https://dyslexiaida.org/dyslexia-basics/.
⁵ See for instance https://adhd-institute.com/burden-of-adhd/epidemiology/ and https://www.cdc.gov/ncbddd/adhd/data.html.
It is therefore not only essential but logical to understand how neurodiverse groups think, feel and learn.
Only then will all have the opportunity to participate equally in digital life. Simple techniques like reducing
the amount of text we use, providing clear, staged instructions, and accepting differentiated outputs will
help, but the use of gamification in learning accesses styles of knowledge acquisition appropriate for
both neurodiverse and neurotypical students [24]. Studies have demonstrated that active, engaged
gaming bypasses areas of the brain associated with neurodiversity-deficits (such as those involved in
phonological processing), and is hence inherently more inclusive [25]. Especially concerning culturally
determined differences in the interaction with technology in general, it becomes apparent that an
intercultural perspective must also be considered from an ethical point of view [16].
Neurodiversity is often defined in terms of differences and variations (see for instance Understood.org
and Britain’s National Autistic Society). Typically these differences are characterised by how the
neurodiverse individual processes information, interacts with the world (and those people who make up
the world), and understands time, sequences, the written word, and various aspects of learning. What
is curious about all of these definitions is that each one of the differences is given in terms of something
that is culturally contingent. Interactions with people differ from culture to culture, and the norms of
interactions in one culture may be completely inappropriate elsewhere; time is understood differently in
sequential cultures from polychronic cultures [26]; the written word varies not only between different
writing systems (such as alphabetic, logo-syllabic, alphasyllabary and Abjad), but within those systems
themselves; and the very structure and purpose of education is dependent upon the needs of the society
in which a child is being educated. Neurodiversities and Special Educational Needs, therefore, must by
their very nature be defined in terms of the cultures in which their populations lie. Yet clarity is made
even harder to come by because different countries (and in many cases, different Federal States within
these countries) define neurodiversities quite differently.⁶ Indeed, even where the definitions are broadly
aligned (such as where most national organisations define dyslexia as a phonological processing
deficiency), differences still occur when it comes to how to measure the deficiencies. What are they
measuring them against? How are they measuring them? How large must the departure from the
standard deviation be? Are they applying medical standards or not? In all, then, without an
understanding of cultural diversity, it is impossible to have a proper understanding of neurodiversity.
4 ETHICS GAMES AS AN APPROACH FOR DIGITAL ETHICAL THINKING
4.1 Taxonomy of Ethic Games
Ethics games are a subcategory of games for teaching and learning specific knowledge. Each game
has concrete aims and focuses on particular aspects of digital ethical thinking. The taxonomy we introduce
in Fig. 1 demonstrates types and subtypes of ethical games, and serves as a pathway for teaching
ethical awareness. The lower band of games helps students recognise the differences between ethical
frameworks and understand which of these frameworks they most naturally align themselves with. The
middle band takes the three main ethical frameworks, and examines how ethical problems are dealt
with by using them. The games develop from the general to the particular as they go higher up the
taxonomy. The top band then examines ways in which ethical disputes can be solved by expanding the
scope of the frameworks so that connections, and agreement, can be found between them.
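To make the banded structure of the taxonomy more concrete for course designers, the following sketch models it as a small data structure. It is only an illustration of the three bands described above, not a rendering of Fig. 1; the game names other than the Rights Poker, and the helper function, are hypothetical placeholders.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EthicsGame:
    """A single ethics game in the taxonomy (illustrative only)."""
    name: str
    learning_goal: str

@dataclass
class TaxonomyBand:
    """One band (level) of the taxonomy, moving from the general to the particular."""
    name: str
    purpose: str
    games: List[EthicsGame] = field(default_factory=list)

# A minimal sketch of the three bands described in Section 4.1.
# Game names other than "Rights Poker" are hypothetical examples.
taxonomy = [
    TaxonomyBand(
        name="Lower band",
        purpose="Recognise the differences between ethical frameworks and "
                "find the framework one most naturally aligns with",
        games=[EthicsGame("Framework self-placement game", "self-awareness")],
    ),
    TaxonomyBand(
        name="Middle band",
        purpose="Examine how ethical problems are dealt with inside each of "
                "the three main frameworks",
        games=[EthicsGame("Rights Poker", "negotiating objective-based moral statements")],
    ),
    TaxonomyBand(
        name="Top band",
        purpose="Resolve ethical disputes by expanding the scope of the "
                "frameworks until connections and agreement can be found",
        games=[EthicsGame("Dispute-resolution game", "cross-framework agreement")],
    ),
]

def teaching_pathway(bands: List[TaxonomyBand]) -> List[str]:
    """Return the game names in the order they would be taught (bottom-up)."""
    return [game.name for band in bands for game in band.games]

if __name__ == "__main__":
    print(teaching_pathway(taxonomy))
```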
The following section presents an example of a concrete context where we are using different ethics
games for teaching ethical awareness in emerging technologies like AI.
4.2 Example of an Ethics Game for a Course on the Online Platform “KI-Campus”
KI-Campus (AI campus) is a Germany-wide, national R&D initiative funded by the German Federal Ministry of Education and Research (BMBF) that offers different learning opportunities around AI, implemented as R&D projects.⁷ The platform has been available to the public as a beta version since July 2020 and is in continuous development.
One of the courses on KI-Campus covers the topic “Data and Algorithm Ethics.” It follows a game-based learning approach, is neurodiversity-friendly, and allows for an inclusive learning experience. The course is composed of different learning episodes, each of which includes a pocket ethics game. One of the available games is the so-called “Rights Poker.” It addresses the tackling and negotiating of objective-based moral statements and their values in relation to the concrete player’s ethical background. This background manifests itself through the three ethical frameworks with their associated moral attitudes and their reflection (see Fig. 1).

⁶ For a fascinating comparison of the different definitions of dyslexia globally, see http://www.dyslexiabytes.org/international-definitions-of-dyslexia/.
⁷ See https://ki-campus.org/?locale=en.
Figure 1: Taxonomy of Ethics Games
The Rights Poker can be either a player-vs-dealer game or a multi-player game. In the “single player”
version, the player is presented with a list of thirteen rights-based statements. She has to prioritise these
statements. She is informed that the dealer has also prioritised these same statements, although the
dealer may have prioritised them differently.
The object of the game is to get as high a value “hand” as possible. In prioritising the ethically-oriented
statements, the player assigns them values as you would see on a standard deck of playing cards: the
most important statement is designated the “ace,” the second most important statement is designated
the “king,” the next is the “queen,” and so on.
Unlike most poker games, getting a hand all of one “suit” will not rank particularly highly, as it is the
value of the cards, not the suits, that will determine victory. Therefore, getting a hand of “2, 3, 4, 5, and
6” in “spades” will rank no higher than getting the same hand in a mixture of suits; and it will rank lower
than “8, 9, 10, jack, and queen” in a hand of mixed suits.
Having assigned values to the rights-based statements, the player will have some idea of which
statements are going to be ranked the highest and which are going to be ranked the lowest. However,
she cannot be certain, as the dealer has also ranked the statements, and the dealer’s rankings may well
be different from the player’s. The cards, when played, do not have their values displayed on them!
This introduces an element of risk to the game; but it also ensures that the player is not “playing to the
numbers”: she is “playing to collect the rights-based statements she most values.” She will find out, at
the end, whether her statements are shared by the dealer (note: the dealer does not generate the
priorities randomly; they are generated according to objectivist principles).
Other artificial players may be generated, but the person playing the game does not need to know they
are not “real.”
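As a rough illustration of this scoring mechanic, the sketch below assigns card values to the player’s prioritised statements, gives the dealer its own hidden prioritisation, and compares two five-card hands. It is a minimal sketch under our reading of the rules, not the KI-Campus implementation: the statement texts are hypothetical placeholders, the dealer’s ordering is random here rather than derived from objectivist principles, and we assume it is the dealer’s hidden valuation that decides the round.

```python
import random
from typing import Dict, List

# Card values from lowest ("2") to highest ("ace") for a thirteen-statement deck.
CARD_VALUES = list(range(2, 15))  # 2..10, jack=11, queen=12, king=13, ace=14

def rank_to_values(prioritised: List[str]) -> Dict[str, int]:
    """Map a prioritisation (most important first) onto card values (ace first)."""
    assert len(prioritised) == 13, "Rights Poker uses thirteen rights-based statements"
    return {stmt: value for stmt, value in zip(prioritised, reversed(CARD_VALUES))}

def hand_value(hand: List[str], values: Dict[str, int]) -> int:
    """A hand's value is the sum of its card values; suits play no role."""
    return sum(values[stmt] for stmt in hand)

# Hypothetical rights-based statements (placeholders, not the course's list).
STATEMENTS = [f"Rights-based statement {i}" for i in range(1, 14)]

# The player's prioritisation would come from the learner; here it is random.
player_priorities = random.sample(STATEMENTS, k=13)
player_values = rank_to_values(player_priorities)

# The dealer's prioritisation stays hidden; in the actual game it would be
# generated according to objectivist principles rather than randomly.
dealer_priorities = random.sample(STATEMENTS, k=13)
dealer_values = rank_to_values(dealer_priorities)

# Deal five statements ("cards") each from a shuffled deck of thirteen.
deck = STATEMENTS[:]
random.shuffle(deck)
player_hand, dealer_hand = deck[:5], deck[5:10]

# The gap between these two numbers is the "surprise" the game is built on.
print("Player's own estimate of her hand:", hand_value(player_hand, player_values))
print("Same hand under the dealer's hidden ranking:", hand_value(player_hand, dealer_values))

# Assumption: the dealer's hidden valuation decides the round.
if hand_value(player_hand, dealer_values) > hand_value(dealer_hand, dealer_values):
    print("Player wins this round")
else:
    print("Dealer wins this round")
```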
The style of poker best suited to this game is “five card stud.” This is a game that works along the following lines:
- Each player pays the ante.
- Two cards are dealt to each player. The players bet on the likelihood of these cards forming the basis of a winning hand.
- One more card is dealt, face down, to each player. The players bet on this card.
- A fourth card is dealt, face down, to each player. Again, the players bet on this card.
- A final, fifth, card is dealt, face down, to each player. The players bet on this card.
Each player then reveals their hand. The hand with the greatest value is deemed to be the winning hand. This may come as a surprise to the player, who had prioritised the cards differently. The play is repeated, betting included, until the player wins or loses a significant amount (decided by the player, but there could be a limit programmed into the game mechanics).
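The round structure above maps naturally onto a simple game loop. The following sketch outlines one such loop under simplifying assumptions: fixed bet sizes, no folding, ties going to the dealer, and numeric card values standing in for the dealer’s hidden valuation of the statements. The function names and amounts are illustrative and not taken from the KI-Campus implementation.

```python
import random

ANTE = 1
BET = 2          # illustrative fixed bet per betting round
LIMIT = 20       # illustrative stop-loss / stop-win amount chosen by the player

def play_round(rng: random.Random) -> int:
    """Play one five-card-stud round; return the player's net chips.

    Card values stand in for the dealer's (hidden) valuation of the
    rights-based statements, so a plain numeric comparison decides the round.
    """
    deck = list(range(2, 15))          # thirteen statements valued 2..14 (ace)
    rng.shuffle(deck)

    spent = ANTE                        # the player pays the ante
    player_hand, dealer_hand = [], []

    # Two cards first, then three single cards, with a bet after each deal.
    for n in (2, 1, 1, 1):
        player_hand += [deck.pop() for _ in range(n)]
        dealer_hand += [deck.pop() for _ in range(n)]
        spent += BET                    # no folding in this minimal sketch

    pot = 2 * spent                     # the dealer matches the player's stake
    if sum(player_hand) > sum(dealer_hand):
        return pot - spent              # player takes the pot
    return -spent                       # player loses her stake (ties go to the dealer)

def play_session(seed: int = 0) -> None:
    """Repeat rounds until the player is up or down by the agreed limit."""
    rng = random.Random(seed)
    balance = 0
    while abs(balance) < LIMIT:
        balance += play_round(rng)
        print(f"Balance after round: {balance}")

if __name__ == "__main__":
    play_session()
```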
The multi-player game works along the same principles. The chief difference is that there are more
people (individually, and unknown to the other players) prioritising the rights-based statements. This
adds more uncertainty to the “values” of the cards. There is also more interpersonal game-play involved,
for instance in the betting rounds.
The game will be implemented as an online game in which the other players, as well as the entire game itself, are generated by the system.
4.3 Implications of Game-based Learning for Digital Ethical Thinking
Game-based learning has the advantage that the learner can connect closely and intensely with what
she has learned and thus process and contextualize the knowledge better according to her learning
style. That is essential for knowledge transfer in digital ethics, as the theoretical foundations must be
well understood in order for practical application to be possible for the learner. Only then can she better assess and decide what implications the concrete design of a technology has on its use by different groups of people, or how, for example, discrimination can manifest itself concretely.
Regarding the online course “Data and Algorithm Ethics,” the syllabus contains ethics games for each episode. These ethics games extend the learning content of a specific topic with additional forms of knowledge acquisition and contribute to approaching ethics from different perspectives. For instance, when the learner experiences the different forms of ethical breaches through specific algorithms such as facial recognition applications, she can adopt and evaluate this knowledge depending on concrete moral statements. She can also compare her evaluation, and her reflections upon it, with previous answers or with those from other learners.
5 CONCLUSIONS
A game-based learning approach supports the aim of extending the digital ethical self-awareness of learners, helps to build a better comprehension of the relationship between digital ethics and technology (e.g. the design and usage of data and algorithms in the case of the course mentioned above), and allows for better decision-making in the future. The learner’s playful experience fosters a holistic view of digital ethical thinking at all necessary levels. It starts with a better-educated philosophical and cultural background, establishes a link between the field of ethics and digital ethics, and enables learners to transform knowledge into practice-driven and application-oriented technology know-how.
Additionally, game-based learning supports the learning process of all human groups, neurodiverse people included, and offers an equal and inclusive way of learning with different forms of learning content distribution. In the field of digital ethics, an important part of technology theory and practice, it offers all humans the chance to participate in the digital age and to shape a good digital life.
REFERENCES
[1] S. Bonde, P. Firenze, J. Green, M. Grinberg, J. Korijn, E. Levoy, A. Naik, L. Ucik, L. Weisberg, “A
Framework for Making Ethical Decisions,” Brown University, Providence, Rhode Island, 2013.
Retrieved from https://www.brown.edu/academics/science-and-technology-studies/framework-making-ethical-decisions.
[2] J. Habermas, “Erkenntnis und Interesse,” Frankfurt: Suhrkamp Verlag, 1968.
[3] C. West, “The American Evasion of Philosophy,” Palgrave, 1989.
[4] C. West, “The Ethical Dimensions of Marxist Thought,” Monthly Review Press, 1991.
[5] L. Floridi, “Soft Ethics and the Governance of the Digital,” Philosophy & Technology, vol. 31, 1–8, 2018.
[6] R. Inglehart, C. Welzel, “The WVS cultural map of the world,” World Values Survey, 2010. Retrieved
from http://www.worldvaluessurvey.org/WVSContents.jsp?CMSID=Findings.
[7] B. C. Stahl, “Teaching Ethical Reflexivity in Information Systems: How to Equip Students to Deal With Moral and Ethical Issues of Emerging Information and Communication Technologies,” Journal of Information Systems Education, vol. 22, no. 3, 253–260, 2011.
[8] WEF, “Top 10 Emerging Technologies,” Special Report, World Economic Forum, 2020. Retrieved
from http://www3.weforum.org/docs/WEF_Top_10_Emerging_Technologies_2020.pdf.
[9] L. Fitzgerald, “10 Emerging Technologies Making an Impact in 2020,” CompTIA, 2020. Retrieved
from https://www.comptia.org/blog/emerging-technologies-impact-2020.
[10] D. Monett, C. W. P. Lewis, K. R. Thórisson, “Introduction to the JAGI Special Issue ‘On Defining Artificial Intelligence’ – Commentaries and Author’s Response,” Journal of Artificial General Intelligence, vol. 11, no. 2, 1–4, 2020. doi: 10.2478/jagi-2020-0003.
[11] S. Legg, M. Hutter, “Universal Intelligence: A Definition of Machine Intelligence,” Minds and Machines, vol. 17, no. 4, 391–444, 2007.
[12] P. Wang, “What Do You Mean by ‘AI’?,” in Artificial General Intelligence 2008, Proceedings of the First AGI Conference, Frontiers in Artificial Intelligence and Applications (P. Wang, B. Goertzel, S. Franklin, eds.), vol. 171, 362–373, Amsterdam, The Netherlands: IOS Press, 2008.
[13] S. M. West, M. Whittaker, K. Crawford, “Discriminating Systems: Gender, Race and Power in AI,” AI Now Institute, 2019. Retrieved from https://ainowinstitute.org/discriminatingsystems.html.
[14] P. Wang, “On Defining Artificial Intelligence,” Journal of Artificial General Intelligence, vol. 10, no. 2,
1–37, 2019.
[15] R. J. Bernstein, “Beyond Objectivism and Relativism,” Blackwell, 1983.
[16] K. Weber, “Information Ethics in a Different Voice, Or: Back to the Drawing Board of Intercultural Information Ethics,” International Review of Information Ethics, vol. 13, no. 1, 6–11, 2010.
[17] C. Lemke, D. Monett, M. Bloomfield, “Lernen und lehren mit und über KI: Chancen für eine Reformierung der Bildung,” POLITIKUM, 1/2021, Frankfurt/M.: Wochenschau Verlag, 2021 (in press).
[18] M. Prensky, “Digital Game-Based Learning,” Paragon House, 2007.
[19] P. Aston, J. Crawford, J. Hicks, H. Ross, “The human cost of dyslexia: The emotional and psychological impact of poorly supported dyslexia,” Report from the All-Party Parliamentary Group for Dyslexia and other SpLDs, British Dyslexia Association, 2019. Retrieved from https://cdn.bdadyslexia.org.uk/documents/Final-APPG-for-Human-cost-of-dyslexia-appg-report.pdf.
[20] WHO, “Autism spectrum disorders,” World Health Organisation, 2019. Retrieved from
https://www.who.int/news-room/fact-sheets/detail/autism-spectrum-disorders.
[21] CDC, “Data & Statistics on Autism Spectrum Disorder,” National Center on Birth Defects and Developmental Disabilities, Centers for Disease Control and Prevention, 2020. Retrieved from https://www.cdc.gov/ncbddd/autism/data.html.
[22] M. Bloomfield, “Neurodiversity-Friendly KI Campus: Considerations,” Dyslexia Bytes, 2020.
[23] T. Armstrong, “The Power of Neurodiversity: Unleashing the advantages of our differently wired
brain,” Cambridge: Da Capo Lifelong Books, 2011.
[24] D. Gooch, A. Vasalou, L. Benton, R. Khaled, “Using Gamification to Motivate Students with Dyslexia or Other Special Educational Needs,” in Proceedings of the ACM CHI Conference on Human Factors in Computing Systems, 2016.
[25] T. S. Zamuner, L. Kilbertus, M. Weinhold, “Game-Influenced Methodology: Addressing Child Data Attrition in Language Development Research,” International Journal of Child-Computer Interaction, vol. 14, 15–22, October 2017.
[26] F. Trompenaars, C. Hampden-Turner, “Riding the Waves of Culture: Understanding Diversity in
Global Business,” McGraw-Hill Education, 1997.