
A Virtual Agent Toolkit for Serious Games Developers

Authors:
Samuel Mascarenhas, Manuel Guimarães, Rui Prada, João Dias and Pedro A. Santos
INESC-ID and Instituto Superior Técnico
Universidade de Lisboa
2744-016 Porto Salvo, Portugal
samuel.mascarenhas@gaips.inesc-id.pt, manuel.m.guimaraes@ist.utl.pt, rui.prada@tecnico.ulisboa.pt, pedro.santos@tecnico.ulisboa.pt, joao.dias@tecnico.ulisboa.pt
Kam Star, Ben Hirsh
and Ellis Spice
PlayGen
8-9 Talbot Court, London, UK
kam@playgen.com,
ben@playgen.com,
ellis@playgen.com
Rob Kommeren
Stichting Praktijkleren
3821 AR Amersfoort, NL
r.kommeren@stichtingpraktijkleren.nl
Abstract—The design of serious games requires developers to
tackle pedagogical challenges calling for advanced solutions that
the entertainment industry might deem too risky to pursue. One
such challenge is the creation of autonomous socially intelligent
characters with whom players can practice different social skills.
Although there are several architectures in the field of virtual
agents that are designed specifically to enable more human-like
interactions, they are still not widely adopted by game studios
that develop serious games, in particular for learning. In this
paper, we present a virtual agent toolkit that was specifically
developed with the intent of making agent-based solutions more
accessible and reliable to game developers. To this end, a
collaborative effort was established with a game studio that has
used the toolkit to develop two different serious games. Among
other advantages, the toolkit facilitated the inclusion of a dynamic
model of emotions that affects not just how the character looks
and acts but also how the player’s performance is determined.
Index Terms—serious games, virtual agents, authoring tools,
interactive storytelling, affective computing
I. INTRODUCTION
The video game industry has seen tremendous growth, to the point that the budget of a highly anticipated game can surpass the cost of a big Hollywood film [7]. This has led to long development times, large development teams, and correspondingly high expectations from players [13]. On the one hand, this state of affairs has enabled the creation
of very detailed game worlds with stories and characters that
players find very engaging to interact with. But, on the other
hand, the huge risk that is now associated with failing to meet
the expectations of players has led the industry to primarily
focus on what has been known to work in the past. This is also
then reflected in the available development tools, with popular
game engines like Unity1being primarily designed to support
the typical requirements and methods used in entertainment
games that were previously successful. As a result, game
developers that are interested in developing games with more
unique characteristics or requirements, which is often the case
1https://unity3d.com/
for pedagogical games, usually find themselves having to
spend a significant amount of time in developing their own
tools and methods.
The serious games industry is growing as well, supported by continuous research on the potential of using games for purposes other than entertainment [10], [12], [21]. Serious games can be used to train and teach players on various subjects (e.g. math fractions [14], logic operators [16]) or to raise awareness of social issues (e.g. sustainability [17], cultural diversity [4], bullying [20]). In fact, one of the more interesting aspects of developing games that are designed to teach is that their design is centered around pedagogical challenges. As such, even if a game is very engaging, it will still fail to achieve its purpose if it does not produce a pedagogical outcome. Conversely, a game might have great pedagogical content but fail to deliver it in an engaging manner. One of
the important aspects that make players engaged in a game
world is the appeal of its characters. Particularly, non-player
characters provide the opportunity for the player to engage
in social interactions in a safe environment and within the
confines of the game rules and structures. From a training
perspective, players are free to experiment and observe the
effects their actions have on simulated others in order to obtain
and practice certain social skills. However, the range of social
interactions that are typically offered to players is still quite
limited when compared to real human interaction.
With the goal of expanding the range and complexity of
social interactions between characters and humans, there has
been a substantial amount of research dedicated to the creation
and study of virtual agents. These are embodied characters that
are designed to be able to interact with humans in a natural
manner [8]. The architectures that have been developed for
these characters can be rather complex, having to deal with
the challenges of interpreting and synthesizing both verbal and
non-verbal actions as well as modeling cognitive and affective
processes related to decision making.
Although researchers have been able to successfully apply
virtual agent architectures in the development of serious games
(e.g. [1], [9], [11]), such architectures have not yet been widely
adopted by game studios. While the accessibility of these
architectures can be improved through the creation of better
graphical user interfaces and more extensive documentation,
there are also technical and conceptual issues that must be
addressed [18]. A virtual agent architecture relies on a type of authoring that is oriented towards cognitive concepts such as goals and beliefs, which are quite familiar to AI researchers but not necessarily to game developers. Also, an agent model promotes a type of storytelling experience that is distributed or character-centric [2], whereas popular game development tools like Articy:draft (https://www.nevigo.com/en/articydraft) or Twine (http://twinery.org) are designed around a plot-centric approach with branching dialogues. While these tools can be used to create complex narratives, they make a strong distinction between the player and the other characters, by giving dialogue options to the former but not the latter. In the proposed toolkit, tying dialogue options to a specific character or to the player is possible but not required.
In this paper, we present a novel toolkit that aims to promote the adoption of virtual agent tools by game developers for creating game characters that are more socially and emotionally intelligent (e.g. able to adapt to the situation and to the players). The toolkit is based on the existing FAtiMA Modular architecture [5], which has been successfully used in several research applications in the past [1], [3], [4]. The improvements made to this architecture were derived from a close collaboration with game developers at the company PlayGen (http://playgen.com/), which used the toolkit to develop two games for learning. The first one is named Space Modules Inc and is being developed for an educational institute in the Netherlands named Stichting Praktijkleren (https://www.stichtingpraktijkleren.nl). The game is
designed to teach its players how to provide better customer
service in technical support. The second game is named Sports Team Manager and is being developed for OKKAM (http://www.okkam.it/), a spin-off company of the University of Trento in Italy. It is a single-player game where players assume the role of a sailing team manager. Players must hire, fire and communicate with their team members in order to succeed and, in doing so, learn some personnel management skills.
This collaboration is part of the ongoing RAGE project (http://rageproject.eu), an EU-funded project with the goal of developing and promoting new technologies that directly support applied game developers in creating better applied games in a more cost-effective manner [19].
II. FATIMA TOOLKIT
FAtiMA Toolkit is an open-source project (https://github.com/GAIPS-INESC-ID/FAtiMA-Toolkit) that contains a collection of tools and libraries with the aim of enabling the creation of interactive storytelling scenarios with non-player characters that can interact socially with human players in a variety of contexts.
Storytelling can bring multiple benefits to serious games
[15]. Not only are people more likely to remember what
they learned if the content is integrated in the context of a
narrative, but also, an emotionally engaging story will greatly
motivate players to achieve the intended learning goals of
the game. This form of storytelling centers on the ability
of players to shape how the story unfolds according to their
actions, as participants rather than as observers. This feeling
of agency increases player engagement and encourages them
to reflect more deeply on the consequences of their choices.
However, the more freedom given to players, the more difficult it becomes to author the scenarios with a traditional scripting approach, because the number of possible narrative paths grows combinatorially and quickly becomes intractable.
Our proposed storytelling framework deals with this issue
by following a character-centered approach rather than a
plot-centered one. The authoring is thus focused around the
different roles that the characters might play in the game and
the narrative emerges from how the characters behave in their
given roles. The challenge then becomes to author these roles
in a way that characters act in a believable manner but also
serve the intended learning goals of the scenario.
As previously mentioned, the toolkit is the result of several improvements that were made to the FAtiMA Modular architecture [5]. For example, the code was ported from Java to C# in order to streamline the integration with game engines such as Unity3D. Also, each component within the toolkit is able to fully load and save its internal state to a JSON file. As such, it is possible for game developers to use their text editor of choice for any kind of authoring task. However, the toolkit contains some complex data structures that refer to one another, such as emotions, an autobiographical memory and appraisal rules, among others. For this reason, each component has an authoring tool with a graphical user interface that helps users create content in a declarative way, preventing syntactical errors. The fact that the entire internal state of each component within the toolkit can be written to a file also works as a logging mechanism.
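As an illustration of this file-based workflow, the sketch below shows how a character profile could be round-tripped through JSON. The CharacterProfile type and its fields are hypothetical stand-ins for the toolkit's actual data structures, which are richer than what is shown here.

```csharp
// Illustrative sketch only: a minimal stand-in profile, not the toolkit's
// actual data model, showing the JSON save/load workflow described above.
using System;
using System.IO;
using System.Text.Json;

public class CharacterProfile
{
    public string Name { get; set; } = "";
    public double InitialMood { get; set; }                       // e.g. in [-10, 10]
    public string[] AppraisalRules { get; set; } = Array.Empty<string>();
}

public static class ProfileEditingExample
{
    public static void Main()
    {
        // Author a profile in code; it could equally be edited by hand in a
        // text editor, since the on-disk format is plain JSON.
        var profile = new CharacterProfile
        {
            Name = "AngryCustomer",
            InitialMood = -5,
            AppraisalRules = new[] { "Event(WrongSolution) -> Distress" }
        };

        // Serialize the full state to disk. Re-reading the file restores the
        // state, and periodic dumps of it double as a logging mechanism.
        var options = new JsonSerializerOptions { WriteIndented = true };
        File.WriteAllText("AngryCustomer.json",
                          JsonSerializer.Serialize(profile, options));

        var reloaded = JsonSerializer.Deserialize<CharacterProfile>(
            File.ReadAllText("AngryCustomer.json"));
        Console.WriteLine($"{reloaded.Name} starts with mood {reloaded.InitialMood}");
    }
}
```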
Many agent-based tools are designed to function as a framework or as a stand-alone application that the game must communicate with using a specific protocol. In both of these cases, the game developer has to accommodate the game to how the agent tool specifies its communication protocol, its execution cycle and its extension points, instead of the other way around. Moreover, given their opinionated nature, agent-based frameworks are difficult or even impossible to compose together. It was based on these limitations that we applied a functional library design pattern in the development of the toolkit. Consequently, all the different components were developed as libraries, i.e. collections of functions with well-defined inputs and outputs, that the game developer can directly import and explore more easily without having to worry about future compatibility issues with other tools.
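To make the contrast with framework-style integration concrete, the sketch below shows the call pattern that a library-style design allows: the game's own update loop stays in control and simply pushes events in and pulls decisions out. All type and method names are illustrative stand-ins, not the toolkit's actual interface.

```csharp
// Hypothetical library-style agent stub (not the real FAtiMA API): the game
// owns the loop and calls functions with well-defined inputs and outputs.
using System;
using System.Collections.Generic;

public class AgentLibraryStub
{
    private readonly List<string> beliefs = new List<string>();

    // Input: an event string describing something that happened in the game.
    public void Perceive(string gameEvent) => beliefs.Add(gameEvent);

    // Output: the next action, derived from the agent's current beliefs.
    public string Decide() =>
        beliefs.Contains("Event(PlayerGreeted, SELF)") ? "GreetBack" : "Idle";
}

public static class GameLoopExample
{
    public static void Main()
    {
        var npc = new AgentLibraryStub();

        // The game engine decides when to push events and pull decisions,
        // instead of conforming to an external framework's execution cycle.
        npc.Perceive("Event(PlayerGreeted, SELF)");
        Console.WriteLine($"NPC action: {npc.Decide()}");
    }
}
```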
The main functionality of FAtiMA Toolkit is divided into two main components: the Role-Play Character and the Integrated Authoring Tool.
Fig. 1. Diagram of the Role Play Character Component.
A. Role-Play Character
The Role-Play Character (RPC) is the name given to the
component (see Figure 1) within the toolkit that manages
each character’s reasoning and emotional state based on a
perception-action mechanism, which can be described in the
following manner. Firstly, the events that occur in the game
world are sent as input to the Emotional Appraisal component,
which is based on a formalization of the OCC cognitive theory
of emotions [6]. This component then determines if the event
will trigger a new emotion for the character. Each character can
be configured with different appraisal rules that will result in
having different emotional outcomes for the same events. After
the emotional appraisal process is done, any resulting emotion
is added to the Emotional State. Events are also stored in the
character’s Autobiographical Memory along with any emotion
associated to them. The character’s Knowledge Base keeps
track of what the character believes as logical predicates such
as Weather(Outside) = Raining. These beliefs are also updated
according to the events sent by the game world.
After all the internal structures are updated, the RPC uses
the Emotional Decision Making component to select the next
action of the character. This is done using a rule-based mech-
anism that considers both the beliefs of the character as well
as its emotional state. In addition to regular beliefs that are
directly stored in the Knowledge Base, the decision-making
process also takes into account meta-beliefs, which are added by Reasoning Components such as the Dialogue Manager or the MCTS (Monte Carlo Tree Search) component. Syntactically, meta-beliefs are expressed in the same manner as regular ones. The key distinction is that, rather than being stored, the values of these beliefs are determined dynamically by the algorithm specified in the reasoning component. This allows the combination of multiple decision-making strategies into a unified rule-based system.
Developers can also register their own modules as additional
reasoning components and the meta-beliefs they introduce
will become available in the conditional rules of all other
components. For instance, consider a game with a specific scoring mechanism in which the developer wants a decision rule for NPCs to congratulate the player whenever the player's score reaches a certain threshold. This could be achieved by registering the scoring mechanism as a new Reasoning Component that adds Score(Player) = [x] as a new meta-belief.
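The sketch below illustrates this meta-belief mechanism under simplified assumptions: a reasoning component computes a value on demand instead of storing it in the knowledge base. The interface and class names are hypothetical and do not correspond to the toolkit's actual extension API.

```csharp
// Hypothetical sketch of meta-beliefs provided by a registered reasoning
// component; all names are illustrative, not the toolkit's real interface.
using System;
using System.Collections.Generic;

public interface IReasoningComponent
{
    // Returns a value for the queried meta-belief, or null when not handled.
    string Evaluate(string belief);
}

public class ScoreComponent : IReasoningComponent
{
    public int PlayerScore { get; set; }

    public string Evaluate(string belief) =>
        belief == "Score(Player)" ? PlayerScore.ToString() : null;
}

public class KnowledgeBaseStub
{
    private readonly Dictionary<string, string> stored = new Dictionary<string, string>();
    private readonly List<IReasoningComponent> components = new List<IReasoningComponent>();

    public void Register(IReasoningComponent component) => components.Add(component);
    public void Tell(string belief, string value) => stored[belief] = value;

    // Regular beliefs are looked up; meta-beliefs are computed dynamically.
    public string Ask(string belief)
    {
        if (stored.TryGetValue(belief, out var value)) return value;
        foreach (var component in components)
        {
            var result = component.Evaluate(belief);
            if (result != null) return result;
        }
        return null;
    }
}

public static class MetaBeliefExample
{
    public static void Main()
    {
        var kb = new KnowledgeBaseStub();
        kb.Tell("Weather(Outside)", "Raining");                // regular belief
        kb.Register(new ScoreComponent { PlayerScore = 120 }); // meta-belief provider

        // A decision rule such as "congratulate the player when Score(Player)
        // exceeds 100" can now query both kinds of belief uniformly.
        Console.WriteLine(kb.Ask("Weather(Outside)"));         // prints: Raining
        Console.WriteLine(kb.Ask("Score(Player)"));            // prints: 120
    }
}
```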
Game characters should have believable emotional re-
sponses to give the illusion of life. For applied games that rely
heavily on social interaction, it quickly becomes impractical to
manually script all the emotional reactions of each character
for each possible event. The RPC asset tackles this issue by
allowing game developers to create general profiles of how
characters respond emotionally in their games. They can test
and configure these profiles outside of the game and they can
naturally switch between profiles without having to recompile
the game source code.
B. Integrated Authoring Tool
The Integrated Authoring Tool is the other main component of the toolkit and is designed to be the central hub for game developers when creating a new storytelling scenario or adapting an existing one. It allows the configuration of the
general aspects of the scenario and provides quick access to
the authoring tools of the Role-Play Character component.
However, the main feature of this component is that it contains
a dialogue editor that allows the developer to specify the
dialogue acts that are available for both the player and the
characters.
For the purpose of dialogue management, the author must define the interaction state in which each dialogue may occur, as well as the next state that follows if that dialogue is selected.
During runtime, all characters are informed about the existing
dialogue acts as well as dialogue states. Characters are then
able to use this information to decide what to say according
to their internal state and decision-making mechanisms. To
give an example, consider that the integrated authoring tool
informs a character that at the start of the interaction there
are two valid dialogues, one to greet the player respectfully,
another to greet the player in an angry manner. If the character
is angry, the emotional decision making asset will select the
second option. If not, then the first greeting will be selected
instead.
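The sketch below illustrates this kind of state-based, emotion-conditioned dialogue selection. The data structures and the condition format are illustrative assumptions rather than the toolkit's actual dialogue representation.

```csharp
// Hypothetical sketch: dialogue acts carry an interaction state and an
// applicability condition, and the character picks a valid act that matches
// its current emotional state. Names are illustrative only.
using System;
using System.Collections.Generic;
using System.Linq;

public class CharacterState
{
    public double Mood;   // here, negative values stand for an angry character
}

public class DialogueAct
{
    public string CurrentState = "";
    public string NextState = "";
    public string Utterance = "";
    public Func<CharacterState, bool> Condition = _ => true;
}

public static class DialogueExample
{
    public static void Main()
    {
        var acts = new List<DialogueAct>
        {
            new DialogueAct { CurrentState = "Start", NextState = "Greeted",
                Utterance = "Good morning, how can I help you?",
                Condition = c => c.Mood >= 0 },
            new DialogueAct { CurrentState = "Start", NextState = "Greeted",
                Utterance = "What do you want?!",
                Condition = c => c.Mood < 0 },
        };

        var angryCustomer = new CharacterState { Mood = -4 };

        // The character is informed of the acts valid in the current state
        // and chooses according to its internal (emotional) state.
        var chosen = acts.First(a => a.CurrentState == "Start"
                                     && a.Condition(angryCustomer));
        Console.WriteLine(chosen.Utterance);   // prints: What do you want?!
    }
}
```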
III. CASE STUDY 1 - SPACE MODULES INC
Space Modules Inc is a single player game where the player
takes on the role of a customer service representative for a
spaceship part manufacturer “Space Modules Inc”. The virtual characters in the game play the role of customers who call the player (see Figure 2) about hardware and software faults they are experiencing. Some characters will be angry, others uncooperative or stressed, and it is up to the player to manage the situation and decide how best to respond.
Players have to respond to situations by engaging in conversation with customers. This is done by having the player pick one of the available dialogue options in response to the character's chosen dialogue. The process is repeated until the final state of the conversation is reached, and then the player's score is passed to the review screen and shown to the player (see Figure 3). The customer satisfaction score depends on how the player affected the emotional state of the character. The idea is that each customer can have a different emotional profile, thus providing a different challenge to the player.
Fig. 2. Space Modules Inc. Game Flow.
Fig. 3. Space Modules Inc - Dialogue Screen (left image) and Result Screen (right image) Flow.
From a pedagogical perspective, players must learn how to manage intense emotions and how best to respond to customers in a professional manner. In other words,
the pedagogical goal of the game is to train players in being
able to identify a person’s emotional state through verbal and
nonverbal feedback and gain further experience in providing
effective emotional responses.
The emotional reactions of the customers in Space Modules are determined by the Role-Play Character component. According to the selected emotional profile, this component initializes the overall mood of the character to a given value between -10 and 10. The component then updates this value based on how it evaluates the option selected by the player. If
the player decides, for instance, to give the wrong solution for
the problem that the customer has, the component will gener-
ate a “Distress” emotion and the overall mood decreases. The
player can then repair the mood of the character by selecting
a dialogue that shows empathy for the character’s distress.
Fig. 4. Sports Team Manager Game Flow.
However, if this dialogue is selected when the character is not
feeling distressed, then it will be judged negatively instead and
the mood of the character decreases accordingly. The amount by which the mood decreases or increases is another parameter that can be configured in the RPC component.
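The sketch below illustrates the kind of configurable mood dynamics described above. The specific deltas, the threshold used to decide whether the customer counts as distressed, and the method names are all hypothetical, not the game's actual parameters.

```csharp
// Illustrative mood-dynamics sketch with made-up values; the real game's
// appraisal rules and deltas are configured per emotional profile.
using System;

public class CustomerMood
{
    private double mood;   // kept within the [-10, 10] range used by the RPC

    public CustomerMood(double initialMood) => mood = Clamp(initialMood);

    public double Value => mood;

    public void OnPlayerGaveWrongSolution()
    {
        // Appraised as a "Distress"-eliciting event: mood drops.
        mood = Clamp(mood - 2.0);
    }

    public void OnPlayerShowedEmpathy()
    {
        // Empathy repairs mood only if the customer is actually distressed
        // (negative mood is used here as a proxy); otherwise the remark is
        // judged negatively and mood drops instead.
        mood = Clamp(mood < 0 ? mood + 3.0 : mood - 1.0);
    }

    private static double Clamp(double v) => Math.Max(-10.0, Math.Min(10.0, v));
}

public static class MoodExample
{
    public static void Main()
    {
        var customer = new CustomerMood(initialMood: 2.0);  // set by the profile
        customer.OnPlayerGaveWrongSolution();               // mood: 2 -> 0
        customer.OnPlayerShowedEmpathy();                   // not distressed: 0 -> -1
        Console.WriteLine($"Final mood: {customer.Value}"); // prints: Final mood: -1
    }
}
```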
IV. CASE STUDY 2 - SPORTS TEAM MANAGER
Sports Team Manager is an applied game also developed
by PlayGen with the assistance of the FAtiMA Toolkit. The overall goal of the game is for the player to assemble the best-performing sailing team by resolving conflicts and managing the team's interactions.
The player interviews virtual characters to identify their skills
and personalities. The team has a set of roles, each with
overlapping skill requirements. A successful sailing team is
not solely based on skill, but also on the social relationships
between team members. Players must communicate with their
team, deciding which members are placed into each position
per race and resolve conflict situations as they arise. Figure 4
shows the game flow during an individual race session.
The players must first review the positions they need to
fill on the boat, taking note of the required skills for each.
Next, they must meet with their NPC team members, taking
into account the skills and inter-team relationships already
known, asking questions where further information is needed.
Using this information they should, if required, recruit new
members into the team and place individuals into positions.
After racing with the selected line-up, players will occasionally
have to handle events with team members. After the event
stage concludes, using the result and pieces of feedback from
the race session, players begin the gameplay loop again, but
now with additional information to assist in their decision
making.
The Role-Play Character component is used here to model the emotional state and decision making of each team member based on their belief set. The component analyses the actions of the player and determines their effect on the emotional state of each NPC, based on that NPC's current state and the emotional weighting of the event from its perspective. To give an example, after each race session, it is possible for a team member to come to the player in order to talk to them. The character might, for instance, ask why she was not picked (see Figure 5). Players can then reply to the team member by selecting from a list of dialogue options. If the player selects an overly aggressive reply, the character is likely to feel angry, affecting
Fig. 5. Sports Team Manager - Post-Race Event.
As mentioned previously, the Role-Play Character compo-
nent stores the beliefs of every NPC and saves these beliefs
over multiple play sessions. These beliefs are related to
information such as their last position in the team, skill ratings,
opinion ratings and event states. Furthermore, the events sent
to the characters are saved, meaning a history of events can
be preserved. This allows a history of every team selection to
be stored. As all of this information is stored regularly, it can also be reloaded in later play sessions, allowing for the possibility of a persistent game.
Concerning the Integrated Authoring Tool, this component
is used to manage the configuration of the scenario, which
contains a list of all possible role-play characters that are
dynamically created at the beginning of and during each game.
The component also contains all of the dialogue options for
the player and the NPCs during various parts of the game,
such as team member meetings and post-race events.
V. GAME DEVELOPERS FEEDBACK
Game developers from PlayGen integrated the FAtiMA Toolkit into their game code independently and successfully used the toolkit to support the intended gameplay in the two games. They relied on the documentation and examples created for the community and had full access to the toolkit source code. We conducted an informal interview to get their impressions of the technical integration and the usefulness of the toolkit. Contact was made by email and face to face. The conversation revolved around three main questions: (1) How was the FAtiMA Toolkit used in the development of the game? (2) What were the main benefits of using the FAtiMA Toolkit? and (3) What were the main difficulties of using the FAtiMA Toolkit?
Game developers reported that “the integration was not
difficult, but that a proper use of the toolkit requires a steep
initial learning curve”. The toolkit facilitated the creation of
mechanisms “to determine the change in emotional state and
mood depending on the dialogue chosen by the player” and
was also useful “to calculate the NPC response to the provided
piece of player dialogue, depending on their emotional state
and the type of player dialogue selected.” and to “decide how
a NPC should greet the player depending on their current
relationship with the player.”. They highlighted two main
benefits regarding the pedagogical value that the FAtiMA
toolkit provided. First, the use of the toolkit was “good because
players get immediate implicit (contextual) feedback”. By this they meant that the emotional responses of the characters were potentially very good cues for the players to assess whether they were playing well, without the need to show an explicit numeric score. The second benefit was the “ability to dictate the course of conversation indirectly through using the toolkit's dialogue and NPC emotions systems, as these have made setting up and controlling scenarios a much easier process as a result”.
What is relevant, in the pedagogical sense, is that the definition and setting up of the scenarios was done directly by the trainers who will apply the games. Hence, the game
can be configured and adapted by the people who have the
most knowledge about the content to be delivered in order to
achieve the learning goals of the game.
VI. STUDENTS GAME AI PROJECTS
The toolkit was also put to the test in a course on Game AI at IST, University of Lisbon, in the fall semester. It was used in the final project of the course (the fourth of four projects), which constituted 30% of the grade. Sixty-eight students, working in groups of three, were engaged. They had a workshop on the FAtiMA Toolkit (of about 2 hours) before tackling the problem. They used a version of the toolkit that is integrated with the Unity game engine and uses components developed by other members of the RAGE project to realise the body and expression of the characters.
Each group was given the task of using the FAtiMA
toolkit to create two conversational scenarios, one with a
single character interacting with the player and another with
two characters engaging in conversation with the player at
the same time. Students were free to select any theme for
the conversation as long as the non-player characters had
believable emotional responses and could be configured to
have different personalities. All groups managed to finish the project. Some of the scenarios created had quite interesting
and surprising themes. For instance, one group chose to create
a scenario where players were at the gates of heaven and had
to convince the gatekeeper to let them in. To be successful,
players had to avoid upsetting the gatekeeper too much. Other
groups opted for a more serious theme such as a job interview
(see Figure 6) or a shopping scene with a father, his son, and a
shopkeeper. With the students' permission, these scenarios will be made publicly available as examples that are part of the toolkit.
From a software quality perspective, given the wide range of
scenarios explored by the students, we were able to identify
some issues with the toolkit, which were promptly fixed.
Fig. 6. Students’ Job Interview Demo.
VII. CONCLUSION
In this paper, we argued that the development of serious
games is faced with additional challenges that are related to
the pedagogical goals that the designers have in mind. For
instance, in games that are about teaching conversational skills,
developers have to figure out how to offer a rich interaction
space that supports the exploration and failure of different
communicative actions and their associated socio-emotional
effects.
In the mainstream gaming industry, dialogues are typically
handled through branching structures that limit the set of
possible interactions, by offering little flexibility in the way
characters respond to what the players say to them. Alterna-
tively, in the research field of virtual agents, researchers have
developed and proposed tools for the creation of conversational
agents that have rich socio-emotional models driving their
behavior. These agents have great potential for being applied
in serious games that teach soft skills, as their behaviors are
more procedural and less scripted. However, so far, agent
architectures are still far from being widely used in the serious
games industry due to, in large part, accessibility issues.
With those issues in mind, we took an existing virtual agent
architecture, FAtiMA Modular, and adapted it to a new toolkit
with the goal of making it more appealing to game developers.
To that end, we adopted a functional library pattern instead of a framework-based approach. Moreover, the functionality was divided into two main components, the Role-Play Character
and the Integrated Authoring Tool. The first is responsible
for managing the character’s beliefs, memories and emotional
state as well as running a decision-making process for each
character according to its ascribed role. The second component
allows the developer to manage the list of all the characters
that are available in each game scenario as well as the available
dialogues that the characters, including the player’s avatar, can
select from at any given state of the interaction.
The resulting toolkit was then applied successfully by a
game studio, PlayGen, in the development of two serious
games. The first game was designed to teach players how to
properly communicate with emotional customers in a customer
service setting. The second game has the player managing a sailing sports team composed of multiple characters with different role preferences. Both of these games benefited from the use of the toolkit to add emotional dynamics to their characters that are reflected in their decisions. Additionally, a group of 68
students successfully developed projects for a Game AI course
using the toolkit. This experience was a good stress test on
the toolkit given the wide variety of scenarios explored by the
students.
As future work, we plan to conduct a more formal user study centered on the authoring capabilities of the toolkit. The
main idea will be to have participants watch a video tutorial
about how the toolkit works and then be instructed to change
an existing game scenario according to a set of predefined
goals. The feedback obtained will then be used to further
improve the toolkit.
ACKNOWLEDGMENTS
This work was supported by national funds through Fundação para a Ciência e a Tecnologia (FCT) with reference UID/CEC/50021/2013 and has been partially funded by the EC H2020 project RAGE (Realising an Applied Gaming Eco-System), Grant agreement No 644187.
REFERENCES
[1] R. Aylett, M. Vala, P. Sequeira, and A. Paiva, “Fearnot!–an emergent
narrative approach to virtual dramas for anti-bullying education,” in
International Conference on Virtual Storytelling. Springer, 2007, pp.
202–205.
[2] M. Cavazza, F. Charles, and S. J. Mead, “Character-based interactive
storytelling,” IEEE Intelligent systems, vol. 17, no. 4, pp. 17–24, 2002.
[3] F. Correia, P. Alves-Oliveira, N. Maia, T. Ribeiro, S. Petisca, F. S. Melo,
and A. Paiva, “Just follow the suit! trust in human-robot interactions
during card game playing,” in Robot and Human Interactive Communi-
cation (RO-MAN), 2016 25th IEEE International Symposium on. IEEE,
2016, pp. 507–512.
[4] N. Degens and G. Hofstede, “Traveller - Intercultural training with
intelligent agents for young adults,” Proceedings of the . . . , 2013.
[5] J. Dias, S. Mascarenhas, and A. Paiva, “Fatima modular: Towards an agent architecture with a generic appraisal framework,” Emotion Modeling, vol. 8750, pp. 44–56, 2014.
[6] J. Dias and A. Paiva, “Feeling and reasoning: A computational model
for emotional characters,” in EPIA, vol. 3808. Springer, 2005, pp.
127–140.
[7] The Economist, “Why video games are so expensive to develop,” The Economist Group Limited, 2014. [Online]. Available: http://www.economist.com/blogs/economist-explains/2014/09/economist-explains-15
[8] J. Gratch, J. Rickel, E. André, J. Cassell, E. Petajan, and N. Badler, “Creating interactive virtual humans: Some assembly required,” IEEE Intelligent Systems, vol. 17, no. 4, pp. 54–63, 2002.
[9] W. L. Johnson and A. Valente, “Tactical language and culture training
systems: Using artificial intelligence to teach foreign languages and
cultures.” in AAAI, 2008, pp. 1632–1639.
[10] F. Khatib, S. Cooper, M. D. Tyka, K. Xu, I. Makedon, Z. Popovic, D. Baker, and F. Players, “From the Cover: Algorithm discovery by protein folding game players,” Proceedings of the National Academy of Sciences, vol. 108, no. 47, pp. 18949–18953, 2011.
[11] J. M. Kim, R. W. Hill Jr, P. J. Durlach, H. C. Lane, E. Forbell, M. Core, S. Marsella, D. Pynadath, and J. Hart, “Bilat: A game-based environment for practicing negotiation in a cultural context,” International Journal of Artificial Intelligence in Education, vol. 19, no. 3, pp. 289–308, 2009.
[12] G. Koo and S. Seider, “Video Games for Prosocial Learning,” Ethics and Game Design, pp. 16–33, 2010. [Online]. Available: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-61520-845-6.ch002
[13] A. Martens, H. Diener, and S. Malo, “Game-based learning with
computers–learning, simulations, and games,” Transactions on edutain-
ment I, pp. 172–190, 2008.
[14] M. Ninaus, K. Kiili, J. McMullen, and K. Moeller, “A Game-Based Approach to Examining Students' Conceptual Knowledge of Fractions,” in Games and Learning Alliance: 5th International Conference, GALA 2016, Utrecht, The Netherlands, December 5–7, 2016, Proceedings, R. Bottino, J. Jeuring, and R. C. Veltkamp, Eds. Cham: Springer International Publishing, 2016, pp. 37–49. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-50182-6_4
[15] N. Padilla-Zea, F. L. Gutiérrez, J. R. López-Arcos, A. Abad-Arranz, and P. Paderewski, “Modeling storytelling to be used in educational video games,” Computers in Human Behavior, vol. 31, pp. 461–474, 2014. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0747563213001295
[16] P. Sajjadi, E. El Sayed, and O. De Troyer, “On the Impact of the Dominant Intelligences of Players on Learning Outcome and Game Experience in Educational Games: The TrueBiters Case,” in Games and Learning Alliance: 5th International Conference, GALA 2016, Utrecht, The Netherlands, December 5–7, 2016, Proceedings, R. Bottino, J. Jeuring, and R. C. Veltkamp, Eds. Cham: Springer International Publishing, 2016, pp. 221–231. [Online]. Available: http://dx.doi.org/10.1007/978-3-319-50182-6_20
[17] A. Santos, F. Strada, and A. Bottino, “Games and Learning Alliance,” in Games and Learning Alliance: 5th International Conference, GALA 2016, Utrecht, The Netherlands, December 5–7, 2016, Proceedings, ser. Lecture Notes in Computer Science, A. De Gloria, Ed. Cham: Springer International Publishing, 2015, vol. 9221, no. April 2016, pp. 73–82. [Online]. Available: http://link.springer.com/10.1007/978-3-319-22960-7
[18] U. Spierling and N. Szilas, “Authoring issues beyond tools,” in Joint
International Conference on Interactive Digital Storytelling. Springer,
2009, pp. 50–61.
[19] W. Van Der Vegt, W. Westera, E. Nyamsuren, A. Georgiev, and I. M.
Ortiz, “RAGE Architecture for Reusable Serious Gaming Technology
Components,” International Journal of Computer Games Technology,
vol. 2016, 2016.
[20] N. Vannini, S. Watson, K. Dautenhahn, S. Enz, M. Sapouna, D. Wolke, S. Woods, L. Hall, A. Paiva, E. André, R. Aylett, and W. Schneider, “"FearNot!": A computer-based anti-bullying-programme designed to foster peer intervention,” European Journal of Psychology of Education, vol. 26, no. 1, pp. 21–44, 2011.
[21] L. von Ahn and L. Dabbish, “Labeling images with a computer
game,” Proceedings of the 2004 conference on Human factors in
computing systems - CHI ’04, pp. 319–326, 2004. [Online]. Available:
http://portal.acm.org/citation.cfm?doid=985692.985733