Conference Paper

SENSEI: An Intelligent Advisory System for the eSport Community and Casual Players

... only game developers but also end users, i.e. casual human players, who may wish to seek summaries of their game experiences in a simplified, higher-level language [4]. ...
... It is much easier to analyze compact representations. This aspect also corresponds to another game-related project that we are developing, namely a support system that advises players how to improve their skills [4]. The ability to explain advice at a level that is understandable for human players is quite analogous to the ability to explain AI agent behaviors to game creators; we believe that both of these tasks require a kind of conceptual language that describes a given game in a simplified, hierarchical way. ...
... As already outlined, our idea is to help the MCTS algorithm by representing a complex game in a simplified (abstract) fashion. This kind of approach could be useful in many aspects of game analytics [4], although in this particular paper we concentrate mainly on supporting game creators in embedding AI players into the designed environments. This way, we continue our research related to MCTS-based game-playing AI. ...
Conference Paper
Full-text available
We propose a new approach to assist computer game creators in introducing AI agent-players into their games. We point out that traditional methods, such as Monte Carlo Tree Search (MCTS), may not provide creators with good interfaces for embedding the required AI elements because of the overly fine-grained space of (often loosely defined) game states. Thus, we suggest following the paradigms of information granulation and redefining states/actions at a higher level of abstraction, so that the MCTS algorithms can operate on more general concepts which reflect the creators' domain knowledge. In our approach, the game developers are responsible for specifying the mechanisms behind particular high-level states/actions from the perspective of the "real world of the game". Meanwhile, the MCTS routines take advantage of the fact that many unique sequences of fine-grained actions fall into the same clusters, reflecting information granules that correspond to the introduced concepts.
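As a rough illustration of the granulation idea (the toy game, names, and numbers below are assumptions, not the authors' implementation), MCTS can search over developer-defined macro-actions instead of raw game moves:

import math
import random

# Hypothetical granules: high-level concepts defined by the game creators.
# Many distinct fine-grained action sequences fall into one of these.
GRANULES = ["advance", "hold", "retreat"]

def apply_granule(state, granule):
    """Toy transition: state is a soldier's distance to its objective."""
    step = {"advance": -1, "hold": 0, "retreat": +1}[granule]
    return max(0, state + step + random.choice([0, 1]))  # noisy low-level outcome

def rollout(state, depth=10):
    """Random playout over granules; reward 1.0 if the objective is reached."""
    for _ in range(depth):
        if state == 0:
            return 1.0
        state = apply_granule(state, random.choice(GRANULES))
    return 0.0

def uct_choose(root_state, iterations=2000, c=1.4):
    """Flat UCB over granules, a one-ply stand-in for a full MCTS tree."""
    visits = {g: 0 for g in GRANULES}
    value = {g: 0.0 for g in GRANULES}
    for n in range(1, iterations + 1):
        g = max(GRANULES, key=lambda a: float("inf") if visits[a] == 0
                else value[a] / visits[a] + c * math.sqrt(math.log(n) / visits[a]))
        reward = rollout(apply_granule(root_state, g))
        visits[g] += 1
        value[g] += reward
    return max(GRANULES, key=lambda a: visits[a])

print(uct_choose(root_state=5))   # expected to prefer "advance"

The search loop stays unchanged once apply_granule delegates to the creators' own specification of what each granule means in the "real world of the game".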
... While systems and methodologies for tracking and analyzing player data have grown and advanced, there is a notable lack of understanding of how players use their data to gain expertise in esports. This is because many of the tools that have been developed have not involved player input or evaluation [16,21] or are not intended for player use at all, instead targeting developers [33] or spectators [19]. That being said, there does exist a collection of work targeting players, some of which conducted evaluations with those players; however, those evaluations focused predominantly on identifying best practices for design, comprehension, and usability [31,35]. ...
... There are numerous systems and methods that have been developed with player use in mind. One key example is SENSEI [16], described as an intelligent advisory system intended to help esports players improve their gameplay through advanced analytics and ML. There is also the work of Christiansen et al. [6] who developed a novel approach for measuring the causal effects of game features or player performance on chances of winning, motivated by a desire to help players and developers better understand the connection between a given set of statistics and victory. ...
Preprint
Full-text available
The rapid increase in the availability of player data and the advancement of player modeling technologies have resulted in an abundance of data-driven systems for the domain of esports, both within academia and the industry. However, there is a notable lack of research exploring how players use their data to gain expertise in the context of esports. In this position paper we discuss the current state of the field and argue that there is a need for further research into how players use their data and what they want from data-driven systems. We argue that such knowledge would be invaluable to better design data-driven systems that can aid players in gaining expertise and mastering gameplay.
... LogDL is designed to be queried interactively, used in knowledge discovery processes, and to provide new insights from the data. Such insights can be utilized in game analytics and coaching to increase human players' skills [27] or to provide commentary on e-sport games. ...
... A system that incorporates LogDL can be developed in such a way that technical AI/ML details are hidden and friendly interfaces are exposed. Actually, our aforementioned system that advises players how to improve their skills can be treated as an example of business intelligence development in the game industry [27]. Similarly, the analytics can be conducted in the "real world" over complex multimodal data sources [35]. ...
Conference Paper
Full-text available
We propose a new logic-based language called Log Description Language (LogDL), designed to be a medium for knowledge discovery workflows over complex data sets. It makes it possible to operate with the original data along with machine-learning-driven insights expressed as facts and rules, regarded as so-called descriptive logs characterizing the observed processes in real or virtual environments. LogDL is inspired by research at the border of AI and games, precisely by the Game Description Language (GDL) that was developed for General Game Playing (GGP). We emphasize that such formal frameworks for analyzing gameplay data are a good prerequisite for the case of real, "not digital" processes. We also refer to Fogs of War (FoW), our upcoming project related to AI in video games with limited information, whereby LogDL will be used as well.
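Since the abstract does not show LogDL's actual syntax, the following Python rendering is purely hypothetical; it only illustrates the underlying idea of deriving higher-level "descriptive log" facts from raw gameplay facts by means of rules:

# Hypothetical illustration, not LogDL syntax: raw gameplay facts as tuples,
# plus a rule that derives a higher-level fact a coach or advisor could use.
facts = [
    ("enemy_spotted", "player1", 12.0),   # (predicate, subject, timestamp)
    ("shot_fired",    "player1", 12.4),
    ("shot_fired",    "player1", 12.9),
]

def panic_burst(facts, window=2.0, min_shots=2):
    """Derived fact: the player fired a rapid burst right after spotting an enemy."""
    spots = [t for p, _, t in facts if p == "enemy_spotted"]
    shots = [t for p, _, t in facts if p == "shot_fired"]
    return any(sum(1 for t in shots if 0 <= t - ts <= window) >= min_shots
               for ts in spots)

print(panic_burst(facts))   # True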
... be attributed to the human-likeness or realism of the bot's behavior. As a base for this evaluation, we use the game analytics/advisory portal for Tactical Troops [31], which is available online (https://sensei.tacticaltroops.net/analysis/playstyle/). ...
Conference Paper
Full-text available
Tactical Troops: Anthracite Shift is a squad-based tactics game with a complex environment. It is a commercial game released on the Steam platform. This paper is authored by its original developers. It presents how the AI-driven players featured in the game have been designed and implemented. Their procedure of operation is based on a hierarchical combination of Monte Carlo Tree Search and Utility AI. Although these methods have already been applied to video games individually, no published works employ such a combination. Solutions for creating the AI in commercial games are rarely open to the public, which motivated us even more. The quality of the AI agents has been assessed from two perspectives: playing strength and believability.
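The hierarchical combination mentioned in the abstract could be sketched roughly as below; the behaviour names, weights, and context fields are illustrative assumptions, not the game's actual code. A utility layer ranks high-level behaviours, and the winning behaviour delegates to a lower-level planner (MCTS in the paper):

# Sketch: Utility AI on top, lower-level search underneath (assumed names).
def score_attack(ctx):  return 0.7 * ctx["enemy_visible"] + 0.3 * ctx["ammo"]
def score_regroup(ctx): return 0.8 * (1.0 - ctx["squad_health"])
def score_capture(ctx): return 0.6 * ctx["objective_open"]

BEHAVIOURS = {"attack": score_attack, "regroup": score_regroup, "capture": score_capture}

def choose_behaviour(ctx):
    """Utility layer: pick the behaviour with the highest score for this turn."""
    return max(BEHAVIOURS, key=lambda name: BEHAVIOURS[name](ctx))

def plan_moves(behaviour, ctx):
    """Lower level: in the paper, this is where MCTS searches for concrete
    unit actions consistent with the chosen behaviour."""
    return f"run MCTS constrained to '{behaviour}' actions"

ctx = {"enemy_visible": 1.0, "ammo": 0.4, "squad_health": 0.9, "objective_open": 0.5}
behaviour = choose_behaviour(ctx)
print(behaviour, "->", plan_moves(behaviour, ctx))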
... The competition's task could also be viewed as a continuation of the topic started in the previous year, i.e. the prediction of win rates of decks from collectible card video games [9]. The ability to assess the quality of decks in a continuously evolving game is one of the core features of an advisory system for players, called SENSEI, which is being developed by one of the competition's sponsors [10]. ...
... The outcomes of our methods can be used in various ways, e.g., as an input for clustering and supervised classification of decks into archetypes, prediction of the opponent's deck archetype based on a few played cards, or as part of a game state representation for the value and policy models used by AI agents playing the game. They can also be considered as a component of a personalized advisory system that suggests replacements of cards in a deck to maximize the player's win chances [3]. Moreover, the proposed methodology can be followed in other applications. ...
Article
We investigate the impact of combining multiple representations derived from heterogeneous data sources on the performance of machine learning (ML) models. In particular, we experimentally compare the approach in which independent models are trained on data representations from different sources with the one in which a single model is trained on joined data representations. As a case study, we discuss various entity representation learning methods and their applications in our data-driven advisory framework for video game players, called SENSEI. We show how to use the discussed methods to learn representations of cards and decks for two popular collectible card games (CCGs), namely Clash Royale (CR) and Hearthstone: Heroes of Warcraft (HS). Then, we follow our approach to create ML models which constitute the back-end for several of SENSEI's end-user functionalities. When learning representations, we consider techniques inspired by the NLP domain, as they allow us to create embeddings which capture various aspects of similarity between entities. We put them together with representations composed of manually engineered features and standard bags-of-cards. On top of that, we propose a new end-to-end deep learning architecture with an attention mechanism aimed at reflecting meaningful inter-entity interactions.
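One concrete instance of the NLP-inspired representations described above, sketched under the assumption that a word2vec-style model (here gensim's Word2Vec) is trained on decks treated as "sentences" of cards; the paper does not prescribe a specific library, and the card names are made up:

# Sketch: card embeddings learned from decks, combined with a bag-of-cards.
from gensim.models import Word2Vec
import numpy as np

decks = [
    ["fireball", "frostbolt", "water_elemental", "polymorph"],
    ["fireball", "pyroblast", "mana_wyrm", "frostbolt"],
    ["swipe", "wild_growth", "innervate", "water_elemental"],
]

model = Word2Vec(sentences=decks, vector_size=16, window=4, min_count=1, epochs=50)

def deck_embedding(deck):
    """Average of card vectors: one simple learned deck representation."""
    return np.mean([model.wv[c] for c in deck], axis=0)

def bag_of_cards(deck, vocab):
    """Count-based representation that can be joined with the embedding."""
    return np.array([deck.count(c) for c in vocab])

vocab = sorted(model.wv.key_to_index)
joined = np.concatenate([deck_embedding(decks[0]), bag_of_cards(decks[0], vocab)])
print(joined.shape)   # embedding dimensions + vocabulary size

A single downstream model trained on such joined vectors corresponds to the "joined representations" setting compared in the article, as opposed to training independent models per source.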
Article
Full-text available
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo.
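For reference, the combined training objective reported in the AlphaGo Zero paper, where (p, v) are the network's policy and value outputs for a position, π is the search-improved policy from MCTS, z is the game outcome, and c weighs the L2 regularization of the parameters θ:

l = (z - v)^2 - π^T log p + c ||θ||^2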
Article
Full-text available
This paper summarizes the AAIA'17 Data Mining Challenge: Helping AI to Play Hearthstone, which was held between March 23 and May 15, 2017, at the Knowledge Pit platform. We briefly describe the scope and background of this competition in the context of a more general project related to the development of an AI engine for video games, called Grail. We also discuss the outcomes of this challenge and demonstrate how predictive models for the assessment of a player's winning chances can be utilized in the construction of an intelligent agent for playing Hearthstone. Finally, we show a few selected machine learning approaches for modeling state and action values in Hearthstone. We provide evaluation of a few promising solutions that may be used to create more advanced types of agents, especially in conjunction with Monte Carlo Tree Search algorithms.
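A minimal sketch of how such a win-chance model can serve as a state evaluator, e.g. at MCTS leaf nodes; the library choice, synthetic data, and feature names are assumptions, since the challenge solutions used a variety of ML approaches:

# Sketch: a classifier estimating P(win) from a game-state vector, used as
# the value of a leaf node instead of a random playout.
from sklearn.ensemble import GradientBoostingClassifier
import numpy as np

rng = np.random.default_rng(0)
# toy features: [own_hero_hp, enemy_hero_hp, own_board_value, enemy_board_value]
X = rng.uniform(0, 30, size=(1000, 4))
y = (X[:, 0] - X[:, 1] + X[:, 2] - X[:, 3] > 0).astype(int)   # synthetic labels

value_model = GradientBoostingClassifier().fit(X, y)

def evaluate_leaf(state_features):
    """Return the estimated win probability; MCTS backs this value up the tree."""
    return value_model.predict_proba([state_features])[0, 1]

print(round(evaluate_leaf([28, 12, 10, 4]), 3))   # strong position, close to 1.0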
Conference Paper
Full-text available
Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present the first architecture to tackle 3D environments in first-person shooter games, which involve partially observable states. Typically, deep reinforcement learning methods only utilize visual input for training. We present a method to augment these models to exploit game feature information such as the presence of enemies or items, during the training phase. Our model is trained to simultaneously learn these features along with minimizing a Q-learning objective, which is shown to dramatically improve the training speed and performance of our agent. Our architecture is also modularized to allow different models to be independently trained for different phases of the game. We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as humans in deathmatch scenarios.
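The described augmentation can be sketched as a Q-network with an auxiliary head that predicts game features from a shared visual encoder; the PyTorch layers, sizes, and names below are assumptions for illustration, not the authors' architecture:

# Sketch: a shared encoder feeds a Q-value head and an auxiliary head that
# predicts game features (e.g. "enemy visible"), so the supervised auxiliary
# loss shapes the representation during Q-learning.
import torch
import torch.nn as nn

class FeatureAugmentedDQN(nn.Module):
    def __init__(self, n_actions, n_features):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        enc_dim = 64 * 9 * 9                                  # for 84x84 frames
        self.q_head = nn.Linear(enc_dim, n_actions)           # Q-learning head
        self.feature_head = nn.Linear(enc_dim, n_features)    # auxiliary head

    def forward(self, frames):
        h = self.encoder(frames)
        return self.q_head(h), torch.sigmoid(self.feature_head(h))

net = FeatureAugmentedDQN(n_actions=8, n_features=2)
q_values, features = net(torch.zeros(1, 3, 84, 84))
# total loss = TD error on q_values + binary cross-entropy on predicted features
print(q_values.shape, features.shape)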
Conference Paper
Full-text available
One of the most notable features of collectible card games is deckbuilding, that is, defining a personalized deck before the real game. Deckbuilding is a challenge that involves a large and rugged search space, different and unpredictable behaviour after simple card changes, and even hidden information. In this paper, we explore the possibility of automated deckbuilding: a genetic algorithm is applied to the task, with the evaluation delegated to a game simulator that tests every potential deck against a varied and representative range of human-made decks. In these preliminary experiments, the approach has proven able to create quite effective decks, a promising result showing that, even in this challenging environment, evolutionary algorithms can find good solutions.
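The evolutionary loop described above can be sketched as follows, with the game simulator mocked by a random stub; the card pool, deck size, and genetic operators are illustrative assumptions rather than the paper's settings:

# Sketch of GA deckbuilding: individuals are decks, fitness is the win rate
# returned by a simulator playing against reference decks (stubbed here).
import random

CARD_POOL = [f"card_{i}" for i in range(60)]
DECK_SIZE = 30

def simulate_win_rate(deck):
    return random.random()          # stand-in for the real game simulator

def mutate(deck):
    child = deck[:]
    child[random.randrange(DECK_SIZE)] = random.choice(CARD_POOL)
    return child

def crossover(a, b):
    cut = random.randrange(1, DECK_SIZE)
    return a[:cut] + b[cut:]

def evolve(generations=50, pop_size=20, elite=4):
    pop = [random.choices(CARD_POOL, k=DECK_SIZE) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulate_win_rate, reverse=True)   # evaluate and rank decks
        parents = pop[:elite]
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - elite)]
    return max(pop, key=simulate_win_rate)

print(evolve()[:5])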
Article
Advances in deep reinforcement learning have allowed autonomous agents to perform well on Atari games, often outperforming humans, using only raw pixels to make their decisions. However, most of these games take place in 2D environments that are fully observable to the agent. In this paper, we present the first architecture to tackle 3D environments in first-person shooter games, which involve partially observable states. Typically, deep reinforcement learning methods only utilize visual input for training. We present a method to augment these models to exploit game feature information such as the presence of enemies or items, during the training phase. Our model is trained to simultaneously learn these features along with minimizing a Q-learning objective, which is shown to dramatically improve the training speed and performance of our agent. Our architecture is also modularized to allow different models to be independently trained for different phases of the game. We show that the proposed architecture substantially outperforms built-in AI agents of the game as well as average humans in deathmatch scenarios.
Conference Paper
Trading Card Games are turn-based games involving strategic planning, synergies, and rather complex gameplay. An interesting aspect of this game domain is the strong influence of their metagame: in this particular case, deck construction. Before a game starts, players select which cards from a vast card pool they want to take into the current game session, defining their available options and a great deal of their strategy. We introduce an approach to automatic deck construction for the digital Trading Card Game Hearthstone, based on a utility system that uses several metrics to cover gameplay concepts such as cost effectiveness, the mana curve, synergies with other cards, strategic parameters of a deck, as well as data on how popular a card is within the community. The presented approach aims to provide useful information about a deck for a player-level AI playing the actual game session at runtime. Herein, the key use case is to store information on why cards were included and how they should be used in the context of the respective deck. Besides creating new decks from scratch, the algorithm is also capable of filling holes in existing deck skeletons, fitting an interesting use case for human Hearthstone players: adapting a deck to their specific pool of available cards. After introducing the algorithms and describing the different utility sources used, we evaluate how the algorithm performs in a series of experiments filling holes in existing decks from the Hearthstone eSports scene.
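A minimal sketch of the utility idea for filling holes in a deck skeleton; the metrics, weights, and card data below are illustrative assumptions rather than the paper's values:

# Sketch: each candidate card gets a weighted utility score combining cost
# effectiveness, mana-curve need, and popularity; the deck is filled greedily.
from collections import Counter

CANDIDATES = {                          # card: (mana_cost, power, popularity)
    "fireball": (4, 6, 0.9),
    "arcane_intellect": (3, 4, 0.7),
    "flamestrike": (7, 8, 0.6),
}
TARGET_CURVE = {3: 8, 4: 10, 7: 6}      # desired number of cards per mana cost

def utility(card, deck):
    cost, power, popularity = CANDIDATES[card]
    cost_effectiveness = power / cost
    curve = Counter(CANDIDATES[c][0] for c in deck if c in CANDIDATES)
    curve_need = max(0, TARGET_CURVE.get(cost, 0) - curve[cost])
    return 0.5 * cost_effectiveness + 0.3 * curve_need + 0.2 * popularity

def fill_holes(skeleton, deck_size=30):
    """Greedily add the highest-utility candidate until the deck is full."""
    deck = skeleton[:]
    while len(deck) < deck_size:
        deck.append(max(CANDIDATES, key=lambda c: utility(c, deck)))
    return deck

print(Counter(fill_holes(["fireball", "fireball"])))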
Gamers: The Social and Cultural Significance of Online Games
  • G Crawford
  • V Gosling
  • B Light
G. Crawford, V. Gosling, and B. Light, Gamers: The Social and Cultural Significance of Online Games. Taylor & Francis, 2013.
Learning Apache Kafka
  • N Garg
N. Garg, Learning Apache Kafka -Second Edition. Packt Publishing, 2015.
Mastering Apache Spark 2.x
  • R Kienzler
R. Kienzler, Mastering Apache Spark 2.x -Second Edition. Packt Publishing, 2017.
Knowledge Pit - A Data Challenge Platform
  • A Janusz
  • D Ślęzak
  • S Stawicki
  • M Rosiak
A. Janusz, D. Ślęzak, S. Stawicki, and M. Rosiak, "Knowledge Pit - A Data Challenge Platform," in Proceedings of CS&P 2015, 2015, pp. 191-195.