Towards AI-based Interactive Game Intervention to Monitor Concentration Levels in Children with Attention Deficit

Diego R. Faria1, Jordan J. Bird1, Cintia Daquana2, Jhonatan Kobylarz3, Pedro P. S. Ayrosa4,5

Abstract - Preliminary results of a new approach to neurocognitive training for academic engagement and the monitoring of attention levels in children with learning difficulties are presented. Machine Learning (ML) techniques and a Brain-Computer Interface (BCI) are used to develop an interactive AI-based game for educational therapy that monitors the progress of children's concentration levels during specific cognitive tasks. Our approach relies on the acquisition of children's brainwaves using electroencephalography (EEG) to classify concentration levels through model calibration. The real-time brainwave patterns are inputs to our game interface to monitor concentration levels. When concentration drops, the educational game can personalize itself to the user by changing the challenge of the training or providing new visual or auditory stimuli in order to reduce the attention loss. To understand concentration-level patterns, we collected brainwave data from children with intellectual disabilities (e.g. autism spectrum disorder and attention deficit hyperactivity disorder) at various primary schools in Brazil. Preliminary results show that we successfully benchmarked (96%) the acquired brainwave patterns using various classical ML techniques. The results obtained through the automatic classification of brainwaves will be fundamental to the further development of our full approach. Positive feedback from questionnaires was obtained both for the AI-based game and for the engagement and motivation during the training sessions.
Keywords - technology for educational therapy, children with
attention deficit, BCI, AI-based games, Machine Learning.
I. INTRODUCTION

Over the past years, games have become a technological
service tool in different application domains. Some projects
combine a direct application of games as therapeutic tools in
healthcare for rehabilitation and development of cognitive
function. Experiments with children demand the design of the
interaction functionalities and interfaces for ease of use. It has been shown that games can boost confidence and interest in learners who otherwise lack confidence in classrooms. Gaming can be used to encourage engagement and motivation, as well as to promote learning, if applied properly to the areas in which it would be most beneficial.
Diego R. Faria and Jordan J. Bird are with the 1ARVIS Lab, Computer Science Department, School of Engineering and Applied Science, Aston University, UK, emails: {d.faria. birdj1}. Cintia Daquana is with the 2Education Secretariat of Cambé-PR, Brazil, Department of Special Education and Inclusion. Jhonatan Kobylarz is with the 3Federal University of Paraná, Brazil. Pedro Ayrosa is with 4LABTED-UEL and 5Computer Science, Londrina State University, Brazil.
Fig. 1. Overview of the proposed AI-based Interactive Game (full system)
using EEG brain waves to classify concentration levels, hand motion for
gesture recognition to control the game, and emotion recognition to analyze
the mood of the children. The multimodal data will help the system to adapt
to the current state of the users to keep them motivated/concentrated.
Undoubtedly, technology in education helps individuals
develop necessary skills, but it also opens up opportunities
for a much more fluid learning experience. The need to deliver continuous improvement is driving schools and higher education institutions to seek out proven new ways of delivering learning, including for people with special needs. By
combining technology with traditional forms of teaching,
learning providers can attain powerful means of achieving
results, while benefitting from a strong return on investment.
Whether cost savings are a primary or additional objective,
technology opens up a realm of opportunities for new and
improved content delivery and personalized learning.
Technology can be personalized to meet each person’s needs
through a blend of cognitive-motor gaming functions. In
addition, parents have the independence to have their
children learn within and outside clinical or school sessions
by using this technology on mobile devices. In the long term,
it is a worthwhile investment (e.g. training teachers /
educators of students with special needs), with tools having a
proven record of maximizing children's engagement. Games can be a decisive vehicle through which children learn about themselves and their environment, and develop social skills.
However, for many children with any kind of impairment,
adapted play opportunities are often limited.
AI-based interactive games can be applied to
child-machine interaction within the context of educational
therapy for assisting the facilitation of adaptive
learning-related coping and improved cognitive skill
outcomes in educational settings, focusing on aiding children
with learning difficulties. The application of technology to
this intervention is a promising and ground-breaking avenue
to promote adjustment and development in children, who
tend to be increasingly enthusiastic about the use of this technology.
Fig. 2. Overview of the proposed approach to monitor the progress of children over time by using our neurocognitive training through AI-based interactive games.
In this work we introduce a new approach for an intelligent
system by building an adaptive AI-based game for
neurocognitive training to boost and maintain the
concentration levels of children with intellectual disability
and attention deficit. Preliminary work using ML techniques combined with a BCI and a Natural User Interface (NUI) for tracking hand motion makes this game more attractive to children, engaging them with this kind of technology and awakening their interest and motivation to play educational games. During the playing sessions, we recorded and measured children's brainwave patterns to monitor their concentration levels, towards adapting the game interface when concentration drops by providing visual or auditory stimuli on the interface to keep their attention and focus on the training session. Figures 1 and 2 depict a sketch
of our proposed approach in terms of technological
development and future implementation steps.
The remainder of this paper is organized as follows: Section II describes related work; Section III presents the technological development and methods used for the intelligent game implementation; Section IV describes the apparatus and experimental setup for data acquisition; Section V presents the preliminary results, followed by Section VI with discussions and the strategy for future improvements; finally, Section VII draws the conclusions.
II. RELATED WORK

Games can provide a better and more efficient learning
environment as they introduce a ‘fun’ element to education
which makes the whole experience more appealing for
students. Gamification of education is not only there to
engage users but also to develop cognitive abilities as well as
problem solving skills. The lack of enjoyment in education
leads to failure in learning [1]. When students are engaged,
their motivation to learn will increase because they become
more focused [2]. Students thrive on dynamic learning
experiences. Boredom is still a challenge in traditional
education settings. The idea of combining gaming with education is not a new one; however, there has recently been a steady increase in "gamifying" education and creating more interactive content to motivate learners and increase engagement [3].
A study conducted by [4], which investigated the role of modern games in children's development, concluded that games with sufficiently dynamic content can enable children to develop their motor skills physically and, through problem solving, to develop intellectually. There is a strong association
between engaging in computer-based learning activities and
cognitive development in children. Research conducted by
[5] found evidence that specially designed games have benefits in increasing coordinated motor skills. The authors of [6] found strong evidence of measurable changes in neural processing through the use of gaming. It is important to note that enhanced cognitive performance is not a product of all available digital games, but increasing cognitive processing through games is possible if they are designed correctly. There are several ways to
assess the usability and efficacy of educational games.
Human-Computer Interaction (HCI) techniques (e.g. semiotic inspection) allow computer scientists and educational game developers to evaluate such games and their efficacy on children's development [7].
The authors in [8] designed an interesting and novel
interface integrating a commercial headband with a single dry electrode, obtaining the beta frequency from the frontal lobe to measure the concentration level. They integrated the virtual environment with Leap Motion technology for gesture recognition, aiming to use this novel interface, developed in Unity, with children with cerebral palsy. Although the idea is interesting, the researchers did not use any machine learning for gesture recognition with the Leap Motion sensor, nor to automatically classify brainwave activity from the EEG sensor. Instead, this work opted for the basic functionalities provided by both commercial sensors, and thus an analysis of the users' concentration-level performance or of the efficacy of the gesture recognition in specific tasks was not possible. They did, however, evaluate the acceptance of their novel virtual environment as a potential user-friendly interface that can be used to measure concentration levels.
Different approaches have been developed using EEG data
to classify emotions [9], immersive 3D games [10], control
games [11], [12], and analysis of brainwave patterns of
concentration, mood, etc. while patients play games [13]. However, none of them has combined an intelligent game driven by EEG responses, a natural user interface, and emotional reactions together to maintain the user's concentration during the interaction. Furthermore, none of them has monitored and analyzed long-term data to track the progress of the user over multiple training sessions. This work goes beyond the state of the art by developing an intelligent system
that will be able to adapt to the users by generating
appropriate stimuli to keep their concentration during the
training sessions based on the classification of EEG
brainwave patterns. In addition, the future implementation
shown in Figure 2 will make our approach a potential tool for
neurocognitive training of children with attention deficit.
III. TECHNOLOGICAL DEVELOPMENT AND METHODS

The idea of an AI-based interactive game is conceptualized
here for an intervention context, with a number of potential
applications in the education system, such as cognitive
educational therapy (e.g. facilitation of coping skills), and
health promotion settings (e.g. training of health-related
skills). Current empirical evidence suggests that the use of
smart technology in therapeutic and educational contexts is
likely to enhance the outcomes of cognitive-behavioral
interventions. The primary question of this work is: How can
one endow an intelligent game with advanced perception,
learning and interaction capabilities in order to stimulate
children with learning difficulties to enhance their cognitive
and motor functions without stressing them, but engaging them? In
order to answer this question, our strategy is focused on: (i)
the development of an AI-based game which adopts an
approach to automatically classify mental states to measure
concentration levels through EEG data; (ii) control and
interaction via gesture recognition using hand motion
patterns within a game environment; (iii) focusing the
neurocognitive training using user-friendly interfaces for
tasks such as reasoning, hand-eye coordination, and memory
to improve the attention deficit of children with learning
difficulties; (iv) real-time, autonomous adaptation by the game when concentration drops. Once the concentration level is classified during the user interaction, the intelligent system can decide whether a new stimulus (visual or auditory) is needed to keep the user's attention and avoid concentration loss.
A. EEG Data Processing and Classification
Our EEG classification technique is based on our previous work on mental state [14] and emotional state [15] classification. In these works, the temporal brainwave signals are processed and mathematical features are extracted. This is performed
through a sliding window technique, where windows of
length 1s with an overlap of 0.5s capture short segments of
brain activity, and the temporal data is then converted to
singular data points covering the window through various
algorithms. The 988 features derived from the 1s windows, as well as the 0.25s and 0.5s windows by offset, include Fourier transforms, Shannon entropy, and log-energy entropy, among many others (the full list of features is detailed in [14]). An extensive set of features beyond those usually observed in a clinical setup [17], [18] is extracted due to the more cost-effective nature of the Muse headband sensor in comparison to expensive but higher-quality clinical EEGs.
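As a rough illustration of this pipeline, the sliding-window feature extraction can be sketched as follows. This is a minimal sketch with only a handful of the statistical, entropy, and spectral features; the function names, the 256 Hz sampling rate, and the chosen feature subset are our assumptions for illustration, not the exact implementation of [14].

```python
import numpy as np

def shannon_entropy(x, bins=32):
    # Histogram-based Shannon entropy (bits) of one signal window.
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def log_energy_entropy(x):
    # Log-energy entropy: sum of the log of squared samples
    # (small epsilon avoids log(0) on silent windows).
    return np.sum(np.log(x ** 2 + 1e-12))

def window_features(signal, fs=256, win_s=1.0, overlap_s=0.5):
    """Slide a 1 s window with 0.5 s overlap over a single-channel
    signal and extract a few illustrative features per window."""
    size = int(win_s * fs)
    step = int((win_s - overlap_s) * fs)
    rows = []
    for start in range(0, len(signal) - size + 1, step):
        w = signal[start:start + size]
        fft_mag = np.abs(np.fft.rfft(w))  # magnitude spectrum
        rows.append([
            w.mean(), w.std(), w.min(), w.max(),
            shannon_entropy(w),
            log_energy_entropy(w),
            fft_mag[:5].mean(),           # coarse low-frequency energy
        ])
    return np.asarray(rows)
```

Each row of the returned matrix is one window represented as a single data point, which is the form the classifiers below consume.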
Following this, the features are organized into a matrix, and
since temporal segments are now represented as numerical
features, machine learning algorithms can learn useful
pattern recognition rules from the data. In this study, we
consider the ground-truth dataset; that is, the data collected
during the calibration phase at the start of the experiment. We
benchmark various classical machine learning algorithms on
the ternary problem of concentrating/relaxed/neutral and
compare the predictions to the collected data in order to
discern how classifiable the data is prior to future exploration
into the unknown data, which is the data recorded while the
child is interacting with the educational game. Training is performed using 10-fold cross-validation in order to prevent overfitting.
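A compact sketch of such a benchmark is shown below, using scikit-learn as an illustrative choice. The synthetic random data merely stands in for the real 988-feature matrix and its three calibration labels; model hyperparameters are assumptions, not the paper's tuned values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the feature matrix: one row per window,
# one ternary label per row (0=concentrating, 1=relaxed, 2=neutral).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 50))
y = rng.integers(0, 3, size=300)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    # 10-fold cross-validation, as used to guard against overfitting.
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: {scores.mean():.2%} mean accuracy")
```

On real calibration data, the per-fold accuracies discern how classifiable the data is before moving on to the unlabeled in-game recordings.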
B. Interfaces Development using UNITY Game Engine
Unity is a 2D and 3D game development engine created by
Unity Technologies, allowing the development of a wide
range of games on multiple platforms.

Fig. 3. Spelling game interfaces developed in Unity. The user has to pick up the letters corresponding to the animal's name and throw them to the right side (inside the box).

Fig. 4. Aircraft interface designed in Unity, used with the Leap Motion sensor to capture the user's hand motion to control the aircraft.

Fig. 5. Leap Motion data. Left: palm tracking data. Right: fingertip tracking.

Unity has been extensively used in industry and academia since it has a number of features that make it particularly appealing: it is inexpensive, easy to use, has a fairly shallow learning curve, accommodates multiple scripting languages, and ships with intuitive prototyping tools. Our
games are based on educational games reported in the
state-of-the-art, focused on perception, memory, reasoning
and attention. We designed a spelling game, where the
interface randomly selects images of animals and their sound,
separated into categories of easy, medium and complex
spelling. Once children interact with the interface and
progress on each phase, the difficulty is increased. The
interface generates random cubes with alphabetic letters on
the ground, and the user has to pick up the letters in the correct order corresponding to the animal's name and throw them into a box until the word is complete. If the player succeeds, a new animal image is randomly shown; otherwise, the game warns the user about the mistake and prompts the user to try again. Figure 3 shows an example of this interface.
Another interface we designed is an aircraft controlled by the hand: its pose and orientation are given by the hand pose (roll-pitch-yaw), and its acceleration by moving the hand towards or away from the screen, with other directions such as left and right also allowed. Figure 4 shows the interface. The main idea of this game is to control the aircraft, avoid obstacles, find free routes, and gather objectives (coins) to increase the player's score. It demands hand-eye coordination to follow specific routes and to move the hand appropriately for the correct motion within the virtual environment, with the ability to increase the difficulty (minimum speed, obstacles) if required, based on concentration levels measured from the brain.
C. Gesture Recognition
Our gesture recognition module is based on the Leap Motion
Controller (LMC) Software Development Kit (SDK)
functions. The LMC is primarily designed for hand gesture
and finger position detection in interactive software
applications. It consists of three infrared emitters and two
CCD cameras to track the hand, and it provides preprocessed data through its Application Programming Interface (API). The data provided is as follows (Fig. 5): the position of the palm frame by frame, with its normal and velocity; the hand direction; the fingertip positions and velocities; and the arm direction. Currently we use the Leap Motion API functions for gesture recognition instead of implementing our own classifier, since they work very well for gestures such as opening and closing the hand in simple tasks (e.g. pinching, picking up and releasing an object) and for static gestures (e.g. pointing, two fingers, three fingers, and so on). The hand pose provided by the API can be directly mapped to the controller (e.g. palm and finger position and orientation) in the game interface to move an object forward, backward, left, or right, and to rotate it based on the hand orientation.
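A direct mapping of this kind can be sketched as follows. The vector shapes, axis conventions, and function name are our assumptions for illustration; in practice the palm normal, hand direction, and palm velocity come from the LMC API frame data.

```python
import math

def hand_to_aircraft_controls(palm_normal, hand_direction, palm_velocity):
    """Map a tracked hand pose (hypothetical 3-vectors, y up, z towards
    the user) to simple aircraft controls: roll, pitch, and throttle."""
    nx, ny, nz = palm_normal
    # Tilting the palm left/right rolls the aircraft.
    roll = math.atan2(nx, -ny)
    # Pointing the fingers up/down pitches the aircraft.
    pitch = math.atan2(hand_direction[1], -hand_direction[2])
    # Moving the hand towards the screen (negative z velocity) accelerates.
    throttle = -palm_velocity[2]
    return roll, pitch, throttle
```

With a flat, stationary palm the mapping yields neutral controls, so the aircraft flies level until the hand moves.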
The Unity Game Engine and LMC SDK are easily
compatible with one another, making game design and
implementation integration easier, since the LMC functions
are embedded within the Unity development interface.
IV. APPARATUS AND EXPERIMENTAL SETUP FOR DATA ACQUISITION

In order to gather data for the experiment, the commercial
Muse headband (EEG) sensor is used as can be observed in
Figure 6. The subject group was made up of a total of 30
children from 3 different public primary schools in Cambé, a
municipality within the State of Paraná in Southern Brazil.
All of the children’s legal parents or guardians were informed
in detail of the project characteristics and written consent was
given. Special monitoring roles were created for the trial in
order to ensure compliance of ethical requirements,
confidentiality and protection of personal data. No trials were
performed without previous approval by the ethical
committee and data protection authorities in Brazil. The pilot
trials were conducted in accordance with the highest ethical
standards from the UK and Brazil. To ensure that the
information is easy to understand, all written information that
was given to the involved people was approved by experts on
“Easy to Read” guidelines. All trials were conducted after
receiving the signed “Informed Consent Form”. The sample
size of 30 children is selected for preliminary
experimentation, with positive results presenting assurance
and reason for collecting a much larger dataset in the future.
The subject group of 30 children were aged from six to ten
years old, 24 were male and 6 were female. The children
considered in this experiment all had forms of disability
including Intellectual Disability (ID), Autism Spectrum
Disorder (ASD) and Attention Deficit Hyperactivity Disorder
(ADHD), which were widespread within the subject group. Oppositional defiant disorder (ODD) and hydrocephaly were also present, and one of the subjects had cerebral palsy.

Fig. 6. EEG recording. Left: Muse headband sensor. Right: example of temporal data (alpha, beta, theta and gamma signals provided by the sensor).

Fig. 7. Brain activity observed by an instructor. The child has been omitted from the photo for privacy reasons. Note that this data is not being recorded, since the child is behind the camera attempting to make the headband comfortable. Prior to data being recorded, the instructor made sure the headband was properly calibrated and recording normally.

Fig. 8. Example of the experimental setup. On the left table, the Leap Motion sensor on top of the laptop and the game are ready. On the right, the shell game is set up for gathering calibration data, along with a smartphone used to record videos for future extraction of facial features to classify emotions.

Finally, one subject with
Down’s Syndrome could not concentrate for a long enough
period for data to be recorded, but through informal
observation was seen to enjoy the Leap Motion game. Due to
this, another child was added to the subject group to retain the
planned 30 (the above ethical consent was obtained). An
example of brain activity being recorded can be seen in
Figure 7, and the full experimental setup can be seen in
Figure 8.

Fig. 9. Data acquisition: examples of the calibration and educational activities using the games for interaction. Faces are blurred for privacy.

For each of the children, up to two minutes of data is recorded during the interaction with the games for each of the three states of interest, using the experimental setup shown in Figure 8. For the concentration state, several of the
subjects were not recorded for the full two minutes due to
issues arising with attention for the duration of the recording.
This leads to a slight class imbalance of 820 data objects for
concentrating and 830 for both relaxed and neutral data.
Although this slight imbalance is present, it was noted that
Precision and Recall strongly supported the observed
accuracy of the algorithms, detailed further in Section V. The
calibration and educational activities can be seen in Figure 9.
For the concentration state, children played the ‘shell game’
with an instructor. The child was tasked with following the
location of a ball when passed under three cups, with
difficulty increasing based on the child’s performance in
order to retain attention through challenge. For the relaxed
state, the child was asked to breathe slowly (following the
instructor) and to sit in a position which allowed for
relaxation of muscles. For the neutral state, no stimuli were
present. The data was recorded based on observation: if the subject lost concentration on the task, the recording was stopped. This was done in order to prevent contamination of the calibration data.
Following the calibration experiment, all the children played two educational games: one with the Leap Motion and one without, using the mouse to select objects starting with a specific letter of the alphabet given by the game interface. In the future, we plan to take the calibrated model derived in this experiment and apply it to these data in order to discern the effects of physical engagement in games.
V. PRELIMINARY RESULTS

At this stage, the results are related to the performance of
classification algorithms given the EEG data. The results
from the single algorithm benchmarking experiment can be
observed in Table I. Random Forest and Logistic Regression
hyperparameters are manually optimized in Figures 10 and
11. For the three performance measures, Logistic Regression
seemed to be the best with an overall classification accuracy
of 96.24% given three classes: concentrated, relaxed and
neutral. The Logistic Regression model took 130 seconds to
execute whereas the Random Forest model took only 19
seconds, and thus the reduction of 0.34% overall score may
be worth the reduction in computational complexity in future
when the algorithm is required to run on standard office
machines.

Table I. Single-classifier benchmark: accuracy (%) per algorithm (e.g. Random Forest, SVM classifier).

Table II. Ensemble benchmark: accuracy (%) for voting by average probability, voting by maximum probability, and stacking.

Fig. 10. Random Forest tuning: the number of random trees in the forest is stepped from 25 to 125 in order to select the best hyperparameters.

Fig. 11. Logistic Regression tuning: the number of boosting iterations is stepped from 100 to 500 in order to select the best hyperparameters.

In Table II, it can be observed that when
combining the two best models, a higher classification
accuracy is achieved for the dataset. This is observed
through both voting methods and stacking, where the voting method achieves the highest classification accuracy for the dataset.

Fig. 12. Visualization of the highest relative entropy score of an attribute (Eigenvalue IG=0.685) mapped to the three-class problem.

It must be noted that the models are complex
and take a lengthy amount of time to train, around 10 to 30 minutes, on a high-end computer (Intel Core i7 @ 3.7 GHz). When the two models vote on the class, both by average and by maximum probability, scores of 97% are achieved.
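The two voting schemes can be sketched as follows, again on synthetic stand-in data; in the real system the votes come from the trained Random Forest and Logistic Regression models, and the exact voting implementation is an assumption on our part.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the ternary EEG feature data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 3, size=200)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
lr = LogisticRegression(max_iter=1000).fit(X, y)

p_rf, p_lr = rf.predict_proba(X), lr.predict_proba(X)

# Average-probability vote: mean of the two models' class probabilities.
avg_vote = np.argmax((p_rf + p_lr) / 2, axis=1)

# Maximum-probability vote: follow whichever model is more confident.
stacked = np.stack([p_rf, p_lr])                 # shape (2, n, 3)
best_model = stacked.max(axis=2).argmax(axis=0)  # more confident model per row
max_vote = stacked.argmax(axis=2)[best_model, np.arange(len(X))]
```

Soft (average-probability) voting tends to smooth out disagreements between the two models, while maximum-probability voting defers entirely to the most confident one.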
If the neutral state is not considered, only 2 data objects
from relaxed and 1 from concentrating were misclassified as
one another by the Logistic Regression model for the whole
dataset, showing that the classification ability of the two
classes is strong. In terms of the best ensemble methods, only
1 data object was misclassified. A preliminary attribute
selection search given the brainwave features was performed
and sorted based on attribute Information Gain (relative
entropy). All attributes were noted to have at least some form
of prediction value. The highest were eigenvalues and
covariance matrices of all temporal windows which had an
Information Gain value of around 0.6, this is visualized on a
per-class basis in Figure 12. The lowest attribute was the
skewness (statistical moments) of the first quarter window
which had a small value of 0.006. Computational complexity
could likely be reduced in future by performing
dimensionality reduction of the benchmark matrix as well as
the minimization of data extracted from brainwave activity to
be classified in real time.
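An attribute ranking of this kind can be sketched as follows, using scikit-learn's mutual information estimator as a stand-in for the Information Gain (relative entropy) computation; the synthetic data, in which only the first attribute is informative, is our own illustrative assumption.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
y = (X[:, 0] > 0).astype(int)  # only attribute 0 carries class information

# Estimate how much information each attribute gives about the class,
# then sort attributes from most to least predictive.
ig = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(ig)[::-1]
```

Keeping only the top-ranked attributes is one route to the dimensionality reduction discussed above, trading a small amount of accuracy for much faster real-time classification.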
VI. DISCUSSION AND FUTURE WORK

In this work, we successfully benchmarked various classical
machine learning techniques on the dataset acquired in
primary schools. In the future, we plan to benchmark a deep neural network using the same technique as [16]. Additionally, the complexity of the model was high due to the consideration of 988 numerical features; in the future we will attempt to reduce the complexity of the dataset by applying and benchmarking dimensionality reduction algorithms. This aims to improve the speed of classification without loss of accuracy and will aid real-time classification.
Through the derivation of a less complex model which can
classify in real time, education can then be customized for the
individual based on brain activity. The educational game will
consider the predictions by the model as an additional input alongside the child's physical activity, allowing the activities to react in real time to a drop in concentration, in order to raise the concentration level and improve the educational experience.
We will also use our educational framework as a long-term
form of cognitive behavioral therapy for certain conditions.
To do this, we plan to repeat the experiment in this study with a group of children who do not have an intellectual disability and to generate a signature describing their brainwave activity (see Figure 2). Over a period of months of using our proposed framework, a similarity measure will periodically be computed between each child's own signature and the signature generated from the non-intellectually-disabled group. The goal is to increase this similarity metric over time through engaging educational activities, with real-time classification guiding the quality of individual sessions.
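One simple way to realize such a similarity measure is cosine similarity between averaged feature vectors; this is a hypothetical sketch, since the actual metric is left to future work.

```python
import numpy as np

def signature_similarity(sig_a, sig_b):
    """Cosine similarity between two brainwave feature 'signatures'
    (e.g. per-child mean feature vectors). Returns a value in [-1, 1],
    where 1 means the signatures point in the same direction."""
    a = np.asarray(sig_a, dtype=float)
    b = np.asarray(sig_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Tracking this value across sessions would give the per-child progress curve sketched in Figure 2.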
The planned activities outlined in this section are now
enabled due to the strong classification ability of the models
derived from this study which pave the way for more in-depth
studies and the development of the educational framework.
Regarding the evaluation of the usability and acceptance by children of our developed games, in which they have to wear the headband for the BCI and also interact with a NUI via the LMC, we applied a questionnaire to record the opinions of the users after all the experiments were complete, as shown below:
a) How easy was it to play the game?
b) What game interface is easier to interact with: (1) leap
motion controller or (2) the traditional game using the
computer mouse/pad?
c) Which game did you prefer?
d) Has it been difficult or uncomfortable to wear the Muse EEG headband (BCI)?
e) Was it comfortable to wear the EEG for a long period?
f) Would you agree that you enjoyed playing the leap
motion games?
g) What did you like about these games?
h) What did you dislike about these games?
i) Would you play these games again?
For a), 83.3% of the responses stated it was easy, 10% medium, and 6.7% difficult; b) 86.7% of the children indicated that the Leap Motion controller is easier to use than the mouse/pad; c) 83.3% of the children chose the aircraft game as their favorite, and the remaining 16.7% was split among the other games; d) 93.3% of the children answered no, it is not difficult to wear the EEG sensor; e) 96.7% of the children answered yes; f) 100% agreed; g) 83.3% answered the NUI (using the hand to control the game); h) for this question, it is interesting to point out that 53.3% mentioned that after playing with the Leap Motion for a long time, they sometimes got tired of keeping their hand up to control the aircraft; i) 100% yes.
It is important to mention that all of the children, despite their disabilities (some of them even with motor disabilities), engaged well with the game interfaces. Most of them wanted to continue playing after their sessions. The educational staff observed and pointed out that even the students with the most severe attention deficits (specifically ODD) were focused on the game interfaces. They noticed that with only a few instructions the children were able to play the games, and they intuitively worked out how to solve specific problems by observing us (instructors demonstrated how to play only once, in a short period of time). Since most of the children in the schools we visited belonged to low-income families, some of them had never played a similar type of game before. Most of them showed great eagerness to participate in these experiments: they had been told a few days beforehand, and they were very motivated and excited while waiting for the day.
The next step is to use the confidence given by the
concentration-level classification to trigger the game to adapt
to the user and change the stimuli when the concentration
level drops below a certain threshold. This functionality is
already implemented and awaits the real-time classification of
the EEG data, which is currently under development.
Following this step, we will implement the approach shown in
Figure 2 as the final version of our artificially intelligent game
to monitor the progress of children over time at schools. Given
the validation done so far, we can affirm that this approach
has the potential for strong societal impact on children with
attention deficit by improving their cognitive abilities.
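The threshold-triggered adaptation described above can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the threshold value, the window count, and the classifier output format are all assumptions. Requiring several consecutive low-confidence windows before adapting avoids reacting to momentary classifier noise.

```python
# Hypothetical sketch of the adaptation trigger: change the game
# stimuli only after the "concentrated" probability stays below a
# threshold for several consecutive EEG windows. All constants here
# are illustrative assumptions.
from collections import deque

CONCENTRATION_THRESHOLD = 0.5   # assumed cut-off for the "concentrated" class
WINDOW_HISTORY = 3              # consecutive low windows required before adapting

def should_adapt(prob_history: deque) -> bool:
    """True when the history buffer is full and every window is below threshold."""
    return (len(prob_history) == prob_history.maxlen
            and all(p < CONCENTRATION_THRESHOLD for p in prob_history))

history = deque(maxlen=WINDOW_HISTORY)
for prob_concentrated in [0.8, 0.4, 0.3, 0.2]:  # stand-in for per-window classifier output
    history.append(prob_concentrated)
    if should_adapt(history):
        print("concentration dropped: change game stimuli")
```

In this toy trace the trigger fires only on the last window, once three consecutive probabilities fall below the threshold.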
This work introduced a new approach for neurocognitive
training towards improving children's cognition and
concentration using intelligent games. We combined a BCI, to
measure children's concentration levels and engagement
during the interaction with the game, with a NUI controlled by
hand gestures. Brainwave patterns of different groups of
children with disabilities or disorders (e.g. ID, ODD, ASD,
ADHD) were acquired so that we could examine their
concentration-level patterns when performing multiple tasks.
Preliminary results on the classification of children's
concentration from brainwaves show that our framework
achieved 96% accuracy. Questionnaires indicated positive
feedback on the use of the game and the children's acceptance
of this technology in this kind of intervention. Future work
will address the full integration of our system, enabling the
game to learn the user's concentration pattern and to decide
when to generate new stimuli as the user's attention drops,
effectively enabling real-time, individual personalization of
education.
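The classification step summarized above, training a classical model on statistical EEG features and evaluating it with cross-validation, can be illustrated with a small sketch. The data here are synthetic stand-ins (random band-power-like features for three attention classes), not the study's Muse recordings, and the feature count and classifier choice are assumptions for illustration.

```python
# Illustrative sketch only: a classical classifier trained on synthetic
# features standing in for EEG band-power statistics, evaluated with
# 10-fold cross-validation. The dataset is fabricated for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_features = 100, 44           # assumed reduced feature-set size
X = rng.normal(size=(3 * n_per_class, n_features))
X[:n_per_class] += 1.0                       # shift one class so it is separable
y = np.repeat(["concentrated", "relaxed", "neutral"], n_per_class)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
print(f"mean accuracy: {scores.mean():.2f}")
```

On real data the same pipeline would take per-window statistics of the alpha, beta, theta, delta and gamma bands as `X` and the labeled mental state as `y`.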
The authors declare no conflict of interest.
D. Faria came up with the idea and proposal for this new
approach and defined the mathematical models for feature
extraction from EEG brainwaves. D. Faria also collaborated
on the game interface design. J. Bird defined and tested the
classification and optimization models. J. Bird contributed to
the system implementation and integration, e.g. sensors,
games and machine learning. C. Daquana and J. Kobylarz
helped to conduct the pilots with children, interacting with
them and explaining the technology prior to the experiments;
they also prepared and applied the questionnaires. P. Ayrosa
helped with discussions and feedback on each phase of the
project, including the theoretical methods and research
practice. D. Faria, J. Bird, C. Daquana and P. Ayrosa handled
the ethical approval process. D. Faria and J. Bird were in
charge of writing, but all authors provided feedback and
contributed to specific sections of this paper.
This work is partially supported by Fundação Araucária
Paraná / CONFAP Brazil and the Royal Society UK / Newton
Fund, through a mobility grant awarded to Dr Diego Faria
and Prof. Pedro Ayrosa for the project: “Engagement through
AI-based Interactive Games: Neurocognitive training for
children with learning difficulties” in 2019. We would like to
especially thank the Municipal Education Secretariat of
Cambé-PR, Brazil, in particular, Claudia S. C. Segura, for the
kind assistance and help in conducting the experiments with
children in local primary schools in Cambé-PR, and the
directors of these schools: Irma Hilda Soares Municipal
School; Padre Symphoriano Kopf Municipal School; and
Professor Jacídio Correia Municipal School.
Diego R. Faria is a Senior Lecturer in Computer
Science. He is with the School of Engineering and
Applied Science, Aston University, Birmingham
(UK). He is the coordinator and founder of the
ARVIS Lab (Aston Robotics, Vision and Intelligent
Systems). Currently (2019-2022) he is the project
coordinator of EU CHIST-ERA InDex project
(Robot In-hand Dexterous manipulation by
extracting data from human manipulation of objects
to improve robotic autonomy and dexterity) funded
by EPSRC UK. Dr Faria is also PI and Co-I (2020-2022) of two projects with
industry (KTP-Innovate UK scheme) related to perception and autonomous
systems applied to autonomous vehicles, and NLP and image processing for
multimedia retrieval. He received his Ph.D. degree in electrical and
computer engineering from the University of Coimbra, Portugal, in 2014,
and holds an M.Sc. degree in computer science (2005) from the Federal
University of Parana, Brazil. He earned a bachelor's degree in informatics
technology (data computing & information) in 2001 and completed a
computer science specialization in 2002, both at Londrina State University, Brazil. From
2014 to 2016 he was a postdoctoral fellow at the Institute of Systems and
Robotics, University of Coimbra where he collaborated on different projects
funded by EU commission and the Portuguese government in areas of Robot
Grasping, Perception, Cognitive Robotics and Assisted Living. His research
interests are assisted living, intelligent systems, and cognitive robotics.
Jordan J. Bird achieved a first class Bachelor's
Degree with Honours in Computer Science at Aston
University in the United Kingdom, before
continuing with PhD studies at the same institution
in 2018 with an awarded scholarship. Driven by
a deep scientific passion from an early age,
his research interests lie largely within the field of
Human-Robot Interaction; these include the
emergence of Artificial Intelligence, Intelligent
Social Frameworks, Turing's Imitation Game, Deep
Machine Learning, and Transfer Learning. Jordan is a founding member of
the Aston Robotics, Vision and Intelligent Systems (ARVIS) laboratory at
Aston University.
Cintia Daquana is the Pedagogical Coordinator at
the Municipal Education Secretariat of Cambé-PR,
Brazil. She is responsible for special education and
inclusion of children with disabilities. She has a
degree in pedagogy/education by the State
University of Londrina, Brazil. She also has a
certified specialization course (Lato-sensu) in
Pedagogical Work in Early Childhood Education
(State University of Londrina, Brazil); Special
Education (UNOPAR, Brazil), and Psychopedagogy (INESUL, Brazil). Her
research interests are: Special Education, Children with Learning
Difficulties, and Technology for Education.
Jhonatan Kobylarz is an Electronic Engineering
student at Universidade Federal do Paraná (UFPR),
Brazil. His research interests include Deep Machine
Learning towards Social Robotic Interaction,
Bioengineering and Computer Vision.
Pedro P. S. Ayrosa obtained his M.Sc. and PhD
degrees in Computer Engineering and Systems
from the Federal University of Rio de Janeiro
(COPPE / UFRJ) in 1992 and 2001, respectively.
He earned a B.Sc. degree in Mathematics from
Universidade Federal Fluminense (UFF), Brazil,
in 1988. He is currently an Associate Professor at
the State University of Londrina (UEL), having
taught in undergraduate, specialization and
master's degree programs in computer science. He was a member of the editorial
board of the State University of Londrina Publisher (Eduel). He is the
institutional examiner of courses and Distance Learning Education at
INEP-Brazil. He is an expert in Ad-hoc distance learning at the Department
of Science and Technology of the State of Parana. He is the general
coordinator of the Center of Distance Education at State University of
Londrina (NEAD-UEL), the local coordinator of the Open University of
Brazil (UAB), and director of the Laboratory of Educational Technology
(LABTED). His research interests are: Distance Learning Technology,
Artificial Intelligence, and Neural Networks.
... A pilot study on :ive children with ASD shows a clear correlation between dynamic changes in the behavior and the corresponding physiological signals. (Faria et al., 2020) developed an interactive game for educational therapy based on an EEG brain computer interface to assess the concentration levels of children with ASD when engaged in cognitive tasks. Preliminary results indicate that brainwaves can be successfully related to the three classes concentrated, relaxed and neutral by using various classical machine learning techniques. ...
... For the therapy of restricted and repetitive behavior, (Di Palma et al., 2017;Faria et al., 2020) presents an interactive game in combination with a brain-computer interface, to assess the concentration levels. The humanoid robot and a haptic bracelet in (Beaudoin et al., 2021) teach children with ASD be more time ef:icient. ...
Full-text available
The progress in Information and Communication Technologies (ICT) can make a real difference in the quality-of-life of persons with Autism Spectrum Disorder (ASD) by acting on several aspects, from customized software for the communication, to emotion recognition, to social behavior and also to provide systems for the observation of the wide spectrum of manifestations, to ease the diagnosis, to support the therapy, and to monitor the improvement and the growth of children with ASD. This has been achieved by the introduction of a large number of innovative technologies, spanning from Internet of Things, to robotics, virtual and augmented reality etc. Differently from other surveys on the same research area, we focus this survey on innovative technologies used in this field, and we organize a classification of the works based on three different but strictly crossed axis, namely the triad of impairment (either communication, social interaction, or social behaviors), research purpose (either diagnosis or therapy) and system activity (either monitoring or intervention).
... In a study conducted by [10], a description of a different approach is presented, based on children's brain wave data, obtained from electroencephalograms (EEG), used to classify and monitor concentration levels. When the concentration is low, the serious game can be customized, changing, for instance, the training challenge or providing some new look or auditory stimuli to increase attention [10]. ...
... In a study conducted by [10], a description of a different approach is presented, based on children's brain wave data, obtained from electroencephalograms (EEG), used to classify and monitor concentration levels. When the concentration is low, the serious game can be customized, changing, for instance, the training challenge or providing some new look or auditory stimuli to increase attention [10]. This study reveals the "power" of serious games, that have the potential to adapt certain characteristics, namely, their layout in order to arouse the interest of the player, whenever he detects that the motivational levels are below the expected. ...
... In a study conducted by [10], a description of a different approach is presented, based on children's brain wave data, obtained from electroencephalograms (EEG), used to classify and monitor concentration levels. When the concentration is low, the serious game can be customized, changing, for instance, the training challenge or providing some new look or auditory stimuli to increase attention [10]. ...
... In a study conducted by [10], a description of a different approach is presented, based on children's brain wave data, obtained from electroencephalograms (EEG), used to classify and monitor concentration levels. When the concentration is low, the serious game can be customized, changing, for instance, the training challenge or providing some new look or auditory stimuli to increase attention [10]. This study reveals the "power" of serious games, that have the potential to adapt certain characteristics, namely, their layout in order to arouse the interest of the player, whenever he detects that the motivational levels are below the expected. ...
Conference Paper
The teaching method has varied and evolved over the years. The year 2020 is a milestone in this variability. The COVID-19 pandemic unleashed the strict need for a radical adaptation of teaching processes that, worldwide, become exclusively or almost exclusively at a distance. The impact of the digital world on our lives has been and is being felt like never before. Non-formal teaching processes gain crucial importance in this scenario. Serious games are engaging and provide a stimulating environment in which students can explore and discover in a fun and interactive way, improving student’s motivation and performance in mathematics and making them active learners. The adoption in the educational process of serious games, promoting the development of critical thinking, and its interest, as a research topic, by scientists from various areas, namely, mathematics, have gained increasing prominence. With regard to mathematics, despite its recognized importance in the intellectual human development, children and adolescents usually believe that it is a difficult subject, both at a conceptual and procedural level, leading to a lack of motivation and high failure rates. In this paper, based on a solid and recent literature review, we look at the role that serious games play in the learning and motivation of children and adolescents, especially the narrative educational games focused on mathematics. A narrative interactive game is defined as a serious game, in which the story exists to improve the gameplay. Within this context, the Thematic Line Geometrix of the Center for Research and Development of Mathematics and Applications (CIDMA) of the University of Aveiro developed a narrative serious game entitled CNME, based on the historical event “Magellan - Elcano circumnavigation around the world”. This game runs on every platform that has a recent browser and it also has an application for Android and iOS. 
In the CNME digital and interactive game there are two game modes, the generic and the academic, depending on the player's profile. The generic modality was designed to promote mathematical literacy and is aimed at any citizen. The academic modality was designed to promote critical and creative thinking and is aimed at young people with mathematical knowledge at the level of the 3rd Cycle of the Portuguese Basic Education. In short, CNME is an interactive mathematical narrative game, aiming at mathematical learning in an interactive, playful and motivating way, anchored in a notable and true historical event conceived under a set of scrutinized scientific evidence.
Full-text available
In modern Human-Robot Interaction, much thought has been given to accessibility regarding robotic locomotion, specifically the enhancement of awareness and lowering of cognitive load. On the other hand, with social Human-Robot Interaction considered, published research is far sparser given that the problem is less explored than pathfinding and locomotion. This thesis studies how one can endow a robot with affective perception for social awareness in verbal and non-verbal communication. This is possible by the creation of a Human-Robot Interaction framework which abstracts machine learning and artificial intelligence technologies which allow for further accessibility to non-technical users compared to the current State-of-the-Art in the field. These studies thus initially focus on individual robotic abilities in the verbal, non-verbal and multimodality domains. Multimodality studies show that late data fusion of image and sound can improve environment recognition, and similarly that late fusion of Leap Motion Controller and image data can improve sign language recognition ability. To alleviate several of the open issues currently faced by researchers in the field, guidelines are reviewed from the relevant literature and met by the design and structure of the framework that this thesis ultimately presents. The framework recognises a user's request for a task through a chatbot-like architecture. Through research in this thesis that recognises human data augmentation (paraphrasing) and subsequent classification via language transformers, the robot's more advanced Natural Language Processing abilities allow for a wider range of recognised inputs. That is, as examples show, phrases that could be expected to be uttered during a natural human-human interaction are easily recognised by the robot. 
This allows for accessibility to robotics without the need to physically interact with a computer or write any code, with only the ability of natural interaction (an ability which most humans have) required for access to all the modular machine learning and artificial intelligence technologies embedded within the architecture. Following the research on individual abilities, this thesis then unifies all of the technologies into a deliberative interaction framework, wherein abilities are accessed from long-term memory modules and short-term memory information such as the user's tasks, sensor data, retrieved models, and finally output information. In addition, algorithms for model improvement are also explored, such as through transfer learning and synthetic data augmentation and so the framework performs autonomous learning to these extents to constantly improve its learning abilities. It is found that transfer learning between electroencephalographic and electromyographic biological signals improves the classification of one another given their slight physical similarities. Transfer learning also aids in environment recognition, when transferring knowledge from virtual environments to the real world. In another example of non-verbal communication, it is found that learning from a scarce dataset of American Sign Language for recognition can be improved by multi-modality transfer learning from hand features and images taken from a larger British Sign Language dataset. Data augmentation is shown to aid in electroencephalographic signal classification by learning from synthetic signals generated by a GPT-2 transformer model, and, in addition, augmenting training with synthetic data also shows improvements when performing speaker recognition from human speech. 
Given the importance of platform independence due to the growing range of available consumer robots, four use cases are detailed, and examples of behaviour are given by the Pepper, Nao, and Romeo robots as well as a computer terminal. The use cases involve a user requesting their electroencephalographic brainwave data to be classified by simply asking the robot whether or not they are concentrating. In a subsequent use case, the user asks if a given text is positive or negative, to which the robot correctly recognises the task of natural language processing at hand and then classifies the text, this is output and the physical robots react accordingly by showing emotion. The third use case has a request for sign language recognition, to which the robot recognises and thus switches from listening to watching the user communicate with them. The final use case focuses on a request for environment recognition, which has the robot perform multimodality recognition of its surroundings and note them accordingly. The results presented by this thesis show that several of the open issues in the field are alleviated through the technologies within, structuring of, and examples of interaction with the framework. The results also show the achievement of the three main goals set out by the research questions; the endowment of a robot with affective perception and social awareness for verbal and non-verbal communication, whether we can create a Human-Robot Interaction framework to abstract machine learning and artificial intelligence technologies which allow for the accessibility of non-technical users, and, as previously noted, which current issues in the field can be alleviated by the framework presented and to what extent.
Conference Paper
Full-text available
Este artigo busca apresentar a pesquisa que mapeou o desenvolvimento, na literatura, sobre o tema Transtorno de Déficit de Atenção/Hiperatividade (TDAH) e Jogos Digitais, por meio de pesquisa bibliométrica nas bases científicas SpringerLink, ScienceDirect (Elsevier), Scopus (Elsevier), Web of Science - Coleção Principal (Clarivate Analytics), ACM Digital Library, MEDLINE/PubMed (via National Library of Medicine), IEEE Xplore e Compendex (Engineering Village - Elsevier. Para cumprir com esta meta, lançamos mão das diretrizes do protocolo propostas por Kitchenham e Charters para realizar o mapeamento sistemático, começando pela identificação da necessidade do estudo, seguido por: formulação da questão de pesquisa, busca dos estudos primários, avaliação de qualidade, extração de dados, síntese e análise dos resultados. Por fim, os principais resultados obtidos na pesquisa comprovam que os jogos digitais conseguem auxiliar no desenvolvimento das funções cognitivas no tratamento do TDAH. Ao fim deste artigo apresentamos a composição de estudos que foram escritos nas línguas portuguesa e espanhola e que serviram de subsídio para extração dos dados utilizados na pesquisa supracitada.
Full-text available
This study suggests a new approach to EEG data classification by exploring the idea of using evolutionary computation to both select useful discriminative EEG features and optimise the topology of Artificial Neural Networks. An evolutionary algorithm is applied to select the most informative features from an initial set of 2550 EEG statistical features. Optimisation of a Multilayer Perceptron (MLP) is performed with an evolutionary approach before classification to estimate the best hyperparameters of the network. Deep learning and tuning with Long Short-Term Memory (LSTM) are also explored, and Adaptive Boosting of the two types of models is tested for each problem. Three experiments are provided for comparison using different classifiers: one for attention state classification, one for emotional sentiment classification, and a third experiment in which the goal is to guess the number a subject is thinking of. The obtained results show that an Adaptive Boosted LSTM can achieve an accuracy of 84.44%, 97.06%, and 9.94% on the attentional, emotional, and number datasets, respectively. An evolutionary-optimised MLP achieves results close to the Adaptive Boosted LSTM for the two first experiments and significantly higher for the number-guessing experiment with an Adaptive Boosted DEvo MLP reaching 31.35%, while being significantly quicker to train and classify. In particular, the accuracy of the nonboosted DEvo MLP was of 79.81%, 96.11%, and 27.07% in the same benchmarks. Two datasets for the experiments were gathered using a Muse EEG headband with four electrodes corresponding to TP9, AF7, AF8, and TP10 locations of the international EEG placement standard. The EEG MindBigData digits dataset was gathered from the TP9, FP1, FP2, and TP10 locations.
Conference Paper
Full-text available
This paper explores single and ensemble methods to classify emotional experiences based on EEG brainwave data. A commercial MUSE EEG headband is used with a resolution of four (TP9, AF7, AF8, TP10) electrodes. Positive and negative emotional states are invoked using film clips with an obvious valence, and neutral resting data is also recorded with no stimuli involved, all for one minute per session. Statistical extraction of the alpha, beta, theta, delta and gamma brainwaves is performed to generate a large dataset that is then reduced to smaller datasets by feature selection using scores from OneR, Bayes Network, Information Gain, and Symmetrical Uncertainty. Of the set of 2548 features, a subset of 63 selected by their Information Gain values were found to be best when used with ensemble classifiers such as Random Forest. They attained an overall accuracy of around 97.89%, outperforming the current state of the art by 2.99 percentage points. The best single classifier was a deep neural network with an accuracy of 94.89%.
Conference Paper
Full-text available
This work aims to find discriminative EEG-based features and appropriate classification methods that can categorise brainwave patterns based on their level of activity or frequency for mental state recognition useful for human-machine interaction. By using the Muse headband with four EEG sensors (TP9, AF7, AF8, TP10), we categorised three possible states such as relaxing, neutral and concentrating based on a few states of mind defined by cognitive behavioural studies. We have created a dataset with five individuals and sessions lasting one minute for each class of mental state in order to train and test different methods. Given the proposed set of features extracted from the EEG headband five signals (alpha, beta, theta, delta, gamma), we have tested a combination of different features selection algorithms and classifier models to compare their performance in terms of recognition accuracy and number of features needed. Different tests such as 10-fold cross validation were performed. Results show that only 44 features from a set of over 2100 features are necessary when used with classical classifiers such as Bayesian Networks, Support Vector Machines and Random Forests, attaining an overall accuracy over 87%.
Full-text available
Cerebral palsy is a severe condition usually caused by decreased brain oxygenation during pregnancy, at birth or soon after birth. Conventional treatments for cerebral palsy are often tiresome and expensive, leading patients to quit treatment. In this paper, we describe a virtual environment for patients to engage in a playful therapeutic game for neuropsychomotor rehabilitation, based on the experience of the occupational therapy program of the Nucleus for Integrated Medical Assistance (NAMI) at the University of Fortaleza, Brazil. Integration between patient and virtual environment occurs through the hand motion sensor “Leap Motion,” plus the electroencephalographic sensor “MindWave,” responsible for measuring attention levels during task execution. To evaluate the virtual environment, eight clinical experts on cerebral palsy were subjected to a questionnaire regarding the potential of the experimental virtual environment to promote cognitive and motor rehabilitation, as well as the potential of the treatment to enhance risks and/or negatively influence the patient’s development. Based on the very positive appraisal of the experts, we propose that the experimental virtual environment is a promising alternative tool for the rehabilitation of children with cerebral palsy.
Full-text available
Stress related medical disorders such as cardiovascular disease, diabetes and depression are serious medical issues that can cause disability and death. Techniques to prevent their development and exacerbation are needed. Casual video games (CVGs) are fun, easy to play, spontaneous and are tremendously popular. In this randomized controlled study we tested the effects of CVGs on mood and stress by comparing people playing CVGs with control subjects under similar conditions. Electroencephalography (EEG) changes during game play were consistent with increased mood and corroborated findings on psychological reports. Moreover, heart rate variability (HRV) changes were consistent with autonomic nervous system relaxation or decreased physical stress. In some cases CVGs produced different brain wave, heart rate variability and psychological effects. These finding have broad implications which include the potential development of prescriptive interventions using casual video games to prevent and treat stress related medical disorders.
Conference Paper
Full-text available
Motivation can sometimes be a problem for learners, especially when they do not find the purpose of a learning activity to be clear. Gamification is a recent concept, primarily from the web development industry, that can make learning activities more active and participatory. This paper provides an overview of the background of gamification, the relevant key game concepts, gives an overview of examples from outside education and provides some suggestions for implementing gamification in education generally, and e-learning specifically. This paper is intended to give readers an overview of gamification, allowing an informed analysis of the technique in their own context to be made.
Full-text available
A child’s play is the meaning of its life in preschool age. It was his refuge from fears, field of battles, the polygon of game, achievements and successes, soothing and dreams. There come to the fore desires, aspirations, feelings, thoughts and needs of the child for active action in the environment in which it lives. The game satisfies the biological and psychological needs of children and contributes to their mental, emotional, social and moral development. Different roles in the games, although the product of a child’s fantasy, allow the child to gain personal experience of good and bad, about what is positive and what is not in behavior. Games are an important form of entertainment for children and adults, through which children organize independently and they have special educational significance. They are a powerful tool for education because through games children acquire knowledge, enrich their experience, and develop skills and habits. The goal of our research is to investigate how the game is implemented in the educational work of preschool institutions, and at the same time to determine which of the traditional games are used in kindergartens, and how modern games enable and assist the process of coming to self-knowledge.
Video games are a ubiquitous part of almost all children's and adolescents' lives, with 97% playing for at least one hour per day in the United States. The vast majority of research by psychologists on the effects of "gaming" has been on its negative impact: the potential harm related to violence, addiction, and depression. We recognize the value of that research; however, we argue that a more balanced perspective is needed, one that considers not only the possible negative effects but also the benefits of playing these games. Considering these potential benefits is important, in part, because the nature of these games has changed dramatically in the last decade, becoming increasingly complex, diverse, realistic, and social in nature. A small but significant body of research has begun to emerge, mostly in the last five years, documenting these benefits. In this article, we summarize the research on the positive effects of playing video games, focusing on four main domains: cognitive, motivational, emotional, and social. By integrating insights from developmental, positive, and social psychology, as well as media psychology, we propose some candidate mechanisms by which playing video games may foster real-world psychosocial benefits. Our aim is to provide strong enough evidence and a theoretical rationale to inspire new programs of research on the largely unexplored mental health benefits of gaming. Finally, we end with a call to intervention researchers and practitioners to test the positive uses of video games, and we suggest several promising directions for doing so.
There are not many initiatives in the area of game development for children with special needs, especially children with Down syndrome. The major purpose of our research is to promote the cognitive development of disabled children in the context of inclusive education. To do so, we address aspects of interaction, communication, and game design in stimulating selected cognitive abilities. By using a Human-Computer Interaction method based on the Inspection of Evaluation, it was possible to study and understand user interaction with the interface and thus examine the positive aspects as well as the communicability problems found with the JECRIPE game - a game developed especially for children with Down syndrome of preschool age.
We present a Brain–Computer Interface (BCI) game, the MindGame, based on the P300 event-related potential. In the MindGame interface, P300 events are translated into movements of a character on a three-dimensional game board. A linear feature selection and classification scheme is applied to identify P300 events and to calculate gradual feedback features from a scalp electrode array. During the online run of the game, classification is computed on a single-trial basis, without averaging over subtrials. We achieve single-trial classification rates of 0.65 during online operation of the system while providing gradual feedback to the player.
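The linear single-trial P300 detection scheme described above can be illustrated with a minimal sketch. This is not the MindGame authors' implementation: the simulated epochs, the synthetic P300 deflection, the time-bin averaging into features, and the ridge-regularized linear discriminant are all assumptions chosen to show the general pipeline (epoch, extract linear features, project onto a learned weight vector, threshold per trial).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated single-trial EEG epochs: n_trials x n_channels x n_samples.
# Target trials carry a positive deflection near 300 ms post-stimulus
# (a crude stand-in for the P300); non-target trials are noise only.
fs = 250                       # sampling rate in Hz (assumed)
n_channels, n_samples = 8, 200

def make_epochs(n_trials, target):
    x = rng.normal(0.0, 1.0, (n_trials, n_channels, n_samples))
    if target:
        t = np.arange(n_samples) / fs
        p300 = 1.5 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
        x += p300              # broadcast the bump over trials and channels
    return x

X = np.concatenate([make_epochs(100, True), make_epochs(100, False)])
y = np.array([1] * 100 + [0] * 100)

# Linear feature extraction: average each epoch into 10 time bins per
# channel and flatten -- a common feature scheme for P300 detection.
def features(x):
    n = len(x)
    return x.reshape(n, n_channels, 10, -1).mean(axis=3).reshape(n, -1)

F = features(X)

# Ridge-regularized linear discriminant fit by least squares:
# w = (F_c^T F_c + lam * I)^{-1} F_c^T y_c  on centered data.
Fc = F - F.mean(axis=0)
yc = y - y.mean()
lam = 1.0
w = np.linalg.solve(Fc.T @ Fc + lam * np.eye(F.shape[1]), Fc.T @ yc)

# Single-trial classification: threshold each trial's projected score.
scores = Fc @ w
pred = (scores > 0).astype(int)
acc = (pred == y).mean()
print(f"single-trial training accuracy: {acc:.2f}")
```

On this easy synthetic data the discriminant separates targets from non-targets almost perfectly; real single-trial EEG is far noisier, which is why online rates such as the 0.65 reported above are typical without trial averaging.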