Towards AI-based Interactive Game Intervention to Monitor Concentration Levels in Children with Attention Deficit

Diego R. Faria, Jordan J. Bird, Cintia Daquana, Jhonatan Kobylarz, Pedro P. S. Ayrosa
Abstract— Preliminary results of a new approach to neurocognitive training for academic engagement and the monitoring of attention levels in children with learning difficulties are presented. Machine Learning (ML) techniques and a Brain-Computer Interface (BCI) are used to develop an interactive AI-based game for educational therapy that monitors the progress of children's concentration levels during specific cognitive tasks. Our approach resorts to acquiring children's brainwaves using electroencephalography (EEG) to classify concentration levels through model calibration. The real-time brainwave patterns are inputs to our game interface to monitor concentration levels. When concentration drops, the educational game can personalize to the user by changing the challenge of the training or providing new visual or auditory stimuli in order to reduce the attention loss. To understand concentration-level patterns, we collected brainwave data from children at various primary schools in Brazil who have intellectual disabilities, e.g. autism spectrum disorder and attention deficit hyperactivity disorder. Preliminary results show that we successfully benchmarked (96%) the acquired brainwave patterns using various classical ML techniques. The results obtained through the automatic classification of brainwaves will be fundamental to the further development of our full approach. Positive feedback from questionnaires was obtained for both the AI-based game and the engagement and motivation during the training sessions.
Keywords - technology for educational therapy, children with
attention deficit, BCI, AI-based games, Machine Learning.
I. INTRODUCTION
Over the past years, games have become a technological service tool in different application domains. Some projects apply games directly as therapeutic tools in healthcare for rehabilitation and the development of cognitive function. Experiments with children demand careful design of the interaction functionalities and interface for ease of use. It has been shown that games can boost confidence and interest in learners who otherwise lack confidence in classrooms. Gaming can be used to encourage engagement and motivation as well as to promote learning, if applied properly to the areas in which it would be most beneficial.
Diego R. Faria and Jordan J. Bird are with the ARVIS Lab, Computer Science Department, School of Engineering and Applied Science, Aston University, UK, emails: {d.faria, birdj1}@aston.ac.uk. Cintia Daquana is with the Education Secretariat of Cambé-PR, Brazil, Department of Special Education and Inclusion, email: educacaoinclusiva@cambe.pr.gov. Jhonatan Kobylarz is with the Federal University of Paraná, Brazil. Pedro P. S. Ayrosa is with LABTED-UEL and Computer Science, Londrina State University, Brazil, email: ayrosa@uel.br.
Fig. 1. Overview of the proposed AI-based Interactive Game (full system)
using EEG brain waves to classify concentration levels, hand motion for
gesture recognition to control the game, and emotion recognition to analyze
the mood of the children. The multimodal data will help the system to adapt
to the current state of the users to keep them motivated/concentrated.
Undoubtedly, technology in education helps individuals
develop necessary skills, but it also opens up opportunities
for a much more fluid learning experience. The need to deliver continuous improvement is driving schools and higher education institutions to seek out proven new ways of delivering learning, including to people with special needs. By
combining technology with traditional forms of teaching,
learning providers can attain powerful means of achieving
results, while benefitting from a strong return on investment.
Whether cost savings are a primary or additional objective,
technology opens up a realm of opportunities for new and
improved content delivery and personalized learning.
Technology can be personalized to meet each person’s needs
through a blend of cognitive-motor gaming functions. In
addition, parents have the independence to have their
children learn within and outside clinical or school sessions
by using this technology on mobile devices. In the long term,
it is a worthwhile investment (e.g. training teachers/educators of students with special needs), with tools having a proven record of maximizing children's engagement. Games can be a decisive vehicle through which children learn about themselves and the environment, and develop social skills.
However, for many children with any kind of impairment,
adapted play opportunities are often limited.
AI-based interactive games can be applied to child-machine interaction within the context of educational therapy, assisting the facilitation of adaptive learning-related coping and improved cognitive-skill outcomes in educational settings, with a focus on aiding children with learning difficulties. The application of technology to
this intervention is a promising and ground-breaking avenue
to promote adjustment and development in children, who
tend to be increasingly enthusiastic about the use of this
technology.
Fig. 2. Overview of the proposed approach to monitor the progress of
children over time by using our neurocognitive training through AI-based
games.
In this work we introduce a new approach to an intelligent system by building an adaptive AI-based game for neurocognitive training to boost and maintain the concentration levels of children with intellectual disability and attention deficit. Preliminary work using ML techniques combined with a BCI and a Natural User Interface (NUI) for tracking hand motion makes this game more attractive for children, engaging them with this kind of technology and awakening their interest and motivation to play educational games. During the playing sessions we recorded and measured children's brainwave patterns to monitor their concentration levels, towards adapting the game interface when concentration drops by providing visual or auditory stimuli that keep their attention and focus on the training session. Figures 1 and 2 depict a sketch of our proposed approach in terms of technological development and future implementation steps.
The remainder of this paper is organized as follows: Section II describes related work; Section III presents the technological development and the methods used for the intelligent game implementation; Section IV describes the apparatus and experimental setup for data acquisition; Section V presents the preliminary results, followed by Section VI with discussion and the strategy for future improvements; finally, Section VII draws the conclusions.
II. BACKGROUND AND RELATED WORK
Games can provide a better and more efficient learning
environment as they introduce a ‘fun’ element to education
which makes the whole experience more appealing for
students. Gamification of education is not only there to
engage users but also to develop cognitive abilities as well as
problem solving skills. The lack of enjoyment in education
leads to failure in learning [1]. When students are engaged,
their motivation to learn will increase because they become
more focused [2]. Students thrive on dynamic learning
experiences. Boredom is still a challenge in traditional
education settings. The idea of combining gaming with education is not a new one; however, recently there has been a steady increase in "gamifying" education and creating more interactive content to motivate learners and increase engagement [3].
A study conducted in [4], which investigated the role of modern games in children's development, concluded that games with enough dynamic content can enable children to develop physically, through motor skills, and intellectually, through problem-solving skills. There is a strong association between engaging in computer-based learning activities and cognitive development in children. Research conducted in [5] found evidence that specially designed games do have benefits in increasing coordinated motor skills. The authors in [6] found strong evidence of measurable changes in neural processing through the use of gaming. It is important to note that enhanced cognitive performance is not a product of all the digital games that are available, but increasing cognitive processing through games is possible, if they are designed correctly. There are several ways to assess the usability and efficacy of educational games. Human-Computer Interaction (HCI) techniques (e.g. semiotic inspection) allow computer scientists and educational game developers to evaluate such games and their efficacy on children's development [7].
The authors in [8] designed an interesting and novel interface integrating a commercial headband with a single dry electrode, which captures the beta frequency from the frontal lobe to measure the concentration level. They integrated the virtual environment with Leap Motion technology for gesture recognition, aiming to use this novel interface, developed in Unity, with children with cerebral palsy. Although the idea is interesting, the researchers did not use any machine learning for gesture recognition with the Leap Motion sensor, nor to automatically classify brainwave activity from the EEG sensor. Instead, the work relied on the basic functionalities provided by both commercial sensors, and thus analysis of the performance of users' concentration levels, or of the efficacy of the gesture recognition in specific tasks, was not possible. However, they did evaluate the acceptance of their novel virtual environment as a potential user-friendly interface for measuring concentration level.
Different approaches have been developed using EEG data
to classify emotions [9], immersive 3D games [10], control
games [11], [12], and analysis of brainwave patterns of
concentration, mood, etc. while patients play games [13]. However, none of them combines an intelligent game driven by EEG responses, a natural user interface, and emotional reactions together to keep the user's concentration during the interaction. Furthermore, none of them monitors and analyzes long-term data to track the progress of the user over time after multiple training sessions. This work goes beyond the state of the art by developing an intelligent system that adapts to users by generating appropriate stimuli, based on the classification of EEG brainwave patterns, to keep their concentration during the training sessions. In addition, the future implementation shown in Figure 2 will make our approach a potential tool for the neurocognitive training of children with attention deficit.
III. AI-BASED INTELLIGENT GAME DEVELOPMENT
The idea of an AI-based interactive game is conceptualized
here for an intervention context, with a number of potential
applications in the education system, such as cognitive
educational therapy (e.g. facilitation of coping skills), and
health promotion settings (e.g. training of health-related
skills). Current empirical evidence suggests that the use of smart technology in therapeutic and educational contexts is likely to enhance the outcomes of cognitive-behavioral
interventions. The primary question of this work is: How can
one endow an intelligent game with advanced perception,
learning and interaction capabilities in order to stimulate
children with learning difficulties to enhance their cognitive
and motor functions without stressing, but engaging them? In
order to answer this question, our strategy is focused on: (i)
the development of an AI-based game which adopts an
approach to automatically classify mental states to measure
concentration levels through EEG data; (ii) control and
interaction via gesture recognition using hand motion
patterns within a game environment; (iii) focusing the
neurocognitive training using user-friendly interfaces for
tasks such as reasoning, hand-eye coordination, and memory
to improve the attention deficit of children with learning
difficulties; (iv) real-time, autonomous adaption by the game
when concentration drops. Once the concentration level is
classified during the user interaction, the intelligent system
can make decision on whether new stimulus (visual or
auditory) is needed to keep the user attention to avoid
concentration loss.
A. EEG Data Processing and Classification
Our EEG classification technique is based on our previous work on mental state [14] and emotional state [15] classification.
In these works, the temporal brainwave signals are processed,
and mathematical features are extracted. This is performed
through a sliding window technique, where windows of
length 1s with an overlap of 0.5s capture short segments of
brain activity, and the temporal data is then converted to
singular data points covering the window through various
algorithms. The 988 features derived from the 1 s windows, as well as from the 0.25 s and 0.5 s windows by offset, include Fourier transforms, Shannon entropy, and log-energy entropy, among many others (the full list of features is detailed in [14]). An extensive set of features beyond those usually observed in a clinical setup [17], [18] is extracted due to the more cost-effective nature of the Muse headband sensor in comparison to expensive but higher-quality clinical EEGs.
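To make this pipeline concrete, below is a minimal sketch of the sliding-window feature extraction, assuming the raw EEG arrives as a NumPy array sampled at 256 Hz; the window length, overlap, and the example features follow the description above, while the sampling rate and helper names are our own illustrative choices.

```python
import numpy as np

FS = 256          # assumed sampling rate (Hz); not specified above
WIN = FS          # 1 s window
STEP = FS // 2    # 0.5 s step, i.e. 0.5 s overlap between windows

def shannon_entropy(x, bins=32):
    """Shannon entropy of the amplitude distribution in one window."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def log_energy_entropy(x):
    """Log-energy entropy: sum of the logs of squared sample values."""
    e = x[x != 0] ** 2
    return np.sum(np.log(e))

def window_features(x):
    """Reduce one temporal window to singular data points."""
    fft_mag = np.abs(np.fft.rfft(x))  # Fourier transform magnitudes
    return np.array([
        x.mean(), x.std(), x.min(), x.max(),
        shannon_entropy(x), log_energy_entropy(x),
        fft_mag.mean(), fft_mag.max(),
    ])

def extract_features(signal):
    """Slide 1 s windows with 0.5 s overlap over a single EEG channel."""
    rows = [window_features(signal[s:s + WIN])
            for s in range(0, len(signal) - WIN + 1, STEP)]
    return np.vstack(rows)

# Example: 10 s of synthetic single-channel EEG -> 19 feature vectors.
demo = np.random.default_rng(0).normal(size=FS * 10)
print(extract_features(demo).shape)
```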
Following this, the features are organized into a matrix, and since the temporal segments are now represented as numerical features, machine learning algorithms can learn useful pattern recognition rules from the data. In this study, we consider the ground-truth dataset, that is, the data collected during the calibration phase at the start of the experiment. We benchmark various classical machine learning algorithms on the ternary problem of concentrating/relaxed/neutral and compare the predictions to the collected data, in order to discern how classifiable the data is prior to future exploration of the unknown data, i.e. the data recorded while the child is interacting with the educational game. Training is performed using 10-fold cross-validation in order to prevent overfitting.
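The following is a sketch of this benchmarking stage under 10-fold cross-validation; scikit-learn is an assumed implementation choice (the text does not name a toolkit), and the synthetic X and y are placeholders for the real calibration feature matrix and labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder data; the real matrix is (n_windows, 988).
X = rng.normal(size=(600, 100))
y = rng.choice(["concentrating", "relaxed", "neutral"], size=600)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=500),
}

for name, model in models.items():
    # 10-fold cross-validation, as used to guard against overfitting
    scores = cross_val_score(model, X, y, cv=10)
    print(f"{name}: {scores.mean():.4f} +/- {scores.std():.4f}")
```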
B. Interface Development Using the Unity Game Engine
Unity is a 2D and 3D game development engine created by Unity Technologies, allowing the development of a wide range of games on multiple platforms. It has been extensively used in industry and academia since it has a number of features that make it particularly appealing: it is inexpensive, easy to use, has a fairly shallow learning curve, accommodates multiple scripting languages, and ships with intuitive prototyping tools. Our games are based on educational games reported in the state of the art, focused on perception, memory, reasoning and attention. We designed a spelling game, where the interface randomly selects images of animals and their sounds, separated into categories of easy, medium and complex spelling. As children interact with the interface and progress through each phase, the difficulty is increased. The interface generates random cubes with alphabetic letters on the ground, and the user has to pick up the letters in the correct order corresponding to the animal's name and throw them into a box until the word is complete. If the player succeeds, then a new image of an animal is randomly given; otherwise, the game warns the user about the mistake and prompts the user to try again. Figure 3 shows an example of this interface.

Fig. 3. Spelling game interface developed in Unity. The user has to pick up the letters corresponding to the animal's name and throw them to the right side (inside the box).

Fig. 4. Aircraft interface designed in Unity and used with the Leap Motion to capture the user's hand motion to control the aircraft.

Fig. 5. Leap Motion data. Left: palm tracking data. Right: fingertip tracking data.
Another interface we designed is an aircraft controlled by the hand, where its pose and orientation are given by the hand pose (roll-pitch-yaw), and its acceleration by moving the hand towards or away from the screen, while also allowing other directions such as left and right. Figure 4 shows the interface. The main idea of this game is to control the aircraft to avoid obstacles, find free routes, and gather objectives (coins) to increase the player's score. It demands hand-eye coordination to follow specific routes and to move the hand appropriately for the correct motion within the virtual environment, with the ability to increase the difficulty (minimum speed, obstacles) if required, based on concentration levels measured from the brain.
C. Gesture Recognition
Our gesture recognition module is based on the Leap Motion Controller (LMC) Software Development Kit (SDK) functions. The LMC is primarily designed for hand gesture and finger position detection in interactive software applications. It consists of three infrared emitters and two CCD cameras to track the hand, and it provides preprocessed data through its Application Programming Interface (API). The data provided are as follows (Fig. 5): the position of the palm frame by frame, with its normal and velocity; the hand direction; fingertip positions and velocities; and the arm direction. Currently we use the Leap Motion API functions for gesture recognition instead of implementing our own classifier, since they work very well for gestures such as opening and closing the hand in simple tasks (e.g. pinching, picking up and releasing an object, and also static gestures like pointing, two fingers, three fingers and so on). The hand pose provided by the API can be directly mapped to the controller (e.g. palm and finger position and orientation) in the game interface to move an object forward, backward, left and right, and to rotate it based on the hand orientation.
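As an illustration of this mapping (not the actual LMC API; the pose fields and rest distance below are hypothetical placeholders for the palm data the SDK supplies frame by frame), a sketch in Python:

```python
from dataclasses import dataclass

@dataclass
class HandPose:
    # Palm pose per frame; values here are placeholders standing in
    # for the preprocessed data the tracking API provides.
    roll: float    # radians, rotation about the forward axis
    pitch: float   # radians, nose up/down
    yaw: float     # radians, nose left/right
    z: float       # palm distance from the screen, in mm

def aircraft_controls(pose: HandPose, z_rest: float = 150.0) -> dict:
    """Map the palm pose to aircraft roll/pitch/yaw and throttle.

    Moving the hand towards the screen (smaller z) accelerates and
    pulling it back decelerates, as described in the text.
    """
    throttle = max(0.0, min(1.0, (z_rest - pose.z) / z_rest + 0.5))
    return {
        "roll": pose.roll,
        "pitch": pose.pitch,
        "yaw": pose.yaw,
        "throttle": throttle,
    }

# Example frame: hand tilted right and pushed slightly forward.
print(aircraft_controls(HandPose(roll=0.3, pitch=-0.1, yaw=0.0, z=120.0)))
```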
The Unity Game Engine and LMC SDK are easily
compatible with one another, making game design and
implementation integration easier, since the LMC functions
are embedded within the Unity development interface.
IV. EXPERIMENTAL SETUP AND DATA ACQUISITION
In order to gather data for the experiment, the commercial
Muse headband (EEG) sensor is used as can be observed in
Figure 6. The subject group was made up of a total of 30
children from 3 different public primary schools in Cambé, a
municipality within the State of Paraná in Southern Brazil.
All of the children’s legal parents or guardians were informed
in detail of the project characteristics and written consent was
given. Special monitoring roles were created for the trial in order to ensure compliance with ethical requirements, confidentiality and the protection of personal data. No trials were
performed without previous approval by the ethical
committee and data protection authorities in Brazil. The pilot
trials were conducted in accordance with the highest ethical
standards from the UK and Brazil. To ensure that the
information is easy to understand, all written information given to the people involved was approved by experts on "Easy to Read" guidelines. All trials were conducted after
receiving the signed "Informed Consent Form". The sample size of 30 children was selected for preliminary experimentation, with positive results providing assurance and justification for collecting a much larger dataset in the future.
The 30 children in the subject group were aged from six to ten years old; 24 were male and 6 were female. The children considered in this experiment all had forms of disability, including Intellectual Disability (ID), Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD), which were widespread within the subject group.
Fig. 6. EEG recording. Left: Muse headband sensor. Right: example of temporal data (alpha, beta, theta and gamma signals provided by the sensor).
Fig. 7. Brain activity observed by an instructor.
The child has been omitted from the photo for privacy reasons. Note that this
data is not being recorded, since the child is behind the camera attempting to
make the headband comfortable. Prior to data being recorded, the instructor
made sure the headband was properly calibrated and recording normally.
Fig. 8. Example of the experimental setup. On the left table, the Leap Motion sensor sits on top of the laptop with the game ready. On the right, the shell game is set up for collecting calibration data, along with a smartphone used to record videos for future extraction of facial features to classify emotions, etc.
Oppositional defiant disorder (ODD) and hydrocephaly were also present within the subject group. One of the subjects suffered from cerebral palsy. Finally, one subject with Down's syndrome could not concentrate for a long enough period for data to be recorded, but through informal observation was seen to enjoy the Leap Motion game. Due to this, another child was added to the subject group to retain the planned 30 (the above ethical consent was obtained).
example of brain activity being recorded can be seen in
Figure 7, and the full experimental setup can be seen in
Figure 8. For each of the children, up to two minutes of data is recorded during the interaction with the games for each of the three states of interest, using the experimental setup shown in Figure 8. For the concentration state, several of the subjects were not recorded for the full two minutes due to issues with maintaining attention for the duration of the recording. This leads to a slight class imbalance of 820 data objects for concentrating and 830 each for the relaxed and neutral states. Although this slight imbalance is present, it was noted that precision and recall strongly supported the observed accuracy of the algorithms, detailed further in Section V. The calibration and educational activities can be seen in Figure 9.

Fig. 9. Data acquisition: examples of the calibration and educational activities using the games for interaction. Faces are blurred for privacy reasons.
For the concentration state, children played the 'shell game' with an instructor. The child was tasked with following the location of a ball passed under three cups, with the difficulty increasing based on the child's performance in order to retain attention through challenge. For the relaxed state, the child was asked to breathe slowly (following the instructor) and to sit in a position which allowed the muscles to relax. For the neutral state, no stimuli were present. The recording was monitored by observation: if the subject lost concentration on the task, the recording was stopped. This was done in order to prevent contamination of the calibration data.
Following the calibration experiment, all the children
played two educational games – one with the Leap Motion
and one without, using the mouse to select objects starting
with a specific letter of the alphabet given by the game
interface. In future we plan to take the calibrated model
derived in this experiment and apply it to this data in order to
discern the effects of physical engagement in games.
V. PRELIMINARY RESULTS
At this stage, the results are related to the performance of
classification algorithms given the EEG data. The results
from the single algorithm benchmarking experiment can be
observed in Table I. Random Forest and Logistic Regression hyperparameters are manually optimized, as shown in Figures 10 and 11. Across the three performance measures, Logistic Regression performed best, with an overall classification accuracy of 96.24% over three classes: concentrated, relaxed and neutral. The Logistic Regression model took 130 seconds to execute whereas the Random Forest model took only 19 seconds, and thus the 0.34% reduction in overall score may be worth the reduced computational complexity in future, when the algorithm is required to run on standard office machines.
TABLE I: BENCHMARKING OF MACHINE LEARNING ALGORITHMS FOR CLASSIFICATION OF CALIBRATION DATA (BRAINWAVES)

Algorithm               Accuracy (%)   Precision   Recall
Random Forest           95.9           0.96        0.959
SVM classifier          93.42          0.934       0.934
Logistic Regression     96.24          0.962       0.962
Bayesian Network        82.25          0.823       0.823
10-Nearest Neighbour    88.75          0.898       0.887
TABLE II: ENSEMBLE METHODS FOR THE TWO BEST MODELS, LOGISTIC REGRESSION AND RANDOM FOREST, TO CLASSIFY BRAINWAVES

Algorithm          Accuracy (%)   Precision   Recall
Vote (Avg. Prob)   97.01          0.97        0.97
Vote (Max. Prob)   97.01          0.97        0.97
Stacking           96.84          0.96        0.96
Fig. 10. Random Forest tuning: the number of trees in the forest is stepped from 25 to 125 in order to select the best hyperparameters (classification accuracy: 92.82% at 25 trees, 95.88% at 50, and 95.9% at 75, 100 and 125).

Fig. 11. Logistic Regression tuning: the number of boosting iterations is stepped from 100 to 500 in order to select the best hyperparameters (classification accuracy: 95.92% at 100 iterations and 96.24% from 200 to 500).
Fig. 12. Visualization of the highest relative entropy score of an attribute (eigenvalue, IG = 0.685) mapped to the three-class problem.

In Table II, it can be observed that a higher classification accuracy is achieved for the dataset when the two best models are combined. This is observed through both voting methods and stacking, where voting achieves the highest classification accuracy for the dataset. It must be noted that these models are complex and take a lengthy amount of time to train, around 10 to 30 minutes, on a high-end computer (Intel Core i7 @ 3.7 GHz). When the two models vote on the class both by average and by maximum probability, scores of 97% are achieved.
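A sketch of the average-probability voting scheme is shown below, again assuming a scikit-learn implementation and placeholder data; in practice, the estimators would be the two tuned classifiers from Tables I and II.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder data; the real matrix is (n_windows, 988).
X = rng.normal(size=(600, 100))
y = rng.choice(["concentrating", "relaxed", "neutral"], size=600)

# Average-probability voting over the two strongest single models.
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=500)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",  # average the predicted class probabilities
)
print(cross_val_score(ensemble, X, y, cv=10).mean())
```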
If the neutral state is not considered, only two data objects from relaxed and one from concentrating were misclassified as one another by the Logistic Regression model over the whole dataset, showing that the separability of the two classes is strong. With the best ensemble methods, only one data object was misclassified. A preliminary attribute selection search over the brainwave features was performed and sorted by attribute Information Gain (relative entropy). All attributes were noted to have at least some predictive value. The highest were the eigenvalues and covariance matrices of all temporal windows, which had Information Gain values of around 0.6; this is visualized on a per-class basis in Figure 12. The lowest attribute was the skewness (statistical moments) of the first quarter window, which had a small value of 0.006. Computational complexity could likely be reduced in future by performing dimensionality reduction on the benchmark matrix, as well as by minimizing the data extracted from brainwave activity to be classified in real time.
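As an illustration of this attribute ranking, the sketch below uses scikit-learn's mutual information estimator as a stand-in for the Information Gain scoring described above (the placeholder matrix stands in for the real 988-feature data):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
# Placeholder data, smaller than the real 2480 x 988 feature matrix.
X = rng.normal(size=(500, 50))
y = rng.integers(0, 3, size=500)  # three calibration classes

# Score every attribute against the class labels, then rank them.
scores = mutual_info_classif(X, y, random_state=0)
ranking = np.argsort(scores)[::-1]
print("Top 10 attributes:", ranking[:10])
print("Lowest-scoring attribute:", ranking[-1], scores[ranking[-1]])
```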
VI. DISCUSSION AND FUTURE WORK
In this work, we successfully benchmarked various classical machine learning techniques on the dataset acquired in primary schools. In future, we plan on benchmarking a deep neural network using the same technique as [16]. Additionally, the complexity of the model was high due to the consideration of 988 numerical features; in future we will attempt to reduce the complexity of the dataset by applying and benchmarking dimensionality reduction algorithms. This will aim to improve the speed of classification without loss of accuracy, and will aid real-time classification.
Through the derivation of a less complex model which can classify in real time, education can then be customized for the individual based on brain activity. The educational game will consider the model's predictions as an input additional to the child's physical activity, allowing the activities to react in real time to a drop in concentration, in order to raise the concentration level and improve the educational experience.
We will also use our educational framework as a long-term form of cognitive behavioral therapy for certain conditions. To do this, we plan on repeating the experiment in this study with a group of children who do not have an intellectual disability, and generating a signature describing their brainwave activity (see Figure 2). Over a period of months of using our proposed framework, the children with intellectual disabilities will periodically have a similarity measure computed between their own signature and the signature generated from the non-intellectually-disabled group. The goal therein is to increase this similarity metric over time through engaging educational activities, with real-time classification guiding the quality of individual sessions.
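The signature and similarity measure are not yet defined in this work; as one plausible sketch, a session signature could be the mean feature vector and the similarity its cosine:

```python
import numpy as np

def signature(feature_windows: np.ndarray) -> np.ndarray:
    """Summarize a session's feature matrix into one signature vector."""
    return feature_windows.mean(axis=0)

def similarity(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
    """Cosine similarity between two signatures, in [-1, 1]."""
    return float(np.dot(sig_a, sig_b) /
                 (np.linalg.norm(sig_a) * np.linalg.norm(sig_b)))

# Placeholder sessions: rows are windows, columns are the 988 features.
rng = np.random.default_rng(0)
child_session = rng.normal(size=(120, 988))
reference_group = rng.normal(size=(600, 988))
print(similarity(signature(child_session), signature(reference_group)))
```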
The planned activities outlined in this section are enabled by the strong classification ability of the models derived in this study, which paves the way for more in-depth studies and the development of the educational framework.
Regarding the evaluation of the usability and acceptance of our developed games by the children, who have to wear the headband for the BCI and also interact with a NUI via the LMC, we applied a questionnaire to record the opinions of the users after the experiments were complete, as shown below:
a) How easy was it to play the game?
b) What game interface is easier to interact with: (1) leap
motion controller or (2) the traditional game using the
computer mouse/pad?
c) Which game did you prefer?
d) Was it difficult or uncomfortable to wear the Muse EEG headband (BCI)?
e) Was it comfortable to wear the EEG for a long period?
f) Would you agree that you enjoyed playing the leap
motion games?
g) What did you like about these games?
h) What did you dislike about these games?
i) Would you play these games again?
For a), 83.3% of the responses stated it was easy, 10% medium and 6.7% difficult; b) 86.7% of the children indicated that the Leap Motion controller is easier to use than the mouse/pad; c) 83.3% of the children chose the aircraft game as their favorite, with the remaining 16.7% split among the other games; d) 93.3% of the children answered no, it was not difficult to wear the EEG sensor; e) 96.7% of the children answered yes; f) 100% agreed; g) 83.3% answered the NUI (using the hand to control the game); h) here it is interesting to point out that 53.3% mentioned that after playing with the Leap Motion for a long time they sometimes got tired of keeping the hand up to control the aircraft; i) 100% answered yes.
It is important to mention here that all the children, despite their disabilities, some of them even with motor disabilities, engaged well with the game interfaces. Most of them wanted to continue playing after their sessions. The educational staff observed and pointed out that even the students with the greatest attention difficulties (specifically ODD) were focused on the game interfaces. They noticed that with only a few instructions the children were able to play the games, and that the children intuitively worked out how to solve specific problems by observing us (the instructors demonstrated how to play only once, in a short period of time). Since most of the children in the schools we visited belong to low-income families, some of them had never played a similar type of game before. Most of them were very eager to participate in these experiments: they had been told about them a few days beforehand, and they were highly motivated, eagerly awaiting the day.
The next step is to use the confidence given by the concentration-level classification to trigger the game to adapt to the user and change the stimuli when the concentration level drops below a certain threshold. This functionality is implemented and awaits the real-time classification of the EEG data, which is currently under development. Following this step, we will implement the approach outlined in Figure 2 as the final version of our artificially intelligent game to monitor the progress of children over time at schools. With all the validation done so far, we can affirm that this approach has the potential for strong societal impact on children with attention deficit, towards improving their cognitive abilities.
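A minimal sketch of this trigger logic is given below; the threshold value and stimulus identifiers are hypothetical, and the classifier is assumed to expose a per-window confidence for the 'concentrating' class:

```python
import random
from typing import Optional

CONCENTRATION_THRESHOLD = 0.4  # assumed trigger level, to be tuned
STIMULI = ["visual_highlight", "auditory_cue"]  # hypothetical stimulus ids

def adapt_game(p_concentrating: float) -> Optional[str]:
    """Return a stimulus to present when concentration confidence drops."""
    if p_concentrating < CONCENTRATION_THRESHOLD:
        return random.choice(STIMULI)
    return None  # concentration is fine; no adaptation needed

# Example: a stream of per-window confidences from the classifier.
for p in [0.82, 0.61, 0.35, 0.74]:
    print(p, "->", adapt_game(p))
```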
VII. CONCLUSION
This work introduces a new approach to neurocognitive training aimed at improving children's cognition and concentration by using intelligent games. We combined a BCI, to measure children's concentration levels and engagement during interaction with the game, with a NUI controlled by hand gestures. Brainwave patterns of different groups of children with disabilities or disorders (e.g. ID, ODD, ASD, ADHD) were acquired, so that we can examine their patterns of concentration level when performing multiple tasks. Preliminary results on classifying concentration from children's brainwaves show that our framework achieved 96% accuracy. Questionnaires indicated positive feedback on the use of the game and the children's acceptance of this technology in this kind of intervention. Future work will address the full integration of our system, enabling the game to learn the user's concentration pattern and to make decisions to generate new stimuli when the user's attention drops, effectively enabling the real-time, individual personalization of education.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
AUTHOR CONTRIBUTIONS
D. Faria came up with the idea and proposal for this new approach and defined the mathematical models for feature extraction from EEG brainwaves. D. Faria also collaborated on the game interface design. J. Bird defined and tested the classification and optimization models, and contributed to the system implementation and integration, e.g. sensors, games and machine learning. C. Daquana and J. Kobylarz helped to conduct the pilots with children, interacting with them and explaining the technology prior to the experiments; they also prepared and administered the questionnaires. P. Ayrosa helped with discussions and feedback during each phase of the project, including the theoretical methods and research practice. D. Faria, J. Bird, C. Daquana and P. Ayrosa handled the ethical process. D. Faria and J. Bird were in charge of writing, but all authors provided feedback and contributed to specific sections of this paper.
ACKNOWLEDGMENT
This work is partially supported by Fundação Araucária
Paraná / CONFAP Brazil and the Royal Society UK / Newton
Fund, through a mobility grant awarded to Dr Diego Faria
and Prof. Pedro Ayrosa for the project: “Engagement through
AI-based Interactive Games: Neurocognitive training for
children with learning difficulties” in 2019. We would like to
especially thank the Municipal Education Secretariat of
Cambé-PR, Brazil, in particular, Claudia S. C. Segura, for the
kind assistance and help in conducting the experiments with
children in local primary schools in Cambé-PR, and the
directors of these schools: Irma Hilda Soares Municipal
School; Padre Symphoriano Kopf Municipal School; and
Professor Jacídio Correia Municipal School.
REFERENCES
[1] D. J. Shernoff, M. Csikszentmihalyi, B. Schneider, E. S. Shernoff,
"Student engagement in high school classrooms from the perspective
of flow theory". School Psychology Quarterly, 18(2), 158-176, 2003.
[2] I. Glover, "Play as you learn: gamification as a technique for
motivating learners", W. Conf. on Edu., Hyperm. and Telecom., 2013.
[3] C. Pappas, "Gamify the Classroom", 2013. Retrieved March 2019 from http://elearningindustry.com/gamify-the-classroom.
[4] S. Petrovska, D. Sivevskaa, O. Cackov, "Role of the Game in the
Development of Preschool Child", Social and Behav. Sci, 92, 2013.
[5] N. Pierce,"State-of-the-art Report: Digital Game-based Learning for
Early Childhood", Report available at
https://www.learnovatecentre.org/.
[6] I. Granic, A. Lobel, R. Engels, "The Benefits of Playing Video
Games", American Psychologists, 2014.
[7] A. Brandao, D.G. Trevisan, L. Brandao, B. Moreira, G. Nascimento,
C.N. Vasconcelos, E. Clua, P.T. Mourao, "Semiotic Inspection of a
game for children with Down syndrome", Brazilian Symposium on
Computer Games and Digital Entertainment, 2010.
[8] J.M. de Oliveira, R. Carneiro, G. Fernandes, C.S. Pinto, P.R. Pinheiro,
S. Ribeiro, V.H.C. de Albuquerque, "Novel Virtual Environment for
Alternative Treatment of Children with Cerebral Palsy", Comput Intell
Neurosci., 2016.
[9] G. Chanel, C. Rebetez, M. Bétrancourt, T. Pun, "Emotion Assessment
from Physiological Signals for Adaptation of Game Difficulty", IEEE
Trans. Syst. Man Cybern. A Syst. Hum. 41, 1052–1063, 2011.
[10] E. Lalor, S. Kelly, C. Finucane, R. Burke, R. Smith, R. Reilly, G.
McDarby, "Steady-State VEP-Based Brain-Computer Interface
Control in an Immersive 3D Gaming Environment", Eurasip J. Appl.
Signal Process, 2005.
[11] D. Huang, K. Qian, D.Y. Fei, W. Jia, X. Chen, O. Bai,
"Electroencephalography (EEG)-Based Brain-Computer Interface
(BCI): A 2-D Virtual Wheelchair Control Based on Event-Related
Desynchronization/Synchronization and State Control", IEEE Trans.
Neural Syst. Rehabil. Eng. 20, 379–388, 2012.
[12] A. Finke, A. Lenhardt, H. Ritter, "The MindGame: A P300-Based
Brain-Computer Interface Game", Neural Netw., 22, 1329–1333, 2009.
[13] C. Russoniello, K. O’Brien, J. Parks, "The Effectiveness of Casual
Video Games in Improving Mood and Decreasing Stress", J. Cyber
Ther. Rehabil. 2, 53–66, 2009.
[14] J.J. Bird, L.J. Manso, E.P. Ribeiro, A. Ekart, D.R. Faria, "A study on
mental state classification using EEG-based brain-machine interface",
International Conference on Intelligent Systems (IS). IEEE, 2018.
[15] J.J. Bird, A. Ekart, C.D. Buckingham, D.R. Faria, "Mental emotional
sentiment classification with an eeg-based brain-machine interface",
Int. Conf. on Digital Image and Signal Processing (DISP’19), 2019.
[16] J.J. Bird, D.R. Faria, L.J. Manso, A. Ekárt, C.D. Buckingham, “A deep
evolutionary approach to bioinspired classifier optimisation for
brain-machine interaction”, Complexity, 2019.
[17] R.P. Brenner, N. Schaul. "Periodic EEG patterns: classification,
clinical correlation, and pathophysiology." Journal of clinical
neurophysiology: official publication of the American
Electroencephalographic Society 7.2: 249-267, 1990.
[18] N.F. Güler, D.U. Elif, and I. Güler. "Recurrent neural networks
employing Lyapunov exponents for EEG signals classification."
Expert systems with applications 29.3: 506-514, 2005.
Diego R. Faria is a Senior Lecturer in Computer
Science. He is with the School of Engineering and
Applied Science, Aston University, Birmingham
(UK). He is the coordinator and founder of the ARVIS Lab (Aston Robotics, Vision and Intelligent
Systems). Currently (2019-2022) he is the project
coordinator of EU CHIST-ERA InDex project
(Robot In-hand Dexterous manipulation by
extracting data from human manipulation of objects
to improve robotic autonomy and dexterity) funded
by EPSRC UK. Dr Faria is also PI and Co-I (2020-2022) of two projects with
industry (KTP-Innovate UK scheme) related to perception and autonomous
systems applied to autonomous vehicles, and NLP and image processing for
multimedia retrieval. He received his Ph.D. degree in electrical and
computer engineering from the University of Coimbra, Portugal, in 2014. He
holds an M.Sc. degree in computer science from the Federal University of
Parana, Brazil, in 2005. In 2001, he earned a bachelor's degree in informatics technology (data computing & information), and he completed a computer science specialization in 2002 at Londrina State University, Brazil. From
2014 to 2016 he was a postdoctoral fellow at the Institute of Systems and
Robotics, University of Coimbra where he collaborated on different projects
funded by EU commission and the Portuguese government in areas of Robot
Grasping, Perception, Cognitive Robotics and Assisted Living. His research
interests are assisted living, intelligent systems, and cognitive robotics.
Jordan J. Bird achieved a first class Bachelor's
Degree with Honours in Computer Science at Aston
University in the United Kingdom, before
continuing with PhD studies at the same institution
in 2018 with an awarded scholarship. Driven by a deep scientific passion from an early age, his research interests lie largely within the field of human-robot interaction; these include the emergence of artificial intelligence, intelligent social frameworks, Turing's imitation game, deep machine learning, and transfer learning. Jordan is a founding member of
the Aston Robotics, Vision and Intelligent Systems (ARVIS) laboratory at
Aston University.
Cintia Daquana is the Pedagogical Coordinator at
the Municipal Education Secretariat of Cambé-PR,
Brazil. She is responsible for special education and
inclusion of children with disabilities. She has a degree in pedagogy/education from the State University of Londrina, Brazil. She also has a
certified specialization course (Lato-sensu) in
Pedagogical Work in Early Childhood Education
(State University of Londrina, Brazil); Special
Education (UNOPAR, Brazil), and Psychopedagogy (INESUL, Brazil). Her
research interests are: Special Education, Children with Learning
Difficulties, and Technology for Education.
Jhonatan Kobylarz is an Electronic Engineering
student at Universidade Federal do Paraná (UFPR),
Brazil. His research interests include Deep Machine
Learning towards Social Robotic Interaction,
Bioengineering and Computer Vision.
Pedro P. S. Ayrosa obtained his M.Sc. and PhD
degrees in Computer Engineering and Systems
from the Federal University of Rio de Janeiro
(COPPE / UFRJ) in 1992 and 2001, respectively.
He earned a B.Sc. degree in Mathematics from
Universidade Federal Fluminense – UFF, Brazil,
in 1988. He is currently an Associate Professor at
the State University of Londrina (UEL) having
worked in undergraduate, specialization and
master's degrees in computer science. He was a member of the editorial
board of the State University of Londrina Publisher (Eduel). He is the
institutional examiner of courses and Distance Learning Education at
INEP-Brazil. He is an ad-hoc expert in distance learning at the Department
of Science and Technology of the State of Parana. He is the general
coordinator of the Center of Distance Education at State University of
Londrina (NEAD-UEL), the local coordinator of the Open University of
Brazil (UAB), and director of the Laboratory of Educational Technology
(LABTED). His research interests are: Distance Learning Technology,
Artificial Intelligence, and Neural Networks.