Human-Drone interaction for relationship building using
emotion recognition
Lang Bai
Eindhoven University of
Technology
Human Technology
Interaction
l.bai@student.tue.nl
Simon à Campo
Eindhoven University of
Technology
Industrial Design
mail@simonacampo.nl
Tim Treurniet
Eindhoven University of
Technology
Industrial Design
info@timtreurniet.nl
Xintong Wang
Eindhoven University of
Technology
Industrial Design
x.wang.5@student.tue.nl
ABSTRACT
Robots are increasingly becoming part of our daily lives, which makes human-robot interaction an important topic to design for. In this study, we explore the technological possibilities of a domestic drone building a relationship with a human using emotion recognition. To our knowledge, emotion recognition has not been used for human-robot interactive relationships with drones. A prototype was made that is able to detect emotions using a webcam and a neural network. The person sees a screen on which the drone's eyes respond differently to their emotion, based on their relationship level. With this research, we come one step closer to drones becoming part of our daily lives.
Keywords
Domestic drone, emotion recognition, neural network,
Human-Robot Interaction
1. INTRODUCTION
1.1 Vision
The use of robots to replace human tasks is growing. In the future, automation is expected to put around 47 percent of total US employment at risk [1]. Robots are also expanding into social domains: they are used for therapy in elderly care homes [2], for children with autism [3], in games [4], and even to assist us in our own homes [5]. Robots will become a larger part of our lives, and designers must therefore shape and explore the potential of human-machine interaction.
1.2 Social drone
Flying drones are currently a kind of robot with broad potential because of their flexibility. Current applications are mostly outdoors, for example transport [6], search and rescue [7], the film industry and 3D mapping [8], to name a few. The student team BlueJay at the Eindhoven University of Technology is exploring the use of drones indoors. They are building the world's first domestic drone for personal assistance indoors. One of the goals is to make the drone social. Here we see an opportunity as designers to shape the interaction from drone to human. If this is implemented, it would create the first social drone and bring us one step further into the future of human-robot interaction.
1.3 Related work
Technology has already taken huge steps in intelligent design, making life more convenient and efficient. The next-generation concept evolving in the field of Artificial Intelligence is how to improve machine perception and social intelligence by making smarter systems capable of reading and understanding human behavior. Kashyap and Vishnu attempted to recognize human emotions more efficiently and accurately by using a minimal number of geometrical feature points [9]. The authors narrowed the input down from a series of position values to only two features, the eyes and eyebrows of the human face, and achieved a recognition rate of 87.4%.
Other work investigates the nature of Human-Robot Interaction (HRI). Projects have explored emotional [10], facial [11] and other kinds [12] of interaction techniques. Drones are among the more popular intelligent robots nowadays. We consider intelligent interaction a major challenge for the future and conclude by presenting design insights for Human-Drone Interaction (HDI).
Controlling a drone using face pose estimates and hand gestures has been investigated before [13], as have user-defined interaction techniques and gesture elicitation studies for new technology [14].
2. BACKGROUND
This research was done as part of the course Designing Intelligence in Interaction, given at the Eindhoven University of Technology. Based on the lectures given in the course, we chose to use a trained neural network to demonstrate our idea.
2.1 Approach
The interacting drone was realised in an iterative manner. To kickstart the project, a small brainstorm was organised in collaboration with the BlueJay members, in which the best idea was selected as a goal to work towards. Based on this vision, a minimum viable product (MVP) was built and tested, after which we moved on to the next MVP. With this method, a working prototype could be demonstrated at any time. Over time the prototype grew more complex, up to the current state described in this paper. Given the technological orientation of the research, few user tests were done other than testing it on ourselves and using our background knowledge as designers to make decisions. The process started with familiarizing ourselves with the programs and linking them together. After being able to create our own neural network, time was spent on creating heuristics for relationship building with the drone. The final prototype was then presented to BlueJay.
Social behaviors
In order to enable the drone to interact naturally with people based on their affective facial expressions, the drone is designed to contain not only a human thinking model but also a behavior model. It has the ability to read human emotion via the webcam on its head. At the present stage, the drone can recognize four different emotions in humans: happy, sad, angry and neutral. Each emotion is marked with three different levels; in other words, the drone can recognize the intensity of the emotion. For example, humans use different words to describe others' happiness based on their facial expression, such as smiling, beaming and laughing, and these can be mapped onto the levels of the emotion classification.
In this project, the drone is intended to behave like a social robot. The drone is owned by the BlueJay student team at TU/e, and it is supposed to recognize people from BlueJay and declare its love to the team members, while showing a neutral attitude or aversion to people with specific features. Since interacting with different people can be interpreted as different social situations for the drone, various baselines of the relationship level are set up. The affection for BlueJay team members can be regarded as a halo effect in human social judgment.
Furthermore, for a human being, the subjective experience of emotion is thought to guide behavior, decision making and information processing. For the drone, other people's emotions likewise have an impact on its evaluation of them, whereby a person's emotion changes the interpersonal (drone and other person) interaction. The value of the relationship level indicates the drone's attitude towards the human. It begins at a default baseline; a person's negative emotion, such as anger, decreases the relationship level, whereas a positive emotion increases it. The drone shows eye movements based on the relationship level to express its mood.
2.2 Intelligent algorithm
A neural network is made up of units that are based on the biological neuron. In many real-world problems, few a priori assumptions can be made; the major strength of neural networks lies in the fact that a priori assumptions regarding the underlying structure of the relationship are not required [15].
The universal approximation theorem [16] states that a feedforward neural network with one hidden layer can approximate an arbitrary nonlinear, continuous and multidimensional function with any desired accuracy. However, the number of hidden neurons is the most difficult parameter to determine. "A rule of thumb is for the size of this hidden layer to be somewhere between the input layer size and the output layer size" [17]; "How large should the hidden layer be? One rule of thumb is that it should never be more than twice as large as the input layer" [18]; and "Typically, we specify as many hidden nodes as dimensions needed to capture 70-90% of the variance of the input data set" [19].
Figure 1. A three-layer feedforward neural network model
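Such a three-layer network can be constructed in a few lines with the Neuroph library used later in this project. The snippet below is purely illustrative, assuming the Neuroph 2.x jar has been added to a Processing sketch; it already uses the layer sizes chosen for our prototype (21 inputs, 50 hidden neurons, 12 outputs).

import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.util.TransferFunctionType;

void setup() {
  // One hidden layer, as in Figure 1: 21 input neurons, 50 hidden neurons
  // and 12 output neurons, all using a sigmoid transfer function.
  MultiLayerPerceptron network =
      new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 21, 50, 12);
  println("Created a 21-50-12 feedforward network");
}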
3. DESIGN
In this section, we describe how the final design of the prototype
has been created and how it has been tested.
3.1 Technology and realisation
For emotion expression in this research, simplified eyes were
used. The eyes consist of a static black circle on a white
background. A white circle with a fixed radius and a variable
position is then projected on top of it, rendering part of the black
circle invisible. The white circle is mirrored on the opposite black
eye, thus requiring only two variables to generate the eyes.
Depending on the location of the white circle, the black circles
will represent eyes with different emotions such as happiness,
sadness and anger.
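To illustrate, the eye rendering described above can be written as a short Processing sketch. The offset values below are placeholders of our own; in the prototype the offsets come from the lookup table shown in Figure 2.

float maskX = 35;   // horizontal offset of the white masking circle (placeholder)
float maskY = -45;  // vertical offset of the white masking circle (placeholder)

void setup() {
  size(400, 200);
}

void draw() {
  background(255);
  drawEye(130, 100, maskX, maskY);   // left eye
  drawEye(270, 100, -maskX, maskY);  // right eye, mirrored horizontally
}

// A static black circle, partly hidden by a white circle with a fixed radius
// and a variable position; two variables generate both eyes.
void drawEye(float cx, float cy, float offsetX, float offsetY) {
  noStroke();
  fill(0);
  ellipse(cx, cy, 120, 120);
  fill(255);
  ellipse(cx + offsetX, cy + offsetY, 120, 120);
}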
In order to find out which eyes fit the different emotions, an
application was made for user testing, see appendix A. The
application randomly generated eyes which could be linked to
various emotions and levels of emotions through a user interface.
The results were exported to a table, providing the authors with a lookup table for each of the required facial expressions.
Figure 2. Emotion lookup table to find the best positions of the drone's eyes.
3.2 Design of the interaction
Although the concept is created for a drone, the current version of
the prototype has been developed for a computer. The aspects that
are tested in this research do not require an actual drone to be
present. In the current context, the user will be in front of a laptop,
or a computer with a webcam. The user’s facial expressions will
be monitored, which he or she will not be aware of. The screen
represents the drone, or rather its eyes.
3.3 Intelligent behavior and embodiment
The sensory information
Current human-computer interaction (HCI) designs usually involve traditional interface devices such as the keyboard and mouse and are constructed to emphasize the transmission of explicit messages while ignoring implicit information about the user, such as changes in the affective state [20].
On the level of physiology, the sympathetic nervous system prepares the body for action and thereby signals a change of emotion: blood pressure and heart rate increase, respiration quickens and the pupils dilate. Meanwhile, the behavioral component of emotion is expressed in body posture, facial expressions and approach/avoidance. Based on these physiological and behavioral components, many methods to detect human emotion have been developed on audio or visual inputs. In a public area, audio information is too difficult to process due to noise, and few people use body gestures to express emotion. Hence, facial expression is the most straightforward signal and the easiest to detect.
In this project, the Affectiva SDK is used for face detection and for analyzing the expressions obtained from the camera. The 21 expression outputs (e.g., frown, pout, etc.) are later used as the inputs of the neural network.
Figure 3. Face tracking dots placed on the face by the Affectiva SDK, used to track facial expressions
The learning algorithm
Figure 4. An overview of the neural network in Neuroph studio
The dataset was collected from 5 people, each contributing 12 data rows. A perceptron network with one hidden layer, 21 inputs, 50 hidden nodes and 12 outputs was used in this project. The inputs are the 21 facial expressions and the outputs are 12 emotion classes, each indicating an emotion and its level (3 levels per emotion). The data was trained with supervised learning in the Neuroph Studio program, with the learning parameters set to a maximum error of 0.01 and a learning rate of 0.2. After obtaining a satisfactory network error graph, the *.nnet file is loaded in Processing to provide the basis of the drone's behavior.
Figure 5. Graph displaying the total network error over the different iterations in Neuroph Studio
The network was trained in three iterations: first reaching a total network error of 0.10, then 0.05 and finally, as can be seen in Figure 5, 0.01, which is small enough to recognise the emotions for our demo.
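Although the training itself was done through the Neuroph Studio interface, roughly the same configuration can be reproduced in code, and this is how the exported *.nnet file can be used from the Processing sketch. The fragment below is a simplified illustration under our own assumptions: the file names, the ordering of the 21 expression values and the exact Neuroph 2.x method names may differ from the actual code in Appendix B.

import org.neuroph.core.NeuralNetwork;
import org.neuroph.core.data.DataSet;
import org.neuroph.nnet.MultiLayerPerceptron;
import org.neuroph.nnet.learning.BackPropagation;
import org.neuroph.util.TransferFunctionType;

// Train a 21-50-12 multilayer perceptron with the parameters used in Neuroph Studio.
void trainNetwork() {
  DataSet data = DataSet.createFromFile("emotions.csv", 21, 12, ",");  // placeholder file
  MultiLayerPerceptron net =
      new MultiLayerPerceptron(TransferFunctionType.SIGMOID, 21, 50, 12);
  BackPropagation rule = net.getLearningRule();
  rule.setMaxError(0.01);     // stop once the total network error drops below 0.01
  rule.setLearningRate(0.2);
  net.learn(data);
  net.save("emotions.nnet");  // placeholder name for the exported network
}

// Classify one frame: 21 Affectiva expression values (range 0-100) in a fixed order.
int classifyEmotion(float[] expressions) {
  NeuralNetwork net = NeuralNetwork.createFromFile("emotions.nnet");
  double[] input = new double[21];
  for (int i = 0; i < 21; i++) input[i] = expressions[i] / 100.0;  // scale to 0-1
  net.setInput(input);
  net.calculate();
  double[] out = net.getOutput();  // 12 outputs: 4 emotions x 3 levels
  int best = 0;
  for (int i = 1; i < out.length; i++) if (out[i] > out[best]) best = i;
  return best;  // index of the recognised emotion and level (assumed ordering)
}

In the actual demo the network would be loaded once at start-up rather than for every frame; the returned index is then mapped onto the eye lookup table of Figure 2.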
Relationship building
Short- and long-term memory are part of intelligence. Recognising a certain person and being able to remember how good the relationship with that person is makes our prototype more intelligent. Although we used heuristics to program the relationship, the memory still adds features of intelligence.
Although the process of building a relationship is simplified in the current version, the user will notice that the drone is not simply mimicking. The user also sees the relationship level displayed on the screen, in order to show the process for demo purposes. This level is simply a number, where a low value means the relationship is mostly negative and a high value the opposite. When the relationship is good, the drone reacts more happily and compassionately, whereas in a bad relationship the drone is somewhat angrier. Furthermore, the user will notice that positive facial expressions make the relationship value go up, and see it decline when looking angry, for example.
Using the Affectiva SDK, we can detect whether the recognised person is male or female and whether they wear glasses. The relationship level has been set to low for males without glasses and high for females with glasses. The other two possibilities are considered a new relationship, so the level starts at 0. These preferences were chosen for quick demonstration purposes.
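For clarity, this heuristic can be summarised as a short Processing-style sketch. The variable names and the concrete numbers below are placeholders of our own; the demo uses comparable hand-tuned values.

int relationLevel = 0;  // the drone's attitude towards the current person

// Baseline from the Affectiva appearance attributes (demo preferences).
void initRelation(boolean isMale, boolean wearsGlasses) {
  if (isMale && !wearsGlasses)      relationLevel = -10;  // low baseline
  else if (!isMale && wearsGlasses) relationLevel = 10;   // high baseline
  else                              relationLevel = 0;    // new relationship
}

// Update the level with every recognised emotion (level 1-3).
void updateRelation(String emotion, int level) {
  if (emotion.equals("happy")) relationLevel += level;  // positive emotion raises it
  if (emotion.equals("angry")) relationLevel -= level;  // anger lowers it
  relationLevel = constrain(relationLevel, -30, 30);    // keep within the demo range
}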
3.4 Testing and analysis
Figure 6. Visualisation of the eyes of the drone in the neutral state
Figure 7. Visualisation of the eyes of the drone in the happy state, where the lines show where the eyes will be cut off at different levels of happiness
Figure 8. Visualisation of the eyes of the drone in the sad state, where the lines show where the eyes will be cut off at different levels of sadness
Figure 9. Visualisation of the eyes of the drone in the angry state, where the lines show where the eyes will be cut off at different levels of anger
As previously mentioned, the Affectiva SDK was used to gather live data. This SDK also features emotion detection based on its own deep learning algorithm, which was not used in the setup for this research. Affectiva's emotion detection did, however, prove useful for comparing its results with those of our neural network. For a small comparison (n=1), different facial expressions covering three levels each of happy, angry and sad, as well as neutral, were analyzed by the two systems and the results compared. The output from Affectiva is measured in percentages. The test subject had not been used for training either of the two systems.
Expressed emotion | Our trained neural network | Affectiva
Happy 1 | Happy 2 | Joy: 90-100%
Happy 2 | Happy 2 | Joy: 100%
Happy 3 | Happy 3 | Joy: 100%
Sad 1 | Sad 1 | Neutral
Sad 2 | Sad 2 | Sad: 0-2%
Sad 3 | Sad 1 | Sad: 20-40%
Angry 1 | Neutral | Anger: 10-20%
Angry 2 | Angry 2 | Anger: 20-30%
Angry 3 | Angry 2 | Anger: 35-40%
Neutral | Neutral | Neutral
Table 1. Comparison of our trained network versus Affectiva
Except for a few mistakes, the presented neural network is quite accurate at detecting the expressed emotion and, to some extent, its intensity. Affectiva's algorithms had some difficulty distinguishing the different levels of emotion but were nearly as accurate. It should be mentioned that although the test subject had not been used to train the neural network, he was tested in a setup similar to that of the training data. While happiness is rather easy to express, the facial expressions for sadness and anger may not fully resemble actual facial expressions during those emotions, as these were registered while acting.
4. CONCLUSION
In this paper, we designed a social drone that uses human-drone interaction to build relationships. We were able to create the social drone using a neural network for emotion recognition and heuristics for relationship building. The current version still has to be tweaked to find the right heuristics, but the prototype works and is ready for team BlueJay to use and implement in their drone. Currently the prototype runs on a screen, but once implemented in a drone, the first social drone will be created.
5. DISCUSSION
The study was done within the course Designing Intelligence in Interaction, which left the researchers little time to user test the interaction extensively. The focus is currently on the technological proof of concept, which team BlueJay can use for further testing and analysis. This could lead to better testing of the emotions expressed by the eyes, since when the demo was presented at the course, some viewers reported that they could not directly translate the eyes into certain emotions.
Our learning algorithm is able to recognise four different emotions at three levels. This could be expanded to create more complex relationships and better interactions from drone to human. Examples of additional emotions are surprise, disgust, excitement and boredom.
The relationship level currently changes quickly from one side of the spectrum to the other. For demonstration purposes, the thresholds are close together and change based on the emotion displayed. The boundaries of these relationship levels could be based on research or on testing the interaction with our drone, whereas they are now based on our own interpretation. The relationship currently decreases when the user looks angry and increases when they look happy. Since human relationships are far more complex, future studies could find correlations between emotions and relationships. A learning algorithm might also be appropriate to solve this issue. Research may already have been done on this subject, which could be used to improve the interaction.
The drone reacts based on the emotion of the person it recognises. However, it would be interesting if the drone could also initiate interaction and catch a person's attention, creating a more realistic scenario.
Our training data is based on our own facial expressions, meaning the system is likely more accurate on our faces. With a larger training set containing multiple people, it could be more precise for people in general, but less so for our own emotion recognition. We currently do not know how well our algorithm recognises people of different ages and ethnicities, although the team itself already contains a diversity of male and female, Asian and Caucasian members.
Team BlueJay saw the added value for their drone and will implement our prototype in it. The eyes will be further developed and more advanced relationship building will be used. After a demo, one of our team members will explain the code so they can build on it further. At the planned drone event, thousands of people will be able to experience the first social drone based on our work.
6. REFERENCES
[1] Frey, C. B., & Osborne, M. A. (2017). The future of
employment: how susceptible are jobs to computerisation?.
Technological Forecasting and Social Change, 114, 254-280.
[2] Wada, K., & Shibata, T. (2007). Living with seal robots—its
sociopsychological and physiological influences on the
elderly at a care house. IEEE Transactions on Robotics,
23(5), 972-980.
[3] Barakova EI (2008) Emotion recognition in robots in a social
game for autistic children. In: Sturm J, Bekker MM (eds)
Proceedings of the 1st workshop on design for social
interaction through physical play; Eindhoven, The
Netherlands, pp 21–26
[4] Lourens T, Barakova EI (2009) My sparring partner is a
humanoid robot. IWINAC (2) 2009:344–352, LNCS
[5] Park, K. H., Lee, H. E., Kim, Y., & Bien, Z. Z. (2008). A
steward robot for human-friendly human-machine interaction
in a smart house environment. IEEE Transactions on
Automation Science and Engineering, 5(1), 21-25
[6] George, A. (2013). Forget roads, drones are the future of
goods transport. New Scientist, 219(2933), 27.
[7] Search and Rescue (SAR) drone - Unmanned Aircraft Systems - Aerialtronics. (n.d.). Retrieved January 27, 2017, from www.aerialtronics.com/search-rescue-sar/
[8] Pix4D - Drone Mapping Software for Desktop + Cloud +
Mobile. (n.d.). Retrieved January 27, 2017, from
https://pix4d.com/
[9] Kashyap, Chiranjiv Devendra, and Priya R. Vishnu. "Facial Emotion Recognition." International Journal of Engineering and Future Technology 7.7 (2016): 18-29.
[10] Kwon D S, Kwak Y K, Park J C, et al. Emotion interaction
system for a service robot[C]//Robot and Human interactive
Communication, 2007. RO-MAN 2007. The 16th IEEE
International Symposium on. IEEE, 2007: 351-356.
[11] Ekman P, Oster H. Facial expressions of emotion[J]. Annual
review of psychology, 1979, 30(1): 527-554.
[12] Cauchard J R, Zhai K Y, Landay J A. Drone & me: an
exploration into natural human-drone
interaction[C]//Proceedings of the 2015 ACM International
Joint Conference on Pervasive and Ubiquitous Computing.
ACM, 2015: 361-365.
[13] Nagi, J., Giusti, A., Caro, G.A.D. and Gambardella, L.M.
2014. Human Control of UAVs using Face Pose Estimates
and Hand Gestures. In Proceedings of the 2014 ACM/IEEE
International Conference on Human-Robot Interaction
(HRI'14), 252-253.
[14] Morris, M.R. 2012. Web on the wall: insights from a
multimodal interaction elicitation study. In Proc. of the 2012
ACM International Conference on Interactive Tabletops and
Surfaces
(ITS '12), 95-104.
[15] Moody, John E. "The effective number of parameters: An analysis of generalization and regularization in nonlinear learning systems." NIPS. Vol. 4. 1991.
[16] Csáji, B. C. Approximation with Artificial Neural Networks. PhD diss., Eötvös Loránd University, Hungary (2001).
[17] Blum, Adam. Neural Networks in C++. NY: Wiley (1992).
[18] Berry, Michael J., and Gordon Linoff. Data mining
techniques: for marketing, sales, and customer support. John
Wiley & Sons, Inc., 1997.
[19] Boger, Zvi, and Hugo Guterman. "Knowledge extraction
from artificial neural network models." Systems, Man, and
Cybernetics, 1997. Computational Cybernetics and
Simulation., 1997 IEEE International Conference on. Vol. 4.
IEEE, 1997.
[20] Zeng, Zhihong, et al. "A survey of affect recognition methods: Audio, visual, and spontaneous expressions." IEEE Transactions on Pattern Analysis and Machine Intelligence 31.1 (2009): 39-58.
7. APPENDIX
7.1 Reflections
Xintong Wang
The reason I chose this elective is that I wanted to improve my programming skills, and the eight different lectures also attracted me. As an international student, I often feel I have had fewer lectures than before, so most of the knowledge I want to use I have to teach myself. The lectures were very interesting; each one felt like an introduction to a theory, and I got excited about these technologies I might use in the future, which I think is really cool.
After these wonderful lectures, time was precious for thinking of our concept and realising it. The most difficult part was that none of us is an expert in neural networks. Even worse, I had just started learning Processing on my own, so I had many basic problems to solve. As everyone at TU/e seems to know about programming, it was essential for me to quickly learn to read and write code. Our project was also mainly about realising our idea with technology, and since I am much better at visual and appearance design, there was not much I could help with, which really bothered me.
During the process, I mainly worked on gathering user data, designing the look of the drone's eyes and creating the emotion lookup table. In the meantime, I learned the Java language and Processing on my own, and I think I benefited from it a lot.
After working hard with my great teammates (Tim Treurniet, Simon à Campo and Lang Bai), the outcome turned out well, at least in my opinion. On the presentation day I was also impressed by how well the other teams did. I really enjoyed the part where students and coaches shared their ideas and discussed them, whether right or wrong; I always drew inspiration from it.
The most important thing I learnt from this elective is that programming is essential but not the whole of design; it is just a way to support and demonstrate the idea. I will definitely use it in my future individual projects, so it is sensible to learn it quickly and use it skillfully.
I am also impressed by our outcome. At first it seemed a hard project and I had no idea where to start, but only a few weeks later we made it. Although we struggled a lot with the Neuroph network and Mac issues, we still produced good work, thanks to the help of my teammates and the coaches.
------
Lang Bai
As the information in OASE shows, this course consists of eight lectures given by different teachers on different topics, and a final product based on one of them. As a student from a non-computer-science background, it was an ideal opportunity to get access to these topics and gain hands-on skills. Without a doubt, I chose this course.
In the fourth week we had to form a team and discuss the project definition. I joined a team with three other students; we had hardly any experience in designing intelligence or in programming. If I had been asked "what is intelligence?" at that time, my answer would have been "I must use one of the eight techniques introduced by the teachers, with programming." Two group members suggested working on the drone to let it recognize human emotion. "Emotion sounds intelligent, I agree." We soon found an SDK for emotion detection through the webcam. Thanks to the meeting with our coach, Emilia, we got important feedback and focused on how to use a neural network to build our own emotion detection system. Later on, after Dr. Jun Hu's patient explanation, I gradually understood how a computer could perceive changes in a human face and recognize people's emotions.
Having found the right direction, I was responsible for training the data. For better data collection, I changed the JavaScript code that uses the Affectiva SDK, so that the new code allowed us to save the values of the 21 expressions manually. There were 5 participants in total. The expression values were between 0 and 100, and the emotion outputs between 1 and 3. With the first version of the dataset, we could not get a diminishing error graph. After dividing the inputs by 100, changing the number of outputs and giving each row of data a specific classification, we got a working error graph with the multilayer perceptron. Later on, Simon improved our training to be more precise, and Tim and Wang developed the expressions and the behavior model of the drone.
I used to regard programming as a tool with which people let computers talk, send a request, display a picture, and so on. After working on the final project, I realise that people can also teach a computer to THINK like a human. Learning, a process built upon and shaped by previous knowledge, is a capability that used to be
possessed only by humans and some animals. For certain questions, machines can be taught to use probably the same mental model as humans. Neurocomputing is sometimes called brain-like computation. By transplanting the learning model from human to machine and letting machines show cognitive ability in gaining new information and making decisions, that is probably what the term intelligence means for a machine.
As a long-term user of Google Translate, I have witnessed the magic AI brings to machines. It used to be so rigid that its translation mistakes and monotone voice made people laugh. But now, despite some small mistakes, I regard it as a better translator than I am. Even though I only have access to the most fundamental algorithms, using machine learning makes me feel like I am holding a magic wand. Combining this with the knowledge of human psychology gained through the Human Technology Interaction master's program, I found that some of my previous projects could be improved by using learning algorithms, while at the beginning of the course I thought none of them could be intelligent. I no longer treat my design projects as tools with a specific function; they can be developed to predict unknown situations. Becoming friends with Artificial Intelligence is what I have received from this course.
-----
Simon à Campo
The course consisted of eight lectures that provided the information to create a prototype. For our team the most valuable lectures were those from Matthias Rauterberg, creating a basic understanding of what intelligence is based on our existing knowledge. Secondly, the lecture from Barakova explained the theory behind pattern recognition using neural networks, and Jun Hu provided the practical side of creating a neural network in Neuroph Studio. Within the group, I was responsible for creating the network and training it. Once the training data was correct, it was rather simple to create a trained network. Although the implementation within Processing gave some problems, using real intelligence in design projects is now a possibility for me. The course made a clear distinction between heuristic programming, which is normally seen as intelligence, and real intelligence through learning algorithms. Using both within the project allowed us to create an interesting intelligent interaction.
Furthermore, I kickstarted the project because I knew the BlueJay team, and I could lead the project by making concept choices based on BlueJay's interests. Based on the input from Barakova and Jun Hu, we had to shift the focus of the initial prototype to fit the course's criteria. Thanks to the MVP process, we were able to bring the prototype one step further than initially expected. Being able to mimic the emotion of a person using our neural network would already have been sufficient for the course. Since we had extra time, we could quickly add a second layer of intelligence through the memory of persons and the relationship the drone has with each person. I was responsible for the logic linking the emotions with the relationship level. Together with Tim, I figured out the logical flow for creating the heuristics within Processing. The relationship level was also implemented in the final prototype. I see my role in the team as that of team leader, since I divided the tasks, presented the work, made and sustained contact with BlueJay, and took the initiative to finalise the report you are reading now. I made sure we had a good process, so we were certain to have a working prototype at the presentation.
The most valuable learning point for me is that implementing real intelligence in a product is not as hard as it seems. Furthermore, I now understand how and when to use intelligence over heuristics. I hope to use it within my FMP for creating community software; intelligence could help detect patterns of relationships within communities and make suggestions based on that. I am satisfied with and surprised by the current prototype, since we were able to link all the code and programs together and create an interactive prototype. Even with a small sample, we were able to match roughly the same accuracy as a professional company specialising in emotion recognition.
-----
Tim Treurniet
During my master’s program I intend to further develop in areas
that personally interest me and support the kind of products and
systems I wish to design in the future. Among other things, I have
previously tried to create “smart” products which would react in
specific ways based on combinations of various inputs and, in
some cases, would adapt to new contexts and their users. In
hindsight I can tell that my history of heuristic intelligence did not
scratch the surface of what intelligence in programming really is. I
chose to follow this course since the description promised it to be
an introduction to a next level of intelligence.
It became clear that I did not have any understanding of
intelligence in programming at all. Throughout the course, we
came up with two, in my opinion, intelligent concepts. While
discussing those during the meetings, we found that we had not
come up with anything intelligent, the first one consisting of data
which we could just as well process manually, and the second
being a series of “if-statements”. We had to step up and do
something that was new to us all.
I can confidently say that this has been one of the most valuable courses for me personally over the last few years, mainly because it allowed me to use and apply an intelligent algorithm for the first time, but also because it showed me tools for using these in a simple way. After all, I am a designer, not a computer scientist,
and I intend to apply these techniques with the least required
effort. Because of this, we were able to successfully accomplish
our intended goals regarding the application of a neural network,
and were left with spare time to incorporate relationship building,
be it on a more heuristic level.
Apart from the general concept development, my specific tasks were mainly in the Processing aspects of this course. To come up with the proper eyes for the drone's emotion expression, I made a sketch that randomly generates eyes. With a simple user interface, we could let people choose the emotion and level they thought fitted each expression best. This data was exported to an Excel file to use as a lookup table. Furthermore, I made the sketch for the demo, which brought challenges in importing data from the Affectiva SDK and using it in the trained network.
Overall, I believe I have acquired some practical skills which can
directly be applied in future projects with intelligence aspects.
During the entire course, my confidence in ending up with a
working intelligent demo was rather low, thus I am satisfied with
our team’s results and growth.
7.2 Code and network
Due to the size of the code used in this research, all files will be
available externally via this link:
https://drive.google.com/open?id=0B6Nol2ljSKPXazliM0dTSVlmWEU
Appendix A. Processing eye generator code
Appendix B. Processing neural network code
Appendix C. Neural network
Appendix D. Javascript & html code