Extracting human emotions at different places based on
facial expressions and spatial clustering analysis
Yuhao Kang a,b, Qingyuan Jia b, Song Gao a, Xiaohuan Zeng b, Yueyao Wang b,c,
Stephan Angsuesser b, Yu Liu d, Xinyue Ye e, Teng Fei b
a Geospatial Data Science Lab, Department of Geography, University of Wisconsin, Madison, United States
b School of Resources and Environmental Sciences, Wuhan University, Wuhan, China
c College of Urban and Environment Sciences, Peking University, Beijing, China
d Institute of Remote Sensing and Geographical Information Systems, Peking University, Beijing, China
e Urban Informatics and Spatial Computing Lab, Department of Informatics, New Jersey Institute of Technology, Newark, United States
Abstract
The emergence of big data enables us to evaluate various human emotions at places from a statistical perspective by applying affective computing. In
this study, a novel framework for extracting human emotions from large-scale
georeferenced photos at different places is proposed. After the construction of
places based on spatial clustering of user-generated footprints collected from social media websites, online cognitive services are utilized to extract human emotions from facial expressions using state-of-the-art computer vision techniques. Two happiness metrics are then defined for measuring human emotions at different
places. To validate the feasibility of the framework, we take 80 tourist
attractions around the world as an example, and a happiness ranking list of places is generated based on human emotions calculated from over 2 million faces detected in over 6 million photos. Different kinds of geographical contexts are taken into consideration to examine the relationship between human emotions
and environmental factors. Results show that much of the emotional variation at
different places can be explained by a few factors such as openness. The research
may offer insights on integrating human emotions to enrich the understanding
of sense of place in geography and in place-based GIS.
Keywords: affective computing, human emotion, place, big data, GeoAI
1. Introduction
Place, which plays a central role in daily life not only as a location reference but also as a reflection of the way humans perceive, experience, and understand the
environment, is a key issue in geography and GIScience (Tuan, 1977; Goodchild,
2011; Winter and Freksa, 2012; Scheider and Janowicz, 2014; Goodchild, 2015;
McKenzie et al., 2015; Gao et al., 2017a,b; Blaschke et al., 2018; Zhang et al.,
2018; Purves et al., 2019; Wu et al., 2019). Agnew (2011) proposed three
aspects of place: location, locale, and the sense of place, which refers to the
experiences of people and their perceptions and conceptualizations of a place.
Place has also been comprehensively depicted as the context of and affordance for various human activities, which are linked to the memories and emotions of individuals
(Jordan et al., 1998; Kabachnik, 2012; Scheider and Janowicz, 2014; Merschdorf
and Blaschke, 2018). Human emotions, which are innately stored in human
neural systems (Wierzbicka, 1986; Izard, 2013), provide bridges linking the
surrounding environments and human perceptions. On one hand, emotions
tint human experiences (Tuan, 1977), and show how places are psychologically
felt by people (Davidson and Milligan, 2004). On the other hand, emotions have been shown to be connected and affiliated with surrounding things, including living organisms (Wilson, 1984), the natural environment (Capaldi et al., 2014), and the cultural environment (Mesquita and Markus, 2004). Therefore, understanding human emotions toward the environment is important for analyzing human behavior towards the sense of place (Grossman, 1977; Rentfrow and
Jokela, 2016; Smith and Bondi, 2016).
Many early studies used questionnaires to investigate the emotions of people in different environmental contexts, which costs a lot of human resources and lacks timeliness (Golder and Macy, 2011). The emergence of big data and the development of information and communication technology (ICT) and artificial intelligence (AI) provide advanced methodologies and opportunities to solve the aforementioned problems in social sensing (Liu et al.,
2015; Ye et al., 2016; Janowicz et al., 2019). Affective computing, as an
interdisciplinary domain spanning computer science, psychology, and cognitive
science, was proposed by Picard et al. (1995) with a focus on investigating
the interactions between computer sensors and human emotions. Every day,
large volumes of geo-tagged user-generated content (UGC) are uploaded to social networking websites such as Facebook and Twitter, photo-sharing sites such as Flickr and Instagram, as well as the video-sharing platform YouTube (O'Connor, 2008), which can reflect human perceptions of environments, with people acting as sensors and contributing volunteered geographic information (VGI) (Goodchild, 2007). Affective memories are produced and archived in these technology-mediated platforms (Elwood and Mitchell, 2015). In such UGC,
people express their emotions actively through tones of the voice (Schuller
et al., 2009), facial expressions (Ekman, 1993), body gestures, and written forms
(Bollen et al., 2011), or their emotions are captured passively by various types
of sensors. Also, state-of-the-art AI technologies make it possible to collect
human emotions from massive data sources and have revolutionized the research
of human emotions. Several existing studies have tried to connect geography and collective emotions from such UGC using advanced technologies and have obtained promising results (Mitchell et al., 2013). However, the role of place as a locale for human emotions has still received little attention (Smith and Bondi, 2016). In addition, most existing research has used natural language processing (NLP) to extract human emotions from textual corpora (Strapparava et al., 2004; Cambria et al., 2012). Such methods may face challenges such as multi-cultural
differences in language, and thus may not be suitable for global-scale research (more discussion in Section 2 and Section 5). In comparison, the facial expression of emotions is said to be universal across countries and time periods and can capture human emotions in real time, which may make it suitable for a place-based emotion extraction framework at a global scale.
In this research, our goal is to investigate human emotions in places and
explore potentially influential environmental factors. We term the study
phenomenon as Place Emotion, which is a special case of the general affective
computing in geography, i.e., to examine the human emotions at different places
with different affordances (including the environment and human activities).
The research questions are as follows: (1) How can human emotion scores be extracted and computed from large numbers of georeferenced photos taken in different places? (2) What is the relationship between human emotions and environmental factors
at places? To answer these questions, a general framework utilizing UGC to
compute human emotion scores at places based on facial-expression recognition
and spatial clustering techniques is proposed. However, since there are many
types of places with variety of environment factors, we only select one specific
type of place (i.e., tourist attraction sites) as a case study to test the feasibility
of our proposed workflow.
Tourist attractions, which attract “non-local” travelers for sightseeing, participating in activities, and experiences (Leiper, 1990; Lew, 1987), are a popular type of place (Jones et al., 2008) and are located across the world, which makes them suitable as a case study for global-scale research.
In the past decades, with the growth of the economy and the development of
modern transportation techniques, tourism has experienced continued growth
and deepening diversification to become one of the fastest growing economic
sectors in the world (Ashley et al., 2007). For a tourist, the choice of places to
visit in planning a trip is the first step (Bieger and Laesser, 2004; Sun et al.,
2018), while the options are often numerous. When retrieving information about tourist sites, a fair and comprehensive ranking list of tourist attractions is often useful. However, existing ranking lists rely on the environmental (Amelung et al., 2007) and socio-economic (Bojic et al., 2016; Chon, 1991) aspects of the tourist sites. These factors indeed influence travel flows, but only in an objective way. The perceptions and feelings of tourists are often ignored. A ranking list
based on human emotions might provide different insights from human-oriented
preferences. Additionally, happiness is one of the most common basic emotions
(Ekman and Davidson, 1994; Eimer et al., 2003; Izard, 2007). Therefore, a
ranking list of the happiest tourist sites in the world will be created as an
outcome of the affective computing at each site.
To this end, this study presents a novel framework to measure human
emotions at places from facial expressions and to explore influential factors
to the degree of happiness at different places. Tourist sites are taken as a
specific type of place for experiment. The contributions of the study are
three-fold. (1) We propose a novel approach for extracting and characterizing
the average happiness score at each place using computer vision and spatial
analysis techniques. (2) We explore the relationship between different kinds of
environmental contexts and the degree of happiness extracted from human facial
expressions. (3) We create a ranking list of the happiest tourist sites based on
crowdsourcing human emotions rather than objective indices, and provide new
insights on integrating human emotions to enrich the understanding of sense of
place in geography and in place-based GIS.
The remainder of the paper is organized as follows. First, in Section 2 “Related Work”, we review the literature on place-emotion-related studies. In Section 3 “Methodology”, we present the methodological framework and explain our computational procedures. Then, in Section 4 “Experiments and Results”, we test the framework with a case study of human emotions at 80 worldwide tourist attractions. We discuss the implications and compare our image-based method with text-based studies in Section 5 “Discussion”. Finally, we conclude this work and present our vision for future research in Section 6 “Conclusion and Future Work”.
2. Related Work
There are two categories of affective computing. One concerns several instinctive basic emotions like happiness, sadness, anger, etc. (Ekman and Davidson, 1994). The other detects the polarity of sentiments, such as positive, neutral, and negative expressions, which are organized feelings and mental attitudes (Pang et al., 2008). Unless specifically clarified, we use the general
term “emotion” to represent both categories interchangeably in this paper.
Both emotion and sentiment studies enable us to understand human perceptions
of the society and the environment (Zeng et al., 2009). Exploration and
understanding of human emotions and sentiments have attracted volumes of
interest from psychology (Ekman, 1993; Berman et al., 2012; Svoray et al.,
2018), biology (Darwin and Prodger, 1998), computer science (Lisetti, 1998),
geography (Davidson and Milligan, 2004; Mitchell et al., 2013; Svoray et al.,
2018; Hu et al., 2019), and public health (Zheng et al., 2019), just to name a
few.
Emotion collection methods have evolved over time. Traditionally, scholars from the social sciences often use questionnaires and self-reports to investigate the
emotions of people in different environmental contexts (Niedenthal et al., 2018).
Several rankings of human happiness have been published in recent years, including the World Happiness Report released by the United Nations Sustainable Development Solutions Network1, which ranks the happiness of countries’ citizens by investigating socio-economic indices. The Measuring National Well-being programme, released by the Office for National Statistics, UK, monitors the well-being of citizens by producing assessment measures of the nation2. The Gross National Happiness index is used to guide the government of Bhutan with respect to living standards, health, education, etc. And the Satisfaction With Life Scale measures the life satisfaction component of subjective well-being3. However, those methods encounter some challenges despite their widespread usage in psychological science. For example, they cost a lot of human resources and lack timeliness (Golder and Macy, 2011), and results relying on questionnaires may be constrained by limited self-knowledge and the psychological influence of informed consent (Baumeister et al., 2007).
1http://worldhappiness.report
2https://www.ons.gov.uk/peoplepopulationandcommunity/wellbeing/articles/measuringnationalwellbeing/qualityoflifeintheuk2018
With the emergence of affective computing technologies, more efficient
ways for detecting human emotions are used. Numerous studies on affective
computing have been conducted with great success, especially using NLP methods to extract emotions from texts and explain them from a geographic
perspective. For example, Mitchell et al. (2013) estimated human happiness at
the state-level in the United States and explored the impact of socioeconomic
attributes on human moods. Ballatore and Adams (2015) utilized a corpus
of about 100,000 travel blogs for extracting the emotional structure (including
joy, anger, fear, sadness, etc.) of different place types. Bertrand et al. (2013)
generated a sentiment map of New York city via extraction of emotions from
tweet data. Zhen et al. (2018) calculated the human emotion scores using
the Weibo tweet data and explored the spatial distribution of sentiments in
Nanjing. Zheng et al. (2019) demonstrated that high levels of air pollution (e.g., PM 2.5) may contribute to the low levels of happiness reported by urban populations in social media, based on analytics of over 210 million geotagged tweets
on Weibo. Hu et al. (2019) presented a semantic-specific sentiment analysis on
online neighborhood textual reviews for understanding the perceptions of people
toward their living environments.
Despite these successes, text-based measurements of emotions may encounter some challenges. One problem is that texts are often recorded after events, meaning that the emotions expressed are not captured in real time but often after a period of transition. This buffering period may benefit the user who expresses emotions, because during a calm-down period the user may utilize more dispassionate linguistic expressions to maintain a stable social identity (Coleman and Williams, 2013). Another challenge in extracting emotions from texts is the multi-lingual environment. Different languages may vary in the words and syntax used to express emotions. Most emotion extraction models are based on the words or the syntactic and semantic structures of sentences, which are unique to each language (Shaheen et al., 2014). So far, no established method standardizes the emotional scores computed from different language models. Therefore, text-based affective computing has been limited to analyzing materials in one language at a time and may face difficulties with multi-lingual problems.
In comparison, image-based approaches (Zhang et al., 2018), especially facial
expression-based emotion extraction methods have been improved greatly in
3http://www.midss.org/content/satisfaction-life-scale-swl
recent years because of the emergence of deep convolutional neural networks (Yu
and Zhang, 2015), which even perform better than humans in face-recognition benchmark testing (Wang and Deng, 2018). Svoray et al. (2018) analyzed Flickr photos and found a positive relationship between human facial expressions of happiness and exposure to nature, measured by urban density, green vegetation, and proximity to water bodies in the city of Boston. By extracting and identifying
key points from face images based on facial activities and muscles, machine
learning models can learn the visual patterns of faces according to the emotional
labels (Calvo and D’Mello, 2010). Therefore, the emotions of faces can be
extracted. Each culture has its own verbal language, and emotion has its own
language of facial expressions. The relationship between emotions and facial
expressions has been extensively explored. Levenson et al. (1990) pointed out
that subjective emotions have significant connections to the facial activities,
which provides the fundamental theories of facial expression-based affective
computing. Facial expression-based emotion recognition methods have several
advantages. First, facial expressions are both universal and culturally specific (Matsumoto, 1991). Though connections between emotions and cultures vary (Cohn, 2007), strong evidence has been provided that there is a pan-cultural
element in facial expressions of emotion (Ekman and Keltner, 1970). People
from ancient times to the present, from all over the world, and even our
primate relatives hold similar basic facial expressions, especially smiling and
laughter (Preuschoft, 2000; Parr and Waller, 2006). This indicates that humans represent basic emotions in a universal way and that facial expression-based emotion extraction methods are suitable for global-scale issues, especially for addressing the multi-lingual problem. In fact, some existing studies have explored the worldwide expression of emotions based on facial expressions in photos (Kang et al., 2018), which shows the universal compatibility of such methods. In
addition, facial expressions are produced spontaneously when emotions are
elicited (Berenbaum and Rotter, 1992). By recording and analyzing facial
expressions, researchers can track down emotions as they were just formed.
As advanced computer-vision based systems and algorithms are becoming more
mature, facial expressions as well as facial muscle actions can be recognized and
computed with quantitative scores (possibilities) of recognized emotions (Ding
et al., 2017; Kim et al., 2016; Zeng et al., 2009). As a result, facial-expression-based research is spreading in affective computing. For instance, Kang et al. (2017) examined the emotions expressed by users in Manhattan, New York City, and compared the fluctuation of human emotions with stock market movements to explore the relationship between the two. Abdullah et al. (2015) used images
from Twitter to calculate emotions from facial expressions and compared them
with socio-economic attributes. Singh et al. (2017) analyzed smiles and diversity via social media photos and pointed out that people smile more in diverse company.
In sum, considering that emotions can be recorded in real time and are universal across multi-lingual environments, facial expressions might be more suitable for place-based emotion extraction at a global scale, as places around the world afford different groups of people and various kinds of activities.
To the best of our knowledge, our research is among the pioneer studies that
utilize the state-of-the-art facial expression recognition techniques and large-
scale georeferenced photos for exploring the human emotions at different places
at the global scale.
Figure 1: The workflow of this research.
3. Methodology
3.1. Framework
As shown in Figure 1, there are four steps for the framework of extracting and
measuring human emotions at different places. First, large-scale geo-referenced
crowdsourced photos from social media are collected and stored on the data server. Several geographical and environmental attributes (e.g., the proximity
to water bodies, openness, landscape type) of each place are also retrieved and
recorded. Second, the footprints of places in our study are generated using
the area of interest (AOI) extraction approach based on the spatial density of
photos (Li and Goodchild, 2012; Hu et al., 2015). Then, with the state-of-
the-art cognitive recognition methods based on computer vision technologies
(e.g., object detection and localization), human emotions are extracted and
measured via facial expressions detected in the social media photos. In order
to examine whether results of affective computing are robust and solid, we also
implemented sensitivity tests to check the concordance of results with varying
algorithm parameter settings.
After the calculation of human emotions at different places, it is necessary to
explore what environmental factors have influences on the expressions of human
emotions. Correlation analysis and multi-linear regression models are utilized
to explore the relationship between human emotions and environmental factors.
3.2. Data Preparation
There are two datasets used in this research. One is the places as well
as their geographic attributes for exploring the relationship between human
emotions and environments. The other is georeferenced social media photos
for affective computing, which are collected from the Flickr website based on
the coordinate information of place names.
In many geographic information systems and digital gazetteers, places are often represented as points of interest (POIs), although places have footprints that vary by type (e.g., points, lines, or polygons) (Goodchild and Hill, 2008).
Based on the place names, the coordinates of those place centers are harvested
from the Google Maps Places Application Programming Interface (API)4. A list
of geographic attributes and environment factors are recorded at each place (see
section 4.1 for more details).
Photos taken at different places are obtained from the Yahoo Flickr platform.
Flickr is a publicly available social media platform where users can upload and
share their photos, and it is one of the most commonly cited websites in the
era of Web 2.0 (Cox, 2008). Options for geo-tagging photos are also provided
in the website as more and more GPS chips are embedded in smart phones and
cameras. Time and geographical information are recorded automatically when
saving photos from location-aware devices. In addition, users can drag their
photos on a map and input their locations for geo-tagging upon uploading.
4https://developers.google.com/places/web-service/intro
Therefore, each photo can be positioned on the map. For most photos, the locations are labeled correctly, and the uncertainty of the data (e.g., incorrect photo locations) can be reduced by the construction of places introduced in
Section 3.3.
Flickr's API5 allows developers and researchers to collect a huge amount of data from the platform. Public geo-tagged photos with information including user ID, photo ID, latitude, longitude, tag text, time stamp, and so on, are retrieved and recorded within a certain distance of the center point of each place. The center point coordinates of places are retrieved from the Google Places API. Each photo is saved at its original resolution while keeping a link
pointing to its original URL. All the information is stored in a database for data
manipulation.
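As a hedged illustration of this data collection step, the following sketch shows how such geo-tagged photo metadata could be paged through with the community `flickrapi` Python package; the package choice, the API key placeholders, and the example coordinates and radius are assumptions for illustration rather than the exact scripts used in this study.

```python
# A minimal sketch of harvesting geo-tagged photo metadata around a place
# center with the `flickrapi` package (an assumption; the paper only states
# that the Flickr API was used). Keys and the example coordinates are placeholders.
import flickrapi

API_KEY = "YOUR_FLICKR_API_KEY"      # placeholder
API_SECRET = "YOUR_FLICKR_SECRET"    # placeholder

flickr = flickrapi.FlickrAPI(API_KEY, API_SECRET, format="parsed-json")

def fetch_photos(lat, lon, radius_km=1.0,
                 min_date="2012-01-01", max_date="2017-06-30"):
    """Page through public geo-tagged photos within `radius_km` of (lat, lon)."""
    page, records = 1, []
    while True:
        resp = flickr.photos.search(
            lat=lat, lon=lon, radius=radius_km, has_geo=1,
            min_taken_date=min_date, max_taken_date=max_date,
            extras="geo,date_taken,tags,url_o",  # coordinates, timestamp, tags, original URL
            per_page=250, page=page)
        photos = resp["photos"]["photo"]
        records.extend(photos)                   # each record holds user ID, photo ID, lat/lon, tags, ...
        if page >= int(resp["photos"]["pages"]) or not photos:
            break
        page += 1
    return records

# Example: photos around the Great Wall at Mutianyu (illustrative coordinates)
photos = fetch_photos(40.431, 116.564)
```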
3.3. Construction of Places
As place is a product of human conceptualization that is derived from human
experience to describe a specific space (Tuan, 1977; Couclelis, 1992; Curry, 1996;
Merschdorf and Blaschke, 2018), one main challenge for modeling places in GIS
is the vague boundaries (Burrough and Frank, 1996; Montello et al., 2014; Gao
et al., 2017b). The boundary is often generated from the density estimation
and spatial clustering of georeferenced photos (Feick and Robertson, 2015; Hu
et al., 2015). In this research, places are constructed by the following steps based on user-generated photo footprints: (1) utilizing density-based spatial clustering of applications with noise (DBSCAN) to extract the hotspot zones of human activities; and (2) using the convex hull to find the minimum bounding geometry of the set of points remaining after spatial clustering.
DBSCAN, which is a point-based spatial clustering algorithm (Ester et al.,
1996), is used to identify clusters of geo-tagged photos. Compared with the K-
means clustering algorithm, DBSCAN can find arbitrarily shaped clusters, and
it does not require the predefined number of clusters in advance. In addition,
it is relatively stable and robust to the noise data. Some geo-tagged photos are
manually uploaded by users without specific criteria and may generate noise; for
example, a user may drag photos to wrong locations. By applying the DBSCAN algorithm, such noisy data can be removed, and the core areas of each place, in other words, the hotspots where users are most likely to stay and take photos, will be retained.
The DBSCAN algorithm requires two parameters, namely ε and minPts. The ε is the search radius representing the maximum distance of the search neighborhood to the center point, and minPts indicates the minimum number of points a cluster should have. Different settings of the two parameters will influence the result, and proper values should be selected according to the characteristics of places. As suggested by several previous studies (Hu et al., 2015; Mai et al., 2018; Liu et al., 2019), a value of ε between 40 m and 300 m is recommended for clustering human activities. As the number of photos and
5https://www.flickr.com/services/api/
users may vary in different places, it is not suitable to use a universal absolute number as minPts. Therefore, a percentage of the number of photos at a certain place is used as minPts. Consequently, different combinations of parameter settings should be tested to find the best combination.
After the spatial clustering of photo locations, the next step is to derive the
core areas of places from the clustered points. The convex hull is a high-quality
geometric approximation method used for efficiently clustering geographical
features (Graham, 1972; Barber et al., 1996; Liu et al., 2019; Yu et al., 2014).
A convex hull is the minimum bounding polygon containing a set of points. It
has been utilized in a number of studies to find the minimum bounding shape
of the clustered points (Liu et al., 2019).
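A minimal sketch of this two-step place construction is given below, assuming photo locations are available as latitude/longitude pairs; it uses scikit-learn's DBSCAN with a haversine metric and shapely's convex hull, and the parameter values (ε = 100 m, minPts = 1% of the photos at a place) follow the case study settings reported later. The function name and data layout are illustrative.

```python
# A minimal sketch of place construction: DBSCAN clustering of photo
# locations followed by a convex hull per core cluster.
import numpy as np
from sklearn.cluster import DBSCAN
from shapely.geometry import MultiPoint

def construct_place(latlon_deg, eps_m=100.0, min_pts_ratio=0.01):
    """latlon_deg: (n, 2) array of [lat, lon]; returns convex hulls of core clusters."""
    latlon_deg = np.asarray(latlon_deg, dtype=float)
    coords_rad = np.radians(latlon_deg)            # haversine metric expects radians
    eps_rad = eps_m / 6371000.0                    # search radius in metres -> radians
    min_pts = max(1, int(min_pts_ratio * len(latlon_deg)))
    labels = DBSCAN(eps=eps_rad, min_samples=min_pts,
                    metric="haversine").fit_predict(coords_rad)
    hulls = []
    for label in set(labels) - {-1}:               # label -1 marks noise and is discarded
        cluster = latlon_deg[labels == label]
        hulls.append(MultiPoint([(lon, lat) for lat, lon in cluster]).convex_hull)
    return hulls                                    # one polygon per activity hotspot
```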
Figure 2 shows the process of construction of places that are represented as
polygons generated from the aforementioned steps.
Figure 2: Construction of a place based on spatial clustering and the convex hull approach.
3.4. Measurement of Human Emotion
One main research question in this work is how to extract basic human
emotions and to quantify the degree of happiness expressed by users at
different places. The state-of-the-art computer vision and cognitive recognition
technologies make it possible to extract and quantify human emotions from
facial expressions. In this study, we propose two indices, namely the “Joy
Index” and the “Average Happiness Index”, to measure the degree of “happiness
atmosphere” at each place.
3.4.1. Affective Computing
We used the Face++ Emotion API6 to detect human faces in photos and to extract human emotions based on their facial expressions. The Face++ platform is a mature commercial, cloud-computing-enabled AI technology provider with a great number of customers and developers using its products, and it is reported to perform well in several facial recognition-related competitions7, which supports the reliability of the system; hence it is selected for extracting emotions
6https://www.faceplusplus.com/emotion-recognition/
7https://www.faceplusplus.com/blog/article/coco-mapillary-eccv-2018/
from human faces. A set of computer vision-based services are provided for
human facial recognition and analyses. The attributes of all faces in a photo are
extracted, including the face position and extent, human emotion, age, ethnicity,
gender, and even beauty. The Face++ API produces two measurements for
evaluating the emotion of human faces. One is the smile, which describes the
smile intensity (Whitehill et al., 2009) and includes two elements: smile value and smile threshold. The smile value is a numeric score (from 0 to 100) to
indicate the degree of smiling while the smile threshold is provided by the cloud
AI system to judge whether the detected face is smiling or not. Generally, if the
smile value is larger than the smile threshold, the face is judged as a smiling
face. Therefore, based on the smile attribute, each face in the photo is classified
as either smiling or not-smiling. The other measurement is emotion structure,
which is a vector of scores (from 0 to 100) to describe seven basic emotional
fields: anger, disgust, fear, happiness, neutral, sadness and surprise. All scores
of one face sum up to 100. The higher the score, the more confident the system is that the face expresses that emotion. Hence, the emotion structure could illustrate the intensity of a particular emotion across different dimensions.
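The sketch below illustrates how one photo could be scored through the Face++ Detect endpoint with the `requests` library. The endpoint URL, parameter names, and response fields follow Face++'s public documentation as we understand it and should be verified against the current API; the keys are placeholders.

```python
# A minimal sketch of scoring one photo with the Face++ Detect API via
# `requests`. Endpoint, parameters, and response fields are assumptions
# based on the public Face++ documentation; keys are placeholders.
import requests

DETECT_URL = "https://api-us.faceplusplus.com/facepp/v3/detect"  # assumed endpoint
API_KEY, API_SECRET = "YOUR_KEY", "YOUR_SECRET"                   # placeholders

def score_photo(image_url):
    """Return smile value, smile threshold, and happiness for every face in a photo."""
    resp = requests.post(DETECT_URL, data={
        "api_key": API_KEY,
        "api_secret": API_SECRET,
        "image_url": image_url,
        "return_attributes": "smiling,emotion",   # request the smile and emotion structures
    }).json()
    faces = []
    for face in resp.get("faces", []):
        attrs = face["attributes"]
        faces.append({
            "smile_value": attrs["smiling"]["value"],        # 0-100 smile intensity
            "smile_threshold": attrs["smiling"]["threshold"],
            "happiness": attrs["emotion"]["happiness"],       # 0-100 happiness score
        })
    return faces
```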
It is worth noting that not all emotion fields are used in this study. Happiness
is often recognized as one of the most common basic emotions (Izard, 2007).
Although some arguments exist (Frank and Ekman, 1993), smiles can represent happiness in general. In addition, as happiness is the clearest emotional domain compared with all other dimensions of emotions (Wilhelm et al., 2014), we only use the happiness value from the emotion structure. Figure 3 shows the happiness scores extracted from different human faces in photos. Note that only actual human faces, rather than those in paintings, are detected and analyzed in the experiments.
Figure 3: Emotional indices calculated for faces. (Source: Face++)
3.4.2. Emotional Indices for Places
Two place-based human emotion measurement indices are proposed to
evaluate the degree of happiness at different places in this study, namely the “Joy Index” based on the smile score and the “Average Happiness Index” based on the happiness score.
The “Joy Index” is calculated with consideration of the normalized difference
between the number of smiling faces and the number of non-smiling faces using
geo-tagged photos within the spatial footprint of each place as follows.
$$J_i = \frac{C_s - C_{ns}}{C_s + C_{ns}} \qquad (1)$$

where $J_i$ is the joy index calculated at place $i$, $C_s$ is the number of smiling faces in the photos within this place, and $C_{ns}$ is the number of non-smiling faces. The range of this index is between -1 and 1, a symmetric closed interval. A positive value indicates that more people are smiling at a place, which indicates positive emotion conditions, while a negative value indicates that more people do not have smiling faces, which may indicate a serious atmosphere at that place.
In comparison, the “Average Happiness Index (AHI)” calculates the average
of happiness values for all detected faces in those geo-tagged photos at a place.
$$AHI_i = \frac{1}{n}\sum_{j=1}^{n} H_j(i) \qquad (2)$$

where $H_j(i)$ is the happiness value of human face $j$ at place $i$, and $n$ is the number of faces detected at that place. The AHI illustrates the average degree of happiness for people at each place.
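A minimal sketch of Equations (1) and (2) is given below, assuming each detected face is represented as a dictionary carrying the smile value, smile threshold, and happiness score returned in the detection step above.

```python
# Minimal implementations of the two place-level emotional indices.
def joy_index(faces):
    """Normalized difference between smiling and non-smiling face counts, Eq. (1)."""
    smiling = sum(1 for f in faces if f["smile_value"] > f["smile_threshold"])
    non_smiling = len(faces) - smiling
    return (smiling - non_smiling) / (smiling + non_smiling)

def average_happiness_index(faces):
    """Mean happiness score (0-100) over all faces detected at a place, Eq. (2)."""
    return sum(f["happiness"] for f in faces) / len(faces)
```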
3.5. Sensitivity Tests
3.5.1. Test for Place Construction
During the construction of places described in Section 3.3, a set of combinations of the parameters ε and minPts is used. Although the shape of place boundaries may vary with different parameters, the derived place emotion results should have a similar distribution and trend if the proposed approach is stable. In order to check this, human emotion scores are calculated at each place with different parameter settings. Then, Kendall's coefficient of concordance (W) is utilized to measure the agreement among those different human emotion
detection results.
In order to do so, a ranking of places based on the detected average happiness score is created for each pair of ε and minPts. Assume there are $m$ combinations of parameters for $n$ places. Summing up the ranks $r_{i,j}$ over all $m$ scenarios, a place $i$ gets a total rank $R_i$ via Equation 3. Then, $\bar{R}$ is calculated as the average value of those total ranks across all places via Equation 4, and the sum of squared deviations $S$ is calculated via Equation 5. Kendall's W can then be calculated by Equation 6.

$$R_i = \sum_{j=1}^{m} r_{i,j} \qquad (3)$$

$$\bar{R} = \frac{1}{n}\sum_{i=1}^{n} R_i \qquad (4)$$

$$S = \sum_{i=1}^{n} (R_i - \bar{R})^2 \qquad (5)$$

$$W = \frac{12S}{m^2(n^3 - n)} \qquad (6)$$
In general, if the test statistic $W$ is 1, all judges (i.e., different parameter settings) assign the same order to the places, while $W = 0$ indicates that there is no agreement among the judges and the ranks are essentially random. If the results of W show that the emotion score rankings are similar under different parameters for place construction, the influence of shape during the place construction process is limited, and the emotion scores calculated at each place are robust.
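The concordance test of Equations (3)-(6) can be computed directly; the sketch below assumes the per-setting rankings are stacked into an m-by-n array, one row per parameter setting and one column per place.

```python
# A minimal sketch of Kendall's coefficient of concordance over emotion
# rankings produced by m different parameter settings, Eqs. (3)-(6).
import numpy as np

def kendalls_w(rank_matrix):
    m, n = rank_matrix.shape
    R_i = rank_matrix.sum(axis=0)               # total rank of each place, Eq. (3)
    R_bar = R_i.mean()                           # mean of the total ranks, Eq. (4)
    S = ((R_i - R_bar) ** 2).sum()               # sum of squared deviations, Eq. (5)
    return 12.0 * S / (m ** 2 * (n ** 3 - n))    # Kendall's W, Eq. (6)

# Example: 3 parameter settings ranking 4 places identically gives W = 1.0
print(kendalls_w(np.array([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])))
```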
3.5.2. Test for Affective Computing
As the number of photos varies at different places, it is necessary to know whether the data collected are sufficient for human emotion calculation. To test the reliability and stability of the facial-expression-based emotion recognition results, a bootstrapping strategy was applied to assess the robustness of the emotional indices calculated in Section 3.4.
Bootstrapping is a resampling approach proposed by Efron (Efron, 1992). It
is often used to approximate the distribution of the test samples. By doing this,
a confidence interval can be derived to show the plausible range of emotion scores at each place. The step-by-step details are described as follows:
(1) Assume that $n$ faces are collected at place $i$ as a sample set $D_0$. Perform $n$ random samplings with replacement to form a new sample set $D$ with the same size as $D_0$. Note that the same face may appear more than once in $D$.
(2) Then, the emotion indices $e$ of the new sample set $D$ are calculated.
(3) Repeat the two steps above $N$ times to generate $D_1, D_2, ..., D_N$ with emotion results $e_1, e_2, ..., e_N$.
(4) Rank the affective computing results and calculate the average value of the emotional indices as the final output for place $i$. Discard the lowest 2.5% and the highest 2.5% of the results. The remaining results show the 95% confidence interval of the emotional indices calculated at place $i$.
The results of bootstrapping show the confidence intervals of the possible
emotion scores at each place, which help evaluate the stability of the emotion
calculation results. Although it is impossible to know the true confidence interval, as photos are collected with bias, the derived results asymptotically approach the truth (DiCiccio and Efron, 1996). Further analyses are conducted based on the emotional results after the bootstrapping processing.
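A minimal sketch of this bootstrapping procedure for a single place is shown below; `index_fn` stands for either of the emotional index functions defined earlier, and the number of rounds is an illustrative choice.

```python
# A minimal sketch of the bootstrap confidence interval for one emotional
# index at one place, given the list of face records at that place.
import random

def bootstrap_ci(faces, index_fn, n_rounds=1000, alpha=0.05):
    """Resample faces with replacement; return (mean, lower, upper) of the index."""
    scores = []
    for _ in range(n_rounds):
        resample = [random.choice(faces) for _ in range(len(faces))]  # same size, with replacement
        scores.append(index_fn(resample))
    scores.sort()
    lo = scores[int(alpha / 2 * n_rounds)]            # discard lowest 2.5%
    hi = scores[int((1 - alpha / 2) * n_rounds) - 1]  # discard highest 2.5%
    return sum(scores) / n_rounds, lo, hi
```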
3.6. Influence of Environment Factors
As suggested by environmental psychology studies (Capaldi et al., 2014;
Svoray et al., 2018), human emotions can be affected by the surrounding
environment. Therefore, exploring the potentially influential geographical and
environmental factors and their importance has great significance to understand
human emotions at different places. To do so, the Pearson’s correlation analysis
(Benesty et al., 2009) and the multiple linear regression (MLR) were employed
in this study.
As mentioned in Section 3.2, a group of social and physical geographic attributes is collected when retrieving the information for each place. Those factors are represented as $a_1, a_2, ..., a_n$ at each place. Please note that this paper aims at proposing a general computational framework for extracting place emotions, and the environmental factors may vary for different types of places. Therefore, we do not define a complete set of factors in this research, and further research is needed to enumerate a complete list of variables related to a specific type of place. As a case study, by referring to several existing works and our geographical knowledge, several environmental factors are chosen in this work, as described in Section 4.1.
The Pearson's correlation coefficient ρ is employed to explore the positive and negative impacts, and the strength of the linear relationship, between an environmental factor and the emotion score at each place. As correlation
analysis is only suitable for numeric values, for categorical variables (e.g.,
continents), the correlation coefficient between the emotion and each category
is calculated by converting categorical variables to dummy variables (0,1).
For each influential factor $a$, the correlation analysis is performed with one emotion index $e$ via Equation 7:

$$\rho_{e,a} = \frac{E[(e - \mu_e)(a - \mu_a)]}{\sigma_e \sigma_a} \qquad (7)$$

where the Pearson's correlation coefficient $\rho_{e,a}$ is calculated from the expected value $E$ of the product of the deviations of the two variables $e$ and $a$ from their mean values $\mu_e$ and $\mu_a$, divided by the product of their standard deviations $\sigma_e$ and $\sigma_a$. A positive value shows that the factor has a positive impact on the emotion index $e$, and vice versa.
The MLR uses all geographical and environmental variables ($a_1, a_2, ..., a_n$) to predict the emotion index value $E_i$ at each place $i$ as:

$$E_i = f(a_1, a_2, \ldots, a_n) + \gamma \qquad (8)$$

where $\gamma$ is an unobserved error term and $f$ is a linear combination of the variables. The impact of each attribute can be measured using the coefficient of each independent variable. The $R^2$ is calculated as a goodness-of-fit statistic to determine how well the MLR model fits the observed place emotion data.
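The sketch below outlines this statistical step with pandas and statsmodels, assuming the place-level records sit in a DataFrame whose column names are illustrative; categorical factors are converted to 0/1 dummy variables for the correlation analysis, and one category is dropped as the reference level in the regression, mirroring the reference categories reported later.

```python
# A minimal sketch of the correlation analysis (Eq. 7) and the multiple
# linear regression (Eq. 8) on place-level data; column names are assumptions.
import pandas as pd
import statsmodels.api as sm

def analyze(df, emotion_col="AHI"):
    factors = df.drop(columns=[emotion_col])
    y = df[emotion_col]

    # Pearson's correlation with every factor, categorical ones as 0/1 dummies
    dummies = pd.get_dummies(factors).astype(float)
    correlations = dummies.apply(lambda col: col.corr(y))

    # Multiple linear regression; one dummy per category is dropped as the
    # reference level to avoid perfect collinearity.
    X = sm.add_constant(pd.get_dummies(factors, drop_first=True).astype(float))
    model = sm.OLS(y, X).fit()
    return correlations, model
```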
4. Experiments and Results
As the experience of travel and tourism is deeply connected with the place
(Wearing et al., 2009), we take tourist attractions as a case study place type to
examine the feasibility of our Place Emotion sensing framework.
4.1. Input Datasets
Tourist sites selected in this study are located around the world. The selected sites have to be famous in terms of the annual number of visitors, comprehensive in terms of cultural representativeness, and diverse in terms of
site types. In addition, in order to get reliable emotion detection results, the
site should have a large number of photos taken and uploaded by tourists. To
find them, several official resources8 and open statistics were checked9,10. The selected attractions are distributed all over the world and listed in Figure 5. In total, there are 80 sites, including 24 located in Asia, 25 in Europe, 29 in North America, and only 1 each in Africa and Oceania. They are from 22 countries around the world. The spatial distribution of all these tourist sites can be found in Figure 5.
A group of geographical attributes and environmental factors that, to the best of our knowledge, may influence the tourists' degree of happiness at each site is searched and recorded. As human emotions are complex and influenced by multiple individual and environmental variables, we only selected a small group of variables according to some existing studies from environmental psychology (White et al., 2010; Capaldi et al., 2014; Svoray et al., 2018). Other socio-economic and environmental factors, as well as individual differences, may be added and explored in future work. The selected variables are:
(1) The coordinates of the site location, which are retrieved from the Google Maps Places API.
(2) The continent where the site is located.
(3) The country where the site is located.
(4) The existence of water bodies. As suggested by several related studies
from psychology, landscapes containing water bodies can influence human activities and consequently affect moods (White et al., 2010). Therefore, taking water bodies into consideration is necessary. There are two circumstances where water bodies exist: either the water body lies within the tourist place, or it is near the tourist site and can be directly viewed by people. Otherwise, we deem that no water body exists at that place.
(5) The distance to the nearest water body, which is the shortest distance from the nearest water body (lake, ocean, etc.) to the place.
If the water body exists within the site, the distance is 0.
8https://whc.unesco.org/en/map/
9https://www.lovehomeswap.com/blog/latest-news/the-50-most-visited-tourist-
attractions-in-the-world
10https://www.travelandleisure.com/slideshows/worlds-most-visited-tourist-attractions
(6) Whether the main part of the site is an open or closed space. Most previous studies have shown that activities in an outdoor environment have a positive effect on happiness (Thompson Coon et al., 2011). Hence, the tourist sites are classified as open or closed spaces. Parks, squares,
lakes, etc., which are open to the air, are defined as open space, while
sites like museums, stations, cathedrals, etc., whose main contents are
indoor, are considered as closed space.
(7) The green vegetation coverage of each place. Several studies have suggested that green space can reduce stress and has a positive impact on mental health (Maas et al., 2009; Thompson et al., 2012). In order to measure green space and its impact on human emotions, the Normalized Difference Vegetation Index (NDVI), which is widely used in remote sensing of vegetation (Goward et al., 1991), was harvested from NASA Earth Observations11. The NDVI product for June 2017 was downloaded and its values were spatially joined to each site (a minimal sampling sketch is given after this list).
(8) Whether the place is located in an urban or rural environment. Similar to the open/closed space distinction, urban areas, which have higher building density than rural areas with more natural environments, have a great influence on human emotions (Wooller et al., 2018).
(9) The type of a tourist site. Different types of tourist sites may have
different groups of visitors and mobility patterns. The type of tourist
sites may be associated with the mental conditions (Leiper, 1990). Based
on the site type defined by the Google Maps Place API as well as
several tourism-related studies (Lew, 1987), there are six types of tourist
attractions defined in this research. Namely, natural (like waterfalls, where
places have limited human-made things), amusement (like the Disneyland
theme parks, where tourists visit places for enjoying games and other
activities for fun), religious (like cathedral, where people visit mostly
for religious-related activities), museum (like the Metropolitan Museum
of Art, where historical, scientific, and artistic objects are kept), palace (like the Forbidden City, where old cities and castles are located), and other cultural categories (like the Grand Bazaar, where places have cultural and historical values but do not belong to the other categories aforementioned).
Note that the six types are selected only based on the attributes of the
80 tourist attractions. More types of tourist attractions can be defined in
other datasets.
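As referenced in item (7), the sketch below shows one way the NDVI values could be attached to sites by sampling a downloaded NEO NDVI raster at each site centroid with rasterio; the file name and coordinate layout are assumptions.

```python
# A minimal sketch of attaching an NDVI value to each site, assuming the
# monthly NEO NDVI product has been downloaded as a GeoTIFF in geographic
# (lon/lat) coordinates; the file name is a placeholder.
import rasterio

def sample_ndvi(sites, raster_path="MOD_NDVI_M_2017-06.tif"):
    """sites: list of (name, lat, lon); returns {name: ndvi}."""
    with rasterio.open(raster_path) as src:
        coords = [(lon, lat) for _, lat, lon in sites]   # rasterio samples (x, y) = (lon, lat)
        values = [v[0] for v in src.sample(coords)]      # first band holds the NDVI value
    return {name: val for (name, _, _), val in zip(sites, values)}
```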
After the selection of tourist attractions, all photos taken between Jan. 2012 and Jun. 2017 within a distance of 1 km of the center of each attraction site were downloaded from the Flickr website. The search radius is larger than
the spatial footprint of a place in most cases to ensure the number of photos
harvested is sufficient. In total, 6,199,615 photos were collected.
11https://neo.sci.gsfc.nasa.gov/view.php?datasetId=MOD_NDVI_M
4.2. Construction of Place and Affective Computing
Following the steps in Sections 3.3 and 3.4, each tourist site is constructed from the user-generated footprints with the DBSCAN spatial clustering and convex-hull minimum bounding geometry algorithms, and the place emotions are calculated. In total, 2,416,191 faces are detected and evaluated; the ratio of the number of faces to all the photos is about 38.97%, while the proportion of pictures with faces is about 20%. For each site, two emotional indices, namely the Joy Index and the Average Happiness Index (AHI), are calculated from the faces remaining within each site. However, since different parameter settings of DBSCAN in the place construction process may impact the generated sites, a set of parameter combinations is tested. In the experiment, we iteratively chose ε as 50 m, 100 m, 200 m, and 300 m, and minPts as 0.5%, 1%, and 2%, according to the recommendations of previous studies (Hu et al., 2015; Gao et al., 2017b). In sum, 12 combinations of parameter settings were tested individually and applied to the Kendall's concordance test.
For each pair of parameters, a ranking of sites is returned based on each emotional index. The output of Kendall's W is 0.99 for the 12 rankings based
on the normalized Joy Index, and the same value 0.99 for all rankings based
on the AHI, and 0.98 for 24 ranking lists including both indices. The results
illustrate that all pairs of parameters can result in a very similar ranking order
of the happiest places, which means that the proposed method is stable and
the selected parameters of DBSCAN have limited impact on the overall place
emotion ranking. The happiness indices calculated at each place are almost
invariant across all the different experiments. Accordingly, we chose only one parameter setting, 100 m for ε and 1% for minPts, for further analyses.
As an exploratory analysis, four famous tourist attractions of interest, namely the Great Wall, the Amiens Cathedral, the Magic Kingdom Park at Disneyland, and Universal Studios Hollywood, are selected as individual examples to demonstrate the specific place emotion distributions (Figure 4). The first column of figures shows the spatial distribution of photos with and without smiling faces inside the constructed place, while the second column of the figures shows the most frequent word tags shared by the Flickr users across
those sites. The constructed places are multi-part polygons and the photos
outside the polygons are removed in order to reduce data noise. The red points show smiling faces, while blue points indicate non-smiling faces. According to the figure, the Great Wall, the Magic Kingdom Park at Disneyland, and Universal Studios Hollywood have more smiling tourists, while tourists at the Amiens Cathedral show fewer smiling faces, as people at a religious site
may be less inclined to smile. Moreover, the semantics of the photos are also
explored. The word cloud visualization shows the top 100 tags of those social
media photos at each place. In addition to the country, department (in France)
and city names, a list of tourist site names, including Mutianyu, Great Wall, Cathedrale, Disney World, and Universal Studios, is identified from those geotags, which indicates that those photos can represent place information, although not all the words and topics are necessarily indicative of a specific place (Adams and Janowicz, 2012; Adams and McKenzie, 2013). These examples show that the computational framework for emotion extraction based on facial expressions at places is generally effective.
Figure 4: The spatial distribution of Flickr photos with smiling faces and without smiling
faces and their most frequent word tags across four sample tourist sites: the Great Wall,
the Amiens Cathedral, the Magic Kingdom Park at Disneyland, and Universal Studios Hollywood.
4.3. The World Ranking List of Happiest Tourist Attractions
Figure 5 shows the spatial distribution of tourist sites as well as their
emotional indices. The circles represent the joy index while diamonds represent
the AHI of each site. A deeper red color shows more happiness at a site while
a deeper blue indicates less happiness. For each site on the map, its associated
name can be found via the index. Based on the emotional indices, two ranking
lists of tourist attractions were generated in Figure 6 (Joy Index) and in Figure
7 (AHI). After using the bootstrapping strategy, the 95% confidence interval
of the emotional indices at each site is characterized by the blue bars, and
the circles at the center of the lines indicate the averaged values of emotional
indices. For the joy index, a positive value represents more enjoyment smiles, indicating a happy atmosphere, while a negative value indicates that happiness cannot be clearly deduced from facial expressions at a site. The average joy index across all sites is about -0.115, slightly below 0, while
the average of all AHI values is about 38.04. The correlation analysis result
shows that the Pearson’s correlation coefficient between the two rankings is
0.97, which means that the two rankings are similar. Interestingly, the official slogan for Disneyland is “The Happiest Place On Earth”. However, according to the ranking lists from user-generated crowdsourcing data, the top site with the highest happiness indices in the world, based on the two measurements, is the Great Wall, China (Joy Index: 0.429; AHI: 63.72). Several amusement parks, such as the Disneyland parks, Everland in South Korea, and Ocean Park in Hong Kong, do appear with high rankings, which is in accordance with public opinion. Meanwhile, at the bottom of the ranking list is the Amiens Cathedral, with only -0.489 for the joy index and 20.79 for the AHI. It is worth noting that low happiness scores do not necessarily mean that people at
those sites (e.g., religious places) are not as happy as people in other types of
places, but it could mean that people are less inclined to smile at those sites.
However, since only tourist sites are chosen in our case study, most smiling
faces are enjoyment smiles and seem to be associated with positive emotion and
happiness in the top-ranked sites.
Figure 5: The spatial distribution of all tourist sites and their associated happiness indices.
Figure 6: The ranking list of tourist sites based on the Joy Index. The 95% confidence interval
of the emotional index at each site is characterized by the blue bars, and the circles indicate
the averaged values of the emotional index. (Note: figure is zoomable)
Figure 7: The ranking list of tourist sites based on the Average Happiness Index. The 95%
confidence interval of the emotional index at each site is characterized by the blue bars, and
the circles indicate the averaged values of the emotional index. (Note: figure is zoomable)
4.4. Relationships between Human Emotions and Environmental Factors
Each tourist site listed in Figure 5 was assigned a set of the aforementioned attributes, namely the continent, open or closed space, urban or rural area,
attraction type, vegetation coverage, water body existence, and the distance to
the nearest water body. Figure 8, Table 1, and Table 2 show the results of
correlation analysis and multiple linear regression of the emotional indices and
those attributes.
Results of the correlation analysis show that amusement parks have a significant positive impact (0.41 in Joy Index and 0.46 in AHI) on tourists' smiles and happiness, which is in accordance with common knowledge. As tourists often go to amusement parks to relax and enjoy holidays, they may be happier there than at other places. Natural landscapes (0.27 in Joy Index
and 0.28 in AHI), open space (0.25 in Joy Index and 0.28 in AHI), existence
of the water body (0.21 in Joy Index and 0.25 in AHI), North America (0.19
in Joy Index and 0.23 in AHI), rural areas (0.31 in Joy Index and 0.22 in
AHI), and vegetation coverage by NDVI (0.18 in Joy Index and 0.2 in AHI) all have positive impacts. Except for the continent variable, the coefficients of those aforementioned variables hint that, to some degree, places with a more open environment can increase the degree of happiness of tourists, with more enjoyment smiles. On the contrary, compared with sites on other continents, people staying at the sites in Europe (only the sites selected in our case study) may not explicitly express as much happiness through smiles as those on other continents.
What is more, religious sites (-0.31 in Joy Index and -0.34 in AHI), closed spaces (-0.25 in Joy Index and -0.28 in AHI), nonexistence of a water body (-0.21 in Joy Index and -0.26 in AHI), palaces (-0.16 in Joy Index and -0.23 in AHI), and urban areas (-0.31 in Joy Index and -0.22 in AHI) have negative impacts on the
average happiness score of tourists.
According to the MLR results (Table 1 and Table 2), the impacts of most variables show results similar to the correlation analysis. The impact of
Europe on the happiness conditions is negative and is statistically significant in
the regression model. Sites with water bodies have a positive impact on human
happiness but are not significant in our samples. Conversely, for sites located
in urban areas, the emotional indices are negative and this result is significant.
Natural landscape has positive impacts on the happiness indices but is not
statistically significant. The goodness of fit R² is about 0.57 and statistically significant with p-value < 0.001 for both indices, showing that the variation of
human emotions at different places can be explained by those geographical and
environmental factors to a certain degree.
In addition, as the type of tourist site has an impact on human emotions in the statistical analyses, we further explore a specific type of tourist attraction, the amusement park, to illustrate the results. As shown in Table 3, there are 17 amusement parks in this study, and they generally have higher AHI (average 45.72) and Joy Index scores (average 0.52) compared with other types of tourist sites. Most amusement parks are located in urban areas, are open spaces, and contain water bodies (e.g., lakes) inside the park. The average NDVI value at amusement parks is about 0.52, which is similar to the value across all sites (about 0.54) and not type-biased. A more specific exploration can be conducted in the future to investigate more factors that may impact human emotions at amusement parks.
Figure 8: The Pearson’s correlation coefficients of the geographical and environmental
attributes to the human emotions: (a) Joy Index; (b) Average Happiness Index
Table 1: The coefficients of multi-linear regression based on the joy index and the geographical
and environmental factors
Attributes Regression Coefficient
Constant 0.486**
Continent
Asia -0.396*
Europe -0.458**
North America -0.353*
Oceania -0.235
Africa N/A
Open/Closed Space
Open Space -0.0044
Closed Space N/A
Urban/Rural
Urban -0.1458*
Rural N/A
Type
Cultural Landscape -0.1923**
Museum -0.254*
Natural Landscape 0.002
Palace -0.203*
Religious Site -0.319*
Amusement Park N/A
Water Body
Existence of the Water Body 0.021
Inexistence of the Water Body N/A
Distance to the Nearest Water Body 0.0004
NDVI 0.0004
R2= 0.57**
∗ p < 0.05
∗∗ p < 0.001
Table 2: The coefficients of multi-linear regression based on the average happiness index (AHI)
and the geographical and environmental factors.
Attributes Regression Coefficient
Constant 60.649**
Continent
Asia -13.805*
Europe -16.647*
North America -12.196
Oceania -5.058
Africa N/A
Open/Closed Space
Open Space -0.015
Closed Space N/A
Urban/Rural
Urban -5.083*
Rural N/A
Type
Cultural Landscape -9.264**
Museum -10.645*
Natural Landscape 0.81
Palace -10.889**
Religious Site -15.547**
Amusement Park N/A
Water Body
Existence of the Water Body 0.7484
Inexistence of the Water Body N/A
Distance to the Nearest Water Body 0.0172
NDVI 0.018
R2= 0.57**
∗ p < 0.05
∗∗ p < 0.001
Table 3: The list of amusement parks with their average happiness index (AHI) and joy index
scores.
Tourist site AHI Joy Index
Epcot, USA 53.86 0.60
Disney Animal Kingdom, USA 53.07 0.60
Disney World's Magic Kingdom, USA 52.91 0.60
Disney Hollywood Studios, USA 50.37 0.56
Universal Studios, Hollywood, USA 48.05 0.54
Everland, Gyeonggi-Do, South Korea 47.34 0.50
Disneyland Park, France 46.34 0.51
Disney California Adventure, USA 46.14 0.52
Ocean Park, Hong Kong 45.89 0.53
Disneyland Hong Kong, Hong Kong 45.76 0.51
Universal Studios, Florida, USA 45.55 0.53
Islands of Adventure, USA 45.34 0.52
Tokyo Disneyland, Japan 44.68 0.51
Universal Studios, Japan 41.57 0.47
Disneyland Park, USA 40.86 0.47
Balboa Park, USA 38.6 0.45
Lotte World, South Korea 30.95 0.35
5. Discussion
5.1. Human-environment perspective of results
Scholars in environmental psychology have shown that the surrounding
environment affects human emotions. The results of this study reach
a similar conclusion from a big-data-driven perspective. Combining the
results of the correlation analysis and the multiple linear regression, amusement
parks are the places that most positively affect individuals' happiness
expressions. Open spaces, places with water bodies, places with denser green
vegetation, and rural areas can be grouped together as one kind of place:
all of these variables have positive impacts on the degree of human happiness.
Therefore, it can be concluded that people who stay in such areas may tend to
feel happier. Our findings are consistent with existing theories in psychology
(Kaplan, 1995) that exposure to nature has a positive impact on human moods
(Bowler et al., 2010), which also supports the theoretical foundation of the
framework and, to some extent, validates this study.
However, some limitations should be pointed out. As expressions of human
emotions are quite complex and are influenced by multiple internal and external
variables, the results from this study may not hold for individuals
(Junot et al., 2017) nor for all tourist attractions around the world.
Moreover, some cultural environments, such as religious sites and museums, may
suppress people's positive emotional expressions. It is worth noting that
suppression does not mean that people are unhappy at those places, only that
they express enjoyment smiles less explicitly. In addition, although the semantics
of geotags show that most photos are related to the places, tourists' emotions
may not be directly relevant to the views of surrounding environments but could
be affected by the activities they are doing or the events they are participating
in at that place. A deeper exploration should be conducted to identify other
factors influencing human emotion expressions.
5.2. Uncertainty of the Data
Social media data are uploaded by volunteers based on their experiences
and opinions, producing "ambient geographic information" (Degrossi et al.,
2018). As user-generated photos are used in this study, the uncertainty and
quality of the data should be tested (Goodchild and Li, 2012). Three types of
data uncertainty are addressed: the vagueness of place, the varying number
of faces, and the different groups of people.

As the size and boundary of a place may be vague, it is not suitable
to use a fixed distance for data analysis. Place boundaries are constructed
based on the density distribution of photos, and georeferenced photos outside
the place boundary are removed to minimize errors in the results. In addition,
using the DBSCAN algorithm, which is insensitive to noise in place construction,
further reduces the vagueness of the results. Besides, a combination of parameter
settings as well as Kendall's W is tested to ensure data consistency. Therefore,
the uncertainty of the result is minimized.
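For illustration, a minimal sketch of density-based place construction from georeferenced photos is given below; it assumes a hypothetical file of photo coordinates, and the eps/min_samples values are placeholders rather than the parameter combinations calibrated in this study.

```python
# Minimal sketch: constructing a place footprint from georeferenced photos with DBSCAN,
# which labels sparse points as noise (-1) and is therefore robust to outliers.
import numpy as np
from sklearn.cluster import DBSCAN

coords = np.loadtxt("photo_coords.csv", delimiter=",")  # hypothetical (lat, lon) pairs

labels = DBSCAN(eps=0.001, min_samples=20).fit_predict(coords)  # illustrative parameters
core = coords[labels != -1]          # photos inside the detected place cluster(s)
noise_ratio = np.mean(labels == -1)  # photos outside the place boundary are discarded
print(f"kept {len(core)} photos, noise ratio = {noise_ratio:.2%}")
```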
Since the number of faces varies across tourist sites, a key issue is
to examine whether the number of photos collected at a site is sufficient to
extract human emotions and whether the calculated emotional condition is
stable. Using a bootstrapping strategy, a 95% confidence interval of the emotion
scores at each site is generated, and the variability is derived by subtracting the
lower bound of the confidence interval from the upper bound. To explore the
relationship between the uncertainty of the emotional indices and the number of
faces analyzed at each site, a regression analysis was employed. Figure
9 illustrates that the relation between the variability of the emotional indices at
each site and the number of faces identified from photos taken at that site fits
a power model very well (goodness of fit 0.99). In general,
the more faces detected at a site, the more stable the calculated emotion
measurement. For most sites, the variability of the 95% confidence interval is less
than 0.05 for the Joy Index and less than 3 for the AHI, which has limited influence
on the ranking lists. Therefore, the estimated emotional conditions at each
site are reliable.
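The bootstrapping check described above can be sketched as follows; the array of per-face joy scores is a synthetic placeholder, and the resampling settings are illustrative assumptions rather than the exact configuration used in the study.

```python
# Minimal sketch of the bootstrap check: resample per-site face-level joy scores with
# replacement and report the width of the 95% confidence interval of the mean.
import numpy as np

def ci_width(scores, n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    means = [rng.choice(scores, size=len(scores), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return hi - lo  # the "variability" plotted against the number of faces in Figure 9

site_scores = np.random.default_rng(1).uniform(0, 1, size=5000)  # placeholder data
print(f"95% CI width of the mean joy score: {ci_width(site_scores):.4f}")
```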
In addition, as different groups of people, with various cultural backgrounds
and as either locals or visitors, may express different degrees of excitement,
enjoyment, and emotion toward the same place, the results might be influenced by
the proportion of different types of tourists. To distinguish tourists from local
people, we follow the criteria used in previous studies: if the time span over which
a user takes multiple photos at one place is longer than one month, the user is
labeled as a local; otherwise, as a visitor (García-Palomares et al., 2015).
Results show that for most tourist attractions (more than 90%), the majority of
Flickr photos (more than 80%) are uploaded by tourists. The average difference
in AHI scores between tourists and locals at those tourist sites is only 3, showing
that such influence is minimal and does not change the ranking list.
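A minimal sketch of this visitor/local rule is shown below, assuming a hypothetical photo table with user, place, and timestamp fields; one month is approximated here as 30 days.

```python
# Minimal sketch of the visitor/local rule: a user whose photos at one place span
# more than one month is labeled a local, otherwise a visitor. Field names are hypothetical.
import pandas as pd

photos = pd.read_csv("photos.csv", parse_dates=["taken_at"])  # user_id, place_id, taken_at

span = (photos.groupby(["place_id", "user_id"])["taken_at"]
              .agg(lambda t: t.max() - t.min()))
labels = span.apply(lambda d: "local" if d > pd.Timedelta(days=30) else "visitor")
print(labels.value_counts())
```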
Although we tried our best to reduce the uncertainty, some limitations still
exist. Data bias is common in VGI data (Senaratne et al., 2017). As
suggested previously (Gao et al., 2017a; Jolivet and Olteanu-Raimond, 2017),
one bias issue of VGI is that the contributions of volunteers often follow
a power-law or exponential frequency distribution with a long tail,
which indicates that most photos are posted by a small proportion of users
while a large number of users contribute only a few (Goodchild and Li, 2012). In
this study, a large number of the detected faces might belong to a small group
of users, and the information provided by social media users may not always
comply with quality standards. However, the emotions extracted from facial
expressions do reflect active users' experiences, opinions, interests, and feelings
at those places, and can provide new insights for place-based information
research (Blaschke et al., 2018).
5.3. Comparison between text-based and facial-expression-based methods
Although facial-expression-based emotion detection methods have become
more mature and a few studies have applied them, a key issue is
whether the facial-expression-based methodology is reliable. Therefore, we
compared our methodology with a text-based framework, referring to
Mitchell's research (Mitchell et al., 2013). In that study, the authors
followed Dodds's method (Dodds et al., 2011), in which a daily happiness score
was calculated from Twitter using state-of-the-art NLP technologies, summarizing
a range of human emotions in the United States at the state level and examining
their connections to socioeconomic attributes. For comparison, we used the
YFCC100M dataset, which contains the largest collection of Flickr photos
(Thomee et al., 2015), and evaluated emotions with our framework for all photos
in each state of the United States. Since Mitchell's research was conducted
in 2011, we only retrieved photos taken in that year within the
United States to ensure consistency of the time period. Then, both the Joy Index
and the Average Happiness Index were computed from the photo data to obtain
happiness scores for each state. Our metrics and the results of
Mitchell's research across the 50 states were compared via Spearman's rank
correlation analysis (Fieller et al., 1957), which compares the ranks of the values
in the two data series. As shown in Table 4, the results of the two studies are
positively correlated: 0.28 for the Joy Index and 0.30 for the Average Happiness
Index, which shows some degree of similarity between the two approaches.
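This cross-method comparison can be sketched as follows, assuming a hypothetical table of per-state scores from the text-based study and from our two indices; the file and column names are placeholders.

```python
# Minimal sketch: Spearman's rank correlation between state-level happiness scores from
# the text-based study and from the facial-expression-based indices.
import pandas as pd
from scipy.stats import spearmanr

states = pd.read_csv("state_happiness.csv")  # hypothetical: state, text_score, joy_index, ahi

for col in ["joy_index", "ahi"]:
    rho, p = spearmanr(states["text_score"], states[col])
    print(f"text-based vs {col}: rho = {rho:.2f}, p = {p:.4f}")
```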
It should be noted that the focus of our research is not to compare or
contrast existing text-based emotion extraction technologies with facial-expression
approaches; different methods have their own pros and cons. As mentioned above,
text-based approaches cannot record real-time emotions and might not be suitable
for global-scale research due to the multilingual environment, but they typically
offer larger data volumes and richer semantics (Hu et al., 2019). Combining these
two approaches for affective computing could help build a more holistic
understanding of human emotions from different aspects and enrich the
understanding of this innate neural program (Abdullah et al., 2015).

Figure 9: The relation between the variability of emotional indices at each site and the number
of faces identified from photos based on: (a) Joy Index, (b) Average Happiness Index.
Our approach also has some limitations. First, as mentioned above, the results
might be biased toward certain groups of people (visitors vs. local citizens, and
different ethnicities or cultural backgrounds) and affected by the diversity of faces
in the training data sets. Second, people's emotions may not be directly relevant
to the views of the surrounding environment. Moreover, people may not always
express emotions explicitly through either facial expressions or texts. Further
exploration should be conducted to reveal the collective connections
between human emotions and facial expressions on technology-mediated
platforms (Elwood and Mitchell, 2015).
Table 4: The Spearman’s correlation coefficients between the text-based method and the facial
expression-based method.
Emotion Index Correlation Coefficient p-value
Joy Index 0.28 0.0472
Average Happiness Index 0.30 0.0314
6. Conclusion and Future Work
In this research, to understand the interaction between human
emotions and the environment, we propose a data-driven framework for measuring
human emotions at different places using large-scale user-generated photos from
social media. We utilize state-of-the-art social computing tools
to detect and measure human happiness from facial expressions in photos.
Tourist attractions, as a specific type of place, are used as an example for deriving
place-based human emotion indices. A ranking list of 80 tourist sites across
the world is created not from statistics of tourist flows, but from the
degree of happiness expressed and shared by millions of tourists, which also
shows that our framework is suitable for global-scale studies. In addition, we
explore the impacts of several geographical and environmental factors on human
happiness. The results are consistent with common sense and with existing studies
in psychology, indicating that people in environments with more openness
and more opportunities for exposure to nature express more happiness and
smile more. Overall, this research advances our knowledge of human
emotions at different tourist attractions. Our study connects crowdsourced
human emotions to the geographic attributes of the environment using advanced
AI techniques and spatial analytics, and provides a new paradigm for research
in geography and GIScience. The proposed framework and findings could
also offer practical guidance for environmental psychology, human geography,
tourism management, and urban planning.
In the future, several potential directions will be pursued to address related
research questions. One direction is data fusion. Although only Flickr
photos are employed in this experiment, the study can be further improved
with diverse data sources such as surveys, and a data-synthesis-driven method
might provide additional perspectives on human emotions. Mixing text-based
and facial-expression-based emotion extraction methods may also increase the
confidence of the final output. Another direction is to explore the fundamental
factors impacting human emotions. As we propose this framework for place
emotion research, we will focus more on spatial analysis of emotion patterns.
Human emotions at different scales will be compared to revisit the scale effect
in geography, and different groups of people, as suggested by existing studies
(Niedenthal et al., 2018; Kang et al., 2018), will be explored to gain deeper
insights into the influential factors of human emotions. Moreover, different place
types and spatial units at different scales, including points of interest,
census blocks, neighborhoods, and communities, will be combined to examine
the geographic patterns and socioeconomic linkages of human emotions. A more
focused study of a limited number of places but with more environmental
and socioeconomic factors can be conducted to enrich the
understanding of place-based emotions.
Acknowledgement
The authors would like to thank Wanjuan Bie, Shan Lu, and Dan'nan Shen
at Wuhan University for their contributions to the figures; Timothy Prestby
at UW-Madison for his help with language edits of the manuscript; and
Jialin Wang, Zimo Zhang, Wenyuan Kong, and Zijun Xu in the Place&Emotion
Group, Urban Playground Lab, Wuhan University, for their helpful discussions.
Funding support for this research is provided by the Office of the Vice Chancellor
for Research and Graduate Education at the University of Wisconsin-Madison
with funding from the Wisconsin Alumni Research Foundation, and by the Fund
for National College Students Innovations Special Project of China (Grant No.
201810486033).
References
Y.-F. Tuan, Space and place: The perspective of experience, U of Minnesota
Press, 1977.
M. F. Goodchild, Formalizing place in geographic information systems, in:
Communities, neighborhoods, and health, Springer, 2011, pp. 21–33.
S. Winter, C. Freksa, Approaching the notion of place by contrast, Journal of
Spatial Information Science 2012 (2012) 31–50.
S. Scheider, K. Janowicz, Place reference systems, Applied Ontology 9 (2014)
97–127.
M. F. Goodchild, Space, place and health, Annals of GIS 21 (2015) 97–100.
G. McKenzie, K. Janowicz, S. Gao, J.-A. Yang, Y. Hu, POI pulse:
A multi-granular, semantic signature–based information observatory for
the interactive visualization of big geosocial data, Cartographica: The
International Journal for Geographic Information and Geovisualization 50
(2015) 71–85.
S. Gao, L. Li, W. Li, K. Janowicz, Y. Zhang, Constructing gazetteers from
volunteered big geo-data based on hadoop, Computers, Environment and
Urban Systems 61 (2017a) 172–186.
S. Gao, K. Janowicz, D. R. Montello, Y. Hu, J.-A. Yang, G. McKenzie, Y. Ju,
L. Gong, B. Adams, B. Yan, A data-synthesis-driven method for detecting
and extracting vague cognitive regions, International Journal of Geographical
Information Science 31 (2017b) 1245–1271.
T. Blaschke, H. Merschdorf, P. Cabrera-Barona, S. Gao, E. Papadakis,
A. Kovacs-Győri, Place versus space: From points, lines and polygons in
gis to place-based representations reflecting language and culture, ISPRS
International Journal of Geo-Information 7 (2018) 452.
F. Zhang, D. Zhang, Y. Liu, H. Lin, Representing place locales using scene
elements, Computers, Environment and Urban Systems 71 (2018) 153 – 164.
R. S. Purves, S. Winter, W. Kuhn, Places in information science, Journal of
the Association for Information Science and Technology (2019).
X. Wu, J. Wang, L. Shi, Y. Gao, Y. Liu, A fuzzy formal concept analysis-based
approach to uncovering spatial hierarchies among vague places extracted
from user-generated data, International Journal of Geographical Information
Science 33 (2019) 991–1016.
J. Agnew, Space and place, Handbook of geographical knowledge 2011 (2011)
316–331.
T. Jordan, M. Raubal, B. Gartrell, M. Egenhofer, An affordance-based model
of place in gis, in: 8th Int. Symposium on Spatial Data Handling, SDH,
volume 98, pp. 98–109.
P. Kabachnik, Nomads and mobile places: Disentangling place, space and
mobility, Identities 19 (2012) 210–228.
H. Merschdorf, T. Blaschke, Revisiting the role of place in geographic
information science, ISPRS International Journal of Geo-Information 7 (2018)
364.
A. Wierzbicka, Human emotions: universal or culture-specific?, American
anthropologist 88 (1986) 584–594.
C. E. Izard, Human emotions, Springer Science & Business Media, 2013.
J. Davidson, C. Milligan, Embodying emotion sensing space: introducing
emotional geographies, 2004.
E. O. Wilson, Sociobiology (1980) and biophilia: The human bond to other
species, 1984.
C. A. Capaldi, R. L. Dopko, J. M. Zelenski, The relationship between nature
connectedness and happiness: a meta-analysis, Frontiers in psychology 5
(2014) 976.
B. Mesquita, H. R. Markus, Culture and emotion, in: Feelings and emotions:
The Amsterdam symposium, Cambridge University Press, p. 341.
L. Grossman, Man-environment relationships in anthropology and geography,
Annals of the Association of American Geographers 67 (1977) 126–144.
P. J. Rentfrow, M. Jokela, Geographical psychology: The spatial organization
of psychological phenomena, Current Directions in Psychological Science 25
(2016) 393–398.
M. Smith, L. Bondi, Emotion, place and culture, Routledge, 2016.
S. A. Golder, M. W. Macy, Diurnal and seasonal mood vary with work, sleep,
and daylength across diverse cultures, Science 333 (2011) 1878–1881.
Y. Liu, X. Liu, S. Gao, L. Gong, C. Kang, Y. Zhi, G. Chi, L. Shi, Social sensing:
A new approach to understanding our socioeconomic environments, Annals
of the Association of American Geographers 105 (2015) 512–530.
X. Ye, Q. Huang, W. Li, Integrating big social data, computing and modeling
for spatial social science, Cartography and Geographic Information Science
43 (2016) 377–378.
K. Janowicz, G. McKenzie, Y. Hu, R. Zhu, S. Gao, Using semantic signatures
for social sensing in urban environments, in: Mobility Patterns, Big Data and
Transport Analytics, Elsevier, 2019, pp. 31–54.
R. W. Picard, et al., Affective computing (1995).
P. O'Connor, User-generated content and travel: A case study on tripadvisor.com,
Information and communication technologies in tourism 2008 (2008) 47–58.
M. F. Goodchild, Citizens as sensors: the world of volunteered geography,
GeoJournal 69 (2007) 211–221.
S. Elwood, K. Mitchell, Technology, memory, and collective knowing, 2015.
B. Schuller, B. Vlasenko, F. Eyben, G. Rigoll, A. Wendemuth, Acoustic emotion
recognition: A benchmark comparison of performances, in: Automatic Speech
Recognition & Understanding, 2009. ASRU 2009. IEEE Workshop on, IEEE,
pp. 552–557.
P. Ekman, Facial expression and emotion., American psychologist 48 (1993)
384.
J. Bollen, H. Mao, A. Pepe, Modeling public mood and emotion: Twitter
sentiment and socio-economic phenomena., Icwsm 11 (2011) 450–453.
L. Mitchell, M. R. Frank, K. D. Harris, P. S. Dodds, C. M. Danforth,
The geography of happiness: Connecting twitter sentiment and expression,
demographics, and objective characteristics of place, PloS one 8 (2013)
e64417.
C. Strapparava, A. Valitutti, et al., Wordnet affect: an affective extension of
wordnet., in: Lrec, volume 4, Citeseer, pp. 1083–1086.
E. Cambria, C. Havasi, A. Hussain, Senticnet 2: A semantic and affective
resource for opinion mining and sentiment analysis., in: FLAIRS conference,
pp. 202–207.
N. Leiper, Tourist attraction systems, Annals of tourism research 17 (1990)
367–384.
A. A. Lew, A framework of tourist attraction research, Annals of tourism
research 14 (1987) 553–575.
C. B. Jones, R. S. Purves, P. D. Clough, H. Joho, Modelling vague places with
knowledge from the web, International Journal of Geographical Information
Science 22 (2008) 1045–1065.
C. Ashley, P. De Brine, A. Lehr, H. Wilde, The role of the tourism sector in
expanding economic opportunity, John F. Kennedy School of Government,
Harvard University Cambridge, 2007.
T. Bieger, C. Laesser, Information sources for travel decisions: Toward a source
process model, Journal of Travel Research 42 (2004) 357–371.
X. Sun, Z. Huang, X. Peng, Y. Chen, Y. Liu, Building a model-based
personalised recommendation approach for tourist attractions from geotagged
social media data, International Journal of Digital Earth (2018) 1–18.
B. Amelung, S. Nicholls, D. Viner, Implications of global climate change for
tourism flows and seasonality, Journal of Travel research 45 (2007) 285–296.
I. Bojic, A. Belyi, C. Ratti, S. Sobolevsky, Scaling of foreign attractiveness for
countries and states, Applied Geography 73 (2016) 47–52.
K.-S. Chon, Tourism destination image modification process: Marketing
implications, Tourism management 12 (1991) 68–72.
P. E. Ekman, R. J. Davidson, The nature of emotion: Fundamental questions.,
Oxford University Press, 1994.
M. Eimer, A. Holmes, F. P. McGlone, The role of spatial attention in the
processing of facial expression: an erp study of rapid brain responses to six
basic emotions, Cognitive, Affective, & Behavioral Neuroscience 3 (2003)
97–110.
C. E. Izard, Basic emotions, natural kinds, emotion schemas, and a new
paradigm, Perspectives on psychological science 2 (2007) 260–280.
B. Pang, L. Lee, et al., Opinion mining and sentiment analysis, Foundations
and Trends in Information Retrieval 2 (2008) 1–135.
Z. Zeng, M. Pantic, G. I. Roisman, T. S. Huang, A survey of affect recognition
methods: Audio, visual, and spontaneous expressions, IEEE transactions on
pattern analysis and machine intelligence 31 (2009) 39–58.
M. G. Berman, E. Kross, K. M. Krpan, M. K. Askren, A. Burson, P. J. Deldin,
S. Kaplan, L. Sherdell, I. H. Gotlib, J. Jonides, Interacting with nature
improves cognition and affect for individuals with depression, Journal of
affective disorders 140 (2012) 300–305.
T. Svoray, M. Dorman, G. Shahar, I. Kloog, Demonstrating the effect of
exposure to nature on happy facial expressions via flickr data: Advantages of
non-intrusive social network data analyses and geoinformatics methodologies,
Journal of Environmental Psychology 58 (2018) 93–100.
C. Darwin, P. Prodger, The expression of the emotions in man and animals,
Oxford University Press, USA, 1998.
C. Lisetti, Affective computing, Pattern Analysis & Applications 1 (1998)
71–73.
Y. Hu, C. Deng, Z. Zhou, A semantic and sentiment analysis on online
neighborhood reviews for understanding the perceptions of people toward
their living environment, Annals of the Association of American Geographers
(2019).
S. Zheng, J. Wang, C. Sun, X. Zhang, M. E. Kahn, Air pollution lowers chinese
urbanites' expressed happiness on social media, Nature Human Behaviour
(2019) 1.
P. M. Niedenthal, M. Rychlowska, A. Wood, F. Zhao, Heterogeneity of long-
history migration predicts smiling, laughter and positive emotion across the
globe and within the united states, PloS one 13 (2018) e0197651.
R. F. Baumeister, K. D. Vohs, D. C. Funder, Psychology as the science of
self-reports and finger movements: Whatever happened to actual behavior?,
Perspectives on Psychological Science 2 (2007) 396–403.
A. Ballatore, B. Adams, Extracting place emotions from travel blogs, in:
Proceedings of AGILE, volume 2015, pp. 1–5.
K. Z. Bertrand, M. Bialik, K. Virdee, A. Gros, Y. Bar-Yam, Sentiment in
new york city: A high resolution spatial and temporal view, arXiv preprint
arXiv:1308.5010 (2013).
F. Zhen, J. Tang, Y. Chen, Spatial distribution characteristics of residents
emotions based on sina weibo big data: A case study of nanjing, in: Big Data
Support of Urban Planning and Management, Springer, 2018, pp. 43–62.
N. V. Coleman, P. Williams, Feeling like my self: Emotion profiles and social
identity, Journal of Consumer Research 40 (2013) 203–222.
S. Shaheen, W. El-Hajj, H. Hajj, S. Elbassuoni, Emotion recognition from
text based on automatically generated rules, in: 2014 IEEE International
Conference on Data Mining Workshop, IEEE, pp. 383–392.
F. Zhang, B. Zhou, L. Liu, Y. Liu, H. H. Fung, H. Lin, C. Ratti, Measuring
human perceptions of a large-scale urban region using machine learning,
Landscape and Urban Planning 180 (2018) 148–160.
Z. Yu, C. Zhang, Image based static facial expression recognition with multiple
deep network learning, in: Proceedings of the 2015 ACM on International
Conference on Multimodal Interaction, ACM, pp. 435–442.
M. Wang, W. Deng, Deep face recognition: a survey, arXiv preprint
arXiv:1804.06655 (2018).
R. A. Calvo, S. D’Mello, Affect detection: An interdisciplinary review of models,
methods, and their applications, IEEE Transactions on affective computing
1 (2010) 18–37.
R. W. Levenson, P. Ekman, W. V. Friesen, Voluntary facial action generates
emotion-specific autonomic nervous system activity, Psychophysiology 27
(1990) 363–384.
D. Matsumoto, Cultural influences on facial expressions of emotion, Southern
Journal of Communication 56 (1991) 128–137.
J. F. Cohn, Foundations of human computing: Facial expression and emotion,
in: Artifical Intelligence for Human Computing, Springer, 2007, pp. 1–16.
P. Ekman, D. Keltner, Universal facial expressions of emotion, California mental
health research digest 8 (1970) 151–158.
S. Preuschoft, Primate faces and facial expressions, Social Research (2000)
245–271.
L. A. Parr, B. M. Waller, Understanding chimpanzee facial expression:
insights into the evolution of communication, Social Cognitive and Affective
Neuroscience 1 (2006) 221–228.
Y. Kang, X. Zeng, Z. Zhang, Y. Wang, T. Fei, Who are happier? spatio-
temporal analysis of worldwide human emotion based on geo-crowdsourcing
faces, in: 2018 Ubiquitous Positioning, Indoor Navigation and Location-Based
Services (UPINLBS), IEEE, pp. 1–8.
H. Berenbaum, A. Rotter, The relationship between spontaneous facial
expressions of emotion and voluntary control of facial muscles, Journal of
Nonverbal Behavior 16 (1992) 179–190.
H. Ding, S. K. Zhou, R. Chellappa, Facenet2expnet: Regularizing a deep face
recognition net for expression recognition, in: Automatic Face & Gesture
Recognition (FG 2017), 2017 12th IEEE International Conference on, IEEE,
pp. 118–126.
B.-K. Kim, J. Roh, S.-Y. Dong, S.-Y. Lee, Hierarchical committee of deep
convolutional neural networks for robust facial expression recognition, Journal
on Multimodal User Interfaces 10 (2016) 173–189.
Y. Kang, J. Wang, Y. Wang, S. Angsuesser, T. Fei, Mapping the sensitivity of
the public emotion to the movement of stock market value: A case study of
manhattan., International Archives of the Photogrammetry, Remote Sensing
& Spatial Information Sciences 42 (2017).
S. Abdullah, E. L. Murnane, J. M. Costa, T. Choudhury, Collective smile:
Measuring societal happiness from geolocated images, in: Proceedings of the
18th ACM Conference on Computer Supported Cooperative Work & Social
Computing, ACM, pp. 361–374.
V. K. Singh, A. Atrey, S. Hegde, Do individuals smile more in diverse social
company?: Studying smiles and diversity via social media photos, in:
Proceedings of the 2017 ACM on Multimedia Conference, ACM, pp. 1818–
1827.
L. Li, M. F. Goodchild, Constructing places from spatial footprints, in:
Proceedings of the 1st ACM SIGSPATIAL international workshop on
crowdsourced and volunteered geographic information, ACM, pp. 15–21.
Y. Hu, S. Gao, K. Janowicz, B. Yu, W. Li, S. Prasad, Extracting and
understanding urban areas of interest using geotagged photos, Computers,
Environment and Urban Systems 54 (2015) 240–254.
M. F. Goodchild, L. L. Hill, Introduction to digital gazetteer research,
International Journal of Geographical Information Science 22 (2008) 1039–
1044.
A. M. Cox, Flickr: a case study of web 2.0, in: Aslib Proceedings, volume 60,
Emerald Group Publishing Limited, pp. 493–516.
H. Couclelis, Location, place, region, and space, Geography’s inner worlds 2
(1992) 15–233.
M. R. Curry, The work in the world: geographical practice and the written
word, U of Minnesota Press, 1996.
P. A. Burrough, A. Frank, Geographic objects with indeterminate boundaries,
volume 2, CRC Press, 1996.
D. R. Montello, A. Friedman, D. W. Phillips, Vague cognitive regions in
geography and geographic information science, International Journal of
Geographical Information Science 28 (2014) 1802–1820.
R. Feick, C. Robertson, A multi-scale approach to exploring urban places in
geotagged photographs, Computers, Environment and Urban Systems 53
(2015) 96–109.
M. Ester, H.-P. Kriegel, J. Sander, X. Xu, et al., A density-based algorithm for
discovering clusters in large spatial databases with noise., in: Kdd, volume 96,
pp. 226–231.
G. Mai, K. Janowicz, Y. Hu, S. Gao, Adcn: An anisotropic density-
based clustering algorithm for discovering spatial point patterns with noise,
Transactions in GIS 22 (2018) 348–369.
X. Liu, Q. Huang, S. Gao, Exploring the uncertainty of activity zone detection
using digital footprints with multi-scaled dbscan, International Journal of
Geographical Information Science (2019) 1–28.
R. L. Graham, An efficient algorithm for determining the convex hull of a finite
planar set, Information Processing Letters 1 (1972) 132–133.
C. B. Barber, D. P. Dobkin, H. Huhdanpaa, The quickhull algorithm for convex
hulls, ACM Transactions on Mathematical Software (TOMS) 22 (1996) 469–
483.
B. Yu, S. Shu, H. Liu, W. Song, J. Wu, L. Wang, Z. Chen, Object-based
spatial cluster analysis of urban landscape pattern using nighttime light
satellite images: A case study of china, International Journal of Geographical
Information Science 28 (2014) 2328–2355.
J. Whitehill, G. Littlewort, I. Fasel, M. Bartlett, J. Movellan, Toward
practical smile detection, IEEE transactions on pattern analysis and machine
intelligence 31 (2009) 2106–2111.
M. G. Frank, P. Ekman, Not all smiles are created equal: The differences
between enjoyment and nonenjoyment smiles, Humor-International Journal
of Humor Research 6 (1993) 9–26.
O. Wilhelm, A. Hildebrandt, K. Manske, A. Schacht, W. Sommer, Test battery
for measuring the perception and recognition of facial expressions of emotion,
Frontiers in psychology 5 (2014) 404.
B. Efron, Bootstrap methods: another look at the jackknife, in: Breakthroughs
in statistics, Springer, 1992, pp. 569–593.
T. J. DiCiccio, B. Efron, Bootstrap confidence intervals, Statistical science
(1996) 189–212.
J. Benesty, J. Chen, Y. Huang, I. Cohen, Pearson correlation coefficient, in:
Noise reduction in speech processing, Springer, 2009, pp. 1–4.
S. Wearing, D. Stevenson, T. Young, Tourist cultures: Identity, place and the
traveller, Sage, 2009.
M. White, A. Smith, K. Humphryes, S. Pahl, D. Snelling, M. Depledge, Blue
space: The importance of water for preference, affect, and restorativeness
ratings of natural and built scenes, Journal of Environmental Psychology 30
(2010) 482–493.
J. Thompson Coon, K. Boddy, K. Stein, R. Whear, J. Barton, M. H. Depledge,
Does participating in physical activity in outdoor natural environments have a
greater effect on physical and mental wellbeing than physical activity indoors?
a systematic review, Environmental science & technology 45 (2011) 1761–
1772.
J. Maas, R. A. Verheij, S. de Vries, P. Spreeuwenberg, F. G. Schellevis, P. P.
Groenewegen, Morbidity is related to a green living environment, Journal of
Epidemiology & Community Health (2009).
C. W. Thompson, J. Roe, P. Aspinall, R. Mitchell, A. Clow, D. Miller, More
green space is linked to less stress in deprived communities: Evidence from
salivary cortisol patterns, Landscape and urban planning 105 (2012) 221–229.
S. N. Goward, B. Markham, D. G. Dye, W. Dulaney, J. Yang, Normalized
difference vegetation index measurements from the advanced very high
resolution radiometer, Remote sensing of environment 35 (1991) 257–277.
J. J. Wooller, M. Rogerson, J. Barton, D. Micklewright, V. Gladwell, Can
simulated green exercise improve recovery from acute mental stress?, Frontiers
in Psychology 9 (2018).
B. Adams, K. Janowicz, On the geo-indicativeness of non-georeferenced text,
in: Sixth International AAAI Conference on Weblogs and Social Media, pp.
375–378.
B. Adams, G. McKenzie, Inferring thematic places from spatially referenced
natural language descriptions, in: Crowdsourcing geographic knowledge,
Springer, 2013, pp. 201–221.
S. Kaplan, The restorative benefits of nature: Toward an integrative framework,
Journal of environmental psychology 15 (1995) 169–182.
D. E. Bowler, L. M. Buyung-Ali, T. M. Knight, A. S. Pullin, A systematic
review of evidence for the added benefits to health of exposure to natural
environments, BMC public health 10 (2010) 456.
A. Junot, Y. Paquet, C. Martin-Krumm, Passion for outdoor activities and
environmental behaviors: A look at emotions related to passionate activities,
Journal of Environmental Psychology 53 (2017) 177–184.
L. C. Degrossi, J. Porto de Albuquerque, R. d. Santos Rocha, A. Zipf, A
taxonomy of quality assessment methods for volunteered and crowdsourced
geographic information, Transactions in GIS 22 (2018) 542–560.
M. F. Goodchild, L. Li, Assuring the quality of volunteered geographic
information, Spatial statistics 1 (2012) 110–120.
J. C. García-Palomares, J. Gutiérrez, C. Mínguez, Identification of tourist
hot spots based on social networks: A comparative analysis of european
metropolises using photo-sharing services and GIS, Applied Geography 63
(2015) 408–417.
H. Senaratne, A. Mobasheri, A. L. Ali, C. Capineri, M. Haklay, A
review of volunteered geographic information quality assessment methods,
International Journal of Geographical Information Science 31 (2017) 139–167.
L. Jolivet, A.-M. Olteanu-Raimond, Crowd and community sourced data quality
assessment, in: International Cartographic Conference, Springer, pp. 47–60.
P. S. Dodds, K. D. Harris, I. M. Kloumann, C. A. Bliss, C. M. Danforth,
Temporal patterns of happiness and information in a global social network:
Hedonometrics and twitter, PloS one 6 (2011) e26752.
B. Thomee, D. A. Shamma, G. Friedland, B. Elizalde, K. Ni, D. Poland,
D. Borth, L.-J. Li, Yfcc100m: The new data in multimedia research, arXiv
preprint arXiv:1503.01817 (2015).
E. C. Fieller, H. O. Hartley, E. S. Pearson, Tests for rank correlation coefficients.
I, Biometrika 44 (1957) 470–481.