Me, Myself and My Killfie:
Characterizing and Preventing Selfie Deaths
Hemank Lamba1, Varun Bharadhwaj3, Mayank Vachher2,
Divyansh Agarwal2, Megha Arora1, Ponnurangam Kumaraguru2
1Carnegie Mellon University, USA
2Indraprastha Institute of Information Technology, Delhi, India
3National Institute of Technology, Tiruchirappalli
Over the past couple of years, clicking and posting selfies has
become a popular trend. However, since March 2014, 127 peo-
ple have died and many have been injured while trying to click
a selfie. Researchers have studied selfies for understanding the
psychology of the authors, and understanding its effect on social
media platforms. In this work, we perform a comprehensive analysis of selfie-related casualties and infer the various reasons behind these deaths. Using inferences from the incidents and our understanding of the contributing factors, we create a system to make people more aware of the dangerous situations in which such selfies are taken. We use a combination of text-based, image-based, and location-based features to classify a particular selfie as dangerous or not. Our method, run on 3,155 annotated selfies collected from Twitter, achieved 73% accuracy. Individually, the image-based features were the most informative for the prediction task, while the combination of image-based and location-based features resulted in the best accuracy. We have made our code and dataset available at
With the rise in the amount and type of content being posted on
social media, various trends have emerged. In the past, social me-
dia trends like memes [19, 27, 37], social media advertising [30], firestorms [24], crisis event reporting [34, 35], and more have
been extensively analyzed. Another trend that has emerged over
social media in the past few years is of clicking and uploading self-
ies. According to the Oxford dictionary, a selfie is defined as a photograph that one has taken of oneself, typically taken with a smartphone or webcam and shared via social media [2]. A selfie can not
only be seen as a photographic object that initiates the transmission
of the human feeling in the form of a relationship between the photographer and the camera, but also as a gesture that can be sent via social media to a broader population [36].

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from
WWW '17 April 03–07, 2017, Perth, Australia
2016 ACM. ISBN 123-4567-24-567/08/06 . . . $15.00
DOI: 10.475/123_4

Google estimated
that a staggering 24 billion selfies were uploaded to Google Photos
in 2015 [1]. The selfie trend is especially popular with millennials (ages 18 to 33): the Pew Research Center found that around 55% of millennials have posted a "selfie" on a social media service [6]. The popularity of the selfie trend is so massive that "selfie" was declared the word of the year in 2013 by the Oxford Dictionary [9]. The virality of the
selfie culture has also been known to cause service interruptions on
popular social media platforms. For instance, the selfie taken by
Ellen Degeneres, a popular television host, at the Academy Awards
brought down the Twitter website due to its immense popularity [3]. Selfies have proved instrumental in revolutionary movements [14], and have also been known to help election candidates increase their popularity [12]. Many researchers have studied selfies: for understanding the psychological attributes of selfie authors [23, 33], investigating the effect of selfies on social protests [14], understanding the effect of posting selfies on their authors [36], studying dangerous incidents and deaths related to selfies [13, 20, 39], and using computer vision methods to interpret whether a given image is a selfie or not [15].
Clicking selfies has become a symbol of self-expression and of-
ten people portray their adventurous side by uploading crazy self-
ies [7]. This has proved to be dangerous [13, 20, 39]. Keeping in mind the hazardous implications of taking selfies at dangerous locations, Russian authorities released public posters indicating the dangers of taking selfies [11]. Similarly, the Mumbai police recently classified 16 zones across Mumbai as no-selfie zones [8].
Through the process of data collection, we found that 127 people were killed between March 2014 and September 2016 while attempting to take selfies. From 15 casualties in 2014 and 39 in 2015, the death toll due to selfies reached 73 in 2016 by September. It has been reported that the number of selfie deaths in 2015 was greater than the number of deaths due to shark attacks [5]. Some of the selfies that led to casualties are shown in Figure 1. Given the influence
of selfies and the significant rise in the number of deaths and in-
juries reported when users are taking selfies, it is important to study
these incidents in detail and move towards developing a technology
which can help reduce the number of selfie casualties.
In this paper, we characterize the demographics and analyze rea-
sons behind selfie deaths; based on the obtained insights, we pro-
pose features which can differentiate potentially dangerous selfie
images from the non-dangerous ones. Our methodology is briefly
explained in Figure 2. Specifically, the major contributions of the
paper are as follows:
arXiv:1611.01911v1 [cs.SI] 7 Nov 2016
Figure 1: Left: Selfie taken by a group of individuals shortly before they drowned in a lake. Right: Photograph of a girl taking a selfie on train tracks immediately before a train hit her.
- Data Characterization: We conduct a thorough analysis of selfie casualties and provide insights about all the previous fatal selfie-related incidents.
- Feature Identification: We propose features that are easily extractable from social media data and learn signals which determine whether a particular selfie is dangerous.
- Discriminative Model: We present a model that, based on the proposed features, can differentiate between dangerous and non-dangerous selfies.
- Real-World Data: We test our approach on a real-world dataset collected from a popular social media website. We also test the efficacy of our approach in the absence of certain features, a situation which is possible while working with such real datasets.
Furthermore, we believe our contributions could lead to generation
of tools or treatments that can have a significant impact on reducing
the number of selfie deaths.
Reproducibility: A more detailed analysis of the selfie deaths is shown on our web page,1 and our code and dataset are also available for download.
The trend and culture of posting selfies on social media have
been investigated widely over the past few years. The popularity
of selfies being posted on online social media has drawn a lot of
researchers from different fields to study the various aspects of the
selfie trend. We present the relevant work from the major fields in this section.
The impact of selfies: Brager et al. studied the role that a particular selfie played in a revolutionary movement [14].
The authors specifically analyzed the death of a young teenager in Lebanon, who died moments after taking a selfie near a golden SUV that blew up. His death and that specific selfie stirred the Western news media and spectators, galvanizing the #NotAMartyr movement over the Internet. The authors argued that the practice of selfie-taking made the young boy's story legible as a subject of grievance for the Western social media audience. Porch et al. analyzed how the selfie trend has affected women's self-esteem, body
esteem, physical appearance comparison score, and perception of
self [32]. Baishya et al. found that selfies posted by a prime-ministerial candidate in the Indian general elections contributed significantly to his victory [12]. Lim et al. suggested that insights into the selfie
phenomenon can be understood from socio-historical, technologi-
cal, social media, marketing, and ethical perspectives [28].
Psychology Studies: Qiu et al. analyzed the correlations between participants' selfies and their personalities according to the Big Five personality test [33]. The authors used signals such as camera height, lip position, and the portrayed emotion to make predictions about emotional positivity, openness, neuroticism, and conscientiousness. Li et al. proposed that people taking selfies have narcissistic tendencies and that selfie-takers use selfies as a form of self-identification and expression. The role of selfies in turning the selfie-taker into a journalist, who posts images on social media after witnessing events, has also been analyzed [22]. Senft et al. analyzed the role that selfies play in affecting online users, further showing how the selfie as a medium can have a narcissistic or negative effect on people [36].
Dangers of Selfie: An important theme directly related to our paper is work on the dangers that the trend of selfie-taking puts a selfie-taker in. Lakshmi et al. explain how the number of likes, comments, and shares that youths get for their selfies is a form of social currency for them; the desire for more of this social currency prompts them to go to extreme lengths [23]. Flaherty et al. [17]
and Bhogesha et al. [13] talk about how selfies have been a risk
during international travel. Howes et al. analyzed selfie trends as a cultural practice in the contemporary world [20], particularly the case of spectators clicking selfies at cycling events: the spectators wanted to capture the moment but ended up obstructing the path of the cyclists, leading to crashes. The work of Subrahmanyam et al., which also provides statistical data about the number of deaths and injuries, is the closest to ours in discussing the dangers of taking a selfie [39]. A noble initiative #selfietodiefor2
number of deaths and injuries. A noble initiative #selfietodiefor2
has been posting about the dangers of taking a selfie in a risky sit-
uation. They use Twitter handle @selfietodiefor for sending out
awareness tweets and news stories related to selfie deaths.
Besides all the above-mentioned areas, researchers have also tried
to distinguish selfies from other images by use of automated meth-
ods [15]. A project called Selfie City has been investigating the style
of selfies in five cities across the world [10]. Using the dataset col-
lected, they explored the age distribution, gender distribution, pose
distribution and moods in all of the selfies collected. Researchers
have also explored the use of nudging to alert a smartphone user about possible privacy leaks [41], a technique which can readily be adapted to warn users of the dangers of taking a selfie in their present location/situation.
In this work, we study the dangerous impacts of clicking a selfie.
Our work is the first to characterize all the selfie deaths that have occurred over the past couple of years. To date, no research has proposed features and methods for distinguishing dangerous from non-dangerous selfies posted on social media, which is what we set out to do in this work.
In our work, we define a selfie-related casualty as a death of an
individual or a group of people that could have been avoided had
the individual(s) not been taking a selfie. This may even involve
the unfortunate death of other people who died while saving or be-
ing present with people who were clicking a selfie in a dangerous
manner. To be able to better understand the reasons behind selfie
deaths, victims, and such incidents, we collected every news ar-
ticle reporting selfie deaths. We used a keyword based extensive
web searching mechanism to identify these articles [38]. Further,
we considered as credible sources only those articles hosted on websites with either a global Alexa rank below 5,000 or a country-specific Alexa rank below 1,000.

Figure 2: A brief overview of our approach. Tweets tagged with a geolocation are analyzed using text-, location- and image-based features, whereas tweets without a geolocation are analyzed using only text- and image-based features.

The earliest article reporting a selfie death that we were able
to collect was published in March 2014. Two annotators manually
annotated the articles to identify the country, the reason for death,
the number of people who died, and the location where the selfie
was being taken.
Country                         Number of Casualties
India                           76
Pakistan                         9
Russia                           6
Philippines, China               4
Spain                            3
Indonesia, Portugal, Peru,
Romania, Australia, Mexico,
South Africa, Italy, Serbia,
Chile, Nepal, Hong Kong

Table 1: Country-wise number of selfie casualties
Using our approach, we were able to find 127 selfie-related deaths since March 2014. These deaths involved 24 group incidents; the rest were individual incidents. By group incidents, we mean that multiple deaths were reported in a single incident. An example is an incident near Mangrul lake in the Kuhi district in India, where a group of 10 youths had gone boating on the lake; while they were trying to take a selfie, the boat tilted, and 7 people died. We count all such incidents as group incidents. Out of all the group incidents, 16 involved 2 individuals, 5 involved 3 people, 1 incident had 5 casualties, and there were 2 group incidents claiming the lives of 7 people each. By analyzing selfie deaths in terms of group and individual deaths, we can conclude that taking dangerous selfies not only puts the selfie-taker at risk but can also be hazardous to the people around them. Although it is known that women take more selfies than men [10], our incident analysis showed that men are more prone to taking dangerous selfies, accounting for roughly 75.5% of the casualties. Out of all the deaths, 41 victims were aged less than 20 years, 45 were between 20 and 24 years of age, and 17 victims were 30 years old or above. This is consistent with our earlier finding that the trend of taking selfies is particularly popular among millennials.
Studying the geographic trends of the selfie deaths, we observed that India accounted for 51.76% of the overall incidents, of which 87% were water-related casualties. In the USA, 3 deaths occurred while trying to click a selfie with a weapon, followed by Russia with 2 such casualties. This might be a consequence of the open gun laws in both countries. The distribution of incidents by country is shown in Table 1.
Figure 3: (a) Number of incidents and (b) number of deaths due to various reasons
We looked at all the articles in our database to figure out the most common factors/reasons behind selfie deaths. Overall, we were able to find 8 unique reasons behind the deaths. We found that the most common reason for selfie deaths was height-related; these incidents involve people falling off buildings or mountains while trying to take dangerous selfies. Figure 3 shows the number of casualties for the various reasons behind selfie deaths. From the plot, it can be observed that water-related causes led to more group incidents. There was also a considerable number of incidents where the selfie-taker was exposed to both height-related and water-related dangers, so we analyzed such incidents separately; twenty-seven individuals who died in 14 incidents qualified for this category. The second most common category was being hit by trains. We found that taking selfies on train tracks is a trend in itself, catering to the belief that posing on or next to train tracks with one's best friend is regarded as romantic and a sign of never-ending friendship.3
After analyzing the selfie deaths, we define a dangerous selfie as one which can potentially trigger any of the above-mentioned reasons for selfie deaths. For instance, a selfie being taken on the peak of a mountain is dangerous, as it exposes the selfie-taker to the risk of falling from a height. To be able to warn users about the perils of taking dangerous selfies, it is essential to have a solution that can distinguish between dangerous and non-dangerous selfies. Motivated by the reasons that we found for selfie deaths, we formulated features that provide enough differentiation between the 2 categories. In the following sections, we discuss in detail how we generated features for the different selfie-related risks and develop a classifier to identify selfies that are potentially dangerous.
We used Twitter for our data collection. Twitter is a popular social media website which allows access to the data posted by its users through APIs. Twitter provides an interface via its Streaming API to enable researchers and developers to collect data.4 The Streaming API is used to extract tweets in real time based on query parameters such as words in a tweet, the location from which the tweet is posted, and other attributes. The API provides a 1% sample of the entire dataset [31]. We collected tweets related to selfies using keywords such as #selfie, #dangerousselfie, #extremeselfie, #letmetakeaselfie, #selfieoftheday, and #drivingselfie. We collected about 138K unique tweets by 78K unique users. The descriptive statistics of the data are given in Table 2.
Total Tweets                                138,496
Total Users                                  78,236
Total Tweets with Images                     91,059
Total Tweets with geolocation                 9,444
Total Tweets with Text besides Hashtags     112,743
Time of first Tweet in our Dataset       Mon Aug 01
Time of last Tweet in our Dataset        Tue Sep 27

Table 2: Descriptive statistics of the dataset collected for selfies
Out of the 138,496 tweets collected, only 91,059 had images in them; we consider only those tweets for further analysis. However, it is not clear whether all of those images were actually selfies. To retain only the true selfie images, we built a classifier based on image features, which we explain below.
Preprocessing: We manually annotated 2,161 images to determine whether they were selfies or not. Out of the tagged images, we found that 1,307 (roughly 60%) were selfies, and the remaining 854 were not. Using the manual annotations as ground truth, we constructed a classifier to discriminate between selfies and non-selfies. The classifier was based on the transfer-learning model called DeCAF proposed by Donahue et al. [16]. The DeCAF model first trains a deep convolutional network in a fully supervised setting, and then features from this network are extracted and tested on generic vision tasks. The deep convolutional model is the one described in Szegedy et al. [40]. This convolutional model has been trained and tested on the task of classifying 1.2 million images from the ImageNet LSVRC-2010 contest into 1,000 classes, obtaining top-1 and top-5 error rates of 21.2% and 5.6%, respectively. As specified in the DeCAF framework, we use this trained model for the task of identifying whether an image is a selfie or not. This approach is useful because it saves the cost of annotating every image, and most deep convolutional models require enormous amounts of training data to train effectively from scratch. Therefore, by using DeCAF, we built on the generic features provided by the original convolutional neural network. We found that the algorithm gave 88.48% accuracy with 10-fold cross-validation.
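The second stage of such a transfer-learning pipeline is lightweight: the pretrained network acts as a fixed feature extractor, and only a small classifier is trained on top of its activations. A minimal sketch of that second stage, assuming feature vectors have already been extracted by the CNN (the paper does not specify the classifier or its hyperparameters; plain logistic regression here is purely illustrative):

```python
import numpy as np

def train_logistic(features, labels, lr=0.1, epochs=200):
    """Train a logistic-regression head on fixed (pre-extracted) feature vectors."""
    n, d = features.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        z = features @ w + b
        p = 1.0 / (1.0 + np.exp(-z))          # predicted P(selfie)
        w -= lr * (features.T @ (p - labels)) / n  # gradient step on weights
        b -= lr * np.mean(p - labels)              # gradient step on bias
    return w, b

def predict(features, w, b):
    """Return hard 0/1 labels (1 = selfie)."""
    return (features @ w + b > 0).astype(int)
```

With well-separated CNN features (as DeCAF-style features tend to be for coarse categories), even this linear head reaches high accuracy, which is the point of reusing the pretrained network.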
Using the model trained on the annotated dataset, we obtained labels for all of the non-annotated images. We found that out of the 90K images (from tweets containing or hyperlinking to images), 62K were actually selfies. This set of 62K tweets contained only 6,842 tweets that had a geolocation.
In this section, we discuss the features we use for our classifier to differentiate between dangerous and non-dangerous selfies. Based on the analysis of selfie casualties in Section 3, we design different features for every major selfie-related risk (see Figure 3). We analyze each of the possible causes and consider which features are feasible in terms of tractability and availability. We first review the location-based features.
Height Related Risks: From our dataset, we observed that 29
selfie deaths were because of falling from an elevated location. We
take this as an indication that taking selfies at an elevated location
is dangerous. Based on the location of the selfie, we want to gen-
erate features that tell us if an image has been taken at an elevated
location or not. To estimate the elevation of a location, we used
Google Elevation API.5
The elevation of a particular place alone is not informative enough to tell whether the location is actually dangerous. For example, if a city is at a higher altitude, that does not necessarily make it dangerous. However, sudden changes in the nearby terrain indicate a steep decrease in elevation, which does make the location dangerous. The Google Elevation API returns negative values for certain locations such as water bodies. We formulated the following features based on the elevation of the location:
- Elevation of the exact location of the selfie: This feature was not informative, as it captures only the elevation of the location, which does not necessarily imply a height-related risk. This was validated by the p-value of the two-sample Kolmogorov-Smirnov (KS) test being 0.12, which we can reject only at a 15% confidence level.
- Maximum elevation of the surrounding area: To get a sense of the area surrounding the exact location, we sample 10 locations within a 1-km radius and return the maximum elevation among them. We chose these values for the radius and the number of locations because they returned the lowest p-value when applying the two-sample KS test to the dangerous and non-dangerous selfie distributions.
- Difference in elevation of the surrounding area: We calculate this as the maximum difference between the elevation of the exact location and the sampled locations' elevations. This feature captures any sudden elevation drop that might exist in the surrounding area. For this feature, we sampled 5 locations in a 5-km radius, for the same reason as mentioned above.

Figure 4: CDF plots showing the difference in the distribution of height-related features for dangerous and non-dangerous images. Left: Maximum elevation in a 5-km radius with 5 sampled locations (p-value: 0.028). Center: Maximum difference in elevation between 10 points sampled in a 1-km radius and the elevation of the location (p-value: 7.09e-6). Right: Maximum elevation difference among 10 points sampled in a 1-km radius (p-value: 1.22e-9).
- Maximum elevation difference in the surrounding area: Taking the maximum difference between the highest and lowest elevations of the sampled points helped us capture the amount of elevation variation in the surrounding area.
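The elevation features above reduce to a little arithmetic over sampled points. A sketch, where `elevation(lat, lon)` is a stand-in for a Google Elevation API lookup (not called here), the sampling uses a naive flat-earth offset valid away from the poles, and the function names and defaults are ours, not the paper's exact implementation:

```python
import math
import random

EARTH_KM_PER_DEG = 111.0  # rough km per degree of latitude

def sample_points(lat, lon, radius_km, n, rng):
    """Sample n points uniformly at random in a disk around (lat, lon)."""
    points = []
    for _ in range(n):
        r = radius_km * math.sqrt(rng.random())   # sqrt => uniform over the disk area
        theta = rng.random() * 2 * math.pi
        dlat = (r * math.cos(theta)) / EARTH_KM_PER_DEG
        dlon = (r * math.sin(theta)) / (EARTH_KM_PER_DEG * math.cos(math.radians(lat)))
        points.append((lat + dlat, lon + dlon))
    return points

def elevation_features(lat, lon, elevation, rng, radius_km=1.0, n=10):
    """Return (elevation here, max nearby elevation,
    max |difference| from here, max spread among sampled points)."""
    here = elevation(lat, lon)
    elevs = [elevation(la, lo) for la, lo in sample_points(lat, lon, radius_km, n, rng)]
    return (here,
            max(elevs),
            max(abs(e - here) for e in elevs),
            max(elevs) - min(elevs))
```

On perfectly flat terrain the last three features collapse to the local elevation and zero, while a nearby cliff inflates both difference features, which is exactly the signal the KS tests above pick up.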
We did not work with other possible statistics such as the average
elevation or median elevation as those statistics try to capture the
center point or a single representative value of the distribution. We
are however interested in sudden elevation drops in the surrounding
area, which will lie on the extremes of the elevation distribution.
To evaluate the efficacy (i.e., the discriminative power) of the above-mentioned features, we plot the empirical cumulative distribution functions (CDFs) for height-related dangerous selfies and non-dangerous selfies, shown in Figure 4. We can see that for all 3 features, the empirical CDFs of dangerous and non-dangerous selfies are considerably different. The KS test returned p-values of 0.028 for the maximum elevation, 7.09e-6 for the difference between the maximum elevation and the elevation of our location, and 1.22e-9 for the maximum elevation difference.
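For reference, the two-sample KS statistic used throughout this section is simply the largest vertical gap between the two empirical CDFs. A minimal numpy sketch (in practice one would call scipy.stats.ks_2samp, which also supplies the p-value):

```python
import numpy as np

def ks_statistic(sample_a, sample_b):
    """Max absolute difference between the two empirical CDFs."""
    a = np.sort(np.asarray(sample_a, dtype=float))
    b = np.sort(np.asarray(sample_b, dtype=float))
    grid = np.concatenate([a, b])        # evaluate at every observed value
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

Identical samples give a statistic of 0, fully disjoint samples give 1; the p-values reported in the text quantify how surprising the observed gap would be if both classes shared one distribution.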
Water-Related Risks: Another prominent reason for selfie casualties that we infer from Figure 3 is water-related risk. After analyzing the water-related incidents, we found that people often took selfies while in a water body or in close proximity to one, and ended up drowning after losing their balance and falling into the water. To tackle water-related risks, we generate features based on the proximity of the selfie's location to a water body. Consider the selfie in Figure 5(a), which was taken in the middle of a water body. We mapped the exact location of the selfie to Google Maps and considered the 500 x 500-pixel image at zoom level 13 on Google Maps [4]; the image after this step looked like Figure 5(b). We then applied image segmentation to identify the contours of all the water bodies, shown in Figure 5(c). To infer whether a given location is in close proximity to a water body, we use the minimum distance from the location of the image to a water body as a feature. Since all the segmented images were maps with the same scale and zoom factor, this distance was treated as a distance in pixels. Proximity to a small water body such as a stream or a river might not make a selfie dangerous, so we also use the fraction of water pixels in the segmented image (Figure 5(c)) to further help distinguish between dangerous and non-dangerous selfies.
We can observe from Figure 6 that for both of the water features, the minimum distance to a water body and the fraction of water pixels in the segmented image, the distributions for water-related dangerous and non-dangerous selfies are considerably different. We used the two-sample KS test to statistically confirm our observations, obtaining p-values of 1.18e-19 (minimum distance to a water body) and 2.79e-19 (fraction of water pixels in the segmented image), indicating that we can safely reject the hypothesis that the two sets of features are generated from the same distribution.

Figure 5: Segmentation example: the different stages of processing to obtain the final segmented image distinguishing between water and land.
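Once the map tile around a selfie's location has been segmented, both water features are simple pixel operations. A sketch, assuming a binary mask (True = water pixel) whose center pixel corresponds to the selfie location; the function name and the center convention are our illustrative choices:

```python
import numpy as np

def water_features(water_mask):
    """Return (min pixel distance from tile center to water, fraction of water pixels)."""
    h, w = water_mask.shape
    fraction = float(water_mask.mean())       # share of tile covered by water
    ys, xs = np.nonzero(water_mask)
    if len(ys) == 0:
        return float("inf"), 0.0              # no water anywhere in the tile
    cy, cx = h // 2, w // 2                   # selfie location = tile center
    dists = np.hypot(ys - cy, xs - cx)        # Euclidean distance in pixels
    return float(dists.min()), fraction
```

Because all tiles share one scale and zoom factor (as noted above), pixel distances are comparable across selfies without converting to meters.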
Train/Railway-Related Risks: Besides water- and height-related risks, another common reason for selfie casualties is train-related risk, which accounted for 11 casualties. We used the Google Places API to determine whether there is a railway track or railway station close to the location of the selfie, and used the minimum distance between the location and the railway track as a feature. Though this feature alone is not sufficient to distinguish between dangerous and non-dangerous selfies, it still provides valuable information which, when combined with other features, proves helpful in the classification task.
Figure 6: CDF plots showing the difference between the dangerous and non-dangerous distributions for the water-related features. Left: Minimum distance to a water body. Right: Fraction of water pixels in the segmented image.

Driving/Road-Related Risks: It is challenging to account for driving-related risks in all possible contexts. The location of the selfie can provide information about how close a person is to a road, but location data alone is not sufficient to determine whether the selfie-taker was driving at the time of taking the selfie or was standing in the middle of a busy road to take it. However, we still expect the minimum distance from the location of the selfie to the nearest highway/road to be informative in determining the 'dangerousness' of the selfie when used in conjunction with other features.
For all the other reasons, such as weapons, animals, and electricity, it is difficult to find location-based insights, and thus impossible to build location-based features. Instead, we rely on other signals, based on the text accompanying the selfie and the content of the image, to derive features that can provide insights about these reasons. For example, the presence of a weapon or an animal can be easily inferred from the image content. Below, we discuss the text-based and image-based features.
Text-based Features: The content of the tweet can be a useful signal for indicating whether the image accompanying it is a dangerous selfie. Users tend to provide context for the image either directly in the tweet text or through hashtags; we use both to generate our text-based features. After removing URLs, tokenizing the tweet content, and processing emojis, we obtain our text input, over which we compute TF-IDF on the set of unigrams and bigrams. To further enrich the text feature space, we also convert the text into a lower-dimensional embedded vector obtained using doc2vec [26].
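As an illustration of the unigram/bigram TF-IDF step, here is a minimal from-scratch version; in practice a library vectorizer (e.g. scikit-learn's TfidfVectorizer) and gensim's doc2vec would be used, and the tokenization and weighting scheme here are deliberately simplified:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, joined with spaces."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def tfidf(docs):
    """docs: list of token lists. Returns one {term: weight} dict per document."""
    term_docs = [Counter(ngrams(d, 1) + ngrams(d, 2)) for d in docs]
    df = Counter()                       # document frequency per term
    for counts in term_docs:
        df.update(counts.keys())
    n_docs = len(docs)
    out = []
    for counts in term_docs:
        total = sum(counts.values())
        out.append({t: (c / total) * math.log(n_docs / df[t])
                    for t, c in counts.items()})
    return out
```

Terms that appear in every tweet (e.g. the hashtag used for collection) receive zero weight, while context words that distinguish risky scenes keep a positive weight, which is the behavior the classifier relies on.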
Image-based Features: Since an image could be dangerous for various reasons, we cannot simply apply a classifier to the raw pixels of the image: classifying an image as dangerous or not requires a deeper understanding of the context and the elements in the image. Therefore, we first extract the salient regions in the images and then generate captions for each of those regions. To extract informative regions and generate the captions, we used DenseCap [21]. DenseCap is a state-of-the-art deep-learning-based captioning technique for regions in an image; it comfortably outperforms other models, such as Full Image RNN and Region RNN, on both dense captioning and image retrieval. The average precision of DenseCap on the dense captioning task was 5.24, well above the 4.88 of its closest competitor. The architecture of DenseCap involves a fully convolutional layer, a fully convolutional localization layer for extracting regions of interest (ROIs) and their features, a recognition network for finding relevant ROIs, and a language model to generate captions for each ROI. An example of the output of DenseCap on a selfie from our dataset is shown in Figure 7.
We treat the generated captions as text describing the image in natural language. From this text, we compute natural-language features such as unigrams and bigrams to determine whether the content of the image is dangerous. We also convert the generated captions into a lower-dimensional vector, in the same fashion as for the text-based features. To empirically check the validity of our approach, we plotted the 2-dimensional t-SNE (t-distributed Stochastic Neighbor Embedding) [29] mapping of the embedded doc2vec vectors in Figure 8. In the plot, we can see that the triangles (dangerous selfies) are negative in the first vector component (X-axis), whereas the circles (non-dangerous selfies) are largely positive; one can imagine a line easily separating most of the dangerous and non-dangerous selfies. Our entire feature space is summarized in Table 3.

Figure 7: An example of DenseCap output on one of the images (left) from our dataset. We use the dense captions produced by DenseCap (right) to compute text-based features over them.
Figure 8: t-SNE scatter plot of doc2vec output of generated cap-
tions for 50 randomly chosen dangerous and non-dangerous selfies.
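The projection behind Figure 8 can be sketched in outline as follows; a minimal example assuming scikit-learn, with random stand-in vectors in place of the actual doc2vec caption embeddings.

```python
# Minimal sketch of the 2-D t-SNE projection used for Figure 8, assuming
# scikit-learn; random vectors stand in for the doc2vec embeddings.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
vecs = rng.normal(size=(50, 100))   # 50 selfies, 100-d doc2vec-like vectors

# perplexity must be smaller than the number of samples
proj = TSNE(n_components=2, perplexity=10, random_state=0).fit_transform(vecs)
```

Each row of `proj` is the 2-D coordinate of one selfie, which can then be scattered and colored by its dangerous/non-dangerous label.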
6.1 Manual Annotation
From the selfie data set described in Section 4, we sampled a ran-
dom set of 3,155 selfies with geolocation for creating an annotated
data set. We manually labeled the images to determine whether
they are dangerous or not. During annotation, we asked questions such
as: Is the depicted image dangerous? If yes, what is the possible reason
for it being dangerous? Did the text accompanying the image help in
classifying whether the image is dangerous? A screenshot of the tool is
shown in Figure 9.6

Feature Type      Feature
Location-based    Elevation of the location
                  Maximum elevation out of the sampled points
                  Difference between the maximum elevation of the
                  sampled points and the elevation of the location
                  Maximum elevation difference within the set of
                  sampled points
                  Minimum distance to a water body
                  Fraction of water pixels in the segmented image
                  Distance to railway tracks
                  Distance to a major roadway/highway
Image-based       TF-IDF of unigrams and bigrams on DenseCap captions
                  Doc2Vec representation of DenseCap captions
Text-based        TF-IDF of unigrams and bigrams on the Twitter text
                  Doc2Vec representation of the Twitter text
Table 3: Location-based, image-based and text-based features used for
classification of selfies.

We asked 8 annotators to annotate the set of 3,155 selfies. A common set
of 400 images was annotated by every annotator, and the remaining
selfies were divided equally among all the annotators.
The inter-annotator agreement on the common set of 400 selfies, measured
using Fleiss' kappa [18], was 0.74, which indicates substantial
agreement between the annotators [25]. The annotated dataset contained
396 dangerous and 2,676 non-dangerous selfies; annotators were unsure
about the remaining selfies in our dataset.
For the annotated images, we found that vehicle-related causes (e.g.,
taking a selfie in a car) were the most common reason for a selfie being
dangerous, followed by water-related risks. Statistics about the risks
that annotators perceived in the dangerous images are given in Table 4.
Annotators frequently found images to be dangerous in more than one
aspect; for such cases, we counted their labels for all the mentioned
risk types. One striking observation is that even though we did not find
any selfie casualties due to road-related incidents in our research,
road-related danger was identified as a potential risk by the annotators
in as many as 29 dangerous images (7%).
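The agreement statistic reported above can be computed with a short routine; below is a minimal sketch of Fleiss' kappa, assuming a count matrix of shape (items × categories) in which each row sums to the number of annotators.

```python
import numpy as np

def fleiss_kappa(counts):
    """counts[i, j]: number of annotators who put item i in category j."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts[0].sum()
    # per-item observed agreement
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # chance agreement from the marginal category proportions
    p_j = counts.sum(axis=0) / (n_items * n_raters)
    p_e = np.sum(p_j ** 2)
    return (p_bar - p_e) / (1 - p_e)
```

On perfectly agreeing annotations the routine returns 1; values in the 0.61–0.80 range are conventionally read as substantial agreement [25].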
Figure 9: Screenshot of the annotation tool. We asked the above
questions to the annotators based on the selfie image shown to them.
6 The annotation tool we used is available at http:
Reason                    Number of Dangerous Selfies
Vehicle Related           120
Water Related             118
Height Related            86
Height and Water Related  55
Road Related              29
Animal Related            16
Train Related             8
Weapons Related           4
Table 4: Reasons marked by annotators for a selfie being dangerous.
6.2 Classifier
Considering the annotations performed in the section above as ground
truth, we evaluate the performance of our classifier on the task of
classifying whether a selfie is dangerous or not. This classification
problem is highly unbalanced: we have only 396 (roughly 13%) dangerous
selfies compared to the remaining 2.6K non-dangerous ones. Therefore, we
use random under-sampling to reduce the majority class (non-dangerous)
so that the number of non-dangerous selfies is equal to the number of
dangerous selfies.
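Random under-sampling of the majority class can be sketched as below, assuming NumPy; `X` and `y` are toy placeholders for the feature matrix and labels, mimicking the class imbalance at a smaller scale.

```python
# Minimal sketch of random under-sampling of the majority (non-dangerous)
# class, assuming NumPy; X and y below are hypothetical toy data.
import numpy as np

rng = np.random.default_rng(0)

def undersample(X, y):
    """Randomly drop majority-class rows until both classes are equal in size."""
    pos = np.where(y == 1)[0]                      # dangerous (minority)
    neg = np.where(y == 0)[0]                      # non-dangerous (majority)
    keep = rng.choice(neg, size=len(pos), replace=False)
    idx = np.concatenate([pos, keep])
    rng.shuffle(idx)
    return X[idx], y[idx]

# Toy data mimicking the 396-vs-2,676 imbalance at a smaller scale.
y = np.array([1] * 4 + [0] * 20)
X = np.arange(24).reshape(24, 1)
X_bal, y_bal = undersample(X, y)
```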
We divide the process of experimentation into two broad parts:
6.2.1 Identifying Dangerous Selfies
Using the features generated, we try to predict whether a given selfie
is dangerous. As shown in Table 3, our feature space falls into 3
categories: text-based, image-based and location-based. To compare all
of the feature types, we build and test classifiers for every possible
combination of the features. For all our experiments, we perform 10-fold
cross validation. Furthermore, we use grid search with 3-fold cross
validation on the training set to find an ideal set of hyperparameters
for each classifier. We tested the performance of our method using 4
different classification algorithms: Random Forests, Nearest Neighbors,
SVM and Decision Trees. Each classifier was trained and tested on the
same dataset and with the same feature configuration.
Table 6 lists the accuracy obtained by using various classification
techniques over different combinations of our feature space.
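The evaluation protocol described above (an inner 3-fold grid search nested inside an outer 10-fold cross validation) could be sketched as follows, assuming scikit-learn; the random `X` and `y` are stand-ins for the real feature matrices, and the parameter grid is illustrative.

```python
# Minimal sketch of grid search nested inside 10-fold cross validation,
# assuming scikit-learn; X and y are hypothetical stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X = np.random.default_rng(0).normal(size=(120, 10))
y = np.random.default_rng(1).integers(0, 2, size=120)

# Inner 3-fold grid search picks hyperparameters; outer 10-fold CV
# estimates accuracy, mirroring the setup in the text.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [5, 20], "max_depth": [3, None]},
    cv=3,
)
scores = cross_val_score(grid, X, y, cv=10, scoring="accuracy")
```

Averaging `scores` over the 10 outer folds gives one cell of Table 6; repeating with SVM, Nearest Neighbors and Decision Trees over each feature combination fills out the rest.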
Insight 1: We observe that image-based features consistently perform
better than either the text-based or the location-based features. This
is because image-based features can capture risk types that
location-based features cannot, for example weapon-related or
animal-related risks. Another reason is that image-based features try to
contextualize and infer meaning directly from the image; in a certain
sense, this is equivalent to what our human annotators did when marking
selfies as dangerous by looking at them.
Insight 2: We applied 4 distinct machine learning classifiers: Random
Forests, SVM (Support Vector Machines), Decision Trees and Nearest
Neighbors. We noticed that Random Forests and SVMs consistently
performed best across all the feature configurations. Random Forest,
being an ensemble classifier, reduces variance without increasing bias
by training many individual decision trees on partitioned feature
subspaces. This also makes Random Forests robust to high-dimensional
feature spaces, which is ideal in our case given the high dimensionality
of our feature space.
Insight 3: It can be noticed that all 3 feature types combined give the
highest accuracy. However, certain users might decide not to share their
location or might not attach any text to their selfie, which makes it
more challenging for the machine learning algorithms to classify whether
a selfie is dangerous. We observe that each feature type still performs
decently in the absence of the others; the best feature type,
image-based features, alone achieves an accuracy of 73.6%.

             Water-Related Danger  Height-Related Danger  Vehicle/Road-Related Danger
Accuracy     0.851                 0.773                  0.705
Precision    0.873                 0.81                   0.738
Recall       0.851                 0.801                  0.714
F1-Score     0.857                 0.801                  0.721
Technique    Random Forest         Random Forest          SVM
Table 5: Performance of individual risk classifiers with 10-fold cross
validation, along with the technique which yielded these results.

                          SVM   Random Forest  Nearest Neighbors  Decision Tree
Image Only                0.72  0.73           0.55               0.67
Text Only                 0.61  0.51           0.51               0.53
Location Only             0.58  0.56           0.56               0.57
Image + Location          0.70  0.72           0.55               0.64
Text + Location           0.61  0.57           0.52               0.56
Text + Image              0.70  0.70           0.52               0.65
Text + Image + Location   0.68  0.73           0.54               0.65
Table 6: Average accuracy for 10-fold cross validation over different
classification techniques and different feature configurations.
6.2.2 Risk-Based Individual Classifier
Besides classifying whether a selfie is dangerous, we also wanted to
test how well we can predict that a particular selfie is dangerous due
to a particular reason. We used a methodology similar to the one in the
section above for all these experiments. Out of the 8 risks that were
marked by the annotators and also inferred by characterizing selfie
casualties, we developed classifiers for 3 categories: water-related,
height-related and vehicle/road-related. For the remaining categories,
the number of positive samples was insufficient to train a classifier
and, more importantly, the generalizability of a classifier trained on
such a low number of samples would be doubtful. For each task, we used
only those features which intuitively made sense for predicting the
given risk type; for example, when predicting water-related dangerous
selfies, it does not make sense to use height-based or vehicle-related
features. The feature space used for each risk type consists of
image-based, text-based and location-based features, with the
location-based features restricted to those relevant to the risk type.
To identify road-related dangerous selfies, we used the same
location-based features that we used for vehicle/driving-related risks.
We present the results for this experiment in Table 5, reporting only
the best configuration and the best classifier. The best feature set for
all 3 tasks (water-, height- and vehicle-related dangers) was the
combined space of all 3 feature types: image-based, text-based and
location-based. However, as mentioned earlier, the location-based
features differed for each task, as explained above.
Insight 4: For all three tasks (water, height and vehicle), we obtained
better accuracy and precision statistics than with the overall
classifier discussed in the previous subsection. This is largely because
we reduce the noise added by selfies that were dangerous for different
reasons and therefore had different distributions. Since most of the
feature space was tuned to find a specific class of danger, it was hard
for those features to classify dangerous selfies from other
distributions.
Insight 5: The highest accuracy was obtained on the water task. This
could be attributed to the design of the features: minimum distance to a
water body and fraction of water pixels were easy to compute and
unambiguously indicate whether a person is near or in a water body. On
manual investigation, we also found that DenseCap (the source of the
image-based features) was able to identify water bodies in the selfies
accurately. Moreover, the lack of ambiguity in labeling water risks also
helped.
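The "fraction of water pixels" feature mentioned above could be computed as in the sketch below, assuming a segmented map image in which water pixels carry a single known RGB color; the color value is a hypothetical placeholder, not the authors' exact pipeline.

```python
# Minimal sketch of the fraction-of-water-pixels feature, assuming a
# segmented map image where water pixels have a known RGB color.
import numpy as np

WATER_RGB = (170, 218, 255)  # assumed color for water in the map tiles

def water_fraction(segmented):
    """segmented: H x W x 3 uint8 array of the map around the geotag."""
    mask = np.all(segmented == WATER_RGB, axis=-1)  # True where pixel is water
    return mask.mean()                              # fraction in [0, 1]
```

A value near 1 indicates the geotag lies almost entirely over water, while a value near 0 indicates dry land around the selfie location.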
Conclusion
In this paper we put forth a novel characterization of the selfie
casualties that have occurred in the past, addressing the rising trend
of selfies and the dangers associated with careless selfie-taking
behaviour. Our work both helps in understanding the various reasons
behind selfie casualties and provides a potential solution to reduce
such deaths. We presented a way to classify whether a selfie image
posted on social media is dangerous or not, using various classes of
features (text-based, image-based and location-based) to represent
different risk types. Location-based features were customized to capture
common reasons pertaining to selfie deaths, such as water-related and
height-related risks. We used state-of-the-art deep
learning techniques such as DenseCap to get information about the
content of the image to determine the nature of the selfie. We also
tested the approach in the absence of one or more of the above-mentioned
feature types. We were able to identify dangerous selfies with an
accuracy of 73%. Further, we also investigated whether our feature space
can support classifiers that predict the specific reason for a selfie
being dangerous, and showed that we were able to identify water-related
and height-related dangerous selfies with satisfactory accuracy.
Our classifier results are based on the human annotations and on
features that we learned from the selfie casualties. There is scope for
improving accuracy by enlarging both the dataset and the annotated
subset. The proposed methodology can help users recognize dangerous
situations before taking a selfie. We hope to use the understanding from
this paper to build technology that can help users identify whether a
particular location is dangerous for taking selfies, and also provide
information about casualties that have happened there in the past. We
believe that this study can inspire and provide a foundation for
technologies that stop users from clicking dangerous selfies, thus
preventing more such casualties.
References
[1] 24 billion selfies uploaded to Google Photos in one year.
year-200-million.html. Accessed: 2016-05-30.
[2] Definition of selfie. Online.
[3] Ellen DeGeneres orchestrates the most famous selfie ever at the
Oscars.
[4] Google API zoom levels.
maps/documentation/static-maps/intro#Zoomlevels. Online.
[5] More people have died by taking selfies this year than by shark
attacks.
[6] More than half of millennials have shared a selfie.
than-half-of-millennials-have-shared-a-selfie/.
[7] Most dangerous selfies.
world-s-dangerous-selfies-meet-adventure-photographers-putting-lives-risk-perfect-self-portrait.html.
[8] Mumbai bans selfies after 19 people die.
[9] The Oxford Dictionaries word of the year 2013 is 'selfie'.
[10] Selfie city. Online.
[11] 'A selfie with a weapon kills': Russia launches campaign urging
photo safety. with-a-weapon-kills-russia-launches-safe-selfie-campaign.
[12] A. K. Baishya. #NaMo: The political work of the selfie in the 2014
Indian general elections. International Journal of Communication,
9:1686–1700, 2015.
[13] S. Bhogesha, J. R. John, and S. Tripathy. Death in a flash:
selfie and the lack of self-awareness. Journal of Travel
Medicine, 23(4):taw033, 2016.
[14] J. Brager. The selfie and the other: consuming viral tragedy
and social media (after) lives. International Journal of
communication, 9:1660–1671, 2015.
[15] D. M. Carmean and M. E. Morris. Selfie examinations?:
applying computer vision, hashtag scraping and sentiment
analysis to finding and interpreting selfies.
[16] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and
T. Darrell. DeCAF: A deep convolutional activation feature for generic
visual recognition. In ICML, 2014.
[17] G. T. Flaherty and J. Choi. The selfie phenomenon: reducing
the risk of harm while using smartphones during
international travel. Journal of travel medicine, 23(2), 2016.
[18] J. L. Fleiss. Measuring nominal scale agreement among
many raters. Psychological bulletin, 76(5):378, 1971.
[19] S. O. Gharan, F. Ronaghi, and Y. Wang. What memes say about the
news cycle. Technical report, Stanford University.
[20] M. Howes. Let me take a# selfie: An analysis of how cycling
should respond to the increasing threats posed by exuberant
spectators? Laws of the Game, 1(1):7, 2015.
[21] J. Johnson, A. Karpathy, and L. Fei-Fei. Densecap: Fully
convolutional localization networks for dense captioning. In
CVPR, 2016.
[22] M. Koliska and J. Roberts. Selfies| selfies: Witnessing and
participatory journalism with a point of view. International
Journal of Communication, 9:14, 2015.
[23] A. Lakshmi. The selfie culture: Narcissism or counter hegemony?
Journal of Communication and Media Studies (JCMS), 5:2278–4942, 2015.
[24] H. Lamba, M. M. Malik, and J. Pfeffer. A tempest in a
teacup? analyzing firestorms on twitter. In ASONAM, pages
17–24. IEEE, 2015.
[25] J. R. Landis and G. G. Koch. The measurement of observer
agreement for categorical data. Biometrics, 33, 1977.
[26] Q. Le and T. Mikolov. Distributed representations of
sentences and documents. In ICML, 2014.
[27] J. Leskovec, L. Backstrom, and J. Kleinberg. Memetracker: tracking
news phrases over the web, 2009.
[28] W. M. Lim and J. Schroeder. Understanding the selfie
phenomenon: current insights and future research directions.
European Journal of Marketing, 50(9/10), 2016.
[29] L. v. d. Maaten and G. Hinton. Visualizing data using t-sne.
JMLR, 9, 2008.
[30] R. Miller and N. Lammas. Social media and its implications
for viral marketing. AP Public Relations Journal, 2010.
[31] F. Morstatter, J. Pfeffer, H. Liu, and K. M. Carley. Is the sample
good enough? Comparing data from Twitter's streaming API with Twitter's
firehose. In ICWSM, 2013.
[32] T. C.-D. Porch. Society, Culture, and the Selfie: Analysis of
the Impact of the Selfie Practice on Women’s Body Image.
PhD thesis, Emory University, 2015.
[33] L. Qiu, J. Lu, S. Yang, W. Qu, and T. Zhu. What does your
selfie say about you? Computers in Human Behavior,
52:443–449, 2015.
[34] T. Sakaki, M. Okazaki, and Y. Matsuo. Earthquake shakes
twitter users: real-time event detection by social sensors. In
WWW, pages 851–860. ACM, 2010.
[35] T. Sakaki, F. Toriumi, and Y. Matsuo. Tweet trend analysis in
an emergency situation. In Proceedings of the Special
Workshop on Internet and Disasters. ACM, 2011.
[36] T. M. Senft and N. K. Baym. Selfies introduction ~ What does the
selfie say? Investigating a global phenomenon. International Journal of
Communication, 9, 2015.
[37] M. P. Simmons, L. A. Adamic, and E. Adar. Memes online: Extracted,
subtracted, injected, and recollected. In ICWSM.
[38] G. Stringhini, G. Wang, M. Egele, C. Kruegel, G. Vigna,
H. Zheng, and B. Y. Zhao. Follow the green: growth and
dynamics in twitter follower markets. In IMC, pages
163–176. ACM, 2013.
[39] B. Subrahmanyam, K. S. Rao, R. Sivakumar, and G. C.
Sekhar. Selfie related deaths perils of newer technologies.
Narayana Medical Journal, 5(1), 2016.
[40] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna.
Rethinking the inception architecture for computer vision.
CoRR, abs/1512.00567, 2015.
[41] Y. Wang, P. G. Leon, A. Acquisti, L. F. Cranor, A. Forget,
and N. M. Sadeh. A field trial of privacy nudges for
facebook. In CHI, pages 2367–2376, 2014.
... The aim of the present paper is to develop a better understanding of the phenomenon by means of conducting a systematic review of published empirical research on self-photography behaviour and associated risks. The focus of the present paper is on incidents where the injury or death of an individual or a group of people could have been avoided had the individual(s) not been taking a selfie (Lamba et al., 2016). In order to develop a clearer picture of antecedent human and situational variables and risk factors, this paper poses the research question: What is the current state of knowledge in academic literature on the interaction between people (visitors/tourists) taking selfies and the risk of injury or harm? ...
... Four out of the eight papers examined the extent of selfie-related deaths occurring on a global scale based on a review of news media reports published in English language. The first analysis of selfie deaths undertaken, with a preliminary report published in November 2016, included curating a comprehensive dataset of incidents involving the 'death of an individual or a group of people that could have been avoided had the individual(s) not been taking a selfie' (Lamba et al., 2016). With this definition, the authors also included accidents where people died as they attempted to save those who had clicked the selfies. ...
... As can be seen in Fig. 3, three of the four studies identified the first fatal incident to have occurred in early 2014; however, one study reported that three selfie deaths had occurred in late 2011 and a further two in 2013 (Bansal et al., 2018). Frequency of selfie deaths in the studies ranged between 2.6 deaths (Jain & Mavani, 2017) and 4 deaths (Lamba et al., 2016) per year, on average. With a total of 72 months, the study by Bansal et al. (2018) was the longest period of data analysis published. ...
This paper reviews empirical research on the extent and nature of risks associated with dangerous tourist self-photography (selfies) and management responses. Global epidemiological studies have captured the extent of the problem, with studies recording 250+ media-reported deaths within the past decade. Nearly half occurred in natural environments, with key hazards being cliff edges, waterbodies, and wildlife. Researchers exploring the nature of the phenomenon identify contextual factors along with technology-induced distractions, as risk factors in selfie-taking. Demographics also feature, with the majority of casualties being young males. The literature points to management responses that relate to either the social or the risky nature of the phenomenon. The most prevalent are communication-related, ranging from education and awareness-raising to persuasive communication. Targeted communications that invoke social norms and innovative media are suggested for addressing the problem.
... One of these worrying practices is the posting of risky selfies. Based on emerging literature (Lamba et al., 2016;Zuckerman, 2014), risky selfies can be defined as pictures displaying the social media user in a dangerous situation, such as the climbing on a cliff or the inattentive driving of a vehicle. Risky selfies target particularly intense, dangerous behaviors, and involve taking social and/or legal risks. ...
... During their social media use, adolescents may encounter multiple examples of peers engaging in moderately to highly risky behavior. Such examples may support them to also post risky selfies (Lamba et al., 2016). Second, the study explores the potential value of the prototype willingness model to explain how (some of) the links between social media use and risky selfie behavior develop (Gerrard et al., 2008). ...
... The current study responded to concerns that have arisen in the public and scholarly discourse concerning risky selfies (Ayeh, 2018;Lamba et al., 2016). The study data were the first to examine this behavior among social media users and underline the validity of these concerns to some extent among adolescents. ...
Full-text available
Risky selfies are recent, but worrying phenomena in which adolescents take pictures of themselves during the act of risk behavior. By applying the principles of the prototype willingness model, the current cross-sectional study among adolescents (N = 686) aged 15–18 years old examined the relation between social media use and adolescents’ risky selfie behavior. A structural equation modeling indicated that adolescents’ general social media use was positively related to descriptive norm estimations of risky selfie takers and favorable prototype perceptions of risky selfie takers. Moreover, attitudes toward the taking of risky selfies and prototype perceptions of risky selfie takers were found to positively relate to adolescents’ willingness to engage in risky selfie taking and their actual risky selfie behavior. Furthermore, no support was found for the moderating roles of gender, developmental status, narcissism, and sensation seeking in the reported relations with social media use.
... (2020), India has the highest number of Facebook users in the world. India also accounts for more selfie deaths (76) in the world compared to any other country from a total of 127 worldwide (Lamba et al., 2016). Nigeria is the 19th on the hierarchy of Facebook users in the world with an estimated 24 million users, aside other social networking sites such as Instagram, Telegram, Twitter, Whatsapp etc. ...
Full-text available
Female sex workers constitute a diverse group working in a wide array of contexts. They face disproportionate burdens of HIV, HIV risk and limited access to healthcare services. Sub-Saharan Africa bears the brunt of HIV among sex workers, with the highest proportion of global sexual transmission of HIV in sex work (17.8%) occurring in the region.
... It is a smartphone application which is based on usage of location service. This app will identify when someone is taking a selfie at risky / unsafe location and alert him/ her about the probable risk to life [5]. ...
Full-text available
... Aside from influencing decision-making, smartphone use in outdoor settings has already lead to fatal outcomes. For example, Lamba et al. (2016) reported 127 known deaths while taking selfie photos from 2014 to 2016. The implications of smartphone use outdoor settings are expansive and deserve further exploration. ...
Full-text available
As smartphone use continues to become more embedded within daily life, identifying the factors driving their use in extreme environments may have numerous meaningful implications. Little is currently known about mountaineers' intentions to use smartphones in high-alpine environments. Therefore, the purpose of this study was to examine the extent to which attitude, subjective norm, and perceived behavioral control predicted mountaineers' intentions to use smartphones in high-alpine environments. A sample of 167 mountaineers from 37 countries completed a brief questionnaire about their intentions to use smartphones during their next high-alpine expedition. A series of multiple regression analyses were used to determine the salient beliefs influencing mountaineers' smartphone use in high-alpine environments. The study findings provide a better understanding of the potential factors driving mountaineers' use of smartphones. More broadly, these findings add to the growing body of literature regarding smartphone use in extreme environments.
... With the penetration of internet in masses, various types of uploaded content in social media have set new trends. Previous trends, like advertising (Miller and Lammas 2010), crisis reportage (Sakaki, Toriumi, and Matsuo 2011), dangerous selfies (killfies) (Lamba et al. 2016) have been thoroughly studied. There has also been some work on challenges like Blue Whale (Mukhra et al. 2017). ...
Full-text available
There has been upsurge in the number of people participating in challenges made popular through social media channels. One of the examples of such a challenge is the Kiki Challenge, in which people step out of their moving cars and dance to the tunes of the song, 'Kiki, Do you love me?'. Such an action makes the people taking the challenge prone to accidents and can also create nuisance for the others traveling on the road. In this work, we introduce the prevalence of such challenges in social media and show how the machine learning community can aid in preventing dangerous situations triggered by them by developing models that can distinguish between dangerous and non-dangerous challenge videos. Towards this objective, we release a new dataset namely MIDAS-KIKI dataset, consisting of manually annotated dangerous and non-dangerous Kiki challenge videos. Further, we train a deep learning model to identify dangerous and non-dangerous videos, and report our results.
In 2014 the American Psychiatric Association (APA) published a report officially declaring selfie taking as a mental disorder named as selfitis. However, some scholars have argued that to attribute selfie taking to a mental disorder is inappropriate as it has not crossed the borderline. This chapter reviews the literature on mental disorder occasioned by Social Networking Sites (SNS) addiction and data from students, medical professionals and psychologists to examine the various factors responsible for this addiction and its socio-cultural implications.
Conference Paper
Clicking selfies using mobile phones has become a trend in the past few years. It is documented that the thrill of clicking selfies at adventurous places has resulted in serious injuries and even death in some cases. To overcome this, we propose a system which can alert the user by detecting the level of danger in the background while capturing selfies. Our app is based on a deep Convolutional Neural Network (CNN). The prediction is performed as a 5 class classification problem with classes representing a different level of danger. Face detection and device orientation information are also used for robustness and lesser battery consumption.
Full-text available
Background. Photography is an integral component of the international travel experience. Self-photography is becoming a mainstream behaviour in society and it has implications for the practice of travel medicine. Travellers who take selfies, including with the use of selfie sticks, may be subject to traumatic injuries associated with this activity. This review article is the first in the medical literature to address this emerging phenomenon. Methods. Articles indexed on PubMed and Scopus databases through 2015 were retrieved, using the search terms ‘travel’, combined with ‘selfie’, ‘self-photography’, ‘smartphone’, ‘mobile phone’ and ‘social media’. The reference lists of articles were manually searched for additional publications, and published media reports of travel-related self-photography were examined. Results. The lack of situational awareness and temporary distraction inherent in selfie-taking exposes the traveller to potential hazards. A diverse group of selfie injuries has been reported, including injury and death secondary to selfie-related falls, attacks from wild animals, electrocution, lightning strikes, trauma at sporting events, road traffic and pedestrian accidents. Public health measures adopted by the Russian Federation in response to over 100 reported selfie injuries in 2015 alone are presented. The review also discusses the potential for direct trauma from the use of selfie sticks. Travel-related scenarios where selfies should be avoided include photographs taken from a height, on a bridge, in the vicinity of vehicular traffic, during thunderstorms, at sporting events, and where wild animals are in the background. Recommendations exist which discourage use of mobile phones in drivers and pedestrians. Conclusions. The travel medicine practitioner should routinely counsel travellers about responsible self-photography during international travel and should include this advice in printed material given to the patient. 
The travel and mobile phone industries should reinforce these health promotion messages. Future research should offer greater insights into traveller selfie-taking behaviour.
Full-text available
Whereas a vast number of selfies contain little more than a face or faces, highlighting the presentation of self (Goffman, 1959), this article goes beyond the notion of self and identity by examining the relationship between the self and the geographical and social space around it. In particular, we examine a type of selfie that places the self in an event or location of interest such as a sporting event, tourist attraction, or even disaster area or war zone. We argue that the visual interaction between the person and the space can be considered a process of meaning making, resulting in a particular identity that is informed by both the space and the self, and presenting the photographer/subject as a witness. The relationship between space and self is not only a claim that “I’m here!” in a particular time and space but also a claim that “I witnessed this event,” which is elementary to any form of journalism.
Full-text available
A May 2014 issue of Open Magazine, an Indian weekly news digest, celebrated the victory of Narendra Modi, the new Indian prime minister, with an iconic portrait with the caption "Triumph of the Will" (see Figure 1). Given the pronounced right-wing leanings of the Bharatiya Janata Party (BJP), the political party with which Modi is affiliated, the reference in the caption to the Leni Reifenstahl film of the same name might have been more than just a mere play on words. Added to the oft-iterated implication of the BJP ministry in the Gujarat riots of 2002 during Narendra Modi's chief ministership, Open Magazine's low-angle portrait of the incumbent prime minister, together with the caption, seemed to convey a deliberate construction of Modi as a man of iron will, whose government promised not to stray from the "right" path (pun intended). © 2015 (Anirban K. Baishya, [email protected]/* */).
Purpose This paper aims to define the conceptual boundary of the selfie and to discuss the role of the selfie in the social media marketplace. Design/methodology/approach This paper extensively reviews and draws themes from the extant literature on consumer identities in the social media marketplace to explain the selfie phenomenon and to identify potentially fruitful directions for further research. Findings Current insights into the selfie phenomenon can be understood from socio-historical, technological, social media, marketing and ethical perspectives. Research limitations/implications Despite the limitations of a general review (e.g. absence of empirical data and analysis), this paper identifies multiple avenues to extend existing lines of inquiry on the selfie phenomenon. Thus, this paper should encourage further research on the topic in the academic and scientific community. Practical implications The selfie can be used as a marketing tool to improve marketing performance and accomplish marketing-related goals. Originality/value This paper sheds light on how marketing academics and practitioners can better understand the impact of the selfie in the social media marketplace.
Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks that aim at utilizing the added computation as efficiently as possible through suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set and demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and fewer than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error and 17.3% top-1 error.
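The parameter savings from factorized convolutions mentioned in the abstract above can be illustrated with simple weight-count arithmetic. This is a minimal sketch of the general idea (replacing a large kernel with stacked smaller or asymmetric kernels); the channel width is a made-up illustrative value, not a figure from the paper.

```python
# Illustrative weight counts for factorized convolutions.

def conv_params(k_h, k_w, c_in, c_out):
    """Weight count of a single conv layer (biases ignored)."""
    return k_h * k_w * c_in * c_out

C = 256  # hypothetical channel width, chosen only for illustration

# One 5x5 conv vs. two stacked 3x3 convs (same 5x5 receptive field):
p_5x5 = conv_params(5, 5, C, C)          # 1,638,400 weights
p_two_3x3 = 2 * conv_params(3, 3, C, C)  # 1,179,648 weights (28% fewer)

# Factorizing a 3x3 conv into asymmetric 1x3 + 3x1 convs:
p_asym = conv_params(1, 3, C, C) + conv_params(3, 1, C, C)  # 393,216 weights

print(p_5x5, p_two_3x3, p_asym)
```

Stacking two 3x3 layers covers the same receptive field as one 5x5 layer with roughly 28% fewer weights (18/25 of the parameters), which is the kind of efficiency gain the abstract refers to.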
'Firestorms,' sudden bursts of negative attention in cases of controversy and outrage, are seemingly widespread on Twitter and are an increasing source of fascination and anxiety in the corporate, governmental, and public spheres. Using media mentions, we collect 80 candidate events from January 2011 to September 2014 that we would term 'firestorms.' Using data from the Twitter decahose (or gardenhose), a 10% random sample of all tweets, we describe the size and longevity of these firestorms. We take two firestorm exemplars, #myNYPD and #CancelColbert, as case studies to describe more fully. Then, taking the 20 firestorms with the most tweets, we look at the change in mention networks of participants over the course of the firestorm as one method of testing for possible impacts of firestorms. We find that the mention networks before and after the firestorms are more similar to each other than to those of the firestorms, suggesting that firestorms neither emerge from existing networks, nor do they result in lasting changes to social structure. To verify this, we randomly sample users and generate mention networks for baseline comparison, and find that the firestorms are not associated with a greater than random amount of change in mention networks.
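The before/during/after comparison of mention networks described above can be sketched by treating each network as a set of directed mention edges and comparing edge sets. The abstract does not specify the similarity measure used; Jaccard similarity over edges is one simple, hypothetical choice, and the toy edges below are purely illustrative.

```python
# Sketch: comparing mention networks as edge sets (Jaccard similarity).

def jaccard(edges_a, edges_b):
    """Jaccard similarity of two edge collections."""
    a, b = set(edges_a), set(edges_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Toy directed mention edges (user -> mentioned user):
before = [("u1", "u2"), ("u2", "u3"), ("u3", "u1")]
during = [("u4", "brand"), ("u5", "brand"), ("u1", "brand")]
after  = [("u1", "u2"), ("u2", "u3"), ("u3", "u4")]

# The finding would appear as before/after networks resembling each
# other more than either resembles the firestorm-period network:
print(jaccard(before, after))   # 0.5
print(jaccard(before, during))  # 0.0
```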
We introduce the dense captioning task, which requires a computer vision system to both localize and describe salient regions in images in natural language. The dense captioning task generalizes object detection when the descriptions consist of a single word, and image captioning when one predicted region covers the full image. To address the localization and description tasks jointly, we propose a Fully Convolutional Localization Network (FCLN) architecture that processes an image with a single, efficient forward pass, requires no external region proposals, and can be trained end-to-end with a single round of optimization. The architecture is composed of a Convolutional Network, a novel dense localization layer, and a Recurrent Neural Network language model that generates the label sequences. We evaluate our network on the Visual Genome dataset, which comprises 94,000 images and 4,100,000 region-grounded captions. We observe both speed and accuracy improvements over baselines based on current state-of-the-art approaches in both generation and retrieval settings.