Erasing Labor with Labor: Dark Patterns and Lockstep Behaviors on Google Play
Ashwin Singh
IIIT Hyderabad, India
ashwin19.iiith@gmail.com
Arvindh Arun
IIIT Hyderabad, India
arvindh.a@research.iiit.ac.in
Pulak Malhotra
IIIT Hyderabad, India
pulak.malhotra@students.iiit.ac.in
Pooja Desur
IIIT Hyderabad, India
pooja.desur@students.iiit.ac.in
Ayushi Jain
IIIT Delhi, India
ayushi19031@iiitd.ac.in
Duen Horng Chau
Georgia Institute of Technology, USA
polo@gatech.edu
Ponnurangam Kumaraguru
IIIT Hyderabad, India
pk.guru@iiit.ac.in
ABSTRACT
Google Play's policy forbids the use of incentivized installs, ratings, and reviews to manipulate the placement of apps. However, there still exist apps that incentivize installs for other apps on the platform. To understand how install-incentivizing apps affect users, we examine their ecosystem through a socio-technical lens and perform a mixed-methods analysis of their reviews and permissions. Our dataset contains 319K reviews collected daily over five months from 60 such apps that cumulatively account for over 160.5M installs. We perform qualitative analysis of reviews to reveal various types of dark patterns that developers incorporate in install-incentivizing apps, highlighting their normative concerns at both user and platform levels. Permissions requested by these apps validate our discovery of dark patterns, with over 92% of apps accessing sensitive user information. We find evidence of fraudulent reviews on install-incentivizing apps, following which we model them as an edge stream in a dynamic bipartite graph of apps and reviewers. Our proposed reconfiguration of a state-of-the-art microcluster anomaly detection algorithm yields promising preliminary results in detecting this fraud. We discover highly significant lockstep behaviors exhibited by reviews that aim to boost the overall rating of an install-incentivizing app. Upon evaluating the 50 most suspicious clusters of boosting reviews detected by the algorithm, we find (i) near-identical pairs of reviews in 94% of the clusters (47 of 50), and (ii) that over 35% of their reviews (1,687 of 4,717) form a near-identical pair within their cluster. Finally, we conclude with a discussion on how fraud is intertwined with labor and poses a threat to the trust and transparency of Google Play.
KEYWORDS
Google Play, Dark Patterns, Fraud, Labor
1 INTRODUCTION
Google Play lists over 2.89 million apps on its platform [17]. In the last year alone, these apps collectively accounted for over 111 billion installs by users worldwide [15]. Given the magnitude of this scale, there is tremendous competition amongst developers to boost the visibility of their apps. As a result, developers spend considerable budgets on advertising, with expenditure on app installs reaching 96.4 billion USD in 2021 [16]. Owing to this competitiveness, certain developers resort to inflating the reviews, ratings, and installs of their apps. The legitimacy of these means is determined by Google Play's policy, under which the use of incentivized installs is strictly forbidden [7]. Some apps violate this policy by offering users incentives in the form of gift cards, coupons, and other monetary rewards in return for installing other apps; we refer to these as install-incentivizing apps. Past work [6] found that apps promoted on install-incentivizing apps are twice as likely to appear in the top charts and at least six times more likely to witness an increase in their install counts. While that work focuses on measuring the impact of incentivized installs on Google Play, our work aims to develop an understanding of how they affect the users of install-incentivizing apps. To this end, we perform a mixed-methods analysis of the reviews and permissions of install-incentivizing apps. Our ongoing work makes the following contributions:
(1) We provide a detailed overview of various dark patterns present in install-incentivizing apps and highlight several normative concerns that disrupt the welfare of users on Google Play.
(2) We examine different types of permissions requested by install-incentivizing apps to discover similarities with dark patterns, with 95% of apps requesting permissions that access restricted data or perform restricted actions.
(3) We show promising preliminary results in algorithmic detection of fraud and lockstep behaviors in reviews that boost the overall rating of install-incentivizing apps, detecting near-identical review pairs in 94% of the 50 most suspicious review clusters.
(4) We release our dataset comprising 319K reviews written by 301K reviewers over a period of five months, along with the 1,825 most relevant reviews and their corresponding qualitative codes, across 60 install-incentivizing apps [14].
2 DATASET
We created queries by prefixing "install apps" to phrases like "earn money", "win prizes", "win rewards", etc., and searched them on Google Play to curate a list of potentially install-incentivizing apps. Then, we proceeded to install the apps from this list on our mobile devices to manually verify whether these apps incentivized installs for other apps; we discarded the apps that did not fit this criterion. Following this process, we shortlisted 60 install-incentivizing apps. In Figure 1, we plot a distribution and CDF of their installs, finding that most apps (85%) have more than 100K installs. We used a scraper to collect reviews written daily on these apps over a period of 5 months, from November 1, 2021 to April 8, 2022. Reviews were collected daily to avoid over-sampling reviews from certain temporal periods over others. This resulted in 319,198 reviews from 301,188 reviewers. Figure 2 shows a network of apps where edges denote the number of reviewers shared by any two apps. We observe that certain apps share more reviewers with some apps than with others, hinting at the possibility of collusion. Lastly, we also collected the permissions requested by the apps on users' devices.

Figure 1: Distribution and CDF plot of install count for the 60 shortlisted install-incentivizing apps that collectively account for over 160.5M installs. Eighty-five percent of these apps have 100K or more installs, demonstrating their popularity.

Figure 2: Network of apps showing labels of the five apps that share the most reviewers with other apps. App 'us.current.android' shares 6.4K reviewers with other install-incentivizing apps.
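The paper does not name the scraper used for daily collection; the sketch below shows one way such a pipeline could look, assuming the open-source google-play-scraper Python package and a hypothetical list of shortlisted app IDs.

```python
# Sketch: daily review collection for shortlisted apps (assumes the
# open-source `google-play-scraper` package; app IDs are illustrative).
import csv
import datetime
from google_play_scraper import Sort, reviews

APP_IDS = ["us.current.android"]  # hypothetical subset of the 60 shortlisted apps

def collect_daily(app_id, out_path, batch_size=200):
    """Fetch the newest reviews for an app and append them to a CSV file."""
    batch, _token = reviews(
        app_id,
        lang="en",
        country="us",
        sort=Sort.NEWEST,   # newest-first, so a daily run captures the last day
        count=batch_size,
    )
    today = datetime.date.today().isoformat()
    with open(out_path, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for r in batch:
            writer.writerow([app_id, r["userName"], r["score"], r["at"], r["content"], today])

if __name__ == "__main__":
    for app_id in APP_IDS:
        collect_daily(app_id, "reviews.csv")
```

Running such a script once per day, rather than scraping the full history at the end, avoids the over-sampling of certain periods mentioned above.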
3 QUALITATIVE ANALYSIS
To understand the various ways in which install-incentivizing apps affect their users, we performed qualitative analysis of their reviews. Unless a user expands the list of reviews, Google Play displays only the top four "most relevant" reviews under its apps. Owing to their default visibility, we sampled these reviews for all 60 apps over a one-month period, obtaining 1,825 unique reviews. Then, we adopted an inductive open coding approach to thematically code [10] these reviews. In the first iteration, all researchers independently worked on identifying high-level codes for these reviews, which were then compared and discussed. During this process, we defined the 'completion of offers on install-incentivizing apps' as an act of labor by users and the 'incentive promised for their labor' as value. Then, we reached a consensus on four high-level themes: exploitation, UI challenges, satisfaction, and promotion, which we define below:
(1) Exploitation: User invests labor but is unable to gain value.
(2) UI challenges: User invests labor but the app's UI makes it challenging for them to gain value.
(3) Satisfaction: User invests labor and is able to gain value.
(4) Promotion: User invests labor in promoting an app through their review, rating, or a referral code to gain value.
While all themes were useful for capturing the inter-relationship between a user's labor and its value, the first three themes were relatively more prevalent in our data. Next, we performed two iterations of line-by-line coding of reviews within the high-level themes, where the researchers identified emerging patterns under each theme until the principle of saturation was established.
3.1 How Install-Incentivizing Apps Affect Users
In this section, we describe our findings from the qualitative analysis to shed light on how install-incentivizing apps affect their users. More specifically, we elaborate on the commonalities and differences of the patterns within high-level codes that we discovered through line-by-line coding, depicting how the labor users invest in these apps is not only exploited but also leads to negative consequences for them as well as the platform.
3.1.1 Dark Patterns.
Dark patterns can be defined as tricks embedded in apps that make users perform unintended actions [2]. We find comprehensive descriptions of dark patterns present within install-incentivizing apps in reviews coded as 'exploitation' and 'UI challenges'. These patterns make it difficult for users to redeem value for their labor. First, our low-level codes uncover the different types of dark patterns present in reviews of install-incentivizing apps. Then, we ground these types in prior literature [9] by utilizing lenses of both individual and collective welfare to highlight their normative concerns. The individual lens focuses on dark patterns that allow developers to benefit at the expense of users, whereas the collective lens looks at users as a collective entity while examining those expenses. In our case, the former comprises three normative concerns. First, patterns that enable developers to extract labor from users without compensation cause financial loss (I1) to users. Second, cases where the data of users is shared with third parties without prior consent lead to invasion of privacy (I2). Third, the information architecture of apps can manipulate users into making certain choices, inducing cognitive burden (I3).
Table 1: Different types of dark patterns mapped to their individual {Financial Loss (I1), Invasion of Privacy (I2), Cognitive Burden (I3)} and collective {Competition (C1), Price Transparency (C2), Trust in the Market (C3)} normative concerns.

| High-Level Code | Low-Level Code | Review | Normative Concerns |
| Exploitation | Withdrawal Limit | "100000 is equal to 10 dollars. Just a big waste of time. You can not reach the minimum cashout limit." | ✓ ✓ ✓ ✓ |
| Exploitation | Cannot Redeem | "Absolute scam. Commit time and even made in app purchases to complete tasks ... I have over 89k points that it refuses to cash out!" | ✓ ✓ ✓ ✓ |
| Exploitation | Only Initial Payouts | "Good for the first one week then it will take forever to earn just a dollar. So now I quit this app ..." | ✓ ✓ ✓ ✓ |
| Exploitation | Paid Offers | "In the task I had to deposit 50 INR in an app and I would receive 150 INR as a reward in 24 hrs. 5 days have passed and I get no reply to mail." | ✓ ✓ ✓ ✓ |
| Exploitation | Hidden Costs | "Most surveys say that the user isn't eligible for them, after you complete them! Keep in mind you may not be eligible for 90% of the surveys." | ✓ ✓ ✓ ✓ |
| Exploitation | Privacy Violations | "Enter your phone number into this app and you'll be FLOODED with spam texts and scams. I might have to change my phone number because I unwittingly ..." | ✓ ✓ |
| UI Challenges | Too Many Ads | "Pathetic with the dam ads! Nothing but ads!!! Money is coming but only pocket change. It'll be 2022 before i reach $50 to cashout, if then." | ✓ ✓ |
| UI Challenges | Progress Manipulation | "I redownload the app since the app would crash all the time ... I logged in and guess what?? ALL MY POINTS ARE GONE.. 12k points all gone..." | ✓ ✓ ✓ ✓ |
| UI Challenges | Permission Override | "When you give it permission to go over other apps it actually blocks everything else on your phone from working correctly including Google to leave this review." | ✓ ✓ ✓ |
The lens of collective welfare facilitates understanding of the bigger picture of install-incentivizing apps on Google Play by listing three additional concerns. Due to high competition (C1), some developers incorporate dark patterns in apps that empower them to 'extract wealth and build market power at the expense of users' [4] on the platform. In conjunction with their concerns at the individual level, they also pose a serious threat to the price transparency (C2) and trust in the market (C3) of Google Play. In Table 1, we show these different types of dark patterns mapped to their individual and collective normative concerns using sample reviews from our data.
3.1.2 Evidence of Fraudulent Reviews and Ratings.
During qualitative analysis, we found that most reviews coded as 'satisfaction' were relatively shorter and lacked sufficient context to explain how the app benefitted the user, e.g., "Good app", "Nice App", "Very easy to buy money.", "Nice app for earning voucher". We performed Welch's t-test to validate that the number of words in reviews coded as satisfaction was very highly significantly lower than in reviews coded as exploitation or UI challenges (p < 0.001, t = -11.41). The shorter length of these reviews, along with their excessive use of adjectives and unrelatedness to the apps, represents key spam-detection signals [13], raising suspicions about their fraudulence. We discovered evidence of the same in reviews coded as 'promotion', e.g., "Gets high rating because it rewards people to rate it so" and "I rated it 5 stars to get credits", thus finding that install-incentivizing apps also violate Google Play's policy by incentivizing users to boost their ratings and reviews. Other reviews coded as 'promotion' involved users promoting other competitor apps ("No earning 1 task complete not give my wallet not good ! CASHADDA App is good fast earning is good go install now thanks") or posting their referral codes to get more credits within the install-incentivizing app ("The app is Awesome. Use My Referral Code am****02 to get extra coin").
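The word-count comparison above is an unequal-variance (Welch's) t-test; a minimal sketch is shown below, assuming the coded reviews live in a pandas DataFrame with hypothetical columns `code` and `text`.

```python
# Sketch: Welch's t-test on review lengths (column names are hypothetical).
import pandas as pd
from scipy import stats

def compare_word_counts(df: pd.DataFrame):
    """Test whether 'satisfaction' reviews are shorter than 'exploitation'/'UI challenges' ones."""
    words = df["text"].str.split().str.len()
    satisfaction = words[df["code"] == "satisfaction"]
    other = words[df["code"].isin(["exploitation", "UI challenges"])]
    # Welch's t-test (equal_var=False); one-sided alternative: satisfaction < other
    t, p = stats.ttest_ind(satisfaction, other, equal_var=False, alternative="less")
    return t, p

# Example usage:
# t, p = compare_word_counts(coded_reviews)
# print(f"t = {t:.2f}, p = {p:.4f}")
```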
4 QUANTITATIVE ANALYSIS
In this section, we corroborate findings from our qualitative analysis and reveal further characteristics of the behavior of install-incentivizing apps and their reviews. To this end, we examine the permissions requested by these apps to establish their relevance to the dark patterns discussed in Section 3.1.1, and perform anomaly detection on their reviews to build upon the evidence of fraud from Section 3.1.2.
4.1 Permissions in Install-Incentivizing Apps
App permissions support user privacy by protecting access to restricted data and restricted actions on a user's device [5]. Most permissions fall into two protection levels determined by Android, namely normal and dangerous, based on the risk posed to user privacy. Similarly, another distinction can be made between permissions that access user information and permissions that only control device hardware [3]. We leverage these categories in our analysis to identify the types of permissions prominent across install-incentivizing apps. Figure 3 shows an UpSet plot [8] of the different types of permissions present in install-incentivizing apps. First, we observe that over 92% of apps request dangerous permissions that access user information. The most popular permissions in this category include 'modify or delete the contents of your USB storage' (41 apps), 'read phone status and identity' (24 apps), 'access precise location' (19 apps) and 'take pictures and videos' (14 apps). Second, despite being requested by relatively fewer apps, some permissions in this category enable an alarming degree of control over user information, e.g., 'create accounts and set passwords' (5 apps), 'add or modify calendar events and send email to guests without owners' knowledge' (3 apps) and 'read your contacts' (2 apps). Third, 34% of install-incentivizing apps request dangerous hardware-level permissions, the most prominent being 'draw over other apps' (14 apps). Fourth, we note that all but three apps request at least one dangerous permission. Lastly, permissions requested by install-incentivizing apps share common characteristics with the dark patterns discussed above, thus validating their qualitative discovery.
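The category breakdown behind Figure 3 could be tabulated roughly as follows; this is a sketch assuming a hypothetical mapping from permission categories to the apps requesting them, with the upsetplot package used to draw the intersections.

```python
# Sketch: intersections of permission categories across apps
# (the category-to-app mapping below is hypothetical).
import matplotlib.pyplot as plt
from upsetplot import UpSet, from_contents

# Each permission category maps to the set of apps requesting it.
contents = {
    "user info (dangerous)": {"app.one", "app.two", "app.three"},
    "user info (normal)":    {"app.one", "app.three"},
    "hardware (dangerous)":  {"app.two"},
    "hardware (normal)":     {"app.one", "app.two"},
}

data = from_contents(contents)              # boolean membership matrix, one row per app
UpSet(data, show_percentages=True).plot()   # intersection sizes, as in Figure 3
plt.show()

# Share of apps requesting dangerous permissions that access user information
n_apps = len(set().union(*contents.values()))
share = len(contents["user info (dangerous)"]) / n_apps
print(f"{100 * share:.1f}% of apps")
```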
4.2 Lockstep Behaviors
In Section 3.1.2, we found evidence of install-incentivizing apps engaging in review and rating fraud. We build upon this to investigate the reviews of these apps for anomalous behaviors, such as lockstep, that are indicative of fraud. Specifically, we focus on detecting groups of reviews that exhibit similar temporal and rating patterns, e.g., bursts of reviews on an app within a short period of time that boost its overall rating.
4.2.1 Modelling and Experimental Setup.
Given that reviews are a temporal phenomenon, we model them as an edge stream $E = \{e_1, e_2, \ldots\}$ of a dynamic graph $G$. Each edge $e_i \in E$ represents a tuple $(r_i, a_i, t_i)$, where $r_i$ is a reviewer who reviews an app $a_i$ at time $t_i$ (see Figure 4). Groups of fraudulent reviewers may either aim to boost the overall rating of an install-incentivizing app or sink the rating of a competitor app. Thus, we partition our edge stream into two sub-streams as follows:

(1) $E_{boost} = \{(r_i, a_i, t_i) \in E \mid \mathrm{Score}(r_i, a_i) \geq R_{a_i}\}$, with $|E_{boost}| = 215,759$
(2) $E_{sink} = \{(r_i, a_i, t_i) \in E \mid \mathrm{Score}(r_i, a_i) < R_{a_i}\}$, with $|E_{sink}| = 103,439$

where $\mathrm{Score}(r_i, a_i) \in \{1, 2, 3, 4, 5\}$ is the score assigned by reviewer $r_i$ to the app $a_i$ and $R_{a_i}$ denotes the overall rating of app $a_i$.
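A straightforward way to realize this partition is sketched below; the DataFrame column names (`reviewer`, `app`, `time`, `score`, `overall_rating`) are hypothetical placeholders for the fields defined above.

```python
# Sketch: split the review edge stream into boosting and sinking sub-streams
# (column names are hypothetical placeholders for (r_i, a_i, t_i), Score, R_a).
import pandas as pd

def partition_edge_stream(edges: pd.DataFrame):
    """Each row is one edge (reviewer, app, time) with its score and the app's overall rating."""
    edges = edges.sort_values("time")                          # keep the stream in temporal order
    boost = edges[edges["score"] >= edges["overall_rating"]]   # E_boost: Score >= R_a
    sink = edges[edges["score"] < edges["overall_rating"]]     # E_sink: Score < R_a
    return boost, sink

# Example usage:
# boost, sink = partition_edge_stream(reviews_df)
# print(len(boost), len(sink))   # 215,759 and 103,439 in the paper's data
```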
Figure 3: UpSet plot demonstrating the different types of permissions present in install-incentivizing apps. Over ninety-two percent of apps request permissions that access sensitive user information.

Figure 4: Reviews are modelled as an edge stream in a dynamic bipartite graph of apps and reviewers. Each edge $e \in E$ represents a tuple $(r, a, t)$, where $r$ is a reviewer who reviews an app $a$ at time $t$.

Next, we reconfigure a state-of-the-art microcluster anomaly detection algorithm, Midas-F [1], for our use. In particular, we modify the definition of a microcluster to accommodate the bipartite nature of our dynamic graph. Given an edge $e \in E$, a detection period $T \geq 1$, and a threshold $\beta > 1$, there exists a microcluster of reviews on an app $a$ if it satisfies the following equation:

$$\frac{c(e, (n+1)T)}{c(e, nT)} > \beta \quad \text{where} \quad c(e, nT) = \bigl|\{(r_i, a, t_i) \in E_{boost} \mid (n-1)T < t_i \leq nT\}\bigr| \qquad (1)$$

if $e \in E_{boost}$, and vice versa for $E_{sink}$. Depending on whether $e$ is a boosting or sinking edge, $c(e, nT)$ counts similar edges for the app $a$ within consecutive detection periods $(n-1)T$ and $nT$. Values recommended by the authors are used for the remaining parameters $\alpha$ and $\theta$. It is worth noting that our modification preserves the algorithm's properties of (i) theoretical guarantees on the false positive probability, and (ii) constant-time and constant-memory processing of new edges [1].
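Under this reading of Equation (1), the detection rule amounts to comparing per-app edge counts in consecutive detection periods. The sketch below is a simplified, exact-count illustration of that rule over a static edge list; Midas-F itself maintains the counts approximately with count-min sketches in a streaming setting [1].

```python
# Sketch: exact-count illustration of the microcluster rule in Equation (1).
# Midas-F approximates these counts with count-min sketches in a streaming
# setting; here we count exactly over a static list of edges for clarity.
from collections import defaultdict

def microcluster_apps(edges, T, beta):
    """edges: iterable of (reviewer, app, time) tuples from one sub-stream.
    Returns (app, period) pairs whose review count grows by more than a
    factor of `beta` between consecutive detection periods of length T."""
    counts = defaultdict(int)                 # (app, period index) -> edge count
    for _reviewer, app, t in edges:
        counts[(app, int(t // T))] += 1

    flagged = set()
    for (app, n), c_next in counts.items():
        c_prev = counts.get((app, n - 1), 0)
        if c_prev > 0 and c_next / c_prev > beta:
            flagged.add((app, n))
    return flagged

# Example usage (daily detection periods, assuming times in seconds):
# suspicious = microcluster_apps(boost_edges, T=86400, beta=2.0)
```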
4.2.2 Analysis and Preliminary Results.

Figure 5: CDF plot of anomaly scores for the two edge streams $E_{boost}$ and $E_{sink}$. Reviews that boost the overall rating of an install-incentivizing app exhibit significantly more anomalous behavior than reviews that aim to bring it down.

Midas-F follows a streaming hypothesis testing approach that determines whether the observed and expected mean number of edges for a node at a given timestep differ significantly. Based on a chi-squared goodness-of-fit test, the algorithm provides an anomaly score $\mathcal{S}(e)$ for each edge $e$ in a streaming setting. Upon computing anomaly scores for both sub-streams $E_{boost}$ and $E_{sink}$, we visualize their CDFs with an inset box plot in Figure 5. It can be observed that $E_{boost}$ exhibits more anomalous behavior than $E_{sink}$. To ascertain the statistical significance of this observation, we use Welch's t-test for the hypothesis $H_1: \mathcal{S}_\mu(E_{boost}) > \mathcal{S}_\mu(E_{sink})$. We infer that reviews that aim to boost the rating of an install-incentivizing app show highly significantly more anomalous behavior ($t = 157.23$, $p < 0.0$) than reviews that aim to bring it down.
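For intuition, the chi-squared statistic that the MIDAS family uses to score an edge can be sketched as below. This is a simplified, exact-count rendering of the score described in [1], not the authors' implementation, and the variable names are ours.

```python
# Sketch: chi-squared anomaly score in the style of MIDAS [1].
# a = edge count in the current timestep, s = total edge count so far,
# t = index of the current timestep (t >= 2). Exact counts are used here;
# MIDAS/Midas-F maintain them approximately with count-min sketches.
def chi_squared_score(a: int, s: int, t: int) -> float:
    """Larger scores indicate a burst of edges relative to the historical mean s/t."""
    if s == 0 or t < 2:
        return 0.0
    expected = s / t
    return (a - expected) ** 2 * (t ** 2) / (s * (t - 1))

# Example: 40 reviews on an app today versus 60 in total over 10 days
# print(chi_squared_score(a=40, s=60, t=10))
```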
Next, we examine fraud across the anomalous microclusters detected by the algorithm. Figure 6 shows one such microcluster anomaly, where the algorithm detects reviews from three reviewers boosting the overall rating of two install-incentivizing apps on the same day. We extract the 50 most suspicious clusters of reviews from both sub-streams $E_{boost}$ and $E_{sink}$ based on their average anomaly scores. For each pair of reviews $(r_i, r_j)$ within these clusters, we compute their cosine similarity $CS(r_i, r_j)$ using embeddings generated by Sentence-BERT [12]. Over 35% of reviews (1,687 of 4,717) from the suspicious clusters in $E_{boost}$ form at least one pair of near-identical reviews, i.e., $CS(r_i, r_j) = 1$. However, this percentage drops to 10% (45 of 432 reviews) in the case of $E_{sink}$. On closer inspection, we find that these are all extremely short reviews of at most three to four words, consisting mostly of adjectives; e.g., from $E_{boost}$: ('good app', 'very good app'), ('good earning app', 'very good for earning app'), ('best app', 'very best app'), and from $E_{sink}$: ('bad', 'very bad'), ('super', 'super'), ('nice', 'very nice'). It is surprising that all but four identical pairs from $E_{sink}$ contain only positive adjectives, considering that they assign the app a low rating. A potential reason for this dissonance is that the reviewers writing these reviews want to camouflage themselves as normal users in terms of their rating patterns. Lastly, among the fifty most suspicious clusters, we find such pairs in 47 (94%) clusters from $E_{boost}$ and 21 (42%) clusters from $E_{sink}$. This demonstrates that the efficacy of our approach in detecting lockstep behaviors is not limited to the temporal and rating dimensions, but also extends to the content of the reviews.
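The near-identical-pair check could be reproduced along the lines below; the paper specifies Sentence-BERT [12] but not a particular checkpoint, so the model name used here is an assumption.

```python
# Sketch: find near-identical review pairs within a suspicious cluster.
# The checkpoint 'all-MiniLM-L6-v2' is an assumed choice; the paper only
# states that Sentence-BERT embeddings were used.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def near_identical_pairs(reviews, threshold=0.999):
    """Return index pairs whose embedding cosine similarity is ~1."""
    embeddings = model.encode(reviews, convert_to_tensor=True, normalize_embeddings=True)
    sims = util.cos_sim(embeddings, embeddings)      # pairwise cosine similarity matrix
    pairs = []
    for i in range(len(reviews)):
        for j in range(i + 1, len(reviews)):
            if sims[i][j] >= threshold:
                pairs.append((i, j))
    return pairs

# Example usage:
# print(near_identical_pairs(["good app", "very good app", "worst app ever"]))
```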
Figure 6: A microcluster anomaly detected by the algorithm, where three reviewers boost the overall rating of two install-incentivizing apps, 'Cashyy' and 'Appflame', on the same day.

5 DISCUSSION AND FUTURE WORK
Our current work sheds light on how lax enforcement of Google Play's policy on fraudulent installs, ratings, and reviews empowers developers of install-incentivizing apps to deplete the trust and transparency of the platform. Through the use of permissions that access restricted data and perform restricted actions, developers incorporate dark patterns in these apps to deceive users and extort labor from them in the form of offers. The second form of labor that we study in our work is the writing of fraudulent reviews. We find evidence of their presence qualitatively and show promising results in detecting them algorithmically. Both types of fraud (incentivized installs and reviews) are made possible only by the labor of users who are vulnerable or crowd-workers who are underpaid [11]. This enables developers to extract profits as they get away with violating Google Play's policies without any consequences or accountability. However, a question that remains unanswered is: if reviews under these apps describe exploitative experiences of users, what facilitates their continued exploitation? For now, we can only conjecture that fraudulent positive reviews on install-incentivizing apps suppress the ranks of reviews containing exploitative experiences of users. Whether this holds true is a question that remains to be explored in our future work.
REFERENCES
[1] Siddharth Bhatia, Rui Liu, Bryan Hooi, Minji Yoon, Kijung Shin, and Christos Faloutsos. 2022. Real-Time Anomaly Detection in Edge Streams. ACM Trans. Knowl. Discov. Data 16, 4, Article 75 (Jan 2022), 22 pages. https://doi.org/10.1145/3494564
[2] Harry Brignull. 2018. Deceptive Designs. Retrieved Jan 27, 2021 from https://www.deceptive.design/
[3] Pew Research Center. 2015. An Analysis of Android App Permissions. Retrieved Apr 15, 2022 from https://www.pewresearch.org/internet/2015/11/10/an-analysis-of-android-app-permissions/
[4] Gregory Day and Abbey Stemler. 2020. Are Dark Patterns Anticompetitive? Ala. L. Rev. 72 (2020), 1.
[5] Android Developers. 2022. Permissions on Android. Retrieved Apr 15, 2022 from https://developer.android.com/guide/topics/permissions/overview
[6] Shehroze Farooqi, Álvaro Feal, Tobias Lauinger, Damon McCoy, Zubair Shafiq, and Narseo Vallina-Rodriguez. 2020. Understanding Incentivized Mobile App Installs on Google Play Store. In Proceedings of the ACM Internet Measurement Conference (IMC '20). 696–709. https://doi.org/10.1145/3419394.3423662
[7] Google. 2022. User Ratings, Reviews, and Installs. Retrieved Apr 15, 2022 from https://support.google.com/googleplay/android-developer/answer/9898684
[8] Alexander Lex, Nils Gehlenborg, Hendrik Strobelt, Romain Vuillemot, and Hanspeter Pfister. 2014. UpSet: Visualization of Intersecting Sets. IEEE Transactions on Visualization and Computer Graphics (InfoVis) 20, 12 (2014), 1983–1992. https://doi.org/10.1109/TVCG.2014.2346248
[9] Arunesh Mathur, Mihir Kshirsagar, and Jonathan Mayer. 2021. What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Article 360, 18 pages. https://doi.org/10.1145/3411764.3445610
[10] Matthew B. Miles and A. Michael Huberman. 1994. Qualitative Data Analysis: An Expanded Sourcebook. Sage.
[11] Mizanur Rahman, Nestor Hernandez, Ruben Recabarren, Syed Ishtiaque Ahmed, and Bogdan Carbunar. 2019. The Art and Craft of Fraudulent App Promotion in Google Play. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS '19). 2437–2454. https://doi.org/10.1145/3319535.3345658
[12] Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. https://arxiv.org/abs/1908.10084
[13] Somayeh Shojaee, Azreen Azman, Masrah Murad, Nurfadhlina Sharef, and Nasir Sulaiman. 2015. A Framework for Fake Review Annotation. In Proceedings of the 2015 17th UKSIM-AMSS International Conference on Modelling and Simulation.
[14] Ashwin Singh, Arvindh Arun, Pulak Malhotra, Pooja Desur, Ayushi Jain, Duen Horng Chau, and Ponnurangam Kumaraguru. 2022. Install-Incentivising Apps on Google Play. Retrieved May 18, 2022 from https://precog.iiit.ac.in/requester.php?dataset=google_play
[15] Statista. 2022. Global Google Play app downloads 2016-2021. Retrieved Apr 15, 2022 from https://www.statista.com/statistics/734332/google-play-app-installs-per-year/
[16] Statista. 2022. Global mobile app install advertising spending 2017-2022. Retrieved Apr 15, 2022 from https://www.statista.com/statistics/986536/mobile-app-install-advertising-spending-global/
[17] Statista. 2022. Google Play: number of available apps 2009-2022. Retrieved Apr 15, 2022 from https://www.statista.com/statistics/266210/number-of-available-applications-in-the-google-play-store/