
Erasing Labor with Labor: Dark Patterns and Lockstep Behaviors on Google Play

Erasing Labor with Labor: Dark Paerns and Lockstep Behaviors
on Google Play
Ashwin Singh
IIIT Hyderabad, India
Arvindh Arun
IIIT Hyderabad, India
Pulak Malhotra
IIIT Hyderabad, India
Pooja Desur
IIIT Hyderabad, India
Ayushi Jain
IIIT Delhi, India
Duen Horng Chau
Georgia Institute of Technology, USA
Ponnurangam Kumaraguru
IIIT Hyderabad, India
Google Play's policy forbids the use of incentivized installs, ratings, and reviews to manipulate the placement of apps. However, there still exist apps that incentivize installs for other apps on the platform. To understand how install-incentivizing apps affect users, we examine their ecosystem through a socio-technical lens and perform a mixed-methods analysis of their reviews and permissions. Our dataset contains 319K reviews collected daily over five months from 60 such apps that cumulatively account for over 160.5M installs. We perform qualitative analysis of reviews to reveal various types of dark patterns that developers incorporate in install-incentivizing apps, highlighting their normative concerns at both user and platform levels. Permissions requested by these apps validate our discovery of dark patterns, with over 92% of apps accessing sensitive user information. We find evidence of fraudulent reviews on install-incentivizing apps, following which we model them as an edge stream in a dynamic bipartite graph of apps and reviewers. Our proposed reconfiguration of a state-of-the-art microcluster anomaly detection algorithm yields promising preliminary results in detecting this fraud. We discover highly significant lockstep behaviors exhibited by reviews that aim to boost the overall rating of an install-incentivizing app. Upon evaluating the 50 most suspicious clusters of boosting reviews detected by the algorithm, we find (i) near-identical pairs of reviews across 94% (47) of these clusters, and (ii) over 35% of reviews (1,687 of 4,717) forming near-identical pairs within their cluster. Finally, we conclude with a discussion on how fraud is intertwined with labor and poses a threat to the trust and transparency of Google Play.
Keywords: Google Play, Dark Patterns, Fraud, Labor
Google Play lists over 2.89 million apps on its platform [17]. In the last year alone, these apps collectively accounted for over 111 billion installs by users worldwide [15]. Given the magnitude of this scale, there is tremendous competition amongst developers to boost the visibility of their apps. As a result, developers spend considerable budgets on advertising, with expenditure reaching 96.4 billion USD on app installs in 2021 [16]. Owing to this competitiveness, certain developers resort to inflating the reviews, ratings, and installs of their apps. The legitimacy of these means is determined by Google Play's policy, under which the use of incentivized installs is strictly forbidden [7]. Some apps violate this policy by offering users incentives in the form of gift cards, coupons, and other monetary rewards in return for installing other apps; we refer to these as install-incentivizing apps. Past work [6] found that apps promoted on install-incentivizing apps are twice as likely to appear in the top charts and at least six times more likely to witness an increase in their install counts. While that work focuses on measuring the impact of incentivized installs on Google Play, our work aims to develop an understanding of how they affect the users of install-incentivizing apps. To this end, we perform a mixed-methods analysis of the reviews and permissions of install-incentivizing apps. Our ongoing work makes the following contributions:
• We provide a detailed overview of various dark patterns present in install-incentivizing apps and highlight several normative concerns that disrupt the welfare of users on Google Play.
• We examine different types of permissions requested by install-incentivizing apps to discover similarities with dark patterns, with 95% of apps requesting permissions that access restricted data or perform restricted actions.
• We show promising preliminary results in algorithmic detection of fraud and lockstep behaviors in reviews that boost the overall rating of install-incentivizing apps, detecting near-identical review pairs in 94% of the 50 most suspicious review clusters.
• We release our dataset comprising 319K reviews written by 301K reviewers over a period of five months and the 1,825 most relevant reviews with corresponding qualitative codes across 60 install-incentivizing apps [14].
We created queries by prexing “install apps” to phrases like “earn
money”, “win prizes”, “win rewards”, etc., and searched them on
Google Play to curate a list of potentially install-incentivizing apps.
Then, we proceeded to install the apps from this list on our mobile
Conference'17, July 2017, Washington, DC, USA Ashwin Singh, Arvindh Arun, Pulak Malhotra, Pooja Desur, Ayushi Jain, Duen Horng Chau, and Ponnurangam Kumaraguru
Figure 1: Distribution and CDF plot of install count for the 60 shortlisted install-incentivizing apps that collectively account for over 160.5M installs. Eighty-five percent of these apps have 100K or more installs, demonstrating their popularity.

Figure 2: Network of apps showing labels of five apps that share the most reviewers with other apps. App ‘’ shares 6.4K reviewers with other install-incentivizing apps.
devices to manually verify whether these apps incentivized installs for other apps; we discarded the apps that did not fit this criterion. Following this process, we shortlisted 60 install-incentivizing apps. In Figure 1, we plot a distribution and CDF of their installs, finding that most apps (85%) have more than 100K installs. We used a scraper to collect reviews written daily on these apps over a period of 5 months, from November 1, 2021 to April 8, 2022. Reviews were collected daily to avoid over-sampling of reviews from certain temporal periods over others. This resulted in 319,198 reviews from 301,188 reviewers. Figure 2 shows a network of apps where edges denote the number of reviewers shared by any two apps. We observe that certain apps share more reviewers with some apps over others, hinting at the possibility of collusion. Lastly, we also collected the permissions requested by apps on users' devices.
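The reviewer-sharing network of Figure 2 can be derived directly from (reviewer, app) records. A minimal sketch, assuming plain Python tuples as input; the function name and the toy records below are illustrative, not part of our pipeline:

```python
from collections import defaultdict
from itertools import combinations

def shared_reviewer_edges(reviews):
    """Given (reviewer_id, app_id) pairs, return edge weights
    {(app_u, app_v): number of reviewers who reviewed both apps}."""
    apps_by_reviewer = defaultdict(set)
    for reviewer, app in reviews:
        apps_by_reviewer[reviewer].add(app)
    edges = defaultdict(int)
    for apps in apps_by_reviewer.values():
        # every unordered pair of apps sharing this reviewer gains weight 1
        for u, v in combinations(sorted(apps), 2):
            edges[(u, v)] += 1
    return dict(edges)

reviews = [("r1", "A"), ("r1", "B"), ("r2", "A"), ("r2", "B"),
           ("r3", "B"), ("r3", "C")]
print(shared_reviewer_edges(reviews))  # {('A', 'B'): 2, ('B', 'C'): 1}
```

Heavily shared edges in this weighted graph are what hint at possible collusion between apps.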
To understand the various ways in which install-incentivizing apps affect their users, we performed qualitative analysis of their reviews. Unless a user expands the list of reviews, Google Play displays only the top four "most relevant" reviews under its apps. Owing to their default visibility, we sampled these reviews for all 60 apps over a one-month period, obtaining 1,825 unique reviews. Then, we adopted an inductive open coding approach to thematically code [10] these reviews. In the first iteration, all researchers independently worked on identifying high-level codes for these reviews, which were then compared and discussed. During this process, we defined the 'completion of offers on install-incentivizing apps' as an act of labor by users and the 'incentive promised for their labor' as value. Then, we reached a consensus on four high-level themes: exploitation, UI challenges, satisfaction, and promotion, which we define below:
(1) Exploitation: User invests labor but is unable to gain value.
(2) UI challenges: User invests labor but the app's UI makes it challenging for them to gain value.
(3) Satisfaction: User invests labor and is able to gain value.
(4) Promotion: User invests labor in promoting an app through their review, rating or a referral code to gain value.
While all themes were useful for capturing the inter-relationship between a user's labor and its value, the first three themes were relatively more prevalent in our data. Next, we performed two iterations of line-by-line coding of reviews within the high-level themes, where the researchers identified emerging patterns under each theme until the principle of saturation was established.
How Install-Incentivizing Apps aect Users
In this section, we describe our ndings from the qualitative analy-
sis to shed light on how install-incentivizing apps aect their users.
More specically, we elaborate on the commonalities and dier-
ences of patterns within high-level codes that we discovered using
line-by-line coding to depict how labor invested by users in these
apps is not only exploited but also leads to negative consequences
for them as well as the platform.
3.1.1 Dark Paerns.
Dark patterns can be dened as tricks embedded in apps that make
users perform unintended actions [
]. We nd comprehensive de-
scriptions of dark patterns present within install-incentivizing apps
in reviews coded as ‘exploitation’ and ‘UI challenges’. These pat-
terns make it dicult for users to redeem value for their labor. First,
our low-level codes uncover the dierent types of dark patterns
present in reviews of install-incentivizing apps. Then, we ground
these types in prior literature [
] by utilizing lenses of both indi-
vidual and collective welfare to highlight their normative concerns.
The individual lens focuses on dark patterns that allow developers
to benet at the expense of users whereas the collective lens looks
at users as a collective entity while examining expenses. In our case,
the former comprises three normative concerns. First, patterns that
enable developers to extract labor from users without compensating
cause nancial loss (I1) to users. Second, cases where the data of
users is shared with third parties without prior consent, leading to
invasion of privacy (I2). Third, when the information architecture
Erasing Labor with Labor: Dark Paerns and Lockstep Behaviors on Google Play Conference’17, July 2017, Washington, DC, USA
Table 1: Dierent types of dark patterns mapped to their individual {Finanical Loss (I1), Invasion of Privacy (I2), Cognitive
Burden (I3)} and collective {Competition (C1), Price Transparency (C2), Trust in the Market (C3)} normative concerns.
Code Review Normative Concerns
I1 I2 I3 C1 C2 C3
Withdrawal Limit 100000 is equal to 10 dollars. Just a big waste of time.
You can not reach the minimum cashout limit.
Cannot Redeem
Absolute scam. Commit time and even made in app
purchases to complete tasks ... I have over 89k points
that it refuses to cash out!
Only Initial Payouts Good for the rst one week then it will take forever to
earn just a dollar. So now I quit this app ...
Paid Oers
In the task I had to deposit 50 INR in an app and I
would receive 150 INR as a reward in 24 hrs. 5 days
have passed and I get no reply to mail.
Hidden Costs
Most surveys say that the user isn’t eligible for them,
after you complete them! Keep in mind you may not
be eligible for 90% of the surveys.
Privacy Violations
Enter your phone number into this app and you’ll be
FLOODED with spam texts and scams. I might have
to change my phone number because I unwittingly ...
UI Challenges
Too Many Ads
Pathetic with the dam ads! Nothing but ads!!! Money
is coming but only pocket change. It’ll be 2022 before
i reach $50 to cashout, if then.
Progress Manipulation
I redownload the app since the app would crash all the
time ... I logged in and guess what?? ALL MY POINTS
ARE GONE.. 12k points all gone...
Permission Override
When you give it permission to go over other apps it
actually blocks everything else on your phone from
working correctly including Google to leave this review.
of apps manipulates users into making certain choices due to the induced cognitive burden (I3). The lens of collective welfare facilitates understanding of the bigger picture of install-incentivizing apps on Google Play by listing three additional concerns. Due to high competition (C1), some developers incorporate dark patterns in apps that empower them to 'extract wealth and build market power at the expense of users' [4] on the platform. In conjunction with their concerns at the individual level, they also pose a serious threat to the price transparency (C2) and trust in the market (C3) of Google Play. In Table 1, we show these different types of dark patterns mapped to their individual and collective normative concerns using sample reviews from our data.
3.1.2 Evidence of Fraudulent Reviews and Ratings.
During qualitative analysis, we found that most reviews coded as 'satisfaction' were relatively shorter and lacked sufficient context to explain how the app benefited the user, e.g., "Good app", "Nice App", "Very easy to buy money.", "Nice app for earning voucher". We performed Welch's t-test to validate that the number of words in reviews coded as satisfaction was very highly significantly lower than in reviews coded as exploitation or UI challenges (𝑡 = 41). The shorter length of reviews, along with the excessive use of adjectives and unrelatedness to the apps, represented key spam-detection signals [13], raising suspicions about their fraudulence. We discovered evidence of the same in reviews coded as 'promotion', e.g., "Gets high rating because it rewards people to rate it so" and "I rated it 5 stars to get credits", thus finding that install-incentivizing apps also violate Google Play's policy by incentivizing users to boost their ratings and reviews. Other reviews coded as 'promotion' involved users promoting other competitor apps ("No earning 1 task complete not give my wallet not good ! CASHADDA App is good fast earning is good go install now thanks") or posting their referral codes to get more credits within the install-incentivizing app ("The app is Awesome. Use My Referral Code am****02 to get extra coin").
In this section, we ascertain ndings from our qualitative analysis
as well as reveal more characteristics about the behavior of install-
incentivizing apps and their reviews. For the same, we examine the
permissions requested by these apps to establish their relevance to
the dark patterns discussed in Section 3.1.1, and perform anomaly
detection on their reviews to build upon the evidence of fraud from
Section 3.1.2.
4.1 Permissions in Install-Incentivizing Apps
App permissions support user privacy by protecting access to restricted data and restricted actions on a user's device [5]. Most permissions fall into two protection levels as determined by Android, namely normal and dangerous, based on the risk posed to user privacy. Similarly, another distinction can be made between permissions that access user information and permissions that only control device hardware [3]. We leverage these categories in our
analysis to identify types of permissions prominent across install-incentivizing apps. Figure 3 shows an UpSet plot [8] of different types of permissions present in install-incentivizing apps. First, we observe that over 92% of apps request dangerous permissions that access user information. The most popular permissions in this category include 'modify or delete the contents of your USB storage' (41 apps), 'read phone status and identity' (24 apps), 'access precise location' (19 apps) and 'take pictures and videos' (14 apps). Second, despite being requested by relatively fewer apps, some permissions in this category enable an alarming degree of control over user information, e.g., 'create accounts and set passwords' (5 apps), 'add or modify calendar events and send email to guests without owners' knowledge' (3 apps) and 'read your contacts' (2 apps). Third, 34% of install-incentivizing apps contain permissions that access dangerous hardware-level information, the most prominent one being 'draw over other apps' (14 apps). Fourth, we note that all but three apps request at least one dangerous permission. Lastly, permissions requested by install-incentivizing apps share common characteristics with the dark patterns discussed above, thus validating their qualitative discovery.
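Shares such as the 92% figure above follow directly from per-app permission categories. A sketch with a hypothetical toy dictionary; in our analysis the categories come from permissions scraped from Google Play listings, not from this hard-coded data:

```python
# Hypothetical per-app permission categories, keyed by
# (data kind, protection level) as in the UpSet plot.
apps = {
    "app1": {("user info", "dangerous"), ("hardware", "normal")},
    "app2": {("user info", "dangerous")},
    "app3": {("hardware", "dangerous")},
    "app4": {("user info", "normal")},
}

def share_with(category, apps):
    """Fraction of apps requesting at least one permission in `category`."""
    hits = sum(1 for perms in apps.values() if category in perms)
    return hits / len(apps)

print(share_with(("user info", "dangerous"), apps))  # 0.5
```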
4.2 Lockstep Behaviors
In Section 3.1.2, we found evidence of install-incentivizing apps indulging in review and rating fraud. Thus, we build upon the same to investigate reviews of these apps for anomalous behaviors, such as lockstep, that are indicative of fraud. Specifically, we focus on detecting groups of reviews that exhibit similar temporal and rating patterns, e.g., bursts of reviews on an app within a short period of time to boost its overall rating.
4.2.1 Modelling and Experimental Setup.
Given that reviews are a temporal phenomenon, we model them as an edge stream 𝐸 = {𝑒1, 𝑒2, ...} of a dynamic graph 𝐺. Each edge 𝑒𝑖 represents a tuple (𝑟𝑖, 𝑎𝑖, 𝑡𝑖) where 𝑟𝑖 is a reviewer who reviews an app 𝑎𝑖 at time 𝑡𝑖 (see Fig 4). Groups of fraudulent reviewers may either aim to boost the overall rating of an install-incentivizing app or sink the rating of a competitor app. Thus, we partition our edge stream into two sub-streams as follows:

(1) 𝐸_boost = {(𝑟𝑖, 𝑎𝑖, 𝑡𝑖) ∈ 𝐸 | Score(𝑟𝑖, 𝑎𝑖) ≥ 𝑅_𝑎𝑖}
(2) 𝐸_sink = {(𝑟𝑖, 𝑎𝑖, 𝑡𝑖) ∈ 𝐸 | Score(𝑟𝑖, 𝑎𝑖) < 𝑅_𝑎𝑖}

where Score(𝑟𝑖, 𝑎𝑖) ∈ {1, 2, 3, 4, 5} is the score assigned by reviewer 𝑟𝑖 to the app 𝑎𝑖 and 𝑅_𝑎𝑖 denotes the overall rating of app 𝑎𝑖.
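The partition above can be sketched in a few lines; the function name and the toy edges below are illustrative, with each review's score compared against the app's overall rating:

```python
def partition_stream(edges, overall_rating):
    """Split an edge stream of (reviewer, app, time, score) tuples into
    boosting edges (score >= app's overall rating) and sinking edges."""
    boost, sink = [], []
    for reviewer, app, t, score in edges:
        target = boost if score >= overall_rating[app] else sink
        target.append((reviewer, app, t))
    return boost, sink

overall_rating = {"A": 4.1}
edges = [("r1", "A", 1, 5), ("r2", "A", 2, 1), ("r3", "A", 3, 5)]
boost, sink = partition_stream(edges, overall_rating)
print(len(boost), len(sink))  # 2 1
```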
Next, we recongure a state-of-the-art microcluster anomaly detec-
tion algorithm Midas-F [
] for our use. In particular, we modify the
denition of a microcluster to accommodate the bipartite nature of
our dynamic graph. Given an edge
, a detection period
and a threshold
1, there exists a microcluster of reviews on an
app 𝑎if it satises the following equation:
𝑐(𝑒, (𝑛+1)𝑇)
𝑐(𝑒, 𝑛𝑇 )>𝛽where 𝑐(𝑒, 𝑛𝑇 )=
{(𝑟𝑖, 𝑎, 𝑡 𝑖)|(𝑟𝑖, 𝑎, 𝑡𝑖) 𝐸𝑏𝑜 𝑜𝑠𝑡 (𝑛1)𝑇<𝑡𝑖𝑛𝑇}
and vice versa for
. Depending on whether
is a boosting or sinking edge,
𝑐(𝑒, 𝑛𝑇 )
counts similar edges for the
Figure 3: UpSet plot demonstrating different types of permissions (hardware vs. user info, normal vs. dangerous protection levels) present in install-incentivizing apps. Over ninety-two percent of apps request permissions that access sensitive user information.
Figure 4: Reviews are modelled as an edge stream in a dynamic bipartite graph of apps and reviewers. Each edge represents a tuple (𝑟, 𝑎, 𝑡) where 𝑟 is a reviewer who reviews an app 𝑎 at time 𝑡.
corresponding sub-stream within consecutive detection periods 𝑛𝑇 and (𝑛+1)𝑇. Values recommended by the authors are used for the remaining parameters of Midas-F. It is worth noting that our modification preserves its properties of (i) theoretical guarantees on false positive probability, and (ii) constant-time and constant-memory processing of new edges [1].
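The ratio test above can be illustrated offline. Midas-F maintains these counts approximately, in constant memory, while streaming; the exhaustive bucketed counting below is only a simplified sketch (the function name, 𝑇 = 1, 𝛽 = 2, and the toy burst are ours):

```python
from collections import Counter

def microcluster_apps(boost_edges, T, beta):
    """Flag apps whose review count in detection period n+1 exceeds
    beta times the count in period n (offline version of the ratio test)."""
    counts = Counter()  # (app, period index) -> number of edges
    for _, app, t in boost_edges:
        counts[(app, int(t // T))] += 1
    flagged = set()
    for (app, n), c_next in counts.items():
        c_prev = counts.get((app, n - 1), 0)
        if c_prev > 0 and c_next / c_prev > beta:
            flagged.add(app)
    return flagged

# two reviews of app 'A' in period 0, then a burst of eight in period 1
edges = [("r%d" % i, "A", 0.5) for i in range(2)] + \
        [("r%d" % i, "A", 1.5) for i in range(2, 10)]
print(microcluster_apps(edges, T=1, beta=2))  # {'A'}
```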
4.2.2 Analysis and Preliminary Results.
Midas-F follows a streaming hypothesis testing approach that determines whether the observed and expected mean numbers of edges for a node at a given timestep are significantly different. Based on a chi-squared goodness-of-fit test, the algorithm provides an anomaly score for each edge in a streaming setting. Upon computing anomaly scores for both sub-streams 𝐸_boost and 𝐸_sink, we visualize their CDF with an inset box plot in Fig 5. It can be observed that 𝐸_boost exhibits more anomalous behavior than 𝐸_sink. To ascertain statistical significance of the same, we use Welch's t-test for the hypothesis S𝜇(𝐸_boost) > S𝜇(𝐸_sink). We infer that reviews that aim to boost the rating of an install-incentivizing
Erasing Labor with Labor: Dark Paerns and Lockstep Behaviors on Google Play Conference’17, July 2017, Washington, DC, USA
Figure 5: CDF plot of anomaly scores for the two edge streams 𝐸_boost and 𝐸_sink. Reviews that boost the overall rating of an install-incentivizing app exhibit significantly more anomalous behavior than reviews that aim to bring it down.
app show anomalous behavior that is highly significantly greater (𝑡 = 157.23) than that of reviews that aim to bring it down.
Next, we examine fraud across anomalous microclusters detected by the algorithm. Figure 6 shows one such microcluster anomaly, where the algorithm detects reviews from three reviewers boosting the overall rating of two install-incentivizing apps on the same day. We extract the 50 most suspicious clusters of reviews from both sub-streams 𝐸_boost and 𝐸_sink based on their average anomaly scores. For each pair of reviews (𝑟𝑖, 𝑟𝑗) within these clusters, we compute their cosine similarity 𝐶𝑆(𝑟𝑖, 𝑟𝑗) using embeddings generated by Sentence-BERT [12]. Over 35% of reviews (1,687 of 4,717) from the suspicious clusters in 𝐸_boost form at least one pair of highly identical reviews, i.e., 𝐶𝑆(𝑟𝑖, 𝑟𝑗) ≈ 1. However, this percentage drops to 10% (45 of 432 reviews) in the case of 𝐸_sink. On closer inspection, we
find that these are all extremely short reviews, of at most three to four words, that comprise mostly of adjectives; e.g., 𝐸_boost: ('good app', 'very good app'), ('good earning app', 'very good for earning app'), ('best app', 'very best app') and 𝐸_sink: ('bad', 'very bad'), ('super', 'super'), ('nice', 'very nice'). It is surprising that all but four identical pairs from 𝐸_sink contain only positive adjectives considering they assign the app a low rating. A potential reason for this dissonance can be that reviewers writing these reviews want to camouflage as normal users in terms of their rating patterns. Lastly, from the fifty most suspicious clusters, we find such pairs across 47 (94%) clusters from 𝐸_boost and 21 (42%) clusters from 𝐸_sink. This demonstrates that the efficacy of our approach towards detecting lockstep behaviors is not limited to the temporal and rating dimensions, but also extends to the content present in reviews.
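Finding near-identical pairs reduces to thresholding pairwise cosine similarity over review embeddings. A self-contained sketch with toy 3-dimensional vectors standing in for Sentence-BERT embeddings; the threshold value and vectors are illustrative assumptions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def near_identical_pairs(embeddings, threshold=0.95):
    """Return index pairs whose embedding cosine similarity exceeds threshold."""
    pairs = []
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            if cosine(embeddings[i], embeddings[j]) > threshold:
                pairs.append((i, j))
    return pairs

# toy vectors: the first two are nearly parallel, the third is orthogonal
emb = [(1.0, 0.0, 0.1), (0.99, 0.01, 0.1), (0.0, 1.0, 0.0)]
print(near_identical_pairs(emb))  # [(0, 1)]
```

With real data one would encode the review texts first (e.g., via the `sentence-transformers` package) and feed the resulting vectors to the same pairwise check.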
Our current work sheds light on how lax implementation of Google
Play’s policy on fraudulent installs, ratings and reviews empowers
developers of install-incentivizing apps to deplete the trust and
Figure 6: A microcluster anomaly detected by the algorithm where three reviewers boost the overall rating of two install-incentivizing apps, 'Cashyy' and 'App Flame', on the same day.
transparency of the platform. Through the use of permissions that access restricted data and perform restricted actions, developers incorporate dark patterns in these apps to deceive users and extort labor from them in the form of offers. The second form of labor that we study in our work is the writing of fraudulent reviews. We find evidence of their presence qualitatively and show promising results in detecting them algorithmically. Both types of fraud (incentivized installs and reviews) are only made possible by the labor of users who are vulnerable or crowd-workers who are underpaid [11]. This enables developers to extract profits as they get away with violating Google Play's policies without any consequences or accountability. However, a question that remains unanswered is: if reviews under these apps describe exploitative experiences of users, what is it that facilitates their continued exploitation? For now, we can only conjecture that fraudulent positive reviews on install-incentivizing apps suppress the ranks of reviews containing exploitative experiences of users. Whether this holds true or not is a question that remains to be explored in our future work.
[1] Siddharth Bhatia, Rui Liu, Bryan Hooi, Minji Yoon, Kijung Shin, and Christos Faloutsos. 2022. Real-Time Anomaly Detection in Edge Streams. ACM Trans. Knowl. Discov. Data 16, 4, Article 75 (Jan 2022), 22 pages.
[2] Harry Brignull. 2018. Deceptive Designs. Retrieved Jan 27, 2021 from https:
[3] Pew Research Center. 2015. An Analysis of Android App Permissions. Retrieved Apr 15, 2022 from of-android-app-permissions/
[4] Gregory Day and Abbey Stemler. 2020. Are Dark Patterns Anticompetitive? Ala. L. Rev. 72 (2020), 1.
[5] Android Developers. 2022. Permissions on Android. Retrieved Apr 15, 2022 from
[6] Shehroze Farooqi, Álvaro Feal, Tobias Lauinger, Damon McCoy, Zubair Shafiq, and Narseo Vallina-Rodriguez. 2020. Understanding Incentivized Mobile App Installs on Google Play Store. In Proceedings of the ACM Internet Measurement Conference (IMC '20). 696–709.
[7] Google. 2022. User Ratings, Reviews, and Installs. Retrieved Apr 15, 2022 from
[8] Alexander Lex, Nils Gehlenborg, Hendrik Strobelt, Romain Vuillemot, and Hanspeter Pfister. 2014. UpSet: Visualization of Intersecting Sets. IEEE Transactions on Visualization and Computer Graphics (InfoVis) 20, 12 (2014), 1983–1992.
[9] Arunesh Mathur, Mihir Kshirsagar, and Jonathan Mayer. 2021. What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI '21). Article 360, 18 pages.
[10] Matthew B Miles and A Michael Huberman. 1994. Qualitative Data Analysis: An Expanded Sourcebook. Sage.
[11] Mizanur Rahman, Nestor Hernandez, Ruben Recabarren, Syed Ishtiaque Ahmed, and Bogdan Carbunar. 2019. The Art and Craft of Fraudulent App Promotion in Google Play. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS '19). 2437–2454.
[12] Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics.
[13] Somayeh Shojaee, Azreen Azman, Masrah Murad, Nurfadhlina Sharef, and Nasir Sulaiman. 2015. A Framework for Fake Review Annotation. In Proceedings of the 2015 17th UKSIM-AMSS International Conference on Modelling and Simulation.
[14] Ashwin Singh, Arvindh Arun, Pulak Malhotra, Pooja Desur, Ayushi Jain, Duen Horng Chau, and Ponnurangam Kumaraguru. 2022. Install-Incentivising Apps on Google Play. Retrieved May 18, 2022 from
[15] Statista. 2022. Global Google Play app downloads 2016-2021. Retrieved Apr 15, 2022 from
[16] Statista. 2022. Global mobile app install advertising spending 2017-2022. Retrieved Apr 15, 2022 from advertising-spending-global/
[17] Statista. 2022. Google Play: number of available apps 2009-2022. Retrieved Apr 15, 2022 from applications-in-the-google-play-store/