A Secure and Privacy-Preserving Targeted Ad-System
Elli Androulaki and Steven M. Bellovin
Abstract. Thanks to its low product-promotion cost and its efficiency, targeted
online advertising has become very popular. Unfortunately, being profile-based,
online advertising methods violate consumers’ privacy, which has engendered
resistance to the ads. However, protecting privacy through anonymity seems to
encourage click-fraud. In this paper, we define consumers' privacy and present a
privacy-preserving, targeted ad system (PPOAd) which is resistant to click
fraud. Our scheme is structured to provide financial incentives to all entities involved.

1 Introduction
Thanks to its ability to target audiences combined with its low cost, online ad-
vertising has become very popular throughout the past decade. However, cur-
rent profile-based advertising techniques raise privacy risks and may contra-
vene users’ expectations, while privacy-preserving techniques, e.g., anonymous
browsing, create many opportunities for fraud. In this way, security and privacy
seem to contradict each other. In this paper we show that the aforementioned
concepts are not mutually exclusive. In particular, we analyze the privacy con-
cerns raised by online advertising as well as the subsequent security issues, and
present a system that provides both privacy and guaranteed fraud detection.
Privacy Concern: Targeted Ads To increase their banner ads' effectiveness,
publishers (usually service-oriented websites paid to show advertising spots of
advertisers' products) rely on ad networks to tailor the ads they display.
More specifically, third-party cookies enable special ad networks to track users'
browsing activity across multiple websites, construct very accurate user-profiles
[KW06], and target ads accordingly. These advertising models track users even
on sensitive sites, such as medical information websites, which could result in
embarrassing advertisements appearing on other sites and in other contexts. A
recent study [TKH+09] shows broad rejection of the concept:
[...] not want marketers to tailor advertisements to their interests. Moreover,
when Americans are informed of three common ways that marketers
gather data about people in order to tailor ads, even higher percentages
–between 73% and 86%—say they would not want such advertising.
The study found that over half of Americans felt that the punishment for ille-
gal use of personal information should be jail time for the executives or that
the company “be put out of business”. The privacy issues become more serious
when a conversion takes place, i.e., an online credit-card-based purchase or any
activity which requires a login, thus linking a profile to a particular identity.
Security Concerns: Fraudulent Clicks In the mechanism described before,
publishers and ad-networks get paid by the advertisers in proportion to the num-
ber of clicks an advertisement receives from users. To dishonestly increase their
revenue, publishers often fake clicks on ads. The existing privacy-preserving
techniques, such as anonymizing networks, make detection of fraudulent clicks
more difficult as all user identification elements are concealed.
Our Contribution In this paper we present an online target advertising tech-
nique combining both privacy and security, PPOAd. More specifically,
1. we provide a concrete definition of consumers' privacy,
2. we introduce an ad-system infrastructure guaranteeing similar or better revenues for all the entities involved,
3. we present a privacy-preserving mechanism for click-fraud detection and
show how this mechanism is applied in our system, and
4. we base our protocols on e-cash and unlinkable credential systems.
Organization In the following section we present current ad-systems' architecture.
In sections 3 and 4 we present our system's requirements, threat model
and protocols, while in sections 5 and 6 we elaborate on our system's
security, privacy and innovation w.r.t. the existing work.
2 Targeted-Ads System Architecture
The principal parties are advertisers, ad networks and publishers. Adver-
tisers are the companies selling and promoting a particular product or group of
products. Publishers are usually service-oriented websites paid to publish ad-
vertisements of advertisers’ products. Ad networks are paid by advertisers to
choose the list of advertisements which will appear on publishers and filter the
clicks the ads receive. Typical examples of ad-networks are Doubleclick (owned
by Google), Atlas Solutions (owned by Microsoft), and Brightcove. It is
often the case that an ad network offers various services and also acts as a publisher.
When a user visits a website (publisher), the browser sends to the publisher
some pieces of information called cookies, which link multiple visits of the
same user. In fact, a special type of cookie, the third-party cookie, is sent
to ad networks, which use it to track user activity across multiple websites. In this way, especially as ad networks
collaborate with many publishers, they construct very accurate user profiles and
target ads accordingly. There are many policies regarding how ad-networks and
publishers are paid. The most popular one is the “cost per click” (CPC), where
both parties are paid by the advertisers in proportion to the number of clicks the
latters’ ads receive.
As shown above, targeted ads violate privacy, while the CPC payment
method motivates many attacks: publishers may fake clicks on ads they publish
to increase their income, while advertisers may generate clicks on their competi-
tors’ advertisements to deplete the latter’s daily advertising budget. Detection
of click-fraud is currently the responsibility of ad networks. Unfortunately, it is
apparent that any conventional mechanism concealing users’ browsing activity
may strengthen click fraud.
3 Requirements-Threat Model
In this section we define privacy, security and deployability in the context
of our system, w.r.t. our requirements and threat model. We start by defining
the notions of privacy and security in our system. Privacy refers to user protection, while security refers to the
protection of the other entities of the system. More specifically, we define
privacy as the union of:
– User Activity Unlinkability. No system entity should be able to profile a
particular honest user, i.e., link two or more browsing activities as having
originated by the same party, and
– User Anonymity. No system entity should be able to link a particular brows-
ing activity to an identity.
In addition, we define security as the combination of the following properties:
– Correctness. We require that if all parties are honest, advertisers will pay
publishers and ad networks in accordance with the number of clicks their ads
have received, while privacy is maintained.
– Fairness. We require that parties in our system will be paid if and only if
they perform their duties properly.
– Accountability. Our system should also be accountable, i.e., misbehaving
parties should be detected and identified.
– Unframability. We require that no party can frame an honest user as being
responsible for a misbehavior, i.e., for click fraud. Arguably, strong accountability implies unframability.
– No Mis-Authentication. Unless authorized, no user should be able to make use
of our system.
We can easily see how the click fraud detection requirement is covered through
the fairness and accountability requirements: fairness requires that publishers
should not receive payments for fake clicks on a particular advertisement, while
accountability requires that the attacker is traced.
In addition, we require that our system provide similar ad-efficiency, which
would result in similar profitability for the parties involved. At least as important,
it must be deployable. Similar ad-efficiency and, thus, similar profitability for
publishers and ad networks aims to eliminate any monetary constraints against
the adoption of a new system. Deployability is important for the same reasons.
We examine deployability from two aspects: (a) w.r.t. our system's architecture:
no substantial changes in current ad-system architecture should be required for
our protocols to be applied; (b) w.r.t. our threat model, where we make realistic assumptions.
It is essential to note that both privacy and security provisions are required
at the application layer. Also, we extend the current ad-system architecture with
a single entity — which may or may not be distributed — the User Ad Proxy
(UAP), which acts as a mediator between the user and each visiting website.
Threat Model. Ad-systems' strongly monetary nature makes “following the
money” the safest way to define our adversaries' motives and powers. In what
follows, we examine our adversary w.r.t. users' privacy and the ad-system's security.
Publishers may be “curious” w.r.t. users' privacy, i.e., they may collaborate
with ad networks, advertisers or other users in order to reveal the identity of a
particular user or to link browsing activities of the same user. In addition, we
assume that publishers are “honest and dishonest” w.r.t. the ad networks and
advertisers. In particular, we assume that they do provide correct user-profile
related information to the ad networks, but may attempt to fake clicks to the
advertisements they publish in order to increase their revenues.
Ad networks’ revenues depend on the efficiency of the way they list ads
in the various publishers, as well as on their credibility. Ads’ efficiency de-
pends on the accuracy of user-profiling, while credibility depends on the ad
network's click-fraud detectability. It is, consequently, reasonable to assume
that ad networks are “honest but curious” w.r.t. users, while they are “honest” w.r.t. the advertisers.
Advertisers are considered to be “curious” w.r.t. the users. In particular,
since advertisers have no direct interaction with them, we believe that they may
collaborate with publishers or ad-networks to make user-profiling more accurate.
UAP is considered to be “honest but curious” w.r.t. the users. More specif-
ically, we assume that UAP is trusted to perform its functional operations hon-
estly towards the users, but may collaborate with publishers or any other entity
to link separate browsing activities of the same user. We also adopt an economic
model so that UAP has no motive to cheat the advertisers.
4 A Privacy-Preserving Targeted-Ad System
As mentioned in the previous section, we extend the current ad-system architec-
ture with the User Ad Proxy (UAP). UAP may be considered either as a single
entity or as a group of collaborating entities and acts as a communication medi-
ator between a user U and the publisher-website Pub that U visits. It is important
to note that, to hide any lower-layer information emitted, U interacts with the
rest of the system entities through an anonymizing network. In addition, to automatically
erase any cookies acquired and to communicate with UAP or
a UAP member (if UAP is distributed), the user side installs a piece of software which
essentially establishes an anonymous, communication-layer registration of the
user with the UAP.
The three core operations of our system are: (a) the registration procedure of
a user U at PPOAd, during which U obtains credentials to use the services of
UAP; (b) the visit to a publisher, where a PPOAd user requests a webpage; and
(c) the ad-clicking procedure, where the user clicks on one of the publisher's
ads (fig. 1). For convenience, we will assume that a user U is interacting with a
publisher Pub. In addition, we will assume a single UAP, while in section 5, we
will refer to the distributed UAP case.
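The three operations can be sketched end to end as follows (toy Python; the class and method names are ours, and all cryptography, blinding in particular, is elided):

```python
import secrets

class UAP:
    """Toy stand-in for the User Ad Proxy: registers users and mediates
    page visits. Real PPOAd credentials are blind and unlinkable; here
    they are plain random strings for illustration only."""
    def __init__(self, max_clicks=3):
        self.max_clicks = max_clicks
        self.valid_regticks = set()

    def register(self, user):
        # (a) Registration: issue a registration credential regtick
        # and a wallet of adticks for later ad clicks.
        regtick = secrets.token_hex(16)
        wallet = [secrets.token_hex(16) for _ in range(self.max_clicks)]
        self.valid_regticks.add(regtick)
        return regtick, wallet

    def visit(self, regtick, url):
        # (b) Visit: check membership and hand back a fresh,
        # session-oriented ticket `tick` for this page view.
        if regtick not in self.valid_regticks:
            raise PermissionError("not a PPOAd member")
        return {"tick": secrets.token_hex(8), "page": url}

    def click(self, wallet, ad):
        # (c) Ad click: spend one adtick per click; an empty wallet
        # means the MaxClicks budget is exhausted.
        if not wallet:
            raise RuntimeError("MaxClicks exhausted")
        return {"adtick": wallet.pop(), "ad": ad}

uap = UAP()
regtick, wallet = uap.register("alice")
page = uap.visit(regtick, "pub.example/index")
click = uap.click(wallet, "shoes-ad")
```

In the real protocol the session tickets derived in step (b) are unlinkable to regtick, so repeated visits cannot be tied to one user.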
Our scheme is based on two types of tokens, issued through the user-UAP
collaboration during the registration procedure: a registration credential
regtick, which authorizes U as a member of PPOAd multiple times anonymously
and unlinkably, and a wallet of adticks, Wadtick, which enables U to click
on ads. regticks are blind towards the UAP; their possession can be demonstrated
by their owner anonymously and unlinkably many times, each time resulting
in a session-oriented ticket tick. Also issued through the collaboration between
U and the UAP, adticks are blind towards the UAP and can be used only
a limited number of times (MaxClicks), strictly by the person who issued them.
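The MaxClicks limit can be sketched as follows (our own simplification: a real deployment would use blindly signed e-cash with double-spending tags, not a server-side serial list, since the latter is linkable):

```python
import secrets

MAX_CLICKS = 5  # stands in for the paper's MaxClicks parameter

def issue_wallet(n=MAX_CLICKS):
    """Issue a wallet of n one-time adticks (random serial numbers).
    In PPOAd these would be blindly signed, so the UAP cannot link
    a spent tick back to the issuing session."""
    return [secrets.token_hex(16) for _ in range(n)]

class ClickVerifier:
    """Accepts each adtick serial at most once, bounding a user's
    clicks to the wallet size and flagging replays as fraud."""
    def __init__(self):
        self.seen = set()

    def spend(self, adtick):
        if adtick in self.seen:
            return "double-spend: flag for click fraud"
        self.seen.add(adtick)
        return "click accepted"

wallet = issue_wallet()
verifier = ClickVerifier()
results = [verifier.spend(t) for t in wallet]  # MaxClicks clicks accepted
replay = verifier.spend(wallet[0])             # re-use detected
```

Because each adtick is spendable once, a publisher replaying a user's clicks is caught immediately, even though honest clicks remain anonymous.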