Incentive-Compatible Online Auctions for Digital Goods

Source: CiteSeer


Goldberg et al. [6] recently began the study of incentive-compatible auctions for digital goods, that is, goods which are available in unlimited supply. Many digital goods, however, such as books, music, and software, are sold continuously, rather than in a single round, as is the case for traditional auctions. Hence, it is important to consider what happens in the online version of such auctions. We define a model for online auctions for digital goods, and within this model, we examine auctions in which bidders have an incentive to bid their true valuations, that is, incentive-compatible auctions. Since the best offline auctions achieve revenue comparable to the revenue of the optimal fixed pricing scheme, we use the latter as our benchmark. We show that deterministic auctions perform poorly relative to this benchmark, but we give a randomized auction which is within a factor O(exp(√(log log h))) of the benchmark, where h is the ratio between the highest and lowest bids. As part of this result, we also give a new offline auction, which improves upon the previously best auction in a certain class of auctions for digital goods. We also give lower bounds for both randomized and deterministic online auctions for digital goods.
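The benchmark mentioned above, the revenue of the optimal fixed pricing scheme, is the best revenue obtainable by posting a single price and selling to every bidder willing to pay it. A minimal sketch of computing this benchmark from a set of bids (the function name and example bids are illustrative, not from the paper):

```python
def optimal_fixed_price_revenue(bids):
    """Return (best_price, revenue) for the optimal single-price sale.

    Revenue at price p is p times the number of bids >= p, so it
    suffices to check prices equal to the bid values themselves.
    """
    best_price, best_revenue = 0, 0
    for i, p in enumerate(sorted(bids, reverse=True)):
        revenue = p * (i + 1)  # i + 1 bidders have value >= p
        if revenue > best_revenue:
            best_price, best_revenue = p, revenue
    return best_price, best_revenue

# With bids 5, 4, 3, 2: price 5 yields 5, price 4 yields 8,
# price 3 yields 9, price 2 yields 8 -- so the benchmark is 9.
```

In the online setting the auction must commit to a price for each bid as it arrives, which is why matching this offline benchmark is nontrivial.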

    • "While in this model the agent's value is private, her arrival time is public. A wide variety of additional online auction settings, such as digital goods [6] [4] and combinatorial auctions [2], have been studied under the assumption of private values and public arrival times. Similarly to our model and the model presented in [20], online settings in which agents arrive in a random order were also considered in [3] [23] [22]. "
    ABSTRACT: We study online auction settings in which agents arrive and depart dynamically in a random (secretary) order, and each agent's private type consists of the agent's arrival and departure times, value and budget. We consider multi-unit auctions with additive agents for the allocation of both divisible and indivisible items. For both settings, we devise truthful mechanisms that give a constant approximation with respect to the auctioneer's revenue, under a large market assumption. For divisible items, we devise in addition a truthful mechanism that gives a constant approximation with respect to the liquid welfare --- a natural efficiency measure for budgeted settings introduced by Dobzinski and Paes Leme [ICALP'14]. Our techniques provide high-level principles for transforming offline truthful mechanisms into online ones, with or without budget constraints. To the best of our knowledge, this is the first work that addresses the non-trivial challenge of combining online settings with budgeted agents.
    Full-text · Article · Apr 2015
    • "the reflection principle: the random walk with reflection at 1 and absorption at 0 is equivalent to a random walk between [0] [2] with absorption at both 0 and 2. Thus, reaching u in the old walk is equivalent to reaching one of u and its reflection 2 − u in the new walk. Equation (5) is Exercise 17.1 in [14] (proved by considering a martingale related to V t , namely M t = Y 3 t − 3tY t , where Y t = "
    ABSTRACT: We consider pricing in settings where a consumer discovers his value for a good only as he uses it, and the value evolves with each use. We explore simple and natural pricing strategies for a seller in this setting, under the assumption that the seller knows the distribution from which the consumer's initial value is drawn, as well as the stochastic process that governs the evolution of the value with each use. We consider the differences between up-front or "buy-it-now" pricing (BIN), and "pay-per-play" (PPP) pricing, where the consumer is charged per use. Our results show that PPP pricing can be a very effective mechanism for price discrimination, and thereby can increase seller revenue. But it can also be advantageous to the buyers, as a way of mitigating risk. Indeed, this mitigation of risk can yield a larger pool of buyers. We also show that the practice of offering free trials is largely beneficial. We consider two different stochastic processes for how the buyer's value evolves: In the first, the key random variable is how long the consumer remains interested in the product. In the second process, the consumer's value evolves according to a random walk or Brownian motion with reflection at 1, and absorption at 0.
    Preview · Article · Nov 2014
    • "Kleinberg and Leighton study a posted price repeated auction with goods sold sequentially to T bidders who either all have the same fixed private value, private values drawn from a fixed distribution, or private values that are chosen by an oblivious adversary (an adversary that acts independently of observed seller behavior) [15] (see also [7] [8] [14]). Cesa-Bianchi et al. study a related problem of setting the reserve price in a second price auction with multiple (but not repeated) bidders at each round [9]. "
    ABSTRACT: Inspired by real-time ad exchanges for online display advertising, we consider the problem of inferring a buyer's value distribution for a good when the buyer is repeatedly interacting with a seller through a posted-price mechanism. We model the buyer as a strategic agent, whose goal is to maximize her long-term surplus, and we are interested in mechanisms that maximize the seller's long-term revenue. We define the natural notion of strategic regret --- the lost revenue as measured against a truthful (non-strategic) buyer. We present seller algorithms that are no-(strategic)-regret when the buyer discounts her future surplus --- i.e. the buyer prefers showing advertisements to users sooner rather than later. We also give a lower bound on strategic regret that increases as the buyer's discounting weakens and shows, in particular, that any seller algorithm will suffer linear strategic regret if there is no discounting.
    Preview · Article · Nov 2013 · Advances in Neural Information Processing Systems