Peter Key

University of Illinois, Urbana-Champaign, Urbana, Illinois, United States

Publications (84) · 33.08 Total Impact Points

  • Source
    ABSTRACT: We examine trade-offs among stakeholders in ad auctions. Our metrics are revenue for the auctioneer's utility, the number of clicks for the users' utility, and welfare for the advertisers' utility. We show how to optimize linear combinations of the stakeholder utilities, and that these combinations can be tackled through a GSP auction with a per-click reserve price. We then examine constrained optimization of stakeholder utilities. We use simulations and analysis of real-world sponsored search auction data to demonstrate the feasible trade-offs, examining the effect of changing the allowed number of ads on the utilities of the stakeholders. We investigate both short-term effects, when the players do not have time to modify their behavior, and long-term equilibrium conditions. Finally, we examine a combinatorially richer constrained optimization problem in which there are several allowed configurations (templates) of ad formats. This model captures richer ad formats, which use the available screen real estate in various ways. We show that two natural generalizations of the GSP auction rules to this domain are poorly behaved: they either lack a symmetric Nash equilibrium or have one with poor welfare. We also provide positive results for restricted cases.
    Full-text · Article · Apr 2014
  •
    ABSTRACT: We demonstrate how crowdsourcing can be used to automatically build a personalized tourist attraction recommender system, which tailors recommendations to specific individuals, so different people who use the system each get their own list of recommendations, appropriate to their own traits. Recommender systems crucially depend on the availability of reliable and large scale data that allows predicting how a new individual is likely to rate items from the catalog of possible items to recommend. We show how to automate the process of generating this data using crowdsourcing, so that such a system can be built even when such a dataset is not initially available. We first find possible tourist attractions to recommend by scraping such information from Wikipedia. Next, we use crowdsourced workers to filter the data, then provide their opinions regarding these items. Finally, we use machine learning methods to predict how new individuals are likely to rate each attraction, and recommend the items with the highest predicted ratings. Copyright © 2014, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
    No preview · Article · Jan 2014
  • G. Blocq · Y. Bachrach · P. Key
    ABSTRACT: We propose an extension to the Assignment Game [37] in which sellers provide indivisible heterogeneous goods to their buyers. Each good takes up various amounts of resources and each seller has capacity constraints with respect to the total amount of resources it can provide. Hence, the total amount of goods that the seller can provide is dependent on the set of buyers. In this model, we first demonstrate that the core is empty and proceed to suggest a fair allocation of the resulting utility of an optimal match, using the Shapley value. We then examine scenarios where the worth and resource demands of each good are private information of selfish buyers and consider ways in which they can manipulate the system. We show that such Shapley value manipulations are bounded in terms of the gain an agent can achieve by using them. Finally, since this model can be of use when considering elastic resource allocation and utility sharing in cloud computing domains, we provide simulation results which show our approach maximizes welfare and, when used as a pricing scheme, can also increase the revenue of the cloud server providers over what is achieved with the widely-used fixed pricing scheme. Copyright © 2014, International Foundation for Autonomous Agents and Multiagent Systems (www.ifaamas.org). All rights reserved.
    No preview · Article · Jan 2014
  • Source
    ABSTRACT: In a sponsored search auction, decisions about how to rank ads impose tradeoffs between objectives such as revenue and welfare. In this paper, we examine how these tradeoffs should be made. We begin by arguing that the most natural solution concept to evaluate these tradeoffs is the lowest symmetric Nash equilibrium (SNE). As part of this argument, we generalise the well-known connection between the lowest SNE and the VCG outcome. We then propose a new ranking algorithm, loosely based on the revenue-optimal auction, that uses a reserve price to order the ads (not just to filter them) and give conditions under which it raises more revenue than simply applying that reserve price. Finally, we conduct extensive simulations examining the tradeoffs enabled by different ranking algorithms and show that our proposed algorithm enables superior operating points by a variety of metrics.
    Preview · Article · Apr 2013
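A minimal sketch of the idea of using a reserve price to order ads rather than only filter them. The scoring rule `q * (b - r)` and all names here are illustrative assumptions, not the paper's exact algorithm:

```python
def rank_with_reserve(bids, qualities, reserve):
    """Rank ads by quality-weighted surplus over a per-click reserve.

    The reserve is used to *order* the ads (score = q * (b - reserve)),
    not merely to filter them; ads bidding below the reserve are excluded.
    Returns advertiser indices in display order.
    """
    eligible = [(q * (b - reserve), b, q, i)
                for i, (b, q) in enumerate(zip(bids, qualities))
                if b >= reserve]
    eligible.sort(reverse=True)  # highest score shown first
    return [i for _, _, _, i in eligible]
```

Note that with this rule a low bid paired with a high quality score can outrank a higher bid, which is precisely how ordering by reserve-adjusted surplus differs from filtering alone.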
  • M. Salek · Y. Bachrach · P. Key
    ABSTRACT: Object localization is an image annotation task which consists of finding the location of a target object in an image. It is common to crowdsource annotation tasks and aggregate responses to estimate the true annotation. While for other kinds of annotations consensus is simple and powerful, it cannot be applied to object localization as effectively due to the task's rich answer space and inherent noise in responses. We propose a probabilistic graphical model to localize objects in images based on responses from the crowd. We improve upon natural aggregation methods such as the mean and the median by simultaneously estimating the difficulty level of each question and skill level of every participant. We empirically evaluate our model on crowdsourced data and show that our method outperforms simple aggregators both in estimating the true locations and in ranking participants by their ability. We also propose a simple adaptive sourcing scheme that works well for very sparse datasets. Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
    No preview · Article · Jan 2013
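A toy illustration of aggregating crowd clicks while jointly estimating worker skill. This simple iterative reweighting heuristic stands in for the paper's probabilistic graphical model; all names are hypothetical:

```python
import statistics

def aggregate_clicks(points, iters=5, eps=1e-6):
    """Estimate a target location from noisy worker clicks.

    Alternates between (a) estimating the location as a skill-weighted
    mean and (b) re-estimating each worker's skill as the inverse of
    their distance to the current estimate. A heuristic in the spirit of
    joint skill/answer inference, not the paper's full model.
    """
    # Initialise with the coordinate-wise median, which is robust to outliers.
    est = (statistics.median(p[0] for p in points),
           statistics.median(p[1] for p in points))
    weights = [1.0] * len(points)
    for _ in range(iters):
        weights = [1.0 / (((x - est[0]) ** 2 + (y - est[1]) ** 2) ** 0.5 + eps)
                   for x, y in points]
        total = sum(weights)
        est = (sum(w * x for w, (x, y) in zip(weights, points)) / total,
               sum(w * y for w, (x, y) in zip(weights, points)) / total)
    return est, weights
```

The returned weights double as a crude ranking of participants: workers far from the consensus end up with low weight, mirroring the paper's finding that skill estimation improves on the plain mean or median.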
  • Source
    ABSTRACT: We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under a given budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the well-known Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.
    Preview · Article · Oct 2012
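The product-limit estimator at the heart of the proposed learning algorithm can be computed directly from right-censored observations. A minimal sketch of the standard Kaplan-Meier estimator (illustrative names; this is the textbook estimator, not the paper's full MDP algorithm):

```python
def kaplan_meier(observations):
    """Product-limit (Kaplan-Meier) estimate of the survival function
    S(t) = P(X > t) from right-censored data.

    `observations` is a list of (time, event) pairs: event=True means the
    value was observed exactly; event=False means it was censored, i.e.
    only known to exceed `time` (e.g. spend cut short by budget exhaustion).
    """
    event_times = sorted({t for t, e in observations if e})
    survival = {}
    s = 1.0
    for t in event_times:
        at_risk = sum(1 for u, _ in observations if u >= t)
        deaths = sum(1 for u, e in observations if e and u == t)
        s *= 1.0 - deaths / at_risk  # multiply in this time-step's survival
        survival[t] = s
    return survival
```

Censored samples still contribute to the at-risk counts before their censoring time, which is what lets the estimator use budget-truncated observations instead of discarding them.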
  • Source
    ABSTRACT: How should agents bid in repeated sequential auctions when they are budget constrained? A motivating example is that of sponsored search auctions, where advertisers bid in a sequence of generalized second price (GSP) auctions. These auctions, specifically in the context of sponsored search, have many idiosyncratic features that distinguish them from other models of sequential auctions: First, each bidder competes in a large number of auctions, where each auction is worth very little. Second, the total bidder population is often large, which means it is unrealistic to assume that the bidders could possibly optimize their strategy by modeling specific opponents. Third, the presence of a virtually unlimited supply of these auctions means bidders are necessarily expense constrained. Motivated by these three factors, we first frame the generic problem as a discounted Markov Decision Process for which the environment is independent and identically distributed over time. We also allow the agents to receive income to augment their budget at a constant rate. We provide a structural characterization of the associated value function and the optimal bidding strategy, which specifies the extent to which agents underbid from their true valuation due to long term budget constraints. We then provide an explicit characterization of the optimal bid shading factor in the limiting regime where the discount rate tends to zero, by identifying the limit of the value function in terms of the solution to a differential equation that can be solved efficiently. Finally, we prove the existence of Mean Field Equilibria for both the repeated second price and GSP auctions with a large number of bidders.
    Preview · Article · May 2012 · SSRN Electronic Journal
  • Source
    Vineet Abhishek · Ian A. Kash · Peter Key
    ABSTRACT: This paper considers two simple pricing schemes for selling cloud instances and studies the trade-off between them. We characterize the equilibrium for the hybrid system where arriving jobs can choose between fixed and market-based pricing. We provide theoretical and simulation-based evidence suggesting that fixed pricing generates a higher expected revenue than the hybrid system.
    Full-text · Conference Paper · Jan 2012
  • Source
    ABSTRACT: We examine designs for crowdsourcing contests, where participants compete for rewards given to superior solutions of a task. We theoretically analyze tradeoffs between the expectation and variance of the principal's utility (i.e. the best solution's quality), and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is also crowdsourcing based and relies on the peer prediction mechanism. Our theoretical analysis shows an expectation-variance tradeoff of the principal's utility in such contests through a Pareto efficient frontier. In particular, we show that the simple contest with 2 authors and the 2-pair contest have good theoretical properties. In contrast, our empirical results show that the 2-pair contest is the superior design among all designs tested, achieving the highest expectation and lowest variance of the principal's utility.
    Full-text · Article · Jan 2012
  • Source
    ABSTRACT: We propose a natural model for agent failures in congestion games. In our model, each of the agents may fail to participate in the game, introducing uncertainty regarding the set of active agents. We examine how such uncertainty may change the Nash equilibria (NE) of the game. We prove that although the perturbed game induced by the failure model is not always a congestion game, it still admits at least one pure Nash equilibrium. Then, we turn to examine the effect of failures on the maximal social cost in any NE of the perturbed game. We show that in the limit case where failure probability is negligible new equilibria never emerge, and that the social cost may decrease but it never increases. For the case of non-negligible failure probabilities, we provide a full characterization of the maximal impact of failures on the social cost under worst-case equilibrium outcomes.
    Full-text · Article · Jan 2012
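The pure-equilibrium existence claim can be illustrated on the baseline (failure-free) model: in a singleton congestion game, best-response dynamics always reach a pure Nash equilibrium, by Rosenthal's potential-function argument. A minimal sketch with hypothetical names; the perturbed, failure-prone game of the paper is not modeled here:

```python
def pure_nash(n_players, cost, n_resources, max_rounds=1000):
    """Find a pure Nash equilibrium of a singleton congestion game by
    best-response dynamics.

    cost(r, load) -> cost borne by each player on resource r when `load`
    players (including them) use it. Returns each player's chosen resource.
    """
    choice = [0] * n_players  # start with everyone on resource 0
    for _ in range(max_rounds):
        stable = True
        for i in range(n_players):
            loads = [0] * n_resources
            for c in choice:
                loads[c] += 1
            # Cost of resource r if player i switched there (load +1 unless staying).
            best = min(range(n_resources),
                       key=lambda r: cost(r, loads[r] + (0 if r == choice[i] else 1)))
            if cost(best, loads[best] + (0 if best == choice[i] else 1)) < \
               cost(choice[i], loads[choice[i]]):
                choice[i] = best
                stable = False
        if stable:
            return choice  # no player can improve: a pure Nash equilibrium
    raise RuntimeError("did not converge")
```

With identical resources and linear costs the dynamics balance the load exactly, a simple check that the returned profile is an equilibrium.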
  • Source
    ABSTRACT: We investigate dynamic channel and rate selection and scheduling for wireless systems which exploit the large number of channels available in the white-space spectrum. We first present measurements of radio channel characteristics from an indoor testbed operating in the 500 to 600 MHz band and comprising 11 channels. We observe significant and unpredictable (non-stationary) variations in the quality of these channels, and demonstrate the potential throughput benefit of tracking the best channel and of optimally adapting the transmission rate. We propose adaptive learning schemes able to efficiently track the best channel and rate for transmission, even in scenarios with non-stationary channel condition variations. We also describe a joint scheduling scheme for providing fairness in an Access Point scenario. Finally, we implement the proposed adaptive scheme in our testbed, and demonstrate that it achieves significant throughput improvement (typically from 40% to 100%) compared to traditional fixed channel selection schemes.
    Full-text · Article · Dec 2011
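Tracking the best channel under non-stationary conditions is in the spirit of a multi-armed bandit. A minimal epsilon-greedy sketch with exponentially discounted estimates; this is an illustrative stand-in, not the paper's scheme, and `throughput(ch, t)` is an assumed interface:

```python
import random

def select_channels(throughput, n_channels, horizon, eps=0.1, decay=0.9, seed=0):
    """Epsilon-greedy channel selection with discounted throughput
    estimates, so the learner can keep tracking a non-stationary best
    channel. `throughput(ch, t)` returns the observed throughput when
    transmitting on channel `ch` at time t (a hypothetical callback).
    """
    rng = random.Random(seed)
    est = [0.0] * n_channels  # discounted throughput estimate per channel
    total = 0.0
    for t in range(horizon):
        if rng.random() < eps:
            ch = rng.randrange(n_channels)                    # explore
        else:
            ch = max(range(n_channels), key=est.__getitem__)  # exploit
        r = throughput(ch, t)
        total += r
        est[ch] = decay * est[ch] + (1 - decay) * r  # old samples fade out
    return total, est
```

The discounting is what distinguishes this from a plain bandit average: when channel quality drifts, stale observations lose influence and the learner re-ranks the channels.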
  • Source
    ABSTRACT: Self-interference cancellation is a challenging task. Links can be scheduled concurrently, but only if they either (i) don't interfere or (ii) allow for self-interference cancellation. Two issues arise: first, it is difficult to construct a schedule that fully exploits the potential for self-interference cancellation for arbitrary traffic patterns. Second, designing an efficient and fair distributed MAC is a daunting task; the issues become even more pronounced when scheduling under these constraints. We propose ContraFlow, a novel MAC that exploits the benefits of self-interference cancellation and increases spatial reuse. We use full-duplex to eliminate hidden terminals, and we rectify decentralized coordination inefficiencies among nodes, thereby improving fairness. Using measurements and simulations we illustrate the performance gains achieved when ContraFlow is used, obtaining both a throughput increase over current systems and a significant improvement in fairness.
    Preview · Conference Paper · May 2011
  • Source
    ABSTRACT: Home networks comprise applications running over multiple wired and wireless devices competing for shared network resources. Although all the devices in such networks operate in a single administrative domain, applications operate independently, and users cannot express or enforce policies. By studying multiple households' network performance at the packet level, correlated with diaries capturing user experiences, we show that the lack of cooperation across applications leads to observable performance problems and associated user frustration. We describe HomeMaestro, a cooperative host-based system that monitors local and global application performance and automatically detects contention for network resources. HomeMaestro is designed to manage home and small networks and requires no modification to routers, access points, applications, or protocols. At each host, it transparently monitors per-flow and per-process network usage statistics, such as throughput, RTT, and loss rates.
    Preview · Article · Apr 2011
  • Source
    ABSTRACT: The elegant Vickrey Clarke Groves (VCG) mechanism is well-known for the strong properties it offers: dominant truth-revealing strategies, efficiency and weak budget-balance in quite general settings. Despite this, it suffers from several drawbacks, prominently susceptibility to collusion. By jointly setting their bids, colluders may increase their utility by achieving lower prices for their items. The colluders can use monetary transfers to share this utility, but they must reach an agreement regarding their actions. We analyze the agreements that are likely to arise through a cooperative game theoretic approach, transforming the auction setting into a cooperative game. We examine both the setting of a multi-unit auction as well as path procurement auctions.
    Preview · Article · Mar 2011 · ACM SIGecom Exchanges
  • Source
    ABSTRACT: In this paper, we investigate the benefits that accrue from the use of multiple paths by a session coupled with rate control over those paths. In particular, we study data transfers under two classes of multipath control, coordinated control where the rates over the paths are determined as a function of all paths, and uncoordinated control where the rates are determined independently over each path. We show that coordinated control exhibits desirable load balancing properties; for a homogeneous static random paths scenario, we show that the worst-case throughput performance of uncoordinated control behaves as if each user has but a single path (scaling like log(log(N))/log(N), where N is the system size, measured in number of resources), whereas coordinated control yields a worst-case throughput allocation bounded away from zero. We then allow users to change their set of paths and introduce the notion of a Nash equilibrium. We show that both coordinated and uncoordinated control lead to Nash equilibria corresponding to desirable welfare maximizing states, provided in the latter case, the rate controllers over each path do not exhibit any round-trip time (RTT) bias (unlike TCP Reno). Finally, we show in the case of coordinated control that more paths are better, leading to greater welfare states and throughput capacity, and that simple path reselection policies that shift to paths with higher net benefit can achieve these states.
    Full-text · Article · Jan 2011 · Communications of the ACM
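The worst-case contrast between uncoordinated and coordinated control echoes the classic balls-into-bins "power of two choices" phenomenon, where sampling even two resources and joining the less loaded one collapses the maximum load. A small simulation sketch of that phenomenon (an analogy to build intuition, not the paper's model):

```python
import random

def max_load(n, choices, seed=0):
    """Throw n balls into n bins; each ball samples `choices` bins
    uniformly at random and joins the least loaded of them.

    With choices=1 the maximum load grows like log(n)/log(log(n)),
    so the worst-off resource is heavily overloaded; with choices=2 it
    drops to about log(log(n)), i.e. coordination across even a few
    alternatives dramatically improves the worst case.
    """
    rng = random.Random(seed)
    bins = [0] * n
    for _ in range(n):
        candidates = [rng.randrange(n) for _ in range(choices)]
        best = min(candidates, key=bins.__getitem__)
        bins[best] += 1
    return max(bins)
```

Since the worst-case throughput of a resource scales inversely with its load, the gap between the two curves is the qualitative gap the paper proves between uncoordinated and coordinated multipath control.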
  • Source
    Furcy Pin · Peter Key
    ABSTRACT: Sponsored search advertisement slots are currently sold via Generalized Second Price (GSP) auctions. Despite the simplicity of their rules, these auctions are far from being fully understood. Our observations on real ad-auction data show that advertisers usually enter many distinct auctions with different opponents and with varying parameters. We describe some of our findings from these observations and propose a simple probabilistic model taking them into account. This model can be used to predict the number of clicks received by the advertisers and the total price they can expect to pay depending on their bid, or even to estimate the players' valuations, all at a very low computational cost.
    Preview · Conference Paper · Jan 2011
  • Source
    ABSTRACT: We consider collusion in path procurement auctions, where payments are determined using the VCG mechanism. We show that collusion can increase the utility of the agents, and in some cases they can extract any amount the procurer is willing to offer. We show that computing how much a coalition can gain by colluding is NP-complete in general, but that in certain interesting restricted cases, the optimal collusion scheme can be computed in polynomial time. We examine the ways in which the colluders might share their payments, using the core and Shapley value from cooperative game theory. We show that in some cases the collusion game has an empty core, so although beneficial manipulations exist, the colluders would find it hard to form a stable coalition due to inability to decide how to split the rewards. On the other hand, we show that in several common restricted cases the collusion game is convex, so it has a non-empty core, which contains the Shapley value. We also show that in these cases colluders can compute core imputations and the Shapley value in polynomial time.
    Full-text · Conference Paper · Dec 2010
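The Shapley value used to split collusion gains can be computed exactly by averaging marginal contributions over all player orderings. A minimal sketch (hypothetical names; this brute-force version is exponential in the number of players, consistent with the paper's point that polynomial-time computation holds only in restricted cases):

```python
from itertools import permutations

def shapley(players, v):
    """Exact Shapley value of a cooperative game.

    `v` maps a frozenset coalition to its worth. Each player's value is
    their marginal contribution averaged over all join orders.
    """
    phi = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)  # marginal contribution
            coalition = coalition | {p}
    return {p: phi[p] / len(perms) for p in phi}
```

For an additive game the Shapley value recovers each player's own weight, a quick sanity check that the averaging is right; efficiency (the shares sum to the grand-coalition worth) holds by construction.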
  • Source
    ABSTRACT: Existing indoor WiFi networks in the 2.4 GHz and 5 GHz bands use high transmit power, needed because the high carrier frequency limits signal penetration and connectivity. Instead, we propose a novel indoor wireless mesh design paradigm based on low frequency, using the newly freed white spaces previously used as analogue TV bands, and low power: 100 times less power than currently used. Preliminary experiments show that this maintains a similar level of connectivity and performance to existing networks. It also yields more uniform connectivity, which simplifies MAC and routing protocol design. We also advocate full-duplex networking in a single band, which becomes possible in this setting because we operate at low frequencies. It potentially doubles the throughput of each link and eliminates hidden terminals.
    Full-text · Conference Paper · Jul 2010
  • Source
    ABSTRACT: We consider opportunistic routing in wireless mesh networks. We exploit the inherent diversity of the broadcast nature of wireless by making use of multipath routing. We present a novel optimization framework for opportunistic routing based on network utility maximization (NUM) that enables us to derive optimal flow control, routing, scheduling, and rate adaptation schemes, where we use network coding to ease the routing problem. All previous work on NUM assumed unicast transmissions; however, the wireless medium is by its nature broadcast and a transmission will be received by multiple nodes. The structure of our design is fundamentally different; this is due to the fact that our link rate constraints are defined per broadcast region instead of links in isolation. We prove optimality and derive a primal-dual algorithm that lays the basis for a practical protocol. Optimal MAC scheduling is difficult to implement, and we use 802.11-like random scheduling rather than optimal scheduling in our comparisons. Under random scheduling, our protocol becomes fully decentralized (we assume ideal signaling). The use of network coding introduces additional constraints on scheduling, and we propose a novel scheme to avoid starvation. We simulate realistic topologies and show that we can achieve 20%-200% throughput improvement compared to single-path routing, and a several-fold improvement compared to a recent related opportunistic protocol (MORE).
    Full-text · Article · May 2010 · IEEE/ACM Transactions on Networking
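The primal-dual flavour of such NUM algorithms can be illustrated on a single shared link with logarithmic utilities. A toy dual-decomposition sketch (illustrative names and a single-resource setting, not the paper's broadcast-region formulation):

```python
def num_rates(n_users, capacity, steps=5000, gamma=0.01):
    """Dual-decomposition rate control for network utility maximisation:
    maximise sum_i log(x_i) subject to sum_i x_i <= capacity on one link.

    A link price (dual variable) rises when the link is over-subscribed,
    and each user best-responds with x_i = 1/price, the maximiser of
    log(x) - price * x. At the fixed point the rates share the capacity
    equally, the proportionally fair allocation for identical users.
    """
    price = 1.0
    rates = [1.0 / price] * n_users
    for _ in range(steps):
        rates = [1.0 / price] * n_users  # each user's best response to the price
        excess = sum(rates) - capacity   # capacity violation drives the price
        price = max(1e-6, price + gamma * excess)
    return rates
```

The same price-update/best-response loop generalises to many links and flows, which is the sense in which a primal-dual algorithm "lays the basis for a practical protocol": each node only needs local prices, not global state.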
  • Source
    ABSTRACT: As more technologies enter the home, householders are burdened with the task of digital housekeeping: managing and sharing digital resources like bandwidth. In response to this, we created and evaluated a domestic tool for bandwidth management called Home Watcher. Our field trial showed that when resource contention amongst different household members is made visible, people's understanding of bandwidth changes and household politics are revealed. In this paper, we describe the consequences of showing real-time resource usage in a home, and how this varies depending on the social make-up of the household.
    Full-text · Conference Paper · Apr 2010

Publication Stats

2k Citations
33.08 Total Impact Points

Institutions

  • 2012
    • University of Illinois, Urbana-Champaign
      Urbana, Illinois, United States
  • 1999-2012
    • Microsoft
      Redmond, Washington, United States
  • 2006-2011
    • Cancer Research UK Cambridge Institute
      Cambridge, England, United Kingdom
  • 2010
    • University of Nebraska at Kearney
      Kearney, Nebraska, United States
  • 2007
    • University of Massachusetts Amherst
      • School of Computer Science
      Amherst Center, Massachusetts, United States
  • 1994
    • University of Cambridge
      • Computer Laboratory
      Cambridge, ENG, United Kingdom