Peter Key

Microsoft, Redmond, Washington, United States

Publications (79) · 26.68 Total Impact Points

  • Source
    ABSTRACT: We examine trade-offs among stakeholders in ad auctions. Our metrics are revenue for the utility of the auctioneer, the number of clicks for the utility of the users, and welfare for the utility of the advertisers. We show how to optimize linear combinations of the stakeholder utilities, demonstrating that these can be tackled through a GSP auction with a per-click reserve price. We then examine constrained optimization of stakeholder utilities. We use simulations and analysis of real-world sponsored search auction data to demonstrate the feasible trade-offs, examining the effect of changing the allowed number of ads on the utilities of the stakeholders. We investigate both short-term effects, when the players do not have time to modify their behavior, and long-term equilibrium conditions. Finally, we examine a combinatorially richer constrained optimization problem, where there are several possible allowed configurations (templates) of ad formats. This model captures richer ad formats, which allow using the available screen real estate in various ways. We show that two natural generalizations of the GSP auction rules to this domain are poorly behaved, resulting in either no symmetric Nash equilibrium or one with poor welfare. We also provide positive results for restricted cases.
    04/2014;
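    A minimal sketch of a rank-by-bid GSP allocation with a per-click reserve price, the mechanism family the abstract optimizes over (bids, slot count, and reserve are illustrative; the paper's quality-weighted details are omitted):

        # Ads bidding below the reserve r are filtered, slots go to the highest
        # remaining bids, and each winner pays the larger of the next bid and r.
        def gsp_with_reserve(bids, slots, r):
            eligible = sorted((b for b in bids if b >= r), reverse=True)
            winners = eligible[:slots]
            prices = []
            for i in range(len(winners)):
                nxt = eligible[i + 1] if i + 1 < len(eligible) else 0.0
                prices.append(max(nxt, r))
            return winners, prices

        print(gsp_with_reserve([5, 3, 1.5, 1], slots=2, r=2.0))
        # -> ([5, 3], [3, 2.0]): raising r trades clicks for revenue.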
  • Source
    ABSTRACT: In a sponsored search auction, decisions about how to rank ads impose tradeoffs between objectives such as revenue and welfare. In this paper, we examine how these tradeoffs should be made. We begin by arguing that the most natural solution concept for evaluating these tradeoffs is the lowest symmetric Nash equilibrium (SNE). As part of this argument, we generalise the well-known connection between the lowest SNE and the VCG outcome. We then propose a new ranking algorithm, loosely based on the revenue-optimal auction, that uses a reserve price to order the ads (not just to filter them), and we give conditions under which it raises more revenue than simply applying that reserve price. Finally, we conduct extensive simulations examining the tradeoffs enabled by different ranking algorithms and show that our proposed algorithm enables superior operating points by a variety of metrics.
    04/2013;
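    The abstract does not spell out the ranking formula; a hedged reading, sketched here purely as an assumption, is to order eligible ads by quality-weighted surplus over the reserve rather than using the reserve only as a filter:

        # 'filter' ranks by quality-weighted bid q*b; 'order' (the assumed
        # reading of the proposed scheme) ranks by q*(b - r). The two can
        # disagree, which is what changes the revenue/welfare operating point.
        def rank(ads, r, reserve_orders):
            eligible = [(q, b) for q, b in ads if b >= r]
            key = (lambda a: a[0] * (a[1] - r)) if reserve_orders else (lambda a: a[0] * a[1])
            return sorted(eligible, key=key, reverse=True)

        ads = [(0.9, 3.0), (0.3, 8.0)]                 # (quality, bid), illustrative
        print(rank(ads, r=2.0, reserve_orders=False))  # high-quality ad first
        print(rank(ads, r=2.0, reserve_orders=True))   # high-bid ad promoted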
  • Source
    ABSTRACT: We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the well-known Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.
    10/2012;
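    A minimal sketch of the product-limit (Kaplan-Meier) estimator the learning algorithm builds on, applied to right-censored counts (data are illustrative): when the budget is exhausted, the quantity of interest is only known to exceed the observed value.

        def kaplan_meier(observations):
            """observations: (value, censored) pairs; returns S(t) at event values."""
            times = sorted({v for v, c in observations if not c})
            s, curve = 1.0, {}
            for t in times:
                at_risk = sum(1 for v, _ in observations if v >= t)
                events = sum(1 for v, c in observations if v == t and not c)
                s *= 1.0 - events / at_risk
                curve[t] = s
            return curve

        data = [(3, False), (5, True), (5, False), (8, False), (9, True)]
        print(kaplan_meier(data))   # {3: 0.8, 5: 0.6, 8: 0.3}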
  •
    ABSTRACT: How should agents bid in repeated sequential auctions when they are budget constrained? A motivating example is that of sponsored search auctions, where advertisers bid in a sequence of generalized second price (GSP) auctions. These auctions, specifically in the context of sponsored search, have many idiosyncratic features that distinguish them from other models of sequential auctions. First, each bidder competes in a large number of auctions, where each auction is worth very little. Second, the total bidder population is often large, which means it is unrealistic to assume that the bidders could possibly optimize their strategy by modeling specific opponents. Third, the presence of a virtually unlimited supply of these auctions means bidders are necessarily budget constrained. Motivated by these three factors, we first frame the generic problem as a discounted Markov Decision Process for which the environment is independent and identically distributed over time. We also allow the agents to receive income to augment their budget at a constant rate. We provide a structural characterization of the associated value function and the optimal bidding strategy, which specifies the extent to which agents underbid from their true valuation due to long-term budget constraints. We then provide an explicit characterization of the optimal bid shading factor in the limiting regime where the discount rate tends to zero, by identifying the limit of the value function in terms of the solution to a differential equation that can be solved efficiently. Finally, we prove the existence of Mean Field Equilibria for both the repeated second price and GSP auctions with a large number of bidders.
    05/2012;
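    A hedged sketch of the kind of MDP in question, simplified to a repeated second-price auction with a fixed valuation, uniform competing prices, and integer budgets (all assumptions); value iteration recovers the bid shading the abstract characterizes:

        import numpy as np

        gamma, v = 0.95, 30.0                  # discount factor, per-auction valuation
        prices = np.arange(1, 51)              # competing-price support (uniform)
        pmf = np.full(prices.size, 1.0 / prices.size)

        V = np.zeros(101)                      # V[b] for budget levels b = 0..100
        for _ in range(500):                   # value iteration
            V_new = np.empty_like(V)
            for b in range(101):
                best = gamma * V[b]            # bidding 0, never winning
                for bid in range(1, min(int(v), b) + 1):
                    win = prices <= bid        # second price: pay the competing price
                    payoff = np.where(win, v - prices + gamma * V[np.maximum(b - prices, 0)],
                                      gamma * V[b])
                    best = max(best, float(pmf @ payoff))
                V_new[b] = best
            V = V_new
        # The maximizing bid sits below v by the option value of unspent budget,
        # roughly gamma * (V[b] - V[b - price]) -- the shading factor.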
  • Source
    ABSTRACT: This paper considers two simple pricing schemes for selling cloud instances and studies the trade-off between them. We characterize the equilibrium for the hybrid system where arriving jobs can choose between fixed or market-based pricing. We provide theoretical and simulation-based evidence suggesting that fixed pricing generates a higher expected revenue than the hybrid system.
    01/2012;
  • Source
    ABSTRACT: We examine designs for crowdsourcing contests, where participants compete for rewards given to superior solutions of a task. We theoretically analyze tradeoffs between the expectation and variance of the principal's utility (i.e., the best solution's quality), and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is also crowdsourcing-based and relies on the peer prediction mechanism. Our theoretical analysis characterizes the expectation-variance tradeoff of the principal's utility in such contests through a Pareto efficient frontier. In particular, we show that the simple contest with 2 authors and the 2-pair contest have good theoretical properties. In contrast, our empirical results show that the 2-pair contest is the superior design among all designs tested, achieving the highest expectation and lowest variance of the principal's utility.
    01/2012;
  • Source
    ABSTRACT: We propose a natural model for agent failures in congestion games. In our model, each of the agents may fail to participate in the game, introducing uncertainty regarding the set of active agents. We examine how such uncertainty may change the Nash equilibria (NE) of the game. We prove that although the perturbed game induced by the failure model is not always a congestion game, it still admits at least one pure Nash equilibrium. Then, we turn to examine the effect of failures on the maximal social cost in any NE of the perturbed game. We show that in the limit case where failure probability is negligible new equilibria never emerge, and that the social cost may decrease but it never increases. For the case of non-negligible failure probabilities, we provide a full characterization of the maximal impact of failures on the social cost under worst-case equilibrium outcomes.
    01/2012;
  • Source
    ABSTRACT: We investigate dynamic channel, rate selection and scheduling for wireless systems which exploit the large number of channels available in the White-space spectrum. We first present measurements of radio channel characteristics from an indoor testbed operating in the 500 to 600 MHz band and comprising 11 channels. We observe significant and unpredictable (non-stationary) variations in the quality of these channels, and demonstrate the potential benefit in throughput from tracking the best channel and also from optimally adapting the transmission rate. We propose adaptive learning schemes able to efficiently track the best channel and rate for transmission, even in scenarios with non-stationary channel condition variations. We also describe a joint scheduling scheme for providing fairness in an Access Point scenario. Finally, we implement the proposed adaptive scheme in our testbed, and demonstrate that it achieves significant throughput improvement (typically from 40% to 100%) compared to traditional fixed channel selection schemes.
    12/2011;
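    The abstract does not name the learning rule; as a stand-in, here is a discounted-UCB-style sketch over (channel, rate) arms, which copes with non-stationarity by exponentially forgetting old throughput samples (the environment model and constants are toy assumptions):

        import math, random

        arms = [(ch, rate) for ch in range(11) for rate in (6, 12, 24)]
        disc = 0.99                            # forgetting factor for non-stationarity
        n = {a: 1e-9 for a in arms}            # discounted pull counts
        s = {a: 0.0 for a in arms}             # discounted reward sums

        def throughput(arm):                   # toy channel model (assumed)
            ch, rate = arm
            return max(0.0, random.gauss(rate * (0.5 + 0.05 * ch), 2.0))

        for _ in range(5000):
            total = sum(n.values())
            ucb = {a: s[a] / n[a] + math.sqrt(2 * math.log(total + 1) / n[a]) for a in arms}
            a = max(ucb, key=ucb.get)            # pick the optimistic (channel, rate)
            r = min(throughput(a), 30.0) / 30.0  # normalize rewards to [0, 1]
            for b in arms:                       # decay every arm, then credit the pull
                n[b] *= disc; s[b] *= disc
            n[a] += 1.0; s[a] += r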
  • Source
    ABSTRACT: In this paper, we investigate the benefits that accrue from the use of multiple paths by a session coupled with rate control over those paths. In particular, we study data transfers under two classes of multipath control: coordinated control, where the rates over the paths are determined as a function of all paths, and uncoordinated control, where the rates are determined independently over each path. We show that coordinated control exhibits desirable load balancing properties; for a homogeneous static random paths scenario, we show that the worst-case throughput performance of uncoordinated control behaves as if each user has but a single path (scaling like log(log(N))/log(N), where N is the system size, measured in number of resources), whereas coordinated control yields a worst-case throughput allocation bounded away from zero. We then allow users to change their set of paths and introduce the notion of a Nash equilibrium. We show that both coordinated and uncoordinated control lead to Nash equilibria corresponding to desirable welfare-maximizing states, provided that, in the latter case, the rate controllers over each path do not exhibit any round-trip time (RTT) bias (unlike TCP Reno). Finally, we show in the case of coordinated control that more paths are better, leading to greater welfare states and throughput capacity, and that simple path reselection policies that shift to paths with higher net benefit can achieve these states.
    Commun. ACM. 01/2011; 54:109-116.
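    A hedged restatement of the two controls in symbols (the notation is assumed here, not taken from the paper): with x_p the rate on path p, P_s the path set of user s, and C_l the capacity of resource l,

        \[
        \text{coordinated: } \max_{x \ge 0} \; \sum_s U_s\Big(\sum_{p \in P_s} x_p\Big)
        \qquad
        \text{uncoordinated: } \max_{x \ge 0} \; \sum_s \sum_{p \in P_s} U_{s,p}(x_p),
        \]
        \[
        \text{subject to } \sum_{p \ni l} x_p \le C_l \text{ for every resource } l.
        \]

    Coordination earns utility on the aggregate rate, so traffic migrates toward uncongested paths; uncoordinated control rewards each path separately, which is what produces the single-path-like worst case above.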
  • Source
    ABSTRACT: The elegant Vickrey-Clarke-Groves (VCG) mechanism is well known for the strong properties it offers: dominant truth-revealing strategies, efficiency, and weak budget-balance in quite general settings. Despite this, it suffers from several drawbacks, most prominently susceptibility to collusion. By jointly setting their bids, colluders may increase their utility by achieving lower prices for their items. The colluders can use monetary transfers to share this utility, but they must reach an agreement regarding their actions. We analyze the agreements that are likely to arise through a cooperative game theoretic approach, transforming the auction setting into a cooperative game. We examine both the setting of a multi-unit auction and that of path procurement auctions.
    SIGecom Exchanges. 01/2011; 10:17-22.
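    A worked instance of the collusion effect in a multi-unit VCG auction with unit demand (numbers illustrative): a losing colluder shades its bid to lower a partner's VCG price, and the saving can be shared by a side payment.

        # Top-k bids win; each winner pays the externality it imposes, i.e. the
        # others' best welfare without it minus their welfare with it present.
        def vcg_unit_demand(bids, k):
            order = sorted(range(len(bids)), key=lambda i: -bids[i])
            winners = order[:k]
            pay = {}
            for i in winners:
                others = sorted((bids[j] for j in range(len(bids)) if j != i), reverse=True)
                pay[i] = sum(others[:k]) - sum(bids[j] for j in winners if j != i)
            return winners, pay

        print(vcg_unit_demand([10, 8, 6, 5], k=2))   # both winners pay 6
        print(vcg_unit_demand([10, 8, 0, 5], k=2))   # the 6-bidder shades to 0:
                                                     # prices drop to 5 each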
  • Source
    ABSTRACT: Self-interference cancellation is a challenging task. Links can be scheduled concurrently, but only if they either (i) don't interfere or (ii) allow for self-interference cancellation. Two issues arise: first, it is difficult to construct a schedule that fully exploits the potential for self-interference cancellation for arbitrary traffic patterns; second, designing an efficient and fair distributed MAC is a daunting task, and the issues become even more pronounced when scheduling under these constraints. We propose ContraFlow, a novel MAC that exploits the benefits of self-interference cancellation and increases spatial reuse. We use full-duplex to eliminate hidden terminals, and we rectify decentralized coordination inefficiencies among nodes, thereby improving fairness. Using measurements and simulations we illustrate the performance gains achieved when ContraFlow is used, obtaining both a throughput increase over current systems and a significant improvement in fairness.
    9th International Symposium on Modeling and Optimization in Mobile, Ad-Hoc and Wireless Networks (WiOpt 2011), May 9-13, 2011, Princeton, NJ, USA; 01/2011
  • Source
    Furcy Pin, Peter Key
    ABSTRACT: Sponsored search advertisement slots are currently sold via Generalized Second Price (GSP) auctions. Despite the simplicity of their rules, these auctions are far from being fully understood. Our observations on real ad-auction data show that advertisers usually enter many distinct auctions with different opponents and with varying parameters. We describe some of our findings from these observations and propose a simple probabilistic model taking them into account. This model can be used to predict the number of clicks received by the advertisers and the total price they can expect to pay depending on their bid, or even to estimate the players' valuations, all at a very low computational cost.
    Proceedings 12th ACM Conference on Electronic Commerce (EC-2011), San Jose, CA, USA, June 5-9, 2011; 01/2011
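    A hedged sketch of the kind of prediction such a model enables: draw the highest competing score from a fitted distribution (a lognormal here, purely an assumption) and average wins and second-price payments over the auctions an advertiser enters.

        import numpy as np

        rng = np.random.default_rng(0)
        competing = rng.lognormal(mean=0.0, sigma=0.7, size=100_000)
        n_auctions, ctr = 1000, 0.05        # auctions entered, click rate (assumed)

        def predict(bid):
            win = competing < bid           # auctions this bid would win
            clicks = n_auctions * ctr * win.mean()
            price = competing[win].mean() if win.any() else 0.0
            return clicks, clicks * price   # expected clicks and total spend

        for b in (0.5, 1.0, 2.0):
            print(b, predict(b))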
  • Source
    ABSTRACT: Existing indoor WiFi networks in the 2.4 GHz and 5 GHz bands use high transmit power, needed because the high carrier frequency limits signal penetration and connectivity. Instead, we propose a novel indoor wireless mesh design paradigm based on Low Frequency, using the newly freed white spaces previously used as analogue TV bands, and Low Power: 100 times less power than currently used. Preliminary experiments show that this maintains a similar level of connectivity and performance to existing networks. It also yields more uniform connectivity, which simplifies MAC and routing protocol design. We also advocate full-duplex networking in a single band, which becomes possible in this setting because we operate at low frequencies. It potentially doubles the throughput of each link and eliminates hidden terminals.
    Wireless Mesh Networks (WIMESH 2010), 2010 Fifth IEEE Workshop on; 07/2010
  • Source
    ABSTRACT: We consider opportunistic routing in wireless mesh networks. We exploit the inherent diversity of the broadcast nature of wireless by making use of multipath routing. We present a novel optimization framework for opportunistic routing based on network utility maximization (NUM) that enables us to derive optimal flow control, routing, scheduling, and rate adaptation schemes, where we use network coding to ease the routing problem. All previous work on NUM assumed unicast transmissions; however, the wireless medium is by its nature broadcast, and a transmission will be received by multiple nodes. The structure of our design is fundamentally different; this is due to the fact that our link rate constraints are defined per broadcast region instead of for links in isolation. We prove optimality and derive a primal-dual algorithm that lays the basis for a practical protocol. Optimal MAC scheduling is difficult to implement, so we use 802.11-like random scheduling rather than optimal scheduling in our comparisons. Under random scheduling, our protocol becomes fully decentralized (we assume ideal signaling). The use of network coding introduces additional constraints on scheduling, and we propose a novel scheme to avoid starvation. We simulate realistic topologies and show that we can achieve a 20%-200% throughput improvement compared to single-path routing, and a several-fold improvement compared to a recent related opportunistic protocol (MORE).
    IEEE/ACM Transactions on Networking 05/2010; · 2.01 Impact Factor
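    A hedged sketch of one primal-dual NUM iteration with log utilities on a toy wired network (the paper's per-broadcast-region rate constraints are replaced by plain per-link capacities for illustration):

        import numpy as np

        A = np.array([[1.0, 1.0, 0.0],        # link-flow incidence: 2 links, 3 flows
                      [0.0, 1.0, 1.0]])
        c = np.array([1.0, 2.0])              # link capacities
        lam = np.ones(2)                      # congestion prices (dual variables)
        step = 0.01

        for _ in range(20000):
            q = A.T @ lam                                     # price along each path
            x = np.minimum(1.0 / np.maximum(q, 1e-6), 10.0)   # U=log => x = 1/price
            lam = np.maximum(lam + step * (A @ x - c), 0.0)   # dual (price) ascent

        print(np.round(x, 3), np.round(lam, 3))   # proportionally fair rates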
  • Source
    ABSTRACT: As more technologies enter the home, householders are burdened with the task of digital housekeeping: managing and sharing digital resources like bandwidth. In response to this, we created and evaluated a domestic tool for bandwidth management called Home Watcher. Our field trial showed that when resource contention amongst different household members is made visible, people's understanding of bandwidth changes and household politics are revealed. In this paper, we describe the consequences of showing real-time resource usage in a home, and how this varies depending on the social make-up of the household.
    Proceedings of the 28th International Conference on Human Factors in Computing Systems, CHI 2010, Atlanta, Georgia, USA, April 10-15, 2010; 01/2010
  • Source
    ABSTRACT: We consider collusion in path procurement auctions, where payments are determined using the VCG mechanism. We show that collusion can increase the utility of the agents, and in some cases they can extract any amount the procurer is willing to offer. We show that computing how much a coalition can gain by colluding is NP-complete in general, but that in certain interesting restricted cases, the optimal collusion scheme can be computed in polynomial time. We examine the ways in which the colluders might share their payments, using the core and Shapley value from cooperative game theory. We show that in some cases the collusion game has an empty core, so although beneficial manipulations exist, the colluders would find it hard to form a stable coalition due to inability to decide how to split the rewards. On the other hand, we show that in several common restricted cases the collusion game is convex, so it has a non-empty core, which contains the Shapley value. We also show that in these cases colluders can compute core imputations and the Shapley value in polynomial time.
    Internet and Network Economics - 6th International Workshop, WINE 2010, Stanford, CA, USA, December 13-17, 2010. Proceedings; 01/2010
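    A hedged sketch of the cooperative-game side: the Shapley value computed exactly by averaging marginal contributions over orderings (exponential-time brute force; the paper's polynomial-time results exploit structure this ignores). The toy characteristic function is convex, matching the cases where the core is non-empty.

        import math
        from itertools import permutations

        def shapley(players, v):
            """v maps a frozenset of players to that coalition's collusion gain."""
            phi = dict.fromkeys(players, 0.0)
            for order in permutations(players):
                so_far = set()
                for p in order:
                    before = v(frozenset(so_far))
                    so_far.add(p)
                    phi[p] += v(frozenset(so_far)) - before
            n_fact = math.factorial(len(players))
            return {p: x / n_fact for p, x in phi.items()}

        # Toy convex (supermodular) game: gains grow quadratically with size.
        print(shapley(("a", "b", "c"), lambda s: len(s) ** 2))   # 3.0 each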
  • Source
    ABSTRACT: We consider the problem of traffic management in small networks with both wireless and wired devices, connected to the Internet through a single gateway. Examples of such networks are small office networks or residential networks, where typically traffic management is limited to flow prioritization through port-based filtering. We propose a practical resource allocation framework that provides simple mechanisms to applications and users to enable traffic management functionality currently not present due to the distributed nature of the system and various technology or protocol limitations. To allow for control irrespective of whether traffic flows cross wireless, wired or even broadband links, the proposed framework jointly optimizes rate allocations across wireless and wired devices in a weighted fair manner. Additionally, we propose a model for estimating the achievable capacity regions in wireless networks. This model is used by the controller to achieve a specific rate allocation. We evaluate a decentralized, host-based implementation of the proposed framework. The controller is incrementally deployable by not requiring modifications to existing network protocols and equipment or the wireless MAC. Using analytical methods and experimental results with realistic traffic, we show that our controller is stable with fast convergence for both UDP and TCP traffic, achieves weighted fairness, and mitigates scheduling inefficiencies of the existing hardware.
    Proceedings of the 2009 ACM Conference on Emerging Networking Experiments and Technology, CoNEXT 2009, Rome, Italy, December 1-4, 2009; 01/2009
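    A hedged sketch of the weighted-fair allocation at the framework's core, reduced to water-filling on a single shared capacity (the actual controller spans wireless and wired links; weights, demands, and capacity here are illustrative):

        def weighted_max_min(capacity, demands, weights):
            alloc = dict.fromkeys(demands, 0.0)
            active = set(demands)
            while active and capacity > 1e-9:
                fair = capacity / sum(weights[f] for f in active)
                # the round ends when the first flow's demand saturates
                inc = min(fair, min((demands[f] - alloc[f]) / weights[f] for f in active))
                for f in list(active):
                    alloc[f] += inc * weights[f]
                    capacity -= inc * weights[f]
                    if demands[f] - alloc[f] <= 1e-9:
                        active.remove(f)
            return alloc

        print(weighted_max_min(10.0, {"voip": 1.0, "bulk": 20.0, "video": 6.0},
                               {"voip": 1.0, "bulk": 1.0, "video": 2.0}))
        # -> voip 1.0 (demand-capped), video 6.0, bulk 3.0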
  • Source
    Peter B. Key, Alexandre Proutiere
    ABSTRACT: In this paper, we introduce and investigate a novel class of multipath routing games with elastic traffic. Users open one or more connections along different feasible paths from source to destination and act selfishly, seeking to transfer data as fast as possible. Users only control their routing choices, and once these choices have been made, the connection rates are elastic and determined via congestion control algorithms (e.g. TCP) which ultimately maximize a certain notion of network utility. We analyze the existence and the performance of the Nash Equilibria (NEs) of the resulting routing games.
    SIGMETRICS Performance Evaluation Review. 01/2009; 37:63-64.
  • Source
    Peter B Key, Laurent Massoulié
    ABSTRACT: We discuss control strategies for communication networks such as the Internet. We advocate the goal of welfare maximization as a paradigm for network resource allocation. We explore the application of this paradigm to the case of parallel network paths. We show that welfare maximization requires active balancing across paths by data sources, and potentially requires implementation of novel transport protocols. However, the only requirement on the underlying 'network layer' is to expose the marginal congestion cost of network paths to the 'transport layer'. We further illustrate the versatility of the corresponding layered architecture by describing transport protocols with the following properties: they achieve welfare maximization; each communication may use an arbitrary collection of paths, where paths may be from an overlay; and paths may be combined in series and parallel. We conclude by commenting on incentives, pricing and open problems.
    Philosophical Transactions of The Royal Society A Mathematical Physical and Engineering Sciences 07/2008; 366(1872):1955-71. · 2.89 Impact Factor
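    A hedged illustration of the layering principle: the network layer exposes each path's marginal congestion cost and the source shifts rate toward the cheaper path until the costs equalize (M/M/1-style delay costs and the step size are assumptions):

        def marginal_cost(load, capacity):
            return capacity / (capacity - load) ** 2    # d/dx of x / (c - x)

        x, cap = [0.4, 0.4], [1.0, 2.0]                 # per-path rates, capacities
        for _ in range(1000):
            m0, m1 = marginal_cost(x[0], cap[0]), marginal_cost(x[1], cap[1])
            delta = 0.001 if m0 > m1 else -0.001        # move rate to the cheaper path
            x[0] -= delta; x[1] += delta

        print([round(r, 3) for r in x])                 # approaches ~[0.089, 0.711]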
  • Source
    ABSTRACT: We consider the problem of determining the "closest", or best, Internet host to connect to, from a list of candidate servers. Most existing approaches rely on the use of metric, or more specifically Euclidean, coordinates to infer network proximity. This is problematic, given that network distances such as latency are known to violate the triangle inequality. This leads us to consider non-metric coordinate systems. We perform an empirical comparison between the "min-plus" non-metric coordinates and two metric coordinates, namely L-infinity and Euclidean. We observe that, when sufficiently many dimensions are used, min-plus outperforms metric coordinates for predicting Internet latencies. We also consider the prediction of "widest path capacity" between nodes. In this framework, we propose a generalization of min-plus coordinates. These results apply when node coordinates consist of measured network proximity to a random subset of landmark nodes. We perform empirical validation of these results on widest-path bandwidth between PlanetLab nodes. We conclude that appropriate non-metric coordinates such as generalized min-plus systems are better suited than metric systems for representing the underlying structure of Internet distances, measured either via latencies or bandwidth.
    INFOCOM 2008. The 27th Conference on Computer Communications. IEEE; 05/2008
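    A minimal sketch of min-plus latency prediction (coordinates here are simply each node's measured RTTs to a common landmark set, treated symmetrically; an assumption consistent with, but simpler than, the paper's setup): the estimate is the cheapest relay through a landmark, which never assumes the triangle inequality.

        import numpy as np

        def minplus_estimate(a_to_landmarks, b_to_landmarks):
            # d(a, b) ~= min over landmarks k of (d(a, k) + d(k, b))
            return float(np.min(a_to_landmarks + b_to_landmarks))

        a = np.array([20.0, 35.0, 50.0])    # node a's RTTs to 3 landmarks (ms)
        b = np.array([30.0, 10.0, 45.0])    # node b's RTTs to the same landmarks
        print(minplus_estimate(a, b))       # -> 45.0 ms via the second landmark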

Publication Stats

1k Citations
26.68 Total Impact Points

Institutions

  • 1999–2012
    • Microsoft
      Redmond, Washington, United States
  • 2004–2011
    • Cancer Research UK Cambridge Institute
      Cambridge, England, United Kingdom
  • 2008
    • University of California, Irvine
      Irvine, California, United States
  • 2006
    • D. E. Shaw Research
      New York City, New York, United States
  • 2005
    • Massachusetts Institute of Technology
      • Laboratory for Information and Decision Systems
      Cambridge, MA, United States
  • 2002
    • University of Massachusetts Amherst
      • School of Computer Science
      Amherst Center, Massachusetts, United States
  • 1994–1995
    • University of Cambridge
      • Computer Laboratory
      Cambridge, England, United Kingdom