ABSTRACT: We examine trade-offs among stakeholders in ad auctions. Our metrics are revenue for the auctioneer's utility, the number of clicks for the users' utility, and welfare for the advertisers' utility. We show how to optimize linear combinations of the stakeholder utilities, and that these objectives can be tackled through a GSP auction with a per-click reserve price. We then examine constrained optimization of stakeholder utilities.
We use simulations and analysis of real-world sponsored search auction data
to demonstrate the feasible trade-offs, examining the effect of changing the
allowed number of ads on the utilities of the stakeholders. We investigate both short-term effects, when the players do not have time to modify their behavior, and long-term equilibrium conditions.
Finally, we examine a combinatorially richer constrained optimization
problem, where there are several possible allowed configurations (templates) of
ad formats. This model captures richer ad formats, which allow using the
available screen real estate in various ways. We show that two natural generalizations of the GSP auction rules to this domain are poorly behaved: they either lack a symmetric Nash equilibrium or admit only one with poor welfare. We also provide positive results for restricted cases.
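The per-click reserve mechanism of the first part can be sketched concretely. The following is a minimal illustration under a standard position-auction model with separable click-through rates; the bidder names, CTRs, and reserve value are hypothetical, not taken from the paper.

```python
# Minimal sketch of a GSP position auction with a per-click reserve price.
# All numbers below are hypothetical; this is not the paper's exact mechanism.

def gsp_with_reserve(bids, ctrs, reserve):
    """Allocate slots by bid; each winner pays max(next bid, reserve) per click.

    bids:    list of (bidder_id, bid) pairs
    ctrs:    click-through rates of the slots, highest first
    reserve: per-click reserve price; bids below it are filtered out
    """
    eligible = sorted((b for b in bids if b[1] >= reserve),
                      key=lambda x: x[1], reverse=True)
    outcome = []
    for slot, (bidder, bid) in enumerate(eligible[:len(ctrs)]):
        next_bid = eligible[slot + 1][1] if slot + 1 < len(eligible) else 0.0
        price = max(next_bid, reserve)  # GSP payment, floored at the reserve
        outcome.append((bidder, slot, price, ctrs[slot]))
    return outcome

# Three bidders, two slots, reserve 1.0: bidder "c" is filtered out,
# "a" pays "b"'s bid, and "b" pays the reserve.
alloc = gsp_with_reserve([("a", 3.0), ("b", 2.0), ("c", 0.5)],
                         ctrs=[0.1, 0.05], reserve=1.0)
```

Raising the reserve trades clicks (user utility) against revenue, which is exactly the kind of stakeholder trade-off the abstract studies.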
ABSTRACT: In a sponsored search auction, decisions about how to rank ads impose
tradeoffs between objectives such as revenue and welfare. In this paper, we
examine how these tradeoffs should be made. We begin by arguing that the most
natural solution concept to evaluate these tradeoffs is the lowest symmetric
Nash equilibrium (SNE). As part of this argument, we generalise the well-known
connection between the lowest SNE and the VCG outcome. We then propose a new
ranking algorithm, loosely based on the revenue-optimal auction, that uses a
reserve price to order the ads (not just to filter them) and give conditions
under which it raises more revenue than simply applying that reserve price.
Finally, we conduct extensive simulations examining the tradeoffs enabled by
different ranking algorithms and show that our proposed algorithm enables
superior operating points by a variety of metrics.
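The paper's ranking algorithm is only loosely described in the abstract, so the sketch below is a hypothetical illustration of the general idea that a reserve can order ads rather than just filter them: with bidder-specific reserves, ranking by reserve-adjusted surplus can produce a different order than ranking by bid.

```python
# Toy contrast between rank-by-bid and reserve-aware ordering. The
# bidder-specific reserves and the surplus rule are illustrative assumptions,
# not the paper's actual algorithm.

def rank_ads(bids, reserves):
    """bids and reserves: dicts mapping bidder id to bid / per-click reserve."""
    eligible = [b for b in bids if bids[b] >= reserves[b]]
    by_bid = sorted(eligible, key=lambda b: bids[b], reverse=True)
    by_surplus = sorted(eligible, key=lambda b: bids[b] - reserves[b],
                        reverse=True)
    return by_bid, by_surplus

bids = {"a": 3.0, "b": 2.5, "c": 0.4}
reserves = {"a": 2.0, "b": 0.5, "c": 1.0}
by_bid, by_surplus = rank_ads(bids, reserves)
# "c" is filtered out; "a" leads when ranking by bid, "b" when ranking by surplus.
```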
ABSTRACT: We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under a given budget. We cast the budget optimization
problem as a Markov Decision Process (MDP) with censored observations, and
propose a learning algorithm based on the well-known Kaplan-Meier or
product-limit estimator. We validate the performance of this algorithm by
comparing it to several others on a large set of search auction data from
Microsoft adCenter, demonstrating fast convergence to optimal performance.
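The product-limit estimator the abstract refers to can be written compactly for generic right-censored data. The toy sample below is invented, not from the adCenter data, and the paper's learning algorithm builds on this estimator rather than consisting of it alone.

```python
# Minimal sketch of the Kaplan-Meier (product-limit) estimator for
# right-censored observations.

def kaplan_meier(times, observed):
    """Return the survival curve S(t) at each distinct event time.

    times:    time (e.g. budget spent) of each observation
    observed: True if the event occurred, False if the observation was censored
    """
    points = sorted(set(t for t, o in zip(times, observed) if o))
    survival, s = [], 1.0
    for t in points:
        at_risk = sum(1 for ti in times if ti >= t)           # still under observation
        events = sum(1 for ti, o in zip(times, observed)
                     if ti == t and o)                        # uncensored events at t
        s *= 1.0 - events / at_risk
        survival.append((t, s))
    return survival

# Five observations, two of them censored:
curve = kaplan_meier([1, 2, 2, 3, 4], [True, True, False, True, False])
```

Censored observations contribute to the at-risk counts without triggering a drop in the survival estimate, which is what makes the estimator usable when budget exhaustion censors the click counts.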
ABSTRACT: How should agents bid in repeated sequential auctions when they are budget constrained? A motivating example is that of sponsored search auctions, where advertisers bid in a sequence of generalized second price (GSP) auctions. These auctions, specifically in the context of sponsored search, have many idiosyncratic features that distinguish them from other models of sequential auctions: First, each bidder competes in a large number of auctions, where each auction is worth very little. Second, the total bidder population is often large, which means it is unrealistic to assume that the bidders could possibly optimize their strategy by modeling specific opponents. Third, the presence of a virtually unlimited supply of these auctions means bidders are necessarily expense constrained. Motivated by these three factors, we first frame the generic problem as a discounted Markov Decision Process for which the environment is independent and identically distributed over time. We also allow the agents to receive income to augment their budget at a constant rate. We provide a structural characterization of the associated value function and the optimal bidding strategy, which specifies the extent to which agents underbid from their true valuation due to long term budget constraints. We then provide an explicit characterization of the optimal bid shading factor in the limiting regime where the discount rate tends to zero, by identifying the limit of the value function in terms of the solution to a differential equation that can be solved efficiently. Finally, we prove the existence of Mean Field Equilibria for both the repeated second price and GSP auctions with a large number of bidders.
Preview · Article · May 2012 · SSRN Electronic Journal
ABSTRACT: This paper considers two simple pricing schemes for selling cloud instances and studies the trade-off between them. We characterize the equilibrium for the hybrid system where arriving jobs can choose between fixed or market-based pricing. We provide theoretical and simulation-based evidence suggesting that the fixed price generates a higher expected revenue than the hybrid system.
ABSTRACT: We examine designs for crowdsourcing contests, where participants compete for rewards given to superior solutions of a task. We theoretically analyze tradeoffs between the expectation and variance of the principal's utility (i.e. the best solution's quality), and empirically test our theoretical predictions using a controlled experiment on Amazon Mechanical Turk. Our evaluation method is also crowdsourcing-based and relies on the peer prediction mechanism. Our theoretical analysis shows an expectation-variance tradeoff of the principal's utility in such contests through a Pareto efficient frontier. In particular, we show that the simple contest with 2 authors and the 2-pair contest have good theoretical properties. In contrast, our empirical results show that the 2-pair contest is the superior design among all designs tested, achieving the highest expectation and lowest variance of the principal's utility.
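A toy Monte Carlo illustrates the expectation-variance tradeoff when the principal's utility is the best of n submissions. The i.i.d. uniform quality model is an assumption made for illustration, not the paper's experimental setup.

```python
import random

# Toy sketch of the expectation-variance tradeoff: the principal's utility is
# the best of n i.i.d. uniform solution qualities (an illustrative assumption).

def best_of_n(n, trials=100_000, seed=0):
    """Sample mean and variance of the best of n uniform[0,1] qualities."""
    rng = random.Random(seed)
    samples = [max(rng.random() for _ in range(n)) for _ in range(trials)]
    mean = sum(samples) / trials
    var = sum((x - mean) ** 2 for x in samples) / trials
    return mean, var

m2, v2 = best_of_n(2)  # analytically: mean 2/3, variance 1/18
m4, v4 = best_of_n(4)  # analytically: mean 4/5, variance 2/75
```

Under this model the best of n uniforms has mean n/(n+1) and variance n/((n+1)^2(n+2)), so adding participants raises the expected best quality while shrinking its variance; the interesting question the paper studies is how reward structure, not just contest size, moves a design along this frontier.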
ABSTRACT: We propose a natural model for agent failures in congestion games. In our model, each of the agents may fail to participate in the game, introducing uncertainty regarding the set of active agents. We examine how such uncertainty may change the Nash equilibria (NE) of the game. We prove that although the perturbed game induced by the failure model is not always a congestion game, it still admits at least one pure Nash equilibrium. Then, we turn to examine the effect of failures on the maximal social cost in any NE of the perturbed game. We show that in the limit case where failure probability is negligible new equilibria never emerge, and that the social cost may decrease but it never increases. For the case of non-negligible failure probabilities, we provide a full characterization of the maximal impact of failures on the social cost under worst-case equilibrium outcomes.
ABSTRACT: We investigate dynamic channel, rate selection and scheduling for wireless systems which exploit the large number of channels available in the White-space spectrum. We first present measurements of radio channel characteristics from an indoor testbed operating in the 500 to 600 MHz band and comprising 11 channels. We observe significant and unpredictable (non-stationary) variations in the quality of these channels, and demonstrate the potential benefit in throughput from tracking the best channel and also from optimally adapting the transmission rate. We propose adaptive learning schemes able to efficiently track the best channel and rate for transmission, even in scenarios with non-stationary channel condition variations. We also describe a joint scheduling scheme for providing fairness in an Access Point scenario. Finally, we implement the proposed adaptive scheme in our testbed, and demonstrate that it achieves significant throughput improvement (typically from 40% to 100%) compared to traditional fixed channel selection schemes.
ABSTRACT: Self-interference cancellation is a challenging task. Links can be scheduled concurrently, but only if they either (i) don't interfere or (ii) allow for self-interference cancellation. Two issues arise: First, it is difficult to construct a schedule that fully exploits the potential for self-interference cancellation for arbitrary traffic patterns. Second, designing an efficient and fair distributed MAC is a daunting task; the issues become even more pronounced when scheduling under these constraints. We propose ContraFlow, a novel MAC that exploits the benefits of self-interference cancellation and increases spatial reuse. We use full-duplex to eliminate hidden terminals, and we rectify decentralized coordination inefficiencies among nodes, thereby improving fairness. Using measurements and simulations we illustrate the performance gains achieved when ContraFlow is used, and we obtain both a throughput increase over current systems and a significant improvement in fairness.
ABSTRACT: Home networks consist of applications running over multiple wired and wireless devices competing for shared network resources. Despite all the devices operating in a single administrative domain in such networks, applications operate independently, and users cannot express or enforce policies. By studying multiple households' network performance at the packet level correlated with diaries capturing user experiences, we show that the lack of cooperation across applications leads to observable performance problems and associated user frustration. We describe HomeMaestro, a cooperative host-based system that monitors local and global application performance, and automatically detects contention for network resources. HomeMaestro is designed to manage home and small networks and requires no modification to routers, access points, applications, or protocols. At each host, it transparently monitors per-flow and per-process network usage statistics, such as throughput, RTT, and loss rates. We propose novel
ABSTRACT: The elegant Vickrey-Clarke-Groves (VCG) mechanism is well-known for the strong properties it offers: dominant truth-revealing strategies, efficiency, and weak budget-balance in quite general settings. Despite this, it suffers from several drawbacks, most prominently susceptibility to collusion. By jointly setting their bids, colluders may increase their utility by achieving lower prices for their items. The colluders can use monetary transfers to share this utility, but they must reach an agreement regarding their actions. We analyze the agreements that are likely to arise through a cooperative game theoretic approach, transforming the auction setting into a cooperative game. We examine both the setting of a multi-unit auction as well as path procurement auctions.
Preview · Article · Mar 2011 · ACM SIGecom Exchanges
ABSTRACT: In this paper, we investigate the benefits that accrue from the use of multiple paths by a session coupled with rate control over those paths. In particular, we study data transfers under two classes of multipath control, coordinated control where the rates over the paths are determined as a function of all paths, and uncoordinated control where the rates are determined independently over each path. We show that coordinated control exhibits desirable load balancing properties; for a homogeneous static random paths scenario, we show that the worst-case throughput performance of uncoordinated control behaves as if each user has but a single path (scaling like log(log(N))/log(N) where N is the system size, measured in number of resources), whereas coordinated control yields a worst-case throughput allocation bounded away from zero. We then allow users to change their set of paths and introduce the notion of a Nash equilibrium. We show that both coordinated and uncoordinated control lead to Nash equilibria corresponding to desirable welfare maximizing states, provided, in the latter case, that the rate controllers over each path do not exhibit any round-trip time (RTT) bias (unlike TCP Reno). Finally, we show in the case of coordinated control that more paths are better, leading to greater welfare states and throughput capacity, and that simple path reselection policies that shift to paths with higher net benefit can achieve these states.
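The worst-case gap between uncoordinated and coordinated control echoes the classic balls-into-bins contrast between one random choice and the least-loaded of two. The sketch below is an analogy under simplifying assumptions (unit loads, a fixed number of candidate paths), not the paper's fluid model.

```python
import random

# Balls-into-bins analogy: each of n users places one unit of load on one of
# n resources. With choices=1 the pick is blind (uncoordinated); with
# choices=2 the user picks the less-loaded of two random candidates
# (a crude stand-in for coordinated control).

def max_load(n, choices, trials=200, seed=1):
    """Worst maximum bin load observed over the given number of trials."""
    rng = random.Random(seed)
    worst = 0
    for _ in range(trials):
        bins = [0] * n
        for _ in range(n):
            picks = [rng.randrange(n) for _ in range(choices)]
            best = min(picks, key=lambda i: bins[i])  # least-loaded candidate
            bins[best] += 1
        worst = max(worst, max(bins))
    return worst

single = max_load(200, choices=1)  # max load grows like log n / log log n
double = max_load(200, choices=2)  # max load grows like log log n
```

The exponential drop in maximum load from a second choice mirrors why coordinated control keeps worst-case throughput bounded away from zero while uncoordinated control degrades with system size.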
Full-text · Article · Jan 2011 · Communications of the ACM
ABSTRACT: Sponsored search advertisement slots are currently sold via Generalized Second Price (GSP) auctions. Despite the simplicity of their rules, these auctions are far from being fully understood. Our observations on real ad-auction data show that advertisers usually enter many distinct auctions with different opponents and with varying parameters. We describe some of our findings from these observations and propose a simple probabilistic model taking them into account. This model can be used to predict the number of clicks received by the advertisers and the total price they can expect to pay depending on their bid, or even to estimate the players' valuations, all at a very low computational cost.
ABSTRACT: We consider collusion in path procurement auctions, where payments are determined using the VCG mechanism. We show that collusion can increase the utility of the agents, and in some cases they can extract any amount the procurer is willing to offer. We show that computing how much a coalition can gain by colluding is NP-complete in general, but that in certain interesting restricted cases, the optimal collusion scheme can be computed in polynomial time. We examine the ways in which the colluders might share their payments, using the core and Shapley value from cooperative game theory. We show that in some cases the collusion game has an empty core, so although beneficial manipulations exist, the colluders would find it hard to form a stable coalition due to inability to decide how to split the rewards. On the other hand, we show that in several common restricted cases the collusion game is convex, so it has a non-empty core, which contains the Shapley value. We also show that in these cases colluders can compute core imputations and the Shapley value in polynomial time.
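The Shapley value mentioned at the end can be illustrated exactly for a tiny coalition game by averaging marginal contributions over all player orderings. The 3-player gain function below is hypothetical; the paper's polynomial-time results exploit convexity of the collusion game rather than this brute-force enumeration, which scales as n!.

```python
from itertools import permutations
from math import factorial

# Exact Shapley value of a small cooperative game by brute force.

def shapley(players, v):
    """Average each player's marginal contribution over all n! orderings.

    v maps a frozenset coalition to its worth; v(frozenset()) should be 0.
    """
    value = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            value[p] += v(coalition | {p}) - v(coalition)  # marginal contribution
            coalition = coalition | {p}
    n_fact = factorial(len(players))
    return {p: value[p] / n_fact for p in players}

# Hypothetical collusion game: any coalition containing agent 1 with at least
# one partner extracts a gain of 6; nobody gains alone or without agent 1.
def worth(s):
    return 6.0 if 1 in s and len(s) >= 2 else 0.0

phi = shapley([1, 2, 3], worth)
# Agent 1 is pivotal: phi = {1: 4.0, 2: 1.0, 3: 1.0}
```

The imputation rewards agent 1 for being necessary to every profitable coalition, which is the kind of payment-splitting question the core and Shapley value analysis in the paper addresses.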
ABSTRACT: Existing indoor WiFi networks in the 2.5 GHz and 5 GHz bands use too much transmit power, which is needed because the high carrier frequency limits signal penetration and connectivity. Instead, we propose a novel indoor wireless mesh design paradigm based on Low Frequency, using the newly freed white spaces previously used as analogue TV bands, and Low Power: 100 times less power than currently used. Preliminary experiments show that this maintains a level of connectivity and performance similar to existing networks. It also yields more uniform connectivity, which simplifies MAC and routing protocol design. We also advocate full-duplex networking in a single band, which becomes possible in this setting because we operate at low frequencies. It potentially doubles the throughput of each link and eliminates hidden terminals.
ABSTRACT: We consider opportunistic routing in wireless mesh networks. We exploit the inherent diversity of the broadcast nature of wireless by making use of multipath routing. We present a novel optimization framework for opportunistic routing based on network utility maximization (NUM) that enables us to derive optimal flow control, routing, scheduling, and rate adaptation schemes, where we use network coding to ease the routing problem. All previous work on NUM assumed unicast transmissions; however, the wireless medium is by its nature broadcast and a transmission will be received by multiple nodes. The structure of our design is fundamentally different; this is due to the fact that our link rate constraints are defined per broadcast region instead of links in isolation. We prove optimality and derive a primal-dual algorithm that lays the basis for a practical protocol. Optimal MAC scheduling is difficult to implement, and we use 802.11-like random scheduling rather than optimal scheduling in our comparisons. Under random scheduling, our protocol becomes fully decentralized (we assume ideal signaling). The use of network coding introduces additional constraints on scheduling, and we propose a novel scheme to avoid starvation. We simulate realistic topologies and show that we can achieve 20%-200% throughput improvement compared to single path routing, and several times compared to a recent related opportunistic protocol (MORE).
Full-text · Article · May 2010 · IEEE/ACM Transactions on Networking
ABSTRACT: As more technologies enter the home, householders are burdened with the task of digital housekeeping: managing and sharing digital resources like bandwidth. In response to this, we created and evaluated a domestic tool for bandwidth management called Home Watcher. Our field trial showed that when resource contention amongst different household members is made visible, people's understanding of bandwidth changes and household politics are revealed. In this paper, we describe the consequences of showing real-time resource usage in a home, and how this varies depending on the social make-up of the household.