Publications (151)
77.73 Total impact
ABSTRACT: We study the optimal scaling of the expected total queue size in an $n\times n$ input-queued switch, as a function of the number of ports $n$ and the load factor $\rho$, which has been conjectured to be $\Theta(n/(1-\rho))$. In a recent work, the validity of this conjecture has been established for the regime where $1-\rho = O(1/n^2)$. In this paper, we make further progress in the direction of this conjecture. We provide a new class of scheduling policies under which the expected total queue size scales as $O(n^{1.5}(1-\rho)^{-1}\log(1/(1-\rho)))$ when $1-\rho = O(1/n)$. This is an improvement over the state of the art; for example, for $\rho = 1 - 1/n$ the best known bound was $O(n^3)$, while ours is $O(n^{2.5}\log n)$. 05/2014;
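For context on the scheduling problem above, the classical baseline policy for an input-queued switch is maximum-weight matching: serve the input-output matching with the largest total queue weight. A minimal brute-force sketch (illustrative only; the paper's new policy class achieving the $O(n^{1.5}\dots)$ bound is more involved):

```python
from itertools import permutations

def max_weight_schedule(Q):
    """Pick the input-output matching with the largest total queue weight,
    by brute force over all n! matchings (practical only for tiny n).
    A baseline for comparison, not the paper's policy."""
    n = len(Q)
    best_weight, best_perm = -1, None
    for perm in permutations(range(n)):
        weight = sum(Q[i][perm[i]] for i in range(n))
        if weight > best_weight:
            best_weight, best_perm = weight, perm
    return best_perm

# Q[i][j] = packets at input i destined for output j
Q = [[3, 0, 1],
     [0, 2, 0],
     [4, 0, 0]]
print(max_weight_schedule(Q))  # (2, 1, 0): serves queues of sizes 1, 2, 4
```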
ABSTRACT: This paper presents a novel meta algorithm, Partition-Merge (PM), which takes existing centralized algorithms for graph computation and makes them distributed and faster. In a nutshell, PM divides the graph into small subgraphs using our novel randomized partitioning scheme, runs the centralized algorithm on each partition separately, and then stitches the resulting solutions to produce a global solution. We demonstrate the efficiency of the PM algorithm on two popular problems: computation of Maximum A Posteriori (MAP) assignment in an arbitrary pairwise Markov Random Field (MRF), and modularity optimization for community detection. We show that the resulting distributed algorithms for these problems essentially run in time linear in the number of nodes in the graph, and perform as well as, or even better than, the original centralized algorithm as long as the graph has geometric structures. Here we say a graph has geometric structures, or polynomial growth property, when the number of nodes within distance r of any given node grows no faster than a polynomial function of r. More precisely, if the centralized algorithm is a C-factor approximation with constant C ≥ 1, the resulting distributed algorithm is a (C+δ)-factor approximation for any small δ > 0; but if the centralized algorithm is a non-constant (e.g. logarithmic) factor approximation, then the resulting distributed algorithm becomes a constant factor approximation. For general graphs, we compute explicit bounds on the loss of performance of the resulting distributed algorithm with respect to the centralized algorithm. 09/2013;
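The partition-then-solve-then-stitch pattern described above can be sketched generically. This is only an illustration of the pattern (the ball-growing partitioner and the `solve`/`radius` names are assumptions for the sketch; the paper's randomized partitioning scheme differs in detail):

```python
import random

def partition_merge(nodes, neighbors, solve, radius=2, seed=0):
    """Generic Partition-Merge sketch: grow disjoint balls of bounded
    radius around randomly ordered seeds, run the centralized `solve`
    routine on each part, and merge (concatenate) the local solutions."""
    rng = random.Random(seed)
    order = list(nodes)
    rng.shuffle(order)
    assigned, parts = {}, []
    for s in order:
        if s in assigned:
            continue
        part, frontier = {s}, [s]
        for _ in range(radius):          # grow a ball around the seed
            frontier = [v for u in frontier for v in neighbors[u]
                        if v not in assigned and v not in part]
            part.update(frontier)
        for v in part:
            assigned[v] = len(parts)
        parts.append(sorted(part))
    return [solve(p) for p in parts]

# toy use: `solve` just returns its part, so the output is the partition
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
print(partition_merge(range(8), path, solve=list))
```

The parts are disjoint and cover every node, so any per-part solutions can be stitched without conflicts inside a part; the paper's analysis quantifies the loss incurred at part boundaries.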
Conference Paper: Efficient crowdsourcing for multi-class labeling
ABSTRACT: Crowdsourcing systems like Amazon's Mechanical Turk have emerged as an effective large-scale human-powered platform for performing tasks in domains such as image classification, data entry, recommendation, and proofreading. Since workers are low-paid (a few cents per task) and tasks performed are monotonous, the answers obtained are noisy and hence unreliable. To obtain reliable estimates, it is essential to utilize appropriate inference algorithms (e.g. majority voting) coupled with structured redundancy through task assignment. Our goal is to obtain the best possible tradeoff between reliability and redundancy. In this paper, we consider a general probabilistic model for noisy observations for crowdsourcing systems and pose the problem of minimizing the total price (i.e. redundancy) that must be paid to achieve a target overall reliability. Concretely, we show that it is possible to obtain an answer to each task correctly with probability 1 - ε as long as the redundancy per task is O((K/q) log (K/ε)), where each task can have any of the $K$ distinct answers equally likely, and q is the crowd-quality parameter that is defined through a probabilistic model. Further, effectively this is the best possible redundancy-accuracy tradeoff any system design can achieve. Such a single-parameter crisp characterization of the (order-)optimal tradeoff between redundancy and reliability has various useful operational consequences. Further, we analyze the robustness of our approach in the presence of adversarial workers and provide a bound on their influence on the redundancy-accuracy tradeoff. Unlike recent prior work [GKM11, KOS11, KOS11], our result applies to non-binary (i.e. K > 2) tasks. In effect, we utilize algorithms for binary tasks (with an inhomogeneous error model unlike that in [GKM11, KOS11, KOS11]) as a key subroutine to obtain answers for K-ary tasks.
Technically, the algorithm is based on low-rank approximation of the weighted adjacency matrix of a random regular bipartite graph, weighted according to the answers provided by the workers. Proceedings of the ACM SIGMETRICS/international conference on Measurement and modeling of computer systems; 06/2013
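The redundancy-reliability tradeoff described in the abstract can be felt even with the simplest baseline, plurality voting on K-ary tasks. A toy simulation (this is a baseline for intuition, not the paper's low-rank inference algorithm; the worker model here is a simple "correct with probability q, else uniformly wrong" assumption):

```python
import random

def majority_error(K, q_correct, r, trials=4000, seed=0):
    """Empirical error of plurality voting on K-ary tasks when each of r
    workers answers correctly with probability q_correct and otherwise
    uniformly among the K-1 wrong labels."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        truth = rng.randrange(K)
        counts = [0] * K
        for _ in range(r):
            if rng.random() < q_correct:
                counts[truth] += 1
            else:
                wrong = rng.randrange(K - 1)
                counts[wrong if wrong < truth else wrong + 1] += 1
        errors += max(range(K), key=lambda k: counts[k]) != truth
    return errors / trials

# error drops sharply as the per-task redundancy r grows
print(majority_error(K=4, q_correct=0.5, r=1),
      majority_error(K=4, q_correct=0.5, r=15))
```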
ABSTRACT: This brief presents a technique to evaluate the timing variation of static random access memory (SRAM). Specifically, a method called loop flattening, which reduces the evaluation of the timing statistics in the complex highly structured circuit to that of a single chain of component circuits, is justified. Then, to very quickly evaluate the timing delay of a single chain, a statistical method based on importance sampling augmented with targeted high-dimensional spherical sampling can be employed. The overall methodology has shown 650× or greater speedup over the nominal Monte Carlo approach with 10.5% accuracy in probability. Examples based on both the large-signal and small-signal SRAM read path are discussed, and a detailed comparison with state-of-the-art accelerated statistical simulation techniques is given. IEEE Transactions on Very Large Scale Integration (VLSI) Systems 01/2013; 21(8):1558-1562. · 1.22 Impact Factor
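The statistical idea behind the accelerated simulation above is importance sampling for rare events: sample from a distribution shifted into the failure region and reweight by the likelihood ratio. A generic one-dimensional illustration (a sketch of the general technique only; the brief's method additionally uses spherical sampling in high dimensions, and the names here are illustrative):

```python
import math
import random

def importance_sample_tail(t, shift, n, seed=0):
    """Estimate P(Z > t) for a standard normal Z by sampling from a
    normal shifted by `shift` into the rare region and reweighting each
    hit by the likelihood ratio phi(z) / phi(z - shift)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(shift, 1.0)
        if z > t:
            total += math.exp(-shift * z + 0.5 * shift * shift)
    return total / n

# a plain Monte Carlo run of this size would typically see zero hits;
# the shifted sampler lands in the tail about half the time
est = importance_sample_tail(t=4.0, shift=4.0, n=20000)
print(est)
```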
ABSTRACT: The question of aggregating pairwise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers (e.g. MSR's TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a ranking, finding 'scores' for each object (e.g. player's rating) is of interest for understanding the intensity of the preferences. In this paper, we propose Rank Centrality, an iterative rank aggregation algorithm for discovering scores for objects (or items) from pairwise comparisons. The algorithm has a natural random walk interpretation over the graph of objects with an edge present between a pair of objects if they are compared; the score, which we call Rank Centrality, of an object turns out to be its stationary probability under this random walk. To study the efficacy of the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model in which each object has an associated score which determines the probabilistic outcomes of pairwise comparisons between objects. We bound the finite sample error rates between the scores assumed by the BTL model and those estimated by our algorithm. In particular, the number of samples required to learn the score well with high probability depends on the structure of the comparison graph. When the Laplacian of the comparison graph has a strictly positive spectral gap, e.g. each item is compared to a subset of randomly chosen items, this leads to an order-optimal dependence on the number of samples. Experimental evaluations on synthetic datasets generated according to the BTL model show that our algorithm performs as well as the Maximum Likelihood estimator for that model and outperforms a recently proposed algorithm by Ammar and Shah (2011). 09/2012;
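The random-walk construction described above admits a compact sketch: move from item i to item j in proportion to the fraction of comparisons j won against i, then read the scores off the stationary distribution. A minimal version (normalization by the maximum degree and 1000 power-iteration steps are simplifying choices for this sketch):

```python
import numpy as np

def rank_centrality(wins):
    """Rank Centrality sketch. wins[i][j] = number of comparisons that
    item j won against item i. The walk moves from i to j in proportion
    to the fraction of times j beat i; scores are its stationary
    distribution, found here by power iteration."""
    A = np.asarray(wins, dtype=float)
    totals = A + A.T                                   # comparisons per pair
    frac = np.divide(A, totals, out=np.zeros_like(A), where=totals > 0)
    d_max = max(1, int((frac > 0).sum(axis=1).max()))  # normalize by max degree
    P = frac / d_max
    P += np.diag(1.0 - P.sum(axis=1))                  # self-loops keep P stochastic
    pi = np.full(len(A), 1.0 / len(A))
    for _ in range(1000):
        pi = pi @ P
    return pi

# item 2 beats both others most of the time; item 1 mostly beats item 0
wins = [[0, 7, 9],
        [3, 0, 8],
        [1, 2, 0]]
scores = rank_centrality(wins)
print(scores.argsort())   # ranking from weakest to strongest
```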
ABSTRACT: We consider the problem of detecting the source of a rumor (information diffusion) in a network based on observations about which set of nodes possess the rumor. In a recent work [10], this question was introduced and studied. The authors proposed rumor centrality as an estimator for detecting the source. They establish it to be the maximum likelihood estimator with respect to the popular Susceptible Infected (SI) model with exponential spreading time for regular trees. They showed that as the size of the infected graph increases, for a line (2-regular tree) graph, the probability of source detection goes to 0, while for d-regular trees with d ≥ 3 the probability of detection, say αd, remains bounded away from 0 and is less than 1/2. Their results, however, stop short of providing insights for the heterogeneous setting such as irregular trees or the SI model with non-exponential spreading times. This paper overcomes this limitation and establishes the effectiveness of rumor centrality for source detection for generic random trees and the SI model with a generic spreading time distribution. The key result is an interesting connection between a multi-type continuous-time branching process (an equivalent representation of a generalized Polya's urn, cf. [1]) and the effectiveness of rumor centrality. Through this, it is possible to quantify the detection probability precisely. As a consequence, we recover all the results of [10] as a special case and, more importantly, we obtain a variety of results establishing the universality of rumor centrality in the context of tree-like graphs and the SI model with a generic spreading time distribution. ACM SIGMETRICS Performance Evaluation Review 06/2012;
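On a tree, the rumor centrality of a node counts the infection orderings that start there, and it has a closed form: n! divided by the product of subtree sizes when the tree is rooted at that node. A small sketch of computing it (the estimator then returns the node maximizing this count):

```python
import math

def rumor_centrality(adj, v):
    """Rumor centrality of node v in a tree (adjacency dict): the number
    of infection orderings starting at v, equal to
    n! / prod over nodes u of |subtree of u when rooted at v|."""
    sizes = {}
    def subtree(u, parent):
        s = 1
        for w in adj[u]:
            if w != parent:
                s += subtree(w, u)
        sizes[u] = s
        return s
    subtree(v, None)
    return math.factorial(len(adj)) // math.prod(sizes.values())

# star graph: the center admits 3! = 6 infection orders, a leaf only 2,
# so the estimator picks the center
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print([rumor_centrality(star, v) for v in star])  # [6, 2, 2, 2]
```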
ABSTRACT: We consider switched queueing networks in which there are constraints on which queues may be served simultaneously. The scheduling policy for such a network specifies which queues to serve at any point in time. We introduce and study a variant of the popular maximum weight or backpressure policy which chooses the collection of queues to serve that has maximum weight. Unlike the maximum weight policies studied in the literature, in this paper the weight of a queue depends on the logarithm of its queue size. For any multihop switched network operating under such a maximum log-weighted policy, we establish that the network Markov process is positive recurrent as long as it is underloaded. As the main result of this paper, a meaningful fluid model is established as the formal functional law of large numbers approximation. The fluid model is shown to be work-conserving. That is, work (or total queue size) is non-increasing as long as the network is underloaded or critically loaded. We identify invariant states or fixed points of the fluid model. When underloaded, the null state is the unique invariant state. For a critically loaded fluid model, the space of invariant states is characterized as the solution space of an optimization problem whose objective is a lexicographic ordering of the total queue size and the negative entropy of the queue state. An important contribution of this work is in overcoming the challenge presented by the log-weight function in establishing a meaningful fluid model. Specifically, the known approaches in the literature primarily relied on the “scale invariance” property of the weight function, which the log function does not possess. Queueing Systems 06/2012; 71(1-2). · 0.44 Impact Factor
ABSTRACT: This paper looks at the problem of designing a medium access algorithm for wireless networks with the objective of providing high throughput and low delay performance to the users, while requiring only a modest computational effort at the transmitters and receivers. Additive inter-user interference at the receivers is an important physical layer characteristic of wireless networks. Today's WiFi networks are based upon an abstraction of the physical layer where inter-user interference is considered as noise, leading to the 'collision' model in which users are required to coordinate their transmissions through Carrier Sensing Multiple Access (CSMA)-based schemes to avoid interference. This, in turn, leads to an inherent performance tradeoff [1]: it is impossible to obtain high throughput and low delay by means of a low-complexity medium access algorithm (unless P=NP). As the main result, we establish that this tradeoff is primarily due to treating interference as noise in the current wireless architecture. Concretely, we develop a simple medium access algorithm that allows for simultaneous transmissions of users to the same receiver by performing joint decoding at receivers, over time. For a receiver to be able to decode multiple transmissions quickly enough, we develop appropriate congestion control where each transmitter maintains a "window" of undecoded transmitted data that is adjusted based upon the "feedback" from the receiver. In summary, this provides an efficient, low complexity "online" code operating at varying rate, and the system as a whole experiences only a small amount of delay (including decoding time) while operating at high throughput. 01/2012;
ABSTRACT: We consider a switched (queueing) network in which there are constraints on which queues may be served simultaneously; such networks have been used to effectively model input-queued switches and wireless networks. The scheduling policy for such a network specifies which queues to serve at any point in time, based on the current state or past history of the system. In the main result of this paper, we provide a new class of online scheduling policies that achieve optimal average queue-size scaling for a class of switched networks including input-queued switches. In particular, it establishes the validity of a conjecture about optimal queue-size scaling for input-queued switches. 10/2011;
ABSTRACT: Crowdsourcing systems, in which numerous tasks are electronically distributed to numerous "information pieceworkers", have emerged as an effective paradigm for human-powered solving of large-scale problems in domains such as image classification, data entry, optical character recognition, recommendation, and proofreading. Because these low-paid workers can be unreliable, nearly all such systems must devise schemes to increase confidence in their answers, typically by assigning each task multiple times and combining the answers in an appropriate manner, e.g. majority voting. In this paper, we consider a general model of such crowdsourcing tasks and pose the problem of minimizing the total price (i.e., number of task assignments) that must be paid to achieve a target overall reliability. We give a new algorithm for deciding which tasks to assign to which workers and for inferring correct answers from the workers' answers. We show that our algorithm, inspired by belief propagation and low-rank matrix approximation, significantly outperforms majority voting and, in fact, is optimal through comparison to an oracle that knows the reliability of every worker. Further, we compare our approach with a more general class of algorithms which can dynamically assign tasks. By adaptively deciding which questions to ask to the next arriving worker, one might hope to reduce uncertainty more efficiently. We show that, perhaps surprisingly, the minimum price necessary to achieve a target reliability scales in the same manner under both adaptive and non-adaptive scenarios. Hence, our non-adaptive approach is order-optimal under both scenarios. This strongly relies on the fact that workers are fleeting and cannot be exploited. Therefore, architecturally, our results suggest that building a reliable worker-reputation system is essential to fully harnessing the potential of adaptive designs. 10/2011;
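The inference idea above, alternating between task estimates (reliability-weighted votes) and worker weights (agreement with current estimates), can be sketched as a dense power iteration on the answer matrix. This captures the low-rank spirit of the algorithm only; the actual algorithm uses belief propagation with leave-one-out messages on a sparse assignment graph, and the spammer-hammer worker model below is a simplifying assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tasks, n_workers = 60, 30
truth = rng.choice([-1, 1], size=n_tasks)
# spammer-hammer crowd: each worker is correct w.p. 0.5 or 0.9
p = rng.choice([0.5, 0.9], size=n_workers)
A = truth[:, None] * np.where(rng.random((n_tasks, n_workers)) < p, 1, -1)

y = np.ones(n_workers)            # worker weights, initially uniform
for _ in range(10):
    x = A @ y                      # task messages: weighted votes
    y = A.T @ x                    # worker messages: agreement with estimates
    y /= np.linalg.norm(y)
estimates = np.sign(A @ y)
accuracy = float(np.mean(estimates == truth))
print(accuracy)
```

Weighting workers by inferred reliability lets the hammers dominate the spammers, which is how the algorithm beats plain majority voting.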
Conference Paper: Fast averaging
ABSTRACT: We are interested in the following question: given n numbers x_1, ..., x_n, what sort of approximation of the average x_ave = (x_1 + ... + x_n)/n can be achieved by knowing only r of these n numbers? Indeed, the answer depends on the variation in these n numbers. As the main result, we show that if the vector of these n numbers satisfies certain regularity properties, captured in the form of finiteness of their empirical moments (third or higher), then it is possible to compute an approximation of x_ave that is within a 1 ± ε multiplicative factor with probability at least 1 - δ by choosing, on average, r = r(ε, δ, σ) of the n numbers at random, where r depends only on ε, δ and the amount of variation σ in the vector and is independent of n. The task of computing the average has a variety of applications such as distributed estimation and optimization, a model for reaching consensus, and computing symmetric functions. We discuss implications of the result in the context of two applications: load-balancing in a computational facility running MapReduce, and fast distributed averaging. Information Theory Proceedings (ISIT), 2011 IEEE International Symposium on; 09/2011
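The headline claim, that the sample count r needed for a good estimate of the average does not depend on n, is easy to see with uniform sampling. A minimal sketch (plain sampling with replacement; the paper's analysis specifies how r must grow with the tolerance and the empirical moments):

```python
import random

def approx_average(x, r, seed=0):
    """Estimate the average of the n numbers in x from r uniform random
    samples drawn with replacement; r is chosen independently of n."""
    rng = random.Random(seed)
    return sum(rng.choice(x) for _ in range(r)) / r

x = list(range(1_000_000))        # true average 499999.5
est = approx_average(x, r=400)    # 400 samples out of a million numbers
print(est)
```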
ABSTRACT: We consider the problem of static assortment optimization, where the goal is to find the assortment of size at most $C$ that maximizes revenues. This is a fundamental decision problem in the area of Operations Management. It has been shown that this problem is provably hard for most of the important parametric families of choice models, except the multinomial logit (MNL) model. In addition, most of the approximation schemes proposed in the literature are tailored to a specific parametric structure. We deviate from this and propose a general algorithm to find the optimal assortment assuming access to only a subroutine that gives revenue predictions; this means that the algorithm can be applied with any choice model. We prove that when the underlying choice model is the MNL model, our algorithm can find the optimal assortment efficiently. 08/2011;
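A sketch of the oracle-based idea in the abstract: query only a revenue-prediction subroutine, here over revenue-ordered candidate assortments, a restriction known to be lossless under MNL. This is an illustration of the interface, not the paper's exact algorithm; the toy MNL oracle and all names are assumptions:

```python
def best_assortment(prices, revenue_of, C):
    """Search over revenue-ordered assortments of size at most C, using
    only the revenue-prediction subroutine `revenue_of(assortment)`."""
    order = sorted(range(len(prices)), key=lambda i: -prices[i])
    best_rev, best_set = 0.0, []
    for k in range(1, C + 1):
        s = order[:k]
        rev = revenue_of(s)
        if rev > best_rev:
            best_rev, best_set = rev, s
    return best_set, best_rev

# toy MNL oracle: item i has weight w[i], the no-purchase option weight 1
prices = [10.0, 8.0, 6.0, 4.0]
w = [0.2, 0.5, 0.8, 1.0]
def mnl_revenue(s):
    denom = 1.0 + sum(w[i] for i in s)
    return sum(prices[i] * w[i] for i in s) / denom

print(best_assortment(prices, mnl_revenue, C=2))
```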
Article: Efficient Distributed Medium Access
ABSTRACT: Consider a wireless network of n nodes represented by a graph G=(V, E) where an edge (i,j) models the fact that transmissions of i and j interfere with each other, i.e. simultaneous transmissions of i and j become unsuccessful. Hence it is required that at each time instance a set of non-interfering nodes (corresponding to an independent set in G) access the wireless medium. To utilize wireless resources efficiently, it is required to arbitrate the access of the medium among interfering nodes properly. Moreover, to be of practical use, such a mechanism is required to be totally distributed as well as simple. As the main result of this paper, we provide such a medium access algorithm. It is randomized, totally distributed and simple: each node attempts to access the medium at each time with a probability that is a function of its local information. We establish efficiency of the algorithm by showing that the corresponding network Markov chain is positive recurrent as long as the demand imposed on the network can be supported by the wireless network (using any algorithm). In that sense, the proposed algorithm is optimal in terms of utilizing wireless resources. The algorithm is oblivious to the network graph structure, in contrast with the so-called 'polynomial backoff' algorithm by Hastad-Leighton-Rogoff (STOC '87, SICOMP '96) that is established to be optimal for the complete graph and bipartite graphs (by Goldberg-MacKenzie (SODA '96, JCSS '99)). Computing Research Repository (CoRR), 04/2011;
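The "attempt with a probability that is a function of local information" rule above can be simulated on a small interference graph. The access function q/(1+q) of the node's own queue length is one plausible local rule chosen for this sketch; the paper's exact access function differs:

```python
import random

def medium_access_slot(queues, neighbors, rng):
    """One slot of a queue-based random-access sketch: each nonempty node
    attempts transmission with probability q/(1+q) of its own queue
    length, and succeeds only if no interfering neighbor also attempts."""
    attempts = {v for v, q in queues.items()
                if q > 0 and rng.random() < q / (1.0 + q)}
    for v in attempts:
        if not attempts & set(neighbors[v]):
            queues[v] -= 1          # collision-free transmission succeeds

# interference graph: a path 0 - 1 - 2 (0 and 2 may transmit together)
neighbors = {0: [1], 1: [0, 2], 2: [1]}
queues = {0: 5, 1: 5, 2: 5}
rng = random.Random(0)
for _ in range(200):
    medium_access_slot(queues, neighbors, rng)
print(sum(queues.values()))   # total backlog strictly below the initial 15
```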
ABSTRACT: We consider the problem of designing a fair scheduling algorithm for discrete-time constrained queueing networks. Each queue has dedicated exogenous packet arrivals. There are constraints on which queues can be served simultaneously. This model effectively describes important special instances like network switches, interference in wireless networks, bandwidth sharing for congestion control and traffic scheduling in road roundabouts. Fair scheduling is required because it provides isolation to different traffic flows; isolation makes the system more robust and enables providing quality of service. Existing work on fairness for constrained networks concentrates on flow-based fairness. As a main result, we describe a notion of packet-based fairness by establishing an analogy with the ranked election problem: packets are voters, schedules are candidates, and each packet ranks the schedules based on its priorities. We then obtain a scheduling algorithm that achieves the described notion of fairness by drawing upon the seminal work of Goodman and Markowitz (1952). This yields the familiar Maximum Weight (MW) style algorithm. As another important result, we prove that the algorithm obtained is throughput optimal. There is no reason a priori why this should be true, and the proof requires nontraditional methods. IEEE Transactions on Information Theory 04/2011; · 2.62 Impact Factor
ABSTRACT: The theory of network coding promises significant benefits in network performance, especially in lossy networks and in multicast and multipath scenarios. To realize these benefits in practice, we need to understand how coding across packets interacts with the acknowledgment (ACK)-based flow control mechanism that forms a central part of today's Internet protocols such as transmission control protocol (TCP). Current approaches such as rateless codes and batch-based coding are not compatible with TCP's retransmission and sliding-window mechanisms. In this paper, we propose a new mechanism called TCP/NC that incorporates network coding into TCP with only minor changes to the protocol stack, thereby allowing incremental deployment. In our scheme, the source transmits random linear combinations of packets currently in the congestion window. At the heart of our scheme is a new interpretation of ACKs: the sink acknowledges every degree of freedom (i.e., a linear combination that reveals one unit of new information) even if it does not reveal an original packet immediately. Thus, our new TCP ACK rule takes into account the network coding operations in the lower layer and enables a TCP-compatible sliding-window approach to network coding. Coding essentially masks losses from the congestion control algorithm and allows TCP/NC to react smoothly to losses, resulting in a novel and effective approach for congestion control over lossy networks such as wireless networks. An important feature of our solution is that it allows intermediate nodes to perform re-encoding of packets, which is known to provide significant throughput gains in lossy networks and multicast scenarios. Simulations show that our scheme, with or without re-encoding inside the network, achieves much higher throughput compared to TCP over lossy wireless links.
We present a real-world implementation of this protocol that addresses the practical aspects of incorporating network coding and decoding with TCP's window management mechanism. We work with TCP-Reno, which is a widespread and practical variant of TCP. Our implementation significantly advances the goal of designing a deployable, general, TCP-compatible protocol that provides the benefits of network coding. Proceedings of the IEEE 04/2011; · 6.91 Impact Factor
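The "ACK degrees of freedom, not packets" rule above can be sketched in a few lines: the source sends random linear combinations of the packets in its window, and the sink ACKs whenever a received combination increases the rank of what it has seen. For brevity this sketch works over GF(2) with bit-mask coefficient vectors; the actual scheme uses a larger field, and the loss rate here is an illustrative assumption:

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of coefficient vectors encoded as bit masks."""
    basis = {}                          # leading-bit position -> vector
    for r in rows:
        while r:
            hb = r.bit_length() - 1
            if hb not in basis:
                basis[hb] = r
                break
            r ^= basis[hb]
    return len(basis)

rng = random.Random(1)
received, acks, sent = [], 0, 0
while acks < 4:                         # window of 4 packets
    sent += 1
    coeff = rng.randrange(1, 1 << 4)    # random nonzero combination
    if rng.random() < 0.3:              # 30% loss on the link
        continue
    received.append(coeff)
    if gf2_rank(received) > acks:       # new degree of freedom: ACK it
        acks += 1
print(acks, sent)
```

Because any innovative combination is ACKed, losses and duplicate combinations look like mild reordering to the sender rather than losses, which is what lets congestion control react smoothly.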
Conference Paper: Medium Access Using Queues.
ABSTRACT: Consider a wireless network of n nodes represented by an (undirected) graph G where an edge (i,j) models the fact that transmissions of i and j interfere with each other, i.e. simultaneous transmissions of i and j become unsuccessful. Hence it is required that at each time instance a set of non-interfering nodes (corresponding to an independent set in G) access the wireless medium. To utilize wireless resources efficiently, it is required to arbitrate the access of the medium among interfering nodes properly. Moreover, to be of practical use, such a mechanism is required to be totally distributed as well as simple. As the main result of this paper, we provide such a medium access algorithm. It is randomized, totally distributed and simple: each node attempts to access the medium at each time with a probability that is a function of its local information. We establish efficiency of the algorithm by showing that the corresponding network Markov chain is positive recurrent as long as the demand imposed on the network can be supported by the wireless network (using any algorithm). In that sense, the proposed algorithm is optimal in terms of utilizing wireless resources. The algorithm is oblivious to the network graph structure, in contrast with the so-called polynomial backoff algorithm by Hastad-Leighton-Rogoff (STOC '87, SICOMP '96) that is established to be optimal for the complete graph and bipartite graphs (by Goldberg-MacKenzie (SODA '96, JCSS '99)). IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011, Palm Springs, CA, USA, October 22-25, 2011; 01/2011
ABSTRACT: The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges within a multiplicative error 1 + ε of a fixed point in O(n²E ε⁻⁴ log³(nE ε⁻¹)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message-passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^{-γ}) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions were expecting an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques. SIAM J. Discrete Math. 01/2011; 25:1012-1034.
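Standard BP for the hard-core model, whose fixed points the paper targets, can be written in a few lines in ratio form. This sketch is the plain update, exact on trees; the paper's contribution, a modified always-convergent scheme for general graphs, is not reproduced here:

```python
import math

def bp_hardcore(adj, lam=1.0, iters=100):
    """Belief propagation for the hard-core model with activity lam:
    the message r[i -> j] is the relative weight for node i being
    occupied when edge (i, j) is cut. Returns occupancy marginals."""
    r = {(i, j): lam for i in adj for j in adj[i]}
    for _ in range(iters):
        r = {(i, j): lam * math.prod(1.0 / (1.0 + r[(k, i)])
                                     for k in adj[i] if k != j)
             for i in adj for j in adj[i]}
    marginals = {}
    for i in adj:
        t = lam * math.prod(1.0 / (1.0 + r[(k, i)]) for k in adj[i])
        marginals[i] = t / (1.0 + t)
    return marginals

# the path 0 - 1 - 2 has 5 independent sets; the middle node is occupied
# in exactly 1 of them and each end node in 2 of them
path = {0: [1], 1: [0, 2], 2: [1]}
print(bp_hardcore(path))   # ≈ {0: 0.4, 1: 0.2, 2: 0.4}
```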
Article: Information Theoretic Bounds for Distributed Computation Over Networks of Point-to-Point Channels
ABSTRACT: A network of nodes communicate via point-to-point memoryless independent noisy channels. Each node has some real-valued initial measurement or message. The goal of each of the nodes is to acquire an estimate of a given function of all the initial measurements in the network. As the main contribution of this paper, a lower bound on computation time is derived. This bound must be satisfied by any algorithm used by the nodes to communicate and compute, so that the mean-square error in the nodes' estimate is within a given interval around zero. The derivation utilizes information theoretic inequalities reminiscent of those used in rate distortion theory along with a novel “perturbation” technique so as to be broadly applicable. To understand the tightness of the bound, a specific scenario is considered. Nodes are required to learn a linear combination of the initial values in the network while communicating over erasure channels. A distributed quantized algorithm is developed, and it is shown that the computation time essentially scales as is implied by the lower bound. In particular, the computation time depends reciprocally on “conductance”, which is a property of the network that captures the information-flow bottleneck. As a byproduct, this leads to a quantized algorithm, for computing separable functions in a network, with minimal computation time. IEEE Transactions on Information Theory 01/2011; · 2.62 Impact Factor
Publication Stats
2k  Citations  
77.73  Total Impact Points  
Institutions

2006–2013

Massachusetts Institute of Technology
 • Laboratory for Information and Decision Systems
 • Department of Electrical Engineering and Computer Science
Cambridge, Massachusetts, United States


2011

Georgia Institute of Technology
Atlanta, Georgia, United States


2001–2008

Stanford University
 • Department of Computer Science
 • Department of Electrical Engineering
Stanford, CA, United States 
Politecnico di Torino
 DET  Department of Electronics and Telecommunications
Torino, Piedmont, Italy
