Publications (191) · 107.02 Total Impact Points
ABSTRACT: In this paper we provide a new geometric characterization of the Hirschfeld–Gebelein–Rényi maximal correlation of a pair of random variables $(X,Y)$, as well as of the chordal slope of the nontrivial boundary of the hypercontractivity ribbon of $(X,Y)$ at infinity. The new characterizations lead to simple proofs for some of the known facts about these quantities. We also provide a counterexample to a data processing inequality claimed by Erkip and Cover, and find the correct tight constant for this kind of inequality.
04/2013
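The maximal correlation discussed in the abstract above has a well-known spectral characterization (due to Witsenhausen, not taken from this paper): it equals the second-largest singular value of the matrix Q[x, y] = P(x, y) / sqrt(P(x) P(y)). A minimal numerical sketch of this fact, assuming a doubly symmetric binary source:

```python
import numpy as np

def maximal_correlation(P):
    """Hirschfeld-Gebelein-Renyi maximal correlation of a finite joint pmf P,
    computed as the second-largest singular value of P(x,y)/sqrt(P(x)P(y))."""
    px = P.sum(axis=1)  # marginal of X
    py = P.sum(axis=0)  # marginal of Y
    Q = P / np.sqrt(np.outer(px, py))
    s = np.linalg.svd(Q, compute_uv=False)  # singular values, descending
    return s[1]  # the largest singular value is always 1

# Doubly symmetric binary source with crossover eps: known value is 1 - 2*eps.
eps = 0.1
P = np.array([[(1 - eps) / 2, eps / 2],
              [eps / 2, (1 - eps) / 2]])
print(maximal_correlation(P))  # 0.8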
Conference Proceeding: Improved cardinality bounds on the auxiliary random variables in Marton's inner bound
ABSTRACT: Marton's region is the best known inner bound for a general discrete memoryless broadcast channel. We establish improved bounds on the cardinalities of the auxiliary random variables. We combine the perturbation technique with a representation using concave envelopes to achieve this improvement. As a corollary of this result, we show that a randomized time-division strategy achieves the entire Marton region for binary-input broadcast channels, extending the previously known result for the sum-rate and validating a previous conjecture due to the same authors.
Information Theory Proceedings (ISIT), 2013 IEEE International Symposium on; 01/2013
ABSTRACT: Shannon's Entropy Power Inequality can be viewed as characterizing the minimum differential entropy achievable by the sum of two independent random variables with fixed differential entropies. The entropy power inequality has played a key role in resolving a number of problems in information theory. It is therefore interesting to examine the existence of a similar inequality for discrete random variables. In this paper we obtain an entropy power inequality for random variables taking values in an abelian group of order 2^n, i.e., for such a group G we explicitly characterize the function f_G(x,y) giving the minimum entropy of the sum of two independent G-valued random variables with respective entropies x and y. Random variables achieving the extremum in this inequality are thus the analogs of Gaussians in this case, and these are also determined. It turns out that f_G(x,y) is convex in x for fixed y and, by symmetry, convex in y for fixed x. This is a generalization to abelian groups of order 2^n of the result known as Mrs. Gerber's Lemma.
07/2012
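For the base case G = Z_2, the extremal function above reduces to the classical Mrs. Gerber's Lemma: f(x, y) = h(h^{-1}(x) * h^{-1}(y)), where h is the binary entropy function and a * b = a(1-b) + b(1-a) is binary convolution. A numerical sketch of this base case (illustrative only; the function names and the bisection inverse are not from the paper):

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h_inv(x):
    """Inverse of h restricted to [0, 1/2], computed by bisection."""
    lo, hi = 0.0, 0.5
    for _ in range(100):
        mid = (lo + hi) / 2
        if h(mid) < x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def conv(a, b):
    """Binary convolution a * b = a(1-b) + b(1-a)."""
    return a * (1 - b) + b * (1 - a)

def f_Z2(x, y):
    """Minimum entropy of X + Y over independent Z_2-valued X, Y with
    H(X) = x and H(Y) = y; the minimizers are Bernoulli variables."""
    return h(conv(h_inv(x), h_inv(y)))

print(f_Z2(0.5, 0.5))  # strictly larger than 0.5: adding noise raises entropy
print(f_Z2(1.0, 0.3))  # a uniform summand forces a uniform sum (entropy 1)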
ABSTRACT: Marton's inner bound is the best known achievable region for a general discrete memoryless broadcast channel. To compute Marton's inner bound one has to solve an optimization problem over a set of joint distributions on the input and auxiliary random variables. The optimizers turn out to be structured in many cases. Finding properties of optimizers not only results in efficient evaluation of the region, but may also help one to prove factorization of Marton's inner bound (and thus its optimality). The first part of this paper formulates this factorization approach explicitly and states some conjectures and results along this line. The second part focuses primarily on the structure of the optimizers. It is inspired by a new binary inequality that recently resulted in a very simple characterization of the sum-rate of Marton's inner bound for binary-input broadcast channels. This prompted us to investigate whether the inequality can be extended to larger-cardinality input alphabets. We show that several of the results for the binary-input case do carry over to higher-cardinality alphabets, and we present a collection of results that help restrict the search space of probability distributions when evaluating the boundary of Marton's inner bound in the general case. We also prove a new inequality for the binary skew-symmetric broadcast channel that yields a very simple characterization of the entire Marton inner bound for this channel.
02/2012
Conference Proceeding: Non-interactive simulation of joint distributions: The Hirschfeld–Gebelein–Rényi maximal correlation and the hypercontractivity ribbon
ABSTRACT: We consider the following problem: Alice and Bob observe sequences X^n and Y^n respectively, where {(X_i, Y_i)}_{i=1}^∞ are drawn i.i.d. from P(x, y), and they output U and V respectively, which are required to have a joint law that is close in total variation to a specified Q(u, v). One important technique for establishing impossibility results for this problem is the Hirschfeld–Gebelein–Rényi maximal correlation, which was considered by Witsenhausen [1]. Hypercontractivity, studied by Ahlswede and Gács [2], and reverse hypercontractivity, recently studied by Mossel et al. [3], provide another approach for proving impossibility results. We consider the tightest impossibility results that can be obtained using hypercontractivity and reverse hypercontractivity and provide a necessary and sufficient condition on the source distribution P(x, y) for when this approach subsumes the maximal correlation approach. We show that the binary pair source distribution with symmetric noise satisfies this condition.
Communication, Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on; 01/2012
ABSTRACT: A positive recurrent, aperiodic Markov chain is said to be long-range dependent (LRD) when the indicator function of a particular state is LRD. This happens if and only if the return time distribution for that state has infinite variance. We investigate the question of whether other instantaneous functions of the Markov chain also inherit this property. We provide conditions under which the function has the same degree of long-range dependence as the chain itself. We illustrate our results through three examples in diverse fields: queueing networks, source compression, and finance.
Journal of Applied Probability 01/2012; 49(2012). · 0.55 Impact Factor
ABSTRACT: We discuss functions of long-range dependent Markov chains. We state sufficient conditions under which an instantaneous function of a long-range dependent Markov chain has the same Hurst index as the underlying chain. We discuss several applications of the theorem in the fields of information theory, queueing networks, and finance.
01/2012
Conference Proceeding: Stable, distributed P2P protocols based on random peer sampling
ABSTRACT: In a peer-to-peer file sharing system based on random contacts where the upload capacity of the seed is small, a single chunk of the file may become rare, causing an accumulation of peers who lack the rare chunk. To prevent this from happening, we propose a protocol where each peer samples a small population of peers and makes an intelligent decision about which chunk to download based on this sample. We prove that the resulting system is stable under any arrival rate of peers, even if the seed has small, bounded upload capacity.
Communication, Control, and Computing (Allerton), 2012 50th Annual Allerton Conference on; 01/2012
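As a toy illustration of the sampling idea in this entry (the actual decision rule, the sample size, and the stability proof are in the paper; the rarest-in-sample rule below is only an assumed stand-in):

```python
import random

def pick_chunk(my_chunks, peers, num_chunks, sample_size=5):
    """Sample a few peers and return the missing chunk that appears rarest
    in the sample, or None if this peer already has every chunk."""
    missing = [c for c in range(num_chunks) if c not in my_chunks]
    if not missing:
        return None
    sample = random.sample(peers, min(sample_size, len(peers)))
    counts = {c: sum(c in p for p in sample) for c in missing}
    return min(missing, key=lambda c: counts[c])  # rarest in the sample

# Example: 4 chunks; chunk 3 is held by only one peer in the population.
peers = [{0, 1, 2}, {0, 1, 2}, {0, 2}, {1, 2}, {0, 1, 2, 3}]
print(pick_chunk({0}, peers, num_chunks=4))  # 3 (the rare chunk)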
Conference Proceeding: Pointwise lossy source coding theorem for sources with memory
ABSTRACT: We investigate the minimum pointwise redundancy of variable-length lossy source codes operating at fixed distortion for sources with memory. The redundancy is defined by l_n(X_1^n) − nR(D), where l_n(X_1^n) is the code length at block size n and R(D) is the rate-distortion function. We restrict ourselves to the case where R(D) can be calculated, namely the cases where the Shannon lower bound to R(D) holds with equality. In this case, for balanced distortion measures, we provide a pointwise lower bound to the code length sequence in terms of the entropy density process. We show that the minimum coding variance with distortion is lower bounded by the minimum lossless coding variance, and is nonzero unless the entropy density is deterministic. We also examine lossy coding in the presence of long-range dependence, showing the existence of information sources for which long-range dependence persists under any codec operating at the Shannon lower bound with fixed distortion.
Information Theory Proceedings (ISIT), 2012 IEEE International Symposium on; 01/2012
Conference Proceeding: Generalized Network Sharing Outer Bound and the Two-Unicast Problem
ABSTRACT: We describe a simple improvement over the Network Sharing outer bound for the multiple-unicast problem. We call this the Generalized Network Sharing (GNS) outer bound. We note two properties of this bound with regard to the two-unicast problem: a) it is the tightest bound that can be realized using only edge-cut bounds, and b) it is tight in the special case when all edges except those from a so-called minimal GNS set have sufficiently large capacities. Finally, we present an example showing that the GNS outer bound is not tight for the two-unicast problem.
Network Coding (NetCod), 2011 International Symposium on; 08/2011
ABSTRACT: P2P systems provide a scalable solution for distributing large files in a network. The file is split into many chunks, and peers contact other peers to collect missing chunks to eventually complete the entire file. The so-called `rare chunk' phenomenon, where a single chunk becomes rare and prevents peers from completing the file, is a threat to the stability of such systems. Practical systems such as BitTorrent overcome this issue by requiring a global search for the rare chunk, which necessitates a centralized mechanism. We demonstrate a new system based on an approximate rare-chunk rule, allowing for completely distributed file sharing while retaining scalability and stability. We assume non-altruistic peers, and the seed is required to make only a minimal contribution.
07/2011
ABSTRACT: In this paper we study the problem of generation of dependent random variables, known as the "coordination capacity" [4,5], in multiterminal networks. In this model $m$ nodes of the network observe i.i.d. repetitions of $X^{(1)}, X^{(2)},..., X^{(m)}$ distributed according to $q(x^{(1)},...,x^{(m)})$. Given a joint distribution $q(x^{(1)},...,x^{(m)},y^{(1)},...,y^{(m)})$, the goal of the $i^{th}$ node is to construct i.i.d. copies of $Y^{(i)}$ after communication over the network, where $X^{(1)}, X^{(2)},..., X^{(m)}, Y^{(1)}, Y^{(2)},..., Y^{(m)}$ are jointly distributed according to $q(x^{(1)},...,x^{(m)},y^{(1)},...,y^{(m)})$. To do this, the nodes can exchange messages over the network at rates not exceeding the capacity constraints of the links. This problem is difficult to solve even for the special case of two nodes. In this paper we prove new inner and outer bounds on the achievable rates for networks with two nodes.
05/2011
ABSTRACT: We study a network security game where strategic players choose their investments in security. Since a player's investment can reduce the propagation of computer viruses, a key feature of the game is the positive externality exerted by the investment. With selfish players, unfortunately, the overall network security can be far from optimum. The contributions of this paper are as follows. 1) We first characterize the price of anarchy (POA) in the strategic-form game under an "Effective-investment" model and a "Bad-traffic" model, and give insight into how the POA depends on individual players' cost functions and their mutual influence. We also introduce the concept of "weighted POA" to bound the region of payoff vectors. 2) In a repeated game, players have more incentive to cooperate for their long-term interests. We consider the socially best outcome that can be supported by the repeated game, as compared to the social optimum. 3) Next, we compare the benefits of improving security technology and improving incentives, and show that improving technology alone may not offset the price of anarchy. 4) Finally, we characterize the performance of correlated equilibrium (CE). Although the paper focuses on network security, many results are generally applicable to games with positive externalities.
IEEE/ACM Transactions on Networking 05/2011; · 2.01 Impact Factor
Conference Proceeding: Network Topology Design Game: How to Choose Communication Links in an Adversarial Environment?
2nd International ICST Conference on Game Theory for Networks (GameNets); 04/2011 
Conference Proceeding: Stochastic approximation with ‘bad’ noise
ABSTRACT: Stability and convergence properties of stochastic approximation algorithms are analyzed when the noise includes a long-range dependent component (modeled by a fractional Brownian motion) and a heavy-tailed component (modeled by a symmetric stable process), in addition to the usual `martingale noise'. This is motivated by emergent applications in communications. The proofs are based on comparing suitably interpolated iterates with a limiting ordinary differential equation. Related issues such as asynchronous implementations, Markov noise, etc., are briefly discussed.
Information Theory and Applications Workshop (ITA), 2011; 03/2011

IEEE/ACM Trans. Netw. 01/2011; 19:549–560.

Conference Proceeding: Generating dependent random variables over networks
ABSTRACT: In this paper we study the problem of generation of dependent random variables, known as the "coordination capacity" [4], [5], in multiterminal networks. In this model m nodes of the network observe i.i.d. repetitions of X(1), X(2), ..., X(m) distributed according to q(x(1), ..., x(m)). Given a joint distribution q(x(1), ..., x(m), y(1), ..., y(m)), the goal of the ith node is to construct i.i.d. copies of Y(i) after communication over the network, where X(1), X(2), ..., X(m), Y(1), Y(2), ..., Y(m) are jointly distributed according to q(x(1), ..., x(m), y(1), ..., y(m)). To do this, the nodes can exchange messages over the network at rates not exceeding the capacity constraints of the links. This problem is difficult to solve even for the special case of two nodes. In this paper we prove new inner and outer bounds on the achievable rates for networks with two nodes.
Information Theory Workshop (ITW), 2011 IEEE; 01/2011
ABSTRACT: This paper studies the Shannon regime for the random displacement of stationary point processes. Let each point of some initial stationary point process in $\R^n$ give rise to one daughter point, the location of which is obtained by adding a random vector to the coordinates of the mother point, with all displacement vectors independently and identically distributed for all points. The decoding problem is then the following: the whole mother point process is known, as well as the coordinates of some daughter point; the displacements are known only through their law; can one find the mother of this daughter point? The Shannon regime is that where the dimension $n$ tends to infinity and where the logarithm of the intensity of the point process is proportional to $n$. We show that this problem exhibits a sharp threshold: if the sum of the proportionality factor and of the differential entropy rate of the noise is positive, then the probability of finding the right mother point tends to 0 with $n$ for all point processes and decoding strategies. If this sum is negative, there exist mother point processes, for instance Poisson, and decoding strategies, for instance maximum likelihood, for which the probability of finding the right mother tends to 1 with $n$. We then use large deviations theory to show that in the latter case, if the entropy spectrum of the noise satisfies a large deviation principle, then the error probability goes exponentially fast to 0 with an exponent that is given in closed form in terms of the rate function of the noise entropy spectrum. This is done for two classes of mother point processes: Poisson and Matérn. The practical interest to information theory comes from the explicit connection that we also establish between this problem and the estimation of error exponents in Shannon's additive noise channel with power constraints on the codewords.
12/2010
Conference Proceeding: A new dual to the Gács–Körner common information defined via the Gray–Wyner system
ABSTRACT: We consider jointly distributed random variables X and Y. After describing the Gács–Körner common information between the random variables from the viewpoint of the capacity region of the Gray–Wyner system, we propose a new notion of common information between the random variables that is dual to the Gács–Körner common information from this viewpoint in a well-defined sense. We characterize this quantity explicitly in terms of two auxiliary quantities that are asymmetric in nature, and illustrate the operational significance of these new quantities by characterizing a corner point of the solution to a problem of source coding with side information in terms of them. We also contrast this new concept of common information for a pair of random variables with the Wyner common information of the random variables, which is also a kind of dual to the Gács–Körner common information.
Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on; 11/2010
ABSTRACT: This is the second part of a two-part paper on information-theoretically secure secret key agreement. This part covers the secret key capacity under the channel model. In this model, multiple terminals wish to create a shared secret key that is secure from an eavesdropper with unlimited computational resources. The terminals are all connected to a noiseless and authenticated but insecure channel, called the "public channel." Furthermore, the terminals have access to a secure but noisy discrete memoryless broadcast channel (DMBC). The first terminal can choose a sequence of inputs to the DMBC, which has outputs at the other terminals and at the eavesdropper. After each channel use, the terminals can engage in arbitrarily many rounds of interactive authenticated communication over the public channel. At the end, each legitimate terminal should be able to generate the secret key. In this paper, we derive new lower and upper bounds on the secrecy capacity. In each case, an example is provided to show that the new bound represents a strict improvement over the previously best known bound. This part of the paper is not stand-alone, and is written under the assumption that the reader has access to Part I, which is published in the same issue.
IEEE Transactions on Information Theory 09/2010; · 2.62 Impact Factor
Publication Stats
4k Citations
107.02 Total Impact Points
Top Journals
Institutions

1985–2011

University of California, Berkeley
 Department of Electrical Engineering and Computer Sciences
Berkeley, CA, United States


2008–2010

Massachusetts Institute of Technology
 Department of Electrical Engineering and Computer Science
Cambridge, MA, United States


2009

University of Michigan
 Department of Electrical Engineering and Computer Science (EECS)
Ann Arbor, MI, United States


1987–2008

Cornell University
 Department of Electrical and Computer Engineering
Ithaca, NY, United States


2007

Tata Institute of Fundamental Research
 School of Technology and Computer Science
Mumbai, Mahārāshtra, India


2006

École Polytechnique Fédérale de Lausanne
Lausanne, Vaud, Switzerland


2004

University of California, Davis
 Department of Electrical and Computer Engineering
Davis, CA, United States 
University of Illinois, Urbana-Champaign
 Department of Electrical and Computer Engineering
Urbana, IL, United States


2002

University of Maryland, College Park
 Department of Electrical & Computer Engineering
College Park, MD, United States


1996–1999

Stanford University
 Department of Electrical Engineering
Stanford, CA, United States


1994–1995

University of Texas at Austin
 Department of Electrical & Computer Engineering
Austin, TX, United States
