We study a network of n nodes communicating via channels. The objective of each of the nodes is to compute a given function of the data in the network. Using Information Theoretic inequalities, we derive a lower bound to the information that must be communicated between nodes for the mean square error in their estimates to converge to zero. We use this bound to express a bound on the rate of the channel code when the mean square error is required to converge to zero exponentially with some rate. We also show how the bound can be applied on different cut-sets of a communication network to determine a lower bound to computation time until convergence of the error in the nodes' estimates to a prescribed interval around zero. Finally, we present a particular scenario for which our bound on the computation time is tight.
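The bound itself is not reproduced in this summary; as a hedged illustration of the type of argument involved (standard rate–distortion reasoning, not necessarily the paper's exact inequality), for a Gaussian source X ~ N(0, σ²), any estimate X̂ achieving mean square error D must satisfy

```latex
I(X;\hat{X}) \;\ge\; \frac{1}{2}\log_2\frac{\sigma^2}{D} \quad \text{bits}.
```

So if the error is required to decay exponentially, D(t) = σ² e^{-2\rho t}, the information communicated up to time t must grow at least linearly, I ≥ ρ t log₂ e bits — the flavor of rate requirement the abstract describes.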

... However, the complexity of network computing has restricted prior work to the analysis of elementary networks. Networks with noisy links were studied in [3,14,16,17,19,26,35,37,50] and distributed computation in networks using gossip algorithms was studied in [4-6, 9, 27, 36]. ...

... Given a (k, n) network code, every edge e ∈ E carries a vector z_e of at most n alphabet symbols, which is obtained by evaluating the encoding function h^(e) on the set of vectors carried by the in-edges to the node and the node's message vector if it is a source. The objective of the receiver is to compute the target function f of the source messages, for any arbitrary message generator α. ...

... That is, each element of A_p is a possible pair of input edge-vectors to the receiver when the target function value equals p. Let j denote the number of components of p that are either 0 or 3. Without loss of generality, suppose the first j components of p belong to {0, 3} and define w (3) ...

The following network computing problem is considered. Source nodes in a directed acyclic network generate independent messages and a single receiver node computes a target function f of the messages. The objective is to maximize the average number of times f can be computed per network usage, i.e., the “computing capacity”. The network coding problem for a single-receiver network is a special case of the network computing problem in which all of the source messages must be reproduced at the receiver. For network coding with a single receiver, routing is known to achieve the capacity by achieving the network min-cut upper bound. We extend the definition of min-cut to the network computing problem and show that the min-cut is still an upper bound on the maximum achievable rate and is tight for computing (using coding) any target function in multi-edge tree networks. It is also tight for computing linear target functions in any network. We also study the bound's tightness for different classes of target functions. In particular, we give a lower bound on the computing capacity in terms of the Steiner tree packing number and a different bound for symmetric functions. We also show that for certain networks and target functions, the computing capacity can be less than an arbitrarily small fraction of the min-cut bound.
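The min-cut used in the paper is a function-aware extension of the classical notion; as a hedged sketch of the standard edge min-cut it generalizes (the code, graph, and names below are illustrative, not from the paper), the cut value for a small unit-capacity single-receiver DAG can be computed via max-flow with Edmonds–Karp:

```python
from collections import deque, defaultdict

def max_flow(edges, s, t):
    """Edmonds-Karp max-flow; by max-flow/min-cut duality the result equals
    the minimum total capacity of edges whose removal separates s from t."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # keep the reverse direction for residual capacity
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # trace the path back and push the bottleneck amount along it
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug

# Two sources (1, 2), one receiver (6), unit-capacity internal edges;
# node 0 is a super-source with effectively unbounded capacity.
edges = [(0, 1, 10**9), (0, 2, 10**9),
         (1, 6, 1), (2, 6, 1),
         (1, 3, 1), (2, 3, 1), (3, 4, 1), (4, 6, 1)]
print(max_flow(edges, 0, 6))  # min-cut = 3: the three edges entering node 6
```

For network computing this value is only an upper bound on the achievable rate: the abstract notes it is met for multi-edge tree networks and for linear target functions, but can exceed the computing capacity by an arbitrarily large factor in general.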

... We note here that for this case, by an Information Theoretic lower bound derived in [2] we have that the computation time is lower bounded as ...

... Combining the result of Theorem IV.1 with that of Theorem III.2 yields Theorem II.5. Comparison with a lower bound obtained via Information Theoretic inequalities in [2] reveals that the reciprocal dependence between computation time and graph conductance in the upper bound of Theorem II.5 matches the lower bound. Hence the upper bound is tight in capturing the effect of the graph conductance Φ(P ). ...

We consider a network of nodes, each having an initial value or measurement, and seeking to acquire an estimate of a given function of all the nodes' values in the network. Each node may exchange with its neighbors a finite number of bits every time communication is initiated. In this paper, we present an algorithm for computation of separable functions, under the constraint that communicated messages are quantized, so that with some specified probability, all nodes have an estimate of the function value within a desired interval of accuracy. We derive an upper bound on the computation time needed to achieve this goal, and show that the dependence of the computation time on the network topology, via the “conductance” of the graph representing this topology, matches a lower bound derived from Information Theoretic analysis. Hence, the algorithm's running time is optimal with respect to dependence on the graph structure.
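The paper's algorithm for separable functions is not reproduced in the abstract; as a minimal hedged sketch of quantized gossip in the same spirit (the update rule, quantizer, and parameters below are my own illustration, not the paper's scheme), pairwise exchanges of quantized values can preserve the network sum exactly while driving all nodes to near-consensus:

```python
import random

def quantize(x, step):
    # uniform quantizer with resolution `step`; with bounded values this
    # corresponds to transmitting a finite number of bits per message
    return round(x / step) * step

def quantized_gossip(values, neighbors, step=1e-3, rounds=50000, seed=1):
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i = rng.randrange(n)
        j = rng.choice(neighbors[i])
        qi, qj = quantize(x[i], step), quantize(x[j], step)
        # symmetric update: equal and opposite corrections preserve the sum,
        # and after the exchange x[i] and x[j] differ by at most `step`
        x[i] += (qj - qi) / 2
        x[j] += (qi - qj) / 2
    return x

# ring of 8 nodes with initial values 0..7 (true average = 3.5)
n = 8
neighbors = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
out = quantized_gossip([float(i) for i in range(n)], neighbors)
print(max(out) - min(out))  # spread shrinks to the order of the quantization step
```

Since the sum is conserved, each node's value approaches the average; the residual disagreement is governed by the quantization step, mirroring the accuracy/communication trade-off the abstract describes.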

... Our work is related to that of Ramamoorthy [11], who studied computing the parity of a collection of binary sources in a network with two sources and an arbitrary number of receivers, or vice versa; however, he considered only the existence of a solution, rather than the rate at which the solution can be computed. Problems related to function computation have been studied in such areas as communication complexity [8], [12], average consensus [4], [6], and distributed computation [3], [10]. The reader is referred to [7] for a review of various approaches to the problem. ...

We study the computation of the arithmetic sum of the q-ary source messages in the reverse butterfly network. Specifically, we characterize the maximum rate at which the message sum can be computed at the receiver and demonstrate that linear coding is suboptimal.

In the function computation problem, certain nodes of an undirected graph have access to independent data, while some other nodes of the graph require certain functions of the data; this model, motivated by sensor networks and cloud computing, is the focus of this paper. We study the maximum rates at which function computation is possible on a capacitated graph; the capacities on the edges of the graph impose constraints on the communication rate. We consider a simple class of computation strategies based on Steiner-tree packing (so-called computation trees), which does not involve block coding and has minimal delay. With a single terminal requiring function computation, computation trees are known to be optimal when the underlying graph is itself a directed tree, but have arbitrarily poor performance in general directed graphs. Our main result is that computation trees are near optimal for a wide class of function computation requirements even at multiple terminals in undirected graphs. The key technical contribution involves connecting approximation algorithms for Steiner cuts in undirected graphs to the function computation problem. Furthermore, we show that existing algorithms for Steiner tree packings allow us to compute approximately optimal packings of computation trees in polynomial time. We also show a close connection between the function computation problem and a communication problem involving multiple multicasts.

Motivated by applications to wireless sensor, peer-to-peer, and ad hoc networks, we study distributed broadcasting algorithms for exchanging information and computing in an arbitrarily connected network of nodes. Specifically, we study a broadcasting-based gossiping algorithm to compute the (possibly weighted) average of the initial measurements of the nodes at every node in the network. We show that the broadcast gossip algorithm converges almost surely to a consensus. We prove that the random consensus value is, in expectation, the average of initial node measurements and that it can be made arbitrarily close to this value in mean squared error sense, under a balanced connectivity model and by trading off convergence speed with accuracy of the computation. We provide theoretical and numerical results on the mean square error performance, on the convergence rate and study the effect of the “mixing parameter” on the convergence rate of the broadcast gossip algorithm. The results indicate that the mean squared error strictly decreases through iterations until the consensus is achieved. Finally, we assess and compare the communication cost of the broadcast gossip algorithm to achieve a given distance to consensus through theoretical and numerical results.

A causal feedback map, taking sequences of measurements and producing sequences of controls, is called finite-set if, within any finite time horizon, its range lies in a finite set. Bit-rate-constrained and digital control are particular cases of finite-set feedback. In this paper, we show that finite gain (FG) lp stabilization, with 1⩽p⩽∞, of a discrete-time, linear and time-invariant unstable plant is impossible by finite-set feedback. In addition, we show that, under finite-set feedback, weaker (local) versions of FG lp stability are also impossible. These facts are not obvious, since recent results have shown that input-to-state stabilization is viable by bit-rate-constrained control. In view of such existing work, this paper leads to two conclusions: (1) even when input-to-state stability is attainable by finite-set feedback, small changes in the amplitude of the external excitation may cause, in relative terms, a large increase in the amplitude of the state; (2) FG lp stabilization requires logarithmic precision around zero. Since our conclusions hold with no assumption on the feedback structure, they cannot be derived from existing results. We adopt an information theoretic viewpoint, which also brings new insights into the problem of stabilization.

Two sensors obtain data vectors x and y, respectively, and transmit real vectors m₁(x) and m₂(y), respectively, to a fusion center. The authors obtain tight lower bounds on the number of messages (the sum of the dimensions of m₁ and m₂) that have to be transmitted for the fusion center to be able to evaluate a given function f(x, y). When the function f is linear, they show that these bounds are effectively computable. Certain decentralized estimation problems can be cast in this framework and are discussed in some detail. In particular, the authors consider the case where x and y are random variables representing noisy measurements and f(x, y) = E[z | x, y], where z is a random variable to be estimated. Furthermore, it is established that a standard method for combining decentralized estimates of Gaussian random variables has nearly optimal communication requirements.