IEEE Transactions on Information Theory

Published by Institute of Electrical and Electronics Engineers
Online ISSN: 0018-9448
Article
Suppose a string X_1^n = (X_1, X_2, …, X_n) generated by a memoryless source (X_n)_{n≥1} with distribution P is to be compressed with distortion no greater than D ≥ 0, using a memoryless random codebook with distribution Q. The compression performance is determined by the "generalized asymptotic equipartition property" (AEP), which states that the probability of finding a D-close match between X_1^n and any given codeword Y_1^n is approximately 2^{-nR(P,Q,D)}, where the rate function R(P, Q, D) can be expressed as an infimum of relative entropies. The main purpose here is to remove various restrictive assumptions on the validity of this result that have appeared in the recent literature. Necessary and sufficient conditions for the generalized AEP are provided in the general setting of abstract alphabets and unbounded distortion measures. All possible distortion levels D ≥ 0 are considered; the source (X_n)_{n≥1} can be stationary and ergodic; and the codebook distribution can have memory. Moreover, the behavior of the matching probability is precisely characterized, even when the generalized AEP is not valid. Natural characterizations of the rate function R(P, Q, D) are established under equally general conditions.
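In symbols, the generalized AEP and the rate function can be sketched as follows; the specific infimum below is one standard form from this line of work and should be read as an assumption, not a quotation from the abstract:

```latex
% Generalized AEP (sketch): for Y_1^n drawn i.i.d. from Q, independent of X_1^n,
-\tfrac{1}{n}\log \Pr\bigl[d_n(X_1^n, Y_1^n) \le D \,\big|\, X_1^n\bigr]
  \;\longrightarrow\; R(P,Q,D) \quad \text{a.s.},
% with the rate function expressed as an infimum of relative entropies
% over joint distributions W with X-marginal P:
R(P,Q,D) \;=\; \inf\Bigl\{ H\bigl(W \,\big\|\, P\times Q\bigr)
  \;:\; W_X = P,\ \mathbb{E}_W\,\rho(X,Y) \le D \Bigr\}.
```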
 
Article
We propose a semiparametric method for estimating a precision matrix of high-dimensional elliptical distributions. Unlike most existing methods, our method naturally handles heavy-tailedness and conducts parameter estimation under a calibration framework, thus achieving improved theoretical rates of convergence and finite sample performance on heavy-tail applications. We further demonstrate the performance of the proposed method using thorough numerical experiments.
 
Figure captions: Cystic fibrosis ∆F508 screen as a bipartite multigraph reconstruction. There are two allele nodes, the WT and the ∆F508 mutation. Specimens 1, 2, 3, and 5 are WT, while specimen 4 is a carrier. The specimen labeled 'X' is affected and does not enter the screen. E_risk is the edge between specimen 4 and the 'Mut' node.
Optimal solution using tree search.
Example of a factor graph for genotyping reconstruction.
The effect of damping on oscillations.
Article
Over the past three decades we have steadily increased our knowledge of the genetic basis of many severe disorders. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, mainly due to the relatively tedious and expensive process of genotyping. Since the genetic variations that underlie the disorders are relatively rare in the population, they can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol to detect carriers for severe genetic disorders. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies. The mathematical framework presented here has some important distinctions from the 'traditional' compressed sensing and group testing frameworks in order to address biological and technical constraints of our setting.
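The pooling idea can be illustrated with the simple COMP decoder from the group-testing literature; this is a generic sketch, not the authors' protocol, and the pool design and specimen labels below are invented for illustration:

```python
def comp_decode(pools, pool_results, n_items):
    """COMP rule: a specimen is declared a carrier only if every pool
    containing it tested positive; anyone in a negative pool is cleared."""
    candidate = [True] * n_items
    for members, positive in zip(pools, pool_results):
        if not positive:
            for i in members:
                candidate[i] = False
    return [i for i in range(n_items) if candidate[i]]

# Toy screen: 8 specimens, specimen 4 is the only carrier.
pools = [[0, 1, 4], [2, 3, 5], [4, 6, 7], [0, 2, 6], [1, 3, 7]]
carriers = {4}
results = [bool(set(p) & carriers) for p in pools]  # a pool is positive iff it holds a carrier
print(comp_decode(pools, results, 8))  # → [4]
```

With a poorly chosen pool design COMP can over-call carriers (it never misses one); this sparsity-exploiting trade-off is what the compressed-sensing machinery in the paper manages rigorously.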
 
Article
Consider the relative entropy between a posterior density for a parameter given a sample and a second posterior density for the same parameter, based on a different model and a different data set. Then the relative entropy can be minimized over the second sample to get a virtual sample that would make the second posterior as close as possible to the first in an informational sense. If the first posterior is based on a dependent dataset and the second posterior uses an independence model, the effective inferential power of the dependent sample is transferred into the independent sample by the optimization. Examples of this optimization are presented for models with nuisance parameters, finite mixture models, and models for correlated data. Our approach is also used to choose the effective parameter size in a Bayesian hierarchical model.
 
Table captions: Medians and robust standard deviations (in parentheses) of PE, L2 loss, L1 loss, deviance, #S, and FN over 100 simulations for all methods in logistic regression by BIC and ...
Classification errors in the neuroblastoma data set (for each of the 3-year EFS and gender endpoints: method, number of genes, and test error).
Article
Penalized likelihood methods are fundamental to ultra-high dimensional variable selection. How high a dimensionality such methods can handle remains largely unknown. In this paper, we show that in the context of generalized linear models, such methods possess model selection consistency with oracle properties even for dimensionality of non-polynomial (NP) order of sample size, for a class of penalized likelihood approaches using folded-concave penalty functions, which were introduced to ameliorate the bias problems of convex penalty functions. This fills a long-standing gap in the literature, where dimensionality had been allowed to grow only slowly with the sample size. Our results are also applicable to penalized likelihood with the L1-penalty, which is a convex function at the boundary of the class of folded-concave penalty functions under consideration. Coordinate optimization is implemented for finding the solution paths, whose performance is evaluated by a few simulation examples and a real data analysis.
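SCAD is a canonical folded-concave penalty of the kind the abstract refers to; the sketch below uses the conventional illustrative parameter values, which are assumptions here rather than values taken from the paper:

```python
def scad(t, lam=1.0, a=3.7):
    """SCAD penalty (Fan and Li): linear like L1 near zero, concave in a
    middle band, and constant for large |t|, which is what reduces the
    bias that a convex penalty puts on large coefficients."""
    t = abs(t)
    if t <= lam:
        return lam * t                                   # L1-like region
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))  # concave region
    return lam * lam * (a + 1) / 2                       # flat region: no extra shrinkage

print(scad(0.5), scad(10.0))  # → 0.5 2.35
```

Small coefficients are penalized exactly as under the L1-penalty, while coefficients beyond a·λ incur a constant penalty, so large effects are estimated without shrinkage bias.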
 
Article
The method of types is one of the key technical tools in Shannon theory, and this tool is valuable also in other fields. In this paper, some key applications are presented in sufficient detail to enable an interested nonspecialist to gain a working knowledge of the method, and a wide selection of further applications is surveyed. These range from hypothesis testing and large deviations theory through error exponents for discrete memoryless channels and capacity of arbitrarily varying channels to multiuser problems. While the method of types is suitable primarily for discrete memoryless models, its extensions to certain models with memory are also discussed.
 
Article
Linear equations have always been powerful tools in cryptanalysis. In this correspondence, we present a general linear equation of minimum weight 3 in F<sub>2</sub> that holds for all state lengths n and all shifts i of sequences generated by the T-function x<sub>i</sub>=x<sub>i-1</sub>+(x<sub>i-1</sub><sup>2</sup> ∨ C) mod 2<sup>n</sup> proposed by Klimov and Shamir. It is surprising that these linear properties exist, and they indicate that the sequences generated by the T-function have more structure than claimed by Klimov and Shamir.
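The recurrence in question is easy to simulate; C = 5 is the textbook example constant for this T-function and is an assumption here (the weight-3 linear relation itself is not reproduced, since the abstract does not state it):

```python
def tfunc_sequence(n_bits, x0, c=5, length=8):
    """Iterate x_i = x_{i-1} + (x_{i-1}^2 | c) mod 2^n (Klimov-Shamir
    single-word T-function); c = 5 is the standard example constant."""
    mask = (1 << n_bits) - 1
    xs, x = [], x0
    for _ in range(length):
        x = (x + ((x * x) | c)) & mask  # bitwise OR, then add, then reduce mod 2^n
        xs.append(x)
    return xs

print(tfunc_sequence(8, 0, length=3))  # → [5, 34, 167]
```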
 
Article
An expression is derived for the distribution of a mixture of real and complex normal variates. The asymptotic distribution of the resulting real-complex maximum-likelihood estimates is the derived real-complex normal distribution. The covariance matrix of this distribution is particularly important: it is the asymptotic covariance matrix for maximum-likelihood estimates and, in general, the Cramer-Rao lower bound on the variance of the real-complex estimates. From this covariance matrix, the variance of the reconstructed complex-valued exit wave then follows using the pertinent propagation formulas. The resulting expressions show the dependence of the variance on the free microscope parameters used for experimental design.
 
Article
The detection problem for the model "signal-plus-noise," where the "noise" is represented by a Wiener process, has been approached primarily through the use of martingale theory. There are two core results: a new version of what is called "Girsanov's theorem" and the fact that the Wiener measure on the space of continuous functions can be generated by any Wiener process. In this correspondence a new version of "Girsanov's theorem" is derived and some fragmentary results concerning the measures associated with martingales are presented.
 
Article
Shannon's communication (information) theory cast about as much light on the problem of the communication engineer as can be shed. It reflected or encompassed earlier work, but it was so new that it was not really understood by many of those who first talked or wrote about it. Most of the papers published on information theory through 1950 are irrelevant to Shannon's work. Later work has given us useful information and encoding schemes as well as greater rigor, but the wisdom of Shannon's way of looking at things and his original theorems are of primary importance.
 
Article
The capacity C<sub>0,1</sub><sup>(3)</sup> of a three-dimensional (0,1) run length constrained channel is shown to satisfy 0.522501741838 ≤ C<sub>0,1</sub><sup>(3)</sup> ≤ 0.526880847825.
 
Article
This paper considers a general linear vector Gaussian channel with arbitrary signaling and pursues two closely related goals: i) closed-form expressions for the gradient of the mutual information with respect to arbitrary parameters of the system, and ii) fundamental connections between information theory and estimation theory. Generalizing the fundamental relationship recently unveiled by Guo, Shamai, and Verdú, we show that the gradient of the mutual information with respect to the channel matrix is equal to the product of the channel matrix and the error covariance matrix of the best estimate of the input given the output. Gradients and derivatives with respect to other parameters are then found via the differentiation chain rule.
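In symbols, the central relation stated in the abstract reads as follows (writing the model as y = Hx + n with normalized white Gaussian noise, a formulation assumed here for concreteness):

```latex
\nabla_{\mathbf{H}}\, I(\mathbf{x};\mathbf{y}) \;=\; \mathbf{H}\,\mathbf{E},
\qquad
\mathbf{E} \;=\; \mathbb{E}\Bigl[\bigl(\mathbf{x}-\hat{\mathbf{x}}\bigr)
                                 \bigl(\mathbf{x}-\hat{\mathbf{x}}\bigr)^{H}\Bigr],
\qquad
\hat{\mathbf{x}} \;=\; \mathbb{E}[\mathbf{x}\mid\mathbf{y}],
% gradients with respect to other parameters follow by the chain rule.
```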
 
Article
The authors apply their queuing-theoretic delay analysis methodology to several variations of the FCFS 0.487 conflict resolution algorithm. In this approach, the main component of the packet delay is viewed as a queuing problem in which each window selected by the channel access algorithm is a customer requiring conflict resolution as its service. The authors' methodology is extended to handle all the features of the full FCFS 0.487 algorithm, including variable-size windows, arrival-time addressing (and hence true FCFS scheduling), and biased interval splitting. Some of these extensions involve approximations, but they make it possible to obtain the Laplace transform and moments of the packet delay. The authors also present an exact analysis for the three-cell algorithm, which is the 0.487 algorithm with a few modifications to satisfy the separability condition that reduces its capacity to 0.48. Comparisons made with other analyses (which provide bounds on the mean delay) and with extensive simulations show that the present results for both the mean and the variance of the packet delay are extremely accurate.
 
Article
We consider Poisson packet traffic accessing a single-slotted channel. We assume the existence of a ternary feedback per channel slot. We also adopt the limited feedback sensing model where each user senses the feedback only while he has a packet to transmit. For this model we develop a collision resolution algorithm with last come-first served characteristics. The algorithm attains the same throughput as Gallager's algorithm without the latter's full feedback sensing requirement. In addition, it is easy to implement, requires reasonable memory storage, induces uniformly good transmission delays, and is insensitive to feedback errors. In the presence of binary (collision versus noncollision) feedback the algorithm may attain a throughput of 0.4493.
 
Article
This paper corrects some errors on a previous paper concerning the synthesis of Feedback with Carry Shift Registers using the Euclidean Algorithm.
 
Article
In the above titled paper (ibid., vol. 53, pp. 2190-2203, Jun 07), equation (B6) in Appendix B incorrectly identifies the Jacobi symbol. The proper definition is provided here.
 
Article
In the above titled paper (ibid., vol. 53, no. 2, pp. 580-598, Feb 07), there was a printing typo in the last line of eq. (30). Necessary revisions are presented here.
 
Article
In the above titled paper (ibid., vol 54, no. 12, pp. 5500-5510, Dec 08), the table containing the list of symbols was corrupted during the publication process. The proper symbol table is presented here.
 
Article
In the above titled paper (ibid., vol. 54, no. 3, pp. 1003-1023, Mar 08), several misprints were introduced. The corrections are presented here.
 
Article
In this paper, a correction is made to the criterion for a linear dispersion space-time block code to achieve full diversity with the partial interference cancellation (PIC) group decoding recently published in the above titled paper (ibid., vol. 55, no. 10, pp. 4366-4385).
 
Article
Describes a (d,k)=(1,8) runlength-limited (RLL) rate 8/12 code with fixed codeword length 12. The code is block-decodable; a codeword can be decoded without knowledge of preceding or succeeding codewords. The code belongs to the class of bounded delay block-decodable (BDB) codes with one symbol (8 bits) look-ahead. Due to its format, this code is particularly attractive for use in combination with error-correcting codes such as Reed-Solomon codes over the finite field GF(2<sup>8</sup>).
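A (d,k) runlength constraint of this kind can be checked mechanically; the sketch below is a generic checker, not the 8/12 code construction itself, and it ignores the runs at the sequence boundaries for simplicity:

```python
def satisfies_dk(bits, d, k):
    """Check that every run of zeros strictly between two ones has
    length between d and k inclusive (interior runs only)."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    for a, b in zip(ones, ones[1:]):
        run = b - a - 1  # zeros separating consecutive ones
        if run < d or run > k:
            return False
    return True

print(satisfies_dk([1, 0, 1, 0, 0, 1], 1, 8))  # runs of 1 and 2 zeros → True
print(satisfies_dk([1, 1, 0, 1], 1, 8))        # adjacent ones violate d=1 → False
```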
 
Article
All binary cyclic codes of odd length from 101 to 127 are checked to find codes better than those in a table by T. Verhoeff (1989). There are five such cases, namely, the [117, 36, 32], [117, 37, 29], [117, 42, 26], [117, 49, 24], and [127, 36, 35] cyclic codes. According to Verhoeff's table, the previously known ranges of the highest minimum distance were 28-40, 28-40, 25-37, 22-32, and 32-46, respectively. Applying constructions X and Y1, [120, 37, 32] and [108, 28, 32] codes were found. Moreover, the highest minimum distances that cyclic codes of length 127 can attain are determined.
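Exhaustive checks of this kind are easy to reproduce at toy scale; a minimal brute-force sketch on the [7,4] binary cyclic (Hamming) code with generator x³+x+1, chosen for illustration (the codes in the abstract are far larger):

```python
def min_distance_cyclic(g_coeffs, n, k):
    """Brute-force minimum distance of the binary cyclic code whose
    codewords are m(x)*g(x) over GF(2), deg m < k (so deg c < n and no
    reduction mod x^n + 1 is needed)."""
    best = n
    for m in range(1, 1 << k):
        c = 0
        for i in range(k):
            if (m >> i) & 1:
                c ^= g_coeffs << i  # GF(2) polynomial multiplication: XOR shifted g
        best = min(best, bin(c).count("1"))  # Hamming weight of the codeword
    return best

# g(x) = x^3 + x + 1 -> bitmask 0b1011; the resulting [7,4] code is the Hamming code.
print(min_distance_cyclic(0b1011, 7, 4))  # → 3
```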
 
Article
This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem. This paper studies a method called convex relaxation, which attempts to recover the ideal sparse signal by solving a convex program. This approach is powerful because the optimization can be completed in polynomial time with standard scientific software. The paper provides general conditions which ensure that convex relaxation succeeds. As evidence of the broad impact of these results, the paper describes how convex relaxation can be used for several concrete signal recovery problems. It also describes applications to channel coding, linear regression, and numerical analysis.
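The convex program in question is typically l1-penalized least squares; the ISTA solver and toy instance below are an illustrative sketch, not the paper's algorithm (dimensions, step size, and penalty are invented):

```python
def ista(A, y, lam=0.01, step=0.25, iters=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    step must be below 1 / (largest eigenvalue of A^T A)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    soft = lambda v, t: (abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]   # residual Ax - y
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]       # A^T r
        x = [soft(x[j] - step * grad[j], step * lam) for j in range(n)]        # gradient step + shrink
    return x

# Underdetermined system whose sparsest explanation is x = (3, 0, 0).
A = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5]]
x = ista(A, [3.0, 0.0])
print([round(v, 2) for v in x])
```

The recovered vector is (nearly) 3 in the first coordinate and exactly zero elsewhere; the small shrinkage of the active coefficient is the usual l1 bias.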
 
Article
A cryptographic system is described which is secure if and only if computing logarithms over GF(p) is infeasible. Previously published algorithms for computing this function require O(p^{1/2}) complexity in both time and space. An improved algorithm is derived which requires only O(log^{2} p) complexity if p - 1 has only small prime factors. Such values of p must be avoided in the cryptosystem. Constructive uses for the new algorithm are also described.
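A Pohlig-Hellman-style sketch of the improved algorithm the abstract describes, assuming the factorization of p - 1 is given; the parameters are toy-sized and illustrative (requires Python 3.8+ for modular inverses via `pow`):

```python
def dlog_prime_power(g, y, p, q, e):
    """Solve g^x = y (mod p) where g has order q**e, lifting one base-q
    digit of x at a time; each digit is found in a subgroup of order q."""
    x = 0
    gamma = pow(g, q ** (e - 1), p)  # element of order q
    for k in range(e):
        h = pow(pow(g, -x, p) * y % p, q ** (e - 1 - k), p)
        d = next(d for d in range(q) if pow(gamma, d, p) == h)  # h = gamma^d
        x += d * q ** k
    return x

def pohlig_hellman(g, y, p, factors):
    """Discrete log of y to base g mod p; factors maps each prime q
    dividing p-1 to its exponent e (p-1 assumed smooth)."""
    n = p - 1
    x, m = 0, 1
    for q, e in factors.items():
        ni = n // q ** e
        r = dlog_prime_power(pow(g, ni, p), pow(y, ni, p), p, q, e)
        # combine residues by the Chinese remainder theorem
        t = (r - x) * pow(m, -1, q ** e) % (q ** e)
        x, m = x + m * t, m * q ** e
    return x

print(pohlig_hellman(3, 22, 31, {2: 1, 3: 1, 5: 1}))  # → 17, since 3^17 = 22 mod 31
```

The cost is dominated by the small-subgroup searches, which is why p with smooth p - 1 must be avoided in the cryptosystem.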
 
Article
The method of Mykkeltveit, Lam, and McEliece for finding weight enumerators of binary QR-codes is used to prove that the minimum distance of the [38,19] ternary extended QR-code is 11.
 
Article
A linear [n,k,d]<sub>q</sub> code C is called near maximum-distance separable (NMDS) if d(C)=n-k and d(C<sup>⊥</sup>)=k. The maximum length of an NMDS [n,k,d]<sub>q</sub> code is denoted by m'(k,q). In this correspondence, it has been verified by a computer-based proof that m'(5,8)=15, m'(4,9)=16, m'(5,9)=16, and 20 ≤ m'(4,11) ≤ 21. Moreover, the NMDS codes of length m'(4,8), m'(5,8), and m'(4,9) have been classified. As the dual code of an NMDS code is NMDS, the values of m'(k,8), k=10,11,12, and of m'(k,9), k=12,13,14, have also been deduced.
 
Article
It is an interesting open question whether a self-dual quaternary (24,12,10) code C exists. It was shown by Conway and Pless that the only primes which can be orders of permutations in the group of C are 11, 7, and 3. In this correspondence we eliminate 11 and 7 not only as permutations but also as orders of monomials in the group of C. This is done by reducing the problems to the consideration of several codes and finding low-weight vectors in these codes.
 
Article
Space-time block codes from orthogonal designs have two advantages, namely, fast maximum-likelihood (ML) decoding and full diversity. Rate 1 real (pulse amplitude modulation-PAM) space-time codes (real orthogonal designs) for multiple transmit antennas have been constructed from the real Hurwitz-Radon families, which also provide the rate 1/2 complex (quadrature amplitude modulation-QAM) space-time codes (complex orthogonal designs) for any number of transmit antennas. Rate 3/4 complex orthogonal designs (space-time codes) for three and four transmit antennas exist in the literature, but no higher rate (>1/2) complex orthogonal designs for other numbers of transmit antennas were known. We present rate 7/11 and rate 3/5 generalized complex orthogonal designs for five and six transmit antennas, respectively.
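The defining orthogonality property can be checked directly on the smallest complex orthogonal design, the 2×2 Alamouti code; it is used here only as an example and is not one of the new designs of the paper:

```python
def alamouti(s1, s2):
    """2x2 complex orthogonal design: rows are the two channel uses,
    columns the two transmit antennas."""
    return [[s1, s2],
            [-s2.conjugate(), s1.conjugate()]]

def gram(S):
    """S^H S for a 2x2 complex matrix; for an orthogonal design this is
    (|s1|^2 + |s2|^2) * I, which yields full diversity and lets ML
    decoding decouple symbol by symbol."""
    return [[sum(S[k][i].conjugate() * S[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

G = gram(alamouti(1 + 2j, 3 - 1j))
print(G[0][0], G[0][1])  # → (15+0j) 0j
```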
 
Article
Recently Kasami et al. presented a linear programming approach to the weight distribution of binary linear codes [2]. Their approach to computing upper and lower bounds on the weight distribution of binary primitive BCH codes of length 2^{m}-1 with m ≥ 8 and designed distance 2t+1 with 4 ≤ t ≤ 5 is improved. From these results, the relative deviation of the number of codewords of weight j ≤ 2^{m-1} from the binomial distribution 2^{-mt}\binom{2^{m}-1}{j} is shown to be less than 1 percent for the following cases: (1) t = 4, j ≥ 2t+1, and m ≥ 16; (2) t = 4, j ≥ 2t+3, and 10 ≤ m ≤ 15; (3) t = 4, j ≥ 2t+5, and 8 ≤ m ≤ 9; (4) t = 5, j ≥ 2t+1, and m ≥ 20; (5) t = 5, j ≥ 2t+3, and 12 ≤ m ≤ 19; (6) t = 5, j ≥ 2t+5, and 10 ≤ m ≤ 11; (7) t = 5, j ≥ 2t+7, and m = 9; (8) t = 5, j ≥ 2t+9, and m = 8.
 
Article
We give a geometric construction of a [110,5,90]<sub>9</sub>-linear code admitting the Mathieu group M<sub>11</sub> as a subgroup of its automorphism group.
 
Article
A new rate 4/6 (d=1, k'=11) runlength-limited code which is well adapted to byte-oriented storage systems is presented. The new code has the virtue that it can be decoded on a block basis, i.e., without knowledge of previous or next codewords, and, therefore, it does not suffer from error propagation. This code is particularly attractive as many commercially available Reed-Solomon codes operate in GF(2<sup>8</sup>).
 
Article
A rate R=5/20 hypergraph-based woven convolutional code with overall constraint length 67 is presented. It is based on a 3-partite, 3-uniform, 4-regular hypergraph and contains rate R<sup>c</sup>=3/4 constituent convolutional codes with overall constraint length 5. Although the construction is based on low-complexity codes, its free distance, computed with the BEAST algorithm, is d<sub>free</sub>=120, which is remarkably large.
 
Article
In this correspondence, we give a characterization of certain quasi-cyclic self-complementary codes with parameters [120,9,56] and [136,9,64]; quasi-cyclic self-complementary codes with parameters [496,11,240] and [528,11,256] are also constructed. These codes are optimal in the sense that they meet the Grey-Rankin bound. New quasi-symmetric SDP (symmetric difference property) designs are constructed from these codes.
 
Article
This correspondence presents 11 tables of best unshortened Fire codes of length up to 1200 bits, classified into groups according to their relative redundancy.
 
Article
In previous work, a method was presented to compute the weight distribution of a linear block code by using its trellis diagram. In this correspondence, the method is improved by exploiting the trellis structure of linear block codes. Another method with reduced computational complexity is also proposed which uses the invariant property of a code. With these methods, the weight distributions of all extended binary primitive BCH codes of length 128 are computed, except for those for which formulas for the weight distribution are known. It turns out that the (128,64,22) extended binary primitive BCH code is formally self-dual. The probability of an undetectable error for each code is computed and its monotonicity is examined.
 
Article
Let r_i be the covering radius of the (2^i, i+1) Reed-Muller code. It is an open question whether r_{2m+1} = 2^{2m} - 2^{m} holds for all m. It is known to be true for m = 0, 1, 2, and here it is shown to be also true for m = 3.
 
Article
A Chapman-Robbins form of the Barankin bound is used to derive a multiparameter Cramer-Rao (CR) type lower bound on estimator error covariance when the parameter θ∈R<sup>n</sup> is constrained to lie in a subset of the parameter space. A simple form for the constrained CR bound is obtained when the constraint set Θ<sub>C</sub> can be expressed as a smooth functional inequality constraint. It is shown that the constrained CR bound is identical to the unconstrained CR bound at the regular points of Θ<sub>C</sub>, i.e., where no equality constraints are active. On the other hand, at those points θ∈Θ<sub>C</sub> where pure equality constraints are active, the full-rank Fisher information matrix in the unconstrained CR bound must be replaced by a rank-reduced Fisher information matrix obtained as a projection of the full-rank Fisher matrix onto the tangent hyperplane of the constraint set at θ. A necessary and sufficient condition involving the forms of the constraint and the likelihood function is given for the bound to be achievable, and examples for which the bound is achieved are presented. In addition to providing a useful generalization of the CR bound, the results permit analysis of the gain in achievable MSE performance due to the imposition of particular constraints on the parameter space without the need for a global reparameterization.
 
Article
The nearest neighbor decision rule assigns to an unclassified sample point the classification of the nearest of a set of previously classified points. This rule is independent of the underlying joint distribution on the sample points and their classifications, and hence the probability of error R of such a rule must be at least as great as the Bayes probability of error R*, the minimum probability of error over all decision rules taking the underlying probability structure into account. However, in a large sample analysis, we will show in the M-category case that R* ≤ R ≤ R*(2 - MR*/(M-1)), where these bounds are the tightest possible, for all suitably smooth underlying distributions. Thus for any number of categories, the probability of error of the nearest neighbor rule is bounded above by twice the Bayes probability of error. In this sense, it may be said that half the classification information in an infinite sample set is contained in the nearest neighbor.
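The rule itself is a one-liner; a minimal sketch on toy data with the Euclidean metric (the data and labels are invented for illustration), with the bound from the abstract noted in the comment:

```python
def nn_classify(train, query):
    """Nearest neighbor rule: return the label of the closest training
    point. Cover and Hart: asymptotically its error R satisfies
    R* <= R <= R*(2 - M R*/(M-1)) for M classes, so R <= 2 R*."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return min(train, key=lambda xy: dist2(xy[0], query))[1]

train = [((0.0, 0.0), "a"), ((1.0, 0.1), "b"), ((0.1, 1.0), "a")]
print(nn_classify(train, (0.9, 0.0)))  # → b
```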
 
Article
All binary [ n , n /2] optimal self-dual codes for length 52 ≤ n ≤ 60 with an automorphism of order 7 or 13 are classified up to equivalence. Two of the constructed [54,27,10] codes have weight enumerators that were not previously known to exist. There are also some [58,29,10] codes with new values of the parameters in their weight enumerator.
 
Article
An algebraic decoding algorithm for the ternary (13,7,5) quadratic residue code is presented. This seems to be the first attempt to provide an algebraic decoding algorithm for a quadratic residue code over a nonbinary field.
 
Article
An algorithm is developed that can be used to find, with a very low probability of error (10<sup>-100</sup> or less in many cases), the minimum weights of codes far too large to be treated by any known exact algorithm. The probabilistic method is used to find minimum weights of all extended quadratic residue codes of length 440 or less. The probabilistic algorithm is presented for binary codes, but it can be generalized to codes over GF(q) with q > 2.
 
Article
We find that the minimum distance of the binary [137,69] quadratic residue code is 21.
 
Figure captions: The transmitted signals that achieve capacity are mutually orthogonal with respect to time. The constituent orthonormal unit vectors are isotropically distributed (see Appendix A) and independent of the signal magnitudes, which have mean-square value T. The solid sphere of radius T^{1/2} demarcates the root mean-square. For T ≫ M, the vectors all lie approximately on the surface of this sphere. The shell of thickness εT^{1/2} is discussed in Section 5.
Normalized capacity, and upper and lower bounds, versus coherence interval T (SNR = 0 dB, one transmitter antenna, one receiver antenna). The lower bound and capacity meet at T = 12. As per Theorem 3, the capacity approaches the perfect-knowledge upper bound as T → ∞.
Normalized capacity, and upper and lower bounds, versus coherence interval T (SNR = 12 dB, one transmitter antenna, one receiver antenna). The lower bound and capacity meet at T = 3. The capacity approaches the perfect-knowledge upper bound as T → ∞.
Capacity and perfect-knowledge upper bound versus number of receiver antennas N (SNR = 0 dB, 6 dB, 12 dB; arbitrary number of transmitter antennas; coherence interval equal to one). The gap between capacity and upper bound only widens as N → ∞.
Normalized capacity lower bounds and perfect-knowledge upper bounds versus number of transmitter antennas M (SNR = 20 dB, one receiver antenna, coherence interval equal to 100). The actual channel capacity lies in the shaded region. The lower bound peaks at M = 3, at approximately 6.1 bits/T; this peak remains a valid lower bound on capacity for all M ≥ 3 (one could always ignore all but three of the transmitter antennas), giving the modified lower bound also displayed in the figure.
Article
We analyze a mobile wireless link comprising M transmitter and N receiver antennas operating in a Rayleigh flat-fading environment. The propagation coefficients between pairs of transmitter and receiver antennas are statistically independent and unknown; they remain constant for a coherence interval of T symbol periods, after which they change to new independent values which they maintain for another T symbol periods, and so on. Computing the link capacity, associated with channel coding over multiple fading intervals, requires an optimization over the joint density of T·M complex transmitted signals. We prove that there is no point in making the number of transmitter antennas greater than the length of the coherence interval: the capacity for M>T is equal to the capacity for M=T. Capacity is achieved when the T×M transmitted signal matrix is equal to the product of two statistically independent matrices: a T×T isotropically distributed unitary matrix times a certain T×M random matrix that is diagonal, real, and nonnegative. This result enables us to determine capacity for many interesting cases. We conclude that, for a fixed number of antennas, as the length of the coherence interval increases, the capacity approaches the capacity obtained as if the receiver knew the propagation coefficients.
 
Article
A method is presented to approximate optimally an n-dimensional discrete probability distribution by a product of second-order distributions, or the distribution of the first-order tree dependence. The problem is to find an optimum set of n - 1 first-order dependence relationships among the n variables. It is shown that the procedure derived in this paper yields an approximation of a minimum difference in information. It is further shown that when this procedure is applied to empirical observations from an unknown distribution of tree dependence, the procedure is the maximum-likelihood estimate of the distribution.
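The procedure reduces to a maximum-weight spanning tree over pairwise mutual informations; the compact sketch below uses a plug-in MI estimate and toy binary data, both of which are illustrative rather than taken from the paper:

```python
from collections import Counter
from itertools import combinations
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired samples."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def chow_liu_tree(data):
    """data: list of columns (one per variable). Kruskal's algorithm on
    MI edge weights gives the optimal first-order tree approximation."""
    n = len(data)
    edges = sorted(((mutual_information(data[i], data[j]), i, j)
                    for i, j in combinations(range(n), 2)), reverse=True)
    parent = list(range(n))
    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = []
    for w, i, j in edges:              # greedily keep the heaviest acyclic edges
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# X1 copies X0 (1 bit of MI), X2 is unrelated; the tree must keep edge (0, 1).
data = [[0, 0, 1, 1, 0, 0, 1, 1],   # X0
        [0, 0, 1, 1, 0, 0, 1, 1],   # X1 = X0
        [0, 1, 0, 1, 0, 1, 0, 1]]   # X2
print(chow_liu_tree(data))
```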
 
Article
The overall mean recognition probability (mean accuracy) of a pattern classifier is calculated and numerically plotted as a function of the pattern measurement complexity n and design data set size m. Utilized is the well-known probabilistic model of a two-class, discrete-measurement pattern environment (no Gaussian or statistical independence assumptions are made). The minimum-error recognition rule (Bayes) is used, with the unknown pattern environment probabilities estimated from the data relative frequencies. In calculating the mean accuracy over all such environments, only three parameters remain in the final equation: n, m, and the prior probability p_c of either of the pattern classes. With a fixed design pattern sample, recognition accuracy can first increase as the number of measurements made on a pattern increases, but decay with measurement complexity higher than some optimum value. Graphs of the mean accuracy exhibit both an optimal and a maximum acceptable value of n for fixed m and p_c. A four-place tabulation of the optimum n and maximum mean accuracy values is given for equally likely classes and m ranging from 2 to 1000. The penalty exacted for the generality of the analysis is the use of the mean accuracy itself as a recognizer optimality criterion. Namely, one necessarily always has some particular recognition problem at hand whose Bayes accuracy will be higher or lower than the mean over all recognition problems having fixed n, m, and p_c.
 
Article
It is shown that the covering radius of any binary linear [14, 6] code containing the all-one vector is at least 4. Since the minimum covering radius of a binary linear [14, 6] code is 3, this shows that in general the minimum of the covering radius is not achieved by codes containing the all-one vector.
 
Article
We consider the problem of embedding one signal (e.g., a digital watermark), within another "host" signal to form a third, "composite" signal. The embedding is designed to achieve efficient tradeoffs among the three conflicting goals of maximizing the information-embedding rate, minimizing the distortion between the host signal and composite signal, and maximizing the robustness of the embedding. We introduce new classes of embedding methods, termed quantization index modulation (QIM) and distortion-compensated QIM (DC-QIM), and develop convenient realizations in the form of what we refer to as dither modulation. Using deterministic models to evaluate digital watermarking methods, we show that QIM is "provably good" against arbitrary bounded and fully informed attacks, which arise in several copyright applications, and in particular it achieves provably better rate distortion-robustness tradeoffs than currently popular spread-spectrum and low-bit(s) modulation methods. Furthermore, we show that for some important classes of probabilistic models, DC-QIM is optimal (capacity-achieving) and regular QIM is near-optimal. These include both additive white Gaussian noise (AWGN) channels, which may be good models for hybrid transmission applications such as digital audio broadcasting, and mean-square-error-constrained attack channels that model private-key watermarking applications.
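Scalar QIM with two shifted quantizers is the simplest member of the class; the step size and sample values below are illustrative, not parameters from the paper:

```python
def qim_embed(x, bit, delta=1.0):
    """Embed one bit by quantizing x onto the lattice delta*Z shifted by
    bit*delta/2: the two cosets encode 0 and 1."""
    offset = bit * delta / 2
    return round((x - offset) / delta) * delta + offset

def qim_decode(y, delta=1.0):
    """Decode to whichever coset's nearest lattice point is closer to y."""
    return min((0, 1), key=lambda b: abs(qim_embed(y, b, delta) - y))

marked = qim_embed(3.3, 1)               # host sample 3.3 carries bit 1
print(marked, qim_decode(marked + 0.1))  # → 3.5 1 (robust to noise below delta/4)
```

Distortion is at most delta/2 per sample and decoding survives any perturbation smaller than delta/4, which is the rate-distortion-robustness tradeoff in miniature.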
 
Article
We discovered that a (145, 32) binary cyclic code generated by the product of an irreducible polynomial of degree 28 with exponent 145 and the polynomial x<sup>4</sup>+x<sup>3</sup>+x<sup>2</sup>+x+1 has a minimum distance of 44.
 