
Abstract

Subblock energy-constrained codes (SECCs) have recently attracted attention due to various applications in communication systems, such as simultaneous energy and information transfer. In a SECC, each codeword is divided into smaller subblocks, and every subblock is constrained to carry sufficient energy. In this work, we study SECCs under more general constraints, namely bounded SECCs and sliding-window constrained codes (SWCCs), and propose two methods to construct such codes with low redundancy and linear-time complexity, based on Knuth's balancing technique and the sequence replacement technique. For certain code parameters, our methods incur only one redundant bit.
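As a quick illustration of the subblock energy constraint described above, the following sketch (parameter and function names are illustrative, not from the paper) checks that every subblock of a binary codeword carries at least a threshold weight:

```python
def is_secc_codeword(word, subblock_len, min_weight):
    """Check the subblock energy constraint: every subblock of the
    codeword must have weight (number of 1s) at least min_weight."""
    assert len(word) % subblock_len == 0
    return all(
        sum(word[i:i + subblock_len]) >= min_weight
        for i in range(0, len(word), subblock_len)
    )

# A length-8 codeword split into subblocks of length 4, each required
# to contain at least two 1s (energy units).
assert is_secc_codeword([1, 0, 1, 0, 1, 1, 0, 0], 4, 2)
# The first subblock below has weight 1, so the constraint fails.
assert not is_secc_codeword([1, 0, 0, 0, 1, 1, 0, 1], 4, 2)
```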

Conference Paper
An indel refers to a single insertion or deletion, while an edit refers to a single insertion, deletion or substitution. We investigate codes that combat either a single indel or a single edit and provide linear-time algorithms that encode binary messages into these codes of length n. Over the quaternary alphabet, we provide two linear-time encoders. One corrects a single edit with 2log n + 2 redundant bits, while the other corrects a single indel with log n + 2 redundant bits. The latter encoder reduces the redundancy of the best known encoder of Tenengolts (1984) by at least four bits. Over the DNA alphabet, exactly half of the symbols of a GC-balanced word are either C or G. Via a modification of Knuth’s balancing technique, we provide a linear-time map that translates binary messages into GC-balanced codewords and the resulting codebook is able to correct a single edit. The redundancy of our encoder is 3log n + 2 bits and this is the first known construction of a GC-balanced code that corrects a single edit.
Article
We consider coding techniques that limit the lengths of homopolymer runs in strands of nucleotides used in DNA-based mass data storage systems. We compute the maximum number of user bits that can be stored per nucleotide when a maximum homopolymer runlength constraint is imposed. We describe simple and efficient implementations of coding techniques that avoid the occurrence of long homopolymers, and the rates of the constructed codes are close to the theoretical maximum. The proposed sequence replacement method for k-constrained q-ary data yields a significant improvement in coding redundancy over the prior-art sequence replacement method for k-constrained binary data. Using a simple transformation, standard binary maximum-runlength-limited sequences can be transformed into maximum-runlength-limited q-ary sequences, which opens the door to applying the vast body of prior-art binary code constructions to DNA-based storage.
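The homopolymer runlength constraint above is easy to state in code; a minimal sketch (function names are illustrative, not from the paper):

```python
def max_homopolymer_run(strand):
    """Length of the longest run of identical consecutive symbols."""
    longest = run = 1
    for prev, cur in zip(strand, strand[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def satisfies_runlength(strand, k):
    """A k-constrained strand has no homopolymer run longer than k."""
    return max_homopolymer_run(strand) <= k

assert satisfies_runlength("ACGGGT", 3)     # longest run is GGG
assert not satisfies_runlength("ACGGGGT", 3)  # GGGG violates k = 3
```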
Conference Paper
The subblock energy-constrained codes (SECCs) have recently been shown to be suitable candidates for simultaneous energy and information transfer, where bounds on SECC capacity were presented for communication over noisy channels. In this paper, we study binary SECCs with a given error correction capability, by considering codes with a certain minimum distance. Binary SECCs are a class of constrained codes where each codeword is partitioned into equal-sized subblocks, and every subblock has weight exceeding a given threshold. We present several upper and lower bounds on the optimal SECC code size, and also derive the asymptotic Gilbert-Varshamov (GV) and sphere-packing bounds for SECCs. A related class of codes are the heavy weight codes (HWCs), where the weight of each codeword exceeds a given threshold. We show that for a fixed subblock length, the asymptotic rate for SECCs is strictly lower than the corresponding rate for HWCs when the relative distance of the code is small. The rate gap between HWCs and SECCs represents the penalty of imposing the weight constraint per subblock rather than per codeword.
Article
This paper studies codes that correct a burst of deletions or insertions. Namely, a code is called a b-burst-deletion/insertion-correcting code if it can correct a burst of deletions/insertions of any b consecutive bits. While the lower bound on the redundancy of such codes was shown by Levenshtein to be asymptotically log(n) + b − 1, the redundancy of the best code construction, by Cheng et al., is b(log(n/b + 1)). In this paper, we close this gap and provide codes with redundancy at most log(n) + (b − 1) log(log(n)) + b − log(b). We first show that the models of insertions and deletions are equivalent, and thus it is enough to study codes correcting a burst of deletions. We then derive a non-asymptotic upper bound on the size of b-burst-deletion-correcting codes and extend the burst deletion model to two more cases: 1) a deletion burst of at most b consecutive bits, and 2) a deletion burst of size at most b (not necessarily consecutive). We extend our code construction to the first case and study the second case for b = 3, 4.
Article
The study of subblock-constrained codes has recently gained attention due to their application in diverse fields. We present bounds on the size and asymptotic rate for two classes of subblock-constrained codes. The first class is binary constant subblock-composition codes (CSCCs), where each codeword is partitioned into equal-sized subblocks, and every subblock has the same fixed weight. The second class is binary subblock energy-constrained codes (SECCs), where the weight of every subblock exceeds a given threshold. We present novel upper and lower bounds on the code sizes and asymptotic rates for binary CSCCs and SECCs. For a fixed subblock length and small relative distance, we show that the asymptotic rate for CSCCs (resp. SECCs) is strictly lower than the corresponding rate for constant weight codes (CWCs) (resp. heavy weight codes (HWCs)). Further, for codes with high weight and low relative distance, we show that the asymptotic rate for CSCCs is strictly lower than that of SECCs, in contrast to the asymptotic rate for CWCs, which equals that of HWCs. We also provide a correction to an earlier result by Chee et al. (2014) on the asymptotic CSCC rate. Additionally, we present several numerical examples comparing the rates for CSCCs and SECCs with those for constant weight codes and heavy weight codes.
Article
Consider an energy-harvesting receiver that uses the same received signal both for decoding information and for harvesting energy, which is employed to power its circuitry. In the scenario where the receiver has limited battery size, a signal with bursty energy content may cause power outage at the receiver, since the battery will drain during intervals with low signal energy. In this paper, we consider a discrete memoryless channel and characterize achievable information rates when the energy content in each codeword is regularized by ensuring that sufficient energy is carried within every subblock duration. In particular, we study constant subblock-composition codes (CSCCs), where all subblocks in every codeword have the same fixed composition, and this subblock composition is chosen to maximize the rate of information transfer while meeting the energy requirement. Compared to constant composition codes (CCCs), we show that CSCCs incur a rate loss and that the error exponent for CSCCs is related to the error exponent for CCCs by the same rate loss term. We show that CSCC capacity can be improved by allowing different subblocks to have different compositions while still meeting the subblock energy constraint. We provide numerical examples highlighting the tradeoff between delivery of sufficient energy to the receiver and achieving high information transfer rates. It is observed that the ability to use energy in real time imposes less of a penalty than the ability to use information in real time.
Article
We introduce the class of multiply constant-weight codes to improve the reliability of certain physically unclonable function (PUF) responses. We extend classical coding methods to construct multiply constant-weight codes from known q-ary and constant-weight codes. Analogues of Johnson bounds are derived and shown to be asymptotically tight up to a constant factor under certain conditions. We also examine the rates of multiply constant-weight codes and, interestingly, demonstrate that these rates are the same as those of constant-weight codes of suitable parameters. Asymptotic analysis of our code constructions is provided.
Conference Paper
This paper addresses coding for power transfer, modulation, and error control for the reader-to-tag channel in near-field passive radio frequency identification (RFID) systems using inductive coupling as a power transfer mechanism. Different assumptions on channel noise (including two different models for bit-shifts, insertions and deletions, and additive white Gaussian noise) are discussed. In particular, we propose a discretized Gaussian shift channel for the reader-to-tag channel in passive RFID systems, and design some new simple codes for error avoidance on this channel model. Finally, some simulation results are presented to compare the proposed codes to the Manchester code and two previously proposed codes for the bit-shift channel model.
Article
Motivated by applications in DNA-based storage, we introduce the new problem of code design in the Damerau metric. The Damerau metric is a generalization of the Levenshtein distance which, in addition to deletions, insertions and substitution errors also accounts for adjacent transposition edits. We first provide constructions for codes that may correct either a single deletion or a single adjacent transposition and then proceed to extend these results to codes that can simultaneously correct a single deletion and multiple adjacent transpositions. We conclude with constructions for joint block deletion and adjacent block transposition error-correcting codes.
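The Damerau metric mentioned above can be computed with a standard dynamic program. The sketch below implements the common "optimal string alignment" variant, which counts an adjacent transposition as one edit; it is illustrative only and is not the paper's code construction:

```python
def damerau_distance(a, b):
    """Restricted Damerau-Levenshtein ("optimal string alignment")
    distance: deletions, insertions, substitutions, and adjacent
    transpositions, with no substring edited more than once."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

assert damerau_distance("ACGT", "AGCT") == 1  # one adjacent transposition
```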
Article
Dimming control is desirable for visible light communication (VLC) systems to provide different levels of brightness. In this letter, we propose a new coding scheme for dimmable VLC systems based on serial concatenation of column-scaled (CS) low-density parity-check (LDPC) codes and constant weight codes (CWCs). In the proposed scheme, the cardinality of the finite field on which the CS LDPC code is defined varies adaptively according to the CWC. Hence, transformation between symbol metrics and bit metrics is avoided. Other advantages of the proposed scheme are as follows: 1) its coding rates are not constrained by the dimming range, as error control and dimming control are decoupled; 2) it can be easily configured to support a wide range of dimming targets; and 3) it requires essentially the same hardware architecture to implement the encoder/decoder pair. Hence, the proposed coding scheme provides an attractive candidate for VLC systems with both error and dimming control.
Article
In various wireless systems, such as sensor RFID networks and body area networks with implantable devices, the transmitted signals are simultaneously used both for information transmission and for energy transfer. In order to satisfy the conflicting requirements on information and energy transfer, this paper proposes the use of constrained run-length limited (RLL) codes in lieu of conventional unconstrained (i.e., random-like) capacity-achieving codes. The receiver's energy utilization requirements are modeled stochastically, and constraints are imposed on the probabilities of battery underflow and overflow at the receiver. It is demonstrated that the codewords' structure afforded by the use of constrained codes enables the transmission strategy to be better adjusted to the receiver's energy utilization pattern, as compared to classical unstructured codes. As a result, constrained codes allow a wider range of trade-offs between the rate of information transmission and the performance of energy transfer to be achieved.
Article
In this article, we study properties and algorithms for constructing sets of "constant weight" codewords with bipolar symbols, where the sum of the symbols is a constant q, q ≥ 0. We show various code constructions that extend Knuth's balancing vector scheme, q = 0, to the case where q > 0, and we compute the redundancy of the new coding methods. The codes in question are sets C ⊆ {w = (w_1, w_2, ..., w_n) ∈ {−1, +1}^n : ∑_{i=1}^n w_i = q}. Index terms: balanced code, channel capacity, constrained code, magnetic recording, optical recording.
Conference Paper
The fundamental tradeoff between the rates at which energy and reliable information can be transmitted over a single noisy line is studied. Engineering inspiration for this problem is provided by powerline communication, RFID systems, and covert packet timing systems, as well as communication systems that scavenge received energy. A capacity-energy function is defined and a coding theorem is given. The capacity-energy function is non-increasing and concave (∩-shaped). Capacity-energy functions for several channels are computed.
Article
The sequence replacement technique converts an input sequence into a constrained sequence in which a prescribed subsequence is forbidden to occur. Several coding algorithms are presented that use this technique for the construction of maximum run-length limited sequences. The proposed algorithms show how all forbidden subsequences can be successively or iteratively removed to obtain a constrained sequence and how special subsequences can be inserted at predefined positions in the constrained sequence to represent the indices of the positions where the forbidden subsequences were removed. Several modifications are presented to reduce the impact of transmission errors on the decoding operation, and schemes to provide error control are discussed as well. The proposed algorithms can be implemented efficiently, and the rates of the constructed codes are close to their theoretical maximum. As such, the proposed algorithms are of interest for storage systems and data networks.
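The core idea of the sequence replacement technique — delete each occurrence of a forbidden subsequence and record where it was, so a decoder can undo the deletions — can be sketched as follows. This toy version stores the positions out of band rather than re-encoding them into the sequence itself, so unlike the actual technique it is not length-preserving; all names are illustrative:

```python
FORBIDDEN = "0000"  # toy forbidden subsequence

def remove_forbidden(seq):
    """Repeatedly delete the leftmost occurrence of the forbidden
    subsequence, recording each deletion position for the decoder."""
    positions = []
    while (p := seq.find(FORBIDDEN)) != -1:
        positions.append(p)
        seq = seq[:p] + seq[p + len(FORBIDDEN):]
    return seq, positions

def reinsert(seq, positions):
    """Undo remove_forbidden by reinserting in reverse order."""
    for p in reversed(positions):
        seq = seq[:p] + FORBIDDEN + seq[p:]
    return seq

s = "1100001011"
constrained, pos = remove_forbidden(s)
assert FORBIDDEN not in constrained  # the constraint now holds
assert reinsert(constrained, pos) == s  # and the map is invertible
```

Reinserting in reverse order inverts the removals exactly, because each recorded position refers to the sequence as it stood just before that removal.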
Article
Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for Pr{S − ES ≥ nt} depend only on the endpoints of the ranges of the summands and the mean, or the mean and the variance, of S. These results are then used to obtain analogous inequalities for certain sums of dependent random variables, such as U-statistics and the sum of a random sample without replacement from a finite population.
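For summands bounded in [0, 1], the best-known special case of these bounds reads Pr{S − ES ≥ nt} ≤ exp(−2nt²). A quick numerical sanity check against fair coin flips (illustrative, not from the article):

```python
import math
import random

def hoeffding_bound(n, t):
    """Hoeffding tail bound Pr{S - ES >= n*t} <= exp(-2*n*t*t)
    for a sum S of n independent summands, each bounded in [0, 1]."""
    return math.exp(-2 * n * t * t)

# Empirical tail frequency for n fair coin flips (mean n/2).
random.seed(0)
n, t, trials = 100, 0.1, 20000
hits = sum(
    (sum(random.randint(0, 1) for _ in range(n)) - n / 2) >= n * t
    for _ in range(trials)
)
print(hits / trials, "<=", hoeffding_bound(n, t))
```

Here the bound is exp(−2) ≈ 0.135, while the true tail probability of a Binomial(100, 1/2) exceeding 60 is considerably smaller, as the printed comparison shows.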
Article
Coding schemes in which each codeword contains equally many zeros and ones are constructed in such a way that they can be efficiently encoded and decoded.
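Knuth's key observation is that complementing a suitable prefix of any even-length binary word always balances it: as the prefix length grows by one, the weight changes by exactly ±1, so by an intermediate-value argument it must pass through n/2. A minimal sketch:

```python
def knuth_balance(word):
    """Knuth's balancing: complement the first k bits of a binary word
    of even length n so the result has exactly n/2 ones.  Returns
    (balanced word, k); the decoder recovers the original word by
    complementing the first k bits again."""
    n = len(word)
    assert n % 2 == 0
    for k in range(n + 1):
        flipped = [1 - b for b in word[:k]] + word[k:]
        if sum(flipped) == n // 2:
            return flipped, k
    raise AssertionError("unreachable: a balancing index always exists")

balanced, k = knuth_balance([1, 1, 1, 1, 0, 1])
assert sum(balanced) == len(balanced) // 2
# The encoder transmits the balanced word plus a short encoding of k.
```

In the full scheme, the index k is itself encoded as a (short) balanced word, which is what makes the overall codeword balanced and efficiently decodable.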
Article
A balanced code with r check bits and k information bits is a binary code of length k+r and cardinality 2^k such that each codeword is balanced; that is, it has (k+r)/2 1's and (k+r)/2 0's. This paper contains new methods to construct efficient balanced codes. To design a balanced code, an information word with a low number of 1's or 0's is compressed and then balanced using the saved space. On the other hand, an information word having almost the same number of 1's and 0's is encoded using the single maps defined by Knuth's (1986) complementation method. Three different constructions are presented. Balanced codes with r check bits and k information bits with k ≤ 2^(r+1) − 2, k ≤ 3·2^r − 8, and k ≤ 5·2^r − 10r + c(r), c(r) ∈ {−15, −10, −5, 0, +5}, are given, improving the constructions found in the literature. In some cases, the first two constructions have a parallel coding scheme.
Article
For n > 0, d ≥ 0, n ≡ d (mod 2), let K(n, d) denote the minimal cardinality of a family V of ±1 vectors of dimension n such that for any ±1 vector w of dimension n there is a v ∈ V with |v · w| ≤ d, where v · w is the usual scalar product of v and w. A generalization of a simple construction due to D.E. Knuth (1986) shows that K(n, d) ≤ ⌈n/(d+1)⌉. A linear-algebra proof is given here that this construction is optimal, so that K(n, d) = ⌈n/(d+1)⌉ for all n ≡ d (mod 2). This construction and its extensions have applications to communication theory, especially to the construction of signal sets for optical data links.