Properties and constructions of energy-harvesting
sliding-window constrained codes
Kees A. Schouhamer Immink, Life Fellow, IEEE, and Kui Cai, Senior Member, IEEE
Abstract—We study properties and constructions of constrained binary codes that enable simultaneous energy and information transfer. We specifically study sliding-window constrained codes that guarantee that within any prescribed window of ℓ consecutive bits the constrained sequence has at least t, t > 1, 1's. We present a K-state source, K = (ℓ choose t), that models the (ℓ, t) sliding-window constraint. We compute the information capacity of sliding-window (ℓ, t)-constrained sequences. We design efficient coding techniques for translating source data into sliding-window (ℓ, t)-constrained sequences.
I. INTRODUCTION
Signals sent by a transmitter carry both information and
energy to the receiver [1, 2]. Applications of such energy
harvesting receiving devices are anticipated in products of the
Internet of Things (IoT), where the receiving device may reuse
the energy carried by the received signals without the need
for batteries and maintenance. For binary systems emitting 0's and 1's, this has a bearing on the number of 1's (which are supposed to carry the energy) that are sent in a prescribed time slot. A minimal number of 1's in the transmitted sequences is required to carry sufficient energy within a prescribed time span while also transmitting information. Two
basic approaches have emerged for simultaneous information
and energy communication, namely the subblock-energy con-
straint [3, 4] and the sliding-window constraint [2] on which
we will concentrate here.
A binary sequence is said to obey the sliding-window (ℓ, t)-constraint if the number of 1's (also called the weight) within any window of ℓ consecutive bits of that sequence is at least t. We investigate the information capacity of the sliding-window (ℓ, t)-constraint, and we present constructions of efficient low-redundancy codes that generate sequences that obey the sliding-window (ℓ, t)-constraint.
The paper is organized as follows. The maximum informa-
tion rate, called capacity, of energy-constrained sequences is
computed in Section II. In Section III, we describe simple
constructions of codes that translate source data into binary
(ℓ, t)-constrained sequences, where we pay extra attention to
codes with a low-complexity decoder. Section IV furnishes the
conclusions of our paper.
Kees A. Schouhamer Immink is with Turing Machines Inc, Willemskade 15d, 3016 DK Rotterdam, The Netherlands. E-mail: immink@turing-machines.com.
Kui Cai is with Singapore University of Technology and Design (SUTD), Science, Mathematics and Technology Cluster, 8 Somapah Rd, 487372, Singapore. E-mail: cai_kui@sutd.edu.sg.
This work is supported by Singapore Ministry of Education Academic
Research Fund Tier 2 MOE2016-T2-2-054.
II. CAPACITY OF (ℓ, t)-CONSTRAINED SEQUENCES
Define the ℓ-bit word x = (x_1, x_2, ..., x_ℓ) over the binary symbol alphabet x_i ∈ Q, Q = {0, 1}. The weight w(x) of a word x is the number of 1's in x, or w(x) = x_1 + x_2 + ··· + x_ℓ. Let Y = (y_1, y_2, ..., y_n) be a sequence of n, n ≥ ℓ, binary symbols. A sequence Y is said to be (ℓ, t) sliding-window energy constrained if, for all i > 0, the ℓ-bit sliding window satisfies w(y_i, ..., y_{i+ℓ−1}) ≥ t, where t, 0 ≤ t ≤ ℓ, is an integer called the threshold. Note that the special cases t = 0 and t = ℓ are trivial, and not pursued here. The cases (ℓ, 1) and (ℓ, ℓ−1) refer to runlength-constrained sequences [5, 6].
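For illustration, the following minimal Python sketch (ours; the function name is only illustrative) checks whether a given binary sequence satisfies the sliding-window (ℓ, t)-constraint.

```python
def is_sliding_window_constrained(y, ell, t):
    """Return True if every window of ell consecutive bits of y
    contains at least t ones (the (ell, t) sliding-window constraint)."""
    if len(y) < ell:
        raise ValueError("sequence must contain at least ell bits")
    return all(sum(y[i:i + ell]) >= t for i in range(len(y) - ell + 1))

# Example for ell = 4, t = 2.
assert is_sliding_window_constrained([1, 1, 0, 1, 1, 0, 1, 1], 4, 2)
assert not is_sliding_window_constrained([1, 0, 0, 1, 0, 0, 1, 1], 4, 2)
```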
The capacity of (ℓ, t)-constrained sequences, denoted by C(ℓ, t), is defined by [7]

C(\ell, t) = \lim_{n \to \infty} \frac{1}{n} \log_2 N(n),    (1)

where N(n) denotes the number of distinct (ℓ, t)-constrained sequences Y of length n. The capacity of a channel that forbids the transmission of undesired words can be computed by an elegant method presented by Guibas and Odlyzko [8]. For larger sets of forbidden words the method is cumbersome.
Wu et al. [9, 10] showed that the (ℓ, t)-constrained channel can be modelled by an autonomous Moore-type finite-state machine source [11] with

K' = \sum_{i=t}^{\ell} \binom{\ell}{i}

states, denoted by σ'_i, 1 ≤ i ≤ K', whose emitted data depend only on the present state visited. Each state σ'_i, 1 ≤ i ≤ K', is represented by an ℓ-bit allowed word, x = (x_1, ..., x_ℓ), of weight at least t. Let (x_1, x_2, ..., x_ℓ) be the ℓ-bit word associated with the current state; then the next possible states are associated with the candidate words (x_2, x_3, ..., x_ℓ, 0) or (x_2, x_3, ..., x_ℓ, 1), producing a '0' or '1' as an output, respectively. We set up a square K' × K' transition matrix, D', with binary elements d'_{i,j} ∈ Q that represent a possible transition from state σ'_i to state σ'_j. A transition from state σ'_i to σ'_j is allowable if the ℓ-bit word associated with σ'_i can be followed by the ℓ-bit word associated with σ'_j. If such a transition is allowable then d'_{i,j} = 1 and zero otherwise.
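For illustration, the following Python sketch (ours) builds this finite-state machine directly from the definition: it enumerates the K' allowed ℓ-bit words and fills the binary transition matrix D'. For ℓ = 4 and t = 2 it returns the K' = 11 states of Example 1 below.

```python
from itertools import product

def full_machine(ell, t):
    """Enumerate all ell-bit words of weight >= t (the states sigma'_i)
    and build the K' x K' binary transition matrix D'."""
    states = [w for w in product((0, 1), repeat=ell) if sum(w) >= t]
    index = {w: i for i, w in enumerate(states)}
    K = len(states)
    D = [[0] * K for _ in range(K)]
    for i, w in enumerate(states):
        for b in (0, 1):                  # emit '0' or '1'
            nxt = w[1:] + (b,)            # shift the window by one bit
            if sum(nxt) >= t:             # successor must still be allowed
                D[i][index[nxt]] = 1
    return states, D

states, Dp = full_machine(4, 2)
print(len(states))   # 11 states for (ell, t) = (4, 2)
```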
We can reduce the size of the finite-state machine by merging equivalent states [11]. States σ'_i and σ'_j are equivalent if the same output sequence is produced regardless of whether the initial state is σ'_i or σ'_j. Two states σ'_i and σ'_j are merged by deleting one of them, say σ'_j, and redirecting the incoming edges of the deleted σ'_j to σ'_i.
An allowable output sequence from a given state is governed by the position of the t trailing 1's of the ℓ-bit word associated with that state. Let x = (x_1, ..., x_ℓ) be an ℓ-bit word, and let p denote the largest index such that x_p = 1 and the weight of the (ℓ − p + 1)-bit tail is w(x_p, ..., x_ℓ) = t. States associated with ℓ-bit words that have the same (ℓ − p + 1)-bit tail are equivalent and can be merged into a single state.
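In code, the merge rule amounts to extracting the tail that starts at the t-th 1 counted from the right; a minimal Python sketch (ours, with an illustrative helper name) is:

```python
def merge_key(word, t):
    """Return the tail (x_p, ..., x_ell) that starts at the position p of the
    t-th 1 counted from the right; states sharing this tail are equivalent."""
    ones_seen = 0
    for p in range(len(word) - 1, -1, -1):   # scan from the right
        if word[p] == 1:
            ones_seen += 1
            if ones_seen == t:
                return word[p:]
    raise ValueError("word has weight smaller than t")

# '0011', '0111', '1011' and '1111' all share the 2-bit tail '11'.
assert merge_key((0, 1, 1, 1), 2) == merge_key((1, 1, 1, 1), 2) == (1, 1)
```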
[Figure 1: a six-state directed graph with states σ1: 'xx11', σ2: 'x101', σ3: 'x110', σ4: '1001', σ5: '1010', σ6: '1100'; edges are labelled with the emitted bit '0' or '1'.]

Fig. 1. State diagram of the (ℓ = 4, t = 2) constraint. Any walk, stepping from state to state following the arrows, and reading off the symbols tagged to the arrows, generates a sequence with at least t = 2 1's in a sliding window of ℓ = 4 bits.
Instead of associating a state with an ℓ-bit word of weight at least t as above, we associate a merged state with an ℓ-bit word of weight t. The number of states, denoted by K, equals K = (ℓ choose t). The merged states are denoted by σ_i, 1 ≤ i ≤ K, and the transition matrix of the reduced machine is denoted by D.
Example 1: For ℓ = 4 and t = 2, the original machine has K' = 11 states denoted by '0011', '0101', '0110', '0111', '1001', '1010', '1011', '1100', '1101', '1110', and '1111'. We merge the states with the same 2-bit tail '11', namely '0011', '0111', '1011', and '1111' into 'xx11' (the 'x' denotes don't care). Similarly, we merge '0101' and '1101' into 'x101', and '0110' and '1110' into 'x110'. States '1001', '1010', and '1100' cannot be merged, so that we obtain K = 6 remaining states. The merged states are denoted and numbered by σ1: 'xx11', σ2: 'x101', σ3: 'x110', etc., see Figure 1. State σ1 can be followed by two other states, namely 'x110' and 'x111' (= 'xx11'). Thus, a transition is allowed from σ1 to σ3: 'x110', or to state σ1: 'xx11' (a loop). After working out, we obtain the following 6 × 6 transition matrix:

D = \begin{pmatrix}
1 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0
\end{pmatrix}.    (2)
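As a cross-check, the reduced machine can be generated directly from the K = (ℓ choose t) weight-t representatives (don't-care positions set to 0). The sketch below (ours; the state ordering is chosen to match Example 1) reproduces the matrix D of (2).

```python
from itertools import product

def reduced_machine(ell, t):
    """Build the K x K transition matrix D of the merged (reduced) machine.
    Each merged state is represented by an ell-bit word of weight exactly t
    (don't-care positions set to 0)."""
    def key(word):
        # zero out everything before the t-th 1 counted from the right
        ones, p = 0, len(word) - 1
        while ones < t:
            if word[p] == 1:
                ones += 1
            p -= 1
        return (0,) * (p + 1) + word[p + 1:]

    states = [w for w in product((0, 1), repeat=ell) if sum(w) == t]
    index = {w: i for i, w in enumerate(states)}
    D = [[0] * len(states) for _ in states]
    for i, w in enumerate(states):
        for b in (0, 1):
            nxt = w[1:] + (b,)
            if sum(nxt) >= t:
                D[i][index[key(nxt)]] = 1
    return states, D

states, D = reduced_machine(4, 2)
for row in D:
    print(row)      # reproduces the six rows of D in (2)
```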
The (information) capacity of the finite-state data source equals C(ℓ, t) = log_2 λ, where λ equals the largest (real) eigenvalue of the transition matrix D [7], that is, the largest real root of

\det[D - z I] = 0,    (3)

where I denotes the K × K identity matrix and det denotes the determinant of a matrix. Using numerical methods, we have computed C(ℓ, t) for a selected number of ℓ and t. The outcomes are listed in Table I. In the next section, we discuss practical implementations of codes that generate (ℓ, t)-constrained sequences.
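The entries of Table I follow from a small numerical computation of the largest eigenvalue; a minimal sketch with numpy (ours), hard-coding the matrix D of (2) for ℓ = 4, t = 2, is:

```python
import numpy as np

# Transition matrix D of the reduced (ell, t) = (4, 2) machine, see (2).
D = np.array([[1, 0, 1, 0, 0, 0],
              [1, 0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0, 1],
              [1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 0, 1, 0, 0]])

lam = max(abs(np.linalg.eigvals(D)))   # largest (real) eigenvalue of D
capacity = np.log2(lam)                # C(4, 2) = log2(lambda) ~ 0.778
print(round(float(capacity), 3))
```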
III. CODE CONSTRUCTIONS
We assume that a long sequence of binary source data, X, is translated using an encoder into a long sequence, Y, of binary channel bits in a serial format that obeys the prescribed sliding-window constraint. To that end, we partition the source sequence into a sequence of m-bit words, X = ..., x_{i−1}, x_i, x_{i+1}, ..., where x_i ∈ Q^m. The encoder translates the sequence of m-bit words into the n-bit words ..., ŷ_{i−1}, ŷ_i, ŷ_{i+1}, ..., where ŷ_i ∈ Q^n, n > m. The n-bit words, ŷ_i, are serialized and cascaded to form a long binary sequence, Y = ..., y_{i−1}, y_i, y_{i+1}, ..., y_i ∈ {0, 1}, that satisfies the prescribed (ℓ, t) energy constraint, and transmitted. At the receiver's site, the decoder unambiguously translates the sequence Y into X.
A. Block code design
A finite-state data source description of the channel con-
straint is a good starting point of a code design [12, 13, 14, 15].
We follow Franaszek's method [16] for constructing a constrained block code for given values of ℓ and t. We start by a judicious choice of the source and codeword lengths, m and n, respectively, where the quotient of the integers m and n satisfies R = m/n < C(ℓ, t). The (real) quantity R
is called the code rate. A finite-state encoder is a finite-state
machine whose edges have two labels, namely the output n-bit
codeword and the assigned m-bit (input) source word. The n-
bit output codeword is a function of the present encoder state
and the m-bit input source word. Note that in the finite-state
data source described in the previous section, the edges are
labelled with the output data only. The procedure for a block
code design has three steps.
We first set up the transition matrix, D. Secondly, we compute the n-th power of D, denoted by D^n. The encoder has a set of states, denoted by Σ' = {σ_i}, which is a subset of the set of states of the data source, Σ, or Σ' ⊆ Σ. In order to find Σ', we proceed with a process called successive elimination of states. Since for each encoder state in Σ' each source word must be labelled to an outgoing edge, each encoder state in Σ' must have at least 2^m outgoing edges.

We start the successive elimination procedure by setting Σ' = Σ. Let F_i be the set of outgoing n-bit codewords of state σ_i. The size of the set, called the fanout, equals |F_i| = Σ_{j: σ_j ∈ Σ'} d^{[n]}_{i,j}, where d^{[n]}_{i,j} denotes the (i, j)-th element of D^n. If the fanout |F_i| < 2^m, we eliminate σ_i from Σ'. After a number of elimination rounds, we end up with either an empty state set Σ' = ∅ or a Σ' ≠ ∅ that serves as a skeleton of the encoder. Any state σ_i ∈ Σ' satisfies

\sum_{j: \sigma_j \in \Sigma'} d^{[n]}_{i,j} \geq 2^m,    (4)

so that we have made certain that for each state σ_i ∈ Σ' we have sufficient outgoing edges. The set Σ' defines the set of encoder states.
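A minimal Python sketch of the successive elimination step (ours; it assumes numpy and takes the transition matrix D, the codeword length n, and the source word length m) is given below. For the matrix D of (2) with n = 3 and m = 2 it returns the three surviving states σ1, σ2, σ3 found in Example 2 below.

```python
import numpy as np

def eliminate_states(D, n, m):
    """Franaszek's successive elimination: repeatedly remove states whose
    restricted row sum of D^n is smaller than 2**m.  Returns the indices
    of the surviving encoder states (possibly the empty set)."""
    Dn = np.linalg.matrix_power(np.asarray(D), n)
    survivors = set(range(Dn.shape[0]))
    changed = True
    while changed:
        changed = False
        for i in sorted(survivors):
            fanout = sum(Dn[i, j] for j in survivors)
            if fanout < 2 ** m:
                survivors.remove(i)
                changed = True
    return sorted(survivors)
```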
In the third step of Franaszek's procedure, we assign, for each encoder state in Σ', each of the 2^m source words to edges leaving that state. Then, each edge has two labels: the codeword and the associated source word. Note that in the previous step, the successive elimination procedure, we made sure that at least 2^m edges leave each state in Σ'. By a judicious assignment we may reduce encoder and decoder complexity and may also reduce a phenomenon called error propagation. After the assignment step, we have completed the encoder output function. Let us exemplify the above with a simple design example.
TABLE I
CAPACITY C(ℓ, t) VERSUS WINDOW LENGTH ℓ AND THRESHOLD t.

ℓ    t=1    t=2    t=3    t=4    t=5    t=6    t=7    t=8    t=9    t=10   t=11   t=12   t=13   t=14   t=15
2    0.694
3    0.879  0.551
4    0.947  0.778  0.465
5    0.975  0.883  0.698  0.406
6    0.988  0.937  0.823  0.635  0.362
7    0.994  0.965  0.895  0.770  0.583  0.328
8    0.997  0.981  0.936  0.853  0.723  0.541  0.301
9    0.999  0.989  0.961  0.905  0.813  0.681  0.505  0.279
10   0.999  0.994  0.977  0.939  0.873  0.776  0.644  0.474  0.260
11   1.000  0.997  0.986  0.960  0.914  0.842  0.743  0.612  0.447  0.244
12   1.000  0.998  0.992  0.974  0.941  0.889  0.813  0.712  0.583  0.424  0.230
13   1.000  0.999  0.995  0.984  0.960  0.921  0.863  0.785  0.683  0.557  0.403  0.218
14   1.000  0.999  0.997  0.990  0.973  0.944  0.900  0.839  0.758  0.657  0.533  0.384  0.207
15   1.000  1.000  0.998  0.993  0.982  0.961  0.927  0.879  0.815  0.734  0.633  0.512  0.368  0.198
16   1.000  1.000  0.999  0.996  0.988  0.973  0.947  0.910  0.859  0.793  0.711  0.611  0.492  0.353  0.189

TABLE II
BLOCK CODE OF RATE 2/3, ℓ = 4 AND t = 2.

input   output
0       011
1       101
2       110
3       111

Example 2: Let ℓ = 4 and t = 2, see also Example 1. The capacity C(4, 2) ≈ 0.778, see Table I, so we choose m = 2 and n = 3, so that the rate R = m/n = 2/3 is slightly lower than C(4, 2) ≈ 0.778. We have from (2)

D^3 = \begin{pmatrix}
2 & 1 & 1 & 1 & 1 & 1 \\
2 & 1 & 1 & 0 & 1 & 1 \\
2 & 1 & 1 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 0
\end{pmatrix}.

We find that only the first four row sums are ≥ 2^m = 4, so that states σ5 and σ6 are eliminated. In a second round, since σ6 was eliminated in the first round, the restricted row sum of σ4 equals 3 (< 4), so that state σ4 must also be eliminated, and we obtain a 3-state encoder. The source word and codeword assignment is straightforward. The codebook is shown in Table II.
The code obtained is a one-to-one translation of source words into codewords and vice versa, whose codewords can be cascaded without observation of previous or future source words or codewords. Decoding is accomplished by observing the
n-symbol codeword. The code is an example of a state-
independent encodable and decodable code discussed in detail
in Section III-B.
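For completeness, the rate 2/3 code of Table II can be implemented with two four-entry look-up tables; the sketch below (ours) encodes a stream of 2-bit source words and decodes the resulting channel stream back by table look-up on each 3-bit codeword.

```python
# Codebook of Table II: 2-bit source words -> 3-bit codewords.
ENC = {(0, 0): (0, 1, 1), (0, 1): (1, 0, 1), (1, 0): (1, 1, 0), (1, 1): (1, 1, 1)}
DEC = {v: k for k, v in ENC.items()}

def encode(bits):
    """Map consecutive 2-bit source words onto 3-bit codewords (rate 2/3)."""
    assert len(bits) % 2 == 0
    out = []
    for i in range(0, len(bits), 2):
        out.extend(ENC[tuple(bits[i:i + 2])])
    return out

def decode(bits):
    """Invert encode() by table look-up on each 3-bit codeword."""
    assert len(bits) % 3 == 0
    out = []
    for i in range(0, len(bits), 3):
        out.extend(DEC[tuple(bits[i:i + 3])])
    return out

src = [0, 0, 1, 1, 0, 1, 1, 0]
cw = encode(src)               # 011 111 101 110
assert decode(cw) == src
```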
Franaszek's recursive elimination procedure delivers, if successful, a subset Σ' ⊆ Σ of encoder states, where each σ_i ∈ Σ' has at least 2^m outgoing edges. It does not, in general, deliver an encoder with the least number of encoder states. With an exhaustive search we may find the smallest number of encoder states. The (ℓ, t)-constrained channel, however, is notorious for its many states, so that an exhaustive search for a minimum number of encoder states is often impractical. Table III shows the main parameters of block codes for a selection of ℓ and t values found by invoking a search routine. The parameter η = m/(nC(ℓ, t)) denotes the rate efficiency of the code.

TABLE III
PARAMETERS OF BLOCK CODES FOR A SELECTION OF ℓ AND t VALUES. THE INTEGERS m AND n ARE THE SOURCE WORD AND CODEWORD LENGTH, RESPECTIVELY, WHILE |Σ'| IS THE NUMBER OF ENCODER STATES, AND η = R/C(ℓ, t) IS THE RATE EFFICIENCY.

ℓ    t    m    n    |Σ'|   η = R/C(ℓ, t)
4    2    2    3     3     0.857
4    2   11   15     3     0.943
5    2   10   12     4     0.943
6    3   14   18     7     0.945
6    4    9   16     3     0.886
7    2    8    9     9     0.921
7    5    8   16     4     0.857
8    2   16   17     5     0.959
10   5   12   16    12     0.859
The encoder look-up tables required for the (ℓ, t)-constrained codes that are shown in Table III are within easy reach of modern electronics. The usage of (ℓ, t)-constrained
codes is anticipated in products for the Internet of Things
(IoT), where low cost is paramount. The decoding tables are
therefore major elements of the receiver hardware, and should
be carefully considered. In the next section, we show how we
can reduce the number of encoder and decoder tables.
B. State-independent encoding and decoding
The rate 2/3 code shown in Table II simply translates the source words into codewords without dependence on the encoder state. Such a code is called a state-independent encoder. The observation of a 3-bit codeword is sufficient for retrieving the 2-bit data word. Such a decoder is often called a state-independent decoder. Clearly, both state-independent
decoding and encoding are desirable virtues, and below we
look into the feasibility of this feature.
TABLE IV
SLIDING-BLOCK DECODABLE CODE OF RATE 9/12, ℓ = 4 AND t = 2.

               state 1                       state 2
input   codeword        next state    codeword        next state
0       001110111101    2             101110111101    2
1       001110111110    2             101110111110    2
2       001110111111    2             110011001111    1
3       001111001101    2             101111001101    2
4       001111001110    2             101111001110    2
5       001111001111    2             110011010111    1
...

Let Σ' ⊆ Σ be a subset of the state set Σ as discussed above. Let σ_i and σ_j be states in Σ', and let S_{i,j} denote the set of all allowable n-bit words that are generated by the finite-state source when it starts in state σ_i and ends in σ_j. The size of the set equals |S_{i,j}| = d^{[n]}_{i,j}. The set of codewords leaving σ_i ∈ Σ' and ending in one of the states σ_j ∈ Σ' equals

F_i = \bigcup_{j: \sigma_j \in \Sigma'} S_{i,j}.

Let

S_{\Sigma'} = \bigcap_{i: \sigma_i \in \Sigma'} F_i    (5)

denote the intersection of the |Σ'| sets of codewords F_i. A single look-up table for encoding and decoding is possible if

|S_{\Sigma'}| \geq 2^m.    (6)
The above condition, although similar to condition (4), is, however, much more numerically involved: condition (4) involves simple addition operations on the elements of D^n, while condition (6) involves the generation of the sets of allowable n-bit codewords S_{i,j}, and computing the union and intersection of these sets. If (6) is satisfied, then the codewords in S_{Σ'} can be uniquely assigned to the 2^m source words. We have, using (6), established that all block codes presented in Table III can be encoded and decoded with a single look-up table, which has an immediate bearing on the complexity and the error propagation.
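The sketch below (ours) makes condition (6) concrete for ℓ = 4, t = 2, n = 3, m = 2: it collects the n-bit words generated between encoder states by walking the six-state machine of Fig. 1 and verifies that the intersection S_Σ' contains 2^m = 4 words, namely the codewords of Table II.

```python
from itertools import product

# Reduced (4, 2) machine of Fig. 1: state index -> {output bit: next state}.
NEXT = {0: {1: 0, 0: 2}, 1: {1: 0, 0: 4}, 2: {1: 1, 0: 5},
        3: {1: 0}, 4: {1: 1}, 5: {1: 3}}

def words_from(state, n, allowed_end):
    """All n-bit output words of walks starting in `state` and ending in a
    state of `allowed_end` (the union of the sets S_{i,j} over allowed j)."""
    words = set()
    for bits in product((0, 1), repeat=n):
        s = state
        for b in bits:
            if b not in NEXT[s]:
                break
            s = NEXT[s][b]
        else:
            if s in allowed_end:
                words.add(bits)
    return words

encoder_states = {0, 1, 2}    # sigma_1, sigma_2, sigma_3 of Example 2
common = set.intersection(*(words_from(s, 3, encoder_states) for s in encoder_states))
print(sorted(common))         # 011, 101, 110, 111: |S_Sigma'| = 4 >= 2**2
```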
C. State-splitting method
The state-splitting method or ACH-algorithm developed in [12, 17] is a systematic technique for designing constrained codes. We have applied the ACH-algorithm to the case ℓ = 4 and t = 2. Results are shown in Table IV, which shows a small part of the encoding tables of the two-state encoder that translates a series of 9-bit source words into a series of 12-bit codewords that satisfy the ℓ and t constraints. The source
input (left column) is represented by a decimal number in the
range 0,...,511, although only the first six words are listed.
The corresponding output when the encoder is in state 1 is
listed in the second column. The next state function of state 1
is listed in column 3. The output and next state functions of
state 2 are listed in columns 4 and 5. The rate efficiency, η, of the rate 9/12 code is η ≈ 0.965. Decoding is done by
observation of the previous, the current, and the upcoming
codewords.
IV. CONCLUSIONS
We have studied properties and constructions of binary codes that enable simultaneous energy and information transfer by using binary sequences that have at least t 1's in any sliding window of ℓ consecutive bits. We have presented a K-state source, K = (ℓ choose t), that models the (ℓ, t) sliding-window constraint. We have computed the information capacity, C(ℓ, t), for selected values of ℓ and t. We have presented methods for designing efficient block codes that translate user data into sliding-window (ℓ, t)-constrained sequences. We have presented low-complexity state-independent encodable and decodable (ℓ, t)-constrained block codes. We employed the state-splitting ACH method for designing a rate 9/12, (ℓ = 4, t = 2)-constrained code with a rate efficiency of η ≈ 0.964.
REFERENCES
[1] P. Popovski, A. M. Fouladgar, and O. Simeone, “Interactive joint transfer
of energy and information,” IEEE Trans. Commun., vol. 61, no. 5, pp.
2086-2097, May 2013.
[2] E. Rosnes, A. I. Barbero, and Ø. Ytrehus, “Coding for Inductively
Coupled Channels,” IEEE Trans. Inform. Theory, vol. 58, no. 8, pp.
5418-5436, Aug. 2012.
[3] A. Tandon, M. Motani and L. R. Varshney, “Subblock-Constrained
Codes for Real-Time Simultaneous Energy and Information Transfer,”
IEEE Transactions on Information Theory, vol. 62, no. 7, pp. 4212-4227,
July 2016.
[4] A. Tandon, H. M. Kiah and M. Motani, “Bounds on the Size and
Asymptotic Rate of Subblock-Constrained Codes,” IEEE Transactions
on Information Theory, vol. 64, no. 10, pp. 6604-6619, Oct. 2018.
[5] K. A. S. Immink, “Runlength-Limited Sequences,” Proceedings of the
IEEE, vol. 78, no. 11, pp. 1745-1759, Nov. 1990.
[6] A. Tandon, M. Motani, and L. R. Varshney, “Are Run-Length Limited
Codes Suitable for Simultaneous Energy and Information Transfer?”
IEEE Trans. on Green Communications and Networking, vol. 3, no. 4,
pp. 988-996, Dec. 2019.
[7] C. E. Shannon, “A Mathematical Theory of Communication,” Bell Syst. Tech. J., vol. 27, pp. 379-423, July 1948.
[8] L. J. Guibas and A. M. Odlyzko, “String overlaps, pattern matching, and nontransitive games,” Journal of Combinatorial Theory, vol. A30, pp. 183-208, March 1981.
[9] T.-Y. Wu, A. Tandon, L. R. Varshney, M. Motani, “Skip-Sliding Window
Codes,” ArXiv:1711.09494, 2018.
[10] T.-Y. Wu, A. Tandon, M. Motani, and L. R. Varshney, “On the outage-
constrained rate of skip-sliding window codes,” Proc. IEEE Inform.
Theory Workshop (ITW’19), Visby, Sweden, Aug. 2019.
[11] J. E. Hopcroft, R. Motwani, and J. D. Ullman, Introduction to Automata Theory, Languages, and Computation, Pearson New International Edition, 2013.
[12] B. H. Marcus, P. H. Siegel, and J. K. Wolf, “Finite-state Modulation
Codes for Data Storage,” IEEE Journal on Selected Areas in Commu-
nications, vol. 10, no. 1, pp. 5-37, Jan. 1992.
[13] K. A. S. Immink, P. H. Siegel, and J. K. Wolf, “Codes for Digital
Recorders,” IEEE Trans. Inform. Theory, vol. IT-44, no. 6, pp. 2260-
2299, Oct. 1998.
[14] C. Cao and I. Fair, “Minimal Sets for Capacity-Approaching Variable-
Length Constrained Sequence Codes,” IEEE Trans. on Commun., vol.
67, no. 2, pp. 890-902, Feb. 2019.
[15] C. Cao and I. Fair, “Construction of Multi-State Capacity-Approaching
Variable-Length Constrained Sequence Codes With State-Independent
Decoding,” IEEE Access, vol. 7, pp. 54746-54759, 2019.
[16] P. A. Franaszek, “Sequence-State Encoding for Digital Transmission,” Bell Syst. Tech. J., vol. 47, pp. 143-157, Jan. 1968.
[17] R. L. Adler, D. Coppersmith, and M. Hassner, “Algorithms for Sliding Block Codes. An Application of Symbolic Dynamics to Information Theory,” IEEE Trans. Inform. Theory, vol. IT-29, no. 1, pp. 5-22, Jan. 1983.