Conference Paper

Constrained Coding with Error Control for DNA-Based Data Storage


Abstract

In this paper, we first propose coding techniques for DNA-based data storage which account for the maximum homopolymer runlength and the GC-content. In particular, for arbitrary $\ell, \epsilon > 0$, we propose simple and efficient $(\epsilon, \ell)$-constrained encoders that transform binary sequences into DNA base sequences (codewords) satisfying the following properties: • Runlength constraint: the maximum homopolymer run in each codeword is at most $\ell$, • GC-content constraint: the GC-content of each codeword is within $[0.5 - \epsilon, 0.5 + \epsilon]$. For practical values of $\ell$ and $\epsilon$, our codes achieve higher rates than the existing results in the literature. We further design efficient $(\epsilon, \ell)$-constrained codes with error-correction capability. Specifically, the designed codes satisfy the runlength constraint and the GC-content constraint, and can correct a single edit (i.e., a single deletion, insertion, or substitution) and its variants. To the best of our knowledge, no such codes were constructed prior to this work.
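For concreteness, a minimal checker for the two constraints is sketched below (our illustration, not part of the paper; the function names and example strands are ours). It tests whether a strand over {A, C, G, T} is $(\epsilon, \ell)$-constrained in the above sense.

    # Sketch (ours, not from the paper): verify the runlength and
    # GC-content constraints for a DNA strand over {A, C, G, T}.

    def max_homopolymer_run(strand: str) -> int:
        """Length of the longest run of identical consecutive bases."""
        longest, current = 1, 1
        for prev, cur in zip(strand, strand[1:]):
            current = current + 1 if cur == prev else 1
            longest = max(longest, current)
        return longest

    def gc_content(strand: str) -> float:
        """Fraction of bases that are G or C."""
        return sum(base in "GC" for base in strand) / len(strand)

    def satisfies_constraints(strand: str, ell: int, eps: float) -> bool:
        """True iff the strand is (eps, ell)-constrained."""
        return (max_homopolymer_run(strand) <= ell
                and abs(gc_content(strand) - 0.5) <= eps)

    # Example: max run 3, GC-content within [0.45, 0.55].
    print(satisfies_constraints("ACGTACGGTCAT", ell=3, eps=0.05))  # True
    print(satisfies_constraints("AAAACGTGCGTA", ell=3, eps=0.05))  # False: run of 4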


... We believe that the studied model would be relevant to practical experiments where the GC-content of DNA strands is only required to be close to 50% (for example, between 45% and 55% in [5], or 40% and 60% in [8]). Via a modification of Knuth's method, we show that the number of redundant bits can be gracefully reduced from log n to O(1). Moreover, we also propose an efficient coding design that satisfies both the RLL constraint and the GC-content constraint. ...
... For DNA-based storage, we are interested in codewords that are $\ell$-runlength limited and $\epsilon$-balanced for $\ell \le 6$ and sufficiently small $\epsilon = o(1)$. ...
Article
We propose coding techniques that simultaneously limit the length of homopolymer runs, ensure the GC-content constraint, and are capable of correcting a single edit error in strands of nucleotides in DNA-based data storage systems. In particular, for given ℓ, ϵ > 0, we propose simple and efficient encoders/decoders that transform binary sequences into DNA base sequences (codewords), namely sequences of the symbols A, T, C and G, that satisfy all of the following properties: • Runlength constraint: the maximum homopolymer run in each codeword is at most ℓ, • GC-content constraint: the GC-content of each codeword is within [0.5 - ϵ, 0.5 + ϵ], • Error-correction: each codeword is capable of correcting a single deletion, insertion, or substitution error. While various combinations of these properties have been considered in the literature, this work provides generalizations of code constructions that satisfy all the properties for arbitrary parameters ℓ and ϵ. Furthermore, for practical values of ℓ and ϵ, we show that our encoders achieve higher rates than existing results in the literature and approach capacity. Our methods have low encoding/decoding complexity and limited error propagation.
... There is no good way to solve this NP-complete problem because the number of DNA codewords always decreases as constraints are added. Recently, Nguyen et al. [47] designed constrained codes with error-correction capability, which are capable of correcting a single indel/edit. Wang et al. [48] presented a novel similarity significance (SS) model to measure the similarity between DNA sequences. ...
Article
Metaheuristic algorithms have the drawback that local optimal solutions are prone to premature convergence. To overcome the disadvantages of the whale optimization algorithm, we propose an improved selective opposition whale optimization algorithm (ISOWOA) in this paper. Firstly, enhanced quasi-opposition learning (EQOBL) is applied to selectively update the position of the predator, calculate the fitness of the population before and after, and retain the optimal individuals as the food source position; secondly, an improved time-varying update strategy for the inertia weight of the predator position is proposed, and the position update of the food source is completed by this strategy. The performance of the algorithm is analyzed on 23 benchmark functions from CEC 2005 and 15 benchmark functions from CEC 2015 in various dimensions. The superior results are further confirmed by Wilcoxon's rank sum test and Friedman's nonparametric rank test. Finally, its applicability is demonstrated through applications in the field of biological computing. Our aim is to achieve access to DNA files and to design high-quality DNA code sets by ISOWOA. The experimental results show that the lower bounds of the multi-constraint storage coding sets obtained in this paper equal or surpass those of previous optimal constructions. The data show that the number of DNA storage codewords found by ISOWOA increased by 2–18%, which demonstrates the algorithm's reliability in practical optimization tasks.
... To demonstrate the motivation for our work, let us examine an example as shown in Fig. 1, in which case the DNA-XL code, as well as many other DNA codes, fails to work when multiple errors occur within one block. Recently, Nguyen et al. [19] presented a simple and efficient (ϵ, ℓ)-constrained encoder that can correct a single indel/edit error. Unfortunately, these existing DNA encoding methods still suffer from high complexity. ...
Article
Deoxyribonucleic acid (DNA)-based data storage is a promising new storage technology which has the advantages of high storage capacity and long storage time compared with traditional storage media. However, the synthesis and sequencing processes of DNA can randomly generate many types of errors, which makes it more difficult to cluster DNA sequences to recover DNA information. Currently, the available DNA clustering algorithms are targeted at DNA sequences in the biological domain, which not only cannot adapt to the characteristics of sequences in DNA storage, but also tend to be unacceptably time-consuming for the billions of DNA sequences in DNA storage. In this paper, we propose an efficient DNA clustering method termed Clover for DNA storage with linear computational complexity and low memory usage. Clover avoids the computation of the Levenshtein distance by using a tree structure for interval-specific retrieval. We argue through theoretical proofs that Clover has standard linear computational complexity, low space complexity, etc. Experiments show that our method can cluster 10 million DNA sequences into 50 000 classes in 10 s and achieve an accuracy rate of over 99%. Furthermore, we have successfully completed an unprecedented clustering of 10 billion DNA sequences on a single home computer, and the time consumption still satisfies the linear relationship. Clover is freely available at https://github.com/Guanjinqu/Clover.
Preprint
Due to its higher data density, longevity, energy efficiency, and ease of generating copies, DNA is considered a promising storage technology for satisfying future needs. However, a diverse set of errors including deletions, insertions, duplications, and substitutions may arise in DNA at different stages of data storage and retrieval. The current paper constructs error-correcting codes for simultaneously correcting short (tandem) duplications and at most $p$ edits, where a short duplication generates a copy of a substring with length $\leq 3$ and inserts the copy following the original substring, and an edit is a substitution, deletion, or insertion. Compared to the state-of-the-art codes for duplications only, the proposed codes correct up to $p$ edits (in addition to duplications) at the additional cost of roughly $8p(\log_q n)(1+o(1))$ symbols of redundancy, thus achieving the same asymptotic rate, where $q\ge 4$ is the alphabet size and $p$ is a constant. Furthermore, the time complexities of both the encoding and decoding processes are polynomial when $p$ is a constant with respect to the code length.
Conference Paper
In this work, given n, ϵ > 0, two efficient encoding (decoding) methods are presented for mapping arbitrary data to (from) n×n binary arrays in which the weight of every row and every column is within [(1/2−ϵ)n, (1/2+ϵ)n], which is referred to as the ϵ-balanced constraint. The first method combines a divide-and-conquer algorithm with a modification of Knuth's balancing technique, resulting in a redundancy of Θ(n) bits. On the other hand, for sufficiently large n, the second method uses the sequence replacement technique, which costs only one redundant bit. The latter method significantly reduces the redundancy of the best known encoder for two-dimensional p-bounded weight constrained codes from (n + 3) bits to a single bit.
Article
Due to its high data density and longevity, DNA is considered a promising medium for satisfying ever-increasing data storage needs. However, the diversity of errors that occur in DNA sequences makes efficient error-correction a challenging task. This paper aims to address simultaneously correcting two types of errors, namely, short tandem duplication and edit errors, where an edit error may be a substitution, deletion, or insertion. We focus on tandem repeats of length at most 3 and design codes for correcting an arbitrary number of duplication errors and one edit error. Because an edited symbol can be duplicated many times (as part of substrings of various lengths), a single edit can affect an unbounded substring of the retrieved word. However, we show that with appropriate preprocessing, the effect may be limited to a substring of finite length, thus making efficient error-correction possible. We construct a code for correcting the aforementioned errors and provide lower bounds for its rate. Compared to optimal codes correcting only duplication errors, numerical results show that the asymptotic cost of protecting against an additional edit is only 0.003 bits/symbol when the alphabet has size 4, an important case corresponding to data storage in DNA.
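Such duplication-correcting codes rely on a string's duplication root, the tandem-repeat-free string obtained by repeatedly removing copies of length at most 3 (known to be unique for this duplication length). A small sketch of root extraction (ours, for illustration only):

    # Sketch (ours): extract the duplication root of a strand by
    # repeatedly removing one copy of a tandem repeat xx with |x| <= 3.
    # For duplication length <= 3 the root is known to be unique.

    def root(s: str) -> str:
        changed = True
        while changed:
            changed = False
            for i in range(len(s)):
                for k in (1, 2, 3):
                    if s[i:i + k] and s[i:i + k] == s[i + k:i + 2 * k]:
                        s = s[:i + k] + s[i + 2 * k:]   # drop the copy
                        changed = True
                        break
                if changed:
                    break
        return s

    # A duplicated strand has the same root as the original:
    assert root("ACGCGTT") == root("ACGT") == "ACGT"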
Conference Paper
Full-text available
In this work, given n, p > 0, efficient encoding/decoding algorithms are presented for mapping arbitrary data to and from n×n binary arrays in which the weight of every row and every column is at most pn. Such a constraint, referred to as the p-bounded-weight constraint, is crucial for reducing the parasitic currents in crossbar resistive memory arrays, and has also been proposed for certain applications of holographic data storage. While low-complexity designs have been proposed in the literature only for the case p = 1/2, this work provides efficient coding methods that work for arbitrary values of p. The coding rate of our proposed encoder approaches the channel capacity for all p.
Preprint
Due to its high data density and longevity, DNA is considered a promising medium for satisfying ever-increasing data storage needs. However, the diversity of errors that occur in DNA sequences makes efficient error-correction a challenging task. This paper aims to address simultaneously correcting two types of errors, namely, short tandem duplication and substitution errors. We focus on tandem repeats of length at most 3 and design codes for correcting an arbitrary number of duplication errors and one substitution error. Because a substituted symbol can be duplicated many times (as part of substrings of various lengths), a single substitution can affect an unbounded substring of the retrieved word. However, we show that with appropriate preprocessing, the effect may be limited to a substring of finite length, thus making efficient error-correction possible. We construct a code for correcting the aforementioned errors and provide lower bounds for its rate. Compared to optimal codes correcting only duplication errors, numerical results show that the asymptotic cost of protecting against an additional substitution is only 0.003 bits/symbol when the alphabet has size 4, an important case corresponding to data storage in DNA.
Article
Full-text available
We describe properties and constructions of constraint-based codes for DNA-based data storage which account for the maximum repetition length and AT/GC balance. Generating functions and approximations are presented for computing the number of sequences with maximum repetition length and AT/GC balance constraint. We describe routines for translating binary runlength limited and/or balanced strings into DNA strands, and compute the efficiency of such routines. Expressions for the redundancy of codes that account for both the maximum repetition length and AT/GC balance are derived.
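For intuition, such counts can also be checked by a small dynamic program over the GC count, the last base, and the current run length; the sketch below is ours and is not the paper's generating-function method.

    # Sketch (ours): count quaternary strands of length n whose maximum
    # homopolymer run is at most m and whose GC-content lies in
    # [0.5 - eps, 0.5 + eps], by dynamic programming.
    from math import ceil, floor, log2

    def count_constrained(n: int, m: int, eps: float) -> int:
        BASES = range(4)            # 0=A, 1=T, 2=C, 3=G
        IS_GC = (False, False, True, True)
        # dp[(last, run, gc)] = number of strands with that end state
        dp = {(b, 1, int(IS_GC[b])): 1 for b in BASES}
        for _ in range(n - 1):
            nxt = {}
            for (last, run, gc), cnt in dp.items():
                for b in BASES:
                    nrun = run + 1 if b == last else 1
                    if nrun > m:
                        continue           # would violate the run constraint
                    key = (b, nrun, gc + int(IS_GC[b]))
                    nxt[key] = nxt.get(key, 0) + cnt
            dp = nxt
        lo, hi = ceil((0.5 - eps) * n), floor((0.5 + eps) * n)
        return sum(c for (b, r, gc), c in dp.items() if lo <= gc <= hi)

    # E.g. strands of length 12, max run 3, GC-content in [0.4, 0.6]:
    total = count_constrained(12, 3, 0.1)
    print(total, "strands;", log2(total) / 12, "bits/nucleotide")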
Conference Paper
Full-text available
An indel refers to a single insertion or deletion, while an edit refers to a single insertion, deletion or substitution. We investigate codes that combat either a single indel or a single edit and provide linear-time algorithms that encode binary messages into these codes of length n. Over the quaternary alphabet, we provide two linear-time encoders. One corrects a single edit with 2 log n + 2 redundant bits, while the other corrects a single indel with log n + 2 redundant bits. The latter encoder reduces the redundancy of the best known encoder of Tenengolts (1984) by at least four bits. Over the DNA alphabet, exactly half of the symbols of a GC-balanced word are either C or G. Via a modification of Knuth’s balancing technique, we provide a linear-time map that translates binary messages into GC-balanced codewords and the resulting codebook is able to correct a single edit. The redundancy of our encoder is 3 log n + 2 bits and this is the first known construction of a GC-balanced code that corrects a single edit.
Article
Full-text available
We analyze codes for DNA-based data storage which account for the maximum homopolymer repetition length and GC-AT balance. We present a new precoding method for translating words with a maximum run of k zeros into words with a maximum homopolymer run m = k + 1, which is attractive for securing GC-AT balance. Generating functions are presented for enumerating the number of n-symbol k-constrained codewords of given GC-AT balance. Various efficient constructions are presented of block codes that satisfy a combined balance and maximum homopolymer run constraint.
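The binary analogue of such a precoder is the classical NRZI map: a 1 toggles the current symbol and a 0 repeats it, so a maximum run of k zeros turns into a maximum homopolymer run of k + 1. A minimal sketch (ours; the paper works over the DNA alphabet):

    # Sketch (ours): NRZI-style precoding over the binary alphabet.
    # Input: a word whose longest run of 0s has length <= k.
    # Output: a word whose longest homopolymer run has length <= k + 1,
    # since each 1 forces a symbol change and each 0 repeats the symbol.

    def precode(bits: list[int], start: int = 0) -> list[int]:
        out, symbol = [], start
        for b in bits:
            if b == 1:
                symbol ^= 1        # a 1 toggles the current symbol
            out.append(symbol)     # a 0 leaves it unchanged
        return out

    def unprecode(word: list[int], start: int = 0) -> list[int]:
        bits, prev = [], start
        for s in word:
            bits.append(s ^ prev)  # change of symbol <=> original bit was 1
            prev = s
        return bits

    x = [1, 0, 0, 1, 1, 0, 1]      # longest 0-run: 2
    y = precode(x)                 # longest homopolymer run: at most 3
    assert unprecode(y) == x
    print(y)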
Article
Full-text available
We propose a coding method to transform binary sequences into DNA base sequences (codewords), namely sequences of the symbols A, T, C and G, that satisfy the following two properties: • Run-length constraint: the maximum run-length of each symbol in each codeword is at most three; • GC-content constraint: the GC-content of each codeword is close to 0.5, say between 0.4 and 0.6. The proposed coding scheme is motivated by the problem of designing codes for DNA-based data storage systems, where binary digital data is stored in synthetic DNA base sequences. Existing schemes either achieve code rates not greater than 1.78 bits per nucleotide or lead to severe error propagation. Our method achieves a rate of 1.9 bits per DNA base with low encoding/decoding complexity and limited error propagation.
Article
Full-text available
Owing to its longevity and enormous information density, DNA, the molecule encoding biological information, has emerged as a promising archival storage medium. However, due to technological constraints, data can only be written onto many short DNA molecules that are stored in an unordered way, and can only be read by sampling from this DNA pool. Moreover, imperfections in writing (synthesis), reading (sequencing), storage, and handling of the DNA, in particular amplification via PCR, lead to a loss of DNA molecules and induce errors within the molecules. In order to design DNA storage systems, a qualitative and quantitative understanding of the errors and the loss of molecules is crucial. In this paper, we characterize those error probabilities by analyzing data from our own experiments as well as from experiments of two different groups. We find that errors within molecules are mainly due to synthesis and sequencing, while imperfections in handling and storage lead to a significant loss of sequences. The aim of our study is to help guide the design of future DNA data storage systems by providing a quantitative and qualitative understanding of the DNA data storage channel.
Article
Full-text available
Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large-scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data), in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.
Article
Full-text available
We consider coding techniques that limit the lengths of homopolymer runs in strands of nucleotides used in DNA-based mass data storage systems. We compute the maximum number of user bits that can be stored per nucleotide when a maximum homopolymer runlength constraint is imposed. We describe simple and efficient implementations of coding techniques that avoid the occurrence of long homopolymers, and the rates of the constructed codes are close to the theoretical maximum. The proposed sequence replacement method for k-constrained q-ary data yields a significant improvement in coding redundancy over the prior art sequence replacement method for k-constrained binary data. Using a simple transformation, standard binary maximum-runlength-limited sequences can be transformed into maximum-runlength-limited q-ary sequences, which opens the door to applying the vast prior art of binary code constructions to DNA-based storage.
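That theoretical maximum is the base-2 logarithm of the growth rate of the number of run-limited strands. Under the standard transfer-matrix argument (our sketch, not code from the paper), it is the largest root of $x^{\ell} = (q-1)(x^{\ell-1} + \cdots + x + 1)$ with $q = 4$:

    # Sketch (ours): capacity, in bits per nucleotide, of quaternary
    # strands whose maximum homopolymer run is at most ell. Counting
    # strands by the length of the final run gives the recurrence
    #   a(n) = (q - 1) * (a(n-1) + ... + a(n-ell)),   q = 4,
    # so the capacity is log2 of the largest root of
    #   x^ell - (q - 1) * (x^(ell-1) + ... + x + 1) = 0.
    from math import log2
    import numpy as np

    def rll_capacity(ell: int, q: int = 4) -> float:
        coeffs = [1.0] + [-(q - 1)] * ell   # x^ell - (q-1)x^(ell-1) - ... - (q-1)
        lam = max(abs(r) for r in np.roots(coeffs))
        return log2(lam)

    for ell in (1, 2, 3, 4):
        print(ell, round(rll_capacity(ell), 4))
    # ell = 3 gives about 1.98 bits/nucleotide, so the 1.9 bits per base
    # of the scheme above is already close to this ceiling.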
Article
Full-text available
DNA-based data storage is an emerging nonvolatile memory technology of potentially unprecedented density, durability, and replication efficiency. The basic system implementation steps include synthesizing DNA strings that contain user information and subsequently retrieving them via high-throughput sequencing technologies. Existing architectures enable reading and writing but do not offer random-access and error-free data recovery from low-cost, portable devices, which is crucial for making the storage technology competitive with classical recorders. Here we show for the first time that a portable, random-access platform may be implemented in practice using nanopore sequencers. The novelty of our approach is to design an integrated processing pipeline that encodes data to avoid costly synthesis and sequencing errors, enables random access through addressing, and leverages efficient portable sequencing via new iterative alignment and deletion error-correcting codes. Our work represents the only known random access DNA-based data storage system that uses error-prone nanopore sequencers, while still producing error-free readouts with the highest reported information rate/density. As such, it represents a crucial step towards practical employment of DNA molecules as storage media.
Article
Full-text available
This paper studies codes that correct a burst of deletions or insertions. Namely, a code will be called a b-burst-deletion/insertion-correcting code if it can correct a burst of deletions/insertions of any b consecutive bits. While the lower bound on the redundancy of such codes was shown by Levenshtein to be asymptotically log(n) + b − 1, the redundancy of the best code construction by Cheng et al. is b(log(n/b + 1)). In this paper, we close this gap and provide codes with redundancy at most log(n) + (b − 1) log(log(n)) + b − log(b). We first show that the models of insertions and deletions are equivalent and thus it is enough to study codes correcting a burst of deletions. We then derive a non-asymptotic upper bound on the size of b-burst-deletion-correcting codes and extend the burst deletion model to two more cases: 1) a deletion burst of at most b consecutive bits and 2) a deletion burst of size at most b (not necessarily consecutive). We extend our code construction for the first case and study the second case for b = 3, 4.
Article
Full-text available
Background: DNA sequencing technologies deviate from the ideal uniform distribution of reads. These biases impair scientific and medical applications. Accordingly, we have developed computational methods for discovering, describing and measuring bias. Results: We applied these methods to the Illumina, Ion Torrent, Pacific Biosciences and Complete Genomics sequencing platforms, using data from human and from a set of microbes with diverse base compositions. As in previous work, library construction conditions significantly influence sequencing bias. Pacific Biosciences coverage levels are the least biased, followed by Illumina, although all technologies exhibit error-rate biases in high- and low-GC regions and at long homopolymer runs. The GC-rich regions prone to low coverage include a number of human promoters, and we therefore catalog 1,000 that were exceptionally resistant to sequencing. Our results indicate that combining data from two technologies can reduce coverage bias if the biases in the component technologies are complementary and of similar magnitude. Analysis of Illumina data representing 120-fold coverage of a well-studied human sample reveals that 0.20% of the autosomal genome was covered at less than 10% of the genome-wide average. Excluding locations that were similar to known bias motifs or likely due to sample-reference variations left only 0.045% of the autosomal genome with unexplained poor coverage. Conclusions: The assays presented in this paper provide a comprehensive view of sequencing bias, which can be used to drive laboratory improvements and to monitor production processes. Development guided by these assays should result in improved genome assemblies and better coverage of biologically important loci.
Article
Full-text available
Digital production, transmission and storage have revolutionized how we access and use information but have also made archiving an increasingly complex task that requires active, continuing maintenance of digital media. This challenge has focused some interest on DNA as an attractive target for information storage because of its capacity for high-density information encoding, longevity under easily achieved conditions and proven track record as an information bearer. Previous DNA-based information storage approaches have encoded only trivial amounts of information or were not amenable to scaling-up, and used no robust error-correction and lacked examination of their cost-efficiency for large-scale information archival. Here we describe a scalable method that can reliably store more information than has been handled before. We encoded computer files totalling 739 kilobytes of hard-disk storage and with an estimated Shannon information of 5.2 × 10⁶ bits into a DNA code, synthesized this DNA, sequenced it and reconstructed the original files with 100% accuracy. Theoretical analysis indicates that our DNA-based storage scheme could be scaled far beyond current global information volumes and offers a realistic technology for large-scale, long-term and infrequently accessed digital archiving. In fact, current trends in technological advances are reducing DNA synthesis costs at a pace that should make our scheme cost-effective for sub-50-year archiving within a decade.
Article
Full-text available
Digital information is accumulating at an astounding rate, straining our ability to store and archive it. DNA is among the most dense and stable information media known. The development of new technologies in both DNA synthesis and sequencing make DNA an increasingly feasible digital storage medium. We developed a strategy to encode arbitrary digital information in DNA, wrote a 5.27-megabit book using DNA microchips, and read the book by using next-generation DNA sequencing.
Article
An indel refers to a single insertion or deletion, while an edit refers to a single insertion, deletion or substitution. In this article, we investigate codes that correct either a single indel or a single edit and provide linear-time algorithms that encode binary messages into these codes of length n. Over the quaternary alphabet, we provide two linear-time encoders. One corrects a single edit with $\lceil \log n \rceil + O(\log \log n)$ redundancy bits, while the other corrects a single indel with $\lceil \log n \rceil + 2$ redundant bits. These two encoders are order-optimal. The former encoder is the first known order-optimal encoder that corrects a single edit, while the latter encoder (that corrects a single indel) reduces the redundancy of the best known encoder of Tenengolts (1984) by at least four bits. Over the DNA alphabet, we impose an additional constraint: the GC-balanced constraint, which requires that exactly half of the symbols of any DNA codeword be either C or G. In particular, via a modification of Knuth’s balancing technique, we provide a linear-time map that translates binary messages into GC-balanced codewords such that the resulting codebook is able to correct a single indel or a single edit. These are the first known constructions of GC-balanced codes that correct a single indel or a single edit.
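Single-indel encoders in this line of work build on the classical Varshamov–Tenengolts (VT) codes, in which a length-n word x satisfies $\sum_{i=1}^{n} i\,x_i \equiv a \pmod{n+1}$. The sketch below (ours) shows how that syndrome pins down a single deleted bit:

    # Sketch (ours): correcting one deletion with a binary VT code.
    # A word x of length n lies in VT_a(n) if sum(i * x_i) = a (mod n+1),
    # indices starting at 1. Given a received word y of length n - 1,
    # the deficiency D = (a - syndrome(y)) mod (n+1) equals
    # p*b + (# of 1s at positions >= p in y), where the deleted bit b
    # was at position p -- enough to re-insert it.

    def syndrome(word: list[int]) -> int:
        return sum(i * bit for i, bit in enumerate(word, start=1))

    def vt_decode_deletion(y: list[int], n: int, a: int) -> list[int]:
        D = (a - syndrome(y)) % (n + 1)
        w = sum(y)                       # weight of the received word
        if D <= w:
            # Deleted bit was 0: insert it with exactly D ones to its right.
            ones = 0
            for p in range(len(y), -1, -1):
                if ones == D:
                    return y[:p] + [0] + y[p:]
                ones += y[p - 1]
        else:
            # Deleted bit was 1: insert it with D - w - 1 zeros to its left.
            zeros = 0
            for p in range(len(y) + 1):
                if zeros == D - w - 1:
                    return y[:p] + [1] + y[p:]
                zeros += 1 - y[p]

    # Round-trip test: delete each bit of a VT_0(7) codeword and recover it.
    x = [1, 1, 0, 0, 1, 0, 0]            # syndrome 1 + 2 + 5 = 8 = 0 mod 8
    for i in range(len(x)):
        assert vt_decode_deletion(x[:i] + x[i + 1:], n=7, a=0) == x
    print("all deletions corrected")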
Conference Paper
The subblock energy-constrained codes (SECCs) have recently attracted attention due to various applications in communication systems such as simultaneous energy and information transfer. In a SECC, each codeword is divided into smaller subblocks, and every subblock is constrained to carry sufficient energy. In this work, we study SECCs under more general constraints, namely bounded SECCs and sliding-window constrained codes (SWCCs), and propose two methods to construct such codes with low redundancy and linear-time complexity, based on Knuth’s balancing technique and the sequence replacement technique. For certain code parameters, our methods incur only one redundant bit.
Article
Motivated by applications in DNA-based storage, we introduce the new problem of code design in the Damerau metric. The Damerau metric is a generalization of the Levenshtein distance which, in addition to deletions, insertions and substitution errors also accounts for adjacent transposition edits. We first provide constructions for codes that may correct either a single deletion or a single adjacent transposition and then proceed to extend these results to codes that can simultaneously correct a single deletion and multiple adjacent transpositions. We conclude with constructions for joint block deletion and adjacent block transposition error-correcting codes.
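For illustration, the metric itself can be computed by a small dynamic program; the sketch below (ours) implements the restricted optimal-string-alignment variant of the Damerau distance, which counts deletions, insertions, substitutions, and adjacent transpositions:

    # Sketch (ours): restricted Damerau distance (optimal string
    # alignment), counting deletions, insertions, substitutions and
    # adjacent transpositions between two strands.

    def damerau_osa(a: str, b: str) -> int:
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution / match
                if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                        and a[i - 2] == b[j - 1]):
                    d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
        return d[m][n]

    print(damerau_osa("ACGT", "AGCT"))   # 1: one adjacent transposition
    print(damerau_osa("ACGT", "ACT"))    # 1: one deletion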
Article
DNA is an attractive medium to store digital information. Here we report a storage strategy, called DNA Fountain, that is highly robust and approaches the information capacity per nucleotide. Using our approach, we stored a full computer operating system, movie, and other files with a total of 2.14 × 10⁶ bytes in DNA oligonucleotides and perfectly retrieved the information from a sequencing coverage equivalent to a single tile of Illumina sequencing. We also tested a process that can allow 2.18 × 10¹⁵ retrievals using the original DNA sample and were able to perfectly decode the data. Finally, we explored the limit of our architecture in terms of bytes per molecule and obtained a perfect retrieval from a density of 215 petabytes per gram of DNA, orders of magnitude higher than previous reports.
Article
In this article, we study properties and algorithms for constructing sets of 'constant weight' codewords with bipolar symbols, where the sum of the symbols is a constant q, q ≠ 0, that is, the codewords form a subset of $\{ w = (w_1, w_2, \ldots, w_n) \in \{-1, +1\}^n : \sum_{i=1}^{n} w_i = q \}$. We show various code constructions that extend Knuth's balancing vector scheme, q = 0, to the case where q > 0. We compute the redundancy of the new coding methods.
Article
The sequence replacement technique converts an input sequence into a constrained sequence in which a prescribed subsequence is forbidden to occur. Several coding algorithms are presented that use this technique for the construction of maximum run-length limited sequences. The proposed algorithms show how all forbidden subsequences can be successively or iteratively removed to obtain a constrained sequence and how special subsequences can be inserted at predefined positions in the constrained sequence to represent the indices of the positions where the forbidden subsequences were removed. Several modifications are presented to reduce the impact of transmission errors on the decoding operation, and schemes to provide error control are discussed as well. The proposed algorithms can be implemented efficiently, and the rates of the constructed codes are close to their theoretical maximum. As such, the proposed algorithms are of interest for storage systems and data networks.
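A toy version of the core idea (ours) removes forbidden runs and records their positions; the actual technique described above instead encodes these pointers into the constrained sequence itself at predefined positions, so the codeword length stays fixed:

    # Sketch (ours): the remove-and-point idea behind the sequence
    # replacement technique, for the forbidden pattern 0^k. For clarity
    # the positions are kept in a side list; the real technique encodes
    # them into the constrained sequence itself at predefined positions.

    def remove_forbidden(bits: list[int], k: int):
        word, pointers = list(bits), []
        pattern = [0] * k
        while True:
            # leftmost occurrence of k consecutive zeros, if any
            pos = next((i for i in range(len(word) - k + 1)
                        if word[i:i + k] == pattern), None)
            if pos is None:
                return word, pointers
            del word[pos:pos + k]        # strictly shortens the word
            pointers.append(pos)

    def reinsert(word: list[int], pointers: list[int], k: int) -> list[int]:
        bits = list(word)
        for pos in reversed(pointers):   # undo replacements in LIFO order
            bits[pos:pos] = [0] * k
        return bits

    data = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1]
    word, ptrs = remove_forbidden(data, k=4)
    assert reinsert(word, ptrs, k=4) == data
    print(word, ptrs)   # no run of four zeros remains in word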
Article
Coding schemes in which each codeword contains equally many zeros and ones are constructed in such a way that they can be efficiently encoded and decoded.
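Knuth's key observation is that complementing the first i bits of a word changes its weight by ±1 as i grows from 0 to n, so the weight must pass through n/2 at some prefix length i; transmitting i (about log2(n) redundant bits) makes the map invertible. A minimal sketch (ours):

    # Sketch (ours): Knuth's balancing argument. Flipping the first i
    # bits moves the weight from w to n - w in steps of +/-1, so some
    # prefix length i yields weight exactly n/2 when n is even.

    def balance(bits: list[int]) -> tuple[list[int], int]:
        n = len(bits)                    # n must be even
        word = list(bits)
        for i in range(n + 1):
            if sum(word) == n // 2:
                return word, i           # balanced; first i bits flipped
            if i < n:
                word[i] ^= 1             # flip the next prefix bit
        raise AssertionError("unreachable for even n")

    def unbalance(word: list[int], i: int) -> list[int]:
        return [b ^ 1 for b in word[:i]] + list(word[i:])

    x = [1, 1, 1, 1, 0, 1]               # weight 5 out of 6
    y, i = balance(x)
    assert sum(y) == 3 and unbalance(y, i) == x
    print(y, i)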
Article
Two factors are mainly responsible for the stability of the DNA double helix: base pairing between complementary strands and stacking between adjacent bases. By studying DNA molecules with solitary nicks and gaps we measure the temperature and salt dependence of the stacking free energy of the DNA double helix. For the first time, DNA stacking parameters are obtained directly (without extrapolation) for temperatures from below room temperature to close to the melting temperature. We also obtain DNA stacking parameters for different salt concentrations ranging from 15 to 100 mM Na+. From the stacking parameters of individual contacts, we calculate the base-stacking contribution to the stability of A•T- and G•C-containing DNA polymers. We find that the temperature and salt dependences of the stacking term fully determine the temperature and salt dependence of DNA stability parameters. For all temperatures and salt concentrations employed in the present study, base-stacking is the main stabilizing factor in the DNA double helix. A•T pairing is always destabilizing and G•C pairing contributes almost no stabilization. The base-stacking interaction dominates not only the overall duplex stability but also contributes significantly to the dependence of duplex stability on sequence.
Article
In many digital communications systems, bursts of insertions or deletions are typical errors. A new class of nonbinary codes is proposed that correct a single deletion or insertion. Asymptotically, the cardinality of these codes is close to optimal. The codes can be easily implemented.
Article
For n > 0, d ⩾ 0, n ≡ d (mod 2), let K(n, d) denote the minimal cardinality of a family V of ±1 vectors of dimension n, such that for any ±1 vector w of dimension n there is a v ∈ V with |v·w| ⩽ d, where v·w is the usual scalar product of v and w. A generalization of a simple construction due to D.E. Knuth (1986) shows that K(n, d) ⩽ ⌈n/(d+1)⌉. A linear algebra proof is given here that this construction is optimal, so that K(n, d) = ⌈n/(d+1)⌉ for all n ≡ d (mod 2). This construction and its extensions have applications to communication theory, especially to the construction of signal sets for optical data links.
Portable and error-free DNA-based data storage
  • S Yazdi
  • R Gabrys
  • O Milenkovic
DNA Codes with Run-Length Limitation and Knuth-Like Balancing of the GC Contents
  • D Dubé
  • W Song
  • K Cai