Breadth-first trellis decoding with adaptive effort

Dept. of Electr. Eng., Queen's Univ., Kingston, Ont.
IEEE Transactions on Communications (Impact Factor: 1.98). 02/1990; DOI: 10.1109/26.46522
Source: IEEE Xplore

ABSTRACT A breadth-first trellis decoding algorithm is introduced for
application to sequence estimation in digital data transmission. Its
high degree of inherent parallelism makes a parallel-processing
implementation attractive. The algorithm is shown to exhibit an
error-rate versus average-computational-complexity trade-off that is
much superior to that of the Viterbi algorithm and that also improves on
the M-algorithm. The decoder maintains a variable number of paths, so
its computation adapts to the channel noise actually encountered;
buffering of received samples is required to support this. Bounds,
evaluated by trellis search, are produced for the error-event rate and
the average number of survivors. Performance is evaluated with
conventional binary convolutional codes over both binary-symmetric-channel
(BSC) and additive-white-Gaussian-noise (AWGN) channels. Performance is
also found for multilevel AM and phase-shift-keying (PSK) codes and for
simple intersymbol-interference responses over an AWGN channel. At lower
signal-to-noise ratios, Monte Carlo simulations are used to improve on
the bounds and to investigate decoder dynamics.
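The adaptive-effort idea can be sketched in a few lines of Python. This is a minimal illustration only, not the paper's exact algorithm: the pruning rule (keep every path whose metric is within a fixed threshold of the current best) and all identifiers are ours, and the example uses the standard rate-1/2, constraint-length-3 convolutional code (octal generators 7, 5) with a Hamming metric for a BSC.

```python
# Illustrative adaptive breadth-first decoder for the rate-1/2,
# constraint-length-3 convolutional code (octal generators 7, 5).
# The pruning rule below is a stand-in for the paper's adaptive-effort
# idea, not its exact rule.

def conv_encode(bits, K=3):
    """Encode and append K-1 tail zeros to terminate the trellis."""
    state, out = 0, []
    for b in bits + [0] * (K - 1):
        s1, s0 = (state >> 1) & 1, state & 1
        out += [b ^ s1 ^ s0, b ^ s0]      # generators 111 and 101
        state = (b << 1) | s1
    return out

def adaptive_decode(rx, n_info, threshold=2):
    """Breadth-first search keeping a *variable* number of survivors:
    after each stage, paths whose Hamming metric exceeds the current
    best by more than `threshold` are dropped, so the survivor count
    grows in noisy stretches and shrinks in clean ones."""
    survivors = [(0, 0, [])]              # (metric, state, decoded bits)
    for t in range(0, len(rx), 2):
        r = rx[t:t + 2]
        extended = {}
        for m, s, path in survivors:
            s1, s0 = (s >> 1) & 1, s & 1
            for b in (0, 1):
                o = (b ^ s1 ^ s0, b ^ s0)
                nm = m + (o[0] != r[0]) + (o[1] != r[1])
                ns = (b << 1) | s1
                # merge paths reaching the same state, keeping the best
                if ns not in extended or nm < extended[ns][0]:
                    extended[ns] = (nm, path + [b])
        best = min(nm for nm, _ in extended.values())
        survivors = [(nm, ns, p) for ns, (nm, p) in extended.items()
                     if nm <= best + threshold]
    return min(survivors)[2][:n_info]     # best path; drop tail bits
```

On a quiet channel the survivor list collapses toward a single path; a noise burst temporarily widens it. That variable workload is exactly why the abstract notes that buffering of received samples is required.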

  • ABSTRACT: Convolutional encoding is a forward error correction technique used to correct errors at the receiver. The two main algorithms for decoding convolutional codes are the Viterbi algorithm and sequential decoding. Sequential decoding has the advantage that it performs well with long constraint lengths; Viterbi decoding is the best technique for decoding convolutional codes but is limited to smaller constraint lengths. It has been widely deployed in many wireless communication systems to improve the limited capacity of the communication channels. The availability of wireless technology has revolutionized the way communication is done today, and with this increased availability comes increased dependence on the underlying systems to transmit information both quickly and accurately. Because the communication channels in wireless systems can be much more hostile than in wired systems, voice and data must use forward-error-correction coding to reduce the probability of channel effects corrupting the information being transmitted; coding with Viterbi decoding can achieve a level of performance that comes closer to theoretical bounds than more conventional coding systems. The Viterbi algorithm, an application of dynamic programming, is widely used for estimation and detection problems in digital communications and signal processing: it is used to detect signals in communication channels with memory and to decode the sequential error-control codes that enhance the performance of digital communication systems.
    Although a Viterbi decoder can be realized on various platforms, including field-programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and digital signal processing (DSP) chips, this project describes the benefits of using an FPGA to implement the Viterbi decoding algorithm. FPGAs give the designer the flexibility of a programmable solution with the performance of a custom solution at lower overall cost. The advantages of the FPGA approach to DSP implementation include higher sampling rates than are available from traditional DSP chips and lower costs than an ASIC. The FPGA also adds design flexibility and adaptability, and its optimal device utilization conserves both board space and system power, which is often not the case with DSP chips.
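The Viterbi recursion that the abstract above describes can be sketched in software (this is an algorithmic sketch with identifiers of our choosing, not the FPGA design the paper presents). It uses the common rate-1/2, constraint-length-3 code with octal generators 7 and 5 and a hard-decision Hamming metric:

```python
# Hard-decision Viterbi decoder for the rate-1/2, constraint-length-3
# convolutional code (octal generators 7, 5). Names are illustrative.

def conv_encode(bits, K=3):
    """Encode and append K-1 tail zeros to terminate in state 0."""
    state, out = 0, []
    for b in bits + [0] * (K - 1):
        s1, s0 = (state >> 1) & 1, state & 1
        out += [b ^ s1 ^ s0, b ^ s0]          # generators 111 and 101
        state = (b << 1) | s1
    return out

def viterbi_decode(rx, n_info, K=3):
    """Fixed-effort Viterbi: each of the 2**(K-1) states keeps exactly
    one survivor at every stage (dynamic programming over the trellis)."""
    n_states = 1 << (K - 1)
    INF = float("inf")
    metrics = [0.0] + [INF] * (n_states - 1)  # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(0, len(rx), 2):
        r = rx[t:t + 2]
        new_metrics = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metrics[s] == INF:
                continue
            s1, s0 = (s >> 1) & 1, s & 1
            for b in (0, 1):
                o = (b ^ s1 ^ s0, b ^ s0)
                m = metrics[s] + (o[0] != r[0]) + (o[1] != r[1])
                ns = (b << 1) | s1
                if m < new_metrics[ns]:       # add-compare-select step
                    new_metrics[ns], new_paths[ns] = m, paths[s] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[0][:n_info]                  # terminated in state 0
```

The inner add-compare-select loop over all states is what an FPGA implementation parallelizes, and its exponential growth in the constraint length K is why Viterbi decoding is limited to small K.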
  • ABSTRACT: In this paper, we propose an efficient architecture based on pre-computation for Viterbi decoders incorporating the T-algorithm. Through optimization at both the algorithm level and the architecture level, the new architecture greatly shortens the long critical path introduced by the conventional T-algorithm. The design example provided in this work demonstrates more than a twofold improvement in clock speed with negligible computation overhead while maintaining decoding performance.
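The threshold test that this pre-computation targets can be illustrated as follows. This is a sketch of the idea only, under our own naming and simplifications; the cited paper's actual contribution lies in algorithm- and architecture-level (RTL) optimizations that the sketch does not capture:

```python
# Sketch of T-algorithm pruning and the pre-computation idea.
# Function and variable names are ours, for illustration only.

def t_prune(metrics, T):
    """Conventional T-algorithm: search all state metrics for the best,
    then purge states worse than best + T. The global minimum search
    sits on the clock-critical path of a hardware decoder."""
    best = min(metrics.values())
    return {s: m for s, m in metrics.items() if m <= best + T}

def t_prune_precomputed(metrics, prev_best, min_branch, T):
    """Pre-computation variant: estimate this stage's best metric as
    prev_best + min_branch, quantities available a cycle earlier, so
    the threshold comparison no longer waits on the minimum search.
    The estimate is a lower bound on the true best, so pruning can
    only become slightly more aggressive."""
    return {s: m for s, m in metrics.items()
            if m <= prev_best + min_branch + T}
```

Because the estimated threshold never exceeds the exact one, the pre-computed rule keeps a subset of the conventional survivors, which is consistent with the abstract's claim of negligible overhead while maintaining decoding performance.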
  • ABSTRACT: We consider coding schemes over an independent and identically distributed (i.i.d.) insertion/deletion channel with inter-symbol interference (ISI). The idea is based on a serial concatenation of a low-density parity-check (LDPC) code with a marker code. First, we design a maximum-a-posteriori (MAP) detector operating at the bit level which jointly achieves synchronization for the insertion/deletion channel (with the help of the marker code) and equalization for the ISI channel. Utilizing this MAP detector together with an LDPC code with powerful error-correction capabilities, we demonstrate that reliable transmission over this channel is feasible. Then, we consider low-complexity channel-detection algorithms needed for proper synchronization/equalization. Specifically, we use separate synchronization and equalization algorithms instead of joint detection and also explore the performance of the M- and T-algorithms implemented as low-complexity soft-output channel detectors. Such schemes greatly reduce the amount of computation needed at the cost of some performance loss, as illustrated via a set of simulation results.
    2012 IEEE International Conference on Communications (ICC); 01/2012