In this paper we introduce a new type of residue number system (RNS), the floating-point RNS, which can significantly increase the range of numbers representable in RNS. The concepts of RNS floating-point arithmetic are presented, and approaches to performing these arithmetic operations are proposed. The introduced methods are simple, efficient, and easy to implement.
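For context, a minimal sketch of plain integer RNS arithmetic, the system the paper extends to floating point. The moduli set is an arbitrary assumption; any pairwise-coprime set works.

```python
# Minimal integer RNS sketch: forward conversion plus carry-free,
# digit-parallel addition and multiplication. Moduli are an assumption.
from math import prod

MODULI = (7, 11, 13)
M = prod(MODULI)  # dynamic range: integers 0..M-1 (here 0..1000)

def to_rns(x):
    """Forward conversion: residues of x modulo each base."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Carry-free addition: each residue channel is independent."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Carry-free multiplication, likewise digit-parallel."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))
```

Because each residue channel is independent, additions and multiplications need no carries between digits, which is the property the floating-point extension builds on.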


... Jen-shiun et al. [9] and Omondi [52] proposed number comparison methods for residue numbers based on parity bits. However, these parity comparison methods require that all moduli be odd (in addition to being pairwise relatively prime). ...

... Chiang et al. [9] provide RNS algorithms for comparison and overflow detection but assume all bases to be odd and do not consider error correction. Similarly, Preethy et al. [57,58] integrate index-sum multiplication into RNS but do not consider its impact on the properties of RRNS bases critical to CREEPY. ...

Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, it is possible to correct the occasional errors caused by lower Vdd in an efficient manner and effectively lower power. By deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the energy savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error-correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling computational error correction at a fraction of the overhead of conventional approaches. We use the RRNS scheme to design a Computationally-Redundant, Energy-Efficient core, including the microarchitecture, Instruction Set Architecture (ISA) and RRNS-centered algorithms. From the simulation results, this RRNS system can reduce the energy-delay product by about 3× for multiplication-intensive workloads and by about 2× in general, when compared to a non-error-correcting binary core.
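The closure property can be made concrete with the (4, 2)-RRNS base set that appears in the paper's example, (3, 5, 2, 7, 11, 13): four information moduli plus two redundant ones. The decoder below is a hedged sketch that finds a single errant residue by exhaustive consistency checking; the paper's actual correction algorithm is far more efficient.

```python
# (4, 2)-RRNS sketch: values live in the legitimate range [0, 210);
# the two redundant residues let us detect and correct one bad residue.
# Correction here is brute-force consistency checking, for illustration only.
from math import prod

BASES = (3, 5, 2, 7, 11, 13)      # 4 information + 2 redundant moduli
M_INFO = prod(BASES[:4])          # legitimate range [0, 210)

def to_rrns(x):
    return tuple(x % m for m in BASES)

def crt(residues, moduli):
    """Chinese Remainder Theorem reconstruction over the given moduli."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def correct_single_error(residues):
    """Return the represented value, repairing at most one errant residue."""
    v = crt(residues, BASES)
    if v < M_INFO:
        return v                   # consistent: no error detected
    for i, m in enumerate(BASES):  # try every single-residue repair
        for r in range(m):
            cand = residues[:i] + (r,) + residues[i + 1:]
            v = crt(cand, BASES)
            if v < M_INFO:
                return v
    raise ValueError("uncorrectable")
```

Because codewords of a (4, 2)-RRNS differ in at least three residues, the single repaired candidate inside the legitimate range is unique, so the brute-force search cannot return a wrong value.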

... While simple to understand and implement, this introduces an overhead of 200% in area and power, which leaves plenty of room for improvement. [Table I: a (4, 2)-RRNS example with the simplified base set (3, 5, 2, 7, 11, 13).] ...

... By utilizing RRNS, CREEPY benefits from these, but improves upon generality and energy-efficiency. Chiang et al. [3] provide RNS algorithms for comparison and overflow detection, but assume all bases to be odd and do not consider error correction. ...

Dennard scaling has ended. Lowering the voltage supply (Vdd) to sub-volt levels causes intermittent losses in signal integrity, rendering further scaling (down) no longer acceptable as a means to lower the power required by a processor core. However, if it were possible to recover the occasional losses due to lower Vdd in an efficient manner, one could effectively lower power. In other words, by deploying the right amount and kind of redundancy, we can strike a balance between the overhead incurred in achieving reliability and the savings realized by permitting lower Vdd. One promising approach is the Redundant Residue Number System (RRNS) representation. Unlike other error-correcting codes, RRNS has the important property of being closed under addition, subtraction and multiplication, thus enabling correction of errors caused by both faulty storage and faulty compute units, at a fraction of the overhead of the conventional technique used for compute reliability. In this article, we provide an overview of the architecture of a CREEPY core that leverages this property of RRNS and discuss associated algorithms such as error detection/correction, arithmetic overflow detection and signed number representation. Finally, we demonstrate the usability of such a computer by quantifying a performance-reliability trade-off and provide a lower-bound measure of tolerable input signal energy at a gate while still maintaining reliability.
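One of the associated algorithms mentioned, signed number representation, is commonly handled in RNS by interpreting the upper half of the range [0, M) as negative values, analogous to two's complement. A minimal sketch with an assumed toy moduli set (decoding goes through the CRT here purely for illustration):

```python
# Signed numbers in RNS: values in [0, M//2) are non-negative,
# values in [M//2, M) represent negatives. Toy moduli set assumed.
from math import prod

MODULI = (3, 5, 7)      # M = 105, symmetric range roughly [-52, 52]
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)  # Python's % already handles x < 0

def crt(residues):
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def rns_sub(a, b):
    """Residue-wise subtraction; negative results wrap into the upper half."""
    return tuple((ai - bi) % m for ai, bi, m in zip(a, b, MODULI))

def to_signed(residues):
    """Decode into the symmetric range [-M//2, M//2)."""
    v = crt(residues)
    return v - M if v >= M // 2 else v
```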

... when M is even) can be represented. In addition, a floating-point number in RNS was also introduced in [10]. ...

... This correction capability increases with r, tolerating up to ⌊r/2⌋ errant residues [33]. There are proposals to perform fractional multiplication [112] and to represent floating-point numbers [18] using RNS. The key idea to extend this to RRNS is to protect the exponent and mantissa separately, as they transform differently upon arithmetic operations. ...


... Interval-positional characteristic. Inclusion of the IPC in the number representation is the main difference of the MF-format from previously known methods of floating-point representation in RNS [26][27][28][29]. In terms of high-precision arithmetic, this offers the following benefits: ...

Floating-point machine precision is often not sufficient to correctly solve large scientific and engineering problems. Moreover, computation time is a critical parameter here. Therefore, any research aimed at developing high-speed methods for multiple-precision arithmetic is of great immediate interest. This paper deals with a new technique of multiple-precision computations, based on the use of modular-positional floating-point format for representation of numbers. In this format, the significands are represented in residue number system (RNS), thus enabling high-speed processing of the significands with possible parallelization by RNS modules. Number exponents and signs are represented in the binary number system. The interval-positional characteristic method is used to increase the speed of executing complex non-modular operations in RNS. Algorithms for rounding off and aligning the exponents of numbers in modular-positional format are presented. The structure and features of a new multiple-precision library are given. Some results of an experimental study on the efficiency of this library are also presented.
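The interval-positional characteristic speeds up "non-modular" operations such as magnitude comparison. A simplified illustration of the underlying idea (exact fractions stand in for the paper's interval bounds; the toy moduli set is an assumption): by the CRT, X/M equals the fractional part of Σᵢ xᵢ·(Mᵢ⁻¹ mod mᵢ)/mᵢ, which is computable digit-parallel, so two RNS numbers can be compared without full conversion.

```python
# Positional characteristic sketch: X/M as a fractional sum over residues.
# Exact Fractions are used here; the paper's IPC uses interval bounds instead.
from fractions import Fraction
from math import prod

MODULI = (7, 11, 13)   # assumed toy base set; M = 1001
M = prod(MODULI)

def characteristic(residues):
    """Exact relative magnitude X/M of an RNS number."""
    s = Fraction(0)
    for x, m in zip(residues, MODULI):
        Mi = M // m
        s += Fraction(x * pow(Mi, -1, m), m)
    return s % 1  # fractional part equals X/M

def rns_less_than(a, b):
    """Compare two RNS numbers via their positional characteristics."""
    return characteristic(a) < characteristic(b)
```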

Traditional computer data processing is limited by data input, output, storage, and display, and further computation requires repeated binary-decimal conversions. With the expansion of data-intensive distributed computing, decimal computation over mass data is widely applied in banking, finance, signal processing, biomedicine, astronomy, geography, data acquisition, image compression, and other fields, so an independent decimal floating-point unit is becoming important in these areas. A floating-point unit is the part of a computer system specially designed to carry out operations on floating-point numbers; in various systems it has been implemented as a coprocessor rather than as an integrated unit. Floating-point arithmetic operations are very important in the design of digital signal processing and application-specific systems: fixed-point arithmetic logic is faster and more area-efficient, but it is sometimes desirable to perform calculations with floating-point numbers, and in most digital signal processing applications addition and multiplication are performed frequently. This paper presents a review of floating-point units for signal processing applications with a faster rate of operations.

The emergence of embedded systems with severe power restrictions, together with technology developments that require fault-tolerant approaches, has spurred interest in the residue number system (RNS). Owing to its unique characteristics, which lead to fast and low-power arithmetic circuits, RNS has become an interesting option for designing embedded systems for today's high-performance applications. However, to unlock the full potential of RNS for designing efficient embedded systems, the system's architect should be aware of the structure and latest advances in RNS circuits and systems, including residue arithmetic, algorithms, and hardware design. RNS has a multi-disciplinary nature supported by mathematical formulation, digital design, and computer architecture. This chapter briefly reviews the most up-to-date hardware structures of RNS components and introduces the most efficient designs for each part to ease the designer's work. Moreover, a teaching method for learning about RNS-based systems, usable both for class lecturing and individual research, is presented. All aspects of RNS, namely moduli-set selection, residue-based hardware component design, including forward and reverse converters, and modulo arithmetic circuits, are considered in a comprehensive teaching approach. This teaching methodology can lead to a deeper understanding of RNS and, consequently, open the gates to more effective and applicable investigation of RNS-based embedded systems.
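Among the modulo arithmetic circuits such a chapter covers, a classic building block is the modulo (2ⁿ − 1) adder, which replaces a general modular reduction with an end-around carry. A behavioral sketch (the bit width is an arbitrary assumption):

```python
# Modulo (2**n - 1) addition via end-around carry: the carry out of the
# top bit is wrapped around and added back in, avoiding a division/reduction.
def add_mod_2n_minus_1(a, b, n):
    """Add a and b modulo 2**n - 1, for 0 <= a, b < 2**n - 1."""
    m = (1 << n) - 1
    s = a + b
    s = (s & m) + (s >> n)   # end-around carry
    return 0 if s == m else s  # the all-ones word is congruent to 0
```

In hardware this is just an n-bit adder whose carry-out feeds its carry-in, which is why moduli of the form 2ⁿ − 1 (and 2ⁿ, 2ⁿ + 1) are popular choices in RNS moduli sets.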

Algorithms for the four elementary arithmetic operations are presented to perform reliable floating-point arithmetic. These operations are achieved by applying residue techniques to weighted number systems and are performed with no loss of accuracy during computation. The arithmetic operations presented can be used as elementary tools (on many existing architectures) to ensure the reliability of numerical computations. Simulation results, especially for the solution of ill-conditioned problems, are given, with emphasis on the practical usability of the tools.

Extended precision of calculations is required for solving many scientific and engineering problems. Solution time is a critical parameter, and therefore new methods should be developed for fast high-precision arithmetic. In this paper, a new modular-positional format for the representation of floating-point multi-digit numbers is proposed. The main concept of this format is to represent floating-point mantissas in residue number systems and thereby enable their digit-parallel processing. The method of interval-positional characteristics is used to increase the speed of complex non-modular operations. Several algorithms for performing arithmetic operations and rounding in the new modular-positional floating-point format are considered. The results of studies of their vectorization efficiency and performance compared to some analogs (MPFR - Multiple Precision Floating-Point Reliable library, NTL - Number Theory Library, and Wolfram Mathematica) are discussed.
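A key step in adding numbers of a modular-positional format is exponent alignment. Since scaling a mantissa *up* is a cheap residue-wise multiplication while scaling down requires expensive non-modular operations, the operand with the larger exponent is rescaled and the sum keeps the smaller exponent. A toy sketch (moduli set and exponent base are assumptions; rounding and normalization, which the paper also covers, are omitted, and the aligned sum is assumed to fit the RNS range):

```python
# Exponent alignment for addition with RNS mantissas and binary exponents.
# Value = mantissa * BASE**exponent. Toy moduli set; no rounding/normalization.
from math import prod

MODULI = (7, 11, 13, 17, 19)   # assumed; M = 323323
M = prod(MODULI)
BASE = 2                       # assumed exponent base

def to_rns(x):
    return tuple(x % m for m in MODULI)

def crt(res):
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def fp_rns_add(man_a, exp_a, man_b, exp_b):
    """Add a*BASE**exp_a + b*BASE**exp_b; result keeps the smaller exponent."""
    if exp_a < exp_b:           # ensure exp_a >= exp_b
        man_a, exp_a, man_b, exp_b = man_b, exp_b, man_a, exp_a
    scale = BASE ** (exp_a - exp_b)   # cheap scale-up of the larger operand
    man = tuple((a * scale + b) % m for a, b, m in zip(man_a, man_b, MODULI))
    return man, exp_b
```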

This paper aims at the implementation of a 16-bit floating-point multiplier using the residue number system. The residue number system (RNS), a non-weighted number system, has gained popularity in the implementation of fast and parallel computing applications. It has inherent properties such as modularity, parallelism, and carry-free computation, which speed up arithmetic computations. A floating-point number can be represented as M × B^E, where M is the mantissa, E is the exponent, and B is the base. The floating-point RNS multiplier consists of an RNS exponent modulo adder and an RNS mantissa modulo multiplier. In this paper, the floating-point multiplier is implemented in Verilog HDL using ModelSim and synthesized for an Altera Cyclone II using Quartus; the operating frequency of the multiplier is found to be 311 MHz.
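The multiplier's datapath can be sketched behaviorally: mantissas are multiplied residue-wise (the mantissa modulo multiplier) while exponents are added (shown here as a plain integer add for clarity). The moduli set is an assumption, the mantissas are kept small enough that their product fits the RNS range, and normalization/rounding is omitted:

```python
# Behavioral sketch of a floating-point RNS multiply: residue-wise mantissa
# product plus exponent addition. Value = mantissa * 2**exponent.
from math import prod

MODULI = (7, 11, 13, 17, 19)   # assumed toy base set; M = 323323
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def crt(res):
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return x % M

def fp_rns_mul(man_a, exp_a, man_b, exp_b):
    """(man_a * 2**exp_a) * (man_b * 2**exp_b), mantissas held in RNS."""
    man = tuple((a * b) % m for a, b, m in zip(man_a, man_b, MODULI))
    return man, exp_a + exp_b
```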

A reliable scientific computation approach, substantially different from the known ones, based on residue number system (RNS) floating-point arithmetic is described. In this approach, a real number is represented by an expression consisting of two parts: an approximate part and an interval-error part. The approximate part, represented by an RNS floating-point number, gives an approximate value of the real number. The interval-error value, represented by two RNS floating-point numbers, gives the left and right limits of an interval containing the error. In parallel with the result of each operation, the rounding error induced by that operation is determined and accumulated. When a series of operations is completed, the range containing the exact result can be determined from the computed result and the sum of the interval errors. To illustrate the proposed method, examples are given for which it is difficult to find the exact solution with ordinary floating-point calculation.
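A toy model of this two-part representation: each value carries an approximate part plus an interval bounding the accumulated rounding error, so the exact result is guaranteed to lie between approx + err_lo and approx + err_hi. Ordinary binary floats stand in here for the paper's RNS floating-point numbers, and only addition is shown.

```python
# Approximate part + interval-error part, with the per-operation rounding
# error bounded by half an ulp of the result and accumulated as in the paper.
import math
from fractions import Fraction

class IntervalFP:
    def __init__(self, approx, err_lo=Fraction(0), err_hi=Fraction(0)):
        self.approx = approx      # approximate part (a float here)
        self.err_lo = err_lo      # left limit of the error interval
        self.err_hi = err_hi      # right limit of the error interval

    def __add__(self, other):
        s = self.approx + other.approx
        # round-to-nearest error of this addition is at most half an ulp
        half_ulp = Fraction(math.ulp(s)) / 2
        return IntervalFP(s,
                          self.err_lo + other.err_lo - half_ulp,
                          self.err_hi + other.err_hi + half_ulp)

    def bounds(self):
        """Range guaranteed to contain the exact result."""
        return (Fraction(self.approx) + self.err_lo,
                Fraction(self.approx) + self.err_hi)
```

After a series of operations, `bounds()` plays the role of combining the computed result with the summed interval errors to give the range of existence of the exact answer.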

A general algorithm for signed number division in residue number systems (RNSs) is presented. A parity-checking technique is used to accomplish sign and overflow detection in this algorithm; compared with conventional methods of sign and overflow detection, the parity-checking method is more efficient and practical. Sign-magnitude arithmetic division is implemented using binary search. There is no restriction on the dividend or the divisor (except a zero divisor), and no quotient estimation is necessary before the division is started. In hardware implementations, the storage of one table is required for parity checking, and all other arithmetic operations are completed by calculation. Only simple operations are needed to accomplish this RNS division.
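The parity trick behind such sign detection can be sketched directly: with all moduli odd, M is odd, and doubling a number wraps past M exactly when the number lies in the upper (negative) half of the range; the wrap flips parity, so X > M/2 iff 2X mod M is odd. Real designs read the parity from a stored table; this toy version reconstructs it via the CRT purely for illustration, and the moduli set is an assumption.

```python
# Parity-based sign detection in RNS: X is "negative" (upper half of [0, M))
# iff 2X mod M is odd. Requires all moduli odd so that M is odd.
from math import prod

MODULI = (3, 5, 7)        # all odd, pairwise coprime; M = 105
M = prod(MODULI)

def to_rns(x):
    return tuple(x % m for m in MODULI)

def parity(res):
    """Parity of the represented integer (a table lookup in hardware)."""
    x = 0
    for r, m in zip(res, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)
    return (x % M) % 2

def is_negative(res):
    """Sign test: double the number residue-wise and check parity."""
    doubled = tuple((2 * r) % m for r, m in zip(res, MODULI))
    return parity(doubled) == 1
```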

An improved residue expression and new arithmetic algorithms for addition and subtraction are proposed. In the proposed system, positive and negative integers of any magnitude can be handled regardless of the particular choice of the set of relatively prime bases, and such problems as difficulty of overflow detection, handling of sign change in subtraction, or complementation of negative numbers do not arise as far as addition and subtraction are concerned. The difficult problem of magnitude comparison in the residue system, however, still remains to be attacked. The proposed system assumes the existence of an I/O unit, independent of the residue-mode computer, for the necessary conversions before and after computation. Basic properties of the proposed arithmetic algorithms are also derived.

Abstract—A new residue number system algebra has previously been proposed by the author. The algebra has resolved an essential theoretical barrier in the residue number system and has enabled one to pursue additive operations in the residue number system to their full extent, overcoming such difficulties as restrictions on the sign or magnitude of numbers in the system. In this paper, basic theorems in the algebra are introduced first, and then, based on the theorems, table-look-up-oriented solutions for hardware overflow checking, sign detection, and floating-point additive operations are given. The theorems expound the behavior of a quantity treated as a veiled, mysterious function in the literature. To the best knowledge of the author, hardware overflow-checking schemes and floating-point additive operations in the residue number system have never been reported elsewhere. So far, the upper limit of the magnitude of numbers in the system ever discussed has been the theory-limited one, and floating-point operations in the residue number system have never been discussed.

The residue number system is an integer number system and is inconvenient for representing numbers with fractional parts. For the symmetric residue system, a new representation of floating-point numbers and arithmetic algorithms for addition, subtraction, multiplication, and division are proposed. A floating-point number is expressed as an integer multiplied by a product of the moduli. The proposed system assumes the existence of the necessary conversion procedures before and after computation.

A. Sasaki, The basis for implementation of additive operations in the residue number system, IEEE Transactions on Computers C-17, 1066-1073 (1968).

N.S. Szabo and R.I. Tanaka, Residue Arithmetic and Its Application to Computer Technology, McGraw-Hill, New York (1967).

A. Sasaki, Addition and subtraction in the residue number system, IEEE Transactions on Electronic Computers EC-16, 164-167 (1967).

J.-S. Chiang and M. Lu, A general division algorithm for residue number systems, Proc. 10th IEEE Symposium on Computer Arithmetic (1991).

E. Kinoshita, H. Kosako and Y. Kojima, Floating-point arithmetic algorithms in the symmetric residue number system, IEEE Transactions on Computers C-23, 9-20 (1974).