We presented an architecture for VLSI neural networks in which stochastic products are used in the synapses. Although computation times increase in comparison with purely analog matrix products, this method has the advantage of offering better computational precision.
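To make the principle concrete, here is a minimal sketch of a stochastic multiplier: two values in [0, 1] are encoded as independent Bernoulli bit streams, and an AND gate produces a stream whose mean is their product. This is the standard stochastic-computing idiom, shown here as an illustrative software model; the function name and stream length are our own choices, not details from the paper.

```python
import random

def stochastic_product(a, b, n_steps):
    """Estimate a * b (with a, b in [0, 1]) by ANDing two independent
    Bernoulli bit streams -- an illustrative model of a stochastic
    synaptic multiplier, not the paper's actual circuit."""
    hits = 0
    for _ in range(n_steps):
        bit_a = random.random() < a   # stream with P(bit = 1) = a
        bit_b = random.random() < b   # stream with P(bit = 1) = b
        hits += bit_a and bit_b       # AND gate: P(bit = 1) = a * b
    return hits / n_steps

# Example: 0.5 * 0.25 = 0.125; the estimate sharpens as the stream grows.
print(stochastic_product(0.5, 0.25, 1 << 16))
```

The precision of the result depends on the stream length rather than on component matching, which is the source of the robustness claimed below.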
If n bits of precision are required (depending on the type of network, the learning algorithm, and the application), 2^n steps are needed for a complete computation of the matrix product, but a first approximation can be obtained earlier with well-chosen time constants in the low-pass filter included in each neuron to perform the integration. In standard applications, this is nevertheless far fewer steps than a comparable purely digital processor requires, since the latter needs at least as many iterations as there are synapses. Compared with analog synapses, the achievable precision is much better, since it is almost independent of component precision (component precision limits only the maximum number of synapses that can be connected). The proposed architecture thus offers a good compromise between the precision of digital implementations and the size and speed of analog ones.
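The sketch below illustrates, under our own assumptions, how such a stochastic matrix-vector product could be modeled in software: each synapse multiplies by ANDing bit streams, and each neuron accumulates the summed synaptic bits through a first-order low-pass filter, so a rough estimate is available well before all 2^n steps have elapsed. The function name, the normalization of weights and inputs to [0, 1], and the filter constant `alpha` are illustrative assumptions, not values from the paper.

```python
import random

def stochastic_matvec(W, x, n_bits, alpha=0.05):
    """Sketch of a stochastic matrix-vector product (assumptions: weights
    and inputs normalized to [0, 1]; alpha is a hypothetical filter
    constant). Each neuron's output converges toward (W @ x)_i / n_synapses
    over 2**n_bits steps, with usable intermediate estimates earlier."""
    n_steps = 1 << n_bits             # 2**n_bits steps for n-bit precision
    y = [0.0] * len(W)                # low-pass-filtered neuron outputs
    for _ in range(n_steps):
        x_bits = [random.random() < xi for xi in x]   # input bit streams
        for i, row in enumerate(W):
            # Sum of AND gates over this neuron's synapses, normalized.
            s = sum((random.random() < w) and xb
                    for w, xb in zip(row, x_bits)) / len(row)
            y[i] += alpha * (s - y[i])  # leaky integration (low-pass filter)
    return y

W = [[0.8, 0.2], [0.1, 0.9]]
x = [0.5, 0.5]
print(stochastic_matvec(W, x, n_bits=8))  # each entry ~ (W @ x)_i / 2
```

The filter constant embodies the time-constant trade-off mentioned above: a larger `alpha` tracks the stream faster and yields an earlier but noisier approximation, while a smaller one averages over more steps for higher final precision.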