Conference Paper

A generic reconfigurable neural network architecture as a network on chip

Pennsylvania State Univ., USA
DOI: 10.1109/SOCC.2004.1362404 Conference: SOC Conference, 2004. Proceedings. IEEE International
Source: IEEE Xplore

ABSTRACT: Neural networks are widely used in pattern recognition, security applications and data manipulation. We propose a hardware architecture for a generic neural network, using network on chip (NoC) interconnect. The proposed architecture allows for expandability, mapping of more than one logical unit onto a single physical unit, and dynamic reconfiguration based on application-specific demands. Simulation results show that this architecture has significant performance benefits over existing architectures.
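The abstract mentions mapping more than one logical unit onto a single physical unit over an NoC. A minimal sketch of what such a logical-to-physical mapping with deterministic XY routing could look like (the round-robin policy, mesh size, and all names here are illustrative assumptions, not taken from the paper):

```python
# Illustrative sketch only: a round-robin mapping of logical neural units
# onto a small mesh NoC, with hop counts under deterministic XY routing.

def xy_hops(src, dst):
    """Hop count of XY routing on a mesh = Manhattan distance."""
    return abs(src[0] - dst[0]) + abs(src[1] - dst[1])

def map_logical_to_physical(n_logical, mesh_w, mesh_h):
    """Assign each logical unit a physical node; several logical
    units may share one node when n_logical exceeds the mesh size."""
    nodes = [(x, y) for y in range(mesh_h) for x in range(mesh_w)]
    return {u: nodes[u % len(nodes)] for u in range(n_logical)}

# 10 logical units on a 2x2 mesh: units 0 and 4 share node (0, 0).
mapping = map_logical_to_physical(10, 2, 2)
cost = xy_hops(mapping[0], mapping[3])  # (0,0) -> (1,1): 2 hops
```

Sharing a physical node trades parallelism for area, which is the kind of application-driven reconfiguration the abstract describes.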

    • "While [17] and [19] rely on unicast, [18] chooses multicast. While [17] uses wormhole routing, [18] and [19] employ short AER packets. Packets in [18] consist of source address and [19] uses destination address. "
    ABSTRACT: Providing highly flexible connectivity is a major architectural challenge for hardware implementation of reconfigurable neural networks. We perform an analytical evaluation and comparison of different configurable interconnect architectures (mesh NoC, tree, shared bus and point-to-point) emulating variants of two neural network topologies (having full and random configurable connectivity). We derive analytical expressions and asymptotic limits for performance (in terms of bandwidth) and cost (in terms of area and power) of the interconnect architectures considering three communication methods (unicast, multicast and broadcast). It is shown that the multicast mesh NoC provides the highest performance/cost ratio and is consequently the most suitable interconnect architecture for configurable neural network implementation. Routing table size requirements and their impact on scalability were analyzed. A modular hierarchical architecture based on a multicast mesh NoC is proposed to allow large-scale neural network emulation. Simulation results successfully validate the analytical models and the asymptotic behavior of the network as a function of its size.
    Microprocessors and Microsystems 03/2011; 35(2):152-166. DOI:10.1016/j.micpro.2010.08.005
    • "While [7] and [9] rely on unicast, [8] chooses multicast. While [7] uses wormhole routing, [8] and [9] employ short AER packets. Packets in [8] consist of source address and [9] uses destination address. "
    ABSTRACT: Implementation of reconfigurable neural networks in hardware requires highly flexible connectivity, creating a major architectural challenge. We perform an analytical evaluation and comparison of different configurable interconnect architectures (mesh NoC, tree, shared bus and point-to-point) emulating variants of two neural network topologies (having full and random exponential configurable connectivity). We derive analytical expressions and asymptotic limits for performance (in terms of bandwidth) and cost (in terms of area and power) of the interconnect architectures considering three communication methods (unicast, multicast and broadcast). It is shown that the planar structure, fault and drop tolerance, and pulse-information encoding of spiking neural networks make a simple multicast mesh network-on-chip suitable for the massively parallel communication these networks require. Simulation results successfully validate the analytical models and the asymptotic behavior of the network as a function of its size.
    2010 IEEE 26th Convention of Electrical and Electronics Engineers in Israel (IEEEI); 12/2010
    • "One of the most important features of artificial neural networks (ANNs) is their learning ability. Size and real-time considerations show that on-chip learning is necessary for a large range of applications [3]. Neural-network-based recognition systems have several levels of inherent parallelism. "
    ABSTRACT: Neural networks are widely used in pattern recognition, security applications, and robot control. We propose a hardware architecture system using tiny neural networks (TNNs) specialized in image recognition. The generic TNN architecture allows for expandability by mapping several basic units (layers) and for dynamic reconfiguration, depending on application-specific demands. One of the most important features of TNNs is their learning ability. Weight modification and architecture reconfiguration can be carried out at run-time. Our system performs object identification by interpreting characteristic elements of object shapes. This is achieved by interconnecting several specialized TNNs. The results of several tests under different conditions are reported in this paper. The system accurately detects a test shape in most of the experiments performed. This paper also contains a detailed description of the system architecture and the processing steps. To validate the research, the system has been implemented and configured as a perceptron network with back-propagation learning, with shape recognition chosen as the reference application. Simulation results show that this architecture has significant performance benefits.
    IEEE Transactions on Industrial Electronics 09/2009; 56(8):3253-3263. DOI:10.1109/TIE.2009.2022076
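The citing paper above configures its system as a perceptron network with back-propagation learning. As a point of reference for that configuration, the classic single-neuron perceptron update rule can be sketched as follows (a simplified illustration, not the TNN implementation; the AND training set and hyperparameters are assumptions for the example):

```python
# Illustrative sketch: classic perceptron learning rule
# w += lr * (target - output) * input, trained on the AND function.
import random

def train_perceptron(samples, epochs=20, lr=0.1):
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(3)]  # 2 weights + bias
    for _ in range(epochs):
        for x1, x2, t in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = t - y
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err  # bias sees a constant input of 1
    return w

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
```

In the cited hardware system, such weight updates happen at run-time on chip; here they are shown in software only to make the learning rule concrete.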
