December 2024 · Journal of Circuits, Systems and Computers
Deep neural networks (DNNs) have seen widespread adoption across various domains. However, their computational demands pose significant challenges due to the extensive inter-neuron communication within the network. Moreover, the energy consumption of DNNs is substantial, driven primarily by the vast data movement and computational requirements. Overcoming these challenges calls for novel accelerator architectures. In this study, we present a novel heuristic algorithm for neuron grouping that handles both fully connected and partially pruned DNN models. Our algorithm minimizes the overall data communication cost among neuron groups while also maintaining computational load balance. It outperforms existing heuristic neuron-grouping methods from the literature, which fall into three main approaches, reducing communication cost by an average of 33.01% to 47.11%. By optimizing neuron grouping, our approach may be used to enhance the efficiency of DNN accelerators, enabling improved performance and reduced energy consumption.
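The abstract does not reproduce the paper's algorithm, but the general problem it describes can be illustrated with a minimal sketch: model inter-neuron communication as a weighted graph and greedily assign neurons to groups so that cross-group edge weight (communication) stays low while group loads stay balanced. Everything below is an assumption for illustration, not the authors' method: the function name `group_neurons`, the `balance_weight` penalty, the edge-list input format, and the greedy placement order are all hypothetical.

```python
from collections import defaultdict

def group_neurons(num_neurons, edges, num_groups, balance_weight=1.0):
    """Illustrative greedy heuristic (not the paper's algorithm):
    assign neurons to groups, minimizing the edge weight cut between
    groups while penalizing computational load imbalance.

    edges: list of (u, v, w) tuples, where w is the data volume
           exchanged between neurons u and v; edges removed by
           pruning are simply omitted from the list.
    """
    # Adjacency map for fast lookup of each neuron's neighbors.
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    assignment = {}               # neuron -> group id
    load = [0] * num_groups       # crude per-group computational load proxy

    # Place heavily connected neurons first, since their placement
    # has the largest effect on the total cut weight.
    order = sorted(range(num_neurons),
                   key=lambda n: -sum(w for _, w in adj[n]))

    for n in order:
        best_group, best_cost = 0, float("inf")
        for g in range(num_groups):
            # Communication cost: weight of n's edges that would cross
            # into other, already-assigned groups.
            cut = sum(w for nb, w in adj[n]
                      if nb in assignment and assignment[nb] != g)
            # Trade off communication against load balance.
            cost = cut + balance_weight * load[g]
            if cost < best_cost:
                best_group, best_cost = g, cost
        assignment[n] = best_group
        load[best_group] += 1 + len(adj[n])  # neuron plus its connections
    return assignment

# Example: partition 4 neurons into 2 groups.
edges = [(0, 1, 5.0), (1, 2, 3.0), (2, 3, 4.0), (0, 3, 1.0)]
print(group_neurons(4, edges, num_groups=2))
```

The design choice worth noting is the single cost function combining cut weight with a load penalty: tuning `balance_weight` moves the heuristic along the trade-off between minimizing communication and evening out computation, the same two objectives the abstract identifies.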