Impact of level-2 cache sharing on the performance and power requirements of homogeneous multicore embedded systems

Computer Sci. and Eng. Dept, Florida Atlantic University, Boca Raton, FL, USA
Microprocessors and Microsystems (Impact Factor: 0.6). 08/2009; DOI: 10.1016/j.micpro.2009.06.001
Source: DBLP

ABSTRACT In order to satisfy the need for increasing processing power, the design of modern computing systems has changed significantly: major chip vendors now deploy multicore or manycore processors across their product lines. Multicore architectures offer tremendous processing speed, but at the same time they pose challenges for embedded systems, which suffer from limited resources. Various cache memory hierarchies have been proposed to satisfy the requirements of different embedded systems. Normally, a level-1 cache (CL1) is dedicated to each core, whereas the level-2 cache (CL2) can be either shared (as in the Intel Xeon and IBM Cell) or distributed (as in the AMD Athlon). In this paper, we investigate the impact of the CL2 organization (shared vs. distributed) on the performance and power consumption of homogeneous multicore embedded systems. We use the VisualSim and Heptane tools to model and simulate the target architectures running FFT, MI, and DFT applications. Experimental results show that by replacing a single-core system with an 8-core system, reductions in mean delay per core of 64% for distributed CL2 and 53% for shared CL2 are possible with little additional power (15% for distributed CL2 and 18% for shared CL2) for FFT. The results also reveal that the distributed CL2 hierarchy outperforms the shared CL2 hierarchy for all three applications considered, and for other applications with similar code characteristics.
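The delay and power figures reported for FFT can be combined into a back-of-envelope comparison. The sketch below normalizes everything to the single-core baseline and computes an energy-delay product (a standard figure of merit, not one used in the paper itself); it only restates the abstract's percentages as arithmetic.

```python
# Back-of-envelope comparison using the FFT figures from the abstract.
# All values are normalized to the single-core baseline (delay = 1.0, power = 1.0).

single_core = {"delay": 1.0, "power": 1.0}

# 8-core systems: mean delay per core drops 64% (distributed CL2) or 53%
# (shared CL2); total power rises 15% and 18%, respectively.
distributed = {"delay": 1.0 - 0.64, "power": 1.0 + 0.15}
shared      = {"delay": 1.0 - 0.53, "power": 1.0 + 0.18}

def energy_delay_product(cfg):
    # Lower is better: a combined metric that rewards delay savings
    # while penalizing extra power draw.
    return cfg["delay"] * cfg["power"]

for name, cfg in [("single-core", single_core),
                  ("8-core, distributed CL2", distributed),
                  ("8-core, shared CL2", shared)]:
    print(f"{name:24s} delay={cfg['delay']:.2f} "
          f"power={cfg['power']:.2f} EDP={energy_delay_product(cfg):.3f}")
```

On these numbers the distributed CL2 comes out ahead on both delay and the combined metric, consistent with the paper's conclusion.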


Available from: Abu Asaduzzaman, Jan 29, 2014
    ABSTRACT: We investigate the impact of level-1 cache (CL1) parameters, level-2 cache (CL2) parameters, and cache organizations on the power consumption and performance of multicore systems. We simulate two 4-core architectures, both with private CL1s: one with a shared CL2 and the other with private CL2s. Simulation results with MPEG4, H.264, matrix inversion, and DFT workloads show that reductions in total power consumption and mean delay per task of up to 42% and 48%, respectively, are possible with optimized CL1s and CL2s. Total power consumption and mean delay per task depend significantly on the application, including its code size and locality.
    Microelectronics (ICM), 2010 International Conference on; 01/2011
  •
    ABSTRACT: The leading problem in adopting caches in multicore computing systems is twofold: caches worsen execution-time unpredictability (which challenges support for real-time multimedia applications), and caches are power hungry (which challenges energy constraints). Recently published articles suggest that cache locking improves timing predictability. However, the increased cache activity caused by aggressive cache locking makes the system consume more energy and become less efficient. In this paper, we investigate the impact of multicore cache parameters and cache locking on performance and power consumption for real-time multimedia applications. We consider an Intel Xeon-like multicore architecture with a two-level cache memory hierarchy and use two popular multimedia applications: the recently introduced H.265/HEVC (for improved video quality and data compression ratio) and H.264/AVC (the network-friendly video coding standard). Experimental results suggest that cache optimization has the potential to improve multicore performance by decreasing the cache miss rate by up to 36% and to cut power consumption by up to 33%. We observe that H.265/HEVC has a significant performance advantage over H.264/AVC on multicore systems with smaller cache memories.
    Intelligent Signal Processing and Communications Systems (ISPACS), 2013 International Symposium on; 01/2013
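Why a lower miss rate improves performance can be seen with the standard average-memory-access-time (AMAT) formula. The sketch below applies the 36% miss-rate reduction quoted above; the hit time, baseline miss rate, and miss penalty are hypothetical example values, not figures from the paper.

```python
# Illustrative AMAT calculation: AMAT = hit_time + miss_rate * miss_penalty.
# Latencies and the baseline miss rate are assumed example values.

def amat(hit_time, miss_rate, miss_penalty):
    # Average memory access time in cycles.
    return hit_time + miss_rate * miss_penalty

miss_penalty = 100.0   # cycles to service a miss from the next level (assumed)
baseline  = amat(hit_time=4.0, miss_rate=0.10, miss_penalty=miss_penalty)
# Apply the 36% miss-rate reduction reported for the optimized configuration:
optimized = amat(hit_time=4.0, miss_rate=0.10 * (1 - 0.36), miss_penalty=miss_penalty)

print(f"baseline AMAT:  {baseline:.1f} cycles")   # 14.0
print(f"optimized AMAT: {optimized:.1f} cycles")  # 10.4
```

Because the miss penalty dominates, even a modest miss-rate reduction yields a sizeable drop in average access time.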
  •
    ABSTRACT: We discuss the computational bottlenecks in molecular dynamics (MD) and describe the challenges in parallelizing its computation-intensive tasks. We present a hybrid algorithm using MPI (Message Passing Interface) with OpenMP threads to parallelize a generalized MD computation scheme for systems with short-range interatomic interactions. The algorithm is discussed in the context of nano-indentation of chromium films with carbon indenters, using the Embedded Atom Method potential for Cr–Cr interactions and the Morse potential for Cr–C interactions. We study the performance of our algorithm for a range of MPI–thread combinations and find that performance depends strongly on the computational task and on load sharing in the multicore processor. The algorithm scaled poorly with pure MPI, and our hybrid schemes were observed to outperform the pure message-passing scheme despite utilizing the same number of processors or cores in the cluster. The speed-up achieved by our algorithm compared favourably with that achieved by standard MD packages.
    Journal of Parallel and Distributed Computing 01/2013; DOI:10.1016/j.jpdc.2013.12.008 · 1.01 Impact Factor
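The short-range pair interaction that the hybrid MPI/OpenMP scheme above parallelizes can be sketched in a few lines. The Morse parameters and cutoff below are illustrative placeholders, not the paper's fitted Cr–C values, and the O(N²) loop stands in for the cell-list pair sums that a production code would split across ranks and threads.

```python
import math

def morse(r, D=1.0, a=1.5, r0=1.2):
    # Morse pair potential: V(r) = D * (1 - exp(-a*(r - r0)))**2 - D.
    # D, a, r0 are assumed example parameters, not fitted values.
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2 - D

def pair_energy(positions, cutoff=2.5):
    # Naive O(N^2) double loop for clarity; this is the pair sum that a
    # hybrid MD code distributes across MPI ranks and OpenMP threads.
    total = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = math.dist(positions[i], positions[j])
            if r < cutoff:          # short-range: distant pairs are skipped
                total += morse(r)
    return total

# Three nearby atoms plus one far outside the cutoff, which contributes nothing.
atoms = [(0.0, 0.0, 0.0), (1.2, 0.0, 0.0), (0.0, 1.2, 0.0), (10.0, 10.0, 10.0)]
print(f"total pair energy: {pair_energy(atoms):.4f}")
```

The cutoff is what makes the interaction "short-range": each atom interacts with only a bounded neighbourhood, so the pair list can be partitioned spatially with little communication between ranks.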