Publications (2)

  • K.C. Breen, J.H. Tapia, D.G. Elliott
    ABSTRACT: This paper presents the implementation of three single-instruction, multiple-data (SIMD) parallel algorithms for improved graphical user interface processing in mobile devices. These algorithms, which perform alpha blending, window masking, and rendering with antialiasing, are adapted for use with Atsana Semiconductor's J2210 media processor, a low-power system-on-chip for graphics, image, and video processing in wireless applications. All three SIMD algorithms are successfully realized in software for the J2210, without the use of any floating-point math or integer division. The algorithms are evaluated through architecturally-aware simulation of the J2210's SIMD array processor, and their performance is compared to that of equivalent sequential algorithms on a conventional RISC processor. Results show performance improvements by factors of 99.6, 39.3, and 2.4 for alpha blending, window masking, and rendering with antialiasing, respectively. Power consumption in the array processor is very low for each algorithm, with a maximum of 4.5 mW during active operation. The combination of high performance and low power consumption achieved by these algorithms demonstrates that they are suitable for use in mobile devices equipped with a SIMD-capable media processor such as the J2210.
    Electrical and Computer Engineering, 2005. Canadian Conference on; 06/2005
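The abstract notes that the alpha-blending algorithm avoids both floating-point math and integer division. The paper's J2210 implementation is not reproduced here, but the core arithmetic can be sketched generically: per-channel blending normally requires a divide by 255, and a well-known shift-and-add identity replaces it exactly over the 8-bit product range. This sketch is an illustration of that standard trick, not code from the paper.

```python
def div255(x):
    # Exact integer division by 255 for 0 <= x <= 255*255,
    # using only adds and shifts (no divide instruction).
    return (x + (x >> 8) + 1) >> 8

def alpha_blend(src, dst, alpha):
    # Per-channel blend: out = (alpha*src + (255-alpha)*dst) / 255,
    # with src, dst, alpha all 8-bit values in [0, 255].
    return div255(alpha * src + (255 - alpha) * dst)
```

In a SIMD setting the same integer expression would be applied element-wise across a vector of pixels; here it is shown scalar for clarity.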
  •
    ABSTRACT: Among the most important design parameters in cache memories are storage capacity, associativity, and line size. Conventional caches are tuned to provide fast performance across a variety of representative applications; however, there is no fixed cache configuration that best fits the varying memory requirements of every application. In this paper we study the potential performance benefits of using an adaptive cache that dynamically adjusts its line length to better match the spatial locality of any memory access of a running application. In our L2 cache model, a group of fixed-size cache lines can be concatenated to form longer lines called superlines. We develop an optimistic reference-lookahead technique to determine the optimal superline size for every cache miss. The effectiveness of alternative superline length adjustment strategies could then be measured against this theoretical "best case" strategy. Our results show that a cache with adaptive line size can improve the hit rate by up to 3.25%, and produce speedups of up to 14%.
    Electrical and Computer Engineering, 2005. Canadian Conference on; 01/2005
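The superline idea above can be illustrated with a toy model. This sketch is not the paper's L2 model or its reference-lookahead technique: it is a minimal fully-associative LRU cache in which every miss fills a superline of `superline_k` consecutive base lines, so one can observe how longer effective lines raise the hit rate on a spatially local access stream. The 64-byte base line size and the trace are assumptions for the example.

```python
from collections import OrderedDict

LINE_BYTES = 64  # assumed base (fixed-size) cache line

def simulate(trace, capacity_lines, superline_k):
    """Toy fully-associative LRU cache. On a miss, fetch the whole
    aligned superline of `superline_k` consecutive base lines.
    Returns the hit rate over the address trace."""
    cache = OrderedDict()  # base-line number -> None, in LRU order
    hits = 0
    for addr in trace:
        line = addr // LINE_BYTES
        if line in cache:
            hits += 1
            cache.move_to_end(line)  # mark most recently used
        else:
            # Fill the aligned superline containing the missed line.
            base = (line // superline_k) * superline_k
            for l in range(base, base + superline_k):
                cache[l] = None
                cache.move_to_end(l)
                while len(cache) > capacity_lines:
                    cache.popitem(last=False)  # evict LRU line
    return hits / len(trace)

# A sequential word-by-word sweep has high spatial locality, so
# larger superlines should convert many cold misses into hits.
trace = list(range(0, 4096, 4))
```

On this trace, `simulate(trace, 64, 4)` yields a higher hit rate than `simulate(trace, 64, 1)`, since each superline fill prefetches the next three base lines the sweep is about to touch. The paper's contribution is choosing `superline_k` adaptively per miss, which this fixed-`k` toy deliberately omits.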