Conference Paper

Bus Access Optimization for Predictable Implementation of Real-Time Applications on Multiprocessor Systems-on-Chip

DOI: 10.1109/RTSS.2007.24 Conference: Real-Time Systems Symposium, 2007. RTSS 2007. 28th IEEE International
Source: IEEE Xplore


In multiprocessor systems, the traffic on the bus does not solely originate from data transfers due to data dependencies between tasks, but is also affected by memory transfers as a result of cache misses. This has a huge impact on worst-case execution time (WCET) analysis and, in general, on the predictability of real-time applications implemented on such systems. As opposed to the WCET analysis performed for a single processor system, where the cache miss penalty is considered constant, in a multiprocessor system each cache miss has a variable penalty, depending on the bus contention. This affects the tasks' WCET which, however, is needed in order to perform system scheduling. At the same time, the WCET depends on the system schedule due to the bus interference. In this paper we present an approach to worst-case execution time analysis and system scheduling for real-time applications implemented on multiprocessor SoC architectures. The emphasis of this paper is on the bus scheduling policy and its optimization, which is of huge importance for the performance of such predictable multiprocessor applications.
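The circular dependency the abstract describes (the WCET depends on the bus schedule, while the schedule needs the WCETs) can be made concrete with a toy TDMA model. The sketch below is illustrative only, not the paper's actual algorithm: the round-robin equal-slot bus, the slot lengths, and the one-miss-per-segment task model are all assumptions introduced here.

```python
def miss_penalty(t, cpu, slot_len, n_cpus, transfer):
    """Worst-case bus delay for a cache miss issued by `cpu` at time t
    under a round-robin TDMA bus with equal slots: if the transfer fits
    in the CPU's current slot it proceeds at once, otherwise it waits
    for the start of the CPU's next slot."""
    assert transfer <= slot_len            # a transfer must fit in one slot
    period = slot_len * n_cpus
    slot_start = cpu * slot_len            # offset of this CPU's slot in each period
    t_in = t % period
    if slot_start <= t_in and t_in + transfer <= slot_start + slot_len:
        return transfer                    # fits in the current slot
    wait = (slot_start - t_in) % period    # wait until the CPU's next slot
    return wait + transfer

def task_wcet(segments, cpu, slot_len, n_cpus, transfer):
    """WCET of a task modeled as compute segments, each ending in one
    cache miss: the same miss costs more or less depending on where
    the schedule places it, which is the effect the paper exploits."""
    t = 0
    for compute in segments:
        t += compute
        t += miss_penalty(t, cpu, slot_len, n_cpus, transfer)
    return t
```

With two CPUs, 10-cycle slots, and 4-cycle transfers, a miss issued at t=3 on CPU 0 costs only the 4-cycle transfer, while the same miss at t=12 costs 12 cycles (8 waiting, 4 transferring); shifting a task's start time or reordering bus slots therefore changes its WCET, which is why bus schedule and system schedule must be optimized together.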



Available from: Zebo Peng
  • Source
    • "The execution time may vary greatly depending on the software components being run, i.e. depending on the system integration. Researchers have proposed to upper bound the maximum delay a software component can suffer due to interference when accessing shared resources such as buses [4] [5] or memory controllers [6]. For those resources where considering the maximum delay would remove the benefit of using them, e.g. "
    ABSTRACT: This position paper outlines the innovative probabilistic approach being taken by the EU Integrated Project PROXIMA to the analysis of the timing behaviour of mixed criticality real-time systems. PROXIMA supports multi-core and mixed criticality systems timing analysis via the use of probabilistic techniques and hardware/software architectures that reduce dependencies which affect timing. The approach is being applied to DO-178B/C and ISO 26262.
    Full-text · Article · Jun 2014
  • Source
    • "[TABLE I: Comparison with previous work. A Yes/No comparison of prior analysis methods and mechanisms ([4], [5], [9]-[11], [13]-[17]) against this work, covering COTS support, multicore support, and effects E1-E4; the table layout was lost in extraction.] Consequently, the research community has developed (i) methods for analyzing the impact of these dependencies and (ii) run-time mechanisms that protect the execution time of one task from these effects. "
    ABSTRACT: In commercial-off-the-shelf (COTS) multi-core systems, the execution times of tasks become hard to predict because of contention on shared resources in the memory hierarchy. In particular, a task running in one processor core can delay the execution of another task running in another processor core. This is due to the fact that tasks can access data in the same cache set shared among processor cores or in the same memory bank in the DRAM memory (or both). Such cache and bank interference effects have motivated the need to create isolation mechanisms for resources accessed by more than one task. One popular isolation mechanism is cache coloring that divides the cache into multiple partitions. With cache coloring, each task can be assigned exclusive cache partitions, thereby preventing cache interference from other tasks. Similarly, bank coloring allows assigning exclusive bank partitions to tasks. While cache coloring and some bank coloring mechanisms have been studied separately, interactions between the two schemes have not been studied. Specifically, while memory accesses to two different bank colors do not interfere with each other at the bank level, they may interact at the cache level. Similarly, two different cache colors avoid cache interference but may not prevent bank interference. Therefore it is necessary to coordinate cache and bank coloring approaches. In this paper, we present a coordinated cache and bank coloring scheme that is designed to prevent cache and bank interference simultaneously. We also developed color allocation algorithms for configuring a virtual memory system to support our scheme which has been implemented in the Linux kernel. In our experiments, we observed that the execution time can increase by 60% due to inter-task interference when we use only cache coloring. Our coordinated approach can reduce this figure down to 12% (an 80% reduction).
    Full-text · Conference Paper · Dec 2013
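The address-bit view underlying the cache and bank coloring described above can be sketched in a few lines. The bit positions below (4 KiB pages, cache set index in physical bits 12..16, bank index in bits 17..19) are invented for illustration and do not come from the cited paper; the point is that one physical frame carries both a cache color and a bank color, so an allocator must coordinate the pair.

```python
def cache_color(paddr, shift=12, bits=5):
    """Cache partition of a physical address: hypothetical set-index
    bits [12..16] above the 4 KiB page offset (32 colors)."""
    return (paddr >> shift) & ((1 << bits) - 1)

def bank_color(paddr, shift=17, bits=3):
    """DRAM bank of a physical address: hypothetical bank-index
    bits [17..19] (8 banks)."""
    return (paddr >> shift) & ((1 << bits) - 1)

def frames_with_colors(pair, n_frames, page_shift=12):
    """Enumerate physical frame addresses whose (cache color, bank color)
    equals the pair reserved for one task: a coordinated allocation
    prevents cache-set and bank interference simultaneously."""
    out, frame = [], 0
    while len(out) < n_frames:
        addr = frame << page_shift
        if (cache_color(addr), bank_color(addr)) == pair:
            out.append(addr)
        frame += 1
    return out
```

Two frames can share a bank color yet collide in the cache, or share a cache color yet collide in a bank; that is exactly the interaction the abstract points out, and reserving a distinct (cache, bank) pair per task removes both at once.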
  • Source
    • "In the context of this work, it is noteworthy to cite the works dealing with shared memory and shared bus contention. Time Division Multiple Access (TDMA) based schemes have been proposed in [8], [20], [21], [9] and [10]. The methods approach the problem in different ways: precomputing application specific bus schedules, or analyzing buses with the assumption of separate buses for memories and data, restricting accesses to the bus in specific phases of task execution, division of the tasks into superblocks which execute in specific slots and using FlexRay like approaches to have fixed and reserved slots. "
    ABSTRACT: Given that power is one of the biggest concerns of embedded systems, many devices have replaced DRAM with non-volatile Phase Change Memories (PCM). Some applications need to adhere to strict timing constraints and thus their temporal behavior must be analyzed before deploying them. Moreover, modern systems typically contain multiple cores, causing an application to incur significant delays due to the contention for the shared bus and shared main memory (PCM in this work). One of the challenges in the timing analysis for PCM main memories is the high discrepancy between read and write latencies and the high contention among cores. Finding an upper bound on these delays is non-trivial mainly because (i) memory requests may be issued by co-executing applications at random times, (ii) it is difficult to determine a priori which applications will be concurrently executing, and (iii) it is unknown which types of requests applications will issue. This work proposes a method to derive upper bounds on the increase in execution time of applications executing on such PCM-based multicores. It considers the contention on the shared memory and focuses on dealing with the asymmetric read and write latencies of PCM-based memories, while taking into account the specific policy applied to schedule requests by the memory controller.
    Full-text · Conference Paper · Aug 2013
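The flavor of such a bound can be shown under a deliberately crude model: an FCFS memory controller where every request from the analyzed core may wait behind one pending request from each other core, and every interfering request is pessimistically charged the slow PCM write latency. The latencies and the FCFS assumption below are illustrative, not the cited paper's actual controller model.

```python
def contention_bound(n_requests, n_cores, t_read, t_write):
    """Upper bound on the extra stall time a task's memory requests can
    suffer: each request waits behind at most (n_cores - 1) interfering
    requests, each charged the slower of the two PCM latencies."""
    worst = max(t_read, t_write)   # PCM writes dominate reads
    return n_requests * (n_cores - 1) * worst

def wcet_with_contention(wcet_isolation, n_requests, n_cores, t_read, t_write):
    """Inflate an in-isolation WCET by the memory-contention bound."""
    return wcet_isolation + contention_bound(n_requests, n_cores, t_read, t_write)
```

With read and write latencies of 50 and 500 cycles, a task issuing 10 requests on a 4-core chip can stall for up to 15000 extra cycles under this model; a symmetric DRAM-style assumption (write latency equal to read latency) would bound it at only 1500, which is why the read/write asymmetry must be modeled explicitly.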