Conference Paper

Bus Access Optimization for Predictable Implementation of Real-Time Applications on Multiprocessor Systems-on-Chip

DOI: 10.1109/RTSS.2007.24
Conference: Real-Time Systems Symposium, 2007 (RTSS 2007), 28th IEEE International
Source: IEEE Xplore

ABSTRACT In multiprocessor systems, the traffic on the bus does not solely originate from data transfers due to data dependencies between tasks, but is also affected by memory transfers as a result of cache misses. This has a huge impact on worst-case execution time (WCET) analysis and, in general, on the predictability of real-time applications implemented on such systems. As opposed to the WCET analysis performed for a single processor system, where the cache miss penalty is considered constant, in a multiprocessor system each cache miss has a variable penalty, depending on the bus contention. This affects the tasks' WCETs which, however, are needed in order to perform system scheduling. At the same time, the WCET depends on the system schedule due to the bus interference. In this paper we present an approach to worst-case execution time analysis and system scheduling for real-time applications implemented on multiprocessor SoC architectures. The emphasis of this paper is on the bus scheduling policy and its optimization, which is of huge importance for the performance of such predictable multiprocessor applications.
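As a rough illustration of the kind of reasoning the abstract alludes to (not the authors' actual analysis), the sketch below computes the worst-case number of bus cycles a processor may wait for a cache-miss transfer under a static, TDMA-like bus schedule. The slot table, field names, and the assumption that a transfer must fit entirely within one of the processor's own slots are all hypothetical simplifications.

```c
/*
 * Illustrative sketch only: worst-case delay before a processor is granted
 * the bus under a static, TDMA-like bus schedule. Assumes that at least one
 * slot owned by the processor is long enough to hold the whole transfer.
 */
#include <stdio.h>

typedef struct {
    int owner;   /* processor that owns this slot */
    int length;  /* slot length in bus cycles     */
} bus_slot_t;

/* Worst-case number of bus cycles a request from `cpu` may wait, assuming it
 * can arrive at any point in the TDMA round and needs `transfer_len`
 * consecutive cycles inside one of `cpu`'s own slots. */
static int wcet_bus_delay(const bus_slot_t *table, int nslots,
                          int cpu, int transfer_len)
{
    int round = 0;
    for (int i = 0; i < nslots; i++)
        round += table[i].length;

    int worst = 0;
    /* Try every possible arrival offset within the round (coarse but safe). */
    for (int arrival = 0; arrival < round; arrival++) {
        int t = arrival, waited = 0;
        for (;;) {
            /* Locate the slot containing position t in the current round. */
            int pos = t % round, start = 0, idx = 0;
            while (pos >= start + table[idx].length) {
                start += table[idx].length;
                idx++;
            }
            /* Stop once the transfer fits in the remainder of an own slot. */
            if (table[idx].owner == cpu &&
                pos + transfer_len <= start + table[idx].length)
                break;
            t++;
            waited++;
        }
        if (waited > worst)
            worst = waited;
    }
    return worst;
}

int main(void)
{
    /* Hypothetical two-processor schedule: CPU0 gets 8 cycles, CPU1 gets 4. */
    bus_slot_t table[] = { {0, 8}, {1, 4} };
    printf("worst-case wait for CPU1, 4-cycle transfer: %d bus cycles\n",
           wcet_bus_delay(table, 2, 1, 4));
    return 0;
}
```

With this hypothetical two-slot schedule, a 4-cycle miss on CPU1 can be delayed by up to 11 bus cycles, almost a full TDMA round, which is exactly the kind of variable, contention-dependent cache-miss penalty the abstract refers to.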

  • ABSTRACT: In modern non-customized multicore architectures, computing cores commonly share large parts of the memory hierarchy. This paper presents a scheme for controlling the sharing of main memory among cores and, hence, among the concurrently executing real-time tasks. This matters because concurrent memory accesses are served sequentially by the memory controller, and since task execution stalls until memory fetches are served, these fetches contribute significantly to the execution time of the tasks. With multiple real-time tasks competing for access to memory, main memory can easily become the Achilles' heel for the timing correctness of the tasks. To provide hard timing guarantees, the release of access requests issued to main memory therefore has to be controlled. Run-time budgeting is a well-accepted technique for controlling and coordinating the use of a shared resource, particularly when the underlying hardware cannot be altered. While guaranteeing timing correctness of the hard real-time applications, worst-case-based resource budgeting commonly degrades the performance of the co-running (so-called soft real-time) applications. In this paper we propose to combine worst-case-based resource budgeting with run-time monitoring for dynamically reconfiguring the budgets. We thereby aim to increase the responsiveness of the soft real-time applications while satisfying the strict timing constraints of the co-running hard real-time tasks. We have implemented the proposed scheme in a microkernel and present an empirical evaluation using an industrial benchmark suite. (A minimal sketch of such budgeting appears below, after this list.)
    2014 9th IEEE International Symposium on Industrial Embedded Systems (SIES); 06/2014
  • ABSTRACT: The performance and power efficiency of multi-core processors are attractive features for safety-critical applications, for example in avionics, but the inherent use of shared resources complicates timing analysability. In this paper we discuss a novel approach to compute the Worst-Case Execution Time (WCET) of multiple hard real-time applications scheduled on a Commercial Off-The-Shelf (COTS) multi-core processor. The analysis is closely coupled with mechanisms for temporal partitioning as, for instance, required in ARINC 653-based systems. Based on a discussion of the challenges for temporal partitioning and timing analysis in multi-core systems, we deduce a generic architecture model. Considering the requirements for re-usability and incremental development and certification, we use this model to describe our integrated analysis approach.
    Design Automation and Test in Europe; 01/2014
  • ABSTRACT: In commercial-off-the-shelf (COTS) multi-core systems, the execution times of tasks become hard to predict because of contention on shared resources in the memory hierarchy. In particular, a task running on one processor core can delay the execution of a task running on another core, because the two tasks may access data in the same cache set shared among cores, in the same bank of the DRAM memory, or both. Such cache and bank interference effects have motivated the need for isolation mechanisms for resources accessed by more than one task. One popular isolation mechanism is cache coloring, which divides the cache into multiple partitions. With cache coloring, each task can be assigned exclusive cache partitions, thereby preventing cache interference from other tasks. Similarly, bank coloring allows assigning exclusive bank partitions to tasks. While cache coloring and some bank coloring mechanisms have been studied separately, the interactions between the two schemes have not. Specifically, while memory accesses to two different bank colors do not interfere with each other at the bank level, they may interact at the cache level; likewise, two different cache colors avoid cache interference but may not prevent bank interference. It is therefore necessary to coordinate the cache and bank coloring approaches. In this paper, we present a coordinated cache and bank coloring scheme designed to prevent cache and bank interference simultaneously. We also developed color allocation algorithms for configuring a virtual memory system to support our scheme, which has been implemented in the Linux kernel. In our experiments, we observed that execution time can increase by 60% due to inter-task interference when only cache coloring is used; our coordinated approach reduces this figure to 12% (an 80% reduction). (A sketch of how cache and bank colors both derive from physical address bits follows this list.)
    2013 IEEE 16th International Conference on Computational Science and Engineering (CSE); 12/2013
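The first entry above describes run-time memory budgeting combined with monitoring-driven reconfiguration. A minimal sketch of that idea, under assumed numbers and with hypothetical, stubbed-out hardware hooks (this is not the authors' microkernel API), might look as follows:

```c
/*
 * Minimal sketch (not the authors' implementation) of memory-access budgeting
 * with run-time monitoring: every regulation period each core receives a
 * worst-case-safe budget of memory accesses; when the observed accesses reach
 * that budget, the core is throttled until the next replenishment, and unused
 * budget ("slack") is handed to the core hosting soft real-time work.
 * All names, numbers, and hardware hooks are hypothetical placeholders.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_CORES         4
#define GUARANTEED_BUDGET 2000   /* worst-case-safe accesses per period        */
#define SOFT_RT_CORE      0      /* assumed home of the soft real-time tasks   */

static uint64_t used[NUM_CORES];      /* accesses observed this period */
static uint64_t budget[NUM_CORES];
static bool     throttled[NUM_CORES];

/* Hypothetical hardware/kernel hooks, stubbed out with simulated values. */
static uint64_t read_mem_access_counter(int core) { return used[core] + 900 - 200ULL * core; }
static void     stall_core(int core)   { printf("core %d throttled\n", core); }
static void     release_core(int core) { printf("core %d released\n", core); }

/* Periodic timer handler: replenish budgets and redistribute last period's
 * slack to the soft real-time core (the "dynamic reconfiguration" idea). */
static void on_period_boundary(void)
{
    uint64_t slack = 0;
    for (int c = 0; c < NUM_CORES; c++) {
        if (used[c] < budget[c])
            slack += budget[c] - used[c];
        used[c]   = 0;
        budget[c] = GUARANTEED_BUDGET;
        if (throttled[c]) { throttled[c] = false; release_core(c); }
    }
    budget[SOFT_RT_CORE] += slack;
}

/* Monitoring hook (e.g. a PMU overflow interrupt or a polling tick). */
static void on_monitor_tick(int core)
{
    used[core] = read_mem_access_counter(core);
    if (used[core] >= budget[core] && !throttled[core]) {
        throttled[core] = true;
        stall_core(core);
    }
}

int main(void)
{
    on_period_boundary();                       /* initialise budgets  */
    for (int tick = 0; tick < 3; tick++)
        for (int c = 0; c < NUM_CORES; c++)
            on_monitor_tick(c);
    on_period_boundary();                       /* replenish and shift slack */
    for (int c = 0; c < NUM_CORES; c++)
        printf("core %d: next-period budget %llu\n",
               c, (unsigned long long)budget[c]);
    return 0;
}
```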
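The last entry above hinges on the fact that cache colors and DRAM bank colors are both carved out of physical address bits, so overlapping bit ranges couple the two colorings. The sketch below derives both colors from a page frame number for a hypothetical cache and DRAM geometry (not the platform evaluated in that paper):

```c
/*
 * Illustrative sketch of why cache coloring and bank coloring interact:
 * both colors are carved out of the physical page frame number (PFN), and
 * when the bit ranges overlap, choosing one color constrains the other.
 * The cache and DRAM geometry below is hypothetical.
 */
#include <stdio.h>
#include <stdint.h>

/* Hypothetical shared L2: 2 MiB, 16-way, 64-byte lines -> 2048 sets.
 * With 4 KiB pages, set-index bits [12..16] of the physical address fall
 * into the PFN, giving 32 cache colors in the PFN's low 5 bits. */
#define CACHE_COLOR_BITS 5

/* Hypothetical DRAM mapping: bank index taken from physical address bits
 * [13..15], i.e. PFN bits [1..3], giving 8 bank colors. */
#define BANK_SHIFT 1
#define BANK_BITS  3

static unsigned cache_color(uint64_t pfn)
{
    return (unsigned)(pfn & ((1u << CACHE_COLOR_BITS) - 1));
}

static unsigned bank_color(uint64_t pfn)
{
    return (unsigned)((pfn >> BANK_SHIFT) & ((1u << BANK_BITS) - 1));
}

int main(void)
{
    /* Walk a few page frames: frames sharing a cache color do not range
     * freely over bank colors (and vice versa), which is why the page
     * allocator has to coordinate the two coloring schemes. */
    for (uint64_t pfn = 0; pfn < 16; pfn++)
        printf("pfn %2llu -> cache color %2u, bank color %u\n",
               (unsigned long long)pfn, cache_color(pfn), bank_color(pfn));
    return 0;
}
```

Under these assumed mappings the bank bits lie entirely inside the cache-color bits, so fixing a cache color pins down the bank color as well; coordinated allocation avoids such unintended coupling.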
