Conference Paper

Towards Parallelizing Legacy Embedded Control Software Using the LET Programming Paradigm

Authors:
  • Mercedes-Benz AG

Abstract

The growing demand for computing power in automotive applications can only be satisfied by embedded multi-core processors. Significant parts of such applications include OEM-owned legacy software, which has been developed for single-core platforms. While the OEM is faced with the issues of parallelizing the software and specifying the requirements to the ECU supplier, the latter has to deal with implementing the required parallelization within the integrated system. The Logical Execution Time (LET) paradigm addresses these concerns in a clear conceptual framework. We present here initial steps for applying the LET model in this respect: (1) Parallelization of legacy embedded control software, by exploiting existing inherent parallelism. The application software remains unchanged, as adaptations are only made to the middleware. (2) Using the LET programming model to ensure that the parallelized software has a correct functional and temporal behavior. The Timing Definition Language (TDL) and associated tools are employed to specify LET-based requirements, and to generate system components that ensure LET behavior. The work describes two conceptual ways for integrating TDL components in AUTOSAR.
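As a rough illustration of steps (1) and (2), the following C sketch shows one way a LET-enforcing middleware layer can wrap an unchanged legacy runnable: inputs are frozen at the logical release time, outputs are published only at the end of the LET via a double buffer, and readers on other cores always see a consistent value. This is a hypothetical, simplified sketch, not the implementation described in the paper; all names (legacy_step, let_release, ...) are placeholders.

```c
#include <stdint.h>

/* Stand-in for the unchanged legacy runnable (normally hand-written or
 * generated application code); it sees only plain C variables and has
 * no knowledge of LET. */
static void legacy_step(const int16_t *rpm_in, int16_t *torque_out)
{
    *torque_out = (int16_t)(*rpm_in / 4);   /* placeholder control law   */
}

/* Double buffer for the task's output signal; readers always use the
 * slot indicated by 'published', so a half-written value is never seen. */
static int16_t torque_buf[2];
static volatile uint8_t published;          /* index of the valid slot   */

static int16_t rpm_shadow;                  /* input frozen at LET start */
static int16_t torque_shadow;               /* output held until LET end */

/* Called by the middleware at the logical release time (LET start). */
void let_release(int16_t current_rpm)
{
    rpm_shadow = current_rpm;               /* logical input sampling    */
}

/* Called at any point inside the LET window; the physical start time and
 * the executing core may vary without affecting the observable result. */
void let_execute(void)
{
    legacy_step(&rpm_shadow, &torque_shadow);
}

/* Called exactly at the LET end: the output becomes visible atomically. */
void let_terminate(void)
{
    uint8_t next = (uint8_t)(1u - published);
    torque_buf[next] = torque_shadow;
    published = next;                       /* single index flip         */
}

/* Consumers (possibly on another core) read the last published value. */
int16_t let_read_torque(void)
{
    return torque_buf[published];
}
```

Only the release/terminate hooks depend on the timing specification (for example, one generated from a TDL description); the application code and its interface remain exactly as they were on the single-core platform.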


... This had prevented designers from applying LET until the moment when architectural complexity called for a programming paradigm that leads to deterministic timing and data flow and enables MDD while remaining compatible with legacy software and hardware artifacts. The LET paradigm has been a success in the automotive industry because it elegantly solves the problem of multiprocessor programming with lock-free inter-core communication [31]. It should be noted that LET by itself does not increase the workload, even though early implementations might have used a non-workload-preserving scheduling strategy (see Section VII). ...
... This reduction in complexity can be immediately exploited in the implementation of the system. Examples are lock-free communication [31] or the reduction of memory contention [3]. Moreover, the contractual determinism of SL LET communication is a basis for monitoring, which can be implemented with minimal overhead. ...
... The LET paradigm is well known and standardized in the automotive domain [9], but it is restricted to periodic tasks (time triggering) with implicit deadlines as well as register semantics [2]. Although this was sufficient to adapt control applications from single-core to shared-memory multicore platforms [31], SL LET has been developed to extend the approach to larger NUMA architectures and distributed CPS. The main differences are that SL LET relaxes synchronous scheduling to tightly coupled LET "islands" (time zones) with their own local schedules, and that it allows event-triggered parts to be incorporated into a cause-effect chain, for example for communication networks as well as processing pipelines (see Section IV-C). ...
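The register semantics mentioned in the excerpt can be stated compactly. The following is one common way to formalize it (the notation is ours and is not quoted from the cited works):

```latex
% Register semantics of a LET channel from a writer task w to a reader job
% released at time r (illustrative formalization; notation is ours):
\[
  \mathrm{read}(r) \;=\; \mathrm{out}\!\left(j_w^{*}\right),
  \qquad
  j_w^{*} \;=\; \operatorname*{arg\,max}_{\,j_w \,:\; \mathrm{letEnd}(j_w)\,\le\, r} \mathrm{letEnd}(j_w).
\]
% The reader observes exactly the value that is "in the register" at its
% logical release time, independently of when the jobs physically executed.
```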
Article
Full-text available
To cope with growing computing performance requirements, cyber-physical systems architectures are moving toward heterogeneous high-performance computer architectures and networks. Such architectures, however, incur intricate side effects that challenge traditional software design and integration. The programming paradigm can take a key role in mastering software design, as experience in automotive design demonstrates. To cope with the integration challenge, this industry has started introducing a programming paradigm that efficiently preserves application data flow under platform integration and changes with minimum performance loss. This article will revisit this paradigm that is currently used for lock-free multicore programming and explain its extension to the system level. It will then explore its application to two important developments in industrial design. This article will conclude with an evaluation of its properties, its overhead, and its application toward a robust design process.
... Here, the above-mentioned increase in complexity and the accompanying growing demand for computing power have eventually led to a platform shift towards multi-core processors. Soon the performance of single-core processors will no longer be sufficient to keep pace with the needs of powertrain control systems, for example [6]. Along with this hardware migration, a substantial effort is required to also migrate the software. ...
... The Logical Execution Time (LET) programming model [7] was recently identified as a potential candidate to facilitate the migration from single- to multi-core architectures in the automotive sector [6,2,11]. Amongst other benefits, LET is a timing specification that introduces time- and value-deterministic inter-task communication across multiple cores. ...
... For a single- to multi-core transformation, for example, where LET is used as a mechanism to enforce causality and to ensure data consistency, no buffer would be required. In [6] we report on a real automotive application with more than 1,500 legacy variables, of which fewer than 10 required a buffer. This makes the two approaches difficult to compare exactly in general, but in any case, the Spec-based approach results in a more lightweight simulation model with fewer buffers and drivers. ...
Chapter
The interest in the logical execution time (LET) paradigm has recently experienced a boost, especially in the automotive industry. This is because it is considered a practical candidate for migrating concurrent legacy software from single- to multi-core platforms by introducing deterministic intra- and inter-core communication. In many cases, the implementation of these individual software components is rooted in MATLAB/Simulink, a modeling and simulation environment where the controller functionality is described with a block-oriented formalism and simulated with synchronous reactive semantics. Considering LET already in the modeling and simulation phase, instead of deferring this to the integration phase as is done now, is an important step towards the idea of models being the single source of truth and towards estimating the effect of LET on end-to-end timing in cause-effect chains at an early stage. This paper presents two approaches for simulating software components with LET semantics in Simulink. In contrast to previous work, which deals with clean-slate top-down approaches, we focus on legacy software (in the form of Simulink models) that does not satisfy some of the initial assumptions of the LET programming model.
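A useful special case behind such Simulink integrations, stated here as a hedged observation rather than as a claim from the chapter: if a producer task's LET equals its period T and the consumer runs at the same rate, the LET channel behaves functionally like a synchronous-reactive model with one unit delay inserted on that signal.

```latex
% Single-rate producer f and consumer g, both with period T and LET = T
% (illustrative equivalence under these simplifying assumptions):
\[
  v[k] \;=\; f\bigl(u[k-1]\bigr), \qquad z[k] \;=\; g\bigl(v[k]\bigr), \qquad k \ge 1,
\]
% where v[k] is the value the consumer reads at its k-th release: the
% producer's output is published only at the end of its LET, i.e. at the
% next release, which in a Simulink model corresponds to a unit delay
% z^{-1} placed between the producer and consumer blocks.
```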
... The optimization of the memory buffers that are required for the implementation of the LET communication in the case of oversampling or undersampling reuses the concepts and methods that were originally proposed to guarantee flow preservation in synchronous systems [10], [11], [12]. In [13], the authors propose an approach for mapping legacy code onto multicores, leveraging clustering heuristics and an implementation of the LET paradigm using the Timing Definition Language (TDL). However, a formal analysis and details about the case study application are missing. ...
... This is enforced by the first inequality in the constraint. If $a_{i,j,p} = 1$, the second and third inequalities of the constraint become equivalent to $RTC_{i,j,p} \le R_i^{p} \le t_{i,j}$, hence correctly enforcing the schedulability condition of Eq. (13). In all other cases, where $a_{i,j,p} = 0$, the last two inequalities of the constraint become equivalent to $-\infty \le R_i^{p} \le \infty$ and hence have no effect. ...
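The conditional constraint paraphrased in the excerpt can be written out explicitly. The following is a standard big-M encoding that reproduces the described behavior; the exact formulation and notation in the cited work may differ:

```latex
% Big-M linearization of the conditional schedulability constraint:
% if a_{i,j,p} = 1, enforce RTC_{i,j,p} <= R_i^p <= t_{i,j}; otherwise no effect.
\[
  RTC_{i,j,p} - M\,(1 - a_{i,j,p}) \;\le\; R_i^{p} \;\le\; t_{i,j} + M\,(1 - a_{i,j,p}),
  \qquad a_{i,j,p} \in \{0,1\},
\]
% where M is a sufficiently large constant; with a_{i,j,p} = 0 the bounds
% relax to a trivial interval and the constraint is effectively disabled.
```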
... However, it can be considered a restriction of the SR class of systems (in which a constant delay is applied at the end of each computation step), which provides the same properties (flow preservation and determinism) but with a much greater choice of the possible delays to be applied at each input/output stage. This connection can be leveraged by reusing several results from the research on SR systems that define how to provide efficient or even optimal implementations of tasks and communication primitives [6,7,8,9,10,11,12]. In 2013, CEA LIST completed the technology transfer of a Real-Time Operating System (RTOS), successively called OASIS and then PharOS, to a spin-off company. ...
... Abandoning this restriction leads to a drastic reduction in run-time and memory overhead [6]. Both dimensions of overhead depend largely on the particular application and also on the degree of freedom for choosing the exact LET [7], especially for multi-core targets. There is an enormous potential for optimization when migrating to multi-cores using LET. ...
... This has to be done in a unified and systematic approach that covers all development phases and all platforms. As a first step, the concept of Logical Execution Time (LET) [61], [62] has been introduced for the AUTOSAR Classic Platform, since it allows a time-deterministic design as well as an efficient lock-free implementation of communication [63]. Nevertheless, LET is limited to a local scope and is not applicable to SOAs. ...
Conference Paper
Full-text available
Future mobility will be electrified, connected and automated. This opens completely new possibilities for mobility concepts that have the chance to improve not only the quality of life but also road safety for everyone. To achieve this, a transformation of the transportation system as we know it today is necessary. The UNICARagil project, which ran from 2018 to 2023, has produced architectures for driverless vehicles that were demonstrated in four full-scale automated vehicle prototypes for different applications. The AUTOtech.agil project builds upon these results and extends the system boundaries from the vehicles to include the whole intelligent transport system (ITS) comprising, e.g., roadside units, coordinating instances and cloud backends. The consortium was extended mainly by industry partners, including OEMs and tier 1 suppliers with the goal to synchronize the concepts developed in the university-driven UNICARagil project with the automotive industry. Three significant use cases of future mobility motivate the consortium to develop a vision for a Cooperative Intelligent Transport System (C-ITS), in which entities are highly connected and continually learning. The proposed software ecosystem is the foundation for the complex software engineering task that is required to realize such a system. Embedded in this ecosystem, a modular kit of robust service-oriented modules along the effect chain of vehicle automation as well as cooperative and collective functions are developed. The modules shall be deployed in a service-oriented E/E platform. In AUTOtech.agil, standardized interfaces and development tools for such platforms are developed. Additionally, the project focuses on continuous uncertainty consideration expressed as quality vectors. A consistent safety and security concept shall pave the way for the homologation of the researched ITS.
... On top of this come the temporal effects of network protocols with their different communication latencies, which make it difficult to reason about the real overall timing behavior in advance through simulation. In addition, the recent introduction of multi-core architectures in the automotive domain adds a whole new dimension to this [4]. ...
Conference Paper
Full-text available
This paper presents an extension to TrueTime, a widely used toolbox for co-simulation of networked real-time control systems in the block-oriented modeling and simulation environment MATLAB/Simulink. TrueTime provides an implementation of a real-time kernel as a Simulink block, which is able to execute a set of tasks and interrupt handlers under various scheduling schemes. By annotating the controller code with platform-specific timing information, it makes it possible to observe the temporal behavior and scheduling effects arising from the actual controller implementation in closed loop with the plant. This goes beyond traditional development approaches within the synchronous (reactive) block-diagram formalism of Simulink, which typically ignores execution times and jitter. However, TrueTime is said not to be suited for industrial (legacy) applications, mainly due to its structural requirements on the controller code. We propose an alternative execution mechanism that overcomes this limitation and fits typical controller code as generated, for example, by the Simulink/Embedded Coder. Optionally, tasks may have individual execution stacks and may also execute as individual threads, even allowing for accelerated simulation on multi-core simulation hosts. Furthermore, this introduces support for instruction-level timing and deeply branched function call graphs, without distorting the code structure. A case study demonstrates the applicability of our approach.
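To make the structural issue concrete: code generated by the Simulink/Embedded Coder typically exposes a monolithic step function, whereas TrueTime's standard kernel expects task code split into segments that report their own execution times. The C sketch below only illustrates the kind of wrapper such an alternative execution mechanism implies; the kernel hook tt_consume_time is hypothetical and is not TrueTime's actual API.

```c
#include <stdint.h>

/* Step function in the style produced by the Embedded Coder (stand-in body). */
static double Controller_U_error;           /* model input                   */
static double Controller_Y_command;         /* model output                  */
static void Controller_step(void)
{
    Controller_Y_command = 0.8 * Controller_U_error;  /* placeholder law     */
}

/* Hypothetical co-simulation hook: charge 'micros' of simulated CPU time to
 * the calling task, suspending it so that preemption and scheduling effects
 * become visible. Stubbed here for illustration only. */
static void tt_consume_time(uint32_t micros)
{
    (void)micros;
}

/* Task body: the generated code keeps its natural structure; timing
 * annotations are merely added around (or, if needed, inside) the call. */
void controller_task(void)
{
    tt_consume_time(20u);       /* simulated cost of input processing        */
    Controller_step();          /* unmodified generated controller code      */
    tt_consume_time(35u);       /* simulated cost of the control law         */
}
```

Because the wrapper may be suspended inside tt_consume_time, each task needs its own execution stack or thread, which matches the options mentioned in the abstract.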
... TDL shares with Giotto the basic idea of LET but introduces an additional high-level concept, the module, for building transparent distributed real-time systems. The TDL environment can be integrated into AUTOSAR [17], which pursues the objective of creating and establishing an open and standardized software architecture for automotive electronic control units (ECUs). ...
Article
Real-time systems continuously interact with the physical environment and often have to satisfy stringent timing constraints imposed by these interactions. Such systems involve two main properties: reactivity and predictability. Reactivity allows the system to continuously react to a non-deterministic external environment, while predictability guarantees the deterministic execution of safety-critical parts of applications. However, with the increase in software complexity, traditional approaches to developing real-time systems make temporal behavior difficult to infer, especially when the system is required to handle non-deterministic aperiodic events from the physical environment. In this article, we propose a reactive and predictable programming framework, Distributed Clockwerk (DCW), for distributed real-time systems. DCW introduces the Servant, a non-preemptible execution entity, to implement periodic tasks based on the Logical Execution Time (LET) model. Furthermore, a joint scheduling policy based on the slack-stealing algorithm is proposed to efficiently handle aperiodic events without violating hard timing constraints. To further support predictable communication among distributed nodes, DCW implements the Time-Triggered Controller Area Network (TTCAN) to avoid collisions while accessing the shared communication medium. Moreover, the programming framework provides a set of APIs for defining the timing and functional behavior of concurrent tasks. An example is implemented to illustrate the DCW design flow. The evaluation results demonstrate that our proposal improves both periodic and aperiodic reactivity compared with existing work, and that the implemented DCW ensures system predictability while incurring extremely low overheads.
Article
The Logical Execution Time (LET) paradigm has recently been recognized as a promising candidate to facilitate the migration to multi-core architectures in automotive real-time software systems. We outline several findings regarding the application of the LET paradigm that corroborate this perception. Our work in this respect deals with LET for legacy systems and LET in the context of model-based development (e.g., in MATLAB/Simulink). Furthermore, we present open issues and highlight implications on the development process when using LET as a synchronization mechanism.
... LET considers abstract intervals between the reading and writing of variables instead of the actual execution time of a program and is focused on software only. In the automotive domain, one example application is the distribution of single-core software onto multi-core platforms [11]. It is also a possible basis for new approaches to increase the timing predictability of embedded real-time systems [10], [12]. ...
Article
Full-text available
The computation of a cyber-physical system's reaction to a stimulus typically involves the execution of several tasks. The delay between stimulus and reaction thus depends on the interaction of these tasks and is subject to timing constraints. Such constraints exist for a number of reasons and range from possible impacts on customer experiences to safety requirements. We present a technique to determine end-to-end latencies of such task sequences. The technique is demonstrated on the example of electronic control units (ECUs) in automotive embedded real-time systems. Our approach is able to deal with multi-core architectures and supports four different activation patterns, including interrupts. It is the first formal analysis approach making use of load assumptions in order to exclude infeasible data propagation paths without the knowledge of worst-case execution times or worst-case response times. We employ a constraint programming solver to compute bounds on end-to-end latencies.
Article
In automotive and industrial real-time software systems, the primary timing constraints relate to cause-effect chains. A cause-effect chain is a sequence of linked tasks and it typically implements the process of reading sensor data, computing algorithms, and driving actuators. The classic timing analysis computes the maximum end-to-end latency of a given cause-effect chain to verify that its end-to-end deadline can be satisfied in all cases. This information is useful but not sufficient in practice: Software is usually evolving and updates may always alter the maximum end-to-end latency. It would be desirable to judge the quality of a software design a priori by quantifying how robust the timing of a given cause-effect chain will be in the presence of software updates. In this paper, we derive robustness margins which guarantee that if software extensions stay within certain bounds, then the end-to-end deadline of a cause-effect chain can still be satisfied. Robustness margins are also useful when the system model has uncertain parameters. A robust system design can tolerate bounded deviations from the nominal system model without violating timing constraints. The results are applicable to both the bounded execution time programming model and the (system-level) logical execution time programming model. In this paper, we study both an industrial use case from the automotive industry and analyze synthetically generated experiments with our open-source tool TORO.
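For orientation, a simple conservative bound is often quoted for such chains under the LET model. It is stated here only as a rule of thumb, assuming periodic tasks with periods T_i and logical execution times LET_i; the analyses cited in this listing compute tighter, semantics-specific values.

```latex
% Conservative reaction-latency bound for a cause-effect chain of n LET tasks:
\[
  L_{\mathrm{reaction}} \;\le\; \sum_{i=1}^{n} \bigl(T_i + \mathit{LET}_i\bigr).
\]
% Example: three tasks with T = LET = 10, 20 and 5 ms give
% (10+10) + (20+20) + (5+5) = 70 ms as an upper bound.
```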
Article
Timing correctness is crucial in a multi-criticality real-time system, such as an autonomous driving system. It has been recently shown that these systems can be vulnerable to timing inference attacks, mainly due to their predictable behavioral patterns. Existing solutions like schedule randomization cannot protect against such attacks, often limited by the system’s real-time nature. This paper presents “SchedGuard++”: a temporal protection framework for Linux-based real-time systems that protects against posterior schedule-based attacks by preventing untrusted tasks from executing during specific time intervals. SchedGuard++ supports multi-core platforms and is implemented using Linux containers and a customized Linux kernel real-time scheduler. We provide schedulability analysis assuming the Logical Execution Time (LET) paradigm, which enforces I/O predictability. The proposed response time analysis takes into account the interference from trusted and untrusted tasks and the impact of the protection mechanism. We demonstrate the effectiveness of our system using a realistic radio-controlled rover platform. Not only is “SchedGuard++” able to protect against the posterior schedule-based attacks, but it also ensures that the real-time tasks/containers meet their temporal requirements.
Article
Logical Execution Time (LET) is a timed programming abstraction which features predictable and composable timing. It has recently gained considerable attention in the automotive industry, where it was successfully applied to master the distribution of software applications on multi-core electronic control units. However, the LET abstraction in its conventional form is only valid within the scope of a single component. With the recent introduction of System-level Logical Execution Time (SL LET), the concept could be transferred to a system-wide scope. This article improves on an earlier paper on SL LET by providing matured definitions and an extensive discussion of the concept. It also features a comprehensive evaluation exploring the impact of SL LET with regard to design, verification, performance, and implementability. The evaluation goes far beyond the contexts in which LET was originally applied. Indeed, SL LET allows us to address many open challenges in the design and verification of complex embedded hardware/software systems, addressing predictability, synchronization, composability, and extensibility. Furthermore, we investigate performance trade-offs, and we quantify implementation costs by providing an analysis of the additionally required buffers.
Conference Paper
The automotive industry is confronting the multi-core challenge, where legacy and modern software must run correctly and efficiently in parallel, by designing its software around the Logical Execution Time (LET) model. While such designs offer implementations that are platform-independent and time-predictable, task communications are assumed to complete instantaneously. Thus, it is critical to implement timely data transfers between LET tasks, which may be on different cores, in order to preserve a design's data flow. In this paper, we develop a lightweight Static Buffering Protocol (SBP) that satisfies the LET communication semantics and supports signal-based communication with multiple signal writers. Our simulation-based evaluation with realistic industrial automotive benchmarks shows that the execution overhead of SBP is at most half that of the traditional Point-To-Point (PTP) communication method. Moreover, SBP needs on average 60% less buffer memory than PTP.
Article
Although the robot taxi is a proof of concept, the volume-market introduction of automated vehicles represents the main cyber-physical challenge, entailing drastically increased design complexity. Challenges and possible architecture and design process solutions are discussed.
Chapter
The verification of real-time systems has been an active area of research for several decades now. Some results have been successfully transferred to industry. Still, many obstacles remain that hinder a smooth integration of academic research and industrial application. In this extended abstract, we discuss some of these obstacles and ongoing research and community efforts to bridge this gap. In particular, we present several experimental and theoretical methods to evaluate and compare real-time systems analysis methods and tools.
Article
Software design for automotive systems is highly complex due to the presence of strict data age constraints for event chains in addition to task-specific requirements. These age constraints define the maximum time for the propagation of data through an event chain consisting of independently triggered tasks. Tasks in event chains can have different periods, introducing over- and under-sampling effects, which additionally aggravates their timing analysis. Furthermore, different functionality in these systems is developed by different suppliers before the final system integration on the ECU. The software itself is developed in a hardware-agnostic manner, and the resulting uncertainty and limited information in the early design phases may preclude effective analysis of end-to-end delays at that stage. In this paper, we present a method to compute end-to-end delays given the information available in the design phases, thereby enabling timing analysis throughout the development process. The presented methods are evaluated with extensive experiments, which show how the pessimism decreases as more system information becomes available.
Conference Paper
The Logical Execution Time (LET) programming model has been recently employed as a parallel programming paradigm for multi-core platforms, in particular for porting large single-core applications to multi-core versions, which is a serious challenge faced in major industries, such as the automotive one. In this paper, we consider a transformation process from legacy single-core software to LET-based versions that are ready for multi-core, and focus on the problem of minimizing computational costs rooted in the additional buffering that is required to achieve the LET communication semantics, especially for parallel executions and in the presence of legacy event-driven functions that remain outside of the LET specifications. We propose a static analysis that proceeds top-down through successive layers of abstraction, where an optimal solution at a certain layer represents a minimal upper bound on the set of buffers determined in the next layer. We present a solution for the top layer, which is platform-independent and has no restrictions on executions of event-driven software. Furthermore, we derive optimal buffering for a second layer with a minimal platform configuration. We refer here to a three-layer implementation, which has been successfully applied to industrial automotive software, where further savings are achieved in the mapping from second layer buffers to memory locations. We also point to evaluation results, demonstrating the applicability of the approach in industrial settings.
Conference Paper
Full-text available
A majority of multi-rate real-time systems are constrained by a multitude of timing requirements, in addition to the traditional deadlines on well-studied response times. This means that the timing predictability of these systems not only depends on the schedulability of certain task sets but also on the timely propagation of data through the chains of tasks from sensors to actuators. In the automotive industry, four different timing constraints corresponding to various data propagation delays are commonly specified for such systems. This paper identifies and addresses the sources of pessimism as well as optimism in the calculation of one such delay, namely the reaction delay, in the state-of-the-art analysis that is already implemented in several industrial tools. Furthermore, a generic framework is proposed to compute all four end-to-end data propagation delays, complying with the established delay semantics, in a scheduler- and hardware-agnostic manner. This allows analysis of the system models already at early development phases, where limited system information is present. The paper further introduces mechanisms to generate job-level dependencies, a partial ordering of jobs, which must be satisfied by any execution platform in order to meet the data propagation timing requirements. The job-level dependencies are first added to all task chains of the system and then reduced to their minimum required set such that the job order is not affected. Moreover, a necessary schedulability test is provided, allowing the number of CPUs to be varied. The experimental evaluations demonstrate the tightness of the reaction delay computed with the proposed framework compared to the existing state-of-the-art and state-of-practice solutions.
Conference Paper
Full-text available
In the Logical Execution Time (LET) programming model, fixed execution times of software tasks are specified and a dedicated middleware is employed to ensure their realization, achieving increased system robustness and predictability. This paradigm has been proposed as a top-down development process, which is hardly applicable to a large body of legacy control software encountered in the embedded industry. Applying LET to legacy software entails challenges such as: satisfying legacy constraints, minimizing additional computational costs, maintaining control quality, and dealing with event-triggered computations. Such challenges are addressed here by a systematic approach, where program analysis and modification techniques are employed to introduce efficient buffering into the legacy system such that the given LET specifications are met. The approach has been implemented in a tool suite that performs fully automated transformation of the legacy software and may be carried out incrementally. This paper presents an application to large-scale automotive embedded software, as well as an evaluation of the achieved LET-based behavior for industrial engine control software.
Article
Full-text available
A large class of embedded systems is distinguished from general-purpose computing systems by the need to satisfy strict requirements on timing, often under constraints on available resources. Predictable system design is concerned with the challenge of building systems for which timing requirements can be guaranteed a priori. Perhaps paradoxically, this problem has been made more difficult by the introduction of performance-enhancing architectural elements, such as caches, pipelines, and multithreading, which introduce a large degree of uncertainty and make guarantees harder to provide. The intention of this article is to summarize the current state of the art in research concerning how to build predictable yet performant systems. We suggest precise definitions for the concept of “predictability”, and present predictability concerns at different abstraction levels in embedded system design. First, we consider timing predictability of processor instruction sets. Thereafter, we consider how programming languages can be equipped with predictable timing semantics, covering both a language-based approach using the synchronous programming paradigm, as well as an environment that provides timing semantics for a mainstream programming language (in this case C). We present techniques for achieving timing predictability on multicores. Finally, we discuss how to handle predictability at the level of networked embedded systems where randomly occurring errors must be considered.
Article
Full-text available
I discuss two main challenges in embedded systems design: the challenge of building predictable systems, and that of building robust systems. I suggest how predictability can be formalized as a form of determinism, and robustness as a form of continuity.
Article
Full-text available
Giotto provides an abstract programmer's model for the implementation of embedded control systems with hard real-time constraints. A typical control application consists of periodic software tasks together with a mode-switching logic for enabling and disabling tasks. Giotto specifies time-triggered sensor readings, task invocations, actuator updates, and mode switches independent of any implementation platform. Giotto can be annotated with platform constraints such as task-to-host mappings, and task and communication schedules. The annotations are directives for the Giotto compiler, but they do not alter the functionality and timing of a Giotto program. By separating the platform-independent from the platform-dependent concerns, Giotto enables a great deal of flexibility in choosing control platforms as well as a great deal of automation in the validation and synthesis of control software. The time-triggered nature of Giotto achieves timing predictability, which makes Giotto particularly suitable for safety-critical applications.
Article
The underlying theories of both control engineering and real-time systems engineering assume idealized system abstractions that mutually neglect central aspects of the other discipline. Control engineering theory, on the one hand, usually assumes jitter free sampling and constant input-output latencies disregarding complex real-world timing effects. Real-time engineering theory, on the other hand, uses abstract performance models that neglect the functional behavior, and derives worst-case situations that have little expressiveness for control functionalities in physically dominated automotive systems. As a consequence, there is a lot of potential for a systematic co-engineering between both disciplines, increasing design efficiency and confidence. In this paper, we discuss possible approaches for such a co-engineering and their current applicability to real world problems. In particular, we compare simulation-based and formal verification techniques for various construction principles of automotive real-time control software.
Conference Paper
Embedded systems with hard real-time constraints need sound timing-analysis methods for proving that these constraints are satisfied. Computer architects have made this task harder by improving average-case performance through the introduction of components such as caches, pipelines, out-of-order execution, and different kinds of speculation. This article argues that some architectural features make timing analysis very hard, if not infeasible, but also shows how smart configuration of existing complex architectures can alleviate this problem.
Article
This paper deals with a new sequencing problem in which n jobs with ordering restrictions have to be done by men of equal ability. Assume every man can do any of the n jobs. The two questions considered in this paper are: (1) how to arrange a schedule that requires the minimum number of men so that all jobs are completed within a prescribed time T, and (2) if m men are available, how to arrange a schedule that completes all jobs at the earliest time.
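This is the classic precedence-constrained sequencing problem. Purely as an illustration (and not the method proposed in the paper), the C sketch below implements the simple "highest level first" list-scheduling idea under the strong assumptions of unit-time jobs and at most one successor per job; it prints a schedule for m workers and the resulting completion time. All job data are made up for the example.

```c
#include <stdio.h>

#define N 7          /* number of unit-time jobs                           */
#define M 2          /* number of available workers ("men")                */

/* succ[i] is the single successor of job i (an in-forest), or -1 if none. */
static const int succ[N] = { 3, 3, 4, 5, 5, 6, -1 };

static int level[N];         /* length of the chain from job i to its root */
static int preds_left[N];    /* number of unfinished predecessors          */
static int state[N];         /* 0 = waiting/ready, 1 = ran this slot, 2 = done */

static int compute_level(int i)
{
    if (level[i] == 0)
        level[i] = (succ[i] < 0) ? 1 : 1 + compute_level(succ[i]);
    return level[i];
}

int main(void)
{
    int finished = 0, t = 0;

    for (int i = 0; i < N; i++) {
        compute_level(i);
        if (succ[i] >= 0)
            preds_left[succ[i]]++;
    }

    while (finished < N) {
        /* Greedy "highest level first": in each unit-length time slot,
         * assign the M workers to ready jobs with the longest chains.     */
        for (int w = 0; w < M; w++) {
            int pick = -1;
            for (int i = 0; i < N; i++)
                if (state[i] == 0 && preds_left[i] == 0 &&
                    (pick < 0 || level[i] > level[pick]))
                    pick = i;
            if (pick < 0)
                break;                       /* no further ready jobs now  */
            state[pick] = 1;                 /* runs during [t, t+1)       */
            finished++;
            printf("t=%d: worker %d runs job %d\n", t, w, pick);
        }
        /* Successors become ready only after the whole slot has elapsed.  */
        for (int i = 0; i < N; i++)
            if (state[i] == 1) {
                state[i] = 2;
                if (succ[i] >= 0)
                    preds_left[succ[i]]--;
            }
        t++;
    }
    printf("all %d jobs finished by time %d using %d workers\n", N, t, M);
    return 0;
}
```

For this restricted setting (unit times, in-forest precedence), highest-level-first scheduling is the classic approach; the paper's general questions also cover the dual problem of minimizing the number of men for a given deadline T.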
Conference Paper
Giotto provides an abstract programmer’s model for the implementation of embedded control systems with hard real-time constraints. A typical control application consists of periodic software tasks together with a mode switching logic for enabling and disabling tasks. Giotto specifies time-triggered sensor readings, task invocations, and mode switches independent of any implementation platform. Giotto can be annotated with platform constraints such as task-to-host mappings, and task and communication schedules. The annotations are directives for the Giotto compiler, but they do not alter the functionality and timing of a Giotto program. By separating the platform-independent from the platform-dependent concerns, Giotto enables a great deal of flexibility in choosing control platforms as well as a great deal of automation in the validation and synthesis of control software. The time-triggered nature of Giotto achieves timing predictability, which makes Giotto particularly suitable for safety-critical applications.
Deterministic and dependable (also known as predictable and robust) embedded real-time systems
  • C Aussaguès
C. Aussaguès, "Deterministic and dependable (also known as predictable and robust) embedded real-time systems with the OASIS and PharOS technology," 2012, invited talk at the 17th IEEE International Conference on Engineering of Complex Computer Systems.