Figure 7 - available via license: Creative Commons Attribution 4.0 International
Source publication
Embedded systems used in critical systems, such as aeronautics, have undergone continuous evolution in recent years. In this evolution, many of the functionalities offered by these systems have been adapted through the introduction of network services that achieve high levels of interconnectivity. The high availability of access to communications n...
Context in source publication
Context 1
... third axis, "the product", indicates the life cycle of the product and highlights the importance of managing production as well as design and development, possible recycling, and the residual value or environmental impact of a product at the end of its useful life. From the point of view of software organization, the RAMI reference model ( Figure 7) proposes a service-based architecture whose interfaces, information models, and meta-models are driven by standards (IEC 61360, IEC 62264, IEC CDD) [38,39] that enable their precise interpretation. The basic element of the architecture is the "asset", which represents an object that has value for the company, together with its software representation, the "management shell", which organizes the properties and functions of the asset from different points of view. ...
Similar publications
Advances in artificial intelligence (AI) and embedded systems have resulted in a recent increase in the use of image-processing applications for smart-city safety. This enables cost-adequate scaling of automated video surveillance, increasing the available data and reducing the need for human intervention. At the same time, although deep learning is a very inte...
Citations
... TT traffic can be combined with best-effort traffic and audio-video bridging (AVB) traffic or rate-constrained (RC) traffic to form a hybrid-criticality network. In a time-triggered (TT) network, TT traffic is assigned the highest priority and is designed off-line [4][5][6]. This implies that it is scheduled in advance and then loaded onto each node to ensure that its transmission (a) does not suffer from blocking delays due to transmission conflicts between TT traffic, and (b) is not affected by other, lower-priority traffic in the network [7,8]. ...
... For example, if p_A = 8, p_B = 12 and l_A = l_B = 1, then lcm = 24 and gcd = 4. When the time slots occupied by TT flow A are predetermined to be [1,9,17], there are (gcd − 1)/gcd × p_B = (4 − 1)/4 × 12 = 9 possible scenarios in which TT flow B can be scheduled, namely [2,14], [3,15], [4,16], [6,18], [7,19], [8,20], [10,22], [11,23], [12,24]. ...
... × 12 = 3 possible scenarios in which TT flow B can be scheduled, namely [2,3,4,14,15,16], [6,7,8,18,19,20], [10,11,12,22,23,24]. ...
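The slot count in the l = 1 example above can be checked with a short enumeration. The flow parameters and the predetermined slots of flow A are taken from the text; everything else is a sketch:

```python
# Sketch: count the start slots at which TT flow B (period 12, length 1) can be
# placed in the hyperperiod lcm(8, 12) = 24 without colliding with TT flow A,
# whose slots are predetermined to be [1, 9, 17].
from math import gcd, lcm

p_a, p_b = 8, 12
hyper = lcm(p_a, p_b)            # 24
a_slots = {1, 9, 17}             # predetermined slots of flow A

scenarios = []
for start in range(1, p_b + 1):  # candidate offsets within B's period
    b_slots = [start + k * p_b for k in range(hyper // p_b)]
    if not a_slots & set(b_slots):   # no slot collides with flow A
        scenarios.append(b_slots)

# formula from the text: (gcd - 1)/gcd * p_B = 3/4 * 12 = 9
assert len(scenarios) == (gcd(p_a, p_b) - 1) * p_b // gcd(p_a, p_b)
```

Enumerating yields exactly the nine scenarios listed in the text, starting with [2, 14].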
Time-triggered networks are deployed in avionics and astronautics because they provide deterministic, low-latency communication. When a permanent end-system core failure occurs and local resources are insufficient, the partitions executing on the failed core, together with the applications that reside in them, must be remapped, and the affected traffic re-routed and re-scheduled. We present a network-wide reconfiguration strategy together with an implementation scheme, and propose an Integer Linear Programming based joint mapping, routing, and scheduling reconfiguration method (JILP) for global reconfiguration. Based on scheduling compatibility, a novel heuristic algorithm (SCA) for mapping and routing is proposed to reduce the reconfiguration time. Experimentally, JILP achieved a higher success rate than mapping-then-routing-and-scheduling algorithms. Relative to JILP, SCA/ILP was 50-fold faster with minimal impact on the reconfiguration success rate, and SCA achieved a higher reconfiguration success rate than shortest-path routing and load-balanced routing. Finally, scheduling compatibility plays a guiding role both in the ILP-based optimization objectives and in ‘reconfigurable depth’, a metric proposed in this paper to determine the reconfiguration potential of a TT network.
... Indeed, the system begins operation in the low-criticality (LO) mode, and it enters the high-criticality (HI) mode whenever a HI task overruns its LO-mode execution budget. The scheduler then assumes the HI-mode budget for all residual workloads to ensure system correctness, and it continues in this mode until all the high-level workloads are completed; at this point the operation of all LO and HI tasks demands heavy computation, which may surpass the system's capacity, and the processor becomes overloaded [8]. ...
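The Vestal-style LO/HI mode switch described in this excerpt can be sketched as follows; the task set, field names, and drop policy are hypothetical, kept minimal to show only the budget change and the dropping of low-criticality work after the switch:

```python
# Minimal sketch of a mixed-criticality mode switch: the system starts in LO
# mode; a HI task overrunning its LO-mode budget C(LO) triggers the switch to
# HI mode, where residual workloads are budgeted at C(HI) and LO tasks may be
# dropped to free capacity. Task parameters are invented.
tasks = [
    {"name": "hi_task", "crit": "HI", "c_lo": 2, "c_hi": 5},
    {"name": "lo_task", "crit": "LO", "c_lo": 3, "c_hi": 3},
]

def budget(task, mode):
    # in LO mode every task is budgeted at C(LO); after the switch,
    # residual workloads are budgeted at C(HI)
    return task["c_lo"] if mode == "LO" else task["c_hi"]

def run(observed_exec):
    mode = "LO"
    admitted = []
    for task in tasks:
        if mode == "LO" and task["crit"] == "HI" \
                and observed_exec[task["name"]] > task["c_lo"]:
            mode = "HI"  # a HI task overran its LO budget: switch modes
        # in HI mode, LO tasks are dropped to free capacity
        if mode == "LO" or task["crit"] == "HI":
            admitted.append(task["name"])
    return mode, admitted
```

With `run({"hi_task": 3, "lo_task": 1})` the overrun (3 > 2) forces the switch to HI mode and the LO task is no longer admitted; without an overrun both tasks run in LO mode.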
Recently, scheduling mixed-criticality tasks on a common computational platform has become an important topic in academia and engineering practice. Since multicore processors are the main paradigm in mixed-criticality systems (MCS), reliability and energy consumption are vital concerns. In modern MCS, increased peak power dissipation, particularly in critical scenarios, may cause temperature issues that disturb the system's consistency and timeliness. This work proposes a criticality-cognizant energy-efficient scheduling approach (CESA) that concurrently provides reliability, power management, and a failsafe service level in MCS. The proposed approach reduces system power dissipation as far as achievable at runtime through dynamic voltage and frequency scaling (DVFS) with laxity allocation. CESA accepts a number of tasks (i.e., workloads) and creates clusters, each with one high-criticality workload and a set of low-criticality workloads. It calculates the available laxities and finds the most suitable task cluster to utilize each laxity, considering its effect on instantaneous power consumption and thermal behavior. At the same time, varying the core speed, assigning an appropriate cluster to the remaining laxity, and selecting a suitable core for task migration at runtime are arduous endeavors and can lead to deadline violations, which are not acceptable for high-criticality workloads. Hence, we propose an online scheduling approach with DVFS and task migration at runtime whenever laxity is available. A cost function is defined to find the most suitable cluster to which to allot the laxities, either reducing its V/F level or migrating a task to a new processing element. We assess the performance of our approach on an asymmetric multicore platform (i.e., an ARM big.LITTLE processor) with several benchmark task sets.
Empirical results demonstrate that the proposed algorithm achieves up to a 6.76% drop in maximum power and a 26.17% drop in core temperature relative to the state-of-the-art method.
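The laxity-driven DVFS idea at the core of such approaches can be illustrated with a minimal sketch; the frequency levels and task parameters below are invented, and CESA's actual clustering and cost function are considerably more elaborate:

```python
# Sketch of laxity-driven DVFS: a task's laxity is the slack before its
# deadline at full speed; that slack can be spent by running at a lower
# (more energy-efficient) V/F level that still meets the deadline.
# All parameters are hypothetical.
FREQ_LEVELS = [1.0, 0.8, 0.6]  # normalized V/F levels, highest first

def laxity(task, now):
    # slack before the deadline if the task runs at full speed
    return task["deadline"] - now - task["remaining_wcet"]

def pick_level(task, now):
    # pick the lowest frequency that still meets the deadline
    for f in sorted(FREQ_LEVELS):
        if task["remaining_wcet"] / f <= task["deadline"] - now:
            return f
    return max(FREQ_LEVELS)

t = {"deadline": 20.0, "remaining_wcet": 6.0}
assert laxity(t, 5.0) == 9.0       # 20 - 5 - 6
assert pick_level(t, 5.0) == 0.6   # 6 / 0.6 = 10 <= 15
```

The laxity of 9 time units lets the task run at the lowest level (0.6), since even the stretched execution time of 10 units fits in the 15 units remaining before the deadline.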
... The evolution of the industrial component model for multi-criticality vehicular software is addressed by Bucaioni et al. [123]. In a wider context, Simo et al. [564] discuss the role of MCS within the context of Industry 4.0. An analysis of task parameters for automotive application is presented by Nair et al. [473]. ...
This review covers research on the topic of mixed criticality systems that has been published since Vestal’s 2007 paper. It covers the period up to end of 2021. The review is organised into the following topics: introduction and motivation, models, single processor analysis (including job-based, hard and soft tasks, fixed priority and EDF scheduling, shared resources and static and synchronous scheduling), multiprocessor analysis, related topics, realistic models, formal treatments, systems issues, industrial practice and research beyond mixed-criticality. A list of PhDs awarded for research relating to mixed-criticality systems is also included.
... For example, in an aeroplane, the correct operation of the engines is of higher criticality than the onboard intercom system. With the seminal work by Vestal in 2007 [1], scheduling of mixed-criticality systems became an active research field [2][3][4][5][6][7][8]. ...
... Level | PFD | PFH
4 | 10^-4 to 10^-5 | 10^-8 to 10^-9
3 | 10^-3 to 10^-4 | 10^-7 to 10^-8
2 | 10^-2 to 10^-3 | 10^-6 to 10^-7
1 | 10^-1 to 10^-2 | 10^-5 to 10^-6 ...
Many safety-critical systems use criticality arithmetic, an informal practice of implementing a higher-criticality function by combining several lower-criticality redundant components or tasks. This lowers the cost of development, but existing mixed-criticality schedulers may act incorrectly as they lack the knowledge that the lower-criticality tasks are operating together to implement a single higher-criticality function. In this paper, we propose a solution to this problem by presenting a mixed-criticality mid-term scheduler that considers where criticality arithmetic is used in the system. As this scheduler, which we term ATMP-CA, is a mid-term scheduler, it changes the configuration of the system when needed based on the recent history of deadline misses. We present the results from a series of experiments that show that ATMP-CA’s operation provides a smoother degradation of service compared with reference schedulers that do not consider the use of criticality arithmetic.
In recent decades, mixed-criticality systems have been widely adopted to reduce the complexity and development times of real-time critical applications. In these systems, applications run on a separation kernel hypervisor, a software element that controls the execution of the different operating systems, providing a virtualized environment and ensuring the necessary spatial and temporal isolation. The guest code can run unmodified and unaware of the hypervisor or be explicitly modified to have a tight coupling with the hypervisor. The former is known as full virtualization, while the latter is known as para-virtualization. Full virtualization offers better compatibility and flexibility than para-virtualization, at the cost of a performance penalty.
LEON is a processor family that implements the SPARC V8 architecture and whose use is widespread in the field of space systems. To the best of our knowledge, all separation kernel hypervisors designed to support the development of mixed-criticality systems for LEON employ para-virtualization, which hinders the adaptation of real-time operating systems.
This paper presents the design of a Virtualization Monitor that allows guest real-time operating systems to run virtualized on LEON-based systems without needing to modify their source code. It is designed as a standalone component within a hypervisor and incorporates a set of techniques such as static binary rewriting, automatic code generation, and the use of operating system profiles. To validate the proposed solution, tests and benchmarks have been implemented for three guest systems: RTEMS, FreeRTOS, and Zephyr, analyzing the overhead introduced in certain situations characteristic of real-time applications. Finally, the same benchmarks have been run on AIR, one of the hypervisors that uses para-virtualization. The results obtained show that the use of the proposed techniques allows us to obtain similar results to those obtained using para-virtualization without the need to modify the source code of the guest real-time operating systems.
Modern embedded real-time systems (RTS) face more security threats than in the past. A simplistic, straightforward integration of security mechanisms might not guarantee the safety and predictability of such systems. In this paper, we focus on integrating security mechanisms into RTS (especially legacy RTS). We introduce Contego-C, an analytical model for integrating security tasks into RTS that allows system designers to improve the security posture without affecting the temporal and control constraints of the existing real-time control tasks. We also define a metric (named tightness of periodic monitoring) to measure the effectiveness of such integration. We demonstrate our ideas using a proof-of-concept implementation on an ARM-based rover platform and show that Contego-C can improve security without degrading control performance.
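The general idea of fitting a periodic security task into an existing real-time workload without disturbing it can be sketched with a classical schedulability test; the task parameters are invented, the Liu-Layland rate-monotonic bound stands in for Contego-C's own analysis, and "tightness" here is only loosely mirrored by preferring the smallest feasible period:

```python
# Hypothetical sketch: find the smallest period at which a security monitoring
# task of WCET c_sec can be added to an existing rate-monotonic task set while
# keeping total utilization under the Liu-Layland bound n * (2^(1/n) - 1).
# A smaller period means more frequent (tighter) monitoring.
tasks = [(2, 10), (3, 15)]  # (WCET, period) of existing control tasks

def rm_bound(n):
    # Liu-Layland utilization bound for n tasks under rate-monotonic scheduling
    return n * (2 ** (1 / n) - 1)

def min_secure_period(c_sec, limit=1000):
    base_u = sum(c / p for c, p in tasks)
    n = len(tasks) + 1
    for p in range(1, limit):
        if base_u + c_sec / p <= rm_bound(n):
            return p  # smallest period that keeps the set schedulable
    return None
```

With the sample set (utilization 0.4) and a one-unit security task, the smallest admissible period is 3, since 0.4 + 1/3 stays under the three-task bound of about 0.78.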