The MAJIS (Moons And Jupiter Imaging Spectrometer) instrument on board the ESA JUICE (JUpiter ICy moon Explorer) mission is an imaging spectrometer operating in the visible and near-infrared spectral range from 0.50 to 5.55 μm in two spectral channels with a boundary at 2.3 μm and spectral samplings for the VISNIR and IR channels better than 4 nm/band and 7 nm/band, respectively. The IFOV is 150 μrad over a total of 400 pixels. As amply demonstrated by past and present operative planetary space missions, an imaging spectrometer of this type can span a wide range of scientific objectives, from the surface through the atmosphere and exosphere. MAJIS is therefore well suited for a comprehensive study of the icy satellites, with particular emphasis on Ganymede, the Jupiter atmosphere, including its aurorae, and the spectral characterization of the whole Jupiter system, including the ring system, small inner moons, and targets of opportunity whenever feasible. The accurate measurement of radiance from the different targets, in some cases particularly faint due to strong absorption features, requires a very sensitive cryogenic instrument operating in a severe radiation environment. In this respect MAJIS is the state-of-the-art imaging spectrometer devoted to these objectives in the outer Solar System, and its passive cooling system without cryocoolers makes it potentially robust for a long-duration mission such as JUICE. In this paper we report the scientific objectives, discuss the design of the instrument including its complex on-board pipeline, highlight the achieved performance, and address the observation plan with the relevant instrument modes.
Modern applications are often characterized by a tight interaction with I/O devices. At the same time, many application domains are facing a shift towards an integrated approach where multiple applications with mixed levels of safety and security need to co-exist on top of a shared hardware platform, typically managed by a hypervisor. This gives rise to the need for a predictable mechanism allowing multiple virtual machines to share I/O devices while controlling contention delays when they access global memory. To address these needs, this paper proposes an I/O virtualization framework that supports controlling I/O-related memory contention by leveraging the ARM QoS-400 regulators. Extensive experiments are performed to compare the proposed solution with the Xen hypervisor, showing improvements of up to 8x when controlling I/O-related memory contention.
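To make the regulation mechanism concrete: CoreLink QoS-400 regulators sit in front of interconnect masters and are programmed through memory-mapped registers that cap transaction rate and outstanding transactions. The C sketch below illustrates this style of configuration; the base address, register offsets, and field encodings are hypothetical placeholders rather than the actual QoS-400 register map, which must be taken from the SoC documentation.

```c
/* Minimal sketch: throttling a DMA master's memory traffic through a
 * QoS-400-style regulator. All register offsets and the base address
 * are hypothetical placeholders; real values come from the SoC's
 * interconnect memory map and the QoS-400 TRM. */
#include <stdint.h>

#define QOS_READ_RATE_OFF   0x100U  /* hypothetical rate register       */
#define QOS_WRITE_RATE_OFF  0x104U
#define QOS_MAX_OT_OFF      0x108U  /* max outstanding transactions     */

static inline void reg_write(uintptr_t addr, uint32_t val)
{
    *(volatile uint32_t *)addr = val;
}

/* Cap the bandwidth and outstanding transactions of one I/O master so
 * that its DMA traffic cannot saturate the shared memory controller. */
void qos_limit_io_master(uintptr_t regulator_base,
                         uint32_t rate_fraction,   /* fixed-point share */
                         uint32_t max_outstanding)
{
    reg_write(regulator_base + QOS_READ_RATE_OFF,  rate_fraction);
    reg_write(regulator_base + QOS_WRITE_RATE_OFF, rate_fraction);
    reg_write(regulator_base + QOS_MAX_OT_OFF,     max_outstanding);
}
```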
Embedded systems used in critical systems, such as aeronautics, have undergone continuous evolution in recent years. In this evolution, many of the functionalities offered by these systems have been adapted through the introduction of network services that achieve high levels of interconnectivity. The high availability of access to communications networks has enabled the development of new applications that introduce control functions with higher levels of intelligence and adaptation. In these applications, it is necessary to manage different components of an application according to their levels of criticality. The concept of “Industry 4.0” has recently emerged to describe high levels of automation and flexibility in production. The digitization and extensive use of information technologies have become the key to industrial systems. Due to their growing importance and social impact, industrial systems have become part of the systems that are considered critical. This evolution of industrial systems forces the appearance of new technical requirements for software architectures that enable the consolidation of multiple applications in common hardware platforms, including those of different criticality levels. These enabling technologies, together with the use of reference models and standardization, facilitate the effective transition to this approach. This article analyses the structure of Industry 4.0 systems, providing a comprehensive review of existing techniques. The levels and mechanisms of interaction between components are analyzed while considering the impact that the handling of multiple levels of criticality has on the architecture itself and on the functionalities of the support middleware. Finally, this paper outlines some of the challenges, from a technological and research point of view, that the authors identify as crucial for the successful development of these technologies.
Given the increasingly complex and mixed-criticality nature of modern embedded systems, virtualization emerges as a natural solution to achieve strong spatial and temporal isolation. Widely used hypervisors such as KVM and Xen were not designed with embedded constraints and requirements in mind. The static partitioning architecture pioneered by Jailhouse seems to address embedded concerns. However, Jailhouse still depends on Linux to boot and manage its VMs. In this paper, we present the Bao hypervisor, a minimal, standalone and clean-slate implementation of the static partitioning architecture for Armv8 and RISC-V platforms. Preliminary results regarding size, boot, performance, and interrupt latency show that this approach incurs only minimal virtualization overhead. Bao will soon be publicly available, in hopes of engaging both industry and academia in improving Bao's safety, security, and real-time guarantees.
Real-time embedded platforms with resource constraints can benefit from mixed-criticality systems, where applications with different criticality levels share computational resources with isolation in the temporal and spatial domains. A conventional software-based isolation mechanism adds overhead and requires certification to the highest level of criticality present in the system, which is often an expensive process. In this work, we present a different approach where the required isolation is established at the hardware level by featuring partitions within the processor. A 4-stage pipelined soft-processor with replicated resources in the data-path is introduced to establish isolation and avert interference between the partitions. A cycle-accurate scheduling mechanism is implemented in hardware for hard real-time partition scheduling that can accommodate different periodicity and execution time for each partition as per user needs, while preserving time-predictability at the individual application level. Applications running within a partition have no sense of the virtualization and can execute either on host software or directly on the hardware. The proposed architecture is implemented on an FPGA and demonstrated with an avionics use case.
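A minimal sketch of the kind of static, cycle-accurate partition schedule described above, assuming a table-driven hardware scheduler; the structure and field names are illustrative, not the paper's actual interface:

```c
/* Hypothetical table consumed by a hardware partition scheduler: the
 * major frame is divided into slots, and the hardware switches
 * partitions at exact cycle boundaries, so no partition can overrun
 * into its neighbour. */
#include <stdint.h>

struct partition_slot {
    uint8_t  partition_id;   /* which hardware partition runs       */
    uint32_t start_cycle;    /* offset within the major frame       */
    uint32_t length_cycles;  /* exact execution budget in cycles    */
};

/* Major frame of 1,000,000 cycles split among three partitions. */
static const struct partition_slot schedule[] = {
    { .partition_id = 0, .start_cycle = 0,      .length_cycles = 600000 },
    { .partition_id = 1, .start_cycle = 600000, .length_cycles = 300000 },
    { .partition_id = 2, .start_cycle = 900000, .length_cycles = 100000 },
};
```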
Sensitive data processing increasingly occurs on machines or devices outside users' control. In the Internet of Things world, for example, the security of data can be put at risk regardless of whether the adopted deployment is oriented towards Cloud or Edge Computing. In these systems, different categories of attacks, such as physical bus sniffing, cold boot, cache side-channel, buffer overflow, code-reuse, or Iago attacks, can be carried out. Software-based countermeasures have been proposed. However, the severity and complexity of these attacks require a level of security that only hardware support can ensure. In recent years, major companies have released a number of architectural extensions aiming to provide hardware-assisted security to software. In this paper, we present a comprehensive survey of hardware-assisted technological solutions produced by vendors such as Intel, AMD, and ARM for both embedded edge devices and hosting machines such as cloud servers. The different approaches are classified based on the type of attacks prevented and the techniques enforced. An analysis of their mechanisms, issues, and market adoption is provided to support the investigations of researchers approaching this field of systems security.
The world is undergoing an unprecedented technological transformation, evolving into a state where ubiquitous Internet-enabled “things” will be able to generate and share large amounts of security- and privacy-sensitive data. To cope with the security threats that are thus foreseeable, system designers can find in Arm TrustZone hardware technology a most valuable resource. TrustZone is a System-on-Chip and CPU system-wide security solution, available on today’s Arm application processors and present in the new generation Arm microcontrollers, which are expected to dominate the market of smart “things.” Although this technology has remained relatively underground since its inception in 2004, over the past years, numerous initiatives have significantly advanced the state of the art involving Arm TrustZone. Motivated by this revival of interest, this paper presents an in-depth study of TrustZone technology. We provide a comprehensive survey of relevant work from academia and industry, organizing existing systems into two main areas, namely, Trusted Execution Environments and hardware-assisted virtualization. Furthermore, we analyze the most relevant weaknesses of existing systems and propose new research directions within the realm of tiniest devices and the Internet of Things, which we believe to have potential to yield high-impact contributions in the future.
Multi-core CPUs are a standard component in many modern embedded systems. Their virtualisation extensions enable the isolation of services and are gaining popularity for implementing mixed-criticality or otherwise split systems. We present Jailhouse, a Linux-based, OS-agnostic partitioning hypervisor that uses novel architectural approaches to combine Linux, a powerful general-purpose system, with strictly isolated special-purpose components. Our design goals favour simplicity over features, establish a minimal code base, and minimise hypervisor activity. Direct assignment of hardware to guests, together with a deferred initialisation scheme, offloads any complex hardware handling and bootstrapping issues from the hypervisor to the general-purpose OS. The hypervisor establishes isolated domains that directly access physical resources without the need for emulation or paravirtualisation. This retains, with negligible system overhead, Linux's feature-richness in uncritical parts, while frugal safety- and real-time-critical workloads execute in isolated, safe domains.
The PikeOS microkernel is targeted at real-time embedded systems. Its main goal is to provide a partitioned environment for multiple operating systems with different design goals to coexist in a single machine. It was initially modelled after the L4 microkernel and has gradually evolved over the years of its application to the real-time, embedded systems space. This paper describes the concepts that were added or removed during this evolution and it provides the rationale behind these design decisions.
Hierarchical Scheduling (HS) techniques achieve resource partitioning among a set of real-time applications, providing reduction of complexity, confinement of failure modes, and temporal isolation among system applications. This facilitates compositional analysis for architectural verification and plays a crucial role in all industrial areas where high-performance microprocessors allow growing integration of multiple applications on a single platform. We propose a compositional approach to formal specification and schedulability analysis of real-time applications running under a Time Division Multiplexing (TDM) global scheduler and preemptive Fixed Priority (FP) local schedulers, according to the ARINC-653 standard. As a characterizing trait, each application is made of periodic, sporadic, and jittering tasks with offsets, jitters, and nondeterministic execution times, encompassing intra-application synchronizations through semaphores and mailboxes and interapplication communications among periodic tasks through message passing. The approach leverages the assumption of a TDM partitioning to enable compositional design and analysis based on the model of preemptive Time Petri Nets (pTPNs), which is expressly extended with a concept of Required Interface (RI) that specifies the embedding environment of an application through sequencing and timing constraints. This enables exact verification of intra-application constraints and approximate but safe verification of interapplication constraints. Experimentation illustrates results and validates their applicability on two challenging workloads in the field of safety-critical avionic systems.
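For intuition about why a TDM partitioning enables compositional analysis, the standard periodic resource abstraction (not the paper's exact pTPN machinery) bounds the CPU supply guaranteed to a partition with budget Q per major frame of length P, and reduces local schedulability to a demand/supply comparison:

```latex
% Linear lower bound on the supply of a TDM partition with budget Q
% every P time units (standard periodic resource model; shown here
% only as intuition, the surveyed approach performs exact analysis):
\[
  \mathrm{sbf}(t) \;\ge\; \mathrm{lsbf}(t)
  \;=\; \max\!\Bigl(0,\; \tfrac{Q}{P}\bigl(t - 2(P - Q)\bigr)\Bigr).
\]
% A locally FP-scheduled application is then schedulable inside its
% partition if its processor demand never exceeds the supply:
\[
  \forall t > 0:\quad \mathrm{dbf}(t) \;\le\; \mathrm{sbf}(t).
\]
```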
ARM is the dominant processor architecture for mobile devices and many other high-end embedded systems. Late last year ARM announced architectural support for virtualization, which will allow execution of unmodified guest operating system binaries. We have designed and implemented what we believe is the first hypervisor supporting pure virtualization using those hardware extensions and evaluated it on simulated hardware. We describe our approach and report our initial experience with the architecture.
XtratuM is a hypervisor designed to meet safety-critical requirements. Initially designed for x86 architectures (version 2.0), it has been strongly redesigned for the SPARC v8 architecture, and specifically for the LEON2 processor. The current version 2.2 includes all the functionalities required to build safety-critical systems based on ARINC 653, AUTOSAR, and other standards. Although XtratuM does not provide an API compliant with these standards, partitions can easily offer the appropriate API to the applications. XtratuM is being used by the aerospace sector to build software building blocks of future generic on-board software dedicated to payload management units. XtratuM provides an ARINC 653 scheduling policy, partition management, inter-partition communications, health monitoring, logbooks, traces, and other services so that it can easily be adapted to the ARINC standard. The configuration of the system is specified in a configuration file (XML format) and is compiled to achieve a static configuration of the final container (XtratuM and the partitions' code) to be deployed to the hardware board. As far as we know, XtratuM is the first hypervisor for the SPARC v8 architecture. In this paper, the main design aspects are discussed and the internal architecture described. An evaluation of the most significant metrics is also provided. This evaluation shows that the overhead of the hypervisor is lower than 3% if the slot duration is higher than 1 millisecond.
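As an illustration of the static-configuration idea, the XML configuration could plausibly be compiled into a constant table linked into the final container; the structure below is a hypothetical sketch in C, not XtratuM's actual internal layout:

```c
/* Hypothetical compiled form of a two-partition XML configuration:
 * memory assignment plus an ARINC 653-style slot in the major frame
 * (MAF). Field names and layout are illustrative assumptions. */
#include <stdint.h>

struct xm_partition_cfg {
    const char *name;
    uint32_t    mem_base;     /* physical base of the partition image */
    uint32_t    mem_size;     /* bytes of RAM assigned                */
    uint32_t    slot_offset;  /* microseconds from the start of the MAF */
    uint32_t    slot_length;  /* microseconds of CPU time per MAF       */
};

/* 10 ms major frame split between a critical and a payload partition. */
static const struct xm_partition_cfg xm_cfg[] = {
    { "flight_ctrl", 0x40100000, 0x00100000, 0,    6000 },
    { "payload",     0x40200000, 0x00200000, 6000, 4000 },
};
```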
Hierarchical Scheduling (HS) systems manage a set of real-time applications through a scheduling hierarchy, enabling partitioning and reduction of complexity, confinement of failure modes, and temporal isolation among system applications. This plays a crucial role in all industrial areas where high-performance microprocessors allow growing integration of multiple applications on a single platform.
We propose a formal approach to the development of real-time applications with non-deterministic execution times and local resource sharing managed by a Time Division Multiplexing (TDM) global scheduler and preemptive Fixed Priority (FP) local schedulers, according to the scheduling hierarchy prescribed by the ARINC-653 standard. The methodology leverages the theory of preemptive Time Petri Nets (pTPNs) to support exact schedulability analysis, to guide the implementation on a Real-Time Operating System (RTOS), and to drive functional conformance testing of the real-time code. Computational experience is reported to show the feasibility of the approach.
Satellite on-board systems spend their lives in hostile environments where radiation can cause critical hardware failures. One of the most radiation-sensitive elements is memory. The so-called Single Event Effects (SEEs) can corrupt or even irretrievably damage the cells that store the data and program instructions. When one of these cells is corrupted, the program must not use it again during execution. In order to avoid rebuilding and uploading the code, a memory management unit can be used to transparently relocate the program to an error-free memory region. This paper presents the design and implementation of a memory management unit that allows the dynamic relocation of on-board software. This unit provides a hardware mechanism that allows the automatic relocation of sections of code or data at run-time, only requiring software intervention for initialization and configuration. The unit has been implemented on the LEON architecture, a reference for European Space Agency missions. The proposed solution has been validated using the boot and application software of the instrument control unit of the Energetic Particle Detector of the Solar Orbiter mission as a base. Processor synthesis on different FPGAs has shown resource usage and power consumption similar to that of a conventional memory management unit. The results vary between 1-15% in resource usage and 1-7% in power consumption, depending on the number of inputs assigned to the unit and the FPGA used. When comparing performance, both the proposed and conventional memory management units show the same results.
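A sketch of what the software-side initialization and configuration of such a relocation unit might look like; the register names, offsets, and base address are hypothetical, standing in for the unit's real programming interface:

```c
/* Hypothetical programming interface of a relocation unit: after a
 * memory region is flagged as damaged by SEEs, boot software points
 * one of the unit's entries at a spare region and the hardware then
 * transparently redirects all accesses. */
#include <stdint.h>

#define RELOC_BASE      0x80000F00UL          /* hypothetical APB base */
#define RELOC_SRC(n)    (RELOC_BASE + 0x10*(n) + 0x0)
#define RELOC_DST(n)    (RELOC_BASE + 0x10*(n) + 0x4)
#define RELOC_CTRL(n)   (RELOC_BASE + 0x10*(n) + 0x8)
#define RELOC_EN        0x1U

static inline void w32(uintptr_t a, uint32_t v)
{
    *(volatile uint32_t *)a = v;
}

/* Relocate a code/data section away from a damaged memory region. */
void relocate_section(int entry, uint32_t bad_base, uint32_t spare_base)
{
    w32(RELOC_SRC(entry),  bad_base);    /* addresses to intercept   */
    w32(RELOC_DST(entry),  spare_base);  /* error-free target region */
    w32(RELOC_CTRL(entry), RELOC_EN);    /* enable transparent remap */
}
```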
In recent decades, mixed-criticality systems have been widely adopted to reduce the complexity and development times of real-time critical applications. In these systems, applications run on a separation kernel hypervisor, a software element that controls the execution of the different operating systems, providing a virtualized environment and ensuring the necessary spatial and temporal isolation. The guest code can run unmodified and unaware of the hypervisor or be explicitly modified to have a tight coupling with the hypervisor. The former is known as full virtualization, while the latter is known as para-virtualization. Full virtualization offers better compatibility and flexibility than para-virtualization, at the cost of a performance penalty.
LEON is a processor family that implements the SPARC V8 architecture and whose use is widespread in the field of space systems. To the best of our knowledge, all separation kernel hypervisors designed to support the development of mixed-criticality systems for LEON employ para-virtualization, which hinders the adaptation of real-time operating systems.
This paper presents the design of a Virtualization Monitor that allows guest real-time operating systems to run virtualized on LEON-based systems without needing to modify their source code. It is designed as a standalone component within a hypervisor and incorporates a set of techniques such as static binary rewriting, automatic code generation, and the use of operating system profiles. To validate the proposed solution, tests and benchmarks have been implemented for three guest systems: RTEMS, FreeRTOS, and Zephyr, analyzing the overhead introduced in certain situations characteristic of real-time applications. Finally, the same benchmarks have been run on AIR, one of the hypervisors that uses para-virtualization. The results obtained show that the use of the proposed techniques allows us to obtain similar results to those obtained using para-virtualization without the need to modify the source code of the guest real-time operating systems.
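To illustrate the static binary rewriting technique, the sketch below scans a SPARC V8 text segment for privileged instructions and patches each one with a trap into the monitor. The op3 opcodes follow the SPARC V8 manual; the trap encoding and the handler protocol are illustrative assumptions, not the paper's actual implementation:

```c
/* Scan a guest image for privileged SPARC V8 instructions and patch
 * each with a trap into the Virtualization Monitor. */
#include <stdint.h>
#include <stddef.h>

#define OP(insn)   ((insn) >> 30)          /* bits 31:30 */
#define OP3(insn)  (((insn) >> 19) & 0x3f) /* bits 24:19 */

static int is_privileged(uint32_t insn)
{
    if (OP(insn) != 2)
        return 0;
    switch (OP3(insn)) {
    case 0x29: /* RDPSR */
    case 0x31: /* WRPSR */
    case 0x39: /* RETT  */
        return 1;
    default:
        return 0;
    }
}

/* Replace each privileged instruction with a 'ta' (trap always) whose
 * software trap number lets the monitor recover the original
 * instruction from a side table (this protocol is an assumption). */
void rewrite_guest(uint32_t *text, size_t n_insns, uint32_t trap_insn)
{
    for (size_t i = 0; i < n_insns; i++)
        if (is_privileged(text[i]))
            text[i] = trap_insn;
}
```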
This article describes the first public implementation and evaluation of the latest version of the RISC-V hypervisor extension (H-extension v0.6.1) specification in a Rocket chip core. To perform a meaningful evaluation for modern multi-core embedded and mixed-criticality systems, we have ported Bao, an open-source static partitioning hypervisor, to RISC-V. We have also extended the RISC-V platform-level interrupt controller (PLIC) to enable direct guest interrupt injection with low and deterministic latency and we have enhanced the timer infrastructure to avoid trap and emulation overheads. Experiments were carried out in FireSim, a cycle-accurate, FPGA-accelerated simulator, and the system was also successfully deployed and tested in a Zynq UltraScale+ MPSoC ZCU104. Our hardware implementation was open-sourced and is currently in use by the RISC-V community towards the ratification of the H-extension specification.
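For context, entering a guest under the H-extension revolves around a handful of new CSRs; a simplified sketch, with bit positions per the draft specification and all context saving and error handling omitted:

```c
/* Minimal sketch of guest entry on the RISC-V H-extension: program
 * second-stage translation in hgatp, set hstatus.SPV so that sret
 * lands in VS-mode, and jump to the guest entry point. */
#include <stdint.h>

#define HSTATUS_SPV        (1UL << 7)   /* next sret returns to VS-mode */
#define HGATP_MODE_SV39X4  (8UL << 60)  /* Sv39x4 guest-stage translation */

static inline void csr_write_hgatp(uint64_t v)
{
    asm volatile("csrw hgatp, %0" :: "r"(v));
}

static inline void csr_set_hstatus(uint64_t mask)
{
    asm volatile("csrs hstatus, %0" :: "r"(mask));
}

void enter_guest(uint64_t g_stage_root_ppn, uint64_t guest_entry)
{
    csr_write_hgatp(HGATP_MODE_SV39X4 | g_stage_root_ppn);
    csr_set_hstatus(HSTATUS_SPV);
    asm volatile("csrw sepc, %0\n\t"
                 "sret"              /* resume in VS-mode at guest_entry */
                 :: "r"(guest_entry));
}
```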
All the time integrated modular avionics (IMA) is a hot research topic in fields of aircraft and spacecraft. IMA mainly focuses on the deterministic partition protection in avionics resource sharing, which is the core technology to allow mixed-criticality applications developed by multi-vendors to run on the same system. Firstly, the avionic development before 2010 year is reviewed briefly, mainly focused on the key technologies in IMA. Furthermore, heterogeneous computing and virtualization are surveyed respectively, because they grew most rapidly during 2010 and 2020. The content surveyed cover the background and reasons of the fast growth, the critical technologies and their development trends. Finally, some technology and management recommendations are proposed to promote avionics development in the next decade.
Real-time systems are characterised by the fact that they have to meet a set of both functional and temporal requirements. Processor architectures have a significant impact on the predictability of software execution times and can add different sources of indeterminism depending on the features provided. The LEON processor family is the reference platform for space missions of the European Space Agency, with open-source implementations that are written in VHDL language. All versions of the LEON processors conform to the SPARC architecture Version 8. This architecture groups the general-purpose registers into windows to reduce memory transfer overhead in function calls. Unfortunately, this mechanism introduces indeterminism in software execution times at various levels. In this paper, we propose an extension to the original architecture that provides determinism for a configurable subset of tasks and interrupt service routines and eliminates the concurrency-related jitter, all this with a minimum cost in terms of FPGA resource utilisation. For the validation of the proposed solution, we have implemented the extension into the VHDL code of the LEON3 processor and modified the source code of the RTEMS operating system to make use of the new functionality.
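A hypothetical fragment illustrating the jitter source the proposed extension eliminates: with 8 register windows (typical for LEON3), shallow call chains execute without memory traffic, while deeper ones raise window-overflow traps whose handlers spill 16 registers per window to the stack, so the cost of the same code depends on the window state left behind by preempted tasks:

```c
/* Each nested call consumes one register window; once the windows are
 * exhausted, every further call traps and spills to memory. Whether a
 * given call overflows depends on how many windows were already in
 * use when the task was dispatched, which is the concurrency-related
 * jitter the extension removes for selected tasks and ISRs. */
static volatile int sink;

static int depth_call(int n)
{
    if (n == 0)
        return 0;
    return 1 + depth_call(n - 1);  /* one more window per level */
}

void jitter_demo(void)
{
    sink = depth_call(4);   /* may or may not overflow: depends on   */
                            /* the window pointer at dispatch time   */
    sink = depth_call(16);  /* guaranteed to trap and spill windows  */
}
```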
Embedded virtualization has gained attention in recent years due to the increasing usage of embedded systems in cyber-physical systems and the Industry 4.0 revolution. Especially in combination with multi-core embedded systems, virtualization reduces the number of embedded systems and simultaneously delivers a secure and separated environment in each virtualized system. Applications in such cyber-physical systems often require real-time guarantees with hard deadlines. To guarantee those real-time constraints under virtualization, both the hypervisor and the guest operating system must support real-time scheduling. Selecting the optimal scheduling algorithm at both scheduling levels is hard, and a choice is only optimal for the analysed application. Due to the multiple scheduling levels, a set of scheduling algorithm combinations must be analysed, which is too costly without analysis at higher abstraction levels. By using an analysis methodology that finds this optimal combination through higher-abstraction-level analyses, we reduce the candidate set at every abstraction level. In this paper, we present a real-time hypervisor, based on Xvisor, for multi-core embedded systems. We modified the hypervisor to support real-time scheduling and compositional schedulability analysis, and validated the analysis methodology using this embedded hypervisor.
This survey covers research into mixed criticality systems that has been published since Vestal’s seminal paper in 2007, up until the end of 2016. The survey is organised along the lines of the major research areas within this topic. These include single processor analysis (including fixed priority and Earliest Deadline First (EDF) scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, realistic models, and systems issues. The survey also explores the relationship between research into mixed criticality systems and other topics such as hard and soft time constraints, fault tolerant scheduling, hierarchical scheduling, cyber physical systems, probabilistic real-time systems, and industrial safety standards.
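The common starting point of the surveyed literature is Vestal's task model, in which each task carries one WCET estimate per criticality level; in LO-criticality mode, the standard fixed-priority response-time recurrence applies with the LO budgets:

```latex
% Vestal's model (2007): each task has a period, deadline, criticality
% level, and a vector of WCET estimates, non-decreasing in the level
% of assurance:
\[
  \tau_i = \bigl(T_i,\, D_i,\, L_i,\, \vec{C}_i\bigr),
  \qquad
  C_i(\mathrm{LO}) \;\le\; C_i(\mathrm{HI}).
\]
% In LO-criticality mode every task runs with its LO budget, and the
% usual fixed-priority response-time analysis reads
\[
  R_i^{\mathrm{LO}} = C_i(\mathrm{LO})
   + \sum_{j \in \mathit{hp}(i)}
     \left\lceil \frac{R_i^{\mathrm{LO}}}{T_j} \right\rceil C_j(\mathrm{LO}).
\]
```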
This book focuses on the core question of the necessary architectural support provided by hardware to efficiently run virtual machines, and of the corresponding design of the hypervisors that run them. Virtualization is still possible when the instruction set architecture lacks such support, but the hypervisor remains more complex and must rely on additional techniques. Despite the focus on architectural support in current architectures, some historical perspective is necessary to appropriately frame the problem. The first half of the book provides the historical perspective of the theoretical framework developed four decades ago by Popek and Goldberg. It also describes earlier systems that enabled virtualization despite the lack of architectural support in hardware. As is often the case, theory defines a necessary, but not sufficient, set of features, and modern architectures are the result of the combination of the theoretical framework with insights derived from practical systems. The second half of the book describes state-of-the-art support for virtualization in both x86-64 and ARM processors. This book includes an in-depth description of the CPU, memory, and I/O virtualization of these two processor architectures, as well as case studies on the Linux/KVM, VMware, and Xen hypervisors. It concludes with a performance comparison of virtualization on current-generation x86- and ARM-based systems across multiple hypervisors.
Surface water storage and fluxes in rivers, lakes, reservoirs and wetlands are currently poorly observed at the global scale, even though they represent major components of the water cycle and deeply impact human societies. In situ networks are heterogeneously distributed in space, and many river basins and most lakes—especially in the developing world and in sparsely populated regions—remain unmonitored. Satellite remote sensing has provided useful complementary observations, but no past or current satellite mission has yet been specifically designed to observe, at the global scale, surface water storage change and fluxes. This is the purpose of the planned Surface Water and Ocean Topography (SWOT) satellite mission. SWOT is a collaboration between the (US) National Aeronautics and Space Administration, Centre National d’Études Spatiales (the French Spatial Agency), the Canadian Space Agency and the United Kingdom Space Agency, with launch planned in late 2020. SWOT is both a continental hydrology and oceanography mission. However, only the hydrology capabilities of SWOT are discussed here. After a description of the SWOT mission requirements and measurement capabilities, we review the SWOT-related studies concerning land hydrology published to date. Beginning in 2007, studies demonstrated the benefits of SWOT data for river hydrology, both through discharge estimation directly from SWOT measurements and through assimilation of SWOT data into hydrodynamic and hydrology models. A smaller number of studies have also addressed methods for computation of lake and reservoir storage change or have quantified improvements expected from SWOT compared with current knowledge of lake water storage variability. We also briefly review other land hydrology capabilities of SWOT, including those related to transboundary river basins, human water withdrawals and wetland environments. Finally, we discuss additional studies needed before and after the launch of the mission, along with perspectives on a potential successor to SWOT.
The complexity of industrial embedded systems is increasing continuously. Companies try to keep a leading position by offering additional functionalities and services. Systems are composed of multiple sensor, actuation and computation subsystems running in a coordinated way on a distributed platform. Due to the increase in processor power, it is possible to allocate a large number of functions to the same platform. This gives rise to mixed-criticality systems, where components with different criticality levels coexist in the same processor. This approach can lead to prohibitive certification costs. A better approach is to rely on partitioned systems, based on a hypervisor that isolates each of the virtual machines in the system. Components with different criticality levels are allocated to different partitions in order to prevent interferences. The aim of this paper is to introduce mixed-criticality systems, to present the most challenging research topics, and to provide some background on the most promising techniques and research activities.
This article describes the historical context, technical challenges, and main implementation techniques used by VMware Workstation to bring virtualization to the x86 architecture in 1999. Although virtual machine monitors (VMMs) had been around for decades, they were traditionally designed as part of monolithic, single-vendor architectures with explicit support for virtualization. In contrast, the x86 architecture lacked virtualization support, and the industry around it had disaggregated into an ecosystem, with different vendors controlling the computers, CPUs, peripherals, operating systems, and applications, none of them asking for virtualization. We chose to build our solution independently of these vendors.
As a result, VMware Workstation had to deal with new challenges associated with (i) the lack of virtualization support in the x86 architecture, (ii) the daunting complexity of the architecture itself, (iii) the need to support a broad combination of peripherals, and (iv) the need to offer a simple user experience within existing environments. These new challenges led us to a novel combination of well-known virtualization techniques, techniques from other domains, and new techniques.
VMware Workstation combined a hosted architecture with a VMM. The hosted architecture enabled a simple user experience and offered broad hardware compatibility. Rather than exposing I/O diversity to the virtual machines, VMware Workstation also relied on software emulation of I/O devices. The VMM combined a trap-and-emulate direct execution engine with a system-level dynamic binary translator to efficiently virtualize the x86 architecture and support most commodity operating systems. By relying on x86 hardware segmentation as a protection mechanism, the binary translator could execute translated code at near hardware speeds. The binary translator also relied on partial evaluation and adaptive retranslation to reduce the overall overheads of virtualization.
Written with the benefit of hindsight, this article shares the key lessons we learned from building the original system and from its later evolution.
Fault detection, isolation, and recovery (FDIR) systems are addressed from the very beginning of any space mission design and play a relevant role in the definition of its reliability, availability, and safety objectives. Their primary purposes are the safety of spacecraft/mission life and the improvement of its service availability. In this survey paper, current FDIR system engineering and programmatic approaches are investigated along with their strong connection with the wider concept of onboard autonomy, which is becoming the key point in the design of new-generation spacecraft. Different perspectives are presented, covering the whole lifecycle of FDIR system development, which is currently regarded as a self-standing and system-level discipline. Special attention is given to the early FDIR lifecycle phases and to hierarchy-based FDIR system architecture.
Recent space projects have brought to light some flaws in current FDIR system design approaches. These findings pave the way for innovative solutions (e.g., qualitative and quantitative model-based methods, formal verification, and analytical redundancy), which can support, rather than rule out, conventional industrial practices. The various model-based FDIR methods are not addressed in this paper; however, exhaustive surveys dealing with this topic are mentioned for further investigation. The experience and the lessons learned in the FDIR field during the manufacturing of the Galileo full operational capability (FOC) satellites at OHB System AG are reported. In particular, it is highlighted how well-established and accepted common practices have been exploited to their maximum extent to sort out the aforesaid issues.
Virtualization has become a hot topic in embedded systems for both academia and industry. Among its main advantages, we can highlight (i) software design quality; (ii) security levels of the system; (iii) software reuse; and (iv) hardware utilization. However, it still presents constraints that have lessened the excitement around it, the greatest concerns being its implicit overhead and whether it is worthwhile. Thus, we detail how to adapt an existing MIPS-based architecture to support the virtualization principles. In this paper we present detailed information about the architecture implementation and results demonstrating its correctness and efficiency.
VM/370 is an operating system which provides its multiple users with seemingly separate and independent IBM System/370 computing systems. These virtual machines are simulated using IBM System/370 hardware and share its architecture. In addition, VM/370 provides a single-user interactive system for personal computing and a computer network system for information interchange among interconnected machines. VM/370 evolved from an experimental operating system designed and built over fifteen years ago. This paper reviews the historical environment, design influences, and goals which shaped the original system.
When designing integrated modular avionics (IMA) systems, the traditional design life cycle must be adapted and rearranged to allow multiple vendors to contribute not only to the systems design, but also to the safety case for the system. Simply using guidelines from the DO-178B and the ARINC 653 standards does not guarantee that one will be able to have multiple applications running at different safety criticality levels. One needs to be able to merge applications written by different vendors, reuse applications from previous projects, and integrate different safety requirements while constructing a safety case for the overall IMA system. This, of course, must be done within a constrained budget that includes potential costs associated with changing program requirements. In order to achieve these goals, the design life cycle must be constructed in a way that allows for configuration and build partitioning of these applications, in parallel with the IMA platform itself and the overall systems integration. This paper investigates how the ARINC 653 standard can be used to provide this application and safety criticality level independence using guidelines from DO-178 and DO-297. It explores the use of qualified XML-based configuration tools and the emerging ARINC 653 Supplement 3 XML Schema design, and shows the importance of configuration and build partitioning.
Virtual machine systems have been implemented on a limited number of third-generation computer systems, e.g. CP-67 on the IBM 360/67. From previous empirical studies, it is known that certain third-generation computer systems, e.g. the DEC PDP-10, cannot support a virtual machine system. In this paper, a model of a third-generation-like computer system is developed. Formal techniques are used to derive precise sufficient conditions to test whether such an architecture can support virtual machines.
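The paper's main result can be stated compactly. Call an instruction privileged if it traps when executed in user mode, and sensitive if it reads or alters the machine's configuration or resources; then:

```latex
% Popek and Goldberg's sufficient condition for virtualizability:
\[
  \text{sensitive instructions} \;\subseteq\; \text{privileged instructions}
  \;\Longrightarrow\;
  \text{an effective VMM can be constructed,}
\]
% since every sensitive instruction issued by a guest traps, and a
% trap-and-emulate monitor therefore retains complete control of the
% machine's resources while running all other instructions natively.
```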
This paper describes the Denali isolation kernel, an operating system architecture that safely multiplexes a large number of untrusted Internet services on shared hardware. Denali's goal is to allow new Internet services to be "pushed" into third party infrastructure, relieving Internet service authors from the burden of acquiring and maintaining physical infrastructure. Our isolation kernel exposes a virtual machine abstraction, but unlike conventional virtual machine monitors, Denali does not attempt to emulate the underlying physical architecture precisely, and instead modifies the virtual architecture to gain scale, performance, and simplicity of implementation. In this paper, we first discuss design principles of isolation kernels, and then we describe the design and implementation of Denali. Following this, we present a detailed evaluation of Denali, demonstrating that the overhead of virtualization is small, that our architectural choices are warranted, and that we can successfully scale to more than 10,000 virtual machines on commodity hardware.
Numerous systems have been designed which use virtualization to subdivide the ample resources of a modern computer. Some require specialized hardware, or cannot support commodity operating systems. Some target 100% binary compatibility at the expense of performance. Others sacrifice security or functionality for speed. Few offer resource isolation or performance guarantees; most provide only best-effort provisioning, risking denial of service. This paper presents Xen, an x86 virtual machine monitor which allows multiple commodity operating systems to share conventional hardware in a safe and resource managed fashion, but without sacrificing either performance or functionality. This is achieved by providing an idealized virtual machine abstraction to which operating systems such as Linux, BSD and Windows XP, can be ported with minimal effort. Our design is targeted at hosting up to 100 virtual machine instances simultaneously on a modern server. The virtualization approach taken by Xen is extremely efficient: we allow operating systems such as Linux and Windows XP to be hosted simultaneously for a negligible performance overhead - at most a few percent compared with the unvirtualized case. We considerably outperform competing commercial and freely available solutions in a range of microbenchmarks and system-wide tests.
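The paravirtual contract is easiest to see in Xen's memory interface: guests may read their page tables directly but must route updates through a validating hypercall. A simplified sketch modeled on Xen's mmu_update interface; the exact signature and the hypercall trap mechanics are approximations:

```c
/* Instead of writing a page-table entry directly (which Xen forbids),
 * a paravirtualized guest batches updates and asks the hypervisor to
 * validate and apply them. */
#include <stdint.h>

typedef struct {
    uint64_t ptr;   /* machine address of the PTE to update */
    uint64_t val;   /* new PTE value, validated by Xen      */
} mmu_update_t;

/* In a real guest this traps into the hypervisor (e.g. via a software
 * interrupt); here it is a stub standing in for that trap. */
extern int HYPERVISOR_mmu_update(mmu_update_t *req, unsigned count,
                                 unsigned *done, uint16_t domid);

int set_guest_pte(uint64_t pte_machine_addr, uint64_t new_val)
{
    mmu_update_t req = { .ptr = pte_machine_addr, .val = new_val };
    unsigned done = 0;
    /* DOMID_SELF applies the update to the caller's own domain. */
    return HYPERVISOR_mmu_update(&req, 1, &done, 0x7FF0);
}
```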