
CCMPerf: A Benchmarking Tool for CORBA Component Model Implementations

Real-Time Systems (Impact Factor: 0.61). 01/2005; 29(2):281-308. DOI: 10.1007/s11241-005-6889-6
Source: DBLP

ABSTRACT Commercial off-the-shelf (COTS) middleware is now widely used to develop distributed real-time and embedded (DRE) systems. DRE systems are themselves increasingly combined to form systems of systems that have diverse quality of service (QoS) requirements. Earlier generations of COTS middleware, such as Object Request Brokers (ORBs) based on the CORBA 2.x standard, did not facilitate the separation of QoS policies from application functionality, which made it hard to configure and validate complex DRE applications. The new generation of component middleware, such as the CORBA Component Model (CCM) based on the CORBA 3.0 standard, addresses the limitations of earlier-generation middleware by establishing standards for implementing, packaging, assembling, and deploying component implementations. There has been little systematic empirical study of the performance characteristics of component middleware implementations in the context of DRE systems. This paper therefore provides four contributions to the study of CCM for DRE systems. First, we describe the challenges involved in benchmarking different CCM implementations. Second, we describe key criteria for comparing different CCM implementations using key black-box and white-box metrics. Third, we describe the design of our CCMPerf benchmarking suite to illustrate test categories that evaluate aspects of CCM implementations to determine their suitability for the DRE domain. Fourth, we use CCMPerf to benchmark the CIAO implementation of CCM and analyze the results. These results show that the CIAO implementation, based on the more sophisticated CORBA 3.0 standard, has DRE performance comparable to that of the TAO implementation based on the earlier CORBA 2.x standard.

    • "The overhead of common container-management operations must be minimised by a CCM implementation to meet the resource constraints of an embedded system. Evaluation of CIAO performance based on a benchmark measurement indicates that, by optimising the component communication, CIAO's CORBA 3.x CCM capabilities do not add significant overhead above and beyond its underlying TAO CORBA 2.x implementation (Krishna et al., 2005). However, the ORB (Object Request Broker)-based communication in TAO can still impose overhead that is not affordable for strictly resource-bound embedded systems."
    ABSTRACT: Component-based software engineering promises to provide structure and reusability to embedded-systems software. At the same time, microkernel-based operating systems are being used to increase the reliability and trustworthiness of embedded systems. Since the microkernel approach to designing systems is partially based on the componentisation of system services, component-based software engineering is a particularly attractive approach to developing microkernel-based systems. While a number of widely used component architectures already exist, they are generally targeted at enterprise computing rather than embedded systems. Due to the unique characteristics of embedded systems, a component architecture for embedded systems must have low overhead, be able to address relevant non-functional issues, and be flexible enough to accommodate application-specific requirements. In this paper we introduce a component architecture aimed at the development of microkernel-based embedded systems. The key characteristic of the architecture is that it has a minimal, low-overhead core but is highly modular and therefore flexible and extensible. We have implemented a prototype of this architecture and confirm that it has very low overhead and is suitable for implementing both system-level and application-level services.
    Journal of Systems and Software 05/2007; DOI:10.1016/j.jss.2006.08.039 · 1.25 Impact Factor
    • "To support our QA research goals we are creating, validating, and disseminating novel technologies in the focus areas described below: 1. Design and evaluation of scalable DCQA applications. To date only a handful of research efforts [25] [11] [22] [9] [14] [21] have studied DCQA processes. It is not yet clear, therefore, how best to structure these processes, what types of QA tasks can be distributed effectively, or how the costs/benefits of DCQA processes compare to conventional QA processes."
    ABSTRACT: Software scale and complexity are growing by every measure: more hardware and software, more communication links, more interdependency, more lines of code, more storage and data, etc. At the same time, business trends are increasingly squeezing development resources. In particular, development processes are straining under severe cost and time-to-market pressures. Global competition and market deregulation are shrinking profit margins and thus limiting budgets for the development and QA of software. In response to these trends, developers have begun to change the way they build and validate software systems by (among other things) moving towards more flexible product designs that allow dynamic reconfiguration. This approach promises to improve cost, quality, and development time, but creates other problems, especially when used in the context of safety-critical systems. To realize this promise, effective certification becomes more important than ever, since as static controls are removed or reduced, it becomes even more vital that (1) problems be caught as quickly as possible and (2) systems not be allowed to drift so far from their intended functional and performance requirements that rework costs overwhelm the hoped-for efficiencies. This article presents and discusses some of our recent efforts to address these problems.
    21st International Parallel and Distributed Processing Symposium (IPDPS 2007), Proceedings, 26-30 March 2007, Long Beach, California, USA; 01/2007
    • "For example, the Options Configuration Modeling Language (OCML) [9] allows developers to model middleware configuration options as high-level models. Likewise, the Benchmarking Generation Modeling Language (BGML) [4] allows developers to automatically generate sophisticated benchmarking experiments. This article describes how model-driven DCQA processes and tools can work separately and together to help monitor, safeguard, enforce, and reassert desirable PSAs after changes occur in QoS-intensive software."
    ABSTRACT: Time and resource constraints often force developers of highly configurable quality of service (QoS)-intensive software systems to guarantee their system's persistent software attributes (PSAs) (e.g., functional correctness, portability, efficiency, and QoS) on a few platform configurations and to extrapolate from these configurations to the entire configuration space, which allows many sources of degradation to escape detection until systems are fielded. This article illustrates how model-driven distributed continuous quality assurance (DCQA) processes can help improve the assessment and assurance of these PSAs across the large configuration spaces of QoS-intensive software systems.
    Keywords: Distributed Continuous Quality Assurance, Model-Integrated Computing, Quality of Service, Software Configurations.