
CCMPerf: A Benchmarking Tool for CORBA Component Model Implementations

Real-Time Systems, 29(2):281-308, 01/2005. DOI: 10.1007/s11241-005-6889-6
Source: DBLP

ABSTRACT Commercial off-the-shelf (COTS) middleware is now widely used to develop distributed real-time and embedded (DRE) systems. DRE systems are themselves increasingly combined to form systems of systems that have diverse quality of service (QoS) requirements. Earlier generations of COTS middleware, such as Object Request Brokers (ORBs) based on the CORBA 2.x standard, did not facilitate the separation of QoS policies from application functionality, which made it hard to configure and validate complex DRE applications. The new generation of component middleware, such as the CORBA Component Model (CCM) based on the CORBA 3.0 standard, addresses the limitations of earlier generation middleware by establishing standards for implementing, packaging, assembling, and deploying component implementations. There has been little systematic empirical study of the performance characteristics of component middleware implementations in the context of DRE systems. This paper therefore provides four contributions to the study of CCM for DRE systems. First, we describe the challenges involved in benchmarking different CCM implementations. Second, we describe key criteria for comparing CCM implementations using black-box and white-box metrics. Third, we describe the design of our CCMPerf benchmarking suite to illustrate test categories that evaluate aspects of CCM implementations to determine their suitability for the DRE domain. Fourth, we use CCMPerf to benchmark the CIAO implementation of CCM and analyze the results. These results show that the CIAO implementation based on the more sophisticated CORBA 3.0 standard has comparable DRE performance to that of the TAO implementation based on the earlier CORBA 2.x standard.
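The black-box metrics mentioned in the abstract (e.g., round-trip latency and its jitter) can be illustrated with a minimal, middleware-agnostic sketch. This is not CCMPerf code: the `invoke` stub below is purely hypothetical and merely stands in for a call through a CCM component's stub, but the measurement pattern (warmup, timed iterations, dispersion statistics) is the general shape of such black-box benchmarks:

```python
import time
import statistics

def invoke():
    # Hypothetical stand-in for a remote component operation; a real
    # black-box benchmark would call through the ORB/component stub here.
    time.sleep(0.0001)

def benchmark(operation, iterations=1000, warmup=100):
    """Measure round-trip latency of `operation` in microseconds."""
    for _ in range(warmup):              # warm caches and code paths first
        operation()
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        operation()
        samples.append((time.perf_counter() - start) * 1e6)
    return {
        "min": min(samples),
        "avg": statistics.mean(samples),
        "max": max(samples),
        "jitter": statistics.stdev(samples),  # dispersion as a jitter proxy
    }

results = benchmark(invoke)
print(f"avg latency: {results['avg']:.1f} us, jitter: {results['jitter']:.1f} us")
```

White-box metrics, by contrast, would instrument internal middleware paths (e.g., marshaling or demultiplexing time) rather than timing the call from the client's perspective as above.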

    ABSTRACT: Time and resource constraints often force developers of highly configurable quality of service (QoS)-intensive software systems to guarantee their system's persistent software attributes (PSAs) (e.g., functional correctness, portability, efficiency, and QoS) on a few platform configurations and to extrapolate from these configurations to the entire configuration space, which allows many sources of degradation to escape detection until systems are fielded. This article illustrates how model-driven distributed continuous quality assurance (DCQA) processes can help improve the assessment and assurance of these PSAs across the large configuration spaces of QoS-intensive software systems. Keywords: Distributed Continuous Quality Assurance, Model-Integrated Computing, Quality of Service, Software Configurations.
    IEEE Software. 01/2004;
    ABSTRACT: Distributed real-time and embedded (DRE) applications have become critical in domains such as avionics (e.g., flight mission computers), telecommunications (e.g., wireless phone services), telemedicine (e.g., robotic surgery), and defense applications (e.g., total ship computing environments). DRE applications are increasingly composed of multiple systems that are interconnected via wireless and wireline networks to form systems of systems. A challenging requirement for DRE applications involves supporting a diverse set of quality of service (QoS) properties, such as predictable latency/jitter, throughput guarantees, scalability, 24x7 availability, dependability, and security, that must be satisfied simultaneously in real-time. Although a growing number of DRE applications are based on QoS-enabled commercial-off-the-shelf (COTS) hardware and software components, the complexity of managing long lifecycles (often ∼15-30 years) remains a key challenge for DRE application developers. For example, substantial time and effort is spent retrofitting DRE applications when their COTS technology infrastructure changes. This paper provides three contributions to improving the development and validation of DRE applications throughout their lifecycles. First, we illustrate the challenges in developing and deploying QoS-enabled component middleware-based DRE applications and outline our solution approach to resolve these challenges. Second, we describe a new software paradigm called Model Driven Middleware (MDM) that combines model-based software development techniques with QoS-enabled component middleware to address key challenges faced by developers of DRE applications, particularly composition, integration, and assured QoS for end-to-end operations. Finally, we describe our progress on an MDM tool-chain, called CoSMIC, that addresses key DRE application and middleware lifecycle challenges, including developing component functionality, partitioning the components to use distributed resources effectively, validating the software, assuring multiple simultaneous QoS properties in real-time, and safeguarding against rapidly changing technology.
    Science of Computer Programming (preprint submitted 14 November 2003). 04/2004;
    ABSTRACT: Engineering distributed systems is a challenging activity. This is partly due to the intrinsic complexity of distributed systems, and partly due to the practical obstacles that developers face when evaluating and tuning their design and implementation decisions. This paper addresses the latter aspect, providing techniques for software engineers to automate two key elements of the experimentation activity: (1) workload generation and (2) experiment deployment and execution. Our approach is founded on a suite of models that characterize the client behaviors that drive the experiments, the distributed system under experimentation, and the testbeds upon which the experiments are to be carried out. The models are used by simulation-based and generative techniques to automate the construction of the workloads, as well as construction of the scripts for deploying and executing the experiments on distributed testbeds. The framework is not targeted at a specific system or application model, but rather is a generic, programmable tool. We have validated our approach on a variety of distributed systems. Our experience shows that this framework can be readily applied to different kinds of distributed system architectures, and that using it for meaningful experimentation is advantageous.
    11/2004;
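The model-driven workload generation described in the abstract above can be sketched generically. The sketch below is illustrative only and not taken from that paper: the Poisson-arrival model, the operation names, and the function name `generate_workload` are all assumptions chosen to show how a simple client-behavior model can be turned into a concrete request schedule:

```python
import random

def generate_workload(rate_per_sec, duration_sec, operations, seed=42):
    """Derive a client request schedule from a simple workload model:
    Poisson arrivals at `rate_per_sec`, operations drawn uniformly."""
    rng = random.Random(seed)           # fixed seed for repeatable experiments
    schedule, t = [], 0.0
    while True:
        t += rng.expovariate(rate_per_sec)  # exponential inter-arrival gap
        if t >= duration_sec:
            break
        schedule.append((round(t, 6), rng.choice(operations)))
    return schedule

# Example: 50 req/s for 2 simulated seconds against two hypothetical operations.
plan = generate_workload(50, 2.0, ["query", "update"])
print(f"{len(plan)} requests, first at t={plan[0][0]}s")
```

A generative framework like the one described would then emit deployment and execution scripts that replay such a schedule against the system under test on a distributed testbed.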
