CCMPerf: A Benchmarking Tool for CORBA Component Model
Arvind S. Krishna, Balachandran Natarajan, Aniruddha Gokhale and Douglas C. Schmidt
Electrical Engineering and Computer Science, Vanderbilt University, Nashville TN, USA
{arvindk, bala, gokhale, schmidt}
Computer Science Department, Washington University, St. Louis, MO, USA
Lockheed Martin Advanced Technology Labs, Cherry Hill, NJ, USA
Abstract

Commercial off-the-shelf (COTS) middleware is now
widely used to develop distributed real-time and em-
bedded (DRE) systems. DRE systems are themselves in-
creasingly combined to form “systems of systems” that
have diverse quality of service (QoS) requirements. Ear-
lier generations of COTS middleware, such as Object Re-
quest Brokers (ORBs) based on the CORBA 2.x standard,
did not facilitate the separation of QoS policies from ap-
plication functionality, which made it hard to configure
and validate complex DRE applications. The new gen-
eration of component middleware, such as the CORBA
Component Model (CCM) based on the CORBA 3.0 stan-
dard, addresses the limitations of earlier generation mid-
dleware by establishing standards for implementing,
packaging, assembling, and deploying component implementations.
There has been little systematic empirical study of the
performance characteristics of component middleware im-
plementations in the context of DRE systems. This paper
therefore provides four contributions to the study of CCM
for DRE systems. First, we describe the challenges involved
in benchmarking different CCM implementations. Second,
we describe key criteria for comparing different CCM
implementations using black-box and white-box metrics.
Third, we describe the design of our CCMPerf benchmark-
ing suite to illustrate test categories that evaluate aspects of
CCM implementation to determine their suitability for the
DRE domain. Fourth, we use CCMPerf to benchmark the CIAO
implementation of CCM and analyze the results. These re-
sults show that the CIAO implementation based on the more
sophisticated CORBA 3.0 standard has comparable DRE
performance to that of the TAO implementation based on
the earlier CORBA 2.x standard.
Keywords: CCM, Benchmarking, CCMPerf, white-box
metrics, black-box metrics.
1. Introduction

Emerging trends. Distributed real-time and embedded
(DRE) systems are becoming more widespread and
important. Common DRE systems include telecommunica-
tion networks (e.g., wireless phone services), tele-medicine
(e.g., robotic surgery), and defense applications (e.g., to-
tal ship computing environments). These DRE systems are
increasingly used for a range of applications where multi-
ple systems are interconnected to form “systems of systems”
that possess stringent quality of service (QoS) constraints,
such as bandwidth, latency, jitter, and dependability
requirements. A challenging requirement for these sys-
tems involves supporting a diverse set of QoS properties,
such as predictable latency/jitter, throughput guarantees,
scalability, 24x7 availability, dependability, and se-
curity, that must be satisfied simultaneously in real time.
Conventional distributed object computing (DOC) middle-
ware frameworks (such as DCOM, Java RMI, and earlier
versions of the CORBA 2.x standard) do not provide ca-
pabilities for developers and end-users to specify and
enforce these QoS requirements simultaneously in com-
plex DRE systems.
Component middleware is a class of middleware
that enables reusable services to be composed, configured,
and deployed to create applications rapidly.
The CORBA Component Model (CCM) is a standard com-
ponent middleware technology that addresses limitations
with earlier versions of CORBA 2.x middleware based on
the DOC model. In particular, the CCM standard defined
by the CORBA 3.x specification extends the CORBA 2.x
object model to support the concept of components and es-
tablishes standards for specifying, implementing, packag-
ing, assembling, and deploying components.
Empirically evaluating CCM implementations. Compo-
nent middleware in general – and CCM in particular –
are a maturing technology base that represents a paradigm
shift in the way complex DRE systems have been devel-
oped traditionally. For example, component middleware
provides higher-level capabilities for developers and end-
users to specify and enforce QoS requirements in com-
plex DRE systems. Several implementations of CCM are
now available, including the Component Integrated ACE
ORB (CIAO), Mico-CCM, Qedo, and StarCCM. As CCM
platforms mature and become suitable for DRE systems, it
is desirable to devise a standard set of
metrics to compare and contrast different CCM implemen-
tations in terms of their:
Suitability, e.g., how suitable is the CCM implementa-
tion for DRE applications in a particular domain, such
as avionics, total ship computing, or telecom systems?
Quality of service, e.g., does a CCM implementation
for the DRE domain provide predictable performance
and consume minimal time/space resources?
Conformance, e.g., does a CCM implementation con-
form to OMG standards by meeting the portability
and interoperability requirements defined by the CCM
specification?
Earlier efforts, such as the Open CORBA Benchmarking
and Middleware Comparator projects, have focused on
metrics to compare middleware based on the DOC
middleware standard defined by the CORBA 2.x specifi-
cations. Our work enhances these efforts by focusing on
a previously unexplored topic: designing a benchmark-
ing framework to compare CCM implementation qual-
ity by developing metrics that evaluate the suitability of
those implementations for representative DRE applica-
tions. To quantify these comparisons systematically, we
developed CCMPerf, which is an open-source benchmarking
suite that focuses on black-box and white-box metrics,
using criteria such as latency, throughput, and footprint
measures. (CCMPerf is available for download from
deuce.doc.wustl.) These metrics can be partitioned into the
following categories:
Distribution middleware tests that quantify the over-
head of CCM-based applications relative to applica-
tions based on earlier versions of the CORBA 2.x stan-
dard that do not support component run-time, configu-
ration, and deployment capabilities.
Common middleware tests that quantify the suitability
of using different implementations of CORBA services,
such as the Real-time Event and Notification Services.
Domain-specific middleware tests that quantify the
suitability of CCM implementations to meet the QoS
requirements of a particular DRE application do-
main, such as static linking and deployment of
components in an avionics mission computing architecture.
This paper provides the following contributions to the
study of component middleware implemented in accor-
dance with the OMG CCM standard by describing:
1. The challenges involved in benchmarking different
CCM implementations,
2. The criteria for comparing different CCM implementa-
tions using key black-box and white-box metrics, and
3. The design of our CCMPerf benchmarking suite that
evaluates aspects of CCM implementations to deter-
mine their suitability for the DRE domain.
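Black-box metrics of the kind listed above are typically derived from repeated round-trip timing samples. The sketch below (an illustrative helper, not part of CCMPerf itself) shows one common way such samples might be summarized, using the sample standard deviation as a jitter measure:

```python
import statistics

def latency_metrics(samples_us):
    """Summarize round-trip latency samples (in microseconds).

    The mean captures average latency; the sample standard
    deviation is a common black-box measure of jitter.
    """
    return {
        "mean": statistics.mean(samples_us),
        "jitter": statistics.stdev(samples_us),
        "max": max(samples_us),
        "min": min(samples_us),
    }

# Example with hypothetical timing samples from one run:
m = latency_metrics([100.0, 102.0, 98.0, 101.0, 99.0])
print(m["mean"])  # 100.0
```

Reporting minimum and maximum alongside mean and jitter helps expose worst-case behavior, which matters more than averages in the DRE domain.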
The vehicle used to test, obtain, and analyze our results
from CCMPerf is the Component Integrated ACE ORB
(CIAO), which is an open-source implementation of
CCM built upon the Real-time CORBA infrastructure of
The ACE ORB (TAO). This paper shows how CCM-
Perf can be used to collect metrics and evaluate CCM im-
plementations in the DRE domain. Our results show that
CIAO and its more sophisticated CORBA 3.x CCM capa-
bilities do not add appreciable overhead relative to its TAO
CORBA 2.x foundation.
Paper organization. The remainder of this paper is orga-
nized as follows: Section 2 provides an overview of the ele-
ments in CCM; Section 3 discusses the design of CCMPerf,
focusing on the performance experiments it supports; Sec-
tion 4 analyzes quantitative results obtained by benchmark-
ing CIAO using CCMPerf; Section 5 compares our work
with other middleware benchmarking efforts; and Section 6
presents concluding remarks.
2. Overview of CCM
The CORBA Component Model (CCM) forms a key part
of the CORBA 3.0 standard. CCM is designed to address
the limitations of earlier versions of CORBA 2.x middle-
ware that supported a distributed object computing (DOC)
model. Figure 1 depicts the key elements in the architec-
ture of CCM. The remainder of this section describes each
of these CCM elements.
CIAO is also available for download from deuce.doc.wustl.
clarify the structure of the benchmarks and facilitate the in-
tegration of new benchmark tests. Our empirical results
in Section 4 show how CCMPerf can be used to quan-
tify metrics, such as overhead (i.e., increases in the mean),
that the CIAO CORBA 3.x CCM implementation in-
curs above and beyond its underlying TAO CORBA 2.x
implementation. Our future work on CCMPerf will fo-
cus on benchmarking other open-source CCM implemen-
tations (such as Mico-CCM, Qedo, and StarCCM), as well
as completing the white-box and scenario-based bench-
marks and enhancing CCMPerf’s testsuite.
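The overhead quantified above can be expressed as the relative increase in mean round-trip latency of a CCM implementation over its underlying ORB. A minimal sketch (the sample values are hypothetical, not measured results from CIAO or TAO):

```python
def overhead_pct(ccm_mean_us, orb_mean_us):
    """Relative overhead of a CCM implementation (e.g., CIAO)
    over its underlying ORB (e.g., TAO), as a percentage of
    the ORB's mean round-trip latency."""
    return 100.0 * (ccm_mean_us - orb_mean_us) / orb_mean_us

# Hypothetical mean round-trip latencies in microseconds:
print(overhead_pct(105.0, 100.0))  # 5.0
```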
Our work on CCMPerf has also underscored the impor-
tance of generating benchmarking experiments from high-
level models. For example, conducting a simple experiment
requires developers to write (1) the header files and source
code for the benchmark that measures QoS, such as roundtrip
latency and throughput, (2) the IDL files that describe the con-
tract between the client and the server, (3) the configuration
and script files that tune the underlying middleware and au-
tomate the tasks of running tests and generating output, and
(4) the project build files (e.g., makefiles) required to generate
the executable code. Writing these files repeatedly for each
experiment is tedious and error-prone. Further, in a hand-
crafted approach, changing the configuration would entail
re-writing the benchmarking code. In a model-based ap-
proach, however, the only change will be in the model and
the necessary experimentation code will be automatically
generated. A model-based approach also provides an effec-
tive abstraction for visualizing and analyzing the overall
planning phase, rather than requiring manual inspection of
the source code.
To alleviate the shortcomings described above, we are
developing the Benchmark Generation Modeling Lan-
guage (BGML) [8, 9], which automates the generation
of benchmarking experiments from high-level mod-
els. BGML has been integrated with CoSMIC, which
is an integrated toolsuite for modeling design and run-
time aspects of QoS-enabled component middleware.
CoSMIC’s model-based approach to benchmark syn-
thesis enables quality assurance engineers and testers to
configure components, model test configurations, and gen-
erate benchmarking code automatically.
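The generation step can be pictured as a simple model-to-text transformation. The toy sketch below (illustrative only; neither the model format nor the generated artifacts reflect BGML's actual input or output) turns a small experiment model into an IDL contract skeleton and a driver command:

```python
# Toy model of a benchmarking experiment (hypothetical fields,
# not BGML's real modeling language).
model = {
    "interface": "LatencyTest",
    "operation": "ping",
    "iterations": 10000,
}

def gen_idl(m):
    # Emit a minimal IDL contract between client and server.
    return f"interface {m['interface']} {{\n  void {m['operation']} ();\n}};\n"

def gen_script(m):
    # Emit a command line that would drive one benchmark run.
    return f"./client -i {m['iterations']} -o {m['operation']}\n"

print(gen_idl(model))
```

Changing a configuration parameter then means editing only the model; every generated artifact is re-derived mechanically, which is the error-proneness argument made above.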
References

[1] A. Corsaro and D. C. Schmidt. Evaluating Real-Time Java Features and Performance for Real-time Embedded Systems. In Proceedings of the Real-time Technology and Applications Symposium, San Jose, Sept.
[2] D. Niehaus et al. Kansas University Real-Time (KURT) Linux. www.ittc.ukans.edu/kurt/, 2004.
[3] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA, 1995.
[4] G. Thaker et al. Implementation Experience with OMG’s SCIOP Mapping. In Proceedings of the 5th International Symposium on Distributed Objects and Applications, Nov. 2003.
[5] A. Gokhale, D. C. Schmidt, B. Natarajan, and N. Wang. Applying Model-Integrated Computing to Component Middleware and Enterprise Applications. Communications of the ACM Special Issue on Enterprise Components, Service and Business Rules, 45(10), Oct. 2002.
[6] T. H. Harrison, D. L. Levine, and D. C. Schmidt. The Design and Performance of a Real-time CORBA Event Service. In Proceedings of OOPSLA ’97, pages 184–199, Atlanta, GA, Oct. 1997. ACM.
[7] Institute for Software Integrated Systems. Component Synthesis using Model Integrated Computing (CoSMIC). www.dre.vanderbilt.edu/cosmic, Vanderbilt University.
[8] A. S. Krishna, D. C. Schmidt, A. Porter, A. Memon, and D. Sevilla-Ruiz. Improving the Quality of Performance-intensive Software via Model-integrated Distributed Continuous Quality Assurance. In Proceedings of the 8th International Conference on Software Reuse, Madrid, Spain, July 2004. ACM/IEEE.
[9] A. S. Krishna, C. Yilmaz, A. Memon, A. Porter, D. C. Schmidt, A. Gokhale, and B. Natarajan. Preserving Distributed Systems Critical Properties: a Model-Driven Approach. IEEE Software Special Issue on Persistent Software Attributes, Nov./Dec. 2004.
[10] Lockheed Martin Advanced Technology Labs. ATL QoS Home Page.
[11] A. Memon, A. Porter, C. Yilmaz, A. Nagarajan, D. C. Schmidt, and B. Natarajan. Skoll: Distributed Continuous Quality Assurance. In Proceedings of the 26th IEEE/ACM International Conference on Software Engineering, Edinburgh, Scotland, May 2004. IEEE/ACM.
[12] MICO. The MICO CORBA Component Project.
[13] Object Management Group. Notification Service Specification. OMG Document telecom/99-07-01 edition, July 1999.
[14] Object Management Group. Event Service Specification Version 1.1. OMG Document formal/01-03-01 edition, Mar. 2001.
[15] Object Management Group. CORBA Components. OMG Document formal/2002-06-65 edition, June 2002.
[16] Object Management Group. Data Distribution Service for Real-Time Systems Specification, 1.0 edition, Mar. 2003.
[17] Object Management Group. Deployment and Configuration Adopted Submission. OMG Document ptc/03-07-08 edition.
[18] Qedo. QoS Enabled Distributed Objects.
[19] R. Shevchenko. CORBAConf: A Tool for Providing Autoconf Support for CORBA. corbaconf.kiev.ua/, 2000.
[20] D. C. Schmidt, B. Natarajan, A. Gokhale, N. Wang, and C. Gill. TAO: A Pattern-Oriented Object Request Broker for Distributed Real-time and Embedded Systems. IEEE Distributed Systems Online, 3(2), Feb. 2002.
[21] D. C. Sharp. Reducing Avionics Software Cost Through Component Based Product Line Development. In Proceedings of the 10th Annual Software Technology Conference, Apr. 1998.
[22] D. C. Sharp, E. Pla, and K. R. Lueck. Evaluating Real-time Java for Mission-critical Large-scale Embedded Systems. In G. Bollella, editor, Proceedings of the Real-time Technology and Applications Symposium, pages 30–37, Washington, D.C., 2003.
[23] StarCCM. starccm.sourceforge.net, 2003.
[24] J. Sztipanovits and G. Karsai. Model-Integrated Computing. IEEE Computer, 30(4):110–112, Apr. 1997.
[25] C. Szyperski. Component Software—Beyond Object-Oriented Programming. Addison-Wesley, Santa Fe.
[26] P. Tuma and A. Buble. Open CORBA Benchmarking. In International Symposium on Performance Evaluation of Computer and Telecommunication Systems, 2001.
[27] N. Wang, C. Gill, D. C. Schmidt, and V. Subramonian. Configuring Real-time Aspects in Component Middleware. In Proceedings of the International Symposium on Distributed Objects and Applications (DOA’04), Agia Napa, Cyprus, Oct. 2004.
[28] N. Wang, D. C. Schmidt, A. Gokhale, C. D. Gill, B. Natarajan, C. Rodrigues, J. P. Loyall, and R. E. Schantz. Total Quality of Service Provisioning in Middleware and Applications. The Journal of Microprocessors and Microsystems, 27(2):45–54, Mar. 2003.
[29] N. Wang, D. C. Schmidt, and S. Vinoski. Collocation Optimizations for CORBA. C++ Report, 11(10):47–52.
[30] B. White, J. L., et al. An Integrated Experimental Environment for Distributed Systems and Networks. In Proceedings of the Fifth Symposium on Operating Systems Design and Implementation, pages 255–270, Boston, MA, Dec. 2002. USENIX Association.