Dimensions of Component-based Development
Colin Atkinson, Thomas Kühne and Christian Bunse
Universität Kaiserslautern
Fraunhofer Institute for Experimental Software Engineering

Abstract
As the properties of components have gradually become
clearer, attention has started to turn to the architectural
issues which govern their interaction and composition. In
this paper we identify some of the major architectural
questions affecting component-based software develop-
ment and describe the predominant architectural
dimensions. Of these, the most interesting is the
“architecture hierarchy” which we believe is needed to
address the “interface vicissitude” problem that arises
whenever interaction refinement is explicitly documented
within a component-based system. We present a solution
to this problem based on the concept of stratified
architectures and object metamorphosis. Finally, we
describe how these concepts may assist in increasing the
tailorability of component-based frameworks.
1. Introduction

Much of the recent debate on component-oriented
software development has naturally revolved around the
question “what is a component?” Less attention has been
given to the architectural issues related to the structure of
component-based systems, and the nature of the key
relationships which drive component-based development
- in essence, to the question “where is a component?”
Addressing this question, we believe, will not only help
establish a cleaner and more general theory of
components, but will also shed light on the “what”
question by helping to clarify important characteristics of
components.
We believe four fundamental hierarchies naturally
dominate the structure of component-oriented software
systems:

- Containment hierarchy
- Type hierarchy
- Meta-level hierarchy
- Architecture hierarchy
The term hierarchy is used in a general sense here to
represent a set of entities related by some transitive,
partially ordered relationship.
The first three hierarchies all directly contain the actual
components themselves. In other words, every component
must be assigned a place in each of these hierarchies.
This place is unique for each component and serves to
define its properties and characteristics.
The fourth hierarchy, in contrast, is not actually a
hierarchy of components per se, but rather a hierarchy of
“architectures” or “architectural strata.” In other words,
it is not the components themselves which are partially
ordered, but the architectural strata in which they appear.
This hierarchy therefore has more to do with describing
how a component is used than with defining the nature of
the component itself.
In the following sections we discuss each of these
dimensions in more detail: section 2 describes the role of
the component hierarchy, section 3 briefly talks about the
type hierarchy, section 4 discusses the ramifications of
the meta-level hierarchy, and finally section 5 introduces
the concept of the architecture hierarchy and describes its
potential benefits. Section 6 provides a summary of the
key points, and an analysis of their implications.
2. The Containment Hierarchy

The containment relationship is probably the most
fundamental of those influencing the structure of
component-based systems. It also has the largest number
of different names, including “aggregation”, “part-of”,
“includes”, “embeds” and of course “composition”. All
these terms are used to convey the same underlying idea
of “big” objects containing “small” objects. In fact, the
very name “component” is intended to reflect the idea of
being a part of some larger whole.

Although simple in concept, containment is notoriously
difficult to apply in practice. The problem is that 100%
“pure” containment rarely occurs in the real world.
Contained objects almost always have relationships to
objects other than their container or fellow contained
objects (i.e., they are shared by multiple containers or
temporary clients), and often these can also represent
some form of containment. Most object-oriented systems
typically contain a tangled web of inter-object links,
making the identification of a clear containment tree a
non-trivial problem. In particular, it is often difficult to
disentangle “containment” relationships from “uses” or
“peer” relationships where no containment is intended.
(The containment hierarchy can actually be thought of as
playing a dual role in this sense, because as well as
determining the nature of a component’s interface it also
plays a role in describing how it is deployed.)
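The difficulty of disentangling containment from peer links can be made concrete with a small sketch. The Document, Paragraph and SpellChecker names below are hypothetical, not taken from the paper: the Document owns its Paragraphs (composition), yet each Paragraph also holds a "uses" link to a shared SpellChecker, so the raw object graph alone does not reveal the containment tree.

```java
import java.util.ArrayList;
import java.util.List;

// Shared service: a "uses" relationship, not containment.
class SpellChecker {
    boolean check(String word) { return !word.isEmpty(); }
}

// Contained part: created and owned by exactly one Document...
class Paragraph {
    private final String text;
    private final SpellChecker checker;   // ...yet it also links outward
    Paragraph(String text, SpellChecker checker) {
        this.text = text;
        this.checker = checker;
    }
    boolean isValid() { return checker.check(text); }
}

// Container: creates and owns its paragraphs (composition),
// but also shares the checker with every part it owns.
class Document {
    private final List<Paragraph> paragraphs = new ArrayList<>();
    private final SpellChecker checker;
    Document(SpellChecker checker) { this.checker = checker; }
    void add(String text) { paragraphs.add(new Paragraph(text, checker)); }
    int size() { return paragraphs.size(); }
}
```

In this tiny graph, only the Document-to-Paragraph links are containment; the links to SpellChecker are peer relationships, and nothing in the link structure itself distinguishes the two.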
Why not therefore simply de-emphasize (or ignore) the
idea of containment in the structuring of object-oriented
and component-based systems? To a certain degree this is
the strategy adopted in the UML, which views
aggregation as a special case of association, and advises
developers to use the latter whenever they are in any
doubt as to the applicability of the former. While it may
be possible to de-emphasize containment between
individual components, however, the idea of the eventual
“system” containing the components from which it is
created seems inescapable. This idea is as fundamental as
the word component itself.
This brings us to a critical question:
should the assembly of a system be viewed as a
different activity (i.e. use different concepts and
techniques) from the assembly of a component?
In other words, should the application (or use) of a
component be viewed as involving different concepts and
techniques than the creation of a component? Most
approaches to component-based development do not
explicitly address this question, but their terminology
implies that they view the two as different activities. In
other words, most approaches view a system as being a
different kind of entity from a component.
We believe this to be fundamentally at odds with the
philosophy of component-based development. There
seems to be no good reason why an assembly of
components developed to meet the requirements for a
“system” should not at a later stage also be viewable as
an individual component, should their collective services
be useful in the creation of a larger system (i.e. as a
component). However, if one accepts the metaphor:
“a system = a component”
one is compelled to provide a uniform component model
which treats all components in the same way regardless
of their location in the composition hierarchy or whether
they are used as a system or as a part of a system. The
only factor which should determine the activities and
concepts applied to a component should be the relevant
requirements (functional or non-functional).
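The "system = component" metaphor can be sketched with a Composite-style uniform component model; the Component, Leaf and Assembly names are illustrative, not from the paper. An assembly built to serve as a "system" implements the same interface as its parts, so it can later be dropped into a larger assembly as an ordinary component.

```java
import java.util.ArrayList;
import java.util.List;

// Uniform component model: every entity, large or small, is a Component.
interface Component {
    String provideService();
}

// A primitive component.
class Leaf implements Component {
    private final String name;
    Leaf(String name) { this.name = name; }
    public String provideService() { return name; }
}

// An assembly of components that is itself a component,
// so a "system" can later become a part of a larger system.
class Assembly implements Component {
    private final List<Component> parts = new ArrayList<>();
    void add(Component c) { parts.add(c); }
    public String provideService() {
        StringBuilder sb = new StringBuilder();
        for (Component c : parts) sb.append(c.provideService());
        return sb.toString();
    }
}
```

Because Assembly and Leaf share one interface, the decision to treat an assembly as "the system" or as "a component" is made by its context of use, not by its kind, which is exactly the uniformity argued for above.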
3. The Type Hierarchy

Another hierarchy that plays a fundamental role in
component-based development is the type hierarchy. As
in object-oriented approaches, the basic idea of a type is
to control the linking together and interaction of
components based on some form of explicitly specified
set of expectations (i.e. a contract). Like containment, the
idea of a type also goes by various names, the chief
among them being “role”, “class”, and “interface”. These
concepts all essentially serve to define a set of
expectations that govern interactions and relationships
between objects. They can also be placed into hierarchies
which organize such “expectation specifications” in
terms of their commonalities and differences. These
hierarchies also go by various different names, including
type hierarchy, role hierarchy and interface hierarchy.
In most existing component technologies a component
type (i.e. interface) is embodied by the set of operations
that the component exports, and the information which
these operations receive and return (i.e. parameters).
Exception definitions are also sometimes included. While
this provides a rudimentary way of defining expectations,
it leaves a lot of information missing. For example, the
typical interface specification says nothing about the
expected effects of operations, or the expected
interleaving of operations. Guaranteed substitutability of
components, which is the underlying motivation for
typing, requires that the client and supplier of a service
be in complete agreement about the full nature of the
expectations to be satisfied.
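One way to capture expectations beyond a bare operation list, such as the expected interleaving of operations, is a runtime protocol check. The sketch below is a minimal, hypothetical illustration (FileSession is not a name from the paper): the session object rejects a write that arrives before open, making one small piece of the implicit contract explicit and checkable.

```java
// Interface as a set of expectations: a simple protocol check
// enforcing that a file must be opened before it is written,
// i.e. an expected interleaving of operations.
class FileSession {
    private boolean open = false;

    void open() { open = true; }

    void write(String data) {
        if (!open) {
            // The contract violation is detected at the interface,
            // rather than surfacing later as corrupted state.
            throw new IllegalStateException("write before open");
        }
        // ... write data ...
    }

    void close() { open = false; }
}
```

A conventional interface listing only open, write and close would admit a client that calls write first; the ordering expectation lives outside the signature, which is the gap noted above.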
The “system = component” metaphor mentioned in the
previous section, suggests one way of approaching this
problem; namely, to model a component interface by a
suite of UML diagrams as if the component were a
system. Various analysis/design methods present ways of
using UML (or equivalent) diagrams to describe the
requirements satisfied by a system, so it would seem
reasonable that these might also be useful for modeling
interfaces. At the Fraunhofer Institute for Experimental
Software Engineering we are investigating an approach
based on the diagram suite defined in the Fusion method
[1] as adapted for the UML by FuML [2].
4. The Meta-Level Hierarchy

Metamodeling has become fashionable. However, many
of the approaches which claim to be based on
metamodeling fail to follow through with the full
implications. The best example is the UML [3], which
ostensibly assumes the four level modeling framework
illustrated in Figure 1. Each layer represents a model
(i.e. a class diagram) instantiated from the elements
described in the layer above. The exceptions to this rule
are the bottom layer, which contains the actual objects
embodying the data of the user, and the top layer, which
is regarded as an instance of itself. Normal user class
diagrams reside at the second level, immediately above
the bottom “data” layer.

Figure 1. UML Model Framework (the four layers include
the UML Meta-Model)
The main consequence of this approach for components
is that elements in all but the bottom layer generally have
the properties of both an object and a class (i.e. they are
clabjects [4]). This is because they represent a template
for instantiating instances at the level below, and at the
same time they are themselves instances of templates
from the level above. This dual faceted view of
components is depicted in figure 2.
Figure 2. Class/Object View of a Component (type/class
view and instance/object view)
Most approaches that adopt such a multi-layered model
hierarchy, such as UML and OPEN, ignore this fact
because it leads to some awkward consequences.
Ironically, however, this dual object/class facet could
actually help address a problem that has been central to
the component debate for some time; namely “is a
component an object or a template (from which objects
can be created)”? Some authors, such as Orfali et al.,
view a component as an object with certain additional
properties [5], but others such as Szyperski, believe that a
component is not an object, but can only be used to
instantiate objects [6]. If one accepts the class/object
duality implied by a rigorous multi-level modeling
framework, the most general answer would be that a
component is both.
The phrase “most general” is used here because not all
components will necessarily have both facets all of the
time. However, the class/object duality occurs more often
than might be expected. For example, components which
are primarily intended to provide a template for
instantiating objects typically tend to have some “static”
information, such as a serial number, which essentially
corresponds to attribute values in the object facet. In the
UML, such attributes are called “tagged values”, while in
programming languages they are called “static data
members”. The only difference from normal attributes is
that they are not usually changed at run-time.
Similarly, components which are primarily intended to
serve as objects (e.g. CORBA objects), often have an
associated reflection API which can be used to provide
access to certain kinds of “static” information. Also,
environments such as CORBA usually store “meta
information” about running objects, typically in interface
repositories. These both essentially correspond to the
template facet of the components.
Even with existing component technologies, therefore,
explicit class/object duality may provide a natural and
clean unifying model for handling the various char-
acteristics of components and the different, often
separated, pieces of information that are maintained
about them. However, the possibility also remains that a
pure and fully object-oriented component model of the
kind characterized by Smalltalk, in which every class has
an explicit run-time presence, may offer one of the best
long term strategies for promoting component-based
software development.
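Java's reflection API gives a small taste of this duality: the class of any object is itself a run-time object carrying "template" information, and, echoing the self-describing top layer of Figure 1, the class of a class object is Class itself. The Probe helper below is a hypothetical illustration, not part of the paper's model.

```java
// In a fully reflective model every class also has a run-time presence.
// Java's java.lang.Class hints at this: the class of an object is itself
// an object that carries the "static" template facet of the component.
class Probe {
    static String templateFacet(Object o) {
        Class<?> c = o.getClass();   // the class, viewed as an object
        return c.getSimpleName();
    }
}
```

Following the chain one step further, o.getClass().getClass() yields java.lang.Class, the point at which the hierarchy, like the top layer of the four-level framework, describes itself.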
5. The Architecture Hierarchy

The three “dimensions” described in the previous
sections are fairly conventional, and in one form or
another appear in most existing component technologies.
However, the fourth “dimension” described in this
section is much less conventional, and as far as we are
aware does not exist explicitly in any of the current or
proposed component-based development technologies. It
also differs from the previous three hierarchies in that it
is not a hierarchy of components per se. In order to
explain precisely what it is rather than what it is not, we
first need to elaborate upon the problem that it is aimed
at solving.

5.1 The Problem
To illustrate the problem we will consider the classic
scenario of communication between remote entities in a
distributed system. The example will be based on a
simple client/server scenario in which a file manager
(server) supports requests to read and write strings to and
from files. We will consider both function-oriented and
object-oriented versions of the system, in both localized
and distributed forms.
Localized File Management System
As might be expected, the localized, function-oriented
version of the system is the simplest. Figure 3 illustrates
a client function, writer, issuing a call to a server
function, write.

    void writer() {
        write(fn, d);
    }

    void write(File f, String data) { ... }

Figure 3. Localized Function-Oriented Form
The write function takes two parameters: a reference of
type File serving to identify the target file and a String
representing the data to be written to the file. The read
server function is similar, but obviously the String
parameter would have to be passed by reference in order
to return the value. In this example, the actual file
reference is fn and the actual string to be written is d.
In an object-oriented system all functions have to belong
to objects. The basic difference in the object-oriented
version of the system, therefore, is that the write and
read functions have to be defined as part of a class
definition, as illustrated in figure 4 (the writer function
becomes the Writer class). The basic interaction is the
same, however.

    public class File_Manager {
        public void write(File f, String data);
        public void read(File f, &String data);
    }

    public class Writer {
        File_Manager fm;
        public void do_write() {
            fm.write(fn, d);
        }
    }

Figure 4. Localized Object-Oriented Form
The Writer and File_Manager classes in the object-
oriented version of the system can also be depicted
graphically. Figure 5 is an equivalent UML collaboration
diagram which indicates that an instance of Writer,
called w, sends a write message to an instance of
File_Manager called fm.

    w : Writer  --- write(fn, d) -->  fm : File_Manager

Figure 5. Localized UML Collaboration Diagram
Distributed File Management System
Whether written in a function-oriented or object-oriented
style, if the client and server are on the same machine the
compiler can simply link all the appropriate components
into a single program, and the interaction between them
will be implemented directly as a normal, local function
(or method) call.
However, if the file manager and writer need to execute
on different machines, things get a little more
complicated. It is now necessary to arrange for the
communication to be implemented via the network.
    Node A (client):

        void writer() {
            write(fn, d);
        }

        // stub
        void write(File f, String data) {
            make_call("write", fn, d);
        }

    Node B (server):

        void write(File f, String data) { ... }

        void request_dispatcher() {
            loop {
                ...
                write(fn, d);
            }
        }

Figure 6. Distributed Function-Oriented Form (the logical
interaction between writer and write is implemented via
the stub and the request dispatcher)
A well-known and widely used strategy for
implementing remote communication is to use a “stub”, as
depicted in figure 6. Instead of calling the server function
directly, as in the localized system, the client instead
calls the special “stub” which arranges for the interaction
to be implemented in terms of the communication
services supported by the network. Notice that the name
of the function to be called now has to be passed as a
parameter to the remote dispatcher to enable it to decide
which of its local functions to call. In some
circumstances such stubs can be generated automatically,
but in others they may have to be coded by hand. In either
case, the stub is linked into the client’s program instead
of the original implementation of the server function.
On the server’s side, some form of request dispatcher
(a.k.a. entry port) is needed to receive incoming
messages and call the original server function on the
remote client’s behalf. This is the request_dispatcher
illustrated in figure 6. The job of this entity is to respond
to incoming service requests by decoding the message
and invoking the appropriate function.
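The stub-and-dispatcher arrangement of figure 6 can be mimicked in-process. The Network, FileServer and WriteStub names below are hypothetical, and a shared map stands in for the real message transport: the stub keeps the original call signature but forwards the operation name as data, and the dispatcher uses that name to select the local function.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// The "network": maps operation names to server-side handlers,
// standing in here for real message transport.
class Network {
    static final Map<String, BiConsumer<String, StringBuilder>> handlers =
            new HashMap<>();
    static void makeCall(String op, String file, StringBuilder data) {
        handlers.get(op).accept(file, data);  // dispatcher picks the function
    }
}

// Server side: the original function, registered with the dispatcher.
class FileServer {
    static final Map<String, String> disk = new HashMap<>();
    static void install() {
        Network.handlers.put("write", (f, d) -> disk.put(f, d.toString()));
    }
}

// Client side: the stub has the same signature as the original write,
// but forwards the call, passing the operation name as a parameter.
class WriteStub {
    static void write(String file, StringBuilder data) {
        Network.makeCall("write", file, data);
    }
}
```

The key point mirrored here is that the client's source is unchanged by distribution: it still calls write(file, data), and only the linked-in implementation differs.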
This same idea can of course be applied in the object-
oriented version of the system. In fact, this is the basis of
the ubiquitous “request broker” technology underlying
CORBA and other distributed object environments.
    Node A (client):

        public class Writer’ {
            ORB orb;
            public void request_write() {
                orb.request("write", fn, d);
            }
        }

        public class ORB {
            public void request(String name, File f, &String data);
        }

    Node B (server):

        public class File_Manager {
            public void write(File f, String data);
            public void read(File f, &String data);
        }

        // dispatcher (ORB)
        void request_dispatcher() {
            loop {
                ...
                fm.write(fn, d);
            }
        }

Figure 7. Distributed Object-Oriented Form
As illustrated in figure 7, the job that previously fell to
the stub in the function-oriented version of the system
now falls to a method of the ORB. In this example the
method is called request. The body of this method is
essentially equivalent to the stub, and sends the
appropriate information over the network in order to
implement the required interaction.
The job of the request dispatcher at the other end is also
played by an ORB. ORBs therefore play the general role of
mediators between remote objects which wish to interact.
The example is a little artificial since the ORB methods
have parameters which are specific for this application,
whereas in general of course they would be more
generic. Figure 8 provides a UML interaction diagram
for the implementation illustrated in figure 7.
    w : Writer’  --- request("write", fn, d) -->  oc : ORB
    oc : ORB  --- make_call("write", fn, d) -->  os : ORB
    os : ORB  --- write(fn, d) -->  fm : File_Manager

Figure 8. Distributed UML Collaboration Diagram
Different distributed object technologies use words such as
“stub” and “proxy” in non-standard ways. In this discussion we
use the word in a general sense, not in the technical sense of
any particular distributed object standard (e.g. CORBA, Java
RMI etc.).
Interface Vicissitude
So what is the problem? The basic issue is that in the
object-oriented (and hence component-oriented) version
of the system, the interface between objects can change
depending on the level of abstraction at which the
interaction or relationship between them is described.
This can be seen by comparing figures 4 and 7, or their
graphical UML equivalents, 5 and 8. In figures 4 and 5,
the client, Writer, has an interface with File_Manager in
which it invokes the write operation. In a distributed
implementation, this interaction might be referred to as
the logical interaction. However, in figures 7 and 8, by
contrast, the client, Writer’, has no interface with
File_Manager at all, but instead has an interface with the
ORB, in which it invokes the request operation.
The phenomenon is not confined to the implementation
of distributed communication, or to just two architecture
levels. On the contrary, it occurs whenever an abstract
interaction is refined into a more detailed description
involving lower level components and less abstract
interactions. Examples include transactions, security,
persistence etc. - in fact, almost any service provided by
component-based environments such as CORBA. The
idea can also obviously be generalized to multiple levels.
In fact, the interaction described in this example can
easily be generalized to a third level by viewing the type
File as a “persistent” class rather than as a simple
reference type, and treating the write operation as a
method of this class rather than of File_Manager. This
would give the following view of the interaction,
illustrated textually in figure 9 and graphically in figure 10.
    public class File {
        public void write(String data);
        public void read(&String data);
    }

    public class Writer {
        File fn;
        public void do_write() {
            fn.write(d);
        }
    }

Figure 9. “Persistent Class” Object-Oriented Form
    w : Writer  --- write(d) -->  fn : File

Figure 10. “Persistent Class” UML Collaboration Diagram
If we think of the structure and interactions described by
the preceding figures as representing the architecture of
the system (which in essence is what is meant by
“architecture”), this means that the system can be
considered to have different architectures at different
levels of abstraction. (Vicissitude, n: regular change or
succession of one thing to another, alternation; mutual
succession, interchange. Webster’s Unabridged
Dictionary.) Figures 4 and 5 represent descriptions of the
architecture of the system (the first textual, the second
graphical) which are just as valid as figures 7 and 8 (and
figures 9 and 10); the only difference is the level of
abstraction at which the interaction to write information
to a file is described.
This would perhaps not be such an issue if the properties
of the components involved in each view remained
constant, but this is not the case: the interface of the user
component, Writer, is completely different in each case.
In other words, the interface of Writer changes
depending on which architectural perspective it is viewed
from. This is what we refer to as “interface vicissitude”.
Why is this a problem? In this small example we have
shown three equally valid views of the architecture of the
system, each with different interfaces for the Writer
component. This begs the question as to which of the
architectures is the correct (or best) one, or alternatively
which of the interfaces of Writer is the correct (or best)
one? If only one is to be considered the architecture,
which one is it and how is it chosen?
Of course, it is always possible to place a wrapper around
an ORB in the style of the Adapter pattern to make it
have the appearance of the final server. In this example,
this would mean placing a “proxy” on the client side to
present the File_Manager interface to Writer’ instead of
the ORB interface. But this essentially represents an attempt
to simulate one architecture in terms of another, and
implies that for some reason one architecture (or
interface) has been chosen as preferable to another.
However, unless superior tools are available at the higher
abstraction level, or the translation to the lower level is
fully automated, inserting such proxies only serves to
complicate the lower-level architecture and decrease its
efficiency. The issue is one of architecture modeling, or
conceptualization, rather than interface adaptation.
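Such a client-side proxy might look like the following sketch (Orb and FileManagerProxy are illustrative names, and the ORB request signature is simplified): the proxy re-exposes a write operation with the File_Manager shape while translating each call into a generic ORB request.

```java
// A simplified stand-in for the ORB's generic request interface.
class Orb {
    String lastRequest = "";
    void request(String op, String file, String data) {
        // In a real broker this would marshal and send the message;
        // here we just record it for inspection.
        lastRequest = op + "(" + file + "," + data + ")";
    }
}

// Adapter-style proxy: restores the higher-level File_Manager
// interface on the client side, hiding the ORB from the client.
class FileManagerProxy {
    private final Orb orb;
    FileManagerProxy(Orb orb) { this.orb = orb; }

    // Same shape as the "real" write operation, so the client's
    // abstract interface is preserved at the cost of an extra layer.
    void write(String file, String data) {
        orb.request("write", file, data);
    }
}
```

As the text notes, this simulates the abstract architecture inside the concrete one: the client regains its logical interface, but an extra object now sits on every call path.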
It is interesting to consider why this problem does not
arise in function-oriented software architecture. The
reason goes right to the heart of what differentiates
function-oriented approaches from object-oriented
approaches: object identity. In the function-oriented
versions of the system (figures 3 and 6) the interface
between the writer and the write function is not at all
affected by the identity of the communicating partners.
As a consequence, the real write function can be replaced
by a stub (to handle remote communication) without in any
way affecting the original communicating parties. This
facilitates the creation of layered architectures of the kind
characterized by the ISO Open Systems Interconnection
model illustrated in figure 11. Because interactions at a
given level can be refined without affecting the original
communicating parties, clean layers can be established in
which each module occupies one and only one layer.
(The interface involved is often called the “required” or
“imported” interface since it defines facilities used by the
component rather than services provided for use by others.)

Figure 11. ISO OSI Model
In the object-oriented approach, in contrast, the identity
(and hence the type) of the server object is bound up in
the definition of every client/server interface. This means
that it is not possible to refine a high-level interaction by
the introduction of intermediary objects without
changing the interface of the client objects. This is
illustrated in figure 12, which shows that in refining an
interaction between two objects, not only are additional
objects A and B introduced (as in the function-oriented
approach) but the interface of the client is changed as
well.
Figure 12. Interaction Refinement
5.2 The Solution
From a practical perspective the architecture which is the
most “real”, and which is normally regarded as the
architecture, is the one that describes the system at the
highest level of abstraction that can be understood and
manipulated by automated tools. In other words, it is the
one which describes the system in terms of concepts
supported directly by a high-level programming or
interface language. For example, a CORBA developer
usually thinks in terms of an architecture that directly
involves ORBs and the other mediating elements that
make up the OMG OMA.
Theoretically speaking, however, this particular
architecture is no more “real” than any of the other
architectures at higher levels of abstraction. They each
represent an equally valid and complete description of
the system. In fact, there are also often additional
architectures at (hidden) lower levels because most
compilers insert additional “system” objects to support
the implementation of the abstractions in the
programming language. For example, the transformation
from the “persistent class” architecture illustrated in
figure 9 and 10, to the “File Manager” architecture
illustrated in figures 3 and 4, is often performed
automatically by a compiler when implementing
persistent objects. Ultimately, interaction with the kernel
itself can be thought of as a refinement of higher level
interactions.
For these reasons, we believe the only theoretically clean
way of handling the interface vicissitude problem
outlined above is to define a conceptual framework
which makes all the important architectural levels
explicitly visible by organizing them in a hierarchy of the
form illustrated in figure 11. This illustrates a conceptual
architecture that was used within the MISSION project at
the University of Houston – Clear Lake to visualize the
highly complex system architectures needed for safety
critical, non-stop, distributed systems [7]. Each level in
this model defines a full object-oriented architecture,
each providing a complete, and semantically equivalent,
description of the system. An architecture at a given
level represents a refinement of the architecture at the
level above. The compiler (and other automated tools)
may only directly understand the bottom level, but
explicitly identifying and elaborating the more abstract
levels (and the relationships between them) is
increasingly being recognized as important for
supporting systematic development and traceability, and
thus ultimately system quality [8].
Figure 11. Architectural Levels
Stratified Architectures
Although each of the levels in figure 11 represents an
architecture in the sense that the term is usually used (i.e.
a description of the elements, relationships and
interactions in a system), we believe it is not particularly
intuitive to think of a system as having multiple
architectures. Instead, we prefer to stay with the concept
of a single architecture for a single system, but to
introduce the concept of multiple strata within a given
architecture. Thus, instead of saying that figure 11
illustrates multiple architectures, we would say that it
illustrates a single architecture consisting of multiple
strata.

It is important to realize that these strata are not layers in
the normal sense. This is because one stratum may
actually contain the same object as another stratum, but
with a different interface reflecting the effects of an
interaction refinement. In contrast, architectural elements
in a conventional layered architecture, such as the ISO
OSI architecture illustrated in figure 11, appear in one
and only one layer. Of course, it is possible to define a
link between the two concepts. Within a given level of a
stratified architecture it is possible to define layers
corresponding to the higher level strata, in which
elements are allocated to layers depending on their
stratum of first appearance.
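The way one object can appear in two strata with two different interfaces can be sketched with a pair of Java interfaces (LogicalWriter and RequestSender are hypothetical names): the same component satisfies both, with the upper-stratum operation refined into the lower-stratum one.

```java
// Upper stratum: the abstract, logical interface seen by clients.
interface LogicalWriter {
    void write(String file, String data);
}

// Lower stratum: the refined, request-oriented interface.
interface RequestSender {
    void request(String op, String file, String data);
}

// One object, present in both strata with a different interface in each.
class WriterComponent implements LogicalWriter, RequestSender {
    final StringBuilder log = new StringBuilder();

    public void write(String file, String data) {   // upper-stratum view
        request("write", file, data);               // refined into...
    }

    public void request(String op, String file, String data) {
        log.append(op).append(":").append(file);    // lower-stratum view
    }
}
```

Unlike a conventional layer, neither interface "owns" the object: which one a collaborator sees depends purely on the stratum from which the interaction is described.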
It is also important to realize that these layers do not
correspond to the usual analysis, design and
implementation descriptions of a system that are assumed
in most object-oriented development methods. This is
because each architecture level describes “how” the
system is organized (as opposed to what it is supposed to
do, as in analysis models), and each provides as full a
description of the system as any other.
Object Metamorphosis
The architecture strata concept addresses the interface
vicissitude issue from an “architectural” perspective, but
it does not provide a good way of dealing with the
phenomenon from the perspective of an individual object
or class. For example, what is the nature of the
relationship between an object and its refined counterpart
in figure 12, or between the Writer and Writer’ classes
from figures 4 and 7 respectively?
A metaphor from real life which seems to reflect the
phenomenon fairly accurately is the idea of
metamorphosis. The relationship between Writer and
Writer’ seems similar to the relationship between
a caterpillar and the butterfly which it eventually
becomes. The caterpillar and the butterfly are the same
object, but have totally different external forms and
characteristics. This also ties in well with the idea of
architectural strata, since the concept of metamorphosis is
also applied in geology to the process which changes a
certain kind of rock into another form.
In the context of a stratified architecture, therefore, we
describe Writer’ as a metamorphosis of Writer. In
figure 11, a black circle
within an architectural stratum represents a
metamorphosis of an object in the level above whereas a
white circle denotes an object newly introduced at a
given level.
Related Concepts
The idea of viewing a component-based architecture as
containing multiple strata, in which a given component
may appear in numerous strata in different forms, seems
to have a relationship to several other areas of object
technology that are currently generating significant of
interest. We mention the main ones briefly below.
Connectors

The idea of “connectors” is a recurring theme in abstract
component-based programming models. The goal is
basically to try to reify the connections between
components, so that like components they also can be
treated as first class citizens. However, most attempts
have run into problems in handling the large variety and
form of “connectors” at a single level of abstraction. The
idea of explicitly defining multiple architecture levels
may help address this problem by cleanly allowing
objects to be associated with connectors, albeit at a lower
architectural stratum than the components they originally
connect.
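A minimal reified connector might look as follows (the Connector class and its tracing behaviour are illustrative assumptions, not from the paper): the link between caller and callee becomes an object in its own right, which can then carry interaction-level behaviour such as tracing.

```java
import java.util.function.Consumer;

// A connector reified as a first-class citizen: it stands between
// caller and callee, and can itself be replaced or decorated.
class Connector {
    private final Consumer<String> callee;
    final StringBuilder trace = new StringBuilder();

    Connector(Consumer<String> callee) { this.callee = callee; }

    void send(String message) {
        trace.append(message);   // connector-level behaviour (e.g. tracing)
        callee.accept(message);  // forward the message to the component
    }
}
```

In a stratified architecture such a Connector object would naturally live one stratum below the components whose abstract interaction it implements, which is the association suggested above.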
Reflective Architectures
A fashionable concept in recent years has been the idea
of reflective architectures, in which aspects of a system’s
functionality related to the interaction of regular
components are separated into a distinct “meta” level. In
a sense, a stratified architecture can be viewed as a
generalization of such a reflective architecture, since it
also provides a way of separating functionality related to
component interaction. However, we believe the
stratified architecture concept to be more powerful
because not only can it be generalized to multiple
abstraction levels, but it also requires the introduction of
fewer additional modeling concepts. In contrast,
reflective architectures require quite a complex set of
additional mechanisms.
Component Frameworks
Last but not least, the stratified architecture concept may
prove useful in the creation of flexible component
frameworks. A framework is essentially a semi-complete
software system which has been carefully parameterized
with respect to the components that represent the most
variable elements of the domain. A system can thus be
instantiated from the framework simply by providing the
specific components needed for the particular application.
The problem is that components (i.e. objects) are not
normally the most variable elements of a software
architecture - the interactions (i.e. functions) between
components are. Indeed, the validity of this statement is
one of the main grounds given for the superiority of the
object-oriented approach over function-oriented
approaches. By providing an explicit representation of
high-level interactions in terms of lower-level objects and
interactions, a stratified architecture can facilitate
component-based parameterization that much more
closely matches the elements of highest variability in a
domain. In other words, in contrast with a normal
framework, based on a normal (single-level) architecture,
a stratified framework can be parameterized with respect
to components from various strata, not just the top level.
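This style of parameterization can be sketched as follows (a hypothetical example; the framework and payment names are invented for illustration). The variation point supplied when instantiating the framework is an interaction between components, not a component itself:

```python
# Illustrative sketch of a framework parameterized with respect
# to an interaction rather than a component; names are invented.

class OrderFramework:
    """Semi-complete system: order handling is fixed, but the
    payment *interaction* is the hot spot supplied at
    instantiation time."""
    def __init__(self, payment_interaction):
        self._pay = payment_interaction

    def place_order(self, customer, amount):
        # Fixed framework logic delegates to the variable interaction.
        return self._pay(customer, amount)


def direct_payment(customer, amount):
    """One possible realization of the payment interaction."""
    return f"{customer} paid {amount}"


def escrow_payment(customer, amount):
    """An alternative realization, swapped in without touching
    the framework or the components it composes."""
    return f"{customer} escrowed {amount}"
```

Instantiating `OrderFramework(direct_payment)` versus `OrderFramework(escrow_payment)` varies the interaction behaviour while leaving the participating components untouched, which is what the stratified view makes explicit.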
Conclusion
In this paper we have identified some of the major
architectural issues affecting component-based software
development, and have described what we believe to be
the four predominant architectural dimensions.
Two other dimensions are also worthy of mention, and
play a significant role in component-based development.
The first of these is the dimension which deals with
different versions and releases of components. In a sense
this can be viewed as the time dimension, since versions
are created and exist over time. In the terminology used
previously this would be thought of as an intrinsic
dimension since it involves the components themselves.
The second might best be thought of as the
“representation” dimension, since it deals with the
different ways in which a given component can be
represented (e.g. graphically, textually etc.). As such it
would be an extrinsic dimension, since it does not
involve the components per se.
The architecture dimension explained in section 4
handles one particular form of refinement, albeit one of
the most important, which is interaction refinement.
However, there are numerous other forms of refinement
which exist between different descriptions of the same
phenomenon at different levels of abstraction. These
different forms of refinement need to be distinguished
from translation which describes a given phenomenon in
a different way but at the same level of abstraction.
Of the various dimensions presented, the most
unconventional is the architectural dimension which we
believe is needed to address the interface vicissitude
problem that arises whenever interaction refinement is
explicitly documented within a component-based system.
After describing the details of the problem, we presented
a solution based on the concepts of stratified architectures
and object metamorphosis. We also explained how this
approach could help in other important object-oriented
technologies. In particular, we briefly discussed how
stratified frameworks could be designed to provide a
better representation of the high-variability elements
(i.e. hot spots) in most software domains;
namely those related to component interactions.
The main question which remains unanswered in this
paper is how these various dimensions relate to one
another and which, if any, is the most dominant. This
question, as well as the other ideas presented in the paper,
is the subject of ongoing research within the component
engineering group at the University of Kaiserslautern, and
the SOUND and KobrA projects at the Fraunhofer
Institute for Experimental Software Engineering.
References
1. D. Coleman et al., Object-Oriented Development:
The Fusion Method, Prentice Hall, 1994.
2. C. Atkinson. “Adapting the Fusion Process.” Object
Magazine, pages 32–40, Nov. 1997.
3. C. Atkinson, “Supporting and Applying the UML
Conceptual Framework,” Lecture Notes in Computer
Science, UML'98, Mulhouse, France, 1998.
4. C. Atkinson, “Metamodeling for Distributed Object
Environments,” First International Enterprise
Distributed Object Computing Workshop
(EDOC’97). Brisbane, Australia. 1997.
5. R. Orfali, D. Harkey and J. Edwards, The Essential
Distributed Object Survival Guide, Wiley and Sons.
6. C. Szyperski, Component Software - Beyond Object-
Oriented Programming, Addison-Wesley /ACM
Press, 1998.
7. C. Atkinson and C. W. McKay, “A Generic
Architecture for Distributed, Non-Stop, Mission and
Safety Critical Systems,” Second IFAC Workshop
on Safety and Reliability in Emerging Control
Technologies, Daytona Beach, FL, November 1995.
8. D. D'Souza and A. C. Wills, Catalysis: Objects,
Frameworks, and Components in UML, Addison-
Wesley, 1998.