Distributed Architectures and Components for
the Virtual Enterprises
Alain Zarli, Patrice Poyet
CSTB, 290, route des Lucioles, B.P. 209, 06904 Sophia Antipolis, France
{zarli | poyet}
A comprehensive solution for enterprise information systems, access to technical or business information, and
electronic publishing and commerce for the concurrent enterprising of tomorrow, requires the integration of new
sophisticated technologies and systems. This paper presents the research and results achieved in the framework
of the VEGA1 project, establishing an infrastructure supporting the technical and business operations of virtual
enterprises, with workflow tools and distributed architectures developed in compliance with product data
technology. The VEGA objective is a platform enabling companies to work together, serving as the foundation for
future integration and distribution of business components as promoted in the WONDA2 project. The WONDA
main goal is to specify a scalable 3-Tier component-based architecture for secure and sophisticated access to
versatile information systems and electronic content. This paper, which reports on the VEGA story and
introduces some WONDA concepts, provides a limited picture of the overall objectives and results achieved
in this still experimental area, and should be an incentive for the reader to go further into the projects’ achievements.
Keywords: Product data modelling and STEP, CORBA-based middleware, Workflow, Business objects and components.
1. Context and objectives for distributed enterprises
Large Scale Engineering (LSE) projects nowadays lead companies on remote sites to group
together in short-term business relationships for the realisation of complex products, with the need to
communicate effectively, and quickly and accurately store, access and transfer any information. But so
far, the use of many different computing tools to manage vast volumes of information has led to a
time consuming and labour intensive integration task. Competition now asks for continuous
improvement not only in the individual tools, but in their ability to share data and inter-operate. New
distributed Information Technology (IT) solutions typically have to support companies in their
requirements: networking technology, object-oriented distributed systems, component-based
development and workflow techniques are the foundations for future concurrent enterprising,
including the access to and processing of multiple enterprise data, and complex information management.
A major issue is to make sure the information is easily accessed and manipulated by multiple
applications and remains consistent over the distributed computing environment, so that all software
applications within the Virtual Enterprise (VE) can deal with product/project information. A second
issue is that business is now moving fast. Thus, enterprises must be able to promptly modify their
information systems: the only answer is to have standardised, flexible, and extensible IT foundations,
with solutions for scalability, robustness and platform neutrality in heterogeneous environments. This
paper provides an overall picture of the objectives and achievements of 2 complementary ESPRIT
projects focusing on critical parts of 3-Tier architectures, to deal with these new challenges.
1 VEGA (Virtual Enterprises using Groupware tools and distributed Architectures) is the ESPRIT project 20408. Internet site:
2 WONDA (WOrld wide eNterprise Data interoperAbility) is the ESPRIT project 25741. Internet site:
The VEGA project aims to establish an IT infrastructure which supports the technical and business
operations of VEs. Workflow tools and distributed architectures are developed in compliance with
product data technology standardisation activities, in line with the current specifications coming from
the OMG3, the WfMC4 and the ISO STEP5, and extending their capabilities as needed for engineering
collaboration in a flexible distributed environment. VEGA actually uses 3 different technologies:
- Product-Data Modelling (PDM) for the specification of meaningful project information;
- middleware technology for the distribution of project information; and
- workflow management for the control of the flow of information and work in the VE, relying on workflow technology as defined by the WfMC for the design of process control.
Thus, the VEGA platform relies on high level technology, including STEP for PDM, the OMG
CORBA6 framework for middleware-based communication, and WfMC-related workflow technology
for design of process control. Moreover, the continued growth of the Internet and its de facto standards
leads to new ways of world-wide information communication and transfer. VEGA works for a tighter
integration of STEP, CORBA and WEB technologies within a DIS7, providing both the support of
distributed interoperable client/server systems, and the support of WEB access to information and
services through an Internet based navigation, building upon CGI and Java technologies.
Building on an infrastructure for distributed sharing of product databases and knowledge
sources such as VEGA’s, the WONDA project aims at specifying an infrastructure for the deployment of concurrent
business applications in Intra/Extranets, with secure and sophisticated access to versatile information
systems and electronic content. Based on a scalable 3-Tier architecture, WONDA tackles advanced
concepts like business objects and multimedia content representation, with open flexible solutions
founded on components for information and media management issues, and their interoperability
through standard interfaces. The WONDA approach leads to a clear separation between access to
corporate legacy data, business and application logic, and media management (user interface logic,
compound documents, etc.), regarded as a solid foundation for information systems better fitting
business user requirements. In this paper, we only introduce notions related to business objects.
2. Distributed Objects and architecture: the VEGA platform
The VEGA project and infrastructure have already been presented in previous papers: [Amar97],
[Jung97], [Zarli97], [Zarli98], [Step98] among others. So we only summarise in this section the main
concepts and technologies tackled in VEGA, and the main results achieved so far in the fields of
distributed infrastructure and generic services for the VE.
2.1 Methodologies and standards
The best way to allow software applications to inter-operate is to provide them with standard
interfaces, be it a standardised format for data exchange, a standard functional API to access data,
or a common communication protocol and framework between the applications. Moreover, in order to
allow for developing different parts of a complex information system with a reasonable independence,
it is obviously worth omitting proprietary interfaces in favour of standard ones, so that existing
developments can be put into a new environment with minimal effort. Such issues have led to many
research efforts to achieve effective standardisation of methodologies, languages and infrastructures
for information systems. With respect to PDM and data exchange, VEGA refers to:
- STEP ([Björ95], [Fowl95]), an ISO standard for the representation and exchange of product data in a uniform and complete way during the whole product life-cycle, promoting the EXPRESS language [ISO94a], a STEP physical file format [ISO94b] and an API for common access and sharing of product databases [ISO95] through data- and application-independent mechanisms.
3 Object Management Group - an international industrial standardisation organisation.
4 Workflow Management Coalition.
5 Standard for the Exchange of Product model data, ISO/TC184/SC4.
6 Common Object Request Broker Architecture.
7 Distributed Information Service (VEGA WorkPackage 4).
- The IFC (Industry Foundation Classes), developed by the IAI (International Alliance for Interoperability) as a universal model for integration purposes and collaborative work in the AEC/FM industry.
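To illustrate the modelling layer, the sketch below shows how a simplified, entirely hypothetical EXPRESS entity could map onto a Java class, in the spirit of generator tools described later in this paper; the entity, its attributes and the class names are invented for illustration only.

```java
// Illustrative sketch: the EXPRESS entity below is invented, and the mapping
// to Java mirrors what a schema-driven class generator might emit.
//
//   ENTITY wall;              -- hypothetical EXPRESS source
//     height   : REAL;
//     material : STRING;
//   END_ENTITY;
class Wall {
    private final double height;    // EXPRESS REAL   -> Java double
    private final String material;  // EXPRESS STRING -> Java String

    Wall(double height, String material) {
        this.height = height;
        this.material = material;
    }
    double getHeight() { return height; }
    String getMaterial() { return material; }
}
```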
Regarding distributed networked infrastructures, VEGA promotes a CORBA-based backbone.
CORBA ([OMG98a], [Mowb97], [Pope98]) is an OMG specification for application interoperability
in client/server distributed architectures, allowing objects described in any language to be shared
across heterogeneous operating systems, platforms and network environments. All CORBA compliant
applications are coupled to an ORB8, being the middleware in charge of establishing the client-server
relationships between objects, and seamlessly interconnecting multiple object systems.
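As a hedged sketch of the CORBA approach, the fragment below shows a hypothetical IDL interface (in comments) and the kind of Java interface an IDL compiler would derive from it. The interface and its operations are invented; a real deployment would obtain remote references through an ORB rather than instantiating a local class.

```java
// Hypothetical IDL source:
//
//   // product.idl
//   interface ProductNode {
//     string name();
//     double volume();
//   };
//
// The Java counterpart an IDL-to-Java compiler would roughly produce:
interface ProductNode {
    String name();
    double volume();
}

// A trivial local implementation standing in for a remote CORBA servant.
class LocalProductNode implements ProductNode {
    public String name() { return "beam-01"; }
    public double volume() { return 0.45; }
}
```

The point of the indirection is that client code programs against `ProductNode` only, whether the implementation lives in-process or behind the ORB.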
Finally, VEGA integrates a set of widespread WEB languages and technologies within the DIS:
- HTTP, the WEB protocol for navigating between documents with hyperlinks, dedicated to wide-scale networks;
- HTML9, used to create hypertext documents;
- VRML10, to model interactive 3D objects and worlds, intended to be a universal interchange format for 3D graphics and multimedia through the WEB;
- and Java, an object-oriented, architecture-neutral and platform-independent language, along with class libraries and APIs for interfacing to existing technologies (databases, middlewares, etc.), and a full execution environment for the deployment of distributed applications.
2.2 The COAST communication middleware
The VEGA platform fundamentally relies on the COAST11 integration system. This system, built on
CORBA technology, stores in a distributed way and gives transparent access to product data specified
by EXPRESS schemata. The COAST is designed as a general platform that accommodates the sharing of
product model information in a distributed heterogeneous environment, provided that any model is
defined using EXPRESS. The COAST is based on an ORB, and provides a set of services that are
partially standard OMG and partially native COAST services. It also exposes a COAST API [Köth98],
an object-oriented access method supporting distributed environments by default, and hiding from
COAST-compliant applications all details of distribution, heterogeneity and storage
schemata. Moreover, COAST activities are transactional, for safe information sharing. Based on the
COAST, VEGA provides a model driven platform, supporting plug-in and plug-out capabilities for the
VE. This architecture promises that an application can exchange specific types of objects
without depending on knowledge of how the data is handled in other applications.
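The access shape that the COAST hides behind can be illustrated with a small in-memory mock. This is emphatically not the COAST API itself: all names are invented, and the mock only shows the idea of parts containing partnodes that hold the actual information, with storage details kept away from client code.

```java
import java.util.HashMap;
import java.util.Map;

// In-memory mock (not the real COAST API): parts contain partnodes, and
// partnodes hold the actual information; client code never sees where or
// how the data is physically stored.
class MockCoast {
    private final Map<String, Map<String, Object>> parts = new HashMap<>();

    // Store an entity under a part/partnode pair.
    void store(String part, String partnode, Object entity) {
        parts.computeIfAbsent(part, k -> new HashMap<>()).put(partnode, entity);
    }

    // Open a partnode; returns null if the part or partnode is unknown.
    Object open(String part, String partnode) {
        Map<String, Object> p = parts.get(part);
        return p == null ? null : p.get(partnode);
    }
}
```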
An important issue in the development of COAST has been its promotion at the OMG. The COAST
specification and implementation have strongly contributed to the new OMG “PDM Enablers
specification” industrial standard [OMG98b] on product management systems, as developed in the
OMG Manufacturing Domain Task Force. Firm interconnections exist between COAST and the PDM
Enablers, and this combined architecture is a good candidate to build the business- and production-
related information infrastructure for the future VE.
2.3 Workflow for process control in the VEGA platform
Workflow systems bridge the gap between the process and data modelling worlds [Gawl94]. As
described in [Zarli97] and [Schu98], VEGA develops, on the one hand, a workflow process meta-model to
define workflow processes in VEs and to link product model data to the workflow definition, and, on the
other hand, a workflow management architecture to manage workflow across company boundaries.
The VEGA workflow meta-model meets the specific requirements of concurrent engineering
processes in LSE VEs. Its core is based on a generic and extendible meta-model of workflow
processes as specified by the WfMC, which supports the WPDL12 ([WfMC94],[WfMC96]).
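A WPDL-style process definition can be sketched as a small data structure: a named process made of activities connected by transitions. The field and method names below are invented and far simpler than the actual VEGA meta-model; they only illustrate the kind of structure such a definition carries.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a WPDL-style process definition (names invented):
// a process owns a set of activities and the transitions linking them.
class ProcessDefinition {
    final String name;
    final List<String> activities = new ArrayList<>();
    final List<String[]> transitions = new ArrayList<>(); // {from, to} pairs

    ProcessDefinition(String name) { this.name = name; }

    void addActivity(String activity) { activities.add(activity); }

    void addTransition(String from, String to) {
        transitions.add(new String[] { from, to });
    }
}
```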
But the main achievement in VEGA with respect to workflow is the realisation of a Distributed
Workflow Service (DWS) architecture, along with a specification allowing the integration of invoked
applications into the proposed architecture [Schu98]. It realises both WfMS interoperability and global
workflow monitoring across company boundaries, through a CORBA-based implementation for the
8 Object Request Broker.
9 HyperText Mark-up Language.
10 Virtual Reality Modelling Language.
11 CORBA Access to STEP information storage.
12 Workflow Process Definition Language
communication of workflow enactment services with invoked applications, and for inter-enactment-
service communication between heterogeneous, multi-vendor workflow systems.
The current DWS implementation in VEGA is based on the workflow management system PrM
(Process Manager) as a workflow backbone responsible for handling the global VE workflows. PrM is
used to launch activities which can be individual applications or activities for another workflow
system in the VE to carry out (e.g., an application for Energy Calculations has been integrated in the
LinkWorks workflow system, in turn linked with PrM). The global workflow monitoring is achieved
thanks to a (logically) centralised database where all runtime information of workflows is collected.
An additional global workflow monitor can access the database to show and analyse the information.
2.4 Schema interoperability
Besides application interoperability, interoperability between the models involved in a design process is
also needed. This led VEGA to develop a Schema Interoperability Service (SIS,
[Rang97]) for the interoperability of STEP product models, and in the future for models coming from
other standardisation initiatives. Supported by the EXPRESS Data Manager (EDM), the SIS provides
STEP physical file I/O, a conformance checking service and a schema mapping service based on the
STEP EXPRESS-X language. The SIS has been used in VEGA to develop various model conversions,
e.g. a set of rules that convert IFC1.5 to AP225 data, used by one DIS application (see next section).
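An EXPRESS-X mapping is declarative, but its effect can be illustrated procedurally. The tiny function below only shows the principle of converting an entity of one schema into an entity of another; the attribute names are invented, not the real IFC1.5 or AP225 definitions.

```java
import java.util.Map;

// Illustration of the schema-mapping principle (attribute names invented):
// one schema's wall entity is rewritten into another schema's element record.
class SchemaMapper {
    static Map<String, Object> ifcWallToAp225(Map<String, Object> ifcWall) {
        return Map.of(
            "element_type", "wall",
            "shape_height", ifcWall.get("OverallHeight"));
    }
}
```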
2.5 The VEGA Distributed Information Service
Within VEGA, the development of the DIS is a first approach towards an end-user oriented service for
access to information through various representations, leading to the consideration of standardised front-end services.
The WEB offers standards and technologies for open networks, with a common interface for end users
and a generic way to deal with the presentation of multimedia information. As one of the DIS objectives
is to access distributed information through the COAST middleware and to present it at the level of end-user
desktops [Zarli98], the DIS is indeed a service providing basic integration of the WEB (end user
presentation layer) and CORBA technology (object-oriented communication layer). Three main DIS
applications have been chosen to be developed in the VEGA framework:
- A server for compliance checking of EXPRESS based product data, using IFC1.5: developed rules (partially) check the VEGA building for disabled access according to French regulations.
- A server for VRML-based 3D visualisation of geometrical (AP225) STEP data.
- A server for the visualisation of the structure of IFC-based product data.
To assist these applications, two additional developments have been undertaken in the VEGA DIS.
The first one is a late "Java Binding" on top of the SDAI/C Binding API of the SIS (and the
augmented EDM interface). This library can be imported and called directly from any Java program
for accessing the EDM database. The second one is a "Java Generator", written in Java on top of the
Java Binding, which generates for any EXPRESS schema (available as meta-data within the SIS) a set
of Java classes and generic functionality directly used within Java applications. The integration of the
first two DIS applications mentioned above with the COAST, the SIS and the DWS is now presented.
3. Distributed architectures: applications at work in VEGA
In the framework of the VEGA platform development and VEGA demonstration, several applications
have been connected to the common services of VEGA. We give below a short insight into two of
them. Both are described in more detail in [Zarli98].
3.1 Control of disabled person access to public buildings
This first DIS application checks the conformity of a building under design against the French
national code for Disability Access in Public Buildings. In the VEGA demonstration, the building has
been stored into a COAST database. The application part checking the rules is written in C++ and uses
the COAST C client API to access data. This application is used as a CGI script (a WEB server-side
application). The connection to the DWS is done from inside the user’s WEB browser thanks to a
Java applet. This applet is a DWS interface that allows the user to connect to the DWS, retrieve tasks
and notify the DWS of the performed task results. The applet uses the IIOP protocol to contact the
DWS over LANs or WANs, as shown in Figure 1. The following software has contributed to the
realisation of this application:
- Operating System: Microsoft Windows NT 4.0 sp3.
- Development Tools: Microsoft Visual C++ 5.0.
- WEB: Apache WEB server 1.3.3, Netscape Communicator 4.5.
- COAST environment: DEC Object Broker 2.7, COAST C API.
- DWS environment: DEC Order Handler 2.5-1.
[Figure omitted: the WEB browser’s Java applet contacts the Distributed Workflow Service, while a CGI binary on the server uses the COAST client binding to make functional calls on IFC 1.5 data through the COAST middleware daemon.]
Figure 1: the “Check Disable Access” VEGA DIS application.
The main steps of the application runtime are:
- The user launches his WEB browser and opens the “check access” HTML entry page from the WEB server. On reception of this page, the user then has to fill in the Java applet form to identify himself to the DWS in order to ask for a possible task to do (this is a pull-based mechanism, i.e. the user has to ask for tasks to be run, as opposed to push systems where the user gets automatically requested to do a task).
- When the applet receives a task to realise, the HTML form (the interface to the CGI script) gets updated with the corresponding COAST part and partnode names which contain the building model to check (a COAST part contains an information model, and is composed of partnodes which are the actual information containers).
- The user then chooses the rules he wants to be checked on the building. By pressing the Submit button, the user sends a CGI request to the WEB server. The WEB server then launches the check access application with the parameters the user gave in the HTML form.
- The application now connects to the COAST. Using its C API, the application opens the part and partnode the user asked for and applies the selected rules to the corresponding entities the partnode contains. If the building contains errors according to the regulations, anomalies and specific information on the elements that failed the rules are stored in an HTML report which is then passed back to the user.
- The user can then inform the DWS that his task has been performed, and can either ask for previous workflow steps to be redone (in case of anomalies) or permit the normal continuation of the workflow.
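The real rule-checking application is a C++ CGI program using the COAST C API, but its core logic of checking entities against a rule and accumulating anomalies into an HTML report can be mirrored in a few lines of Java. The door-width rule and its threshold below are invented for illustration, not taken from the French regulations.

```java
import java.util.List;

// Sketch of the server-side check (invented rule): entities failing the
// constraint are collected as anomalies into an HTML report fragment.
class AccessChecker {
    static String checkDoorWidths(List<Double> doorWidthsMetres, double minWidthMetres) {
        StringBuilder report = new StringBuilder("<ul>");
        for (double width : doorWidthsMetres) {
            if (width < minWidthMetres) {
                // anomaly: record the failing element in the report
                report.append("<li>door of width ").append(width).append(" m fails</li>");
            }
        }
        return report.append("</ul>").toString();
    }
}
```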
3.2 VRML generation from STEP AP225 data
This DIS application provides a translation facility from AP225 STEP models to VRML models, the
latter being viewable over LANs or WANs. This DIS is written in Java on top of an SDAI/Java binding,
itself on top of EPM’s EDM (see Figure 2). The application converting AP225 models into VRML is
written in Java and is built on top of three different layers: the first layer is a set of Java classes
resulting from an SDAI/Java generator, which are linked to the conversion application. This generator
creates a set of Java classes corresponding to an EXPRESS schema, the AP225 schema in this DIS.
These classes (for accessing data) use a second layer containing a Java/SDAI binding,
which is itself built atop a third layer consisting of EPM Technology’s EDM C toolkit accessed through the Java
Native Interface (JNI). The JNI allows Java code running within a Java Virtual Machine to operate
with applications and libraries written in other languages, such as C, C++, and assembly.
This DIS application is used as a servlet, which acts like a CGI script but is based on Java technology,
and also uses HTTP communication. The connection to the DWS is done in the same way as for
the first application described. The following software contributed to the realisation of this application:
- Operating System: Microsoft Windows NT 4.0 sp3.
- Development Tools: Kawa v3.13 (integrated Java development environment), a Java/SDAI class generator (developed by TNO, a VEGA partner), EPM Technology’s EDM, version 3.5.
- Development Libraries: jdk v1.1.7 (Java development kit), jsdk 2.0 (Java servlet development kit), swing 1.1b2 (Java graphical windowing library), SDAI/Java binding latest version (developed by TNO, a VEGA partner), EPM Technology’s EDM toolkit, version 3.5.
- WEB: Apache WEB server 1.3.3, jsdk2.0 servlet runner, Netscape Communicator 4.5.
[Figure omitted: the client desktop’s WEB browser connects to the Distributed Workflow Service and to a WEB server (HTTP daemon plus servlet runner); the (Java) servlet uses the Iso10303.SDAI binding and the EDM toolkit to access the STEP database in EDM, holding a dictionary repository (meta-model) and a model repository (AP225 model).]
Figure 2: the “AP225-VRML conversion” VEGA DIS application.
The main steps of the application runtime are:
- The user launches his WEB browser and opens the AP225 to VRML conversion HTML entry page from the WEB server, then fills in the Java applet form to identify himself to the DWS in order to ask for a possible task to do (again, this is a pull-based mechanism).
- When the applet receives a task to realise, the HTML form (the interface to the CGI script) gets updated with the corresponding COAST part and partnode names which contain the building model to visualise.
- The user then starts the conversion servlet, which sends back the HTML conversion interface for the model he asked for. He further chooses the entities to be converted from the AP225 model, sending his request to the servlet with his conversion parameters.
- The servlet performs the conversion on the WEB server side and sends back a stream containing the VRML data to the WEB browser. The WEB browser automatically launches its VRML plug-in to view the VRML model it has been given.
- The user can then inform the DWS that his task has been performed, and can either ask for previous workflow steps to be redone (in case of anomalies) or permit the normal continuation of the workflow.
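The final step of the conversion, emitting a VRML stream, can be sketched without the SDAI/Java binding. The fragment below only illustrates producing a VRML 2.0 box node for an element with given dimensions; the class name and the reduction of an AP225 entity to a box are illustrative assumptions, not the servlet's actual output.

```java
// Sketch of VRML emission (illustrative): a single VRML 2.0 box node is
// generated for an element with the given dimensions.
class VrmlEmitter {
    static String box(double x, double y, double z) {
        return "#VRML V2.0 utf8\n"
             + "Shape { geometry Box { size " + x + " " + y + " " + z + " } }";
    }
}
```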
4. Towards distributed components: the concept of Business Objects
The WONDA project goes a step further than VEGA, targeting the industrial deployment of the
information and communication infrastructure as promoted in VEGA for future business benefits.
Such a challenge relies on the following hypothesis: a business framework is required on top of a
VEGA-like architecture, to better deal with enterprise business issues and to mask the underlying
“low-level” infrastructure. WONDA aims at specifying a viable infrastructure for manufacturing,
marketing and commerce, including disparate data sources access, interoperable components, WEB
intuitive access to information by end users using their usual applications, and an intermediate level
targeting the enterprise Business model and linking together all the previous components. To do so,
WONDA specifies an open framework for business objects13, supporting Intranet applications,
electronic publishing and electronic commerce, through the deployment of software component-based
applications. From the information system designer’s and end-user’s points of view, the WONDA framework
13 This concept is under standardisation by the OMG, along with a standard framework for business applications.
specification is an answer to move from a distributed (low-level) object technology to a distributed
business component technology. We hereafter only focus on the business objects concept.
4.1 Why Business Objects?
Object technology that provides standard, flexible and reusable components is nowadays
emerging through various efforts, e.g. SUN’s Enterprise JavaBeans. One of these efforts is the
OMG Business Objects (BOs) and the Business Application Architecture [Case97], which intend to
promote a standard framework for business applications on top of CORBA. This would represent a
natural “extension” to the DIS developed by VEGA. The need for and potential impact of BOs, and their
possible integration and usage within the frame of the architecture just described, are now addressed.
With the integration and generalised use of tools of various natures, enterprises now have to deal with
increasingly complex and heterogeneous data. Heterogeneous means that the data may now integrate
text, images, sounds, hyperlinks, etc., and can be of different natures. Complex means that one has to
deal with whole collections of structured data, that those networks of data can be distributed and imply
remote access, and that data can be dynamically inter-dependent. This last characteristic also means
that one has not only to manage data, but elements associating various types of data as well14. Indeed,
networked virtual enterprises complicate the issues related to the management and manipulation
of data within the enterprise.
BOs are supposed to be the “glue” between client applications and enterprise data as contained in large
data stores. They are expected to facilitate communication, design and modelling between
implementers and business domain experts through the sharing of the same concepts. They have to
model the real world so that people can focus on the main characteristics of and relationships among BOs. As
abstractions of the real world, BOs clearly inherit from Object Oriented technology, and thus from
its benefits: communication between end users and developers (in charge of
defining and managing data) is facilitated through BOs, as it is easier for the end user to describe his
business in the form of BOs, while at the same time the developer has, with BOs, a clear
design representation to make the link between data and the real needs of end users.
4.2 Main features of Business Objects and business components
Given the general needs previously identified and the genuine emergence of object-oriented development
and distributed architectures, BOs are about to take off. The same holds for business software
components: by their nature, they can be related to BOs and viewed as the projection of BOs onto a
technological infrastructure. A component groups a set of classes and/or libraries aiming to achieve
together a specific goal, in a way seamless for the end user. Indeed, a software component is defined
through the (organisation-related) role it plays in a system, and one can identify four major roles:
- the dialog with the end user (graphical software components),
- the BO, which can be decomposed into the object (i.e. structure and data) and the management rules (processing, appliance, invariant),
- the process management (close to the notion of workflow),
- and specific objects allowing access to the existing system, especially to some persistent repositories (directories or mailing lists).
Product data are low-level data describing the basic structure of a part of an industrial product, and can
be considered as atoms in any application dealing with the model in which the data structures are
described; class libraries are mainly an opportunity to associate sets of product data according to some
basic common properties. With respect to these, the main characteristics of BOs (and associated
components) can be identified as follows:
- BOs and components offer high-level views of product data, corresponding to representations both throughout the product lifecycle (time consideration) and for all the actors involved in design/development/use (domain consideration). Indeed, they are associated with logical processes involving those product data, and correspond to coarse-grained business entities.
- They are independent (i.e. can be delivered separately), accessible through precisely defined high-level interfaces, and can communicate with other components.
- They can be assembled into frameworks to support the design and development of high-level industrial products. When considering distributed architectures, they are a compelling solution, allowing the interoperation of business components across heterogeneous user views of the product and computer implementations of the product data.
14 An even more advanced feature of applications is the ability to deduce new information from existing data, but this feature and the associated new objects are not considered here.
Thus, while product models correspond to a conceptual, structural approach to data, BOs correspond
to a more functional and process-oriented view of information, though relying of course on data structures.
4.3 Towards standardised business objects and components
BOs are concerned with the definition of operations and possible queries available for objects. They
equally have to manage information about objects, identifying constraints and relationships, most of
the time according to some process or business context. They have to be related to ways of retrieving
information like object browsers, query languages, structure-based technologies, etc. A major problem
today is the absence of real standards for BOs, and thus no consensus regarding the interfaces to
provide for BOs. In that context, the current OMG efforts are of primary importance for the
standardisation of both Common BOs and domain-specific BOs, with a focus on the interfaces
(APIs) which define the communication layer, whilst the communication mechanisms themselves will
be implemented by third-party middleware (the choice for the OMG is naturally CORBA). In particular, in
order to represent an entity or a process in a business domain, an OMG BO may have to manage:
- Attributes, which stand for associated data elements providing information about the BO. The
set of attributes depends on the BO they refer to.
- Events, which externalise state changes when operations are invoked on BOs. An event model is
intended to avoid tight coupling when integrating several BOs in a specific system. By doing so,
changes are easier to manage.
- Business rules, which capture the semantics of the component in a specific domain, e.g. an
appliance rule (a first-order constraint) plugged into a container on which the rule
applies, and which partly defines the BO's behaviour. For example, invariant appliances define
rules that must always hold for the component.
- States, which symbolise the mutually exclusive conditions a component may be in. Moreover,
some states may be reached only when pre-defined conditions apply.
- Relationships, which connect two types and make components interact. Without
any relationship, components would be isolated islands of knowledge. Relationships allow bi-directional
traversal between instances of different types, permit loose coupling of co-operating distributed
components, and ease the assembly of components originating from different vendors.
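The five facets listed above can be pictured with a minimal sketch. The class and names below are hypothetical illustrations, not the OMG Business Object interfaces themselves:

```python
# Hypothetical sketch of the five facets an OMG-style business object
# may manage: attributes, events, business rules, states, relationships.

class BusinessObject:
    def __init__(self, name):
        self.name = name          # attribute: data element describing the BO
        self._state = "draft"     # state: one of several mutually exclusive conditions
        self._listeners = []      # event model: loose coupling via callbacks
        self._rules = []          # business rules: invariant constraints plugged in
        self.related = []         # relationships: links to other instances

    def subscribe(self, callback):
        """Register an event listener, avoiding tight coupling between BOs."""
        self._listeners.append(callback)

    def add_rule(self, predicate):
        """Plug in a rule that partly defines the BO's behaviour."""
        self._rules.append(predicate)

    def link(self, other):
        """Relationship connecting two instances, traversable in both directions."""
        self.related.append(other)
        other.related.append(self)

    def set_state(self, new_state):
        """State transition guarded by pre-defined conditions (the rules)."""
        for rule in self._rules:
            if not rule(self, new_state):
                raise ValueError(f"rule rejected transition to {new_state}")
        old, self._state = self._state, new_state
        for cb in self._listeners:   # externalise the state change as an event
            cb(self, old, new_state)

# Usage: an invariant forbidding approval of an unnamed document.
doc = BusinessObject("spec-01")
doc.add_rule(lambda bo, s: s != "approved" or bo.name != "")
events = []
doc.subscribe(lambda bo, old, new: events.append((old, new)))
doc.set_state("approved")   # listeners observe ("draft", "approved")
```

In a real CORBA setting the event and relationship mechanisms would of course be provided by the middleware rather than hand-coded as here.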
Dealing with future standardised BOs and software components will enable two properties now
recognised as fundamental for the industry, modularity and re-usability:
- Modularity means the ability to insert the component directly into an infrastructure (e.g. a modular
framework) and to use it through its API, without any modification of the component
(the component can be viewed as a black box).
- Re-usability is a less demanding property, allowing the system designer to adapt the component to new
requirements through modifications or extensions (and also to maintain the component
through its evolution). This means assembling components during the development phase, and
no longer only at execution time.
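The distinction can be made concrete with a small sketch (all names below are hypothetical illustrations):

```python
# Hypothetical sketch contrasting the two properties:
# modularity  = use a component as a black box through its API;
# re-usability = adapt it to new requirements at development time.

class CostEstimator:
    """A self-contained component exposing a stable API."""
    RATE = 100.0  # cost per hour

    def estimate(self, hours):
        return hours * self.RATE

# Modularity: the component is plugged into an infrastructure and
# used unmodified, purely through its API.
def project_budget(estimator, tasks):
    return sum(estimator.estimate(h) for h in tasks)

# Re-usability: a system designer adapts the component by extension,
# assembling the new variant during development, not at run time.
class DiscountedEstimator(CostEstimator):
    def estimate(self, hours):
        return super().estimate(hours) * 0.5

print(project_budget(CostEstimator(), [10, 5]))        # 1500.0
print(project_budget(DiscountedEstimator(), [10, 5]))  # 750.0
```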
5. A preliminary conclusion
With the opening of European and world-wide markets, it becomes obvious that companies will have to
make their employees work more and more in teams, groups and VEs. This leads to new needs in
working practices: the ability to be linked easily and to work together quickly and flexibly, location
independence and short-term relationships, the need to organise the enterprise around projects 15 (and no
longer around applications), the need for information handling, management and maintenance, and support
for more complex business processes. VEGA has developed an infrastructure marrying Internet and
distributed client/server technologies to answer most of these needs, facilitating the creation of
possible new relationships between partners in the realisation of a project.
15 This is already, most of the time, the case in the AEC industry.
This is of course true for
Intranets, promoting an architecture for more efficient exchange of and access to information between
applications within a company, but it is true as well for Extranets and expanded VEs, where
companies have to collaborate while at the same time protecting (at least parts of) their
information systems, especially through firewalls. When connecting their applications and servers to a
VEGA-like platform, firms can keep using their firewalls as usual, thanks to control of information
access and flow based on the DWS. Moreover, the collaborative network is controlled by the set of
companies in the VE (and not by a single company), linking trading partners together in inter-
organisational networks with seamless communication and cross-application information circulation.
Finally, VEGA promotes push technology: the result of an application is pushed to the
next phase of the overall value-added process, via the use of (CORBA-based) workflow technology.
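This push of results along the process can be sketched as follows. The toy engine below is a hypothetical illustration, not the VEGA DWS or a WfMC-conformant API:

```python
# Toy sketch of "push" workflow routing (hypothetical, not the VEGA DWS):
# the output of each activity is pushed to the next phase of the process.

from collections import deque

def run_workflow(phases, initial):
    """Each phase is (name, function); outputs are forwarded downstream."""
    queue = deque([(0, initial)])
    trace = []
    while queue:
        idx, data = queue.popleft()
        if idx >= len(phases):        # no more phases: the process is complete
            return data, trace
        name, func = phases[idx]
        result = func(data)
        trace.append(name)            # record the value-added step
        queue.append((idx + 1, result))  # push the result to the next phase

# A simple three-phase design process.
design_process = [
    ("design",   lambda d: d + ["drawings"]),
    ("analysis", lambda d: d + ["load report"]),
    ("approval", lambda d: d + ["sign-off"]),
]
final, trace = run_workflow(design_process, ["brief"])
print(final)   # ['brief', 'drawings', 'load report', 'sign-off']
```

In the VEGA platform the scheduling and transport would be handled by the CORBA-based workflow service rather than an in-process queue as here.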
The VEGA infrastructure is well adapted to many-to-many relationships between companies'
applications within the VE, allowing a real shift of IT from simple technical support to a potential strategic
asset. In any case, IT systems will lead to complex computer-based infrastructures for the future
deployment of strategic applications. To help manage this complexity, WONDA
promotes a framework based on BOs and components, for improved business processes and better,
easier control of product development. Based on functional specifications, standardised interoperation
protocols and unambiguous interface definitions, BOs allow the specification of
enterprise business models closely related to business domains and activities. They aim at being
customised in a standard way, distributed and transaction-aware, leading to standard, general-purpose
service components. Thus, BO architectures will provide standard frameworks into which components and
objects can be plugged, as suitable solutions for suites of components and large applications in
enterprise systems (ERP, accountancy, management, etc.) and in future electronic commerce as well.