Distributed Shared Memory Infrastructure for Virtual Enterprise in Building and Construction¹

Fadi Sandakly*, João Garcia**, Paulo Ferreira** and Patrice Poyet*

* CSTB, BP 209, 06904 Sophia-Antipolis, France. sandakly@cstb.fr, poyet@cstb.fr
** INESC, Rua Alves Redol 9-6, 1000 Lisboa, Portugal. joao.c.garcia@inesc.pt, paulo.ferreira@inesc.pt

¹ This work has been partially financed by the EU PerDiS project (Esprit 22533).
Abstract: This paper proposes a new approach to building a Virtual Enterprise (VE) software
infrastructure that offers persistence, concurrent access, coherence and security on a distrib-
uted datastore based on the distributed shared-memory paradigm. The platform presented,
Persistent Distributed Store (PerDiS), is demonstrated with test applications that show its
suitability and adequate performance for the building and construction domain. In particular,
the successful adaptation of SDAI to PerDiS and a comparison with CORBA are presented as
examples of the potential of the distributed shared-memory paradigm for the VE environment.
1 INTRODUCTION
The software infrastructure is still one of the major difficulties for concurrent engineering development, especially in large-scale projects where participants are geographically dispersed and belong to different organisations. To build such an infrastructure, traditional approaches based on remote object invocation, like Corba, Microsoft DCOM and Java RMI, present different limitations when applications manipulate large amounts of data. Among these limitations, three major ones should be highlighted: (i) performance, (ii) security and (iii) difficulties in porting legacy applications.
In fact, remote object invocation platforms on a Wide Area Network (WAN) or Local Area Network (LAN) penalise data sharing between the organisations of a Virtual Enterprise (VE). These software platforms also lack a homogeneous security model, defined on data, that reflects the complex security relations between the partners of a VE. Finally, most of the time, porting existing legacy applications to co-operate in a VE implies their complete re-engineering in order to get decent performance, to make them work concurrently and to integrate security constraints.
This paper proposes a new approach to building a VE software infrastructure (Sandakly, 1999). This approach generated a platform of persistent, distributed and shared memory called PerDiS². In PerDiS, memory is shared between all applications, even when they are located at different sites or run at different times. This shared memory represents the shared store of a VE. Coherent caching of data improves performance and availability: it ensures that applications have a consistent view of data, and frees developers from manually managing objects' location. Using the shared-memory paradigm facilitates application porting on top of PerDiS; there is no need to change data structures to make them persistent and/or distributed. To allow concurrent access to data, the PerDiS platform provides transactions and locking mechanisms for connected environments and a check-out/check-in mechanism for disconnected ones. Locking can be handled transparently by PerDiS or explicitly by the applications.
For security, PerDiS implements a Task/Role model to manage security attributes and access rights to data. Communication between distant machines can be secured by encrypting messages. Security is defined on data and users, independently of applications.
² The source code of the PerDiS platform is freely available at http://www.perdis.esprit.ec.org/.
The next section outlines the problems of developing a software infrastructure for collaborative engineering in the context of a VE and how PerDiS solves these problems. After that, the major concepts and implementation aspects of the PerDiS platform are detailed. This is followed by a section describing how the Standard Data Access Interface (SDAI) of the STEP ISO-10303 norm (ISO 10303-22, 1996) for Product Data Representation and Exchange was adapted to meet the requirements of a VE in the Building & Construction (B&C) sector. Finally, the Experiments section presents a performance comparison between PerDiS and the well-known Corba approach.
2 COOPERATIVE ENGINEERING AND VIRTUAL ENTERPRISE
Today, the quickly changing global open market pushes companies to react ever faster and to adapt and modify their products. To achieve this goal, suppliers as well as contractors from different companies have to be tightly involved in the design and production cycles. Co-operative or Concurrent Engineering (CE) (Fohn, 1995) (Wilbur, 1994) techniques have been generalised, giving birth to a new form of collaborative work applied at company level instead of at personal level. The term Virtual Enterprise (VE) (Camarinha-Matos, 1999) refers to this kind of consortium of companies. A VE can be defined as a temporary alliance of independent organisations that come together to quickly exploit a product manufacturing opportunity. These organisations have to develop a working environment to manage all or part of their different resources toward the attainment of their common goal.
Obviously, common information definition and sharing is the essential problem of the VE (Hardwick, 1996). In practice, the partners of a VE usually have different business rules and information infrastructures. Being part of a VE means that a company has to adapt its information system (or part of it) to the VE common information infrastructure in order to share and exchange project data with other partners.
The main issues of such an infrastructure are:
1. scalability,
2. evolution,
3. ease of use and adaptation.
Regarding scalability, the architecture of a VE infrastructure has to be independent of the number of partners, the projects they work on, and the type and amount of data they manipulate. Furthermore, due to the fast evolution of Information Technologies (IT), this architecture has to be open and easy to evolve. At the same time, it has to be simple to use in order to reduce the cost of adapting the IT infrastructure of each partner to it. The main challenges in developing this kind of infrastructure are:
1. definition of a common data model representing the information to be exchanged and shared between partners,
2. definition of a common sharing software infrastructure that can ensure data storage, data integrity, and data security,
3. adaptation of the existing IT infrastructure of each partner in the VE to work with the common data model and the common sharing software infrastructure.
2.1 VE in Building & Construction
The B&C sector is intrinsically distributed (Kalay, 1998). Generally, the participants in a construction project belong to several independent companies. More than 80% of these companies are SMEs (with an average of 20 employees). Due to their size, these companies are not able to invest much in modifying their IT infrastructure each time they work on a new project. Several aspects characterise co-operative engineering in a VE in the B&C sector:
1. Short-term co-operation implies simple and fast setup of the computer infrastructure for data sharing: Except for very big projects and some special actors like the owner of a building, companies operate for a limited time in a VE, especially during the design phase. For instance, architects, structural engineers, HVAC (Heating, Ventilation and Air Conditioning) engineers and electrical engineers work for a short period on a project, just to deliver the plans to be used by the construction companies. These actors stay involved in the project until the end, but they become less active than others. This relatively short-term co-operation constrains the companies to set up and adapt their computer infrastructure quickly in order to share data with the other members of a given VE. Sharing data in a VE means being able to pick up data coming from other partners in a variety of formats and being able to exploit them without semantic loss.
2. Long transactions in the design cycle and the permanent need for data availability: The design process is cyclic. Starting from a given version of the project plans, designers add new elements and modify existing ones. Modifications made in parallel may conflict. Usually, a reconciliation phase between actors allows them to validate a new version of the plans and to restart a new design cycle. Design cycles are relatively long (in the order of weeks) and a user's working session on a project can be relatively long (a few hours). This working mode stimulates the notion of a disconnected work session: there is no need to keep a remote connection open when the user is working on a local copy of the data for a few days. Check-out/check-in mechanisms seem well adapted to remote data access. Furthermore, attention must be paid to data availability. When a user is working on a part of a project, other users cannot modify it. However, they must be able to access the latest valid version of this part without being blocked by another user's transaction. Another issue raised by long transactions is fault tolerance: users cannot accept losing a couple of hours of work because of a remote machine's crash, a network breakdown or simply inappropriate locking of remote data.
3. Large datasets: Another important characteristic of B&C design applications is the large amount of data in a project (just think of the number of different objects that exist in a building: walls, windows, doors, stairs, electrical components, sanitary installations…).
4. Data security, ownership and responsibility: The legal responsibility for data is an important issue in a VE, mainly for engineering aspects related to human safety (fire, material resistance…). Sharing data across open networks (like the Internet) must take into account the authentication of received data and its protection against modification. Only authorised persons may access and change pieces of data. Besides legal responsibility, financial aspects increase the weight of security issues. In fact, the VE model encourages the development of a unique data store for a whole project, including financial information. Protecting this data is an essential concern for all partners.
2.2 Traditional Approaches to VE software
During the last few years, an important effort has been undertaken in different research projects to define the software infrastructure of the future VE. Among the significant research efforts in this domain is the NIIIP project (NIIIP, 1996), which aims at developing open industry software protocols that allow manufacturers and their suppliers to interoperate effectively as if they were part of the same enterprise. NIIIP bases its architecture on emerging standards (formal and de facto) like STEP ISO-10303 (Fowler, 1995) for data modelling and Corba (OMG, 1997) (Mowbray, 1996) as a middleware for the interoperation of different applications. Influenced by the OMG and Corba formalisms, NIIIP views VE activity as a set of services offered by partners, with interfaces defined in the IDL language and based on standard Corba services (OMG, 1998) like Naming, Persistence, Security, Transaction, etc.
At the European level, an equivalent effort has been realised with the VEGA Esprit project, which aims to establish an information infrastructure to support the technical and business operations of Virtual or Extended Enterprises. This information infrastructure relies on the Corba Access to STep models (COAST) architecture (Koethe, 1997), which allows applications to access data using an API that extends the Standard Data Access Interface (SDAI) (ISO 10303-22, 1996) of STEP. Like NIIIP, the VEGA COAST platform provides services that can be used by VE partners, such as a Conversion service allowing the mapping between different data schemas and a Workflow service to manage projects. Another effort has been undertaken at the University of Salford (Faraj, 1999). In this work, a three-tier architecture is defined, based on Corba services and the ObjectStore OODBMS for data persistence. The ISO STEP modelling language is used to define the data model. Applications can exchange data using files or can share fine-grain objects whose interfaces are defined in IDL.
While ISO STEP offers a modelling language and methodology, as well as data models that are largely accepted and used in different manufacturing sectors (aerospace, building & construction, electronics…), the Corba approach presents, from a practical point of view, some difficulties in building a VE software infrastructure. Those difficulties can be summarised as follows:
1. Currently, few commercial object request brokers (ORBs) implement all the services needed for a real VE, like security, transactions, concurrency control and persistence.
2. Corba is based on remote method invocation. With this approach, objects used by an application reside on a remote host and can only be accessed via their functional interface. This is a disadvantage when objects are frequently accessed (which is the case in most design tools, e.g. CAD) because a significant amount of processing time is wasted in communication, and network traffic increases considerably.
3. Most of the Corba services, like concurrent access, persistence and security, are defined at the level of objects. This means that important problems like concurrent data access, security and data distribution (a major issue for application performance because of the remote method invocation mechanism) have to be solved in the early phases of an application's design.
4. A VE infrastructure has to integrate existing applications from different partners. Those applications have to access common data stores and thus have to be interfaced using Corba mechanisms. This requires deep modifications to the application's data structures (the inheritance graph has to be changed to access some Corba services) and sometimes a new code structure in order to provide their functionality as a service.
3 THE PerDiS APPROACH
The PerDiS platform has been developed to overcome many limitations of traditional approaches in terms of performance, security mechanisms, distribution capabilities and ease of use. The use of the distributed shared-memory paradigm as the basis of the PerDiS implementation notably improves performance compared to the remote-invocation approach. Furthermore, it facilitates the porting of existing applications without major modifications. Additionally, transactions can be transparent to applications: PerDiS offers a default behaviour where data locking is done implicitly, depending on the application's access mode to each datum in the store. This implicit behaviour simplifies the extension of a single-user application into a concurrent application where several users can share the same data, since PerDiS guarantees data integrity with its locking and transactional mechanisms.
In comparison to other object distribution approaches based on remote invocation, like Corba, Microsoft DCOM and Java RMI, which impose fine-grain distribution, PerDiS allows applications to choose their own distribution granularity based on clusters (a cluster is a set of objects of variable size). As said before, B&C applications, like CAD systems, manipulate large amounts of data. Applications that use a remote call to access each object attribute spend most of their execution time in network communications. With PerDiS, the whole object (more precisely, a bunch of objects) is transferred once from the remote site and mapped into the memory of the local application. The network cost is greatly reduced because all object accesses are done in the application's memory.
Regarding security, PerDiS defines security attributes on the data store, making them independent of the different applications. All these aspects are detailed in the rest of this section to show how they offer coherent data sharing, persistence and security, and how they reduce the time and programming effort needed to port applications on top of PerDiS.
3.1 PerDiS Architecture
Figure 1: PerDiS Architecture
A PerDiS system (Shapiro, 1997) consists of a set of machines running two kinds of processes: application processes and PerDiS daemons (PDs). There is a PerDiS daemon on each machine. Applications communicate with their local PD through a user-level library (ULL), which they access via the PerDiS API. The ULL deals with application-level memory mapping, data transformations, and the management of objects, locks and transactions. When the application, through the API and the ULL, requests locks or accesses data, it makes requests to the local PD, which deals with caching, issuing locks and data, storage, transactions, security and communication with remote machines. A typical configuration is shown in Figure 1. Note that application processes are optional. In fact, the PerDiS architecture is a symmetrical client-server architecture in which each application is a client of the local server and all servers interact in a peer-to-peer mode. A machine with just a PD running behaves as a pure server.
3.2 Objects and clusters
Objects in PerDiS are sequences of bytes representing some data structure; they are not limited to, e.g., C++ (Stroustrup, 1986) objects. An application programmer allocates objects in a cluster, which is a physical grouping of logically related objects. Clusters have a variable (unlimited) size. In contrast with current technology like Corba (OMG, 1997), clusters are the user-visible unit of naming, storage, distribution and security, allowing efficient, large-scale data-sharing applications. In fact, a cluster combines the properties of a heap (programs allocate data in it) and of a file (it has a name and attributes, and its contents are persistent). Programmers use URLs (Berners-Lee, 1994) to refer to clusters, e.g. pds://perdis.esprit.ec.org/clusters/floor1.
Figure 2: Clusters, objects and remote references
An object in some cluster may refer, using standard pointers, to another object in the same cluster or in a different one, even when the current machine does not actually hold the pointed-to object. While navigating through an object structure, an application may implicitly access a previously "unknown" cluster, which may be on a distant machine. For instance, Figure 2 shows two clusters located on two different machines. Starting from the entry point start in the leftmost cluster, one can navigate through the objects, e.g. start->a->b->print(), implicitly accessing the remote cluster. The same function could have been called by first opening the remote cluster and then calling first->print().
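To make this concrete, the following sketch shows how such navigation might look from application code. The pds namespace and the open_cluster/get_root calls are hypothetical names introduced for illustration (and stubbed so the sketch runs stand-alone); the paper specifies only that clusters are named by URLs and that objects are reached through ordinary pointers.

```cpp
#include <cstdio>
#include <string>

// Plain C++ application classes: PerDiS imposes no special base class.
struct B { void print() { std::puts("B::print called"); } };
struct A { B* b; };
struct Root { A* a; };

namespace pds {
// Hypothetical PerDiS calls, stubbed so this sketch runs stand-alone.
// A real implementation would map the cluster's pages into local memory.
void* open_cluster(const std::string& url) {
    std::printf("opening cluster %s\n", url.c_str());
    static B b; static A a{&b}; static Root root{&a};  // fake mapped data
    return &root;
}
Root* get_root(void* cluster, const std::string& /*name*/) {
    return static_cast<Root*>(cluster);
}
}  // namespace pds

int main() {
    void* c = pds::open_cluster("pds://perdis.esprit.ec.org/clusters/floor1");
    Root* start = pds::get_root(c, "start");
    // Ordinary pointer navigation: if start->a->b lived in another cluster
    // (possibly on a remote machine), PerDiS would fetch it implicitly.
    start->a->b->print();
    return 0;
}
```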
3.3 Persistence
Persistence in PerDiS is based on persistence by reachability (PBR) (Atkinson, 1983): an object is persistent if and only if it is transitively reachable from a persistent root. A persistent root is a distinguishable, named reference, which is persistent by default. To illustrate PBR, reconsider Figure 2: all objects are reachable from the roots start and first, and thus they are persistent. If the pointer first->c is set to NULL, objects X and Y would no longer be reachable, and would automatically be deleted. Destroying the root objects start and first would delete all objects in both clusters.
The PBR model has two main advantages: it makes persistence transparent and it frees programmers from memory management. Persistence is transparent since it is deduced from the reachability property. Programmers are freed from memory management because they only allocate memory; deallocation is performed by the system when necessary. This prevents dangling pointers and memory leaks. When porting existing code, only the allocation operator has to be modified to allocate data in persistent memory. There is no need to modify or extend data structures to make them persistent.
PBR is implemented with a Garbage Collector (GC). The GC runs as a thread of the PD. It scans the store regularly to eliminate unreachable parts of the persistent store.
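The remark that only the allocation operator needs to change can be illustrated with a class-specific operator new. This is a hedged sketch: the Cluster type and its alloc method are hypothetical stand-ins for whatever allocation entry point PerDiS actually exposes, stubbed here with malloc so the example compiles and runs.

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical handle to an open PerDiS cluster; alloc() is a stand-in
// for the platform's real allocation entry point.
struct Cluster {
    void* alloc(std::size_t n) { return std::malloc(n); }
};

// Routing allocation into a cluster via a class-specific operator new;
// the data members and methods of the ported class stay untouched.
struct Wall {
    double x0, y0, x1, y1;
    static void* operator new(std::size_t n, Cluster& c) { return c.alloc(n); }
};

int main() {
    Cluster floor1;  // in PerDiS this would come from a pds:// URL
    Wall* w = new (floor1) Wall{0.0, 0.0, 5.0, 0.0};
    // No explicit delete: once an object is unreachable from a persistent
    // root, the PerDiS garbage collector reclaims it.
    return w->x1 > w->x0 ? 0 : 1;
}
```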
3.4 Caching and replication
Data distribution in PerDiS is based on lazy replication and co-operative caching. Lazy replication means that the system only makes copies of data when an application accesses them. Co-operative caching means that the caches of different PDs interact to fetch the data an application accesses. Replication and caching avoid the remote access bottleneck because all data access is local. In addition, since replicas are kept in several co-operative caches, data requests are spread over several nodes, preventing data access bottlenecks. A potential drawback of sharing implicitly sized units of data is false sharing. False sharing occurs when two applications access different data items that happen to be stored in the same unit of locking: each application has to wait until the other has unlocked the data before it can access it, although there is no real sharing.
Another important aspect of the caching implemented in PerDiS is the prefetching mechanism. PerDiS allows an application to specify a prefetching strategy depending on its behaviour. For instance, the application can define a set of clusters that the cache prefetches when a given cluster is opened by the application. This mechanism enhances data availability, especially when the data is located on a remote machine with a slow network connection.
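A prefetching strategy of the kind described might be declared along the following lines. The set_prefetch_group call and its signature are assumptions for illustration only (stubbed here so the sketch runs); the paper names the mechanism but not its API.

```cpp
#include <cstdio>
#include <string>
#include <vector>

namespace pds {
// Hypothetical call, stubbed: in a real PerDiS daemon, opening 'trigger'
// would also pull the listed clusters into the local co-operative cache.
void set_prefetch_group(const std::string& trigger,
                        const std::vector<std::string>& prefetch) {
    std::printf("when %s is opened, prefetch %zu clusters\n",
                trigger.c_str(), prefetch.size());
}
}  // namespace pds

int main() {
    // A CAD tool opening one storey is likely to touch its neighbours,
    // so fetch them together over a slow WAN link.
    pds::set_prefetch_group(
        "pds://perdis.esprit.ec.org/clusters/floor1",
        {"pds://perdis.esprit.ec.org/clusters/floor0",
         "pds://perdis.esprit.ec.org/clusters/floor2"});
    return 0;
}
```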
3.5 Transactions
The purpose of transactions is to provide PerDiS applications with fault tolerance and concurrency control.
PerDiS provides transactions with the usual ACID semantics (Atomicity, Consistency, Isolation and Durability), while supporting both optimistic and pessimistic concurrency control. Pessimistic concurrency control enforces locking when data is accessed; locked data is unlocked when the transaction commits. Optimistic concurrency control allows users to access data concurrently without locking. Conflicts between optimistic transactions are detected through data versioning at commit time: data is marked with a version stamp every time it is written.
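The version-stamp idea can be sketched as follows. This is a generic illustration of commit-time validation, not the actual PerDiS protocol (which derives from MVGV, as described later in this section).

```cpp
#include <cstdint>
#include <cstdio>
#include <map>
#include <string>

// Generic illustration of optimistic commit-time validation with version
// stamps; it only shows how stamped versions expose write conflicts.
using Version = std::uint64_t;

struct Store {
    std::map<std::string, Version> committed;  // latest version per datum

    // read_set: versions the transaction observed; write_set: data it wrote.
    bool try_commit(const std::map<std::string, Version>& read_set,
                    const std::map<std::string, Version>& write_set) {
        for (const auto& [name, v] : read_set)
            if (committed[name] != v) return false;  // concurrent commit won
        for (const auto& [name, v] : write_set)
            committed[name] = v + 1;                 // stamp the new version
        return true;
    }
};

int main() {
    Store s;
    s.committed["wall42"] = 7;
    std::map<std::string, Version> reads{{"wall42", 7}}, writes{{"wall42", 7}};
    bool first = s.try_commit(reads, writes);   // succeeds, version becomes 8
    bool second = s.try_commit(reads, writes);  // fails: read version is stale
    std::printf("first=%d second=%d\n", first, second);
    return 0;
}
```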
Furthermore, the PerDiS model allows for non-serializable views of data (private copies), for
notifications and for reconciliation transactions. The broad range of these transactional facili-
ties is motivated by the application domain of co-operative engineering. Interactive project
development applications (e.g. CAD) may issue long transactions which are unlikely to in-
volve write conflicts but which, due to their length and complexity, users will want to avoid
aborting at all costs.
Transactions using implicit locking automatically lock the data accessed by the application. Data read by an application is protected by a read lock, whereas data that is to be modified is protected by a write lock. Modifying data that is already protected by a read lock causes an upgrade of the lock from read to write. Implicit locking is mainly provided to make porting existing applications to PerDiS easy. However, it reduces concurrency, because it is done at memory-page granularity, and it increases the risk of deadlocks. Therefore, when developing new applications, explicit locking is preferred. Explicit-locking transactions do not use automatic locking, but rely on explicit object intent requests, using functions like lock() and unlock().
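A transaction with explicit locks might then look like the following sketch. Only lock() and unlock() are named in the text; begin_transaction, commit, the lock-mode argument and their signatures are assumptions for illustration, stubbed so the sketch runs.

```cpp
#include <cstdio>

namespace pds {
// lock() and unlock() are named in the text; the rest of this API is
// assumed and stubbed for illustration.
enum class Mode { read, write };
void begin_transaction()        { std::puts("begin"); }
bool commit()                   { std::puts("commit"); return true; }
void lock(void* obj, Mode mode) { (void)obj; (void)mode; }
void unlock(void* obj)          { (void)obj; }
}  // namespace pds

struct Wall { double height; };

// Explicit-locking transaction: the application declares its write intent
// itself instead of relying on page-granularity implicit locking.
bool raise_wall(Wall* w, double delta) {
    pds::begin_transaction();
    pds::lock(w, pds::Mode::write);
    w->height += delta;
    pds::unlock(w);
    return pds::commit();  // on failure, the platform rolls the changes back
}

int main() {
    Wall w{2.5};
    std::printf("ok=%d height=%.1f\n", raise_wall(&w, 0.3), w.height);
    return 0;
}
```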
When starting a transaction, a programmer may indicate that the application must be notified
when the involved data is accessed, modified or committed by other transactions. This allows
the development of reactive applications. Applications can, for instance, react to these types
of events by updating their cached data using a "refresh" function and initiating a new transac-
tion in order to reconcile with concurrent users.
All these functionalities are further extended in PerDiS through its two-level architecture. In PerDiS, there is a distinction between local area (LAN) and wide area (WAN) networks. For each LAN, there is a node responsible for interacting with PerDiS servers on remote networks. This site, called a gateway, includes a file cache and provides files from machines outside the LAN to local-area PerDiS applications. Each PerDiS server manages a multi-version store, where a sequence of versions of each file is kept as they are submitted for commit by PerDiS applications. Executing transactions on a WAN using PerDiS does not require that all the servers involved be synchronised: PerDiS uses a transaction commit algorithm derived from MVGV (Agrawal, 1987) that synchronises only the involved servers at commit time. Furthermore, this algorithm allows the file cache to give coherent views of data to applications without having any knowledge of transactions. The PerDiS transactional protocol extends the notification and reconciliation functionalities to wide-area transactions.
3.6 Security
Protecting data in VEs is important for two reasons. First, project data often represents a large asset due to the amount of work needed to create it; losing data means losing money. Second, data often represents knowledge and provides companies with a competitive edge. Due to the nature of a VE, partners co-operating in one project may be competing in another. Therefore, PerDiS provides security mechanisms (Coulouris, 1997) to protect data in a collaborative environment. Security in PerDiS consists of two parts: data access control and secure communication.
Data access is controlled by groupware-oriented access rights based on users' tasks and roles. PerDiS clusters are assigned to a particular task, and one can specify access rights for a user having a specific role in a specific task. Access rights can be assigned on a cluster basis to reduce management overheads. When using a secure PerDiS application, users are associated with their role in the task by logging on to a PerDiS security tool.
Secure communication uses public-key schemes for signed data access requests, shared keys for encryption, and a combination of the two for the authentication of message originators.
4 DISTRIBUTING STEP SDAI
To open the PerDiS platform to the ISO Standard for the Exchange of Product Model Data (STEP) formalism and to allow its integration in the Building & Construction domain, the authors ported an implementation of the Standard Data Access Interface (SDAI) (ISO 10303-22, 1996) on top of PerDiS. This interface was defined for single-user applications accessing a single local or remote data store. This section details how the SDAI was extended to deal with data distribution, security and multi-user concurrent access to data stores. The main goal of this work is to facilitate the development of concurrent STEP applications. The SDAI layer is about 50,000 lines of C++ code; it was ported to PerDiS in a very short time compared to the effort of its original development.
The rest of this section describes STEP and the SDAI briefly, and then discusses the issues involved in developing a distributed SDAI application using PerDiS.
4.1 The International Standard for the Exchange of Product Model Data
STEP provides a basis for communicating product information at all stages of a product's life cycle. The key idea behind data exchange in STEP is data model sharing. STEP defines tools like the EXPRESS language (ISO 10303-11, 1994) to develop data models that can be used in different applications, allowing interoperability and common data structures for data sharing. EXPRESS is an object-oriented-like language providing mechanisms to model constraints on data, like global rules and the uniqueness of object values. STEP also provides a definition for data exchange (ISO 10303-21, 1994): an ASCII format that can be used to exchange data defined with EXPRESS. This file-based format is called the STEP Physical File format (SPF). Finally, for data storage and access, STEP specifies an application programming interface (API) called the SDAI (Standard Data Access Interface) (ISO 10303-22, 1996) that defines the way applications can store and retrieve instances in databases. The goal is that applications sharing the same data model can share databases. However, the SDAI deals neither with concurrent access to data nor with the distribution of databases. Data security is not defined in the SDAI either.
4.2 Architecture of the SDAI
The SDAI is defined in four different schemas written in EXPRESS (see Figure 3):
1. The SDAI Dictionary Schema: it includes the definitions of the entities needed to represent a meta-model of an EXPRESS schema. The instances of these entities that correspond to a given schema constitute the SDAI Data Dictionary.
2. The SDAI Session Schema: it includes the definitions of the entities needed to store the current state of an SDAI session started by an application. The information stored is mainly the list of repositories opened by the application and their access mode, current transactions, events and errors.
3. The SDAI Population Schema: it defines the organisational structure of an SDAI population. An SDAI population is the set of instances stored in SDAI repositories. Three main entities for storing instances are defined in this schema (see Figure 4):
   1. SDAI Model: a grouping mechanism consisting of a set of related entity instances based upon one schema.
   2. Schema Instance: a logical collection of SDAI Models based upon one schema. A schema instance is used as a domain for EXPRESS global rules validation, as a domain over which references between entity instances (in different models) are supported, or as a domain for EXPRESS uniqueness validation.
   3. Entity Extent: it groups all instances of an EXPRESS entity data type that exist in an SDAI Model.
4. The SDAI Parameter Data Schema: it describes in abstract terms the various types of instances that are passed and manipulated through the API. It provides definitions for EXPRESS simple types, EXPRESS entity instances, EXPRESS entity attribute values, aggregations and iterators, etc.
These schemas are independent of any implementation language. The SDAI language bindings (SDAI implementations) are specified for computing languages like C, C++, Java and IDL.
Figure 3: SDAI Architecture
Figure 4: SDAI Storage Structure
4.3 Porting SDAI on the PerDiS Platform
This section discusses the main issues in developing a persistent, distributed version of the SDAI that allows concurrent data access. Some of the design decisions were influenced by one of PerDiS' principal goals, which is the fast porting of existing applications that were developed neither to work in a concurrent environment nor in a transactional way. To achieve this, the PerDiS team preserved, as far as possible, the SDAI API described in (ISO 10303-23, 1997), by introducing implicit concurrent and transactional behaviour based on some of PerDiS' features. However, the SDAI was extended with a specific distribution and concurrency API that can be used by new applications.
4.3.1 SDAI Persistence and Storage Structure
All instances created in the SDAI are made persistent in PerDiS. This is simplified by the fact that, in PerDiS, there is no difference between manipulating persistent and transient objects.
In an SDAI implementation, application instances are stored in a logical structure. These structures are placed in the PerDiS persistent distributed store (i.e. in clusters). The main object collector in the SDAI is the Model: from the application's point of view, a model is a set of related instances. Another important collector is the Repository, which is a collection of models. Those two objects are part of the application's data organisation and have to be persistent. Schema Instances are also logical collectors; they contain several models related to the same schema. As shown in Figure 5, all these objects are mapped to PerDiS clusters, which are logical collections of application objects.
4.3.2 Distribution Granularity
In the PerDiS approach, the distribution granularity as seen by the application is the cluster. Physical distribution is hidden, and the PerDiS platform can manage this granularity automatically or semi-automatically to optimise performance using prefetching mechanisms (see Section 3.4). From the SDAI point of view, the smallest collector structure is the model. Implementing SDAI models as clusters gives applications a fine-grain distribution granularity. This granularity is adjustable because models can contain any number and kind of application instances. Inside a cluster, instances can be grouped using entity extent objects.
Figure 5: Persistent distributed SDAI implementation in PerDiS. Boxes represent PerDiS
clusters
4.3.3 Concurrent data access
There is no concurrent access specification in the current definition of the SDAI. As shown in Figure 5, SDAI data falls into two categories: dictionary data and application data. Normally, the dictionary data is read by all applications and is not modified often. The default access to the dictionary is therefore implemented with the local-copy locking mode. This means that the application accesses the last valid version, but modifications to this data are not committed, except when an application creates a new EXPRESS schema (which means adding a new data model to the dictionary). In this case, the locking mode is upgraded to read-write, and the application can be blocked if another one modifies the dictionary at the same time.
The application data category represents the project data that users modify most frequently. Usually, the project data is organised in different SDAI models. This organisation reflects the project's decomposition in terms of tasks and users. For instance, architectural elements (like walls, openings…) can be stored in several SDAI models. Each of these models may correspond to a part of the building (i.e. a floor) under the responsibility of an architect (i.e. a user). Each user needs to access his own data (his part of the project) to modify it. On the other hand, he may occasionally need to access some other parts for viewing or comparison (the last valid version is needed). This pattern of work guided the implementation of the default concurrent access to the application data. By default, SDAI models (i.e. PerDiS clusters) are locked in read mode. PerDiS automatically upgrades this mode to read-write when the application modifies the data. When a user needs to access parts of the project belonging to other users, applications can open SDAI models in local-copy mode to avoid being blocked while someone else is modifying the model. Data locked in local-copy mode can have its locks upgraded if applications need more control over it. This approach simplifies the porting of single-user STEP applications so that they work concurrently even though they were not designed to do so.
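The access pattern just described might translate into code along these lines. The sdai namespace, the open_model call and the mode names are illustrative assumptions (stubbed so the sketch runs); the standard SDAI C++ binding has its own class names, and the local-copy mode is the PerDiS extension described above.

```cpp
#include <cstdio>
#include <string>

namespace sdai {
// Illustrative stand-ins for the PerDiS-extended SDAI binding.
enum class Access { read, read_write, local_copy };
struct Model { std::string name; Access mode; };

Model open_model(const std::string& name, Access mode) {
    std::printf("open model %s\n", name.c_str());
    return Model{name, mode};
}
}  // namespace sdai

int main() {
    // The architect's own floor: read lock, upgraded to read-write by
    // PerDiS as soon as the application modifies an instance.
    sdai::Model mine = sdai::open_model("floor2", sdai::Access::read);

    // A colleague's floor, needed only for viewing/comparison: local-copy
    // mode returns the last valid version without blocking the writer.
    sdai::Model other = sdai::open_model("floor3", sdai::Access::local_copy);

    (void)mine; (void)other;
    return 0;
}
```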
4.3.4 Transactions
Once an SDAI session is started, an application can manipulate instances in the stores in a transactional context: the application starts a transaction, locks objects (implicitly or explicitly), accesses its data, and finally commits or aborts the transaction. By default, transactions are mapped to PerDiS pessimistic transactions. The standard API is extended to support optimistic transactions when applications do not want to block others and/or are able to perform reconciliation when conflicts arise.
4.3.5 Security
The security model in PerDiS is defined at two levels:
1. security of communications, which is transparent to the application;
2. security of stored data, expressed as access rights given to the different users in a VE according to a task-role model (Coulouris, 1997). This model separates the security attributes from the data model definition and allows the management of the legal and data-ownership aspects of a VE. No major modifications have to be made to port applications that do not deal with security, except for handling access-rights violations, which can be done at the highest level of an application.
Access rights are defined using the security management tools developed in PerDiS. The mapping of the SDAI storage structure to PerDiS clusters allows those tools to be used with the SDAI implementation without modification: SDAI models become the security units seen by users.
5 EXPERIMENTS
Besides the complete SDAI layer ported on top of PerDiS, several applications have been developed or ported to PerDiS to manage different aspects of VEs and to test the performance of the platform (security management tools, Xfig [a drawing tool for UNIX], the OO7 benchmark…). This section presents two of these applications. The first is a mapping program that translates architectural data from the ISO AP225 (ISO 10303-225, 1996) format to the VRML (ISO 14772-1, 1997) format; this program has also been ported on top of a Corba ORB (Orbix) and its performance compared to PerDiS. The second is a document management application that manages sets of files distributed over several servers and accesses them transactionally; it shows the use of PerDiS in transactional disconnected operation with different kinds of data types.
5.1 AP225 to VRML Mapping Application
ISO AP225 (ISO 10303-225, 1996) is a standard format for representing building elements and their geometry; it is supported by a number of CAD tools. The application presented here reads this format and translates it into VRML (Virtual Reality Modelling Language) (ISO 14772-1, 1997), to allow a virtual tour of a building project through a VRML navigator. This application was chosen because it is relatively simple, yet representative of the main kernel of a CAD tool. The original, stand-alone version is compared with a Corba and a PerDiS version.
The stand-alone version has two modules (see Figure 6). The reader module parses an SPF (STEP Physical File format) (ISO 10303-21, 1994) file containing a building project and instantiates the corresponding objects in memory. The generator module traverses the object graph to generate a VRML view, according to object geometry (polygons) and semantics. The object graph contains a hierarchy of high-level objects representing projects, buildings, storeys and staircases. A storey contains rooms, walls, openings and floors; these are represented by low-level geometric objects such as polyloops, polygons and points.
Figure 6: Stand-Alone version of the AP225 to VRML application
In the Corba port, the reader module is located in a server, which retains the graph in memory (see Figure 7). The generator module is a client that accesses objects remotely at the server. To reduce the porting effort, only five classes were enabled for remote access: four geometric classes (Point, ListOfPoints, PolyLoop and ListOfPolyLoops) and a class (Ap225SpfFile) that allows the client to load the SPF file and to get the list of polyloops to map. The porting task took two days (for only five classes), and the code to access objects in the generator module had to be completely rewritten.
Figure 7: Corba implementation of the AP225 to VRML application
In the PerDiS port, the reader module runs as a transaction in one process and stores the graph in a cluster (see Figure 8). The generator module runs in another process and opens that cluster. The porting task took only one day, and all classes were made distributed with no modification of the application architecture. The PerDiS version has the additional advantage that the object graph is persistent, so it is not necessary to re-parse SPF files each time. The VRML views generated are identical to the original ones.
The stand-alone version is approximately 4,000 lines of C++, in about 100 classes and 20 files. In the Corba version, only five of the classes were made remotely accessible, but 500 lines needed to be changed. In the PerDiS version, only 100 lines were changed.
Figure 8: PerDiS implementation of the AP225 to VRML application
Table 1 compares the three versions for various test sets and two different configurations:
- Std-Alone represents the original stand-alone application.
- The "1 machine" columns represent the PerDiS version where the application accesses data stored on the same machine, and the Corba version where the server and the client run on the same machine.
- The "2 machines" columns represent the PerDiS version where the application accesses data stored on a different machine, and the Corba version where the server and the client run on two different machines.
Compared to a remote-object system, even a mature industrial product such as Orbix, the PerDiS approach yields much better performance.
Table 2 compares memory consumption. Consumption in the PerDiS version is almost identical to the stand-alone one, whereas the Corba version consumes an order of magnitude more memory, for reasons that remained unclear. This experiment confirms the intuition that the persistent distributed store paradigm performs better (in both time and space) than an industry-standard remote-invocation system, for data sets and algorithms that are typical of distributed VE design applications. It also confirms that porting existing code to PerDiS is straightforward and provides the benefits of sharing, distribution and persistence with very little effort.
SPF Files                                  Execution Time (s)
File Size   N° of SPF   N° of       Std-Alone   On 1 Machine         On 2 Machines
(Kb)        Objects     Polyloops               PerDiS     Corba     PerDiS     Corba
 293         5200         530         0.03        1.62      54.52      2.08      59.00
 633        12080        1024         0.06        4.04     115.60      4.27     123.82
 725        12930        1212         0.07        4.04     146.95      5.73     181.96
2031        40780        4091         0.16       13.90     843.94    271.50    1452.11

Table 1: Execution time comparison between the three implementations of the AP225-VRML mapping application: stand-alone version, PerDiS version (on 1 and 2 machines) and Corba version (on 1 and 2 machines)
SPF Files                                  Memory Occupation (Kb)
File Size   N° of SPF   N° of       Std-Alone   PerDiS                   Corba
(Kb)        Objects     Polyloops               In Memory   Persistent
293          5200         530         2269        2073          710      26671
633         12080        1024         2874        2401         1469      51054
725         12930        1212         3087        2504         1759      59185

Table 2: Memory occupation comparison
5.2 Project Manager Application
The second application presented in this paper is a document management application called Project Manager. It allows applications not programmed for PerDiS to manage sets of files distributed among several servers and to access them transactionally. Sets of conventional files (text files, spreadsheets, CAD files…) that are related and needed as a group in order to perform an activity can be put into a common project. The project manager is coupled with a browser, and any file at a PerDiS site that is made visible by the local WWW server can be added to a project. The coherence of the view provided over these sets of files is guaranteed by transactions performed within the Project Manager. When a project is created and files are inserted into it, these files are included in the PerDiS system, and from that moment on their coherence, distribution, persistence and concurrency control are ensured by PerDiS. Every time a user wants to change files belonging to a project, he or she starts the project manager and checks out the files to a location of his or her choice (local directory, portable computer, floppy disk…). The project manager can then be terminated and only has to be started again when the user wishes to check in the project's files. Meanwhile, any external application can be used to view or modify the files. When the files are checked in, a transaction is committed, which guarantees the ACID properties for the sequence of changes made by the external application.
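A minimal sketch of this check-out/check-in cycle is given below. The pm namespace and both calls are hypothetical stand-ins for the Project Manager's actual interface, stubbed so the sketch runs; the key point is that all checked-in changes commit as a single transaction.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical Project Manager calls, stubbed for illustration: check-out
// copies files to a user-chosen location and starts a long transaction;
// check-in commits that transaction, making the whole group ACID.
namespace pm {
void checkout(const std::vector<std::string>& files, const std::string& dest) {
    for (const auto& f : files)
        std::printf("checked out %s -> %s\n", f.c_str(), dest.c_str());
}
bool checkin(const std::vector<std::string>& files) {
    std::printf("committing %zu files as one transaction\n", files.size());
    return true;  // on conflict, the real system would trigger reconciliation
}
}  // namespace pm

int main() {
    std::vector<std::string> project{"plans/floor1.dwg", "costs/budget.xls"};
    pm::checkout(project, "/home/arch/work");  // disconnected editing starts
    // ... hours or days of offline work with ordinary applications ...
    pm::checkin(project);                      // changes become durable
    return 0;
}
```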
6 CONCLUSION
Today, the major difficulty in building a Virtual Enterprise consortium is the lack of a software infrastructure that integrates persistence, distribution, concurrent access control, data integrity, coherence and security. In addition, there is a clear need for an efficient interface that allows fast porting of existing applications, to reduce the development effort needed to ensure the interoperability of different tools from different partners. This paper presented PerDiS, a new approach to developing VE software infrastructures using a persistent distributed store based on lazy replication and a global, co-operative, coherent cache. This approach reduces the performance problem by avoiding the remote access techniques used by traditional distribution approaches, i.e. those based on RPC. Furthermore, PerDiS includes features such as implicit locking and optimistic and pessimistic transactions, to allow simple and fast porting of applications that were not developed to work in a transactional and concurrent environment.
Another major difficulty of VEs is the definition of common data models. For this purpose, the authors propose standardisation efforts as a solution: ISO STEP techniques and methodologies are starting to be widely accepted in different manufacturing sectors. To open the PerDiS platform to a wide range of application domains, a distributed version of the Standard Data Access Interface (SDAI) for developing concurrent STEP tools was implemented. Finally, the paper presented a performance comparison of a VRML mapper ported to PerDiS and to an industrial implementation of Corba. More details about PerDiS concepts, implementation, performance and applications can be found in (Ferreira, 2000).
Acknowledgements
The authors would like to thank all the partners of the PerDiS Esprit project 22533 who contributed to the design and development of the platform and the test applications. The members of the PerDiS consortium are (in alphabetical order): CSTB (France), IEZ (Germany), INESC (Portugal), INRIA (France) and QMW (Great Britain).
7 REFERENCES
Agrawal, D., Bernstein, A.J., Gupta, P., Sengupta, S. (1987) Distributed optimistic concurrency control with reduced rollback. Distributed Computing, 2, 45-59.
Atkinson, M.P., Bailey, P.J., Chisholm, K.J., Cockshott, P.W., Morrison, R. (1983) An approach to persistent programming. The Computer Journal, 26(4), 360-365.
Berners-Lee, T., Masinter, L., McCahill, M. (December 1994) Uniform Resource Locators. RFC 1738.
Camarinha-Matos, L.M., Afsarmanesh, H. (1999) The Virtual Enterprise Concept. Infrastructures for Virtual Enterprises, Kluwer Academic Publishers, Boston, USA.
Coulouris, G., Dollimore, J., Roberts, M. (1997) Security Services Design. PerDiS deliverable PDS-R-97-008. http://www.perdis.esprit.ec.org/deliverables/docs/T.D.1.1/A/T.D.1.1-A.html.
Faraj, I., Alshawi, M., Aouad, G., Child, T., Underwood, J. (1999) Distributed Object Environment: Using International Standards for Data Exchange in the Construction Industry. Computer-Aided Civil and Infrastructure Engineering, 14, Blackwell Publishers, New Jersey, USA.
Ferreira, P., Shapiro, M., Blondel, X., Fambon, O., Garcia, J., Kloosterman, S., Richer, N., Roberts, M., Sandakly, F., Coulouris, G., Dollimore, J., Guedes, P., Hagimont, D., Krakowiak, S. (February 2000) PerDiS: Design, Implementation and Use of a PERsistent DIstributed Store. Recent Advances in Distributed Systems, Krakowiak, S., Shrivastava, S.K. (eds.), Lecture Notes in Computer Science 1752, Springer-Verlag, Heidelberg, Germany.
Fohn, S.M., Greef, A., Young, R.E., O'Grady, P. (1995) Concurrent Engineering. Lecture Notes in Computer Science, 973, Springer-Verlag, Heidelberg, Germany.
Fowler, J. (1995) STEP for Data Management, Exchange and Sharing. Technology Appraisals, Great Britain. ISBN 1-871802-36-9.
Hardwick, M., Spooner, D.L., Rando, T., Morris, K.C. (February 1996) Sharing Manufacturing Information in Virtual Enterprises. Communications of the ACM, 39(2).
ISO 10303-11 (1994) Industrial automation systems and integration - Product data representation and exchange - Part 11: Description methods: The EXPRESS language reference manual.
ISO 10303-21 (1994) Industrial automation systems and integration - Product data representation and exchange - Part 21: Implementation methods: Clear text encoding of the exchange structure.
ISO 10303-22 (1996) Industrial automation systems and integration - Product data representation and exchange - Part 22: Implementation methods: Standard Data Access Interface specification.
ISO 10303-225 (1996) Industrial automation systems and integration - Product data representation and exchange - Part 225: Application protocol: Building elements using explicit shape representation.
ISO 10303-23 (January 1997) Industrial automation systems and integration - Product data representation and exchange - Part 23: C++ programming language binding to the Standard Data Access Interface. ISO TC184/SC4/WG11 N004.
ISO 14772-1 (1997) The Virtual Reality Modelling Language.
Kalay, Y.E. (1998) Computational environments to support design collaboration. Automation in Construction, 8, Elsevier Science, Amsterdam, Netherlands.
Köthe, M. (January 1997) COAST Architecture: the Corba Access to STEP Information Storage - Architecture and Specification. Deliverable D301 of ESPRIT 20408 VEGA project.
Mowbray, T.J., Zahavi, R. (1996) The Essential CORBA - System Integration Using Distributed Objects. John Wiley and Sons, New York, USA.
Object Management Group (December 1998) Corba Services Specification. http://www.omg.org/corba/sectran1.html.
Object Management Group (September 1997) The Common Object Request Broker Architecture and Specification (CORBA), Revision 2.1. http://www.omg.org/corba/corbaiiop.htm.
Sandakly, F., Garcia, J., Ferreira, P., Poyet, P. (1999) PerDiS: An Infrastructure for Cooperative Engineering in Virtual Enterprises. Infrastructures for Virtual Enterprises: Networking Industrial Enterprises, Camarinha-Matos, L.M., Afsarmanesh, H. (eds.), Kluwer Academic Publishers, Boston, USA.
Shapiro, M. et al. (1997) The PerDiS Architecture. Deliverable PDS-R-97-002 of ESPRIT 22533 PerDiS project. http://www.perdis.esprit.ec.org/deliverables/docs/architecture.
Stroustrup, B. (1986) The C++ Programming Language. Addison-Wesley, New York, USA.
The National Industrial Information Infrastructure Protocols (NIIIP) Consortium (1996) The NIIIP Reference Architecture (Revision 6). http://www.niiip.org/public-forum/NTR96-01/NTR96-01-HTML-PS/niplongd.html.
Wilbur, S. (June 1994) Computer Support for Co-operative Teams: Applications in Concurrent Engineering. IEEE Colloquium on Current Developments in Concurrent Engineering Methodologies and Tools.
... Following the previously developed architectures (e.g. Camarinha-Matos, Afsarmanesh, and Osorio (2001), Sandakly et al. (2001)), Giret, Garcia, and Botti (2016), proposed an open architecture utilising agents that was amenable to emanufacturing systems. Ghomi, Rahmani, and Qader (2019) reviewed concepts, architectures, and platforms of cloud manufacturing. ...
Article
Open systems have been of interest to the research and industrial community for decades, e.g. software development, telecommunication, and innovation. The presence of open manufacturing enterprises in a cloud calls for broadly interpretable models. Though there is no global standard for representation of digital models of processes and systems in a cloud, the existing process modelling methodologies and languages are of interest to the manufacturing cloud. The models residing in the cloud need to be configured and reconfigured to meet different objectives, including complexity reduction and interpretability which coincide with the resilience requirements. Digitisation, greater openness, and growing service orientation of manufacturing offer opportunities to address resilience at the design rather than the operations stage. An algorithm is presented for complexity reduction of digital models. The complexity reduction algorithm decomposes complex structures and enhances interpretability and visibility of their components. The same algorithm and its variants could serve other known concepts supporting resilience such as modularity of products and processes as well as delayed product differentiation. The ideas introduced in the paper and the complexity reduction algorithm of digital models are illustrated with examples. Properties of the graph and matrix representations produced by the algorithm are discussed.
Article
New advancements in computers and information technologies have yielded novel ideas to create more effective virtual collaboration platforms for multiple enterprises. Virtual enterprise (VE) is a collaboration model between multiple independent business partners in a value chain and is particularly suited to small and medium-sized enterprises (SMEs). The most challenging problem in implementing VE systems is ineffcient and inflexible data storage and management techniques for VE systems. In this research, an ontology-based multi-agent virtual enterprise (OMAVE) system is proposed to help SMEs shift from the classical trend of manufacturing part pieces to producing high-value-added, high-tech, innovative products. OMAVE targets improvement in the flexibility of VE business processes in order to enhance integration with available enterprise resource planning (ERP) systems. The architecture of OMAVE supports the requisite flexibility and enhances the reusability of the data and knowledge created in a VE system. In this article, a detailed description of system features along with the rule-based reasoning and decision support capabilities of OMAVE system are presented. To test and verify the functionality and operation of this system, a sample product was manufactured using OMAVE applications and tools with the contribution of three SMEs.
Conference Paper
CNC manufacturing has evolved through the use of faster, more precise and more capable CNC controllers and machine tools. These enhancements in machine tools however have not been integrated under a common platform to support CAD/CAM/CNC software inter-operability and as a result a plethora of standards is being used for these systems. ISO10303 (STEP) and ISO14649 (STEP-NC) seek to eliminate the barriers in the exchange of information in the CNC manufacturing chain and enable interoperability throughout the manufacturing software domain. This paper introduces a novel software platform called the Integrated Platform for Process Planning and Control (IP3AC) to support the rapid development of STEP-NC compliant CNC manufacturing software.
Conference Paper
Full-text available
The major project goal of the ISTforCE project is to provide an open, human-centered Web-based collaboration environment which supports concurrent engineering while working on multiple projects simultaneously and offers easy access to specialized engineering services distributed over the Web. Normally, engineering applications are bought and then installed and used locally, but in the last years there is a growing interest, especially by small, highly specialised vendors, to offer such applications on rental or "pay per use" basis. The main innovation of ISTforCE is in the human-centred approach enabling the integration of multiple applications and services for multiple users and on multi-project basis. This should lead to support the work of each user across projects and team boundaries and to establish a common platform where providers of software and end users (engineers, architects, technicians, project managers) can meet. This paper will focus on three specific aspects of the ISTforCE project. The first addresses product data models also referred to as Building Information Models (BIMs) given the specific scope of the application domain dealt with and also covers the contribution of ISTforCE to standardization activities, either within the frame of ISO / STEP or of the IAI and the development of the Industry Foundation Classes. The second will address part of the software architecture designed to support concurrent engineering activities with the development of the Product Data server (PDS) and finally we will cover AI based applications, more specifically in the area of automated building conformance checking (CCS).
Technical Report
Full-text available
This document in no way commits the CSTB and reflects only the views of its main author. The software mission, as defined, addresses a number of fundamental questions, all of which affect the CSTB's capacity to act as a designer, developer, producer, publisher and vendor of software. Although these major questions are closely related, for clarity of presentation it was useful to treat them separately; the reader should bear in mind that they are often intimately linked. For example, one cannot discuss technical policy without considering the organisational arrangements needed to guarantee its proper execution, nor pursue a technical strategy that ignores realities on the ground, such as the users' installed hardware and software base, or commercial considerations during product deployment that rest on strong assumptions about customers' equipment. The questions are thus interdependent, and acting on one without gauging the effects on the others is of little use. It is probably the complexity of each of these subjects, coupled with their interrelations, that makes the software business difficult. Technical policy is an essential question. It covers themes such as assessing the relevance of a technical solution; the consequences for productivity; coherence between developments undertaken in different CSTB departments; interoperability and durability of software developments through the adoption of preferred languages and development platforms; the implementation of libraries of reusable software components; systematic, standardised documentation according to company norms; corrective and adaptive maintenance; and so on. The field is immense, and while the CSTB can pride itself on mastering various trades, software as such is not really a natural one for it. Questions of liability rightly concern the CSTB's management, given the serious consequences that erroneous use of a piece of software, or simply use outside its intended limits, can have. This is particularly true given the capital committed to building and civil engineering operations, where the resulting errors, defects and other problems generally translate into a large, even considerable, financial impact and into delays, which in turn generate penalties. Modelling assumptions and numerical solution techniques must be made explicit and understood by users, who must undertake to waive the designers' liability under terms to which we return later. Legal scope also covers software protection, the rights attached to software and the transfer of those rights to users, with all the limits that should be imposed for various reasons, not least the liability issues just described. Decision-making processes and the underlying economics must be well mastered, and the decision to release a piece of software must be taken in full knowledge of its implications for other activities.
One might indeed think that putting a sophisticated software tool in the hands of end users could reduce the contract revenue previously earned when such services relied on proprietary software sophisticated enough to justify providing them. We return to these arguments later; without saying more here, we tend to think that distributing software, if done properly and with users able to rely on the publisher as needed, can on the contrary increase business volumes. This view is argued later. Distribution requires specific organisation and procedures for development as well as for promotion and marketing, in a context where keeping up with rapidly evolving computing technologies is a permanent investment and the commercial target is often a moving one. It is not enough to deliver a service identified at a given moment that matches a clear need of a user population; the target conditions must also not have changed between the launch of the project and its commercialisation. This kind of problem has unfortunately already occurred, for instance when users no longer want a solution using local data, even data frequently updated on CDs, but instead wish to shift that cost to the supplier by connecting to its servers over the Internet. In such a case, even though the proposed solution functionally satisfies the need, the technical context has changed enough, and quickly enough, to make migration difficult, and it then becomes very hard to market a product that misses its target. From this standpoint, investment stages must be reviewed regularly, and decisions must rest on an expected-gain-versus-risk analysis informed by knowledge of the market, the competition, a business plan, and so on. Of course, as is readily apparent from the above, the CSTB has every interest in forming partnerships so as to rely on specialists in these various matters. Finally, a mission such as this, conducted over a very limited period, could only skim the full set of questions at stake and offer a necessarily rough panorama of many subjects that deserve deeper study. Let us take it as a promising start.
Article
The Peer-to-Peer (P2P) network is an important component of the next-generation Internet, and how to search for resources in P2P networks quickly and efficiently has become one of the most critical issues, as well as one of the greatest concerns for users. This paper first describes the basic flooding search method for P2P networks, then analyses the pros and cons of several newer search methods, and on that basis proposes a cache-based search algorithm: when a node's remaining load capacity is high, it becomes a centre node and forms a joint topology region with nearby nodes; both the centre node and the ordinary nodes store an index cache, and within the local region overheated resources are copied locally (the content cache). Simulation shows that the algorithm can effectively improve the hit rate of resource searches and reduce query delay.
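As a hedged illustration of the scheme sketched in this abstract, the following C++ fragment mimics centre-node promotion, index caching and content caching of overheated resources; all names and thresholds are hypothetical, since the paper publishes no code:

    #include <string>
    #include <unordered_map>
    #include <vector>

    // Hypothetical node in the cache-based P2P search scheme described above.
    struct Node {
        double loadCapacityLeft = 0.0;           // remaining load capacity
        bool isCentre = false;                   // true once promoted to centre node
        std::unordered_map<std::string, std::string> indexCache; // resource -> owner address
        std::unordered_map<std::string, int> hitCount;           // query popularity
        std::vector<std::string> contentCache;   // locally replicated "hot" resources
    };

    // A node with high remaining capacity promotes itself to centre node and
    // would then aggregate the index caches of nearby ordinary nodes.
    void maybePromote(Node& n, double threshold) {
        if (n.loadCapacityLeft > threshold) n.isCentre = true;
    }

    // Lookup consults the index cache first; only a miss triggers flooding.
    // Overheated resources are copied into the local content cache.
    bool lookup(Node& n, const std::string& resource, int hotThreshold) {
        if (++n.hitCount[resource] > hotThreshold)
            n.contentCache.push_back(resource);   // content caching of hot items
        return n.indexCache.count(resource) > 0;  // index-cache hit avoids flooding
    }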
Article
A comprehensive review is given of recent research on developing Web-based manufacturing systems. First, the key issues in developing such systems, including collaboration among product development partners, data modelling, system architecture design and security management, are presented together with the research addressing them. Various approaches to developing Web-based manufacturing systems are then introduced to show how they can improve efficiency and quality in product design, production, life-cycle integration, enterprise management and customer service. Problems with the currently developed Web-based manufacturing systems, and future work towards the next generation of such systems, are subsequently discussed.
Conference Paper
CNC manufacturing has evolved through the use of faster, more precise and more capable CNC controllers and machine tools. These enhancements in machine tools, however, have not been integrated under a common platform to support CAD/CAM/CNC software interoperability, and as a result a plethora of standards is in use for these systems. ISO10303 (STEP) and ISO14649 (STEP-NC) seek to eliminate the barriers to the exchange of information in the CNC manufacturing chain and to enable interoperability throughout the manufacturing software domain. This chapter introduces a novel software platform called the Integrated Platform for Process Planning and Control (IP3AC) to support the rapid development of STEP-NC compliant CNC manufacturing software. IP3AC has been developed to provide a fully object-oriented encapsulation of ISO14649 information, allowing the user application to manipulate manufacturing data efficiently as objects, with no need to handle low-level data in user-written code.
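The following C++ sketch suggests what such an object-oriented encapsulation of ISO14649 data might look like; the class and member names are hypothetical and do not reproduce IP3AC's actual API:

    #include <memory>
    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical object model for ISO14649 (STEP-NC) entities: user code
    // manipulates workplans and workingsteps as objects instead of raw
    // Part 21 file records.
    class MachiningWorkingstep {
    public:
        MachiningWorkingstep(std::string id, std::string operation)
            : id_(std::move(id)), operation_(std::move(operation)) {}
        const std::string& operation() const { return operation_; }
    private:
        std::string id_;
        std::string operation_;
    };

    class Workplan {
    public:
        void add(std::shared_ptr<MachiningWorkingstep> ws) {
            steps_.push_back(std::move(ws));
        }
        // Iterate over executables without exposing the low-level encoding.
        const std::vector<std::shared_ptr<MachiningWorkingstep>>& steps() const {
            return steps_;
        }
    private:
        std::vector<std::shared_ptr<MachiningWorkingstep>> steps_;
    };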
Conference Paper
With the rapid development of the virtual enterprise (VE), knowledge transfer and sharing between the member enterprises (MEs) of a VE become more and more important. Studies of knowledge transfer have mainly focused on procedures and transfer models, while little has been said about its risks. In this study, we analysed the causes of knowledge-transfer risks between the MEs of a VE on the basis of an extensive literature review and wide field investigations, and put forward a series of preventive measures. Our results indicate that: (1) the ability and willingness of MEs, some inherent features of knowledge, and the environment are the key causes of knowledge-transfer risks; and (2) a risk-prevention mechanism can be established on six measures, namely selecting suitable partners, designing rational contracts, setting up a team for knowledge-transfer management, strengthening communication and cooperation, protecting knowledge selectively, and implementing punitive measures. These results may offer important guidance, in theory and in practice, for reducing knowledge-transfer risks between the MEs of a VE.
Conference Paper
The shift from made-to-stock to made-to-order has resulted in new manufacturing environments that require IT frameworks able to support this new dynamism. Collaborative manufacturing environments show considerable potential in responding to this need, and open manufacturing approaches are increasingly the appropriate tools and technologies for meeting the high expectations of customers who, in today's economy, demand the best service, price, delivery time and product quality. In this paper an agent-supported infrastructure for e-manufacturing is proposed as the enabling tool for implementing virtual collaborative manufacturing environments. In such an environment, an open manufacturing process is seen as a virtual scenario in which different manufacturing services are choreographed and orchestrated by different autonomous entities from different manufacturing systems in order to obtain a product.
Conference Paper
Full-text available
In this paper, we propose a new approach to building Virtual Enterprise software infrastructure that offers persistence, concurrent access, coherence and security on a distributed shared data store based on the distributed shared-memory paradigm.
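To contrast this programming style with remote invocation, here is a minimal, self-contained C++ sketch of the distributed shared-memory idiom, with stub stand-ins for the platform calls; the function names, types and URL scheme are invented for illustration and are not the real PerDiS API:

    #include <cstdio>

    struct Door { double width = 0.80; double height = 2.10; };
    struct Wall { Door door; /* ... other building elements ... */ };

    // Stub stand-ins for the platform calls (hypothetical; a real store would
    // map a distributed cluster and run a coherence protocol across sites).
    static Wall g_cluster;                                // pretend shared cluster
    Wall* store_open(const char* /*url*/) { return &g_cluster; } // map cluster locally
    void  store_commit(Wall* /*root*/)    { /* propagate updates to other sites */ }

    int main() {
        Wall* wall = store_open("pds://site-a/project/building1");
        wall->door.width = 0.90;          // plain in-memory access, no RPC stubs
        store_commit(wall);               // changes become visible to co-operating apps
        std::printf("door width: %.2f m\n", wall->door.width);
    }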
Article
This overview of C++ presents the key design, programming and language-technical concepts, using examples to give the reader a feel for the language. C++ is a general-purpose programming language with a bias towards systems programming that supports efficient low-level computation, data abstraction, object-oriented programming and generic programming.
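A small self-contained example in the spirit of that overview, combining data abstraction with generic programming (illustrative only, not taken from the cited text):

    #include <iostream>
    #include <vector>

    // Data abstraction: a type with a clear interface rather than raw numbers.
    class Temperature {
    public:
        explicit Temperature(double celsius) : c_(celsius) {}
        double celsius() const { return c_; }
    private:
        double c_;
    };

    // Generic programming: one algorithm over any range of compatible elements.
    template <typename Range>
    double averageCelsius(const Range& r) {
        double sum = 0; int n = 0;
        for (const auto& t : r) { sum += t.celsius(); ++n; }
        return n ? sum / n : 0;
    }

    int main() {
        std::vector<Temperature> v{Temperature{20.5}, Temperature{22.0}};
        std::cout << averageCelsius(v) << '\n';   // prints 21.25
    }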
Article
As a contribution to the international efforts toward the development of integrated information technology (IT) for managing project information, a major research project has been undertaken at the University of Salford. Based on advances in international standards for data models, i.e., the Industry Foundation Classes (IFCs), and communication protocols, i.e., the Common Object Request Broker Architecture (CORBA), this project has successfully developed and implemented a three-tier computer architecture environment to integrate design and construction applications. The IFC data model has been implemented as the single project database in an open and distributed computer environment, with the World Wide Web used as the delivery medium. This article explains the background of the project, outlines other international efforts in this field, introduces the concept of the three-tier computer architecture, and discusses the proposed structure for the integrated environment along with its implementation issues.
Chapter
In this chapter, the philosophy behind Concurrent Engineering is presented, as well as the current approaches used to implement it. Concurrent Engineering design encourages the simultaneous consideration of all aspects of a product's life cycle at the design stage. It has been shown to shorten product development time and reduce costs by avoiding the typical problems associated with sequential design. Companies competing in today's global and volatile marketplace cannot afford long development lead times or high costs. Success stories in Concurrent Engineering have relied primarily on the design-team approach, a collaboration of people from different departments representing different life-cycle perspectives. However, the design-team approach and other approaches suffer from an inability to manage (i.e., store, access, update, etc.) the immense amount of data and information required to perform Concurrent Engineering.
Article
The work reported in this paper addresses the paradoxical state of the construction industry (also known as A/E/C, for Architecture, Engineering and Construction), where the design of highly integrated facilities is undertaken by severely fragmented teams, leading to diminished performance of both processes and products. The construction industry has been trying to overcome this problem by partitioning the design process hierarchically or temporally. While these methods are procedurally efficient, their piecemeal nature diminishes the overall performance of the project. Computational methods intended to facilitate collaboration in the construction industry have, so far, focused primarily on improving the flow of information among the participants. They have largely met their stated objective of improved communication, but have done little to improve joint decision-making, and therefore have not significantly improved the quality of the design project itself. We suggest that the main impediment to effective collaboration and joint decision-making in the A/E/C industry is the divergence of disciplinary 'world-views', which are the product of educational and professional processes through which the individuals participating in the design process have been socialized into their respective disciplines. To maximize the performance of the overall project, these different world-views must be reconciled, possibly at the expense of individual goals. Such reconciliation can only be accomplished if the participants find the attainment of the overall goals of the project more compelling than their individual disciplinary goals. This will happen when the participants have become cognizant and appreciative of world-views other than their own, including the objectives and concerns of other participants. To achieve this state of knowledge …
Article
Concurrency control algorithms have traditionally been based on locking and timestamp-ordering mechanisms. Recently, optimistic schemes have been proposed. In this paper a distributed, multi-version, optimistic concurrency control scheme is described that is particularly advantageous in a query-dominant environment. The drawbacks of the original optimistic concurrency control scheme, namely that transactions may see inconsistent views (potentially causing unpredictable behavior) and that read-only transactions must be validated and may be rolled back, have been eliminated in the proposed algorithm. Read-only transactions execute in a completely asynchronous fashion and are therefore processed with very little overhead. Furthermore, the probability that read-write transactions are rolled back has been reduced by generalizing the validation algorithm. The effects of global transactions on local transaction processing are minimized. The algorithm is also free from deadlock and cascading-rollback problems.
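A minimal C++ sketch of the backward-validation step at the heart of optimistic schemes of this kind; it is a single-site simplification for illustration, not the paper's distributed multi-version algorithm, and all names are invented:

    #include <set>
    #include <string>
    #include <vector>

    // A transaction records the committed-transaction number seen at start,
    // plus its read and write sets (optimistic execution, no locks taken).
    struct Transaction {
        int startTn = 0;
        std::set<std::string> readSet;
        std::set<std::string> writeSet;
    };

    struct Committed { int tn; std::set<std::string> writeSet; };

    // Backward validation: a read-write transaction may commit only if no
    // transaction that committed after it started wrote an item it read.
    // Read-only transactions read a consistent version and never validate.
    bool validate(const Transaction& t, const std::vector<Committed>& log, int& nextTn) {
        for (const auto& c : log) {
            if (c.tn <= t.startTn) continue;              // committed before we started
            for (const auto& item : t.readSet)
                if (c.writeSet.count(item)) return false; // conflict: restart transaction
        }
        ++nextTn;                                         // assign the commit timestamp
        return true;
    }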