Technical Report

Towards a Generalisation of the Use of the STEP Standard in the Building Sector

Authors:
  • Free Thinker @Moorea
  • R2M Solution

Abstract

This document examines the introduction, and then the possible generalisation, of the use of the STEP standard in the building sector, in the context of the emergence of new so-called virtual enterprises, i.e. groups of partners who join forces for fixed periods corresponding to the execution of specific projects.
Technical Report
Full-text available
This document in no way commits the CSTB and reflects only the views of its principal author. The software mission, as defined, addresses a number of fundamental questions, all of which bear on the CSTB's capacity to act as designer, developer, producer, publisher and vendor of software. Although many relationships exist between these major questions, for clarity of presentation it has been useful to address them separately; the reader should keep in mind that they are often intimately linked. For example, there can be no technical policy without considering the organisational arrangements that guarantee its proper execution, nor a technical strategy that ignores the realities of users' hardware and software base, or that makes strong assumptions about customers' equipment when products are deployed. The questions are thus interdependent, and acting on one without measuring the effects on the others is of little use. It is probably the complexity of each of the subjects addressed here, coupled with their interrelations, that makes the software business difficult.

Technical policy is an essential question. It covers themes such as measuring the relevance of a technical solution; the consequences in terms of productivity; consistency between developments undertaken in different CSTB departments; interoperability and durability of software developments through the adoption of preferred languages and development platforms; the implementation of libraries of reusable software components; systematic documentation standardised according to company norms; corrective and adaptive maintenance; etc.
The field is immense, and while the CSTB can pride itself on mastering various trades, software as such is ultimately not really one of them. Questions of liability rightly concern the CSTB's management, given the significant consequences that can follow from erroneous use of software, or simply use outside its intended limits. This is particularly true given the capital committed in building or civil-engineering operations: the resulting errors, defects and other problems generally translate into a significant, even considerable, financial impact and into delays, which in turn generate penalties. Modelling assumptions and numerical solution techniques must be made explicit and understood by users, who must agree to waive the designers' liability under terms to which we shall return later. Also within this legal scope come the questions of software protection, of the rights attached to software and of the transfer of those rights to users, with all the limits it is advisable to impose for various reasons, not least the liability reasons just described. Decision-making processes and economic logic must be well mastered, and the decision to release a piece of software must be taken in full knowledge of the implications for other activities. One might indeed think that making a sophisticated software tool available to end users could reduce the value of contracts previously won on the strength of a proprietary tool sophisticated enough to justify providing consulting services.
We shall return to these arguments later; without saying more at this point, we tend to think that distributing software, if done properly and if users can rely on the publisher as needed, can on the contrary increase business volumes. This view will be argued below. Distribution requires specific organisations and approaches, for development as well as for promotion and marketing, in a context where keeping up with rapidly evolving computing technologies is a permanent investment and where the commercial target is often a moving one. It is not enough to deliver a service identified at a given moment that meets a clear need of a user population; the conditions of the target must also not have changed between the moment the project is launched and the moment marketing begins. This kind of problem has unfortunately already been encountered: to illustrate, a user may no longer want a solution relying on local data, even if frequently updated on CDs, preferring to shift that cost to the supplier by connecting to its servers over the Internet. In that case, even if the need is functionally satisfied by the proposed solution, the technical context has changed sufficiently, and quickly enough to make migration difficult, that the product misses its target and becomes very hard to sell. From this point of view, investment stages must be reviewed regularly, and decisions must rest on an expected-gain-versus-risk analysis taking into account knowledge of the market, the competition, a business plan, etc.
Of course, as is readily understood from what has just been sketched, the CSTB has every interest in setting up partnerships to draw on specialists in these various questions. Finally, a mission like this one, conducted over a very limited period, will only have allowed a survey of the questions at stake and offered a necessarily somewhat rough panorama of many subjects that would deserve deeper study. Let us take that as a good omen.
Technical Report
Task 1120 - Distribution of Product Data Models - of the VEGA project has produced deliverable D102 in two parts. The major goal of this task is to define potential elements and guidelines of a methodology for the distribution of STEP model instances. This methodology tries to take into account the different possible levels of granularity for STEP model distribution, and relies on specific IT concepts for distribution as they will be implemented in the Corba Access to STep models (COAST: VEGA WP3). The work on distribution of STEP models is not intended as a modification of the current product modelling technology or standards, but as a transparent extension providing distribution control information and a third path (besides SPF files and the SDAI) for access to STEP-based information. Even though the methodology developed here is related to STEP, it is not an additional concept on top of STEP and EXPRESS, nor a new EXPRESS flavour: this is why we will speak, as far as possible in this deliverable, of distribution modelling (rather than EXPRESS-D, as initially characterised in the VEGA Technical Annex). The same holds for EXPRESS-W, which will be referred to as the Workflow meta-model (see D103b, Bakkeren [1]). The first part of this deliverable, D102a, defined the scope of a model for the distribution of STEP data in relation to existing standards for modelling and distribution. It described what the main features of this model could be and gave first hints on the different levels of granularity of the distributed STEP data.
Moreover, D102a promoted the elaboration of an EXPRESS-D language (as initially identified in the VEGA TA): the main goal of such a language would have been to specify the availability of data through their distribution, using the EXPRESS language as a basis with an annotation (or pragma) extension, allowing distribution information to be added simply to existing EXPRESS schemata while remaining upward compatible with EXPRESS-IS. This language would have allowed, in particular, the distribution of any entity object at the level of the middleware. The identification of what seems to be the best level of object-distribution granularity for the COAST (distribution of STEP models) and, consequently, the abandonment of EXPRESS-D (model and language) are the main results of this second part of the deliverable (D102b). D102b first identifies in more detail the distribution of STEP models and data, i.e. what is directly visible on the network and managed by the COAST. The distribution of product data (defined by EXPRESS entities) can cause dramatic network overheads depending on the type of application (especially in Building & Construction CAD design, where thousands of similar data items may be created when designing a building in a CAD tool); this is the major reason why the object layer chosen in VEGA for distribution and management by the COAST is the model: the designer of a COAST-based distributed application has to specify which models of the application are to be distributed (and thus which data will be remotely accessible) and which models are internal to a specific server and not accessible.
Distribution of STEP models alone is nevertheless sometimes not enough when dealing with real industrial applications: D102b also investigates specific notions such as "grouping of entities", views and inter-model relationships to meet the specific requirements of the LSE virtual enterprise, in line with the needs identified in D501a (Liebich [2]), mainly with performance in mind, a mandatory objective for a network-based architecture like the COAST. This deliverable thus specifies the distribution and associated properties of STEP data objects as they could be further integrated into the COAST architecture. These concepts will then be implemented and evaluated in the VEGA project, especially in the course of VEGA WP3 Task 3200 - Implementation of the COAST. The distribution of STEP models and data is one of the main objectives of the VEGA project. Data distributed over a network are potentially available to any client; they therefore have to be protected through access rights controlling who accesses them, and they must moreover be accessed at the right time and in the right place: distribution features are the ground on which the Security and Workflow considerations are built. Distribution in VEGA relies on the concept of a STEP model as intended in the ISO 10303 standard, a set of related entity instances based on an EXPRESS schema definition, and has to be related to inter-model relationships and schema interoperability. Moreover, the granularity of distribution greatly affects the way clients access data through the middleware, and therefore influences the definition of a query language for accessing and modifying product data distributed across the network. As stated previously, the COAST is the VEGA distribution layer, and this deliverable is also an input to the specification and implementation of the COAST, which will include a set of optimised services for the distribution of STEP data.
Finally, product data, as well as administrative data such as documents, will be accessible through the COAST, following the main guidelines identified hereinafter. Consequently, this deliverable, the result of Task 1120, is relevant to the following tasks:
  • Task 1220/1230 - Workflow and Security Modelling supporting tools
  • Task 2100 - Schema Interoperability
  • Task 2200 - Functional Interoperability
  • Task 3100 - Specification of the COAST
  • Task 3200 - Implementation of the COAST
  • Task 4100 - Specification of Distributed Information Service
and also to the different implementation tasks of Work Package 5 leading up to the final demonstrations of the VEGA project. The VEGA TA stated that the main goal of Task 1120 should be an EXPRESS-D model and language for a neutral description of the distribution of STEP data on top of any object broker. D102a tried to identify the main objectives and elements of such a model, mainly the handling and availability of the (potentially different types of) distributed objects on the one hand, and the levels of access rights and access protection for the distributed objects on the other hand. The EXPRESS-D meta-model and language are no longer considered in this second part of D102, for the reasons introduced above and mainly set out in section 3 of the current deliverable. Consequently, EXPRESS-D is no longer referenced in this deliverable, except in section 3.3 for a specific purpose explained in that section.
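To make the unit of distribution discussed above concrete: a STEP model is a set of entity instances which, in the Part 21 clear-text encoding (SPF files), appear as numbered records of the form `#id=TYPE(...);`. The sketch below assumes nothing beyond that standard textual shape and is deliberately not a full ISO 10303-21 parser; the sample instances are illustrative:

```python
import re

# Minimal sketch (not a full ISO 10303-21 parser): extract entity
# instances of the form "#id = TYPE(...);" from a Part 21 DATA section.
INSTANCE_RE = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\s*\((.*)\)\s*;")

def parse_instances(spf_data: str) -> dict[int, tuple[str, str]]:
    """Map instance id -> (entity type, raw attribute text)."""
    instances = {}
    for m in INSTANCE_RE.finditer(spf_data):
        instances[int(m.group(1))] = (m.group(2), m.group(3))
    return instances

sample = """
#1=CARTESIAN_POINT('origin',(0.,0.,0.));
#2=DIRECTION('z-axis',(0.,0.,1.));
"""
print(parse_instances(sample))
```

A COAST-style layer would then make distribution decisions per model (a whole set of such instances) rather than per instance, which is exactly the granularity choice the deliverable argues for.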
Technical Report
The major goal of this task is to define a STEP-based methodology for describing the distribution of STEP model instances. This methodology is based on the specification of an object model for distribution, which is formalised using EXPRESS. This model provides a specification of an annotation mechanism (EXPRESS-D) to supply the underlying distribution system with guidelines on the distribution of individual model instances. The EXPRESS-D model and language are not intended as a modification of the current product modelling technology or standards, but as a transparent extension providing distribution control information.
Conference Paper
The VEGA project aims to establish an information infrastructure which supports the technical and business operations of virtual or extended enterprises. Groupware tools and distributed architectures are developed in compliance with product data technology standardisation activities, in line with the current specifications from the Object Management Group (OMG) and from STEP, the ISO Standard for the Exchange of Product model data (ISO/TC184). VEGA will make a significant contribution to the emergence of appropriate technical solutions supporting the advent of distributed objects and architectures as a means to implement distributed companies. The continued growth of the Internet/Web and its associated standards leads to new ways of world-wide communication, distribution of, and access to information. VEGA works towards a tighter integration of STEP, CORBA and Web technologies within a Distributed Information Service (DIS).
Technical Report
I have lost this document and I would be deeply indebted to anyone who could help me get it back, either on paper or, better, in electronic format. It describes a comprehensive Le_Lisp-based platform supporting the implementation of systems for the modelling, representation, storage and exchange of Product Model Data (PMD), either in native XPDI format or according to ISO/STEP 10303. Various expert systems and knowledge bases can be implemented on top of the PMD storage and used either for model conversion or for dedicated reasoning functions, e.g. design, conformity with building regulations, etc. The attached PDF is a short 39-page introduction to the functionalities of XPDI.
Conference Paper
In this paper, product models in applications and software interfaces are discussed. First we focus on the different kinds of models that have to be taken into account. It is shown that both applications and interfaces are based on product models, and we present how interfaces work and which information must be converted at which level. Following this, the need for a modelling framework, as defined in the APPP and as it will emerge from international discussions, is shown. Finally, it is shown how a future application can access the neutral product models using standard interfaces such as the SDAI. These neutral product models are described in EXPRESS, which is used to define the schema of the object-oriented database. It is shown how, in this way, the quality of future product models could be evaluated before time-consuming implementation and standardisation activities start.
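To make the idea of an EXPRESS schema driving an application's data structures tangible, here is a minimal sketch. The `beam` entity is invented for illustration (not taken from a published application protocol), and the class mirrors what an EXPRESS-to-database binding might generate:

```python
from dataclasses import dataclass

# Hypothetical mapping of a simple EXPRESS entity onto a class.
# Illustrative EXPRESS source (not from an actual standard schema):
#   ENTITY beam;
#     name   : STRING;
#     length : REAL;   -- metres
#   END_ENTITY;

@dataclass
class Beam:
    name: str
    length: float  # metres

b = Beam(name="B-101", length=6.0)
print(b)
```

In an SDAI-style setting, such classes would be generated from the EXPRESS schema rather than written by hand, so that applications and interfaces share one neutral definition.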
Conference Paper
STEP (Standard for the Exchange of Product Model Data), officially the ISO standard 10303, Product Data Representation and Exchange, is a series of international standards with the goal of defining data across the full engineering and manufacturing life cycle. The international effort began in July 1984; ten years later, the initial release of the STEP ISO standard 10303 comprises twelve parts which, with reference to the original target, omit tolerances and features; moreover, parametrics and standard parts have been deliberately separated from STEP. The SPACE project aims to help address this situation and to establish parametric design as an objective for STEP. Based on an updated understanding of end users' requirements, SPACE will not only focus on the definition of upward-compatible parameterised STEP models, resources and libraries, but will also address the integration of STEP-based parametric CAD systems into STEP implementation architectures (Part 22 / SDAI) and the development of a number of additional paradigms such as rule-based, constraint-based and case-based programming, which all impact the architectures, functions and communication facilities of the STEP parametric systems of the future. STEP is gaining wide acceptance in industry and its importance is increasingly acknowledged. It will open up new ways of doing business within, between and among enterprises, and its establishment will be a major milestone of the information age of industrial development. STEP finds its roots a decade ago, in July 1984, one of the major motivations being to define CAD exchange formats that would support communication across applications and vendor platforms.
Given the encouraging results of a number of early research activities, some supported by the European Community such as CAD*I, a very ambitious programme was set up with the objective of completing the effort by the end of 1985. The following is a quote from the resolutions of the first STEP meeting in Washington: "STEP aims at the creation of a standard which enables the capture of information comprising a computerized product model in a neutral form without loss of completeness and integrity, throughout the life cycle of the product". Among the requirements stated in the early days were a product model core (geometry, 3D wireframes/surfaces, B-reps, CSG, tolerances, features, bills of materials), application data requirements for areas such as drafting, FEM, machining, quality assurance and data management, a mechanism for standard parts, parametric design features, and a data syntax and file structure independent of the models' content. Ten years later, the initial release of the STEP ISO standard 10303 comprises twelve parts which, with reference to the original target, omit tolerances and features; moreover, parametrics and standard parts have been deliberately separated from STEP to keep the overall project structure manageable. Parametrics has recently been submitted to STEP as a New Work Item (NWI), and standard parts have taken their own route under a project called Parts Library (ISO 13584), i.e. ISO/TC184/SC4/WG2.
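The parametric design objective discussed above can be illustrated with a toy model: instead of storing fixed geometry, a parametric system stores driving parameters and recomputes dependent dimensions when they change. The class and parameters below are invented purely for illustration:

```python
# Illustrative parametric model: dependent dimensions are derived
# from driving parameters, the core idea behind parametric CAD.
class ParametricRectangle:
    def __init__(self, width: float, aspect: float):
        self.width = width    # driving parameter
        self.aspect = aspect  # driving parameter: height = width * aspect

    @property
    def height(self) -> float:
        return self.width * self.aspect

    @property
    def area(self) -> float:
        return self.width * self.height

r = ParametricRectangle(width=2.0, aspect=0.5)
print(r.height, r.area)   # 1.0 2.0
r.width = 4.0             # editing the driving parameter...
print(r.height, r.area)   # ...updates the dependent geometry: 2.0 8.0
```

Exchanging such a model through a purely geometric format would lose the driving parameters and constraints, which is precisely the gap SPACE set out to address in STEP.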
Technical Report
This document (D302a) is the first step towards the development of data abstraction / generalisation and view conversion mechanisms atop the ATLAS product models. The Technical Annex identifies the need for a mechanism supporting data abstraction and data generalisation for the life-cycle integration of product/process information. It also clearly states that an essential part of the problem to be addressed by the project is the conversion of information created by different disciplines (or partner roles) having different views on the product/process. Even when the same LSE application is used, different partners will create and/or use different sets of information. For example, a beam or column will be described differently by an architect, a structural engineer, a contractor or a supplier; a mechanism supporting the conversion of views is therefore needed. The Technical Annex also acknowledges that related to this topic is the need to describe products/processes at different levels of 'granularity'. A column or beam will be described differently in the context of the entire project (building, plant, etc.) than when a particular detail is being considered (for example a joint). This is true for virtually all aspects of a product; good examples are shape and costs. When a project is in a detailed specification, construction or operation stage, information at different levels of granularity must be made available and must be consistent (for example the cost of a single component but also of the entire project).
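The granularity/consistency requirement above can be sketched in a few lines: if the project-level view is derived from the component-level data rather than stored separately, the two levels cannot drift apart. Component names and figures are illustrative:

```python
# Sketch of the granularity issue from the abstract: component-level
# costs and the project-level aggregate must stay consistent, so the
# aggregate is computed, never stored independently.
components = {
    "column_C1": 1200.0,
    "column_C2": 1200.0,
    "beam_B1": 800.0,
}

def project_cost(parts: dict[str, float]) -> float:
    """Project-level view: the aggregate of component-level data."""
    return sum(parts.values())

print(project_cost(components))  # 3200.0
```

Real view-conversion mechanisms must also handle the reverse direction (allocating a project-level change down to components), which is where the abstraction/generalisation machinery described in D302a comes in.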
Technical Report
The origins of CBR (Case-Based Reasoning) were motivated by the cognitive observation that humans often rely on past experience to solve new problems. For example, an architect, during the design process, remembers old similar projects and adapts them to realise a new project. Indeed, through the practice of "real" tasks over a long time, operators acquire knowledge of processes and special situations, and this is the sort of expert evaluation that beginner operators lack (Willemien Visser, June 1993). Case-Based Reasoning can be described as adapting solutions to problems already solved in order to solve a new problem. Let us take an example from the building construction field: if I have to design a new building, the CBR application must be capable of suggesting a solution that is the result of adapting similar buildings stored in memory. With this method, we can reuse technical solutions to solve new technical problems. Kolodner (1985) and Riesbeck and Schank (1989) were the pioneers of this approach. CBR systems (Kolodner 1988, Hammond 1989) made it possible to solve new problems by using the knowledge obtained through the resolution of similar problems in the past.
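The retrieve-and-adapt cycle described above can be sketched directly. The case base and similarity measure below are illustrative assumptions, not taken from any specific CBR system:

```python
# Minimal retrieve-and-adapt CBR loop: find the most similar past
# case, then adapt its solution to the new problem.
cases = [
    {"floors": 3, "use": "office", "frame": "steel"},
    {"floors": 12, "use": "office", "frame": "concrete core"},
    {"floors": 2, "use": "housing", "frame": "timber"},
]

def similarity(case: dict, query: dict) -> float:
    """Crude similarity: matching use counts more than floor distance."""
    s = 2.0 if case["use"] == query["use"] else 0.0
    return s - abs(case["floors"] - query["floors"]) / 10.0

def retrieve_and_adapt(query: dict) -> dict:
    best = max(cases, key=lambda c: similarity(c, query))
    adapted = dict(best)                 # reuse the past solution...
    adapted["floors"] = query["floors"]  # ...adapted to the new problem
    return adapted

print(retrieve_and_adapt({"floors": 10, "use": "office"}))
```

A full CBR system would add the remaining steps of the classic cycle (revise the adapted solution, retain it as a new case), but retrieval and adaptation are the core that the abstract describes.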
Book
This document aims to present, in terms understandable by construction professionals (and hence users of the technologies concerned), the work conducted by the International Alliance for Interoperability (IAI). The IAI is an initiative bringing together software publishers (Autodesk, Bentley, Nemetschek, IEZ, etc.), construction companies, architectural practices (HOK, Turner Construction Company), engineering firms (Ove Arup, Naoki Systems, etc.), equipment manufacturers (A&T, Carrier, ...), etc. This book describes the data model developed by the IAI to facilitate the exchange of data between the participants' systems, known as the IFC object classes.
Technical Report
This document (D301a) describes the first steps taken to fulfil the aims of the ATLAS project with respect to the integration of knowledge-based systems, and the efforts made towards the implementation of knowledge-based extensions. In fact, a complete software infrastructure for the integration of knowledge-based systems is presented. This environment is fully compliant, on the one hand, with the ATLAS modelling methodologies and, more generally, with the STEP modelling methodologies, and, on the other hand, with the STEP implementation techniques, particularly the SDAI (Standard Data Access Interface, ISO 10303) specification.
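To convey the generic-access idea that the SDAI standardises, here is a deliberately simplified, hypothetical sketch. The class and method names are invented, not the actual ISO 10303-22 binding: the point is only that applications retrieve entity instances and attributes through a schema-neutral interface rather than application-specific data structures:

```python
# Hypothetical, simplified sketch of the access pattern standardised
# by the SDAI (ISO 10303-22): generic, schema-neutral retrieval of
# entity instances. All names here are invented for illustration.
class Model:
    def __init__(self, instances):
        self._instances = instances  # list of (entity type, attrs) pairs

    def get_instances(self, entity_type: str):
        """Return the attribute sets of all instances of one entity type."""
        return [a for t, a in self._instances if t == entity_type]

model = Model([
    ("WALL", {"id": "W1", "height": 2.7}),
    ("DOOR", {"id": "D1", "width": 0.9}),
    ("WALL", {"id": "W2", "height": 2.7}),
])
for wall in model.get_instances("WALL"):
    print(wall["id"], wall["height"])
```

A knowledge-based extension of the kind D301a describes would sit on top of such an interface, reasoning over the retrieved instances without depending on any one application's internal representation.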