Conference Paper

A model for understanding software components

Authors:
  • Institute of Management of Social Sciences, Kalyani
  • Dongguk University

Abstract

Understanding the behavior of components is an important task in the component-based software development process. Component users build mental models to understand a component when they use it for the first time. The models are also useful during the evolution of the component and the application that incorporates it. The process of component understanding employed by the component user influences the kind of models that can be developed. In this paper we examine several comprehension models used in practice, and analyze existing component understanding approaches with respect to the comprehension models. We illustrate the development of comprehension models with the example of a spreadsheet component used in an application.

... The main idea is to be able to build systems based on reliable and already tested components as it is done in other engineering disciplines [3]. To achieve this, it is important to extend our comprehension from a software understanding (where models describe software technically and code is the focus) to a component understanding (where the functionality, application and adaptation requirements of components gain importance) [2]. This means that besides a model which technically describes the software, the component must be viewed and understood in terms of its reusability (functionality, requirements and restrictions). ...
... Most recent trends in software engineering show that future developments will follow in the CBD path. This argument is partially confirmed by the large amount of component development technologies that exist today (CORBA, EJB, DCOM, and .NET among others), and also given the amount of components (COTS) available in the market [2]. ...
... This new form of applying Software Engineering requires particular aspects for its success, such as: component and architecture selection; adaptation and integration of components inside the chosen architecture; and maintenance of the components along with the evolution of requirements [2][3][16]. This new approach might not be easy to implement, because it must guarantee coexistence and compatibility among different component versions from different sources. ...
... It could be assumed that formal descriptions of a "white box" type could solve this problem, providing enough information for the use and understanding of the component, but the effort required to use and comprehend these formal models dissuades developers from using them, inclining them towards "black box" type descriptions [2][27]. Because of this, a more appropriate model would be one that offers not only understanding of the domain (requirements vs. capabilities), but also of the program itself (interfaces, data types, syntax, parameters, and acceptable ranges) and the situation (structure, connections, and flows) [2]. ...
... Most recent trends in software engineering show that future developments will follow in the CBD path. This argument is partially confirmed by the large amount of component development technologies that exist today (CORBA, EJB, DCOM, and .NET among others), and also given the amount of components (COTS) available in the market [2]. Although CBD promises to improve the software development processes, quality, productivity and reuse in particular [28], these achievements are not new in other areas of industry [3]. ...
... Such semantics has two dimensions: knowledge of the component's domain and knowledge of the component software [25]. Domain knowledge provides the basic structure (motherboard) over which the product's knowledge is specified [2]. Frequently, this domain information guides component modeling [ibid.: 362]; however, it is focused on non-functional properties (performance, security, ...) with little attention to user-oriented descriptions (access conditions and instantiation, for instance) [12]. ...
Article
Full-text available
Software development has been coupled with time and cost problems through history. This has motivated the search for flexible, trustworthy and time and cost-efficient development. In order to achieve this, software reuse appears fundamental and component-based development, the way towards reuse. This paper discusses the present state of component-based development and some of its critical issues for success, such as: the existence of adequate repositories, component integration within a software architecture and an adequate specification.
... The goal is to understand a component's properties, functionality and possible limitations. It is proposed more as an approach to component understanding [72], rather than as an approach to integration testing. However, it can also be included in the set of user's specification-based testing approaches, in that it is, in effect, the user that designs the exploratory test cases, based on their 'mental model' [72] and expectations of the component when it is first used. ...
... It is proposed more as an approach to component understanding [72], rather than as an approach to integration testing. However, it can also be included in the set of user's specification-based testing approaches, in that it is, in effect, the user that designs the exploratory test cases, based on their 'mental model' [72] and expectations of the component when it is first used. Korel [71] distinguishes among test cases aimed at finding an input on which a desired property is exhibited, test cases aimed at detecting whether there exists any input on which a required property is violated and test cases aimed at identifying component pre-conditions. ...
Article
Full-text available
Component-based development has emerged as a system engineering approach that promises rapid software development with fewer resources. Yet, improved reuse and reduced cost benefits from software components can only be achieved in practice if the components provide reliable services, thereby rendering component analysis and testing a key activity. This paper discusses various issues that can arise in component testing by the component user at the stage of its integration within the target system. The crucial problem is the lack of information for analysis and testing of externally developed components. Several testing techniques for component integration have recently been proposed. These techniques are surveyed here and classified according to a proposed set of relevant attributes. The paper thus provides a comprehensive overview which can be useful as introductory reading for newcomers in this research field, as well as to stimulate further investigation. Copyright © 2006 John Wiley & Sons, Ltd.
... In 1992, Ira Baxter observed that the software engineering community was attacking the problem of program maintenance in the wrong way [8]. He noted that engineers are burdened by the task of code maintenance. ...
... #2 Automatic Programming Challenge: Mapping a declarative specification to an efficient executable is hard. This task, called Automatic Programming (AP), was abandoned by all but the most pioneering researchers in the early 1980s, as the techniques available at the time did not scale [1]. Design Maintenance requires AP to be solved. ...
Article
The communities of Generative Programming (GP) and Program Comprehension (PC) look at similar problems: GP derives a program from a specification, PC derives a specification from a program. A basic difference between the two is GP's use of specific knowledge representations and mental models that are essential for program synthesis. In this paper, I present a historical review of the Grand Challenges, results, and outlook for GP as they pertain to PC.
... The users could then decide whether they should change their search or browsing strategy. They may, for example, perform an additional operation, such as "Refine Price" (as shown in Fig. 4), which allows users to find products within a specific price range. It should be noted, however, that "Refine Price" could alleviate, but not completely solve, the problem because, for example, even the minimal price range in the Walmart website could easily contain more than 2000 results, and yet the system could not show more than 1000. ...
Article
Full-text available
Modern information technology paradigms, such as online services and off-the-shelf products, often involve a wide variety of users with different or even conflicting objectives. Every software output may satisfy some users, but may also fail to satisfy others. Furthermore, users often do not know the internal working mechanisms of the systems. This situation is quite different from bespoke software, where developers and users usually know each other. This paper proposes an approach to help users to better understand the software that they use, and thereby more easily achieve their objectives—even when they do not fully understand how the system is implemented. Our approach borrows the concept of metamorphic relations from the field of metamorphic testing (MT), using it in an innovative way that extends beyond MT. We also propose a "symmetry" metamorphic relation pattern and a "change direction" metamorphic relation input pattern that can be used to derive multiple concrete metamorphic relations. Empirical studies reveal previously unknown failures in some of the most popular applications in the world, and show how our approach can help users to better understand and better use the systems. The empirical results provide strong evidence of the simplicity, applicability, and effectiveness of our methodology.
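As an illustration of the kind of metamorphic relation an end user could check without knowing a system's internals, the sketch below (all names are hypothetical and not taken from the paper) applies a split-the-range relation to a product-search component: searching a whole price range must return the same products as the union of two searches over its halves.

```java
import java.util.*;

// Minimal sketch (not the paper's implementation): a user-side
// metamorphic-relation check against a hypothetical product-search component.
// The relation: searching a price range [lo, hi] must return the same set of
// products as the union of two searches that split the range at a mid point.
// A discrepancy signals a failure without needing a conventional test oracle.
public class PriceRangeMetamorphicCheck {

    // Hypothetical component interface; in practice this would wrap a
    // third-party or online search service.
    interface ProductSearch {
        Set<String> findByPrice(double lo, double hi);
    }

    static boolean satisfiesSplitRelation(ProductSearch search,
                                          double lo, double mid, double hi) {
        Set<String> whole = search.findByPrice(lo, hi);

        Set<String> combined = new HashSet<>(search.findByPrice(lo, mid));
        combined.addAll(search.findByPrice(mid, hi));

        // Metamorphic relation: the two result sets must be equal.
        return whole.equals(combined);
    }

    public static void main(String[] args) {
        // Toy in-memory implementation used only to exercise the check.
        Map<String, Double> catalog = Map.of(
                "usb-cable", 9.99, "keyboard", 49.00, "monitor", 199.95);
        ProductSearch toy = (lo, hi) -> {
            Set<String> hits = new HashSet<>();
            catalog.forEach((name, price) -> {
                if (price >= lo && price <= hi) hits.add(name);
            });
            return hits;
        };
        System.out.println("Relation holds: "
                + satisfiesSplitRelation(toy, 0.0, 50.0, 250.0));
    }
}
```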
... Existing program comprehension models have been investigated in specific contexts, such as component based [3] or object oriented [6] software development, but to the best of our knowledge ours is the first work considering the comprehension process followed by professional and public challenge hackers during understanding of protected code to be attacked. The work by Sillito et al. [35] investigates general traits of program comprehension that are common to our observations. ...
Article
Full-text available
When critical assets or functionalities are included in a piece of software accessible to the end users, code protections are used to hinder or delay the extraction or manipulation of such critical assets. The process and strategy followed by hackers to understand and tamper with protected software might differ from program understanding for benign purposes. Knowledge of the actual hacker behaviours while performing real attack tasks can inform better ways to protect the software and can provide more realistic assumptions to the developers, evaluators, and users of software protections. Within Aspire, a software protection research project funded by the EU under framework programme FP7, we have conducted three industrial case studies with the involvement of professional penetration testers and a public challenge consisting of eight attack tasks with open participation. We have applied a systematic qualitative analysis methodology to the hackers’ reports relative to the industrial case studies and the public challenge. The qualitative analysis resulted in 459 and 265 annotations added respectively to the industrial and to the public challenge reports. Based on these annotations we built a taxonomy consisting of 169 concepts. They address the hacker activities related to (i) understanding code; (ii) defining the attack strategy; (iii) selecting and customizing the tools; and (iv) defeating the protections. While there are many commonalities between professional hackers and practitioners, we could spot many fundamental differences. For instance, while industrial professional hackers aim at elaborating automated and reproducible deterministic attacks, practitioners prefer to minimize the effort and try many different manual tasks. This analysis allowed us to distill a number of new research directions and potential improvements for protection techniques. In particular, considering the critical role of analysis tools, protection techniques should explicitly attack them, by exploiting analysis problems and complexity aspects that available automated techniques are bad at addressing.
... Existing program comprehension models have been investigated in specific contexts, such as component based [16] or object oriented [17] software development, but to the best of our knowledge ours is the first work considering the comprehension process followed by professional hackers during understanding of protected code to be attacked. ...
Conference Paper
Full-text available
Code protections aim at blocking (or at least delaying) reverse engineering and tampering attacks to critical assets within programs. Knowing the way hackers understand protected code and perform attacks is important to achieve a stronger protection of the software assets, based on realistic assumptions about the hackers' behaviour. However, building such knowledge is difficult because hackers can hardly be involved in controlled experiments and empirical studies. The FP7 European project Aspire has given the authors of this paper the unique opportunity to have access to the professional penetration testers employed by the three industrial partners. In particular, we have been able to perform a qualitative analysis of three reports of professional penetration test performed on protected industrial code. Our qualitative analysis of the reports consists of open coding, carried out by 7 annotators and resulting in 459 annotations, followed by concept extraction and model inference. We identified the main activities: understanding, building attack, choosing and customizing tools, and working around or defeating protections. We built a model of how such activities take place. We used such models to identify a set of research directions for the creation of stronger code protections.
... Existing program comprehension models have been investigated in specific contexts, such as component based [16] or object oriented [17] software development, but to the best of our knowledge ours is the first work considering the comprehension process followed by professional hackers during understanding of protected code to be attacked. ...
Article
Full-text available
Code protections aim at blocking (or at least delaying) reverse engineering and tampering attacks to critical assets within programs. Knowing the way hackers understand protected code and perform attacks is important to achieve a stronger protection of the software assets, based on realistic assumptions about the hackers' behaviour. However, building such knowledge is difficult because hackers can hardly be involved in controlled experiments and empirical studies. The FP7 European project Aspire has given the authors of this paper the unique opportunity to have access to the professional penetration testers employed by the three industrial partners. In particular, we have been able to perform a qualitative analysis of three reports of professional penetration test performed on protected industrial code. Our qualitative analysis of the reports consists of open coding, carried out by 7 annotators and resulting in 459 annotations, followed by concept extraction and model inference. We identified the main activities: understanding, building attack, choosing and customizing tools, and working around or defeating protections. We built a model of how such activities take place. We used such models to identify a set of research directions for the creation of stronger code protections.
... Component-based Software Development (CSD) is a process in which software applications are developed by reusing readily available components [7]. Two important advantages of CSD are shorter development time and lower development cost [8]. ...
... This way of doing software engineering requires the inclusion of particular aspects that are fundamental to its success: selection of the architecture and of the components; adaptation and integration of the components within the chosen architecture; and modification of the components as the system requirements change [Griss & Pour, 2001: 37; Andrews et al., 2002: 359; Apperly, 2001]. Such modification may not be as simple as in traditional software, since compatibility among the different versions of components from different sources must be guaranteed; a valuable recommendation for achieving this is to split CBSE into two levels, the component level and the application level (Bertolino & Mirandola, 2003), in order to maintain an adequate independence that eases maintenance with components developed by third parties. ...
... Andrews et al. [5] have developed a comprehension model for component understanding. Their model identifies component understanding as a specialised version of program understanding. ...
Article
Developers need to evaluate reusable components before they decide to adopt them. When a developer evaluates a component they need to understand how that component can be used, and the behaviour that the component will exhibit. Existing evaluation techniques use formal analysis, sophisticated classification/search functionality, or rely on the presence of extensive component documentation or evaluation component versions.
... These components are known as COTS (Commercial-Off-The-Shelf). Reusing such components simplifies the construction of application software, since the traditional development and coding phases are replaced by component search and selection processes, which can be carried out in less time and with less effort [2]. COTS component selection is based on different criteria such as functionality, quality, price, confidence in the component manufacturer, cost of training internal personnel to use the component, the degree of adaptation needed to integrate the component with the requirements of our system, etc. Improving component-based system quality assessments implies defining metrics that allow us to quantify the multiple component characteristics used for component selection. ...
Article
Full-text available
In the last few years component-based software development (CBSD) has been imposed as a new paradigm in software systems construction. CBSD is based on the idea that software systems can be developed by selecting and integrating appropriate components, which have already been developed, and then assembling them to obtain the functionality desired in the final application. Multiple authors have proposed metrics to quantify several component characteristics in order to help in their selection. Nevertheless, rather than helping developers, such proposals often provoke more confusion because they do not systematically take into account different aspects of the components. In an effort to bring clarity to this area, we have developed the CQM model (Component Quality Model), whose first aim is to propose a classification of the metrics defined for software components. The model can also be used for the evaluation of a component or a component-based system. Finally, it is necessary to indicate that this is the first version of the model, and it will need to be refined through its use and discussion in different forums.
... Understanding component behaviour during the evolution of components and applications is important, as maintenance is often necessitated because of changes in components (versions, availability, etc.), changes of requirements and a host of other reasons [85]. Gergic [86] proposes an approach to manage versioning in the context of CBSD with focus on identification and description of versioned entities which occur during the component software assembly and configuration phase. ...
Article
Component Based Software Development (CBSD) is focused on assembling existing components to build a software system, with a potential benefit of delivering quality systems by using quality components. It departs from the conventional software development process in that it is integration centric as opposed to development centric. Using high-quality components does not by itself guarantee a component-based system of high quality; the result also depends on the quality of its components and on the framework and integration process used. Hence, techniques and methods for quality assurance and assessment of a component based system would be different from those of the traditional software engineering methodology. It is essential to quantify factors that contribute to the overall quality, for instance, the trade-off between cost and quality of a component, analytical techniques and formal methods, and quality attribute definitions and measurements. This paper presents a literature survey of component based system quality assurance and assessment; the areas surveyed include formalism, cost estimation, and assessment and measurement techniques for the following quality attributes: performance, reliability, maintainability and testability. The aim of this survey is to help provide a better understanding of CBSD in these aspects in order to facilitate the realisation of its potential benefits of delivering quality systems.
... In the past, research in program understanding has focused mostly on software maintenance and evolution but not reuse [9]. Past studies looked at the construction of mental models in program understanding [9], program understanding behavior during maintenance [10], understanding components [1], and defect repair strategies and behavior [4,8]. Most of these studies were based on source code. ...
Conference Paper
Full-text available
In order to make frameworks easier to use we need to better understand the difficulties that programmers have with them. The questions that programmers ask give clues to the quality of design, documentation, and programmer practice. We describe the method and results of a study on the Java Swing framework. We collected and analyzed a sample of 300 newsgroup questions asked about two Swing components (JButton and JTree), and classified the questions according to the design features of the components. This process revealed key insights that can improve a framework's design, its tutorials, and programmer practice.
... However, the approach presented in [Sjachyn and Beus-Dukic 2006] could be adopted to identify and classify the components available on the market. Besides, users need to determine aspects of components, such as functionality, limitations, and pre- and post-conditions, in order to decide which component is appropriate [Andrews et al. 2002]. Voas [Voas 1999] suggested an approach involving black-box testing to determine quality attributes, and interface perturbation analysis (IPA) and operational system testing to determine the impact of using a certain component in a system. ...
Article
Full-text available
In a component-based development process the selection of components is an activity that takes place over multiple lifecycle phases that span from requirement specifications through design to implementation and integration. In different phases, different assumptions are valid and different granularity of information is available, with the consequence that a different procedure should be used in the selection process, and automated tool support for an optimized component selection would be very helpful in each phase. In this paper we analyze the assumptions and propose the selection procedure in the requirements phase. The selection criterion is based on cost minimization of the whole system while assuring a certain degree of satisfaction of the system requirements that can be considered before designing the whole architecture. For the selection and optimization procedure we have adopted the DEER (DEcision support for componEnt-based softwaRe) framework, previously developed to be used in the selection process in the design phase. The output of DEER indicates the optimal combination of single COTS (Commercial-Off-The-Shelf) components and assemblies of COTS that satisfy the requirements while minimizing costs. In a case study we illustrate the selection and optimization procedure and an analysis of the model sensitivity to changes in the requirements.
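The selection problem described above can be illustrated with a deliberately simplified sketch (not the DEER framework itself; component names, costs, and requirements are invented): choose the cheapest combination of candidate COTS components whose combined coverage satisfies every stated requirement.

```java
import java.util.*;

// Illustrative sketch only (not DEER): pick the cheapest combination of
// candidate components that together cover every requirement. For the handful
// of candidates typical at the requirements phase, an exhaustive search over
// all subsets is sufficient.
public class CheapestCoveringSelection {

    record Candidate(String name, double cost, Set<String> covers) {}

    static List<Candidate> select(List<Candidate> candidates, Set<String> requirements) {
        List<Candidate> best = null;
        double bestCost = Double.MAX_VALUE;
        int n = candidates.size();
        for (int mask = 1; mask < (1 << n); mask++) {
            Set<String> covered = new HashSet<>();
            double cost = 0;
            List<Candidate> chosen = new ArrayList<>();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) {
                    Candidate c = candidates.get(i);
                    covered.addAll(c.covers());
                    cost += c.cost();
                    chosen.add(c);
                }
            }
            if (covered.containsAll(requirements) && cost < bestCost) {
                bestCost = cost;
                best = chosen;
            }
        }
        return best; // null if no combination satisfies the requirements
    }

    public static void main(String[] args) {
        List<Candidate> cots = List.of(
                new Candidate("PayLib", 300, Set.of("payments")),
                new Candidate("ShopSuite", 900, Set.of("payments", "catalog")),
                new Candidate("CatalogKit", 400, Set.of("catalog")));
        // Selects PayLib + CatalogKit (total cost 700) over ShopSuite (900).
        System.out.println(select(cots, Set.of("payments", "catalog")));
    }
}
```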
... Its aim is to propose a COTS selection mechanism that takes the "socio-technical" aspects of this selection into account [201]. This concern for representing socio-technical aspects is present in several COTS selection approaches [85], such as the Component Comprehension Model of S. Ghosh et al. [12,106], which proposes "understanding" components in order to select them better. ...
Article
Full-text available
The component paradigm proposes building a system from loosely coupled elements, most of which already exist. The level of reuse thus achieved reduces development time and cost. To cope with the growing complexity of applications, companies are increasingly obliged to turn to commercial "off-the-shelf" components supplied by third parties, whose very nature requires a deep rethinking of the software development cycle. It is no longer possible to specify a requirement or an architecture without asking whether there is a component on the market able to satisfy the former or be integrated into the latter. In this context, one activity gains importance: component selection. This activity is sensitive: a poor definition of requirements combined with a poor selection of components can lead to financial, and in some cases even human, catastrophes. It is also very costly, since it requires searching markets containing thousands of components described in potentially very different formats. Selection ultimately becomes so time-consuming that it threatens the very gains this type of approach originally provided. The only way to hope to preserve these gains is to have a selection mechanism that is as automated as possible. In this thesis I propose a mechanism that selects, from a vast library of components, the candidate that best meets a specific need, both functionally and non-functionally. The originality of this approach is that it allows an iterative selection based on increasingly detailed levels of requirement description. To this end, the mechanism integrates results from work in varied domains such as component retrieval, subtyping and quality metrics within a single concept: the sought component.
... Subsequently the database can be queried by SQL statements to find out about various system properties. Andrews et al. [1] identify Commercial-off-the-Shelf (COTS) component comprehension as a specialized activity within software comprehension. They build a combined comprehension model for COTS components based on a domain model, a situation model, and a program model and evaluate how different component based software development approaches fit with this model. ...
Conference Paper
Understanding architectural characteristics of software components that constitute distributed systems is crucial for maintaining and evolving them. One component framework heavily used for developing component-based software systems is Microsoft's COM+. In this paper we particularly concentrate on the analysis of COM+ components and introduce an iterative and interactive approach that combines component inspection techniques with source code analysis to obtain a complete abstract model of each COM+ component. The model describes important architectural characteristics such as transactions, security, and persistency, as well as creation and use dependencies between components, and maps these higher-level concepts down to their implementation in source files. Based on the model, engineers can browse the software system's COM+ components and navigate from the list of architectural characteristics to the corresponding source code statements. We also discuss the Island Hopper application with which our approach has been validated.
Article
At present most component hosts employ a Least Recently Used (LRU) or Not Recently Used (NRU) passivation strategy. These passivation strategies are heuristics and do not work well in most situations. A more efficient passivation strategy based on the component invocation pattern is proposed. First, a statistical analysis of component invocation processes is carried out; the resulting invocation pattern reflects the logical structure of the application and enables more efficient passivation. Finally, an experimental scheme and results are provided.
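For context, the LRU baseline that the proposed invoked-pattern strategy aims to improve on can be sketched as follows (a hypothetical container, not the paper's implementation): when the pool of active component instances exceeds a limit, the least recently used instance is passivated.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the LRU passivation baseline, using a hypothetical
// container: when more than maxActive component instances are active, the
// least recently used instance is passivated (e.g. its state is persisted).
// The invoked-pattern strategy replaces this heuristic with statistics
// gathered from the application's actual call sequences.
public class LruPassivationPool<K, C> {

    private final int maxActive;
    private final Map<K, C> active;

    public LruPassivationPool(int maxActive) {
        this.maxActive = maxActive;
        // accessOrder=true makes iteration order = least recently used first.
        this.active = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, C> eldest) {
                if (size() > LruPassivationPool.this.maxActive) {
                    passivate(eldest.getKey(), eldest.getValue());
                    return true; // evict from the active pool
                }
                return false;
            }
        };
    }

    public void activate(K id, C instance) { active.put(id, instance); }

    public C lookup(K id) { return active.get(id); } // marks id as recently used

    protected void passivate(K id, C instance) {
        // Placeholder: a real container would persist the instance's state here.
        System.out.println("passivating component instance " + id);
    }
}
```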
Conference Paper
New programming techniques make claims that software engineers often want to hear. Such is the case with aspect-oriented programming (AOP). This paper describes a quasi-controlled experiment which compares the evolution of two functionally equivalent systems, developed in two different paradigms. The aim of the study is to explore the claims that software developed with aspect-oriented languages is easier to maintain and reuse than that developed with object-oriented languages. We have found no evidence to support these claims.
Article
Software maintenance and evolution (SME) is an important but challenging topic area for university-level computer science education. Seminars can be used to provide students with versatile and up-to-date knowledge on scientifically relevant issues. We organized three systematic university-level seminars on SME. In these seminars 127 groups have each been assigned the task of analyzing one scientific SME article. The main results concern background factors relating to the students, groups and articles as these affect student success in the seminars. This paper presents a strict statistical analysis and a discussion of these factors. Fourteen hypotheses were set and tested regarding the relation of various background factors and a student's success in the seminar. The results indicate a clear relation between some of the factors and success. Most of the student- and group-related factors clearly affected student success, whereas most of the article-related factors did not. The study also revealed many important ancillary results. The results support organizing, studying, and improving feasible seminars in this area.
Chapter
Full-text available
Processes play a great part in the success of information systems engineering projects. There are a lot of process models and metamodels; however, the "one size fits all" motto has to be moderated: models have to be adapted to the specificities of the organizations or the projects. In order to help method engineers build adapted process models, we propose a method to build process metamodels and to instantiate them according to the organization's context. Our method consists of selecting the concepts needed from a conceptual graph gathering the current knowledge of metamodelling concepts for information systems engineering processes, and integrating them into a new process metamodel that will be instantiated for any project in an organization. This method is supported by a tool.
Conference Paper
Ensuring proper selection of COTS components is key to the success of component-based software development approaches. Although several approaches and criteria have been proposed for component selection, we lack techniques that can be used to systematically evaluate components against selection criteria for functionality, security, fault tolerance, and quality attributes. We propose a comprehensive approach for enabling the selection of COTS components by employing component understanding and fault injection testing techniques that aid in building an integrated comprehension model of the components. This model accumulates information regarding how each candidate component fared with respect to each criterion. This model can be used not only to aid in the final decision making process, but also serve as a guide during the component comprehension and evaluation stages.
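The integrated comprehension model described above can be pictured, very roughly, as a table recording how each candidate fared against each criterion. The hedged sketch below (criteria, component names, scores, and weights are all invented for illustration) shows one way such accumulated results could be aggregated for the final decision.

```java
import java.util.*;

// Illustrative sketch (all names hypothetical): an evaluation model that
// accumulates, for each candidate COTS component, the outcome of assessing it
// against each selection criterion (functionality, security, fault tolerance,
// ...), and aggregates the scores to support the final decision.
public class ComponentEvaluationModel {

    // component -> (criterion -> score in [0,1]), e.g. the fraction of
    // probe or fault-injection tests for that criterion the component passed.
    private final Map<String, Map<String, Double>> scores = new HashMap<>();

    public void record(String component, String criterion, double score) {
        scores.computeIfAbsent(component, c -> new HashMap<>())
              .put(criterion, score);
    }

    public double aggregate(String component, Map<String, Double> weights) {
        Map<String, Double> byCriterion = scores.getOrDefault(component, Map.of());
        return weights.entrySet().stream()
                .mapToDouble(w -> w.getValue() * byCriterion.getOrDefault(w.getKey(), 0.0))
                .sum();
    }

    public static void main(String[] args) {
        ComponentEvaluationModel model = new ComponentEvaluationModel();
        model.record("SpreadsheetA", "functionality", 0.9);
        model.record("SpreadsheetA", "fault-tolerance", 0.6);
        model.record("SpreadsheetB", "functionality", 0.7);
        model.record("SpreadsheetB", "fault-tolerance", 0.9);

        Map<String, Double> weights = Map.of("functionality", 0.7, "fault-tolerance", 0.3);
        System.out.printf("A=%.2f B=%.2f%n",
                model.aggregate("SpreadsheetA", weights),
                model.aggregate("SpreadsheetB", weights));
    }
}
```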
Conference Paper
The scope and purpose of application comprehension is much broader than that of program comprehension. Application comprehension can be viewed as a spectrum spanning from code-level understanding at one end (low level) to understanding the architecture of interorganizational systems at the other (high level). The nature and the depth of knowledge sought through application comprehension is directly related to the purpose at hand. In this paper, we propose a unified conceptual framework for application comprehension. The framework is influenced by Bloom's taxonomy. The proposed framework considers several aspects of application comprehension and draws upon our experience in developing large-scale, multi-tier distributed applications for brokerage and financial services. We discuss how the proposed conceptual framework can be implemented by leveraging the sophisticated tools that are available as open-source software. We conclude the paper by indicating how the proposed framework can be used to learn software engineering principles, tools, and practices in education and training contexts.
Conference Paper
A process is proposed to support comprehension of COTS components. In this process, software developers build a COTS comprehension model to support component selection. We integrate the process with a UML component based software development approach. We illustrate our approach using a hotel reservation system.
Conference Paper
The communities of generative programming (GP) and program comprehension (PC) look at similar problems: GP derives a program from a specification, PC derives a specification from a program. A basic difference between the two is GP's use of specific knowledge representations and mental models that are essential for program synthesis. In this paper, the author presents a historical review of the grand challenges, results, and outlook for GP as they pertain to PC.
Article
Full-text available
The growing complexity of large-scale systems raises concerns about their effectiveness and manageability. New approaches to system design, use, and management address these concerns through the aggregation and consolidation of system and application components into larger building blocks; systematic and standard ways of integrating such systems and communicating between them; sharing of distributed resources; and automated system management and operation control. As such, an automated service demand-supply control system can improve a large-scale grid infrastructure comprising a federation of distributed utility data centers.
Conference Paper
Full-text available
Comprehending complex, distributed, object-oriented software systems is a difficult task which must be approached in a formal, disciplined manner if it is to be solved at all. The authors have developed a formal, tool-supported approach using relational databases to model both the requirement specification and the system implementation of a very large commercial application system. The approach combines forward and reverse engineering to link the implementation to the specification, thereby providing a basis for traceability between system artifacts and requirements. The result is a partial comprehension adequate for system maintenance.
Conference Paper
Full-text available
A major portion of the software maintenance effort is spent on the reverse engineering activity of understanding existing software. If one can learn more about how programmers understand code successfully, one can build better tools to support the understanding process. This contributes to higher quality and improved efficiency of maintenance tasks. An integrated code comprehension model and experiences with it in an industrial setting are presented. Audio-taped, think-aloud reports were used to investigate how well the integrated code comprehension model works during industrial maintenance activities that range from code fixes to enhancements, code leverage, and reuse. The tapes were analyzed for information needs during maintenance activities, and tool capabilities were derived accordingly. The results are presented and discussed
Conference Paper
Full-text available
Hypotheses are major drivers of program comprehension. We report on a case study observing an experienced software engineer porting a large software system and the role of hypotheses in accomplishing the porting task. Observations confirm some existing theoretic models and experimental findings, but not all. While generalization based on a case study is of necessity limited, the results could be the basis for further experiments. They also point to information that would help novices to become experts faster.
Conference Paper
Full-text available
A major portion of the maintenance effort is spent understanding existing software. The authors present an integrated code comprehension model and experiences with it in an industrial setting. They use audio-taped, think-aloud reports to investigate how well this integrated code comprehension model works during industrial maintenance activities ranging from code fixes to enhancements, code leverage, and reuse. They analyze the tapes for information needs during maintenance activities and derive tool capabilities accordingly
Article
Full-text available
A component-based software engineering workshop at the 20th International Conference on Software Engineering discussed the current state of components. The authors synthesize these perspectives and ideas into a coherent discussion of how components are reshaping SE practices. A component-based system requires an infrastructure for communication and collaboration. Roger Sessions' sidebar on component middleware examines the Microsoft MTS and OMG Corba infrastructure technologies. David Chappell addresses the fundamental architectural issue of transactions in his sidebar, comparing how Microsoft's MTS and Sun's Enterprise JavaBeans handle transactions and component state.
Article
Full-text available
Code cognition models examine how programmers understand program code. The authors survey the current knowledge in this area by comparing six program comprehension models: the Letovsky (1986) model; the Shneiderman and Mayer (1979) model; the Brooks (1983) model; Soloway, Adelson and Ehrlich's (1988) top-down model; Pennington's (1987) bottom-up model; and the integrated metamodel of von Mayrhauser and Vans (1994). While these general models can foster a complete understanding of a piece of code, they may not always apply to specialized tasks that more efficiently employ strategies geared toward partial understanding. We identify open questions, particularly considering the maintenance and evolution of large-scale code. These questions relate to the scalability of existing experimental results with small programs, the validity and credibility of results based on experimental procedures, and the challenges of data availability
Article
Full-text available
We present results of observing professional maintenance engineers working with industrial code at actual maintenance tasks. Protocol analysis is used to explore how code understanding might differ for small versus large scale code. The experiment confirms that cognition processes work at all levels of abstraction simultaneously as programmers build a mental model of the code. Cognition processes emerged at three levels of aggregation representing lower and higher level strategies of understanding. They show differences in what triggers them and how they achieve their goals. Results are useful for defining core competencies which maintenance engineers need for their work and for documentation and development standards.
Article
Software maintenance is recognized as the most expensive phase of the software life cycle. The maintenance programmer is frequently presented with code with little or no supporting documentation, so that the understanding required to modify the program comes mainly from the code. This paper discusses some of the current approaches to theories of program comprehension and the tools for assisting the maintenance programmer with this problem.
Book
Component Software: Beyond Object-Oriented Programming explains the technical foundations of this evolving technology and its importance in the software market place. It provides in-depth discussion of both the technical and the business issues to be considered, then moves on to suggest approaches for implementing component-oriented software production and the organizational requirements for success. The author draws on his own experience to offer tried-and-tested solutions to common problems and novel approaches to potential pitfalls. Anyone responsible for developing software strategy, evaluating new technologies, buying or building software will find Clemens Szyperski's objective and market-aware perspective of this new area invaluable.
Article
This 800-page book takes the reader through the techniques, notations and processes that make up the Catalysis approach to object-oriented modelling. The book is by the developers of Catalysis, so it is authoritative. The book ranges widely, but does not avoid discussing important detail. It embodies a wealth of experience in object-oriented modelling. Everyone who is interested in software development should find something of value in this book. It should be high on the 'must read' list of those developing object-oriented methods (particularly methods that are component-based); those responsible for tailoring methods for in-house use; and those teaching, and researching into, object-oriented analysis and design methods.
Article
In this chapter we present our current view on the knowledge and processing strategies programmers employ in attempting to comprehend computer programs. We first present an experiment that supports our claims as to the composition of an expert programmer's knowledge base. Next, we propose processing strategies that may be at work in comprehending programs. As support for these latter mechanisms, we draw on our experience in building a computer program that attempts to understand computer programs written by novices. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Comprehension of computer programs involves detecting or inferring different kinds of relations between program parts. Different kinds of programming knowledge facilitate detection and representation of the different textual relations. The present research investigates the role of programming knowledge in program comprehension and the nature of mental representations of programs; specifically, whether procedural (control flow) or functional (goal hierarchy) relations dominate programmers' mental representations of programs. In the first study, 80 professional programmers were tested on comprehension and recognition of short computer program texts. The results suggest that procedural rather than functional units form the basis of expert programmers' mental representations, supporting work in other areas of text comprehension showing the importance of text structure knowledge in understanding. In a second study 40 professional programmers studied and modified programs of moderate length. Results support conclusions from the first study that programs are first understood in terms of their procedural episodes. However, results also suggest that a programmer's task goals may influence the relations that dominate mental representations later in comprehension.
Article
A sufficiency theory is presented of the process by which a computer programmer attempts to comprehend a program. The theory is intended to explain four sources of variation in behavior on this task: the kind of computation the program performs, the intrinsic properties of the program text, such as language and documentation, the reason for which the documentation is needed, and differences among the individuals performing the task. The starting point for the theory is an analysis of the structure of the knowledge required when a program is comprehended, which views the knowledge as being organized into distinct domains which bridge between the original problem and the final program. The program comprehension process is one of reconstructing knowledge about these domains and the relationship among them. This reconstruction process is theorized to be a top-down, hypothesis driven one in which an initially vague and general hypothesis is refined and elaborated based on information extracted from the program text and other documentation.
Article
This paper reports on an empirical study of the cognitive processes involved in program comprehension. Verbal protocols were gathered from professional programmers as they were engaged in a program-understanding task. Based on analysis of these protocols, several types of interesting cognitive events were identified. These include asking questions and conjecturing facts about the code. We describe these event types and use them to derive a computational model of the programmers' mental processes.
Article
Increasingly, modern-day software systems are being built by combining externally-developed software components with application-specific code. For such systems, existing program-analysis-based software engineering techniques may not directly apply, due to lack of information about components. To address this problem, the use of component metadata has been proposed. Component metadata are metadata and metamethods provided with components that retrieve or calculate information about those components.
Article
Interest in component-based software continues to grow with the recognition of its potential in managing the increasing complexity of software systems. However, the use of externally-provided components has serious drawbacks, in most cases due to the lack of information about the components, for a wide range of activities in the engineering of component-based applications. Consider the activity of regression testing, whose high cost has been, and continues to be, a problem. In the case of component-based applications, regression testing can be even more expensive. When a new version of one or more components is integrated into an application, the lack of information about such externally-developed components makes it difficult to effectively determine the test cases that should be rerun on the resulting application. In previous work, we proposed the use of metadata, which are additional data provided with a component, to support software engineering tasks. In this paper, we present two new metadata-based techniques that address the problem of regression test selection for component-based applications: a code-based approach and a specification-based approach. First, using an example, we illustrate the two techniques. Then, we present a case study that applies the code-based technique to a real component-based system. The results of the study indicate that, on average, 26% of the overall testing effort can be saved over seven releases of the component-based system studied, with a maximum savings of 99% of the testing effort for one version. This reduction demonstrates that metadata can produce benefits in regression testing by reducing the costs related to this activity.
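The code-based technique can be illustrated with a simplified sketch (not the authors' tool; all names are hypothetical): coverage metadata shipped with the component map each application test case to the component methods it exercises, and only tests that touch a changed method are selected for rerun.

```java
import java.util.*;

// Simplified sketch of the code-based idea (not the paper's tool): the
// component ships coverage metadata mapping each application test case to the
// component methods it exercises. When a new component version reports which
// methods changed, only tests covering a changed method are selected to rerun.
public class MetadataRegressionSelector {

    public static Set<String> selectTests(Map<String, Set<String>> coverageMetadata,
                                          Set<String> changedMethods) {
        Set<String> toRerun = new TreeSet<>();
        coverageMetadata.forEach((testCase, coveredMethods) -> {
            if (!Collections.disjoint(coveredMethods, changedMethods)) {
                toRerun.add(testCase);
            }
        });
        return toRerun;
    }

    public static void main(String[] args) {
        // Hypothetical metadata shipped with the component.
        Map<String, Set<String>> coverage = Map.of(
                "testBookRoom",   Set.of("Calendar.addEvent", "Calendar.format"),
                "testCancelRoom", Set.of("Calendar.removeEvent"),
                "testPrintBill",  Set.of("Calendar.format"));
        // Methods the vendor reports as modified in the new component release.
        Set<String> changed = Set.of("Calendar.format");

        System.out.println(selectTests(coverage, changed));
        // -> [testBookRoom, testPrintBill]
    }
}
```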
Conference Paper
Developers have to identify properties of COTS components to properly integrate them with a system under development, but COTS components are typically “black boxes” because their source code is not available. We present an approach that can be used in black-box understanding of COTS components. The major objective is to reduce the effort required to reveal component properties by partially automating interface probing. A developer provides a full or partial description of a component property, together with a search scope where assertions are used to describe component properties. Based on this information, a search engine automatically searches for component inputs on which the component property is revealed using a combination of existing automated test generation methods for black-box testing and for white-box testing. Our initial experience has shown that this approach may be a cost-effective way of revealing properties of components
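The interface-probing idea can be sketched roughly as follows (a toy illustration, not Korel's search engine): the developer expresses a suspected component property as an assertion over observable behaviour together with a search scope, and an automated search looks for an input on which the property is revealed.

```java
import java.util.OptionalInt;
import java.util.Random;
import java.util.function.IntPredicate;

// Minimal sketch of assertion-guided interface probing (not the actual tool):
// the developer states a property of a black-box component as an assertion
// over its observable behaviour, plus a search scope for the inputs; a simple
// random search then looks for an input on which the property is revealed.
public class InterfaceProbing {

    // Search the scope [lo, hi] for an input that makes the assertion true.
    static OptionalInt probe(IntPredicate assertion, int lo, int hi, int budget) {
        Random random = new Random(42);
        for (int i = 0; i < budget; i++) {
            int input = lo + random.nextInt(hi - lo + 1);
            if (assertion.test(input)) {
                return OptionalInt.of(input); // property revealed on this input
            }
        }
        return OptionalInt.empty(); // not revealed within the budget
    }

    public static void main(String[] args) {
        // Hypothetical property: "the component can return a negative value."
        IntPredicate returnsNegative = n -> blackBoxComponent(n) < 0;
        System.out.println("Revealing input: " + probe(returnsNegative, 0, 10_000, 1_000));
    }

    // Stand-in for an externally supplied component whose source is unavailable.
    static int blackBoxComponent(int n) {
        return n > 5_000 ? -1 : n;
    }
}
```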
Conference Paper
We report on a software understanding study during adaptation of large-scale software. Participants were professional software maintenance engineers. The paper reports on the general understanding process, the types of actions programmers preferred during the adaptation task, and the level of abstraction at which they were working. The results of the observation are also interpreted in terms of the information needs of these software engineers.
Conference Paper
We report on a software understanding field study during corrective maintenance of large-scale software by professional software maintenance engineers. We explain the general understanding process, the information needs of these software engineers during their tasks, and the tool capabilities that would help them to be more productive
Conference Paper
This paper reports on an empirical strategy of software understanding during corrective maintenance of large-scale software with professional maintenance programmers. Hypotheses are key drivers in program understanding and influence the direction program understanding can take. This paper reports on the types of hypotheses programmers make, how they resolve them, and the strategies and comprehension processes they tend to use
Conference Paper
The paper describes the research carried out into the process of program comprehension during software maintenance within the EUREKA project REM (Reverse Engineering and Maintenance). Tools to aid maintenance programmers to achieve and document an overall interpretation of the system being maintained, as well as a deep understanding of the fine details of the source code, are presented. The cognition model assumed exploits both the top down and the bottom up approaches: program comprehension is intended as an iterative process of guessing, constructing hypotheses and verifying them. This process is supported by providing maintenance programmers with a flexible system for querying source code and testing hypotheses against the evidence in the code. Several facilities generate new documents at the design and specification level, thus allowing maintenance programmers to record the knowledge gained for future use.
Conference Paper
We present results of observing professional maintenance engineers working with industrial code at actual maintenance tasks. Protocol analysis is used to explore how code understanding might differ for small versus large scale code. The experiment confirms that cognition processes work at all levels of abstraction simultaneously as programmers build a mental model of the code. Cognition processes emerged at three levels of aggregation representing lower and higher level strategies of understanding. They show differences in what triggers them and how they achieve their goals. Results are useful for defining core competencies which maintenance engineers need for their work and for documentation and development standards
Article
Maintenance frequently consumes more resources than new software development. A major portion of the maintenance effort is spent trying to understand existing software. If more can be learnt about how programmers understand code successfully, better tools to support this understanding process can be built. This contributes to higher quality and improved efficiency of maintenance tasks. Audio-taped 'think aloud' reports were used to investigate an integrated code comprehension model during a variety of industrial maintenance activities. The tapes were analysed for information needs during maintenance activities and used to derive useful tool capabilities.
Article
It does not make sense to grant carte blanche high-assurance certificates to products that may be used across multiple platforms and in multiple environments. We should bind software certification to a product's known environment and operational profile. The author proposes three techniques for verifying high assurance: desirable-behavior testing, abnormal testing, and fault injection. Each uses the product's operational profile to detect software-related anomalies that might allow a catastrophic event.
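As a rough illustration of the fault-injection idea (not Voas's actual technique or tool; the component, perturbation, and checks are invented), one can wrap a component behind its interface, occasionally perturb the value it returns, and observe whether the surrounding system still behaves acceptably under its operational profile.

```java
import java.util.Random;
import java.util.function.IntUnaryOperator;

// Illustrative sketch of interface perturbation (not the author's tool): wrap
// the component behind its interface, occasionally corrupt the value it
// returns, and observe whether the surrounding system still behaves
// acceptably. A system that tolerates the perturbations is less dependent on
// the component behaving perfectly.
public class InterfacePerturbation {

    static IntUnaryOperator perturbed(IntUnaryOperator component,
                                      double perturbationRate, long seed) {
        Random random = new Random(seed);
        return input -> {
            int result = component.applyAsInt(input);
            if (random.nextDouble() < perturbationRate) {
                // Inject an anomaly at the interface: flip the sign of the result.
                return -result;
            }
            return result;
        };
    }

    public static void main(String[] args) {
        IntUnaryOperator discount = price -> price * 90 / 100;   // hypothetical component
        IntUnaryOperator faulty = perturbed(discount, 0.2, 7L);

        for (int price = 100; price <= 500; price += 100) {
            int charged = faulty.applyAsInt(price);
            // The application-level check that should catch anomalous outputs.
            boolean accepted = charged >= 0 && charged <= price;
            System.out.printf("price=%d charged=%d accepted=%b%n", price, charged, accepted);
        }
    }
}
```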
Article
We present results of observing professional maintenance engineers working with industrial code at actual maintenance tasks. Protocol analysis is used to explore how code understanding might differ for small versus large scale code. The experiment confirms that cognition processes work at all levels of abstraction simultaneously as programmers build a mental model of the code. Analysis focused on dynamic properties and processes of code understanding. Cognition processes emerged at three levels of aggregation representing lower and higher level strategies of understanding. They show differences in what triggers them and how they achieve their goals. Results are useful for defining information which maintenance engineers need for their work and for documentation and development standards
Comprehension Strategies in Programming
  • Nancy Pennington
Nancy Pennington, 'Comprehension Strategies in Programming', Empirical Studies of Programmers: Second Workshop, Eds. Olson, Sheppard, and Soloway, Ablex Publ., 1986, pp. 100-112.

Cognitive Processes in Program Comprehension
  • Stanley Letovsky
Stanley Letovsky, 'Cognitive Processes in Program Comprehension', Empirical Studies of Programmers, Eds. Soloway and Iyengar, Ablex Publ., 1986, pp. 58-79.

Comprehending a Complex, Distributed Object-Oriented Software System: A Report From the Field
  • H. Sneed
  • T. Dombovari
H. Sneed and T. Dombovari, 'Comprehending a Complex, Distributed Object-Oriented Software System: A Report From the Field', International Workshop on Program Understanding, IEEE Computer Society Press, pp. 218-225, Pittsburgh, May 1999.