Article

Some observations on the application of software metrics to UML models


Abstract

In this position paper we discuss some of the existing work on applying metrics to UML models, present some of our own work in this area, and specify some topics for future research that we regard as important.


... With the development of software shifting to models rather than source code, particularly within the domain of MDE, research is required to investigate how software metrics can be measured from software models, prior to the implementation of the system. Being able to measure the metrics accurately from both models and source code is important for several reasons [MP07]: for instance, comparing the measures obtained from the model with those obtained from the code may help in identifying parts of the system that have been incorrectly forward or reverse engineered. ...
... As the thesis is concerned with developing a reliable approach to software measurement using MDE principles, the final part of the chapter reviews software testing research in the context of MDE. Parts of this chapter have been published in McQuillan and Power [MP06c,MP07]. ...
... For software metrics to be considered successful their empirical validation is of crucial importance and several attempts have been made to validate various different software metrics [CDC98,BWDP00]. However, one of the problems with software metrics is that they can be easy to define, but difficult to justify or correlate with external attributes [MP07]. One of the main reasons is the way software measures are defined and this has been noted by several authors in the literature [BDW98,BBA02,KHL01]. ...
... These three metrics are the CBO, RFC and LCOM. Moreover, their methods either lack accuracy (in terms of proximity to the measures obtained directly from the source code) or need very detailed design information [4,15,37,38,53]. More details concerning this topic can be found in the Related Work chapter. ...
... Unfortunately, they do not provide any further information on these various stages. McQuillan et al. [37] indicate that RFC can be derived by inspecting various behavioral diagrams, but they do not say how; as for the CBO metric, they indicate that it can be approximated from UML class diagrams, but that behavioral diagrams are needed to obtain more precise measures. Later, in [38], the same authors present a formal definition of the CK metrics using UML 2.0. Their work is an improved version of Baroni's earlier work, which used UML 1.3. ...
Thesis
Full-text available
Design-complexity metrics, while measured from the code, have been shown to be good predictors of fault-prone object-oriented programs, and are related to several other managerial factors such as productivity, rework effort for reusing classes, design effort, and maintenance effort. Some of the most often used metrics are the Chidamber and Kemerer (CK) metrics. Because earlier assessment of such managerial factors, prior to code implementation, is desirable, our research mainly concerns two topics. The first concerns how we can approximate the code CK metrics using UML diagrams; the second concerns the use of such UML approximations to predict faulty object-oriented classes. First, we define our UML metrics: approximations of the Weighted Methods per Class (WMC), Response For a Class (RFC) and Coupling Between Objects (CBO) CK code metrics using UML communication diagrams. Second, we evaluate our UML metrics as approximations of their corresponding code metrics. Third, in order to improve the approximations of our UML metrics, we study the application of two different data normalization techniques and select the best one to be used in our experiments. Finally, because code CK metrics have repeatedly shown their ability to predict faulty code in several previous works, we evaluate our UML CK metrics as predictors of faulty code. To do so, we first construct three prediction models using logistic regression with the source code of a package of an open-source software project (Mylyn from Eclipse) and test them with several of its other packages. We then apply these models to three different small-size software projects, using, on the one hand, their UML metrics and, on the other hand, their corresponding code metrics for comparison. The results of our empirical study lead us to conclude that the proposed UML RFC and UML CBO metrics can predict fault-proneness of code almost as accurately as their respective code metrics do.
The elimination of outliers and the normalization procedure used were of great utility, not only for enabling our UML metrics to predict fault-proneness of code using a code-based prediction model, but also for improving the prediction results of our models across different software packages and projects. As for the WMC metric, both the proposed UML metric and its respective code metric showed poor fault-proneness prediction ability. Our plans for future work mainly concern the exploration of other areas of research in which our UML metrics can be applied; as for the topic of fault prediction, the following subjects for further study have been considered: data normalization and other pre-processing techniques, the study of other metrics to be included in our prediction models (such metrics should be easily obtainable before the implementation of the system and different from design-complexity metrics), and methodologies other than logistic regression for predicting fault-proneness of code.
... Is it possible to use metamodels in order to create and improve metrics? McQuillan and Power [7] write that definitions of metrics should be reusable. Researchers have used metamodels and ontologies in order to present object-oriented design metrics [8] and database design metrics [9], respectively, as precisely as possible. ...
... A small extent of M is a sign of possible quality problems of M because M may be incomplete. For example, McQuillan and Power [7] note that existing metrics for UML models deal only with a small part of all the possible UML diagram types. The existing metrics evaluation methods [3,13,14] do not take into account whether all the metrics, which belong to a set of related metrics, together help us to evaluate all (or at least most of the) parts of a software entity. ...
Conference Paper
Full-text available
Metric values can be used to compare and evaluate software entities, find defects, and predict quality. For some programming languages many more metrics are known than for others. It would be helpful if one could use existing metrics to find candidates for new metrics. A solution is based on the observation that it is possible to specify the abstract syntax of a language by using a metamodel. In this paper a metrics development method is proposed that uses metamodel-based translation. In addition, a metamodel of a language helps us to find the extent of a set of metrics in terms of that language. That allows us to evaluate the extent of the core of a language and to detect possible quality problems in a set of metrics. The paper contains examples of some candidate metrics for object-relational database design, which have been derived from existing metrics.
... Most of the other metrics are built upon the original CK metrics suite. It is easy to lift CK metrics from the code level to the model level [17]. The CK suite can be linked to economic variables (productivity, rework effort, and design effort) of interest to practicing managers [18]. ...
... Some researchers have published various metric sets for quality measurement. Among these metric sets are the Chidamber & Kemerer (CK) metric set [6], the Brito e Abreu MOOD metric set [7], and the Bansiya and Davis QMOOD metric set [8] [9]. These metric sets have been applied to and validated on traditional software many times. ...
Conference Paper
Full-text available
Since it is difficult to measure the (external) quality attributes of a software product directly, an evaluation is made by using software metrics that represent the value of related internal quality attributes. The objective of this work is to answer a research question which has not yet been studied much in the literature: whether the quality attributes of an application are similar in its desktop and mobile versions. Four open-source applications having both desktop and mobile versions have been studied: AdBlock, KeePass, Telegram and Zulip. First, the source code of these applications was evaluated using the Chidamber & Kemerer (CK) object-oriented metric set/suite. The DIT, PLOC, WMC, CBO, NOC and RFC metrics from the CK metric set were collected for both versions using the “Understand” static code analysis tool. Then the measurement results for both versions were reviewed in a comparative analysis to answer our research question.
... Most of the metrics are introduced and used on empirical grounds, and are not formally validated against the representational theory of measurement (Fenton & Pfleeger, 1997). Also, since use cases model the usage of a software system, metrics oriented towards absolute or relative counting are limited in scope (McQuillan & Power, 2006). Calculations using metrics and subsequent data analysis can become tedious and error-prone if carried out manually; however, support for metrics in modeling tools is at present sketchy. ...
Article
Full-text available
As software systems become ever more interactive, there is a need to model the services they provide to users, and use cases are one abstract way of doing that. As use cases models become pervasive, the question of their communicability to stakeholders arises. In this chapter, we propose a semiotic framework for understanding and systematically addressing the quality of use case models. The quality concerns at each semiotic level are discussed and process- and product-oriented means to address them in a feasible manner are presented. The scope and limitations of the framework, including that of the means, are given. The need for more emphasis on prevention over cure in improving the quality of use case models is emphasized. The ideas explored are illustrated by examples.
... Furthermore, there exists no approach regarding the usage of two articulated formalisms. Nevertheless, McQuillan et al. [37] discuss the challenges in the definition and implementation of metrics across different viewpoints throughout different abstraction levels of a software system. This was our case when creating metrics for the different views of a metamodel, namely the object-oriented structure and the logic-based well-formedness rules. ...
Article
Full-text available
The definition of a metamodel that precisely captures domain knowledge for effective know-how capitalization is a challenging task. A major obstacle for domain experts who want to build a metamodel is that they must master two radically different languages: an object-oriented, MOF-compliant modeling language to capture the domain structure, and first-order logic (the Object Constraint Language) for the definition of well-formedness rules. However, there are no guidelines to assist the conjunct usage of both paradigms, and few tools support it. Consequently, we observe that most metamodels have only an object-oriented domain structure, leading to inaccurate metamodels. In this paper, we perform the first empirical study analyzing the current state of practice in metamodels that actually use logical expressions to constrain the structure. We analyze 33 metamodels, including 995 rules, coming from industry, academia and the Object Management Group, to understand how metamodelers articulate both languages. We implement a set of metrics in the OCLMetrics tool to evaluate the complexity of both parts, as well as the coupling between them. We observe that all metamodels tend to have a small core subset of concepts which are constrained by most of the rules; in general, the rules are loosely coupled to the structure; and we identify the set of OCL constructs actually used in rules.
... The metric suite proposed by Chidamber and Kemerer (C&K) is one of the best-known suites of OO metrics. The six metrics proposed by CK are Weighted Methods per Class (WMC), Depth of Inheritance Tree (DIT), Response For a Class (RFC), Number Of Children (NOC), Lack of Cohesion in Methods (LCOM) and Coupling Between Objects (CBO) [4] [5]. Parvinder Singh Sandhu and Hardeep Singh [6] have proposed an evaluation of the CK suite of metrics and suggest refinements and extensions to these metrics so that they reflect accurate and precise results for OO-based systems. ...
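The six CK metrics listed in the snippet above can be illustrated with a minimal sketch. The class-model encoding, the toy classes, and the unit method weights for WMC are illustrative assumptions, not the formulation of any of the cited works:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical, minimal class-model representation (for illustration only).
@dataclass
class UmlClass:
    name: str
    methods: list = field(default_factory=list)   # method names
    calls: dict = field(default_factory=dict)     # method -> set of remote methods called
    parent: Optional["UmlClass"] = None           # single inheritance
    coupled: set = field(default_factory=set)     # names of classes this class uses

def wmc(c):                     # Weighted Methods per Class (unit weights)
    return len(c.methods)

def dit(c):                     # Depth of Inheritance Tree
    return 0 if c.parent is None else 1 + dit(c.parent)

def noc(c, model):              # Number Of Children
    return sum(1 for other in model if other.parent is c)

def cbo(c):                     # Coupling Between Objects
    return len(c.coupled)

def rfc(c):                     # Response For a Class: own methods + remote methods they call
    remote = set().union(*c.calls.values()) if c.calls else set()
    return len(set(c.methods)) + len(remote - set(c.methods))

# Toy model: Circle inherits from Shape, uses Canvas, and calls one remote method.
base = UmlClass("Shape", methods=["area", "draw"])
circle = UmlClass("Circle", methods=["area", "radius"], parent=base,
                  coupled={"Canvas"}, calls={"area": {"Math.pi"}})
model = [base, circle]

print(wmc(circle), dit(circle), noc(base, model), cbo(circle), rfc(circle))  # 2 1 1 1 3
```

LCOM is omitted from the sketch because it needs attribute-usage information per method, which this toy encoding does not carry.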
Article
Analyzing object-oriented systems in order to evaluate their quality gains importance as the paradigm continues to increase in popularity. Consequently, several object-oriented metrics have been proposed to evaluate different aspects of these systems, such as class coupling. This paper presents a new cognitive complexity metric, namely cognitive weighted coupling between objects (CWCBO), for measuring coupling in object-oriented systems. In this metric, five types of coupling that may exist between classes are considered in computing CWCBO: control coupling, global data coupling, internal data coupling, data coupling and lexical content coupling.
... Since in our work we focus on syntactic properties of merge-refactorings, the quality function is built from a set of metrics - syntactically measurable indicators representing specific refactoring objectives (see Fig. 2). Typically, such metrics can assess the size of the resulting model, determine the degree of object coupling, cohesion in methods, the weighted number of methods per class and more [17]. The metrics can be reused across different organizations and domains. ...
Conference Paper
In this paper, we consider the problem of refactoring related software products specified in UML into annotative product line representations. Our approach relies on identifying commonalities and variabilities in existing products and further merging those into product line representations which reduce duplications and facilitate reuse. Varying merge strategies can lead to producing several semantically correct, yet syntactically different refactoring results. Depending on the goal of the refactoring, one result can be preferred to another. We thus propose to capture the goal using a syntactic quality function and use that function to guide the merge strategy. We define and implement a quality-based merge-refactoring framework for UML models containing class and statechart diagrams and report on our experience applying it on three case-studies.
... The work presented in this paper takes place in the overall context of developing a framework for calculating metrics from various kinds of models. Our approach is based on designing a single metamodel, called the measurement metamodel that describes the quantifiable elements used in software metrics [17,12]. We are in the process of developing a set of model transformations from other artifacts, such as UML class diagrams and Java programs, into this measurement metamodel. ...
Article
Model transformations are core to MDE, and one of the key aspects for all model transformations is that they are validated. In this paper we develop an approach to testing model transformations based on white-box coverage measures of the transformations. To demonstrate the use of this approach we apply it to some examples from the ATL metamodel zoo.
... Object-oriented design complexity metrics are used to predict critical information about the reliability and maintainability [5] of software systems and therefore help to evaluate and improve the quality of the design. Today, the relevant literature provides a variety of object-oriented metrics [6][7][8][9][10][11][12][13] to compute the complexity of software. Further, selecting a particular metric is again a problem, as every metric has its own advantages and disadvantages. ...
Article
Full-text available
Software complexity metrics are used to predict critical information about the reliability and maintainability of software systems. Object-oriented software development requires a different approach to software complexity metrics. In this paper, we propose a metric to compute the structural and cognitive complexity of a class by associating a weight with the class, called Weighted Class Complexity (WCC). In contrast to other metrics used for object-oriented systems, the proposed metric calculates the complexity of a class due to its methods and attributes in terms of cognitive weights. The proposed metric is demonstrated with OO examples. Theoretical and practical evaluations based on information theory have shown that the proposed metric is on a ratio scale and satisfies most of the parameters required by measurement theory.
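A WCC-style computation can be sketched as follows. The cognitive weight table (sequence=1, branch=2, loop=3) follows the commonly used Wang-style weights, and the class encoding is an illustrative assumption, not the paper's exact formulation:

```python
# Illustrative weights for basic control structures (assumed, Wang-style table).
COGNITIVE_WEIGHT = {"sequence": 1, "branch": 2, "loop": 3}

def method_weight(structures):
    """Cognitive weight of one method, summed over its control structures."""
    return sum(COGNITIVE_WEIGHT[s] for s in structures)

def wcc(attributes, methods):
    """WCC-style total: attribute count plus cognitive weight of all methods."""
    return attributes + sum(method_weight(m) for m in methods)

# A class with 2 attributes and two methods:
#   m1 = a sequence followed by a branch; m2 = a loop containing a sequence.
print(wcc(2, [["sequence", "branch"], ["loop", "sequence"]]))  # 9
```

The point of the weighting is that a class whose methods are rich in branches and loops scores higher than one with the same number of plain sequential methods, which a bare method count (as in WMC with unit weights) cannot distinguish.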
Chapter
Brazilian organizations must comply with the Brazilian General Data Protection Law (LGPD), and this must be done in harmony with legacy systems as well as with the new systems developed and used by organizations. In this article we present an overview of the LGPD implementation process at public and private organizations in Brazil. We conducted a literature review and a survey of Information and Communication Technology (ICT) professionals to investigate and understand how organizations are adapting to the LGPD. The results show that more than 46% of the organizations have a Data Protection Officer (DPO) and only 54% of data subjects have free and facilitated access to information on how long and in what form their data is being processed. However, 59% of the participants stated that the sharing of personal data stored by the organization is carried out only with partners of the organization, in accordance with the LGPD and when strictly necessary, and 51% stated that the organization logs all accesses to personal data. In addition, 96.7% of organizations have already received some sanction or notification from the National Data Protection Agency (ANPD). According to our findings, we conclude that Brazilian organizations are not yet in full compliance with the LGPD.
Article
Full-text available
Any traditional engineering field has metrics to rigorously assess the quality of its products. Engineers know that the output must satisfy the requirements, must comply with production and market rules, and must be competitive. Professionals in the newer field of software engineering started a few years ago to define metrics to appraise their products: individual programs and software systems. This concern motivates the need to assess not only the outcome but also the process and tools employed in its development. In this context, assessing the quality of programming languages is a legitimate objective; in a similar way, it makes sense to be concerned with models and modeling approaches, as more and more people start the software development process with a modeling phase. In this paper we introduce and motivate the assessment of model quality in the software development cycle. After a general discussion of this topic, we focus attention on the most popular modeling language -- the UML -- and present metrics for it. Through a case study, we present and explore two tools. To conclude, we identify what is still lacking on the tools side.
Conference Paper
In model driven development (MDD), models are transformed automatically into other models, leading to transformation chains. The goal of MDD is to set up efficient transformation chains, i.e. adding semantics first and platform detail later: when the platform is changed, only the later, platform-specific models have to be replaced. This paper constructs metrics for measuring and comparing the models obtained through transformations in MDD processes, in order to help set up more efficient transformation chains.
Article
This project describes the construction of an application charged with performing the analysis of a UML model. It is set in the framework of a model management application with a centralized repository, in the area of Advanced Software Engineering Techniques of the Computer Engineering degree at the UOC.
Article
Full-text available
Based on the 2006 edition of the Model Size Metrics workshop, we believe that counts are undervalued as useful model metrics. In this position paper, we provide arguments from the literature for considering counts as important metrics for model measurement. We then state associated issues and sketch a model-driven framework to raise the abstraction level of the implementation of model metrics, starting with count metrics.
Article
A methodology for risk assessment of product line architectures
Article
Full-text available
Despite the extensive and solid research literature on Object-Oriented Design Metrics (OOD metrics), a recent survey conducted to assess the exploitation of metrics collection and analysis in the design phase within the software industry in Sweden (1) indicated that only 21% of the survey respondents collect metrics in the design phase, and 55% of the respondents to the same survey said that they consider metrics collection a difficult process. A major reason for the difficulty of collecting design metrics is the lack of a common syntax or a common language to express OOD metrics. This lack has resulted in a shortage of tools that automate the collection of design metrics. Researchers who propose OOD metrics express them in plain English or as mathematical formulas. Plain English allows different interpretations of the same metric; mathematical formulas should be based on a mathematical model, which does not exist for object-oriented designs. In this paper, we propose expressing metrics as XQuery expressions that target XMI documents. XMI documents offer a standard way of representing object-oriented designs, specifically UML diagrams. We also present Design-Metrics Crawler, a software tool that applies our proposal.
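The query-over-XMI idea can be approximated in a few lines. Below is a sketch in Python (standing in for XQuery) over a simplified, hypothetical XMI fragment; real XMI vocabularies, namespaces, and tag names vary across tools and UML/XMI versions:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical XMI-like fragment (real XMI uses namespaced tags
# such as packagedElement/ownedOperation; this is illustration only).
xmi = """
<XMI>
  <Model>
    <Class name="Order">
      <Operation name="total"/>
      <Operation name="addItem"/>
    </Class>
    <Class name="Item">
      <Operation name="price"/>
    </Class>
  </Model>
</XMI>
"""

root = ET.fromstring(xmi)
# A WMC-like count (unit weights): operations per class, queried from the XMI tree.
wmc = {c.get("name"): len(c.findall("Operation")) for c in root.iter("Class")}
print(wmc)  # {'Order': 2, 'Item': 1}
```

The appeal of the approach is exactly this: once the design is serialized as XMI, a metric becomes a declarative query over the document rather than code tied to one CASE tool's API.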
Conference Paper
Full-text available
Design metrics are useful for several purposes, including the improvement of software quality, the identification of fault-prone classes, the prediction of maintenance efforts, the estimation of rework efforts, etc. However, many of the existing metrics suffer from ill definition, which leads to different interpretations and to a subsequent lack of use due to their informal definitions. So, in spite of research studies, design metrics have not been widely utilized in the software industry. One of the major problems that limits their use is the absence of available tools to measure the metrics, which in turn can be a consequence of the metrics' imprecise specification. This paper presents an approach used to formalize metric suites in a precise way, solving the ambiguity problems that can reduce their use. The MOOSE metrics - Metrics for Object-Oriented Software Engineering - serve to exemplify the simplicity and limitations of our approach.
Article
Full-text available
This paper describes a technique for formalizing metrics for COTS-based architectures. This technique is built upon the UML 2.0 metamodel and uses OCL as a metrics definition language. As a proof of concept, an example based upon a set of reusability metrics for fine-grained JavaBeans components is presented.
Article
Full-text available
Maintainability is an increasingly relevant quality aspect in the development of object-oriented software systems (OOSS). It is generally accepted that OOSS maintainability is highly dependent on the decisions made early in the development life cycle. Conceptual modelling is an important task in this early development, so the maintainability of conceptual models has a great influence on the maintainability of the OOSS which is finally implemented. For assessing the maintainability of conceptual models, it is useful to have quantitative and objective measurement instruments. Conceptual modelling focuses on either static or dynamic aspects of the OOSS. Using the Unified Modelling Language (UML), static aspects at the conceptual level are mainly represented in structural diagrams such as class diagrams, whilst dynamic aspects are represented in behavioural diagrams such as statechart diagrams, activity diagrams, sequence diagrams and collaboration diagrams. Several works exist on metrics for structural diagrams such as class diagrams; however, behavioural diagrams have been little studied. This fact led us to define measures for UML statechart diagrams. The main goal of this paper is to show how we defined those measures in a methodological way, in order to guarantee their validity. We used the DISTANCE framework, based on measurement theory, to define and theoretically validate the measures. In order to gather empirical evidence that the proposed measures could be early maintainability indicators for UML statechart diagrams, we carried out a controlled experiment. The aim of the experiment was to investigate the relationship between the complexity of UML statechart diagrams and their understandability (one maintainability subcharacteristic).
Article
Full-text available
This paper proposes some new software metrics that can be applied to UML modelling elements like classes and messages. These metrics can be used to predict various characteristics at the earlier stages of the software life cycle. A CASE tool has been developed on top of Rational Rose using its BasicScript language, and we provide some examples using it.
Conference Paper
Full-text available
In this paper we present an infrastructure that supports interoperability among various reverse engineering tools and applications. We include an application programmer's interface that permits extraction of information about declarations, including classes, functions and variables, as well as information about scopes, types and control statements in C++ applications. We also present a hierarchy of canonical schemas that capture minimal functionality for middle-level graph structures. This hierarchy facilitates an unbiased comparison of results for different tools that implement the same or a similar schema. We have a repository, hosted by SourceForge.net, where we have placed the artifacts of our infrastructure.
Article
Full-text available
Currently, more and more research results on measuring class diagrams have appeared in the literature. In order to study these metrics systematically and in depth, this paper analyzes and compares some typical metrics for UML class diagrams from different viewpoints: different types of relationships, different types of metric values, complexity, and theoretical and empirical validation. Finally, the authors discuss their advantages and disadvantages as well as the existing problems and prospects.
Article
Full-text available
Building software models before implementing them has become widely accepted in the software industry. Object models, graphically represented by class diagrams, lay the foundation for all later design work, so their quality can have a significant impact on the quality of the software which is ultimately implemented, and an even greater impact if we take into account the size and complexity of current software systems. It is widely recognised that the production of better software requires the improvement of early development phases and the artifacts they produce. In this paper, we introduce and analyse a set of existing object-oriented metrics that can be applied to assessing class diagram complexity in the initial phases of the object-oriented development life cycle. We also define our own proposal for new ones. KEY WORDS: object-oriented metrics, object-oriented software quality, class diagram complexity
Conference Paper
Full-text available
Benchmarks have been used in computer science to compare the performance of computer systems, information retrieval algorithms, databases, and many other technologies. The creation and widespread use of a benchmark within a research area is frequently accompanied by rapid technical progress and community building. These observations have led us to formulate a theory of benchmarking within scientific disciplines. Based on this theory, we challenge software engineering research to become more scientific and cohesive by working as a community to define benchmarks. In support of this challenge, we present a case study of the reverse engineering community, where we have successfully used benchmarks to advance the state of research.
Conference Paper
Full-text available
UML is the emerging standard for expressing OOA/OOD models. New metrics for object-oriented analysis models are introduced, and existing ones are adapted to the entities and concepts of UML. In particular, these metrics concern UML use case diagrams and class diagrams used during the OOA phase. The proposed metrics are intended to allow an early estimate of development effort, implementation time and cost of the system under development, and to measure its object-orientedness and quality from the beginning of the analysis phase. The proposed metric suite is described in detail, and its relations to metrics proposed in the literature are highlighted. Some measurements on three software projects are given.
Article
Since there is no standard formalism for defining software metrics, many of the measures that exist have some ambiguity in their definitions which hinders their comparison and implementation. We address this problem by presenting an approach for defining software metrics. This approach is based on expressing the measures as Object Constraint Language queries over a language metamodel. To illustrate the approach, we specify how the Chidamber and Kemerer metrics suite can be measured from Unified Modelling Language class diagrams by presenting formal definitions for these metrics using the Unified Modelling Language 2.0 metamodel. Keywords: OO metrics, class diagram metrics, metamodels, UML, OCL.
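The idea of expressing Chidamber and Kemerer metrics as queries over a metamodel can be illustrated with plain Python in place of OCL. The sketch below uses a toy single-inheritance class model as an assumed stand-in for the UML 2.0 metamodel; the metric definitions (DIT, NOC) mirror the standard CK definitions, not the paper's exact OCL formulations.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Toy metamodel element: a class with an optional parent (single
# inheritance for brevity) and a list of method names.
@dataclass
class UmlClass:
    name: str
    parent: Optional["UmlClass"] = None
    methods: List[str] = field(default_factory=list)

def dit(c: UmlClass) -> int:
    """Depth of Inheritance Tree: length of the path to the root class."""
    return 0 if c.parent is None else 1 + dit(c.parent)

def noc(c: UmlClass, model: List[UmlClass]) -> int:
    """Number of Children: classes that inherit directly from c."""
    return sum(1 for other in model if other.parent is c)

# Example model: C inherits from B, which inherits from A.
a = UmlClass("A")
b = UmlClass("B", parent=a)
c = UmlClass("C", parent=b)
model = [a, b, c]
print(dit(c))         # 2
print(noc(a, model))  # 1
```

Writing each metric as a query over model elements, rather than as a source-code scan, is what lets the same definition apply uniformly to class diagrams and to reverse-engineered models.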
Article
A discontinuity exists between object-oriented modeling and programming languages. This discontinuity arises from ambiguous concepts in modeling languages and a lack of corresponding concepts in programming languages. It is particularly acute for binary class relationships---association, aggregation, and composition. It hinders the traceability between software implementation and design, thus hampering software analysis. We propose consensual definitions of the binary class relationships with four minimal properties---exclusivity, invocation site, lifetime, and multiplicity. We describe algorithms to detect automatically these properties in source code and apply these on several frameworks. Thus, we bridge the gap between implementation and design for the binary class relationships, easing software analysis.
Conference Paper
This paper presents a library of measures, named FLAME (A Formal Library for Aiding Metrics Extraction), which is mainly used to formalize object-oriented design metric definitions. The library itself is formalized with the Object Constraint Language upon the UML meta-model. The combination of FLAME functions with the UML meta-model allows unambiguous metric definitions, which in turn help increase tool support for object-oriented metrics extraction. When applied to object-oriented designs, metric definitions written with FLAME enable comparisons among different models, leading to recommendations and conclusions for developers. Such comparisons can help improve the quality of object-oriented design and, indirectly, the methodology itself, contributing to the progress of the overall software life cycle.
Article
The paper argues for the need of a benchmark, or suite of benchmarks, to exercise and evaluate software visualization methods, tools, and research. The intent of the benchmark(s) must be to further and motivate research in the field of using visualization methods to support understanding and analysis of real world and/or large scale software systems undergoing development or evolution. The paper points to other software engineering sub-fields that have recently benefited from benchmarks and explains how these examples can assist in the development of a benchmark for software visualization.
Conference Paper
In this paper we report on our experiences of using the Dagstuhl Middle Metamodel as a basis for defining a set of software metrics. This approach involves expressing the metrics as Object Constraint Language queries over the metamodel. We provide details of a system for specifying Java-based software metrics through a tool that instantiates the metamodel from Java class files and a tool that automatically generates a program to calculate the expressed metrics. We present details of an exploratory data analysis of some cohesion metrics to illustrate the use of our approach.
Conference Paper
Measuring quality is the key to developing high-quality software, and it is widely recognised that quality assurance of software products must be assessed focusing on early artifacts, such as class diagrams. After having thoroughly reviewed existing OO measures applicable to class diagrams at a high-level design stage, a set of metrics for the structural complexity of class diagrams obtained using Unified Modeling Language (UML) was defined. This paper describes a controlled experiment carried out in order to corroborate whether the metrics are closely related to UML class diagram modifiability. Based on data collected in the experiment, a prediction model for class diagram modifiability using a method for induction of fuzzy rules was built. The results of this experiment indicate that the metrics related to aggregation and generalization relationships are the determinant of class diagram modifiability. These findings are in line with the conclusions drawn from two other similar controlled experiments.
Conference Paper
Design metrics are useful means for improving the quality of software. A number of object-oriented metrics have been suggested as being helpful for resource allocation in software development. These metrics are particularly useful for identifying fault-prone classes and for predicting required maintenance efforts, productivity, and rework efforts. To obtain the design metrics of the software under development, most existing approaches measure the metrics by parsing the source code of the software. Such approaches can only be performed in a late phase of software development, thus limiting the usefulness of the design metrics in resource allocation. In this paper, we present a methodology that compiles UML specifications to obtain design information and to compute the design metrics at an early stage of software development. The current version of our tool uses diagrams produced by the Rational Rose tool and computes OO metrics that have been suggested as being good indicators for identifying faults related to object-oriented features. Our technique advances the state of the metrics measuring process; thus it is expected to strongly promote the use of design metrics and significantly increase their impact on improving software quality.
Article
GXL (Graph eXchange Language) is an XML-based standard exchange format for sharing data between tools. Formally, GXL represents typed, attributed, directed, ordered graphs which are extended to represent hypergraphs and hierarchical graphs. This flexible data model can be used for object-relational data and a wide variety of graphs. An advantage of GXL is that it can be used to exchange instance graphs together with their corresponding schema information in a uniform format, i.e. using a common document type specification. This paper describes GXL and shows how GXL is used to provide interoperability of graph-based tools. GXL has been ratified by the reengineering and graph transformation research communities and is being considered for adoption by other communities.
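A minimal GXL instance graph can be produced with the standard library's XML support. The sketch below is a simplified example: the element names (gxl, graph, node, edge, attr) follow the GXL DTD, but schema (type) references and namespace declarations are omitted for brevity, and the node labels are illustrative.

```python
import xml.etree.ElementTree as ET

# Build a tiny GXL-style graph: two nodes and one directed edge,
# each node carrying a string-valued "label" attribute.
gxl = ET.Element("gxl")
graph = ET.SubElement(gxl, "graph", id="callgraph", edgeids="true")

def add_node(node_id, label):
    node = ET.SubElement(graph, "node", id=node_id)
    attr = ET.SubElement(node, "attr", name="label")
    ET.SubElement(attr, "string").text = label
    return node

add_node("n1", "ClassA")
add_node("n2", "ClassB")
# "from" is a Python keyword, so pass the attributes as a dict.
ET.SubElement(graph, "edge", {"id": "e1", "from": "n1", "to": "n2"})

print(ET.tostring(gxl, encoding="unicode"))
```

Because both the instance graph and (in full GXL) its schema travel in the same document format, a metrics tool and a visualization tool can exchange extracted models without agreeing on anything beyond GXL itself.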
Conference Paper
In this paper the design of a CASE tool for measuring the complexity of object-oriented software systems is described. Use of the tool within the software testing and release sub-process is outlined. The paper argues that (i) properly integrating metrics into a software process requires tool support; (ii) to be useful in commercial development environments, tools must support heterogeneous systems, often involving multiple programming languages; and (iii) given the immaturity of current complexity metrics, tools must be adaptable so that new metrics can readily be incorporated to best support the software process. Using an object-oriented programming language meta-model in its database schema, the tool provides a flexible architecture that facilitates support for new object-oriented programming languages and metrics with relative ease. We believe these are essential requirements for measurement tools used in the constantly improving software processes indicative of high-maturity organisations.
Conference Paper
The relationships between coupling and external quality factors of object-oriented software have been studied extensively for the past few years. For example, several studies have identified clear empirical relationships between class-level coupling and the fault-proneness of the classes. A common way to quantify coupling is through static code analysis. However, the resulting static coupling measures only capture certain underlying dimensions of coupling. Other dependencies regarding the dynamic behavior of software can only be inferred from run-time information. For example, due to inheritance and polymorphism, it is not always possible to determine the actual receiver and sender classes (i.e., the objects) from static code analysis. This paper describes how several dimensions of dynamic coupling can be calculated by tracing the flow of messages between objects at run-time. As a first evaluation of the proposed dynamic coupling measures, fairly accurate prediction models of the change proneness of classes have been developed using change data from nine maintenance releases of a large Smalltalk system. Preliminary results suggest that dynamic coupling may also be useful for developing prediction models and tools supporting change impact analysis. At present, work on developing a dynamic coupling tracer and ripple-effect prediction models for Java programs is underway.
Article
The relationships between coupling and external quality factors of object-oriented software have been studied extensively for the past few years. For example, several studies have identified clear empirical relationships between class-level coupling and class fault-proneness. A common way to define and measure coupling is through structural properties and static code analysis. However, because of polymorphism, dynamic binding, and the common presence of unused ("dead") code in commercial software, the resulting coupling measures are imprecise as they do not perfectly reflect the actual coupling taking place among classes at runtime. For example, when using static analysis to measure coupling, it is difficult and sometimes impossible to determine what actual methods can be invoked from a client class if those methods are overridden in the subclasses of the server classes. Coupling measurement has traditionally been performed using static code analysis, because most of the existing work was done on non-object-oriented code and because dynamic code analysis is more expensive and complex to perform. For modern software systems, however, this focus on static analysis can be problematic because although dynamic binding existed before the advent of object-orientation, its usage has increased significantly in the last decade. We describe how coupling can be defined and precisely measured based on dynamic analysis of systems. We refer to this type of coupling as dynamic coupling. An empirical evaluation of the proposed dynamic coupling measures is reported in which we study the relationship of these measures with the change proneness of classes. Data from maintenance releases of a large Java system are used for this purpose. Preliminary results suggest that some dynamic coupling measures are significant indicators of change proneness and that they complement existing coupling measures based on static analysis.
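The core mechanism behind dynamic coupling measurement, tracing the flow of messages between objects at run-time, can be sketched in Python with `sys.settrace`. This is a toy illustration, not the paper's instrumentation: the example classes and the simple (caller class, callee class) counter are assumptions, and the papers define several distinct dynamic coupling measures rather than this single count.

```python
import sys
from collections import Counter

# Records how many times a method of one class invokes a method of
# another class at run-time (intra-class calls are ignored).
coupling = Counter()

def tracer(frame, event, arg):
    if event == "call":
        callee_self = frame.f_locals.get("self")
        caller_self = frame.f_back.f_locals.get("self") if frame.f_back else None
        if callee_self is not None and caller_self is not None:
            caller = type(caller_self).__name__
            callee = type(callee_self).__name__
            if caller != callee:
                coupling[(caller, callee)] += 1
    return tracer

class Server:
    def work(self):
        return 42

class Client:
    def run(self, s):
        return s.work()   # dynamic dispatch: actual receiver known only here

sys.settrace(tracer)
Client().run(Server())
sys.settrace(None)
print(coupling)  # Counter({('Client', 'Server'): 1})
```

Because the trace sees the actual receiver object of each call, it captures couplings that static analysis cannot resolve when the invoked method is overridden in subclasses of the server class.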
Article
The increasing importance being placed on software measurement has led to an increased amount of research developing new software measures. Given the importance of object-oriented development techniques, one specific area where this has occurred is coupling measurement in object-oriented systems. However, despite a very interesting and rich body of work, there is little understanding of the motivation and empirical hypotheses behind many of these new measures. It is often difficult to determine how such measures relate to one another and for which applications they can be used. As a consequence, it is very difficult for practitioners and researchers to obtain a clear picture of the state of the art in order to select or define measures for object-oriented systems. This situation is addressed and clarified through several different activities. First, a standardized terminology and formalism for expressing measures is provided which ensures that all measures using it are expressed in a fully consistent and operational manner. Second, to provide a structured synthesis, a review of the existing frameworks and measures for coupling measurement in object-oriented systems takes place. Third, a unified framework, based on the issues discovered in the review, is provided and all existing measures are then classified according to this framework. This paper contributes to an increased understanding of the state of the art.