ITRI-99-01
Spatial measures of software complexity
C. R. Douce, P. J. Layzell and J. Buckley
January, 1999
Also published in Proc. 11th Meeting of the Psychology of Programming Interest Group, Leeds.
Software Management Group, UMIST, Manchester, UK
University of Limerick, Limerick, Ireland
Information Technology Research Institute Technical Report Series
ITRI, Univ. of Brighton, Lewes Road, Brighton BN2 4GJ, UK
TEL: 44 1273 642900    FAX: 44 1273 642908
EMAIL: firstname.lastname@itri.brighton.ac.uk    NET: http://www.itri.brighton.ac.uk
Spatial measures of software complexity
C. R. Douce
ITRI, University of Brighton, Brighton, Sussex, UK
P. J. Layzell
Software Management Group, UMIST, Manchester, UK
J. Buckley
University of Limerick, Limerick, Ireland
January 1999
Abstract
This paper introduces a set of simple software complexity metrics that has been inspired by developments within cognitive psychology. Complexity measures are constructed by analysing the distance between components of a program: the greater the distance between program fragments, the greater the resulting spatial complexity of a program. Suggestions are made as to how spatial complexity measures can be tailored to individual programmer teams. Using these metrics, the complexity of a software system can be adjusted using subjective measures of programmer experience and knowledge. A related set of simple object-oriented metrics based around the same principles is also suggested. Finally, a number of further research possibilities are outlined.
Index Terms: Software metrics, software complexity, psychological complexity, spatial reasoning, object-oriented programming, human factors in software engineering, programmer experience, software maintenance.
1 Introduction
There exists a belief within engineering that if something can be measured, it can be controlled. This belief is nowhere more evident than in the field of software engineering, where a large number of different software metrics have proliferated. One of the most important metrics to receive attention has been the complexity metric. The motivation is simple: the more complex a software system is, the more difficult the software is to comprehend and maintain. If 'complexity' can be measured in some way, then we take a step towards managing and understanding software production and correction. Software complexity has been measured in a number of different ways. The simplest of all software complexity measurements is the number of lines of code; the greater the number of lines, the more complex a software system is assumed to be. Finer measurements of complexity include simple counts of program statements and analysis of a program's control structures [1, 2]. Studying the source listing of software has led to two forms of measure being defined: control flow complexity and data flow complexity.
Recently, object-oriented metrics have become an area of increasing interest, not only from the understanding that data and procedure are brought together and so necessitate the formulation of new metrics, but also from a practical perspective. Object-oriented languages are becoming increasingly popular as a vehicle for the construction of significant software systems. A number of object-oriented metrics have been proposed by Chidamber and Kemerer that attempt to describe the design and complexity of object-oriented software [3]. There are three types of metric: those that relate to object definitions, those that relate to object attributes or object data items, and those that relate to object communication or relations. An object definition metric is a measure of the depth of inheritance, along with a measure of how many methods are used. Data metrics are metrics that count the relationships between classes and their member functions.
It can be argued that contemporary software metrics in part describe the software but cannot describe how difficult parts of the software would be to comprehend, modify and change. Empirical software engineering practitioners have called for empirical assessment of software engineering practices and approaches; software metrics are one of the approaches that a software engineer can use [4]. Psychological complexity and software complexity are different but related conceptions. A program that is 'psychologically complex' is a program that is difficult to understand. A program may be difficult to understand and yet still have a small number of lines, a small number of statements and low values for certain types of complexity measure. The spatial metrics that follow have been primarily inspired by theories of working memory [5]. Their intention is to measure psychological complexity simply, and in a way that can be directly related to the processes that occur during the comprehension of program code.
2 Spatial complexity metrics
Intelligence tests examine a number of cognitive abilities. Verbal ability is tested. Graphical and textual tests are used to test induction, and spatial abilities are tested using mental rotation tasks. Spatial ability is a term used to refer to an individual's cognitive abilities relating to orientation, the location of objects in space, and the processing of location-related visual information. Spatial ability has been correlated with the selection of problem-solving strategy, and has played an important role in the formulation of an influential model of working memory.
To successfully carry out debugging, maintenance and comprehension tasks, programmers must possess knowledge of the programming language, have an understanding of the application domain and develop an appreciation of the relationships that can exist between the two [6]. Program comprehension and software maintenance are considered to make substantial use of programmers' spatial abilities. To develop an understanding of a non-trivial software system, a programmer must begin to know where significant parts of the program lie and have an appreciation of their relevance to other parts of the program. Important parts of the program lie in the program 'space', which is the source file. Program space is not only one dimensional, but multidimensional: software is not simply encoded within a single source file but can be distributed amongst any number of other files.
The idea of the programming plan or program schema has been used as an explanatory tool to explain programmer expertise. A plan represents a conception of some predefined action; in computing terms this can be a sorting or a searching algorithm, for example. (Within this paper, a program is considered to be a set of executable instructions that are written in a textual format.)
Letovsky and Soloway believed that programming plans can be situated within different parts of a program, and that this can make programs difficult to understand [7]. Wilde et al. stated that programs written in an object-oriented language can be especially difficult to understand, since a program plan can be distributed across different program parts: within classes, methods and objects [8].
The more widely distributed the connections between program functions are, the more complex the relations between the program parts become. Complexity metrics have historically been of two main types: control flow oriented and data oriented. Spatial metrics, like the object-oriented metrics described above, represent a third category of metrics: code relation metrics.
The following sections present spatial complexity measures of increasing sophistication, beginning with measures of standard procedural code. This is followed by a discussion of related measures that can be applied to object-oriented code, derived primarily from examining the C++ language, where two main measures are presented: relations that may exist between classes and relations that may exist between objects.
3 A function complexity metric
Understanding the purpose of a program of significant size necessitates understanding the functions or procedures that are contained within it. The greater the distance in lines of code between related functions, the more cognitive effort must be expended to understand the connections between functions during the initial stages of program comprehension. If a function definition directly precedes a function call, no searching has to be performed to locate the portions of source code that are needed to facilitate understanding.
The function complexity value is derived in two parts: by determining how many functions are called within a program, and by calculating the distance in lines of code that lies between a function call and the function's declaration. A complexity measure for any particular function can be calculated by

FC = \sum_{i=1}^{n} |distance_i|

where n is the number of functions or procedures that are called, and distance_i is the number of lines of code from the i-th call to the called function's declaration. FC is an absolute value.
The entire spatial complexity for a program can be calculated by summing the complexity ratings for each function it contains,

SC = \sum_{j=1}^{N} FC_j

where N is the total number of functions that exist within the program.
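As an illustration, the following is a minimal sketch of how FC and the program total might be computed from pre-extracted call sites and declaration lines; the data structures and names here are ours, not taken from the original report.

# Minimal sketch of the function complexity (FC) metric and the program total.
# Assumes call sites and declarations have already been extracted from the
# source; the representation is illustrative, not from the original report.

def function_complexity(call_lines, declaration_line):
    """Sum of absolute line distances from each call to the declaration."""
    return sum(abs(call - declaration_line) for call in call_lines)

def program_spatial_complexity(functions):
    """Total spatial complexity: the sum of FC over all functions.

    `functions` maps a function name to (declaration_line, [call_lines]).
    """
    return sum(
        function_complexity(calls, decl) for decl, calls in functions.values()
    )

if __name__ == "__main__":
    # Toy example: two functions in a single source file.
    functions = {
        "parse": (10, [120, 340]),   # declared at line 10, called at lines 120 and 340
        "emit": (400, [95]),         # declared at line 400, called at line 95
    }
    print(program_spatial_complexity(functions))  # (110 + 330) + 305 = 745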
Since it is very unlikely that all source code is contained within a single monolithic file, the function complexity value becomes more involved to compute. It should be calculated by totalling the distance from the function call to the top of the current file with the line number, within the file where the source code is contained, of the function's declaration. (The words function and procedure are used interchangeably; the C convention of calling everything a function is adopted.) In cases where no source code for a function can be found, for example because the code is contained within a library that is available in object form only, no measure can be produced.
Two levels of granularity can be used to derive a spatial complexity measure: those measured in lines of code, and those related to the position of a function relative to other functions. A complexity count for the distance in lines of code can be calculated using multiples; the lower the line-of-code multiple, the finer the level of the complexity view.
4 Recursive function complexity metric
The simple function complexity metric does not consider that function calls are very often nested within one another. For example, a programmer may define multiple functions that are called from a larger 'higher level' function. The recursive metric is a simple progression. As described above, a function complexity value calculated using LOC measures is obtained by taking the sum of all the distances to the functions that it calls. The RFC for a function is also calculated by summing LOC distances to the called functions, but to each distance is added the complexity of the called function itself, i.e. the sum of the distances that its children call. Written more formally,

RFC = \sum_{i=1}^{n} (distance_i + FC_i)

where n is the number of functions that are called, distance_i is the number of lines of code from the current function to the i-th called function, and FC_i is the complexity of the function that is called. The greater the level of nesting, the more navigation throughout the source text is required, and the greater the spatial complexity.
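A minimal sketch of the recursive measure follows, assuming an acyclic call graph that has already been extracted from the source; the representation and names are ours, not from the original report.

# Minimal sketch of the recursive function complexity (RFC) metric.
# Assumes an acyclic call graph; the data structures are illustrative.

def recursive_function_complexity(func, declarations, call_graph, call_sites):
    """RFC(func) = sum over called functions of (distance + complexity of callee).

    `declarations` maps a function to its declaration line,
    `call_graph` maps a function to the functions it calls,
    `call_sites` maps (caller, callee) to the line number of the call.
    """
    total = 0
    for callee in call_graph.get(func, []):
        distance = abs(call_sites[(func, callee)] - declarations[callee])
        # Add the callee's own complexity so that deeper nesting raises the score.
        total += distance + recursive_function_complexity(
            callee, declarations, call_graph, call_sites)
    return total

if __name__ == "__main__":
    declarations = {"main": 200, "load": 20, "parse": 80}
    call_graph = {"main": ["load"], "load": ["parse"]}
    call_sites = {("main", "load"): 210, ("load", "parse"): 30}
    # RFC(load) = |30 - 80| = 50; RFC(main) = |210 - 20| + 50 = 240
    print(recursive_function_complexity("main", declarations, call_graph, call_sites))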
5 Object-oriented spatial complexity metrics
The spatial complexity measures can easily be modified to assess the complexity of object-oriented code, just as they can be adapted to other textual programming languages without any great degree of difficulty. Three simple measures are proposed. The first of these is very closely related to the function complexity metrics previously described, while the other two metrics relate directly to inheritance. There are two main forms of inheritance relation used within object-oriented languages: inheritance through class reuse and inheritance through the construction of compound objects. A fourth, composite, measure is also given.
5.1 Method location rating
The method location measure is a count of how close the definition of a member function (or method) is, in lines of code, to its class declaration. Within the C++ language, the source code for member functions can be written next to the declarations. If this is the case, the spatial complexity of the software is minimal and comprehension is eased, since all the relevant information is contained in one place. The number of member functions used within a class affects the method location measure. It is a measure that is distinctly reminiscent of the weighted methods per class (WMC) metric proposed by Chidamber and Kemerer.
Within the C++ language, the method location rating is calculated by summing the distances between each method's implementation and its declaration. This is represented by

MLR = \sum_{i=1}^{m} distance(method_i)

where m is the number of methods within a class and distance is a function that returns the number of lines of code between a method's implementation and its declaration. In the Java language, a slightly different approach can be considered: MLR can be approximated by taking the distance from the position of the current method to the first line of its class.
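As an illustration, the following is a minimal sketch of how MLR might be computed for one class once the declaration and implementation line numbers of each method are known; the data layout and names are ours, not part of the original report.

# Minimal sketch of the method location rating (MLR) for a single class.
# Assumes declaration and implementation line numbers are already known.

def method_location_rating(methods):
    """`methods` is a list of (declaration_line, implementation_line) pairs."""
    return sum(abs(impl - decl) for decl, impl in methods)

# Toy example: three methods of one class; the first is defined inline
# (zero distance), the others are defined further down the file.
print(method_location_rating([(12, 12), (15, 140), (18, 200)]))  # 0 + 125 + 182 = 307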
5.2 Class relation measure
The class relation measure is a measure of how close an inheriting class is situated to the class from which it inherits. The greater the distance between the class declarations, the greater the role spatial memory will play during object-oriented code comprehension and maintenance. The CRM measure is considered to be important since the comprehension of inheritance structures requires an understanding of many different attributes, knowledge of methods and an appreciation of the differences between classes. Since a programmer is unlikely to hold all of this information within working memory at any one time, especially when performing 'cold comprehension', knowing where a class resides is considered to be of great importance.
The CRM is calculated by

CRM = \sum_{i=1}^{c} distance_i

where c is the number of classes that a class inherits from, distance_i is the number of lines of code from the top of the current class to the top of the i-th inherited class, and CRM is the distance measure of this class. If inherited classes are not defined within available code, once again the measure cannot be derived. If classes are located in more than one file, the number of lines from the definition of a class to the top of its file is summed with the line position within the file where the inherited definition can be found.
Take the following example: if a class 'a' multiply inherits classes 'b' and 'c', a CRM measure for 'b' and all of its ancestors is taken. This is repeated for class 'c'. A CRM measure for class 'a' is then simply CRM(b) + CRM(c).
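For illustration (with hypothetical numbers), if the declaration of 'b' lies 150 lines above that of 'a', the declaration of 'c' lies 420 lines above it, and neither 'b' nor 'c' inherits further, then CRM(b) = 150, CRM(c) = 420 and CRM(a) = 150 + 420 = 570.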
5.3 Object relation measure
This metric examines the usage of object types (or declarations) within classes. The object relation measure is calculated by summing the total distance in lines of code from each object declaration to its respective class declaration. As with the other metrics that have been discussed, if declarations exist in other files (other than files that are purely intended to be header files), the rules that have been previously stated still apply. In situations where the object definition is unavailable, code distances cannot be calculated; a separate 'not available' or NA value should then be recorded.
This metric has some similarity with the Chidamber and Kemerer coupling metric, CBO, which stands for coupling between objects. Coupling can be described as the measure of interdependence between modules. If a module or object does not access others, then coupling will be low, creating low interdependence. The ORM further develops the conception of coupling. An object can be considered to have low spatial coupling, or low spatial interdependence, if the used object or function is located near to where it is defined.
ORM is calculated simply by

ORM = \sum_{i=1}^{o} distance_i

where o is the number of objects that are used within a class declaration, and distance_i is the distance in lines of code between the i-th object's usage position and the class where it is defined.
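For illustration (with hypothetical numbers), if a class declaration uses two member objects at lines 52 and 55 of a file, and the classes of those objects are declared at lines 300 and 480 of the same file, then ORM = |52 - 300| + |55 - 480| = 248 + 425 = 673.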
5.4 Combining measures
These measures can be combined to produce a composite view of the spatial complexity of the most significant parts of an object-oriented program. No method of combining the measures other than a simple summation operation has currently been devised. Obtaining a composite complexity measure is considered to be important, but without understanding which operations are the most cognitively demanding when manipulating and working with object-oriented source code, it is difficult to see how such a value may relate to program comprehension and maintenance operations.
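Under the simple summation reading (the exact formula is not given explicitly above, so this is an assumption of the obvious form), the composite for a single class would be Composite = MLR + CRM + ORM, summed over all classes of the system.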
6 Complexity and Programmer Experience
Maintainers more often than not work on software systems for long periods of time. The measures that have been described can be used to obtain an indication of how complex a software system is for programmers who have had no experience of using a particular software system: programmers who undertake 'cold' comprehension. The complexity scores that can be derived from software may appear to be impressive, but they easily become meaningless to software development managers whose programmers have been working on a software development for a year or more, for example.
Over a period of months and years, it is safe to assume that programmers consign different types of information about a software system to memory. Such information can include data flow, control flow, knowledge of functional components and problem domain information. Spatial information about a program's terrain is also held: as individual programmers become familiar with particular components of a system, they remember where information about a particular area of a large software system can be found.
A complexity measure that differs between different groups of programmers can be an especially useful tool for cost and time estimation. Measurements of the complexity of particular sections can be weighted using a subjective knowledge measurement provided by a programmer. This can be obtained in the form of a percentage: individual programmers rate particular sections of a software system in terms of their familiarity. A rating of zero percent indicates that a programmer currently has no knowledge of a particular part of a system, while a maximum one hundred percent rating suggests that a programmer can recall the position and names of all of the program segments and reconstruct the key elements directly from memory. A complexity rating weighted by programmer knowledge can be calculated simply by subtracting the suggested percentage.
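As an illustration, one possible reading of this weighting is sketched below (the precise arithmetic is not spelled out above, so the interpretation that 'subtracting the percentage' means scaling the raw score by the unfamiliar fraction is an assumption), together with the simple group averaging described in the next paragraph.

# Minimal sketch of experience-weighted spatial complexity, assuming the
# "subtract the percentage" rule means scaling by the unfamiliar fraction;
# this interpretation and all names are illustrative, not from the report.

def weighted_complexity(raw_score, familiarity_percent):
    """Scale a raw spatial complexity score by how unfamiliar the code is."""
    return raw_score * (100 - familiarity_percent) / 100.0

def team_weighted_complexity(raw_score, familiarity_percents):
    """Group measure: the mean of the individually weighted scores."""
    scores = [weighted_complexity(raw_score, p) for p in familiarity_percents]
    return sum(scores) / len(scores)

# Toy example: a component with raw spatial complexity 745, rated 80%, 40%
# and 0% familiar by three team members.
print(weighted_complexity(745, 80))                # 149.0
print(team_weighted_complexity(745, [80, 40, 0]))  # mean of 149, 447, 745 = 447.0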
Group measurements for a programming team can be obtained by calculating simple averages over the data collected from all members of the team. Over time, subjective knowledge can change or even degrade through lack of use. To maintain a correct view of programmers' experience and of how it affects the complexity measures, subjective measures of knowledge should be taken at regular intervals to reassess the state of knowledge. These metrics, when combined with personal adjustments, have the potential to provide the software developer with a view of how 'complicated' program comprehension can be and, indirectly, to begin to gauge how costly it can be.
7 Discussion
The spatial metrics conform to many of Weyuker's desirable properties of complexity measures [9]. Metrics should be neither too coarse nor too fine; in essence, a measure should not rank many different programs as being equally complex. It also follows that if two programs were joined together, in some cases the resulting program will be more complex than the sum of its parts. It does not follow that in all cases a different measurement will be obtained if statements are re-ordered; it will follow, however, if the function positions are changed.
Further work is needed to understand the relationship between the spatial complexity measurements and the cognitive effort needed to understand program code. Spatial memory is said to play an important role in cognition, specifically in working memory. Baddeley proposes a theory of working memory that goes further than the simple distinction between short-term and long-term memory. Evidence from cognitive neuropsychological studies of the brain-damaged gives weight to the conception that spatial processing involves a particular cognitive system.
The psychological complexity of software goes beyond simple considerations of the relative position of related fragments of software. The spatial location metrics alone describe a very particular view of a software system. Like all metrics, they should be used in combination with others to obtain a full picture of the sophistication of software.
Code position is a concept that can be expanded and used to create further metrics. The metrics that have been presented are by no means perfect; they are in need of refinement. The object-oriented spatial metrics do not attempt to address additional language features such as multi-tasking and exception handling, both of which are present within the Java language. Although no direct consideration has been given to these features, metrics for them can be constructed without great difficulty using the notion of distance functions.
Further research is required to assess the advantages and shortcomings of these metrics, which are rich ground for empirical investigation. Correlations with other, more established measures should be computed and empirical evidence should be collected to begin validation and assessment. The spatial metric is a powerful conception. It presents a view of software complexity that is related to the cognitive demands of conducting programming tasks, rather than to simple counts of lines, operators and operands. It currently remains to be seen whether the artifacts that software engineers produce can be measured with accuracy, particularly in terms of their psychological complexity. If they can, then development and maintenance may indeed become activities that can be controlled.
References
[1] Halstead, M.H., Elements of Software Science. 1977, New York: Elsevier.
[2] McCabe, T.J., A complexity measure. IEEE Transactions on Software Engineering, 1976. SE-2: p. 308-320.
[3] Chidamber, S.R. and C.F. Kemerer, Towards a metrics suite for object-oriented design. In OOPSLA '91. 1991: ACM SIGPLAN Notices.
[4] Fenton, N. and A. Melton, Deriving structurally based software measures. Journal of Systems and Software, 1990. 12: p. 177-187.
[5] Baddeley, A., Human Memory: Theory and Practice - Revised edition. 1997, Hove: Psychology Press.
[6] Brooks, R., Towards a theory of the comprehension of computer programs. International Journal of Man-Machine Studies, 1983. 18: p. 543-554.
[7] Letovsky, S. and E. Soloway, Delocalized plans and program comprehension. IEEE Software, 1986. 3: p. 41-47.
[8] Wilde, N., P. Matthews, and R. Huitt, Maintaining object-oriented software. IEEE Software, 1993. 10: p. 75-80.
[9] Weyuker, E.J., Evaluating software complexity measures. IEEE Transactions on Software Engineering, 1988. SE-14.