Simple Function Point: a new Functional Size Measurement
Method fully compliant with IFPUG 4.x
Roberto Meli
Abstract
The IFPUG Function Points method was originally developed almost 35 years ago. It arose from the need to capture, in numbers, the functional value that a software application delivers to its users. At that time the development process was largely "handmade" and "Lines of Code" was the main measurement method available. Detailed statements of functional user requirements (in terms of elementary fields, logical files, references to files, etc.) are still used to produce a measurement of the functional value of an application. Unfortunately, producing such a measure is quite costly and time-consuming, and it requires very high professionalism in counters. In addition, customers and suppliers often engage in endless disputes about the complexity of Base Functional Components (BFCs), owing to the extreme detail of the elements to be used and the ambiguity of many counting rules when applied to actual systems. Production people are often forced to accept measurement as a necessary step, yet remain unsatisfied with the subjectivity and cost of the measurement process. Essentially, analysts and programmers consider measurement an "unavoidable waste of time". There is a clear need for a simpler, faster, and cheaper functional measurement method. On the other hand, many studies, contracts, and asset measures have been built with the IFPUG method, and it would be a pity to lose those resources. Simple Function Point (SiFP) is a new measurement method, based on only two BFCs, which is fully compliant with the IFPUG one. All the resources and contractual frameworks developed for IFPUG are valid for Simple FP as well, starting with the ISBSG productivity database. The new method reduces cost, time, and disputes, and the translation of an entire measured application portfolio is immediate.
1. Quick overview of existing functional measurement methods
As of January 2011, two main methods for the functional measurement of software are competing on the market: IFPUG and COSMIC. There are also other ISO-certified methods, but they are basically limited to the territories from which they originate (the Netherlands, the UK, Finland). All these methods involve identifying very granular BFCs, and their appreciation of complexity is based on a very detailed view of functional information. The IFPUG method identifies transactional and data BFC types, while the COSMIC method identifies only transactional BFCs, although it also requires identifying business objects, which have a strong correlation with data elements but do not contribute directly to the numerical value in FP.
1.1. Advantages and disadvantages of the existing FSMMs
The main advantages of the IFPUG method are that:
- it is well established by decades of usage;
- much benchmarking data is available in the public domain.
The main disadvantages of the IFPUG method are that:
- it does not provide a layered model compatible with modern component-based architectures;
- by assigning value to the shared static data component, it makes the metric fail to satisfy the distributive property of counts: FP(A ∪ B) is generally lower than FP(A) + FP(B), where A and B are two sets of requirements considered, in the first case, as a single application and, in the second, as two separate applications sharing logical files. This is a consequence of the rule that eliminates identical BFCs within the same Measurable Software Application (ASM); a worked example follows this list;
- it is usually considered inadequate for technological software.
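To illustrate the distributive-property issue with invented figures (not taken from the paper): suppose FP(A) = 100 and FP(B) = 80, and that both counts include the same shared logical file worth 7 FP. Measured as a single application, the shared file is counted only once, so FP(A ∪ B) = 100 + 80 - 7 = 173, which is lower than FP(A) + FP(B) = 180.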
The main advantages of the COSMIC method are that:
- it is suitable for a wide range of types of software (business, real-time, networks, etc.);
- it introduces the concept of layers and layered architectures;
- it introduces the concept of the measurement "viewpoint".
The main disadvantage of the COSMIC method is that:
- the numerical value of a measurement strongly depends on the selected viewpoint.
The main disadvantages common to both methods are that:
- they require very detailed documentation of functional user requirements;
- they provide a large number of rules that are not always easy to apply;
- they are relatively easy to apply to development from scratch but difficult to apply to ordinary maintenance and asset management measures (the detailed functional information is usually very unstable, and keeping measurements aligned with the "living software" is costly and time-consuming).
1.2. Market needs
The market requires fast, agile measurement methods that have a low impact on production processes, do not demand highly specialized skills, are reliable in their results, are independent of technology, and are correlated with the effort, cost, and duration of a project. The current metrics address these needs only partially.
2. Research Project Goals
The goal was to establish a new functional size measurement method, consistent with the framework of the ISO 14143 family of standards, that is fully compatible with the IFPUG method at the level of results when applied to the same scope, but which is:
- easier to apply;
- easier to learn;
- less subject to interpretation;
- less subject to "manipulation" by technical personnel;
- easier to keep aligned with the evolution of operational systems;
- quickly convertible from the IFPUG method.
3. Assumptions about the "functional size / effort" correlation
Functional metrics are generally considered useful contributors to the estimation of effort, duration, staff, and costs for a development from scratch or a functional enhancement maintenance intervention.
Until public productivity data became available (essentially from ISBSG), the experts' community accepted an implicit assumption along the following lines:
"To obtain an acceptable statistical correlation between the work necessary for the development or enhancement maintenance of software and its functional size, one must necessarily consider both the number AND the internal complexity of Base Functional Components."
Research conducted by DPO on a sample of over a thousand projects, counted with the IFPUG method, showed that this assumption, at least in the context of this methodology, is not true. Instead, the following holds:
"The accuracy of a model relating work effort to the functional size of software does not diminish even when one considers exclusively the number of BFCs in the same class (transaction or data type)."
These findings (documented below) allow us to consider redundant the whole system of IFPUG rules aimed at differentiating between EI, EO, and EQ, and between ILF and EIF, and at determining the complexity of each single BFC (using DETs, RETs, and FTRs). The consequences of this discovery are truly extraordinary in terms of impact on the method and process for measuring Function Points.
Using only the number of BFCs, however, does not allow an immediate adoption of all the models and results obtained by applying the IFPUG method. The research therefore also had, as an essential objective, to find weights for the newly adopted BFCs that would make the two metrics (IFPUG FP and SiFP) reliably convertible.
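To make the comparison concrete, the following minimal sketch (not from the paper; the data and variable names are invented for illustration) shows how one could compare the effort correlation of the weighted IFPUG size against a plain, unweighted BFC count using ordinary least squares on log-transformed data:

```python
import numpy as np
from scipy import stats

# Illustrative placeholder data (NOT the DPO/ISBSG sample): per-project
# effort in person-hours, weighted IFPUG unadjusted FP, and the plain
# count of transactional BFCs (EI + EO + EQ) for the same projects.
effort = np.array([1200.0, 3400.0, 800.0, 5600.0, 2100.0, 9800.0])
ufp = np.array([150.0, 420.0, 95.0, 610.0, 260.0, 1040.0])
transactions = np.array([30.0, 85.0, 20.0, 120.0, 55.0, 210.0])

def log_log_r2(size: np.ndarray, effort: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit of ln(effort) on ln(size)."""
    fit = stats.linregress(np.log(size), np.log(effort))
    return fit.rvalue ** 2

# If the old assumption were true, dropping the complexity weights
# (i.e., using the raw BFC count) should visibly degrade R^2.
print(f"R^2, effort vs. weighted IFPUG FP: {log_log_r2(ufp, effort):.3f}")
print(f"R^2, effort vs. plain BFC count:   {log_log_r2(transactions, effort):.3f}")
```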
4. Simple Function Point base features
The new Simple Function Point (SiFP) metric has the characteristic of measuring functional user requirements with the same accuracy as the standard IFPUG method, and of being fully compliant with it in terms of results: that is, the conversion ratio between IFPUG FP and SiFP is equal to 1. This means that all the available results based on IFPUG measures, from ISBSG productivity data to the FP unit prices in contracts, from defect rates to asset valuations, can be used without any modification or conversion calculation! The Simple Function Point method is not a new technique for estimating the IFPUG Function Point metric; it is an easily convertible alternative to it.
5. The measurement model
The underlying measurement model of the SiFP metric coincides with that of IFPUG 4.x as regards the general approach, the boundary, the scope, the definition of elementary processes and logical files, the practices, and the formulas. It differs in the introduction of viewpoints and layers (which may be used when a conversion from IFPUG is not needed) and in the use of only two BFCs:
- Unspecified Generic Elementary Process (UGEP)
- Unspecified Generic Data Group (UGDG)
The first is a transactional function type, while the second is a data function type. There is no need to differentiate EI, EO, EQ, ILF, and EIF, nor to assess the 'primary intent' and the inherent complexity.
The values assigned to each BFC are:
- UGEP = 4.6 SiFP
- UGDG = 7.0 SiFP
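The size of an application thus follows directly from the two counts: SiFP = 4.6 × #UGEP + 7.0 × #UGDG. For example (figures invented for illustration), an application comprising 30 elementary processes and 10 logical data groups would measure 30 × 4.6 + 10 × 7.0 = 138 + 70 = 208 SiFP.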
6. Conversion between IFPUG FP and SiFP
To check the convertibility between IFPUG FP and SiFP, we used a sample of 768 ISBSG counts for which the detailed count in terms of BFCs was available, and whose distribution of FP values (after log transformation) was very close to the normal distribution, allowing the application of all the typical tools of statistical analysis.
The linear regression on log-transformed data (UFP vs. SiFP) yields a conversion coefficient of 1.00045341, with a statistical correlation index equal to 0.998001323. This result indicates that the two metrics are nearly coincident. The analysis of the residuals shows that they are well behaved and normally distributed.
[Figure: residual plot of the regression (residuals vs. ln(SiFP)) and fit plot of ln(UFP) vs. ln(SiFP) with the predicted ln(UFP) values; original labels in Italian: "Residui" (residuals), "Tracciato dei residui" (residual plot), "Tracciato delle approssimazioni" (fit plot).]
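The convertibility check just described can be reproduced with a few lines of code. The following is a minimal sketch (the data and variable names are invented for illustration; the paper's actual analysis used 768 ISBSG counts):

```python
import numpy as np
from scipy import stats

# Illustrative placeholder data: paired per-project size measures.
ufp = np.array([120.0, 340.0, 80.0, 560.0, 210.0, 980.0])
sifp = np.array([118.0, 352.0, 78.0, 549.0, 215.0, 1002.0])

# OLS regression on log-transformed data, as in the paper's analysis.
fit = stats.linregress(np.log(sifp), np.log(ufp))
print(f"conversion coefficient (slope): {fit.slope:.4f}")   # ~1.0 expected
print(f"correlation index r:            {fit.rvalue:.4f}")  # ~0.998 in the paper

# Signed and absolute percentage errors of SiFP with respect to UFP.
pct_err = (sifp - ufp) / ufp
print(f"mean absolute percentage error:   {np.mean(np.abs(pct_err)):.1%}")
print(f"median absolute percentage error: {np.median(np.abs(pct_err)):.1%}")
```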
The mean and the median of the signed percentage error are zero. The mean absolute percentage error is 12% and the median is 10%. Of course, these percentages refer to different sizes and are therefore not directly comparable with one another in terms of absolute importance.
[Figure: distribution of the percentage errors, shown as a frequency histogram with a cumulative percentage curve over error intervals from -0.5 to 0.5; original labels in Italian: "Distribuzione degli errori percentuali" (distribution of percentage errors), "Frequenza" (frequency), "% cumulativa" (cumulative %).]
The "assets error" (the difference between the sum of all signed measures obtained with the IFPUG method and the sum of all signed measures obtained with SiFP) is equal to 0.4%, which means that the absolute errors compensate one another when the counts are combined as if they were a large portfolio of applications. In fact, the overall SiFP measure exceeds the IFPUG FP value by only 1,101 out of a total of 291,139 (1,101 / 291,139 ≈ 0.38%)!
An audit conducted on a further sample of 140 projects, independent of the ISBSG DB, provided similar results.
The transition from assets counted in IFPUG FP to SiFP values is immediate if one knows just the number of EI, EO, EQ, ILF, and EIF.
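A minimal sketch of this conversion (the function and parameter names are illustrative, not part of the method's specification): every IFPUG transactional function maps to a UGEP and every data function to a UGDG, after which the SiFP weights apply.

```python
UGEP_WEIGHT = 4.6  # SiFP per Unspecified Generic Elementary Process
UGDG_WEIGHT = 7.0  # SiFP per Unspecified Generic Data Group

def sifp_from_ifpug_counts(ei: int, eo: int, eq: int, ilf: int, eif: int) -> float:
    """Convert an existing IFPUG count to SiFP.

    Each transactional function (EI, EO, EQ) is treated as one UGEP;
    each data function (ILF, EIF) is treated as one UGDG.
    """
    ugep = ei + eo + eq  # transactional BFCs
    ugdg = ilf + eif     # data BFCs
    return UGEP_WEIGHT * ugep + UGDG_WEIGHT * ugdg

# Example: 10 EI, 8 EO, 6 EQ, 5 ILF, 2 EIF
# -> 4.6 * 24 + 7.0 * 7 = 110.4 + 49.0 = 159.4 SiFP
print(sifp_from_ifpug_counts(10, 8, 6, 5, 2))  # 159.4
```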
7. Functional Size/Effort correlation
The correlation between effort and SiFP is identical to that between effort and IFPUG FP, both for new developments and for enhancement maintenance. The two measures can therefore be used interchangeably in the construction of cost models, and with the same unit prices in outsourcing contracts.
8. Strengths of the SiFP method
The SiFP method meets all the goals set out in the research project, since it is:
- easier to apply;
- easier to learn;
- less subject to interpretation;
- less prone to "manipulation" by developers;
- easier to keep aligned with the evolution of operational systems;
- immediately convertible from IFPUG FP.
In particular, we observe that keeping asset values aligned through the continuous small enhancements carried out within ordinary maintenance involves a very low cost, because there is no need to document the DETs, RETs, and FTRs that changed, but only the BFCs added to or deleted from the baseline.
9. Future research
It is of obvious interest to understand whether similar results can be obtained using the COSMIC FP approach. This research is ongoing.
10. References
[1] ISO/IEC 14143-1:2007, Information technology - Software measurement - Functional size measurement - Part 1: Definition of concepts, JTC 1/SC 7, ISO/IEC, 2007.
[2] IFPUG, Counting Practices Manual (CPM) v4.3.1, 2010.
[3] DPO, Early & Quick Function Point 3.0 - Reference Manual v1.3, Feb 2011.
[4] ISBSG, Estimating, Benchmarking & Research Suite Release 11, 2010.