Exploratory Study of Performance Evaluation Models for Distributed
Software Architecture
S.O. Olabiyisi; E.O. Omidiora
Department of Computer Science
& Engineering
Ladoke Akintola University of
Technology, Ogbomoso
Oyo State, Nigeria
Victor W. Mbarika
International Centre for
Information Technology and
Development
Southern University, Baton
Rouge, Louisiana, USA
Faith-Michael
Uzoka
Department of
Computer Science &
Information Systems
Mount Royal University
Calgary, Canada
Mathieu Kourouma
Department of
Computer Science
College of Sciences
Southern University,
Baton Rouge,
Louisiana, USA
Boluwaji A. Akinnuwesi
Department of Information
Technology
Bells University of
Technology, Ota, Ogun State
Nigeria
Hyacinthe Aboudja
Department of Computer
Science
School of Business
Oklahoma City University
ABSTRACT
Several models have been developed to evaluate the performance of Distributed Software Architecture (DSA) in order to avoid problems that may arise during system implementation. This paper presents a review of DSA performance evaluation models with the view of identifying the common properties of the models. It was established in this study that the existing models evaluate DSA performance using machine parameters such as processor speed, buffer size, cache size, server response time, server execution time, bus and network bandwidth, and many others. The models are thus classified as machine-centric. Moreover, the involvement of end users in the evaluation process is not emphasized. Software is developed to satisfy specific requirements of the client organization (end users); therefore, involving users in evaluating DSA performance should not be underestimated. This study suggests future work on establishing contextual organizational variables that can be used to evaluate DSA. Also, to complement the existing models, work should be done on the development of a user-centric performance evaluation model that directly involves the end users in the evaluation of DSA, using the identified contextual organizational variables as parameters for evaluation.
Keywords: Distributed software, Performance, Performance evaluation
model, Software system architecture, Client organization, machine-
centric, user-centric
INTRODUCTION
Today, distributed computing applications are used by many people in real-time operations such as electronic commerce, electronic banking, online payment, et cetera [22]. Distributed computing is used as an enabling technology for modern enterprise applications; thus, in the face of globalization and ever-increasing competition, Quality of Service (QoS) attributes like performance, security, reliability, scalability, and robustness are of crucial importance [29]. Companies must ensure that the distributed software (DS) they operate not only provides all relevant functional services but also meets the performance expectations of their customers. Therefore, it becomes imperative to analyze and predict the expected performance of distributed software systems at the level of the architectural design in order to avoid the pitfalls of poor QoS during system implementation.
Software architecture (SA) is a phase of software
design which describes how a system is
decomposed into components, how these
components are interconnected, and how they
communicate and interact with each other. This
phase of software design is a major source of errors
if the organizational structure of the different
components is not carefully defined and designed.
There are two parts to SA [6, 33]. The first part is the
micro-architecture which covers the internal structure
of the software system such as conceptual
architecture, module interconnection architecture,
execution architecture, and code architecture. The
second part of SA is the macro-architecture that
focuses on external factors that could influence the
design and implementation of the software system.
Examples of the external factors are: the culture and beliefs of people (users), government policies and regulations, and the disposition of people towards the use of computers.
SA is an important phase in the software life cycle, as it is the earliest point and highest level of abstraction at which useful analysis of a software system is possible [35]. Hence, performance analysis at this level can be used to establish whether a proposed architecture satisfies the end users' requirements and also meets the desired performance specifications. It also helps to identify errors early and to verify that the quality requirements have been addressed in the design, thereby avoiding major modifications later in the software development life cycle or tuning of the system after deployment. SA is considered the first product in an architecture-based development process, and evaluation at this level should reveal requirement conflicts and incomplete design descriptions from the stakeholders' perspective [6].
Performance of software is a quality attribute that is measured with metrics such as system throughput, responsiveness, resource utilization, turnaround time, latency, failure rate, and fault tolerance. Thus, assessing and optimizing system performance is essential for the smooth and efficient operation of the software system. There are several approaches for evaluating the performance of system architecture. One of the earliest is the fix-it-later approach [3], which advocates concentrating on software correctness and deferring performance considerations to the integration testing phase. If performance problems are detected then additional hardware may be acquired; otherwise, the software is tuned to correct the problems. This approach has several limitations: it takes time to acquire and install new hardware; tuning the software also takes time and can be costly; tuning may distort the original software design; and testing must be repeated after code changes. It also leaves a negative impression on users, even after the problems are corrected. The rationale for the fix-it-later approach is to save development time and cost. This, however, will not be realized if initial performance is unsatisfactory, because of the additional time and cost of tuning and maintenance. Connie [3] also proposed a Design-Based Evaluation and Prediction Technique (ADEPT), an analysis technique used in conjunction with the performance engineering discipline. ADEPT was the strategy used to combat the fix-it-later principle and to support the performance engineering process. ADEPT evaluates the performance of an information system early in the life cycle using specifications for both expected resource requirements and upper bounds. The system design is likely to be stable if the performance goal is satisfied for the upper bound. ADEPT had the following limitations: lack of an automatic feedback component, insufficient robustness to evaluate large and complex systems, inability to eliminate unwanted arguments in the course of evaluation, and inability to work in concurrent processing environments.
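To make the idea of evaluating a design against both expected and upper-bound resource specifications concrete, the following minimal sketch compares two crude response-time estimates with a performance goal. It is an illustration in the spirit of the approach described above, not ADEPT itself; the demand figures, the additive prediction, and the 2-second goal are all assumptions.

```python
# Illustrative sketch (not ADEPT itself): compare predicted demand for a design
# against a response-time goal using expected and upper-bound resource estimates.

def predicted_response(cpu_demand_s: float, io_demand_s: float, overhead_s: float) -> float:
    """Crude additive prediction of response time for one transaction."""
    return cpu_demand_s + io_demand_s + overhead_s

GOAL_S = 2.0  # hypothetical performance goal: 2 seconds per transaction

expected = predicted_response(cpu_demand_s=0.4, io_demand_s=0.6, overhead_s=0.2)
upper    = predicted_response(cpu_demand_s=0.9, io_demand_s=1.2, overhead_s=0.4)

print(f"expected estimate: {expected:.2f}s, upper-bound estimate: {upper:.2f}s")
if upper <= GOAL_S:
    print("design likely stable: goal met even at the upper bound")
elif expected <= GOAL_S:
    print("goal met only for expected demand: revisit risky components")
else:
    print("goal missed even for expected demand: rework the design")
```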
In recent years, several models have been developed to evaluate the performance of DSA. The survey presented in this paper covers the developments over about a decade (1999–2010), with the aim of identifying the parameters used by each model for evaluating DSA performance and of deducing the properties that are common to the models. Further research directions are proposed as a consequence.
RELATED WORKS
Many studies have surveyed system performance evaluation models with the ultimate goal of providing recommendations for future research activities that could significantly improve the performance evaluation and prediction of software system architecture. A survey of the approaches to evaluating software performance from 1960 to 1986 was presented in [4]. The study pointed out the breakthroughs leading to the Software Performance Engineering (SPE) approach and a comprehensive methodology for constructing software to meet performance goals. The concepts, methods, tools, and use of SPE were summarized, and future trends in each area were suggested.
In [6], eight architecture analysis methods were reviewed with the view of discovering the similarities and differences between these methods through classification, comparison, and appropriateness studies. The eight methods considered are: SAAM (Scenario-Based Architecture Analysis Method), SAAMCS (SAAM Founded on Complex Scenarios), ESAAMI (Extended SAAM by Integration in the Domain), SAAMER (Software Architecture Analysis Method for Evolution and Reusability), ATAM (Architecture Trade-Off Analysis Method), SBAR (Scenario-Based Architecture Reengineering), ALPSM (Architecture Level Prediction of Software Maintenance), and SAEM (Software Architecture Evaluation Model). The authors found that, at that time, SAAM had been used for different quality attributes such as modifiability, performance, availability, and security. In addition, SAAM had been applied in several domains, unlike the other methods, which were still undergoing refinement and improvement. As a result, future work was proposed to evaluate the effects of their various usages and to create a repeatable method based on repositories of scenarios, screening, and elicitation questions.
Three concerns relating to software design specifications, performance models, and analysis processes were highlighted in [31]. The following recommendations were made in the paper: the use of standard software artifacts, such as Unified Modeling Language (UML) diagrams, for software design specifications; the existence of a strong semantic mapping between software artifacts and the performance models, as a strategy to reduce performance-model complexity while maintaining a meaningful semantic correspondence; the use of simulation in addition to analytical solutions to address performance-model complexity; and the provision of feedback, which is a key success factor for the widespread use of performance analysis models.
In [1] a review of performance prediction techniques
for component-based software systems was carried
out and the following recommendations were made:
(1) integration of quantitative prediction techniques in
software development process; (2) design of
component models allowing quality prediction and
building of component technologies supporting
quality prediction; (3) inclusion of quality attributes
such as reliability, safety or security in the software
development process; and (4) study of
interdependencies among the different quality
attributes to determine, for example, how the
introduction of performance predictability can affect
other attributes such as reliability or maintainability.
In [7], three foundational formal software analyses were described. The authors reviewed emerging trends in software model checking and identified future directions that promise to significantly improve its cost-effectiveness.
CLASSIFICATION OF DSA PERFORMANCE
EVALUATION MODELS
This paper classifies existing performance models
based on the technique used to develop the models.
The techniques are: (1) Factor Analysis; (2) Queuing
Network; (3) Petri net; (4) Pattern-Based; (5)
Hierarchical Modelling; (6) Performance Analysis
and Characterization Environment (PACE) Based;
(7) Component-Based Modelling; (8) Scenario-
Based; (9) Soft computing approach; (10) Relational
Approach; (11) Software Architecture Analysis
Methods (SAAM); (12) Aspectual Software
Architecture Analysis Methods (ASAAM); (13) Hybrid
Approaches such as UML-Petri net, UML-Stochastic Petri net, Queuing Petri Nets, and combinations with the Soft Computing Approach. The models are reviewed in order to establish the kinds of parameters used in them to evaluate DSA.
Factor Analysis (FA) Based Approach
The FA approach was used in [2] to develop a model for analysing Information Technology (IT) software projects, with the aim of establishing the success or failure of a project before it takes off. FA as implemented in the SPSS and Statview packages was used. Fifty performance indices covering IT project planning, execution, management, and control were formulated. Eleven factors were extracted and subjected to further analysis with a view to estimating and ranking their contribution to the success of IT projects. The model was tested using real-life data obtained through questionnaires administered to the principal actors of popular IT software projects in Nigeria. The significant contribution of the research is the provision of a working model that utilizes both quantitative and qualitative decision variables in assessing the success or failure of IT projects. This serves as a template for evaluating IT projects prior to their implementation. The model was, however, not used to evaluate the performance of software system architecture.
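The factor extraction in [2] was performed with SPSS and Statview; purely as an illustration of the technique, the following sketch extracts eleven factors from synthetic questionnaire-style data using scikit-learn. The data, respondent count, and index set are invented stand-ins, not the data set used in [2].

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical stand-in for questionnaire data: 120 respondents scoring
# 50 IT-project performance indices on a 1-5 scale (synthetic, illustration only).
rng = np.random.default_rng(42)
responses = rng.integers(1, 6, size=(120, 50)).astype(float)

# Extract 11 latent factors, mirroring the number reported in [2].
fa = FactorAnalysis(n_components=11, random_state=0)
factor_scores = fa.fit_transform(responses)   # per-respondent factor scores
loadings = fa.components_                      # (11 factors x 50 indices) loadings

# Rank indices by their absolute loading on the first factor.
top = np.argsort(-np.abs(loadings[0]))[:5]
print("indices loading most on factor 1:", top)
```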
Queuing Network Based Models
This is a conventional modelling paradigm which
consists of a set of interconnected queues [28]. The
models based on Queuing Networks are categorized
in Table 1.
Table 1 Queuing Network Based Performance Models

Model: [30] designed and implemented an object-oriented queuing network model as a set of reusable performance models for software artifacts.
Parameters considered: Buffer size, processor speed of server, queue size, number of incoming requests, request arrival time, request departure time.
Class of parameters: Machine-centric.

Model: [31] integrated a performance model and a specification model to provide a tool for quantitative evaluation of software architecture at the design phase.
Parameters considered: Number of service centers, service rate of each service center, arrival rate of requests at each service center, number of servers in service centers, routing procedure of requests, number of requests circulating in the system, physical resources available, system workloads, network topology.
Class of parameters: Software-process-centric and machine-centric.

Model: [35] modelled a layered software system as a closed Product Form Queuing Network (PFQN) and solved it to find performance attributes of the system.
Parameters considered: Range of the number of clients accessing the system, average think time of each client, number of layers in the software system, relationship between the machines and software components, number of CPUs and disks on each machine and thread limitation (if any), uplink and downlink capacities of the connectors connecting machines running adjacent layers of the system, size of packets on the links, service time required to service one request by a software layer, forward transition probability, rating factors of the CPU and the disks of each machine in the system.
Class of parameters: Software- and machine-centric.

Model: [31] proposed an approach based on queuing network models for performance prediction of software systems at the software architecture level, specified in UML.
Parameters considered: Same as in [35].
Class of parameters: Software- and machine-centric.

Model: [12] developed the Software Architecture and Model Extraction (SAME) technique, which extracts communication patterns from executable designs or prototypes that use message passing, to build a Layered Queuing Network performance model in an automated fashion.
Parameters considered: Same as in [35].
Class of parameters: Software- and machine-centric.

Petri Net Based Approach
Petri nets were introduced in 1962 by Dr. Carl Adam Petri [27]. A Petri net is a graphical and mathematical modelling tool [26]. It is a directed bipartite graph with an initial state called the initial marking. Petri nets consist of four basic elements: places, transitions, tokens, and arcs. System performance models based on the Petri net approach are categorized in Table 2.

Table 2 Petri Net Based Performance Models

Model: [18] developed a performance evaluation model for agent-based systems using the Petri net approach.
Parameters considered: System load, system delays, system routing rate, latency of process, CPU time.
Class of parameters: Machine-centric.

Model: [20] performed performance analysis of Internet-based software retrieval systems using Petri nets.
Parameters considered: Network time.
Class of parameters: Machine-centric.

Model: [13] developed stochastic Petri net models from UML activity diagrams.
Parameters considered: Routing rate, action duration, system response time.
Class of parameters: Machine-centric.

Model: [14] translated UML activity diagrams into stochastic Petri net models that allow performance indices to be computed.
Parameters considered: Routing rate, action duration, system response time.
Class of parameters: Machine-centric.

Model: [23] derived performance parameters from a Generalized Stochastic Petri Net (GSPN) using Markov chain theory.
Parameters considered: Routing rate, action duration, system response time.
Class of parameters: Machine-centric.
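All of the queuing-network entries in Table 1 ultimately turn machine parameters (arrival rates, service rates, numbers of servers) into performance metrics. As a minimal, generic illustration of that mapping, and not a reconstruction of any specific model cited above, the following sketch evaluates an open M/M/1 queue; the request rates are hypothetical.

```python
def mm1_metrics(arrival_rate: float, service_rate: float) -> dict:
    """Steady-state metrics of an open M/M/1 queue (rates in requests/second)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    utilization = arrival_rate / service_rate
    mean_response = 1.0 / (service_rate - arrival_rate)   # waiting + service time
    mean_jobs = utilization / (1.0 - utilization)         # equals lambda * R (Little's law)
    return {"utilization": utilization,
            "mean_response_s": mean_response,
            "mean_jobs_in_system": mean_jobs}

# Hypothetical server: 80 requests/s arrive, the server completes 100 requests/s.
print(mm1_metrics(arrival_rate=80.0, service_rate=100.0))
# -> utilization 0.8, mean response 0.05 s, 4 jobs in the system on average
```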
Queuing Petri Net (QPN) Based Models
The hybrid of Petri nets and queuing networks is Queuing Petri Nets (QPNs), which facilitate the integration of the hardware and software aspects of system behaviour into the same model. In addition to hardware contention and scheduling strategies, QPNs make it easy to model simultaneous resource possession, synchronization, blocking, and contention for software resources. Thus, QPNs combine queuing networks and Petri nets into a single formalism in order to eliminate their respective disadvantages. QPNs allow queues to be integrated into the places of Petri nets, which enables the modeller to easily represent scheduling strategies and brings the benefits of queuing networks into the world of Petri nets [28]. System performance models based on the Queuing Petri net approach are categorized in Table 3.
Table 3 Queuing Petri Net Based Performance Models

Model: [28] developed a performance model of distributed e-business applications using Queuing Petri Nets.
Parameters considered: Service demand of queue, service rate of queue, token population of queue, queue size, buffer size, processor speed of server, routing rate.
Class of parameters: Machine-centric.

Model: [29] developed performance models of distributed component-based systems using Queuing Petri Nets.
Parameters considered: Same as in [28].
Class of parameters: Machine-centric.
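The place/transition/token mechanics underlying the models in Tables 2 and 3 can be illustrated with a minimal sketch of an untimed Petri net for a single server handling requests. This is an illustration of the basic formalism only; the places, transitions, and marking are assumptions, and timed or queueing extensions (as in GSPNs and QPNs) would attach firing rates or embedded queues to these elements.

```python
# Minimal Petri-net sketch: places hold tokens; a transition is enabled when every
# input place holds at least one token, and firing consumes one token per input
# arc and produces one per output arc.

marking = {"request_arrived": 2, "server_idle": 1, "in_service": 0, "done": 0}

transitions = {
    "start_service": {"inputs": ["request_arrived", "server_idle"], "outputs": ["in_service"]},
    "end_service":   {"inputs": ["in_service"],                     "outputs": ["done", "server_idle"]},
}

def enabled(name: str) -> bool:
    return all(marking[p] >= 1 for p in transitions[name]["inputs"])

def fire(name: str) -> None:
    assert enabled(name), f"{name} is not enabled"
    for p in transitions[name]["inputs"]:
        marking[p] -= 1
    for p in transitions[name]["outputs"]:
        marking[p] += 1

fire("start_service")
fire("end_service")
print(marking)  # one request served, server idle again, one request still waiting
```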
Performance Analysis and Characterization
Environment Based Approach
The motivation for developing the Performance Analysis and Characterization Environment (PACE) based approach in [15] was to provide quantitative data concerning the performance of sophisticated applications running on high-performance systems. The PACE framework is a methodology based on a layered approach that separates the software and hardware system components through the use of a parallelization template. This is a modular approach that leads to readily reusable models, which can be interchanged for experimental analysis. Each of the modules in PACE can be described at multiple levels of detail, thus providing a range of result accuracies at varying costs in terms of prediction evaluation time. PACE is intended for pre-implementation analysis, such as design or code-porting activities, as well as for on-the-fly use in scheduling systems. The core component of PACE is a performance specification language, CHIP3S (Characterization Instrumentation for Performance Prediction of Parallel Systems). CHIP3S provides a syntax that allows the performance aspects of an application and its parallelization to be expressed. This includes control-flow information, resource-usage information (for example, the number of operations), communication structures, and mapping information for a parallel or distributed system. The software objects in the PACE system are created using the Application Characterization Tool (ACT). ACT aids the conversion of sequential or parallel source code into the CHIP3S language via the Stanford Intermediate Format (SUIF). ACT performs a static analysis of the code to produce the control flow of the application, to count the number of operations in terms of the high-level language used, and to extract the communication structure. The hardware objects of the model are created using a Hardware Model Configuration Language (HMCL) by specifying system-dependent parameters. On evaluation, the relevant sets of parameters are supplied to the evaluation methods for each of the component models.
Hierarchical Performance Modeling Approach
In [32], a Hierarchical Performance Modelling (HPM) technique for distributed systems, which incorporates different levels of modelling abstraction, was presented. HPM models performance at different layers of abstraction. It includes several layers of organization, from primitive operations up to the software architecture, thereby providing a degree of accuracy that cannot be achieved with single-layer models. The application is developed top-down, from the general to the more specific, but performance information is generated bottom-up, thus linking the different levels of analytic models into a composite model. This approach supports specification and performance-model generation that incorporates computation and communication delays along with hardware profile characteristics to assist in the evaluation of performance alternatives. HPM models provide a quantitative performance assessment of an entire system comprising hardware, software, and communication. HPM provides a well-defined methodology that allows system designers to evaluate an application based on its system requirements and to fine-tune the values of the performance parameters.
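The bottom-up flow of performance information can be sketched as composing per-layer estimates into an end-to-end figure. The following illustration is in the spirit of hierarchical modelling rather than a reproduction of the HPM technique in [32]; the layer structure, operation counts, hardware rates, and communication delays are all assumed values.

```python
# Illustrative bottom-up composition: each layer's delay is derived from
# lower-level estimates (operation counts, hardware speed) and combined with
# inter-layer communication delays. All numbers are hypothetical.

def layer_delay(operations: float, ops_per_second: float) -> float:
    """Primitive-operation level: time a layer spends computing for one request."""
    return operations / ops_per_second

def end_to_end(layers, comm_delays_s):
    """Architecture level: sum per-layer compute delays and inter-layer communication."""
    return sum(layer_delay(ops, rate) for ops, rate in layers) + sum(comm_delays_s)

layers = [
    (2.0e6, 1.0e9),   # presentation layer: 2M operations on a 1 GOPS node
    (8.0e6, 2.0e9),   # business-logic layer
    (1.5e6, 5.0e8),   # data-access layer
]
comm_delays_s = [0.002, 0.004]   # network hops between adjacent layers

print(f"predicted end-to-end delay: {end_to_end(layers, comm_delays_s) * 1000:.2f} ms")
```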
Pattern Based Approach
Design patterns are defined as descriptions of communicating objects and classes that are customized to solve a general design problem in a particular context. The components of a design pattern are: Pattern name, Intent, Motivation, Applicability, Structure, Participants, Collaborations, Consequences, Implementation, Sample code, Known uses, and Related patterns. Performance models based on the pattern-based approach are presented in Table 4.
Table 4 Pattern Based Performance Models

Model: [19] presented an approach based on patterns to develop performance models for object-oriented software systems in the early stages of the software development process. This complements the approach given in [18].
Parameters considered: Event load, time to perform an action, request arrival time, request service time, number of concurrent users.
Class of parameters: Software-process-centric.

Model: [21] presented a pattern-based approach to model the performance of software systems and used it to evaluate the performance of a mobile agent system.
Parameters considered: Same as in [19].
Class of parameters: Software-process-centric.

Model: [9] presented a pattern-based performance completion for message-oriented middleware.
Parameters considered: System configuration (hardware and network components), message size (incoming and outgoing), delivery time for messages, number of messages sent, size of messages sent, number of messages delivered, size of messages delivered, transaction/request size, buffer/pool size.
Class of parameters: Software-process-centric and machine-centric.
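Parameters such as the number of concurrent users, per-action time, and event load in Table 4 can be related to response time through simple operational laws. As a generic illustration, and not a reproduction of any of the cited pattern-based models, the response-time law for a closed interactive system is sketched below; the workload figures are assumed.

```python
def interactive_response_time(n_users: int, throughput_rps: float, think_time_s: float) -> float:
    """Response-time law for a closed interactive system: R = N / X - Z."""
    return n_users / throughput_rps - think_time_s

# Hypothetical workload: 50 concurrent users, 8 completed requests/s, 5 s think time.
print(f"mean response time: {interactive_response_time(50, 8.0, 5.0):.2f} s")  # 1.25 s
```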
Soft Computing Approach
Soft computing is an approach to computing that parallels the remarkable ability of the human mind to reason and learn in an environment of uncertainty and imprecision [8]. It is a consortium of methodologies centred on fuzzy logic (FL), artificial neural networks (ANN), and evolutionary computation (EC). These methodologies are complementary and synergistic rather than competitive. They provide, in one form or another, flexible information-processing capability for handling real-life ambiguous situations. Soft computing aims to exploit the tolerance for imprecision, uncertainty, approximate reasoning, and partial truth in order to achieve tractability, robustness, and low-cost solutions. The attributes of these models are often measured in terms of linguistic values, such as very low, low, high, and very high. The imprecise nature of the attributes constitutes uncertainty and vagueness in their (subsequent) interpretation. Performance models based on the soft computing approach are presented in Table 5. The advantages of soft computing models, particularly fuzzy logic and ANN, are [10]: they are more general, they mimic the way in which humans interpret linguistic values, and the transition from one linguistic value to a contiguous linguistic value is gradual rather than abrupt.
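The gradual transition between linguistic values can be made concrete with membership functions. The sketch below fuzzifies a hypothetical server response time into three linguistic terms using triangular membership functions; the term names and breakpoints are assumptions for illustration, not values taken from any of the cited models.

```python
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzification of server response time (seconds) into linguistic values.
terms = {
    "low":      (0.0, 0.5, 1.5),
    "moderate": (1.0, 2.0, 3.0),
    "high":     (2.5, 4.0, 6.0),
}

response_time = 1.3
for label, (a, b, c) in terms.items():
    print(f"{label:9s}: {triangular(response_time, a, b, c):.2f}")
# 1.3 s belongs partly to "low" (0.20) and partly to "moderate" (0.30): a gradual,
# not abrupt, transition between adjacent linguistic values.
```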
Table 5 Performance Models Based on the Soft Computing Approach

Model: [10] applied fuzzy logic to measure the similarity of software projects when their attributes are described by categorical values (linguistic values in fuzzy logic).
Parameters considered: Seventeen parameters: software size, project mode, plus 15 cost drivers.
Class of parameters: Software-process-centric and machine-centric.

Model: [11] presented a new technique based on fuzzy logic, linguistic quantifiers, and analogy-based reasoning to estimate the cost or effort of software projects when they are described by either numerical data or linguistic values.
Parameters considered: Same as in [10].
Class of parameters: Software-process-centric and machine-centric.

Model: [17] showed how fuzzy logic can be applied to computer performance work to simplify and speed up analysis and reporting.
Parameters considered: CPU queue length, memory (RAM) available, pages input per second, read time, write time, I/Os per second.
Class of parameters: Machine-centric.

Model: [25] developed a fuzzy model for evaluating information system projects based on their present value, using a fuzzy modelling technique.
Parameters considered: Three parameters representing the three possible values of project costs, benefits, evaluation periods, and discount rate.
Class of parameters: Software-process-centric.
Other Performance Models
In [5], Multivariate Adaptive Regression Splines (MARS) were used for software performance analysis. A resource function was designed and automated with the following parameters: size of data objects, number of disk blocks to be read, size of messages to be processed, memory and cache size, processor speed, and bus and network bandwidth.
In [16], PASA, a scenario-based method for the performance assessment of software architectures, was developed. It identifies potential areas of risk within the architecture with respect to performance and other quality objectives, and it identifies strategies for reducing or eliminating the risks if a problem is found. Scenarios for important workloads are identified and documented. The scenarios provide a means of reasoning about the performance of the software, as well as other qualities, and they serve as the starting point for constructing performance models of the architecture.
ASAAM (Aspectual Software Architecture Analysis Method) is a scenario-based method proposed in [34]. It introduces a set of heuristic rules that help to derive architectural aspects and the corresponding tangled architectural components from scenarios. It takes the architecture design as input and measures the impact of predefined scenarios on it in order to identify potential risks and sensitive points of the architecture. This helps to predict the quality of the system before it is built, thereby reducing unnecessary maintenance costs.
In [36], performance analysis based on requirements traceability was presented. Requirements traceability is critical to providing a complete approach that leads to an executable model for performance evaluation. The paper investigated software architectures that are extended, based on performance requirements traceability, to represent performance properties. The extended architectures are then transformed into a simulation model based on coloured Generalized Stochastic Petri Nets (GSPN), and the simulation results are used to validate performance requirements and evaluate the system design. The parameters considered are queue length, number of requests to be serviced, server response time, server execution time, and processor speed.
GENERAL PROPERTIES OF THE EXISTING DSA
PERFORMANCE EVALUATION MODELS
From the survey of the existing DSA performance evaluation models, the following common attributes are identified:
i. The models are algorithmic, using hard computing principles.
ii. The parameters for evaluation are machine-centred and objective, for example processor speed, bus and network bandwidth, RAM size, cache size, server response time, server execution time, number of disk blocks to be read, and message size. The models are therefore machine-centric.
iii. The models are applied at the architectural stage of the software life cycle.
iv. Although the existing models acknowledge the contributions of the client organization (end users) during the software development process, none of them draws parameters for evaluation from contextual organizational decision variables.
v. The models are reusable and scalable.
vi. The performance metrics considered are mostly throughput, response time, and resource utilization.
vii. The models are limited by their inability to cope with the uncertainties and imprecision of the data or information surrounding software projects in the early stages of the development life cycle.
viii. The conceptual structures of some models (for example, probabilistic models) that can represent vague information are inadequate for dealing with problems in which information is perception-based and expressed in linguistic form.
ix. The models are computationally intensive and intolerant of noise. They cannot handle categorical data other than binary-valued variables.
CONCLUSION AND FUTURE WORK
Conclusion
In this paper, a review of research works on performance evaluation models from 1999 to 2010 is presented in order to establish the properties common to these models. It was deduced that most models for evaluating DSA performance are machine-centric. The following are some of the evaluation parameters identified: buffer size, processor speed, cache size, server response time, server execution time, number of disk blocks to be read, queue size, request arrival time, request departure time, bus size, network bandwidth (uplink and downlink), number of Central Processing Units (CPUs), number of requests circulating in the system, system routing rate, latency of the system, network time, system RAM (Random Access Memory) size, size of data objects, and size of messages to be processed. The performance evaluation models are, therefore, classified as machine-centric models. They are established and used to evaluate DSA performance with respect to satisfying machine and system process requirements. However, the subjective decision variables of users are not considered in the machine-centric models; also, the models cannot cope with the uncertainties and imprecision of the data or information surrounding software projects in the development life cycle. Users are involved in DSA development in order to feed the software developers with the necessary organizational information. This helps the software developers to develop a software system that will be accepted by the end users and will satisfy the organization's requirements using the available machine infrastructure. The question is: how do we measure the performance of the DSA from the users' perspective in order to establish the extent of the responsiveness of the DSA to the requirements of the client organization? It is hoped that future research works will address this question.
Future Work
Management of the client organization and the end users are key players in the software development process. Therefore, contextual organizational decision variables (for example: organizational goals and tasks; level of users' competence/experience in Information Technology; information requirements of users and their format; internal services of the organization and their relationships; the organization's defined functions required in the user interface; and the organization's policies, rules, or procedures for transaction process flow) should not be underestimated when establishing the variables used to evaluate the performance of software architecture. We therefore propose that future works should identify and verify, with some empirical analysis, both objective and subjective contextual organizational decision variables that could influence the choice of architectural style and design pattern made by the software developer. We are of the view that if some organizational variables can be established as parameters to evaluate DSA performance, it will be possible to have DSA performance evaluation models that are user-centric, or hybrid models having both organizational decision variables and machine/system variables as parameters for evaluation; a minimal sketch of how such a hybrid score might be computed is given below.
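The following sketch is purely illustrative of the hybrid idea and is not the model proposed by the authors: it combines user-rated organizational variables with a normalized machine metric into a single score. The variable names, rating scale, response-time target, and weights are all assumptions.

```python
# Illustrative hybrid scoring sketch (assumed variables, scales, and weights).

def normalize_likert(score: float, scale_max: int = 5) -> float:
    """Map a 1..scale_max questionnaire rating to the range 0..1."""
    return (score - 1) / (scale_max - 1)

def normalize_response_time(seconds: float, target_s: float = 2.0) -> float:
    """1.0 at or below the target response time, decreasing towards 0 beyond it."""
    return max(0.0, min(1.0, target_s / seconds))

user_ratings = {             # hypothetical averages from end-user questionnaires (1-5)
    "fit_to_organizational_goals": 4.2,
    "fit_to_user_competence":      3.6,
    "fit_to_transaction_rules":    4.0,
}
machine_metrics = {"mean_response_time_s": 1.6}
weights = {"user": 0.6, "machine": 0.4}   # assumed relative importance

user_score = sum(normalize_likert(v) for v in user_ratings.values()) / len(user_ratings)
machine_score = normalize_response_time(machine_metrics["mean_response_time_s"])
overall = weights["user"] * user_score + weights["machine"] * machine_score
print(f"user-centric score {user_score:.2f}, machine score {machine_score:.2f}, overall {overall:.2f}")
```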
REFERENCES
[1] Bailey, H.D., Snavely, A. “Performance
Modeling: Understanding the Present and
Predicting the Future”, Proceedings of Euro-Par,
Lisbon, Portugal: 2005
[2] Chiemeke, S.C. “Computer Aided System for
Evaluating Information Technology Projects”,
PhD thesis submitted to the School of
Postgraduate Studies, Federal University of
Technology, Akure, Ondo State, Nigeria: 2003.
[3] Connie, U.S. “Increasing Information System Productivity”, Proceedings of the Computer Measurement Group’s International Conference, The Computer Measurement Group Inc: 1981.
[4] Connie, U.S. “The Evolution of Software
Performance Engineering: A Survey”,
Proceedings of ACM Fall Joint Computer
Conference: 1986, pp. 778–783.
[5] Courtois, M., Woodside, M. “Using Regression
Splines for Software Performance Analysis”,
Proceedings of WOSP, Ontario, Canada. 2000.
[6] Dobrica, L., Niemela, E. “A Survey on Software
Architecture Analysis Methods”, IEEE
Transactions on Software Engineering, (28:7), 2002.
[7] Dwyer, B.M., Hatcliff, J., Pasareanu, S.C.,
Visser, W. “Formal Software Analysis: Emerging
Trends in Software Model Checking”,
Proceedings of Future of Software Engineering
(FOSE’07): 2007.
[8] Gary, R.G., Frank, C. “Application of Neuro-Fuzzy Systems to Behavioral Representation in Computer Generated Forces”, Proceedings of 8th Conference on Computer Generated Forces and Behavioural Representation, Orlando, FL: 1999.
[9] Happe, J., Friedrich, H., Becker, S., Reussner,
H.R. “A Pattern-Based Performance Completion
for Message-Oriented Middleware”,
Proceedings of WOSP’08, Princeton, New
Jersey: 2008.
[10] Idris, A., Abran, A. “A Fuzzy Based Set of
Measures for Software Project Similarity:
Validation and Possible Improvements”,
Proceedings of METRICS 2001, London,
England: 2001, pp. 85–96.
[11] Idris A., Alain A. and Khoshgoftaar. “Fuzzy
Case-Based Reasoning Models for Software
Cost Estimation”. 2004. Available @
http://www.gelog.etsmtl.ca/publications/pdf/803.
pdf
[12] Israr, A., Tauseef, L.H.D., Franks, G.,
Woodside, M. “Automatic Generation of Layered
Queuing Software Performance Models from
Commonly Available Traces”, Proceedings of
WOSP’05, Palma de Mallorca, Spain: 2005
[13] Juan, P.L., Jose M., Javier, C. “From UML
Activity Diagrams to Stochastic Petri Nets:
Application to Software Performance
Engineering”, Proceedings of WOSP’04,
Redwood City, California: 2004.
[14] Juan, P.L., Jose, M., Javier, C. “On the use of
Formal Models in Software Performance
Evaluation”, News in the Petri Nets World,
Dec. 27, 2008. Available @
<http://webdiis.univzar.es.crpetri/paper/jcam
pos/02_LGMC_JJCC.pdf>
[15] Junwei, C., Darren, J.K., Efstathios, P.,
Graham, R.N. “Performance Modeling of
Parallel and Distributed Computing Using
PACE”, Proceedings of IEEE International
Performance Computing and Communications
Conference, IPCCC-2000, Phoenix: 2000, pp. 485–492.
[16] Lloyd, G.W., Connie, U.S. “PASASM: An
Architectural Approach to Fixing Software
Performance Problems”, Software Engineering
Research and Performance Engineering
Services: 2002.
[17] Maddox, M. “Using Fuzzy Logic to Automate
Performance Analyses”, Proceedings of the
Computer Measurement Group’s 2005
International Conference, The Computer
Measurement Group inc: 2005.
[18] Merseguer, J., Javier, C., Eduardo, M.
“Performance Evaluation for the Design of
Agent-Based Systems: A Petri Net Approach”,
Proceedings of the workshop on Software
Engineering and Petri Nets within the 21st
International Conference on Application and
Theory of Petri Nets, University of Aarhus: 2000a, pp. 1–20.
[19] Merseguer, J., Javier, C., Eduardo, M. “A Pattern-Based Approach to Model Software Performance”, Proceedings of the 2nd International Workshop on Software and Performance, Ottawa, Ontario: 2000b, pp. 137–142.
[20] Merseguer, J., Campos, J., Mena, E.
“Performance Analysis of Internet Based
Software Retrieval Systems Using Petri Nets”,
Proceedings of 4th ACM International Workshop
on Modeling, Analysis and Simulation of
Wireless and Mobile System, Rome Italy: 2001.
[21] Merseguer, J., Javier, C., Eduardo, M. “A Pattern-Based Approach to Model Software Performance Using UML and Petri Nets: Application to Agent-Based Systems”, Proceedings of 7th World Multiconference on Systemics, Cybernetics and Informatics, Orlando, Florida: 2003, (9), pp. 307–313.
[22] Merseguer, J., Javier, C. “Software Performance Modeling Using UML and Petri Nets”, LNCS 2965, Springer Verlag: 2004, pp. 265–289.
[23] Motameni, H., Movaghar, A., Siasifar, M.,
Montazeri, H., Rezaei, A. “Analytic Evaluation
on Petri Net by Using Markov Chain Theory to
Achieve Optimal Models”, World Applied
Sciences Journal, (3:3), 2008, pp. 504–513.
[24] Olabiyisi S.O, Omidiora E.O, Uzoka F.M.E,
Victor Mbarika, Akinnuwesi B.A. “A Survey of
Performance Evaluation Models for Distributed
Software System Architecture”. Proceedings of
International Conference on Computer Science
and Application, World Congress on
Engineering and Computer Science (WCECS
2010), San Francisco: 2010, Vol. 1, pp. 35–43.
[25] Omitaomu, A.O., Adedeji, B. “Fuzzy Present
Value Analysis Model for Evaluating Information
System Projects”, Engineering Economist
(52:2), 2007, pp. 157–178.
[26] Peterson, J.L. Petri Net Theory and the
Modeling of Systems, Prentice Hall, 1981.
[27] Petri, C.A. “Communication with Automata”, Technical Report RADC-TR-65-377, Rome Air Dev. Centre, New York: 1962.
[28] Samuel, K., Alejandro, B. “Performance
Modeling of Distributed E-Business Applications
Using Queuing Petri Nets”, Proceedings of IEEE
International Symposium on Performance
Analysis of Systems and Software: 2003, pp. 145–153.
[29] Samuel, K. “Performance Modeling and
Evaluation of Distributed Component-Based
System Using Queuing Petri Nets”, IEEE
Transactions on Software Engineering, (32:7), 2006, pp. 487–502.
[30] Savino-Vazquez, N., Puigjaner, R. “A
Component Model for Object-Oriented Queuing
Networks and its Integration in a Design
Technique for Performance Models”,
Proceedings of the 2001 Symposium on
Performance Evaluation of Computer and
Telecommunication System (SPECTS 2001),
Orlando, Florida: 2001.
[31] Simonetta, B., Roberto, M., Moreno, M. “Performance Evaluation of Software Architecture with Queuing Network Models”, Proceedings of ESMc’04, Paris, France: 2004.
[32] Smarkusky, D., Ammar, I.A., Sholi, H.
“Hierarchical Performance Modeling for
Distributed System Architecture”. Available @
<http://www.cs.sfu.ca/~mhefeeda/papers/ISC20
00-HPM.pdf>, 2000.
[33] Soni, D., Nord, R., Hofmeister, C. “Software Architecture in Industrial Applications”, Proceedings of the 17th International Conference on Software Engineering (ICSE17): 1995, pp. 196–207.
[34] Tekinerdogan B. “ASAAM: Aspectual Software
Architecture Analysis Method”, Early Aspects:
Aspect-Oriented Requirements Engineering and
Architecture Design Workshop, Boston, USA:
2003.
[35] Vibhu, S.S., Pankaj, J., Kishor, S.T. “Evaluating Performance Attributes of Layered Software Architecture”, CBSE 2005: Vol. 3489 of LNCS, pp. 66–81.
[36] Wise, J.C., Chang, C.K., Xia, J., Cleland-Huang,
J. “Performance Analysis Based on
Requirements Traceability”, Technical Report,
Dept of Computer Science, Iowa State
University, Iowa: 2005.
Note: This work is a revised version. The first version is [24], which was presented at the International Conference on Computer Science and Application, World Congress on Engineering and Computer Science (WCECS 2010), San Francisco, USA, October 20–22, 2010. It is one of the preliminary results of an ongoing research effort that focuses on developing a user-centric model to evaluate the performance of Distributed Software Architecture.
Acknowledgement: This research is partly sponsored by the National Science Foundation (NSF) under Grant Nos. 1036324 and 0811453 and by UNCFSP NSTI, under the supervision of Dr Victor Mbarika at the International Centre for Information Technology and Development, Southern University and A & M College, Baton Rouge, Louisiana, USA. Bells University of Technology, Ota, Ogun State, Nigeria is also acknowledged for providing partial support.
About the Authors
S.O. Olabiyisi, Ph.D. is a Senior Lecturer in the
Department of Computer Science and Engineering,
LAUTECH, Ogbomosho, Nigeria. His Research
interests are Software Performance Evaluation,
Computational Mathematics, Discrete Structures and
Softcomputing. (e-mail:tundeolabiyisi@hotmail.com)
E.O Omidiora, Ph.D. is a Senior Lecturer in the
Department of Computer Science and Engineering,
LAUTECH, Ogbomosho, Nigeria. His research
interests are Computer Architecture, Softcomputing
and e-Learning system. (e-mail:
omidiorasayo@yahoo.co.uk)
F.M.E. Uzoka, Ph.D. is a Faculty member in the
Department of Computer Science and Information
System, Mount Royal University, Calgary, Canada.
He was a Senior Lecturer in Information Systems,
University of Botswana. He conducted a two year
postdoctoral research at the University of Calgary
(2004-2005). His research interests are
Organizational Computing, Decision Support
Systems, Technology Adoption and Innovation and
Medical Informatics. He serves as a member of the editorial/review boards of a number of Information Systems journals and conferences. (e-mail: uzokafm@yahoo.com)
Boluwaji A. Akinnuwesi, Ph.D., is a Lecturer with
Department of Information Technology, Bells
University of Technology, Ota, Ogun State, Nigeria.
He is also the Director of the Computer Centre in
Bells University of Technology. He was a Research
Scholar in International Centre of Information
Technology and Development in Southern
University, Baton Rouge, Louisiana, USA. His
research area is Software Performance Engineering.
His other research interest areas are Medical
Informatics, Soft-computing, Expert System, and
Software Engineering. He is a professional member
of ACM, CPN (Computer Professional Registration
Council of Nigeria) and NCS (Nigeria Computer
Society). e-mail: akinboluade@yahoo.com.
Victor Wacham A. Mbarika, Ph.D. is the Executive Director, International Center for IT and Development (ICITD), Southern University, T. T. Allain #321, Baton Rouge, LA 70813, USA. He is Editor-in-Chief of The African Journal of Information Systems (AJIS). Phone: +1 225 715 4621 or +1 225 572 1042; Fax: +1 225 208 1046. (Email: victor@mbarika.com)
Mathieu Kokoly Kourouma, Ph.D., is a professor in
the Department of Computer Science, College of
Sciences, at Southern University and A&M College.
He holds a Bachelor's degree in Electrical and Computer Engineering from the Polytechnic Institute of the University of Conakry, Guinea, and a Master's degree and a Ph.D. in Telecommunications and Computer Engineering, respectively, from the University of Louisiana at Lafayette, U.S.A. His research areas of interest are
wireless communications, Sensor Networks,
Cognitive Radio Networks, Telecommunications,
Network Performance Analysis, Software
Engineering and Development, and Database
Design. He is a professional member of ACM, NSTA,
and AAC&U. Emails: mkkourouma@cmps.subr.edu
and mkourouma@gmail.com. Web site:
www.cmps.subr.edu. Office number: (225)771-3652.
Hyacinthe Aboudja, Ph.D. is currently visiting
Assistant Professor in the Computer Science
Department of the School of Business at Oklahoma
City University. His research interests include computer architecture, real-time systems design, theory of computing, system performance analysis, software engineering, and computer simulation of biological systems. He is a professional member of
ACM and IEEE. (email: haboudja@okcu.edu)