Software Development Lifecycle
Models
Nayan B. Ruparelia
Hewlett-Packard Enterprise Services
<nayan.ruparelia@hp.com>
DOI: 10.1145/1764810.1764814
http://doi.acm.org/10.1145/1764810.1764814
Abstract
This history column article provides a tour of the main software development life cycle (SDLC) models. (A lifecycle covers all the stages of software from its inception with requirements definition through to fielding and maintenance.) System development lifecycle models have drawn heavily on software and so the two terms can be used interchangeably in terms of SDLC, especially since software development in this respect encompasses software systems development. Because the merits of selecting and using an SDLC vary according to the environment in which software is developed as well as its application, I discuss three broad categories for consideration when analyzing the relative merits of SDLC models. I consider the waterfall model before the other models because it has had a profound effect on software development, and has additionally influenced many SDLC models prevalent today. Thereafter, I consider some of the mainstream models and finish with a discussion of what the future could hold for SDLC models.
Keywords: SDLC, SEN History Column, Waterfall, Spiral, Wheel-and-spoke, Unified, RAD, Incremental, B-model, V-model.
Introduction
SDLC is an acronym that is used to describe either software or systems development life-cycles. Although the concepts between the two are the same, one refers to the life-cycle of software whereas the other refers to that of a system that encompasses software development. In this article, software development is emphasized, although the same principles can be transmuted to systems. Also, most of the innovation and thought leadership in terms of devising models and concepts has come from software development, and systems development has borrowed heavily from software development as a result.
SDLC, then, is a conceptual framework or process that considers the structure of the stages involved in the development of an application, from its initial feasibility study through to its deployment in the field and maintenance. There are several models that describe various approaches to the SDLC process. An SDLC model is generally used to describe the steps that are followed within the life-cycle framework.
It is necessary to bear in mind that a model is different from a methodology in the sense that the former describes what to do whereas the latter, in addition, describes how to do it. So a model is descriptive whilst a methodology is prescriptive. As a result, we consider SDLC models in this article in terms of their relevance to particular types of software projects. This approach recognizes the context in which an SDLC model is used. For example, the waterfall model may be the best model to use when developing an enterprise relational database, but it may not be the optimum model when developing a web-based application. I therefore consider the models for three distinct use cases, or software categories, in order to provide a context for their application, as follows:
Category 1. Software that provides back-end functionality.
Typically, this is software that provides a service to
other applications.
Category 2. Software that provides a service to an end-user or to
an end-user application. Typically, this would be
software that encapsulates business logic or formats
data to make it more understandable to an end-user.
Category 3. Software that provides a visual interface to an end-
user. Typically, this is a front-end application that is a
graphical user interface (GUI).
SDLC models may also be categorized as falling under three broad groups: linear, iterative, and a combination of linear and iterative models. A linear model is a sequential one in which one stage, upon its completion, irrevocably leads to the initiation of the next stage. An iterative model, on the other hand, ensures that all the stages of the model will be revisited in future, the idea being that development remains a constant endeavor of improvement throughout the lifecycle. A combined model recognizes that the iterative development process can be halted at a certain stage.
Although there is an abundance of SDLC models in existence, we shall consider the most important ones, or those that have gained popularity. These include the waterfall, b, incremental, v, spiral, wheel-and-spoke, unified, and rapid application development models.
Waterfall Model
The waterfall model, also known as the cascade model, was
first documented by Benington [1] in 1956 and modified by
Winston Royce [2] in 1970. It has underpinned all other models
since it created a firm foundation for requirements to be defined
and analyzed prior to any design or development.
The original cascade model of Benington recommended that software be developed in stages: operational analysis → operational specification → design and coding specifications → development → testing → deployment → evaluation. Recognizing that there could be unforeseen design difficulties when a baseline is created at the end of each stage, Royce enhanced this model by providing a feedback loop so that each preceding stage could be revisited. This iteration is shown in Figure 1 using red arrows for the process flow; it allowed the stages to overlap in addition to the preceding stage being revisited. But Royce also felt that this arrangement could prove inadequate, since the iteration may need to transcend the succeeding-preceding stage pair. This would be the case at the end of the evaluation (or testing) stage, when you may need to iterate back to the design stage, bypassing the development stage, or at the end of the design stage, where you may need to revisit the requirements definition stage, bypassing the analysis stage. That way, the design and the requirements, respectively, are redefined should the testing or the design warrant it. This is illustrated in Figure 1 by the dashed arrows that show a more complex feedback loop. Further, Royce suggested that a preliminary design stage could be inserted between the requirements and analysis stages. He felt that this would address the central role that the design phase plays in minimizing risks by being revisited several times, as the complex feedback loop of Figure 1 shows.

(ACM SIGSOFT Software Engineering Notes, Page 8, May 2010, Volume 35, Number 3)
Figure 1: Waterfall model with Royce’s iterative feedback.
When referring to the waterfall model in this article, I shall
mean Royce’s modified version of Benington’s cascade model.
One aspect of the waterfall model is its requirement for documentation. Royce suggested that at least six distinct types of document be produced:
1) A requirements document, during the requirements stage.
2) A preliminary design specification, during the preliminary design stage.
3) An interface design specification, during the design stage.
4) A final design specification that is actively revised and updated over each visit to the design stage; this is further updated during the development and validation stages.
5) A test plan, during the design stage; this is later updated with test results during the validation or testing stage.
6) An operations manual or instructions, during the deployment stage.
Quality assurance is built into the waterfall model by splitting each stage into two parts: one part performs the work as the stage's name suggests, and the other validates or verifies it. For instance, the design stage incorporates verification (to assess whether the design is fit for purpose), the development stage has unit and integration testing, and the validation stage incorporates system testing.
The waterfall model is the most efficient model for creating software in category 1 discussed earlier. Examples would be relational databases, compilers or secure operating systems.
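The staged flow with Royce's feedback arrows can be sketched as a small state machine. This is purely illustrative: the stage names and the skip-back paths (testing back to design, design back to requirements) follow the description above, while the function and its parameters are invented for the sketch.

```python
# Illustrative sketch of Royce's waterfall with feedback loops.
STAGES = ["requirements", "analysis", "design", "development", "testing", "deployment"]

# Simple feedback: each stage may fall back to its predecessor.
FEEDBACK = {stage: STAGES[i - 1] for i, stage in enumerate(STAGES) if i > 0}
# Royce's complex feedback loop: testing may skip back to design,
# and design may skip back to requirements.
FEEDBACK_SKIP = {"testing": "design", "design": "requirements"}

def next_stage(current: str, ok: bool, major_rework: bool = False) -> str:
    """Advance on success; otherwise iterate back along a feedback arrow."""
    if ok:
        i = STAGES.index(current)
        return STAGES[i + 1] if i + 1 < len(STAGES) else "done"
    if major_rework and current in FEEDBACK_SKIP:
        return FEEDBACK_SKIP[current]
    return FEEDBACK.get(current, current)

print(next_stage("design", ok=True))                       # development
print(next_stage("testing", ok=False, major_rework=True))  # design
```

The `major_rework` flag stands in for Royce's observation that some iterations must transcend the immediate succeeding-preceding stage pair.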
B-Model
In 1988, Birrell and Ould discussed an extension to the waterfall model that they called the b-model [3]. By extending the operational life-cycle (denoted as the maintenance cycle) and then attaching it to the waterfall model, they devised the b-model as shown in Figure 2. This was done to ensure that constant improvement of the software or system would become part of the development stages. Also, they felt that an alternative to obsolescence needed to be captured so that enhanced or even newer systems could be developed as spin-offs from the initial system.
Figure 2: The b-model extends the waterfall model.
In a sense, the b-model was an attempt to modify the waterfall by creating an evolutionary enhancement process, an idea later captured by the spiral model that we shall discuss below. The b-model, however, is more suitable to the development of category 1 software, like its cousin the waterfall.
Incremental Model
The incremental model, also known as the iterative waterfall
model, can be viewed as a three dimensional representation of the
waterfall model. The z-axis contains a series of waterfall models
to represent the number of iterations that would be made in order
to improve the functionality of the end product incrementally.
The incremental model, in this sense, is a modification to the wa-
terfall that approaches the spiral model.
The main strengths of this model are:
i. Feedback from earlier iterations can be incorporated into the current iteration.
ii. Stakeholders can be involved throughout the iterations, which helps to identify architectural risks earlier.
iii. It facilitates delivery of the product with early, incremental releases that evolve toward a complete feature set with each iteration.
iv. Incremental implementation enables changes to be monitored, and issues to be isolated and resolved, mitigating risks.
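The incremental release pattern described above can be sketched as follows. This is an illustrative sketch only; the backlog items and the iteration size are invented for the example.

```python
# Sketch of incremental delivery: each iteration runs a mini-waterfall over a
# slice of the backlog and ships a release that grows the feature set.
def incremental_delivery(backlog, per_iteration=2):
    """Yield the cumulative feature set, one release per iteration."""
    shipped = []
    for i in range(0, len(backlog), per_iteration):
        increment = backlog[i:i + per_iteration]
        # design -> build -> test the increment, then release it
        shipped.extend(increment)
        yield list(shipped)

releases = list(incremental_delivery(["login", "search", "reports", "export", "audit"]))
print(releases[0])   # ['login', 'search']
print(releases[-1])  # the complete feature set after the final iteration
```

Each yielded release is usable on its own, which is what allows stakeholder feedback from one iteration to shape the next.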
V-Model
The v-model, also known as the vee model, was developed by NASA and first presented at the 1991 NCOSE symposium in Chattanooga, Tennessee [4]. The model is a variation of the waterfall model: a V shape folded in half at the lowest level of decomposition, as Figure 3 shows. The left leg of the V represents the evolution of user requirements into ever smaller components through the process of decomposition and definition; the right leg represents the integration and verification of the system components into successive levels of implementation and assembly. The vertical axis depicts the level of decomposition, from the system level at the top to the lowest level of detail, the component level, at the bottom. Thus, the more complex a system is, the deeper the V shape, with a correspondingly larger number of stages. The v-model, being three-dimensional, also has an axis that is normal to the plane, the z-axis; this represents components associated with multiple deliveries. Time is therefore represented by two axes: from left to right, and into the plane.
Figure 3: V-model's built-in quality assurance.
The v-model is symmetrical across the two legs, so that the verification and quality assurance procedures for the right leg are defined during the corresponding stage of the left leg. This ensures that the requirements and the design are verifiable in a SMART (Specific, Measurable, Achievable, Realistic and Time-bound) [5] manner, thus avoiding statements such as “user friendly,” which is a valid but non-verifiable requirement.
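The leg symmetry can be captured as a simple pairing of decomposition stages with their verification counterparts. The stage names here are typical examples, not taken from the article.

```python
# Sketch of the v-model's leg symmetry: stages at the same depth on the two
# legs are paired, so each left-leg stage defines the verification activity
# for its right-leg counterpart.
LEFT_LEG = ["user requirements", "system requirements", "architecture", "component design"]
RIGHT_LEG = ["acceptance test", "system test", "integration test", "unit test"]

# Fold the V: pair each decomposition stage with the verification stage
# at the same level of decomposition.
V_PAIRS = dict(zip(LEFT_LEG, RIGHT_LEG))

print(V_PAIRS["user requirements"])  # acceptance test
print(V_PAIRS["component design"])   # unit test
```

A deeper V for a more complex system simply means longer lists, with the pairing rule unchanged.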
A variation of the v-model is the vee+ model. This adds user involvement, risks and opportunities to the z-axis of the v-model. Thus, users remain involved until the decompositions are of no interest to them. An application of this is that any anomalies identified during the integration and assembly stages are fed to the user for acceptance or, in the case of rejection, to be resolved at the appropriate levels of decomposition; the anomalies could then be resolved as errors or accepted as design features. In contrast, the spiral model adds user involvement during its risk reduction activities, whereas the v and waterfall models incorporate it during the initial requirements definition phase. Additionally, the risks and opportunities feature of the vee+ means that certain stages of the v at the lower decomposition levels could be truncated in order to integrate COTS (commercial, off-the-shelf software) packages [6]. This means that varying levels of products, at different levels of decomposition, can be added to the life cycle in a seamless manner.
Another variation of the v-model is the vee++ model [6] that
adds intersecting processes to the vee+ model; a decomposition
analysis and resolution process is added to the left leg of the v
shape, and a verification analysis and decomposition process is
added to the right one.
Like the waterfall, the v-model is more suited to category 1 systems; its creators modified the model to the vee+ for categories 1 and 3, and thereafter created the vee++ model for all categories. However, a key strength of the v-model is that it can be used in very large projects where a number of different contractors, sub-contractors and teams are involved; decomposition, integration and verification at each stage with all the parties involved is made possible by the v-model prior to progressing to the next stage. This factor is acknowledged by the time parameter being represented by two axes of the model. Thus, a large number of parties and stakeholders in a very large project become inherently integrated using a requirements-driven approach.
Spiral Model
Boehm modified the waterfall model in 1986 [7] by introducing several iterations that spiral out from small beginnings. An oft-quoted idiom describing the philosophy underlying the spiral method is start small, think big.
Figure 4: Boehm's spiral life-cycle.
The spiral model attempts to address two main difficulties of
the waterfall model:
i. the two design stages can be unnecessary in some
situations,
ii. the top-down approach needs to be tempered with a look-
ahead step in order to account for software reusability or
the identification of risks and issues at an early stage.
The spiral model represents a paradigm shift from the waterfall's specification-driven approach to a risk-driven one. Each cycle traverses four quadrants, as illustrated by Figure 4. These are:
i. Determine objectives.
ii. Evaluate alternatives, and identify and resolve risks.
iii. Develop and test.
iv. Plan the next iteration.
As each cycle within the spiral progresses, a prototype is built, verified against its requirements and validated through testing. Prior to this, risks are identified and analyzed in order to manage them. The risks are then categorized as performance (or user-interface) related risks or development risks. If development risks predominate, then the next step follows the incremental waterfall approach. On the other hand, if performance risks predominate, then the next step is to follow the spiral through to the next evolutionary development. This ensures that the prototype produced in the second quadrant has minimal risks associated with it.
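The risk-driven branch between the two categories of risk might be sketched as follows. The scoring scheme is hypothetical; the article does not prescribe how risks are quantified, only that the dominant category determines the next step.

```python
# Sketch of the spiral's risk-driven branch: development risks push the next
# cycle toward incremental-waterfall steps, while performance (user-interface)
# risks push toward further evolutionary prototyping.
def next_cycle_strategy(risks: dict) -> str:
    """risks maps labeled risk names to scores; labels are an assumed convention."""
    dev = sum(v for k, v in risks.items() if k.startswith("dev:"))
    perf = sum(v for k, v in risks.items() if k.startswith("perf:"))
    return "incremental-waterfall" if dev >= perf else "evolutionary-prototype"

print(next_cycle_strategy({"dev:integration": 5, "perf:latency": 2}))  # incremental-waterfall
print(next_cycle_strategy({"perf:usability": 4, "dev:tooling": 1}))    # evolutionary-prototype
```

In practice the scores would come out of the risk identification and analysis step at the start of each cycle.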
Risk management is used to determine the amount of time and effort to be expended on all activities during the cycle, such as planning, project management, configuration management, quality assurance, formal verification and testing. Hence, risk management is used as a tool to control the costs of each cycle.
An important feature of the spiral model is the review that takes place as part of the y-axis of Figure 4. This process occurs at the completion of each cycle and covers all of the products and artifacts produced during the previous cycle, including the plans for the next cycle as well as the resources required for it. A primary objective of the review process is to ensure that stakeholders are committed to the approach to be taken during the next cycle.
A set of fundamental questions arises about the spiral model:
i. How does the spiral get started?
ii. How and when do you get off the spiral?
iii. How does system enhancement or maintenance occur?
Boehm's response to these questions was to use a complementary model that he called the Mission Opportunity Model (MOM) [7]. Using the MOM as a central pivot, the operational mission is assessed at key junctures as to whether further development or effort should be expended. This is done by testing the spiral against a set of hypotheses (requirements) at any given time; failure to meet those hypotheses leads to the spiral being terminated. To initiate the spiral, the same process is followed using the MOM.
In 1987, Iivari suggested a modification to the spiral model in the January issue of this publication [8] that he called the hierarchical spiral model. He proposed sub-dividing each quadrant of the spiral into two parts to allow for sub-phases. With this approach, he provided for baselines and milestones to be created for each cycle in order to have greater control over that cycle. Also, he suggested that the main phases be risk-driven, as with the spiral, but that the sub-phases be specification or process driven, as in the case of the waterfall. This combination of the risk and specification paradigms would make the hierarchical spiral model cost conscious (like the spiral) as well as disciplined (like the waterfall).
A key benefit of the spiral model is that it attempts to contain project risks and costs at the outset. Its key application is the category 1 use case; however, sufficient control of the cycles could make it adoptable for the other categories as well. The main difficulty of the spiral is that it requires very adaptive project management and quite flexible contract mechanisms between the stakeholders across each cycle of the spiral, and it relies heavily on the system designers' ability to identify risks correctly for the forthcoming cycle.
Wheel-and-spoke Model
The wheel-and-spoke model is based on the spiral model and is designed to work with smaller teams initially, which then scale up to build value faster. It is a bottom-up approach that retains most of the elements of the spiral model, comprising multiple iterations.
First, system requirements are established and a preliminary design is created. Thereafter, a prototype is created during the implementation stage. This is then verified against the requirements to form the first spoke of the model. Feedback is propagated back to the development cycle, and the next stage adds value to create a more refined prototype. This is evaluated against the requirements and feedback is propagated back to the cycle; this forms the second spoke. Each successive stage goes through a similar verification process to form a spoke. The wheel represents the iteration of the development cycle.
The wheel-and-spoke model has many uses. It can be used to create a set of programs that are related by a common API. The common API then forms the center of the model, with each program verifying its conformance to the API (this in essence forms a common set of requirements) at each spoke. Another application of this model is with the Architecture Development Method (ADM) of The Open Group Architecture Framework (TOGAF) [9], where the spokes are used to validate the requirements during the development of the architecture, as Figure 5 shows.
Like the spiral, it is more amenable to category 1 and 2 use
cases.
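The common-API use of the model can be illustrated with a hub interface and a conformance check at each spoke. The `StorageAPI` interface and `MemoryStore` program are invented for this sketch.

```python
# Sketch of the common-API hub: the shared interface sits at the center of the
# wheel, and each program on a spoke is checked for conformance to it.
from abc import ABC, abstractmethod

class StorageAPI(ABC):          # the hub: a common API shared by all programs
    @abstractmethod
    def get(self, key): ...
    @abstractmethod
    def put(self, key, value): ...

class MemoryStore(StorageAPI):  # one program on a spoke
    def __init__(self):
        self._d = {}
    def get(self, key):
        return self._d.get(key)
    def put(self, key, value):
        self._d[key] = value

def conforms(cls) -> bool:
    """Spoke check: the class implements every abstract method of the hub API."""
    return issubclass(cls, StorageAPI) and not getattr(cls, "__abstractmethods__", None)

print(conforms(MemoryStore))    # True
print(conforms(StorageAPI))     # False (the hub itself is abstract)
```

Each new program added to the wheel would pass through the same `conforms` check, playing the role of the verification step at its spoke.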
Figure 5: TOGAF's ADM as a wheel-and-spoke model.
Unified Process Model
Whereas the waterfall is specification driven and the spiral is
risk driven, the unified process model is model (or architecture)
based and use case driven [10]; at the same time, it is iterative in
nature.
The unified model was created to address the specific development requirements of object-oriented software and its design. A descendant of the Objectory model, it was developed in the 1990s by Rational Software; it is therefore commonly known as the Rational Unified Process (RUP) model. IBM acquired Rational Software in 2003 and continues to develop and market the process as part of various software development toolsets.
RUP encapsulates seven best practices within the model:
i. Develop iteratively using risk management to drive the
iterations.
ii. Manage requirements.
iii. Employ a component-based architecture.
iv. Use visual models.
v. Verify quality continuously.
vi. Control changes.
vii. Use customization.
Figure 6: Unified process model.
RUP uses models extensively, and these are described using the Unified Modeling Language (UML), which comprises a collection of semi-formal graphical notations and has become the standard tool for object-oriented modeling. It facilitates the construction of several views of a software system, and supports both static and dynamic modeling. The notations include use case, activity, class, object, interaction and state diagrams, amongst others. Use cases are central to RUP because they lay the foundation for subsequent development. They provide a functional view by modeling the way users interact with the system. Each use case describes a single use of the system, and provides a set of high-level requirements for it. Moreover, use cases drive the design, implementation and testing (indeed, the entire development process) of the software system.
RUP is iterative and incremental as it repeats over a series of iterations that make up the life cycle of a system. An iteration occurs during a phase and consists of one pass through the requirements, analysis, design, implementation and test workflows (based on disciplines), building a series of inter-related UML models.
As Figure 6 shows, RUP has four phases: inception, elaboration, construction and transition. Each phase may comprise any number of iterations, engaging six core disciplines (shown horizontally in the figure). These disciplines are supported by three other disciplines: change and configuration management, project management, and environment.
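The phase, iteration and discipline structure can be sketched as a nested loop. The iteration counts and the exact discipline names here are illustrative assumptions rather than a definitive rendering of RUP.

```python
# Sketch of RUP's grid: each phase runs one or more iterations, and each
# iteration passes through the core disciplines.
PHASES = ["inception", "elaboration", "construction", "transition"]
DISCIPLINES = ["business modeling", "requirements", "analysis & design",
               "implementation", "test", "deployment"]

def plan(iterations_per_phase):
    """iterations_per_phase: dict of phase -> iteration count (defaults to 1)."""
    return [(phase, i + 1, d)
            for phase in PHASES
            for i in range(iterations_per_phase.get(phase, 1))
            for d in DISCIPLINES]

steps = plan({"construction": 2})      # construction gets two iterations
print(steps[0])    # ('inception', 1, 'business modeling')
print(len(steps))  # 5 iterations x 6 disciplines = 30 steps
```

The weighting of effort across disciplines shifts from phase to phase in RUP; this flat enumeration ignores that, showing only the iterative structure.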
Although more suitable for category 2 and 3 applications, RUP could be adapted for mid-scale category 1 applications provided that risk and contractual links are well managed. However, for very large system development, RUP would not be the ideal model to use because of its model-based and use-case driven approach; the iterative waterfall, the v-model (including its cousins, vee+ and vee++), and the spiral models would be more suitable. At the other extreme, for very small projects, RUP could be used provided that the number and type of artifacts are optimized to the minimum required for rapid development.
Rapid Application Development
Principally developed by James Martin in 1991, Rapid Application Development (RAD) is a methodology that uses prototyping as a mechanism for iterative development, as per Figure 7.
Figure 7: Prototyping approach used by RAD.
RAD promotes a collaborative atmosphere where business stakeholders participate actively in prototyping, creating test cases and performing unit tests. Decision making is devolved away from a centralized structure (usually comprising the project manager and developers) to the functional team.
The open source software development model (also known as the Cathedral and the Bazaar model, first documented by Raymond [11]), espousing a 'release early; release often; listen to your customers' philosophy, is quite similar to RAD and some of its spin-off methodologies such as Agile.
Recently, RAD has come to be used in a broader, generic sense that encompasses a variety of techniques aimed at speeding up software development. Of these, five prominent approaches are discussed below in alphabetical order.
Agile
Scope changes, as well as feature creep, are avoided by breaking a project into smaller sub-projects. Development occurs in short intervals, and software releases are made to capture small incremental changes.
Applying Agile to large projects can be problematic because it emphasizes real-time communication, preferably on a personal, face-to-face basis. Also, Agile methods produce little documentation during development (requiring a significant amount of post-project documentation) whilst de-emphasizing formal, process-driven steps. Thus, it is more suited to a category 3 application.
Extreme Programming (XP)
With XP, development takes place incrementally and on the fly, with a business champion acting as a conduit for user-driven requirements and functionality; there is no initial design stage. (This could lead to higher costs later, when new, incompatible requirements surface.) In order to lower costs, new requirements are accounted for in short, fast spiral steps, and development takes place with programmers working in pairs.
Joint Application Development (JAD)
JAD advocates collaborative development with the end user or customer by involving them during the design and development phases through workshops (known as JAD sessions). This risks scope creep in a project if the customer's requirements are not managed well.
Lean Development (LD)
In pursuit of the maxim '80% today is better than 100% tomorrow,' Lean Development attempts to deliver a project early with minimal functionality.
Scrum
Development takes place over a series of short iterations, or sprints, and progress is measured daily. Scrum is more suited to small projects because it places less emphasis on a process-driven framework (needed on large projects), and its central command structure can cause power struggles in large teams.
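A sprint's daily burn-down, as described, might be sketched as follows. The sprint capacity and the one-item-per-day completion rate are invented for the illustration.

```python
# Sketch of Scrum's cadence: a short fixed-capacity sprint with daily
# progress measurement recorded as a burn-down of remaining items.
def run_sprint(backlog, capacity=3):
    """Pull up to `capacity` items into the sprint; burn one down per day."""
    sprint = backlog[:capacity]
    remaining = list(sprint)
    burndown = [len(remaining)]                # items remaining at sprint start
    for item in sprint:                        # one item completed per day
        remaining.remove(item)
        burndown.append(len(remaining))        # the daily progress measurement
    return burndown

print(run_sprint(["a", "b", "c", "d"]))  # [3, 2, 1, 0]
```

Items beyond the sprint capacity (here, `"d"`) stay on the backlog for a later sprint, which is how scope is contained per iteration.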
RAD, as well as its affiliated methodologies described above, is generally much less suited to category 1 and 2 applications and much more suited to category 3 applications because of its emphasis on less formal, process-independent approaches.
The Future
This history column invariably ends with a consideration of the future; history's importance is in creating a framework for the future. Although in the past software development has influenced systems design in an inordinate way, there should be a fusion of knowledge sharing between the two in both directions in future. Moreover, SDLC models for both disciplines can draw from domains outside their technological boundaries. A large amount of work has been done in other spheres, such as behavior analysis, time management, and business management (the Japanese Kanban or Just-In-Time model, for example), that could be borrowed by future SDLC models. To some extent, this is already happening with some models. For example, the Behavior Driven Development technique (which facilitates collaboration between various participants), part of the Agile model, has drawn from behavior analysis. This sharing of ideas from disparate sources, incorporated into SDLC models, should gather momentum in future.
Since there is a proliferation of SDLC models, it would be good to have a central repository, perhaps hosted on a wiki, where various experiences could be shared. This would fulfill two purposes.
First, the insatiable desire to compare models would be addressed. However, this should be done in the context of the application of the SDLC, much like the three categories proffered in this article. This is important because SDLC models cannot be compared easily without considering their particular category of application.
Second, a repository containing lessons learned would enable key parameters to be distilled from each model, and a statistical analysis to be performed on their use and efficacy in various categories of projects. Parameters for comparison could also include non-functional attributes such as risks and costs at each stage. This would provide a better foundation for making decisions, since the lessons-learned analysis would come from experience on the ground rather than just theory.
Perhaps the ACM could host such a lessons-learned wiki and a back-end statistical analysis engine. This could be applied beyond SDLCs to encompass a wide range of techniques and technologies.
References
[1] Benington, H.D. (1956): Production of large computer programs. In Proceedings, ONR Symposium on Advanced Programming Methods for Digital Computers, June 1956, pp. 15-27.
[2] Royce, Winston W. (1970): Managing the development of large software systems. In Proceedings, IEEE WESCON, August 1970, pp. 1-9.
[3] Birrell, N.D. and Ould, M.A. (1988): A Practical Handbook to Software Development; Cambridge University Press. ISBN 978-0521347921, pp. 3-12.
[4] Forsberg, Kevin and Mooz, Harold (1991): The relationship of system engineering to the project cycle. At NCOSE, Chattanooga, Tennessee, October 21-23, 1991.
[5] Doran, George T. (1981): There's a S.M.A.R.T. way to write management's goals and objectives. In Management Review, vol. 70, no. 11.
[6] Mooz, H. and Forsberg, K. (2001): A visual explanation of the development methods and strategies including the waterfall, spiral, vee, vee+, and vee++ models, pp. 4-6.
[7] Boehm, Barry W. (1986): A spiral model of software development and enhancement. In ACM SIGSOFT Software Engineering Notes, vol. 11, no. 4, 1986, pp. 22-42.
[8] Iivari, Juhani (1987): A hierarchical spiral model for the software process: notes on Boehm's spiral model. In ACM SIGSOFT Software Engineering Notes, vol. 12, no. 1, January 1987, pp. 35-37.
[9] TOGAF 9: The Open Group; http://www.opengroup.org/togaf/.
[10] Jacobson, I., Booch, G. and Rumbaugh, J. (1999): The Unified Software Development Process; Addison-Wesley, Reading, Massachusetts.
[11] Raymond, Eric (2001): The Cathedral and the Bazaar, 1st Edition; O'Reilly Media; ISBN 978-0596001087.
A
CM SIGSOFT Software Engineering Notes Page 13
May 2010 Volume 35 Number 3
... According to English grammar, the meaning of Software Development Life Cycle Methodology is a software development life cycle methodology, which means it is a methodology used for the process of making and changing systems. A system is usually a computer or information system (Ruparelia, 2010). ...
Article
Full-text available
Islamic studies in Indonesia are rapidly expanding in the financial realm. Sharia-based financial calculations are an important element in ensuring compliance with Islamic Sharia in various financial products and services affiliated with the halal food industry. The principle of sharia calculation in the halal industry includes the formulation of prohibitions such as interest (riba), gambling, fraud, and loss. As a Muslim-majority nation, Indonesia holds significant potential to lead in Islamic economics, particularly in finance. However, the management of information systems in the halal food sector remains underdeveloped, with many halal companies relying on conventional systems instead of Sharia-compliant solutions. This study seeks to address the identified issue by developing a Sharia-compliant information system for the food and beverage industry where the existing information system acquires halal, infaq, sadaqah, and waqf calculations by uploading without involving tax obligations. Data collection was conducted from halal-certified companies with branch offices in Semarang and Surakarta, while user requirements were defined through technical discussions with main managers and branch heads to ensure system relevance. Implementation of system development methods using a prototype approach by acquiring system development theory. In its contribution, feedback from users who are business people in the halal industry includes sharia after tax, displaying suppliers of products that have the legality of halal certification and product prices that are above the price of potential products: so that even though halal production costs more, the resulting information system will be able to control these costs. This system is able to reduce and control costs so that the company remains in the profit corridor.
... Already in 1990, the SDLC was defined in the IEEE Standard Glossary of Software Engineering Terminology as the "period of time that begins with the decision to develop a software product and ends when the software is delivered" [10]. Ruparelia likewise states that the SDLC is a conceptual framework structured into main phases, each of which is necessary for successful software development [11]. ...
Conference Paper
As Generative AI can potentially revolutionize the software engineering industry, it is necessary to look at the distinctive roles humans play within this field. Research was conducted to identify the essential human contribution by analyzing the Software Development Lifecycle (SDLC) and evaluating human responsibilities within this framework. Each responsibility was then assessed for potential vulnerability to replacement through Generative AI. Through this analysis, the essential human contribution within the field of software engineering was identified, including domain knowledge, comprehensive perceptiveness, situational flexibility, originality, and empathy. With the essential human contribution now recognized, important questions emerge regarding the future of software engineering education. Should educational programs focus on nurturing the skills required to perform these irreplaceable human roles? Is teaching skills that may soon be rendered obsolete by advancing AI technologies still necessary? The study not only raises these critical questions as Generative AI continues to reshape the software engineering profession but also emphasizes the importance of broad knowledge about the functionality of generative AI alongside these core human contributions. Additionally, it underscores the ethical challenges accompanying the increasing integration of Generative AI, particularly as the rapid pace of technological advancement may outstrip the ability of current professionals to adapt.
... To create a successful software application, software development teams of researchers and developers usually follow a particular software development model such as the waterfall (Petersen et al., 2009), V-model (Ruparelia, 2010), agile (Zhang and Patel, 2010), spiral (Boehm, 1988), or incremental model (Larman and Basili, 2003). In all these ...
Article
Introduction Requirements classification is an essential task for the development of successful software, incorporating all relevant aspects of users' needs. It also aids in identifying project failure risks and helps achieve project milestones more comprehensively. Several machine learning predictors have been developed for binary or multi-class requirements classification. However, few predictors are designed for multi-label classification, and those that exist are of limited practical use due to low predictive performance. Method MLR-Predictor makes use of the Okapi BM25 model to transform requirements text into statistical vectors by computing informative word patterns. Moreover, the predictor transforms the multi-label requirements classification data into a multi-class classification problem and utilizes a logistic regression classifier to categorize requirements. The performance of the proposed predictor is evaluated and compared with 123 machine learning and 9 deep learning-based predictive pipelines across three public benchmark requirements classification datasets using eight different evaluation measures. Results The large-scale experimental results demonstrate that the proposed MLR-Predictor outperforms the 123 adopted machine learning and 9 deep learning predictive pipelines, as well as the state-of-the-art requirements classification predictor. Specifically, in comparison to the state-of-the-art predictor, it achieves a 13% improvement in macro F1-measure on the PROMISE dataset, a 1% improvement on the EHR-binary dataset, and a 2.5% improvement on the EHR-multiclass dataset. Discussion As a case study, the generalizability of the proposed predictor is evaluated on software customer review classification data. In this context, the proposed predictor outperformed the state-of-the-art BERT language model by an F1-score of 1.4%. These findings underscore the robustness and effectiveness of the proposed MLR-Predictor in various contexts, establishing its utility as a promising solution for the requirements classification task.
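The Okapi BM25 weighting mentioned in the abstract can be sketched in a few lines of plain Python. This is a minimal illustration of the standard BM25 term-weighting formula, not the paper's own MLR-Predictor code; the function name and token representation are assumptions for the example.

```python
import math
from collections import Counter

def bm25_vector(doc_tokens, corpus, k1=1.5, b=0.75):
    """Okapi BM25 weight for each term of one document against a corpus.

    `corpus` is a list of tokenized documents; `doc_tokens` is the
    document being vectorized (here, one requirement statement).
    """
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N          # average document length
    df = Counter()                                   # document frequency per term
    for d in corpus:
        df.update(set(d))
    tf = Counter(doc_tokens)                         # term frequency in this doc
    dl = len(doc_tokens)
    weights = {}
    for term, f in tf.items():
        idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
        weights[term] = idf * f * (k1 + 1) / (f + k1 * (1 - b + b * dl / avgdl))
    return weights
```

Terms that are rare across the corpus receive larger weights than ubiquitous ones; the resulting sparse vectors can then feed a downstream classifier such as logistic regression, which is the pipeline shape the abstract describes.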
... The System Development Life Cycle (SDLC) is the process of building and changing systems, together with the models and methodologies used to develop a system [12]. The SDLC is also a pattern used to develop software systems, consisting of stages that include planning, analysis, design, implementation, testing, and maintenance [13]. The SDLC model used in this research is the waterfall model. ...
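The stage sequence named in the snippet can be pictured as a strictly ordered pipeline, which is the defining property of the waterfall model. A minimal sketch, where the phase names come from the snippet and the handler mechanism is a hypothetical illustration:

```python
from typing import Callable, Dict, List

# The six SDLC stages named in the snippet, in waterfall order.
PHASES: List[str] = [
    "planning", "analysis", "design",
    "implementation", "testing", "maintenance",
]

def run_waterfall(handlers: Dict[str, Callable[[dict], dict]],
                  artifact: dict) -> dict:
    """Run every phase strictly in sequence: each phase must finish
    before the next begins, which is what makes this a waterfall."""
    for phase in PHASES:
        handler = handlers.get(phase, lambda a: a)   # default: pass through
        artifact = handler(artifact)
        artifact.setdefault("completed", []).append(phase)
    return artifact

result = run_waterfall({}, {})   # no custom handlers: just walk the phases
```

The point of the sketch is the control flow, not the handlers: there is no loop back to an earlier phase, which is exactly the rigidity that iterative models such as the spiral were designed to relax.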
Article
The recording of marriage events at the KUA (the Office of Religious Affairs) is still managed manually and does not take full advantage of information technology. The processes of inputting, managing, and storing marriage-event records use only Microsoft Office applications, and some data are even written by hand, so producing reports on recorded marriage events is still relatively slow and inefficient; errors and data loss are frequent, forcing the KUA to search for the data again. To overcome this problem, an application was designed to speed up the recording of marriage events at the KUA. Design and implementation used Visual Studio 2012, with MySQL as the database. The resulting application supports the recording of marriage events more effectively and efficiently.
... An INCOSE project explored agile software, firmware, and hardware systems engineering processes in real-world cases in the field [12]. This project resulted in an ASE life cycle model framework (see Figure 1), as Dove and Schindel describe [13] [17]. These models can be distilled into four core steps: (1) specification, (2) design, (3) validation, and (4) evolution [18]. ...
Conference Paper
Users of services increasingly require a more customised user experience, and, historically, the banking industry has needed to respond to these changes despite its legacy system of systems. To address this, service systems engineering has evolved as a disciplined, systematic, service-orientated and client-centric approach. Financial service organisations have also started adopting agile development practices, with mixed results in terms of their effectiveness. This research aims to develop and evaluate an agile service systems engineering artefact that fosters value co-creation in a socio-technical environment and that can be used to systematically design digital service systems in a financial services organisation to rapidly deploy highly customised client value propositions. The design science research methodology is used to design, develop and test the conceptual framework. The conceptual framework seeks to address these problems by adhering to the following principles: the architecture of the system is agile, the design process is socio-technical with value co-created, and the process is measurable.
... The primary motivation behind the development of these applications is to automate time-consuming and labor-intensive tasks by utilizing cutting-edge Artificial Intelligence (AI) algorithms [3,4]. In the software systems domain, to facilitate the development of large software systems, researchers have proposed numerous software development models such as the waterfall model [5], V-model [6], spiral model [7], and agile model [8]. In all these models, requirements engineering is a fundamental and essential process comprising several tasks [9]. ...
Preprint
Traditional language models have been extensively evaluated for the software engineering domain; however, the potential of ChatGPT and Gemini has not been fully explored. To fill this gap, this paper presents a comprehensive case study investigating the potential of both language models for the development of diverse types of requirements engineering applications. It explores in depth the impact of varying levels of expert-knowledge prompts on the prediction accuracies of both language models. Across 4 different public benchmark datasets of requirements engineering tasks, it compares the performance of both language models with existing task-specific machine/deep learning predictors and traditional language models. Specifically, the paper utilizes 4 benchmark datasets: PURE (7,445 samples, requirements extraction), PROMISE (622 samples, requirements classification), REQuestA (300 question-answer (QA) pairs), and Aerospace (6,347 words, requirements NER tagging). Our experiments reveal that, in comparison to ChatGPT, Gemini requires more careful prompt engineering to provide accurate predictions. Moreover, on the requirements extraction benchmark dataset the state-of-the-art F1-score is 0.86, while ChatGPT and Gemini achieved 0.76 and 0.77, respectively. The state-of-the-art F1-score on the requirements classification dataset is 0.96, and both language models achieved 0.78. In the named entity recognition (NER) task the state-of-the-art F1-score is 0.92; ChatGPT managed to produce 0.36, and Gemini 0.25. Similarly, on the question-answering dataset the state-of-the-art F1-score is 0.90, and ChatGPT and Gemini managed to produce 0.91 and 0.88, respectively. Our experiments show that Gemini requires more precise prompt engineering than ChatGPT. Except for question answering, both models underperform compared to current state-of-the-art predictors across the other tasks.
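The F1-scores compared throughout the abstract are, for multi-class tasks, typically computed per class and then averaged (macro F1), so that rare requirement classes count as much as frequent ones. As a reference, a minimal macro-F1 implementation (not the paper's own evaluation code) looks like:

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Macro F1: compute F1 per class, then average with equal
    class weight, so rare classes count as much as frequent ones."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1   # predicted p, but the truth was t
            fn[t] += 1   # missed an instance of t
    f1s = []
    for c in set(y_true) | set(y_pred):
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)
```

Because each class contributes equally, a model that ignores minority requirement types is penalised heavily under this measure, which is why macro F1 is the headline metric in classification comparisons like the ones above.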
Article
This comprehensive study investigates the transformative effects of integrating artificial intelligence (AI) technologies into financial software development processes. As the financial sector increasingly relies on sophisticated software solutions, the potential for AI to enhance efficiency, accuracy, and overall performance in development cycles has become a subject of significant interest. This research examines various AI applications within financial software development, including automated code generation, intelligent debugging, predictive maintenance, and AI-assisted testing. Through a mixed-methods approach combining quantitative analysis of development metrics and qualitative insights from industry professionals, we evaluate the tangible impacts of AI integration on key performance indicators such as development speed, code quality, and resource utilization. Our findings reveal substantial improvements in efficiency and performance across multiple dimensions of the software development lifecycle, while also highlighting challenges and considerations for successful AI implementation. This study contributes to the growing body of knowledge on AI in software engineering and provides valuable insights for financial institutions and software development teams considering or currently implementing AI-driven development strategies.
Article
I anatomize a successful open-source project, fetchmail, that was run as a deliberate test of some surprising theories about software engineering suggested by the history of Linux. I discuss these theories in terms of two fundamentally different development styles, the "cathedral" model of most of the commercial world versus the "bazaar" model of the Linux world. I show that these models derive from opposing assumptions about the nature of the software-debugging task. I then make a sustained argument from the Linux experience for the proposition that "Given enough eyeballs, all bugs are shallow", suggest productive analogies with other self-correcting systems of selfish agents, and conclude with some exploration of the implications of this insight for the future of software.
Article
A new way of portraying the technical aspect of the project cycle clarifies the role and responsibility of system engineering to a project. This new three dimensional graphic illustrates the end-to-end involvement of system engineering in the project cycle, clarifies the relationship of system engineering and design engineering, and encourages the implementation of concurrent engineering.
Article
Barry B. Boehm suggested an interesting spiral model for the software development process at the International Workshop on the Software Process and Software Environments last year (Boe86). The model is particularly interesting to me, since it seems to be highly consistent with some of the basic ideas of the PIOCO model for the information systems (IS) design process (Iiv82, Iiv83a, IiKo86) and also with its application to the development of embedded software (IiKT86). For instance, he characterizes the spiral model as a risk-driven approach. In the PIOCO model we have chosen to use the term uncertainty instead of risk, emphasizing that the IS/SW process, particularly in its earlier phases, is information production for the steering committee deciding about the IS/SW product and process (Iiv83a, Iiv86). In Iiv83a and b we have also formalized this idea in terms of Information Economics (Mar74), leading to the conclusion that IS/SW elaboration should be directed to those aspects of the IS/SW process in which the uncertainty, and therefore the potential value of the new information, is greatest (cf. Boe86).
Article
Considerable confusion continues relative to development methods, development strategies, and delivery strategies. This confusion is prevalent in routine engineering discussions and in recent textbooks. Method models such as the Waterfall, Spiral, Vee, Vee+, and Vee++ offer a variety of software and system development approaches each with a specific emphasis. Each of these models can be applied to an incremental or evolutionary strategy according to the nature of the project and each of these in turn can be produced in single or multiple deliveries. This paper addresses the choosing of a technical development strategy, which consists of selecting and implementing a combination of the most appropriate development method, a development strategy, and a delivery strategy.
Article
The designer of a software system, like the architect of a building, needs to be aware of the construction techniques available and to choose the ones that are the most appropriate. This book provides the implementer of software systems with a guide to 25 different techniques for the complete development processes, from system definition through design and into production. The techniques are described against a common background of the traditional development path, its activities and deliverable items. In addition the concepts of metrics and indicators are introduced as tools for both technical and managerial monitoring and control of progress and quality. The book is intended to widen the mental toolkit of system developers and their managers, and will also introduce students of computer science to the practical side of software development. With its wide-ranging treatment of the techniques available and the practical guidance it offers, it will prove an important and valuable work.
Conference Paper
The paper is adapted from a presentation at a symposium on advanced programming methods for digital computers sponsored by the Navy Mathematical Computing Advisory Panel and the Office of Naval Research in June 1956. The author describes the techniques used to produce the programs for the Semi-Automatic Ground Environment (SAGE) system.