Software Engineering Research and Applications: Second International Conference, SERA 2004, Los Angeles, CA, USA, May 5-7, 2004, Revised Selected Papers
Abstract
It was our great pleasure to extend a welcome to all who participated in SERA 2004, the second International Conference on Software Engineering Research, Management and Applications, held at the Omni Hotel, Los Angeles, California, USA. The conference would not have been possible without the cooperation of Seoul National University, Korea, the University of Lübeck, Germany, and Central Michigan University, USA. SERA 2004 was sponsored by the International Association for Computer and Information Science (ACIS). The conference brought together researchers, practitioners, and advanced graduate students to exchange and share their experiences, new ideas, and research results in all aspects (theory, applications, and tools) of software engineering research and applications. At this conference, we had keynote speeches by Barry Boehm, C.V. Ramamoorthy, Raymond Yeh, and Con Kenney. We would like to thank the publicity chairs, the members of our program committees, and everyone else who helped with the conference for their hard work and the time they dedicated to SERA 2004. We hope that SERA 2004 was enjoyable for all participants.

Barry Boehm, May 2004

Preface

The 2nd ACIS International Conference on Software Engineering – Research, Management and Applications (SERA 2004) was held at the Omni Hotel in Los Angeles, California, during May 5-7, 2004. The conference particularly welcomes contributions at the junction of theory and practice, disseminating basic research with immediate impact on practical applications. The SERA conference series has witnessed a short but successful history.
Chapters (18)
The black-box view of an interactive component in a distributed system concentrates on the input/output behaviour based on
communication histories. The glass-box view discloses the component’s internal state with inputs effecting an update of the
state. The black-box view is modelled by a stream processing function, the glass-box view by a state transition machine. We
present a formal method for transforming a stream processing function into a state transition machine with input and output.
We introduce states as abstractions of the input history and derive the machine’s transition functions using history abstractions.
The state refinement is illustrated with three applications, viz. an iterator component, a scan component, and an interactive
stack.
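For readers unfamiliar with the two views, the following is a minimal sketch (not the paper's formal derivation, and with Python standing in for the stream-processing formalism) of one of the named applications, the interactive stack: the black-box view as a function over the whole input history, and its glass-box refinement as a state transition machine whose state abstracts that history.

```python
# Illustrative sketch only: a toy interactive stack, not the paper's formal method.
# Black-box view: a stream processing function mapping the whole input history
# to the output history. Glass-box view: a state transition machine whose state
# abstracts the input history (here: the values currently on the stack).

def stack_stream(inputs):
    """Black-box view: map an input history to an output history."""
    outputs, stack = [], []
    for msg in inputs:
        if msg[0] == "push":
            stack.append(msg[1])
        elif msg == ("pop",):
            outputs.append(stack.pop() if stack else "empty")
    return outputs

def stack_step(state, msg):
    """Glass-box view: one transition of the derived state machine.

    The state is an abstraction of the input history: only the values still
    on the stack are remembered; the rest of the history is forgotten.
    """
    if msg[0] == "push":
        return state + [msg[1]], []          # new state, no output
    if msg == ("pop",):
        if state:
            return state[:-1], [state[-1]]   # emit the top element
        return state, ["empty"]
    return state, []

def run_machine(inputs, state=()):
    """Run the state machine over a history and collect its outputs."""
    state, outputs = list(state), []
    for msg in inputs:
        state, out = stack_step(state, msg)
        outputs.extend(out)
    return outputs

history = [("push", 1), ("push", 2), ("pop",), ("pop",), ("pop",)]
assert stack_stream(history) == run_machine(history)  # both views agree
```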
NuEditor is a tool suite supporting the specification and verification of software requirements written in NuSCR. NuSCR extends the SCR (Software Cost Reduction) notation, which has been used to specify requirements for embedded safety-critical systems such as a shutdown system for a nuclear power plant. SCR depended almost exclusively on fine-grained tabular notations to represent not only computation-intensive functions but also time- or state-dependent operations. As a consequence, requirements became excessively complex and difficult to understand. NuSCR supports intuitive and concise notations. For example, automata are used to capture time- or state-dependent operations, and concise tabular notations are made possible by allowing complex but proven-correct equations to be used without having to decompose them into a sequence of primitive operations. NuEditor provides a graphical editing environment and supports static analysis to detect errors such as missing or conflicting requirements. To provide high-assurance safety analysis, NuEditor can automatically translate a NuSCR specification into SMV input so that the satisfaction of certain properties can be determined automatically through exhaustive examination of all possible behaviors. NuEditor has been programmed to generate requirements as an XML document so that other verification tools such as PVS can also be used if needed. We have used NuEditor to specify the trip logic of the RPS (Reactor Protection System) BP (Bistable Processor), which is part of a software-implemented nuclear power plant shutdown system, and to verify its correctness. Domain experts found NuSCR and NuEditor to be useful and qualified for industrial use in nuclear engineering.
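The analysis itself is not detailed in the abstract; as a rough, hypothetical illustration of what "missing or conflicting requirements" means for a tabular specification, the sketch below exhaustively checks a toy two-flag condition table, in the spirit of (but much simpler than) NuEditor's static analysis and SMV-based exhaustive checking.

```python
# Illustrative sketch only, not NuEditor's implementation: completeness and
# consistency checks over a small, made-up tabular requirement.
from itertools import product

# Hypothetical trip-logic-like table: each row is (guard, output).
# The (hi=True, lo=True) case is deliberately left uncovered so the check
# reports a missing requirement.
table = [
    (lambda hi, lo: hi and not lo, "TRIP"),
    (lambda hi, lo: not hi,        "NORMAL"),
]

missing, conflicting = [], []
for hi, lo in product([False, True], repeat=2):       # exhaustive domain
    fired = [out for guard, out in table if guard(hi, lo)]
    if not fired:
        missing.append((hi, lo))                       # no row applies
    elif len(set(fired)) > 1:
        conflicting.append(((hi, lo), fired))          # rows disagree

print("missing cases:    ", missing)
print("conflicting cases:", conflicting)
```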
As software systems become more complex and more important for business and everyday life, the need to better address non-functional requirements (NFRs) becomes increasingly crucial. However, UML, and in particular use case modeling (the current de facto standard method for functional requirements elicitation and modeling), lacks equally mature modeling constructs for dealing with NFRs. This paper proposes a framework for representing NFRs and integrating them with functional requirements (FRs) in the use case model at four association points: the subject (system boundary), actor, use case, and communicate association. NFRs can also be implicitly associated with other related use case model elements through the proposed NFR propagation rules, which eliminate the need for redundant NFR specifications. A process is presented to demonstrate how to apply this framework, along with an illustration based on a simplified pricing system.
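The framework is a modeling notation rather than code; the following sketch only illustrates the propagation idea with hypothetical data and a single made-up rule (NFRs attached to the subject or to an associated actor implicitly apply to a use case), so redundant per-use-case NFR annotations become unnecessary.

```python
# Illustrative sketch only; the framework itself is a use case modeling notation.
# Assumed, hypothetical propagation rule: NFRs on the subject (system boundary)
# or on an associated actor implicitly apply to the use case.

subject_nfrs = {"availability >= 99.9%"}                 # system boundary
actor_nfrs = {"Customer": {"response time < 2 s"}}
use_case_nfrs = {"Price Order": {"pricing rules must be auditable"}}
associations = {"Customer": ["Price Order"]}             # actor -> use cases

def effective_nfrs(use_case):
    """Collect explicit plus propagated NFRs for one use case."""
    nfrs = set(use_case_nfrs.get(use_case, set()))
    nfrs |= subject_nfrs                                  # propagate from subject
    for actor, cases in associations.items():
        if use_case in cases:
            nfrs |= actor_nfrs.get(actor, set())          # propagate from actor
    return nfrs

print(effective_nfrs("Price Order"))
```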
Software evolution is the process of adapting an existing software system to conform to an enhanced set of requirements. Software reengineering is software evolution performed in a systematic way. Evolving an existing software system is fundamentally different from developing one from scratch; consequently, tools to support evolution must go beyond forward engineering tools. This paper presents a reengineering method and tools for software evolution or modernization. The paper briefly describes the MARMI-RE methodology before presenting the individual tools and how they interoperate to support legacy system modernization. We expect that the proposed methodology can be used flexibly because it presents various scenarios of the migration process.
As web application systems become increasingly complex to build, developers are turning more and more to integrating pre-built components from third-party developers into their systems. This use of Commercial Off-The-Shelf (COTS) software components in system construction presents new challenges to web system architects and designers. Web applications are seldom developed in isolation. Frequently there are many projects building, maintaining, and evolving the applications, each with its own life cycle of requirements, design, and implementation. To gain improvements in productivity and quality across these applications, it is necessary to consider the main elements of these solutions, to abstract them from the individual solutions, and to manage them as a core asset of the organization. The continuing increase of interest in Component-Based Development (CBD) signifies the emergence of a new development trend within the web application industry. This paper describes issues raised when integrating COTS components into web applications, outlines strategies for integration, and presents some informal rules we have developed that ease the development and maintenance of such systems.
Software Process Improvement (SPI) is the set of activities with which an organization attempts to achieve better performance on product cost, time-to-market, and product quality by improving the software development process. Changes are made to the process based on ‘best practices’: the experiences of other, not necessarily similar, organizations. SPI methodologies focus on the software development process because they rest on the assumption that an improved development process positively impacts product quality, productivity, product cost, and time-to-market. This paper defines standard metrics for the quantitative measurement of quality indicators of processes through Software Process Assessment (SPA) based on SPICE. Through this, we are able to control and measure SPI activity and provide a basis for quantitative software process management. The results of our research represent a circulatory architecture for SPI and support risk management through the improvement activities and a Process Asset Library of collected and measured data.
This paper proposes a query language for consistently accessing metadata registries. Many metadata registries have now been built in various fields. Unfortunately, there is no standard method for accessing them, so many management systems have been developed with their own, differing access methods to build and manage their metadata registries. In this paper, we propose a metadata registry query language that allows all metadata registries to be accessed consistently, in a standardized manner. The query language is an extension of SQL, the standard query language for relational databases, which is familiar to existing database administrators. Consequently, the proposed metadata registry query language reduces the development cost of a metadata registry management system and enables all metadata registries to be accessed in a consistent manner.
XML routers are devices that deliver requested data from XML data streams to their destinations. Several XML stream processing methodologies have been proposed and developed in recent years, but many issues concerning XML routing at the network layer remain to be studied. In this paper we present the design of such an XML router at the network layer. An implementation of a prototype of the XML router is also described; it uses lazy Deterministic Finite Automata (DFA) to process XML streams from the network in real time. Preliminary experiments showed that our XML router has the potential to deliver requested data efficiently in both time and space.
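As a rough sketch of the lazy-DFA idea (not the prototype described in the paper), the code below handles only linear path subscriptions such as /stock/quote over a stream of start/end element events, building DFA transitions the first time an element name is seen in a state and caching them for reuse.

```python
# Illustrative sketch only, not the paper's prototype: lazy DFA construction for
# simple linear path subscriptions over (start, name) / (end, name) events.
# A DFA state is a set of per-query match depths; its transitions are built
# lazily and cached the first time an element name is seen in that state.

class LazyDFA:
    def __init__(self, queries):
        # Each query is a slash-separated path, e.g. "/stock/quote" or "/news/*".
        self.queries = [tuple(q.strip("/").split("/")) for q in queries]
        self.start = frozenset((i, 0) for i in range(len(self.queries)))
        self.transitions = {}                  # (state, name) -> next state

    def step(self, state, name):
        key = (state, name)
        if key not in self.transitions:        # build this transition lazily
            nxt = set()
            for qid, pos in state:
                path = self.queries[qid]
                if pos < len(path) and path[pos] in (name, "*"):
                    nxt.add((qid, pos + 1))
            self.transitions[key] = frozenset(nxt)
        return self.transitions[key]

    def matches(self, state):
        # Queries whose full path has just been matched in this state.
        return [qid for qid, pos in state if pos == len(self.queries[qid])]

def route(events, dfa):
    """Yield (query id, element name) for every matched subscription."""
    stack, state = [], dfa.start
    for kind, name in events:
        if kind == "start":
            stack.append(state)
            state = dfa.step(state, name)
            for qid in dfa.matches(state):
                yield qid, name
        else:                                  # "end": pop back to parent state
            state = stack.pop()

events = [("start", "stock"), ("start", "quote"), ("end", "quote"),
          ("end", "stock")]
print(list(route(events, LazyDFA(["/stock/quote", "/news/*"]))))
```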
Web information systems are increasing rapidly and their structure is becoming more complex. When users navigate such complex web systems, however, they often cannot grasp their current location or find the information they want. A systematic approach to modeling the navigation of web information systems is therefore needed, one that helps users get information, purchase products, and deal with complexity. If a system provides information about its navigation context together with useful clues for exploring, users can easily comprehend their present situation and find information in a relatively short time. They can also travel through the system adaptively by using the context information. In this paper, we describe an extension of UML for a context-based navigation modeling framework for web information systems. An example of an online bookstore is given to describe the models produced in the framework.
The current Web is ‘machine-readable’ but not ‘machine-understandable’. Therefore, new methods are required for machines to understand the large amount of information resources on the Web. A proposed solution for this issue is to use machine-understandable metadata to describe the information resources contained on the Web. There are two leading methods for describing metadata of Web information resources: one is the Topic Map, an ISO/IEC JTC1 standard, and the other is RDF, a W3C standard. To implement an effective Semantic Web (a machine-understandable Web), it must be able to handle all metadata of Web information resources, and for this, interoperability between the Topic Map and RDF domains is needed. There has been previous research on conversion methods between Topic Maps and RDF, but these methods incur some loss of meaning or produce complicated results. In this paper, a new method to solve these issues is proposed. This method decreases the loss of implied semantics in comparison with the previous conversion methods and generates a clearer RDF graph.
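The proposed mapping is not reproduced here; the sketch below is only a naive illustration of the general direction of Topic Map to RDF conversion (a topic with a type, a name, and an occurrence becomes a handful of triples), i.e., the kind of simple mapping whose semantic losses the paper aims to reduce.

```python
# Naive illustration only; the paper's conversion is specifically designed to
# lose less implied meaning than simple mappings such as this one. Here a topic
# with a type, a base name, and an occurrence becomes a set of RDF triples.
topic = {
    "id": "puccini",
    "type": "composer",
    "name": "Giacomo Puccini",
    "occurrences": [("homepage", "http://example.org/puccini")],
}

EX = "http://example.org/"   # hypothetical namespace for the example

def topic_to_triples(t):
    s = EX + t["id"]
    triples = [
        (s, "http://www.w3.org/1999/02/22-rdf-syntax-ns#type", EX + t["type"]),
        (s, EX + "name", t["name"]),
    ]
    for occ_type, value in t["occurrences"]:
        triples.append((s, EX + occ_type, value))
    return triples

for s, p, o in topic_to_triples(topic):
    print(s, p, o)
```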
In recent years, the World Wide Web has become an ideal platform for developing Internet applications. World Wide Web service
and application engineering is a complex task. Many web applications at present are large-scale and involve hundreds or thousands
of web pages and sophisticated interactions with users and databases. Thus, improving the quality of web applications and
reducing development costs are important challenges for the Internet industry. One way to resolve the difficulty is to provide
web application developers with an integrated development environment. In this paper, I propose an efficient methodology and
development environment for web application programs. This environment includes a design model to represent data and navigational structure, a modeling language that provides the notation for the design model, and a process model that defines the development stages.
The current state of the art in existing middleware technologies does not support the development of distributed applications that need processes to complete a task collaboratively. What is needed in the next generation of middleware is a synergy of heterogeneity, distribution, communication, and coordination. We propose to augment existing middleware technologies to provide collaboration support through a Multiparty Interaction (MI) protocol rather than to design a new programming language for distributed coordinated programming. In this paper, a four-layered interaction model is presented that decouples applications from their underlying middleware implementations, including coordination protocols, by providing a set of generic interfaces to the applications. Decoupling applications from middleware technologies by isolating computation, communication, and coordination promotes reuse, improves comprehension, and eases maintenance during software evolution.
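As a minimal illustration of the decoupling idea, with hypothetical interface names not taken from the paper, the sketch below shows application code written against a generic interaction interface while the concrete (here trivial, in-process) middleware implementation stays behind it.

```python
# Illustrative sketch with hypothetical interface names, not the paper's API:
# applications see only a generic coordination interface; the concrete
# middleware (and its multiparty interaction protocol) is hidden behind it.
from abc import ABC, abstractmethod

class Interaction(ABC):
    """Generic multiparty interaction: all registered parties must enrol
    before the interaction can fire."""

    @abstractmethod
    def enrol(self, party: str) -> None: ...

    @abstractmethod
    def ready(self) -> bool: ...

class LocalInteraction(Interaction):
    """Trivial in-process stand-in for a real middleware implementation."""

    def __init__(self, parties):
        self.expected = set(parties)
        self.enrolled = set()

    def enrol(self, party):
        self.enrolled.add(party)

    def ready(self):
        return self.enrolled == self.expected    # all parties have arrived

mi = LocalInteraction({"buyer", "seller", "bank"})
for p in ("buyer", "seller", "bank"):
    mi.enrol(p)                                  # application code stays the same
print("interaction can fire:", mi.ready())       # regardless of the middleware
```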
Class libraries play a key role in the object-oriented paradigm. They provide, by and large, the most commonly reused components in object-oriented environments. In this paper, we use a number of metrics to study the reusability of four standard class libraries of two object-oriented languages, namely Java and Eiffel. The purpose of the study is to demonstrate how the different design philosophies of the two languages have affected the structural design and organization of their standard libraries, which in turn might have affected their reusability with regard to Ease of Reuse and Design with Reuse. Our study concludes that, within the limits of our measurements, the Java libraries are easier to reuse whereas the Eiffel libraries are better designed with reuse. We observe that whilst design with reuse may make class libraries extensible and maintainable, it does not necessarily make them easy to reuse.
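The paper's metric suite is not reproduced here; purely for illustration, the sketch below computes two simple structural indicators (public method count and inheritance depth) over a few Python built-in classes, assuming that analogous counts over Java and Eiffel library classes give the flavor of such a study.

```python
# Illustrative sketch only; the paper applies its own metric suite to the Java
# and Eiffel standard libraries. Here: two simple structural metrics computed
# over arbitrary classes, using Python's introspection purely for illustration.
import inspect

def public_methods(cls):
    return [n for n, m in inspect.getmembers(cls, callable)
            if not n.startswith("_")]

def inheritance_depth(cls):
    # Length of the longest path from cls up to object.
    if cls is object:
        return 0
    return 1 + max(inheritance_depth(b) for b in cls.__bases__)

for cls in (dict, Exception, OSError):
    print(cls.__name__,
          "methods:", len(public_methods(cls)),
          "depth:", inheritance_depth(cls))
```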
Software organizations need methods to understand, structure, and improve the data they are collecting. We have developed an approach for use when a large number of diverse metrics are already being collected by a software organization. The approach combines two methods: one looks at an organization's measurement framework in a goal-oriented fashion, and the other looks at it quantitatively through a performance pyramid. We present model-based performance prediction at software development time in order to optimize an organization's projects and strengthen control over them, and thus accomplish its objectives: process capability and project capability are determined through the three proposed models (PCM, ECM, PPM), strategies are developed to improve the process, and the project most suitable to the organization's vision is planned with the Project Prediction Model (PPM).
Recent advances in object-oriented technology and computer networking have changed the way we maintain and develop software systems; for example, one may need to maintain a system that is running at a remote site. In this paper, we introduce a dynamic program slicing method applied to Java™ programs using the JPDA [1] (Java Platform Debugger Architecture) facilities. Our approach produces DORDs (dynamic object relationship diagrams) with respect to a given slicing criterion in XML format, for export and graphical representation. The resulting slice is collectively called DORD-XML. The slicing algorithm keeps track of the dynamic dependencies of objects so that it can compute a minimum set of objects with respect to a given slicing criterion. By using DORD-XML and a graph-drawing tool, we attempt to reduce the complexity of Java programs and to make distributed, remote, and local systems more maintainable and understandable.
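The tool itself instruments Java through JPDA; the following is only a toy, language-shifted illustration in Python of the output side of the idea, recording object-to-object references reachable from a chosen object (a stand-in for the slicing criterion) and exporting them as a small XML document.

```python
# Toy illustration in Python; the paper's tool instruments Java programs through
# JPDA. Here we only record object-to-object references reachable from a chosen
# object (the "slicing criterion") and dump them as an XML document.
from xml.etree.ElementTree import Element, SubElement, tostring

class Customer:
    def __init__(self, name):
        self.name = name

class Order:
    def __init__(self, customer, items):
        self.customer, self.items = customer, items

def dord_xml(criterion):
    """Emit a minimal dynamic-object-relationship XML for `criterion`."""
    root = Element("dord")
    seen, stack = set(), [criterion]
    while stack:
        obj = stack.pop()
        if id(obj) in seen or not hasattr(obj, "__dict__"):
            continue
        seen.add(id(obj))
        node = SubElement(root, "object",
                          {"id": str(id(obj)), "class": type(obj).__name__})
        for field, value in vars(obj).items():
            if hasattr(value, "__dict__"):        # reference to another object
                SubElement(node, "ref",
                           {"field": field, "target": str(id(value))})
                stack.append(value)
    return tostring(root, encoding="unicode")

order = Order(Customer("Ada"), ["book"])
print(dord_xml(order))
```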
This paper attempts to clarify the definition of infrastructure and its impact and reach. Despite the wide use of the term and the importance of the entities it represents, the notion of infrastructure has not been thoroughly addressed. We use different
perspectives for defining infrastructures and investigate the intricate relationships a system has with its infrastructure.
In order to deal with the complexity of the infrastructure notion, we provide a diverse set of classifications. We focus on
the information technology infrastructure and its security and survivability. We investigate the design issues for building
evolvable, resilient, disaster-hardened infrastructures.
We focus on non-preemptive Fixed Priority (FP) scheduling. Unlike the classical approach, where flows sharing the same priority are assumed to be scheduled arbitrarily, we assume that these flows are scheduled Earliest Deadline First (EDF), by considering their absolute deadline on their first visited node. The resulting scheduling is called FP/EDF*. In this paper, we establish new results for FP/EDF* in a distributed context, first when flows follow the same sequence of nodes (the same path), and then extend these results to flows that follow different paths. We show how to compute an upper bound on the end-to-end response time of any flow when the packet priority is computed on the first node and left unchanged on any subsequent node; this alleviates packet processing in core nodes. For that purpose, we use a worst-case analysis based on the trajectory approach, which is less pessimistic than classical approaches. We compare our results with those provided by the holistic approach: the benefit can be very high.
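The worst-case analysis is not reproduced here; the sketch below only illustrates the FP/EDF* dispatching rule itself, assuming packets carry a fixed priority plus the absolute deadline stamped at their first visited node, which breaks ties among equal-priority packets at every node without being recomputed.

```python
# Illustrative sketch of the dispatching rule only, not the worst-case analysis:
# packets are ordered by fixed priority, and packets of equal priority are
# ordered EDF using the absolute deadline stamped at their first visited node.
import heapq

class Packet:
    def __init__(self, flow, priority, release, relative_deadline):
        self.flow = flow
        self.priority = priority
        # Stamped once, on the first node, and never recomputed downstream.
        self.first_node_deadline = release + relative_deadline

    def key(self):
        # Lower number = higher priority; ties broken by earliest deadline.
        return (self.priority, self.first_node_deadline)

queue = []
for pkt in (Packet("f1", 1, release=0, relative_deadline=10),
            Packet("f2", 1, release=2, relative_deadline=5),
            Packet("f3", 0, release=3, relative_deadline=20)):
    heapq.heappush(queue, (pkt.key(), pkt.flow))

while queue:
    (prio, deadline), flow = heapq.heappop(queue)
    print(f"serve {flow}: priority {prio}, first-node deadline {deadline}")
```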
In this paper, we propose a Web-service-based inter-AS (autonomous system) connection management architecture for QoS-guaranteed DiffServ provisioning. In the proposed architecture, the interaction between the customer network management (CNM) system and the network management system (NMS), and the interactions among multiple NMSs, are designed and implemented on a Web service architecture with WSDL, SOAP/XML, and UDDI. The proposed architecture can be easily implemented in the early stage of MPLS network deployment, where MPLS signaling is not yet mature, and provides efficient internetworking among the multiple Internet Service Providers (ISPs) that are required to provide end-to-end QoS-guaranteed differentiated services.
In the early 1960s, the intricacy of software systems led to the emergence of the concept of software reuse. Rather than building software applications from scratch, software reuse allows software systems to be created from existing software. Efficient software reuse programs implemented by firms may increase their productivity and value, thereby giving those organizations a head start. Several reuse metrics and models prevail in the software industry, and reuse assessment contributes to high-quality and economical system development. Despite its commencement as a potent vision, software reuse has failed to become part of typical software engineering practice. This paper is an attempt to articulate the notion of software reuse and the issues concerning it. The reusability facet is discussed in relation to the OO paradigm and agile development. Here, the concept of reuse is addressed as a combination of artifacts as well as individual components.