Richard W. Selby’s research while affiliated with the University of California, Irvine, and other places


Publications (32)


Research in Advanced Environments
  • Article
  • Full-text available

December 2001 · 29 Reads

Richard N. Taylor · [...] · Richard W. Selby · [...]

Active process support is being created to help human developers, including non-technical managers, coordinate their activities while integrating commercial project planning tools. Analysis and testing tools are being created to provide high assurance in software, through provision of prerun-time analysis, test case development, test result checking, and test process management tools. Powerful program analysis tools are being created which enable early detection of subtle coordination errors in concurrent systems composed of heterogeneous parts. Evolvable software architectures, first in the domain of user interface software, are being created to enable a more component-based software economy. World-Wide-Web (WWW) and hypermedia technology is being developed to foster easy information access, connections between software artifacts, and human understanding of complex systems. The UCI/Purdue team developed these technologies and validated them through interaction with the aerospace community, military organizations, the commercial software world, and the open Internet community.


How Microsoft Builds Software

June 1997 · 172 Reads · 212 Citations

Communications of the ACM

Microsoft Corp. has probably tackled more PC software projects than any other company in the industry. Its philosophy for product development has been to cultivate its roots as a highly flexible, entrepreneurial company and not to adopt too many of the structured software-engineering practices commonly promoted by organizations such as the Software Engineering Institute and the International Organization for Standardization. Microsoft has tried to scale up a loosely structured, small-team style of product development. The objective is to get many small parallel teams or individual programmers to work together as a single, relatively large team in order to build large products relatively quickly while still allowing individual programmers and teams the freedom to evolve their designs and operate nearly autonomously. This article summarizes how Microsoft uses various techniques and melds them into an overall approach that balances flexibility and structure in software product development. The authors label Microsoft's style of product development the synch-and-stabilize approach.


How Microsoft Competes

January 1996 · 11 Reads · 11 Citations

Research-Technology Management

OVERVIEW: Today Microsoft owns the operating systems and basic applications programs that run on 170 million computers. Beyond the genius of co-founder/CEO Bill Gates, what accounts for the company's dramatic success? From two years of on-site observation and interviewing at Microsoft headquarters, the authors identify seven complementary strategies that characterize how Microsoft competes and operates: Find smart people who know the technology and the business; organize small teams of overlapping functional specialists; pioneer and orchestrate evolving mass markets; focus creativity by evolving features and "fixing" resources; do everything in parallel, with frequent synchronizations; improve through continuous self-critiquing, feedback and sharing; attack the future! Moreover, Microsoft's "synch-and-stabilize" approach to product development enables the company not only to build an increasing variety of complex features and end-products for fast-paced markets with short life cycles, but also to shape evolving mass markets and foster organizational learning.


Interconnectivity analysis techniques for error localization in large systems

March 1993 · 5 Reads · 6 Citations

Journal of Systems and Software

Software systems contain multiple types of interrelations among components — data, control, and sequencing, among others. We are developing interconnectivity analysis techniques that derive multiple views of the structure of large-scale software systems. These techniques calculate interconnections among components and then recursively group the components into sets according to their degree of interconnection. These techniques are especially suited to large-scale systems (e.g., > 100,000 lines) since numerous types of interconnections can be determined automatically in a tractable manner. Interconnectivity analysis techniques produce visualizations of system structure and can be used to document systems, model their evolution over time, compare system structure, guide regression testing, or localize error-prone structure. This article summarizes two studies using interconnectivity analysis. In earlier work, one such technique was applied effectively in a feasibility study to characterize the error-prone components in a large-scale system from a production environment. Routines with the highest coupling/cohesion ratios had 8.1 times more errors per 1,000 source lines of code than did routines with the lowest coupling/cohesion ratios. A second validation study is currently underway. In this study, we are applying interconnectivity analysis techniques to the design specification of a large distributed command and control system. Tools supporting interconnectivity analysis will be integrated into the Amadeus measurement-driven analysis and feedback system.
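
The recursive grouping step lends itself to a short illustration. The sketch below, in Python, groups components by their summed interconnection weight and recurses on the tightly coupled subset with a progressively tighter threshold; the component names, edge weights, and the doubling threshold schedule are assumptions for illustration, not the paper's actual algorithm or tooling.

```python
# Minimal sketch of recursive grouping by degree of interconnection.
# All names, weights, and the doubling schedule are invented examples.
from collections import defaultdict

def interconnection_degree(edges):
    """Sum the interconnection weights incident to each component."""
    degree = defaultdict(float)
    for (a, b), weight in edges.items():
        degree[a] += weight
        degree[b] += weight
    return degree

def group_recursively(components, edges, threshold, min_size=2):
    """Partition components into weakly vs. strongly interconnected sets,
    then recurse on the strongly interconnected set with a tighter threshold."""
    degree = interconnection_degree(edges)
    strong = [c for c in components if degree[c] >= threshold]
    weak = [c for c in components if degree[c] < threshold]
    if len(strong) <= min_size:
        return [g for g in (weak, strong) if g]
    inner = {e: w for e, w in edges.items()
             if e[0] in strong and e[1] in strong}
    head = [weak] if weak else []
    return head + group_recursively(strong, inner, threshold * 2, min_size)

# Edges weighted by, e.g., counts of shared data and control references.
edges = {("parser", "lexer"): 9.0, ("parser", "ast"): 7.0,
         ("ast", "codegen"): 6.0, ("logger", "parser"): 1.0}
components = ["parser", "lexer", "ast", "codegen", "logger"]
print(group_recursively(components, edges, threshold=2.0))
# Groups come out ordered from least to most tightly interconnected.
```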



Experimental Software Engineering Issues: Critical Assessment and Future Directions: International Workshop Dagstuhl Castle, Germany, September 14–18, 1992 Proceedings

January 1993 · 11 Reads · 18 Citations

Lecture Notes in Computer Science

We have only begun to understand the experimental nature of software engineering, the role of empirical studies and measurement within software engineering, and the mechanisms needed to apply them successfully. This volume presents the proceedings of a workshop whose purpose was to gather those members of the software engineering community who support an engineering approach based upon empirical studies to provide an interchange of ideas and paradigms for research. The papers in the volume are grouped into six parts corresponding to the workshop sessions: the experimental paradigm in software engineering; objectives and context of measurement/experimentation; procedures and mechanisms for measurement/experimentation; measurement-based modeling; packaging for reuse/reuse of models; and technology transfer, teaching, and training. Each part opens with a keynote paper and ends with a discussion summary. The workshop served as an important event in continuing to strengthen empirical software engineering as a major subdiscipline of software engineering. The deep interactions and important accomplishments from the meeting documented in these proceedings have helped identify key issues in moving software engineering as a whole towards a true engineering discipline.


Scalable Techniques for Modeling Software Interconnectivity

January 1992 · 4 Reads

Software systems contain multiple types of interrelations among components — data, control, and sequencing, among others. We are developing interconnectivity analysis techniques that derive multiple views of the structure of large-scale software systems. These techniques calculate interconnections among components and then recursively group the components into sets according to their degree of interconnection. These techniques are especially suited to large-scale systems (e.g., > 100,000 lines) since numerous types of interconnections can be determined automatically in a tractable manner. Interconnectivity analysis techniques produce visualizations of system structure and can be used to document systems, model their evolution over time, compare system structure, guide regression testing, or localize error-prone structure. In earlier work, one such technique was applied effectively in a feasibility study to characterize the error-prone components in a large-scale system from a production environment. Tools supporting interconnectivity analysis will be integrated into the Amadeus measurement-driven analysis and feedback system.
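
Since the interconnections come in several types (data, control, sequencing), a natural preliminary step is to fold the separate relation sets into one weighted edge map that a grouping step can consume. The sketch below shows one plausible way to do this; the relation names, per-type weights, and the undirected edge representation are illustrative assumptions, not the paper's actual representation.

```python
# Hypothetical fusion of several interconnection types into one weighted
# edge map; relation names and per-type weights are invented for illustration.
from collections import defaultdict

# Each relation type maps ordered component pairs to raw interconnection counts.
relations = {
    "data":       {("parser", "ast"): 5, ("ast", "codegen"): 4},
    "control":    {("parser", "lexer"): 7, ("ast", "codegen"): 2},
    "sequencing": {("lexer", "parser"): 3},
}

# Assumed relative importance of each interconnection type.
type_weight = {"data": 1.0, "control": 0.8, "sequencing": 0.5}

def fuse(relations, type_weight):
    """Combine typed interconnections into undirected weighted edges."""
    edges = defaultdict(float)
    for rel_type, pairs in relations.items():
        for (a, b), count in pairs.items():
            key = tuple(sorted((a, b)))  # treat edges as undirected
            edges[key] += type_weight[rel_type] * count
    return dict(edges)

print(fuse(relations, type_weight))
# The fused map could feed a grouping step like the sketch shown earlier.
```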


Software Measurement and Experimentation Frameworks, Mechanisms, and Infrastructure.

January 1992 · 5 Reads · 2 Citations

Lecture Notes in Computer Science

Software measurement and experimentation provide a cross-cutting foundation for software understanding, analysis, evaluation, and improvement. Effective measurement and experimentation require a variety of issues to be addressed, ranging from goal specification to metric definition to data interpretation. This paper focuses on a subset of the measurement and experimentation issues related to frameworks, mechanisms, and infrastructure. In particular, the paper highlights research issues or results in these areas: frameworks for measurement and experimentation, existing measures, determining appropriate measures, data collection, experimental designs, and infrastructure for measurement.
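
One concrete reading of "infrastructure for measurement" is a uniform interface that decouples metric definition from data collection. The sketch below is a minimal, hypothetical illustration of that separation; the class names and metrics are assumptions, not an API from the paper.

```python
# Minimal, hypothetical sketch of a measurement interface: metric definitions
# are separated from the artifacts they are collected over. Names are invented.
from abc import ABC, abstractmethod

class Metric(ABC):
    name: str

    @abstractmethod
    def collect(self, artifact: str) -> float:
        """Compute this metric over a source artifact."""

class SourceLines(Metric):
    name = "sloc"
    def collect(self, artifact: str) -> float:
        # Count non-blank lines as a crude size measure.
        return sum(1 for line in artifact.splitlines() if line.strip())

class DecisionDensity(Metric):
    name = "decision_density"
    def collect(self, artifact: str) -> float:
        # Crude proxy: branching keywords per line.
        branches = sum(artifact.count(kw) for kw in ("if ", "while ", "for "))
        lines = max(1, len(artifact.splitlines()))
        return branches / lines

def measure(artifact: str, metrics: list[Metric]) -> dict[str, float]:
    """Apply every registered metric to one artifact."""
    return {m.name: m.collect(artifact) for m in metrics}

code = "def f(x):\n    if x > 0:\n        return x\n    return -x\n"
print(measure(code, [SourceLines(), DecisionDensity()]))
```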


Paradigms for experimentation and empirical studies in software engineering

December 1991 · 11 Reads · 31 Citations

Reliability Engineering & System Safety

The software engineering field requires major advances in order to attain the high standards of quality and productivity that are needed by the complex systems of the future. The immaturity of the field is reflected by the fact that most of its technologies have not yet been analyzed to determine their effects on quality and productivity. Moreover, when these analyses have occurred, the resulting guidance is not quantitative but only ethereal. One fundamental area of software engineering that is just beginning to blossom is the use of measurement techniques and empirical methods. These techniques need to be adopted by software researchers and practitioners in order to help the field respond to the demands being placed upon it. This paper outlines four paradigms for experimentation and empirical study in software engineering and describes their interrelationships: (1) the improvement paradigm, (2) the goal-question-metric paradigm, (3) the experimentation framework paradigm, and (4) the classification paradigm. These paradigms are intended to catalyze the use of measurement techniques and empirical methods in software engineering.
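
Of the four paradigms, the goal-question-metric paradigm is the most mechanical to illustrate: a measurement goal is refined into questions, and each question into concrete metrics. The sketch below shows one plausible way to represent such a hierarchy in code; the goal, questions, and metrics are invented examples, and the data structure is an assumption rather than a prescribed notation.

```python
# A hypothetical GQM hierarchy: each goal is refined into questions, and each
# question into concrete metrics. All entries are invented examples.
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    metrics: list[str] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str          # e.g., "characterize", "evaluate", "improve"
    issue: str            # the quality focus
    object: str           # the process or product under study
    viewpoint: str        # whose perspective the goal serves
    questions: list[Question] = field(default_factory=list)

goal = Goal(
    purpose="evaluate",
    issue="fault-proneness",
    object="code inspection process",
    viewpoint="project manager",
    questions=[
        Question("What fraction of faults are found before unit test?",
                 metrics=["faults found in inspection / total faults"]),
        Question("How does inspection effort relate to faults found?",
                 metrics=["inspection hours per KLOC", "faults per KLOC"]),
    ],
)

for q in goal.questions:
    print(q.text, "->", ", ".join(q.metrics))
```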


Metric-Driven Classification Analysis.

October 1991 · 6 Reads · 1 Citation

Lecture Notes in Computer Science

Metric-driven classification models identify software components with user-specifiable properties, such as those likely to be fault-prone, to require high development effort, or to have faults in a certain class. These models are generated automatically from past metric data, and they are scalable to large systems and calibratable to different projects. These models serve as extensible integration frameworks for software metrics because they allow the addition of new metrics and integrate symbolic and numeric data from all four measurement abstractions. In our past work, we developed and evaluated techniques for generating tree-based classification models. In this paper, we investigate a technique for generating network-based classification models. The principle underlying the tree-based models is partitioning, while the principle underlying the network-based models is pattern matching. Tree-based models prune away information and can be decomposed, while network-based models retain all information and tend to be more complex. We evaluate the predictive accuracy of network-based models and compare them to the tree-based models.
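
The tree-based side of this comparison is easy to sketch with modern tooling. The fragment below trains a small classification tree that flags fault-prone components from metric data; the chosen metrics, training data, and the use of scikit-learn are assumptions for illustration, not the paper's actual model-generation technique.

```python
# Hypothetical illustration of a metric-driven classification tree using
# scikit-learn; the metrics and training data are invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [cyclomatic complexity, fan-out, source lines, past revisions]
X = [
    [3, 2, 120, 1], [25, 14, 900, 12], [5, 3, 200, 2],
    [30, 10, 1500, 9], [8, 4, 340, 3], [22, 12, 800, 15],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = fault-prone in past releases

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[
    "complexity", "fan_out", "sloc", "revisions"]))

# Classify a new component from its metrics.
print(tree.predict([[18, 9, 700, 7]]))  # e.g., [1] = predicted fault-prone
```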


Citations (25)


... For software engineering to move from a craft towards an engineering discipline, software developers need empirical evidence to help them decide what defect detection technique to apply under various conditions [1], [2], [3]. However, most of the existing experimental research on testing techniques does not provide a solid basis for strong conclusions regarding the effectiveness or efficiency of different testing techniques, as the results are generally incompatible and do not allow an objective comparison of the effectiveness and efficiency of various testing approaches. ...

Reference:

A Controlled Experiment to Evaluate Effectiveness and Efficiency of Three Software Testing Methods
Experimental Software Engineering Issues: Critical Assessment and Future Directions: International Workshop Dagstuhl Castle, Germany, September 14–18, 1992 Proceedings
  • Citing Book
  • January 1993

Lecture Notes in Computer Science

... Bringing the concept of reference architecture to the software engineering domain, the literature contains works involving software engineering environments, such as the Arcadia environment architecture (Taylor et al., 1988), the Field environment (Reiss, 1990), the Taba project (Rocha et al., 1990), the EPOS environment (Nguyen et al., 1997), the Odyssey environment (Braga, 1999), the MILOS environment (Maurer et al., 1999), the Orion environment, and the RefASSET environment (Nakagawa, 2006). Regarding the software testing area, there are some proposals for reference architectures. ...

Foundations for the Arcadia Environment Architecture
  • Citing Article
  • March 1989

ACM SIGPLAN Notices

R.N. Taylor · F.C. Belz · [...]

... In software development, error detection rates are only 20% to 40% [Basili and Selby, 1986; Johnson and Tjahjono, 1997; Myers, 1978; Porter, Votta, and Basili, 1995; Porter and Johnson, 1997; Porter, Sly, Toman, and Votta, 1997; Porter and Votta, 1994] when single inspectors examine a code module to look for errors. This is why software code inspection is always done in teams of three to five or more [Fagan, 1976; Fagan, 1986; Gilb and Graham, 1993; Cohen, 2006]. ...

Four Applications of a Software Data Collection and Analysis Methodology
  • Citing Chapter
  • January 1986

... Figure 14: Operating model of agile approaches at scale ... to larger development teams, inspired by Microsoft's "synch and stabilize" mode of operation in the 1990s (Cusumano & Selby, 1996). Leffingwell starts from the premise that extending the principles of first-generation methods comes with many challenges; however, his book Scaling Agile Software Development is more precise about the rituals and modes of synchronization among the actors involved. ...

How Microsoft Competes
  • Citing Article
  • January 1996

Research-Technology Management

... Therefore, these combinations may not form continuous executable paths containing the maximum number of LSLs through the matching method alone. To solve this problem, for all the LSLs obtained previously, and according to the initial docking depots of each UAV, we propose an algorithm using a resource tree [29] to obtain all the sequences of LSLs that can be executed sequentially, that is, the initial paths of the N UAVs. Then, the insertion method is used to find the optimal insertion points for the feasible paths that are not included in the initial paths. ...

Learning from Examples: Generation and Evaluation of Decision Trees for Software Resource Analysis

IEEE Transactions on Software Engineering

... We conduct controlled experiments using Weka (Hall et al, 2009), a mature machine learning environment that is successfully used across several domains, for instance, bioinformatics (Frank et al, 2004), telecommunication (Alshammari and Zincir-Heywood, 2009), and astronomy (Zhao and Zhang, 2008). This section describes the definition, design, and setting of the experiments, following the general guidelines by Basili et al (1986) and Wohlin et al (2012). ...

Experimentation in Software Engineering
  • Citing Article
  • January 1986

... Amadeus. The Amadeus system is best characterized as a subcomponent of a CAPE system [39,40]. The objectives of the Amadeus system include integrating measurement with process enactment by presenting an abstract interface for data collection, data analysis, and feedback. ...

Classification tree analysis using the Amadeus measurement and empirical analysis system
  • Citing Article

... In addition to these textbooks, there are also edited books available that are related to special events in empirical software engineering and cover valuable methodological contributions. Rombach et al. (1993) edited proceedings from a Dagstuhl seminar in 1992 on empirical software engineering entitled "Experimental Software Engineering Issues: Critical Assessment and Future Directions". The goal was to discuss the state of the art of empirical software engineering by assessing past accomplishments, raising open questions, and proposing a future research agenda at that time. ...

Experimental Software Engineering Issues: Critical Assessment and Future Directions, International Workshop Dagstuhl Castle, Germany, September 14-18, 1992, Proceedings
  • Citing Article
  • January 1993

... In other words, the ESE community is making a big effort in standardization, review, and digital publishing related to OS. Nevertheless, the research artifacts reviewed during the survey do not usually follow these guidelines nor other relevant proposals such as those by Basili and Selby (1991) and Solari and Vegas (2006), so the resulting research artifacts present some lack of completeness (RQ1.3 and RQ2.1); (iv) of the 13 digital repositories identified where the authors upload their research artifacts, the most commonly used in the SE community are GitHub, institutional repositories, and Zenodo (RQ1.4); (v) the most common items in the research artifacts are instructions about experimental settings, operating material (which included training of participants), and experimental tools. ...

Paradigms for experimentation and empirical studies in software engineering
  • Citing Article
  • December 1991

Reliability Engineering & System Safety

... In SDEE, nonalgorithmic models are gaining prominence because of their ability to adapt to complex, volatile environments [9, 31–42]. These models often leverage CI techniques, such as CBR, ANNs, Evolutionary Computing (EC), Swarm Intelligence (SI), and FSs. ...

Evaluating techniques for generating metric-based classification trees
  • Citing Article
  • July 1990

Journal of Systems and Software