IEEE Transactions on Software Engineering (IEEE T SOFTWARE ENG)

Publisher: IEEE Computer Society, Institute of Electrical and Electronics Engineers

Description

Specification, design, development, management, testing, maintenance, and documentation of software systems. Topics include programming methodology; software project management; programming environments; hardware and software monitoring; and programming tools. Extensive bibliographies.

  • Impact factor
    2.59
  • 5-year impact
    3.37
  • Cited half-life
    0.00
  • Immediacy index
    0.27
  • Eigenfactor
    0.01
  • Article influence
    1.33
  • Website
    IEEE Transactions on Software Engineering website
  • Other titles
    IEEE transactions on software engineering, Institute of Electrical and Electronics Engineers transactions on software engineering, Transactions on software engineering, Software engineering
  • ISSN
    0098-5589
  • OCLC
    1434336
  • Material type
    Periodical, Internet resource
  • Document type
    Journal / Magazine / Newspaper, Internet Resource

Publisher details

Institute of Electrical and Electronics Engineers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's own and employer's publicly accessible webpages
    • Preprint - Must be removed upon publication of the final version and replaced with either a full citation to the IEEE work with a Digital Object Identifier, a link to the article abstract in IEEE Xplore, or the author's post-print
    • Preprint - Set phrase must be added once submitted to IEEE for publication ("This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible")
    • Preprint - Set phrase must be added when accepted by IEEE for publication ("(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.")
    • Preprint - IEEE must be informed as to the electronic address of the pre-print
    • Postprint - Publisher copyright and source must be acknowledged (see above set phrase)
    • Publisher's version/PDF cannot be used
    • Publisher copyright and source must be acknowledged
  • Classification
    green

Publications in this journal

  • IEEE Transactions on Software Engineering 09/2014;
  • ABSTRACT: Smartphone applications’ energy efficiency is vital, but many Android applications suffer from serious energy inefficiency problems. Locating these problems is labor-intensive and automated diagnosis is highly desirable. However, a key challenge is the lack of a decidable criterion that facilitates automated judgment of such energy problems. Our work aims to address this challenge. We conducted an in-depth study of 173 open-source and 229 commercial Android applications, and observed two common causes of energy problems: missing deactivation of sensors or wake locks, and cost-ineffective use of sensory data. With these findings, we propose an automated approach to diagnosing energy problems in Android applications. Our approach explores an application’s state space by systematically executing the application using Java PathFinder (JPF). It monitors sensor and wake lock operations to detect missing deactivation of sensors and wake locks. It also tracks the transformation and usage of sensory data and judges whether they are effectively utilized by the application using our state-sensitive data utilization metric. In this way, our approach can generate detailed reports with actionable information to assist developers in validating detected energy problems. We built our approach as a tool, GreenDroid, on top of JPF. Technically, we addressed the challenges of generating user interaction events and scheduling event handlers in extending JPF for analyzing Android applications. We evaluated GreenDroid using 13 real-world popular Android applications. GreenDroid completed energy efficiency diagnosis for these applications in a few minutes. It successfully located real energy problems in these applications, and additionally found new unreported energy problems that were later confirmed by developers.
    IEEE Transactions on Software Engineering 05/2014; 40(9):911-940.
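The GreenDroid entry above diagnoses energy problems by monitoring sensor and wake lock operations along explored execution paths. As a rough illustration of that bookkeeping only (not the actual JPF-based tool), the sketch below tracks hypothetical acquire/release events in a trace and flags resources still held when the app is paused; the event names and trace format are invented.

```python
# Rough illustration only: flag wake locks / sensor listeners still held when
# an Android app reaches onPause(). Event names and trace format are invented.

def find_missing_deactivations(event_trace):
    """event_trace: list of (operation, resource) pairs observed along one
    explored execution path, e.g. ("acquire", "wakelock:sync")."""
    held = set()
    reports = []
    for operation, resource in event_trace:
        if operation in ("acquire", "registerListener"):
            held.add(resource)
        elif operation in ("release", "unregisterListener"):
            held.discard(resource)
        elif operation == "onPause" and held:
            # Resources still held while the app goes to the background are
            # candidate energy problems worth reporting to the developer.
            reports.append(sorted(held))
    return reports

trace = [("acquire", "wakelock:sync"),
         ("registerListener", "sensor:gps"),
         ("release", "wakelock:sync"),
         ("onPause", None)]
print(find_missing_deactivations(trace))  # [['sensor:gps']]
```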
  • ABSTRACT: System testing of software applications with a graphical-user interface (GUI) front-end requires that sequences of GUI events, that sample the application’s input space, be generated and executed as test cases on the GUI. However, the context-sensitive behavior of the GUI of most of today’s non-trivial software applications makes it practically impossible to fully determine the software’s input space. Consequently, GUI testers—both automated and manual—working with undetermined input spaces are, in some sense, blindly navigating the GUI, unknowingly missing allowable event sequences, and failing to realize that the GUI implementation may allow the execution of some disallowed sequences. In this paper, we develop a new paradigm for GUI testing, one that we call Observe-Model-Exercise* (OME*) to tackle the challenges of testing context-sensitive GUIs with undetermined input spaces. Starting with an incomplete model of the GUI’s input space, a set of coverage elements to test, and test cases, OME* iteratively observes the existence of new events during execution of the test cases, expands the model of the GUI’s input space, computes new coverage elements, and obtains new test cases to exercise the new elements. Our experiment with 8 open-source software subjects, more than 500,000 test cases running for almost 1,100 machine-days, shows that OME* is able to expand the test space on average by 464.11 percent; it detected 34 faults that had never been detected before.
    IEEE Transactions on Software Engineering 01/2014; 40(3):216-234.
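The OME* loop described above (observe new events, expand the input-space model, compute coverage elements, derive new test cases) can be pictured as a simple worklist iteration. The skeleton below is only an illustrative reading of that loop, not the authors' tool; `execute` and `derive_tests` are hypothetical stand-ins for a real GUI test harness and test generator.

```python
# Illustrative skeleton of an Observe-Model-Exercise*-style loop; `execute`
# and `derive_tests` are hypothetical stand-ins for a real GUI test harness.

def ome_star(initial_model, initial_tests, execute, derive_tests):
    model = set(initial_model)       # currently known GUI events
    pending = list(initial_tests)    # test cases still to be run
    covered = set()
    while pending:
        test = pending.pop()
        observed = execute(test)     # observe: events seen while running the test
        covered.update(observed)
        new_events = observed - model
        if new_events:
            model |= new_events                              # model: expand input space
            pending.extend(derive_tests(new_events, model))  # exercise: new test cases
    return model, covered
```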
  • IEEE Transactions on Software Engineering 01/2014; PP(99):1-1.
  • ABSTRACT: When operating in volatile environments, service-based systems (SBSs) that are dynamically composed from component services must be monitored in order to guarantee timely and successful delivery of outcomes in response to user requests. However, monitoring consumes resources and very often impacts on the quality of the SBSs being monitored. Such resource and system costs need to be considered in formulating monitoring strategies for SBSs. The critical path of a composite SBS, i.e., the execution path in the service composition with the maximum execution time, is of particular importance in cost-effective monitoring as it determines the response time of the entire SBS. In volatile operating environments, the critical path of an SBS is probabilistic, as every execution path can be critical with a certain probability, i.e., its criticality. As such, it is important to estimate the criticalities of different execution paths when deciding which parts of the SBS to monitor. Furthermore, cost-effective monitoring also requires management of the trade-off between the benefit and cost of monitoring. In this paper, we propose CriMon, a novel approach to formulating and evaluating monitoring strategies for SBSs. CriMon first calculates the criticalities of the execution paths and the component services of an SBS and then, based on those criticalities, generates the optimal monitoring strategy considering both the benefit and cost of monitoring. CriMon has two monitoring strategy formulation methods, namely local optimisation and global optimisation. In-lab experimental results demonstrate that the response time of an SBS can be managed cost-effectively through CriMon-based monitoring. The effectiveness and efficiency of the two monitoring strategy formulation methods are also evaluated and compared.
    IEEE Transactions on Software Engineering 01/2014; 40(5):461-482.
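Criticality, as defined in the CriMon abstract above, is the probability that a given execution path turns out to be the slowest one. Assuming per-service response times can be sampled from some distribution, a crude Monte Carlo estimate looks like the sketch below; the paths, service names, and normal distributions are invented for illustration and are not taken from the paper.

```python
import random
from collections import Counter

# Crude Monte Carlo estimate of path criticalities: the fraction of sampled
# executions in which each path of the composition is the slowest. The paths
# and normal(mean, sd) response times below are invented for illustration.
paths = {
    "book-flight": [("search", 40, 10), ("pay", 60, 20)],
    "book-hotel":  [("search", 40, 10), ("hotel", 55, 25)],
    "notify-user": [("search", 40, 10), ("email", 20, 5)],
}

def estimate_criticalities(paths, samples=10_000):
    critical = Counter()
    for _ in range(samples):
        drawn = {}  # one sampled response time per service, shared by all paths
        for path in paths.values():
            for service, mean, sd in path:
                if service not in drawn:
                    drawn[service] = max(0.0, random.gauss(mean, sd))
        durations = {name: sum(drawn[service] for service, _, _ in path)
                     for name, path in paths.items()}
        critical[max(durations, key=durations.get)] += 1
    return {name: critical[name] / samples for name in paths}

print(estimate_criticalities(paths))
```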
  • ABSTRACT: Recommendation systems in software engineering (SE) should be designed to integrate evidence into practitioners’ experience. Bayesian networks (BNs) provide a natural statistical framework for evidence-based decision-making by incorporating an integrated summary of the available evidence and associated uncertainty (of consequences). In this study, we follow the lead of computational biology and healthcare decision-making, and investigate the applications of BNs in SE in terms of 1) main software engineering challenges addressed, 2) techniques used to learn causal relationships among variables, 3) techniques used to infer the parameters, and 4) variable types used as BN nodes. We conduct a systematic mapping study to investigate each of these four facets and compare the current usage of BNs in SE with these two domains. Subsequently, we highlight the main limitations of the usage of BNs in SE and propose a Hybrid BN to improve evidence-based decision-making in SE. In two industrial cases, we build sample hybrid BNs and evaluate their performance. The results of our empirical analyses show that hybrid BNs are powerful frameworks that combine expert knowledge with quantitative data. As researchers in SE become more aware of the underlying dynamics of BNs, the proposed models will also advance and naturally contribute to evidence-based decision-making.
    IEEE Transactions on Software Engineering 01/2014; 40(6):533-554.
  • ABSTRACT: It is widely believed that refactoring improves software quality and developer productivity. However, few empirical studies quantitatively assess refactoring benefits or investigate developers’ perception towards these benefits. This paper presents a field study of refactoring benefits and challenges at Microsoft through three complementary study methods: a survey, semi-structured interviews with professional software engineers, and quantitative analysis of version history data. Our survey finds that the refactoring definition in practice is not confined to a rigorous definition of semantics-preserving code transformations and that developers perceive that refactoring involves substantial cost and risks. We also report on interviews with a designated refactoring team that has led a multi-year, centralized effort on refactoring Windows. The quantitative analysis of Windows 7 version history finds the top 5 percent of preferentially refactored modules experience higher reduction in the number of inter-module dependencies and several complexity measures but increase size more than the bottom 95 percent. This indicates that measuring the impact of refactoring requires multi-dimensional assessment.
    IEEE Transactions on Software Engineering 01/2014; 40(7):633-649.
  • ABSTRACT: Continuous software process improvement (SPI) practices have been extensively prescribed to improve performance of software projects. However, SPI implementation mechanisms have received little scholarly attention, especially in the context of distributed software product development. We took an action research approach to study the SPI journey of a large multinational enterprise that adopted a distributed product development strategy. We describe the interventions and action research cycles enacted over a period of five years in collaboration with the firm, which resulted in a custom SPI framework that catered to both the social and technical needs of the firm's distributed teams. Institutionalizing the process maturity framework got stalled initially because the SPI initiatives were perceived by product line managers as a mechanism for exercising wider controls by the firm's top management. The implementation mechanism was subsequently altered to co-opt product line managers, which contributed to a wider adoption of the SPI framework. Insights that emerge from our analysis of the firm's SPI journey pertain to the integration of the technical and social views of software development, preserving process diversity through the use of a multi-tiered, non-blueprint approach to SPI, the linkage between key process areas and project control modes, and the role of SPI in aiding organizational learning.
    IEEE Transactions on Software Engineering 01/2014; 40(3):235-250.
  • ABSTRACT: In software testing, something which can verify the correctness of test case execution results is called an oracle. The oracle problem occurs when either an oracle does not exist, or exists but is too expensive to be used. Metamorphic testing is a testing approach which uses metamorphic relations, properties of the software under test represented in the form of relations among inputs and outputs of multiple executions, to help verify the correctness of a program. This paper presents new empirical evidence to support this approach, which has been used to alleviate the oracle problem in various applications and to enhance several software analysis and testing techniques. It has been observed that identification of a sufficient number of appropriate metamorphic relations for testing, even by inexperienced testers, was possible with a very small amount of training. Furthermore, the cost-effectiveness of the approach could be enhanced through the use of more diverse metamorphic relations. The empirical studies presented in this paper clearly show that a small number of diverse metamorphic relations, even those identified in an ad hoc manner, had a similar fault-detection capability to a test oracle, and could thus effectively help alleviate the oracle problem.
    IEEE Transactions on Software Engineering 01/2014; 40(1):4-22.
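A metamorphic relation, as used in the abstract above, checks a property that must hold across the outputs of multiple executions when no oracle is available for a single output. A minimal, generic illustration (not taken from the paper) is the identity sin(x) = sin(π − x): without knowing the correct value of sin(x), violations of the relation still expose faults in the implementation under test.

```python
import math
import random

# Minimal metamorphic relation: without an oracle for sin(x) itself, we can
# still check that sin(x) == sin(pi - x) holds across paired executions.
def check_sine_metamorphic_relation(trials=1_000, tol=1e-9):
    for _ in range(trials):
        x = random.uniform(-10, 10)
        if abs(math.sin(x) - math.sin(math.pi - x)) > tol:
            return False  # relation violated: likely a fault in the implementation
    return True

print(check_sine_metamorphic_relation())  # True for the library sin
```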
  • ABSTRACT: A software product line is an engineering approach to efficient development of software product portfolios. Key to the success of the approach is to identify the common and variable features of the products and the interdependencies between them, which are usually modeled using feature models. Implicitly, such models also include valuable information that can be used by economic models to estimate the payoffs of a product line. Unfortunately, as product lines grow, analyzing large feature models manually becomes impracticable. This paper proposes an algorithm to compute the total number of products that a feature model represents and, for each feature, the number of products that implement it. The inference of both parameters is helpful to describe the standardization/parameterization balance of a product line, detect scope flaws, assess the product line incremental development, and improve the accuracy of economic models. The paper reports experimental evidence that our algorithm has better runtime performance than existing alternative approaches.
    IEEE Transactions on Software Engineering 01/2014; 40(9):895-910.
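The abstract above concerns counting the products a feature model represents. For a tree-shaped model without cross-tree constraints (a simplification of the general problem the paper tackles), the count composes recursively over mandatory, optional, xor, and or relations; the sketch below illustrates that recursion with an invented "car" example, not one from the paper.

```python
# Product counting for a tree-shaped feature model without cross-tree
# constraints (a simplification of the general problem the paper addresses).
# A feature is (name, kind, children); for kind "and" each child is a
# (child_feature, "mandatory" | "optional") pair, for "xor"/"or" the children
# are plain features forming the group.

def count_products(feature):
    name, kind, children = feature
    if not children:
        return 1
    if kind == "and":
        total = 1
        for child, status in children:
            c = count_products(child)
            total *= c if status == "mandatory" else c + 1  # +1: child absent
        return total
    if kind == "xor":  # exactly one child of the group is selected
        return sum(count_products(child) for child in children)
    if kind == "or":   # any non-empty subset of the group is selected
        prod = 1
        for child in children:
            prod *= count_products(child) + 1
        return prod - 1  # subtract the empty selection
    raise ValueError(kind)

# Tiny example: mandatory "engine" (xor: gas | electric) and optional
# "sunroof" -> 2 engine choices * 2 sunroof choices = 4 products.
car = ("car", "and", [
    (("engine", "xor", [("gas", "and", []), ("electric", "and", [])]), "mandatory"),
    (("sunroof", "and", []), "optional"),
])
print(count_products(car))  # 4
```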
  • ABSTRACT: Code clones have always been a double edged sword in software development. On one hand, it is a very convenient way to reuse existing code, and to save coding effort. On the other hand, since developers may need to ensure consistency among cloned code segments, code clones can lead to extra maintenance effort and even bugs. Recent studies on the evolution of code clones show that only some of the code clones experience consistent changes during their evolution history. Therefore, if we can accurately predict whether a code clone will experience consistent changes, we will be able to provide useful recommendations to developers on leveraging the convenience of some code cloning operations, while avoiding other code cloning operations to reduce future consistency maintenance effort. In this paper, we define a code cloning operation as consistency-maintenance-required if its generated code clones experience consistent changes in the software evolution history, and we propose a novel approach that automatically predicts whether a code cloning operation requires consistency maintenance at the time point of performing copy-and-paste operations. Our insight is that whether a code cloning operation requires consistency maintenance may relate to the characteristics of the code to be cloned and the characteristics of its context. Based on a number of attributes extracted from the cloned code and the context of the code cloning operation, we use Bayesian Networks, a machine-learning technique, to predict whether an intended code cloning operation requires consistency maintenance. We evaluated our approach on four subjects—two large-scale Microsoft software projects, and two popular open-source software projects—under two usage scenarios: 1) recommend developers to perform only the cloning operations predicted to be very likely to be consistency-maintenance-free, and 2) recommend developers to perform all cloning operations unless they are predicted very likely to be consistency-maintenance-required. In the first scenario, our approach is able to recommend developers to perform more than 50 percent of cloning operations with a precision of at least 94 percent in the four subjects. In the second scenario, our approach is able to avoid 37 to 72 percent of consistency-maintenance-required code clones by warning developers on only 13 to 40 percent of code clones, in the four subjects.
    IEEE Transactions on Software Engineering 01/2014; 40(8):773-794.
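The approach above feeds attributes of the cloned code and its context into a Bayesian network to predict whether consistency maintenance will be required. As a loose stand-in only, the sketch below trains a naive Bayes classifier (a heavily restricted form of Bayesian network) on invented boolean attributes; neither the attributes nor the training data reflect those used in the paper.

```python
from collections import defaultdict

# Loose stand-in only: a naive Bayes classifier (a restricted Bayesian network)
# over invented boolean attributes of a cloning operation. Label True means
# "consistency maintenance required". Attributes and data are made up.

def train(samples):
    label_counts = {True: 0, False: 0}
    attr_counts = {True: defaultdict(int), False: defaultdict(int)}
    for attrs, label in samples:
        label_counts[label] += 1
        for key, value in attrs.items():
            attr_counts[label][(key, value)] += 1
    return label_counts, attr_counts

def predict(model, attrs):
    label_counts, attr_counts = model
    total = sum(label_counts.values())
    scores = {}
    for label in (True, False):
        p = label_counts[label] / total
        for key, value in attrs.items():
            # Laplace smoothing avoids zero probabilities for unseen values.
            p *= (attr_counts[label][(key, value)] + 1) / (label_counts[label] + 2)
        scores[label] = p
    return max(scores, key=scores.get)

samples = [({"same_file": True,  "contains_loop": True},  True),
           ({"same_file": True,  "contains_loop": False}, True),
           ({"same_file": False, "contains_loop": False}, False),
           ({"same_file": False, "contains_loop": True},  False)]
model = train(samples)
print(predict(model, {"same_file": True, "contains_loop": True}))  # True
```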
  • ABSTRACT: Software developers access bug reports in a project’s bug repository to help with a number of different tasks, including understanding how previous changes have been made and understanding multiple aspects of particular defects. A developer’s interaction with existing bug reports often requires perusing a substantial amount of text. In this article, we investigate whether it is possible to summarize bug reports automatically so that developers can perform their tasks by consulting shorter summaries instead of entire bug reports. We investigated whether existing conversation-based automated summarizers are applicable to bug reports and found that the quality of generated summaries is similar to summaries produced for e-mail threads and other conversations. We also trained a summarizer on a bug report corpus. This summarizer produces summaries that are statistically better than summaries produced by existing conversation-based generators. To determine if automatically produced bug report summaries can help a developer with their work, we conducted a task-based evaluation that considered the use of summaries for bug report duplicate detection tasks. We found that summaries helped the study participants save time, that there was no evidence that accuracy degraded when summaries were used, and that most participants preferred working with summaries to working with original bug reports.
    IEEE Transactions on Software Engineering 01/2014; 40(4):366-380.
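The summarizers evaluated above are extractive: they score a report's sentences and keep the highest-scoring ones. The toy sketch below ranks sentences by raw word frequency, which is far simpler than the trained, conversation-aware summarizers in the paper, but it shows the general shape of extractive summarization; the sample report text is invented.

```python
import re
from collections import Counter

# Toy extractive summarizer: score each sentence by the average corpus
# frequency of its words and keep the top-scoring sentences in original order.
# Far simpler than the trained summarizers in the paper; the report is invented.

def summarize(report_text, keep=2):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", report_text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", report_text.lower()))
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)
    top = set(sorted(sentences, key=score, reverse=True)[:keep])
    return [s for s in sentences if s in top]

report = ("Crash when saving a large project. The save dialog freezes and the "
          "application crashes. Steps: open a large project, edit it, then save. "
          "Workaround: saving small projects works fine.")
print(summarize(report))
```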

Related Journals