IEEE Transactions on Software Engineering (IEEE T SOFTWARE ENG)

Publisher: IEEE Computer Society, Institute of Electrical and Electronics Engineers

Journal description

The journal covers the specification, design, development, management, testing, maintenance, and documentation of software systems. Topics include programming methodology, software project management, programming environments, hardware and software monitoring, and programming tools. Extensive bibliographies are included.

Current impact factor: 2.29

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 2.292
2012 Impact Factor 2.588
2011 Impact Factor 1.98
2010 Impact Factor 2.216
2009 Impact Factor 3.75
2008 Impact Factor 3.569
2007 Impact Factor 2.105
2006 Impact Factor 2.132
2005 Impact Factor 1.967
2004 Impact Factor 1.503
2003 Impact Factor 1.73
2002 Impact Factor 1.17
2001 Impact Factor 1.398
2000 Impact Factor 1.746
1999 Impact Factor 1.312
1998 Impact Factor 1.153
1997 Impact Factor 1.044

Impact factor over time

[Chart omitted: impact factor plotted by year; values as tabulated above]

Additional details

5-year impact 3.37
Cited half-life 0.00
Immediacy index 0.27
Eigenfactor 0.01
Article influence 1.33
Website IEEE Transactions on Software Engineering website
Other titles IEEE transactions on software engineering, Institute of Electrical and Electronics Engineers transactions on software engineering, Transactions on software engineering, Software engineering
ISSN 0098-5589
OCLC 1434336
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publisher details

Institute of Electrical and Electronics Engineers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on Author's personal website, employer's website, or a publicly accessible server
    • Author's post-print on Author's server or Institutional server
    • Author's pre-print must be removed upon publication of the final version and replaced with either a full citation to the IEEE work with a Digital Object Identifier, a link to the article abstract in IEEE Xplore, or the Author's post-print
    • Author's pre-print must be accompanied by the set phrase, once submitted to IEEE for publication ("This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible")
    • Author's pre-print must be accompanied by the set phrase, when accepted by IEEE for publication ("(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.")
    • IEEE must be informed as to the electronic address of the pre-print
    • If funding rules apply authors may post Author's post-print version in funder's designated repository
    • Author's Post-print - Publisher copyright and source must be acknowledged with citation (see above set statement)
    • Author's Post-print - Must link to publisher version with DOI
    • Publisher's version/PDF cannot be used
    • Publisher copyright and source must be acknowledged
  • Classification
    green

Publications in this journal

  •
    ABSTRACT: Recommendation systems are intended to increase developer productivity by recommending files to edit. These systems mine association rules in software revision histories. However, mining coarse-grained rules using only edit histories produces recommendations with low accuracy, and can only produce recommendations after a developer edits a file. In this work, we explore the use of finer-grained association rules, based on the insight that view histories help characterize the contexts of files to edit. To leverage this additional context and fine-grained association rules, we have developed MI, a recommendation system extending ROSE, an existing edit-based recommendation system. We then conducted a comparative simulation of ROSE and MI using the interaction histories stored in the Eclipse Bugzilla system. The simulation demonstrates that MI predicts the files to edit with significantly higher recommendation accuracy than ROSE (about 63 versus 35 percent), and makes recommendations earlier, often before developers begin editing. Our results clearly demonstrate the value of considering both views and edits in systems that recommend files to edit, which yields more accurate, earlier, and more flexible recommendations.
    IEEE Transactions on Software Engineering 03/2015; 41(3):314-330. DOI:10.1109/TSE.2014.2362138
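
To make the mined-rule idea above concrete, here is a minimal, hypothetical sketch (it is not the authors' MI or ROSE implementation): each transaction is the set of files a developer viewed and edited for one task, and candidate rules are scored by support and confidence. File names and thresholds are invented.

```python
from collections import Counter
from itertools import combinations

# Hypothetical illustration of the idea in the abstract: mine association
# rules from interaction histories, where each transaction is the set of
# files a developer viewed and edited while resolving one task.
def mine_rules(transactions, min_support=2, min_confidence=0.5):
    """Return rules lhs -> rhs with support and confidence over transactions."""
    pair_counts = Counter()
    item_counts = Counter()
    for files in transactions:
        item_counts.update(files)
        pair_counts.update(combinations(sorted(files), 2))
    rules = []
    for (a, b), support in pair_counts.items():
        if support < min_support:
            continue
        for lhs, rhs in ((a, b), (b, a)):
            confidence = support / item_counts[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules

# Each transaction mixes view events ("v:") and edit events ("e:"),
# which is what lets rules fire before the first edit.
history = [
    {"v:Parser.java", "e:Lexer.java", "e:Token.java"},
    {"v:Parser.java", "e:Lexer.java"},
    {"v:Ast.java", "e:Token.java"},
]
for lhs, rhs, s, c in mine_rules(history):
    print(f"{lhs} => {rhs}  (support={s}, confidence={c:.2f})")
```
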
  •
    ABSTRACT: Interoperability is a major concern for the software engineering field, given the increasing need to compose components dynamically and seamlessly. This dynamic composition is often hampered by differences in the interfaces and behaviours of independently developed components. To address these differences without changing the components, mediators are required that systematically enforce interoperability between functionally compatible components by mapping their interfaces and coordinating their behaviours. Existing approaches to mediator synthesis assume that an interface mapping is provided which specifies the correspondence between the operations and data of the components at hand. In this paper, we present an approach based on ontology reasoning and constraint programming in order to infer mappings between components' interfaces automatically. These mappings guarantee semantic compatibility between the operations and data of the interfaces. Then, we analyse the behaviours of components in order to synthesise, if possible, a mediator that coordinates the computed mappings so as to make the components interact properly. Our approach is formally grounded to ensure the correctness of the synthesised mediator. We demonstrate the validity of our approach by implementing the MICS (Mediator synthesIs to Connect Components) tool and applying it to various real-world case studies.
    IEEE Transactions on Software Engineering 03/2015; 41(3):221-240. DOI:10.1109/TSE.2014.2364844
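
The following toy sketch illustrates only the interface-mapping step described above, assuming a tiny hand-written is-a ontology; the actual MICS tool uses full ontology reasoning and constraint programming, and all names below are invented.

```python
# Toy illustration (not the MICS tool): infer an interface mapping by
# checking semantic compatibility of operations against a small invented
# ontology of is-a (subsumption) relations.
SUBSUMES = {  # child -> parent ("is-a") relations
    "CreditCard": "Payment",
    "Invoice": "Document",
}

def compatible(required, provided):
    """A type is compatible with a required concept if it is the same
    concept or a subconcept of it in the ontology (variance is ignored
    here; a real approach must treat inputs and outputs more carefully)."""
    t = provided
    while t is not None:
        if t == required:
            return True
        t = SUBSUMES.get(t)
    return False

client_needs = {"pay": ("Payment", "Receipt")}        # op -> (in, out)
service_offers = {"charge": ("CreditCard", "Receipt")}

mapping = {
    need: offer
    for need, (rin, rout) in client_needs.items()
    for offer, (pin, pout) in service_offers.items()
    if compatible(rin, pin) and compatible(rout, pout)
}
print(mapping)  # {'pay': 'charge'}
```
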
  •
    ABSTRACT: Search-based approaches have been extensively applied to solve the problem of software test-data generation. Yet, test-data generation for object-oriented programming (OOP) is challenging due to features of OOP, e.g., abstraction, encapsulation, and visibility, that prevent direct access to some parts of the source code. To address this problem, we present a new automated search-based software test-data generation approach that achieves high code coverage for unit-class testing. We first describe how we structure the test-data generation problem for unit-class testing to generate relevant sequences of method calls. Through static analysis, we consider only methods or constructors that change the state of the class under test or that may reach a test target. Then we introduce a generator of class instances that is based on a family of means-of-instantiation, including subclasses and external factory methods. It also uses a seeding strategy and a diversification strategy to increase the likelihood of reaching a test target. Using a search heuristic to reach all test targets at the same time, we implement our approach in a tool, JTExpert, which we evaluate on more than a hundred Java classes from different open-source libraries. JTExpert gives better results in terms of search time and code coverage than the state of the art, EvoSuite, which uses traditional techniques.
    IEEE Transactions on Software Engineering 03/2015; 41(3):294-313. DOI:10.1109/TSE.2014.2363479
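
As a rough illustration of search-based unit-class test generation, the sketch below randomly composes method-call sequences and keeps those that cover new branches; the class under test, the branch labels, and the naive random search are all invented stand-ins for JTExpert's far more sophisticated heuristics.

```python
import random

# A minimal sketch of search-based unit-class test generation: randomly
# compose method-call sequences and keep those that cover new branches.
# The class under test and the branch ids are invented for this example.
class Account:
    def __init__(self):
        self.balance = 0
    def deposit(self, n):
        if n > 0:              # branch "deposit+"
            self.balance += n
    def withdraw(self, n):
        if n <= self.balance:  # branch "withdraw-ok"
            self.balance -= n

def run_sequence(seq):
    """Execute a call sequence and report which branches it covered."""
    covered, acct = set(), Account()
    for name, arg in seq:
        if name == "deposit":
            covered.add("deposit+" if arg > 0 else "deposit-")
            acct.deposit(arg)
        else:
            covered.add("withdraw-ok" if arg <= acct.balance else "withdraw-fail")
            acct.withdraw(arg)
    return covered

random.seed(1)
goal = {"deposit+", "deposit-", "withdraw-ok", "withdraw-fail"}
covered, suite = set(), []
while covered < goal:          # loop until every branch is covered
    seq = [(random.choice(["deposit", "withdraw"]), random.randint(-5, 5))
           for _ in range(4)]
    new = run_sequence(seq) - covered
    if new:                    # keep only sequences that add coverage
        suite.append(seq)
        covered |= new
print(f"{len(suite)} tests cover {sorted(covered)}")
```
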
  •
    ABSTRACT: A test suite is $m$-complete for finite state machine (FSM) $M$ if it distinguishes between $M$ and all faulty FSMs with $m$ states or fewer. While there are several algorithms that generate $m$-complete test suites, they cannot be directly used in distributed testing since there can be additional controllability and observability problems. Indeed, previous results show that there is no general method for generating an $m$-complete test suite for distributed testing and so the focus has been on conditions under which this is possible. This paper takes a different approach, which is to generate what we call $c_m$-complete test suites: controllable test suites that distinguish an FSM $N$ with no more than $m$ states from $M$ if this is possible in controllable testing. Thus, under the hypothesis that the system under test has no more than $m$ states, a $c_m$-complete test suite achieves as much as is possible given the restriction that testing should be controllable. We show how the problem of generating a $c_m$-complete test suite can be mapped to the problem of generating an $m$-complete test suite for a partial FSM. Thus, standard test suite generation methods can be adapted for use in distributed testing.
    IEEE Transactions on Software Engineering 03/2015; 41(3):279-293. DOI:10.1109/TSE.2014.2364035
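
The sketch below illustrates the basic notion the abstract builds on, namely that a test suite distinguishes an FSM $M$ from a faulty mutant $N$ when some input sequence produces different outputs; the Mealy machines and the test suite are invented, and distributed controllability is not modelled.

```python
# Invented example: a test suite distinguishes FSM M from a mutant N if
# some input sequence yields different outputs. Machines are Mealy FSMs
# encoded as (state, input) -> (output, next_state).
M = {("s0", "a"): ("0", "s1"), ("s0", "b"): ("1", "s0"),
     ("s1", "a"): ("1", "s0"), ("s1", "b"): ("0", "s1")}
N = dict(M); N[("s1", "a")] = ("0", "s0")   # a single output fault

def outputs(fsm, seq, start="s0"):
    """Run an input sequence and return the produced output string."""
    out, state = [], start
    for x in seq:
        o, state = fsm[(state, x)]
        out.append(o)
    return "".join(out)

suite = ["ab", "aa", "ba"]
killed = any(outputs(M, t) != outputs(N, t) for t in suite)
print("suite distinguishes M from N:", killed)  # True, via "aa"
```
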
  •
    ABSTRACT: This paper proposes and validates a model-driven software engineering technique for spreadsheets. The technique that we envision builds on the embedding of spreadsheet models under a widely used spreadsheet system, meaning that we enable the creation and evolution of spreadsheet models under a spreadsheet system. More precisely, we embed ClassSheets, a visual language with a syntax similar to the one offered by common spreadsheets, which was created for specifying spreadsheets. Our embedding allows models and their conforming instances to be developed under the same environment. In practice, this convenient environment enhances evolution steps at the model level while the corresponding instance is automatically co-evolved. Finally, we designed and conducted an empirical study with human users in order to assess our technique in production environments. The results of this study are promising and suggest that productivity gains are realizable under our model-driven spreadsheet development setting.
    IEEE Transactions on Software Engineering 03/2015; 41(3):241-263. DOI:10.1109/TSE.2014.2361141
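
A toy sketch of the model/instance relationship described above, with a "model" reduced to a column-to-type map and co-evolution reduced to filling a default value; ClassSheets themselves are considerably richer, and everything below is invented for illustration.

```python
# Toy sketch (not ClassSheets): a model specifies the shape of a
# spreadsheet, and instances are checked against it. Model and data
# here are invented.
model = {"Item": str, "Qty": int, "Price": float}   # column -> type

def conforms(rows, model):
    """Check that every row has exactly the model's columns and types."""
    cols = list(model)
    return all(
        set(row) == set(cols)
        and all(isinstance(row[c], model[c]) for c in cols)
        for row in rows
    )

sheet = [{"Item": "bolt", "Qty": 10, "Price": 0.15}]
print(conforms(sheet, model))       # True

# A model evolution step (adding a column) can co-evolve instances by
# filling a default value, mirroring the paper's co-evolution idea.
model["Discount"] = float
for row in sheet:
    row.setdefault("Discount", 0.0)
print(conforms(sheet, model))       # True again after co-evolution
```
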
  •
    ABSTRACT: A good understanding of the factors impacting defects in software systems is essential for software practitioners, because it helps them prioritize quality improvement efforts (e.g., testing and code reviews). Defect prediction models are typically built using classification or regression analysis on product and/or process metrics collected at a single point in time (e.g., a release date). However, current defect prediction models only predict whether a defect will occur, but not when, which makes the prioritization of software quality improvement efforts difficult. To address this problem, Koru et al. applied survival analysis techniques to a large number of software systems to study how size (i.e., lines of code) influences the probability that a source code module (e.g., class or file) will experience a defect at any given time. Given that 1) the work of Koru et al. has been instrumental to our understanding of the size-defect relationship, 2) the use of survival analysis in the context of defect modelling has not been well studied and 3) replication studies are an important component of balanced scholarly debate, we present a replication study of the work by Koru et al. In particular, we present the details necessary to use survival analysis in the context of defect modelling (such details were missing from the original paper by Koru et al.). We also explore how differences between the traditional domains of survival analysis (i.e., medicine and epidemiology) and defect modelling impact our understanding of the size-defect relationship. Practitioners and researchers considering the use of survival analysis should be aware of the implications of our findings.
    IEEE Transactions on Software Engineering 02/2015; 41(2):176-197. DOI:10.1109/TSE.2014.2361131
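
A minimal sketch of the kind of survival analysis the replication concerns, assuming the lifelines library (pip install lifelines): a Cox proportional-hazards model relating module size to time-to-defect. The data frame is invented.

```python
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

# Invented data: a Cox proportional-hazards model relating module size
# (lines of code) to the time until the module experiences a defect,
# in the spirit of the size-defect analysis described above.
df = pd.DataFrame({
    "loc":    [120, 950, 300, 2200, 80, 1500, 640, 410],
    "time":   [400, 90, 310, 45, 500, 120, 200, 260],  # days observed
    "defect": [0, 1, 1, 1, 0, 1, 1, 0],  # 1 = defect occurred (event)
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="defect")
cph.print_summary()  # the hazard ratio for "loc" estimates the size effect
```
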
  • IEEE Transactions on Software Engineering 02/2015; 41(2):135-156. DOI:10.1109/TSE.2014.2362924
  •
    ABSTRACT: Software crash reproduction is the necessary first step for debugging. Unfortunately, crash reproduction is often labor intensive. To automate crash reproduction, many techniques have been proposed, including record-replay and post-failure-process approaches. Record-replay approaches can reliably replay recorded crashes, but they incur substantial performance overhead on program executions. Alternatively, post-failure-process approaches analyse crashes only after they have occurred; therefore, they do not incur performance overhead. However, existing post-failure-process approaches still cannot reproduce many crashes in practice because of scalability issues and the object-creation challenge. This paper proposes an automatic crash reproduction framework that uses collected crash stack traces. The proposed approach combines an efficient backward symbolic execution and a novel method-sequence composition approach to generate unit test cases that can reproduce the original crashes without incurring additional runtime overhead. Our evaluation study shows that our approach successfully exploited 31 (59.6 percent) of 52 crashes in three open source projects. Among these exploitable crashes, 22 (42.3 percent) are useful reproductions of the original crashes that reveal the crash-triggering bugs. A comparison study also demonstrates that our approach effectively outperforms existing crash reproduction approaches.
    IEEE Transactions on Software Engineering 02/2015; 41(2):198-220. DOI:10.1109/TSE.2014.2363469
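
The sketch below only mimics the input/output shape of stack-trace-based crash reproduction: parse the crashing frame and emit a unit-test skeleton that should re-trigger the exception. The paper's backward symbolic execution and method-sequence composition, which actually find the crashing inputs, are not reproduced here; the trace and class names are invented.

```python
import re

# Invented trace: this sketches only the input/output shape of
# stack-trace-based crash reproduction, not the paper's symbolic
# execution. We parse the crashing frame and print a test skeleton.
TRACE = """java.lang.NullPointerException
\tat com.example.Parser.parse(Parser.java:42)
\tat com.example.Main.run(Main.java:10)"""

exception = TRACE.splitlines()[0]
match = re.match(r"\tat ([\w.]+)\.(\w+)\(", TRACE.splitlines()[1])
cls, method = match.group(1), match.group(2)

print(f"""import org.junit.Test;

public class ReproduceCrashTest {{
    @Test(expected = {exception}.class)
    public void reproduces_{method}_crash() {{
        // the crashing argument must still be searched for / synthesized
        new {cls.rsplit('.', 1)[-1]}().{method}(/* crashing input */);
    }}
}}""")
```
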
  •
    ABSTRACT: Model-based testing employs models for testing. Model-based mutation testing (MBMT) additionally involves fault models, called mutants, by applying mutation operators to the original model. A problem encountered with MBMT is the elimination of equivalent mutants and multiple mutants modeling the same faults. Another problem is the need to compare a mutant to the original model for test generation. This paper proposes an event-based approach to MBMT that is not fixed on single events and a single model but rather operates on sequences of events of length k ≥ 1 and invokes a sequence of models that are derived from the original one by varying its morphology based on k. The approach employs formal grammars, related mutation operators, and algorithms to generate test cases, enabling the following: (1) the exclusion of equivalent mutants and multiple mutants; (2) the generation of a test case in linear time to kill a selected mutant without comparing it to the original model; (3) the analysis of morphologically different models enabling the systematic generation of mutants, thereby extending the set of fault models studied in related literature. Three case studies validate the approach and analyze its characteristics in comparison to random testing and another MBMT approach.
    IEEE Transactions on Software Engineering 02/2015; 41(2):113-134. DOI:10.1109/TSE.2014.2360690
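
As a small illustration of mutation at the event-sequence level, the sketch below applies three invented operators (omission, transposition, insertion) to an event sequence and discards duplicates, echoing the paper's concern with redundant and equivalent mutants; the grammar-based machinery of the actual approach is not modelled.

```python
# Invented events and operators: generate event-sequence mutants by
# omission, neighbour transposition, and insertion, dropping duplicates
# and identity mutants.
EVENTS = ["open", "edit", "save", "close"]

def mutants(seq, alphabet):
    out = set()
    for i in range(len(seq)):
        out.add(tuple(seq[:i] + seq[i+1:]))            # omit event i
        if i + 1 < len(seq):
            s = seq[:]; s[i], s[i+1] = s[i+1], s[i]
            out.add(tuple(s))                          # swap neighbours
        for e in alphabet:
            out.add(tuple(seq[:i] + [e] + seq[i:]))    # insert before i
    out.discard(tuple(seq))                            # drop the original
    return out

for m in sorted(mutants(["open", "edit", "save"], EVENTS))[:5]:
    print(m)
print("total distinct mutants:", len(mutants(["open", "edit", "save"], EVENTS)))
```
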
  •
    ABSTRACT: Zeno runs, where infinitely many actions occur within finite time, may arise in Timed Automata models. Zeno runs are not feasible in reality and must be pruned during system verification. It is thus necessary to check whether a run is Zeno so as to avoid presenting Zeno runs as counterexamples during model checking. Existing approaches to non-Zenoness checking introduce either an additional clock in the Timed Automata model or additional accepting states in the zone graph. In addition, there are approaches proposed for alternative timed modeling languages that could be generalized to Timed Automata. In this work, we investigate the problem of non-Zenoness checking in the context of model checking LTL properties, not only evaluating and comparing existing approaches but also proposing a new method. To enable a systematic evaluation, we develop a software toolkit that supports multiple non-Zenoness checking algorithms. The experimental results show the effectiveness of our newly proposed algorithm, and demonstrate the strengths and weaknesses of the different approaches.
    IEEE Transactions on Software Engineering 01/2015; 41(1):3-18. DOI:10.1109/TSE.2014.2359893
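
A toy check in the spirit of non-Zenoness analysis, assuming the networkx library: in an abstracted zone graph whose edges are flagged with whether time is guaranteed to elapse, a cycle on which no edge lets time pass is Zeno and must not be reported as a counterexample. The graph and the "elapses" flags are invented.

```python
import networkx as nx  # pip install networkx

# Invented abstracted zone graph: each edge carries a flag saying
# whether time is guaranteed to elapse when it is taken. A cycle with
# no time-elapsing edge is a Zeno cycle.
g = nx.DiGraph()
g.add_edge("A", "B", elapses=False)
g.add_edge("B", "A", elapses=False)   # A-B-A: a Zeno loop
g.add_edge("B", "C", elapses=True)
g.add_edge("C", "B", elapses=False)   # B-C-B: time passes, non-Zeno

for cycle in nx.simple_cycles(g):
    edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    zeno = not any(g.edges[u, v]["elapses"] for u, v in edges)
    print(cycle, "-> Zeno" if zeno else "-> non-Zeno")
```
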
  • IEEE Transactions on Software Engineering 01/2015; 41(1):1-2. DOI:10.1109/TSE.2014.2380479
  • IEEE Transactions on Software Engineering 01/2015; DOI:10.1109/TSE.2015.2389225
  • IEEE Transactions on Software Engineering 01/2015; DOI:10.1109/TSE.2015.2412134
  • IEEE Transactions on Software Engineering 01/2015; DOI:10.1109/TSE.2015.2398877
  •
    ABSTRACT: Motivation: To survive and succeed, FLOSS projects need contributors able to accomplish critical project tasks. However, such tasks require the extensive project experience of long term contributors (LTCs). Aim: We measure, understand, and predict how newcomers' involvement and environment in the issue tracking system (ITS) affect their odds of becoming an LTC. Method: ITS data of Mozilla and Gnome, literature, interviews, and online documents were used to design measures of involvement and environment. A logistic regression model was used to explain and predict a contributor's odds of becoming an LTC. We also reproduced the results on new data provided by Mozilla. Results: We constructed nine measures of involvement and environment based on events recorded in an ITS. Macro-climate is the overall project environment, while micro-climate is person-specific and varies among the participants. Newcomers who got at least one issue they reported in their first month fixed doubled their odds of becoming an LTC. A macro-climate with high project popularity and a micro-climate with low attention from peers reduced the odds. The precision of LTC prediction was 38 times higher than that of a random predictor. We were able to reproduce the results with new Mozilla data without losing the significance or predictive power of the previously published model. We encountered unexpected changes in some attributes and suggest ways to make analysis of ITS data more reproducible. Conclusions: The findings suggest the importance of the initial behaviors and experiences of new participants and outline empirically based approaches to help communities recruit contributors for long-term participation and to help participants contribute more effectively. To facilitate the reproduction of the study and of the proposed measures in other contexts, we provide the data we retrieved and the scripts we wrote at https://www.passion-lab.org/projects/developerfluency.html.
    IEEE Transactions on Software Engineering 01/2015; 41(1):82-99. DOI:10.1109/TSE.2014.2349496
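
To make the modelling step concrete, here is a minimal sketch using scikit-learn's logistic regression on invented first-month ITS features; the paper's nine involvement and environment measures are richer, and the data below is invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented features and labels: logistic regression predicting whether
# a newcomer becomes a long-term contributor (LTC) from first-month ITS
# measures, in the spirit of the model described above.
# columns: [issues reported, got an issue fixed (0/1), peer comments]
X = np.array([[1, 1, 4], [3, 0, 0], [2, 1, 7], [1, 0, 1],
              [5, 1, 9], [1, 0, 0], [4, 1, 3], [2, 0, 2]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = became an LTC

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])     # exponentiated coefficients
print("odds ratios per feature:", odds_ratios.round(2))
print("P(LTC) for a newcomer with one fixed issue:",
      model.predict_proba([[1, 1, 2]])[0, 1].round(2))
```
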
  •
    ABSTRACT: When software engineers fix bugs, they may have several options as to how to fix those bugs. Which fix they choose has many implications, both for practitioners and researchers: What is the risk of introducing other bugs during the fix? Is the bug fix in the same code that caused the bug? Is the change fixing the cause or just covering a symptom? In this paper, we investigate alternative fixes to bugs and present an empirical study of how engineers make design choices about how to fix bugs. We start with a motivating case study of the Pex4Fun environment. Then, based on qualitative interviews with 40 engineers working on a variety of products, data from six bug triage meetings, and a survey filled out by 326 Microsoft engineers and 37 developers from other companies, we found a number of factors, many of them non-technical, that influence how bugs are fixed, such as how close to release the software is. We also discuss implications for research and practice, including how to make bug prediction and localization more accurate.
    IEEE Transactions on Software Engineering 01/2015; 41(1):65-81. DOI:10.1109/TSE.2014.2357438
  • IEEE Transactions on Software Engineering 01/2015; DOI:10.1109/TSE.2014.2357445