IEEE Transactions on Software Engineering Journal Impact Factor & Information

Publisher: IEEE Computer Society, Institute of Electrical and Electronics Engineers

Journal description

The journal covers the specification, design, development, management, testing, maintenance, and documentation of software systems. Topics include programming methodology, software project management, programming environments, hardware and software monitoring, and programming tools. Issues also feature extensive bibliographies.

Current impact factor: 2.29

Impact Factor Rankings

Year         Impact Factor
2015         Available summer 2015
2013/2014    2.292
2012         2.588
2011         1.98
2010         2.216
2009         3.75
2008         3.569
2007         2.105
2006         2.132
2005         1.967
2004         1.503
2003         1.73
2002         1.17
2001         1.398
2000         1.746
1999         1.312
1998         1.153
1997         1.044

[Chart: impact factor over time, plotted by year]

Additional details

5-year impact: 3.37
Cited half-life: 0.00
Immediacy index: 0.27
Eigenfactor: 0.01
Article influence: 1.33
Website: IEEE Transactions on Software Engineering website
Other titles: IEEE transactions on software engineering, Institute of Electrical and Electronics Engineers transactions on software engineering, Transactions on software engineering, Software engineering
ISSN: 0098-5589
OCLC: 1434336
Material type: Periodical, Internet resource
Document type: Journal / Magazine / Newspaper, Internet resource

Publisher details

Institute of Electrical and Electronics Engineers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on Author's personal website, employer's website, or publicly accessible server
    • Author's post-print on Author's server or institutional server
    • Author's pre-print must be removed upon publication of the final version and replaced with either a full citation to the IEEE work (with a Digital Object Identifier or a link to the article abstract in IEEE Xplore) or with the Author's post-print
    • Author's pre-print must be accompanied by the set phrase, once submitted to IEEE for publication ("This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible")
    • Author's pre-print must be accompanied by the set phrase, when accepted by IEEE for publication ("(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.")
    • IEEE must be informed of the electronic address of the pre-print
    • If funding rules apply, authors may post the Author's post-print version in the funder's designated repository
    • Author's post-print: publisher copyright and source must be acknowledged with citation (see set statement above)
    • Author's post-print: must link to the publisher version with a DOI
    • Publisher's version/PDF cannot be used
    • Publisher copyright and source must be acknowledged
  • Classification
    green

Publications in this journal

  •
    ABSTRACT: Software systems are increasingly regulated. Software engineers therefore must determine which requirements have met or exceeded their legal obligations and which requirements have not. Requirements that have met or exceeded their legal obligations are legally implementation ready, whereas requirements that have not met or exceeded their legal obligations need further refinement. In this paper, we examine how software engineers make these determinations using a multi-case study with three cases. Each case involves assessment of requirements for an electronic health record system that must comply with the US Health Insurance Portability and Accountability Act (HIPAA) and is measured against the evaluations of HIPAA compliance subject matter experts. Our first case examines how individual graduate-level software engineering students assess whether the requirements met or exceeded their HIPAA obligations. Our second case replicates the findings from our first case using a different set of participants. Our third case examines how graduate-level software engineering students assess requirements using the Wideband Delphi approach to deriving consensus in groups. Our findings suggest that the average graduate-level software engineering student is ill-prepared to write legally compliant software with any confidence and that domain experts are an absolute necessity.
    IEEE Transactions on Software Engineering 06/2015; 41(6):545-564. DOI:10.1109/TSE.2014.2383374
  •
    ABSTRACT: To prevent ill-formed configurations, highly configurable software often allows defining constraints over the available options. As these constraints can be complex, fixing a configuration that violates one or more constraints can be challenging. Although several fix-generation approaches exist, their applicability is limited because (1) they typically generate only one fix or a very long fix list, making it difficult for the user to identify the desirable fix; and (2) they do not fully support non-Boolean constraints, which contain arithmetic, inequality, and string operators. This paper proposes a novel concept, range fix, for software configuration. A range fix specifies the options to change and the ranges of values for these options. We also design an algorithm that automatically generates range fixes for a violated constraint. We have evaluated our approach with three different strategies for handling constraint interactions, on data from nine open source projects over two configuration platforms. The evaluation shows that our notion of range fix leads to mostly simple yet complete sets of fixes, and our algorithm is able to generate fixes within one second for configuration systems with a few thousand options and constraints.
    IEEE Transactions on Software Engineering 06/2015; 41(6):603-619. DOI:10.1109/TSE.2014.2383381
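
    The range-fix idea described above can be illustrated with a deliberately brute-force sketch (not the paper's algorithm): given a violated non-Boolean constraint over integer options, scan a bounded domain for each option while holding the others fixed, and report the value ranges that restore satisfaction. All option names, bounds, and the constraint below are invented for illustration.

    ```python
    def range_fixes(config, constraint, bounds):
        """For each option, the ranges of values that satisfy `constraint`
        when only that option is changed (a single-option range fix)."""
        fixes = {}
        for opt, (lo, hi) in bounds.items():
            ranges = []
            for v in range(lo, hi + 1):
                if constraint({**config, opt: v}):
                    if ranges and v == ranges[-1][1] + 1:
                        ranges[-1] = (ranges[-1][0], v)   # extend the current range
                    else:
                        ranges.append((v, v))             # start a new range
            if ranges:
                fixes[opt] = ranges
        return fixes

    # Hypothetical non-Boolean constraint: cache_mb + heap_mb <= 512.
    constraint = lambda c: c["cache_mb"] + c["heap_mb"] <= 512
    config = {"cache_mb": 400, "heap_mb": 256}            # violates the constraint
    bounds = {"cache_mb": (0, 1024), "heap_mb": (0, 1024)}
    print(range_fixes(config, constraint, bounds))
    # {'cache_mb': [(0, 256)], 'heap_mb': [(0, 112)]}
    ```
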
  •
    ABSTRACT: We provide a subclass of parametric timed automata (PTA) that we can actually and efficiently analyze, and we argue that it retains most of the practical usefulness of PTA for the modeling of real-time systems. The currently most useful known subclass of PTA, L/U automata, has a strong syntactical restriction for practical purposes, and we show that the associated theoretical results are mixed. We therefore advocate for a different restriction scheme: since in classical timed automata, real-valued clocks are always compared to integers for all practical purposes, we also search for parameter values as bounded integers. We show that the problem of the existence of parameter values such that some TCTL property is satisfied is PSPACE-complete. In such a setting, we can of course synthesize all the values of parameters and we give symbolic algorithms, for reachability and unavoidability properties, to do it efficiently, i.e., without an explicit enumeration. This also has the practical advantage of giving the result as symbolic constraints between the parameters. We finally report on a few experimental results to illustrate the practical usefulness of our approach.
    IEEE Transactions on Software Engineering 05/2015; 41(5):1-1. DOI:10.1109/TSE.2014.2357445
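
    A hedged toy sketch of the "bounded integer parameters" restriction discussed above: once parameters range over integers in a known interval, the valuations satisfying a property can in principle be found by exhaustive enumeration (the paper instead computes them symbolically, without explicit enumeration). The property below is a stand-in inequality, not a real timed-automaton reachability check.

    ```python
    from itertools import product

    def synthesize(bound, holds):
        """All integer parameter pairs (p1, p2) in [0, bound]^2 for which `holds` is true."""
        return [(p1, p2)
                for p1, p2 in product(range(bound + 1), repeat=2)
                if holds(p1, p2)]

    # Stand-in property: "deadline p2 leaves room for a task of length p1 plus 2 time units".
    good = synthesize(bound=5, holds=lambda p1, p2: p1 + 2 <= p2)
    print(good)   # [(0, 2), (0, 3), ..., (3, 5)]
    ```
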
  •
    ABSTRACT: Background. Slice-based cohesion metrics leverage program slices with respect to the output variables of a module to quantify the strength of functional relatedness of the elements within the module. Although slice-based cohesion metrics have been proposed for many years, few empirical studies have been conducted to examine their actual usefulness in predicting fault-proneness. Objective. We aim to provide an in-depth understanding of the ability of slice-based cohesion metrics in effort-aware post-release fault-proneness prediction, i.e., their effectiveness in helping practitioners find post-release faults when taking into account the effort needed to test or inspect the code. Method. We use the most commonly used code and process metrics, including size, structural complexity, Halstead's software science, and code churn metrics, as the baseline metrics. First, we employ principal component analysis to analyze the relationships between slice-based cohesion metrics and the baseline metrics. Then, we use univariate prediction models to investigate the correlations between slice-based cohesion metrics and post-release fault-proneness. Finally, we build multivariate prediction models to examine the effectiveness of slice-based cohesion metrics in effort-aware post-release fault-proneness prediction when used alone or together with the baseline code and process metrics. Results. Based on open-source software systems, our results show that: 1) slice-based cohesion metrics are not redundant with respect to the baseline code and process metrics; 2) most slice-based cohesion metrics are significantly negatively related to post-release fault-proneness; 3) slice-based cohesion metrics in general do not outperform the baseline metrics when predicting post-release fault-proneness; and 4) when used together with the baseline metrics, however, slice-based cohesion metrics can produce a statistically significant and practically important improvement of the effectiveness in effort-aware post-release fault-proneness prediction. Conclusion. Slice-based cohesion metrics are complementary to the most commonly used code and process metrics and are of practical value in the context of effort-aware post-release fault-proneness prediction.
    IEEE Transactions on Software Engineering 04/2015; 41(4):331-357. DOI:10.1109/TSE.2014.2370048
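
    Two classic slice-based cohesion metrics of the kind studied above (Tightness and Coverage, in the style of Ott and Thuss) are easy to state over sets of line numbers. The sketch below uses hand-made slices rather than output from a real program slicer, purely to show the mechanics.

    ```python
    def tightness(slices, module_len):
        """|intersection of all output-variable slices| / module length."""
        common = set.intersection(*slices)
        return len(common) / module_len

    def coverage(slices, module_len):
        """Mean slice size relative to module length."""
        return sum(len(s) for s in slices) / (len(slices) * module_len)

    # Hypothetical 10-line module with two output variables.
    slices = [{1, 2, 3, 4, 7, 10}, {1, 2, 5, 6, 10}]
    print(tightness(slices, 10))   # 0.3  (lines 1, 2, 10 appear in every slice)
    print(coverage(slices, 10))    # 0.55
    ```
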
  •
    ABSTRACT: The mobile apps market is one of the fastest growing areas in information technology. In competing for market share, developers must pay attention to building robust and reliable apps. In fact, users easily get frustrated by repeated failures, crashes, and other bugs; hence, they abandon some apps in favor of their competition. In this paper, we investigate how the fault- and change-proneness of APIs used by Android apps relates to their success, estimated as the average rating provided by users to those apps. First, in a study conducted on 5,848 (free) apps, we analyzed how the ratings that an app had received correlated with the fault- and change-proneness of the APIs the app relied upon. After that, we surveyed 45 professional Android developers to assess (i) to what extent developers experienced problems when using APIs, and (ii) how much they felt these problems could be the cause of unfavorable user ratings. The results of our studies indicate that apps having high user ratings use APIs that are less fault- and change-prone than the APIs used by low-rated apps. Also, most of the interviewed Android developers observed, in their development experience, a direct relationship between problems experienced with the adopted APIs and the users’ ratings that their apps received.
    IEEE Transactions on Software Engineering 04/2015; 41(4):384-407. DOI:10.1109/TSE.2014.2367027
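
    A hedged sketch of the kind of correlation analysis behind the study above: relate each app's average user rating to the change-proneness of the APIs it uses. The numbers are invented; only the mechanics (Spearman's rank correlation via SciPy) are shown, not the paper's actual analysis.

    ```python
    from scipy.stats import spearmanr

    # Hypothetical apps: average rating vs. mean number of changes in the APIs each app uses.
    ratings     = [4.6, 4.3, 4.1, 3.8, 3.2, 2.9, 2.5]
    api_changes = [3,   5,   4,   9,   12,  10,  18]

    rho, p_value = spearmanr(ratings, api_changes)
    print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
    # A strongly negative rho would mirror the finding that low-rated apps
    # tend to rely on more change-prone APIs.
    ```
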
  •
    ABSTRACT: It is a staple development practice to log system behavior. Numerous powerful model-inference algorithms have been proposed to aid developers in log analysis and system understanding. Unfortunately, existing algorithms are typically declared procedurally, making them difficult to understand, extend, and compare. This paper presents InvariMint, an approach to specify model-inference algorithms declaratively. We applied the InvariMint declarative approach to two model-inference algorithms. The evaluation results illustrate that InvariMint (1) leads to new fundamental insights and better understanding of existing algorithms, (2) simplifies creation of new algorithms, including hybrids that combine or extend existing algorithms, and (3) makes it easy to compare and contrast previously published algorithms. InvariMint’s declarative approach can outperform procedural implementations. For example, on a log of 50,000 events, InvariMint’s declarative implementation of the kTails algorithm completes in 12 seconds, while a procedural implementation completes in 18 minutes. We also found that InvariMint’s declarative version of the Synoptic algorithm can be over 170 times faster than the procedural implementation.
    IEEE Transactions on Software Engineering 04/2015; 41(4):408-428. DOI:10.1109/TSE.2014.2369047
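
    The kTails algorithm mentioned in the comparison above can be sketched in a simplified, one-pass form: prefixes of the logged traces are merged into one state whenever they share the same set of length-≤k continuations. This is an illustrative rendition of the idea, not InvariMint's declarative formulation.

    ```python
    from collections import defaultdict

    def ktails(traces, k):
        """Infer a finite-state model by merging trace prefixes that share
        the same set of length-<=k continuations (the kTails idea)."""
        tails = defaultdict(set)
        for trace in traces:
            for i in range(len(trace) + 1):
                tails[tuple(trace[:i])].add(tuple(trace[i:i + k]))
        states = {}
        def sid(prefix):                  # state id = equivalence class of k-tails
            return states.setdefault(frozenset(tails[prefix]), len(states))
        transitions = set()
        for trace in traces:
            for i, event in enumerate(trace):
                transitions.add((sid(tuple(trace[:i])), event, sid(tuple(trace[:i + 1]))))
        return transitions

    logs = [["open", "read", "close"],
            ["open", "write", "close"],
            ["open", "read", "read", "close"]]
    for src, event, dst in sorted(ktails(logs, k=1)):
        print(f"s{src} --{event}--> s{dst}")
    ```
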
  •
    ABSTRACT: Interoperability is a major concern for the software engineering field, given the increasing need to compose components dynamically and seamlessly. This dynamic composition is often hampered by differences in the interfaces and behaviours of independently-developed components. To address these differences without changing the components, mediators that systematically enforce interoperability between functionally-compatible components by mapping their interfaces and coordinating their behaviours are required. Existing approaches to mediator synthesis assume that an interface mapping is provided which specifies the correspondence between the operations and data of the components at hand. In this paper, we present an approach based on ontology reasoning and constraint programming in order to infer mappings between components’ interfaces automatically. These mappings guarantee semantic compatibility between the operations and data of the interfaces. Then, we analyse the behaviours of components in order to synthesise, if possible, a mediator that coordinates the computed mappings so as to make the components interact properly. Our approach is formally-grounded to ensure the correctness of the synthesised mediator. We demonstrate the validity of our approach by implementing the MICS (Mediator synthesIs to Connect Components) tool and experimenting with it on various real-world case studies.
    IEEE Transactions on Software Engineering 03/2015; 41(3):221-240. DOI:10.1109/TSE.2014.2364844
  •
    ABSTRACT: This paper proposes and validates a model-driven software engineering technique for spreadsheets. The technique that we envision builds on the embedding of spreadsheet models under a widely used spreadsheet system. This means that we enable the creation and evolution of spreadsheet models under a spreadsheet system. More precisely, we embed ClassSheets, a visual language with a syntax similar to the one offered by common spreadsheets, that was created with the aim of specifying spreadsheets. Our embedding allows models and their conforming instances to be developed under the same environment. In practice, this convenient environment enhances evolution steps at the model level while the corresponding instance is automatically co-evolved. Finally, we have designed and conducted an empirical study with human users in order to assess our technique in production environments. The results of this study are promising and suggest that productivity gains are realizable under our model-driven spreadsheet development setting.
    IEEE Transactions on Software Engineering 03/2015; 41(3):241-263. DOI:10.1109/TSE.2014.2361141
  •
    ABSTRACT: A test suite is $m$-complete for finite state machine (FSM) $M$ if it distinguishes between $M$ and all faulty FSMs with $m$ states or fewer. While there are several algorithms that generate $m$-complete test suites, they cannot be directly used in distributed testing since there can be additional controllability and observability problems. Indeed, previous results show that there is no general method for generating an $m$-complete test suite for distributed testing and so the focus has been on conditions under which this is possible. This paper takes a different approach, which is to generate what we call $c_m$-complete test suites: controllable test suites that distinguish an FSM $N$ with no more than $m$ states from $M$ if this is possible in controllable testing. Thus, under the hypothesis that the system under test has no more than $m$ states, a $c_m$-complete test suite achieves as much as is possible given the restriction that testing should be controllable. We show how the problem of generating a $c_m$-complete test suite can be mapped to the problem of generating an $m$-complete test suite for a partial FSM. Thus, standard test suite generation methods can be adapted for use in distributed testing.
    IEEE Transactions on Software Engineering 03/2015; 41(3):279-293. DOI:10.1109/TSE.2014.2364035
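
    A hedged sketch of the controllability notion behind the controllable test suites above: in distributed testing, the tester at a port can apply the next input only if it took part in the previous step, i.e., it sent the previous input or observed part of its output. The two-port test sequence format below is invented for illustration.

    ```python
    def is_controllable(steps):
        """steps: list of (input_port, outputs) pairs, where outputs maps ports
        to the output observed at that port (None if no output there)."""
        for (prev_port, prev_out), (next_port, _) in zip(steps, steps[1:]):
            involved = {prev_port} | {p for p, o in prev_out.items() if o is not None}
            if next_port not in involved:
                return False
        return True

    # Ports U (upper) and L (lower).
    seq_ok  = [("U", {"U": "a", "L": "b"}), ("L", {"L": "c", "U": None})]
    seq_bad = [("U", {"U": "a", "L": None}), ("L", {"L": "c", "U": None})]
    print(is_controllable(seq_ok))   # True:  L observed output 'b' in step 1
    print(is_controllable(seq_bad))  # False: L had no way to know when to act
    ```
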
  •
    ABSTRACT: Search-based approaches have been extensively applied to solve the problem of software test-data generation. Yet, test-data generation for object-oriented programming (OOP) is challenging due to the features of OOP, e.g., abstraction, encapsulation, and visibility that prevent direct access to some parts of the source code. To address this problem we present a new automated search-based software test-data generation approach that achieves high code coverage for unit-class testing. We first describe how we structure the test-data generation problem for unit-class testing to generate relevant sequences of method calls. Through a static analysis, we consider only methods or constructors changing the state of the class-under-test or that may reach a test target. Then we introduce a generator of instances of classes that is based on a family of means-of-instantiation including subclasses and external factory methods. It also uses a seeding strategy and a diversification strategy to increase the likelihood to reach a test target. Using a search heuristic to reach all test targets at the same time, we implement our approach in a tool, JTExpert, that we evaluate on more than a hundred Java classes from different open-source libraries. JTExpert gives better results in terms of search time and code coverage than the state of the art, EvoSuite, which uses traditional techniques.
    IEEE Transactions on Software Engineering 03/2015; 41(3):294-313. DOI:10.1109/TSE.2014.2363479
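
    The underlying search-based idea can be shown with a toy: hill climbing on a classic branch-distance fitness to cover a hard-to-reach branch. This is a generic illustration only; JTExpert itself works on sequences of method calls and instance generation for whole classes.

    ```python
    import random

    def branch_distance(x, y):
        """0 when the target branch `x == 42 and y > x` is taken; > 0 otherwise."""
        d = abs(x - 42)                      # distance for x == 42
        d += 0 if y > x else (x - y + 1)     # distance for y > x
        return d

    def search(budget=50_000):
        x, y = random.randint(-1000, 1000), random.randint(-1000, 1000)
        best = branch_distance(x, y)
        for _ in range(budget):
            nx, ny = x + random.randint(-10, 10), y + random.randint(-10, 10)
            d = branch_distance(nx, ny)
            if d < best:                     # keep the neighbour if it is closer
                x, y, best = nx, ny, d
            if best == 0:
                return x, y
        return None

    print(search())   # inputs covering the branch, or None if the budget runs out
    ```
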
  •
    ABSTRACT: A good understanding of the factors impacting defects in software systems is essential for software practitioners, because it helps them prioritize quality improvement efforts (e.g., testing and code reviews). Defect prediction models are typically built using classification or regression analysis on product and/or process metrics collected at a single point in time (e.g., a release date). However, current defect prediction models only predict if a defect will occur, but not when, which makes the prioritization of software quality improvement efforts difficult. To address this problem, Koru et al. applied survival analysis techniques to a large number of software systems to study how size (i.e., lines of code) influences the probability that a source code module (e.g., class or file) will experience a defect at any given time. Given that 1) the work of Koru et al. has been instrumental to our understanding of the size-defect relationship, 2) the use of survival analysis in the context of defect modelling has not been well studied, and 3) replication studies are an important component of balanced scholarly debate, we present a replication study of the work by Koru et al. In particular, we present the details necessary to use survival analysis in the context of defect modelling (such details were missing from the original paper by Koru et al.). We also explore how differences between the traditional domains of survival analysis (i.e., medicine and epidemiology) and defect modelling impact our understanding of the size-defect relationship. Practitioners and researchers considering the use of survival analysis should be aware of the implications of our findings.
    IEEE Transactions on Software Engineering 02/2015; 41(2):176-197. DOI:10.1109/TSE.2014.2361131
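
    A hedged sketch of the survival-analysis setup discussed above: a Cox proportional-hazards model of the time until a module's first post-release defect, with log(LOC) as the covariate, in the spirit of Koru et al. The data frame is tiny and entirely synthetic, and the sketch assumes the pandas, numpy, and lifelines packages.

    ```python
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    # Synthetic modules: observation time, whether a defect was found (1) or the
    # module was censored (0), and log-transformed size.
    modules = pd.DataFrame({
        "days_observed": [120, 340, 90, 400, 60, 250],
        "defect_found":  [1,   0,   1,  0,   1,  1],
        "log_loc":       np.log([850, 1200, 300, 60, 2300, 400]),
    })

    cph = CoxPHFitter()
    cph.fit(modules, duration_col="days_observed", event_col="defect_found")
    cph.print_summary()   # hazard ratio for log_loc: how size relates to defect risk
    ```
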
  •
    ABSTRACT: Software crash reproduction is the necessary first step for debugging. Unfortunately, crash reproduction is often labor intensive. To automate crash reproduction, many techniques have been proposed including record-replay and post-failure-process approaches. Record-replay approaches can reliably replay recorded crashes, but they incur substantial performance overhead to program executions. Alternatively, post-failure-process approaches analyse crashes only after they have occurred. Therefore they do not incur performance overhead. However, existing post-failure-process approaches still cannot reproduce many crashes in practice because of scalability issues and the object creation challenge. This paper proposes an automatic crash reproduction framework using collected crash stack traces. The proposed approach combines an efficient backward symbolic execution and a novel method sequence composition approach to generate unit test cases that can reproduce the original crashes without incurring additional runtime overhead. Our evaluation study shows that our approach successfully exploited 31 (59.6 percent) of 52 crashes in three open source projects. Among these exploitable crashes, 22 (42.3 percent) are useful reproductions of the original crashes that reveal the crash triggering bugs. A comparison study also demonstrates that our approach can effectively outperform existing crash reproduction approaches.
    IEEE Transactions on Software Engineering 02/2015; 41(2):198-220. DOI:10.1109/TSE.2014.2363469
  •
    ABSTRACT: Whiteboard sketches play a crucial role in software development, helping to support groups of designers in reasoning about a software design problem at hand. However, little is known about these sketches and how they support design 'in the moment', particularly in terms of the relationships among sketches, visual syntactic elements within sketches, and reasoning activities. To address this gap, we analyzed 14 hours of design activity by eight pairs of professional software designers, manually coding over 4000 events capturing the introduction of visual syntactic elements into sketches, focus transitions between sketches, and reasoning activities. Our findings indicate that sketches serve as a rich medium for supporting design conversations. Designers often use general-purpose notations. Designers introduce new syntactic elements to record aspects of the design, or re-purpose sketches as the design develops. Designers constantly shift focus between sketches, using groups of sketches together that contain complementary information. Finally, sketches play an important role in supporting several types of reasoning activities (mental simulation, review of progress, consideration of alternatives). But these activities often leave no trace and rarely lead to sketch creation. We discuss the implications of these and other findings for the practice of software design at the whiteboard and for the creation of new electronic software design sketching tools.
    IEEE Transactions on Software Engineering 02/2015; 41(2):135-156. DOI:10.1109/TSE.2014.2362924
  •
    ABSTRACT: Model-based testing employs models for testing. Model-based mutation testing (MBMT) additionally involves fault models, called mutants, by applying mutation operators to the original model. A problem encountered with MBMT is the elimination of equivalent mutants and multiple mutants modeling the same faults. Another problem is the need to compare a mutant to the original model for test generation. This paper proposes an event-based approach to MBMT that is not fixed on single events and a single model but rather operates on sequences of events of length k ≥ 1 and invokes a sequence of models that are derived from the original one by varying its morphology based on k. The approach employs formal grammars, related mutation operators, and algorithms to generate test cases, enabling the following: (1) the exclusion of equivalent mutants and multiple mutants; (2) the generation of a test case in linear time to kill a selected mutant without comparing it to the original model; (3) the analysis of morphologically different models enabling the systematic generation of mutants, thereby extending the set of fault models studied in related literature. Three case studies validate the approach and analyze its characteristics in comparison to random testing and another MBMT approach.
    IEEE Transactions on Software Engineering 02/2015; 41(2):113-134. DOI:10.1109/TSE.2014.2360690
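
    The flavour of event-based mutation operators can be conveyed with a small sketch: derive mutants of an event sequence by omitting an event or swapping adjacent events, deduplicating identical mutants. This is a generic illustration, not the paper's grammar-based, length-k approach.

    ```python
    def omit_event(seq):
        """Mutants obtained by dropping one event (event omission)."""
        return [seq[:i] + seq[i + 1:] for i in range(len(seq))]

    def swap_adjacent(seq):
        """Mutants obtained by exchanging two neighbouring events."""
        return [seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:] for i in range(len(seq) - 1)]

    def mutants(seq):
        # Deduplicate so that identical mutants are generated only once.
        unique = {tuple(m) for m in omit_event(seq) + swap_adjacent(seq)}
        return sorted(unique)

    for m in mutants(["login", "select", "pay", "logout"]):
        print(m)
    ```
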
  • IEEE Transactions on Software Engineering 01/2015; 41(1):1-2. DOI:10.1109/TSE.2014.2380479
  •
    ABSTRACT: Contrary to the oracle assumption, a trustworthy test oracle is not always available in practice. Since manually written oracles and human judgements are still widely used, testers and programmers in fact face a high risk of erroneous test oracles. Test oracle errors can cause considerable confusion and extra time spent in the debugging process. As substantiated by our experiment on the Siemens Test Suite, automatic fault localization algorithms suffer severely from erroneous test oracles, which prevent them from reducing debugging time to the full extent. This paper proposes a simple but effective approach to debugging the test oracle. Based on the observation that test cases covering similar lines of code usually produce similar results, we are able to identify suspicious test cases that the test oracle judges differently from their neighbors. To validate the effectiveness of our approach, experiments are conducted on both the Siemens Test Suite and grep. The results show that, on average, over 75% of the highlighted test cases are actual test oracle errors. Moreover, the performance of fault localization algorithms recovered markedly with the debugged oracles.
    IEEE Transactions on Software Engineering 01/2015; DOI:10.1109/TSE.2015.2425392
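
    The core intuition of the approach above can be sketched directly: test cases with similar coverage should usually receive similar verdicts, so a test whose pass/fail label disagrees with most of its nearest coverage neighbours is a candidate oracle error. The coverage sets and verdicts below are invented for illustration.

    ```python
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 1.0

    def suspicious_tests(coverage, verdict, k=3):
        """Flag tests whose verdict differs from the majority of their k most
        coverage-similar neighbours."""
        flagged = []
        for t in coverage:
            neighbours = sorted((n for n in coverage if n != t),
                                key=lambda n: jaccard(coverage[t], coverage[n]),
                                reverse=True)[:k]
            disagree = sum(verdict[n] != verdict[t] for n in neighbours)
            if disagree > len(neighbours) / 2:
                flagged.append(t)
        return flagged

    coverage = {
        "t1": {1, 2, 3, 4}, "t2": {1, 2, 3, 5}, "t3": {1, 2, 3, 6},
        "t4": {7, 8, 9},    "t5": {1, 2, 3, 4, 5},
    }
    verdict = {"t1": "pass", "t2": "pass", "t3": "fail", "t4": "pass", "t5": "pass"}
    print(suspicious_tests(coverage, verdict, k=3))   # ['t3']
    ```
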