IEEE Transactions on Software Engineering (IEEE T SOFTWARE ENG)

Publisher: IEEE Computer Society, Institute of Electrical and Electronics Engineers

Journal description

The journal covers the specification, design, development, management, testing, maintenance, and documentation of software systems. Topics include programming methodology; software project management; programming environments; hardware and software monitoring; and programming tools. Extensive bibliographies are also published.

Current impact factor: 1.61

Impact Factor Rankings

2016 Impact Factor Available summer 2017
2014 / 2015 Impact Factor 1.614
2013 Impact Factor 2.292
2012 Impact Factor 2.588
2011 Impact Factor 1.98
2010 Impact Factor 2.216
2009 Impact Factor 3.75
2008 Impact Factor 3.569
2007 Impact Factor 2.105
2006 Impact Factor 2.132
2005 Impact Factor 1.967
2004 Impact Factor 1.503
2003 Impact Factor 1.73
2002 Impact Factor 1.17
2001 Impact Factor 1.398
2000 Impact Factor 1.746
1999 Impact Factor 1.312
1998 Impact Factor 1.153
1997 Impact Factor 1.044

Impact factor over time

[Chart of the yearly impact factor values listed above]

Additional details

5-year impact 2.35
Cited half-life >10.0
Immediacy index 0.13
Eigenfactor 0.01
Article influence 1.14
Website IEEE Transactions on Software Engineering website
Other titles IEEE transactions on software engineering, Institute of Electrical and Electronics Engineers transactions on software engineering, Transactions on software engineering, Software engineering
ISSN 0098-5589
OCLC 1434336
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publisher details

Institute of Electrical and Electronics Engineers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print may be posted on the Author's personal website, the employer's website, or a publicly accessible server
    • Author's post-print may be posted on the Author's server or an institutional server
    • Author's pre-print must be removed upon publication of the final version and replaced with either a full citation to the IEEE work with a Digital Object Identifier, a link to the article abstract in IEEE Xplore, or the Author's post-print
    • Author's pre-print must be accompanied by the following set phrase once submitted to IEEE for publication: "This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible."
    • Author's pre-print must be accompanied by the following set phrase once accepted by IEEE for publication: "(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works."
    • IEEE must be informed as to the electronic address of the pre-print
    • If funding rules apply, authors may post the Author's post-print version in the funder's designated repository
    • Author's post-print - Publisher copyright and source must be acknowledged with a citation (see the set phrase above)
    • Author's post-print - Must link to the publisher version with a DOI
    • Publisher's version/PDF cannot be used
    • Publisher copyright and source must be acknowledged
  • Classification
    green

Publications in this journal


  • ABSTRACT: The field of automated software repair lacks a set of common benchmark problems. Although benchmark sets are used widely throughout computer science, existing benchmarks are not easily adapted to the problem of automatic defect repair, which has several special requirements. Most important of these is the need for benchmark programs with reproducible, important defects and a deterministic method for assessing if those defects have been repaired. This article details the need for a new set of benchmarks, outlines requirements, and then presents two datasets, MANYBUGS and INTROCLASS, consisting between them of 1,183 defects in 15 C programs. Each dataset is designed to support the comparative evaluation of automatic repair algorithms asking a variety of experimental questions. The datasets have empirically defined guarantees of reproducibility and benchmark quality, and each study object is categorized to facilitate qualitative evaluation and comparisons by category of bug or program. The article presents baseline experimental results on both datasets for three existing repair methods, GenProg, AE, and TrpAutoRepair, to reduce the burden on researchers who adopt these datasets for their own comparative evaluations.
    Article · Dec 2015 · IEEE Transactions on Software Engineering
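    The key requirement this abstract names is a reproducible defect plus a deterministic way to decide whether a candidate patch repairs it. A minimal sketch of such a test-suite-based check follows, assuming a compiled program and simple stdin/stdout test cases; the paths and test format are illustrative assumptions, not part of the MANYBUGS/INTROCLASS tooling.

    ```python
    import subprocess

    def passes(binary, test_input, expected_output, timeout=10):
        """Run one test case and compare the actual output to the expected output."""
        try:
            result = subprocess.run([binary], input=test_input, capture_output=True,
                                    text=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            return False
        return result.returncode == 0 and result.stdout == expected_output

    def is_repaired(binary, test_suite):
        """A defect counts as repaired only if every test case passes."""
        return all(passes(binary, t["input"], t["expected"]) for t in test_suite)

    # Hypothetical usage: one previously passing and one defect-triggering test case.
    suite = [
        {"input": "1 2\n", "expected": "3\n"},
        {"input": "-1 1\n", "expected": "0\n"},
    ]
    print(is_repaired("./patched_program", suite))
    ```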
  • ABSTRACT: Modern business applications predominantly rely on web technology, enabling software vendors to efficiently provide them as a service, removing some of the complexity of the traditional release and update process. While this facilitates shorter, more efficient and frequent release cycles, it requires continuous testing. Having insight into application behavior through explicit models can largely support development, testing and maintenance. Model-based testing allows efficient test creation based on a description of the states the application can be in and the transitions between these states. As specifying behavior models that are precise enough to be executable by a test automation tool is a hard task, an alternative is to extract them from running applications. However, mining such models is a challenge, in particular because one needs to know when two states are equivalent, as well as how to reach that state. We present Process Crawler (ProCrawl), a tool to mine behavior models from web applications that support multi-user workflows. ProCrawl incrementally learns a model by generating program runs and observing the application behavior through the user interface. In our evaluation on several real-world web applications, ProCrawl extracted models that concisely describe the implemented workflows and can be directly used for model-based testing.
    Article · Dec 2015 · IEEE Transactions on Software Engineering
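    As a rough illustration of the mining loop the abstract describes (generate runs, observe the application through its user interface, and fold equivalent states into one model), a sketch follows. The driver interface (`reset`, `observe_state`, `available_actions`, `perform`) and the tiny in-memory workflow are hypothetical stand-ins, not ProCrawl's API.

    ```python
    import random

    def mine_model(driver, runs=20, steps_per_run=15):
        """Learn a state-transition model {state: {action: next_state}} from generated runs."""
        model = {}
        for _ in range(runs):
            driver.reset()                       # start a fresh session
            state = driver.observe_state()       # abstraction of the current screen
            for _ in range(steps_per_run):
                actions = driver.available_actions()
                if not actions:
                    break
                action = random.choice(actions)  # explore to generate behaviour
                driver.perform(action)
                next_state = driver.observe_state()
                model.setdefault(state, {})[action] = next_state
                state = next_state
        return model

    class FakeWorkflow:
        """In-memory stand-in for a web application with a small two-step workflow."""
        TRANSITIONS = {
            "start": {"open_form": "form"},
            "form": {"submit": "confirmation", "cancel": "start"},
            "confirmation": {},
        }
        def reset(self):
            self.state = "start"
        def observe_state(self):
            return self.state
        def available_actions(self):
            return list(self.TRANSITIONS[self.state])
        def perform(self, action):
            self.state = self.TRANSITIONS[self.state][action]

    print(mine_model(FakeWorkflow()))
    ```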
  • ABSTRACT: We propose a new method for runtime checking of a relaxed consistency property called quasi linearizability for concurrent data structures. Quasi linearizability generalizes the standard notion of linearizability by introducing nondeterminism into the parallel computations quantitatively and then exploiting such nondeterminism to improve the runtime performance. However, ensuring the quantitative aspects of this correctness condition in the low-level code of the concurrent data structure implementation is a difficult task. Our runtime verification method is the first fully automated method for checking quasi linearizability in the C/C++ code of concurrent data structures. It guarantees that all the reported quasi linearizability violations manifested by the concurrent executions are real violations. We have implemented our method in a software tool based on the LLVM compiler and a systematic concurrency testing tool called Inspect. Our experimental evaluation shows that the new method is effective in detecting quasi linearizability violations in the source code implementations of concurrent data structures.
    Article · Dec 2015 · IEEE Transactions on Software Engineering
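    To make the relaxed correctness notion concrete, below is a small sketch of what quasi linearizability means for a FIFO queue with quasi factor k: an element may be dequeued at most k positions away from its strictly FIFO position. This only illustrates the property being checked and assumes the dequeue history is a permutation of the enqueue history; it is not the paper's LLVM/Inspect-based checker.

    ```python
    def quasi_linearizable_dequeues(enqueue_order, dequeue_order, k):
        """True if every dequeued element is within k positions of its FIFO position
        (assumes dequeue_order is a permutation of enqueue_order)."""
        fifo_index = {value: i for i, value in enumerate(enqueue_order)}
        return all(abs(fifo_index[value] - i) <= k
                   for i, value in enumerate(dequeue_order))

    # The observed dequeue order swaps neighbouring elements.
    enq = [1, 2, 3, 4, 5]
    deq = [2, 1, 4, 3, 5]
    print(quasi_linearizable_dequeues(enq, deq, k=1))  # True: deviation of at most 1
    print(quasi_linearizable_dequeues(enq, deq, k=0))  # False: strict FIFO is violated
    ```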
  • ABSTRACT: Many large-scale software systems must service thousands or millions of concurrent requests. These systems must be load tested to ensure that they can function correctly under load (i.e., the rate of the incoming requests). In this paper, we survey the state of load testing research and practice. We compare and contrast current techniques that are used in the three phases of a load test: (1) designing a proper load, (2) executing a load test, and (3) analyzing the results of a load test. This survey will be useful for load testing practitioners and software engineering researchers with interest in the load testing of large-scale software systems.
    Article · Nov 2015 · IEEE Transactions on Software Engineering
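    The three phases the survey distinguishes can be pictured with a toy driver: design a load (number of requests and concurrency), execute it, and analyse latency and error rate. The sketch below uses a simulated request function rather than a real system under test, and the rates and thresholds are arbitrary assumptions.

    ```python
    import random
    import statistics
    import time
    from concurrent.futures import ThreadPoolExecutor

    def simulated_request():
        """Stand-in for one request to the system under test."""
        latency = random.uniform(0.01, 0.05)
        time.sleep(latency)
        return latency, random.random() > 0.01    # (latency in seconds, success flag)

    def run_load_test(requests_per_second=50, duration_s=2):
        total = requests_per_second * duration_s                      # 1) design the load
        with ThreadPoolExecutor(max_workers=requests_per_second) as pool:
            results = list(pool.map(lambda _: simulated_request(),   # 2) execute the test
                                    range(total)))
        latencies = sorted(lat for lat, _ in results)                 # 3) analyse the results
        errors = sum(1 for _, ok in results if not ok)
        return {
            "requests": total,
            "error_rate": errors / total,
            "mean_latency_s": statistics.mean(latencies),
            "p95_latency_s": latencies[int(0.95 * len(latencies)) - 1],
        }

    print(run_load_test())
    ```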
  • ABSTRACT: Source Code Summarization is an emerging technology for automatically generating brief descriptions of code. Current summarization techniques work by selecting a subset of the statements and keywords from the code, and then including information from those statements and keywords in the summary. The quality of the summary depends heavily on the process of selecting the subset: a high-quality selection would contain the same statements and keywords that a programmer would choose. Unfortunately, little evidence exists about the statements and keywords that programmers view as important when they summarize source code. In this paper, we present an eye-tracking study of 10 professional Java programmers in which the programmers read Java methods and wrote English summaries of those methods. We apply the findings to build a novel summarization tool. Then, we evaluate this tool. Finally, we further analyze the programmers’ method summaries to explore specific keyword usage and provide evidence to support the development of source code summarization systems.
    Article · Nov 2015 · IEEE Transactions on Software Engineering
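    The selection step the abstract refers to (choosing the statements and keywords a reader would consider important) is often approximated with simple term weighting. The sketch below scores a method's identifiers with tf-idf against a small corpus; it is a generic stand-in for illustration, not the eye-tracking-informed tool built in the paper, and the example methods are hypothetical.

    ```python
    import math
    import re
    from collections import Counter

    def tokens(source):
        """Split identifiers and keywords out of a method body."""
        return [t.lower() for t in re.findall(r"[A-Za-z_]\w*", source)]

    def top_keywords(method_source, corpus_sources, n=5):
        """Rank a method's terms by tf-idf relative to a corpus of methods."""
        tf = Counter(tokens(method_source))
        docs = [set(tokens(src)) for src in corpus_sources]
        def idf(term):
            df = sum(1 for doc in docs if term in doc)
            return math.log((1 + len(docs)) / (1 + df))
        scored = sorted(((term, count * idf(term)) for term, count in tf.items()),
                        key=lambda pair: pair[1], reverse=True)
        return [term for term, _ in scored[:n]]

    # Example over two tiny, made-up Java methods.
    corpus = [
        "int add(int a, int b) { return a + b; }",
        "void saveCustomerRecord(Customer c) { database.insert(c); audit.log(c); }",
    ]
    print(top_keywords(corpus[1], corpus))
    ```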
  • ABSTRACT: The choice of test oracle—the artifact that determines whether an application under test executes correctly—can significantly impact the effectiveness of the testing process. However, despite the prevalence of tools that support test input selection, little work exists for supporting oracle creation. We propose a method of supporting test oracle creation that automatically selects the oracle data—the set of variables monitored during testing—for expected value test oracles. This approach is based on the use of mutation analysis to rank variables in terms of fault-finding effectiveness, thus automating the selection of the oracle data. Experimental results obtained by employing our method over six industrial systems (while varying test input types and the number of generated mutants) indicate that our method—when paired with test inputs generated either at random or to satisfy specific structural coverage criteria—may be a cost-effective approach for producing small, effective oracle data sets, with fault finding improvements over current industrial best practice of up to 1,435% observed (with typical improvements of up to 50%).
    Article · Nov 2015 · IEEE Transactions on Software Engineering
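    A minimal sketch of the mutation-based ranking idea is given below: a candidate variable scores highly if its observed values distinguish many mutants from the original program, and the top-ranked variables form the oracle data. The `run_and_trace` hook (returning a {variable: value} trace for a program on a test input) is a hypothetical stand-in for real instrumentation, not the paper's implementation.

    ```python
    def rank_oracle_variables(original, mutants, test_inputs, run_and_trace):
        """Rank candidate variables by how many mutants their observed values expose."""
        killed_by = {}  # {variable: set of mutant indices it distinguishes}
        for m, mutant in enumerate(mutants):
            for test in test_inputs:
                baseline = run_and_trace(original, test)
                mutated = run_and_trace(mutant, test)
                for var, value in baseline.items():
                    if value != mutated.get(var):
                        killed_by.setdefault(var, set()).add(m)
        return sorted(killed_by, key=lambda v: len(killed_by[v]), reverse=True)

    def select_oracle_data(ranked_variables, size):
        """Keep only the top-ranked variables as the monitored oracle data."""
        return ranked_variables[:size]

    # Tiny illustration: "programs" are functions from a test input to a variable trace.
    original = lambda x: {"sum": x + 1, "flag": x > 0}
    mutants = [lambda x: {"sum": x + 2, "flag": x > 0},   # corrupts `sum`
               lambda x: {"sum": x + 1, "flag": x >= 0}]  # corrupts `flag`
    ranked = rank_oracle_variables(original, mutants, [0, 1, -1], lambda p, t: p(t))
    print(select_oracle_data(ranked, size=1))
    ```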
  • ABSTRACT: Contrary to the oracle assumption, a trustworthy test oracle is not always available in practice. Since manually written oracles and human judgements are still widely used, testers and programmers in fact face a high risk of erroneous test oracles. Test oracle errors cause confusion and extra time spent in the debugging process. As substantiated by our experiment on the Siemens Test Suite, automatic fault localization algorithms suffer severely from erroneous test oracles, which prevent them from reducing debugging time to the full extent. This paper proposes a simple but effective approach to debugging the test oracle. Based on the observation that test cases covering similar lines of code usually produce similar results, we identify suspicious test cases that the test oracle judges differently from their neighbors. To validate the effectiveness of our approach, experiments are conducted on both the Siemens Test Suite and grep. The results show that, on average, over 75% of the highlighted test cases are actual test oracle errors. Moreover, the performance of fault localization algorithms recovers markedly with the debugged oracles.
    Article · Oct 2015 · IEEE Transactions on Software Engineering
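    A minimal sketch of the neighbourhood idea the abstract describes follows: test cases that cover similar lines usually receive similar verdicts, so a test whose oracle verdict disagrees with the similarity-weighted vote of its most coverage-similar neighbours is flagged as a suspicious, possibly erroneous, oracle judgement. The coverage sets and verdicts below are illustrative inputs, not data from the paper.

    ```python
    def jaccard(a, b):
        """Similarity of two coverage sets."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def suspicious_tests(coverage, verdicts, k=3):
        """coverage: {test: set of covered lines}; verdicts: {test: 'pass' or 'fail'}."""
        flagged = []
        for test in coverage:
            neighbours = sorted((t for t in coverage if t != test),
                                key=lambda t: jaccard(coverage[test], coverage[t]),
                                reverse=True)[:k]
            vote = {"pass": 0.0, "fail": 0.0}
            for t in neighbours:
                vote[verdicts[t]] += jaccard(coverage[test], coverage[t])
            expected = max(vote, key=vote.get)
            if vote[expected] > 0 and verdicts[test] != expected:
                flagged.append(test)
        return flagged

    # t3 covers nearly the same lines as t1, t2 and t4 but was judged differently, so it
    # is reported for manual review; the genuinely failing cluster t5/t6 is not flagged.
    cov = {"t1": {1, 2, 3}, "t2": {1, 2, 4}, "t3": {1, 2, 3, 4}, "t4": {1, 3, 4},
           "t5": {7, 8, 9}, "t6": {7, 8, 10}}
    ver = {"t1": "pass", "t2": "pass", "t3": "fail", "t4": "pass",
           "t5": "fail", "t6": "fail"}
    print(suspicious_tests(cov, ver, k=3))   # ['t3']
    ```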