IEEE Transactions on Software Engineering Journal Impact Factor & Information

Publisher: IEEE Computer Society, Institute of Electrical and Electronics Engineers

Journal description

Specification, design, development, management, testing, maintenance, and documentation of software systems. Topics include programming methodology; software project management; programming environments; hardware and software monitoring; and programming tools. Extensive bibliographies.

Current impact factor: 1.61

Impact Factor Rankings

2015 Impact Factor Available summer 2016
2014 Impact Factor 1.614
2013 Impact Factor 2.292
2012 Impact Factor 2.588
2011 Impact Factor 1.98
2010 Impact Factor 2.216
2009 Impact Factor 3.75
2008 Impact Factor 3.569
2007 Impact Factor 2.105
2006 Impact Factor 2.132
2005 Impact Factor 1.967
2004 Impact Factor 1.503
2003 Impact Factor 1.73
2002 Impact Factor 1.17
2001 Impact Factor 1.398
2000 Impact Factor 1.746
1999 Impact Factor 1.312
1998 Impact Factor 1.153
1997 Impact Factor 1.044

Additional details

5-year impact 2.35
Cited half-life >10.0
Immediacy index 0.13
Eigenfactor 0.01
Article influence 1.14
Website IEEE Transactions on Software Engineering website
Other titles IEEE transactions on software engineering, Institute of Electrical and Electronics Engineers transactions on software engineering, Transactions on software engineering, Software engineering
ISSN 0098-5589
OCLC 1434336
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publisher details

Institute of Electrical and Electronics Engineers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on Author's personal website, employer's website, or a publicly accessible server
    • Author's post-print on Author's server or institutional server
    • Author's pre-print must be removed upon publication of the final version and replaced with either a full citation to the IEEE work with a Digital Object Identifier, a link to the article abstract in IEEE Xplore, or the Author's post-print
    • Author's pre-print must be accompanied by the following set phrase once submitted to the IEEE for publication: "This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible"
    • Author's pre-print must be accompanied by the following set phrase once accepted by the IEEE for publication: "(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works."
    • IEEE must be informed as to the electronic address of the pre-print
    • If funding rules apply, authors may post the Author's post-print version in the funder's designated repository
    • Author's Post-print - Publisher copyright and source must be acknowledged with citation (see above set statement)
    • Author's Post-print - Must link to publisher version with DOI
    • Publisher's version/PDF cannot be used
    • Publisher copyright and source must be acknowledged
  • Classification
    green

Publications in this journal

    ABSTRACT: To facilitate software refactoring, a number of approaches and tools have been proposed to suggest where refactorings should be conducted. However, identification of such refactoring opportunities is usually difficult because it often involves difficult semantic analysis and it is often influenced by many factors besides source code. For example, whether a software entity should be renamed depends on the meaning of its original name (natural language understanding), the semantics of the entity (source code semantics), experience and preference of developers, and culture of companies. As a result, it is difficult to identify renaming opportunities. To this end, in this paper we propose an approach to identify renaming opportunities by expanding conducted renamings. Once a rename refactoring is conducted manually or with tool support, the proposed approach recommends renaming closely related software entities whose names are similar to that of the renamed entity. The rationale is that if an engineer makes a mistake in naming a software entity it is likely for her to make the same mistake in naming similar and closely related software entities. The main advantage of the proposed approach is that it does not involve difficult semantic analysis of source code or complex natural language understanding. Another advantage of this approach is that it is less influenced by subjective factors, e.g., experience and preference of software engineers. The proposed approach has been evaluated on four open-source applications. Our evaluation results show that the proposed approach is accurate in recommending entities to be renamed (average precision 82 percent) and in recommending new names for such entities (average precision 93 percent). Evaluation results also suggest that a substantial percentage (varying from 20 to 23 percent) of rename refactorings are expansible.
    IEEE Transactions on Software Engineering 09/2015; 41(9):1-1. DOI:10.1109/TSE.2015.2427831
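
The expansion heuristic described in the abstract above lends itself to a small illustration: once one rename has been performed, look for related identifiers whose names resemble the old name and propose the analogous change. This is a hedged sketch only; the token splitting, similarity measure, threshold, and case handling below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: expand a conducted rename to similarly named, related entities.
import re
from difflib import SequenceMatcher

def tokens(identifier):
    """Split a camelCase / snake_case identifier into lower-case tokens."""
    parts = re.split(r"_|(?<=[a-z0-9])(?=[A-Z])", identifier)
    return [p.lower() for p in parts if p]

def similarity(a, b):
    """Crude name similarity over the identifiers' token strings."""
    return SequenceMatcher(None, " ".join(tokens(a)), " ".join(tokens(b))).ratio()

def replace_token(name, old_tok, new_tok):
    """Substitute one token, roughly preserving the original capitalisation."""
    def repl(match):
        return new_tok.capitalize() if match.group(0)[0].isupper() else new_tok
    return re.sub(old_tok, repl, name, flags=re.IGNORECASE)

def expand_rename(old_name, new_name, related_identifiers, threshold=0.6):
    """Recommend follow-up renames for related identifiers that resemble old_name."""
    suggestions = []
    for ident in related_identifiers:
        if ident == old_name or similarity(ident, old_name) < threshold:
            continue
        proposed = ident
        for old_tok, new_tok in zip(tokens(old_name), tokens(new_name)):
            if old_tok != new_tok:
                proposed = replace_token(proposed, old_tok, new_tok)
        if proposed != ident:
            suggestions.append((ident, proposed))
    return suggestions

# Example: after renaming getUserNme -> getUserName, similar identifiers are flagged.
print(expand_rename("getUserNme", "getUserName",
                    ["setUserNme", "userNmeLabel", "getOrderId"]))
```
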
    ABSTRACT: Combinatorial interaction testing (CIT) is important because it tests the interactions between the many features and parameters that make up the configuration space of software systems. Simulated Annealing (SA) and Greedy Algorithms have been widely used to find CIT test suites. From the literature, there is a widely-held belief that SA is slower, but produces more effective test suites than Greedy and that SA cannot scale to higher strength coverage. We evaluated both algorithms on seven real-world subjects for the well-studied two-way up to the rarely-studied six-way interaction strengths. Our findings present evidence to challenge this current orthodoxy: real-world constraints allow SA to achieve higher strengths. Furthermore, there was no evidence that Greedy was less effective (in terms of time to fault revelation) compared to SA; the results for the greedy algorithm are actually slightly superior. However, the results are critically dependent on the approach adopted for constraint handling. Moreover, we have also evaluated a genetic algorithm for constrained CIT test suite generation. This is the first time strengths higher than 3 and constraint handling have been used to evaluate GA. Our results show that GA is competitive only for pairwise testing for subjects with a small number of constraints.
    IEEE Transactions on Software Engineering 09/2015; 41(9):1-1. DOI:10.1109/TSE.2015.2421279
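
For readers unfamiliar with CIT, the greedy family of algorithms discussed above can be illustrated with a minimal AETG-style sketch that builds a pairwise (2-way) test suite. It enumerates all candidate tests, so it is only viable for tiny configuration spaces, and it ignores constraints, which the paper shows are decisive in practice; none of this corresponds to the actual tools compared in the study.

```python
from itertools import combinations, product

def all_pairs(params):
    """Every (param_i, value, param_j, value) interaction that must be covered."""
    pairs = set()
    for (i, values_i), (j, values_j) in combinations(enumerate(params), 2):
        for a, b in product(values_i, values_j):
            pairs.add((i, a, j, b))
    return pairs

def pairs_of(test):
    return {(i, test[i], j, test[j]) for i, j in combinations(range(len(test)), 2)}

def greedy_pairwise(params):
    """Repeatedly pick the candidate test covering the most uncovered pairs."""
    remaining = all_pairs(params)
    candidates = [list(t) for t in product(*params)]   # only viable for tiny spaces
    suite = []
    while remaining:
        best = max(candidates, key=lambda t: len(pairs_of(t) & remaining))
        suite.append(best)
        remaining -= pairs_of(best)
    return suite

# Toy configuration space: 2 x 2 x 3 values; pairwise coverage needs far fewer
# tests than the 12 exhaustive combinations.
for test in greedy_pairwise([["on", "off"], ["x86", "arm"], ["dbg", "rel", "opt"]]):
    print(test)
```
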
    ABSTRACT: Android is the most popular platform for mobile devices. It facilitates sharing of data and services among applications using a rich inter-app communication system. While access to resources can be controlled by the Android permission system, enforcing permissions is not sufficient to prevent security violations, as permissions may be mismanaged, intentionally or unintentionally. Android’s enforcement of the permissions is at the level of individual apps, allowing multiple malicious apps to collude and combine their permissions or to trick vulnerable apps to perform actions on their behalf that are beyond their individual privileges. In this paper, we present COVERT, a tool for compositional analysis of Android inter-app vulnerabilities. COVERT’s analysis is modular to enable incremental analysis of applications as they are installed, updated, and removed. It statically analyzes the reverse engineered source code of each individual app, and extracts relevant security specifications in a format suitable for formal verification. Given a collection of specifications extracted in this way, a formal analysis engine (e.g., model checker) is then used to verify whether it is safe for a combination of applications—holding certain permissions and potentially interacting with each other—to be installed together. Our experience with using COVERT to examine over 500 real-world apps corroborates its ability to find inter-app vulnerabilities in bundles of some of the most popular apps on the market.
    IEEE Transactions on Software Engineering 09/2015; 41(9):1-1. DOI:10.1109/TSE.2015.2419611
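
A drastically simplified sketch of the kind of compositional check described above: given per-app specifications, flag pairs where an app lacking a permission can reach an exported, unguarded component of another app that exercises that permission. The AppSpec data model and field names are assumptions for illustration; COVERT itself extracts much richer specifications and hands them to a formal analysis engine rather than running a check like this.

```python
from dataclasses import dataclass, field

@dataclass
class AppSpec:
    """Illustrative per-app spec: held permissions plus exported components and
    the permission each one exercises on behalf of its callers."""
    name: str
    permissions: set = field(default_factory=set)
    exported_uses: dict = field(default_factory=dict)   # component -> permission used
    guarded: set = field(default_factory=set)           # components that check the caller

def redelegation_risks(specs):
    """Report (caller, callee, component, permission) tuples where a caller without
    a permission could exercise it through another app's unguarded component."""
    risks = []
    for callee in specs:
        for component, permission in callee.exported_uses.items():
            if component in callee.guarded:
                continue
            for caller in specs:
                if caller is not callee and permission not in caller.permissions:
                    risks.append((caller.name, callee.name, component, permission))
    return risks

apps = [
    AppSpec("Messenger", permissions={"SEND_SMS"},
            exported_uses={"SendService": "SEND_SMS"}),
    AppSpec("FreeGame"),   # holds no permissions, but can send intents to SendService
]
print(redelegation_risks(apps))
# [('FreeGame', 'Messenger', 'SendService', 'SEND_SMS')]
```
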
    ABSTRACT: Illegal code reuse has become a serious threat to the software community. Identifying similar or identical code fragments becomes much more challenging in code theft cases where plagiarizers can use various automated code transformation or obfuscation techniques to hide stolen code from being detected. Previous works in this field are largely limited in that (i) most of them cannot handle advanced obfuscation techniques, and (ii) the methods based on source code analysis are not practical since the source code of suspicious programs typically cannot be obtained until strong evidence has been collected. Based on the observation that some critical runtime values of a program are hard to replace or eliminate using semantics-preserving transformation techniques, we introduce a novel approach to dynamic characterization of executable programs. Leveraging such invariant values, our technique is resilient to various control and data obfuscation techniques. We show how the values can be extracted and refined to expose the critical values and how we can apply this runtime property to help solve problems in software plagiarism detection. We have implemented a prototype with a dynamic taint analyzer atop a generic processor emulator. Our value-based plagiarism detection method (VaPD) uses longest-common-subsequence-based similarity measuring algorithms to check whether two code fragments belong to the same lineage. We evaluate our proposed method through a set of real-world automated obfuscators. Our experimental results show that the value-based method successfully discriminates 34 plagiarisms obfuscated by SandMark, plagiarisms heavily obfuscated by KlassMaster, programs obfuscated by Thicket, and executables obfuscated by Loco/Diablo.
    IEEE Transactions on Software Engineering 09/2015; 41(9):1-1. DOI:10.1109/TSE.2015.2418777
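
The similarity measure at the core of the value-based comparison described above is the longest common subsequence of critical runtime value sequences. A minimal sketch follows; the normalisation and the example value sequences are assumptions for illustration, not VaPD's exact scoring.

```python
def lcs_length(a, b):
    """Classic dynamic-programming longest common subsequence over two sequences."""
    prev = [0] * (len(b) + 1)
    for x in a:
        curr = [0]
        for j, y in enumerate(b, start=1):
            curr.append(prev[j - 1] + 1 if x == y else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[len(b)]

def value_similarity(seq_a, seq_b):
    """Normalised LCS similarity of two runtime value sequences, in [0, 1]."""
    if not seq_a or not seq_b:
        return 0.0
    return lcs_length(seq_a, seq_b) / min(len(seq_a), len(seq_b))

# Toy example: value sequences logged from an original program and a suspect binary.
plaintiff = [7, 42, 1337, 99, 7, 8]
suspect = [3, 7, 42, 1337, 99, 7]
print(value_similarity(plaintiff, suspect))   # ~0.83, suggesting shared lineage
```
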
    ABSTRACT: Highly configurable systems allow users to tailor software to specific needs. Valid combinations of configuration options are often restricted by intricate constraints. Describing options and constraints in a variability model allows reasoning about the supported configurations. To automate creating and verifying such models, we need to identify the origin of such constraints. We propose a static analysis approach, based on two rules, to extract configuration constraints from code. We apply it on four highly configurable systems to evaluate the accuracy of our approach and to determine which constraints are recoverable from the code. We find that our approach is highly accurate (93% and 77% respectively) and that we can recover 28% of existing constraints. We complement our approach with a qualitative study to identify constraint sources, triangulating results from our automatic extraction, manual inspections, and interviews with 27 developers. We find that, apart from low-level implementation dependencies, configuration constraints enforce correct runtime behavior, improve users’ configuration experience, and prevent corner cases. While the majority of constraints is extractable from code, our results indicate that creating a complete model requires further substantial domain knowledge and testing. Our results aim at supporting researchers and practitioners working on variability model engineering, evolution, and verification techniques.
    IEEE Transactions on Software Engineering 08/2015; 41(8):1-1. DOI:10.1109/TSE.2015.2415793
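
As a rough illustration of how code-level dependencies can imply configuration constraints, the sketch below reports a constraint A => B whenever code compiled only under feature A uses a symbol defined only under feature B (enabling A without B would not build). The toy code model is an assumption made for illustration and is only a simplified stand-in for the two extraction rules the paper actually applies to real build systems and preprocessor-annotated code.

```python
def implied_constraints(defs_by_feature, uses_by_feature):
    """Both arguments map a feature name to the set of symbols that are defined
    (respectively used) only in code guarded by that feature."""
    constraints = []
    for feature, used_symbols in uses_by_feature.items():
        for symbol in sorted(used_symbols):
            definers = {f for f, defs in defs_by_feature.items() if symbol in defs}
            if definers and feature not in definers:
                constraints.extend(f"{feature} => {d}" for d in sorted(definers))
    return constraints

# Toy example: SECURE_NET code calls symbols that only exist when CRYPTO and NET are on.
defs = {"CRYPTO": {"aes_encrypt"}, "NET": {"tcp_send"}}
uses = {"SECURE_NET": {"aes_encrypt", "tcp_send"}}
print(implied_constraints(defs, uses))   # ['SECURE_NET => CRYPTO', 'SECURE_NET => NET']
```
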
    ABSTRACT: Existing algorithms for I/O-efficient Linear Temporal Logic (LTL) model checking usually output a single counterexample for a system which violates the property. However, in real-world applications, such as diagnosis and debugging in software and hardware system designs, people often need a set of counterexamples or even all counterexamples. For this purpose, we propose an I/O-efficient approach for detecting all accepting cycles, called Detecting All Accepting Cycles (DAAC), where the properties to be verified are in LTL. Different from other algorithms for finding all cycles, DAAC first searches for the accepting strongly connected components (ASCCs), and then finds all accepting cycles of every ASCC, which can avoid searching a great many paths that cannot be extended to accepting cycles. In order to further lower DAAC’s I/O complexity and improve its performance, we propose an intersection computation technique and a dynamic path management technique, and exploit a minimal perfect hash function (MPHF). We carry out both complexity and experimental comparisons with the state-of-the-art algorithms including Detect Accepting Cycle (DAC), Maximal Accepting Predecessors (MAP) and Iterative-Deepening Depth-First Search (IDDFS). The comparative results show that our approach is better on the whole in terms of I/O complexity and practical performance, despite the fact that it finds all counterexamples.
    IEEE Transactions on Software Engineering 08/2015; 41(8):1-1. DOI:10.1109/TSE.2015.2411284
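
The first phase described above, locating accepting strongly connected components before enumerating their cycles, can be illustrated with an ordinary in-memory Tarjan pass over a Büchi-style graph. This toy version deliberately ignores the I/O-efficient, external-memory machinery that is the paper's actual contribution.

```python
def accepting_sccs(graph, accepting):
    """graph: node -> list of successor nodes; accepting: set of accepting nodes."""
    index, low, on_stack, stack, result = {}, {}, set(), [], []
    counter = [0]

    def strongconnect(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                strongconnect(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:                       # v is the root of an SCC
            scc = set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.add(w)
                if w == v:
                    break
            has_cycle = len(scc) > 1 or v in graph.get(v, [])
            if has_cycle and scc & accepting:
                result.append(scc)

    for v in list(graph):
        if v not in index:
            strongconnect(v)
    return result

# Toy Büchi-style graph: s1 and s2 form a cycle through the accepting state s2.
buchi = {"s0": ["s1"], "s1": ["s2"], "s2": ["s1", "s3"], "s3": []}
print(accepting_sccs(buchi, accepting={"s2"}))       # -> the SCC {s1, s2}
```
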
    ABSTRACT: During software development, the sooner a developer learns how code changes affect program analysis results, the more helpful that analysis is. Manually invoking an analysis may interrupt the developer’s workflow or cause a delay before the developer learns the implications of the change. A better approach is continuous analysis tools that always provide up-to-date results. We present Codebase Replication, a technique that eases the implementation of continuous analysis tools by converting an existing offline analysis into an IDE-integrated, continuous tool with two desirable properties: isolation and currency. Codebase Replication creates and keeps in sync a copy of the developer’s codebase. The analysis runs on the copy codebase without disturbing the developer and without being disturbed by the developer’s changes. We developed Solstice, an open-source, publicly-available Eclipse plug-in that implements Codebase Replication. Solstice has less than 2.5 milliseconds overhead for most common developer actions. We used Solstice to implement four Eclipse-integrated continuous analysis tools based on the offline versions of FindBugs, PMD, data race detection, and unit testing. Each conversion required on average 710 LoC and 20 hours of implementation effort. Case studies indicate that Solstice-based continuous analysis tools are intuitive and easy-to-use.
    IEEE Transactions on Software Engineering 08/2015; 41(8):1-1. DOI:10.1109/TSE.2015.2417161
    ABSTRACT: Developing modern distributed software systems is difficult in part because such systems have little control over the environments in which they execute. For example, hardware and software resources on which these systems rely may fail or become compromised and malicious. Redundancy can help manage such failures and compromises, but when faced with dynamic, unpredictable resources and attackers, the system reliability can still fluctuate greatly. Empowering the system with self-adaptive and self-managing reliability facilities can significantly improve the quality of the software system and reduce reliance on the developer predicting all possible failure conditions. We present iterative redundancy, a novel approach to improving software system reliability by automatically injecting redundancy into the system’s deployment. Iterative redundancy self-adapts in three ways: (1) by automatically detecting when the resource reliability drops, (2) by identifying unlucky parts of the computation that happen to deploy on disproportionately many compromised resources, and (3) by not relying on a priori estimates of resource reliability. Further, iterative redundancy is theoretically optimal in its resource use: Given a set of resources, iterative redundancy guarantees to use those resources to produce the most reliable version of that software system possible; likewise, given a desired increase in the system’s reliability, iterative redundancy guarantees achieving that reliability using the least resources possible. Iterative redundancy handles even the Byzantine threat model, in which compromised resources collude to attack the system. We evaluate iterative redundancy in three ways. First, we formally prove its self-adaptation, efficiency, and optimality properties. Second, we simulate it at scale using discrete event simulation. Finally, we modify the existing, open-source, volunteer-computing BOINC software system and deploy it on the globally-distributed PlanetLab testbed network to empirically evaluate that iterative redundancy is self-adaptive and more efficient than existing techniques.
    IEEE Transactions on Software Engineering 08/2015; 41(8):1-1. DOI:10.1109/TSE.2015.2412134
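
The self-adaptive voting idea can be sketched as follows: keep dispatching the task to independent resources until the leading result's margin over the runner-up reaches a target, instead of fixing the replication factor in advance. The margin threshold and the simulated faulty resource below are illustrative assumptions, not the paper's provably optimal stopping rule.

```python
import random
from collections import Counter

def run_on_resource(task, failure_rate=0.2):
    """Simulated untrusted resource: returns the correct answer most of the time."""
    correct = task()
    return correct if random.random() > failure_rate else correct + 1   # corrupted

def iterative_redundancy(task, margin=3, max_executions=50):
    """Keep adding executions until the top answer leads the runner-up by `margin`."""
    votes = Counter()
    for n in range(1, max_executions + 1):
        votes[run_on_resource(task)] += 1
        ranked = votes.most_common(2) + [(None, 0)]
        top_answer, top_count = ranked[0]
        runner_up_count = ranked[1][1]
        if top_count - runner_up_count >= margin:    # confident enough, stop early
            return top_answer, n
    return votes.most_common(1)[0][0], max_executions

answer, executions = iterative_redundancy(lambda: 42)
print(f"answer={answer} after {executions} executions")
```
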
    ABSTRACT: Performance is a crucial attribute for most software, making performance analysis an important software engineering task. The difficulty is that modern applications are challenging to analyse for performance. Many profiling techniques used in real-world software development struggle to provide useful results when applied to large-scale object-oriented applications. There is a substantial body of research into software performance generally but currently there exists no survey of this research that would help identify approaches useful for object-oriented software. To provide such a review we performed a systematic mapping study of empirical performance analysis approaches that are applicable to object-oriented software. Using keyword searches against leading software engineering research databases and manual searches of relevant venues we identified over 5,000 related articles published since January 2000. From these we systematically selected 253 applicable articles and categorised them according to ten facets that capture the intent, implementation and evaluation of the approaches. Our mapping study results allow us to highlight the main contributions of the existing literature and identify areas where there are interesting opportunities. We also find that, despite the research including approaches specifically aimed at object-oriented software, there are significant challenges in providing actionable feedback on the performance of large-scale object-oriented applications.
    IEEE Transactions on Software Engineering 07/2015; 41(7):1-1. DOI:10.1109/TSE.2015.2396514
    ABSTRACT: Formal methods offer an effective means to assert the correctness of software systems through mathematical reasoning. However, the need to formulate system properties in a purely mathematical fashion can create pragmatic barriers to the application of these techniques. For this reason, Dwyer et al. invented property specification patterns, a system of recurring solutions to deal with the temporal intricacies that would otherwise make the construction of reactive systems very hard. Today, property specification patterns provide general rules that help practitioners to qualify order and occurrence, to quantify time bounds, and to express probabilities of events. Nevertheless, a comprehensive framework combining qualitative, real-time, and probabilistic property specification patterns has remained elusive. The benefits of such a framework are twofold. First, it would remove the distinction between qualitative and quantitative aspects of events; and second, it would provide a structure to systematically discover new property specification patterns. In this paper, we report on such a framework and present a unified catalogue that combines all known plus 40 newly identified or extended patterns. We also offer a natural language front-end to map patterns to a temporal logic of choice. To demonstrate the virtue of this new framework, we applied it to a variety of industrial requirements and used PSPWizard, a tool specifically developed to work with our unified pattern catalogue, to automatically render concrete instances of property specification patterns to formulae of an underlying temporal logic of choice.
    IEEE Transactions on Software Engineering 07/2015; 41(7):1-1. DOI:10.1109/TSE.2015.2398877
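
The basic idea of rendering pattern instances into a temporal logic can be illustrated with a toy mapping for three classic globally-scoped qualitative patterns. This is only a sketch of the concept; the unified catalogue and the PSPWizard front-end described above go far beyond it, covering real-time and probabilistic variants and multiple target logics.

```python
# Three classic "globally"-scoped qualitative patterns and their LTL templates.
LTL_TEMPLATES = {
    "absence": "G(!({p}))",              # p never holds
    "universality": "G({p})",            # p always holds
    "response": "G(({p}) -> F({s}))",    # every p is eventually followed by s
}

def instantiate(pattern, **events):
    """Render a pattern instance as an LTL formula string."""
    return LTL_TEMPLATES[pattern].format(**events)

print(instantiate("response", p="request", s="grant"))   # G((request) -> F(grant))
```
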
    ABSTRACT: Model driven security has become an active area of research during the past decade. While many research works have contributed significantly to this objective by extending popular modeling notations to model security aspects, there has been little modeling support for state-based views of security issues. This paper undertakes a scientific approach to propose a new notational set that extends the UML (Unified Modeling Language) statecharts notation. An online industrial survey was conducted to measure the perceptions of the new notation with respect to its semantic transparency as well as its coverage of modeling state based security aspects. The survey results indicate that the new notation encompasses the set of semantics required in a state based security modeling language and was largely intuitive to use and understand given very little training. A subject-based empirical evaluation using software engineering professionals was also conducted to evaluate the cognitive effectiveness of the proposed notation. The main finding was that the new notation is cognitively more effective than the original notational set of UML statecharts, as it allowed the subjects to read models created using the new notation much more quickly.
    IEEE Transactions on Software Engineering 07/2015; 41(7):1-1. DOI:10.1109/TSE.2015.2396526
    ABSTRACT: Numbers are used in critical applications, including finance, healthcare, aviation, and of course in every aspect of computing. User interfaces for number entry in many devices (calculators, spreadsheets, infusion pumps, mobile phones, etc.) have bugs and design defects that induce unnecessary use errors that compromise their dependability. Focusing on Arabic key interfaces, which use digit keys 0–9, usually augmented with correction keys, this paper introduces a method for formalising and managing design problems. Since number entry and devices such as calculators have been the subject of extensive user interface research since at least the 1980s, the diverse design defects uncovered imply that user evaluation methodologies are insufficient for critical applications. Likewise, formal methods are not being applied effectively. User interfaces are not trivial and more attention should be paid to their correct design and implementation. The paper includes many recommendations for designing safer number entry user interfaces.
    IEEE Transactions on Software Engineering 07/2015; 41(7):1-1. DOI:10.1109/TSE.2014.2383396
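
One theme of the work above is that number-entry interfaces should reject syntactically invalid input rather than silently reinterpret it. The sketch below illustrates such a defensive keystroke handler; the key set and the rejection policy are assumptions made for illustration, not the paper's specific recommendations.

```python
def enter_number(keystrokes):
    """keystrokes: iterable of '0'-'9', '.', 'DEL' (delete last key) or 'CLR' (clear)."""
    buffer = []
    for key in keystrokes:
        if key == "CLR":
            buffer.clear()
        elif key == "DEL":
            if buffer:
                buffer.pop()
        elif key == ".":
            if "." in buffer:
                raise ValueError("rejected: second decimal point")
            buffer.append(key)
        elif len(key) == 1 and key.isdigit():
            buffer.append(key)
        else:
            raise ValueError(f"rejected: unknown key {key!r}")
    text = "".join(buffer)
    if not text or text == ".":
        raise ValueError("rejected: no number entered")
    return float(text)

print(enter_number(["5", ".", "0", "DEL", "3"]))   # 5.3
```
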
    ABSTRACT: Lazy Initialization (LI) allows symbolic execution to effectively deal with heap-allocated data structures, thanks to a significant reduction in spurious and redundant symbolic structures. Bounded lazy initialization (BLI) improves on LI by taking advantage of precomputed relational bounds on the interpretation of class fields in order to reduce the number of spurious structures even further. In this paper we present bounded lazy initialization with SAT support (BLISS), a novel technique that refines the search for valid structures during the symbolic execution process. BLISS builds upon BLI, extending it with field bound refinement and satisfiability checks. Field bounds are refined while a symbolic structure is concretized, avoiding cases that, due to the concrete part of the heap and the field bounds, can be deemed redundant. Satisfiability checks on refined symbolic heaps allow us to prune these heaps as soon as they are identified as infeasible, i.e., as soon as it can be confirmed that they cannot be extended to any valid concrete heap. Compared to LI and BLI, BLISS reduces the time required by LI by up to four orders of magnitude for the most complex data structures. Moreover, the number of partially symbolic structures obtained by exploring program paths is reduced by BLISS by over 50 percent, with reductions of over 90 percent in some cases (compared to LI). BLISS uses less memory than LI and BLI, which enables the exploration of states unreachable by previous techniques.
    IEEE Transactions on Software Engineering 07/2015; 41(7):1-1. DOI:10.1109/TSE.2015.2389225
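
The lazy-initialization choice rule that BLI and BLISS refine can be sketched compactly: when a symbolic reference field is first accessed, branch over null, a fresh symbolic object, or an alias of any compatible object already created on the current path. BLISS additionally prunes these choices with refined field bounds and satisfiability checks, which this toy sketch does not model; the class and field names are illustrative assumptions.

```python
import itertools

class SymObject:
    """A symbolic heap object with a type and lazily-initialized fields."""
    _ids = itertools.count()

    def __init__(self, cls):
        self.cls = cls
        self.id = next(SymObject._ids)
        self.fields = {}

    def __repr__(self):
        return f"{self.cls}#{self.id}"

def lazy_init_choices(cls, heap):
    """Possible targets for an uninitialized reference field of type `cls`:
    null, a fresh symbolic object, or an alias of a compatible existing object."""
    choices = [None, SymObject(cls)]
    choices += [obj for obj in heap if obj.cls == cls]
    return choices

# Example: the `next` field of a symbolic list node may be null, a brand new
# node, or an alias of the node already on the heap (node #0).
heap = [SymObject("Node")]
print(lazy_init_choices("Node", heap))   # [None, Node#1, Node#0]
```
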
    ABSTRACT: Software systems are increasingly regulated. Software engineers therefore must determine which requirements have met or exceeded their legal obligations and which requirements have not. Requirements that have met or exceeded their legal obligations are legally implementation ready, whereas requirements that have not met or exceeded their legal obligations need further refinement. In this paper, we examine how software engineers make these determinations using a multi-case study with three cases. Each case involves assessment of requirements for an electronic health record system that must comply with the US Health Insurance Portability and Accountability Act (HIPAA) and is measured against the evaluations of HIPAA compliance subject matter experts. Our first case examines how individual graduate-level software engineering students assess whether the requirements met or exceeded their HIPAA obligations. Our second case replicates the findings from our first case using a different set of participants. Our third case examines how graduate-level software engineering students assess requirements using the Wideband Delphi approach to deriving consensus in groups. Our findings suggest that the average graduate-level software engineering student is ill-prepared to write legally compliant software with any confidence and that domain experts are an absolute necessity.
    IEEE Transactions on Software Engineering 06/2015; 41(6):545-564. DOI:10.1109/TSE.2014.2383374