IEEE Transactions on Reliability (IEEE T RELIAB)

Publisher: Institute of Electrical and Electronics Engineers. Professional Technical Group on Reliability; IEEE Reliability Group; IEEE Reliability Society; American Society for Quality Control. Electronics Division, Institute of Electrical and Electronics Engineers

Journal description

The principles and practices of reliability, maintainability, and product liability pertaining to electrical and electronic equipment.

Current impact factor: 1.66

Impact Factor Rankings

2015 Impact Factor Available summer 2015
2013 / 2014 Impact Factor 1.657
2012 Impact Factor 2.293
2011 Impact Factor 1.285
2010 Impact Factor 1.288
2009 Impact Factor 1.331
2008 Impact Factor 1.315
2007 Impact Factor 1.303
2006 Impact Factor 0.8
2005 Impact Factor 0.715
2004 Impact Factor 0.828
2003 Impact Factor 0.444
2002 Impact Factor 0.522
2001 Impact Factor 0.477
2000 Impact Factor 0.358
1999 Impact Factor 0.341
1998 Impact Factor 0.255
1997 Impact Factor 0.355
1996 Impact Factor 0.369
1995 Impact Factor 0.304
1994 Impact Factor 0.45
1993 Impact Factor 0.332
1992 Impact Factor 0.407


Additional details

5-year impact 2.07
Cited half-life 0.00
Immediacy index 0.21
Eigenfactor 0.00
Article influence 0.76
Website IEEE Transactions on Reliability website
Other titles IEEE transactions on reliability, Institute of Electrical and Electronics Engineers transactions on reliability, Transactions on reliability, Reliability
ISSN 0018-9529
OCLC 1752560
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publisher details

Institute of Electrical and Electronics Engineers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on Author's personal website, employer's website, or publicly accessible server
    • Author's post-print on Author's server or Institutional server
    • Author's pre-print must be removed upon publication of the final version and replaced with either a full citation to the IEEE work with a Digital Object Identifier, a link to the article abstract in IEEE Xplore, or the Author's post-print
    • Author's pre-print must be accompanied with set-phrase, once submitted to IEEE for publication ("This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible")
    • Author's pre-print must be accompanied with set-phrase, when accepted by IEEE for publication ("(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.")
    • IEEE must be informed as to the electronic address of the pre-print
    • If funding rules apply authors may post Author's post-print version in funder's designated repository
    • Author's Post-print - Publisher copyright and source must be acknowledged with citation (see above set statement)
    • Author's Post-print - Must link to publisher version with DOI
    • Publisher's version/PDF cannot be used
    • Publisher copyright and source must be acknowledged
  • Classification
    green

Publications in this journal

  • ABSTRACT: We live in a highly technological world, surrounded by the Internet, biological technologies, high-speed trains and sophisticated aircraft, new energy sources, new social movements, a wide wealth gap between rich and poor groups and countries, political and religious differences, and so on, and we have to face many new challenges which may either improve or threaten our living environment. All these ever-increasing challenges add new risks, and fall within the scope of reliability for sustainability. However, the traditional approach to reliability, which looks into the failure-free operation of systems, is far too simple, and has to be re-examined.
    IEEE Transactions on Reliability 01/2015; 64(1):2-3. DOI:10.1109/TR.2015.2403579
  • ABSTRACT: Most research on degradation models and analyses focuses on nondestructive degradation test data. In practice, destructive tests are often conducted to gain insights into the changes of the physical properties of products or materials over time. Such tests sometimes provide more reliable degradation information than nondestructive tests, which may only yield indirect degradation measures, such as temperature, amount of metal particles, and vibration. However, an obvious drawback of destructive tests is that only one measurement can be obtained from each specimen. Moreover, some products start degrading only after a random degradation initiation time that is often not even observable in destructive degradation tests (DDTs). Such a degradation-free period adds another dimension of complexity in modeling DDT data. In this paper, we develop two delayed-degradation models based on DDT data to evaluate the reliability of a product with an exponentially distributed degradation initiation time. For homogeneous and heterogeneous populations, fixed-effects and random-effects Gamma processes are considered, respectively, in modeling the actual degradation of units after degradation initiation. A maximum likelihood method as well as an expectation-maximization algorithm is developed to estimate the model parameters, and bootstrap methods are used to obtain confidence intervals for the reliability indices of interest. Numerical examples demonstrate that the proposed models and estimation methods are effective in analyzing DDT data involving random degradation initiation times.
    IEEE Transactions on Reliability 01/2015; 64(1):516-527. DOI:10.1109/TR.2014.2336411
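The delayed-degradation idea above can be illustrated with a minimal Monte Carlo sketch. All parameter values and the threshold-based definition of reliability below are illustrative assumptions, not the paper's fitted model; the sketch only shows how an exponentially distributed degradation-free period combines with a gamma degradation process.

```python
import random

def simulate_degradation(t, rate=0.5, shape_rate=1.2, scale=0.8):
    """One unit's degradation level at time t under a delayed gamma-process
    model (all parameters illustrative): degradation starts after an
    Exp(rate) initiation time; afterwards, the level after elapsed active
    time s is Gamma(shape_rate * s, scale) distributed."""
    t0 = random.expovariate(rate)   # random degradation-free period
    active = t - t0                 # time spent actually degrading
    if active <= 0.0:
        return 0.0                  # degradation has not initiated yet
    return random.gammavariate(shape_rate * active, scale)

def reliability(t, threshold, n=20000, seed=42):
    """Monte Carlo estimate of P(degradation level at time t < threshold)."""
    random.seed(seed)
    ok = sum(simulate_degradation(t) < threshold for _ in range(n))
    return ok / n
```

With these parameters, reliability starts at 1 at t = 0 and decays as more units initiate degradation and the gamma process accumulates past the threshold.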
  • ABSTRACT: Software defect prediction can help to allocate testing resources efficiently by ranking software modules according to their defect counts. Existing software defect prediction models that are optimized to predict explicitly the number of defects in a software module might fail to give an accurate order, because it is very difficult to predict the exact number of defects in a software module due to noisy data. This paper introduces a learning-to-rank approach that constructs software defect prediction models by directly optimizing the ranking performance. In this paper, we build on our previous work and further study whether the idea of directly optimizing the model performance measure can benefit software defect prediction model construction. The work includes two aspects: one is a novel application of the learning-to-rank approach to real-world data sets for software defect prediction, and the other is a comprehensive evaluation and comparison of the learning-to-rank method against other algorithms that have been used for predicting the order of software modules according to the predicted number of defects. Our empirical studies demonstrate the effectiveness of directly optimizing the model performance measure for the learning-to-rank approach to constructing defect prediction models for the ranking task.
    IEEE Transactions on Reliability 01/2015; 64(1):234-246. DOI:10.1109/TR.2014.2370891
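A ranking-oriented evaluation of such a model can be sketched in a few lines. The measure below (the share of all actual defects captured in the top fraction of modules under the predicted ranking) is a common effort-aware-style metric chosen for illustration; it is not necessarily the performance measure optimized in the paper.

```python
def top_fraction_defect_recall(predicted, actual, fraction=0.2):
    """Rank modules by predicted defect count (descending) and report the
    share of all actual defects found in the top `fraction` of modules.
    Illustrative metric, not necessarily the paper's."""
    order = sorted(range(len(predicted)), key=lambda i: -predicted[i])
    k = max(1, int(len(order) * fraction))     # number of modules inspected
    found = sum(actual[i] for i in order[:k])  # defects in inspected modules
    total = sum(actual) or 1                   # avoid division by zero
    return found / total
```

A model that ranks well scores high on this measure even if its predicted counts are numerically off, which is exactly the motivation for optimizing the ranking directly.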
  • ABSTRACT: The world's increased dependence on software-enabled systems has raised major concerns about software reliability and security. New cost-effective tools for software quality assurance are needed. This paper presents an automated test generation technique, called Model-based Integration and System Test Automation (MISTA), for integrated functional and security testing of software systems. Given a Model-Implementation Description (MID) specification, MISTA generates test code that can be executed immediately against the implementation under test. The MID specification uses a high-level Petri net to capture both control- and data-related requirements for functional testing, access control testing, or penetration testing with threat models. After generating test cases from the test model according to a given criterion, MISTA converts the test cases into executable test code by mapping model-level elements into implementation-level constructs. MISTA has implemented test generators for various test coverage criteria of test models, code generators for various programming and scripting languages, and test execution environments such as Java, C, C#, HTML-Selenium IDE, and Robot Framework. MISTA has been applied to the functional and security testing of various real-world software systems. Our experiments have demonstrated that MISTA can be highly effective in fault detection.
    IEEE Transactions on Reliability 01/2015; 64(1):247-268. DOI:10.1109/TR.2014.2354172
  • ABSTRACT: In a reliability experiment, a combination of temperature cycling (TC) and temperature humidity bias (THB) stress was applied. The outcome varied significantly with the sequence of the stress tests: it made a difference whether TC stress was performed first and then THB stress, or the other way round. A statistical model, called the Stress Interaction Model, is created for this effect. The stress sequence is represented as a path, and the hazard function is set up as a vector field. Unless the vector field has a potential, the path integral depends on the path taken from one point to another. In this setup, the sequence of the applied stress tests, i.e., the stress history, is reflected in the reliability function. In this paper, the Stress Interaction Model is extended by a shape factor.
    IEEE Transactions on Reliability 01/2015; 64(1):528-534. DOI:10.1109/TR.2014.2363151
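The path-dependence argument can be made concrete: for a vector field with no potential (non-zero curl), the line integral between the same two endpoints differs by path. The field below is a hypothetical example, with the two polyline paths standing in for TC-then-THB and THB-then-TC stress sequences; it is not the paper's hazard field.

```python
def line_integral(field, points, steps=1000):
    """Numerically integrate field . dr along the polyline through `points`
    using the midpoint rule on each straight segment."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        for i in range(steps):
            s = (i + 0.5) / steps                       # midpoint parameter
            x, y = x0 + s * (x1 - x0), y0 + s * (y1 - y0)
            fx, fy = field(x, y)
            total += fx * (x1 - x0) / steps + fy * (y1 - y0) / steps
    return total

# Hypothetical non-conservative field: curl = -1 != 0, so no potential exists.
field = lambda x, y: (y, 0.0)

# Same endpoints (0,0) -> (1,1), different stress orders:
tc_then_thb = line_integral(field, [(0, 0), (1, 0), (1, 1)])
thb_then_tc = line_integral(field, [(0, 0), (0, 1), (1, 1)])
```

The two integrals come out different (0 versus 1 here), which is the mechanism by which the stress history enters the reliability function.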
  • ABSTRACT: This paper considers 1-out-of-N:G heterogeneous fault-tolerant systems that are designed with a mix of hot and cold standby redundancy to achieve a tradeoff between the restoration and operation costs of standby elements. In such systems, the way in which the elements are distributed between hot and cold standby groups, and the initiation sequence of the cold standby elements, can greatly affect the system reliability and mission cost. Therefore, it is important to solve the optimal standby element distributing and sequencing problem (SE-DSP). A failure that occurs in a system element can propagate, causing the outage of other system elements, which complicates the solution of the SE-DSP problem. In this paper, we first propose a numerical method for evaluating the reliability and expected mission cost of 1-out-of-N:G systems with mixed hot and cold redundancy types and propagated failures. Two different failure propagation modes are considered: an element failure causing the outage of all the system elements, and an element failure causing the outage of only working or hot standby elements but not cold standby elements. A genetic algorithm is utilized as an optimization tool for solving the formulated SE-DSP problem, leading to a solution that minimizes the expected mission cost of the system while providing a desired level of system reliability. Effects of the failure propagation probability on the system reliability, the expected mission cost, and the optimization results are investigated. The suggested methodology can facilitate a reliability-cost tradeoff study of the considered systems, thus assisting in optimal decision making regarding the system's standby policy. Examples are provided to illustrate the considered problem and the proposed solution methodology.
    IEEE Transactions on Reliability 01/2015; 64(1):410-419. DOI:10.1109/TR.2014.2355514
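The hot/cold reliability tradeoff can be sketched with a small Monte Carlo model. It assumes identical exponential lifetimes and ignores failure propagation, switching delays, and cost, so it is only a toy version of the 1-out-of-N:G setting discussed above: hot standbys age from time zero, while each cold standby starts aging only after every earlier element has failed.

```python
import random

def mission_reliability(n_hot, n_cold, lam=1.0, t_mission=2.0,
                        n_sim=20000, seed=1):
    """Monte Carlo estimate of mission reliability for a 1-out-of-N:G system
    (N = n_hot + n_cold) with exponential(lam) lifetimes. Hot elements all
    age from t=0, so the hot group survives until the maximum lifetime;
    cold elements are then consumed one at a time."""
    random.seed(seed)
    ok = 0
    for _ in range(n_sim):
        t_fail = max(random.expovariate(lam) for _ in range(n_hot))
        for _ in range(n_cold):            # each cold element adds a fresh life
            t_fail += random.expovariate(lam)
        ok += t_fail > t_mission
    return ok / n_sim
```

For the same element count, the cold-heavy configuration is more reliable here (its elements do not age while idle); the paper's point is that real designs must also weigh restoration cost and propagation effects, which this sketch omits.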
  • ABSTRACT: In this paper, we model and analyze non-repairable 1-out-of-N:G warm standby systems subject to periodic backups and dynamic reworking. In such systems, a standby element must redo some portion of the work already performed by the failed online element before taking over the mission task, which makes the actual mission time dynamic. The considered systems are widely used in applications such as computing and manufacturing, but have not been well studied in reliability theory. In this work, we make new contributions by suggesting a numerical algorithm to evaluate the reliability of the considered warm standby systems. It is revealed that these systems are non-coherent: the system reliability has a non-monotonic dependence on the reliability of individual elements. Numerical examples further show that the non-coherency phenomenon is more pronounced for elements initiated earlier than for those initiated later in the warm standby list. Example results also imply that placing highly unreliable elements at the end of the warm standby waiting list, or even removing them from the system planning, can enhance the reliability of a warm standby system subject to reworking. Findings from this work can guide the reliability design of the considered warm standby systems in practice.
    IEEE Transactions on Reliability 01/2015; 64(1):444-453. DOI:10.1109/TR.2014.2356938
  • ABSTRACT: This paper addresses an NP-hard problem, referred to as Network Topology Design with minimum Cost subject to a Reliability constraint (NTD-CR), to design a minimal-cost communication network topology that satisfies a pre-defined reliability constraint. The paper describes a dynamic programming (DP) scheme to solve the NTD-CR problem, and proposes a DP approach, called Dynamic Programming Algorithm to solve NTD-CR (DPCR-ST), that generates the topology using a selected sequence of spanning trees of the network. The paper shows that our DPCR-ST approach always provides a feasible solution, and produces an optimal topology given an optimal order of spanning trees. The paper proves that the problem of optimally ordering the spanning trees is NP-complete, and proposes three greedy heuristics to generate and order only a subset of the spanning trees of the network. Each heuristic allows the DPCR-ST approach to generate the topology from this reduced set, which improves the time complexity while producing a near-optimal topology. Simulations based on fully connected networks show the merits of the ordering methods and the effectiveness of our algorithm compared with four existing state-of-the-art techniques. Our DPCR-ST approach is able to generate 81.5% optimal results while using only 0.77% of the spanning trees contained in the networks. Further, for a typical 2×100 grid network, the DPCR-ST approach requires only a small fraction of the spanning trees to generate a topology whose reliability is no more than 5.05% off optimal.
    IEEE Transactions on Reliability 01/2015; 64(1):118-131. DOI:10.1109/TR.2014.2338253
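The reliability constraint in problems like NTD-CR is typically an all-terminal network reliability. For a tiny graph it can be computed exactly by enumerating edge states, which also shows why DP and heuristics over spanning trees are needed: this brute force is exponential in the number of edges. The graph and edge reliability below are illustrative.

```python
from itertools import combinations

def all_terminal_reliability(nodes, edges, p):
    """Exact all-terminal reliability by state enumeration: the probability
    that the surviving edges keep every node connected, when each edge works
    independently with probability p. Feasible only for tiny networks."""
    def connected(up):
        if not nodes:
            return True
        seen, stack = {nodes[0]}, [nodes[0]]
        while stack:                       # DFS over surviving edges
            u = stack.pop()
            for a, b in up:
                for x, y in ((a, b), (b, a)):
                    if x == u and y not in seen:
                        seen.add(y)
                        stack.append(y)
        return len(seen) == len(nodes)
    rel = 0.0
    for k in range(len(edges) + 1):        # sum over all 2^|E| edge states
        for up in combinations(edges, k):
            if connected(list(up)):
                rel += p**k * (1 - p)**(len(edges) - k)
    return rel

# Triangle network with edge reliability 0.9: R = p^3 + 3*p^2*(1-p) = 0.972
triangle = all_terminal_reliability([0, 1, 2], [(0, 1), (1, 2), (0, 2)], 0.9)
```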
  • ABSTRACT: Traditional software testing methods are not effective for thoroughly testing embedded software, because generating effective test inputs to cover all code is extremely difficult. In this work, we propose an automatic method, based on concolic execution, to generate test inputs for embedded executables. The core idea of our method is to divide concolic execution into symbolic execution on hosts and concrete execution on targets, so considerable development work can be saved. Our method overcomes the limitations of the software and hardware capabilities of embedded systems by confining heavyweight work to resource-rich hosts. One feature of our method is that it targets executables, so the source code of the tested software is not needed. Another feature is that tested programs run in a real environment rather than in a simulator, so accurate run-time information can be acquired. Symbolic execution and concrete execution are coordinated by cross-debugging functions. We implemented our method on Wind River VxWorks. Experiments show that our method achieves high code coverage at acceptable speed.
    IEEE Transactions on Reliability 01/2015; 64(1):284-296. DOI:10.1109/TR.2014.2363153
  • ABSTRACT: In this article, we propose a method to predict the remaining useful life (RUL) of individual units based on condition monitoring signals with a change point. We assume the units are subject to hard failure. The existence of a sudden change point in the condition monitoring signal indicates an increased probability of system failure, so this information should be considered in RUL prediction. In the proposed method, an extended form of the joint prognostic model (JPM) that considers a change point (JPM-C) is proposed, and the change point is detected using the concordance correlation coefficient (CCC). The advantages of the proposed method are shown through a numerical simulation and a case study of battery useful life prediction.
    IEEE Transactions on Reliability 01/2015; 64(1):182-196. DOI:10.1109/TR.2014.2355531
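The concordance correlation coefficient used here for change-point detection is a standard statistic (Lin's CCC), which penalizes both poor correlation and systematic shifts between two sequences. A self-contained version is below; the segment-comparison logic of the paper's change-point detector is not reproduced.

```python
def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two equal-length
    sequences: 2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2).
    Equals 1 only for perfect agreement (y == x), and drops when the two
    sequences differ in correlation, scale, or location."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n           # biased variances,
    sy = sum((v - my) ** 2 for v in y) / n           # as in Lin's definition
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)
```

A sudden drop in CCC between a signal segment and its expected profile is the kind of signal a change-point detector can act on.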
  • IEEE Transactions on Reliability 01/2015; DOI:10.1109/TR.2014.2385069
  • ABSTRACT: Software systems are widely employed in society. With a limited amount of testing resource available, testing resource allocation among components of a software system becomes an important issue. Most existing research on the testing resource allocation problem takes a single-objective optimization approach, which may not adequately address all the concerns in the decision-making process. In this paper, an architecture-based multi-objective optimization approach to testing resource allocation is proposed. An architecture-based model is used for system reliability assessment, which has the advantage of explicitly considering system architecture over the reliability block diagram (RBD)-based models, and has good flexibility to different architectural alternatives and component changes. A system cost modeling approach based on well-developed software cost models is proposed; it is more flexible and better suited to software cost modeling than earlier approaches based on an empirical cost model. A multi-objective optimization model is developed for the testing resource allocation problem, in which the three major concerns, i.e., system reliability, system cost, and the total amount of testing resource consumed, are taken into consideration. A multi-objective evolutionary algorithm (MOEA), called multi-objective differential evolution based on weighted normalized sum (WNS-MODE), is developed. Experimental studies show several results. 1) The proposed architecture-based multi-objective optimization approach can identify a testing resource allocation strategy with a good trade-off among the optimization objectives. 2) The developed WNS-MODE is better than the MOEA developed in recent research, called HaD-MOEA, in terms of both solution quality and computational efficiency. 3) The WNS-MODE appears quite robust according to the sensitivity analysis results.
    IEEE Transactions on Reliability 01/2015; 64(1):497-515. DOI:10.1109/TR.2014.2372411
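The "weighted normalized sum" idea can be illustrated by a generic scalarization step: min-max normalize each objective to a common [0, 1] scale, then combine with weights into a single score. This is only a stand-in for the scalarization concept, not the WNS-MODE evolutionary algorithm itself; the bounds and weights below are arbitrary assumptions.

```python
def wns_scalarize(objectives, weights, bounds):
    """Weighted normalized sum of minimization objectives: each objective f
    is normalized to (f - lo) / (hi - lo) using its (lo, hi) bounds, then
    the normalized values are combined with the given weights. Lower is
    better; normalization keeps objectives on different scales comparable."""
    score = 0.0
    for f, w, (lo, hi) in zip(objectives, weights, bounds):
        score += w * (f - lo) / (hi - lo)
    return score

# Two candidate allocations, objectives = (system unreliability, cost):
candidate_a = wns_scalarize([0.05, 100.0], [0.5, 0.5],
                            [(0.0, 0.2), (0.0, 1000.0)])
candidate_b = wns_scalarize([0.02, 900.0], [0.5, 0.5],
                            [(0.0, 0.2), (0.0, 1000.0)])
```

With equal weights, candidate A (slightly less reliable but much cheaper) scores better here; changing the weights shifts the trade-off, which is what the multi-objective search explores.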
  • ABSTRACT: The (n,k)-star graph, denoted by S_{n,k}, is an enhanced version of the n-dimensional star graph S_n that has better scalability, and possesses several good properties compared with hypercubes. Diagnosis has been one of the most important issues for maintaining multiprocessor-system reliability. Conditional diagnosability, which is more general than classical diagnosability, measures the diagnosability of a multiprocessor system under the assumption that the neighbors of any processor in the system cannot all fail simultaneously. In this paper, we investigate the conditional diagnosability of S_{n,k} under the comparison diagnosis model.
    IEEE Transactions on Reliability 01/2015; 64(1):132-143. DOI:10.1109/TR.2014.2354912
  • ABSTRACT: In a reliability experiment, accelerated life-testing allows higher-than-normal stress levels on test units. In a special class of accelerated life tests known as step-stress tests, the stress levels are increased at pre-planned time points, allowing the experimenter to obtain information on the lifetime parameters more quickly than under normal operating conditions. Also, when a test unit fails, there are often several risk factors associated with the cause of failure (mechanical, electrical, etc.). In this article, the step-stress model under Type-I censoring is considered when the different risk factors have s-independent generalized exponential lifetime distributions. Under the assumption of cumulative damage, the point estimates of the unknown scale and shape parameters of the different causes are derived using the maximum likelihood approach. Using the asymptotic distributions and the parametric bootstrap method, we also discuss the construction of confidence intervals for the parameters. The precision of the estimates and the performance of the confidence intervals are assessed through extensive Monte Carlo simulations, and lastly, the method of inference discussed here is illustrated with examples.
    IEEE Transactions on Reliability 01/2015; 64(1):31-43. DOI:10.1109/TR.2014.2336392
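The competing-risks setup with generalized exponential lifetimes can be sketched directly: each cause j has CDF F_j(t) = (1 - e^(-lam_j * t))^alpha_j, the unit fails at the minimum of the latent lifetimes, and that cause is recorded. The parameter values below are arbitrary, and the step-stress level changes and Type-I censoring of the paper are omitted.

```python
import math
import random

def ge_cdf(t, alpha, lam):
    """CDF of the generalized exponential distribution:
    F(t) = (1 - exp(-lam*t))**alpha, for t >= 0."""
    return (1.0 - math.exp(-lam * t)) ** alpha

def ge_sample(alpha, lam):
    """Draw one lifetime by inverting the CDF:
    t = -ln(1 - u**(1/alpha)) / lam, with u ~ Uniform(0, 1)."""
    u = random.random()
    return -math.log(1.0 - u ** (1.0 / alpha)) / lam

def competing_risk_draw(params):
    """Competing risks: each cause j has its own (alpha_j, lam_j); the unit
    fails at the minimum latent lifetime, and that cause index is recorded."""
    times = [ge_sample(a, l) for a, l in params]
    t = min(times)
    return t, times.index(t)
```

With alpha = 1 the generalized exponential reduces to the ordinary exponential, which is a convenient sanity check on the CDF.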
  • ABSTRACT: Extreme complementary metal-oxide-semiconductor (CMOS) technology scaling is causing significant concerns about the reliability of computer systems. Intermittent hardware errors are non-deterministic bursts of errors that occur in the same physical location. Recent studies have found that 40% of the processor failures in real-world machines are due to intermittent hardware errors. A study of the effects of intermittent faults on programs is a critical step in building fault-tolerance techniques of reasonable accuracy and cost. In this work, we characterize the impact of intermittent hardware faults on programs using fault-injection campaigns in a microarchitectural processor simulator. We find that 80% of the non-benign intermittent hardware errors activate a hardware trap in the processor, and the remaining 20% cause silent data corruptions. We have also investigated the possibility of using the program state at failure time in software-based diagnosis techniques, and found that much of the erroneous data is intact and can be used to identify the source of the error.
    IEEE Transactions on Reliability 01/2015; 64(1):297-310. DOI:10.1109/TR.2014.2363152