IEEE Transactions on Reliability (IEEE T RELIAB)

Publisher: Institute of Electrical and Electronics Engineers (IEEE). Associated bodies: IEEE Professional Technical Group on Reliability; IEEE Reliability Group; IEEE Reliability Society; American Society for Quality Control, Electronics Division.

Journal description

Covers the principles and practices of reliability, maintainability, and product liability pertaining to electrical and electronic equipment.

Current impact factor: 1.934 (2014)

Impact Factor Rankings

2015 Impact Factor Available summer 2016
2014 Impact Factor 1.934
2013 Impact Factor 1.657
2012 Impact Factor 2.293
2011 Impact Factor 1.285
2010 Impact Factor 1.288
2009 Impact Factor 1.331
2008 Impact Factor 1.315
2007 Impact Factor 1.303
2006 Impact Factor 0.8
2005 Impact Factor 0.715
2004 Impact Factor 0.828
2003 Impact Factor 0.444
2002 Impact Factor 0.522
2001 Impact Factor 0.477
2000 Impact Factor 0.358
1999 Impact Factor 0.341
1998 Impact Factor 0.255
1997 Impact Factor 0.355
1996 Impact Factor 0.369
1995 Impact Factor 0.304
1994 Impact Factor 0.45
1993 Impact Factor 0.332
1992 Impact Factor 0.407

Additional details

5-year impact 2.19
Cited half-life >10.0
Immediacy index 0.19
Eigenfactor 0.01
Article influence 0.85
Website IEEE Transactions on Reliability website
Other titles IEEE transactions on reliability, Institute of Electrical and Electronics Engineers transactions on reliability, Transactions on reliability, Reliability
ISSN 0018-9529
OCLC 1752560
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publisher details

Institute of Electrical and Electronics Engineers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on Author's personal website, employer's website, or publicly accessible server
    • Author's post-print on Author's server or Institutional server
    • Author's pre-print must be removed upon publication of the final version and replaced with either a full citation to the IEEE work (with a Digital Object Identifier or a link to the article abstract in IEEE Xplore) or with the Author's post-print
    • Author's pre-print must be accompanied by the following set phrase once submitted to IEEE for publication: "This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible."
    • Author's pre-print must be accompanied by the following set phrase once accepted by IEEE for publication: "(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works."
    • IEEE must be informed as to the electronic address of the pre-print
    • If funding rules apply, authors may post the Author's post-print version in the funder's designated repository
    • Author's Post-print - Publisher copyright and source must be acknowledged with citation (see above set statement)
    • Author's Post-print - Must link to publisher version with DOI
    • Publisher's version/PDF cannot be used
    • Publisher copyright and source must be acknowledged
  • Classification

Publications in this journal

  •
    ABSTRACT: This paper discusses how to estimate the residual useful lifetime (RUL) with a model-based method when fluctuations exist in the degradation process around its average behavior. The main idea is to model the degradation process as a time-dependent Ornstein-Uhlenbeck (OU) process and to adopt the first passage time over a failure threshold for the corresponding RUL estimation. The time-dependent OU process is shown to be well suited to this task because its mean, variance, and correlation are all controllable, and its mean-reverting property is used to interpret temporarily correlated fluctuations around the overall degrading trend in degradation records. Parameter estimation is carried out by maximum likelihood, and a Volterra integral equation of the second kind with a non-singular kernel is then solved to obtain the probability density function (pdf) of the failure time. The proposed methods are tested in a case study, where results are compared with a nonlinear-drift, linear-diffusion process.
    IEEE Transactions on Reliability 01/2016; 65(1). DOI:10.1109/TR.2015.2462353
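    For illustration, the first-passage idea behind this RUL formulation can be sketched with a small Monte Carlo simulation of a time-dependent OU process; the drift, reversion rate, diffusion, and failure threshold below are illustrative assumptions, not the authors' fitted model, and the paper itself obtains the failure-time pdf analytically via the Volterra integral equation rather than by simulation.

    # Hypothetical sketch: Monte Carlo first-passage RUL for a time-dependent
    # Ornstein-Uhlenbeck degradation model dX = theta(t)*(mu(t)-X) dt + sigma(t) dB.
    # All parameter functions and the threshold D are illustrative assumptions.
    import numpy as np

    def simulate_rul(x0, t0, D, theta, mu, sigma, dt=0.01, horizon=50.0, n_paths=2000):
        """Mean first-passage time past t0 (the RUL) over the failure threshold D."""
        rng = np.random.default_rng(0)
        ruls = []
        for _ in range(n_paths):
            x, t = x0, t0
            while t < t0 + horizon:
                x += theta(t) * (mu(t) - x) * dt + sigma(t) * np.sqrt(dt) * rng.standard_normal()
                t += dt
                if x >= D:  # degradation crosses the failure level
                    ruls.append(t - t0)
                    break
        return np.mean(ruls) if ruls else float("inf")

    # Example: linearly increasing mean degradation with mean-reverting fluctuations
    rul = simulate_rul(x0=2.0, t0=10.0, D=8.0,
                       theta=lambda t: 0.5, mu=lambda t: 0.3 * t, sigma=lambda t: 0.4)
    print(f"Estimated RUL: {rul:.2f}")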
  •
    ABSTRACT: This paper proposes a contribution to the system prognostic problem, which is formulated here as a problem of predictive diagnosis under a temporal constraint. This problem is generally treated with approaches that are based on dynamic system models or expert knowledge, or that are data-driven. Here, in order to describe the behavior of a process, we consider dynamic models composed of differential equations. The goal of this work is twofold. First, we present a new strategy for system prognosis based on observer design. Second, we propose a comparative study of two observer-design methodologies applied to an electromechanical process. Simulation results are provided to illustrate the performance of the approaches.
    IEEE Transactions on Reliability 11/2015; DOI:10.1109/TR.2015.2494682
  •
    ABSTRACT: In this paper, a mixture representation is derived for the pooled system lifetimes arising from a life-test on two or more independent samples. The components of each system are assumed to have the same common absolutely continuous distribution, but the system signature may vary between the samples. These mixtures are then used for developing exact nonparametric inference in the form of confidence intervals for quantiles of component or system lifetimes, as well as prediction intervals for future component or system lifetimes. Examples are finally provided to illustrate the developed methods. It is noted that testing with systems rather than components directly can reduce the expected number of failures while maintaining nominal coverage probability.
    IEEE Transactions on Reliability 11/2015; DOI:10.1109/TR.2015.2494362
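    For context, the mixtures referred to here build on the classical system-signature representation; written generically (this is the standard identity, not necessarily the paper's exact notation), the lifetime T of an n-component coherent system with i.i.d. absolutely continuous component lifetimes satisfies

    \bar{F}_T(t) = \sum_{i=1}^{n} s_i \, \bar{F}_{i:n}(t),
    \qquad
    \bar{F}_{i:n}(t) = \sum_{k=0}^{i-1} \binom{n}{k} F(t)^{k} \, [1 - F(t)]^{\,n-k},

    where s_i is the probability that the system fails upon the i-th component failure (the signature) and F_{i:n} is the distribution of the i-th order statistic of the component lifetimes. Pooling life-test data from samples with different signatures then yields the mixture used for the nonparametric confidence and prediction intervals.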
  •
    ABSTRACT: High-speed rail (HSR) holds promise as an investment to modernize the transportation infrastructure of the United States of America and support its national economy. The half-trillion-dollar price tag necessitates thorough analysis and planning to ensure long-term success, including a well-formulated vulnerability analysis to identify an optimal or near-optimal strategy for deploying protective technologies and infrastructure that will render the HSR network resilient to disruptions induced by natural disasters and terrorist threats. This paper embeds a game-theoretic vulnerability assessment technique, which considers the impact of defending network links, into a genetic algorithm that searches for a near-optimal assignment of finite defensive resources to the links of the network. The terms link and edge are used interchangeably in this paper. The approach is applied to the incremental HSR network deployment map given by the U.S. High-Speed Rail Association, describing network construction over a 15-year period from 2015 to 2030. Our results suggest that, over the life of the HSR network, a strategy utilizing relocatable defenses could achieve significant savings compared to less flexible strategies that rely solely on static defenses.
    IEEE Transactions on Reliability 11/2015; DOI:10.1109/TR.2015.2491602
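    As a rough illustration of embedding an attacker-best-response assessment inside a genetic algorithm, the toy sketch below allocates a fixed defense budget to the links of a small graph; the network, the damage metric (connected node pairs remaining after removal of one undefended link), and the GA settings are illustrative assumptions, not the paper's HSR model.

    # Hypothetical sketch: GA searching for a near-optimal assignment of a fixed
    # number of defensive resources to links, with fitness equal to the worst-case
    # connectivity left after the attacker removes the best undefended link.
    import itertools, random

    EDGES = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3)]  # toy rail network
    BUDGET = 2                                                 # links we can defend

    def connected_pairs(edges):
        """Number of node pairs still connected, via a small union-find."""
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for u, v in edges:
            parent[find(u)] = find(v)
        nodes = {n for e in EDGES for n in e}
        return sum(1 for a, b in itertools.combinations(nodes, 2) if find(a) == find(b))

    def fitness(defense):
        """Attacker removes the single undefended link that hurts connectivity most."""
        worst = connected_pairs(EDGES)
        for i in range(len(EDGES)):
            if i not in defense:
                worst = min(worst, connected_pairs([e for j, e in enumerate(EDGES) if j != i]))
        return worst  # higher is better for the defender

    def ga(pop_size=20, generations=50):
        pop = [set(random.sample(range(len(EDGES)), BUDGET)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: pop_size // 2]
            children = []
            while len(children) < pop_size - len(survivors):
                a, b = random.sample(survivors, 2)
                child = set(random.sample(sorted(a | b), BUDGET))   # crossover
                if random.random() < 0.2:                           # mutation
                    child = set(random.sample(range(len(EDGES)), BUDGET))
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    best = ga()
    print("Defend links:", [EDGES[i] for i in sorted(best)])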

  • IEEE Transactions on Reliability 10/2015;
  •
    ABSTRACT: An Android-based smart television (TV) must reliably run its applications in an embedded program environment under diverse hardware resource conditions. Owing to the diverse hardware components used to build numerous TV models, TV simulators usually lack the fidelity to simulate the various models, and are therefore regarded as unreliable alternatives when stress testing such applications. As a result, even though stress testing on real TV sets is tedious, it is the de facto approach used in industry to ensure the reliability of these applications. In this paper, we study to what extent stress testing of smart TV applications can be fully automated in industrial environments. To the best of our knowledge, no previous work has addressed this important question. We summarize the findings collected from 10 industrial test engineers who tested 20 such TV applications in a real production environment. Our study shows that industry requires test automation support for high-level GUI object controls and status checking, setup of resource conditions, and the interplay between the two. With such support, 87% of the industrial test specifications of one TV model could be fully automated, and 71.4% of them were found to be fully reusable for testing a subsequent TV model with major upgrades of hardware, operating system, and application. This represents a significant improvement, with margins of 28% and 38%, respectively, compared to stress testing without such support.
    IEEE Transactions on Reliability 09/2015; DOI:10.1109/TR.2015.2481601
  •
    ABSTRACT: Manufacturing quality and lifetime testing conditions may affect product reliability measurements. The literature for the design of experiments (DOE) and robust product optimization considering both quality and reliability issues is scarce. This article develops a model to include both manufacturing variables and accelerated degradation test (ADT) conditions. A simple algorithm provides calculations of the maximum likelihood estimates (MLEs) of these model parameters and percentile lifetimes. Variances of these estimates are derived based on large sample theory. Our DOE plans focus on deciding replication sizes and proportions of the test-units allocated at three stress levels for various manufacturing and ADT conditions. This work also explores robust parameter design (RPD) optimizations for selected controllable manufacturing variables to achieve the longest product lifetime and smallest variation in lifetime distributions.
    IEEE Transactions on Reliability 09/2015; 64(3):1-11. DOI:10.1109/TR.2015.2415892
  •
    ABSTRACT: In this paper, we propose to define the problem of predicting the remaining useful life of a component as a binary classification task. This approach is particularly useful for problems in which the evolution of the system condition is described by a combination of a large number of discrete-event diagnostic data, and for which alternative approaches are either not applicable, or are only applicable with significant limitations or with a large computational burden. The proposed approach is demonstrated with a case study of real discrete-event data for predicting the occurrence of railway operation disruptions. For the classification task, Extreme Learning Machine (ELM) has been chosen because of its good generalization ability, computational efficiency, and low requirements on parameter tuning.
    IEEE Transactions on Reliability 09/2015; 64(3):1049-1056. DOI:10.1109/TR.2015.2440531
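    A minimal sketch of the Extreme Learning Machine classifier named here (random hidden layer, output weights solved by least squares) is given below; the feature matrix, labels, and hidden-layer size are toy assumptions, not the railway disruption data set used in the paper.

    # Hypothetical ELM binary classifier: random input weights, closed-form output weights.
    import numpy as np

    rng = np.random.default_rng(42)

    def elm_train(X, y, n_hidden=100):
        """Random hidden layer; output weights from the Moore-Penrose pseudo-inverse."""
        W = rng.standard_normal((X.shape[1], n_hidden))
        b = rng.standard_normal(n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoid hidden activations
        beta = np.linalg.pinv(H) @ y              # least-squares output weights
        return W, b, beta

    def elm_predict(X, W, b, beta, threshold=0.5):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return (H @ beta >= threshold).astype(int)  # 1 = disruption expected, 0 = healthy

    # Toy usage: 200 samples of 20 aggregated event-count features, binary labels
    X = rng.random((200, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(float)
    W, b, beta = elm_train(X, y)
    print("Training accuracy:", (elm_predict(X, W, b, beta) == y).mean())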
  •
    ABSTRACT: Security protocols are notoriously difficult to get right, and most go through several iterations before their hidden security vulnerabilities, which are hard to detect, are triggered. To help protocol designers and developers efficiently find non-trivial bugs, we introduce SYMCONF, a practical conformance testing tool that generates high-coverage test input packets using a conformance test suite and symbolic execution. Our approach can be viewed as the combination of conformance testing and symbolic execution: 1) it first selects symbolic inputs from an existing conformance test suite; 2) it then symbolically executes a network protocol implementation with the symbolic inputs; and 3) it finally generates high-coverage test input packets. We demonstrate the feasibility of this methodology by applying SYMCONF to the generation of a stream of high quality test input packets for multiple implementations of two network protocols, the Kerberos Telnet protocol and Dynamic Host Configuration Protocol (DHCP), and discovering non-trivial security bugs in the protocols.
    IEEE Transactions on Reliability 09/2015; 64(3):1024-1037. DOI:10.1109/TR.2015.2443392
  •
    ABSTRACT: Software Defined Networking (SDN) is rapidly emerging as a new paradigm for managing and controlling the operation of networks ranging from the data center to the core, enterprise, and home. The logical centralization of network intelligence presents exciting challenges and opportunities to enhance security in such networks, including new ways to prevent, detect, and react to threats, as well as innovative security services and applications that are built upon SDN capabilities. In this paper, we undertake a comprehensive survey of recent works that apply SDN to security, and identify promising future directions that can be addressed by such research.
    IEEE Transactions on Reliability 09/2015; 64(3):1-12. DOI:10.1109/TR.2015.2421391
  •
    ABSTRACT: Field return rate prediction is important for manufacturers to assess product reliability and to develop effective warranty management. To obtain timely predictions, lab reliability tests have been widely used to assess field performance before a product is introduced to the market. This work concerns warranty prediction for highly reliable products; due to the high reliability associated with modern electronic devices, the failure data from lab tests are typically insufficient for each individual product, resulting in less accurate predictions of the field return rate. To overcome this issue, a hierarchical reliability model is suggested to efficiently integrate the information from multiple devices of a similar type in the historical database. Under a Bayesian framework, the warranty prediction for a new product can be inferred and updated as data collection progresses. The proposed methodology is applied to a case study in the information and communication technology industry for illustration. Bayesian prediction is demonstrated to be very effective compared to other alternatives via a cross-validation study. In particular, the prediction error rate based on our updating prediction scheme improves significantly as more field data are collected, reaching an error rate lower than 20% within 3 months of the product launch.
    IEEE Transactions on Reliability 09/2015; 64(3):1-11. DOI:10.1109/TR.2015.2427153
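    One generic way to write such a two-level hierarchy (an illustrative Poisson-Gamma example, not necessarily the authors' likelihood) is

    r_j \mid \lambda_j \sim \mathrm{Poisson}(\lambda_j N_j t_j), \qquad
    \lambda_j \mid \alpha, \beta \sim \mathrm{Gamma}(\alpha, \beta), \qquad
    (\alpha, \beta) \sim \pi(\alpha, \beta),

    where r_j is the number of field returns among N_j units of device j observed for time t_j. The hyperparameters (alpha, beta) are shared across the similar historical devices, so the posterior for a new device's return rate borrows strength from the database and is re-updated as its own field data accumulate.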
  •
    ABSTRACT: A redundant system usually consists of primary and standby modules. The so-called spare gate is extensively used to model the dynamic behavior of redundant systems in dynamic fault tree (DFT) applications. Several methodologies have been proposed to evaluate the reliability of DFTs containing spare gates by computing the failure probability; however, such approaches usually require either a complex analysis or significant simulation time. Moreover, it is difficult to compute the failure probability of a system whose component failures are not exponentially distributed. Additionally, probabilistic common cause failures (PCCFs) have been widely reported, usually occurring in a statistically dependent manner, and failure to account for the effect of PCCFs overestimates the reliability of a DFT. In this paper, stochastic computational models are proposed for an efficient analysis of spare gates and PCCFs in a DFT. Using these models, a DFT with spare gates under PCCFs can be efficiently evaluated. In the proposed stochastic approach, a signal probability is encoded as a non-Bernoulli sequence of random permutations of fixed numbers of ones and zeros. A component's failure probability is not limited to an exponential distribution, so the approach is applicable to DFT analysis in the general case. Several case studies are evaluated to show the accuracy and efficiency of the proposed approach, compared to both an analytical approach and Monte Carlo (MC) simulation.
    IEEE Transactions on Reliability 09/2015; 64(3):1-15. DOI:10.1109/TR.2015.2419214
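    The sequence encoding described here can be illustrated in a few lines; the sequence length and the two-component AND/OR example are toy assumptions rather than the paper's case studies.

    # Hypothetical sketch: a failure probability p is encoded as a fixed-length 0/1
    # sequence containing exactly round(p*L) ones in a random order (a non-Bernoulli
    # permutation), and gate outputs are evaluated bitwise on the sequences.
    import numpy as np

    L = 10_000
    rng = np.random.default_rng(1)

    def encode(p, length=L):
        """Random permutation of exactly round(p*length) ones, so p is encoded exactly."""
        seq = np.zeros(length, dtype=bool)
        seq[: round(p * length)] = True
        return rng.permutation(seq)

    def prob(seq):
        return seq.mean()

    # Two components fail with probabilities 0.2 and 0.3
    a, b = encode(0.2), encode(0.3)
    print("AND gate (both fail):   ", prob(a & b))   # ~0.06 for independent inputs
    print("OR  gate (either fails):", prob(a | b))   # ~0.44 for independent inputs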
  •
    ABSTRACT: Intelligent systems for online fault diagnosis can increase the reliability, safety, and availability of large and complex systems. The Dynamic Uncertain Causality Graph (DUCG) is a recently presented intelligent-system approach that graphically and compactly represents complex uncertain causalities and performs probabilistic reasoning, and it can be applied to fault diagnosis and other tasks. Previously, however, only static evidence was utilized. In this paper, a methodology for DUCG to perform fault diagnosis with dynamic evidence is presented, in which causality propagation among sequential time slices is avoided. For process systems, the basic failure events are classified as initiating and non-initiating events; this classification can greatly increase the efficiency of fault diagnosis, and failure rates of initiating events can be used in place of failure probabilities without affecting the diagnostic results. Examples are provided to illustrate the methodology.
    IEEE Transactions on Reliability 09/2015; 64(3):1-18. DOI:10.1109/TR.2015.2416332
  •
    ABSTRACT: Software testing began as an empirical activity, and remains part of engineering practice without a widely accepted theoretical foundation. The overwhelming majority of test methods are designed to find software errors, termed faults, in program source code, but not to assess software operational quality. To go beyond fault-seeking requires a theory that relates static program properties to executions. In the 1970s and 1980s, Gerhart, Howden, and others developed a sound functional theory of program testing. Then Duran and others used this theory to precisely define the notions of random testing and operational reliability. In the Gerhart-Howden-Duran theory, a program's behavior is a pure input-output mapping. This paper extends the theory to include persistent state, by adding a state space to the input space, and a state mapping to a program's output mapping. The extended theory is significantly different because test states, unlike inputs, cannot be chosen arbitrarily. The theory is used to analyze state-based testing methods, to examine the practicality of reliability assessment, and to suggest experiments that would increase understanding of the statistical properties of software.
    IEEE Transactions on Reliability 09/2015; 64(3):1098-1115. DOI:10.1109/TR.2015.2436443
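    One compact way to write the extension described here (generic notation, not necessarily the paper's symbols): in the functional theory a program is a mapping P : D -> R from inputs to outputs; adding a persistent state space H gives

    P : D \times H \to R \times H, \qquad (r_i, h_i) = P(d_i, h_{i-1}),

    so each test point is an (input, state) pair and successive executions chain the state h_{i-1} into h_i, which is why test states, unlike inputs, cannot be chosen arbitrarily.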
  •
    ABSTRACT: As our aging population grows significantly, personal health monitoring is becoming an emerging service and can be accomplished by large-scale, low-power sensor networks, such as Zigbee networks. However, collected medical data may reveal patient privacy and should be well protected. We propose HiDE, a Hierarchical and Dynamic Elliptic Curve Cryptosystem based self-certified public key scheme for medical data protection. To serve a large number of sensors, HiDE provides a hierarchical cluster-based framework consisting of a Backbone Cluster and several Area Clusters. In an Area Cluster, a Secure Access Point (SAP) collects medical data from Secure Sensors (SSs) in the sensor network, and transmits the aggregated data to a Root SAP located in the Backbone Cluster. Therefore, the Root SAP can serve a considerable number of SSs without establishing separate secure sessions with each SS individually. To provide dynamic secure sessions for mobile SSs connecting to a SAP, HiDE introduces the Elliptic Curve Cryptosystem based Self-certified Public key scheme (ESP) for establishing secure sessions between each pair of Cluster Head (CH) and Cluster Member (CM). In ESP, the CH can issue a public key to a CM and compute a Shared Session Key (SSK) with that CM without knowing the CM's secret key. This construction satisfies the zero-knowledge proof property, so CHs can dynamically build secure sessions with CMs without managing the CMs' secret keys. Our experiments with realistic implementations and network simulation demonstrate that ESP requires less computation and network overhead than the Rivest-Shamir-Adleman (RSA)-based public key scheme. In addition, security analysis shows that keys in ESP are well protected. Thus, HiDE can protect the confidentiality of sensitive medical data with low computation overhead while maintaining appropriate network performance for wireless sensor networks.
    IEEE Transactions on Reliability 09/2015; 64(3):1-8. DOI:10.1109/TR.2015.2429271
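    As a loose illustration of establishing a shared session key between a cluster head (CH) and a cluster member (CM), the sketch below uses plain ECDH from the Python cryptography package followed by an HKDF step; it is a generic elliptic-curve key agreement, not the paper's self-certified ESP scheme, and the curve, info string, and key length are arbitrary choices.

    # Hypothetical sketch: generic ECDH session-key agreement between CH and CM.
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    ch_private = ec.generate_private_key(ec.SECP256R1())   # cluster head key pair
    cm_private = ec.generate_private_key(ec.SECP256R1())   # cluster member key pair

    # Each side combines its own private key with the peer's public key
    ch_shared = ch_private.exchange(ec.ECDH(), cm_private.public_key())
    cm_shared = cm_private.exchange(ec.ECDH(), ch_private.public_key())

    def derive(shared):
        """Derive a 32-byte session key from the raw ECDH shared secret."""
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"HiDE session").derive(shared)

    assert derive(ch_shared) == derive(cm_shared)           # both sides hold the same SSK
    print("Shared session key established:", derive(ch_shared).hex()[:16], "...")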