IEEE Transactions on Reliability (IEEE T RELIAB)

Publisher: Institute of Electrical and Electronics Engineers (IEEE); IEEE Professional Technical Group on Reliability; IEEE Reliability Group; IEEE Reliability Society; Electronics Division, American Society for Quality Control

Journal description

The principles and practices of reliability, maintainability, and product liability pertaining to electrical and electronic equipment.

Current impact factor: 1.93

Impact Factor Rankings

2015 Impact Factor Available summer 2016
2014 Impact Factor 1.934
2013 Impact Factor 1.657
2012 Impact Factor 2.293
2011 Impact Factor 1.285
2010 Impact Factor 1.288
2009 Impact Factor 1.331
2008 Impact Factor 1.315
2007 Impact Factor 1.303
2006 Impact Factor 0.8
2005 Impact Factor 0.715
2004 Impact Factor 0.828
2003 Impact Factor 0.444
2002 Impact Factor 0.522
2001 Impact Factor 0.477
2000 Impact Factor 0.358
1999 Impact Factor 0.341
1998 Impact Factor 0.255
1997 Impact Factor 0.355
1996 Impact Factor 0.369
1995 Impact Factor 0.304
1994 Impact Factor 0.45
1993 Impact Factor 0.332
1992 Impact Factor 0.407

Additional details

5-year impact 2.19
Cited half-life >10.0
Immediacy index 0.19
Eigenfactor 0.01
Article influence 0.85
Website IEEE Transactions on Reliability website
Other titles IEEE transactions on reliability, Institute of Electrical and Electronics Engineers transactions on reliability, Transactions on reliability, Reliability
ISSN 0018-9529
OCLC 1752560
Material type Periodical, Internet resource
Document type Journal / Magazine / Newspaper, Internet Resource

Publisher details

Institute of Electrical and Electronics Engineers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on author's personal website, employer's website, or a publicly accessible server
    • Author's post-print on Author's server or Institutional server
    • Author's pre-print must be removed upon publication of the final version and replaced with either a full citation to the IEEE work with a Digital Object Identifier, a link to the article abstract in IEEE Xplore, or the author's post-print
    • Author's pre-print, once submitted to IEEE for publication, must be accompanied by the set phrase: "This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible"
    • Author's pre-print, once accepted by IEEE for publication, must be accompanied by the set phrase: "(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works."
    • IEEE must be informed as to the electronic address of the pre-print
    • If funding rules apply authors may post Author's post-print version in funder's designated repository
    • Author's Post-print - Publisher copyright and source must be acknowledged with citation (see above set statement)
    • Author's Post-print - Must link to publisher version with DOI
    • Publisher's version/PDF cannot be used
    • Publisher copyright and source must be acknowledged
  • Classification
    green

Publications in this journal

  •
    ABSTRACT: This paper discusses a model-based method for estimating residual useful lifetime (RUL) when fluctuations exist in the degradation process around its average behavior. The main idea is to model the degradation process as a time-dependent Ornstein-Uhlenbeck (OU) process, and to base the RUL estimate on the first passage time to failure. The time-dependent OU process is shown to be well suited to this task by its statistical properties: controllable mean, variance, and correlation. Its mean-reverting property is introduced to interpret temporarily correlated fluctuations around an overall degrading trend in degradation records. Parameter estimation is performed by maximum likelihood, and a Volterra integral equation of the second kind with a non-singular kernel is solved to obtain the probability density function (pdf) of the failure time. The proposed methods are tested in a case study, where results are compared with a nonlinear-drift, linear-diffusion process.
    IEEE Transactions on Reliability 01/2016; 65(1). DOI:10.1109/TR.2015.2462353
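    The paper computes the first-passage-time pdf analytically via a Volterra integral equation; purely as an illustration of the underlying model, the sketch below simulates a mean-reverting OU-type degradation path and estimates the mean first passage time by Monte Carlo. All parameter values (theta, mu_rate, sigma, threshold) are assumptions for illustration, not taken from the paper.

    ```python
    import random

    def simulate_first_passage(theta=0.5, mu_rate=0.8, sigma=0.3,
                               threshold=5.0, dt=0.01, t_max=20.0):
        """Euler-Maruyama simulation of an OU-type degradation path
        dX = (mu_rate + theta * (mu_rate * t - X)) dt + sigma dW,
        which mean-reverts toward the linear trend mu_rate * t.
        Returns the first time X crosses `threshold`, or None."""
        x, t = 0.0, 0.0
        while t < t_max:
            drift = mu_rate + theta * (mu_rate * t - x)
            x += drift * dt + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)
            t += dt
            if x >= threshold:
                return t
        return None

    random.seed(42)
    hits = [t for t in (simulate_first_passage() for _ in range(1000))
            if t is not None]
    mean_fpt = sum(hits) / len(hits)
    print(f"paths failed: {len(hits)}/1000, mean first-passage time: {mean_fpt:.2f}")
    ```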
  •
    ABSTRACT: Field return rate prediction is important for manufacturers to assess product reliability and develop effective warranty management. To obtain timely predictions, lab reliability tests have been widely used to assess field performance before a product is introduced to the market. This work concerns warranty prediction for highly reliable products. However, due to the high reliability of modern electronic devices, the failure data from lab tests are typically insufficient for each individual product, resulting in less accurate predictions of the field return rate. To overcome this issue, a hierarchical reliability model is suggested to efficiently integrate information from multiple devices of a similar type in a historical database. Under a Bayesian framework, the warranty prediction for a new product can be inferred and updated as data collection progresses. The proposed methodology is applied to a case study in the information and communication technology industry for illustration. Bayesian prediction is demonstrated to be very effective compared to other alternatives via a cross-validation study. In particular, the prediction error rate based on our updating scheme improves significantly as more field data are collected, falling below 20% within 3 months of product launch.
    IEEE Transactions on Reliability 09/2015; 64(3):1-11. DOI:10.1109/TR.2015.2427153
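    The paper's hierarchical Bayesian model is considerably richer, but the updating idea can be illustrated with a minimal conjugate Gamma-Poisson sketch: a Gamma prior on the monthly return rate, shaped by historical data from similar devices, is refreshed as field returns accumulate. All prior values and counts below are invented for illustration.

    ```python
    def update_return_rate(a, b, returns, unit_months):
        """Gamma(a, b) prior on the monthly field return rate; observing
        `returns` failures over `unit_months` of exposure yields the
        conjugate posterior Gamma(a + returns, b + unit_months)."""
        return a + returns, b + unit_months

    # Prior shaped from a historical database of similar devices (assumed).
    a, b = 2.0, 1000.0                # prior mean rate = a / b = 0.002
    field_data = [(3, 5000.0), (7, 12000.0), (12, 21000.0)]  # (returns, unit-months)

    for returns, exposure in field_data:
        a, b = update_return_rate(a, b, returns, exposure)
        print(f"posterior mean return rate: {a / b:.6f} per unit-month")
    ```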
  •
    ABSTRACT: A redundant system usually consists of primary and standby modules. The so-called spare gate is extensively used to model the dynamic behavior of redundant systems in the application of dynamic fault trees (DFTs). Several methodologies have been proposed to evaluate the reliability of DFTs containing spare gates by computing the failure probability; however, such approaches usually require either a complex analysis or significant simulation time. Moreover, it is difficult to compute the failure probability of a system with component failures that are not exponentially distributed. Additionally, probabilistic common cause failures (PCCFs) have been widely reported, usually occurring in a statistically dependent manner; failing to account for the effect of PCCFs overestimates the reliability of a DFT. In this paper, stochastic computational models are proposed for an efficient analysis of spare gates and PCCFs in a DFT. Using these models, a DFT with spare gates under PCCFs can be efficiently evaluated. In the proposed stochastic approach, a signal probability is encoded as a non-Bernoulli sequence of random permutations of fixed numbers of ones and zeros. The component's failure probability is not limited to an exponential distribution, so this approach is applicable to DFT analysis in the general case. Several case studies are evaluated to show the accuracy and efficiency of the proposed approach, compared to both an analytical approach and Monte Carlo (MC) simulation.
    IEEE Transactions on Reliability 09/2015; 64(3):1-15. DOI:10.1109/TR.2015.2419214
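    The non-Bernoulli encoding described above can be sketched minimally: a signal probability becomes a random permutation of a fixed number of ones and zeros, and a gate operates bitwise on the sequences. This toy (with assumed component probabilities, not a full spare-gate model) shows an AND gate recovering the probability that both components fail:

    ```python
    import random

    def encode(p, n=10000):
        """Encode probability p as a non-Bernoulli sequence: a random
        permutation of a fixed number of ones and zeros."""
        ones = round(p * n)
        seq = [1] * ones + [0] * (n - ones)
        random.shuffle(seq)
        return seq

    def and_gate(a, b):
        """A logic gate operates bitwise on the stochastic sequences."""
        return [x & y for x, y in zip(a, b)]

    def decode(seq):
        """Recover a probability by counting ones."""
        return sum(seq) / len(seq)

    random.seed(1)
    p_a, p_b = 0.2, 0.3                 # assumed component failure probabilities
    p_both = decode(and_gate(encode(p_a), encode(p_b)))
    print(f"P(both components fail) ~ {p_both:.3f} (exact: 0.060)")
    ```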
  •
    ABSTRACT: Software Defined Networking (SDN) is rapidly emerging as a new paradigm for managing and controlling the operation of networks ranging from the data center to the core, enterprise, and home. The logical centralization of network intelligence presents exciting challenges and opportunities to enhance security in such networks, including new ways to prevent, detect, and react to threats, as well as innovative security services and applications that are built upon SDN capabilities. In this paper, we undertake a comprehensive survey of recent works that apply SDN to security, and identify promising future directions that can be addressed by such research.
    IEEE Transactions on Reliability 09/2015; 64(3):1-12. DOI:10.1109/TR.2015.2421391
  •
    ABSTRACT: Manufacturing quality and lifetime testing conditions may affect product reliability measurements. The literature for the design of experiments (DOE) and robust product optimization considering both quality and reliability issues is scarce. This article develops a model to include both manufacturing variables and accelerated degradation test (ADT) conditions. A simple algorithm provides calculations of the maximum likelihood estimates (MLEs) of these model parameters and percentile lifetimes. Variances of these estimates are derived based on large sample theory. Our DOE plans focus on deciding replication sizes and proportions of the test-units allocated at three stress levels for various manufacturing and ADT conditions. This work also explores robust parameter design (RPD) optimizations for selected controllable manufacturing variables to achieve the longest product lifetime and smallest variation in lifetime distributions.
    IEEE Transactions on Reliability 09/2015; 64(3):1-11. DOI:10.1109/TR.2015.2415892
  •
    ABSTRACT: In this paper, we propose to define the problem of predicting the remaining useful life of a component as a binary classification task. This approach is particularly useful for problems in which the evolution of the system condition is described by a combination of a large number of discrete-event diagnostic data, and for which alternative approaches are either not applicable, or are only applicable with significant limitations or with a large computational burden. The proposed approach is demonstrated with a case study of real discrete-event data for predicting the occurrence of railway operation disruptions. For the classification task, Extreme Learning Machine (ELM) has been chosen because of its good generalization ability, computational efficiency, and low requirements on parameter tuning.
    IEEE Transactions on Reliability 09/2015; 64(3):1049-1056. DOI:10.1109/TR.2015.2440531
  •
    ABSTRACT: Security protocols are notoriously difficult to get right, and most go through several iterations before their hidden security vulnerabilities, which are hard to detect, are triggered. To help protocol designers and developers efficiently find non-trivial bugs, we introduce SYMCONF, a practical conformance testing tool that generates high-coverage test input packets using a conformance test suite and symbolic execution. Our approach can be viewed as the combination of conformance testing and symbolic execution: 1) it first selects symbolic inputs from an existing conformance test suite; 2) it then symbolically executes a network protocol implementation with the symbolic inputs; and 3) it finally generates high-coverage test input packets using a conformance test suite. We demonstrate the feasibility of this methodology by applying SYMCONF to the generation of a stream of high quality test input packets for multiple implementations of two network protocols, the Kerberos Telnet protocol and Dynamic Host Configuration Protocol (DHCP), and discovering non-trivial security bugs in the protocols.
    IEEE Transactions on Reliability 09/2015; 64(3):1024-1037. DOI:10.1109/TR.2015.2443392
  •
    ABSTRACT: Intelligent systems for online fault diagnosis can increase the reliability, safety, and availability of large and complex systems. One such intelligent system, the Dynamic Uncertain Causality Graph (DUCG), is a recently presented approach that graphically and compactly represents complex uncertain causalities and performs probabilistic reasoning, which can be applied to fault diagnosis and other tasks. Previously, however, only static evidence was utilized. In this paper, the methodology for DUCG to perform fault diagnosis with dynamic evidence is presented. Causality propagation among sequential time slices is avoided. In the case of process systems, the basic failure events are classified as initiating or non-initiating events; this classification can greatly increase the efficiency of fault diagnosis. Failure rates of initiating events can be used in place of failure probabilities without affecting diagnostic results. Examples are provided to illustrate the methodology.
    IEEE Transactions on Reliability 09/2015; 64(3):1-18. DOI:10.1109/TR.2015.2416332
  •
    ABSTRACT: Software testing began as an empirical activity, and remains part of engineering practice without a widely accepted theoretical foundation. The overwhelming majority of test methods are designed to find software errors, termed faults, in program source code, but not to assess software operational quality. To go beyond fault-seeking requires a theory that relates static program properties to executions. In the 1970s and 1980s, Gerhart, Howden, and others developed a sound functional theory of program testing. Then Duran and others used this theory to precisely define the notions of random testing and operational reliability. In the Gerhart-Howden-Duran theory, a program's behavior is a pure input-output mapping. This paper extends the theory to include persistent state, by adding a state space to the input space, and a state mapping to a program's output mapping. The extended theory is significantly different because test states, unlike inputs, cannot be chosen arbitrarily. The theory is used to analyze state-based testing methods, to examine the practicality of reliability assessment, and to suggest experiments that would increase understanding of the statistical properties of software.
    IEEE Transactions on Reliability 09/2015; 64(3):1098-1115. DOI:10.1109/TR.2015.2436443
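    A standard result related to the random-testing theory surveyed above connects failure-free tests to reliability assessment: if N tests drawn from the operational profile all pass, an upper one-sided confidence bound on the per-demand failure probability follows from solving (1 - p)^N = 1 - C. A small sketch:

    ```python
    def failure_prob_bound(n_tests, confidence=0.95):
        """Upper confidence bound on the per-demand failure probability
        after n_tests operationally random tests reveal no failure:
        solve (1 - p) ** n_tests = 1 - confidence for p."""
        return 1.0 - (1.0 - confidence) ** (1.0 / n_tests)

    for n in (100, 1000, 10000):
        print(f"N = {n:5d}: p <= {failure_prob_bound(n):.6f} at 95% confidence")
    ```

    Note the slow payoff: each tenfold increase in passing tests tightens the bound by only about a factor of ten, which is one reason the abstract questions the practicality of reliability assessment by testing alone.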
  •
    ABSTRACT: As our aging population grows significantly, personal health monitoring is becoming an emerging service, and can be accomplished by large-scale, low-power sensor networks, such as Zigbee networks. However, collected medical data may reveal patient privacy, and should be well protected. We propose a Hierarchical and Dynamic Elliptic Curve Cryptosystem based self-certified public key scheme (HiDE) for medical data protection. To serve a large number of sensors, HiDE provides a hierarchical cluster-based framework consisting of a Backbone Cluster and several Area Clusters. In an Area Cluster, a Secure Access Point (SAP) collects medical data from Secure Sensors (SSs) in the sensor network, and transmits the aggregated data to a Root SAP located in the Backbone Cluster. Therefore, the Root SAP can serve a considerable number of SSs without establishing separate secure sessions with each SS individually. To provide dynamic secure sessions for mobile SSs connecting to a SAP, HiDE introduces the Elliptic Curve Cryptosystem based Self-certified Public key scheme (ESP) for establishing secure sessions between each pair of Cluster Head (CH) and Cluster Member (CM). In ESP, the CH can issue a public key to a CM and compute a Shared Session Key (SSK) with that CM without knowing the CM's secret key. This concept satisfies Zero Knowledge Proof, so CHs can dynamically build secure sessions with CMs without managing the CMs' secret keys. Our experiments with realistic implementations and network simulations demonstrate that ESP requires less computation and network overhead than the Rivest-Shamir-Adleman (RSA)-based public key scheme. In addition, security analysis shows that keys in ESP are well protected. Thus, HiDE can protect the confidentiality of sensitive medical data with low computation overhead, and maintain appropriate network performance for wireless sensor networks.
    IEEE Transactions on Reliability 09/2015; 64(3):1-8. DOI:10.1109/TR.2015.2429271
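    ESP's key agreement is ECC-based and self-certified; as a simplified stand-in, the sketch below uses classic finite-field Diffie-Hellman (with toy, insecure parameters) only to show how a Cluster Head and a Cluster Member can each derive the same shared session key without ever exchanging secret keys.

    ```python
    import hashlib
    import secrets

    # Toy group parameters: a Mersenne prime modulus and a small generator.
    # NOT secure -- real deployments use standardized groups or ECC.
    P = 2 ** 127 - 1
    G = 3

    ch_secret = secrets.randbelow(P - 2) + 1    # Cluster Head private value
    cm_secret = secrets.randbelow(P - 2) + 1    # Cluster Member private value

    ch_public = pow(G, ch_secret, P)            # values exchanged in the clear
    cm_public = pow(G, cm_secret, P)

    # Each side combines its own secret with the other's public value.
    ssk_ch = pow(cm_public, ch_secret, P)
    ssk_cm = pow(ch_public, cm_secret, P)
    assert ssk_ch == ssk_cm                     # same Shared Session Key (SSK)

    session_key = hashlib.sha256(ssk_ch.to_bytes(16, "big")).hexdigest()
    print("derived session key:", session_key[:16], "...")
    ```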
  •
    ABSTRACT: In today's Integrated Circuit industry, a foundry, an Intellectual Property provider, a design house, or a Computer Aided Design vendor may install a hardware Trojan on a chip which executes a malicious program such as one providing an information leaking back door. In this paper, we propose a fingerprint-based method to detect any malicious program in hardware. We propose a tamper-evident architecture (TEA) which samples runtime signals in a hardware system during the performance of a computation, and generates a cryptographic hash-based fingerprint that uniquely identifies a sequence of sampled signals. A hardware Trojan cannot tamper with any sampled signal without leaving tamper evidence such as a missing or incorrect fingerprint. We further verify fingerprints off-chip such that a hardware Trojan cannot tamper with the verification process. As a case study, we detect hardware-based code injection attacks in a SPARC V8 architecture LEON2 processor. Based on a lightweight block cipher called PRESENT, a TEA requires only a 4.5% area increase, while avoiding being detected by the TEA increases the area of a code injection hardware Trojan with a 1 KB ROM from 2.5% to 36.1% of a LEON2 processor. Such a low cost further enables more advanced tamper diagnosis techniques based on a concurrent generation of multiple fingerprints.
    IEEE Transactions on Reliability 09/2015; 64(3):1-10. DOI:10.1109/TR.2015.2430471
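    The TEA builds its fingerprint from the PRESENT block cipher; purely to illustrate the tamper-evidence idea, the sketch below chains a hash over sampled runtime signals (SHA-256 as a stand-in), so altering, reordering, or dropping any sample changes the final fingerprint. The sampled `pc=` values and the key are invented.

    ```python
    import hashlib

    def fingerprint(samples, key=b"device-unique-key"):
        """Chain a hash over sampled runtime signals: the final digest
        identifies the whole sequence, so any tampering with a sample
        leaves evidence in the form of a mismatched fingerprint."""
        h = key
        for s in samples:
            h = hashlib.sha256(h + s).digest()
        return h.hex()

    trace = [b"pc=0x4000", b"pc=0x4004", b"pc=0x4008"]     # sampled signals
    good = fingerprint(trace)

    tampered = [b"pc=0x4000", b"pc=0xdead", b"pc=0x4008"]  # injected code path
    print("fingerprints match:", good == fingerprint(tampered))
    ```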
  •
    ABSTRACT: Presents an editorial on the activities and major areas of development in the reliability industry and in this publication from 1999 through 2015.
    IEEE Transactions on Reliability 09/2015; 64(3):838-839. DOI:10.1109/TR.2015.2454372
  •
    ABSTRACT: A High-Temperature Low-Sag Aluminum Conductor Composite Core (ACCC) bare overhead transmission line conductor utilizing a load-bearing unidirectional carbon and glass fiber reinforced epoxy composite rod was evaluated for potential galvanic corrosion problems. A series of corrosion tests was performed in 0.5 M NaCl aqueous solution at room temperature, and at 85 °C. The corrosion performance of the ACCC conductor was compared to a conventional Aluminum Conductor Steel Reinforced (ACSR) conductor. The bi-metallic ACSR design suffers inherently from galvanic corrosion, while the ACCC design does not develop galvanic corrosion unless its fiberglass composite galvanic corrosion barrier is compromised. Even with a severely compromised barrier, the measured galvanic corrosion rate of the aluminum in the ACCC conductor was much lower than that measured in the ACSR conductor.
    IEEE Transactions on Reliability 09/2015; 64(3):1-7. DOI:10.1109/TR.2015.2427894
  •
    ABSTRACT: In the application of cloud storage, a user no longer possesses his files in a local depository, and is thus concerned about the security of the stored files. Data confidentiality and data robustness are the main security issues. For data confidentiality, the user can encrypt files before storing them in the cloud. For data robustness, there are two concerns: service failure, and service corruption. This paper addresses data robustness in cloud storage services. Lin and Tzeng recently proposed a secure erasure code-based storage system with multiple key servers. Their system supports a repair mechanism, where a new storage server can compute a new ciphertext from the ciphertexts obtained from the remaining storage servers. Their system considers data confidentiality in the cloud, and data robustness against storage server failure. In this paper, we propose an integrity check scheme for their system to enhance data robustness against storage server corruption, which returns tampered ciphertexts. With our integrity check scheme, their storage system can deal with not only storage server failure, but also storage server corruption. The challenging part of our work is to construct homomorphic integrity tags: new integrity tags can be computed from old integrity tags by storage servers, without involvement of the user's secret key or backup servers. We prove the security of our integrity check scheme formally, and establish the parameters for achieving an overwhelming probability of successful data retrieval.
    IEEE Transactions on Reliability 09/2015; 64(3):1-12. DOI:10.1109/TR.2015.2423192
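    The homomorphic-tag property can be shown with a deliberately simplified linear tag, tag(m) = α·m mod p: a server combining ciphertext blocks with public coefficients can combine the old tags the same way and obtain the correct tag for the repaired block without the secret α. A real scheme needs more than this for unforgeability; all numbers below are illustrative.

    ```python
    P = 2 ** 61 - 1        # prime modulus (toy parameter)
    ALPHA = 123456789      # the user's secret tag key (assumed value)

    def tag(m):
        """Deliberately simplified linear integrity tag: alpha * m mod p."""
        return (ALPHA * m) % P

    # During repair, a new storage server forms a linear combination of
    # surviving ciphertext blocks with public coding coefficients...
    m1, m2 = 31415926, 27182818
    c1, c2 = 3, 5
    new_block = (c1 * m1 + c2 * m2) % P

    # ...and the matching tag follows from the old tags alone, without
    # the user's secret key or a backup server:
    new_tag = (c1 * tag(m1) + c2 * tag(m2)) % P
    print("repaired block's tag verifies:", new_tag == tag(new_block))
    ```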
  •
    ABSTRACT: The accelerated degradation test (ADT) has become critical for product reliability assessment. In performing the ADT for newly developed products, a constant-stress ADT may be impractical or sometimes impossible where the available size of testing units and the testing duration are heavily bounded to meet the short development period of the products. As an alternative, a step-stress accelerated degradation test (SSADT) can be a useful tool for satisfying the test limitation, and for making up for uncertainty in selecting appropriate levels of stress. Occasionally, the elevated stress under SSADT not only accelerates the performance degradation of products, but it may also expedite traumatic failures. This paper proposes a modeling approach to simultaneously analyze linear degradation data and traumatic failures with competing risks in an SSADT experiment. Under the modeling approach, a cumulative exposure model is considered. The failure rate corresponding to each failure mode is described as a function of the degradation level at the moment of failure. No parametric assumptions are made regarding the failure-time distribution to extend the proposed method to more general cases. We derive maximum likelihood estimates of the model parameters, then estimate failure rates and product reliability based on the degradation level to failure. Asymptotic properties of the maximum likelihood estimates are also discussed. The proposed model is applied to accelerated degradation data from plastic substrate active matrix light-emitting diodes (AMOLEDs), along with sensitivity analysis.
    IEEE Transactions on Reliability 09/2015; 64(3):960-971. DOI:10.1109/TR.2015.2430451
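    The cumulative exposure model's bookkeeping can be sketched simply: time spent at each stress step is scaled by that step's acceleration factor into equivalent use-level exposure, then summed across steps. The step plan and Arrhenius-style factors below are assumed values, not the paper's.

    ```python
    def equivalent_exposure(plan, factors):
        """Cumulative-exposure bookkeeping for a step-stress test: hours
        at each stress level scale by that level's acceleration factor
        into equivalent hours at the use condition, summed over steps."""
        return sum(factors[level] * hours for level, hours in plan)

    # Assumed acceleration factors for three stress levels (relative to use).
    factors = {1: 10.0, 2: 25.0, 3: 60.0}
    plan = [(1, 200.0), (2, 150.0), (3, 100.0)]   # step-stress plan (hours)

    total = equivalent_exposure(plan, factors)
    print(f"equivalent use-level exposure: {total:.0f} hours")
    ```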