IEEE Transactions on Reliability (IEEE T RELIAB)

Publisher: Institute of Electrical and Electronics Engineers. Professional Technical Group on Reliability; IEEE Reliability Group; IEEE Reliability Society; American Society for Quality Control. Electronics Division, Institute of Electrical and Electronics Engineers

Description

The principles and practices of reliability, maintainability, and product liability pertaining to electrical and electronic equipment.

  • Impact factor
    2.29
  • 5-year impact
    2.07
  • Cited half-life
    0.00
  • Immediacy index
    0.21
  • Eigenfactor
    0.00
  • Article influence
    0.76
  • Website
    IEEE Transactions on Reliability website
  • Other titles
    IEEE transactions on reliability, Institute of Electrical and Electronics Engineers transactions on reliability, Transactions on reliability, Reliability
  • ISSN
    0018-9529
  • OCLC
    1752560
  • Material type
    Periodical, Internet resource
  • Document type
    Journal / Magazine / Newspaper, Internet Resource

Publisher details

Institute of Electrical and Electronics Engineers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's pre-print on Author's personal website, employer's website, or publicly accessible server
    • Author's post-print on Author's server or Institutional server
    • Author's pre-print must be removed upon publication of the final version and replaced with either a full citation to the IEEE work with a Digital Object Identifier, a link to the article abstract in IEEE Xplore, or the Author's post-print
    • Author's pre-print must be accompanied by the set phrase, once submitted to IEEE for publication ("This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible")
    • Author's pre-print must be accompanied by the set phrase, when accepted by IEEE for publication ("(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.")
    • IEEE must be informed as to the electronic address of the pre-print
    • If funding rules apply, authors may post the Author's post-print version in the funder's designated repository
    • Author's post-print - Publisher copyright and source must be acknowledged with citation (see above set statement)
    • Author's post-print - Must link to publisher version with DOI
    • Publisher's version/PDF cannot be used
    • Publisher copyright and source must be acknowledged
  • Classification
    green

Publications in this journal

  • ABSTRACT: Competing failure is an important reliability topic. Thus, the study of the statistical inference of accelerated life testing (ALT) with competing failures is of great significance. In contrast to previous related studies, which assumed that all competing failure modes were mutually statistically independent, we study a statistical inference method for ALT that accounts for the statistical dependence of the competing failure modes based on copula theory. With the copula function, we construct the statistically dependent relationship between the marginal distributions of the competing failure modes and their joint distribution, and derive the maximum likelihood estimation (MLE) model for the parameter estimates. We also present a simple engineering-based multi-dimensional copula construction method applied in the statistical inference for ALT with statistically dependent competing failure modes. The results and analysis of the case studies indicate that the statistical inference models and the multi-dimensional copula construction method derived in this article are not only correct and feasible but also offer good accuracy and universality. We have provided an effective, universally applicable method for the statistical inference of ALT in situations involving competing failures, statistically independent or dependent, which is of great significance for evaluating a product's lifetime in ALT.
    IEEE Transactions on Reliability 09/2014; 63(3):764-780.
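    The abstract does not give the copula family or the marginal models; purely as an illustration of the idea, the sketch below ties two hypothetical Weibull failure modes together with a Clayton survival copula and compares the resulting competing-risk (series) reliability with the usual independence assumption.

    ```python
    import numpy as np

    def weibull_rel(t, beta, eta):
        """Marginal reliability of one failure mode: Weibull(shape=beta, scale=eta)."""
        return np.exp(-(t / eta) ** beta)

    def clayton(u, v, theta):
        """Clayton copula C(u, v); theta > 0 models positive dependence."""
        return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

    t = np.array([200.0, 500.0, 1000.0, 1500.0, 2000.0])
    r1 = weibull_rel(t, beta=1.8, eta=1500.0)   # failure mode 1 (hypothetical parameters)
    r2 = weibull_rel(t, beta=1.2, eta=2200.0)   # failure mode 2 (hypothetical parameters)

    # Competing risks: the unit survives to t only if neither mode has fired by t.
    r_indep = r1 * r2                    # classical assumption of independent modes
    r_dep = clayton(r1, r2, theta=2.0)   # survival copula linking the two margins

    for ti, ri, rd in zip(t, r_indep, r_dep):
        print(f"t={ti:6.0f}h  R_independent={ri:.4f}  R_copula={rd:.4f}")
    ```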
  • ABSTRACT: In this paper, we focus on the reliability and availability analysis of Web service (WS) compositions, orchestrated via the Business Process Execution Language (BPEL). Starting from the failure profiles of the services being composed, which take into account multiple possible failure modes, latent errors, and propagation effects, and from a BPEL process description, we provide an analytical technique for evaluating the composite process' reliability-availability metrics. This technique also takes into account BPEL's advanced composition features, including fault, compensation, termination, and event handling. The method is a design-time aid that can help users and third party providers reason, in the early stages of development, and in particular during WS selection, about a process' reliability and availability. A non-trivial case study in the area of travel management is used to illustrate the applicability and effectiveness of the proposed approach.
    IEEE Transactions on Reliability 09/2014; 63(3):689-705.
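    The paper's analytical technique also covers BPEL fault, compensation, termination, and event handling; the fragment below only sketches the simpler structure-based idea of propagating per-service reliabilities through sequence, exclusive-choice, and parallel constructs. Service names and reliability values are hypothetical.

    ```python
    def seq(*r):
        """Sequence: every invoked service must succeed."""
        out = 1.0
        for x in r:
            out *= x
        return out

    def choice(branches):
        """Exclusive choice: list of (branch probability, branch reliability) pairs."""
        return sum(p * r for p, r in branches)

    def flow(*r):
        """Parallel flow: all concurrent branches must succeed."""
        return seq(*r)

    # Hypothetical travel-management process: search, then book flight and hotel
    # in parallel, then pay by card (80% of runs) or by invoice (20% of runs).
    r_process = seq(
        0.999,                                   # search service
        flow(0.995, 0.990),                      # flight + hotel booking
        choice([(0.8, 0.998), (0.2, 0.996)]),    # payment alternatives
    )
    print(f"composite reliability ~ {r_process:.4f}")
    ```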
  • ABSTRACT: Software is currently a key part of many safety-critical and life-critical application systems. Users expect software that is easy and intuitive to use, but the biggest challenge for software engineers is how to develop highly reliable software in a timely manner. To assure quality and to assess the reliability of software products, many software reliability growth models (SRGMs) have been proposed in the past three decades. The practical problem is that the SRGMs selected by companies or software practitioners sometimes disagree in their reliability predictions, while no single model can be trusted to provide consistently accurate results across various applications. Consequently, some researchers have proposed using combinational models to improve the prediction capability of software reliability. In this paper, three enhanced weighted combinations, namely weighted arithmetic, weighted geometric, and weighted harmonic combinations, are proposed. To solve the problem of determining proper weights for model combinations, we further study how to incorporate enhanced genetic algorithms (EGAs) with several efficient operators into the weight assignment. Experiments are performed based on real software failure data, and numerical results show that our proposed models are flexible enough to depict various software development environments. Finally, some management metrics are presented to both assure software quality and determine the optimal release strategy of software products under development.
    IEEE Transactions on Reliability 09/2014; 63(3):731-749.
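    The three combination forms named in the abstract are ordinary weighted means; as a minimal sketch, the snippet below applies them to hypothetical cumulative-failure predictions from three single SRGMs, with weights that the paper would instead tune using the enhanced genetic algorithm.

    ```python
    import numpy as np

    preds = np.array([118.0, 131.0, 124.0])  # predictions from three single SRGMs (hypothetical)
    w = np.array([0.5, 0.2, 0.3])            # combination weights; the paper tunes these with an EGA

    arithmetic = np.sum(w * preds)           # weighted arithmetic combination
    geometric = np.prod(preds ** w)          # weighted geometric combination
    harmonic = 1.0 / np.sum(w / preds)       # weighted harmonic combination

    print(f"weighted arithmetic: {arithmetic:.2f}")
    print(f"weighted geometric:  {geometric:.2f}")
    print(f"weighted harmonic:   {harmonic:.2f}")
    ```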
  • IEEE Transactions on Reliability 08/2014; 99:1-15.
  • ABSTRACT: To address reliability challenges due to failures and planned outages, Internet Service Providers (ISPs) typically use two backbone routers (BRs) at each central office. Access routers (ARs) are connected to these BRs in a dual-homed configuration. To provide reliability through node and path diversity, redundant backbone routers and redundant transport equipment to interconnect them are deployed. However, deploying such redundant resources increases the overall cost of the network. Hence, to avoid such redundant resources, a fundamental redesign of the backbone network leveraging the capabilities of an agile optical transport network is highly desired. In this paper, we propose a fundamental redesign of IP backbones. Our alternative design uses only a single router at each office. To survive failures or outages of a single local BR, we leverage the agile optical transport layer to carry traffic to remote BRs. Optimal mapping of local ARs to remote BRs is determined by solving an Integer Linear Program (ILP). We describe how our proposed design can be realized using current optical transport technology. We evaluate network designs for cost and performability, the latter being a metric combining performance and availability. We show significant reduction in cost for approximately the same level of reliability as current designs.
    IEEE Transactions on Reliability 06/2014; 63(2):427-442.
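    The paper's ILP also models the optical transport layer; the toy model below, written with the PuLP solver, only shows the core assignment step of mapping each access router to one remote backbone router under a BR capacity limit while minimizing a hypothetical distance cost.

    ```python
    import pulp

    ars = ["AR1", "AR2", "AR3", "AR4"]
    brs = ["BR_A", "BR_B"]
    cost = {("AR1", "BR_A"): 2, ("AR1", "BR_B"): 5,
            ("AR2", "BR_A"): 4, ("AR2", "BR_B"): 3,
            ("AR3", "BR_A"): 6, ("AR3", "BR_B"): 2,
            ("AR4", "BR_A"): 3, ("AR4", "BR_B"): 4}   # hypothetical fiber-distance costs
    capacity = {"BR_A": 2, "BR_B": 3}                  # max ARs a remote BR can absorb

    prob = pulp.LpProblem("ar_to_remote_br", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (ars, brs), cat="Binary")

    prob += pulp.lpSum(cost[a, b] * x[a][b] for a in ars for b in brs)
    for a in ars:                      # every AR is re-homed to exactly one remote BR
        prob += pulp.lpSum(x[a][b] for b in brs) == 1
    for b in brs:                      # remote BR capacity
        prob += pulp.lpSum(x[a][b] for a in ars) <= capacity[b]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for a in ars:
        for b in brs:
            if x[a][b].value() == 1:
                print(f"{a} -> {b}")
    ```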
  • ABSTRACT: We propose goodness-of-fit tests for Birnbaum-Saunders distributions with type-II right censored data. Classical goodness-of-fit tests based on the empirical distribution, such as Anderson-Darling, Cramér-von Mises, and Kolmogorov-Smirnov, are adapted to censored data, and evaluated by means of a simulation study. The obtained results are applied to real-world censored reliability data.
    IEEE Transactions on Reliability 06/2014; 63(2):543-554.
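    As one concrete instance of this kind of adaptation, the sketch below evaluates a Kolmogorov-Smirnov-type distance using only the r smallest order statistics of a type-II right-censored sample against a hypothesized Birnbaum-Saunders distribution; the simulated critical values used in the paper are not reproduced here, and all parameter values are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import norm

    def bs_cdf(t, alpha, beta):
        """Birnbaum-Saunders CDF with shape alpha and scale beta."""
        return norm.cdf((np.sqrt(t / beta) - np.sqrt(beta / t)) / alpha)

    def ks_type2(sample, r, alpha, beta):
        """KS-type statistic from the r smallest observations of a sample of size n."""
        n = len(sample)
        x = np.sort(sample)[:r]                 # observed (uncensored) order statistics
        F0 = bs_cdf(x, alpha, beta)
        i = np.arange(1, r + 1)
        d_plus = np.max(i / n - F0)
        d_minus = np.max(F0 - (i - 1) / n)
        return max(d_plus, d_minus)

    rng = np.random.default_rng(1)
    # Hypothetical lifetimes: a BS(alpha, beta) variate is beta*(a/2 + sqrt((a/2)**2 + 1))**2
    # with a = alpha * Z, Z standard normal.
    a = 0.5 * rng.standard_normal(40)
    lifetimes = 1000.0 * (a / 2 + np.sqrt((a / 2) ** 2 + 1)) ** 2
    print("D_r =", round(ks_type2(lifetimes, r=30, alpha=0.5, beta=1000.0), 4))
    ```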
  • ABSTRACT: We describe a set of design methodologies and experiments related to enabling hardware systems to utilize on-the-fly configuration of reconfigurable logic to recover system operation from unexpected loss of system function. The methods we explore include programming with a locally stored configuration bitstream as well as with a configuration bitstream transmitted from a remote site. We examine specific ways of utilizing reconfigurable logic to regenerate system function, as well as the effectiveness of this approach as a function of the type of attack and various architectural attributes of the system. Based on this analysis, we propose architectural features of a System-on-Chip (SoC) that can minimize performance degradation and maximize the likelihood of seamless system operation despite the function replacement. This approach is highly feasible in that the replacement does not require special management of system software or of other normal system hardware functions.
    IEEE Transactions on Reliability 06/2014; 63(2):661-675.
  • ABSTRACT: This paper deals with the design of a span-restorable (SR) elastic optical network under different spectrum conversion capabilities, including 1) no spectrum conversion, 2) partial spectrum conversion, and 3) full spectrum conversion. We develop Integer Linear Programming (ILP) models to minimize both the required spare capacity and the maximum number of link frequency slots used for each of the three spectrum conversion cases. We also consider using the Bandwidth Squeezed Restoration (BSR) technique to obtain the maximal restoration levels for the affected service flows, subject to the limited frequency slot capacity on each fiber link. Our studies show that the spectrum conversion capability significantly improves spare capacity efficiency for an elastic optical network.
    IEEE Transactions on Reliability 06/2014; 63(2):401-411.
  • ABSTRACT: In semiconductor manufacturing, it is necessary to guarantee the reliability of the produced devices. Latent defects have to be screened out by means of burn-in (that is, stressing the devices under accelerated life conditions) before the items are delivered to the customers. In a burn-in study, a sample of the stressed devices is examined for burn-in-relevant failures with the aim of proving a target failure probability level. In general, zero failures are required; if burn-in-related failures occur, countermeasures are implemented in the production process, and the burn-in study actually has to be restarted. Countermeasure effectiveness is assessed by experts. In this paper, we propose a statistical model for assessing the devices' failure probability level, taking into account the reduced risk of early failures after the implementation of the countermeasures. Based on that, the target ppm level can be proven when extending the running burn-in study by a reduced number of additional inspections. Therefore, a restart of the burn-in study is no longer required. A Generalized Binomial model is applied to handle countermeasures with different degrees of effectiveness. The corresponding probabilities are efficiently computed, exploiting a sequential convolution algorithm, which also works for a larger number of possible failures. Furthermore, we discuss the modifications needed in case of uncertain effectiveness values, which are modeled by means of Beta expert distributions. For the more mathematically inclined reader, some details on the model's decision-theoretical background are provided. Finally, the proposed model is applied to reduce the burn-in time, and to plan the additional sample size needed to continue burn-in studies even when failures occur.
    IEEE Transactions on Reliability 06/2014; 63(2):583-592.
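    At its core, a Generalized Binomial (Poisson-binomial) model describes the number of failures among independent inspections with unequal failure probabilities; the routine below builds that distribution by sequential convolution, one inspection at a time, using purely hypothetical per-inspection probabilities meant to reflect countermeasures of different effectiveness.

    ```python
    import numpy as np

    def generalized_binomial_pmf(probs):
        """Distribution of the number of failures among independent inspections
        with unequal failure probabilities, built by sequential convolution."""
        pmf = np.array([1.0])                     # zero inspections: surely zero failures
        for p in probs:
            new = np.zeros(len(pmf) + 1)
            new[:-1] += pmf * (1.0 - p)           # this inspection passes
            new[1:] += pmf * p                    # this inspection fails
            pmf = new
        return pmf

    # Hypothetical residual failure probabilities after countermeasures of
    # different effectiveness (a more effective countermeasure -> smaller p).
    probs = [0.002] * 200 + [0.0005] * 300
    pmf = generalized_binomial_pmf(probs)
    print("P(zero failures)     =", round(pmf[0], 4))
    print("P(at most 1 failure) =", round(pmf[0] + pmf[1], 4))
    ```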
  • ABSTRACT: This paper presents several models to compare centralized and distributed automated separation assurance concepts in aviation. In a centralized system, safety-related functions are implemented by common equipment on the ground. In a distributed system, safety-related functions are implemented by equipment on each aircraft. Failures of the safety-related functions can increase the risk of near mid-air collisions. Intuitively, failures on the ground are worse than failures in the air because the ground failures simultaneously affect multiple aircraft. This paper evaluates the degree to which this belief is true. Using region-wide models to account for dependencies between aircraft pairs, we derive the region-wide expectation and variance of the number of separation losses for both centralized and distributed concepts. This derivation is done first for a basic scenario involving a single component and function. We show that the variance of the number of separation losses is always higher for the centralized system, holding the expectations equal. However, numerical examples show that the difference is negligible when the events of interest are rare. Results are extended to a hybrid centralized-distributed scenario involving multiple components and functions on the ground and in the air. In this case, the variance of the centralized system may actually be less than that of the distributed system. The overall implication is that the common-cause failure of the ground function does not seriously weaken the overall case for using a centralized concept versus a distributed concept.
    IEEE Transactions on Reliability 03/2014; 63(1):259-269.
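    The region-wide models in the paper are analytical; the Monte Carlo sketch below only reproduces the qualitative claim with a deliberately crude model in which a ground (centralized) failure affects every aircraft pair at once while airborne (distributed) failures hit pairs independently, so the expected number of separation losses matches but the centralized variance is larger. All probabilities are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_pairs = 200        # aircraft pairs in the region (hypothetical)
    p_fail = 1e-3        # probability the separation function is failed
    q_conflict = 0.05    # probability a pair encounters a conflict needing resolution
    trials = 200_000

    # Centralized: one ground failure indicator shared by every pair.
    ground_failed = rng.random(trials) < p_fail
    conflicts = rng.binomial(n_pairs, q_conflict, size=trials)
    losses_central = ground_failed * conflicts

    # Distributed: each pair depends on its own airborne function.
    losses_distributed = rng.binomial(n_pairs, p_fail * q_conflict, size=trials)

    for name, x in [("centralized", losses_central), ("distributed", losses_distributed)]:
        print(f"{name:12s}  mean={x.mean():.4f}  variance={x.var():.4f}")
    ```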
  • ABSTRACT: A three-parameter discrete distribution is introduced based on a recent modification of the continuous Weibull distribution. It is one of only three discrete distributions allowing for bathtub-shaped hazard rate functions. We study some of its mathematical properties, discuss estimation by the method of maximum likelihood, and describe applications to four real data sets. The new distribution is shown to outperform at least three other models, including those allowing for bathtub-shaped hazard rate functions.
    IEEE Transactions on Reliability 03/2014; 63(1):68-80.
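    The paper's three-parameter distribution is not reproduced here; for a feel of a discrete hazard rate, the snippet below computes h(k) = P(X = k) / P(X >= k) for a classical two-parameter discrete Weibull with survival function P(X >= k) = q^(k^beta), whose hazard is monotone, a limitation that motivates bathtub-capable extensions such as the one studied in this paper.

    ```python
    import numpy as np

    def discrete_weibull_hazard(k, q, beta):
        """Hazard h(k) = P(X = k) / P(X >= k) for the survival form P(X >= k) = q**(k**beta)."""
        k = np.asarray(k, dtype=float)
        return 1.0 - q ** ((k + 1.0) ** beta - k ** beta)

    k = np.arange(0, 10)
    for beta in (0.5, 1.0, 2.0):          # decreasing, constant, increasing hazard
        h = discrete_weibull_hazard(k, q=0.9, beta=beta)
        print(f"beta={beta}: " + " ".join(f"{x:.3f}" for x in h))
    ```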
  • ABSTRACT: One of the most important aspects of software development and project management is predicting and assessing the quality and reliability of the developed products. Project data are usually collected and analyzed systematically during software development. Practically, it would be helpful if developers could identify the most error-prone modules early so that they can optimize testing-resource allocation and increase fault detection effectiveness accordingly. In the past, many research studies revealed the applicability of the Pareto principle to software systems, and some of them reported that the Pareto distribution (PD) model can be used to predict the fault distribution of software. In this paper, a special form of the Generalized PD model, named the Bounded Generalized Pareto distribution (BGPD) model, is further proposed to investigate the fault distributions of Open Source Software (OSS). The BGPD model eliminates the issue that occurs in the classical PD model. Three methods of parameter estimation are presented, and related experiments are performed on real OSS failure data. Experimental results show that the BGPD model fits the actual OSS failure data well. Finally, the possibility of using limited early fault data to predict the later software fault distribution is also studied. Numerical results indicate that the BGPD model can be trusted to consistently produce accurate estimates of fault predictions during the early stages of development. The findings can provide an effective foundation for managing the necessary activities of software development and testing.
    IEEE Transactions on Reliability 03/2014; 63(1):309-319.
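    The Bounded Generalized Pareto variant is specific to the paper; the fragment below only illustrates the underlying Pareto-principle check on hypothetical per-module fault counts, together with an ordinary (unbounded) Generalized Pareto fit via scipy for comparison.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    rng = np.random.default_rng(7)
    # Hypothetical fault counts for 200 modules; a heavy tail means a few modules
    # concentrate most of the faults.
    faults = np.sort(np.ceil(rng.pareto(1.5, size=200) * 3))[::-1]

    top20 = int(0.2 * len(faults))
    share = faults[:top20].sum() / faults.sum()
    print(f"share of all faults carried by the top 20% of modules: {share:.1%}")

    # Ordinary (unbounded) Generalized Pareto fit, for comparison with the BGPD.
    shape, loc, scale = genpareto.fit(faults, floc=0)
    print(f"GPD fit: shape={shape:.3f}, scale={scale:.3f}")
    ```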