IEEE Transactions on Reliability (IEEE T RELIAB)

Publisher: Institute of Electrical and Electronics Engineers. Professional Technical Group on Reliability; IEEE Reliability Group; IEEE Reliability Society; American Society for Quality Control. Electronics Division, Institute of Electrical and Electronics Engineers

Description

The principles and practices of reliability, maintainability, and product liability pertaining to electrical and electronic equipment.

  • Impact factor
    2.29
  • 5-year impact
    2.07
  • Cited half-life
    0.00
  • Immediacy index
    0.21
  • Eigenfactor
    0.00
  • Article influence
    0.76
  • Website
    IEEE Transactions on Reliability website
  • Other titles
    IEEE transactions on reliability, Institute of Electrical and Electronics Engineers transactions on reliability, Transactions on reliability, Reliability
  • ISSN
    0018-9529
  • OCLC
    1752560
  • Material type
    Periodical, Internet resource
  • Document type
    Journal / Magazine / Newspaper, Internet Resource

Publisher details

Institute of Electrical and Electronics Engineers

  • Pre-print
    • Author can archive a pre-print version
  • Post-print
    • Author can archive a post-print version
  • Conditions
    • Author's own and employer's publicly accessible webpages
    • Preprint - Must be removed upon publication of final version and replaced with either full citation to IEEE work with a Digital Object Identifier or link to article abstract in IEEE Xplore or Authors post-print
    • Preprint - Set-phrase must be added once submitted to IEEE for publication ("This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible")
    • Preprint - Set phrase must be added when accepted by IEEE for publication ("(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.")
    • Preprint - IEEE must be informed as to the electronic address of the pre-print
    • Postprint - Publisher copyright and source must be acknowledged (see above set statement)
    • Publisher's version/PDF cannot be used
    • Publisher copyright and source must be acknowledged
  • Classification
    green

Publications in this journal

  • ABSTRACT: In semiconductor manufacturing, it is necessary to guarantee the reliability of the produced devices. Latent defects have to be screened out by means of burn-in (that is, stressing the devices under accelerated life conditions) before the items are delivered to the customers. In a burn-in study, a sample of the stressed devices is inspected for burn-in-relevant failures with the aim of proving a target failure probability level. In general, zero failures are required; if burn-in related failures occur, countermeasures are implemented in the production process, and the burn-in study has to be restarted. Countermeasure effectiveness is assessed by experts. In this paper, we propose a statistical model for assessing the devices' failure probability level, taking account of the reduced risk of early failures after the implementation of the countermeasures. Based on that, the target ppm-level can be proven by extending the running burn-in study with a reduced number of additional inspections, so a restart of the burn-in study is no longer required. A Generalized Binomial model is applied to handle countermeasures with different amounts of effectiveness. The corresponding probabilities are efficiently computed by exploiting a sequential convolution algorithm, which also works for a larger number of possible failures. Furthermore, we discuss the modifications needed in case of uncertain effectiveness values, which are modeled by means of Beta expert distributions. For the more mathematically inclined reader, some details on the model's decision-theoretical background are provided. Finally, the proposed model is applied to reduce the burn-in time, and to plan the additional sample size needed to continue the burn-in studies in the case of failure occurrences.
    IEEE Transactions on Reliability 06/2014; 63(2):583-592.
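The sequential convolution mentioned in the abstract can be illustrated with the standard recursion for a generalized (Poisson-) binomial distribution, where each inspected item fails with its own probability. This is a minimal sketch; the function name and probability values are illustrative, not taken from the paper:

```python
def failure_count_distribution(probs):
    """Distribution of the total number of failures when item i fails
    independently with probability probs[i] (generalized binomial),
    built by sequentially convolving one item at a time."""
    dist = [1.0]  # P(0 failures) before any item is added
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1.0 - p)   # item survives
            new[k + 1] += q * p       # item fails
        dist = new
    return dist

dist = failure_count_distribution([0.1, 0.2, 0.05])
# dist[0] = P(zero failures) = 0.9 * 0.8 * 0.95
```

Each pass convolves one more Bernoulli variable into the running distribution, so the cost is linear in the number of items times the number of tracked failure counts.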
  • ABSTRACT: Prognostics focuses on predicting the future performance of a system, specifically the time at which the system no longer performs its desired functionality, its time to failure. As an important aspect of prognostics, remaining useful life (RUL) prediction estimates the remaining usable life of a system, which is essential for maintenance decision making and contingency mitigation. A significant amount of research has been reported in the literature to develop prognostics models that are able to predict a system's RUL. These models can be broadly categorized into experience-based models, data-driven models, and physics-based models. However, due to system complexity, data availability, and application constraints, there is no universally accepted best model to estimate RUL. The review part of this paper specifically focused on the development of hybrid prognostics approaches, attempting to leverage the advantages of combining the prognostics models in the aforementioned different categories for RUL prediction. The hybrid approaches reported in the literature were systematically classified by the combination and interfaces of various types of prognostics models. In the case study part, a hybrid prognostics method was proposed and applied to a battery degradation case to show the potential benefit of the hybrid prognostics approach.
    IEEE Transactions on Reliability 01/2014;
  • ABSTRACT: Effective debugging is crucial to producing reliable software. Manual debugging is becoming prohibitively expensive, especially due to the growing size and complexity of programs. Given that fault localization is one of the most expensive activities in program debugging, there has been a great demand for fault localization techniques that can help guide programmers to the locations of faults. In this paper, a technique named DStar (D*) is proposed which can suggest suspicious locations for fault localization automatically without requiring any prior information on program structure or semantics. D* is evaluated across 24 programs, and is compared to 38 different fault localization techniques. Both single-fault and multi-fault programs are used. Results indicate that D* is more effective at locating faults than all the other techniques it is compared to. An empirical evaluation is also conducted to illustrate how the effectiveness of D* increases as the exponent * grows, and then levels off when the exponent * exceeds a critical value. Discussions are presented to support such observations.
    IEEE Transactions on Reliability 01/2014; 63(1):290-308.
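The published D* suspiciousness metric has a simple closed form built from per-statement coverage counts: sus = ef^* / (ep + nf). A minimal sketch (variable names are ours, not the paper's):

```python
def dstar(ef, ep, nf, star=2):
    """D* suspiciousness of a statement, sus = ef**star / (ep + nf), where
      ef: failed test runs that execute the statement
      ep: passed test runs that execute the statement
      nf: failed test runs that do NOT execute the statement."""
    denom = ep + nf
    if denom == 0:
        # Covered by every failed run and no passed run: maximally suspicious
        return float('inf')
    return ef ** star / denom

# A statement covered by 3 of 4 failing tests and 1 passing test:
print(dstar(ef=3, ep=1, nf=1))  # 3**2 / (1 + 1) = 4.5
```

Statements are then ranked by descending suspiciousness; raising the exponent `star` weights coverage by failed runs more heavily, matching the abstract's observation about the effect of the exponent.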
  • ABSTRACT: Predictive Maintenance (PrM) exploits the estimation of the equipment Residual Useful Life (RUL) to identify the optimal time for carrying out the next maintenance action. Particle Filtering (PF) is widely used as a prognostic tool in support of PrM, by reason of its capability of robustly estimating the equipment RUL without requiring strict modeling hypotheses. However, a precise PF estimate of the RUL requires tracing a large number of particles, and thus large computational times, often incompatible with the need of rapidly processing information for making maintenance decisions in due time. This work considers two different Risk Sensitive Particle Filtering (RSPF) schemes proposed in the literature, and investigates their potential for PrM. The computational burden problem of PF is addressed. The effectiveness of the two algorithms is analyzed on a case study concerning a mechanical component affected by fatigue degradation.
    IEEE Transactions on Reliability 01/2014; 63(1):134-143.
  • ABSTRACT: In the field of reliability engineering, several approaches have been developed to identify those components that are important to the operation of the larger interconnected system. We extend the concept of component importance measures to the study of industry criticality in a larger system of economically interdependent industry sectors that are perturbed when underlying infrastructures are disrupted. We provide measures of (i) those industries that are most vulnerable to disruptions and (ii) those industries that are most influential to cause interdependent disruptions. However, difficulties arise in the identification of critical industries when uncertainties exist in describing the relationships among sectors. This work adopts fuzzy measures to develop criticality indices, and we offer an approach to rank industries according to these fuzzy indices. Much like decision makers with the knowledge of the most critical components in a physical system, the identification of these critical industries provides decision makers with priorities for resources. We illustrate our approach with an interdependency model driven by US Bureau of Economic Analysis data to describe industry interconnectedness.
    IEEE Transactions on Reliability 01/2014; 63(1):42-57.
  • ABSTRACT: We consider the deployment of a sensor alongside a programme of planned maintenance interventions to enhance the reliability of two-phase systems. Such systems operate fault free until they enter a worn state which is a precursor to failure. The sensor is designed to report transitions into the worn state, but does so with error. The sensor can fail to report a transition when it occurs (false-negative), and can report one when none has taken place (false-positive). Key goals of our analyses are (i) the design of simple, cost effective schedules for the inspection, repair, and renewal of such systems, for use alongside the sensor; and (ii) the determination of the range of sensor operating characteristics for which the deployment of the sensor is cost beneficial. The latter is achieved via the computation of cost indifference curves which identify sensor operating characteristics for which we are indifferent to whether the sensor is deployed or not.
    IEEE Transactions on Reliability 01/2014; 63(1):118-133.
  • ABSTRACT: Counterfeit electronics have been reported in a wide range of products, including computers, medical equipment, automobiles, avionics, and military systems. Counterfeiting is a growing concern for original equipment manufacturers (OEMs) in the electronics industry. Even inexpensive passive components such as capacitors and resistors are frequently found to be counterfeit, and their incorporation into electronic assemblies can cause early failures with potentially serious economic and safety implications. This study examines counterfeit electrolytic capacitors that were unknowingly assembled in power supplies used in medical devices, and then failed in the field. Upon analysis, the counterfeit components were identified, and their reliability relative to genuine parts was assessed. This paper presents an offline reliability assessment methodology and a systematic counterfeit detection methodology for electrolytic capacitors, which include optical inspection, X-Ray examination, weight measurement, electrical parameter measurement over temperature, and chemical characterization of the electrolyte using Fourier Transform Infrared Spectroscopy (FTIR) to assess the failure modes, mechanisms, and reliability risks. FTIR was successfully able to detect a lower concentration of ethylene glycol in the counterfeit capacitor electrolyte. In the electrical properties measurement, the distribution of values at room temperature was broader for counterfeit parts than for the authentic parts, and some electrical parameters at the maximum and minimum rated temperatures were out of specifications. These techniques, particularly FTIR analysis of the electrolyte and electrical measurements at the lowest and highest rated temperatures, can be very effective to screen for counterfeit electrolytic capacitors.
    IEEE Transactions on Reliability 01/2014; 63(2):468-479.
  • ABSTRACT: In this paper, the degradation based reliability demonstration test (RDT) plan design problems for long life products under small-sample circumstances are studied. Fixed sample method, sequential probability ratio test (SPRT) method, and sequential Bayesian decision method are provided based on univariate degradation testing. The simulation examples show the superiority of degradation based RDT methods compared with the traditional failure based methods, and the sequential-type methods have more test power than their fixed sample counterparts. The test power can be further improved by combining the test data of a reliability indicator with the data of its marker, based on which the bivariate fixed sample method and the sequential Bayesian decision method are defined. The simulation study shows the benefit from the combination. The degradation based RDT plan optimization model, and the corresponding searching-based solution algorithm using some heuristic rules discovered in the paper, are also presented. A case study of an RDT plan design for a Rubidium Atomic Frequency Standard demonstrates the effectiveness of our methods in overcoming the difficulties of small samples in reliability demonstration of long life products.
    IEEE Transactions on Reliability 01/2014; 63(3):781-797.
  • ABSTRACT: In this paper, we focus on the reliability and availability analysis of Web service (WS) compositions, orchestrated via the Business Process Execution Language (BPEL). Starting from the failure profiles of the services being composed, which take into account multiple possible failure modes, latent errors, and propagation effects, and from a BPEL process description, we provide an analytical technique for evaluating the composite process' reliability-availability metrics. This technique also takes into account BPEL's advanced composition features, including fault, compensation, termination, and event handling. The method is a design-time aid that can help users and third party providers reason, in the early stages of development, and in particular during WS selection, about a process' reliability and availability. A non-trivial case study in the area of travel management is used to illustrate the applicability and effectiveness of the proposed approach.
    IEEE Transactions on Reliability 01/2014; 63(3):689-705.
  • ABSTRACT: Motivated by an industrial problem affecting a water utility, we develop a model for a load sharing system where an operator dispatches work load to components in a manner that manages their degradation. We assume degradation is the dominant failure type, and that the system will not be subject to sudden failure due to a shock. By deriving the time to degradation failure of the system, estimates of system probability of failure are generated, and optimal designs can be obtained to minimize the long run average cost of a future system. The model can be used to support asset maintenance and design decisions. Our model is developed under a common set of core assumptions. That is, the operator allocates work to balance the level of the degradation condition of all components to achieve system performance. A system is assumed to be replaced when the cumulative work load reaches some random threshold. We adopt cumulative work load as the measure of total usage because it represents the primary cause of component degradation. We model the cumulative work load of the system as a monotone increasing and stationary stochastic process. The cumulative work load to degradation failure of a component is assumed to be inverse Gaussian distributed. An example, informed by an industry problem, is presented to illustrate the application of the model under different operating scenarios.
    IEEE Transactions on Reliability 01/2014; 63(3):721-730.
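The inverse Gaussian assumption can be made concrete: the probability that a component fails before accumulating work load x is the inverse Gaussian CDF, computable from the standard normal CDF alone. A sketch in the usual mean/shape parameterization (the parameter values in the test are illustrative):

```python
from math import erf, exp, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def ig_cdf(x, mu, lam):
    """CDF of the inverse Gaussian distribution with mean mu and shape lam,
    read here as P(cumulative work load to degradation failure <= x)."""
    if x <= 0:
        return 0.0
    a = sqrt(lam / x)
    return (norm_cdf(a * (x / mu - 1.0))
            + exp(2.0 * lam / mu) * norm_cdf(-a * (x / mu + 1.0)))
```

With a monotone cumulative-work-load process, composing this CDF with the distribution of accumulated load at a given calendar time yields the time-to-failure probability the abstract describes.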
  • B.X. Wang, K. Yu, Z. Sheng
    ABSTRACT: Constant-stress procedures based on parametric lifetime distributions and models are often used for accelerated life testing in product reliability experiments. Maximum likelihood estimation (MLE) is the typical statistical inference method. This paper presents a new inference method, named the random variable transformation (RVT) method, for Weibull constant-stress accelerated life tests with progressively Type-II right censoring (including ordinary Type-II right censoring). A two-parameter Weibull life distribution with a scale parameter that is a log-linear function of stress is used. RVT inference for the life distribution parameters and the log-linear function coefficients is provided. Exact confidence intervals for these parameters are also explored. Numerical comparisons of RVT-based estimates to MLE show that the proposed RVT inference is promising, in particular for small sample sizes.
    IEEE Transactions on Reliability 01/2014; 63(3):807-815.
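The model class named in the abstract, a two-parameter Weibull whose scale is log-linear in stress, can be sketched as a small simulator via inverse-transform sampling. The coefficients below are illustrative, not estimates from the paper:

```python
import math
import random

def weibull_lifetime(stress, beta, a, b, rng=random):
    """Draw one lifetime from a two-parameter Weibull whose scale parameter
    eta is a log-linear function of stress: log(eta) = a + b * stress.
    beta is the (stress-independent) shape; a, b, beta are illustrative."""
    eta = math.exp(a + b * stress)
    u = rng.random()
    # Invert the Weibull CDF F(t) = 1 - exp(-(t/eta)**beta)
    return eta * (-math.log(1.0 - u)) ** (1.0 / beta)
```

A negative b makes higher stress shrink the scale, so simulated lifetimes at the accelerated condition are stochastically shorter, which is the mechanism a constant-stress ALT plan exploits.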
  • ABSTRACT: We describe a set of design methodologies and experiments related to enabling hardware systems to utilize on-the-fly configuration of reconfigurable logic to recover system operation from unexpected loss of system function. Methods we explore include programming using locally stored configuration bitstream as well as using configuration bitstream transmitted from a remote site. We examine specific ways of utilizing reconfigurable logic to regenerate system function, as well as the effectiveness of this approach as a function of the type of attack, and various architectural attributes of the system. Based on this analysis, we propose architectural features of System-on-Chip (SoC) that can minimize performance degradation and maximize the likelihood of seamless system operation despite the function replacement. This approach is highly feasible in that it does not require special management of system software or of other normal system hardware functions to accomplish the replacement.
    IEEE Transactions on Reliability 01/2014; 63(2):661-675.
  • ABSTRACT: We propose a robust optimization framework to deal with uncertain component reliabilities in redundancy allocation problems in series-parallel systems. The proposed models are based on linearized versions of standard mixed integer nonlinear programming (MINLP) formulations of these problems. We extend the linearized models to address uncertainty by assuming that the component reliabilities belong to a budgeted uncertainty set, and develop robust counterpart models. A key challenge is that, because the models involve nonlinear functions of the uncertain data, classical robust optimization approaches cannot apply directly to construct their robust optimization counterparts. We exploit problem structure to develop robust counterparts and exact solution methods, and present computational results demonstrating their performance.
    IEEE Transactions on Reliability 01/2014; 63(1):239-250.
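The objective these redundancy allocation models optimize, series-parallel system reliability, is straightforward to evaluate for a fixed allocation: subsystems in series, each a parallel bank of redundant components. A sketch with illustrative component reliabilities:

```python
def series_parallel_reliability(subsystems):
    """Reliability of a series-parallel system. subsystems[i] lists the
    reliabilities of the components placed in parallel in subsystem i;
    the subsystems themselves are connected in series."""
    r = 1.0
    for comps in subsystems:
        fail_all = 1.0
        for ri in comps:
            fail_all *= (1.0 - ri)  # subsystem fails only if every component fails
        r *= 1.0 - fail_all
    return r

# Two subsystems in series, with 2 and 3 redundant components respectively:
print(series_parallel_reliability([[0.9, 0.9], [0.8, 0.8, 0.8]]))  # 0.99 * 0.992
```

The product of per-subsystem survival probabilities is the nonlinear function of the (uncertain) component reliabilities that makes the robust counterparts in the abstract nontrivial to construct.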
  • ABSTRACT: In this paper, comparisons of allocation policies of components in two-parallel-series systems with two types of components are provided with respect to both hazard rate and reversed hazard rate orders. The main results indicate that the lifetime of these kinds of systems is stochastically maximized by unbalancing the two classes of components as much as possible. We only assume that the two distributions implied in the model have proportional hazard rates. The same type of comparison is also given for the dual model, the two-series-parallel systems, but assuming that the distributions implied in the model have proportional reversed hazard rates, and therefore the final conclusion is the opposite; that is, the reliability of the system improves as the similarity between the two parallel subsystems increases.
    IEEE Transactions on Reliability 01/2014; 63(1):223-229.
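The headline result, that unbalancing the two component types helps, can be checked numerically under one common reading of a two-parallel-series system: two series lines connected in parallel. The mission-time reliabilities below are illustrative:

```python
def parallel_series_reliability(lines):
    """Series lines connected in parallel. lines[i] lists the reliabilities
    of the components in series line i; the system works if any line works."""
    fail_all = 1.0
    for comps in lines:
        r_line = 1.0
        for ri in comps:
            r_line *= ri          # a series line needs every component working
        fail_all *= 1.0 - r_line  # the system fails only if every line fails
    return 1.0 - fail_all

# Two strong (0.9) and two weak (0.6) components split across two lines:
balanced   = parallel_series_reliability([[0.9, 0.6], [0.9, 0.6]])  # 0.7884
unbalanced = parallel_series_reliability([[0.9, 0.9], [0.6, 0.6]])  # 0.8784
```

Grouping the strong components into one line raises system reliability here, consistent with the abstract's conclusion for the proportional hazards case.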

Related Journals