IEEE Transactions on Reliability

Published by the Institute of Electrical and Electronics Engineers
Online ISSN: 0018-9529
Article
This paper consists of two parts, both of which are concerned with the ratio of two lives from life-test data. Part 1 derives the exact distribution of the quotient of two random variables, each of which follows the truncated extreme-value distribution. A graph of the cumulative distribution function is then given for several values of a parameter. Part 2 develops a control chart for sample ratios to be used in controlling the deterioration of quality characteristics having an extreme-value distribution.
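The exact distribution is derived in the paper itself; as a rough illustration of the quantity involved, the Monte Carlo sketch below approximates the CDF of the ratio for one assumed parameterization of the truncated extreme-value distribution, F(t) = 1 - exp(-(e^(lam*t) - 1)) for t >= 0. The paper's parameterization and exact result may differ, and the parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_truncated_ev(lam, size):
    """Inverse-CDF sampling from an assumed truncated extreme-value form
    F(t) = 1 - exp(-(exp(lam*t) - 1)), t >= 0 (one common parameterization;
    the paper's exact form may differ)."""
    u = rng.uniform(size=size)
    return np.log(1.0 - np.log(1.0 - u)) / lam

# Hypothetical parameters for the two test groups.
n = 200_000
t1 = sample_truncated_ev(lam=1.0, size=n)
t2 = sample_truncated_ev(lam=0.5, size=n)
w = t1 / t2                       # ratio of the two lives

# Empirical CDF of the ratio at a few points, as a stand-in for the
# exact distribution derived in the paper.
for x in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(f"P(T1/T2 <= {x:4.2f}) ~= {np.mean(w <= x):.3f}")
```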
 
Article
This thesis presents typical results of a long-term life-test on P-N-P-N power thyristors with diffused alloyed junctions. Devices are tested under thermal cycle operating conditions for 6 years (50 000 hours) real time. Degradation and catastrophic failures are discussed. Electrical phenomena are correlated with failure rates.
 
Article
Corrections to (6) in "A simple procedure for Bayesian estimation of the Weibull distribution" (vol. 54, pp. 612-616, Dec 05) are presented here.
 
Article
In the above titled paper (ibid., vol. 56, no. 1, pp. 85-93, Mar 07), there were a number of errors. A list of corrections is presented here.
 
Article
The Pioneer 10/11 program began in the summer of 1969 and had as its objective the exploration of the Asteroid Belt (that region between Mars and Jupiter) and the exploration of Jupiter. Two flight spacecraft were built: Pioneer 10 and Pioneer 11. Pioneer 10 was launched in 1972 March, passed by Jupiter in 1973 December, and continued on a trajectory that will allow it to escape the solar system. Thus Pioneer 10 became the first man-made object to pass successfully through the Asteroid Belt, to encounter Jupiter, and to escape from the solar system. Pioneer 11 was launched in 1973 April and passed by Jupiter in 1974 December. As a result of Jupiter's influence on its trajectory, it turned around at Jupiter, crossed back through the solar system, then beyond the orbits of Mars and Jupiter, and then encountered Saturn in 1979 September. This encounter was beyond the original objectives and was made possible by the long life and reliability of the spacecraft. Pioneer 11 became the first man-made object to encounter Saturn. At present, Pioneer 11 is also on a solar-escape trajectory although in the opposite direction from Pioneer 10.
 
Article
This paper presents a nonparametric regression accelerated life-stress (NPRALS) model for accelerated degradation data wherein the data consist of groups of degrading curve data. In contrast to the usual parametric modeling, a nonparametric regression model relaxes assumptions on the form of the regression functions and lets the data speak for themselves in searching for a suitable model. NPRALS assumes that various stress levels affect only the degradation rate, but not the shape of the degradation curve. An algorithm is presented for estimating the components of NPRALS. By investigating the relationship between the acceleration factors and the stress levels, the mean time to failure estimate of the product under the usual use condition is obtained. The procedure is applied to a set of data obtained from an accelerated degradation test for a light-emitting diode product. The results look very promising. The performance of NPRALS is further checked by a simulated example and found satisfactory. We anticipate that NPRALS can be applied to other applications as well.
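The abstract does not reproduce the NPRALS estimation algorithm; the sketch below only illustrates the general flavor under simplified assumptions: nonparametric smoothing of simulated degradation curves, pseudo-failure times at a threshold, and a power-law extrapolation of time-to-threshold versus stress to a hypothetical use condition. All data, stress levels, and parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def smooth(y, k=9):
    """Simple moving-average smoother (a stand-in for the nonparametric
    regression step; NPRALS itself uses a more refined estimator)."""
    pad = k // 2
    yp = np.pad(y, pad, mode="edge")
    return np.convolve(yp, np.ones(k) / k, mode="valid")

# Simulated luminosity-decay curves at hypothetical stress levels (mA).
stresses = np.array([250.0, 300.0, 350.0])
t = np.linspace(0, 1000, 201)
pseudo_fail = []
for s in stresses:
    rate = 1e-3 * (s / 100.0) ** 2           # degradation rate grows with stress
    y = np.exp(-rate * t) + rng.normal(0, 0.005, t.size)
    ys = smooth(y)
    # Pseudo failure time: first crossing of a 70%-of-initial threshold.
    idx = np.argmax(ys <= 0.7)
    pseudo_fail.append(t[idx])
pseudo_fail = np.array(pseudo_fail)

# Power-law style extrapolation of time-to-threshold vs. stress level.
b, a = np.polyfit(np.log(stresses), np.log(pseudo_fail), 1)
use_stress = 150.0                            # hypothetical use condition
print("predicted time to threshold at use stress:",
      np.exp(a + b * np.log(use_stress)))
```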
 
Article
A combination of redundant circuits and error-correcting-code circuits has been implemented on a 16-Mb memory chip. The combination of these circuits results in a synergistic fault-tolerance scheme that makes this chip immune to a high level of manufacturing and reliability defects. Experiments have been performed with highly defective chips to test the error-correction capability of this chip and to determine models for the tradeoff between manufacturing yields and reliability. Additional experiments have been done with accelerated protons to investigate the soft-error sensitivity of this chip. Results show no soft-error reliability failures, including those caused by cosmic-particle radiation. Negative binomial distributions were used to evaluate the experiments. The correlation between manufacturing faults and stress failures was modeled with a bivariate negative-binomial distribution.
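As a small illustration of the negative-binomial modeling mentioned above (not the paper's bivariate analysis), the sketch below fits a univariate negative binomial to hypothetical per-chip fault counts by the method of moments.

```python
import numpy as np

# Hypothetical fault counts per chip from a manufacturing test
# (the paper's actual data are not reproduced in the abstract).
faults = np.array([0, 0, 1, 0, 2, 0, 0, 3, 1, 0, 0, 5, 0, 1, 2, 0, 0, 0, 1, 4])

m, v = faults.mean(), faults.var(ddof=1)
# Method-of-moments fit of a (univariate) negative binomial:
# mean = r(1-p)/p, var = r(1-p)/p**2  =>  p = mean/var, r = mean*p/(1-p)
p = m / v
r = m * p / (1.0 - p)
print(f"mean={m:.2f} var={v:.2f} -> NB(r={r:.2f}, p={p:.2f})")

# P(X = 0), the fraction of defect-free chips implied by the fit.
print("P(no faults) =", p ** r)
```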
 
Article
We consider progressive-stress accelerated life testing, wherein the stress on an unfailed item increases linearly with test time and the time-transformation function is a version of the inverse power law. Our approach here is nonparametric in that we do not make many assumptions about the life distribution. We introduce a testing pattern and explain how to draw inferences from progressive-stress tests. In particular, we propose an estimator of the life distribution at the usual stress level. Reader Aids - Purpose: Widen state of the art. Special math needed for explanations: Statistics. Special math needed to use results: Same. Results useful to: Reliability analysts.
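The paper's estimator is not reproduced in the abstract; the sketch below only illustrates one common ingredient of such analyses, converting failure times under a linear stress ramp to equivalent times at the use stress via a cumulative-exposure inverse-power-law assumption and then forming the empirical CDF. The ramp rate, exponent, use stress, and failure times are hypothetical.

```python
import numpy as np

# Hypothetical progressive-stress test: stress ramps linearly, S(u) = k*u.
k, p, s_use = 2.0, 3.0, 50.0        # ramp rate, inverse-power exponent, use stress
fail_times = np.array([31.0, 35.5, 38.2, 40.1, 42.7, 45.0, 47.3])  # hours

# Cumulative-exposure conversion to equivalent time at the use stress:
#   T_eq = integral_0^t (S(u)/s_use)**p du = (k/s_use)**p * t**(p+1) / (p+1)
t_eq = (k / s_use) ** p * fail_times ** (p + 1) / (p + 1)

# Nonparametric (empirical-CDF) estimate of the life distribution at use stress.
t_sorted = np.sort(t_eq)
ecdf = np.arange(1, t_sorted.size + 1) / t_sorted.size
for ti, Fi in zip(t_sorted, ecdf):
    print(f"F_hat({ti:10.1f} h) = {Fi:.2f}")
```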
 
Article
This paper investigates the validity of the execution-time theory of software reliability. The theory is outlined, along with appropriate background, definitions, assumptions, and mathematical relationships. Both the execution time and calendar time components are described. The important assumptions are discussed. Actual data are used to test the validity of most of the assumptions. Model and actual behavior are compared. The development projects and operational computation center software from which the data have been obtained are characterized to give the reader some basis for judging the breadth of applicability of the concepts.
 
Article
A census of customer outages reported to Tandem has been taken; it shows a clear improvement in the reliability of hardware and maintenance. It indicates that software is now the major source of reported outages (62%), followed by system operations (15%). This is a dramatic shift from the statistics for 1985. Even after discounting systematic underreporting of operations and environmental outages, the conclusion is clear: hardware faults and hardware maintenance are no longer a major source of outages. As the other components of the system become increasingly reliable, software necessarily becomes the dominant cause of outages. Achieving higher availability requires improvement in software quality and software fault tolerance, simpler operations, and tolerance of operational faults.
 
Article
This paper reviews the progress in software-reliability since 1975, and discusses the best tools and practices that can be applied today. The best development practices are recommended herein, for managing the reliability of software. The development of software-reliability models and user-friendly tool kits is described. These tools allow software-reliability to be measured, tracked, and improved to meet the customer-specified reliability. The ability to measure software-reliability promotes development focus, and consequently, its improvement. The 1990s have seen much growth in the software content of products. The problems have grown as the software mushrooms both in size and complexity. Further, software is increasingly important in safety-critical applications. To counter these concerns, there has been a great collective effort to improve the reliability and quality of software, through better development focus exemplified by the Software Engineering Institute's Capability Maturity Model levels. Software-reliability-engineering has shown best practices for developing and measuring software-reliability. The 1990s have brought collective focus on managing reliability. One result of this collaboration has been the packaging of software-reliability models in user-friendly packages such as SMERFS and CASRE.
 
Conference Paper
The author recounts some of the more notable design faults that appeared during the development of the Multiflow TRACE/200 series of minisupercomputers. During development of the TRACE, the design errors found appeared to be largely random and uncorrelated. However, it appears that a fairly small set of categories can nearly span the set. These faults fall roughly into a few categories: interface misassumptions, stale-instructions-in-cache, parity-related, designer errors, CAD tools, and parts with defective designs. Specific examples are given for each category, and corrections for these design flaws are mentioned.
 
Article
Several examples of design faults that appeared during the development of the Multiflow TRACE/200 series of minisupercomputers are discussed. The design flaws generally fell into a few categories: interface mis-assumptions, instruction cache, parity-related, designer errors, CAD tools, and defective part designs (especially ground-bounce). Examples of bugs in each category are given. Random diagnostics were particularly helpful in detecting several fault classes. The authors conclude with a classification of the severity and time history of the bug categories
 
Article
This paper deals with the McDonnell Aircraft (MCAIR) approach to achieving the objectives of R&M 2000. The "Direction and Goals", Objective 1, and "Industry Commitment", Objective 6, are primarily addressed as they have the most impact on industry. Independent Research and Development (IRAD) efforts for R&M Design Approaches and Supportability show how an aircraft support system can be operated to increase availability at reduced cost and manpower requirements. R&M 2000 establishes the foundation of supportability by applying R&M principles in balance with performance, cost, and schedule in the design process. Proper consideration of the integrated logistics support elements throughout this application has resulted in the highest levels of supportability possible. High supportability is essential to achieving the customer goal of increased warfighting capability at an affordable cost.
 
Article
This paper discusses the impact of R&M 2000 on the acquisition process. The paper emphasizes the interest level during the acquisition process, and the opportunities created by the proper implementation of advanced technology to improve the reliability and maintainability of systems. The use of electronics in weapon systems may be the way in which weapon systems gain increased capability to counter the threats of the 1990s. There are many opportunities to effect needed increases in reliability and maintainability by applying newer technologies.
 
Article
This paper introduces 5 principles and 21 associated building blocks representative of successful approaches used within industry to produce reliable and maintainable systems. Several building blocks are singled out for discussion because of recent initiatives undertaken by the Air Force. These topics include modularity, technician transparency, simplification, computer-aided tools, and environmental stress screening. The 21 building blocks provide a strong foundation upon which to build weapon systems with increased combat capability. The requirement to maintain the designed-in reliability and maintainability (R&M) attributes through careful attention to quality control issues also receives emphasis. Those factors which most strongly influence variability in the desired system performance must be identified. Finally, the paper emphasizes the crucial role of education in truly integrating the design aspects of R&M into engineering disciplines.
 
Article
This paper discusses the force multiplier effects of R&M from the perspective of an operating command whose assets range from intercontinental ballistic missiles to complex command, control, and communication systems. When expressing a need for new systems, operational commands will state their requirements in operational terms, which then must be translated into contractual terms. A new office has been formed to ensure that requirements are clearly stated. While forming requirements, specific attention must be given to the requirements for ease of maintenance. The manpower and materials implications of this up-front planning are illustrated with the Minuteman III ICBM missile guidance set.
 
Article
The US Air Force R&M 2000 Action Plan and related policies and procedures have elevated reliability and maintainability to be co-equal with performance, schedule, and cost. Elevation of the stature of R&M led Lockheed-Georgia Company to implement organizational changes in the R&M and supportability areas to increase their functional stature and to permit full advantage to be taken of the US Dept. of Defense increased emphasis on R&M. These changes are directed toward "doubling the reliability" and halving the maintenance of products and, therefore, enhancing a competitive position.
 
Article
The United States Air Force, Army, and Navy have established the need for Environmental Stress Screening (ESS). The Air Force policy statement issued in 1986 November imposed the provision that all fielded major electronic systems must deliver 2000 hours failure free. To accomplish this will require changes in design, producibility, and manufacturing attitudes. Defectives must be removed in the factory, at the most cost-effective point in the production cycle. This will require assessing failures from the field, then developing a system to identify these defectives in the factory, at the lowest level of assembly. Poor reliability rooted in high component failure rates leads to premature equipment failures. Fielded systems, spares provisioning, and the resulting maintenance actions impose heavy costs during each year of life. The result is that spares provisioning and logistic support now exceed 50 percent of the defense allocation (1984 and 1985 budgets). Cost increases result from poor equipment reliability. Today, systems originally contracted to deliver over 500 hours mean time between failures (MTBF) are delivering less than 50 hours MTBF (Navy Maintenance Material Management "3M" report). Excessive failure of components is a major cause of reduced life of equipment. Most maintenance actions involve the replacement of piece parts. The roots of this problem lie in the factory. Manufacturers are not removing defectives early in the product design/production cycle. Because of inadequate testing, most defectives are not detected in the factory. Failures are then precipitated in the field.
 
Article
This paper outlines the priorities of the Air Force Logistics Command (AFLC) in supporting the R&M 2000 initiative; provides an overview of long-, mid-, and near-term R&M planning; and discusses initiatives within the AFLC depots and material management community. In view of decreasing manpower and funding, AFLC is focusing on the basic elements of R&M with smart applications of developing technologies and innovative approaches within the Air Logistics Centers. AFLC has restructured itself to support such technologies as VHSIC, composites, and information. Several major offices and centers such as the Air Force Acquisition Center and the Air Force Coordinating Office for Logistics Research actively work R&M/supportability issues. These issues include R&M incentives, logistics support analysis, and advances in avionics leading to greater combat capability through R&M. On a more day-to-day basis, the paper discusses R&M integration into the Weapon System Master Plan and ongoing efforts to enhance the R&M of existing systems. Initiatives have begun within depots to identify improvements in repair techniques which will lead to higher reliability and productivity. Within the material management community, new management approaches, use of ESS on repairable assets, and enhancement of the product improvement process all highlight new efforts.
 
Article
As weapon systems become more sophisticated to meet multiple complex hostile threats, there will be an increasing reliance upon a high density of analog and digital microelectronic components, modules, and subsystems. Hybrid microelectronics are moving toward submicron geometries for semiconductor components, multilayer interconnects, and higher component densities on larger area substrates. This miniaturization increases the susceptibility of microelectronic circuitry to electrical overstress (EOS) and electrostatic discharge (ESD). Since EOS and ESD directly affect reliability and maintainability, procedures must be developed to account for them in engineering design, manufacturing, and testing. Over the past several years, there has been a concerted effort to raise ESD awareness at all levels of design and production at Motorola and other electronic firms. This paper briefly describes what measures should be considered and presents examples of their implementation at Motorola to increase reliability, lower costs, and reduce maintainability factors.
 
Article
This paper details the first four of a continuing series of policy letters which will generate the impetus for needed R&M improvements in Air Force systems. The first policy letter deals with R&M 2000 Environmental Stress Screening. This article discusses reasons for emphasizing this technique and provides some of the specific guidelines. Policy letter two establishes the requirement for all new systems to have "twice the reliability" and "one-half the maintainability" of the system being replaced. The third R&M policy letter focuses on the requirement for design engineers to design-in system characteristics which allow for ease of maintenance in the often harsh operational environment. The final policy letter details the minimum acceptable reliability of electronic systems. The policy letter sets a mean time between failures of 2000 hours for a line replaceable unit in an airborne fighter environment, the harshest environmental condition. The article closes with a discussion of quality and its importance.
 
Article
This paper describes the Tactical Air Command (TAC) methodology for stating R&M requirements for new systems and improving the combat capabilities of existing systems. It focuses on why TAC chose to state R&M in broad, output terms and illustrates three ways the R&M 2000 program has increased the emphasis on supporting fielded systems. Specifically, attention is given to two new technologies which should profoundly affect the maintainability of tactical systems.
 
Article
This paper describes how the US Army is improving readiness through enhanced reliability, availability, and maintainability (RAM). The Army is serious about supplying its personnel with the kind of equipment that stays on line. The US Army Materiel Command is taking aggressive steps to ensure that systems achieve their RAM requirements. Too often the Army has been accused of settling for minimum performance. As is well known, industry is reactive; it responds to pressure to improve what the customer thinks is important. An important step to getting higher levels of reliability and maintainability is to stand together with the Air Force and Navy customers and demand that RAM design and manufacturing disciplines are carried out and contractual RAM requirements are achieved. The achievement of requirements must be accomplished during system development and fielding. Improved RAM results in improved productivity, user satisfaction, and lower operating and support (O&S) costs. Linking R&M initiatives with O&S cost is an important step in justifying the up-front design and manufacturing disciplines that improve field performance. Meeting RAM requirements is the beginning and not the end of Army reliability efforts. Continued efforts to improve RAM are the thrust. For each system, the Army strives for continued improvement. Increased reliability reduces O&S costs while improving fielded mission accomplishment. Contractors will continue their efforts to improve production quality and to eliminate systemic causes of field failures.
 
Article
The US Air Force Reliability and Maintainability (R&M 2000) program seeks to institutionalize the means for improving the reliability and maintainability of weapons systems. The R&M 2000 program will greatly influence all elements of Integrated Logistics Support (ILS) comprising the logistics elements required to support a weapon system, viz, spare parts, support equipment, technical publications, maintenance training, and training devices. This paper has two objectives: 1) to describe the linkage by which the R&M 2000 program influences requirements for logistics elements when fully implemented; 2) to give visibility to the interactions between design and the various logistics resources. The analytic Supportability Figure of Merit (SFOM) model provides a method for interrelating R&M 2000 goals with ILS elements; it computes several measures of effectiveness (MOEs) in terms of design-related R&M and ILS parameters. These MOEs relate directly to the R&M 2000 program objectives. The SFOM model assesses individual systems designs with regard to R&M 2000 objectives and identifies the best of several competing designs from this standpoint. Lockheed-California Company's experience in applying the model to systems for the Advanced Tactical Fighter (ATF) program shows that alternative designs can be evaluated in terms of R&M 2000 goals and as a function of R&M and ILS parameters.
 
Article
This paper presents the implications for software of the US Air Force Reliability and Maintainability Action Plan: R&M 2000, whose objective is to improve defense system reliability and maintainability. These implications are very important, even though R&M 2000 does not explicitly mention software. However, Air Force Regulation 800-18, which implements R&M 2000, does state that "system ... comprises both hardware and software elements" and includes the requirement to "integrate the development of reliable software into the overall system development and acquisition program." Further, the US Department of Defense (DoD) Software Initiatives, new DoD Directives, and other Air Force Regulations on software, together with industry/academic initiatives, are intended to result in policies, standards, and specifications which lead to defense system software with requisite reliability and maintainability. This difficult goal can be achieved provided that: 1) the DoD Software Initiatives successfully create new software technology and transfer it to industry; 2) industry associations (eg, Electronic Industries Association [EIA], National Security Industrial Association [NSIA]) and professional societies (eg, Institute of Electrical and Electronics Engineers [IEEE], American Institute of Aeronautics and Astronautics [AIAA]) create and improve software reliability and maintainability standards through effective committee work; 3) industry and academe upgrade software development procedures to be consistent with software engineering principles based on Ada® technology.
 
Article
This paper is concerned with the impact on the US defense industry of the US Air Force R&M 2000 plan and other efforts for a dramatic improvement in the reliability and lifecycle cost of military systems. The paper generally discusses the aims and techniques of environmental stress screening (ESS) with quantitative examples from experience gained at the Litton Guidance and Control Systems Division. Reader Aids - Purpose: Present a point of view and a perspective. Special math needed for explanations: None. Special math needed to use results: None. Results useful to: Reliability and quality analysts and managers. Copyright © 1987 by The Institute of Electrical and Electronics Engineers, Inc.
 
Article
The US Air Force R&M 2000 Action Plan makes R&M equal to technical performance and schedule in source selection, proposal evaluation and contract performance. This paper discusses that coequality. R&M 2000 provides an opportunity for the practitioners to achieve a new high water mark in technology by institutionalizing the objectives toward a common cause. Reader Aids - Special math needed for explanations: None. Special math needed to use results: None. Results useful to: Management; reliability, maintainability, logistics, human factors, and design engineers. Copyright © 1987 by The Institute of Electrical and Electronics Engineers, Inc.
 
Article
The US Air Force's new reliability and maintainability program, R&M 2000, will affect defense contractors in many ways. This paper discusses the potential impact of R&M 2000 on defense contractors, and the impact on defense contractor business planning activities. Reader Aids - Purpose: Discuss the potential impacts of R&M 2000. Special math needed for explanations: None. Special math needed to use results: None. Results useful to: Corporate business planners, defense industry management. Copyright © 1987 by The Institute of Electrical and Electronics Engineers, Inc.
 
Article
The US Air Force R&M 2000 ESS policy differs little from that of the other armed services in that the same two primary screens, temperature cycling and random vibration, are employed at the module and end-item levels. Two of the screening parameters, 25 cycles and 30 °C/minute, for module-level temperature-cycling exceed the norm. What is different is the requirement for part defect rates, 1000 parts per million (ppm) beginning in 1986 October and 100 ppm in 1989 October. Defense contractors will have to figure out how to work with their part suppliers, or develop in-house screening methods to get part defect rates down to 100 ppm. The Air Force has published a handbook [1] and a companion guideline document [3] for implementing the ESS policy. The handbook contains procedures for addressing the four problems: 1) estimating the fraction of defects entering the manufacturing process, 2) estimating the fraction of defects generated during manufacturing, 3) assessing the effectiveness of stress screens, and 4) determining an acceptable level of defects remaining on delivery. The ESS policy and guidelines can result in effective application of ESS and improved field reliability.
 
Article
This paper describes a computer program and data base with automatic part failure-rate estimation according to MIL-HDBK-217B. The user supplies only a part list and application-dependent information. The program retrieves the part characteristics from the database, and computes the failure rate and power consumption for each part. Program options sum the failure rate and power requirements (dissipation) for the entire part list and perform trade-off analyses for different operating conditions or screening levels.
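A minimal sketch of the parts-count bookkeeping such a program performs is shown below; the part database, pi factors, and failure rates are placeholders, not MIL-HDBK-217B values, and the real program also handles screening levels and stress-dependent factors.

```python
# Sketch of the parts-count idea: the user supplies a parts list; base
# failure rates and quality/environment factors come from a database.
# All numbers below are placeholders, not MIL-HDBK-217B values.
PART_DB = {
    # part type: (base failure rate, failures per 10^6 h; power draw, W)
    "resistor":  (0.002, 0.10),
    "capacitor": (0.005, 0.05),
    "ic_logic":  (0.050, 0.30),
}

PI_Q, PI_E = 1.0, 4.0   # placeholder quality and environment factors

parts_list = [("resistor", 120), ("capacitor", 45), ("ic_logic", 12)]

total_rate = 0.0   # failures per 10^6 hours
total_power = 0.0  # watts
for name, qty in parts_list:
    lam_b, watts = PART_DB[name]
    total_rate += qty * lam_b * PI_Q * PI_E
    total_power += qty * watts

print(f"board failure rate : {total_rate:.3f} per 10^6 h")
print(f"predicted MTBF     : {1e6 / total_rate:,.0f} h")
print(f"power consumption  : {total_power:.1f} W")
```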
 
Article
Some limitations on the use of Mil-Hdbk-217E models within the design process are discussed. Reliability was predicted for three printed wiring boards representative of those used for avionic applications in order to evaluate the inherent variability. Parts count and parts stress analyses were conducted in three environments using Mil-Hdbk-217E models. In addition, parts stress analyses were conducted at various temperatures, assuming that components were thermally isolated and that thermal interactions resulted from the characteristics of the cooling system. The results suggest that reliability can be predicted only when the layout of the components and exact thermal mapping are known. In practice this is not acceptable, since some measure of reliability prediction is necessary in determining electrical, thermal, and mechanical design tradeoffs early in the design process.
 
Article
As science and technology become increasingly sophisticated, government and industry are relying more and more on science's advanced methods to determine reliability. Unfortunately, political, economic, time, and other constraints imposed by the real world inhibit the ability of researchers to calculate reliability efficiently and accurately. Because of such constraints, reliability must undergo an evolutionary change. The first step in this evolution is to re-interpret the concept so that it meets the new century's needs. The next step is to quantify reliability using both empirical methods and auxiliary data sources, such as expert knowledge, corporate memory, and mathematical modeling and simulation.
 
Article
We discuss optimum inspection policies by introducing the inspection density. We derive the optimum inspection policy by using this inspection density. The models discussed are: 1) the basic model, 2) the basic model with checking time, and 3) the basic model with imperfect inspection. For each model, we obtain the approximate optimum inspection policy minimizing the total s-expected cost by applying the calculus of variations.
 
Article
An optimal inspection and replacement policy is discussed for a unit which assumes any one of several Markov states. The policy evaluation function is the s-expected cost-per-unit-time over an infinite time span. The problem is formulated as a semi-Markov decision process with a modified policy-improvement routine. Some properties of the optimal policy are discussed, for example, the control limit rule holds and the inspection time interval becomes shorter as degradation progresses.
 
Article
This paper presents the results of an analysis of outage data recorded over a 12-year period on 112 of the Commonwealth Edison Company's 345-kV transmission lines in Illinois; the lengths range from 3.5 to 188 miles. A linear relationship is developed between outage rate and line length; it does not pass through the origin but above it, indicating a residual outage rate corresponding to zero line length. This residual outage rate is attributed to terminal conditions and equipment, and is viewed quantitatively by equating it to a fixed number of miles of line length. Because of the wide scatter of data points, the data were grouped by line length in 10-mile increments. The outages and years in service for each group were combined and then analyzed by groups. The Gaussian method of least squares is used to fit a line to these points. The important value is the outages/year due to the terminal conditions and equipment (ie, where the length of the transmission line equals zero and outages are still indicated). For an accurate representation of this relation, only the values representing the largest number of data points are included in the calculations. These show an annual failure rate of 0.7, which compares most favorably with the 0.6 found with the Markov model (ie, by developing a failure and repair rate matrix due to the dependent outage, combined with its probability matrix). A graphic method for determining the effects of terminal conditions and equipment has also been introduced.
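The sketch below illustrates the fitting step with an ordinary least-squares line on made-up grouped data (not the Commonwealth Edison data, which are not reproduced in the abstract); the intercept plays the role of the residual outage rate attributed to terminal conditions and equipment.

```python
import numpy as np

# Hypothetical grouped data: mean line length (miles) per 10-mile group and
# the observed outages per line-year for that group.
length = np.array([5.0, 15.0, 25.0, 35.0, 45.0, 55.0])
outage_rate = np.array([0.9, 1.3, 1.6, 2.1, 2.4, 2.8])

slope, intercept = np.polyfit(length, outage_rate, 1)
print(f"outages/year ~= {intercept:.2f} + {slope:.3f} * miles")

# The intercept is the residual outage rate at zero line length, attributed
# to terminal conditions and equipment; expressed as equivalent line miles:
print(f"terminal contribution ~ {intercept / slope:.1f} equivalent miles")
```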
 
Article
Three assumptions of Markov modeling for reliability of phased-mission systems that limit flexibility of representation are identified. The proposed generalization has the ability to represent state-dependent behavior, handle phases of random duration using globally time-dependent distributions of phase change time, and model globally time-dependent failure and repair rates. The approach is based on a single nonhomogeneous Markov model in which the concept of state transition is extended to include globally time-dependent phase changes. Phase change times are specified using nonoverlapping distributions with probability distribution functions that are zero outside assigned time intervals; the time intervals are ordered according to the phases. A comparison between a numerical solution of the model and simulation demonstrates that the numerical solution can be several times faster than simulation.
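As a toy illustration of numerically solving a Markov reliability model with globally time-dependent rates (far simpler than the paper's phased-mission formulation, which also handles random phase durations and repair), the sketch below integrates a two-state model whose failure rate changes at a deterministic phase boundary. All rates and times are hypothetical.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-state (up/failed) sketch with a time-dependent failure rate that
# changes at a deterministic phase boundary.
T_PHASE = 100.0                            # hypothetical phase-change time, hours

def lam(t):
    return 1e-4 if t < T_PHASE else 5e-4   # higher stress in phase 2

def odes(t, p):
    up, down = p
    return [-lam(t) * up, lam(t) * up]

sol = solve_ivp(odes, (0.0, 250.0), [1.0, 0.0],
                t_eval=[50, 100, 150, 250], max_step=1.0)
for t, up in zip(sol.t, sol.y[0]):
    print(f"reliability at t={t:5.0f} h : {up:.4f}")
```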
 
Article
HP-41C handheld calculator programs are presented for Bayes interval estimation of: 1) the reliability R of a component for binomial data; 2) both R and the MTBF for exponential data. In both cases, the “ignorance” prior is employed: uniform on R over the range 0 to 1. The input data are numbers of trials and successes for the binomial case; and test time, mission time, and failures for the exponential. This paper describes the models and includes the programs. Copyright © 1982 by The Institute of Electrical and Electronics Engineers, Inc.
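The binomial case reduces to a Beta posterior under the uniform prior; a short sketch using scipy is below. The exponential case shown follows one common derivation (a prior uniform on mission reliability giving a gamma posterior for the failure rate) and may not match the paper's formulas exactly; all inputs are hypothetical.

```python
import math
from scipy.stats import beta, gamma

# Binomial case: n trials, s successes; a uniform prior on R gives a
# Beta(s+1, n-s+1) posterior, so a 90% credible interval is:
n, s = 20, 18
lo, hi = beta.ppf([0.05, 0.95], s + 1, n - s + 1)
print(f"binomial: R in ({lo:.3f}, {hi:.3f}) with 90% probability")

# Exponential case (sketch): total test time T, f failures, mission time tm.
# Under one common derivation, a prior uniform on R = exp(-tm/MTBF) leads to
# a Gamma(f+1, rate=T+tm) posterior for the failure rate 1/MTBF; the paper's
# exact formulation may differ.
T, f, tm = 1000.0, 3, 50.0
lam_lo, lam_hi = gamma.ppf([0.05, 0.95], f + 1, scale=1.0 / (T + tm))
print(f"exponential: MTBF in ({1/lam_hi:.0f}, {1/lam_lo:.0f}) h, "
      f"R(tm) in ({math.exp(-tm*lam_hi):.3f}, {math.exp(-tm*lam_lo):.3f})")
```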
 
Article
The authors present an algorithm for routing fiber around a ring in a network when the network nodes, links, connectivity, and the offices to be used on that ring are all known. The algorithm aids automated survivable network planning. The algorithm was programmed in C and run on a SPARCstation. Under certain conditions, the problem degenerates to the traveling salesman problem, and the ring routing algorithm degenerates to the nearest-neighbor method of solving that problem.
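The published algorithm is not reproduced in the abstract; the sketch below shows only the degenerate case it mentions, a nearest-neighbor traveling-salesman heuristic over hypothetical office coordinates rather than the real network graph.

```python
import math

# Offices to be placed on the ring, as hypothetical (x, y) coordinates; the
# actual algorithm works on the network graph and its link lengths.
offices = {"A": (0, 0), "B": (4, 1), "C": (5, 5), "D": (1, 6), "E": (-2, 3)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbor_ring(nodes, start="A"):
    """Nearest-neighbor heuristic: repeatedly hop to the closest unvisited
    office, then close the ring back to the start (the degenerate
    traveling-salesman case mentioned in the abstract)."""
    ring, current = [start], start
    unvisited = set(nodes) - {start}
    while unvisited:
        current = min(unvisited, key=lambda n: dist(nodes[current], nodes[n]))
        ring.append(current)
        unvisited.remove(current)
    return ring + [start]

ring = nearest_neighbor_ring(offices)
length = sum(dist(offices[a], offices[b]) for a, b in zip(ring, ring[1:]))
print(" -> ".join(ring), f"  total fiber ~ {length:.1f}")
```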
 
Article
Military Standard 470 is a recently developed and published document containing the requirements for conducting a maintainability program. It is a tri-service coordinated document and replaces seven previously existing maintainability specifications of the three services. This paper examines each of the maintainability program task requirements and offers explanation, interpretation, or amplification, as deemed necessary, to provide a better understanding of the salient features of the document.
 
Article
The direct method in sequential analysis is used for an exact analysis of the Method-One maintainability demonstration plan in MIL-STD-471. Method-One is a nonparametric binomial sequential test and consists of three plans: A, B1, B2.
 
Article
This paper presents results of an analysis of the sequential test (ST) procedures described in MIL-HDBK-781A and IEC 61124, intended for checking the mean time between failures (TBF) value under an exponential distribution of the TBF. The methodological basis of the calculations is discretization of the ST process through subdivision of the time axis into small segments. By this means, the process is converted into a binomial one, for which an algorithm and a fast computer program have been developed; most important of all, a tool is provided for searching for the optimal truncation. The influence of truncation by time on the Expected Test Time (ETT) characteristics was studied, and an improved truncation method, minimizing this influence, was developed. The distributions of the test times were determined. The type A plan characteristics in IEC 61124:2006 have substantial inconsistencies in the probabilities of type I & II errors (up to a factor of 2), and in the ETT (up to 17%). We checked these results by using the binomial-recursive method, and by simulation. The type C plans, reproduced from GOST R27.402:2005, are consistent; but there is scope (and need) for substantial improvement of the search algorithm for the optimal parameters.
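A toy version of the discretization idea is sketched below: the exponential sequential test is stepped in small time increments so that the failure count follows a binomial recursion, from which error probabilities and expected test time are accumulated. The plan parameters and the simple truncation rule (remaining probability mass accepted at the truncation time) are illustrative, not the IEC 61124 or MIL-HDBK-781A values, and the boundaries are plain Wald SPRT limits.

```python
import math
import numpy as np

theta0, theta1 = 1000.0, 500.0          # acceptable / rejectable MTBF, hours
alpha, beta_ = 0.1, 0.1                 # nominal producer / consumer risks
T_trunc, r_trunc = 4000.0, 8            # truncation time and failure count
dt = 1.0                                # discretization step, hours

lnA, lnB = math.log((1 - beta_) / alpha), math.log(beta_ / (1 - alpha))
ln_d, c = math.log(theta0 / theta1), 1.0 / theta1 - 1.0 / theta0

def run(theta):
    """Return (P(accept), P(reject), approx expected test time) under true MTBF theta."""
    p_fail = dt / theta                     # failure probability in one step
    probs = np.zeros(r_trunc + 2)           # distribution over failure counts
    probs[0] = 1.0
    p_acc = p_rej = ett = 0.0
    t = 0.0
    while t < T_trunc and probs.sum() > 1e-12:
        ett += probs.sum() * dt             # items still on test this step
        t += dt
        # Binomial step: at most one failure per small time increment.
        probs[1:] = probs[1:] * (1 - p_fail) + probs[:-1] * p_fail
        probs[0] *= 1 - p_fail
        for r in range(probs.size):
            z = r * ln_d - t * c            # log likelihood ratio
            if z >= lnA or r > r_trunc:     # reject boundary / failure truncation
                p_rej += probs[r]; probs[r] = 0.0
            elif z <= lnB:                  # accept boundary
                p_acc += probs[r]; probs[r] = 0.0
    p_acc += probs.sum()                    # simplified rule: time truncation -> accept
    return p_acc, p_rej, ett

for theta in (theta0, theta1):
    pa, pr, ett = run(theta)
    print(f"theta={theta:6.0f}: P(accept)={pa:.3f} P(reject)={pr:.3f} ETT={ett:.0f} h")
```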
 
Article
The present paper examines the effect of safe failure fraction (SFF) constraints on hazardous-event rates, and discusses the validity of the SFF constraints in IEC 61508. First, the safe states are categorized into three types of states, and overall systems involving safety-related systems are classified into six types of systems based on the safe-state categorization, and the completeness of trips. Next, state-transition models for the systems where the effect of SFF is the greatest are presented, and the hazardous-event rates are analysed for the systems. Then, it is found that, when the effect of the SFF constraints is positive, it is negligible; and when it is negative, it is not negligible for safety. Thus, we recommend that the application of the SFF constraints to the standard should be put on hold.
 
Article
This paper describes some of the advantages, shortcomings, and recommended changes to MIL-STD-781B, which is a military specification for reliability test plans. Particular emphasis is given to laboratory discrepancies that are excluded from consideration as relevant failures. The principal reason that MIL-STD-781B test results do not reasonably foretell the minimum MTBF that can be expected in the field is that the field definition of failure is not compatible with the laboratory definition. Copyright © 1975 by The Institute of Electrical and Electronics Engineers, Inc.
 
Top-cited authors
John Kalbfleisch
  • University of Michigan
Kishor S Trivedi
  • Duke University
David W. Coit
  • Rutgers, The State University of New Jersey
Elisa Lee
  • University of Oklahoma Health Sciences Center
An Hoang Pham
  • University of Greenwich