Statistical Methods for Combining Information: Stryker Family of Vehicles Reliability Case Study

Article in Journal of Quality Technology 47(4):400-415 · October 2015
Abstract
Problem: Reliability is an essential element in assessing the operational suitability of Department of Defense weapon systems, and it plays a prominent role in both the design and analysis of operational tests. In the current era of reduced budgets and increased reliability requirements, it is challenging to verify reliability requirements in a single test. Furthermore, all available data should be considered to ensure that evaluations provide the most appropriate analysis of the system's reliability. Approach: This paper describes the benefits of using parametric statistical models to combine information across multiple testing events. Both frequentist and Bayesian inference techniques are employed, then compared and contrasted, to illustrate different statistical methods for combining information. We apply these methods to data collected during the developmental and operational test phases for the Stryker family of vehicles. Results: We show that, when the available information is combined across the two test phases for the Stryker family of vehicles, the reliability estimates are more accurate and precise than those reported previously using traditional methods that rely only on operational test data.
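The abstract does not reproduce the underlying models, but the flavor of combining developmental and operational test data can be suggested with a minimal Bayesian sketch under assumed numbers: a constant-failure-rate (exponential) model in which developmental-test (DT) failure data form an informative gamma prior, discounted by a factor w to acknowledge that DT and operational-test (OT) conditions differ, and the OT failure count updates it. The counts, mileages, discount factor, and conjugate gamma-Poisson form are all illustrative assumptions, not the Stryker data or the paper's actual models.

```python
# Hedged sketch: combining developmental-test (DT) and operational-test (OT)
# failure data under an assumed constant-failure-rate (exponential) model.
from scipy import stats

dt_failures, dt_miles = 10, 12_000.0   # hypothetical developmental testing
ot_failures, ot_miles = 4, 2_000.0     # hypothetical operational testing

# Traditional OT-only point estimate of mean miles between failures (MMBF)
print("OT-only MMBF:", ot_miles / ot_failures)

# DT evidence becomes an informative gamma prior on the failure rate lambda,
# discounted by w in (0, 1] to reflect DT/OT environment differences; the OT
# failure count then gives a conjugate gamma posterior.
w = 0.5
post = stats.gamma(a=w * dt_failures + ot_failures,
                   scale=1.0 / (w * dt_miles + ot_miles))

lam_lo, lam_hi = post.interval(0.95)   # credible interval for lambda
print("Combined MMBF estimate:", round(1.0 / post.mean()))
print("95% interval for MMBF:", (round(1.0 / lam_hi), round(1.0 / lam_lo)))
```

With only four operational failures, an assessment based on operational data alone is very imprecise; the combined posterior borrows the discounted developmental experience to tighten it, which is the qualitative point the abstract makes.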

  • Article
    One of the most powerful features of Bayesian analyses is the ability to combine multiple sources of information in a principled way to perform inference. This feature can be particularly valuable in assessing the reliability of systems where testing is limited. At their most basic, Bayesian methods for reliability develop informative prior distributions using expert judgment or similar systems. Appropriate models allow the incorporation of many other sources of information, including historical data, information from similar systems, and computer models. We introduce the Bayesian approach to reliability using several examples and point to open problems and areas for future work. © 2017 Institute for Defense Analyses. Published with license by Taylor & Francis.
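As a concrete, hedged illustration of the "informative prior" idea this entry describes, the sketch below builds a Beta prior for pass/fail reliability from a similar system's record and updates it with limited new test data. All counts and the choice of prior are invented for illustration.

```python
# Hedged sketch: an informative prior from a similar system, updated with
# limited pass/fail testing on the new system (all counts are invented).
from scipy import stats

# Similar system: 47 successes in 50 trials -> informative Beta(47, 3) prior
prior_alpha, prior_beta = 47.0, 3.0

# Limited testing on the new system: 9 successes in 10 trials
successes, trials = 9, 10

# Conjugate Beta-Binomial update
posterior = stats.beta(prior_alpha + successes,
                       prior_beta + (trials - successes))

print("posterior mean reliability:", round(posterior.mean(), 3))
print("95% credible interval:",
      [round(q, 3) for q in posterior.interval(0.95)])
```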
  • Article
    Full-text available
    To identify the principal malfunctions, and their patterns of occurrence, that affect the operational reliability of straddle-type monorail vehicles, this paper uses system reliability engineering methods, together with the structural characteristics of the monorail vehicle system and statistical data on operational malfunctions, to determine the key factors that affect operational reliability. An operational reliability evaluation index system for the monorail vehicle is then established from these influencing factors. Finally, the AHP (Analytic Hierarchy Process) method is used to determine the weight of each index, and the fuzzy comprehensive evaluation method is used to evaluate the operational reliability of the monorail vehicle. The evaluation results offer guidance to monorail vehicle maintenance departments in developing more effective maintenance strategies.
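A rough sketch of the two computational steps named in the entry above, AHP weighting followed by a fuzzy comprehensive evaluation, is given below. The pairwise-comparison matrix, the three indices, the membership matrix, and the grade labels are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: AHP index weights, then a fuzzy comprehensive evaluation
# using the weighted-average operator (all inputs are invented).
import numpy as np

# Pairwise comparison of three hypothetical indices (Saaty 1-9 scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# AHP weights: normalized principal eigenvector of A
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check (random index RI = 0.58 for a 3x3 matrix)
ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
print("weights:", np.round(w, 3), " consistency ratio:", round(ci / 0.58, 3))

# Fuzzy comprehensive evaluation: each row gives an index's membership in
# the grades (good, fair, poor); B = w . R aggregates across indices
R = np.array([[0.6, 0.3, 0.1],
              [0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3]])
B = w @ R
print("grade memberships:", np.round(B, 3),
      "-> overall grade:", ["good", "fair", "poor"][int(np.argmax(B))])
```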
  • A commonly occurring problem in reliability testing is how to combine pass/fail test data collected from disparate environments. We have worked with colleagues in aerospace engineering for a number of years, where the two test environments in use are ground tests and flight tests. Ground tests are less expensive and consequently more numerous. Flight tests are much less frequent, but directly reflect the actual usage environment. We discuss a relatively simple combining approach that realizes the benefit of a larger sample size by using ground test data, but at the same time accounts for the difference between the two environments. We compare our solution with what look like more sophisticated approaches to the problem in order to calibrate its limitations. Overall, we find that our proposed solution is robust to its inherent assumptions, which explains its usefulness in practice.
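The entry above does not spell out its combining rule, so the sketch below illustrates the general idea with a different but common device: a power-prior-style discount factor w that lets abundant ground-test data inform the flight-test estimate without overwhelming it. The counts and the value of w are invented; this is not the authors' method.

```python
# Hedged illustration (not the authors' method): discounted ground-test
# evidence combined with scarce flight-test data for pass/fail reliability.
from scipy import stats

ground_pass, ground_n = 185, 200    # plentiful, cheaper ground tests
flight_pass, flight_n = 18, 20      # scarce flight tests
w = 0.3                             # assumed relevance of ground data to flight

# Beta posterior for flight reliability with down-weighted ground evidence
a = 1 + w * ground_pass + flight_pass
b = 1 + w * (ground_n - ground_pass) + (flight_n - flight_pass)
post = stats.beta(a, b)

print("flight-only estimate:", round(flight_pass / flight_n, 3))
print("combined posterior mean:", round(post.mean(), 3))
print("95% interval:", [round(q, 3) for q in post.interval(0.95)])
```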
  • Article
    Engineers use reliability experiments to determine the factors that drive product reliability, build robust products, and predict reliability under use conditions. This article uses recent testing of a howitzer to illustrate the challenges in designing reliability experiments for complex, repairable systems. We review research in complex system reliability models, failure-time experiments, and experimental design principles. We highlight the need for reliability experiments that account for various intended uses and environments. We leverage lessons learned from current research and propose methods for designing an experiment for a complex, repairable system.
  • Article
    We propose a Bayesian hierarchical model to assess the reliability of a family of vehicles, based on the development of the joint light tactical vehicle (JLTV). The proposed model effectively combines information across three phases of testing and across common vehicle components. The analysis yields estimates of failure rates for specific failure modes and vehicles as well as an overall estimate of the failure rate for the family of vehicles. We are also able to obtain estimates of how well vehicle modifications between test phases improve failure rates. In addition to using all data to improve on current assessments of reliability and reliability growth, we illustrate how to leverage the information learned from the three phases to determine appropriate specifications for subsequent testing that will demonstrate if the reliability meets a given reliability threshold.
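In the spirit of the hierarchical model this entry describes, though much simpler than the JLTV model itself, the sketch below shows Poisson-gamma partial pooling of failure rates across vehicle variants, with the population gamma distribution fit by moment matching. The failure counts and test miles are invented.

```python
# Hedged sketch: empirical-Bayes partial pooling of failure rates across
# vehicle variants (invented data; not the JLTV model from the article).
import numpy as np

x = np.array([14, 9, 22, 5])                  # failures per variant
t = np.array([8000., 6000., 12000., 4000.])   # test miles per variant

# Fit a Gamma(alpha, beta) population distribution to the raw rates by moment
# matching (beta is a rate parameter: mean = alpha/beta, var = alpha/beta^2)
rates = x / t
beta_hat = rates.mean() / rates.var(ddof=1)
alpha_hat = rates.mean() * beta_hat

# Conjugate posterior mean for each variant: (alpha + x_i) / (beta + t_i)
post_mean = (alpha_hat + x) / (beta_hat + t)
print("raw rates (per 1000 mi):     ", np.round(rates * 1000, 2))
print("shrunken rates (per 1000 mi):", np.round(post_mean * 1000, 2))
print("family-level rate (per 1000 mi):", round(alpha_hat / beta_hat * 1000, 2))
```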
  • Chapter
    The complex, multifunctional nature of defense systems, along with the wide variety of system types, demands a structured but flexible analytical process for testing systems. This chapter summarizes commonly used techniques in defense system testing and specific challenges imposed by the nature of defense system testing. It highlights the core statistical methodologies that have proven useful in testing defense systems. Case studies illustrate the value of using statistical techniques in the design of tests and analysis of the resulting data. The chapter focuses on the unique statistical challenges of designing operational tests, many of which can be attributed to the process, but some of which are inherent to the complexity of the systems and the missions system operators must complete. It provides an overview of the process of designing experiments for military systems with operational users in an operational environment.
  • Article
    The ability to estimate system reliability with an appropriate measure of associated uncertainty is important for understanding its expected performance over time. Frequently, obtaining full-system data is prohibitively expensive, impractical, or not permissible. Hence, methodology which allows for the combination of different types of data at the component or subsystem levels can allow for improved estimation at the system level. We apply methodologies for aggregating uncertainty from component-level data to estimate system reliability and quantify its overall uncertainty. This paper provides a proof-of-concept that uncertainty quantification methods using Bayesian methodology can be constructed and applied to system reliability problems for a system with both series and parallel structures.
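A minimal sketch of the kind of component-to-system uncertainty propagation this entry describes: Monte Carlo draws from component-level Beta posteriors are pushed through a small series/parallel structure (component A in series with a parallel pair B, C). The structure, the pass/fail counts, and the independence assumption are all illustrative.

```python
# Hedged sketch: propagating component-level posterior uncertainty to the
# system level by Monte Carlo for an invented series/parallel system.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Component pass/fail data (invented) -> Beta(1 + successes, 1 + failures)
rA = rng.beta(1 + 48, 1 + 2, n)    # A: 48/50 passes
rB = rng.beta(1 + 18, 1 + 2, n)    # B: 18/20 passes
rC = rng.beta(1 + 27, 1 + 3, n)    # C: 27/30 passes

# System works if A works AND (B OR C) works, assuming independent components
r_sys = rA * (1.0 - (1.0 - rB) * (1.0 - rC))

lo, hi = np.percentile(r_sys, [2.5, 97.5])
print(f"system reliability: mean {r_sys.mean():.3f}, "
      f"95% interval ({lo:.3f}, {hi:.3f})")
```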
  • The Advanced Theory of Statistics
    • O'Hagan, A.
    • Forster, J.
  • Book
    During the past decade and a half, the National Research Council, through its Committee on National Statistics, has carried out a number of studies on the application of statistical methods to improve the testing and development of defense systems. These studies were intended to provide advice to the Department of Defense (DOD), which sponsored them. The previous studies have been concerned with the role of statistical methods in testing and evaluation, reliability practices, software methods, combining information, and evolutionary acquisition. Industrial Methods for the Effective Testing and Development of Defense Systems is the latest in a series of studies, and unlike earlier studies, this report identifies current engineering practices that have proved successful in industrial applications for system development and testing. This report explores how developmental and operational testing, modeling and simulation, and related techniques can improve the development and performance of defense systems, particularly techniques that have been shown to be effective in industrial applications and are likely to be useful in defense system development. In addition to the broad issues, the report identifies three specific topics for its focus: finding failure modes earlier, technology maturity, and use of all relevant information for operational assessments. © 2012 by the National Academy of Sciences. All rights reserved.
  • Chapter
    Department of Defense (DOD) efforts to acquire goods and services are often complex and controversial. These efforts are referred to as defense acquisitions. The structure DOD utilizes to plan, execute, and oversee those activities is an intricate and multivariate "system of systems" composed of the requirements, resource allocation, and acquisition systems. This system of systems has evolved over time, its foundation being the report published by the Packard Commission in 1986, many of whose recommendations became part of the Goldwater-Nichols Department of Defense Reorganization Act of 1986. This evolution continued, as the requirements system changed from a threat-based to a capabilities-based system; the resource allocation system added execution reviews and concurrent program/budget reviews; and the acquisition system became a flexible, tailored process. The complexity of this system of systems combined with the magnitude of personnel, activities and funding involved in its operation can result in problems, including inefficient operations, fraud/waste/abuse, and inadequate implementation or enforcement of the laws and regulations that govern it. Both DOD and Congress have worked to address these types of problems and accompanying issues over the years. On April 21, 2010, the House Armed Services Committee unanimously voted in support of the Implementing Management for Performance and Related Reforms to Obtain Value in Every Acquisition Act of 2009 (H.R. 5013). While the act focuses primarily on the acquisition workforce, DOD's internal financial management, and the industrial base, some sections of the proposed bill relate directly to weapon system acquisition. In Fiscal Year (FY) 2009, a number of major efforts were undertaken to reform the acquisition process. DOD issued an updated and revised DOD Instruction 5000.2 (which governs the process for acquiring systems) and issued an updated and revised Instruction, Joint Capabilities Integration and Development System (which governs the process for deciding what capabilities new weapon systems require). In addition, Secretary Gates stated his intent to significantly alter the way weapon systems are acquired, including canceling or curtailing the acquisition of a number of current programs. For its part, the 110th Congress passed the FY2009 Duncan Hunter National Defense Authorization Act (S. 3001/P.L. 110-417) and the 111th Congress passed the Weapon Systems Acquisition Reform Act of 2009 (S. 454/P.L. 111-23), both of which made changes to the acquisition process. Key provisions in P.L. 111-23 include the appointment of a Director of Cost Assessment and Program Evaluation, a Director of Developmental Test and Evaluation, and a Director of Systems Engineering; a requirement that combatant commanders have more influence in the requirements generation process; changes to the Nunn-McCurdy Act, including rescinding the most recent Milestone approval for any program experiencing critical cost growth; and a requirement that DOD revise guidelines and tighten regulations governing conflicts of interest by contractors working on MDAPs.
  • Article
    We present a Bayesian model for assessing the reliability of multicomponent systems. Novel features of this model are the natural manner in which lifetime data collected at either the component, subsystem, or system level are integrated with prior information at any level. The model allows pooling of information between similar components, the incorporation of expert opinion, and straightforward handling of censored data. The methodology is illustrated with two examples.
  • Article
    Aspects of scientific method are discussed: In particular, its representation as a motivated iteration in which, in succession, practice confronts theory, and theory, practice. Rapid progress requires sufficient flexibility to profit from such confrontations, and the ability to devise parsimonious but effective models, to worry selectively about model inadequacies and to employ mathematics skillfully but appropriately. The development of statistical methods at Rothamsted Experimental Station by Sir Ronald Fisher is used to illustrate these themes.
  • Article
    In this article, we consider the development and analysis of both attribute- and variable-data reliability growth models from a Bayesian perspective. We begin with an overview of a Bayesian attribute-data reliability growth model and illustrate how this model can be extended to cover the variable-data growth models as well. Bayesian analysis of these models requires inference over ordered regions, and even though closed-form results for posterior quantities can be obtained in the attribute-data case, variable-data models prove difficult. In general, when the number of test stages gets large, computations become burdensome and, more importantly, the results may become inaccurate due to computational difficulties. We illustrate how the difficulties in the posterior and predictive analyses can be overcome using Markov-chain Monte Carlo methods. We illustrate the implementation of the proposed models by using examples from both attribute and variable reliability growth data.
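To give a flavor of what "inference over ordered regions" via Markov-chain Monte Carlo can look like in the attribute-data case, here is a toy Metropolis sampler with a non-increasing constraint on the stage failure probabilities. The data, the flat prior on the ordered region, and the tuning choices are invented and far simpler than the models in the article.

```python
# Hedged sketch: Metropolis sampling of stage failure probabilities under
# the ordering constraint p1 >= p2 >= p3 (invented attribute growth data).
import numpy as np

fails = np.array([6, 4, 2])      # failures observed in each test stage
n     = np.array([20, 20, 20])   # trials in each test stage

def log_post(p):
    """Binomial log-likelihood with a flat prior on the ordered region."""
    if not (1 > p[0] >= p[1] >= p[2] > 0):
        return -np.inf
    return np.sum(fails * np.log(p) + (n - fails) * np.log(1 - p))

rng = np.random.default_rng(1)
p = np.array([0.3, 0.2, 0.1])    # starting point inside the ordered region
draws = []
for _ in range(20_000):
    prop = p + rng.normal(scale=0.03, size=3)   # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(p):
        p = prop
    draws.append(p.copy())
draws = np.array(draws[5_000:])                 # discard burn-in
print("posterior means of stage failure probabilities:",
      np.round(draws.mean(axis=0), 3))
```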
  • SAS/STAT® 9.3 User's Guide
    • SAS Institute Inc.
  • Article
    When tackling complex problems to help with decision-making, we may often have access to multiple sources of data, each of which provide partial information to answer a primary question of interest. By considering the totality of data simultaneously, instead of performing analyses on each data type separately, we can leverage across all types of data to deepen our understanding, appropriately calibrate the uncertainty in our estimates and predictions, as well as potentially reveal weaknesses in our underlying theory. We explore some of the objectives and complications associated with data combination, analysis, and design of experiments in meta-analyses by considering three examples from diverse applications.
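One standard device for combining estimates from several sources, not necessarily the approach taken in this article, is a random-effects meta-analysis. The sketch below uses the DerSimonian-Laird estimator of between-source variance with invented effect sizes and standard errors.

```python
# Hedged sketch: random-effects meta-analysis (DerSimonian-Laird) combining
# several per-source estimates (all values invented).
import numpy as np

effects = np.array([0.42, 0.55, 0.30, 0.61])   # per-source estimates
se      = np.array([0.10, 0.15, 0.12, 0.20])   # per-source standard errors

# Fixed-effect weights and pooled estimate
w_fe = 1.0 / se**2
mu_fe = np.sum(w_fe * effects) / np.sum(w_fe)

# DerSimonian-Laird between-source variance tau^2
Q = np.sum(w_fe * (effects - mu_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (Q - (len(effects) - 1)) / c)

# Random-effects pooled estimate and its standard error
w_re = 1.0 / (se**2 + tau2)
mu_re = np.sum(w_re * effects) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"tau^2 = {tau2:.4f}, pooled effect = {mu_re:.3f} +/- {1.96 * se_re:.3f}")
```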
  • Article
    This paper develops a framework to determine the performance or reliability of a complex system. We consider a case study in missile reliability that focuses on the assessment of a high fidelity launch vehicle intended to emulate a ballistic missile threat. In particular, we address the case of how to make a system assessment when there are limited full-system tests. We address the development of a system model and the integration of a variety of data using a Bayesian network.
  • Article
    When assessing system reliability using system, subsystem, and component-level data, assumptions are required about the form of the system structure in order to utilize the lower-level data. We consider model forms which allow for the assessment and modeling of possible discrepancies between reliability estimates based on different levels of data. By understanding these potential conflicts between data, we can more realistically represent the true uncertainty of the estimates and gain understanding about inconsistencies which might guide further improvements to the system model. The new methodology is illustrated with several examples.
  • Article
    Full-text available
    This paper reviews the role of expert judgement to support reliability assessments within the systems engineering design process. Generic design processes are described to give the context and a discussion is given about the nature of the reliability assessments required in the different systems engineering phases. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context which are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential for future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process.
  • Article
    Full-text available
    Bayesian reliability modeling of complex systems, such as a missile system, can allow considerable flexibility to incorporate component and subsystem level data, expert knowledge and full system tests. In this paper we present a unified method for developing a model that is able to consistently estimate reliability parameters for all elements of the model, from system level through to subsystem and component levels. The model can be adapted to model the various components of the system at different levels of granularity, depending on where data are available. In addition, this paper presents some direction about how Bayesian priors can be selected to incorporate expert knowledge of the system in a variety of ways. An example of a complex system based on a missile is used to illustrate the methods.
  • Article
    Full-text available
    The mean lifetime of a Weibull variable is not easy to handle because it depends on the gamma function. In this article, we compare three methods for the construction of confidence intervals of the Weibull mean lifetime based on a censored reliability data set. The confidence intervals can be easily calculated from the standard output of a commercial statistical software. The three methods are the naive method, which ignores the gamma function in the Weibull mean expression, the delta method, and an approximated delta method. A simulation study was performed to compare the three methods. It indicates that the naive one is inappropriate when the shape parameter is less than unity and the performance of the other two is very similar. This fact justifies the preference for the approximated delta method because of its simplicity. A numerical example illustrates the methods.
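As a hedged sketch of the delta-method computation compared in the entry above: the Weibull mean is mu = eta * Gamma(1 + 1/beta), so a normal-theory interval for mu can be propagated from the shape and scale MLEs and their covariance matrix, which standard reliability software reports. The MLE and covariance values below are invented.

```python
# Hedged sketch: delta-method confidence interval for the Weibull mean
# mu = eta * Gamma(1 + 1/beta), given MLEs and their covariance (invented).
import numpy as np
from scipy.special import gamma, digamma

beta_hat, eta_hat = 1.8, 1200.0        # shape and scale MLEs
cov = np.array([[0.04, 1.5],           # covariance of (beta_hat, eta_hat)
                [1.5, 4000.0]])

g = gamma(1.0 + 1.0 / beta_hat)
mu_hat = eta_hat * g                   # estimated mean lifetime

# Gradient of mu with respect to (beta, eta)
dmu_dbeta = -eta_hat * g * digamma(1.0 + 1.0 / beta_hat) / beta_hat**2
dmu_deta = g
grad = np.array([dmu_dbeta, dmu_deta])

se_mu = np.sqrt(grad @ cov @ grad)     # delta-method standard error
print(f"mean lifetime {mu_hat:.0f}, 95% CI "
      f"({mu_hat - 1.96 * se_mu:.0f}, {mu_hat + 1.96 * se_mu:.0f})")
```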
  • Article
    This paper presents a fully Bayesian approach that simultaneously combines non-overlapping (in time) basic event and higher-level event failure data in fault tree quantification. Such higher-level data often correspond to train, subsystem or system failure events. The fully Bayesian approach also automatically propagates the highest-level data to lower levels in the fault tree. A simple example illustrates our approach. The optimal allocation of resources for collecting additional data from a choice of different level events is also presented. The optimization is achieved using a genetic algorithm.
  • Article
    The authors propose a nonparametric reliability-growth model based on Bayes analysis techniques. By using the unique properties of the assumed prior distributions, the moments of the posterior distribution of the failure rate at various stages during a development test can be found. The proposed model is compared with the US Army Materiel Systems Analysis Activity (AMSAA) model based on relative and mean-square prediction errors. In all but one circumstance, the proposed model performed better than either the AMSAA or nonparametric models. The one exception appears to be when no information about the failure rate is available at the start of test and the actual failure process is nonhomogeneous Poisson, with power-law intensity function, as assumed by the AMSAA model.
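For reference, the AMSAA (Crow) power-law model used as the comparison baseline in the entry above has simple closed-form maximum-likelihood estimates for a time-truncated test; the sketch below computes them for invented failure times.

```python
# Hedged sketch: AMSAA/Crow power-law (NHPP) MLEs for a time-truncated test,
# with invented failure times t_i in hours.
import numpy as np

t = np.array([55., 160., 290., 500., 780., 1100., 1600., 2100.])
T = 2500.0                                  # total test time
k = len(t)

beta_hat = k / np.sum(np.log(T / t))        # growth (shape) parameter
lam_hat = k / T**beta_hat                   # scale parameter

# Instantaneous failure intensity and MTBF at the end of the test
rho_T = lam_hat * beta_hat * T**(beta_hat - 1.0)
print(f"growth parameter beta = {beta_hat:.2f} (beta < 1 indicates growth)")
print(f"current MTBF estimate = {1.0 / rho_T:.0f} hours")
```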
  • Article
    Full-text available
    The systems that statisticians are asked to assess, such as nuclear weapons, infrastructure networks, supercomputer codes and munitions, have become increasingly complex. It is often costly to conduct full system tests. As such, we present a review of methodology that has been proposed for addressing system reliability with limited full system testing. The first approaches presented in this paper are concerned with the combination of multiple sources of information to assess the reliability of a single component. The second general set of methodology addresses the combination of multiple levels of data to determine system reliability. We then present developments for complex systems beyond traditional series/parallel representations through the use of Bayesian networks and flowgraph models. We also include methodological contributions to resource allocation considerations for system reliability assessment. We illustrate each method with applications primarily encountered at Los Alamos National Laboratory. Published in Statistical Science by the Institute of Mathematical Statistics at http://dx.doi.org/10.1214/088342306000000439.