Article

A Bayesian framework for parameter estimation in complex systems

Agency of Environment Protection, Bihor County, 410464 Oradea, Romania; Institute of Solid Mechanics of the Romanian Academy, C-tin Mille 15, 010141, Romania; Department of Electrical Measurements, Faculty of Electrical Engineering, Technical University of Cluj-Napoca, 400020 Cluj-Napoca, Romania; Department of Electrical Engineering, Measurements and Electric Power Use, Faculty of Electrical Engineering and Information Technology, University of Oradea, Universităţii St., Oradea, Romania
01/2009

ABSTRACT Real-life complex development situations show that the methods applied to the new product development process contain reliability risks that require assessment and quantification at the earliest possible stage, extracting relevant information from the process. Reliability targets have to be realistic and systematically defined in a way that is meaningful for marketing, engineering, testing, and production. When potential problems are proactively identified and solved during the design phase and products are launched at or near their planned reliability targets, extensive and prolonged improvement efforts after launch are eliminated. Once a product is in the market, standard procedures require monitoring for early signs of issues, allowing corrective action to be taken quickly. Reliability is validated before a product goes to market by means of a Bayesian statistical method, because this model yields shorter confidence intervals than classical statistical inference models and thus allows a more accurate decision-making process. The paper proposes the estimation of the shape parameters of complex data structures, approached with the exponential gamma distribution as the model for the lifetime, reliability, and failure rate functions. The numerical simulation performed in the case study validates the correctness of the proposed methodology.
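As a concrete illustration of the Bayesian estimation the abstract describes, the sketch below computes the posterior of a shape parameter theta for an exponentiated-exponential-type lifetime model with CDF F(t) = (1 - exp(-λt))^θ, for which a gamma prior on θ is conjugate. The data, the prior, the known scale λ, and this particular parameterization are illustrative assumptions, not the paper's actual model, data, or results.

```python
import numpy as np
from scipy import stats

# Hypothetical lifetime data (hours); illustrative only
t = np.array([120.0, 340.0, 95.0, 410.0, 230.0, 510.0, 180.0, 290.0])
lam = 1.0 / 250.0  # assumed known scale parameter

# Assumed model: F(t) = (1 - exp(-lam*t))**theta, so the likelihood in
# theta is proportional to theta**n * exp(-theta * S) with
# S = -sum(log(1 - exp(-lam*t_i))), and a Gamma(a, b) prior is conjugate.
S = -np.sum(np.log1p(-np.exp(-lam * t)))
a, b = 1.0, 1.0                      # weakly informative prior (assumption)
a_post, b_post = a + t.size, b + S   # Gamma posterior parameters

post = stats.gamma(a_post, scale=1.0 / b_post)
print("posterior mean of theta :", post.mean())
print("95% credible interval   :", post.interval(0.95))
```

The posterior directly yields credible intervals for the shape parameter, which is where the shorter intervals mentioned in the abstract would come from under this kind of conjugate analysis.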

  • ABSTRACT: Reliability importance indices are valuable in establishing the direction and prioritization of actions related to an upgrading effort (reliability improvement) in system design, or in suggesting the most efficient way to operate and maintain system status. Existing indices are calculated through analytical approaches, and applying these indices to complex repairable systems may be intractable. Complex repairable systems are increasingly common, and the difficulties of obtaining analytical system reliability and availability solutions for them are well known. To overcome this intractability, discrete event simulation, through the use of reliability block diagrams (RBD), is often used to obtain numerical system reliability characteristics. Traditional use of simulation results provides no easy way to compute reliability importance indices. To bridge this gap, several new reliability importance indices are proposed and defined in this paper. These indices can be calculated directly from the simulation results, and their limiting values are the traditional reliability importance indices. Examples are provided to illustrate the application of the proposed importance indices (a Monte Carlo sketch follows the citation below).
    Reliability and Maintainability, 2004 Annual Symposium (RAMS), 02/2004.
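As a minimal illustration of computing an importance index directly from simulation output, the sketch below estimates the classical Birnbaum importance of each component of a small series-parallel reliability block diagram from Monte Carlo samples. The diagram, the component reliabilities, and the choice of the Birnbaum index are assumptions for illustration; they are not the new indices proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-component RBD: component 0 in series with the
# parallel pair (1, 2); reliabilities are illustrative.
p = np.array([0.95, 0.80, 0.70])

def system_up(x):
    """Structure function of the assumed RBD (x[:, i] = 1 if component i works)."""
    return x[:, 0] & (x[:, 1] | x[:, 2])

n = 200_000
x = (rng.random((n, 3)) < p).astype(int)

# Birnbaum importance estimated from simulation:
# I_B(i) = P(system up | i up) - P(system up | i down)
for i in range(3):
    up, down = x.copy(), x.copy()
    up[:, i], down[:, i] = 1, 0
    I_B = system_up(up).mean() - system_up(down).mean()
    print(f"component {i}: estimated Birnbaum importance = {I_B:.3f}")
```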
  • ABSTRACT: The results presented in this paper address two main problems of reliability theory: specifying the failure moments of a system and determining the solutions of renewal equations. Two situations are analyzed: 1. the system structure is not taken into consideration; 2. the system structure is known. In the first case, the adopted efficiency function is assumed to be the average operation time and, using methods from the theory of games, it can be proved that there is no equilibrium-type solution, so the failure moment of the system cannot be determined precisely. By solving specific maximum- or minimax-type problems, only the interval containing the failure point of the system can be obtained. The maximum-type optimal problem is solved by methods from the theory of games, while the minimax-type optimal problem is solved using Pontryagin's maximum principle. In the second case, starting from the graph structure associated with a system with renewal operations, the system of finite-difference equations and the system of differential equations associated with this graph are built directly. Applying the Laplace transform, the system availabilities and unavailabilities caused by its subsystems are determined. The failure moments of the system are determined as equilibrium points, but computational difficulties lead to only an approximate solution. Knowing the failure moments of the analyzed system leads to a reconsideration of its renewal policies. In practice, approximate solutions of the renewal equations and their separation curve are determined. With these elements the renewal process can be analyzed completely, based both on the failure moments of the system and on its renewal costs. (A numerical sketch of solving a renewal equation follows the citation below.)
    WSEAS Transactions on Mathematics, 01/2009; 8(2).
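To make the renewal-equation step concrete, the sketch below approximates the renewal function, i.e. the solution m(t) of m(t) = F(t) + ∫₀ᵗ m(t − u) f(u) du, by simple time discretization. Exponential lifetimes are assumed purely so that the exact answer m(t) = λt is available as a check; this is an illustrative choice, not the setting analyzed in the paper.

```python
import numpy as np

# Numerical solution of the renewal equation
#   m(t) = F(t) + integral_0^t m(t - u) f(u) du
# by a rectangle-rule discretization; exponential lifetimes are
# assumed for illustration (then m(t) = lam * t exactly).
lam = 0.5
h, T = 0.01, 10.0
t = np.arange(0.0, T + h, h)
F = 1.0 - np.exp(-lam * t)   # lifetime CDF
f = lam * np.exp(-lam * t)   # lifetime density

m = np.zeros_like(t)
for i in range(1, t.size):
    # m(t_i) = F(t_i) + h * sum_{j=1..i} f(t_j) * m(t_{i-j})
    conv = np.dot(f[1:i + 1], m[i - 1::-1]) * h
    m[i] = F[i] + conv

print("m(10) numeric :", m[-1])
print("m(10) exact   :", lam * T)
```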
  • ABSTRACT: Several software reliability growth models (SRGMs) have been presented in the literature over the last three decades. These SRGMs take into account different testing environments, depending on the size and efficiency of the testing team, the types of components and faults, the design of test cases, the software architecture, etc. The plethora of models makes model selection an uphill task. Recently, some authors have tried to develop a unifying approach to capture different growth curves, thus easing the model selection process. The work done in this area so far relates the fault removal process to testing/execution time and does not consider the consumption pattern of testing resources such as CPU time, manpower, and the number of executed test cases. More realistic modeling can result if the reliability growth process is studied with respect to the amount of testing effort expended. In this paper, we propose a unified framework for testing-effort-dependent software reliability growth models incorporating imperfect debugging and error generation. The proposed framework represents the realistic case of time delays between the different stages of the fault removal process, i.e., the failure observation/fault detection and fault removal/correction processes. The convolution of probability distribution functions is used to characterize the time differentiation between these two processes. Several existing and new effort-dependent models are derived using different types of distribution functions. We also provide data analyses based on actual software failure data sets for some of the models discussed and proposed in the paper. (A minimal effort-dependent model fit is sketched after the citation below.)
    WSEAS Transactions on Systems, 04/2009; 8(4):521-531.
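As a minimal sketch of a testing-effort-dependent growth model (not the unified framework proposed in the paper), the code below fits a Goel-Okumoto-style mean value function m(t) = a(1 − exp(−b·W(t))), driven by an assumed Weibull-type cumulative testing-effort curve W(t), to hypothetical weekly fault-count data using nonlinear least squares.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical cumulative fault counts observed over 10 weeks of testing
weeks = np.arange(1, 11, dtype=float)
faults = np.array([12, 21, 28, 34, 38, 41, 43, 45, 46, 47], dtype=float)

def W(t, alpha=60.0, beta=0.05, shape=1.5):
    """Assumed cumulative testing-effort curve (Weibull-type); parameters are illustrative."""
    return alpha * (1.0 - np.exp(-beta * t**shape))

def mean_value(t, a, b):
    """Effort-dependent mean value function: expected faults detected by time t."""
    return a * (1.0 - np.exp(-b * W(t)))

(a_hat, b_hat), _ = curve_fit(mean_value, weeks, faults, p0=[50.0, 0.05])
print(f"estimated total fault content a = {a_hat:.1f}")
print(f"estimated fault detection rate b = {b_hat:.4f}")
```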
