About
131 Publications
24,560 Reads
1,270 Citations
Introduction
reliability, risk, mechanical engineering, algebraic inequalities and their application, flow networks, domain-independent principles for improving reliability, stress analysis
Skills and Expertise
Current institution
Additional affiliations
January 2007 - December 2015
Publications (131)
The reverse engineering of a valid algebraic inequality often leads to a novel physical reality characterized by a distinct signature: the algebraic inequality itself. This paper uses reverse engineering of valid algebraic inequalities for generating new knowledge and substantially improving the reliability of common series-parallel systems. Our st...
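A classical inequality of this kind, given here only as an assumed illustration of the technique (it is a standard result for series-parallel arrangements, not necessarily the inequality derived in the paper): for a two-component series system with component reliabilities a and b and spare components with reliabilities c and d, redundancy applied at component level is never inferior to redundancy applied at system level,

\bigl(1-(1-a)(1-c)\bigr)\bigl(1-(1-b)(1-d)\bigr) \;\ge\; 1-(1-ab)(1-cd), \qquad 0 \le a,b,c,d \le 1 .

Reading the two sides as the reliabilities of two competing arrangements, and asking what physical system each side describes, is the kind of step the abstract refers to as reverse engineering of the inequality.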
The common domain-specific approach to reliability improvement and risk reduction created the false perception that effective risk reduction can be delivered successfully solely by using methods offered by the specific domain. This paper argues that reliability improvement and risk reduction are underpinned by general principles whose knowledge help...
The paper introduces a powerful method for developing lightweight designs and enhancing the load-bearing capacity of common structures. The method, referred to as the ‘method of aggregation’, has been derived from reverse engineering of sub-additive and super-additive algebraic inequalities. The essence of the proposed method is consolidating multi...
The paper reveals that a prediction of system reliability on demand based on average reliabilities on demand of components is a fundamentally flawed approach. A physical interpretation of algebraic inequalities demonstrates that assuming average component reliabilities on demand entails an overestimation of the system reliability on demand for syst...
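A simple special case showing how the overestimation arises (a sketch under assumed conditions, not the paper's full argument): for a series system of two components of different varieties with reliabilities on demand r_1 and r_2, replacing both values with the average \bar r = (r_1+r_2)/2 predicts a system reliability of

\bar r^{\,2} \;=\; \left(\frac{r_1+r_2}{2}\right)^{2} \;\ge\; r_1 r_2 ,

by the arithmetic mean-geometric mean inequality, with equality only when r_1 = r_2. The averaged prediction therefore never understates, and in general overstates, the true series-system reliability r_1 r_2.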
New results related to maximizing the reliability of common systems with interchangeable redundancies at a component level have been obtained by using the method of algebraic inequalities. It is shown that for systems with independently working components with interchangeable redundancies, the system reliability corresponding to a symmetric arrange...
The paper treats the important problem related to risk controlled by the simultaneous presence of critical events, randomly appearing on a time interval, and shows that the expected time fraction of simultaneously present events does not depend on the distribution of the events' durations. In addition, the paper shows that the probability of simultaneous...
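A minimal Monte Carlo sketch of the stated insensitivity, under assumed modelling choices (a fixed number of events with start times uniform on the interval, overlap measured on a fine time grid; the function names and parameter values are hypothetical): two duration distributions with the same mean should give closely matching expected overlap fractions.

import numpy as np

rng = np.random.default_rng(42)

def overlap_fraction(duration_sampler, n_events=5, horizon=100.0,
                     n_runs=2000, grid=2000):
    """Monte Carlo estimate of the expected fraction of the interval during
    which two or more randomly placed events are simultaneously present."""
    t = np.linspace(0.0, horizon, grid)
    fractions = np.empty(n_runs)
    for k in range(n_runs):
        starts = rng.uniform(0.0, horizon, n_events)
        durations = duration_sampler(n_events)
        # number of events covering each grid point
        active = ((t >= starts[:, None]) &
                  (t < (starts + durations)[:, None])).sum(axis=0)
        fractions[k] = np.mean(active >= 2)
    return fractions.mean()

mean_duration = 4.0
fixed = lambda n: np.full(n, mean_duration)            # constant durations
expo = lambda n: rng.exponential(mean_duration, n)     # same mean, different law

print("fixed durations      :", overlap_fraction(fixed))
print("exponential durations:", overlap_fraction(expo))

The two printed estimates come out close, differing only by Monte Carlo noise and small edge effects at the ends of the interval, which is consistent with the claim that the expected fraction depends on the mean duration rather than on the full duration distribution.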
A framework for topology optimization of repairable flow networks and reliability networks is presented. The optimization consists of determining the optimal network topology with a maximum transmitted flow achieved within a specified budget for building the network. A method for a topology optimization of reliability networks of safety-critical sy...
The reverse engineering of a valid algebraic inequality often leads to a projection of a novel physical reality characterized by a distinct signature: the algebraic inequality itself. This paper uses reverse engineering of valid algebraic inequalities for generating new knowledge and substantially improving the reliability of common series-parallel...
The paper explores the probabilistic interpretations of algebraic inequalities and presents several findings. First, the inequality of the additive ratios can be used to increase the probability of an event occurring within a set of mutually exclusive and exhaustive events. The interpretation of this inequality produced a counter‐intuitive result,...
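One standard inequality of additive ratios that admits such a probabilistic reading, quoted here as a hedged illustration (the excerpt does not reproduce the paper's exact inequality), is the mediant inequality: if a_1/b_1 \le a_2/b_2 with a_i \ge 0 and b_i > 0, then

\frac{a_1}{b_1} \;\le\; \frac{a_1+a_2}{b_1+b_2} \;\le\; \frac{a_2}{b_2} .

Interpreted with a_i favourable outcomes out of b_i equally likely outcomes in two mutually exclusive groups, it bounds the probability of the event in the pooled population by the two group proportions.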
This article examines the profound impact on the forecasted system reliability when one assumes average reliabilities on demand for components of various kinds but of the same type. In this article, we use reverse engineering of a novel algebraic inequality to demonstrate that the prevalent practice of using average reliability on demand for compon...
Problems with the current methods for reliability improvement are discussed, along with two generic methods for reliability improvement. The paper argues that reliability improvement is underpinned by common principles that provide key input to the design process. The domain-independent methods change the way engineers and scientists approach reliab...
The paper discusses applications of the domain-independent method of algebraic inequalities, for reducing uncertainty and risk. Algebraic inequalities have been used for revealing the intrinsic reliability of competing systems and ranking the systems in terms of reliability in the absence of knowledge related to the reliabilities of their component...
A special class of general inequalities has been identified that provides the opportunity for generating new knowledge that can be used for optimising systems and processes in diverse areas of science and technology. It is demonstrated that inequalities belonging to this class can always be interpreted meaningfully if the variables and separate ter...
A method for optimising the design of systems and processes has been introduced that consists of interpreting the left- and right-hand sides of a correct algebraic inequality as the outputs of two alternative design configurations delivering the same required function. In this way, on the basis of an algebraic inequality, the superiority of one...
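A hedged illustration of this interpretation step (not necessarily the configuration treated in the paper): by the arithmetic mean-harmonic mean inequality,

\left(\frac{1}{k_1}+\frac{1}{k_2}+\frac{1}{k_3}\right)^{-1} \;\le\; \frac{k_1+k_2+k_3}{9} ,

so three elastic elements of stiffnesses k_1, k_2, k_3 connected in series (left-hand side) can never be stiffer than a series assembly of three elements each having the mean stiffness (k_1+k_2+k_3)/3 (right-hand side). The inequality alone ranks the two design configurations, with no further analysis of the individual stiffness values.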
The paper develops an important method related to using algebraic inequalities for uncertainty and risk reduction and enhancing systems performance. The method consists of creating relevant meaning for the variables and different parts of the inequalities and linking them with real physical systems or processes. The paper shows that inequalities ba...
The paper introduces two fundamental approaches for reliability improvement and risk reduction by using nontrivial algebraic inequalities: (a) by proving an inequality derived or conjectured from a real system or process and (b) by creating meaningful interpretation of an existing nontrivial abstract inequality relevant to a real system or process....
Deliberate weaknesses are points of weakness towards which a potential failure is channelled in order to limit the magnitude of the consequences of failure. The article shows that reducing risk by deliberate weaknesses is a powerful domain-independent method which transcends mechanical engineering and works in various unrelated areas of human...
This article introduces a powerful domain-independent method for improving reliability and reducing risk based on algebraic inequalities which transcends mechanical engineering and can be applied in many unrelated domains. The article demonstrates the application of inequalities to reduce the risk of failure by producing sharp uncertainty bounds fo...
The article introduces new domain-independent methods for improving reliability and reducing risk based on algebraic inequalities and chain-rule segmentation. Two major advantages of algebraic inequalities for reducing risk have been demonstrated: (1) ranking risky prospects in the absence of any knowledge related to the individual building parts a...
The popular domain-specific approach to risk reduction created the illusion that efficient risk reduction can be delivered successfully solely by using methods offered by the specific domain. As a result, many industries have been deprived of efficient risk-reducing strategies and solutions. This paper argues that risk reduction is underpinned by doma...
This chapter introduces an important domain‐independent reliability improvement and risk reduction method referred to as 'the method of separation'. Harmful interaction of factors critical to reliability and risk is a major source of failures. Separating risk‐critical factors to reduce this harmful interaction is therefore a major avenue for improv...
This chapter summarizes general guidelines on risk management. The common approach to risk reduction is the domain‐specific approach which relies heavily on root cause analysis and detailed knowledge from the specific domain. The domain‐specific approach to risk reduction created an illusion: that efficient risk reduction can be delivered successfu...
The paper introduces the principle of minimised rate of damage accumulation as a domain-independent principle of reliability improvement and risk reduction. A classification is proposed of methods for reducing the rate of damage accumulation. The paper introduces the method of substitution for reducing the rate of damage accumulation. The original...
The paper provides for the first time a comprehensive introduction to the mechanisms through which the method of separation achieves risk reduction and to the ways it can be implemented in engineering designs. The concept of stochastic separation of critical random events on a time interval, which consists of guaranteeing with a specified probabil...
The paper introduces the method of separation for improving reliability and reducing technical risk and provides insight into the various mechanisms through which the method of separation attains this goal. A comprehensive classification of techniques for improving reliability and reducing risk, based on the method of separation has been proposed f...
The paper treats the important problem related to risk controlled by the simultaneous presence of critical events, randomly appearing on a time interval, and shows that the expected time fraction of simultaneously present events is insensitive to the distribution of the events' durations. In addition, the paper shows that the probability of simultaneous p...
The paper provides analysis of the various mechanisms through which the segmentation improves reliability and reduces technical risk and presents a classification of risk-reduction techniques based on segmentation. On the basis of theoretical arguments and examples, it is demonstrated that segmentation increases the tolerance of components to flaws...
A number of new techniques for reliability improvement and risk reduction based on the inversion method, such as 'inverting design variables,' 'inverting by maintaining an invariant,' 'inverting resulting in a reinforcing counter-force,' 'negating basic required functions' and 'moving backwards to general and specific contributing factors' have be...
A powerful method referred to as stochastic pruning is introduced for analysing the performance of common complex systems whose component failures follow a homogeneous Poisson process. The method has been applied to create a very fast solver for estimating the production availability of large repairable flow networks with complex topology. It is sh...
A comprehensively updated and reorganized new edition. The updates include comparative methods for improving reliability; methods for optimal allocation of limited resources to achieve a maximum risk reduction; methods for improving reliability at no extra cost and building reliability networks for engineering systems.
Includes:
A unique set of 46...
Keywords: Failure; Minimal transportation cost; Multiple destinations; Multiple origins; Shortest path. On the basis of counterexamples, it is demonstrated that the time-honoured successive shortest path strategy fails to achieve a minimal total length of the transportation routes on a network with multiple interchangeable origins and multiple destinat...
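A toy instance in the same spirit as such counterexamples (the origins, destinations and distances below are hypothetical, not taken from the paper) shows how committing to the shortest remaining route at each step can far exceed the optimal total length:

from itertools import permutations

# hypothetical distances from origins O1, O2 to destinations D1, D2
dist = {("O1", "D1"): 1.0, ("O1", "D2"): 2.0,
        ("O2", "D1"): 1.5, ("O2", "D2"): 10.0}
origins = ["O1", "O2"]
destinations = ["D1", "D2"]

# greedy successive-shortest-route strategy: repeatedly commit the shortest
# remaining origin-destination route
remaining_o, remaining_d, greedy_total = set(origins), set(destinations), 0.0
while remaining_o:
    o, d = min(((o, d) for o in remaining_o for d in remaining_d),
               key=lambda od: dist[od])
    greedy_total += dist[(o, d)]
    remaining_o.remove(o)
    remaining_d.remove(d)

# exhaustive search over all one-to-one assignments
best_total = min(sum(dist[(o, d)] for o, d in zip(origins, perm))
                 for perm in permutations(destinations))

print("greedy total :", greedy_total)   # 1.0 + 10.0 = 11.0
print("optimal total:", best_total)     # 2.0 + 1.5  = 3.5

The greedy strategy locks O1 to the nearest destination D1 and is then forced onto the long O2-D2 route, whereas the optimal assignment accepts a slightly longer first route and achieves less than a third of the greedy total.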
This paper focuses on an important and very common problem and presents a theoretical framework for solving it: “determining the risk of unsatisfied request from users placing random demands on a time interval”. For the common case of a single source servicing a number of consumers, a closed-form solution has been derived for the risk of collision...
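A simplified special case illustrates the structure of such closed-form results (a hedged sketch under assumed conditions, not the paper's general model): n users place demands at start times uniformly distributed on an interval of length L, each occupying the single source for the same duration d. A collision occurs when some gap between consecutive start times is smaller than d, and the standard uniform-spacings result gives

P(\text{collision}) \;=\; 1-\left(1-\frac{(n-1)\,d}{L}\right)^{\! n}, \qquad (n-1)\,d \le L .

The paper's framework addresses the general servicing problem; the expression above only indicates the type of dependence on n, d and L that appears in results of this kind.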
Failure rate and failure mode data are of vital importance for reliability analysis. A basic initial step in reliability data analysis is to verify whether the data have been collected correctly, since the quality of field data ranges from misleading to useful. This is followed by exploratory data analysis, which involves summarisi...
The paper features a number of new generic principles for reducing technical risk with a very wide application area. Permutations of interchangeable components/operations in a system can reduce significantly the risk of system failure at no extra cost. Reducing the time of exposure and the space of exposure can also reduce risk significantly. Techn...
A simple yet powerful general risk-reduction principle has been formulated related to systems each state of which can be obtained from a given initial state by adding the effects from a specified set of modifications. An important application of the formulated principle has been found in determining the global extremum of multivariable functions wh...
This study exposes a critical weakness of the (0-1) knapsack dynamic programming approach, widely used for optimal allocation of resources. The (0-1) knapsack dynamic programming approach could waste resources on insignificant improvements and prevent the more efficient use of the resources to achieve maximum benefit. Despite the numerous extensive s...
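For reference, a minimal sketch of the standard (0-1) knapsack dynamic programming formulation that the study critiques; the risk-reduction option values and costs below are hypothetical.

def knapsack_01(values, costs, budget):
    """Standard (0-1) knapsack dynamic programming: maximise total value
    (e.g. removed risk) subject to an integer budget constraint."""
    n = len(values)
    best = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        v, c = values[i - 1], costs[i - 1]
        for b in range(budget + 1):
            best[i][b] = best[i - 1][b]
            if c <= b:
                best[i][b] = max(best[i][b], best[i - 1][b - c] + v)
    # recover the selected options
    chosen, b = [], budget
    for i in range(n, 0, -1):
        if best[i][b] != best[i - 1][b]:
            chosen.append(i - 1)
            b -= costs[i - 1]
    return best[n][budget], chosen[::-1]

# hypothetical options: (removed risk, cost)
values = [10, 10, 1]
costs = [5, 5, 3]
print(knapsack_01(values, costs, budget=13))   # -> (21, [0, 1, 2])

On this toy instance the dynamic programme fills the last three units of budget with an option that removes only one unit of risk, the kind of marginal, budget-filling selection the study appears to criticise as an inefficient use of resources.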
A fracture condition incorporating the most unfavourable orientation of the crack has been derived to improve the safety of loaded brittle components with complex shape, whose loading results in a three-dimensional stress state. With a single calculation, an answer is provided to the important question whether a randomly oriented crack at a particu...
The paper introduces the concept ‘dominated parasitic flow loops’ and demonstrates that these occur naturally in real networks transporting interchangeable commodity. The dominated parasitic flow loops are augmentable broken loops which have a dominating flow in one particular direction of traversing. The dominated parasitic flow loops are associat...
The paper states and proves an important result related to the theory of flow networks with disturbed flows: “the throughput flow constraint in any network is always equal to the throughput flow constraint in its dual network”. After the failure or congestion of several edges in the network, the throughput flow constraint theorem provides the basis...
Parasitic flow loops in real networks are associated with transportation losses, congestion and increased pollution of the environment. The paper shows that complex networks dispatching the same type of interchangeable commodity exhibit parasitic flow loops and the commodity does not need to be physically travelling around a closed contour for a pa...
Repairable flow networks are a new area of research, which analyzes the repair and flow disruption caused by failures of components in static flow networks. This book addresses a gap in current network research by developing the theory, algorithms and applications related to repairable flow networks and networks with disturbed flows. The theoretica...
Directed flow loops are highly undesirable because they are associated with wastage of energy for maintaining them and entail big losses to the world economy. It is shown that directed flow loops may appear in networks even if the dispatched commodity does not physically travel along a closed contour. Consequently, a theorem giving the necessary an...
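A minimal sketch of why such loops are pure waste (the edge flows below are hypothetical): cancelling the common flow around a directed loop leaves the net inflow and outflow of every node, and hence every delivery, unchanged, while reducing the total quantity of commodity being moved.

# hypothetical edge flows containing the directed loop a -> b -> c -> a
flow = {("a", "b"): 7.0, ("b", "c"): 5.0, ("c", "a"): 3.0,
        ("s", "a"): 4.0, ("b", "t"): 2.0, ("c", "t"): 2.0}

def net_balance(f):
    """Net inflow minus outflow at every node."""
    bal = {}
    for (u, v), x in f.items():
        bal[u] = bal.get(u, 0.0) - x
        bal[v] = bal.get(v, 0.0) + x
    return bal

loop = [("a", "b"), ("b", "c"), ("c", "a")]
slack = min(flow[e] for e in loop)              # largest cancellable amount
cancelled = {e: x - (slack if e in loop else 0.0) for e, x in flow.items()}

assert net_balance(flow) == net_balance(cancelled)      # deliveries unchanged
print("total edge flow before:", sum(flow.values()))       # 23.0
print("total edge flow after :", sum(cancelled.values()))  # 14.0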
We state and prove a theorem regarding the average production availability of a repairable flow network, composed of independently working edges, whose failures follow a homogeneous Poisson process. The average production availability is equal to the average of the maximum output flow rates on demand from the network, calculated after removing the...
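A brute-force Monte Carlo check consistent with the theorem's statement (a sketch only: the network topology, capacities and edge availabilities are hypothetical, and the generic max-flow routine from networkx stands in for a purpose-built solver): sample which edges are available on each demand, compute the maximum output flow of the surviving network, and average.

import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# hypothetical production network: (from, to, capacity, availability)
edges = [("source", "a", 60.0, 0.95),
         ("source", "b", 60.0, 0.90),
         ("a", "sink", 50.0, 0.92),
         ("b", "sink", 50.0, 0.97),
         ("a", "b", 20.0, 0.99)]

def max_flow_on_demand():
    """Maximum output flow for one demand: each edge is present
    independently with its availability, absent otherwise."""
    g = nx.DiGraph()
    g.add_nodes_from(["source", "sink"])
    for u, v, cap, avail in edges:
        if rng.random() < avail:
            g.add_edge(u, v, capacity=cap)
    value, _ = nx.maximum_flow(g, "source", "sink")
    return value

samples = [max_flow_on_demand() for _ in range(5000)]
print("average of the maximum output flow rates on demand:", np.mean(samples))

The printed average corresponds to the right-hand side of the stated theorem; the theorem's content is that, for independently working edges whose failures follow a homogeneous Poisson process, this quantity equals the average production availability of the repairable network.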
The paper discusses a new fundamental result in the theory of flow networks referred to as the ‘dual network theorem for static flow networks’. The theorem states that the maximum throughput flow in any static network is equal to the sum of the capacities of the edges coming out of the source, minus the total excess flow at all excess nodes, plus the...
A number of fundamental theorems related to non-reconfigurable repairable flow networks have been stated and proved. For a specified source-to-sink path, the difference between the sum of the unavailabilities of its forward edges and the sum of the unavailabilities of its backward edges is the path resistance. In a repairable flow network, the abse...
The article features exact algorithms for reduction of technical risk by (1) optimal allocation of resources in the case where the total potential loss from several sources of risk is a sum of the potential losses from the individual sources; (2) optimal allocation of resources to achieve a maximum reduction of system failure; and (3) making an opt...
The article discusses a number of fundamental results related to determining the maximum output flow in a network after edge failures. On the basis of four theorems, we propose very efficient augmentation algorithms for restoring the maximum possible output flow in a repairable flow network after an edge failure. In many cases, the running time of...
A theoretical framework and models are proposed for reliability analysis and setting reliability requirements based on the cost of failure. It is demonstrated that a high availability target does not necessarily limit the risk of failure or minimize the total losses. The proposed models include:
(i) models for determining the value from the reliabi...
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. In this paper it is demonstrated that increasing the reliability of the system does not always mean decreasing the losses from failures. An inappropriate increase of the reliability of the system may lead...
The paper discusses new, very efficient augmentation algorithms and theorems related to maximising the flow in single-commodity and multi-commodity networks. For the first time, efficient algorithms with linear average running time O(m) in the size m of the network are proposed for restoring the maximum flow in single-commodity and multi-commodity...
We propose a framework for analysis and optimization of repairable flow networks by (i) stating and proving the maximum flow minimum flow path resistance theorem for networks with merging flows; (ii) a discrete-event solver for determining the variation of the output flow from repairable flow networks with complex topology; (iii) a procedure for dete...
For repairable flow networks with complex topology, a simple and efficient algorithm is proposed for minimising the lost flow due to component failures during a specified time interval, at a specified output flow rate. The algorithm is based on two fundamental properties of repairable flow networks which involve the new concept 'specific resistance...
A simple, easily reproduced experiment based on artificial flaws has been proposed which demonstrates that the distribution of the minimum failure load does not necessarily follow a Weibull distribution. The experimental result presented in the paper clearly indicates that the Weibull distribution, with its strictly increasing function, is incapable...
The paper presents a discrete-event simulator of repairable flow networks with complex topology. The solver is based on an efficient algorithm for maximizing the flow in repairable flow networks with complex topology. The discrete-event solver maximizes the flow through the repairable network upon each component failure and return from repair. This...
A fundamental theorem related to maximizing the flow in a repairable flow network with arbitrary topology has been stated and proved. ‘The flow transmitted through a repairable network with arbitrary topology and a single source and sink can be maximized by (i) determining all possible flow paths from the start node (the source) to the end node (t...
The exact upper bound of the variance of properties from multiple sources is attained from sampling not more than two sources. This paper discusses important applications of this result referred to as variance upper bound theorem. A new conservative, non-parametric estimate has been proposed for the capability index of a process whose output combin...
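For orientation, the quantity concerned can be written with the standard mixture-variance decomposition (this is background, not the theorem or its proof): if the property is drawn from source i with probability p_i, and source i has mean \mu_i and variance \sigma_i^2, then

V(p) \;=\; \sum_i p_i\,\sigma_i^2 \;+\; \sum_i p_i\bigl(\mu_i-\bar\mu\bigr)^2, \qquad \bar\mu=\sum_i p_i\,\mu_i .

The variance upper bound theorem concerns the exact maximum of this quantity over the sampling proportions p_i, and states that the maximum is attained when no more than two of the sources are sampled.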
The utility of the Weibull distribution has been traditionally justified with the belief that it is the mathematical expression of the weakest-link concept in the case of flaws locally initiating failure in a stressed volume. This paper challenges the Weibull distribution as a mathematical formulation of the weakest-link concept and its suitability...
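For orientation, the classical formulation being challenged is the Weibull weakest-link expression for the probability of failure of a uniformly stressed volume V at stress \sigma (standard notation, given here only as background):

P_f(\sigma) \;=\; 1-\exp\!\left[-\frac{V}{V_0}\left(\frac{\sigma}{\sigma_0}\right)^{m}\right],

with Weibull modulus m and scale parameters \sigma_0 and V_0.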
Calculating the absolute reliability built in a product is often an extremely difficult task because of the complexity of the physical processes and physical mechanisms underlying the failure modes, the complex influence of the environment and the operational loads, the variability associated with reliability-critical design parameters and the non-...
A quantitative framework is presented dealing with competing opportunity and failure events in a finite time interval. The framework is based on the new fundamental concepts of potential benefit, potential loss and potential gain, for which closed-form expressions regarding their distributions are derived and verified by simulation. It is demonstrat...
A powerful new technology is proposed for creating reliable and robust designs, characterized by a high resistance to failure. The new technology is based on a new mixed-mode failure criterion and a computationally very efficient simulation technique for calculating the probability of failure of a component with complex shape. The new technology hand...
A basic principle for risk-based design has been formulated: the larger the losses from failure of a component, the smaller the upper bound of its hazard rate and the larger the required minimum reliability level of the component. A generalized version and analytical expression for this important principle have also been formulated for multiple fail...
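One way to express the principle quantitatively (a sketch under assumed notation: constant hazard rate \lambda, operating interval of length t, loss C given failure and a maximum tolerable risk K_{\max} < C): requiring the risk of failure (1-e^{-\lambda t})\,C \le K_{\max} yields the upper bound

\lambda \;\le\; -\frac{1}{t}\,\ln\!\left(1-\frac{K_{\max}}{C}\right),

which decreases as the loss from failure C increases, in line with the stated principle.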
An efficient algorithm is proposed for determining the quantity of transferred flow and the losses from failures of repairable stochastic networks with converging flows. We show that the computational speed related to determining the variation of the flow through a stochastic flow network can be improved enormously if the topology of the network is...
An efficient algorithm has been proposed for determining the probability of failure of structures containing flaws. The algorithm is based on a powerful generic equation, a central parameter in which is the conditional individual probability of initiating failure by a single flaw. The equation avoids conservative predictions related to the probabil...
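A sketch of how an equation of this form arises (under assumed conditions: flaw locations following a homogeneous Poisson process with number density \lambda in the stressed volume V, flaws acting independently, and \bar F denoting the conditional individual probability that a single flaw in the stressed component initiates failure): conditioning on the number of flaws gives

P_f \;=\; 1-\sum_{k=0}^{\infty} e^{-\lambda V}\,\frac{(\lambda V)^k}{k!}\,(1-\bar F)^{k} \;=\; 1-\exp\bigl(-\lambda V\,\bar F\bigr).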
A new method for optimisation of the topology of engineering systems is proposed, based on reliability allocation by minimising the total cost – the sum of the cost for building the system and the risk of failure. The essence of the proposed method can be summarised in three steps: developing a system topology with the maximum possible reliability;...
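A toy numerical sketch of the trade-off in this approach (the cost figures and the single design parameter, the number of identical parallel redundant units, are hypothetical): total cost = cost of building the system + risk of failure, where the risk is the probability of system failure multiplied by the loss given failure.

# hypothetical figures for choosing the number of identical parallel units
unit_cost = 10_000.0        # cost of building one unit
unit_reliability = 0.90     # reliability of one unit over the design life
loss_given_failure = 1_000_000.0

def total_cost(k: int) -> float:
    p_system_failure = (1.0 - unit_reliability) ** k   # all k parallel units fail
    return k * unit_cost + p_system_failure * loss_given_failure

for k in range(1, 6):
    print(k, round(total_cost(k), 1))
best_k = min(range(1, 6), key=total_cost)
print("number of units minimising total cost:", best_k)   # 2 on these figures

On these figures the minimum is at two units: the second unit removes far more risk than it costs, while a third unit costs more than the residual risk it would remove.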
For a long time, conventional reliability analyses have been oriented towards selecting the more reliable system and preoccupied with maximising the reliability of engineering systems. On the basis of counterexamples however, we demonstrate that selecting the more reliable system does not necessarily mean selecting the system with the smaller losse...
This book has been written with the intention to fill two big gaps in the reliability and risk literature: the risk-based reliability analysis as a powerful alternative to the traditional reliability analysis and the generic principles for reducing technical risk.
An important theme in the book is the generic principles and techniques for reducing...
Powerful equations and an efficient algorithm are proposed for determining the probability of failure of loaded components with complex shape, containing multiple types of flaws. The equations are based on the concept ‘conditional individual probability of initiating failure’ characterising a single flaw given that it is in the stressed component....
Purpose
The aim of this paper is to propose efficient models and algorithms for reliability value analysis of complex repairable systems linking reliability and losses from failures.
Design/methodology/approach
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from fa...
The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger...
Presenting a radically new approach and technology for setting reliability requirements, this superb book also provides the first comprehensive overview of the M/F-FOP philosophy and its applications. Each chapter covers probabilistic models, statistical and numerical procedures, applications and/or case studies. Comprehensively examines a new metho...
A new methodology is proposed for determining the probability of failure of an arbitrarily loaded component with an arbitrary shape, containing internal flaws. An important application area of the proposed equation is developing optimised designs and loading, associated with low probability of failure. Methods have also been developed for specifying...
A new reliability measure is proposed and equations are derived which determine the probability of existence of a specified set of minimum gaps between random variables following a homogeneous Poisson process in a finite interval. Using the derived equations, a method is proposed for specifying the upper bound of the random variables' number densit...
A powerful equation is derived for determining the probability of safe/failure states dependent on random variables, following a homogeneous Poisson process in a finite domain. The equation is generic and gives the probability of all types of relative configurations of the random variables governing reliability. The significance of the derived equati...