# A two-stage approach for multi-objective decision making with applications to system reliability optimization

**ABSTRACT** This paper proposes a two-stage approach for solving multi-objective system reliability optimization problems. In this approach, a Pareto optimal solution set is initially identified at the first stage by applying a multiple objective evolutionary algorithm (MOEA). Quite often there are a large number of Pareto optimal solutions, and it is difficult, if not impossible, to effectively choose the representative solutions for the overall problem. To overcome this challenge, an integrated multiple objective selection optimization (MOSO) method is utilized at the second stage. Specifically, a self-organizing map (SOM), with the capability of preserving the topology of the data, is applied first to classify those Pareto optimal solutions into several clusters with similar properties. Then, within each cluster, the data envelopment analysis (DEA) is performed, by comparing the relative efficiency of those solutions, to determine the final representative solutions for the overall problem. Through this sequential solution identification and pruning process, the final recommended solutions to the multi-objective system reliability optimization problem can be easily determined in a more systematic and meaningful way.




Zhaojun Li (a), Haitao Liao (b, \*), David W. Coit (c)

(a) Department of Industrial Engineering, University of Washington, Seattle, WA 98195, USA
(b) Nuclear Engineering Department/Industrial and Information Engineering Department, University of Tennessee, Knoxville, TN 37996, USA
(c) Department of Industrial and Systems Engineering, Rutgers University, Piscataway, NJ 08854, USA

Article history: Received 27 February 2008; received in revised form 13 February 2009; accepted 24 February 2009; available online 6 March 2009.

Keywords: System reliability; Multi-objective optimization; Self-organizing map; Data envelopment analysis

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

System reliability analysis and optimization is important for efficiently utilizing available resources and part types and for developing a preferred or optimal system design architecture. In this paper, a formulation and a practical solution methodology are presented for the case where there are multiple design objectives, yet a decision-maker must ultimately select one or a small set of solutions for further consideration. In this new approach, prospective solutions are clustered and then pruned so that a decision-maker can focus only on a smaller subset of promising solutions.

Redundancy strategies have been widely used to improve the reliability of a system by incorporating redundant components or subsystems. Fig. 1 depicts a typical series–parallel redundant system, in which multiple functionally equivalent components are connected in parallel. When determining an optimal redundancy allocation strategy, the type(s) of components and the quantities of each type must be decided.

Traditionally, the goal of redundancy allocation is to maximize system reliability under various physical or budgetary constraints. Such problem formulations are single-objective integer programming problems, and they are NP-hard as demonstrated by Chern [1]. In the literature, single-objective redundancy allocation problems (RAPs) have been extensively studied through either mathematical programming [2–4] or heuristic approaches [5–8]. Kuo et al. [9] provide a comprehensive review of this subject.

Mathematical programming approaches for RAP usually restrict the solution space by considering only one component choice for each subsystem, without allowing for a mixture of functionally equivalent components. Alternatively, Levitin et al. [10] present an application of component mixing in the structure optimization of a power system. Coit and Smith [5] demonstrate that considering component mixing in system redundancy increases the problem solution space, and thus may result in higher system reliability values. When a mixture of component choices is allowed, heuristic algorithms such as the genetic algorithm (GA) [5] and tabu search [6] are usually employed. Additionally, when considering estimation uncertainty, mixing functionally equivalent components may potentially reduce the variance of the system reliability estimate [11] and minimize the likelihood of common cause failures. When common cause failures cannot be avoided, the formulation for RAP in the presence of common cause failures proposed by Ramirez-Marquez and Coit [12] can be utilized.

Abbreviations: RAP, redundancy allocation problem; SOM, self-organizing map; BMU, best matching unit in SOM; GA, genetic algorithm; MOEA, multiple objective evolutionary algorithm; NSGA, non-dominated sorting genetic algorithm; NSGA-II, fast non-dominated sorting genetic algorithm; MOSO, multiple objective selection optimization; DEA, data envelopment analysis; DMU, decision making unit in DEA.

\* Corresponding author. Tel.: +1 865 974 0984; fax: +1 316 978 3742. E-mail addresses: stevenli777@gmail.com (Z. Li), hliao4@utk.edu (H. Liao), coit@rutgers.edu (D.W. Coit).

Reliability Engineering and System Safety 94 (2009) 1585–1592. doi:10.1016/j.ress.2009.02.022. 0951-8320/$ - see front matter © 2009 Elsevier Ltd. All rights reserved.

In many engineering applications, multiple objectives may be involved in RAP, e.g., maximizing the system reliability while minimizing the total cost as well as the total weight. Such problem formulations are quite natural, particularly in aircraft design or the development of a variety of medical devices. In a multi-objective formulation, the problem becomes more complex compared to its single-objective counterpart.

In the past several years, the determination of efficient methods for multi-objective system reliability optimization has become a central focus in the area of reliability engineering. Dhingra [13] applies a multi-objective optimization approach to maximize the system reliability and minimize the resource consumption (cost, weight, and volume). Misra and Sharma [14,15] use integer programming and a min–max concept to obtain the Pareto optimal solutions. Li [16] considers a dynamic programming approach to solve Dhingra's problem. Coit et al. [17] propose a multi-objective formulation to maximize the estimated system reliability and minimize the variance of that estimate. Busacca et al. [18] propose a multiple objective genetic algorithm (MOGA) to identify the Pareto optimal solutions and utilize this methodology in the design of a standby safety system in a nuclear power plant. Tian and Zuo [19] propose a multi-objective optimization model for redundancy allocation for multi-state series–parallel systems; a physical programming approach is used to address the conflicting nature of the different objectives, and the problem is solved using GA.

For a multi-objective system reliability optimization problem, one simple approach is to combine the multiple objectives into a single objective by identifying a suitable value or utility function of the decision-maker(s). If this can be done accurately, it is a very credible and useful approach. In practice, however, it is very difficult, if not impossible, to find an accurate utility function that truly represents the decision preferences of one or more decision-makers. As a result, extensive research has been conducted in an attempt to obtain a Pareto optimal solution set instead of a single optimal solution. However, one challenging problem associated with the Pareto optimum approach is how to effectively prune a possibly huge Pareto optimal solution set in order to select a few representative solutions for implementation.

To reduce the size of the Pareto optimal set, Taboada et al. [20] and Taboada and Coit [21] apply a k-means approach to classify a Pareto optimal solution set and select the centroid solution from each cluster as a representative solution. As a widely used unsupervised classification approach, the k-means method classifies data points based on mean squared errors. However, this method is somewhat subjective because the number of clusters must be specified beforehand; to overcome this issue, Taboada and Coit [21] recommend the use of silhouette plots to determine the value of k. Another challenge associated with the k-means method is that, as the number of objectives increases, it becomes more difficult to interpret the results of the k-means classification, let alone visualize them in a lower-dimensional space for making a straightforward final decision. Although selecting the centroid solutions as the representative solutions provides one possible choice, this method may not have a definitive economic meaning. In fact, from the economic perspective, those solutions with high effectiveness in terms of investment vs. gain should be considered instead of the centroids.

To address the multi-objective optimization problem in a more practical and efficient way, a new two-stage approach is proposed in this paper. Fig. 2 shows the flow chart of this approach. At the first stage, a multiple objective evolutionary algorithm (MOEA) such as the non-dominated sorting genetic algorithm (NSGA) is applied to identify a representative Pareto optimal solution set. At the second stage, the identified Pareto optimal solutions are first classified into several clusters by applying the self-organizing map (SOM) method. Afterwards, non-efficient solutions are eliminated from each cluster, and representative efficient solutions are identified through the data envelopment analysis (DEA) method, which is a special multiple objective selection optimization (MOSO) approach.

The new approach attempts to extend and improve the methodology proposed by Taboada et al. [20] and Taboada and Coit [21] in two respects. Compared to the k-means classification method, SOM has the following advantages: (1) the similarity of data points is measured in terms of both the Euclidian distance and the angle between training samples/vectors; (2) the classification results can be effectively mapped to an output space with fewer dimensions without imposing extra constraints on the dimensions of the training samples/vectors; (3) the classification results can be easily visualized due to the lower dimension of the output space. Regarding the Pareto solution reduction process, DEA is used as an enabling tool for eliminating the non-efficient solutions based on the relative efficiency criterion. With the underlying economic implication of the input–output analysis, this method is more meaningful in evaluating the effectiveness of a solution compared to the k-means method, which determines the centroid representative solutions based only upon the Euclidian distance. Due to those advantages, the proposed two-stage approach not only identifies a Pareto optimal solution set but also significantly reduces the number of solutions from the new perspective of relative efficiency. This approach is expected to greatly facilitate multi-objective decision making in system reliability optimization.

Fig. 2. Two-stage method for solving multi-objective RAP. Stage one: an MOEA (NSGA, NSGA-II) identifies the Pareto optimal solution set of the RAP. Stage two: data mining (SOM) classifies the Pareto solution set, and DEA eliminates non-efficient solutions to yield the final representative solutions.

Fig. 1. Example of series–parallel redundancy system: subsystems 1 through s connected in series, with each subsystem i containing n_i functionally equivalent components in parallel.

The remainder of this paper is organized as follows. In Section 2, the multi-objective optimization problem is formulated, and the solution algorithm is provided. Section 3 briefly describes several data classification methods, including both supervised and unsupervised statistical learning methods. In Section 4, the DEA method is introduced, and its potential application in reducing the size of a Pareto optimal solution set is addressed. A numerical example is provided in Section 5 to demonstrate potential applications of the proposed method in solving multi-objective RAP. Section 6 draws conclusions.

2. Multi-objective optimization problem

2.1. Mathematical formulation

Let x be a vector containing p decision variables. Mathematically, an optimization problem with n objective functions can be expressed as [22]

minimize/maximize f_i(x), i = 1, 2, ..., n

subject to:

g_j(x) ≤ 0, j = 1, 2, ..., J,
g_k(x) = 0, k = 1, 2, ..., K,  (1)

where x = (x1, x2, ..., xp) and x_i is the ith decision variable.

A variety of approaches can be used to solve this problem. One popular approach is to combine the objectives into a single composite objective so that traditional mathematical programming methods can be applied. To this end, some sort of value or utility function needs to be identified according to the preferences of one or multiple decision-makers. The simplest method is to assume independent preferences among the objectives and apply an additive utility function. On the other hand, instead of transforming the original problem into a single-objective one, the Pareto optimum concept based on non-dominance can be utilized.
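As a concrete illustration of the additive-utility option mentioned above, here is a minimal sketch; the objective values and weights are made-up numbers, and the objectives are assumed to be normalized to [0, 1] and oriented so that larger is better:

```python
def additive_utility(objective_values, weights):
    """Additive utility function: combine normalized objective values
    into one composite score, assuming independent preferences."""
    return sum(w * f for w, f in zip(weights, objective_values))

# Hypothetical design alternative: reliability 0.95, normalized cost 0.4,
# normalized weight 0.3 (cost and weight converted to larger-is-better).
score = additive_utility((0.95, 1 - 0.4, 1 - 0.3), (0.6, 0.25, 0.15))
```

With the weights above, the composite score is 0.6(0.95) + 0.25(0.6) + 0.15(0.7) = 0.825; the quality of such a scalarization rests entirely on how well the weights capture the decision-maker's true preferences.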

Pareto dominance and non-dominance can be determined through multiple pair-wise vector comparisons. More specifically, let y = (y1, y2, ..., yp) be another vector containing p decision variables. In a maximization problem, solution x is said to dominate solution y if and only if

f_i(x) ≥ f_i(y) for all i, and f_i(x) > f_i(y) for at least one i ∈ {1, 2, ..., n}.  (2)

In other words, x is non-dominated in a p-dimensional set X if there is no other y ≠ x in X such that f(y) ≥ f(x) componentwise with strict inequality in at least one objective. Let N be the set containing all the non-dominated solutions in X. Then the set N is called the Pareto optimal set [23], or Pareto frontier, of the multi-objective optimization problem. By introducing the Pareto optimum concept, more choices may be provided to decision-makers with different perspectives. In many cases, however, the Pareto optimal solution set grows large as the number of conflicting objectives increases, which may not be desirable to a decision-maker. The two-stage method proposed in this paper is expected to fill the gap between a single solution and the full Pareto optimal set by providing decision-makers with a medium-sized set of several representative solutions from a holistic view.
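The dominance test of Eq. (2) and the extraction of the non-dominated set N translate directly into code. A minimal sketch for maximization problems (the function names are illustrative, not from the paper):

```python
def dominates(fx, fy):
    """True if objective vector fx Pareto-dominates fy (maximization):
    fx is at least as good in every objective, strictly better in one."""
    return all(a >= b for a, b in zip(fx, fy)) and \
           any(a > b for a, b in zip(fx, fy))

def pareto_set(points):
    """Return the non-dominated subset N of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]
```

For example, among the objective vectors (2, 2), (1, 1), and (0, 3), the vectors (2, 2) and (0, 3) are mutually non-dominated, while (1, 1) is dominated by (2, 2).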

2.2. Non-dominated sorting genetic algorithm

GA, developed by Holland [24], is a particular class of evolutionary algorithms. It starts with a population of random individuals (called chromosomes) that are revised over successive generations. Crossover and mutation operators are used to generate new solutions at each generation, and each solution is evaluated in terms of a fitness function. Individuals with higher fitness values are ranked at the top, while individuals with low fitness values are likely to be eliminated from the current population. The algorithm continues for a predetermined number of generations or until no additional improvement is observed.

To solve a multi-objective optimization problem, the following multi-objective GAs, referred to as MOEAs, have been developed:

- vector evaluated GA by Schaffer [25];
- MOGA by Fonseca and Fleming [26];
- Niched-Pareto GA by Horn et al. [27];
- NSGA developed by Srinivas and Deb [28];
- Strength Pareto evolutionary algorithm by Zitzler and Thiele [29];
- NSGA-II by Deb et al. [30–32] and Kumar et al. [33];
- MOMS-GA by Taboada et al. [34].

In this paper, the RAPs are solved using either NSGA or NSGA-II to effectively identify the Pareto optimal solution set. NSGA differs from a simple GA only in the manner in which the selection operator works, while crossover and mutation remain as usual. More specifically, NSGA uses a non-dominated sorting procedure [28]. It applies a ranking method that emphasizes good solutions and tries to maintain them in the population, and it maintains population diversity through a sharing method, which allows the algorithm to explore different regions of the Pareto front. The algorithm can accommodate many objectives and constraints and is very efficient in obtaining good Pareto optimal sets (or fronts). As an improved version of NSGA, NSGA-II utilizes fast non-dominated sorting. This method is more computationally efficient, preserves elitism, and is less dependent on a sharing parameter for diversity preservation. The pseudo-code of the fast non-dominated sort in NSGA-II [32] is depicted in Fig. 3.

Fig. 3. NSGA-II algorithm: fast non-dominated sort.

    fast-non-dominated-sort(P):
        for each p ∈ P
            S_p = ∅                      // set of solutions dominated by p
            n_p = 0                      // domination counter of p
            for each q ∈ P
                if p dominates q then
                    S_p = S_p ∪ {q}      // add q to the set of solutions dominated by p
                else if q dominates p then
                    n_p = n_p + 1        // increment the domination counter of p
            if n_p = 0 then              // p belongs to the first front
                p_rank = 1
                F_1 = F_1 ∪ {p}
        i = 1                            // initialize the front counter
        while F_i ≠ ∅
            Q = ∅                        // used to store the members of the next front
            for each p ∈ F_i
                for each q ∈ S_p
                    n_q = n_q − 1
                    if n_q = 0 then      // q belongs to the next front
                        q_rank = i + 1
                        Q = Q ∪ {q}
            i = i + 1
            F_i = Q
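The fast non-dominated sort of Fig. 3 can be rendered compactly in Python. This is a sketch, not the authors' implementation; the dominance test below assumes all objectives are maximized:

```python
def dominates(fx, fy):
    """Maximization dominance: fx >= fy everywhere, > somewhere."""
    return all(a >= b for a, b in zip(fx, fy)) and \
           any(a > b for a, b in zip(fx, fy))

def fast_non_dominated_sort(points):
    """Partition objective vectors into Pareto fronts F1, F2, ...
    Returns a list of fronts, each a list of indices into `points`."""
    S = [[] for _ in points]          # S[p]: solutions dominated by p
    n = [0] * len(points)             # n[p]: domination counter of p
    fronts = [[]]
    for p, fp in enumerate(points):
        for q, fq in enumerate(points):
            if dominates(fp, fq):
                S[p].append(q)        # add q to the set dominated by p
            elif dominates(fq, fp):
                n[p] += 1             # one more solution dominates p
        if n[p] == 0:                 # p belongs to the first front
            fronts[0].append(p)
    i = 0
    while fronts[i]:                  # peel off successive fronts
        next_front = []
        for p in fronts[i]:
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:         # q belongs to the next front
                    next_front.append(q)
        i += 1
        fronts.append(next_front)
    return fronts[:-1]                # drop the trailing empty front
```

The double loop over the population gives the O(MN²) complexity cited for NSGA-II, where M is the number of objectives and N the population size.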

3. Statistical classification methods

Statistical classification methods fall into two very general classes: supervised and unsupervised. This distinction is described in the following sections. For this research, SOM is used to cluster Pareto optimal sets based on performance similarities; this approach provides some distinct benefits compared to other approaches.

3.1. Unsupervised and supervised data classification

Statistical classification methods have been widely used in mining useful information from huge data sets. Based on the amount of prior knowledge about the original data, either unsupervised statistical classification methods (e.g., k-means and SOM) or supervised alternatives (e.g., artificial neural networks and support vector machines) may be applied to extract the desired information from the data. When little or no prior information about the data is available, the unsupervised approach is more appropriate for classifying the data. On the other hand, when some relationship between the data and their corresponding clusters is already known, the supervised approach becomes the better choice. A brief introduction to the two methods follows.

Before classification is performed, each sample in the data set is expressed as a vector, called the input data/vector, with or without a label. Each vector is composed of the measures or features of the sample to be classified, and the label associated with a vector indicates the cluster to which the vector belongs. For the unsupervised approach, such labels are not specified beforehand, and classification is carried out purely based on the similarity of the input data, measured by the distance and/or angle between the input vectors. For the supervised approach, the label for each input vector must be specified first; the model obtained from the training process can then be applied to predict the cluster to which test or new data belong. In other words, the supervised approach is analogous to regression analysis (e.g., logistic regression) where the response variable is categorical, for example with possible values of −1, 0, and 1. This method depends on a specified classifying function, such as a logistic or hyperbolic tangent function, whose parameters may be determined by training the model in terms of some performance metric (e.g., minimizing the mean squared error in an artificial neural network classifier, or maximizing the margin between the support vectors in a support vector machine classifier).

For a multi-objective RAP, the Pareto optimal solution set is often very large. Therefore, it is very useful to classify the Pareto optimal solutions first in order to better understand their behavior and to conduct meaningful trade-offs. Such a solution identification process not only extracts trade-off information about the solutions but also maintains the completeness of the Pareto optimal solution set. More specifically, each Pareto optimal solution is treated as an input vector containing reliability, weight, and cost. Because there is no prior information about the cluster to which an input vector belongs, labels do not exist; thus an unsupervised classification method such as SOM is more appropriate.

3.2. Self-organizing map

SOM is an unsupervised classification method and a special artificial neural network with a single-layer feedforward structure [35]. It generates a set of representations (usually two- or three-dimensional) for multi-dimensional input vectors while preserving topological properties measured in terms of the similarity of these input vectors, such as the same or similar distances and angles between them. More specifically, each input vector is connected to all output neurons, and a weight vector with the same dimensionality as the input vectors is attached to each neuron (see Fig. 4 for the case of a two-dimensional output lattice). Usually, the number of dimensions of an input vector is much higher than that of the output lattice, so the mapping from the input space to the output space can be seen as a dimension reduction process.

During the SOM's training process, a competitive learning technique is utilized. When a training sample (input vector) is presented to the network, its Euclidean distance to all weight vectors is computed. The neuron whose weight vector is most similar to the input is called the best matching unit (BMU). The weights of the BMU and of the neurons close to it in the SOM lattice are then adjusted towards the input vector. The magnitude of the adjustment decreases with time and is smaller for neurons that are far away from the BMU in the lattice. The weight w(t) is updated iteratively as

w(t + 1) = w(t) + θ(n, t) α(t) [I(t) − w(t)],  (3)

where w(t + 1) is the weight vector at step t + 1; I(t) is the input vector; α(t) is the learning coefficient, which decreases monotonically with time; and θ(n, t) is the neighborhood function, which takes a smaller value when the neuron is far away from the BMU (as determined by n, the Euclidean distance in the lattice) and also decreases with time. For instance, the Gaussian neighborhood function θ(n, t) = exp(−n²/σ²) has been widely used, where σ² is a width parameter that gradually decreases over time. This updating process can be performed for a given number of iterations or until I(t) approaches the weight vector w(t). To ensure the quality of the training process, representatives of all possible input vectors need to be selected as training samples. Eventually, output nodes become associated with groups or patterns in the input vectors. In the subsequent mapping process, a new input vector is mapped to a specific location on the lattice based on its similarity to the weight vector of a specific neuron.
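A minimal sketch of this competitive learning loop, built around the update rule in Eq. (3); the lattice size, decay schedules, and random training data below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
grid, dim = 8, 3                                 # 8 x 8 lattice, 3-D inputs
weights = rng.random((grid, grid, dim))          # one weight vector per neuron
inputs = rng.random((200, dim))                  # training samples
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

def quantization_error(w, x):
    """Mean distance from each input to its best matching unit."""
    d = np.linalg.norm(w[None, ...] - x[:, None, None, :], axis=-1)
    return d.reshape(len(x), -1).min(axis=1).mean()

initial_qe = quantization_error(weights, inputs)

n_steps = 2000
for t in range(n_steps):
    x = inputs[rng.integers(len(inputs))]
    # Best matching unit (BMU): neuron with the closest weight vector
    d = np.linalg.norm(weights - x, axis=-1)
    bmu = np.array(np.unravel_index(np.argmin(d), d.shape))
    # Monotonically decreasing learning rate alpha(t) and width sigma(t)
    frac = 1 - t / n_steps
    alpha = 0.5 * frac
    sigma = max(0.5, (grid / 2) * frac)
    # Gaussian neighborhood theta(n, t) = exp(-n^2 / sigma^2) on the lattice
    n2 = ((coords - bmu) ** 2).sum(axis=-1)
    theta = np.exp(-n2 / sigma**2)
    # Eq. (3): w(t+1) = w(t) + theta * alpha * (I(t) - w(t))
    weights += (theta * alpha)[..., None] * (x - weights)

final_qe = quantization_error(weights, inputs)
```

After training, the mean distance from inputs to their BMUs (the quantization error) should drop relative to the random initialization, and nearby lattice neurons end up with similar weight vectors, which is the topology preservation exploited in Section 5.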

Fig. 5 shows an example of a two-dimensional output lattice after training. The coordinates provide a visual representation of the input vectors in the output space. During the mapping process, input vectors that are similar to the weight vectors of the neurons in quadrant I are assigned coordinates with signs (+, +) in the output lattice. Similarly, input vectors with good similarity to the weight vectors of the neurons in quadrant II are allocated coordinates with signs (−, +), and so on for quadrants III and IV.

Fig. 4. SOM single-layer feedforward network: nine-dimensional input vectors are fully connected to a two-dimensional 8 × 8 output lattice, where each output neuron contains a nine-dimensional weight vector.

Unlike the k-means method, which only minimizes the mean squared error in terms of the Euclidian distance, the SOM approach measures similarity by the Euclidian distance as well as the angle between the input vectors, updating the weight vectors iteratively [36]. This training process results in topological preservation from the input vectors to the output lattice map. Because of those advantages, SOM is utilized in the proposed two-stage approach to classify the Pareto optimal solution set for solving multi-objective RAP.

4. Reduction of Pareto optimal solutions

Even though the classification results are informative, the number of solutions in each cluster may still be prohibitively large for a decision-maker to make informed choices. At this point, selecting representative solutions from each cluster can itself be regarded as a multi-objective optimization problem, also called a multiple objective selection optimization problem [37]. In fact, appropriate application of the MOSO method can significantly reduce the size of each cluster of Pareto optimal solutions. A special MOSO method is DEA, which, from the perspective of relative efficiency, is able to eliminate the non-efficient Pareto optimal solutions from each cluster. A brief introduction to DEA in the context of multi-objective RAP is given in the next section.

4.1. Data envelopment analysis

DEA is a linear programming-based technique for measuring the relative performance of decision making units (DMUs) where the presence of multiple inputs (i.e., cost-type criteria) and outputs (i.e., benefit-type criteria) makes comparisons difficult [38]. A DMU is a unit whose performance can be measured in terms of input–output analysis. For MOSO, each alternative solution is treated as a DMU in the DEA method, and all the DMUs are usually assumed to be homogeneously comparable so that the resulting relative efficiencies are meaningful. In comparing their efficiencies, the relative efficiency (RE) incorporating multiple inputs and outputs can be defined as

RE = (weighted sum of outputs) / (weighted sum of inputs).

Considering a problem involving l DMUs, each of which has m inputs and n outputs, the relative efficiency of the kth DMU can be expressed mathematically as

RE_k = (Σ_{j=1}^{n} u_j y_{jk}) / (Σ_{i=1}^{m} v_i x_{ik}),  k = 1, 2, ..., l,
u_j, v_i ≥ 0,  i = 1, 2, ..., m;  j = 1, 2, ..., n,

where u_j and v_i are the weights for the outputs and inputs, respectively.

Since different DMUs may utilize different strategies to achieve their highest relative efficiency values, it is more practical to allow a specific weight set for each DMU instead of pursuing a common set of weights for all DMUs [39]. Consequently, the relative efficiency of a specific DMU k0 can be obtained as the solution to the following problem:

Max RE_{k0} = (Σ_{j=1}^{n} u_j y_{j,k0}) / (Σ_{i=1}^{m} v_i x_{i,k0})

subject to:

(Σ_{j=1}^{n} u_j y_{jk}) / (Σ_{i=1}^{m} v_i x_{ik}) ≤ 1,  k = 1, 2, ..., l,
u_j, v_i ≥ ε,  i = 1, 2, ..., m;  j = 1, 2, ..., n,

where ε is a small positive quantity. The decision variables of the problem are the weights, and the solution contains the weight set most favorable to unit k0 and the value of its relative efficiency. Moreover, maximizing a fraction or ratio depends on the relative magnitudes of the numerator and denominator but not on their individual values. Therefore, the same result can be obtained by setting the denominator equal to a constant and maximizing the numerator instead. As a result, the above fractional programming problem can be transformed into a general linear programming problem:

Max RE_{k0} = Σ_{j=1}^{n} u_j y_{j,k0}

subject to:

Σ_{i=1}^{m} v_i x_{i,k0} = 1  (normalization),
Σ_{j=1}^{n} u_j y_{jk} − Σ_{i=1}^{m} v_i x_{ik} ≤ 0,  k = 1, 2, ..., l,
u_j, v_i ≥ ε,  i = 1, 2, ..., m;  j = 1, 2, ..., n,

where ε is a small positive quantity.

To obtain the efficiencies of the entire set of units, it is necessary to solve one linear program for each unit. Clearly, as the objective function varies from one problem to another, the weights obtained for each unit may differ. Moreover, when applying DEA, all DMUs select their most favorable weights; therefore there may be more than one efficient unit whose relative efficiency equals one.
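The per-unit linear program above can be sketched with scipy.optimize.linprog (a minimal illustration of the transformed LP; the three toy DMUs below are invented values, not the paper's 75 solutions):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, k0, eps=1e-6):
    """Relative efficiency of DMU k0 via the multiplier-form LP:
    maximize u.y_k0  s.t.  v.x_k0 = 1,  u.y_k - v.x_k <= 0 for all k,
    u, v >= eps.  X: (l, m) inputs, Y: (l, n) outputs."""
    l, m = X.shape
    _, n = Y.shape
    # decision variables z = [u_1..u_n, v_1..v_m]
    c = np.concatenate([-Y[k0], np.zeros(m)])            # linprog minimizes
    A_eq = np.concatenate([np.zeros(n), X[k0]])[None, :]  # v.x_k0 = 1
    A_ub = np.hstack([Y, -X])                             # u.y_k - v.x_k <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(l),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (n + m), method="highs")
    return -res.fun

# inputs: (cost, weight); output: reliability -- toy data
X = np.array([[13.0, 19.0], [31.0, 20.0], [25.0, 30.0]])
Y = np.array([[0.68], [0.88], [0.75]])
scores = [dea_ccr_efficiency(X, Y, k) for k in range(len(X))]
efficient = [k for k, s in enumerate(scores) if s > 1 - 1e-4]
```

One LP is solved per DMU, matching the text: the objective changes with k₀ while the constraint matrix stays fixed, so the resulting weight sets generally differ from unit to unit.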

When there are two outputs, or two benefit type criteria, all the DMUs with relative efficiency values equal to one form an efficient frontier in a two-dimensional output space (see Fig. 6). If there are three outputs, an envelopment surface may be formed by connecting the points with relative efficiency values equal to one.

[Fig. 5. Two-dimensional representation of the SOM output lattice; the four quadrants are labeled (+, +), (−, +), (−, −), and (+, −).]

[Fig. 6. DEA efficient frontier with two outputs: DMUs with relative efficiency equal to one form the frontier in the (Output 1, Output 2) plane, while DMUs below it have relative efficiency less than one.]

Z. Li et al. / Reliability Engineering and System Safety 94 (2009) 1585–1592

In the MOSO formulation for the RAP, all the Pareto optimal solutions in each cluster can be considered as DMUs. Moreover, some objectives (e.g., weight and cost) may be considered as inputs and others (e.g., reliability) as outputs. In particular situations where only input or only output variables exist, dummy variables may be introduced; for instance, if all objectives are of the cost type, a dummy output variable with the value of one may be added. For the Pareto optimal solutions in each cluster, each alternative solution is thus evaluated in terms of its relative efficiency. A higher relative efficiency value indicates that a higher output value (e.g., higher system reliability) is obtained for a given amount of inputs, such as total cost and weight. The solutions with relative efficiency equal to one are preferred, and those with relative efficiency values less than one can be eliminated from the cluster. This is a strong statement: if even the most favorable weight set cannot achieve a relative efficiency value of one, the solution must be non-efficient.
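The input/output mapping and the dummy-output rule described above can be sketched as follows (a hypothetical helper; the objective names are illustrative):

```python
def to_dea_matrices(solutions, benefit_keys=("reliability",),
                    cost_keys=("cost", "weight")):
    """Map each Pareto optimal solution (a dict of objective values) to
    DEA inputs X (cost type) and outputs Y (benefit type).  If no
    benefit-type objective exists, a dummy output fixed at 1.0 is used
    so the efficiency ratio remains well defined."""
    X = [[s[k] for k in cost_keys] for s in solutions]
    if benefit_keys:
        Y = [[s[k] for k in benefit_keys] for s in solutions]
    else:
        Y = [[1.0] for _ in solutions]   # dummy output variable
    return X, Y

sols = [{"reliability": 0.68, "cost": 13, "weight": 19},
        {"reliability": 0.88, "cost": 31, "weight": 20}]
X, Y = to_dea_matrices(sols)
```

Pruning a cluster then amounts to scoring each row of (X, Y) with the DEA linear program and discarding rows whose relative efficiency is below one.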

In this paper, methods are presented for the case in which the decision-makers have not expressed any objective function preferences. An alternative method to prune the Pareto optimal set is based on an ordinal ranking of the objective functions, as described by Taboada et al. [20] and Kulturel-Konak et al. [40]. In that approach, weight sets adhering to the stated preferences are randomly and repeatedly selected to identify which Pareto optimal solution is best.

5. Application to multi-objective RAP

In this section, an example RAP is presented to demonstrate the use of the proposed two-stage procedure. In the Pareto optimal solution identification stage, MOGA is initially applied. The example problem, originally presented by Taboada and Coit [21], was solved using the NSGA-II method, and 75 Pareto optimal solutions were identified. Each Pareto optimal solution (input vector) has three dimensions: system reliability, total cost, and system weight. To cluster those solutions with the SOM approach, a 10 × 10 output lattice is employed, which results in 100 neurons, each with a three-dimensional weight vector.
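The SOM training step can be sketched with a minimal numpy implementation (a toy illustration assuming normalized three-dimensional objective vectors; the lattice size, decay schedules, and sample data are arbitrary choices, not the paper's settings):

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=3000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM sketch: each lattice node holds a weight vector of the
    same dimension as the data; the best-matching unit (BMU) and its
    lattice neighbours are pulled toward each presented sample."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # lattice coordinates of the rows*cols nodes, for neighbourhood distances
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    w = rng.random((rows * cols, data.shape[1]))
    for t in range(iters):
        lr = lr0 * np.exp(-t / iters)          # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)    # shrinking neighbourhood
        x = data[rng.integers(len(data))]
        node = np.argmin(((w - x) ** 2).sum(axis=1))       # BMU
        d2 = ((coords - coords[node]) ** 2).sum(axis=1)    # lattice distance^2
        w += lr * np.exp(-d2 / (2 * sigma**2))[:, None] * (x - w)
    return w

def bmu(w, x):
    return int(np.argmin(((w - x) ** 2).sum(axis=1)))

# two well-separated toy clouds of normalized (reliability, cost, weight)
# vectors -- illustrative data, not the paper's 75 solutions
rng = np.random.default_rng(1)
pts = np.vstack([rng.random((20, 3)) * 0.2,
                 rng.random((20, 3)) * 0.2 + 0.7])
w = train_som(pts)
labels = [bmu(w, p) for p in pts]
```

Because the lattice preserves the topology of the data, solutions that map to nearby nodes have similar objective values, which is what makes the subsequent cluster-wise interpretation meaningful.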

This example considers a system consisting of three subsystems, with an option of five, four, and five types of components in each subsystem, respectively. The maximum number of components is eight in each subsystem. Table 1 shows the component choices for each subsystem, in which r_ij, c_ij, and w_ij are the reliability, cost, and weight of component j that can be used in subsystem i, respectively. The NSGA-II algorithm was used to solve the problem, and 75 solutions were found in the Pareto optimal set, as shown in Fig. 7. The Pareto set is then pruned using the successive application of the SOM and DEA methods previously described.

After applying the SOM classification, the solutions in the two-dimensional output space can be obtained as shown in Fig. 8. The 75 solutions are classified into four clusters characterized by their signs in the two-dimensional space. Each cluster has its own characteristics, and the solutions within a cluster are topologically similar to each other. For example, the cluster denoted by (+, +), shown with the circle sign, includes solutions with high cost, high reliability, and high weight, while the cluster denoted by (−, −) consists of solutions with low cost, low reliability, and low weight. Such information is very informative and helpful to decision-makers, without losing any completeness of the original Pareto optimal solution set.

For comparison, the view of these clusters in the original three-dimensional space is illustrated in Fig. 9. In the figure, two clusters appear to overlap, but the cluster with the cross sign actually lies above the cluster with the triangle sign in three-dimensional space.

By applying the DEA method to each cluster, and solving a total of 75 linear programming problems, three solutions achieve relative efficiency values greater than 90%, of which two have relative efficiency values equal to one. These two solutions are plotted in Fig. 10, and their performance indices and corresponding component choices are listed in Table 2. This example shows that the DEA method is effective in reducing the size of the Pareto optimal set.


Table 1
Component choices for each subsystem.

Component    Subsystem 1            Subsystem 2            Subsystem 3
choice j     r1j    c1j   w1j       r2j    c2j   w2j       r3j    c3j   w3j
1            0.94   9     9         0.97   12    5         0.96   10    6
2            0.91   6     6         0.86   3     7         0.89   6     8
3            0.89   6     4         0.70   2     3         0.72   4     2
4            0.75   3     7         0.66   2     4         0.71   3     4
5            0.72   2     8         –      –     –         0.67   2     4

[Fig. 7. Original 75 Pareto optimal solutions in (reliability, cost, weight) space.]

[Fig. 8. Two-dimensional SOM representation of the original Pareto optimal solutions; the four clusters are labeled (+, +), (+, −), (−, −), and (−, +).]


[Fig. 9. Four clusters after applying SOM, shown in (reliability, cost, weight) space.]

[Fig. 10. Kernel solutions after DEA evaluation.]

Table 2
Remaining solutions after directly applying DEA.

Solution no.  System reliability  Total cost  System weight  System configuration  Relative efficiency
1             0.682048            13          19             –                     1
9             0.875328            31          20             –                     1

6. Conclusions

In this paper, a two-stage method for multiple objective system reliability optimization is proposed. The Pareto optimal solution identification stage is implemented by applying an effective MOEA, such as NSGA or its improved version, NSGA-II. Often there are very many solutions in this set, and it is advantageous to prune it to a set of promising solutions. At the pruning stage, SOM is first applied to classify the Pareto optimal set, so that basic trade-off information about the whole set can be observed. Furthermore, by introducing the DEA method, some Pareto optimal solutions are found to be non-efficient and can be eliminated in a meaningful way; the original solution set is thereby greatly reduced. Through this sequential decision support process, multi-objective decision making for the RAP becomes easier to address.

Acknowledgments

The authors would like to thank the editor and referees for their insightful comments, which greatly improved the content of this paper.

References

[1] Chern MS. On the computational complexity of reliability redundancy allocation in a series system. Operations Research Letters 1992;11:309–15.
[2] Fyffe DE, Hines WW, Lee NK. System reliability allocation problem and a computational algorithm. IEEE Transactions on Reliability 1968;17:64–9.
[3] Nakagawa Y, Miyazaki S. Surrogate constraints algorithm for reliability optimization problems with two constraints. IEEE Transactions on Reliability 1981;30:175–80.
[4] Ghare PM, Taylor RE. Optimal redundancy for reliability in series system. Operations Research 1969;17:838–47.
[5] Coit DW, Smith AE. Reliability optimization for series–parallel systems using a genetic algorithm. IEEE Transactions on Reliability 1996;45(2):254–60.
[6] Kulturel-Konak S, Smith AE, Coit DW. Efficiently solving the redundancy allocation problem using tabu search. IIE Transactions 2003;35(6):515–26.
[7] Moghaddam RT, Safari J, Sassani F. Reliability optimization of series–parallel systems with a choice of redundancy strategies using a genetic algorithm. Reliability Engineering and System Safety 2008;93(4):550–6.
[8] Coelho LS. An efficient particle swarm approach for mixed-integer programming in reliability–redundancy optimization applications. Reliability Engineering and System Safety 2009;94(4):830–7.
[9] Kuo W, Prasad V, Tillman F, Hwang CL. Optimal reliability design: fundamentals and applications. Cambridge, UK: Cambridge University Press; 2000.
[10] Levitin G, Lisnianski A, Elmakis D. Structure optimization of power system with different redundant elements. Electric Power Systems Research 1997;43:19–27.
[11] Jin T, Coit DW. Variance of system reliability estimates with arbitrarily repeated components. IEEE Transactions on Reliability 2001;50(4):409–13.
[12] Ramirez-Marquez J, Coit DW. Optimization of system reliability in the presence of common cause failures. Reliability Engineering and System Safety 2007;92(10):1421–34.
[13] Dhingra A. Optimal apportionment of reliability & redundancy in series systems under multiple objectives. IEEE Transactions on Reliability 1992;41(4):576–82.
[14] Misra K, Sharma U. An effective approach for multiple criteria redundancy optimization problems. Microelectronics and Reliability 1991;31(2/3):303–21.
[15] Multi-criteria optimization for combined reliability and redundancy allocation in systems employing mixed redundancies. Microelectronics and Reliability 1991;31(2/3):323–35.
[16] Li D. Interactive parametric dynamic programming and its application of large system reliability. Journal of Mathematical Analysis and Applications 1995;191:589–607.
[17] Coit DW, Jin T, Wattanapongsakorn N. System optimization with component reliability estimation uncertainty: a multi-criteria approach. IEEE Transactions on Reliability 2004;53(3):369–80.
[18] Busacca PG, Marseguerra M, Zio E. Multiobjective optimization by genetic algorithms: application to safety systems. Reliability Engineering and System Safety 2001;72:59–74.
[19] Tian Z, Zuo MJ. Redundancy allocation for multi-state systems using physical programming and genetic algorithms. Reliability Engineering and System Safety 2006;91(9):1049–56.
[20] Taboada HA, Baheranwala F, Coit DW. Practical solutions for multi-objective optimization: an approach to system reliability design problems. Reliability Engineering and System Safety 2007;92:314–22.
[21] Taboada HA, Coit DW. Data clustering of solutions for multiple objective system reliability optimization problems. Quality Technology and Quantitative Management 2007;4:191–210.
[22] Rao SS. Optimization theory and application. New Delhi: Wiley Eastern Limited; 1991.
[23] Zeleny M. Multiple criteria decision making. McGraw-Hill series in quantitative methods for management. New York: McGraw-Hill; 1982.
[24] Holland J. Adaptation in natural and artificial systems. University of Michigan Press; 1975.
[25] Schaffer JD. Multiple objective optimization with vector evaluated genetic algorithms. In: Genetic algorithms and their applications: proceedings of the first international conference on genetic algorithms, Hillsdale, NJ; 1985. p. 93–100.
[26] Fonseca CM, Fleming PJ. Genetic algorithms for multiobjective optimization: formulation, discussion and generalization. In: Proceedings of the fifth international conference on genetic algorithms, San Mateo, CA; 1993. p. 416–23.
[27] Horn J, Nafpliotis N, Goldberg DE. A niched Pareto genetic algorithm for multiobjective optimization. In: Proceedings of the first IEEE conference on evolutionary computation, IEEE world congress on computational intelligence, vol. 1. Piscataway, NJ: IEEE Press; 1994. p. 82–7.
[28] Srinivas N, Deb K. Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation 1994;2(3):221–48.
[29] Zitzler E, Thiele L. Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation 1999;3(4):257–71.
[30] Deb K, Agarwal S, Pratap A, Meyarivan T. A fast elitist nondominated sorting genetic algorithm for multi-objective optimization: NSGA-II. KanGAL report number 200001. Kanpur, India: Indian Institute of Technology; 2000.
[31] Deb K, Agarwal S, Pratap A, Meyarivan T. A fast elitist nondominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In: Proceedings of the parallel problem solving from nature VI conference, Paris, France; 2000. p. 849–58.
[32] Deb K, Pratap A, Agarwal S, Meyarivan T. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 2002;6(2):182–97.
[33] Kumar R, Izui K, Yoshimura M, Nishiwaki S. Multi-objective hierarchical genetic algorithms for multilevel redundancy allocation optimization. Reliability Engineering and System Safety 2009;94(4):891–904.
[34] Taboada H, Espiritu J, Coit D. MOMS-GA: a multi-objective multi-state genetic algorithm for system reliability optimization design problems. IEEE Transactions on Reliability 2008;57(1):182–91.
[35] Fausett L. Fundamentals of neural networks: architectures, algorithms, and applications. Englewood Cliffs, NJ: Prentice-Hall; 1994. p. 169–87.
[36] Demuth BH, Beale M, Hagan MT. Neural network design. Boston, MA: PWS Publishing; 1997.
[37] Joro T, Korhonen P, Wallenius J. Structural comparison of data envelopment analysis and multiple objective linear programming. Management Science 1998;44(7):962–70.
[38] Cooper WW, Seiford LM, Tone K. Data envelopment analysis: a comprehensive text with models, applications, references, and DEA-Solver software. Berlin: Springer; 2006.
[39] Charnes A, Cooper WW, Rhodes E. Measuring the relative efficiency of decision making units. European Journal of Operational Research 1978;2:429–44.
[40] Kulturel-Konak S, Coit D, Baheranwala F. Pruned Pareto-optimal sets for the system redundancy allocation problem based on multiple prioritized objectives. Journal of Heuristics 2008;14(4):335–57.