An Evolutionary Multi-Objective Framework for Complex Network Reconstruction Using Community Structure

Kai Wu, Jing Liu, Senior Member, IEEE, Xingxing Hao, Penghui Liu, and Fang Shen

This work was supported in part by the Key Project of Science and Technology Innovation 2030 supported by the Ministry of Science and Technology of China under Grant 2018AAA0101302 and in part by the General Program of National Natural Science Foundation of China (NSFC) under Grant 61773300.

The authors are with the School of Artificial Intelligence, Xidian University, Xi'an 710071, China (e-mail: kaiwu@stu.xidian.edu.cn; neouma@163.com; ystar1991@126.com; 523184685@qq.com; f.shen@qq.com).
Abstract—The problem of inferring nonlinear and complex dynamical systems from available data is prominent in many fields, including the engineering, biological, social, physical, and computer sciences. Many evolutionary algorithm (EA) based network reconstruction methods have been proposed to address this problem, but they ignore useful information about network structure, such as the community structure that widely exists in various complex networks. Inspired by community structure, this paper develops a community-based evolutionary multi-objective network reconstruction framework, referred to as CEMO-NR, to improve the reconstruction performance of EA-based network reconstruction methods. CEMO-NR is a generic framework, and any
population-based multi-objective metaheuristic algorithm can be
employed as the base optimizer. CEMO-NR employs the
community structure of networks to divide the original decision
space into multiple small decision spaces, and then any multi-
objective evolutionary algorithm (MOEA) can be used to search
for improved solutions in the reduced decision space. To verify the
performance of CEMO-NR, this paper also designs a test suite for
complex network reconstruction problems. Three representative
MOEAs are embedded into CEMO-NR and compared with their
original versions, respectively. The experimental results on 30 multi-objective network reconstruction problems demonstrate the significant improvement achieved by the proposed CEMO-NR.
Index Terms—Complex network reconstruction, multi-
objective optimization, evolutionary algorithm, large-scale
optimization, community structure.
I. INTRODUCTION
Research on the problem of controlling and synchronizing nonlinear and complex dynamical systems is prominent and has attracted attention in many fields, including engineering,
physical, computer, biological, and social sciences. Complex
networks are an effective tool to analyze complex dynamics and
play an important role in controlling collective dynamics [1],
[2]. Each key factor or part in a complex system can be
considered as a concept or node in a complex network.
However, it is difficult to control collective dynamics because
the network structure and nodal dynamics in complex systems
are often unknown. In most cases, only limited observed data
of complex dynamics are available. Thus, inferring the network
structure from observed data has become a central challenge in
contemporary network science and engineering.
A wide range of network inference methods have been
developed to overcome this challenge, such as regression [3],
[4], [5], [6], mutual information [7], random forest [8],
evolutionary algorithms (EAs) [9], [10], and many other
methods [11]. EAs are powerful optimization tools for solving
non-convex or complex optimization problems and have
presented good performance in many fields. Because of the
good performance and wide application of EA-based network
reconstruction methods in addressing the complex network
reconstruction problems [12], [13], [14], [15], [16], this paper
mainly focuses on the EA-based network reconstruction
methods. In EA-based network reconstruction methods, to infer
networks from observed data, fuzzy cognitive maps [17], [18],
S-system [12], [16] or other inference models [19] could be
used for modeling the observed data; then the EA is employed
to optimize the parameters of these inference models. To obtain
the connections of each node, the network structure X needs to
be inferred from the known data Y. More specifically, to recover
the network structure, this problem can be simplified as follows:
\min_{X} \; h(X, Y) + \lambda\, g(X) \qquad (1)

where λ > 0 is a positive constant which controls the tradeoff
between two terms, Y represents the observed data, h()
represents the difference between the output of the employed
inference model and Y, and X represents the parameters in the
inference models or the network structure. In the above
equation, the first term minimizes the difference between the
output of the inference model or the dynamic and the observed
data and the last term g(X) is used for ensuring the sparsity of
network structure [4], [6]. Most complex networks, such as social networks, gene regulatory networks, and power networks, have a sparse structure [60]. In order to infer these
networks with high accuracy, we need to ensure the sparsity of
network structure [60], [4], [6]. The problem of solving (1) has
been proven to be a non-convex optimization problem [20], which can be overcome by EAs. Moreover, the choice of λ has a great effect on the performance of the network reconstruction methods [4], [6], [9], [10], [17]. Thus, problem (1) can be extended to a multi-objective problem (MOP) to avoid the choice of λ [9], [10].
Over the past years, to solve this problem, many EA-based
approaches have been developed [12], [14], [16], [21].
However, their performance is unsatisfactory. In general, if the
scale of complex systems increases linearly, the complexity of
the learning algorithm’s search space increases exponentially,
which eventually leads to low accuracy in handling large-scale
complex systems. However, the exact network structure among
nodes is urgently needed in large-scale complex systems. A
high-quality large-scale gene regulatory network (GRN) provides a clear description of the interactions among genes, which can then be used to control or knock out mutated genes for cancer treatment. Hence, it is worth developing an
effective algorithm to overcome the limitation of current
network reconstruction methods in dealing with high-
dimensional network reconstruction problems.
Unlike current network reconstruction methods, in this paper,
we develop a community-based evolutionary multi-objective
network reconstruction framework, termed as CEMO-NR.
Social networks naturally tend to be clustered into groups or
communities [22], [23]. The nodes link more densely with the
nodes in the same group than the nodes outside the group [22].
Inspired by the community structure in social networks, it is a
reasonable idea to decompose the high-dimensional network
reconstruction problem into multiple low-dimensional
subproblems according to the learned community structure.
However, this idea needs the initial network structure before
dividing the original problem into the subproblems. To
overcome this issue, we propose a two-stage optimization
procedure that can infer the links in a cross manner. These
subproblems focus on the links within or between the
community but ignore the whole structure, especially when the
problem is inseparable, which may lead to low accuracy due to
the information loss between the original problem and the
decomposed problems. To overcome this problem, we design a
strategy to bridge the gap between the original problem and
decomposed problems. Due to the good performance of multi-
objective evolutionary algorithms (MOEAs) [24], [25], [26],
[27], [28] in dealing with non-convex optimization problems,
we can employ them to solve the original and decomposed
network reconstruction problems. CEMO-NR is ideally suited for solving large-scale network reconstruction problems in which a significant community structure is likely to exist. To validate the performance of CEMO-NR, in the
experiments, evolutionary game (EG) [29], [30], fuzzy
cognitive map (FCM) [31], and resistor network (RN) [4], [10]
models taking place in different model-based networks are
employed. Moreover, three state-of-the-art MOEAs are
embedded in CEMO-NR. The experimental results demonstrate the significant improvement achieved by the proposed framework. The highlights of the proposed CEMO-
NR are summarized as follows:
1) A community-based network reconstruction
framework is first proposed. To the best of our
knowledge, the proposed CEMO-NR is the first
multi-objective network reconstruction framework
that decomposes the original high-dimensional
network reconstruction problem into multiple low-
dimensional network reconstruction problems by
using the property of community structure in
complex networks.
2) A two-stage optimization process is proposed to
overcome the shortcomings of problem decomposition
based on community structure. At the first stage, the
original problem is optimized for obtaining the initial
network structure which is employed by the second
optimization stage; then the second stage aims at
decomposing the network reconstruction problem
based on community structure and inferring the links
within or between communities. Moreover, to obtain
higher accuracy, the second stage is repeated in a
cross manner.
We organize the rest of this paper as follows. In Section II,
the existing EA-based network reconstruction methods and
large-scale MOEAs are reviewed. Section III gives an
introduction to the designed multi-objective network
reconstruction problems. The details of the proposed CEMO-
NR are presented in Section IV. Section V presents the
experimental results on the proposed test suite. Section VI
concludes the work in this paper.
TABLE I
THE MAIN SYMBOLS USED IN THIS PAPER.

Parameters | Descriptions
X | The links between nodes in one network
Xi | The links within the ith community
Y, A | The observed data
Z | The original problem
L | The number of links in real networks
N | The number of nodes in one network
TFE | The total number of function evaluations
Ns-m | Ns response sequences with m time points each
Si(t) | The strategy of agent i at the tth time
xij | The link between nodes i and j
β | The share of the TFE used for the community optimization
Ck | The number of nodes in the kth community
ai(t) | The activation degree of concept i at iteration t
φ | The sigmoid function
t1 | The number of function evaluations for the decomposed problem
NewP | The updated population in Algorithm 4
Zk | The decomposed problem
ND | The population size for problem Zk
No | The population size for problem Z
rij | The resistance of a resistor between nodes i and j
Vi and Ii | The voltage and the total current at node i
PPDG | The 2×2 payoff matrix
Wk | The population for decomposed problem Zk
P | The population for original problem Z
q | The selected population size for the community operator
Qk ∈ Q | The selected population for the community operator
p | The number of decomposed problems in Qk
Pro, κ | The parameters in (7)
M | The number of objective functions
D | The number of decision variables
II. RELATED WORK
For convenience, the main symbols used in this paper are summarized in Table I.
A. EA-based Network Reconstruction Methods
The goal of the network reconstruction problem is to infer the
links between each pair of nodes. Many EA-based methods
have been proposed to address the network reconstruction
problems. In this subsection, we briefly introduce several
traditional methods [13], [15] to demonstrate our motivation for
CEMO-NR. These methods share the same inference pattern in
solving the problem of network reconstruction from observed
data. First, the inference model is employed to model the
observed data. FCMs [21], [32], S-system [12], [14], [16],
recurrent neural network (RNN) [19] or other models can be
used as the inference model. Then the EA is used for optimizing
the parameters of the inference model. For example, Xu et al.
[19] proposed a recurrent neural network (RNN) and particle
swarm optimization (PSO) approach to infer GRNs from time-
series gene expression data, where RNN was employed to
interpret complex temporal behavior and PSO was employed to
optimize RNN. The core of these methods is to develop more
accurate models and high-performance optimizers. Moreover,
many network reconstruction methods are proposed to
reconstruct complex networks based on the known complex
behavior [4], [5], [6], [9], [10]. However, their performance is
unsatisfactory, especially for the high-dimensional network
reconstruction problems. In general, if the number of concepts
in complex networks increases linearly, the size of the search
space of the EAs increases exponentially. Therefore, there is a
practical demand to develop large-scale network reconstruction
schemes. The proposed CEMO-NR uses the information of
community structure to handle the task of problem
decomposition. In this way, EAs can deal with multiple low-
dimensional problems more efficiently.
B. Large-scale MOPs
In this paper, the network reconstruction problem is modeled
as an MOP to avoid the choice of λ. Thus, to demonstrate the
novelty and effectiveness of the proposed CEMO-NR, we
review the existing approaches for large-scale MOPs.
MOEA/DVA [33] and LMEA [34] decomposed the decision
variables by decision variable analysis and decision variable
clustering, respectively. The weighted optimization framework
(WOF) [35] divided the decision variables into many groups
using the differential grouping [56], random grouping, linear
grouping, or ordered grouping, and each group is assigned a
weight variable. Then, WOF optimized the weight vector
instead of all the decision variables in the same group. These
methods are general frameworks suitable for many large-scale MOPs with real variables. However, they do not take
advantage of the characteristics of network reconstruction
problems, such as the properties of network structure, leading
to low decomposition accuracy. Moreover, their decomposition
strategies may be time-consuming. Thus, these decomposition
strategies are not effective and efficient in solving network
reconstruction problems. The proposed CEMO-NR makes use
of the characteristics of network reconstruction problems and
employs the community structure widely existing in various types
of social networks to decompose the high-dimensional network
reconstruction problem into multiple low-dimensional
subproblems. Our community-based decomposition strategy is
fast and accurate due to the high performance of BGLL [23].
To accelerate the computational efficiency of existing
MOEAs on large-scale multi-objective optimization, LSMOF
[36] reformulated large-scale MOPs as a low-dimensional
single-objective optimization problem. For the MOEAs based
on decision variable grouping, they may encounter difficulties
in solving MOPs with complicated landscapes [55]. Tian et al.
[55] proposed a competitive swarm optimizer-based efficient
search for MOPs with complicated landscapes. Hong et al. [57]
proposed a scalable indicator-based evolutionary algorithm for
large-scale multiobjective optimization. These approaches have
obtained good performance on benchmark problems. However,
they could not achieve high accuracy in the multi-objective
network reconstruction problems (MONRPs) due to the fact
that these MOEAs can only solve MOPs with real variables;
besides, these MOEAs do not take advantage of the
characteristics of network reconstruction problems, which are
important for partitioning decision variables. To solve large-scale sparse MOPs with binary variables, Tian et al. [47] proposed SparseEA, an evolutionary algorithm for large-scale sparse multi-objective optimization.
There has been an increasing interest in adopting the
cooperative coevolution (CC) framework to solve real-world
problems [37], [38], [39], [58], [59]. A wide variety of single-
objective and multiobjective many-variable optimizers have
been proposed [37], [59]. For example, Gong et al. [39]
employed the CC framework to address the hyperspectral
sparse unmixing problems. In [39], the decision variables are
divided into different subsets based on the characteristic of the
hyperspectral sparse unmixing problems. Thus, it is a natural
and effective way to decompose the decision variables into
multiple small-scale subsets based on the characteristics of the
problems being solved. Inspired by this idea, we use the
information of community structure in social networks to divide
the high-dimensional decision space into low-dimensional
decision spaces. Moreover, we also design a two-stage
optimization strategy to overcome the limitation of the
proposed community-based decomposition strategy. All of the
above MOEAs can be employed as the base optimizer in the
proposed CEMO-NR.
III. MULTI-OBJECTIVE NETWORK RECONSTRUCTION
PROBLEMS
To demonstrate the effectiveness of the proposed framework
in addressing the network reconstruction problems, this paper
takes the EG network dynamics [29], [30], FCM [31], and RN
[4] as examples.
A. Multi-objective Optimization
Optimization problems involving multiple conflicting objectives are known as MOPs. MOPs can be mathematically
formulated as follows:
Z: \min \; F(x) = \left( f_1(x), f_2(x), \ldots, f_M(x) \right), \quad \mathrm{s.t.}\; x \in \Omega^{D} \qquad (2)

where Ω represents the decision space, f1(x), f2(x), …, fM(x),
represent M conflicting objectives, and D is set to the number
of decision variables. Unlike single-objective optimization
problems, there does not exist a single optimal solution that is
optimal for all objectives simultaneously due to their conflicting nature. Instead, current methods concentrate on finding non-dominated solutions. Suppose that x1 and x2 are two solutions of an MOP; x1 is said to Pareto dominate x2 if and only if fi(x1) ≤ fi(x2) for all i ∈ {1, 2, …, M} and there exists at least one objective fj (j ∈ {1, 2, …, M}) satisfying fj(x1) < fj(x2). The
collection of all the Pareto optimal solutions in the decision
space is called the Pareto optimal set. The projection of the
Pareto optimal set in the objective space is called the Pareto
optimal front.
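For readers who prefer code to notation, the following minimal Python sketch (our own illustration; the function names are not from the paper) checks the Pareto dominance relation defined above and extracts the first nondominated front of a small population with M = 2 objectives.

```python
import numpy as np

def dominates(f1, f2):
    """Return True if objective vector f1 Pareto dominates f2 (minimization)."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def first_nondominated_front(F):
    """Indices of solutions in F (n x M array) not dominated by any other solution."""
    n = len(F)
    return [i for i in range(n)
            if not any(dominates(F[j], F[i]) for j in range(n) if j != i)]

# Toy example with two objectives, e.g., (h(X, Y), g(X)).
F = np.array([[0.2, 5.0], [0.1, 7.0], [0.3, 6.0], [0.25, 4.0]])
print(first_nondominated_front(F))  # [0, 1, 3]; solution 2 is dominated by solution 0
```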
B. Three MONRPs
A network is a set of nodes (vertices), with connections
between them, called edges [60]. In this paper, we consider the
single-layer networks. A complex network is modeled as a
graph G = (V, E), with nodes (vertices) in V modeling the
individuals in the network and edges in E modeling the
relationship between individuals. We use N to denote the
number of nodes and L to denote the number of edges. A simple
network is shown in Fig. 1. Moreover, we define the network
structure as follows:
Definition 1 (Network Structure) The network structure
among nodes can be defined as an N×N weight matrix X:
X = \begin{bmatrix} x_{11} & \cdots & x_{1N} \\ \vdots & \ddots & \vdots \\ x_{N1} & \cdots & x_{NN} \end{bmatrix} \qquad (3)
In EG and RN models, xij ∈ {0, 1} represents the link between nodes i and j, i, j = 1, 2, 3, …, N. xij = 1 implies that there is a link between nodes i and j; otherwise, xij = 0. Moreover, one node cannot connect to itself, namely, xii = 0. In the FCM model, xij ∈ [−1, 1] indicates how much node i affects node j. xij > 0 implies the promoting effect, and xij < 0 means the inhibiting effect. The node in the FCM can link to itself.
Let Y be the observed data. Let h(X, Y) denote the simulation
of the inference model from candidate network structure
learned from the network reconstruction methods. Network
reconstruction methods take Y and the inference model as input
and generate a network structure X, with the intention that the
difference between the observed data Y and the simulated data
generated by using X on the inference model is as small as
possible. The elements of Y and X are described in Fig. 1.
Moreover, we establish the following MONRP, which can be
simplified as follows:
Z: \min_{X} \; \left( h(X, Y),\; g(X) \right), \quad \mathrm{s.t.}\; X \in \Omega^{D} \qquad (4)
h() and g() are defined according to different MONRPs. We
introduce the details of three MONRPs as follows.
EG Network Reconstruction Problems [4], [5]. EGs model
common interaction types in various complex systems. In an
EG, one agent must select a strategy of defection or cooperation
in any given round. In this paper, the prisoner’s dilemma games
(PDG) [40] are employed and the 2×2 payoff matrix is shown as follows:

P_{PDG} = \begin{bmatrix} 1 & 0 \\ 1.2 & 0 \end{bmatrix} \qquad (5)
where the agents obtain rewards 1(0) if both choose to
cooperate (defect). If both choose different strategies, the
defector obtains reward 1.2 and the cooperator obtains reward
0. Formally, the payoff in round t for each agent i can be
expressed as follows:
Y_i(t) = \sum_{j=1}^{N} x_{ij}\, S_i^{T}(t)\, P\, S_j(t) \qquad (6)
where Si(t) denotes the strategy of agent i in the tth round. T
represents “transpose”. The vector Y(t) consists of the payoff of
each agent in a particular round t. To maximize the payoff of
agent i in the next round, the Fermi rule [40] is employed to
update its strategy in the next round. After an agent i randomly
selects its neighbor j, agent i uses the strategy Sj with the
probability Pro. The Fermi rule is expressed as follows:
Pro\left( S_i \leftarrow S_j \right) = \frac{1}{1 + \exp\!\left[ \left( Y_i - Y_j \right) / \kappa \right]} \qquad (7)
where κ = 0.1. To find the links among agents, we establish the
goal of reconstructing X from the payoff data Y and the strategy
data U, and h() and g() are developed as follows [4], [5]:
h(X, Y) = \sum_{i=1}^{N} \left\| U_i X_i - Y_i \right\|_2^2, \quad g(X) = \left\| X \right\|_0, \quad X \in \{0, 1\}^{N \times N} \qquad (8)

where Xi represents the local connections from the pool of all agents to agent i, and their components are described as follows:

U_i = \begin{bmatrix} B_{i1}(1) & \cdots & B_{i,i-1}(1) & B_{i,i+1}(1) & \cdots & B_{iN}(1) \\ B_{i1}(2) & \cdots & B_{i,i-1}(2) & B_{i,i+1}(2) & \cdots & B_{iN}(2) \\ \vdots & & & & & \vdots \\ B_{i1}(m) & \cdots & B_{i,i-1}(m) & B_{i,i+1}(m) & \cdots & B_{iN}(m) \end{bmatrix} \qquad (9)
Fig. 1. The simple procedure of EG network reconstruction.
Y_i = \left( Y_i(1), Y_i(2), \ldots, Y_i(m) \right)^{T} \qquad (10)

X_i = \left( x_{i1}, \ldots, x_{i,i-1}, x_{i,i+1}, \ldots, x_{iN} \right)^{T} \qquad (11)

where Bij(t) = Si(t)^T P Sj(t) and m is the number of rounds. The
first objective means minimizing the difference between the
real payoff data and the simulated payoff data for agent i. The
second objective is l0-norm of X which represents the sparsity
of local connections from the pool of all agents to agent i.
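The construction of (8)–(11) can be made concrete with a short Python sketch. The helper below is our own illustration, not the authors' code; the array layouts (one-hot strategies, payoff matrix per round) are assumptions chosen for clarity. It builds Ui and Yi from recorded data and evaluates the two objectives for a candidate connection vector Xi.

```python
import numpy as np

P = np.array([[1.0, 0.0],   # payoff matrix of the PDG in (5)
              [1.2, 0.0]])

def eg_objectives(S, Y, i, x_i):
    """Two objectives of (8) for agent i.

    S   : (m, N, 2) one-hot strategies, S[t, j] = [1, 0] (cooperate) or [0, 1] (defect)
    Y   : (m, N) recorded payoffs, Y[t, j] = payoff of agent j in round t
    x_i : (N-1,) candidate 0/1 links from all other agents to agent i
    """
    m, N, _ = S.shape
    others = [j for j in range(N) if j != i]
    # U_i[t, k] = B_{i, j_k}(t) = S_i(t)^T P S_{j_k}(t), as in (9)
    U_i = np.array([[S[t, i] @ P @ S[t, j] for j in others] for t in range(m)])
    Y_i = Y[:, i]                               # the payoff column of agent i, as in (10)
    h_i = np.sum((U_i @ x_i - Y_i) ** 2)        # first objective: payoff mismatch
    g_i = np.count_nonzero(x_i)                 # second objective: l0 sparsity
    return h_i, g_i
```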
Fuzzy Cognitive Map Reconstruction Problems [17], [41],
[42], [62]. The FCM, combining the main aspects of fuzzy logic
and neural network, is an effective tool for modeling and
simulating complex systems. Benefiting from their advantages
in terms of flexibility, adaptability, abstraction, and fuzzy
reasoning, FCMs have been applied in a significant number of
applications [53], [54]. An FCM is a signed fuzzy network. The
state values of these concepts can be denoted as a vector A,
A = \left( a_1, a_2, \ldots, a_N \right) \qquad (12)

The state value of a concept lies in the range [0, 1] and quantifies its activation degree at a particular time.
The state vector A(t) consists of the activation level of each
concept at a particular iteration t. Formally, the activation level
at iteration t+1 for concept i is defined as follows:
a_i(t+1) = \varphi\!\left( \sum_{j=1}^{N} x_{ji}\, a_j(t) \right) \qquad (13)

where ai(t) is the activation degree of concept i at iteration t and φ(·) is a transformation function that bounds the activation degree to the range of [0, 1]. The sigmoid function is employed due to its good performance, which is defined as follows:

\varphi(x) = \frac{1}{1 + e^{-x}} \qquad (14)
For available data with m time points, the state matrix of all
nodes is shown as follows:
A_{1:m} = \begin{bmatrix} a_1(1) & \cdots & a_N(1) \\ \vdots & \ddots & \vdots \\ a_1(m) & \cdots & a_N(m) \end{bmatrix} \qquad (15)
The causal relationships between concepts play an important
role in the FCM model. To learn the weight matrix of FCM
from the available data A, we establish the following multi-
objective network reconstruction problem [42],
h(X, Y) = \left\| A_{1:m-1} X - Y_{2:m} \right\|_2^2, \quad g(X) = \left\| X \right\|_1, \quad X \in [-1, 1]^{N \times N} \qquad (16)

where ||X||_1 represents the l1-norm of X. The components in (16) are described as follows:

Y_{2:m} = \begin{bmatrix} a_1(2) & \cdots & a_N(2) \\ \vdots & \ddots & \vdots \\ a_1(m) & \cdots & a_N(m) \end{bmatrix} \qquad (17)

A_{1:m-1} = \begin{bmatrix} a_1(1) & \cdots & a_N(1) \\ \vdots & \ddots & \vdots \\ a_1(m-1) & \cdots & a_N(m-1) \end{bmatrix} \qquad (18)
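A compact Python sketch of the FCM model may help here. The code below is our own illustration (the names are ours, and the mismatch term follows our reading of (16)); it generates one response sequence with the update rule (13)–(14) and evaluates the two objectives for a candidate weight matrix.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))           # transformation function in (14)

def generate_fcm_sequence(X, a0, m):
    """Generate one response sequence of length m from weight matrix X via (13).

    X  : (N, N) weight matrix with X[j, i] = x_ji
    a0 : (N,) initial state values in [0, 1]
    """
    A = [np.asarray(a0, dtype=float)]
    for _ in range(m - 1):
        A.append(sigmoid(A[-1] @ X))          # a_i(t+1) = sigmoid(sum_j x_ji a_j(t))
    return np.vstack(A)                       # (m, N) state matrix A_{1:m} of (15)

def fcm_objectives(X, A):
    """Two objectives of (16) for candidate X and an observed state matrix A."""
    h = np.sum((A[:-1] @ X - A[1:]) ** 2)     # mismatch between A_{1:m-1} X and Y_{2:m}
    g = np.sum(np.abs(X))                     # l1-norm sparsity term
    return h, g
```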
Resistor Network Reconstruction Problems. We consider
the current transportation in a network consisting of resistors
[4]. If the voltages at the nodes and resistances of links are
known, the currents at the nodes can be calculated according to
Kirchhoff’s laws at different periods.
I_i = \sum_{j=1}^{N} \frac{1}{r_{ij}} \left( V_i - V_j \right) \qquad (19)
where Vi = V·sin((ω + ωi)t) is the voltage and Ii is the total current at node i, and rij is the resistance of a resistor between nodes i and j. In [4], V = 1 is the voltage peak, ω = 10^3 is the frequency, and ωi ∈ [0, 20] is the perturbation. Moreover, for simplicity, rij = 1 if nodes i and j are connected by a resistor and rij = ∞ if nodes i and j are not directly connected by a resistor.
We assume that only voltages and currents at the nodes are
measurable. To reconstruct the resistor network, the multi-
objective network reconstruction problem is established as
follows:
h(X, Y) = \sum_{i=1}^{N} \left\| R_i X_i - Y_i \right\|_2^2, \quad g(X) = \left\| X \right\|_0, \quad X \in \{0, 1\}^{N \times N} \qquad (20)

where the components are described as follows:

Y_i = \left( I_i(1), I_i(2), \ldots, I_i(m) \right)^{T} \qquad (21)

R_i = \begin{bmatrix} D_{i,1}(1) & \cdots & D_{i,i-1}(1) & D_{i,i+1}(1) & \cdots & D_{i,N}(1) \\ D_{i,1}(2) & \cdots & D_{i,i-1}(2) & D_{i,i+1}(2) & \cdots & D_{i,N}(2) \\ \vdots & & & & & \vdots \\ D_{i,1}(m) & \cdots & D_{i,i-1}(m) & D_{i,i+1}(m) & \cdots & D_{i,N}(m) \end{bmatrix} \qquad (22)

X_i = \left( \tfrac{1}{r_{i1}}, \ldots, \tfrac{1}{r_{i,i-1}}, \tfrac{1}{r_{i,i+1}}, \ldots, \tfrac{1}{r_{iN}} \right)^{T} \qquad (23)

where Di,j(t) = Vi(t) − Vj(t), and if nodes i and j are not connected, 1/rij = 0; otherwise, 1/rij > 0.
Generate Data. The numerical simulation of the EG is described as follows [5]: a) Input the target network. Each node is treated as an agent. Each agent must select the strategy of cooperation or defection; b) For agent i, in round t, the payoff of agent i is calculated using (6); c) The strategy of agent i is updated using (7). Repeat Step b) and Step c) until m iterative steps are reached. For this dynamic process, we record the strategies and payoffs of all agents in each round.
The numerical simulation of the FCM is described as follows
[41]: a) Input the target network and each nonzero weight is
generated randomly from [-1, 1]. The absolute value of each
nonzero weight needs to be larger than 0.05; b) The initial state
value of each concept is randomly generated in the range of [0,
1]; c) The response sequences are generated using (13). Repeat
Step c) until m iterative steps are reached. For this dynamical
process, we record the state values of all concepts at different
iterations.
The numerical simulation of the RN is described as follows
[5]: a) Input the target network and each node state is obtained
from a random number ωi ∈ [0, 20]; b) For node i, the voltage of node i is calculated by Vi = V·sin((ω + ωi)t); c) At time t, the
electrical current of the node is calculated using (19). Repeat
Step b) and Step c) until m iterative steps are reached. For this
dynamical process, we record the voltages and currents at the
nodes at different times.
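As a concrete, hedged illustration of the EG data-generation procedure above, the following Python sketch simulates a PDG on a known network and records the strategies and payoffs used as observed data. The seed handling, array shapes, and helper name are our own choices; the Fermi-rule direction follows our reconstruction of (7).

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[1.0, 0.0], [1.2, 0.0]])        # PDG payoff matrix of (5)
KAPPA = 0.1                                   # noise parameter of the Fermi rule (7)

def simulate_eg(adj, m):
    """Record strategies and payoffs of a PDG played on a 0/1 adjacency matrix."""
    N = adj.shape[0]
    strategy = rng.integers(0, 2, size=N)     # 0: cooperate, 1: defect
    S_hist, Y_hist = [], []
    for _ in range(m):
        S = np.eye(2)[strategy]               # one-hot strategies, shape (N, 2)
        payoff = np.array([sum(adj[i, j] * (S[i] @ P @ S[j]) for j in range(N))
                           for i in range(N)])            # payoff in (6)
        S_hist.append(S)
        Y_hist.append(payoff)
        # Fermi update (7): agent i imitates a random neighbor j with probability Pro.
        new_strategy = strategy.copy()
        for i in range(N):
            neighbors = np.flatnonzero(adj[i])
            if neighbors.size == 0:
                continue
            j = rng.choice(neighbors)
            pro = 1.0 / (1.0 + np.exp((payoff[i] - payoff[j]) / KAPPA))
            if rng.random() < pro:
                new_strategy[i] = strategy[j]
        strategy = new_strategy
    return np.array(S_hist), np.array(Y_hist)  # shapes (m, N, 2) and (m, N)
```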
IV. COMMUNITY-BASED EVOLUTIONARY MULTI-OBJECTIVE
NETWORK RECONSTRUCTION FRAMEWORK
The main scheme of the proposed community-based
evolutionary multi-objective network reconstruction
framework is presented in Algorithm 1. CEMO-NR adopts two different phases of optimization: a normal optimization stage and a community optimization stage. (Our MATLAB implementation of CEMO-NR can be accessed at https://github.com/SparseL/Community-NR.) First of all, the population of the employed MOEA is initialized randomly for problem Z
(line 1 in Algorithm 1). Then, the normal optimization stage is first carried out to optimize the original problem Z with a fixed number of function evaluations (line 2 in Algorithm 1). We perform this stage to obtain the initial
network structure which is employed by the community
optimization stage. The normal optimization stage can reach all
possible solutions but might have very slow convergence in the
high-dimensional decision space. Next, at the community
optimization stage (lines 5–7 in Algorithm 1), the problem Z is
decomposed into multiple small-scale subproblems by using the
community structure of the selected solution obtained from the
normal optimization stage. The solution representation of each
subproblem is the links within or between the community. Then,
the same MOEA is employed to optimize these low-
dimensional subproblems. The advantage of the community optimization stage is that it searches for improved solutions in a smaller decision space, which results in faster convergence.
Algorithm 1 CEMO-NR
Input: Z: MONRP;
  No: Population size for the normal stage;
  ND: Population size for the community stage;
  TFE: Total number of function evaluations;
  {t1, q, β}: Key parameters;
Output: Population P.
1: P ← Initialization(No, Z);
2: P ← Optimizer(Z, P, (1−β)·TFE); // normal stage
3: while used FE ≤ TFE do
4:   Q ← Selection(P, q); // Algorithm 2
5:   for k in (1:1:q) do
6:     Pk ← Community(Z, ND, Qk, t1); // Algorithm 3
7:   end for // community stage
8:   P ← Delete duplicated solutions in P ∪ Pk;
9:   P ← Do non-dominated sorting on P;
10: end while
11: return FirstNonDominatedFront(P);
Fig. 2. An example of the BGLL algorithm (initial network, modularity optimization, community aggregation, and final community structure). We also show an example of the selected solution Qk with three communities. The community C1 consists of nodes 1, 2, 3, 4, and 5. The community C2 consists of nodes 6, 7, and 8. The community C3 consists of nodes 9, 10, 11, and 12.
Algorithm 2 Selection
Input: P: Population;
  q: Selected population size;
Output: Population Q.
1: if size(P) ≤ q do
2:   Q ← P;
3: else
4:   F1, F2, … ← NDSort(P);
5:   k ← argmin_i (|F1 ∪ … ∪ Fi| ≥ q);
6:   CrowdDis ← CrowdingDistance(Fk);
7:   Delete |F1 ∪ … ∪ Fk| − q solutions from Fk with the smallest CrowdDis;
8:   Q ← F1 ∪ … ∪ Fk;
9: end if
10: return Q;
Algorithm 3 Community
Input: Z: MONRP;
  ND: Population size;
  Qk: The kth solution in Q;
  {t1}: Key parameter;
Output: Population Pk.
1: C1, …, Cp ← BGLL(Qk); // Section IV.B.1
2: for k in (1:1:p) do
3:   Zk ← Decomposition(Z, Ck);
4:   Wk ← Initialization(ND, Qk, Ck);
5:   Wk ← Optimizer(Zk, Wk, t1);
6:   (Qk, Pk) ← Update(Qk, Wk);
7: end for
8: Zp+1 ← Decomposition(Z, C1, …, Cp);
9: Wp+1 ← Initialization(ND, Qk, C1, …, Cp);
10: Wp+1 ← Optimizer(Zp+1, Wp+1, t1); // Infer the links between communities
11: Pp+1 ← Update(Qk, Wp+1);
12: Pk ← P1 ∪ … ∪ Pp+1;
13: return Pk;
However, this decomposition operator may cause the optimization objectives of the subproblems to be inconsistent with the original problem. To overcome this issue, we design a
strategy to bridge the gap between the original problem and the
decomposed problems (the detailed steps are presented in
Subsection IV.B.2 and Subsection IV.B.5). After that, the main
loop starts again (lines 3–10 in Algorithm 1). Finally, if the
algorithm stops, Algorithm 1 outputs the nondominated
solutions in P (line 11 in Algorithm 1).
Furthermore, the duplicated solutions found in the union set
are deleted (line 8 in Algorithm 1), and the nondominated
sorting procedure is performed to remove worse solutions
found in the current iteration (line 9 in Algorithm 1). The main
loop of CEMO-NR (lines 3–10 in Algorithm 1) is repeated until
TFE function evaluations are achieved, where β ∈ [0, 1] and TFE is the total number of function evaluations. The algorithm performs the normal optimization stage for (1−β)·TFE function evaluations (line 2 in Algorithm 1). In this paper, we select the
same base optimizer to optimize the original problem and the
decomposed problems. More detailed steps of CEMO-NR are
given in Algorithms 1–4, which are further described in the
following subsections.
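To summarize the control flow of Algorithm 1 in code form, the schematic Python sketch below mirrors the two optimization stages. It is our paraphrase, not the authors' MATLAB implementation: every operator (the embedded MOEA, Initialization, Selection, Community, environmental selection) is passed in as a callable, since their concrete realizations are given by Algorithms 2–4 and the chosen base optimizer.

```python
def cemo_nr(problem, optimizer, initialize, selection, community,
            environmental_selection, first_front,
            No, ND, TFE, t1, q, beta):
    """Schematic of the CEMO-NR loop (Algorithm 1); every operator is a callable."""
    P = initialize(No, problem)                               # line 1
    P, spent = optimizer(problem, P, int((1 - beta) * TFE))   # line 2: normal stage
    while spent < TFE:                                        # lines 3-10
        offspring = []
        for Qk in selection(P, q):                            # line 4: Algorithm 2
            Pk, used = community(problem, ND, Qk, t1)         # line 6: Algorithm 3
            offspring += Pk
            spent += used
        P = environmental_selection(P + offspring)            # lines 8-9
    return first_front(P)                                     # line 11
```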
A. How to Select Q from Population P
The choice of Q has a great effect on the performance of the
proposed framework. Unlike single-objective optimization, we cannot determine a single best solution for MOPs. Choosing one
solution from P may greatly accelerate the optimization process,
but may reduce the diversity of the population. To maintain the
diversity and convergence of the population, in this paper, we
select the best q solutions by applying the crowding distance
metric to the first nondominated front of the current population
P as suggested in [35]. The procedure of the Selection operator
is shown in Algorithm 2. As shown in Algorithm 2, the
population P is set as the selected population Q if the number
of individuals in population P is smaller than the predefined
value of q; otherwise, the nondominated front number [46] and
crowding distance [25] of each solution in P are calculated.
Afterward, q solutions with a better nondominated front number
and crowding distance in the population are set as Q. If two
extreme solutions have infinite crowding distance, we select
one solution randomly.
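The following Python re-implementation is for illustration only (the authors use the corresponding operators of the embedded MOEAs in PlatEMO); it shows one way to realize Algorithm 2: solutions are ranked by nondominated front, and ties inside the last admitted front are broken by crowding distance.

```python
import numpy as np

def fast_nondominated_fronts(F):
    """Split objective matrix F (n x M, minimization) into nondominated fronts."""
    n = len(F)
    dominated_by = [set() for _ in range(n)]
    count = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j and np.all(F[i] <= F[j]) and np.any(F[i] < F[j]):
                dominated_by[i].add(j)
    for i in range(n):
        for j in dominated_by[i]:
            count[j] += 1
    fronts, current = [], [i for i in range(n) if count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

def crowding_distance(F):
    """Crowding distance of each row of F within one front."""
    n, M = F.shape
    dist = np.zeros(n)
    for m in range(M):
        order = np.argsort(F[:, m])
        dist[order[0]] = dist[order[-1]] = np.inf
        span = F[order[-1], m] - F[order[0], m]
        if span > 0:
            dist[order[1:-1]] += (F[order[2:], m] - F[order[:-2], m]) / span
    return dist

def selection(F, q):
    """Indices of the q solutions kept by Algorithm 2 (front number, then crowding)."""
    if len(F) <= q:
        return list(range(len(F)))
    chosen = []
    for front in fast_nondominated_fronts(F):
        if len(chosen) + len(front) <= q:
            chosen.extend(front)
        else:
            cd = crowding_distance(F[front])
            keep = np.argsort(-cd)[: q - len(chosen)]   # largest crowding distance first
            chosen.extend(np.asarray(front)[keep].tolist())
            break
    return chosen
```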
B. How to Use the Information of Community Structure
After selecting q solutions from the original population P, we
need to handle these solutions with the proposed Community
operator (Algorithm 3). In the network reconstruction problem,
each individual in Q represents the learned network structure.
According to the knowledge of community structure in
complex networks, we can employ the community detection
methods to find the community structure in the selected
individual Qk (see Section IV.B.1). The links among nodes
within communities are denser than the links between
communities. Inspired by this knowledge, the Decomposition
operator is proposed to decompose the original problem (see
Section IV.B.2). Moreover, a simple but effective Initialization
operator is proposed to initialize the population of the
decomposed problem Zk (see Section IV.B.3). Then, an MOEA
can be employed to optimize problem Zk with t1 FEs (see
Section IV.B.4). Perform lines 2–11 in Algorithm 3 until all
decomposed subproblems are optimized. Finally, to embed the
optimized solutions into the original solution Qk, we design an
Update operator (see Section IV.B.5).
1) Community Detection
The goal of this operator is to obtain the community structure
of solution Qk. In this paper, we employ the BGLL method to deal with this task.

Fig. 3. The solution representation of problem Zk. For example, for subproblem Z1, the goal of this problem is to infer the links among nodes in community C1. Thus, we need to infer the links between nodes 1, 2, 3, 4, and 5. Note that X4 represents the links between communities.

Fig. 4. An example of generating 5 individuals for problem Zk. In Qk, the relationships among nodes in community C3 are selected, and their values are set to Wk1. We change each element in Wk1 with probability rand, and then we obtain the second individual of Wk. Repeat this process until the required number of individuals in Wk is achieved.

Fig. 5. An example of the Assign operator. In Qk, the links within community C1 are selected from problem Z1. The links within community C2 are selected from problem Z2. The links within community C3 are selected from problem Z3. The links between communities are selected from problem Z4.

BGLL [23] is a fast heuristic method based
on modularity optimization [22]. The modularity is defined as
follows:
modularity = \frac{1}{2b} \sum_{i,j} \left[ X_{ij} - \frac{k_i k_j}{2b} \right] \delta(c_i, c_j) \qquad (24)

where k_i = Σ_j X_ij is the sum of the weights of the edges attached to node i, c_i is the community to which node i is assigned, δ(u, v) = 1 if u = v and 0 otherwise, and b = Σ_ij X_ij. To determine the maximal positive gain in modularity, we define the gain in modularity, Δmodularity, obtained by moving an isolated node i into a community C as follows:

\Delta modularity = \left[ \frac{\Sigma_{in} + k_{i,in}}{2b} - \left( \frac{\Sigma_{tot} + k_i}{2b} \right)^{2} \right] - \left[ \frac{\Sigma_{in}}{2b} - \left( \frac{\Sigma_{tot}}{2b} \right)^{2} - \left( \frac{k_i}{2b} \right)^{2} \right] \qquad (25)

where Σ_in is the sum of the weights of the links inside C, Σ_tot is the sum of the weights of the links incident to nodes in C, and k_{i,in} is the sum of the weights of the links from node i to nodes in C.
The BGLL algorithm can find more natural community
structures of networks due to no prior knowledge about the
community number. The procedure of BGLL consists of the
following phases:
a) Each node in Qk is considered as a community. One
node is removed from its original community to its
neighbor’s community which has the maximal
positive gain in modularity. Repeat this phase for all
nodes until no further improvement can be achieved.
b) Each community obtained in the first phase is
considered as a node so that a new network can be
built. Conduct Step a) until no further improvement
can be achieved.
c) BGLL runs Steps a) and b) iteratively until the stable
result and the maximal modularity are achieved.
Finally, we obtain the community labels of all nodes, which are important for the next procedure. The procedure of BGLL with one example is shown in Fig. 2. Ck (k = 1, …, p) contains the nodes in the kth community. The time complexity of the
BGLL is O(Ld), where L is the number of links in social
networks and d is the maximum degree of the network. In Fig.
2, we give an example of solution Qk with three communities
C1, C2, C3 found by the BGLL algorithm.
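As a concrete illustration of the quantity that BGLL maximizes, the sketch below computes the modularity of a given community assignment in Python. The helper name is ours, and it uses the common convention in which the normalizer equals the total weight of the adjacency matrix; it is only an illustration, not the BGLL implementation used in the paper.

```python
import numpy as np

def modularity(X, labels):
    """Modularity of a community assignment (convention: normalizer = sum_ij X_ij)."""
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    total = X.sum()                             # total weight, counting both directions
    k = X.sum(axis=1)                           # k_i = sum_j X_ij
    same = labels[:, None] == labels[None, :]   # delta(c_i, c_j)
    return float(np.sum((X - np.outer(k, k) / total) * same) / total)

# Two disconnected triangles: the correct split into two communities scores highly.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1
print(modularity(A, [0, 0, 0, 1, 1, 1]))        # ≈ 0.5
```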
2) Problem Decomposition
The goal of this operator is to decompose the original
problem Z into Zk (k = 1, …, p+1) using the obtained community
structure of solution Qk. As shown in Fig. 2, the nodes within
the same community are more densely connected than the nodes
outside the community. According to the community structure
found in Fig. 2, we can decompose problem Z into Z1, Z2, Z3,
and Z4. The detailed solution representation of problem Zk is
illustrated in Fig. 3, and the problem Zk is defined as follows:
Z_k: \min_{X_k} \; \left( h(X_k, Y_k),\; g(X_k) \right) \qquad (26)
where Xk is the links among nodes in the kth community or the
links between communities and Yk is the state vector of nodes
in the kth community. For each subproblem Zk (k = 1, …, p), the
links between nodes in the kth community are inferred. Note
that Zp+1 represents the links between communities. It is an
effective and natural way to perform problem decomposition
based on the community structure in complex networks due to
fewer connections among communities.
3) Population Initialization for Zk
The population initialization strategy of problem Zk is
presented in Fig. 4. As shown in line 1 of Algorithm 1, the
method of generating population randomly ignores the useful
information of Qk. Thus, we propose a simple but effective
strategy to generate the required population. An example of
generating 5 individuals for problem Zk is shown in Fig. 4. First,
the solution Wk is obtained from the original solution Qk and
then Wk is reserved in the initial population of problem Zk.
Second, each element in Wk is changed with the probability
rand = 0.8. For instance, in Fig. 4, if a random number is greater
than rand, the second element in W31 is set to 0. We can obtain
a new population with ND individuals by repeating this operator.
Moreover, for the FCM reconstruction problem, if a random
number is greater than 0.8, the second element in W31 is set to a random number that belongs to [−1, 1].

Algorithm 4 Update
Input: Qk: Solution;
  Wk: The solutions for problem Zk;
Output: NewP and Qk.
1: for i in (1:1:ND) do
2:   Qtemp ← Qk;
3:   NewPi ← Assign(Qtemp, Wki);
4: end for
5: Wtemp ← Selection(Wk, 1);
6: Qk ← Assign(Qk, Wtemp);
7: return NewP and Qk;

Fig. 6. An example of how to evaluate the solutions of subproblem Zk and how to update the population from the subproblems. We first optimize the subproblem Z1 and obtain the population W1. To avoid the information loss between problem Z and the subproblems, we assign each solution in W1 to solution Qtemp (Qtemp ← Qk), where the decision variables in Z1 are replaced with solutions W1 and other decision variables are maintained; then solutions NewPi can be evaluated using problem Z. Please note that this procedure does not change the elements in Qk. If the procedure of optimizing subproblem Z1 stops, the population NewPi is used for updating the population P and we select a solution from W1; we assign it to solution Qk and then obtain a new solution Qk for the next subproblem Z2. Repeat these operators until all subproblems are solved.
4) Subproblem Optimization
The procedure of subproblem optimization is the same as the
procedure of the normal stage. We can employ the MOEA to
optimize subproblem Zk. However, if the original problem is
complex, non-decomposable, and nonlinear, we encounter the
information loss in terms of objective function between
problem Z and the subproblems Zk (k = 1, …, p+1). For example, h(X, Y) ≠ h(X1, Y1) + … + h(Xk, Yk) + … + h(Xp+1, Yp+1). Moreover, to
evaluate the fitness function of one solution, we need to know
the state vector of all nodes. We simulate the network model
using the learned solutions (the whole network structure) and
obtain the simulated data; then we calculate the difference
between observed data and simulated data. If we want to
evaluate the solutions of subproblems, we need to combine
subcomponents of other subproblems, which is different from
the current CC framework. To overcome this issue, we design
the following procedure: to evaluate the solutions in problem Zk,
in Qk, the links within the kth community are replaced with the
solutions in problem Zk (An example of this procedure is shown
in Fig. 6).
5) Update Population
The goal of this operator is to assign the solutions obtained
from the subproblems to the original solution Qk (lines 6 and 11
in Algorithm 3). This operator is shown in Algorithm 4. For
each solution in problem Zk, its elements are assigned to the
solution in Qk (lines 1-4 in Algorithm 4) and we can obtain the
population NewP. An example of Assign operator is presented
in Fig. 5. To save computational resources, we select a solution
using the Selection operator (Algorithm 2) from the obtained
population Wk and set it to Wtemp. The elements of Wtemp are
assigned to Qk and we return the changed Qk for the next
subproblem Zk+1. Moreover, an example of how to evaluate the
solutions in subproblem Zk and how to update the population
from the subproblems is presented in Fig. 6.
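The embedding step described above and in Fig. 6 can be sketched in Python as follows. The helper names are ours, and `evaluate_full` stands for whatever objective evaluation the original problem Z uses; each subproblem solution is written into a copy of Qk before being evaluated, so that no information about the rest of the network is lost.

```python
import numpy as np

def assign(Q_k, sub_indices, w):
    """Embed a subproblem solution w into a copy of the full solution Q_k.

    Q_k         : (D,) full decision vector of the original problem Z
    sub_indices : indices of the decision variables owned by subproblem Z_k
    w           : values of those variables proposed by the subproblem optimizer
    """
    full = Q_k.copy()
    full[np.asarray(sub_indices)] = w
    return full

def evaluate_subpopulation(Q_k, sub_indices, W, evaluate_full):
    """Evaluate every subproblem solution in W on the original problem Z (cf. Fig. 6).

    evaluate_full : callable mapping a full decision vector to its M objectives
    """
    return [evaluate_full(assign(Q_k, sub_indices, w)) for w in W]
```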
C. Discussion
The differences between our proposal and the CC framework
are discussed as follows:
1) Normal stage. Unlike the CC framework, we design the
normal stage (line 2 in Algorithm 1). The goal of network
reconstruction is to infer networks from observed data.
The initial network structure is unknown. Thus, to
decompose the network structure into small components
by using community structure, we need the initial network
structure.
2) Community-based decomposition strategy (Algorithm
3). It is our main contribution. Many CC frameworks have
been proposed to handle different tasks. They exploited
the characteristics of the tasks to design suitable
decomposition strategies. Similar to this idea, we propose
a community-based decomposition strategy to handle
large-scale network reconstruction problems. Inspired by
the community structure in social networks, it is a
reasonable idea to decompose the high-dimensional
network reconstruction problem into multiple low-
dimensional subproblems according to the learned
community structure.
3) Update subproblems in order. Unlike the CC framework
that updates the subproblems in parallel, we update the
subproblems in order because the solutions of
subproblems are evaluated by combining the
subcomponents of other subproblems. This strategy can
exploit the advantage of the optimal solutions of other
subproblems and promote the performance of the whole
algorithm. However, this strategy increases the running
time compared with the CC framework.
V. EXPERIMENTS
This section consists of four parts. Section V.A introduces
these experimental settings. In Section V.B, we test the
performance of CEMO-NR on 30 MONRPs. We compare
CEMO-NR with four state-of-the-art methods, and the results
are shown in Section V.C. Section V.D analyzes the effect of
three parameters on the performance of CEMO-NR.
A. Experimental Setup
Algorithms. To verify the performance of CEMO-NR, three
popular MOEAs are employed in this experiment, namely NSGA-II [25], SPEA2 [26], and SMS-EMOA [27]. They have
been verified to be effective for solving MOPs including
combinatorial MOPs. Three MOEAs are implemented on the
evolutionary multi-objective optimization platform PlatEMO
[43].
TABLE II
PROBLEMS USED IN THIS PAPER.

Problem ID | Type of variables | Ns-m | Network | D | N | L
FCM1 | Real, xij ∈ [−1, 1] | 5-10 | ZK | 1156 | 34 | 78
FCM2 | Real, xij ∈ [−1, 1] | 5-10 | polbooks | 11025 | 105 | 441
FCM3 | Real, xij ∈ [−1, 1] | 5-10 | football | 13225 | 115 | 613
FCM4 | Real, xij ∈ [−1, 1] | 5-10 | lesmis | 5929 | 77 | 254
FCM5 | Real, xij ∈ [−1, 1] | 5-10 | dolphin | 3844 | 62 | 159
FCM6 | Real, xij ∈ [−1, 1] | 20-10 | ZK | 1156 | 34 | 78
FCM7 | Real, xij ∈ [−1, 1] | 20-10 | polbooks | 11025 | 105 | 441
FCM8 | Real, xij ∈ [−1, 1] | 20-10 | football | 13225 | 115 | 613
FCM9 | Real, xij ∈ [−1, 1] | 20-10 | lesmis | 5929 | 77 | 254
FCM10 | Real, xij ∈ [−1, 1] | 20-10 | dolphin | 3844 | 62 | 159
EG1 | Binary, xij ∈ {0, 1} | 5-10 | ZK | 1156 | 34 | 78
EG2 | Binary, xij ∈ {0, 1} | 5-10 | polbooks | 11025 | 105 | 441
EG3 | Binary, xij ∈ {0, 1} | 5-10 | football | 13225 | 115 | 613
EG4 | Binary, xij ∈ {0, 1} | 5-10 | lesmis | 5929 | 77 | 254
EG5 | Binary, xij ∈ {0, 1} | 5-10 | dolphin | 3844 | 62 | 159
EG6 | Binary, xij ∈ {0, 1} | 20-10 | ZK | 1156 | 34 | 78
EG7 | Binary, xij ∈ {0, 1} | 20-10 | polbooks | 11025 | 105 | 441
EG8 | Binary, xij ∈ {0, 1} | 20-10 | football | 13225 | 115 | 613
EG9 | Binary, xij ∈ {0, 1} | 20-10 | lesmis | 5929 | 77 | 254
EG10 | Binary, xij ∈ {0, 1} | 20-10 | dolphin | 3844 | 62 | 159
RN1 | Binary, xij ∈ {0, 1} | 5-10 | ZK | 1156 | 34 | 78
RN2 | Binary, xij ∈ {0, 1} | 5-10 | polbooks | 11025 | 105 | 441
RN3 | Binary, xij ∈ {0, 1} | 5-10 | football | 13225 | 115 | 613
RN4 | Binary, xij ∈ {0, 1} | 5-10 | lesmis | 5929 | 77 | 254
RN5 | Binary, xij ∈ {0, 1} | 5-10 | dolphin | 3844 | 62 | 159
RN6 | Binary, xij ∈ {0, 1} | 20-10 | ZK | 1156 | 34 | 78
RN7 | Binary, xij ∈ {0, 1} | 20-10 | polbooks | 11025 | 105 | 441
RN8 | Binary, xij ∈ {0, 1} | 20-10 | football | 13225 | 115 | 613
RN9 | Binary, xij ∈ {0, 1} | 20-10 | lesmis | 5929 | 77 | 254
RN10 | Binary, xij ∈ {0, 1} | 20-10 | dolphin | 3844 | 62 | 159
Evaluation Metrics. To quantify the performance of our
framework, two measurement indices are employed, the
hypervolume (HV) [44] and the area under the receiver
operating characteristic curve (AUC) [45], [61]. The HV is
employed to measure each obtained solution set because the
Pareto fronts of the MONRPs are unknown. The reference point
(RP) of HV is set to the maximum value of each objective. On
each MOP, 30 independent runs are performed for MOEA to
obtain statistical results, and the Wilcoxon rank-sum test is
adopted to test the significance of the experimental results. For
FCM reconstruction problems, the learned xij is a real number.
To calculate the AUC of FCM reconstruction problems, nodes i and j are regarded as connected if |xij| ≥ 0.05 and as unconnected if |xij| < 0.05 [31].
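For completeness, one common way to compute the AUC used here is sketched below in Python (our helper; scikit-learn's roc_auc_score should give the same value): the absolute learned weights serve as link scores and the true off-diagonal links as labels.

```python
import numpy as np
from scipy.stats import rankdata

def link_auc(X_true, X_learned):
    """AUC for link prediction: |learned weights| are scores, true links are labels.

    Off-diagonal entries only (no self-loops in the EG and RN models); the ground
    truth must contain both linked and unlinked pairs.
    """
    N = X_true.shape[0]
    mask = ~np.eye(N, dtype=bool)
    labels = (np.asarray(X_true)[mask] != 0).astype(int)
    scores = np.abs(np.asarray(X_learned)[mask])
    ranks = rankdata(scores)                      # average ranks handle tied scores
    pos = labels.sum()
    neg = labels.size - pos
    # The Mann-Whitney U statistic divided by pos*neg equals the ROC AUC.
    return (ranks[labels == 1].sum() - pos * (pos + 1) / 2) / (pos * neg)
```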
Problems. We select five real social networks for each of the
three MONRPs. These social networks consist of football [48],
polbooks [49], dolphin [50], ZK [51], and lesmis [52] networks.
The details of five social networks used in this paper are
presented in Table II, including the number of nodes N and the
number of links L. The dataset for each problem is generated using the methods described in Section III.
TABLE III
MEDIAN HV VALUES AND INTERQUARTILE RANGES OBTAINED BY SIX COMPARED ALGORITHMS ON 30 TEST INSTANCES. THE BEST RESULTS IN EACH TWO COLUMNS ARE HIGHLIGHTED. '+', '−', AND '≈' INDICATE THAT THE RESULT IS SIGNIFICANTLY BETTER, SIGNIFICANTLY WORSE, AND STATISTICALLY SIMILAR TO THAT OBTAINED BY CEMO-NR-ALG, RESPECTIVELY.

Problem ID | CEMO-NR-NSGA-II | NSGA-II | CEMO-NR-SPEA2 | SPEA2 | CEMO-NR-SMS-EMOA | SMS-EMOA
FCM1 | 0.581(0.051) | 0.527(0.031)− | 0.572(0.043) | 0.424(0.076)− | 0.638(0.030) | 0.599(0.020)−
FCM2 | 0.116(0.008) | 0.093(0.009)− | 0.109(0.009) | 0.082(0.023)− | 0.135(0.017) | 0.107(0.006)−
FCM3 | 0.102(0.009) | 0.079(0.009)− | 0.098(0.006) | 0.070(0.010)− | 0.116(0.010) | 0.092(0.003)−
FCM4 | 0.180(0.014) | 0.155(0.014)− | 0.177(0.049) | 0.142(0.054)− | 0.221(0.009) | 0.183(0.015)−
FCM5 | 0.177(0.025) | 0.143(0.021)− | 0.157(0.044) | 0.114(0.081)− | 0.226(0.021) | 0.178(0.015)−
FCM6 | 0.209(0.041) | 0.151(0.020)− | 0.194(0.046) | 0.083(0.025)− | 0.283(0.064) | 0.214(0.024)−
FCM7 | 0.605(0.008) | 0.580(0.007)− | 0.603(0.011) | 0.581(0.015)− | 0.620(0.008) | 0.600(0.007)−
FCM8 | 0.077(0.011) | 0.056(0.008)− | 0.072(0.015) | 0.043(0.014)− | 0.095(0.007) | 0.070(0.006)−
FCM9 | 0.158(0.007) | 0.133(0.010)− | 0.145(0.047) | 0.119(0.046)− | 0.200(0.016) | 0.164(0.008)−
FCM10 | 0.150(0.016) | 0.115(0.012)− | 0.123(0.064) | 0.088(0.059)− | 0.193(0.017) | 0.147(0.013)−
EG1 | 0.550(0.032) | 0.578(0.028)+ | 0.566(0.015) | 0.604(0.016)+ | 0.534(0.048) | 0.546(0.042)+
EG2 | 0.533(0.009) | 0.020(0.010)− | 0.533(0.012) | 0.027(0.011)− | 0.539(0.008) | 0.117(0.017)−
EG3 | 0.502(0.010) | 0.019(0.007)− | 0.511(0.010) | 0.017(0.006)− | 0.512(0.008) | 0.100(0.014)−
EG4 | 0.176(0.020) | 0.055(0.007)− | 0.179(0.021) | 0.052(0.007)− | 0.175(0.024) | 0.105(0.013)−
EG5 | 0.130(0.016) | 0.191(0.021)+ | 0.141(0.014) | 0.174(0.011)+ | 0.126(0.025) | 0.203(0.013)+
EG6 | 0.521(0.028) | 0.559(0.017)+ | 0.538(0.023) | 0.581(0.016)+ | 0.507(0.036) | 0.542(0.017)+
EG7 | 0.501(0.008) | 0.016(0.007)− | 0.507(0.007) | 0.018(0.006)− | 0.512(0.012) | 0.108(0.011)−
EG8 | 0.506(0.008) | 0.017(0.007)− | 0.515(0.019) | 0.018(0.005)− | 0.525(0.007) | 0.099(0.010)−
EG9 | 0.114(0.009) | 0.060(0.009)− | 0.124(0.014) | 0.063(0.014)− | 0.118(0.016) | 0.106(0.009)−
EG10 | 0.134(0.012) | 0.216(0.015)+ | 0.142(0.010) | 0.210(0.012)+ | 0.135(0.006) | 0.228(0.006)+
RN1 | 0.486(0.026) | 0.533(0.023)+ | 0.492(0.017) | 0.548(0.010)+ | 0.482(0.029) | 0.509(0.023)+
RN2 | 0.562(0.008) | 0.016(0.003)− | 0.566(0.004) | 0.020(0.010)− | 0.567(0.004) | 0.129(0.009)−
RN3 | 0.571(0.003) | 0.017(0.008)− | 0.574(0.006) | 0.017(0.007)− | 0.582(0.004) | 0.102(0.013)−
RN4 | 0.104(0.006) | 0.067(0.007)− | 0.105(0.006) | 0.058(0.006)− | 0.100(0.007) | 0.090(0.006)−
RN5 | 0.126(0.008) | 0.182(0.008)+ | 0.125(0.005) | 0.171(0.008)+ | 0.121(0.009) | 0.195(0.008)+
RN6 | 0.509(0.024) | 0.545(0.016)+ | 0.512(0.024) | 0.559(0.019)+ | 0.498(0.043) | 0.524(0.020)+
RN7 | 0.566(0.004) | 0.016(0.006)− | 0.567(0.004) | 0.018(0.006)− | 0.571(0.006) | 0.126(0.012)−
RN8 | 0.579(0.005) | 0.014(0.009)− | 0.578(0.004) | 0.018(0.007)− | 0.585(0.004) | 0.107(0.009)−
RN9 | 0.089(0.006) | 0.063(0.007)− | 0.095(0.008) | 0.058(0.010)− | 0.093(0.005) | 0.085(0.010)−
RN10 | 0.133(0.015) | 0.195(0.009)+ | 0.135(0.010) | 0.184(0.005)+ | 0.133(0.009) | 0.202(0.011)+
+/−/≈ | | 8/22/0 | | 8/22/0 | | 8/22/0
Fig. 7. (a) HV of CEMO-NR-NSGA-II versus varying number of evaluations; (b) AUC of CEMO-NR-NSGA-II versus varying number of evaluations.

Two types of response sequences are employed. The first set is the case of 5 response sequences with 10 iterations each (Ns = 5, m = 10). The second set is the case of 20 response sequences with 10 iterations each (Ns = 20, m = 10). In Table II, FCM1–FCM10,
EG1–EG10, and RN1–RN10 denote the FCM learning problem,
the EG reconstruction problem, and the RN reconstruction
problem with five real-world networks, respectively.
Operators. Three MOEAs employ the same crossover and
mutation operators. The single-point crossover and bitwise
mutation are employed in EG and RN network reconstruction
problems. The simulated binary crossover and polynomial
mutation are employed for the FCM learning problems, where
the distribution indexes of both crossover and mutation are set
to 20. In all three MOEAs, the probabilities of mutation and
crossover are set to 1D and 1.0, respectively.
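To make the binary variation operators concrete, the Python sketch below (our own illustration; SBX and polynomial mutation for the real-coded FCM problems are standard operators and omitted here) implements single-point crossover and bitwise mutation with the 1/D mutation rate mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

def single_point_crossover(p1, p2):
    """Single-point crossover for binary-coded EG/RN solutions."""
    point = rng.integers(1, len(p1))
    c1 = np.concatenate([p1[:point], p2[point:]])
    c2 = np.concatenate([p2[:point], p1[point:]])
    return c1, c2

def bitwise_mutation(x, rate=None):
    """Flip each bit independently; the paper uses a rate of 1/D."""
    rate = 1.0 / len(x) if rate is None else rate
    flip = rng.random(len(x)) < rate
    child = x.copy()
    child[flip] = 1 - child[flip]
    return child

# Example on a D = 10 binary decision vector.
a = rng.integers(0, 2, size=10)
b = rng.integers(0, 2, size=10)
c1, c2 = single_point_crossover(a, b)
print(bitwise_mutation(c1))
```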
Function Evaluations and Population Size. The total
number of function evaluations TFE is set to 200000 in all
experiments. In CEMO-NR, t1 is set to 1000, q is set to 1, and β is set to 0.5. The size of the population for the original
problem is set to 100, while the size of the population for each
subproblem is set to 20 owing to the reduced size of the search
space.
Fig. 8. (a) HV of CEMO-NR-NSGA-II versus varying q; (b) AUC of CEMO-NR-NSGA-II versus varying q.
TABLE IV
MEDIAN AUC VALUES AND INTERQUARTILE RANGES OBTAINED BY SIX COMPARED ALGORITHMS ON 30 TEST INSTANCES. THE BEST RESULTS IN EACH TWO COLUMNS ARE HIGHLIGHTED. '+', '−', AND '≈' INDICATE THAT THE RESULT IS SIGNIFICANTLY BETTER, SIGNIFICANTLY WORSE, AND STATISTICALLY SIMILAR TO THAT OBTAINED BY CEMO-NR-ALG, RESPECTIVELY.

Problem ID | CEMO-NR-NSGA-II | NSGA-II | CEMO-NR-SPEA2 | SPEA2 | CEMO-NR-SMS-EMOA | SMS-EMOA
FCM1 | 0.551(0.024) | 0.521(0.027)− | 0.534(0.020) | 0.513(0.039)− | 0.549(0.024) | 0.520(0.027)−
FCM2 | 0.498(0.007) | 0.504(0.006)+ | 0.500(0.012) | 0.503(0.007)≈ | 0.498(0.008) | 0.500(0.008)≈
FCM3 | 0.500(0.009) | 0.497(0.007)− | 0.501(0.009) | 0.500(0.002)− | 0.501(0.005) | 0.500(0.009)≈
FCM4 | 0.501(0.011) | 0.502(0.007)≈ | 0.499(0.008) | 0.505(0.007)+ | 0.498(0.009) | 0.499(0.012)≈
FCM5 | 0.498(0.011) | 0.500(0.004)≈ | 0.501(0.017) | 0.503(0.015)≈ | 0.497(0.017) | 0.500(0.011)≈
FCM6 | 0.536(0.022) | 0.513(0.038)− | 0.538(0.025) | 0.517(0.022)− | 0.538(0.033) | 0.535(0.015)−
FCM7 | 0.498(0.009) | 0.507(0.008)+ | 0.498(0.008) | 0.500(0.009)≈ | 0.502(0.012) | 0.503(0.009)≈
FCM8 | 0.500(0.007) | 0.495(0.007)− | 0.499(0.005) | 0.500(0.006)≈ | 0.500(0.005) | 0.500(0.009)≈
FCM9 | 0.496(0.008) | 0.496(0.010)≈ | 0.494(0.016) | 0.504(0.012)+ | 0.497(0.011) | 0.495(0.007)≈
FCM10 | 0.500(0.008) | 0.504(0.013)+ | 0.502(0.010) | 0.499(0.014)− | 0.500(0.013) | 0.500(0.018)≈
EG1 | 0.863(0.026) | 0.797(0.026)− | 0.847(0.018) | 0.791(0.017)− | 0.861(0.025) | 0.817(0.011)−
EG2 | 0.573(0.008) | 0.523(0.009)− | 0.567(0.014) | 0.515(0.011)− | 0.581(0.009) | 0.521(0.009)−
EG3 | 0.539(0.005) | 0.514(0.007)− | 0.539(0.007) | 0.512(0.009)− | 0.546(0.008) | 0.511(0.006)−
EG4 | 0.698(0.017) | 0.612(0.013)− | 0.685(0.008) | 0.604(0.013)− | 0.694(0.010) | 0.632(0.016)−
EG5 | 0.752(0.028) | 0.686(0.016)− | 0.737(0.010) | 0.665(0.015)− | 0.745(0.021) | 0.694(0.016)−
EG6 | 0.933(0.020) | 0.845(0.025)− | 0.893(0.034) | 0.854(0.021)− | 0.933(0.029) | 0.859(0.033)−
EG7 | 0.601(0.009) | 0.526(0.006)− | 0.597(0.007) | 0.525(0.009)− | 0.615(0.012) | 0.531(0.011)−
EG8 | 0.546(0.007) | 0.513(0.011)− | 0.541(0.006) | 0.508(0.012)− | 0.556(0.009) | 0.509(0.009)−
EG9 | 0.738(0.012) | 0.609(0.011)− | 0.732(0.020) | 0.601(0.014)− | 0.741(0.010) | 0.640(0.008)−
EG10 | 0.828(0.022) | 0.718(0.009)− | 0.819(0.022) | 0.701(0.018)− | 0.832(0.016) | 0.738(0.021)−
RN1 | 0.916(0.012) | 0.863(0.021)− | 0.871(0.020) | 0.856(0.016)− | 0.914(0.019) | 0.881(0.012)−
RN2 | 0.550(0.005) | 0.512(0.011)− | 0.551(0.008) | 0.513(0.013)− | 0.555(0.013) | 0.511(0.008)−
RN3 | 0.520(0.005) | 0.506(0.010)− | 0.518(0.006) | 0.503(0.006)− | 0.527(0.007) | 0.505(0.007)−
RN4 | 0.677(0.019) | 0.585(0.010)− | 0.670(0.013) | 0.572(0.010)− | 0.677(0.015) | 0.597(0.013)−
RN5 | 0.749(0.014) | 0.668(0.022)− | 0.735(0.032) | 0.654(0.018)− | 0.756(0.026) | 0.682(0.012)−
RN6 | 0.942(0.009) | 0.889(0.015)− | 0.924(0.031) | 0.881(0.015)− | 0.944(0.010) | 0.902(0.019)−
RN7 | 0.559(0.006) | 0.516(0.005)− | 0.558(0.009) | 0.515(0.010)− | 0.570(0.007) | 0.515(0.007)−
RN8 | 0.523(0.007) | 0.505(0.007)− | 0.518(0.006) | 0.501(0.010)− | 0.531(0.004) | 0.503(0.007)−
RN9 | 0.713(0.018) | 0.591(0.014)− | 0.721(0.011) | 0.581(0.013)− | 0.727(0.016) | 0.614(0.013)−
RN10 | 0.805(0.025) | 0.689(0.015)− | 0.792(0.008) | 0.670(0.021)− | 0.804(0.013) | 0.710(0.013)−
+/−/≈ | | 3/24/3 | | 2/24/4 | | 0/23/8
B. Results on MONRPs
Table III shows the median HV values and interquartile
ranges obtained by NSGA-II, SPEA2, SMS-EMOA, and
CEMO-NR-Alg on FCM1–FCM10, EG1–EG10, and RN1–
RN10, where CEMO-NR-Alg denotes the CEMO-NR with the
algorithm Alg embedded. In terms of AUC, the experimental
results are reported in Table IV. Note that we select the
individual with the highest value of AUC to represent the
performance of all compared methods.
In Table III, the HV of CEMO-NR-Alg exceeds the three
original algorithms in 66 out of 90 cases and loses 24 times to
the three original algorithms. In the cases of FCM1-FCM10,
CEMO-NR-Alg exceeds NSGA-II, SPEA2, and SMS-EMOA
in all cases. From the results shown in Table IV, in terms of
AUC, we can see that CEMO-NR-Alg outperforms NSGA-II,
SPEA2, and SMS-EMOA in EG1–EG10 and RN1–RN10
problems. In the cases of FCM1-FCM10, CEMO-NR-Alg
matches or exceeds NSGA-II, SPEA2, and SMS-EMOA in 25
out of 30 cases, and loses 5 times to the three original
algorithms. For the problems EG1–EG10 and RN1–RN10 with
ZK and dolphin networks, the performance of CEMO-NR is
worse than the original version in terms of HV, but outperforms
the original version in terms of AUC. The goal of MONRPs is
to infer the high-quality network structure but not the diversity
of Pareto solutions. Thus, the results support our argument that
CEMO-NR can improve the performance of EA-based network
reconstruction methods. The parameter q can control the
convergence and diversity of Pareto solutions (the detailed
discussion is shown in Section V.D). The balance between
convergence and diversity of Pareto solutions can be selected
by the decision-makers based on their preference. In this
subsection, we mainly focus on the convergence of CEMO-NR
and set q=1, which may result in a low value of HV (low
diversity of Pareto solutions). Moreover, for the cases with
football, polbooks, and lesmis networks, the performance of
CEMO-NR is better than that of the original version in terms of
HV and AUC.
We also test the effectiveness of the proposed community-based decomposition strategy on the EG1 problem. The convergence curves of HV and AUC are shown in Fig. 7, from which we can see that both CEMO-NR-NSGA-II and NSGA-II converge. In terms of AUC, CEMO-NR-NSGA-II and NSGA-II perform similarly during the normal stage. Then, in the community stage, as the number of evaluations increases, the AUC of CEMO-NR-NSGA-II rises rapidly while that of NSGA-II increases only slowly. We can therefore claim that the proposed community stage is effective in improving the reconstruction accuracy and accelerating the convergence of NSGA-II. In terms of HV, the two algorithms again perform similarly during the normal stage. In the community stage, as the number of evaluations increases, the HV of CEMO-NR-NSGA-II remains at a stable level while that of NSGA-II increases slowly. This happens because CEMO-NR-NSGA-II pays little attention to improving the diversity of the Pareto front when q = 1. Nevertheless, as the number of evaluations increases, CEMO-NR-NSGA-II still maintains a fairly good HV value.
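To make the community stage concrete, the sketch below groups the candidate-edge decision variables by the communities of the currently inferred network. It is only an illustration under our own assumptions: networkx's greedy modularity routine stands in for the BGLL method used by CEMO-NR, and cross-community edges are gathered into one extra group.

# Community-based decision-space decomposition (illustrative stand-in for BGLL).
import itertools
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def decompose_decision_space(inferred_adj: np.ndarray):
    """Split candidate edges (decision variables) by community membership."""
    g = nx.from_numpy_array(np.abs(inferred_adj))   # |weights| keep modularity well defined
    communities = [sorted(c) for c in greedy_modularity_communities(g)]
    member = {v: k for k, nodes in enumerate(communities) for v in nodes}
    # one reduced decision space per community ...
    groups = [list(itertools.combinations(nodes, 2)) for nodes in communities]
    # ... plus the edges whose endpoints lie in different communities
    cross = [(i, j) for i, j in itertools.combinations(range(len(member)), 2)
             if member[i] != member[j]]
    return groups, cross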
C. Comparison with State-of-the-art Methods
We also compare CEMO-NR with other state-of-the-art large-scale MOEAs, including WOF-NSGA-II, LSMOF-NSGA-II, MOEA/DVA, and SparseEA [47]. The results are shown in Table V. For a fair comparison, the number of function evaluations of all compared MOEAs is set to 200000. In WOF-NSGA-II, LSMOF-NSGA-II, and SparseEA, the population size and the parameters of the crossover and mutation operators are the same as those of CEMO-NR-NSGA-II. In WOF-NSGA-II, the number of groups is set to 4, the number of evaluations for the transformed problem to 500, the number of evaluations for the original problem to 1000, the number of chosen solutions for weight optimization to 3, and the fraction of evaluations for weight optimization to 0.5. In MOEA/DVA, the number of sampling solutions used to recognize the control properties of the decision variables is set to 20, and the maximum number of trials used to judge the interaction between two variables is set to 6. In LSMOF, the number of reference solutions is set to 10 and the population size for the single-objective optimization is set to 30.
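For convenience, the baseline settings listed above can be summarized in dictionary form as follows (a plain transcription; the key names are ours).

# Transcription of the baseline parameter settings described above
# (key names are illustrative; values are taken from the text).
BASELINE_SETTINGS = {
    "total_FEs": 200_000,                       # same budget for every MOEA
    "WOF-NSGA-II": {
        "num_groups": 4,
        "FEs_transformed_problem": 500,
        "FEs_original_problem": 1000,
        "chosen_solutions_for_weights": 3,
        "weight_optimization_FE_fraction": 0.5,
    },
    "MOEA/DVA": {
        "sampling_solutions": 20,               # control-property recognition
        "max_interaction_trials": 6,            # variable-interaction test
    },
    "LSMOF-NSGA-II": {
        "reference_solutions": 10,
        "single_objective_pop_size": 30,
    },
}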
TABLE V
MEDIAN AUC VALUES AND INTERQUARTILE RANGES OBTAINED BY FIVE COMPARED ALGORITHMS ON 15 TEST INSTANCES.

Problem ID  CEMO-NR-NSGA-II  MOEA/DVA       WOF-NSGA-II    LSMOF-NSGA-II  SparseEA
FCM1        0.551(0.024)     0.511(0.027)−  0.529(0.009)−  0.516(0.009)−  0.496(0.021)−
FCM2        0.498(0.007)     0.517(0.006)+  0.501(0.004)   0.517(0.007)+  0.494(0.005)−
FCM3        0.500(0.009)     0.500(0.007)   0.501(0.009)   0.498(0.008)   0.483(0.005)−
FCM4        0.501(0.011)     0.515(0.007)+  0.499(0.008)−  0.505(0.007)+  0.556(0.009)+
FCM5        0.498(0.011)     0.507(0.004)+  0.518(0.003)+  0.516(0.005)+  0.491(0.013)−
EG1         0.863(0.026)     /              0.660(0.018)−  /              0.804(0.025)−
EG2         0.573(0.008)     /              0.515(0.014)−  /              0.572(0.008)−
EG3         0.539(0.005)     /              0.514(0.007)−  /              0.548(0.006)+
EG4         0.698(0.017)     /              0.610(0.008)−  /              0.676(0.011)−
EG5         0.752(0.028)     /              0.563(0.010)−  /              0.653(0.019)−
RN1         0.916(0.012)     /              0.689(0.021)−  /              0.808(0.015)−
RN2         0.550(0.005)     /              0.490(0.008)−  /              0.545(0.014)−
RN3         0.520(0.005)     /              0.512(0.006)−  /              0.552(0.007)+
RN4         0.677(0.019)     /              0.589(0.013)−  /              0.673(0.013)−
RN5         0.749(0.014)     /              0.537(0.032)−  /              0.653(0.021)−
+/−/≈       −                3/1/1          1/12/2         3/1/1          3/12/0
We do not apply LSMOF and MOEA/DVA to the EG1–EG5 and RN1–RN5 problems because of their poor ability to handle problems with binary variables.
From the results shown in Table V, in terms of AUC, CEMO-NR-NSGA-II matches or exceeds MOEA/DVA and LSMOF-NSGA-II in 2 out of 5 cases and is worse in the remaining three. CEMO-NR-NSGA-II outperforms WOF-NSGA-II on the EG1–EG5 and RN1–RN5 problems. On FCM1–FCM5, CEMO-NR-NSGA-II matches or exceeds WOF-NSGA-II in 4 out of 5 cases and is worse once. CEMO-NR-NSGA-II matches or exceeds SparseEA in 12 out of 15 cases and is worse three times. These results demonstrate the good performance of CEMO-NR.
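The +/−/≈ marks in Tables III–V summarize pairwise comparisons over the repeated runs. The exact statistical test is not restated in this section; the sketch below assumes a Wilcoxon rank-sum test at the 0.05 level, which is only our illustrative choice.

# Hedged sketch of how a +/−/≈ mark could be produced from repeated-run scores
# of a baseline and CEMO-NR (Wilcoxon rank-sum at alpha = 0.05 is assumed).
import numpy as np
from scipy.stats import ranksums

def mark(baseline_runs, cemo_nr_runs, alpha=0.05):
    """Return '+', '−', or '≈' for a baseline relative to CEMO-NR."""
    _, p = ranksums(baseline_runs, cemo_nr_runs)
    if p >= alpha:
        return "≈"                               # no significant difference
    better = np.median(baseline_runs) > np.median(cemo_nr_runs)
    return "+" if better else "−"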
D. Analysis of the Parameter Sensitivity in CEMO-NR
In this subsection, we discuss the effect of three parameters on the performance of CEMO-NR, taking the problem EG1 as an example: 1) q, the number of chosen solutions (line 4 in Algorithm 1); 2) the share of the TFE allocated to the community optimization stage; and 3) t1, the number of FEs used for optimizing each decomposed problem (line 6 in Algorithm 3). To analyze one parameter, the values of the other parameters are fixed. NSGA-II is employed as the base optimizer.
Fig. 8 shows the AUC and HV of CEMO-NR versus varying q, with q set to 1, 2, 3, 5, and 8. From Fig. 8 we can see that the value of q has a strong effect on the performance of CEMO-NR. As q increases, the HV of CEMO-NR grows until q = 3; beyond that, HV decreases. Fig. 8(b) shows the AUC of CEMO-NR versus varying q: a larger q yields a smaller AUC. Choosing a single solution from P can greatly accelerate the optimization process but may reduce the diversity of the population. When q = 1, we obtain the best AUC but the worst HV. Hence, q should be set to 3 for a higher HV and to 1 for a higher AUC. Under a fixed number of FEs, the number of generations allocated to each subproblem decreases as q increases, so the advantage of problem decomposition may be offset. The parameter q therefore plays an important role in balancing the diversity and convergence of CEMO-NR, and decision-makers can choose its value according to their practical demands. In this paper, high reconstruction accuracy in MONRPs is required, so we set q to 1.
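The budget effect mentioned above can be illustrated with simple arithmetic: under a fixed community-stage budget, each of the q chosen solutions spawns its own subproblem optimizations, so the generations per subproblem shrink as q grows. The accounting below is a back-of-the-envelope illustration with made-up numbers, not the exact bookkeeping of CEMO-NR.

# Rough generations available to each decomposed subproblem (illustrative only).
def generations_per_subproblem(stage_FEs, num_communities, q, pop_size):
    fe_per_subproblem = stage_FEs / (num_communities * q)   # FEs per subproblem
    return fe_per_subproblem / pop_size                     # generations per subproblem

# e.g. 100000 stage FEs, 4 communities, population of 50:
#   q = 1 -> 500 generations per subproblem; q = 5 -> 100.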
Fig. 9 shows the AUC and HV of CEMO-NR versus the share of the TFE given to the community stage, with this share set to 0.1, 0.3, 0.5, and 0.8. In Fig. 9(a), the HV of CEMO-NR increases as the share grows. Fig. 9(b) shows the corresponding AUC: the AUC increases with the share until the share exceeds 0.3, after which the median AUC remains at a stable level. However, we also find that the AUC of CEMO-NR decreases when the share becomes too large, because fewer FEs remain for the normal optimization stage; the accuracy of the initial network inferred in that stage strongly affects the accuracy of the decomposed problems.
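The role of this share parameter is easiest to see as a budget split between the two stages. The snippet below is a trivial illustration under our own naming, not CEMO-NR's actual bookkeeping.

# Divide the total FE budget between the normal and community stages.
def split_budget(total_FEs, community_share):
    community_FEs = int(total_FEs * community_share)
    normal_FEs = total_FEs - community_FEs
    return normal_FEs, community_FEs

# e.g. split_budget(200000, 0.3) -> (140000, 60000): a larger share leaves
# fewer FEs for the normal stage that produces the initial network.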
Fig. 10 shows the AUC and HV of CEMO-NR versus varying t1, with t1 set to 100, 300, 500, 800, and 1000.
Fig. 9. (a) HV of CEMO-NR-NSGA-II versus the varying TFE share for the community stage; (b) AUC of CEMO-NR-NSGA-II versus the varying TFE share.
Fig. 10. (a) HV of CEMO-NR-NSGA-II versus varying t1; (b) AUC of CEMO-NR-NSGA-II versus varying t1.
In Fig. 10(a), we can find that the value of t1 has a strong effect on HV. Fig. 10(b) shows the AUC of CEMO-NR versus varying t1: as t1 increases, the median AUC increases, whereas a larger t1 tends to yield a smaller HV because it reduces the diversity of the Pareto front. In terms of AUC, t1 can therefore be set to 1000. If t1 is set to a small value, CEMO-NR decomposes the original problem frequently, which leads to low reconstruction accuracy in terms of AUC.
These parameters thus play an important role in the performance of CEMO-NR. They affect the convergence and diversity of the Pareto solutions and need to be tuned according to the preference of the decision-makers. Because different network reconstruction problems have different characteristics, it is difficult to learn common patterns for model selection, and a parameter combination that is proper for one EG model may not be suitable for the others.
VI. CONCLUSION
Because current EA-based network reconstruction methods perform poorly on large-scale complex network reconstruction problems, this paper proposes a community-based network reconstruction framework. The core of CEMO-NR is to use the community structure to divide the decision space of network reconstruction problems. CEMO-NR is suitable for networks with community structure; in this case, it can decompose the high-dimensional problem quickly owing to the low time complexity of BGLL. Existing research on complex networks shows that many networks from different backgrounds have community structure, which indicates that CEMO-NR has a wide range of potential applications. In the worst case, when the inferred network has no significant community structure, our proposal degenerates to the standard MOEA. The experimental results on 30 MONRPs demonstrate the effectiveness of CEMO-NR compared with the original MOEAs and support the claim that CEMO-NR can overcome the limitations of current network reconstruction methods in dealing with high-dimensional complex network reconstruction problems.
REFERENCES
[1] S. H. Strogatz, “Exploring complex networks,” Nature, vol. 410, no. 6825,
pp. 268−276, 2001.
[2] L. Ma, J. Li, Q. Lin, M. Gong, C. A. Coello Coello, and Z. Ming, “Cost-
aware robust control of signed networks by using a memetic algorithm,”
IEEE Transactions on Cybernetics, DOI: 10.1109/TCYB.2019.2932996,
2019.
[3] A. C. Haury, F. Mordelet, P. Vera-Licona, and J. P. Vert, “TIGRESS:
trustful inference of gene regulation using stability selection,” BMC
Systems Biology, vol. 6, no. 1, pp. 145, 2012.
[4] X. Han, Z. Shen, W. X. Wang, and Z. Di, “Robust reconstruction of
complex networks from sparse data,” Phys. Rev. Lett., vol. 114, pp.
028701, 2015.
[5] W. X. Wang, Y. C. Lai, C. Grebogi, and J. Ye, “Network reconstruction
based on evolutionary-game data via compressive sensing,” Physical
Review X, vol. 1, pp. 021021, 2011.
[6] Z. Shen, W. X. Wang, Y. Fan, Z. Di, and Y. C. Lai, “Reconstructing
propagation networks with natural diversity and identifying hidden
sources,” Nature Communications, vol. 5, pp. 4324, 2014.
[7] A. A. Margolin, I. Nemenman, K. Basso, C. Wiggins, G. Stolovitzky, R.
Dalla Favera, and A. Califano, “ARACNE: an algorithm for the
reconstruction of gene regulatory networks in a mammalian cellular
context,” BMC Bioinformatics, vol. 7, pp. S7, 2006.
[8] V. A. Huynh-Thu, A. Irrthum, L. Wehenkel, and P. Geurts, “Inferring
regulatory networks from expression data using tree-based methods,”
PLoS ONE, vol. 5, pp. e12776, 2010.
[9] K. Wu, J. Liu, and S. Wang, “Reconstructing networks from profit
sequences in evolutionary games via a multiobjective optimization
approach with lasso initialization,” Scientific Reports, vol. 6, pp. 37771,
2016.
[10] K. Wu, J. Liu, and D. Chen. “Network reconstruction based on time series
via memetic algorithm,” Knowledge-Based Systems, vol. 164, pp.
404−425, 2019.
[11] D. Marbach et al. “Wisdom of crowds for robust gene network inference,”
Nature Methods, vol. 9, no. 8, pp. 796, 2012.
[12] S. Kimura et al. “Inference of S-system models of genetic networks using
a cooperative coevolutionary algorithm,” Bioinformatics, vol. 21, no. 7,
pp. 1154−1163, 2004.
[13] W. P. Lee and Y. T. Hsiao, “Inferring gene regulatory networks using a
hybrid GA–PSO approach with numerical constraints and network
decomposition,” Information Sciences, vol. 188, pp. 80−99, 2012.
[14] S. Kikuchi, D. Tominaga, M. Arita, K. Takahashi, and M. Tomita,
“Dynamic modeling of genetic networks using genetic algorithm and S-
system,” Bioinformatics, vol. 19, no. 5, pp. 643−650, 2003.
[15] N. Noman and H. Iba, “Inferring gene regulatory networks using
differential evolution with local search heuristics,” IEEE/ACM
Transactions on Computational Biology and Bioinformatics, vol. 4, no. 4,
pp. 634−647, 2007.
[16] L. Palafox, N. Noman, and H. Iba, “Reverse engineering of gene
regulatory networks using dissipative particle swarm optimization,” IEEE
Transactions on Evolutionary Computation, vol. 17, no. 4, pp. 577−587,
2012.
[17] K. Wu and J. Liu, “Robust learning of large-scale fuzzy cognitive maps
via the lasso from noisy time series,” Knowledge-Based Systems, vol. 113,
pp. 23−38, 2016.
[18] K. Wu and J. Liu, “Learning large-scale fuzzy cognitive maps based on
compressed sensing and application in reconstructing gene regulatory
networks,” IEEE Transactions on Fuzzy Systems, vol. 25, no. 6, pp.
1546−1560, 2017.
[19] R. Xu, D. Wunsch II, and R. Frank, “Inference of genetic regulatory
networks with recurrent neural network models using particle swarm
optimization,” IEEE/ACM Transactions on Computational Biology and
Bioinformatics, vol. 4, no. 4, pp. 681−692, 2007.
[20] G. Davis, “Adaptive nonlinear approximations,” Ph.D. dissertation, Dept.
Math., Courant Institute of Mathematical Sciences, New York Univ., New
York, NY, USA, 1994.
[21] J. Liu, Y. Chi, and C. Zhu, “A dynamic multiagent genetic algorithm for
gene regulatory network reconstruction based on fuzzy cognitive maps,”
IEEE Transactions on Fuzzy Systems, vol. 24, no. 2, pp. 419−431, 2015.
[22] M. E. Newman, “Modularity and community structure in networks,” Proc.
Natl. Acad. Sci., vol. 103, no. 23, pp. 8577–8582, 2006.
[23] V. D. Blondel, J. L. Guillaume, R. Lambiotte, and E. Lefebvre, “Fast
unfolding of communities in large networks,” J. Stat. Mech. Theory Exp.,
vol. 2008, no. 10, pp. P10008, 2008.
[24] Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm
based on decomposition,” IEEE Transactions on Evolutionary
Computation, vol. 11, pp. 712–731, 2007.
[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multi-
objective genetic algorithm: NSGA-II,” IEEE Transactions on
Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
[26] E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the strength
Pareto evolutionary algorithm for multiobjective optimization,” in
Proceedings of the Fifth Conference on Evolutionary Methods for Design,
Optimization and Control with Applications to Industrial Problems, pp.
95–100, 2001.
[27] N. Beume, B. Naujoks, and M. Emmerich, “SMS-EMOA: Multiobjective
selection based on dominated hypervolume,” European Journal of
Operational Research, vol. 181, no. 3, pp. 1653–1669, 2007.
[28] M. Gong, Q. Cai, X. Chen, and L. Ma, “Complex network clustering by
multiobjective discrete particle swarm optimization based on
decomposition,” IEEE Transactions on Evolutionary Computation, vol.
18, no. 1, pp. 82–97, 2014.
[29] M. A. Nowak and R. M. May, “Evolutionary games and spatial chaos,”
Nature, vol. 359, pp. 826−829, 1992.
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION
15
[30] G. Szabó and G. Fath, “Evolutionary games on graphs,” Phys. Rep., vol.
446, pp. 97−216, 2007.
[31] B. Kosko, “Fuzzy cognitive maps,” International Journal of Man-
Machine Studies, vol. 24, no. 1, pp. 65–75, 1986.
[32] X. Zou and J. Liu, “A mutual information-based two-phase memetic
algorithm for large-scale fuzzy cognitive map learning,” IEEE
Transactions on Fuzzy Systems, vol. 26, no. 4, pp. 2120–2134, 2018.
[33] X. Ma, F. Liu, Y. Qi, X. Wang, L. Li, L. Jiao, M. Yin, and M. Gong, “A
multiobjective evolutionary algorithm based on decision variable analyses
for multi-objective optimization problems with large scale variables,”
IEEE Transactions on Evolutionary Computation, vol. 20, no. 2, pp. 275–
298, 2016.
[34] X. Zhang, Y. Tian, Y. Jin, and R. Cheng, “A decision variable clustering
based evolutionary algorithm for large-scale many-objective
optimization,” IEEE Transactions on Evolutionary Computation, vol. 22,
no. 1, pp. 97–112, 2016.
[35] H. Zille, H. Ishibuchi, S. Mostaghim, and Y. Nojima, “A framework for
large-scale multi-objective optimization based on problem
transformation,” IEEE Transactions on Evolutionary Computation, vol.
22, no. 2, pp. 260–275, 2018.
[36] C. He, L. Li, Y. Tian, X. Zhang, R. Cheng, Y. Jin, and X. Yao,
“Accelerating large-scale multi-objective optimization via problem
reformulation,” IEEE Transactions on Evolutionary Computation, vol. 23,
no. 6, pp. 949–961, 2019.
[37] L. M. Antonio and C. A. C. Coello, “Coevolutionary multi-objective
evolutionary algorithms: A survey of the state-of-the-art,” IEEE
Transactions on Evolutionary Computation, vol. 22, no. 6, pp. 851–865,
2017.
[38] L. M. Antonio and C. A. C. Coello, “Use of cooperative coevolution for
solving large scale multiobjective optimization problems,” in
Proceedings of 2013 IEEE Congress on Evolutionary Computation, 2013,
pp. 2758–2765.
[39] M. Gong, H. Li, E. Luo, J. Liu, and J. Liu, “A multiobjective cooperative
coevolutionary algorithm for hyperspectral sparse unmixing,” IEEE
Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 234–248,
2017.
[40] G. Szabó and C. Tőke, “Evolutionary prisoner’s dilemma game on a
square lattice,” Physical Review E, vol. 58, pp. 69, 1998.
[41] W. Stach, L. Kurgan, W. Pedrycz, and M. Reformat, “Genetic learning of
fuzzy cognitive maps,” Fuzzy Sets and Systems, vol. 153, no. 3, pp. 371–
401, 2005.
[42] Y. Chi and J. Liu, “Learning of fuzzy cognitive maps with varying
densities using a multi-objective evolutionary algorithm,” IEEE
Transactions Fuzzy Systems, vol. 24, no. 1, pp. 71–81, 2016.
[43] Y. Tian, R. Cheng, X. Zhang, and Y. Jin, “PlatEMO: A MATLAB
platform for evolutionary multi-objective optimization [educational
forum],” IEEE Computational Intelligence Magazine, vol. 12, pp. 73–87,
2017.
[44] L. While, P. Hingston, L. Barone, and S. Huband, “A faster algorithm for
calculating hypervolume,” IEEE Transactions on Evolutionary
Computation, vol. 10, no. 1, pp. 29–38, 2006.
[45] J. Grau, I. Grosse, and J. Keilwagen, “PRROC: computing and visualizing
precision-recall and receiver operating characteristic curves in R,”
Bioinformatics, vol. 31, no. 15, pp. 2595−2597, 2015.
[46] Y. Tian, H. Wang, X. Zhang, and Y. Jin, “Effectiveness and efficiency of
non-dominated sorting for evolutionary multi- and many-objective
optimization,” Complex & Intelligent Systems, vol. 3, no. 4, pp. 247–263,
2017.
[47] Y. Tian, X. Zhang, C. Wang, and Y. Jin, “An evolutionary algorithm for
large-scale sparse multi-objective optimization problems,” IEEE
Transactions on Evolutionary Computation. DOI:
10.1109/TEVC.2019.2918140.
[48] M. E. Newman, “Finding community structure in networks using the
eigenvectors of matrices,” Phys. Rev. E, vol. 74, pp. 036104, 2006.
[49] V. Krebs, http://www.orgnet.com/divided.html.
[50] D. Lusseau, K. Schneider, O. J. Boisseau, P. Haase, E. Slooten, and S. M.
Dawson, “The bottlenose dolphin community of doubtful sound features
a large proportion of long-lasting associations,” Behav. Ecol. Sociobiol.,
vol. 54, no. 4, pp. 396−405, 2003.
[51] W. W. Zachary, “An information flow model for conflict and fission in
small groups,” J. Anthropol. Res., pp. 452−473, 1977.
[52] D. E. Knuth, The Stanford GraphBase: A Platform for Combinatorial
Computing, Addison-Wesley, Reading, MA, 1993.
[53] K. Wu, J. Liu, P. Liu, and S. Yang, “Time series prediction using sparse
autoencoder and high-order fuzzy cognitive maps,” IEEE Transactions on
Fuzzy Systems, DOI: 10.1109/TFUZZ.2019.2956904, 2019.
[54] J. Liu, Y. Chi, C. Zhu, and Y. Jin, “A time series driven decomposed
evolutionary optimization approach for reconstructing large-scale gene
regulatory networks based on fuzzy cognitive maps,” BMC
Bioinformatics, vol. 18, no. 1, pp. 241, 2017.
[55] Y. Tian, X. Zheng, X. Zhang, and Y. Jin, “Efficient large-scale
multiobjective optimization based on a competitive swarm optimizer,”
IEEE Transactions on Cybernetics, DOI:
10.1109/TCYB.2019.2906383, 2019.
[56] M. N. Omidvar, X. Li, Y. Mei, and X. Yao, “Cooperative co-evolution
with differential grouping for large scale optimization,” IEEE
Transactions Evolutionary Computation, vol. 18, no. 3, pp. 378–393,
2014.
[57] W. Hong, K. Tang, A. Zhou, H. Ishibuchi, and X. Yao, “A scalable
indicator-based evolutionary algorithm for large-scale multiobjective
optimization,” IEEE Transactions on Evolutionary Computation, vol. 23,
no. 3, pp. 525–537, 2019.
[58] B. Cao, J. Zhao, Z. Lv, and X. Liu, “A distributed parallel cooperative
coevolutionary multiobjective evolutionary algorithm for large-scale
optimization,” IEEE Trans. Ind. Informat., vol. 13, no. 4, pp. 2030–2038,
2017.
[59] X. Ma, X. Li, Q. Zhang, K. Tang, Z. Liang, W. Xie, and Z. Zhu, “A survey
on cooperative co-evolutionary algorithms,” IEEE Transactions on
Evolutionary Computation, vol. 23, no. 3, pp. 421–441, 2019.
[60] M. E. Newman, “The structure and function of complex networks,” SIAM
Review, vol. 45, no. 2, pp. 167–256, 2003.
[61] T. Fawcett, “An introduction to ROC analysis,” Pattern Recognition
Letters, vol. 27, pp. 861–874, 2006.
[62] K. Wu, J. Liu, P. Liu, and F. Shen, “Online fuzzy cognitive map learning,”
IEEE Transactions on Fuzzy Systems, DOI:
10.1109/TFUZZ.2020.2988845, 2020.