
A Proposed Method for Improving Rigid Registration Robustness

Constantinos Spanakis #∗1, Emmanuel Mathioudakis ∗2, Nicholaos Kampanis +3, Manolis Tsiknakis #4, Kostas Marias #5

#Institute of Computer Science,

Foundation of Research and Technology-Hellas

∗School of Mineral Resources Engineering

Technical University of Crete, Greece

+Institute of Applied and Computational Mathematics,

Foundation of Research and Technology-Hellas

1kspan@ics.forth.gr

2manolis@amcl.tuc.com

3kampanis@iacm.forth.gr

4tsiknaki@ics.forth.gr

5kmarias@ics.forth.gr

Abstract—Image Registration is the task of aligning a pair of

images in order to bring them in a common reference frame.

It is widely used in several image processing applications in

order to align monomodal, multimodal and/or temporal image

information. For example, in medical imaging, registration is often

used to align either mono-modal temporal series of images or

multi-modal data in order to maximize the diagnostic information

and enable physicians to better establish diagnosis and plan

the treatment. This paper first analyzes the well-known Maes' method for image registration, explaining some of its drawbacks. As a remedy to these problems, a novel image registration method is proposed as an extension of Maes' method, which overcomes its drawbacks and improves robustness. Results on satellite and medical images indicate that the presented method is more robust compared to well-known algorithms, at the expense of increased computational time.

Index Terms—Image Registration, Mutual Information, Genetic

Algorithms

I. INTRODUCTION

Image Registration is the process of aligning images acquired from different perspectives, in different time frames, or

even through different modalities [1]. The pair of images to

be aligned is known as Source/Sensed Image (subjected to

transform) and Target/Reference Image (static image). There

are numerous applications that utilize image registration such

as mapping, remote sensing, medical imaging, computer vi-

sion, etc. It remains largely an unsolved problem due to the

staggering diversity of images and the types of geometrical

and photometrical variations that can be present in any image

pair. Registration techniques take into consideration a number

of factors including: a) Modality (i.e. mono-modal or multi-

modal registration problem), b) Geometry (rigid or non-rigid),

c) Interaction requirements (automated or semi-automated),

d) Execution time (e.g. in some cases there is a need to

perform fast i.e. in application relevant times), and e) Accuracy

of registration result needed for a given speciﬁc application.

As a consequence, there are numerous methods proposed for

image registration, the majority of which belong to one of the

following categories:

•Feature-based: A set of distinctive features (e.g. salient points ([2], [3], [4], [5], [6]), curves, contours ([7], [8], [9], [10], [11])) is selected from the Source Image, together with a corresponding set from the Target Image. This can be done either manually or automatically. Then, what remains is to find the geometric transformation that maps the coordinates of the features of the Source set to those of the Target set, which is the transformation that can align the two images. Such methods can be very fast, but they are problematic in images with poorly distinct features, especially in the presence of noise. Furthermore, the choice of a wrong set of features can give inaccurate registration results.

•Intensity-based: Instead of using features, the images' intensity patterns are compared using correlation measures to calculate their similarity/difference ([12], [13], [14], [15], [16], [17]). For the calculation of the images' similarity, either the whole image or at least a sub-image is required, which makes these methods computationally expensive. In addition, they are not easily applicable to non-rigid registration. Compared to the methods of the previous category, however, they are more robust and accurate.
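To make the intensity-based idea concrete, a simple correlation measure such as normalized cross-correlation can be computed directly from the intensity patterns. The sketch below is a generic illustration over flattened intensity sequences; the function name and interface are ours, not taken from any cited method:

```python
import math

def normalized_cross_correlation(a, b):
    """Normalized cross-correlation of two equal-length intensity
    sequences; values near 1.0 indicate strongly similar patterns."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a) *
                    sum((y - mean_b) ** 2 for y in b))
    return num / den if den else 0.0
```

Mutual information, used later in this paper, plays the same role as such a measure but also handles multi-modal pairs in which intensities are related non-linearly.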

The method presented in this paper was initially designed for rigid medical image registration, although its generalized nature makes it suitable for many other applications. In medical imaging, we need to align pairs of images, often acquired from different modalities with varying degrees of detail present.

International Journal of Computer Science and Information Security (IJCSIS),

Vol. 14, No. 5, May 2016

1

https://sites.google.com/site/ijcsis/

ISSN 1947-5500

Feature-based methods can be used, but the lack of distinct

features may require the interactive selection of distinctive

features by an expert or the introduction of ﬁducial landmarks.

Maes’ intensity-based method has been successfully used in

medical image registration [12]. However, there are a number

of problems which lead to limited robustness of this method,

especially in difﬁcult rigid registration cases. In this paper

we present a novel method based on Maes' which aims to address the drawbacks of the original method and improve its robustness. There has been a lot of research on the medical rigid registration problem. Some of the proposed methods are feature-based, using either distinct points ([5], [6]) or surfaces/boundaries ([7], [8], [9], [10], [11]). Due to the frequent difficulty of robustly identifying landmark pairs in medical images, intensity-based methods (pure or in combination with feature-based ones) are widely used. Maes' method [12] uses

mutual information as a correlation metric, optimized with Powell's method [18], with quite good results. This method has subsequently been improved by using better correlation metrics, such as normalized mutual information [13] and the entropy correlation coefficient, instead of mutual information. Viola's method [14] also uses mutual

information as a correlation metric, but uses stochastic gradient

descent in order to ﬁnd the optimal transformation. In addition

to Powell’s method and stochastic gradient descent, other

methods[15] are used for maximizing mutual information such

as downhill simplex, steepest gradient descent, quasi-Newton

methods, conjugate-gradient descent, least-square methods and

multiresolution techniques. Furthermore, mutual information

and its variants are extended by adding spatial information[17].

Since intensity-based methods can be generalized, the method

presented in this paper can be used in many other image

processing applications where accuracy is required (e.g. earth

remote sensing). In this paper, a variant of Maes’ image

registration method [12] is proposed. This variant uses a

genetic algorithm[19] to ﬁnd the optimal transformation for

the alignment of the images. Genetic algorithms have been

previously used for image registration([20], [21], [22], [23],

[24], [25]), either using the same or other similarity measure

for image comparison. Herein, we propose the use of a variant of the genetic algorithm known as elitism [26], which has not previously been proposed in the context of image registration.

The purpose is to provide a more robust mutual information

optimization framework that outperforms both the original

Maes method as well as other methods widely used for rigid

registration. To this end, our method is also compared to two widely used ITK (http://www.itk.org/) methods in terms of execution time and accuracy. ITK is a widely known open-source toolkit for image processing. In the next section, we present

brieﬂy the limitations of Maes’ method and a description of

the genetic algorithms. The proposed method is fully described

in the third section, where we present a series of experiments

for explaining the functionality of the method. Finally, in the

fourth section the results are presented along with ideas on

future studies.

II. EXTENDING MAES' METHOD ON RIGID REGISTRATION

In this section we present our method for rigid registration

extending the well-known Maes’ method. Maes’ image regis-

tration method is based on the idea that when two images are

aligned their mutual information is maximized. In this method

Shannon’s mutual information[27] is used as a similarity

metric for the comparison of the Source and Target image.

The optimal transform of the Source image that maximizes

the mutual information is found using Powell’s method [18]

for minimization, which in turn uses Brent's method [28] for one-dimensional minimization.
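To make the similarity metric concrete, Shannon's mutual information between two images can be estimated from a joint intensity histogram, sketched below for 8-bit intensities flattened into sequences. The helper name, binning, and interface are our illustrative choices, not the paper's implementation:

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """Estimate Shannon mutual information (in bits) between two
    equal-length 8-bit intensity sequences via a joint histogram."""
    n = len(a)
    # Quantize intensities in [0, 255] into `bins` levels.
    qa = [min(v * bins // 256, bins - 1) for v in a]
    qb = [min(v * bins // 256, bins - 1) for v in b]
    joint, pa, pb = Counter(zip(qa, qb)), Counter(qa), Counter(qb)
    # MI = sum over bins of p(i,j) * log2( p(i,j) / (p(i) p(j)) )
    return sum((c / n) * math.log2(c * n / (pa[i] * pb[j]))
               for (i, j), c in joint.items())
```

When the images are aligned, corresponding intensities become statistically dependent and the estimate rises toward its maximum; independence pushes it toward zero.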

A. Powell’s minimization method

Powell’s conjugate direction method [18] is a method for

finding the local minimum of a multi-variate function. It is an iterative method which starts from an initial point x0, then searches for the minimum along n linearly independent directions d_i, i = 1, ..., n, where n is the number of variables.

The basic procedure is the following one:

Algorithm 1 Iteration process
Require: fun, p0, d
1: for r = 1, ..., n do
2:   Find λ_r so that fun(p_{r-1} + λ_r d_r) is minimum
3:   p_r = p_{r-1} + λ_r d_r
4: end for
5: for r = 1, ..., n-1 do
6:   d_r = d_{r+1}
7: end for
8: d_n = p_n - p_0
9: Find λ so that fun(p_n + λ(p_n - p_0)) is minimum
10: p_0 = p_n + λ(p_n - p_0)

The minimization along each direction d_i, i = 1, ..., n is accomplished using Brent's univariate function minimization method [28]. If either the maximum number of iterations is reached or the improvement is minimal (i.e. |fun(p_new) − fun(p_old)| < tol, where tol is a tolerance value), Powell's method terminates, yielding the solution found so far.
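The iteration above can be sketched in Python as follows. This is a minimal illustration under stated simplifications: a golden-section line search stands in for Brent's method, and all names and defaults are ours, not the authors' implementation:

```python
def line_min(f, p, d, a=-5.0, b=5.0, tol=1e-6):
    """Minimize f(p + lam*d) over lam in [a, b] by golden-section
    search (a simple stand-in for Brent's line search)."""
    gr = (5 ** 0.5 - 1) / 2
    move = lambda lam: [pi + lam * di for pi, di in zip(p, d)]
    while b - a > tol:
        c, e = b - gr * (b - a), a + gr * (b - a)
        if f(move(c)) < f(move(e)):
            b = e
        else:
            a = c
    return (a + b) / 2

def powell(f, p0, max_iter=50, tol=1e-9):
    """Powell's conjugate-direction minimization of f, starting
    from p0 with the coordinate axes as initial directions."""
    n = len(p0)
    dirs = [[float(i == j) for j in range(n)] for i in range(n)]
    p = list(p0)
    for _ in range(max_iter):
        f_old, p_start = f(p), list(p)
        for d in dirs:                     # steps 1-4 of Algorithm 1
            lam = line_min(f, p, d)
            p = [pi + lam * di for pi, di in zip(p, d)]
        dirs.pop(0)                        # steps 5-7: shift directions
        dirs.append([pi - si for pi, si in zip(p, p_start)])
        lam = line_min(f, p, dirs[-1])     # steps 9-10: extra line search
        p = [pi + lam * di for pi, di in zip(p, dirs[-1])]
        if abs(f_old - f(p)) < tol:        # minimal improvement: stop
            break
    return p
```

On a convex quadratic this converges in a couple of outer iterations; on a non-convex mutual information surface it inherits the local-optimum sensitivity discussed next.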

B. Limitations of Maes’ Method

Powell’s method has an inherent drawback; it requires an

initial point, from which the method will start the search, and

an initial set of direction vectors that will direct the search for

each argument of the solution. In this subsection we will see

the impact of this limitation on Maes’ method. An example

of using the original Medical Image Registration method is

shown in Fig. 1, where subFig. (a) is the source image, (b)

is the T1-weighted MRI target image, (c) the transformed

PD-weighted MRI image and (d) is the difference image

of (b) and (c) (i.e. the subtraction of the image intensities

between corresponding pixels of the subFigs. (b) and (c) ).

The images are from the Retrospective Image Registration Evaluation Project (RIRE, http://www.insight-journal.org/rire/).



Fig. 1: A failed example illustrating the limitations of the

original Maes’ Method: (a) MRPD Source Image, (b) MRT1

Target Image, (c) Transformed Source image using Maes’

method, (d) Difference Image between SubFigs. (b) and (c)

In this example the initial point is (0, 0, 0) and the initial set of direction vectors is (1 0 0; 0 1 0; 0 0 1). Maes' method terminated giving the "optimal" transform T = (0.000006658185612 radians, -0.000026843523301 pixels along x, -0.000083844133769 pixels along y). Fig. 2 presents another example where the initial point is (0.0, 4.0, 4.0), keeping the same initial set of direction vectors. In the second experiment we obtain the "optimal" transform T = (0.000000083872180 radians, 4.000013351440430 pixels along x, 4.000064849853516 pixels along y). From the two experiments it is obvious that it is difficult, if not impossible, to know a priori the initial point and the initial set of direction vectors that will lead us to the global optimum.
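For reference, a rigid transform T = (θ, t_x, t_y) like the ones quoted above can be applied to an image by inverse-mapping each output pixel back into the source and sampling with nearest-neighbour interpolation. The sketch below assumes rotation about the image centre, which the paper does not specify:

```python
import math

def apply_rigid(image, theta, tx, ty, fill=0):
    """Apply T = (theta radians, tx, ty pixels) to a 2-D image
    (list of rows) with nearest-neighbour sampling; the output
    has the same size as the input."""
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0    # assumed rotation centre
    c, s = math.cos(theta), math.sin(theta)
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # inverse map: undo the translation, then the rotation
            dx, dy = x - tx - cx, y - ty - cy
            sx, sy = c * dx + s * dy + cx, -s * dx + c * dy + cy
            ix, iy = int(round(sx)), int(round(sy))
            if 0 <= ix < w and 0 <= iy < h:
                out[y][x] = image[iy][ix]
    return out
```

Inverse mapping guarantees every output pixel is assigned exactly once, which is why it is the usual choice over forward mapping.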


Fig. 2: Second failed experiment of the original method:

(a) Transformed Source image using Maes' method, (b) Difference Image between SubFigs. 1.(b) and 2.(a)

In Fig. 3 we show the progression of Powell’s method during

the search of the ”optimum” transformation both for starting

point (0, 0, 0) and (0, 4.0, 4.0).

In both experiments the method stops after a few iterations

at a local optimum. The existence of local optima in mutual information, combined with the fact that each pair of images has a unique (non-convex) mutual information surface, makes the choice of a good initial point and direction vector set a very critical task, which in the original method is very difficult to control in order to find the correct transformation, i.e. the global optimum. In the following subsection we present our method and explain how this problem is addressed.

Fig. 3: Mutual information graph of the failed experiments

C. Genetic Algorithms

As we’ve seen in the previous section, Maes’ algorithm

has some limitations which may lead to erroneous results

especially in images with complex patterns and/or symmetries, of bad quality, or in difficult transformation problems in general. In this subsection, we discuss the possibility of solving

the problem with the use of genetic algorithms. Genetic

algorithms have been previously used in image registration.

They belong to the wider family of evolutionary algorithms,

a class of heuristic optimization algorithms which imitate

(certain aspects of) biological evolution. In order to understand

why a genetic algorithm is used, its nature must be

understood. In the ﬁeld of artiﬁcial intelligence, a genetic

algorithm is a heuristic search that emulates Darwinian

evolution, according to which some traits of the members of

a population may give them an advantage over the rest in

survival and reproduction. The members with the favoured

traits will have a higher probability to survive long enough

to reproduce and have eventually more offspring, while those

with the least favoured traits gradually dwindle and eventually

disappear. The traits of a member are either inherited from the

parents of the offspring or the product of mutations. In genetic

algorithms, the principles of Darwinian evolution are applied.

A population of random candidate solutions, called generation,

is initialized, where each candidate solution is known as chro-

mosome/genome containing genes. Then, in every iteration a

fraction of the current generation is selected (with emphasis on the genomes of high fitness) for reproduction, producing a

number of new candidate genomes, which in turn are subjected

to random mutation. After the mutation process, the ﬁtness of

each genome of the new generation is evaluated. At the end

of the iteration, the new population replaces the current one

and the process is repeated. At the end of each iteration we

keep its current best as the optimum solution to our problem.

The rationale for our choice of a genetic algorithm is its ability to escape local optima (the way they are avoided will be explained further below) and the fact that genetic algorithms do not need derivatives to find an optimum, which is quite useful, especially if the derivative of the function we want to optimize is difficult to calculate. In the next section, the proposed method is presented in detail.

III. PROPOSED METHOD

In the previous subsection we presented a general description

of the genetic algorithms, of which there are many variants.

Here we present in detail the proposed method for the optimization of mutual information in the registration process.

A. Elitism

Elitism [26] is a variant of genetic algorithms. During the

iterations, we may come along a solution (or solutions) that

might be (more or less) close to the global optimum. Instead

of destroying the good solution(s) through crossover and

mutation, we let them pass to the next population unchanged,

replacing some of the new solutions, usually the least fit ones.

This variant guarantees that the quality of the solution obtained

by the genetic algorithm won’t decrease during its progres-

sion, therefore leading to quicker convergence. Since mutual

information (as a function of the transformation matrix T) is a

non-convex function, conventional methods may quite possibly

fail in ﬁnding the optimal transformation Topt. Therefore we

use genetic algorithms, and speciﬁcally elitism. The reason

for choosing elitism is the fact that since there are so many

local optima, we need a method that not only overcomes

a local optimum, but also uses information from previous

generation(s) that may be useful in ﬁnding the global optimum.

After all, the previous generation’s best solution may be a

local optimum that could be very close to the global one.

In the next generation, that optimum, through crossover and

mutation, could be discarded, and we would therefore lose an opportunity

to converge to the global optimum. By using elitism, we can

use previous information about the problem and converge

quicker to the global optimum. In Fig. 4 we can see the

successful result of the proposed method that uses elitism in

the same image pair as Figs 1,2.

Genetic algorithms tend to be computationally inefficient due to the many repeated function evaluations. In order to reduce

the execution time, we use generation stalemate, i.e. when the

search after a certain number of generations fails to ﬁnd a new,

better solution, the algorithm terminates. In this way, we

avoid unnecessary calculations. Below we present the graphs

of mutual information with elitism and generation stalemate

and the general genetic algorithm.
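The elitism-plus-stalemate scheme described above can be sketched as a generic real-coded genetic algorithm. Everything below (population size, elite count, mutation rate, stall limit, one-point crossover) is an illustrative choice of ours, not the paper's actual configuration:

```python
import random

def elitist_ga(fitness, bounds, pop_size=30, elite=2,
               mut_rate=0.3, stall_limit=30, max_gen=200):
    """Maximize `fitness` over box `bounds` with a genetic
    algorithm using elitism and generation stalemate."""
    rand_ind = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    best, best_fit, stall = None, float('-inf'), 0
    for _ in range(max_gen):
        scored = sorted(pop, key=fitness, reverse=True)
        if fitness(scored[0]) > best_fit + 1e-12:
            best, best_fit, stall = scored[0][:], fitness(scored[0]), 0
        else:
            stall += 1
            if stall >= stall_limit:      # generation stalemate:
                break                     # no progress, terminate
        nxt = [ind[:] for ind in scored[:elite]]  # elites pass unchanged
        while len(nxt) < pop_size:
            # select parents from the fitter half, one-point crossover
            p1, p2 = random.sample(scored[:pop_size // 2], 2)
            cut = random.randrange(1, len(bounds)) if len(bounds) > 1 else 0
            child = p1[:cut] + p2[cut:]
            for i, (lo, hi) in enumerate(bounds):  # random mutation
                if random.random() < mut_rate:
                    child[i] = random.uniform(lo, hi)
            nxt.append(child)
        pop = nxt
    return best, best_fit
```

In the registration setting the chromosome would be the transform (θ, t_x, t_y), the fitness the mutual information of the target and the transformed source, and the bounds the rotation/translation ranges given later in the experiments.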

In Fig. 5 we see the progression of image registration (MRPD-

to-MRT1 registration) with and without the use of elitism. The

original variant stumbles on a local optimum (albeit close to

the global, a case which is evident in Fig. 7.), while the elitism

variant shows greater progression and faster convergence for

the same parameters, making it more suitable for addressing

the drawbacks of Powell’s method. The same thing can be seen

in the following graphs (Figs. 6 and 7, respectively) where we


Fig. 4: A successful example using the proposed Method: (a)

MRPD Source Image, (b) MRT1 Target Image, (c) Trans-

formed Source image using proposed method, (d) Difference

Image between SubFigs. (b) and (c)

Fig. 5: Graph of mutual information with and without elitism

for the MRPD-to-MRT1 registration

try to register MRPD and MRT1 to MRT2 (the images and

results of the proposed method are shown in Figs. 11 and 12).

Fig. 7: Graph of mutual information for MRT1-to-MRT2

registration


Fig. 6: Graph of mutual information for MRPD-to-MRT2

registration

Like in Fig. 5, in Fig. 6 and Fig. 7 our method converges

faster to the global optimum than the basic genetic algorithm

(The results of the proposed image registration method are

presented in subFigs. (c) and (d) in Figs. 11 and 12 respec-

tively).

B. Indicative results

For comparison purposes, in this section we present two image

registration methods of ITK (Insight Toolkit), which is an

image processing toolkit widely used in medical analysis. The

reason we choose ITK is because it is widely used both in

research and commercial applications and has been designed

for multi-modal image registration, which represents a difficult

problem in medical imaging (although ITK, just like our

method, can be used for general image processing). The ﬁrst

ITK image registration method uses mutual information as a correlation metric, but the metric is the one developed by Mattes [29] instead of Maes', and instead of Powell's method it uses regular step gradient descent [30] to discover the optimum transformation for the alignment

of the images. The second method uses normalized mutual

information[13] which is a better correlation metric than

mutual information in dealing with partial overlap. Unlike

the previous method, this one uses a one-plus-one evolution strategy [30]. Both methods can be downloaded from http://www.itk.org/ITK/resources/software.html. In this section we

present a series of experiments of medical and remote sensing

image registration, using our variant of Maes’ image registra-

tion method and compare its results with those of the two ITK

image registration methods. In each experiment the methods’

maximum number of iterations is 6000. In the novel method

we set boundaries for rotation and translation along x and y

equal to (-0.6, 0.6) radians and [-200,200] pixels respectively,

although they can be omitted since an image with width W and

height H has rotation space (-π,π) and maximum translation

W and H along the x and y axes respectively, rendering the method

(almost) automatic. For the comparison of the accuracy of

the methods we calculate the mutual information of the target

image and the transformed source image. In Table 1 we present

the values of the mutual information for each experiment

performed using our method NMM (Novel Maes’ Method)

and the two ITK methods and in Table 2 the duration in

seconds of the algorithms. In Table 1 we have the results of the

experiments where we used patients’ data (patients 001-007)

from R.I.R.E (Retrospective Image Registration Evaluation

Project- http://www.insight-journal.org/rire/ ).

TABLE I: Results of Mutual Information using two ITK

techniques and our proposed method NMM (Novel Maes’

Method) in the images from the Retrospective Image Reg-

istration Evaluation Project

Mutual Information after registration

Experiment ITK1 ITK2 NMM

Patient001

1) PD to T1 1.5797 1.5815 1.5792

2) T1 to PD 1.2793 1.2781 1.2388

3) PD to T2 1.3213 1.3181 1.2793

4) T2 to PD 1.1153 1.1222 1.0919

5) T1 to T2 1.1746 1.1764 1.1661

6) T2 to T1 1.2687 1.2679 1.2632

Patient002

7) PD to T1 1.5245 1.5251 1.5201

8) T1 to PD 1.2409 1.2445 1.223

9) PD to T2 1.5503 1.5554 1.5082

10) T2 to PD 1.2693 1.2695 1.2607

11) T1 to T2 1.3505 1.3532 1.3236

12) T2 to T1 1.4199 1.4193 1.411

Patient003

13) PD to T1 1.5272 1.5272 1.4517

14) T1 to PD 1.1454 1.1434 1.0975

15) PD to T2 1.3985 1.4019 1.3981

16) T2 to PD 1.1162 1.0598 1.0741

17) T1 to T2 1.2447 1.2423 1.2202

18) T2 to T1 1.346 1.3459 1.3129

Patient004

19) PD to T1 1.6942 1.6927 1.5711

20) T1 to PD 1.1566 1.1592 1.1245

21) PD to T2 1.5152 1.5201 1.4694

22) T2 to PD 1.2006 1.2005 1.1719

23) T1 to T2 1.2857 1.2841 1.2868

24) T2 to T1 1.5933 1.5958 1.542

Patient005

25) PD to T1 1.5944 1.5957 1.5231

26) T1 to PD 1.1389 1.1397 1.1248

27) PD to T2 1.5697 1.5759 1.5784

28) T2 to PD 1.1092 1.1101 1.0675

29) T1 to T2 1.3824 1.382 1.365

30) T2 to T1 1.3609 1.3596 1.3331

Patient006 31) PD to T2 1.5943 1.0504 1.5456

32) T2 to PD 1.0593 0.9437 1.037

Patient007

33) PD to T1 1.5781 1.0441 1.5755

34) T1 to PD 1.0737 1.0732 1.0536

35) PD to T2 1.4096 1.4164 1.3659

36) T2 to PD 1.1616 1.1658 1.1237

37) T1 to T2 1.1611 1.1618 1.114

38) T2 to T1 1.4485 0.978 1.4099

Apart from the medical image registration experiments, we also compared our method on a series of remote sensing images (http://old.vision.ece.ucsb.edu/registration/satellite/

testimag/index.htm). In Table 3 we present the mutual infor-

mation of the images after the registration process and in Table

4 we present the corresponding durations.


TABLE II: Duration of the registration experiments of Table

I

Time required for registration (secs)

Experiment ITK1 ITK2 NMM

Patient 001

1) PD to T1 20 527 1109

2) T1 to PD 16 544 1151

3) PD to T2 24 554 949

4) T2 to PD 19 543 433

5) T1 to T2 13 578 1794

6) T2 to T1 15 529 1246

Patient 002

7) PD to T1 16 531 839

8) T1 to PD 21 543 1287

9) PD to T2 23 531 1555

10) T2 to PD 21 542 353

11) T1 to T2 20 543 1032

12) T2 to T1 22 538 1496

Patient 003

13) PD to T1 10 536 1413

14) T1 to PD 31 525 841

15) PD to T2 194 583 1168

16) T2 to PD 15 552 1403

17) T1 to T2 61 545 325

18) T2 to T1 189 509 1088

Patient 004

19) PD to T1 25 559 1075

20) T1 to PD 17 536 798

21) PD to T2 86 522 1135

22) T2 to PD 69 531 1362

23) T1 to T2 18 557 882

24) T2 to T1 73 530 361

Patient 005

25) PD to T1 22 554 1121

26) T1 to PD 23 520 1195

27) PD to T2 124 482 999

28) T2 to PD 57 539 1266

29) T1 to T2 51 527 904

30) T2 to T1 46 498 1513

Patient 006 31) PD to T2 25 527 1455

32) T2 to PD 28 543 1103

Patient 007

33) PD to T1 22 551 778

34) T1 to PD 185 521 981

35) PD to T2 29 565 341

36) T2 to PD 58 535 1247

37) T1 to T2 18 552 895

38) T2 to T1 32 536 1282

TABLE III: Results of Image Registration of Remote Sensing

Images

Mutual Information after registration

Experiments ITK1 ITK2 NMM

b0 1) b040 to b042 0.1055 0.1096 1.0759

2) b042 to b040 0.1108 0.1097 0.6852

casitas 3) casitas84 to casitas86 0.2394 0.2587 0.688

4) casitas86 to casitas84 0.1039 0.1686 0.4724

dunes 5) dunes883 to dunes885 0.9199 1.8126 1.5021

6) dunes885 to dunes883 0.8608 1.8332 1.5018

exp 7) exp186 to exp188 0.8335 0.7899 1.0588

8) exp188 to exp186 0.8492 0.8551 1.1052

gav 9) gav88 to gav90 1.1539 1.1537 0.9734

10) gav90 to gav88 0.2661 1.3648 1.2079

gibralt 11) gibralt84 to gibralt86 0.3896 0.8971 0.7395

12) gibralt86 to gibralt84 0.263 0.8941 0.7887

img 13) img1 to img2 0.0487 0.0459 0.0646

14) img2 to img1 0.0464 0.0737 0.1199

mono 15) mono1 to mono3 0.7496 0.8178 1.1084

16) mono3 to mono1 1.2164 0.8715 1.1794

mtns1 17) mtn1 to mtn3 0.4217 0.4352 1.5521

18) mtn3 to mtn1 0.2765 0.2772 0.9758

mtns2 19) mtn4 to mtn7 1.5368 1.5631 1.2286

20) mtn7 to mtn4 1.5058 1.51 1.7308

sci 21) sci84 to sci86 1.1758 1.1799 1.0954

22) sci86 to sci84 0.5965 1.3389 1.1148

TABLE IV: Duration of the registration experiments of Table

III

Time required for registration (secs)

Experiment ITK1 ITK2 NMM

b0 1) b040 to b042 654 2387 3056

2) b042 to b040 1117 2483 5621

casitas 3) casitas84 to casitas86 536 3099 7455

4) casitas86 to casitas84 436 3062 7743

dunes 5) dunes883 to dunes885 290 1354 2377

6) dunes885 to dunes883 1925 1325 2686

exp 7) exp186 to exp188 122 1404 2268

8) exp188 to exp186 1084 1316 2586

gav 9) gav88 to gav90 469 3222 8008

10) gav90 to gav88 729 3219 5435

gibralt 11) gibralt84 to gibralt86 861 3178 4570

12) gibralt86 to gibralt84 1173 3228 4756

img 13) img1 to img2 222 2444 3867

14) img2 to img1 294 2557 3403

mono 15) mono1 to mono3 331 475 727

16) mono3 to mono1 83 486 692

Mtns1 17) mtn1 to mtn3 152 1360 4758

18) mtn3 to mtn1 137 1411 3557

Mtns2 19) mtn4 to mtn7 204 1330 3474

20) mtn7 to mtn4 241 1365 3785

sci 21) sci84 to sci86 312 1245 2141

22) sci86 to sci84 313 1268 2061

In the next graphs (Figs. 8-9) we present the mean difference

of pixels in the difference image after registration correspond-

ing to the previous Tables (blue for the ITK1 method, red for the ITK2 method, and green for our proposed method NMM).
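The mean pixel difference shown in these graphs can be computed directly from the difference image; a minimal version follows (our illustrative helper, not the paper's code):

```python
def mean_pixel_difference(a, b):
    """Mean absolute per-pixel difference between two equal-size
    images given as lists of rows; lower means better alignment."""
    total = sum(abs(x - y) for row_a, row_b in zip(a, b)
                for x, y in zip(row_a, row_b))
    return total / (len(a) * len(a[0]))
```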

Fig. 8: Mean pixel difference in the difference image medical

image registration experiments (Less is better)

Fig. 9: Mean pixel difference in the difference image in remote sensing image registration experiments (less is better)


In Figs. 10-12 we present a series of indicative success-

ful results of medical image registration using our pro-

posed method NMM and the ITK methods (MRPD-to-

MRT2, MRT1-to-MRT2). The medical images are from RIRE (http://www.insight-journal.org/rire/). In each of these figures, Subfigs. (a) and (b) are the source image and target

image respectively, (c) is the transformed source image using

the proposed method and (d) the difference between (c) and

(b). Subfigs. (e) and (g) are the transformed source images using the ITK1 and ITK2 methods respectively. Finally, Subfigs. (f) and

(h) are the respective difference images between Subﬁgs. (b)

and (e), (g).


Fig. 10: MRPD-to-MRT1: (a) Source Image, (b) Target Image,

(c) Result of novel method NMM, (d) Difference Image

between (b) and (c), (e) Result of ITK1 method, (f) Difference

Image between (b) and (e), (g) Result of ITK2 method, (h)

Difference Image between (b) and (g)

Next, in Figs. 13-16, a series of remote sensing image registration results is presented.


Fig. 11: MRPD-to-MRT2: (a) Source Image, (b) Target Image,

(c) Result of novel method NMM, (d) Difference Image

between (b) and (c), (e) Result of ITK1 method, (f) Difference

Image between (b) and (e), (g) Result of ITK2 method, (h)

Difference Image between (b) and (g)




Fig. 12: MRT1-to-MRT2: (a) Source Image, (b) Target Image,

(c) Result of novel method NMM, (d) Difference Image

between (b) and (c), (e) Result of ITK1 method, (f) Difference

Image between (b) and (e), (g) Result of ITK2 method, (h)

Difference Image between (b) and (g)


Fig. 13: b040-to-b042: (a) Source Image, (b) Target Image, (c)

Result of novel method NMM, (d) Difference Image between

(b) and (c), (e) Result of ITK1 method, (f) Difference Image

between (b) and (e), (g) Result of ITK2 method, (h) Difference

Image between (b) and (g)


Fig. 14: b042-to-b040: (a) Source Image, (b) Target Image, (c)

Result of novel method NMM, (d) Difference Image between

(b) and (c), (e) Result of ITK1 method, (f) Difference Image

between (b) and (e), (g) Result of ITK2 method, (h) Difference

Image between (b) and (g)




Fig. 15: casitas84-to-casitas86: (a) Source Image, (b) Target

Image, (c) Result of novel method NMM, (d) Difference Image

between (b) and (c), (e) Result of ITK1 method, (f) Difference

Image between (b) and (e), (g) Result of ITK2 method, (h)

Difference Image between (b) and (g)


Fig. 16: casitas86-to-casitas84: (a) Source Image, (b) Target

Image, (c) Result of novel method NMM, (d) Difference Image

between (b) and (c), (e) Result of ITK1 method, (f) Difference

Image between (b) and (e), (g) Result of ITK2 method, (h)

Difference Image between (b) and (g)

IV. CONCLUSION

Apart from determining the search space, the probabilities of

mutation and crossover and the population size, no initializa-

tion or any intermediate interaction is needed, which renders

the method automatic. In Table 1, in almost every experiment,

the values of the indices regarding our method are quite close

to those of the two ITK methods, indicating that our method

can achieve very good results in medical multi-modal rigid

image registration cases. This can be seen as well in Figs.

10-12. The ITK methods slightly surpass our method due

to their ability to handle better the physical space of the

images. Furthermore, the ITK2 method uses normalized mutual

information as a similarity measure which is more robust than

mutual information[13]. On the other hand, as we see in Table

3, our novel method seems to outperform the ITK methods

in remote sensing image registration, where more difﬁcult

geometric transformations need to be dealt with and the image

pairs often have repetitive patterns which lead to an increased number of local optima of mutual information. These promising results

indicate that our method produces better results for a diversity

of image pairs, making it a good candidate for becoming

a generalized image rigid registration tool especially when

robustness is needed. In Fig. 8 we see that our method has

similar results of difference image mean pixel value with those

of the ITK methods, while in Fig. 9 for the remote sensing

image registration experiments, it is evident that our method

in most cases outperforms the ITK methods and as is shown in

Figs. 13-16 it ﬁnds the right transformation in difﬁcult cases

where the other methods fail. The reason for the extended

method’s ability to deal with large deformations successfully

lies on the ability of genetic algorithms to overcome local

optima more efﬁciently than other (especially conventional)

methods. The problem of the local optima is especially obvious

in the ITK method that uses Regular Step Gradient Descent

to optimize Mattes’ Mutual information, which is a method

that can easily stumble upon local minima. However, genetic

algorithms cannot always guarantee that the global optimum

will be found, since there is still a chance (albeit small

one) to stumble on a local one. That is a problem that

depends on the nature of the optimization problem and can

be solved, at least partially, either by using proper values for

the rates of crossover and mutation or by using other methods

of diversiﬁcation. To this end, in order to ensure that our

method will be more robust, a relatively high mutation rate,

in combination with elitism, was used. In Table 2 and Table

4, where the duration times for each experiment performed

by the three methods are presented, it is obvious that our

method is slower than the two ITK methods which comes as a

compromise for the increased robustness against local optima.

However its effectiveness compensates for its lack of speed

and can become the method of choice in many application that

speed is not critical. For example, in remote sensing mapping

the existence even of a small error in registration can have

great impact on global change measurement accuracy [31],

[32] (1 pixel misregistration means 50% error in Vegetation


Index computation). In future work, a study of similarity metrics faster than mutual information will help us reduce the running time of the algorithm. Also, the success of the presented method in difficult cases, as shown in Figs. 13-16, indicates that it can be suitable for temporal registration problems in either medical imaging or remote sensing, where multi-temporal effects render the alignment of the image pair a difficult task. In conclusion, the presented method can be used to advantage as a robust and generic rigid registration tool where robustness is critical and fast execution is not a prerequisite.
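For reference, the two similarity measures compared above can be estimated from a joint histogram of the overlapping pixels, with mutual information MI = H(A) + H(B) - H(A,B) and the overlap-invariant normalized variant NMI = (H(A) + H(B)) / H(A,B) [13]. The sketch below is illustrative (the bin count and base-2 logarithm are arbitrary choices), not the exact implementation used in the experiments.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate MI and NMI of two images from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint probability p(a, b)
    pa, pb = p.sum(axis=1), p.sum(axis=0)   # marginals p(a), p(b)

    def H(q):
        q = q[q > 0]                        # 0 log 0 is taken as 0
        return -np.sum(q * np.log2(q))

    h_ab = H(p)
    return H(pa) + H(pb) - h_ab, (H(pa) + H(pb)) / h_ab
```

For identical images MI reduces to the marginal entropy H(A) and NMI equals 2, which is a quick sanity check for such an estimator.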

V. APPENDIX

In this Appendix we present the details of the parameters of the genetic algorithm (population size, selection, crossover probability and mutation probability), in order to give further insight into the method, as well as the optimization methods used by ITK (regular step gradient descent and the One-Plus-One evolutionary strategy).

A. Population Size

Unlike other optimization methods such as simulated annealing, which compute only one candidate solution at every iteration, genetic algorithms evaluate a population of candidate solutions. This gives the genetic algorithm the advantage of exploring more of the search space in which the optimal solution lies. The population size must not be very small, because small populations are prone to having the frequencies of their genes change at random (in biology this phenomenon is known as genetic drift). But a very large population size is also unfavourable, due to the extra computational burden.
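The genetic-drift effect can be illustrated with a toy resampling simulation (a hypothetical Wright-Fisher-style model, not part of the registration method): with pure random resampling and no selection pressure, the frequency of a gene in a small population wanders and typically fixates at 0 or 1, destroying diversity, while a large population keeps it near its initial value.

```python
import random

def drift(pop_size, generations=100, p0=0.5):
    """Track the frequency of one allele under pure random resampling."""
    freq = p0
    for _ in range(generations):
        # Each of the pop_size genes is redrawn from the current frequency.
        count = sum(random.random() < freq for _ in range(pop_size))
        freq = count / pop_size             # 0.0 and 1.0 are absorbing
    return freq
```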

B. Selection

The selection method is the process through which a fraction of the current generation is chosen for reproduction. Selection must be performed so that the fittest members of the current generation are chosen for reproduction (because they are likely to carry genes closer to the optimal solution), without excluding the less fit (in order to maintain diversity and avoid being trapped in local optima). There are a number of methods for this process. Some rank the solutions with respect to their fitness (the higher the fitness, the higher the rank), giving emphasis to those of higher rank, while others randomly choose solutions (of high or low fitness) from the current generation.
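A rank-based scheme of the kind described can be sketched as follows; the linear ranking weights and the `pressure` parameter are illustrative assumptions, not the exact scheme used in our method. Ranking favours the fittest while still giving the less fit a non-zero chance of being chosen.

```python
import random

def rank_selection(population, fitness, n_parents, pressure=1.5):
    """Sample n_parents with probability proportional to fitness rank."""
    order = sorted(range(len(population)), key=lambda i: fitness[i])
    n = len(population)
    # Linear ranking: weight grows with rank r (r = 0 is the worst).
    weights = [2 - pressure + 2 * (pressure - 1) * r / (n - 1)
               for r in range(n)]
    ranked = [population[i] for i in order]
    return random.choices(ranked, weights=weights, k=n_parents)
```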

C. Crossover Probability

The success of genetic algorithms is based on the "building block" hypothesis. According to it, low-order schemata of average fitness (called building blocks), if combined properly, can build high-order schemata of higher-than-average fitness. The crossover probability (Crp) indicates the reproductive probability of the parents. For an N-sized population, the number of solutions/genomes that undergo crossover is Crp∗N. The higher the probability, the faster new solutions are introduced into the population. If the probability is too high, high-fitness individuals are discarded faster than improvements are produced, but too small a probability should also be avoided, as it limits the exploration of the solution space.
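As an illustration, single-point crossover applied with probability Crp can be sketched as follows; representing each genome as a list, and mating consecutive pairs, are assumptions made only for this sketch.

```python
import random

def crossover(parents, crp=0.8):
    """Mate consecutive pairs with probability crp (single-point).

    For an N-sized population about crp*N genomes undergo crossover;
    pairs that do not mate pass through unchanged. Assumes an even
    number of parents.
    """
    children = []
    for p1, p2 in zip(parents[::2], parents[1::2]):
        if random.random() < crp:
            cut = random.randrange(1, len(p1))      # crossover point
            children += [p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]]
        else:
            children += [p1, p2]
    return children
```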

D. Mutation Probability

Mutation is the key to evolution. Many mutations are neutral, while others are harmful, but sometimes mutations may give the members of a species advantageous traits, making them more fit, more able to successfully procreate and eventually supplant the less fit, driving them to extinction. The purpose of mutation is to re-diversify a stagnant population (i.e. a population whose members are genetically homogeneous), which could be the result of stumbling onto a local optimum. A low mutation probability leads to slow, if any, diversification, while a high mutation rate may destroy the good solutions (unless elitism is applied) and make the genetic algorithm behave like a random search algorithm.
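The interplay of a high mutation rate with elitism can be sketched for binary genomes as follows; the bit-flip operator and the single-elite default are illustrative choices, not the exact operators used in our method.

```python
import random

def mutate(population, fitness, mut_rate=0.05, elite=1):
    """Flip each gene with probability mut_rate, sparing the elite.

    The `elite` fittest genomes are copied unchanged, so even an
    aggressive mutation rate cannot destroy the best solutions found.
    """
    order = sorted(range(len(population)), key=lambda i: -fitness[i])
    elites = set(order[:elite])
    out = []
    for i, genome in enumerate(population):
        if i in elites:
            out.append(genome[:])           # preserved verbatim
        else:
            out.append([g ^ 1 if random.random() < mut_rate else g
                        for g in genome])
    return out
```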

E. Regular Step Gradient Descent

Given an objective function C(µ), where µ is the argument vector, regular step gradient descent starts from two initial points µ0 and µ1 and moves to the next point using

\[ \mu_{k+1} = \mu_k - \kappa\,\alpha_k \left.\frac{\partial C}{\partial \mu}\right|_{\mu = \mu_k} \]

where k = 1, 2, ..., κ is a relaxation factor between 0 and 1 (a small value of κ means slower convergence), and αk is a variant which in regular step gradient descent is determined by the inner product of the derivatives of C(µ) at the points µk and µk−1. The choice of the two initial points is critical for the convergence of the algorithm, since a bad initial choice can lead to a local optimum (especially when the function to be optimized is non-convex).
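A simplified sketch of the scheme follows, under the common convention that the step length is relaxed whenever the inner product of successive gradients turns negative (i.e. the direction has reversed past a minimum); the actual ITK implementation may differ in its details.

```python
import numpy as np

def regular_step_gd(grad, mu0, step=1.0, relax=0.5,
                    min_step=1e-4, max_iter=200):
    """Descend along the normalized gradient with a regulated step."""
    mu = np.asarray(mu0, dtype=float)
    g_prev = grad(mu)
    for _ in range(max_iter):
        g = grad(mu)
        if np.dot(g, g_prev) < 0:       # direction reversed: relax step
            step *= relax
        if step < min_step:             # step has become tiny: converged
            break
        mu = mu - step * g / (np.linalg.norm(g) + 1e-12)
        g_prev = g
    return mu
```

On the convex quadratic C(µ) = ||µ||², with gradient 2µ, the iterates approach the origin; on a non-convex C the result depends on the starting point, as noted above.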

F. One-Plus-One Evolutionary Strategy

The idea is simple: at each iteration a member/solution of the population (known as the generation) produces a child, which is a mutated copy of the parent. If the child's fitness is at least equal to that of the parent, it replaces the parent and becomes the parent of the next generation; otherwise it is discarded. Below are the steps of the One-Plus-One Evolution Strategy.

1) Initialize parent P

2) Create child newP by mutating P

3) If newP is better than P, then P = newP, else discard newP

4) If termination conditions are not met, then go to Step 2

5) Return P

Evolution strategies, like genetic algorithms, tend to deal better with local optima, making them a very good choice for optimizing non-convex functions, whose behaviour is unpredictable.
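The steps above can be sketched as follows; Gaussian mutation with a fixed step size sigma is an assumption made for this sketch (practical variants adapt the step size, e.g. via the 1/5 success rule [30]).

```python
import random

def one_plus_one_es(fitness, parent, sigma=1.0, iters=500):
    """(1+1)-ES: one parent, one mutated child per generation."""
    best = fitness(parent)
    for _ in range(iters):
        child = [p + random.gauss(0, sigma) for p in parent]  # mutate P
        f = fitness(child)
        if f >= best:               # child at least as fit: replace P
            parent, best = child, f
    return parent
```

Because the parent is only ever replaced by an equally fit or fitter child, the fitness of the returned solution never decreases across generations.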


ACKNOWLEDGEMENT

The authors acknowledge support from the European Union’s

Seventh Framework Programme project RASimAS under grant

agreement no 610425.

REFERENCES

[1] B. Zitova and J. Flusser, "Image registration methods: a survey," Image and Vision Computing, vol. 21, no. 11, pp. 977–1000, 2003.

[2] K. Marias, J. Ripoll, H. Meyer, V. Ntziachristos, and S. Orphanoudakis, "Image analysis for assessing molecular activity changes in time-dependent geometries," IEEE Transactions on Medical Imaging, vol. 24, no. 7, pp. 894–900, 2005.

[3] K. Marias, J. Brady, R. Highnam, S. Parbhoo, A. Seifalian, and M. Wirth, "Registration and matching of temporal mammograms for detecting abnormalities," Medical Imaging Understanding and Analysis, 1999.

[4] C. P. Behrenbruch, K. Marias, P. A. Armitage, M. Yam, N. Moore, R. E. English, and J. M. Brady, "MRI–mammography 2D/3D data fusion for breast pathology assessment," in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2000. Springer, 2000, pp. 307–316.

[5] A. Evans, C. Beil, S. Marrett, C. Thompson, and A. Hakim, "Anatomical-functional correlation using an adjustable MRI-based region of interest atlas with positron emission tomography," Journal of Cerebral Blood Flow & Metabolism, vol. 8, no. 4, pp. 513–530, 1988.

[6] D. L. Hill, D. J. Hawkes, J. Crossman, M. Gleeson, T. Cox, E. Bracey, A. Strong, and P. Graves, "Registration of MR and CT images for skull base surgery using point-like anatomical features," The British Journal of Radiology, vol. 64, no. 767, pp. 1030–1035, 1991.

[7] D. N. Levin, C. A. Pelizzari, G. Chen, C. Chen, and M. Cooper, "Retrospective geometric correlation of MR, CT, and PET images," Radiology, vol. 169, no. 3, pp. 817–823, 1988.

[8] G. Borgefors, "Distance transformations in arbitrary dimensions," Computer Vision, Graphics, and Image Processing, vol. 27, no. 3, pp. 321–345, 1984.

[9] M. van Herk and H. M. Kooy, "Automatic three-dimensional correlation of CT-CT, CT-MRI, and CT-SPECT using chamfer matching," Medical Physics, vol. 21, no. 7, pp. 1163–1178, 1994.

[10] E. Cuchet, J. Knoplioch, D. Dormont, and C. Marsault, "Registration in neurosurgery and neuroradiotherapy applications," Journal of Image Guided Surgery, vol. 1, no. 4, pp. 198–207, 1995.

[11] J. Declerck, J. Feldmar, M. L. Goris, and F. Betting, "Automatic registration and alignment on a template of cardiac stress and rest reoriented SPECT images," IEEE Transactions on Medical Imaging, vol. 16, no. 6, pp. 727–737, 1997.

[12] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens, "Multimodality image registration by maximization of mutual information," IEEE Transactions on Medical Imaging, vol. 16, no. 2, pp. 187–198, 1997.

[13] C. Studholme, D. L. Hill, and D. J. Hawkes, "An overlap invariant entropy measure of 3D medical image alignment," Pattern Recognition, vol. 32, no. 1, pp. 71–86, 1999.

[14] W. M. Wells, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis, "Multi-modal volume registration by maximization of mutual information," Medical Image Analysis, vol. 1, no. 1, pp. 35–51, 1996.

[15] F. Maes, D. Vandermeulen, and P. Suetens, "Comparative evaluation of multiresolution optimization strategies for multimodality image registration by maximization of mutual information," Medical Image Analysis, vol. 3, no. 4, pp. 373–386, 1999.

[16] J. P. Pluim, J. A. Maintz, and M. A. Viergever, "Mutual-information-based registration of medical images: a survey," IEEE Transactions on Medical Imaging, vol. 22, no. 8, pp. 986–1004, 2003.

[17] ——, "Image registration by maximization of combined mutual information and gradient information," in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2000. Springer, 2000, pp. 452–461.

[18] M. J. Powell, "An efficient method for finding the minimum of a function of several variables without calculating derivatives," The Computer Journal, vol. 7, no. 2, pp. 155–162, 1964.

[19] C. E. Taylor, "Book review: Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. Complex Adaptive Systems. John H. Holland," p. 88, 1994.

[20] R. Singhai and J. Singhai, "Registration of satellite imagery using genetic algorithm," in Proc. of the World Congress on Engineering, WCE, 2012.

[21] C. V. Rao, K. Rao, A. Manjunath, and R. Srinivas, "Optimization of automatic image registration algorithms and characterization," in Proceedings of the ISPRS Congress, 2004, pp. 698–702.

[22] F. L. Seixas, L. S. Ochi, A. Conci, and D. M. Saade, "Image registration using genetic algorithms," in Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation. ACM, 2008, pp. 1145–1146.

[23] Q. Zhu and Q. Shi, "Application of improved genetic algorithm," pp. 1063–1071, 2013.

[24] A. Valsecchi, S. Damas, and J. Santamaria, "An image registration approach using genetic algorithms," in Evolutionary Computation (CEC), 2012 IEEE Congress on. IEEE, 2012, pp. 1–8.

[25] J. Salvi, C. Matabosch, D. Fofi, and J. Forest, "A review of recent range image registration methods with accuracy evaluation," Image and Vision Computing, vol. 25, no. 5, pp. 578–596, 2007.

[26] S. Baluja and R. Caruana, "Removing the genetics from the standard genetic algorithm," in Machine Learning: Proceedings of the Twelfth International Conference, 1995, pp. 38–46.

[27] T. M. Cover and J. A. Thomas, "Elements of Information Theory - Solutions Manual," p. 211, 1992.

[28] D. Anderson, "Algorithms for minimization without derivatives," IEEE Transactions on Automatic Control, vol. 19, no. 5, 1974.

[29] D. Mattes, D. R. Haynor, H. Vesselle, T. K. Lewellen, and W. Eubank, "PET-CT image registration in the chest using free-form deformations," IEEE Transactions on Medical Imaging, vol. 22, no. 1, pp. 120–128, 2003.

[30] H.-G. Beyer and H.-P. Schwefel, "Evolution strategies – a comprehensive introduction," Natural Computing, vol. 1, no. 1, pp. 3–52, 2002.

[31] J. R. Townshend, C. O. Justice, C. Gurney, and J. McManus, "The impact of misregistration on change detection," IEEE Transactions on Geoscience and Remote Sensing, vol. 30, no. 5, pp. 1054–1060, 1992.

[32] X. Dai and S. Khorram, "The effects of image misregistration on the accuracy of remotely sensed change detection," IEEE Transactions on Geoscience and Remote Sensing, vol. 36, no. 5, pp. 1566–1577, 1998.
