In this paper, we propose a constrained linear curvature image registration model that explicitly controls the deformation through pointwise inequality constraints on the determinant of the Jacobian matrix of the transformation. In addition, an effective numerical method is proposed to solve the resulting inequality constrained optimization model. Finally, several numerical examples are given to demonstrate the advantages of the curvature image registration model with inequality constraints.
1. Introduction
In image processing, we are interested not only in analyzing a single image but also in comparing or combining information from images acquired at different times, from different places or viewpoints, or with different modalities. Image registration is therefore one of the most useful and challenging problems in the field of image processing. Its main idea is to find a geometric transformation which aligns points in one view of an object with the corresponding points in another view of the same or a similar object. There are a large number of application areas which require image registration, such as computer vision, biological imaging, remote sensing, and medical imaging. For comprehensive surveys of these applications, refer to [1–5].
The basic framework of image registration can be described as follows: given two images of the same object, called the reference image and the template image, respectively, the purpose is to find a vector-valued transformation, or equivalently the unknown displacement field, such that the transformed template image is as similar to the reference image as possible. Here, the spatial dimension of the given images is denoted by d.
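In the common notation of the registration literature, this framework can be sketched as
\[
\varphi : \mathbb{R}^{d} \to \mathbb{R}^{d}, \qquad \varphi(x) = x + u(x), \qquad T(\varphi(x)) \approx R(x),
\]
where \(R\) denotes the reference image, \(T\) the template image, \(u\) the displacement field, and \(d\) the spatial dimension.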
Without loss of generality, we focus on the two-dimensional case d = 2 throughout this paper, but the approach is easy to generalize to d = 3 with some additional modifications. The variational model is an important tool for studying image registration and has received wide attention from many researchers [5–9]. It treats the image registration problem as the minimization of a joint energy functional that combines a distance measure and a regularizer, where the distance measure quantifies the similarity of the transformed template image and the reference image (for other choices of distance measure, refer to [5, 7]), the deformation regularizer constrains the displacement field and ensures the well-posedness of the problem, and a regularization parameter balances similarity and regularity of the displacement.
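Using this notation, and with the sum-of-squared-differences measure shown as one typical choice of distance term, the standard formulation can be sketched as
\[
\min_{u}\; \mathcal{J}[u] = \mathcal{D}\big[T(x+u), R\big] + \alpha\,\mathcal{S}[u],
\qquad
\mathcal{D}\big[T(x+u), R\big] = \frac{1}{2}\int_{\Omega}\big(T(x+u(x)) - R(x)\big)^{2}\,dx,
\]
where \(\mathcal{D}\) is the distance measure, \(\mathcal{S}\) the regularizer, and \(\alpha > 0\) the regularization parameter.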
We know that different regularizers produce displacement fields with different degrees of smoothness, and the selection of a regularizer is critical to the solution of the problem and its properties; for more details, refer to [5]. The choices of regularizer fall into two main categories. The first type restricts the displacement field to a parametric model [10–14], for example, rigid or affine transformations (parameterized by rotation, scaling, and translation) or linear combinations of a set of basis functions such as B-splines [3, 5, 15–18]. The second type is based on derivatives of the displacement field. At present, there are regularizers based on first-order derivatives, such as the elastic regularizer [19–21], diffusion regularizer [22], total variation regularizer [23, 24], modified total variation regularizer [25, 26], and total fractional-order regularizer [27], and regularizers based on higher-order derivatives, such as linear curvature [28, 29], mean curvature [8, 30], and Gaussian curvature [31]. It has been shown that, in many cases, the first class of regularizers is too restrictive, since the required transformation cannot be guaranteed to be contained in the parametric model. Therefore, the second class is the more common way to select a regularizer. Within this class, low-order regularizers are easy to implement, but they are less effective than higher-order ones at producing smooth displacement fields, which are important in some applications, including medical imaging. Although registration models based on higher-order regularizers can produce visually more satisfactory results, they do not take mesh folding into account.
In fact, the regularity of the displacement field is also an important measure in image registration [32]. Many existing variational models of the form (1) can produce visually satisfactory registration results but cannot ensure that the computed transformation is invertible. An irreversible transformation means that the displacement field is not regular; in this case, mesh folding occurs during the registration process, which is not acceptable in practical applications. Therefore, it is necessary to avoid mesh folding during registration. A direct idea to avoid mesh folding is to use a larger regularization parameter. However, such a value degrades the similarity between the transformed template image and the reference image. In order to avoid mesh folding, some authors have proposed adding to the objective functional (3) an additional regularization term on the determinant of the Jacobian matrix of the transformation [33–36]. However, this approach only penalizes irregularity of the displacement field as a whole, and the local displacement field cannot be guaranteed to be regular [32]. In addition, the approach is only effective for smaller values of the corresponding regularization parameter, and increasing that value usually leads to ill-posed optimization problems [37]. To solve this problem, Haber and Modersitzki proposed a new registration model by adding explicit volume inequality constraints [32]; however, this constrained approach usually leads to a large-scale, highly nonlinear inequality constrained optimization problem. Other methods for ensuring the regularity of the displacement field can be found in the literature [20, 38–43], but some of them require more computation time due to the complexity of the regularizer.
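In the notation above, the transformation \(\varphi(x) = x + u(x)\) is locally invertible and orientation preserving when
\[
\det \nabla \varphi(x) \;=\; \det\big(I + \nabla u(x)\big) \;>\; 0 \quad \text{for all } x \in \Omega .
\]
The penalty approach of [33–36] adds a term depending on \(\det \nabla \varphi\) to the objective (3), whereas the constrained approach of [32] imposes explicit pointwise bounds on (a discrete approximation of) this determinant.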
Image registration has two purposes. One is to enhance the similarity between two given images by geometrically transforming one of them. The other is to ensure that this transformation is reasonable. In the framework of the variational model, finding the geometric transformation is equivalent to finding the displacement field. If the displacement field is irregular, the transformation is considered unreasonable, and mesh folding will appear, which is not acceptable in practical applications. In this paper, we propose a new image registration model by integrating the evaluation criterion used to measure the registration result directly into the basic framework of the variational model (3).
The rest of the paper is organized as follows: in Section 2, we propose a new constrained linear curvature image registration model. In Section 3, we discuss a numerical method for solving the new model, which combines the multiplier method with a Gauss–Newton scheme and Armijo line search, and which is further combined with a multilevel strategy to achieve fast convergence. Some experimental results on synthetic and real images are presented in Section 4. Finally, conclusions and future work are summarized in Section 5.
2. Constrained Linear Curvature Image Registration Model
First, we briefly review the Fischer–Modersitzki linear curvature image registration model [25, 27]. The regularizer in (3) is chosen based on an approximation to the curvature of the surface of the displacement field and is given by the following form:
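In the sketch notation used above, and up to scaling conventions, the linear curvature regularizer can be written as
\[
\mathcal{S}^{\mathrm{curv}}[u] \;=\; \frac{1}{2}\sum_{l=1}^{d}\int_{\Omega}\big(\Delta u_{l}(x)\big)^{2}\,dx ,
\]
i.e., it penalizes the Laplacian of each component of the displacement field.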
There are two major advantages to this particular choice of regularizer. First, it penalizes oscillations; second, it can produce visually more satisfactory registration results than diffusion and elastic models for smooth displacement fields without requiring an additional affine linear preregistration step. However, mesh folding is not considered in this linear curvature model. In order to avoid it, we integrate the evaluation criterion used to measure the registration result directly into the basic framework of the variational model (3) and propose a constrained linear curvature image registration model of the following form:
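In sketch form, the constrained model has the structure
\[
\min_{u}\; \mathcal{D}\big[T(x+u), R\big] + \alpha\,\mathcal{S}^{\mathrm{curv}}[u]
\quad \text{subject to} \quad \det\big(I + \nabla u(x)\big) > 0 \ \ \text{for all } x \in \Omega ,
\]
that is, the pointwise positivity of the Jacobian determinant of the transformation is imposed as an explicit inequality constraint.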
Compared with model (5), the new model ensures that the displacement field is regular both globally and locally. In addition, the new model prevents mesh folding even for very small regularization parameters. Finally, visually pleasing registration results can be obtained with the new model at low computational cost for smooth registration problems. The numerical solution of the new model (7) is given below.
3. Numerical Solution of the New Model
In general, it is difficult to solve the optimization problem (7) analytically. Thus it is necessary to adopt a numerical method and an appropriate discretization. In this paper, we choose the discretize-then-optimize approach, which aims to take advantage of efficient optimization techniques. In this section, we first discuss briefly the discretization we use and then describe the details of the numerical algorithm.
3.1. Finite Difference Discretization
Assume that the given discrete images have a fixed number of pixels. For simplicity, the image region is further assumed to be rectangular, and each cell of the resulting cell-centered grid has a fixed width in each direction. Thus the discrete domain can be denoted by
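As an illustration of this cell-centered discretization, a minimal Python sketch is given below; the domain bounds, the cell counts m, and the cell widths h are placeholder names used only here.

```python
import numpy as np

def cell_centered_grid(omega=(0.0, 1.0, 0.0, 1.0), m=(64, 64)):
    """Cell centers of an m[0] x m[1] grid on the rectangle
    omega = (x1_min, x1_max, x2_min, x2_max); (h1, h2) are the cell widths."""
    h1 = (omega[1] - omega[0]) / m[0]
    h2 = (omega[3] - omega[2]) / m[1]
    x1 = omega[0] + (np.arange(m[0]) + 0.5) * h1   # cell centers, first direction
    x2 = omega[2] + (np.arange(m[1]) + 0.5) * h2   # cell centers, second direction
    X1, X2 = np.meshgrid(x1, x2, indexing="ij")
    return X1, X2, (h1, h2)
```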
3.1.1. Discretization of Regularizer
The discrete form of the continuous displacement field is represented by two grid functions, one per coordinate direction, defined on the discrete region. For convenience, these grid functions are collected into a single vector of unknowns. Since the curvature regularizer is based on the Laplacian operator, which can be regarded as the composition of the gradient operator and the divergence operator, we introduce symbols for their discrete counterparts. The discrete gradient operator can be defined at each pixel by the following form:
The displacement field satisfies homogeneous Neumann boundary conditions on the boundary of the image region:
By analogy with the continuous setting, the discrete divergence operator is the negative adjoint (transpose) of the discrete gradient operator, and it can thus be defined in the following form. For convenience of calculation, the grid functions are reordered into column vectors according to lexicographical ordering:
With this ordering, the discrete gradient operator can also be expressed as the product of a sparse matrix and the displacement vector, in the following form:
Let
By this notation, we can get
Let , , and
Then the discrete form of (18) is as follows:
According to the midpoint quadrature formula, the linear curvature regularizer has the following discrete form:
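A minimal Python sketch of this construction is given below, assuming a standard five-point Laplacian with mirrored boundary rows for the Neumann conditions and one common lexicographic ordering; the exact stencil, ordering, and scaling used in the implementation may differ.

```python
import numpy as np
import scipy.sparse as sp

def laplacian_neumann(m, h):
    """Discrete 2-D Laplacian on an m[0] x m[1] cell-centered grid with
    homogeneous Neumann boundary conditions, built from Kronecker products."""
    def d2(n, hi):
        e = np.ones(n)
        A = sp.diags([e[:-1], -2.0 * e, e[:-1]], offsets=[-1, 0, 1], format="lil")
        A[0, 0] = A[-1, -1] = -1.0      # mirrored boundary rows (Neumann)
        return sp.csr_matrix(A) / hi ** 2
    D1, D2 = d2(m[0], h[0]), d2(m[1], h[1])
    I1, I2 = sp.identity(m[0]), sp.identity(m[1])
    return sp.kron(I2, D1) + sp.kron(D2, I1)    # acts on lexicographically ordered vectors

def curvature_value(u1, u2, L, h):
    """Midpoint-rule value 0.5 * h1 * h2 * (||L u1||^2 + ||L u2||^2)."""
    return 0.5 * h[0] * h[1] * (np.sum((L @ u1) ** 2) + np.sum((L @ u2) ** 2))
```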
3.1.2. Discretization of Template and Reference
For a given discrete image, if we want to know the gray value at a spatial location other than a grid point, image interpolation is needed. In order to take full advantage of fast and effective optimization methods, a smooth cubic B-spline is used for interpolation. In what follows, the resulting continuous smooth approximations of the template image and the reference image are used in place of the original discrete images.
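For illustration, the following Python sketch builds a smooth spline approximation of a discrete image that can be evaluated at arbitrary transformed coordinates; SciPy's RectBivariateSpline is used here only as a stand-in for the cubic B-spline interpolant, and the argument names are placeholders.

```python
from scipy.interpolate import RectBivariateSpline

def spline_image(image, x1_centers, x2_centers):
    """Smooth cubic spline approximation of a discrete image sampled at cell centers.
    Returns a function giving gray values at arbitrary points (p1, p2)."""
    spline = RectBivariateSpline(x1_centers, x2_centers, image, kx=3, ky=3, s=0)
    return lambda p1, p2: spline.ev(p1, p2)   # pointwise evaluation
```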
Thus the discrete reference image and the transformed template image can be represented in the following form, respectively. Furthermore, we can compute the Jacobian of the transformed template image with respect to the displacement, which is a block matrix with diagonal blocks.
3.1.3. Discretization of Distance Measure
Even in the continuous setting, it is generally not possible to compute the integrals analytically, so numerical integration is necessary. In the discrete setting, the midpoint quadrature formula can be used to approximate the integral. According to (22) and (23), the discrete form of the distance measure can be written directly as follows:
In addition, the derivative of the discrete distance functional with respect to the discrete displacement can also be calculated and has the following form:
Furthermore, we can calculate the second derivative of the distance measure. On the one hand, it is time-consuming and numerically unstable to compute the higher-order derivatives in (27) when registering two images in practical applications. On the other hand, the difference between the transformed template image and the reference image becomes smaller as the template image becomes well registered. To obtain an efficient and stable numerical algorithm, as proposed in [5], the second derivative can be approximated by the following form:
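Under the standard assumptions (SSD distance, cell area \(\bar h = h_{1}h_{2}\), cell centers \(x_{c}\), and \(J\) the Jacobian of the interpolated, transformed template with respect to the discrete displacement; this notation is introduced here only for illustration), these three quantities typically take the form
\[
D_h(u) = \frac{\bar h}{2}\,\big\|T(x_{c}+u) - R(x_{c})\big\|_{2}^{2},
\qquad
\nabla D_h(u) = \bar h\, J^{\top}\big(T(x_{c}+u) - R(x_{c})\big),
\qquad
\nabla^{2} D_h(u) \approx \bar h\, J^{\top} J ,
\]
where the last, Gauss–Newton-type approximation drops the second-order terms of the interpolant.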
3.1.4. Discretization of Inequality Constraint Functional
In model (7), the inequality constraint functional is defined by
According to the previous analysis, the discrete forms of the partial derivatives of the components of the continuous displacement field can be expressed as follows:
The resulting difference operators act directly on the lexicographically ordered displacement vectors. In the following, the product symbol denotes the Hadamard product, i.e., the multiplication of the corresponding elements of two vectors, and individual elements of the resulting vector are indexed accordingly. Therefore, the continuous inequality constraint functional has the following discrete form:
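With discrete partial derivative operators \(\hat\partial_{1}, \hat\partial_{2}\) acting on the two displacement components, a vector of ones \(\mathbf{1}\), and \(\odot\) denoting the Hadamard product (notation used here only for illustration), the discrete constraint typically reads
\[
\widehat{\det}\,\nabla\varphi
= \big(\mathbf{1} + \hat\partial_{1}u_{1}\big)\odot\big(\mathbf{1} + \hat\partial_{2}u_{2}\big)
- \big(\hat\partial_{2}u_{1}\big)\odot\big(\hat\partial_{1}u_{2}\big),
\qquad
\widehat{\det}\,\nabla\varphi > 0 \ \ \text{componentwise},
\]
i.e., a pointwise positivity condition on the discretized Jacobian determinant.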
Since the first-order variation of the continuous inequality constraint functional with respect to the continuous displacement field is as follows:
Thus we can get the discrete form of the first-order variation:
Obviously, , , and .
3.2. Solving the Discrete Optimization Problem
According to the above analysis, the inequality constrained model (7) has the following discrete form:
Below we use the multiplier method to numerically solve the inequality constrained optimization problem (35). The basic idea of this method is to transform the original problem into a sequence of unconstrained optimization problems while simultaneously estimating the Lagrange multipliers. For more details on the multiplier scheme, see [37]. Before solving (35), we briefly review the multiplier method for inequality constrained optimization.
3.2.1. Multiplier Method for Inequality Constrained Problems
Consider the following inequality constrained optimization problem:
By introducing auxiliary (slack) variables, the above inequality constrained problem can be transformed into the following equivalent equality constrained problem:
In this case, the augmented Lagrange function can be expressed as
In order to eliminate the auxiliary variables, we consider the minimization of the augmented Lagrange function with respect to them. According to the first-order necessary condition, let
We can get
Namely,
Therefore, when , that is to say,
Thus when , we have
And when , we can obtain
According to the above two cases,
Substituting it into formula (38), we can get the corresponding augmented Lagrange function of (36):
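For reference, for the problem of minimizing \(f(x)\) subject to \(c_{i}(x)\ge 0\), \(i=1,\dots,m\), the standard Powell–Hestenes–Rockafellar form of this augmented Lagrange function is
\[
L_{\sigma}(x,\lambda)
= f(x) + \frac{1}{2\sigma}\sum_{i=1}^{m}\Big(\big[\max\big(0,\lambda_{i}-\sigma c_{i}(x)\big)\big]^{2}-\lambda_{i}^{2}\Big)
= f(x) - \sum_{i=1}^{m}\lambda_{i}\min\!\Big(c_{i}(x),\frac{\lambda_{i}}{\sigma}\Big)
+ \frac{\sigma}{2}\sum_{i=1}^{m}\Big[\min\!\Big(c_{i}(x),\frac{\lambda_{i}}{\sigma}\Big)\Big]^{2},
\]
together with the multiplier update \(\lambda_{i}^{k+1}=\max\big(0,\lambda_{i}^{k}-\sigma_{k}c_{i}(x^{k})\big)\); sign and scaling conventions vary slightly across the literature.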
Since the multiplier vector needs to be updated when solving the inequality constrained optimization problem (36) by the multiplier method, we next derive the multiplier iteration formula. First, fix the penalty parameter at its value in the current iteration and fix the multiplier vector at its current estimate. Second, perform the minimization of the augmented Lagrange function with respect to the primal variable. Denoting the resulting approximate minimizer by the new iterate, the optimality conditions for unconstrained minimization give
Let the optimal solution satisfy the KKT conditions for (37); then we have
By comparing (48) with (49), we can deduce that
According to (50), in order to improve the current estimate of the Lagrange multiplier vector, the multiplier iteration formula can be given in the following form:
Then, substituting (43) into the multiplier iteration formula (51), we have
Furthermore, it can be written as
Similarly, substituting (43) into the termination criterion
We can get
3.2.2. Multiplier Method for Solving Model
Next, we use the multiplier method to solve model (35). First, we construct the corresponding augmented Lagrange function:
The corresponding multiplier iteration formula has the following form:
And the corresponding stopping criterion is
Although the augmented Lagrangian function (57) of model (35) contains the min function, it is still continuously differentiable; for details, see [37, 44]. The detailed steps of the multiplier method for solving model (35) are summarized in Algorithm 1.
Step 1: input the initial data: the starting point, the objective function and its gradient, the inequality constraint vector, and the transpose of its Jacobian matrix; initialize the multiplier vector, the penalty parameter, the iteration counter, and the remaining algorithmic constants.
Step 2: solve the subproblem. With the current iterate as the initial point, minimize the unconstrained subproblem (51) by using the Gauss–Newton scheme with Armijo line search.
Step 3: check the termination condition. If either stopping test is satisfied, where the residual is defined by (57), stop the iteration and output the current iterate as an approximate minimizer of the original problem; otherwise, go to Step 4.
Step 4: update the penalty parameter. If the constraint violation has been reduced sufficiently, keep the current penalty parameter; otherwise, increase it.
Step 5: update multiplier vector. Calculate
Step 6: increase the iteration counter by one and go to Step 2.
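For illustration, a compact Python sketch of this outer loop is given below; the parameter names (sigma0, rho, tau, tol, max_outer) are placeholders, and the Gauss–Newton scheme with Armijo line search of Step 2 is replaced by a generic quasi-Newton routine from SciPy.

```python
import numpy as np
from scipy.optimize import minimize

def augmented_lagrangian(f, grad_f, c, jac_c, x0, lam0,
                         sigma0=10.0, rho=2.0, tau=0.25, tol=1e-4, max_outer=20):
    """Multiplier (augmented Lagrangian) loop for min f(x) s.t. c(x) >= 0 elementwise."""
    x, lam, sigma = x0.copy(), lam0.copy(), sigma0
    v_old = np.inf
    for _ in range(max_outer):
        def L(x):                                  # augmented Lagrange function
            m = np.maximum(0.0, lam - sigma * c(x))
            return f(x) + (np.sum(m ** 2) - np.sum(lam ** 2)) / (2.0 * sigma)
        def gL(x):                                 # its gradient
            m = np.maximum(0.0, lam - sigma * c(x))
            return grad_f(x) - jac_c(x).T @ m
        x = minimize(L, x, jac=gL, method="L-BFGS-B").x     # inner subproblem
        v = np.linalg.norm(np.minimum(c(x), lam / sigma))   # constraint residual
        if v <= tol:
            break                                  # termination test satisfied
        lam_new = np.maximum(0.0, lam - sigma * c(x))        # multiplier update
        if v > tau * v_old:
            sigma *= rho                           # insufficient progress: raise penalty
        lam, v_old = lam_new, v
    return x, lam
```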