
MAXIMUM A POSTERIORI VIDEO SUPER-RESOLUTION WITH A NEW MULTICHANNEL IMAGE PRIOR

Stefanos P. Belekos1,2, Nikolaos P. Galatsanos3, and Aggelos K. Katsaggelos1,2

1 Faculty of Physics, Department of Electronics, Computers, Telecommunications and Control,

National and Kapodistrian University of Athens

Panepistimiopolis, Zografos, 15784, Athens, Greece

phone: + (30) 2107276873, email: stefbel@phys.uoa.gr

2 Department of Electrical Engineering and Computer Science, Northwestern University

Evanston, IL, 60208-3118, USA

3 Department of Electrical and Computer Engineering, University of Patras

26500, Rio, Greece

phone: + (30) 2610969861, email: ngalatsanos@upatras.gr

ABSTRACT

Super-resolution (SR) is the term used to describe the process of estimating a high resolution (HR) image, or a set of HR images, from a set of low resolution (LR) observations. In this paper we propose a class of SR algorithms based on the maximum a posteriori (MAP) framework. These algorithms utilize a new multichannel image prior model, along with state-of-the-art image prior and observation models. Numerical experiments comparing the proposed algorithms demonstrate the advantages of the adopted multichannel approach.

1.

INTRODUCTION

Resolution enhancement of an image/frame or of a video sequence based on multiple LR observed frames, also referred to in the literature as super-resolution (SR), is of critical importance in signal processing applications such as video surveillance, remote sensing, medical imaging, cell phones, digital video cameras, portable Digital Versatile Disc (DVD) players, portable Global Positioning Systems (GPS), High Definition Television (HDTV), etc. [1]-[2]. The super-resolution problem is an inverse problem that requires a regularized solution. The Bayesian framework, used in this work, offers many advantages (see [1] for example).

In most of the Bayesian formulations which have been used for the SR problem so far, single-channel image priors have been adopted, based on a Gaussian stationarity assumption for the residuals of the local image differences [3], whereas non-Bayesian total variation (TV) regularization techniques have also been proposed [4]-[5]. As far as the imaging models are concerned, many techniques incorporate the motion field (MF) information [3]-[6], whereas others do not use this information at all.

The term multichannel [7], in the context of video recovery, implies the use of the between-frame correlations. Such approaches have been used successfully in the past for video restoration and compressed video reconstruction ([6]-[8] and [9], respectively). However, these approaches were deterministic, and the multichannel idea was basically imposed by using between-frame regularization.

In this paper we address the video SR problem utilizing a MAP approach. One of the main contributions of this work is the use of a new multichannel prior that incorporates between-frame registration information, which is directly related to the accuracy of the motion field estimation.

This paper is organized as follows. Section 2 describes the appropriate mathematical background on all the image priors and observation models used. Section 3 introduces a MAP problem formulation for the SR of uncompressed video for each one of the proposed models, along with the realizations of the corresponding algorithms. In Section 4 we demonstrate the efficacy of each one of the models through simulation experiments, which provide a comparison among them indicating the benefits of the new prior. Finally, Section 5 presents the conclusions.

2.

MATHEMATICAL BACKGROUND

2.1

Observation Models

In this paper we use two different observation models. In the first one, the relationship between a LR observation g_i and its HR counterpart f_i is given in matrix-vector form (all images have been lexicographically ordered) by

g_i = A H f_i + n_i,  \quad i = 1, 2, \ldots, P,   (1)

where g_i and f_i are of dimensions MN \times 1 and LMLN \times 1, respectively, A is the MN \times LMLN down-sampling matrix which sub-samples the HR frame (A^T defines the corresponding up-sampling operation), H is the LMLN \times LMLN blurring matrix, n_i, of size MN \times 1, represents the additive white Gaussian noise (AWGN) term, which includes the acquisition errors, P represents the total number of frames used, and L denotes the resolution enhancement factor.

Equation (1) can be rewritten for the multichannel case as

\bar{g} = \bar{A} \bar{H} \bar{f} + \bar{n},   (2)

where

\bar{g} = [g_{k-m}^T, \ldots, g_k^T, \ldots, g_{k+n}^T]^T, \quad \bar{f} = [f_{k-m}^T, \ldots, f_k^T, \ldots, f_{k+n}^T]^T, \quad \bar{n} = [n_{k-m}^T, \ldots, n_k^T, \ldots, n_{k+n}^T]^T,   (3)

n and m indicate respectively the number of frames used in the forward and backward temporal directions (n + m + 1 = P) with respect to the k-th frame, T denotes the transpose of a matrix-vector, and

\bar{H} = diag\{H, \ldots, H, \ldots, H\}, \quad \bar{A} = diag\{A, \ldots, A, \ldots, A\}   (4)

are respectively of dimensions PLMLN \times PLMLN and PMN \times PLMLN. Note that a generalization of models (1) and (2) is to consider a different blur and down-sampling per frame, i.e., to replace H by H_i and A by A_i.

16th European Signal Processing Conference (EUSIPCO 2008), Lausanne, Switzerland, August 25-29, 2008, copyright by EURASIP
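To make the degradation in (1) concrete, the following is a minimal 1-D sketch (an illustration added for this text, not code from the paper): a normalized blur standing in for H, sub-sampling by a factor L standing in for A, and additive Gaussian noise; the kernel, border handling, and signal are arbitrary choices.

```python
import random

def blur_matrix_times(f, kernel):
    # Apply a simple symmetric blur (the role of H) to the HR signal f,
    # replicating border samples.
    r = len(kernel) // 2
    n = len(f)
    out = []
    for i in range(n):
        s = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - r, 0), n - 1)
            s += w * f[j]
        out.append(s)
    return out

def downsample(x, L):
    # The role of A: keep every L-th sample of the blurred HR signal.
    return x[::L]

def observe(f, kernel, L, sigma, rng):
    # g = A H f + n, the toy 1-D analogue of Eq. (1).
    AHf = downsample(blur_matrix_times(f, kernel), L)
    return [v + rng.gauss(0.0, sigma) for v in AHf]

rng = random.Random(0)
f_hr = [float(i % 5) for i in range(16)]            # toy HR signal, length 16
g_lr = observe(f_hr, [0.25, 0.5, 0.25], 2, 0.0, rng)
print(len(g_lr))  # 8 samples: the LR observation is half the HR length
```

With L = 2 the observation has half the samples of the HR signal, mirroring the MN-versus-LMLN dimensions above.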

The second observation model utilizes the motion field, according to which

f_i = M(d_{i,k}) f_k + n_{i,k} = \hat{f}_i + n_{i,k},   (5)

where f_i and f_k are column vectors of size LMLN \times 1, M(d_{i,k}) is the 2-D motion compensation matrix of size LMLN \times LMLN mapping frame f_k into frame f_i with the use of d_{i,k} (the displacements at each pixel location), and n_{i,k} is assumed to be an AWGN process that accounts for the motion compensation (registration) errors and is also described by a column vector of size LMLN \times 1.

Combining (1) with (5) results in the definition of the second imaging model (the warp-blur model [1]),

g_i = A H M(d_{i,k}) f_k + w_i, \quad i = k-m, \ldots, k, \ldots, k+n,   (6)

with w_i = A H n_{i,k} + n_i a column vector of size MN \times 1 representing the total contribution of the noise term (including both registration and acquisition errors), which is again modelled as AWGN. Here, using the imaging model in (6), we are incorporating only the motion information that is relevant to the SR of the middle (k-th) frame.

2.2

Image Prior Models

In this work we consider two prior models. Although deterministic approaches have been developed in the past [10], the Bayesian formulation of a prior offers many advantages. A simple model based on a Gaussian stationarity assumption (stochastic non-stationary assumptions [11] have also been used) is given by

\epsilon_j = Q f_j \sim N(0, \alpha_j^{-1} I),

or equivalently

p(f_j; \alpha_j) \propto \alpha_j^{LMLN/2} \exp\left( -\frac{\alpha_j}{2} \| Q f_j \|^2 \right),   (7)

where Q represents a convolutional operator (a discrete Laplacian) of size LMLN \times LMLN, \| \cdot \| denotes the l_2 norm, the parameter \alpha_j accounts for the within-channel (within frame j) inverse variance, 0 is the LMLN \times 1 zero vector, and I is the LMLN \times LMLN identity matrix.
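The action of M(d_{i,k}) in (5) amounts to a per-pixel gather along the displacement field. A minimal 1-D sketch (illustrative only: integer displacements and clamped borders, whereas the paper's matrices act on lexicographically ordered 2-D frames):

```python
def motion_compensate(f_k, d):
    # Toy 1-D analogue of M(d_{i,k}) f_k: output pixel p takes the value
    # of f_k at p + d[p], clamped at the borders.
    n = len(f_k)
    return [f_k[min(max(p + d[p], 0), n - 1)] for p in range(n)]

f_k = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
d = [1] * 6                      # uniform one-pixel displacement
f_i = motion_compensate(f_k, d)  # predicted frame i, cf. Eq. (5) without noise
print(f_i)  # [1.0, 2.0, 3.0, 4.0, 5.0, 5.0]
```

Each row of M(d_{i,k}) is then a unit row vector selecting one pixel of f_k, which is why products with M and M^T are cheap even though the matrix is LMLN x LMLN.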

The second image prior model introduced in this paper is a new multichannel prior that takes into account both the within-frame smoothness captured by equation (7) and the between-frame smoothness incorporated through the motion field information. More specifically, the multichannel prior we propose is given by

p(\bar{f}; \bar{\beta}, \bar{\alpha}) \propto \frac{1}{Z} \prod_{i=k-m}^{k+n} \prod_{j=k-m, j \neq i}^{k+n} p(f_i | f_j; \beta_{ij}) \, p(f_j; \alpha_j),   (8)

where

p(f_i | f_j; \beta_{ij}) \propto \exp\left( -\frac{\beta_{ij}}{2} \| f_i - M_{ij} f_j \|^2 \right),   (9)

p(f_j; \alpha_j) is given by (7), and

M_{ij} = M(d_{j,i})^T,   (10)

where the matrix M_{ij}^T = M(d_{j,i}) clearly represents the operation of backward motion compensation along the motion vectors (mapping frame f_i into frame f_j with the use of d_{j,i}), whereas \bar{\beta} and \bar{\alpha} are the column vectors that contain the parameters \beta_{ij} and \alpha_j, respectively. The parameter \beta_{ij} represents the inverse variance (precision) of the motion compensation error between frames i and j.

The joint pdf on the right-hand side of Eq. (8) can also be written as

p(f_i, f_j; \beta_{ij}, \alpha_j) = p(f_i | f_j; \beta_{ij}) \, p(f_j; \alpha_j) \propto |R_{ij}|^{1/2} \exp\left( -\frac{1}{2} [f_i^T, f_j^T] \, R_{ij} \, [f_i^T, f_j^T]^T \right),   (11)

where

R_{ij} = \begin{bmatrix} \beta_{ij} I & -\beta_{ij} M_{ij} \\ -\beta_{ij} M_{ij}^T & \alpha_j Q^T Q + \beta_{ij} M_{ij}^T M_{ij} \end{bmatrix}   (12)

denotes the 'cross-channel' inverse covariance matrix. Then, we can rewrite (8) as

p(\bar{f}; \bar{\beta}, \bar{\alpha}) \propto \frac{1}{Z} \exp\left( -\frac{1}{2} \sum_{i=k-m}^{k+n} \sum_{j=k-m, j \neq i}^{k+n} [f_i^T, f_j^T] \, R_{ij} \, [f_i^T, f_j^T]^T \right) = \frac{1}{Z} \exp\left( -\frac{1}{2} \bar{f}^T \bar{R} \bar{f} \right),   (13)

thus

Z = |\bar{R}|^{-1/2},   (14)

with \bar{R} representing the inverse covariance matrix of the prior pdf (\bar{f} \sim N(0, \bar{R}^{-1})), which is not given in closed form due to space constraints.

Evaluating Z from (14) is cumbersome given the size of \bar{R}^{-1}, and mainly because of the need to find the derivative of this determinant with respect to all the involved parameters. Therefore, in this work we approximate the functional relationship between the parameters \alpha_j and \beta_{ij} and the normalizing constant Z (the partition function) of the multichannel prior as (in this paper we choose m = n)

Z \propto \left( \prod_{j=k-m}^{k+m} \alpha_j^{LMLN} \right)^{-(P-1)/2} \left( \prod_{i=k-m}^{k+m} \prod_{j=k-m, j \neq i}^{k+m} \beta_{ij}^{LMLN} \right)^{-1/2}.   (15)

Clearly, the introduced prior is an improper one. Improper priors have been used in the past with success in image recovery problems; see for example [11].
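The exponent of the multichannel prior in (13) is just a sum of pairwise within-frame and between-frame energies. A small 1-D sketch of that sum (added for illustration; the `compensate` callback is a hypothetical stand-in for the M_ij operators):

```python
def laplacian(f):
    # 1-D discrete Laplacian Q f with replicated borders.
    n = len(f)
    return [2 * f[i] - f[max(i - 1, 0)] - f[min(i + 1, n - 1)] for i in range(n)]

def sq(x):
    # Squared l2 norm.
    return sum(v * v for v in x)

def prior_energy(frames, alphas, betas, compensate):
    # Exponent (up to the factor -1/2) of the multichannel prior: for every
    # ordered pair i != j, beta_ij ||f_i - M_ij f_j||^2 + alpha_j ||Q f_j||^2.
    P = len(frames)
    e = 0.0
    for i in range(P):
        for j in range(P):
            if i == j:
                continue
            pred = compensate(frames[j], i, j)  # stands in for M_ij f_j
            e += betas[i][j] * sq([a - b for a, b in zip(frames[i], pred)])
            e += alphas[j] * sq(laplacian(frames[j]))
    return e

# Two identical constant frames, identity compensation: zero prior energy.
frames = [[1.0] * 5, [1.0] * 5]
e = prior_energy(frames, [1.0, 1.0], [[0.0, 1.0], [1.0, 0.0]], lambda f, i, j: f)
print(e)  # 0.0
```

Frames that are smooth and mutually consistent under the motion model get low energy (high prior probability); either roughness within a frame or misregistration between frames raises it.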

3.

MAP PROBLEM FORMULATION AND PROPOSED ALGORITHMS

Taking into account the main possible combinations of the observation models ((1), or equivalently (2), and (6)) with the prior models ((7) and (13)), we propose three formulations of the HR problem and derive the corresponding MAP algorithms.

3.1 Model 1

The simplest approach to the super-resolution problem is to use a single channel, i.e., the observation model of (1). In this case no motion field information is used (in applications where more than one frame is to be restored, the frames are independently super-resolved based on (1), without using any of the adjacent channels). Given (1), the fidelity pdf is defined as


p(g_i | f_i; \gamma_i) \propto \gamma_i^{MN/2} \exp\left( -\frac{\gamma_i}{2} \| g_i - A H f_i \|^2 \right),   (16)

where the parameter \gamma_i^{-1} is the acquisition noise variance (\gamma_i the inverse variance, or precision), whereas the prior model is defined by (7).

In obtaining the MAP estimate, the following objective function is minimized:

J_{MAP}(f_i | g_i; \alpha_i, \gamma_i) = -2 \log p(g_i, f_i; \alpha_i, \gamma_i) \propto -2 \log p(g_i | f_i; \gamma_i) - 2 \log p(f_i; \alpha_i),   (17)

resulting in

\alpha_i^{-1} = \| Q f_i \|^2 / LMLN, \quad \gamma_i^{-1} = \| g_i - A H f_i \|^2 / MN,   (18)

and

\left( H^T A^T A H + (\alpha_i / \gamma_i) Q^T Q \right) \hat{f}_i = H^T A^T g_i.   (19)
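The parameter updates in (18) are simple moment matches between the model and the current estimate. A minimal sketch (added for illustration; plain Python lists stand in for the lexicographically ordered images):

```python
def update_parameters(f, g, AHf, Qf):
    # Eq. (18): inverse prior variance alpha_i = LMLN / ||Q f_i||^2 and
    # inverse noise variance gamma_i = MN / ||g_i - A H f_i||^2,
    # given the current HR estimate f_i with its filtered versions.
    LMLN, MN = len(f), len(g)
    alpha = LMLN / sum(v * v for v in Qf)
    resid = [a - b for a, b in zip(g, AHf)]
    gamma = MN / sum(v * v for v in resid)
    return alpha, gamma

# Toy values: ||Qf||^2 = 8 over 8 samples, residual [1, 2] over 2 samples.
alpha, gamma = update_parameters([0.0] * 8, [1.0, 2.0], [0.0, 0.0], [1.0] * 8)
print(alpha, gamma)  # 1.0 0.4
```

In the iterations described in Section 4 these estimates are computed once from the initialization rather than re-updated at every C.G. step.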

3.2 Model 2

This model is based on [3], where the observation model is described by (6) and the prior model is also given by (7). In that case the fidelity pdf is given by

p(g_i | f_k, d_{i,k}; \gamma_{i,k}) \propto \gamma_{i,k}^{MN/2} \exp\left( -\frac{\gamma_{i,k}}{2} \| g_i - A H M(d_{i,k}) f_k \|^2 \right),   (20)

where the parameters \gamma_{i,k} are the inverse variances (precisions) related to the motion compensation errors / registration noise and also to the acquisition noise; as expected, for i = k it holds that \gamma_{i,k} = \gamma_k. Moreover, when f_k (and the motion field matrices) are given, the random variables g_i (the observations), or else the respective error terms, are assumed to be statistically independent. Thus we have

p(\bar{g} | f_k; \bar{\gamma}) = \prod_{i=k-m}^{k+m} p(g_i | f_k, d_{i,k}; \gamma_{i,k}),   (21)

where \bar{\gamma} denotes the column vector that contains all the (scalar) parameters \gamma_{i,k}.

With this model, the objective function that is minimized is given by

J_{MAP}(f_k | \bar{g}; \bar{\gamma}, \alpha_k) = -2 \log \left[ p(\bar{g} | f_k; \bar{\gamma}) \, p(f_k; \alpha_k) \right] = -2 \sum_{i=k-m}^{k+m} \log p(g_i | f_k, d_{i,k}; \gamma_{i,k}) - 2 \log p(f_k; \alpha_k),   (22)

resulting in

\gamma_{i,k}^{-1} = \| g_i - A H M(d_{i,k}) \hat{f}_k \|^2 / MN   (23)

and

\left( \bar{J} + \alpha_k Q^T Q \right) \hat{f}_k = \bar{Z},   (24)

where

\bar{J} = \sum_{i=k-m}^{k+m} \gamma_{i,k} M(d_{i,k})^T H^T A^T A H M(d_{i,k}), \quad \bar{Z} = \sum_{i=k-m}^{k+m} \gamma_{i,k} M(d_{i,k})^T H^T A^T g_i,   (25)

whereas the estimation of the parameter \alpha_k is given by the first (left-hand) equation in (18). Obviously, with this model only the part of the motion field information which is relative to the HR frame \hat{f}_k is used, and this contribution is attributed to the formulation of the observation model.
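The normal equations (24)-(25) are built by accumulating one weighted term per observed channel. A toy dense-matrix sketch of that accumulation (added for illustration, with hypothetical tiny matrices; the paper never forms these products explicitly):

```python
def matT(A):
    # Transpose of a matrix stored as a list of rows.
    return [list(r) for r in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def matvec(A, x):
    return [sum(a * v for a, v in zip(row, x)) for row in A]

def normal_equations(gammas, Ms, AH, gs):
    # Eqs. (24)-(25): with B_i = A H M(d_{i,k}),
    #   Jbar = sum_i gamma_ik B_i^T B_i,   Zbar = sum_i gamma_ik B_i^T g_i.
    n = len(Ms[0])
    J = [[0.0] * n for _ in range(n)]
    Z = [0.0] * n
    for gam, M, g in zip(gammas, Ms, gs):
        B = matmul(AH, M)
        BtB = matmul(matT(B), B)
        Btg = matvec(matT(B), g)
        for r in range(n):
            Z[r] += gam * Btg[r]
            for c in range(n):
                J[r][c] += gam * BtB[r][c]
    return J, Z

# One channel, identity blur/down-sampling/motion: Jbar = I, Zbar = g.
I2 = [[1.0, 0.0], [0.0, 1.0]]
J, Z = normal_equations([1.0], [I2], I2, [[1.0, 2.0]])
print(J, Z)
```

Channels with small \gamma_{i,k} (poor registration) contribute little to either side, which is how the model down-weights badly compensated frames.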

3.3 Model 3

With this model, the multichannel observation model described by (2) is combined with the new multichannel prior described by (8), (13) and (15). Based on the above analysis, the fidelity pdf term is given by

p(\bar{g} | \bar{f}; \bar{\gamma}) \propto Det(\bar{\Gamma})^{-1/2} \exp\left\{ -\frac{1}{2} (\bar{g} - \bar{A}\bar{H}\bar{f})^T \bar{\Gamma}^{-1} (\bar{g} - \bar{A}\bar{H}\bar{f}) \right\},   (26)

where \bar{\Gamma} = diag\{\gamma_{k-m}^{-1} I, \ldots, \gamma_k^{-1} I, \ldots, \gamma_{k+m}^{-1} I\} is the covariance matrix of size PMN \times PMN, I is the identity matrix of size MN \times MN, and \bar{\gamma} = [\gamma_{k-m}, \ldots, \gamma_k, \ldots, \gamma_{k+m}]^T is the column vector that contains the inverse noise variances for each one of the channels that are used.

Consequently, the objective function is given by

J_{MAP}(\bar{f} | \bar{g}; \bar{\beta}, \bar{\alpha}, \bar{\gamma}) \propto -2 \log p(\bar{g}, \bar{f}; \bar{\beta}, \bar{\alpha}, \bar{\gamma}) = -2 \log p(\bar{g} | \bar{f}; \bar{\gamma}) - 2 \log p(\bar{f}; \bar{\beta}, \bar{\alpha}),   (27)

and its minimization results in

\alpha_j^{-1} = \| Q f_j \|^2 / LMLN, \quad \beta_{ij}^{-1} = \| f_i - M_{ij} f_j \|^2 / LMLN,   (28)

with

\left( \bar{\Lambda} + \bar{R} \right) \hat{\bar{f}} = \bar{G} \bar{g},   (29)

where \bar{\Lambda} = \bar{H}^T \bar{A}^T \bar{\Gamma}^{-1} \bar{A} \bar{H} = diag\{\gamma_{k-m} H^T A^T A H, \ldots, \gamma_k H^T A^T A H, \ldots, \gamma_{k+m} H^T A^T A H\} and \bar{G} = \bar{H}^T \bar{A}^T \bar{\Gamma}^{-1} = diag\{\gamma_{k-m} H^T A^T, \ldots, \gamma_k H^T A^T, \ldots, \gamma_{k+m} H^T A^T\}.

In this model the motion field information is taken into account only through the prior and not through the observation term, whereas (18) also holds as far as the estimation of \gamma_i is concerned. Moreover, in model 3 simultaneous SR (and restoration) of all the HR frames takes place, which is not the case in models 1 and 2. Finally, it is clear that (19), (24) and (29) cannot be solved in closed form, given that analytical inversion of the involved matrices is not possible due to the non-circulant nature of the matrices A^T A and M_{ij}. Thus, we resorted to a numerical solution using a conjugate-gradient (C.G.) algorithm.
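Systems of the form (19), (24) and (29) are symmetric positive definite, so C.G. needs only a matrix-vector product, never an explicit inverse. A generic sketch of the iteration (standard C.G., not the paper's implementation):

```python
def conjugate_gradient(matvec, b, iters=50, tol=1e-10):
    # Solve A x = b for symmetric positive definite A, supplied only
    # through its matrix-vector product `matvec` -- the situation in
    # Eqs. (19), (24) and (29), where A is never inverted explicitly.
    x = [0.0] * len(b)
    r = list(b)              # residual b - A x for x = 0
    p = list(r)
    rs = sum(v * v for v in r)
    for _ in range(iters):
        Ap = matvec(p)
        alpha = rs / sum(pi * qi for pi, qi in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD system: A = [[4, 1], [1, 3]], b = [1, 2]; solution (1/11, 7/11).
x = conjugate_gradient(lambda v: [4 * v[0] + v[1], v[0] + 3 * v[1]], [1.0, 2.0])
```

In the SR setting, `matvec` would chain the blur, sampling, and motion compensation operators and their transposes, each of which is cheap to apply.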

4.

NUMERICAL EXPERIMENTS

In this section we present numerical experiments to evaluate our algorithms and to justify the benefits provided by the use of the new multichannel prior based on (13). In all experiments, a central 316 x 316 region of the sequence Mobile was selected, similarly to the experiments in [3]-[4] (these experiments have also been conducted using more sequences, yielding similar results). Moreover, m = n = 2 (k = 3) and frames '018'-'022' were chosen.

Two cases were considered: in the first one a uniform 9 x 9 blur was used, whereas in the second one no blur was used (H = I). After blurring, sub-sampling by a factor of two (L = 2) in both dimensions took place, and white Gaussian noise was added.


The noise was added such that the blurred signal-to-noise ratio (BSNR), defined (in dB) as

BSNR_i = 10 \log_{10} \left( \| A H f_i - \overline{A H f_i} \|^2 / (MN \gamma_i^{-1}) \right),

where \overline{A H f_i} is the spatial mean of the vector A H f_i (or equivalently the SNR when H = I), for each LR frame equals 20 dB, 30 dB and 40 dB.

The metric used to quantify performance was the improvement in signal-to-noise ratio (ISNR). This metric (in dB) is defined as

ISNR_i = 10 \log_{10} \left( \| f_i - g_{i,I} \|^2 / \| f_i - \hat{f}_i \|^2 \right),

where g_{i,I} denotes the bicubic interpolation of the i-th LR observation.
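The ISNR above compares the SR estimate against the bicubic baseline; a direct transcription (added for illustration, with flat lists standing in for images):

```python
import math

def isnr_db(f_true, f_interp, f_est):
    # ISNR_i = 10 log10( ||f_i - g_{i,I}||^2 / ||f_i - f_hat_i||^2 ).
    # Positive when the SR estimate f_hat beats the interpolated baseline.
    num = sum((a - b) ** 2 for a, b in zip(f_true, f_interp))
    den = sum((a - b) ** 2 for a, b in zip(f_true, f_est))
    return 10.0 * math.log10(num / den)

# A 10x smaller squared error than the baseline gives ISNR = 10 dB.
print(isnr_db([0.0, 0.0], [10.0, 0.0], [1.0, 3.0]))  # 10.0
```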

In model 1, considering both experimental cases, an iterative scheme was used (in our attempt to get the best possible initial conditions for models 2 and 3), where the bicubically interpolated LR observations served as initial conditions for the C.G. algorithm and for the estimation of the parameters.

In all experiments (in both cases) with models 2 and 3, the algorithm (including the estimation of the parameters) was initialized by a single-frame stationary super-resolution algorithm (model 1), from which the motion field computation was also performed using a 3-level hierarchical block matching algorithm with integer-pixel accuracy at each level. For the C.G. implementation, the matrices M_{ij} were initially estimated and remained fixed (no iterative scheme was adopted, and the same holds for the estimated precision parameters in the proposed algorithms of models 2 and 3).

In all the aforementioned models, the convergence criterion adopted for the termination of the C.G. algorithm was

\| f_k^{new} - f_k^{old} \|^2 / \| f_k^{old} \|^2 < 5 \times 10^{-5}

(in model 1 it was used for each frame independently).
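The criterion is a relative squared change between successive estimates; as a one-line check:

```python
def converged(f_new, f_old, tol=5e-5):
    # Termination test for the C.G. iterations:
    # ||f_new - f_old||^2 / ||f_old||^2 < 5 * 10^-5.
    num = sum((a - b) ** 2 for a, b in zip(f_new, f_old))
    den = sum(v * v for v in f_old)
    return num / den < tol
```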

Moreover, a noteworthy observation of our experiments was the robustness of model 3 with respect to initial conditions. More specifically, in all cases, when the C.G. algorithm was initialized by the bicubically interpolated LR observations, and the motion fields along with the parameters were also estimated from them, the ISNR proved to be lower but close to the respective values for this model given in Tables 1 and 2 below.

In Table 1 the ISNR results for the blurry case are given, whereas in Table 2 the ISNR results for super-resolution without blur are shown (for different (B)SNRs).

TABLE 1
ISNR (IN dB) COMPARISON AMONG MODELS WITH RESPECT TO THE MIDDLE FRAME

Noise Level   BSNR=20dB   BSNR=30dB   BSNR=40dB
Model 1       1.5489      3.4379      4.7620
Model 2       2.0390      3.5480      4.8175
Model 3       2.5756      4.1684      5.4582

TABLE 2
ISNR (IN dB) COMPARISON AMONG MODELS WITH RESPECT TO THE MIDDLE FRAME (NO BLUR CASE)

Noise Level   SNR=20dB   SNR=30dB   SNR=40dB
Model 1       1.9269     3.6800     4.1258
Model 2       2.2788     3.7974     4.1522
Model 3       2.8934     4.3088     4.5426

Figures 1, 2, 3 and 4 show representative examples of the proposed algorithms. In these figures, due to space constraints, we show the central segment of the corresponding (middle) frame for each adopted approach. As can be seen, several areas benefit from the recovery (mainly by model 3); e.g., the numbers in the calendar are sharper.


Figure 1 - SR results for BSNR=20dB (a) Segment of bicubically

interpolated LR observation of middle frame, (b) SR model 1

(ISNR=1.5489dB), (c) SR model 2 (ISNR=2.0390dB), (d) SR

model 3 (ISNR=2.5756 dB).


Figure 2 - SR results for BSNR=30dB (e) Segment of bicubically

interpolated LR observation of middle frame, (f) SR model 1

(ISNR=3.4379dB), (g) SR model 2 (ISNR=3.5480dB), (h) SR

model 3 (ISNR=4.1684dB).



Figure 3 - SR results for SNR=20dB (a) Segment of bicubically

interpolated LR observation of middle frame, (b) SR model 1

(ISNR=1.9269dB), (c) SR model 2 (ISNR=2.2788dB), (d) SR

model 3 (ISNR=2.8934dB).


Figure 4 - SR results for SNR=30dB (e) Segment of bicubically

interpolated LR observation of middle frame, (f) SR model 1

(ISNR=3.6800dB), (g) SR model 2 (ISNR=3.7974dB), (h) SR

model 3 (ISNR=4.3088dB).


5.

CONCLUSIONS

In this paper, we presented a MAP approach to the digital video SR problem based on a new multichannel image prior, along with three proposed algorithms and their comparative study. These algorithms have been tested in different cases (in the presence or absence of blur, for different BSNR and SNR values, respectively). The experimental results show in both cases that the algorithm based on the newly proposed model (model 3) performs better than previous ones in terms of both robustness with respect to the initial conditions and improvement in SNR (ISNR). Moreover, the efficacy of this algorithm is further established by the fact that it provides the maximum gain with respect to the single-frame SR algorithm (model 1) at low BSNR / SNR values, which is also the case with respect to model 2 when no blur is used (resolution enhancement efficacy). Finally, the comparison between models 2 and 3 serves as a strong indication that the use of the motion field in the prior term is much more efficient, both in terms of restoration capability and resolution enhancement, than its use in the observation term (model).

ACKNOWLEDGEMENT

This paper is part of the 03ED-535 research project, implemented within the framework of the "Reinforcement Programme of Human Research Manpower" (PENED) and co-financed by National and Community Funds (25% from the Greek Ministry of Development - General Secretariat of Research and Technology and 75% from the E.U. - European Social Fund).

REFERENCES

[1] A. K. Katsaggelos, R. Molina, and J. Mateos, Super Resolution of Images and Video, 1st ed. Morgan & Claypool, 2007, pp. 5-9, 29-30, 77-88.

[2] S. Borman and R. Stevenson, Spatial resolution enhancement of low resolution image sequences: A comprehensive review with directions for future research. Technical report, Laboratory for Image and Signal Analysis (LISA), University of Notre Dame, Notre Dame, IN 46556, USA, July 1998.

[3] C. A. Segall, A. K. Katsaggelos, R. Molina, and J. Mateos,

“Bayesian resolution enhancement of compressed video,”

IEEE Transactions on Image Processing, vol. 13, no. 7, pp.

898-910, 2004.

[4] C. Wang, P. Xue, and W. Lin, “Improved super-resolution

reconstruction from video,” IEEE Transactions on Circuits

and Systems for Video Technology, vol. 16, no. 11, pp. 1411-

1422, November 2006.

[5] M. K. Ng, H. Shen, E. Y. Lam, and L. Zhang, “A total varia-

tion regularization based super-resolution algorithm for digital

video,” EURASIP Journal on Advances in Signal Processing,

vol. 2007, Article ID 74585, doi: 10.1155/2007/74585.

[6] S. Borman and R. Stevenson, "Simultaneous multi-frame MAP super-resolution video enhancement using spatio-temporal priors," in Proc. ICIP 1999, Kobe, Japan, October 24-28, 1999, pp. 469-473.

[7] N. P. Galatsanos and R. T. Chin, "Digital restoration of multichannel images," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 3, pp. 415-421, March 1989.

[8] M. G. Choi, N. P. Galatsanos, and A. K. Katsaggelos, “Mul-

tichannel regularized iterative motion compensated restoration

of image sequences,” Journal of Visual Communications and

Image Representation, vol. 7, no. 3, pp. 244-258, September

1996.

[9] M. G. Choi, Y. Yang, and N. P. Galatsanos, “Multichannel

regularized recovery of compressed video sequences,” IEEE

Transactions on Circuits and Systems II: Analog and Digital

Signal Processing, vol. 48, no. 4, pp. 376-387, April 2001.

[10] A. K. Katsaggelos, J. Biemond, R. Mersereau, and R. Schaefer,

“A regularized iterative restoration algorithm,” IEEE Trans.

Signal Processing, vol. 39, pp. 914-929, 1991.

[11] G. Chantas, N. Galatsanos, and A. Likas, “Bayesian restoration

using a new non-stationary edge-preserving image prior,”

IEEE Transactions on Image Processing, vol. 15, no. 10, pp.

2987-2997, October 2006.
