Multi-view 3D CG Image Quality Assessment for Contrast Enhancement Including S-CIELAB Color Space in case the Background Region is Gray Scale

Abstract

In this paper, we first conducted a subjective evaluation of 3D CG images including a gray scale region, displayed with an 8-viewpoint parallax barrier method, and analyzed the results statistically. Next, we measured the relation between luminance change and color difference using the S-CIELAB color space and CIEDE2000. As a result, we obtained findings on the relations among coded image quality, contrast enhancement, and gray scale.
Multi-view 3D CG Image Quality
Assessment for Contrast Enhancement
Including S-CIELAB Color Space in case
the Background Region is Gray Scale
Department of Information Engineering,
Graduate School of Information Science,
Nagoya University
Norifumi Kawabata and Masaru Miyao
The 31st International Technical Conference on
Circuits/Systems, Computers and Communications
(ITC-CSCC 2016)
Date: July 12, 2016
T2-6-3, 14:10-14:30
Presentation: 15 minutes, Q&A: 5 minutes
Contents
1. Background and purpose of research
2. Image quality evaluation experiment
3. Experimental results and discussion
4. Conclusion
July 12, 2016 (C) NORIFUMI KAWABATA AND MASARU MIYAO 2
1. Background and
purpose of research
With the spread of QFHDTV (Quad Full HD TV), the quality of 3D still images and video has become higher than before (FHDTV).
Glassless 3D video in particular is attracting attention as an added value.
Meanwhile, research and development on multi-view 3D image quality assessment has been advancing toward the near future.
(Figure: display resolutions — FHDTV 1920×1080, QFHDTV 3840×2160, 8K UHDTV 7680×4320)
Background of research
Previous and related work
Thus far, there have been studies on image quality assessment based on the S-CIELAB color space, which incorporates the spatial frequency characteristics of the human visual system [1], [2].
We have also carried out quality assessment of multi-view 3D images [3], multi-view 3D images with contrast enhancement [4], and multi-view 3D images with contrast enhancement of the object region [5], based on the S-CIELAB color space.
The problem of the CIELAB color system
CIELAB is a color system whose coordinates lie in an approximately uniform color space consisting of lightness L* and chromaticity a*, b*.
Its problem is that the color difference ΔE*ab can disagree with the evaluation of colors by the human eye.
This may occur because the shape of the color discrimination range of the human eye differs greatly from the shape of the ΔE*ab color difference range.
S-CIELAB color space and
CIEDE2000
Therefore, the S-CIELAB color space and CIEDE2000 were proposed.
The S-CIELAB color space takes into account the spatial frequency characteristics of the human visual system.
CIEDE2000 corrects the disagreement between the color discrimination range of the human eye and visual evaluation, which may be caused by differences in the range's size and shape.
[ref]. http://www.konicaminolta.jp/instruments/knowledge/color/index.html
[1]. G. M. Johnson and M. D. Fairchild, “A Top Down Description of S-CIELAB and CIEDE2000,” Color Research & Application, vol. 28, no. 6, pp. 425–435, 2003.
Whether the background region is
gray scale or not
As content creators, we consider that it is necessary not only to deal with coded degradation but also to keep the background from standing out.
We consider that both can be achieved by using gray scale in the background.
For 3D images, in which luminance affects both the stereoscopic effect and the evaluation result, it is not clear how much degradation or luminance change assessors can accept depending on whether the background region is gray scale or not.
Therefore, we need to verify these problems.
In this study,
First, we carried out a quality evaluation experiment on 3D CG images encoded and decoded by H.265/HEVC with contrast enhancement, in the case where the background region is gray scale. We then analyzed the results and classified them by luminance change and coded image quality (Exp. 1).
Next, we measured the luminance objectively by transforming to the S-CIELAB color space (Exp. 2).
Finally, we measured the color difference objectively using CIEDE2000 (Exp. 3), and compared the results.
2. Image quality
evaluation experiment
2.1. 3D CG images used
in this study
3D CG image used in this study
In this study, we used the 3D CG images “Museum (a), (c), (e)” and “Wonder World (b), (d), (f),” provided free of charge by NICT [6].
NICT: National Institute of Information and Communications Technology
[6]. http://www.nict.go.jp/
Process of generating the 3D images
1. As the generation procedure, we placed CG cameras for the number of viewpoints in the virtual space, and generated HD-quality still images for the 8 viewpoints.
2. Then, we divided each of the 8-viewpoint still images into the object region and the background region, and applied a gray scale transformation to the background region image.
3. After that, we composed the regions again.
Process of generating the 3D images (continued)
4. Next, we applied contrast enhancement to the generated images by Adaptive Histogram Equalization (AHE).
5. We encoded and decoded the generated images by H.265/HEVC.
6. After repeating these processes for all 8 viewpoints, we composed the images of each viewpoint. As a result, we generated the 3D images.
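Step 2 of the generation process (gray scale transformation of the background region) can be sketched as follows; the BT.601 luma weights and the mask-based region split are assumptions for illustration, since the slides do not state the exact conversion used:

```python
def to_gray(rgb):
    # BT.601 luma weights (an assumption; the slides do not state the formula)
    r, g, b = rgb
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return (y, y, y)

def gray_background(pixels, object_mask):
    # pixels: list of (R, G, B); object_mask: True where the pixel belongs to
    # the object region. Object pixels are kept; background pixels become gray.
    return [p if is_obj else to_gray(p)
            for p, is_obj in zip(pixels, object_mask)]

img = [(200, 40, 40), (10, 200, 10)]
mask = [True, False]              # first pixel is in the object region
out = gray_background(img, mask)  # second pixel gets equal R = G = B
```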
(a) Museum (ref) (b) Wonder World (ref)
Types of 3D CG image sequences
1. Quantization Parameter (QP = 0 (ref), 20, 30, 40, 51) of H.265/HEVC
2. Maximum luminance parameter (L = ref, 0.25, 0.50, 1.00, 2.00, 4.00) of Adaptive Histogram Equalization
3. Contents (“Museum,” “Wonder World”)
We prepared 3D CG images of 60 sequences in total.
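The count of 60 sequences follows from the cross product of the three factors above; a quick check:

```python
from itertools import product

qp_values = ["ref", 20, 30, 40, 51]                 # H.265/HEVC Quantization Parameter
lum_values = ["ref", 0.25, 0.50, 1.00, 2.00, 4.00]  # AHE maximum luminance parameter
contents = ["Museum", "Wonder World"]

# Every (content, QP, L) combination gives one test sequence: 2 * 5 * 6 = 60.
sequences = list(product(contents, qp_values, lum_values))
print(len(sequences))  # 60
```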
2.2. S-CIELAB color
space and CIEDE2000
Flowchart from S-CIELAB color space to CIEDE2000
(Figure: flowchart — load the original image and the coding image; transform RGB to XYZ; transform XYZ to the opponent color space; apply the spatial filter by convolution; transform back to XYZ; transform XYZ to Lab; finally compute CIEDE2000.)
(1) Transform RGB space to XYZ space
Load the original image and the coding image.
Separate the RGB colors, then transform RGB space to XYZ space.
(2) Transform XYZ space to the opponent color space
Transform each XYZ component of both the original and the coding image to the opponent color space (O1, O2, O3).
(1) Transform RGB space to XYZ space

[X Y Z]^T = M_RGB [R G B]^T,  with

M_RGB = | 0.41245 0.35758 0.18042 |
        | 0.21267 0.71516 0.07217 |
        | 0.01933 0.11919 0.95023 |

(2) Transform XYZ space to the opponent color space

[O1 O2 O3]^T = M_opp [X Y Z]^T,  with

M_opp = |  0.297  0.72  −0.107 |
        | −0.449  0.29  −0.077 |
        |  0.086 −0.59   0.501 |
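The two transforms can be sketched per pixel. The matrix magnitudes are those on the slide; the minus signs in the opponent matrix (lost in the extracted slide) follow the standard S-CIELAB opponent transform and are an assumption here:

```python
# Per-pixel color transforms: RGB -> XYZ -> opponent (O1, O2, O3).
M_RGB2XYZ = [
    [0.41245, 0.35758, 0.18042],
    [0.21267, 0.71516, 0.07217],
    [0.01933, 0.11919, 0.95023],
]
# Sign pattern per the standard S-CIELAB opponent transform (assumed).
M_XYZ2OPP = [
    [ 0.297,  0.72, -0.107],
    [-0.449,  0.29, -0.077],
    [ 0.086, -0.59,  0.501],
]

def apply(m, v):
    # 3x3 matrix times 3-vector
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

xyz = apply(M_RGB2XYZ, [1.0, 1.0, 1.0])  # white: Y component is exactly 1.0
opp = apply(M_XYZ2OPP, xyz)
```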
(3) Spatial Filter and Convolution
Convolve a Gaussian kernel filter as the spatial filtering step.
The spatial filter is a weighted sum of Gaussian kernels:

filter = k · Σ_i w_i · E_i,   E_i = k_i · exp( −(x² + y²) / σ_i² )

Gaussian Convolution Kernel parameters:

Filter              | Weight (w_i) | Spread (σ_i)
Achromatic (i = 1)  |  1.00327     | 0.0500
Achromatic (i = 2)  |  0.11442     | 0.2250
Achromatic (i = 3)  | −0.11769     | 7.0000
Red-Green (i = 1)   |  0.61673     | 0.0685
Red-Green (i = 2)   |  0.38328     | 0.8260
Blue-Yellow (i = 1) |  0.56789     | 0.0920
Blue-Yellow (i = 2) |  0.43212     | 0.6451
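The achromatic filter above can be sketched as a weighted sum of unit-sum Gaussians. The weights and spreads are the slide's table values; the negative sign on the third weight follows the standard S-CIELAB parameter set, and the sampling scale and kernel width are illustrative assumptions (spreads are in degrees of visual angle):

```python
import math

WEIGHTS = [1.00327, 0.11442, -0.11769]  # achromatic channel (slide table)
SPREADS = [0.0500, 0.2250, 7.0000]

def gaussian(spread, samples_per_degree=10, half_width=40):
    # 1D Gaussian sampled over +/- half_width samples, normalized to unit sum.
    xs = [x / samples_per_degree for x in range(-half_width, half_width + 1)]
    g = [math.exp(-(d * d) / (spread * spread)) for d in xs]
    total = sum(g)
    return [v / total for v in g]

def achromatic_kernel():
    # Weighted sum of unit-sum Gaussians; the weights themselves sum to 1.0,
    # so the combined kernel also sums to 1.0.
    gs = [gaussian(s) for s in SPREADS]
    n = len(gs[0])
    return [sum(w * g[i] for w, g in zip(WEIGHTS, gs)) for i in range(n)]

k = achromatic_kernel()
```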
(4) Transform the opponent color space to XYZ space

After spatial filtering, transform the opponent color space back to XYZ space:

[X Y Z]^T = M_opp⁻¹ [O1 O2 O3]^T,  with

M_opp⁻¹ = | 0.979 1.535 0.445 |
          | 1.189 0.764 0.135 |
          | 1.232 1.163 2.079 |
(5) Transform XYZ space to Lab space

L* = 116 (Y/Yn)^(1/3) − 16        if Y/Yn > 0.008856
L* = 903.3 (Y/Yn)                 otherwise

a* = 500 [ f(X/Xn) − f(Y/Yn) ]
b* = 200 [ f(Y/Yn) − f(Z/Zn) ]

where f(t) = t^(1/3) if t > 0.008856, and f(t) = 7.787 t + 16/116 otherwise.

Here, Xn, Yn, Zn are the tristimulus values of diffuse reflection under the standard illuminant.
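The equations above translate directly into code; the D65 white point is an assumption, since the slides only say "standard illuminant":

```python
XN, YN, ZN = 0.95047, 1.0, 1.08883  # D65 reference white (an assumption)

def f(t):
    # Piecewise cube-root function used by a* and b*.
    return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

def xyz_to_lab(x, y, z):
    yr = y / YN
    L = 116.0 * yr ** (1.0 / 3.0) - 16.0 if yr > 0.008856 else 903.3 * yr
    a = 500.0 * (f(x / XN) - f(yr))
    b = 200.0 * (f(yr) - f(z / ZN))
    return L, a, b

L, a, b = xyz_to_lab(XN, YN, ZN)  # reference white maps to (100, 0, 0)
```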
(6) CIEDE2000
CIEDE2000 is a color difference formula that corrects the disagreement between measurement results and visual appreciation, a weakness of the color space.
It adds weighting functions (SL, SC, SH) and parametric coefficients (kL, kC, kH) applied to the differences of lightness ΔL′, chroma ΔC′, and hue ΔH′:

ΔE00 = sqrt[ (ΔL′/(kL·SL))² + (ΔC′/(kC·SC))² + (ΔH′/(kH·SH))² + RT·(ΔC′/(kC·SC))·(ΔH′/(kH·SH)) ]

Here, RT is the rotation function.
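A full CIEDE2000 implementation is lengthy; as a simplified stand-in for illustration, the older CIE76 difference is just the Euclidean distance in Lab space (it omits the SL/SC/SH weights and the RT rotation term that CIEDE2000 adds):

```python
import math

def delta_e76(lab1, lab2):
    # CIE76 color difference: Euclidean distance in Lab space.
    # Shown instead of CIEDE2000 for brevity; CIEDE2000 additionally applies
    # the S/k weighting terms and the RT rotation term described above.
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

d = delta_e76((50.0, 0.0, 0.0), (50.0, 3.0, 4.0))  # -> 5.0
```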
2.3. Experimental content
and assessment method
Experimental contents
In this study, we carried out 3 experiments as shown below.
Exp. 1: Subjective image quality evaluation (contrast
enhancement)
Exp. 2: Objective image quality evaluation (Luminance)
Exp. 3: Objective image quality evaluation (CIEDE2000 for images
of the contrast enhancement)
The experimental environment was based on the ITU-R
BT.500-13 recommendation.
Type of display: Newsight 3D Display, 24V type
Display resolution: 1920×1080 (Full HD) pixels
Display resolution at presentation: 1920×960 pixels
Types of 3D CG images: “Museum” (M), “Wonder World” (W)
Image format: Windows bitmap (background region is gray scale)
Encoding and decoding: H.265/HEVC
Quantization Parameter (QP): QP = 0 (ref), 20, 30, 40, 51
Maximum luminance: L = ref, 0.25, 0.50, 1.00, 2.00, 4.00
3D system: parallax barrier method
Number of viewpoints: 8 viewpoints
Visual range: 3H (H: height of the image)
Indoor lighting: none (same as a dark room)
Assessor's position: within ±30 degrees horizontally from the center of the screen
Presentation time: 10 seconds per content
Evaluation method: Double Stimulus Impairment Scale (DSIS)
Repetition: 5 times per assessor
Mechanism of 3D display
In the pixel arrangement, the R, G, and B subpixels of each viewpoint are arranged consistently in the right diagonal direction.
In the 3D display mechanism, images can be seen in 3D in different fields of view by dividing R, G, and B through the parallax barrier in front of the LCD panel.
When we presented a 3D image on the 3D display, the real-time parallax mix was performed automatically by a media player provided by Newsight Corporation.
[7]. Newsight Japan, http://newsightjapan.jp/, accessed May 25, 2016.
(Figure: the parallax barrier placed in front of the LCD panel)
Pixel arrangement in the case of
inputting 8 viewpoints 3D image
(Figure: subpixel arrangement for an 8-viewpoint 3D image; the 1920×960 input is divided among viewpoints 1–8 in diagonal subpixel order.)
1920
960
360
640
240
120
21 3
5
4
7
6
8
8
7
Experimental method (1/2)
We used the DSIS (Double Stimulus Impairment Scale) method as the experimental method.
EBU quality scale of the experiment:
Score 5: Imperceptible
Score 4: Perceptible, but not annoying
Score 3: Slightly annoying
Score 2: Annoying
Score 1: Very annoying
Experimental method (2/2)
When assessors perform subjective evaluation for more than 30 minutes, they suffer accumulated fatigue (visual fatigue, etc.) [ref]. Therefore, we divided each assessor's subjective evaluation time into three periods.
In this experiment, each assessor repeated the evaluation 5 times. The image sequences were presented in random order in each experiment, and we included a rest time of half an hour.
Therefore, we were able to reach an appropriate conclusion.
[ref]. N. Kawabata and M. Miyao, “Statistical Analysis of Questionnaire Survey on the Evaluation of 3D Video Clips,” IEICE Tech. Rep., vol. 113, no. 350, IMQ2013-22, pp. 27–32, 2013 (in Japanese).
Evaluation method
The assessors gave evaluation scores according to five ranks (Mean Opinion Score: MOS).
Here, we defined the limits as shown below:
MOS = 4.5: detection limit (DL)
MOS = 3.5: acceptability limit (AL)
MOS = 2.5: endurance limit (EL)
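The MOS plotted in the next section is the mean of the five-rank scores with a 95% confidence interval; a minimal sketch (the normal-approximation interval is an assumption, since the slides do not state the formula used):

```python
import math

def mos_with_ci(scores, z=1.96):
    # Mean Opinion Score and the half-width of a 95% confidence interval
    # (normal approximation -- an assumption for illustration).
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)
    return mean, half

mos, ci = mos_with_ci([5, 4, 4, 5, 3])  # mos = 4.2
```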
3. Experimental results and
discussion
3.1. Result of Exp. 1
(Subjective quality assessment)
The vertical axis is the MOS, and the horizontal axis is the Quantization Parameter (QP).
The error bars extending up and down from the plotted points in the figure show the 95% confidence interval.
From the experimental results, we can verify the same tendency at L = 0.25 and 4.00.
At L = 1.00, M and W shifted around the acceptability limit “AL,” since they are greater than 3 (except at one QP value).
On the other hand, when QP ≥ 40, we can verify a rapid decline of the assessment value.
3.2. Statistical analysis
by using Support
Vector Machine
Support Vector Machine (SVM)
The Support Vector Machine (SVM) is a method that finds the decision boundary maximizing the margin from the training data.
The predicted value of the SVM is given by

y(x) = Σ_n a_n t_n k(x, x_n) + b

i.e., the weighted sum of the kernel function k(x, x_n) over the support vectors, plus the bias term b.
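The prediction equation can be sketched directly; the RBF kernel and the toy support-vector values below are assumptions for illustration (the slides do not state which kernel was used):

```python
import math

def rbf_kernel(x, x_n, gamma=1.0):
    # Gaussian (RBF) kernel -- an assumed choice; the slides do not name one.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, x_n)))

def svm_predict(x, support_vectors, multipliers, targets, bias):
    # y(x) = sum_n a_n * t_n * k(x, x_n) + b
    return sum(a * t * rbf_kernel(x, x_n)
               for a, t, x_n in zip(multipliers, targets, support_vectors)) + bias

# Toy example: one support vector identical to the query point,
# with a = 1, t = +1, b = 0, so k(x, x) = 1 and y = 1.0.
y = svm_predict([0.5, 0.5], [[0.5, 0.5]], [1.0], [1.0], 0.0)
```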
Sequential Minimal Optimization (SMO)
Solving the sub-quadratic programming problems by chunking, protected conjugate gradients, or decomposition methods is computationally costly, because it requires numerical optimization. SMO is a method that improves on this point.
SMO is one of the SVM training methods used to improve computational efficiency.
SMO obtains the final answer by sequentially solving sub-problems that involve only two Lagrange multipliers.
[ref]. C. M. Bishop, “Pattern Recognition and Machine Learning,” Springer Japan, 2008 (in Japanese).
Weka as a data mining tool
Preprocessing
Classification
Definition: “Precision,” “Recall,” and “F-Measure” values greater than 0.7 are denoted in boldface (in Tables 2 and 3).
Table 4 shows the SVM correctly classified percentage.
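The boldfaced metrics follow the usual definitions; a minimal sketch computing them from per-class counts:

```python
def prf(tp, fp, fn):
    # Precision, Recall, and F-Measure from true positive, false positive,
    # and false negative counts for one class.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return precision, recall, f_measure

p, r, f = prf(tp=8, fp=2, fn=2)      # p = 0.8, r = 0.8, f = 0.8
bold = [v > 0.7 for v in (p, r, f)]  # all True -> boldfaced in the table
```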
From Table 2, for Class (Lum), the Precision of “lum_2” and “lum_ref” and the Recall of “lum_0.25” are more than 0.7; however, the correctly classified percentage in Class (Lum) is less than 40%.
Therefore, we judge it as “not classified.”
July 12, 2016 (C) NORIFUMI KAWABATA AND MASARU MIYAO 56
On the other hand, from Table 3, for
Class (QP), most of “QP_ref,” “QP_51”
are boldface font because of more than
0.7.
Correctly classified percentage in Class
(QP) is also more than 0.7.
Therefore, we can judge as “Classified.
3.3. Result of Exp. 2 and 3
(Objective assessment)
The vertical axis is L* (of the S-CIELAB space) (Exp. 2).
The horizontal axis is the Quantization Parameter (QP), the same as in Exp. 1.
From Fig. 5, as a whole, L* in “Museum” shifted between 75 and 93, while in “Wonder World” it shifted between 24 and 32.
Therefore, the luminance differences among the L values in “Museum” are larger than those in “Wonder World.”
The vertical axis is ΔE00 (CIEDE2000) (Exp. 3).
The horizontal axis is the Quantization Parameter (QP), the same as in Exp. 1.
From Fig. 6, as a whole, the color differences ΔE00 in “Wonder World” are larger than those in “Museum.”
3.4. Discussion
Discussion
From the experimental results of the subjective assessment, when the luminance parameter is higher or lower than the reference, the assessment value tends to be low, and near L = 1 it tends to be high.
From this, we consider that assessment values are affected once the luminance change exceeds a certain amount.
From the SVM results, the correctly classified percentage for “QP” is higher than that for “Lum.”
From this, we consider that it is easier for assessors to perceive a change of the Quantization Parameter than a change of the luminance.
4. Conclusion
Conclusion
From the experimental results of this study, we obtained the finding that assessors are more strongly affected by the coded image quality than by the luminance change.
Acknowledgement
This study was carried out with the support of Grant-in-Aid for Scientific Research (B) 24300046 and a research grant for Ph.D. students at Nagoya University.
Thank you for your kind attention!