3D CG Image Quality Assessment for the Luminance Change
by Contrast Enhancement Including S-CIELAB Color Space
with 8 Viewpoints Parallax Barrier Method
1Norifumi Kawabata and 2Masaru Miyao
1, 2Graduate School of Information Science, Nagoya University
In this paper, we applied contrast enhancement to multi-view 3D images, thereby changing their luminance. Next, we generated composite images from the individual view images coded with H.265/HEVC. We then carried out a subjective evaluation of the generated images. In addition, we objectively evaluated the luminance change by measuring the color difference (CIEDE2000) in the S-CIELAB color space, and examined whether the objective evaluation values were related to the subjective evaluation values. Finally, we analyzed the results statistically using a Support Vector Machine (SVM).
1. Introduction
With the spread of QFHDTV (Quad Full HD TV), 3D still images and videos have reached higher quality than before (FHDTV), and 3D video has attracted attention as an added value. At the same time, research and development on multi-view 3D image quality assessment has been advancing toward the near future.
In the past, there have been many studies on still image and video [6, 7] quality assessment using the S-CIELAB color space [1-4] and CIEDE2000 [1-7], which take into account the spatial frequency characteristics of the human visual system. However, for 3D still images, in which luminance may affect the stereoscopic effect and the assessment, it has not been revealed how much luminance is needed to view a binocular 3D image. The same applies to multi-view 3D images. To enable viewing of multi-view 3D video whenever and wherever, we consider that we first need to carry out both subjective and objective assessment experiments. For the subjective assessment, we need to verify whether assessors can perceive luminance changes and how much luminance is needed. For the objective assessment, we need to verify the adaptive contrast and luminance of multi-view 3D images using the S-CIELAB color space, which considers the spatial frequency characteristics of the human visual system. By quantifying these assessments, we consider that a general assessment becomes possible.
There have been many studies on the subjective evaluation of 3D image and video quality [6-10]. However, studies on the combination of luminance and coded degradation have not been reported previously. ITU-R has standardized the evaluation of binocular 3D video quality; however, a common assessment standard for viewing 3D video, with or without glasses, has not been defined internationally.
In this paper, we first carried out a subjective quality assessment of 3D CG images processed by Adaptive Histogram Equalization (AHE) and then coded by H.265/HEVC, and analyzed the results. Furthermore, we classified the subjective values by maximum luminance using a Support Vector Machine (SVM). On the other hand, we
*Furo-town, Chikusa-ku, Nagoya-city, Aichi, 4648603 Japan
1 e-mail: norifumi@nagoya-u.jp
2 e-mail: miyao@nagoya-u.jp
Figure 1: 3D CG image used in this experiment
analyzed the generated 3D CG images objectively for the luminance component by transforming them to the S-CIELAB color space. Furthermore, we compared the original images with the coded images using CIEDE2000 and analyzed the results objectively.
2. Image quality evaluation experiment
2.1 3D CG image used in this study
In this study, we used the 3D CG images shown in Figure 1, called “Museum” ((a), (c), (e)) and “Wonder World” ((b), (d), (f)), provided free of charge by NICT [11]. Figures 1(a) and (b) show the reference images. Figures 1(c) and (d) show the reference images after AHE processing. Figures 1(e) and (f) show the coded images after AHE processing. First, we placed 8 viewpoint CG cameras in the 3D CG space. Then, we performed the camera work and rendering. As a result, we were able to generate 8 viewpoint images. Then, we applied contrast enhancement by AHE, with the maximum luminance value defined commonly for each viewpoint. After this, we encoded and decoded the generated 3D image with H.265/HEVC. After repeating these processes for all 8
Figure 2: From the loaded images to the S-CIELAB color space and CIEDE2000
Figure 3: Appearance of the 8 viewpoints parallax barrier display [12]
Figure 4: Pixels and the arrangement of viewpoints
Figure 5: Double Stimulus Impairment Scale
Table 1: Gaussian Convolution Kernel

Filter            | Weight (w) | Spread (σ)
Achromatic (i=1)  | 1.00327    | 0.0500
Achromatic (i=2)  | 0.11442    | 0.2250
Achromatic (i=3)  | 0.11769    | 7.0000
Red-Green (i=1)   | 0.61673    | 0.0685
Red-Green (i=2)   | 0.38328    | 0.8260
Blue-Yellow (i=1) | 0.56789    | 0.0920
Blue-Yellow (i=2) | 0.43212    | 0.6451
viewpoints, we composed the images of all 8 viewpoints into one image. As a result, we generated the 3D images.
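The AHE step above can be sketched as a naive tile-wise histogram equalization. This is a simplified stand-in: the paper does not specify its AHE implementation (tile count, clip limit), so the tiling below and the absence of CLAHE-style clipping and inter-tile blending are assumptions.

```python
import numpy as np

def equalize_tile(tile, levels=256):
    # Map gray levels through the tile's cumulative histogram.
    hist = np.bincount(tile.ravel(), minlength=levels)
    cdf = hist.cumsum()
    lut = ((levels - 1) * cdf / cdf[-1]).astype(np.uint8)
    return lut[tile]

def adaptive_histogram_equalization(img, tiles=8):
    # Naive tile-wise AHE: equalize each tile independently
    # (no clip limit or blending between tiles, unlike CLAHE).
    h, w = img.shape
    out = np.empty_like(img)
    th, tw = h // tiles, w // tiles
    for i in range(tiles):
        for j in range(tiles):
            ys = slice(i * th, h if i == tiles - 1 else (i + 1) * th)
            xs = slice(j * tw, w if j == tiles - 1 else (j + 1) * tw)
            out[ys, xs] = equalize_tile(img[ys, xs])
    return out
```

With `tiles=1` this reduces to global histogram equalization, which is useful for checking the lookup-table logic in isolation.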
2.2 S-CIELAB color space and CIEDE2000
2.2.1 S-CIELAB color space
Figure 2 shows the flowchart from loading the images to transforming to the S-CIELAB color space and computing CIEDE2000. First, we transformed from $RGB$ to $XYZ$ space by Equation (1). Next, we transformed from $XYZ$ to the opponent color space $(O_1, O_2, O_3)$ by Equation (2).

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.41245 & 0.35758 & 0.18042 \\ 0.21267 & 0.71516 & 0.07217 \\ 0.01933 & 0.11919 & 0.95023 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix} \quad (1)$$

$$\begin{bmatrix} O_1 \\ O_2 \\ O_3 \end{bmatrix} = \begin{bmatrix} 0.279 & 0.720 & -0.107 \\ -0.449 & 0.290 & 0.077 \\ 0.086 & -0.590 & 0.501 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \quad (2)$$
Then, we carried out the convolution as spatial filtering using the Gaussian kernels in Equations (3), (4), and (5).

Convolution:
$$O_i' = \mathrm{filter}_i * O_i \quad (3)$$

Spatial filter:
$$\mathrm{filter}_i = k \sum_j w_{ij} E_{ij} \quad (4)$$

Gaussian kernel:
$$E_{ij} = k_{ij} \exp\!\left(-\frac{x^2 + y^2}{\sigma_{ij}^2}\right) \quad (5)$$

Here, $k$ and $k_{ij}$ are normalizing constants, and $w_{ij}$ and $\sigma_{ij}$ are the weights and spreads for each opponent channel $i$.
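Equations (4) and (5) with the Table 1 parameters can be sketched as follows. Converting the spread from degrees of visual angle to pixels needs a samples-per-degree value that depends on the display and viewing distance; the value used here, and the unit-sum normalization, are assumptions.

```python
import numpy as np

# (weight, spread in degrees of visual angle) from Table 1
KERNEL_PARAMS = {
    "achromatic":  [(1.00327, 0.0500), (0.11442, 0.2250), (0.11769, 7.0000)],
    "red_green":   [(0.61673, 0.0685), (0.38328, 0.8260)],
    "blue_yellow": [(0.56789, 0.0920), (0.43212, 0.6451)],
}

def scielab_filter(channel, samples_per_degree=23.0, size=31):
    # Eqs. (4)-(5): a sum of Gaussians, each normalized to unit volume,
    # then the whole filter normalized to unit sum (a common convention;
    # samples_per_degree depends on the actual viewing geometry).
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    f = np.zeros((size, size))
    for w, spread in KERNEL_PARAMS[channel]:
        sigma = spread * samples_per_degree          # degrees -> pixels
        g = np.exp(-(x**2 + y**2) / sigma**2)        # Eq. (5)
        f += w * g / g.sum()                         # Eq. (4)
    return f / f.sum()

# Eq. (3) is then a 2D convolution of each opponent channel with its
# filter, e.g. scipy.ndimage.convolve(O1, scielab_filter("achromatic")).
```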
After spatial filtering, we transformed from the opponent color space back to $XYZ$ space by Equation (6). Here, Table 1 shows the Gaussian convolution kernel parameters (weight $w_{ij}$, spread $\sigma_{ij}$).

$$\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0.979 & -1.535 & 0.445 \\ 1.189 & 0.764 & 0.135 \\ 1.232 & 1.163 & 2.079 \end{bmatrix} \begin{bmatrix} O_1' \\ O_2' \\ O_3' \end{bmatrix} \quad (6)$$
Then, we transformed from $XYZ$ to $L^*a^*b^*$ space by Equations (7), (8), and (9):

$$L^* = \begin{cases} 116\,(Y/Y_n)^{1/3} - 16, & Y/Y_n > 0.008856 \\ 903.3\,(Y/Y_n), & \text{otherwise} \end{cases} \quad (7)$$

$$a^* = 500\,\bigl[f(X/X_n) - f(Y/Y_n)\bigr] \quad (8)$$

$$b^* = 200\,\bigl[f(Y/Y_n) - f(Z/Z_n)\bigr] \quad (9)$$

where

$$f(t) = \begin{cases} t^{1/3}, & t > 0.008856 \\ 7.787\,t + 16/116, & \text{otherwise} \end{cases}$$

and $(X_n, Y_n, Z_n)$ is the reference white.
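The matrix transforms of Eqs. (1), (2), and (6) and the Lab conversion of Eqs. (7)-(9) can be sketched as follows. The signs in the opponent matrices are restored so that Eqs. (2) and (6) are mutual inverses, and the D65 white point is an assumption.

```python
import numpy as np

# Eq. (1): RGB -> XYZ
M_RGB2XYZ = np.array([[0.41245, 0.35758, 0.18042],
                      [0.21267, 0.71516, 0.07217],
                      [0.01933, 0.11919, 0.95023]])
# Eq. (2): XYZ -> opponent; Eq. (6): opponent -> XYZ (mutual inverses)
M_XYZ2OPP = np.array([[ 0.279,  0.720, -0.107],
                      [-0.449,  0.290,  0.077],
                      [ 0.086, -0.590,  0.501]])
M_OPP2XYZ = np.array([[0.979, -1.535, 0.445],
                      [1.189,  0.764, 0.135],
                      [1.232,  1.163, 2.079]])

def f(t):
    # Piecewise cube-root function shared by Eqs. (7)-(9)
    t = np.asarray(t, dtype=float)
    return np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16.0 / 116.0)

def xyz_to_lab(xyz, white=(0.95045, 1.0, 1.08892)):
    # Eqs. (7)-(9); the D65 white point above is an assumption.
    xyz = np.asarray(xyz, dtype=float)
    xr, yr, zr = (xyz[..., i] / white[i] for i in range(3))
    L = np.where(yr > 0.008856, 116.0 * np.cbrt(yr) - 16.0, 903.3 * yr)
    a = 500.0 * (f(xr) - f(yr))
    b = 200.0 * (f(yr) - f(zr))
    return np.stack([L, a, b], axis=-1)
```

A quick consistency check is that the two opponent matrices multiply to approximately the identity, and that the reference white maps to $L^* = 100$, $a^* = b^* = 0$.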
Table 2: Experimental Specification

Type of display: Newsight 3D Display, 24V type
Display resolution: 1920x1080 (Full HD) (pixels)
Display resolution in the case of presentation: 1920x960 (pixels)
Types of 3D CG images: “Museum” (M), “Wonder World” (W)
Image format: Windows bitmap
Encoding and decoding: H.265/HEVC
Quantization Parameter (QP): QP = 0 (ref), 20, 30, 40, 51
Maximum luminance: L = ref, 0.25, 0.50, 1.00, 2.00, 4.00
3D system: Parallax barrier method
Number of viewpoints: 8 viewpoints only
Visual range: 3H (H: height of the image)
Indoor lighting: None (same as a dark room)
Assessor's position: Within ±30° horizontally from the center of the screen
Presentation time: 10 sec/content
Evaluation method: Double Stimulus Impairment Scale
Assessor: Each assessor repeats 5 times
Table 3: EBU method

MOS | Quality
5   | Imperceptible
4   | Perceptible, but not annoying
3   | Slightly annoying
2   | Annoying
1   | Very annoying
2.2.2 CIEDE2000 color difference equation
CIEDE2000 is a color difference equation that compensates for the discrepancy between measurement results and visual perception, a weakness of $\Delta E^*_{ab}$. For the $L^*a^*b^*$ values of both the original image and the coded image obtained from Equations (7), (8), and (9) of Sec. 2.2.1, adding the weighting functions $(S_L, S_C, S_H)$ and parametric coefficients $(k_L, k_C, k_H)$ based on the differences of lightness, chroma, and hue, the color difference $\Delta E_{00}$ is represented as

$$\Delta E_{00} = \sqrt{ \left(\frac{\Delta L'}{k_L S_L}\right)^{\!2} + \left(\frac{\Delta C'}{k_C S_C}\right)^{\!2} + \left(\frac{\Delta H'}{k_H S_H}\right)^{\!2} + R_T \left(\frac{\Delta C'}{k_C S_C}\right)\!\left(\frac{\Delta H'}{k_H S_H}\right) } \quad (10)$$

Here, $R_T$ is the rotation function.
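A self-contained implementation of Eq. (10), following the standard CIEDE2000 definitions of the intermediate terms (a', C', h', T, R_T), might look like this; the parametric coefficients kL, kC, kH default to 1 as usual.

```python
import math

def ciede2000(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1, C2 = math.hypot(a1, b1), math.hypot(a2, b2)
    Cbar = (C1 + C2) / 2.0
    # chroma-dependent adjustment of the a axis
    G = 0.5 * (1.0 - math.sqrt(Cbar**7 / (Cbar**7 + 25.0**7)))
    a1p, a2p = (1 + G) * a1, (1 + G) * a2
    C1p, C2p = math.hypot(a1p, b1), math.hypot(a2p, b2)
    h1p = math.degrees(math.atan2(b1, a1p)) % 360 if (a1p or b1) else 0.0
    h2p = math.degrees(math.atan2(b2, a2p)) % 360 if (a2p or b2) else 0.0
    dLp, dCp = L2 - L1, C2p - C1p
    # hue difference, wrapped into (-180, 180]
    if C1p * C2p == 0:
        dhp = 0.0
    elif abs(h2p - h1p) <= 180:
        dhp = h2p - h1p
    elif h2p - h1p > 180:
        dhp = h2p - h1p - 360
    else:
        dhp = h2p - h1p + 360
    dHp = 2 * math.sqrt(C1p * C2p) * math.sin(math.radians(dhp) / 2)
    Lbp, Cbp = (L1 + L2) / 2, (C1p + C2p) / 2
    # mean hue, respecting the circular wrap-around
    if C1p * C2p == 0:
        hbp = h1p + h2p
    elif abs(h1p - h2p) <= 180:
        hbp = (h1p + h2p) / 2
    elif h1p + h2p < 360:
        hbp = (h1p + h2p + 360) / 2
    else:
        hbp = (h1p + h2p - 360) / 2
    T = (1 - 0.17 * math.cos(math.radians(hbp - 30))
           + 0.24 * math.cos(math.radians(2 * hbp))
           + 0.32 * math.cos(math.radians(3 * hbp + 6))
           - 0.20 * math.cos(math.radians(4 * hbp - 63)))
    SL = 1 + 0.015 * (Lbp - 50) ** 2 / math.sqrt(20 + (Lbp - 50) ** 2)
    SC = 1 + 0.045 * Cbp
    SH = 1 + 0.015 * Cbp * T
    dtheta = 30 * math.exp(-(((hbp - 275) / 25) ** 2))
    RC = 2 * math.sqrt(Cbp**7 / (Cbp**7 + 25.0**7))
    RT = -RC * math.sin(math.radians(2 * dtheta))  # rotation function
    return math.sqrt((dLp / (kL * SL)) ** 2 + (dCp / (kC * SC)) ** 2
                     + (dHp / (kH * SH)) ** 2
                     + RT * (dCp / (kC * SC)) * (dHp / (kH * SH)))
```

For two gray samples (zero chroma) only the lightness term survives, which makes a convenient sanity check.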
2.3 Experimental contents
Table 2 shows the conditions and specifications of the experiment. The experimental environment was based on Recommendation ITU-R BT.500-13. We used a parallax barrier display, as shown in Figure 3. Figure 4 shows the pixel arrangement in the parallax barrier display. In the pixel arrangement, the R, G, and B sub-pixels of each viewpoint are arranged consistently in the right diagonal direction. In the 3D display mechanism, images can be seen in 3D
Figure 6: Result of Experiment 1
Table 4: SVM (Lum)

Precision | Recall | Class
0.33      | 0.20   | Lum_0.25
0.36      | 0.32   | Lum_0.5
0.45      | 0.76   | Lum_1
0.40      | 0.16   | Lum_2
0.37      | 0.60   | Lum_4
0.37      | 0.39   | Weighted Avg.
display with a different field of view for each viewing position, by dividing the R, G, and B sub-pixels through the parallax barrier in front of an LCD panel. When we presented a 3D image on the 3D display, real-time parallax mixing was performed automatically by a media player provided by Newsight Corporation. Following the experimental results reported in [9], we defined the CG camera interval.
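The sub-pixel interleaving described above can be sketched as follows. The exact viewpoint-to-sub-pixel mapping of the Newsight display is not given in the paper, so the right-diagonal modulo pattern below is an assumed simplification of Figure 4.

```python
import numpy as np

def interleave_views(views):
    """Compose one parallax-barrier image from n viewpoint images.
    views: array of shape (n, H, W, 3). The viewpoint index advances
    by one per sub-pixel along a row and shifts by one per scan line
    (right diagonal direction) -- an assumed simplification."""
    n, h, w, c = views.shape
    sub = np.arange(w * c).reshape(1, w, c)   # sub-pixel index within a row
    row = np.arange(h).reshape(h, 1, 1)       # per-scan-line diagonal shift
    vmap = (sub + row) % n                    # (h, w, c) viewpoint map
    ys = np.arange(h).reshape(h, 1, 1)
    xs = np.arange(w).reshape(1, w, 1)
    chs = np.arange(c).reshape(1, 1, c)
    return views[vmap, ys, xs, chs]
```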
2.4 Experimental method and assessment method
Figure 5 shows the presentation order and image sequence timing in the experiment (Double Stimulus Impairment Scale: DSIS). First, we presented reference picture A for 10 seconds, followed by the mid-grey picture G for 3 seconds, and then the test condition picture B for 10 seconds. After this, the assessor evaluated the cycle and entered the evaluation value into the computer application (VOTE), which took 10 seconds. This cycle is defined as one set, and we repeated the cycle until the last set. When assessors perform subjective evaluation for more than 30 minutes, they suffer accumulated fatigue (visual fatigue, etc.) [13]; therefore, we divided each assessor's evaluation time into three periods. In this experiment, the assessors entered their evaluation scores using a computer application. Table 3 shows the evaluation standard (European Broadcasting Union: EBU method). The assessors gave evaluation scores on five ranks (Mean Opinion Score: MOS). Here, we defined MOS = 4.5, 3.5, and 2.5 as the “detection limit (DL),” “acceptability limit (AL),” and “endurance limit (EL),” respectively. In this experiment, each assessor repeated the presentation
Table 5: SVM (QP)

Precision | Recall | F-Measure | Class
0.55      | 0.87   | 0.68      | QP_ref
0.50      | 0.04   | 0.07      | QP_20
0.44      | 0.68   | 0.53      | QP_30
0.70      | 0.28   | 0.40      | QP_40
0.78      | 1.00   | 0.88      | QP_51
0.59      | 0.59   | 0.52      | Weighted Avg.
Table 6: Correctly Classified Percentage (Lum and QP)

Class | Total Number of Instances | Correctly Classified Instances | Percentage
Lum   | 130                       | 51                             | 39.2%
QP    | 130                       | 76                             | 58.4%
5 times. The image sequences were presented in random order in each experiment, and we included a rest period of half an hour. In this way, we were able to obtain reliable evaluation results.
3. Results of subjective evaluation experiment
(Experiment 1)
3.1 Results of Experiment 1
Figure 6 shows the result of Exp. 1. The vertical axis shows the MOS, and the horizontal axis shows the quantization parameter (QP). The error bars extending up and down from the plotted points in Figure 6 show the 95% confidence intervals. From the experimental results, we can see that the MOS remains above the “DL” up to QP = 40 when L equals 1.00 and 2.00. Similar tendencies can be seen for L = 1.00 and 2.00, followed by L = 0.50, and then L = 0.25 and 4.00.
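The MOS values and their 95% confidence intervals in Figure 6 are typically computed as below (normal approximation, in the spirit of ITU-R BT.500); the scores here are illustrative, not the paper's data.

```python
import math

def mos_with_ci(scores):
    """Mean opinion score and 95% confidence interval half-width,
    using the normal approximation 1.96 * s / sqrt(n)."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / (n - 1)  # sample variance
    half_width = 1.96 * math.sqrt(var) / math.sqrt(n)
    return mean, half_width

# illustrative five-rank scores (not the paper's data)
scores = [4, 5, 4, 3, 4, 4, 5, 3, 4, 4]
mos, ci = mos_with_ci(scores)
```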
3.2 Statistical Learning by SVM
Table 4 and 5 show SVM (SMO (Sequential Minimal
Optimization) algorithm) provided in Weka 3.6.
“Precision,” “Recall,” and “F-Measure” values greater
than 0.7 are in bold font. As a result, we can see values
greater than 0.7 in “Lum1”, “QP_40”, and “QP_51”. Table
6 shows SVM correctly classified percentage. In class
“Lum”, 39.2% (not classified), and in class “QP”, 58.4%
(not bad classified).
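As a dependency-free sketch of the classification step (the paper used Weka 3.6's SMO), the following trains a minimal linear SVM by sub-gradient descent on the regularized hinge loss; the two-class data here is synthetic stand-in data, not the experimental scores.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Minimal linear SVM (y in {-1, +1}) trained by sub-gradient
    descent on the regularized hinge loss -- a simple stand-in for
    Weka's SMO, not the paper's exact classifier."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                           # margin violators
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```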
4. Results of objective evaluation experiment
(Experiment 2, 3)
Figure 7 shows the results of Experiments 2 and 3. In the case of L = 1.00, we can see a difference in luminance between “Museum” and “Wonder World.” On the other hand, we can also see a difference in CIEDE2000.
5. Discussion
From the experimental results, the luminance of “Museum” is greater than that of “Wonder World”; however, the CIEDE2000 color difference of “Museum” is lower than that of “Wonder World,” and from the results of Exp. 1, we can see a correspondingly higher MOS. From this, we
Figure 7: Results of Experiments 2 and 3 (L = 1.00)
consider that the luminance change affects the assessment of the image contents. From the SVM results, the classification accuracy of SVM (QP) is higher than that of SVM (Lum). We consider that when the luminance change was small, the assessors may not have perceived the image quality degradation, despite taking part in several image quality assessment tests.
6. Conclusion
In this paper, we carried out a subjective quality assessment of 3D CG images processed by AHE and then coded by H.265/HEVC. We then analyzed the results in comparison with the S-CIELAB color space and CIEDE2000. From the experimental results, we can see the relation between the coded degradation and the luminance through the MOS and CIEDE2000.
References
1) G. M. Johnson and M. D. Fairchild, “A Top Down Description of S-CIELAB and CIEDE2000,” Color Research & Application, vol. 28, no. 6, pp. 425-435 (2003).
2) H. Matsumoto, K. Shibata, and Y. Horita, “Full reference image quality evaluation model considering the property of S-CIELAB color space,” IEICE Tech. Rep., vol. 106, no. 356, CQ2006-11, pp. 101-106 (2006) [in Japanese].
3) Y. Miyake and T. Nakaguchi, “Evaluation of Color Image Quality - Present State and Problems -,” IEICE Fundamental Review, vol. 2, no. 3, pp. 29-37 (2009).
4) N. Kawabata and M. Miyao, “3D CG Image Quality Evaluation Including S-CIELAB Color Space with 8 Viewpoints Glassless 3D,” Proceedings of the 2015 IEICE General Conference, A21-2, p. 292 (2015) [in Japanese].
5) Y. Tsuruoka and M. Meguro, “Color Image Segmentation by Using the CIEDE2000 Color-Difference Formula,” IEICE Tech. Rep., vol. 112, no. 348, SIS2012-29, pp. 1-6 (2012) [in Japanese].
6) N. Matsumoto, T. Abe, and H. Haneishi, “Evaluation Method of Degradation in Image Quality of Color Motion Pictures Due to H.264/AVC Codec (I),” Journal of SPIJ, vol. 71, no. 5, pp. 368-374 (2008) [in Japanese].
7) N. Matsumoto, T. Abe, and H. Haneishi, “Evaluation Method of Degradation in Image Quality of Color Motion Pictures Due to H.264/AVC Codec (II) -- Proposal of an Image Quality Evaluation Method That Satisfies Two Requirements,” Journal of SPIJ, vol. 72, no. 4, pp. 306-314 (2009) [in Japanese].
8) Y. Kohda and T. Horiuchi, “13-11 Colorization Image Coding Using Opponent Color Space in S-CIELAB,” ITE Annual Convention 2006, pp. 13-11-1 - 13-11-2 (2006) [in Japanese].
9) N. Kawabata and Y. Horita, “Statistical Analysis and Consideration of Subjective Evaluation of 3D CG Images with 8 Viewpoints Lenticular Lens Method,” The 6th International Conference on Image Media Quality and its Applications (IMQA2013), T1-2, pp. 23-32 (2013).
10) N. Kawabata, M. Miyao, and Y. Horita, “3D CG Image Quality Metrics Including the Coded Degradation by Regions with 8 Viewpoints Parallax Barrier Method,” The 7th International Conference on Image Media Quality and its Applications (IMQA2014), PS-9, pp. 102-105 (2014).
11) NICT, http://www.nict.go.jp/, accessed Apr. 3, 2015.
12) Newsight Japan, http://newsightjapan.jp/, accessed Apr. 3, 2015.
13) N. Kawabata and M. Miyao, “Statistical Analysis of Questionnaire Survey on the Evaluation of 3D Video Clips,” IEICE Tech. Rep., vol. 113, no. 350, IMQ2013-22, pp. 27-32 (2013) [in Japanese].