The Sixth International Workshop on Image Media Quality and its Applications, IMQA2013
September 12-13, 2013, Tokyo, Japan
STATISTICAL ANALYSIS AND CONSIDERATION OF SUBJECTIVE EVALUATION OF
3D CG IMAGES WITH 8 VIEWPOINTS LENTICULAR LENS METHOD
Norifumi KAWABATA†, Yuukou HORITA††
†Graduate School of Information Science, Nagoya University
Furo-cho, Chikusa-ku, Nagoya-city, Aichi, 464-8601 Japan
Email: kawabata.norifumi@b.mbox.nagoya-u.ac.jp
††Graduate School of Science and Engineering, University of Toyama
Gofuku 3190 Toyama-city, Toyama, 930-8555 Japan
Email: horita@eng.u-toyama.ac.jp
ABSTRACT
Recently, it has become increasingly possible to watch 3D videos and movies without glasses. However, there are various stereoscopic display and evaluation methods for glasses-free multi-view 3D image quality, and the display methods have not been unified. In this paper, we presented 3D CG images using the 8-viewpoint lenticular lens method, carried out subjective evaluation with the ACR and DSIS methods, and analyzed the results statistically. The experiments examined whether assessors could view the images comfortably depending on the camera interval and the viewpoints, and whether, and how strongly, they perceived or were annoyed by coded degradation introduced at certain viewpoints.
Index Terms ― Three-Dimensional Computer Graphics,
Lenticular Lens Method, Multi-View, Coded
Degradation, Absolute Category Rating Method, Double
Stimulus Impairment Scale Method
1. INTRODUCTION
With the new glassless 3DTVs, we can enjoy 3D videos without glasses, along with improvements in image quality and sense of presence. At present, however, the form of glassless 3D display, the 3D display format, and the experimental and evaluation methods for 3D videos have not been definitively unified. Evaluation standards and methods for two-viewpoint stereoscopy are certainly defined in ITU-R BT.1438 [1]. The "3DC guideline" of the 3D Consortium addresses 3D video from the standpoint of viewers, content manufacturers, and producers [2]. There is also a report by URCF (Ultra-Realistic Communications Forum) that measured, through subjective evaluation, the fatigue caused by watching 3DTV. However, that report is essentially an experiment with 3D glasses, and its results do not necessarily carry over to glassless or multi-view glassless 3D. Furthermore, the same study examined the relationship between 3D videos and fatigue, but did not directly address 3D video quality. As things stand, no experimental or evaluation method for such image quality evaluation has been defined as an international standard.
In this research, we focus on the camera interval and on coded degradation in a multi-view glassless 3D method [3, 4]. The camera interval and coded degradation have been studied in a number of works for glasses-based 3D and for two-viewpoint glassless 3D. If viewers gaze naturally, they can feel a sense of presence that includes stereoscopic and depth effects. Multi-view glassless 3D has the advantage that the number of viewpoints increases, so the scene can be viewed from various positions. However, it also has its own problems with viewing comfort. As representative factors, we examine in detail the camera interval and the coded degradation at individual viewpoints. Studies such as [5, 6] focused on the relation between crosstalk, camera baseline, and scene content in stereoscopic images, but they did not consider multi-view display or coded degradation at individual viewpoints.
In this paper, we verify viewer satisfaction when viewing 3D CG images, by changing the camera interval or the viewpoints. We then statistically analyze the experimental results obtained in each experiment and consider the relationships and tendencies among them.
2. RELATED RESEARCH
There are many studies on the camera interval and on coded degradation. However, the image or video contents used in past subjective evaluation experiments were mostly natural images or videos, and there were no studies or evaluation experiments focused on 3D CG. At present, for multi-view glassless 3D, 3D CG image and video contents are used more widely than natural contents, so we regard the study of 3D CG as practically important. In addition, previous research on coded degradation did not relate the degradation to individual viewpoints; there has been no example of applying coded degradation on a per-viewpoint basis. With multi-view glassless 3D, users do not always look at the viewpoints that contain the coded degradation, so their evaluation may remain unchanged, and we considered it necessary to verify this by a subjective evaluation experiment.
Regarding multi-view video, Multi-view Video Coding (MVC) has already been standardized internationally, and its algorithm is publicly available. There have also been many studies on viewpoint interpolation and viewpoint generation. However, most of those studies concern the generation technique itself, and the image quality evaluation has not been discussed so far.
3. IMAGE QUALITY EVALUATION
In this study, we carried out the two image quality evaluation experiments below.
(1) Evaluation experiment on camera interval and viewpoints (images and videos)
(2) Evaluation experiment for the case where coded degradation occurs (images)
We call these Experiment 1 and Experiment 2. Furthermore, the image and video evaluation parts of Experiment 1 are referred to as Experiment 1-1 and Experiment 1-2, respectively.
3.1 3D Images and videos used in this experiment
For the images and videos in Experiment 1, we used frames of the 3D CG content "Museum", shown in (a)-(f) of Figure 1, provided free of charge by NICT (National Institute of Information and Communications Technology) [7]. We generated the CG content with an Autodesk Maya plugin that supports the lenticular lens display used in this study. We set up 8 CG cameras for the 8 viewpoints and edited the animation, camera work, and rendering. Finally, we generated 8-viewpoint images over the selected frame range (450 frames, 30 fps, 15 seconds).
In Experiment 2, we used images from "Museum" and "Wonder World", shown in (g)-(j) of Figure 2 and also provided by NICT, and composed 8-viewpoint images containing coded degradation at selected viewpoints.
3.2 Specification in this experiment
3.2.1 Experiment 1
Table 1 shows the experimental conditions of Experiment 1. The surface of the lenticular lens display [8], shown in Figure 3, is covered with semi-cylindrical lenses, each inclined to the left. As shown in Figure 4, the R, G, and B subpixels are arranged in parallel along this leftward incline, and the pixels of the 8 viewpoints are arranged from the left in ascending order. To compose the 8-viewpoint images, we generated a single composite image from the 8 views according to this pixel arrangement algorithm and displayed it on the lenticular lens display. We fixed the number of viewpoints at 8 and carried out experiments with 7 camera interval parameter values: 0.00, 0.15, 0.25, 0.35, 0.50, 0.75, and 1.00. A sketch of this kind of subpixel interleaving is shown below.
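The following is a minimal, illustrative sketch of this kind of per-subpixel view interleaving. It assumes a simplified slanted mapping in which the viewpoint index advances with each R, G, B subpixel and shifts by one per row; the actual slant angle and subpixel offsets of the Alioscopy panel differ, and the function name is hypothetical.

```python
import numpy as np

def interleave_views(views):
    """Compose one lenticular frame from 8 viewpoint images.

    views: list of 8 arrays of shape (H, W, 3), viewpoints 1..8 from the left.
    Illustrative only: the real slant and subpixel offsets of the display
    used in the experiments differ from this simplified mapping.
    """
    h, w, _ = views[0].shape
    n = len(views)
    out = np.empty((h, w, 3), dtype=views[0].dtype)
    for y in range(h):
        for x in range(w):
            for c in range(3):            # R, G, B subpixels
                # viewpoint index advances subpixel by subpixel and
                # shifts by one with every row (the leftward slant)
                v = (3 * x + c + y) % n
                out[y, x, c] = views[v][y, x, c]
    return out

# tiny demonstration with 8 random 4x6 viewpoint images
demo_views = [np.random.randint(0, 256, (4, 6, 3), dtype=np.uint8) for _ in range(8)]
print(interleave_views(demo_views).shape)   # (4, 6, 3)
```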
Figure 1: 3D CG contents used in experiment 1
Figure 2: 3D CG contents in experiment 2
Figure 3: Surface of 3D display used in our
experiments
Figure 4: Pixels and viewpoints arrangement of 3D
display used in our experiments
Table 1: Main specifications of the subjective evaluation experiments

Display: Alioscopy 42V type
Image size: 1920x1080 pixels (Full HD)
Kind of images: Windows bitmap
Kind of videos: Uncompressed AVI
3D glasses: None
3D method: Lenticular lens method
Number of viewpoints: 8 viewpoints
Viewing distance: 3H (H: height of image)
Indoor lighting: None (darkroom)
Assessor's position: Within ±30° horizontally from the center of the screen
Experiment 1: Time: 15 sec/content; Method: ACR; Assessors: 15 people (images), 8 people (videos)
Experiment 2: Time: 10 sec/content; Method: DSIS; Assessors: 10 people
Here, the camera’s interval parameter value is a ratio. For
example, if parameter value is “𝑝”, when 𝑝 equals 1.00,
the camera’s interval of 8 viewpoints departed 2.24
(5.69cm) in each viewpoints. When 𝑝 equals 0.00, all
cameras are between 4 view and 5 view in case of
𝑝 =1.00. We prepared images of 105 ways because of
number of frames are 15 ways and number of parameters
are 7 ways.
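As a small illustration of this parameterization, the sketch below computes hypothetical camera positions from p and reproduces the stimulus count; the 5.69 cm baseline spacing is taken from the text above, while the coordinate convention (centering midway between views 4 and 5) is our assumption.

```python
BASE_INTERVAL_CM = 5.69   # spacing between adjacent cameras at p = 1.00

def camera_positions(p, n_views=8):
    """Hypothetical camera x-positions (cm) for the 8 CG cameras.
    p = 0.00 collapses all cameras onto the midpoint between views 4 and 5."""
    center = (n_views - 1) / 2.0
    return [(i - center) * p * BASE_INTERVAL_CM for i in range(n_views)]

params = [0.00, 0.15, 0.25, 0.35, 0.50, 0.75, 1.00]
n_frames = 15
print(n_frames * len(params))      # 105 still-image stimuli in Experiment 1-1
print(camera_positions(0.25))      # positions for the interval used in Experiment 2
```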
Table 1 also shows the experimental conditions for the videos of Experiment 1. The experimental environment for the videos was the same as in Experiment 1-1. In the video evaluation experiment, we used 2 viewpoint settings, 8 viewpoints and 2 viewpoints, and, as in Experiment 1-1, 7 camera interval values. We therefore prepared 14 video sequences from the combinations of viewpoint setting and parameter value.
3.2.2 Experiment 2
Table 1 also shows the experimental conditions of Experiment 2. We followed the ITU-R BT.500-13 recommendation [9] for the experimental environment. Based on the results of Experiment 1, we fixed the CG camera interval parameter at 0.25; the definition of the parameter is the same as in Experiment 1. We used images in which coded degradation (QS = 10) was introduced at 1 or 2 viewpoints. In total, we prepared 72 stimuli across the two contents (1 degraded viewpoint: 8 patterns, 2 degraded viewpoints: 28 patterns per content). The sketch below reproduces these pattern counts.
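The pattern counts follow directly from the combinatorics of choosing degraded viewpoints; the short sketch below (illustrative only) reproduces them.

```python
from itertools import combinations

views = range(1, 9)                                  # 8 viewpoints

one_view_patterns = [(v,) for v in views]            # 8 patterns, e.g. "1"
two_view_patterns = list(combinations(views, 2))     # C(8,2) = 28 patterns, e.g. "12"

per_content = len(one_view_patterns) + len(two_view_patterns)   # 36
print(per_content * 2)                               # 72 stimuli over the two contents
```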
Figure 5: Absolute Category Rating method
Figure 6: Double Stimulus Impairment Scale method
Table 2: ITU-R quality scale of Experiment 1

Score 5: Very good
Score 4: Good
Score 3: Fair
Score 2: Bad
Score 1: Very bad
Table 3: EBU quality scale of Experiment 2

Score 5: Imperceptible
Score 4: Perceptible, but not annoying
Score 3: Slightly annoying
Score 2: Annoying
Score 1: Very annoying
3.3 Experimental method and evaluation method
3.3.1 Experiment 1
Figure 5 shows the presentation order and timing in Experiment 1. In both experiments (images and videos), we presented the image or video content for 15 seconds and then gave the assessor 15 seconds to fill in the evaluation sheet, repeating this cycle throughout the session. As the evaluation method, we used the ACR method prescribed in ITU-T Recommendation P.910 [10].
Figure 7: MOS for images in Experiment 1-1
Figure 8: MOS for frame images in Experiment 1-1
Figure 9: MOS for videos in Experiment 1-2
Figure 10: Correlation coefficient for camera's interval or evaluation item
Table 2 shows the ITU-R quality scale: assessors rated each stimulus on this 5-grade scale, from which we calculated the MOS (Mean Opinion Score). There were 5 evaluation items: "stereoscopic effect", "encirclement effect", "texture", "absence of double images", and "general evaluation" [11]. For "encirclement effect", the image and video contents were evaluated without regard to the size of the display.
3.3.2 Experiment 2
Figure 6 shows the presentation order and timing in Experiment 2. We first showed the reference image for about 10 seconds, then a mid-grey image for about 3 seconds, then the evaluation image for about 10 seconds, and finally gave the assessor about 10 seconds to record a score (DSIS method). This was treated as one set, and the sets were repeated. When a session exceeds 30 minutes, the assessor's fatigue accumulates and may distort the experimental results [11], so we divided the experiment into two sessions. We also included, twice, sets in which the evaluation image was identical to the reference image. Assessors entered their scores through a computer application. Table 3 shows the EBU quality scale: assessors rated on this 5-grade scale, and we calculated the MOS. We defined MOS values of 4.5, 3.5, and 2.5 as the "detection limit", "acceptability limit", and "endurance limit", respectively [11].
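These three limits are used throughout the analysis of Experiment 2; a minimal sketch of the corresponding judgment is given below (illustrative only, with hypothetical MOS values).

```python
def dsis_judgment(mos):
    """Classify a MOS value against the limits defined above."""
    if mos >= 4.5:
        return "above the detection limit"       # degradation effectively invisible
    if mos >= 3.5:
        return "above the acceptability limit"
    if mos >= 2.5:
        return "above the endurance limit"
    return "below the endurance limit"

for mos in (4.6, 3.8, 2.7, 2.1):                 # hypothetical MOS values
    print(mos, "->", dsis_judgment(mos))
```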
4. EXPERIMENTAL RESULTS AND
CONSIDERATION (EXPERIMENT 1)
4.1 Experimental results with ACR method
Figure 7 shows the MOS values for the images of Experiment 1-1 plotted against the camera interval parameter, and Figure 8 shows the MOS values of Experiment 1-1 plotted against the frame number. Figure 9 shows the MOS values for the videos of Experiment 1-2 plotted against the camera interval parameter. The error bars extending up and down from the plotted points indicate 95% confidence intervals. We regard a MOS above 3.5 as "acceptable", a MOS above 2.5 and up to 3.5 as "a little acceptable", and a MOS of 2.5 or below as "not acceptable". A sketch of the MOS and confidence interval computation is shown below.
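The following is a minimal sketch of this computation, assuming the usual t-distribution confidence interval for the mean; the score vector is hypothetical.

```python
import numpy as np
from scipy import stats

def mos_with_ci(scores, confidence=0.95):
    """Mean opinion score and 95% confidence-interval half-width for one stimulus,
    assuming the usual t-distribution interval for the mean."""
    scores = np.asarray(scores, dtype=float)
    mos = scores.mean()
    half = stats.sem(scores) * stats.t.ppf((1 + confidence) / 2, len(scores) - 1)
    return mos, half

# hypothetical scores from the 15 assessors for one stimulus
scores = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4, 3, 4, 4, 3, 4]
mos, half = mos_with_ci(scores)
print(f"MOS = {mos:.2f} +/- {half:.2f}", "->", "acceptable" if mos > 3.5 else "not above 3.5")
```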
4.1.1 Images
For the images of Experiment 1-1 (Figure 7), once the parameter value exceeded 0.75, the MOS hardly changed for any evaluation item and stayed below 2.5, i.e. "not acceptable". At a parameter value of 0.00, "stereoscopic effect" had the lowest MOS of all items; it increased up to a parameter value of 0.25, but beyond 0.35 it stayed around 2.5. For "encirclement effect", the MOS was below 2.5 only at a parameter value of 0.00; it increased up to 0.15 and thereafter stayed around 3.0. At a parameter value of 0.00, "texture" had the highest MOS of all items; it decreased up to a parameter value of 0.75 and hardly changed after that. The MOS for "double image" changed the most of all evaluation items: at a parameter value of 0.00 it was nearly 5.0, it dropped rapidly between 0.00 and 0.15, and above 0.35 it was "not acceptable". "General evaluation" behaved similarly to "texture".
4.1.2 Images by frame
Next, looking at the MOS per frame for each camera interval parameter (Figure 8): at a parameter value of 0.00, the MOS for "double image" was high, between 4.5 and 5.0; for "texture" and "general evaluation" it was between 2.75 and 3.75, which we judged "a little acceptable"; and for "stereoscopic effect" and "encirclement effect" it was below 2.5 in all frames, which we judged "not acceptable". The items thus split clearly into three groups. The MOS increased rapidly at parameter values of 0.15, 0.25, and 0.35 for frames between 150 and 210. At parameter values of 0.75 and 1.00, the MOS was below 3.0 for all evaluation items.
Regarding the error bars, the error range of each evaluation item was large for frames between 150 and 180. In particular, at parameter values of 0.15 and 0.25, the upper error bound of the "double image" MOS exceeded 4.5, while the lower error bound of the "encirclement effect" MOS fell below 1.5, showing a clear difference among the evaluation items.
4.1.3 Videos
For Experiment 1-2, comparing the 8-viewpoint and 2-viewpoint videos in Figure 9: with 8 viewpoints the MOS of every evaluation item dropped below 2.5, whereas with 2 viewpoints the MOS stayed above 2.5 at all parameter values. For both 8 and 2 viewpoints, "double image" changed the most. With 8 viewpoints, its MOS decreased rapidly between parameter values of 0.00 and 0.15, and at 0.50 it fell below 2.5, i.e. "not acceptable". With 2 viewpoints, its MOS stayed above 4.0 up to a parameter value of 0.50 and decreased between 0.50 and 0.75, but it was still above 3.0, i.e. "a little acceptable", at a parameter value of 1.00. "Stereoscopic effect" showed higher MOS at parameter values of 0.15 and 0.25 with 8 viewpoints, while with 2 viewpoints it hardly changed. "Encirclement effect" showed higher MOS with 2 viewpoints than with 8, but did not change rapidly in either case. The MOS for "texture" decreased gently as the parameter increased. For "general evaluation", the MOS decreased more rapidly with 8 viewpoints than with 2 between parameter values of 0.35 and 0.50, and the difference between its minimum and maximum reached 2.0. Comparing the two cases, the 2-viewpoint MOS was more than 1.0 higher than the 8-viewpoint MOS.
4.2 Analysis of correlation coefficient
Figure 10 shows line graphs of the correlation coefficients for "Museum", computed per camera interval parameter value and per evaluation item. The figure is divided into two comparisons: 8-viewpoint images versus 8-viewpoint videos (left), and 8-viewpoint videos versus 2-viewpoint videos (right). In this paper, we judge values as "correlated" when the absolute value of the correlation coefficient is 0.8 or more, and as "not correlated" otherwise. First, comparing the 8-viewpoint images and videos, there is positive correlation at parameter values of 0.00 and of 0.35 or more, and negative correlation at 0.25. Per evaluation item, "texture", "double image", and "general evaluation" have correlation coefficients of 0.94, 0.93, and 0.82, respectively; being above 0.8, they are judged "correlated". The correlation coefficient of "stereoscopic effect" is 0.69, so it is not correlated, and that of "encirclement effect" is −0.05, so it is not correlated at all.
Next, we compared the correlation coefficients between the 8-viewpoint and 2-viewpoint videos. Over the camera interval parameter (Figure 10), the correlation behaves in the same way as in the comparison of 8-viewpoint images and videos for parameter values between 0.00 and 0.35. At a parameter value of 0.50 there is negative correlation, and at 0.75 and 1.00 the coefficients are −0.46 and 0.33, respectively, so they are judged "not correlated". Per evaluation item, the correlation coefficients of "stereoscopic effect" and "encirclement effect" are 0.11 and 0.22, not correlated at all, and that of "texture" is 0.74, also not correlated. For "double image" and "general evaluation", the coefficients are 0.86 and 0.92, so they are "correlated", as in the comparison of the 8-viewpoint images and videos. A sketch of this correlation judgment is shown below.
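The correlation judgment used in this section can be sketched as follows; the MOS vectors are hypothetical and the 0.8 threshold is the one stated above.

```python
import numpy as np

def judge_correlation(x, y, threshold=0.8):
    """Pearson correlation, judged 'correlated' only when |r| >= threshold."""
    r = np.corrcoef(x, y)[0, 1]
    return r, ("correlated" if abs(r) >= threshold else "not correlated")

# hypothetical per-item MOS values: 8-viewpoint images vs. 8-viewpoint videos
mos_images = [2.1, 2.9, 3.4, 2.6, 2.8]
mos_videos = [1.9, 2.7, 3.1, 2.3, 2.5]
print(judge_correlation(mos_images, mos_videos))
```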
4.3 Consideration
As a result, the per-frame MOS in Experiment 1 was divided clearly into the case of parameter value 0.00 and the other cases. We think this is related to the camera speed implied by the camera interval and to the content. The per-frame MOS in Experiment 1 also increased rapidly at parameter values of 0.15, 0.25, and 0.35 for frames between 150 and 210. The animation follows an elliptical camera path, and we think that where the path curves sharply the scene changes rapidly, which affects the scores. Furthermore, for the "encirclement effect" item, the MOS hardly changed with either the frame number or the camera interval; we therefore think this item was affected by the content of the images and videos rather than by the frame number or camera interval.

Figure 11: Coded degradation of any 1 viewpoint (by contents)
5. EXPERIMENTAL RESULTS AND
CONSIDERATION (EXPERIMENT 2)
5.1 Experimental results with DSIS method
Figures 11 and 12 show line graphs of the MOS for coded degradation at 1 viewpoint and at 2 viewpoints, respectively, from the results of Experiment 2. The error bars extending up and down from the plotted points indicate 95% confidence intervals.
5.1.1 1-viewpoint
In Figure 11, the vertical axis is the MOS and the horizontal axis is the coded degradation pattern for single-viewpoint degradation; for example, if the first viewpoint contains the coded degradation, the pattern is written "1".
When the coded degradation is at 1 viewpoint, the MOS of "Wonder World" is higher than that of "Museum", and the error bars of "Museum" are larger than those of "Wonder World". Looking at each content, "Wonder World" exceeds MOS = 4.00 at almost all viewpoints and exceeds the acceptability limit at all of them. In particular, "1", "5", and "6" clearly exceed the detection limit, so we can judge that assessors could not recognize the degradation within the evaluation time. On the other hand, "Museum" exceeds the endurance limit at all viewpoints, but falls below the acceptability limit at all except "5", "6", and "8".
Figure 12: Coded degradation of any 2 viewpoints
(all)
Figure 13: Coded degradation of any 2 viewpoints
(by continuous)
Figure 14: Coded degradation of any 2 viewpoints
(by discontinuous)
Figure 15: Pattern of coded degradation
Figure 16: Correlation coefficient of coded
degradation pattern
5.1.2 2-viewpoints (All)
Next, we focus on Figure 12. The vertical axis is the MOS and the horizontal axis is the coded degradation pattern; for example, if viewpoints 1 and 2 contain the coded degradation, the pattern is written "12". When the two degraded viewpoints are N positions apart, we label the group "N diff".
When the coded degradation is at 2 viewpoints, "Wonder World" exceeds the detection limit only for "15", but, as with 1 degraded viewpoint, it exceeds the acceptability limit in all cases. In "Museum", the patterns "56" and "57" are above the acceptability limit, most of the others lie around the endurance limit, and "23", "24", and "35" fall below it. As mentioned above, regardless of which viewpoints are degraded, "Museum" has larger error bars than "Wonder World"; the majority of its MOS values are below the acceptability limit and lie around the endurance limit (compared with the 1-viewpoint case). Averaged over the contents, only the MOS of "57" in Figure 12 exceeds MOS = 4.00, and the patterns above the acceptability limit are all ones in which the degraded viewpoints are discontinuous.

With 2 degraded viewpoints, patterns that include viewpoint "1", "7", or "8" tend to have higher MOS in general. Regarding the error bars, the range for 2 degraded viewpoints is larger than for 1. In particular, for "23" and "24" in "Museum", the upper error bound exceeds the acceptability limit while the lower error bound falls below the endurance limit and even below MOS = 2.00.
5.1.3 2-viewpoints (Continuous and discontinuous)
The 2-viewpoint degradation cases can be divided into "continuous" and "discontinuous" viewpoint pairs (Figures 13 and 14). For continuous viewpoints (Figure 13), the MOS of "Wonder World" is above 4.00 except for "23" and "34", and the MOS of "Museum" is above 3.00. Both "Museum" and "Wonder World" take their minimum value at "23".
Figure 17: Dendrogram about assessor
Table 4: Relationship between the dendrogram of Figure 17 and the assessors

No. 1: General public
No. 2: Expert
No. 3: Expert
No. 4: A little expert
No. 5: General public
No. 6: General public
No. 7: A little expert
No. 8: General public
No. 9: General public
No. 10: A little expert
For discontinuous viewpoints (Figure 14), the difference between the maximum and minimum MOS of "Museum" is 1.8, 0.8, 0.5, 0.3, and 0.5 for "2 diff" through "6 diff", respectively; beyond "4 diff" the MOS no longer changes noticeably (Figure 15). The MOS of "Wonder World", on the other hand, is above the acceptability limit regardless of the separation between the degraded viewpoints, and shows no clear difference from the continuous-viewpoint case (Figure 15).
5.2 Analysis of correlation coefficient
Figure 16 shows a line graph of the correlation coefficients between "Museum" and "Wonder World". The vertical axis is the correlation coefficient, with values in parentheses, such as (0.20), denoting negative values, i.e. −0.20; the horizontal axis is the coded degradation pattern. In Figure 16, the correlation coefficients of all patterns are below 0.8, so all can be judged "not correlated". However, the coefficients for the continuous-viewpoint patterns and for "4 diff" are 0.79 and 0.74, so these patterns are closer to "correlated" than the others. In particular, for 2-viewpoint degradation with "3 diff" the coefficient is 0.07, so there is no correlation at all. The overall correlation coefficient is 0.48, so "Museum" and "Wonder World" can be judged "not correlated" in general.
5.3 Comparison of assessors by cluster analysis
Figure 17 compares the assessors in Experiment 2 using a dendrogram computed with the group average method, and Table 4 gives the correspondence between the dendrogram and the assessors. The vertical axis of Figure 17 is the Euclidean distance and the horizontal axis is the assessor. We classified the assessors in Table 4 as follows: assessors with more than 2 years of research experience in image media quality are "Expert" (hereafter A), those with 1 to 2 years are "a little expert" (hereafter B), and the others are "General public" (hereafter C). A sketch of this clustering procedure is shown below.
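A minimal sketch of this clustering procedure is given below, assuming group average (UPGMA) linkage on Euclidean distances between the assessors' score vectors; the score matrix is random placeholder data, not the paper's raw scores.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# rows: assessors 1..10, columns: their scores over the evaluated stimuli
# (placeholder values; the raw scores from Experiment 2 are not reproduced here)
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(10, 36)).astype(float)

# group average method (UPGMA) on Euclidean distance, as used for Figure 17
Z = linkage(scores, method="average", metric="euclidean")

# cut the tree into two groups, mirroring the two large clusters in Figure 17
print(fcluster(Z, t=2, criterion="maxclust"))
```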
From Figure 17, the tree divides broadly into two groups, {8, 10, 1, 6} and {5, 9, 3, 2, 4, 7}. According to Table 4, the former group contains one "B" and three "C", while the latter contains two "A", two "B", and two "C"; that is, 4 of its 6 members are "B" or above. Accordingly, we can regard the former group as close to "C" and the latter group as close to "A".

Looking further at the latter group, it can be divided into {5, 9} and {3, 2, 4, 7}: {5, 9} consists of two "C", and {3, 2, 4, 7} of two "A" and two "B". We can therefore distinguish {5, 9} as close to "C" and {3, 2, 4, 7} as close to "A". However, since {5, 9} was first grouped with the "A"-like cluster, we can regard it as the more "B"-like part of "C". Consequently, the cluster analysis lets us separate the assessors into groups close to "C", "B", and "A".
In our experiment, the number of assessors was only 10, which is small, so we cannot directly compare "A" with "C". However, the results suggest that if we increase the number of assessors and compare "A" with "C", including the error bars of each evaluation item, some differences will probably appear.
5.4 Consideration
From Figure 11, the line graphs of "Museum" and "Wonder World" are similar in the case of coded degradation at 1 viewpoint. From this, we see a tendency, independent of the content, for the MOS of "3" to be the lowest and the MOS of "5" and "6" to be the highest when a single viewpoint is degraded. For coded degradation at 2 viewpoints, the MOS of "23" and "24" was below the endurance limit in "Museum" and around the acceptability limit in "Wonder World", and both contents took their minimum values at these points. We think this result reflects the fact that the image the assessor saw first contained the coded degradation, since these patterns correspond to views near the middle of the screen. Regarding the error ranges, the error of "Museum" is larger than that of "Wonder World". Because the background of "Wonder World" is dark, as mentioned before, it was difficult for assessors to confirm the coded degradation within the evaluation time, so its error bars were short. In addition, the coded degradation in the textured parts of "Museum" was fairly easy for assessors to recognize; we therefore think these factors widened the gap between assessors who perceived the coded degradation and those who did not.
6. CONCLUSION
In this paper, we carried out subjective evaluation experiments and analyzed the experimental results statistically.

From Experiment 1, we found that the results divide according to whether the stimuli are images or videos, whether they have 8 or 2 viewpoints, and whether the objects in the video content move rapidly or slowly. In 3D CG videos, even small changes in the parameter value affected how assessors viewed the content. With 8-viewpoint images and videos, assessors could view the scene from various viewpoints; nevertheless, the MOS for 8-viewpoint images and videos was generally lower than that for 2-viewpoint images and videos. Among the evaluation items, we confirmed in particular that "encirclement effect" showed little change as an evaluation item for presence.
From Experiment 2, in the case of coded degradation at 1 viewpoint, we confirmed a tendency, independent of the image content, for the MOS of "3" to be the lowest and the MOS of "5" and "6" to be the highest. We also confirmed that the MOS was at least above the acceptability limit, and in some cases above the detection limit. In the case of coded degradation at 2 viewpoints, the MOS was at least above the endurance limit, and in some cases above the acceptability limit. By cluster analysis of the assessors, we were able to classify them into groups corresponding to "A" and "C".
In the future, we need an accurate definition of the physical parameters by which image and video quality is judged. In this paper, we mainly discussed image quality evaluation focused on presence and on per-viewpoint coded degradation. By establishing physical parameters based on visual information or bio-information, and by increasing the variety of evaluation contents, we believe image quality evaluation can be carried out from the user's point of view, independently of the content.
7. REFERENCES
[1] ITU-R BT.1438, http://www.itu.int/rec/R-REC-BT.1438/en
[2] 3DC Guideline, http://www.3dc.gr.jp/jp/scmt_wg_rep/3dc_guideJ_20111031.pdf
[3] Liyuan Xing, Junyong You, Touradj Ebrahimi, and Andrew Perkis, "Assessment of Stereoscopic Crosstalk Perception", IEEE Transactions on Multimedia, Vol. 14, Issue 2, 2012.
[4] Liyuan Xing, Junyong You, Touradj Ebrahimi, and Andrew Perkis, "An objective metric for assessing quality of experience on stereoscopic images", Multimedia Signal Processing (MMSP), 2010.
[5] Norifumi Kawabata, Keiji Shibata, Yasuhiro Inazumi, and Yuukou Horita, "Video Quality Evaluation of Camera's Interval and Number of Viewpoints in Three-Dimensional CG Videos with 8 Viewpoints Lenticular Lens Method", IEICE Technical Report, IE2011-152, MVE2011-114, pp. 109-114 (in Japanese).
[6] Norifumi Kawabata, Keiji Shibata, Yasuhiro Inazumi, and Yuukou Horita, "Image Quality Evaluation of Three-Dimensional CG Images with 8 Viewpoints Lenticular Lens Method for Occurring Coded Degradation at Certain Viewpoints", IEICE Technical Report, IMQ2012-17, pp. 17-22 (in Japanese).
[7] NICT, http://www.nict.go.jp/
[8] Alioscopy, http://www.alioscopy.com/
[9] ITU-R BT.500-13, http://www.itu.int/rec/R-REC-BT.500/en
[10] ITU-T P.910, http://www.itu.int/rec/T-REC-P.910/en
[11] Tetsuo Mitsuhashi, Toyohiko Hatada, and Sumio Yano, Image and Visual Information Science, ITE / CORONA PUBLISHING CO., LTD., 2009 (in Japanese).