Fig 4
Preference scores for different transition types across all scenes. The scale is a z-score: the group mean is at 0, the y-axis value represents multiples of the group standard deviation (the discrimination dispersion of the perceived image quality), and higher scores indicate stronger preference by participants.
Source publication
Emerging interfaces for video collections of places attempt to link similar content with seamless transitions. However, the automatic computer vision techniques that enable these transitions have many failure cases which lead to artifacts in the final rendered transition. Under these conditions, which transitions are preferred by participants and w...
Contexts in source publication
Context 1
... we calculate the z-score of the logit as a psychometric scale by using the inverse cumulative distribution function. See Figure 4 for the rescaled participant responses. ...
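As a hedged illustration of this rescaling step, the sketch below applies the standard normal inverse cumulative distribution function (the probit) to preference proportions to obtain z-scores, which is one common reading of the procedure described here. The function name `preference_zscore` and the clipping bounds are assumptions for illustration, not the authors' actual pipeline.

```python
# A minimal sketch, assuming pairwise preference data is available as
# win proportions per transition. Names and bounds are hypothetical.
import numpy as np
from scipy.stats import norm

def preference_zscore(win_proportions):
    """Map preference proportions to a z-score scale via the inverse
    cumulative distribution function of the standard normal (probit),
    as in psychometric scaling."""
    # Clip to avoid infinite z-scores at proportions of exactly 0 or 1.
    p = np.clip(np.asarray(win_proportions, dtype=float), 1e-3, 1 - 1e-3)
    return norm.ppf(p)  # inverse CDF: 0.5 maps to 0, the group mean

# Example: a transition chosen 73% of the time maps to a positive
# z-score (~0.61 standard deviations above the mean).
print(preference_zscore([0.5, 0.73, 0.27]))
```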
Context 2
... Table II for pairwise significance tests of psychophysical rescales at the 95% confidence level. Figure 4 shows the result of the preference scores across all scenes and view changes, with Tables IIa–IIc showing significance values and whether these cross a positive/negative threshold of p-value < 0.05. Figure 5 plots preference scores into grouped bars. Our perceptual scale variances are computed across Table II. ...
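The pairwise tests mentioned here could, under an assumed layout of per-participant scores, look like the following sketch using `scipy.stats.ttest_ind`; the `scores` dictionary and its values are illustrative placeholders, not the study's data.

```python
# A hedged sketch of pairwise significance testing at the 95% level,
# assuming per-participant preference scores per transition type.
from itertools import combinations
from scipy.stats import ttest_ind

scores = {
    "warp":    [0.61, 0.55, 0.70, 0.58],   # illustrative values only
    "plane":   [0.10, 0.02, -0.05, 0.08],
    "full_3d": [0.66, 0.72, 0.59, 0.64],
}

for a, b in combinations(scores, 2):
    t, p = ttest_ind(scores[a], scores[b])  # two-sample t-test
    verdict = "significant" if p < 0.05 else "n.s."
    print(f"{a} vs {b}: t={t:.2f}, p={p:.3f} ({verdict})")
```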
Context 3
... results show that there is an overall preference for full 3D static transitions (Figure 4). This is not surprising as the video frames are projected onto actual 3D geometry and this provides the strongest spatial cues of all transitions. ...
Context 4
... warp is preferred for slight view changes, and is significantly better than plane and APC transitions when considering slight view changes only (p-value < 0.05, t-test, Table IIb). While it is not significantly preferred over full 3D transitions, opinion on the warp transition in slight cases was consistent, with a very small variance and the highest mean score of any transition (Figure 4). The static 3D transition is among the top 3 transitions for all sets, and overall is significantly better than all other transitions for considerable view changes (p-value < 0.05, t-test, Table IIc). ...
Context 5
... factors may have contributed to the preference of participants, but we find with significance that slight vs. considerable view changes are a key factor. Warp transitions are the perceptually preferred transition type for slight view changes: warps are significantly preferred over all other transitions except the full 3D transitions and, vs. full 3D, warps have a higher perceptual score and a much smaller variance (Figure 4). As such, our results indicate employing warps if the view rotation is slight, that is, equal to or less than 10°. ...
Similar publications
Considering a spacecraft that encounters a particle-laden environment, such as dust particles kicked up over the regolith by the jet of the landing thruster, the high-speed flight of a projectile in such an environment was experimentally simulated using a ballistic range. At high-speed collision of particles with the projectile surface, they may be refle...
Citations
... An experiment was also performed for precomputed videos, so that the impact of the user's interaction and the dynamic aspects of free viewing could be judged. More recently, this work was extended to transitions between videos [37]. Similar studies were also performed in the context of panoramas [23]. ...
Light fields have become a popular representation of three-dimensional scenes, and there is interest in their processing, resampling, and compression. As those operations often result in loss of quality, there is a need to quantify it. In this work, we collect a new dataset of dense reference and distorted light fields as well as the corresponding quality scores, which are scaled in perceptual units. The scores were acquired in a subjective experiment using an interactive light-field viewing setup. The dataset contains typical artifacts that occur in the light-field processing chain due to light-field reconstruction, multi-view compression, and limitations of automultiscopic displays. We test a number of existing objective quality metrics to determine how well they can predict the quality of light fields. We find that the existing image quality metrics provide good measures of light-field quality, but require dense reference light fields for optimal performance. For the more complex task of comparing two distorted light fields, their performance drops significantly, which reveals the need for new, light-field-specific metrics.
Packet loss is a significant cause of visual impairments in video broadcasting over packet-switched networks. There are several subjective and objective video quality assessment methods focused on the overall perception of video quality. However, less attention has been paid to the visibility of packet-loss artifacts appearing in spatially and temporally limited regions of a video sequence. In this paper, we present the results of a subjective study using a methodology in which a video sequence is displayed on a touchscreen and users tap the positions where they observe artifacts. We also analyze the objective features derived from those artifacts, and propose different models for combining those features into an objective metric for assessing the noticeability of the artifacts. The practical results show that the proposed metric predicts the visibility of packet-loss impairments with reasonable accuracy. The proposed method can be applied to developing packetization and error-recovery schemes that minimize the subjectively experienced distortion in error-prone networked video systems.
The ultimate goal of many image-based modeling systems is to render photo-realistic novel views of a scene without visible artifacts. Existing evaluation metrics and benchmarks focus mainly on the geometric accuracy of the reconstructed model, which is, however, a poor predictor of visual accuracy. Furthermore, using only geometric accuracy by itself does not allow evaluating systems that either lack a geometric scene representation or utilize coarse proxy geometry. Examples include a light field and most image-based rendering systems. We propose a unified evaluation approach based on novel view prediction error that is able to analyze the visual quality of any method that can render novel views from input images. One key advantage of this approach is that it does not require ground truth geometry. This dramatically simplifies the creation of test datasets and benchmarks. It also allows us to evaluate the quality of an unknown scene during the acquisition and reconstruction process, which is useful for acquisition planning. We evaluate our approach on a range of methods, including standard geometry-plus-texture pipelines as well as image-based rendering techniques, compare it to existing geometry-based benchmarks, demonstrate its utility for a range of use cases, and present a new virtual rephotography-based benchmark for image-based modeling and rendering systems.