Conference Paper

360RVW: Fusing Real 360° Videos and Interactive Virtual Worlds

... Their benefits include their ease of use and low cost, as well as interactivity and the possibility to convey a sense of immersive realism and presence [2,3]. Furthermore, 360° videos of real-world locations can be enriched by introducing further interaction and collaboration possibilities to create virtual tours or photo-realistic virtual environments [21,22,23,8]. This way, they enhance the engagement and presence of users to bridge the gap between videos and 3D-generated virtual environments. ...
Preprint
In this work, we investigate facial anonymization techniques in 360° videos and assess their influence on the perceived realism, anonymization effect, and presence of participants. In comparison to traditional footage, 360° videos can convey engaging, immersive experiences that accurately represent the atmosphere of real-world locations. As the entire environment is captured simultaneously, it is necessary to anonymize the faces of bystanders in recordings of public spaces. Since this alters the video content, the perceived realism and immersion could be reduced. To understand these effects, we compare non-anonymized and anonymized 360° videos using blurring, black boxes, and face-swapping shown either on a regular screen or in a head-mounted display (HMD). Our results indicate significant differences in the perception of the anonymization techniques. We find that face-swapping is the most realistic and least disruptive; however, participants raised concerns regarding the effectiveness of the anonymization. Furthermore, we observe that presence is affected by facial anonymization in the HMD condition. Overall, the results underscore the need for facial anonymization techniques that balance both photo-realism and a sense of privacy.
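The blurring condition compared above can be illustrated with a minimal sketch. This is not the study's actual pipeline: the function, its name, and the box blur are illustrative, and the face bounding box is assumed to come from an external face detector that is not shown.

```python
import numpy as np

def blur_region(frame: np.ndarray, box: tuple, ksize: int = 9) -> np.ndarray:
    """Box-blur a rectangular region of a frame (illustrative sketch).

    frame: H x W x C uint8 image, e.g. one equirectangular video frame.
    box:   (x, y, w, h) face bounding box, assumed to come from an
           external face detector (not shown here).
    """
    x, y, w, h = box
    out = frame.astype(np.float32)
    pad = ksize // 2
    # Pad the face region so the blur window stays inside bounds.
    region = np.pad(out[y:y+h, x:x+w],
                    ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    # Sliding-window mean; cumulative sums would be faster, but a
    # direct accumulation keeps the sketch easy to read.
    blurred = np.zeros((h, w, frame.shape[2]), dtype=np.float32)
    for dy in range(ksize):
        for dx in range(ksize):
            blurred += region[dy:dy+h, dx:dx+w]
    out[y:y+h, x:x+w] = blurred / (ksize * ksize)
    return out.astype(np.uint8)
```

In a 360° recording the same idea applies per detected face per frame; near the poles of the equirectangular projection a face spans a wider, distorted region, which is one reason anonymization of omnidirectional footage is harder than for conventional video.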
Article
Full-text available
Omnidirectional cameras are capable of providing a 360° field of view in a single shot. This comprehensive view makes them preferable for many computer vision applications. An omnidirectional view is generally represented as a panoramic image with equirectangular projection, which suffers from distortions. Thus, standard camera approaches must be mathematically modified to be used effectively with panoramic images. In this work, we built a semantic segmentation CNN model that handles distortions in panoramic images using equirectangular convolutions. The proposed model, which we call UNet-equiconv, outperforms an equivalent CNN model with standard convolutions. To the best of our knowledge, ours is the first work on the semantic segmentation of real outdoor panoramic images. Experiment results reveal that using a distortion-aware CNN with equirectangular convolution increases semantic segmentation performance (a 4% increase in mIoU). We also release a pixel-level annotated outdoor panoramic image dataset which can be used for various computer vision applications such as autonomous driving and visual localization. The source code and the dataset are available at the project page (https://github.com/semihorhan/semseg-outdoor-pano).
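The distortion that equirectangular convolutions compensate for has a simple geometric form: a row at latitude φ is stretched horizontally by 1/cos φ. The sketch below computes that per-row scale factor; it is an illustration of the underlying geometry, not the paper's UNet-equiconv implementation, and the function name is ours.

```python
import numpy as np

def row_sampling_scale(height: int) -> np.ndarray:
    """Per-row horizontal stretch factor of an equirectangular image.

    Row r of an H-row equirectangular image sits at latitude
    phi = (0.5 - (r + 0.5) / H) * pi.  Content on that row is stretched
    horizontally by 1 / cos(phi), so a distortion-aware convolution
    widens its horizontal sampling grid by the same factor.
    """
    rows = np.arange(height)
    phi = (0.5 - (rows + 0.5) / height) * np.pi
    # Clamp near the poles, where cos(phi) -> 0 and the stretch diverges.
    return 1.0 / np.maximum(np.cos(phi), 1e-6)
```

At the equator the factor is ~1 (a standard convolution is adequate), while near the poles it grows large, which is where standard convolutions degrade most on panoramic imagery.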
Conference Paper
Full-text available
We present Geollery, an interactive mixed reality social media platform for creating, sharing, and exploring geotagged information. Geollery introduces a real-time pipeline to progressively render an interactive mirrored world with three-dimensional (3D) buildings, internal user-generated content, and external geotagged social media. This mirrored world allows users to see, chat, and collaborate with remote participants within the same spatial context in an immersive virtual environment. We describe the system architecture of Geollery, its key interactive capabilities, and our design decisions. Finally, we conduct a user study with 20 participants to qualitatively compare Geollery with another social media system, Social Street View. Based on the participants' responses, we discuss the benefits and drawbacks of each system and derive key insights for designing an interactive mirrored world with geotagged social media. User feedback from our study reveals several use cases for Geollery, including travel planning, virtual meetings, and family gatherings.
Article
Collaborative exploration of 360° videos with contemporary interfaces is challenging because collaborators have no awareness of one another's viewing activities. Tourgether360 enhances social exploration of 360° tour videos using a pseudo-spatial navigation technique that provides both an overhead "context" view of the environment as a minimap and a shared pseudo-3D environment for exploring the video. Collaborators are embodied as avatars along a track according to their position in the video timeline and can point and synchronize their playback. We evaluated the Tourgether360 concept through two studies: first, a comparative study of a simplified version of Tourgether360, with collaborator embodiments and a minimap, against a conventional interface; second, an exploratory study of how collaborators used Tourgether360 to navigate and explore 360° environments together. We found that participants adopted the Tourgether360 approach with ease and enjoyed the shared social aspects of the experience. Participants reported finding the experience similar to an interactive social video game.
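Placing avatars "along a track depending on their position in the video timeline" amounts to mapping a playback time onto a 2D path. A minimal way to do this, assuming the tour's camera path is sampled as timestamped minimap waypoints, is linear interpolation; this is an illustrative reconstruction, not Tourgether360's actual implementation.

```python
import numpy as np

def avatar_position(t: float, times: np.ndarray,
                    waypoints: np.ndarray) -> np.ndarray:
    """Place an avatar on a minimap track for playback time t.

    times:     increasing timestamps (seconds) of camera-path samples.
    waypoints: matching (x, y) minimap coordinates, one per timestamp.
    The position is linearly interpolated between the two surrounding
    samples, so each collaborator's avatar slides along the track as
    their playhead advances.
    """
    x = np.interp(t, times, waypoints[:, 0])
    y = np.interp(t, times, waypoints[:, 1])
    return np.array([x, y])
```

Each collaborator's playhead is mapped through the same function, so relative positions on the track directly convey who is ahead or behind in the tour.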
Chapter
We present a new flow-based video completion algorithm. Previous flow completion methods are often unable to retain the sharpness of motion boundaries. Our method first extracts and completes motion edges, then uses them to guide piecewise-smooth flow completion with sharp edges. Existing methods propagate colors along local flow connections between adjacent frames. However, not all missing regions in a video can be reached in this way, because motion boundaries form impenetrable barriers. Our method alleviates this problem by introducing non-local flow connections to temporally distant frames, enabling the propagation of video content across motion boundaries. We validate our approach on the DAVIS dataset. Both visual and quantitative results show that our method compares favorably against state-of-the-art algorithms.
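The core propagation step the abstract describes, pulling colors into a hole by following flow correspondences into another (possibly temporally distant) frame, can be sketched as below. This is a simplified illustration, not the paper's algorithm: it uses nearest-neighbour sampling, a single source frame, and assumes the flow field for the hole is already completed.

```python
import numpy as np

def propagate_colors(target: np.ndarray, mask: np.ndarray,
                     source: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Fill masked pixels of `target` by following flow into `source`.

    target: H x W x C frame containing a hole.
    mask:   H x W bool array, True where pixels are missing.
    flow:   H x W x 2 (dx, dy) correspondences from target to `source`,
            which may be a temporally distant frame (the paper's
            "non-local" connections).
    """
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    # Follow the flow vectors and round to the nearest source pixel.
    sx = np.round(xs + flow[ys, xs, 0]).astype(int)
    sy = np.round(ys + flow[ys, xs, 1]).astype(int)
    # Only copy where the correspondence lands inside the source frame.
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = target.copy()
    out[ys[valid], xs[valid]] = source[sy[valid], sx[valid]]
    return out
```

With only adjacent-frame flow, pixels behind a motion boundary may never find a valid correspondence; chaining connections to distant frames is what lets content cross such barriers.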
Conference Paper
In this paper, we introduce OpenVSLAM, a visual SLAM framework with high usability and extensibility. Visual SLAM systems are essential for AR devices and for the autonomous control of robots and drones. However, conventional open-source visual SLAM frameworks are not appropriately designed as libraries to be called from third-party programs. To address this, we have developed a novel visual SLAM framework designed to be easily used and extended, incorporating several useful features and functions for research and development. OpenVSLAM is released at https://github.com/xdspacelab/openvslam under the 2-clause BSD license.
Virtual Shibuya Halloween event ups its game with personal avatars
  • Ayumi Sugiyama