Data Sensorium
Spatial Immersive Displays for
Atmospheric Sense of Place
Hiroo IWATA a,1, Shiori SASAKI b, Naoki ISHIBASHI b,
Virach SORNLERTLAMVANICH b, Yuki ENZAKI b, and Yasushi KIYOKI b
a University of Tsukuba
b Musashino University
Abstract. This paper describes the project “Data Sensorium” launched at the Asia AI Institute of Musashino University. Data Sensorium is a conceptual framework for systems that provide physical experience of content stored in databases. Spatial immersive displays are a key technology of Data Sensorium. This paper introduces a prototype implementation of the concept and its application to environmental and architectural datasets.
Keywords. immersive image, visualization, projection-based VR, locomotion
interface
1. Introduction
Immersive images, such as 360-degree panoramas, play an important role in visualizing environmental or architectural content stored in databases. The HMD (head-mounted display) is a typical device for displaying immersive images and is common in VR (virtual reality) systems. VR has become popular not only in entertainment but also in scientific research and industrial applications. However, the HMD has several drawbacks. Firstly, an HMD can provide images to only a single user, which limits natural communication among multiple users. Secondly, an HMD is tightly coupled to the user's head, which makes it uncomfortable for long-term use. Thirdly, the optical system of an HMD does not cover the natural field of view of the human eyes.
A spatial immersive display is an alternative to the HMD. It is a room-like display composed of multiple large screens or curved screens [1]. Its characteristics overcome the drawbacks of the HMD: multiple users can enter the room-like display together and physically interact with each other, they do not have to wear goggles, and the room-like display fully covers the natural field of view of the human eyes.
Data Sensorium is a conceptual framework of systems providing physical experience of content stored in databases. Spatial immersive displays are a key technology of Data Sensorium. This paper introduces a prototype implementation of the concept and its application to the projects running at the Asia AI Institute of Musashino University.
1 Hiroo Iwata, University of Tsukuba, Tsukuba 305-8573 Japan; E-mail: iwata@kz.tsukuba.ac.jp.
Information Modelling and Knowledge Bases XXXIV
M. Tropmann-Frick et al. (Eds.)
© 2023 The authors and IOS Press.
This article is published online with Open Access by IOS Press and distributed under the terms
of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0).
doi:10.3233/FAIA220506
2. Basic Concept of Data Sensorium
Data Sensorium aims to present data through the physical experience of users. Its structure consists of three basic functions: “Sensing”, “Processing” and “Actuation”. Figure 1 illustrates the framework of Data Sensorium. The Sensing subsystem captures multi-dimensional data of specific places; 360-degree cameras or 3D scanners are used as sensors. The Processing subsystem memorizes and visualizes the multi-dimensional data captured by the Sensing subsystem, and spatial immersive displays present the results of visualization. The Actuation subsystem provides physical experience to the users through various interface devices that cause actual movement of the users in the real world; one example is a locomotion interface that creates a sense of walking. Spatial immersive displays may be located in various places around the world, so that global collaboration using Data Sensorium can be achieved.
Figure 1. Framework of Data Sensorium
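To make the division of responsibilities concrete, the three subsystems can be thought of as stages of a simple pipeline: sensing produces place data, processing stores and visualizes it, and actuation turns the visualization into bodily experience. The following is a minimal sketch under assumed class and method names, not the institute's actual software interfaces:

```python
from abc import ABC, abstractmethod
from typing import Any

class SensingSubsystem(ABC):
    """Captures multi-dimensional data of a place (360-degree cameras, 3D scanners)."""
    @abstractmethod
    def capture(self, place_id: str) -> dict[str, Any]: ...

class ProcessingSubsystem(ABC):
    """Memorizes the captured data and renders it for a spatial immersive display."""
    @abstractmethod
    def store(self, record: dict[str, Any]) -> None: ...
    @abstractmethod
    def visualize(self, place_id: str) -> Any: ...

class ActuationSubsystem(ABC):
    """Drives interface devices (e.g. a locomotion interface) that move the user."""
    @abstractmethod
    def actuate(self, rendered_scene: Any) -> None: ...

def run_data_sensorium(sensing: SensingSubsystem,
                       processing: ProcessingSubsystem,
                       actuation: ActuationSubsystem,
                       place_id: str) -> None:
    """One pass through the Sensing -> Processing -> Actuation framework."""
    processing.store(sensing.capture(place_id))
    actuation.actuate(processing.visualize(place_id))
```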
3. Prototype Implementation
3.1. Four-screens Configuration
We first tried rapid prototyping of a spatial immersive display using 16:9 flat screens. The horizontal field of view of the human eyes is 200 degrees, so we employed four 120-inch screens to offer the full field of view. These screens are arranged along the sides of a pentagon, offering a 288-degree field of view. Short-focus projectors (Optoma GT1080) are placed inside the screens.
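The 288-degree figure follows directly from the pentagonal layout: each side of a regular pentagon subtends 360/5 = 72 degrees at the center, so four screens cover 4 x 72 = 288 degrees, comfortably above the 200-degree horizontal field of view of the human eyes. A small sketch of this arithmetic (the function name is ours):

```python
def polygon_screen_fov(num_screens: int, polygon_sides: int = 5) -> float:
    """Horizontal field of view covered by screens placed on the sides of a
    regular polygon, as seen from the polygon's center, in degrees."""
    return num_screens * (360.0 / polygon_sides)

print(polygon_screen_fov(4))  # 288.0 degrees for four sides of a pentagon
```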
This configuration is easy to set up and realizes a room-like display. Figure 2 shows an overall view of the installation. We also built the pentagonal arrangement using 85-inch monitors (Figure 3). Although each screen is smaller than the projection screen, this configuration is easier to install; it works even in a bright room and is actually in operation at Thammasat University.
Figure 2. Four 120-inch screens configuration
Figure 3. Four 85-inch monitors configuration
3.2. Integration of Torus Treadmill with Four 120-inch Screens
The Torus Treadmill is a locomotion interface that creates a sense of walking. Although traveling on foot is the most intuitive way of locomotion, proprioceptive feedback of walking is not provided in most applications of virtual environments. We have been developing an infinite surface driven by actuators to create a sense of walking [2]. A torus-shaped surface was selected to realize the locomotion interface. The device employs 12 sets of treadmills. Figure 4 illustrates the basic structure of the Torus Treadmill. Each treadmill moves the walker along the "X" direction. These treadmills are connected side by side and driven in the perpendicular direction, which moves the walker along the "Y" direction. The combination of these motions enables omni-directional walking: the walker can go in any direction while his/her position remains fixed in the real world. Figure 5 shows an overall view of the device.
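The essential control idea is that the floor surface is driven opposite to the walker's measured velocity, so the two belt motions cancel the step and the walker stays localized. A minimal sketch of that cancellation under assumed names (the actual controller of the Torus Treadmill [2] is more involved):

```python
from dataclasses import dataclass

@dataclass
class BeltCommand:
    x_speed: float  # belt speed along the "X" direction, m/s
    y_speed: float  # speed of the perpendicular side-by-side drive ("Y"), m/s

def cancel_walker_motion(walker_vx: float, walker_vy: float,
                         gain: float = 1.0) -> BeltCommand:
    """Drive the torus surface opposite to the measured walking velocity so the
    walker's position stays fixed in the real world."""
    return BeltCommand(x_speed=-gain * walker_vx, y_speed=-gain * walker_vy)

# Example: a step toward (0.8, 0.3) m/s is answered by floor motion at (-0.8, -0.3) m/s.
print(cancel_walker_motion(0.8, 0.3))
```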
We are planning to integrate the Torus Treadmill with the four 120-inch screens display described in the previous section. Figure 6 illustrates the combination of the Torus Treadmill and the screens. The walking action is measured by a position sensor, and the image of the virtual space is displayed according to the walking distance. This function provides a physical walking experience in the virtual environment.
A major application of the system will be the art museum. Art museums provide not only individual artworks but also the whole space that contains them. Thus, physical walking is essential to the experience of art museums.
Figure 4. Structure of the Torus Treadmill ("X" and "Y" drive directions)
Figure 5. Overall view of the Torus Treadmill
Figure 6. Combination of the Torus Treadmill and the four 120-inch screens display
3.3. atmoSphere Display - A Spherical Full-Surround Screen
A sphere is an ideal shape for a full-surround screen, because the distance between the eyes and the screen is constant on a spherical screen. Flat screens are easier to install, but they have difficulty offering a wide vertical field of view: the four-screens configuration described in Section 3.1 offers only 40 degrees vertically, whereas the vertical field of view of the human eyes is 125 degrees. We designed a spherical screen that offers 360 degrees horizontally and 135 degrees vertically, named the “atmoSphere Display”. Figure 7 shows the basic structure of the display. The diameter of the screen is 3.8 m and the overall height is 3.0 m; it is designed to fit the room of the Asia AI Institute of Musashino University. Four projectors throw images onto the spherical wall and two projectors throw images onto the floor. Figure 8 shows an overall view of the display. It is made of a fabric screen, which is relatively easy to fabricate compared to solid screens. Another advantage of a fabric screen is that it transmits light and sound. Full-surround screens often suffer from internal reflection that degrades the contrast of the image; fabric screens provide a solution to this problem.
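Rendering a stored 360-degree panorama on such a screen ultimately reduces to associating each projected pixel with a viewing direction from the center of the sphere and sampling the panorama there. The following is a generic sketch of the direction-to-equirectangular lookup, not the display's actual calibration pipeline:

```python
import math

def direction_to_equirect(dx: float, dy: float, dz: float,
                          width: int, height: int) -> tuple[int, int]:
    """Map a unit viewing direction (x right, y up, z forward) to pixel
    coordinates in an equirectangular 360-degree panorama."""
    yaw = math.atan2(dx, dz)                       # horizontal angle, -pi..pi
    pitch = math.asin(max(-1.0, min(1.0, dy)))     # vertical angle, -pi/2..pi/2
    u = (yaw / (2.0 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - pitch / math.pi) * (height - 1)
    return int(u), int(v)

# Looking straight ahead samples the center of a 4096x2048 panorama.
print(direction_to_equirect(0.0, 0.0, 1.0, width=4096, height=2048))
```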
Figure 7. Basic structure of the atmoSphere Display
Figure 8. Overall view of the atmoSphere Display
4. Integration of 5D World Map System and Data Sensorium
In order to discover what is happening in the nature of our planet, it is important to memorize environmental situations and to compute environmental change in various aspects and contexts. Kiyoki and Sasaki have proposed the “5-Dimensional World Map System” [3]-[12] for integrating and analyzing environmental phenomena in ocean and land. This system is effective and advantageous for memorizing environmental situations with Physical-Cyber integration: it detects environmental phenomena as real data resources in the physical space (real space), maps them to cyber space for analytical and semantic computing, and actuates the analytically computed results back to the real space with visualization expressing environmental phenomena, causalities and influences. Currently, the 5D World Map System is globally utilized as a Global Environmental Semantic Computing System in SDG14 by United-Nations-ESCAP (https://sdghelpdesk.unescap.org/toolboxes?title=&field_sdgs_target_id=All&page=1, https://sdghelpdesk.unescap.org/toolboxes) for observing and analyzing disasters, natural phenomena and ocean-environment situations with local and global multimedia data resources [9][12]. The 5D World Map System has also introduced the concept of “SPA (Sensing, Processing and Analytical Actuation Functions)” for global environmental system integration, as a global environmental knowledge sharing, analysis and integration system [7][8][9].
4.1. 5D World Map System
We have introduced the architecture of a multi-visualized and dynamic knowledge representation system, the “5D World Map System” [7]-[12], which is applied to environmental analysis and semantic computing. The basic space of this system consists of a temporal dimension (1st dimension), spatial dimensions (2nd, 3rd and 4th dimensions) and a semantic dimension (5th dimension) representing a large-scale and multiple-dimensional semantic space. This space memorizes and recalls various multimedia information resources with temporal, spatial and semantic correlation computing functions, and realizes a 5D World Map for dynamically creating temporal-spatial and semantic multiple views of various “environmental multimedia information resources.”
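As an illustration of these five coordinates, an item in such a space can be pictured as a timestamped, geolocated multimedia resource together with a semantic feature vector used for correlation computing. The record layout and the cosine-style correlation below are our simplification for the sketch, not the system's actual schema or computation:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class WorldMapRecord:
    timestamp: datetime           # 1st dimension: time
    latitude: float               # 2nd-4th dimensions: space
    longitude: float
    altitude: float
    semantic_vector: list[float]  # 5th dimension: coordinates in a semantic space
    media_uri: str                # the multimedia resource (image, sound, text, video)

def semantic_correlation(a: WorldMapRecord, b: WorldMapRecord) -> float:
    """Cosine similarity between semantic vectors, standing in for the
    system's semantic correlation computing."""
    dot = sum(x * y for x, y in zip(a.semantic_vector, b.semantic_vector))
    norm_a = sum(x * x for x in a.semantic_vector) ** 0.5
    norm_b = sum(y * y for y in b.semantic_vector) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```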
The 5D World Map System applies dynamic evaluation and mapping functions over multiple views of temporal-spatial metrics and integrates the results of semantic evaluation to analyze environmental multimedia information resources. The main feature of this system is to create world-wide, global maps and views of environmental situations expressed in multimedia information resources (image, sound, text and video) dynamically, according to the user's viewpoints. Spatially, temporally, semantically and impressionably evaluated and analyzed environmental multimedia information resources are mapped onto a 5D time-series multi-geographical space. The basic concept of the 5D World Map System has been introduced in [3]-[12]. Applied to environmental multimedia computing, the 5D World Map System visualizes world-wide and global relations among different areas and times in environmental aspects, using dynamic mapping functions with temporal, spatial, semantic and impression-based computations [7]-[12].
4.2. Connection between Data Sensorium and 5D World Map System
The connection between Data Sensorium and the 5D World Map System is implemented as a realization of 3D demand-propagation to local spots for environmental actuation
in a Plastic Garbage Monitoring project in conjunction with the Closing-the-Loop project by UN-ESCAP. Assuming an installation across multiple remote sites, the connection between the two systems is designed as shown in Figure 9. The connection is realized for the purpose of Plastic Garbage Experience Sharing and Collection: 360-degree area-atmosphere recognition with real local 360-degree images (taken by 360-degree cameras such as GoPro and Theta). It shows the future direction of the 5D World Map System as a control center of "Plastic Garbage Discovery and Reduction" activities with advanced technologies in the fields of AI, Big-Data Analysis, Machine Learning, VR/AR, IoT and Robotics.
Figure 9. Connection between Data Sensorium and 5D World Map System and the installation structures in remote sites
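As an illustration of the kind of payload such a remote-site connection could exchange, a local site might package a 360-degree capture with its location, time and an annotation and register it with the shared map. The record fields and example values below are assumptions for the sketch, not the project's actual interface:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class PanoramaObservation:
    site_id: str       # hypothetical identifier of the remote monitoring site
    captured_at: str   # ISO 8601 capture time
    latitude: float
    longitude: float
    image_uri: str     # location of the 360-degree image (e.g. a GoPro/Theta capture)
    note: str          # free-text annotation, e.g. observed plastic garbage

obs = PanoramaObservation(
    site_id="remote-site-01",
    captured_at="2022-06-01T09:30:00+07:00",
    latitude=13.75, longitude=100.50,               # placeholder coordinates
    image_uri="https://example.org/pano/0001.jpg",  # placeholder URI
    note="plastic garbage accumulation near the canal",
)

# Serialized form a site could send to the shared 5D World Map instance.
print(json.dumps(asdict(obs), indent=2))
```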
5. Application to Virtual Museum
In the past few decades, many kinds of virtual museum have been discussed with the rise of virtual reality technologies, and the term virtual museum is very broad, covering notions such as the digital museum, electronic museum, online museum, Web museum and cybermuseum [13]. For constructing various digital applications in a museum, a multidatabase system architecture for integrating digital archives of an art collection, named Artizon Cloud, was proposed [14]. To introduce the Data Sensorium concept into the virtual museum, the Art Sensorium Project was launched at Musashino University. The two key technologies that we focus on are as follows: 1) a
multidatabase system architecture to integrate multiple art collections, and 2) virtual space design and implementation for the Data Sensorium.
Fig.10 shows the multidatabase system of Art Sensorium. The purpose of the multidatabase system is to realize a semantic computing environment for art by integrating multiple data archives of museums and by providing an API for implementing a variety of applications.
Figure 10. A multidatabase system architecture for integrating art collections
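To illustrate how applications might sit on top of such an integrated archive, the sketch below queries an assumed REST-style endpoint that federates several museum collections. The URL, parameters and response fields are hypothetical; the paper does not specify the actual Art Sensorium API:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical endpoint of the integrated art-collection service.
BASE_URL = "https://api.example.org/artworks"

def search_artworks(keyword: str, museum: str = "", limit: int = 10) -> list:
    """Query the assumed multidatabase API for artworks matching a keyword,
    optionally restricted to a single source museum."""
    params = {"q": keyword, "limit": limit}
    if museum:
        params["museum"] = museum
    with urlopen(f"{BASE_URL}?{urlencode(params)}") as resp:
        return json.loads(resp.read())["items"]

# Example (the endpoint is fictitious, so the call is left commented out):
# for art in search_artworks("impressionism", museum="artizon"):
#     print(art["title"], art["image_url"])
```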
By introducing the Data Sensorium concept, many types of virtual museum can be implemented, as shown in Fig.11. The Data Sensorium is capable of projecting an artwork at real size, so it can be used as a real-size art data browser, as shown in A of Fig.11, and it can reproject past exhibitions in a cyberspace, as shown in B of Fig.11. Dynamic curation in a virtual space is a meaningful challenge: automatically generating an art exhibition for an individual according to one's mind or intention, as shown in C of Fig.11.
Figure 11. Assumed applications of Art Sensorium
Furthermore, multiple data sensoriums located in culturally distinct areas are expected to enable cross-cultural communication through art. Fig.12 shows assumed use cases of multiple data sensoriums in Japan and Thailand.
Figure 12. Assumed use cases of multiple data sensoriums in Japan and Thailand
6. Application to Urban Planning
Thammasat University has launched an initiative to enable city-scale AI in the project “AI Ready City Networking in RUN”2. The project transforms the 2.8112-square-kilometer area of Thammasat University's Rangsit campus into a model of city-scale AI capacity, since current AI research suffers heavily from the insufficiency and limited diversity of data. Reliable and connected data will be collected and made available to fully demonstrate the capability of AI on the real-life campus. The platform is designed to serve as a base platform [15] for the four highest-impact domains in the Rangsit city, i.e. healthcare, environment, mobility and agriculture, and is equipped with AI-enabled healthcare monitoring devices [16], noninvasive bed sensors [17], environmental sensors, video analytics cameras, street lights, indoor tracking devices [18], and drones for aerial photography. Figure 13 depicts the project architecture with its domain-specific connectivity.
Figure 13. AI City platform for data and analytics connectivity
The spatial immersive displays of Data Sensorium form a node in the AAII branch on the Rangsit campus. The collaboration between Musashino University and Thammasat University uses Data Sensorium to enable the room-like environmental experience across the campuses. AI City data are visualized on the Thammasat model generated from (1) direct inspection photography, (2) an aerial photogrammetric survey by drone, and (3) laser scanning. The data are processed into a point cloud from which a 3D reference mesh is generated, as shown in Figure 14.
2 This project is financially supported by the Thammasat University Research Fund under the TSRI, Contract No. TUFF19/2564, for the project “AI Ready City Networking in RUN”, based on the RUN Digital Cluster collaboration scheme.
Figure 14. Modeling process of Thammasat Rangsit campus
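A typical route from such scans to a reference mesh is to merge the points, downsample, estimate normals and run a surface reconstruction. The sketch below uses the Open3D library with assumed file names and parameters; it illustrates the general step, not the project's actual processing pipeline:

```python
import open3d as o3d

# Hypothetical input: a merged point cloud from drone photogrammetry and laser scans.
pcd = o3d.io.read_point_cloud("rangsit_campus_points.ply")

# Reduce density and estimate normals before surface reconstruction.
pcd = pcd.voxel_down_sample(voxel_size=0.5)  # 0.5 m grid, assumed scale
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=2.0, max_nn=30))

# Poisson reconstruction produces a triangle mesh usable as a reference model.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)
o3d.io.write_triangle_mesh("rangsit_campus_mesh.ply", mesh)
```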
Figure 15 shows the visualization results on the four surrounding screens, which provide a room-like experience. After the image stitching process, Figure 15(a) shows the synchronized image controlled from multiple points via the Internet. At each end, the users can control the view and point out an area of interest; this function allows the users to share their concerns within the same room-like experience. In addition, the models are smoothly augmented with location-sensitive information to provide a spatial experience to the users, as shown in Figure 15(b). As an example, social media density is expressed in the form of an augmented graph in Figure 15(c) to show the SNS population at a specific moment.
Figure 15. Visualization results on the four surrounding screens: (a) synchronized view, (b) location-sensitive augmentation, (c) social media density graph
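The shared, synchronized view described above amounts to broadcasting a small view state (camera angles plus an optional pointed-out location) among the connected sites, with each site applying the latest update it receives. The message layout below is an assumption for illustration, not the system's actual protocol:

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ViewState:
    site_id: str                     # which site last changed the view (hypothetical)
    yaw_deg: float                   # horizontal view angle
    pitch_deg: float                 # vertical view angle
    poi_lat: Optional[float] = None  # optional pointed-out area of interest
    poi_lon: Optional[float] = None

def encode_update(state: ViewState) -> str:
    """Serialize a view update for broadcast to the other rooms."""
    return json.dumps(asdict(state))

def apply_update(message: str) -> ViewState:
    """Decode an incoming update into the view to render locally."""
    return ViewState(**json.loads(message))

# Example round trip between two sites sharing the same room-like view.
msg = encode_update(ViewState("site-a", yaw_deg=135.0, pitch_deg=-5.0,
                              poi_lat=14.07, poi_lon=100.60))
print(apply_update(msg))
```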
As a result, Data Sensorium shows its potential for realizing a spatial immersive environment that relieves the limitations of the HMD, especially in urban planning, which needs city-scale environment sharing under the AI City concept.
7. Conclusions
This paper has shown the basic concept and prototype implementation of Data Sensorium. Its effectiveness is exemplified by three projects running at the Asia AI Institute of Musashino University. Future work will include further applications using new interface devices; a muscle training device is one example.
References
[1] Iwata, H., Rear-projection-based Full Solid Angle Display, Proceedings of ICAT'96, 1996, pp. 59-64.
[2] Iwata, H., The Torus Treadmill: Realizing Locomotion in VEs, IEEE Computer Graphics and Applications, Vol. 19, No. 6, 1999, pp. 30-35.
[3] Yasushi Kiyoki, Shiori Sasaki, Nhung Nguyen Trang, Nguyen Thi Ngoc Diep, "Cross-cultural Multimedia
Computing with Impression-based Semantic Spaces," Conceptual Modelling and Its Theoretical
Foundations, Lecture Notes in Computer Science, Springer, pp.316-328, March 2012.
[4] Shiori Sasaki, Yusuke Takahashi, Yasushi Kiyoki: “The 4D World Map System with Semantic and
Spatiotemporal Analyzers,” Information Modelling and Knowledge Bases, Vol.XXI, IOS Press, 18 pages,
2010.
[5] Totok Suhardijanto, Yasushi Kiyoki, Ali Ridho Barakbah: “A Term-based Cross-Cultural Computing
System for Cultural Semantics Analysis with Phonological-Semantic Vector Spaces,” Information
Modelling and Knowledge Bases XXIII, pp.20-38, IOS Press, 2012.
[6] Yasushi Kiyoki, Xing Chen, Shiori Sasaki and Chawan Koopipat, “Multi-Dimensional Semantic
Computing with Spatial-Temporal and Semantic Axes for Multi-spectrum Images in Environment
Analysis", to appear in Information Modelling and Knowledge Bases (IOS Press), Vol. XXVI, 20 pages,
March 2016.
[7] Yasushi Kiyoki, Asako Uraki, Chalisa Veesommai, “A Seawater-Quality Analysis Semantic-Space in Hawaii-Islands with Multi-Dimensional World Map System”, 18th International Electronics Symposium (IES2016), Bali, Indonesia, September 29-30, 2016.
[8] Yasushi Kiyoki, Xing Chen, Shiori Sasaki, Chawan Koopipat, “A Globally-Integrated Environmental Analysis and Visualization System with Multi-Spectral & Semantic Computing in “Multi-Dimensional World Map””, Information Modelling and Knowledge Bases XXVIII, pp. 106-122, 2017.
[9] Yasushi Kiyoki, Xing Chen, Chalisa Veesommai, Shiori Sasaki, Asako Uraki, Chawan Koopipat,
Petchporn Chawakitchareon and Aran Hansuebsai, “An Environmental-Semantic Computing System for
Coral-Analysis in Water-Quality and Multi-Spectral Image Spaces with “Multi-Dimensional World
Map”, Information Modelling and Knowledge Bases, Vol. XXVIII, 20 pages, March 2018.
[10] Sasaki, S. and Kiyoki, Y., "Real-time Sensing, Processing and Actuation Functions of 5D World Map
System: A Collaborative Knowledge Sharing System for Environmental Analysis" Information
Modelling and Knowledge Bases, Vol. XXVIII, IOS Press, pp. 220-239, May 2016.
[11] Shiori Sasaki, Yasushi Kiyoki, "Analytical Visualization Functions of 5D World Map System for Multi-
Dimensional Sensing Data", Information Modelling and Knowledge Bases XXIX, IOS Press, pp.71 – 89,
May 2017.
[12] Shiori Sasaki, Yasushi Kiyoki, Madhurima Sarkar-Swaisgood, Jinmika Wijitdechakul, Irene Erlyn Wina Rachmawan, Sanjay Srivastava, Rajib Shaw, Chalisa Veesommai, “5D World Map System for Disaster-Resilience Monitoring from Global to Local: Environmental AI System for Leading SDG 9 and 11”, Information Modelling and Knowledge Bases XXXI, Proceedings of the 28th International Conference on Information Modelling and Knowledge Bases, EJC 2019, Lappeenranta, Finland, 5-9 June 2019, pp. 306-323.
[13] Werner Schweibenz, “The virtual museum: an overview of its origins, concepts, and terminology”, The
Museum Review, Vol.4, No.1, 2019.
[14] Naoki Ishibashi, “Artizon Cloud: A Multidatabase System Architecture for an Art Museum”, The 31st International Conference on Information Modelling and Knowledge Bases (EJC2021), 2021.
[15] Nobuyuki Ota, “Create Deep Intelligence™ in the Internet of Things”, 2014. URL http://on-demand.gputechconf.com/gtc/2015/presentation/S5813-Nobuyuki-Ota.pdf
[16] Krishna Kant Singh, Akansha Singh, Jenn-Wei Lin, Ahmed A. Elngar, “Deep Learning and IoT in
Healthcare Systems”, Paradigms and Applications, CRC Press, December 2021.
[17] Waranrach Viriyavit, Virach Sornlertlamvanich, “Bed Position Classification by a Neural Network and Bayesian Network Using Noninvasive Sensors for Fall Prevention”, Journal of Sensors, Volume 2020, Article ID 5689860, Hindawi, https://doi.org/10.1155/2020/5689860, pp. 1-14, January 2020.
[18] La-or Kovavisaruch, Taweesak Sanpechuda, Krisada Chinda, Pobsit Kamolvej, Virach Sornlertlamvanich, “Museum Layout Evaluation based on Visitor Statistical History”, Asian Journal of Applied Sciences, Vol. 5, No. 3, pp. 615-622, June 2017.