Figure 11 - uploaded by Erik Geslin
Source publication
"WiiMedia" is a study using the WiiRemote, a new consumer video game controller from Nintendo's, for media art, pedagogical applications, scientific research and innovative unprecedented entertainment systems. Normally, consumer hardwares, like standard controllers of new video game platforms, are closed to public developers. The Nintendo's WiiRemo...
Similar publications
Simulation games can help teaching and learning in several areas of Software Engineering. One important research issue is providing support for simulation game development, so that their adoption in Software Engineering courses is successful. In this work, we identify a set of requirements focusing on some of the Constructivist learning...
Citations
... Game controllers specifically designed for the game console, such as those used with PlayStation or Xbox consoles, are a common way to interact with games. Consoles designed to interact with body movements, such as Nintendo's Wii or Microsoft's Kinect, are also popular [61,64], specifically when working with those with developmental disabilities [13,70] or with older adults [1,12,72]. In addition, the use of body movement as a game controller has been shown to promote social engagement and movement among those in retirement communities [40] and patients who are diagnosed with dementia [67,68]. ...
Autistic children face significant challenges in vocal communication and social interaction, often leading to social isolation. There is evidence that Augmentative and Alternative Communication (AAC) offers support to mitigate these challenges, enabling them to communicate by non-vocal means through forms of AAC such as speech-generating devices (SGDs). However, the adoption and use of SGDs are hindered by several factors, including the large amount of practice required to learn to use SGDs and the limited options for highly engaging social learning contexts. Our study introduces the novel approach of using SGDs as game controllers for digital and interactive games. With three design goals guiding our work, we conducted a Wizard-of-Oz formative case study with five participants aged 3-5 years who were learning to use their SGD. We simulated a digital coloring game, integrating the speech-generated output of the participant's SGD to function as the game's controller. From this case study, we observed that all participants engaged with the game using their SGD for at least one turn, and two participants also engaged in emerging joint-attention responses with the game and the game's facilitator. This paper discusses these findings and contributes directions for future research, with suggestions for the design of future SGD-controlled games and exploration of social connection and collaboration between autistic children who use AAC and their caregivers, siblings, and peers.
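To make the controller idea concrete, here is a minimal Python sketch: the recognized text output of an SGD is parsed into a (color, region) command for a coloring game. The vocabulary, function names, and turn handling are illustrative assumptions, not the study's implementation (which was simulated Wizard-of-Oz style by a human operator).

```python
# Hypothetical sketch: routing a speech-generating device's (SGD) spoken
# output into a coloring game as controller commands. A human operator
# (or a speech recognizer) supplies the recognized utterance as text.
# All vocabulary and names here are illustrative, not from the study.

CANVAS_REGIONS = {"sun", "tree", "house"}
COLORS = {"red", "blue", "green", "yellow"}

def parse_utterance(utterance: str):
    """Extract a (color, region) command from a recognized SGD utterance."""
    words = set(utterance.lower().split())
    color = next(iter(words & COLORS), None)
    region = next(iter(words & CANVAS_REGIONS), None)
    return color, region

def handle_turn(utterance: str, canvas: dict) -> str:
    color, region = parse_utterance(utterance)
    if color and region:
        canvas[region] = color  # apply the paint action to the canvas
        return f"Painted the {region} {color}."
    return "Utterance not understood; prompting the child to try again."

canvas = {}
print(handle_turn("blue sun", canvas))  # -> Painted the sun blue.
print(handle_turn("hello", canvas))     # -> prompt to try again
```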
... Geertsema et al. [2] describe how people with convulsive seizures can be saved from sudden death if the movement can be predicted. Studies [3,4] have ...
Predicting human motion from past observed motion is a challenging problem in computer vision and graphics. Existing work addresses it with discriminative models and reports results for cases that follow a homogeneous (in-distribution) setting, without discussing the domain-shift problem, in which training and testing data follow heterogeneous (out-of-distribution) distributions, which is the reality when such models are used in practice. However, recent research proposed addressing domain shift by augmenting the discriminative model with a generative model, and obtained better results. In the present investigation, we propose regularizing the extended network by inserting linear layers that minimize the rank of the latent space, and training the entire network end to end. This regularization strengthens the model so that it deals effectively with domain-shift scenarios where training and testing data come from different distributions; to this end, we add the extra linear layers to the network encoder. We tested our model on the benchmark datasets CMU Motion Capture and Human3.6M, and show that it outperforms the state of the art on 14 OoD actions of H3.6M and 7 OoD actions of CMU MoCap in terms of the Euclidean distance between predicted and ground-truth joint-angle values. Our average results over the 14 OoD actions for short-term horizons (80, 160, 320, 400 ms) are 0.34, 0.6, 0.96, 1.07, and over the 7 OoD actions of CMU MoCap for short- and long-term horizons (80, 160, 320, 400, 1000 ms) are 0.28, 0.45, 0.77, 0.89, 1.46. All of these results improve on other state-of-the-art results.
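The rank-minimizing idea can be sketched in a few lines of PyTorch: a pair of linear layers (hidden -> rank -> hidden) inserted after the encoder caps the rank of the latent space. The GRU backbone and all layer sizes below are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a low-rank linear bottleneck inserted into an encoder.
# Backbone and dimensions are assumed for illustration only.
import torch
import torch.nn as nn

class LowRankEncoder(nn.Module):
    def __init__(self, input_dim=54, hidden_dim=256, rank=32):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # Factorized linear pair hidden -> rank -> hidden: the composed map
        # has rank at most `rank`, constraining the latent space.
        self.down = nn.Linear(hidden_dim, rank)
        self.up = nn.Linear(rank, hidden_dim)

    def forward(self, x):          # x: (batch, time, joint-angle values)
        h, _ = self.rnn(x)
        return self.up(self.down(h))   # low-rank latent trajectory

# Toy usage: 8 sequences, 50 frames, 54 joint-angle values per frame.
enc = LowRankEncoder()
z = enc(torch.randn(8, 50, 54))
print(z.shape)                     # torch.Size([8, 50, 256])
```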
... It is a primary task that recognizes simple human actions based on the complete actions in a video. It plays a key role in many domains and applications, including intelligent visual surveillance [1,2], video retrieval, gaming [3], home behavior analysis, entertainment, autonomous vehicles, human-robot interaction, health care, and ambient assisted living [4,5]. Human action recognition in video includes various tasks such as human detection, pose estimation, human tracking, and analysis. ...
Multi-activity multi-object recognition (MAMO) is a challenging task in visual systems for monitoring, recognizing, and alerting in various public places, such as universities, hospitals, and airports. Both academic and commercial researchers are aiming towards automatic tracking of human activities in intelligent video surveillance using deep learning frameworks, which is required in many real-time applications to detect unusual or suspicious activities, such as tracking suspicious behaviour in crime events. The primary purpose of this paper is to perform multi-class activity prediction for individuals as well as groups from video sequences by using the state-of-the-art object detector You Only Look Once (YOLOv3). By making optimum use of the geographical information of the cameras and the YOLO object detection framework, a Deep Landmark model recognizes simple to complex human actions on grayscale and RGB image frames of video sequences. The model is tested and compared on various benchmark datasets and found to be the most precise model for detecting human activities in video streams. Upon analysing the experimental results, it has been observed that the proposed method shows superior performance as well as high accuracy.
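As a rough illustration of the detection stage such a pipeline rests on, the Python sketch below runs a pretrained YOLOv3 model through OpenCV's DNN module and counts person detections in a frame as a crude individual-vs-group cue. The config/weight file names and the counting heuristic are assumptions for illustration; the paper's Deep Landmark pipeline is not reproduced here.

```python
# Sketch: per-frame person detection with YOLOv3 via OpenCV DNN.
# Requires the standard Darknet files (here assumed to be named
# "yolov3.cfg" / "yolov3.weights") next to the script.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

def count_people(frame, conf_threshold=0.5):
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())
    people = 0
    for out in outs:
        for det in out:                 # det = [cx, cy, w, h, obj, scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            if class_id == 0 and scores[class_id] > conf_threshold:
                people += 1             # COCO class 0 is "person"
    return people

# Toy cue from one frame; a real system would track detections over time
# before predicting an activity class.
frame = np.zeros((416, 416, 3), dtype=np.uint8)
print("group" if count_people(frame) > 1 else "individual/none")
```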
... Owing to recent developments in, and increasing demand for, smart wearable devices such as smartwatches, which contain various sensors such as accelerometers and gyroscopes, data collected from these sensors are expected to be used in a wide variety of applications that employ the movements of body parts [1], [9], [17], [18], [20]. For example, recent studies have used smartwatches as video game controllers, hand-drawing devices, handwriting devices, gestural input devices for IoT/CPS, activity recognition for context-aware applications, controllers for virtual reality (VR) applications, and remote communication using sign language [6], [11], [19], [21]. ...
Inertial sensor data collected from wearable smart devices such as smartwatches are expected to be used in various smart applications such as video game controllers, hand drawing, handwriting, gestural input devices, human activity recognition, and remote communication using sign language. However, since the maximum sampling rate of the inertial sensors in commercial smartwatches is restricted, capturing fine-grained body movements from the low-sampled signals is difficult. Therefore, this study proposes a new method for generating high-sampling-rate signals from low-sampled signals by upsampling them using interpolation with an artificial neural network. Because it is impossible to obtain "non-existent" data from low-sampled signals according to information theory, we estimate these data from experience, i.e., using high-sampled signals prepared in advance for training. This is possible because the trajectories of a sensor are restricted by the skeletal structure of the body part to which the sensor is attached.
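A minimal PyTorch sketch of that interpolation idea: a small network learns to upsample low-rate accelerometer windows, trained against high-rate recordings that are synthetically downsampled. The window lengths, 4x rate ratio, and layer sizes are assumptions for the example, not the study's architecture.

```python
# Sketch: learn to upsample low-rate accelerometer windows with an MLP,
# using high-rate windows (downsampled for input) as training targets.
import torch
import torch.nn as nn

LOW, HIGH = 16, 64   # samples per window, e.g. 25 Hz vs. 100 Hz (assumed)

class Upsampler(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LOW * 3, 256), nn.ReLU(),
            nn.Linear(256, HIGH * 3),
        )

    def forward(self, x):              # x: (batch, LOW, 3) accel xyz
        return self.net(x.flatten(1)).view(-1, HIGH, 3)

model = Upsampler()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
high = torch.randn(32, HIGH, 3)        # stand-in for real high-rate data
low = high[:, ::4, :]                  # simulate the low sampling rate
for _ in range(5):                     # a few illustrative training steps
    loss = nn.functional.mse_loss(model(low), high)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(model(low).shape)                # torch.Size([32, 64, 3])
```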
... As games are often at the forefront of technology, there are many examples of game-based technology being used for other purposes. This ranges from the use of game hardware, such as controllers [39] and other input devices [27], to the use of game software, such as game engines, for the development of general interactive software and simulations [32,40,47]. One class of scientific games, biotic games, even aims to advance scientific lab equipment by using it as game hardware [36]. ...
Scientific software is often developed with professional scientists in mind, resulting in complex tools with a steep learning curve. Citizen science games, however, are designed for citizen scientists---members of the general public. These games maintain scientific accuracy while placing design goals such as usability and enjoyment at the forefront. In this paper, we identify an emerging use of game-based technology, in the repurposing of citizen science games to be software tools for professional scientists in their work. We discuss our experience in two such repurposings: Foldit, a protein folding and design game, and Eyewire, a web-based 3D neuron reconstruction game. Based on this experience, we provide evidence that the software artifacts produced for citizen science can be useful for professional scientists, and provide an overview of key design principles we found to be useful in the process of repurposing.
... In this system we use two Nintendo Wii Remotes, each of which can detect and track up to four infrared light sources and communicates wirelessly over Bluetooth [6,7]. ...
In this paper we present a system for tracking the movements of training instruments in 3D space, using the infrared cameras of two Nintendo Wii Remotes, in order to evaluate the progress of surgeons during the learning phase of making incisions and placing screws in the spine through minimally invasive procedures. The cameras are positioned orthogonally, and they detect the positions of IR markers placed on the training instruments.
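The orthogonal two-camera geometry lends itself to a short worked example. The Python sketch below assumes idealized, calibrated pinhole cameras aligned with the world axes (camera A at the origin looking along +Z, camera B at (d, 0, 0) looking along -X) and the Wii Remote IR camera's nominal 1024x768 resolution and roughly 45-degree horizontal field of view; the paper's actual calibration is not reproduced.

```python
# Sketch: 3D marker position from two orthogonally mounted Wii Remote
# IR cameras, under idealized pinhole assumptions stated above.
import numpy as np

RES = np.array([1024.0, 768.0])        # Wii Remote IR camera resolution
FOV = np.radians([45.0, 34.0])         # approx. horizontal/vertical FoV

def pixel_to_slopes(px):
    """Pixel coords -> tangent of the angular offset from the optical axis."""
    return np.tan((np.asarray(px, float) / RES - 0.5) * FOV)

def triangulate(px_a, px_b, d=1.0):
    """Camera A at origin looks along +Z (slopes X/Z, Y/Z); camera B at
    (d, 0, 0) looks along -X (slopes Z/(d-X), Y/(d-X))."""
    ua, va = pixel_to_slopes(px_a)
    ub, vb = pixel_to_slopes(px_b)
    z = ub * d / (1.0 + ua * ub)       # intersect the two viewing rays
    x = ua * z
    y = (va * z + vb * (d - x)) / 2.0  # average the two height estimates
    return np.array([x, y, z])

# Marker centered in both images lies on both optical axes -> the origin.
print(triangulate((512, 384), (512, 384)))   # ~[0. 0. 0.]
```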
... The capability of automatically detecting people and understanding their behaviors is a key functionality of intelligent video systems. Interest in behavior understanding has increased dramatically in recent years, motivated by societal needs that include security [1], natural interfaces [2], gaming [3], affective computing [4], and assisted living [5]. Significant technological advances in hardware and communication protocols are also facilitating new services such as real-time collection of statistics on group sports [6] and annotation of videos for event detection and retrieval [7]. ...
Understanding human behaviors is a challenging problem in computer vision that has recently seen important advances. Human behavior understanding combines image and signal processing, feature extraction, machine learning, and 3-D geometry. Application scenarios range from surveillance to indexing and retrieval, from patient care to industrial safety and sports analysis. Given the broad set of techniques used in video-based behavior understanding and the fast progress in this area, in this paper we organize and survey the corresponding literature, define unambiguous key terms, and discuss links among fundamental building blocks ranging from human detection to action and interaction recognition. The advantages and the drawbacks of the methods are critically discussed, providing a comprehensive coverage of key aspects of video-based human behavior understanding, available datasets for experimentation and comparisons, and important open research issues.
... There is much reported work on using gameware (e.g. the Wii Remote) for a variety of purposes, such as gesture-recognition-based applications [177][178][179], robot control [180], medical data interaction [181], and others [182,183]. Izadi et al. [184] developed an interactive reconstruction system called KinectFusion; their system collected live depth data from a moving Kinect camera to create accurate geometric models in real time. Kang et al. [185] suggested that the size of displays has increased (for example, VR screens) to the point where it is difficult to control an application using a keyboard and a mouse. ...
... Groups of students and educators interact together in SMALLab through the manipulation of up to five illuminated glowball objects, marker-attached rigid objects, and a set of standard HID devices, including wireless gamepads, Wii Remotes (Brain 2007; Shirai et al. 2007), and commercial wireless pointer/clicker devices. ...
... A bar graph displays the current fault tension value in real time. Students use a Wii Remote game controller (Brain 2007; Shirai et al. 2007), with embedded accelerometers, to generate fault events. The more vigorously a user shakes the device, the more the fault tension increases. ...
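A back-of-the-envelope Python sketch of that shake-to-tension mapping: shake intensity is estimated as the deviation of the accelerometer magnitude from 1 g and accumulated into a slowly decaying tension value. The gain and decay constants are invented for illustration, not taken from the SMALLab system.

```python
# Sketch: Wii Remote shake intensity -> accumulated "fault tension".
import math

GRAVITY = 1.0   # accelerometer readings in units of g
GAIN = 0.5      # how strongly shaking raises tension (assumed)
DECAY = 0.98    # tension relaxes a little each sample (assumed)

def update_tension(tension, ax, ay, az):
    """One accelerometer sample (in g) -> updated fault tension."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    shake = abs(magnitude - GRAVITY)   # zero when the remote is at rest
    return tension * DECAY + GAIN * shake

# Simulated stream: at rest for 5 samples, then vigorous shaking.
tension = 0.0
for ax, ay, az in [(0, 0, 1)] * 5 + [(2.5, -1.8, 0.4)] * 5:
    tension = update_tension(tension, ax, ay, az)
print(round(tension, 3))               # tension rises with the shaking
```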
... Finally, several HCI research studies have, like ours, used the low-cost Nintendo Wii remote as an input device for capturing complex hand movement and for implementing tangible or gestural interfaces. These include successful applications of hand-motion tracking for painting and drawing [23], music creation [8], and auto-racing and fencing simulation [39]. We extend those results to a new application area: mathematics education. ...
We introduce an embodied-interaction instructional design, the Mathematical Imagery Trainer (MIT), for helping young students develop grounded understanding of proportional equivalence (e.g., 2/3 = 4/6). Taking advantage of the low-cost availability of hand-motion tracking provided by the Nintendo Wii remote, the MIT applies cognitive-science findings that mathematical concepts are grounded in mental simulation of dynamic imagery, which is acquired through perceiving, planning, and performing actions with the body. We describe our rationale for and implementation of the MIT through a design-based research approach and report on clinical interviews with twenty-two 4th–6th grade students who engaged in problem-solving tasks with the MIT.
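At its core, the MIT's feedback loop reduces to a ratio check, sketched below in Python: feedback stays "green" only while the two hand heights remain in the target proportion, so 2/3 and 4/6 both succeed. Hand heights would come from Wii remote tracking in the actual system; the green/red framing and the tolerance value are assumptions for this illustration.

```python
# Sketch: proportional-equivalence feedback on two tracked hand heights.
def feedback(left_height: float, right_height: float,
             target_ratio: float = 2 / 3, tolerance: float = 0.05) -> str:
    """Return 'green' when left/right is within tolerance of the target."""
    if right_height <= 0:
        return "red"
    ratio = left_height / right_height
    return "green" if abs(ratio - target_ratio) <= tolerance else "red"

print(feedback(2, 3))   # green: 2/3 matches the target ratio
print(feedback(4, 6))   # green: 4/6 is an equivalent proportion
print(feedback(3, 4))   # red: 3/4 is off-ratio
```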