Child pedestrian injury is a preventable global health challenge. Successful training efforts focused on child behavior, including individualized streetside training and training in large virtual pedestrian environments, are laborious and expensive. This study considers the usability and feasibility of a virtual pedestrian environment "game" application to teach children safe street-crossing behavior via the internet, a medium that could be broadly disseminated at low cost. Ten 7- and 8-year-old children participated. They engaged in an internet-based virtual pedestrian environment and completed a brief assessment survey. Researchers rated children's behavior while engaged in the game. Both self-report and researcher observations indicated the internet-based system was readily used by the children without adult support. The youth understood how to engage in the system and used it independently and attentively. The program also was feasible. It provided multiple measures of pedestrian safety that could be used for research or training purposes. Finally, the program was rated by children as engaging and educational. Researcher ratings suggested children used the program with minimal fidgeting or boredom. The pilot test suggests an internet-based virtual pedestrian environment offers a usable, feasible, engaging, and educational environment for child pedestrian safety training. If future research finds children learn the cognitive and perceptual skills needed to cross streets safely within it, internet-based training may provide a low-cost medium to broadly disseminate child pedestrian safety training. The concept may be generalized to other domains of health-related functioning such as teen driving safety, adolescent sexual risk-taking, and adolescent substance use.
Using virtual reality (VR) to examine risky behavior that is mediated by interpersonal contact, such as agreeing to have sex, drink, or smoke with someone, offers particular promise and challenges. Social contextual stimuli that might trigger impulsive responses can be carefully controlled in virtual environments (VE), and yet manipulations of risk might be implausible to participants if they do not feel sufficiently immersed in the environment. The current study examined whether individuals can display adequate evidence of presence in a VE that involved potential interpersonally induced risk: meeting a potential dating partner. Results offered some evidence for the potential of VR for the study of such interpersonal risk situations. Participants' reactions to the scenario and risk-associated responses to the situation suggested that the embodied nature of virtual reality overrode the reality of the risk's impossibility, allowing participants to experience adequate situational embedding, or presence.
In most existing immersive virtual environments, 3D geometry is imported from external packages. Within ICOME (an Immersive Collaborative 3D Object Modelling Environment) we focus on the immersive construction of 3D geometrical objects within the environment itself. Moreover, the framework allows multiple people to simultaneously undertake 3D modelling tasks in a collaborative way. This article describes the overall architecture, which conceptually follows a client/server approach. The various types of clients, which are implemented, are described in detail. Some illustrative 3D object modelling examples are given. Extensions to the system with regard to 3D audio are also mentioned.
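To make the client/server collaboration concrete, the following Python sketch (our own simplification; the operation names, in-process callbacks and scene representation are assumptions, not ICOME's actual protocol) shows a server applying modelling operations from several clients to a shared scene and echoing them back so all clients converge on the same geometry.

```python
# Minimal sketch of a collaborative modelling server (assumed design, not ICOME's API).
from dataclasses import dataclass, field

@dataclass
class Operation:
    client_id: int
    kind: str          # e.g. "create_box", "translate"
    payload: dict

@dataclass
class ModellingServer:
    scene: dict = field(default_factory=dict)    # object_id -> parameters
    clients: list = field(default_factory=list)  # connected client callbacks
    next_id: int = 0

    def connect(self, on_update):
        self.clients.append(on_update)

    def submit(self, op: Operation):
        """Apply an operation to the shared scene and broadcast it."""
        if op.kind == "create_box":
            obj_id = self.next_id
            self.next_id += 1
            self.scene[obj_id] = {"type": "box", **op.payload}
        elif op.kind == "translate":
            self.scene[op.payload["id"]]["position"] = op.payload["position"]
        for notify in self.clients:   # echo to all clients, including the sender
            notify(op)

server = ModellingServer()
server.connect(lambda op: print("client A sees", op.kind))
server.connect(lambda op: print("client B sees", op.kind))
server.submit(Operation(0, "create_box", {"size": (1, 1, 1), "position": (0, 0, 0)}))
```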
This paper presents a model of (en)action from a conceptual and theoretical point of view. This model is used to provide solid bases to overcome the complexity of designing virtual environments for learning (VEL). It provides a common grounding for trans-disciplinary collaborations where embodiment can be perceived as the cornerstone of the project. Where virtual environments are concerned, both computer scientists and educationalists have to deal with the learner/user’s body; therefore the model provides tools with which to approach both human actions and learning processes within a threefold model. It is mainly based on neuroscientific research, including enaction and the neurophysiology of action.
In the real world, vision operates in harmony with self-motion, yielding an unambiguous perception of three-dimensional (3D) space. In laboratory conditions, because of technical difficulties, researchers studying 3D perception have often preferred the substitute of a stationary observer, somewhat neglecting aspects of the action-perception cycle. Recent results in visual psychophysics have proved that self-motion and visual processes interact, leading the moving observer to interpret a 3D virtual scene differently from a stationary observer. In this paper we describe a virtual environment (VE) framework which presents very interesting characteristics for designing experiments on visual perception during action. These characteristics arise largely from the design of a unique motion capture device: first, its accuracy and minimal latency in position measurement; second, its ease of use and adaptability to different display interfaces. Such a VE framework enables the experimenter to recreate stimulation conditions characterised by a degree of sensory coherence typical of the real world. Moreover, because of its accuracy and flexibility, the same device can be used as a measurement tool to perform elementary but essential calibration procedures. The VE framework has been used to conduct two studies which compare the perception of 3D variables of the environment in moving and in stationary observers under monocular vision. The first study concerns the perception of absolute distance, i.e. the distance separating an object and the observer. The second study concerns the perception of the orientation of a surface, both in the absence and presence of conflicts between static and dynamic visual cues. In both cases, the VE framework has enabled the design of optimal experimental conditions, shedding light on the role of action in 3D visual perception.
Virtual Reality and Artificial Intelligence provide suitable techniques to improve the quality of computer games. While the former offers mechanisms to model the environment and characters' physical features, the latter provides models and tools for building characters, namely Synthetic Actors or Believable Agents, which can exhibit intelligent social behaviour and express personality and emotions. Current architecture proposals for Synthetic Actors do not fully meet the requirements of long-term games, such as strategy and adventure games, where it is necessary to guarantee both personality stability and reactive emotional responses, which may be contradictory. In this work, we propose a new Synthetic Actor model that tightly connects emotions and social attitudes to personality, providing long-term coherent behaviour. This model has been applied to two games presented here as case studies.
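To illustrate the tension between personality stability and reactive emotions, here is a minimal sketch (an assumed simplification, not the authors' model): the emotional state reacts immediately to events but decays toward a baseline fixed by personality, so short-term reactions coexist with long-term coherence.

```python
# Illustrative sketch: emotions react to events but decay toward a personality baseline.
# The dimensions and the decay rule are assumptions, not the paper's actual model.

class SyntheticActor:
    def __init__(self, baseline_valence, baseline_arousal, decay=0.2):
        self.baseline = {"valence": baseline_valence, "arousal": baseline_arousal}
        self.emotion = dict(self.baseline)   # current emotional state
        self.decay = decay                   # pull toward personality per tick

    def react(self, valence_delta, arousal_delta):
        """Immediate emotional response to a game event."""
        self.emotion["valence"] += valence_delta
        self.emotion["arousal"] += arousal_delta

    def tick(self):
        """Long-term coherence: drift back toward the personality baseline."""
        for k in self.emotion:
            self.emotion[k] += self.decay * (self.baseline[k] - self.emotion[k])

grumpy_guard = SyntheticActor(baseline_valence=-0.5, baseline_arousal=0.2)
grumpy_guard.react(valence_delta=+0.8, arousal_delta=+0.5)  # player gives a gift
for _ in range(10):
    grumpy_guard.tick()                                      # mood settles back
print(grumpy_guard.emotion)
```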
Virtual reality (VR) has been making inroads into medicine in a broad spectrum of applications, including medical education, surgical training, telemedicine, surgery and the treatment of phobias and eating disorders. The extensive and innovative applications of VR in medicine, made possible by the rapid advancements in information technology, have been driven by the need to reduce the cost of healthcare while enhancing the quality of life for human beings.
In this paper, we discuss the design, development and realisation of an innovative technology known as the Georgia Tech Wearable Motherboard™ (GTWM), or the “Smart Shirt”. The principal advantage of GTWM is that it provides, for the first time, a very systematic way of monitoring the vital signs of humans in an unobtrusive manner. The flexible databus integrated into the structure transmits the information to monitoring devices such as an EKG machine, a temperature recorder, a voice recorder, etc. GTWM is lightweight and can be worn easily by anyone, from infants to senior citizens. We present the universal characteristics of the interface pioneered by the Georgia Tech Wearable Motherboard™ and explore the potential applications of the technology in areas ranging from combat to geriatric care. The GTWM is the realisation of a personal information processing system that gives new meaning to the term ubiquitous computing. Just as the spreadsheet pioneered the field of information processing that brought “computing to the masses”, it is anticipated that the Georgia Tech Wearable Motherboard™ will bring personalised and affordable healthcare monitoring to the population at large.
In this paper, we present a system for performing a complex surgical intervention using virtual reality (VR) technology. With the aid of the system, the intervention can be planned and simulated exactly before performing it in reality, and important additional information can be obtained during the simulation. Before working in VR, finite element models of the patient's head are generated from CT images. Based on these models, additional work is done in VR, where the patient's skull is cut into several pieces, which are then re-positioned. By moving and shifting the obtained pieces, the goal is to increase the volume inside the skull, called the intracranial volume. Until now, it was not possible to measure the achieved increase of the intracranial volume. Using our system, however, it is now possible to calculate this volume online during each step of the virtual intervention. The obtained results are used for the surgical intervention in reality.
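One generic way such a volume can be computed online for a closed triangulated surface, for instance after each virtual cut and repositioning step, is to sum signed tetrahedron volumes over the mesh triangles; the sketch below illustrates the idea and is not necessarily the method used by the authors.

```python
# Signed-volume sketch for a closed triangle mesh (generic method, shown here only
# to illustrate how an enclosed volume can be evaluated after each virtual step).
def mesh_volume(vertices, triangles):
    """vertices: list of (x, y, z); triangles: list of (i, j, k) with consistent winding."""
    volume = 0.0
    for i, j, k in triangles:
        ax, ay, az = vertices[i]
        bx, by, bz = vertices[j]
        cx, cy, cz = vertices[k]
        # signed volume of the tetrahedron (origin, a, b, c)
        volume += (ax * (by * cz - bz * cy)
                   - ay * (bx * cz - bz * cx)
                   + az * (bx * cy - by * cx)) / 6.0
    return abs(volume)

# Unit cube as a quick check (12 triangles, expected volume 1.0)
v = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
t = [(0,2,1),(0,3,2),(4,5,6),(4,6,7),(0,1,5),(0,5,4),
     (1,2,6),(1,6,5),(2,3,7),(2,7,6),(3,0,4),(3,4,7)]
print(mesh_volume(v, t))  # -> 1.0
```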
Current computer-aided assembly systems provide engineers with a variety of spatial snapping and alignment techniques for interactively defining the positions and attachments of components. With the advent of haptics and its integration into virtual assembly systems, users now have the potential advantage of tactile information. This paper reports research that aims to quantify how the provision of haptic feedback in an assembly system can affect user performance. To investigate human–computer interaction processes in assembly modeling, performance of a peg-in-hole manipulation was studied to determine the extent to which haptics and stereovision may impact on task completion time. The results support two important conclusions: first, it is apparent that small (i.e. visually insignificant) assembly features (e.g. chamfers) affect overall task completion times only when haptic feedback is provided; and second, that the difference is approximately similar to the values reported for equivalent real-world peg-in-hole assembly tasks.
In e-commerce, the role of trust becomes vital for establishing and maintaining customer relationships. Drawing from established theoretical work on trust and relationship marketing, this paper synthesises a series of trust constructs, determinant variables and trust building processes, and proposes a framework for the formation of trust in customer-business relationships. The framework is conceptualised in the context of an electronic servicescape, where trust is formed through iterative interactions with promises being made, enabled and fulfilled. Based on this framework, the paper illustrates how the application of agent and virtual reality technologies can provide the environment and facilitate the expressiveness demanded by such a servicescape.
This paper describes the prototype of a conversational agent embedded within a collaborative virtual environment. This prototype, Ulysse, accepts spoken utterances from a user, enabling him or her to navigate within relatively complex virtual worlds. It also accepts and executes commands to manipulate objects in the virtual world. We are beginning to adapt our agent to parse certain written descriptions of simultaneous actions of world entities and to animate these entities according to the given description.
The paper first describes what we can expect from a spoken interface to improve the interaction quality between a user and virtual worlds. Then it describes Ulysse's architecture, which includes a speech recognition device together with a speech synthesiser. Ulysse consists of a chart parser for spoken words, a semantic analyser, a reference resolution system, a geometric reasoner, a dialogue manager, and an animation manager, and has been integrated in the DIVE virtual environment. Ulysse can be ‘personified’ using a set of behavioural rules. A number of tests have demonstrated its usefulness for user navigation. We are currently developing simulations of written reports of car accidents within Ulysse; such simulations provide dynamic recreations of accident scenarios for individual and collaborative reviewing and assessment.
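As a rough illustration of how such a chain of components might pass an utterance along, the toy pipeline below follows the module names in the abstract, while the interfaces and the trivial behaviour inside each stage are invented for illustration.

```python
# Toy pipeline in the spirit of the architecture described above; the component
# interfaces and the behaviour inside each stage are illustrative assumptions.
def speech_recognizer(audio):
    return "move to the red cube"            # recognised word sequence

def parser(words):
    return {"action": "move", "target": "the red cube"}

def reference_resolver(parse, scene):
    # Map the referring expression onto an entity in the virtual world.
    parse["target_id"] = scene[parse["target"]]
    return parse

def geometric_reasoner(parse, positions):
    # Pick a viewpoint just in front of the resolved object.
    x, y, z = positions[parse["target_id"]]
    return {"action": "navigate", "goal": (x, y - 2.0, z)}

def animation_manager(command):
    print("animating avatar:", command)

scene = {"the red cube": "cube_17"}
positions = {"cube_17": (4.0, 1.0, 0.0)}
utterance = reference_resolver(parser(speech_recognizer(None)), scene)
animation_manager(geometric_reasoner(utterance, positions))
```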
One of the first areas where virtual reality found a practical application was military training. Two fairly obvious reasons have driven the military to explore and employ this kind of technique in their training: to reduce exposure to hazards and to increase stealth. Many aspects of combat operations are very hazardous, and they become even more dangerous if the combatant seeks to improve his performance. Some smart weapons are autonomous, while others are remotely controlled after they are launched. This allows the shooter and weapon controller to launch the weapon and immediately seek cover, thus decreasing his exposure to return fire. Before launching a weapon, the person who controls that weapon must acquire and perceive as much information as possible, not only from the environment, but also from the people who inhabit it. Intelligent virtual agents (IVAs) are used in a wide variety of simulation environments, especially to simulate realistic situations such as high-fidelity virtual environments (VEs) for military training that allow thousands of agents to interact in battlefield scenarios. In this paper, we propose a perceptual model which seeks to introduce more coherence between IVA perception and human perception, increasing the psychological coherence between real life and the VE experience. Agents lacking this perceptual model could react in a non-realistic way, hearing or seeing things that are too far away or hidden behind other objects. The perceptual model we propose introduces human limitations into the agents' perceptual model with the aim of reflecting human perception.
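A minimal sketch of the kind of filtering such a perceptual model implies is shown below; the ranges, field of view and occlusion test are illustrative assumptions rather than the authors' actual model.

```python
import math

# Illustrative perception filter for an intelligent virtual agent (IVA).
# Ranges, field of view and the occlusion test are assumptions for this sketch.
VISUAL_RANGE = 50.0      # metres beyond which nothing is seen
FIELD_OF_VIEW = 120.0    # degrees
HEARING_RANGE = 30.0     # metres beyond which nothing is heard

def blocks_line(a, b, obstacle):
    """Crude occlusion test: obstacle is a (centre, radius) disc near the segment a-b."""
    (cx, cy), r = obstacle
    ax, ay, bx, by = *a, *b
    t = max(0.0, min(1.0, ((cx - ax) * (bx - ax) + (cy - ay) * (by - ay))
                     / max((bx - ax) ** 2 + (by - ay) ** 2, 1e-9)))
    px, py = ax + t * (bx - ax), ay + t * (by - ay)
    return math.hypot(cx - px, cy - py) < r

def can_see(agent_pos, agent_heading_deg, target_pos, occluders):
    dx, dy = target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1]
    if math.hypot(dx, dy) > VISUAL_RANGE:
        return False                                   # too far away
    bearing = math.degrees(math.atan2(dy, dx)) - agent_heading_deg
    bearing = (bearing + 180) % 360 - 180
    if abs(bearing) > FIELD_OF_VIEW / 2:
        return False                                   # outside field of view
    return not any(blocks_line(agent_pos, target_pos, o) for o in occluders)

def can_hear(agent_pos, sound_pos, loudness=1.0):
    return math.dist(agent_pos, sound_pos) <= HEARING_RANGE * loudness

# A wall-like occluder between agent and target makes the target invisible.
print(can_see((0, 0), 0.0, (20, 5), occluders=[((10, 2), 1.0)]))
```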
The creation of virtual humans capable of behaving and interacting realistically with each other requires the development of autonomous, believable social agents. Standard goal-oriented approaches are not well suited to this task because they do not take into account important characteristics identified by the social sciences. This paper tackles the issue of a general social reasoning mechanism, discussing its basic functional requirements from a sociological perspective, and proposing a high-level architecture based on Roles, Norms, Values and Types.
Constraint-based simulation is a fundamental concept used for assembly in a virtual environment. The constraints (axial, planar, etc.) are extracted from the assembly models in the CAD system and are simulated during the virtual assembly operation to represent real-world operations. In this paper, we present an analysis of the combinations and order of application of axial and planar constraints used in assembly. Methods and algorithms for checking and applying the constraints in the assembly operation are provided. An object-oriented model for managing these constraints in the assembly operation is discussed.
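As a rough illustration of what checking an axial or planar constraint can involve, the sketch below applies simplified geometric tests of our own; the paper's actual methods, tolerances and object model are not reproduced.

```python
import numpy as np

# Simplified constraint checks for virtual assembly (illustrative assumptions only).
TOL = 1e-3

def axial_constraint_satisfied(axis_a_point, axis_a_dir, axis_b_point, axis_b_dir):
    """Two cylindrical axes are co-linear: parallel directions and zero offset."""
    a_dir = axis_a_dir / np.linalg.norm(axis_a_dir)
    b_dir = axis_b_dir / np.linalg.norm(axis_b_dir)
    if np.linalg.norm(np.cross(a_dir, b_dir)) > TOL:
        return False                                   # axes are not parallel
    offset = np.asarray(axis_b_point) - np.asarray(axis_a_point)
    return np.linalg.norm(np.cross(offset, a_dir)) < TOL  # b's point lies on a's axis

def planar_constraint_satisfied(plane_a_point, plane_a_normal, plane_b_point, plane_b_normal):
    """Two planar faces mate: anti-parallel normals and coincident planes."""
    a_n = plane_a_normal / np.linalg.norm(plane_a_normal)
    b_n = plane_b_normal / np.linalg.norm(plane_b_normal)
    if np.dot(a_n, b_n) > -1.0 + TOL:
        return False                                   # faces must point toward each other
    gap = np.dot(np.asarray(plane_b_point) - np.asarray(plane_a_point), a_n)
    return abs(gap) < TOL

# A peg whose axis coincides with a hole's axis satisfies the axial constraint.
print(axial_constraint_satisfied(np.array([0., 0., 0.]), np.array([0., 0., 1.]),
                                 np.array([0., 0., 5.]), np.array([0., 0., -1.])))
```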
Despite its central role in the constitution of a truly enactive interface, 3D interaction through full-body movement has been hindered by a number of technological and algorithmic factors, among them cumbersome magnetic equipment and the underdetermined data sets provided by less invasive video-based approaches. In the present paper, we explore the recovery of the full body posture of a standing subject in front of a stereo camera system. The 3D positions of the hands, the head and the center of the trunk segment are extracted in real time and provided to the body posture recovery algorithmic layer. We focus on the comparison between numeric and analytic inverse kinematics approaches in terms of performance and overall quality of the reconstructed body posture. Algorithmic issues arise from the very partial and noisy input and the singularity of the human standing posture. Despite stability concerns, the results confirm the pertinence of this approach in this demanding context.
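For a flavour of the analytic alternative, the standard two-link closed-form solution below (a textbook formula, not the paper's full-body solver) recovers shoulder and elbow angles from a tracked hand position, with clamping against the noisy input mentioned above.

```python
import math

# Textbook two-link analytic inverse kinematics in the plane; a stand-in for the
# analytic approaches discussed above, not the paper's full-body posture solver.
def two_link_ik(target_x, target_y, upper_len, lower_len):
    """Return (shoulder_angle, elbow_angle) in radians, or None if out of reach."""
    d2 = target_x ** 2 + target_y ** 2
    d = math.sqrt(d2)
    if d > upper_len + lower_len or d < abs(upper_len - lower_len):
        return None                                          # target unreachable
    cos_elbow = (d2 - upper_len ** 2 - lower_len ** 2) / (2 * upper_len * lower_len)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))        # clamp against noise
    k1 = upper_len + lower_len * math.cos(elbow)
    k2 = lower_len * math.sin(elbow)
    shoulder = math.atan2(target_y, target_x) - math.atan2(k2, k1)
    return shoulder, elbow

# Hand tracked 0.5 m forward and 0.2 m up from the shoulder, arm segments of 0.3 m each.
angles = two_link_ik(0.5, 0.2, 0.3, 0.3)
print([math.degrees(a) for a in angles])
```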
This case study evaluated the effect on cultural understanding of three different interaction modes, each teamed with a specific slice of the digitally reconstructed environment. The three interaction modes were derived from an initial descriptive theory of cultural learning as instruction, observation and action. A major aim was to ascertain whether task performance was similar to the development of understanding of the cultural context reached by participation in the virtual environment. A hypothesis was that if task performance is equivalent to understanding and engagement, we might be able to evaluate the success of virtual heritage environments (through engagement and education) without having to annoy the user with post-experience questionnaires. However, the results suggest that interaction in virtual heritage environments is so contextually embedded that subjective post-test questionnaires can still be more reliable than evaluating task performance.
Keywords: Palenque, Virtual heritage, Cultural learning, Mayan
The Haptic Cooperative Virtual Workspace (HCVW), where users can simultaneously manipulate and haptically feel the same object, is beneficial and in some cases indispensable for training a team of surgeons, as well as in application areas such as telerobotics and entertainment. In this paper we propose an architecture for the haptic cooperative workspace in which the participants can kinesthetically interact, feel and push each other simultaneously while moving in the simulation. This includes the ability to manipulate the same virtual object at the same time. A set of experiments carried out to investigate the haptic cooperative workspace is reported. A new approach to quantitatively evaluate the cooperative haptic system is proposed, which can be extended to evaluate haptic systems in general.
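One common way two users can simultaneously manipulate and feel the same virtual object is through spring-damper virtual couplings; the one-dimensional sketch below illustrates that generic idea and is not the architecture proposed in the paper.

```python
# Generic 1-D virtual-coupling sketch: two haptic devices push on one shared object
# through spring-damper links. Gains and time step are arbitrary illustrative values.
STIFFNESS = 300.0   # N/m
DAMPING = 5.0       # N*s/m
MASS = 0.5          # kg, shared virtual object
DT = 0.001          # 1 kHz haptic update rate

def coupling_force(device_pos, device_vel, object_pos, object_vel):
    """Force the object receives from one device (the device feels the reaction)."""
    return STIFFNESS * (device_pos - object_pos) + DAMPING * (device_vel - object_vel)

object_pos, object_vel = 0.0, 0.0
device_a = {"pos": 0.02, "vel": 0.0}    # user A holds the object from one side
device_b = {"pos": -0.01, "vel": 0.0}   # user B holds it from the other side

for _ in range(1000):                    # one simulated second
    f_a = coupling_force(device_a["pos"], device_a["vel"], object_pos, object_vel)
    f_b = coupling_force(device_b["pos"], device_b["vel"], object_pos, object_vel)
    object_accel = (f_a + f_b) / MASS
    object_vel += object_accel * DT
    object_pos += object_vel * DT
    # -f_a and -f_b would be rendered to users A and B respectively.

print(round(object_pos, 4))   # the object settles between the two hand positions
```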
Virtual Reality-based simulation technology has evolved into a useful design and analysis tool at an early stage in the design process for evaluating the performance of human-operated agricultural and construction machinery. Detecting anomalies in the design prior to building physical prototypes and expensive testing leads to significant cost savings. The efficacy of such simulation technology depends on how realistically the simulation mimics the real-life operation of the machinery. It is therefore necessary to achieve ‘real-time’ dynamic simulation of such machines with operator-in-the-loop functionality. Such simulation often imposes an intensive computational burden. A distributed architecture was developed for off-road vehicle dynamic models and 3D graphics visualization to spread the overall computational load of the system across multiple computational platforms. Multi-rate model simulation was also used to simulate the various system dynamics with different integration time steps, so that the computational power could be distributed more intelligently. This architecture consisted of three major components: a dynamic model simulator, a virtual reality simulator for 3D graphics, and an interface to the controller and input hardware devices. Several off-road vehicle dynamics models were developed with varying degrees of fidelity, as well as automatic guidance controller models and a controller area network interface to embedded controllers and user input devices. The simulation architecture reduced the computational load on any individual machine and increased the real-time simulation capability with complex off-road vehicle system models and controllers. This architecture provides an environment to test virtual prototypes of vehicle systems in real time and the opportunity to test the functionality of newly developed controller software and hardware.
Keywords: Real-time simulation, Distributed architecture, Virtual reality, Vehicle dynamics models, Multi-rate simulation
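The multi-rate idea described above, in which fast dynamics are integrated with a smaller time step than slower subsystems, can be sketched as follows; the step sizes and toy models are illustrative assumptions, not the paper's vehicle dynamics models.

```python
# Multi-rate simulation sketch: a fast subsystem (e.g. tyre/terrain forces) is stepped
# several times per step of a slow subsystem (e.g. chassis state for visualization).
# The models and step sizes below are placeholders, not the paper's vehicle dynamics.
FAST_DT = 0.001    # 1 kHz for the fast dynamics
SLOW_DT = 0.010    # 100 Hz for the slow dynamics
SUBSTEPS = int(SLOW_DT / FAST_DT)

def step_fast(state, dt):
    """Toy fast dynamics: first-order response of a force toward a commanded value."""
    state["force"] += dt * 50.0 * (state["command"] - state["force"])
    return state

def step_slow(state, force, dt):
    """Toy slow dynamics: integrate vehicle speed from the latest fast-side force."""
    state["speed"] += dt * force / state["mass"]
    return state

fast = {"force": 0.0, "command": 100.0}
slow = {"speed": 0.0, "mass": 1500.0}

for _ in range(100):                 # one second of simulated time
    for _ in range(SUBSTEPS):        # the fast loop runs 10x per slow step
        fast = step_fast(fast, FAST_DT)
    slow = step_slow(slow, fast["force"], SLOW_DT)

print(round(fast["force"], 1), round(slow["speed"], 3))
```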
In our everyday life, interaction with the world consists of a complex mixture of audio (speech and sounds), vision and touch. Hence, we may conclude that the most natural means of human communication is multi-modal. Our overall research goal is to develop a natural 3D human-computer interaction framework for modelling purposes without mouse or keyboard and where many different sensing modalities will be used simultaneously and cooperatively. This article will focus on the various interface issues on the way to an intuitive environment in which one or more users can model their prototypes in a natural manner. Some technical framework decisions, such as messaging and network systems, will also be investigated.
In this paper, we discuss the development of Virtual Training Studio (VTS), a virtual environment-based training system that allows training supervisors to create training instructions and allows trainees to learn assembly operations in a virtual environment. Our system is mainly focused on the cognitive side of training, so that trainees can learn to recognize parts, remember assembly sequences, and correctly orient the parts during assembly operations. Our system enables users to train using the following three training modes: (1) Interactive Simulation, (2) 3D Animation, and (3) Video. Implementing these training modes required us to develop several new system features. This paper presents an overview of the VTS system and describes a few main features of the system. We also report user test results that show how people train using our system. The user test results indicate that the system is able to support a wide variety of training preferences and works well to support training for assembly operations.
The focus of this research was to examine how effectively augmented reality displays, generated with a wearable computer, could be used for aiding an operator performing a manual assembly task. Fifteen subjects were asked to assemble a computer motherboard using four types of instructional media: a paper manual, computer-aided instruction, an opaque augmented reality display, and a see-through augmented reality display. The time of assembly and assembly errors were measured for each type of instructional media, and a questionnaire focusing on usability was administered to each subject at the end of each condition. The results of the experiment indicated that the augmented reality conditions were more effective instructional aids for the assembly task than either the paper instruction manual or the computer-aided instruction. The see-through augmented reality display resulted in the fastest assembly times, followed by the opaque augmented reality display, the computer-aided instruction, and the paper instructions. In addition, subjects made fewer errors using the augmented reality conditions than with the computer-aided and paper instructional media. However, while the two augmented reality conditions were the more effective instructional media when time for assembly was the response measure, there were still some important usability issues associated with the augmented reality technology that were not present in the non-augmented reality conditions.
Virtual reality (VR) technology has matured during the past few years to a degree where real industrial applications have become feasible. The work described in this paper involved collaboration between Heriot-Watt University and BAE Systems and aimed to establish the feasibility of using augmented VR to support complex information delivery in high-precision defence assembly. Laboratory and field studies were conducted which investigated performance when using augmented VR compared to conventional methods of information delivery. The results show that augmented VR is comparable to conventional methods of information delivery in terms of latencies and errors, but allows less disruption to work and greater mobility. There appear to be no adverse effects on operators from using VR, and operators are generally positive towards using VR technology. The feasibility of supporting augmented VR with wearable technology is also demonstrated. The overall results are discussed in terms of further application of VR in industrial settings.
Assembly modelling is the process of capturing entities and activity information related to assembling and assembly. Currently, most CAD systems have been developed to ease the design of individual components, but are limited in their support for assembly design and planning, which is crucial for reducing the cost and processing time in complex design, constraint analysis and assembly task planning. This paper presents a framework for a two-handed virtual assembly (VA) planner for assembly tasks, which coordinates the two hands jointly for feature-based manipulation, assembly analysis and constraint-based task planning. Feature-based manipulation highlights the important assembly features (e.g. dynamic reference frames, moving arrows, mating features) to guide users through the assembly in an efficient and fluid manner. Users can freely navigate and move the mating pair along a collision-free path. The free motion of two-handed input in assembly is further restricted to the allowable motion guided by the constraints recognised on-line. The allowable motion in assembly is planned by logic steps derived from the analysis of constraints and their translation as the assembly progresses. No preprocessing or predefined assembly sequence is necessary, since the planning is produced in real time upon the two-handed interactions. Mating features and constraints in databases are automatically updated after each assembly step to simplify the planning process. The two-handed task planner has been developed and tested on several assembly examples, including a drill (12 parts) and a robot (17 parts). The system can be generally applied to the interactive task planning of assembly-type applications.
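As an illustration of restricting free motion to the allowable motion implied by a recognised constraint (a generic projection of our own, not the paper's planner), a hand translation can be projected onto a mating axis once an axial constraint has been recognised:

```python
import numpy as np

# Generic sketch: once an axial mating constraint is recognised, the part's free motion
# is restricted to sliding along the mating axis. Illustrative assumption, not the paper's planner.
def restrict_translation_to_axis(requested_translation, axis_direction):
    """Project the user's hand translation onto the allowable axis of motion."""
    axis = np.asarray(axis_direction, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.dot(requested_translation, axis) * axis

# The user's two-handed input tries to move the peg sideways while inserting it;
# only the component along the hole axis (here the z axis) is allowed through.
hand_motion = np.array([0.004, -0.002, -0.010])   # metres per frame
allowed = restrict_translation_to_axis(hand_motion, [0.0, 0.0, 1.0])
print(allowed)   # -> [ 0.  0. -0.01]
```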
Distributed virtual environments (DVEs) need to address issues related to the control of network traffic, resource management, and scalability. Given the distributed nature of these environments, the main problems they need to overcome are the efficient distribution of workload among the servers and the minimization of the communication cost. A lot of work has been done in this direction, and numerous relevant techniques and algorithms have been proposed. The majority of these approaches focus mainly on user entities and their interactions. However, most actual DVE systems include additional, non-dynamic elements, denoted as objects, whose presence can affect users' behavior. This paper introduces virtual objects' attributes and proposes two approaches that exploit these attributes in order to handle workload assignment and communication cost in DVE systems. Both approaches take into account scenario-specific aspects of DVE systems, such as the impact that entities' attributes have on each other and the way this impact can affect the system's state. These scenario-specific aspects are then combined with quantitative factors of the system, such as workload, communication cost, and utilization. The experiments conducted to validate the behavior of the proposed approaches show that incorporating objects' presence can improve the DVE system's performance. More specifically, objects' presence and their attributes can assist in significantly reducing the communication cost while effectively distributing workload among the system's servers.
Keywords: Distributed virtual environments, Load balancing, VR techniques and systems
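A highly simplified flavour of assigning entities, including non-dynamic objects, to servers while trading workload off against inter-server communication is sketched below; the weights, scoring rule and greedy strategy are assumptions for the sketch, not the approaches evaluated in the paper.

```python
# Simplified greedy partitioning sketch for a DVE: entities (users and objects) are
# assigned to the server that balances added workload against inter-server messages
# to the entities they interact with. Weights and costs are illustrative assumptions.
WORKLOAD_WEIGHT = 1.0
COMM_WEIGHT = 2.0

def assign_entities(entities, interactions, num_servers):
    """entities: {name: workload}; interactions: set of (name, name) pairs."""
    load = [0.0] * num_servers
    placement = {}
    for name, workload in sorted(entities.items(), key=lambda e: -e[1]):
        best_server, best_score = 0, float("inf")
        for s in range(num_servers):
            # interactions with entities already placed on a different server
            remote = sum(1 for a, b in interactions
                         if (a == name and placement.get(b, s) != s)
                         or (b == name and placement.get(a, s) != s))
            score = WORKLOAD_WEIGHT * (load[s] + workload) + COMM_WEIGHT * remote
            if score < best_score:
                best_server, best_score = s, score
        placement[name] = best_server
        load[best_server] += workload
    return placement

entities = {"user1": 3.0, "user2": 3.0, "object_door": 1.0, "object_vehicle": 2.0}
interactions = {("user1", "object_door"), ("user2", "object_vehicle")}
print(assign_entities(entities, interactions, num_servers=2))
```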
This paper describes an interactive system for specifying robotic tasks using virtual tools that allow an operator to reach into a live video scene and direct robots to use corresponding real tools in complex scenarios that involve integrating a variety of otherwise autonomous technologies. The attribute rich virtual tools concept provides a human-machine interface that is robust to unanticipated developments and tunable to the specific requirements of a particular task. This Interactive Specification concept is applied to intermediate manufacturing tasks.