Virtual Environments - Science topic
Questions related to Virtual Environments
Hello everyone,
I’m exploring research in Virtual Reality (VR) combined with psychology, specifically to support Virtual Environment for Rehabilitation Therapy (VERT) for my master's thesis research. I wanted to understand the latest advancements, key research areas, or potential applications in this field. Could anyone provide insights or recommend recent studies or directions where VR is being effectively integrated with psychological therapy?
Thank you for your time and guidance
Best regards,
Stefanus Benhard
Can you imagine being able to control a virtual environment with your emotions? Or having a game adapt to your needs based on how nervous you get? More and more immersive Virtual Reality applications are incorporating biosensors, especially in the academic world. This combination can help improve human performance and well-being.
Immersive Virtual Reality (iVR) has revolutionized how we interact with digital environments due to its immersive capabilities, but when combined with a biofeedback system, this technology reaches a new level of interaction and personalization.
Biofeedback is a technique that acts as an immediate reflection of an individual's physiological response to stimuli.
HOW DOES BIOFEEDBACK WORK IN AN IVR APPLICATION?
- A VR application is created, and a biofeedback system is planned.
- Biosensors are selected to capture a physiological signal (such as HR or breathing).
- These data are analysed using algorithms (artificial intelligence) to interpret the individual's emotional state. For example, an increase in heart rate could indicate stress.
- The interpreted information is displayed in the virtual environment through visual or auditory systems that can be more obvious (such as directly showing the HR) or more subtle (modifying the environment or the difficulty of the experience).
- This biofeedback system helps the individual understand their physiological response and become aware of how to control it (see the minimal loop sketch after this list).
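To make the loop concrete, here is a minimal Python sketch of such an adaptive loop. It is illustrative only: `read_heart_rate` stands in for whatever biosensor SDK you use (here it just simulates noisy readings), `set_environment_difficulty` stands in for the VR engine hook, and the baseline and scaling constants would normally come from a per-user calibration or a trained model.

```python
# Minimal sketch of the biofeedback loop described above; all names,
# thresholds, and the simulated sensor are illustrative placeholders.
import random
import time

BASELINE_HR = 70.0   # would normally be calibrated per user at rest
SMOOTHING = 0.9      # exponential smoothing to dampen sensor noise

def read_heart_rate():
    # Placeholder for the real biosensor call; simulates a noisy reading.
    return BASELINE_HR + random.uniform(-5.0, 30.0)

def set_environment_difficulty(level):
    # Placeholder for the VR engine hook (0.0 = calm, 1.0 = demanding).
    print(f"difficulty -> {level:.2f}")

smoothed_hr = BASELINE_HR
for _ in range(30):  # one adaptation step per second
    smoothed_hr = SMOOTHING * smoothed_hr + (1 - SMOOTHING) * read_heart_rate()
    # Proportional rule: the further HR rises above baseline (interpreted
    # as stress), the more the environment is softened.
    stress = max(0.0, min(1.0, (smoothed_hr - BASELINE_HR) / 40.0))
    set_environment_difficulty(1.0 - stress)
    time.sleep(1.0)
```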
WHAT DO PEOPLE DO WITH THIS COMBINATION? We conducted a review of 560 studies to analyze how biosensors are used in combination with iVR, and here's what we discovered:
- There is no consensus on how to use this combination or how to design applications.
- This combination is predominantly used in Psychology and Medicine, although its use is also growing in areas such as Education or Risk Prevention.
- The most monitored signals are HR (53.3%), EDA, and EEG. Surprisingly, eye-tracking is not used as much despite being incorporated into some virtual reality devices.
- Mostly, Desktop 6DOF devices are used because, being connected to a computer, they have more power and greater capacity to connect in real-time with biosensors.
- Experiences are mostly passive (41.6%), using biosensors only to view physiological data and not for interaction. On the other hand, 40.5% of experiences are interactive. However, only in 17.3% of experiences is a biofeedback system used that utilizes biosensor data for interaction.
This is a very promising field because it combines several disciplines, such as software development, medicine, and education. We need to continue working to establish a common framework for the use of these biosensors and explore what else they can offer us. The possibilities are limitless!!
You can find these results and much more in our recent paper titled "A systematic review of wearable biosensor usage in immersive virtual reality experiences" in the journal Virtual Reality: https://doi.org/10.1007/s10055-024-00970-9
Call for Papers
The new special issue of CMC-Computers, Materials & Continua, “New Trends in Immersive Virtual Environments”, is now open for submission.
Submission Deadline: 31 May 2025
Guest Editors
- Prof. Diego Vergara, Universidad Católica de Ávila, Spain
- Prof. Álvaro Antón-Sancho, Universidad Católica de Ávila, Spain
- Prof. Pablo Fernández-Arias, Universidad Católica de Ávila, Spain
Summary: The accelerated advancement of virtual reality (VR) and augmented reality (AR) technologies has led to the development of immersive virtual environments, transforming sectors such as education, entertainment, medicine, and commerce. In this context, new emerging trends present unprecedented opportunities for research and innovation. Topics such as improved interactivity and user experience, the use of artificial intelligence to generate adaptive environments, and the implementation of advanced haptic technologies are redefining the way users interact with virtual environments. In addition, the integration of VR and AR with the Internet of Things (IoT) promises to create digital ecosystems that are closer to today's society. Exploring these trends is crucial not only to understand the impact of these technologies on everyday life, but also to identify the ethical and security challenges that arise with their mass adoption. We invite scholars, developers, and practitioners from a variety of disciplines to contribute articles that explore these developments, offering critical analyses, case studies, reviews, and innovative proposals that advance the understanding and applications of immersive virtual environments in the digital age. Your participation will enrich the debate and foster a deeper, multidimensional understanding of these emerging technologies.
For submission guidelines and details, visit: https://www.techscience.com/.../immersive_virtual...
Is gamification integrated with technology-based environments, including mobile, AI, Virtual Reality, Augmented Reality, and so on, or do we have traditional, physical methods for gamification?
In other words, I am trying to do research in which I will focus only on technology-based methods and virtual environments. To do this, how should I apply my keyword of gamification in my research topic?
"gamification" or "digital gamification" or "digitalized game-based learning" or something else?
Hello ResearchGate community,
I am currently engaged in a research project in the field of robotics, focusing on the development and evaluation of photorealistic 3D virtual environments for robot manipulation and navigation. Our approach integrates Neural Radiance Fields (NeRF) and Unreal Engine 5 (UE5) to create these environments, aiming to bridge the gap between simulated training and real-world application in robotics.
Our main contributions include:
- The use of NeRF scene representations, specifically rendering and static geometry, learned from indoor scene videos, for creating realistic robot simulation environments.
- Demonstrating a faster method than previous studies in creating photorealistic 3D virtual environments of real-world interiors.
- Establishing that our visual guidance control policy has sufficient fidelity to enable effective simulation-reality transfer.
We are at a stage where we need to conduct quantitative evaluations to validate our approach and findings. Specifically, we are interested in methods that can effectively measure and compare the fidelity and accuracy of our photorealistic 3D environments against real-world environments, as well as the efficacy of simulation-reality transfer of visually guided control policies.
Could anyone suggest appropriate quantitative evaluation techniques or metrics that could be applied in this context? Any insights or references to similar studies would be greatly appreciated.
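As one concrete starting point (not necessarily the best metric for your setting), image-space fidelity between registered real and rendered frames is often quantified with PSNR and SSIM. A minimal sketch using scikit-image, assuming aligned, equally sized frames and placeholder file names:

```python
# Sketch: PSNR and SSIM between a real photograph and the corresponding
# rendered frame; file names are placeholders, and the two images must be
# registered and equally sized for the comparison to be meaningful.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

real = io.imread("real_frame.png")
rendered = io.imread("rendered_frame.png")

print("PSNR (dB):", peak_signal_noise_ratio(real, rendered))
# channel_axis=-1 for RGB images (use multichannel=True on older scikit-image)
print("SSIM:", structural_similarity(real, rendered, channel_axis=-1))
```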
Thank you for your assistance.
Best regards,
There are many technical challenges in VR/AR. Among these, which is the most important technical challenge, without solving which VR/AR will miss the mass market? Let us discuss.
The great number of resources available in academic social networks, and the researchers interacting through them, looks to be an unquestionable factor supporting the organization and execution of learning activities. Do you agree? Do you have any particular experience in applying those resources in Education?
As a psychotherapist, I am interested in exploring the potential of virtual reality technology as a treatment tool for individuals suffering from Post-Traumatic Stress Disorder (PTSD). PTSD is a condition that can develop after an individual experiences or witnesses a traumatic event, and can manifest as symptoms such as flashbacks, avoidance, and hyperarousal. Traditional treatment methods for PTSD include therapies such as Cognitive Behavioral Therapy (CBT) and Eye Movement Desensitization and Reprocessing (EMDR), which have been found to be effective in reducing symptoms.
In recent years, virtual reality therapy has emerged as a promising alternative treatment for PTSD. Virtual reality therapy involves the use of virtual environments to expose individuals to simulations of traumatic events in a controlled and safe manner, allowing them to process and cope with their traumatic memories and feelings. A growing body of research has demonstrated that virtual reality therapy can be effective in reducing symptoms of PTSD, such as anxiety, avoidance, and flashbacks.
For example, a randomized controlled trial by Rothbaum et al. (2001) found that virtual reality exposure therapy was effective in reducing PTSD symptoms compared to a waiting-list control group. Additionally, several case studies have reported success in treating PTSD symptoms with virtual reality therapy, including with veterans and first responders, who often present with PTSD due to their professional experiences. For example, a case study by Rizzo et al. (2008) reported a significant reduction in PTSD symptoms in a sample of veterans with combat-related PTSD after treatment with virtual reality exposure therapy.
As a clinician, I am excited about the potential of virtual reality therapy to revolutionize the treatment of PTSD, providing a more efficient, accessible and cost-effective treatment option. I am also interested in further exploring the use of virtual reality therapy in my practice and observing the effects in my patients. This is a promising avenue that can be incorporated into the treatment plan of my patients and I look forward to keeping up with the latest advancements in this field.
1) How do you understand/characterize the metaverse?
2) Is it a disruptive innovation?
3) Will the metaverse replace the Internet?
4) How will legal, ethical and moral issues be dealt with in the metaverse?
5) Will the value chain of products and services in the metaverse differ from the real world?
6) What will sensations and perceptions be like in the metaverse?
7) Is it the right time for companies to make their migration to the metaverse?
8) Is current technology suitable for the metaverse to become a reality?
9) What is the impact of the metaverse on society?
10) Will the metaverse be a new Second Life?
We are running a VR study on a specific VR application. We want to see the impact of the content of our VR application on participants, so we need a control condition (placebo game) to account for the novelty effect of VR on participants. Hence, we need an interactive VR game (not seated), preferably a procedural task/game.
I would appreciate it if you could share any article or valid source that has used a publicly available VR game as its control condition.
Recommendations of any kind for XR; VR; AR would help me a lot. Thank you very much for your help!
Hi all, I am trying to install the GPU accelerated version of AutoDock on my Windows 10 machine. Windows 10 Update 2004 allows for the installation of a full bash environment (SUSE, Ubuntu, etc.) that is not a virtual environment. I have installed Ubuntu and am wondering how I can install the AutoDock CUDA programs into that environment.
The link to the program is below:
Thanks in advance.
I am a doctoral candidate at Northcentral University. In partial fulfillment of a doctoral degree, I am conducting a study that involves understanding emotional intelligence and decision-making within virtual MIS teams, especially focused on team leaders and MIS staff members. Specifically, I would like permission to use the Wong and Law Emotional Intelligence Scale (WLEIS). Participants for this study must hold one of the following positions in their organization: first-line supervisor, team leader, or team staff member in an MIS virtual environment (e.g., developers, network engineers, computer architects, etc.). This study is quantitative and involves an emotional intelligence questionnaire.
Dear fellow researchers,
I am looking for some advice on eye-tracking-enabled VR headsets. I am currently contemplating between the HTC Vive Pro Eye and the Pico Neo 3 Pro Eye... Both have built-in eye tracking by Tobii. Does anyone have any experience with either of them? Or can you recommend any other brands?
We are planning to use it for research in combination with EEG and EDA sensors to assess human response to the built environment. Any advice is much appreciated.
Do you start to see mental pictures when you read a book? Are you emotionally touched by movies and really "dive" into the fiction during watching? Then you have high immersive tendencies.
This can be measured by the Immersive Tendencies Questionnaire (ITQ) from Witmer & Singer. It is important to know this because if, for example, you have a group of people with high immersive tendencies, they might rate your experimental experience entirely differently than people with low immersive tendencies.
But when should you ask these questions? Before or after you have confronted your participants with your experiment?
We came across the following different opinions:
Opinion A: Since you only want to characterize the respondent, apply it AFTER the test and any test-related questionnaires, so that the user's profile answers do not bias the test performance or the experience questions.
Opinion B: Because you want to test participants before they are influenced by your experiment, you ask it BEFORE the experiment (if, for example, your experiment makes literally everybody feel as if they are really immersed, participants still in the flow while answering the ITQ might score higher than they normally would).
Original Paper (Witmer, B. G., & Singer, M. J. (1998). Measuring presence in virtual environments: A presence questionnaire. Presence, 7(3), 225-240.)
What do you think? Or do you know of recent research which investigates this?
In times of pandemic, the use of virtual environments for all areas of life has increased considerably. We are investigating in which areas you have used virtual environments and how satisfied you have been. Thank you.
Many Augmented Reality (AR) technologies allow us to portray various systems in a virtual environment. Are we able to project the Coronavirus into our environments to more easily understand the structure of the virus?
We have many technical issues in Mixed Reality. Which is the most important technical challenge for Mixed Reality?
As COVID-19 has forced academia to react instantly to the dynamics of teaching and learning, I am curious to find out how you have been able to transfer complex ideas that you would normally teach in person to a virtual learning environment?
I am investigating the oscillatory brain activity (EEG 32 channels) of non-clinical adults during a Virtual Reality task.
During this task, participants can move their head in order to inspect the virtual environment.
I am using EEGLAB toolbox for the analysis.
I will use ICA in order to reject some kinds of artifacts (blinks, eye movements, muscle artifacts).
How can I control movement artifacts (e.g. head movements)?
Thanks
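If it helps, here is a minimal sketch of the ICA rejection step in MNE-Python (an alternative to EEGLAB, not an answer to the head-movement question itself); the file name and the excluded component indices are placeholders you would choose by visual inspection:

```python
# Sketch of ICA-based artifact rejection in MNE-Python; placeholders noted.
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)  # placeholder file
raw.filter(l_freq=1.0, h_freq=None)  # high-pass filtering improves the ICA fit

ica = ICA(n_components=20, random_state=42)
ica.fit(raw)
ica.plot_components()        # inspect scalp maps for blink/eye/muscle sources

ica.exclude = [0, 3]         # placeholder indices chosen by inspection
raw_clean = ica.apply(raw.copy())
```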
I would like to know the best way to train mice in a virtual environment to perform behavioural tasks (to run consistently, to reach a specific target without stopping before reaching it, and to navigate in 2D).
Hello everyone
The UMI3D Consortium's working group dedicated to embodiment is currently looking for a device-agnostic way to manage navigation in Collaborative Virtual Environments (CVE).
Are you aware of existing research in the field?
The objective would be to extend the UMI3D protocol to handle the following issues:
- Sharing a common representation of a 3D environment's "navigable" areas with asymmetrical devices.
- Managing the collisions between the users and the virtual environment.
Our assumptions are the following:
- It is not desirable to continuously control the movement of the user (e.g. sliding of the virtual cabin) because of the network latencies which would cause significant motion sickness.
- It is difficult / undesirable to impose the same navigation technique (e.g. go-go) on all devices (due to different good practices and context of use).
- The correct way to manage the collisions between a user and virtual objects differs from one device to another (e.g. freezing some DOF of the camera is common on a PC but causes motion sickness in a VR headset).
Thanks a lot for your help
Kind regards,
Julien Casarin
The isolation generated by the COVID-19 pandemic has forced most of the world's universities to choose remote or virtual classes. In the case of engineering programs and other programs where real practical experiences are required, it has been necessary to resort to increased simulation or to the development and implementation of remote laboratories. The scarce existing infrastructure of remote laboratories will have to demonstrate learning effectiveness and enhance future developments that validate the training of engineers using this educational tool, made possible by the technological advances of the 21st century, and thus generate a permanent change in the global educational paradigm.
Because of university closures, I have to hold classes in a virtual environment for my students and need some good free software.
The main difference between Type 1 and Type 2 hypervisors is that Type 1 runs on bare metal and Type 2 runs on top of an OS. Each hypervisor type also has its own pros and cons and specific use cases. Which of the two hypervisor types do you use in your virtual environment, and why?
Hi, I am planning a doctoral study using a novel immersive virtual reality playground for children with DCD and TD children, ages 7-10. I was planning on using referred children with DCD but was considering using the DCDQ'07 with the parents of both groups. I am also planning to administer the M-ABC2 for children who have not undergone this evaluation. We will be collecting kinematic data as well as task success, time for task performance, and perceived motor competence. Our independent variables include level of immersion, level of output display gain, and setting (a real trampoline in the virtual environment (VE), a virtual trampoline in the VE, and a real trampoline in a real setting).
1. Can you share the full text of this article with me? 2. Would you consider a different tool for this purpose?
Thanks!
Sarina Goldstand
During some lessons, perhaps for a limited time, in specific situations such as didactic games or the presentation of specific learning processes and topics, the teacher may allow the use of devices such as virtual reality and augmented reality glasses. In addition, the teacher can also include other mobile devices such as laptops, tablets, smartphones, etc. in the education process. In certain situations, these devices would play the role of teaching instruments supporting the didactic processes conducted by the teacher.
Do you agree with my opinion on this matter?
In view of the above, I am asking you the following question:
Can glasses for virtual reality and augmented reality be teaching instruments used in education processes?
Please reply
I invite you to the discussion
Thank you very much
Best wishes
I need to record the head and shoulders of a person speaking, and then reconstruct that person in 3D, so that it can be played back statically (not real-time) in a virtual environment.
I am interested in any software or methods out there for reconstructing a recording in medium to high resolution.
Many companies, such as Huawei, Ericsson, and other vendors, provide vIMS solutions that run on their NFV or virtualized environments.
Is there any company that provides vIMS software capable of running on commercial off-the-shelf (COTS) hardware or in an ordinary data center?
What is the best algorithm or technique for tracking small objects in virtual environments? Best in the sense of tracking resolution, latency, and cost.
Version 3, 2005 attached. I've reached out to the listed addresses on both this and v2 of the PQ, but all email attempts have bounced back.
Thank you!
We are about to develop a hazard perception test in virtual reality. However, we are interested in measuring where participants look. What are the best options for tracking eye movements in the virtual environment?
I am making a few little videocasts to explain concepts like referencing/citation, or using electronic resources. Someone suggested incorporating gifs, cartoons or other animations. I looked at Giphy.com but with mixed results. Can you recommend a free source of gifs etc suitable for educational/library use, please? The link below gives a commentary on my experimentation with Giphy this evening!
We want to compare the head movement of people exploring VR programs. Since the headsets sense head movement, there should be a way to record it.
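If it is useful, here is a sketch of the analysis side, assuming you can log the per-frame head orientation as unit quaternions (w, x, y, z) from the engine; most engines expose the HMD camera transform every frame, and the file name here is a placeholder:

```python
# Sketch: frame-to-frame angular displacement and total head rotation from
# logged unit quaternions; the CSV file name and format are assumptions.
import numpy as np

def angular_distance(q1, q2):
    # Angle (radians) of the rotation taking q1 to q2; abs() handles the
    # quaternion double cover (q and -q encode the same orientation).
    dot = np.clip(abs(np.dot(q1, q2)), 0.0, 1.0)
    return 2.0 * np.arccos(dot)

quats = np.loadtxt("head_orientations.csv", delimiter=",")  # one (w,x,y,z) per row
steps = [angular_distance(quats[i], quats[i + 1]) for i in range(len(quats) - 1)]

print("total head rotation (deg):", np.degrees(sum(steps)))
print("mean per-frame rotation (deg):", np.degrees(np.mean(steps)))
```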
I designed a virtual volume model with Oculus Rift design software. Now, I would like to have a physical model of it. Is there any way to connect an Oculus Rift to a 3D printer (e.g. Ultimaker 2+) to transfer the virtual model into a 3D-CAD model by software? And if so, which software would that be? So far, only the transfer of a 3D-CAD model into a virtual reality model is discussed, not vice versa. Any ideas? Thx!
Skills in:
- web designing
- graphic designing
- 3D character modelling and animation
- knowledge of virtual reality
Hello!
Has someone worked on it? It's hard to find literature on this specific subject!
Or literature on a part of the subject...
Any reference would be of great help.
Thank you
Dear All,
I am conducting an experiment as part of my PhD research that aims to utilise users as a defence mechanism for detecting social engineering attacks on computer systems.
I require participants to take part in a one-and-a-half-month-long experiment, installing and using a simple application to report suspected social engineering attacks to a web system hosted by the researchers. This experiment is non-disruptive, requiring only the installation of a small app and then participants' normal computer use on their Windows device. You will need a Windows computer to take part.
If you are interested and would be happy to take part in this study, please click on the link below to review and complete the experiment participant information and consent form, where further information and details on how to take part in this experiment are provided.
Please forward/share this call with affiliates that may also be interested in taking part in this experiment.
I would be very grateful for your participation.
Kind Regards,
Ryan Heartfield
PhD Researcher, University of Greenwich
I am starting some research on presence in desktop photorealistic VEs and cannot find any generally accepted instrumentation.
This question is based on a need to understand team dynamics in a project. Each project goes through a project life cycle, and each stage requires social interaction behaviors.
I would like to know how to measure these behaviors in real time.
Regards
Rebone
Personally or in virtual environments?
I am looking for the best HMD I can get on a budget that is still good enough for high-level experimentation. That being said, I am certainly willing to take the financial hit if something like the HTC Vive is truly worth it in terms of research effectiveness; if the Oculus Rift or even the Samsung Gear VR are sufficient for effective research, then those would be my choice. The direct application would be using the technology as an alternative learning option in the future, mostly for history.
Thanks!
Especially 360 videos, where you are supposed to see a place at the moment the video was taken, with all the actions that were taking place, and you cannot actually interact.
Student engagement can mean different things (see Kuh, G., for example):
Student engagement with course content
Student engagement with other students
Student engagement with faculty
How can 3D virtual environments be designed to enhance student engagement? For example: orientation before arriving on campus, collaboration, interaction.
What elements of the environment are most important and useful? Why?
For example: Text chat, avatars, realistic representation, shared documents, well presented content, video chat, ability to explore and discover, gamification elements.
I plan to interview & survey students and teachers and would like to hear thoughts and experience on these topics.
There is much research in the literature on virtual reality in wayfinding. Some studies reported that HMDs increase presence but affect spatial navigation tasks negatively compared to traditional desktop systems. Others reported that HMDs increase presence and provide better spatial navigation performance. However, the technology they used was very poor in terms of resolution, field of view, etc. I could not find any wayfinding experiment conducted with the Oculus Rift. I'm curious whether using the Oculus Rift in wayfinding research gives results coherent with real environments or not.
The end goal is to have two machines running simultaneously inside VMware. One represents the attacker [already installed Windows 7], and the other the host [honeypot? / OS suggestion]. I am still unsure which honeypot I can use to mimic a host system so that I can launch attacks from the other virtual attack system. I have already installed a Windows 7 OS on one machine, which can represent the attacker. Also, in this case, can the IDS be Snort? Or do you have any suggestion regarding an appropriate IDS in this case? I did find one paper that comes closest to what I want to achieve. It is attached.
Furthermore, once that is known, the logs generated by the IDS need to be readable by WEKA for training, so the result can be fed back to the IDS as rules in order to learn and capture novel, undefined attacks based on a certain threshold of certainty.
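As one possible bridge in that pipeline, here is a hedged sketch that parses Snort's fast-alert log into a CSV that WEKA can import; the regex assumes the default alert_fast layout and will need adjusting to your Snort configuration:

```python
# Sketch: convert Snort "alert_fast" lines into a CSV for WEKA.
# The regex assumes the default fast-alert layout; adjust it to your setup.
import csv
import re

ALERT_RE = re.compile(
    r"(?P<ts>\S+)\s+\[\*\*\]\s+\[(?P<gid>\d+):(?P<sid>\d+):(?P<rev>\d+)\]\s+"
    r"(?P<msg>.*?)\s+\[\*\*\].*?\{(?P<proto>\w+)\}\s+"
    r"(?P<src>[\d.]+):?(?P<sport>\d*)\s+->\s+(?P<dst>[\d.]+):?(?P<dport>\d*)"
)

with open("alert.fast") as alerts, open("alerts.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["timestamp", "sid", "msg", "proto",
                     "src", "sport", "dst", "dport"])
    for line in alerts:
        m = ALERT_RE.search(line)
        if m:  # skip lines that do not match the assumed layout
            writer.writerow([m["ts"], m["sid"], m["msg"], m["proto"],
                             m["src"], m["sport"], m["dst"], m["dport"]])
```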
What do you suggest? Thank you for your time and patience.
Can we be sure that it is not just a phase and that it will not disappear any time soon?
My study examines the lived experience in the area of human-computer interaction (HCI), 3D virtual environments, and learning in professional development settings. I've seen various (psychological) phenomenological approaches employed, e.g. in nursing and psychology, but what about in HCI and learning? There's naturally Ihde, and I have seen e.g. Clark Moustakas' (1994) descriptive approach used for studying computer use for informal learning in Educational Computer Use in Leisure Contexts (Cilesiz 2008). Anything else that stands out in descriptive/interpretive phenomenology? Just to focus the question a bit more: I'm especially interested in phenomenology as a research approach, and not just as an underlying philosophy that allows us to view HCI and learning through certain perspectives or concepts (e.g. Heidegger's present-at-hand). Thanks!
Virtual Reality is now popularized by Microsoft HoloLens and the Oculus headset – see the following links:
What are the other use cases you can suggest using Virtual Reality in educational learning?
In your opinion, what are the benefits of using Virtual Reality in educational learning?
What are the disadvantages / risks of using Virtual Reality in educational learning?
How does Virtual Reality improve educational learning, e.g. by immersing learners in seeing or experiencing biological / nano / atomic chemical reactions, or how complex computer systems work (i.e. sending commands / receiving data packets or signals, etc.), so that understanding is deeper and clearer?
I'm conducting an investigation in which I analyze the scaffolding for self-regulated learning provided by a virtual classroom to students in a blended-learning teacher education program. I require an exhaustive search on the matter of scaffolding self-regulation in virtual environments.
I'll be grateful if you could send articles or references on it.
Hi Research Gate community-
I will be teaching an evaluation class next semester that is fully online. I am interested in topic ideas and effective methods you have used to keep your students interested, engaged, and most importantly, learning. Thanks in advance for techniques and topics that are proven in the virtual environment.
I am looking for any books, research, or texts that discuss ecological validity with respect to digital simulations/virtual environments and serious/health games. I'd especially like to read:
1) How stake-holder, particularly user, requirements can be elicited with a view to endowing a serious/health game with a meaningful level of ecological validity.
2) Methods to measure the degree of ecological validity achieved in the game.
Thank you.
It is possible to interface commercial off-the-shelf (COTS) EEG equipment with 3D virtual environments. In a student project I am supervising, we use the Emotiv Epoc EEG headset and interface it with the 3D game engine Unity. Now that we have a live communication channel established by means of UDP, there are virtually unlimited experiments that can be designed inside Unity. The EEG signals can be used to control objects in a 3D world (e.g., move a character), to affect screenplay based on emotion (e.g., when a user becomes bored as determined from EEG readings, give him a scare!), and so on. Both EEG and the 3D virtual environment may be further linked to real physical devices, such as a robot arm. Rehabilitation and training of both healthy and non-healthy participants is possible. Suitable experiments can be designed to aid in reverse-engineering the human brain and work out how the brain does things, such as sensing, planning, and control.
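As a sketch of the kind of bridge described above (not the project's actual code), the PC side could stream a derived EEG metric to Unity over UDP like this; the Emotiv SDK call is a placeholder, and the port must match whatever the Unity listener uses:

```python
# Sketch: stream a (placeholder) EEG engagement score to Unity over UDP
# as JSON datagrams; the SDK call, port, and rate are all assumptions.
import json
import random
import socket
import time

UNITY_ADDR = ("127.0.0.1", 5005)  # must match the Unity-side UDP listener
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def read_engagement():
    # Placeholder for an Emotiv SDK call returning a 0..1 engagement score.
    return random.random()

while True:
    payload = json.dumps({"t": time.time(), "engagement": read_engagement()})
    sock.sendto(payload.encode("utf-8"), UNITY_ADDR)
    time.sleep(0.1)  # ~10 Hz is plenty for screenplay-level adaptation
```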
What do you think should be investigated?
I am a PhD student looking at developing models for virtual collaborations in some specific areas such as manufacturing. I wonder if there is any model validation method I could use to validate the models I develop.
I want to simulate a collaborative virtual environment in which users initially connect to a main server, and the main server assigns a zone server to each client, where it establishes its connection.
The main server does the client assignment at the beginning of the connection. Each time there is a connection request, it checks the load on the zone server to make sure it has not reached the assigned threshold. If the zone server has reached its threshold, the main server assigns the user to a different zone server.
The user sends its messages to its zone server, and the zone server retransmits them directly to all the connected zone servers and sends a copy to the backup server. A copy is sent to the main server only for synchronization purposes... etc. (A toy sketch of the assignment rule is given below.)
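A toy Python sketch of that threshold rule, with illustrative names and capacities only:

```python
# Toy sketch of the main server's assignment rule described above: pick the
# first zone server whose load is below its threshold; all names are made up.
from dataclasses import dataclass, field

@dataclass
class ZoneServer:
    name: str
    threshold: int                       # max clients before spilling over
    clients: list = field(default_factory=list)

    def has_capacity(self) -> bool:
        return len(self.clients) < self.threshold

def assign_client(client_id, zone_servers):
    for zs in zone_servers:
        if zs.has_capacity():
            zs.clients.append(client_id)
            return zs
    raise RuntimeError("all zone servers are at their threshold")

zones = [ZoneServer("zone-A", threshold=2), ZoneServer("zone-B", threshold=2)]
for cid in ["u1", "u2", "u3"]:
    print(cid, "->", assign_client(cid, zones).name)
```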
In short, I want to simulate a CVE. I checked OpenSimulator for this purpose, but I am not sure its capabilities will meet my requirements. We could do this in NS2 with a very customized application.
Your suggestions are welcome.
Regards
How can I create a virtual environment with a low clock speed to study application performance?
That is, I wish to create an environment with a clock speed of a few MHz within my multi-core GHz processor (if possible, controlled from my Windows 8.1 operating system).
Please share the names of any tools you have come across.
Thank you.
I designed a mobile-based 3D virtual environment for adult absolute illiterates, and my main focus is to find the impact of sense of belonging/attraction.
The question is: after all this, how can I relate my work to Computer Science? Most of this work is related to Psychology and Education, but my area of research is Computer Science.
I designed a 3D virtual environment for adult absolute illiterates, and my main focus is to find the impact of sense of belonging/attraction.
I want to analyze the effect of the learning environment on learnability.
In a real environment, numerous elements constitute the user's experience, but in a virtual environment these elements are reduced. Is anyone interested in the concept of adaptation in interaction with virtual environments?
I activate the 3D effect and the video card memory goes up to full size, but it still crashes at logon. Is there any solution?
Several methodologies exist for evaluating the usability of a graphical interface, but which is the most suitable for the evaluation of a haptic interface?
I am looking for software that is simple to handle, with good customization options, and above all currently in use or under study. I know EMMA and NeuroVR 2, but they seem pretty outdated.
I am interested in developing user interfaces to help people with intellectual disabilities. I would like to use Kinect, but I have no experience developing for it. I would like to contact someone who works on developing applications with this device.
I am leading a seminar about virtual environments, including virtual worlds, augmented reality, tele-presence and so on. What would you consider the key papers in the field, and, perhaps, a textbook?
We know about digital CAVEs and about head-mounted displays and glasses as two ends of the spectrum for visual immersion in virtual reality. Do you have a good suggestion for something in between that serves as a portable or semi-portable visual immersion tool?
An example could be a micro-CAVE: a cave that hangs around a person's field of view and has 180-360 degree projection on it.
Links to publication sources about such tools would be helpful.