Questions related to Mixed Reality
I'm a Mechanical/Systems Engineer with over 20 years of industry experience. I have recently set foot in the Artificial Intelligence (AI) realm, particularly Augmented Reality (AR), Virtual Reality (VR) and Mixed Reality (MR). I have been concentrating mainly on developing virtual training modules for industrial applications in space and nuclear settings.
I have an autistic child, for whom I would like to develop psychoeducational applications under the AR/VR/MR scope. I have read 'some' research pertaining to this subject on ResearchGate, including technological reviews, capabilities, preliminary next steps, etc., but have struggled to dig deeper into application development per se.
I'm posting this generic query for fellow researchers with expertise and know-how in this specific subject to steer me in the right direction. As I've outlined, I'm versed in the development side of virtual reality but lack in-depth knowledge of combining these realms with psychoeducational purposes.
Would love to get a discussion going on this topic to garner more awareness and deeper insights into how to converge upon a useful application to help the young neurodiverse minds out there!
I am currently investigating the integration of VR and AR into the process of language learning. I am mainly focusing on the technological possibilities that devices such as VR glasses offer compared to other learning media. I have already analysed a few advantages. I would like to know which arguments have not received enough attention so far; overall, one often hears the same arguments.
Thank you very much!
There are many technical challenges in VR/AR. Among these, which is the most important technical challenge without which VR/AR will miss the mass market? Let us discuss.
Augmented Virtuality (AV) already overlays most parts of the user's environment, so I wonder if Diminished Reality (DR) could be seen as a sub-term of AV with a special focus on intentionally removing particular objects. Or is it better to see DR as a feature of Augmented Reality (AR), as in "remove only a few particular objects"? Maybe it is better to say it applies to both and therefore could be seen as a feature of Mixed Reality (MR)? (I use AV, AR, and MR according to Milgram and Kishino's Reality-Virtuality Continuum here.)
Does anyone have a reliable definition of the differences and similarities between these concepts?
1) How do you understand/characterize the metaverse?
2) Is it a disruptive innovation?
3) Will the metaverse replace the Internet?
4) How will legal, ethical and moral issues be dealt with in the metaverse?
5) Will the value chain of products and services in the metaverse differ from the real world?
6) What will sensations and perceptions be like in the metaverse?
7) Is it the right time for companies to make their migration to the metaverse?
8) Is current technology suitable for the metaverse to become a reality?
9) What is the impact of the metaverse on society?
10) Will the metaverse be a new Second Life?
Later this year I will be conducting a repeated-measures longitudinal experiment designed to quantify the long-term impact of Mixed Reality displays, used as user interfaces for viewing flight procedures/checklists, on drone piloting performance.
Prior to testing, participants will receive training from instructors, and over the course of this training period they will execute 2 different drone flights (shown in the attached image as 'Position Flight' and 'Traverse Flight').
Participants will be randomly assigned into 1 of 2 groups - the 'Hololens First' group, or the 'Screen First' group (an LCD screen is the control display condition). I will also be ensuring that an equal number of participants are assigned to each group.
Following the completion of the training flights, participants will perform three subsequent test flights:
- Test Session 1 will involve participants executing 2 different flights not seen in training (shown in the image attached as 'Orbit Flight' and 'Recon Flight'), and will take place roughly 5 minutes after the completion of training.
- Test Session 2 involves participants returning 10 days after Test Session 1 to conduct the same flights they executed in Test Session 1 (Orbit Flight and Recon Flight).
- Test Session 3 involves participants returning 180 days after Test Session 1 to conduct the same flights they executed in Test Session 1 (Orbit Flight and Recon Flight).
My question, therefore, is: do I need to re-randomise participants into the Hololens First or Screen First group prior to each test session (i.e. randomise before Test Session 1, again before Test Session 2, and again before Test Session 3), or is it acceptable to randomly assign participants to one of these groups before their training flights commence, with participants remaining in the same group for the entire duration of the experiment?
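For what it's worth, the second option (a single, balanced, between-subjects assignment made before training) can be sketched in a few lines of Python. The participant IDs and seed below are placeholders, purely for illustration:

```python
import random

def assign_groups(participant_ids, seed=42):
    """One-time balanced assignment: half 'Hololens First', half 'Screen First'."""
    if len(participant_ids) % 2 != 0:
        raise ValueError("Need an even number of participants for equal group sizes")
    # Build a perfectly balanced label list, then shuffle it so the
    # mapping of participant -> group is random but group sizes stay equal.
    labels = ["Hololens First", "Screen First"] * (len(participant_ids) // 2)
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    rng.shuffle(labels)
    return dict(zip(participant_ids, labels))

groups = assign_groups([f"P{i:02d}" for i in range(1, 13)])
```

Shuffling a pre-balanced label list (rather than flipping a coin per participant) is what guarantees the equal group sizes mentioned above.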
Thanks a lot in advance for your help - it is much appreciated!
Dear fellow researchers,
I am looking for some advice on eye-tracking-enabled VR headsets. I am currently contemplating between the HTC Vive Pro Eye and the Pico Neo 3 Pro Eye... Both have built-in eye tracking by Tobii. Does anyone have any experience with either of them? Or can anyone recommend other brands?
We are planning to use it for research in combination with EEG and EDA sensors to assess human response to the built environment. Any advice is much appreciated.
Our recent research shows that AR systems have an inherent conflict when interacting with virtual objects. We termed this new conflict the Virtual Kinesthetic Conflict (VKC). This conflict is very similar to the inherent Vergence-Accommodation Conflict (VAC) in VR. Just like VAC, VKC cannot be avoided; we can only reduce its effects. In our recent publication, we have listed a few guidelines to reduce the effects of VKC. Can you think of other solutions?
According to expert opinion, virtual reality and augmented reality technologies will be implemented in information services offered on online information portals and in the technological applications of online companies. In the near future, virtual reality and augmented reality will probably be among the main computerized forms of access to the digital world. Adding a digital overlay to reality allows you to create characters and objects that you can design and develop digitally. Digital objects created in this way can be placed in real space as if they really existed, which will probably be used to meet expectations regarding the development of information services in the future.
In view of the above, the current question is: Have virtual reality and augmented reality technologies already been implemented in Internet information services?
Please share your answers and comments.
I invite you to the discussion.
Mixed Reality (MR) is a concept that is not yet consolidated. I have read and heard distinct definitions of the term: sometimes it fuses with the concept of Augmented Reality (AR); at other times, with Virtual Reality (VR) synchronized with the real world, as in a room-scale VR experience.
What is, indeed, the best definition for Mixed Reality?
During some lessons, for a limited time and in specific situations such as didactic games or the presentation of specific learning processes and topics, the teacher may allow the use of devices such as virtual reality goggles and augmented reality glasses. In addition, the teacher can also include other mobile devices such as laptops, tablets, smartphones, etc. in the education process. In certain situations, these devices would play the role of teaching instruments supporting the didactic processes conducted by the teacher.
Do you agree with my opinion on this matter?
In view of the above, I am asking you the following question:
Can glasses for virtual reality and augmented reality be teaching instruments used in education processes?
I invite you to the discussion.
Thank you very much
What is the best algorithm or technique for tracking small objects in virtual environments? Best in the sense of tracking resolution, latency, and cost.
For my master's thesis I am trying to make PCA more understandable by explaining it by means of virtual or mixed reality. In order to do this, I want to find out what goes wrong when people try to get a grasp of PCA: for example, whether students have trouble imagining things in 3D or beyond 3D. Maybe the math is too challenging for math novices, such as communication students.
So my questions are:
- How do you teach and try to get people to understand PCA, what do you explain first, what path do you follow?
- What difficulties arise while teaching it?
- Do you use visual help to make it more understandable and if so, which?
- Have you already used virtual or mixed reality to make PCA more understandable?
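As a concrete point of reference, the steps students usually have to grasp (centring, covariance, eigendecomposition, sorting by variance, projection) can be shown in a few lines of NumPy. The toy 3D point cloud below is purely illustrative, not from any real study:

```python
import numpy as np

# Toy 3D point cloud that varies mostly along one axis, so the
# first principal component should capture most of the variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * np.array([5.0, 1.0, 0.2])

Xc = X - X.mean(axis=0)                     # 1. centre the data
cov = Xc.T @ Xc / (len(Xc) - 1)             # 2. covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)      # 3. eigendecomposition
order = np.argsort(eigvals)[::-1]           # 4. sort components by variance
components = eigvecs[:, order]
explained = eigvals[order] / eigvals.sum()  # variance fraction per component

Z = Xc @ components[:, :2]                  # 5. project onto the top 2 components
```

A VR/MR lesson could animate exactly these steps: the cloud being centred, the principal axes appearing, and the points collapsing onto the 2D projection plane.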
Thanks a lot in advance for your help,
Gilles Van den Eede
There are many reasons why Augmented Reality (AR) will be the future battleground. However, this battleground cannot be won without solving some very difficult technical challenges. Among all of them, what is the most important technical challenge in Augmented Reality?
Do you think AR browsers are still the best option for building AR experiences on mobile devices such as smartphones? What about native AR applications? Which one do you prefer, and what improvements have there been in each technology?
Do AR and MR improve User Experience? This question is part of my thesis. I would like to get some articles, books, journals, etc. to help me with my literature review.
I would also like to know if someone has built apps with AR experiences on mobile devices. Thanks.
Please share your opinion: To my knowledge (I didn't find references), there are 4 types of Augmented Reality:
- Glass-See-Through: like Google Glass or the fancy automotive "head-up displays". The computer-generated information appears on a transparent glass or acrylic between your eyes and what you look at.
- Video-See-Through: like car rear cameras showing the trajectory, video games with Kinect, architectural colour-changing apps, and infrared car cameras with animal detection. The computer-generated information appears in a video feed of what you look at.
- Indirect: A QR code makes whatever accompanies it appear on the computer screen.
- Spatial: like VeinViewer. The computer-generated image is projected onto the target, with no need for QR codes, goggles, glasses, or video screens.
I want to use the TAM to analyze acceptance of Augmented Reality (AR). Yet it remains unclear how people will interact with wearable AR systems (voice, gestures, etc.).
I therefore want to show a short documentary on AR and then have the survey participants complete the TAM questionnaire based on how they imagine the interaction would be.
Problem: I don't have access to wearable AR systems, so it is hard for participants to judge "ease of use" as well as "usefulness".
Is the documentary enough?
I would be happy to receive some feedback and I appreciate any help.
I've never used this method, so I know only a little about it. Does it describe distortions due to the optics as well as field of view, eye position, etc?