Conference Paper

Rapid prototyping for XR: SIGGRAPH 2022 course

... Although many prototyping methods have been applied to support rapid AR prototyping [6,25], support for creating interactive behavior in AR remains limited in the context of goal-driven prototyping of AR use cases from the end users' perspective [26]. More specifically, the techniques used in prior research are commonly application-oriented and focus on the interactions that target users are expected to perform with the application. ...
Article
Prototyping use cases for Augmented Reality (AR) applications can help elicit the functional requirements of features early on and drive subsequent development in a goal-oriented manner. Doing so requires designers to identify the goal-oriented interactions and map the associations between those interactions in a spatio-temporal context. Given the multiple scenarios that may result from this mapping, and the embodied nature of the interaction components, recent AR prototyping methods lack adequate support for capturing and communicating the intent of designers and stakeholders during this process. We present ImpersonatAR, a mobile-device-based prototyping tool that uses embodied demonstrations in the augmented environment to support prototyping and evaluation of multi-scenario AR use cases. The approach consists of: 1) capturing events or steps in the form of embodied demonstrations using avatars and 3D animations; 2) organizing events and steps to compose a multi-scenario experience; and 3) allowing stakeholders to explore the scenarios through interactive role-play with the prototypes. We conducted a user study in which 10 participants prototyped use cases from two different AR application features using ImpersonatAR. Results validated that ImpersonatAR promotes exploration and evaluation of diverse design possibilities for multi-scenario AR use cases through embodied representations of the different scenarios.
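The three-step approach described in the abstract maps naturally onto a small data model: captured demonstrations become events, events compose scenarios, and scenarios compose the prototype that stakeholders role-play through. Below is a minimal Python sketch of such a structure; all class, field, and method names (DemonstrationEvent, Scenario, UseCasePrototype, role_play) are illustrative assumptions, not the authors' actual implementation.

# Hypothetical data-model sketch for the three-step approach above;
# names and fields are illustrative assumptions, not ImpersonatAR's code.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DemonstrationEvent:
    """One captured step: an embodied demonstration (avatar plus 3D animation)."""
    name: str
    avatar_clip: str                    # reference to a recorded avatar animation
    anchor: Tuple[float, float, float]  # AR-space position the demonstration is anchored to

@dataclass
class Scenario:
    """An ordered sequence of events forming one branch of the use case."""
    label: str
    events: List[DemonstrationEvent] = field(default_factory=list)

@dataclass
class UseCasePrototype:
    """A multi-scenario prototype that stakeholders explore via role-play."""
    title: str
    scenarios: List[Scenario] = field(default_factory=list)

    def role_play(self, scenario_label: str) -> None:
        # Replay each captured demonstration in order (stubbed here as prints).
        scenario = next(s for s in self.scenarios if s.label == scenario_label)
        for event in scenario.events:
            print(f"[{self.title} / {scenario.label}] replaying '{event.name}' at {event.anchor}")

if __name__ == "__main__":
    pick = DemonstrationEvent("pick up item", "avatar_pick.anim", (0.0, 0.0, 1.5))
    scan = DemonstrationEvent("scan barcode", "avatar_scan.anim", (0.2, 1.0, 1.5))
    proto = UseCasePrototype("checkout assistant",
                             scenarios=[Scenario("happy path", [pick, scan])])
    proto.role_play("happy path")

Separating events from the scenarios that order them is what lets one set of captured demonstrations be recombined into multiple branches, which is the multi-scenario composition the abstract describes.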
Conference Paper
This paper explores and evaluates the main limitations associated with AI-based 360 panorama generation. We use a free AI-based 360 panorama generator, Skybox AI, to highlight some of the current limitations of automatic panorama generation from text input. By recognizing and addressing these constraints, researchers and practitioners can pave the way for enhanced algorithms and systems capable of delivering more immersive and realistic panoramic experiences.
Chapter
The making of augmented reality for mobile learning is a complex endeavor. The visual materials needed for such builds may be time-consuming to create given the technological requirements. The emergence of artmaking generative AI (GAI) tools provides opportunities to fast-track some of the work in all phases: design (research, brainstorming, color palette selection, visual elements, drafting, compositing, pilot testing, and others), development (the creation of the various elements, alpha and beta testing), and deployment (the release of the works into production). This chapter explores the work needed to co-create artworks with generative AIs, the visual editing required (right-sizing, color changes, outlining, formatting, and others), the accessibility design, the usability design, and other efforts to ensure effective augmented reality work for teaching and learning. Advances in GAI are also considered.