Figure 1. The setup at the worker's end. One table had the blocks to arrange and the other table was the taskspace where the blocks had to be arranged.
Source publication
An emerging use of mobile video telephony is to enable joint activities and collaboration on physical tasks. We conducted a controlled user study to understand if seeing the gaze of a remote instructor is beneficial for mobile video collaboration and if it is valuable that the instructor is aware of the sharing of the gaze. We compared three gaze sharin...
Similar publications
To date, the practice of global emergency medicine (GEM) has involved being “on‐the‐ground” supporting in‐country training of local learners, conducting research, and providing clinical care. This face‐to‐face interaction has been understood as critically important for developing partnerships and building trust. The COVID‐19 pandemic has brought si...
The article presents the experience of organizing students' work using modern forms and methods of advanced training. The implementation of video conferencing allows students to learn from advanced professional experience, exchange relevant information, ask leading experts questions of interest, and receive recommendations.
Background:
The COVID-19 pandemic has transformed healthcare significantly and telepsychiatry is now the primary means of treatment in some countries.
Aims:
To compare the efficacy of telepsychiatry and face-to-face treatment.
Method:
A comprehensive meta-analysis comparing telepsychiatry with face-to-face treatment for psychiatric disorders....
The Covid-19 pandemic that has hit the world has limited human activities. Video conferencing is one solution for carrying out those activities: it helps reduce the spread of Covid-19 while keeping people connected. With video conferencing, people can meet without being limited by space and time. However, video conferencing still...
In recent decades, mechanisms for observation and information production have proliferated in an attempt to meet the growing needs of stakeholders to access dynamic data for the purposes of informed decision-making. In the land sector, a growing number of land observatories are producing data and ensuring its transparency. We hypothesize that these...
Citations
... In collaborative search tasks, for example, shared gaze visualizations have positively affected collaborators' performance, coordination, and searching behavior [3,6,22,30,34]. Furthermore, gaze sharing has been investigated in remote collaboration tasks such as joint programming projects [4], writing academic texts [16], problem-solving [5,21] and remote instructions [1,12,14]. In a learning context, shared gaze visualizations have been investigated to support students and to facilitate communication [9]. ...
... Interaction (Baishya & Neustaedter, 2017) (Berri, Wolf, & Osorio, 2015) (Berri, Wolf, & Osório, 2015) (Feick, Tang, & Bateman, 2018) (Huang, Xiang, Chen, & Fan, 2018) (Jones, Witcraft, Bateman, Neustaedter, & Tang, 2015) (Kasahara, Nagai, & Rekimoto, 2017) (Kasahara & Rekimoto, 2015) (Liu, Yu, & Shi, 2015) (Tang, Fakourfar, Neustaedter, & Bateman, 2017) (Buchanan, Bott, & LaViola J.J., 2015) 11 Sharing (Akkil, Thankachan, & Isokoski, 2018) The most important characteristic of video-collaboration systems is collaboration across different kinds of work, creating new environments for education, with friendly interfaces that allow participants to interact and collaborate. ...
The objective of this systematic literature review (SLR) is to describe the role of video collaboration in education, using a methodology applied to engineering and education. It was based on three research questions, RQ1: What kinds of characteristics exist within video-collaboration systems applied to education?, RQ2: What tools are used for video collaboration in education?, and RQ3: What are video-collaboration sessions being used for within education? 55 articles were selected from the Scopus database, with a different count for each question: 55 for RQ1, 24 for RQ2 and 13 for RQ3. The results show that the most important characteristic is collaboration through the sharing of participants' information by means of gestures and mixed spaces for work and leisure, in many cases reducing travel costs; on the other hand, there are various systems that allow connecting from different locations with audio and video so that participants feel immersed in the different educational activities of video-collaboration classes. Finally, video collaboration is evidently being used for the participation of students and teachers in classes, with especially high demand in the health field for medical consultations and procedures.
... To facilitate understanding of distributed collaborators' perspectives, there have been studies exploring various gaze visualization techniques. A commonly adopted method for providing gaze awareness is to determine the object one user is looking at, then to place a virtual element such as a dot or a cursor on that object in the other user's view [1,29,35]. Then, understanding joint object references only requires a visual search for the virtual element in the scene. ...
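As a concrete illustration of this overlay approach (not taken from any of the cited systems; the function name, the homography-based mapping, and the use of OpenCV/NumPy are assumptions), a minimal sketch of placing a gaze marker in a shared video frame might look like this:

```python
import cv2
import numpy as np

def overlay_gaze_marker(frame, gaze_xy, homography=None,
                        radius=12, color=(0, 0, 255), thickness=2):
    """Draw a ring marker at the tracked user's gaze point on a shared frame.

    frame      -- BGR image shown to the other collaborator (modified in place)
    gaze_xy    -- (x, y) gaze point in the tracked user's view, in pixels
    homography -- optional 3x3 matrix mapping the tracked user's view into
                  the shared frame; the views are assumed aligned if None
    """
    pt = np.array([[gaze_xy]], dtype=np.float32)   # shape (1, 1, 2) for OpenCV
    if homography is not None:
        pt = cv2.perspectiveTransform(pt, homography)
    x, y = pt[0, 0]
    h, w = frame.shape[:2]
    if 0 <= x < w and 0 <= y < h:                  # ignore off-screen gaze
        cv2.circle(frame, (int(x), int(y)), radius, color, thickness)
    return frame
```

In a real system the mapping between the two views would come from the tracking or registration pipeline, and the marker position would typically be smoothed over a few frames to hide eye-tracker jitter.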
In collaborative tasks, it is often important for users to understand their collaborator’s gaze direction or gaze target. Using an augmented reality (AR) display, a ray representing the collaborator’s gaze can be used to convey such information. In wide-area AR, however, a simplistic virtual ray may be ambiguous at large distances, due to the lack of occlusion cues when a model of the environment is unavailable. We describe two novel visualization techniques designed to improve gaze ray effectiveness by facilitating visual matching between rays and targets (Double Ray technique), and by providing spatial cues to help users understand ray orientation (Parallel Bars technique). In a controlled experiment performed in a simulated AR environment, we evaluated these gaze ray techniques on target identification tasks with varying levels of difficulty. The experiment found that, assuming reliable tracking and an accurate collaborator, the Double Ray technique is highly effective at reducing visual ambiguity, but that users found it difficult to use the spatial information provided by the Parallel Bars technique. We discuss the implications of these findings for the design of collaborative mobile AR systems for use in large outdoor areas.
... Gaze is an important nonverbal communication signal in everyday human-human interaction [4], and has become a popular research topic for technology-mediated interaction [17,43,60]. The ability to tell what someone is looking at ('gaze awareness') is a useful way to gauge the attention of others [1,2,14,63]. Gaze observed over time is an effective predictor of human intention [26,27,50,56]. ...
... A common approach for gaze awareness is to visually overlay a user's gaze over a shared interface, which provides others rich insights into the mind of the tracked user. This complementary layer of communication has numerous benefits such as improved coordination [2,12,14] and situation awareness [50]. Despite these benefits, overlaying gaze on the interface can add a highly distracting element to the task at hand [50], confuse users when there is a mismatch with other modes of communication such as speech [14], and scales poorly with multiple users. ...
... The benefits are well demonstrated in multi-user scenarios, improving communication and coordination in collaborative settings (e.g. [2,12,24,63]). Gaze visualisation has also been explored in competitive gameplay [46,59], highlighting its potential for increasing social presence between remote players [36,45], and for enabling players to recognise the intentions of others in real-time [50,51]. ...
As it becomes more common for humans to work alongside artificial agents on everyday tasks, it is increasingly important to design artificial agents that can understand and interact with their human counterparts naturally. We posit that an effective way to do this is to harness nonverbal cues used in human-human interaction. We, therefore, leverage knowledge from existing work on gaze-based intention recognition, where the awareness of gaze can provide insights into the future actions of an observed human subject. In this paper, we design and evaluate the use of a proactive intention-aware gaze-enabled artificial agent that assists a human player engaged in an online strategy game. The agent assists by recognising and communicating the intentions of a human opponent in real-time, potentially improving situation awareness. Our first study identifies the language requirements for the artificial agent to communicate the opponent’s intentions to the assisted player, using an inverted Wizard of Oz approach. Our second study compares the experience of playing an online strategy game with and without the assistance of the agent. Specifically, we conducted a within-subjects study with 30 participants to compare their experience of playing with (1) detailed AI predictions, (2) abstract AI predictions, and (3) no AI predictions but with a live visualisation of their opponent’s gaze. Our results show that the agent can facilitate awareness of another user’s intentions without adding visual distraction to the interface; however, the cognitive workload was similar across all three conditions, suggesting that the manner in which the agent communicates its predictions requires further exploration. Overall, our work contributes to the understanding of how to support human-agent teams in a dynamic collaboration scenario. We provide a positive account of humans interacting with an intention-aware artificial agent afforded by gaze input, which presents immediate opportunities for improving interactions between the counterparts.
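The intention-recognition model used by the agent is not spelled out in this summary; as a minimal, hypothetical sketch of the underlying idea (attention accumulated over a recent time window hints at the next action), a dwell-time-based predictor could look like this:

```python
from collections import defaultdict, deque

class DwellIntentionPredictor:
    """Sketch of gaze-based intention prediction: the candidate target that
    has accumulated the most gaze samples within a sliding time window is
    reported as the likely intention."""

    def __init__(self, window_seconds=2.0):
        self.window = window_seconds
        self.samples = deque()            # (timestamp, target_id) pairs

    def add_gaze_sample(self, timestamp, target_id):
        self.samples.append((timestamp, target_id))
        # drop samples that have fallen outside the time window
        while self.samples and timestamp - self.samples[0][0] > self.window:
            self.samples.popleft()

    def predict(self):
        dwell = defaultdict(int)
        for _, target in self.samples:
            if target is not None:        # None = gaze not on any known target
                dwell[target] += 1
        return max(dwell, key=dwell.get) if dwell else None
```

Published approaches are usually more elaborate (e.g. probabilistic models over fixation sequences), but even in this simple form the window length and the set of candidate targets are the main design knobs.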
This study explores how gaze visualization in virtual spaces facilitates the initiation of informal communication. Three styles of gaze cue visualization (arrow, bubbles, and miniature avatar) with two types of gaze behavior (one-sided gaze and joint gaze) were evaluated. 96 participants used either a non-visualized gaze cue or one of the three visualized gaze cues. The results showed that all visualized gaze cues facilitated the initiation of informal communication more effectively than the non-visualized gaze cue. For one-sided gaze, overall, bubbles had more positive effects on the gaze receiver's behaviors and experiences than the other two visualized gaze cues, although the only statistically significant difference was in the verbal reaction rates. For joint gaze, all three visualized gaze cues had positive effects on the receiver's behaviors and experiences. The design implications of the gaze visualization and the confederate-based evaluation method contribute to research on informal communication and social virtual reality.
Augmented Reality (AR) offers significant benefits for remote collaboration scenarios. However, when using a Head-Mounted Display (HMD), remote users do not always see exactly what local users are looking at. This happens when there is a spatial offset between the center of the Field of View (FoV) of the HMD’s cameras and the center of the FoV of the user. Such an offset can limit the ability of remote users to see objects of interest, creating confusion and impeding collaboration. To address this issue, we propose the AHO-Guide techniques. AHO-Guide techniques are Automated Head Orientation Guidance techniques in AR with an HMD. Their goal is to encourage a local HMD user to adjust their head orientation to let remote users have the appropriate FoV of the scene. This paper presents the conception and evaluation of the AHO-Guide techniques. We then propose a set of recommendations based on the encouraging results of our experimental study. Keywords: Augmented Reality, Mixed Reality, Remote collaboration, Guidance, Field-of-View
The past decade has witnessed a growing interest in using dual eye tracking to understand and support remote collaboration, especially with studies that have established the benefits of displaying gaze information for small groups. While this line of work is promising, we lack a consistent framework that researchers can use to organize and categorize studies on the effect of shared gaze on social interactions. There exists a wide variety of terminology and methods for describing attentional alignment; researchers have used diverse techniques for designing gaze visualizations. The settings studied range from real-time peer collaboration to asynchronous viewing of eye-tracking video of an expert providing explanations. There has not been a conscious effort to synthesize and understand how these different approaches, techniques and applications impact the effectiveness of shared gaze visualizations (SGVs). In this paper, we summarize the related literature and the benefits of SGVs for collaboration, describe important terminology as well as appropriate measures for the dual eye-tracking space and discuss promising directions for future research. As eye-tracking technology becomes more ubiquitous, there is a pressing need to develop a consistent approach to the evaluation and design of SGVs. The present paper makes a first and significant step in this direction.
This paper investigates the effect of using augmented reality (AR) annotations and two different gaze visualizations, head pointer (HP) and eye gaze (EG), in an AR system for remote collaboration on physical tasks. First, we developed a spatial AR remote collaboration platform that supports sharing the remote expert’s HP or EG cues. Then the prototype system was evaluated with a user study comparing three conditions for sharing non-verbal cues: (1) a cursor pointer (CP), (2) HP and (3) EG with respect to task performance, workload assessment and user experience. We found that there was a clear difference between these three conditions in the performance time but no significant difference between the HP and EG conditions. When considering the perceived collaboration quality, the HP/EG interface was statistically significantly higher than the CP interface, but there was no significant difference for workload assessment between these three conditions. We used low-cost head tracking for the HP cue and found that this served as an effective referential pointer. This implies that in some circumstances, HP could be a good proxy for EG in remote collaboration. Head pointing is more accessible and cheaper to use than more expensive eye-tracking hardware and paves the way for multi-modal interaction based on HP and gesture in AR remote collaboration.
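The paper's implementation details are not reproduced here; as a rough, hypothetical sketch of what distinguishes a head pointer from an eye-gaze cue, the snippet below casts a ray along the head's forward direction and intersects it with the task plane (an eye-gaze cue would substitute the tracked gaze direction for the head orientation):

```python
import numpy as np

def head_pointer_on_plane(head_pos, head_forward, plane_point, plane_normal):
    """Intersect a ray cast along the head's forward direction with the task
    plane. All arguments are 3-element NumPy arrays in the same world frame.
    Returns the 3D intersection point, or None if the ray misses the plane."""
    head_forward = head_forward / np.linalg.norm(head_forward)
    denom = np.dot(plane_normal, head_forward)
    if abs(denom) < 1e-6:
        return None                        # ray is parallel to the plane
    t = np.dot(plane_normal, plane_point - head_pos) / denom
    if t < 0:
        return None                        # plane lies behind the user
    return head_pos + t * head_forward
```

Because the head moves more coarsely than the eyes, the resulting pointer is steadier but less precise, which is consistent with the paper's observation that a head pointer can serve as a proxy for eye gaze in some circumstances.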