December 2012 · 21 Reads · 13 Citations
Journal of Physical Therapy Education
December 2010 · 24 Reads
Techniques et sciences informatiques
September 2010 · 126 Reads · 21 Citations
Interacting with Computers
We present two empirical studies of visual search in dynamic 3D visualisations of large, randomly ordered, photo collections. The aim is to assess the possible effects of geometrical distortions on visual search effectiveness, efficiency and comfort, by comparing the influence of two perspective representations of photo collections on participants’ performance results and subjective judgments. Thumbnails of the 1000 or so photographs in each collection are plastered on the lateral surface of a vertical cylinder, either on the inside (inner view, IV) or on the outside (outer view, OV). IV and OV suggest two different interaction metaphors: locomotion in a virtual space (IV) versus manipulation of a virtual object (OV). They also implement different perspective distortions: enlargement and distortion of lateral columns (IV) versus enlargement of central columns and dwindling plus distortion of lateral columns (OV). The presentation of results focuses on the second study, S2, which involved 20 participants and offered them strictly identical interaction facilities with the two views, unlike the initial pilot study, S1 (8 participants and slightly different interaction facilities between the two views). Participants in both studies were experienced computer users (average age: 25.15 years, SD: 3.13). They performed two types of basic visual tasks that are carried out repeatedly while navigating photo collections: (i) searching for a photo meeting specific, visual and thematic, criteria, the photo and its location in the collection being unknown to participants (ST1) and (ii) looking for a visually familiar photo, the location of the photo being familiar to participants (ST2). According to post-experiment questionnaires and debriefings, all participants in S2 save one judged both 3D views positively in reference to standard 2D visualisations. Half of them preferred IV over OV, four appreciated OV better, and six expressed no clear opinion.
Preferences were mainly motivated by the effects of perspective distortions on thumbnail visibility. They were barely influenced by interaction metaphors (e.g., the feeling of immersion induced by IV). Despite large inter-individual differences in performance, a majority of participants carried out ST1 tasks more effectively and efficiently with IV than with OV, as regards error rates (a statistically significant difference) and search times (a non-significant trend). Performance results for ST2 tasks were similar with the two views, probably owing to the simplicity and brevity of ST2 tasks. Perspective distortions seem to have exerted less influence on participants’ visual strategies than horizontal scrolling, a dynamic feature common to both views. Qualitative analyses of participants’ behaviours suggest that IV has the potential to support spatial memory better than OV, presumably thanks to the locomotion metaphor. These results indicate that perspective views have the potential to facilitate and improve visual search in unstructured picture collections, provided that distortions are adapted to users’ individual visual capabilities. Further research is needed to better understand: (i) the actual relations between visual exploration strategies and geometrical properties of perspective visualisations and (ii) the influence of the manipulation and locomotion metaphors on spatial memory. This knowledge is necessary to further improve the comfort and effectiveness of visual search in large unstructured picture collections using 3D visualisations.
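The contrast between the two cylindrical layouts can be sketched geometrically. The fragment below is a minimal illustration, not code from the study (radius, camera distance and column spacing are arbitrary assumptions): it computes the apparent width of a thumbnail column under perspective projection, once for a camera on the cylinder axis (inner view, IV) and once for a camera outside the cylinder (outer view, OV).

```python
import math

def inner_column_width(theta, dtheta, focal=1.0):
    """Apparent (projected) width of a column at azimuth `theta`,
    seen by a camera on the cylinder axis (inner view, IV).
    A point at azimuth t projects to x = focal * tan(t)."""
    return focal * (math.tan(theta + dtheta) - math.tan(theta))

def outer_column_width(theta, dtheta, radius, dist, focal=1.0):
    """Apparent width of the same column seen by a camera outside
    the cylinder, at distance `dist` > `radius` from its axis
    (outer view, OV)."""
    def proj(t):
        # Depth of a cylinder point from the camera: dist - radius*cos(t).
        return focal * radius * math.sin(t) / (dist - radius * math.cos(t))
    return proj(theta + dtheta) - proj(theta)

# Columns 5 degrees apart; compare a central column with a lateral one.
step = math.radians(5)
central, lateral = 0.0, math.radians(60)

# IV: lateral columns appear enlarged relative to central ones.
assert inner_column_width(lateral, step) > inner_column_width(central, step)
# OV: central columns appear enlarged, lateral ones dwindle.
assert outer_column_width(lateral, step, 1.0, 3.0) < \
       outer_column_width(central, step, 1.0, 3.0)
```

The two inequalities reproduce, in miniature, the distortion contrast the abstract describes: IV enlarges lateral columns, while OV enlarges central columns and shrinks lateral ones.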
August 2009 · 487 Reads · 28 Citations
This article presents a review of the link between social communication difficulties and altered executive functions (cognitive functions involved in the control of behavior, such as planning, inhibition, working memory, etc.) in high functioning autism. We first analyze the difficulties experienced by people with high functioning autism in processing contextual cues during social conversations. We extend this approach to a broader scope including verbal and non-verbal communication. Indeed, understanding social interactions requires integrating and connecting transient multimodal social cues. The article then focuses on the alterations reported in high functioning autism concerning the ability to process facial expressions during an ongoing conversation. This ability involves attentional resources that are discussed in light of the executive dysfunction attributed to autism. On this basis, we hypothesize that the difficulties in appreciating the synergy between facial expressions and speech could be linked to impairments in shifting attention from one to the other. A new experimental paradigm designed for testing this hypothesis is presented. It relies on a virtual environment system based on eye-tracking technology enabling users to control the visual display via their gaze. The intent behind this apparatus is to compensate for the deficits in shifting attention attributed to autism. We finally describe the procedure devised for testing this new virtual environment paradigm and conclude on its potential therapeutic use.
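A gaze-contingent display of the kind described reduces to two small operations: mapping each gaze sample to a screen region, and counting shifts of attention between regions. The sketch below is a hypothetical illustration; the region names and coordinates are invented and not taken from the actual paradigm.

```python
def active_region(gaze, regions):
    """Return the name of the region containing the gaze point,
    or None if the gaze falls outside every region."""
    gx, gy = gaze
    for name, (x, y, w, h) in regions.items():
        if x <= gx < x + w and y <= gy < y + h:
            return name
    return None

def count_shifts(labels):
    """Count transitions between distinct regions in a sequence of
    per-sample region labels; None samples (off-region) are ignored."""
    shifts, prev = 0, None
    for label in labels:
        if label is not None:
            if prev is not None and label != prev:
                shifts += 1
            prev = label
    return shifts

# Hypothetical layout: the speaker's face and a text area on screen.
regions = {"face": (0, 0, 100, 100), "text": (0, 200, 100, 100)}
samples = [active_region(g, regions)
           for g in [(50, 50), (60, 40), (50, 250), (500, 500), (70, 30)]]
assert samples == ["face", "face", "text", None, "face"]
assert count_shifts(samples) == 2  # face -> text -> face
```

A shift count of this kind is one plausible way to operationalise the "shifting attention from one to the other" that the hypothesis concerns.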
July 2009 · 18 Reads · 2 Citations
Lecture Notes in Computer Science
We present and discuss the results of two empirical studies that aim at assessing the contributions of three enhancements to the effectiveness and efficiency of online help: adaptive-proactive user support (APH), multimodal (speech and graphics) messages (MH), and embodied conversational agents (ECAs). These three enhancements to online help were implemented using the Wizard of Oz technique. The first study (E1) compares MH with APH, while the second study (E2) compares MH with embodied help (EH). Half of the participants in E1 (8) used MH, and the other half used APH. Most participants preferred the variant they had used, whether MH or APH, to standard help systems which implement text and graphics messages (like APH). In particular, proactive assistance was much appreciated. However, higher performances were achieved with MH. A majority of the 22 participants in E2 preferred EH to MH, and were of the opinion that the presence of an ECA, a talking head in this particular case, has the potential to improve help effectiveness and efficiency by increasing novice users’ self-confidence. However, performances with the two systems were similar, save for the help consultation rate, which was higher with EH. Longitudinal (usage) studies are needed to confirm the effects of these three enhancements on novice users’ judgments and performances.
October 2008 · 26 Reads · 1 Citation
An empirical study is presented which aims at assessing the possible effects of embodiment on online help effectiveness and attraction. 22 undergraduate students who were unfamiliar with animation creation software created two simple animations with Flash, using two multimodal online help agents, EH and UH, one per animation. Both help agents used the same database of speech and graphics messages; EH was personified using a talking head while UH was not embodied. EH and UH presentation order was counter-balanced between participants. Subjective judgments elicited through verbal and nonverbal questionnaires indicate that the presence of the ECA was well accepted by participants and its influence on help effectiveness perceived as positive. Analysis of eye tracking data indicates that the ECA actually attracted their visual attention and interest, since they glanced at it from the beginning to the end of the animation creation (75 fixations during 40 min.). Contrastingly, post-test marks and interaction traces suggest that the ECA's presence had no perceivable effect on concept or skill learning and task execution. It only encouraged help consultation.
July 2008 · 15 Reads
Lecture Notes in Computer Science
Two groups of 8 participants experimented with two enhancements of standard online help for the general public during one hour: adaptive proactive (AP) assistance and multimodal user support. Proactive help, that is, anticipation of the user’s information needs, raised very positive judgments, while dynamic adaptation to the user’s current knowledge and skills went almost unnoticed. Speech and graphics (SG) messages were also well accepted, based on the observation that one can go on interacting with the software application while listening to instructions. However, several participants observed that the transience and linearity of speech limited the usability of this modality. Analysis of interaction logs and post-tests shows that procedural and semantic knowledge acquisition was higher with SG help than with AP assistance. Contrastingly, AP help was consulted more often than SG user support. Results also suggest that proactive online help may reduce the effectiveness of autonomous “learning by doing” acquisition of unfamiliar software concepts and procedures.
July 2008 · 85 Reads · 20 Citations
The paper reports the main results and conclusions of the analysis of eight dialogues between an expert and eight novice users of Word. The analysis of the expert's requests for contextual information, together with her explicit references to, and implicit use of, the various types of contexts, indicates that the types of contextual information she exploits most are the progress of the current task execution, the software's current state and the novice's current intention. Her help strategy, which differs greatly from standard didactic computer-aided instruction approaches, encourages novices to adopt a "learning by doing" strategy through helping them to achieve the tasks which motivate their use of the software. For defining the informational content of help messages, this strategy relies mostly on the short-term context and on a dynamic model of the novice's activities and goals, rather than on an individual (dynamic) or generic (static) cognitive user model.
October 2007 · 15 Reads · 3 Citations
Input multimodality combining speech and hand gestures has motivated numerous usability studies. Contrastingly, issues relating to the design and ergonomic evaluation of multimodal output messages combining speech with visual modalities have not yet been addressed extensively. The experimental study presented here addresses one of these issues. Its aim is to assess the actual efficiency and usability of oral system messages including brief spatial information for helping users to locate objects on crowded displays rapidly. Target presentation mode, scene spatial structure and task difficulty were chosen as independent variables. Two conditions were defined: the visual target presentation mode (VP condition) and the multimodal target presentation mode (MP condition). Each participant carried out two blocks of visual search tasks (120 tasks per block, and one block per condition). Target presentation mode, scene structure and task difficulty were all found to be significant factors. Multimodal target presentation proved to be more efficient than visual target presentation. In addition, participants expressed very positive judgments on multimodal target presentations, which were preferred to visual presentations by a majority of participants. Moreover, the contribution of spatial messages to visual search speed and accuracy was influenced by scene spatial structure and task difficulty: (i) messages improved search efficiency to a lesser extent for 2D array layouts than for some other symmetrical layouts, although the use of 2D arrays for displaying pictures is currently prevailing; (ii) message usefulness increased with task difficulty. Most of these results are statistically significant.
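A within-subject design with one block per condition, as described above, is normally run with condition order counterbalanced across participants. A minimal sketch of such an assignment (the participant identifiers are hypothetical; only the VP/MP condition labels come from the abstract):

```python
def counterbalance(participants, conditions=("VP", "MP")):
    """Alternate the order of two conditions across participants,
    so that half start with the first condition and half with the
    second."""
    a, b = conditions
    return {p: (a, b) if i % 2 == 0 else (b, a)
            for i, p in enumerate(participants)}

orders = counterbalance(["p01", "p02", "p03", "p04"])
assert orders["p01"] == ("VP", "MP")
assert orders["p02"] == ("MP", "VP")
# With an even number of participants, the two orders are balanced.
assert sum(o[0] == "VP" for o in orders.values()) == 2
```

Counterbalancing of this kind separates condition effects from order and practice effects, which matters here since each participant completes 120 tasks per block.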
October 2007 · 17 Reads · 5 Citations
A preliminary experimental study is presented that aims at eliciting the contribution of oral messages to facilitating visual search tasks on crowded visual displays. Results of quantitative and qualitative analyses suggest that appropriate verbal messages can improve both target selection time and accuracy. In particular, multimodal messages including a visual presentation of the isolated target together with absolute spatial oral information on its location in the displayed scene seem most effective. These messages also received top ratings from most subjects.
... Progress in affective computing is overcoming these constraints. Embodied conversational agents (ECAs; Cassell et al., 2001) (Grynszpan et al., 2011; Marcos-Pablos et al., 2016). For instance, Grynszpan et al. (2011) showed that when interacting with a virtual agent, people with high functioning autism spectrum disorders showed a weaker modulation of eye movements, suggesting impairments in self-monitoring of gaze (for a debate regarding autism and the MNS, see Heyes et al., 2022). ...
December 2012
Journal of Physical Therapy Education
... And while verbal language is never disembodied (Bouvet, 2001), gesture can be dissociated from speech while still proving highly effective. While speech serves to signal the progressive, precise adjustments needed to locate a target (Carbonell et al., 1997), gesture is better suited to explanations and indications relating to actions that can be performed in space (Rapp et al., 2006). After learning how to assemble the parts of an object, subjects pass on their new knowledge to fellow students more effectively when they use only gestures to do so (Lozano and Tversky, 2006). ...
June 1997
Le travail humain
... The answer to this question may seem obvious. However, the results of a first experiment presented in (Carbonell and Kieffer, 2002 and 2005) showed that oral indications of the target's position in the image had no effect on detection times. As for the possible influence of this type of information on localisation accuracy, the experimental protocol adopted did not allow it to be studied. ...
January 2002
... It is linked, on the one hand, to characteristics of the user's behaviour (for a survey of the literature on this question, see Capobianco and Carbonell, 2002): ...
January 2002
... This phenomenon was confirmed by an experiment by Hasson (Hasson, 2007), which showed that if the assistant agent forces users to produce several speech turns by rephrasing or clarifying their question, they accept at most three turns before abandoning the help system (the 'Clippy Effect'). This was also corroborated by (Capobianco et al., 2002). Consequently, most requests can be processed in isolation, that is, outside the dialogue context, but not outside the task context, since they contain indexicals tied to the actions in progress. ...
March 2002
... This need became apparent when it was realised that users were systematically turning away from the various help systems offered to them. It has been shown that when novice users need assistance, they tend to prefer turning to "a friend over the shoulder", in the terminology of (Capobianco et al., 2001), rather than resorting to their computer's help system. This disaffection highlights the acceptability factor, as defined for example by Davis (1989), which is now considered an obstacle to the development of new assistance tools. ...
July 2008
... They would be reluctant to explore new software by trial and error in order to learn how it functions effectively. This preference was confirmed by the analysis of the types of aid provided by human experts to help novices (Capobianco & Carbonell, 2003). Nevertheless, such a procedural orientation of assistance may result in a too restricted mental model of technology functioning, impeding the mastery of the use of the system and the transfer of previous knowledge or skills. ...
October 2003
... Kuriakose and Lahiri (2015) suggest a larger physiological alteration associated with anxiety when confronted with avatars' emotions or when the situations are difficult to interpret. A series of papers deal with the relationship between learning social communication and visual contact for ASD students (Mineo et al. 2009; Alcorn et al. 2011; Grynszpan et al. 2009, 2012; Lahiri et al. 2011; Bekele et al. 2013; Georgescu et al. 2013), showing mixed results: whilst some observed improvements in visual cues, visual contact and attention during conversation (Mineo et al. 2009; Lahiri et al. 2011), as well as positive reactions to the avatar's body language (Alcorn et al. 2011), Grynszpan et al. (2009) noticed that those improvements were only maintained if external manipulations were introduced. Moreover, Georgescu et al. (2013) found that ASD students did not change their opinion on an avatar's personality depending on the time of interaction, whereas neurotypical students did. ...
August 2009
... For example, Lallé and Conati (2019) demonstrated that users benefit from a system-driven customization of the information content presented in an information visualization system, dependent on the user characteristics of visualization literacy and locus of control. Christmann et al. (2010) proposed a gaze-controlled search interface in which distortions of the images are adapted to the user's visual capabilities. To design systems that adapt to the user's mental effort and user performance, Buettner et al. (2018) found that difficult search tasks contribute to more pupil diameter variability, which is conceptualized as a measurement of interest. ...
Reference:
Search Interface Design and Evaluation
September 2010
Interacting with Computers
... Volkel et al. used this method to elicit dialogues for interacting with voice assistants [72]. Other researchers used user elicitation to study speech interaction as part of a multimodal human-computer interaction interface [44,60,61]. In their study, Hoffmann et al. elicited voice commands along with surface gestures and mid-air gestures for interacting with a smart home, and they found that people preferred speech interaction over mid-air gestures [26]. ...
January 2000