Conference Paper

AUGUR: providing context-aware interaction support.

DOI: 10.1145/1570433.1570458 Conference: Proceedings of the 1st ACM SIGCHI Symposium on Engineering Interactive Computing Systems, EICS 2009, Pittsburgh, PA, USA, July 15-17, 2009
Source: DBLP

ABSTRACT As user interfaces become more and more complex and feature-laden, usability tends to decrease. One possibility to counter this effect is intelligent support mechanisms. In this paper, we present AUGUR, a system that provides context-aware interaction support for navigating and entering data in arbitrary form-based web applications. We further report the results of an initial user study we performed to evaluate the usability of such context-aware interaction support. AUGUR combines several novel approaches: (i) it considers various context sources for providing interaction support, and (ii) it contains a context store that mimics the user's short-term memory to keep track of the context information that currently influences the user's interactions. AUGUR thereby combines the advantages of the three main approaches for supporting the user's interactions, i.e., knowledge-based systems, learning agents, and end-user programming.

  • ABSTRACT: As user interfaces become more and more complex and feature-laden, usability tends to decrease. One possibility to counter this effect is intelligent user interfaces (IUIs) that support the user's interactions. In this paper, we give an overview of design challenges identified in the literature that have to be faced when developing user-adaptive IUIs, along with possible solutions. Thereby, we place special emphasis on design principles for successful adaptivity.
  • ABSTRACT: While the usability of voice-based Web navigation has been steadily improving, it is still not as easy for users with visual impairments as it is for sighted users. One reason is that sequential voice representation can only convey a limited amount of information at a time. Another challenge comes from the fact that current voice browsers omit various visual cues such as text styles and page structures, and lack meaningful feedback about the current focus. To address these issues, we created Sasayaki, an intelligent voice-based user agent that augments the primary voice output of a voice browser with a secondary voice that whispers contextually relevant information as appropriate or in response to user requests. A prototype has been implemented as a plug-in for a voice browser. The results from a pilot study show that our Sasayaki agent is able to improve users' information search task time and their overall confidence level. We believe that our intelligent voice-based agent has great potential to enrich the Web browsing experiences of users with visual impairments.
    Proceedings of the International Conference on Human Factors in Computing Systems, CHI 2011, Vancouver, BC, Canada, May 7-12, 2011; 01/2011
  • ABSTRACT: Online Web applications have become widespread and have made our daily life more convenient. However, older adults often find such applications inaccessible because of age-related changes to their physical and cognitive abilities. Two of the reasons that older adults may shy away from the Web are fears of the unknown and of the consequences of incorrect actions. We are extending a voice-based augmentation technique originally developed for blind users. We want to reduce the cognitive load on older adults by providing contextual support. An experiment was conducted to evaluate how voice augmentation can support elderly users in using Web applications. Ten older adults participated in our study, and their subjective evaluations showed how the system gave them confidence in completing Web forms. We believe that voice augmentation may help address the users' concerns arising from their low confidence levels.