International Journal of Human-Computer Studies

Published by Elsevier BV

Online ISSN: 1095-9300 · Print ISSN: 1071-5819

Articles


Easing semantically enriched information retrieval – An interactive semi-automatic annotation system for medical documents

June 2010 · 59 Reads

Mapping medical concepts from a terminology system to the concepts in the narrative text of a medical document is necessary to provide semantically accurate information for further processing steps. The MetaMap Transfer (MMTx) program is a semantic annotation system that generates a rough mapping of concepts from the Unified Medical Language System (UMLS) Metathesaurus to free medical text, but this mapping still contains erroneous and ambiguous bits of information. Since manually correcting the mapping is an extremely cumbersome and time-consuming task, we have developed the MapFace editor. The editor provides a convenient way of navigating the annotated information gained from the MMTx output and enables users to correct this information on both a conceptual and a syntactical level, thus greatly facilitating the handling of the MMTx program. Additionally, the editor provides enhanced visualization features to support the correct interpretation of medical concepts within the text. We paid special attention to ensuring that the MapFace editor is an intuitive and convenient tool to work with. Therefore, we recently conducted a usability study in order to create a well-founded background serving as a starting point for further improvement of the editor's usability.

Automation-induced monitoring inefficiency: Role of display location

February 1997 · 246 Reads

Operators can be poor monitors of automation if they are engaged concurrently in other tasks. However, in previous studies of this phenomenon the automated task was always presented in the periphery, away from the primary manual tasks that were centrally displayed. In this study we examined whether centrally locating an automated task would boost monitoring performance during a flight-simulation task consisting of system monitoring, tracking and fuel resource management sub-tasks. Twelve nonpilot subjects were required to perform the tracking and fuel management tasks manually while watching the automated system monitoring task for occasional failures. The automation reliability was constant at 87.5% for six subjects and variable (alternating between 87.5% and 56.25%) for the other six subjects. Each subject completed four 30 min sessions over a period of 2 days. In each automation reliability condition the automation routine was disabled for the last 20 min of the fourth session in order to simulate catastrophic automation failure (0% reliability). Monitoring for automation failure was inefficient when automation reliability was constant but not when it varied over time, replicating previous results. Furthermore, there was no evidence of a resource or speed-accuracy trade-off between tasks. Thus, automation-induced failures of monitoring cannot be prevented by centrally locating the automated task.

Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking

November 2012 · 120 Reads

Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel's zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and the verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered, and a design intervention that prevents this problem from occurring is presented. We discuss how our method could be used to evaluate larger applications and recommend future paths of development.

Cognitive Systems Engineering: New Wine in New Bottles

September 1999 · 229 Reads

This paper presents an approach to the description and analysis of complex Man-Machine Systems (MMSs) called Cognitive Systems Engineering (CSE). In contrast to traditional approaches to the study of man-machine systems which mainly operate on the physical and physiological level, CSE operates on the level of cognitive functions. Instead of viewing an MMS as decomposable by mechanistic principles, CSE introduces the concept of a cognitive system: an adaptive system which functions using knowledge about itself and the environment in the planning and modification of actions. Operators are generally acknowledged to use a model of the system (machine) with which they work. Similarly, the machine has an image of the operator. The designer of an MMS must recognize this, and strive to obtain a match between the machine's image and the user characteristics on a cognitive level, rather than just on the level of physical functions. This article gives a presentation of what cognitive systems are, and of how CSE can contribute to the design of an MMS, from cognitive task analysis to final evaluation.

A Taxonomy of Representation Strategies in Iconic Communication

August 2012 · 261 Reads

Predicting whether the intended audience will be able to recognize the meaning of an icon or pictograph is not an easy task. Many icon recognition studies have been conducted in the past. However, their findings cannot be generalized to other icons that were not included in the study, which, we argue, is their main limitation. In this paper, we propose a comprehensive taxonomy of icons that is intended to enable the generalization of the findings of recognition studies. To accomplish this, we analyzed a sample of more than eight hundred icons according to three axes: lexical category, semantic category, and representation strategy. Three basic representation strategies were identified: visual similarity; semantic association; and arbitrary convention. These representation strategies are in agreement with the strategies identified in previous taxonomies. However, a greater number of subcategories of these strategies were identified. Our results also indicate that the lexical and semantic attributes of a concept influence the choice of representation strategy.

The effects of motion and stereopsis on three-dimensional visualization

December 1997 · 32 Reads

Previous studies have demonstrated that motion cues combined with stereoscopic viewing can enhance the perception of three-dimensional objects displayed on a two-dimensional computer screen. Using a variant of the mental rotation paradigm, subjects view pairs of object images presented on a computer terminal and judge whether the objects are the same or different. The effects of four variables on the accuracy and speed of decision performances are assessed: stereo vs. mono viewing, controlled vs. uncontrolled object motion, cube vs. sphere construction and wire frame vs. solid surface characteristic. Viewing the objects as three-dimensional images results in more accurate and faster decision performances. Furthermore, accuracy improves although response time increases when subjects control the object motion. Subjects are equally accurate comparing wire frame and solid images, although they take longer comparing wire frame images. The cube-based or sphere-based object construction has no impact on either decision accuracy or response time.

The improvement of human-centred processes—facing the challenge and reaping the benefit of ISO 13407

October 2001 · 275 Reads

Human-centred design processes for interactive systems are defined in ISO 13407 and the associated ISO TR 18529. The publication of these standards represents a maturing of the discipline of user-centred design. The systems development community sees that (at last) Human Factors has processes which can be managed and integrated with existing project processes. This internationally agreed set of human-centred design processes provides a definition of the capability that an organization must possess in order to implement user-centred design effectively. It can also be used to assess the extent to which a particular development project employs user-centred design. As such, it presents a challenge to the Human Factors community, and indeed a definition of good practice may even be regarded by some as an unwelcome constraint. This paper presents the background to the process-level definition of user-centred design and describes how it relates to current practice. The challenges, benefits and use of a defined human-centred design process are presented. The implications for Human Factors and other disciplines are discussed. In Appendices A–D, the process terminology and the contents of ISO 13407 and ISO TR 18529 are described in more detail, and three examples are given (in Appendix D) of using this process improvement approach to improve the actual design methods in three organizations.

The evolution of US state government home pages from 1997 to 2002

October 2002 · 43 Reads

We examined the home pages of the 50 US states over the years 1997–2002 to discover the dimensions underlying people's perceptions of state government home pages, to observe how those dimensions have changed over the years, to identify different types of state home pages, and to see how these types have changed. We found that three primary dimensions explain the variation in perceptions of home pages. These are the layout of the page, its navigation support, and its information density. Over the years, variation in navigation support declined and variation in information density increased. We discovered that four types of state government home page have existed continuously from 1997 to 2001. These are the ‘Long List of Text Links’, the ‘Simple Rectangle’, the ‘Short L’, and the ‘High Density/Long L’. To this taxonomy, two other page types can be added: the ‘Portal’ page and the ‘Boxes’ page. The taxonomy we have identified allows for a better understanding of the design of US state home pages, and may generalize to other categories of home pages.

The visual characteristics of avatars in computer-mediated communication: Comparison of Internet Relay Chat and Instant Messenger as of 2003

December 2006 · 102 Reads

This research focuses on computer-mediated communication where users are represented by a graphical avatar. An avatar represents a user's self-identity and desire for self-disclosure. Therefore, the claim is made that there is a relationship between the characteristics of a medium and the choice of avatar. This study supports the claim by examining the difference between Internet Relay Chat (IRC) avatars and Instant Messenger (IM) avatars as of 2003, when both media had distinct characteristics and popular avatar services in Korea. Users of IRC are generally anonymous and involved in topic-based group discussions, whereas users of IM are known by their “real” names and communicate via one-on-one chitchat. We found that avatars as symbols for users can have different characteristics in terms of self-identity and self-disclosure in different media. Gender is found to have a significant moderating effect on avatar usage, whereas age shows a mixed moderating effect.

Colour appeal in website design within and across cultures: A multi-method evaluation

February 2010 · 982 Reads

Colour has the potential to elicit emotions or behaviors, yet there is little research in which colour treatments in website design are systematically tested. Little is known about how colour affects trust or satisfaction on the part of the viewer. Although the Internet is increasingly global, few systematic studies have been undertaken in which the impact of colour on culturally diverse viewers is investigated in website design. In this research three website colour treatments are tested across three culturally distinct viewer groups for their impact on user trust, satisfaction, and e-loyalty. To gather data, a rich multi-method approach is used including eye-tracking, a survey, and interviews. Results reveal that website colour appeal is a significant determinant for website trust and satisfaction with differences noted across cultures. The findings have practical value for web marketers and interface designers concerning effective colour use in website development.

Consistency of personality in interactive characters: Verbal cues, non-verbal cues, and user characteristics

August 2000 · 874 Reads

This study examined whether people would interpret and respond to verbal (text) and non-verbal (posture) cues of personality in interactive characters just as they interpret cues from a person. In a balanced, between-subjects experiment (N=40), introverted and extroverted participants were randomly paired with one of two types of consistent computer characters, which (1) matched the participant's personality in both verbal and non-verbal cues or (2) completely mismatched the participant, or one of two types of inconsistent characters, which (3) matched on verbal cues but not non-verbal cues or (4) matched on non-verbal but not verbal cues. Participants accurately identified the character's personality type in their assessment of its verbal and non-verbal cues. Preference was for consistent characters, regardless of participant personality. Consistent characters also had greater influence over people's behavior: interaction with consistent characters led to greater changes in people's answers than interaction with inconsistent characters. Finally, contrary to previous research, participants tended to prefer a character whose personality was complementary, rather than similar, to their own. This study demonstrates the importance of orchestrating the overall set of cues that an interactive computer character presents to the computer user, and emphasizes the need for consistency among these cues.

Towards a Standard for Pointing Device Evaluation, Perspectives on 27 Years of Fitts' Law Research in HCI

December 2004 · 372 Reads

This paper makes seven recommendations to HCI researchers wishing to construct Fitts' law models, either for movement time prediction or for the comparison of conditions in an experiment. These seven recommendations support (and in some cases supplement) the methods described in the recent ISO 9241-9 standard on the evaluation of pointing devices. In addition to improving the robustness of Fitts' law models, these recommendations (if widely employed) will improve the comparability and consistency of forthcoming publications. Arguments to support these recommendations are presented, as are concise reviews of 24 published Fitts' law models of the mouse and 9 studies that used the new ISO standard.
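
As background for these recommendations, the sketch below shows the Shannon formulation of Fitts' law and an ISO 9241-9 style throughput computation. The regression coefficients and target geometry are illustrative values, not taken from the paper, and the nominal rather than the effective index of difficulty is used for brevity.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' law prediction MT = a + b * ID, with illustrative
    regression coefficients a (seconds) and b (seconds/bit)."""
    return a + b * index_of_difficulty(distance, width)

def throughput(distance, width, observed_mt):
    """ISO 9241-9 style throughput TP = ID / MT, in bits/s. The
    standard computes an *effective* ID from endpoint scatter; the
    nominal ID is used here to keep the sketch short."""
    return index_of_difficulty(distance, width) / observed_mt

# Example: a 256-pixel movement to a 32-pixel-wide target.
ID = index_of_difficulty(256, 32)   # ~3.17 bits
MT = movement_time(256, 32)         # ~0.58 s predicted
print(f"ID = {ID:.2f} bits, MT = {MT:.2f} s, TP = {throughput(256, 32, MT):.2f} bits/s")
```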

Initial examination of ease of use for 2D and 3D information visualizations of Web content

June 2000 · 185 Reads

We present a discussion and initial empirical investigation of user-interface designs for a set of three Web browsers. The target end-user population we identified were experienced software engineers who maintained large Web sites or portals. The user study demonstrated the strengths and weaknesses of two conventional 2D browsers for this target user, as well as that of XML3D, a novel browser that integrates an interactive 3D hyperbolic graph view with a more traditional 2D list view of the data. A standard collapse/expand tree browser and a Web-based hierarchical categorization similar to Yahoo! were competitively evaluated against XML3D. No reliable difference between the two 2D browsers was observed. However, the results showed clear differences between XML3D and the 2D user interfaces combined. With XML3D, participants performed search tasks within existing categories reliably faster with no decline in the quality of their responses. It was informally observed that integrating the ability to view the overall structure of the information space with the ability to easily assess local and global relationships was key to successful search performance. XML3D was the only tool of the three that efficiently showed the overall structure within one visualization. The XML3D browser accomplished this by combining a 3D graph layout view with an accompanying 2D list view. Users did opt to use the 2D user-interface components of XML3D during new category search tasks, and the XML3D performance advantage was no longer obtained in those conditions. In addition, there were no reliable differences in overall user satisfaction across the three user-interface designs. Since we observed subjects using the XML3D features differently depending on the kind of search task, future studies should explore optimal ways of integrating the use of novel focus+context visualizations and 2D lists for effective information retrieval. The contribution of this paper is that it includes empirical data to demonstrate where novel focus+context views might benefit experienced users over and above more conventional user-interface techniques, in addition to where design improvements are warranted.

Dynamic picking system for 3D seismic data: Design and evaluation

July 2009 · 83 Reads

In the framework of data interpretation for petroleum exploration, this paper makes two contributions to visual exploration aimed at manually segmenting surfaces embedded in volumetric data. Resulting from a user-centered design approach, the first contribution, dynamic picking, is a new method of viewing slices dedicated to surface tracking, i.e. fault-picking, in large 3D seismic data sets. The proposed method establishes a new paradigm of interaction, breaking with the conventional 2D slices method usually used by geoscientists. Based on the 2D+time visualization method, dynamic picking facilitates the localization of faults by taking advantage of the intrinsic ability of the human visual system to detect dynamic changes in textured data. The second, the projective slice, is a focus+context visualization technique that facilitates the anticipation of upcoming slices over the sloping 3D surface. From the reported experimental results, dynamic picking leads to a good compromise between fitting precision and completeness of picking, while the projective slice significantly reduces workload for an equivalent level of precision.

Navigation and orientation in 3D user interfaces: The impact of navigation aids and landmarks

September 2004 · 305 Reads

This study examined how users acquire spatial cognition in 3D user interfaces depicting an on-screen virtual environment. The study was divided into two main phases: learning and a test of learning transfer. The learning phase consisted of participants directly navigating (searching for objects) in the on-screen virtual environment using one of two navigation aids: a visual map or a route list. In addition, there were two virtual environments, one with landmarks and the other without landmarks. Learning transfer was examined by testing both navigation and orientation tasks (relative-direction pointing) in the environment without the use of the navigation aids. Findings show that while initial navigation with a map appeared to be harder, with longer navigation times and more navigation steps than with a route list, this difference became insignificant by the end of the learning phase. Moreover, performance degradation upon removal of the navigation aids was smaller for those who navigated with a map as compared to a route list. A similar pattern was found for the impact of landmarks. Initial navigation with landmarks appeared to be harder than without landmarks, but this difference became insignificant by the end of the learning phase. Moreover, performance degradation upon removal of the navigation aid was smaller for those who navigated with landmarks as compared to no landmarks. Finally, the combined impact of the navigation aid used in learning and the presence of landmarks was primarily evident in the orientation task. Relative-direction pointing was better for those who learnt with a map without landmarks, or with a route list with landmarks. The findings are discussed in terms of the impact of navigation aids and landmarks on the acquisition of route and survey knowledge in spatial cognition. In addition, some gender differences are discussed in terms of different strategies in spatial cognition acquisition.

The Allobrain: An interactive, stereographic, 3D audio, immersive virtual world

November 2009 · 335 Reads

This paper describes the creation of the Allobrain project, an interactive, stereographic, 3D audio, immersive virtual world constructed from fMRI brain data and installed in the Allosphere, one of the largest virtual reality spaces in existence. This paper portrays the role the Allobrain project played as an artwork driving the technological infrastructure of the Allosphere. The construction of the Cosm toolkit software for prototyping the Allobrain and other interactive, stereographic, 3D audio, immersive virtual worlds in the Allosphere is described in detail. Aesthetic considerations of the Allobrain project are discussed in relation to world-making as a means to understand and explore large data sets.

Navidget for 3D Interaction: Camera Positioning and Further Uses

March 2009 · 72 Reads

This paper presents an extended version of Navidget, a new interaction technique for camera positioning in 3D environments. The technique derives from point-of-interest (POI) approaches, where the endpoint of a trajectory is selected for smooth camera motions. Unlike existing POI techniques, Navidget does not attempt to automatically estimate where and how the user wants to move. Instead, it provides good feedback and control for fast and easy interactive camera positioning. Navidget can also be useful for distant inspection when used with a preview window. This 3D user interface is based entirely on 2D inputs. As a result, it is appropriate for a wide variety of visualization systems, from small handheld devices to large interactive displays. A user study on a Tablet PC shows that the usability of Navidget is very good for both expert and novice users. The technique is more appropriate than conventional 3D viewer interfaces in numerous 3D camera positioning tasks. Beyond these tasks, the Navidget approach can be useful for further purposes such as collaborative work and animation.

Multimodal selection techniques for dense and occluded 3D virtual environments

March 2009 · 109 Reads

Object selection is a primary interaction technique which must be supported by any interactive three-dimensional virtual reality application. Although numerous techniques exist, few have been designed to support the selection of objects in dense target environments, or the selection of objects which are occluded from the user's viewpoint. There is thus a limited understanding of how these important factors will affect selection performance. In this paper, we present a set of design guidelines and strategies to aid the development of selection techniques which can compensate for environment density and target visibility. Based on these guidelines, we present new forms of the ray casting and bubble cursor selection techniques, augmented with visual, audio, and haptic feedback, to support selection within dense and occluded 3D target environments. We perform a series of experiments to evaluate these new techniques, varying both the environment density and target visibility. The results provide an initial understanding of how these factors affect selection performance. Furthermore, the results showed that our new techniques adequately allowed users to select targets which were not visible from their initial viewpoint. The audio and haptic feedback did not provide significant improvements, and our analysis indicated that the visual feedback we introduced played the most critical role in aiding the selection task.
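
For context on one of the augmented techniques, the bubble cursor's core rule is to select the target whose edge lies closest to the cursor, so that exactly one target is always capturable even in dense layouts. The following minimal 2D sketch, with hypothetical circular targets, illustrates that rule; the paper's 3D, feedback-augmented variants are substantially more involved.

```python
import math

def bubble_cursor_pick(cursor, targets):
    """Basic bubble-cursor rule: select the target minimizing the
    distance from the cursor to the target's edge (centre distance
    minus radius). cursor: (x, y); targets: list of (x, y, radius)."""
    def edge_distance(target):
        x, y, r = target
        return math.hypot(cursor[0] - x, cursor[1] - y) - r
    return min(targets, key=edge_distance)

# Hypothetical dense layout of three circular targets.
targets = [(10, 10, 2), (14, 10, 1), (30, 25, 5)]
print(bubble_cursor_pick((13, 10), targets))  # -> (14, 10, 1): its edge is nearest
```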

Navigation in 3D virtual environments: Effects of user experience and location-pointing navigation aids

November 2007 · 726 Reads

In this paper, we describe the results of an experimental study whose objective was twofold: (1) comparing three navigation aids that help users perform wayfinding tasks in desktop virtual environments (VEs) by pointing out the location of objects or places; (2) evaluating the effects of user experience with 3D desktop VEs on their effectiveness with the considered navigation aids. In particular, we compared navigation performance (in terms of total time to complete an informed search task) of 48 users divided into two groups: subjects in one group had experience in navigating 3D VEs while subjects in the other group did not. The experiment comprised four conditions that differed in the navigation aid employed. The first and second conditions exploited 3D and 2D arrows, respectively, to point towards objects that users had to reach; in the third condition, a radar metaphor was employed to show the location of objects in the VE; the fourth condition was a control condition with no location-pointing navigation aid available. The search task was performed both in a VE representing an outdoor geographic area and in an abstract VE that did not resemble any familiar environment. For each VE, users were also asked to order the four conditions according to their preference. Results show that the navigation aid based on 3D arrows outperformed the others, both in terms of user performance and user preference, except when it was used by experienced users in the geographic VE; in that case, it was as effective as the others. Finally, in the geographic VE, experienced users took significantly less time than inexperienced users to perform the informed search, while in the abstract VE the difference was significant only in the control and radar conditions. From a more general perspective, our study highlights the need to take user experience in navigating VEs specifically into consideration when designing navigation aids and evaluating their effectiveness.

Usability principles and best practices for the user interface design of complex 3D architectural design and engineering tools

February 2010 · 364 Reads

This study proposes usability principles for the user interface (UI) design of complex 3D parametric architectural design and engineering tools. Numerous usability principles have been developed for generic desktop or web applications. The authors tried to apply existing usability principles as guidelines for evaluating complex 3D design and engineering applications, but the principles proved too generic and high-level to be useful as design or evaluation guidelines. The authors, each with between 10 and 30 years of experience with various CAD systems, selected and reviewed 10 state-of-the-art 3D parametric design and engineering applications and captured what they considered best practices as screenshots and videos. The collected best practices were reviewed through a series of discussion sessions, during which the UI design principles underlying them were characterized in line with existing UI principles. Based on the best practices and the derived common UI principles, a new set of refined and detailed UI principles is proposed for improving and evaluating 3D parametric engineering design tools in the future.

Supporting serendipity: Using ambient intelligence to augment user exploration for data mining and web browsing

May 2007 · 205 Reads

Serendipity is the making of fortunate discoveries by accident, and is one of the cornerstones of scientific progress. In today's world of digital data and media, there is now a vast quantity of material that we could potentially encounter, and so there is an increased opportunity of being able to discover interesting things. However, the availability of material does not imply that we will be able to actually find it; the sheer quantity of data militates against our being able to discover the interesting nuggets.

The effects of post-adoption beliefs on the expectation-confirmation model for information technology continuance

January 2006 · 2,327 Reads

The expectation-confirmation model (ECM) of IT continuance is a model for investigating continued information technology (IT) usage behavior. This paper reports on a study that attempts to expand the set of post-adoption beliefs in the ECM, in order to extend the application of the ECM beyond an instrumental focus. The expanded ECM, incorporating the post-adoption beliefs of perceived usefulness, perceived enjoyment and perceived ease of use, was empirically validated with data collected from an online survey of 811 existing users of mobile Internet services. The data analysis showed that the expanded ECM has good explanatory power (R² = 57.6% of continued IT usage intention and R² = 67.8% of satisfaction), with all paths supported. Hence, the expanded ECM can provide supplementary information that is relevant for understanding continued IT usage. The significant effects of post-adoption perceived ease of use and perceived enjoyment signify that the nature of the IT can be an important boundary condition in understanding continued IT usage behavior. At a practical level, the expanded ECM presents IT product/service providers with deeper insights into how to address IT users' satisfaction and continued patronage.
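
As a sketch of the path structure being described (not the authors' actual analysis, which used structural equation modelling), the two core equations of the expanded ECM can be approximated with ordinary least-squares regressions. The data file and column names below are hypothetical placeholders.

```python
# Approximating the expanded ECM's two core paths with OLS as a
# stand-in for the SEM the study actually used. "ecm_survey.csv" and
# its column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ecm_survey.csv")

# Satisfaction modelled from confirmation and the post-adoption beliefs.
sat = smf.ols("satisfaction ~ confirmation + usefulness + enjoyment + ease_of_use",
              data=df).fit()

# Continuance intention modelled from satisfaction and the same beliefs.
cont = smf.ols("continuance ~ satisfaction + usefulness + enjoyment + ease_of_use",
               data=df).fit()

print(f"R^2 (satisfaction): {sat.rsquared:.3f}")
print(f"R^2 (continuance):  {cont.rsquared:.3f}")
```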

Understanding e-Learning continuance intention: An extension of the technology acceptance model

August 2006 · 7,414 Reads

Based on the expectancy disconfirmation theory, this study proposes a decomposed technology acceptance model in the context of an e-learning service. In the proposed model, the perceived performance component is decomposed into perceived quality and perceived usability. A sample of 172 respondents took part in this study. The results suggest that users’ continuance intention is determined by satisfaction, which in turn is jointly determined by perceived usefulness, information quality, confirmation, service quality, system quality, perceived ease of use and cognitive absorption.

Influence of personality and individual abilities on the sense of presence experienced in anxiety triggering virtual environments

October 2010 · 286 Reads

In the literature, there are few studies of the human factors involved in the engagement of presence. The present study aims to investigate the influence of five user characteristics – test anxiety, spatial intelligence, verbal intelligence, personality and computer experience – on the sense of presence. This is the first study to investigate the influence of spatial intelligence on the sense of presence, and the first to use an immersive virtual reality system to investigate the relationship between users' personality characteristics and presence. The results show a greater sense of presence in test anxiety environments than in a neutral environment. Moreover, students with high test anxiety feel more presence than their counterparts without test anxiety. Spatial intelligence and introversion also influence the sense of presence experienced by high test anxiety students exposed to anxiety-triggering virtual environments. These results may help to identify new groups of patients likely to benefit from virtual reality exposure therapy.

Applications of abduction: Knowledge-level modelling

September 1996 · 84 Reads

A single inference procedure (abduction) can operationalise a wide variety of knowledge-level modelling problem solving methods, i.e. prediction, classification, explanation, tutoring, qualitative reasoning, planning, monitoring, set-covering diagnosis, consistency-based diagnosis, validation, and verification. This abductive approach offers a uniform view of different problem solving methods in the style proposed by Clancey and Breuker. Also, this abductive approach is easily extensible to validation; i.e. using this technique we can implement both inference tools and testing tools. Further, abduction can execute in vague and conflicting domains (which we believe occur very frequently). We therefore propose abduction as a framework for knowledge-level modelling.
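
To make one of the listed methods concrete, the toy sketch below casts set-covering diagnosis as abduction: given hypotheses with known effects, it searches for minimal sets of hypotheses whose combined effects cover all observations. The fault and symptom names are invented; this illustrates the general idea, not the paper's system.

```python
from itertools import combinations

# Invented causal knowledge: each hypothesis maps to its known effects.
causes = {
    "pump_failure": {"low_pressure", "overheating"},
    "sensor_drift": {"low_pressure"},
    "coolant_leak": {"overheating", "coolant_low"},
}

def abduce(observations, causes):
    """Return all minimal-cardinality hypothesis sets whose combined
    effects cover the observations (set-covering abduction)."""
    hypotheses = list(causes)
    solutions = []
    for k in range(1, len(hypotheses) + 1):
        for combo in combinations(hypotheses, k):
            covered = set().union(*(causes[h] for h in combo))
            if observations <= covered:
                solutions.append(set(combo))
        if solutions:  # stop once the smallest explanations are found
            break
    return solutions

print(abduce({"low_pressure", "overheating"}, causes))
# -> [{'pump_failure'}]: a single hypothesis explains both observations
```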

Migratory user interfaces able to adapt to various interaction platforms

May 2004 · 63 Reads

The goal of this work is the design of an environment for supporting runtime migration of Web interfaces among different platforms. This allows users interacting with a Web application to change devices and continue their interaction from the same point. The migration takes into account the runtime state of the interactive application and the different features of the devices involved. We consider Web interfaces developed through a multiple-level approach using: the definition of the tasks to support, the abstract description of the user interface and the actual code. The runtime migration engine exploits information regarding the application runtime state and higher-level information on the available target platforms. Runtime application data are used to achieve interaction continuity and preserve usability, while information on the different platforms is considered to adapt the application's appearance and behaviour to the specific device. The paper also discusses a sample application in order to provide concrete examples of the results that can be achieved through our approach.

Using OWL to Model Biological Knowledge

July 2007 · 56 Reads

Much has been written of the facilities for ontology building and reasoning offered for ontologies expressed in the Web Ontology Language (OWL). Less has been written about how the modelling requirements of different areas of interest are met by OWL-DL's underlying model of the world. In this paper we use the disciplines of biology and bioinformatics to reveal the requirements of a community that both needs and uses ontologies. We use a case study of building an ontology of protein phosphatases to show how OWL-DL's model can capture a large proportion of the community's needs. We demonstrate how Ontology Design Patterns (ODPs) can address inherent limitations of this model, giving examples of relationships between more than two instances, lists, and exceptions. We conclude by illustrating what OWL-DL and its underlying description logic cannot handle, either in theory or for lack of implementation. Finally, we present a research agenda that, if fulfilled, would help ensure OWL's wider take-up in the life science community.

Acceptance of speech recognition by physicians: A survey of expectations, experiences, and social influence

January 2009 · 564 Reads

The present study surveyed physician views and attitudes before and after the introduction of speech technology as a front end to an electronic medical record. At the hospital where the survey was conducted, speech technology recently (2006–2007) replaced traditional dictation and subsequent secretarial transcription for all physicians in clinical departments. The aim of the survey was (i) to identify how attitudes and perceptions among physicians affected the acceptance and success of the speech-recognition system and the new work procedures associated with it; and (ii) to assess the degree to which physicians' attitudes towards and expectations of speech technology changed after actually using it. The survey was based on two questionnaires: one administered when the physicians were about to begin training with the speech-recognition system and another, asking similar questions, when they had had some experience with the system. The survey data were supplemented with performance data from the speech-recognition system. The results show that the surveyed physicians tended to report a more negative view of the system after having used it for some months than before. When judging the system retrospectively, physicians are approximately evenly divided between those who think it was a good idea to introduce speech recognition (33%), those who think it was not (31%) and those who are neutral (36%). In particular, the physicians felt that they spent much more time producing medical records than before, including time spent correcting the speech recognition, and that the overall quality of records had declined. Nevertheless, workflow improvements and the possibility of accessing the records immediately after dictation were almost unanimously appreciated. Physicians' affinity with the system seems to be quite dependent on their perception of the associated new work procedures.

Understanding user acceptance of digital libraries: What are the roles of interface characteristics, organizational context, and individual differences?

January 2002 · 2,765 Reads

Digital library research efforts originating from library and information scientists have focused on technical development. While millions of dollars have been spent on building “usable” digital libraries, previous research indicates that potential users may still not use them. This study contributes to understanding user acceptance of digital libraries by utilizing the technology acceptance model (TAM). Three system interface characteristics, three organizational context variables, and three individual differences are identified as critical external variables that have an impact on adoption intention through perceived usefulness and perceived ease of use of the digital library. Data were collected from 397 users of an award-winning digital library. The findings show that both perceived usefulness and perceived ease of use are determinants of user acceptance of digital libraries. In addition, interface characteristics and individual differences affect perceived ease of use, while organizational context influences both perceived ease of use and perceived usefulness of digital libraries.

The Role of Moderating Factors in User Technology Acceptance

February 2006 · 3,184 Reads

Along with increasing investments in new technologies, user technology acceptance becomes a frequently studied topic in the information systems discipline. The last two decades have seen user acceptance models being proposed, tested, refined, extended and unified. These models have contributed to our understanding of user technology acceptance factors and their relationships. Yet they have also presented two limitations: the relatively low explanatory power and inconsistent influences of the factors across studies. Several researchers have recently started to examine the potential moderating effects that may overcome these limitations. However, studies in this direction are far from being conclusive. This study attempts to provide a systematic analysis of the explanatory and situational limitations of existing technology acceptance studies. Ten moderating factors are identified and categorized into three groups: organizational factors, technological factors and individual factors. An integrative model is subsequently established, followed by corresponding propositions pertaining to the moderating factors.

Task-technology fit and user acceptance of online auction

February 2010 · 165 Reads

World Wide Web intelligent agent technology has provided researchers and practitioners, such as those involved in information technology, innovation, knowledge management, and technical collaboration, with the ability to examine the design principles and performance characteristics of the various approaches to intelligent agent technology, and to increase the cross-fertilization of ideas on the development of autonomous agents and multi-agent systems among different domains. This study investigates the employment of intelligent agents in a web-based auction process, with particular reference to the appropriateness of the agent software for the online auction task, consumers' value perception of the agent, the effect of this consumer perception on their intention to use the tool, and a measure of consumer acceptance. In the initial case study, both consumers and web operators thought the use of software agents enhanced online auction efficiency and timeliness. The second phase of the investigation established that consumer familiarity with the agent functionality was positively associated with seven dimensions (the online auction site's task, the agent's technology, task-technology fit, perceived ease of use, perceived usefulness, perceived playfulness, and intention to use the tool) and negatively associated with perceived risk. Intelligent agents have the potential to release skilled operator time for value-adding tasks in the planning and expansion of online auctions.

Predicting the use of web-based information systems: Self-efficacy, enjoyment, learning goal orientation, and the technology acceptance model

October 2003 · 540 Reads

With the growing reliance on computerized systems and increasing rapidity of the introduction of new technologies, user acceptance of technology continues to be an important issue. Drawing upon recent findings in information systems, human computer interaction, and social psychology, the present research extends the technology acceptance model by incorporating the motivation variables of self-efficacy, enjoyment, and learning goal orientation in order to predict the use of Web-based information systems. One hundred nine subjects participated in the study, which was conducted in a field setting with the Blackboard system, a Web-based class management system. A survey was administered after a 2-week trial period and the actual use of the system was recorded by the Blackboard system over 8 weeks. The results largely support the proposed model, highlighting the important roles of self-efficacy, enjoyment, and learning goal orientation in determining the actual use of the system. Practical implications of the results are provided.

The effects of contextualized access to knowledge on judgment

November 2001 · 33 Reads

This research conceptualizes contextualized access to knowledge, i.e. the ability to access task domain knowledge within the context of problem-solving, and investigates its effects on knowledge dissemination. Two informationally equivalent versions of a financial analysis knowledge-based system (KBS) were compared in a laboratory experiment, one with contextualized access to the underlying task domain knowledge (deep explanations) via hypertext-style links and the other without such access. Results indicate that contextualized access had significant advantages. It afforded a major portion of the requests for deep explanations to occur in the context of problem-solving, as opposed to in the abstract, and led to a significant increase in the number of requests. The increased utilization of deep explanations and contextualized use were associated with a greater degree of congruence between users' judgement and the KBS. The conclusion is that availability of knowledge alone is not sufficient; contextualized accessibility is the key to knowledge dissemination and to influencing performance.

Beyond web content accessibility guidelines: Design of enhanced text user interfaces for blind internet users

April 2008 · 245 Reads

Websites do not become usable just because their content is accessible. For people who are blind, the application of the W3C's Web Content Accessibility Guidelines (WCAG) often might not even make a significant difference in terms of efficiency, errors or satisfaction in website usage. This paper documents the development of nine guidelines to construct an enhanced text user interface (ETI) as an alternative to the graphical user interface (GUI). An experimental design with 39 blind participants executing a search and a navigation task on a website showed that with the ETI, blind users executed the search task significantly faster, committing fewer mistakes, rating it significantly better on subjective scales as well as when compared to the GUIs from other websites they had visited. However, performance did not improve with the ETI on the navigation task, the main reason presumed to be labeling problems. We conclude that the ETI is an improvement over the GUI, but that it cannot help in overcoming one major weakness of most websites: If users do not understand navigation labels, even the best user interface cannot help them navigate.

Evaluating information accessibility and community adaptivity features for sustaining virtual learning communities

November 2003 · 163 Reads

Virtual communities have been identified as the “killer applications” on the Internet Information Superhighway. Their impact is increasingly pervasive, with activities ranging from the economic and marketing to the social and educational. Despite their popularity, little is understood as to what factors contribute to the sustainability of virtual communities. This study focuses on a specific type of virtual communities—the virtual learning communities. It employs an experiment to examine the impact of two critical issues in system design—information accessibility and community adaptivity—on the sustainability of virtual learning communities. Adopting an extended Technology Acceptance Model, the experiment exposed 69 subjects to six different virtual learning communities differentiated by two levels of information accessibility and three levels of community adaptivity, solicited their feelings and perceptions, and measured their intentions to use the virtual learning communities. Results indicate that both information accessibility and community adaptivity have significant effects on user perceptions and behavioural intention. Implications for theory and practice are drawn and discussed.

Interface changes causing accidents: An empirical study of negative transfer

January 2005 · 245 Reads

When expert operators interact with a new device, they inevitably reuse former interaction modes and actions. This phenomenon is due to human cognition seeking resource savings. Schemas support this strategy and are implemented in such a way that perfection is disregarded in favour of an intuitive trade-off between performance and cognitive resource savings. As a consequence, humans have a strong inclination to fit well-known solution procedures to new problems. For this reason, changes in work environments can cause accidents when they allow operators to interact with a new device that is erroneously perceived as familiar. This research issue originates from an industrial background: the suspected cause of a fatal error performed by an operator in a steelworks factory is replicated in an experiment. The results support the hypothesis that errors (and possible subsequent accidents) due to changes in the interface are more likely when the latter does not inhibit former modes of interaction. This main result is discussed from the angle of cognitive ergonomics and used as a basis to provide design guidelines.

Pauses in doctor–patient conversation during computer use: The design significance of their durations and accompanying topic changes

June 2010 · 133 Reads

Talk is often suspended during medical consultations while the clinician interacts with the patient's records and other information. This study of four general practitioners (GPs) focused on these suspensions and the adjacent conversational turns. Conversation analysis revealed how GPs took action to close conversations down prior to attending to the records, resulting in a ‘free turn’ that could be taken up by either GP or patient. The durations of the intervening pauses were also analysed, exposing a hitherto unobserved 10-second timeframe within which both GP and patient showed a preference for the conversation to be resumed. Resumption was more likely to be achieved within 10 s when the GP's records were paper-based rather than computer-based. Subsequent analysis of topic changes on resumption of talk has revealed a 5-second timeframe, also undocumented; when pauses exceed this timeframe, it is rare for the previous topic to be resumed without a restatement. Data recorded in the home suggest that these timeframes are also present in family conversations. We argue for considering the two timeframes when designing systems for use in medical consultations and other conversational settings, and discuss possible outcomes.

Accountability and automation bias

April 2000 · 2,492 Reads

Although generally introduced to guard against human error, automated devices can fundamentally change how people approach their work, which in turn can lead to new and different kinds of error. The present study explored the extent to which errors of omission (failures to respond to system irregularities or events because automated devices fail to detect or indicate them) and commission (when people follow an automated directive despite contradictory information from other more reliable sources of information because they either fail to check or discount that information) can be reduced under conditions of social accountability. Results indicated that making participants accountable for either their overall performance or their decision accuracy led to lower rates of “automation bias”. Errors of omission proved to be the result of cognitive vigilance decrements, whereas errors of commission proved to be the result of a combination of a failure to take into account information and a belief in the superior judgement of automated aids.

Storied Spaces: Cultural Accounts of Mobility, Technology, and Environmental Knowing

December 2008 · 96 Reads

When we think of mobility in technical terms, we think of topics such as bandwidth, resource management, location, and wireless networks. When we think of mobility in social or cultural terms, a different set of topics comes into view: pilgrimage and religious practice, globalization and economic disparities, migration and cultural identity, daily commutes and the suburbanization of cities. In this paper, we examine the links between these two aspects of mobility. Drawing on non-technological examples of cultural encounters with space, we argue that mobile information technologies do not just operate in space; they are tools that serve to structure the spaces through which they move. We use recent projects to illustrate how three concerns with mobility and space—legibility, literacy, and legitimacy—open up new avenues for design exploration and analysis.

FAN: Finding Accurate iNductions

April 2002 · 51 Reads

In this paper we present a machine-learning algorithm that computes a small set of accurate and interpretable rules. The decisions of these rules can be straightforwardly explained as the conclusions drawn by a case-based reasoner. Our system is named FAN, an acronym for Finding Accurate iNductions. It starts from a collection of training examples and produces propositional rules able to classify unseen cases following a minimum-distance criterion in their evaluation procedure. In this way, we combine the advantages of instance-based algorithms with the conciseness of rule (or decision-tree) inducers. The algorithm followed by FAN can be seen as the result of successive steps of pruning heuristics. The main tool employed is the impurity level, a measure of the classification quality of a rule, inspired by a similar measure used in IB3. Finally, a number of experiments were conducted with standard benchmark datasets from the UCI repository to test the performance of our system, successfully comparing FAN with a wide collection of machine-learning algorithms.
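
To illustrate the minimum-distance evaluation the abstract describes, here is a toy sketch in which an unseen case takes the class of the rule it violates least. The rule encoding and the distance function (a count of failed attribute tests) are assumptions made for illustration, not FAN's actual definitions.

```python
def rule_distance(case, conditions):
    """Distance from a case to a rule: the number of the rule's
    attribute tests that the case fails (illustrative metric)."""
    return sum(1 for attr, value in conditions.items()
               if case.get(attr) != value)

def classify(case, rules):
    """rules: list of (conditions, class_label) pairs. The case is
    assigned the class of the rule at minimum distance."""
    conditions, label = min(rules, key=lambda r: rule_distance(case, r[0]))
    return label

# Invented rule set over a toy weather domain.
rules = [
    ({"outlook": "sunny", "humidity": "high"}, "dont_play"),
    ({"outlook": "overcast"}, "play"),
]
print(classify({"outlook": "sunny", "humidity": "high"}, rules))
# -> dont_play (the first rule matches at distance 0)
```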

Rule identification using ontology while acquiring rules from Web pages

July 2007 · 41 Reads

As research on the Semantic Web actively progresses, a more intelligent Web environment is expected in various domains, including rule-based systems and intelligent agents. However, rule acquisition is still a bottleneck in the utilization of rule-based systems. To extract rules from Web pages, the framework of the eXtensible Rule Markup Language (XRML) has been developed. XRML allows the identification of rules in Web pages and generates rules automatically, but the knowledge engineer's burden remains high because rule identification requires considerable manual work. To reduce this burden, we propose an ontology-based methodology for enhanced rule identification. First, we design an ontology, OntoRule, for automated rule identification. We then propose a procedure for rule identification using OntoRule, and finally demonstrate the performance of our approach with an experiment.

Acquiring user tradeoff strategies and preferences for negotiating agents: A default-then-adjust method

April 2006 · 35 Reads

A wide range of algorithms have been developed for various types of negotiating agents. In developing such algorithms the main focus has been on their efficiency and their effectiveness. However, this is only part of the picture. Typically, agents negotiate on behalf of their owners, and for this to be effective the agents must be able to adequately represent their owners' strategies and preferences for negotiation. However, the process by which such knowledge is acquired is typically left unspecified. To address this problem, we undertook a study of how user information about negotiation tradeoff strategies and preferences can be captured. Specifically, we devised a novel default-then-adjust acquisition technique. In this, the system first conducts a structured interview with the user to suggest the attributes that the tradeoff could be made between; then it asks the user to adjust the suggested default tradeoff strategy, improving one attribute to see how much worse the attribute being traded off can be made while still remaining acceptable; and, finally, it asks the user to adjust the default preference over the tradeoff alternatives. This method is consistent with the principles of standard negotiation theory, and to demonstrate its effectiveness we implemented a prototype system and performed an empirical evaluation in an accommodation-renting scenario. The results of this evaluation indicate that the proposed technique is helpful and efficient in accurately acquiring users' tradeoff strategies and preferences.

Acquiring domain knowledge for negotiating agents: A case of study

July 2004

·

50 Reads

In this paper, we employ the fuzzy repertory table technique to acquire the domain knowledge that software agents need to act as sellers and buyers using a bilateral, multi-issue negotiation model that can achieve optimal results in semi-competitive environments. In this context, the seller's domain knowledge to be acquired consists of the rewards associated with the products and the restrictions attached to their purchase. The buyer's domain knowledge comprises their requirements and preferences for the desired products. The knowledge acquisition methods we develop involve constructing three fuzzy repertory tables and their associated distinction matrices. The first two are employed to acquire the seller agent's domain knowledge; the third is used, together with an inductive machine-learning algorithm, to acquire the domain knowledge for the buyer agent.
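
For readers unfamiliar with the technique, a fuzzy repertory table can be pictured as a grid of membership degrees of elements (here, products) in constructs, from which distinctions are derived. The sketch below is a minimal illustration under assumed data and a hypothetical distinction threshold, not the paper's construction.

    # Illustrative sketch: a fuzzy repertory table as membership degrees of
    # products (elements) in constructs, plus a simple distinction check.
    table = {
        "laptop_a": {"cheap": 0.8, "powerful": 0.3},
        "laptop_b": {"cheap": 0.2, "powerful": 0.9},
    }

    def distinguishes(construct, e1, e2, threshold=0.5):
        # A construct distinguishes two elements if their membership
        # degrees differ by more than the threshold (an assumption).
        return abs(table[e1][construct] - table[e2][construct]) > threshold

    for construct in ("cheap", "powerful"):
        print(construct, distinguishes(construct, "laptop_a", "laptop_b"))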

A framework and computer system for knowledge-level acquisition, representation, and reasoning with process knowledge

October 2010

·

84 Reads

The development of knowledge-based systems is usually approached through the combined skills of software and knowledge engineers (SEs and KEs, respectively) and of subject matter experts (SMEs). One of the most critical steps in this task is transferring knowledge from SMEs' expertise into formal, machine-readable representations that allow systems to reason with such knowledge. However, this process is costly and error prone. Alleviating this knowledge acquisition bottleneck requires giving SMEs the means to produce the target knowledge representations themselves, minimizing the intervention of KEs. This is especially difficult for complex knowledge types like processes. The analysis of scientific domains like Biology, Chemistry, and Physics uncovers (i) that process knowledge is the single most frequent type of knowledge occurring in such domains and (ii) that specific solutions need to be devised to allow SMEs to represent it in a computational form. We present a framework and computer system for the acquisition and representation of process knowledge in scientific domains by SMEs. We propose methods and techniques that enable SMEs to acquire process knowledge from the domains, to represent it formally, and to reason about it. We have developed an abstract process metamodel and a library of problem solving methods (PSMs) that support these tasks, respectively providing the terminology for SME-tailored process diagrams and an abstract formalization of the strategies needed for reasoning about processes. We have implemented this approach as part of the DarkMatter system and formally evaluated it in the context of the intermediate evaluation of Project Halo, an initiative aiming at the creation of question answering systems by SMEs.
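
The flavor of a process metamodel with reasoning support can be conveyed by a toy example: steps with typed inputs and outputs, and one simple check over them. The following sketch is an assumption-laden miniature, far simpler than the abstract process metamodel and PSM library the paper describes; the domain example and all names are illustrative.

    # Illustrative sketch: a tiny process metamodel with steps, inputs and
    # outputs, plus one toy "reasoning" task (can the steps chain?).
    from dataclasses import dataclass

    @dataclass
    class Step:
        name: str
        inputs: frozenset
        outputs: frozenset

    def is_executable(process, available):
        # Each step must find its inputs among what is available so far.
        for step in process:
            if not step.inputs <= available:
                return False
            available = available | step.outputs
        return True

    photosynthesis = [
        Step("light_reactions", frozenset({"light", "water"}), frozenset({"ATP"})),
        Step("calvin_cycle", frozenset({"ATP", "CO2"}), frozenset({"glucose"})),
    ]
    print(is_executable(photosynthesis, frozenset({"light", "water", "CO2"})))  # True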

Knowledge restructuring and the acquisition of programming expertise

April 1994

·

14 Reads

This paper explores the relationship between the structure and organization of knowledge and the development of expertise in a complex problem-solving task. An empirical study of skill acquisition in computer programming is reported, providing support for a model of knowledge organization that stresses the importance of knowledge restructuring processes in the development of expertise. This is contrasted with existing models, which have tended to place emphasis on schema acquisition and generalization as the fundamental modes of learning associated with skill development. The work reported in this paper suggests that a fine-grained restructuring of individual schemata takes place during the later stages of skill development. It is argued that the mechanisms currently thought to be associated with the development of expertise may not fully account for the strategic changes and the types of error typically found in the transition between intermediate and expert problem solvers. This work has a number of implications. Firstly, it suggests important limitations of existing theories of skill acquisition, particularly in their ability to account for subtle changes in the various manifestations of skilled performance that are associated with increasing expertise. Secondly, the work reported in this paper attempts to show how specific forms of training can give rise to this knowledge restructuring process. It is argued that the effects of particular forms of training are of primary importance, yet these effects are often given little attention in theoretical accounts of skill acquisition. Finally, the work presented here has practical relevance in a number of applied areas, including the design of intelligent tutoring systems and programming environments.

Evaluating mass knowledge acquisition using the ALICE chatterbot: The AZ-ALICE dialog system

November 2006

·

118 Reads

In this paper, we evaluate mass knowledge acquisition using modified ALICE chatterbots. In particular, we investigate the potential of allowing subjects to modify chatterbot responses, to see whether distributed learning from a web environment can succeed. The experiment divides knowledge into general conversation and domain-specific categories; for the latter we selected telecommunications. We found that subject participation in knowledge acquisition can contribute a significant improvement to both the conversational and telecommunications knowledge bases. We further found that participants were more satisfied with domain-specific responses than with general conversation.
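
The participant-driven revision loop can be pictured with the following minimal sketch, in which users submit improved responses for stored patterns. It is a dictionary-based stand-in for editing the chatterbot's pattern-response categories, not the AZ-ALICE system itself; all names and the sample content are assumptions.

    # Illustrative sketch: participants revise the bot's stored responses,
    # one pattern at a time (a toy stand-in for mass knowledge acquisition).
    knowledge_base = {"WHAT IS DSL": "DSL is a digital subscriber line."}

    def submit_revision(pattern, revised_response, kb=knowledge_base):
        # Store the participant's improved response for this pattern.
        kb[pattern.upper()] = revised_response

    def respond(utterance, kb=knowledge_base):
        return kb.get(utterance.upper(), "I do not know yet -- teach me!")

    submit_revision("what is dsl", "DSL carries broadband data over phone lines.")
    print(respond("What is DSL"))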

Expertise transfer and complex problems: using AQUINAS as a knowledge-acquisition workbench for knowledge-based systems

January 1987

·

37 Reads

Acquiring knowledge from a human expert is a major problem when building a knowledge-based system. Aquinas, an expanded version of the Expertise Transfer System (ETS), is a knowledge-acquisition workbench that combines ideas from psychology and knowledge-based systems research to support knowledge-acquisition tasks. These tasks include eliciting distinctions, decomposing problems, combining uncertain information, incremental testing, integrating data types, automatically expanding and refining the knowledge base, using multiple sources of knowledge, and providing process guidance. Aquinas interviews experts and helps them analyse, test, and refine the knowledge base. Expertise from multiple experts or other knowledge sources can be represented and used separately or in combination. Results from user consultations are derived from information propagated through hierarchies. Aquinas delivers knowledge by creating knowledge bases for several different expert-system shells. Help is given to the expert by a dialog manager that embodies knowledge-acquisition heuristics. Aquinas contains many techniques and tools for knowledge acquisition; combined, these techniques make it a powerful testbed for rapidly prototyping portions of many kinds of complex knowledge-based systems.
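
One simple way to picture "information propagated through hierarchies" is weighted averaging of ratings up a trait hierarchy, as in the sketch below. This is a generic illustration of hierarchical propagation under assumed weights and names, not Aquinas's actual combination scheme.

    # Illustrative sketch: derive a consultation result by propagating leaf
    # ratings up a hierarchy with weighted averages (hypothetical data).
    hierarchy = {"suitability": [("cost", 0.4), ("reliability", 0.6)]}
    leaf_ratings = {"cost": 7.0, "reliability": 9.0}

    def propagate(node):
        # Leaves return their rating; internal nodes combine their children.
        if node in leaf_ratings:
            return leaf_ratings[node]
        return sum(w * propagate(child) for child, w in hierarchy[node])

    print(propagate("suitability"))  # 0.4 * 7.0 + 0.6 * 9.0 = 8.2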

Age differences and the acquisition of spatial knowledge in a three-dimensional environment: Evaluating the use of an overview map as a navigation aid

December 2005

·

86 Reads

This study examined age differences in the use of an electronic three-dimensional (3D) environment, and how those differences were affected by the use of an overview map as a navigation aid. Task performance and the subjects' acquisition of configural knowledge of the 3D environment were assessed. The impact of spatial ability and prior experience on these measures was also investigated. One group of older subjects (n=24) and one group of younger subjects (n=24) participated. An overall hypothesis for the work presented here was that age differences in learning and performing navigational tasks in the physical world carry over to learning and performing navigational tasks in the virtual world. The results showed that the older participants needed more time to solve the tasks and, as in navigation in the physical world, were less likely to create configural knowledge. It could not be established that older participants benefited more from an overview map as cognitive support than younger subjects, except in a subjective sense: the older users felt more secure when the map was there. The map seemed to have supported the older users in creating a feeling of where objects were located within the environment, but it did not make them more efficient. The results have implications for design; in particular, they raise the difficult issue of balancing design goals such as efficiency, in terms of time and functionality, against maintaining a sense of direction and location in navigational situations.

Knowledge Acquisition by Encoding Expert Rules Versus Computer Induction From Examples: A Case Study Involving Soybean Pathology

January 1980

·

33 Reads

In view of growing interest in the development of knowledge-based computer consulting systems for various problem domains, the problems of knowledge acquisition have special significance. Current methods of knowledge acquisition rely entirely on the direct representation of experts' knowledge, which is usually a very time- and effort-consuming task. The paper presents results from an experiment comparing this method of knowledge acquisition with a method based on inductive learning from examples. The comparison was done in the context of developing rules for soybean disease diagnosis and demonstrated an advantage of the inductively derived rules in a testing task (which involved diagnosing a few hundred cases of soybean diseases).
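
The contrast between hand-encoded expert rules and induction from examples can be illustrated with a deliberately tiny induction routine: keep attribute-value tests that co-occur with only one diagnosis in the training data. The original study used a far more sophisticated inductive program; the symptom names, diagnoses, and method below are purely illustrative assumptions.

    # Toy illustration of inducing diagnostic rules from classified
    # examples (not the inductive program used in the original study).
    examples = [
        ({"leaf_spots": "yes", "stem_canker": "no"}, "bacterial_blight"),
        ({"leaf_spots": "yes", "stem_canker": "yes"}, "stem_rot"),
        ({"leaf_spots": "no", "stem_canker": "yes"}, "stem_rot"),
    ]

    def induce(examples):
        # Map each attribute-value test to the diagnoses it appears with,
        # then keep the tests that point to exactly one diagnosis.
        seen = {}
        for attrs, diagnosis in examples:
            for test in attrs.items():
                seen.setdefault(test, set()).add(diagnosis)
        return {test: ds.pop() for test, ds in seen.items() if len(ds) == 1}

    print(induce(examples))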

Experimental evaluation of knowledge acquisition techniques and methods: History, problems and new directions

October 1999

·

27 Reads

The special problems of experimentally evaluating knowledge acquisition and knowledge engineering tools, techniques and methods are outlined and illustrated in detail with reference to two series of studies. The first is a series of experiments undertaken at Nottingham University under the aegis of the UK Alvey initiative and the ESPRIT project ACKnowledge. The second is the series of Sisyphus benchmark studies. A programme of experimental evaluation is then suggested, informed by the problems encountered in using Sisyphus for evaluation.
