Conference Paper

User Satisfaction Measurement Methodologies: Extending the User Satisfaction Questionnaire

... Although not yet generally accepted in psychology, IRT has had a major impact on educational testing, affecting the development and administration of the Scholastic Aptitude Test, Graduate Record Exam, and Armed Services Vocational Aptitude Battery. Some researchers have speculated that the application of IRT might improve the measurement of usability (Hollemans, 1999). ...
... This is an important finding because one of the criticisms made by IRT practitioners regarding CTT scales is that "by not including item properties in the model, true score can apply only to a particular set of items or their equivalent" (Embretson & Reise, 2000, p. 53). Taken to its extreme, this would mean that adding or deleting even one item from a scale developed using CTT could render its scores invalid (Hollemans, 1999). The essential equivalence of mean scores for complete and incomplete PSSUQs in this study is consistent with the expectation of equivalence implied by CTT. ...
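To make the contrast in the excerpt above concrete, the two measurement models can be written side by side. The notation below is a standard textbook formulation (Classical Test Theory's true-score model versus a Rasch/one-parameter logistic IRT model), not something taken from the cited paper itself.

    % Classical Test Theory: the observed score X is modeled as a true score T
    % plus error E; no item parameters appear, so T is defined only relative to
    % the particular set of items administered (or their equivalent).
    X = T + E

    % Item Response Theory (Rasch / 1PL model): the probability that person i
    % endorses item j depends on the person's trait level \theta_i and the
    % item's difficulty b_j, so item properties are explicit in the model.
    P(X_{ij} = 1 \mid \theta_i, b_j) = \frac{\exp(\theta_i - b_j)}{1 + \exp(\theta_i - b_j)}

Because item difficulty appears directly in the IRT model, person estimates remain comparable even when the item set changes, which is the basis of the criticism of CTT quoted in the excerpt.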
Article
Full-text available
Factor analysis of Post Study System Usability Questionnaire (PSSUQ) data from 5 years of usability studies (with a heavy emphasis on speech dictation systems) indicated a 3-factor structure consistent with that initially described 10 years ago: factors for System Usefulness, Information Quality, and Interface Quality. Estimated reliabilities (ranging from .83 to .96) were also consistent with earlier estimates. Analyses of variance indicated that variables such as the study, developer, stage of development, type of product, and type of evaluation significantly affected PSSUQ scores. Other variables, such as gender and completeness of responses to the questionnaire, did not. Norms derived from this data correlated strongly with norms derived from the original PSSUQ data. The similarity of psychometric properties between the original and this PSSUQ data, despite the passage of time and differences in the types of systems studied, provides evidence of significant generalizability for the questionnaire, supporting its use by practitioners for measuring participant satisfaction with the usability of tested systems.
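As an illustration of how reliability estimates like the .83 to .96 range reported above are typically obtained, here is a minimal sketch of Cronbach's alpha for a single questionnaire subscale. The function name, the simulated 7-point responses, and the 6-item subscale size are illustrative assumptions, not data or code from the study.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                               # number of items in the subscale
        item_variances = items.var(axis=0, ddof=1)       # variance of each item across respondents
        total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed subscale scores
        return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

    # Hypothetical example: 40 respondents answering a 6-item subscale on a 7-point scale.
    rng = np.random.default_rng(0)
    base = rng.integers(1, 8, size=(40, 1))              # a shared "attitude" level per respondent
    responses = np.clip(base + rng.integers(-1, 2, size=(40, 6)), 1, 7)  # correlated item responses
    print(f"Estimated Cronbach's alpha: {cronbach_alpha(responses):.2f}")

Values above roughly .80, as in the study, are conventionally taken to indicate that the items of a subscale can be summed or averaged into a single score.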
... Well-tested usability questionnaires (e.g. Brooke 1996, Chin et al. 1988, Davis 1989, Kirakowski and Corbett 1993, Lewis 1995, Lin et al. 1997, Kirakowski et al. 1998, Gediga et al. 1999, Hollemans 1999) were collected and examined for their potential as a component-specific questionnaire. Most questionnaires were unsuitable because they might be too lengthy, make reference to the appearance of the system, or be relevant only for a specific type of system, e.g. a web-based system (Kirakowski et al. 1998). ...
Article
Although software engineers extensively use a Component-Based Software Engineering (CBSE) approach, existing usability questionnaires only support a holistic evaluation approach, which focuses on the usability of the system as a whole. Therefore, this paper discusses a component-specific questionnaire for measuring the perceived ease-of-use of individual interaction components. A theoretical framework is presented for this compositional evaluation approach, which builds on Taylor's Layered Protocol Theory. The application and validity of the component-specific measure is evaluated by re-examining the results of four experiments. Here participants were asked to use the questionnaire to evaluate a total of nine interaction components used in a mobile phone, a room thermostat, a web-enabled TV set, and a calculator. The applicability of the questionnaire is discussed in the setting of a new usability study of an MP3-player. The findings suggest that at least part of the perceived usability of a product can be evaluated on a component-based level.
Conference Paper
The last few years have seen an explosion not only in the amount of content produced but also in the amount of content available. However, while it is great that users can watch virtually anything they want, it becomes increasingly difficult to find content that is actually relevant to the user. This "curse" of content availability matters greatly for consumer electronic devices, since such devices are only as valuable as what they add to a user's life: if all they do is exponentially increase the user's choices of what to watch, listen to, or otherwise consume, then, paradoxically, they add more complication to that life. Content analysis can be used to help the user select what to watch and is thus essential in modern multimedia devices. However, there are technical and human restrictions to the technology developed for such devices. In this paper we discuss these restrictions and how they affect the work on content analysis done in an industrial research environment. We present three case studies that show how these restrictions influence the choices made from the moment content analysis technology is designed until the moment it is tested with users.