Usability - Science topic
Usability is the ease of use and learnability of a human-made object. The object of use can be a software application, website, book, tool, machine, process, or anything a human interacts with. A usability study may be conducted as a primary job function by a usability analyst or as a secondary job function by designers, technical writers, marketing personnel, and others. It is widely used in consumer electronics, communication, and knowledge transfer objects (such as a cookbook, a document or online help) and mechanical objects such as a door handle or a hammer.
Questions related to Usability
I have already developed a conceptual framework and tested my hypotheses with a survey questionnaire. Could I test the applicability of this model in a specific context using the System Usability Scale (SUS)? The literature on this topic is either without a specific context (very rare) or does not exist for the specific context. I would like to mention that I did not develop a prototype or a related system.
Thank you in advance
Hello all - has anyone come across a successful combination of a model like TAM / TAM2 / mTAM / UTAUT / UTAUT2 with usability metrics of a system or application (such as click counts, task completion time, cursor movements, etc.)?
I am trying to explore whether actual usability, determined through usability metrics (rather than questionnaire-based UX research methods like SUS, UMUX, etc.), can work with UTAUT2 constructs.
Any guidance appreciated.
Then please consider supporting our research by answering the survey linked below. We would be very happy to know your opinions. And please share the post with your peers! Thanks!
First, you will be exposed to more detailed information about the study and the consent form.
In our study, which is based on the Technology Acceptance Model (TAM), we want to use Usability in place of Perceived Ease of Use (PEOU).
- PEOU is defined as the degree to which individuals perceive how easy it is to use technology (Davis et al. 1989)
- Usability is defined as "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use" (Jokela et al. 2003:11).
I would argue that usability is roughly equivalent to PEOU: PEOU has an explicitly subjective character (perception), while usability is somewhat broader and more objectively conceptualised (with its own subjective component in survey or interview questions).
Please suggest some journals for usability studies, especially for non-human-computer-interaction products like water bottles, sanitizer containers, etc.
- Heart rate variability (HRV) & Emotion recognition
- How to classify different emotions using Heart rate variability (HRV)?
- What is your recommendation for the above-mentioned purpose?
- Which statistical tool/software(s) is (are) preferable for classifying emotions?
Thanks in advance,
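For what it's worth, a common first step when classifying emotions from HRV is to extract time-domain features (SDNN, RMSSD, pNN50) per recording window and feed them into a standard classifier in whatever statistics package you prefer. A minimal sketch of the feature-extraction step, with made-up RR intervals:

```python
import math
from statistics import mean, stdev

def hrv_features(rr_ms):
    """Time-domain HRV features from a list of RR intervals in milliseconds."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return {
        "SDNN": stdev(rr_ms),                            # overall variability
        "RMSSD": math.sqrt(mean(d * d for d in diffs)),  # beat-to-beat variability
        "pNN50": sum(abs(d) > 50 for d in diffs) / len(diffs),
    }

# Hypothetical recordings: relaxed states typically show higher variability
relaxed = [800, 820, 790, 830, 810, 845, 795]
stressed = [700, 702, 699, 701, 700, 703, 698]
print(hrv_features(relaxed)["RMSSD"] > hrv_features(stressed)["RMSSD"])  # True
```

These feature vectors can then be labelled with the elicited emotion and passed to any classifier; the choice of statistical software matters less than careful windowing and artifact correction of the RR series.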
I'm studying augmented reality as a tool on ship bridges for maritime operations. One variable I want to study is the perceived risk of introducing this technology in the work of ship bridge operators. What I want to know is whether operators think the new technology (AR) might increase the risk of certain problems in maritime operations. I looked around a bit but haven't found a satisfactory questionnaire for this. The Technology Acceptance Model Questionnaire and the System Usability Scale focus too much on usability. The Perceived Risks Questionnaire (Jacoby & Kaplan, 1972) is aimed at private customers.
Could you suggest standardized questionnaires I can use to study the perceived risk of introducing this new technology?
Thanks in advance,
I am writing my bachelor thesis on the above topic. My question to you is whether there is any research or any helpful publications I can rely on. Above all, I am missing research in the area of UX, usability and interactive 360° media, and I can't find anything really useful. Are there any studies or tests of this kind in the area?
Can anyone suggest resources for learning about the state of the art in usability inspections?
I first learned about these methods back in the day from the excellent edited book entitled Usability Inspection Methods. Lately, I’ve done quite a few usability inspections and have been evolving my own practices. So, I’ve become interested in seeing what I can learn from the latest-greatest resources addressed to an audience already experienced with these practices. Thanks in advance.
Can I change the wording slightly in the Telehealth Usability Questionnaire (TUQ) to meet the needs of the service delivered, or will it affect the reliability and validity? Would I have to use Cronbach's coefficient alpha to measure the internal consistency if I changed the wording?
I have performed a virtual ergonomic evaluation during COVID and would like to capture the end users' feedback.
Kathryn Meeks PT DPT CAE
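On the second question: Cronbach's alpha is easy to recompute on the reworded instrument once the per-item scores are tabulated, and comparing it against the value published for the original questionnaire gives a first indication of whether internal consistency survived the rewording. A minimal sketch with hypothetical scores:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha; item_scores holds one list of respondent scores per item."""
    k = len(item_scores)
    sum_item_var = sum(pvariance(scores) for scores in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent totals
    return k / (k - 1) * (1 - sum_item_var / pvariance(totals))

# Three hypothetical items answered by four respondents (5-point scale)
items = [[4, 5, 3, 5],
         [4, 4, 3, 5],
         [5, 5, 2, 5]]
print(round(cronbach_alpha(items), 3))  # 0.905
```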
I am currently developing a framework about learning with immersive Virtual Reality. So far, I have categorized "Number of mistakes" and "Time to completion" as performance /objective factors and satisfaction, self-efficacy and motivation as affective factors. However, I also want to include embodiment, usability and cognitive load. I currently cannot come up with a suitable summary keyword. They all refer to the experience while learning, but I would prefer a different category than "learning experience". Do you have any ideas how I could categorize the three concepts?
Thank you very much in advance for your help!
I use the Smart PLS program for my model analysis.
I have created a path model among 8 of the Usability criteria and assigned "Optimism" as the moderating effect from the Technology Readiness Index.
Whichever independent variable I assign Optimism to as a moderator, the path coefficient turns out to be negative (e.g., -0.079), even when the p-value (e.g., 0.03) is significant (as you can see in the screenshot). It is always negative when I change the path lines in the program.
(Path coefficient value = -0.079, t=2.173)
I couldn't find a solution. What could your suggestions be?
I have looked into suppression effects, but so far without a way out.
Thank you so much for your responses.
Does software design have an impact on the levels of simulator/cyber sickness in a virtual reality development environment? What are the ways to identify this? Also, does considering human factors during the design phases minimize the levels of discomfort caused by exposure to virtual reality systems?
I'm investigating factors that can lead to a compatibility of privacy, security and usability in location-based services (LBS). The main focus lies on LBS in context-sensitive applications for regional marketing.
I am particularly interested in usability, security and privacy, security risks and future developments.
Thank you in advance.
My team and I have developed an augmented reality app for children (primary school students).
I would like to get their teachers' opinions about the usability of the system in the context of their students.
First, I thought about using standardized questionnaires like the System Usability Scale or the User Experience Questionnaire, but those are designed to ask the actual users of a system; furthermore, I don't want too much focus on usability itself, but rather on the teachers' opinion about the suitability of the system for children.
I am wondering if there are any standardized questionnaires available for cases like that.
Many organizations encourage their users to create complicated passwords that are usually hard to remember (and easy to brute-force). Has there been a study to show that passphrases have a definitive advantage over passwords or vice versa?
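One way to make the comparison concrete is guessing entropy: a secret of n symbols drawn uniformly at random from a pool of size m carries n·log2(m) bits, regardless of whether the symbols are characters or dictionary words. A quick sketch (the pool sizes are illustrative assumptions):

```python
import math

def entropy_bits(pool_size, length):
    """Entropy of a secret built from `length` uniformly random draws."""
    return length * math.log2(pool_size)

# 10 random characters over the 94 printable ASCII symbols
print(round(entropy_bits(94, 10), 1))    # 65.5 bits
# 5 random words from a 7776-word Diceware-style list
print(round(entropy_bits(7776, 5), 1))   # 64.6 bits
```

By this measure a five-word random passphrase is roughly on par with a ten-character random password, and the usability argument is that the passphrase is easier to memorize. Note the comparison only holds for randomly generated secrets, not user-chosen ones.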
I'm looking for some inspiration around methods for measuring usability of a bedside testing device in critical care. Any tools and ideas would be great.
My team and I have developed a prototype of an augmented reality mobile application for teaching primary school students human anatomy. We are going to do usability testing and evaluation with the primary school students using the FUN toolkit, and we are also going to conduct an expert review using heuristic evaluation and cognitive walkthrough. Furthermore, we also want the teachers to test the app and to evaluate its usability in the context of their students' usage. However, the teachers are neither usability experts nor end users, so what is the most appropriate method for them regarding usability testing, survey design, etc.? Do you have any recommendations for usability testing methods, survey designs/templates, etc.?
I'm doing a comparative study between Hadoop and Azure. While they are similar in that they are both used to handle big data in the cloud environment, I do find some dissimilarities in the following areas. To this end, I'm conducting a little survey for the purposes of my paper and would appreciate it very much if you could rate each area on a score of 1 to 10 for both Hadoop and Azure.
3. Dev Time
I will be performing an experiment testing the acceptance and usability of a technology by intellectually disabled people through a Likert-scale questionnaire. Unfortunately, I am not able to access previous research on the pretest questions used to train participants in giving Likert-scale responses. I need help with formulating these pretest questions for my experiment.
The set of questions designed to test the acceptance and usability of human-robot collaboration by intellectually disabled people working in sheltered workshops in Germany needs to be checked by an expert in the field for reliability and clarity.
We currently have the opportunity to evaluate our modeling languages for the domain of microservice architecture in a master's course with roughly 40 participants.
Of course our overall goal is to get insights if the language concepts are comprehensible, understandable and easy/fun to use for the students.
I'm currently thinking about a fitting research design and struggle to find related work / similar designs. I'm pretty sure we are not among the first to evaluate their DSLs.
So far I only found "Usability Evaluation of Domain-Specific Languages" by Barisic et al.
Do you know of any available best practices, can you give me some hints, or do you have links to good reads?
Thx for your help!
Is there a way to measure the cognitive affordance of an interaction design, or has anyone come across such an idea or an attempt to do so?
I would like to measure responses to a simple question (three-point scale) over an extended period of time, at multiple points, to observe changes connected to an intervention.
What tools are available in Europe to mass-message a text (via SMS) and receive the replies? Are there any companies that may even have units specialised in research? Specifically, points to consider are (1) safety of the data (obviously the numbers cannot be shared outside the system, or saved on a server outside of the EU), (2) usability/customer service, and (3) prices.
If anyone has experience with these things and could share them, I would be very grateful.
Many thanks in advance!
In my master's thesis, I proposed a set of Human-Computer Interaction (HCI) guidelines for inclusive design focused on users with autism. A challenging aspect of the research was the evaluation of the guidelines' effectiveness, since I couldn't find a well-established method, tool, technique or framework to perform this task. I decided to use a pilot evaluation through a qualitative survey, and then I performed a second qualitative evaluation adapting the Level of Evidence and Strength of Recommendations (or Strength of Evidence) methods applied in healthcare papers [1-5].
Is there some robust and well-established method in HCI to evaluate a proposal of guidelines? How can the effectiveness of new recommendations be ensured?
[1] BRODERICK, J. P. et al. Guidelines for the management of spontaneous intracerebral hemorrhage: a statement for healthcare professionals from a special writing group of the Stroke Council, American Heart Association. Stroke, v. 30, p. 905-915, 1999.
[2] GRADE Working Group. Grading quality of evidence and strength of recommendations. BMJ: British Medical Journal, v. 328, n. 7454, p. 1490, 2004.
[3] LOBIONDO-WOOD, G. P.; HABER, J. Nursing research: methods and critical appraisal for evidence-based practice. 7th ed. St. Louis, MO: Mosby Elsevier, 2010.
[4] NKF, National Kidney Foundation. KDOQI clinical practice guidelines and clinical practice recommendations for diabetes and chronic kidney disease. 2007. https://www2.kidney.org/professionals/KDOQI/guideline_diabetes/appendix2.htm
[5] SIMON, S. Special guidelines for overviews and meta-analyses. 2010. http://www.pmean.com/12a/journal/meta-analysis.asp
My thesis comprises a literature review as well as the implementation of an augmented reality mobile application to evaluate user experience.
If I search for augmented reality mobile applications in search engines, there are thousands of related papers, and it is very difficult for me to read all of them.
Considering augmented reality for better user experience (UX), what strategy should I use to conduct a systematic literature review whose findings can be well visualized? Besides, I also have to evaluate my prototype (which is a mobile application). I have provided a rough overview of my research method in the attached image.
Any sample paper that has applied a systematic literature review, particularly one evaluating UX for mobile applications, would be very helpful.
There are various tools for desktop or web usability testing, like TechSmith Morae. Even though companies like TechSmith propose using their tools for mobile usability testing as well, these tools are not satisfactory because they are not native to this application area. Does anybody know an app or a tool that fits this need well? Who has experience in this area?
I am planning to conduct usability testing for a motorcycle HMI. The HMI has a joystick which can control the menu of the HMI (only when the biker is not riding the bike). I am thinking of measuring intuitiveness of the HMI, general user experience, Workload and Affect. Intuitiveness of the HMI would be measured for the case when the biker is riding the bike. Other variables would be measured for the case when the biker is not riding the bike. Should I measure any other variables as well?
Any suggestions on how to measure intuitiveness/glanceability of the HMI? I am planning to measure intuitiveness of the HMI as glanceability of the motorcycle HMI is critical considering the biker's safety (Eyes off the road has to be as minimal as possible while ensuring grasp of information from HMI).
It would be also great if anyone can share any literature regarding motorcycle HMI.
I am looking for metrics I can use to evaluate any mobile application. There are many heuristics and design principles available for websites to evaluate their usability; however, I have not encountered such heuristics for mobile apps. In addition, the uniqueness and the nature of mobile app user interfaces make it harder to apply what I have seen in web design principles.
Any recommendations for papers that did a usability evaluation on mobile apps?
Hello professionals, my problem is that my university does not allow me to use qualitative interview methods to investigate the usage context of a web form that I have to create.
It is a form where people should be able to order an individual song from a musician. There is already competition out there, but how can I get the usage context out of a competition analysis?
If you have any ideas, please let me know. It would be great if you could point me to some books which could contain an answer.
Hi, I'm developing an application for children with ADHD. In order to assess its usability, I'm going to use the System Usability Scale.
An expert working with those children recommended noting that they may not be able to distinguish between Strongly Agree and Agree, or between Strongly Disagree and Disagree.
Can I make the SUS with 3 points Likert scale?
If so, how can I calculate the scoring?
Any help and recommendation will be highly appreciated.
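One pragmatic option, if the expert's concern stands, is to keep the SUS scoring logic but rescale each item's contribution to the reduced scale so the total still runs 0-100. A sketch of that generalisation; note that a 3-point variant departs from the validated instrument, so the resulting scores should not be compared directly against published SUS benchmarks:

```python
def sus_score(responses, scale_max=5):
    """SUS-style score (0-100) for ten items on a k-point agreement scale.
    Odd-numbered items are positively worded, even-numbered ones negatively."""
    assert len(responses) == 10
    contrib = [(r - 1) if i % 2 == 1 else (scale_max - r)
               for i, r in enumerate(responses, start=1)]
    return sum(contrib) * 100 / (10 * (scale_max - 1))  # rescale max sum to 100

print(sus_score([5, 1] * 5))               # 100.0 on the classic 5-point scale
print(sus_score([3, 1] * 5, scale_max=3))  # 100.0 on a 3-point variant
print(sus_score([3] * 10))                 # 50.0, the 5-point neutral midpoint
```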
I'm developing an application for children with attention problems.
The application will use a webcam and mouse to track the children's attention. I'm intending to use the System Usability Scale (SUS) to assess the usability of the application. Do I need to check reliability and validity for the SUS, or is it already reliable and validated?
One more thing: should I use the SUS twice, once to assess the usability of the app when using the webcam and once when using the mouse?
Your help and suggestions are highly appreciated.
I am currently developing a tool using the Eclipse framework. The main research question is about the usability of this framework. I am an inexperienced user who wants to test whether the framework is accessible to novice users.
I was wondering if anyone could help me find an academic way of testing the usability of this framework with one user, namely myself?
Thank you in advance.
I'm looking for a method or protocol for evaluating/assessing the comprehensibility of auditory automotive human-machine interfaces. Any ideas?
There are differences and overlaps in UX and usability.
But are there any differences in the approaches to identifying UX issues versus usability issues when using an analytic approach like an expert review or another inspection method?
I was doing research on various usability techniques to measure ERP usability. I found many frameworks, but the Purdue Usability Testing Questionnaire (PUTQ) and the Software Usability Measurement Inventory (SUMI) match my usability criteria best. I need suggestions. Thanks.
EN 894-4:2010 Safety of machinery - Ergonomics requirements for the design of displays and control actuators - Part 4: Location and arrangement of displays and control actuators
EN 13557:2003+A2:2008 Cranes - Controls and control stations
I developed a new text input mechanism for Sinhala (a language widely used in Sri Lanka). The prototype was built for an Android touch-screen device. I would like to know what options I have for usability testing. Tests based on the cognitive system are preferred.
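For text-entry mechanisms, the standard quantitative measures are entry speed (words per minute) and an error rate derived from the minimum string distance (Levenshtein distance) between the presented and transcribed phrases. A minimal sketch of the error-rate part:

```python
def msd(a, b):
    """Minimum string distance (Levenshtein) between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def error_rate(presented, transcribed):
    """Character-level error rate for a transcription task."""
    return msd(presented, transcribed) / max(len(presented), len(transcribed))

print(error_rate("hello world", "helo world"))  # ≈ 0.09
```

Tracking these metrics over repeated sessions also yields a learnability curve, which speaks to the cognitive side; think-aloud protocols can complement the numbers qualitatively.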
To measure the accessibility of interfaces with a participatory evaluation, can we follow Nielsen's rule of 5 users as in usability tests?
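For reference, the 5-user rule rests on the Nielsen-Landauer problem-discovery model: if each evaluator independently uncovers any given problem with probability p (about 0.31 in Nielsen's classic usability data), the share found by n evaluators is 1 − (1 − p)^n. Whether the same p applies to accessibility barriers found in participatory evaluations is an empirical question, since detection rates vary with participants' assistive technologies and tasks. The model itself:

```python
def proportion_found(n_users, p=0.31):
    """Nielsen-Landauer model: expected share of problems uncovered by n users,
    assuming each problem is found by any one user with probability p."""
    return 1 - (1 - p) ** n_users

print(round(proportion_found(5), 2))         # 0.84 with the classic p = 0.31
print(round(proportion_found(5, p=0.1), 2))  # 0.41 if problems are harder to hit
```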
I am looking into usability and human factors standards/studies/white papers in reference to ambulance cots and stair chairs.
I want to develop a new and enhanced technique for making website-based learning more adaptive. Is there any tool developed for usability measurement?
Usability components: 'easy to use' as efficiency, 'easy to learn' as learnability, plus memorability and satisfaction. I would like to search for other indicators that may have effects on these components.
Besides the common criteria like the WCAG, are there any really good sources of guidelines and experience for good user interaction and user interface design for older adults, with a focus on smart TVs?
Websites, ERP systems, and mobile apps are three different types of information technology. Consider checking the usability of an ERP: ERP software is mandatory for its users (employees), so the frequency of usage is higher, which might lead to better usability. In the case of a website, usage can be optional for users; the same holds for eCommerce or eBanking software. Do you think the same usability method should be followed in both cases? Is there any term or theory available to differentiate information technology based on use?
I have RNA-seq data from yeast in this form: gene ID, yeast_value1, yeast_value2, and log2 fold change, where value 1 is from a wild type and value 2 from a mutant strain. I would like to create a pathway, a possible network, based on these data. Is this possible? I have tried Cytoscape, GSAAseqSP, and BioLayout software, but I could not get any usable information. Are these data enough?
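A caveat worth noting: expression values and fold changes alone cannot determine network topology; Cytoscape and similar tools need an interaction source (for example a STRING or BioGRID edge list for yeast) onto which the differentially expressed genes are mapped. A minimal sketch of the usual first step, filtering by |log2 fold change| (the column names and gene IDs here are assumptions for illustration):

```python
import csv
import io

# Hypothetical table; replace with open("your_file.csv") and your real columns.
data = """gene_id,wt,mut,log2fc
YAL001C,120,480,2.0
YBR002W,300,310,0.05
YCL003A,90,11,-3.03
"""

def significant_genes(csv_text, fc_cutoff=1.0):
    """Return gene IDs whose |log2 fold change| meets the cutoff."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["gene_id"] for row in reader
            if abs(float(row["log2fc"])) >= fc_cutoff]

print(significant_genes(data))  # ['YAL001C', 'YCL003A']
```

The resulting gene list can then be intersected with a downloaded yeast interactome to obtain edges that Cytoscape can lay out; with only two conditions and no replicates, though, any inferred network should be treated as exploratory.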
I have been using Medini-QVT, which is no longer actively maintained and needs to be used within an old version of Eclipse. The documentation I found around QVTd is extremely scant. Has anybody got experience with using QVTd, the extent to which it supports the QVT-Relational spec, and how it compares to Medini-QVT?
Appreciate any inputs :).
I am working on adaptive visualization and want to cover as many factors as possible that influence usability, perception and user performance in user-interaction design.
I need a list of chemical or natural materials usable to create an impermeable ground surface for water harvesting. Thanks for your help
If something isn't performing well, you have to go be a detective and figure out why. That's the essence of user testing. Usability testing is a qualitative research method which is much better suited to answering questions about why, or about how to fix a problem. I have attached a sample template.
There has been quite some work aiming at improving this attribute, and most of it simply describes the attribute as the ease of discovering or locating something in the interface. I wonder if there is a more formal definition that coined this term? I'd appreciate it if a reference could be provided.
We ran out of the 3x5 cm silicon plates usable as electrodes and we need to replace them somehow. The nature of doping / thickness are not so important for us. Any suggestions?
I am looking for such a questionnaire to help in carrying out an experiment to measure the usability of different approaches to accessing mobile apps after they have been downloaded.
I am discussing a research project with a colleague to determine the best analysis method for the type of data we need to collect and would appreciate any input from anyone with similar data set experience.
We will be seeking user preference based on four variables and four scenarios totaling 16 possible outcomes. (i.e. 1a, 1b, 1c, 1d, 2a, 2b, 2c, 2d, 3a, 3b, 3c, 3d, 4a, 4b, 4c, & 4d).
The data collection method will require users to sit, reach, and operate a wall mounted device (4 types) at four different locations (high/front, low/front, high/back, and low/back). I want to collect at minimum the best to worst ranking within the same product (1a, 2a, 3a, 4a), and within the positions (1a, 1b, 1c, 1d). I am not sure if we need or want ranking of the 16 outcomes (best to worst) but I certainly want good/bad for each at a minimum (1a good, 2b Bad, 3a Good, etc.).
This is an early research idea and I want to refine it before too much more work is done. I need to determine efforts/methods between several other ideas and this one was just brought to my attention as a new research possibility by a user group.
Any help is appreciated!
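If each participant ends up producing ranks over the same set of conditions, the Friedman test is a common nonparametric starting point for the within-subject comparisons (e.g., the four positions within one product). A minimal sketch of the test statistic, with hypothetical ranks:

```python
def friedman_statistic(rank_rows):
    """Friedman chi-square for n subjects each ranking k conditions (1 = best).
    rank_rows: one list of ranks per subject; no ties assumed."""
    n, k = len(rank_rows), len(rank_rows[0])
    col_sums = [sum(col) for col in zip(*rank_rows)]  # rank sum per condition
    return 12 / (n * k * (k + 1)) * sum(s * s for s in col_sums) - 3 * n * (k + 1)

# Three hypothetical participants ranking four device positions
ranks = [[1, 2, 3, 4],
         [1, 3, 2, 4],
         [2, 1, 3, 4]]
print(friedman_statistic(ranks))  # ≈ 7.0; compare to chi-square with df = k - 1
```

For the separate good/bad judgments, Cochran's Q test plays the analogous role for binary within-subject outcomes.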
I have created a responsive design page. Now I want to test it on different screens and devices. I have discovered that the simulation tools for the iPhone 5 give a totally different result compared to my physical iPhone 5; I know that is because of the Retina display. My question is whether there is a tool (commercial or non-commercial) with which I can properly test my software on different devices (Samsung, iPhone, Nokia, tablets, ...).
Or is the only solution to buy all this hardware?
This is a research stream about users consuming information. How can we get information about the influence of digital newspapers on public opinion?
We know of many cognitive biases, such as self-reporting bias, confirmation bias, the illusion of validity, etc. (cf. the Wikipedia link below). So how would you devise evaluations, and manage analyses of trial data, in a way that is aware of these constraints?
I would appreciate it if anyone could suggest an advanced online usability/usability engineering course. By advanced, I mean a course offered to master's or doctoral students.
Thanks for your contribution in advance.
We have developed libraries that allow the user to manipulate the viewport (virtual camera) and/or 3D objects with use of haptic devices (one or two Phantom Omni devices, depending on configuration). Now, we would like to test the set-up on some "haptic beginners" to assess the efficiency, ergonomics and the learning curve.
We have prepared a "put a peg in the hole" exercise and are going to test the accuracy and trial execution time.
What other exercises would you suggest? Do you know any standard procedures of such assessments?