User Studies - Science topic
Explore the latest questions and answers in User Studies, and find User Studies experts.
Questions related to User Studies
I am Ayah Soufan, a 3rd-year Ph.D. researcher at Strathclyde University, interested in designing systems that help scholars conduct literature reviews.
If you are a Master's student in Computer Science or a related field, working on the literature review for your MSc project, and you are searching for, reading, and making sense of papers on your topic, I would love to speak to you! The study will take place on Zoom, lasts up to 1.5 hours, and comes with £20 compensation (online voucher).
Please fill out this survey: https://lnkd.in/e_vq5Kci
Once your eligibility for the study is determined, I will contact you as soon as possible to set a date/time that suits you for the study session.

We, at the Design Innovation Centre of Mondragon University, are working to better understand the interaction between humans and robots through a user-focused questionnaire. Our Human-Robot Experience (HUROX) questionnaire will gauge human perception and acceptance of robots in an industrial setting. Your participation in completing the questionnaire will greatly help us validate our findings.
Please answer the electronic questionnaire, which can be accessed here: https://questionpro.com/t/AWzTgZwkBl
The estimated time to answer all questions is about 40 minutes.
Your cooperation and support in this research effort would be greatly appreciated. We believe that by working together, we can advance our understanding of human-robot interaction and create better, more intuitive technologies for the future. If you're willing, please share this message with your network of contacts to help us reach even more participants.
Thank you for your cooperation!
I conducted a magnitude estimation experiment to examine the differences between multiple conditions. Twelve people participated in the experiment under three experimental conditions, and each participant performed five evaluations per condition. Since the evaluation order was randomly assigned, the order itself carries no meaning.
I have one dependent variable (evaluation score) and two independent variables (fixed: condition; random: participant), so I think I should analyze the data with the "General Linear Model - Univariate" method. However, the raw data violate the homogeneity of variances assumption, and SPSS disables the bootstrap option when I set participant as a random factor. Should I use another analysis method, or can I preprocess the data so that the GLM Univariate method is still applicable?
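For context, here is a minimal sketch of how the same design could be expressed as a linear mixed model in Python (one possible alternative when the GLM assumptions are violated); the column and file names below are illustrative assumptions, not my actual data:

```python
# Minimal sketch: the described design (12 participants x 3 conditions x 5
# evaluations) expressed as a linear mixed model with a random intercept
# per participant. Column and file names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("evaluations.csv")  # hypothetical long-format data

# "score" = evaluation score (DV), "condition" = fixed factor,
# "participant" = random factor (grouping variable).
model = smf.mixedlm("score ~ C(condition)", data=df, groups=df["participant"])
result = model.fit()
print(result.summary())
```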
Thank you for sparing your valuable time.
Joyoung Han
I am planning an experimental study in which I show social media posts (tweets and Facebook posts) to Turkers and ask whether or not they believe the information posted in them. I will be manipulating some elements of the posts (e.g., the text, the URL, the image).
I plan to study the impact of these different elements on the perceived credibility of health information (I establish the factual truth of messages separately, based on scientific evidence). Asking people directly which elements of a tweet make a message more or less credible might not yield reliable answers (people might not know, might not be aware, or might be biased by priming), hence the experimental manipulation.
I have a couple of questions regarding this study:
1. Ethical concerns: can I manipulate the tweets/Facebook posts before showing them to Turkers? Do I need a consent form from the users who posted these messages, since I will be using their original text?
2. Do I need to show the whole post as it is? I have seen studies where only the text is shown, without the frame or logo.
I would very much appreciate references that explain how to do this properly; this is my first time running an experimental study. I plan to run a pre- and post-survey alongside the experiment, so any feedback or guidelines would be very helpful.
We are planning to integrate a data anonymizer tool into our application. This work will be used for a thesis, and we are wondering what methods other than a user study testing usability could be used for the platform evaluation part.
I am using a Bechdel Test analysis to generate visualisations and to test how people engage with them and gather insights from them. I'm looking for participants for an online user study examining how users engage with data visualisations. You can take part using a standard web browser (e.g. Chrome, Firefox, Safari, Internet Explorer) on your computer. The study involves looking at data visualisations about Hollywood movies, writing down insights you gained from them, and answering some questions about your experience with these visualisations. The study will take no longer than 30 minutes. £50 will be awarded to each of the top three participants with the highest number of correct insights. If several participants provide an equal number of accurate insights, the winners will be chosen by random draw. To start the study, please click here: https://iot.cs.ucl.ac.uk/embel/study/?SURVEY_CODE=TEST
The importance of public/user participation in public infrastructure projects is increasingly recognised, leading to greater public/user engagement in the development of such infrastructure assets. The notion of the 'uninformed project user' is fading with the availability of better technologies for information dissemination, whether project-specific or about the local area. The need to understand and assess local needs as fully as possible, and efforts toward greater transparency in the project delivery system, reinforce this trend.
Is it becoming established that the views of local users have the greatest utility in enhancing project value and drawing the maximum benefit from the asset?
I'm planning to run a user study in Korea, and the study itself is in English. I want to recruit participants with a certain level of English proficiency, but I was wondering whether this is acceptable by academic standards.
I designed a user study for my research in which users have too many tasks to perform. It is a within-subject repeated-measures study.
I am considering randomly reducing the number of tasks given to each user.
For example: we have 20 tasks and we randomly give 14 of them to each user.
This makes my data unbalanced, as many entries will be missing.
So my question is: if I do so, is there still a way to analyse this data with a statistical test (ANOVA etc.)?
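To make the assignment scheme concrete, here is a rough sketch; the number of participants and the ID format are made up for illustration:

```python
# Rough sketch of randomly assigning 14 of the 20 tasks to each participant.
# The number of participants and the ID format are illustrative assumptions.
import random
from collections import Counter

TASKS = list(range(1, 21))   # tasks 1..20
TASKS_PER_USER = 14
N_PARTICIPANTS = 24          # hypothetical sample size

assignments = {
    f"P{p:02d}": sorted(random.sample(TASKS, TASKS_PER_USER))
    for p in range(1, N_PARTICIPANTS + 1)
}

# Check how evenly the tasks end up being covered across participants.
coverage = Counter(task for tasks in assignments.values() for task in tasks)
print(coverage.most_common())
```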

I am looking for a questionnaire to measure user acceptance of artificial intelligence. Is there already a validated model for the acceptance of AI (apart from models of technology acceptance in general, like the TAM)? I want to explore which factors influence users' acceptance of AI. Thank you for your help.
I am looking for research papers to understand the design and evaluation of tangible educational toys, especially for visually impaired children. Please recommend some in your comments.
There are two terms in user-centered design that always confuse me: co-creation and innovation. What are the main differences between them? There are also other related terms, such as co-design, co-production, crowdsourcing, and mass customization.
I have conducted a user study with 32 users. The users completed 8 tasks with each of 4 visualisation methods, i.e. each user was tested with every method. I recorded the time users took to answer the questions successfully. I would like to run a repeated-measures ANOVA, but I have a very high number of missing values; sometimes I have only 6 values out of 32 users, so I am not sure what to do with the missing values or which analysis to run.
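For what it is worth, here is a minimal sketch of how the completeness of the data per method could be inspected before choosing an analysis; the column and file names are illustrative assumptions, not my actual data:

```python
# Sketch: count how many valid (non-missing) timings exist per visualisation
# method and per method/task cell. Assumes a long-format table with
# illustrative columns "user", "method", "task", "time" (NaN if unsolved).
import pandas as pd

df = pd.read_csv("timings.csv")  # hypothetical file

print(df.groupby("method")["time"].count())                      # per method
print(df.groupby(["method", "task"])["time"].count().unstack())  # per cell
```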
Thank you so much for your help.
I was wondering if there is existing research on measuring the impact / contributions an individual makes in an online collaboration setting. The literature has shown that there are different user roles and that their contributions to the overall "success" also differ (e.g. providing solutions, connecting people, providing guidance / comments, ...). What I have not found so far is any paper trying to measure the "performance" of individuals on such a platform / in such a community. I would be interested in researching how the log data from online collaboration platforms can be used to measure the "performance" of individuals. Does anybody know of research in this direction? Can anybody recommend papers / streams of research that might be helpful for getting started?
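To make the kind of measure I have in mind concrete, here is a rough sketch of deriving simple per-user contribution counts from platform logs; the event types, weights, and column names are entirely hypothetical, not any particular platform's schema:

```python
# Rough sketch: simple per-user contribution metrics from hypothetical
# platform log data. Event names, columns and weights are assumptions.
import pandas as pd

logs = pd.read_csv("platform_logs.csv")  # hypothetical: user_id, event_type, timestamp

# Count events per user and event type (e.g. "post", "comment", "solution_accepted").
activity = logs.pivot_table(index="user_id", columns="event_type",
                            aggfunc="size", fill_value=0)

# A naive composite "contribution" score; the weights are purely illustrative.
weights = {"solution_accepted": 3, "post": 2, "comment": 1}
activity["contribution_score"] = sum(
    activity.get(event, 0) * w for event, w in weights.items()
)
print(activity.sort_values("contribution_score", ascending=False).head())
```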
Thanks a lot!
The approach to designed greenery has changed drastically in the last few decades. There have been fruitful discussions on the urban designer's/architect's role in shaping urban greenery; however, much remains unexamined regarding the particular relationship between users, technology, and the evolution of urban greenery. How did (and does) technology shape urban greenery? I would appreciate ideas on place-specific user feedback/involvement from dense city fabrics. Thanks.
I am looking for feedback on significant publications (environmental psychology, urban design, architecture and planning) since 2000 that could be used as directives.
In a current project I am researching the context of use of social workers' cooperative attitude towards (cross-organisational) cooperation in the work field. I am met with quite a lot of resistance on this subject, possibly because people are hesitant to acknowledge that there is room for improvement in their own performance or work and may experience the research as a personal assessment.
I thought it might be a good idea to shift from a 1st-person, personal view towards a 3rd-person view, using personas.
I realise this is not the main use of personas, so I was wondering if there are any experiences with similar or different approaches.
We are planning a user study that involves judging the quality of short fly-throughs of virtual scenes. After each fly-through, we plan to ask the user whether there were visual artifacts (which are explained in pre-study instructions), rated on a Likert scale from 1 to 7.
We are now considering also asking respondents how confident they were in their answer. My first idea is that this could enable us to filter out samples where confidence was very low (e.g. because the user was distracted), but I am not sure whether this makes sense or whether it is even legitimate to do.
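To illustrate the filtering step I have in mind, here is a minimal sketch; the column names, file name, and threshold are made-up assumptions, not a validated procedure:

```python
# Minimal sketch of filtering responses by self-reported confidence.
# Column names ("artifact_rating", "confidence"), the file name and the
# threshold are illustrative assumptions only.
import pandas as pd

responses = pd.read_csv("responses.csv")  # hypothetical file

CONFIDENCE_THRESHOLD = 3  # purely illustrative cut-off
kept = responses[responses["confidence"] >= CONFIDENCE_THRESHOLD]

print(f"Kept {len(kept)} of {len(responses)} responses "
      f"({len(responses) - len(kept)} low-confidence responses excluded)")
```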
I'd be very grateful for any insights on this matter!
I need to define some scenarios for smart spaces. For example: 1) the lights are turned on when the user enters the environment; 2) the telephone starts to play messages automatically.
Is there any formal template, method, or language for scenario definition in this context?
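For illustration only, this is roughly the trigger-condition-action structure I imagine such a template capturing; the field names are made up for the sketch and this is not an established scenario-definition language:

```python
# Illustrative trigger-condition-action structure for a smart-space scenario.
# Field names are made up; this is not an established template or language.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Scenario:
    name: str
    trigger: str                                           # event that starts the scenario
    conditions: List[str] = field(default_factory=list)    # context that must hold
    actions: List[str] = field(default_factory=list)       # what the space should do

scenarios = [
    Scenario(
        name="Welcome lighting",
        trigger="user enters the environment",
        conditions=["ambient light below threshold"],
        actions=["turn on the lights"],
    ),
    Scenario(
        name="Message playback",
        trigger="user enters the environment",
        conditions=["unread voice messages exist"],
        actions=["telephone starts playing messages"],
    ),
]

for s in scenarios:
    print(f"{s.name}: WHEN {s.trigger} IF {s.conditions} THEN {s.actions}")
```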
A colleague and I are currently running an international survey aimed at the global Human Factors and Ergonomics (HF/E) community, asking them (in English) for their personal understanding of words related to social sustainability. You might call it an explorative study of a specific profession's terminology use.
Recruitment has been mainly conducted online, in the form of spreading the survey link via the following channels:
- email lists to participants at various HF/E conferences
- spreading the survey with a short description in specialized interest groups on social media like LinkedIn and Facebook
- posting the survey link on interest groups' websites (which is dependent on personal contacts)
- asking personal contacts for help with spreading the survey among their peers.
Survey participants were given the option of providing their email address if they were interested in receiving follow-ups on the results.
After the survey had been out for about 2 months, we had 61 participants with a skewed over-representation of certain countries, so we decided to try and boost interest in the survey by releasing some descriptive info of the sample to previous participants, e.g. the nationality, gender distribution and represented application areas of the sample (but of course no actual results of the pertinent questions asked). We tried to 'liven this up' by making a short infographic video, which was then emailed to previous survey participants and posted on all the social media groups that were previously approached.
It has so far been an interesting challenge to get participation for the survey amid what I can only assume is a general online buzz of distractions and requests for people's attention. Our survey may come out of nowhere asking for 10-15 minutes of an HF/E professional's time, meaning that the only apparent motivator for participating is a genuine willingness to help and an interest in learning what our community says about these issues.
Since the recruitment approach is best described as "snowball recruitment", where we have a purposive sample and hope that participants and contacts spread the message onward, we cannot say (beyond an educated guess) how many potential respondents we reached compared to how many actually answered; there is no guarantee that every person logging in to a social media forum actually sees the posting, due to e.g. news-filtering functions in LinkedIn.
Has anyone else faced similar recruitment challenges regarding creating interest, increasing outreach and keeping the sample representative; and if so, what are your thoughts?
Hi
I have some experience with the EyeTribe (ET), and I wonder whether anybody has had similar problems to mine. My questions are:
How many times do you repeat the calibration until you have a reliable one?
How long are your sessions when you use it in user studies?
How long can you collect data with the ET? Does it automatically shut down after a while?
What viewing distance do you use?
Do you use any extra tool to stabilize the user's head or to preserve the calibration and the viewing distance?
Do you use ET more for post analysis or for real-time interaction?
Thanks
I was wondering if there exist any publicly available datasets that stem from experimental studies conducted on (immersive) visualization systems such as CAVEs, tiled displays, etc.
Obviously there are tons of such studies in the literature, but I am not aware of any research group releasing anonymized versions of their per-subject metrics (let alone head-tracking trajectories).
Any pointers on this front would be greatly appreciated :).