User Experience Research - Science topic

Explore the latest questions and answers in User Experience Research, and find User Experience Research experts.
Questions related to User Experience Research
  • asked a question related to User Experience Research
Question
1 answer
Hi,
What parameters should one test for when it comes to sound? Are there any heuristics available?
Thanks
Relevant answer
Answer
  1. Clarity and Intuitiveness: When testing sound, consider real-life examples like the sound design in ride-sharing apps, where distinct audio cues differentiate various app states (e.g., ride request accepted). Explore studies on the impact of sound on users' attention and understanding of notifications in mobile apps, aiding in the evaluation of clarity and intuitiveness. Heuristic guidelines by Jakob Nielsen suggest using recognizable and non-ambiguous sounds to facilitate clear user comprehension.
  2. Volume and Disturbance: Examine how video streaming platforms like YouTube manage audio levels during advertisements to avoid abrupt disturbances. Sound intensity can influence user attention and emotional engagement. When testing, consider the appropriate use of sound to enhance, not disrupt, user experiences. Follow guidelines like ISO 9241-11, which emphasizes the importance of appropriate sound levels to avoid annoyance.
  3. Context and Relevance: Study the use of sound in navigation systems like Google Maps, where voice instructions align with user context (e.g., turn-by-turn directions). Ongoing research examines how sound influences virtual reality experiences, enhancing immersion and context relevance. Heuristics like those proposed by Preece, Rogers, and Sharp emphasize the significance of providing appropriate feedback in context for improved user understanding.
  4. Consistency and Branding: Real-life examples of consistent sound design can be found in major operating systems like Apple's iOS, where cohesive sound cues establish brand recognition across devices. Studies on sonic branding explore how consistent audio elements reinforce brand identity. Heuristic principles by Nielsen-Norman Group advocate for consistency (sound, in this context) to aid user recognition and familiarity.
  5. Accessibility and Inclusivity: Consider examples of how sound is complemented with visual or haptic feedback in assistive technologies like screen readers for users with visual impairments. Ongoing research explores the use of haptic feedback as an alternative or supplement to auditory cues. Follow WCAG guidelines, which emphasize making essential auditory information available through alternative means for accessibility.
  6. User Preferences: Utilize A/B testing to compare user responses to different sound options for actions like button clicks or error notifications. Look at research on user preferences in sound design. Heuristics proposed by Tognazzini recommend providing customizable options to accommodate individual user preferences.
  7. Error Handling and Feedback: Examine instances of error sounds in applications like messaging platforms, where distinct tones signal message delivery failure. Studies suggest that the design of error-feedback sounds has a substantial impact on user resilience and error recovery. Usability heuristics such as ISO 9241-110 emphasize providing clear and informative feedback for user actions, including errors.
References
Clarity and Intuitiveness:
1. "Uber - CMoore Sound." https://cmooresound.com/work/uber/
2. "Applying sound to UI - Material Design." https://m2.material.io/design/sound/applying-sound-to-ui.html#hero-sounds
3. Jakob Nielsen. "10 Usability Heuristics for User Interface Design." https://www.nngroup.com/articles/ten-usability-heuristics/
Volume and Disturbance:
1. G. Lemaitre et al. "Feelings Elicited by Auditory Feedback from a Computationally Augmented Artifact: The Flops." IEEE Transactions on Affective Computing, 3 (2012): 335-348. https://doi.org/10.1109/T-AFFC.2012.1
2. ISO 9241-11:1998. "Ergonomic requirements for office work with visual display terminals (VDTs)." International Organization for Standardization (1998). https://www.iso.org/standard/16883.html
Context and Relevance:
1. Preece, J., Rogers, Y., & Sharp, H. Interaction Design: Beyond Human-Computer Interaction (2019). Wiley.
2. Khoa-Van Nguyen et al. "Spatial audition in a static virtual environment: the role of auditory-visual interaction." J. Virtual Real. Broadcast., 6 (2009).
3. Gaver, W. "Auditory Icons: Using Sound in Computer Interfaces." Human-Computer Interaction, 2 (1986): 167-177. https://doi.org/10.1207/s15327051hci0202_3
Consistency and Branding:
1. Shawn P. Scott et al. "Small sounds, big impact: sonic logos and their effect on consumer attitudes, emotions, brands and advertising placement." Journal of Product & Brand Management (2022). https://doi.org/10.1108/jpbm-06-2021-3507
Accessibility and Inclusivity:
1. J. Maculewicz et al. "An investigation on the impact of auditory and haptic feedback on rhythmic walking interactions." Int. J. Hum. Comput. Stud., 85 (2016): 40-46. https://doi.org/10.1016/j.ijhcs.2015.07.003
2. WCAG 2.1. Web Content Accessibility Guidelines (WCAG) 2.1. World Wide Web Consortium.
User Preferences:
1. Erkin Asutay et al. "Emoacoustics: A Study of the Psychoacoustical and Psychological Dimensions of Emotional Sound Design." Journal of the Audio Engineering Society, 60 (2012): 21-28.
2. Tognazzini, B. Tog on Interface (1992).
Error Handling and Feedback:
1. Batmaz, A., & Stuerzlinger, W. (2021). "The Effect of Pitch in Auditory Error Feedback for Fitts' Tasks in Virtual Reality Training Systems." 2021 IEEE Virtual Reality and 3D User Interfaces (VR), 85-94. https://doi.org/10.1109/VR50410.2021.00029
2. ISO 9241-11:2018. "Ergonomics of human-system interaction." International Organization for Standardization (2018). https://www.iso.org/obp/ui/#iso:std:iso:9241:-11:ed-2:v1:en
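The A/B-testing suggestion in point 6 can be made concrete with a simple two-proportion z-test on the share of users who preferred each sound. A minimal sketch; the counts and the "rated the cue clear" outcome are invented illustration data, not results from any of the studies cited above:

```python
# Two-proportion z-test for an A/B test of two notification sounds.
# All numbers here are made up for illustration.
from math import erf, sqrt

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for the difference in proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Sound A: 68 of 100 participants rated the cue "clear"; sound B: 52 of 100.
z, p = two_proportion_z(68, 100, 52, 100)
```

With these invented counts the difference is significant at the conventional 5% level; in a real study the sample size should be planned in advance with a power analysis rather than read off after the fact.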
  • asked a question related to User Experience Research
Question
2 answers
We, at the Design Innovation Centre of Mondragon University, are working to better understand the interaction between humans and robots through a user-focused questionnaire. Our Human-Robot Experience (HUROX) questionnaire will gauge human perception and acceptance of robots in an industrial setting. Your participation in completing the questionnaire will greatly help us validate our findings.
Please, answer the electronic questionnaire that can be accessed here: https://questionpro.com/t/AWzTgZwkBl
The estimated time to answer all questions is about 40 minutes.
Your cooperation and support in this research effort would be greatly appreciated. We believe that by working together, we can advance our understanding of human-robot interaction and create better, more intuitive technologies for the future. If you're willing, please share this message with your network of contacts to help us reach even more participants.
Thank you for your cooperation!
Relevant answer
Answer
Your questionnaire is too long and the questions are repetitive, but what made me stop were two questions that were never asked: I was never asked whether I want to engage with robots at all, or whether any past experience with them was satisfactory.
  • asked a question related to User Experience Research
Question
5 answers
Hello
I will be doing research on how nursing students experience the use of a virtual medicine room, and I have looked into UTAUT2. I will do a post-test survey using a questionnaire.
The students have not tried VR before and are in their second year of education.
I see that there are parts of UTAUT2 that are not relevant: social influence, facilitating conditions, price value, and habit. These would mean little to nursing students trying the virtual medicine room for the first time, and I think they would be confusing.
I want to use a validated model/questionnaire like UTAUT2, so I wonder if any of you have experience using an adjusted version of UTAUT2, as I plan to? Is this OK to do?
I want to at least cover perceived usefulness, perceived ease of use, and hedonic factors. I like the TAM model, but it doesn't include perceived enjoyment, so I think I will use UTAUT2.
  • asked a question related to User Experience Research
Question
4 answers
One of my students is setting up an experiment to test the effect of smart cameras on bridge operators’ situation awareness. In this experiment participants will watch 50 short videos per condition (smart camera vs. normal camera). After each video participants need to answer one simple question. Furthermore, after each condition the participants are asked to answer 6 questions.
We are looking for a software package in which we can set up this experiment. This means we need a software package in which we can combine the short videos (100 in total) with the questions. The software should not only display the videos and questions but also capture the participants' answers. For the video part of the experiment, it is preferable that the screen shows only the video itself, with no white/black frame around it.
What is a suitable software package which we can use to create this experiment set-up?
Relevant answer
Answer
Does it have to be a software package? An alternative would be to do this with HTML. It would not be too challenging for your student to create a basic HTML page using JavaScript to accomplish what you are looking for. If you are at a university, you could probably have someone in IT, or a student, set this up on a university server.
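The browser-based approach can be sketched end to end with Python's standard-library HTTP server: one page that plays a full-screen video, shows a question when the clip ends, and posts the answer back to be logged. Everything here (the file name clip_001.mp4, the sample question, the /answer endpoint, the answers.csv log) is an illustrative assumption, not a finished experiment platform:

```python
# Minimal sketch of a browser-based video-plus-question experiment.
# File names, the question text, and the logging format are placeholders.
import csv
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ANSWER_LOG = "answers.csv"

PAGE = """<!doctype html>
<html><body style="margin:0;background:#000">
  <video id="clip" src="clip_001.mp4" autoplay
         style="width:100vw;height:100vh;object-fit:contain"></video>
  <div id="question" hidden>
    <p>Did the operator notice the obstacle?</p>
    <button onclick="send('yes')">Yes</button>
    <button onclick="send('no')">No</button>
  </div>
  <script>
    const clip = document.getElementById('clip');
    clip.onended = () => { clip.hidden = true;
                           document.getElementById('question').hidden = false; };
    function send(answer) {
      fetch('/answer', {method: 'POST',
                        body: JSON.stringify({video: clip.src, answer})});
    }
  </script>
</body></html>"""

def log_answer(record, path=ANSWER_LOG):
    """Append one participant answer as a CSV row: video, answer."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([record["video"], record["answer"]])

class ExperimentHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the experiment page (video file serving omitted for brevity).
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE.encode())

    def do_POST(self):
        # Record the posted answer and reply with "no content".
        length = int(self.headers["Content-Length"])
        log_answer(json.loads(self.rfile.read(length)))
        self.send_response(204)
        self.end_headers()

# To run: HTTPServer(("", 8000), ExperimentHandler).serve_forever()
```

A real setup would also randomize clip order, serve the video files themselves, and record a participant ID with each answer; survey platforms such as Qualtrics or open tools such as jsPsych offer these features out of the box.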
  • asked a question related to User Experience Research
Question
6 answers
Hi all,
Does anyone have any recommendations for potentially useful articles? There is plenty out there for computer/mobile/tablet-based experiences, but I have not come across any for Augmented Reality.
Thanks!
  • asked a question related to User Experience Research
Question
3 answers
Since most user research, often called UX research, deals with software design, I am wondering where I can find information on methods (e.g., interviews, card sorting), tools (e.g., eye tracking), and best practices (e.g., A/B testing), especially in the area of hardware design.
Thank you very much for any advice,
Jonas
Relevant answer
Answer
Dear Jonas,
A lot of the "classics" in the field of UX/usability or interaction design are actually not tied to software design alone. Most methods are applicable to hardware design too, including all sorts of everyday things (referring to the classic example, The Design of Everyday Things by Norman).
You might also want to look at DIN EN ISO 9241, specifically parts such as 210 and 110, depending on what fits your needs.
The key questions here are: what fits your needs, what is the problem, and what is your goal?
A good book about general "HCI" research methods is for example:
Lazar, J., Feng, J. H., & Hochheiser, H. (2017). Research methods in human-computer interaction. Morgan Kaufmann.
All the best for your research!
  • asked a question related to User Experience Research
Question
2 answers
Hi,
I have a recorded video in .mp4 format that shows cursor movements; it was made with screen-recording software. I need a static summary of the cursor movements made on the screen, or heat maps of those movements. Are there any free online tools for achieving this?
Thanks,
Abhijai
Relevant answer
Answer
Thanks Adriano.
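Since no specific tool is named in the thread, the underlying idea can be sketched directly: locate a known cursor image in each frame and accumulate the hit positions into a heat map. A minimal pure-NumPy sketch; the sum-of-squared-differences matching, the template, and the in-memory `frames` list are illustrative assumptions (in practice the frames would be decoded from the .mp4, e.g. with OpenCV's VideoCapture, and cv2.matchTemplate would replace the brute-force loop):

```python
# Cursor heat map from video frames via brute-force template matching.
# Pure NumPy for self-containment; far too slow for real videos as-is.
import numpy as np

def locate(frame, template):
    """Return (row, col) of the best SSD match of template in frame
    (top-left corner of the matched patch)."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            ssd = np.sum((frame[r:r + th, c:c + tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

def cursor_heatmap(frames, template):
    """Add one count per frame at the matched cursor position."""
    heat = np.zeros(frames[0].shape)
    for frame in frames:
        r, c = locate(frame, template)
        heat[r, c] += 1
    return heat
```

For display, the resulting count matrix can be smoothed with a Gaussian blur and overlaid on a screenshot; dedicated eye-tracking/usability suites produce the same kind of overlay from gaze or cursor logs.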
  • asked a question related to User Experience Research
Question
7 answers
My current job involves UX research predominantly on non-consumer ICT (information and communication technology) products. I'm looking for people who do similar work, for information and experience exchange. Much of the UX material you find on the web is about consumer products and screen-based services, and it's quite difficult to find material about UX work on non-consumer equipment. I also screen the typical human-computer interaction and ergonomics conferences, and there is not much coverage of professional and non-consumer equipment there either.
Anybody out there?
Relevant answer
Answer
Hi Andreas,
what exactly do you mean by non-consumer ICT? Do you mean business ICT? Professional programs/equipment?
I think it might be helpful to look into expert systems, even from the consumer ICT domain.
If by non-consumer ICT you mean everything where safety, security, or effectiveness is more important than UX, you might find useful material in the literature on expert systems in general.
Best regards,
Patrick
  • asked a question related to User Experience Research
Question
3 answers
If we want to know how different room designs affect people, we need to do some surveys. Are there any authoritative psychological scales? Please recommend some scales, articles, or books if possible. And maybe we can have a discussion about this.
Relevant answer
Answer
If your work addresses the luminous or visual aspect of space design, you may wish to consult publications by John Flynn, who pioneered work related to the perceptual/psychological effects of light in spaces. A good summary with citations to his and other work is online at: https://www.informedesign.org/_news/feb_v02-p.pdf
  • asked a question related to User Experience Research
Question
7 answers
I'll be conducting a series of Participatory Design workshops to co-design new technology for people with mental health (MH) difficulties. The participants will include people with MH difficulties and health professionals. I'd be interested in evaluating the extent to which participants felt their needs and priorities were represented, and whether the tasks were relevant to their skills and expertise. I'd be grateful to see any examples or pointers.
Thanks,
Luca
Relevant answer
Answer
Hi Luca,
I can suggest two articles that were part of a special issue that I edited. I hope they are useful.
 Ann Heylighen & Jasmien Herssens (2014) Designerly Ways of Not Knowing:What Designers Can Learn about Space from People Who are Blind, Journal of Urban Design, 19:3, 317-332, DOI: 10.1080/13574809.2014.890042
 Veerle Cox, Marleen Goethals, Bruno De Meulder, Jan Schreurs & Frank Moulaert (2014) Beyond Design and Participation: The ‘Thought for Food’ Project in Flanders, Belgium, Journal of Urban Design, 19:4, 412-435, DOI: 10.1080/13574809.2014.923742
best wishes,
A
  • asked a question related to User Experience Research
Question
10 answers
My PhD is a design study of a visual analytics system that visualises text cohesion, designed to help editors make documents more coherent. I am in the process of analysing and writing up the findings of my first user evaluation study (a ‘lab’ one, rather than an ‘in-the-wild’ one, the latter of which is yet to come). My background is as a domain expert (professional editor), so I have minimal experience with HCI methods.
I have the data, in the form of transcripts of sessions where I sat with domain-expert users and had them play with the tool (using their own data as well as several other example sets of data) and discuss their impressions and thoughts. I already know what phenomena I find interesting, but I can't seem to just write the chapter--I keep reorganising and renaming and remixing my structure. I can't seem to get beyond that stage of structuring and restructuring the chapter. I think this is happening because I want to assure myself that my observations are legitimate and relevant, and that they are elicited and expressed in some useful and systematic way. I don't know what the norms are in the way this kind of research is written up, or how to make best use of the data. As I said, I already know what phenomena I personally find interesting in the data, but I haven’t used any particular theory or process to identify those things. I’ve pretty much just used my knowledge/intuition. Is this OK? And if so, how do I organise that? It's just a series of observations right now. For example, should I organise them:
1. by what component of the designed tool I think they relate to (cohesion theory, LSA rendering of cohesion, visualisation, work practices in the domain, individual differences in users?)?
2. By what body of theory I want to use to explain why they happened (Affordances for interface design problems, Gestalt for visual perception problems, lack of connection with linguistic theory in writing/composition instruction for users' difficulties in understanding the theory of cohesion, etc)?
3. Or just put the observed phenomena in there one by one, as is ('users had unexpected ideas about what the system was for', 'users took a long time to learn how to use the system', 'some users found the lack of objective standard of cohesion challenging', etc), and then address the possible reasons for why these phenomena might have happened within the body of each of those sections (because, after all, this part will only be speculation, given that I won't be isolating variables and testing any of these theories--I will just be suggesting them as possible leads for further studies)?
Each of these options has a limitation. I feel that number one, organising by component, is a bit difficult and presumptuous. I don't necessarily know that a user's behaviour is caused by a problem with the visualisation design or by the theory the visualisation is trying to communicate, or an unintuitive interface with which to interact with the visualisation, or a lack of familiarity on the part of the user with the sample text, or the user's individual problems with computers/technology in general, or a limitation in the way I explained how the system works, or an incompatibility with their practice as an editor, or... etc etc. It could be one of those things or several of those things or none of those things, and I won't have enough in the data to prove (or sometimes even guess) which. This same problem plagues the second option--to organise by theory. That presumes that I know what caused the behaviour.
In fact, now that I have typed this out, it seems most sensible to use the third option--to just list out what I noticed and not try to organise it in any way. This to me (and probably to others) looks informal and underprocessed, like undercooked research. It's also just a bit disorganised.
I think looking at other similar theses will help. I have had difficulty locating good examples of design studies with qualitative user evaluations to show me how to organise the information and get a feel for what counts as a research contribution. Even if I find something, it's hard to know how good an example it is (as we all know, some theses scrape in despite major flaws, and others are exemplary).
Can anyone offer some advice, or point me to some good examples? Much appreciated.
Relevant answer
Answer
Caroline, there may be some value in comparing your results with those of another objective reviewer of the transcripts. From that you can measure the overlap and compute an interrater reliability score, which may assure your committee that others at least somewhat agree with your list. If there is no agreement, you may want to consider what that other reviewer of the transcripts is saying. The reviewer should have some level of expertise in advance, or you may consider training them (or having someone else train them) until they are competent to evaluate the transcripts.
In terms of other researchers with related experience, I would encourage you to look at the research of Jesse Crosson (on ResearchGate; formerly at the New Jersey School of Medicine and Dentistry, now at Princeton Health, with experience in medical informatics).
You may also see the research articles of Mihaela Vorvoreanu (also on ResearchGate). Both have specialized for a number of years in qualitative methods related to human-computer interaction.
Their research (and the research of their students and co-authors) will be considered credible by your committee members. If additional questions arise, you may contact me directly for any follow-up.
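The interrater reliability check suggested above is most often reported as Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch, assuming two coders each assign exactly one code to each transcript excerpt; the code labels and sequences below are invented examples:

```python
# Cohen's kappa for two coders labelling the same transcript excerpts.
# Labels are invented illustrations, one code per excerpt.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length label sequences."""
    n = len(coder_a)
    # Observed agreement: fraction of excerpts with identical codes.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

a = ["affordance", "learning", "learning", "theory", "affordance", "theory"]
b = ["affordance", "learning", "theory", "theory", "affordance", "learning"]
kappa = cohens_kappa(a, b)
```

Commonly cited rules of thumb (e.g. Landis and Koch) treat kappa above roughly 0.6 as substantial agreement, though such cutoffs are conventions rather than tests; disagreements are usually resolved by discussion and refining the codebook before re-coding.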