Human Computer Interface - Science topic
Explore the latest questions and answers in Human Computer Interface, and find Human Computer Interface experts.
Questions related to Human Computer Interface
The educational sector is changing rapidly as Artificial Intelligence (AI) becomes integrated into teaching and learning. AI in education offers groundbreaking capabilities that enrich educational experiences and resource distribution while improving academic performance. At the same time, this digital advancement creates multiple obstacles, including concerns about information security, biased algorithms, the displacement of educators, and the need for systematic evaluation of AI's cognitive and interactive effects on human skills. This review examines existing scholarly findings on AI in education to understand both its advantages and its negative effects, as well as methods for utilizing its benefits while avoiding its shortcomings. The review also investigates the use of AI assistants to boost research efficiency.
AI is emerging as a powerful tool to personalize and optimize the educational experience, offering tailored support to students and empowering educators with data-driven insights.
A. Personalized Learning and Adaptive Systems
One of the most promising applications of AI in education is the development of personalized learning systems [6]. These systems leverage AI algorithms to analyze student performance, identify individual learning needs, and adapt the content and pace of instruction accordingly. This individualized approach can lead to improved learning outcomes and increased student engagement [6]. Several studies explore the use of AI to optimize resource allocation within networks that support learning [2].
B. Intelligent Tutoring Systems
AI-powered intelligent tutoring systems (ITS) are designed to provide students with individualized instruction and feedback, mimicking the role of a human tutor [6]. These systems often use AI to diagnose student errors, provide targeted explanations, and adjust the difficulty of problems based on student performance. The use of AI in such systems can be combined with educational dashboards to provide real-time feedback and guidance to instructors [20]. The effectiveness of ITS has been demonstrated across a range of subjects and age groups, with studies showing significant improvements in student learning [6].
C. AI-Assisted Content Creation and Delivery
AI is also being used to automate and enhance the creation and delivery of educational content. AI can generate quizzes, assessments, and practice problems, freeing up educators' time and resources [6]. Furthermore, AI can be used to create interactive simulations, virtual field trips, and other engaging learning experiences [6]. Generative AI can support humans in conceptual design by assisting with problem definition and idea generation [16].
D. AI in Research
AI is also revolutionizing the way research is conducted. It offers several benefits to researchers, including powerful referencing tools, improved understanding of research problems, enhanced research question generation, optimized research design, stub data generation, data transformation, advanced data analysis, and AI-assisted reporting [6]. For example, AI can be used to analyze large datasets of student performance data to identify patterns and predict student success [15]. AI can also help analysts understand and verify AI-assisted data analyses [5].
II. The Dark Side: Detriments and Challenges of AI in Education
While AI offers significant potential benefits, it also presents several challenges that must be carefully addressed to ensure its responsible and ethical implementation in education.
A. Data Privacy and Security
The use of AI in education often involves the collection and analysis of large amounts of student data, raising serious concerns about data privacy and security [14]. Protecting student data from unauthorized access, misuse, and breaches is paramount [14]. Clear policies and regulations are needed to govern the collection, storage, and use of student data, and to ensure that students and parents are informed about how their data is being used [14].
B. Algorithmic Bias and Fairness
AI algorithms are trained on data, and if the training data reflects existing societal biases, the AI system may perpetuate and even amplify those biases [14]. This can lead to unfair or discriminatory outcomes for certain groups of students [14]. For example, if an AI-powered assessment tool is trained on data that underrepresents or misrepresents students from certain backgrounds, it may unfairly penalize those students [14]. Addressing algorithmic bias requires careful attention to the data used to train AI systems, as well as ongoing monitoring and evaluation to identify and mitigate any biases that may emerge [14].
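The ongoing monitoring mentioned above can start with simple group-level metrics. As one hedged illustration (demographic parity is only one of several, often mutually incompatible, fairness criteria), the gap in positive-outcome rates between groups can be computed as follows:

```python
def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates across groups.

    predictions: iterable of 0/1 model outcomes (e.g. "pass" decisions)
    groups: iterable of group labels, one per prediction

    A gap near 0 suggests parity on this one metric; a large gap is a
    signal to investigate the training data and model, not proof of bias
    on its own.
    """
    counts = {}
    for y, g in zip(predictions, groups):
        n, k = counts.get(g, (0, 0))
        counts[g] = (n + 1, k + y)          # (total, positives) per group
    rates = [k / n for n, k in counts.values()]
    return max(rates) - min(rates)
```

In an assessment setting, `predictions` might be pass/fail decisions from an AI grader and `groups` the demographic attribute being audited.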
C. Impact on Human Educators
The increasing use of AI in education raises concerns about the potential displacement of human educators [14]. While AI is unlikely to completely replace teachers, it could automate some of the tasks traditionally performed by educators, such as grading assignments and providing basic instruction [14]. It is crucial to consider how AI will reshape the roles and responsibilities of educators and to provide teachers with the training and support they need to effectively integrate AI into their practice [14].
D. Over-Reliance and Loss of Critical Thinking
Over-reliance on AI can lead to a decline in critical thinking skills and a diminished capacity for independent problem-solving [4]. Students may become overly dependent on AI-powered tools and may not develop the skills they need to think critically, analyze information, and make independent judgments [4]. It is essential to design AI-powered tools that support and enhance human learning rather than replacing it [4].
E. Ethical Concerns
The use of AI in education raises a number of ethical concerns, including the potential for AI to be used to manipulate or control students, the lack of transparency in AI algorithms, and the potential for AI to be used to monitor and track student behavior [13]. It is essential to establish clear ethical guidelines for the development and deployment of AI in education and to ensure that AI systems are used in a way that respects student autonomy, privacy, and well-being [13].
III. Human-AI Collaboration: Designing Effective Interactions
The most promising approach to integrating AI into education is to focus on human-AI collaboration, where AI systems augment and support human educators and learners [7].
A. Designing for Human-AI Collaboration
Effective human-AI collaboration requires careful attention to the design of AI systems and the ways in which humans interact with them [7]. AI systems should be designed to be transparent, explainable, and trustworthy [19]. They should provide clear and concise explanations of their recommendations and decisions, and they should allow humans to understand how the AI system works [19]. It is also important to consider the potential for AI to be used to manipulate human behavior, and to design systems that are resistant to manipulation [13].
B. Explainable AI (XAI)
Explainable AI (XAI) is crucial for building trust and fostering effective human-AI collaboration [19]. XAI systems provide explanations for their decisions, allowing humans to understand why the AI system made a particular recommendation or prediction [19]. These explanations can help humans to assess the reliability of the AI system, identify potential errors, and make more informed decisions [19]. However, it is important to consider that the explanations themselves can be imperfect and potentially misleading [19].
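One widely used model-agnostic explanation technique is permutation importance: shuffle one feature column and measure how much the model's accuracy drops. The sketch below is a generic illustration of that idea, not the method of any system cited here, and it inherits the caveat above: a large drop shows reliance on a feature, not a causal story.

```python
import random

def permutation_importance(predict, X, y, feature, n_repeats=10, seed=0):
    """Average drop in accuracy when one feature column is shuffled.

    predict: function mapping a list of feature rows to predicted labels
    X: list of feature rows (lists), y: true labels, feature: column index
    """
    rng = random.Random(seed)

    def accuracy(rows):
        preds = predict(rows)
        return sum(p == t for p, t in zip(preds, y)) / len(y)

    base = accuracy(X)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)                      # break the feature-label link
        shuffled = [row[:feature] + [v] + row[feature + 1:]
                    for row, v in zip(X, col)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / n_repeats
```

Applied to a model that only looks at one feature, shuffling that feature hurts accuracy while shuffling an ignored feature changes nothing, which is exactly the kind of check a human can use to assess whether the model relies on sensible inputs.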
C. Adapting to User Needs
AI systems should be designed to adapt to the needs and preferences of individual users [10]. This includes providing different levels of assistance depending on the user's skill level and the complexity of the task [10]. It also includes providing users with the ability to customize the AI system to meet their specific needs [10].
D. The Model Mastery Lifecycle
The implementation of AI is constrained by the context of the systems and workflows it will be embedded within [7]. To address this, the Model Mastery Lifecycle framework provides guidance on human-AI task allocation and on how human-AI interfaces need to adapt as AI task performance improves over time [7].
E. Understanding Human Behavior in AI-Assisted Decision Making
To best support humans in decision making, it is essential to quantitatively understand how diverse forms of AI assistance influence humans' decision making behavior [4]. AI assistance can be conceptualized as the "nudge" in human decision making processes, with AI assistance modifying humans' strategy in weighing different information in making their decisions [4].
F. Accuracy-Time Tradeoffs
In time-pressured scenarios, such as doctors working in emergency rooms, adapting when AI assistance is provided is especially important [10]. Different forms of AI assistance exhibit different accuracy-time tradeoffs when people are under time pressure than when they are not [10].
IV. Optimizing AI-Assisted Systems
To fully realize the potential of AI in education, it is crucial to optimize AI-assisted systems to ensure their reliability, security, and effectiveness [11].
A. Code Generation and Optimization
AI-assisted code generation tools are transforming software development [8]. However, the security, reliability, functionality, and quality of the generated code must be guaranteed [11]. Strategies to optimize these factors are essential [11].
B. Reliability of AI Systems
The reliability of AI systems is a critical concern [18]. The SMART statistical framework for AI reliability research includes Structure of the system, Metrics of reliability, Analysis of failure causes, Reliability assessment, and Test planning [18].
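As a small illustration of the "Metrics of reliability" step, a classical starting point is the mean time between failures (MTBF) under a constant failure-rate (exponential) assumption. The exponential model and the numbers below are illustrative assumptions for this sketch, not part of the SMART framework itself.

```python
import math

def mtbf(total_hours, failures):
    """Point estimate of mean time between failures from field data,
    assuming a constant failure rate."""
    if failures == 0:
        raise ValueError("no failures observed; use a censored-data method")
    return total_hours / failures

def reliability(t, mtbf_hours):
    """Probability of surviving to time t under the exponential model:
    R(t) = exp(-t / MTBF)."""
    return math.exp(-t / mtbf_hours)
```

For AI systems specifically, "failure" needs a careful operational definition (e.g. a misprediction above some severity), which is precisely what the Structure and Metrics steps of the framework address.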
C. Understanding the User's Perspective
Understanding how users perceive and interact with AI assistants is crucial for designing effective systems [8]. This includes understanding why developers may choose not to use AI assistants and what improvements are needed [8].
D. AI and DevOps
DevOps teams can use AI to test, code, release, monitor, and improve the system [12]. AI can also make the automation that DevOps delivers more efficient [12].
V. Future Directions
The field of AI in education is rapidly evolving, and several key areas warrant further research and development.
A. Development of Robust and Explainable AI Systems
Building AI systems that are robust, reliable, and explainable is essential for fostering trust and ensuring the responsible use of AI in education [19]. This includes developing methods for detecting and mitigating algorithmic bias, as well as developing techniques for providing clear and concise explanations of AI decisions [19].
B. Personalized Learning at Scale
Further research is needed to develop personalized learning systems that can effectively adapt to the diverse needs of all learners, regardless of their age, background, or learning style [6]. This includes developing more sophisticated AI algorithms for analyzing student performance, as well as developing more engaging and effective instructional content [6].
C. Human-AI Collaboration Models
Developing effective models of human-AI collaboration is crucial for maximizing the benefits of AI in education [7]. This includes developing new methods for designing human-AI interfaces, as well as developing new strategies for training and supporting educators in the use of AI [7]. This also includes understanding how the AI's behavior can be described to improve human-AI collaboration [17].
D. Ethical and Policy Frameworks
Establishing clear ethical guidelines and policy frameworks is essential for ensuring the responsible and ethical use of AI in education [14]. This includes developing policies to protect student data privacy, address algorithmic bias, and ensure that AI is used in a way that promotes fairness, equity, and student well-being [14].
E. Addressing AI-related Concerns
Further research is needed to address concerns about the impact of AI on human educators, as well as the potential for AI to be used to manipulate or control students [14]. This includes developing strategies for training and supporting educators in the use of AI, as well as developing new methods for assessing the impact of AI on student learning and well-being [14].
F. Use of AI in Research
AI can be further used to improve research, and the development of AI tools to assist in research should be a focus [6].
In conclusion, AI has the potential to revolutionize education, offering unprecedented opportunities to enhance learning, improve teaching, and optimize educational outcomes. However, realizing this potential requires a careful and thoughtful approach that addresses the challenges and risks associated with AI, prioritizes human-AI collaboration, and establishes clear ethical guidelines and policy frameworks. By embracing a human-centered approach, we can harness the power of AI to create a more equitable, effective, and engaging educational experience for all learners.
==================================================
References
- [1] Travis Norsen. Intelligent Design in the Physics Classroom?. arXiv:physics/0603263v1 (2006). Available at: http://arxiv.org/abs/physics/0603263v1
- [2] Sana Sharif, Sherali Zeadally, Waleed Ejaz. Resource Optimization in UAV-assisted IoT Networks: The Role of Generative AI. arXiv:2405.03863v1 (2024). Available at: http://arxiv.org/abs/2405.03863v1
- [3] Wanja Timm Schulze, Sebastian Schwalbe, Kai Trepte, Stefanie Gräfe. eminus — Pythonic electronic structure theory. arXiv:2410.19438v3 (2024). Available at: http://arxiv.org/abs/2410.19438v3
- [4] Zhuoyan Li, Zhuoran Lu, Ming Yin. Decoding AI's Nudge: A Unified Framework to Predict Human Behavior in AI-assisted Decision Making. arXiv:2401.05840v1 (2024). Available at: http://arxiv.org/abs/2401.05840v1
- [5] Ken Gu, Ruoxi Shang, Tim Althoff, Chenglong Wang, Steven M. Drucker. How Do Analysts Understand and Verify AI-Assisted Data Analyses?. arXiv:2309.10947v2 (2023). Available at: http://arxiv.org/abs/2309.10947v2
- [6] César França. AI empowering research: 10 ways how science can benefit from AI. arXiv:2307.10265v1 (2023). Available at: http://arxiv.org/abs/2307.10265v1
- [7] Mark Chignell, Mu-Huan Miles Chung, Jaturong Kongmanee, Khilan Jerath, Abhay Raman. The Model Mastery Lifecycle: A Framework for Designing Human-AI Interaction. arXiv:2408.12781v1 (2024). Available at: http://arxiv.org/abs/2408.12781v1
- [8] Agnia Sergeyuk, Yaroslav Golubev, Timofey Bryksin, Iftekhar Ahmed. Using AI-Based Coding Assistants in Practice: State of Affairs, Perceptions, and Ways Forward. arXiv:2406.07765v2 (2024). Available at: http://arxiv.org/abs/2406.07765v2
- [9] Juhi Rajhans. Dynamical Symmetry of the Zwanziger problem in Non-commutative Quantum Mechanics. arXiv:1412.1149v2 (2014). Available at: http://arxiv.org/abs/1412.1149v2
- [10] Siddharth Swaroop, Zana Buçinca, Krzysztof Z. Gajos, Finale Doshi-Velez. Accuracy-Time Tradeoffs in AI-Assisted Decision Making under Time Pressure. arXiv:2306.07458v3 (2023). Available at: http://arxiv.org/abs/2306.07458v3
- [11] Simon Torka, Sahin Albayrak. Optimizing AI-Assisted Code Generation. arXiv:2412.10953v1 (2024). Available at: http://arxiv.org/abs/2412.10953v1
- [12] Mamdouh Alenezi, Mohammad Zarour, Mohammad Akour. Can Artificial Intelligence Transform DevOps?. arXiv:2206.00225v1 (2022). Available at: http://arxiv.org/abs/2206.00225v1
- [13] Zhuoyan Li, Ming Yin. Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary. arXiv:2411.10461v1 (2024). Available at: http://arxiv.org/abs/2411.10461v1
- [14] Soheila Sadeghi. Employee Well-being in the Age of AI: Perceptions, Concerns, Behaviors, and Outcomes. arXiv:2412.04796v1 (2024). Available at: http://arxiv.org/abs/2412.04796v1
- [15] Chun Fu, Clayton Miller. Using Google Trends as a proxy for occupant behavior to predict building energy consumption. arXiv:2111.00426v1 (2021). Available at: http://arxiv.org/abs/2111.00426v1
- [16] Liuqing Chen, Yaxuan Song, Jia Guo, Lingyun Sun, Peter Childs, Yuan Yin. How Generative AI supports human in conceptual design. arXiv:2502.00283v1 (2025). Available at: http://arxiv.org/abs/2502.00283v1
- [17] Ángel Alexander Cabrera, Adam Perer, Jason I. Hong. Improving Human-AI Collaboration With Descriptions of AI Behavior. arXiv:2301.06937v1 (2023). Available at: http://arxiv.org/abs/2301.06937v1
- [18] Yili Hong, Jiayi Lian, Li Xu, Jie Min, Yueyao Wang, Laura J. Freeman, Xinwei Deng. Statistical Perspectives on Reliability of Artificial Intelligence Systems. arXiv:2111.05391v1 (2021). Available at: http://arxiv.org/abs/2111.05391v1
- [19] Katelyn Morrison, Philipp Spitzer, Violet Turri, Michelle Feng, Niklas Kühl, Adam Perer. The Impact of Imperfect XAI on Human-AI Decision-Making. arXiv:2307.13566v4 (2023). Available at: http://arxiv.org/abs/2307.13566v4
- [20] Ajay Kulkarni. Towards Understanding the Impact of Real-Time AI-Powered Educational Dashboards (RAED) on Providing Guidance to Instructors. arXiv:2107.14414v1 (2021). Available at: http://arxiv.org/abs/2107.14414v1
Why are LLMs important and what are the changes in HCI (Human Computer Interface)?
Currently, we have at least the following models: GPT-4, Gemini, Llama 2, Claude 2, Falcon, MPT, XGen, Baidu's Ernie, Cohere, AI21 Labs Jurassic-2, DeepSeek, and Qwen. These are based on the Transformer architecture; four are open source (Llama 2, Falcon, MPT, and DeepSeek).
I have completed a master's degree in computer science and have good research experience in the human-computer interaction area. To date, I have published three articles in impact-factor journals as first author and seven IEEE conference papers. I am struggling to find a good, fully funded PhD position, and I am open to any country.
Greetings!
I would like to create a network of academic research collaborators within the fields of HCI, persuasive technology, gamification, technology-enhanced learning, social media, marketing, accounting, and networking; interdisciplinary researchers are welcome.
Hello, I am working on a doctoral dissertation about the “Lessons learned from the implementation of extended reality in education and training”. I am hoping to publish in the next 4 months - if you have any recommended articles/papers that you believe would be relevant, please let me know.
Last week, one of my manuscripts was rejected by the International Journal of Human-Computer Interaction. Now I want to resubmit it to another journal. Could anybody suggest a Q2 or Q3 journal? The title and abstract of the manuscript are given below:
Title: Dynamic User Experience for efficiency enhancement based on facial expressions
Abstract: The main goal of Human-Computer Interaction is to make humans comfortable while working with interactive computing devices, so as to increase efficiency, reduce frustration, and save time. In this paper, we first recognize the user's face and then change the UI automatically based on their facial expression. Some of our personas also proposed a similar idea of building a system that would play music based on facial expression. These scenarios gave us the idea of an integrated system for a dynamic user experience based on facial expression. We collected data through questionnaires and interviews, and built low-fidelity prototypes during the requirements-gathering phase. We also made high-fidelity prototypes in Axure RP to show stakeholders the likely output of this work. In the next phase, we applied a software engineering model and implemented our code in Visual Studio with the Live Server extension. We then used the cognitive walkthrough as our evaluation method. During the evaluation, stakeholders did not need to provide any input manually, and the system was easy to learn to use. We found that a high-speed internet connection is required, and we had to use a VPN to handle some issues. Users felt no fatigue or discomfort because the system is very easy to learn; anyone who wants to use it just needs to be in front of the camera. Overall, users were very comfortable and happy with our system.
Thanks in advance.
Our recent research shows that AR systems have an inherent conflict when interacting with virtual objects. We termed this new conflict Virtual Kinesthetic Conflict (VKC). This conflict is very similar to the inherent Vergence-Accommodation Conflict (VAC) in VR. Just like VAC, VKC cannot be avoided; we can only reduce its effects. In our recent publication, we list a few guidelines to reduce the effects of VKC. Can you think of other solutions?
I would like to capture EEG from users while they are using a sound interface, so I would like to know whether the 5-channel EMOTIV is enough or whether I should consider buying the 14-channel EMOTIV or another headset.
Thank you!
I want to conduct an experiment in the field of Human-Computer Interaction in which I test users' perception of which image is better using different camera lenses. In the experiment, I take the exact same image with different lenses. The "tournament" consists of users blindly picking the image they like until only one lens remains. I believe the methodology might have several confounders, but its design reflects a real-world setting, where many parameters are uncontrolled. Would this qualify as a scientific experiment worth writing a paper about?
One of my students is setting up an experiment to test the effect of smart cameras on bridge operators’ situation awareness. In this experiment participants will watch 50 short videos per condition (smart camera vs. normal camera). After each video participants need to answer one simple question. Furthermore, after each condition the participants are asked to answer 6 questions.
We are looking for a software package in which we can set up this experiment. This means we need a software package in which we can combine the short videos (100 in total) and the questions. This software should not only allow to display the videos and questions, but also to capture the participants’ answers. For the video part of the experiment it is preferable that the screen only exists of the video itself, so not white/black frame around the video.
What is a suitable software package which we can use to create this experiment set-up?
Is there any physiological value or index that shows stress or anxiety level? For example, in experimental research, when we want to find out the effect of computer games on stress, depression, or anxiety, how can we detect the level of these disorders using physiological indicators?
gtec brain cap + Gammabox + Nihon kodhen input box + Nihon kodhen amplifier.
If you have experience on this combination please give your comments. (Functionality, Operation, Limitations and Advantages)
I'm working in a research that targets Games and Autism. Is there a researcher interested to work in a paper together?
Is there a way to measure the cognitive affordance of an interaction design, or has anyone come across such an idea or an attempt to do so?
There are many reasons why Augmented Reality (AR) will be the future battleground. However this battleground cannot be won without solving some of the most difficult technical challenges. Among all the technical challenges, what is the most important technical challenge in Augmented Reality?
I am curious to learn what people use for response boxes. I have happily used the Ergodex DX1 for years ( http://www.ergodex.com/mainpage.htm ) which is a custom keyboard that allows us to arrange buttons on a sheet of plastic and have them register as keyboard presses. The problem is the device is abandoned and getting it to work with operating systems after XP is difficult.
I want a good physical interface that can be used for 2, 3, or 4 interval forced choice tasks. I am not particularly concerned with accurate reaction time measures. A keyboard press speed is more than sufficient.
Any suggestions?
I am in need of consultation sources on virtual reality technology applications in product design development, especially in the preliminary design stage, and more precisely in relation to mechanical parts assembly, i.e., how to make parts collide during virtual assembly.
I am trying to calibrate Samsung Galaxy S7 to get real colour from a piece of paper in VR. Any recommendation?
The Hermes Pardini Institute, a Brazilian medical center, uses VR to manage children's stress during vaccination (video: https://www.facebook.com/hermespardini/videos/1512020472162634/).
As a marketing campaign this may be fine, but what are the ethical limits when using this method for deprivation of bodily experience? Can cognitive experience be blurred with VR technology? Can immersive technology be used to trick the brain?
best regards,
@lucasparisi (tweet me)
I need a list of important features which can help in developing a more user-friendly search user interface!
Hi! I'm looking for books or journals related to the use of brain-computer interfaces in the accessibility area.
I wish to know more about the dataset. I extract features for characters and wish to see whether my data falls under the linear or non-linear category. How can I find this?
I've been doing some research with the Emotiv EPOC, but it has a lot of disadvantages; for example, it is almost impossible to evaluate female participants with long hair and small heads - the device does not get any signal from them. Furthermore, the Emotiv sometimes loses the signal from individual electrodes, and the quality of the data it provides is not great.
Do you know any other low-cost EEG devices that you would recommend for studies regarding emotions?
I wish to extract features of EEG signals.
I'm trying to get an answer for the question: What is the effect of Culture centric UI in HCI?
I would like to know: if someone designs the UI icons for a particular demographic based on their culture, can we expect any improvement in interaction performance? If yes, how much? Could you suggest some papers on this topic?
I am working on global hand gesture recognition. I would like to align/normalize the scaling, translation, and rotation effects. Can anyone suggest some techniques to do so?
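One simple family of techniques, in the spirit of the $1 Unistroke Recognizer, normalizes each of the three effects in turn: remove translation by centering on the centroid, remove scale by dividing by the RMS distance, and remove rotation by aligning an indicative angle. This is a minimal sketch of that idea, assuming the first sampled point is a usable reference landmark (Procrustes/SVD alignment is the more robust alternative):

```python
import math

def normalize_gesture(points):
    """Normalize a 2D point sequence for translation, scale, and rotation.

    Translation: move the centroid to the origin.
    Scale: divide by the RMS distance from the centroid.
    Rotation: rotate so the vector from the centroid to the first point
    (the "indicative angle") lies along the +x axis. Degenerate if the
    first point coincides with the centroid.
    """
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    centered = [(x - cx, y - cy) for x, y in points]
    rms = math.sqrt(sum(x * x + y * y for x, y in centered) / n)
    scaled = [(x / rms, y / rms) for x, y in centered]
    theta = math.atan2(scaled[0][1], scaled[0][0])
    c, s = math.cos(-theta), math.sin(-theta)
    return [(x * c - y * s, x * s + y * c) for x, y in scaled]
```

After this normalization, two gestures that differ only by a similarity transform (translation, uniform scale, rotation) map to the same point sequence, which is exactly the invariance the question asks for.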
So far I have found the equation below to measure concentration level, but the reference is not solid and I cannot rely on it. If anyone has another equation, or a solid reference that supports this one, that would be highly appreciated.
concentration level = ((SMR + Beta) / Theta)
where SMR = SensoriMotor Rhythm
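For reference, the ratio above is straightforward to compute once per-band power estimates are available from a power spectrum. The band limits below are assumptions for illustration (exact ranges vary across the literature) and are not part of the quoted formula:

```python
# Assumed band limits in Hz; exact ranges vary across the literature.
BANDS = {"theta": (4.0, 8.0), "smr": (12.0, 15.0), "beta": (15.0, 30.0)}

def band_power(freqs, psd, lo, hi):
    """Sum power-spectral-density bins whose frequency falls in [lo, hi)."""
    return sum(p for f, p in zip(freqs, psd) if lo <= f < hi)

def concentration_index(freqs, psd):
    """Compute (SMR + Beta) / Theta from a sampled power spectrum."""
    theta = band_power(freqs, psd, *BANDS["theta"])
    smr = band_power(freqs, psd, *BANDS["smr"])
    beta = band_power(freqs, psd, *BANDS["beta"])
    if theta <= 0:
        raise ValueError("no theta-band power in spectrum")
    return (smr + beta) / theta
```

In practice `freqs` and `psd` would come from a spectral estimator (e.g. Welch's method) applied to the EEG signal; the index itself remains only as trustworthy as the reference behind it.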
The question is related to my research and answers can prove helpful.
The goal of this part is just to add an enjoyable element to the work, not necessarily a scientific one. If you have any ideas, please share them. Thank you so much.
I am working on hand gesture recognition, and I wanted to ask about "skinmodel.bin": I am supposed to provide it, but I don't know exactly what it is.
I am trying to understand the differences between frameworks and tools in the field of usability engineering.
Hello,
My name is Nana Kesewaa Dankwa, I study MSc. Computer Science and Media at the Bauhaus University in Weimar, Germany.
I am currently doing my Master Thesis research in the area of gamification. I am looking to conduct an experimental study that relates to finding the effects of play on the use of a gamified system. My research relates to gamified systems that can be used in an enterprise or for business purposes such as personnel administration, time management, multitasking or generally office productivity.
For my experiment, I need two gamified systems, each of which requires:
- 5 minutes learning how to use the system
- around 30 minutes of actual usage
- in that usage time, users should get the core experience of the gamification.
I ask this question because I have been doing a thorough search for two existing gamified systems that can be used in an experimental setup, but I am having trouble finding ones suited to an experimental setting that do not cost a lot of money. I need a system where:
1. Players can create own profile or avatar.
2. Players get rewarded (for example with points, in-game items, and/or badges) for every task carried out, and these rewards are communicated to the player.
3. Players can see the goals of the tasks and can work towards them.
4. Players can monitor their progress while using the gamified system
5. Players can learn the gamified system within 5 minutes.
6. The existence of a leaderboard showing other users in the experiment (optional).
7. Tasks can be clearly outlined for the players using the gamified system
Our lab is trying to acquire a portable eye-tracker for experiments in the field, using an area-of-interest (AOI) paradigm.
Can you recommend a system (including hardware and basic software for data collection as well as data analysis) within $10,000 (preferably within $5,000, since we would also like to keep some of the budget to buy a few MATLAB licenses)? Also, what is the lowest acceptable sampling rate for an AOI study?
I know a normal EyeLink or Tobii will cost around $50,000, and some toys cost just a few hundred. However, presumably AOI studies have a lower requirement for sampling rate. If we can sacrifice sampling rate within an acceptable range, is it possible to find a system with significantly better reliability (less missing data) than the toys?
Thank you very much in advance.
I am working on language identification through i-vectors. I am concerned that code-switched speech - for example, speech that is mostly Hindi but sometimes includes English words - poses a problem for a model built for a single language, and that it decreases the model's performance.
Proponents of pattern languages claim they are a way to bridge several communities (e.g. researchers and practitioners; or users, interaction designers and software engineers) and that they are usable in different phases of the design process.
e.g. Borchers, Jan O. "Interaction design patterns: twelve theses." Workshop, The Hague. Vol. 2. 2000.
Others present a much more critical view on the practicality of pattern languages.
Dearden, Andy, and Janet Finlay. "Pattern languages in HCI: A critical review."Human–computer interaction 21.1 (2006): 49-102.
These writings remain quite abstract, however. They present arguments for and against patterns, but few facts about how patterns are actually used by practitioners outside of the patterns community.
Are you aware of any empirical (e.g. ethnographic) studies on the use of design patterns in practice?
I am particularly interested in studies within human-computer interaction; of projects that are not lead or initiated by researchers studying patterns and of studies that show how patterns are used in conjunction with other types of knowledge representations (e.g. persona’s, scenario’s).
Thanks in advance!
I want to develop a new and enhanced technique to make website-based learning more adaptive. Is there any tool for usability measurement?
I'm submitting a proposal later this summer and I'm looking for previous studies that have a strong methodology section on Virtual Reality and Human Computer Interaction that I can use for a basis for my own thesis. Thanks!
In my research, I wish to create a voice-based interaction system that can be embedded in a public kiosk so that blind users can communicate with it and start using the kiosk independently. My invention will be a software tool embedded in the kiosk that takes the user's voice as input and processes it accurately to produce the best results. Users can listen through their headphones and speak to the kiosk through a microphone; this headset is the only additional device a visually impaired user needs to carry to communicate with the kiosk.
I would like stable versions and structured tutorials with source code, so I can work on an application that renders 3D models using OpenGL.
My research area relates to how we can enhance learning by using data-mining algorithms. In order to compare our algorithms we need a real dataTel, and unfortunately we couldn't find one. All we found were datasets related to e-business (BookCrossing, movies, etc.). Can anyone help us: is there any dataTel available on the net?
I'm interested in literature/research dealing with the capabilities, advantages, and disadvantages of using graphical models for operational tasks that are traditionally executed with charts and schedules: for instance, workforce assignment via drag and drop in a graphical model instead of filling out a table.
I would be very thankful if you could share your knowledge in this area, give some literature hints, or name the right keywords to search. Thank you!
I am particularly interested in methods and tools characterized by the use of some kind of model that incorporates the relevant context of the user and the environment. In the HCI domain such methods are known as "user-centred" or "inclusive design" methods.
I know it is unusual, but I am wondering if there are any collaborative Virtual reality projects you know of in Edmonton, Canada. My main area of interest is spatial awareness and human material interaction. I appreciate your help.
I will need to display an image file and play a sound file for a short period of time in one of the driving scenarios. I will use STISIM Drive version 3 software. As I have no experience with this software, does anyone know if this is possible?
By the way, the image file needs to be displayed above a moving vehicle. I'm aware that I will need to use the programmable module for this.
Applications, in particular mobile applications, try to provide the most suitable information for users and to react accordingly to user's needs. In this sense, emotions are a relevant aspect to establish the best way to interact with a user and IoT could help in providing that assessment. In this context, what is your vision on the importance of emotional assessment for the near future in mobile applications?
I am working on adaptive visualization and want to cover as many factors as possible that influence usability, perception, and user performance in user-interaction design.
I am looking to start research on how an interface can be adapted depending on Carl Jung's defined personality types, but I couldn't find relevant material. I would appreciate any help.
Dear all,
I'm currently trying to use an accelerometer + gyroscope module (specifically the MPU6050) with an Arduino in order to track certain kinds of movement in the human body. I've used double integration to calculate the displacement of the module after each data sample, but the results are clearly wrong.
Is there any well known approach in the literature through which I can calculate displacement from the raw data given by such device (acceleration)?
Thanks!
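As background to this question: straightforward double integration of low-cost MEMS accelerometer data is known to fail because any residual bias or noise is integrated twice, so the position error grows roughly quadratically with time. The sketch below (illustrative only, not MPU6050 driver code; the bias value is an assumption) demonstrates the effect with trapezoidal double integration of a stationary sensor with a small constant bias. Approaches in the literature typically fuse the gyroscope and accelerometer (complementary or Kalman filtering) to estimate orientation, and apply corrections such as zero-velocity updates rather than trusting raw double integration.

```python
import numpy as np

# Illustration: double-integrating accelerometer data with a small constant
# bias makes the displacement error grow quadratically with time, which is
# why raw double integration of low-cost IMU data gives "clearly wrong" results.

def double_integrate(accel, dt):
    """Trapezoidal double integration: acceleration -> velocity -> position."""
    vel = np.concatenate(([0.0], np.cumsum((accel[1:] + accel[:-1]) * 0.5 * dt)))
    pos = np.concatenate(([0.0], np.cumsum((vel[1:] + vel[:-1]) * 0.5 * dt)))
    return pos

dt = 0.01                        # 100 Hz sampling (assumed)
t = np.arange(0, 10, dt)         # 10 s of data
bias = 0.05                      # m/s^2 residual bias after calibration (assumed)
accel = np.full_like(t, bias)    # the sensor is actually stationary

pos = double_integrate(accel, dt)
# error from a constant bias b after time T is ~ 0.5 * b * T^2
print(pos[-1])  # ~2.5 m of spurious displacement after only 10 s
```

With these assumed numbers, a stationary sensor appears to travel about 2.5 m in ten seconds, which matches the kind of wrong results described above.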
My focus is on workplace trust between teammates in global software projects.
I'd like to build a graphical user interface that adapts according to eye pupil diameter. The aim is to detect certain human vision problems and provide a graphical user interface suited to the severity of the user's vision problem.
Which attributes can evaluate the performance of content available from a university website from a user perspective?
Here, users can be prospective students, parents of students, or employees who want to join the organization.
As an advanced computing paradigm, pervasive computing has numerous benefits. How can pervasive computing be applied to classroom environments, especially in enhancing learning and promoting well-being of students?
Good answers will be really appreciated.
Several methodologies exist for evaluating the usability of a graphical interface, but which is the most suitable for evaluating a haptic interface?
Affective Haptics is the emerging area of research which focuses on the design of devices and systems that can elicit, enhance, or influence the emotional state of a human by means of sense of touch.
There is a "one handed operation" feature on Samsung's Galaxy Note 3, which allows users to shrink the display into a small thumb-reachable area. I'm wondering if Samsung published any research about it? Or is it borrowed from some research project?
Could someone point me towards some well documented evaluation methods/criteria for assessing the UX/UI of a newly developed website?
RGB-D sensors (depth maps using structured-light projection) have become quite popular and a big advance for segmentation and recognition in computer vision, but they typically have limited range (e.g. 1 to 5 meters) and must project light onto the scene (active sensing). The advantage is robust depth mapping combined with visible color imaging at relatively low cost (PrimeSense, ASUS Xtion, and others supported by OpenNI). So this raises the question: why continue to research traditional binocular vision if this method works much better? Perhaps simply to understand human vision, for longer-range 3D sensing, or to combine passive with active sensing.
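It is worth noting that both passive binocular stereo and structured-light RGB-D sensors recover depth from the same triangulation relation, Z = f·B/d (focal length times baseline over disparity), so both share the property that depth error grows roughly quadratically with range. The sketch below illustrates this with assumed, Kinect-like focal length and baseline values (not official specifications); it shows how quickly disparity shrinks with distance, which is one reason longer-range 3D sensing remains an open research problem for active and passive approaches alike.

```python
# Pinhole-stereo triangulation: Z = f * B / d. One pixel of disparity error
# corresponds to a depth error of roughly Z^2 / (f * B), so depth precision
# degrades quadratically with range for both passive stereo and
# structured-light sensors.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth (m) from focal length (px), baseline (m), disparity (px)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

f, B = 600.0, 0.075            # assumed Kinect-like focal length and baseline
for d in (45.0, 9.0, 4.5):     # shrinking disparity = growing range
    print(round(depth_from_disparity(f, B, d), 2))  # 1.0, 5.0, 10.0 m
```

A wider baseline B extends usable range but worsens occlusion and matching, which is exactly the kind of trade-off that keeps binocular-stereo research active.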
I am interested in developing user interfaces to help people with intellectual disabilities. I would like to use kinect, but I have no experience in its development. I would like to contact someone who works in the development of applications with this device.
Anthropometric information is quite important when designing interactions. So far I have not found any reliable open-source online resources of anthropometric tables. NASA (http://msis.jsc.nasa.gov/sections/section03.htm) provides quite a lot of tables, but they are very hard to extract information from due to the format. Ideally, a machine-readable or easy-to-export format would be a very handy tool for many HCI researchers like myself.
If you have something to share please do :-)
I am looking for novel and futuristic applications that could plausibly form the basis of research work.
There are many ways to measure the user experience and/or usability of a software product: expert reviews, questionnaires, and so on. But which is the best way to measure?
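Among questionnaires, one widely used standardized instrument is the System Usability Scale (SUS; Brooke, 1996): ten 1-5 Likert items whose scoring procedure is fixed, which makes results comparable across products and studies. As a concrete illustration, the scoring works like this:

```python
# Scoring the System Usability Scale (SUS). Responses are 1-5 on ten items;
# odd-numbered items contribute (response - 1), even-numbered items
# (5 - response), and the sum is multiplied by 2.5 to yield a 0-100 score.

def sus_score(responses):
    """Compute the SUS score from ten 1-5 Likert responses."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses in the range 1-5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)   # items 1, 3, 5, ... are positively worded
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 4, 2, 5, 1]))  # 85.0
```

A SUS score is not a percentage; it is conventionally interpreted against published norms (scores around 68 are often cited as average). Which method is "best" still depends on the goal: questionnaires like SUS quantify perceived usability cheaply, while expert reviews and user testing uncover concrete problems.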
I am thinking in training or at higher education level