Project

Multimodal Interface for disabled persons

Project log

Sharda Amardas Chhabria added 3 research items
Assistive robots can help disabled people alleviate the impact of their limitations by compensating for specific impairments. Robotic wheelchairs, in particular, can assist with maneuvering and motion planning. Eye tracking is among the most widely used and efficient ways for people with motor disabilities to control such systems, and an eye-controlled wheelchair offers an alternative for users who cannot operate a joystick. Gaze-based human-computer interaction (HCI) enables users to operate computers by eye movement alone. In a multimodal conversation, how users communicate with a system depends on the available interaction channels and the situated context (e.g., conversation focus, visual feedback), and a correct interpretation can only be attained by considering these constraints together.

This paper presents a technique in which eye movement, hand gestures, or voice commands control the movement of a wheelchair. For hand gestures, we present a scheme that reduces the size of the database of human postures used as robot commands: the picture frame is divided into scan lines, and the pixel color values along these lines are examined to estimate the user's posture, which the robot interprets as a command and acts on accordingly. For the eye interface, a new algorithm is proposed that tracks whether the eye moves to the left or to the right, and eye blinking is used to start and stop the wheelchair. Together, these form a user-friendly multimodal interface for controlling wheelchair movement by eye, hand gesture, or voice.
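The scan-line posture idea can be illustrated with a small sketch. The Python code below is only an assumed, minimal illustration and not the authors' implementation: it uses OpenCV, an assumed HSV skin-color range, five evenly spaced horizontal scan lines, and an illustrative mapping from scan-line coverage to wheelchair commands.

    import cv2
    import numpy as np

    # Hypothetical HSV skin-color range used to decide whether a pixel on a
    # scan line belongs to the hand (an assumption, not a value from the paper).
    SKIN_LOW = np.array([0, 40, 60])
    SKIN_HIGH = np.array([25, 255, 255])

    def posture_from_scan_lines(frame, n_lines=5):
        """Guess a wheelchair command from hand coverage on a few horizontal scan lines."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        skin = cv2.inRange(hsv, SKIN_LOW, SKIN_HIGH)    # 255 where a pixel falls in the skin range
        h, w = skin.shape

        coverage = []
        for i in range(1, n_lines + 1):
            y = i * h // (n_lines + 1)                  # evenly spaced scan lines
            row = skin[y, :]
            coverage.append(np.count_nonzero(row) / w)  # fraction of skin-colored pixels on this line

        # Illustrative mapping from the coverage pattern to commands.
        if max(coverage) < 0.05:
            return "stop"        # no hand visible on any scan line
        if coverage[0] > coverage[-1]:
            return "forward"     # hand mostly in the upper part of the frame
        return "backward"        # hand mostly in the lower part of the frame

    # Usage: grab one frame from the default camera and print the guessed command.
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if ok:
        print(posture_from_scan_lines(frame))

Similarly, the eye interface can be sketched under assumptions: the darkest region in a grayscale eye crop is taken as the pupil, its horizontal position decides left or right, and its absence is treated as a blink (used here to start or stop the chair). The threshold values are illustrative only.

    def gaze_direction(eye_gray):
        """Return 'left', 'right', 'centre' or 'blink' from a grayscale eye crop."""
        _, pupil = cv2.threshold(eye_gray, 50, 255, cv2.THRESH_BINARY_INV)  # dark pixels become white
        xs = np.where(pupil > 0)[1]          # column indices of candidate pupil pixels
        if xs.size == 0:
            return "blink"                   # no dark region: eye probably closed
        centre = xs.mean() / eye_gray.shape[1]
        if centre < 0.4:
            return "left"
        if centre > 0.6:
            return "right"
        return "centre"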