System Design and Implementation of UCF-MANUS—An Intelligent Assistive Robotic Manipulator

IEEE/ASME Transactions on Mechatronics (Impact Factor: 3.65). 02/2014; 19(1):225-237. DOI: 10.1109/TMECH.2012.2226597

ABSTRACT This paper reports on the system design for integrating the various processes needed for end-to-end implementation of a smart assistive robotic manipulator. Specifically, progress is reported on empowering the UCF-MANUS system with a suite of sensory, computational, and multimodal interface capabilities so that its autonomy can be made accessible to users with a wide range of disabilities. Laboratory experiments demonstrate the ability of the system prototype to successfully and efficiently complete object retrieval tasks. The impact of the various interface modalities on user performance is benchmarked via empirical studies with healthy subjects operating the robot in a simulated instrumental-activities-of-daily-living setup. Analysis of the collected quantitative data shows that the prototype is interface neutral and robust to variations in the tasks and the environment. It also shows that the prototype autonomous system is quantitatively superior to Cartesian control for all tested tasks under a “number of commands” metric; under a “time to task completion” metric, however, the system is superior for “hard” tasks but not for “easy” ones.

    ABSTRACT: We document progress in the design and implementation of a motion control strategy that exploits visual feedback from a narrow-baseline stereo head mounted in the hand of a wheelchair-mounted robot arm (WMRA) to recognize and grasp textured ADL objects for which one or more templates exist in a large image database. The problem is made challenging by kinematic uncertainty in the robot, imperfect camera and stereo calibration, and the fact that we work in unstructured environments. The approach separates the overall motion into gross and fine components. During the gross motion phase, local structure around a user-selected point of interest (POI) on an object is extracted from sparse stereo information, which is then used to converge on the object and roughly align it with the image plane so that object recognition and fine motion can be pursued with a strong likelihood of success. Fine motion is used to grasp the target object by relying on feature correspondences between the live object view and its template image. Features are detected with a robust real-time keypoint tracker, and a hybrid visual servoing technique is employed in which tracked pixel-space features generate translational motion commands while a Euclidean homography decomposition scheme generates orientation setpoints for the robot gripper. Experimental results demonstrate the efficacy of the proposed algorithm.
    2009 IEEE International Conference on Robotics and Automation, ICRA 2009, Kobe, Japan, May 12-17, 2009; 01/2009
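    The hybrid servoing scheme described above can be sketched as a single control step: a translational command driven by the pixel error of tracked features, and an orientation setpoint derived from the rotation recovered by a homography decomposition. This is an illustrative sketch, not the paper's implementation; the gains, the axis-angle error extraction, and the assumption that the decomposition has already been performed upstream are all mine.

    ```python
    import numpy as np

    def hybrid_servo_step(tracked_px, template_px, R, lam_t=0.5, lam_r=0.3):
        """One step of a hybrid visual servoing loop (illustrative sketch).

        tracked_px, template_px: (N, 2) arrays of matched pixel coordinates.
        R: 3x3 rotation matrix recovered from a Euclidean homography
           decomposition (assumed to have been computed upstream).
        lam_t, lam_r: hypothetical proportional gains.
        """
        # Translational command: drive the mean pixel error of the
        # tracked features toward zero in the image plane.
        err = np.mean(np.asarray(template_px) - np.asarray(tracked_px), axis=0)
        v = lam_t * err
        # Orientation setpoint: magnitude of the rotation error from R,
        # via the trace identity cos(theta) = (tr(R) - 1) / 2.
        angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
        w = lam_r * angle
        return v, w
    ```

    With matched features already aligned and an identity rotation, both commands go to zero, which is the converged state of the loop.
    
    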
    ABSTRACT: This paper deals with the statistical analysis and pattern classification of electromyographic signals from the biceps and triceps of a below-the-humerus amputated or paralyzed person. Such signals, collected from a simulated amputee, are synergistically generated to produce discrete lower-arm movements. The purpose of this study is to use these signals to control an electrically driven prosthetic or orthotic arm with minimal extra mental effort on the part of the subject. The results show very good separability of classes of movements when a learning pattern classification scheme is used, and a superposition principle appears to hold that may allow any composite motion to be decomposed into the six basic primitive motions, i.e., humeral rotation in and out, elbow flexion and extension, and wrist pronation and supination. Since no synergy was detected for hand movements, different inputs must be provided for a grip. The method described is not limited by the location of the electrodes; for amputees with shorter stumps, synergistic signals could be obtained from the shoulder muscles. However, the presentation in this paper is limited to biceps-triceps signal classification only.
    IEEE Transactions on Biomedical Engineering 07/1982; 29(6):403-12. DOI:10.1109/TBME.1982.324954 · 2.23 Impact Factor
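    A learning pattern classification scheme of the kind the abstract describes can be sketched minimally as a nearest-centroid classifier over EMG feature vectors. The class names and feature values below are placeholders of my own, not the paper's data or its actual classifier.

    ```python
    import numpy as np

    def train_centroids(features, labels):
        """Learn one centroid per motion class from labeled EMG feature vectors."""
        feats = np.asarray(features, dtype=float)
        labs = np.asarray(labels)
        return {c: feats[labs == c].mean(axis=0) for c in set(labels)}

    def classify(sample, centroids):
        """Assign a feature vector to the class with the nearest centroid."""
        s = np.asarray(sample, dtype=float)
        return min(centroids, key=lambda c: np.linalg.norm(s - centroids[c]))
    ```

    Good class separability, as reported in the study, is exactly the condition under which such a simple distance-based rule performs well.
    
    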
    ABSTRACT: PURPOSE: This paper reviews the history of the development and validation of the PIADS. Assistive devices (ADs) are extremely prevalent forms of health-care intervention for persons who have a disability, and there is a consensus that the AD field needs a reliable and valid measure of how users perceive the impact of ADs on their quality of life (QoL) and sense of well-being. The Psychosocial Impact of Assistive Devices Scale (PIADS) is a 26-item self-rating scale designed to fill this measurement gap. We describe the challenges encountered in attempting to adequately conceptualize QoL impact and operationalize it in a measure suitable for use with virtually all forms of AD, and summarize current efforts to extend the validation of the PIADS. CONCLUSIONS: The study concludes by suggesting directions for future research and development of the scale, including a richer examination of its conceptual relationships to other health-care and rehabilitation outcome measures and further investigation of its clinical utility. The PIADS is a reliable and valid tool that appears to have very significant power to predict AD abandonment and retention. It can and should be used both deductively and inductively to build, discover, and test theory about the psychosocial impact of assistive technology.
    Disability and Rehabilitation 01/2002; 24(1-3):31-7. DOI:10.1080/09638280110066343 · 1.84 Impact Factor
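    Scoring a self-rating scale like the PIADS reduces to averaging item ratings within subscales. The sketch below assumes items are rated on a -3 (negative impact) to +3 (positive impact) scale and averaged per subscale; the item names and item-to-subscale mapping used in the test are placeholders, not the published PIADS scoring key.

    ```python
    def score_piads(responses, subscale_items):
        """Mean item rating per subscale.

        responses: dict mapping item name -> rating (assumed in [-3, +3]).
        subscale_items: dict mapping subscale name -> list of item names.
        """
        scores = {}
        for subscale, items in subscale_items.items():
            ratings = [responses[item] for item in items]
            scores[subscale] = sum(ratings) / len(ratings)
        return scores
    ```

    A per-subscale mean keeps scores on the same -3 to +3 range regardless of how many items each subscale contains, which makes subscales directly comparable.
    
    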