ABSTRACT: This paper presents a Microsoft Kinect-based vibrotactile feedback system to aid navigation for the visually impaired. The lightweight wearable system interprets the visual scene and presents obstacle distance and characteristic information to the user. The scene is converted into a distance map using the Kinect, then processed and interpreted using an Intel Next Unit of Computing (NUC). That information is then converted by a microcontroller into vibrotactile feedback, presented to the user through two four-by-four vibration motor arrays woven into gloves. The system is shown to successfully identify, track, and present the closest object, the closest human, and multiple humans, and to perform distance measurements.
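The core of such a pipeline, reducing a dense depth map to a coarse grid of motor intensities, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear distance-to-intensity mapping, the 4000 mm range cutoff, and the 8-bit PWM scale are all assumptions.

```python
import numpy as np

def distance_to_vibration(depth_map, max_range_mm=4000, grid=(4, 4)):
    """Downsample a Kinect-style depth map (mm) into a coarse grid and map
    each cell's nearest obstacle distance to a vibration intensity.

    Hypothetical mapping for illustration: closer obstacles produce
    stronger vibration, scaled linearly to an 8-bit duty cycle.
    """
    h, w = depth_map.shape
    gh, gw = grid
    intensities = np.zeros(grid, dtype=np.uint8)
    for i in range(gh):
        for j in range(gw):
            cell = depth_map[i * h // gh:(i + 1) * h // gh,
                             j * w // gw:(j + 1) * w // gw]
            valid = cell[cell > 0]          # depth 0 = no reading
            if valid.size == 0:
                continue                    # leave this motor off
            nearest = valid.min()
            # Linear ramp: obstacle at 0 mm -> 255, at max range or beyond -> 0.
            intensities[i, j] = np.uint8(255 * max(0.0, 1 - nearest / max_range_mm))
    return intensities
```

Each of the sixteen output values would then drive one motor in one of the four-by-four arrays.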
ABSTRACT: This paper presents the simulation, robustness analysis, and implementation of two- and three-dimensional auditory occupancy grids (AOGs) on a mobile robot. In two dimensions, AOGs are successfully applied on a three-microphone robot in four-sound-source environments, first in simulation and then on a physical robot; the two-dimensional AOGs are also found to be robust to source positioning. In three dimensions, AOGs are successful on a simulated four-microphone robot in four-source environments and are likewise robust to source positioning. AOGs are shown to be a viable method for gaining knowledge about the acoustic environment.
ABSTRACT: This paper presents the extension of auditory occupancy grids (AOGs) to three dimensions, mapping the acoustic environment around a mobile robot. The three-dimensional AOGs locate four sound sources in three dimensions using four microphones and only nine measurements along the bottom plane of the workspace. Comparative robustness tests show that the three-dimensional AOGs are nearly as robust to source location as their two-dimensional counterparts.
ABSTRACT: This paper presents the design and simulation of a cyclic lower-limb exercise robot. The robot is designed specifically for cyclic motions and the high-power nature of lower-limb interaction; as such, it breaks from traditional robotics wisdom by intentionally traveling through singularities and incorporating large inertia. Such attributes lead to explicit design considerations. Results from a simulation show that the specific design requires only a reasonably sized damper and motor. [DOI: 10.1115/1.4004648]
No preview · Article · Sep 2011 · Journal of Medical Devices
ABSTRACT: Human-robot interfaces can be challenging and tiresome because of misalignments in the control and view relationships. The human user must mentally transform (e.g., rotate or translate) desired robot actions into required inputs at the interface. These mental transformations can increase task difficulty and decrease task performance. This chapter discusses how to improve task performance by decreasing the mental transformations in a human-robot interface. It presents a mathematical framework, reviews relevant background, analyzes both single- and multiple-camera-display interfaces, and presents the implementation of a mentally efficient interface.
Keywords: mental transformation, control rotation, control translation, view rotation, teleoperation
ABSTRACT: This paper presents the design and simulation of a novel lower-limb exercise robot designed specifically for cyclic motions and the high-power nature of lower-limb interaction. In doing so, it breaks from traditional robotics wisdom by intentionally traveling through singularities and incorporating large inertia. Such attributes help define the understudied class of lower-limb exercise robots and lead to some explicit design considerations. Results from a simulation show that the specific design requires only a reasonably sized damper and motor.
ABSTRACT: Purpose – Discusses lessons learned from the creation and use of an over-the-internet teleoperation testbed. Design/methodology/approach – Seven lessons learned from the testbed are presented. Findings – The teleoperation interface improves task performance, as shown in a demonstration. Originality/value – By helping to overcome time-delay difficulties in operation, leading to dramatically improved task performance, this study contributes significantly to the improvement of teleoperation by making better use of human skills.
ABSTRACT: Future space exploration necessitates manipulation of space structures in support of extravehicular activities or extraterrestrial resource exploitation. In these tasks, robots are expected to assist or replace human crews to reduce human risk and enhance task performance. However, due to the vastly unstructured and unpredictable environmental conditions, automation of robotic tasks is virtually impossible, and thus teleoperation is expected to be employed. Teleoperation, however, is extremely slow and inefficient. To improve its task efficiency, this work introduces semi-autonomous telerobotic operation technology. Key technological innovations include the implementation of a reactive-agent-based robotic architecture and an enhanced operator interface that renders virtual fixtures.
ABSTRACT: We consider teleoperation in which a slave manipulator, seen in one or more video images, is controlled by moving a master manipulandum. The operator must mentally transform (i.e., rotate, translate, scale, and/or deform) the desired motion of the slave image to determine the required motion at the master. Our goal is to make these mental transformations less taxing in order to decrease operator training time, improve task time/performance, and expand the pool of candidate operators. In this paper, we introduce a framework for describing the transformations required to use a particular teleoperation setup. We analyze in detail the mental transformations required in an interface consisting of one camera and display. We then expand our discussion to setups with multiple cameras/displays and discuss the results from an initial experiment.
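The idea of removing a mental rotation by compensating for it in software can be sketched for a simple planar case. This is an illustrative example, not the paper's framework: the single yaw angle and the 2D joystick input are assumptions for a camera rotated about the vertical axis relative to the robot frame.

```python
import numpy as np

def aligned_command(master_input_xy, camera_yaw_deg):
    """Map a master (joystick) input, expressed in the operator's display
    frame, to a slave command in the robot frame.

    Hypothetical planar setup: the camera views the workspace rotated by
    `camera_yaw_deg` about the vertical axis. Applying the compensating
    rotation in software spares the operator the equivalent mental rotation.
    """
    theta = np.deg2rad(camera_yaw_deg)
    # Rotation taking display-frame coordinates back to robot-frame coordinates.
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ np.asarray(master_input_xy, dtype=float)
```

With a camera rotated 90 degrees, a "push right" input on the display is converted into the robot-frame motion that appears as rightward motion on screen, so the control and view relationships stay aligned.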
ABSTRACT: This paper presents the use of auditory occupancy grids (AOGs) for mapping of a mobile robot's acoustic environment. An AOG is a probabilistic map of sound source locations built from multiple measurements using techniques from both probabilistic robotics and sound localization. The mapping is simulated, tested for robustness, and then successfully implemented on a three-microphone mobile robot with four sound sources. Using the robot's inherent advantage of mobility, the AOG correctly locates the sound sources from only nine measurements. The resulting map is then used to intelligently position the robot within the environment and to maintain auditory contact with a moving target.
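The fusion of multiple measurements into a probabilistic map typically uses the standard log-odds occupancy update from probabilistic robotics. The sketch below shows one such update for a single grid cell from one direction-of-arrival (DOA) measurement; the Gaussian bearing likelihood and its width `sigma_deg` are illustrative assumptions, not the paper's specific sensor model.

```python
import numpy as np

def update_aog(log_odds, cell_xy, robot_xy, doa_deg, sigma_deg=10.0):
    """One log-odds update of an auditory occupancy grid cell from a
    single direction-of-arrival measurement taken at robot_xy.

    Minimal sketch: cells whose bearing from the robot matches the
    measured DOA accumulate positive evidence; mismatched cells stay
    near their prior.
    """
    dx, dy = cell_xy[0] - robot_xy[0], cell_xy[1] - robot_xy[1]
    bearing = np.rad2deg(np.arctan2(dy, dx))
    err = (bearing - doa_deg + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
    # Likelihood that this cell contains a source, given the measured DOA.
    p = 0.5 + 0.45 * np.exp(-0.5 * (err / sigma_deg) ** 2)
    return log_odds + np.log(p / (1.0 - p))             # accumulate evidence
```

Repeating this update over all cells for each of the nine measurement positions, then thresholding the accumulated log-odds, yields the source-location map; the robot's mobility provides the differing vantage points that make the intersecting bearings informative.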