Brian P. DeJong

Central Michigan University, Mount Pleasant, Michigan, United States

Publications (9) · 1.24 Total impact

  • Kumar Yelamarthi · Brian P. DeJong · Kevin Laubhan
    ABSTRACT: This paper presents a Microsoft Kinect based vibrotactile feedback system to aid in navigation for the visually impaired. The lightweight wearable system interprets the visual scene and presents obstacle distance and characteristic information to the user. The scene is converted into a distance map using the Kinect, then processed and interpreted using an Intel Next Unit of Computing (NUC). That information is then converted via a microcontroller into vibrotactile feedback, presented to the user through two four-by-four vibration motor arrays woven into gloves. The system is shown to successfully identify, track, and present closest objects, closest humans, multiple humans, and perform distance measurements.
    IEEE 57th International Midwest Symposium on Circuits and Systems, College Station, Texas; 08/2014
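    A minimal sketch of the depth-to-vibration mapping described in the abstract above, assuming a four-by-four tactor array per glove and a depth frame already read from the Kinect; the array size, distance range, and function names are illustrative assumptions, not the authors' implementation (Python):
      import numpy as np

      def depth_to_tactor_grid(depth_mm, rows=4, cols=4,
                               near_mm=500.0, far_mm=4000.0):
          """Collapse a Kinect-style depth image (mm) into a rows x cols grid
          of vibration intensities in [0, 1]: closer obstacles vibrate harder."""
          h, w = depth_mm.shape
          grid = np.zeros((rows, cols))
          for r in range(rows):
              for c in range(cols):
                  cell = depth_mm[r * h // rows:(r + 1) * h // rows,
                                  c * w // cols:(c + 1) * w // cols]
                  valid = cell[cell > 0]          # 0 means no depth reading
                  if valid.size == 0:
                      continue
                  d = float(np.clip(valid.min(), near_mm, far_mm))
                  grid[r, c] = 1.0 - (d - near_mm) / (far_mm - near_mm)
          return grid

      # Example: a synthetic 480 x 640 depth frame with an obstacle on the left
      frame = np.full((480, 640), 3500.0)
      frame[:, :160] = 800.0
      duties = depth_to_tactor_grid(frame)   # e.g. sent as PWM duty cycles (assumption)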
  • Brian P. DeJong · J. Edward Colgate · Michael A. Peshkin
    ABSTRACT: This paper presents the design and simulation of a cyclic robot for lower-limb exercise. The robot is designed specifically for cyclic motions and the high-power nature of lower-limb interaction; as such, it breaks from traditional robotics wisdom by intentionally traveling through singularities and incorporating large inertia. Such attributes lead to explicit design considerations. Results from a simulation show that the specific design requires only a reasonably sized damper and motor.
    Journal of Medical Devices 09/2011; 5(3):031006. DOI:10.1115/1.4004648 · 0.62 Impact Factor
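    As a rough illustration of the kind of sizing simulation the abstract above refers to, the sketch below integrates a one-degree-of-freedom crank model with lumped inertia, a velocity damper, and a saturated motor; the parameter values, controller, and user load profile are placeholders, not the paper's (Python):
      import math

      # 1-DOF crank model:  J * theta_dd = tau_motor - b * theta_d - tau_user(theta)
      J = 2.0                     # lumped inertia at the crank (kg m^2), placeholder
      b = 1.5                     # damper coefficient (N m s/rad), placeholder
      TAU_MAX = 20.0              # motor torque limit (N m), placeholder
      OMEGA_REF = 2.0 * math.pi   # target cadence: one revolution per second

      def tau_user(theta):
          # Placeholder cyclic load from the user's legs, peaking twice per cycle
          return 15.0 * max(0.0, math.sin(2.0 * theta))

      def simulate(t_end=10.0, dt=1e-3):
          theta, omega, t = 0.0, 0.0, 0.0
          peak_damper = 0.0
          while t < t_end:
              # Saturated proportional speed controller holding the cadence
              tau_m = max(-TAU_MAX, min(TAU_MAX, 5.0 * (OMEGA_REF - omega)))
              tau_d = b * omega                    # torque absorbed by the damper
              alpha = (tau_m - tau_d - tau_user(theta)) / J
              omega += alpha * dt                  # explicit Euler integration
              theta += omega * dt
              peak_damper = max(peak_damper, abs(tau_d))
              t += dt
          return peak_damper

      print("peak damper torque (N m):", round(simulate(), 2))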
  • B. P. DeJong · J. E. Colgate · M. A. Peshkin
    ABSTRACT: Human-robot interfaces can be challenging and tiresome because of misalignments in the control and view relationships. The human user must mentally transform (e.g., rotate or translate) desired robot actions to required inputs at the interface. These mental transformations can increase task difficulty and decrease task performance. This chapter discusses how to improve task performance by decreasing the mental transformations in a human-robot interface. It presents a mathematical framework, reviews relevant background, analyzes both single and multiple camera-display interfaces, and presents the implementation of a mentally efficient interface. Keywords: mental transformation, control rotation, control translation, view rotation, teleoperation
    02/2011: pages 35-51;
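    A small sketch of the frame bookkeeping such a framework involves, assuming the mismatch between the display (view) frame and the master (control) frame is captured by a single rotation; the 90-degree misalignment and frame names below are hypothetical examples, not taken from the chapter (Python):
      import numpy as np

      def rot_z(deg):
          """Rotation about the vertical axis by `deg` degrees."""
          a = np.radians(deg)
          return np.array([[np.cos(a), -np.sin(a), 0.0],
                           [np.sin(a),  np.cos(a), 0.0],
                           [0.0,        0.0,       1.0]])

      # Hypothetical misalignment: the camera shows the slave workspace rotated
      # 90 degrees about vertical relative to the master's control frame.
      R_view_to_control = rot_z(90.0)

      # Motion the operator wants to see on screen: the slave image moves "right".
      desired_on_screen = np.array([1.0, 0.0, 0.0])

      # Input the operator must actually make at the master.
      required_at_master = R_view_to_control @ desired_on_screen

      # If R is the identity, no mental rotation is needed; otherwise the rotation
      # angle is one measure of how taxing the interface is to use.
      angle = np.degrees(np.arccos((np.trace(R_view_to_control) - 1.0) / 2.0))
      print(required_at_master, angle)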
  • Brian P. DeJong · J. Edward Colgate · Michael A. Peshkin
    ABSTRACT: This paper presents the design and simulation of a novel lower-limb exercise robot designed specifically for cyclic motions and the high power nature of lower-limb interaction. In doing so, it breaks from traditional robotics wisdom by intentionally traveling through singularities and incorporating large inertia. Such attributes help define the understudied class of lower-limb exercise robots, and lead to some explicit design considerations. Results from a simulation show that the specific design requires only a reasonably sized damper and motor.
    ASME 2009 International Mechanical Engineering Congress and Exposition; 01/2009
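    To illustrate what intentionally traveling through singularities means for a linkage, the sketch below checks the Jacobian determinant of a generic planar two-link mechanism along a cyclic joint trajectory; the link lengths and trajectory are hypothetical and are not the robot described in the paper (Python):
      import numpy as np

      L1, L2 = 0.4, 0.4            # hypothetical link lengths (m)

      def jacobian_det(q2):
          """Determinant of the planar two-link Jacobian; it vanishes when the
          linkage is fully extended or folded (q2 = 0 or pi), i.e. at a singularity."""
          return L1 * L2 * np.sin(q2)

      # Sweep one cycle of a joint trajectory that deliberately reaches q2 = 0 and
      # q2 = pi, and flag where |det J| is small (the endpoint loses mobility in one
      # direction and joint speeds blow up for motions toward it).
      t = np.linspace(0.0, 2.0 * np.pi, 1000)
      q2 = np.pi * (0.5 + 0.5 * np.cos(t))
      detJ = jacobian_det(q2)
      near_singular = np.abs(detJ) < 0.01
      print("fraction of the cycle near a singularity:", near_singular.mean())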
  • ABSTRACT: Purpose – Sets out to discuss lessons learned from the creation and use of an over-the-internet teleoperation testbed. Design/methodology/approach – Seven lessons learned from the testbed are presented. Findings – This teleoperation interface improves task performance, as proved by a single demonstration. Originality/value – In helping to overcome time-delay difficulties in the operation, leading to dramatically improved task performance, this study contributes significantly to the improvement of teleoperation by making better use of human skills.
    Industrial Robot 04/2006; 33(3):187-193. DOI:10.1108/01439910610659097 · 0.62 Impact Factor
  • B.P. DeJong · J.E. Colgate · M.A. Peshkin
    ABSTRACT: We consider teleoperation in which a slave manipulator, seen in one or more video images, is controlled by moving a master manipulandum. The operator must mentally transform (i.e., rotate, translate, scale, and/or deform) the desired motion of the slave image to determine the required motion at the master. Our goal is to make these mental transformations less taxing in order to decrease operator training time, improve task time/performance, and expand the pool of candidate operators. In this paper, we introduce a framework for describing the transformations required to use a particular teleoperation setup. We analyze in detail the mental transformations required in an interface consisting of one camera and display. We then expand our discussion to setups with multiple cameras/displays and discuss the results from an initial experiment.
    2004 IEEE International Conference on Robotics and Automation (ICRA '04); 01/2004
  • Brian DeJong
  • Brian P. DeJong
    ABSTRACT: This paper presents the use of auditory occupancy grids (AOGs) for mapping of a mobile robot's acoustic environment. An AOG is a probabilistic map of sound source locations built from multiple measurements using techniques from both probabilistic robotics and sound localization. The mapping is simulated, tested for robustness, and then successfully implemented on a three-microphone mobile robot with four sound sources. Using the robot's inherent advantage of mobility, the AOG correctly locates the sound sources from only nine measurements. The resulting map is then used to intelligently position the robot within the environment and to maintain auditory contact with a moving target.
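    A compact sketch of the occupancy-grid bookkeeping behind an AOG, assuming each microphone-array measurement yields a bearing toward a sound source and that cells along the bearing have their log-odds of containing a source raised; the grid size, inverse sensor model, and robot poses are illustrative assumptions, not those in the paper (Python):
      import numpy as np

      GRID = 50                    # 50 x 50 cells covering a 10 m x 10 m area
      CELL = 0.2                   # metres per cell
      log_odds = np.zeros((GRID, GRID))
      L_HIT = 0.4                  # illustrative log-odds increment along a bearing
      L_DECAY = -0.05              # crude global decay so stale evidence fades

      def integrate_bearing(robot_xy, bearing_rad, max_range=8.0):
          """Raise the log-odds of 'contains a sound source' for every cell along
          the measured bearing, then let the whole grid decay slightly."""
          for r in np.arange(0.0, max_range, CELL / 2.0):
              x = robot_xy[0] + r * np.cos(bearing_rad)
              y = robot_xy[1] + r * np.sin(bearing_rad)
              i, j = int(x / CELL), int(y / CELL)
              if 0 <= i < GRID and 0 <= j < GRID:
                  log_odds[i, j] += L_HIT
          log_odds[:] += L_DECAY

      # Bearings taken from two robot poses; cells where the rays cross accumulate
      # the most evidence, which is how mobility disambiguates range.
      integrate_bearing((1.0, 1.0), np.radians(45.0))
      integrate_bearing((1.0, 5.0), np.radians(-10.0))
      prob = 1.0 / (1.0 + np.exp(-log_odds))        # log-odds -> probability
      print("most likely source cell:", np.unravel_index(prob.argmax(), prob.shape))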
  • ABSTRACT: Future space explorations necessitate manipulation of space structures in support of extravehicular activities or extraterrestrial resource exploitation. In these tasks, robots are expected to assist or replace human crew to alleviate human risk and enhance task performance. However, due to the vastly unstructured and unpredictable environmental conditions, automation of robotic tasks is virtually impossible, and thus teleoperation is expected to be employed. Teleoperation, however, is extremely slow and inefficient. To improve the task efficiency of teleoperation, this work introduces semi-autonomous telerobotic operation technology. Key technological innovations include implementation of a reactive-agent-based robotic architecture and an enhanced operator interface that renders virtual fixtures.
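    As a rough sketch of what rendering a virtual fixture can mean for the operator, the code below passes the commanded velocity component along a preferred guidance direction through unchanged and attenuates the off-axis component, which is one common way guidance fixtures are implemented; the fixture direction and gain are hypothetical, not this work's (Python):
      import numpy as np

      def apply_virtual_fixture(v_cmd, fixture_dir, off_axis_gain=0.2):
          """Keep motion along the fixture direction and attenuate motion
          perpendicular to it, guiding the operator along a preferred path
          without removing control entirely."""
          d = np.asarray(fixture_dir, dtype=float)
          d /= np.linalg.norm(d)
          v = np.asarray(v_cmd, dtype=float)
          v_along = np.dot(v, d) * d      # component along the fixture
          v_off = v - v_along             # component fighting the fixture
          return v_along + off_axis_gain * v_off

      # Operator pushes diagonally; the fixture guides motion along +x.
      print(apply_virtual_fixture([0.3, 0.3, 0.0], [1.0, 0.0, 0.0]))
      # -> approximately [0.3, 0.06, 0.0]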