Daniel Rakita, PhD
Yale University | YU · Department of Computer Science
About
Publications: 31
Reads: 7,665
Citations: 723
Publications (31)
Robots designed to interact with people in collaborative or social scenarios must move in ways that are consistent with the robot's task and communication goals. However, combining these goals in a naïve manner can result in mutually exclusive solutions, or infeasible or problematic states and actions. In this paper, we present Lively, a framework...
Generating feasible robot motions in real-time requires achieving multiple tasks (i.e., kinematic requirements) simultaneously. These tasks can have a specific goal, a range of equally valid goals, or a range of acceptable goals with a preference toward a specific goal. To satisfy multiple and potentially competing tasks simultaneously, it is impor...
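A minimal sketch of how the three goal types described in this abstract (an exact goal, a range of equally valid goals, and a range with a preferred value) could be expressed as per-task loss terms; the loss shapes, weights, and the toy two-variable problem below are illustrative assumptions, not the formulation from the paper.

```python
# Illustrative sketch: three loss shapes for kinematic "tasks" -- an exact
# goal, a range of equally valid goals, and a range with a preferred value.
# Not the paper's formulation; weights and loss shapes are assumed.
import numpy as np
from scipy.optimize import minimize

def exact_goal_loss(value, goal):
    """Quadratic penalty: only one value satisfies the task."""
    return (value - goal) ** 2

def range_goal_loss(value, lo, hi):
    """Zero anywhere inside [lo, hi]; quadratic penalty outside."""
    return max(lo - value, 0.0) ** 2 + max(value - hi, 0.0) ** 2

def range_with_preference_loss(value, lo, hi, preferred, pref_weight=0.05):
    """Range penalty plus a weak pull toward the preferred value."""
    return range_goal_loss(value, lo, hi) + pref_weight * (value - preferred) ** 2

def total_loss(x):
    # Toy 2-DOF "configuration": x[0] must hit 0.3 exactly,
    # x[1] may lie anywhere in [-0.5, 0.5] but prefers 0.0.
    return (exact_goal_loss(x[0], 0.3)
            + range_with_preference_loss(x[1], -0.5, 0.5, 0.0))

result = minimize(total_loss, x0=np.zeros(2), method="BFGS")
print("solution:", result.x)  # approximately [0.3, 0.0]
```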
Recently, there has been a wealth of development in motion planning for robotic manipulation; new motion planners are continuously proposed, each with their own unique strengths and weaknesses. However, evaluating new planners is challenging and researchers often create their own ad-hoc problems for benchmarking, which is time-consuming, prone to bi...
In this work, we present a novel sampling-based path planning method, called SPRINT. The method finds solutions for high dimensional path planning problems quickly and robustly. Its efficiency comes from minimizing the number of collision check samples. This reduction in sampling relies on heuristics that predict the likelihood that samples will be...
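A minimal sketch of the general idea of reducing collision-check samples by gating the expensive check behind a cheap predictor; the specific heuristic used here (proximity to previously observed colliding samples) is an assumption for illustration, not SPRINT's predictor.

```python
# Illustrative sketch: gate expensive collision checks behind a cheap
# prediction. The heuristic below (distance to the nearest previously
# observed colliding sample) is assumed for illustration only.
import numpy as np

class LazyCollisionGate:
    def __init__(self, collision_fn, radius=0.2):
        self.collision_fn = collision_fn  # expensive ground-truth check
        self.known_colliding = []         # samples already found in collision
        self.radius = radius              # "probably colliding" neighborhood
        self.expensive_calls = 0

    def probably_colliding(self, q):
        """Cheap prediction: near a known colliding sample => likely colliding."""
        return any(np.linalg.norm(q - c) < self.radius for c in self.known_colliding)

    def check(self, q):
        """Skip the expensive check when the heuristic already predicts collision."""
        if self.probably_colliding(q):
            return True
        self.expensive_calls += 1
        colliding = self.collision_fn(q)
        if colliding:
            self.known_colliding.append(q)
        return colliding

# Toy 2-D world: a disk obstacle centered at the origin.
gate = LazyCollisionGate(lambda q: np.linalg.norm(q) < 0.5)
samples = np.random.default_rng(0).uniform(-1, 1, size=(200, 2))
hits = sum(gate.check(q) for q in samples)
print(f"{hits} collisions flagged, {gate.expensive_calls} expensive checks")
```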
In this paper, we present a meta-algorithm intended to accelerate many existing path optimization algorithms. The central idea of our work is to strategically break up a waypoint path into consecutive groupings called "pods," then optimize over various pods concurrently using parallel processing. Each pod is assigned a color, either blue or red, an...
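A minimal sketch of the pod decomposition described above: the waypoint path is split into consecutive groups, the groups are colored alternately, and all pods of one color are processed concurrently. The alternating coloring rule and the fixed-endpoint smoothing step are assumptions for illustration; keeping pod endpoints fixed is what lets same-colored pods be handled independently in this sketch.

```python
# Illustrative sketch of the "pods" idea: split a waypoint path into
# consecutive groups, color them alternately, and optimize all pods of
# one color at a time (same-colored pods share no waypoints here, so they
# can be dispatched to parallel workers). Coloring rule and the simple
# smoothing "optimizer" are assumptions for illustration.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def split_into_pods(path, pod_size=4):
    """Consecutive groupings (sharing endpoints) with alternating colors."""
    pods = []
    for i, start in enumerate(range(0, len(path) - 1, pod_size)):
        color = "blue" if i % 2 == 0 else "red"
        pods.append((color, slice(start, min(start + pod_size + 1, len(path)))))
    return pods

def smooth_pod(segment):
    """Toy 'optimization': average interior waypoints with their neighbors,
    keeping the pod's endpoints fixed so neighboring pods stay consistent."""
    seg = segment.copy()
    seg[1:-1] = (segment[:-2] + 2 * segment[1:-1] + segment[2:]) / 4.0
    return seg

path = np.random.default_rng(1).uniform(-1, 1, size=(17, 3))  # 17 waypoints in R^3
pods = split_into_pods(path)

with ThreadPoolExecutor() as pool:
    for color in ("blue", "red"):
        same_color = [(c, s) for c, s in pods if c == color]
        results = pool.map(smooth_pod, (path[s] for _, s in same_color))
        for (_, s), smoothed in zip(same_color, results):
            path[s] = smoothed
print("smoothed path shape:", path.shape)
```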
In this work, we present a novel sampling-based path planning method, called SPRINT. The method finds solutions for high dimensional path planning problems quickly and robustly. Its efficiency comes from minimizing the number of collision check samples. This reduction in sampling relies on heuristics that predict the likelihood that samples will...
In this work, we present a per-instant pose optimization method that can generate configurations that achieve specified pose or motion objectives as best as possible over a sequence of solutions, while also simultaneously avoiding collisions with static or dynamic obstacles in the environment. We cast our method as a multi-objective, non-linear con...
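A minimal sketch of a per-instant, weighted multi-objective formulation like the one described above, solved once per time step; the toy two-link arm, weights, and penalty shapes are assumptions, not the paper's objective terms.

```python
# Illustrative sketch: at each instant, trade off end-effector accuracy,
# closeness to the previous solution (smoothness), and clearance from an
# obstacle. Arm model, weights, and penalty shapes are assumed.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 1.0  # link lengths of a planar 2-link arm

def fk(q):
    """Forward kinematics: end-effector position of the 2-link arm."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def objective(q, goal, q_prev, obstacle, w=(10.0, 1.0, 5.0)):
    match_term = np.sum((fk(q) - goal) ** 2)         # hit the pose goal
    smooth_term = np.sum((q - q_prev) ** 2)          # stay near last solution
    clearance = np.linalg.norm(fk(q) - obstacle)
    collision_term = max(0.3 - clearance, 0.0) ** 2  # penalize getting close
    return w[0] * match_term + w[1] * smooth_term + w[2] * collision_term

q_prev = np.array([0.5, 0.5])
obstacle = np.array([1.2, 0.8])
for t, goal in enumerate([np.array([1.5, 0.5]), np.array([1.4, 0.7])]):
    res = minimize(objective, q_prev, args=(goal, q_prev, obstacle))
    q_prev = res.x
    print(f"t={t}: q={np.round(q_prev, 3)}, ee={np.round(fk(q_prev), 3)}")
```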
Recently, there has been a wealth of development in motion planning for robotic manipulation—new motion planners are continuously proposed, each with their own unique strengths and weaknesses. However, evaluating new planners is challenging and researchers often create their own ad-hoc problems for benchmarking, which is time-consuming, prone to bi...
We present a real-time motion-synthesis method for robot manipulators, called RelaxedIK, that is able to not only accurately match end-effector pose goals as done by traditional IK solvers, but also create smooth, feasible motions that avoid joint-space discontinuities, self-collisions, and kinematic singularities. To achieve these objectives on-th...
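A minimal sketch of one objective ingredient mentioned above, penalizing proximity to kinematic singularities, using the Yoshikawa manipulability of a toy two-link arm; this specific measure and arm model are assumptions for illustration and are not RelaxedIK's actual objective terms.

```python
# Illustrative sketch: a singularity-avoidance penalty based on Yoshikawa
# manipulability. The 2-link arm and the penalty shape are assumptions.
import numpy as np

L1, L2 = 1.0, 1.0  # planar 2-link arm

def jacobian(q):
    """Analytic Jacobian of the 2-link arm's end-effector position."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def singularity_penalty(q, eps=1e-6):
    """Small manipulability (near-singular Jacobian) => large penalty."""
    J = jacobian(q)
    manipulability = np.sqrt(max(np.linalg.det(J @ J.T), 0.0))
    return 1.0 / (manipulability + eps)

print(singularity_penalty(np.array([0.3, 1.2])))   # comfortable configuration
print(singularity_penalty(np.array([0.3, 0.01])))  # nearly straight arm: near-singular
```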
Human-centered environments provide affordances for and require the use of two-handed, or bimanual, manipulations. Robots designed to function in, and physically interact with, these environments have not been able to meet these requirements because standard bimanual control approaches have not accommodated the diverse, dynamic, and intricate coord...
We present an approach to synthesize robot arm trajectories that effectively communicate the robot’s intent to a human collaborator while achieving task goals. Our approach uses nonlinear constrained optimization to encode task requirements and desired motion properties. Our implementation allows for a wide range of constraints and objectives. We i...
In this paper, we present a novel shared-control telemanipulation method that is designed to incrementally improve a user's motor ability. Our method initially corrects for the user's suboptimal control trajectories, gradually giving the user more direct control over a series of training trials as he/she naturally gets more accustomed to the task....
In this paper, we present a method that improves the ability of remote users to teleoperate a manipulation robot arm by continuously providing them with an effective viewpoint using a second camera-in-hand robot arm. The user controls the manipulation robot using any teleoperation interface, and the camera-in-hand robot automatically servos to provide...
In this paper, we introduce a novel interface that allows novice users to effectively and intuitively tele-operate robot manipulators. The premise of our method is that an interface that allows its user to direct a robot arm using the natural 6-DOF space of his/her hand would afford effective direct control of the robot; however, a direct mapping b...
In this research, I report on novel methods to afford more intuitive and efficient robot teleoperation control using human motion. The overall premise of this work is that allowing users to control robots using the natural input space of their arms will lead to task performance and subjective measure benefits over more traditional interfaces. In th...
We present an approach for adding directed gaze movements to characters animated using full-body motion capture. Our approach provides a comprehensive authoring solution that automatically infers plausible directed gaze from the captured body motion, provides convenient controls for manual editing, and adds synthetic gaze movements onto the origina...
Motion-captured performances seldom include eye gaze, because capturing this motion requires eye tracking technology that is not typically part of a motion capture setup. Yet having eye gaze information is important, as it tells us what the actor was attending to during capture and it adds to the expressivity of their performance.