Eric Rosen
Brown University · Department of Computer Science

About

33 Publications
13,464 Reads
650 Citations (since 2017)
[Citations per year chart, 2017–2023]
Introduction
I am a Ph.D. student in the Computer Science department at Brown University, advised by Stefanie Tellex. I received my B.S. in Computer Science/Applied Mathematics from Brown University in 2018. As an undergraduate, I worked with Stefanie Tellex and George Konidaris in the Humans To Robots (H2R) Laboratory and the Intelligent Robot Lab (IRL). My research spans artificial intelligence, machine learning, and reinforcement learning; robotics; and virtual, augmented, and mixed reality interfaces for human-robot interaction.

Publications (33)
Preprint
Deploying robots in real-world domains, such as households and flexible manufacturing lines, requires the robots to be taskable on demand. Linear temporal logic (LTL) is a widely-used specification language with a compositional grammar that naturally induces commonalities across tasks. However, the majority of prior research on reinforcement learni...
Preprint
Full-text available
Mixed Reality (MR) has recently shown great success as an intuitive interface for enabling end-users to teach robots. Related works have used MR interfaces to communicate robot intents and beliefs to a co-located human, as well as developed algorithms for taking multi-modal human input and learning complex motor behaviors. Even with these successes...
Article
Frameworks have begun to emerge to categorize virtual, augmented, and mixed reality (VAM) technologies that provide immersive, intuitive interfaces to facilitate human–robot interaction (HRI). These frameworks, however, fail to capture key characteristics of the growing subfield of VAM-HRI and can be difficult to consistently apply because of conti...
Preprint
[In Robotics and Automation Magazine, arXiv link: https://arxiv.org/abs/2108.03477] Frameworks have begun to emerge to categorize Virtual, Augmented, and Mixed Reality (VAM) technologies that provide immersive, intuitive interfaces to facilitate Human-Robot Interaction. These frameworks, however, fail to capture key characteristics of the growing s...
Preprint
Full-text available
Learning continuous control in high-dimensional sparse reward settings, such as robotic manipulation, is a challenging problem due to the number of samples often required to obtain accurate optimal value and policy estimates. While many deep reinforcement learning methods have aimed at improving sample efficiency through replay or improved explorat...
Conference Paper
Full-text available
Advances in the capabilities of technologies like virtual reality (VR) and their rapid proliferation at consumer price points have made it much easier to integrate them into existing robotic frameworks. VR interfaces are promising for robotics for several reasons, including that they may be suitable for resolving many of the human performance issu...
Preprint
Full-text available
Learning a robot motor skill from scratch is impractically slow, so much so that in practice, learning must be bootstrapped using a good skill policy obtained from human demonstration. However, relying on human demonstration necessarily degrades the autonomy of robots that must learn a wide variety of skills over their operational lifetimes. We pro...
Conference Paper
Full-text available
We present a decision-theoretic model and robot system that interprets multimodal human communication to disambiguate item references by asking questions via a mixed reality (MR) interface. Existing approaches have either chosen to use physical behaviors, like pointing and eye gaze, or virtual behaviors, like mixed reality. However, there is a gap...
Preprint
Full-text available
There are unwritten guidelines for how to make robot videos that researchers learn from their advisors and pass on to their students. We believe that it is important for the community to collaboratively discuss and develop a standard set of best practices when making robot videos. We suggest a starting set of maxims for best robot video practices, a...
Chapter
Full-text available
Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to other humans through gestures, gaze, and social cues. However, robots often have difficulty efficiently communicating their motion intent to humans via these methods...
Chapter
Full-text available
Teleoperation allows a human to remotely operate a robot to perform complex and potentially dangerous tasks such as defusing a bomb, repairing a nuclear reactor, or maintaining the exterior of a space station. Existing teleoperation approaches generally rely on computer monitors to display sensor data and joysticks or keyboards to actuate the robot...
Research
Full-text available
A robot can clean a room by performing a series of pick-and-place tasks on relevant items. To accomplish this sequence, the robot must know the original poses of the relevant items, how to grasp these items, and the final poses to place each item at. Language and gestures have been shown to successfully enable humans to help a robot acquire this kn...
Preprint
Full-text available
To operate robustly in human-centered environments, robots must bridge the semantic gap between a robot's internal model and objects, actions, and tasks in the real world. Semantic maps are the prototypical representation for grounded robot knowledge about objects and attributes, but do not naturally include information about actions. We present a...
Preprint
Full-text available
Mixed Reality (MR) is a promising interface for robot programming because it can project an immersive 3D visualization of a robot's intended movement onto the real world. MR can also support hand gestures, which provide an intuitive way for users to construct and modify robot motions. We present a Mixed Reality Head-Mounted Display (MR-HMD) interfa...
Article
Full-text available
Virtual reality (VR) systems let users intuitively interact with 3D environments and have been used extensively for robotic teleoperation tasks. While more immersive than their 2D counterparts, early VR systems were expensive and required specialized hardware. Fortunately, there has been a recent proliferation of consumer-grade VR systems at afford...
Conference Paper
Full-text available
Learning from demonstration (LfD) has been a widely popular methodology for teaching robots how to perform manipulation tasks because it leverages human knowledge. However, collecting high quality demonstrations that can be used for learning robot policies can be time-consuming and difficult. Recently, some researchers have begun using consumer-gra...
Conference Paper
Full-text available
Virtual reality (VR) systems let users intuitively interact with 3D environments, and have been used extensively for robotic teleoperation tasks. Successful robot teleoperation requires operators to have sufficient contextual understanding of a robot's environment. While more immersive than their 2D counterparts, early VR systems were expensive an...
Conference Paper
Full-text available
Teleoperation allows a human to remotely operate a robot to perform complex and potentially dangerous tasks such as defusing a bomb, repairing a nuclear reactor, or maintaining the exterior of a space station. Existing teleoperation approaches generally rely on computer monitors to display sensor data and joysticks or keyboards to actuate the robo...
Article
Full-text available
Efficient motion intent communication is necessary for safe and collaborative work environments with collocated humans and robots. Humans efficiently communicate their motion intent to other humans through gestures, gaze, and social cues. However, robots often have difficulty efficiently communicating their motion intent to humans via these methods...
