ABSTRACT: Creating a human-robot interface is a daunting experience. Capabilities and functionalities of the interface are dependent on the robustness of many different sensor and input modalities. For example, object recognition poses problems for state-of-the-art vision systems. Speech recognition in noisy environments remains problematic for acoustic systems. Natural language understanding and dialog are often limited to specific domains and baffled by ambiguous or novel utterances. Plans based on domain-specific tasks limit the applicability of dialog managers. The types of sensors used limit spatial knowledge and understanding and constrain cognitive issues, such as perspective-taking. In this research, we are integrating several modalities, such as vision, audition, and natural language understanding, to leverage the existing strengths of each modality and overcome individual weaknesses. We are using visual, acoustic, and linguistic inputs in various combinations to solve such problems as the disambiguation of referents (objects in the environment), localization of human speakers, and determination of the source of utterances and appropriateness of responses when humans and
Proceedings of the Second ACM SIGCHI/SIGART Conference on Human-Robot Interaction, HRI 2007, Arlington, Virginia, USA, March 10-12, 2007; 01/2007
ABSTRACT: One of the great challenges of putting humanoid robots into space is developing cognitive capabilities for the robots with an interface that allows human astronauts to collaborate with the robots as naturally and efficiently as they would with other astronauts. In this joint effort with NASA and the entire Robonaut team, we are integrating natural language and gesture understanding, spatial reasoning incorporating such features as human–robot perspective taking, and cognitive model-based understanding to achieve a high level of human–robot interaction. Building greater autonomy into the robot frees the human operator(s) from focusing strictly on the demands of operating the robot, and instead allows the possibility of actively collaborating with the robot to focus on the task at hand. By using shared representations between the human and robot, and enabling the robot to assume the perspectives of the human, the humanoid robot may become a more effective collaborator with a human astronaut for achieving mission objectives in space.
International Journal of Humanoid Robotics 06/2005; 2:181-201. DOI:10.1142/S0219843605000442
Proceedings, The Twentieth National Conference on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Conference, July 9-13, 2005, Pittsburgh, Pennsylvania, USA; 01/2005
ABSTRACT: Our multimodal interface integrates speech recognition, natural language understanding, spatial reasoning and human cognitive models for completing specific tasks and for perspective-taking in locative oriented tasks. With natural language and gestures, we believe human-robot interaction and communication is facilitated. Instead of concentrating on the various modalities of the interface, users can concentrate on the task at hand. Likewise, by incorporating human cognitive models for handling spatial information and perspective-taking, as well as for specific task completion, a better match with the expectations that humans acquire from their human-human interactions should be obtained, further facilitating cooperation and collaboration in human-robot interactions.
AIAA 1st Intelligent Systems Technical Conference; 09/2004
ABSTRACT: In 2002 and 2003, five government, scholastic, and industry research groups worked together to attempt the AAAI Robot Challenge. In short, autonomous robots are to attempt the roles of graduate students at the American Association of Artificial Intelligence (AAAI) conference. Two of the main thrusts of the Robot Challenge are for robots to demonstrate autonomy and to interact with people in natural and dynamic environments. This paper focuses on the task of the robot finding its way from the front door of the conference center to the registration desk by asking for directions. Asking for and following route directions in a partially known environment provides a rich testbed for interacting with an autonomous system. This paper describes the task and the environment, details the adjustable autonomy architecture implemented for the Robot Challenge, and compares it against other approaches. It also presents the problems encountered using this architecture, and suggests further refinements.
ABSTRACT: Grace and George have been past entrants in the AAAI Robot Challenge. This year, however, we chose to integrate our more recent work in the field of human-robot interaction, and designed a system to enter the "Open Interaction" category. Our goal was to have two robots at the AAAI National Conference, with one acting as an "information kiosk," telling about the Conference and giving directions, and the other acting as a mobile escort for people. This paper discusses the system we envisioned, as well as what we were able to achieve at the Conference.
ABSTRACT: In an attempt to solve as much of the AAAI Robot Challenge as possible, five research institutions representing academia, industry and government, integrated their research on a pair of robots named GRACE and GEORGE. This paper describes the second year effort by the GRACE team, the various techniques each participant brought to GRACE, and the integration effort itself.
AAAI Mobile Robot Competition 2003, Papers from the AAAI Workshop; 01/2003