Jani Even

Advanced Telecommunications Research Institute, Kyoto, Japan

Publications (50)

  • Source
    ABSTRACT: This paper presents work on mapping human use of space over long periods of time. Daily geometric maps sharing the same coordinate frame were generated with SLAM, and in a similar manner, daily affordance density maps (places people use) were generated from the output of a human tracker running on the robot. The contribution of the paper is two-fold: an approach that detects geometric changes and clusters them into similar geometric configurations, and the building of composite geometric and affordance maps for each cluster. This approach avoids losing information retrieved over the long term. Geometric similarity was computed using a normal distance approach on the maps. The analysis was performed on data collected by a mobile robot over a period of 4 months, accumulating data equivalent to 70 days. Experimental results show that the system is capable of detecting geometric changes in the environment and clustering similar geometric configurations.
    Full-text · Conference Paper · Oct 2015
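    As an illustration of the change-detection and clustering step, here is a minimal sketch assuming binary daily occupancy grids and a symmetric nearest-occupied-cell distance standing in for the paper's normal distance metric; thresholds and the greedy clustering rule are assumptions, not the paper's method:
    ```python
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def map_distance(a: np.ndarray, b: np.ndarray) -> float:
        """Mean distance (in cells) from occupied cells of `a` to the
        nearest occupied cell of `b`; `a` and `b` are boolean grids."""
        # distance_transform_edt gives the distance to the nearest zero cell,
        # so invert `b` to measure distance to its occupied cells.
        dist_to_b = distance_transform_edt(~b)
        return float(dist_to_b[a].mean())

    def cluster_days(maps, threshold=2.0):
        """Greedily assign each daily map to the first cluster whose
        representative is within `threshold`, else open a new cluster."""
        clusters = []  # list of (representative_map, [day indices])
        for day, m in enumerate(maps):
            for rep, members in clusters:
                if max(map_distance(m, rep), map_distance(rep, m)) < threshold:
                    members.append(day)
                    break
            else:
                clusters.append((m, [day]))
        return clusters
    ```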
  • Source
    ABSTRACT: This work presents a human-robot cooperative approach for infrastructure inspection. The goal is to create a robot that assists the human inspector during hammer sounding inspections. Hammer sounding is a frequently used inspection technique that detects invisible defects under the surface of concrete by striking the surface with a hammer and listening to the resulting sound. Conventional hammer sounding inspection is time-consuming, and there is no convenient way to exhaustively represent the test results. The proposed approach solves both problems with an assistant robot that follows the inspector and always keeps the hammer impact position in view. The assistant robot accurately estimates the position of the impact in real time and creates a detailed representation of the test results. Experimental results show the process for creating the detailed inspection report. The accuracy of the human-robot cooperative approach is evaluated for a real-world application. The average error of the impact point estimation was 32 millimeters, with a standard deviation of 30 millimeters.
    Full-text · Conference Paper · Oct 2015
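    The reported figures are plain per-impact error statistics; a minimal sketch of that evaluation step, with hypothetical array names and units in millimeters:
    ```python
    import numpy as np

    def impact_error_stats(estimated: np.ndarray, ground_truth: np.ndarray):
        """`estimated` and `ground_truth` are (N, 2) arrays of impact
        points in mm; returns (mean error, standard deviation)."""
        errors = np.linalg.norm(estimated - ground_truth, axis=1)
        return errors.mean(), errors.std()
    ```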
  • Source
    ABSTRACT: This work proposes a Human-Comfortable Path Planner (HCoPP) system for autonomous passenger vehicles. The aim is to create a path planner that improves the passenger's feeling of comfort, a topic distinct from collision-free planning that has received little attention. For this purpose, in addition to the shortest-distance constraint conventionally used in path planning, constraints related to relevant environmental features are introduced. For straight segments, the constraint is based on the lane-circulation pattern preferred by humans. In curved segments and intersections, the constraint takes visibility into account. A multi-layered cost map is proposed to integrate these additional constraints, and a graph search algorithm was implemented to compute the human-comfortable path. The evaluation of the proposed approach was conducted by having 30 participants ride an autonomous robotic wheelchair. The paths computed by the proposed path planner were compared against a state-of-the-art shortest-distance path planner implemented in the ROS navigation stack. Experimental results show that the paths computed by the proposed approach are perceived as more comfortable.
    Full-text · Conference Paper · May 2015
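    The multi-layered cost map and graph search can be pictured as follows. This is a hedged sketch, not the paper's implementation: layer names and weights are hypothetical, layers are combined by weighted sum, and the search is plain Dijkstra on a 4-connected grid:
    ```python
    import heapq
    import numpy as np

    def plan(layers: dict, weights: dict, start, goal):
        """Dijkstra over the weighted sum of cost layers (2D arrays of
        equal shape), e.g. hypothetical "lane" and "visibility" layers:
        plan({"lane": lane, "visibility": vis},
             {"lane": 1.0, "visibility": 2.0}, (0, 0), (9, 9))"""
        cost = sum(weights[name] * grid for name, grid in layers.items())
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[start] = 0.0
        heap = [(0.0, start)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if (r, c) == goal:
                break
            if d > dist[r, c]:
                continue  # stale heap entry
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < h and 0 <= nc < w:
                    nd = d + 1.0 + cost[nr, nc]  # unit step cost plus map cost
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        prev[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
        # Reconstruct the path from goal back to start (assumes goal reached).
        path, node = [], goal
        while node != start:
            path.append(node)
            node = prev[node]
        path.append(start)
        return path[::-1]
    ```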
  • Source
    ABSTRACT: We introduce a new optimized microphone-array processing method for a spoken-dialogue robot in noisy and reverberant environments. The method is based on frequency-domain blind signal extraction, a signal separation algorithm that exploits the sparseness of a speech signal to separate the target speech and diffuse background noise from the sound mixture captured by a microphone array. This algorithm is combined with multichannel Wiener filtering so that it can effectively suppress both background noise and reverberation, given a priori information about the room reverberation time. In this paper, we first develop an automatic optimization scheme based on the assessment of musical noise via higher-order statistics and acoustic model likelihood. Next, to maintain the optimum performance of the system, we propose a multimodal switching scheme that uses distance information provided by the robot's image sensor and an estimate of the SNR condition. Experimental evaluations confirm the efficacy of this method.
    Preview · Article · Jan 2015 · Acoustical Science and Technology
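    The multichannel Wiener filtering component admits a compact per-frequency-bin statement. The sketch below shows only that core, assuming the mixture and noise covariances R_x and R_n for a bin are already estimated; the blind signal extraction and reverberation-time handling from the paper are not reproduced:
    ```python
    import numpy as np

    def mwf_weights(R_x: np.ndarray, R_n: np.ndarray, ref: int = 0) -> np.ndarray:
        """MMSE estimate of the speech component at the reference microphone.
        R_x: (M, M) covariance of the noisy mixture, R_n: (M, M) noise
        covariance; returns the filter w = R_x^{-1} (R_x - R_n) e_ref."""
        R_s = R_x - R_n                      # speech covariance estimate
        e_ref = np.zeros(R_x.shape[0], dtype=R_x.dtype)
        e_ref[ref] = 1.0
        return np.linalg.solve(R_x, R_s @ e_ref)

    def apply_mwf(X: np.ndarray, R_x: np.ndarray, R_n: np.ndarray) -> np.ndarray:
        """X: (frames, M) STFT coefficients of one frequency bin;
        returns the filtered single-channel bin, w^H x per frame."""
        w = mwf_weights(R_x, R_n)
        return X @ w.conj()
    ```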
  • ABSTRACT: This work presents a human-robot cooperative approach for infrastructure inspection. The goal is to create a robot that assists the human inspector during hammer sounding inspections, which detect invisible defects under the surface of concrete by striking the surface with a hammer and listening to the resulting sound. Conventional hammer sounding inspection is time-consuming, and there is no convenient way to exhaustively represent the test results. In the proposed approach, an assistant robot accurately estimates the position of the impact in real time and creates a detailed representation of the test results. Experimental results show the process for creating the detailed inspection report. The accuracy of the human-robot cooperative approach is evaluated for a real-world application. The center of the error distribution of the impact point estimation was 44 mm from the ground truth, with a standard deviation of 27 mm.
    No preview · Article · Jan 2015
  • Source
    Dataset: Visibility

    Full-text · Dataset · Sep 2014
  • Source
    ABSTRACT: This paper presents a framework for making a mobile robot aware of an entity in the blind region of its laser range finders when that entity emits sound. First, in a mapping stage, a 3D description of the environment that contains information about acoustic reflection is created. Then, during operation, the robot combines estimated directions of arrival of sound with this 3D description to detect entities that are not visible to line-of-sight sensors but can be heard because of sound reflections. Using this approach, it is possible to restrict the hypotheses about the position of a sound-emitting entity in the blind region to a small set of candidate depth values.
    Full-text · Conference Paper · Sep 2014
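    As an illustration of the operation stage (not the paper's implementation): given the point where a direction-of-arrival ray meets a wall in the 3D map, one specular bounce yields candidate source positions along the reflected ray, one per trial depth:
    ```python
    import numpy as np

    def reflect(direction: np.ndarray, normal: np.ndarray) -> np.ndarray:
        """Specular reflection of a unit direction about a plane
        with unit normal."""
        return direction - 2.0 * np.dot(direction, normal) * normal

    def candidates_after_bounce(doa, hit_point, wall_normal, depths):
        """Candidate source positions behind the occlusion.
        `hit_point` is where the DOA ray meets the wall (e.g., found by
        ray casting into the 3D map); `depths` are trial distances past
        the reflection point."""
        out_dir = reflect(doa, wall_normal)
        return [hit_point + d * out_dir for d in depths]
    ```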
  • Source
    ABSTRACT: This work proposes a model for human habituation while riding a robotic wheelchair. We present and describe the concept of human navigational habituation, which we define as the human habituation to repetitively riding a robotic wheelchair. The approach models habituation in terms of preferred linear velocity based on the experience of riding a wheelchair. We argue that the preferred velocity changes as the human gets used to riding the wheelchair. Inexperienced users initially prefer to ride at a slow, moderate pace; however, the longer they ride, the more they prefer to speed up to a certain comfort level, and they come to find the initial slower velocities tediously "too slow" for their experience level. The proposed habituation model provides the passenger's preferred velocity based on experience. Human biological measurements, galvanic skin conductance, and participant feedback demonstrate the preference for habituation velocity control over fixed velocity control. To our knowledge, habituation modeling is new in the field of autonomous navigation and robotics.
    Full-text · Conference Paper · Sep 2014
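    The abstract does not state the functional form of the habituation model; purely as an illustration, a saturating-exponential preferred-velocity curve with hypothetical parameters could look like this:
    ```python
    import numpy as np

    def preferred_velocity(experience_s: float, v_initial: float = 0.6,
                           v_comfort: float = 1.2, tau: float = 600.0) -> float:
        """Preferred linear velocity (m/s) after `experience_s` seconds of
        riding: starts near `v_initial` and saturates toward `v_comfort`.
        All parameter values are hypothetical placeholders, not measured."""
        return v_comfort - (v_comfort - v_initial) * np.exp(-experience_s / tau)
    ```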
  • Source
    ABSTRACT: This paper presents a framework for creating a 3D map of an environment that contains the probability that a geometric feature emits sound. The goal is to provide an automated tool for condition monitoring of plants. The map is created by a mobile platform equipped with a microphone array and laser range sensors. The microphone array is used to estimate the sound power received from different directions, whereas the laser range sensors are used to estimate the platform's pose in the environment. During navigation, a ray casting method projects the audio measurements made on board the mobile platform onto the map of the environment. Experimental results show that the created map is an efficient tool for sound source localization.
    Full-text · Conference Paper · Sep 2014
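    A hedged sketch of the registration step: per-direction power measurements are ray cast from the robot cell, and the first occupied cell hit is updated. The log-odds update rule and thresholds are assumptions; only the cast-then-update structure follows the abstract. `cast_ray` stands in for a grid traversal helper (e.g., Bresenham) and is assumed, not defined here:
    ```python
    def update_emission_map(log_odds, occupied, robot_cell, direction_powers,
                            cast_ray, power_threshold=0.5):
        """For each steering direction, walk the ray from the robot cell and
        update the first occupied cell hit: raise its log-odds if the power
        received from that direction is high, lower it otherwise.
        `cast_ray(start, direction)` yields grid cells along the ray."""
        for direction, power in direction_powers:
            for cell in cast_ray(robot_cell, direction):
                if occupied[cell]:
                    delta = 0.4 if power > power_threshold else -0.2
                    log_odds[cell] += delta
                    break
        return log_odds
    ```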
  • ABSTRACT: We propose a method for estimating sound source positions in 3D space by integrating sound directions estimated by multiple microphone arrays and taking advantage of reflection information. Two types of sources with different directivity properties (human speech and loudspeaker speech) were evaluated at different positions and orientations. Experimental results showed the effectiveness of using reflection information, depending on the position and orientation of the sound sources relative to the array, the walls, and the source type. The use of reflection information increased the source position detection rates by 10% on average and by up to 60% in the best case.
    No preview · Article · Sep 2014 · IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences
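    Setting the reflection handling aside, integrating directions from several arrays reduces to a standard least-squares ray intersection; a sketch assuming known array positions and unit direction-of-arrival vectors:
    ```python
    import numpy as np

    def triangulate(origins, directions) -> np.ndarray:
        """Point minimizing the summed squared distance to all bearing rays.
        origins, directions: lists of 3D arrays; directions are unit vectors."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            P = np.eye(3) - np.outer(d, d)  # projector onto the ray's normal plane
            A += P
            b += P @ o
        return np.linalg.solve(A, b)
    ```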
  • Source
    ABSTRACT: This work introduces a 3D visibility model for comfortable autonomous vehicles. The model computes a visibility index based on the pose of the wheelchair within the environment. We correlate this index with human navigational comfort (discomfort) and discuss the importance of modeling visibility for improving human riding comfort. The proposed approach models the 3D visual field of view combined with a two-layered environmental representation composed of a 3D geometric map with traversable area information. The field of view is modeled from the pose of the robot and a 3D laser sensor. Human navigational discomfort was extracted from participants riding the autonomous wheelchair. Results show a fair correlation between poor-visibility locations (e.g., blind corners) and human discomfort. The approach can model places with identical traversable characteristics but different visibility, and it differentiates visibility characteristics according to traveling direction.
    Full-text · Conference Paper · May 2014
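    The paper's index definition is not given here; one plausible, purely illustrative version counts the fraction of field-of-view rays that reach traversable space before hitting an obstacle. `cast_ray` is again an assumed grid traversal helper:
    ```python
    def visibility_index(pose_cell, fov_rays, occupied, traversable, cast_ray):
        """`fov_rays`: directions inside the modeled field of view;
        `occupied` and `traversable` are grids indexed by cell;
        `cast_ray(start, direction)` yields grid cells along the ray."""
        visible = 0
        for direction in fov_rays:
            for cell in cast_ray(pose_cell, direction):
                if occupied[cell]:
                    break                 # view blocked before open space
                if traversable[cell]:
                    visible += 1          # ray reaches traversable area
                    break
        return visible / len(fov_rays)
    ```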
  • ABSTRACT: We propose a method for estimating sound source locations in 3D space by integrating sound directions estimated by multiple microphone arrays and taking advantage of reflection information. Two types of sources with different directivity properties (human speech and loudspeaker speech) were evaluated at different positions and orientations. Experimental results showed the effectiveness of using reflection information, depending on the position and orientation of the sound sources relative to the array, the walls, and the source type. The use of reflection information increased the source position detection rates by 10% on average and by up to 60% in the best case.
    No preview · Conference Paper · Nov 2013
  • Source
    ABSTRACT: This paper presents a method for mapping the radiated sound intensity of an environment using an autonomous mobile platform. The sound intensities radiated by the objects are estimated by combining the sound intensity at the platform's position (estimated with a steered response power algorithm) and the distances to the objects (estimated using laser range finders). By combining the estimated sound intensity at the platform's position with the platform's pose obtained from a particle-filter-based localization algorithm, the sound intensity radiated from the objects is registered in the cells of a grid map covering the environment. This procedure creates a map of the radiated sound intensity that contains information about sound directivity. To illustrate the effectiveness of the proposed method, a map of radiated sound intensity is created for a test environment, and the positions and directivities of the sound sources in the test environment are estimated from this map.
    Full-text · Conference Paper · Jan 2013
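    The back-projection from measured intensity at the platform to intensity radiated by the object can be sketched under a free-field inverse-square assumption; the paper additionally keeps the per-direction structure so that directivity can be read from the map:
    ```python
    def radiated_intensity(received_intensity: float, distance_m: float,
                           reference_m: float = 1.0) -> float:
        """Intensity the object would radiate at `reference_m`, given the
        intensity measured at `distance_m`, under an inverse-square
        (spherical spreading) propagation model."""
        return received_intensity * (distance_m / reference_m) ** 2
    ```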
  • Source
    ABSTRACT: This paper presents a method for detecting moving entities that are in the robot's path but not in the field of view of sensors such as laser scanners, cameras, or ultrasonic sensors. The proposed system makes use of passive acoustic localization methods, which receive information from occluded regions (at intersections or corners) because of the multipath nature of sound propagation. Unlike conventional sensors, this method does not require a line of sight. In particular, specular reflections in the environment make it possible to detect moving entities that emit sound, such as a walking person or a rolling cart. This idea was exploited for safe navigation of a mobile platform at intersections. The passive acoustic localization output is combined with a 3D geometric map of the environment that is precise enough to estimate sound propagation and reflection using ray casting methods. This gives the robot the ability to detect a moving entity outside the field of view of the sensors that require line of sight. The robot can then recalculate its path and wait until the detected entity is out of its path, so that it is safe to move to its destination. To illustrate the performance of the proposed method, a comparison of the robot's navigation with and without audio sensing is provided for several intersection scenarios.
    Full-text · Conference Paper · Jan 2013
  • Source
    ABSTRACT: This paper presents a multi-modal sensor approach for mapping sound sources using an omni-directional microphone array on an autonomous mobile robot. A fusion of audio data (from the microphone array), odometry information, and laser range scan data (from the robot) is used to precisely localize and map the audio sources in an environment. An audio map is created while the robot autonomously navigates through the environment, continuously generating audio scans with a steered response power (SRP) algorithm. Using the poses of the robot, rays are cast in the map in all directions given by the SRP. Each occupied cell in the geometric map hit by a ray is then assigned a likelihood of containing a sound source, derived from the SRP at that particular instant. Since the localization of the robot is probabilistic, the uncertainty in the pose of the robot in the geometric map is propagated to the occupied cells hit during the ray casting. This process is repeated while the robot is in motion, and the map is updated after every audio scan. The generated sound maps were reused, and the robot updated the map as it identified changes in the audio environment.
    Full-text · Conference Paper · Jan 2013
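    The audio scans rely on a steered response power computation; a minimal far-field delay-and-sum version is sketched below. The paper may use a different SRP variant (e.g., with phase-transform weighting), and the geometry handling here is illustrative:
    ```python
    import numpy as np

    def srp_scan(frames: np.ndarray, mic_positions: np.ndarray,
                 directions, fs: float, c: float = 343.0) -> np.ndarray:
        """frames: (M, N) one snapshot per microphone; mic_positions: (M, 3);
        returns one power value per candidate steering direction.
        Far-field model: the per-mic delay is the projection of its position
        onto the steering direction divided by the speed of sound."""
        M, N = frames.shape
        spectra = np.fft.rfft(frames, axis=1)          # (M, F)
        freqs = np.fft.rfftfreq(N, d=1.0 / fs)         # (F,)
        powers = []
        for d in directions:                           # d: unit 3D vector
            delays = mic_positions @ d / c             # (M,) seconds
            phases = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
            aligned = (spectra * phases).sum(axis=0)   # delay-and-sum
            powers.append(float(np.sum(np.abs(aligned) ** 2)))
        return np.array(powers)
    ```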

  • ABSTRACT: This paper presents an audio monitoring system for detecting and identifying people engaged in a conversation. The proposed method is hands-free, as it uses a microphone array to acquire the sound. A particularity of the approach is the use of a human tracker based on laser range finders. The human tracker monitors the locations of people; local steered response power is then used to detect the people speaking and precisely localize their mouths. An audio stream is then created for each person and used to perform speaker identification. Experimental results show that the use of the human tracker has several benefits compared to an audio-only approach.
    No preview · Conference Paper · Oct 2012
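    A hedged sketch of the tracker-guided stream building described above; `local_srp` stands in for the paper's local steered response power computation and is assumed, not defined here:
    ```python
    def assign_frame(frame, tracked_mouths, local_srp, streams, threshold=1e-3):
        """Route one audio frame to the loudest tracked person.
        tracked_mouths: {person_id: 3D mouth position from the tracker};
        streams: {person_id: [frames]} fed to speaker identification."""
        powers = {pid: local_srp(frame, pos)
                  for pid, pos in tracked_mouths.items()}
        pid, power = max(powers.items(), key=lambda kv: kv[1])
        if power > threshold:                  # someone is actually speaking
            streams.setdefault(pid, []).append(frame)
        return streams
    ```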
  • ABSTRACT: This paper focuses on the problem of environmental noise in human-human communication and in automatic speech recognition. To deal with this problem, the use of alternative acoustic sensors, which are attached to the talker and receive the uttered speech through skin or bone, is investigated. In the current study, throat microphones and ear bone microphones are integrated with standard microphones using several fusion methods. The results show that recognition rates in noisy environments increase drastically when these sensors are integrated with standard microphones. Moreover, the system shows no recognition degradation in clean environments; in fact, recognition rates there also increase slightly. Using late fusion to integrate a throat microphone, an ear bone microphone, and a standard microphone, we achieved a 44% relative improvement in recognition rate in a noisy environment and a 24% relative improvement in a clean environment.
    No preview · Conference Paper · Mar 2012
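    As an illustration of the late-fusion step, assuming each sensor's recognizer produces per-hypothesis log-likelihood scores (the paper's exact fusion rule is not reproduced here, and the weights are illustrative):
    ```python
    def late_fusion(hypothesis_scores: dict, weights: dict) -> str:
        """hypothesis_scores: {sensor: {hypothesis: log_likelihood}};
        returns the hypothesis with the highest weighted combined score."""
        combined = {}
        for sensor, scores in hypothesis_scores.items():
            for hyp, ll in scores.items():
                combined[hyp] = combined.get(hyp, 0.0) + weights[sensor] * ll
        return max(combined, key=combined.get)

    # Example with hypothetical sensor names and scores:
    # late_fusion({"throat": {"yes": -1.0, "no": -2.0},
    #              "standard": {"yes": -0.5, "no": -3.0}},
    #             {"throat": 0.4, "standard": 0.6})
    ```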

Publication Stats

126 Citations
34.14 Total Impact Points

Institutions

  • 2015
    • Advanced Telecommunications Research Institute
      • Intelligent Robotics and Communication Laboratories
      Kyoto, Japan
  • 2012
    • Kyoto Prefectural University of Medicine
      Kyoto, Japan
  • 2006-2010
    • Nara Institute of Science and Technology
      • Graduate School of Information Science
      Ikoma, Nara, Japan
  • 2008
    • Kyoto University
      Kyoto, Japan