
Kazuto Nakashima
- PhD
- Assistant Professor at Kyushu University
About
41 Publications
1,688 Reads
195 Citations
Publications (41)
In 3D scene understanding tasks using LiDAR data, constructing training data poses a challenge due to its high annotation cost. To address this, annotation-free, simulator-based training has recently been gaining attention, although the domain gap between simulators and real environments often leads to decreased generalization performance. This paper intro...
Building LiDAR generative models holds promise as powerful data priors for restoration, scene manipulation, and scalable simulation in autonomous mobile robots. In recent years, approaches using diffusion models have emerged, significantly improving training stability and generation quality. Despite the success of diffusion models, generating high-...
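For context on how diffusion-based generation works in this setting, the sketch below applies the standard DDPM forward (noising) process to a range-image representation of a LiDAR scan. It is a minimal illustration, not the model proposed in the publication above; the image size, noise schedule, and normalization are assumptions.

```python
# Minimal sketch of the standard DDPM forward (noising) process on a LiDAR range image.
# Illustrative only; the resolution, schedule, and normalization below are assumptions,
# not details of the model described in the abstract above.
import numpy as np

H, W = 64, 1024                       # assumed spherical range-image resolution
T = 1000                              # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)    # linear noise schedule (Ho et al., 2020)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

def noise_range_image(x0: np.ndarray, t: int, rng: np.random.Generator) -> np.ndarray:
    """Sample x_t ~ q(x_t | x_0) for a range image x0 normalized to [-1, 1]."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.uniform(-1.0, 1.0, size=(H, W))  # stand-in for a normalized range image
x_t = noise_range_image(x0, t=500, rng=rng)
```

A generative model is then trained to predict the added noise and invert this process step by step; the abstract is cut off before the paper's specific design choices, so the sketch stops at the forward process.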
Recently, 3D LiDAR has emerged as a promising technique in the field of gait-based person identification, serving as an alternative to traditional RGB cameras, due to its robustness under varying lighting conditions and its ability to capture 3D geometric information. However, long capture distances or the use of low-cost LiDAR sensors often result...
Gait recognition enables the non-contact identification of individuals from a distance based on their walking patterns and body shapes. For vision-based gait recognition, covariates (e.g., clothing, baggage and background) can negatively impact identification. As a result, many existing studies extract gait features from silhouettes or skeletal inf...
Gait recognition is a biometric identification method based on individual walking patterns. This modality is applied in a wide range of applications, such as criminal investigations and identification systems, since it can be performed at a long distance and requires no cooperation from the subjects of interest. In general, cameras are used for gait recognition sys...
In this study, we develop a sensor terminal equipped with multiple types of sensors, named the sensor pod, which collects various environmental information at a construction site. The sensor pod is equipped with a 3D-LiDAR and a vibration sensor, which can be used to predict surrounding hazards and evaluate ground stiffness. In this paper, we introduc...
This paper presents a retrofit backhoe remote control system that is inexpensive, compact, and easy to install. The system consists of a remote control subsystem, which uses the teleoperation system embedded by the construction machinery manufacturer together with a small robot arm, and a remote sensing subsystem based on a multi-core microcomputer.
3D LiDAR sensors are indispensable for the robust vision of autonomous mobile robots. However, deploying LiDAR-based perception algorithms often fails due to a domain gap from the training environment, such as inconsistent angular resolution and missing properties. Existing studies have tackled the issue by learning inter-domain mapping, while the...
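To make the angular-resolution gap concrete, the hypothetical snippet below projects one point cloud onto spherical range images with two different numbers of beams. The field-of-view and resolution values are assumptions for illustration, not the sensors or the mapping method used in the publication above.

```python
# Illustrative sketch: the same point cloud projected onto spherical range images with
# different vertical resolutions, one way the angular-resolution gap appears in practice.
# Sensor field of view and resolutions are assumed values, not taken from the paper.
import numpy as np

def to_range_image(points: np.ndarray, n_rows: int, n_cols: int = 1024,
                   fov_up_deg: float = 15.0, fov_down_deg: float = -25.0) -> np.ndarray:
    """Project Nx3 points (x, y, z) into an (n_rows, n_cols) range image of distances."""
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(points[:, 1], points[:, 0])                  # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(points[:, 2] / np.maximum(r, 1e-6), -1.0, 1.0))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    keep = (pitch >= fov_down) & (pitch <= fov_up)                # drop points outside the FOV
    u = ((yaw + np.pi) / (2.0 * np.pi) * n_cols).astype(int) % n_cols
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * n_rows).astype(int), 0, n_rows - 1)
    img = np.zeros((n_rows, n_cols))
    img[v[keep], u[keep]] = r[keep]                               # keep the last return per cell
    return img

points = np.random.default_rng(0).uniform(-50.0, 50.0, size=(100_000, 3))
img_64 = to_range_image(points, n_rows=64)   # e.g. a 64-beam sensor
img_16 = to_range_image(points, n_rows=16)   # e.g. a 16-beam sensor
```

A model trained on the 64-row representation sees very different input statistics from the 16-row one, which is the kind of gap the inter-domain mapping mentioned above is meant to bridge.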
Generative modeling of 3D scenes is a crucial topic for aiding mobile robots to improve unreliable observations. However, despite the rapid progress in the natural image domain, building generative models is still challenging for 3D data, such as point clouds. Most existing studies on point clouds have focused on small and uniform-density data. In...
Automatic analysis of our daily lives and activities through a first-person lifelog camera provides us with opportunities to improve our life rhythms or to support our limited visual memories. Notably, to describe such visual experiences, the task of generating captions from first-person lifelog images has been actively studied in recent years. First...
Terrain classification is critically important for Mars rovers, which rely on it for planning and autonomous navigation. On-board terrain classification using visual information has limitations, and is sensitive to illumination conditions. Classification can be improved if one fuses visual imagery with additional infrared (IR) imagery of the scene,...
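As a generic illustration of multimodal fusion (not the fusion scheme in the publication above), the snippet below stacks RGB and IR channels into a single per-pixel feature map that a terrain classifier could consume; the image sizes and normalization are assumptions.

```python
# Generic "early fusion" sketch: stack RGB and thermal-IR imagery into one multi-channel
# input for a per-pixel terrain classifier. Illustrative only; sizes and normalization
# are assumptions, not the method proposed in the abstract above.
import numpy as np

def fuse_rgb_ir(rgb: np.ndarray, ir: np.ndarray) -> np.ndarray:
    """Combine an (H, W, 3) RGB image and an (H, W) IR image into (H, W, 4) features."""
    rgb_n = rgb.astype(np.float32) / 255.0                      # scale RGB to [0, 1]
    ir_n = (ir - ir.min()) / max(float(np.ptp(ir)), 1e-6)       # min-max normalize IR
    return np.concatenate([rgb_n, ir_n[..., None]], axis=-1)

rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in camera frame
ir = rng.normal(300.0, 10.0, size=(480, 640))                   # stand-in radiometric IR frame
features = fuse_rgb_ir(rgb, ir)                                 # (480, 640, 4) fused input
```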
This paper presents several multi-modal 3D datasets for the problem of categorization of places. In this problem, a robotic agent should decide on the type of place/environment where it is located (residential area, forest, etc.) using information gathered by its sensors. In addition to the 3D depth information, the datasets include additional moda...
Gait-based person recognition has received an increasing amount of attention for monitoring and surveillance applications. One issue in gait recognition is that it is difficult to achieve high recognition performance when the resolution of the captured images is too low. To deal with this problem, this paper presents SFDEINet, which use...
Semantic place categorization, which is one of the essential tasks for autonomous robots and vehicles, allows them to make decisions and navigate on their own in unfamiliar environments. In particular, outdoor places are more difficult targets than indoor ones due to perceptual variations, such as dynamic illuminance over 24 hours and occl...
In this paper, we present a near-future perception system named “Previewed Reality”. We integrate an immersive VR display, a stereo camera, and a dynamic simulator into our informationally structured environment (ISE) platform for service robots. In the ISE, a human wearing the immersive VR display can see the next possible events as virtual images overla...
To provide daily-life assistance appropriately with a service robot, managing information on housewares in a room or a house is an indispensable function. In particular, information about what objects are in the environment and where they are is fundamental and critical knowledge. We can track housewares with high reliability by attaching markers suc...
In this paper, we present a system that automatically registers housewares in a room to a database to maintain an informationally structured environment. We assume that housewares requested by a user are likely to appear in the user's egocentric view. The proposed system captures the egocentric view with a smart glass, detects multi-class obj...
This paper proposes a new concept of "fourth-person sensing" for service robots. The proposed concept combines wearable cameras (the first-person viewpoint), sensors mounted on robots (the second-person viewpoint) and sensors embedded in the informationally structured environment (the third-person viewpoint). Each sensor has its advantage and disad...
Service robots, which co-exist with humans to provide various services, obtain information from sensors placed in the environment and/or sensors mounted on the robots. In this paper, we propose the new concept of fourth-person sensing, which combines wearable cameras (first-person sensing), sensors mounted on robots (second-person sensing), and distribu...