Hongbin Liu’s research while affiliated with King's College London and other places


Publications (5)


Iterative Closest Labeled Point for Tactile Object Shape Recognition
  • Preprint

August 2017

Wenxuan Mou · [...] · Hongbin Liu

Tactile data and kinesthetic cues are two important sensing sources in robot object recognition and are complementary to each other. In this paper, we propose a novel algorithm named Iterative Closest Labeled Point (iCLAP) to recognize objects using both tactile and kinesthetic information. iCLAP first assigns distinct label numbers to different local tactile features. The label numbers of the tactile features, together with their associated 3D positions, form a 4D point cloud of the object; in this manner, the two sensing modalities are merged into a synthesized perception of the touched object. To recognize an object, the partial 4D point cloud obtained from a number of touches is iteratively matched against all the reference cloud models to identify the best fit. An extensive evaluation with 20 real objects shows that the proposed iCLAP approach outperforms either sensing modality used alone, with a substantial recognition-rate improvement of up to 18%.
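The core idea of the abstract — appending a tactile-feature label as a fourth coordinate and running an ICP-style match in 4D — can be sketched as below. This is a minimal illustration under assumptions, not the authors' implementation: the function names (`iclap_match`, `recognize`), the integer label scheme, and the brute-force nearest-neighbour search are all illustrative, and the rigid alignment is applied to the 3D spatial part only while labels join the distance computation.

```python
import numpy as np

def iclap_match(partial, reference, iters=20):
    """Match a partial (N, 4) labeled point cloud against an (M, 4) reference.
    Each point is (x, y, z, label); labels enter the nearest-neighbour
    distance but are not transformed. Returns the mean residual (lower = better)."""
    src = partial.astype(float).copy()
    ref = reference.astype(float)
    for _ in range(iters):
        # Nearest neighbours in 4D (spatial + label) space.
        d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2)
        nn = ref[d.argmin(axis=1)]
        # Rigid alignment (Kabsch) on the 3D coordinates only.
        mu_s, mu_r = src[:, :3].mean(0), nn[:, :3].mean(0)
        H = (src[:, :3] - mu_s).T @ (nn[:, :3] - mu_r)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        src[:, :3] = (src[:, :3] - mu_s) @ R.T + mu_r
    d = np.linalg.norm(src[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1).mean()

def recognize(partial, models):
    """Identify the best fit: the reference model with the lowest residual."""
    scores = {name: iclap_match(partial, cloud) for name, cloud in models.items()}
    return min(scores, key=scores.get)
```

Recognition then reduces to running the partial cloud against every reference model and taking the smallest residual, mirroring the "identify the best fit" step in the abstract.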


Knock-Knock: Acoustic Object Recognition by using Stacked Denoising Autoencoders

August 2017

This paper presents a successful application of deep learning to object recognition based on acoustic data. Previously employed approaches rely on handcrafted features to describe the acoustic data, which limits how widely the learned representation can be applied and risks capturing only characteristics that are insignificant for the task. In contrast, multilayer/deep learning architectures need no predefined feature representation format: features can be learned from raw sensor data without defining discriminative characteristics a priori. In this paper, stacked denoising autoencoders are applied to train a deep learning model. Thirty different objects were successfully classified in our experiment; each object was knocked 120 times with a marker pen to obtain the auditory data. By employing the proposed deep learning framework, a high accuracy of 91.50% was achieved. A traditional method using handcrafted features with a shallow classifier was taken as a benchmark, and its recognition rate was only 58.22%. Interestingly, a recognition rate of 82.00% was achieved when using a shallow classifier with raw acoustic data as input. In addition, classifying one object using deep learning took far less time (by a factor of more than 6) than the traditional method. We also explored how different model parameters in our deep architecture affect recognition performance.
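The building block of the framework above, a denoising autoencoder, can be sketched as a single layer trained to reconstruct clean input from a corrupted copy. This is a minimal numpy illustration under assumptions, not the paper's model: the paper stacks several such layers and adds a classifier on top, and the layer sizes, learning rate, corruption level, and untied weights here are all illustrative choices.

```python
import numpy as np

class DenoisingAutoencoder:
    """One layer of a stacked denoising autoencoder (sketch). Masking noise
    corrupts the input; the loss is measured against the clean input, so the
    hidden layer learns robust features from raw data without handcrafted
    descriptors."""

    def __init__(self, n_in, n_hidden, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.rng = rng
        self.W1 = rng.normal(0, 0.1, (n_in, n_hidden))   # encoder weights
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_in))   # decoder weights
        self.b2 = np.zeros(n_in)
        self.lr = lr

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_step(self, X, corruption=0.3):
        """One gradient step on the squared reconstruction error of the
        clean batch X, encoding a randomly masked copy of it."""
        noisy = X * (self.rng.random(X.shape) > corruption)  # masking noise
        h = self._sig(noisy @ self.W1 + self.b1)             # encode
        y = self._sig(h @ self.W2 + self.b2)                 # decode
        d_out = (y - X) * y * (1 - y)                        # output delta
        d_hid = (d_out @ self.W2.T) * h * (1 - h)            # hidden delta
        self.W2 -= self.lr * h.T @ d_out / len(X)
        self.b2 -= self.lr * d_out.mean(axis=0)
        self.W1 -= self.lr * noisy.T @ d_hid / len(X)
        self.b1 -= self.lr * d_hid.mean(axis=0)
        return float(((y - X) ** 2).mean())
```

In a stacked setup, the hidden activations of one trained layer become the input to the next, and a shallow classifier on the final representation performs the actual object recognition.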


Localizing the Object Contact through Matching Tactile Features with Visual Map

August 2017

This paper presents a novel framework for integrating vision and tactile sensing by localizing tactile readings within a visual object map. Intuitively, there are correspondences, e.g., prominent features, between visual and tactile object identification. To exploit this in robotics, we propose to localize tactile readings in visual images by sharing the same sets of feature descriptors across the two sensing modalities. Localization is then treated as a probabilistic estimation problem solved within a recursive Bayesian filtering framework, for which a feature-based measurement model and a Gaussian motion model are built. In our tests, a tactile array sensor is used to generate tactile images during interaction with objects, and the results demonstrate the feasibility of the proposed framework.
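The predict–update cycle described in the abstract can be sketched with a grid-based recursive Bayesian filter. This is a minimal 1D illustration under assumptions, not the paper's implementation: the cell grid, the name `bayes_filter_step`, and the shape of the likelihood are all illustrative; the paper's feature-based measurement model would supply the per-cell matching scores.

```python
import numpy as np

def bayes_filter_step(belief, motion, measurement_sim, motion_sigma=1.0):
    """One predict-update cycle over candidate contact cells on a visual map.
    `belief` is the prior probability per cell, `motion` the commanded shift
    in cells (Gaussian motion model), and `measurement_sim` scores how well
    the current tactile features match the visual features at each cell."""
    n = len(belief)
    idx = np.arange(n)
    # Predict: propagate each cell's mass through the Gaussian motion model.
    predicted = np.zeros(n)
    for i in range(n):
        w = np.exp(-((idx - (i + motion)) ** 2) / (2.0 * motion_sigma ** 2))
        predicted += belief[i] * w / w.sum()
    # Update: weight by the feature-matching likelihood and renormalise.
    posterior = predicted * measurement_sim
    return posterior / posterior.sum()
```

Repeating this step as the sensor moves concentrates the posterior on the cell of the visual map where the tactile readings actually originate.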


Ex vivo study of prostate cancer localization using rolling mechanical imaging towards minimally invasive surgery

February 2017

Medical Engineering & Physics

Hongbin Liu · Matthew Brown · [...]
Rolling mechanical imaging (RMI) is a novel technique towards the detection and quantification of malignant tissue in locations that are inaccessible to palpation during robotic minimally invasive surgery (MIS); the approach is shown to achieve results of higher precision than is possible using the human hand. Using a passive robotic manipulator, a lightweight and force sensitive wheeled probe is driven across the surface of tissue samples to collect continuous measurements of wheel-tissue dynamics. A color-coded map is then generated to visualize the stiffness distribution within the internal tissue structure. Having developed the RMI device in-house, we aim to compare the accuracy of this technique to commonly used methods of localizing prostate cancer in current practice: digital rectal exam (DRE), magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) biopsy. Final histology is the gold standard used for comparison. A total of 126 sites from 21 robotic-assisted radical prostatectomy specimens were examined. Analysis was performed for sensitivity, specificity, accuracy, and predictive value across all patient risk profiles (defined by PSA, Gleason score and pathological score). Of all techniques, pre-operative biopsy had the highest sensitivity (76.2%) and accuracy (64.3%) in the localization of tumor in the final specimen. However, RMI had a higher sensitivity (44.4%) and accuracy (57.9%) than both DRE (38.1% and 52.4%, respectively) and MRI (33.3% and 57.9%, respectively). These findings suggest a role for RMI towards MIS, where haptic feedback is lacking. While our approach has focused on urological tumors, RMI has potential applicability to other extirpative oncological procedures and to diagnostics (e.g., breast cancer screening).
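The comparison above rests on the standard confusion-matrix definitions of sensitivity, specificity, accuracy, and predictive value against the histology gold standard. A small sketch of those definitions (the function name and the counts in the usage example are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Compute standard diagnostic-test metrics from confusion counts,
    where 'positive' means tumor present on final histology."""
    sensitivity = tp / (tp + fn)              # true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)                      # positive predictive value
    npv = tn / (tn + fn)                      # negative predictive value
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "ppv": ppv, "npv": npv}
```

For example, with hypothetical counts tp=8, fp=4, tn=6, fn=2 over 20 sites, sensitivity is 0.8, specificity 0.6, and accuracy 0.7; the study applies the same definitions per technique across its 126 examined sites.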


Citations (1)


... There is a continuing clinical need to improve both the sensitivity and the specificity of simple early screening methods, to better stratify risk for further diagnostic steps such as magnetic resonance imaging (MRI), transrectal ultrasound (TRUS) and biopsy. Recent research has seen increasing interest in instrumenting prostate gland palpation, either to enhance the DRE (Kim et al. 2014; Scanlan et al. 2015; Palacio-Torralba et al. 2016) or for minimally invasive robot-assisted surgery (Li et al. 2017). The data acquired from such methods often take the form of force feedback when probing the prostate, either quasi-statically or dynamically, and integrating such a strategy of data analysis into DRE procedures has shown promise for improving the effectiveness of early screening for PCa (Hammer et al. 2017). ...

Reference:

Locating and sizing tumor nodules in human prostate using instrumented probing - computational framework and experimental validation
Ex vivo study of prostate cancer localization using rolling mechanical imaging towards minimally invasive surgery
  • Citing Article
  • February 2017

Medical Engineering & Physics