Francis wyffels’s research while affiliated with Ghent University and other places

What is this page?


This page lists works of an author who doesn't have a ResearchGate profile or hasn't added the works to their profile yet. It is automatically generated from public (personal) data to further our legitimate goal of comprehensive and accurate scientific recordkeeping. If you are this author and want this page removed, please let us know.

Publications (106)


A typical run of the unfolding and folding pipeline shown as 16 key moments, grouped in four stages. Each row corresponds with a stage in the pipeline.
Two examples of images overlayed with the heatmap from the keypoint detector. The keypoint detector, trained on images of flat towels (left), also produces useful detections on deformed towels (right).
UnfoldIR 9 tactile sensors. One fingertip emits infrared (IR) light, the other receives it. A grasped layer of cloth reduces the light captured by the receiver grid, as can be seen in the exemplary sensor readout (right).
The 12 towels used for the evaluation from the Household Cloth Object Set. 11
Four examples of grasp-related failures. Top left: Grip lost during tracing. Top right: Grip lost when laying down the result of tracing. Bottom left: Cloth sticks to gripper after laying down the result of tracing. Bottom right: Cloth does not release from the gripper during pretrace.


Insights for robotic cloth manipulation: A comprehensive analysis of a competition-winning system
  • Article
  • Full-text available

March 2025 · 16 Reads

Victor-Louis De Gusseme · Thomas Lips · [...] · Francis wyffels

Robotic cloth manipulation will be important for assistive robots. To thoroughly evaluate progress in this field, a Cloth Manipulation and Perception Competition was organised at IROS 2022 and ICRA 2023. In this article, we present the system that won the folding track at IROS 2022 and the folding and unfolding tracks at ICRA 2023. By combining visual and tactile information with engineered motions, we built a system that can generalise to a range of patterned towels made from various materials, as required for the competition. We describe our system and its limitations, which we relate to future work with the goal of creating systems that can deal with any cloth, robot, or surface.


Self-Mixing Laser Interferometry for Robotic Tactile Sensing

February 2025 · 8 Reads

Self-mixing interferometry (SMI) has been lauded for its sensitivity in detecting microvibrations, while requiring no physical contact with its target. In robotics, microvibrations have traditionally been interpreted as a marker for object slip, and recently as a salient indicator of extrinsic contact. We present the first-ever robotic fingertip making use of SMI for slip and extrinsic contact sensing. The design is validated through measurement of controlled vibration sources, both before and after encasing the readout circuit in its fingertip package. Then, the SMI fingertip is compared to acoustic sensing through three experiments. The results are distilled into a technology decision map. SMI was found to be more sensitive to subtle slip events and significantly more robust against ambient noise. We conclude that the integration of SMI in robotic fingertips offers a new, promising branch of tactile sensing in robotics.
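As a rough illustration of how microvibration readings like these could be turned into a slip signal, the sketch below scores a sampled photodiode signal by its spectral energy in a mid-frequency band and thresholds it. The function names, band limits and threshold are assumptions for illustration, not the paper's actual processing.

```python
import numpy as np

def vibration_energy(signal, fs, band=(200.0, 2000.0)):
    """Spectral energy of `signal` inside a frequency band (Hz).

    Slip-induced microvibrations raise the energy in a mid-frequency
    band well above the sensor's noise floor, so band energy can
    serve as a simple slip score.
    """
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / len(signal)

def detect_slip(signal, fs, threshold):
    """Flag slip when the band energy exceeds a calibrated threshold."""
    return vibration_energy(signal, fs) > threshold

# Synthetic check: a 500 Hz microvibration buried in sensor noise,
# versus the noise alone.
fs = 10_000
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
quiet = 0.01 * rng.standard_normal(fs)
slipping = quiet + 0.2 * np.sin(2 * np.pi * 500.0 * t)
```

In practice the threshold would be calibrated per fingertip, and a short sliding window would replace the one-shot FFT.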


Enabling high-throughput quantitative wood anatomy through a dedicated pipeline

February 2025 · 219 Reads

Plant Methods

Throughout their lifetime, trees store valuable environmental information within their wood. Unlocking this information requires quantitative analysis, in most cases of the wood surface. The conventional pathway for high-resolution digitization of wood surfaces and segmentation of wood features requires several manual, time-consuming steps. We present a semi-automated high-throughput pipeline for sample preparation, gigapixel imaging, and analysis of the anatomy of the end-grain surfaces of discs and increment cores. The pipeline consists of a collaborative robot (Cobot) with a sander for surface preparation, a custom-built open-source robot for gigapixel imaging (Gigapixel Woodbot), and a Python routine for deep-learning analysis of gigapixel images. The robotic sander produces high-quality surfaces with minimal sanding or polishing artefacts. It is designed for precise and consistent sanding and polishing of wood surfaces, revealing detailed wood anatomical structures by applying consecutively finer grits of sandpaper. Multiple samples can be processed autonomously at once. The custom-built open-source Gigapixel Woodbot is a modular imaging system that enables automated scanning of large wood surfaces. The frame of the robot is a CNC (Computer Numerical Control) machine that positions a camera above the objects. Images are taken at different focus points, with a small overlap between consecutive images in the X-Y plane, and merged by mosaic stitching into a gigapixel image. Multiple scans can be initiated through the graphical application, allowing the system to autonomously image several objects and large surfaces. Finally, a Python routine using a trained YOLOv8 deep-learning network allows for fully automated analysis of the gigapixel images, shown here as a proof of concept for the quantification of vessels and rays on full disc surfaces and increment cores.
We present fully digitized beech discs of 30–35 cm diameter at a resolution of 2.25 µm, for which we automatically quantified the number of vessels (up to 13 million) and rays. We showcase the same process for five 30 cm beech increment cores, also digitized at a resolution of 2.25 µm, and generated pith-to-bark profiles of vessel density. This pipeline allows researchers to perform high-detail analysis of anatomical features on large surfaces and to test fundamental hypotheses in ecophysiology, ecology, dendroclimatology, and many more fields with sufficient sample replication.
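The overlapped X-Y scanning described above can be sketched as a simple grid planner. The function below is a hypothetical illustration, not the Woodbot's control code: it computes camera centre positions so that consecutive images overlap by a fixed fraction, as mosaic stitching requires.

```python
import math

def scan_positions(surface_w, surface_h, fov_w, fov_h, overlap=0.2):
    """Camera centre positions (same units as the inputs) covering a surface.

    Consecutive images overlap by `overlap` (fraction of the field of
    view) in both axes so that mosaic stitching can register them.
    Positions are ordered row by row, as a CNC gantry would visit them.
    """
    step_x = fov_w * (1.0 - overlap)
    step_y = fov_h * (1.0 - overlap)
    nx = max(1, math.ceil((surface_w - fov_w) / step_x) + 1)
    ny = max(1, math.ceil((surface_h - fov_h) / step_y) + 1)
    return [(ix * step_x + fov_w / 2.0, iy * step_y + fov_h / 2.0)
            for iy in range(ny) for ix in range(nx)]

# Example: a 350 mm disc imaged with a 40 mm x 30 mm field of view
# and 20% overlap between neighbouring tiles.
positions = scan_positions(350, 350, 40, 30, overlap=0.2)
```

The same planner would be run once per focus level when focus stacking is needed.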


Cauliflower centre detection and 3-dimensional tracking for robotic intrarow weeding

February 2025 · 24 Reads

Precision Agriculture

Mechanical weeding is an important part of integrated weed management. It destroys weeds between (interrow) and within (intrarow) crop rows. Preventing crop damage requires precise detection and tracking of the plants. In this work, a detection and tracking algorithm was developed and integrated on an intrarow hoeing prototype. The algorithm was developed and validated on 12 rows of 950 cauliflower plants. To this end, a methodology was provided to automatically generate labels based on the crop plants' Global Navigation Satellite System (GNSS) positions during data collection with a robot platform. A CenterNet architecture was adjusted for plant centre detection by comparing different encoder networks and selecting the optimal hyperparameters. The monocular camera projection error of the plant centre detections from pixel to 3D coordinates was evaluated and used in a position- and velocity-based tracking algorithm to determine the timing of intrarow hoeing knife actuation. A dataset of 53k labelled images was created. The best CenterNet model achieved an F1 score of 0.986 on the test set for detecting cauliflower centres. The position tracking had an average variation of 1.62 cm. Velocity tracking had a standard deviation of 0.008 m/s with respect to the robot's operational target velocity. Overall, the entire integration showed effective actuation of the prototype in field conditions. Only one false positive detection occurred during operation in two test rows of 135 cauliflowers.
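The timing step can be illustrated with a small calculation: given a tracked plant position along the driving direction and the robot's forward velocity, the delay before actuating the hoeing knives is the remaining travel time minus the actuator latency. The function and its parameter names are hypothetical, not the prototype's implementation.

```python
def actuation_delay(plant_x, knife_x, velocity, latency=0.05):
    """Seconds to wait before actuating the intrarow knives.

    plant_x, knife_x: positions along the driving direction (m),
    velocity: robot forward speed (m/s),
    latency: assumed mechanical/actuation delay (s).
    """
    if velocity <= 0.0:
        raise ValueError("robot must be moving forward")
    time_to_reach = (plant_x - knife_x) / velocity
    return max(0.0, time_to_reach - latency)
```

Velocity tracking matters here because any error in `velocity` translates directly into a position error of the knife action at the plant.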


Fig. 1: Experimental setup.
Fig. 2: Structure of the tactile sensor.
Fig. 3: ICC for emotions.
Fig. 4: SVM confusion matrix
Conveying Emotions to Robots through Touch and Sound

December 2024 · 32 Reads

Human emotions can be conveyed through nuanced touch gestures. However, there is a lack of understanding of how consistently emotions can be conveyed to robots through touch. This study explores the consistency of touch-based emotional expression toward a robot by integrating tactile and auditory sensory readings of affective haptic expressions. We developed a piezoresistive pressure sensor and used a microphone to mimic touch and sound channels, respectively. In a study with 28 participants, each conveyed 10 emotions to a robot using spontaneous touch gestures. Our findings reveal a statistically significant consistency in emotion expression among participants. However, some emotions obtained low intraclass correlation values. Additionally, certain emotions with similar levels of arousal or valence did not exhibit significant differences in the way they were conveyed. We subsequently constructed a multi-modal model integrating touch and audio features to decode the 10 emotions. A support vector machine (SVM) model demonstrated the highest accuracy, achieving 40% for 10 classes, with "Attention" being the most accurately conveyed emotion at a balanced accuracy of 87.65%.
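A minimal sketch of the decoding step, assuming scikit-learn is available: an SVM trained on combined touch and audio feature vectors. The feature names and the two toy classes below are invented for illustration; the study used 10 emotion classes and real sensor data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical per-gesture feature vectors: [mean pressure,
# pressure variance, audio RMS, audio spectral centroid (Hz)].
gentle = rng.normal([0.2, 0.05, 0.1, 300.0], 0.02, size=(40, 4))
aggressive = rng.normal([0.8, 0.30, 0.6, 900.0], 0.02, size=(40, 4))

X = np.vstack([gentle, aggressive])
y = np.array([0] * 40 + [1] * 40)  # 0 = "gentle", 1 = "aggressive"

# RBF-kernel SVM fitted on the combined touch + audio features.
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
```

With 10 overlapping emotion classes, accuracies in the 40% range (as reported) are far harder to reach than in this deliberately separable toy setup.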



Fig. 4. Comparison of average keypoint distances for different values of the Controlnet conditioning scale (CCS). The optimal value depends on the category, but 1.5 (marked in green) is a sensible default.
Evaluating Text-to-Image Diffusion Models for Texturing Synthetic Data

November 2024 · 48 Reads

Building generic robotic manipulation systems often requires large amounts of real-world data, which can be difficult to collect. Synthetic data generation offers a promising alternative, but limiting the sim-to-real gap requires significant engineering effort. To reduce this engineering effort, we investigate the use of pretrained text-to-image diffusion models for texturing synthetic images and compare this approach with using random textures, a common domain randomization technique in synthetic data generation. We focus on generating object-centric representations, such as keypoints and segmentation masks, which are important for robotic manipulation and require precise annotations. We evaluate the efficacy of the texturing methods by training models on the synthetic data and measuring their performance on real-world datasets for three object categories: shoes, T-shirts, and mugs. Surprisingly, we find that texturing using a diffusion model performs on par with random textures, despite generating seemingly more realistic images. Our results suggest that, for now, using diffusion models for texturing does not benefit synthetic data generation for robotics. The code, data and trained models are available at https://github.com/tlpss/diffusing-synthetic-data.git.
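The average keypoint distance used to compare texturing methods can be computed as below. This is a generic sketch of the metric (index-matched Euclidean distances in pixels), not the paper's evaluation code.

```python
import numpy as np

def mean_keypoint_distance(pred, gt):
    """Mean Euclidean distance (pixels) between predicted and
    ground-truth keypoints, matched by index.

    pred, gt: arrays of shape (num_keypoints, 2) holding (x, y) pixels.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```

For example, one exact prediction and one 5 px off average to 2.5 px.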


Agro-morphological characterization of Coffea canephora (Robusta) genotypes from the INERA Yangambi Coffee Collection, Democratic Republic of the Congo

October 2024 · 77 Reads · 1 Citation

Meeting rising quality standards while at the same time addressing climate challenges will make the commercial cultivation of Robusta coffee increasingly difficult. Whereas breeding new varieties may be an important part of the solution, such efforts for Robusta lag behind, with much of its genetic diversity still unexplored. By screening existing field genebanks to identify accessions with desirable traits, breeding programs can be significantly facilitated. This study quantifies the morphological diversity and agronomic potential of 70 genotypes from the INERA Coffee Collection in Yangambi, Democratic Republic of the Congo. We measured 29 traits, comprising vegetative, reproductive, tree architecture, and yield traits. Classification models were applied to establish whether these traits could accurately classify genotypes based on their background. Furthermore, the agronomic potential and green bean quality of the genotypes were studied. While significant variation in morphological traits was observed, no combination of traits could reliably predict the genetic background of different genotypes. Genotypes with promising traits for green beans were identified in both ‘Lula’ and ‘Lula’ – Wild hybrids, while promising yield traits were found in ‘Lula’ – Congolese subgroup A hybrids. Additionally, certain ‘Lula’ – Wild hybrids showed low specific leaf area and stomatal density, indicating potential fitness advantages in dry environments, warranting further study. Our findings highlight the agronomic potential of underexplored Robusta coffee genotypes from the Democratic Republic of the Congo and indicate the need for further screening to maximize their value.



Learning Keypoints for Robotic Cloth Manipulation Using Synthetic Data

July 2024 · 11 Reads · 7 Citations

IEEE Robotics and Automation Letters

Assistive robots should be able to wash, fold or iron clothes. However, due to the variety, deformability and self-occlusions of clothes, creating robot systems for cloth manipulation is challenging. Synthetic data is a promising direction to improve generalization, but the sim-to-real gap limits its effectiveness. To advance the use of synthetic data for cloth manipulation tasks such as robotic folding, we present a synthetic data pipeline to train keypoint detectors for almost-flattened cloth items. To evaluate its performance, we have also collected a real-world dataset. We train detectors for T-shirts, towels and shorts and obtain an average precision of 64% and an average keypoint distance of 18 pixels. Fine-tuning on real-world data improves performance to 74% mAP and an average distance of only 9 pixels. Furthermore, we describe failure modes of the keypoint detectors and compare different approaches to obtain cloth meshes and materials. We also quantify the remaining sim-to-real gap and argue that further improvements to the fidelity of cloth assets will be required to further reduce this gap. The code, dataset and trained models are available here.
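A common way to turn predicted spatial heatmaps into keypoints, as this kind of detector does, is a per-channel argmax with a confidence threshold. The sketch below is a generic illustration, not the paper's code; the threshold value is an assumption.

```python
import numpy as np

def keypoints_from_heatmaps(heatmaps, min_score=0.1):
    """Extract one (x, y) keypoint per heatmap channel via argmax.

    heatmaps: array of shape (num_keypoints, H, W) with scores in [0, 1].
    Channels whose peak falls below `min_score` yield None (no detection).
    """
    keypoints = []
    for channel in heatmaps:
        iy, ix = np.unravel_index(np.argmax(channel), channel.shape)
        if channel[iy, ix] >= min_score:
            keypoints.append((int(ix), int(iy)))
        else:
            keypoints.append(None)
    return keypoints

# Toy example: one confident peak, one empty channel.
heatmaps = np.zeros((2, 8, 8))
heatmaps[0, 2, 5] = 0.9  # peak at x=5, y=2
detected = keypoints_from_heatmaps(heatmaps)
```

Real detectors usually refine the integer argmax with sub-pixel interpolation, omitted here for brevity.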


Citations (60)


... An improved version of the synthetic data pipeline used to generate these images was later published. 30 For towels, we take the four corners as the semantic locations of interest. The keypoint detector predicts spatial heatmaps from which the keypoints can be extracted, as shown in Figure 2. ...

Reference:

Insights for robotic cloth manipulation: A comprehensive analysis of a competition-winning system
Learning Keypoints for Robotic Cloth Manipulation Using Synthetic Data
  • Citing Article
  • July 2024

IEEE Robotics and Automation Letters

... In engineering and computing education, CT is integral to understanding complex systems, formulating algorithms, and addressing real-world challenges systematically. Despite its growing importance, measuring and assessing CT skills remains complex and often subjective, particularly when comparing students' self-perceptions with objective measures. Recent studies on CT skill assessment often use either self-assessment tools or objective, gamified tests across primary, college and vocational levels (Chen et al., 2023; Hermans et al., 2024; National Research Council et al., 2011; Relkin et al., 2020; Wilensky & Reisman, 2006). A study by El-Hamamsy et al. (2023) introduced the competent Computational Thinking Test (CTT), validated for longitudinal studies among primary students. ...

Empowering Vocational Students: A Research-Based Framework for Computational Thinking Integration

... Researchers have introduced a wide range of methods for utilizing synthetic data in CV. One approach attempts to generate photo-realistic images that match the real world as closely as possible [2,6,20,22]. These methods have been shown to generalize well between domains, but producing high-quality images is a significant challenge due to the complexity of accurately modeling lighting conditions and surface textures. ...

Sim-to-Real Dataset of Industrial Metal Objects

... Another component of machine learning, known as reinforcement learning (RL), teaches an agent how to behave and react in a given environment by having it carry out specific tasks and then observing the rewards or outcomes. This technique is already employed in different agricultural domains, such as crop yield prediction and a completely autonomous precision-agriculture aerial scouting technique [21][22][23][24]. ...

Plant science in the age of simulation intelligence

... The regression output was removed from the CenterNet architecture so that it outputs only a heatmap prediction (Vierbergen et al., 2023; Duan et al., 2019). Experiments were done with the ResNet (18, 34 and 50) and EfficientNet (B4 and V2 small) backbones for the model encoder. ...

Sim2real flower detection towards automated Calendula harvesting
  • Citing Article
  • October 2023

Biosystems Engineering

... Omitted variable bias is managed by integrating ML with model predictive control (MPC) to estimate unknown parameters ([10]) and by combining physics-based knowledge with AI in metal additive manufacturing ([34]). Representation bias is tackled through self-supervised learning for object retrieval ([18]). Sampling bias is mitigated using the DB-CGAN model for distribution bias in industrial IoT ([42]) and a fault diagnosis method combining Gramian Angular Difference Field and Improved Dual Attention Residual Network (IDARN) ([37]). ...

Self-supervised learning for robust object retrieval without human annotations
  • Citing Article
  • June 2023

Computers & Graphics

... Our system is based on prior work on synthetic data generation for clothing, 14 simulation-based fold optimisations, 15 and the use of tactile sensing for unfolding. 9 In particular, our work adds to the very limited set of fully integrated crumpled-to-folded cloth manipulation pipelines in the literature. 4,5 We believe this full integration gives us a more complete overview of possible failure modes, which we describe in detail. ...

UnfoldIR: Tactile Robotic Unfolding of Cloth

IEEE Robotics and Automation Letters

... Moreover, our dataset is highly useful for various computer vision tasks, including 6D pose estimation, object detection, instance segmentation, novel view synthesis, 3D reconstruction, and active perception. In a recent study, our dataset was used for 6D object pose estimation [34]. The authors first trained PVNet [35], a popular 6D object pose estimation method, on our dataset. ...

CenDerNet: Center and Curvature Representations for Render-and-Compare 6D Pose Estimation
  • Citing Chapter
  • February 2023

Lecture Notes in Computer Science

... In this article, we describe and formally evaluate our system that won the folding and unfolding track of this competition at ICRA 2023, as well as the folding track at IROS 2022 in an earlier stage. Our system is based on prior work on synthetic data generation for clothing, 14 simulation-based fold optimisations, 15 and the use of tactile sensing for unfolding. 9 In particular, our work adds to the very limited set of fully integrated crumpled-to-folded cloth manipulation pipelines in the literature. ...

Effective cloth folding trajectories in simulation with only two parameters

Frontiers in Neurorobotics