86 reads in the past 30 days
A hybrid tendon-driven continuum robot that avoids torsion under external load · May 2025 · 86 Reads
Published by Frontiers
Online ISSN: 2296-9144
66 reads in the past 30 days
Learning to reason over scene graphs: a case study of finetuning GPT-2 into a robot language model for grounded task planning · August 2023 · 2,308 Reads · 23 Citations
53 reads in the past 30 days
Factors influencing subjective opinion attribution to conversational robots · April 2025 · 81 Reads
49 reads in the past 30 days
Neurotechnology for enhancing human operation of robotic and semi-autonomous systems · May 2025 · 53 Reads
43 reads in the past 30 days
ROSA: a knowledge-based solution for robot self-adaptation · May 2025 · 43 Reads
Frontiers in Robotics and AI is a leading multidisciplinary journal focusing on the theory of robotics, technology, and artificial intelligence, and their applications, from biomedical to space robotics. Led by Field Chief Editor Prof. Kostas J. Kyriakopoulos (National Technical University of Athens, Greece), the journal is indexed in Scopus, Web of Science (ESCI), and the DOAJ, among others, and publishes theoretical and applied robotics research that advances technological developments to aid and enhance modern society. Frontiers in Robotics and AI explores robotics technology across multiple, diverse disciplines, from biomedical and micro-nano systems to field robotics.
The journal is particularly interested in research showcasing artificial intelligence as applied to robotics, advancing perception through innovative vision and sensor systems, developing autonomous robot behavior, and establishing novel forms of interaction with the environment and humans.
June 2025 · 4 Reads
Monica Nicolescu · Janelle Blankenburg · Bashira Akter Anima · [...] · David Feil-Seifer
This paper focuses on the problem of collaborative task execution by teams comprising people and multiple heterogeneous robots. In particular, the problem is motivated by the need for the team members to dynamically coordinate their execution, in order to avoid overlapping actions (i.e., multiple team members working on the same part of the task) and to ensure a correct execution of the task. This paper expands on our own prior work on collaborative task execution by single human-robot and single robot-robot teams, taking an approach inspired by simulation Theory of Mind (ToM) to develop a real-time distributed architecture that enables collaborative execution of tasks with hierarchical representations and multiple types of execution constraints by teams of people and multiple robots with variable heterogeneity. First, the architecture presents a novel approach for concurrent coordination of task execution with both human and robot teammates. Second, a novel pipeline is developed to handle automatic grasping of objects with unknown initial locations. Furthermore, the architecture relies on a novel continuous-valued metric which accounts for a robot’s capability to perform tasks during the dynamic, on-line task allocation process. To assess the proposed approach, the architecture is validated with: 1) a heterogeneous team of two humanoid robots and 2) a heterogeneous team of one human and two humanoid robots, performing a household task in different environmental conditions. The results support the proposed approach, as different environmental conditions result in different and continuously changing values for the robots’ task execution abilities. Thus, the proposed architecture enables adaptive, real-time collaborative task execution through dynamic task allocation by a heterogeneous human-robot team, for tasks with hierarchical representations and multiple types of constraints.
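The dynamic allocation idea above can be sketched as a simple greedy loop: score each robot with a continuous-valued capability metric that degrades under current environmental conditions, then assign each task to the best free robot so teammates never overlap. The function names and the metric formula are illustrative assumptions, not the paper's actual architecture.

```python
def capability(base_skill, env_penalty):
    """Continuous-valued ability score in [0, 1]: a robot's base skill
    degraded by the current environmental conditions (both assumed given)."""
    return max(0.0, min(1.0, base_skill * (1.0 - env_penalty)))

def allocate(task, scores, claimed):
    """Assign the task to the most capable robot that has not already
    claimed another subtask, avoiding overlapping actions."""
    free = {name: c for name, c in scores.items() if name not in claimed}
    if not free:
        return None
    best = max(free, key=free.get)
    claimed.add(best)
    return best

# Example: two heterogeneous robots under different lighting penalties.
scores = {"baxter": capability(0.9, 0.4), "pepper": capability(0.7, 0.1)}
claimed = set()
print(allocate("fetch-cup", scores, claimed))  # → pepper (0.63 vs. 0.54)
```

Because the environment term is re-evaluated each cycle, the same robot can win a task under one condition and lose it under another, mirroring the continuously changing ability values reported in the experiments.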
June 2025 · 14 Reads
Obstacle avoidance is important for autonomous driving. Metric scale obstacle detection using a monocular camera for obstacle avoidance has been studied. In this study, metric scale obstacle detection means detecting obstacles and measuring the distance to them with a metric scale. We have already developed PMOD-Net, which realizes metric scale obstacle detection by using a monocular camera and a 3D map for autonomous driving. However, PMOD-Net’s distance error of non-fixed obstacles that do not exist on the 3D map is large. Accordingly, this study deals with the problem of improving distance estimation of non-fixed obstacles for obstacle avoidance. To solve the problem, we focused on the fact that PMOD-Net simultaneously performed object detection and distance estimation. We have developed a new loss function called “DifSeg.” DifSeg is calculated from the distance estimation results on the non-fixed obstacle region, which is defined based on the object detection results. Therefore, DifSeg makes PMOD-Net focus on non-fixed obstacles during training. We evaluated the effect of DifSeg by using CARLA simulator, KITTI, and an original indoor dataset. The evaluation results showed that the distance estimation accuracy was improved on all datasets. Especially in the case of KITTI, the distance estimation error of our method was 2.42 m, which was 2.14 m less than that of the latest monocular depth estimation method.
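The key mechanism described above, a loss restricted to the non-fixed obstacle region defined by the detection output, can be illustrated with a toy masked error. This is an assumption about the general shape of DifSeg for intuition only (shown on flattened per-pixel lists); the paper's exact formulation may differ.

```python
def difseg_loss(pred_depth, gt_depth, seg_labels, obstacle_classes):
    """Mean absolute depth error restricted to pixels whose predicted
    segmentation label belongs to a non-fixed obstacle class."""
    errors = [abs(p - g)
              for p, g, s in zip(pred_depth, gt_depth, seg_labels)
              if s in obstacle_classes]
    return sum(errors) / len(errors) if errors else 0.0

pred = [5.0, 9.0, 4.0, 8.0]   # predicted distances (m), flattened
gt   = [5.0, 7.0, 4.0, 8.5]   # ground-truth distances (m)
seg  = [0,   1,   0,   1]     # hypothetical class 1 = pedestrian (non-fixed)
print(difseg_loss(pred, gt, seg, obstacle_classes={1}))  # → 1.25
```

Masking the loss this way leaves errors on fixed, map-covered structures unpenalized, so training gradients concentrate on exactly the obstacles the 3D map cannot explain.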
June 2025 · 6 Reads
Introduction: The rapid advancement of collaborative robotics has driven significant interest in Human-Robot Interaction (HRI), particularly in scenarios where robots work alongside humans. This paper considers tasks where a human operator teaches the robot an operation that is then performed autonomously.
Methods: A multi-modal approach employing tactile fingers and proximity sensors is proposed, where tactile fingers serve as an interface, while proximity sensors enable end-effector movements through contactless interactions and collision avoidance algorithms. In addition, the system is modular to make it adaptable to different tasks.
Results: Demonstrative tests show the effectiveness of the proposed system and algorithms. The results illustrate how the tactile and proximity sensors can be used separately or in combination to achieve human-robot collaboration.
Discussion: The paper demonstrates the use of the proposed system for tasks involving the manipulation of electrical wires. Further studies will investigate how it behaves with objects of different shapes and in more complex tasks.
June 2025 · 6 Reads
Exoskeletons aim to enhance human performance and reduce physical fatigue. However, one major challenge for active exoskeletons is the need for a power source. This demand is typically met with batteries, which limit the device’s operational time. This study presents a novel solution to this challenge: a design that enables the generation of electricity during motions where the muscles work as brakes and absorb energy, with the energy stored and subsequently returned to assist when the muscles function as motors. To achieve this goal, a knee exoskeleton design with a direct drive and a novel electronic board was designed and manufactured to capture the energy generated by the wearer’s movements and convert it into electrical energy. The harvested energy is stored in a power bank, and later, during motion, this energy is used to power the exoskeleton motor. Further, the device has torque control and can change the assistive profile and magnitude as needed for different assistance scenarios. Sit-to-stand (STS) motion was chosen as a test case for the first exoskeleton prototype. It was found that, during lowering (from stand to sit), the exoskeleton provided up to 10 Nm and harvested 9.4 J. During rising (from sit to stand), it provided up to 7.6 Nm and was able to return 6.8 J of the harvested energy. Therefore, the cycle efficiency of the exoskeleton system (return divided by harvesting) is 72.3%. In summary, this study introduces the first active exoskeleton for STS that can generate its own electrical power. The results show that the full development of this technology could reduce exoskeletons’ need for external energy sources.
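The quoted cycle efficiency follows directly from the energy figures in the abstract, returned energy divided by harvested energy:

```python
harvested_j = 9.4   # energy harvested during lowering (stand to sit)
returned_j  = 6.8   # energy returned during rising (sit to stand)
efficiency  = returned_j / harvested_j
print(f"{efficiency:.1%}")  # → 72.3%
```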
June 2025 · 1 Read
Samuel Gingras · Alexandre St-Jean · Jean-Sébastien Plante
Geared magnetorheological (MR) actuators have the potential to provide safe and fast physical interactions between humans and machines due to their low inertia and high bandwidth. The use of MR actuators in collaborative serial manipulators is only emerging, and the design space of this approach is unknown. This paper provides a preliminary understanding of this design space by studying how much gearing can be used between the MR actuators and the joint outputs while maintaining adequate safety levels for collaborative tasks. An analytical collision model is derived for a 6 degrees-of-freedom serial manipulator based on the geometry of the well-known UR5e robot. Model validity is confirmed by comparing predictions to experimental collision data from two robots, a UR5e and an MR5 equivalent. The model is then used to study the impact of gearing level on safety during eventual collisions with humans. Results show that for both technologies, robot safety is governed by the balance between the reflected mass due to structural mass and actuator rotational inertia. Results show that, for the UR5e geometry studied in this paper, MR actuators have the potential to reduce the reflected mass in collisions by a factor ranging from 2 to 6 while keeping gearing ratios above 100:1. The paper also briefly studies the influence of robot shape on optimal gearing ratios, showing that smaller robots with shorter range have lower structural mass and thus proportionally benefit even more from MR actuators. Delocalizing wrist actuators to the elbow has a similar impact since it also reduces structural mass. In all, this work suggests that MR actuators have a strong potential to improve the “hapticness” of collaborative robots while maintaining high gearing ratios.
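The trade-off the abstract describes can be sketched with the textbook single-joint form of reflected mass: the apparent mass felt at the contact point is the link's structural mass plus the actuator's rotor inertia amplified by the gear ratio squared. This is a back-of-envelope simplification under assumed numbers, not the paper's 6-DOF collision model.

```python
def reflected_mass(m_struct, rotor_inertia, gear_ratio, moment_arm):
    """Effective mass (kg) felt at a contact point a distance `moment_arm`
    (m) from the joint axis: structural mass plus rotor inertia (kg*m^2)
    reflected through the gearing (n^2 amplification)."""
    return m_struct + (gear_ratio ** 2) * rotor_inertia / moment_arm ** 2

# Same hypothetical rotor at two gearing levels: the inertia term grows
# with the square of the gear ratio, so high gearing dominates safety.
low  = reflected_mass(m_struct=4.0, rotor_inertia=5e-5, gear_ratio=50,  moment_arm=0.5)
high = reflected_mass(m_struct=4.0, rotor_inertia=5e-5, gear_ratio=100, moment_arm=0.5)
print(low, high)  # roughly 4.5 vs. 6.0 kg
```

This is why low-inertia MR actuators can keep gearing above 100:1 while still reducing collision severity: shrinking the rotor inertia shrinks the n²-amplified term directly.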
June 2025 · 4 Reads
Haptic feedback, or tactile perception, is presented by many authors as a technology that can greatly impact biomedical fields, such as minimally invasive surgeries. Laparoscopic interventions are considered the gold standard for many surgical interventions, providing recognized benefits such as reduced recovery time and mortality rate. In addition, advances in robotic engineering in the last few years have contributed to the increase in the number of robotic and tele-operated interventions, providing surgeons with fewer hand tremors and increased depth perception during surgery. However, both techniques are currently totally or partially devoid of haptic feedback. This, together with the pronounced learning curve involved in acquiring the skills to use these technologies, has propelled biomedical engineers to develop safe and realistic simulator-based training programs that address surgical apprentices’ needs in environments that are safe for patients. This review aims to present and summarize some of the latest engineering advances reported in the current literature related to the development of haptic feedback systems in surgical simulators and robotic surgical systems, as well as highlight the benefits that these technologies provide in medical settings for surgical training and preoperative rehearsal.
June 2025 · 5 Reads
Mauliana Mauliana · Ashita Ashok · Daniela Czernochowski · Karsten Berns
This exploratory study investigates how open-domain, multi-session interactions with a large language model (LLM)-powered social humanoid robot (SHR), EMAH, affect user perceptions and willingness to adopt such robots in a university setting. Thirteen students (5 female, 8 male) engaged with EMAH across four weekly sessions, utilizing a compact open-source LLM (Flan-T5-Large) to facilitate multi-turn conversations. Mixed-method measures were employed, including subjective ratings, behavioral observations, and conversational analyses. Results revealed that perceptions of the robot’s sociability, agency, and engagement remained stable over time, with engagement sustained despite repeated exposure. While perceived animacy increased with familiarity, disturbance ratings did not significantly decline, suggesting enhanced lifelikeness of the SHR without reduced discomfort. Observational data showed a mid-study drop in conversation length and turn-taking, corresponding with technical challenges such as slower response generation and speech recognition errors. Although prior experience with robots weakly correlated with rapport, it did not significantly predict adoption willingness. Overall, the findings highlight the potential for LLM-powered robots to maintain open-domain interactions over time, but also underscore the need for improving technical robustness, adapting conversation strategies through personalization, and managing user expectations to foster long-term social engagement. This work provides actionable insights for advancing humanoid robot deployment in educational environments.
June 2025 · 4 Reads
This study aims to develop a robotic system that autonomously tracks insects during free walking to elucidate the relationship between olfactory sensory stimuli and behavioral changes in insects. The adaptability of organisms is defined by their ability to select appropriate behaviors based on sensory inputs in response to environmental changes, a capacity that insects exhibit through efficient adaptive behaviors despite their limited nervous systems. Consequently, new measurement techniques are needed to investigate the neuroethological processes in insects. Traditional behavioral observations of insects have been conducted using free-walking experiments and treadmill techniques; however, these methods face limitations in accurately measuring sensory stimuli and analyzing the factors contributing to detailed behavioral changes. In this study, a robotic system is employed to track free-walking insects while simultaneously recording electroantennogram (EAG) responses at the location of the antenna of the insect during movement, thus enabling the measurement of the relationship between olfactory reception and behavioral change. In this research, we focus on a male silk moth (Bombyx mori) as the target insect and measure its odor source localization behavior. The system comprises a high-speed camera to estimate the movement direction of the insect, a drive system, and instrumentation amplifiers to measure physiological responses. The robot tracks the insect with an error margin of less than 5 mm, recording the EAG responses associated with olfactory reception during this process. An analysis of the relationship between EAG responses and behavior revealed that the silk moth exhibits a significant amplitude in its EAG responses during the initial odor source localization stage. This suggests that the moth does not necessarily move toward the strongest odor. Further information-theoretic analysis revealed that the moth might be moving in the direction most likely to lead to odor detection, depending on the timing of its olfactory reception. This approach allows for a more natural measurement of the connection between olfactory sensory stimuli and behavior during odor source localization. The study findings are expected to deepen our understanding of the adaptive behaviors of insects.
May 2025 · 7 Reads
Drones will likely deliver packages in public spaces, where humans interact as recipients of the package and as bystanders passing by. Understanding the human needs and uncertainties that may arise during these interactions is crucial to ensure safety. This user-centered design study employed twelve interviews and four focus groups to identify key requirements for recipients and bystanders interacting with delivery drones in public spaces. Findings demonstrate different information needs and preferred interface modalities between recipients and bystanders across various interaction stages, from ordering a package to the drone’s retraction after delivery. This paper highlights essential design features and offers concrete design recommendations based on the interaction requirements. These recommendations can inform the standardization and customization of design features for each interaction stage, enhancing safety and facilitating natural human-drone interaction. Future research should build on these recommendations and validate the design concepts through experimental user studies involving human interactions with delivery drones in public spaces.
May 2025 · 6 Reads · 5 Citations
This paper introduces Alter3, a humanoid robot that demonstrates spontaneous motion generation through the integration of GPT-4, a cutting-edge Large Language Model (LLM). This integration overcomes the challenge of applying LLMs to direct robot control, which typically struggles with the hardware-specific nuances of robotic operation. By translating linguistic descriptions of human actions into robotic movements via programming, Alter3 can autonomously perform a diverse range of actions, such as adopting a “selfie” pose or simulating a “ghost.” This approach not only shows Alter3’s few-shot learning capabilities but also its adaptability to verbal feedback for pose adjustments without manual fine-tuning. This research advances the field of humanoid robotics by bridging linguistic concepts with physical embodiment and opens new avenues for exploring spontaneity in humanoid robots.
May 2025 · 23 Reads
Introduction: Autonomous vehicles (AVs) are already being featured on some public roads. However, there is evidence suggesting that the general public remains particularly concerned and skeptical regarding the ethics of collision scenarios. Much research has been conducted on public opinion, but no findings are available on the ethical opinions of AV experts.
Methods: This study presents the findings of the first qualitative research into the ethical opinions of experts responsible for the design, deployment, and regulation of AVs. A total of 46 experts were interviewed and presented with two trolley-problem-like vignettes. The experts were asked for an initial opinion, on the basis of which the parameters of the vignettes were changed to gauge the principles that would result in either changing or retaining an ethical opinion.
Results: Following reflective thematic analysis, four important findings were deduced: 1) although the expert opinions are broadly utilitarian, they are nuanced in significant ways to focus on the impacts of collision scenarios on the community as a whole; 2) obeying the rules of the road remains a significantly strong ethical opinion; 3) responsibility and risk play important roles in how AVs should handle collision situations; 4) egoistic opinions were present to a limited extent.
Discussion: The findings show that the ethics of AVs still pose a serious challenge; furthermore, while utilitarianism appears to be a driving ethical principle on the surface, along with the need for both AVs and vulnerable road users to obey the rules, questions concerning community impacts and risk versus responsibility remain strong influences among AV experts.
May 2025 · 3 Reads
Low trust in autonomous systems remains a significant barrier to adoption and performance. To effectively increase trust in these systems, machines must perform actions to calibrate human trust based on an accurate assessment of both their capability and human trust in real time. Existing efforts demonstrate the value of trust calibration in improving team performance but overlook the importance of machine self-assessment capabilities in the trust calibration process. In our work, we develop a closed-loop trust calibration system for a human-machine collaboration task to classify images and demonstrate about 40% improvement in human trust and 5% improvement in team performance with trained machine self-assessment compared to the baseline, despite the same machine performance level between them. Our trust calibration system applies to any semi-autonomous application requiring human-machine collaboration.
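A minimal sketch of the kind of closed-loop calibration step described above, under assumed mechanics: the machine reports a self-assessed confidence for each classification, and the estimated human trust is nudged toward high values when confident answers prove correct and toward low values when they prove wrong. The update rule and rate are illustrative assumptions, not the paper's actual system.

```python
def update_trust(trust, confidence, was_correct, rate=0.1):
    """One calibration step: move estimated trust toward the machine's
    self-assessed confidence on a correct answer, and toward its
    complement on an incorrect one."""
    target = confidence if was_correct else 1.0 - confidence
    return trust + rate * (target - trust)

# Three image-classification rounds: two confident hits, one confident miss.
trust = 0.5
for confidence, correct in [(0.9, True), (0.8, True), (0.7, False)]:
    trust = update_trust(trust, confidence, correct)
print(round(trust, 3))  # → 0.539
```

The point of self-assessment in the loop is visible even in this toy: a miss made with high confidence drags trust down sharply, whereas a miss flagged with low confidence barely does.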
May 2025 · 39 Reads
Tasks in the meat processing sector are physically challenging, repetitive, and prone to worker scarcity. The adoption of mechanization and automation in meat processing is therefore imperative, given its key role in mitigating labor-intensive processes while concurrently enhancing productivity, safety, and operator wellbeing. This review paper gives an overview of current research on robotic and automated systems in meat processing. The modules of a robotic system are introduced, and the robotic tasks are then divided into three sections according to the processing targets: livestock, poultry, and seafood. Furthermore, we analyze the technical details of whole-meat processing, including skinning, gutting, abdomen cutting, and half-carcass cutting, and discuss these systems in terms of performance and industrial feasibility. The review also refers to some commercialized products for automation in the meat processing industry. Finally, we conclude the review and discuss potential challenges for further robotization and automation in meat processing.
May 2025 · 53 Reads
Human operators of remote and semi-autonomous systems must have a high level of executive function to safely and efficiently conduct operations. These operators face unique cognitive challenges when monitoring and controlling robotic machines, such as vehicles, drones, and construction equipment. The development of safe and experienced human operators of remote machines requires structured training and credentialing programs. This review critically evaluates the potential for incorporating neurotechnology into remote systems operator training and work to enhance human-machine interactions, performance, and safety. Recent evidence demonstrating that different noninvasive neuromodulation and neurofeedback methods can improve critical executive functions such as attention, learning, memory, and cognitive control is reviewed. We further describe how these approaches can be used to improve training outcomes, as well as teleoperator vigilance and decision-making. We also describe how neuromodulation can help remote operators during complex or high-risk tasks by mitigating impulsive decision-making and cognitive errors. While our review advocates for incorporating neurotechnology into remote operator training programs, continued research is required to evaluate how these approaches will impact industrial safety and workforce readiness.
May 2025 · 17 Reads
Introduction: This paper introduces a structured co-design methodology for developing modular robotic solutions for the care sector. Despite the widespread adoption of co-design in robotics, existing frameworks often lack clear and systematic processes to effectively incorporate user requirements into tangible robotic designs.
Methods: To address this gap, the present work proposes an iterative, modular co-design methodology that captures, organises, and translates user insights into practical robotic modules. The methodology employs Design Research (DR) methods combined with Design for Additive Manufacturing (DfAM) principles, enabling rapid prototyping and iterative refinement based on continuous user feedback. The proposed approach was applied in the development of Robobrico, a modular robot created collaboratively with care home users.
Results: Outcomes from this study demonstrate that this structured process effectively aligns robot functionality with user expectations, enhances adaptability, and facilitates practical integration of modular robotic platforms in real-world care environments.
Discussion: This paper details the proposed methodology, the tools developed to support it, and key insights derived from its implementation.
May 2025 · 4 Reads
Introduction: Public attitudes toward service robots are critical to their acceptance across various industries. Previous research suggests that human-like features and behaviours perceived as empathetic may reduce negative perceptions and enhance emotional engagement. However, there is limited empirical evidence on how structured multimodal interventions influence these responses.
Methods: A partially mixed experimental design was employed, featuring one between-subjects factor (group: experimental vs. control) and one within-subjects factor (time: pre-intervention vs. post-intervention), applied only to the experimental group. Two hundred twenty-eight adults (aged 18–65) were randomly assigned to either the experimental or control condition. The intervention included images, video demonstrations of human-like service robots performing socially meaningful gestures, and a narrative vignette depicting human–robot interaction. The control group completed the same assessment measures without the intervention. Outcomes included negative attitudes toward robots (Negative Attitudes Toward Robots Scale, NARS), affect (Positive and Negative Affect Schedule, PANAS), and perceived interpersonal connection (Inclusion of Other in the Self scale, IOS).
Results: The experimental group demonstrated a significant reduction in negative attitudes (p < 0.001, Cohen’s d = 0.37), as well as lower negative affect and a greater perceived interpersonal connection with the robots (both p < 0.001). Age moderated baseline attitudes, with younger participants reporting more positive initial views; gender was not a significant factor.
Discussion: These findings suggest that multimodal portrayals of human-like service robots can improve attitudes, affective responses, and interpersonal connection, offering practical insights for robot design, marketing, and public engagement strategies.
May 2025 · 21 Reads
Introduction: Neurological tremors, affecting a large population, are among the most prevalent movement disorders. Biomechanical loading and exoskeletons show promise in enhancing patient well-being, but traditional control algorithms limit their efficacy in dynamic movements and personalized interventions. Furthermore, a pressing need exists for more comprehensive and robust validation methods to ensure the effectiveness and generalizability of proposed solutions.
Methods: This paper proposes a physical simulation approach modeling multiple arm joints and tremor propagation. This study also introduces a novel adaptable reinforcement learning environment tailored for disorders with tremors. We present a deep reinforcement learning-based encoder-actor controller for Parkinson’s tremors in various shoulder and elbow joint axes displayed in dynamic movements.
Results: Our findings suggest that such a control strategy offers a viable solution for tremor suppression in real-world scenarios.
Discussion: By overcoming the limitations of traditional control algorithms, this work takes a new step toward adapting biomechanical loading to the everyday life of patients. It also opens avenues for more adaptive and personalized interventions in managing movement disorders.
May 2025 · 20 Reads
Introduction: Acoustophoresis has enabled novel interaction capabilities, such as levitation, volumetric displays, mid-air haptic feedback, and directional sound generation, opening new forms of multimodal interaction. However, its traditional implementation as a single static unit limits its dynamic range and application versatility.
Methods: This paper introduces “AcoustoBots”, a novel convergence of acoustophoresis with a movable and reconfigurable phased array of transducers for enhanced application versatility. We mount a phased array of transducers on a swarm of robots to harness the benefits of multiple mobile acoustophoretic units, offering a more flexible and interactive platform for swarm-based acoustophoretic multimodal interactions. Our AcoustoBots design includes a hinge actuation system that controls the orientation of the mounted phased array to achieve high flexibility. In addition, we designed a BeadDispenserBot that delivers particles to trapping locations, automating the acoustic levitation interaction.
Results: These attributes allow AcoustoBots to work independently toward a common goal and interchange between modalities, enabling novel augmentations (e.g., a swarm of haptics, audio, and levitation) and bilateral interactions with users in an expanded interaction area.
Discussion: We detail our design considerations, challenges, and methodological approach to extend acoustophoretic central control to distributed settings. This work demonstrates a scalable acoustic control framework with two mobile robots, laying the groundwork for future deployment in larger robotic swarms. Finally, we characterize the performance of our AcoustoBots and explore the potential interactive scenarios they can enable.
May 2025 · 43 Reads
Autonomous robots must operate in diverse environments and handle multiple tasks despite uncertainties. This creates challenges in designing software architectures and task decision-making algorithms, as different contexts may require distinct task logic and architectural configurations. To address this, robotic systems can be designed as self-adaptive systems capable of adapting their task execution and software architecture at runtime based on their context. This paper introduces ROSA, a novel knowledge-based framework for RObot Self-Adaptation, which enables task-and-architecture co-adaptation (TACA) in robotic systems. ROSA achieves this by providing a knowledge model that captures all application-specific knowledge required for adaptation and by reasoning over this knowledge at runtime to determine when and how adaptation should occur. In addition to a conceptual framework, this work provides an open-source ROS 2-based reference implementation of ROSA and evaluates its feasibility and performance in an underwater robotics application. Experimental results highlight ROSA’s advantages in reusability and development effort for designing self-adaptive robotic systems.
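The task-and-architecture co-adaptation idea can be sketched as rules that a runtime reasoner evaluates against the current context, each yielding both a task and an architectural configuration. The rule format, condition names, and configurations below are toy assumptions for illustration, not ROSA's actual knowledge model.

```python
# Hypothetical adaptation rules for an underwater robot, checked in priority
# order; each maps a context condition to a (task, architecture) pair.
RULES = [
    {"when": lambda ctx: ctx["thruster_failed"],
     "task": "recover_thruster", "arch": "degraded_control"},
    {"when": lambda ctx: ctx["water_visibility"] < 0.3,
     "task": "sonar_survey", "arch": "sonar_pipeline"},
]

def adapt(ctx, current):
    """Return the (task, architecture) pair after applying the first
    matching rule, or the current configuration if none matches."""
    for rule in RULES:
        if rule["when"](ctx):
            return rule["task"], rule["arch"]
    return current

# Murky water triggers co-adaptation of both task logic and architecture.
print(adapt({"thruster_failed": False, "water_visibility": 0.2},
            ("camera_survey", "vision_pipeline")))
```

Keeping such rules in a declarative knowledge model, rather than hard-coding them into node logic, is what gives a framework like this its reusability across applications.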
May 2025 · 26 Reads
Introduction: As social robots gain advanced communication capabilities, users increasingly expect coherent verbal and non-verbal behaviours. Recent work has shown that Large Language Models (LLMs) can support autonomous generation of such multimodal behaviours. However, current LLM-based approaches to non-verbal behaviour often involve multi-step reasoning with large, closed-source models, resulting in significant computational overhead and limiting their feasibility in low-resource or privacy-constrained environments.
Methods: To address these limitations, we propose a novel method for simultaneous generation of text and gestures with minimal computational overhead compared to plain text generation. Our system does not produce low-level joint trajectories, but instead predicts high-level communicative intentions, which are mapped to platform-specific expressions. Central to our approach is the introduction of lightweight, robot-specific “gesture heads” derived from the LLM’s architecture, requiring no pose-based datasets and enabling generalisability across platforms.
Results: We evaluate our method on two distinct robot platforms: Furhat (facial expressions) and Pepper (bodily gestures). Experimental results demonstrate that our method maintains behavioural quality while introducing negligible computational and memory overhead. Furthermore, the gesture heads operate in parallel with the language generation component, ensuring scalability and responsiveness even on small or locally deployed models.
Discussion: Our approach supports the use of Small Language Models for multimodal generation, offering an effective alternative to existing high-resource methods. By abstracting gesture generation and eliminating reliance on platform-specific motion data, we enable broader applicability in real-world, low-resource, and privacy-sensitive HRI settings.
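The final mapping step, from predicted high-level intentions to platform-specific expressions, can be illustrated with a simple lookup. The intent labels and gesture names below are invented for illustration; the actual gesture heads are learned components of the LLM, not a static table.

```python
# Hypothetical intent-to-gesture tables for the two platforms in the paper:
# the same abstract intention resolves to a facial expression on Furhat and
# a bodily gesture on Pepper.
PLATFORM_GESTURES = {
    "furhat": {"greet": "smile", "emphasize": "raise_brows", "think": "gaze_up"},
    "pepper": {"greet": "wave_right_arm", "emphasize": "open_palms", "think": "head_tilt"},
}

def render(intent, platform):
    """Resolve an abstract communicative intention to a concrete,
    platform-specific expression; fall back to an idle pose."""
    return PLATFORM_GESTURES.get(platform, {}).get(intent, "idle")

print(render("greet", "pepper"))   # → wave_right_arm
print(render("think", "furhat"))   # → gaze_up
```

Because only this last mapping is platform-specific, the upstream intent prediction can be shared across robots, which is the generalisability the abstract claims.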
May 2025 · 13 Reads
May 2025 · 7 Reads
May 2025 · 86 Reads
Tendon-driven continuum robots usually consist of several actuators and cables pulling a flexible backbone. The tendon path along the backbone allows complex movements to be performed with high dexterity. Yet the integration of multiple tendons adds complexity, and the lack of rigidity makes continuum robots susceptible to torsion whenever an external force or load is applied. This paper proposes a reduced-complexity, hybrid tendon-driven continuum robot (HTDCR) that avoids undesired torsion under external load. Bending of the HTDCR is achieved with a single tendon, with lateral joints along the backbone acting as a mechanical constraint on the bending plane. A rotary base then provides an additional degree of freedom by allowing full rotation of the arm. We developed a robot prototype with a control law based on a constant curvature model and validated it experimentally with various loads on the tip. Body deviation outside the bending plane is negligible (mm range), demonstrating the absence of torsional deformation. Tip deflection within the bending plane is smaller than that obtained with a 4-tendon-driven continuum robot. Moreover, tip deflection can be accurately estimated from the load and motor input, which paves the way for possible compensation. Altogether, the experiments demonstrate the efficiency of the HTDCR with a 450 g payload, making it suitable for agricultural tasks such as fruit and vegetable harvesting.
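The constant curvature model underlying the controller above has a standard planar form: a section of arc length s bent with curvature kappa sweeps an angle theta = kappa * s on a circle of radius 1/kappa. This sketch uses that textbook parameterization, not necessarily the paper's exact formulation.

```python
import math

def tip_position(arc_length, curvature):
    """In-plane tip (x, z) of a constant-curvature section of given arc
    length s (m) and curvature kappa (1/m); the straight configuration is
    recovered as the kappa -> 0 limit."""
    if abs(curvature) < 1e-9:
        return (0.0, arc_length)
    theta = curvature * arc_length      # total bending angle (rad)
    r = 1.0 / curvature                 # bending radius (m)
    return (r * (1.0 - math.cos(theta)), r * math.sin(theta))

print(tip_position(0.3, 0.0))          # straight 0.3 m arm: (0.0, 0.3)
print(tip_position(0.3, math.pi / 0.6))  # 90-degree bend: x and z equal
```

Because the lateral joints confine bending to this single plane, one tendon plus the rotary base suffices to reach 3D poses, and any out-of-plane deviation can be read directly as torsional error.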
May 2025 · 15 Reads
Computer-Aided Manufacturing (CAM) tools are a key component in many digital fabrication workflows, translating digital designs into machine instructions to manufacture physical objects. However, conventional CAM tools are tailored for standard manufacturing processes such as milling, turning, or laser cutting, and can therefore be a limiting factor, especially for craftspeople and makers who want to employ non-standard, craft-like operations. Formalizing the tacit knowledge behind such operations to incorporate it in new CAM routines is inherently difficult and often not feasible for the ad hoc incorporation of custom manufacturing operations in a digital fabrication workflow. In this paper, we address this gap by exploring the integration of Learning from Demonstration (LfD) into digital fabrication workflows, allowing makers to establish new manufacturing operations by providing manual demonstrations. To this end, we perform a case study on robot wood carving with hand tools, in which we integrate probabilistic movement primitives (ProMPs) into Rhino’s Grasshopper environment to achieve basic CAM-like functionality. Human demonstrations of different wood carving cuts are recorded via kinesthetic teaching and modeled by a mixture of ProMPs to capture correlations between the toolpath parameters. The ProMP model is then exposed in Grasshopper, where it functions as a translator from drawing input to toolpath output. With our pipeline, makers can create simplified 2D drawings of their carving patterns with common CAD tools and then seamlessly generate skill-informed 6 degree-of-freedom carving toolpaths from them, all in the same familiar CAD environment. We demonstrate our pipeline on multiple wood carving applications and discuss its limitations, including the need for iterative toolpath adjustments to address inaccuracies. Our findings illustrate the potential of LfD in augmenting CAM tools for specialized and highly customized manufacturing tasks. At the same time, the question of how to best represent carving skills for flexible and generalizable toolpath generation remains open and requires further investigation.