Frontiers in Robotics and AI

Published by Frontiers

Online ISSN: 2296-9144

Articles


Figure 1: The objective function Fob, expression (3), as a function of the output firing rate y for different values of the bias b. Fob always has two minima and diverges for extremal firing rates y → 0/1, a feature responsible for inducing limited output firing rates.
Figure 2: Illustration of the principle of minimal synaptic flux. The synaptic flux, compare expression (11), is the scalar product between the gradient ∇_w log(p) and the normal vector of the synaptic sphere, w̄/|w̄| (left). Here, we disregard the normalization. The sensitivity ∇_w log(p) of the neural firing-rate distribution p = p(y), with respect to the synaptic weights w = (w1, w2, w3, …), vanishes when the local synaptic flux is minimal (right), viz. when w · ∇_w log(p) → 0. At this point, the magnitude of the synaptic weight vector w̄ will not grow anymore.
Figure 3: The roots of the adaption factors. Left: the roots G(x*_{0,1}) = 0 and H(x*) = 0, respectively, compare Equation (5), as a function of the bias b. Note that the roots do not cross, as the factors G and H are conjugate to each other. Right: the respective values y(x*) of the neural activity. Note that y(x*_1) − y(x*_0) ≥ 1/2 for all values of the bias.
Figure 4: Alignment to the principal component. Simulation results for a neuron with Nw = 100 input neurons with Gaussian input distributions, with one direction (the principal component) having twice the standard deviation of the other Nw − 1 directions. (A) Illustration of the input distribution density p(y1, y2, …), with the angle α between the direction of the principal component PC and the synaptic weight vector w. (B) Time series of the membrane potential x (blue), the bias b (yellow), the roots x*_G of the limiting factor G(x) (red), and the root x*_H of the Hebbian factor H(x) (green). (C) The evolution of the angle α of the synaptic weight vector w with respect to the principal component and (inset) the output distribution p(y) (red) with respect to the target exponential (blue). (D) Time series of the output y (blue), the roots y*_G of the limiting factor G(y) (red), and the root y*_H of the Hebbian factor H(y) (green). (E) Distribution of synaptic weights p(w) in the stationary state for large times. (F) Time evolution of the first ten synaptic weights {wj}, separately for the principal component (upper panel) and for nine other orthogonal directions (lower panel).
Figure 5: Scaling of the adaption rules with the number of afferent neurons. Shown, for constant simulation parameters, are the signal-to-noise ratio (left), defined as the ratio |w1|/σ_w(non), where w1 is the synaptic strength parallel to the principal component and σ_w(non) the standard deviation of the orthogonal synaptic directions, compare Equation (A2), and the mean angle α (right) of the synaptic weight vector with respect to the principal component. Results are given for a range, 2:1, 4:1, 8:1, 16:1, and 32:1, of incoming signal-to-noise ratios, defined as the ratio of the standard deviations between the large and the small components of the distributions of input activities p(yj). The outgoing signal-to-noise ratio |w1|/σ_w(non) remains essentially flat as a function of Nw; the increase observed for the average angle α is predominantly a statistical effect, caused by the presence of an increasingly large number of orthogonal synaptic weights. The orthogonal weights are all individually small in magnitude, but their statistical influence accumulates with increasing Nw.


Generating Functionals for Computational Intelligence: The Fisher Information as an Objective Function for Self-Limiting Hebbian Learning Rules

October 2014 · 154 Reads
Generating functionals may guide the evolution of a dynamical system and constitute a possible route for handling the complexity of neural networks as relevant for computational intelligence. We propose and explore a new objective function, which allows one to obtain plasticity rules for the afferent synaptic weights. The adaption rules are Hebbian, self-limiting, and result from the minimization of the Fisher information with respect to the synaptic flux. We perform a series of simulations examining the behavior of the new learning rules in various circumstances. The vector of synaptic weights aligns with the principal direction of input activities, whenever one is present. A linear discrimination is performed when there are two or more principal directions; directions having bimodal firing-rate distributions, characterized by a negative excess kurtosis, are preferred. We find robust performance; full homeostatic adaption of the synaptic weights results as a by-product of the synaptic flux minimization. This self-limiting behavior allows for stable online learning for arbitrary durations. The neuron acquires new information when the statistics of input activities are changed at a certain point of the simulation, while showing a distinct resilience against unlearning previously acquired knowledge. Learning is fast when starting with randomly drawn synaptic weights and substantially slower when the synaptic weights are already fully adapted.
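The self-limiting character of such rules can be illustrated with a toy simulation. The sketch below uses Oja's rule, a classic self-limiting Hebbian rule that is not the Fisher-information rule derived in the paper, to reproduce the qualitative behavior of Figure 4: with one input direction at twice the standard deviation of the rest, the weight vector aligns with that principal component while its magnitude stays bounded.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input ensemble: N_w inputs, one "principal" direction with twice the
# standard deviation of the others (mirroring the setup of Figure 4).
N_w, T = 20, 20000
sigma = np.ones(N_w)
sigma[0] = 2.0                       # principal component along axis 0
X = rng.normal(0.0, sigma, size=(T, N_w))

# Oja's rule: dw = eps * y * (x - y * w).  The -y^2 w term makes the plain
# Hebbian term eps*y*x self-limiting (|w| -> 1), the same qualitative
# homeostasis the paper obtains from synaptic-flux minimization.
eps = 0.002
w = rng.normal(0.0, 0.1, N_w)
for x in X:
    y = w @ x
    w += eps * y * (x - y * w)

# Alignment with the principal component (axis 0); close to 1 after learning.
cos_alpha = abs(w[0]) / np.linalg.norm(w)
print(round(cos_alpha, 3))
```

The weight norm settles near 1 without any explicit clipping, which is what "self-limiting" buys over plain Hebbian growth.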

Figure 1: Measures of information dynamics with respect to a destination variable X. We address the information content in a measurement xn+1 of X at time n + 1 with respect to the active information storage aX(n + 1, k), and local transfer entropies tY1→X(n+1,k) and tY2→X(n+1,k) from variables Y1 and Y2.
Figure 2: Partial UML class diagram of the implementations of the conditional mutual information (equation (S.11) in Supplementary Material) and transfer entropy (equation (S.25) in Supplementary Material) measures using KSG estimators. As explained in the main text, this diagram shows the typical object-oriented structure of the implementations of various estimators for each measure. The relationships indicated on the class diagram are as follows: dotted lines with hollow triangular arrow heads indicate the realization or implementation of an interface by a class; solid lines with hollow triangular arrow heads indicate the generalization or inheritance of a child or subtype from a parent or superclass; lines with plain arrow heads indicate that one class uses another (with the solid line indicating direct usage and dotted line indicating indirect usage via the superclass).
Figure 3: Active information storage (AIS) computed by the KSG estimator (K = 4 nearest neighbors) as a function of embedded history length k for the heart and breath rate time-series data.
Figure 4: Local information dynamics in ECA rule 54 for the raw values in (A) (black for “1,” white for “0”). Thirty-five time steps are displayed for 35 cells, and time increases down the page for all CA plots. All units are in bits, as per scales on the right-hand sides. (B) Local active information storage; local apparent transfer entropy: (C) one cell to the right, and (D) one cell to the left per time step. NB: Reprinted with kind permission of Springer Science + Business Media from Lizier (2014).
JIDT: An Information-Theoretic Toolkit for Studying the Dynamics of Complex Systems

August 2014 · 748 Reads

Complex systems are increasingly being viewed as distributed information processing systems, particularly in the domains of computational neuroscience, bioinformatics and Artificial Life. This trend has resulted in a strong uptake in the use of (Shannon) information-theoretic measures to analyse the dynamics of complex systems in these fields. We introduce the Java Information Dynamics Toolkit (JIDT): a Google Code project which provides a standalone, (GNU GPL v3 licensed) open-source code implementation for empirical estimation of information-theoretic measures from time-series data. While the toolkit provides classic information-theoretic measures (e.g. entropy, mutual information, conditional mutual information), it ultimately focusses on implementing higher-level measures for information dynamics. That is, JIDT focusses on quantifying information storage, transfer and modification, and the dynamics of these operations in space and time. For this purpose, it includes implementations of the transfer entropy and active information storage, their multivariate extensions and local or pointwise variants. JIDT provides implementations for both discrete and continuous-valued data for each measure, including various types of estimator for continuous data (e.g. Gaussian, box-kernel and Kraskov-Stoegbauer-Grassberger) which can be swapped at run-time due to Java's object-oriented polymorphism. Furthermore, while written in Java, the toolkit can be used directly in MATLAB, GNU Octave and Python. We present the principles behind the code design, and provide several examples to guide users.
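JIDT itself is a Java toolkit, but the core quantity it estimates for discrete data can be sketched independently. The following Python snippet is a minimal plug-in estimator of transfer entropy (it does not use JIDT's API; all names are our own), applied to a pair of binary series where one simply copies the other with a one-step delay, so the forward transfer is 1 bit and the reverse transfer is essentially zero.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, dest, k=1):
    """Plug-in estimate of discrete transfer entropy T_{source->dest} (bits),
    with destination history length k (the same measure JIDT implements)."""
    joint = Counter()
    for t in range(k - 1, len(dest) - 1):
        past = tuple(dest[t - k + 1:t + 1])          # k-step destination history
        joint[(dest[t + 1], past, source[t])] += 1
    total = sum(joint.values())
    c_ps, c_dp, c_p = Counter(), Counter(), Counter()
    for (d1, past, s), c in joint.items():
        c_ps[(past, s)] += c
        c_dp[(d1, past)] += c
        c_p[past] += c
    te = 0.0
    for (d1, past, s), c in joint.items():
        p_cond_s = c / c_ps[(past, s)]               # p(d_{n+1} | past, s_n)
        p_cond = c_dp[(d1, past)] / c_p[past]        # p(d_{n+1} | past)
        te += (c / total) * np.log2(p_cond_s / p_cond)
    return te

rng = np.random.default_rng(1)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)                     # y copies x with a one-step delay

te_xy = transfer_entropy(x, y)        # close to 1 bit: x fully determines next y
te_xy_rev = transfer_entropy(y, x)    # close to 0 bits: no reverse influence
print(round(te_xy, 2), round(te_xy_rev, 2))
```

For continuous data this naive counting breaks down, which is why JIDT ships the Gaussian, box-kernel, and Kraskov-Stoegbauer-Grassberger estimators mentioned above.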

Hybrid Collision Avoidance for ASVs Compliant With COLREGs Rules 8 and 13–17

February 2020 · 447 Reads

This paper presents a three-layered hybrid collision avoidance (COLAV) system for autonomous surface vehicles, compliant with rules 8 and 13–17 of the International Regulations for Preventing Collisions at Sea (COLREGs). The COLAV system consists of a high-level planner producing an energy-optimized trajectory, a model-predictive-control-based mid-level COLAV algorithm considering moving obstacles and the COLREGs, and the branching-course model predictive control algorithm for short-term COLAV handling emergency situations in accordance with the COLREGs. Previously developed algorithms by the authors are used for the high-level planner and short-term COLAV, while we in this paper further develop the mid-level algorithm to make it comply with COLREGs rules 13–17. This includes developing a state machine for classifying obstacle vessels using a combination of the geometrical situation, the distance and time to the closest point of approach (CPA) and a new CPA-like measure. The performance of the hybrid COLAV system is tested through numerical simulations for three scenarios representing a range of different challenges, including multi-obstacle situations with multiple simultaneously active COLREGs rules, and also obstacles ignoring the COLREGs. The COLAV system avoids collision in all the scenarios, and follows the energy-optimized trajectory when the obstacles do not interfere with it.
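The distance and time to the closest point of approach (CPA) used by the mid-level classifier follow from standard relative-motion kinematics. The sketch below assumes straight-line, constant-velocity tracks (the usual CPA assumption); the paper's additional CPA-like measure is not reproduced here.

```python
import numpy as np

def cpa(p_own, v_own, p_obs, v_obs):
    """Time to and distance at the closest point of approach between two
    vessels on straight-line, constant-velocity tracks (2D positions in m,
    velocities in m/s)."""
    dp = np.asarray(p_obs, float) - np.asarray(p_own, float)   # relative position
    dv = np.asarray(v_obs, float) - np.asarray(v_own, float)   # relative velocity
    dv2 = dv @ dv
    # Minimize |dp + t*dv| over t >= 0; t=0 if the tracks are already diverging.
    t_cpa = 0.0 if dv2 == 0 else max(0.0, -(dp @ dv) / dv2)
    d_cpa = float(np.linalg.norm(dp + t_cpa * dv))
    return t_cpa, d_cpa

# Own ship heading east at 1 m/s; obstacle 10 m ahead, 5 m to port, heading west.
t, d = cpa((0, 0), (1, 0), (10, 5), (-1, 0))
print(t, d)   # 5.0 5.0
```

Thresholding d against a safety distance and t against a reaction horizon is the typical way such a state machine decides whether a COLREGs rule is active for a given obstacle.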

Embodied interactions in each example text.
Embodiment in 18th Century Depictions of Human-Machine Co-Creativity

June 2021 · 88 Reads

Artificial intelligence has a rich history in literature; fiction has shaped how we view artificial agents and their capacities in the real world. This paper looks at embodied examples of human-machine co-creation from the literature of the Long 18th Century (1650–1850), examining how older depictions of creative machines could inform and inspire modern day research. The works are analyzed from the perspective of design fiction with special focus on the embodiment of the systems and the creativity exhibited by them. We find that the chosen examples highlight the importance of recognizing the environment as a major factor in human-machine co-creative processes and that some of the works seem to precede current examples of artificial systems reaching into our everyday lives. The examples present embodied interaction in a positive, creativity-oriented way, but also highlight ethical risks of human-machine co-creativity. Modern day perceptions of artificial systems and creativity can be limited to some extent by the technologies available; fictitious examples from centuries past allow us to examine such limitations using a Design Fiction approach. We conclude by deriving four guidelines for future research from our fictional examples: 1) explore unlikely embodiments; 2) think of situations, not systems; 3) be aware of the disjunction between action and appearance; and 4) consider the system as a situated moral agent.

RoboEthics in COVID-19: A Case Study in Dentistry

May 2021 · 63 Reads

Maryam Kalvandi · Sofya Langman · [...]
The COVID-19 pandemic has caused dramatic effects on the healthcare system, businesses, and education. In many countries, businesses were shut down, universities and schools had to cancel in-person classes, and many workers had to work remotely and socially distance in order to prevent the spread of the virus. These measures opened the door for technologies such as robotics and artificial intelligence to play an important role in minimizing the negative effects of such closures. There have been many efforts in the design and development of robotic systems for applications such as disinfection and eldercare. Healthcare education has seen a lot of potential in simulation robots, which offer valuable opportunities for remote learning during the pandemic. However, there are ethical considerations that need to be deliberated in the design and development of such systems. In this paper, we discuss the principles of roboethics and how these can be applied in the new era of COVID-19. We focus on identifying the most relevant ethical principles and apply them to a case study in dentistry education. DenTeach was developed as a portable device that uses sensors and computer simulation to make dental education more efficient. DenTeach makes remote instruction possible by allowing students to learn and practice dental procedures from home. We evaluate DenTeach on the principles of data, common good, and safety, and highlight the importance of roboethics in Canada. The principles identified in this paper can inform researchers and educational institutions considering implementing robots in their curriculum.

FIGURE 1 | Functionality of SSD.
FIGURE 2 | Functionality of SST.
Perspective: Wearable Internet of Medical Things for Remote Tracking of Symptoms, Prediction of Health Anomalies, Implementation of Preventative Measures, and Control of Virus Spread During the Era of COVID-19

April 2021 · 40 Reads

The COVID-19 pandemic has profoundly impacted communities globally by reprioritizing the means through which various societal sectors operate. Among these sectors, healthcare providers and medical workers have been impacted prominently due to the massive increase in demand for medical services under unprecedented circumstances. Hence, any tool that can help compliance with social guidelines for COVID-19 spread prevention will have a positive impact on managing and controlling the virus outbreak and reducing the excessive burden on the healthcare system. This perspective article disseminates the perspectives of the authors regarding the use of novel biosensors and intelligent algorithms embodied in wearable IoMT frameworks for tackling this issue. We discuss how, with the use of smart IoMT wearables, certain biomarkers can be tracked for detection of COVID-19 in exposed individuals. We enumerate several machine learning algorithms which can be used to process a wide range of collected biomarkers for detecting (a) multiple symptoms of SARS-CoV-2 infection and (b) the dynamical likelihood of contracting the virus through interpersonal interaction. Finally, we describe how a systematic use of smart wearable IoMT devices in various social sectors can intelligently help control the spread of COVID-19 in communities as they enter the reopening phase. We explain how this framework can benefit individuals and their medical correspondents by introducing Systems for Symptom Decoding (SSD), and how the use of this technology can be generalized on a societal level for the control of spread by introducing Systems for Spread Tracing (SST).

FIGURE 1 | Schematic of tele-rehabilitation, where patients can continue their rehabilitation with the help of an assistive device while the therapist monitors their progress remotely.
FIGURE 2 | Different virtual reality (VR) games that can emulate Activities of Daily Living (ADL), such as using a spoon (eating), pen (writing), knife (cutting), and glass (pouring) in clockwise order from top-left.
FIGURE 3 | Schematic demonstrating a haptic device and virtual reality (VR)-based home rehabilitation setup.
FIGURE 5 | Schematic showing rehabilitation therapy in three different setups: (1) community-based, (2) home-based, and (3) ambulatory. Present robotic systems are geared toward hospital- or home-based approaches. However, due to the COVID-19 pandemic, the current systems need to be modified and adapted, taking into account social-distancing norms, the emotional stress of lockdown, and the safety of healthcare workers and patients.
Upper Limb Home-Based Robotic Rehabilitation During COVID-19 Outbreak
The coronavirus disease (COVID-19) outbreak requires a rapid reshaping of rehabilitation services to include patients recovering from severe COVID-19 with post-intensive care syndrome, which results in physical deconditioning and cognitive impairments; patients with comorbid conditions; and other patients requiring physical therapy during the outbreak with no or limited access to hospitals and rehabilitation centers. Considering the access barriers to quality rehabilitation settings and services imposed by social distancing and stay-at-home orders, these patients can benefit from access to affordable, good-quality care through home-based rehabilitation. The success of such treatment depends highly on the intensity of the therapy and the effort invested by the patient. Monitoring patients' compliance and designing home-based rehabilitation that can mentally engage them are the critical elements of home-based therapy's success. Hence, we study state-of-the-art telerehabilitation frameworks and robotic devices, and comment on a hybrid model that can use an existing telerehabilitation framework and home-based robotic devices for treatment while simultaneously assessing a patient's progress remotely. Second, we comment on patients' social support and engagement, which is critical for the success of a telerehabilitation service. As therapists are not physically present to guide the patients, we also discuss the adaptability requirements of home-based telerehabilitation. Finally, we suggest that the reformed rehabilitation services should consider both home-based solutions for enhancing the activities of daily living and an on-demand ambulatory rehabilitation unit for extensive training, where both the cognitive and motor performance of the patients can be monitored remotely.

FIGURE 1 | Categories of robotics application considered in this study: Telemedicine (TEL), Disinfection (DIS), and Assistance (ASL). The exhibited characteristics refer to the main advantages of using robots of the corresponding category (i.e., the yellow section for TEL, orange for DIS, and gray for ASL) in clinic environments.
Designed questions for the Knowledge, Attitude, Perception (KAP) survey used in this study.
Brief description of the different robot categories that were used in the study.
Demographic data of the healthcare personnel who participated in the study.
Expectations and Perceptions of Healthcare Professionals for Robot Deployment in Hospital Environments During the COVID-19 Pandemic

June 2021 · 1,491 Reads

Several challenges to guaranteeing medical care have been exposed during the current COVID-19 pandemic. Although the literature has shown some robotics applications that overcome potential hazards and risks in hospital environments, the implementation of those developments is limited, and few studies measure the perception and acceptance of clinicians. This work presents the design and implementation of several perception questionnaires to assess healthcare providers' level of acceptance of, and education about, robotics for COVID-19 control in clinical scenarios. Specifically, 41 healthcare professionals completed the surveys, exhibiting a low level of knowledge about robotics applications in this scenario. Likewise, the surveys revealed that the fear of being replaced by robots remains in the medical community. In the Colombian context, 82.9% of participants indicated a positive perception concerning the development and implementation of robotics in clinical environments. Finally, in general terms, the participants exhibited a positive attitude toward using robots and recommended them for use in the current panorama.


Automated AMBU Ventilator With Negative Pressure Headbox and Transporting Capsule for COVID-19 Patient Transfer

January 2021 · 226 Reads

Purpose: It is now clear that COVID-19 viruses can be transferred via airborne transmission. The objective of this study was to design and fabricate an AMBU ventilator with a negative pressure headbox linked to a negative pressure transporting capsule, providing a low-cost, flexible unit with airborne-transmission prevention that could be manufactured without a high level of technology. Method: The machine consists of an automated AMBU bag ventilator, a negative pressure headbox, and a transporting capsule. The function and working duration of each component were tested. Results: The two main settings of the ventilator include an active mode, which can be set over a time range of 0 s to 9 h 59 min 59 s, and a resting mode, which can work continuously for 24 h. The blower motor and battery system, which were used to power the ventilator and create negative air pressure within the headbox and the transporting capsule, could run for at least 2 h without being recharged. The transporting capsule was able to create an air change rate of 21.76 ACH with −10 Pa internal pressure. Conclusion: This automated AMBU ventilator allowed the flow rate, rhythm, and volume of oxygen to be set. The hazardous expired air was treated by a HEPA filter. The patient's transporting capsule is of a compact size and incorporates the air treatment systems. Further development of this machine should focus on seamless linking with imaging technology, verification against standards, testing with human subjects, and then commercialization.
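The reported air change rate follows the usual definition: ACH equals the volumetric airflow through the enclosure per hour divided by the enclosure volume. The abstract does not report the capsule volume or blower flow separately, so the numbers below are purely illustrative.

```python
def air_changes_per_hour(flow_m3_per_h, volume_m3):
    """ACH = volumetric airflow per hour / enclosure volume."""
    return flow_m3_per_h / volume_m3

# Illustrative values only (not the device's actual specifications):
# a 0.75 m^3 capsule vented at 15 m^3/h gives 20 air changes per hour.
print(air_changes_per_hour(15.0, 0.75))   # 20.0
```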

FIGURE 1 | Comparison of conventional endoscopy with robotic flexible endoscopy which can increase distance and decrease number of people in the room.
Guidelines for Robotic Flexible Endoscopy at the Time of COVID-19

February 2021 · 134 Reads

Flexible endoscopy involves the insertion of a long narrow flexible tube into the body for diagnostic and therapeutic procedures. In the gastrointestinal (GI) tract, flexible endoscopy plays a major role in cancer screening, surveillance, and treatment programs. As a result of gas insufflation during the procedure, both upper and lower GI endoscopy procedures have been classified as aerosol generating by the guidelines issued by the respective societies during the COVID-19 pandemic—although no quantifiable data on aerosol generation currently exists. Due to the risk of COVID-19 transmission to healthcare workers, most societies halted non-emergency and diagnostic procedures during the lockdown. The long-term implications of stoppage in cancer diagnoses and treatment is predicted to lead to a large increase in preventable deaths. Robotics may play a major role in this field by allowing healthcare operators to control the flexible endoscope from a safe distance and pave a path for protecting healthcare workers through minimizing the risk of virus transmission without reducing diagnostic and therapeutic capacities. This review focuses on the needs and challenges associated with the design of robotic flexible endoscopes for use during a pandemic. The authors propose that a few minor changes to existing platforms or considerations for platforms in development could lead to significant benefits for use during infection control scenarios.

A Flexible Transoral Robot Towards COVID-19 Swab Sampling

April 2021 · 289 Reads

There are high risks of infection for surgeons during face-to-face COVID-19 swab sampling due to the novel coronavirus's infectivity. To address this issue, we propose a flexible transoral robot with a teleoperated configuration for swab sampling. The robot comprises a flexible manipulator, an endoscope with a monitor, and a master device. A 3-prismatic-universal (3-PU) flexible parallel mechanism with 3 degrees of freedom (DOF) is used to realize the manipulator's movements. The flexibility of the manipulator improves the safety of those being tested. Moreover, the master device is structurally similar to the manipulator, making it easy for operators to use. Under visual guidance from the endoscope, the surgeon operates the master device to control the motion of the swab attached to the manipulator. In this paper, the robotic system, the workspace, and the operation procedure are described in detail. The tongue depressor, which is used to prevent the tongue's interference during sampling, is also tested. The accuracy of the manipulator under visual guidance is validated intuitively. Finally, an experiment on a human phantom is conducted to preliminarily demonstrate the feasibility of the robot.

Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic

August 2021 · 234 Reads

This paper examines how haptic technology, virtual reality, and artificial intelligence help to reduce physical contact during medical training in the COVID-19 pandemic. Notably, any mistake made by trainees during the education process might lead to undesired complications for the patient. Therefore, teaching medical skills to trainees has always been a challenging issue for expert surgeons, and it is even more challenging during a pandemic. The current method of surgical training requires novice surgeons to attend courses, observe procedures, and conduct their initial operations under the direct supervision of an expert surgeon. Owing to the physical contact this method of medical training requires, the people involved, including the novice and expert surgeons, face a potential risk of viral infection. This survey paper reviews recent technological breakthroughs, along with new areas in which assistive technologies might provide a viable solution for reducing physical contact in medical institutes during the COVID-19 pandemic and similar crises.

Autonomous Robotic Point-of-Care Ultrasound Imaging for Monitoring of COVID-19–Induced Pulmonary Diseases

May 2021 · 230 Reads

The COVID-19 pandemic has emerged as a serious global health crisis, with the predominant morbidity and mortality linked to pulmonary involvement. Point-of-Care ultrasound (POCUS) scanning, becoming one of the primary determinative methods for its diagnosis and staging, requires, however, close contact of healthcare workers with patients, therefore increasing the risk of infection. This work thus proposes an autonomous robotic solution that enables POCUS scanning of COVID-19 patients’ lungs for diagnosis and staging. An algorithm was developed for approximating the optimal position of an ultrasound probe on a patient from prior CT scans to reach predefined lung infiltrates. In the absence of prior CT scans, a deep learning method was developed for predicting 3D landmark positions of a human ribcage given a torso surface model. The landmarks, combined with the surface model, are subsequently used for estimating optimal ultrasound probe position on the patient for imaging infiltrates. These algorithms, combined with a force–displacement profile collection methodology, enabled the system to successfully image all points of interest in a simulated experimental setup with an average accuracy of 20.6 ± 14.7 mm using prior CT scans, and 19.8 ± 16.9 mm using only ribcage landmark estimation. A study on a full torso ultrasound phantom showed that autonomously acquired ultrasound images were 100% interpretable when using force feedback with prior CT and 88% with landmark estimation, compared to 75 and 58% without force feedback, respectively. This demonstrates the preliminary feasibility of the system, and its potential for offering a solution to help mitigate the spread of COVID-19 in vulnerable environments.

Robotic Home-Based Rehabilitation Systems Design: From a Literature Review to a Conceptual Framework for Community-Based Remote Therapy During COVID-19 Pandemic

June 2021 · 1,054 Reads

During the COVID-19 pandemic, the higher susceptibility of post-stroke patients to infection calls for extra safety precautions. Despite the imposed restrictions, early neurorehabilitation cannot be postponed due to its paramount importance for improving motor and functional recovery chances. Utilizing accessible state-of-the-art technologies, home-based rehabilitation devices are proposed as a sustainable solution in the current crisis. This paper presents a comprehensive review of home-based rehabilitation technologies developed over the last 10 years (2011–2020), categorizing them into upper- and lower-limb devices and considering both commercialized and state-of-the-art systems. Mechatronic, control, and software aspects of the systems are discussed to provide a classified roadmap for home-based systems development. Subsequently, a conceptual framework for the development of smart and intelligent community-based home rehabilitation systems based on novel mechatronic technologies is proposed. In this framework, each rehabilitation device acts as an agent in the network, using internet of things (IoT) technologies, which facilitates learning from the recorded data of the other agents, as well as tele-supervision of the treatment by an expert. The presented design paradigm, based on the above-mentioned leading technologies, could lead to the development of promising home rehabilitation systems, which encourage stroke survivors to engage in under-supervised or unsupervised therapeutic activities.



FIGURE 1 | Flow diagram showing the study selection process.
Studies included.
Serious Games: A new Approach to Foster Information and Practices About Covid-19?
The current Covid-19 pandemic poses an unprecedented global challenge in the field of education and training. As we have seen, the lack of proper information about the virus and its transmission has forced the general population and healthcare workers to rapidly acquire knowledge and learn new practices. Clearly, a well-informed population is more likely to adopt the correct precautionary measures, thus reducing the transmission of the infection; likewise, properly educated healthcare workers are better equipped to manage the emergency. However, the need to maintain physical distancing has made it impossible to provide in-presence information and training. In this regard, new technologies have proved to be an invaluable resource by facilitating distance learning. Indeed, e-learning offers significant advantages because it does not require the physical presence of learners and teachers. This innovative method applied to serious games has been considered potentially effective in enabling rapid and large-scale dissemination of information and learning through content interactivity. We will review studies that have observed the development and use of serious games to foster information and practices about Covid-19 aimed at promoting behavioral changes in the population and the healthcare personnel involved on the front line.

Robotic Ultrasound Scanning With Real-Time Image-Based Force Adjustment: Quick Response for Enabling Physical Distancing During the COVID-19 Pandemic

March 2021 · 79 Reads

During an ultrasound (US) scan, the sonographer is in close contact with the patient, which puts them at risk of COVID-19 transmission. In this paper, we propose a robot-assisted system that automatically scans tissue, increasing the sonographer/patient distance and decreasing the contact duration between them. The method was developed as a quick response to the COVID-19 pandemic. It considers the preferences of sonographers in terms of how US scanning is done and can be trained quickly for different applications. Our proposed system automatically scans the tissue using a dexterous robot arm that holds the US probe. The system assesses the quality of the acquired US images in real time and uses this image feedback to automatically adjust the US probe contact force based on the quality of the image frame. The quality assessment algorithm is based on three US image features: correlation, compression, and noise characteristics. These features are input to an SVM classifier, and the robot arm adjusts the US scanning force based on the SVM output. The proposed system enables the sonographer to maintain a distance from the patient, because the sonographer does not have to hold the probe and press it against the patient's body for any prolonged time. The SVM was trained using bovine and porcine biological tissue; the system was then tested experimentally on plastisol phantom tissue. The experimental results show that our proposed quality assessment algorithm successfully maintains US image quality and is fast enough for use in a robotic control loop.
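The pipeline described above, three image features fed to an SVM whose output drives the force adjustment, can be sketched with synthetic data. Everything below (the feature distributions, the class means, the example frame) is invented for illustration and is not the authors' trained model or data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Synthetic stand-ins for the three US image-quality features the paper
# names (correlation, compression, noise); real values would come from
# the acquired image frames.
n = 200
good = np.column_stack([rng.normal(0.9, 0.05, n),    # high frame correlation
                        rng.normal(0.3, 0.05, n),    # low compression artifact
                        rng.normal(0.1, 0.05, n)])   # low noise
bad = np.column_stack([rng.normal(0.5, 0.05, n),
                       rng.normal(0.7, 0.05, n),
                       rng.normal(0.4, 0.05, n)])
X = np.vstack([good, bad])
y = np.array([1] * n + [0] * n)       # 1 = acceptable image quality

clf = SVC(kernel="rbf").fit(X, y)

# In the robot's control loop, the prediction would gate the force command,
# e.g. increase probe contact force only while frames are classified as bad.
frame_features = [[0.88, 0.32, 0.12]]
print(clf.predict(frame_features))    # class 1: keep current contact force
```

The run-time cost of an SVM prediction on three features is trivial, which is consistent with the paper's claim that the assessment is fast enough for a robotic control loop.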

A COVID-19 Emergency Response for Remote Control of a Dialysis Machine with Mobile HRI

May 2021

Healthcare workers face a high risk of contagion during a pandemic due to their close proximity to patients. The situation is further exacerbated by a shortage of personal protective equipment, which can increase the risk of exposure for healthcare workers and even for non-pandemic-related patients, such as those on dialysis. In this study, we propose an emergency, non-invasive remote monitoring and control response system to retrofit dialysis machines with robotic manipulators for safely supporting the treatment of patients with acute kidney disease. Specifically, as a proof-of-concept, we mock up the touchscreen instrument control panel of a dialysis machine and live-stream it to a remote user’s tablet computer. The user then performs touch-based interactions on the tablet to send commands to the robot, which manipulates the instrument controls on the touchscreen of the dialysis machine. To evaluate the performance of the proposed system, we conduct an accuracy test. Moreover, we perform qualitative user studies using two modes of interaction with the designed system to measure user task load and system usability and to obtain user feedback. The two modes of interaction were touch-based interaction using a tablet device and click-based interaction using a computer. The results indicate no statistically significant difference in the relatively low task load experienced by users in either mode. The system usability survey results likewise reveal no statistically significant difference in user experience between the two modes, except that users experienced more consistent performance with the click-based interaction than with the touch-based interaction. Based on the user feedback, we suggest an improvement to the proposed system and illustrate an implementation that corrects the distorted perception of the instrument control panel live-stream for a better, more consistent user experience.
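The core of such a system is mapping a touch on the live-streamed image to a physical target point on the machine's touchscreen for the robot. A minimal sketch, assuming the panel occupies a known axis-aligned rectangle in the video frame (the paper's actual calibration and distortion correction are not detailed in the abstract, and all names and values here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class PanelCalibration:
    """Pixel rectangle of the instrument panel inside the live-stream,
    plus the panel's physical size in the robot's workspace (assumed)."""
    x0: int          # panel region origin in stream pixels
    y0: int
    w: int           # panel region size in stream pixels
    h: int
    width_mm: float  # physical panel size
    height_mm: float

def touch_to_panel(cal: PanelCalibration, u: int, v: int):
    """Map a tablet touch (u, v) in stream pixels to a target point,
    in millimetres, on the machine's touchscreen; None if outside."""
    if not (cal.x0 <= u < cal.x0 + cal.w and cal.y0 <= v < cal.y0 + cal.h):
        return None  # ignore touches outside the panel region
    # Normalize within the panel, then scale to physical units.
    x_mm = (u - cal.x0) / cal.w * cal.width_mm
    y_mm = (v - cal.y0) / cal.h * cal.height_mm
    return (x_mm, y_mm)
```

A camera viewing the panel at an angle would need a full homography rather than this axis-aligned scaling; the rectangle model is the simplest usable assumption.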

A global bibliometric and visualized analysis of gait analysis and artificial intelligence research from 1992 to 2022

November 2023

Gait is an important basic function of human beings and an integral part of life. Many mental and physical abnormalities can cause noticeable differences in a person’s gait, and abnormal gait can lead to serious consequences such as falls, limited mobility, and reduced life satisfaction. Gait analysis, which includes joint kinematics, kinetics, and dynamic electromyography (EMG) data, is now recognized as a clinically useful tool that can provide both quantitative and qualitative information on performance to aid in treatment planning and to evaluate its outcome. With the assistance of new artificial intelligence (AI) technology, the traditional medical environment has undergone great changes; AI has the potential to reshape medicine, making gait analysis more accurate, efficient, and accessible. In this study, we analyzed basic information about gait analysis and AI articles that met the inclusion criteria in the Web of Science (WoS) Core Collection database from 1992 to 2022, using the VOSviewer software for network visualization and keyword analysis. Through bibliometric and visual analysis, this article systematically introduces the state of research on gait analysis and AI. We describe the application of AI in clinical gait analysis, which affects the identification and management of gait abnormalities found in various diseases. Machine learning (ML) and artificial neural networks (ANNs) are the most frequently used AI methods in gait analysis. By comparing the predictive capability of different AI algorithms in published studies, we evaluate their potential for gait analysis in different situations. Furthermore, the current challenges and future directions of gait analysis and AI research are discussed, which will also provide valuable reference information for investigators in this field.

High-speed running quadruped robot with a multi-joint spine adopting a 1DoF closed-loop linkage

March 2023

Improving the mobility of robots is an important goal for many real-world applications, and implementing an animal-like spine structure in a quadruped robot is a promising approach to achieving high-speed running. This paper proposes a feline-like multi-joint spine adopting a one-degree-of-freedom closed-loop linkage for a quadruped robot to realize high-speed running. We theoretically prove that the proposed spine structure achieves 1.5 times the horizontal range of foot motion of a spine structure with a single joint. Experimental results demonstrate that a robot with the proposed spine structure achieves 1.4 times the horizontal range of motion and 1.9 times the speed of a robot with a single-joint spine structure.

FIGURE 2 | The data collection pipeline of DeepClaw 2.0 is shown in (A), which includes raw sensor data (green), pose estimation data (light yellow), state-action data (orange), and post-processed data (light gray). The raw data consist of three types of information: tag data (ID number and size) as prior information, and the RGB and depth images captured by a single fixed camera (an Intel RealSense D435i in this paper). Low-level features are collected as pose estimation data, including the detected corner points and the 6D pose of each marker. State-action information, constructed from these low-level features, reveals the motion of both the objects and the tongs during the manipulation task. Two typical ways to use the structured data bags are shown in (B): the green branch indicates how the real robot reproduces the trajectory recovered from the data bags, and the orange branch demonstrates the steps to reproduce the manipulation tasks.
FIGURE 3 | DeepClaw 2.0 Station: (A) 3D view, (B) exploded view of the robot arm, (C) details of the components for data collection.
FIGURE 4 | Key components in human manipulation: (A) shows the rendered graphical user interface in DeepClaw 2.0, consisting of the real-time RGB data flow (highlighted by the red rectangle), low-level features (yellow rectangle), and high-level state-action information (green rectangle). (B) shows the similarities between the assembled tongs and an OnRobot RG6 gripper, with key parameters indicated, and (C) shows the six tags from the AprilTag 36h11 family used in this paper.
FIGURE 6 | Experimental results and analysis of the collected data. We plot the trajectory of a human operator's third attempt at tasks 2, 3, and 8 in (A–C). The operation (pushing, picking, or placing) and the (initial or target) state can easily be distinguished by observing the motion trajectory. Sub-figures (D–F) in the bottom row show the corresponding acceleration sequences with and without smoothing. Motion-related data, such as position, velocity, and acceleration, provide a quantifiable indicator of the operations.
Marker comparison for 6D pose estimation.
DeepClaw 2.0: A Data Collection Platform for Learning Human Manipulation

March 2022

Besides direct interaction, human hands are also skilled at using tools to manipulate objects in typical life and work tasks. This paper proposes DeepClaw 2.0, a low-cost, open-source data collection platform for learning human manipulation. We use an RGB-D camera to visually track the motion and deformation of a pair of soft finger networks on a modified kitchen tong operated by human teachers. These fingers can be easily integrated with robotic grippers to bridge the structural mismatch between humans and robots during learning. The deformation of the soft finger networks, which reveals tactile information in contact-rich manipulation, is captured passively. We collected a comprehensive sample dataset involving five human demonstrators in ten manipulation tasks with five trials per task. We also developed an intuitive interface that converts the raw sensor data into state-action data for imitation learning problems. For learning-by-demonstration problems, we further demonstrated our dataset’s potential by using real robotic hardware to collect joint actuation data, or a simulated environment when access to the hardware is limited.
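The conversion from raw tracked poses to state-action data for imitation learning could look roughly like the sketch below. The 30 Hz rate, grasp threshold, and state/action layout are assumptions for illustration, not DeepClaw 2.0's actual data format:

```python
import numpy as np

def poses_to_state_action(positions, finger_gap, dt=1 / 30, grasp_thresh=0.04):
    """Convert a tracked tong trajectory into (state, action) pairs.

    positions:  (T, 3) tong tip positions from tag tracking, in metres
    finger_gap: (T,)   distance between the two soft fingers, in metres
    """
    positions = np.asarray(positions, dtype=float)
    vel = np.gradient(positions, dt, axis=0)            # finite-difference velocity
    gripper_closed = np.asarray(finger_gap) < grasp_thresh
    # State at step t: position, velocity, and a binary grasp flag.
    states = np.hstack([positions, vel, gripper_closed[:, None]])
    # Action at step t: displacement command plus the next grasp bit.
    actions = np.hstack([np.diff(positions, axis=0),
                         gripper_closed[1:, None].astype(float)])
    return states[:-1], actions
```

Richer variants would also include the fingers' deformation field as a tactile channel in the state, which is the point of tracking the soft finger networks in the first place.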

FIGURE 2 | 3D model showing only the manual segmenters' disagreement on Porites. Points highlighted yellow were labeled Porites by three of the four segmenters, red by two, and purple by only one.
FIGURE 3 | Example of a 3D neural network prediction (correct predictions are highlighted blue, false positives red, and false negatives yellow), corresponding to the same area as Figures 1, 2.
Highest automated segmentation accuracy of each neural network dimensionality as determined by validation IoU score.
Automated segmentation accuracy of the 3D network on the 2013–2019 test set, broken down by year.
Automated 2D, 2.5D, and 3D Segmentation of Coral Reef Pointclouds and Orthoprojections

May 2022

Enabled by advancing technology, coral reef researchers increasingly prefer use of image-based surveys over approaches depending solely upon in situ observations, interpretations, and recordings of divers. The images collected, and derivative products such as orthographic projections and 3D models, allow researchers to study a comprehensive digital twin of their field sites. Spatio-temporally located twins can be compared and annotated, enabling researchers to virtually return to sites long after they have left them. While these new data expand the variety and specificity of biological investigation that can be pursued, they have introduced the much-discussed Big Data Problem: research labs lack the human and computational resources required to process and analyze imagery at the rate it can be collected. The rapid development of unmanned underwater vehicles suggests researchers will soon have access to an even greater volume of imagery and other sensor measurements than can be collected by diver-piloted platforms, further exacerbating data handling limitations. Thoroughly segmenting (tracing the extent of and taxonomically identifying) organisms enables researchers to extract the information image products contain, but is very time-consuming. Analytic techniques driven by neural networks offer the possibility that the segmentation process can be greatly accelerated through automation. In this study, we examine the efficacy of automated segmentation on three different image-derived data products: 3D models, and 2D and 2.5D orthographic projections thereof; we also contrast their relative accessibility and utility to different avenues of biological inquiry. 
The variety of network architectures and parameters tested performed similarly (∼80% IoU for the genus Porites), suggesting that the primary limitations to an automated workflow are 1) the current capabilities of neural network technology, and 2) consistency and quality control in image-product collection and in human training/testing dataset generation.
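IoU (intersection over union), the segmentation accuracy measure reported here, can be computed per class from boolean masks — per point for 3D models, per pixel for the 2D/2.5D projections. A minimal sketch:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union for one class over a boolean mask
    (per-point for pointclouds, per-pixel for orthoprojections)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return np.logical_and(pred, truth).sum() / union
```

An IoU of ∼0.8 thus means that, of everything either the network or the human segmenters labeled Porites, about 80% was labeled Porites by both.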

A Systematic Review of 10 Years of Augmented Reality Usability Studies: 2005 to 2014
Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.
