Generating functionals may guide the evolution of a dynamical system and
constitute a possible route for handling the complexity of neural networks as
relevant for computational intelligence. We propose and explore a new objective
function, which allows one to obtain plasticity rules for the afferent synaptic
weights. The adaptation rules are Hebbian, self-limiting, and result from the
minimization of the Fisher information with respect to the synaptic flux. We
perform a series of simulations examining the behavior of the new learning
rules in various circumstances. The vector of synaptic weights aligns with the
principal direction of input activities, whenever one is present. A linear
discrimination is performed when there are two or more principal directions;
directions having bimodal firing-rate distributions, being characterized by a
negative excess kurtosis, are preferred. We find robust performance; full
homeostatic adaptation of the synaptic weights results as a by-product of the
synaptic flux minimization. This self-limiting behavior allows for stable
online learning for arbitrary durations. The neuron acquires new information
when the statistics of the input activities are changed at a certain point of
the simulation, while showing, however, a distinct resilience against
unlearning previously acquired knowledge. Learning is fast when starting with
randomly drawn synaptic weights and substantially slower when the synaptic
weights are already fully adapted.
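A generic self-limiting Hebbian update of this kind can be sketched as follows. This is a minimal illustration of the class of rules described above, not the Fisher-information-derived rule itself; the saturation factor, learning rate, and input statistics are our assumptions.

```python
import numpy as np

def hebbian_self_limiting_step(w, x, y, eta=0.01):
    """One update of a generic self-limiting Hebbian rule (illustrative only).

    w : synaptic weight vector, x : presynaptic activity vector,
    y : postsynaptic firing rate. The (1 - w**2) factor suppresses growth
    as each weight approaches +/-1, giving the self-limiting behavior."""
    return w + eta * y * x * (1.0 - w**2)

rng = np.random.default_rng(0)
w = rng.uniform(-0.1, 0.1, size=5)      # randomly drawn initial weights
for _ in range(1000):
    x = rng.normal(size=5)
    x[0] += 1.0                          # principal direction along input 0
    y = float(w @ x)                     # linear rate neuron
    w = hebbian_self_limiting_step(w, x, y)
# The weights remain bounded for arbitrary training durations.
```

Because the update vanishes as weights saturate, training can run indefinitely without the runaway growth of a plain Hebbian rule.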
Complex systems are increasingly being viewed as distributed information
processing systems, particularly in the domains of computational neuroscience,
bioinformatics and Artificial Life. This trend has resulted in a strong uptake
in the use of (Shannon) information-theoretic measures to analyse the dynamics
of complex systems in these fields. We introduce the Java Information Dynamics
Toolkit (JIDT): a Google Code project which provides a standalone, GNU GPL
v3-licensed, open-source code implementation for empirical estimation of
information-theoretic measures from time-series data. While the toolkit
provides classic information-theoretic measures (e.g. entropy, mutual
information, conditional mutual information), it ultimately focusses on
implementing higher-level measures for information dynamics. That is, JIDT
focusses on quantifying information storage, transfer and modification, and the
dynamics of these operations in space and time. For this purpose, it includes
implementations of the transfer entropy and active information storage, their
multivariate extensions and local or pointwise variants. JIDT provides
implementations for both discrete and continuous-valued data for each measure,
including various types of estimator for continuous data (e.g. Gaussian,
box-kernel and Kraskov-Stoegbauer-Grassberger) which can be swapped at run-time
due to Java's object-oriented polymorphism. Furthermore, while written in Java,
the toolkit can be used directly in MATLAB, GNU Octave and Python. We present
the principles behind the code design, and provide several examples to guide users.
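As an illustration of the kind of measure such a toolkit estimates, a minimal plug-in estimator of transfer entropy for discrete data can be written directly from symbol counts. This is a bare sketch with history length 1; JIDT's own estimators are more general, handling longer histories, continuous data, and bias correction.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """Plug-in estimate of transfer entropy TE(X -> Y) in bits,
    with history length 1, for discrete-valued sequences x and y."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_prev, x_prev)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))          # (y_prev, x_prev)
    pairs_yy = Counter(zip(y[1:], y[:-1]))           # (y_next, y_prev)
    singles_y = Counter(y[:-1])                      # y_prev
    n = len(y) - 1
    te = 0.0
    for (yn, yp, xp), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(yp, xp)]          # p(y_next | y_prev, x_prev)
        p_cond_hist = pairs_yy[(yn, yp)] / singles_y[yp]   # p(y_next | y_prev)
        te += p_joint * log2(p_cond_full / p_cond_hist)
    return te

# X is copied into Y with one step of lag, so X strongly predicts Y's future.
x = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0] * 50
y = [0] + x[:-1]                                      # y_t = x_{t-1}
```

On this lagged-copy example the estimate approaches the entropy rate of the source, close to one bit per step.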
This paper presents a three-layered hybrid collision avoidance (COLAV) system for autonomous surface vehicles, compliant with rules 8 and 13–17 of the International Regulations for Preventing Collisions at Sea (COLREGs). The COLAV system consists of a high-level planner producing an energy-optimized trajectory, a model-predictive-control-based mid-level COLAV algorithm considering moving obstacles and the COLREGs, and the branching-course model predictive control algorithm for short-term COLAV, handling emergency situations in accordance with the COLREGs. Algorithms previously developed by the authors are used for the high-level planner and the short-term COLAV, while in this paper we further develop the mid-level algorithm to make it comply with COLREGs rules 13–17. This includes developing a state machine for classifying obstacle vessels using a combination of the geometrical situation, the distance and time to the closest point of approach (CPA), and a new CPA-like measure. The performance of the hybrid COLAV system is tested through numerical simulations for three scenarios representing a range of different challenges, including multi-obstacle situations with multiple simultaneously active COLREGs rules, as well as obstacles ignoring the COLREGs. The COLAV system avoids collision in all the scenarios, and follows the energy-optimized trajectory when the obstacles do not interfere with it.
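The distance and time to the closest point of approach used for classifying obstacle vessels follow directly from the relative position and velocity of the two vessels. A minimal sketch under a constant-velocity assumption (the variable names and example numbers are ours, not the paper's):

```python
import numpy as np

def cpa(p_own, v_own, p_obs, v_obs):
    """Time to and distance at the closest point of approach (CPA)
    between two vessels, assuming both hold constant velocity.

    Positions and velocities are 2D numpy arrays (e.g. metres, m/s)."""
    dp = p_obs - p_own                    # relative position
    dv = v_obs - v_own                    # relative velocity
    if np.allclose(dv, 0.0):
        return 0.0, float(np.linalg.norm(dp))   # no relative motion
    t_cpa = max(0.0, -float(dp @ dv) / float(dv @ dv))   # clamp to the future
    d_cpa = float(np.linalg.norm(dp + t_cpa * dv))
    return t_cpa, d_cpa

# Head-on example: obstacle 1000 m ahead, vessels closing at 10 m/s
t, d = cpa(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
           np.array([1000.0, 0.0]), np.array([-5.0, 0.0]))
```

A short time to CPA combined with a small CPA distance flags the obstacle as requiring a COLREGs-governed maneuver.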
Artificial intelligence has a rich history in literature; fiction has shaped how we view artificial agents and their capacities in the real world. This paper looks at embodied examples of human-machine co-creation from the literature of the Long 18th Century (1650–1850), examining how older depictions of creative machines could inform and inspire modern-day research. The works are analyzed from the perspective of design fiction, with special focus on the embodiment of the systems and the creativity exhibited by them. We find that the chosen examples highlight the importance of recognizing the environment as a major factor in human-machine co-creative processes, and that some of the works seem to precede current examples of artificial systems reaching into our everyday lives. The examples present embodied interaction in a positive, creativity-oriented way, but also highlight ethical risks of human-machine co-creativity. Modern-day perceptions of artificial systems and creativity can be limited to some extent by the technologies available; fictitious examples from centuries past allow us to examine such limitations using a design fiction approach. We conclude by deriving four guidelines for future research from our fictional examples: 1) explore unlikely embodiments; 2) think of situations, not systems; 3) be aware of the disjunction between action and appearance; and 4) consider the system as a situated moral agent.
The COVID-19 pandemic has caused dramatic effects on the healthcare system, businesses, and education. In many countries, businesses were shut down, universities and schools had to cancel in-person classes, and many workers had to work remotely and socially distance in order to prevent the spread of the virus. These measures opened the door for technologies such as robotics and artificial intelligence to play an important role in minimizing the negative effects of such closures. There have been many efforts in the design and development of robotic systems for applications such as disinfection and eldercare. Healthcare education has seen great potential in simulation robots, which offer valuable opportunities for remote learning during the pandemic. However, there are ethical considerations that need to be deliberated in the design and development of such systems. In this paper, we discuss the principles of roboethics and how these can be applied in the new era of COVID-19. We focus on identifying the most relevant ethical principles and applying them to a case study in dentistry education. DenTeach was developed as a portable device that uses sensors and computer simulation to make dental education more efficient. DenTeach makes remote instruction possible by allowing students to learn and practice dental procedures from home. We evaluate DenTeach on the principles of data, common good, and safety, and highlight the importance of roboethics in Canada. The principles identified in this paper can inform researchers and educational institutions considering implementing robots in their curricula.
The COVID-19 pandemic has profoundly impacted communities globally, reprioritizing the means through which various societal sectors operate. Among these sectors, healthcare providers and medical workers have been impacted prominently due to the massive increase in demand for medical services under unprecedented circumstances. Hence, any tool that can help ensure compliance with social guidelines for COVID-19 spread prevention will have a positive impact on managing and controlling the virus outbreak and reducing the excessive burden on the healthcare system. This perspective article disseminates the perspectives of the authors regarding the use of novel biosensors and intelligent algorithms embodied in wearable IoMT frameworks for tackling this issue. We discuss how, with the use of smart IoMT wearables, certain biomarkers can be tracked for the detection of COVID-19 in exposed individuals. We enumerate several machine learning algorithms which can be used to process a wide range of collected biomarkers for detecting (a) multiple symptoms of SARS-CoV-2 infection and (b) the dynamical likelihood of contracting the virus through interpersonal interaction. Finally, we describe how a systematic use of smart wearable IoMT devices in various social sectors can intelligently help control the spread of COVID-19 in communities as they enter the reopening phase. We explain how this framework can benefit individuals and their medical correspondents by introducing Systems for Symptom Decoding (SSD), and how the use of this technology can be generalized on a societal level for the control of spread by introducing Systems for Spread Tracing (SST).
The coronavirus disease (COVID-19) outbreak requires rapid reshaping of rehabilitation services to include patients recovering from severe COVID-19 with post-intensive care syndrome, which results in physical deconditioning and cognitive impairments; patients with comorbid conditions; and other patients requiring physical therapy during the outbreak who have no or limited access to hospitals and rehabilitation centers. Considering the access barriers to quality rehabilitation settings and services imposed by social distancing and stay-at-home orders, these patients can benefit from affordable, good-quality care delivered through home-based rehabilitation. The success of such treatment depends highly on the intensity of the therapy and the effort invested by the patient. Monitoring patients' compliance and designing home-based rehabilitation programs that mentally engage them are the critical elements in the success of home-based therapy. Hence, we first review the state-of-the-art telerehabilitation frameworks and robotic devices, and comment on a hybrid model that can use an existing telerehabilitation framework and home-based robotic devices for treatment while simultaneously assessing patients' progress remotely. Second, we comment on patients' social support and engagement, which is critical for the success of a telerehabilitation service. As therapists are not physically present to guide the patients, we also discuss the adaptability requirements of home-based telerehabilitation. Finally, we suggest that the reformed rehabilitation services should consider both home-based solutions for enhancing the activities of daily living and an on-demand ambulatory rehabilitation unit for extensive training, where both the cognitive and motor performance of the patients can be monitored remotely.
Several challenges to guaranteeing medical care have been exposed during the current COVID-19 pandemic. Although the literature has shown some robotics applications that overcome potential hazards and risks in hospital environments, the implementation of those developments is limited, and few studies measure the perception and acceptance of clinicians. This work presents the design and implementation of several perception questionnaires to assess healthcare providers' level of acceptance of, and education about, robotics for COVID-19 control in clinical scenarios. Specifically, 41 healthcare professionals completed the surveys, exhibiting a low level of knowledge about robotics applications in this scenario. Likewise, the surveys revealed that the fear of being replaced by robots remains in the medical community. In the Colombian context, 82.9% of participants indicated a positive perception concerning the development and implementation of robotics in clinical environments. Finally, in general terms, the participants exhibited a positive attitude toward using robots and recommended their use in the current panorama.
Purpose: It is now clear that the virus causing COVID-19 can be spread via airborne transmission. The objective of this study was to design and fabricate an AMBU ventilator with a negative pressure headbox linked to a negative pressure transporting capsule, providing a low-cost, flexible-usage unit with airborne-transmission prevention that could be manufactured without a high level of technology.
Method: The machine consists of an automated AMBU bag ventilator, a negative pressure headbox, and a transporting capsule. The function and working duration of each component were tested.
Results: The ventilator has two main settings: an active mode, which can be set over a time range of 0 s to 9 h 59 min 59 s, and a resting mode, which can work continuously for 24 h. The blower motor and battery system, which power the ventilator and create negative air pressure within the headbox and the transporting capsule, can run for at least 2 h without being recharged. The transporting capsule was able to create an air change rate of 21.76 ACH with −10 Pa internal pressure.
Conclusion: This automated AMBU ventilator allowed the flow rate, rhythm, and volume of oxygen to be set. The hazardous expired air was treated by a HEPA filter. The patient’s transporting capsule is of a compact size and incorporates the air treatment systems. Further development of this machine should focus on linking seamlessly with imaging technology, verifying standardization, testing with human subjects, and then commercialization.
Flexible endoscopy involves the insertion of a long narrow flexible tube into the body for diagnostic and therapeutic procedures. In the gastrointestinal (GI) tract, flexible endoscopy plays a major role in cancer screening, surveillance, and treatment programs. As a result of gas insufflation during the procedure, both upper and lower GI endoscopy procedures have been classified as aerosol generating by the guidelines issued by the respective societies during the COVID-19 pandemic—although no quantifiable data on aerosol generation currently exists. Due to the risk of COVID-19 transmission to healthcare workers, most societies halted non-emergency and diagnostic procedures during the lockdown. The long-term implications of this stoppage in cancer diagnoses and treatment are predicted to include a large increase in preventable deaths. Robotics may play a major role in this field by allowing healthcare operators to control the flexible endoscope from a safe distance, paving a path for protecting healthcare workers by minimizing the risk of virus transmission without reducing diagnostic and therapeutic capacities. This review focuses on the needs and challenges associated with the design of robotic flexible endoscopes for use during a pandemic. The authors propose that a few minor changes to existing platforms, or considerations for platforms in development, could lead to significant benefits for use during infection control scenarios.
There are high risks of infection for surgeons during face-to-face COVID-19 swab sampling due to the novel coronavirus’s infectivity. To address this issue, we propose a flexible transoral robot with a teleoperated configuration for swab sampling. The robot comprises a flexible manipulator, an endoscope with a monitor, and a master device. A 3-prismatic-universal (3-PU) flexible parallel mechanism with 3 degrees of freedom (DOF) is used to realize the manipulator’s movements. The flexibility of the manipulator improves the safety of test subjects. Moreover, the master device is similar in structure to the manipulator, making it easy for operators to use. Under the guidance of the vision from the endoscope, the surgeon can operate the master device to control the motion of the swab attached to the manipulator for sampling. In this paper, the robotic system, the workspace, and the operation procedure are described in detail. The tongue depressor, which is used to prevent the tongue’s interference during sampling, is also tested. The accuracy of the manipulator under visual guidance is validated intuitively. Finally, an experiment on a human phantom is conducted to preliminarily demonstrate the feasibility of the robot.
This paper examines how haptic technology, virtual reality, and artificial intelligence help to reduce physical contact in medical training during the COVID-19 pandemic. Notably, any mistake made by trainees during the education process might lead to undesired complications for the patient. Therefore, training medical skills has always been a challenging issue for expert surgeons, and it is even more challenging during pandemics. The current method of surgical training requires novice surgeons to attend courses, observe procedures, and conduct their initial operations under the direct supervision of an expert surgeon. Owing to the requirement of physical contact in this method of medical training, the people involved, including the novice and expert surgeons, face a potential risk of viral infection. This survey paper reviews recent technological breakthroughs along with new areas in which assistive technologies might provide a viable solution to reduce physical contact in medical institutes during the COVID-19 pandemic and similar crises.
The COVID-19 pandemic has emerged as a serious global health crisis, with the predominant morbidity and mortality linked to pulmonary involvement. Point-of-care ultrasound (POCUS) scanning, which has become one of the primary determinative methods for its diagnosis and staging, requires, however, close contact of healthcare workers with patients, thereby increasing the risk of infection. This work thus proposes an autonomous robotic solution that enables POCUS scanning of COVID-19 patients’ lungs for diagnosis and staging. An algorithm was developed for approximating the optimal position of an ultrasound probe on a patient from prior CT scans to reach predefined lung infiltrates. In the absence of prior CT scans, a deep learning method was developed for predicting 3D landmark positions of a human ribcage given a torso surface model. The landmarks, combined with the surface model, are subsequently used for estimating the optimal ultrasound probe position on the patient for imaging infiltrates. These algorithms, combined with a force–displacement profile collection methodology, enabled the system to successfully image all points of interest in a simulated experimental setup with an average accuracy of 20.6 ± 14.7 mm using prior CT scans, and 19.8 ± 16.9 mm using only ribcage landmark estimation. A study on a full torso ultrasound phantom showed that autonomously acquired ultrasound images were 100% interpretable when using force feedback with prior CT scans and 88% with landmark estimation, compared to 75% and 58%, respectively, without force feedback. This demonstrates the preliminary feasibility of the system and its potential for offering a solution to help mitigate the spread of COVID-19 in vulnerable environments.
During the COVID-19 pandemic, the higher susceptibility of post-stroke patients to infection calls for extra safety precautions. Despite the imposed restrictions, early neurorehabilitation cannot be postponed due to its paramount importance for improving motor and functional recovery chances. Utilizing accessible state-of-the-art technologies, home-based rehabilitation devices are proposed as a sustainable solution in the current crisis. In this paper, we present a comprehensive review of home-based rehabilitation technologies developed over the last 10 years (2011–2020), categorizing them into upper- and lower-limb devices and considering both commercialized and state-of-the-art realms. Mechatronic, control, and software aspects of the systems are discussed to provide a classified roadmap for home-based systems development. Subsequently, a conceptual framework for the development of smart and intelligent community-based home rehabilitation systems based on novel mechatronic technologies is proposed. In this framework, each rehabilitation device acts as an agent in the network, using internet of things (IoT) technologies, which facilitates learning from the recorded data of the other agents, as well as tele-supervision of the treatment by an expert. The presented design paradigm, based on the above-mentioned leading technologies, could lead to the development of promising home rehabilitation systems, which encourage stroke survivors to engage in under-supervised or unsupervised therapeutic activities.
The current COVID-19 pandemic poses an unprecedented global challenge in the field of education and training. As we have seen, the lack of proper information about the virus and its transmission has forced the general population and healthcare workers to rapidly acquire knowledge and learn new practices. Clearly, a well-informed population is more likely to adopt the correct precautionary measures, thus reducing the transmission of the infection; likewise, properly educated healthcare workers are better equipped to manage the emergency. However, the need to maintain physical distancing has made it impossible to provide in-person information and training. In this regard, new technologies have proved to be an invaluable resource by facilitating distance learning. Indeed, e-learning offers significant advantages because it does not require the physical presence of learners and teachers. This innovative method, applied to serious games, has been considered potentially effective in enabling rapid and large-scale dissemination of information and learning through content interactivity. We review studies on the development and use of serious games to disseminate information and practices about COVID-19, aimed at promoting behavioral changes in the population and in the healthcare personnel involved on the front line.
During an ultrasound (US) scan, the sonographer is in close contact with the patient, which puts them at risk of COVID-19 transmission. In this paper, we propose a robot-assisted system that automatically scans tissue, increasing the sonographer/patient distance and decreasing the contact duration between them. This method was developed as a quick response to the COVID-19 pandemic. It considers the preferences of sonographers in terms of how US scanning is done and can be trained quickly for different applications. Our proposed system automatically scans the tissue using a dexterous robot arm that holds the US probe. The system assesses the quality of the acquired US images in real time. This US image feedback is used to automatically adjust the US probe contact force based on the quality of the image frame. The quality assessment algorithm is based on three US image features: correlation, compression, and noise characteristics. These features are input to an SVM classifier, and the robot arm adjusts the US scanning force based on the SVM output. The proposed system enables the sonographer to maintain a distance from the patient, because the sonographer no longer has to hold the probe and press it against the patient's body for any prolonged time. The SVM was trained using bovine and porcine biological tissue; the system was then tested experimentally on plastisol phantom tissue. The results of the experiments show that our proposed quality assessment algorithm successfully maintains US image quality and is fast enough for use in a robotic control loop.
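A feature-based SVM quality gate of this kind can be sketched with a small linear SVM trained by sub-gradient descent on the hinge loss. This is an illustration with invented synthetic feature values, not the authors' trained classifier; the feature ranges, hyperparameters, and `frame_ok` helper are our assumptions.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.001, lr=0.01, epochs=100, seed=1):
    """Train a tiny linear SVM by sub-gradient descent on the hinge loss.
    Labels y must be in {-1, +1}. Returns weights w and bias b."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            xi, yi = X[i], y[i]
            if yi * (xi @ w + b) < 1:        # margin violated: hinge gradient
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                            # only the L2 penalty acts
                w -= lr * lam * w
    return w, b

# Synthetic stand-ins for the three frame features (correlation,
# compression, noise); the numeric ranges are invented for illustration.
rng = np.random.default_rng(0)
good = rng.normal([0.9, 0.7, 0.1], 0.05, size=(100, 3))   # adequate frames
poor = rng.normal([0.5, 0.4, 0.4], 0.05, size=(100, 3))   # degraded frames
X = np.vstack([good, poor])
y = np.array([1] * 100 + [-1] * 100)

w, b = train_linear_svm(X, y)

def frame_ok(features):
    """+1 if frame quality is adequate, -1 if the probe force should adjust."""
    return 1 if features @ w + b > 0 else -1
```

In a control loop, a `-1` output would trigger an adjustment of the probe contact force before the next frame is acquired.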
Healthcare workers face a high risk of contagion during a pandemic due to their close proximity to patients. The situation is further exacerbated in the case of a shortage of personal protective equipment, which can increase the risk of exposure for healthcare workers and even non-pandemic-related patients, such as those on dialysis. In this study, we propose an emergency, non-invasive remote monitoring and control response system to retrofit dialysis machines with robotic manipulators for safely supporting the treatment of patients with acute kidney disease. Specifically, as a proof of concept, we mock up the touchscreen instrument control panel of a dialysis machine and live-stream it to a remote user’s tablet device. The user then performs touch-based interactions on the tablet device to send commands to the robot to manipulate the instrument controls on the touchscreen of the dialysis machine. To evaluate the performance of the proposed system, we conduct an accuracy test. Moreover, we perform qualitative user studies using two modes of interaction with the designed system to measure the user task load and system usability and to obtain user feedback. The two modes of interaction were a touch-based interaction using a tablet device and a click-based interaction using a computer. The results indicate no statistically significant difference in the relatively low task load experienced by the users for the two modes of interaction. Moreover, the system usability survey results reveal no statistically significant difference in the user experience between the two modes, except that users experienced a more consistent performance with the click-based interaction than with the touch-based interaction. Based on the user feedback, we suggest an improvement to the proposed system and illustrate an implementation that corrects the distorted perception of the instrument control panel live-stream for a better and more consistent user experience.
Gait is an important basic function of human beings and an integral part of life. Many mental and physical abnormalities can cause noticeable differences in a person’s gait. Abnormal gait can lead to serious consequences such as falls, limited mobility, and reduced life satisfaction. Gait analysis, which includes joint kinematics, kinetics, and dynamic electromyography (EMG) data, is now recognized as a clinically useful tool that can provide both quantifiable and qualitative information on performance to aid in treatment planning and to evaluate its outcome. With the assistance of new artificial intelligence (AI) technology, the traditional medical environment has undergone great changes. AI has the potential to reshape medicine, making gait analysis more accurate, efficient, and accessible. In this study, we analyzed basic information about gait analysis and AI articles that met the inclusion criteria in the WoS Core Collection database from 1992 to 2022, and the VOSviewer software was used for web visualization and keyword analysis. Through bibliometric and visual analysis, this article systematically introduces the research status of gait analysis and AI. We introduce the application of artificial intelligence in clinical gait analysis, which affects the identification and management of gait abnormalities found in various diseases. Machine learning (ML) and artificial neural networks (ANNs) are the most often utilized AI methods in gait analysis. By comparing the predictive capability of different AI algorithms in published studies, we evaluate their potential for gait analysis in different situations. Furthermore, the current challenges and future directions of gait analysis and AI research are discussed, which will also provide valuable reference information for investors in this field.
Improving the mobility of robots is an important goal for many real-world applications and implementing an animal-like spine structure in a quadruped robot is a promising approach to achieving high-speed running. This paper proposes a feline-like multi-joint spine adopting a one-degree-of-freedom closed-loop linkage for a quadruped robot to realize high-speed running. We theoretically prove that the proposed spine structure can realize 1.5 times the horizontal range of foot motion compared to a spine structure with a single joint. Experimental results demonstrate that a robot with the proposed spine structure achieves 1.4 times the horizontal range of motion and 1.9 times the speed of a robot with a single-joint spine structure.
Besides direct interaction, human hands are also skilled at using tools to manipulate objects for typical life and work tasks. This paper proposes DeepClaw 2.0 as a low-cost, open-sourced data collection platform for learning human manipulation. We use an RGB-D camera to visually track the motion and deformation of a pair of soft finger networks on a modified kitchen tong operated by human teachers. These fingers can be easily integrated with robotic grippers to bridge the structural mismatch between humans and robots during learning. The deformation of soft finger networks, which reveals tactile information in contact-rich manipulation, is captured passively. We collected a comprehensive sample dataset involving five human demonstrators in ten manipulation tasks with five trials per task. As a low-cost, open-sourced platform, we also developed an intuitive interface that converts the raw sensor data into state-action data for imitation learning problems. For learning-by-demonstration problems, we further demonstrated our dataset’s potential by using real robotic hardware to collect joint actuation data or using a simulated environment when limited access to the hardware.
Enabled by advancing technology, coral reef researchers increasingly prefer use of image-based surveys over approaches depending solely upon in situ observations, interpretations, and recordings of divers. The images collected, and derivative products such as orthographic projections and 3D models, allow researchers to study a comprehensive digital twin of their field sites. Spatio-temporally located twins can be compared and annotated, enabling researchers to virtually return to sites long after they have left them. While these new data expand the variety and specificity of biological investigation that can be pursued, they have introduced the much-discussed Big Data Problem: research labs lack the human and computational resources required to process and analyze imagery at the rate it can be collected. The rapid development of unmanned underwater vehicles suggests researchers will soon have access to an even greater volume of imagery and other sensor measurements than can be collected by diver-piloted platforms, further exacerbating data handling limitations. Thoroughly segmenting (tracing the extent of and taxonomically identifying) organisms enables researchers to extract the information image products contain, but is very time-consuming. Analytic techniques driven by neural networks offer the possibility that the segmentation process can be greatly accelerated through automation. In this study, we examine the efficacy of automated segmentation on three different image-derived data products: 3D models, and 2D and 2.5D orthographic projections thereof; we also contrast their relative accessibility and utility to different avenues of biological inquiry. 
The variety of network architectures and parameters tested performed similarly, reaching ∼80% IoU for the genus Porites, suggesting that the primary limitations to an automated workflow are 1) the current capabilities of neural network technology, and 2) consistency and quality control in image product collection and in the generation of human training/testing datasets.
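Intersection over union (IoU), the segmentation metric reported above, compares a predicted mask with an annotated ground-truth mask. A minimal computation for binary masks (the example masks are ours, for illustration):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0                     # both masks empty: perfect agreement
    return np.logical_and(pred, truth).sum() / union

# Toy 2x3 masks: 2 pixels agree out of 4 labeled in either mask
pred  = np.array([[1, 1, 0],
                  [1, 0, 0]])
truth = np.array([[1, 1, 0],
                  [0, 1, 0]])
```

An IoU of ∼0.8 means the predicted extent of a colony overlaps the annotated extent on roughly four-fifths of their combined area.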
Augmented Reality (AR) interfaces have been studied extensively over the last few decades, with a growing number of user-based experiments. In this paper, we systematically review 10 years of the most influential AR user studies, from 2005 to 2014. A total of 291 papers with 369 individual user studies have been reviewed and classified based on their application areas. The primary contribution of the review is to present the broad landscape of user-based AR research, and to provide a high-level view of how that landscape has changed. We summarize the high-level contributions from each category of papers, and present examples of the most influential user studies. We also identify areas where there have been few user studies, and opportunities for future research. Among other things, we find that there is a growing trend toward handheld AR user studies, and that most studies are conducted in laboratory settings and do not involve pilot testing. This research will be useful for AR researchers who want to follow best practices in designing their own AR user studies.